Tuesday, August 11, 2020

Pending clean-up of papers

I received the following comments identifying potential issues in some of my work.  I'm delighted to receive such feedback, since a major point of my work is to try to get a complex topic straight.

At the moment I'm fully engaged working on self-driving car safety, so it will be a while before I can resolve these. However, in the spirit of transparency, here are the points raised so you know where to be careful when reading my work.

(This is posted with permission conditioned on anonymity. Thanks to the person who sent the comments.)

----------------------------------------------

1) On page 5 of http://users.ece.cmu.edu/~koopman/roses/dsn04/koopman04_crc_poly_embedded.pdf you say that SMBus uses CRC-8, polynomial 0xEA in your notation and say that 0x83 would be better. I am not familiar with SMBus but three sources say that the polynomial chosen for SMBus was 0x83.

CRC Catalogue entry CRC-8/SMBUS lists poly=0x07 (they use what Wikipedia calls Normal form)

SMBus Spec page 27 clearly states that the polynomial is x^8 + x^2 + x + 1

A 22 April 1999 presentation by USAR Systems states the same.

2) On same page of same document as 1), it says XMODEM also uses CRC-8 polynomial 0xEA but it appears that XMODEM uses a 16-bit CRC according to Wikipedia and the CRC Catalogue amongst others. There is reference to an earlier 8-bit FCS but that was called a checksum in the references I found, not a CRC. ref1, ref2, ref3

3) Finally, I am having trouble reproducing your Probability of Undetected Error graphs from your koopman04 paper. The maxino_ms.pdf, page 9 gives the equation for Pud as

Pud = HW * BER^x * (1-BER)^(n-x)

where n is code word length in bits, x is number of bit errors, HW is Hamming weight at the HD. For example, polynomial 0x9C at 9 bit data word length is shown as just above 1e-33 (I used a tool to read scaled values off log charts to get about 6e-33 since I can't find a numeric value anywhere), but the formula gives 3.4e-29 at same BER of 1e-6, since HD=5 and HW(5)=34.

I suspect this is because your Pud graphs are, as you say in section 4.1, "weighted by the percentage of errors caught per corresponding polynomial weights for each data word length" but I cannot find a formula or example of this weighting which gives the required correction. Although formula (2) on page 18 of the faa15_tc-14-49.pdf document is different from the above, as far as I can see the combin(c, HD) term cancels out because UndetectedFractionHD = HW/combin(c, HD). So could you please provide the weighting formula, preferably with one numerical example?

----------------------------------------------

Initial responses:

- I would not be at all surprised if for #1 and #2 I was referring to documents that got the polynomials wrong, which was previously quite common but has been getting better over time.

- For #3 the graphs and probability equations can be a bit tricky.  I'll have to get all that stuff back in my head before being able to respond.  I'd suggest going with the analysis here if you're in doubt, which had the math reviewed by researchers from Honeywell, so hopefully it's OK: http://www.tc.faa.gov/its/worldpac/techrpt/tc14-49.pdf  
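For what it's worth, the unweighted arithmetic in point #3 checks out. Here is a quick sketch (just the dominant term at the HD, not the weighted computation used for the graphs) using the correspondent's numbers: polynomial 0x9C, a 9-bit dataword plus an 8-bit CRC giving a 17-bit codeword, HD=5, HW(5)=34, and BER=1e-6:

```python
def pud_hd_term(hw: int, hd: int, n: int, ber: float) -> float:
    # Dominant Pud term: (number of weight-HD codewords) times
    # (probability of exactly those HD bits flipping in an n-bit codeword)
    return hw * ber**hd * (1 - ber)**(n - hd)

# Correspondent's numbers: 0x9C, 9-bit dataword + 8-bit CRC = 17-bit codeword
p = pud_hd_term(hw=34, hd=5, n=17, ber=1e-6)
print(p)  # about 3.4e-29, matching the value in the message
```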

Monday, June 29, 2020

CRC Information Quick Start

I'm not able to spend much, if any, time on this lately due to my work on self-driving car safety. Here is a quick start for those looking for information.

A tutorial on using CRCs and Checksums (how to apply them, pitfalls, system considerations; not a deep crawl through the math):
https://betterembsw.blogspot.com/2013/11/crc-webinar.html

A report written for the FAA on CRCs and Checksums (goes with the tutorial):
https://betterembsw.blogspot.com/2015/04/faa-crc-and-checksum-report.html

A web site of "best" CRCs (https://users.ece.cmu.edu/~koopman/crc/), including:

A summary of "Seven Deadly Sins of CRCs and Checksums":
https://betterembsw.blogspot.com/2013/06/seven-deadly-sins-of-crcs-and-checksums.html

Maximal length LFSR polynomials:
https://users.ece.cmu.edu/~koopman/lfsr/index.html  

My papers on this topic are here:
https://users.ece.cmu.edu/~koopman/projects.html#crc

(There are other posts you might find of interest in the blog as well.)

Also, some FAQ:
  • Can I combine using two small CRCs to get better HD?  Probably not.
    It doesn't work because of error patterns that hit both CRC fields. Maybe it can be done, but every paper I've read on this has been flawed for this reason, and the polynomial searches I ran came up empty-handed as well.
  • How does polynomial X perform?
    Please look at the Zoo and see if you can find it.  If it is not there I'll add it when I have time, provided it is a published value.  Be sure to consider that there are different notations, which are accounted for in the Zoo.
  • Can you please provide me with free consulting help?
    Sorry, I'm too busy with self-driving car safety these days, in addition to teaching.  I sometimes answer general-interest questions on my blog, but I can only spend a minute or two and it has to be something others will likely find interesting.




Friday, August 16, 2019

Effect of Initial Seed Values on CRCs and Checksums

Checksums and CRCs involve a repeated computation over the length of a dataword that
accumulates a result to form an FCS value. The accumulation value must be initialized to some
known value for this process to produce a predictable result for any given codeword. When the
initial value is unstated, it is typically assumed to be zero.

In some cases, a non-zero value is used instead, such as 0xFFFF for a 16-bit checksum or CRC.
This has the advantage of making an all zero codeword impossible. Perhaps more importantly
for Fletcher checksum, ATN-32, and CRCs, this has the additional advantage of causing leading
zeros of a message to affect the FCS value. To better understand, consider a CRC initialized
with an all zero seed value. Any number of all zero dataword chunks can be fed into the CRC
and it will stay at a zero value, throwing away information about how many zero chunks have
been processed. However, if the CRC computation is initialized to any non-zero value,
processing chunks with a value of zero will cycle patterns through the CRC, tracking the number
of zero-valued chunks that have been processed as a sort of counter. This can help with
situations in which additional leading-zero bytes have been erroneously added to a dataword.
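To make the leading-zeros behavior concrete, here is a minimal bitwise CRC-8 sketch (polynomial 0x07 in normal form, i.e., x^8 + x^2 + x + 1; any CRC would show the same effect):

```python
def crc8(data: bytes, init: int = 0x00, poly: int = 0x07) -> int:
    # Minimal bitwise CRC-8, MSB first; poly 0x07 is x^8 + x^2 + x + 1
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) & 0xFF) ^ poly if crc & 0x80 else (crc << 1) & 0xFF
    return crc

msg = b"\x12\x34"
# Zero seed: leading zero bytes leave the register at zero, so they are invisible
assert crc8(b"\x00\x00" + msg, init=0x00) == crc8(msg, init=0x00)
# Non-zero seed: the same leading zeros cycle the register and change the result
assert crc8(b"\x00\x00" + msg, init=0xFF) != crc8(msg, init=0xFF)
```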

In general, any non-zero seed value can be used. The value selected does not affect
error-detection properties beyond those previously discussed. In particular, CRC error detection,
in terms of the BER fault model, is independent of the seed value used.

The CRC and checksum values can alternatively (or additionally) be modified to create an FCS
value. A common modification is to XOR the result of the checksum computation with a value,
such as 0xFFFF, which inverts bits, but does not lose any of the information content of the
checksum computation. In some cases, CRC result values must be bit-reversed to preserve burst
error properties across the dataword to FCS boundary, depending upon the computation method and message bit ordering in the dataword.
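As a tiny illustration of the XOR-out step (the values here are made up for the example, not taken from any particular standard):

```python
XOR_OUT = 0xFFFF  # a common final XOR constant for 16-bit checks

def fcs_from_crc(raw_crc: int) -> int:
    # Inverting every bit with XOR is reversible, so no information is lost
    return raw_crc ^ XOR_OUT

raw = 0x1D0F                       # hypothetical raw CRC register value
fcs = fcs_from_crc(raw)            # transmitted FCS is 0xE2F0
assert fcs_from_crc(fcs) == raw    # applying the same XOR again recovers the raw value
```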

Note:
The "seed value" is sometimes also called the "initial value"

Source:

Saturday, March 11, 2017

Video on Hamming Codes & Hamming Distance

A nice video that talks about Hamming Codes and gives a hypercube explanation that helps in understanding Hamming Distance:

"Perfect Codes" on Computerphile:
https://youtu.be/WPoQfKQlOjg

Sunday, January 22, 2017

On sabbatical

I noticed that my e-mail has some CRC queries. I'm on sabbatical through summer 2017, so there might be significant delays before I'm able to respond to e-mails.  If you notice something that seems to be a problem please do let me know, but please bear with me until I have time to respond.

(Contrary to what some might think "sabbatical" is usually the opposite of "vacation" for engineering faculty.  It is a chance for us to concentrate on a large difficult problem that we can't do during normal semesters because of all the many demands made upon our time.)

Tuesday, July 28, 2015

Significantly updated CRC data

Over the course of the summer I was able to get a little time to revise and update my CRC data web site.  It is still work-in-progress, but you'll find a lot more data than was there previously.

New summary tables:    http://users.ece.cmu.edu/~koopman/crc/

Recommendations for "good" polynomials with various HD values are completed through 25-bit CRCs, and I'll let the scripts compute away for a few months until eventually the tables are completed through 32 bits.  Past 32 bits my search algorithm starts taking a long time to execute, so I will probably only have samples above that size, but we'll see how it goes.

When you look at the data, yellow and blank mean things I'm working on (the color coding helps me keep straight what the scripts are currently working on).  Green boxes and "under construction" mean preliminary results that should be OK but aren't double-checked yet.  A significant addition is that I've added automatically generated examples of violations of the HD so you can confirm results for yourself. There is also some do-it-yourself HD computation help here.

If you see a polynomial missing that is published somewhere let me know and I'll add it.

I should mention that Prof. Berthold Gick from University of Applied Sciences, Koblenz, Germany has been independently working in this area and verified many of my results.  More importantly, he's pointed out a few typos and other issues that I'm working to correct.  I'd like to take this chance to thank him for his efforts.  If anyone else notices problems please let me know.

Thursday, June 11, 2015

Is HD=1 Useful?

I received an e-mail asking why a HD=1 checksum would ever be useful (referred to on slide 89 of my Sept 2014 slide set on CRCs).  The point of the question is that HD=1 means you are not guaranteeing detection of any error, not even 1-bit errors.  Here's my answer, which I am posting since it might be of general interest:

HD=1 (zero errors guaranteed to be detected) means that there is still a residual capability to detect most errors at random (all errors except about 1 in 2**k for a k-bit check value).
If you only need an error detection ratio of a few orders of magnitude, this is fine as long as you use a large CRC.
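To put a number on that residual capability, here is an exhaustive count with a deliberately tiny example (CRC-8, polynomial 0x07, one data byte, so a 16-bit codeword; this sketch is mine, not from the slides). Of the 65,535 possible nonzero corruption patterns, exactly 255 turn the frame into some other valid codeword and escape detection, which is just under 1 in 2**8:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    # Minimal bitwise CRC-8 (poly 0x07), used only for this exhaustive count
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) & 0xFF) ^ poly if crc & 0x80 else (crc << 1) & 0xFF
    return crc

data = 0x42                          # arbitrary one-byte dataword
fcs = crc8(bytes([data]))            # its 8-bit FCS
undetected = 0
for err in range(1, 1 << 16):        # every nonzero error pattern on the 16-bit codeword
    d = data ^ (err >> 8)            # corrupted dataword byte
    f = fcs ^ (err & 0xFF)           # corrupted FCS byte
    if crc8(bytes([d])) == f:        # corrupted frame still passes the check
        undetected += 1
print(undetected)  # 255 of 65535 patterns go undetected, i.e. just under 1 in 2**8
```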

If you are using a 256-bit cryptographic checksum (which must be assumed to have HD=1) then the undetected error probability is 1 in 2**256, which is pretty good.  However, a high-HD CRC will usually get you better non-malicious error detection for less computational cost.

Note that any "perfect" pseudo-random number generator will produce HD=1, including cryptographically secure random number generators. 
Getting HD>1 means there is a predictable structure in the pseudo-random numbers generated by a CRC, in that certain outputs are less probable. In fact, for small numbers of bits set in the input it is impossible to produce certain outputs with small numbers of bits set, because that would violate the HD. Thus, the outputs are less than perfectly random in a predictable way. For security this is bad, because it means the hash function leaks information about its input, and a good hash shouldn't leak information.  But for error detection it's good, because that structure is exactly what gives you a HD.