12-08-08 - File Hashes

So I'm sticking a file hash in Oodle and thought I'd test some of the stuff out there. Test candidates :

SHA1 (Sean's stb.h implementation)

MD5 (OpenSSL implementation)

BurtleBurtle Lookup3 (hashlittle2)

Cessu's SSE2 hash (+ extra tail code I added)

CRC32

CRC32+32

In all cases I create a 64 bit hash. Hey, it's plenty of bits, it's easier to pass around cuz I have a 64 bit type, and it makes it a fair competition. SHA1 makes 160 bits (= 5 dwords), MD5 makes 128 bits (= 4 dwords), so I use Bob's Mix method to get that down to 64 bits.
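The fold-down is nothing fancy. A sketch of the idea (this is not my exact code; the add/xor/rotate mixing here is just in the spirit of Bob's mix - see lookup3.c for the real thing - and since the SHA1/MD5 output bits are already thoroughly random, even a simple fold like this preserves the quality) :

#include <stdint.h>

#define ROT32(x,k) (((x) << (k)) | ((x) >> (32 - (k))))

/* fold a big digest down to 64 bits ; ndwords = 5 for SHA1, 4 for MD5 */
uint64_t fold_digest_64(const uint32_t *digest, int ndwords)
{
    uint32_t a = 0xdeadbeef, b = 0x9e3779b9;
    for (int i = 0; i < ndwords; i++)
    {
        a += digest[i];
        a ^= ROT32(b, 13);
        b += a;
        b ^= ROT32(a, 7);
    }
    return ((uint64_t)a << 32) | (uint64_t)b;
}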

A lot of people think SHA1 or MD5 or something is the "right thing" to do for file hashes. That's not really true. Those hashes were designed for cryptography which is not the same purpose. In particular, they are slow *on purpose* because they want to be hard to invert. They also make tons of bits, not because you need tons of bits to tell files apart, but again to make them hard to invert by brute force attack. I don't care about my file hashes being vulnerable to attack, I just want the chance of accidental collisions to be microscopic.

CRC32+32 means doing CRC32 on alternating bytes and jamming them together to make 64 bits. This is not a true "CRC64" but I might refer to it as CRC64 sometimes. (suggestion by "Joe Smith" ; Joe Smith? Is that a pseudonym?)
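In code it's just two independent CRC32's interleaved; a sketch, where crc32_update is a stand-in for whatever table-driven CRC32 byte update you have lying around :

#include <stdint.h>
#include <stddef.h>

/* stand-in for a normal table-driven CRC32 byte update */
extern uint32_t crc32_update(uint32_t crc, uint8_t byte);

uint64_t crc32_plus_32(const uint8_t *buf, size_t len)
{
    uint32_t c0 = ~0u, c1 = ~0u;
    for (size_t i = 0; i < len; i++)
    {
        if (i & 1) c1 = crc32_update(c1, buf[i]);  /* odd bytes  */
        else       c0 = crc32_update(c0, buf[i]);  /* even bytes */
    }
    return ((uint64_t)(~c0) << 32) | (uint64_t)(~c1);
}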

Just for background, if the 64 bit hashes are "perfect" - that is, the 64 bit words coming out of them are random in every bit, even for input that is very non-random - then the chance of collision is indeed microscopic. (in fact you only need maybe 40 bits). The number of items you can hash in B bits before expecting a collision is around 2^(B/2), so B = 32 is not enough bits, since 2^16 = 64k and you may in fact run on 64k files. But even at B = 40, 2^20 = 1 Million is a lot, and certainly B = 64 means 2^32 = 4 Billion items before you expect a collision. So, anyway, the point is to test whether these hashes are actually close enough to perfect on real data that they don't generate an unexpectedly high number of collisions.

I ran these hashes on every file on my hard drive. I threw out files that were exactly equal to some other file so there would be no false collisions due to the files actually being identical. That left 24963 files. I made 2 variants of every file : one by taking a random byte and changing it to a random value, and another by flipping 4 random bits. So in total 74853 arrays were hashed.
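The variants were made roughly like this (just a sketch; rand() is standing in for a real random source) :

#include <stdint.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* variant 1 : jam a random value into one random byte */
void make_byte_jam(uint8_t *out, const uint8_t *in, size_t len)
{
    memcpy(out, in, len);
    out[(size_t)rand() % len] = (uint8_t)(rand() & 0xFF);
}

/* variant 2 : flip 4 randomly chosen bits */
void make_bit_flips(uint8_t *out, const uint8_t *in, size_t len)
{
    memcpy(out, in, len);
    for (int i = 0; i < 4; i++)
    {
        size_t bit = (size_t)rand() % (len * 8);
        out[bit >> 3] ^= (uint8_t)(1u << (bit & 7));
    }
}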

First the speed numbers :

sha1                 : 48.218018
md5                  : 19.837351
burtle               : 7.122040
Cessu                : 6.370848
crc32+32             : 15.055287
crc32                : 21.550138

These are in clocks per byte. The CRC numbers are a little funny because the CRC32+32 loop is a bit unrolled, but the CRC32 loop goes byte by byte. In any case, even though CRC is very simple, it is not fast, because even unrolled it still works byte by byte and there's a hard data dependency - you have to completely process each byte before you can work on the next byte.
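For reference, this is the basic table-driven CRC32 byte loop; each iteration needs the fully updated crc from the previous byte, and that serial chain is what keeps it slow no matter how much you unroll :

#include <stdint.h>
#include <stddef.h>

extern const uint32_t crc32_table[256]; /* the standard 8-bit lookup table */

uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = ~0u;
    for (size_t i = 0; i < len; i++)
        crc = (crc >> 8) ^ crc32_table[(crc ^ buf[i]) & 0xFF]; /* depends on the previous crc */
    return ~crc;
}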

Cessu's hash is only barely faster than Bob's lookup3 even though it uses SSE2 and works on 16 bytes at a time. Bob's hash is really damn good. When I tested it on strings it did not perform well for me because I'm so close to the bone on strings that the rollup & rolldown overhead killed me, but on larger arrays or even long strings, lookup3 kicks butt. ( Bob's page )

So... how many total collisions in the hashes do you think I had? (only testing collisions versus hashes of the same type of course). Remember I tested on 74853 different arrays, made from 24963 files and 24963+24963 more tiny modifications.

One.

One collision. Of course it was in CRC32. None of the 64 bit hashes had any collisions.

I then tried making 8 variants of each file by 8 different random byte jams, so I was running 199704 arrays. Again zero collisions for any 64 bit hash.

So, in an attempt to actually get a collision, I made 10,000,000 test arrays by sprintf'ing the numbers 1 to 10,000,000 and then tacking 2 random bytes on the end. (note : you should not test hashes by making random arrays, because any decent hash will return random output bits from random input bits; the test I am interested in is how close the hash output is to random on highly *nonrandom* input). I ran the hashes on all those strings and got :
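The test arrays were built roughly like so (a sketch) :

#include <stdio.h>
#include <stdlib.h>

void run_sprintf_test(void)
{
    for (int i = 1; i <= 10000000; i++)
    {
        char buf[32];
        int len = sprintf(buf, "%d", i);
        buf[len]     = (char)(rand() & 0xFF);
        buf[len + 1] = (char)(rand() & 0xFF);
        /* hash (buf, len+2) with each hash and record the result for collision counting */
    }
}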

collisions on 10,000,000 tests :

sha1                 : 0
md5                  : 0
burtle               : 0
Cessu                : 0
crc64                : 0
rand32               : 11,530
crc32                : 11,576

Again none of the 64 bit hashes had any collisions. CRC32 had quite a few of course - but only as many as a 32 bit random number generator !! That means CRC32 is in fact performing like an ideal hash here. It is perfectly randomizing the nonrandom input.
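You can sanity check that against the birthday bound : the expected number of colliding pairs among n random B-bit values is about n*(n-1)/2 / 2^B, which for n = 10,000,000 and B = 32 comes out around 11,641 - right in line with both rand32 and crc32 :

#include <stdio.h>

int main(void)
{
    double n = 10000000.0;
    double expected = n * (n - 1.0) * 0.5 / 4294967296.0; /* n(n-1)/2 divided by 2^32 */
    printf("expected 32 bit collisions : %.0f\n", expected); /* ~11641 */
    return 0;
}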

So, I have no idea which of the 64 bit hashes is "best" in terms of randomizing bits and detecting errors. Obviously if they are actually producing 64 well-mixed bits, the chance of me ever seeing a collision is tiny. I thought maybe the "crc32+32" might not have 64 real bits of quality and might fail sooner, but apparently it's not bad enough to fail in any kind of real world scenario.

So, anyway, I'm gonna use "lookup3" because it's super fast, plenty good, and has the Bob Jenkins seal of approval, which means it should actually be "robust".
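Getting 64 bits out of lookup3 is trivial, since hashlittle2 already hands back two 32 bit values (they're also the seeds on the way in); something like :

#include <stdint.h>
#include <stddef.h>

/* from Bob Jenkins' lookup3.c */
extern void hashlittle2(const void *key, size_t length, uint32_t *pc, uint32_t *pb);

uint64_t hash64_lookup3(const void *buf, size_t len)
{
    uint32_t pc = 0, pb = 0; /* seeds ; any values are fine */
    hashlittle2(buf, len, &pc, &pb);
    return ((uint64_t)pc << 32) | (uint64_t)pb;
}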

HOWEVER : SSE4.2 has a CRC32 instruction. If you were in some application where you could rely on having that, then that would definitely be the fastest way to go, and a CRC32+32 appears to be plenty high quality for file identification.
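A sketch of what that could look like with the intrinsics. Note the instruction computes CRC-32C (the Castagnoli polynomial), not the usual CRC-32 polynomial, which doesn't matter for file identification; real code would also want to eat 8 bytes at a time with _mm_crc32_u64 rather than going byte by byte :

#include <nmmintrin.h> /* SSE4.2 intrinsics ; compile with -msse4.2 or equivalent */
#include <stdint.h>
#include <stddef.h>

/* "CRC32+32" again, but using the hardware CRC-32C instruction :
   two independent CRC's over alternating bytes, jammed into 64 bits */
uint64_t crc32c_plus_32(const uint8_t *buf, size_t len)
{
    uint32_t c0 = ~0u, c1 = ~0u;
    size_t i = 0;
    for (; i + 1 < len; i += 2)
    {
        c0 = _mm_crc32_u8(c0, buf[i]);
        c1 = _mm_crc32_u8(c1, buf[i + 1]);
    }
    if (i < len) c0 = _mm_crc32_u8(c0, buf[i]); /* leftover byte if len is odd */
    return ((uint64_t)(~c0) << 32) | (uint64_t)(~c1);
}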

BTW I keep hearing that CRC32 has degeneracy failures on real world data, but I have yet to see it.
