What happens if you don't hash a value before selecting a hash map bucket?


Since a hash map uses a modulus/division operation to select the appropriate bucket to place the value in, it seems that the chance of collision depends on the number of buckets, not on "how good the hash function is". How good the hash function is decides the likelihood of a same-hash collision. However, 'collision' in a hash map refers to something else: two keys landing on the same bucket index AFTER the modulus operation.

Assuming the key is an integer (say 64 bit), what can be expected if the hash function for a hash map is simply the key value itself? I would venture to say that retrieval would be a lot faster, as there wouldn't be a need to loop through a number of bytes and do hash operations, and the end result, with respect to hash table collisions, would be much the same. The exact values that end up colliding with an already occupied bucket would be different, but if the values are spread all over the place then overall the results should be very similar.

CodePudding user response:

Assuming the key value is an integer (say 64 bit), what can be expected if the hash function for a hash map is simply the key value itself?

Many languages do exactly that. E.g. Java.
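This is observable directly in the JDK: Integer's hash is the value itself, Long just XOR-folds the high 32 bits into the low 32, and HashMap then applies one extra mixing step (h ^ (h >>> 16)) before picking a bucket. A minimal demonstration:

public class IdentityHashDemo {
    public static void main(String[] args) {
        System.out.println(Integer.hashCode(42));     // 42 -- the identity
        System.out.println(Long.hashCode(42L));       // 42
        System.out.println(Long.hashCode(1L << 32));  // 1  -- high bits folded into low bits
        // HashMap additionally spreads high bits before indexing:
        int h = Integer.hashCode(42);
        System.out.println(h ^ (h >>> 16));           // 42 -- small values pass through unchanged
    }
}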

But you have to be careful: if your hash function is too trivial, it is also trivial for an attacker to exploit hash collisions to cause a DoS in your service. This is known as a collision attack. Different libraries deal with that in different ways.

Since Java 8, HashMap falls back to a red-black tree whenever it detects too many collisions in a single bucket. Other languages instead introduce randomization in the hash function, so it is harder for an attacker to exploit it.
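A minimal sketch of the randomization idea (the seed field and mixing constants here are illustrative, borrowed from MurmurHash3's finalizer, not any particular library's implementation):

import java.util.concurrent.ThreadLocalRandom;

public class SeededHash {
    // Per-instance secret seed, chosen when the map is created.
    private final long seed = ThreadLocalRandom.current().nextLong();

    // Mixing the seed into every hash means an attacker cannot
    // precompute a set of keys that all collide in this map.
    int hash(long key) {
        long h = key ^ seed;
        h ^= h >>> 33;
        h *= 0xff51afd7ed558ccdL; // MurmurHash3 fmix64 constant
        h ^= h >>> 33;
        return (int) h;
    }
}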

CodePudding user response:

it seems that the chance of collision is dependent on the number of buckets, not "how good the hash function is"

No, that is not correct. Keys are generally not distributed evenly across bucket indexes. Hashing the keys tends to distribute the bucket indexes far more evenly than using the raw key:

index = key % bucket_n;        // raw key: patterned keys (e.g. multiples of bucket_n) pile into few buckets
// vs
index = hash(key) % bucket_n;  // hashed key: indexes close to uniformly distributed
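To see the skew concretely, here is a small illustrative test (the mix function is an assumption, just MurmurHash3-style bit mixing, not any library's API): insert 1000 keys whose stride equals bucket_n. The raw modulus puts every key in bucket 0; the hashed version spreads them nearly uniformly.

public class BucketSkewDemo {
    // Assumed mixing function (MurmurHash3-style finalizer), for illustration only.
    static int mix(long key) {
        long h = key;
        h ^= h >>> 33;
        h *= 0xff51afd7ed558ccdL;
        h ^= h >>> 33;
        return (int) h;
    }

    public static void main(String[] args) {
        int bucketN = 16;
        int[] raw = new int[bucketN], hashed = new int[bucketN];
        for (long key = 0; key < 1000L * bucketN; key += bucketN) { // keys: 0, 16, 32, ...
            raw[(int) (key % bucketN)]++;                  // all land in bucket 0
            hashed[Math.floorMod(mix(key), bucketN)]++;    // spread across all buckets
        }
        System.out.println(java.util.Arrays.toString(raw));
        System.out.println(java.util.Arrays.toString(hashed));
    }
}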

Further: A good hash function works well with any bucket_n. A weak hash function improves when bucket_n is a prime.


There is a need to balance the number of entries in a table vs. the table size. If entries_n is much less than table_size, OP's assertions make some sense. Yet this wastes lots of memory.

If entries_n is much greater than table_size, collisions are common, and often even worse without a hash function.

IMO, the hash table size should grow exponentially with the entry count to maintain a density below some threshold, say 1/3. A re-hash of the table is needed to accommodate a size change, since every bucket index depends on the table size.
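A minimal sketch of that growth policy (names and the doubling factor are illustrative; the 1/3 threshold is the one mentioned above):

import java.util.ArrayList;
import java.util.List;

public class GrowableTable<K> {
    private List<K>[] buckets = newBuckets(8);
    private int size = 0;

    @SuppressWarnings("unchecked")
    private static <K> List<K>[] newBuckets(int n) {
        List<K>[] b = new List[n];
        for (int i = 0; i < n; i++) b[i] = new ArrayList<>();
        return b;
    }

    void add(K key) {
        // Double the table whenever density would exceed 1/3.
        if (3 * (size + 1) > buckets.length) rehash(buckets.length * 2);
        buckets[index(key, buckets.length)].add(key);
        size++;
    }

    private void rehash(int newN) {
        // Bucket indexes depend on the table size, so every
        // entry must be redistributed after a resize.
        List<K>[] fresh = newBuckets(newN);
        for (List<K> bucket : buckets)
            for (K key : bucket)
                fresh[index(key, newN)].add(key);
        buckets = fresh;
    }

    private int index(K key, int n) {
        return Math.floorMod(key.hashCode(), n);
    }
}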
