The total number of possible unique Git commit hash values is 16^40: a SHA-1 hash is 40 hexadecimal digits, and each digit has 16 possible values.
This is approximately 10^48 (slightly more, but close enough for this question).
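As a quick sanity check, this can be computed exactly (16^40 is the same as 2^160, the number of distinct 160-bit SHA-1 digests):

```python
# Exact size of the SHA-1 hash space: 40 hex digits, 16 values each.
total = 16 ** 40
print(total)            # 1461501637330902918203684832716283019655932542976
print(len(str(total)))  # 49 digits, i.e. roughly 1.46 * 10^48
```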
My question is: since each commit gets a unique value, how have these values not been exhausted by now?
Or
Are these values unique only inside a repository, i.e. locally unique, which would prevent them from ever being exhausted?
As you can see, I am not sure whether they are locally or globally unique.
Edit -
The question has been answered, but I would also recommend the question Git hash duplicates, as it is somewhat similar to mine. Thanks to @torek for mentioning it.
CodePudding user response:
Pay attention to what that "48" is counting: it is the number of zeroes after the leading "1".
Say there are ten billion people on Earth: that's 1e10. Say all ten billion of them are using Git and each generates ten billion hash codes per second, non-stop. That's 1e20 hash codes used per second if we dedicate the entire human race, full time, with fantasy hardware.

How long would it take them to get through even 0.01% of the Git hash codes? Burning 1e20 of the ~1e48 codes per second, exhausting the whole space would take about 1e28 seconds. 0.01% of that is 1e24 seconds, and at roughly 1e8 seconds per year that is 1e16 years: ten million billion years. Had we started before the Big Bang, we would by now be only about 0.0000014 of the way toward using up 0.01% of the Git hash codes.
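The back-of-envelope figures above can be sketched directly (all inputs are the rough assumptions from the argument, not measured values):

```python
# Rough sketch of the exhaustion argument; every figure is an assumption.
people = 10 ** 10                # ten billion people
rate_each = 10 ** 10             # ten billion hashes per person per second
total_rate = people * rate_each  # 1e20 hashes generated per second

space = 16 ** 40                           # ~1.46e48 possible SHA-1 values
seconds_for_all = space / total_rate       # ~1.46e28 seconds to use them all
seconds_for_001pct = seconds_for_all / 1e4 # time to use just 0.01% of them
years = seconds_for_001pct / 1e8           # ~1e8 seconds per year (rounded up)

print(f"{years:.2e} years")  # on the order of 1e16 years
```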