• theneverfox@pawb.social

    Because hashes are deterministic one-way functions - the same input always produces the same hash, but there’s no practical way to run them backwards

    Let’s say I hash a picture. It could go from 14 MB down to a 128-character hex digest (64 bytes, for SHA-512) - there’s orders of magnitude less information in the hash than in the source data
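
    Here’s a minimal sketch of that size gap in Python, assuming a hypothetical file called photo.jpg - the filename and size are made up for illustration:

    ```python
    import hashlib
    import os

    path = "photo.jpg"  # hypothetical ~14 MB input, name made up for the example

    with open(path, "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()

    print("input size:", os.path.getsize(path), "bytes")
    print("hash      :", digest)                       # always 128 hex characters
    print("hash size :", len(digest), "hex digits =", 512 // 8, "bytes")
    ```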

    Now - with that hash, can you rebuild the picture? You’ve lost a great deal of information - you don’t even necessarily know the size or the format of the input.

    Let’s set up an equation - x is the input (the photo), so hash_func(x) = hash_x

    There are many - possibly infinitely many, depending on the hashing function - values of x that solve our equation. In the case of the photo, most of them will be random combinations of pixels that mean nothing to a human. A few might even happen to look meaningful, but without knowing more about the original you could never be sure you’d found the correct answer
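
    To make the “many solutions” point concrete, here’s a toy sketch that truncates SHA-256 to 16 bits so collisions become easy to find - with a full-length hash the same pigeonhole argument holds, but actually finding a second matching input is computationally infeasible:

    ```python
    import hashlib
    from itertools import count

    def tiny_hash(data: bytes) -> bytes:
        """SHA-256 truncated to 16 bits - only for demonstration."""
        return hashlib.sha256(data).digest()[:2]

    target = tiny_hash(b"pretend these are the original photo bytes")

    # Search for other inputs that produce the same (truncated) hash
    matches = []
    for i in count():
        candidate = f"candidate-{i}".encode()
        if tiny_hash(candidate) == target:
            matches.append(candidate)
            if len(matches) == 3:
                break

    print("three different inputs, same truncated hash:", matches)
    ```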

    Now, passwords might actually be shorter than the resulting hash, but we salt them so the same password hashes to a different value for every user, and the result still can’t be run backwards. The password and the salt together are used as basically the seed for a deterministic pseudorandom function (a key-derivation function) that generates the stored hash
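
    A sketch of what salting looks like in practice, using PBKDF2 from Python’s standard library - the password and iteration count here are made up for the example:

    ```python
    import hashlib
    import os

    password = b"correct horse battery staple"   # example password
    salt = os.urandom(16)                        # fresh random salt per user
    iterations = 600_000

    # Password + salt are fed into a deterministic key-derivation function;
    # the salt is stored next to the hash - it isn't secret, it just isn't reused
    stored = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    print("salt:", salt.hex())
    print("hash:", stored.hex())

    # Verification repeats the exact same computation with the stored salt
    attempt = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    print("match:", attempt == stored)
    ```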

    Again, you have the dual problem of a huge search space and no way to be sure whether you’ve found the original input or just another solution

    Ultimately, everything is defeatable, and if you can narrow down the problem space (say, by knowing the length of a password, having enough known data before and after the target, or finding a bias in the algorithm), you can cut the needed computations by orders of magnitude and make the attack feasible. Quantum computers also grow exponentially in state space as you chain qubits, so I expect someone clever will figure it out sooner or later
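
    As a sketch of how much narrowing the space helps: if you know a password is exactly four lowercase letters, that’s only 26^4 = 456,976 guesses - trivial to brute-force. The unsalted SHA-256 target below is made up for illustration (and is exactly what real systems shouldn’t use):

    ```python
    import hashlib
    from itertools import product
    from string import ascii_lowercase

    # Pretend this unsalted hash leaked and we somehow know the length is 4
    target = hashlib.sha256(b"wolf").hexdigest()

    for guess in product(ascii_lowercase, repeat=4):
        word = "".join(guess).encode()
        if hashlib.sha256(word).hexdigest() == target:
            print("recovered password:", word.decode())
            break
    ```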