
I have used unsalted md5/sha1 for a long time, but as this method isn't really secure (and is getting even less secure as time goes by), I decided to switch to a salted sha512. Furthermore, I want to slow down the generation of the hash by using many iterations (e.g. 100).

My question is whether I should append the salt on every iteration or only once at the beginning. Here are the two variants (excerpts from a hashing function, so $algorithm, $string, $salt, $runs and $raw come from the surrounding scope):

Append every time:

// some nice big salt
$salt = hash($algorithm, $salt);

// apply $algorithm $runs times for slowdown
while ($runs--) {
    $string = hash($algorithm, $string . $salt, $raw);
}

return $string;

Append once:

// add some nice big salt
$string .= hash($algorithm, $salt);

// apply $algorithm $runs times for slowdown
while ($runs--) {
    $string = hash($algorithm, $string, $raw);
}

return $string;

I first wanted to use the second version (append once), but then found some scripts appending the salt every time.

So, I wonder whether adding it every time adds some strength to the hash. For example, could an attacker find some clever way to create a 100timesSha512 function that is far faster than simply executing sha512 100 times?


5 Answers


In short: yes. Go with the first example... The hash function can lose entropy if fed back to itself without re-adding the original data (I can't seem to find a reference right now; I'll keep looking).

And for the record, I am in support of hashing multiple times.

A hash that takes 500 ms to generate is not too slow for your server (considering that hashes typically aren't generated on the vast majority of requests). However, a hash that takes that long will significantly increase the time it takes to generate a rainbow table...
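
As a rough illustration, here is a minimal sketch of how you might calibrate the iteration count to a time budget on your own hardware (the 500 ms target, the starting count and the helper name are assumptions for illustration, not part of any standard API):

// Calibration sketch: double the iteration count until one full
// stretch takes at least the target time on this machine.
function calibrate_runs($algorithm, $target_seconds = 0.5) {
    $runs = 1000;
    do {
        $runs *= 2;
        $string = 'password';
        $start = microtime(true);
        for ($i = 0; $i < $runs; $i++) {
            $string = hash($algorithm, $string);
        }
        $elapsed = microtime(true) - $start;
    } while ($elapsed < $target_seconds);
    return $runs;
}

echo calibrate_runs('sha512'); // first doubled count that costs over ~500 ms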

Yes, it does expose a DoS vulnerability, but it also prevents brute-force attacks (or at least makes them prohibitively slow). There is absolutely a tradeoff, but to some the benefits exceed the risks...

A reference (more like an overview) to the entire process: Key Strengthening

As for the degenerating collisions, the only source I could find so far is this discussion...

And some more discussion on the topic:

  1. HEKS Proposal
  2. SecurityFocus blog on hashing
  3. A paper on Oracle's Password Hashing Algorithms

And a few more links:

  1. PBKDF2 on Wikipedia
  2. PBKDF2 Standard
  3. An email thread that's applicable
  4. Just Hashing Is Far From Enough Blog Post

If you want more, Google "hash stretching"; there's tons of good information out there...
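
On that note, PHP later gained a built-in PBKDF2 implementation (hash_pbkdf2(), available since PHP 5.5, i.e. after this thread); a minimal sketch of using it in place of a hand-rolled loop, with illustrative salt size, iteration count and output length:

// PBKDF2 via PHP's built-in hash_pbkdf2() (PHP 5.5+).
$password   = 'correct horse battery staple';
$salt       = random_bytes(16);      // per-user random salt (PHP 7+)
$iterations = 100000;                // tune to your hardware

$hash = hash_pbkdf2('sha512', $password, $salt, $iterations, 64, true);

// store the salt, iteration count and hash together with the user record
echo bin2hex($salt), ':', $iterations, ':', bin2hex($hash);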

ircmaxell
  • +1 Don't get why this got a downvote. I would really appreciate it if you could find that reference ;) – NikiC Aug 24 '10 at 18:08
  • +1 for the DoS vulnerability. I still think you are recommending a mistake. – rook Aug 24 '10 at 18:18
  • @The Rook: Considering that the vast majority of security papers I've read from reputable researchers suggest stretching keys prior to using them (whether for storage or for encryption), and considering that you need to stretch for PCI compliance, I'm not sure how this is bad advice... – ircmaxell Aug 24 '10 at 18:22
  • Good information, but I still think it's a terrible trade-off. If you look at systems like PBKDF2, you'll see they are only used in cases where the hash is freely given to the attacker, as with WinZip. No one can use this for a web application with more than a handful of users. It's just a waste of resources. – rook Aug 24 '10 at 18:26
  • @ircmaxell PBKDF2 is made by RSA, that's reputable. It's also a logic error: if it's too heavy for the attacker, then it's too heavy for the server farm hosting it. You're better off creating an efficient backup security system like password hashing and then testing your app for ways the hash can be leaked, like SQL injection. Keep in mind password hashing only comes into play after exploitation. – rook Aug 24 '10 at 18:29
  • @The Rook: Sure, I do try to make my applications themselves as secure as possible. But even though I never found a bug in my application which allowed an SQL injection, I won't be so arrogant as to say there may not be any. @ircmaxell: Thanks for your answer. But doesn't the second reference ("this discussion") contradict your first sentence? It says that `there is no difference between MD5(MD5(data || nonce)) and MD5(nonce || MD5(data || nonce))`. – NikiC Aug 24 '10 at 18:35
  • @The Rook: I agree that attention should be focused on other areas of security (but not SQL injection, since good practice, such as using parameterized queries, should mean that your application is immune to it). As for 5000 iterations, it's overkill. But that doesn't mean that even 100 or 20 or 10 iterations won't give you some benefit while still being more resource-friendly. Not to mention that there's a HUGE difference between the opinion `it's just a waste of resources` and the way you're portraying it as `it's a bad idea in general`. Oh, and plenty of large sites use it (to some extent)... – ircmaxell Aug 24 '10 at 18:36
  • @nikic If you test your application and use parameterized queries, then yes, you can make that claim. – rook Aug 24 '10 at 18:37
  • @ircmaxell I disagree; I think that 2 times is a waste, and 100 times still means a dictionary password is broken in under an hour. (I still gave you a +1, I just don't think it's suitable for web applications.) – rook Aug 24 '10 at 18:38
  • @nikic: That's talking about it from a brute force perspective (slowing down the attacker). I was talking about it from a collision perspective: `MD5(MD5(data))` will be a superset of all of the collisions of `MD5(data)`. The degradation is linear, so it's not a significant concern, but it was enough to cause `PBKDF2` to be introduced to combat the concern (considering that's the main difference between `PBKDF1` and `PBKDF2`; see the sketch after these comments)... – ircmaxell Aug 24 '10 at 18:40
  • Ah, thanks for clearing this up. So I will stick with the append-salt-every-time approach. And I will lower the number of iterations. Marking this as the answer. – NikiC Aug 24 '10 at 18:43
  • @The Rook: That's fair enough. I guess my point was that it's a completely valid idea. Whether or not it's worth it is a completely different story. I personally feel that it's worth at least a few iterations (I usually do `50`), but if you don't, then no biggie. It boils down to a personal opinion based upon the trade-offs. I wouldn't reject an app that didn't use it, but by the same token I wouldn't reject an app that did. I guess the best approach is to understand the trade-offs and let the implementers make an educated decision... – ircmaxell Aug 24 '10 at 18:45
  • @The Rook PBKDF2 is also used for secret key generation, for instance to encrypt keys using PKCS #5. It's just a PRF that happens to be designed to stretch low-entropy inputs using iteration. – Jack Lloyd Aug 24 '10 at 21:31
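
To make the PBKDF1/PBKDF2 distinction from the comments concrete, here is a sketch of PBKDF2's inner loop for a single output block (per RFC 2898; the function name is illustrative). Note how the password keys the HMAC in every round, rather than each round hashing only the previous digest as PBKDF1 does:

// One PBKDF2 output block, per RFC 2898.
function pbkdf2_block($algo, $password, $salt, $iterations) {
    // U1 = PRF(password, salt || INT(1)) for the first (and only) block
    $u = hash_hmac($algo, $salt . pack('N', 1), $password, true);
    $t = $u;
    for ($i = 1; $i < $iterations; $i++) {
        // every round re-keys the HMAC with the password
        $u = hash_hmac($algo, $u, $password, true);
        $t ^= $u; // accumulate all rounds by XOR
    }
    return bin2hex($t);
}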

In addition to re-hashing it multiple times, I would use a different salt for each password/user. Though I think 5000 iterations is a bit too much; try a lower number. There's a trade-off here; you'll have to tweak it according to your needs and hardware.

With a different salt for each password, an attacker would be forced to brute-force each password individually instead of constructing a rainbow table, which increases the workload considerably.

As always, here's a recommended read for this: Just hashing is far from enough

EDIT: Iterative hashing is a perfectly valid tactic. There are trade-offs, but everything has them. If you are worried about computation time, why not just store the plaintext password?
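
A minimal sketch of the per-user-salt idea (the helper name, salt size and return shape are assumptions for illustration):

// Generate a fresh random salt per user and store it beside the hash.
function make_user_hash($password, $iterations = 100) {
    $salt = bin2hex(random_bytes(16)); // unique per user (PHP 7+)
    $hash = $password . $salt;
    for ($i = 0; $i < $iterations; $i++) {
        $hash = hash('sha512', $hash . $salt);
    }
    // the salt is not secret; it only has to be unique per user
    return array('salt' => $salt, 'hash' => $hash);
}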

NullUserException

Please, please, please do not roll your own crypto. This is what libraries like OpenSSL are for. Here are a few good examples of how you would use it to make salted hashes.

Salted Hashes in OpenSSL
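
In the same spirit, though it postdates this thread: PHP 5.5 added a built-in password API wrapping bcrypt, which is usually what "don't roll your own" means in practice today; a minimal sketch:

// Built-in password API (PHP 5.5+): random salt and tunable cost factor,
// all handled by the library.
$hash = password_hash('secret passphrase', PASSWORD_BCRYPT, array('cost' => 12));

if (password_verify('secret passphrase', $hash)) {
    echo 'password ok';
}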

Marcin
  • Using OpenSSL for creating your hashes is pretty much as unportable as it gets. You only seldom have command line access from PHP, and it looks like you'd need it. Furthermore, I don't see why I should not "roll [my] own crypto". I simply assume that a) sha512 is better than sha1, b) salting is good and c) stretching is good. Thus I improve my original sha1 approach in three different ways, and I don't see why this should be bad. – NikiC Oct 29 '10 at 15:11
  • How is using a very popular, standard library unportable? Command line access isn't necessary, as basic crypto functionality is usually wrapped up nicely in function calls of whatever language you need. I wasn't saying to blindly cut'n'paste the commands from the link; I was saying this has been done _correctly_ before, so there's no need to reinvent the wheel, incorrectly at that. – Marcin Oct 29 '10 at 16:45

The reason for iterative hashing is to make the process as slow as possible, so you can do even better: use a different salt for each iteration. This can be done by encrypting your original data again and again on each iteration with a fixed key and XORing the result with the salt value.
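
A loose sketch of that idea, purely illustrative and not a vetted construction (the cipher choice and key handling are assumptions made for brevity):

// Re-encrypt the original data each round with a fixed key and XOR the
// ciphertext with the salt, so no two rounds hash identical input.
// AES-128-ECB is used here only to keep the sketch short; ECB is not
// something to reach for in real designs.
function iterated_hash_varying_salt($data, $salt, $key, $runs) {
    $state  = $data;
    $cipher = $data;
    for ($i = 0; $i < $runs; $i++) {
        // each round's ciphertext becomes the next round's plaintext
        $cipher = openssl_encrypt($cipher, 'aes-128-ecb', $key, OPENSSL_RAW_DATA);
        // PHP's string XOR truncates to the shorter operand
        $state = hash('sha512', $state . ($cipher ^ $salt), true);
    }
    return bin2hex($state);
}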

blaze
  • Could you provide a reference for why using multiple salts is even better? – NikiC Oct 29 '10 at 14:31
  • Nothing in academic papers, actually. It just forces the attacker to do one more encryption on each cycle and makes any hash precomputation impossible, since there will be no repeating data at all. – blaze Nov 01 '10 at 07:51
  • Unique salting does not make precomputation impossible, it simply limits the usefulness of a generated rainbow table to cracking a single hash. – Igorio Jun 25 '11 at 03:06
  • What I meant is partially precomputing the hash function itself: the initialization and the common data block at the beginning. By saving this state you can save a little on each turn. – blaze Jun 27 '11 at 09:44

I prefer to go with a double sha1 with two different salts, and I prevent DoS by delaying the answer incrementally (with a simple usleep) for every invalid password check.

xPheRe
  • I think that exactly by using usleep you make your application DoS-vulnerable. PHP is a synchronous, blocking language, so sleeping will still block the PHP process. This way you could make the server unresponsive with far fewer connections than you would need without the sleep ;) – NikiC Oct 29 '10 at 14:30
  • It will not only block PHP with its max parallel executions, but also the webserver with the max requests it can handle in parallel (see for example Slowloris: slow requests to bring down Apache webservers, etc.). It's the other way around: usleep helps against brute-force login attacks but actually increases its usefulness for DoS. – Kissaki Jan 02 '11 at 03:52