
"双重哈希"密码是否比仅仅哈希一次更安全?


Is hashing a password twice before storage any more secure than just hashing it once?

What I'm talking about is doing this:

$hashed_password = hash(hash($plaintext_password));

As opposed to just this:

$hashed_password = hash($plaintext_password);

If it is less secure, can you provide a good explanation (or a link to one)?

Also, does the hash function used make a difference? Does it make any difference if you mix md5 and sha1 (for example) instead of repeating the same hash function?

Note 1: When I say "double hashing" I'm talking about hashing a password twice in an attempt to make it more obscured. I'm not talking about the technique for resolving collisions.

Note 2: I know I need to add a random salt to really make it secure. The question is whether hashing twice with the same algorithm helps or hurts the hash.



1> erickson..:

Hashing a password once is insecure

No, multiple hashes are not less secure; they are an essential part of secure password use.

Iterating the hash increases the time it takes an attacker to try each password in their list of candidates. You can easily increase the time it takes to attack a password from hours to years.

Simple iteration is not enough

Merely chaining hash output to input isn't sufficient for security. The iteration should take place in the context of an algorithm that preserves the entropy of the password. Luckily, there are several published algorithms that have had enough scrutiny to give confidence in their design.

A good key derivation algorithm like PBKDF2 injects the password into each round of hashing, mitigating concerns about collisions in hash output. PBKDF2 can be used for password authentication as-is. Bcrypt follows the key derivation with an encryption step; that way, if a fast way to reverse the key derivation is discovered, an attacker still has to complete a known-plaintext attack.
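As a concrete illustration, here's a minimal PHP sketch of both options. hash_pbkdf2(), password_hash(), and password_verify() are standard PHP built-ins; the salt length, iteration count, and cost factor below are placeholder choices drawn from the ranges discussed later in this answer, not fixed recommendations.

$password = 'the user-supplied password'; // placeholder

// PBKDF2 (built into PHP since 5.5): derive a 32-byte verifier from password + salt.
$salt = random_bytes(16);
$verifier = hash_pbkdf2('sha256', $password, $salt, 100000, 32, true);
// Store $salt and $verifier; on login, recompute and compare with hash_equals().

// bcrypt via password_hash(): the salt and cost factor are encoded in the result.
$stored = password_hash($password, PASSWORD_BCRYPT, ['cost' => 12]);
$ok = password_verify($password, $stored); // true for the matching password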

How passwords are broken

Stored passwords need to be protected from an offline attack. If passwords aren't salted, they can be broken with a pre-computed dictionary attack (for example, using a rainbow table). Otherwise, the attacker must spend time computing a hash for each password and see whether it matches the stored hash.
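To illustrate just the salting idea in PHP (a sketch, not a complete scheme - a single fast hash like this still lacks the iteration cost discussed below):

// Sketch: a per-user random salt stored next to the digest defeats precomputed tables.
$salt = bin2hex(random_bytes(16));
$record = $salt . ':' . hash('sha256', $salt . $password); // $password = enrolled password

// Verification: split the stored record and recompute with the same salt.
[$salt, $digest] = explode(':', $record, 2);
$ok = hash_equals($digest, hash('sha256', $salt . $candidate)); // $candidate = login attempt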

All passwords are not equal. Attackers might search all short passwords exhaustively, but they know that their chances of brute-force success drop sharply with each additional character. Instead, they use an ordered list of the most likely passwords. They start with "password123" and progress to less frequently used passwords.

Let's say an attacker's list is long, with 10 billion candidates; suppose also that a desktop system can compute 1 million hashes per second. If only one iteration is used, the attacker can test her entire list in less than three hours. But if just 2,000 iterations are used, that time extends to nearly 8 months. To defeat a more sophisticated attacker - one capable of downloading a program that can tap the power of their GPU, for example - you need more iterations.
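The arithmetic behind those figures, as a quick sanity check (the list size and hash rate are the assumptions stated above):

$candidates = 1e10; // attacker's list: 10 billion candidates
$rate = 1e6;        // desktop speed: 1 million hashes per second

printf("1 iteration:     %.1f hours\n", $candidates / $rate / 3600);            // ~2.8 hours
printf("2000 iterations: %.1f months\n", 2000 * $candidates / $rate / 2592000); // ~7.7 months (30-day months)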

How much is enough?

The number of iterations to use is a trade-off between security and user experience. Specialized hardware that attackers can use is cheap, and it can perform hundreds of millions of iterations per second. The performance of the attacker's system determines how long it takes to break a password given a number of iterations. But your application is not likely to use that specialized hardware. How many iterations you can perform without aggravating users depends on your system.

You can probably let users wait an extra 3/4 second or so during authentication. Profile your target platform, and use as many iterations as you can afford. Platforms I've tested (one user on a mobile device, or many users on a server platform) can comfortably support PBKDF2 with between 60,000 and 120,000 iterations, or bcrypt with a cost factor of 12 or 13.
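A minimal calibration sketch along those lines, using bcrypt via password_hash() (the 250 ms budget is an assumed value; adjust it for what your users will tolerate):

// Raise the bcrypt cost factor until one hash takes about as long as
// you're willing to make a user wait during login.
$budget = 0.25; // seconds per authentication (assumption)
$cost = 9;
do {
    $cost++;
    $start = microtime(true);
    password_hash('benchmark-password', PASSWORD_BCRYPT, ['cost' => $cost]);
} while (microtime(true) - $start < $budget && $cost < 31); // 31 is bcrypt's maximum cost
echo "Use a cost factor of $cost on this hardware\n";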

More background

Read PKCS #5 for authoritative information on the role of salt and iterations in hashing. Even though PBKDF2 was meant for generating encryption keys from passwords, it works well as a one-way hash for password verification. Each iteration of bcrypt is more expensive than a SHA-2 hash, so you can use fewer iterations, but the idea is the same. Bcrypt also goes a step beyond most PBKDF2-based solutions by using the derived key to encrypt a well-known plain text. The resulting cipher text is stored as the "hash," along with some metadata. However, nothing stops you from doing the same thing with PBKDF2.
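In PHP, for example, the bcrypt "hash" produced by password_hash() stores that metadata inline (a sketch; the salt and digest characters differ on every run):

echo password_hash('hunter2', PASSWORD_BCRYPT, ['cost' => 12]), "\n";
// => $2y$12$<22-character salt><31-character digest>
//    algorithm ($2y$), cost factor (12), and salt all travel with the "hash"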

Here are some other answers I've written on this topic:

Hashing passwords

Hashing passwords

Hiding the salt

PBKDF2 vs bcrypt

Bcrypt


Deliberately crafting a slow algorithm is an accepted practice when you're trying to thwart dictionary attacks against compromised authentication stores. The technique is called "key strengthening" or "key stretching". See http://en.wikipedia.org/wiki/Key_stretching
@RoBorg: it doesn't matter how slow _your_ implementation is, but how slow an attacker's implementation will be: if the hash itself is thousands of times slower, it will take an attacker thousands of times as long to brute-force passwords.
@devin - it's not "my solution", it's a widely accepted practice, built into password-based encryption standards like PKCS #5, and recommended by experts like Robert Morris. It scales extremely well, because the fraction of time spent authenticating users in a legitimate application is tiny. It only becomes hard to scale when your application is cracking passwords - hence the recommendation. Certainly, the search space of a hash is smaller than that of possible passwords, but even a 128-bit space is too huge to brute-force search. The threat to defend against is an offline dictionary attack.
I was referring not to the inconvenience to the individual user, but to the stress that would be put on the server if you had a large user base, because you are relying on CPU load to slow down the number of requests. It means that if you add more CPU power, you are reducing the restriction on those brute-force attackers. -- However, you are completely right about the scalability and the widely accepted practice. I was wrong about almost all the things I said in my earlier comments. Sorry :)
Arguably, you'd want collisions within the 128-bit space 0 through 2^128-1. If the hash algorithm's 2^128 output space were perfect, then theoretically you'd just have a substitution cipher with an alphabet of 2^128 glyphs.
Ultimately, most of this question is moot, since he shouldn't be using MD5 anyway. He should be using a higher SHA, or BCrypt.
It is mathematically less secure, and using sleep() would be better than using a slow algorithm.
So, (since I don't see it stated explicitly) you're saying one should not hash twice; one should instead use some other method designed specifically to limit retry attempts, rather than making the CPU do that work for you.
@hobbs The difference between iterated hashing and PBKDF2 is negligible. You do lose entropy, but it's easy to show that for an ideal hash the loss is extremely small. So your language against simple iterated hashing is too strong. It's not best practice, but it's not a practical problem either.
@lunakid If your function is bijective, then it is a permutation, and that violates properties required of a cryptographic hash. So we don't expect commonly used hashes to be bijective; see [Is SHA-512 bijective when hashing a single 512-bit block?](http://crypto.stackexchange.com/questions/301/is-sha-512-bijective-when-hashing-a-single-512-bit-block) on crypto.SE.

2> ircmaxell..:

To those who say it's secure, they are correct in general. "Double" hashing (or the logical expansion of that, iterating a hash function) is absolutely secure if done right, for a specific concern.

To those who say it's insecure, they are correct in this case. The code that is posted in the question is insecure. Let's talk about why:

$hashed_password1 = md5( md5( $plaintext_password ) );
$hashed_password2 = md5( $plaintext_password );

There are two fundamental properties of a hash function that we're concerned about:

    Pre-Image Resistance - Given a hash $h, it should be difficult to find a message $m such that $h === hash($m)

    Second-Pre-Image Resistance - Given a message $m1, it should be difficult to find a different message $m2 such that hash($m1) === hash($m2)

    Collision Resistance - Given a pair of messages ($m1, $m2), it should be difficult to find them such that hash($m1) === hash($m2) (note that this is similar to Second-Pre-Image Resistance, but different in that here the attacker controls both messages)...

For the storage of passwords, all we really care about is Pre-Image Resistance. The other two would be moot, because $m1 is the user's password we are trying to keep safe. So if the attacker already has it, the hash has nothing left to protect...

Disclaimer

Everything that follows is based on the premise that all we care about is Pre-Image Resistance. The other two fundamental properties of hash functions may not (and typically don't) hold up in the same way. So the conclusions in this post are only applicable when using hash functions for the storage of passwords. They are not applicable in general...

Let's Get Started

For the sake of this discussion, let's invent our own hash function:

function ourHash($input) {
    $result = 0;
    for ($i = 0; $i < strlen($input); $i++) {
        $result += ord($input[$i]);
    }
    return (string) ($result % 256);
}

Now it should be pretty obvious what this hash function does. It sums together the ASCII values of each character of input, and then takes the modulo of that result with 256.

So let's test it out:

var_dump(
    ourHash('abc'), // string(2) "38"
    ourHash('def'), // string(2) "47"
    ourHash('hij'), // string(2) "59"
    ourHash('klm')  // string(2) "68"
);

Now, let's see what happens if we run it a few times in a loop:

$tests = array(
    "abc",
    "def",
    "hij",
    "klm",
);

foreach ($tests as $test) {
    $hash = $test;
    for ($i = 0; $i < 100; $i++) {
        $hash = ourHash($hash);
    }
    echo "Hashing $test => $hash\n";
}

That outputs:

Hashing abc => 152
Hashing def => 152
Hashing hij => 155
Hashing klm => 155

Hrm, wow. We've generated collisions!!! Let's try to look at why:

Here's the output of hashing a string of each and every possible hash output:

Hashing 0 => 48
Hashing 1 => 49
Hashing 2 => 50
Hashing 3 => 51
Hashing 4 => 52
Hashing 5 => 53
Hashing 6 => 54
Hashing 7 => 55
Hashing 8 => 56
Hashing 9 => 57
Hashing 10 => 97
Hashing 11 => 98
Hashing 12 => 99
Hashing 13 => 100
Hashing 14 => 101
Hashing 15 => 102
Hashing 16 => 103
Hashing 17 => 104
Hashing 18 => 105
Hashing 19 => 106
Hashing 20 => 98
Hashing 21 => 99
Hashing 22 => 100
Hashing 23 => 101
Hashing 24 => 102
Hashing 25 => 103
Hashing 26 => 104
Hashing 27 => 105
Hashing 28 => 106
Hashing 29 => 107
Hashing 30 => 99
Hashing 31 => 100
Hashing 32 => 101
Hashing 33 => 102
Hashing 34 => 103
Hashing 35 => 104
Hashing 36 => 105
Hashing 37 => 106
Hashing 38 => 107
Hashing 39 => 108
Hashing 40 => 100
Hashing 41 => 101
Hashing 42 => 102
Hashing 43 => 103
Hashing 44 => 104
Hashing 45 => 105
Hashing 46 => 106
Hashing 47 => 107
Hashing 48 => 108
Hashing 49 => 109
Hashing 50 => 101
Hashing 51 => 102
Hashing 52 => 103
Hashing 53 => 104
Hashing 54 => 105
Hashing 55 => 106
Hashing 56 => 107
Hashing 57 => 108
Hashing 58 => 109
Hashing 59 => 110
Hashing 60 => 102
Hashing 61 => 103
Hashing 62 => 104
Hashing 63 => 105
Hashing 64 => 106
Hashing 65 => 107
Hashing 66 => 108
Hashing 67 => 109
Hashing 68 => 110
Hashing 69 => 111
Hashing 70 => 103
Hashing 71 => 104
Hashing 72 => 105
Hashing 73 => 106
Hashing 74 => 107
Hashing 75 => 108
Hashing 76 => 109
Hashing 77 => 110
Hashing 78 => 111
Hashing 79 => 112
Hashing 80 => 104
Hashing 81 => 105
Hashing 82 => 106
Hashing 83 => 107
Hashing 84 => 108
Hashing 85 => 109
Hashing 86 => 110
Hashing 87 => 111
Hashing 88 => 112
Hashing 89 => 113
Hashing 90 => 105
Hashing 91 => 106
Hashing 92 => 107
Hashing 93 => 108
Hashing 94 => 109
Hashing 95 => 110
Hashing 96 => 111
Hashing 97 => 112
Hashing 98 => 113
Hashing 99 => 114
Hashing 100 => 145
Hashing 101 => 146
Hashing 102 => 147
Hashing 103 => 148
Hashing 104 => 149
Hashing 105 => 150
Hashing 106 => 151
Hashing 107 => 152
Hashing 108 => 153
Hashing 109 => 154
Hashing 110 => 146
Hashing 111 => 147
Hashing 112 => 148
Hashing 113 => 149
Hashing 114 => 150
Hashing 115 => 151
Hashing 116 => 152
Hashing 117 => 153
Hashing 118 => 154
Hashing 119 => 155
Hashing 120 => 147
Hashing 121 => 148
Hashing 122 => 149
Hashing 123 => 150
Hashing 124 => 151
Hashing 125 => 152
Hashing 126 => 153
Hashing 127 => 154
Hashing 128 => 155
Hashing 129 => 156
Hashing 130 => 148
Hashing 131 => 149
Hashing 132 => 150
Hashing 133 => 151
Hashing 134 => 152
Hashing 135 => 153
Hashing 136 => 154
Hashing 137 => 155
Hashing 138 => 156
Hashing 139 => 157
Hashing 140 => 149
Hashing 141 => 150
Hashing 142 => 151
Hashing 143 => 152
Hashing 144 => 153
Hashing 145 => 154
Hashing 146 => 155
Hashing 147 => 156
Hashing 148 => 157
Hashing 149 => 158
Hashing 150 => 150
Hashing 151 => 151
Hashing 152 => 152
Hashing 153 => 153
Hashing 154 => 154
Hashing 155 => 155
Hashing 156 => 156
Hashing 157 => 157
Hashing 158 => 158
Hashing 159 => 159
Hashing 160 => 151
Hashing 161 => 152
Hashing 162 => 153
Hashing 163 => 154
Hashing 164 => 155
Hashing 165 => 156
Hashing 166 => 157
Hashing 167 => 158
Hashing 168 => 159
Hashing 169 => 160
Hashing 170 => 152
Hashing 171 => 153
Hashing 172 => 154
Hashing 173 => 155
Hashing 174 => 156
Hashing 175 => 157
Hashing 176 => 158
Hashing 177 => 159
Hashing 178 => 160
Hashing 179 => 161
Hashing 180 => 153
Hashing 181 => 154
Hashing 182 => 155
Hashing 183 => 156
Hashing 184 => 157
Hashing 185 => 158
Hashing 186 => 159
Hashing 187 => 160
Hashing 188 => 161
Hashing 189 => 162
Hashing 190 => 154
Hashing 191 => 155
Hashing 192 => 156
Hashing 193 => 157
Hashing 194 => 158
Hashing 195 => 159
Hashing 196 => 160
Hashing 197 => 161
Hashing 198 => 162
Hashing 199 => 163
Hashing 200 => 146
Hashing 201 => 147
Hashing 202 => 148
Hashing 203 => 149
Hashing 204 => 150
Hashing 205 => 151
Hashing 206 => 152
Hashing 207 => 153
Hashing 208 => 154
Hashing 209 => 155
Hashing 210 => 147
Hashing 211 => 148
Hashing 212 => 149
Hashing 213 => 150
Hashing 214 => 151
Hashing 215 => 152
Hashing 216 => 153
Hashing 217 => 154
Hashing 218 => 155
Hashing 219 => 156
Hashing 220 => 148
Hashing 221 => 149
Hashing 222 => 150
Hashing 223 => 151
Hashing 224 => 152
Hashing 225 => 153
Hashing 226 => 154
Hashing 227 => 155
Hashing 228 => 156
Hashing 229 => 157
Hashing 230 => 149
Hashing 231 => 150
Hashing 232 => 151
Hashing 233 => 152
Hashing 234 => 153
Hashing 235 => 154
Hashing 236 => 155
Hashing 237 => 156
Hashing 238 => 157
Hashing 239 => 158
Hashing 240 => 150
Hashing 241 => 151
Hashing 242 => 152
Hashing 243 => 153
Hashing 244 => 154
Hashing 245 => 155
Hashing 246 => 156
Hashing 247 => 157
Hashing 248 => 158
Hashing 249 => 159
Hashing 250 => 151
Hashing 251 => 152
Hashing 252 => 153
Hashing 253 => 154
Hashing 254 => 155
Hashing 255 => 156

Notice the tendency towards higher numbers. That turns out to be our downfall. Running the hash 4 times ($hash = ourHash($hash), for each element) winds up giving us:

Hashing 0 => 153
Hashing 1 => 154
Hashing 2 => 155
Hashing 3 => 156
Hashing 4 => 157
Hashing 5 => 158
Hashing 6 => 150
Hashing 7 => 151
Hashing 8 => 152
Hashing 9 => 153
Hashing 10 => 157
Hashing 11 => 158
Hashing 12 => 150
Hashing 13 => 154
Hashing 14 => 155
Hashing 15 => 156
Hashing 16 => 157
Hashing 17 => 158
Hashing 18 => 150
Hashing 19 => 151
Hashing 20 => 158
Hashing 21 => 150
Hashing 22 => 154
Hashing 23 => 155
Hashing 24 => 156
Hashing 25 => 157
Hashing 26 => 158
Hashing 27 => 150
Hashing 28 => 151
Hashing 29 => 152
Hashing 30 => 150
Hashing 31 => 154
Hashing 32 => 155
Hashing 33 => 156
Hashing 34 => 157
Hashing 35 => 158
Hashing 36 => 150
Hashing 37 => 151
Hashing 38 => 152
Hashing 39 => 153
Hashing 40 => 154
Hashing 41 => 155
Hashing 42 => 156
Hashing 43 => 157
Hashing 44 => 158
Hashing 45 => 150
Hashing 46 => 151
Hashing 47 => 152
Hashing 48 => 153
Hashing 49 => 154
Hashing 50 => 155
Hashing 51 => 156
Hashing 52 => 157
Hashing 53 => 158
Hashing 54 => 150
Hashing 55 => 151
Hashing 56 => 152
Hashing 57 => 153
Hashing 58 => 154
Hashing 59 => 155
Hashing 60 => 156
Hashing 61 => 157
Hashing 62 => 158
Hashing 63 => 150
Hashing 64 => 151
Hashing 65 => 152
Hashing 66 => 153
Hashing 67 => 154
Hashing 68 => 155
Hashing 69 => 156
Hashing 70 => 157
Hashing 71 => 158
Hashing 72 => 150
Hashing 73 => 151
Hashing 74 => 152
Hashing 75 => 153
Hashing 76 => 154
Hashing 77 => 155
Hashing 78 => 156
Hashing 79 => 157
Hashing 80 => 158
Hashing 81 => 150
Hashing 82 => 151
Hashing 83 => 152
Hashing 84 => 153
Hashing 85 => 154
Hashing 86 => 155
Hashing 87 => 156
Hashing 88 => 157
Hashing 89 => 158
Hashing 90 => 150
Hashing 91 => 151
Hashing 92 => 152
Hashing 93 => 153
Hashing 94 => 154
Hashing 95 => 155
Hashing 96 => 156
Hashing 97 => 157
Hashing 98 => 158
Hashing 99 => 150
Hashing 100 => 154
Hashing 101 => 155
Hashing 102 => 156
Hashing 103 => 157
Hashing 104 => 158
Hashing 105 => 150
Hashing 106 => 151
Hashing 107 => 152
Hashing 108 => 153
Hashing 109 => 154
Hashing 110 => 155
Hashing 111 => 156
Hashing 112 => 157
Hashing 113 => 158
Hashing 114 => 150
Hashing 115 => 151
Hashing 116 => 152
Hashing 117 => 153
Hashing 118 => 154
Hashing 119 => 155
Hashing 120 => 156
Hashing 121 => 157
Hashing 122 => 158
Hashing 123 => 150
Hashing 124 => 151
Hashing 125 => 152
Hashing 126 => 153
Hashing 127 => 154
Hashing 128 => 155
Hashing 129 => 156
Hashing 130 => 157
Hashing 131 => 158
Hashing 132 => 150
Hashing 133 => 151
Hashing 134 => 152
Hashing 135 => 153
Hashing 136 => 154
Hashing 137 => 155
Hashing 138 => 156
Hashing 139 => 157
Hashing 140 => 158
Hashing 141 => 150
Hashing 142 => 151
Hashing 143 => 152
Hashing 144 => 153
Hashing 145 => 154
Hashing 146 => 155
Hashing 147 => 156
Hashing 148 => 157
Hashing 149 => 158
Hashing 150 => 150
Hashing 151 => 151
Hashing 152 => 152
Hashing 153 => 153
Hashing 154 => 154
Hashing 155 => 155
Hashing 156 => 156
Hashing 157 => 157
Hashing 158 => 158
Hashing 159 => 159
Hashing 160 => 151
Hashing 161 => 152
Hashing 162 => 153
Hashing 163 => 154
Hashing 164 => 155
Hashing 165 => 156
Hashing 166 => 157
Hashing 167 => 158
Hashing 168 => 159
Hashing 169 => 151
Hashing 170 => 152
Hashing 171 => 153
Hashing 172 => 154
Hashing 173 => 155
Hashing 174 => 156
Hashing 175 => 157
Hashing 176 => 158
Hashing 177 => 159
Hashing 178 => 151
Hashing 179 => 152
Hashing 180 => 153
Hashing 181 => 154
Hashing 182 => 155
Hashing 183 => 156
Hashing 184 => 157
Hashing 185 => 158
Hashing 186 => 159
Hashing 187 => 151
Hashing 188 => 152
Hashing 189 => 153
Hashing 190 => 154
Hashing 191 => 155
Hashing 192 => 156
Hashing 193 => 157
Hashing 194 => 158
Hashing 195 => 159
Hashing 196 => 151
Hashing 197 => 152
Hashing 198 => 153
Hashing 199 => 154
Hashing 200 => 155
Hashing 201 => 156
Hashing 202 => 157
Hashing 203 => 158
Hashing 204 => 150
Hashing 205 => 151
Hashing 206 => 152
Hashing 207 => 153
Hashing 208 => 154
Hashing 209 => 155
Hashing 210 => 156
Hashing 211 => 157
Hashing 212 => 158
Hashing 213 => 150
Hashing 214 => 151
Hashing 215 => 152
Hashing 216 => 153
Hashing 217 => 154
Hashing 218 => 155
Hashing 219 => 156
Hashing 220 => 157
Hashing 221 => 158
Hashing 222 => 150
Hashing 223 => 151
Hashing 224 => 152
Hashing 225 => 153
Hashing 226 => 154
Hashing 227 => 155
Hashing 228 => 156
Hashing 229 => 157
Hashing 230 => 158
Hashing 231 => 150
Hashing 232 => 151
Hashing 233 => 152
Hashing 234 => 153
Hashing 235 => 154
Hashing 236 => 155
Hashing 237 => 156
Hashing 238 => 157
Hashing 239 => 158
Hashing 240 => 150
Hashing 241 => 151
Hashing 242 => 152
Hashing 243 => 153
Hashing 244 => 154
Hashing 245 => 155
Hashing 246 => 156
Hashing 247 => 157
Hashing 248 => 158
Hashing 249 => 159
Hashing 250 => 151
Hashing 251 => 152
Hashing 252 => 153
Hashing 253 => 154
Hashing 254 => 155
Hashing 255 => 156

We've narrowed ourselves down to 8 values... That's bad... Our original function mapped S(∞) onto S(256). That is, we've created a Surjective Function mapping $input to $output.

Since we have a Surjective function, we have no guarantee the mapping for any subset of the input won't have collisions (in fact, in practice they will).

That's what happened here! Our function was bad, but that's not why this worked (that's why it worked so quickly and so completely).

The same thing happens with MD5. It maps S(∞) onto S(2^128). And since there's no guarantee that running MD5(S(output)) will be Injective, it can have collisions.

TL/DR Section

Therefore, since feeding the output back to md5 directly can generate collisions, every iteration will increase the chance of collisions. This is a linear increase however, which means that while the result set of 2^128 is reduced, it's not significantly reduced fast enough to be a critical flaw.

So,

$output = md5($input); // 2^128 possibilities
$output = md5($output); // < 2^128 possibilities
$output = md5($output); // < 2^128 possibilities
$output = md5($output); // < 2^128 possibilities
$output = md5($output); // < 2^128 possibilities

The more times you iterate, the further the reduction goes.

The Fix

Fortunately for us, there's a trivial way to fix this: Feed back something into the further iterations:

$output = md5($input); // 2^128 possibilities
$output = md5($input . $output); // 2^128 possibilities
$output = md5($input . $output); // 2^128 possibilities
$output = md5($input . $output); // 2^128 possibilities
$output = md5($input . $output); // 2^128 possibilities    

Note that the further iterations aren't 2^128 for each individual value for $input. Meaning that we may be able to generate $input values that still collide down the line (and hence will settle or resonate at far less than 2^128 possible outputs). But the general case for $input is still as strong as it was for a single round.

Wait, was it? Let's test this out with our ourHash() function, switching to $hash = ourHash($input . $hash); for 100 iterations.
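For concreteness, here's a sketch of the adjusted driver loop (reusing ourHash() from above; the harness details are assumed to match the earlier tests):

for ($n = 0; $n <= 255; $n++) {
    $input = (string) $n;
    $hash = '';
    for ($i = 0; $i < 100; $i++) {
        $hash = ourHash($input . $hash); // the input is fed back into every round
    }
    echo "Hashing $input => $hash\n";
}

That outputs: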

Hashing 0 => 201
Hashing 1 => 212
Hashing 2 => 199
Hashing 3 => 201
Hashing 4 => 203
Hashing 5 => 205
Hashing 6 => 207
Hashing 7 => 209
Hashing 8 => 211
Hashing 9 => 204
Hashing 10 => 251
Hashing 11 => 147
Hashing 12 => 251
Hashing 13 => 148
Hashing 14 => 253
Hashing 15 => 0
Hashing 16 => 1
Hashing 17 => 2
Hashing 18 => 161
Hashing 19 => 163
Hashing 20 => 147
Hashing 21 => 251
Hashing 22 => 148
Hashing 23 => 253
Hashing 24 => 0
Hashing 25 => 1
Hashing 26 => 2
Hashing 27 => 161
Hashing 28 => 163
Hashing 29 => 8
Hashing 30 => 251
Hashing 31 => 148
Hashing 32 => 253
Hashing 33 => 0
Hashing 34 => 1
Hashing 35 => 2
Hashing 36 => 161
Hashing 37 => 163
Hashing 38 => 8
Hashing 39 => 4
Hashing 40 => 148
Hashing 41 => 253
Hashing 42 => 0
Hashing 43 => 1
Hashing 44 => 2
Hashing 45 => 161
Hashing 46 => 163
Hashing 47 => 8
Hashing 48 => 4
Hashing 49 => 9
Hashing 50 => 253
Hashing 51 => 0
Hashing 52 => 1
Hashing 53 => 2
Hashing 54 => 161
Hashing 55 => 163
Hashing 56 => 8
Hashing 57 => 4
Hashing 58 => 9
Hashing 59 => 11
Hashing 60 => 0
Hashing 61 => 1
Hashing 62 => 2
Hashing 63 => 161
Hashing 64 => 163
Hashing 65 => 8
Hashing 66 => 4
Hashing 67 => 9
Hashing 68 => 11
Hashing 69 => 4
Hashing 70 => 1
Hashing 71 => 2
Hashing 72 => 161
Hashing 73 => 163
Hashing 74 => 8
Hashing 75 => 4
Hashing 76 => 9
Hashing 77 => 11
Hashing 78 => 4
Hashing 79 => 3
Hashing 80 => 2
Hashing 81 => 161
Hashing 82 => 163
Hashing 83 => 8
Hashing 84 => 4
Hashing 85 => 9
Hashing 86 => 11
Hashing 87 => 4
Hashing 88 => 3
Hashing 89 => 17
Hashing 90 => 161
Hashing 91 => 163
Hashing 92 => 8
Hashing 93 => 4
Hashing 94 => 9
Hashing 95 => 11
Hashing 96 => 4
Hashing 97 => 3
Hashing 98 => 17
Hashing 99 => 13
Hashing 100 => 246
Hashing 101 => 248
Hashing 102 => 49
Hashing 103 => 44
Hashing 104 => 255
Hashing 105 => 198
Hashing 106 => 43
Hashing 107 => 51
Hashing 108 => 202
Hashing 109 => 2
Hashing 110 => 248
Hashing 111 => 49
Hashing 112 => 44
Hashing 113 => 255
Hashing 114 => 198
Hashing 115 => 43
Hashing 116 => 51
Hashing 117 => 202
Hashing 118 => 2
Hashing 119 => 51
Hashing 120 => 49
Hashing 121 => 44
Hashing 122 => 255
Hashing 123 => 198
Hashing 124 => 43
Hashing 125 => 51
Hashing 126 => 202
Hashing 127 => 2
Hashing 128 => 51
Hashing 129 => 53
Hashing 130 => 44
Hashing 131 => 255
Hashing 132 => 198
Hashing 133 => 43
Hashing 134 => 51
Hashing 135 => 202
Hashing 136 => 2
Hashing 137 => 51
Hashing 138 => 53
Hashing 139 => 55
Hashing 140 => 255
Hashing 141 => 198
Hashing 142 => 43
Hashing 143 => 51
Hashing 144 => 202
Hashing 145 => 2
Hashing 146 => 51
Hashing 147 => 53
Hashing 148 => 55
Hashing 149 => 58
Hashing 150 => 198
Hashing 151 => 43
Hashing 152 => 51
Hashing 153 => 202
Hashing 154 => 2
Hashing 155 => 51
Hashing 156 => 53
Hashing 157 => 55
Hashing 158 => 58
Hashing 159 => 0
Hashing 160 => 43
Hashing 161 => 51
Hashing 162 => 202
Hashing 163 => 2
Hashing 164 => 51
Hashing 165 => 53
Hashing 166 => 55
Hashing 167 => 58
Hashing 168 => 0
Hashing 169 => 209
Hashing 170 => 51
Hashing 171 => 202
Hashing 172 => 2
Hashing 173 => 51
Hashing 174 => 53
Hashing 175 => 55
Hashing 176 => 58
Hashing 177 => 0
Hashing 178 => 209
Hashing 179 => 216
Hashing 180 => 202
Hashing 181 => 2
Hashing 182 => 51
Hashing 183 => 53
Hashing 184 => 55
Hashing 185 => 58
Hashing 186 => 0
Hashing 187 => 209
Hashing 188 => 216
Hashing 189 => 219
Hashing 190 => 2
Hashing 191 => 51
Hashing 192 => 53
Hashing 193 => 55
Hashing 194 => 58
Hashing 195 => 0
Hashing 196 => 209
Hashing 197 => 216
Hashing 198 => 219
Hashing 199 => 220
Hashing 200 => 248
Hashing 201 => 49
Hashing 202 => 44
Hashing 203 => 255
Hashing 204 => 198
Hashing 205 => 43
Hashing 206 => 51
Hashing 207 => 202
Hashing 208 => 2
Hashing 209 => 51
Hashing 210 => 49
Hashing 211 => 44
Hashing 212 => 255
Hashing 213 => 198
Hashing 214 => 43
Hashing 215 => 51
Hashing 216 => 202
Hashing 217 => 2
Hashing 218 => 51
Hashing 219 => 53
Hashing 220 => 44
Hashing 221 => 255
Hashing 222 => 198
Hashing 223 => 43
Hashing 224 => 51
Hashing 225 => 202
Hashing 226 => 2
Hashing 227 => 51
Hashing 228 => 53
Hashing 229 => 55
Hashing 230 => 255
Hashing 231 => 198
Hashing 232 => 43
Hashing 233 => 51
Hashing 234 => 202
Hashing 235 => 2
Hashing 236 => 51
Hashing 237 => 53
Hashing 238 => 55
Hashing 239 => 58
Hashing 240 => 198
Hashing 241 => 43
Hashing 242 => 51
Hashing 243 => 202
Hashing 244 => 2
Hashing 245 => 51
Hashing 246 => 53
Hashing 247 => 55
Hashing 248 => 58
Hashing 249 => 0
Hashing 250 => 43
Hashing 251 => 51
Hashing 252 => 202
Hashing 253 => 2
Hashing 254 => 51
Hashing 255 => 53

There's still a rough pattern there, but note that it's no more of a pattern than our underlying function (which was already quite weak).

Notice however that 0 and 3 became collisions, even though they weren't in the single run. That's an application of what I said before (that the collision resistance stays the same for the set of all inputs, but specific collision routes may open up due to flaws in the underlying algorithm).

TL/DR Section

By feeding back the input into each iteration, we effectively break any collisions that may have occurred in the prior iteration.

Therefore, md5($input . md5($input)); should be (theoretically at least) as strong as md5($input).

Is This Important?

Yes. This is one of the reasons that PBKDF2 replaced PBKDF1 in RFC 2898. Consider the inner loops of the two:

PBKDF1:

T_1 = Hash (P || S) ,
T_2 = Hash (T_1) ,
...
T_c = Hash (T_{c-1}) 

Where c is the iteration count, P is the Password and S is the salt

PBKDF2:

U_1 = PRF (P, S || INT (i)) ,
U_2 = PRF (P, U_1) ,
...
U_c = PRF (P, U_{c-1})

Where PRF is really just an HMAC. But for our purposes here, let's just say that PRF(P, S) = Hash(P || S) (that is, the PRF of 2 inputs is roughly the same as a hash of the two concatenated together). It's very much not, but for our purposes it is.

So PBKDF2 maintains the collision resistance of the underlying Hash function, where PBKDF1 does not.
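To make the difference concrete, here's a toy PHP sketch of just the two inner loops (a simplification: real PBKDF2 also XORs the U_i blocks together to form its output, which is omitted here):

function pbkdf1_style($p, $s, $c) {
    $t = hash('sha256', $p . $s, true);  // T_1 = Hash(P || S)
    for ($i = 1; $i < $c; $i++) {
        $t = hash('sha256', $t, true);   // T_i = Hash(T_{i-1}): the password never re-enters
    }
    return $t;
}

function pbkdf2_style($p, $s, $c) {
    $u = hash_hmac('sha256', $s . pack('N', 1), $p, true); // U_1 = PRF(P, S || INT(1))
    for ($i = 1; $i < $c; $i++) {
        $u = hash_hmac('sha256', $u, $p, true);            // U_i = PRF(P, U_{i-1}): the password enters every round
    }
    return $u;
}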

Tie-ing All Of It Together:

We know of secure ways of iterating a hash. In fact:

$hash = $input;
$i = 10000;
do {
   $hash = hash('sha256', $input . $hash); // a concrete algorithm, with the input fed back each round
} while ($i-- > 0);

Is typically safe.

Now, as to why we would want to iterate in the first place, let's analyze the entropy movement.

A hash takes in the infinite set: S(∞) and produces a smaller, consistently sized set S(n). The next iteration (assuming the input is passed back in) maps S(∞) onto S(n) again:

S(∞) -> S(n)
S(∞) -> S(n)
S(∞) -> S(n)
S(∞) -> S(n)
S(∞) -> S(n)
S(∞) -> S(n)

Notice that the final output has exactly the same amount of entropy as the first one. Iterating will not "make it more obscured". The entropy is identical. There's no magic source of unpredictability (it's a Pseudo-Random-Function, not a Random Function).

There is however a gain to iterating. It makes the hashing process artificially slower. And that's why iterating can be a good idea. In fact, it's the basic principle of most modern password hashing algorithms (the fact that doing something over-and-over makes it slower).

Slow is good, because it's combating the primary security threat: brute forcing. The slower we make our hashing algorithm, the harder attackers have to work to attack password hashes stolen from us. And that's a good thing!!!


For those who want to save the time of checking how the discussion between Dan and ircmaxell finished, **it ended well**: Dan agreeing with ircmaxell.
Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/66609/discussion-between-ircmaxell-and-dan).
@zerkms: It's nothing strict. We'd need to know some very specific details of the underlying function (`md5()` in this case) to say for sure. But in general it's going to be `<` and not `<=`... Remember, we're talking about the size of the set of `$output` over *all* possible `$inputs`. So if we have even *one* collision it will be `<`, which makes `<` the better generalizer.
@TomášFejfar I think the question isn't about collisions in general, but about collisions in the strict output set (2^128 outputs, each exactly 128 bits wide). That *could* be injective, but as far as I know a generic proof isn't possible (only a proof-by-example of a collision for a specific algorithm). Consider a hash function that returns its input unchanged if it is 128 bits (and hashes it otherwise). In general it would be surjective, but when fed its own output it would always be injective... That's the point of contention...

3> orip..:

Yes, re-hashing reduces the search space, but no, it doesn't matter - the effective reduction is insignificant.

Re-hashing increases the time it takes to brute-force, but doing so only twice is also suboptimal.

What you really want is to hash the password with PBKDF2 - a proven method of secure hashing with a salt and iterations. Check out this SO response.

EDIT: I almost forgot - DON'T USE MD5!!!! Use a modern cryptographic hash, such as the SHA-2 family (SHA-256, SHA-384, and SHA-512).


In 2016, Argon2 and scrypt are what everyone should strive to use
@DFTR - agreed. bcrypt or scrypt are better options.

4> Rich Bradsha..:

Yes - it reduces the number of possible strings that match the string.

As you have already mentioned, salted hashes are much better.

There's an article here: http://websecurity.ro/blog/2007/11/02/md5md5-vs-md5/, which attempts to prove why it is equivalent, but I'm not sure about the logic. Partly they assume that there isn't software available to analyse md5(md5(text)), but obviously it's fairly trivial to produce the rainbow tables.

I'm still standing by my answer that there is a smaller number of md5(md5(text)) type hashes than md5(text) hashes, increasing the chance of collisions (even if still at an unlikely probability) and reducing the search space.
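One quick way to see that effect empirically is with the toy ourHash() from answer 2 above (a sketch; the exact counts depend on the round count chosen):

$once = [];
$many = [];
for ($n = 0; $n <= 255; $n++) {
    $h = ourHash((string) $n);       // one round
    $once[$h] = true;
    for ($i = 1; $i < 100; $i++) {   // many rounds of plain re-hashing
        $h = ourHash($h);
    }
    $many[$h] = true;
}
printf("distinct outputs after 1 round: %d, after 100 rounds: %d\n", count($once), count($many));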
