Linux /dev/random Extremely Slow?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow.
Original question: http://stackoverflow.com/questions/4819359/
/dev/random Extremely Slow?
Asked by Mr. Llama
Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for usage later on (benchmarking database operations). I ran a few "head -1" on /dev/random and it seemed to be working out fine, but after calling it a few times, it would just kinda hang. After a few minutes, it would finally output a small block of text, then finish.
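A minimal sketch of the read-and-unpack approach described above (the 16-byte read length is an arbitrary choice; reading from /dev/random can block when the entropy pool is low, which is exactly the behaviour being asked about):

#!/usr/bin/perl
use strict;
use warnings;

# Read 16 raw bytes from /dev/random and print them as a hex string.
# On an entropy-starved system this read can hang until the kernel
# has gathered enough entropy.
open my $fh, '<', '/dev/random' or die "open /dev/random: $!";
binmode $fh;
read($fh, my $bytes, 16) or die "read: $!";
close $fh;

print unpack('H*', $bytes), "\n";   # 32 hex characters for 16 bytes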
I switched to /dev/urandom (I really didn't want to; it's slower and I don't need that quality of randomness) and it worked fine for the first two or three calls, then it too began to hang. I was wondering if it was the "head" command that was bombing it, so I tried doing some simple I/O using Perl, and it too was hanging. As a last-ditch effort, I used the "dd" command to dump some info out of it directly to a file instead of to the terminal. All I asked of it was 1 MB of data, but it took 3 minutes to get ~400 bytes before I killed it.
I checked the process list; CPU and memory were basically untouched. What exactly could cause /dev/random to crap out like this, and what can I do to prevent/fix it in the future?
Edit: Thanks for the help guys! It seems that I had random and urandom mixed up. I've got the script up and running now. Looks like I learned something new today. :)
Accepted answer by Tim
On most Linux systems, /dev/random is powered from actual entropy gathered by the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to power it.
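A quick way to confirm this is to look at the kernel's entropy estimate; the following small sketch only assumes the standard Linux /proc interface:

#!/usr/bin/perl
use strict;
use warnings;

# Print the kernel's current entropy estimate (in bits). A value close to
# zero explains why reads from /dev/random block; values in the thousands
# mean the pool is comfortably full.
open my $fh, '<', '/proc/sys/kernel/random/entropy_avail'
    or die "open entropy_avail: $!";
chomp(my $bits = <$fh>);
close $fh;
print "available entropy: $bits bits\n";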
I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.
Try waiting a little while and then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool reading so much from /dev/random, breaking both generators - allowing your system to create more entropy should replenish them.
See Wikipedia for more info about /dev/random and /dev/urandom.
Answered by Ignacio Vazquez-Abrams
If you want more entropy for /dev/random then you'll either need to purchase a hardware RNG or use one of the *_entropyd daemons in order to generate it.
Answered by ctrl-alt-delor
If you are using randomness for testing (not cryptography), then repeatable randomness is better; you can get this with pseudo-randomness starting from a known seed. There is usually a good library function for this in most languages.
It is repeatable, for when you find a problem and are trying to debug. It also does not eat up entropy. Maybe seed the pseudo-random generator from /dev/urandom and record the seed in the test log. Perl has a pseudo-random number generator you can use.
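A minimal sketch of that idea, assuming nothing beyond Perl's built-in srand/rand: draw one seed from /dev/urandom, log it, and drive the pseudo-random generator with it (the exact sequence may differ between Perl builds, but re-running with the same seed on the same build reproduces it):

#!/usr/bin/perl
use strict;
use warnings;

# Repeatable test data: pick a seed once (from /dev/urandom), record it,
# and feed it to Perl's built-in PRNG. Re-running with the same seed
# reproduces the same values, which helps when debugging a failure.
open my $fh, '<', '/dev/urandom' or die "open /dev/urandom: $!";
read($fh, my $raw, 4) or die "read: $!";
close $fh;

my $seed = unpack 'L', $raw;    # 32-bit unsigned seed
warn "test seed: $seed\n";      # record the seed in the test log

srand($seed);
my @values = map { int rand 1_000_000 } 1 .. 5;
print "@values\n";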
Answered by akostadinov
This question is pretty old, but it is still relevant, so I'm going to give my answer. Many CPUs today come with a built-in hardware random number generator (RNG). Many systems also come with a trusted platform module (TPM) that provides an RNG as well. There are other options that can be purchased, but chances are your computer already has something.
You can use rngd from the rng-utils package on most Linux distros to seed more random data. For example, on Fedora 18 all I had to do to enable seeding from the TPM and the CPU RNG (the RDRAND instruction) was:
# systemctl enable rngd
# systemctl start rngd
You can compare speed with and without rngd. It's a good idea to run rngd -v -f from the command line. That will show you the detected entropy sources. Make sure all necessary modules for supporting your sources are loaded. To use the TPM, it needs to be activated through tpm-tools. Update: here is a nice howto.
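As a rough way to make that comparison, here is a hedged sketch that times a small read from /dev/random; run it once before and once after starting rngd (the 512-byte target is arbitrary, and on an entropy-starved system it can take a while):

#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Time how long it takes to pull a small amount of data out of /dev/random.
# The byte count is kept deliberately small because /dev/random may block.
my $want = 512;
open my $fh, '<', '/dev/random' or die "open /dev/random: $!";
my $t0  = [gettimeofday];
my $got = 0;
while ($got < $want) {
    my $n = read($fh, my $chunk, $want - $got);
    die "read: $!" unless defined $n;
    last if $n == 0;
    $got += $n;
}
close $fh;
printf "read %d bytes in %.2f seconds\n", $got, tv_interval($t0);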
BTW, I've read on the Internet some concerns about TPM RNG often being broken in different ways, but didn't read anything concrete against the RNGs found in Intel, AMD and VIA chips. Using more than one source would be best if you really care about randomness quality.
urandom is good for most use cases (except sometimes during early boot). Most programs nowadays use urandom instead of random. Even openssl does that. See myths about urandom and a comparison of random interfaces.
In recent Fedora and RHEL/CentOS, rng-tools also supports jitter entropy. You can use it when you lack hardware options, or if you just trust it more than your hardware.
UPDATE: another option for more entropy is HAVEGED (questioned quality). On virtual machines there is the kvm/qemu VirtIO RNG (recommended).
Answered by harmv
Use /dev/urandom; it's cryptographically secure.
Good read: http://www.2uo.de/myths-about-urandom/
"If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter."
When in doubt during early boot whether you have gathered enough entropy, use the system call getrandom() instead. [1] (available from Linux kernel >= 3.17) It's the best of both worlds:
- it blocks until (only once!) enough entropy is gathered,
- after that it will never block again.
[1] git kernel commit
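A hedged sketch of calling getrandom() from Perl through syscall(); the syscall number 318 is an assumption that holds only for x86_64 Linux (other architectures use different numbers), and the call needs kernel 3.17 or newer:

#!/usr/bin/perl
use strict;
use warnings;

# Call getrandom(2) directly. SYS_getrandom = 318 is x86_64-specific
# (an assumption); adjust it for other architectures.
my $SYS_getrandom = 318;
my $len = 16;
my $buf = "\0" x $len;                               # pre-extend the buffer
my $n   = syscall($SYS_getrandom, $buf, $len, 0);    # flags = 0
die "getrandom failed: $!" if $n < 0;
printf "got %d bytes: %s\n", $n, unpack('H*', substr($buf, 0, $n));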
Answered by nke
This fixed it for me (in Java): use new SecureRandom() instead of SecureRandom.getInstanceStrong().
Some more info can be found here: https://tersesystems.com/blog/2015/12/17/the-right-way-to-use-securerandom/