C++: Speed up compile time with SSD

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/15199356/

Date: 2020-08-27 19:09:46 · Source: igfitidea

Speed up compile time with SSD

Tags: c++, visual-studio-2010, compilation, ssd

Asked by Jamby

I want to try to speed up the compile time of our C++ projects. They have about 3M lines of code.


Of course, I don't always need to compile every project, but sometimes there are a lot of source files modified by others, and I need to recompile all of them (for example, when someone updates an ASN.1 source file).


I've measured that compiling a mid-sized project (one that does not involve all the source files) takes about three minutes. I know that's not too much, but sometimes it's really boring waiting for a compile...


I've tried moving the source code to an SSD (an old OCZ Vertex 3 60 GB) that, according to benchmarks, is 5 to 60 times faster than the HDD (especially in random reads/writes). Anyway, the compile time is almost the same (maybe 2-3 seconds faster, but it could just be chance).


Maybe moving the Visual Studio binaries to the SSD would give an additional performance boost?


Just to complete the question: I have a Xeon W3520 @ 2.67 GHz and 12 GB of DDR3 ECC RAM.


Accepted answer by us2012

C++ compilation/linking is limited by processing speed, not HDD I/O. That's why you're not seeing any increase in compilation speed. (Moving the compiler/linker binaries to the SSD will do nothing. When you compile a big project, the compiler/linker and the necessary libraries are read into memory once and stay there.)


I have seen some minor speedups from moving the working directory to an SSD or ramdisk when compiling C projects (which is far less time-consuming than C++ projects that make heavy use of templates, etc.), but not enough to make it worth it.

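If you want to check whether your own build is CPU-bound or I/O-bound before buying hardware, one rough way is to compare wall-clock time with the CPU time a clean build consumes. The sketch below assumes a make-based project on Linux; the job count is just an example.

    # Optionally drop the page cache first so previously cached files
    # don't hide disk latency (needs root; skip for a rough comparison).
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    # If 'real' is roughly user+sys divided by the number of parallel jobs,
    # the CPUs are the bottleneck; a much larger 'real' points at I/O waits.
    make clean
    time make -j8

If the CPU time accounts for almost all of the elapsed time, a faster disk will change very little.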

Answered by PlasmaHH

This all depends greatly on your build environment and other setup. For example, on my main compile server, I have 96 GiB of RAM and 16 cores. The HDD is rather slow, but that doesn't really matter, as just about everything is cached in RAM.


On my desktop (where I also compile sometimes) I only have 8 GiB of RAM and six cores. The same parallel build there could be sped up greatly, because six compilers running in parallel eat up enough memory that the SSD's speed advantage becomes very noticeable.


There are many things that influence the build times, including how CPU-bound versus I/O-bound the build is. In my experience (GCC on Linux) they include:


  • Complexity of code. Heavy use of template metaprogramming makes compilation use more CPU time; more C-like code may make the I/O for the generated object files (more) dominant.
  • Compiler settings for temporary files, like -pipe for GCC.
  • The optimization level being used. Usually, the more optimization, the more the CPU work dominates.
  • Parallel builds. Compiling one file at a time will likely never produce enough I/O to push even today's slowest hard disk to any limit. Compiling with eight cores (or more) at once, however, might (see the sketch after this list).
  • The OS/filesystem being used. Some filesystems in the past seem to have choked on the access pattern of many files being built in parallel, essentially putting the I/O bottleneck in the filesystem code rather than the underlying hardware.
  • Available RAM for buffering. The more aggressively an OS can buffer your I/O, the less the HDD speed matters. This is why a make -j6 can sometimes be slower than a make -j4 despite there being enough idle cores.
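
As a rough illustration of the -pipe and parallel-build points above, a minimal GCC/make sketch might look like this (the file name and job count are placeholders):

    # -pipe keeps the intermediate assembler output in pipes instead of
    # temporary files on disk.
    g++ -O2 -pipe -c widget.cpp -o widget.o

    # Run as many compile jobs in parallel as there are cores; only with
    # enough jobs running at once do the disk and filesystem start to matter.
    make -j"$(nproc)"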

To make it short: it depends on enough things that any "yes, it will help you" or "no, it won't help you" is pure speculation, so if you have the possibility to try it out, do it. But don't spend too much time on it: for every hour you spend trying to cut your compile times in half, estimate how often you (or your coworkers, if you have any) could have rebuilt the project in that time, and how that compares to the time you might save.


Answered by the_mandrill

I found that compiling a project of around 1 million lines of C++ sped up by about a factor of two when the code was on an SSD (system with an eight-core Core i7, 12 GB RAM). Actually, the best possible performance we got was with one SSD for the system and a second one for the source -- it wasn't that the build was much faster, but the OS was much more responsive while a big build was underway.


The other thing that made a huge difference was enabling parallel building. Note that there are two separate options that both need to be enabled:


  • Menu Tools → Options → Projects and Solutions → maximum number of parallel project builds
  • Project properties → C/C++ → General → Multi-processor Compilation

The multi-processor compilation is incompatible with a couple of other flags (including minimal rebuild, I think), so check the output window for warnings. I found that with the MP compilation flag set, all cores were hitting close to 100% load, so you can at least see that the CPU is being used aggressively.

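For reference, a rough sketch of what those two settings correspond to outside the IDE (the solution and file names here are placeholders): the parallel-project option maps to MSBuild's /m switch, and the Multi-processor Compilation property maps to the compiler's /MP flag, which MSVC ignores with a warning if minimal rebuild (/Gm) is also enabled.

    :: Build up to 8 projects of the solution in parallel
    :: (the Tools > Options setting above).
    msbuild MySolution.sln /m:8

    :: Compile one project's source files on multiple cores
    :: (the Multi-processor Compilation property); do not combine with /Gm.
    cl /MP /c file1.cpp file2.cpp file3.cpp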

Answered by nocnokneo

One point not mentioned is that when using ccache and a highly parallel build, you'll see benefits to using an SSD.

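A minimal sketch of that kind of setup, assuming GCC and a make-based build on Linux (the cache path and job count are just examples):

    # Keep the ccache cache on the SSD so cache hits are served quickly.
    export CCACHE_DIR=/mnt/ssd/ccache

    # Route compilers through ccache and build with many parallel jobs;
    # with a warm cache a rebuild is mostly I/O, which the SSD speeds up.
    make -j"$(nproc)" CC="ccache gcc" CXX="ccache g++"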

Answered by midhat karim

I did replace my hard disk drive with an SSD, hoping that it would reduce the compilation time of my C++ project. Simply replacing the hard disk drive with an SSD did not solve the problem, and compilation times with both were almost the same.


However, after the initial failures, I succeeded in speeding up compilation by approximately a factor of six.


The following steps were done to increase the compilation speed; a rough command-line sketch follows the list.

完成以下步骤以提高编译速度。

  1. Turned off hibernation: "powercfg -h off" in a command prompt

  2. Turned off drive indexing on the C drive

  3. Shrunk the page file to 800 MB min / 1024 MB max (it was initially set to a system-managed size of 8092 MB).

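
For reference, a rough command-line sketch of those three steps (run from an elevated command prompt; stopping the WSearch service is one way to turn off indexing, the per-drive checkbox in the drive's Properties dialog is another, and the page-file sizes simply mirror the numbers above):

    :: 1. Turn off hibernation.
    powercfg -h off

    :: 2. Stop and disable the Windows Search indexing service.
    sc stop WSearch
    sc config WSearch start= disabled

    :: 3. Take the page file off "system managed" and pin it to 800-1024 MB.
    wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
    wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=800,MaximumSize=1024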