How do I get Windows to go as fast as Linux for compiling C++?

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license, link to the original source, and attribute the original authors (not me). Original question: http://stackoverflow.com/questions/6916011/

Tags: windows, linux, performance, compilation

Asked by gman

I know this is not so much a programming question but it is relevant.

I work on a fairly large cross platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?

A single-change incremental build takes 20 seconds on Linux and over 3 minutes on Windows. Why? I can even install the 'gold' linker on Linux and get that time down to 7 seconds.

Similarly git is 10x to 40x faster on Linux than Windows.

In the git case it's possible git is not using Windows in the optimal way, but what about VC++? You'd think Microsoft would want to make their own developers as productive as possible, and faster compilation would go a long way toward that. Maybe they are trying to encourage developers into C#?

As a simple test, find a folder with lots of subfolders and do a simple

dir /s > c:\list.txt

on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.

ls -R > /tmp/list.txt

I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM and 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds and Linux takes < 1 second.
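
For reference, here is a minimal cross-platform sketch of the same traversal test (my addition, not part of the original question). It assumes a C++17 compiler with <filesystem> and simply counts directory entries, so you can time a cold run and a warm (cached) run on each OS:

// traverse_test.cpp - hypothetical helper; build with any C++17 compiler.
// Counts directory entries under a root path and reports the elapsed time.
// Running it twice shows how well the OS caches file metadata.
#include <chrono>
#include <filesystem>
#include <iostream>

int main(int argc, char** argv) {
    const std::filesystem::path root = (argc > 1) ? argv[1] : ".";
    const auto start = std::chrono::steady_clock::now();
    std::size_t count = 0;
    for (const auto& entry : std::filesystem::recursive_directory_iterator(
             root, std::filesystem::directory_options::skip_permission_denied)) {
        (void)entry;  // we only count entries, never touch file contents
        ++count;
    }
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);
    std::cout << count << " entries in " << ms.count() << " ms\n";
    return 0;
}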

Is there a registry setting I can set to speed up Windows? What gives?



A few slightly related links, relevant to compile times, not necessarily I/O.

Answered by MSN

Incremental linking

If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)

Directory traversal performance

It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than with measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.

Answered by Noufal Ibrahim

Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).

  1. File system - You should try the same operations (including the dir) on the same filesystem. I came across this, which benchmarks a few filesystems for various parameters.

  2. Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.

  3. Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.

Answered by Átila Neves

I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.

The problem is so bad that most of our developers who work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.

Incidentally, if you want to dramatically cut compile times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system is spitting out, you can get 1 to 2 orders of magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantages of a unity build).
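
For readers who haven't seen one, a unity build simply concatenates several translation units into a single one, so shared headers are parsed once and the compiler opens far fewer files. A minimal sketch (my illustration, with hypothetical file names) looks like this:

// unity_build_01.cpp - hypothetical "unity" translation unit generated by the
// build system. Only this file is compiled; the .cpp files it includes are
// excluded from the normal build, so their common headers are parsed once.
#include "renderer.cpp"
#include "physics.cpp"
#include "audio.cpp"

// Caveat: everything above now shares one translation unit, so file-local
// symbols (statics, anonymous namespaces) in different .cpp files can clash.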

Answered by bfrog

I personally found that running a Windows virtual machine on Linux managed to remove a great deal of the I/O slowness in Windows, likely because the Linux VM was doing lots of caching that Windows itself was not.

Doing that I was able to speed up compile times of a large (250Kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.

Answered by TomOnTime

The difficulty in doing that is due to the fact that C++ tends to spread itself and the compilation process over many small, individual files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.
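
To make that concrete (my illustration, not the answer's): every #include in each of those small files is yet another file the compiler must locate, open, read and parse, and the standard headers recursively pull in many more, so a full build performs far more small filesystem operations than the amount of user code suggests.

// widget.cpp - one of thousands of small translation units in a large project.
// Each include below is a separate on-disk file, and <string> and <vector>
// themselves recursively open dozens more; a real project file would add its
// own headers on top, multiplying the small reads the OS has to cache.
#include <string>
#include <vector>

int count_widgets(const std::vector<std::string>& names) {
    return static_cast<int>(names.size());
}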

That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.

The reason for this is due to Unix culture: Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.

  1. Access to source code.

    You can't change what you can't control. Lack of access to the Windows NTFS source code means that most efforts to improve performance have been through hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem rather than fix it.

    Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.

    As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.

  2. Unix tends towards many small files; Windows tends towards a few (or a single) big file.

    Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with their own purpose. The final stage (linking) does create one big file, but that is a small percentage.

    As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.

    Windows applications tend to open one big file, hold it open for a long time, and close it when done. Think of MS-Word. msword.exe (or whatever) opens the file once and appends for hours, updates internal blocks, and so on. Optimizing how fast that one open happens would be wasted effort.

    The history of Windows benchmarking and optimization has been on how fast one can read or write long files. That's what gets optimized.

    Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.

  3. Unix is focused on high performance; Windows is focused on user experience

    Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.

    Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.

  4. The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?

    I don't believe in conspiracy theories, but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means that there is an incentive to solve performance problems by telling people to buy new hardware, not by fixing the real problem: slow operating systems. Unix comes from academia, where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.

    Also, as Unix is open source (even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to Windows source code, but it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much more rare lately, since the innovation seems to all be at the standards body level.

That's how I see it.

Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic, because much of what they are doing makes it easier to keep the entire toolchain in RAM and to repeat less work. Very impressive stuff.

Answered by TomOnTime

The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario. Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
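
For completeness, here is roughly how the MSVC precompiled header feature is wired up. This is a generic sketch using the conventional (but here hypothetical) stdafx files, not code taken from the answer: one header collects the heavy, rarely-changing includes, one source file is compiled with /Yc to produce the .pch, and every other source file is compiled with /Yu and must include that header first.

// stdafx.h - collects heavy, rarely-changing headers so they are parsed once.
#pragma once
#include <map>
#include <string>
#include <vector>

// stdafx.cpp - compiled with /Yc"stdafx.h" to build the shared .pch file.
#include "stdafx.h"

// some_feature.cpp - compiled with /Yu"stdafx.h"; the compiler loads the
// prebuilt .pch instead of re-parsing the headers, but stdafx.h must be the
// very first include in the file.
#include "stdafx.h"

void do_feature_work() { /* normal project code goes here */ }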

Furthermore, on Windows you typically have virus scanners, as well as System Restore and search tools, that can ruin your build times completely if they monitor your build folder for you. The Windows 7 Resource Monitor can help you spot it. I have a reply here with some further tips for optimizing VC++ build times if you're really interested.

Answered by OpenNingia

Try using jom instead of nmake

Get it here: https://github.com/qt-labs/jom

The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.

GNU make does that out of the box thanks to the -j option, which might be one reason for its speed advantage over Microsoft's nmake.

jom works by executing different make commands in parallel on different processors/cores. Try it yourself and feel the difference!

Answered by Agent_L

NTFS updates the last-access time on every file access. You can try disabling it: "fsutil behavior set disablelastaccess 1" (then restart).

Answered by b7kich

IMHO this is all about disk I/O performance. The order of magnitude suggests a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows will be to move your files onto a fast disk, server or filesystem. Consider buying a solid-state drive or moving your files to a RAM disk or a fast NFS server.

I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.

Measured times, as suggested above, traversing the Chromium directory tree:

  • Windows 7 Home Premium (8 GB RAM) on NTFS: 32 seconds
  • Ubuntu 11.04 Linux (2 GB RAM) on NTFS: 10 seconds
  • Ubuntu 11.04 Linux (2 GB RAM) on ext4: 0.6 seconds

For the tests I pulled the Chromium sources (both under Windows and Linux):

git clone http://github.com/chromium/chromium.git 
cd chromium
git checkout remotes/origin/trunk 

To measure the time I ran

ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds  #Powershell

I did turn off access timestamps and my virus scanner, and increased the cache manager settings under Windows (> 2 GB RAM), all without any noticeable improvement. The fact of the matter is that, out of the box, Linux performed 50x better than Windows with a quarter of the RAM.

For anybody who wants to contend that the numbers are wrong, for whatever reason, please give it a try and post your findings.

Answered by RickNZ

A few ideas:

  1. Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
  2. Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
  3. Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
  4. Put your files on an SSD -- helps hugely for random I/O.
  5. If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
  6. Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
  7. If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2. Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
  8. Disable last access time: fsutil behavior set disablelastaccess 1
  9. Disable the indexing service
  10. Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
  11. Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
  12. Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
  13. Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
  14. Check the Windows error log to make sure you aren't experiencing occasional disk errors.
  15. Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
  16. The first 30% or so of the disk is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
  17. Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
  18. Disable any services that you don't need
  19. Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
  20. If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.