Linux time command: microseconds or better accuracy
Declaration: This page is a Chinese-English translation of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, cite the original URL and author information, and attribute it to the original author (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/8586354/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use and share them, but you must attribute them to the original authors (not me): StackOverFlow
linux time command microseconds or better accuracy
Asked by Pushpak Dagade
I wish to know the amount of time taken for execution of a program under Linux in microseconds (or better accuracy). Currently I am using the time command, but it gives me at most millisecond accuracy. Is there some way to tweak the time command to give better accuracy, or is there some other command for the same?
Answered by Employed Russian
Your question is meaningless: you will not get repeatable measurements even within the milliseconds that time does report.
Adding more digits will just add noise. You might as well pull the extra digits from /dev/random.
Answered by TheCottonSilk
Answered by Basile Starynkevitch
I do agree with Employed Russian's answer. It does not make much sense to want microsecond accuracy for such measures. So any additional digit you've got is meaningless (and essentially random).
If you have the source code of the application to measure, you might use the clock or clock_gettime functions, but don't hope for better than a dozen microseconds of accuracy. There is also the RDTSC machine instruction.
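As an illustration (not part of the original answer), here is a minimal sketch that times a code region with clock_gettime and CLOCK_MONOTONIC; measured_work is a made-up placeholder for whatever you want to time:

    #include <stdio.h>
    #include <time.h>

    /* hypothetical workload to be measured */
    static void measured_work(void) {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 10000000UL; i++)
            sum += i;
    }

    int main(void) {
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        measured_work();
        clock_gettime(CLOCK_MONOTONIC, &end);

        /* elapsed wall-clock time in nanoseconds; a single measurement will jitter */
        long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                     + (end.tv_nsec - start.tv_nsec);
        printf("elapsed: %lld ns (~%.3f ms)\n", ns, ns / 1e6);
        return 0;
    }

On older glibc you may need to link with -lrt, and the last few digits printed are exactly the noise discussed above.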
Read the linux clock howto.
And don't forget that the timing of execution is, from an application point of view, non-deterministic and non-reproducible (think about context switches, cache misses, interrupts, etc. happening at random times).
If you want to measure the performance of a whole program, make it run for at least several seconds, measure the time several (e.g. 8) times, and take the average (perhaps dropping the best and worst timings).
If you want to measure timing for particular functions, learn how to profile your application (gprof, oprofile, etc.). See also this question.
Don't forget to read time(7).
Be aware that on current (laptop, desktop, server) out-of-order pipelined superscalar processors with complex CPU caches and TLB and branch predictors, the execution time of some tiny loop or sequence of machine instructions is not reproducible (the nanosecond count will vary from one run to the next). And the OS also adds randomness (scheduling, context switches, interrupts, page cache, copy-on-write, demand paging...), so it does not make any sense to measure the execution of some command with more than one millisecond (or perhaps 100μs if you are lucky) of precision. You should benchmark your command several times.
To get significant measures, you should change the benchmarked application to run for more than a few seconds (perhaps adding some loop in main, or running with a bigger data set...), and repeat the benchmarked command a dozen times. Then take the mean (or the worst, or the best, depending on what you are after) of the measures.
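A sketch of that advice (my own illustration, not code from the answer): repeat a measurement a dozen times and report the mean, best, and worst wall-clock times; run_workload is a placeholder for the code being benchmarked:

    #include <stdio.h>
    #include <time.h>

    #define RUNS 12

    /* placeholder for the code being benchmarked */
    static void run_workload(void) {
        volatile double x = 0.0;
        for (long i = 1; i < 5000000L; i++)
            x += 1.0 / (double)i;
    }

    static double elapsed_seconds(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        double best = 1e30, worst = 0.0, total = 0.0;

        for (int i = 0; i < RUNS; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            run_workload();
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double s = elapsed_seconds(t0, t1);
            total += s;
            if (s < best)  best = s;
            if (s > worst) worst = s;
        }

        printf("mean %.6f s, best %.6f s, worst %.6f s over %d runs\n",
               total / RUNS, best, worst, RUNS);
        return 0;
    }

The spread between best and worst gives a rough idea of how much precision the mean actually deserves.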
If the system time(1) is not enough, you might make your own measurement facility; see also getrusage(2). I'm skeptical about you getting more accurate measures.
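For instance (a sketch of mine, not from the answer), getrusage(2) reports the user and system CPU time consumed by the calling process in struct timeval fields with microsecond resolution, though the real granularity is usually much coarser:

    #include <stdio.h>
    #include <sys/resource.h>

    /* hypothetical CPU-bound work */
    static void burn_cpu(void) {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 50000000UL; i++)
            sum += i * i;
    }

    int main(void) {
        struct rusage ru;

        burn_cpu();

        if (getrusage(RUSAGE_SELF, &ru) != 0) {
            perror("getrusage");
            return 1;
        }

        /* ru_utime / ru_stime are struct timeval: seconds + microseconds */
        printf("user   %ld.%06ld s\n", (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec);
        printf("system %ld.%06ld s\n", (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
        return 0;
    }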
BTW on my i3770K recent GNU/Linux (4.2 kernel, Debian/Sid/x86-64) desktop computer, "system" calls like time(2) or clock_gettime(2) run in about 3 or 4 nanoseconds (thanks to vdso(7), which avoids the burden of a real syscall...), so you could use them inside your program quite often.
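A quick way to check that claim on your own machine (my own sketch, assuming nothing beyond clock_gettime itself) is to time a large number of calls and divide:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long calls = 10 * 1000 * 1000;
        struct timespec t0, t1, scratch;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < calls; i++)
            clock_gettime(CLOCK_MONOTONIC, &scratch);   /* the call being measured */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double total_ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("about %.1f ns per clock_gettime call\n", total_ns / calls);
        return 0;
    }

On machines without the vDSO fast path the per-call cost will be noticeably higher.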
Answered by Anand P Pai
Use gettimeofday -- gives microsecond accuracy
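For illustration (a minimal sketch, not from the original answer; note that gettimeofday is nowadays considered obsolescent in favor of clock_gettime), wall-clock timing with microsecond-resolution fields looks like this; do_work is a placeholder:

    #include <stdio.h>
    #include <sys/time.h>

    /* hypothetical work to be timed */
    static void do_work(void) {
        volatile unsigned long sum = 0;
        for (unsigned long i = 0; i < 10000000UL; i++)
            sum += i;
    }

    int main(void) {
        struct timeval start, end;

        gettimeofday(&start, NULL);
        do_work();
        gettimeofday(&end, NULL);

        /* microsecond-resolution fields do not imply microsecond accuracy */
        long long usec = (end.tv_sec - start.tv_sec) * 1000000LL
                       + (end.tv_usec - start.tv_usec);
        printf("elapsed: %lld microseconds\n", usec);
        return 0;
    }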