Fastest timing resolution system on Windows
Original URL: http://stackoverflow.com/questions/3162826/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me): StackOverFlow
Fastest timing resolution system
Asked by Poni
What is the fastest timing system a C/C++ programmer can use?
For example:
time() will give the seconds since Jan 01 1970 00:00.
GetTickCount() on Windows will give the time, in milliseconds, since the system's start-up time, but is limited to 49.7 days (after that it simply wraps back to zero).
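(That 49.7-day limit is simply the 32-bit range of the counter: 2^32 ms = 4,294,967,296 ms ≈ 49.71 days.)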
I want to get the current time, or ticks since system/app start-up time, in milliseconds.
The biggest concern is the method's overhead - I need the lightest one, because I'm about to call it many, many times per second.
My case is that I have a worker thread, and to that worker thread I post pending jobs. Each job has an "execution time". So, I don't care if the time is the current "real" time or the time since the system's uptime - it just must be linear and light.
Edit:
#include <windows.h>
#include <mutex>

static std::mutex timeMutex;  // guards the static wrap-tracking state below

unsigned __int64 GetTickCountEx()
{
    static DWORD dwWraps = 0;
    static DWORD dwLast = 0;
    DWORD dwCurrent = 0;

    timeMutex.lock();
    dwCurrent = GetTickCount();
    if (dwLast > dwCurrent)  // the 32-bit tick count wrapped past 2^32
        dwWraps++;
    dwLast = dwCurrent;

    // Each wrap is worth 2^32 ms, so shift the wrap count into the high 32 bits.
    unsigned __int64 timeResult =
        ((unsigned __int64)dwWraps << 32) | dwCurrent;
    timeMutex.unlock();

    return timeResult;
}
Answered by porges
For timing, the current Microsoft recommendation is to use QueryPerformanceCounter and QueryPerformanceFrequency.
This will give you better-than-millisecond timing. If the system doesn't support a high-resolution timer, then it will default to milliseconds (the same as GetTickCount).
Here is a short Microsoft article with examples of why you should use it. :)
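As a rough sketch of how the two calls fit together (assuming a Windows build; the helper name is mine and error checks are omitted):

#include <windows.h>

// Elapsed milliseconds between two QueryPerformanceCounter readings.
double ElapsedMs(LARGE_INTEGER start, LARGE_INTEGER end)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);  // ticks per second, constant after boot
    return 1000.0 * (double)(end.QuadPart - start.QuadPart)
                  / (double)freq.QuadPart;
}

In practice you would query the frequency once at start-up and cache it, since it does not change while the system is running.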
Answered by reso
I recently had this question and did some research. The good news is that all three of the major operating systems provide some sort of high resolution timer. The bad news is that it is a different API call on each system. For POSIX operating systems you want to use clock_gettime(). If you're on Mac OS X, however, this is not supported; you have to use mach_absolute_time(). For Windows, use QueryPerformanceCounter. Alternatively, with compilers that support OpenMP, you can use omp_get_wtime(), but it may not provide the resolution that you are looking for.
I also found cycle.h from fftw.org (www.fftw.org/cycle.h) to be useful.
Here is some code that calls a timer on each OS, using some ugly #ifdef statements. The usage is very simple: Timer t; t.tic(); SomeOperation(); t.toc("Message"); and it will print out the elapsed time in seconds.
#ifndef TIMER_H
#define TIMER_H

#include <iostream>
#include <string>

#if (defined(__MACH__) && defined(__APPLE__))
#  define _MAC
#elif (defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(_WIN64))
#  define _WINDOWS
#  ifndef WIN32_LEAN_AND_MEAN
#    define WIN32_LEAN_AND_MEAN
#  endif
#endif

#if defined(_MAC)
#  include <cstdint>
#  include <mach/mach_time.h>
#elif defined(_WINDOWS)
#  include <windows.h>
#else
#  include <time.h>
#endif

// Named timer_val rather than timer_t, which would collide with the
// POSIX timer_t already defined in <time.h>.
#if defined(_MAC)
typedef uint64_t timer_val;
typedef double timer_c;
#elif defined(_WINDOWS)
typedef LONGLONG timer_val;
typedef LARGE_INTEGER timer_c;
#else
typedef double timer_val;
typedef timespec timer_c;
#endif

//==============================================================================
// Timer
// A quick class to do benchmarking.
// Example: Timer t; t.tic(); SomeSlowOp(); t.toc("Some Message");
class Timer {
public:
    Timer();
    inline void tic();
    inline void toc();
    inline void toc(const std::string &msg);
    void print(const std::string &msg);
    void print();
    void reset();
    double getTime();
private:
    timer_val start;
    double duration;
    timer_c ts;
    double conv_factor;
    double elapsed_time;
};

Timer::Timer() {
#if defined(_MAC)
    // mach_absolute_time() counts in time-base units; convert to seconds.
    mach_timebase_info_data_t info;
    mach_timebase_info(&info);
    conv_factor = (static_cast<double>(info.numer)) /
                  (static_cast<double>(info.denom));
    conv_factor = conv_factor * 1.0e-9;
#elif defined(_WINDOWS)
    // QueryPerformanceFrequency() gives ticks per second.
    timer_c freq;
    QueryPerformanceFrequency(&freq);
    conv_factor = 1.0 / (static_cast<double>(freq.QuadPart));
#else
    // clock_gettime() readings are converted to seconds in tic()/toc().
    conv_factor = 1.0;
#endif
    reset();
}

inline void Timer::tic() {
#if defined(_MAC)
    start = mach_absolute_time();
#elif defined(_WINDOWS)
    QueryPerformanceCounter(&ts);
    start = ts.QuadPart;
#else
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
    start = static_cast<double>(ts.tv_sec) + 1.0e-9 *
            static_cast<double>(ts.tv_nsec);
#endif
}

inline void Timer::toc() {
#if defined(_MAC)
    duration = static_cast<double>(mach_absolute_time() - start);
#elif defined(_WINDOWS)
    QueryPerformanceCounter(&ts);
    duration = static_cast<double>(ts.QuadPart - start);
#else
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
    duration = (static_cast<double>(ts.tv_sec) + 1.0e-9 *
                static_cast<double>(ts.tv_nsec)) - start;
#endif
    elapsed_time = duration * conv_factor;
}

inline void Timer::toc(const std::string &msg) { toc(); print(msg); }

void Timer::print(const std::string &msg) {
    std::cout << msg << " ";
    print();
}

void Timer::print() {
    if (elapsed_time) {
        std::cout << "elapsed time: " << elapsed_time << " seconds\n";
    }
}

void Timer::reset() { start = 0; duration = 0; elapsed_time = 0; }

double Timer::getTime() { return elapsed_time; }

#if defined(_WINDOWS)
#  undef WIN32_LEAN_AND_MEAN
#endif

#endif // TIMER_H
Answered by Arno
GetSystemTimeAsFileTime is the fastest resource. Its granularity can be obtained by a call to GetSystemTimeAdjustment, which fills lpTimeIncrement. The system time as a FILETIME has 100 ns units and increments by TimeIncrement. TimeIncrement can vary, and it depends on the setting of the multimedia timer interface.
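For example, the current increment can be queried like this (a small sketch; the return value is left unchecked):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD adjustment = 0, increment = 0;
    BOOL adjustmentDisabled = FALSE;
    // increment receives the tick length in 100 ns units,
    // e.g. 156250 means 15.625 ms
    GetSystemTimeAdjustment(&adjustment, &increment, &adjustmentDisabled);
    printf("time increment: %lu x 100 ns\n", increment);
    return 0;
}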
A call to timeGetDevCaps will disclose the capabilities of the time services. It returns a value wPeriodMin for the minimum supported interrupt period. A call to timeBeginPeriod with wPeriodMin as its argument will set up the system to operate at the highest possible interrupt frequency (typically ~1 ms). This will also force the time increment of the system filetime returned by GetSystemTimeAsFileTime to be smaller. Its granularity will be in the range of 1 ms (10000 100 ns units).
For your purpose, I'd suggest going with this approach.
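A minimal sketch of that approach (assuming a Windows build linked with winmm.lib; the helper names are mine and error handling is omitted):

#include <windows.h>
#include <mmsystem.h>  // timeGetDevCaps / timeBeginPeriod; link with winmm.lib

// Read the system time as a 64-bit count of 100 ns units since Jan 1, 1601.
unsigned __int64 SystemTime100ns(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    return ((unsigned __int64)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
}

void Example(void)
{
    TIMECAPS tc;
    timeGetDevCaps(&tc, sizeof(tc));
    timeBeginPeriod(tc.wPeriodMin);  // raise the interrupt frequency (~1 ms)

    unsigned __int64 t = SystemTime100ns();
    // ... timestamp jobs with t ...

    timeEndPeriod(tc.wPeriodMin);    // always pair with timeBeginPeriod
}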
The QueryPerformanceCounter choice is questionable, since its frequency is inaccurate in two ways: Firstly, it deviates from the value given by QueryPerformanceFrequency by a hardware-specific offset. This offset can easily be several hundred ppm, which means that a conversion into time will contain an error of several hundred microseconds per second. Secondly, it has thermal drift. The drift of such devices can easily be several ppm. This way another, heat-dependent, error of several µs/s is added.
So as long as a resolution of ~1 ms is sufficient and the main question is the overhead, GetSystemTimeAsFileTime is by far the best solution.
When microseconds matter, you'd have to go a longer way and look at more details. Sub-millisecond time services are described at the Windows Timestamp Project.
Answered by caf
If you're just worried about GetTickCount() overflowing, then you can just wrap it like this:
DWORDLONG GetLongTickCount(void)
{
    static DWORDLONG last_tick = 0;
    DWORD tick = GetTickCount();

    // If the 32-bit tick went backwards, GetTickCount() wrapped: carry into
    // the high 32 bits, then splice the new low 32 bits back in.
    if (tick < (last_tick & 0xffffffff))
        last_tick += 0x100000000;
    last_tick = (last_tick & 0xffffffff00000000) | tick;
    return last_tick;
}
If you want to call this from multiple threads you'll need to lock access to the last_tick variable. As long as you call GetLongTickCount() at least once every 49.7 days, it'll detect the overflow.
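A thread-safe variant along those lines might look like this (a sketch using a C++11 std::mutex; the wrapper name is mine, and it only helps if all callers go through it):

#include <mutex>

static std::mutex tickMutex;  // guards last_tick inside GetLongTickCount()

DWORDLONG GetLongTickCountSafe(void)
{
    std::lock_guard<std::mutex> lock(tickMutex);
    return GetLongTickCount();
}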
Answered by rjnilsson
I'd suggest that you use the GetSystemTimeAsFileTime API if you're specifically targeting Windows. It's generally faster than GetSystemTime and has the same precision (which is some 10-15 milliseconds - don't look at the resolution); when I did a benchmark some years ago under Windows XP it was somewhere in the range of 50-100 times faster.
The only disadvantage is that you might have to convert the returned FILETIME structures to a clock time using e.g. FileTimeToSystemTime if you need to access the returned times in a more human-friendly format. On the other hand, as long as you don't need those converted times in real-time you can always do this off-line or in a "lazy" fashion (e.g. only convert the time stamps you need to display/process, and only when you actually need them).
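Such a lazy conversion could look like this (a sketch; the helper name is mine and the return value of FileTimeToSystemTime is left unchecked):

#include <windows.h>
#include <stdio.h>

// Convert a stored FILETIME stamp to human-readable form only when needed.
void PrintTimestamp(const FILETIME *ft)
{
    SYSTEMTIME st;
    FileTimeToSystemTime(ft, &st);
    printf("%04u-%02u-%02u %02u:%02u:%02u.%03u\n",
           st.wYear, st.wMonth, st.wDay,
           st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
}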
QueryPerformanceCounter can be a good choice as others have mentioned, but the overhead can be rather large depending on the underlying hardware support. In the benchmark I mention above, QueryPerformanceCounter calls were 25-200 times slower than calls to GetSystemTimeAsFileTime. Also, there are some reliability problems as e.g. reported here.
So, in summary: if you can cope with a precision of 10-15 milliseconds I'd recommend you use GetSystemTimeAsFileTime. If you need anything better than that I'd go for QueryPerformanceCounter.
Small disclaimer: I haven't performed any benchmarking under Windows versions later than XP SP3. I'd recommend you do some benchmarking on your own.
Answered by Len Holgate
If you are targeting a late enough version of the OS then you could use GetTickCount64(), which has a much higher wrap-around point than GetTickCount(). You could also simply build a version of GetTickCount64() on top of GetTickCount().
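One way to use the native call only where the OS provides it, falling back otherwise, is to look it up at run time (a sketch; the wrapper name is mine, and on older systems you would still need wrap tracking as in the answer above):

#include <windows.h>

typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);

// Uses the native GetTickCount64 (Vista and later) when it exists,
// otherwise falls back to the 32-bit GetTickCount.
ULONGLONG TickCount64(void)
{
    static GetTickCount64Fn fn = (GetTickCount64Fn)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "GetTickCount64");
    return fn ? fn() : GetTickCount();
}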
Answered by Eric
Have you reviewed the code in this MSDN article?
http://msdn.microsoft.com/en-us/magazine/cc163996.aspx
I have this code compiling on a Windows 7 64-bit machine using both VC2005 and C++ Builder XE, but when executing, it locks up my machine; I have not debugged far enough to figure out why yet. It seems overly complicated. Templates of templates of templates of UG...
Answered by Jonathan Leffler
POSIX supports clock_gettime(), which uses a struct timespec that has nanosecond resolution. Whether your system really supports that fine-grained a resolution is more debatable, but I believe that's the standard call with the highest resolution. Not all systems support it, and it is sometimes well hidden (library '-lposix4' on Solaris, IIRC).
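A minimal usage sketch on a POSIX system (CLOCK_MONOTONIC is used here because, unlike CLOCK_REALTIME, it is not affected by clock adjustments; older glibc needs -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    double seconds = (double)ts.tv_sec + 1e-9 * (double)ts.tv_nsec;
    printf("%.9f\n", seconds);
    return 0;
}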
Update (2016-09-20):
- Mac OS X 10.6.4 did not support clock_gettime(), and neither did any other version of Mac OS X up to and including Mac OS X 10.11.6 (El Capitan). However, starting with macOS Sierra 10.12 (released September 2016), macOS does have the function clock_gettime() and manual pages for it at long last. The actual resolution (on CLOCK_MONOTONIC) is still microseconds; the smaller units are all zeros. This is confirmed by clock_getres(), which reports that the resolution is 1000 nanoseconds, aka 1 μs.
The manual page for clock_gettime() on macOS Sierra mentions mach_absolute_time() as a way to get high-resolution timing. For more information, amongst other places, see Technical Q&A QA1398: Mach Absolute Time Units and (on SO) What is mach_absolute_time() based on on iPhone?
Answered by Dirk Eddelbuettel
On Linux you get microseconds:
#include <sys/time.h>  /* gettimeofday */

struct timeval tv;
int res = gettimeofday(&tv, NULL);  /* returns 0 on success */
double tmp = (double) tv.tv_sec + 1e-6 * (double) tv.tv_usec;
On Windows, only milliseconds are available:
SYSTEMTIME st;
GetSystemTime(&st);
/* tmp holds whole seconds obtained elsewhere in the original code */
tmp += 1e-3 * st.wMilliseconds;
return tmp;
This came from R's datetime.c (and was edited down for brevity).
Then there is of course Boost's Date_Time, which can have nanosecond resolution on some systems (details here and here).
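For instance, a microsecond-resolution wall-clock read might look like this (a sketch assuming Boost is installed; microsec_clock is the widely available clock, while nanosecond support depends on how Boost was built for the platform):

#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main()
{
    boost::posix_time::ptime t =
        boost::posix_time::microsec_clock::universal_time();
    std::cout << boost::posix_time::to_iso_extended_string(t) << '\n';
    return 0;
}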
Answered by Robby Shaw
On Mac OS X, you can simply use UInt32 TickCount(void) to get the ticks.