Getting the System tick count with basic C++?
Disclaimer: This page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/2738669/
Asked by Donal Rafferty
I essentially want to reconstruct the GetTickCount() Windows function so I can use it in basic C++ without any non-standard libraries or even the STL (so that it works with only the libraries supplied with the Android NDK).
I have looked at
clock()
localtime
time
But I'm still unsure whether it is possible to replicate the GetTickCount() Windows function with the time library.
Can anyone point me in the right direction as to how to do this, or even whether it's possible?
An overview of what I want to do:
I want to be able to calculate how long an application has been "doing" a certain function.
So, for example, I want to be able to calculate how long the application has been trying to register with a server.
I am trying to port it from Windows to run on Linux-based Android; here is the Windows code:
int TimeoutTimer::GetSpentTime() const
{
    if (m_On)
    {
        if (m_Freq > 1)
        {
            // High-resolution path: QueryPerformanceCounter fills a 64-bit LARGE_INTEGER.
            LARGE_INTEGER now;
            QueryPerformanceCounter(&now);
            return (int)((1000 * (now.QuadPart - m_Start)) / m_Freq);
        }
        else
        {
            // Fallback: GetTickCount() returns milliseconds since the OS started.
            return (GetTickCount() - (int)m_Start);
        }
    }
    return -1;  // Timer is not running.
}
Answered by fadden
On Android NDK you can use the POSIX clock_gettime() call, which is part of libc. This function is where various Android timer calls end up.
For example, java.lang.System.nanoTime() is implemented with:
struct timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
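// Note: u8 here is the Dalvik sources' typedef for a 64-bit unsigned integer.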
return (u8)now.tv_sec*1000000000LL + now.tv_nsec;
This example uses the monotonic clock, which is what you want when computing durations. Unlike the wall clock (available through gettimeofday()), it won't skip forward or backward when the device's clock is changed by the network provider.
The Linux man page for clock_gettime() describes the other clocks that may be available, such as the per-thread elapsed CPU time.
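To make that concrete, here is a minimal sketch of a GetTickCount()-style helper built on clock_gettime() (the name getTickCountMs is my own, and it assumes a POSIX environment such as the Android NDK where CLOCK_MONOTONIC is available):

#include <stdint.h>
#include <time.h>

// Milliseconds since some fixed, arbitrary point in the past. As with GetTickCount(),
// only the difference between two samples is meaningful.
static uint64_t getTickCountMs()
{
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (uint64_t)now.tv_sec * 1000u + (uint64_t)now.tv_nsec / 1000000u;
}

A timeout or elapsed-time check then reduces to subtracting two samples, e.g. getTickCountMs() - start.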
Answered by Adrian McCarthy
clock() works very similarly to Windows's GetTickCount(). The units may be different: GetTickCount() returns milliseconds, while clock() returns ticks at CLOCKS_PER_SEC ticks per second. Both have a max that will roll over (for Windows, that's about 49.7 days).
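For reference, the 49.7-day figure is just a 32-bit millisecond counter wrapping around; a quick sketch of the arithmetic:

#include <cstdio>

int main()
{
    // GetTickCount() returns a 32-bit millisecond count, so it wraps at 2^32 ms.
    const double wrapMs = 4294967296.0;                      // 2^32
    const double wrapDays = wrapMs / (1000.0 * 60 * 60 * 24);
    std::printf("GetTickCount() wraps after about %.1f days\n", wrapDays);  // ~49.7
    return 0;
}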
GetTickCount() starts at zero when the OS starts. From the docs, it looks like clock() starts when the process does. Thus you can compare times between processes with GetTickCount(), but you probably can't do that with clock().
If you're trying to compute how long something has been happening, within a single process, and you're not worried about rollover:
const clock_t start = clock();
// do stuff here
clock_t now = clock();
clock_t delta = now - start;
double seconds_elapsed = static_cast<double>(delta) / CLOCKS_PER_SEC;
Clarification: There seems to be uncertainty in whether clock() returns elapsed wall time or processor time. The first several references I checked say wall time. For example:
Returns the number of clock ticks elapsed since the program was launched.
which admittedly is a little vague. MSDN is more explicit:
The elapsed wall-clock time since the start of the process...
User darron convinced me to dig deeper, so I found a draft copy of the C standard (ISO/IEC 9899:TC2), and it says:
... returns the implementation's best approximation to the processor time used...
I believe every implementation I've ever used gives wall-clock time (which I suppose is an approximation to the processor time used).
Conclusion: If you're trying to time some code so you can benchmark various optimizations, then my answer is appropriate. If you're trying to implement a timeout based on actual wall-clock time, then you have to check your local implementation of clock() or use another function that is documented to give elapsed wall-clock time.
Update: With C++11, there is also the <chrono> portion of the standard library, which provides a variety of clocks and types to capture times and durations. While standardized and widely available, it's not clear whether the Android NDK fully supports it yet.
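As a rough sketch of what that looks like (assuming a C++11 toolchain; std::chrono::steady_clock plays the role of the monotonic clock, and the helper name elapsedMs is my own):

#include <chrono>

// Milliseconds elapsed since 'start', measured on a monotonic clock.
long long elapsedMs(std::chrono::steady_clock::time_point start)
{
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now() - start).count();
}

// Usage:
//   auto start = std::chrono::steady_clock::now();
//   // ... try to register with the server ...
//   long long spent = elapsedMs(start);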
Answered by daramarak
This is platform-dependent, so you just have to write a wrapper and implement the specifics for each platform.
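A minimal sketch of such a wrapper, assuming only Windows and POSIX/Android targets (the name GetTickCountPortable and the millisecond contract are choices made here, not part of any existing API):

#include <stdint.h>

#ifdef _WIN32
#include <windows.h>
// Windows build: forward to the native millisecond tick counter.
static uint64_t GetTickCountPortable()
{
    return GetTickCount();
}
#else
#include <time.h>
// POSIX/Android build: derive milliseconds from the monotonic clock.
static uint64_t GetTickCountPortable()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000u + (uint64_t)ts.tv_nsec / 1000000u;
}
#endif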
Answered by Edward Strange
It's not possible. The C++ standard and, as a consequence, the standard library know nothing about processors or 'ticks'. This may or may not change in C++0x with the threading support, but at least for now, it's not possible.
Answered by Michael Dorgan
Do you have access to a vblank interrupt function (or hblank) on the Android? If so, increment a global, volatile var there for a timer.