
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not me). Original source: http://stackoverflow.com/questions/5833550/

Date: 2020-08-05 03:52:25  Source: igfitidea

How do I get the most accurate realtime periodic interrupts in Linux?

Tags: linux, timer, real-time

Asked by Matt

I want to be interrupted at frequencies that are powers of ten, so enabling interrupts from /dev/rtc isn't ideal. I'd like to sleep 1 millisecond or 250 microseconds between interrupts.


Enabling periodic interrupts from /dev/hpet works pretty well, but it doesn't seem to work on some machines. Obviously I can't use it on machines that don't actually have a HPET. But I can't get it working on some machines that have hpet available as a clocksource either. For example, on a Core 2 Quad, the example program included in the kernel documentation fails at HPET_IE_ON when set to poll.


It would be nicer to use the itimer interface provided by Linux instead of interfacing with the hardware device driver directly. And on some systems, the itimer provides periodic interrupts that are more stable over time. That is, since the hpet can't interrupt at exactly the frequency I want, the interrupts start to drift from wall time. But I'm seeing some systems sleep way longer (10+ milliseconds) than they should using an itimer.


Here's a test program using itimer for interrupts. On some systems it will only print out one warning that it slept about 100 microseconds or so over the target time. On others, it will print out batches of warnings that it slept 10+ milliseconds over the target time. Compile with -lrt and run with sudo chrt -f 50 [name]


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <error.h>
#include <errno.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/time.h>
#include <time.h>
#include <signal.h>
#include <fcntl.h>
#define NS_PER_SECOND 1000000000LL
#define TIMESPEC_TO_NS( aTime ) ( ( NS_PER_SECOND * ( ( long long int ) aTime.tv_sec ) ) \
    + aTime.tv_nsec )

int main()
{
    // Block alarm signal, will be waited on explicitly
    sigset_t lAlarm;
    sigemptyset( &lAlarm );
    sigaddset( &lAlarm, SIGALRM  );
    sigprocmask( SIG_BLOCK, &lAlarm, NULL );

    // Set up periodic interrupt timer
    struct itimerval lTimer;
    int lReceivedSignal = 0;

    lTimer.it_value.tv_sec = 0;
    lTimer.it_value.tv_usec = 250;
    lTimer.it_interval = lTimer.it_value;

    // Start timer
    if ( setitimer( ITIMER_REAL, &lTimer, NULL ) != 0 )
    {
        error( EXIT_FAILURE, errno, "Could not start interval timer" );
    }
    struct timespec lLastTime;
    struct timespec lCurrentTime;
    clock_gettime( CLOCK_REALTIME, &lLastTime );
    while ( 1 )
    {
        //Periodic wait
        if ( sigwait( &lAlarm, &lReceivedSignal ) != 0 )
        {
            error( EXIT_FAILURE, errno, "Failed to wait for next clock tick" );
        }
        clock_gettime( CLOCK_REALTIME, &lCurrentTime );
        long long int lDifference = 
            ( TIMESPEC_TO_NS( lCurrentTime ) - TIMESPEC_TO_NS( lLastTime ) );
        if ( lDifference  > 300000 )
        {
            fprintf( stderr, "Waited too long: %lld\n", lDifference  );
        }
        lLastTime = lCurrentTime;
    }
    return 0;
}

Answered by Rakis

Regardless of the timing mechanism you use, it boils down to a combination of changes in your task's run state, when the kernel scheduler is invoked (usually 100 or 1000 times per second), and CPU contention with other processes.


The mechanism I've found that achieves the "best" timing on Linux (Windows as well) is to do the following:


  1. Place the process on a Shielded CPU
  2. Have the process initially sleep for 1ms. If on a shielded CPU, your process should wake right on the OS scheduler's tick boundary
  3. Use either the RDTSC directly or CLOCK_MONOTONIC to capture the current time. Use this as the zero time for calculating the absolute wakeup times for all future periods. This will help minimize drift over time. It can't be eliminated completely since hardware timekeeping fluctuates over time (thermal issues and the like) but it's a pretty good start.
  4. Create a sleep function that sleeps until 1ms short of the target absolute wakeup time (as that's the most accurate the OS scheduler can be), then burns the CPU in a tight loop, continually checking the RDTSC/CLOCK_MONOTONIC value.

It takes some work but you can get pretty good results using this approach. A related question you may want to take a look at can be found here.


Answered by SzG

I've had the same problem with a bare setitimer() setup. The problem is that, by default, your process is scheduled under the SCHED_OTHER policy at static priority level 0. This means you're in a pool with all other processes, and dynamic priorities decide. The moment there is some system load, you get latencies.


The solution is to use the sched_setscheduler() system call, increase your static priority to at least one, and specify SCHED_FIFO policy. It causes a dramatic improvement.


#include <sched.h>
...
int main(int argc, char *argv[])
{
    ...
    struct sched_param schedp;
    schedp.sched_priority = 1;
    if (sched_setscheduler(0, SCHED_FIFO, &schedp) != 0)
    {
        perror("sched_setscheduler");
    }
    ...
}

You have to run as root to be able to do this. The alternative is to use the chrt program to do the same, but you must know the PID of your RT process.


sudo chrt -f -p 1 <pid>

See my blog post about it here.
