Declaration: this page is a translated copy of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA terms, link to the original address, and attribute it to the original authors (not me): StackOverflow
Original source: http://stackoverflow.com/questions/3959295/
Multithreading UDP server with epoll?
Asked by Daniel
I'd like to develop a multithreaded UDP server in C/Linux. The service is running on a single port x, thus there's only the possibility to bind a single UDP socket to it. In order to work under high loads, I have n threads (statically defined), say 1 thread per CPU. Work could be delivered to the thread using epoll_wait, so threads get woken up on demand with 'EPOLLET | EPOLLONESHOT'. I've attached a code example:
static int epfd;
static sig_atomic_t sigint = 0;

...

/* Thread routine with epoll_wait */
static void *process_clients(void *pevents)
{
    int rc, i, sock, nfds;
    struct epoll_event ep, *events = (struct epoll_event *) pevents;

    while (!sigint) {
        nfds = epoll_wait(epfd, events, MAX_EVENT_NUM, 500);

        for (i = 0; i < nfds; ++i) {
            if (events[i].data.fd < 0)
                continue;

            sock = events[i].data.fd;

            if ((events[i].events & EPOLLIN) == EPOLLIN) {
                printf("Event dispatch!\n");
                handle_request(sock); /* do a recvfrom */
            } else
                whine("Unknown poll event!\n");

            /* Re-arm the one-shot entry for this socket */
            memset(&ep, 0, sizeof(ep));
            ep.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
            ep.data.fd = sock;

            rc = epoll_ctl(epfd, EPOLL_CTL_MOD, sock, &ep);
            if (rc < 0)
                error_and_die(EXIT_FAILURE, "Cannot re-arm socket in epoll!\n");
        }
    }

    pthread_exit(NULL);
}
int main(int argc, char **argv)
{
    int rc, i, cpu, sock, opts;
    struct sockaddr_in sin;
    struct epoll_event ep, *events;
    char *local_addr = "192.168.1.108";
    void *status;
    pthread_t *threads = NULL;
    cpu_set_t cpuset;

    threads = xzmalloc(sizeof(*threads) * MAX_THRD_NUM);
    events = xzmalloc(sizeof(*events) * MAX_EVENT_NUM);

    sock = socket(PF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        error_and_die(EXIT_FAILURE, "Cannot create socket!\n");

    /* Non-blocking */
    opts = fcntl(sock, F_GETFL);
    if (opts < 0)
        error_and_die(EXIT_FAILURE, "Cannot fetch sock opts!\n");
    opts |= O_NONBLOCK;
    rc = fcntl(sock, F_SETFL, opts);
    if (rc < 0)
        error_and_die(EXIT_FAILURE, "Cannot set sock opts!\n");

    /* Initial epoll setup */
    epfd = epoll_create(MAX_EVENT_NUM);
    if (epfd < 0)
        error_and_die(EXIT_FAILURE, "Error fetching an epoll descriptor!\n");

    memset(&ep, 0, sizeof(ep));
    ep.events = EPOLLIN | EPOLLET | EPOLLONESHOT;
    ep.data.fd = sock;

    rc = epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ep);
    if (rc < 0)
        error_and_die(EXIT_FAILURE, "Cannot add socket to epoll!\n");

    /* Socket binding */
    sin.sin_family = AF_INET;
    sin.sin_addr.s_addr = inet_addr(local_addr);
    sin.sin_port = htons(port_xy);

    rc = bind(sock, (struct sockaddr *) &sin, sizeof(sin));
    if (rc < 0)
        error_and_die(EXIT_FAILURE, "Problem binding to port! "
                      "Already in use?\n");

    register_signal(SIGINT, &signal_handler);

    /* Thread initialization */
    for (i = 0, cpu = 0; i < MAX_THRD_NUM; ++i) {
        rc = pthread_create(&threads[i], NULL, process_clients, events);
        if (rc != 0)
            error_and_die(EXIT_FAILURE, "Cannot create pthread!\n");

        CPU_ZERO(&cpuset);
        CPU_SET(cpu, &cpuset);
        rc = pthread_setaffinity_np(threads[i], sizeof(cpuset), &cpuset);
        if (rc != 0)
            error_and_die(EXIT_FAILURE, "Cannot set thread affinity!\n");

        cpu = (cpu + 1) % NR_CPUS_ON;
    }

    printf("up and running!\n");

    /* Thread joining */
    for (i = 0; i < MAX_THRD_NUM; ++i) {
        rc = pthread_join(threads[i], &status);
        if (rc != 0)
            error_and_die(EXIT_FAILURE, "Error on thread exit!\n");
    }

    close(sock);
    xfree(threads);
    xfree(events);

    printf("shut down!\n");
    return 0;
}
Is this the proper way of handling this scenario with epoll? Should the function handle_request return as fast as possible, given that event delivery for the socket stays disabled (because of EPOLLONESHOT) until the thread re-arms it?
Thanks for replies!
Accepted answer by cmeerw
As you are only using a single UDP socket, there is no point in using epoll - just use a blocking recvfrom instead.
Now, depending on the protocol you need to handle - if you can process each UDP packet individually - you can actually call recvfrom concurrently from multiple threads (in a thread pool). The OS will ensure that exactly one thread receives each UDP packet. That thread can then do whatever it needs to do in handle_request.
However, if you need to process the UDP packets in a particular order, you'll probably not have many opportunities to parallelise your program...
Answered by slezica
No, this will not work the way you want to. To have worker threads process events arriving through an epoll interface, you need a different architecture.
Example design (there are several ways to do this). Uses: SysV/POSIX semaphores.
1. Have the master thread spawn n subthreads and a semaphore, then block epolling your sockets (or whatever).
2. Have each subthread block on downing the semaphore.
3. When the master thread unblocks, it stores the events in some global structure and ups the semaphore once per event.
4. The subthreads unblock, process the events, and block again when the semaphore returns to 0.
You can use a pipe shared among all threads to achieve very similar functionality to that of the semaphore. This would let you block on select() instead of the semaphore, which you could also use to wake the threads up on some other event (timeouts, other pipes, etc.).
You can also reverse this control and have the master thread wake up when its workers demand tasks. I think the above approach is better for your case, though.