/* According to POSIX.1-2001 */
#include <sys/select.h>

/* According to earlier standards */
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int select(int nfds, fd_set *readfds, fd_set *writefds,
           fd_set *exceptfds, struct timeval *timeout);

void FD_CLR(int fd, fd_set *set);
int  FD_ISSET(int fd, fd_set *set);
void FD_SET(int fd, fd_set *set);
void FD_ZERO(fd_set *set);

#include <sys/select.h>

int pselect(int nfds, fd_set *readfds, fd_set *writefds,
            fd_set *exceptfds, const struct timespec *timeout,
            const sigset_t *sigmask);
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
pselect(): _POSIX_C_SOURCE >= 200112L || _XOPEN_SOURCE >= 600
The operation of select() and pselect() is identical, other than these three differences:

(i) select() uses a timeout that is a struct timeval (with seconds and microseconds), while pselect() uses a struct timespec (with seconds and nanoseconds).

(ii) select() may update the timeout argument to indicate how much time was left; pselect() does not change this argument.

(iii) select() has no sigmask argument, and behaves as pselect() called with NULL sigmask.
Three independent sets of file descriptors are watched. Those listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file), those in writefds will be watched to see if a write will not block, and those in exceptfds will be watched for exceptions. On exit, the sets are modified in place to indicate which file descriptors actually changed status. Each of the three file descriptor sets may be specified as NULL if no file descriptors are to be watched for the corresponding class of events.
Four macros are provided to manipulate the sets. FD_ZERO() clears a set. FD_SET() and FD_CLR() respectively add and remove a given file descriptor from a set. FD_ISSET() tests to see if a file descriptor is part of the set; this is useful after select() returns.
nfds is the highest-numbered file descriptor in any of the three sets, plus 1.
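As a hedged illustration (the helper name build_read_set is not part of this page), the macros and the nfds calculation might be combined like this:

    #include <sys/select.h>

    /* Sketch: add two descriptors to a read set and return the
       nfds value to pass to select(): the highest-numbered file
       descriptor plus 1. */
    static int
    build_read_set(int fd1, int fd2, fd_set *readfds)
    {
        FD_ZERO(readfds);           /* start with an empty set */
        FD_SET(fd1, readfds);       /* watch fd1 for readability */
        FD_SET(fd2, readfds);       /* watch fd2 for readability */

        return (fd1 > fd2 ? fd1 : fd2) + 1;
    }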
timeout is an upper bound on the amount of time elapsed before select() returns. If both fields of the timeval structure are zero, then select() returns immediately. (This is useful for polling.) If timeout is NULL (no timeout), select() can block indefinitely.
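For instance, a minimal sketch of a nonblocking readability poll (both timeval fields zeroed so that select() returns immediately; the helper name is illustrative):

    #include <stddef.h>
    #include <sys/select.h>

    /* Sketch: poll a descriptor for readability without blocking.
       select() returns the number of ready descriptors, so this
       yields 1 if fd is readable, 0 if not, and -1 on error. */
    int
    poll_readable(int fd)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        tv.tv_sec = 0;              /* zero timeout: return immediately */
        tv.tv_usec = 0;

        return select(fd + 1, &rfds, NULL, NULL, &tv);
    }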
sigmask is a pointer to a signal mask (see sigprocmask(2)); if it is not NULL, then pselect() first replaces the current signal mask by the one pointed to by sigmask, then does the "select" function, and then restores the original signal mask.
Other than the difference in the precision of the timeout argument, the following pselect() call:
    ready = pselect(nfds, &readfds, &writefds, &exceptfds,
                    timeout, &sigmask);

is equivalent to atomically executing the following calls:

    sigset_t origmask;

    sigprocmask(SIG_SETMASK, &sigmask, &origmask);
    ready = select(nfds, &readfds, &writefds, &exceptfds, timeout);
    sigprocmask(SIG_SETMASK, &origmask, NULL);
The reason that pselect() is needed is that if one wants to wait for either a signal or for a file descriptor to become ready, then an atomic test is needed to prevent race conditions. (Suppose the signal handler sets a global flag and returns. Then a test of this global flag followed by a call of select() could hang indefinitely if the signal arrived just after the test but just before the call. By contrast, pselect() allows one to first block signals, handle the signals that have come in, then call pselect() with the desired sigmask, avoiding the race.)
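A minimal sketch of this pattern, assuming a hypothetical global flag got_sigchld set by a SIGCHLD handler (the function and flag names are illustrative, not part of this page):

    #include <signal.h>
    #include <stddef.h>
    #include <sys/select.h>

    static volatile sig_atomic_t got_sigchld = 0;   /* hypothetical flag */

    static void
    handler(int sig)
    {
        got_sigchld = 1;
    }

    int
    wait_for_child_or_fd(int fd)
    {
        sigset_t blockset, emptyset;
        struct sigaction sa;
        fd_set readfds;

        /* Block SIGCHLD so it cannot arrive between the flag test
           and the pselect() call. */
        sigemptyset(&blockset);
        sigaddset(&blockset, SIGCHLD);
        sigprocmask(SIG_BLOCK, &blockset, NULL);

        sa.sa_handler = handler;
        sa.sa_flags = 0;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGCHLD, &sa, NULL);

        if (got_sigchld) {
            /* handle signals that have already arrived ... */
            got_sigchld = 0;
        }

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);

        /* pselect() atomically installs the empty mask for the
           duration of the call, so a pending SIGCHLD interrupts it
           with EINTR instead of being missed. */
        sigemptyset(&emptyset);
        return pselect(fd + 1, &readfds, NULL, NULL, NULL, &emptyset);
    }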
The time structures involved are defined in <sys/time.h> and look like

    struct timeval {
        long    tv_sec;         /* seconds */
        long    tv_usec;        /* microseconds */
    };
and
    struct timespec {
        long    tv_sec;         /* seconds */
        long    tv_nsec;        /* nanoseconds */
    };
(However, see below on the POSIX.1-2001 versions.)
Some code calls select() with all three sets empty, nfds zero, and a non-NULL timeout as a fairly portable way to sleep with subsecond precision.
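A sketch of that idiom, sleeping for roughly 200 milliseconds (the helper name is illustrative):

    #include <stddef.h>
    #include <sys/select.h>

    /* Sketch: sleep for about 200 ms using select() with empty sets.
       The timeval must be reinitialized before each such call. */
    void
    short_sleep(void)
    {
        struct timeval tv;

        tv.tv_sec = 0;
        tv.tv_usec = 200000;    /* 200 ms in microseconds */

        select(0, NULL, NULL, NULL, &tv);
    }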
On Linux, select() modifies timeout to reflect the amount of time not slept; most other implementations do not do this. (POSIX.1-2001 permits either behavior.) This causes problems both when Linux code which reads timeout is ported to other operating systems, and when code is ported to Linux that reuses a struct timeval for multiple select()s in a loop without reinitializing it. Consider timeout to be undefined after select() returns.
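To stay portable, one might therefore reinitialize the timeout on every iteration, as in this sketch (the function name is illustrative):

    #include <stddef.h>
    #include <sys/select.h>

    /* Sketch: call select() repeatedly on a single descriptor,
       resetting the struct timeval before every call because its
       contents are undefined after select() returns. */
    void
    watch_fd(int fd)
    {
        fd_set rfds;
        struct timeval tv;

        for (;;) {
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);

            tv.tv_sec = 1;      /* fresh one-second timeout each time */
            tv.tv_usec = 0;

            if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
                break;          /* fd became ready */
        }
    }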
pselect() is defined in POSIX.1g, and in POSIX.1-2001.
Concerning the types involved, the classical situation is that the two fields of a timeval structure are typed as long (as shown above), and the structure is defined in <sys/time.h>. The POSIX.1-2001 situation is
    struct timeval {
        time_t         tv_sec;     /* seconds */
        suseconds_t    tv_usec;    /* microseconds */
    };
where the structure is defined in <sys/select.h> and the data types time_t and suseconds_t are defined in <sys/types.h>.
Concerning prototypes, the classical situation is that one should include <time.h> for select(). The POSIX.1-2001 situation is that one should include <sys/select.h> for select() and pselect().
Libc4 and libc5 do not have a <sys/select.h> header; under glibc 2.0 and later this header exists. Under glibc 2.0 it unconditionally gives the wrong prototype for pselect(). Under glibc 2.1 to 2.2.1 it gives pselect() when _GNU_SOURCE is defined. Since glibc 2.2.2 the requirements are as shown in the SYNOPSIS.
Starting with version 2.1, glibc provided an emulation of pselect() that was implemented using sigprocmask(2) and select(). This implementation remained vulnerable to the very race condition that pselect() was designed to prevent. Modern versions of glibc use the (race-free) pselect() system call on kernels where it is provided.
On systems that lack pselect(), reliable (and more portable) signal trapping can be achieved using the self-pipe trick (where a signal handler writes a byte to a pipe whose other end is monitored by select() in the main program.)
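A compressed sketch of the self-pipe trick (error checking omitted; the pipefd array, the handler, and the function name are illustrative):

    #include <fcntl.h>
    #include <signal.h>
    #include <stddef.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int pipefd[2];       /* handler writes, select() watches */

    static void
    handler(int sig)
    {
        char ch = 'x';
        write(pipefd[1], &ch, 1);   /* async-signal-safe notification */
    }

    void
    setup_and_wait(int fd)
    {
        struct sigaction sa;
        fd_set readfds;
        int maxfd;

        pipe(pipefd);
        /* Make both ends nonblocking so a full pipe cannot wedge the
           signal handler. */
        fcntl(pipefd[0], F_SETFL, fcntl(pipefd[0], F_GETFL) | O_NONBLOCK);
        fcntl(pipefd[1], F_SETFL, fcntl(pipefd[1], F_GETFL) | O_NONBLOCK);

        sa.sa_handler = handler;
        sa.sa_flags = SA_RESTART;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);
        FD_SET(pipefd[0], &readfds);
        maxfd = (fd > pipefd[0] ? fd : pipefd[0]);

        /* Readability of pipefd[0] indicates that SIGINT arrived. */
        select(maxfd + 1, &readfds, NULL, NULL, NULL);
    }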
Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has the wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.
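A sketch of setting O_NONBLOCK with fcntl(2), so that a read on a spuriously "ready" socket fails with EAGAIN instead of blocking (the helper name is illustrative):

    #include <fcntl.h>

    /* Sketch: mark a descriptor nonblocking so that read()/recv()
       on a spuriously "ready" socket returns EAGAIN rather than
       blocking. */
    int
    set_nonblocking(int sockfd)
    {
        int flags = fcntl(sockfd, F_GETFL, 0);

        if (flags == -1)
            return -1;

        return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);
    }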
On Linux, select() also modifies timeout if the call is interrupted by a signal handler (i.e., the EINTR error return). This is not permitted by POSIX.1-2001. The Linux pselect() system call has the same behavior, but the glibc wrapper hides this behavior by internally copying the timeout to a local variable and passing that variable to the system call.
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int
main(void)
{
    fd_set rfds;
    struct timeval tv;
    int retval;

    /* Watch stdin (fd 0) to see when it has input. */
    FD_ZERO(&rfds);
    FD_SET(0, &rfds);

    /* Wait up to five seconds. */
    tv.tv_sec = 5;
    tv.tv_usec = 0;

    retval = select(1, &rfds, NULL, NULL, &tv);
    /* Don't rely on the value of tv now! */

    if (retval == -1)
        perror("select()");
    else if (retval)
        printf("Data is available now.\n");
        /* FD_ISSET(0, &rfds) will be true. */
    else
        printf("No data within five seconds.\n");

    exit(EXIT_SUCCESS);
}
For vaguely related stuff, see accept(2), connect(2), poll(2), read(2), recv(2), send(2), sigprocmask(2), write(2), epoll(7), time(7)