Subject: Re: support for event-based loops (or workaround for FD_SETSIZE limit)


From: Daniel Stenberg <>
Date: Fri, 23 Jan 2015 16:49:15 +0100 (CET)

On Thu, 22 Jan 2015, Daniel Hardman wrote:

> I would love to switch to an epoll-style approach. I found one interaction
> between Daniel S and someone else on this forum, where this was discussed.
> Daniel said he had added ares_getsock() to support just such a use case.
> However, ares_getsock() only supports up to 16 file descriptors, which is
> *way* less than what I need.

We use one channel per lookup, so for us that is not a restriction that causes
problems. A single channel rarely uses more than 16 file descriptors.

> And besides, I don't want to build up a list of file descriptors each time I
> iterate through a polling loop; one of the main benefits of epoll was
> supposed to be the decoupling of registration of FDs from monitoring them.

You don't need to build up a list of descriptors each time. In our case, we
check if the set of descriptors is still the same for the specific channel
that had traffic, and if so (the majority of the time) nothing has changed and
we continue as before. If the set has changed, we update which file
descriptors we listen to. (Simply put, we make a library that is event-lib
agnostic, so it is slightly more complicated in reality.)

> I also saw some posts back in Nov 2013 that floated the idea of supporting
> an epoll-like mechanism in cares--but no follow-up announcement of a patch.

I wouldn't mind adding a set of callbacks or something that tells the
application about file descriptors. It has been mentioned before, but as far
as I can remember nobody ever produced a full patch for it.

> There are also some tantalizing hints that the curl team (including Daniel?)
> fiddled with c-ares plumbing to support epoll in some manner.

We use the *getsock() for that, and we've done well over 50K parallel HTTP
requests with associated name resolves using that in the past. It works.

> do you think I could solve my problem by simply creating multiple channels,
> each with a more limited capacity (say, 256 pending requests)?

You can, and you don't have to limit it to 256 if you don't want to.

Received on 2015-01-23