Ethereal-dev: Re: [Ethereal-dev] How to stop Linux packet capture from dropping packets?
On Fri, 19 Apr 2002, Guy Harris wrote:
> On Sat, Apr 20, 2002 at 03:30:56PM +0930, Richard Sharpe wrote:
> > I am capturing large amounts of data from lo under Linux, and it seems
> > that with large transmits (around 65535 bytes), the libpcap stuff is
> > dropping the last two segments of each transmit.
> >
> > Does anyone know of any kernel param I can tune to stop this?
>
> On Linux, packet capture is done with PF_PACKET sockets, so the
> buffering would be the socket buffer size; at least from a quick look at
> the 2.4.9 kernel code, the default socket receive buffer size appears to
> be...
>
> ...65535 bytes.
>
> That value comes from a "sysctl_rmem_default" variable, which appears to
> be controlled by "/proc/sys/net/core/rmem_default"; you might also have
> to increase "/proc/sys/net/core/rmem_max".
Hmmm, since I did not need to see all 16384 bytes sent on lo, I cut the
snaplen back to 1500 (which is still more than I needed), and with that I
lost only some 17,000 packets out of roughly 270,000.
The resulting file was still a little large, so I used editcap to pull out
the first 10,000 packets for a look.
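I will also try bumping the two /proc entries you mention before the next
run. For the archives, the following (untested; 262144 is just a number I
picked) should be the moral equivalent of echoing larger values into /proc
as root:

    /* bump_rmem.c - raise the default and maximum socket receive buffer
     * sizes before starting a capture.  Needs root; values are a guess. */
    #include <stdio.h>

    static int write_proc(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");

        if (f == NULL) {
            perror(path);
            return -1;
        }
        fprintf(f, "%s\n", value);
        return fclose(f);
    }

    int main(void)
    {
        /* same effect as "echo 262144 > /proc/sys/net/core/rmem_max" */
        write_proc("/proc/sys/net/core/rmem_max", "262144");
        write_proc("/proc/sys/net/core/rmem_default", "262144");
        return 0;
    }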
> An alternative might be to explicitly do a "setsockopt()" on the result
> of "pcap_fileno()", on Linux.
>
> > Similarly for FreeBSD? I already know about debug.bpf_bufsize for FreeBSD.
>
> "debug.bpf_bufsize" would be the default buffer size; does boosting that
> above 65535 bytes not fix the problem? ("debug.bpf_maxbufsize" is
> 524288 in FreeBSD 4.5, at least from a quick look at the code, so you
> should at least be able to make it that large.)
Doesn't help; I still lose packets over GigE. Looking at the code, it may
be because capture applications are not woken up (unless they are in
immediate mode) until the buffer is full. Perhaps if I wake them up at 75%
full, I will lose fewer packets. Also, asking for less data per packet
should cut the losses further.
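For what it is worth, immediate mode can be switched on from user land with
the BIOCIMMEDIATE ioctl on the descriptor libpcap returns, although I
suspect the extra wakeups would make matters worse rather than better at
GigE rates. A rough, untested sketch:

    /* FreeBSD: put the underlying bpf device into immediate mode, so a
     * read returns as soon as a packet arrives rather than when the
     * store buffer fills.  Untested, and probably counter-productive
     * at high packet rates. */
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioctl.h>
    #include <net/bpf.h>
    #include <pcap.h>
    #include <stdio.h>

    int set_immediate(pcap_t *p)
    {
        u_int on = 1;

        if (ioctl(pcap_fileno(p), BIOCIMMEDIATE, &on) < 0) {
            perror("BIOCIMMEDIATE");
            return -1;
        }
        return 0;
    }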
Regards
-----
Richard Sharpe, rsharpe@xxxxxxxxxx, rsharpe@xxxxxxxxx,
sharpe@xxxxxxxxxxxx