Wireshark-dev: Re: [Wireshark-dev] tshark: drop features "dump to stdout" and "read filter" - c
From: Ulf Lamping <ulf.lamping@xxxxxx>
Date: Wed, 10 Oct 2007 17:29:58 +0200
> 
> Packets should be lost going from the kernel up to dumpcap, not between 
> dumpcap and *shark (unless I miss something: normally I would expect 
> that writing to a full pipe results in your write blocking, not message 
> disposal).  So how is that different then the old model where *shark 
> disposal).  So how is that different than the old model where *shark 
> only read stuff from the kernel as fast as it could?

You are completely ignoring that this mechanism is really time-critical: waiting for tshark to complete its task cannot be better than having dumpcap alone in the "critical capture path".

What happens to the growing number of packets in the kernel buffers if dumpcap is blocked in a write call to the pipe and therefore won't fetch any packets from the kernel? After a short time the kernel buffers will fill up and the kernel will drop packets while dumpcap is still waiting for tshark to catch up.

> 
> > The "temporary file model" is working in Wireshark's "update list of 
> > packets" mode for quite a while and is working ok.
> 
> Except (unless my idea about that problem is incorrect) when you're 
> using a ring buffer (see bug 1650).
> 
> I see two ways of solving that problem:
> 
> - keep dumpcap and *shark synchronized all the time (for example if a
>    pipe was used between the two to transfer the packets)
> 	- if *shark can't keep up then packets will be lost but _when_
> 	  they get lost is really dependent on when *shark is too slow

Now you have two tasks that must process the packets in real time instead of one - which is almost certainly a bad idea if you want to prevent packet drops.

> 
> - have dumpcap and *shark synchronize only when changing files
> 	- in this case dumpcap would be fast up until changing files at
> 	  which point it might block for a potentially huge amount of
> 	  time (while *shark catches up).  In this case all the packet
> 	  loss would happen in "bursts" at file change time.  That seems
> 	  rather unattractive to me.
> 
> Another method would be to have dumpcap create all the ring buffer files 
> and to have *shark delete them (when it has finished with them).  That 
> would avoid the problem but it defeats the (common) purpose of using the 
> ring buffer which is to avoid going over some specified amount of disk 
> usage (because dumpcap could go off and create hundreds of files while 
> *shark is still busy processing the first ones).
> 

BTW: Bug 1650 can be summarized as follows: if the rate of incoming packets is higher than what Wireshark/tshark can process, every model with somehow limited space (e.g. ring buffer files) must fail sooner or later.

Regards, ULFL
