Wireshark-dev: Re: [Wireshark-dev] Disabling dissection when a packet is selected in display
From: Guy Harris <guy@xxxxxxxxxxxx>
Date: Wed, 26 Aug 2009 03:39:58 -0700

On Aug 26, 2009, at 3:25 AM, Sudarshan Raghavan wrote:

> When running a capture or when opening a captured file, Wireshark
> dissects it to build the display tree and so on. What I also observed
> is that when I select a packet in the display, it once again calls the
> dissector to analyze the packet. This seems a little wasteful, since
> the analysis done earlier is discarded.

That depends on what you want to waste.

*Not* discarding the analysis done earlier would require

	1) generating the full protocol tree for every packet when reading in the capture (even if the full information from the protocol tree isn't needed at that point)

and

	2) storing that tree for every packet.

Many years ago, when we first split the protocol tree from the tree widget used to display the packet, there was a bug that caused the protocol tree for a packet not to be freed when Ethereal (as it was called at the time) was finished with it. I discovered this when filtering a large capture, because my machine thrashed like crazy (to the point of unusability).

In other words, it's a question of whether you want to consume lots of memory (possibly wastefully) or consume extra CPU.

> It also poses a problem for streaming protocols like RTMP, where what
> was seen earlier decides how to make sense of the current data. For
> example, RTMP has header optimizations by which the message length is
> sent only once and subsequent RTMP chunks use the length sent earlier.

To be precise, it poses a problem for writers of dissectors for protocols where what was seen earlier controls how to make sense of the current data; those dissectors are required to:

	1) on the first pass through the packets, maintain state associated with the conversation, and construct data structures so that, for any given packet, the state information needed to dissect it can be retrieved;

and

	2) on all subsequent dissections of a packet, fetch that information and use it.

See, for example, the SMTP dissector, which does exactly that.
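
For anybody writing such a dissector, the general shape of that pattern is sketched below. This is a minimal sketch rather than working code: the "foo" protocol, the foo_conv_t and foo_pkt_t structures, and the placeholder length-field logic are all made up for illustration, and you should check epan/conversation.h and the p_add_proto_data()/p_get_proto_data() routines for the exact signatures in the version you're building against.

    #include <epan/packet.h>
    #include <epan/conversation.h>
    #include <epan/emem.h>

    static int proto_foo = -1;

    /* Hypothetical per-conversation state: the last message length seen. */
    typedef struct {
        guint32 last_msg_len;
    } foo_conv_t;

    /* Hypothetical per-packet snapshot of that state. */
    typedef struct {
        guint32 msg_len;
    } foo_pkt_t;

    static void
    dissect_foo(tvbuff_t *tvb, packet_info *pinfo, proto_tree *tree)
    {
        foo_pkt_t *pkt;

        if (!pinfo->fd->flags.visited) {
            /* First pass: find, or create, the conversation and the
               state attached to it. */
            conversation_t *conv;
            foo_conv_t *state;

            conv = find_conversation(pinfo->fd->num, &pinfo->src,
                &pinfo->dst, pinfo->ptype, pinfo->srcport,
                pinfo->destport, 0);
            if (conv == NULL)
                conv = conversation_new(pinfo->fd->num, &pinfo->src,
                    &pinfo->dst, pinfo->ptype, pinfo->srcport,
                    pinfo->destport, 0);

            state = conversation_get_proto_data(conv, proto_foo);
            if (state == NULL) {
                state = se_alloc0(sizeof(foo_conv_t));
                conversation_add_proto_data(conv, proto_foo, state);
            }

            /* Update the running state from this chunk (protocol-
               specific; as a placeholder, remember a 4-byte length
               field if one is present). */
            if (tvb_length(tvb) >= 4)
                state->last_msg_len = tvb_get_ntohl(tvb, 0);

            /* Attach a snapshot of whatever this frame will need on
               later dissections. */
            pkt = se_alloc(sizeof(foo_pkt_t));
            pkt->msg_len = state->last_msg_len;
            p_add_proto_data(pinfo->fd, proto_foo, pkt);
        } else {
            /* Subsequent dissections - e.g. when the packet is
               selected in the packet list - just fetch the snapshot
               saved on the first pass; the per-conversation state may
               have moved on since then. */
            pkt = p_get_proto_data(pinfo->fd, proto_foo);
        }

        /* ...dissect the chunk using pkt->msg_len rather than the
           "current" conversation state; tree building omitted here. */
    }

The point of se_alloc() there is that seasonally-allocated memory lives as long as the capture file does, so the snapshot attached on the first pass is still around, unchanged, however many times the packet is re-dissected afterwards.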

> Is there a way to turn this off and always use the initial analysis?

No.

The *only* way we'd consider it would be if we stored the protocol trees in question in a file, so that they wouldn't take up virtual memory (they'd take up disk space, of course...).