Wireshark-dev: [Wireshark-dev] [PATCH] HTTP Chunked Body Desegmentation Problem
From: Mike Duigou <wireshark@xxxxxxxxxx>
Date: Tue, 26 Jun 2007 22:11:14 -0700
Enclosed is a patch that fixes problems with the desegmentation of
chunked HTTP message bodies.
The problem occurs with the current source because the TCP desegmentation code appears to take literally the HTTP dissector's request for "one more byte".
Using the enclosed trace, the HTTP dissector receives frame 10 with a tvbuff of 912 bytes. Since it could not fully dissect the body, it asks for one more byte.
The next call to the HTTP dissector is with frame 12 and a tvbuff of 913 bytes. This is exactly the 1 more that was previously requested. However, the rest of the data in frame 12 is not available in the tvbuff. The one additional byte is not enough to satisfy req_resp_hdrs_do_reassembly() so it requests two more bytes.
The request is ignored and the 2 bytes are never provided. The next time that the HTTP dissector is called it is with a new tvbuff containing the remainder of the bytes from frame 12 and eventually the bytes from frame 14 as well.
Changing the value of pinfo->desegment_len to DESEGMENT_ONE_MORE_SEGMENT rather than the current 1 or 2 seems to make everything work. (I'm kind of confused as to why the headers reassembly code already used DESEGMENT_ONE_MORE_SEGMENT and the body reassembly code did not.)
Incidentally, is the comment in epan/packet_info.h regarding DESEGMENT_ONE_MORE_SEGMENT not being fully implemented still true?
I find the current behaviour of the TCP desegmentation/reassembly quite odd. For non-TLV style protocols like HTTP I really want to be able to request additional data until I either get enough or decide to give up. I don't like that the TCP reassembly appears to require that I know exactly how many bytes I need to finish. I don't want to ask for too many, but I may need to ask for additional bytes more than once to reassemble a message.
Ideally I'd like to see somewhat simpler behaviour required of dissectors for desegmentation. I've previously made mistakes implementing desegmentation (and indeed encouraged others to repeat them by incorrectly describing the erroneous techniques in README.developer; sorry 'bout that), but it seems I'm not the only one who finds the current desegmentation scheme cryptic, error prone and overly complicated.
Wouldn't it be a lot simpler if every dissector just returned the number of bytes it had processed or wanted, and didn't have to muck with pinfo->desegment_offset and pinfo->desegment_len? Imagine instead a dissector return value of:
> 0 : I used this many bytes of the tvbuff to make one or more of my PDUs. Don't give me those bytes again. If there were unused bytes in the tvbuff, don't call me again until you have at least one more byte; i.e., assume that if you called me again with only the unprocessed bytes I'd return a negative result because the remaining buffer doesn't contain a complete PDU.
< 0 : I've used no bytes of the tvbuff. The buffer did not contain a complete PDU. I'll need at least ABS(result) more bytes before it's worth asking me to try dissecting again. Even when you call me again with the requested number of bytes I may still ask for more. The caller would continue to call again with additional bytes as available until one of the other results is returned. Being able to return the DESEGMENT_ONE_MORE_SEGMENT and DESEGMENT_UNTIL_FIN constants would be nice.
0 : I've used no bytes of the tvbuff. I don't know what to do with this tvbuff. Don't call me again with this tvbuff. Give it to someone else.
I know that this is not the way desegmentation works now for either classic or new dissectors but it sure would be a *LOT* simpler than what's currently required to do desegmentation. This interface could also reasonably act as the heuristic dissection interface with the "0" result being the "Not mine/not interested" response.
Mike
Attachment:
http_desegmentation.patch.22197.gz
Description: GNU Zip compressed data
Attachment:
jxta.chunked.http.pcap.gz
Description: GNU Zip compressed data