At 2005-03-05 19:15:07 -0800, gharris@xxxxxxxxx wrote:
>
> > guint32 l = tvb_get_ntohl( ... );
> >
> > if ( l > 0 && l < 1000000 )
> > ...
> > n += l;
> >
> > And that last increment should be done only if l > 0.
>
> If l is 0, "n += l" isn't a problem; the only problem is if l >= 1000000.
Let me rephrase.
The protocol message that code is decoding contains a four-byte signed
integer field. A positive value gives the number of data bytes that
follow, and the special value -1 indicates an SQL NULL (no bytes
follow in that case).
Thus, the problem with "n += l" is when l < 0, which it could be in my
original code, where l was declared a gint. Again, I do not understand
the motivation behind the wholesale change from gint to guint32 in my
original code.
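
To make that concrete, here is a rough sketch of how the field could
be handled (this is not the actual packet-pgsql.c code; l is kept
signed, and tvb and n follow the snippet quoted above):

    gint32 l = tvb_get_ntohl(tvb, n);   /* the field is signed on the wire */
    n += 4;
    if (l == -1) {
        /* SQL NULL: no data bytes follow */
    } else if (l > 0) {
        /* l data bytes follow; only in this case is advancing by l valid */
        n += l;
    }
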
> The right fix for that is to get rid of the test against 1000000, and
> to, instead, use "tvb_ensure_bytes_exist()" before dissecting
That test (and the resulting bug I reported) was introduced by this
change to fix a segfault ("XXX - if we don't limit l here, the
function will assert on very large values"):
<http://anonsvn.ethereal.com/viewcvs/viewcvs.py/trunk/epan/dissectors/packet-pgsql.c?rev=12833&r1=12800&r2=12833>
It seems to me, then, that the "right fix" would be to make
proto_tree_add_item not segfault on very large values, rather than
introducing arbitrary tests here and there to compensate for its
failings.
> "tvb_ensure_bytes_exist()" doesn't give a special meaning to a
> negative length, it just assumes that a negative length really
> means "very large positive length" and throws the appropriate
> exception.
There is no exception appropriate to the perfectly valid value of -1.
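
To illustrate (again a rough sketch, not the actual dissector code):
with l declared guint32, the -1 on the wire reads back as 0xFFFFFFFF,
so something like

    guint32 l = tvb_get_ntohl(tvb, n);      /* -1 becomes 0xFFFFFFFF */
    tvb_ensure_bytes_exist(tvb, n + 4, l);  /* treated as a huge length; throws */

would throw a bounds exception for a message that is perfectly
well-formed.
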
-- ams