On Sep 18, 2009, at 1:07 PM, Maynard, Chris wrote:
While doing this, I noticed that ICMP sequence #'s from a Linux PC
increase sequentially, as one would expect. For example, 1, 2, 3, ...
The ICMP sequence #'s from a Windows PC are a different matter, though.
As an example, Wireshark shows the following sequence in one of my
capture files: 7682 (0x1e02), 7938 (0x1f02), 8194 (0x2002), 8450
(0x2102), 8706 (0x2202), 8962 (0x2302). The problem is obviously one
of endian-ness. Quite surprisingly to me, it seems that Windows sends
ICMP echo request packets with multi-byte fields in little-endian
format.
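
For illustration, here's a small standalone program (not Wireshark code)
showing that reading those same captured values as little-endian recovers
a plain sequential counter; the sample values are the ones from the
capture above:

    #include <stdio.h>
    #include <stdint.h>

    /* Swap the two bytes of a 16-bit value. */
    static uint16_t swap16(uint16_t v)
    {
        return (uint16_t)((v >> 8) | (v << 8));
    }

    int main(void)
    {
        /* Sequence numbers as Wireshark displayed them (read as big-endian). */
        const uint16_t seq[] = { 0x1e02, 0x1f02, 0x2002, 0x2102, 0x2202, 0x2302 };
        size_t i;

        for (i = 0; i < sizeof(seq) / sizeof(seq[0]); i++)
            printf("big-endian: %5u   byte-swapped: %3u\n", seq[i], swap16(seq[i]));
        return 0;
    }

That prints 7682/542, 7938/543, 8194/544, and so on: a simple counter
once the bytes are swapped.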
RFC 792 says
Sequence Number
If code = 0, a sequence number to aid in matching echos and replies,
may be zero.
and
The identifier and sequence number may be used by the echo sender to
aid in matching the replies with the echo requests. For example, the
identifier might be used like a port in TCP or UDP to identify a
session, and the sequence number might be incremented on each echo
request sent. The echoer returns these same values in the echo reply.
which just says "might".
RFC 1122 (Requirements for Internet Hosts -- Communication Layers)
says nothing about the sequence number field.
So, while it's a bit surprising, I wouldn't call it completely wrong.
I guess it's impossible or nearly so to heuristically figure out if the
format is big or little endian. Would adding a preference to specify
the endian-ness be a reasonable solution, with big-endian being the
obvious default?
That might be reasonable. It's really per-sending-host, but we don't
yet have any mechanism for specifying per-conversation properties.
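
If a preference does turn out to be the way to go, here's a rough sketch of
what the registration might look like in packet-icmp.c (assuming it doesn't
already register a preference module). The preference name, the strings and
the favor_le_seq variable are made up for illustration; prefs_register_protocol(),
prefs_register_bool_preference(), tvb_get_ntohs() and tvb_get_letohs() are the
existing APIs:

    /* Sketch only; "seq_le", the strings and favor_le_seq are hypothetical. */
    static gboolean favor_le_seq = FALSE;

    void
    proto_register_icmp(void)
    {
        module_t *icmp_module;

        /* ... existing registration of proto_icmp, header fields, etc. ... */

        icmp_module = prefs_register_protocol(proto_icmp, NULL);
        prefs_register_bool_preference(icmp_module, "seq_le",
            "Interpret Identifier/Sequence Number as little-endian",
            "Some senders (e.g. Windows) appear to put these fields on the "
            "wire in little-endian byte order",
            &favor_le_seq);
    }

    /* and in the dissector, something along the lines of:
     *     seq = favor_le_seq ? tvb_get_letohs(tvb, 6) : tvb_get_ntohs(tvb, 6);
     * (6 being the offset of the Sequence Number in an echo request/reply)
     */

That still wouldn't address the per-sending-host aspect, of course; a global
preference would apply to every ICMP packet in the capture.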