The 1st packet in the capture file has a timestamp of 08:36:21.416346, while the
170th packet has a timestamp of 08:36:22.413486, so the interval between them is
2.860 ms short of a full second. I think this is probably why some of your
expected numbers don't match. There is almost certainly some rounding/truncation
going on with the timestamps, so it might appear that the capture duration is
exactly 1 second, but it isn't. The capture duration is actually only 997.14 ms,
so 170 packets / 0.99714 s = ~170.488, which is what Wireshark shows for average
packets/second, and (170 * 1514 bytes) / 0.99714 s = ~258118.218 bytes/second.
Wireshark shows 258118.236, which is slightly higher; I'm not exactly sure why,
but again, there's probably some rounding/truncation going on.
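In case it helps, here is the same arithmetic as a small Python snippet (just an
illustration of the math above, using the two packet timestamps and the
1514-byte frame size from your capture):

    from datetime import datetime

    # Timestamps of the 1st and 170th packets, as quoted above
    first = datetime.strptime("08:36:21.416346", "%H:%M:%S.%f")
    last = datetime.strptime("08:36:22.413486", "%H:%M:%S.%f")
    duration = (last - first).total_seconds()    # 0.99714 s, not a full second

    packets = 170
    frame_len = 1514                             # full Ethernet frame length

    print(duration)                              # 0.99714
    print(packets / duration)                    # ~170.488 packets/s
    print(packets * frame_len / duration)        # ~258118.2 bytes/s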
I was perfectly aware of that, and I understand that the transfer rate shown in
the summary is slightly higher than it actually was because of that ~2.9 ms
difference.
As for the throughput, I see ~250 KB/s for the first ~0.125 s and then ~233 KB/s
for the remaining ~0.875 s, so the expected average is: (250 KB/s * 0.125 s/1 s) +
(233 KB/s * 0.875 s/1 s) = ~235 KB/s. So 235 KB/s is the average TCP throughput
over the ~1 second duration. The difference between the average bytes/sec and the
TCP throughput is that the TCP throughput only counts the TCP segment bytes, not
any bytes belonging to the Ethernet, IP or TCP headers. This means you're really
only transferring 1460 bytes per packet, not 1514. So for this calculation you
can assume that there are no headers present and that each packet carries only
1460 bytes. In that case your expected average bytes/second would be
(170 * 1460) / 0.99714 s = ~248 KB/s. But of course this doesn't match the graph,
so I guess this is where the confusion lies.
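For completeness, the same back-of-the-envelope math in Python (my own sketch;
the ~250/233 KB/s plateau values and the ~0.125 s split are simply read off the
throughput graph as described above):

    duration = 0.99714   # capture duration in seconds
    packets = 170
    payload = 1460       # 1514 - 14 (Ethernet) - 20 (IP) - 20 (TCP) header bytes

    # Weighted average of the two plateaus visible in the TCP throughput graph
    graph_avg_kb = 250 * 0.125 + 233 * 0.875     # ~235.1 KB/s

    # Naive expectation from payload bytes over the whole capture duration
    payload_rate = packets * payload / duration  # ~248912 bytes/s, i.e. ~248 KB/s

    print(graph_avg_kb, payload_rate)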
Exactly. Thank you for writing a comprehensive explanation of my chaotic
equations from the original post :) The question remains open, though.
--
Best regards,
Michal Kepien