I'm trying to teach myself how to use the '-z io,stat' option in tshark.
I imagined that the following would tell me how many seconds the trace covers:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta"
=============================================
| IO Statistics                             |
|                                           |
| Interval size: 11.1 secs (dur)            |
| Col 1: Frames and bytes                   |
|     2: SUM(tcp.time_delta)tcp.time_delta  |
|-------------------------------------------|
|              |1               |2          |
| Interval     | Frames | Bytes |    SUM    |
|-------------------------------------------|
|  0.0 <> 11.1 |    216 | 45453 | 23.817352 |
=============================================
capinfos sample-http.pcapng
File name: sample-http.pcapng
[...]
File size: 53 kB
Data size: 45 kB
Capture duration: 11 seconds
[...]
But apparently not: 23.817352 does not equal 11 seconds.
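If all I wanted were the duration, I assume something like this would report it directly, treating MAX(frame.time_relative) as the relative timestamp of the last frame (I haven't confirmed that reading of MAX):
tshark -r sample-http.pcapng -qz "io,stat,0,MAX(frame.time_relative)frame.time_relative"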
Sample capture: https://vishnu.fhcrc.org/wireshark/sample-http.pcapng
I'm using Wireshark 1.10.0rc2.
What am I not understanding about this '-z io,stat' feature?
--sk
Stuart Kendrick
FHCRC
P.S.
My actual use case will be more complex than this. This trace was taken next to the Client.
I want to calculate how much time the Client spent thinking:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta and tcp.dstport==80"
and how much time the Network + Server spent thinking:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta and tcp.srcport==80"
The idea is to give myself insight into how much of the total transaction time the Client contributes versus the Network + Server.
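Assuming io,stat will accept two calc columns in a single pass, I imagine I could even combine these into one run:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -qz "io,stat,0,SUM(tcp.time_delta)tcp.time_delta and tcp.dstport==80,SUM(tcp.time_delta)tcp.time_delta and tcp.srcport==80"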
But I figure that if I cannot even persuade tshark to sum every value in the DeltaT column, then I'm not ready to progress to the real-world use case.
P.P.S.
The AVG function gives me a plausible-looking answer:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -qz "io,stat,0,AVG(tcp.time_delta)tcp.time_delta"
=============================================
| IO Statistics                             |
|                                           |
| Interval size: 11.1 secs (dur)            |
| Col 1: Frames and bytes                   |
|     2: AVG(tcp.time_delta)tcp.time_delta  |
|-------------------------------------------|
|              |1               |2          |
| Interval     | Frames | Bytes |    AVG    |
|-------------------------------------------|
|  0.0 <> 11.1 |    473 | 349155|  0.050354 |
=============================================
But when I sanity-check this calculation in Excel, I get a different result: 0.023518 s.
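For what it's worth, I believe the same per-frame values can be dumped with -T fields (that's how I fed the spreadsheet) and, I assume, averaged directly with awk; deltas.txt is just a scratch file name:
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -T fields -e tcp.time_delta > deltas.txt
tshark -r sample-http.pcapng -o tcp.calculate_timestamps:TRUE -T fields -e tcp.time_delta | awk '$1 != "" { s += $1; n++ } END { if (n) print s/n }'
I believe -T fields prints an empty line for frames that carry no tcp.time_delta, hence the $1 != "" guard.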