I have a trace of a client loading a large file via HTTP from a
remote Web server, captured at the client. Takes ~7.5s.
I was imagining that I could calculate how much time the client
contributed to the transaction and compare that to how much time
the server + network contributed. But I'm fumbling the calculation
somehow ... I get the same result (~7.5s) regardless of whether I
filter on client-sourced frames or on server-sourced frames. I
would have expected the 7.5s to be split between the two (~0.5s
for tcp.dstport==80 and ~7s for tcp.srcport==80). Tips?
C:\Temp> tshark -nlr client.pcap -o tcp.calculate_timestamps:TRUE -R "(tcp.dstport==80)" -qz io,stat,600,"SUM(tcp.time_delta)tcp.time_delta"
============================================
| IO Statistics                            |
|                                          |
| Interval size: 7.572 secs (dur)          |
| Col 1: SUM(tcp.time_delta)tcp.time_delta |
|------------------------------------------|
|                  |1                      |
| Interval         |          SUM          |
|------------------------------------------|
| 0.000 <> 7.572   |             7.571759  |
============================================
C:\Temp>tshark -nlr client.pcap -o tcp.calculate_timestamps:TRUE -R "(tcp.srcport==80)" -qz io,stat,600,"SUM(tcp.time_delta)tcp.time_delta"
============================================
| IO Statistics                            |
|                                          |
| Interval size: 7.572 secs (dur)          |
| Col 1: SUM(tcp.time_delta)tcp.time_delta |
|------------------------------------------|
|                  |1                      |
| Interval         |          SUM          |
|------------------------------------------|
| 0.000 <> 7.572   |             7.571759  |
============================================
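
One variant I haven't tried yet (so this is only speculation on my
part): drop the -R read filter, keep every frame in the pass, and
put the direction test into the io,stat column filters instead, in
case the read filter interacts with the per-stream timestamp
calculation differently than I expect. Something like (untested):

C:\Temp>tshark -nlr client.pcap -o tcp.calculate_timestamps:TRUE -qz io,stat,600,"SUM(tcp.time_delta)tcp.time_delta && tcp.dstport==80","SUM(tcp.time_delta)tcp.time_delta && tcp.srcport==80"

If I'm reading the io,stat syntax right, that should produce a
single table with the two sums side by side.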
--sk
Stuart Kendrick
FHCRC