Ethereal-users: [Ethereal-users] "TCP Window Full" vs. "TCP ZeroWindow"
Note: This archive is from the project's previous web site, ethereal.com. This list is no longer active.
Hi Folks,
I am doing some iSCSI testing over a
LAN segment and analysing the traffic captured off a SPAN port using Ethereal
0.10.12.
In the capture, about 0.92% of the
segments in the iSCSI initiator-to-target traffic flow are marked "TCP Window
Full", and another 0.42% of the segments in the target-to-initiator
traffic flow are marked "TCP ZeroWindow".
I have the following queries:
1. I understand that as more and more unread
TCP segments accumulate in the receiver's TCP buffer, the advertised
window shrinks, eventually reaching zero and producing segments
with a window size equal to zero. Increasing the target-side TCP window size,
or tuning the iSCSI disk array's read performance (so that SCSI requests
are processed quickly, freeing TCP buffer space sooner),
should reduce the number of TCP ZeroWindow occurrences. This occurs in the target-
to-initiator flow and is understandable. But what does "TCP Window
Full" mean, which occurs in packets flowing in the opposite direction
(i.e. initiator to target)? "Window full" should mean that the sender has
sent a full window of data without giving the receiver sufficient time
to free up some buffer space. If this is correct, it means that my initiator
is sending data at such a rate that the target's window fills up even before
the target's ACKs (carrying updated window values) come back. Could you please
confirm whether my understanding is correct?
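To make sure I understand the mechanism, I put together a small Python sketch (loopback only, arbitrary small buffer sizes, nothing to do with the actual iSCSI setup) that reproduces the condition: the receiver never reads, so its advertised window drains to zero and the sender's non-blocking send() eventually stalls. Capturing this exchange shows the same "TCP ZeroWindow" and "TCP Window Full" flags:

```python
import socket

# Receiver: small receive buffer, and it never calls recv(),
# so its advertised window shrinks and finally hits zero.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # set before listen()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# Sender: small send buffer so the stall happens quickly.
snd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
snd.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
snd.connect(srv.getsockname())
conn, _ = srv.accept()

snd.setblocking(False)
sent = 0
try:
    while True:
        sent += snd.send(b"\x00" * 4096)
except BlockingIOError:
    # The local send buffer is full because the peer stopped
    # advertising window; the last data segments the sender emitted
    # are the ones a capture would flag as "TCP Window Full", and the
    # peer's window updates would show "TCP ZeroWindow".
    pass

print("bytes queued before the sender stalled:", sent)
snd.close()
conn.close()
srv.close()
```

Running this under a capture on the loopback interface shows the zero-window advertisement from the receiver followed by the sender going quiet, which matches what I see between initiator and target.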
2. I have the following settings on
my Linux box:
net.ipv4.tcp_rmem=1048576 1048576 1048576
net.ipv4.tcp_wmem=1048576 1048576 1048576
net.ipv4.tcp_mem=1048576 1048576 1048576
net.core.rmem_default=1048576
net.core.wmem_default=1048576
i.e. I want to make the window size
1 MB, yet the advertised window is shown as 49,051. Any idea why this
is so?
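For what it's worth, here is a minimal Python check I can run to see what receive buffer a socket actually ends up with. The sysctls above only set defaults and autotuning limits; an application's own setsockopt() call overrides them and is itself silently capped at net.core.rmem_max, which I have not set (on Linux, getsockopt reports back double the value actually granted, to account for kernel bookkeeping overhead):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Default buffer for a fresh socket: should reflect
# net.core.rmem_default / net.ipv4.tcp_rmem on Linux.
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("default SO_RCVBUF:", default_rcvbuf)

# Explicitly request 1 MB; the kernel clamps the request to
# net.core.rmem_max, so the reported value may be smaller than asked.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1048576)
granted_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print("after requesting 1 MB:", granted_rcvbuf)

s.close()
```

If the second number comes back well under 2 MB, the buffer request is being clamped, which would keep the advertised window small regardless of the tcp_rmem settings.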
I would appreciate some guidance here. Thanks.
Good Luck!
Prasad