TCP Troubleshooting with Packet Captures: When Wireshark is Your Only Tool
- robertbmacdonald
- Dec 3, 2025
- 6 min read
You're staring at the same problem: a server that should be sending 4Gbps is only pushing 120Mbps.
But this time, you don't have server access. Maybe it's a remote site. Maybe it's vendor equipment. Maybe it's a colleague handing you a capture and asking for help. All you have are packet captures from both endpoints.
The challenge: extract the same diagnostic insights from PCAPs that you'd get from ss and nstat on Linux. This is practical troubleshooting with Wireshark. No theory. No TCP 101. Just the filters, workflows, and analysis techniques you need when a packet capture is your only option.
The Troubleshooting Mindset
Like the Linux TCP article, we're focused on answering four key questions:
What transfer rate is TCP actually achieving, and why?
Is one endpoint limiting throughput? Which one?
Is TCP adapting correctly to network conditions?
Are packets being lost, and how is TCP recovering?
The difference: instead of using ss and nstat to inspect TCP's internal state, we're extracting the same insights from observed behavior in packet captures. It takes more analysis effort, but the answers are there.
Capture the Handshake or Troubleshoot Blind
Before diving into analysis, understand what you need in your capture. The TCP three-way handshake (SYN, SYN-ACK, ACK) contains critical socket capabilities that affect everything that follows. To find handshakes quickly:
tcp.flags.syn==1
Window Scaling - Negotiates a multiplier for all subsequent window values. A scale factor of 7 means a window value of 1000 represents 128,000 bytes (1000 × 2^7).
SACK Permitted - Indicates if Selective Acknowledgments are available for efficient loss recovery.
Maximum Segment Size (MSS) - Determines segment sizes. Mismatched or suboptimal values indicate PMTU problems.
Timestamps - Enables accurate RTT measurement for every acknowledged packet and protects against wrapped sequence numbers on high-speed links.
If window scaling wasn't negotiated, you're stuck with a 64KB maximum window regardless of the receiver's actual buffer size. If SACK wasn't negotiated, you're going to see slower loss recovery. These capabilities matter.
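The scaling arithmetic is worth spelling out, since Wireshark only shows you the raw header value unless it saw the handshake. A minimal sketch (function name mine, not a Wireshark API): the effective window is the raw 16-bit header value shifted left by the negotiated scale factor.

```python
def effective_window(raw_window: int, scale_factor: int) -> int:
    """Actual receive window in bytes: the raw 16-bit header value
    shifted by the scale factor negotiated in the SYN/SYN-ACK (RFC 7323)."""
    return raw_window << scale_factor

# The example from the text: raw value 1000 with scale factor 7
print(effective_window(1000, 7))   # 1000 * 2^7 = 128000

# Without window scaling, the header field caps out at 64KB - 1
print(effective_window(65535, 0))  # 65535
```

This is also why a capture that misses the handshake is hard to read: the scale factor isn't repeated in later packets, so Wireshark can't tell whether a window of 1000 means 1000 bytes or 128,000.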
Wireshark's Sequence Number Analysis
Wireshark's tcp.analysis feature does all the tedious sequence number tracking that would otherwise require a spreadsheet and a lot of coffee. It maintains a complete TCP state machine for every connection, tracking relative sequence numbers, identifying retransmissions, counting duplicate ACKs, and flagging anomalies. When you're looking at millions of packets, having Wireshark automatically flag the interesting ones is the difference between finding the problem in minutes versus hours.
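The core of that bookkeeping can be sketched in a few lines. A toy detector (names and segment values hypothetical) that flags a segment whose byte range falls entirely below data already seen - roughly what tcp.analysis.retransmission does, minus keep-alives, spurious retransmissions, and the many edge cases Wireshark actually handles:

```python
def find_retransmissions(segments):
    """Flag segments whose byte range was already covered - a rough sketch
    of the per-stream sequence-number tracking Wireshark automates.
    Each segment is a (seq, payload_len) tuple for one direction."""
    highest_seen = 0        # highest sequence byte observed so far
    retransmissions = []
    for i, (seq, length) in enumerate(segments):
        if seq + length <= highest_seen:
            retransmissions.append(i)   # entirely below data already seen
        highest_seen = max(highest_seen, seq + length)
    return retransmissions

# Hypothetical stream: the segment at seq 1448 is sent twice
stream = [(0, 1448), (1448, 1448), (2896, 1448), (1448, 1448)]
print(find_retransmissions(stream))   # [3]
```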
Key filters:
tcp.analysis.retransmission - Packets that were retransmitted - the sender had to send them again.
tcp.analysis.fast_retransmission - Retransmissions triggered by duplicate ACKs rather than a timeout - much faster than waiting for the RTO to expire.
tcp.analysis.duplicate_ack - Receiver sent the same ACK multiple times, signaling that data arrived out of order (a segment is likely missing).
tcp.analysis.window_full - Sender filled the receiver's advertised window and can't send more - smoking gun for receiver-side bottlenecks.
tcp.analysis.zero_window - Receiver advertises zero bytes available - the application isn't reading from the socket buffer. Another smoking gun for receiver-side bottlenecks.
Combine filters to narrow down problems:
tcp.analysis.retransmission && !tcp.analysis.window_full - Retransmissions when the window isn't full - network path loss, not a receiver bottleneck.
The Window Field
Every TCP header contains a window field showing the receiver's available buffer space (multiplied by the scaling factor from the handshake). Plotted over time, this reveals application-side bottlenecks.
Visualize the window: Statistics → TCP Stream Graphs → Window Scaling
Common Patterns and What They Tell You
Pattern: Receiver Bottleneck
Observe:
tcp.analysis.window_full flags throughout the connection
tcp.analysis.zero_window when receiver's buffer is completely exhausted
Small or zero window advertisements in TCP headers
Window plot shows consistently low values or drops to zero
tcp.analysis.window_full || tcp.analysis.zero_window
Interpretation: Receiver can't process data fast enough. Application isn't reading from socket buffer, or system resources (CPU, disk I/O) are constraining it. If windows drop to zero, application has completely stopped reading. Sender is being throttled by receiver's advertised window.
Pattern: Network Path Loss
Observe:
tcp.analysis.retransmission flags present throughout
Windows remain wide open (no window_full flags)
Fast retransmissions (duplicate-ACK triggered, with SACK speeding recovery if it was negotiated)
tcp.analysis.retransmission && !tcp.analysis.window_full
Interpretation: Network path is dropping packets, but receiver has buffer space and is keeping up. Root cause is in the network - congestion on intermediate links, lossy connection, duplex mismatch, or middlebox issues. TCP is recovering from loss, but available window isn't being fully utilized.
Pattern: Receiver Bottleneck Causing Network Drops
Observe:
Both tcp.analysis.retransmission and tcp.analysis.window_full flags (on different packets)
Window plot shows small windows with packet loss occurring
Retransmissions coincide with periods of low window availability
tcp.analysis.retransmission || tcp.analysis.window_full
Interpretation: Receiver is slow, causing queue buildup in the network path. Packets accumulate in buffers somewhere between endpoints because receiver can't drain them fast enough. Eventually buffers overflow and packets drop. Primary issue is receiver-side, but it manifests as network loss.
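The three loss-versus-window patterns above reduce to a small decision table. A triage sketch (my naming, deliberately oversimplified - real captures mix patterns over time, so run it per time slice rather than per connection):

```python
def classify(retrans: bool, window_full: bool) -> str:
    """Rough triage from the two Wireshark flags discussed above:
    did retransmissions occur, and did the sender ever fill the window?"""
    if retrans and window_full:
        return "receiver bottleneck causing network drops"
    if retrans:
        return "network path loss"
    if window_full:
        return "receiver bottleneck"
    return "no constraint visible from these two flags"

print(classify(retrans=True, window_full=False))   # network path loss
```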
Pattern: Timeout-Based Recovery
Observe:
tcp.analysis.retransmission flags without fast_retransmission
Long gaps (seconds) between original transmission and retransmit
RTO expiration rather than duplicate ACK triggering
tcp.analysis.retransmission && !tcp.analysis.fast_retransmission
Interpretation: Severe problem. Sender waited for full RTO before retransmitting. Either SACK wasn't negotiated, ACKs are being lost, or loss is so severe that duplicate ACKs aren't arriving. Time between original and retransmit reveals RTO value. Frequent timeout-based retransmits indicate serious path or endpoint issues.
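The RTO the sender waited for is computed per RFC 6298 from smoothed RTT measurements. A sketch of that calculation, assuming negligible clock granularity and the RFC's 1-second minimum (Linux in practice floors the RTO around 200 ms):

```python
def update_rto(rtt_samples, min_rto=1.0):
    """RFC 6298 retransmission timeout from a series of RTT samples (seconds).
    Returns (srtt, rttvar, rto). Clock granularity G assumed negligible."""
    srtt = rttvar = None
    for r in rtt_samples:
        if srtt is None:
            srtt, rttvar = r, r / 2                     # first measurement
        else:
            rttvar = 0.75 * rttvar + 0.25 * abs(srtt - r)   # beta = 1/4
            srtt = 0.875 * srtt + 0.125 * r                 # alpha = 1/8
    rto = max(min_rto, srtt + 4 * rttvar)
    return srtt, rttvar, rto

# A steady 50 ms path: the computed value sits well under the 1 s floor,
# so the floor dominates
srtt, rttvar, rto = update_rto([0.050, 0.052, 0.049, 0.051])
print(round(rto, 3))   # 1.0
```

This is why a timeout retransmit on a low-latency LAN still costs on the order of a second (or hundreds of milliseconds on Linux) - an eternity compared to duplicate-ACK recovery, which completes in roughly one RTT.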
Pattern: Small Windows Despite No Congestion
Observe:
Windows stay small throughout connection
No retransmissions or congestion indicators
Throughput limited despite no packet loss
Check handshake: tcp.flags.syn==1 to see negotiated capabilities
Interpretation: If window scaling wasn't negotiated in the handshake, you're stuck with a 64KB maximum. On high-bandwidth or high-latency links where BDP exceeds 64KB, TCP can't fill the pipe. If scaling WAS negotiated but windows stay small, the application configured small socket buffers (possibly intentional to limit memory usage, or a misconfiguration).
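Whether the 64KB ceiling explains your throughput is a quick bandwidth-delay product check. A sketch with hypothetical path numbers:

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

# Hypothetical path: 1 Gbps with 30 ms RTT
bdp = bdp_bytes(1e9, 0.030)
print(round(bdp))      # bytes in flight needed to keep the link busy
print(bdp > 65535)     # True: without window scaling, TCP can't fill this pipe
```

Here the path needs about 3.75 MB in flight, so an unscaled 64KB window caps the connection at a small fraction of link capacity.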
Pattern: Measuring Throughput
Observe:
Statistics → TCP Stream Graphs → Throughput
Bytes per second over time for the stream
Compare measured throughput to theoretical maximum: Window Size / RTT
Interpretation: If measured throughput is far below theoretical max based on BDP, you have a constraint. Use window analysis and retransmission patterns to identify which constraint: small windows (receiver), network loss, or sender limitations.
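The theoretical ceiling is a one-liner: one window's worth of data per round trip. A sketch with hypothetical values - if the measured curve from the Throughput graph sits far below this number, something else is the constraint:

```python
def max_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Window-limited throughput ceiling: at most one full window per RTT."""
    return window_bytes * 8 / rtt_s

# Hypothetical: an unscaled 64KB window on a 50 ms path
ceiling = max_throughput_bps(65535, 0.050)
print(f"{ceiling / 1e6:.1f} Mbps")   # ~10.5 Mbps, regardless of link speed
```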
Pattern: Measuring RTT
Observe:
Statistics → TCP Stream Graphs → Round Trip Time
Baseline RTT over time
RTT spikes or steadily increasing RTT
Interpretation: Baseline RTT should roughly match ICMP ping times. RTT spikes indicate transient congestion or path changes. Steadily increasing RTT suggests growing congestion. If TCP RTT is significantly higher than ICMP ping, you may have application-layer delays (receiver processing time) rather than network latency.
Packet Capture Limitations and Caveats
Packet captures work regardless of operating system or device type, but they have important limitations compared to endpoint tools.
What you can't see:
Congestion Window (cwnd)Â - Sender-side state, not transmitted in TCP headers. You can infer it by watching data sent per RTT, but can't measure directly. This is where Linux's ss -i shines.
Socket Buffer Sizes - Application-configured buffer sizes aren't visible. You see effects (small windows) but not configured sizes.
Application Behavior - Why isn't the application reading or writing? Captures show symptoms, not application-level causes.
Common gotchas:
Capture point matters - Capturing at a firewall includes firewall processing time. Capturing at endpoints may overwhelm the endpoint and drop packets. Capturing on different sides of NAT or load balancers can show confusing sequence number jumps.
TCP checksum offload - With offload enabled, the kernel doesn't calculate checksums. When capturing on the sending machine, Wireshark shows "bad checksum" errors. This is normal behavior - the kernel has no visibility into checksum values. Disable validation: Edit → Preferences → Protocols → TCP → Validate the TCP checksum if possible (uncheck).
Packet capture timestamp accuracy - The timestamp assigned to each captured frame relies on the host's system clock. For microsecond-level accuracy or better, you need specialized capture hardware with GPS or PTP time synchronization.
When captures aren't enough:
When you've exhausted packet analysis, you need endpoint access. This is when you'd use ss -i on Linux for cwnd, kernel-measured RTT, bytes in flight, and retransmit counters. Or on Windows... well, we covered that gap last time.
Packet captures require more analysis effort than endpoint tools. With ss, you see cwnd and retransmit counts in one command. With captures, you're inferring behavior from observed traffic. The workflow is: observe behavior in the capture, form hypotheses, then correlate with endpoint data if available. Captures show symptoms. Endpoint tools show internal TCP state.
This is why the last article emphasized endpoint tools first - faster diagnosis when you have access. When you don't, Wireshark and these techniques will get you to the answer. It just takes more work.
References
TCP Protocol Specifications (IETF RFCs)
RFC 793 - Transmission Control Protocol Postel, J., September 1981 https://www.rfc-editor.org/rfc/rfc793.html Original TCP specification, including the three-way handshake mechanism and MSS option.
RFC 2018 - TCP Selective Acknowledgment Options Mathis, M., Mahdavi, J., Floyd, S., and A. Romanow, October 1996 https://datatracker.ietf.org/doc/html/rfc2018 Defines SACK-permitted option negotiated in SYN segments for efficient loss recovery.
RFC 7323 - TCP Extensions for High Performance Borman, D., Braden, B., Jacobson, V., and R. Scheffenegger, September 2014 https://www.rfc-editor.org/rfc/rfc7323.html Defines Window Scale option (Section 2) and TCP Timestamps option (Section 3). Section 4 specifies the Round-Trip Time Measurement (RTTM) mechanism using timestamps.
RFC 6298 - Computing TCP's Retransmission Timer Paxson, V., Allman, M., Chu, J., and M. Sargent, June 2011 https://www.rfc-editor.org/rfc/rfc6298.html Defines the RTO calculation algorithm used by TCP implementations.
Wireshark Official Documentation
Wireshark User's Guide - Section 7.5: TCP Analysis https://www.wireshark.org/docs/wsug_html_chunked/ChAdvTCPAnalysis.html Official documentation of Wireshark's TCP analysis features, including tcp.analysis filters, sequence number tracking, and automated detection of retransmissions, duplicate ACKs, window issues, and zero windows.
Wireshark Display Filter Reference - TCP https://www.wireshark.org/docs/dfref/t/tcp.html Complete reference of all TCP display filter fields available in Wireshark.
Wireshark User's Guide - Main Documentation https://www.wireshark.org/docs/wsug_html/ Comprehensive official user documentation for Wireshark, including capture techniques, display filters, and statistics features.