MTU interview questions
Top MTU frequently asked interview questions
I'm trying to reverse engineer an application, and I need help understanding how TCP window size works. My MTU is 1460.
My application transfers a file using TCP from point A to B. I know the following:
- The file is split into segments of size 8K
- Each segment is compressed
- Then each segment is sent to point B over TCP. These segments can be around 148 bytes each for a text file, and around 6000 bytes for a PDF.
For a text file, am I supposed to see the 148-byte segments appended to one another to form one large TCP stream, which is then split according to the window size?
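To make the question concrete, here is a rough C sketch (the names are made up, and I am only guessing at what the application does) of several small compressed chunks being written to one TCP socket:

#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hypothetical sketch: each compressed segment gets its own send() call,
 * but TCP treats all of them as a single byte stream; the packets on the
 * wire are cut by the MSS, not by these 148-byte writes. */
static void send_chunks(int sock, const char *chunks[], const size_t lens[], int n)
{
    for (int i = 0; i < n; i++) {
        if (send(sock, chunks[i], lens[i], 0) < 0)
            break;  /* error handling omitted in this sketch */
    }
}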
Any help is appreciated.
Source: (StackOverflow)
I need to know what the largest UDP packet I can send to another computer is without fragmentation.
This size is commonly known as the MTU (Maximum Transmission Unit). Between two computers there may be many routers and modems, each with a different MTU.
I read that the TCP implementation in Windows automatically finds the maximum MTU of a path.
I was also experimenting, and I found that the maximum MTU from my computer to a server was 57712 bytes plus the header; anything above that was discarded. My computer is on a LAN; isn't the MTU supposed to be around 1500 bytes?
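To illustrate what I was experimenting with, here is a minimal Linux-flavoured sketch (the address and port are placeholders): with DF requested on the UDP socket, send() fails locally with EMSGSIZE once the datagram is bigger than what the kernel believes fits on the path.

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int mode = IP_PMTUDISC_DO;  /* always set DF; never fragment locally */
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &mode, sizeof(mode));

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder address */
    connect(fd, (struct sockaddr *)&dst, sizeof(dst));

    char payload[2000] = {0};
    for (int len = 1600; len >= 1200; len -= 100) {
        if (send(fd, payload, (size_t)len, 0) < 0 && errno == EMSGSIZE)
            printf("%d bytes: rejected locally, exceeds the known path MTU\n", len);
        else
            printf("%d bytes: accepted by the local stack\n", len);
    }
    close(fd);
    return 0;
}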
Source: (StackOverflow)
ifconfig 1.2.3.4 mtu 1492
Will this set the MTU to 1492 for incoming packets, outgoing packets, or both? I think it is only for incoming.
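For context, here is a hedged C sketch of the same per-interface value that ifconfig is touching, read and set via Linux ioctls (the interface name is a placeholder, and setting the value requires root):

#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);      /* any socket works for interface ioctls */
    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* placeholder interface name */

    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
        printf("current MTU: %d\n", ifr.ifr_mtu);

    ifr.ifr_mtu = 1492;                           /* same value as the ifconfig command above */
    if (ioctl(fd, SIOCSIFMTU, &ifr) < 0)
        perror("SIOCSIFMTU (needs root)");

    close(fd);
    return 0;
}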
Source: (StackOverflow)
When using pcap_open_live to sniff from an interface, I have seen a lot of examples using various numbers as the SNAPLEN value, ranging from BUFSIZ (from <stdio.h>) to "magic numbers".
Wouldn't it make more sense to set the SNAPLEN to the MTU of the interface we are capturing from? That way we could fit more packets at once in the pcap buffer. Is it safe to assume that the MRU is equal to the MTU?
Otherwise, is there a non-exotic way to choose the SNAPLEN value?
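For reference, here is a minimal sketch of the call I mean (the device name is a placeholder); many examples simply pass a snaplen of 65535, which is large enough to hold a whole packet on common link types, rather than deriving it from the MTU:

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* 65535 as snaplen: big enough for a full packet on common link types */
    pcap_t *handle = pcap_open_live("eth0", 65535, 1 /* promisc */, 1000 /* read timeout, ms */, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live failed: %s\n", errbuf);
        return 1;
    }
    pcap_close(handle);
    return 0;
}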
Thanks
Source: (StackOverflow)
I am developing an application in C#, using the server-client model, where the server sends a byte array with a bitmap to the client, the client loads it into the screen, sends an "OK" to the server, and the server sends another image, and so on.
The length of the image buffer varies, usually between 60 KB and 90 KB, but I've seen that it doesn't matter. If I put the client and the server on the same computer, using localhost, everything works fine. The server does BeginSend, the client does EndReceive, and the whole buffer is transmitted.
However, I am now testing this in a wireless network and what happens is:
- The server sends the image.
- The callback function data_received on the client is called, but there are only 1460 bytes to read (the MTU; why? shouldn't that only matter for UDP?)
- The callback function data_received on the client is called again, now with the rest of the buffer (whether that is 1000 bytes or 100 KB)...
It's always like this: a first packet of 1460 bytes is received, and then a second packet contains the rest.
I can work around this by joining the two byte arrays received, but it doesn't seem right. I'm not even sure why this is happening. Is it some restriction of the network? Why doesn't C# wait for the whole buffer to be transmitted? I mean, it's TCP, I shouldn't have to worry about this, right?
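My current workaround is essentially the loop below, sketched in C rather than C# (the framing that supplies the expected length is hypothetical): keep reading until the whole message has arrived, because TCP hands back whatever bytes happen to be available.

#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Sketch of the workaround: keep reading until 'expected' bytes have arrived.
 * 'expected' would come from some framing such as a length prefix (hypothetical). */
static size_t recv_all(int sock, char *buf, size_t expected)
{
    size_t got = 0;
    while (got < expected) {
        ssize_t n = recv(sock, buf + got, expected - got, 0);
        if (n <= 0)          /* error or connection closed */
            break;
        got += (size_t)n;    /* the first chunk may well be 1460 bytes */
    }
    return got;
}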
Anyway, any help would be great!
Cheers
Source: (StackOverflow)
I'm doing some experiments with path MTU discovery in Linux. As far as I understand from RFC 1191, if a router receives a packet with the DF bit set and the packet can't be sent to the next hop without fragmentation, the router should drop the packet and send an ICMP message to the original sender.
I've created several VMs on my computer and linked them in the following manner:
VM1 (192.168.100.2) -- R1 (192.168.100.1, 192.168.150.1) -- R2 (192.168.150.2, 192.168.200.1) -- VM2 (192.168.200.2)
R1 and R2 are virtual machines with Linux installed; each has two network interfaces and static routes. Pinging VM2 from VM1 and vice versa is successful.
traceroute from 192.168.100.2 to 192.168.200.2 (192.168.200.2)
1 192.168.100.1 (192.168.100.1) 0.437 ms 0.310 ms 0.312 ms
2 192.168.150.2 (192.168.150.2) 2.351 ms 2.156 ms 1.989 ms
3 192.168.200.2 (192.168.200.2) 43.649 ms 43.418 ms 43.244 ms
tracepath 192.168.200.2
1: ubuntu-VirtualBox.local 0.211ms pmtu 1500
1: 192.168.100.1 0.543ms
1: 192.168.100.1 0.546ms
2: 192.168.150.2 0.971ms
3: 192.168.150.2 1.143ms pmtu 750
3: 192.168.200.2 1.059ms reached
Segments 100.x and 150.x have MTU 1500. Segment 200.x has MTU 750.
I'm trying to send UDP packets with DF enabled. The problem is that VM1 doesn't send the packet at all when the packet size is greater than 750 (send() fails with EMSGSIZE).
However, I expect that behavior only for packets larger than 1500 bytes. For packets between 750 and 1500 bytes, I expect VM1 to send them to R1, and R1 (or R2) to drop them and return an ICMP packet to VM1. But this doesn't happen.
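For reference, this is roughly how I am marking the datagrams (a Linux-specific sketch); the PROBE mode in the comments is, as far as I understand, a Linux option documented in ip(7) that I have not fully verified:

#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch: choose how the kernel handles DF for this UDP socket.
 * IP_PMTUDISC_DO    - set DF; send() fails locally with EMSGSIZE above the
 *                     path MTU the kernel has cached for the destination.
 * IP_PMTUDISC_PROBE - set DF but ignore the cached path MTU, so the datagram
 *                     is actually transmitted and a router can answer with ICMP. */
static int set_df_mode(int fd, int mode)
{
    return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &mode, sizeof(mode));
}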
There are two questions:
1) Why?
2) Is it possible to set up my virtual network so that VM1 receives ICMP packets in accordance with RFC 1191?
Thanks.
Source: (StackOverflow)
Recently we ran into what looked like a connectivity issue when a particular customer of ours installed our product. We ultimately traced it to a low MTU (~1300 bytes) being configured on one of the devices in the network. In this particular deployment, we had two Windows machines running our application communicating with one another, and their link MTUs were set at 1500.
One thing that made this particularly difficult to troubleshoot, was that our application would work fine during the handshake phase (where only small requests are sent), but would sometimes fail sending a specific request of size ~4KB across the network. If it makes a difference, the application is written in C# and these are WCF messages.
What could account for this intermittent behavior? I would have expected this to always fail, since the message we were sending was always larger than the link MTU perceived by the Windows client, which should produce at least one full 1500-byte packet and hence problems. Is there something in TCP that could make it prefer smaller packets, but only sometimes?
Some other things that we thought might be related:
1) The sockets were constantly being set up and torn down (as the application received what it interpreted as a network failure), so this doesn't appear to be related to TCP slow start.
2) I'm assuming that WCF "quickly" pushes the entire 4KB message to the socket, so there's always something to send that's larger than 1500 bytes.
3) Using Wireshark, I didn't spot any TCP retransmissions that might explain why only subsets of the buffer were being sent.
4) Using Wireshark, I saw a single 4KB IP packet being sent, which perhaps indicates that TCP segmentation offloading (TSO) is being performed by the NIC (I'm not sure how TSO would look in Wireshark). I didn't see the 4KB request being broken into multiple IP packets in Wireshark, in either the successful or the unsuccessful instances.
5) The customer claims that there's no route between the two Windows machines that circumvents the "problematic" device with the small MTU.
Any thoughts on this would be appreciated.
Source: (StackOverflow)
I am trying to understand the "big picture" of MTU. Specifically, many discussions of MTU focus on a single hop (e.g. laptop to router), so the natural question is: how do I determine the MTU between the cable modem and the ISP, or more generally, for any given hop of a route?
Now, I can easily see the MTU between my laptop and its Wi-Fi router using ifconfig on Mac OS X:
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4>
ether 58:b0:35:f0:14:75
media: autoselect (none)
status: inactive
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 58:b0:35:72:64:fa
inet6 fe80::5ab0:35ff:fe72:64fa%en1 prefixlen 64 scopeid 0x5
inet 192.168.1.100 netmask 0xffffff00 broadcast 192.168.1.255
media: autoselect
status: active
1500 is the canonical value because of the limit of the Wi-Fi frame (which I am guessing was designed to match the Ethernet frame; please correct me if that's wrong).
So, the question is: How to determine the MTU of arbitrary hops in my route?
Answer summary:
Per the answer below, the best bet is "tcpdump", "traceroute --mtu", or "tracepath".
Source: (StackOverflow)
(I'm posting this question after the fact because of the time it took to find the root cause and solution. There's also a good chance other people will run into the same problem)
I have an RDS instance (in a VPC) that I'm trying to connect to from an application running on a classic EC2 instance, connected via ClassicLink. Security groups and DNS aren't an issue.
I am able to establish socket connections to the RDS instance, but cannot connect with CLI tools (psql, mysql, etc.) or DB GUI tools like Toad or MySQL Workbench.
Direct socket connections with telnet or nc result in TCP connections in the "ESTABLISHED" state (output from netstat).
Connections from DB CLI, GUI tools, or applications result in timeouts and TCP connections that are stuck in the "SYN" state.
UPDATE: The root cause in my case was a problem with MTU size and EC2 ClassicLink. I've posted some general troubleshooting information below in an answer in case other people run into similar RDS connectivity issues.
Source: (StackOverflow)
In Erlang it is very simple to send a UDP packet: use gen_udp:open() to create a socket, then use gen_udp:send() to send out the data.
However, by default the Linux TCP/IP stack will set the don't-fragment (DF) flag in the IP header if the size of the IP packet doesn't exceed the MTU; if the size exceeds the MTU, the UDP packet will be fragmented.
Is there some way to not set the DF flag for UDP packets only?
I know that in C the following code can be used to clear the DF flag, but I couldn't find a way to do it in Erlang.
int optval = 0;  /* 0 == IP_PMTUDISC_DONT: never set DF, allow fragmentation */
if (-1 == setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER, &optval, sizeof(optval))) {
    printf("Error: setsockopt %d\n", errno);
    exit(1);
}
Thanks
Source: (StackOverflow)
I want to set MTU from the command line. I'm running under XP.
I've tried:
netsh interface ipv4 set subinterface "Local Area Connection" mtu=1300 store=persistent
But it's not working.
I've tried changing "ipv4" to "ip", but that didn't help; the token "subinterface" is not recognized.
Any ideas?
Thanks in advance.
Source: (StackOverflow)
I am working on an application where I need to send a large amount of data to a client in multiple UDP packets. How can I programmatically determine the MTU for my UDP socket?
I need to be able to do this on both Windows and Linux.
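For Linux, one approach I am considering is sketched below (the address and port are placeholders): connect() the UDP socket, then ask the kernel via the IP_MTU socket option for the path MTU it currently associates with that destination. I don't know of an equivalent on Windows, which is part of why I'm asking.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder address */
    connect(fd, (struct sockaddr *)&dst, sizeof(dst));

    int mtu = 0;
    socklen_t len = sizeof(mtu);
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        printf("kernel's current path MTU estimate: %d\n", mtu);
    close(fd);
    return 0;
}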
Source: (StackOverflow)
I am using sockets in Python to send files, and I am doing a packet capture while sending them. However, I find that each packet is 1434 bytes instead of 1500 bytes (the MTU is set to 1500 bytes on my system).
I have attached some screenshots of the packet capture. I need to send the packets at 1500 bytes rather than 1434 bytes; can someone tell me what's going on?
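If it helps narrow things down, here is a hedged C sketch of a check I could run on the sending side (placeholder address and port): after connect(), TCP_MAXSEG reports the segment size the stack is actually using, which can be smaller than the MTU minus 40 once TCP options or the path come into play.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in dst;
    memset(&dst, 0, sizeof(dst));
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* placeholder address */

    if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) == 0) {
        int mss = 0;
        socklen_t len = sizeof(mss);
        if (getsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) == 0)
            printf("MSS in use: %d bytes per segment\n", mss);
    }
    close(fd);
    return 0;
}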
Source: (StackOverflow)
By default, for a TCP packet over Ethernet, the MSS is 1460 when the MTU is 1500:
MSS = MTU - 20 (IP header) - 20 (TCP header) = 1460
The above is calculated assuming a TCP header without any options.
If a packet carries options in the TCP header, will that reduce the MSS or not? And what will the MSS be when options are present in the TCP header?
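For example (this is my understanding, please correct me if it's wrong): with the common timestamps option, which is 10 bytes padded with two NOP bytes to 12, a segment on a 1500-byte MTU link carries 1500 - 20 - 20 - 12 = 1448 bytes of data. The MSS advertised in the SYN stays 1460; the options simply consume part of it in each segment that carries them.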
Source: (StackOverflow)