There is no length field in the Ethernet header (*). The MAC infers how long the frame is by noticing when the carrier goes away, and relies on the CRC to catch truncated or overlength frames. After removing the Ethernet header and CRC, what is left is payload.

(*) It isn't quite that simple, as the optional LLC header adds a length field. As is customary when talking about Ethernet, I'll ignore the existence of LLC.
Protocol: Whatever. When I send N bytes, the other station will receive N bytes. That's all I care about.
Ethernet: Not quite. The sending device will pad the frame out to 64 bytes.
Protocol: Whatever. The receiver removes the padding, right?
Ethernet: No.
Protocol: &%#@*!
The minimum Ethernet frame size has been a source of confusion for decades. From protocol implementors miffed at getting garbage after their data to driver writers who forget to zero out the padding and end up leaking kernel data onto the wire, it's been a hoot. So why have a minimum size?
Wayback Machine to 1976
Ethernet was designed for half duplex operation. If two stations check for a carrier at the same time they might both start transmitting, resulting in a collision. Reliably detecting collisions is important: though Ethernet has never guaranteed delivery, collisions are so common that relying on higher-layer protocols to retransmit would have resulted in a miserable network. Instead the MAC detects the collision itself and retries.
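A schematic of that transmit loop follows. The carrier and collision primitives on the `mac` object are hypothetical stand-ins for what the hardware does, but the control flow and the backoff arithmetic follow the standard:

```python
import random

SLOT_TIME_BITS = 512   # one backoff slot = 512 bit-times (the minimum frame size, derived below)

def backoff_slots(attempt: int) -> int:
    # Truncated binary exponential backoff: wait 0..2^k - 1 slots, k capped at 10.
    return random.randrange(2 ** min(attempt, 10))

def send_with_csma_cd(mac, frame: bytes) -> None:
    for attempt in range(1, 17):          # the standard gives up after 16 attempts
        while mac.carrier_sensed():       # defer to traffic already on the wire
            pass
        if mac.transmit(frame):           # True: frame completed with no collision
            return
        mac.jam()                         # reinforce the collision so everyone sees it
        mac.wait_bit_times(backoff_slots(attempt) * SLOT_TIME_BITS)
    raise RuntimeError("excessive collisions, frame dropped")
```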
In the IEEE standard, 10 Megabit Ethernet allows up to 5 segments separated by 4 repeaters between any two stations:
- The minimum frame size is 512 bits.
- A 10BASE5 segment can be 500 meters long, and signals propagate through the coax at 0.77c. A signal takes 2.17 microseconds to cross 500m, which is about 22 bit-times.
- It takes some time for bits to propagate through a repeater. I have no confirmed numbers, but allowing for synchronization across clock domains plus a little buffering to avoid underruns, 24 bit-times per repeater is a reasonable estimate.
- 5 segments × 22 bit-times + 4 repeaters × 24 bit-times = 206 bit-times.
Station A can start transmitting and its bits almost reach station B just as B, still hearing no carrier, begins transmitting itself. For A to detect the resulting collision it must still be transmitting when B's signal makes it all the way back, so we need to allow for two crossings of the network, or 412 bit-times. Adding some margin for safety and rounding up to the next power of 2 gives us the 512 bit minimum frame size.
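A quick sanity check of that arithmetic, using the numbers above:

```python
C = 3e8                # speed of light in vacuum, meters/second
BIT_TIME = 1 / 10e6    # one bit-time at 10 Mb/s, seconds

segment_delay = (500 / (0.77 * C)) / BIT_TIME   # ~21.6, round up to 22 bit-times
repeater_delay = 24                             # estimated bit-times per repeater

one_way = 5 * 22 + 4 * repeater_delay           # 206 bit-times across the network
round_trip = 2 * one_way                        # 412 bit-times for the collision to get back
minimum_frame = 1 << round_trip.bit_length()    # round up to the next power of 2

print(segment_delay, one_way, round_trip, minimum_frame)   # 21.64... 206 412 512
```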
So that's how we ended up with 64 byte minimum packets, by defining requirements for distance and working out propagation delays, right? ... Well, no.
The Plot Thins
Ethernet products were available prior to IEEE standardization. As originally specified, Ethernet allowed two repeaters and a maximum of three segments between any two hosts, yet still had a minimum frame size of 64 bytes. It could have gotten by with less.
As with so many things in technology, I believe the 64 byte size was chosen mainly for expediency. They knew they needed to listen for collisions, and made some calculations on propagation delay. The earliest Ethernet equipment was constructed out of discrete SSI parts and memories, and I suspect 64 bytes of buffer was available. So there we are.
IEEE defined the repeater limits to match the pre-existing minimum frame size, not the other way round.
Consequences
Though padding sounds wasteful of bandwidth, in practice it doesn't matter. A truism in networking is that most packets are small, but most data is carried in the large packets. Real networks are not made up of minimum sized frames at the link rate, but that is how they are tested. Read any review and you'll find the packet forwarding rate, measured using test equipment sending 64 byte frames at wire speed.
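Wire speed with minimum-sized frames is a punishing number: counting the preamble and inter-frame gap, each 64 byte frame occupies 84 bytes on the wire. A quick calculation of the worst-case packet rate a forwarding engine has to survive:

```python
PREAMBLE = 8   # preamble + start-of-frame delimiter, bytes
FRAME = 64     # minimum frame, header through CRC
IFG = 12       # inter-frame gap, bytes

def wire_rate_pps(link_bps: float) -> float:
    bits_per_frame = (PREAMBLE + FRAME + IFG) * 8   # 672 bits of wire per frame
    return link_bps / bits_per_frame

for rate in (10e6, 100e6, 1e9):
    print(f"{rate / 1e6:>6.0f} Mb/s: {wire_rate_pps(rate):>12,.0f} packets/sec")
```

At gigabit speed that works out to roughly 1.49 million frames per second.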
Switch fabric designers are grateful for the minimum frame size: it puts a cap on packets per second. Without the 64 byte minimum, the forwarding logic would have to be designed for 3x as many packets per second. Real networks don't operate that way, but if you can't handle it your gear gets tossed out of the lab. Really, it would be nice if the minimum frame were even a bit bigger.