Ethernet, Carrier and Datacenter Ethernet Protocol Overhead and Throughput

Thursday, January 9, 2014


Let’s calculate the protocol overhead, protocol efficiency and throughput for Ethernet, Carrier Ethernet and Datacenter Ethernet protocols such as 802.3, 802.1Q, 802.1ad QinQ, 802.1ah MAC-in-MAC, MPLS and TRILL. More information about the Ethernet frame formats, the field descriptions and the protocol overheads, together with a summary table, can be found in my previous post Ethernet and Overlay technologies over Ethernet.

As before, the main tables come first, followed by a description of how the values are calculated.


| Frame type | Overhead (bytes) | Efficiency 1500 (%) | Efficiency 9000 (%) | Efficiency TCP/IP 1500 (%) | Efficiency TCP/IP 9000 (%) | Efficiency UDP/IP 1500 (%) | Efficiency UDP/IP 9000 (%) |
|---|---|---|---|---|---|---|---|
| IEEE 802.3 Ethernet frame | 38 | 97.53 | 99.58 | 94.15 | 99.00 | 95.71 | 99.27 |
| IEEE 802.1Q tagged frame | 42 | 97.28 | 99.54 | 93.90 | 98.96 | 95.46 | 99.23 |
| IEEE 802.1ad QinQ frame | 46 | 97.02 | 99.49 | 93.66 | 98.92 | 95.21 | 99.18 |
| IEEE 802.3 frame with 3 MPLS headers | 70 | 95.54 | 99.23 | 92.23 | 98.65 | 93.76 | 98.92 |
| Cisco FabricPath Ethernet frame | 58 | 96.28 | 99.36 | 92.94 | 98.79 | 94.48 | 99.05 |
| IEEE 802.1ah MAC-in-MAC frame | 68 | 95.66 | 99.25 | 92.35 | 98.68 | 93.88 | 98.94 |
| TRILL Ethernet frame | 68 | 95.66 | 99.25 | 92.35 | 98.68 | 93.88 | 98.94 |
| OTV Ethernet 802.1Q frame | 96 | 93.98 | 98.94 | 90.73 | 98.37 | 92.23 | 98.64 |
| LISP Ethernet 802.1Q frame | 96 | 93.98 | 98.94 | 90.73 | 98.37 | 92.23 | 98.64 |
| VxLAN Ethernet 802.1Q frame | 96 | 93.98 | 98.94 | 90.73 | 98.37 | 92.23 | 98.64 |
| NvGRE Ethernet 802.1Q frame | 88 | 94.46 | 99.03 | 91.18 | 98.46 | 92.70 | 98.72 |
| STT Ethernet 802.1Q frame | 120 | 92.59 | 98.68 | 89.38 | 98.11 | 90.86 | 98.38 |


| Frame type | Throughput 1G, TCP/IP 1500 (Mbps) | Throughput 1G, TCP/IP 9000 (Mbps) | Throughput 1G, UDP/IP 1500 (Mbps) | Throughput 1G, UDP/IP 9000 (Mbps) | Throughput 10G, TCP/IP 1500 (Gbps) | Throughput 10G, TCP/IP 9000 (Gbps) | Throughput 10G, UDP/IP 1500 (Gbps) | Throughput 10G, UDP/IP 9000 (Gbps) |
|---|---|---|---|---|---|---|---|---|
| IEEE 802.3 Ethernet frame | 941.48 | 990.04 | 949.28 | 991.37 | 9.415 | 9.900 | 9.493 | 9.914 |
| IEEE 802.1Q tagged frame | 939.04 | 989.60 | 946.82 | 990.93 | 9.390 | 9.896 | 9.468 | 9.909 |
| IEEE 802.1ad QinQ frame | 936.61 | 989.17 | 944.37 | 990.49 | 9.366 | 9.892 | 9.444 | 9.905 |
| IEEE 802.3 frame with 3 MPLS headers | 922.29 | 986.55 | 929.94 | 987.87 | 9.223 | 9.865 | 9.299 | 9.879 |
| Cisco FabricPath Ethernet frame | 929.40 | 987.86 | 937.10 | 989.18 | 9.294 | 9.879 | 9.371 | 9.892 |
| IEEE 802.1ah MAC-in-MAC frame | 923.47 | 986.77 | 931.12 | 988.09 | 9.235 | 9.868 | 9.311 | 9.881 |
| TRILL Ethernet frame | 923.47 | 986.77 | 931.12 | 988.09 | 9.235 | 9.868 | 9.311 | 9.881 |
| OTV Ethernet 802.1Q frame | 907.27 | 983.73 | 914.79 | 985.05 | 9.073 | 9.837 | 9.148 | 9.850 |
| LISP Ethernet 802.1Q frame | 907.27 | 983.73 | 914.79 | 985.05 | 9.073 | 9.837 | 9.148 | 9.850 |
| VxLAN Ethernet 802.1Q frame | 907.27 | 983.73 | 914.79 | 985.05 | 9.073 | 9.837 | 9.148 | 9.850 |
| NvGRE Ethernet 802.1Q frame | 911.84 | 984.60 | 919.40 | 985.92 | 9.118 | 9.846 | 9.194 | 9.859 |
| STT Ethernet 802.1Q frame | 893.83 | 981.14 | 901.23 | 982.46 | 8.938 | 9.811 | 9.012 | 9.825 |

| Frame type | KPPS 1G, 1500 | KPPS 1G, 9000 | KPPS 10G, 1500 | KPPS 10G, 9000 |
|---|---|---|---|---|
| IEEE 802.3 Ethernet frame | 81.274 | 13.830 | 812.744 | 138.305 |
| IEEE 802.1Q tagged frame | 81.064 | 13.824 | 810.636 | 138.244 |
| IEEE 802.1ad QinQ frame | 80.854 | 13.818 | 808.538 | 138.183 |
| IEEE 802.3 frame with 3 MPLS headers | 79.618 | 13.782 | 796.178 | 137.817 |
| Cisco FabricPath Ethernet frame | 80.231 | 13.800 | 802.311 | 138.000 |
| IEEE 802.1ah MAC-in-MAC frame | 79.719 | 13.785 | 797.194 | 137.847 |
| TRILL Ethernet frame | 79.719 | 13.785 | 797.194 | 137.847 |
| OTV Ethernet 802.1Q frame | 78.321 | 13.742 | 783.208 | 137.423 |
| LISP Ethernet 802.1Q frame | 78.321 | 13.742 | 783.208 | 137.423 |
| VxLAN Ethernet 802.1Q frame | 78.321 | 13.742 | 783.208 | 137.423 |
| NvGRE Ethernet 802.1Q frame | 78.715 | 13.754 | 787.154 | 137.544 |
| STT Ethernet 802.1Q frame | 77.160 | 13.706 | 771.605 | 137.061 |

1. Protocols Overhead

The protocol overhead represents the encapsulation headers and framing bytes that consume part of the PDU (Protocol Data Unit) when information is carried from one host to another. It can be calculated as the sum of all protocol headers plus the Preamble, SFD, CRC and IFG:

Protocol Overhead =
= Pre (7 bytes) + SFD (1 byte) + Protocol Headers (variable) + CRC (4 bytes) + IFG (12 bytes)
= 24 bytes + Protocol Headers (variable)

EX: for the IEEE 802.3 Ethernet frame we have the following overhead:
7 bytes (Pre) + 1 byte (SFD) + 6 bytes (DMAC) + 6 bytes (SMAC) + 2 bytes (EtherType) + 4 bytes (CRC) + 12 bytes (IFG) = 38 bytes
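
To make the arithmetic concrete, here is a minimal Python sketch of the same calculation; the header sizes used (14 bytes for the basic Ethernet header, plus 4 bytes per VLAN tag) are the assumptions behind the first three rows of the table above:

```python
# Fixed wire framing present on every frame: Preamble (7) + SFD (1) + CRC (4) + IFG (12)
FIXED_WIRE_BYTES = 7 + 1 + 4 + 12  # = 24 bytes

def protocol_overhead(header_bytes: int) -> int:
    """Total per-frame overhead: encapsulation headers plus fixed wire framing."""
    return FIXED_WIRE_BYTES + header_bytes

# Header bytes assumed per encapsulation (14 = DMAC + SMAC + EtherType)
print(protocol_overhead(14))      # IEEE 802.3              -> 38 bytes
print(protocol_overhead(14 + 4))  # IEEE 802.1Q (one tag)   -> 42 bytes
print(protocol_overhead(14 + 8))  # IEEE 802.1ad QinQ       -> 46 bytes
```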

Protocol overhead can also be expressed as a percentage of the total frame size on the wire. It is not the main focus here, but for completeness:

Protocol Overhead (%) = Overhead / (Payload + Overhead) x 100

EX: for the IEEE 802.3 Ethernet frame with a 1500-byte payload:
Overhead (%) = 38 / (1500 + 38) x 100 = 2.47 %

2. Protocol Efficiency

The protocol efficiency can be calculated with the following formula:

Protocol Efficiency (%) = Payload / (Payload + Overhead) x 100

Maximum efficiency is reached with the largest allowed payload, and is the ratio between that payload and the total frame size on the wire (payload plus overhead), expressed as a percentage.

EX: for the IEEE 802.3 Ethernet frame with 1500-byte and 9000-byte payloads:
Efficiency (1500) = 1500 / (1500 + 38) x 100 = 97.53 %
Efficiency (9000) = 9000 / (9000 + 38) x 100 = 99.58 %

Considering that most of the time we use the TCP/IP or UDP/IP stacks, it is worth also accounting for the IP and TCP/UDP header overhead carried inside the Ethernet payload.

IP header overhead = 20 bytes with no options
IPv6 header overhead = 40 bytes with no options
TCP header overhead = 20 bytes + 12 bytes timestamp option
UDP header overhead = 8 bytes
ICMP header overhead = 8 bytes

EX: for the IEEE 802.3 Ethernet frame with a 1500-byte payload:
Efficiency (TCP/IP) = (1500 - 20 - 32) / (1500 + 38) x 100 = 1448 / 1538 x 100 = 94.15 %
Efficiency (UDP/IP) = (1500 - 20 - 8) / (1500 + 38) x 100 = 1472 / 1538 x 100 = 95.71 %
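
As a quick sanity check against the first table, here is a minimal Python sketch that recomputes the efficiency columns for the IEEE 802.3 frame; the only assumptions are the 38-byte overhead from the table and the IP/TCP/UDP header sizes listed above:

```python
# IP (20), TCP (20) + timestamp option (12), UDP (8) header sizes, as listed above
IP_HDR, TCP_HDR, TCP_TS, UDP_HDR = 20, 20, 12, 8

def efficiency(payload: int, overhead: int, l3l4: int = 0) -> float:
    """Usable data as a percentage of the bytes actually sent on the wire."""
    return (payload - l3l4) / (payload + overhead) * 100

OVERHEAD_8023 = 38  # IEEE 802.3 Ethernet frame overhead, from the first table
for mtu in (1500, 9000):
    print(f"MTU {mtu}:")
    print(f"  raw    : {efficiency(mtu, OVERHEAD_8023):.2f} %")
    print(f"  TCP/IP : {efficiency(mtu, OVERHEAD_8023, IP_HDR + TCP_HDR + TCP_TS):.2f} %")
    print(f"  UDP/IP : {efficiency(mtu, OVERHEAD_8023, IP_HDR + UDP_HDR):.2f} %")
```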

3. Protocol Throughput

The protocol throughput can be calculated with the following formula:

Throughput = Link Speed x Protocol Efficiency

EX: for the IEEE 802.3 Ethernet frame on a 1 Gbps link, with TCP/IP and a 1500-byte payload:
Throughput = 1000 Mbps x (1448 / 1538) = 941.48 Mbps
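
The same relationship in a minimal Python sketch, assuming the ~94.15 % TCP/IP efficiency from the first table:

```python
def throughput_mbps(link_speed_mbps: float, efficiency_pct: float) -> float:
    """Effective throughput: link speed scaled by the protocol efficiency."""
    return link_speed_mbps * efficiency_pct / 100

# IEEE 802.3, TCP/IP, 1500-byte payload: efficiency ~94.148 % (first table)
print(throughput_mbps(1_000, 94.148))   # ~941.5 Mbps on a 1 Gbps link
print(throughput_mbps(10_000, 94.148))  # ~9415 Mbps on a 10 Gbps link
```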

The theoretical maximum throughput over Ethernet links using TCP/IP depends on the following factors:
- Network Latency (End to End)
- Link Speed (bps)
- Propagation, Serialization, and Queuing delay
- Window size (RWIN), window scaling and CTCP
- Network Congestion
- MTU/MSS
- Protocol Overhead (Layer 2, 3, and 4)
- Ethernet Efficiency
- Link reliability


4. PPS – Packets per Second

The packet rate can be calculated with the following formula:

PPS = Link Speed (bps) / (Frame size on the wire (bytes) x 8)

where the frame size on the wire is the payload plus the protocol overhead.

EX: for the IEEE 802.3 Ethernet frame on a 1 Gbps link with a 1500-byte payload:
PPS = 1,000,000,000 / ((1500 + 38) x 8) = 81,274 pps ≈ 81.274 kpps

Assuming a minimum-size 64-byte frame (84 bytes on the wire including Preamble, SFD and IFG), we have the following PPS:
PPS = 1,000,000,000 / (84 x 8) = 1,488,095 pps ≈ 1.488 Mpps
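
A minimal Python sketch of the same calculation; the 38-byte default overhead assumes a plain IEEE 802.3 frame, and the payload sizes match the examples above:

```python
def pps(link_speed_bps: int, payload_bytes: int, overhead_bytes: int = 38) -> float:
    """Frames per second at line rate: link speed divided by frame size on the wire."""
    wire_bits = (payload_bytes + overhead_bytes) * 8
    return link_speed_bps / wire_bits

print(f"{pps(1_000_000_000, 1500) / 1_000:.3f} kpps")    # ~81.274 kpps (1G, 1500-byte MTU)
print(f"{pps(1_000_000_000, 9000) / 1_000:.3f} kpps")    # ~13.830 kpps (1G, 9000-byte MTU)
# A minimum-size 64-byte frame carries a 46-byte payload (64 minus 14-byte header and 4-byte CRC)
print(f"{pps(1_000_000_000, 46) / 1_000_000:.3f} Mpps")  # ~1.488 Mpps (1G, 64-byte frames)
```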

Conclusion: Protocol efficiency plays a very important role in link throughput; you cannot expect full link usage when so many encapsulation methods are stacked, and testing link throughput with the ping command is not the way to do it. PPS is also very important for link testing and monitoring: you can calculate an average between the minimum and maximum frame size on the link and check whether the measured pps reaches roughly that average value. A last piece of advice: it is always a good idea to enable jumbo frames for better link performance (both throughput and efficiency).


What is the worst efficiency / throughput link that you have ever seen? 

by Mihaela Paraschivu

1 comment:

  1. Pretty nice and clear explanation !

    I managed to meet a few people from Bucharest Polytech University while working at Google, I almost feel like going to get a degree in Romania :P

    Cheers,
    Ben
