Date created: 08/02/18 13:44:35. Last modified: 08/02/18 17:02:56

CRC and Checksum Error Detection


Problem Background
Ethernet CRC Overview
IPv4 and TCP/UDP Checksums
Ethernet CRC Calculation and Polynomials
Implications in Production Networks

Problem Background
Ethernet frames include a 32-bit CRC value (the Frame Check Sequence, FCS) to protect them against corruption on the path from source to destination. The Ethernet FCS isn't perfect though, and can't detect every possible error. When a frame arrives at a network device and its contents don't match the CRC value, the device will likely drop the frame and increment an interface counter, which can in turn be monitored via SNMP or telemetry of some kind.

Separate from the ability to detect and alert on known errors, a small percentage of errors will go uncaught by the 32-bit CRC used in Ethernet; it can't detect every possible bit error within a frame. This task then falls to higher layers such as IPv4 or TCP/UDP. However, these protocols use a weaker checksum than the Ethernet CRC, so it's unlikely they would catch an error the CRC missed. Looking further up the protocol stack, the job then falls to the application layer to verify the data being sent and received, yet most applications don't perform any consistency/integrity checking on data sent across a network. As an example, when making a payment of £10.00 via online banking, HTTP simply submits a POST request and the server processes whatever it receives, even if the HTTP POST packet was corrupted in-flight to become £99.99. Thankfully HTTPS is used for online banking, which would detect such an error, but most applications are probably without such consistency/integrity checks.

A bona fide example is bitsquatting. Bitsquatting involves an attacker purchasing a domain name that is one bit different from a real domain (for example, flipping the last bit of a byte holding "n", 01101110, turns it into 01101111, "o"). A DNS request may be corrupted on its way to the DNS server, so the request received by the DNS server is for the attacker's domain rather than the genuine one, and the server returns the IP address of the attacker's domain. The user is now sent to an entirely different web server than they were expecting, without even knowing it. Just as HTTPS is now becoming the de facto standard, DNS is working towards encryption, but that is still years away.
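As a quick illustration of the single-bit-flip idea (the domain name below is made up purely for illustration, not a real bitsquatting example):

```python
def flip_bit(name: str, byte_idx: int, bit: int) -> str:
    """Return name with a single bit flipped in the byte at byte_idx."""
    data = bytearray(name, "ascii")
    data[byte_idx] ^= 1 << bit          # inject a one-bit error
    return data.decode("ascii", errors="replace")

# "n" (0x6E, 01101110) with its lowest bit flipped becomes "o" (0x6F, 01101111)
assert flip_bit("n", 0, 0) == "o"

# A one-bit error in a hypothetical domain name yields a different, valid name
print(flip_bit("dnsname.example", 1, 0))  # -> dosname.example
```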

In addition, when a frame is received by a switch or router, the CRC value is popped off, and for the time the frame spends inside the device it has no CRC protection. A new value is calculated and pushed onto the frame at egress, which means that any corruption that happened inside the device goes undetected, and the now-corrupted frame is implicitly "marked" as uncorrupted by having a new, valid CRC pushed onto it at egress.

When cut-through switching is used, though, the device doesn't receive the CRC value until it has already transmitted most or all of the (possibly corrupted) frame. This is because the CRC value is at the end of the frame: as soon as the destination MAC is fully received, the switch can perform a MAC lookup to find the egress port and start forwarding the frame bit-stream as it comes in. In this cut-through example, once the switch has transmitted the entire frame, receives the CRC value at the end and realises there is a problem, it can increment an egress CRC error counter on the egress interface as well as an ingress counter on the ingress interface. This could help identify where errors are coming from and going to, which might identify the hosts involved. However, if multiple ports have ingress and egress CRC error counters increasing, this doesn't help. Additionally, if the destination MAC is unknown in the CAM table and the frame is flooded, all ports in the same VLAN may have their egress CRC error counters incremented.

Cut-through switching is typically used in low-latency DC deployments. Within the DC LAN, jumbo frames are becoming increasingly common due to replication requirements. Jumbo frames exacerbate the issue of the 32-bit CRC not being able to detect every possible error: the increased frame size (9000 bytes, for example, vs. 1500 bytes) means that the same 4-byte CRC value now has to "protect" six times as much data.


Ethernet CRC Overview

  • What is a CRC/checksum?
    "A cyclic redundancy check (CRC) is an error-detecting code commonly used in digital networks and storage devices to detect accidental changes to raw data. Blocks of data entering these systems get a short check value attached, based on the remainder of a polynomial division of their contents. On retrieval, the calculation is repeated and, in the event the check values do not match, corrective action can be taken against data corruption. CRCs can be used for error correction [as well as error detection]."

    "For the purposes of data communication, the goal of a [IPv4/TCP] checksum algorithm is to balance the effectiveness at detecting errors against the cost of computing the check values. Furthermore, it is expected that a checksum will work in conjunction with other, stronger, data checks such as a CRC. For example, MAC layers are expected to use a CRC to check that data was not corrupted during transmission on the local media, and checksums are used by higher layers to ensure that data was not corrupted in intermediate routers or by the sending or receiving host"

    "By using a CRC checksum rather than simple additive checksums as contained within the UDP and TCP transports, errors generated internal to NICs can be detected as well. Both TCP and UDP have proven ineffective at detecting bus specific bit errors, since these errors with simple summations tend to be self-cancelling. Testing that led to adoption of RFC 3309 compiled evidence based upon simulated error injection against real data that demonstrated as much as 2% of these errors were not being detected."

  • What errors can the 32-bit CRC detect?
There are certain types of errors that the 32-bit CRC will always or almost always detect, and certain error types it may fail to detect (though those error types are expected to be very rare).

    "CRCs are based on polynomial arithmetic, base 2. CRC-32 is a 32-bit polynomial with several useful error detection properties. It will detect all errors that span less than 32 contiguous bits within a packet and all 2-bit errors less than 2048 bits apart. It will also detect all cases where there are an odd number of errors. For other types of errors, if they occur in data which has uniformly distributed values, the chance of not detecting an error is 1 in 2^32"

    "Suffice to say that the Ethernet FCS will detect:
    Any 1 bit error
    Any two adjacent 1 bit errors
    Any odd number of 1 bit errors
    Any burst of errors with a length of 32 or less"
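The guaranteed detection of any single-bit error is easy to demonstrate with Python's standard zlib.crc32 (which implements the same IEEE 802.3 CRC-32 as Ethernet); the frame contents below are arbitrary sample data:

```python
import zlib

frame = bytes(range(256)) * 4           # 1024 bytes of arbitrary "frame" data
good = zlib.crc32(frame)

# Flip every single bit in turn: the CRC-32 changes every time, so a
# single-bit error is always detected.
for byte_idx in range(len(frame)):
    for bit in range(8):
        corrupted = bytearray(frame)
        corrupted[byte_idx] ^= 1 << bit
        assert zlib.crc32(bytes(corrupted)) != good

print("all", len(frame) * 8, "single-bit errors detected")
```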

    "The Ethernet CRC is substantially stronger [than IPv4/TCP/UDP checksums], partly because it is twice as long (4 bytes), and partly because CRCs have good mathematical properties, such as detecting all 3 bit errors in 1500 byte Ethernet packets"

    "Larger frames are more likely to suffer undetected errors with the simple CRC32 error detection used in Ethernet frames — a larger amount of data increases the probability that several errors cancel each other out. Consequently, additional mechanisms have been developed to improve error detection on higher network layers.
    IETF solutions for adopting jumbo frames avoids data integrity reduction of the service data unit through use of the Castagnoli CRC polynomial being implemented within the SCTP transport (RFC 4960) and iSCSI (RFC 7143). Selection of this polynomial was based upon work documented in the paper "32-Bit Cyclic Redundancy Codes for Internet Applications". The Castagnoli polynomial 0x1EDC6F41 achieves the Hamming distance HD=6 beyond one Ethernet MTU (to a 16,360 bit data word length) and HD=4 to 114,663 bits, which is more than 9 times the length of an Ethernet MTU. This gives two additional bits of error detection ability at MTU-sized data words compared to the Ethernet CRC standard polynomial while not sacrificing HD=4 capability for data word sizes up to and beyond 72 kbits."

  • When is the CRC value checked?
    Every layer 2 device in the path between two hosts on the same LAN will check the CRC value as it receives the frame. Typically a switch (in store and forward mode) would receive the frame, check the CRC, remove the CRC, move the frame across the switch backplane, calculate a new CRC, push the new CRC onto the frame and then transmit the frame via the egress port.

There may be no reason to pop, recalculate and push the CRC on every device in the layer 2 path if nothing is changed in the frame headers (no VLAN push/pop/swap, for example). However, some devices may simply pop the CRC on ingress and push a new one on egress for all frames. Cut-through switches may deliberately not recalculate the FCS, even when they know it to be bad, so that downstream switches also catch the error; instead they may simply increment the bad-Rx CRC counter on the ingress port and the bad-Tx CRC counter on the egress port. Hopefully a cut-through switch never recalculates a good FCS on egress, as that would mean the corrupted frame is propagated onward without any further chance of detection!

It is arguable that layer 2 devices (or layer 3 devices at the IP boundary) shouldn't check the CRC (or checksum) if they aren't making any changes. If an intermediate device finds the CRC value is wrong and drops the frame, the end host will never know where in the path the frame was lost, or whether it was ever sent. The more likely sources of errors/corruption are the devices that originate, receive or modify the frame along the path; having only those devices check the CRC and drop invalid frames would limit the initial scope of troubleshooting to just a few devices, or a subset of links. The current behaviour, in which every store-and-forward switch immediately drops a frame with a bad CRC, means the initial investigation scope is "every link and device end-to-end". The counter-point to only dropping packets on devices that modify the frame is that network resources are wasted forwarding a packet that is already known to be corrupted.


IPv4 and TCP/UDP Checksums
The IPv4 checksum is a 16-bit 1's complement sum of all the 16-bit words in the IPv4 header. Note that this does not cover the TCP header or any of the TCP data. The TCP checksum is a 16-bit 1's complement sum of all the 16-bit words in the TCP header (with the checksum field set to 0x0000 for the calculation), plus a pseudo-header made up of the IPv4 source and destination addresses, the protocol value (6) and the length of the TCP segment (header + data), plus all the TCP data bytes. If the number of header + data bytes is odd, a pad byte of 0x00 is added at the end so the data is a whole number of 16-bit words. This pad byte is not transmitted.
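The calculation above can be sketched in a few lines; this is the RFC 1071 style one's-complement sum, checked against a sample IPv4 header (the header bytes are illustrative):

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 16-bit one's-complement sum over 16-bit words."""
    if len(data) % 2:                    # odd length: append a pad byte
        data += b"\x00"                  # (the pad byte is never transmitted)
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

# Sample IPv4 header with the checksum field zeroed out for the calculation
header = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
csum = internet_checksum(header)
print(hex(csum))                         # -> 0xb861 for this sample header

# A receiver summing the header *including* the checksum should get zero
assert internet_checksum(header[:10] + csum.to_bytes(2, "big") + header[12:]) == 0
```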

The IPv4 checksum is only 2 bytes long, half the length of the Ethernet CRC, and uses a "weaker" calculation. Note that a CRC is not a checksum (they are similar but not interchangeable): Ethernet uses a CRC, which is based on polynomial division, whereas IPv4/TCP use an additive checksum.

  • What can the IPv4/TCP/UDP Checksum Detect?
    "The TCP checksum is two bytes long, and can detect any burst error of 15 bits, and most burst errors of 16 bits (excluding switching 0x0000 and 0xffff). This means that to keep the same checksum, a packet must be corrupted in at least two locations, at least 2 bytes apart. If the chance is purely random, we should expect approximately 1 in 2^16 (approximately 0.001%) of corrupt packets to not be detected…For details about how to compute the TCP checksum and its error properties, see RFC 1071."

    "It is a well-known irony that the very robustness of fault-tolerant systems can conceal a large number of correctable errors. In the Internet, that means we are sending large volumes of incorrect data without anyone noticing. Our trace data shows that the TCP and UDP checksums are catching a significant number of persistent errors. In practice, the checksum is being asked to detect an error every few thousand packets. After eliminating those errors that the checksum always catches, the data suggests that, on average, between one packet in 10 billion and one packet in a few millions will have an error that goes undetected. The exact range depends on the type of data transferred and the path being traversed."

  • What can’t the IPv4/TCP/UDP Checksum detect?
    "It can’t detect re-ordering of 2-byte aligned words. It can’t detect various bit flips that keep the 1s complement sum the same (e.g. 0x0000 to 0xffff and vice versa)"

    "The TCP checksum is a 16-bit ones complement sum of the data. This sum will catch any burst error of 16 bits or less, and over uniformly distributed values of data is expected to detect other types of errors at a rate proportional to 1 in 2^16. The checksum also has a major limitation: the sum of a set of 16-bit values is the same, regardless of the order in which the values appear."

    "With 1500 Byte packets at 1Gbps you're pushing 83,333 packets per second. If 1% of those (833) are corrupted and 1 in every 2^16 corrupted packets has a valid CRC then you have 1 corrupt packet with a valid CRC every 78 seconds."
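The back-of-the-envelope arithmetic in the quote above can be reproduced directly:

```python
# 1500-byte packets at 1 Gbps, 1% corruption rate, 1-in-2^16 undetected
link_bps = 1_000_000_000
packet_bits = 1500 * 8
pps = link_bps / packet_bits                 # ~83,333 packets per second
corrupted_pps = pps * 0.01                   # ~833 corrupted packets/second
seconds_per_undetected = 2**16 / corrupted_pps

# ~78.6 seconds between undetected-corrupt packets, matching the quote
print(round(pps), round(corrupted_pps), round(seconds_per_undetected, 1))
```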

    "The checksum calculation will NOT detect:
    Reordering of 2 byte words, i.e. 01 02 03 04 changes to 03 04 01 02
    Inserting zero-valued bytes i.e. 01 02 03 04 changes to 01 02 00 00 03 04
    Deleting zero-valued bytes i.e. 01 02 00 00 03 04 changes to 01 02 03 04
    Replacing a string of sixteen 0's with 1's or 1's with 0's
    Multiple errors which sum to zero, i.e. 01 02 03 04 changes to 01 03 03 03"
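The blind spots listed above are easy to demonstrate (reimplementing the one's-complement sum here purely for illustration), and a CRC-32 over the same bytes catches the change:

```python
import zlib

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's-complement checksum (RFC 1071 style), for illustration."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Reordering 2-byte words leaves the checksum unchanged...
assert ones_complement_sum16(bytes([1, 2, 3, 4])) == \
       ones_complement_sum16(bytes([3, 4, 1, 2]))

# ...as does inserting zero-valued words...
assert ones_complement_sum16(bytes([1, 2, 3, 4])) == \
       ones_complement_sum16(bytes([1, 2, 0, 0, 3, 4]))

# ...whereas CRC-32 detects the reordering (the error pattern spans < 32 bits)
assert zlib.crc32(bytes([1, 2, 3, 4])) != zlib.crc32(bytes([3, 4, 1, 2]))
```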

  • What are the problems with IPv4/TCP/UDP corruption?
    If the source or destination port within TCP or UDP, or the source or destination address within IPv4, is corrupted and the error goes undetected, then the true source or destination port or IP address is unknown. This means that matching the packet back to the original flow, to find the true source or destination of the corrupted packet(s), will be difficult; if NAT is used it will be more difficult still. Section 4.2 shows that small non-data packets (e.g. ACKs) accounted for up to 60% of errors. For both Ethernet and IPv4/TCP/UDP it is important to note that corruption may be linked to an event during frame or packet creation/modification/consumption, and not related to the length of a packet or the time it spends on the wire.


Ethernet CRC Calculation and Polynomials
This is a moderately complex topic that would require a whole separate document to explain. For now, see these external resources:

Online CRC Calculator -
Online Checksum Calculator -
Ethernet CRC32 Checker -

The polynomial divisor used in most Ethernet implementations, often referred to as the AUTODIN II CRC polynomial, is 0x04C11DB7. Some polynomials are better than others for the modulus calculation - "The polynomial must be chosen to maximize the error-detecting capabilities while minimizing overall collision probabilities." -

Below are some CRC32 polynomials in common use; different polynomial divisors will be more or less likely to catch certain bit error patterns than others. The Koopman poly 0x741B8CD7 exceeds the IEEE 802.3-defined poly 0x04C11DB7 at typical Ethernet frame lengths.

Wikipedia Name: CRC32 - 0x04C11DB7
Usage: IEEE 802.3 Ethernet, SATA, PKZIP, Gzip, Bzip2, and many more

Wikipedia Name: CRC32 (Castagnoli) - 0x1EDC6F41
Usage: iSCSI, SCTP, Btrfs, ext4 and more

Wikipedia Name: CRC-32K (Koopman) - 0x741B8CD7
Usage: Excellent at Ethernet frame length, poor performance with long files

Wikipedia Name: CRC-32K2 (Koopman) - 0x32583499
Usage: Excellent at Ethernet frame length, poor performance with long files
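As a sketch of how these polynomials are actually applied, the bit-by-bit CRC-32 calculation (in the reflected form used by Ethernet and zlib, with an initial value and final XOR of 0xFFFFFFFF) can be written out and verified against the standard check values for the IEEE and Castagnoli polynomials:

```python
import zlib

def crc32_reflected(data: bytes, poly_reflected: int) -> int:
    """Bit-by-bit reflected CRC-32 (init 0xFFFFFFFF, final XOR 0xFFFFFFFF)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # if the low bit is set, "subtract" (XOR) the polynomial divisor
            crc = (crc >> 1) ^ (poly_reflected if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

ETHERNET = 0xEDB88320    # 0x04C11DB7 bit-reversed (IEEE 802.3 / zlib)
CASTAGNOLI = 0x82F63B78  # 0x1EDC6F41 bit-reversed (CRC-32C)

data = b"123456789"
assert crc32_reflected(data, ETHERNET) == zlib.crc32(data) == 0xCBF43926
assert crc32_reflected(data, CASTAGNOLI) == 0xE3069283  # CRC-32C check value
```

Production implementations use lookup tables or hardware instructions rather than this bit-at-a-time loop, but the polynomial division is the same.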

A reference of different polynomial values vs. Hamming distance is available here:


Implications in Production Networks

  • What are the causes of corruption in frames/packets?
    There are many potential causes: faulty memory in an end host or intermediate network device; a bug in a device's Ethernet/IPv4/TCP/UDP software implementation (e.g. miscalculating the CRC/checksum value, a buffer overflow, bad pointer arithmetic); a physical cabling/SFP/PHY/MAC/memory/fabric issue; or uncontrollable external events like solar flares. Within an individual device, PCI Express has its own CRC32 mechanism, RAM can be ECC protected, and CPUs also use ECC for on-chip cache.

  • Does IPv6 have a checksum like IPv4?
    No. IPv6 headers might be corrupted and this would go undetected at layer 3; IPv6 places the burden of data integrity onto the upper layers (e.g. TCP/UDP, and also TLS). Note that "while IPv4 allowed UDP datagram headers to have no checksum (indicated by 0 in the header field), IPv6 requires a checksum in UDP headers". IPv6 does have optional IPsec support built directly into the protocol, unlike IPv4 for which it was an addition. When used, IPsec provides strong data integrity checks (e.g. MD5, SHA-1, SHA-2, etc., which should be stronger than CRC-32).

  • What is the effect of fragmentation?
    When IPv4 packets are fragmented, new checksum/CRC values are calculated for the individual fragments. If an undetectable error occurs during the transmission/reception of one of the fragments, it will not be evident until the packet is reassembled. This requires the entire packet to be re-fragmented and retransmitted, wasting network resources. IPv6 does not allow fragmentation by intermediate routers (only the source host may fragment).

  • How can network devices be protected?
    For OSPF/IS-IS/LDP/RSVP/BGP/BFD, operators can enable MD5 hashing or TCP-AO to catch issues not caught by the Ethernet CRC. MD5 is still flawed and won't catch everything though, and SHA-1 collisions now exist too, so these could in theory be forged, although this is extremely unlikely.

  • How does MPLS affect error detection?
    Layer 2 switches only check the Ethernet CRC of an incoming frame, and layer 3 routers only check the IPv4 checksum. With MPLS, LSRs might not perform an IP checksum check at all, as a P node doesn't know what is inside the MPLS VPN; this is typically the case when TTL hiding is used inside MPLS IP VPNs. A router that terminates a layer 3 connection needs to decrement the IP TTL field (and maybe perform NAT too), so it must update the packet headers and recalculate the checksum. If no changes are made though, or MPLS is used, errors may go undetected until they reach the edge of the network.