Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name came from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model as well as a common addressing format and Media Access Control at the Data Link Layer.
Ethernet is standardized as IEEE 802.3. In its twisted-pair form for connecting end systems to the network, combined with its fiber optic form for site backbones, Ethernet is the most widespread wired LAN technology. It has been in use from around 1980[1] to the present, largely replacing competing LAN standards such as token ring, FDDI, and ARCNET.
Ethernet was developed at Xerox PARC between 1973 and 1975.[2] It was inspired by ALOHAnet which Robert Metcalfe had studied as part of his Ph.D. dissertation.[3] In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors.[4] In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper.[5][note 1]
Metcalfe left Xerox in 1979 to promote the use of personal computers and local area networks (LANs), forming 3Com. He convinced Digital Equipment Corporation (DEC), Intel, and Xerox to work together to promote Ethernet as a standard, the so-called "DIX" standard, for "Digital/Intel/Xerox"; it specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. The first standard draft was published on September 30, 1980 by the Institute of Electrical and Electronics Engineers (IEEE). Support for Ethernet's carrier sense multiple access with collision detection (CSMA/CD) in other standardization bodies (ECMA, IEC, and ISO) was instrumental in overcoming delays in finalizing the Ethernet standard that stemmed from the difficult decision processes in the IEEE and from the competing Token Ring proposal strongly supported by IBM. Ethernet initially competed with two largely proprietary systems, Token Ring and Token Bus. These proprietary systems soon found themselves buried under a tidal wave of Ethernet products. In the process, 3Com became a major company. 3Com built the first 10 Mbit/s Ethernet adapter (1981). This was followed quickly by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it far and away the largest computer network in the world at that time.[6]
Through the first half of the 1980s, DEC's Ethernet implementation used a coaxial cable about the diameter of a US nickel, which became known as Thick Ethernet when its successor, Thinnet (thin Ethernet), was introduced. Thinnet uses a cable similar to the cable-television cabling of the era. The emphasis was on making installation of the cable easier and less costly.
The observation that there was plenty of excess capacity in unused unshielded twisted pair (UTP) telephone wiring already installed in commercial buildings provided another opportunity to expand the installed base, and thus twisted-pair Ethernet was the next logical development in the mid-1980s, beginning with StarLAN. UTP-based Ethernet became widely known with the 10BASE-T standard. This system replaced the coaxial cable systems with hubs linked via UTP.
In 1990, Kalpana introduced the first Ethernet switch,[7] which abandoned the CSMA/CD scheme in favor of a switched full-duplex system offering higher performance at a lower cost than routers.
Notwithstanding its technical merits, timely standardization was instrumental to the success of Ethernet. It required well-coordinated and partly competitive activities in several standardization bodies such as the IEEE, ECMA, IEC, and finally ISO.
In February 1980, IEEE started a project, IEEE 802, for the standardization of local area networks (LANs).[8]
The "DIX-group" with Gary Robinson (DEC), Phil Arst (Intel) and Bob Printis (Xerox) submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. Since IEEE membership is open to all professionals, including students, the group received countless comments on this brand-new technology.
In addition to CSMA/CD, Token Ring (supported by IBM) and Token Bus (selected and henceforward supported by General Motors) were also considered as candidates for a LAN standard. Due to the goal of IEEE 802 to forward only one standard and due to the strong company support for all three designs, the necessary agreement on a LAN standard was significantly delayed.
In the Ethernet camp, this delay put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products. With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe (3Com) strongly supported a proposal by Fritz Röscheisen (Siemens Private Networks) for an alliance in the emerging office communication market, including Siemens' support for the international standardization of Ethernet (April 10, 1981). Ingrid Fromm, Siemens' representative to IEEE 802, quickly achieved broader support for Ethernet beyond the IEEE by establishing a competing Task Group "Local Networks" within the European standards body ECMA TC24. As early as March 1982, ECMA TC24 and its corporate members reached agreement on a standard for CSMA/CD based on the IEEE 802 draft. ECMA's speedy action contributed decisively to the conciliation of opinions within the IEEE and to the approval of IEEE 802.3 CSMA/CD by the end of 1982.
Approval of Ethernet on the international level was achieved by a similar, cross-partisan action with Fromm as liaison officer working to integrate IEC TC83 and ISO TC97SC6, and the ISO/IEEE 802/3 standard was approved in 1984.
Ethernet is an evolving technology. Evolutions have included higher bandwidth, improved media access control methods and changes to the physical medium. Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. There are many variants of Ethernet in common use.
Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address.[note 2] Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected through bridging.
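As a concrete illustration of the addressing described above, the following minimal Python sketch (not part of any standard implementation) splits a 48-bit MAC address into the fields the IEEE defines: the three-octet organizationally unique identifier (OUI) that makes burned-in addresses globally unique, the individual/group bit that marks multicast addresses, and the universal/local bit that marks locally administered ones.

```python
def parse_mac(mac: str) -> dict:
    """Split a textual MAC address into its IEEE 802 fields."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6, "a MAC address is exactly 48 bits (6 octets)"
    return {
        # First three octets: the OUI, assigned by the IEEE to the
        # adapter's manufacturer; the vendor fills in the last three.
        "oui": octets[:3].hex(":"),
        # Least-significant bit of the first octet:
        # 0 = individual (unicast), 1 = group (multicast).
        "multicast": bool(octets[0] & 0x01),
        # Next bit: 0 = globally unique (burned in),
        # 1 = locally administered by a network operator.
        "locally_administered": bool(octets[0] & 0x02),
    }

print(parse_mac("01:00:5e:00:00:01"))  # a multicast MAC address
print(parse_mac("00:1a:2b:3c:4d:5e"))  # a globally unique unicast MAC
```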
Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, eliminating the need for installation of a separate network card.
Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used were similar to those used in radio systems;[note 3] the common cable providing the communication channel was likened to the luminiferous ether, and it was from this reference that the name "Ethernet" was derived.
The advantage of CSMA/CD was that, unlike Token Ring and Token Bus, all nodes could "see" each other directly. All "talkers" shared the same medium (a single coaxial cable), but this was also a limitation: with only one speaker at a time, packets had to be of a minimum size to guarantee that the leading edge of the propagating wave of the message reached all parts of the medium before the transmitter stopped transmitting, thus guaranteeing that collisions (two or more packets initiated within a window of time that forced them to overlap) would be discovered. Minimum packet size and the physical medium's total length were thus closely linked.
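This coupling can be made concrete with slot-time arithmetic. The sketch below uses the standard figures for 10 Mbit/s Ethernet (a 512-bit, i.e. 64-byte, minimum frame) together with an assumed signal propagation speed of roughly two thirds of the speed of light in coaxial cable; the result is only an upper bound, as repeater and transceiver delays kept real networks to much shorter spans.

```python
BIT_RATE = 10_000_000   # bits per second (10 Mbit/s Ethernet)
MIN_FRAME_BITS = 512    # 64 bytes: the 802.3 minimum frame size
PROPAGATION = 2e8       # metres per second, ~0.66 c in coax (an assumption)

# The sender must still be transmitting when a collision at the far end
# propagates back to it, so the frame must outlast the round trip:
#   MIN_FRAME_BITS / BIT_RATE >= 2 * max_length / PROPAGATION
transmit_time = MIN_FRAME_BITS / BIT_RATE        # 51.2 microseconds
max_one_way_length = PROPAGATION * transmit_time / 2

print(f"transmit time for a 64-byte frame: {transmit_time * 1e6:.1f} us")
print(f"upper bound on end-to-end length: {max_one_way_length / 1000:.2f} km")
```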
Ethernet originally used a shared coaxial cable (the shared medium) winding around a building or campus to every attached machine. A scheme known as carrier sense multiple access with collision detection (CSMA/CD) governed the way the computers shared the channel. This scheme was simpler than the competing token ring or token bus technologies. Computers were connected to an Attachment Unit Interface (AUI) transceiver, which was in turn connected to the cable (later, with thin Ethernet, the transceiver was integrated into the network adapter). While a simple passive wire was highly reliable for small Ethernets, it was not reliable for large extended networks, where damage to the wire in a single place, or a single bad connector, could make the whole Ethernet segment unusable. Multipoint systems are also prone to strange failure modes when an electrical discontinuity reflects the signal in such a manner that some nodes work properly while others work slowly because of excessive retries, or not at all.[note 4] These could be much more painful to diagnose than a complete failure of the segment.
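The CSMA/CD procedure itself is compact enough to sketch in a few lines. In the Python simulation below, collisions occur with a fixed probability rather than on a real shared medium (an assumption for illustration), and carrier sensing and the jam signal are elided; the retry limit, backoff cap, and slot time are the standard 802.3 values for 10 Mbit/s operation.

```python
import random
import time

MAX_ATTEMPTS = 16      # 802.3 abandons a frame after 16 attempts
BACKOFF_CAP = 10       # the backoff window stops growing after 10 collisions
SLOT_TIME_S = 51.2e-6  # one slot time at 10 Mbit/s

def csma_cd_send(collision_probability=0.3):
    """Sketch of the CSMA/CD transmit loop with truncated binary
    exponential backoff; collisions are simulated with a fixed
    probability instead of a real shared medium."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        # 1. Carrier sense: wait until the medium is idle (elided here).
        # 2. Transmit while listening; a collision may be detected.
        if random.random() > collision_probability:
            return attempt                    # delivered on this attempt
        # 3. On collision, jam (elided), then back off for a random
        #    number of slot times drawn from [0, 2**k - 1].
        k = min(attempt, BACKOFF_CAP)
        time.sleep(random.randrange(2 ** k) * SLOT_TIME_S)
    return None                               # excessive collisions: give up

result = csma_cd_send()
print(f"delivered on attempt {result}" if result else "gave up")
```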
Since all communications happen on the same wire, any information sent by one computer is received by all, even if that information is intended for just one destination. The network interface card interrupts the CPU only when applicable packets are received: the card ignores information not addressed to it.[note 5][note 6] Use of a single cable also means that the bandwidth is shared, so that network traffic can slow to a crawl when, for example, the network and nodes restart after a power failure.
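The acceptance test a card applies to each frame can be sketched as below. This is an illustrative simplification: real hardware also filters multicast frames against a table (often a hash) of subscribed groups, whereas this sketch simply accepts all multicast traffic.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

def nic_accepts(dest_mac: str, own_mac: str, promiscuous: bool = False) -> bool:
    """Sketch of the per-frame acceptance test: pass frames for this
    station, broadcasts, and multicast; drop everything else without
    interrupting the CPU."""
    if promiscuous:                       # e.g. a protocol analyzer sees all
        return True
    if dest_mac in (own_mac, BROADCAST):  # our unicast, or broadcast
        return True
    first_octet = int(dest_mac.split(":")[0], 16)
    return bool(first_octet & 0x01)       # group (multicast) bit set

print(nic_accepts("00:1a:2b:3c:4d:5e", own_mac="00:1a:2b:3c:4d:5e"))  # True
print(nic_accepts("00:99:88:77:66:55", own_mac="00:1a:2b:3c:4d:5e"))  # False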
For signal degradation and timing reasons, coaxial Ethernet segments had a restricted size. Larger networks could be built by using an Ethernet repeater. Initial repeaters had only two ports, but they gave way to devices with four, six, eight, or more ports. People recognized the advantages of cabling in a star topology, primarily that only faults at the star point will result in a badly partitioned network.
Ethernet on unshielded twisted-pair cables (UTP), beginning with StarLAN and continuing with 10BASE-T, was designed for point-to-point links only and all termination was built into the device. This changed repeaters from a specialist device used at the center of large networks to a device that every twisted pair-based network with more than two machines had to use. The tree structure that resulted from this made Ethernet networks more reliable by preventing faults with one peer or its associated cable from affecting other devices on the network,[note 7] although a failure of a hub or an inter-hub link could still affect lots of users. Also, since twisted pair Ethernet is point-to-point and terminated inside the hardware, the total empty panel space required around a port is much reduced, making it easier to design hubs with lots of ports and to integrate Ethernet onto computer motherboards.
Despite the physical star topology, hubbed Ethernet networks still use half-duplex and CSMA/CD, with only minimal activity by the hub, primarily the collision enforcement signal, in dealing with packet collisions. Every packet is sent to every port on the hub, so bandwidth and security problems are not addressed. The total throughput of the hub is limited to that of a single link, and all links must operate at the same speed.
Collisions reduce throughput by their very nature. In the worst case, when there are many hosts with long cables that attempt to transmit many short frames, excessive collisions can reduce throughput dramatically. However, a Xerox report in 1980 summarized the results of having 20 fast nodes attempting to transmit packets of various sizes as quickly as possible on the same Ethernet segment.[9] The results showed that, even for the smallest Ethernet frames (64 bytes), 90% throughput on the LAN was the norm. This is in comparison with token-passing LANs (token ring, token bus), all of which suffer throughput degradation as each new node comes into the LAN, due to token waits.
This report was controversial, as modeling showed that collision-based networks became unstable under loads as low as 40% of nominal capacity. Many early researchers failed to understand the subtleties of the CSMA/CD protocol and how important it was to get the details right, and were really modeling somewhat different networks (usually not as good as real Ethernet).[10]
While repeaters could isolate some aspects of Ethernet segments, such as cable breakages, they still forwarded all traffic to all Ethernet devices. This created practical limits on how many machines could communicate on an Ethernet network. The entire network was one collision domain, and all hosts had to be able to detect collisions anywhere on the network, limiting the number of repeaters between the farthest nodes. Finally, segments joined by repeaters all had to operate at the same speed, making phased-in upgrades impossible.
To alleviate these problems, bridging was created to communicate at the data link layer while isolating the physical layer. With bridging, only well-formed Ethernet packets are forwarded from one Ethernet segment to another; collisions and packet errors are isolated. Bridges learn where devices are, by watching MAC addresses, and do not forward packets across segments when they know the destination address is not located in that direction.
Prior to discovering the network devices on the different segments, Ethernet bridges (and switches) work somewhat like Ethernet hubs, passing all traffic between segments. However, as the bridge discovers the addresses associated with each port, it forwards network traffic only to the necessary segments, improving overall performance. Broadcast traffic is still forwarded to all network segments. Bridges also overcame the limits on the number of segments between two hosts and allowed the mixing of speeds, both of which became very important with the introduction of Fast Ethernet.
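The learn-flood-filter behavior described in the last two paragraphs can be condensed into a short sketch. Frame payloads and the aging-out of stale table entries are elided here, and the port numbers and MAC addresses are made up for illustration.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

class LearningBridge:
    """Sketch of a transparent bridge's learn-and-forward logic."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}          # learned MAC address -> port

    def handle_frame(self, in_port, src_mac, dst_mac):
        # Learn: the source must be reachable through the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: flood broadcasts and unknown destinations; filter
        # frames whose destination lies back where they came from;
        # otherwise forward to the single learned port.
        out = self.mac_table.get(dst_mac)
        if dst_mac == BROADCAST or out is None:
            return [p for p in self.ports if p != in_port]   # flood
        return [] if out == in_port else [out]

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.handle_frame(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # [2, 3]
print(bridge.handle_frame(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # [1]
```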
Early bridges examined each packet one by one using software on a CPU, and some of them were significantly slower than hubs (multi-port repeaters) at forwarding traffic, especially when handling many ports at the same time. This was in part because the entire Ethernet packet would be read into a buffer, the destination address compared with an internal table of known MAC addresses and a decision made as to whether to drop the packet or forward it to another or all segments.
In 1990, the networking company Kalpana introduced its EtherSwitch, the first Ethernet switch. This worked somewhat differently from an Ethernet bridge, in that only the header of the incoming packet would be examined before it was either dropped or forwarded to another segment. This greatly reduced the forwarding latency and the processing load on the network device. One drawback of this cut-through switching method was that packets that had been corrupted at a point beyond the header could still be propagated through the network, so a jabbering station could continue to disrupt the entire network. The remedy for this was store-and-forward switching, in which the packet is read into a buffer on the switch in its entirety, verified against its checksum, and then forwarded. This was essentially a return to the original approach of bridging, but with the advantage of more powerful, application-specific processors: the bridging is done in hardware, allowing packets to be forwarded at full wire speed. Note that the term switch was coined by device manufacturers and does not appear in the IEEE 802.3 standard.
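The trade-off between the two forwarding modes can be sketched as follows. This is an illustration, not a switch implementation: zlib's CRC-32 stands in for the 802.3 frame check sequence (whose bit ordering differs), and the `emit` callback represents the egress port.

```python
import io
import zlib

def cut_through(stream: io.BytesIO, emit) -> None:
    """Cut-through sketch: start forwarding once the 14-byte header
    (destination, source, type) has arrived. The trailing CRC is never
    checked, so a frame corrupted past the header still propagates."""
    emit(stream.read(14))      # forward after reading only the header
    emit(stream.read())        # relay the remainder as it streams in

def store_and_forward(stream: io.BytesIO, emit) -> None:
    """Store-and-forward sketch: buffer the whole frame, verify its
    trailing 32-bit checksum, and only then forward, trading added
    latency for error isolation."""
    frame = stream.read()
    body, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "little") == fcs:
        emit(frame)            # only intact frames leave the switch

body = b"\x00" * 60                                   # minimal dummy frame
frame = body + zlib.crc32(body).to_bytes(4, "little")
store_and_forward(io.BytesIO(frame), lambda b: print(len(b), "bytes forwarded"))
```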
Since packets are typically only delivered to the port they are intended for, traffic on a switched Ethernet is slightly less public than on shared-medium Ethernet. Despite this, switched Ethernet should still be regarded as an insecure network technology, because it is easy to subvert switched Ethernet systems by means such as ARP spoofing and MAC flooding. The bandwidth advantages, the slightly better isolation of devices from each other, the ability to easily mix different speeds of devices and the elimination of the chaining limits inherent in non-switched Ethernet have made switched Ethernet the dominant network technology.
When a twisted pair or fiber link segment is used and neither end is connected to a hub, full-duplex Ethernet becomes possible over that segment. In full duplex mode both devices can transmit and receive to/from each other at the same time, and there is no collision domain. This doubles the aggregate bandwidth of the link and is sometimes advertised as double the link speed (e.g. 200 Mbit/s) to account for this. However, this is misleading as performance will only double if traffic patterns are symmetrical (which in reality they rarely are). The elimination of the collision domain also means that all the link's bandwidth can be used and that segment length is not limited by the need for correct collision detection (this is most significant with some of the fiber variants of Ethernet).
Simple switched Ethernet networks, while an improvement over hub-based Ethernet, suffer from several problems: single points of failure; attacks that trick switches or hosts into sending data to a machine even if it is not intended for it; scalability and security issues with regard to broadcast radiation and multicast traffic; and bandwidth choke points where a lot of traffic is forced down a single link.
Advanced networking features in switches and routers combat these issues through a number of means: the Spanning Tree Protocol, which maintains the active links of the network as a tree while allowing physical loops for redundancy; various port protection features; virtual LANs, which keep different classes of users separate while using the same physical infrastructure; multilayer switching, which routes between those classes; and link aggregation, which adds bandwidth to overloaded links and provides some measure of redundancy.
The Ethernet physical layer evolved over a considerable time span and encompasses many physical media interfaces and speeds spanning several orders of magnitude. The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize Category 5 cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. Fiber optic variants of Ethernet offer high performance, electrical isolation, and distance (up to tens of kilometers with some versions). In general, network protocol stack software works similarly on all varieties.
A data packet on the wire is called a frame. A frame begins with a preamble and a start frame delimiter, followed by an Ethernet header containing the source and destination MAC addresses. The middle section of the frame consists of payload data, including any headers for other protocols (for example, the Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of the data in transit.
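The layout just described can be illustrated by assembling a frame in code. In the sketch below, zlib's CRC-32 again stands in for the 802.3 frame check sequence (the real FCS bit ordering differs), and short payloads are zero-padded so the frame meets the 64-byte minimum discussed earlier.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Sketch of the on-the-wire frame layout: preamble, start frame
    delimiter, header, padded payload, and 32-bit CRC trailer."""
    preamble = b"\x55" * 7 + b"\xd5"               # 7 preamble octets + SFD
    header = dst + src + struct.pack("!H", ethertype)
    body = header + payload.ljust(46, b"\x00")     # pad payload to 46 octets
    fcs = struct.pack("<I", zlib.crc32(body))      # 32-bit CRC trailer
    return preamble + body + fcs

frame = build_frame(dst=b"\xff" * 6,                   # broadcast destination
                    src=b"\x00\x1a\x2b\x3c\x4d\x5e",   # example source MAC
                    ethertype=0x0800,                  # EtherType for IPv4
                    payload=b"hello")
print(len(frame), "octets on the wire")                # 8 + 60 + 4 = 72
```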
Autonegotiation is the procedure by which two connected devices choose common transmission parameters, such as speed and duplex mode. Autonegotiation was first introduced as an optional feature for Fast Ethernet and is backward compatible with 10BASE-T; it is mandatory for Gigabit Ethernet.
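Conceptually, each end advertises the modes it supports and both independently select the best mode they share. The sketch below uses simplified ability names and a priority order that loosely follows the 802.3 priority resolution table; it is an illustration of the idea, not of the link-pulse signaling that carries the advertisements.

```python
# Simplified priority order: fastest, full-duplex modes first.
PRIORITY = [
    "1000BASE-T full-duplex", "1000BASE-T half-duplex",
    "100BASE-TX full-duplex", "100BASE-TX half-duplex",
    "10BASE-T full-duplex", "10BASE-T half-duplex",
]

def resolve(local, partner):
    """Each end advertises its abilities; both independently pick the
    highest-priority mode they share, so both reach the same result."""
    common = set(local) & set(partner)
    for mode in PRIORITY:                    # scan from best to worst
        if mode in common:
            return mode
    return None                              # no common mode: link stays down

print(resolve(["1000BASE-T full-duplex", "100BASE-TX full-duplex"],
              ["100BASE-TX full-duplex", "100BASE-TX half-duplex"]))
# -> 100BASE-TX full-duplex
```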