Gigabit Ethernet PCI Express network adapter

I was in no hurry to upgrade my home network from 100 Mbit/s to 1 Gbit/s, which is strange for me, since I transfer a large number of files over the network. However, when I spend money on a computer or infrastructure upgrade, I expect an immediate performance boost in the apps and games I run. Many users like to treat themselves to a new video card, a new CPU, or some gadget, yet for some reason networking equipment does not attract the same enthusiasm. Indeed, it is hard to invest hard-earned money in network infrastructure instead of another technological birthday present.

However, my bandwidth requirements are very high, and at some point I realized that the 100 Mbit/s infrastructure was no longer enough. All of my home computers already have integrated 1 Gbit/s adapters (on their motherboards), so I took the price list of the nearest computer store and looked at what I would need to convert my entire network infrastructure to 1 Gbit/s.

No, a home gigabit network is not that complicated at all.

I bought and installed all the equipment. I remember that copying a large file over the 100 Mbit/s network used to take about a minute and a half. After the upgrade to 1 Gbit/s, the same file copied in 40 seconds. The performance increase was pleasing, but I still did not get the tenfold improvement one might expect from comparing the 100 Mbit/s and 1 Gbit/s throughput of the old and new networks.

What is the reason?

For a gigabit network, all parts must support 1 Gbps. For example, if you have Gigabit network cards and associated cables installed, but the hub/switch only supports 100 Mbps, then the entire network will operate at 100 Mbps.

The first requirement is the network controller. It is best if each computer on the network is equipped with a gigabit network adapter (discrete or integrated on the motherboard). This requirement is the easiest to satisfy, since most motherboard manufacturers have been integrating gigabit network controllers for the last couple of years.

The second requirement is the network cable, which must also support 1 Gbit/s. There is a common misconception that gigabit networks require Cat 5e cable; in fact, even old Cat 5 cable supports 1 Gbit/s. However, Cat 5e cable has better characteristics, so it is the better choice for gigabit networks, especially over longer runs. Cat 5e cable is also among the cheapest today, since the old Cat 5 standard is obsolete. Newer and more expensive Cat 6 cable promises even better performance for gigabit networks. We'll compare the performance of Cat 5e and Cat 6 cables later in this article.

The third, and probably most expensive, component in a gigabit network is the 1 Gbit/s hub/switch. It is better to use a switch (perhaps paired with a router), since a hub is not the most intelligent device: it simply broadcasts all network data to all available ports, which leads to a large number of collisions and slows down network performance. If you need high performance, you cannot do without a gigabit switch, since it forwards network data only to the port that needs it, which effectively increases network speed compared to a hub. A router usually contains a built-in switch (with multiple LAN ports) and also lets you connect your home network to the Internet. Most home users understand the benefits of a router, so a gigabit router is a very attractive option.

How fast should gigabit be? When you hear the prefix "giga", you probably think of 1000 megabytes and assume that a gigabit network should deliver 1000 megabytes per second. If you think so, you are not alone. But, alas, in reality everything is different.

What is gigabit? This is 1000 megabits, not 1000 megabytes. There are 8 bits in one byte, so let's just do the math: 1,000,000,000 bits divided by 8 bits = 125,000,000 bytes. There are about a million bytes in a megabyte, so a gigabit network should provide a theoretical maximum data transfer rate of about 125 MB/s.

Sure, 125 MB/s doesn't sound as impressive as gigabit, but think about it: a network at that speed should theoretically transfer a gigabyte of data in just eight seconds. And a 10 GB archive should be transferred in just a minute and 20 seconds. The speed is incredible: just remember how long it took to transfer a gigabyte of data before USB sticks became as fast as they are today.
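To make the arithmetic concrete, here is a minimal Python sketch of the same back-of-the-envelope calculation; the 1 GB and 10 GB file sizes are just the examples from the text, and protocol overhead is deliberately ignored.

```python
# Convert a line rate in bits per second into a payload ceiling in MB/s,
# ignoring all protocol overhead (an upper bound, not a prediction).
line_rate_bits = 1_000_000_000            # 1 Gbit/s
bytes_per_second = line_rate_bits / 8     # 125,000,000 bytes/s
mb_per_second = bytes_per_second / 1_000_000   # ~125 MB/s

for size_gb in (1, 10):                   # the example file sizes from the text
    seconds = size_gb * 1_000_000_000 / bytes_per_second
    print(f"{size_gb} GB at {mb_per_second:.0f} MB/s -> about {seconds:.0f} s")
```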

Our expectations were high, so we decided to transfer the file over a gigabit network and enjoy speeds close to 125 MB/s. We don't have any specialized fancy hardware: a simple home network with some old but decent technology.

Copying a 4.3 GB file from one home computer to another proceeded at an average speed of 35.8 MB/s (we ran the test five times). That is only about 30% of the 125 MB/s theoretical ceiling of a gigabit network.

What are the causes of the problem?

Selecting components for a gigabit network is quite simple, but getting the network to run at maximum speed is much more difficult. The factors that can slow a network down are numerous, but as we discovered, it mostly comes down to how fast the hard disks can feed data to the network controller.

The first limitation to take into account is the interface between the gigabit network controller and the system. If your controller is connected via the old PCI bus, the amount of data it can theoretically transfer is 133 MB/s. For Gigabit Ethernet's 125 MB/s throughput this seems sufficient, but remember that PCI bus bandwidth is shared across the whole system. Every additional PCI card and many system components use the same bandwidth, reducing the resources available to the network card. Controllers with the newer PCI Express (PCIe) interface do not have this problem, since each PCIe lane provides at least 250 MB/s of bandwidth, dedicated to the device.

The next important factor that affects network speed is cables. Many experts point out that if network cables are laid next to power cables that are sources of interference, low speeds are guaranteed. Long cable lengths are also problematic, as Cat 5e copper cables are certified to a maximum length of 100 meters.

Some experts recommend running cables to the new Cat 6 standard instead of Cat 5e. Often such recommendations are difficult to justify, but we will try to test the effect of cable category on a small gigabit home network.

Let's not forget about the operating system. Windows 98 SE (and older operating systems) are admittedly rarely used in a gigabit environment, but it is worth mentioning that they cannot take advantage of gigabit Ethernet, since their TCP/IP stack can barely saturate even a 100 Mbit/s connection. Windows 2000 and later versions of Windows will work, although older operating systems need some tweaking to make the most of the network. We will use 32-bit Windows Vista for our tests, and while Vista does not have the best reputation for some tasks, it supports gigabit networking out of the box.

Now let's move on to hard drives. Even the older IDE interface with the ATA/133 specification should be sufficient to support a theoretical transfer speed of 133 MB/s, and the newer SATA specification fits the bill, since it provides at least 1.5 Gbit/s (150 MB/s) of throughput. However, while cables and controllers can handle data transfer at such speeds, the hard drives themselves cannot.

Take, for example, a typical modern 500 GB hard drive, which should provide a sustained throughput of about 65 MB/s. At the beginning of the platters (the outer tracks) the speed may be higher, but it drops as you move toward the inner tracks, where data is read at about 45 MB/s.

We thought we had covered all possible bottlenecks. What was left to do? We needed to run some tests and see if we could get the network performance up to the theoretical limit of 125 MB/s.

Test configuration

Test systems

Server system:
    CPU: Intel Core 2 Duo E6750 (Conroe), 2.66 GHz, FSB-1333, 4 MB cache
    Motherboard: ASUS P5K, Intel P35, BIOS 0902
    Network: integrated Abit Gigabit LAN controller
    Memory: Wintec Ampo PC2-6400, 2x 2048 MB, DDR2-667, CL 5-5-5-15 at 1.8 V
    Video card: ASUS GeForce GTS 250 Dark Knight, 1 GB GDDR3-2200, 738 MHz GPU, 1836 MHz shader unit
    Hard drive 1: Seagate Barracuda ST3320620AS, 320 GB, 7200 rpm, 16 MB cache, SATA 300
    Hard drive 2: 2x Hitachi Deskstar 0A-38016 in RAID 1, 7200 rpm, 16 MB cache, SATA 300
    Power supply: Aerocool Zerodba 620W, 620 W, ATX12V 2.02

Client system:
    CPU: Intel Core 2 Quad Q6600 (Kentsfield), 2.7 GHz, FSB-1200, 8 MB cache
    Motherboard: MSI P7N SLI Platinum, Nvidia nForce 750i, BIOS A2
    Network: integrated nForce 750i Gigabit Ethernet controller
    Memory: A-Data EXTREME DDR2 800+, 2x 2048 MB, DDR2-800, CL 5-5-5-18 at 1.8 V
    Video card: MSI GTX260 Lightning, 1792 MB GDDR3-1998, 590 MHz GPU, 1296 MHz shader unit
    Hard drive: Western Digital Caviar WD5000AAJS-00YFA, 500 GB, 7200 rpm, 8 MB cache, SATA 300
    Power supply: Ultra HE1000X, ATX 2.2, 1000 W

Network switch: D-Link DGS-1008D, 8-port 10/100/1000 unmanaged gigabit desktop switch

Software and drivers:
    OS: Microsoft Windows Vista Ultimate 32-bit 6.0.6001, SP1
    DirectX version: DirectX 10
    Graphics driver: Nvidia GeForce 185.85

Tests and settings

Nodesoft Diskbench: version 2.5.0.5; file copy, creation, read, and batch benchmark
SiSoftware Sandra 2009 SP3: version 2009.4.15.92; CPU test = CPU Arithmetic/Multimedia, memory test = Bandwidth Benchmark

Before we move on to any benchmarks, we decided to test the hard drives offline to see what kind of throughput we can expect in an ideal scenario.

We have two PCs on our home gigabit network. The first, which we will call the server, is equipped with two disk subsystems. Its main hard drive is a 320 GB Seagate Barracuda ST3320620AS, a couple of years old. The server operates as NAS storage with a RAID 1 array consisting of two 1 TB Hitachi Deskstar 0A-38016 hard drives, mirrored for redundancy.

We called the second PC on the network the client; it has two hard drives, both 500 GB Western Digital Caviar WD5000AAJS-00YFA, about six months old.

We first tested the speed of the server and client hard drives to see what kind of performance we could expect from them, using the hard drive benchmark in the SiSoftware Sandra 2009 package.

Our dreams of achieving gigabit file transfer speeds were immediately dashed. Both single hard drives reached a maximum read speed of around 75 MB/s under ideal conditions. Since our tests run under real conditions and the drives are about 60% full, we can expect read speeds closer to the 65 MB/s we measured on both drives.

Now let's look at the performance of the RAID 1 array. The nice thing about it is that the hardware RAID controller can increase read performance by fetching data from both hard drives at the same time, much like a RAID 0 array; as far as we know, this effect occurs only with hardware RAID controllers, not with software RAID. In our tests the RAID array delivered much faster read performance than a single hard drive, so chances are good that we'll get high network file transfer speeds from the RAID 1 array. It delivered an impressive 108 MB/s peak throughput, but in reality performance should be closer to 88 MB/s, since the array is 55% full.

So we should get about 88 MB/s over a gigabit network, right? That's nowhere near the gigabit network's 125 MB/s ceiling, but it's much faster than a 100 Mbit/s network with its 12.5 MB/s ceiling, so getting 88 MB/s in practice would not be bad at all.

But it's not that simple. Just because the read speed of hard drives is quite high does not mean that they will write information quickly in real conditions. Let's run some disk writing tests before using the network. We'll start with our server and copy the 4.3GB image from the high-speed RAID array to the 320GB system hard drive and back again. We will then copy the file from the client's D: drive to its C: drive.
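For readers who want to repeat this kind of disk-to-disk measurement, here is a rough Python sketch of the procedure (Diskbench does essentially the same thing with more options). The paths are placeholders, and the timing includes only the copy call itself, so operating-system caching can skew individual runs; treat the numbers as approximate.

```python
import os
import shutil
import time

# Placeholder paths - point them at your own test file and target drive.
SRC = r"D:\test\image.iso"
DST = r"C:\test\image_copy.iso"

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst once and return the average throughput in MB/s."""
    size_mb = os.path.getsize(src) / 1_000_000
    start = time.perf_counter()
    shutil.copyfile(src, dst)        # plain sequential copy, like dragging the file
    elapsed = time.perf_counter() - start
    return size_mb / elapsed

# Average over several runs, as in the article (five runs).
results = [timed_copy(SRC, DST) for _ in range(5)]
print(f"average: {sum(results) / len(results):.1f} MB/s")
```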

As you can see, copying from the fast RAID array to drive C: gave an average speed of only 41 MB/s, and copying from the C: drive to the RAID 1 array dropped to just 25 MB/s. What's happening?

This is exactly what happens in reality: drive C: was released a little over a year ago, but it is 60% full and probably somewhat fragmented, so it doesn't break any write-speed records. There are other factors as well, namely how fast the system and its memory are overall. The RAID 1 array is built from relatively new hardware, but because of the redundancy the information must be written to two hard drives at the same time, which reduces performance. Although RAID 1 can provide high read performance, write speed has to be sacrificed. We could, of course, use a striped RAID 0 array, which gives high write and read speeds, but if one hard drive dies, all the information is lost. Overall, RAID 1 is the better option if you value the data stored on the NAS.

However, all is not lost. The new 500 GB Western Digital Caviar drive is capable of writing our file at 70.3 MB/s (averaged across five test runs), and also delivers a top speed of 73.2 MB/s.

With that said, we were expecting a real-world maximum transfer speed of 73 MB/s over a gigabit network from the NAS RAID 1 array to the client's C: drive. We'll also test file transfers from the client's C: drive to the server's C: drive to see if we can realistically expect 40MB/s in that direction.

Let's start with the first test, in which we sent a file from the client's C: drive to the server's C: drive.

As we can see, the results correspond to our expectations. A gigabit network, theoretically capable of 125 MB/s, sends data from the client's C: drive at the fastest possible speed, probably around 65 MB/s. But as we showed above, the server's C: drive can only write at about 40 MB/s.

Now let's copy the file from the server's high-speed RAID array to the client computer's C: drive.

Everything turned out as we expected. From our tests, we know that the client computer's C: drive is capable of writing data at about 70 MB/s, and gigabit network performance came very close to that speed.

Unfortunately, our results do not come close to the theoretical maximum throughput of 125 MB/s. Can we test the maximum network speed? Sure, but not in a realistic scenario. We will try to transfer information across the network from memory to memory to bypass any bandwidth limitations of hard drives.

To do this, we will create a 1 GB RAM disk on the server and client PCs, and then transfer the 1 GB file between these disks over the network. Since even slow DDR2 memory is capable of transferring data at speeds of more than 3000 MB/s, network bandwidth will be the limiting factor.
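A RAM disk plus a file copy is one way to take the hard drives out of the equation; another is to skip the file system entirely and stream a buffer from memory to memory over a TCP socket. The sketch below shows the idea; the port number and host address are arbitrary placeholders, and dedicated network benchmark tools will give more rigorous numbers.

```python
import socket
import time

PORT = 5001                  # arbitrary test port
CHUNK = bytes(1_000_000)     # a 1 MB buffer sent repeatedly straight from RAM
TOTAL = 1_000                # 1000 chunks = roughly 1 GB of payload

def server() -> None:
    """Run on the receiving PC: accept one connection and drain it."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):   # read until the sender closes the connection
                pass

def client(host: str) -> None:
    """Run on the sending PC: push TOTAL chunks and report the average MB/s."""
    with socket.create_connection((host, PORT)) as sock:
        start = time.perf_counter()
        for _ in range(TOTAL):
            sock.sendall(CHUNK)
    elapsed = time.perf_counter() - start
    print(f"{TOTAL / elapsed:.1f} MB/s")

# Usage: call server() on one machine, then client("192.168.1.10") on the other.
```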

We got a maximum speed of 111.4 MB/s on our gigabit network, which is very close to the theoretical limit of 125 MB/s. This is an excellent result and nothing to complain about, since actual throughput never reaches the theoretical maximum because of protocol overhead, errors, retransmissions, and so on.
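As a rough sanity check on that remaining gap, the sketch below estimates the best case for a TCP transfer with standard 1500-byte frames; the header sizes are the usual Ethernet/IP/TCP values and assume no TCP options and no jumbo frames.

```python
# Per-frame overhead on Gigabit Ethernet with a 1500-byte MTU:
# preamble+SFD 8, Ethernet header 14, FCS 4, interframe gap 12 -> 1538 bytes on the wire,
# of which 1500 - 20 (IP) - 20 (TCP) = 1460 bytes are application payload.
wire_bytes = 8 + 14 + 1500 + 4 + 12
payload_bytes = 1500 - 20 - 20
efficiency = payload_bytes / wire_bytes
print(f"efficiency ~{efficiency:.1%}, practical ceiling ~{efficiency * 125:.0f} MB/s")
# Roughly 95% and ~119 MB/s, so the measured 111 MB/s is already close to the practical limit.
```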

The conclusion will be as follows: today, the performance of information transfer over a gigabit network is limited by hard drives, that is, the transfer speed will be limited by the slowest hard drive participating in the process. Having answered the most important question, we can move on to speed tests depending on the cable configuration to make our article complete. Could optimizing cabling bring network speeds even closer to the theoretical limit?

Since performance in our tests was close to expected, we're unlikely to see any improvement by changing the cable configuration. But we still wanted to run tests to get closer to the theoretical speed limit.

We conducted four tests.

Test 1: default.

For this test, we used two cables about 8 meters long, each connected to a computer at one end and a gigabit switch at the other. We left the cables where they were laid, that is, next to the power cables and sockets.

For the second test we used the same 8-meter cables as in the first test, but moved the network cables as far as possible from power cords and extension cords.

For the third test, we removed one of the 8-meter cables and replaced it with a one-meter Cat 5e cable.

In the last test, we replaced both 8-meter Cat 5e cables with 8-meter Cat 6 cables.

In general, our testing of different cable configurations did not show a significant difference, but conclusions can be drawn.

Test 2: reducing interference from power cables.

On a small network like our home network, the tests show that you don't have to worry much about running LAN cables near electrical cables, outlets, and extension cords. Of course, the interference will be higher, but it will not have a serious effect on network speed. With all that said, it is still better to avoid laying network cables near power cables, and keep in mind that the situation may be different on your network.

Test 3: reduce the length of the cables.

This is not an entirely rigorous test, but we tried to detect a difference. Keep in mind that any change seen when replacing an eight-meter cable with a one-meter one may come simply from using different cables rather than from the difference in length. In any case, most tests show no significant difference, with the exception of an anomalous increase in throughput when copying from the client's C: drive to the server's C: drive.

Test 4: Replace Cat 5e cables with Cat 6 cables.

Again, we found no significant difference. Since our cables are only about 8 meters long, longer runs might show a bigger effect. But if your cable lengths are nowhere near the maximum, Cat 5e cable works quite well on a home gigabit network, even with 16 meters of cable between two computers.

It is interesting to note that manipulating the cables had no effect on data transfer between computer RAM disks. It's clear that some other component on the network was limiting performance to the magic number of 111 MB/s. However, such a result is still acceptable.

Do gigabit networks provide gigabit speeds? As it turns out, they almost do.

However, in real conditions, network speed will be seriously limited by hard drives. In a synthetic memory-to-memory scenario, our gigabit network produced performance very close to the theoretical limit of 125 MB/s. Regular network speeds, taking into account the performance of hard drives, will be limited to levels from 20 to 85 MB/s, depending on the hard drives used.

We also tested the effects of power cords, cable length, and upgrading from Cat 5e to Cat 6. On our small home network, none of these factors impacted performance significantly, although we note that on a larger, more complex network with longer cable runs these factors can have a much stronger influence.

In general, if you transfer a large number of files on your home network, then we recommend installing a gigabit network. Upgrading from a 100Mbps network will give you a nice performance boost; at least you'll get a 2x increase in file transfer speeds.

Gigabit Ethernet on your home network can provide greater performance gains if you read files from a fast NAS storage device that uses hardware RAID. On our test network, we transferred a 4.3GB file in just one minute. Over a 100 Mbps connection, the same file took about six minutes to copy.

Gigabit networks are becoming more and more accessible. Now all that remains is to wait for the speeds of hard drives to rise to the same level. In the meantime, we recommend creating arrays that can overcome the limitations of modern HDD technologies. Then you can squeeze more performance out of your gigabit network.

The modern world is becoming increasingly dependent on the volumes and flows of information moving in all directions, over wires and without them. It all started quite a long time ago and with more primitive means than today's achievements of the digital world. But we do not intend to describe all the types and methods by which one person has conveyed information to another. In this article I would like to offer the reader the story of a relatively recently created, and now successfully developing, standard for transmitting digital information called Ethernet.

The birth of the idea and technology of Ethernet took place within the walls of the Xerox PARC corporation, along with other first developments in the same direction. The official date of invention of Ethernet was May 22, 1973, when Robert Metcalfe wrote a memo to the head of PARC on the potential of Ethernet technology. However, it was patented only a few years later.

In 1979, Metcalfe left Xerox and founded 3Com, whose main task was to promote computers and local area networks (LANs). With the support of such eminent companies as DEC, Intel and Xerox, the Ethernet (DIX) standard was developed. After its official publication on September 30, 1980, it began competing with two major proprietary technologies, Token Ring and ARCNET, which were later superseded almost completely because of their lower efficiency and higher cost compared to Ethernet products.

Initially, according to the proposed standards (Ethernet v1.0 and Ethernet v2.0), they were going to use coaxial cable as the transmission medium, but later they had to abandon this technology and switch to using optical cables and twisted pair.

A key element of early Ethernet technology was its access control method: carrier sense multiple access with collision detection (CSMA/CD). The original standard specified a data transfer rate of 10 Mbit/s and a frame size from 72 to 1526 bytes, and also described the data encoding methods. The number of workstations in one shared network segment is limited to 1024, though smaller limits apply when stricter restrictions are set for a thin coaxial segment. This design soon became insufficient and was replaced in 1995 by the IEEE 802.3u Fast Ethernet standard with a speed of 100 Mbit/s, followed later by the IEEE 802.3z Gigabit Ethernet standard at 1000 Mbit/s. At the moment, 10 Gigabit Ethernet (IEEE 802.3ae), with a speed of 10,000 Mbit/s, is already in full use, and developments aimed at 100,000 Mbit/s (100 Gigabit Ethernet) are under way. But first things first.

A very important point underlying the Ethernet standard is its frame format. However, there are quite a few options. Here are some of them:

    Variant I - the original format, already out of use.

    Ethernet Version 2 or Ethernet frame II, also called DIX (an abbreviation of the first letters of the development companies DEC, Intel, Xerox) is the most common and is used to this day. Often used directly by the Internet protocol.

    Novell - internal modification of IEEE 802.3 without LLC (Logical Link Control).

    IEEE 802.2 LLC frame.

    IEEE 802.2 LLC/SNAP frame.

    In addition, an Ethernet frame may contain an IEEE 802.1Q tag to identify the VLAN to which it is addressed, and an IEEE 802.1p tag to indicate priority.

    Some Ethernet network cards manufactured by Hewlett-Packard used an IEEE 802.12 frame format that complies with the 100VG-AnyLAN standard.

Different frame types have different formats and MTU values.

Functional elements of Gigabit Ethernet technology

Note that manufacturers of Ethernet cards and other devices generally include support for several earlier data rate standards in their products. By default, using auto-negotiation of speed and duplex, the card drivers themselves determine the optimal operating mode for the connection between two devices, though manual selection is usually also available. So, by purchasing a device with a 10/100/1000 Ethernet port, we get the ability to work using 10BASE-T, 100BASE-TX, and 1000BASE-T technologies.
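As a side note for Linux users, the speed the driver actually negotiated can be read straight from sysfs; the short sketch below assumes a typical interface name such as eth0 and simply reports what the kernel sees.

```python
from pathlib import Path

IFACE = "eth0"   # assumed interface name; adjust to your system (e.g. enp3s0)

def link_speed_mbit(iface: str) -> int:
    """Return the negotiated link speed in Mbit/s as reported by the kernel."""
    # /sys/class/net/<iface>/speed contains 10, 100, 1000, ... (or -1 if the link is down)
    return int(Path(f"/sys/class/net/{iface}/speed").read_text().strip())

if __name__ == "__main__":
    print(f"{IFACE}: {link_speed_mbit(IFACE)} Mbit/s")
```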

Here is a chronology of Ethernet modifications, grouped by transmission speed.

First solutions:

    Xerox Ethernet - the original technology, with a speed of 3 Mbit/s; it existed in two versions, Version 1 and Version 2, and the frame format of the latter is still in wide use.

    10BROAD36 - not widely used. One of the first standards allowing work over long distances. Used broadband modulation technology similar to that used in cable modems. Coaxial cable was used as a data transmission medium.

    1BASE5 - also known as StarLAN, was the first modification of Ethernet technology to use twisted pair cables. It worked at a speed of 1 Mbit/s, but did not find commercial use.

More common 10 Mbit/s Ethernet modifications, well optimized for their time:

    10BASE5, IEEE 802.3 (also called "Thick Ethernet") - the original implementation of the technology with a data transfer rate of 10 Mbit/s. It uses 50 ohm coaxial cable (RG-8), with a maximum segment length of 500 meters.

    10BASE2, IEEE 802.3a (called "Thin Ethernet") - uses RG-58 cable, with a maximum segment length of 200 meters. To connect computers to each other and connect the cable to the network card, you need a T-connector, and the cable must have a BNC connector. Requires terminators at each end. For many years this standard was the main one for Ethernet technology.

    StarLAN 10 - The first development that uses twisted pair cables to transmit data at a speed of 10 Mbit/s. Later, it evolved into the 10BASE-T standard.

    10BASE-T, IEEE 802.3i - 4 wires of a twisted pair cable (two twisted pairs) of category 3 or category 5 are used for data transmission. The maximum segment length is 100 meters.

    FOIRL - (acronym for Fiber-optic inter-repeater link). The basic standard for Ethernet technology, using optical cable for data transmission. The maximum data transmission distance without a repeater is 1 km.

    10BASE-F, IEEE 802.3j - The main term for a family of 10 Mbit/s Ethernet standards using fiber optic cable over distances of up to 2 kilometers: 10BASE-FL, 10BASE-FB and 10BASE-FP. Of the above, only 10BASE-FL has become widespread.

    10BASE-FL (Fiber Link) - An improved version of the FOIRL standard. The improvement concerned an increase in the length of the segment to 2 km.

    10BASE-FB (Fiber Backbone) - Currently an unused standard, intended for combining repeaters into a backbone.

    10BASE-FP (Fiber Passive) - A passive star topology that does not require repeaters; developed but never used.

The most common and inexpensive choice at the time of writing: Fast Ethernet (100 Mbit/s):

    100BASE-T - The basic term for one of the three 100 Mbit/s Ethernet standards, using twisted pair cable as a data transmission medium. Segment length up to 100 meters. Includes 100BASE-TX, 100BASE-T4 and 100BASE-T2.

    100BASE-TX, IEEE 802.3u - An evolution of 10BASE-T technology. It uses a star topology over Category 5 twisted pair cable, of which only 2 pairs of conductors are actually used; the maximum data transfer rate is 100 Mbit/s.

    100BASE-T4 - 100 Mbps Ethernet over Category 3 cable. All 4 pairs are used. Now it is practically not used. Data transmission occurs in half-duplex mode.

    100BASE-T2 - Not used. 100 Mbps Ethernet over Category 3 cable. Only 2 pairs are used. Full duplex transmission mode is supported, when signals propagate in opposite directions on each pair. Transmission speed in one direction is 50 Mbit/s.

    100BASE-FX - 100 Mbps Ethernet over fiber optic cable. The maximum segment length is 400 meters in half-duplex mode (for guaranteed collision detection) or 2 kilometers in full-duplex mode over multimode optical fiber.

    100BASE-LX - 100 Mbps Ethernet over fiber optic cable. The maximum segment length is 15 kilometers in full duplex mode over a pair of single-mode optical fibers at a wavelength of 1310 nm.

    100BASE-LX WDM - 100 Mbps Ethernet over fiber optic cable. The maximum segment length is 15 kilometers in full duplex mode over one single-mode optical fiber at a wavelength of 1310 nm and 1550 nm. Interfaces come in two types, differ in the wavelength of the transmitter and are marked either with numbers (wavelength) or with one Latin letter A (1310) or B (1550). Only paired interfaces can operate in pairs, with a transmitter at 1310 nm on one side and a transmitter at 1550 nm on the other.

Gigabit Ethernet

    1000BASE-T, IEEE 802.3ab - 1 Gbps Ethernet standard. Category 5e or category 6 twisted pair cable is used. All 4 pairs are involved in data transmission. Data transfer speed - 250 Mbit/s over one pair.

    1000BASE-TX - A 1 Gbps Ethernet standard using only Category 6 twisted pair cable. The transmitting and receiving pairs are physically separated, two pairs in each direction, which greatly simplifies the design of transceiver devices. Data transfer speed - 500 Mbit/s over one pair. Practically not used.

    1000Base-X - general term to denote Gigabit Ethernet technology with pluggable GBIC or SFP transceivers.

    1000BASE-SX, IEEE 802.3z - 1 Gbit/s Ethernet technology using lasers with a permissible wavelength in the range of 770-860 nm and a transmitter output power from -10 to 0 dBm, with an ON/OFF (signal / no signal) ratio of at least 9 dB. Receiver sensitivity -17 dBm, receiver saturation 0 dBm. Over multimode fiber, the signal transmission range without a repeater is up to 550 meters.

    1000BASE-LX, IEEE 802.3z - 1 Gbit/s Ethernet technology using lasers with a permissible wavelength in the range of 1270-1355 nm and a transmitter output power from -13.5 to -3 dBm, with an ON/OFF (signal / no signal) ratio of at least 9 dB. Receiver sensitivity -19 dBm, receiver saturation -3 dBm. Over multimode fiber, the signal transmission range without a repeater is up to 550 meters. Optimized for long distances over single-mode fiber (up to 40 km).

    1000BASE-CX - Gigabit Ethernet technology for short distances (up to 25 meters), uses a special copper cable (Shielded Twisted Pair (STP)) with a characteristic impedance of 150 Ohms. Replaced by the 1000BASE-T standard and is no longer used.

    1000BASE-LH (Long Haul) - 1 Gbit/s Ethernet technology, uses single-mode optical cable, signal transmission range without a repeater is up to 100 kilometers.

Specifications of the 1000Base-X standards. The table lists, for each interface, the cable type, the minimum bandwidth in MHz*km, and the maximum distance in meters:

1000BASE-LX (1300 nm laser diode): single-mode fiber (9 µm); multimode fiber (50 µm); multimode fiber (62.5 µm)

1000BASE-SX (850 nm laser diode): multimode fiber (50 µm); multimode fiber (62.5 µm, two bandwidth grades)

Shielded twisted pair STP (150 ohm)

* The 1000BASE-SX and 1000BASE-LX standards require full-duplex mode
** Equipment from some manufacturers can provide longer distances; optical segments without intermediate repeaters/amplifiers can reach 100 km.

10 Gigabit Ethernet

The still rather expensive but in-demand new 10 Gigabit Ethernet standard includes seven physical media standards for LAN, MAN and WAN. It is currently covered by the IEEE 802.3ae amendment and should be included in the next revision of the IEEE 802.3 standard.

    10GBASE-CX4 - 10 Gigabit Ethernet technology for short distances (up to 15 meters), uses CX4 copper cable and InfiniBand connectors.

    10GBASE-SR - 10 Gigabit Ethernet technology for short distances (up to 26 or 82 meters, depending on cable type), using multimode fiber. It also supports distances of up to 300 meters over new multimode fiber (2000 MHz*km).

    10GBASE-LX4 - uses wavelength multiplexing to support distances of 240 to 300 meters over multimode fiber. Also supports distances up to 10 kilometers using single-mode fiber.

    10GBASE-LR and 10GBASE-ER - these standards support distances of up to 10 and 40 kilometers, respectively.

    10GBASE-SW, 10GBASE-LW and 10GBASE-EW - These standards use a physical interface compatible in speed and data format with the OC-192 / STM-64 SONET/SDH interface. They are similar to the 10GBASE-SR, 10GBASE-LR and 10GBASE-ER standards, respectively, as they use the same cable types and transmission distances.

    10GBASE-T, IEEE 802.3an-2006 - adopted in June 2006 after 4 years of development. Uses shielded twisted pair cable. Distances - up to 100 meters.

And finally, a few words about 100 Gigabit Ethernet (100GE), a technology that is still quite raw but already in demand.

In April 2007, following the IEEE 802.3 committee meeting in Ottawa, the Higher Speed Study Group (HSSG) agreed on the technical approaches for forming 100GE optical and copper links. At present, the 802.3ba working group has been formed to develop the 100GE specification.

As in previous developments, the 100GE standard will take into account not only the economic and technical feasibility of its implementation, but also backward compatibility with existing systems. The need for such speeds has already been convincingly demonstrated by leading companies: constantly growing volumes of personalized content, including video delivery from portals such as YouTube and other resources using IPTV and HDTV technologies, as well as video on demand. All of this drives the need for 100 Gigabit Ethernet among operators and service providers.

But against the backdrop of a large selection of old and promising new technological approaches within the Ethernet group, we want to dwell in more detail on a technology that today is only becoming fully widespread in use due to the falling cost of its components. Gigabit Ethernet can fully support applications such as video streaming, video conferencing, and complex image transmission that place increased demands on channel bandwidth. The benefits of increasing transmission speeds on corporate and home networks are becoming increasingly clear as prices for this class of equipment fall.

Today this IEEE standard has gained the most popularity. Adopted in June 1998, it was approved as IEEE 802.3z; at first, only optical cable was used as the transmission medium. With the approval of the 802.3ab standard the following year, Category 5 unshielded twisted pair was added as a transmission medium.

Gigabit Ethernet is a direct descendant of Ethernet and Fast Ethernet, which have proven themselves over almost twenty years of history, maintaining reliability and prospects for further use. Along with backward compatibility with previous solutions (the cabling structure remains unchanged), it provides a theoretical throughput of 1000 Mbit/s, which is approximately 120 MB per second. It is worth noting that this is almost equal to the speed of the 32-bit, 33 MHz PCI bus, which is why gigabit adapters are available both for 32-bit PCI (33 and 66 MHz) and for the 64-bit bus. Along with the increase in speed, Gigabit Ethernet inherits all the previous features of Ethernet, such as the frame format, CSMA/CD technology (carrier sense multiple access with collision detection), full duplex, and so on. Although the high speeds brought their own innovations, it is precisely this inheritance of the old standards that underlies the huge advantage and popularity of Gigabit Ethernet. Other solutions have of course been proposed, such as ATM and Fiber Channel, but with them the main advantage for the end consumer is immediately lost: a transition to another technology means massive rework and re-equipment of enterprise networks, while Gigabit Ethernet allows you to increase speed smoothly without changing the cabling. This approach has allowed Ethernet technology to take a dominant place in the field of network technologies and conquer more than 80 percent of the global data transmission market.

Structure of building an Ethernet network with smooth transitions to higher data transfer rates.

Initially, the Gigabit Ethernet standards were developed using only optical cable as the transmission medium - this is how Gigabit Ethernet received the 1000BASE-X interface. It is based on the Fiber Channel physical layer standard (a technology for interconnecting workstations, storage devices and peripheral nodes). Since that technology had already been approved, this borrowing greatly reduced the time needed to develop the Gigabit Ethernet standard.

Like most ordinary users, we are more interested in 1000Base-CX, which operates over shielded twisted pair (STP "twinax") at short distances, and in 1000BASE-T, which runs over Category 5 unshielded twisted pair. The main difference between 1000BASE-T and Fast Ethernet 100BASE-TX is that all four pairs are used (100BASE-TX uses only two). Each pair can carry data at 250 Mbit/s. The standard provides full-duplex transmission, with each pair carrying traffic in both directions simultaneously. Because of the strong interference during such transmission, gigabit transmission over twisted pair was technically much more difficult to implement than 100BASE-TX; it required the development of special scrambled, noise-resistant transmission and an intelligent unit for recognizing and restoring the signal at the receiver. Five-level pulse-amplitude coding (PAM-5) was chosen as the coding method in the 1000BASE-T standard.

The criteria for cable selection have also become more stringent. To limit pair-to-pair interference, return loss, delay and phase shift, the Category 5e specification was adopted for unshielded twisted pair cable.

Cable crimping for 1000BASE-T is carried out according to one of the following schemes:

Straight-through cable.

Crossover cable.

Cable crimping diagrams for 1000BASE-T

The innovations also affected the MAC layer of the standard. In Ethernet networks, the maximum distance between stations (the collision domain) is determined by the minimum frame size (64 bytes in the IEEE 802.3 Ethernet standard). The maximum segment length must be such that the transmitting station can detect a collision before it finishes transmitting the frame (the signal must have time to travel to the other end of the segment and back). Accordingly, when the transmission speed increases, you must either increase the frame size, thereby increasing the minimum frame transmission time, or reduce the diameter of the collision domain.

When moving to Fast Ethernet, the second option was chosen and the segment diameter was reduced. For Gigabit Ethernet this was not acceptable: a standard that inherited such Fast Ethernet parameters as the minimum frame size, CSMA/CD and the collision detection time (slot time) would only be able to work in collision domains with a diameter of no more than 20 meters. Therefore, it was proposed to increase the transmission time of the minimum frame. For compatibility with earlier Ethernet, the minimum frame size was kept at 64 bytes, and an additional carrier extension field was added that pads the frame to 512 bytes; the field is not added when the frame is larger than 512 bytes. Thus, the effective minimum frame size became 512 bytes, the time available for collision detection increased, and the segment diameter grew back to about 200 meters (in the case of 1000BASE-T). The symbols in the carrier extension field do not carry any meaning, and the checksum is not calculated over them. When a frame is received, this field is discarded at the MAC layer, so higher layers continue to work with minimum frames of 64 bytes.
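The effect of the extension on timing is easy to check with the figures from the text (at 1000 Mbit/s one bit time is 1 ns):

```python
# Transmission time of the minimum frame at 1 Gbit/s, with and without carrier extension.
BIT_TIME_NS = 1                      # 1 ns per bit at 1000 Mbit/s
plain_ns = 64 * 8 * BIT_TIME_NS      # 512 bit times  -> 0.512 microseconds
extended_ns = 512 * 8 * BIT_TIME_NS  # 4096 bit times -> 4.096 microseconds
print(f"64-byte frame:  {plain_ns} ns ({plain_ns / 1000} us)")
print(f"512-byte frame: {extended_ns} ns ({extended_ns / 1000} us)")
# The eightfold longer slot is what keeps collision detection workable over ~200 m segments.
```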

But even here there were pitfalls. Although the carrier extension maintained compatibility with previous standards, it wastes bandwidth: up to 448 bytes (512 - 64) can be lost per frame in the case of short frames. Therefore, the standard was refined with the concept of frame bursting (packet bursting), which uses the extension field much more effectively. It works like this: if an adapter or switch has several small frames to send, the first of them is sent in the standard way, padded with an extension field to 512 bytes. All subsequent frames are sent in their original form (without the extension field), with the minimum interval of 96 bit times between them, and, most importantly, this interframe gap is filled with carrier extension symbols. This continues until the total length of the burst reaches the burst limit (65,536 bit times, i.e. 8192 bytes). As a result, the medium does not go silent while the small frames are being transmitted, so a collision can only occur at the first stage, while the first small frame with its 512-byte carrier extension is being sent. This mechanism can significantly improve network performance, especially under heavy load, by reducing the likelihood of collisions.

But this turned out to be not enough. At first, Gigabit Ethernet supported only the standard Ethernet frame sizes, from a minimum of 64 bytes (padded to 512) to a maximum of 1518 bytes. Of these, 18 bytes are occupied by the standard service header, leaving from 46 to 1500 bytes for data. But even a 1500-byte data packet is too small for a gigabit network, especially for servers that transfer large amounts of data. Let's do some math: to transfer a 1 GB file over an unloaded Fast Ethernet network, the server has to process about 8200 packets per second, and the transfer takes at least 80 seconds. Interrupt processing alone will take about 10 percent of the time of a 200 MIPS computer, since the central processor must handle (calculate the checksum, copy the data to memory) every incoming packet.

Characteristics of transmission in Ethernet networks (Table 1): for each speed - 10, 100 and 1000 Mbit/s - the table lists the frame size, the number of frames per second, the data transfer rate in Mbit/s, and the interval between frames in µs.

In gigabit networks the situation is even worse: the load on the processor increases by roughly an order of magnitude because of the shorter interval between frames and, accordingly, the more frequent interrupt requests to the processor. Table 1 shows that even in the best case (using maximum-size frames) frames arrive no more than about 12 µs apart; with smaller frames this interval only shrinks. So in gigabit networks the bottleneck, oddly enough, turned out to be the processor's frame handling. That is why, at the dawn of Gigabit Ethernet, actual transfer speeds were far from the theoretical maximum - the processors simply could not cope with the load.

The obvious ways out of this situation are the following:

    increasing the time interval between frames;

    shifting part of the frame processing load from the central processor to the network adapter itself.

Both methods are currently implemented. In 1999 it was proposed to increase the packet size. Such packets were called jumbo frames, and their size can range from 1518 to 9018 bytes (equipment from some manufacturers now supports even larger jumbo frames). Jumbo frames reduce the CPU load by up to a factor of six (in proportion to their size) and thus significantly increase performance. For example, a maximum jumbo frame of 9018 bytes contains, in addition to the 18-byte header, 9000 bytes of data, which corresponds to six standard maximum-size Ethernet frames. The performance gain comes not from eliminating a few overhead headers (the traffic they add is only a few percent of the total throughput), but from reducing the time spent processing frames. More precisely, the time to process one frame stays roughly the same, but instead of several small frames, each of which would require N processor cycles and an interrupt, we process just one larger frame.
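The arithmetic behind that roughly sixfold reduction, as a small sketch (the header is taken as the 18 bytes mentioned above):

```python
# How many frames (and thus, roughly, how many interrupts) does 1 GB of payload take?
HEADER = 18
for frame_size in (1518, 9018):          # standard maximum frame vs. maximum jumbo frame
    payload = frame_size - HEADER        # 1500 or 9000 bytes of data per frame
    frames = 1_000_000_000 // payload    # frames needed for 1 GB of data
    print(f"{frame_size}-byte frames: {payload} B payload, ~{frames:,} frames per GB")
# About 666,666 standard frames versus about 111,111 jumbo frames - a factor of six.
```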

The rapidly developing hardware world provides ever faster and cheaper solutions for using dedicated hardware to take part of the traffic processing load off the central processor. Buffering (interrupt coalescing) is also used, so that the processor is interrupted once to handle several frames at a time. Meanwhile, Gigabit Ethernet technology is becoming more and more accessible for home use, which will directly interest the ordinary user. Faster access to home resources allows high-quality viewing of high-resolution video, reduces the time needed to move data around and, finally, makes it possible to encode video streams live onto network drives.

Materials from http://www.ixbt.com/ and http://www.wikipedia.org/ were used in preparing this article.


Gigabit Ethernet

Now there is a lot of talk about the fact that it is time to massively switch to gigabit speeds when connecting end users of local networks, and the question is again being raised about the justification and progressiveness of solutions “fiber to the workplace”, “fiber to the home”, etc. In this regard, this article, which describes standards not only for copper, but mainly for fiber-optic GigE interfaces, will be quite appropriate and timely.

Gigabit Ethernet architecture

Figure 1 shows the layer structure of Gigabit Ethernet. As in the Fast Ethernet standard, Gigabit Ethernet has no universal signal encoding scheme that would be ideal for all physical interfaces: 8B/10B encoding is used for the 1000Base-LX/SX/CX standards, while the 1000Base-T standard uses a special extended TX/T2 line code. The encoding function is performed by the PCS coding sublayer located below the medium-independent GMII interface.

Fig. 1. Layer structure of the Gigabit Ethernet standard, the GMII interface and a Gigabit Ethernet transceiver

GMII interface. GMII (Gigabit Media Independent Interface) provides the interaction between the MAC layer and the physical layer. The GMII interface is an extension of the MII interface and can support speeds of 10, 100 and 1000 Mbit/s. It has separate 8-bit transmit and receive paths and supports both half-duplex and full-duplex modes. In addition, the GMII interface carries a clock signal and two line status signals - the first (when ON) indicates the presence of a carrier, and the second (when ON) indicates the absence of collisions - as well as several other signal and power lines. A transceiver module covering the physical layer and providing one of the physical medium-dependent interfaces can connect to a Gigabit Ethernet switch, for example, via the GMII interface.

PCS physical coding sublayer. For the 1000Base-X group of interfaces, the PCS sublayer uses 8B/10B block redundancy coding borrowed from the ANSI X3T11 Fiber Channel standard. Similar to the FDDI standard, only with a more complex code table, every 8 input bits intended for transmission to a remote node are converted into 10-bit symbols (code groups). In addition, the output serial stream contains special 10-bit control characters; an example is the characters used for carrier extension (padding a Gigabit Ethernet frame to its minimum size of 512 bytes). For the 1000Base-T interface, the PCS sublayer performs special noise-resistant coding to allow transmission over UTP Cat 5 twisted pair cable over a distance of up to 100 meters - the TX/T2 line code developed by Level One Communications.
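The cost of the 8B/10B code is easy to quantify: every 8 data bits are carried by a 10-bit code group, which is where the 1250 Mbaud line rate quoted later for the optical interfaces comes from. A tiny sketch:

```python
# 8B/10B: 8 payload bits become a 10-bit code group, so the line runs 10/8 = 1.25x faster
# than the nominal data rate.
data_rate_mbit = 1000
line_rate_mbaud = data_rate_mbit * 10 / 8
print(f"{data_rate_mbit} Mbit/s of data -> {line_rate_mbaud:.0f} Mbaud on the optical line")
```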

Two line status signals, a carrier presence signal and a collision absence signal, are generated by this sublayer.

PMA and PMD sublayers. The Gigabit Ethernet physical layer uses several media interfaces, including traditional Category 5 twisted pair as well as multimode and single-mode fiber. The PMA sublayer converts the parallel symbol stream from the PCS into a serial stream and performs the reverse conversion (parallelization) of the incoming serial stream from the PMD. The PMD sublayer defines the optical/electrical characteristics of the physical signals for the different media. In total, four different physical medium interfaces are defined, reflected in the specifications of the 802.3z (1000Base-X) and 802.3ab (1000Base-T) standards (Fig. 2).

Fig. 2. Gigabit Ethernet physical interfaces

1000Base-X interface

The 1000Base-X interface is based on the Fiber Channel physical layer standard. Fiber Channel is a technology for interconnecting workstations, supercomputers, storage devices and peripheral nodes. Fiber Channel has a 4-layer architecture. The two lower layers FC-0 (interfaces and media) and FC-1 (encoding/decoding) have been moved to Gigabit Ethernet. Since Fiber Channel is an approved technology, this porting greatly reduced the development time for the original Gigabit Ethernet standard.

The 8B/10B block code is similar to the 4B/5B code adopted in the FDDI standard. However, the 4B/5B code was rejected in Fiber Channel because the code does not provide DC balance. Lack of balance can potentially lead to data-dependent heating of the laser diodes, since the transmitter may transmit more "1" (emission) bits than "0" (no emission) bits, which can cause additional errors at high transmission rates.

1000Base-X is divided into three physical interfaces, the main characteristics of which are given below:

The 1000Base-SX interface defines lasers with a permissible wavelength in the range of 770-860 nm and a transmitter output power from -10 to 0 dBm, with an ON/OFF (signal / no signal) ratio of at least 9 dB. Receiver sensitivity -17 dBm, receiver saturation 0 dBm;

The 1000Base-LX interface defines lasers with a permissible wavelength in the range of 1270-1355 nm and a transmitter output power from -13.5 to -3 dBm, with an ON/OFF (signal / no signal) ratio of at least 9 dB. Receiver sensitivity -19 dBm, receiver saturation -3 dBm;

1000Base-CX - shielded twisted pair (STP "twinax") over short distances.

For reference, Table 1 shows the main characteristics of the optical transceiver modules produced by Hewlett-Packard for the 1000Base-SX (model HFBR-5305, λ = 850 nm) and 1000Base-LX (model HFCT-5305, λ = 1300 nm) interfaces.

Table 1. Technical characteristics of Gigabit Ethernet optical transceivers

Supported distances for 1000Base-X standards are shown in Table 2.

Table 2. Supported distances for the 1000Base-X standards

With 8B/10B encoding, the bit rate in the optical line is 1250 Mbaud. This means that the bandwidth of the cable over the permissible length must exceed 625 MHz; Table 2 shows that this criterion is met for rows 2-6. Because of the high transmission speed of Gigabit Ethernet, you should be careful when building long segments, and preference naturally goes to single-mode fiber, where the characteristics of the optical transceivers can be significantly higher. For example, NBase produces switches with Gigabit Ethernet ports that cover distances of up to 40 km over single-mode fiber without repeaters (using narrow-spectrum DFB lasers operating at a wavelength of 1550 nm).

Features of using multimode fiber

There are a huge number of corporate networks in the world based on multimode fiber optic cable with 62.5/125 and 50/125 fibers. Therefore, it is natural that even at the stage of forming the Gigabit Ethernet standard, the task arose of adapting the technology for use in existing multimode cabling systems. During the research for the 1000Base-SX and 1000Base-LX specifications, one very interesting anomaly was identified associated with using laser transmitters together with multimode fiber.

Multimode fiber was designed for use with light-emitting diodes (spectral width 30-50 nm). The incoherent radiation of such LEDs enters the fiber over the entire area of the light-carrying core, so a huge number of mode groups are excited in the fiber, and the propagating signal is well described in terms of intermodal dispersion. Using such LEDs as transmitters for Gigabit Ethernet is inefficient because of the very high modulation frequency - the bit rate in the optical line is 1250 Mbaud and the duration of one pulse is 0.8 ns. The maximum speed at which LEDs are still used to transmit a signal over multimode fiber is 622.08 Mbit/s (STM-4; taking into account the redundancy of the 8B/10B code, the bit rate in the optical line is 777.6 Mbaud). Gigabit Ethernet therefore became the first standard to regulate the use of laser transmitters with multimode fiber. The area over which the laser launches light into the fiber is much smaller than the core of a multimode fiber. This fact by itself is not a problem. However, the production process of standard commercial multimode fiber allows certain defects (deviations within acceptable limits) that are not critical in traditional use and that are concentrated mostly near the axis of the fiber core. Although such multimode fiber fully satisfies the requirements of the standard, coherent laser light launched through the center of such a fiber, passing through regions where the refractive index is inhomogeneous, can split into a small number of modes, which then propagate along different optical paths and at different speeds. This phenomenon is known as differential mode delay (DMD). As a result, a phase shift appears between the modes, leading to unwanted interference at the receiving side and to a significant increase in the number of errors (Fig. 3a). Note that the effect appears only under a simultaneous combination of circumstances: a less fortunate fiber, a less fortunate laser transmitter (both, of course, meeting the standard) and a less fortunate launch of the radiation into the fiber. Physically, the DMD effect arises because the energy from a coherent source is distributed among a small number of modes, whereas an incoherent source excites a huge number of modes uniformly. Research shows that the effect is stronger with long-wavelength lasers (the 1300 nm transparency window).

Fig.3. Propagation of coherent radiation in a multimode fiber: a) Manifestation of the effect of differential mode delay (DMD) with axial input of radiation; b) Off-axis input of coherent radiation into a multimode fiber.

In the worst case, this anomaly can reduce the maximum length of a segment built on multimode fiber optic cable. Since the standard must provide a 100% guarantee of operation, the maximum segment length must be specified taking into account the possible occurrence of the DMD effect.

1000Base-LX interface. In order to maintain greater distance and avoid unpredictability of Gigabit Ethernet link behavior due to anomaly, it is proposed to inject radiation into a non-central part of the multimode fiber core. Due to the aperture divergence, the radiation manages to be evenly distributed throughout the entire fiber core, greatly weakening the effect, although the maximum length of the segment remains limited after that (Table 2). Adaptive single-mode optical cords MCP (mode conditioning patch-cords) have been specially developed, in which one of the connectors (namely the one that is planned to be interfaced with multimode fiber) has a slight offset from the axis of the fiber core. An optical cord in which one connector is a Duplex SC with an offset core, and the other is a regular Duplex SC, can be called as follows: MCP Duplex SC - Duplex SC. Of course, such a cord is not suitable for use in traditional networks, for example in Fast Ethernet, due to high insertion losses at the interface with MCP Duplex SC. The transition MCP can be a combination of single-mode and multimode fiber and contain a fiber-to-fiber bias element within it. Then the single-mode end is connected to the laser transmitter. As for the receiver, a standard multimode patch cord can be connected to it. The use of MCP adapter cords allows radiation to be introduced into a multimode fiber through an area shifted by 10-15 µm from the axis (Fig. 3b). Thus, it remains possible to use 1000Base-LX interface ports with single-mode fiber optics, since radiation input there will be carried out strictly in the center of the fiber core.

1000Base-SX interface. Since the 1000Base-SX interface is standardized only for use with multimode fiber, the displacement of the radiation input region from the central axis of the fiber can be implemented within the device itself, thereby eliminating the need for a matching optical cord.

1000Base-T interface

1000Base-T is the standard Gigabit Ethernet interface for transmission over Category 5 and higher unshielded twisted pair over distances of up to 100 meters. All four pairs of the copper cable are used for transmission, at 250 Mbit/s per pair. The standard provides full-duplex transmission, with data on each pair transmitted simultaneously in both directions at once - dual duplex. Technically, implementing 1 Gbit/s duplex transmission over UTP Cat 5 twisted pair turned out to be quite difficult, much more so than in the 100Base-TX standard: the near-end and far-end crosstalk from the three neighboring twisted pairs onto a given pair in a four-pair cable requires special scrambled, noise-resistant transmission and an intelligent unit for recognizing and restoring the signal at the receiver. Several encoding methods were initially considered as candidates for the 1000Base-T standard, including 5-level pulse amplitude modulation PAM-5 and quadrature amplitude modulation QAM-25. Below are, briefly, the ideas behind PAM-5, which was finally approved for the standard.

Why 5-level coding? Ordinary four-level coding processes the incoming bits in pairs. There are four possible combinations: 00, 01, 10 and 11. The transmitter assigns each bit pair its own voltage level, which halves the modulation frequency of the signal to 125 MHz instead of 250 MHz (Fig. 4), and with it the emitted frequency. The fifth level was added to create code redundancy, which makes it possible to correct errors at the receiver and gives an additional 6 dB of headroom in the signal-to-noise ratio.

Fig.4. PAM-4 4-level encoding scheme
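
To make the idea behind Fig. 4 concrete, here is a toy mapper in Python (an illustration of the principle only, with arbitrary level values; the real 1000Base-T line code uses five levels plus scrambling and is far more involved):

    # Toy PAM-4 mapper: two bits per symbol, four voltage levels.
    LEVELS = {"00": -2, "01": -1, "10": +1, "11": +2}
    INVERSE = {v: k for k, v in LEVELS.items()}

    def encode(bits: str) -> list[int]:
        """Split the bit string into pairs and map each pair to a level."""
        assert len(bits) % 2 == 0
        return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

    def decode(symbols: list[int]) -> str:
        """Map received levels back to bit pairs."""
        return "".join(INVERSE[s] for s in symbols)

    data = "1101001011100100"
    symbols = encode(data)
    print(symbols)                   # 8 symbols instead of 16 bits
    assert decode(symbols) == data   # half the symbol rate, same data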

MAC layer

The Gigabit Ethernet MAC layer uses the same CSMA/CD transport protocol as its predecessors Ethernet and Fast Ethernet. The main restrictions on the maximum length of a segment (or collision domain) are determined by this protocol.

The IEEE 802.3 Ethernet standard specifies a minimum frame size of 64 bytes. It is this minimum frame size that determines the maximum allowable distance between stations (the diameter of the collision domain). The time during which a station transmits such a frame - the channel (slot) time - equals 512 BT, or 51.2 μs (in Ethernet 1 BT = 0.1 μs). The maximum length of an Ethernet network follows from the collision resolution condition: the round-trip delay time (RDT), i.e. the time for a signal to reach the most remote node and return, must not exceed 512 BT (excluding the preamble).

When moving from Ethernet to Fast Ethernet the transmission speed increases tenfold, and the transmission time of a 64-byte frame shrinks accordingly: it is still 512 BT, but now only 5.12 μs (in Fast Ethernet 1 BT = 0.01 μs). For all collisions to still be detectable before the end of frame transmission, one of two conditions must be satisfied: either the minimum frame size must be increased, or the diameter of the collision domain must be reduced in proportion to the speed increase.

Fast Ethernet kept the same minimum frame size as Ethernet. This maintained compatibility but resulted in a significant reduction in the diameter of the collision domain.

Again, for the sake of continuity, the Gigabit Ethernet standard had to support the same minimum and maximum frame sizes adopted in Ethernet and Fast Ethernet. But as the transmission speed increases, the transmission time of a frame of the same length decreases accordingly. If the minimum frame length had been kept together with the old channel time, the network diameter would have shrunk to about 20 meters, which is of little practical use. Therefore, when developing the Gigabit Ethernet standard, it was decided to increase the channel time: in Gigabit Ethernet it is 4096 BT, eight times longer than in Ethernet and Fast Ethernet. To maintain compatibility with Ethernet and Fast Ethernet, the minimum frame size was not increased; instead, an additional field called the carrier extension was appended to the frame.
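
The effect of the bit rate on the channel time is simple arithmetic; the sketch below (my own illustration) reproduces the numbers used in this section:

    # Channel (slot) time for a 512-bit minimum frame at different speeds,
    # plus the extended 4096 BT slot adopted in Gigabit Ethernet.
    def slot_time_us(bits: int, rate_bps: float) -> float:
        """Time to transmit `bits` at `rate_bps`, in microseconds."""
        return bits / rate_bps * 1e6

    for name, rate in [("Ethernet", 10e6), ("Fast Ethernet", 100e6),
                       ("Gigabit Ethernet", 1e9)]:
        print(f"{name:17s} 512 BT  = {slot_time_us(512, rate):6.3f} us")

    # With the old 512 BT budget the gigabit collision window is only
    # ~0.5 us, hence the ~20 m diameter; extending the slot to 4096 BT
    # restores a usable collision window.
    print(f"{'Gigabit Ethernet':17s} 4096 BT = {slot_time_us(4096, 1e9):6.3f} us")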

Carrier Extension

The symbols in the additional field carry no payload; they simply keep the channel busy and widen the "collision window". As a result, a collision will be registered by all stations even with a larger collision domain diameter.

If a station needs to transmit a short frame (less than 512 bytes), the carrier extension field is appended before transmission, padding the frame to 512 bytes. The checksum is calculated only over the original frame and does not cover the extension field. On reception, the extension field is discarded, so the LLC layer is not even aware of its existence. If the frame size is 512 bytes or more, no carrier extension field is added. Figure 5 shows the Gigabit Ethernet frame format with carrier extension.

Fig.5. Gigabit Ethernet frame with the carrier extension field.
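
A minimal sketch of the transmit- and receive-side rules just described (my own simplification: frames are modeled as byte strings, the checksum is ignored, and the extension is shown as a filler byte rather than the non-data symbols used on the real wire):

    SLOT_BYTES = 512      # minimum transmission unit in Gigabit Ethernet
    EXT = b"\x00"         # stand-in for the carrier-extension symbols

    def add_carrier_extension(frame: bytes) -> bytes:
        """Pad a short frame up to the 512-byte slot; longer frames pass through."""
        if len(frame) >= SLOT_BYTES:
            return frame
        return frame + EXT * (SLOT_BYTES - len(frame))

    def strip_carrier_extension(wire_data: bytes, frame_len: int) -> bytes:
        """On reception the extension is discarded; the LLC layer never sees it."""
        return wire_data[:frame_len]

    frame = b"\xaa" * 64                   # a minimum-size 64-byte frame
    on_wire = add_carrier_extension(frame)
    assert len(on_wire) == 512
    assert strip_carrier_extension(on_wire, len(frame)) == frame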

Packet Bursting

Carrier extension is the most natural solution, making it possible to keep compatibility with the Fast Ethernet standard and the same collision domain diameter, but it wastes bandwidth: up to 448 bytes (512 - 64) can be wasted when transmitting a short frame. During the development of the Gigabit Ethernet standard, NBase Communications proposed a refinement called packet bursting, which allows the extension field to be used more efficiently. If a station or switch has several small frames to send, the first frame is padded with a carrier extension field to 512 bytes and transmitted. The remaining frames are sent after it with the minimum interframe gap of 96 bit times, with one important difference: the interframe gap is filled with extension symbols (Fig. 6a). The medium therefore never falls silent between the short original frames, and no other device on the network can seize it during the burst. Frames can be chained in this way until the total length of the burst reaches the burst limit (8192 bytes). Packet bursting also reduces the likelihood of collisions, since a burst can experience a collision only while its first frame, including the carrier extension, is being transmitted, which noticeably increases network performance, especially under heavy load (Fig. 6b).

Fig.6. Packet bursting: a) frame transmission; b) throughput behavior.
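
The bursting rule can be sketched roughly as follows (again my own simplification: the 96-bit gaps filled with extension symbols are modeled as 12 filler bytes, and the burst limit is taken as 8192 bytes):

    SLOT_BYTES = 512
    IFG_BYTES = 12        # 96 bit times between frames, filled with
                          # extension symbols inside a burst
    BURST_LIMIT = 8192    # total burst length limit, in bytes
    EXT = b"\x00"         # stand-in for extension symbols

    def build_burst(frames: list[bytes]) -> bytes:
        """Chain several short frames into one burst."""
        if not frames:
            return b""
        first = frames[0]
        # Only the first frame is padded with carrier extension.
        burst = first + EXT * max(0, SLOT_BYTES - len(first))
        for frame in frames[1:]:
            if len(burst) + IFG_BYTES + len(frame) > BURST_LIMIT:
                break                         # the rest waits for the next burst
            burst += EXT * IFG_BYTES + frame  # gap is filled, medium never idle
        return burst

    burst = build_burst([b"\xaa" * 64] * 20)
    print(len(burst))     # 1956 bytes: 512 + 19 * (12 + 64), within the limit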

Based on materials from the Telecom Transport company

I decided to upgrade my computer a little, and since I needed two network cards and did not have enough slots, I was looking for a network card for a PCI-E slot. I had plenty of time, so I decided to buy it on Aliexpress.

I found a card that completely satisfied me, both in description and in price. A check of the seller showed that the risk level was almost zero. I ordered it, and the parcel arrived 20 days after the seller shipped it. By the way, the seller is currently running a sale, and the card costs $3.63.



But since I don't really trust Chinese manufacturers, I first looked carefully at the board. My intuition did not deceive me: the main chip was not only soldered with an offset, there were also solder bridges in three places (indicated by arrows).

I didn't try too hard to figure out what these pins were responsible for, but the bridged legs connected to the memory chip and to power pins, so at a minimum the card was guaranteed not to be detected, and at worst I would have been left without my new computer.

And, of course, there is the amusing labeling of the link speed in Hertz.

Without inserting the card into the computer, I wrote to the seller that I had received the parcel but it did not work because the chip was poorly soldered. He replied asking me to send a video; what he expected to see there, I do not understand. I told him I would try to take a photo, but everything was so small that he was unlikely to see anything, and sent the message.

Without waiting for an answer, I took a soldering iron, removed the solder bridges, and checked the card - it worked.

The card was identified as a Realtek PCIe GBE Family Controller, and since I already had Realtek drivers installed, it started working immediately; there was no need to install anything extra.
Device Manager reports it as:
PCI\VEN_10EC&DEV_8168&SUBSYS_816810EC&REV_02\4&293AFFCC&1&00E0

I tested the copying speed, although everything came down to the speed of the router port (I was surprised to discover that I had nothing to test the card with at gigabit speed). To be honest, I do not see an urgent need for gigabit yet - 100 megabits is enough for me - but I have never seen a 100-megabit PCI-E card, so let it be. Besides, I am unlikely to find one locally for this money.

In the end, I wrote to the seller that I had re-soldered the chip, the card works, and I would confirm receipt, but that I was very dissatisfied with the workmanship. The seller offered a $3 refund and I agreed; in fact, I had no particular complaints about the seller, who got in touch immediately and without problems.

But that is not the point. The moral of this micro-review: before you insert a new piece of hardware into your computer, do not be too lazy to inspect it carefully, so that you are not left without a computer at all.

In general: the card is as basic as they come, the price is reasonable, delivery is fast, but the build quality is quite poor.

This is probably how my network card was assembled.

