The speed of each PCIe standard is as follows:
| Version | Release date | Raw transfer rate | Effective bandwidth | Single-lane bandwidth | Total bandwidth (x16, bidirectional) |
| --- | --- | --- | --- | --- | --- |
| PCIe 1.x | 2003 | 2.5 GT/s | 2 Gbps | 250 MB/s | 8 GB/s |
| PCIe 2.x | 2007 | 5.0 GT/s | 4 Gbps | 500 MB/s | 16 GB/s |
| PCIe 3.0 | 2010 | 8.0 GT/s | ≈8 Gbps | ≈1 GB/s | 32 GB/s |
Comparing the generations reveals an oddity. By the usual rule of thumb, each new generation doubles the bandwidth of the previous one, so the raw transfer rate of PCIe 3.0 should be 10 GT/s, yet it is only 8.0 GT/s. The reason lies in line coding: the 1.x and 2.x standards use 8b/10b encoding, in which every 8 bits of payload are transmitted as a 10-bit symbol, so 2 of every 10 bits on the wire are encoding overhead. Effective bandwidth is therefore raw transfer rate × 80%.
The 3.0 standard instead uses the more efficient 128b/130b encoding scheme to avoid that 20% loss: only 2 bits of overhead accompany every 128 payload bits, wasting a mere 1.54% of the bandwidth, which is negligible. As a result, the 8 GT/s rate is no longer just a theoretical figure but essentially the real transfer rate.
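As a quick sanity check, here is a minimal Python sketch (not from the original article; the function name and unit conventions are my own) that recomputes the table's per-lane and x16 figures from the raw transfer rate and the line-code efficiency. The small shortfall for PCIe 3.0 (≈984.6 MB/s per lane, ≈31.5 GB/s for x16) is exactly the 1.54% 128b/130b overhead, which the table rounds to 1 GB/s and 32 GB/s.

```python
# Sketch: derive per-lane and x16 bandwidth from raw rate and encoding efficiency.

def pcie_bandwidth(raw_gt_per_s, encoded_bits, payload_bits, lanes=16):
    """Return (per-lane MB/s, x16 bidirectional GB/s) for one PCIe generation."""
    efficiency = payload_bits / encoded_bits        # 8b/10b -> 0.8, 128b/130b -> ~0.9846
    effective_gbps = raw_gt_per_s * efficiency      # usable bits per second, per lane
    per_lane_mb_s = effective_gbps * 1000 / 8       # bits -> bytes (decimal units)
    total_gb_s = per_lane_mb_s * lanes * 2 / 1000   # 16 lanes, both directions
    return per_lane_mb_s, total_gb_s

for name, raw, enc, pay in [("PCIe 1.x", 2.5, 10, 8),
                            ("PCIe 2.x", 5.0, 10, 8),
                            ("PCIe 3.0", 8.0, 130, 128)]:
    lane, total = pcie_bandwidth(raw, enc, pay)
    print(f"{name}: {lane:.1f} MB/s per lane, {total:.1f} GB/s total (x16)")
# PCIe 1.x: 250.0 MB/s per lane, 8.0 GB/s total (x16)
# PCIe 2.x: 500.0 MB/s per lane, 16.0 GB/s total (x16)
# PCIe 3.0: 984.6 MB/s per lane, 31.5 GB/s total (x16)
```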
The speed of each USB standard is as follows:
| Version | Release date | Signaling rate | Theoretical throughput |
| --- | --- | --- | --- |
| USB 1.0 | 1996 | 1.5 Mbps | 192 KB/s |
| USB 1.1 | 1998 | 12 Mbps | 1.5 MB/s |
| USB 2.0 | 2000 | 480 Mbps | 60 MB/s |
| USB 3.0 | 2008 | 5 Gbps | 640 MB/s |
It is worth noting that USB 3.0 also uses 8b/10b encoding, so the actual transmission bandwidth is correspondingly lower than the 640 MB/s listed above: 640 MB/s × 80% = 512 MB/s.
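The same arithmetic can be written out explicitly. The sketch below follows the article's convention of treating 5 Gbps as 5120 Mbps (hence 640 MB/s raw); it is an illustrative calculation, not code from any USB specification or library.

```python
# Reproduce the USB 3.0 figures above: raw signaling rate minus 8b/10b overhead.
raw_gbps = 5                          # USB 3.0 SuperSpeed signaling rate
raw_mb_s = raw_gbps * 1024 / 8        # article's convention: 5 Gbps = 5120 Mbps -> 640 MB/s
effective_mb_s = raw_mb_s * 8 / 10    # 8b/10b: only 8 of every 10 bits carry payload
print(raw_mb_s, effective_mb_s)       # 640.0 512.0
```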