Hyper-V network driver
======================

Compatibility
=============

This driver is compatible with Windows Server 2012 R2, 2016 and
Windows 10.

Features
========

Checksum offload
----------------
The netvsc driver supports checksum offload as long as the
Hyper-V host version does. Windows Server 2016 and Azure
support checksum offload for TCP and UDP for both IPv4 and
IPv6. Windows Server 2012 only supports checksum offload for TCP.
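
The current offload state can be inspected or changed with the
standard ethtool feature options, for example (a sketch; eth0 is a
placeholder interface name):
To show the checksum offload state:
	ethtool -k eth0 | grep checksum
To enable TX and RX checksum offload:
	ethtool -K eth0 tx on rx on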

Receive Side Scaling
--------------------
Hyper-V supports receive side scaling. For TCP, packets are
distributed among available queues based on IP address and port
number.
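
The number of queues in use can be viewed and, where the driver and
host allow it, changed with the ethtool channel options (a sketch;
eth0 is a placeholder interface name):
To show current and maximum channel counts:
	ethtool -l eth0
To request 4 combined channels:
	ethtool -L eth0 combined 4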

For UDP, the hash level can be switched between L3 and L4 with an
ethtool command. UDP over IPv4 and IPv6 can be set differently. The
default hash level is L4. We currently only allow switching TX hash
level from within the guests.

On Azure, fragmented UDP packets have a high loss rate with L4
hashing. Using L3 hashing is recommended in this case.

For example, for UDP over IPv4 on eth0:
To include UDP port numbers in hashing:
	ethtool -N eth0 rx-flow-hash udp4 sdfn
To exclude UDP port numbers in hashing:
	ethtool -N eth0 rx-flow-hash udp4 sd
To show UDP hash level:
	ethtool -n eth0 rx-flow-hash udp4
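
The RSS indirection table and the hash fields used for TCP can also
be inspected with standard ethtool options (a sketch; eth0 is a
placeholder interface name and output depends on host support):
To show the RSS indirection table and hash key:
	ethtool -x eth0
To show TCP over IPv4 hash fields:
	ethtool -n eth0 rx-flow-hash tcp4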

Generic Receive Offload, aka GRO
--------------------------------
The driver supports GRO and it is enabled by default. GRO coalesces
similar packets and significantly reduces CPU usage under heavy Rx
load.
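
GRO can be checked or, if needed, disabled with ethtool (a sketch;
eth0 is a placeholder interface name):
To show the GRO state:
	ethtool -k eth0 | grep generic-receive-offload
To disable GRO:
	ethtool -K eth0 gro off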

SR-IOV support
--------------
Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
is enabled in both the vSwitch and the guest configuration, then the
Virtual Function (VF) device is passed to the guest as a PCI
device. In this case, both a synthetic (netvsc) and VF device are
visible in the guest OS and both NICs have the same MAC address.

The VF is enslaved by the netvsc device. The netvsc driver will
transparently switch the data path to the VF when it is available and
up. Network state (addresses, firewall rules, etc.) should be applied
only to the netvsc device; the slave device should not be accessed
directly in most cases. The exceptions are when a special queue
discipline or flow direction is desired; these should be applied
directly to the VF slave device.
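
For example, address configuration belongs on the synthetic device
only (a sketch; eth0 and enP1p0s2 are hypothetical names for the
netvsc device and the VF respectively):
To confirm that both devices report the same MAC address:
	ip -br link show
To configure an address on the netvsc device (not the VF):
	ip addr add 10.0.0.5/24 dev eth0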

Receive Buffer
--------------
Packets are received into a receive area which is created when the
device is probed. The receive area is broken into MTU-sized chunks,
each of which may contain one or more packets. The number of receive
sections may be changed via ethtool Rx ring parameters.
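
For example, the ring parameters can be read and the number of
receive sections changed with ethtool (a sketch; eth0 is a
placeholder interface name and the supported range depends on the
driver):
To show current and maximum ring sizes:
	ethtool -g eth0
To request 2048 receive sections:
	ethtool -G eth0 rx 2048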

There is a similar send buffer which is used to aggregate packets for
sending. The send area is broken into chunks of 6144 bytes, and each
section may contain one or more packets. The send buffer is an
optimization; the driver will fall back to a slower method to handle
very large packets or when the send buffer area is exhausted.