NetXen is a 10G NIC manufacturer in the United States, and its NIC driver is included in the Linux kernel source tree. This article analyzes the driver's data structures and DMA flow, in the hope of using this NIC to achieve zero-copy packet handling.
1. Data Structures
2. Circular Buffers
TX ring buffer: adapter->cmd_buf_arr
Multiple netxen_cmd_buffer structures form the TX ring buffer. When a packet comes down from the upper layer, it is attached to the skb member of a netxen_cmd_buffer, then DMA-mapped, and DMA transfer to the firmware (FW) begins.
Number of buffer units in the TX ring:
adapter->max_tx_desc_count = MAX_CMD_DESCRIPTORS_HOST; // 256
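Since this is a ring, indices wrap around. A minimal sketch of the wrap arithmetic (the helper name is illustrative, not necessarily the driver's):

// Illustrative sketch: advance a ring index with wrap-around.
static inline u32 next_ring_index(u32 index, u32 ring_size)
{
	return (index + 1 == ring_size) ? 0 : index + 1;
}
// The producer index chases the consumer index; the ring is full
// when advancing the producer would make it catch the consumer.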
There are three RX rings: normal, jumbo, and LRO:
adapter->recv_ctx[1].rcv_desc[3].rx_buf_arr; // array dimensions: MAX_RCV_CTX = 1, NUM_RCV_DESC_RINGS = 3
The numbers of buffer units are:
adapter->max_rx_desc_count = MAX_RCV_DESCRIPTORS; // 16384
adapter->max_jumbo_rx_desc_count = MAX_JUMBO_RCV_DESCRIPTORS; // 1024
adapter->max_lro_rx_desc_count = MAX_LRO_RCV_DESCRIPTORS; // 64
3. Transmit Ring Buffer
#define TX_RINGSIZE (sizeof(struct netxen_cmd_buffer) * adapter->max_tx_desc_count)
adapter->cmd_buf_arr = (struct netxen_cmd_buffer *)vmalloc(TX_RINGSIZE);
Each TX buffer unit has a corresponding netxen_cmd_buffer; together these units form the circular TX buffer adapter->cmd_buf_arr.
When a packet arrives from the upper layer (at netxen_nic_xmit_frame), it is assigned to netxen_cmd_buffer->skb, DMA-mapped, and the DMA transfer to FW starts.
netxen_nic_xmit_frame
    buffrag->dma = pci_map_single(adapter->pdev, skb->data, first_seg_len, PCI_DMA_TODEVICE);
// The actual transmit path is more involved than this: if the skb has multiple frags, each frag is DMA-mapped separately before being handed to FW.
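A hedged sketch of that multi-frag case, assuming the 2.6-era page-based skb_frag_t layout (the field names page/page_offset/size follow that era; buffrag is the driver's per-fragment record, and its length field is assumed):

int i;
for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
	skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

	buffrag++;	// next per-fragment slot for this skb
	buffrag->dma = pci_map_page(adapter->pdev, frag->page,
				    frag->page_offset, frag->size,
				    PCI_DMA_TODEVICE);
	buffrag->length = frag->size;	// 'length' field assumed
}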
After a packet has been sent, how does the driver free the skb?
This is done in the NAPI poll function:
netxen_nic_poll
    netxen_process_cmd_ring
        pci_unmap_single
        pci_unmap_page
        dev_kfree_skb_any(buffer->skb);
The netxen_process_cmd_ring function uses the producer/consumer pointers to decide whether a TX buffer unit's data has been sent successfully;
it unmaps each sent packet and then releases its skb back to the sk_buff slab pool.
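Conceptually, the completion scan looks like this sketch (hedged: the real netxen_process_cmd_ring reads the hardware consumer index from memory shared with FW and also unmaps page frags; the frag_array layout is assumed, and next_ring_index is the illustrative helper from above):

u32 last_consumer = adapter->last_cmd_consumer;
while (last_consumer != hw_consumer) {	// hw_consumer: FW's progress
	struct netxen_cmd_buffer *buffer = &adapter->cmd_buf_arr[last_consumer];

	if (buffer->skb) {
		pci_unmap_single(adapter->pdev, buffer->frag_array[0].dma,
				 buffer->frag_array[0].length,
				 PCI_DMA_TODEVICE);
		dev_kfree_skb_any(buffer->skb);	// back to the sk_buff slab pool
		buffer->skb = NULL;
	}
	last_consumer = next_ring_index(last_consumer,
					adapter->max_tx_desc_count);
}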
4. Receive Ring Buffer
There is one recv_ctx, and its three rcv_desc contexts (normal, jumbo, LRO) hold the three RX rings: adapter->recv_ctx[1].rcv_desc[3].rx_buf_arr;
The PCI probe function configures the three rcv_desc contexts, each with its own DMA size and skb size:
switch (RCV_DESC_TYPE(ring)) {
case RCV_DESC_NORMAL:
	rcv_desc->max_rx_desc_count = adapter->max_rx_desc_count;	// 16384
	rcv_desc->flags = RCV_DESC_NORMAL;
	rcv_desc->dma_size = RX_DMA_MAP_LEN;	// 1758
	rcv_desc->skb_size = MAX_RX_BUFFER_LENGTH;	// 1760
	break;
case RCV_DESC_JUMBO:
	rcv_desc->max_rx_desc_count = adapter->max_jumbo_rx_desc_count;	// 1024
	rcv_desc->flags = RCV_DESC_JUMBO;
	rcv_desc->dma_size = RX_JUMBO_DMA_MAP_LEN;	// 8060
	rcv_desc->skb_size = MAX_RX_JUMBO_BUFFER_LENGTH;	// 8062
	break;
case RCV_RING_LRO:
	rcv_desc->max_rx_desc_count = adapter->max_lro_rx_desc_count;	// 64
	rcv_desc->flags = RCV_DESC_LRO;
	rcv_desc->dma_size = RX_LRO_DMA_MAP_LEN;	// (48*1024)-514
	rcv_desc->skb_size = MAX_RX_LRO_BUFFER_LENGTH;	// (48*1024)-512
	break;
}
The PCI probe function then allocates the three corresponding ring buffers for receiving packets:
#define RCV_BUFFSIZE (sizeof(struct netxen_rx_buffer) * rcv_desc->max_rx_desc_count)
rcv_desc->rx_buf_arr = (struct netxen_rx_buffer *)vmalloc(RCV_BUFFSIZE);
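The freshly vmalloc'ed array presumably has to be zeroed and every unit marked free before buffers can be posted; a sketch under that assumption (the name of the free-state constant is assumed):

memset(rcv_desc->rx_buf_arr, 0, RCV_BUFFSIZE);
for (i = 0; i < rcv_desc->max_rx_desc_count; i++)
	rcv_desc->rx_buf_arr[i].state = NETXEN_NIC_FREE;	// constant name assumed
rcv_desc->begin_alloc = 0;	// next unit for netxen_post_rx_buffers to fill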
/*
 * Receive context. There is one such structure per instance of
 * receive processing. Any state information relevant to the
 * receive must be in this structure. Global data may be
 * present elsewhere.
 */
struct netxen_recv_context {
	struct netxen_rcv_desc_ctx rcv_desc[NUM_RCV_DESC_RINGS];	// the three RX descriptor contexts
	u32 status_rx_producer;
	u32 status_rx_consumer;
	dma_addr_t rcv_status_desc_phys_addr;
	struct pci_dev *rcv_status_desc_pdev;
	struct status_desc *rcv_status_desc_head;
};
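Note the status_rx_producer/status_rx_consumer indices and rcv_status_desc_head: they describe a separate status ring through which FW reports each completed receive, and the host's poll routine walks this status ring to learn which RX buffer units hold fresh packets.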
/*
 * RCV descriptor context. One such context per RCV descriptor. There may
 * be one RCV descriptor for normal packets, one for jumbo, and possibly others.
 */
struct netxen_rcv_desc_ctx {	// one per RX ring
	u32 flags;
	u32 producer;
	u32 rcv_pending;	/* num of bufs posted in phantom */
	u32 rcv_free;	/* num of bufs in free list */
	dma_addr_t phys_addr;
	struct pci_dev *phys_pdev;
	struct rcv_desc *desc_head;	/* address of rx ring in phantom */
	u32 max_rx_desc_count;
	u32 dma_size;
	u32 skb_size;
	struct netxen_rx_buffer *rx_buf_arr;	/* rx buffers for receive */
	int begin_alloc;
};
struct rcv_desc {
	__le16 reference_handle;
	__le16 reserved;
	__le32 buffer_length;	/* usually 2K; set to netxen_rcv_desc_ctx->dma_size in netxen_post_rx_buffers, so there are three different sizes (normal, jumbo, LRO) */
	__le64 addr_buffer;
};
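Note the __le16/__le32/__le64 types: this descriptor ring is read directly by the Phantom firmware over DMA, so its layout is fixed little-endian regardless of host byte order, and the driver must fill it using cpu_to_le*() conversions.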
/* In rx_buffer, we do not need multiple fragments, as it is a single buffer */
struct netxen_rx_buffer {
	struct sk_buff *skb;	// allocated and PCI-mapped by netxen_post_rx_buffers; the mapped address is saved in desc_head
	u64 dma;
	u16 ref_handle;
	u16 state;
	u32 lro_expected_frags;
	u32 lro_current_frags;
	u32 lro_length;
};
Although the RX ring buffer is composed of multiple netxen_rx_buffer units (structures), the packet contents do not live in the buffer unit itself but in netxen_rx_buffer->skb, and this skb is allocated by netxen_post_rx_buffers.
netxen_post_rx_buffers allocates an skb for each buffer unit in the RX ring, DMA-maps the skb, and stores the mapping result in desc_head.
The number of rx_buf_arr entries is exactly the number of desc_head entries; they correspond one to one.
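A hedged sketch of the posting step for one buffer unit (the real netxen_post_rx_buffers also handles allocation failure and publishes the new producer index to hardware; the 2-byte skb_reserve is an assumption suggested by skb_size exceeding dma_size by 2):

struct netxen_rx_buffer *buffer = &rcv_desc->rx_buf_arr[index];
struct rcv_desc *pdesc = &rcv_desc->desc_head[producer];
struct sk_buff *skb = dev_alloc_skb(rcv_desc->skb_size);

skb_reserve(skb, 2);	// assumed: align the IP header
buffer->skb = skb;
buffer->dma = pci_map_single(adapter->pdev, skb->data,
			     rcv_desc->dma_size, PCI_DMA_FROMDEVICE);

// Publish the mapping to FW through the shared descriptor:
pdesc->reference_handle = cpu_to_le16(buffer->ref_handle);
pdesc->buffer_length = cpu_to_le32(rcv_desc->dma_size);
pdesc->addr_buffer = cpu_to_le64(buffer->dma);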
At NIC open time, desc_head is allocated by netxen_nic_hw_resources as another contiguous buffer (an array of struct rcv_desc units) and is shared with FW via DMA; each unit carries the DMA address and size of an skb that will receive packets from above:
bus address: netxen_rcv_desc_ctx->desc_head->addr_buffer ← the skb's mapped DMA address
data size: netxen_rcv_desc_ctx->desc_head->buffer_length ← the skb's size
This is how FW knows where in host memory to DMA incoming packets.
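Because FW must fetch desc_head across the bus, it is presumably allocated as coherent DMA memory; a sketch of what netxen_nic_hw_resources does under that assumption (the ring-size macro name is assumed):

// One coherent allocation per RX ring; phys_addr is handed to FW
// so it knows where to fetch the descriptors from.
void *addr = pci_alloc_consistent(adapter->pdev,
				  RCV_DESC_RINGSIZE,	// sizeof(struct rcv_desc) * ring size
				  &rcv_desc->phys_addr);
rcv_desc->desc_head = (struct rcv_desc *)addr;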
At this point, the receive buffer units have all been allocated and each has been DMA-mapped, forming a ring buffer for receiving packets;
the DMA mapping information (desc_head) of every buffer unit has also been shared with FW via DMA. FW can therefore place packets into the host's receive buffer units and promptly raise an interrupt to the host;
the host receives packets through the NAPI interface and its poll function netxen_nic_poll:
netxen_nic_poll
    netxen_process_rcv_ring(adapter, ctx, budget / MAX_RCV_CTX);
        netxen_process_rcv(adapter, ctxid, desc);
            pci_unmap_single(pdev, buffer->dma, rcv_desc->dma_size, PCI_DMA_FROMDEVICE); // undo the device DMA mapping; the packet is now ready in buffer->skb
            skb = (struct sk_buff *)buffer->skb;
            ret = netif_receive_skb(skb); // consume one packet
// This is the interface to the kernel network stack. For zero copy, the idea is to NOT call netif_receive_skb here; instead, multiple skbs would form a ring receive buffer that is mapped as one contiguous view into user space.
// In other words, the packet producer is FW and the consumer is a user-space program: true zero copy!
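As a sketch of that idea: a char-device mmap handler could expose the pages behind the posted buffers to user space, so the FW-to-host DMA lands in memory the user program already sees. Everything below is hypothetical (device wiring, function name), and it glosses over a real obstacle: skb data from dev_alloc_skb comes from the slab allocator and is not safely mappable with vm_insert_page, so a real implementation would post page-backed buffers instead:

// Hypothetical sketch: map each rx buffer's backing page into a
// user VMA so packets can be read without copying.
static int nx_zc_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct netxen_rcv_desc_ctx *rcv_desc = file->private_data;
	unsigned long uaddr = vma->vm_start;
	u32 i;

	for (i = 0; i < rcv_desc->max_rx_desc_count; i++) {
		// Assumes each unit's buffer is a dedicated page.
		struct page *page =
			virt_to_page(rcv_desc->rx_buf_arr[i].skb->data);
		int ret = vm_insert_page(vma, uaddr, page);

		if (ret)
			return ret;
		uaddr += PAGE_SIZE;
	}
	return 0;	// user space now shares the rx ring pages
}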