Summary: When developing a driver for a data-stream device, interrupt-driven I/O combined with buffering can isolate data reception from the read system call and improve the efficiency of the device within the system. Building on a discussion of interrupt handlers and bottom-half development in uClinux, this paper takes a device interconnecting a China Telecom E1 line with Ethernet as an example and describes the development of an interrupt-driven I/O driver. The main process is: during interrupt handling, fill data into buffer blocks and connect the blocks into a linked list; during the read system call, remove the data from a buffer block and place the block on a free list for reuse. Techniques commonly used in drivers, such as blocking I/O and spin locks, are also involved. A data-stream device driver developed with these techniques ensures stable and efficient system operation.
Keywords: uClinux; interrupt-driven I/O mode; device driver
Introduction
As 32-bit microprocessors gradually become the mainstream of embedded systems, embedded applications are becoming more and more complex, and many embedded systems must use a dedicated operating system to support their applications. As a Unix-like operating system, uClinux inherits the excellent qualities of Linux and has become a preferred operating system for embedded systems.
Adding device drivers to the operating system is an essential part of embedded design, and it is important to select the appropriate driver mode for each device type. Simple devices such as memory and watchdog drivers generally use direct I/O; drivers for data-stream devices such as network interfaces should use the interrupt mechanism.
This paper introduces the development of an interrupt-driven I/O device driver for a data-stream device under uClinux.
1 Application Background
1.1 Hardware Description
The driver described in this article is used in a device that connects a telecom E1 line to Ethernet. The device taps the E1 data in bypass mode and sends it to a server on the Ethernet, where the E1 voice and signaling time slots are analyzed.
The processor in this device is a network-oriented ARM processor developed by Samsung. The E1 line interface uses the Dallas Semiconductor dedicated E1 line interface unit (LIU) chip DS2148, which performs waveform shaping, clock recovery, and HDB3 decoding. The DS2148 sends the recovered E1 data stream to an Altera Cyclone-series FPGA (EP1C3T144C8), which stores the serial E1 data stream into a FIFO; the data is then transferred to the ARM over the ARM's 32-bit external bus. The ARM packages the data and sends it to the server over Ethernet. Figure 1 shows the hardware diagram of the system. This article mainly covers the FPGA connected to the ARM's external bus and the design of the driver's interrupt mechanism in uClinux.
1.2 Hardware Connection
Figure 2 shows the connection circuit between the processor and the FPGA.
1.3 FPGA FIFO Structure
Two FIFO instances are implemented in the FPGA. To prevent conflicts between ARM and FPGA accesses, the ARM and the FPGA operate the two FIFOs in ping-pong mode, so that each side can operate on a different FIFO at the same time without waiting. Each FIFO holds 4096 bits, the data volume of one E1 multiframe. When the FPGA fills a FIFO, it notifies the ARM to read that FIFO by raising an interrupt, and sets the corresponding bit in its internal FIFO status register. The status register is named fpga_imf; it is a 32-bit register, and each bit set to "1" marks a FIFO that needs to be read.
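The ping-pong handover described above can be illustrated with a small userspace sketch. The buffer size matches the text (4096 bits = 128 32-bit words), but all function and variable names here are invented for illustration and are not part of the actual driver or FPGA logic.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of ping-pong buffering: while the "FPGA" writer
 * fills one FIFO, the "ARM" reader drains the other, so neither side
 * ever touches the buffer the other is using.
 * FIFO_WORDS = 4096 bits / 32 bits per word = 128. */
#define FIFO_WORDS 128

static uint32_t fifo[2][FIFO_WORDS];
static int write_idx = 0;           /* FIFO currently being filled */

/* Writer fills the current FIFO, toggles to the other one, and
 * returns the index of the FIFO that is now full and readable. */
int fpga_fill(const uint32_t *frame)
{
    int full = write_idx;
    memcpy(fifo[full], frame, sizeof(fifo[full]));
    write_idx ^= 1;                 /* ping-pong: switch buffers */
    return full;
}

/* Reader drains the FIFO the writer just handed over. */
void arm_read(int idx, uint32_t *out)
{
    memcpy(out, fifo[idx], sizeof(fifo[idx]));
}
```

Because the writer toggles immediately after handing a FIFO over, the reader always works on the buffer the writer is no longer touching.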
2 Software Design
Interrupt-driven I/O means that input data is filled into a buffer during interrupt handling and the process reading the device takes data from that buffer; conversely, an output buffer is filled by the process writing the device, and the data is removed from it during interrupt handling. Buffering decouples data transmission and reception from the write and read system calls and improves the overall performance of the system. The following describes the design of the interrupt program under uClinux.
2.1 Interrupt Handling in uClinux
In uClinux, the following functions are called to request an interrupt channel (interrupt request, IRQ) from the system and to release it after processing:
    int request_irq(unsigned int irq, void (*handler)(int, void *, struct pt_regs *), unsigned long flags, const char *device, void *dev_id);
    void free_irq(unsigned int irq, void *dev_id);
irq is the interrupt number; in this system it corresponds to one of the 21 interrupt sources, and external interrupt source 0 is used here. handler points to the interrupt handler function to be installed. flags is a bit mask of options related to interrupt management. device is a string passed to request_irq that identifies the owner of the interrupt in /proc/interrupts. The dev_id pointer is used when the interrupt line is shared. The function returns 0 on success or a negative error code on failure; a return value of -EBUSY indicates that another driver is already using the requested interrupt line. The following is the FPGA device's interrupt request, called in the driver's fpga_open function.
    int fpga_open(struct inode *inode, struct file *file)
    {
        int result;

        result = request_irq(fpga_irq, &fpga_isr, SA_INTERRUPT, "fpga", NULL);
        if (result != 0) {
            printk(KERN_INFO "can not register fpga isr!\n");
        } else {
            printk(KERN_INFO "fpga isr register successfully!\n");
        }
        return result;
    }
After the interrupt channel has been requested, the system responds to external interrupt 0 and enters the interrupt handler. The first step of the interrupt handler is to clear the external interrupt 0 bit in the interrupt pending register of the S3C4510B, so that the FPGA can generate new interrupts. In uClinux the following macro is called:
    #define CLEAR_PEND_INT(n)  (INTPEND = (1 << (n)))
The task of the interrupt handler is to acknowledge the interrupt to the device and to read or write data according to the meaning of the interrupt being serviced. The main work of the FPGA interrupt handler is therefore to read the FIFO status register in the FPGA, determine which FIFOs must be read, and arrange for the data to be received. The inl function provided by the system is used in the program:
    unsigned int status;
    status = inl(fpga_imf);
The execution of an interrupt handler should be as short as possible, yet receiving data from the FPGA requires reading an entire FIFO, 128 words, at a time. This is an external I/O operation that takes a long time, so it is placed in the bottom half of interrupt processing (bottom half, BH). The design of the bottom half is described below.
2.2 BH Mechanism
The biggest difference between the bottom half and the top half is that all interrupts are enabled while the BH executes, so it runs in a "safer" period. The 2.4 uClinux kernel has three mechanisms for bottom-half processing: softirqs, tasklets, and BH. The simple BH mechanism is chosen here.
The BH mechanism is in fact built on task queues: the work the interrupt handler defers is inserted into a particular task queue to wait for kernel execution. The kernel maintains multiple task queues, but drivers may only use the following three:
① The tq_scheduler queue. This queue is processed when the scheduler runs. Since the scheduler runs in the context of the process being scheduled out, tasks in this queue can do almost anything; they do not run at interrupt time.
② The tq_timer queue. This queue is run by the timer tick handler, so all tasks in this queue run at interrupt time.
③ The tq_immediate queue. The immediate queue is processed as soon as possible, either when a system call returns or when the scheduler runs, whichever happens first. The queue is processed at interrupt time.
The queue elements are described in the following structure:
    struct tq_struct {
        struct tq_struct *next;   /* linked list of active BHs */
        unsigned long sync;       /* must be initialized to zero */
        void (*routine)(void *);  /* function to call */
        void *data;               /* argument passed to the function */
    };
The most important fields in this structure are routine and data. To insert a deferred task into a queue, these fields must be set first and the next and sync fields cleared. The sync flag prevents the same task from being queued more than once, which would corrupt the next pointer. Once a task is queued, its data structure is considered "owned" by the kernel and must not be modified.
In the FPGA driver, a task queue element is defined to implement the bottom half:
    struct tq_struct el_task;
    unsigned int el_line;
The el_line variable is used to save the parameter passed to the task. When the FPGA device is opened, the task queue structure must be assigned:
    el_task.routine = fpga_bh;
    el_task.data = &el_line;
Here fpga_bh is the name of the handler function, declared as void fpga_bh(unsigned int *line), and el_line is the actual argument passed to fpga_bh.
The following function operates on task queues:
    void queue_task(struct tq_struct *task, task_queue *list);
As its name suggests, this function inserts a task into a queue. It disables interrupts internally to avoid races, so it can be called from any function in the module. The FPGA task is inserted into the tq_immediate queue, so the list argument is &tq_immediate.
When a piece of code needs to schedule the bottom half to run, it simply calls mark_bh:
    void mark_bh(int nr);
Here nr is the number of the BH to activate, a symbolic constant defined in the header file <linux/interrupt.h>. The handler corresponding to each BH is provided by the driver that owns it.
After the task queue element has been set up, the interrupt handler can trigger the BH mechanism: it assigns the value of fpga_imf to el_line, calls queue_task to insert the task into the tq_immediate queue, calls mark_bh(IMMEDIATE_BH) to start bottom-half processing, and then exits.
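To make the mechanism concrete, the following userspace sketch re-implements the essential behavior of queue_task and of running the queue after mark_bh. The sim_* names are assumptions made for this illustration only; they are not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>

/* Userspace sketch of a 2.4-style task queue: the interrupt handler
 * enqueues a deferred task, and the kernel later walks the queue,
 * calling each routine with its data argument. */
struct sim_tq_struct {
    struct sim_tq_struct *next;   /* chain of queued tasks */
    int sync;                     /* nonzero while queued */
    void (*routine)(void *);      /* deferred function */
    void *data;                   /* argument for the function */
};

static struct sim_tq_struct *sim_queue = NULL;
static int sim_runs;              /* side effect for the demo routine */

/* Demo bottom-half routine: accumulate the passed-in value. */
static void sim_bh(void *data) { sim_runs += *(int *)data; }

/* Analog of queue_task(): the sync flag stops double insertion. */
void sim_queue_task(struct sim_tq_struct *t)
{
    if (t->sync)
        return;                   /* already queued: ignore */
    t->sync = 1;
    t->next = sim_queue;
    sim_queue = t;
}

/* Analog of running the queue after mark_bh(): consume every task. */
void sim_run_task_queue(void)
{
    struct sim_tq_struct *t = sim_queue;
    sim_queue = NULL;
    while (t) {
        struct sim_tq_struct *next = t->next;
        t->sync = 0;              /* task may be queued again now */
        t->routine(t->data);
        t = next;
    }
}
```

Note how the sync flag makes re-queuing an already-queued task a no-op, which is exactly why the kernel requires it to be cleared before the first insertion.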
2.3 Bottom-Half Handler and Buffers
After the uClinux operating system exits the interrupt handler, the tasks in the tq_immediate queue run immediately, and with them the fpga_bh function. On entry to fpga_bh, the address of el_line is passed as the actual argument to the line parameter; that is, the value of the FIFO status register (fpga_imf) is passed indirectly to the bottom half. The bottom half checks each bit of this value and determines accordingly which FIFOs must be read.
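The per-bit check can be sketched as a small helper. fifos_to_read and its arguments are illustrative names invented here, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the bit check performed in the bottom half: every bit set
 * to 1 in the saved fpga_imf value marks a full FIFO. The helper
 * records the index of each set bit, i.e. each FIFO the bottom half
 * must drain, and returns how many there were. */
int fifos_to_read(uint32_t imf, int idx[32])
{
    int n = 0;
    for (int i = 0; i < 32; i++)
        if (imf & (1u << i))
            idx[n++] = i;
    return n;
}
```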
Data read from the FIFOs is stored in kernel buffers. Since each FIFO holds one E1 multiframe, the kernel buffer is likewise organized as buffer blocks sized to an E1 multiframe, and the blocks are connected in a linked list. The data structure of a buffer unit is as follows:
    struct buf_struct {
        struct list_head list;     /* linked list head */
        unsigned int buf_size;     /* data block size */
        unsigned int *buf_head;    /* pointer to the start of the buffer block */
        unsigned int *buf_cur;     /* current buffer pointer */
    };
buf_size is the size of the data block, measured in words. The buffer block itself is allocated in the kernel heap: buf_head points to the first address of the block, while buf_cur points to the unit currently being operated on. To use the list mechanism, the driver must include the header file <linux/list.h>, which defines the list_head structure:
    struct list_head {
        struct list_head *next, *prev;
    };
To access the linked list of buffer blocks, a list head is also needed; it is defined as a global variable in the driver:
    struct list_head read_list;
The list head is a standalone list_head structure. Before use, it must be initialized with the INIT_LIST_HEAD macro:
    INIT_LIST_HEAD(&read_list);
Linux provides linked-list operations in the header file <linux/list.h>:
    list_add(struct list_head *new, struct list_head *head);      /* insert a new entry after the list head */
    list_add_tail(struct list_head *new, struct list_head *head); /* add a new entry at the tail of the list */
    list_del(struct list_head *entry);                            /* delete the given entry from the list */
    list_empty(struct list_head *head);                           /* test whether the list is empty */
    list_entry(ptr, type_of_struct, field_name);                  /* get the structure containing a list head */
Here list_entry maps a list_head structure pointer back to a pointer to the larger structure containing it: ptr is a pointer to the struct list_head, type_of_struct is the type of the structure containing it, and field_name is the name of the list field within that structure. For example, this macro maps the list pointer readl, which points into a data buffer block, to the buffer block structure pointer buf:
    struct buf_struct *buf = list_entry(readl, struct buf_struct, list);
In the bottom-half handler, kernel buffers are allocated dynamically. Because a driver is part of the kernel, dedicated functions are needed to allocate space from the kernel heap. The following are defined in the header file <linux/malloc.h>:
    void *kmalloc(size_t size, int flags);  /* allocate size bytes from the kernel heap */
    void kfree(void *obj);                  /* free space allocated by kmalloc */
The first parameter of kmalloc is the size and the second is the priority. The most common priority is GFP_KERNEL, which means the allocation is made on behalf of a process running in kernel mode. Sometimes kmalloc is called outside of process context, for example in interrupt handlers, task queue processing, or kernel timers; in those cases the current process must not be put to sleep, and the priority GFP_ATOMIC should be used instead.
kmalloc should not be used too frequently to allocate space in the kernel heap, because interrupts may arrive during the allocation, which is unsafe. Instead, another linked list is created in the driver to recycle used buffer blocks, with free_list as the list head of the recycled blocks:
    struct list_head free_list;
There are thus two lists: one holding blocks loaded with data (the read list) and one holding used buffer blocks (the free list). As long as the free list is non-empty, a buffer block can be taken directly from it when one is needed, with no call to kmalloc.
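The two-list recycling scheme can be sketched in userspace with a minimal re-implementation of the <linux/list.h> primitives used above. The list functions here are local stand-ins modeled on the kernel's, not the kernel code itself, and get_buf is an invented helper name:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal circular doubly linked list, modeled on <linux/list.h>. */
struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *n, struct list_head *h)
{
    n->prev = h->prev; n->next = h;
    h->prev->next = n; h->prev = n;
}

static void list_del(struct list_head *e)
{
    e->prev->next = e->next; e->next->prev = e->prev;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* list_entry maps a list_head pointer back to its containing struct. */
#define list_entry(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

struct buf_struct {
    struct list_head list;
    unsigned int buf_size;
};

static struct list_head read_list, free_list;

/* Get a buffer block: reuse one from free_list when possible,
 * otherwise fall back to a fresh allocation (kmalloc's stand-in). */
struct buf_struct *get_buf(void)
{
    if (!list_empty(&free_list)) {
        struct list_head *p = free_list.next;
        list_del(p);
        return list_entry(p, struct buf_struct, list);
    }
    return malloc(sizeof(struct buf_struct));
}
```

The usage pattern mirrors the driver: a filled block goes onto read_list, and after read() consumes it the block moves to free_list, where the next get_buf call recovers it without allocating.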
2.4 Blocking I/O and Spin Locks
In the driver, read copies a kernel buffer to user space. Two situations deserve attention during this operation:
① At read time the read list is found empty, that is, no data is available.
In this case read may immediately return -EAGAIN to inform the user process that no data was read. The alternative is to implement blocking I/O: when no data is available, the user process is put to sleep to wait for data.
Both sleeping and waking are handled through the same basic data type, the wait queue (wait_queue_head_t), a queue of processes waiting for an event. Before use it must be declared and initialized as follows:
    wait_queue_head_t read_queue;
    init_waitqueue_head(&read_queue);
One of the following functions can be called to put the process to sleep:
    void wait_event(wait_queue_head_t queue, int condition);
    int wait_event_interruptible(wait_queue_head_t queue, int condition);
These two functions combine waiting for an event with testing for it: after the call, the process sleeps until the C boolean expression condition becomes true. In the driver's read function, if the read list is empty, they are used to sleep:
    while (list_empty(&read_list)) {
        if (filp->f_flags & O_NONBLOCK)   /* non-blocking I/O requested */
            return -EAGAIN;
        if (wait_event_interruptible(read_queue, !list_empty(&read_list)))
            return -ERESTARTSYS;
    }
To wake the process up again, one of the following functions is called:
    wake_up(wait_queue_head_t *queue);
    wake_up_interruptible(wait_queue_head_t *queue);
The driver should wake the process promptly once data arrives; that is, after reading data from the FIFO, the bottom half should execute the following before exiting:
    wake_up_interruptible(&read_queue);
Note that being woken does not guarantee that the awaited event has occurred, so after returning from sleep the condition should be tested again in a loop.
② While the read operation is accessing a list, the bottom half may need to access the same list. This is dangerous and must be prevented.
A spin lock is used to prevent it: the read path acquires the lock before accessing the list and releases it when the access ends; the bottom half, before accessing the list, checks whether the spin lock is held and, if so, waits until it becomes available.
Spin locks are described by the type spinlock_t. A spin lock is declared and initialized to the unlocked state as follows:
    spinlock_t list_lock = SPIN_LOCK_UNLOCKED;
The spin lock functions used here are:
    spin_lock_bh(spinlock_t *lock);
    spin_unlock_bh(spinlock_t *lock);
These functions acquire the spin lock and also disable bottom-half execution, which completely guarantees that the bottom half cannot access the list during the read operation. The program implements this as follows:
    spin_lock_bh(&list_lock);
    list_del(readl);                  /* remove the used buffer block from the read list */
    list_add_tail(readl, &free_list); /* insert the used buffer block into the free list */
    spin_unlock_bh(&list_lock);
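The interplay of blocking read, wakeup, and list locking can be mimicked in userspace with POSIX threads: the condition-variable wait plays the role of wait_event_interruptible, and the signal that of wake_up_interruptible. This is an analogy under assumed names, not the driver's code:

```c
#include <assert.h>
#include <pthread.h>

/* Pthread-based userspace analog of the driver's blocking read. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready = PTHREAD_COND_INITIALIZER;
static int items = 0;                 /* count of filled buffer blocks */

/* "Bottom half": produce one item and wake the sleeping reader,
 * like wake_up_interruptible(&read_queue). */
static void *bottom_half(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    items++;                          /* buffer block added to read list */
    pthread_cond_signal(&data_ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Analog of the driver's read(): block until data is available.
 * The while loop rechecks the condition after every wakeup, since a
 * wakeup does not guarantee the awaited event occurred. */
int blocking_read(void)
{
    pthread_mutex_lock(&lock);
    while (items == 0)
        pthread_cond_wait(&data_ready, &lock);
    int got = items--;
    pthread_mutex_unlock(&lock);
    return got;
}
```

Holding the mutex across both the list update and the signal parallels the spin_lock_bh protection above: the "bottom half" and the reader never manipulate the shared state at the same time.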
2.5 Interrupt-Driven I/O
At this point the flow of data between the ARM and the FPGA can be described in full. When an FPGA FIFO is full, an interrupt is raised to the ARM. After entering the interrupt handler, the ARM reads the value of the FIFO status register (fpga_imf) in the FPGA, inserts a task into the immediate queue (tq_immediate), starts the bottom half (BH), passes the status register value to the bottom-half handler (fpga_bh), and then exits the interrupt handler. On entering the bottom half, the FIFOs to be processed are determined from the status register value, and their data is read into kernel buffers. Each buffer may be obtained from the free list (free_list); if the free list is empty, a new buffer block is allocated. The filled buffer block is then added to the read list (read_list) and any sleeping process is woken, which completes the bottom-half work. When a user process reads the FPGA device, the driver's read function checks the read list; if it is empty, the process sleeps waiting for data to arrive. Once data is available, the buffer block taken from the read list is copied to user space, and the used block is inserted into the free list. The operation of the kernel buffers is shown in Figure 3: the upper part of Figure 3 takes place in the bottom-half handler, and the lower part in the read function.
Conclusion
Drivers for continuous data-stream devices in uClinux usually use the interrupt mechanism. The interrupt-driven I/O approach discussed in this article provides a practical method for such applications. Linked lists, blocking I/O, and spin locks are also frequently used techniques in driver development.