1. The hierarchical structure of TTY drivers in Linux
From the top down, the stack is divided into the TTY core layer, the TTY line discipline layer, and the TTY driver layer.
2. TTY core layer and line discipline layer analysis
User-space programs perform read and write operations directly against the TTY core layer. In tty_io.c:
static int __init tty_init(void)
{
	cdev_init(&tty_cdev, &tty_fops);
	if (cdev_add(&tty_cdev, MKDEV(TTYAUX_MAJOR, 0), 1) ||
	    register_chrdev_region(MKDEV(TTYAUX_MAJOR, 0), 1, "/dev/tty") < 0)
		panic("Couldn't register /dev/tty driver\n");
	device_create(tty_class, NULL, MKDEV(TTYAUX_MAJOR, 0), NULL, "tty");
	......
}
From this initialization code we learn the following:
a character device driver is registered, and user-space operations map to the functions in the tty_fops structure:
static const struct file_operations tty_fops = {
	.llseek		= no_llseek,
	.read		= tty_read,
	.write		= tty_write,
	.poll		= tty_poll,
	.unlocked_ioctl	= tty_ioctl,
	.compat_ioctl	= tty_compat_ioctl,
	.open		= tty_open,
	.release	= tty_release,
	.fasync		= tty_fasync,
};
For character device drivers, we know that read/write operations correspond to fops one by one.
tty_open:
static int tty_open(struct inode *inode, struct file *filp)
{
	int index;
	dev_t device = inode->i_rdev;
	struct tty_driver *driver;
	......
	driver = get_tty_driver(device, &index);
	......
	tty = tty_init_dev(driver, index, 0);
	......
	retval = tty_add_file(tty, filp);
	......
	if (tty->ops->open)
		retval = tty->ops->open(tty, filp);
}
get_tty_driver looks up the tty_driver in the global tty_drivers linked list by device number.
tty_init_dev initializes a tty_struct:
	tty->driver = driver;
	tty->ops = driver->ops;
and attaches the line discipline:
	ldops = tty_ldiscs[N_TTY];
	ld->ops = ldops;
	tty->ldisc = ld;
In fact, tty_ldiscs[N_TTY] is filled in at kernel boot, during console initialization, which calls:
	tty_register_ldisc(N_TTY, &tty_ldisc_N_TTY);
which in effect performs: tty_ldiscs[N_TTY] = &tty_ldisc_N_TTY;
struct tty_ldisc_ops tty_ldisc_N_TTY = {
	.magic		= TTY_LDISC_MAGIC,
	.name		= "n_tty",
	.open		= n_tty_open,
	.close		= n_tty_close,
	.flush_buffer	= n_tty_flush_buffer,
	.chars_in_buffer = n_tty_chars_in_buffer,
	.read		= n_tty_read,
	.write		= n_tty_write,
	.ioctl		= n_tty_ioctl,
	.set_termios	= n_tty_set_termios,
	.poll		= n_tty_poll,
	.receive_buf	= n_tty_receive_buf,
	.write_wakeup	= n_tty_write_wakeup
};
tty_add_file stores the tty in the file's private_data field.
The tty->ops->open call therefore invokes driver->ops->open; this is how control passes from the TTY core layer down to the TTY driver layer.
tty_write:
static ssize_t tty_write(struct file *file, const char __user *buf,
			 size_t count, loff_t *ppos)
{
	......
	ld = tty_ldisc_ref_wait(tty);
	if (!ld->ops->write)
		ret = -EIO;
	else
		ret = do_tty_write(ld->ops->write, tty, file, buf, count);
	......
}
The function above shows that tty_write calls the line discipline's write function. So what does the write function in the ldisc look like? After some processing, it finally calls:
	tty->ops->flush_chars(tty);
	tty->ops->write(tty, b, nr);
Both calls clearly go to tty_driver operation functions, because the earlier tty_open performed tty->ops = driver->ops. So what is this tty_driver? In the TTY system, a tty_driver must be registered by the driver layer, and its ops function set is initialized at registration time. In other words, the next thing to examine is the tty_driver.
tty_read:
static ssize_t tty_read(struct file *file, char __user *buf, size_t count,
			loff_t *ppos)
{
	......
	ld = tty_ldisc_ref_wait(tty);
	if (ld->ops->read)
		i = (ld->ops->read)(tty, file, buf, count);
	else
		i = -EIO;
	......
}
As in tty_write, tty_read also calls the corresponding read function of the line discipline. The difference is that this read does not call the tty_driver's ops->read; instead it does:
	uncopied = copy_from_read_buf(tty, &b, &nr);
	uncopied += copy_from_read_buf(tty, &b, &nr);
As its name suggests, copy_from_read_buf copies data out of the read_buf buffer, consuming it from the tail, tty->read_tail. So where does the data in read_buf come from? Presumably the tty_driver put it there.
tty_ioctl:
long tty_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	......
	switch (cmd) {
	case ...:
		......
	}
}
Operations are dispatched according to the cmd value: some act on the line discipline, others go directly through the tty_driver.
3. TTY driver layer analysis
Next, let's look at the TTY driver layer.
The TTY driver layer performs the operations appropriate to the underlying hardware. Here we take the serial port as an example.
As a standard device class, the serial subsystem factors the common code into a uart core layer and keeps the hardware specifics in the serial driver layer.
The serial driver is loaded as a module. Take 8250.c as an example:
static int __init serial8250_init(void)
{
	......
	serial8250_reg.nr = UART_NR;
	ret = uart_register_driver(&serial8250_reg);
	......
	serial8250_register_ports(&serial8250_reg, &serial8250_isa_devs->dev);
	......
}
#define UART_NR CONFIG_SERIAL_8250_NR_UARTS
CONFIG_SERIAL_8250_NR_UARTS is set at kernel configuration time and gives the number of serial ports supported.
static struct uart_driver serial8250_reg = {
	.owner		= THIS_MODULE,
	.driver_name	= "serial",
	.dev_name	= "ttyS",
	.major		= TTY_MAJOR,
	.minor		= 64,
	.cons		= SERIAL8250_CONSOLE,
};
Several important data structures are involved at the driver layer:
	struct uart_driver;
	struct uart_state;
	struct uart_port;
	struct tty_driver;
	struct tty_port;
Once the relationships between these structures are clear, the TTY driver layer is clear.
uart_register_driver:
This function registers a TTY driver with the TTY core layer:
	retval = tty_register_driver(normal);
where normal is a tty_driver.
In addition, various fields and pointers are wired up between the tty_driver and the uart_driver. What we care about most is that the tty_driver's operations are initialized to uart_ops, so that the TTY core layer can drive the UART layer through uart_ops.
serial8250_register_ports:
The two most important calls here are serial8250_isa_init_ports and uart_add_one_port.
serial8250_isa_init_ports initializes the uart_8250_port structures: it sets up the timer and initializes each uart_port.
uart_add_one_port adds a port to the uart_driver. The state member of the uart_driver points to nr slots, and the main task of this function is to attach a port to one of those slots. Through the port, the uart_driver can reach the low-level hardware operations in the port's ops function set.
Now let's analyze the glue: how tty_driver connects the TTY core layer (or the ldisc layer) with the serial-layer uart_port. The operations involved are uart_ops.
Uart_open:
Staticint uart_open (struct tty_struct * tty, struct file * filp)
{
......
Retval = uart_startup (tty, state, 0 );
......
}
Staticint uart_startup (struct tty_struct * tty, struct uart_state * state, int init_hw)
{
......
Retval = uport-> ops-> startup (uport );
......
}
uart_startup calls the startup function in the uart_port's ops to initialize the serial port. This is where the driver requests the receive interrupt (or sets up a polling timer).
Since the receive interrupt is requested in startup, the interrupt service routine is closely tied to the read path. From the read path in the TTY core layer we know that received data must eventually reach read_buf, so let's look at the interrupt service routine.
It calls receive_chars to collect the data. Inside receive_chars there are two data-moving functions:
tty_insert_flip_char and tty_flip_buffer_push.
static inline int tty_insert_flip_char(struct tty_struct *tty,
				       unsigned char ch, char flag)
{
	struct tty_buffer *tb = tty->buf.tail;
	if (tb && tb->used < tb->size) {
		tb->flag_buf_ptr[tb->used] = flag;
		tb->char_buf_ptr[tb->used++] = ch;
		return 1;
	}
	return tty_insert_flip_string_flags(tty, &ch, &flag, 1);
}
When the current tty_buffer has no space left, tty_insert_flip_string_flags is called; it finds the next tty_buffer and stores the data in that buffer's char_buf_ptr.
So how does the data in char_buf_ptr become associated with the read_buf in the line discipline? Look at where the tty_buffer machinery is initialized, the tty_buffer_init function:
void tty_buffer_init(struct tty_struct *tty)
{
	spin_lock_init(&tty->buf.lock);
	tty->buf.head = NULL;
	tty->buf.tail = NULL;
	tty->buf.free = NULL;
	tty->buf.memory_used = 0;
	INIT_DELAYED_WORK(&tty->buf.work, flush_to_ldisc);
}
At the end of the function, a delayed work item is initialized.
When is this work scheduled? In the driver layer, receive_chars eventually calls the tty_flip_buffer_push function.
void tty_flip_buffer_push(struct tty_struct *tty)
{
	unsigned long flags;

	spin_lock_irqsave(&tty->buf.lock, flags);
	if (tty->buf.tail != NULL)
		tty->buf.tail->commit = tty->buf.tail->used;
	spin_unlock_irqrestore(&tty->buf.lock, flags);

	if (tty->low_latency)
		flush_to_ldisc(&tty->buf.work.work);
	else
		schedule_delayed_work(&tty->buf.work, 1);
}
So data sitting in the tty_buffer is pushed to the line discipline in one of two ways: by calling flush_to_ldisc directly (the low_latency case) or by scheduling the delayed work.
flush_to_ldisc is the function the work queue runs:
static void flush_to_ldisc(struct work_struct *work)
{
	......
	while ((head = tty->buf.head) != NULL) {
		......
		count = head->commit - head->read;
		......
		char_buf = head->char_buf_ptr + head->read;
		flag_buf = head->flag_buf_ptr + head->read;
		head->read += count;
		disc->ops->receive_buf(tty, char_buf,
				       flag_buf, count);
		......
	}
	......
}
The main job of this function is to locate the data in each tty_buffer's char_buf_ptr and hand the buffer pointers to the line discipline's receive_buf operation. Let's look at receive_buf:
static void n_tty_receive_buf(struct tty_struct *tty, const unsigned char *cp,
			      char *fp, int count)
{
	......
	if (tty->real_raw) {
		......
		memcpy(tty->read_buf + tty->read_head, cp, i);
		......
	} else {
		......
		switch (flags) {
		case TTY_NORMAL:
			n_tty_receive_char(tty, *p);
			break;
		......
		}
		if (tty->ops->flush_chars)
			tty->ops->flush_chars(tty);
		......
	}
	......
}
From the code above: when the if condition holds (raw mode), the data is copied straight into the tty's read_buf; in the else branch, a character with TTY_NORMAL status goes through n_tty_receive_char, which calls put_tty_queue, where the data is finally copied into the tty's read_buf.
At this point, the data path for tty read operations is fully connected.
uart_write:
static int uart_write(struct tty_struct *tty,
		      const unsigned char *buf, int count)
{
	......
	port = state->uart_port;
	circ = &state->xmit;
	......
	while (1) {
		c = CIRC_SPACE_TO_END(circ->head, circ->tail, UART_XMIT_SIZE);
		......
		memcpy(circ->buf + circ->head, buf, c);
		......
	}
	......
	uart_start(tty);
	return ret;
}
The code above copies the data to be written into the circular transmit buffer in the state, then calls uart_start.
static void __uart_start(struct tty_struct *tty)
{
	struct uart_state *state = tty->driver_data;
	struct uart_port *port = state->uart_port;

	if (!uart_circ_empty(&state->xmit) && state->xmit.buf &&
	    !tty->stopped && !tty->hw_stopped)
		port->ops->start_tx(port);
}
This calls start_tx from the uart_port's operation function set.
static void serial8250_start_tx(struct uart_port *port)
{
	struct uart_8250_port *up = container_of(port, struct uart_8250_port, port);
	......
	transmit_chars(up);
	......
}
In transmit_chars, the data in the state->xmit buffer is written into the serial port's transmit register, i.e. the data reaches the hardware. At this point, the data path for write operations is also fully connected.