Http://blog.sina.com.cn/s/blog_53d02d550102v8bu.html
With the wide application of embedded processors and open-source Linux, a variety of video services have gradually been developed for embedded systems.
1. Introduction
With the development of multimedia technology, video compression coding, and network communication, the digital video server has gradually matured. In recent years, with the wide application of embedded processors and open-source Linux, video services have increasingly been combined with embedded systems. This paper proposes a construction method for an embedded web video server: embedded Linux and the S3C2440 serve as the core platform, a web server and a video server are built on that platform, and clients can view the video directly through a web browser.
2. Construction of video capture and transmission module
The hardware platform adopted by the system is an embedded development board based on the S3C2440, an ARM920T-architecture CPU running at 400 MHz. The board integrates 64 MB of 32-bit SDRAM and 64 MB of NAND flash, three USB host ports, three UARTs, four DMA channels, and a 10 Mbit/s network interface using the CS8900Q3 Ethernet controller chip. The camera is a USB camera based on the Vimicro (Zhongxing Micro) 301 chip. The software part of the system consists of the video capture and transmission module and the web server.
2.1 Video Capture Module
The driver must be installed before video capture. Video4Linux is the Linux kernel's driver framework for video devices; it provides a unified programming interface for applications that use video devices. The system's S3C2440 development board ships with a Linux 2.6.12 kernel, and the Video4Linux driver for the video device was added when the kernel was compiled.
(1) Initialization of the device
Call open() to open the device and ioctl() to control it, for example to set the contrast, brightness, palette, and access mode. The main code is as follows:
- int fd = open("/dev/v4l/video0", O_RDWR); /* open the device */
- /* Get basic information about the device (device name, maximum and minimum supported resolution, signal source information, etc.) */
- ioctl(vd->fd, VIDIOCGCAP, &(vd->capability));
- /* Get the various properties of images captured by the device */
- ioctl(vd->fd, VIDIOCGPICT, &(vd->picture));
- /* To change the image settings, first change the corresponding fields in picture, then call ioctl(vd->fd, VIDIOCSPICT, &(vd->picture)); this program uses the default values throughout */
(2) Capturing images
The system uses memory mapping to capture images. The main code of the capture thread function grab() is as follows:
- /* Map driver memory to store the captured image data */
- vd->pframebuffer = (unsigned char *) mmap(0, vd->videombuf.size, PROT_READ | PROT_WRITE, MAP_SHARED, vd->fd, 0);
- for (;;)
- {
-     /* Start capturing one frame, using the memory-mapped method */
-     if (ioctl(vd->fd, VIDIOCMCAPTURE, &(vd->vmmap)) < 0)
-     {
-         perror("VIDIOCMCAPTURE error\n");
-         erreur = -1;
-     }
-     /* Wait for the frame to complete; the captured image data is placed at the address starting at vd->pframebuffer and is vd->videombuf.size bytes long */
-     if (ioctl(vd->fd, VIDIOCSYNC, &vd->vmmap.frame) < 0)
-     {
-         perror("VIDIOCSYNC error\n");
-         erreur = -1;
-     }
- }
The capture thread is then created in the main function with pthread_create(&w1, NULL, (void *) grab, NULL);. The thread runs continuously, capturing image data and placing it in a buffer, from which the sending thread reads the video data.
2.2 Video Transmission Module
The system supports simultaneous access by multiple clients, so a dedicated thread is created for each connected client to exchange data with it. The server first calls socket() to create a socket, then bind() to bind port 7000 to it, then listen() to listen on the socket and wait for client connections, and finally accept() to establish a connection with a client. Figure 3 shows the flow of the video transmission module.
The following is the key code for creating a thread:
- while (signalquit) /* keep running until an exit signal arrives */
- {
-     /* Wait for a client connection; block if there is none. When a client
-        connects, create a thread that exchanges data with the client on the
-        new socket */
-     if ((new_sock = accept(serv_sock, (struct sockaddr *) &their_addr, &sin_size)) == -1)
-     {
-         continue;
-     }
-     pthread_create(&server_th, NULL, (void *) service, &new_sock);
- }
The thread function service() mainly reads data from the buffer and writes it to the socket; it also reads settings from the socket to configure the properties of the next captured image. The code is not given here.
3. Embedded Web server
An embedded web server is a web server ported to an embedded system. It still communicates over the standard HTTP protocol, so to a client, accessing an embedded web server is the same as accessing an ordinary web server. Boa is a small web server: its executable code is only 70 KB, it occupies few system resources, and it is fast and secure. This system uses Boa. After downloading and unpacking the source code, modify the Makefile, changing the value of the variable CC to arm-linux-gcc and LD to arm-linux-ld; running make then generates the Boa application. Download Boa and boa.conf to the development board, and modify the boa.conf configuration file according to the board's file system, for example the log path and the web root directory. Put the web pages into the root directory; Boa can then return the page requested by the user by reading the root-directory setting in the configuration file.
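The Makefile change described above amounts to pointing the build at the cross toolchain. A sketch of the relevant lines, assuming the variable names in Boa's generated Makefile and the arm-linux- toolchain prefix from the text:

```makefile
# In the Makefile of the Boa source tree, replace the host
# compiler and linker with the ARM cross toolchain:
CC = arm-linux-gcc
LD = arm-linux-ld
```

After this change, running make produces a Boa binary that runs on the ARM board rather than the build host.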
The system's web page is written in HTML. Transferring video data through the web page requires an applet, which is embedded with the applet tag in HTML. The following code embeds the applet:
- <applet codebase="." archive="Jwebcamplayer.jar" code="Jwebcamplayer.class" name="Jwebcamplayer" id="JWebcamPlayer" align="center" width="" height="" MAYSCRIPT>
- <param name="Color" value="#ffffff">
- <param name="Server" value="">
- <param name="Port" value="7000">
- <param name="scriptable" value="true">
- <param name="Mayscript" value="true">
- </applet>
The codebase and code attributes give the full path to the applet class; align gives the position of the applet window; width and height give its size; Port is the port bound by the server-side video capture program; Server is the server address, but since it is obtained through GetHost() in the Jwebcamplayer.java program, it is left empty here. The applet calls the Jwebcamplayer.jar package to interact with the server-side video sending thread and display the video in the browser.
The Jwebcamplayer program obtains the incoming data from the port, parses it, generates image objects, and outputs them to a frame object, completing video playback at a frame rate of up to 20 fps.
The applet and Jwebcamplayer.jar are stored on the web server. When a user accesses the web server to watch video, the applet is automatically loaded with the web page and interpreted by the user's browser. When the applet runs, the Jwebcamplayer.jar package is executed automatically. The package first performs some socket and image-display initialization, such as obtaining the server IP and port and setting the color value, and then calls start() to connect to the video server and play the video.
4. System test
The system was tested on a LAN. The client runs the Windows XP operating system and uses the IE browser that ships with it; because the player contains Java controls, the JRE must be installed before the browser can support them. The server side uses the TE2440 development board running Linux. Plug the camera into the board's USB port, connect the board to the LAN with a network cable, set the board's local IP address (222.22.66.246) through a serial terminal, and start the video capture program and the web server.
5. Conclusion
This paper has introduced a construction method for an embedded web video server system, which has been implemented on the S3C2440 development board and can be used on a LAN. With small improvements, such as adding video coding and flow control, the system could support remote video communication.
The construction of embedded Linux web video server