Application of the Video Server Software mjpg-streamer in Embedded Multimedia Data Processing

Source: Internet
Author: User

http://www.stmcu.org/article/12-06/2075521340594733.html?sort=1127_1502_0_0

"Mjpg-streamer" is a lightweight video server software. A command line application that can obtain images from a single input component and transmit them to multiple output components.

The software can be used on an IP-based network to obtain JPEG images from a camera and transmit them to viewers such as Firefox, Cambozola, the VideoLAN client, or even a Windows Mobile device running the TCPMP player.

It is derived from uvc_streamer and was written for embedded devices with limited RAM and CPU resources. Because Linux-UVC compatible cameras can generate JPEG data directly, even embedded devices running OpenWrt Linux can process M-JPEG data streams quickly.

The source code of this tool is concise and clear, with well-defined component functions and connections between them. Developed in C for Linux, it can be ported to different platforms, and it can be improved and redistributed under the terms of the GPL v2.

1. Process of mjpg-streamer

The mjpg-streamer main function is defined in the mjpg-streamer.c file. The processing flow of the main function is shown in Figure 1.
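
The plugin-loading sequence that main() performs can be sketched roughly as follows. This is a simplified illustration, not the actual mjpg-streamer.c code: the plugin paths, the NULL parameters passed to the init functions (the real ones take parameter structures), and the bare pause() call are placeholders.

    /* Simplified sketch of how main() loads and starts the input and output
     * components via dlopen/dlsym; compile with -ldl. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        void *in_handle, *out_handle;
        int (*in_init)(void *);
        int (*in_run)(void);
        int (*out_init)(void *);
        int (*out_run)(void);

        /* 1. load the plug-ins selected on the command line (paths assumed) */
        in_handle  = dlopen("input_uvc.so", RTLD_LAZY);
        out_handle = dlopen("output_http.so", RTLD_LAZY);
        if (!in_handle || !out_handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return EXIT_FAILURE;
        }

        /* 2. resolve the component interface functions exported by each plug-in */
        in_init  = (int (*)(void *))dlsym(in_handle, "input_init");
        in_run   = (int (*)(void))dlsym(in_handle, "input_run");
        out_init = (int (*)(void *))dlsym(out_handle, "output_init");
        out_run  = (int (*)(void))dlsym(out_handle, "output_run");

        /* 3. initialise and start the components, then let the worker threads run */
        if (in_init(NULL) != 0 || out_init(NULL) != 0)
            return EXIT_FAILURE;
        in_run();
        out_run();
        pause();
        return EXIT_SUCCESS;
    }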

2. Components related to mjpg-streamer

mjpg-streamer adopts a modular design and is organized in units of functional blocks called plug-ins (components). The software defines the inputs and outputs of each component and the coupling between components, so you can select the modules required by your functional needs. This modular design simplifies code writing, debugging, and maintenance: programmers can easily rewrite the software by following the design specifications, or write new modules to extend its functionality.

The most important and most commonly used components of mjpg-streamer are the input_uvc input component and the output_http output component. Other components include input_control, input_file, input_testpicture, input_gspcav1, output_autofocus, output_file, and output_viewer, as shown in Figure 2.

2.1 input_uvc input component

The main task of the input_uvc component is to obtain and compress images captured by the camera. It provides several component interface functions, which are the interfaces a component exposes for external use. For the definitions of the related functions, see the input_uvc.c file.

The following uses the input_uvc input component as an example to describe the idea behind the modular program design and how mjpg-streamer components work.

1) int input_init(input_parameter *param)
This function is the initialization function of the input component. Its main tasks are to initialize the relevant parameters, allocate memory for them, and open the video device.

The workflow of the input component's initialization function is shown in Figure 3.

The video device data structure vdIn describes the camera device and its capture buffers.
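
A rough sketch of the kind of fields vdIn holds is given below; the names follow the v4l2uvc.h header of the input_uvc plug-in, but the listing is abbreviated and should not be read as the exact definition.

    /* Abbreviated sketch of the video device structure used by input_uvc. */
    #include <linux/videodev2.h>

    #define NB_BUFFER 4

    struct vdIn {
        int fd;                          /* file descriptor of /dev/videoX       */
        char *videodevice;               /* device node path                     */
        struct v4l2_format fmt;          /* negotiated capture format            */
        struct v4l2_buffer buf;          /* buffer descriptor for VIDIOC_DQBUF   */
        struct v4l2_requestbuffers rb;   /* request for mmap'ed kernel buffers   */
        void *mem[NB_BUFFER];            /* the mmap'ed capture buffers          */
        unsigned char *framebuffer;      /* latest frame, shared with consumers  */
        unsigned char *tmpbuffer;        /* scratch space for format conversion  */
        int isstreaming;                 /* set once VIDIOC_STREAMON succeeded   */
        int width, height, fps;          /* requested capture parameters         */
        int formatIn;                    /* pixel format (MJPEG or YUYV)         */
    };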

2) int input_run(void)

This function uses the pthread_create function to create a worker thread and the pthread_detach function to put that thread into the detached state. In the input_uvc component, input_run creates the cam_thread thread, which captures the frames collected by the camera and converts their format using the V4L2 video device framework. The detailed workflow is described below.
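
A minimal sketch of this create-and-detach pattern is shown below; the real input_run() also records the thread ID in the plug-in's global context, and the trivial cam_thread() body here is only a placeholder for the capture loop described later.

    #include <pthread.h>
    #include <stddef.h>

    static pthread_t cam;

    /* worker thread: grabs frames and copies them into the global buffer */
    static void *cam_thread(void *arg)
    {
        (void)arg;
        /* frame-capture loop omitted; see the skeleton later in section 2.1 */
        return NULL;
    }

    int input_run(void)
    {
        /* create the worker thread ...                                        */
        if (pthread_create(&cam, NULL, cam_thread, NULL) != 0)
            return 1;
        /* ... and detach it so its resources are reclaimed when it terminates */
        pthread_detach(cam);
        return 0;
    }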

3) int input_stop(void)

By calling pthread_cancel(pthread_t tid), the main thread cancels the cam_thread thread (the worker thread that captures data frames). Since both threads belong to the same process, they share the process address space; the main thread issues a cancellation request to the worker thread through the kernel. tid is the ID of the thread to be cancelled.

4) int input_cmd(in_cmd_type cmd, int value)

This function controls the camera lens settings, such as color, saturation, and focus. The in_cmd_type type defines the command types for controlling the input component; the control command type and its parameter value are passed to this function, which matches the command and controls the lens accordingly.
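
The dispatch can be pictured roughly as follows; the command names, the set_ctrl() helper, and the module-level file descriptor are illustrative placeholders, not the actual definitions used in input_uvc.c.

    #include <linux/videodev2.h>
    #include <sys/ioctl.h>

    /* illustrative command types; the real in_cmd_type values differ */
    typedef enum { IN_CMD_BRIGHTNESS, IN_CMD_SATURATION } in_cmd_type;

    static int videodev_fd = -1;   /* set when the camera device is opened */

    /* write one V4L2 user control on the open camera device */
    static int set_ctrl(unsigned int id, int value)
    {
        struct v4l2_control ctrl = { .id = id, .value = value };
        return ioctl(videodev_fd, VIDIOC_S_CTRL, &ctrl);
    }

    int input_cmd(in_cmd_type cmd, int value)
    {
        switch (cmd) {                     /* match the incoming command type */
        case IN_CMD_BRIGHTNESS:
            return set_ctrl(V4L2_CID_BRIGHTNESS, value);
        case IN_CMD_SATURATION:
            return set_ctrl(V4L2_CID_SATURATION, value);
        default:
            return -1;                     /* unsupported command */
        }
    }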

The private functions of the input_uvc component are as follows:
1) void help(void)
This function prints the help information to stderr.
2) void *cam_thread(void *arg)
This worker thread captures one frame of data and copies it to the global buffer. Its workflow is shown in Figure 4.

The workflow repeats until the thread exits. The pthread_cleanup_pop function is used to call the cam_cleanup thread cleanup function before exiting.

3) void cam_cleanup(void *arg)

This function is a thread cleanup function. Before the thread exits, it releases the resources allocated in the worker thread.
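
Putting these pieces together, the worker thread and its cleanup handler follow roughly the pattern below. This is a condensed skeleton that assumes a simple malloc'ed frame buffer; the actual capture, JPEG conversion, and synchronization code in input_uvc.c is more involved.

    #include <pthread.h>
    #include <stdlib.h>

    static volatile int stop;                               /* set when the server shuts down   */
    static pthread_mutex_t db = PTHREAD_MUTEX_INITIALIZER;  /* protects the global frame buffer */
    static pthread_cond_t db_update = PTHREAD_COND_INITIALIZER;
    static unsigned char *g_buf;                            /* frame buffer shared with outputs */

    /* cleanup handler: release the resources allocated by the worker thread */
    static void cam_cleanup(void *arg)
    {
        (void)arg;
        free(g_buf);
        g_buf = NULL;
    }

    static void *cam_thread(void *arg)
    {
        (void)arg;
        /* register the cleanup handler; it also runs if input_stop() cancels the thread */
        pthread_cleanup_push(cam_cleanup, NULL);

        g_buf = malloc(640 * 480 * 3);                      /* buffer size is illustrative */

        while (!stop) {
            /* 1. grab one frame from the camera (VIDIOC_DQBUF, omitted)              */
            /* 2. convert it to JPEG if the camera delivers raw YUV frames (omitted)  */
            /* 3. copy it into the global buffer and wake up waiting output threads   */
            pthread_mutex_lock(&db);
            /* memcpy(g_buf, frame, frame_size); */
            pthread_cond_broadcast(&db_update);
            pthread_mutex_unlock(&db);
            pthread_testcancel();                           /* cancellation point for input_stop() */
        }

        /* pop the handler; the non-zero argument makes it run on a normal exit as well */
        pthread_cleanup_pop(1);
        return NULL;
    }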

2.2 output_http output component

The httpd.c file in the output_http output component defines the server's responses to client requests.

The functions send_snapshot, send_stream, send_error, and send_file define how the server responds to client requests by sending a snapshot, a video stream, an error message, or a file, respectively.

The command function executes the control commands specified by the client and sends back feedback.

The server_thread service thread opens a TCP socket and waits for clients to connect. When a client connects, a client_thread is created for it; each client_thread serves one client connected to the server.
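
Condensed to its essentials, this pattern looks roughly like the following; error handling and the actual HTTP parsing done in httpd.c are omitted, and the port number is only an example.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* one thread per connected client: parse the request, then answer it */
    static void *client_thread(void *arg)
    {
        int fd = *(int *)arg;
        free(arg);
        /* read the HTTP request and dispatch to send_stream(), send_snapshot(),
           send_file() or command() as described above (omitted) */
        close(fd);
        return NULL;
    }

    static void *server_thread(void *arg)
    {
        int sd, *client;
        struct sockaddr_in addr;
        pthread_t t;
        (void)arg;

        /* open a TCP socket and listen on the configured port (8080 assumed) */
        sd = socket(AF_INET, SOCK_STREAM, 0);
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);
        bind(sd, (struct sockaddr *)&addr, sizeof(addr));
        listen(sd, 10);

        /* wait for clients; create one detached client_thread per connection */
        while (1) {
            client = malloc(sizeof(*client));
            *client = accept(sd, NULL, NULL);
            pthread_create(&t, NULL, client_thread, client);
            pthread_detach(t);
        }
        return NULL;
    }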

3. Application of mjpg-streamer in Embedded Systems

This article implements additional functions by modifying the software's source code and uses mjpg-streamer as the video server software in an embedded development project.

The system uses the S3C2440 microprocessor as its core to build an embedded video monitoring system. In addition to viewing the camera image, the client can also control a pan-tilt (PTZ) device to obtain images from any angle.

A low-cost pan-tilt unit is used as the front-end control device. The embedded video server can provide multiple front-end device interfaces to connect several pan-tilt units, and it forwards the image data or pan-tilt control signals to the corresponding front-end device according to each client's request. This article only implements the basic model of the system with a single pan-tilt device; based on this model, it can be extended to multiple transmission channels.

3.1 System hardware platform

The hardware platform of the system is the Embedsky TQ2440 development board. The CPU is a Samsung S3C2440AL clocked at 400 MHz (up to 533 MHz), and the board is equipped with NAND Flash, 2 MB NOR Flash, and 64 MB SDRAM, which meets the system requirements. In addition, a V4L2-compatible camera and a low-cost pan-tilt unit with an RS485 interface are used.

3.2 System hardware connection

The pan-tilt controller is the front-end control device in the system and is responsible for receiving the serial encoded signals sent by the video server. Its built-in decoder, behind an RS485-to-RS232 converter used as an intermediate device, is connected to the RS232 serial port of the ARM9 board, through which the decoder communicates with the ARM9. The ARM sends control signals to the pan-tilt unit through the serial port. After the pan-tilt decoder receives a signal, it parses the address contained in the control signal. If the parsed address matches the address configured in the decoder, the decoder interprets the control signal, converts it into the control voltages for the pan-tilt unit or the lens, and finally applies these voltages so that the pan-tilt unit rotates and the lens is adjusted. If the parsed address does not match the internal address, the decoder performs no conversion.
The physical model of the system is shown in Figure 5.

3.3 System software platform

3.3.1 Integrating the pan-tilt control function

The browser client logs on to the server to obtain the video stream and to control the rotation of the pan-tilt unit. Pan-tilt control is designed to achieve multi-angle monitoring, so the source code of mjpg-streamer has to be modified to integrate the pan-tilt control function.

3) The command function in the httpd.c file executes the control commands specified by the client after the server has received and parsed the client's request value. It further parses the control command and determines whether it is a camera control command, an mjpg-streamer system control command (for example, a reconfiguration command), or one of the pan-tilt control commands added by the author.
In this function, first call the pan-tilt device initialization function int ptz_device_init(void), which was written for this purpose; it opens and configures the serial port on the ARM9.
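
A sketch of such an initialization function, written against the standard termios API, might look as follows. The serial device node (/dev/ttySAC1, a common name for an S3C2440 on-chip UART under Linux) and the 9600 8N1 line settings are assumptions that must match the decoder actually used.

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int ptz_fd = -1;   /* serial port used to talk to the pan-tilt decoder */

    int ptz_device_init(void)
    {
        struct termios tio;

        /* open the RS232 port that feeds the RS485 converter (device node assumed) */
        ptz_fd = open("/dev/ttySAC1", O_RDWR | O_NOCTTY);
        if (ptz_fd < 0)
            return -1;

        tcgetattr(ptz_fd, &tio);
        tio.c_iflag = 0;                      /* raw input, no software flow control  */
        tio.c_oflag = 0;                      /* raw output                           */
        tio.c_lflag = 0;                      /* non-canonical mode, no echo          */
        tio.c_cflag = CS8 | CLOCAL | CREAD;   /* 8 data bits, no parity, 1 stop bit   */
        cfsetispeed(&tio, B9600);             /* baud rate assumed; match the decoder */
        cfsetospeed(&tio, B9600);
        tcflush(ptz_fd, TCIFLUSH);
        return tcsetattr(ptz_fd, TCSANOW, &tio);
    }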

Next, traverse the output component's command-string to command-type mapping array to find the control command type that matches the command string.
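
The idea of the mapping array and the lookup loop can be illustrated as follows; the command strings, the out_cmd_type values, and the parse_cmd() helper are placeholders rather than the actual table in output_http.c.

    #include <string.h>

    typedef enum {
        OUT_CMD_PTZ_UP, OUT_CMD_PTZ_DOWN, OUT_CMD_PTZ_LEFT, OUT_CMD_PTZ_RIGHT,
        OUT_CMD_UNKNOWN
    } out_cmd_type;

    static const struct {
        const char  *string;   /* command name as it appears in the client request */
        out_cmd_type cmd;      /* corresponding command type                        */
    } cmd_mapping[] = {
        { "ptz_up",    OUT_CMD_PTZ_UP    },
        { "ptz_down",  OUT_CMD_PTZ_DOWN  },
        { "ptz_left",  OUT_CMD_PTZ_LEFT  },
        { "ptz_right", OUT_CMD_PTZ_RIGHT },
    };

    out_cmd_type parse_cmd(const char *string)
    {
        size_t i;
        /* walk the mapping array until the command string matches */
        for (i = 0; i < sizeof(cmd_mapping) / sizeof(cmd_mapping[0]); i++)
            if (strcmp(string, cmd_mapping[i].string) == 0)
                return cmd_mapping[i].cmd;
        return OUT_CMD_UNKNOWN;
    }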

Then the output_cmd function in the output file output_http.c of the output_http component is called. In this function, depending on the pan-tilt control command, the control function is called to send data to the pan-tilt decoder. The communication between the ARM9 and the pan-tilt unit uses the Pelco-D protocol. The workflow is shown in Figure 6.
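
For reference, a Pelco-D message is a seven-byte frame: a 0xFF synchronization byte, the decoder address, two command bytes, the pan and tilt speed bytes, and a checksum over bytes two to six. A sketch of building and sending one frame is shown below; the ptz_send() name and the reuse of the ptz_fd descriptor from the initialization sketch above are assumptions.

    #include <unistd.h>

    extern int ptz_fd;   /* opened by ptz_device_init() */

    /* Pelco-D command2 bits: 0x08 = up, 0x10 = down, 0x04 = left, 0x02 = right, 0x00 = stop */
    int ptz_send(unsigned char address, unsigned char cmd2,
                 unsigned char pan_speed, unsigned char tilt_speed)
    {
        unsigned char frame[7];

        frame[0] = 0xFF;                      /* synchronization byte               */
        frame[1] = address;                   /* must match the decoder's address   */
        frame[2] = 0x00;                      /* command1 (unused for basic moves)  */
        frame[3] = cmd2;                      /* pan/tilt command bits              */
        frame[4] = pan_speed;                 /* 0x00 .. 0x3F                       */
        frame[5] = tilt_speed;                /* 0x00 .. 0x3F                       */
        frame[6] = (frame[1] + frame[2] + frame[3] +
                    frame[4] + frame[5]) % 256;  /* checksum over bytes 2..6        */

        return write(ptz_fd, frame, sizeof(frame)) == (ssize_t)sizeof(frame) ? 0 : -1;
    }

For example, ptz_send(0x01, 0x08, 0x00, 0x20) would tilt the unit at decoder address 1 upwards at a moderate speed, and ptz_send(0x01, 0x00, 0x00, 0x00) stops the movement.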

4) After the pan-tilt control function has been integrated into mjpg-streamer, the makefile of the output_http output component must be modified so that the added source files are compiled into output_http.so.
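
As a rough illustration only, such a makefile fragment might look like the one below; the source file name ptz_control.c is a hypothetical placeholder for the added pan-tilt control code, and the flags follow the general style of the plug-in makefiles rather than the author's actual file.

    CC      ?= gcc
    CFLAGS  += -O2 -Wall -fPIC
    LFLAGS  += -lpthread

    all: output_http.so

    # build the HTTP output plug-in, now including the pan-tilt control source
    output_http.so: output_http.c httpd.c ptz_control.c
    	$(CC) $(CFLAGS) -shared -o $@ output_http.c httpd.c ptz_control.c $(LFLAGS)

    clean:
    	rm -f *.o *.so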

3.3.2 Porting the video server software

First, port the jpeg library and the SDL library to the /lib directory of the development board's root file system.

Then, port the recompiled mjpg-streamer modules to the development board: copy the camera input module input_uvc.so and the HTTP output module output_http.so, which now integrates the pan-tilt control function, to the /lib directory of the root file system, and copy the executable file to the /usr/bin directory.

Store the compiled client webpage in a folder of the development board's root file system, for example /var/pages. When the video server is started, specify the input component, the output component, and the path of this folder; the page the client browses is the webpage under that folder.
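
For example, a typical startup command on the board might look like the following; the device node, resolution, frame rate, and port are typical values rather than the author's exact command line.

    mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 320x240 -f 15" \
                  -o "output_http.so -p 8080 -w /var/pages"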

3.4 System browser client

The system client is a webpage, as shown in Figure 7. The left side is the image display area, which shows the image data transmitted by the server; the arrow keys on the right are used to control the multi-angle rotation of the pan-tilt device.

4. Conclusion

With the advancement of information technology, video surveillance has become an important tool that covers almost all industries, and the video server plays an extremely important role in a video monitoring system. This article analyzed the open-source video server software mjpg-streamer from four aspects, namely its features, processing flow, components, and application, straightened out the software's processing ideas, and, after modifying this open-source software, applied it to the development of a practical video monitoring system, which gives it a certain reference value.
