Opencore internal Scheduling


1 Introduction
The multimedia framework is a critical module, especially in Android, where understanding it is necessary to understand playback behavior. Scheduling is the story hidden behind a multimedia framework, yet it has a crucial impact on performance. This article does not attempt a full analysis of the multimedia framework; it aims only to explore how the scheduling of a media framework differs from that of a traditional multi-threaded application.
2 Basic Knowledge
1. Multimedia Framework
From a macro perspective, a multimedia framework generally includes an engine, parsers, codecs, and output. The engine is the control part, a parser handles file parsing and reading, a codec handles audio/video encoding and decoding, and output covers audio/video rendering. Multiple parser and codec types are provided for different file formats and compression standards.
2 Linux threads
By default, Linux limits a process to 1024 threads, although in practice no system is seen running with that many; in one test we observed, thread creation started returning failures at around 382 threads, because the real limit is bounded by available system resources.
Heavy multithreading also costs the system in management and scheduling overhead inside the kernel.
In multi-threaded application design, data sharing is a problem. If threads A and B share data, locks such as semaphores must be introduced to keep access valid. Waiting on a semaphore then becomes significant: sometimes thread A's data is ready but thread B has not yet consumed the previous batch, so thread A can only block or spin idly.
Multi-threaded code must also avoid deadlock.
3 Scheduling Comparison
Here we compare three modes: the traditional while loop, the multi-threaded mode, and the opencore scheduling mode, as shown in the figure.
Mode 1:
A single loop executes the modules sequentially; a new module is simply appended to the loop.
Mode 2:
The four modules A, B, C, and D each run in an independent thread, whether or not they have work to do; even when a module has nothing to process, its thread spins idly and the time is wasted.
Mode 3:
Scheduling is performed by a unified scheduler, with support for priorities. All four modules A, B, C, and D register with the scheduler, and the next scheduling pass is triggered by state-machine transitions; a module that urgently needs to run again can re-request the scheduler, which keeps scheduling relatively fair. Opencore adopts this mode. After the PlayerDriver is created, it enters the OSCL thread to process messages. After the engine creates and connects the nodes, what remains is the communication between them; each node state change triggers the next OSCL scheduling pass. Every module that takes part in OSCL scheduling must provide a Run function, which the scheduler calls.

4. Why Does Opencore Not Use Multithreading?
Some people wonder why opencore does not create a large number of threads for scheduling; would that not be faster? In fact, opencore uses Mode 3 above to solve its scheduling problem. Attached is a class diagram related to OSCL scheduling.

Why, then, does opencore not create a thread per module? We have not seen official opencore design documents describing the design rationale, so we can only analyze the code; the points below are offered for reference only.
1. From the modular perspective, opencore has many modules: engine, player, author, parser, OMX, codec, output, audio, and video. Opencore abstracts a concept called a node; the parser, codec, and OMX components are each encapsulated as nodes. After the engine creates and connects the nodes, what remains is the communication between them, which is handled by the unified scheduler. Managing every node (or module) by creating a dedicated thread for it would also run contrary to the modular ideas of software engineering.
2. Look at a reduced model with three modules: engine, parser, and decoder. With multithreading there would be three threads, but file-type identification and decoder selection must be completed in advance, and must be done by the engine, before it is even known which parser and decoder to use. In opencore the division of labor is clear: each module does its own task, and one thread performs the unified scheduling.
3. From the client-server model: Android multimedia is implemented with a server-client architecture. The client side is a Java program used only for API encapsulation and dispatching control commands; the core media playback control is implemented in the server, and the two sides are connected through Binder calls. When a client initiates a media request, the server creates a Client object to match it; a second media request leads to a second Client, so the server may hold multiple Clients, each of which is a media engine instance. If opencore adopted the multi-threaded mode and each engine involved six threads, then five clients would add 30 threads to the system, which obviously increases kernel scheduling time and hurts system performance. With opencore's unified scheduling, five clients bring up only five threads, which is acceptable.
4. In real multimedia scenarios, video playback, SMS ringtones, MMS ringtones, phone ringtones, and alarm ringtones can all occur at the same time, triggering exactly the multi-client situation above.
5. From the perspective of portability, opencore's unified scheduler is part of OSCL, an abstract encapsulation layer; everything built on OSCL can be regarded as a pure software module, so opencore can be easily ported to various platforms while remaining consistent.
6. From the perspective of extensibility, it is easy to add a module to the existing opencore framework. For example, to add rmvb support you only need to add an rmvb parser node, which has no impact on the system framework. With the multi-threaded approach, you would have to implement the rmvb parser, add rmvb recognition to the engine, and create an rmvb-parser thread.
7. Multithreading may well be faster in isolation, but a complicated system must weigh that against its own architecture and scalability, and so is bound to treat multithreading with caution. The key issue is still overall system performance: compare a processor clocked at a few hundred MHz with one at 1 GHz; no amount of optimization lets the slower processor catch up with the 1 GHz one. In other words, even if the framework is somewhat rough, on a 1 GHz processor the system will not run too poorly.
8. DirectX also has the concept of a filter (similar to the node in opencore). Developing with DirectX, you only need to connect the filters; the rest of the work lives inside each filter.
5 Appendix
The two diagrams in the appendix are taken from the Internet.

A framework diagram of opencore.
The following is a code snippet.

Http://blogold.chinaunix.net/u2/61880/showart_2330325.html
