Kotlin/Native Application Development Guide



In this post we discuss developing Kotlin/Native applications. We build a simple video player, using FFMPEG for audio/video decoding and SDL2 for rendering. We hope this article becomes a useful development guide for Kotlin/Native developers; it also shows how to use the platform's C interoperability mechanism.

In this tutorial we focus mainly on Kotlin/Native and give only a rough introduction to how a video player itself works. For the details, you can refer to the excellent tutorial "How to Write a Video Player in Less than 1000 Lines", which shows how to implement one in C. If you are interested in comparing C code with Kotlin/Native code, we recommend starting from that tutorial.

In theory, the job of every video player is fairly simple: read an input stream of interleaved video and audio frames, decode and display the video frames, and keep them synchronized with the audio stream. Typically this work is done by multiple threads, which decode the stream and play the video and audio. Doing it accurately requires thread synchronization and certain soft real-time guarantees: if an audio chunk is not decoded in time, playback sounds choppy; if a video frame is not displayed in time, the picture stutters.
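The loop described above can be sketched in a few lines of plain Kotlin. This is a minimal model, not the player's real code: the frame types, timestamps, and the rule "audio drives the clock, video waits for it" are illustrative assumptions.

```kotlin
// Hypothetical frame types standing in for decoded FFMPEG output.
sealed class MediaFrame
data class VideoFrame(val pts: Double) : MediaFrame()
data class AudioFrame(val pts: Double) : MediaFrame()

// A stub demuxer/decoder producing an interleaved frame stream.
fun decodeInterleaved(): Sequence<MediaFrame> = sequenceOf(
    AudioFrame(0.0), VideoFrame(0.0),
    AudioFrame(0.02), VideoFrame(0.04),
)

fun play(): List<String> {
    val log = mutableListOf<String>()
    var audioClock = 0.0
    for (frame in decodeInterleaved()) {
        when (frame) {
            is AudioFrame -> {               // audio playback drives the clock
                audioClock = frame.pts
                log += "audio@${frame.pts}"
            }
            is VideoFrame ->                 // video is shown only once the
                if (frame.pts <= audioClock) //   audio clock has caught up
                    log += "video@${frame.pts}"
                else
                    log += "wait@${frame.pts}"
        }
    }
    return log
}
```

A real player would sleep until the video frame's presentation time instead of logging "wait", but the synchronization decision is the same comparison.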

Kotlin/Native does not encourage the classical threading model and provides no means of sharing Kotlin objects between threads. However, we believe that concurrent soft real-time programming in Kotlin/Native is easy, so we decided to design our player in a concurrent way from the very beginning. Let's see how we did it.

Concurrent computation in Kotlin/Native is built around workers. A worker is a higher-level concurrency concept than a thread. Instead of sharing objects and synchronizing on them, it allows objects to be transferred, so that at any given moment only one worker has access to a particular object. This means that no synchronization is needed to access the object's data, because concurrent access can never happen. Workers accept execution requests, which may carry objects; the worker performs the job as needed and then returns the result to whoever wants it. This model guarantees that many typical concurrent programming mistakes (such as unsynchronized access to shared data, or deadlocks caused by taking locks in inconsistent order) simply cannot occur.
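The ownership discipline can be modeled in portable Kotlin with a single dedicated thread. The real API is kotlin.native.concurrent.Worker; the executor, the DecoderState class, and the request function below are stand-ins for illustration only.

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors

// Hypothetical mutable decoder state; only the worker thread ever touches it,
// so no locks are needed -- requests are serialized by that single thread.
class DecoderState { var framesDecoded = 0 }

// JVM-portable model of a Kotlin/Native Worker: one dedicated thread that
// owns its objects outright.
val worker = Executors.newSingleThreadExecutor()
val state = DecoderState()              // conceptually transferred to the worker

// Like sending the worker an execution request and consuming its result.
fun requestDecode(): Int = worker.submit(Callable {
    state.framesDecoded += 1
    state.framesDecoded
}).get()
```

Because every mutation of `state` runs on the worker's thread, there is no data race to synchronize away in the first place.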

Let's see how this maps onto a video player architecture. The decoder has to parse a container format, such as .avi, .mkv or .mpg, decompose and decode the interleaved audio and video streams, and then provide the decompressed audio to the SDL audio thread; the decompressed video frames must stay in sync with audio playback. To achieve this, the worker concept fits naturally: we spawn a worker for the decoder and request video and audio data from it as needed. On a multicore machine, this means decoding can run in parallel with playback. Thus the decoder is a data producer, queried from both the UI thread and the audio thread.
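The shape of this architecture, one decoder worker serving two client threads, can be sketched as follows. This is a hedged model: Demuxer, its string payloads, and the request functions are hypothetical stand-ins for the real FFMPEG-backed decoder.

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors

// Demuxing state is owned by the decoder worker alone; the payloads
// here are placeholders for real decoded frames and sample buffers.
class Demuxer {
    private var position = 0
    fun nextVideoFrame() = "video#${position++}"
    fun nextAudioChunk() = "audio#${position++}"
}

val decoderWorker = Executors.newSingleThreadExecutor()  // model of the decoder worker
val demuxer = Demuxer()

// Called from the UI thread: ask the decoder worker for the next video frame.
fun requestVideo(): String =
    decoderWorker.submit(Callable { demuxer.nextVideoFrame() }).get()

// Called from the audio callback thread: ask for the next audio chunk.
fun requestAudio(): String =
    decoderWorker.submit(Callable { demuxer.nextAudioChunk() }).get()
```

Both clients funnel their requests through the same single-threaded worker, so the demuxer's internal position needs no locking even though two threads consume from it.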

Whenever we need the next audio or video chunk, we rely on the schedule() function. It schedules a unit of work to a particular worker for execution, providing the input parameters, and returns a Future instance that can be waited on until the task is completed by the target worker. The Future can then be consumed, transferring the produced object from the worker thread back to the requesting thread.
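The payoff of the Future-based request is pipelining: the next decode can be requested before the current frame is consumed. Below is a minimal sketch of that pattern in portable Kotlin; `decodeNext` and the integer "frames" are assumptions standing in for real decoding work.

```kotlin
import java.util.concurrent.Callable
import java.util.concurrent.Executors
import java.util.concurrent.Future

val decoder = Executors.newSingleThreadExecutor()    // stands in for the decoder worker

// Hypothetical decode step: in the real player this would decode frame n.
fun decodeNext(n: Int): Future<Int> = decoder.submit(Callable { n })

// Pipelined playback: while frame i is being "played", frame i+1 is already
// decoding on the worker, so decoding and playback overlap on multicore machines.
fun playFirst(count: Int): List<Int> {
    val played = mutableListOf<Int>()
    var pending = decodeNext(0)              // kick off the first decode
    for (i in 1..count) {
        val frame = pending.get()            // wait on the Future until it is ready
        pending = decodeNext(i)              // request the next chunk immediately
        played += frame
    }
    pending.get()
    decoder.shutdown()
    return played
}
```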

The Kotlin/Native runtime is initialized per thread, so when running multiple threads you need to call the function konan.initRuntimeIfNeeded() before doing anything else; we use it in the audio thread callback as well. To simplify audio playback, we resample the audio frames to a two-channel, 16-bit signed integer stream at a sampling rate of 44100 Hz.
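The fixed output format makes buffer arithmetic trivial. The following sketch (constants taken from the format described above) computes the byte rate and how much playback time a given buffer holds:

```kotlin
// Output format after resampling, as described above.
const val SAMPLE_RATE = 44100      // Hz
const val CHANNELS = 2             // stereo
const val BYTES_PER_SAMPLE = 2     // 16-bit signed integers

// Bytes of PCM data consumed per second of playback.
fun bytesPerSecond() = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE

// How many seconds of audio a buffer of `bytes` bytes holds.
fun bufferSeconds(bytes: Int) = bytes.toDouble() / bytesPerSecond()
```

At 44100 Hz stereo 16-bit, one second of audio is 176400 bytes; numbers like these are what the audio callback uses to decide how much data to request from the decoder.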

A video frame can be decoded to any desired size (with sensible defaults), and its bit depth depends on the default settings of the user's desktop. Note the Kotlin/Native-specific way of operating on C pointers:

private val resampledAudioFrame: AVFrame =
    disposable(create = ::av_frame_alloc, dispose = ::av_frame_unref).pointed
...
with(resampledAudioFrame) {
    channels = output.channels
    sample_rate = output.sampleRate
    format = output.sampleFormat
    channel_layout = output.channelLayout.signExtend()
}

We declare resampledAudioFrame as a disposable resource, created and disposed with the FFMPEG API calls av_frame_alloc() and av_frame_unref(). Then we set the fields of the value it points to. Note that FFMPEG definitions (such as AV_PIX_FMT_RGB24) can be used as Kotlin constants. However, since they carry no type information, they default to Int, so if a field has a different type (such as channel_layout) you need to call the adapter function signExtend(). This is a compiler intrinsic that inserts the appropriate conversion.
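The effect of signExtend() can be illustrated in plain Kotlin, where the equivalent widening conversion from Int to a 64-bit field is toLong(). The FrameFields class below is a hypothetical stand-in for the AVFrame struct; the stereo layout value 0x3 (front-left | front-right) matches FFMPEG's channel layout bits.

```kotlin
// Constants imported from C headers default to Int on the Kotlin side,
// even when the C field they target (channel_layout) is 64-bit.
const val AV_CH_LAYOUT_STEREO: Int = 0x3

// Hypothetical stand-in for the AVFrame field, which is 64-bit in C.
class FrameFields { var channelLayout: Long = 0 }

fun assignLayout(): Long {
    val f = FrameFields()
    // In Kotlin/Native one writes `channel_layout = ....signExtend()`;
    // on a plain Int the equivalent widening conversion is toLong(),
    // which replicates the sign bit into the upper 32 bits.
    f.channelLayout = AV_CH_LAYOUT_STEREO.toLong()
    return f.channelLayout
}
```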

After the decoder is set up, we start the playback loop. It is nothing special: it just retrieves the next video frame, renders it to a texture, and shows the texture on the screen. Audio is handled by the audio thread callback, which obtains the next sample buffer from the decoder and feeds it to the audio engine.

Audio/video synchronization must also be ensured; it guarantees that we do not accumulate too many unplayed audio frames. A real multimedia player would rely on frame timestamps; here we only compute them and never actually use them, but the computation has an interesting part:

val ts = av_frame_get_best_effort_timestamp(audioFrame.ptr) *
    av_q2d(audioCodecContext.time_base.readValue())

It shows how to use an API that accepts a C struct by value. It is declared in libavutil/rational.h:

static inline double av_q2d(AVRational a) {
    return a.num / (double) a.den;
}

So, to pass the struct by value, we first need to call readValue() on the field.
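The arithmetic behind the timestamp line can be mirrored in plain Kotlin. The Rational class below models FFMPEG's AVRational, and the 1/44100 time base and raw timestamp 88200 are assumed example values:

```kotlin
import kotlin.math.abs

// Model of FFMPEG's AVRational time base (numerator over denominator).
data class Rational(val num: Int, val den: Int)

// What av_q2d() does: convert the rational to a double.
fun q2d(q: Rational): Double = q.num / q.den.toDouble()

// Presentation time in seconds, mirroring `ts * av_q2d(time_base)` above.
fun presentationSeconds(bestEffortTs: Long, timeBase: Rational): Double =
    bestEffortTs * q2d(timeBase)
```

With an audio time base of 1/44100, a best-effort timestamp of 88200 corresponds to the two-second mark, which is exactly the value a synchronizing player would compare against its audio clock.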

To sum up, thanks to the FFMPEG library we could implement a simple audio/video player supporting multiple input formats at a low cost. Along the way we covered the basics of C-based interoperability in Kotlin/Native and its approach to concurrency, which is easier to use and maintain.


Original link: https://www.bkjia.com/Linux/2018-03/151371.htm
