DaVinci development principles

Conventions used in this article:

[Host] indicates the Linux host PC.

[Target] indicates the Linux running on the target board.

DaVinci development principle 1 ---- Establishing the ARM development environment (DVEVM)

1. For the DaVinci platform, TI provides strong hardware support with a dual-core architecture. DSP/BIOS runs the audio and video algorithms on the DSP side, while MontaVista Linux (MV Linux) on the ARM side manages the peripherals. Data interaction between the ARM and the DSP is managed through the Codec Engine and the codec server.

2. Development on DaVinci is divided into codec development and application development. Before developing an application, you must set up the software and hardware development environment. The hardware environment includes the DaVinci DVEVM development board (the TMS320DM6446 dual-core chip with its DSP and ARM cores plus a wide range of peripherals), a CCD camera, an LCD display, a hard disk (if NFS is not used to mount the file system, the file system on the local hard disk can be used), and a serial cable. The second part is the ARM-side software development environment that comes with the DVEVM. After the environment is set up, you need to configure the Linux host so that it can serve the DVEVM board. As with most embedded systems, a bootloader on the board first initializes the hardware, and the bootloader's parameter settings then determine how the system starts. For example, after the bootloader starts, it downloads the MV Linux kernel image into memory and runs the kernel; the kernel then mounts the target file system from the Linux host over NFS, and a DHCP server assigns an IP address to the board, so that IP-based network video applications can be developed. The following sections describe how to configure the individual modules of the ARM software development environment.

3. TFTP server configuration

Check whether the TFTP service is installed on the host Linux:

[host]$ rpm -qa | grep tftp
tftp-0.32-4
tftp-server-0.32-4

If the packages are not listed, use RPM to install the TFTP packages from the Linux installation disk and enable the TFTP service.
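A minimal sketch of enabling the service, assuming an xinetd-managed Red Hat host that serves images from /tftpboot (the package version and the uImage kernel file name are placeholders, not taken from the original article):

[host]$ rpm -ivh tftp-server-0.32-4.i386.rpm   # install from the installation disk
[host]$ chkconfig tftp on                      # enable the xinetd-managed tftp entry
[host]$ service xinetd restart
[host]$ cp uImage /tftpboot/                   # kernel image the board will download over TFTP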

4. NFS server configuration

NFS is a way to share files between machines on a network as if they were on the client's local hard drive; it can be viewed as a network file system. Red Hat Linux can act as either an NFS server or an NFS client, which means it can export file systems to other systems and can also mount file systems exported by other machines. For the DVEVM, NFS is used to export the MV Linux target file system on the host to the DVEVM board, so that the board can work normally even without a file system on local storage.
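A minimal sketch of the host-side export, assuming the target file system was unpacked to ~/workdir/filesys (the same directory used in the example later in this article; <user> is a placeholder for your user name):

[host]$ cat /etc/exports
/home/<user>/workdir/filesys *(rw,no_root_squash,sync)
[host]$ exportfs -a          # re-export after editing /etc/exports
[host]$ service nfs restart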

5. DHCP server configuration

Configuring DHCP so that the Linux host assigns an IP address to the DVEVM board is relatively simple.

6. Flashing the bootloader

A bootloader is a program that runs before the operating system kernel. It initializes the hardware devices, creates the memory map, and sets up the hardware and software environment so that the operating system kernel can eventually be started in a correct environment. Prepare the following software and hardware before flashing.

Software:

  • The U-Boot image (that is, the file u-boot.bin);
  • The file flashwriter.out;
  • CCS 3.2 or later.

Hardware:

  • A JTAG hardware emulator connected to the DaVinci DVEVM;
  • A serial cable connecting the PC to the RS-232 port of the DaVinci DVEVM;
  • Jumper J4 (marked "CS2 select") set to "Flash";
  • Switches 1 and 2 of S3 set to off.

After preparing the above hardware and software, you can start flashing. For the flashing routine, see the appendix (u-boot example.rar). The flashing process is very simple and similar to running under the emulator.

7. Setting the DVEVM startup parameters
[target]#setenv bootargs console=ttyS0,115200n8 noinitrd rw ip=......

For detailed parameter descriptions, see the U-BOOT documentation.

So far we have established an initial development environment for the ARM side and can build simple programs, for example:

[host]$ mkdir ~/workdir/filesys/opt/hello
[host]$ cd ~/workdir/filesys/opt/hello
[host]$ vi hello.c

#include <stdio.h>

int main()
{
    printf("Hello world! Welcome to DaVinci test");
    return 0;
}

/* After saving, compile the program with the cross-compilation tool arm_v5t_le-gcc: */
[host]$ arm_v5t_le-gcc hello.c -o hello

Assume that the ~/workdir/filesys/ directory on the Linux host is mounted as the target file system through NFS.

/* The cross-compilation step above produced the binary hello. Boot the DaVinci DVEVM board and run it from the serial terminal: */
[target]# cd /opt/hello
[target]# ./hello
DaVinci development principle 2 ---- Establishing the DSP development environment (DVSDK)

In principle 1, a DVEVM development environment was created, with which only ARM-side programs can be developed. To develop DSP-side algorithms, you need to install and use the DVSDK. The software package includes the following components:

  • MontaVista Linux Professional Edition v4: compared with the MontaVista Linux demo version shipped with the DVEVM, this full Professional Edition includes the DevRocket IDE and related service support, and is much more comprehensive;
  • DM6446x SoC Analyzer (DSA): this software is installed on Windows and is used to observe and analyze the load, resource conflicts, and performance bottlenecks of programs running on the DSP and ARM. I have never used it; it apparently has to be paid for separately;
  • DSP/BIOS for Linux: DSP/BIOS is a scalable real-time DSP kernel; compared with the Windows version, the Linux version does not include the corresponding graphical analysis tools;
  • TI code generation tools for Linux: the DSP-related compilation and linking tools;
  • Framework Components: this module mainly supports DSP algorithm development. It manages algorithm modules that comply with the xDAIS standard and allocates memory and DMA resources. These modules are used by CE, but they can also be used directly in DSP programs if necessary;
  • Digital Video Test Bench (DVTB): an application that runs on the ARM and tests codecs through a scripting language. Users can exercise Linux I/O, codec APIs, and thread handling without writing any C code;
  • CCS: an integrated development environment running on Windows, used to develop DSP applications and related algorithms.

With the above DVSDK kits, you can build the DSP side. The Codec Engine remote server runs on DSP/BIOS, the related algorithms are encapsulated in the remote server, and some xDAIS Framework Components are used in the algorithm encapsulation. Communication between the DSP and the GPP is handled by DSP/BIOS Link.

1. Installation and configuration of the DVSDK

Installation: the procedure in Linux is relatively simple, but note that the DVSDK version must match the DVEVM version number, and it is best to install the DVSDK under the DVEVM directory.
Configuration: the Rules.make file in the dvevm_#### directory controls most of the build behavior. This file is included by the makefiles in the dvevm_##_# directory and in some subdirectories. To build DSP applications, you need to modify it according to the actual installation paths of your DVSDK packages: specify the paths of the DVEVM, CE, xDAIS, DSP Link, CMEM, codec server, RTSC, FC, DSP/BIOS, the Linux kernel, and so on.

2. Installation and use of the DVTB

The Digital Video Test Bench (DVTB) is a tool that uses a scripting language to test DSP algorithms directly, without writing C code. After the DVSDK is installed there is a dvtb directory under the dvevm_##_# directory; run the installation there, and then run the tool from the DSP executable directory (that directory must contain files such as cmemk.ko and the dsplink kernel module, for example /nfshost/mydemos).
The DVTB command syntax is as follows: <command> <class> <options>
DVTB commands can control peripherals such as audio, the VPBE and VPFE, or audio/video codecs, to carry out tests. For more information about the installation process and command usage, see /opt/dvevm_#####/dvtb. I have not used it yet.

3. XDC (eXpress DSP Components) configuration

XDC is a tool used for compiling and packaging; it can create Real Time Software Component (RTSC) packages. Like other build tools, it generates executables from source files and library files; the difference is that it can automatically perform performance optimization and version control. XDC can also generate code from a provided configuration script, which is especially important for building executables such as codecs, servers, and engines.

XDC syntax: xdc <target files> <XDCPATH> <XDCBUILDCFG>

Target files: the target files to be built; you can use the command script to specify which targets to generate;
XDCPATH: the directories to be searched during the build;
XDCBUILDCFG: specified by the "config.bld" file, which contains the platform-related build commands.
The command line above can become complex when there are many parameters, so it is usually wrapped in a shell script.

Three configuration files are related to XDC:

package.xdc: mainly contains package-related information: dependency information, module information, and version information. It is provided by you.
package.bld: defines how a package should be built. The file content is written in JavaScript. It includes the definition of the target platform set [mvarm9, linux86], the definition of the build profile [release], the set of source files, and information about the executables to be generated.

Both of these files live under the server directory. It can be seen that each codec has its own package description files, from which XDC then generates a package.

config.bld: this file is located in the codec_engine_# directory and is shared by all codecs. It mainly defines platform-related features, including the DSP target, the ARM target, the Linux host target, the build targets, Pkg.attrs.profile, Pkg.lib, and other details. These three configuration files are usually modified from the templates provided by TI.

DaVinci development principle 3 ---- The DaVinci codec engine (CE)

DaVinci is an SoC based on a dual-core DSP and ARM architecture. The chip's interaction with the outside world is managed through MontaVista Linux on the ARM side together with the related drivers and applications, while the DSP only runs the encoding and decoding algorithms. Communication and interaction between the DSP and the ARM are achieved through the engine and the server. This section only describes the codec engine (CE).

1. Core Engine API

From the application's point of view, CE is a set of APIs used to call xDAIS algorithms: you can use these APIs to instantiate and invoke xDAIS algorithms. DaVinci provides a set of VISA interfaces through which applications interact with XDM-compliant xDAIS algorithms. Note that the algorithm interface calls are the same whether the algorithm runs locally (on the ARM) or remotely (on the DSP), whether the hardware has only an ARM, only a DSP, or both, and whether the OS is Linux, VxWorks, DSP/BIOS, or WinCE. This is handled through the engine configuration file (*.cfg), which determines whether your codec runs on the ARM or on the DSP.
CE comprises the core Engine API and the VISA API. The core Engine API modules are the initialization module (CERuntime_), the CE runtime module (Engine_), and the abstraction-layer memory module (Memory_). The VISA API modules we commonly use are the video encoding interface (VIDENC_), the video decoding interface (VIDDEC_), the audio encoding interface (AUDENC_), and the audio decoding interface (AUDDEC_). Each module is declared in a corresponding header file.

The application must use the three core modules of the Codec Engine to open and close a codec engine instance. Note that an engine handle is not thread-protected: each thread that uses CE independently must call Engine_open and manage its own engine handle. Alternatively, a multi-threaded application can access one shared engine instance sequentially; the latter approach is what we currently use, with a single engine handle shared by multiple decoders. The Codec Engine also provides APIs to query the system's memory usage and CPU load. The interfaces are as follows:

  • Engine_open: /* open a codec engine */
  • Engine_close: /* close a codec engine; usually called after the algorithm instances have been deleted, to release the related resources */
  • Engine_getCpuLoad: /* get the DSP CPU usage as a percentage */
  • Engine_getLastError: /* get the error code of the last failed operation */
  • Engine_getUsedMem: /* get the memory usage. For the header files required by the engine and for how to define and use the engine, refer to the project instance example_dsp1 */

Currently, multiple decoders share one engine handle, for example:

static String engineName = "videodec";   /* defines the engine name; it is used in the ceapp.cfg configuration file */
Engine_Handle ceHandle_264 = NULL;       /* engine handle for the H.264 decoder */
Engine_Error errorCode;                  /* returns the engine status; for the meaning of the different return values, refer to the corresponding header file */
ceHandle_264 = Engine_open(engineName, NULL, &errorCode);

Based on the above, I think that if multiple threads need to use their own engines separately, it should be possible to define multiple engine names and create multiple engine handles; in that case each thread must call Engine_open() independently and manage its own engine handle.
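A minimal sketch of that per-thread approach, assuming CERuntime_init() has already been called at application start-up (the thread function and its reuse of the "videodec" engine name are illustrative, not taken from the original project):

#include <xdc/std.h>
#include <ti/sdo/ce/Engine.h>

static String engineName = "videodec";

void *decodeThread(void *arg)
{
    Engine_Error errorCode;
    Engine_Handle hEngine;

    /* each thread opens, uses, and closes its own engine instance */
    hEngine = Engine_open(engineName, NULL, &errorCode);
    if (hEngine == NULL) {
        /* errorCode holds the reason for the failure */
        return NULL;
    }

    /* ... create codec instances on hEngine and process data here ... */

    Engine_close(hEngine);
    return NULL;
}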

2. VISA API

Create an algorithm instance: *_create()

After the codec engine handle ceHandle_264 has been created, you can use it to create your own algorithm instances by calling *_create(), where * is the name of the corresponding video or audio codec class, for example:

static String decoderName = "h264dec";   /* name of the decoder module, used to identify the algorithm; it is used in ceapp.cfg */
VIDDEC_Handle H264Handle;                /* decoder handle */
H264Handle = VIDDEC_create(ceHandle_264, decoderName, NULL);
/* allocate and initialize a decoder on the engine; the third parameter can be used to pass the algorithm's creation parameters, which control its behavior. The parameter structure differs between the VISA encoder and decoder classes; refer to the header files for details */

Delete an algorithm instance: *_delete()

VIDDEC_delete(H264Handle);   /* Note: delete an algorithm instance only after the memory buffers related to the algorithm have been cleaned up */

Control an algorithm instance: *_control()

VIDDEC_control(H264Handle, XDM_SETPARAMS, dynamicParamsPtr, &encStatus);

The first parameter is an opened algorithm instance handle. The second parameter is an integer command ID defined in xdm.h. The third parameter holds the parameters that are to be changed dynamically; for example, the third parameter of *_create() initialized some parameters for the decoder, and they can be modified here, although only under certain conditions (see the header files for the structure details). The fourth parameter is a status struct; different modules have different structures, see the corresponding header files.
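A minimal sketch of such a control call, assuming the decoder handle H264Handle created above and default-initialized dynamic parameters (everything other than the mandatory size fields is illustrative):

VIDDEC_DynamicParams dynamicParams;
VIDDEC_Status        decStatus;
XDAS_Int32           ctrlStatus;

dynamicParams.size = sizeof(VIDDEC_DynamicParams);  /* XDM requires the size field to be set */
decStatus.size     = sizeof(VIDDEC_Status);
/* fill in the remaining dynamic fields (displayWidth, frameSkipMode, ...) as the codec requires */

ctrlStatus = VIDDEC_control(H264Handle, XDM_SETPARAMS, &dynamicParams, &decStatus);
if (ctrlStatus != VIDDEC_EOK) {
    /* the cause can be inspected via decStatus.extendedError or Engine_getLastError() */
}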
Process data with an algorithm instance: *_process()

status = VIDDEC_process(H264Handle, &inBufDesc, &outBufDesc, &inArgs, &outArgs);

The second and third parameters are XDM_BufDesc structs, which contain the number, start addresses, and lengths of the memory segments; the fourth and fifth parameters provide the input and output arguments for the algorithm instance, respectively.
All of the above structures can be found under /opt/dvevm_#/xdais_#/packages/xdais/dm and can be modified, but I still do not fully understand how to use these structs; a minimal sketch of how they are typically filled in follows.
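The sketch below assumes a single input and a single output buffer; the pointers encodedData/decodedFrame and the size variables are placeholders (on the DM6446 they would normally point into physically contiguous memory, e.g. allocated with Memory_contigAlloc()):

XDAS_Int8  *srcBufPtrs[1], *dstBufPtrs[1];
XDAS_Int32  srcBufSizes[1], dstBufSizes[1];
XDM_BufDesc inBufDesc, outBufDesc;
VIDDEC_InArgs  inArgs;
VIDDEC_OutArgs outArgs;
XDAS_Int32 status;

srcBufPtrs[0]  = (XDAS_Int8 *)encodedData;   /* compressed input bitstream (placeholder) */
srcBufSizes[0] = encodedSize;
dstBufPtrs[0]  = (XDAS_Int8 *)decodedFrame;  /* destination for the decoded frame (placeholder) */
dstBufSizes[0] = decodedFrameSize;

inBufDesc.numBufs   = 1;                     /* one input memory segment */
inBufDesc.bufs      = srcBufPtrs;
inBufDesc.bufSizes  = srcBufSizes;
outBufDesc.numBufs  = 1;
outBufDesc.bufs     = dstBufPtrs;
outBufDesc.bufSizes = dstBufSizes;

inArgs.size     = sizeof(VIDDEC_InArgs);
inArgs.numBytes = encodedSize;               /* number of valid bytes in the input buffer */
inArgs.inputID  = 1;                         /* lets outArgs identify which buffer was consumed */
outArgs.size    = sizeof(VIDDEC_OutArgs);

status = VIDDEC_process(H264Handle, &inBufDesc, &outBufDesc, &inArgs, &outArgs);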

3. Writing the Codec Engine configuration file (ceapp.cfg)

The engine configuration is stored in *.cfg files. Our project currently contains two *.cfg files: ceapp.cfg in the app directory, which holds the engine configuration, and video_copy.cfg, which is one of the server configuration files. ceapp.cfg is processed through the makefile (together with package.xdc) to generate a *.c file and a linker command script. An engine configuration file declares the name of the engine, the codecs contained in the engine, and their names. From this it can be seen that the role of the previously defined name "h264dec" is to identify the algorithm in the application, and also that one engine can be shared by several codec modules. We use the content of the ceapp.cfg file as an example to describe the meaning of the configuration parameters:

/* -------------- set up OSAL ---------------- */
var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.runtimeEnv = osalGlobal.DSPLINK_LINUX;

Note: these two statements set up the Global module so that the configuration script takes effect, and then set the engine runtime environment, that is, the DSP/BIOS Link and Linux OS to be used.

/* -------------- get codec modules, i.e. implementations of codecs ------- */
var H264DEC = xdc.useModule('codecs.h264dec.H264DEC');

Note: this sets the decoder to be used, that is, the h264dec package under the given directory. Note that we are currently using the videnc_copy example provided by TI in the codecs directory; in fact it can be modified. Also note that when we defined the decoder name we used the lower-case "h264dec", while the configuration here uses the upper-case module name.

/* -------------- Engine configuration --------------- */
var Engine = xdc.useModule('ti.sdo.ce.Engine');
var demoEngine = Engine.create("videodec", [
    {name: "h264dec", mod: H264DEC, local: false},
    /* {name: "h264enc", mod: H264ENC, local: false}, ... if present */
]);

Note: first make the Engine module under the ti.sdo.ce directory available, then create an engine with Engine.create(). Each engine has a name, which developers use later (for example, when opening the engine we pass the previously defined engine name "videodec"). The Engine.create() parameter is an array of algorithm descriptions, each of which contains the following fields:

name: the name of the algorithm instance, used to identify the algorithm in VIDDEC_create() and other VISA API calls, such as the H.264 decoder name "h264dec" defined earlier;
mod: identifies the actual algorithm implementation module; the name is usually capitalized, such as H264DEC;
local: if true, the algorithm instance is created on the ARM side; otherwise the codec server is used to create the algorithm instance on the DSP side.

demoEngine.server = "./encodeCombo.x64P";

Note: It is used to specify the codec server.

DaVinci development principle 4 ---- The DaVinci codec server (CS)

The codec server (CS) is a binary that integrates the codecs, the Framework Components, and some system code. The CS runs on the DSP and uses DSP/BIOS as its kernel, and it also includes the DSP/BIOS threads that serve client requests. The CS can be thought of as the actual DSP hardware, the image file loaded onto the DSP, and the tasks running on it. Its configuration takes two steps:

Use the TCF script language to configure the DSP/BIOS;

Configure the remaining components through XDC, such as the FC components, DSP/BIOS Link, and the Codec Engine. The resulting server image file is referenced in the engine configuration file (ceapp.cfg), as in demoEngine.server = "./encodeCombo.x64P";

Building a codec server

The XDC tool described earlier is used to build the CS image file. The difference is that the CS also needs a main.c and a related DSP/BIOS configuration script (.tcf file).

  • TCF: this script file mainly configures the DSP/BIOS kernel, for example defining the DSP memory map, setting up the DSP interrupt vector table, and creating and initializing other DSP/BIOS data objects. For more information, see video_copy.tcf. Note that I added a trace parameter configuration that was not there originally;
  • main.c: as long as your algorithm implements the XDM interface, you need a main.c program to initialize CE; the other configuration scripts then produce the server image *.x64P. In main.c, apart from calling CERuntime_init() to initialize CE, the rest is initialization and handling of trace-related functions. Another point worth noting is that the cache can be reconfigured here, because the cache configuration in the .tcf file may not take effect; it can be set in code instead, which I had not noticed before. What I do not understand is that CERuntime_init() has already been called once in ceapp_init(); why does it need to be called again in the CS? (I think it is because the CS is compiled first to produce the *.x64P image, and the app is compiled afterwards. It can be understood this way: as long as your algorithm implements the XDM interface, you need CERuntime_init() to initialize CE, while the CS build links in the codec library *.a64P generated after the XDM encapsulation.) A minimal sketch of such a main.c follows this list.
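The sketch below assumes the rest of the server (tasks, memory map, codec table) is generated from the .tcf and server.cfg scripts; only the CE initialization shown here is mandatory, and the trace and cache handling mentioned above is left as comments:

#include <xdc/std.h>
#include <ti/sdo/ce/CERuntime.h>

Int main(Int argc, Char *argv[])
{
    /* initialize the Codec Engine; the DSP/BIOS tasks that serve the
     * ARM-side requests are created from the configuration after main() returns */
    CERuntime_init();

    /* trace initialization and any cache re-configuration would go here,
     * as in the TI example servers */

    return 0;
}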

XDC files:

  • package.xdc

/* -------------- Declare the package name ----------------- */
package server {}

(Our current package.xdc declares: package server.video.copy, which is server/video_copy/package.xdc. No changes are required.)

  • package.bld

Declares the required packages, linker command scripts, TCF files, and some source files, and defines the build attributes, platforms, and targets.

  • server.cfg: this is the focus of the CS configuration.
/* Part 1: declare the runtime environment and the codec modules; similar to ce.cfg */

/* -------------- set up OSAL -------------- */
var osalGlobal = xdc.useModule('ti.sdo.ce.osal.Global');
osalGlobal.runtimeEnv = osalGlobal.DSPLINK_BIOS;

Note: these two statements set up the Global module so the configuration script takes effect, and then set the server runtime environment, that is, the DSP/BIOS Link to be used (compare with ce.cfg).

/* -------------- server configuration --------------- */
var Server = xdc.useModule('ti.sdo.ce.Server');
Server.threadAttrs.stackSize = 16384;
Server.threadAttrs.priority = Server.MINPRI;

/* -------------- get codec modules, i.e. implementations of codecs ------- */
var H264DEC = xdc.useModule('codecs.h264dec.H264DEC');   /* the same module as in ce.cfg */

Server.algs = [
    {name: "h264dec", mod: H264DEC,
     threadAttrs: {stackSize: 4096, stackMemId: 0, priority: Server.MINPRI + 1}},
    /* {...}, if there are more codecs */
];

/* Part 2: DSKT2 and DMAN3 configuration: xDAIS algorithm memory and DMA allocation; refer to the configuration file */
DaVinci development principle 5 ---- Memory mapping for custom codecs

This covers the DDR partitioning, keeping the TCF and DSP Link memory configurations consistent, and building dsplink.ko, all of which I have already done many times; there is essentially nothing different here. Note that the DSP side of DaVinci has no virtual address management: all programs operate directly on a flat address space. Detailed examples are given in DaVinci development principle 4.

DaVinci development principle 6 ---- Working principles of the engine (CE) and server (CS)

The relationship between the codec engine (CE) and the codec server (CS) can be compared to the relationship between a client and an application server; essentially, it implements the concept of remote procedure calls on the dual-core chip.

1. Working Principle of Remote Procedure Call (RPC)

Remote procedure calling was originally a mechanism for interoperability in client/server architectures; it extends the operating system's inter-process communication to the network environment. Its purpose is to allow an application to call a remote application (on another node, or in another process on the same node) in the same way as it would make a local call.

The process is as follows:

  1. The client makes what looks like an ordinary local call to the local client stub, which presents the same procedure interface as the server;
  2. The client stub does no logical processing; it is only an intermediary. It packs the client's call request and hands the request message to the low-level communication mechanism;
  3. The client's communication mechanism sends the message to the server;
  4. Because multiple server programs may run on one server node, the server needs to parse the message to find the server program the client wants to call;
  5. The server skeleton parses the message, obtains the call parameters, and then calls the server program;
  6. The server program executes the corresponding procedure;
  7. The server program returns the result to the server skeleton;
  8. The server skeleton packs the result and hands the response message to the low-level communication mechanism;
  9. The server's communication mechanism transmits the message back to the client;
  10. Because a client node may also have multiple call points, the client's communication mechanism needs to parse the returned message to find out which application it belongs to and deliver it to the corresponding client stub;
  11. The client stub parses the result from the message and returns it to the client program.

From this process we can see that the main tasks of the client stub in RPC are:

  1. Establish a connection between the client and the server;
  2. Pack the client's high-level call into a low-level request message (RPC marshalling) and send the request to the server;
  3. Wait for the server to return a response message;
  4. Receive the response message from the server and unpack it into data that can be returned (RPC unmarshalling);
  5. Return the result to the client.

The corresponding server skeleton performs similar tasks on its side.

2. Communication Framework between engines and servers

On the ARM side, a video application takes the captured image signal and calls the relevant codec stub functions through the VISA interface; the stub functions call the relevant engine functions, that is, the SPI (Service Provider Interface). Because the actual codec algorithm is on the remote end (the DSP side), the engine must package the call and send the packaged data through the OS abstraction layer and DSP Link.

After the data reaches the DSP side, the algorithm instance (codec) on the server receives the data. However, the information is still packaged and must be parsed by the server skeleton, which extracts the caller's VISA interface parameters and calls the relevant XDM codec instance. Data is returned in the opposite direction. In this process the ARM side acts as the client, the DSP side as the server, the server skeleton as the service skeleton, and the codec as the server application.

3. An example of the communication details between the engine and the server

Here we analyze the actual flow of a call. In a video application, a remote algorithm is generally called through a VISA interface function such as VIDDEC_process(a, b, c). The internal work proceeds as follows. First, the application (the ARM-side app.c) calls VIDDEC_process(a, b, c) [ceapp_decodeBuf() calls VIDDEC_process(); this call is only the interface invocation issued by the client]. The first step is the call into the engine API, that is, the Service Provider Interface (SPI) VIDDEC_P_process(a, b, c); this is also why app.c must call CERuntime_init(). Then the parameters a, b, c and the call information for VIDDEC_P_process() are packaged and sent by the engine CE's operating system abstraction layer (OSAL) to DSP Link, and the packaged information is forwarded through DSP Link to the server skeleton on the DSP. Finally, the server CS parses the received packet: the parsed data says that the VISA API function VIDDEC_process() is to be called, and the parameters in a, b, and c are used to call VIDDEC_TI_process() [in video_copy.c; this call is the real algorithm executed on the remote server]. From this we can see why, when the server is built, main.c needs to call CERuntime_init() once. The server skeleton knows how to call the XDM algorithm and other methods locally (on the DSP side). The whole process is transparent to the user: the application only needs to call the VISA API function on the Linux side, and the internal work is handled by the engine CE and the server CS. The process by which data is returned from the DSP to the ARM is the reverse.

This article from: http://blog.sina.com.cn/s/blog_5c427ab50100gye4.html
