Abstract: High-resolution radar image display is an important part of radar computer simulation, with demanding requirements on image fidelity and real-time performance. Using programmable rendering pipeline technology for radar display system simulation effectively realizes a layered model of the radar image, makes full use of the parallel processing capabilities of the CPU and GPU, and greatly reduces the computational load on the CPU. It can generate high-quality radar images while meeting the real-time requirements of the system.
Key words: radar simulation; programmable rendering pipeline; afterglow; shadow; radar image
0. Introduction
The radar display system displays the echo images output by the receiver together with the secondary information and symbols generated by the information processor, and is the main way for radar operators to obtain information. Computer simulation of radar is an effective method for radar design, analysis, and training. The display system simulation produces the final output of the radar computer simulation, so its fidelity and real-time performance directly affect the overall performance of the system. Its main tasks include converting the echo data output by the receiver, the ARPA information generated by the information processor, and various symbols into a 2D raster image for the display; simulating the afterglow effect of the generated echo image; and controlling the brightness and contrast of the synthesized image.
The conventional method of display system simulation works as follows. During each radar image refresh, the full-screen pixels are processed one by one: for each pixel, its Cartesian coordinates on the raster image are transformed into the polar coordinates of the echo data to find the corresponding echo pulse amplitude, from which the pixel color value is formed; the pixel brightness is then attenuated to simulate the display afterglow effect (which involves color saturation calculations), and the pixel color is set. This generates radar images with extremely high fidelity. However, the method requires access to a very large number of pixels, which makes real-time operation difficult. For example, if the radar display resolution is 1 000 × 1 000, about 3.14 × 500² ≈ 785 000 pixels must be processed for each frame. To keep the image smooth, the frame rate should be above 30 F/s, which amounts to roughly 23.6 million pixel accesses per second, and this is for the display simulation alone.
To improve efficiency, several improved algorithms have been proposed: creating a coordinate mapping table in advance and using table lookups to reduce the cost of coordinate transformation; embedding MMX instructions to accelerate the brightness attenuation computation; and using graphics APIs such as DirectX to access video memory directly. These methods reduce the processing time per pixel, but because the number of pixel accesses remains large, they still occupy a large amount of CPU time.
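The lookup-table idea can be sketched as follows. This is a hypothetical illustration, not the paper's code: the table maps every screen pixel to an (azimuth, range) index pair in the echo buffer, so the per-pixel refresh loop replaces `atan2`/`sqrt` calls with two table reads. All names and sizes are assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

const double PI = 3.14159265358979323846;

// Each screen pixel maps to an index pair into the echo cache array.
struct PolarIndex { uint16_t azimuth; uint16_t range; };

// Build the table once at startup; `size` is the (square) display side,
// `numAzimuths` and `numRanges` are the echo buffer dimensions.
std::vector<PolarIndex> buildPolarLUT(int size, int numAzimuths, int numRanges) {
    std::vector<PolarIndex> lut(size * size);
    const double cx = size / 2.0, cy = size / 2.0, maxR = size / 2.0;
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            double dx = x - cx, dy = y - cy;
            double r = std::sqrt(dx * dx + dy * dy);
            double theta = std::atan2(dy, dx);
            if (theta < 0) theta += 2.0 * PI;        // normalize to [0, 2*pi)
            PolarIndex& p = lut[y * size + x];
            p.azimuth = (uint16_t)std::min(numAzimuths - 1,
                                           (int)(theta / (2.0 * PI) * numAzimuths));
            p.range   = (uint16_t)std::min(numRanges - 1,
                                           (int)(r / maxR * numRanges));
        }
    }
    return lut;
}
```

The table costs memory (4 bytes per pixel here) but is built only once, turning every per-pixel transform into constant-time indexing.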
To greatly improve the efficiency of radar display system simulation, the total number of pixels processed within each frame interval must be reduced. The method proposed in this paper is based on the programmable rendering pipeline of modern graphics cards: it decouples echo image update, afterglow effect simulation, and ARPA symbol drawing from the final image synthesis, and makes full use of CPU and GPU parallelism. During each frame interval, the CPU completes the image update and a small amount of afterglow brightness computation only for the area swept by the scan line, while the GPU completes the image synthesis. This avoids accessing the massive number of full-screen pixels and greatly improves the efficiency of the display simulation. For example, with a display resolution of 1 000 × 1 000, an antenna speed of 20 rpm, and a simulation frame rate of 30 F/s, one full revolution of the antenna generates a fixed number of azimuth lines, and the simulation program only needs to access about 785 000 × 20/(60 × 30) ≈ 8 700 pixels per frame. Even adding the per-frame brightness updates for the afterglow azimuth lines, the total number of accesses remains a small fraction of the full-screen case, so the improvement is remarkable.
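The arithmetic behind this comparison can be checked with a short calculation; the figures follow the example in the text, and the truncating casts are only illustrative.

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

// Pixels inside the circular display area of the given radius.
long fullScreenPixels(int radius) {
    return (long)(PI * radius * (double)radius);
}

// Pixels in the sector the antenna sweeps between two consecutive frames:
// the beam covers rpm / (60 * fps) of a revolution per frame.
long sectorPixelsPerFrame(int radius, double rpm, double fps) {
    return (long)(PI * radius * (double)radius * rpm / (60.0 * fps));
}
```

For a 1 000 × 1 000 display (radius 500), `fullScreenPixels` gives about 785 000, while at 20 rpm and 30 F/s the per-frame sector holds only about 8 700 pixels, roughly a 90:1 reduction.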
1. Programmable rendering pipeline and Direct3D 9
The programmable pipeline is an important technical feature of modern high-performance graphics cards. It allows code to be written for the GPU (Graphics Processing Unit) that processes the raw data sent to the video card and then outputs the result to the display. Such code is called a shader; the two kinds, vertex shaders and pixel shaders, are used to transform and blend the vertices and textures of the model being drawn. Because shader code runs entirely on the video card, it takes no CPU time, and since the GPU is specially optimized for image computation, the code runs efficiently and image processing is fast. Although the programmable rendering pipeline was designed for increasingly complex 3D applications, its flexible structure means that, with appropriately written code, the video card can still be used to render efficiently in a 2D environment.
Direct3D is a low-level drawing API provided by Microsoft based on the Component Object Model (COM). It is built on the hardware abstraction layer (HAL). Direct3D checks the video card's capabilities and exposes its functions to developers through standard COM interfaces, so that they can access the video card hardware directly and safely and improve the rendering speed of applications. Direct3D 9 fully supports the programmable rendering pipeline and introduces the High Level Shader Language (HLSL) for writing shader code. The display system code of the radar simulator described here was developed with the Direct3D 9 SDK, which gives full play to the video card's hardware capabilities.
To use Direct3D in a 2D radar display environment, only the following is needed: draw the radar image onto a texture; use a triangle fan to approximate the circular display area; set the texture coordinates of the triangle vertices to the corresponding positions in the radar image texture; and submit the texture and vertices to Direct3D to render the radar display image, as shown in Figure 1.
Direct3D and shader programming will not be described in detail here. The following sections describe some key technologies in radar display system simulation.
2. Layering and texture blending of radar images
Radar display images can be divided into two types according to how they are generated and their characteristics. The first is the echo image output by the receiver (which also includes noise); its features are that the image is refreshed by scanning as the antenna rotates, the pixels are drawn in a polar coordinate system, and the image brightness decays over time, i.e., the afterglow effect. The second is the ARPA symbology from the information processing system, such as dynamic marks, point-target symbols, and target track lines; its characteristic is that the symbols change with the ARPA information and have no afterglow effect.
To avoid full-screen pixel access, the echo image refresh, the afterglow effect simulation, and the ARPA symbol drawing are decoupled from the final image synthesis. The pixel shader of the programmable rendering pipeline makes it possible to freely blend multiple textures on the video card. Because image synthesis is performed independently, the CPU only needs to complete the necessary drawing. As shown in Figure 2, the texture used to synthesize the radar image is divided into three layers: the echo image layer, the afterglow effect layer, and the ARPA symbol layer. The first two textures are refreshed every frame interval, while the third is redrawn only when the ARPA information changes. All of these tasks are completed by the CPU.
The GPU is responsible for texture synthesis. When the radar display is rendered, these three texture layers are blended in the pixel shader to generate the final radar image.
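The original shader listing did not survive in this copy. As a hypothetical sketch, the per-pixel blend the pixel shader performs might look like the following, written here as plain CPU-side C++ so it stays self-contained; the layer semantics follow the text (echo modulated by afterglow brightness, ARPA symbols composited on top), but the exact formula is an assumption.

```cpp
#include <algorithm>
#include <cstdint>

struct Rgba { uint8_t r, g, b, a; };

// Hypothetical per-pixel blend of the three texture layers:
//  - the echo color is modulated by the afterglow brightness,
//  - a global brightness control scales the result,
//  - ARPA symbols are drawn opaque on top (they have no afterglow).
Rgba blendRadarPixel(Rgba echo, uint8_t afterglow /*0..255*/, Rgba arpa,
                     float brightness /*display brightness control, 0..1*/) {
    auto scale = [&](uint8_t c) {
        float v = c * (afterglow / 255.0f) * brightness;
        return (uint8_t)std::min(255.0f, v);
    };
    Rgba out{scale(echo.r), scale(echo.g), scale(echo.b), 255};
    if (arpa.a > 0) out = {arpa.r, arpa.g, arpa.b, 255};
    return out;
}
```

In the real pipeline this logic runs once per output pixel on the GPU, with the three layers sampled from textures, so the CPU never touches the synthesized image.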
In addition, special display effects such as brightness control and color inversion are easy to implement in the pixel shader. Because the CPU and GPU work in parallel, the code executes very efficiently.
3. Radar echo image update and direct pixel access
When the echo image is updated within a frame interval, the pulse sequence output by the receiver is used to update the swept sector in the echo image texture. The pulse sequence is stored in an echo cache array. Because the pulse amplitude and pixel color data formats differ, and the cache array and pixel matrix have different dimensions, a direct bit transfer (BLT) cannot be used to draw the echo image; the update must be done pixel by pixel. Direct3D provides the IDirect3DTexture9::LockRect() method to lock a rectangular area of a texture, which is mapped to a pixel array similar to a DIB/DDB. The pointer returned by the function can be used to access the pixels in the array directly. Locking the texture to access video memory achieves performance that GDI functions cannot match.
The larger the locked rectangular area, the more pixel accesses are required. Computing the bounding rectangle of the swept sector before locking greatly reduces the number of pixels accessed; this is another benefit of layering the radar image. As shown in Figure 3, method A must access the full-screen pixels, while method B only accesses the pixels in the rectangle enclosing the swept sector. During pixel access, the lookup table method is used to convert between Cartesian and polar coordinates, which further improves access efficiency.
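Computing the rectangle to pass to LockRect() can be sketched as follows. This is a hypothetical illustration with assumed names: the bounding box of the pie slice swept since the last frame is the center point, the two arc endpoints, and any axis-aligned extremes of the circle that fall inside the sweep.

```cpp
#include <algorithm>
#include <cmath>

const double PI = 3.14159265358979323846;

struct Rect { int left, top, right, bottom; };

// Bounding box of the circular sector from angle a1 to a2 (radians,
// a1 <= a2), centered at (cx, cy) with the given radius.
Rect sectorBounds(double cx, double cy, double radius, double a1, double a2) {
    double minX = cx, maxX = cx, minY = cy, maxY = cy;   // center is a vertex
    auto consider = [&](double ang) {
        double x = cx + radius * std::cos(ang);
        double y = cy + radius * std::sin(ang);
        minX = std::min(minX, x); maxX = std::max(maxX, x);
        minY = std::min(minY, y); maxY = std::max(maxY, y);
    };
    consider(a1); consider(a2);
    // Axis-aligned extremes of the circle (multiples of pi/2) inside the sweep.
    for (double k = std::ceil(a1 / (PI / 2)); k * (PI / 2) <= a2; ++k)
        consider(k * (PI / 2));
    return Rect{(int)std::floor(minX), (int)std::floor(minY),
                (int)std::ceil(maxX),  (int)std::ceil(maxY)};
}
```

For the narrow sweeps that occur between frames, this rectangle covers only a small sliver of the display, which is exactly why method B touches so few pixels.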
4. Afterglow effect and render-to-texture
The afterglow layer is drawn with render-to-texture (RTT) technology: the texture is set as the render target and Direct3D drawing functions draw directly onto it, forming a color-graded, dynamically changing afterglow effect. To render to a texture, specify the texture usage with the D3DUSAGE_RENDERTARGET parameter and call the GetSurfaceLevel() method to obtain the texture's surface interface pointer. During rendering, use the SetRenderTarget() method to set that surface as the render target.
To draw the afterglow effect texture shown in Figure 4, N rays can be used to form the circle, where N equals the number of antenna azimuth increments. The color of each ray is determined by the colors of its endpoints, so N rays are described by 2N vertices, each containing a color value in addition to its coordinates. Once these vertices are created, Direct3D automatically draws them into an image in the rendering pipeline.
To produce a time-varying dynamic effect, the simulation program resets the color of each vertex within the frame interval based on the vertex coordinates and the current scan line position; Direct3D's re-rendering then produces a circle with a brightness gradient that rotates dynamically.
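The per-frame vertex color update can be sketched as follows. This is a hypothetical illustration: each of the N rays gets a brightness that decays with its angular lag behind the current scan line; the paper does not specify the decay law, so a linear falloff is assumed here.

```cpp
#include <vector>

// Brightness (0..1) for each of numRays rays. scanPos is the index of the
// ray the scan line currently occupies; persistence is the fraction of a
// revolution over which the glow fades to zero.
std::vector<float> afterglowBrightness(int numRays, int scanPos,
                                       float persistence) {
    int fadeRays = (int)(persistence * numRays);
    if (fadeRays < 1) fadeRays = 1;                    // avoid division by zero
    std::vector<float> b(numRays);
    for (int i = 0; i < numRays; ++i) {
        int lag = (scanPos - i + numRays) % numRays;   // rays behind the scan line
        b[i] = (lag < fadeRays) ? 1.0f - (float)lag / fadeRays : 0.0f;
    }
    return b;
}
```

Each brightness value would then be written into the color component of the corresponding pair of ray vertices before the afterglow texture is re-rendered.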
In this way, the CPU only needs to update the colors of 2N vertices within each frame interval. In the earlier example, one radar revolution forms 4 096 azimuths, so 4 096 rays are drawn, i.e., 8 192 vertex colors are set. Since the number of pixel accesses is greatly reduced, the added vertex color accesses do not affect the overall performance.
5. ARPA symbol drawing with GDI
The drawing of ARPA information and symbols does not change with the scan; it depends only on the state of the radar information processor, i.e., its state data table and the target tracking table, so it is well suited to drawing with GDI functions. To draw on a texture with GDI functions, obtain the texture's surface interface pointer and then call the surface's GetDC() method to obtain the surface's device context (DC). Win32 GDI functions can then be called to output symbols and text. Because this drawing code is small and its update rate is low, its CPU usage is almost negligible.
6. Conclusion
The display part of the simulator for a certain type of onboard navigation and search radar was improved using the above method. The simulator's computer platform is a Pentium IV 2.8 GHz CPU with an ASUS Extreme AX550 graphics card. The frame rate of the simulation program increased from 15 F/s to more than 50 F/s, a marked improvement. The results show that programmable rendering pipeline technology can separate the echo image update from the afterglow effect computation, give full play to the rendering capability of the video card, and meet the needs of high-resolution radar display system simulation.
In fact, drawing the afterglow effect texture is essentially independent of radar pulse data processing and the other stages. If the brightness values describing the afterglow effect could be computed entirely in the vertex shader of the programmable rendering pipeline, with the vertex shader performing the brightness attenuation and update calculations, the CPU's computational burden would be reduced further. However, afterglow simulation is an iterative process: the brightness of the current frame is the attenuated brightness of the previous frame, so the result of each iteration must be saved. The current version of Direct3D does not support render-to-vertex (RTV), the vertex shader cannot hold large array variables, and the iteration results are therefore difficult to save. For now, the animation of the vertex colors that simulate the afterglow effect is still completed by the CPU. As graphics hardware and Direct3D develop, this part of the code can be further optimized.