iOS Audio/Video Advanced Programming | iOS VideoToolbox: Reading and Writing the YUV Image in the Decode Callback's CVImageBufferRef

Source: Internet
Author: User

This document describes, in the context of H.264 decoding, how to read the YUV or RGB data carried by the CVImageBufferRef parameter of the Video Toolbox decode callback, and gives code that generates a grayscale image from a CVImageBufferRef, which is convenient for debugging. It also introduces easily overlooked problems in processing YUV inside the Video Toolbox decode callback. The document targets advanced iOS audio/video programming and aims at providing high-value Core Video material in Chinese; recently I have also been following Core Video questions on StackOverflow, learning from and giving back to the community.

Contents:
|- 1. Reading CVImageBufferRef (CVPixelBufferRef)
|- 2. Writing CVImageBufferRef (CVPixelBufferRef)
|- 3. The CVPixelBufferPool memory pool
|- 4. Creating a grayscale image from a CVPixelBuffer with Core Graphics
|- 5. Pitfalls
|-- 5.1 Problems in directly operating on the decode callback's CVImageBuffer (CVPixelBuffer)
|-- 5.2 Vertical mirroring after a CVPixelBuffer is uploaded to the GPU
|- References and recommended reading

During the implementation of a panorama video player and its associated projects, I wrote the following Video Toolbox related documents (some are still drafts because of development tasks; their content will be revised later):

- iOS VideoToolbox hardware coding of H.265 (HEVC) and H.264 (AVC): 1. Overview (draft)
- iOS VideoToolbox hardware coding of H.265 (HEVC) and H.264 (AVC): 2. Writing H.264 data to a file
- iOS VideoToolbox hardware coding of H.265 (HEVC) and H.264 (AVC): 4. Synchronous encoding
- iOS audio/video advanced programming: AVAssetReaderTrackOutput changing the CMFormatDescription causes Video Toolbox decoding to fail, with GPU direct display without decoding
- iOS audio/video advanced programming: AVAsset, Core Video, VideoToolbox, FFmpeg and CMTime
- Video Toolbox multi-pass encoding
- Obtaining the color conversion matrix of H.264
- Video Toolbox decoding of live streams
- and so on.

CVPixelBufferRef is an alias of CVImageBufferRef, and the two are operated on almost identically. From CVPixelBuffer.h:

/*
 * CVPixelBufferRef
 * Based on the image buffer type.
 * The pixel buffer implements the memory storage for an image buffer.
 */
typedef CVImageBufferRef CVPixelBufferRef;

Although syntactically CVPixelBufferRef is an alias of CVImageBufferRef, their descriptions in the documentation differ. Core Video image buffers "provide a convenient interface for managing different types of image data"; pixel buffers and Core Video OpenGL buffers derive from the Core Video image buffer. CVImageBufferRef is "a reference to a Core Video image buffer. An image buffer is an abstract type representing Core Video buffers that hold images. In Core Video, pixel buffers, OpenGL buffers, and OpenGL textures all derive from the image buffer type." CVPixelBufferRef is "a reference to a Core Video pixel buffer object. The pixel buffer stores an image in main memory."

From the above, CVPixelBuffer "inherits" from CVImageBuffer. However, since Core Video exposes a C interface, implementing this "object-oriented inheritance" in C means the data members of CVPixelBuffer are laid out essentially identically to those of CVImageBuffer, with the compiler's offsets guaranteeing byte alignment.
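Because of this alias, the CVImageBufferRef handed to a VTDecompressionOutputCallback can be passed straight to CVPixelBuffer functions without a cast. A minimal sketch, assuming a decompression session created elsewhere; the callback name and the logging are illustrative, not from the original article:

#import <VideoToolbox/VideoToolbox.h>

static void didDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration) {
    if (status != noErr || imageBuffer == NULL) {
        return;
    }
    // CVPixelBuffer* calls accept the CVImageBufferRef directly, no cast needed.
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
    NSLog(@"decoded frame: %zux%zu, format '%c%c%c%c'", width, height,
          (char)(format >> 24), (char)(format >> 16), (char)(format >> 8), (char)format);
}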
This C-style "inheritance" is similar to the way FFmpeg's AVFrame can be force-cast to AVPicture; take the FFmpeg 3.0 source as an example:

typedef struct AVFrame {
    uint8_t *data[AV_NUM_DATA_POINTERS];
    int linesize[AV_NUM_DATA_POINTERS];
    uint8_t **extended_data;
    // many further fields ...
} AVFrame;

typedef struct AVPicture {
    uint8_t *data[AV_NUM_DATA_POINTERS];    ///< pointers to the image data planes
    int linesize[AV_NUM_DATA_POINTERS];     ///< number of bytes per line
} AVPicture;

Of course, judging by some of Apple's open-source frameworks, Core Video is most likely implemented in Objective-C++ and may actually use C++ inheritance; I will not speculate further.

1. Reading CVImageBufferRef (CVPixelBufferRef)

In the decode callback, the frame data is handed over through a CVImageBufferRef. If you need to take out the pixel data for further processing, you have to access the memory where the pixels are actually stored. The image data decoded by Video Toolbox is not directly accessible to the CPU: you need to lock the base address with CVPixelBufferLockBaseAddress() before accessing it from main memory; otherwise a function such as CVPixelBufferGetBaseAddressOfPlane returns NULL or an invalid value. It is worth noting that the CVPixelBufferLockBaseAddress call itself does not cost much; in general it is the work done after locking, such as copying memory out of the CVPixelBuffer and computing memory offsets, that is relatively time-consuming. If the CVPixelBuffer image only needs to be displayed on screen, it is recommended to do the image manipulation on the GPU. The figures below show the performance cost of reading and writing the left half of the image (please ignore the rough memory-offset code).

[Figure: performance cost of reading the CVPixelBuffer image]
[Figure: performance cost of writing the CVPixelBuffer image]

However, creating a UIImage or CIImage from a CVImageBuffer does not require an explicit call to the lock function:

// CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly); // not needed
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
    createCGImage:ciImage
         fromRect:CGRectMake(0, 0,
                             CVPixelBufferGetWidth(imageBuffer),
                             CVPixelBufferGetHeight(imageBuffer))];
UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
CGImageRelease(videoImage);
// CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

CVPixelBufferIsPlanar tells whether the pixels are stored planar or chunky. For a planar buffer, the number of YUV planes is obtained with CVPixelBufferGetPlaneCount; usually there are two planes, Y in one and UV in the other. Whether U and V share one plane is determined by the pixel format requested through destinationImageBufferAttributes when the decoding session is created with VTDecompressionSessionCreate (this desired format may differ from the video source's pixel format). Each plane can be treated as a table of rows and columns, with the pixels filled in row order. The following description assumes planar storage: first get the plane count with CVPixelBufferGetPlaneCount, then get the storage address of each channel, usually Y, U and V, with CVPixelBufferGetBaseAddressOfPlane(index); whether U and V are separated is specified by the decoding session as described above. A reading sketch follows.
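A minimal reading sketch under the rules above, assuming imageBuffer is a biplanar (NV12-style) buffer obtained in the decode callback: lock first, step by the per-plane row stride (bytesPerRow may include padding beyond the visible width), and unlock when done. The average-luma computation is illustrative only:

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *yBase = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t width   = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t height  = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t stride  = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
uint64_t sum = 0;
for (size_t row = 0; row < height; ++row) {
    for (size_t col = 0; col < width; ++col) {
        sum += yBase[row * stride + col];   // stride, not width, indexes the next row
    }
}
NSLog(@"average luma: %llu", (unsigned long long)(sum / (width * height)));
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);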
For a planar buffer, the value returned by CVPixelBufferGetBaseAddress is not the pixels themselves but a pointer to a CVPlanarPixelBufferInfo struct, defined as follows:

/* Planar pixel buffers have the following descriptor at their base address.
   Clients should generally use CVPixelBufferGetBaseAddressOfPlane,
   CVPixelBufferGetBytesPerRowOfPlane, etc. instead of accessing it directly. */
struct CVPlanarComponentInfo {
    int32_t  offset;    /* offset from main base address to base address of this plane, big-endian */
    uint32_t rowBytes;  /* bytes per row of this plane, big-endian */
};
typedef struct CVPlanarComponentInfo CVPlanarComponentInfo;

struct CVPlanarPixelBufferInfo {
    CVPlanarComponentInfo componentInfo[1];
};
typedef struct CVPlanarPixelBufferInfo CVPlanarPixelBufferInfo;

struct CVPlanarPixelBufferInfo_YCbCrPlanar {
    CVPlanarComponentInfo componentInfoY;
    CVPlanarComponentInfo componentInfoCb;
    CVPlanarComponentInfo componentInfoCr;
};
typedef struct CVPlanarPixelBufferInfo_YCbCrPlanar CVPlanarPixelBufferInfo_YCbCrPlanar;

struct CVPlanarPixelBufferInfo_YCbCrBiPlanar {
    CVPlanarComponentInfo componentInfoY;
    CVPlanarComponentInfo componentInfoCbCr;
};
typedef struct CVPlanarPixelBufferInfo_YCbCrBiPlanar CVPlanarPixelBufferInfo_YCbCrBiPlanar;

Get the pixel format with CVPixelBufferGetPixelFormatType and read the data in the corresponding way; for YUV420SP, for example, the interleaved U and V samples are all read from a single chroma plane, stepping across it by its row stride.

2. Writing CVImageBufferRef (CVPixelBufferRef)

The following code shows the process of copying data into the Y and UV planes:

NSDictionary *pixelAttributes = @{(id)kCVPixelBufferIOSurfacePropertiesKey : @{}};
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreate(kCFAllocatorDefault,
                                      width, height,
                                      kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                      (__bridge CFDictionaryRef)pixelAttributes,
                                      &pixelBuffer);
if (result != kCVReturnSuccess) {
    NSLog(@"Unable to create CVPixelBuffer %d", result);
}
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, width * height);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
// numberOfElementsForChroma is the product of the UV plane's width and height
memcpy(uvDestPlane, uvPlane, numberOfElementsForChroma);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
CIImage *coreImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CVPixelBufferRelease(pixelBuffer);

The code above creates a CIImage with -[CIImage imageWithCVPixelBuffer:]. Running it on a real iPad Air 2 and iPhone 6 Plus exposed two problems:

1. When using kCVPixelFormatType_420YpCbCr8PlanarFullRange, it logs "[CIImage initWithCVPixelBuffer:options:] failed because its pixel format f420 is not supported"; that is, CIImage does not support a CVPixelBuffer created in yuv420p (f420) format. Testing showed the video source format to be yuvj420p (pc, bt709). When VTDecompressionSessionCreate is given no kCVPixelBufferPixelFormatTypeKey value in destinationImageBufferAttributes, the CVImageBufferRef decoded by Video Toolbox corresponds to f420; when destinationImageBufferAttributes specifies kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, the decoded image buffer is 420v. The failure appeared because the YUV data had been copied in the 420v layout while the destination buffer was created with pixel format f420; the two storage layouts differ, so creating the CIImage failed.

2. The format of the buffer produced by CVPixelBufferCreate is determined by its pixelFormatType parameter, not by a pixel format specified with kCVPixelBufferPixelFormatTypeKey in the pixelAttributes parameter; a sketch follows.
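A small sketch of how the second point can be observed, assuming the behavior the author describes; the conflicting dictionary entry is deliberate, and per that observation the parameter wins:

NSDictionary *attrs = @{(id)kCVPixelBufferPixelFormatTypeKey :
                            @(kCVPixelFormatType_420YpCbCr8PlanarFullRange)};  // asks for 'f420'
CVPixelBufferRef buffer = NULL;
CVReturn ret = CVPixelBufferCreate(kCFAllocatorDefault, 960, 480,
                                   kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,  // '420v'
                                   (__bridge CFDictionaryRef)attrs,
                                   &buffer);
if (ret == kCVReturnSuccess) {
    // Expected per point 2: the buffer reports '420v', the pixelFormatType parameter.
    OSType format = CVPixelBufferGetPixelFormatType(buffer);
    NSLog(@"format matches parameter: %d",
          format == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange);
    CVPixelBufferRelease(buffer);
}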
Here are some simple image processing methods.

[Figure: original image as grayscale]

(1) Horizontal mirror

Horizontal mirroring swaps pixel positions left and right around the vertical center line of the image. As a matrix operation:

                          | -1     0   0 |
[x', y', 1] = [x, y, 1] * |  0     1   0 |
                          | width  0   1 |

For matrix operations like this, the GPU has hardware-accelerated 2x2 and 3x3 matrix implementations, quite possibly finishing in a single instruction, while the CPU usually computes element by element; therefore we currently lean toward doing matrix operations on the GPU. A sample CPU implementation follows.

for (int line = 0; line < 480; ++line) {
    for (int col = 0; col < 960; ++col) {
        // 960 - 1 - col: plain "960 - col" would read one element past the end of the row
        dst_buffer[line * 960 + col] = src_buffer[line * 960 + (960 - 1 - col)];
    }
}

[Figure: horizontal mirror result]

(2) Vertical mirror

Vertical mirroring swaps pixel positions up and down around the horizontal center line of the image. As a matrix operation:

                          | 1     0     0 |
[x', y', 1] = [x, y, 1] * | 0    -1     0 |
                          | 0   height  1 |

A sample CPU implementation follows.

for (int line = 0; line < 480; ++line) {
    for (int col = 0; col < 960; ++col) {
        // 480 - 1 - line: plain "480 - line" would write one row out of bounds
        dst_buffer[(480 - 1 - line) * 960 + col] = src_buffer[line * 960 + col];
    }
}

[Figure: vertical mirror result]

3. The CVPixelBufferPool memory pool

When you create a CVPixelBufferPool yourself and create CVPixelBuffers through it, it is easy for a CVPixelBuffer to be released incorrectly, or for its reference count to be accidentally increased, causing memory leaks. The figures below take ijkplayer as an example to demonstrate a CVPixelBuffer leak.

[Figure: the CVPixelBuffer leak in ijkplayer]
[Figure: the CVPixelBuffer's final reference count staying above 0 causes the leak]

When you create CVPixelBuffers yourself instead, memory spikes become a problem: creating 960x480 yuv420sp buffers this way occupied more than 700 MB of memory, and with asynchronous decoding and no cap on the memory footprint this crashes the application.

[Figure: memory occupied by CVPixelBufferCreate]

If you want to create neither a CVPixelBufferPool nor CVPixelBuffers yourself, a somewhat tricky way is to reuse the CVPixelBuffer passed to the decode callback, so memory consumption is no longer a worry. In my practice the image is encoded immediately after processing, so this usage does not disorder the frames in the decoder's own cache queue; the premise is that the modified pixel data stays within the width and height of the original data. Of course, this approach can also be problematic, as described in the pitfalls section later in this document. For a decode, process, encode pipeline in which the processed image differs in size from the original, creating the encoder first and then obtaining a CVPixelBufferPool from it, letting the system manage the CVPixelBuffers, is also a reliable practice; a pool sketch follows.
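If you do manage the pool yourself, a minimal CVPixelBufferPool sketch (not from the original article; the 960x480 NV12 parameters echo the example above). Buffers drawn from a pool are recycled on release rather than freed, which avoids the per-frame CVPixelBufferCreate cost:

NSDictionary *bufferAttrs = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    (id)kCVPixelBufferWidthKey : @960,
    (id)kCVPixelBufferHeightKey : @480,
    (id)kCVPixelBufferIOSurfacePropertiesKey : @{},
};
CVPixelBufferPoolRef pool = NULL;
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL,
                        (__bridge CFDictionaryRef)bufferAttrs, &pool);

// Per frame:
CVPixelBufferRef frame = NULL;
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &frame);
// ... fill and use `frame` ...
CVPixelBufferRelease(frame);   // balance the create, or the pool leaks buffers

// On teardown:
CVPixelBufferPoolRelease(pool);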
In addition, during image processing, whether Video Toolbox was told to output full range or video range, the RGB image later created through Core Graphics was correct and consistent with QuickTime playback. However, when the decoded YUV420SP data was copied and then processed, some areas showed incorrect colors; after specifying yuv420p as the Video Toolbox output and then processing, no color anomaly appeared. The algorithms were of course changed to their yuv420p counterparts, and personally I think it very likely that our team's YUV420SP copy and processing algorithm was simply wrong.

4. Creating a grayscale image from a CVPixelBuffer with Core Graphics

After modifying the YUV data, having to go through the GPU every time just to convert YUV to RGB and inspect the result is troublesome, especially for offline applications such as transcoding. Below is a way to generate a UIImage from a CVPixelBuffer using only the Y plane, so you can judge whether the processing result matches expectations (a usage sketch appears at the end of this section).

// baseAddress is the Y plane address; passing the address of the full
// yuv420(s)p data also works, since the U and V planes are ignored.
UIImage* yuv420ToUIImage(void *baseAddress, size_t width, size_t height, size_t bytesPerRow) {
    // Create a device-dependent gray color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGImageAlphaNone);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    // Create an image object from the Quartz image, then release the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}

The code above may raise the question of why a grayscale image needs no data from the U and V channels. Indeed, I recently looked into exactly this question. When creating a grayscale image, some people additionally set the U and V channels to 0 when working before the bias (value range [-128, 127]) or to 128 when working after the bias (value range [0, 255]); however, their image-creation code, like ours, never actually uses the UV data. I have also seen the claim that the Y channel simply is the grayscale channel. With my limited understanding, I am reluctant to endorse that claim: Y is one component of YUV, while grayscale is a composite quantity; even if their values are close, conceptually there should be a difference. "Close" here can be seen from the BT.601 conversion matrix:

Y = 0.299 R + 0.587 G + 0.114 B
grayscale = (R + G + B) / 3

so the Y value comes out close to the grayscale value.

Below is a brief analysis of the image-creation snippet. Some open source projects, such as SDWebImage, use the CGColorSpaceCreateDeviceRGB function because their data source is RGB, whereas our YUV data would have to pass through the color conversion matrix to obtain RGB. For simplicity, a gray image created with CGColorSpaceCreateDeviceGray lets us see changes in the image directly; the disadvantage is the loss of the color information, as shown below.

[Figure: grayscale image generated from the YUV image]

Although nearly every pixel format seen when decoding YUV video can produce a grayscale image this way, not all raw image data can be turned into a visible image by Core Graphics; iOS supports a very limited set of pixel formats, as shown below.

[Figure: pixel formats supported when creating images on iOS]
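As promised at the start of this section, a minimal sketch of driving yuv420ToUIImage from a decode callback for debugging; only the Y plane is handed over, so U and V never enter the picture (imageBuffer and the dump path are assumptions):

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
void *yBase = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
UIImage *gray = yuv420ToUIImage(yBase, width, height, bytesPerRow);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
// Dump to disk while debugging, e.g.:
// [UIImageJPEGRepresentation(gray, 0.9) writeToFile:@"/tmp/frame.jpg" atomically:YES];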
5. Pitfalls in operating on CVImageBuffer (CVPixelBuffer)

Although none of this looks difficult, there are still problems, big and small; if they were not described, the title of this document really would be clickbait. Below are the situations I ran into and resolved during development.

5.1 Problems in directly operating on the decode callback's CVImageBuffer (CVPixelBuffer)

The problem lies in processing YUV inside the decode callback function, and it occurs whether or not decoding is synchronous, and whether or not decoding, texture creation and interface refresh happen on the same thread. Note that the image in the CVPixelBuffer obtained by the decode callback is the image that was processed in the previous decode callback, not a fresh, complete image decoded from the compressed video data. In other words, after a keyframe decodes successfully, each subsequent P-frame continues decoding on the basis of the previous frame, superimposes its result onto that already-modified picture, and passes the outcome to the decode callback. A simple sketch of the pipeline:

Decode thread: VTDecompressionSessionDecodeFrame, then VTDecoderCallback (image processing happens here), then add to the to-be-displayed queue
Render thread: read the to-be-displayed queue, take the processed CVPixelBuffer, then CVOpenGLESTextureCacheCreateTextureFromImage

Let us look at this in detail. After processing the three YUV channels, playback looked normal, with the resource occupancy shown below. But dumping the CVPixelBuffer (CVImageBuffer) passed by the Video Toolbox callback function showed the images we had already processed, with subsequent P-frames continuously superimposed on the basis of the preceding keyframe, the resulting image serving as the next frame of the video.

[Figure: CPU not overloaded]
[Figure: GPU consumption with the CPU not overloaded]
[Figure: Y-channel graph with the CPU not overloaded]
[Figure: the decode callback's image for each frame]

As the per-frame dump shows, for a video sequence with a keyframe interval of 15, src_1.jpg and src_16.jpg get an immediate refresh thanks to their keyframes, while the images in between keep overlaying on top of the already-processed YUV. The natural workaround is to copy the frame out before modifying it, as sketched below.
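A copy sketch (hypothetical helper, not the article's code), assuming copyBuffer was created with the same dimensions and pixel format as the callback's imageBuffer. Copying leaves the decoder's reference frame untouched, and CVBufferPropagateAttachments carries over the color attachments that a self-created buffer lacks (see 5.2 below):

CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
CVPixelBufferLockBaseAddress(copyBuffer, 0);
for (size_t plane = 0; plane < CVPixelBufferGetPlaneCount(imageBuffer); ++plane) {
    uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, plane);
    uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(copyBuffer, plane);
    size_t srcStride = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, plane);
    size_t dstStride = CVPixelBufferGetBytesPerRowOfPlane(copyBuffer, plane);
    size_t rows = CVPixelBufferGetHeightOfPlane(imageBuffer, plane);
    for (size_t row = 0; row < rows; ++row) {
        // copy row by row: the two buffers may have different padding
        memcpy(dst + row * dstStride, src + row * srcStride, MIN(srcStride, dstStride));
    }
}
CVBufferPropagateAttachments(imageBuffer, copyBuffer);
CVPixelBufferUnlockBaseAddress(copyBuffer, 0);
CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);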
5.2 Vertical mirroring after a CVPixelBuffer is uploaded to the GPU

For the decoding in question, the CMVideoFormatDescription and the specified output attributes were:

CMVideoFormatDescription {
    CVFieldCount = 1;
    CVImageBufferChromaLocationBottomField = Left;
    CVImageBufferChromaLocationTopField = Left;
    FullRangeVideo = 0;
    SampleDescriptionExtensionAtoms = {
        avcC = <01640033 ffe10014 67640033 ac1b4583 c0f68400 000fa000 03a98010 01000468 e923cbfd f8f800>;
    };
}

destinationImageBufferAttributes = {
    OpenGLESCompatibility = 1;
    PixelFormatType = 2033463856;    // fourcc 'y420'
}

After creating a CVPixelBuffer yourself and copying the decode callback's CVPixelBuffer data into it, you will often find the image upside down or, to be exact, vertically mirrored. Yet the image generated with the earlier grayscale function is upright, and the behavior appears only after uploading to the GPU. The reason is that the image is stored in memory in its own coordinate system, whose y-axis is opposite to that of OpenGL ES texture coordinates, so the image comes out inverted on the GPU.

Now let us try to handle the problem through the Core Video interface. First, determine whether the source and target images are flipped:

BOOL isFlipped = CVImageBufferIsFlipped(pixelBuffer);
if (isFlipped) {
    NSLog(@"pixelBuffer is %s", isFlipped ? "flipped" : "not flipped");
}
isFlipped = CVImageBufferIsFlipped(imageBuffer);
if (isFlipped) {
    NSLog(@"imageBuffer is %s", isFlipped ? "flipped" : "not flipped");
}

Both buffers turn out to be flipped; the run prints:

pixelBuffer is flipped
imageBuffer is flipped

Obviously, more information is needed for a judgment. Reading the attachments of the two buffers with mode kCVAttachmentMode_ShouldNotPropagate yields no values. However, the callback function's pixel buffer does carry kCVAttachmentMode_ShouldPropagate attachments, while the buffer we created ourselves carries none, as shown below.

CVFieldCount = 1;
CVImageBufferChromaLocationBottomField = Left;
CVImageBufferChromaLocationTopField = Left;
CVImageBufferColorPrimaries = "SMPTE_C";
CVImageBufferTransferFunction = "ITU_R_709_2";
CVImageBufferYCbCrMatrix = "ITU_R_601_4";
ColorInfoGuessedBy = VideoToolbox;

Then, consulting the H.264-related documentation: CVFieldCount only says that the CVPixelBuffer contains a single access unit, and the BottomField and TopField entries describe the chroma sample locations of the image buffer's two fields, none of which has anything to do with image inversion. The remaining parameters, such as the YCbCrMatrix, merely describe the YUV-to-RGB conversion required by the source video. So, within my understanding of Core Video, this cannot be handled through the Core Video interface; the options are mirroring the texture coordinates on the GPU, or using the vertical mirroring method described earlier.
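Since Core Video offers no flip switch here, the GPU-side fix is a one-time change to the quad's texture coordinates. A sketch for a fullscreen quad drawn as a triangle strip (vertex order assumed); the V axis is simply inverted so the top-down image maps onto OpenGL ES's bottom-up texture space:

static const GLfloat texCoordsFlipped[] = {
    0.0f, 1.0f,   // bottom-left vertex samples the image's top-left
    1.0f, 1.0f,   // bottom-right vertex samples the image's top-right
    0.0f, 0.0f,   // top-left vertex samples the image's bottom-left
    1.0f, 0.0f,   // top-right vertex samples the image's bottom-right
};

Equivalently, the vertical-mirror matrix from section (2) above can be applied to the pixels on the CPU before upload.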
