H264 Decoding Learning - 2015.04.16

I read a lot today, and I feel I made a little progress.

1. Knowledge about H264

Because the data carried over RTP is H264-encoded, it has to be decoded here, so first let me add some background knowledge about H264.

Compared with earlier video compression standards, the H.264 video compression standard (H.264 for short) performs better, which is why it is called the new-generation video compression standard. Compared with H.263 or MPEG-4, its main new features are as follows:
1. More precise and richer intra-frame coding and inter-frame prediction methods are adopted, which effectively reduce the residual data.
2. A new arithmetic coding method is introduced to increase the compression ratio.
3. The video data is layered more reasonably, and the introduction of the NAL (network abstraction layer) makes the bitstream better suited to network transmission.
4. The traditional frame structure is replaced by the slice structure and parameter sets, which improve the bitstream's error resilience.
5. A flexible reference-frame management mechanism is introduced; up to 16 reference frames can be used.

These features give H.264 a qualitative leap in signal-to-noise ratio, image quality, and application flexibility compared with earlier standards; the trade-off is that H.264 is more complex to implement.
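As a small illustration of point 3 above, here is a minimal sketch (a hypothetical helper of my own, not part of any library mentioned below) that scans an Annex-B H.264 byte stream for 00 00 01 / 00 00 00 01 start codes and reports each NAL unit's type, which is how SPS, PPS, and slice NALUs can be told apart before they are handed to a decoder.

using System;
using System.Collections.Generic;

static class NalScanner
{
    // Return the NAL unit type (low 5 bits of the byte after each start code)
    // for every NAL unit found in an Annex-B byte stream.
    public static IEnumerable<int> NalUnitTypes(byte[] stream)
    {
        for (int i = 0; i + 3 < stream.Length; i++)
        {
            // A three-byte start code 00 00 01; a four-byte code just has one more leading 00.
            if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 1)
            {
                yield return stream[i + 3] & 0x1F;   // 1 = non-IDR slice, 5 = IDR slice, 7 = SPS, 8 = PPS
                i += 3;
            }
        }
    }
}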

2. Some possible solutions (not implemented)

Intel Media SDK, the Windows Media SDK, http://www.ffmpeg-csharp.com/ (that site claims to be the simplest way to use ffmpeg; I have not learned how to use it yet and will come back to it later), and calling the ffmpeg development libraries from C#. These are only the possible solutions I found; I was too lazy, and too eager to get something working, to study them properly. Later I came across some H264 knowledge (http://www.cnblogs.com/over140/archive/2009/03/22/1418946.html) in a blog about the Hikvision player library API. I used to work with Hikvision cameras and often read that blog, and in the evening I finally found a solution: there is a dedicated H264 decoding library, hi_h264dec, but it is also written in C++. That is not a problem, though. I remembered that the Hikvision camera C++ code I used before was called from C#, and the same can be done here with P/Invoke (see PInvoke.net).

The hi_h264dec_w.dll interface was wrapped in the following C# class (also referring to code found online):

using System;
using System.Runtime.InteropServices;

// C# wrapper for the hi_h264dec_w.dll decoder exports and its structures.
class Hi264Dec
{
    public const int HI_SUCCESS = 0;
    public const int HI_FAILURE = -1;
    public const int HI_LITTLE_ENDIAN = 1234;
    public const int HI_BIG_ENDIAN = 4321;
    public const int HI_DECODER_SLEEP_TIME = 60000;
    public const int HI_H264DEC_OK = 0;
    public const int HI_H264DEC_NEED_MORE_BITS = -1;
    public const int HI_H264DEC_NO_PICTURE = -2;
    public const int HI_H264DEC_ERR_HANDLE = -3;

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecImageEnhance", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Hi264DecImageEnhance(IntPtr hDec, ref hiH264_DEC_FRAME_S pDecFrame, uint uEnhanceCoeff);

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecCreate", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr Hi264DecCreate(ref hiH264_DEC_ATTR_S pDecAttr);

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecDestroy", CallingConvention = CallingConvention.Cdecl)]
    public static extern void Hi264DecDestroy(IntPtr hDec);

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecGetInfo", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Hi264DecGetInfo(ref hiH264_LIBINFO_S pLibInfo);

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecFrame", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Hi264DecFrame(IntPtr hDec, IntPtr pStream, uint iStreamLen, ulong ullPTS, ref hiH264_DEC_FRAME_S pDecFrame, uint uFlags);

    [DllImport("hi_h264dec_w.dll", EntryPoint = "Hi264DecAU", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Hi264DecAU(IntPtr hDec, IntPtr pStream, uint iStreamLen, ulong ullPTS, ref hiH264_DEC_FRAME_S pDecFrame, uint uFlags);

    // Decoder creation attributes.
    [StructLayout(LayoutKind.Sequential)]
    public struct hiH264_DEC_ATTR_S
    {
        public uint uPictureFormat;
        public uint uStreamInType;
        public uint uPicWidthInMB;   // picture width in macroblocks (width / 16)
        public uint uPicHeightInMB;  // picture height in macroblocks (height / 16)
        public uint uBufNum;
        public uint uWorkMode;
        public IntPtr pUserData;
        public uint uReserved;
    }

    // One decoded frame: YUV plane pointers plus size, stride, and cropping information.
    [StructLayout(LayoutKind.Sequential)]
    public struct hiH264_DEC_FRAME_S
    {
        public IntPtr pY;
        public IntPtr pU;
        public IntPtr pV;
        public uint uWidth;
        public uint uHeight;
        public uint uYStride;
        public uint uUVStride;
        public uint uCroppingLeftOffset;
        public uint uCroppingRightOffset;
        public uint uCroppingTopOffset;
        public uint uCroppingBottomOffset;
        public uint uDpbIdx;
        public uint uPicFlag;
        public uint bError;
        public uint bIntra;
        public ulong ullPTS;
        public uint uPictureID;
        public uint uReserved;
        public IntPtr pUserData;
    }

    // Library information returned by Hi264DecGetInfo.
    [StructLayout(LayoutKind.Sequential)]
    public struct hiH264_LIBINFO_S
    {
        public uint uMajor;
        public uint uMinor;
        public uint uRelease;
        public uint uBuild;
        [MarshalAs(UnmanagedType.LPStr)] public string sVersion;
        [MarshalAs(UnmanagedType.LPStr)] public string sCopyRight;
        public uint uFunctionSet;
        public uint uPictureFormat;
        public uint uStreamInType;
        public uint uPicWidth;
        public uint uPicHeight;
        public uint uBufNum;
        public uint uReserved;
    }

    // User data (such as SEI) attached to a frame.
    [StructLayout(LayoutKind.Sequential)]
    public struct hiH264_USERDATA_S
    {
        public uint uUserDataType;
        public uint uUserDataSize;
        public IntPtr pData;
        public IntPtr pNext;
    }
}

The following code can be run when the form loads:

// Initialization (this can be done in the Form Load event handler).
var decAttr = new Hi264Dec.hiH264_DEC_ATTR_S();
decAttr.uPictureFormat = 0;
decAttr.uStreamInType = 0;
decAttr.uPicWidthInMB = 480 >> 4;
decAttr.uPicHeightInMB = 640 >> 4;
decAttr.uBufNum = 8;
decAttr.uWorkMode = 16;
IntPtr _decHandle = Hi264Dec.Hi264DecCreate(ref decAttr);
Hi264Dec.hiH264_DEC_FRAME_S _decodeFrame = new Hi264Dec.hiH264_DEC_FRAME_S();

// Decoding.
// pData is the H264 NALU data to be decoded, and length is the length of that data.
if (Hi264Dec.Hi264DecAU(_decHandle, pData, (uint)length, 0, ref _decodeFrame, 0) == 0)
{
    if (_decodeFrame.bError == 0)
    {
        // Compute the lengths of the Y, U, and V planes.
        var yLength = _decodeFrame.uHeight * _decodeFrame.uYStride;
        var uLength = _decodeFrame.uHeight * _decodeFrame.uUVStride / 2;
        var vLength = uLength;
        var yBytes = new byte[yLength];
        var uBytes = new byte[uLength];
        var vBytes = new byte[vLength];
        var decodedBytes = new byte[yLength + uLength + vLength];
        // _decodeFrame is the decoded frame object; it holds the YUV data pointers, width, height, and so on.
        Marshal.Copy(_decodeFrame.pY, yBytes, 0, (int)yLength);
        Marshal.Copy(_decodeFrame.pU, uBytes, 0, (int)uLength);
        Marshal.Copy(_decodeFrame.pV, vBytes, 0, (int)vLength);
        // Pack the YUV data taken from _decodeFrame into decodedBytes.
        Array.Copy(yBytes, decodedBytes, yLength);
        Array.Copy(uBytes, 0, decodedBytes, yLength, uLength);
        Array.Copy(vBytes, 0, decodedBytes, yLength + uLength, vLength);
        // decodedBytes now holds the YUV data; convert it to RGB, wrap it in a Bitmap,
        // and display it through a PictureBox control. That kind of code is common online,
        // so it is not posted here.
    }
}

Destroy the decoder handle when the form is closed:

Hi264Dec.Hi264DecDestroy(_decHandle);

As mentioned above, once the incoming data has been decoded to YUV, it still needs to be converted to RGB and then into an image that can be shown in the control. A quick look online shows there are plenty of such conversion algorithms, so tomorrow's goal is to get the video displayed, reusing existing wheels wherever possible; a rough sketch of the conversion is shown below.
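For reference, here is a rough sketch (not from the original post) of one common way to do the YUV-to-RGB step in C#: an integer approximation of the BT.601 conversion that turns a planar YUV420 frame into a 24-bit Bitmap. The class and method names and the parameter list are my own; yStride and uvStride correspond to _decodeFrame.uYStride and _decodeFrame.uUVStride, which may be larger than the visible width.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class YuvConverter
{
    // Convert one planar YUV420 frame to a 24-bit RGB Bitmap using an integer
    // approximation of the BT.601 conversion.
    public static Bitmap Yuv420ToBitmap(byte[] y, byte[] u, byte[] v,
                                        int width, int height, int yStride, int uvStride)
    {
        var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        var data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        var row = new byte[data.Stride];
        for (int j = 0; j < height; j++)
        {
            for (int i = 0; i < width; i++)
            {
                int Y = y[j * yStride + i] - 16;
                int U = u[(j / 2) * uvStride + (i / 2)] - 128;
                int V = v[(j / 2) * uvStride + (i / 2)] - 128;
                int r = (298 * Y + 409 * V + 128) >> 8;
                int g = (298 * Y - 100 * U - 208 * V + 128) >> 8;
                int b = (298 * Y + 516 * U + 128) >> 8;
                // Format24bppRgb stores each pixel as B, G, R in memory.
                row[i * 3 + 2] = (byte)Math.Min(255, Math.Max(0, r));
                row[i * 3 + 1] = (byte)Math.Min(255, Math.Max(0, g));
                row[i * 3 + 0] = (byte)Math.Min(255, Math.Max(0, b));
            }
            Marshal.Copy(row, 0, data.Scan0 + j * data.Stride, data.Stride);
        }
        bmp.UnlockBits(data);
        return bmp;
    }
}

The yBytes, uBytes, and vBytes arrays copied out in the decoding code above can be passed in directly, together with _decodeFrame.uWidth and uHeight; the resulting Bitmap can then be assigned to a PictureBox.Image (disposing of the previous image to avoid leaking GDI handles).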

3. C++ Learning

This morning I also read a small part of the C++ book; progress is slow. I learned the basic input and output statements and have only reached page 28.

4. Summary

I received my graduation thesis topic today; how should I start writing? After finishing the RTP and H264 decoding work, I will begin preparing the thesis. In the evening, the email I sent to Mr. Zhou Zhaoxiao received a reply. I also summarized my current problems as follows: (1) The programming language is not something to worry about too much. For me no language has inherent advantages or disadvantages; what I need to do is try to solve the problems at hand in a language I am familiar with. (2) Looking for a job later will require not only professional knowledge but also soft skills. Reflecting on the small mistakes I made over the past few days is helpful for my own growth, and I need to keep reading and take notes on the books I read. (3) As for how to keep myself studying: a few days ago I felt annoyed because I wanted to do things better. Over the past few days I have been using the Pomodoro Technique to help myself settle down, and I have stuck with it for four days now, at about five pomodoros a day (low efficiency). After reading the book on efficient work, I was really struck by its analysis of the three causes of procrastination: (1) being forced by others to do something against your will (which feels true); (2) pressuring yourself to be perfect; (3) fearing mistakes and criticism. All three ring true. The unpleasantness also gradually fades when I am busy.

5. Much of what I have learned is actually superficial. Even if I manage to decode and play this video, I still feel I have improved only a little. Still, it is worth the time to keep going. I do not know whether what I wrote is helpful to you; I hope there will be plenty of discussion on this topic, and I welcome comments and suggestions. Thank you. Record a little every day and make a little progress every day.
