Multiple Implementations of Accessing a Hikvision Network Camera from Unity


The project requires real-time monitoring, inside Unity, of a Hikvision network smart camera. There are plenty of examples online, but most of them work through the official RTSP stream address. I'm recording that approach here mainly for my own reference, so feel free to skip it. (The plugin used here is called UMP; VLC for Unity, OpenCV, and others can do the same job.)

First, the format of Hikvision's RTSP URL:

rtsp://[username]:[password]@[ip]:[port]/[codec]/[channel]/[subtype]/av_stream
Description:
username: user name, e.g. admin.
password: password, e.g. 12345.
ip: device IP, e.g. 192.0.0.64.
port: port number, 554 by default; it can be omitted when using the default.
codec: one of h264, MPEG-4, mpeg4.
channel: channel number, starting at 1; e.g. channel 1 is ch1.
subtype: stream type; the main stream is main, the sub-stream is sub.


For example, to request the main stream of channel 1 of a Hikvision camera, use the following URLs.
Main stream:
rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream
rtsp://admin:12345@192.0.0.64:554/mpeg-4/ch1/main/av_stream


Sub-stream:
rtsp://admin:12345@192.0.0.64/mpeg4/ch1/sub/av_stream

rtsp://admin:12345@192.0.0.64/h264/ch1/sub/av_stream

The plugin is called UMP; I won't post a download address, a quick search will find it. This is what it looks like after import:


Then pick a scene and fill in your RTSP address.



And this is the result when running:



But in testing I found noticeable latency. Ugh... quite disappointing. So I kept digging and found that the Hikvision SDK has functions that deliver the video stream data through a callback, and it also ships a play library whose methods can decode the standard video stream into YV12 format. If you've dealt with video streams, this format won't be unfamiliar. Yes, newbie me plans to turn this data, frame by frame, into a Texture2D in Unity and display it. On to the steps.

First, download Hikvision's latest SDK package.

Address: http://www.hikvision.com/cn/download_more_570.html

Remember to download the play library along with it.

The first part of the implementation: the official SDK actually ships demos in both C# and C++, but anyone who has read that code knows that playing video in those demos requires one essential parameter, the window handle, an IntPtr. But in Unity, as you know, where would that handle come from? The UI is drawn by Unity itself; the whole application is a single window. Fortunately, the SDK also lets you pass this parameter as empty and supply a callback function to receive the video stream data instead.
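For reference, a minimal sketch of starting the preview this way, based on the CHCNetSDK C# wrapper that ships with the official demo (it assumes m_lUserID came from a successful NET_DVR_Login_V30 call, as in that demo):

public void StartRealPlay()
{
    CHCNetSDK.NET_DVR_PREVIEWINFO lpPreviewInfo = new CHCNetSDK.NET_DVR_PREVIEWINFO();
    lpPreviewInfo.hPlayWnd = IntPtr.Zero; // no window handle: Unity has no HWND to hand over
    lpPreviewInfo.lChannel = 1;           // channel number
    lpPreviewInfo.dwStreamType = 0;       // 0 = main stream, 1 = sub-stream
    lpPreviewInfo.dwLinkMode = 0;         // 0 = TCP
    lpPreviewInfo.bBlocked = true;        // blocking stream request

    // Hand the SDK a callback so the raw stream comes to us instead of a window
    m_fRealData = new CHCNetSDK.REALDATACALLBACK(RealDataCallBack);
    m_lRealHandle = CHCNetSDK.NET_DVR_RealPlay_V40(m_lUserID, ref lpPreviewInfo, m_fRealData, IntPtr.Zero);
    if (m_lRealHandle < 0)
    {
        debugInfo += ("NET_DVR_RealPlay_V40 failed, error code = " + CHCNetSDK.NET_DVR_GetLastError());
    }
}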



Remember, this callback function is what feeds the PlayM4 play library for decoding (it got a bit long, so rather than posting screenshots I'll paste the code; don't flame me for the mess (* ̄︶ ̄)).



/// <summary>
/// Real-time stream data callback
/// </summary>
/// <param name="lRealHandle">real-play handle</param>
/// <param name="dwDataType">data type</param>
/// <param name="pBuffer">data buffer</param>
/// <param name="dwBufSize">data buffer size</param>
/// <param name="pUser">user data</param>
public void RealDataCallBack(Int32 lRealHandle, UInt32 dwDataType, IntPtr pBuffer, UInt32 dwBufSize, IntPtr pUser)
{
    // For the data processing below, using delegates is recommended
    MyDebugInfo AlarmInfo = new MyDebugInfo(DebugInfo);
    switch (dwDataType)
    {
        case CHCNetSDK.NET_DVR_SYSHEAD: // system header
            if (dwBufSize > 0)
            {
                if (m_lPort >= 0)
                {
                    return; // the same stream only needs to be opened once
                }
                debugInfo += ("System header data: " + pBuffer + " data length: " + dwBufSize);

                // Get an unused port from the play library
                if (!PlayCtrl.PlayM4_GetPort(ref m_lPort))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("PlayM4_GetPort failed, error code = " + error_num);
                    break;
                }

                // Set the stream open mode: real-time stream
                if (!PlayCtrl.PlayM4_SetStreamOpenMode(m_lPort, PlayCtrl.STREAME_REALTIME))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("Set STREAME_REALTIME mode failed, error code = " + error_num);
                }

                // Open the stream and feed it the header data
                if (!PlayCtrl.PlayM4_OpenStream(m_lPort, pBuffer, dwBufSize, 1024 * 1024))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("PlayM4_OpenStream failed, error code = " + error_num);
                    break;
                }

                // Set the number of display buffers
                if (!PlayCtrl.PlayM4_SetDisplayBuf(m_lPort, 15))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("PlayM4_SetDisplayBuf failed, error code = " + error_num);
                }

                // Set the display mode (off-screen play)
                if (!PlayCtrl.PlayM4_SetOverlayMode(m_lPort, 0, 0))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("PlayM4_SetOverlayMode failed, error code = " + error_num);
                }

                // Set the decode callback to receive the decoded raw audio/video data
                m_fDisplayFun = new PlayCtrl.DECCBFUN(DecCallbackFUN);
                if (!PlayCtrl.PlayM4_SetDecCallBack(m_lPort, m_fDisplayFun))
                {
                    debugInfo += ("PlayM4_SetDecCallBack failed");
                }

                // Start decoding
                if (!PlayCtrl.PlayM4_Play(m_lPort, IntPtr.Zero))
                {
                    error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                    debugInfo += ("PlayM4_Play failed, error code = " + error_num);
                    break;
                }
            }
            break;
        case CHCNetSDK.NET_DVR_STREAMDATA: // video stream data
            if (dwBufSize > 0 && m_lPort != -1)
            {
                for (int i = 0; i < 999; i++)
                {
                    // Feed the stream data to the decoder; retry briefly if the buffer is full
                    if (!PlayCtrl.PlayM4_InputData(m_lPort, pBuffer, dwBufSize))
                    {
                        error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                        debugInfo += ("PlayM4_InputData failed, error code = " + error_num);
                        Thread.Sleep(10);
                    }
                    else
                    {
                        break;
                    }
                }
            }
            break;
        default:
            // Feed other data (e.g. audio) the same way
            if (dwBufSize > 0 && m_lPort != -1)
            {
                for (int i = 0; i < 999; i++)
                {
                    if (!PlayCtrl.PlayM4_InputData(m_lPort, pBuffer, dwBufSize))
                    {
                        error_num = PlayCtrl.PlayM4_GetLastError(m_lPort);
                        debugInfo += ("PlayM4_InputData failed, error code = " + error_num);
                        Thread.Sleep(10);
                    }
                    else
                    {
                        break;
                    }
                }
            }
            break;
    }
}


Then register the decode callback with the play library and process the data yourself.

Look at the picture



The data here is each frame already decoded by the play library, along with frame info such as width, height, and frame number, which is very convenient. We just need to take each frame, convert it to the Texture2D data we need, and display it frame by frame on our UI. Now the key point: as mentioned, the data coming out of the library is in YV12 format, while the Texture2D we need is RGB. Both are image storage formats; I'm too lazy to explain, so here's a link for anyone interested o(∩_∩)o: https://www.cnblogs.com/samaritan/p/YUV.html
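A sketch of what that decode callback can look like, again based on the official demo's PlayCtrl wrapper (the m_yv12Buffer, m_width, m_height, and m_newFrame fields are my own names for handing the frame to Unity's main thread; Marshal is System.Runtime.InteropServices.Marshal):

// Called by the play library once per decoded frame
public void DecCallbackFUN(int nPort, IntPtr pBuf, int nSize, ref PlayCtrl.FRAME_INFO pFrameInfo, int nReserved1, int nReserved2)
{
    if (pFrameInfo.nType == 3) // 3 = T_YV12, a decoded video frame
    {
        if (m_yv12Buffer == null || m_yv12Buffer.Length != nSize)
        {
            m_yv12Buffer = new byte[nSize];
        }
        Marshal.Copy(pBuf, m_yv12Buffer, 0, nSize); // copy the YV12 frame out of native memory
        m_width = pFrameInfo.nWidth;
        m_height = pFrameInfo.nHeight;
        m_newFrame = true; // Update() on the main thread will pick this up
    }
}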

Now the conversion. The data volume is large: a single 1280*720 frame is about 1.4 MB of YV12 data (and about 2.7 MB once expanded to RGB). If you brute-force the conversion with per-byte copying and replacement, well, I tried, and got fewer than 10 frames per second; it looked like a GIF. So I went searching online again (there really is everything out there). Other bloggers have benchmarked five or six methods: straight conversion in code (optimized with a lookup table, which didn't actually help much), and, clearly more efficient than the rest, implementations using OpenCV and FFmpeg. I tried both and finally chose FFmpeg, because its conversion algorithm performed best (so others said, and my tests agreed).

To use FFmpeg's format-conversion routines you need two libraries: avutil-55.dll and swscale-4.dll.

Then, of course, write a conversion method that calls these two libraries and export it in a DLL that Unity can call.

Below is the conversion code inside that DLL. It's fairly rough; bear with me.



#include <stdio.h>
#include <string.h>
extern "C"
{
#include <libavutil/imgutils.h>
#include <libavutil/opt.h>
#include <libswscale/swscale.h>
}

// The TransformResolution struct and the rst status buffer are referenced but
// not shown in the original post; minimal definitions are filled in here so
// the code compiles.
struct TransformResolution
{
    int src_width, src_height;
    int dst_width, dst_height;
};
static char rst[256]; // status message handed back to Unity

AVPixelFormat src_pixfmt;
AVPixelFormat dst_pixfmt;

AVPicture src_frameinfo;
AVPicture dst_frameinfo;
struct SwsContext* img_convert_ctx;
TransformResolution* tp;

FFMPEG_FOR_UNITY_API BOOL StartConvert_Updated(bool start_or_end, int src_width, int src_height, int dst_width, int dst_height, int src_type, int dst_type)
{
    if (start_or_end)
    {
        src_pixfmt = (AVPixelFormat)src_type; // 0 = AV_PIX_FMT_YUV420P
        dst_pixfmt = (AVPixelFormat)dst_type; // 2 = AV_PIX_FMT_RGB24

        int ret = 0;
        tp = new TransformResolution();
        tp->src_width = src_width;
        tp->src_height = src_height;
        tp->dst_width = dst_width;
        tp->dst_height = dst_height;
        ret = av_image_alloc(src_frameinfo.data, src_frameinfo.linesize, src_width, src_height, src_pixfmt, 1);
        if (ret < 0) {
            printf("Could not allocate source image\n");
            strcpy(rst, "Could not allocate source image\n");
            return false;
        }
        ret = av_image_alloc(dst_frameinfo.data, dst_frameinfo.linesize, dst_width, dst_height, dst_pixfmt, 1);
        if (ret < 0) {
            printf("Could not allocate destination image\n");
            strcpy(rst, "Could not allocate destination image\n");
            return false;
        }

        // Init method 1: allocate a context and set options by hand
        img_convert_ctx = sws_alloc_context();
        // Show AVOption values (debug only)
        av_opt_show2(img_convert_ctx, stdout, AV_OPT_FLAG_VIDEO_PARAM, 0);
        // Set values
        av_opt_set_int(img_convert_ctx, "sws_flags", SWS_BICUBIC | SWS_PRINT_INFO, 0);
        av_opt_set_int(img_convert_ctx, "srcw", src_width, 0);
        av_opt_set_int(img_convert_ctx, "srch", src_height, 0);
        av_opt_set_int(img_convert_ctx, "src_format", src_pixfmt, 0);
        // '0' for MPEG/limited range (Y: 16-235); '1' for JPEG/full range (Y: 0-255)
        av_opt_set_int(img_convert_ctx, "src_range", 1, 0);
        av_opt_set_int(img_convert_ctx, "dstw", dst_width, 0);
        av_opt_set_int(img_convert_ctx, "dsth", dst_height, 0);
        av_opt_set_int(img_convert_ctx, "dst_format", dst_pixfmt, 0);
        av_opt_set_int(img_convert_ctx, "dst_range", 1, 0);
        sws_init_context(img_convert_ctx, NULL, NULL);
        return true;
    }
    else
    {
        sws_freeContext(img_convert_ctx);
        av_freep(&src_frameinfo.data[0]);
        av_freep(&dst_frameinfo.data[0]);
        delete tp;
        tp = NULL;
        return true;
    }
}

The StartConvert_Updated function mainly does initialization; anyone who knows C++ will recognize the memory allocation and so on.


And then the conversion function.

// Updated conversion function: convert one frame from src_pixfmt to dst_pixfmt
FFMPEG_FOR_UNITY_API BOOL YV12ToRGB_Updated(uint8_t* pDst, uint8_t* pSrc)
{
    if (!pDst)
    {
        strcpy(rst, "pDst is null");
        return false;
    }
    if (!tp)
    {
        return false;
    }

    // Copy the input buffer into the source frame's planes
    switch (src_pixfmt) {
    case AV_PIX_FMT_GRAY8: {
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height);
        break;
    }
    case AV_PIX_FMT_YUV420P: {
        // YV12 memory layout is Y, then V, then U, so the U plane sits at offset w*h*5/4
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height);                                              // Y
        memcpy(src_frameinfo.data[1], pSrc + tp->src_width * tp->src_height * 5 / 4, tp->src_width * tp->src_height / 4); // U
        memcpy(src_frameinfo.data[2], pSrc + tp->src_width * tp->src_height, tp->src_width * tp->src_height / 4);         // V
        break;
    }
    case AV_PIX_FMT_YUV422P: {
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height);                                              // Y
        memcpy(src_frameinfo.data[1], pSrc + tp->src_width * tp->src_height, tp->src_width * tp->src_height / 2);         // U
        memcpy(src_frameinfo.data[2], pSrc + tp->src_width * tp->src_height * 3 / 2, tp->src_width * tp->src_height / 2); // V
        break;
    }
    case AV_PIX_FMT_YUV444P: {
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height);                                          // Y
        memcpy(src_frameinfo.data[1], pSrc + tp->src_width * tp->src_height, tp->src_width * tp->src_height);         // U
        memcpy(src_frameinfo.data[2], pSrc + tp->src_width * tp->src_height * 2, tp->src_width * tp->src_height);     // V
        break;
    }
    case AV_PIX_FMT_YUYV422: {
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height * 2); // packed
        break;
    }
    case AV_PIX_FMT_RGB24: {
        memcpy(src_frameinfo.data[0], pSrc, tp->src_width * tp->src_height * 3); // packed
        break;
    }
    default: {
        printf("Unsupported input pixel format.\n");
        break;
    }
    }

    sws_scale(img_convert_ctx, src_frameinfo.data, src_frameinfo.linesize, 0, tp->src_height, dst_frameinfo.data, dst_frameinfo.linesize);

    // Copy the converted frame out to pDst, plane by plane
    switch (dst_pixfmt) {
    case AV_PIX_FMT_GRAY8: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height);
        break;
    }
    case AV_PIX_FMT_YUV420P: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height);                                              // Y
        memcpy(pDst + tp->dst_width * tp->dst_height, dst_frameinfo.data[1], tp->dst_width * tp->dst_height / 4);         // U
        memcpy(pDst + tp->dst_width * tp->dst_height * 5 / 4, dst_frameinfo.data[2], tp->dst_width * tp->dst_height / 4); // V
        break;
    }
    case AV_PIX_FMT_YUV422P: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height);                                              // Y
        memcpy(pDst + tp->dst_width * tp->dst_height, dst_frameinfo.data[1], tp->dst_width * tp->dst_height / 2);         // U
        memcpy(pDst + tp->dst_width * tp->dst_height * 3 / 2, dst_frameinfo.data[2], tp->dst_width * tp->dst_height / 2); // V
        break;
    }
    case AV_PIX_FMT_YUV444P: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height);                                          // Y
        memcpy(pDst + tp->dst_width * tp->dst_height, dst_frameinfo.data[1], tp->dst_width * tp->dst_height);         // U
        memcpy(pDst + tp->dst_width * tp->dst_height * 2, dst_frameinfo.data[2], tp->dst_width * tp->dst_height);     // V
        break;
    }
    case AV_PIX_FMT_YUYV422: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height * 2); // packed
        break;
    }
    case AV_PIX_FMT_RGB24: {
        memcpy(pDst, dst_frameinfo.data[0], tp->dst_width * tp->dst_height * 3); // packed
        strcpy(rst, "conversion succeeded");
        return true;
    }
    default: {
        printf("Unsupported output pixel format.\n");
        break;
    }
    }
    strcpy(rst, "conversion failed");
    return false;
}

And then, the call in Unity:


This function is called to convert the YV12 data saved by the earlier decode callback into RGB data.
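A minimal sketch of those calls, assuming the exported names from the C++ code above (the DLL name FFmpegForUnity and the buffer fields are my own; 0 and 2 are the AV_PIX_FMT_YUV420P and AV_PIX_FMT_RGB24 enum values used during initialization):

[DllImport("FFmpegForUnity")]
private static extern bool StartConvert_Updated(bool start_or_end, int src_width, int src_height,
                                                int dst_width, int dst_height, int src_type, int dst_type);
[DllImport("FFmpegForUnity")]
private static extern bool YV12ToRGB_Updated(byte[] pDst, byte[] pSrc);

private byte[] m_rgbBuffer;

// Convert the YV12 buffer filled by the decode callback into RGB24
private void ConvertFrame()
{
    if (m_rgbBuffer == null)
    {
        m_rgbBuffer = new byte[m_width * m_height * 3];
        StartConvert_Updated(true, m_width, m_height, m_width, m_height, 0, 2); // init once
    }
    YV12ToRGB_Updated(m_rgbBuffer, m_yv12Buffer);
}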



Then, um, create the Texture2D. I'll paste it even though the code is ugly.
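A minimal sketch of this step, reusing the field names from the sketches above (rawImage is a UnityEngine.UI.RawImage assigned in the Inspector):

private Texture2D m_texture;

void Update()
{
    if (!m_newFrame) return;
    m_newFrame = false;
    ConvertFrame(); // YV12 -> RGB24
    if (m_texture == null)
    {
        m_texture = new Texture2D(m_width, m_height, TextureFormat.RGB24, false);
        rawImage.texture = m_texture;
    }
    m_texture.LoadRawTextureData(m_rgbBuffer); // upload the RGB24 bytes
    m_texture.Apply();
}

Depending on the source, the image may come out vertically flipped, since Unity textures put the origin at the bottom left; flipping the UV rect on the RawImage fixes that.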




Finally, the result of a run (the remaining functions are simple, so I won't introduce them in detail).

Let's try a GIF o(* ̄︶ ̄*)o



We're all learning together; comments with better suggestions or implementations are welcome. Thank you.
