Android Audio and Video: Going Deep into FFmpeg to Implement RTMP-Based Streaming (with Source Code Download)
Source code address:
https://github.com/979451341/Rtmp
1. Configure the RTMP Server
I won't go through the server setup here; instead, here are two posts, one for Mac and one for Windows.
Setting up an RTMP server on Mac:
https://www.jianshu.com/p/6fcec3b9d644
Setting up an RTMP server on Windows (crtmpserver and nginx):
https://www.jianshu.com/p/c71cc39f72ec
2. About the IP address of the push URL
I started a hotspot on my phone and connected my computer to it. The push address of this RTMP server is localhost, and the server runs on the computer. On the computer itself, localhost resolves to 127.0.0.1, but an external device such as the phone cannot use localhost; it has to use the computer's IP address within that hotspot, i.e. its LAN IP. 127.0.0.1 only refers to the current device itself. To find the computer's LAN IP, go on the phone to Settings -- More -- Mobile network sharing -- Portable WLAN hotspot -- Manage device list.
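For example (the IP address, port, and application name below are made-up values; substitute your own), the push URL used by the app would look like the first line rather than a localhost address:
// Hypothetical example: replace 192.168.43.100 with your computer's LAN IP in the hotspot
const char *outUrl = "rtmp://192.168.43.100:1935/live/test";   // reachable from the phone
// const char *outUrl = "rtmp://localhost:1935/live/test";     // only works on the computer itself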
3. Now let's talk about the code
Register the components. If the second call is missing, network resources such as a URL cannot be accessed:
av_register_all();
avformat_network_init();
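The snippets that follow operate on an input context ictx that the article does not show being opened. Assuming inUrl points at the input file, the step that produces ictx would look roughly like this (a sketch mirroring the error-handling style of the snippets below):
AVFormatContext *ictx = NULL;
// Open the input and read its stream information
ret = avformat_open_input(&ictx, inUrl, 0, 0);
if (ret < 0) {
    avError(ret);
    throw ret;
}
ret = avformat_find_stream_info(ictx, 0);
if (ret < 0) {
    avError(ret);
    throw ret;
}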
Dump the input video's information, then create the output context for the output URL:
av_dump_format(ictx, 0, inUrl, 0);
ret = avformat_alloc_output_context2(&octx, NULL, "flv", outUrl);
if (ret < 0) {
    avError(ret);
    throw ret;
}
Copy the input streams into the output context we just created:
for (i = 0; i < ictx->nb_streams; i++) {
    // Get the input stream
    AVStream *in_stream = ictx->streams[i];
    // Add an audio/video stream to the output context (initialize a stream container)
    AVStream *out_stream = avformat_new_stream(octx, in_stream->codec->codec);
    if (!out_stream) {
        printf("Failed to add audio/video stream\n");
        ret = AVERROR_UNKNOWN;
    }
    if (octx->oformat->flags & AVFMT_GLOBALHEADER) {
        out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
    }
    ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
    if (ret < 0) {
        printf("copy codec context failed\n");
    }
    out_stream->codecpar->codec_tag = 0;
    // out_stream->codec->codec_tag = 0;
}
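A side note: in_stream->codec and CODEC_FLAG_GLOBAL_HEADER are deprecated in newer FFmpeg releases. This is not what the article's source does, but as a sketch, the same stream-copy loop written against the codecpar-only API (FFmpeg 3.1+) would look roughly like this:
for (i = 0; i < ictx->nb_streams; i++) {
    AVStream *in_stream = ictx->streams[i];
    // For pure stream copy no encoder is needed, so the stream is created without a codec
    AVStream *out_stream = avformat_new_stream(octx, NULL);
    if (!out_stream) {
        ret = AVERROR_UNKNOWN;
        break;
    }
    // Copy the codec parameters and let the muxer choose the codec tag
    ret = avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
    if (ret < 0) {
        break;
    }
    out_stream->codecpar->codec_tag = 0;
}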
Open the output URL and write the header data:
// Open the output IO
ret = avio_open(&octx->pb, outUrl, AVIO_FLAG_WRITE);
if (ret < 0) {
    avError(ret);
    throw ret;
}
logd("avio_open success!");
// Write the header information
ret = avformat_write_header(octx, 0);
if (ret < 0) {
    avError(ret);
    throw ret;
}
Then start the loop that reads and pushes the stream data. First read one frame of data:
ret = av_read_frame(ictx, &pkt);
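The article shows the body of this loop piece by piece. As a rough skeleton (the break condition and the frame_index bookkeeping here are simplified assumptions, not the article's exact code), the overall shape is:
while (true) {
    ret = av_read_frame(ictx, &pkt);
    if (ret < 0) {
        break;                                   // end of input or read error
    }
    // 1. fill in pts/dts/duration when the packet has none (next snippet)
    // 2. sleep so that pushing keeps pace with real playback time
    // 3. rescale the timestamps into the output stream's time base
    if (pkt.stream_index == videoindex) {
        frame_index++;
    }
    ret = av_interleaved_write_frame(octx, &pkt);
    av_packet_unref(&pkt);
}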
Then fill in the timing parameters for this frame. If the packet carries no timestamp, we configure one ourselves. Two concepts come up here: DTS (decoding timestamp) and PTS (presentation timestamp) are the timestamps, relative to the SCR (System Clock Reference), at which the decoder should decode and display the frame, respectively. The SCR can be understood as the time at which the decoder should start reading data from the disk.
if (pkt.pts == AV_NOPTS_VALUE) {
    // AVRational time_base: the time base; with it, PTS and DTS can be converted into real time
    AVRational time_base1 = ictx->streams[videoindex]->time_base;
    int64_t calc_duration =
            (double) AV_TIME_BASE / av_q2d(ictx->streams[videoindex]->r_frame_rate);
    // Fill in the timing parameters
    pkt.pts = (double) (frame_index * calc_duration) /
              (double) (av_q2d(time_base1) * AV_TIME_BASE);
    pkt.dts = pkt.pts;
    pkt.duration =
            (double) calc_duration / (double) (av_q2d(time_base1) * AV_TIME_BASE);
}
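To make the formula concrete with made-up numbers: suppose the stream's time_base is 1/1000 and its frame rate is 25 fps. Then calc_duration = AV_TIME_BASE / 25 = 40000, and for frame_index = 100 the packet gets pkt.pts = 100 * 40000 / (0.001 * 1000000) = 4000, i.e. 4000 ticks of 1/1000 s = 4 seconds, which is exactly where the 100th frame of a 25 fps video belongs; its duration comes out as 40000 / 1000 = 40 ticks (40 ms).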
Adjust the playback pace. We record the current time once before we start reading the video, and on each iteration of the push loop we take the current time again; the difference between the two is how long the video should have been playing by now. If the packets are going out faster than that, the thread sleeps for the difference between pkt.dts (converted to real time) and the elapsed time.
if (pkt.stream_index == videoindex) {
    AVRational time_base = ictx->streams[videoindex]->time_base;
    AVRational time_base_q = {1, AV_TIME_BASE};
    // Calculate the video playback time
    int64_t pts_time = av_rescale_q(pkt.dts, time_base, time_base_q);
    // Calculate the actual elapsed playback time
    int64_t now_time = av_gettime() - start_time;
    AVRational avr = ictx->streams[videoindex]->time_base;
    cout << avr.num << " " << avr.den << " " << pkt.dts << " " << pkt.pts << " "
         << pts_time << endl;
    if (pts_time > now_time) {
        // Sleep for a while (to keep the pushed stream in sync with real time)
        av_usleep((unsigned int) (pts_time - now_time));
    }
}
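Continuing the made-up numbers above: if the current packet's dts rescaled to microseconds (pts_time) is 4,000,000 and only 3,950,000 µs have passed since start_time (now_time), the thread sleeps for 50,000 µs, i.e. 50 ms, before sending the packet; if now_time is already larger, no sleep happens and the packet is sent immediately.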
After the delay has been handled, the timestamps recorded in this frame need to be recomputed, converting them from the input stream's time base to the output stream's:
// After calculating the delay, set the timestamps again
pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base,
                           (AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base,
                           (AVRounding) (AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX));
pkt.duration = (int) av_rescale_q(pkt.duration, in_stream->time_base,
                                  out_stream->time_base);
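As a concrete example of this rescale (again with made-up numbers, separate from the earlier ones): if the input stream's time_base were 1/90000, as is common for MPEG-TS, and the FLV output stream's time_base were 1/1000, a pts of 360000 in input units becomes 360000 * (1/90000) / (1/1000) = 4000 in output units, still the same 4 seconds of media time.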
The frame's timing parameters are reported back through a callback. The callback interface is instantiated in MainActivity and displays the playback time:
int res = FFmpegHandle.setCallback(new PushCallback() {
    @Override
    public void videoCallback(final long pts, final long dts, final long duration, final long index) {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                if (pts == -1) {
                    tvPushInfo.setText("End of playback");
                    return;
                }
                tvPushInfo.setText("Playing time: " + dts / 1000 + " seconds");
            }
        });
    }
});
On the C side, setCallback stores a global reference to the interface instance and looks up the videoCallback method ID; it also calls the method once to initialize the time display:
// Convert to a global reference
pushCallback = env->NewGlobalRef(pushCallback1);
if (pushCallback == NULL) {
    return -3;
}
cls = env->GetObjectClass(pushCallback);
if (cls == NULL) {
    return -1;
}
mid = env->GetMethodID(cls, "videoCallback", "(JJJJ)V");
if (mid == NULL) {
    return -2;
}
env->CallVoidMethod(pushCallback, mid, (jlong) 0, (jlong) 0, (jlong) 0, (jlong) 0);
Back in the push loop, the videoCallback function is then called for each frame that is pushed:
env->CallVoidMethod(pushCallback, mid, (jlong) pts, (jlong) dts, (jlong) duration,
                    (jlong) index);
Then the packet is written to the output URL and its data is released:
ret = av_interleaved_write_frame(octx, &pkt);
av_packet_unref(&pkt);
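One step the snippets do not show: once av_read_frame stops returning packets and the loop exits, a muxing loop normally writes the trailer before the contexts are torn down. Whether the article's source does this is not shown; a one-line sketch of that step:
// After the push loop has finished
av_write_trailer(octx);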
Free up resources
// Close the output context; this is critical
if (octx != NULL)
    avio_close(octx->pb);
// Release the output encapsulation context
if (octx != NULL)
    avformat_free_context(octx);
// Close the input context
if (ictx != NULL)
    avformat_close_input(&ictx);
octx = NULL;
ictx = NULL;
env->ReleaseStringUTFChars(path_, path);
env->ReleaseStringUTFChars(outUrl_, outUrl);
Finally, the callback is invoked one last time with -1 to report that playback has ended:
callback(env, -1, -1, -1, -1);
4. About receiving the pushed stream data
I use VLC here; it has versions for both Mac and Windows. Go to File -> Open Network and enter the output URL from before. Note that you must start pushing the stream from the app first, and only then open the URL in VLC.
The effect is as follows
Reference article
https://www.jianshu.com/p/dcac5da8f1da
This blogger really understands streaming. If you want to deepen your understanding of push streaming, check out his blog.