In the real-time audio and video domain, UDP is king
On the Internet, transport-layer schemes for real-time audio and video interaction come in two kinds: TCP-based (e.g. RTMP) and UDP-based (e.g. RTP). The TCP protocol can provide a relatively reliable guarantee for data transmission between two endpoints, which it achieves through connection handshakes, acknowledgments, and retransmission of lost segments.
Reposted from: http://blog.csdn.net/lixiaowei16/article/details/53407010
Audio and video synchronization bears on the most intuitive part of the user experience of multimedia products; it is the most basic quality guarantee for transmitting media data and for rendering playback. If the two streams drift out of sync, the problem is immediately noticeable to the user.
MediaItem must be serializable, that is, the class must implement the Serializable interface, otherwise an error results. In VideoPager, the list is passed into the Intent as follows; the objects are serialized and handed over to SystemVideoPlayer:

Intent intent = new Intent(context, SystemVideoPlayer.class);
Bundle bundle = new Bundle();
// mediaItems is the ArrayList<MediaItem> held by VideoPager
bundle.putSerializable("Mediallist", (ArrayList<MediaItem>) mediaItems);
intent.putExtras(bundle);
context.startActivity(intent);

Get the list in the SystemVideoPlayer class: the list data from VideoPager, which contains the information for the media items, is read back with getIntent().getSerializableExtra("Mediallist") and cast back to ArrayList<MediaItem>.
playback state, so as to achieve synchronization. Considering that people are more sensitive to sound, this design selects the audio stream as the master stream and the video stream as the slave stream. The sending end encodes the audio and video data captured through DirectShow with the AMR-WB and H.264 encoding modules.
constructed via RTCPSender::BuildSR(ctx), where ctx contains the NTP time of the current moment, used as the NTP time in the SR message [1]. Next we need to calculate the corresponding RTP timestamp for this moment, that is, the timestamp a frame would get if it were sampled right now:
rtp_timestamp = start_timestamp_ + last_rtp_timestamp_ +
    (clock_->TimeInMilliseconds() - last_frame_capture_time_ms_) *
    (ctx.feedback_state_.frequency_hz / 1000);
At this point, the NTP time and the RTP timestamp have both been worked out, and the SR can report them as a pair that maps the media clock onto wall-clock time.
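To show what that (NTP time, RTP timestamp) pair is for, here is a minimal receiver-side sketch of the clock mapping; the struct and names are illustrative, not WebRTC's actual types:

#include <cstdint>

// One (NTP time, RTP timestamp) pair taken from a received SR.
struct SenderReport {
    double ntp_ms;           // NTP time from the SR, in milliseconds
    uint32_t rtp_timestamp;  // RTP timestamp from the same SR
    int frequency_hz;        // media clock rate, e.g. 90000 for video
};

// Map a later RTP timestamp onto the sender's NTP clock using the
// linear relation established by the SR.
double RtpToNtpMs(const SenderReport& sr, uint32_t rtp_ts) {
    // Unsigned subtraction yields the forward distance even if the
    // 32-bit RTP timestamp has wrapped around in between.
    uint32_t diff = rtp_ts - sr.rtp_timestamp;
    return sr.ntp_ms + static_cast<double>(diff) * 1000.0 / sr.frequency_hz;
}

Comparing the NTP times computed this way for the audio and video streams is what lets a receiver line the two streams up.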
timestamps are not considered at all; this is the practice of beginners. Sync based on the audio timestamp:
That is, sync the video to the audio. This way the sound plays back smoothly. Application scenario: most video players. Sync based on the video timestamp:
That is, sync the audio to the video.
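To make the "sync the video to the audio" strategy concrete, here is a minimal sketch; the names and the threshold are illustrative assumptions, not any particular player's code:

#include <cstdint>

// Decide what to do with the next decoded video frame, using the audio
// playback position as the master clock (all times in milliseconds).
enum class VideoAction { kDropFrame, kRenderNow, kWait };

VideoAction SyncVideoToAudio(int64_t video_pts_ms, int64_t audio_clock_ms) {
    const int64_t kThresholdMs = 40;  // roughly one frame interval at 25 fps
    int64_t diff = video_pts_ms - audio_clock_ms;
    if (diff < -kThresholdMs) return VideoAction::kDropFrame;  // video is late
    if (diff > kThresholdMs) return VideoAction::kWait;        // video is early
    return VideoAction::kRenderNow;                            // close enough
}

Because the audio is never touched, the sound stays smooth, which is why most players choose this strategy.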
insert an RTP header containing the payload type identifier, sequence number, timestamp, and synchronization source identifier (SSRC), and then transmit the RTP packet over the IP network using a datagram socket; this improves continuous playback and audio/video synchronization. The Real-time Transport Control Protocol (RTCP) is used to control RTP. The most basic function of RTCP is to use sender reports and receiver reports to convey reception quality feedback.
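For reference, here is a sketch of the fields of the fixed RTP header just described (RFC 3550, section 5.1); the struct is illustrative, since on the wire these fields are bit-packed in network byte order:

#include <cstdint>

struct RtpHeader {
    uint8_t version;           // always 2
    bool padding;              // P bit: padding octets at the end
    bool extension;            // X bit: header extension present
    uint8_t csrc_count;        // CC: number of CSRC entries that follow
    bool marker;               // M bit, e.g. marks the last packet of a frame
    uint8_t payload_type;      // PT: identifies the payload format/codec
    uint16_t sequence_number;  // +1 per packet; lets receivers detect loss
    uint32_t timestamp;        // sampling instant, in media clock units
    uint32_t ssrc;             // synchronization source identifier
};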
SetReceiveMode() method of the RTPSession class:
A) RECEIVEMODE_ALL: the default receive mode; all incoming RTP datagrams are accepted;
B) RECEIVEMODE_IGNORESOME: all incoming RTP datagrams are accepted except those from certain specific senders; the list of rejected senders is managed with the AddToIgnoreList(), DeleteFromIgnoreList() and ClearIgnoreList() methods;
C) RECEIVEMODE_ACCEPTSOME: all incoming RTP datagrams are rejected except those from certain specific senders; the list of accepted senders is managed with the AddToAcceptList(), DeleteFromAcceptList() and ClearAcceptList() methods.
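A minimal usage sketch, assuming JRTPLIB 3.x, where these modes are exposed as RTPTransmitter::AcceptAll, RTPTransmitter::IgnoreSome and RTPTransmitter::AcceptSome; the address values are made up:

#include <cstdint>
#include <jrtplib3/rtpipv4address.h>
#include <jrtplib3/rtpsession.h>
#include <jrtplib3/rtptransmitter.h>

using namespace jrtplib;

// Accept everyone except one noisy sender.
void IgnoreNoisySender(RTPSession &session, uint32_t ip, uint16_t port) {
    session.SetReceiveMode(RTPTransmitter::IgnoreSome);
    RTPIPv4Address addr(ip, port);
    session.AddToIgnoreList(addr);
    // Later: session.DeleteFromIgnoreList(addr); or session.ClearIgnoreList();
}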
This article mainly introduces a summary of the audio and video media playback elements in HTML5, which is basic knowledge for multimedia development on web pages. For reference: an audio or video codec is a set of algorithms for encoding and decoding a specific audio or video stream.
On the sender side:
The same timestamp (the system time) is stamped on audio and video frames captured at the same point in time.
On the receiver side:
Two queues are maintained, one for audio and one for video, which buffer the incoming audio and video frames.
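A minimal sketch of that receiver-side structure; the types and names are illustrative assumptions:

#include <cstdint>
#include <queue>
#include <vector>

struct Frame {
    int64_t timestamp_ms;       // sender system time stamped at capture
    std::vector<uint8_t> data;  // encoded payload
};

// One queue per media type; frames from the two queues are matched up
// by the shared sender timestamp.
std::queue<Frame> audio_queue;
std::queue<Frame> video_queue;

// Discard video frames that are already older than the audio frame
// currently being played, so both streams track the capture clock.
void AlignVideoToAudio(int64_t playing_audio_ts_ms) {
    while (!video_queue.empty() &&
           video_queue.front().timestamp_ms < playing_audio_ts_ms) {
        video_queue.pop();
    }
}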
, the receiver side decodes well, with no mosaic artifacts.
3.2 Adding the QoS module introduces some delay and stutter, because packet retransmission takes time.
3.3 The scheme above is the concrete NACK implementation inside WebRTC.
The scheme was provided by Peng Zuyuan, a senior audio and video expert, with some adjustments.
with the philosophy of "professional skill", the only purpose of the design is to minimize the impact of jitter on digital audio and to pursue the best sound quality. It is therefore likely to be the future direction of digital hi-fi.
So what is jitter? Let me quote the explanation from post #255 in the thread.
There is a lot of noise around the term, so let me describe jitter in digital audio transmission. We think of
Taking the sending and receiving of video packets in the WebRTC source code as an example, this article analyzes the implementation of the NACK packet retransmission mechanism in depth. The main contents include: negotiating NACK in the SDP, packet-loss detection at the receiving end, NACK message construction, sending, receiving and parsing, and RTP packet retransmission. These are discussed in detail below.
I. Negotiating NACK in the SDP
NACK is used as an RTP-layer feedback mechanism, negotiated through the rtcp-fb attribute in the SDP.
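As an illustration of the loss-detection step at the receiving end, here is a simplified sketch that tracks RTP sequence numbers and collects gaps to report in a NACK; this is a toy version, not WebRTC's actual code:

#include <cstdint>
#include <set>

// Track the highest sequence number seen so far; skipped numbers are
// recorded as missing and become candidates for a NACK message.
class NackTracker {
 public:
    void OnPacket(uint16_t seq) {  // call for every received RTP packet
        if (!initialized_) { last_seq_ = seq; initialized_ = true; return; }
        // Wrap-safe forward distance between 16-bit sequence numbers.
        uint16_t ahead = static_cast<uint16_t>(seq - last_seq_);
        if (ahead != 0 && ahead < 0x8000) {
            for (uint16_t s = static_cast<uint16_t>(last_seq_ + 1); s != seq; ++s)
                missing_.insert(s);
            last_seq_ = seq;
        }
        missing_.erase(seq);  // a late or retransmitted packet fills its gap
    }
    const std::set<uint16_t>& missing() const { return missing_; }

 private:
    bool initialized_ = false;
    uint16_t last_seq_ = 0;
    std::set<uint16_t> missing_;
};

The sequence numbers left in missing() are what the receiver packs into the NACK feedback message for the sender to retransmit.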
to low-end mobile phones with poor hardware performance.
(7) Installation Package Size
Minimize the installation package size, for example by trimming unnecessary features.
Video call test points:
(1) Definition
Mosaic artifacts should be avoided or reduced as much as possible, and blur should be reduced in areas where the image changes.
(2) Smoothness
The actual frame rate must not be too low; otherwise the picture will look choppy.
(3) Latency
On the Internet, multimedia services such as streaming media, video conferencing and video-on-demand are becoming an important part of information transmission. The point-to-point unicast mode cannot adapt to the transmission characteristics of this kind of service (a single sender with multiple receivers), because the server must send each receiver its own separate copy of the stream.
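Multicast addresses exactly this: the sender transmits once and the network replicates the packets toward every receiver that has joined the group. Here is a minimal receiver-side sketch with the BSD sockets API; the group address 239.0.0.1 and port 5004 are made-up examples:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    // Bind to the port the multicast traffic is sent to.
    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(5004);
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    // Join the group: from here on the kernel delivers group traffic.
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[1500];
    recv(fd, buf, sizeof(buf), 0);  // receive one datagram from the group
    close(fd);
    return 0;
}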
Computers and networks have greatly changed our world since they came into use. The intelligence of chips and hardware devices is being applied and developed in more and more fields, and protocols have changed along with them. Below we introduce the I2C bus protocol used in audio and video devices.
I2C bus Definition
The I2C (Inter-Integrated Circuit) bus is a two-wire serial bus developed by Philips, consisting of a serial data line (SDA) and a serial clock line (SCL).
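As a concrete example of talking to a device over those two wires, here is a minimal sketch using Linux's i2c-dev interface; the bus number, the device address 0x48 and the register 0x00 are made up:

#include <fcntl.h>
#include <linux/i2c-dev.h>  // defines the I2C_SLAVE ioctl
#include <sys/ioctl.h>
#include <unistd.h>

int main() {
    int fd = open("/dev/i2c-1", O_RDWR);
    if (fd < 0) return 1;

    // Select which slave address on the bus we want to talk to.
    if (ioctl(fd, I2C_SLAVE, 0x48) < 0) { close(fd); return 1; }

    unsigned char reg = 0x00;
    unsigned char value = 0;
    // Write the register address on SDA, then read one byte back.
    if (write(fd, &reg, 1) == 1) {
        read(fd, &value, 1);
    }
    close(fd);
    return 0;
}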