Some official notes on iOS support for streaming media in RTSP format

Source: Internet
Author: User

iOS explicitly does not support streaming media over RTSP. RTSP/RTP has trouble traversing firewalls and NAT, requires additional ports to be opened, and suffers from other factors that hurt stability and versatility.

Support for HTTP streaming is the best option for video. There are third-party approaches that use the FFmpeg library to play RTSP streams, but the results are generally mediocre.

If the architectural constraints are not too strict, it is better to have the server provide an HTTP stream for the app to play directly.

The specific official notes follow:

Frequently Asked Questions
  1. What kinds of encoders are supported?

    The protocol specification does not limit the encoder selection. However, the current Apple implementation should interoperate with encoders that produce MPEG-2 Transport Streams containing H.264 video and AAC audio (HE-AAC or AAC-LC). Encoders that are capable of broadcasting the output stream over UDP should also be compatible with the current implementation of the Apple-provided segmenter software.

  2. What are the specifics of the video and audio formats supported?

    Although the protocol specification does not limit the video and audio formats, the current Apple implementation supports the following formats:

      • Video:

        • H.264 Baseline Level 3.0, Baseline Level 3.1, Main Level 3.1, and High Profile Level 4.1.

      • Audio:

        • HE-AAC or AAC-LC up to 48 kHz, stereo audio

        • MP3 (MPEG-1 Audio Layer 3) 8 kHz to 48 kHz, stereo audio

        • AC-3 (for Apple TV, in pass-through mode only)

        Note: iPad, iPhone 3G, and iPod touch (2nd generation and later) support H.264 Baseline 3.1. If your app runs on older versions of the iPhone or iPod touch, however, you should use H.264 Baseline 3.0 for compatibility. If your content is intended solely for iPad, Apple TV, iPhone 4 and later, and Mac OS X computers, you should use Main Level 3.1.

  3. What duration should media files be?

    The main point to consider is that shorter segments result in more frequent refreshes of the index file, which might create unnecessary network overhead for the client. Longer segments extend the inherent latency of the broadcast and the initial startup time. A duration of approximately 10 seconds of media per file seems to strike a reasonable balance for most broadcast content.
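
    As an illustration, an index file for a short VOD clip segmented into 10-second pieces might look like the sketch below (the segment file names are hypothetical; the tags follow the HTTP Live Streaming Internet-Draft):

     #EXTM3U
     #EXT-X-TARGETDURATION:10
     #EXT-X-MEDIA-SEQUENCE:0
     #EXTINF:10,
     fileSequence0.ts
     #EXTINF:10,
     fileSequence1.ts
     #EXTINF:10,
     fileSequence2.ts
     #EXT-X-ENDLIST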

  4. How many files should be listed in the index file during a continuous, ongoing session?

    The normal recommendation is 3, but the optimum number may be larger.

    The important point to consider when choosing the optimum number is that the number of files available during a live session constrains the client's behavior when doing play/pause and seeking operations. The more files in the list, the longer the client can be paused without losing its place in the broadcast, the further back in the broadcast a new client begins when joining the stream, and the wider the time range within which the client can seek. The trade-off is that a longer index file adds to network overhead: during live broadcasts, the clients are all refreshing the index file regularly, so it does add up, even though the index file is typically small.
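
    As an illustration of this sliding window, a live index file listing three 10-second segments might look like the sketch below; on each refresh the EXT-X-MEDIA-SEQUENCE value and the segment list advance together (file names are hypothetical):

     #EXTM3U
     #EXT-X-TARGETDURATION:10
     #EXT-X-MEDIA-SEQUENCE:2680
     #EXTINF:10,
     fileSequence2680.ts
     #EXTINF:10,
     fileSequence2681.ts
     #EXTINF:10,
     fileSequence2682.ts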

  5. What data rates are supported?

    The data rate that a content provider chooses for a stream is most influenced by the target client platform and the expected network topology. The streaming protocol itself places no limitations on the data rates that can be used. The current implementation has been tested using audio-video streams with data rates up to 3 Mbps to iPhone. Audio-only streams at 64 Kbps are recommended as alternates for delivery over slow cellular connections.

    For recommended data rates, see Preparing Media for Delivery to iOS-Based Devices.

    Note: If the data rate exceeds the available bandwidth, there is more latency before startup and the client may have to pause to buffer more data periodically. If a broadcast uses an index file that provides a moving window into the content, the client will eventually fall behind in such cases, causing one or more segments to be dropped. In the case of VOD, no segments are lost, but inadequate bandwidth does cause slower startup and periodic stalling while data buffers.

  6. What's a .ts file?

    A .ts file contains an MPEG-2 Transport Stream. This is a file format that encapsulates a series of encoded media samples, typically audio and video. The file format supports a variety of compression formats, including MP3 audio, AAC audio, H.264 video, and so on. Not all compression formats are currently supported in the Apple HTTP Live Streaming implementation, however. (For a list of currently supported formats, see Media Encoder.)

    MPEG-2 Transport Streams are containers, and should not be confused with MPEG-2 compression.

  7. What's an .m3u8 file?

    An .m3u8 file is an extensible playlist file format. It is an M3U playlist containing UTF-8 encoded text. The M3U file format is a de facto standard playlist format suitable for carrying lists of media file URLs. This is the format used as the index file for HTTP Live Streaming. For details, see the IETF Internet-Draft of the HTTP Live Streaming specification.

  8. How does the client software determine when to switch streams?

    The current implementation of the client observes the effective bandwidth while playing a stream. If a higher-quality stream is available and the bandwidth appears sufficient to support it, the client switches to the higher quality. If a lower-quality stream is available and the current bandwidth appears insufficient to support the current stream, the client switches to the lower quality.

    Note: For seamless transitions between alternate streams, the audio portion of the stream should be identical in all versions.

  9. Where can I find a copy of the media stream segmenter from Apple?

    The media stream segmenter, media file segmenter, and other tools are frequently updated, so you should download the current version of the HTTP Live Streaming Tools from the Apple Developer website. See Download the Tools for details.

  10. What settings are recommended for a typical HTTP stream, with alternates, for use with the media segmenter from Apple?

    See Preparing Media for Delivery to iOS-Based Devices.

    These settings are the current recommendations. There are also certain requirements. The current mediastreamsegmenter tool works with MPEG-2 Transport Streams as defined in ISO/IEC 13818. The transport stream must contain H.264 (MPEG-4, Part 10) video and AAC or MPEG audio. If AAC audio is used, it must have ADTS headers. H.264 video access units must use Access Unit Delimiter NALs, and must be in unique PES packets.

    The segmenter also has a number of user-configurable settings. You can obtain a list of the command-line arguments and their meanings by typing man mediastreamsegmenter in the Terminal application. A target duration (length of the media segments) of 10 seconds is recommended, and is the default if no target duration is specified.
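
    As a rough sketch only (the flag names here are from memory, not from the official documentation; check man mediastreamsegmenter for the authoritative list), a live segmenter invocation that reads an MPEG-2 transport stream from a UDP multicast address, cuts 10-second segments, and writes them under a web server document root might look something like:

     mediastreamsegmenter -b http://example.com/stream/ -f /Library/WebServer/Documents/stream -t 10 239.4.1.5:20103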

  11. How can I specify what codecs or H.264 profile is required to play back my stream?

    Use the CODECS attribute of the EXT-X-STREAM-INF tag. When this attribute is present, it must include all codecs and profiles required to play back the stream. The following values are currently recognized:

      • AAC-LC: "mp4a.40.2"

      • HE-AAC: "mp4a.40.5"

      • MP3: "mp4a.40.34"

      • H.264 Baseline Profile Level 3.0: "avc1.42001e" or "avc1.66.30" (use "avc1.66.30" for compatibility with iOS versions 3.0 to 3.1.2)

      • H.264 Baseline Profile Level 3.1: "avc1.42001f"

      • H.264 Main Profile Level 3.0: "avc1.4d001e" or "avc1.77.30" (use "avc1.77.30" for compatibility with iOS versions 3.0 to 3.1.2)

      • H.264 Main Profile Level 3.1: "avc1.4d001f"

      • H.264 Main Profile Level 4.0: "avc1.4d0028"

      • H.264 High Profile Level 3.1: "avc1.64001f"

      • H.264 High Profile Level 4.0: "avc1.640028"

      • H.264 High Profile Level 4.1: "avc1.640029"

    The attribute value must be in quotes. If multiple values are specified, one set of quotes is used to contain all of them, and the values are separated by commas. An example follows.

     #EXTM3U
     #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000,RESOLUTION=720x480
     mid_video_index.m3u8
     #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=800000,RESOLUTION=1280x720
     wifi_video_index.m3u8
     #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3000000,CODECS="avc1.4d001e,mp4a.40.5",RESOLUTION=1920x1080
     h264main_heaac_index.m3u8
     #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000,CODECS="mp4a.40.5"
     aacaudio_index.m3u8

  12. How can I create an audio-only stream from audio/video input?

    Add the -audio-only argument when invoking the stream or file segmenter.

  13. How can I add a still image to an audio-only stream?

    Use the --meta-file argument when invoking the stream or file segmenter, together with --meta-type=picture, to add an image to every segment. For example, this would add an image named poster.jpg to every segment of an audio stream created from the file track01.mp3:

    mediafilesegmenter -f /dir/outputfile -a --meta-file=poster.jpg --meta-type=picture track01.mp3

    Remember that the image is typically resent every ten seconds, so it's best to keep the file size small.

  14. How can I specify an audio-only alternate to an audio-video stream?

    Use the CODECS and BANDWIDTH attributes of the EXT-X-STREAM-INF tag together.

    The BANDWIDTH attribute specifies the bandwidth required for each alternate stream. If the available bandwidth is enough for the audio alternate but not enough for the lowest video alternate, the client switches to the audio stream.

    If the CODECS attribute is included, it must list all codecs required to play the stream. If only an audio codec is specified, the stream is identified as audio-only. Currently, it is not required to specify that a stream is audio-only, so use of the CODECS attribute is optional.

    The following is an example that specifies video streams at 500 Kbps for fast connections and 150 Kbps for slower connections, plus an audio-only stream at 64 Kbps for very slow connections. All of the streams should use the same 64 Kbps audio to allow transitions between streams without an audible disturbance.

    #EXTM3U
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000,RESOLUTION=1920x1080
    mid_video_index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=150000,RESOLUTION=720x480
    3g_video_index.m3u8
    #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=64000,CODECS="mp4a.40.5"
    aacaudio_index.m3u8

  15. What are the hardware requirements or recommendations for servers?

    See question #1 for encoder hardware recommendations.

    The Apple stream segmenter is capable of running on any Intel-based Mac. We recommend using a Mac with two Ethernet network interfaces, such as a Mac Pro or an Xserve. One network interface can be used to obtain the encoded stream from the local network, while the second network interface can provide access to a wider network.

  16. Does the Apple implementation of HTTP Live Streaming support DRM?

    No. However, media can be encrypted, and key access can be limited by requiring authentication when the client retrieves the key from your HTTPS server.
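
    For illustration, encryption is signaled in the index file with an EXT-X-KEY tag whose URI points at your key server; a sketch with a hypothetical HTTPS key URL and segment name follows:

     #EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key1"
     #EXTINF:10,
     fileSequence52.ts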

  17. What client platforms is supported?

    IPhone, IPad, and IPod Touch (requires IOS version 3.0 or later), Apple TV (version 2 and later), and Mac OS X computers.

  18. Is the protocol specification available?

    Yes. The protocol specification is an IETF Internet-Draft, at http://tools.ietf.org/html/draft-pantos-http-live-streaming.

  19. Does the client cache content?

    The index file can contain an instruction to the client that the content should not be cached. Otherwise, the client may cache data for performance optimization when seeking within the media.
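
    In the versions of the Internet-Draft current at the time of this FAQ, that instruction is the EXT-X-ALLOW-CACHE tag in the index file, for example:

     #EXT-X-ALLOW-CACHE:NO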

  20. Is this a real-time delivery system?

    No. It has inherent latency corresponding to the size and duration of the media files containing stream segments. At least one segment must fully download before it can be viewed by the client, and two may be required to ensure seamless transitions between segments. In addition, the encoder and segmenter must create a file from the input; the duration of this file is the minimum latency before media is available for download. Typical latency with recommended settings is in the neighborhood of 30 seconds.

  21. What's the latency?

    Approximately 30 seconds, with recommended settings. See question #20.

  22. Do I need to use a hardware encoder?

    No. Using the protocol specification, it is possible to implement a software encoder.

  23. What advantages does this approach have over RTP/RTSP?

    HTTP is less likely to be disallowed by routers, NAT, or firewall settings. No ports need to be opened that are commonly closed by default. Content is therefore more likely to get through to the client in more locations and without special settings. HTTP is also supported by more content-distribution networks, which can affect cost in large distribution models. In general, more available hardware and software works unmodified and as intended with HTTP than with RTP/RTSP. Expertise in customizing HTTP content delivery using tools such as PHP is also more widespread.

    Also, HTTP Live Streaming is supported in Safari and the media player framework on iOS. RTSP streaming is not supported.

  24. Why is my stream's overall bit rate higher than the sum of the audio and video bit rates?

    MPEG-2 transport streams can include substantial overhead. They utilize fixed packet sizes that are padded when the packet contents are smaller than the default packet size. Encoder and multiplexer implementations vary in their efficiency at packing media data into these fixed packet sizes. The amount of padding can vary with frame rate, sample rate, and resolution.

  25. How can I reduce the overhead and bring the bit rate down?

    Using a more efficient encoder can reduce the amount of overhead, as can tuning the encoder settings.

  26. Do all media files have to be part of the same MPEG-2 Transport Stream?

    No. You can mix media files from different transport streams, as long as they are separated by EXT-X-DISCONTINUITY tags. See the protocol specification for more detail. For best results, however, all video media files should have the same height and width dimensions in pixels.
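
    For illustration, an index file that mixes segments cut from two different transport streams might look like the sketch below (file names are hypothetical):

     #EXTM3U
     #EXT-X-TARGETDURATION:10
     #EXTINF:10,
     main_feed_01.ts
     #EXTINF:10,
     main_feed_02.ts
     #EXT-X-DISCONTINUITY
     #EXTINF:10,
     alternate_feed_01.ts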

  27. Where can I get help or advice on setting up an HTTP audio/video server?

    You can visit the Apple Developer Forum at http://devforums.apple.com/.

    Also, check out Best Practices for Creating and Deploying HTTP Live Streaming Media for the iPhone and iPad.
