Flash Video Player Development Experience Summary

Source: Internet
Author: User
Tags: browser cache

The HTTP protocol is the better choice

Currently, almost all video-on-demand sites use HTTP to transmit video data. Unlike stateful protocols such as RTMP, HTTP is stateless: the connection is released once the data has been transferred, so the server can free resources to serve more users. RTMP keeps a connection open for as long as the user is playing, so the number of users one server can carry is very limited. HTTP servers, CDNs, and the surrounding infrastructure are also mature technologies that deliver good performance at low cost. In addition, HTTP requests carry the browser's cookies directly, which makes it easy to tie playback into the site's existing business logic. Finally, HTTP can use the browser cache, which is both an advantage and a disadvantage: the advantage is that repeated requests for the same resource can be served straight from the cache, the disadvantage is slightly weaker security.

HTTP has better throughput, but it cannot carry highly real-time content such as video chat or live streaming; for those scenarios its performance is worse than RTMP's.

Security

Sometimes the videos we serve need access restrictions, such as hotlink protection or paid content. With HTTP, traditional authentication methods are sufficient: use a cookie or a token to decide whether the user may access the video resource; I won't go into the details. The only problem is that once a user has access to the video, they can also download it and do whatever they like with it.

Increasing load capacity further through segmentation

For HTTP to serve more users while keeping few connections open, we need to finish each transfer as quickly as possible. But is that the reason we segment video? No. Regardless of segmentation, the server time spent delivering the full video data to one user is fixed (assuming a fixed transfer speed), and segmentation even adds the overhead of creating and destroying extra connections.

Yet all the big video sites do segment their videos, so let me go over the benefits of segmentation.

    1. It saves the site's traffic, which means saving server resources and improving load capacity. When a user opens a video, they will most likely not watch the whole thing, only part of it. If the video is not segmented, all of the video data may be loaded as soon as the user opens the page, which is a huge waste of traffic. After segmenting the video we can load it piece by piece, so we only load as much as the user actually watches.
    2. It allows more flexible seeking (dragging). For a video that is not segmented at all, such as a static video file on an HTTP server, the NetStream object's seek method cannot jump to a part of the video that has not been loaded yet. To solve this, both Apache and Nginx provide an FLV module that supports a start parameter; by specifying start we can reload the video from the given position. The downside is wasted traffic: data that was already loaded may be loaded again. Segmentation avoids this problem, as described in the implementation sections later; a request sketch follows below.
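
As a rough illustration of the start-parameter approach, here is a minimal ActionScript 3 sketch that re-requests an FLV from a given byte offset. The URL and the helper are placeholders, and it assumes the server has an FLV module enabled that accepts a start query parameter pointing at a keyframe byte offset.

    import flash.net.NetConnection;
    import flash.net.NetStream;

    var videoUrl:String = "http://example.com/video.flv";   // hypothetical resource

    var nc:NetConnection = new NetConnection();
    nc.connect(null);                 // null connection for progressive HTTP playback
    var ns:NetStream = new NetStream(nc);
    ns.client = { onMetaData: function(info:Object):void { /* duration, keyframes, ... */ } };
    ns.play(videoUrl);                // first load from the beginning

    // Later, to reach an unloaded region, re-request from a keyframe byte offset.
    function reloadFrom(keyframeBytePosition:Number):void {
        ns.close();                                          // discards what was loaded so far
        ns.play(videoUrl + "?start=" + keyframeBytePosition);
    }
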
NetStream Object

How we play video on the Flash side is largely constrained by the features NetStream offers, so here is a general overview of what NetStream provides and its limitations; this also explains why the player is designed the way it is later on.

  1. NetStream provides two modes for playing HTTP video: normal mode and data generation mode.
  2. In normal mode, we hand NetStream the address of the HTTP video resource we want to play, and NetStream loads the video and starts playing it. We can pause playback, but we cannot pause the loading of data; we can seek within the data that has already been loaded, but we cannot seek into a part that has not been loaded. Once the data has finished loading we can still play, seek, and so on, but if the close method is called the stream is closed: if data was still loading it stops loading, and no further play or seek operations are possible, so the data we already loaded is wasted and can no longer be used. Therefore, if we want to seek back and forth across the segments of a split video, each segment must have its own NetStream instance; in other words, however many segments there are, that many NetStreams are needed to serve them (we will optimize this later). A sketch of both modes follows after this list.
  3. In data generation mode, NetStream allows a more flexible way of loading. The appendBytes method lets us feed external binary data into NetStream for playback, and the order in which the data is appended is the order of playback. In this mode we can load the video file data ourselves through a URLStream object, and in theory all loaded data can be reused. But be careful not to keep all the data in memory, or memory usage will explode; the specific caching strategy is discussed later.
  4. Unlike video players on other platforms, Flash cannot access local files directly, but by reloading a video that has already been downloaded it lets the browser serve the data quickly from its cache. So using the cache effectively is the key to optimization.
  5. Do not blindly trust NetStream's NetStatusEvent events: their timing can differ slightly across servers and browsers, so they should only be treated as hints, and some additional precondition checks are needed.
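
To make the two modes concrete, here is a minimal ActionScript 3 sketch of both. The URLs are placeholders, and the data generation half assumes the bytes being appended start with a valid FLV file header (i.e. the segment is loaded from its beginning).

    import flash.events.ProgressEvent;
    import flash.media.Video;
    import flash.net.NetConnection;
    import flash.net.NetStream;
    import flash.net.NetStreamAppendBytesAction;
    import flash.net.URLRequest;
    import flash.net.URLStream;
    import flash.utils.ByteArray;

    var nc:NetConnection = new NetConnection();
    nc.connect(null);
    var video:Video = new Video();

    // --- Normal mode: NetStream loads and plays the URL by itself. ---
    var ns1:NetStream = new NetStream(nc);
    video.attachNetStream(ns1);
    ns1.play("http://example.com/segment-0.flv");   // placeholder address
    ns1.pause();          // pauses playback, but not the download
    ns1.seek(10);         // only works inside the already-loaded range
    // ns1.close();       // would stop loading and make the loaded data unusable

    // --- Data generation mode: we load the bytes ourselves and append them. ---
    var ns2:NetStream = new NetStream(nc);
    video.attachNetStream(ns2);                     // reuse the same Video for this sketch
    ns2.play(null);                                 // switches NetStream to data generation mode
    ns2.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);

    var loader:URLStream = new URLStream();
    loader.addEventListener(ProgressEvent.PROGRESS, function(e:ProgressEvent):void {
        var chunk:ByteArray = new ByteArray();
        loader.readBytes(chunk);        // drain whatever has arrived so far
        ns2.appendBytes(chunk);         // append order is playback order
    });
    loader.load(new URLRequest("http://example.com/segment-0.flv"));  // placeholder address
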
Server support required for video segmentation
    1. Static segmentation: the video is split into several fixed, independently playable segments stored on the server, and the player fetches a list of segment addresses before playback. Each static segment can only be requested from its beginning, not from the middle. This is the simplest approach and performs well.
    2. Static segmentation + start parameter: an improvement on the first scheme that also supports requesting from the middle of a segment through to its end. This is what Youku and Tudou do. It makes seeking easier, and there are ready-made Nginx and Apache modules that support it.
    3. Dynamic segmentation: the server provides both start and end parameters, and the player decides for itself how to request segments. This is more flexible for the player and more convenient for server-side file management. Nginx and Apache should have modules for this scheme as well, but I have not looked into it. (Example request forms for all three schemes follow this list.)
    4. In all three schemes, the data returned for a request is a complete, independently playable video file: the server automatically adds the video file header for you. If Flash is using data generation mode, the server can simply return the raw file data fragment; there is no need to add a file header.
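
For reference, the three schemes differ mainly in how a segment is requested. The paths and parameter names below are hypothetical; the exact form depends on the server module in use.

    // 1. Static segmentation: fixed, independently playable files.
    var staticSegment:String  = "http://example.com/movie/seg-3.flv";

    // 2. Static segmentation + start: resume a segment from a keyframe byte offset.
    var fromMiddle:String     = "http://example.com/movie/seg-3.flv?start=1048576";

    // 3. Dynamic segmentation: the player chooses both ends of the requested range.
    var dynamicRange:String   = "http://example.com/movie.flv?start=1048576&end=2097152";
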
Simple segmented video playback

I won't go over how to play a single video file directly; instead I'll show how to play a segmented video as if it were one complete file. This scheme has some flaws, and the later schemes are optimizations built on top of it.

On the server side we use the first and simplest approach described above, static segmentation. Before playback starts, we fetch a list giving each segment's start time, end time, and address, together with the metadata of the whole video.
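
A minimal sketch of what such a segment list might look like on the client side. The field names and the playlist format are assumptions, not a fixed interface; the list would normally be parsed from whatever the site's playlist service returns.

    // Hypothetical segment list for the whole video.
    var segments:Array = [
        { url: "http://example.com/movie/seg-0.flv", startTime: 0,   endTime: 360 },
        { url: "http://example.com/movie/seg-1.flv", startTime: 360, endTime: 720 }
        // ...and so on for the remaining segments
    ];
    var totalDuration:Number = 5400;   // seconds, from the whole-video metadata

    // Find which segment a given time (in seconds) falls into.
    function segmentIndexAt(time:Number):int {
        for (var i:int = 0; i < segments.length; i++) {
            if (time >= segments[i].startTime && time < segments[i].endTime) return i;
        }
        return segments.length - 1;
    }
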

Once the list has been loaded, we can start playing the video segments sequentially through NetStream, with each segment controlled by its own NetStream instance.

We can set a maximum buffer distance and, combined with the current playback position, compute an allowed buffer position: any segment that falls within the allowed buffer position may start loading, and it starts out paused rather than playing. Once a segment has started loading it is not stopped, so the actual buffered range may extend beyond the allowed buffer position.
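
A sketch of that buffering rule, reusing the segment list above; startLoadingPaused is a hypothetical helper that creates the segment's NetStream, starts it loading, and immediately pauses it.

    var maxBufferDistance:Number = 120;   // seconds allowed to buffer ahead of the playhead

    // Called periodically (e.g. on a timer) with the current time of the whole video.
    function updateBuffering(currentTime:Number):void {
        var allowedBufferPosition:Number = currentTime + maxBufferDistance;
        for (var i:int = 0; i < segments.length; i++) {
            var seg:Object = segments[i];
            // A segment may start loading once part of it falls inside the allowed range.
            if (!seg.loading && seg.endTime > currentTime && seg.startTime < allowedBufferPosition) {
                seg.loading = true;
                startLoadingPaused(seg);   // hypothetical: seg.ns.play(seg.url); seg.ns.pause();
            }
        }
    }
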

When a segment finishes playing, don't rush to close it; it may be needed for a later seek. Immediately afterwards we call resume on the next segment so that it takes over. In this way the segments play back to back, and from the outside it looks like one complete video is being played.
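
A sketch of that hand-off, assuming each segment object holds the NetStream created for it (seg.ns). "NetStream.Play.Stop" is the status code NetStream reports when it reaches the end of its data, though as noted earlier the exact timing of these events varies by environment.

    import flash.events.NetStatusEvent;

    var currentIndex:int = 0;   // index of the segment currently playing

    function watchSegment(seg:Object):void {
        seg.ns.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent):void {
            if (e.info.code == "NetStream.Play.Stop" && segments[currentIndex] == seg) {
                // Don't close the finished segment; it may serve a later seek.
                var next:Object = segments[currentIndex + 1];
                if (next && next.ns) {
                    currentIndex++;
                    video.attachNetStream(next.ns);  // show the next NetStream's frames
                    next.ns.resume();                // it was loading in a paused state
                }
            }
        });
    }
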

In this structure, a seek requested from outside falls into one of three cases:

    1. Seeking into the already-loaded portion of a loaded segment. This is the most efficient case: pause the currently playing segment (it can be kept if the seek target lies within it), then seek the target segment to the corresponding position and resume playback.
    2. Seeking into the not-yet-loaded portion of a loading segment. The handling is similar to the case above, but since the target position has not been loaded yet, we can only seek to the nearest position the segment has already loaded, so that playback resumes as soon as possible.
    3. Seeking into a segment that has not started loading at all. Again pause the currently playing segment, then make the target segment start loading and begin playing as soon as possible. (For the segments currently being loaded there are two strategies: let them keep loading, or close them immediately. There is also a compromise: if a segment is more than half loaded, let it finish, otherwise close it.) A dispatcher sketch covering all three cases follows this list.
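
Putting the three cases together, a rough dispatcher might look like the following. The names are hypothetical: seg.ns is the segment's NetStream, seg.loadedUpTo is the absolute time up to which it has data, and startLoadingPaused is the loader from the earlier sketch.

    function seekTo(targetTime:Number):void {
        var target:Object = segments[segmentIndexAt(targetTime)];
        var current:Object = segments[currentIndex];
        if (current != target) current.ns.pause();       // keep it around for later seeks

        if (target.ns && targetTime <= target.loadedUpTo) {
            // Case 1: the target segment has loaded far enough; seek within it directly.
            playSegmentAt(target, targetTime);
        } else if (target.ns) {
            // Case 2: the segment is loading but not that far; jump to the closest loaded point.
            playSegmentAt(target, target.loadedUpTo);
        } else {
            // Case 3: the segment has not started loading; start it and play as soon as possible.
            startLoadingPaused(target);                   // assumed to assign target.ns
            playSegmentAt(target, target.startTime);
        }
    }

    function playSegmentAt(seg:Object, time:Number):void {
        currentIndex = segments.indexOf(seg);
        video.attachNetStream(seg.ns);
        seg.ns.seek(time - seg.startTime);   // NetStream time is relative to the segment start
        seg.ns.resume();
    }
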
Segmented video playback improvements: Adding the start parameter

So we can see that the static segmentation approach still has real shortcomings when seeking: seeks into content that has not been loaded cannot be very precise. Cutting the segments shorter helps, but it brings other problems that I'll come back to later. Alternatively, we can introduce the start parameter on top of static segmentation, i.e. the "static segmentation + start parameter" server scheme described above.

When the start parameter is introduced, seek cases 2 and 3 above are improved:

    1. Seeking into the not-yet-loaded portion of a loading segment: close the stream that segment is loading on, and use the same segment's NetStream to start loading again from the position the seek asks for (by specifying the start parameter). The start value cannot be arbitrary, though: it must be the position of a video keyframe, otherwise the returned data will not play. Keyframe positions can be looked up in the metadata, as sketched below.
    2. Seeking into a segment that has not started loading: likewise, start loading it with the start parameter set according to the seek position.
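
A sketch of the keyframe lookup, assuming the FLV files carry a keyframe index in their onMetaData info (a keyframes object with times and filepositions arrays, as injected by tools such as yamdi or flvtool2) and that the server's FLV module accepts a byte offset in start. The index shown here is for the segment currently being handled; helper and field names are hypothetical.

    var keyframeTimes:Array;          // keyframe times in seconds, relative to the segment
    var keyframePositions:Array;      // matching byte offsets into the segment file

    function onMetaData(info:Object):void {
        if (info.keyframes) {
            keyframeTimes = info.keyframes.times;
            keyframePositions = info.keyframes.filepositions;
        }
    }

    // Byte offset of the last keyframe at or before the target time (relative to the segment).
    function keyframeOffsetFor(segmentRelativeTime:Number):Number {
        var pos:Number = keyframePositions[0];
        for (var i:int = 0; i < keyframeTimes.length; i++) {
            if (keyframeTimes[i] <= segmentRelativeTime) pos = keyframePositions[i];
            else break;
        }
        return pos;
    }

    // Improved cases 1 and 2: reload the segment's stream from that keyframe via start.
    function seekWithStart(seg:Object, targetTime:Number):void {
        var offset:Number = keyframeOffsetFor(targetTime - seg.startTime);
        seg.ns.close();                                    // abandon the partial load
        seg.ns.play(seg.url + "?start=" + offset);         // hypothetical query parameter
        video.attachNetStream(seg.ns);
    }
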

From this point on, a seek can land accurately on a keyframe in every case. The downside is that closing a segment while it is still loading wastes the data already transferred, and loading from the middle of a segment leaves that segment incomplete: if a later seek unluckily lands before the position we started from, the segment has to be reloaded, which again wastes traffic. Fortunately, the average user does not seek that obsessively.

Limiting the number of connections with a connection pool

As the previous strategies show, the shorter the segments, the more accurate seeking becomes and the less data is wasted; the cost is that more NetStream instances are needed to maintain the segments. For longer videos the number of NetStreams grows larger still. But the number of connections NetStream can actually keep open at the same time is limited. This is not a memory issue but a connection limit imposed by Flash: beyond it, NetStreams simply stop working properly without reporting any error. The limit also differs between browsers, which I suspect is related to the browser's underlying networking.

So to cap the number of NetStreams, we need to design a NetStream connection pool that manages all of them. The pool's upper limit must not be smaller than the maximum number of segments that the maximum buffer distance might require loading, otherwise the logic breaks down.

When we need a NetStream we take a new one from the pool (it may be an old NetStream that has since been closed, but we can treat it as new). When the number of connections reaches the limit, the pool automatically closes some of the old NetStreams that are still connected. The eviction rule follows the principle of spatial locality: the segment farthest from the current playhead is closed first (segments within the maximum buffer distance must not be closed), because statistics show that most seeks land near the playhead (perhaps viewers skipping ahead to see where the plot goes).
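
A sketch of that eviction rule, reusing the segment list and maxBufferDistance from earlier; the pool limit and helper names are assumptions.

    import flash.net.NetStream;

    var maxOpenStreams:int = 6;   // pool limit; must cover all segments inside the max buffer distance

    function openStreamCount():int {
        var n:int = 0;
        for (var i:int = 0; i < segments.length; i++) if (segments[i].ns) n++;
        return n;
    }

    // Called before a new segment starts loading; frees old streams if the pool is full.
    function acquireStream(currentTime:Number):NetStream {
        while (openStreamCount() >= maxOpenStreams) {
            var victim:Object = null;
            var worst:Number = -1;
            for (var i:int = 0; i < segments.length; i++) {
                var seg:Object = segments[i];
                if (!seg.ns) continue;
                if (seg.startTime <= currentTime && currentTime < seg.endTime) continue; // playing now
                var distance:Number = Math.min(Math.abs(seg.startTime - currentTime),
                                               Math.abs(seg.endTime - currentTime));
                if (distance <= maxBufferDistance) continue;   // protected by the buffer window
                if (distance > worst) { worst = distance; victim = seg; }  // farthest from playhead
            }
            if (!victim) break;        // nothing evictable; the caller should wait
            victim.ns.close();
            victim.ns = null;
            victim.loading = false;
        }
        return new NetStream(nc);      // handed out as a "new" stream
    }
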

Using data generation mode is recommended

Playing the video by switching between multiple NetStreams produces no obvious stutter at the switch points, but a careful viewer can still notice it. This is the other problem I mentioned earlier with cutting segments too short: switching too frequently can affect the viewing experience.

To solve this problem at the root we have to abandon NetStream switching and move to data generation mode. Data generation mode lets the requested segments be small (though not too small, or server performance suffers). One benefit of small segments is that each request finishes faster, so the chance of a request being interrupted drops, and once a request has completed, the next request for the same resource can be served from the browser cache. Small segments are therefore more likely to be cached, and the problems with short segments described above disappear.

Based on the playhead position, we load segment data up to the maximum buffer distance, much as before, and keep the loaded binary data in memory. Within a certain distance ahead of the playhead (call it the NS buffer length), any segment that enters this window is appended to the NetStream via appendBytes for playback. The in-memory data (the blue blocks in the original figure) has an eviction mechanism similar to the connection pool described earlier, to bound total memory use. Data evicted from memory can still be found in the browser cache (because it has already been downloaded), so if we need it again we can request it exactly as we would from the server, just faster (though of course slower than memory). The blocks not yet loaded at all (white in the figure) wait on the server. Together this forms a three-level lookup for data: memory, browser cache, then server.
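
A sketch of the feeding loop, assuming each segment's bytes are kept in a ByteArray once downloaded and that lookupBytes is a hypothetical helper implementing the three-level lookup (memory, then browser cache, then server). The first segment appended must begin with the FLV file header.

    import flash.net.NetStreamAppendBytesAction;
    import flash.utils.ByteArray;

    var nsBufferLength:Number = 10;    // seconds fed into the NetStream ahead of the playhead
    var appendedUpToIndex:int = -1;    // last segment index already appended

    var dgs:NetStream = new NetStream(nc);
    video.attachNetStream(dgs);
    dgs.play(null);                                            // data generation mode
    dgs.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);

    // Called periodically with the current playback time of the whole video.
    function feed(currentTime:Number):void {
        var next:Object = segments[appendedUpToIndex + 1];
        if (!next) return;
        // Only feed a segment once it enters the NS buffer window ahead of the playhead.
        if (next.startTime > currentTime + nsBufferLength) return;

        var bytes:ByteArray = lookupBytes(next);   // hypothetical: memory -> cache -> server
        if (!bytes) return;                        // still loading; try again on the next tick
        dgs.appendBytes(bytes);                    // append order is playback order
        appendedUpToIndex++;
    }
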

If the user seeks:

    1. The seek target lies in data held in memory: first flush the NetStream's buffer, then append the corresponding in-memory data (up to the NS buffer length described above) to the NetStream and resume playback. (A sketch follows this list.)
    2. The seek target lies in the browser cache: load the cached data into memory, then proceed as in case 1.
    3. The seek target lies on the server: load the segment data from the server into memory, then proceed as in case 1.
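
A sketch of case 1, once the target segment's bytes are in memory. NetStream's RESET_SEEK action flushes its buffer before new data is appended; the appended data must start at an FLV tag boundary, ideally the keyframe at or before the target position, which is what the hypothetical keyframeTagOffsetFor helper is meant to find.

    import flash.net.NetStreamAppendBytesAction;
    import flash.utils.ByteArray;

    function seekDataGeneration(targetTime:Number):void {
        var target:Object = segments[segmentIndexAt(targetTime)];
        var bytes:ByteArray = lookupBytes(target);       // assumed already in memory for case 1

        dgs.appendBytesAction(NetStreamAppendBytesAction.RESET_SEEK);  // flush the NetStream buffer
        var offset:uint = keyframeTagOffsetFor(bytes, targetTime - target.startTime);  // hypothetical
        var tail:ByteArray = new ByteArray();
        bytes.position = offset;
        bytes.readBytes(tail);                 // copy from the keyframe tag to the end of the segment
        dgs.appendBytes(tail);
        appendedUpToIndex = segments.indexOf(target);
    }
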

If a segment is large and the seek target lies in the middle of it, we can also start loading from the middle of the segment, which logically splits one segment into two.

In essence, data generation mode guarantees playback quality, eliminates wasted data, and keeps seeking accurate, while the server-side implementation stays exceptionally simple. It really is the first choice for video playback!

Bonus: doing live broadcasting in a segmented manner

This approach requires the server to segment the stream in real time and distribute the segments to a CDN. For example, the server packages every 30 seconds of video from the live source into one segment and pushes it to the CDN, so in theory the live stream is delayed by at least 30 seconds. But for broadcasts that are not extremely latency-sensitive, this approach carries load much better.

Traditional long-connection live streaming needs to keep every client connected, and the server must maintain each of those connections. In practice, transmitting 30 seconds of video may take only about 1 second, so with HTTP the connection is released as soon as the transfer finishes and can serve someone else; in theory the efficiency of connection usage can improve by a factor of 30.

Here we ask the server to provide a video address list containing the latest n segment addresses. The client keeps itself in sync with the live stream by polling this list.

The client maintains a segment queue: each poll of the server appends the newest segment addresses to the queue, and the playback module takes the oldest segment address off the queue to load.

If the user's network is poor, playback will stall, the rate at which segment addresses are taken off the queue drops, and the queue grows longer. The longer the queue, the further behind live the playback falls.

So when the queue grows beyond a threshold we set, we empty it down to only the most recent address, and let it grow again from there until the next trim. Trimming the queue corrects the delay accumulated from stalls, so the broadcast never falls too far behind; a queue sketch follows below.
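
A sketch of the live segment queue, assuming a polling endpoint that returns the latest n segment addresses, one per line with the newest last; the URL, the format, and the threshold are all assumptions.

    import flash.events.Event;
    import flash.events.TimerEvent;
    import flash.net.URLLoader;
    import flash.net.URLRequest;
    import flash.utils.Timer;

    var queue:Array = [];                 // segment addresses waiting to be played
    var known:Object = {};                // addresses already seen, to avoid duplicates
    var maxQueueLength:int = 5;           // threshold before we trim the accumulated delay

    var poller:Timer = new Timer(5000);   // poll the address list every 5 seconds
    poller.addEventListener(TimerEvent.TIMER, function(e:TimerEvent):void {
        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, function(ev:Event):void {
            var urls:Array = String(loader.data).split("\n");
            for each (var u:String in urls) {
                if (u && !known[u]) { known[u] = true; queue.push(u); }
            }
            // Too far behind live: keep only the most recent address.
            if (queue.length > maxQueueLength) queue = [queue[queue.length - 1]];
        });
        loader.load(new URLRequest("http://example.com/live/latest.txt"));  // hypothetical endpoint
    });
    poller.start();

    // The playback module calls this whenever it is ready to load the next segment.
    function nextSegmentUrl():String {
        return queue.length ? String(queue.shift()) : null;
    }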
