Flash video magic: stage video


In the past few years, video has become one of the most popular types of content on the web, driven largely by Adobe Flash Player. Flash Player 9 introduced H.264 support and full-screen playback, and the resulting immersive HD video experience on web pages genuinely changed the landscape. More recently, bringing Flash Player to mobile devices gave Adobe's Flash Player team new ideas about how video should be played in Flash and how the user experience could be continuously improved. Stage video is the result of those investments.

The traditional way to render video in Flash Player is to use the Video object. A Video object is treated like any other DisplayObject on the stage, which gives developers unprecedented creative freedom: for example, a video can be rendered on each face of a rotating cube, or multiple videos can be blended together. Figure 1 illustrates this.

Figure 1. Multiple videos combined using Video objects.

To support this kind of creativity, Flash Player has to process every video frame. Depending on the performance of the underlying device, this extra processing can reduce the video frame rate or increase the CPU load of Flash Player.

A new way of rendering video

To reduce the performance cost of rendering video through Video objects, Adobe introduced stage video, a new way to render video that takes full advantage of the underlying video hardware. The result is a much lower CPU load, which translates into higher frame rates and lower memory usage, even on low-end devices. With stage video, the StageVideo object is not part of the Flash Player display list; instead, it sits behind the stage. Figure 2 illustrates this design.

Figure 2. The StageVideo object sits behind the Flash stage.

Stage video offers obvious performance advantages on TVs, set-top boxes, and mobile devices. These devices do not have CPUs as powerful as a desktop computer's, but they do have very capable video decoding hardware, so they can render high-quality video with very little CPU usage. Even on desktop computers, however, stage video noticeably improves video performance in Flash Player.

As a developer, keep in mind that stage video is only the second step of GPU acceleration of video in Flash Player. The first step is video decoding that takes full advantage of the hardware acceleration available on the target platform. To get the best possible video experience, you need both steps, and the H.264 codec is the best companion to stage video: using H.264 ensures full GPU acceleration from video decoding through video rendering. You no longer need read-back (transferring data from the GPU back to the CPU) to composite video frames in the display list; video frames in YUV format are converted to RGB by the GPU (through Direct3D 9 or OpenGL) and sent directly to the screen. As a result, you get higher pixel fidelity together with lower CPU and memory usage.

Restrictions

When stage video is used, video is rendered through a flash.media.StageVideo object rather than a Video object. The StageVideo object is always displayed in an axis-aligned rectangular region of the screen. Other layers can sit on top of the StageVideo object, but nothing can be placed behind the video. Because StageVideo does not live in the traditional display list and is composited by the GPU, the following features are unavailable when you use a StageVideo object:

  • The StageVideo object cannot be rotated freely; only orthogonal rotation (in 90-degree increments) is possible.
  • ColorTransform and 3D transformations cannot be applied to a StageVideo object, and there is no matrix transformation for skewing the video.
  • Alpha channel, blend modes, filters, masks, and scale9Grid cannot be applied to a StageVideo object.
  • The video data cannot be copied into a BitmapData object (BitmapData.draw()).
  • The video cannot be cached as a bitmap.
  • The video data cannot be embedded in the SWF file; StageVideo can only display video coming from a NetStream object.
  • Depending on the underlying hardware, some color spaces may not be supported. In that case, Flash Player chooses a substitute color space. The new StageVideo ActionScript API provides a way to query the color space that is in use.
  • Depending on the platform, the number of videos that can be displayed on the video plane is limited. On most mobile systems, only one video can be played at a time across the whole device, which means that if several SWF instances are displayed simultaneously, only the first one can display its video in hardware-accelerated mode.
  • To keep behavior consistent between Flash Player on the desktop and on TV devices, set wmode to "direct".
  • Avoid overlapping SWF files that use wmode="transparent". Some platforms, such as Google TV, do not support wmode="transparent" or wmode="opaque", which means that all SWF instances behave as if wmode="window", regardless of the value passed in the <embed> tag parameters.

In practice, none of these restrictions affect the most common use case, the video player application. When these restrictions are acceptable, developers are strongly encouraged to use StageVideo objects. Stage video is supported on the Google TV platform and in all AIR applications developed for TV platforms, and it will be included on all platforms that support Flash.

For more information about using StageVideo on TV platforms and in AIR applications on Google TV, see the article Delivering video and content for the Flash Platform on TV.

Requirements

To make sure stage video is available, you should set wmode="direct" whenever possible. This is the highly recommended mode for video playback: it uses Direct3D on Windows and OpenGL on Mac OS and Linux, so video frames can be composited directly by the GPU. The limitation of this mode is that Flash Player renders in its own context, so HTML content cannot be overlapped on top of the player. Using any other mode, such as wmode="window", wmode="opaque", or wmode="transparent", greatly reduces the availability of stage video (see the note below). Therefore, for consistency, we strongly recommend using wmode="direct" whenever you can.

Note: Some browsers, such as Safari 4 (or later) on Mac OS X 10.6 and Internet Explorer 9 on Windows Vista/7, use APIs such as Core Animation or the IE9 GPU APIs that allow Flash Player to composite with the GPU inside the browser context, just as with wmode="direct". In those browsers, stage video can be used regardless of the value of the wmode parameter. However, for cross-browser consistency, use wmode="direct" whenever possible.

Now that you know the concepts and restrictions of stage video, let's look at what it takes to implement it in ActionScript.

Stage video API

Starting with Flash Player 10.2, a new class named StageVideo represents a video display instance on the hardware video plane. StageVideo objects are created by Flash Player and cannot be instantiated directly; you access them through the stageVideos Vector available on the Stage object:

var v:Vector.<StageVideo> = stage.stageVideos;
var sv:StageVideo;
if ( v.length >= 1 )
{
    sv = v[0];
}

The length of the stage.stageVideos Vector at the time you access it depends on the platform and the available hardware. At most eight StageVideo objects are available, so using several StageVideo objects in an application is entirely feasible on a desktop computer; on mobile platforms, only one StageVideo object can be used, and you must plan for that. The length of the Vector can also be zero. To implement stage video correctly, always listen for StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY instead of manually querying the length of the stageVideos Vector; this event tells you whether stage video is available.

You can register a listener for this event at any time and respond when it is dispatched:

stage.addEventListener(StageVideoAvailabilityEvent.STAGE_VIDEO_AVAILABILITY, onStageVideoState);

The event is dispatched as soon as the handler is registered, and again whenever the availability of stage video changes over time. For more details on this behavior, see the section "Gaining or losing stage video" below.

In the onStageVideoState handler, check the availability property of the StageVideoAvailabilityEvent object:

private function onStageVideoState(event:StageVideoAvailabilityEvent):void
{
    var available:Boolean = (event.availability == StageVideoAvailability.AVAILABLE);
}

The availability property can have the following values:

  1. StageVideoAvailability.AVAILABLE: Stage video is available; at least one StageVideo object is waiting in the stage.stageVideos Vector.
  2. StageVideoAvailability.UNAVAILABLE: Stage video is unavailable; the stage.stageVideos Vector is empty.

Typically, once you are notified of the availability, you can decide what to do. If stage video is available, you grab a StageVideo object from the stage.stageVideos Vector; if it is unavailable, you fall back to a traditional Video object that you have already created:

private function toggleStageVideo(on:Boolean):void
{
    // if StageVideo is available, attach the NetStream to StageVideo
    if (on)
    {
        if ( sv == null )
        {
            // retrieve the first StageVideo object
            sv = stage.stageVideos[0];
            sv.addEventListener(StageVideoEvent.RENDER_STATE, stageVideoStateChange);
        }
        sv.attachNetStream(ns);
    } else
    {
        video.attachNetStream(ns);
        stage.addChildAt(video, 0);
    }
    ns.play(FILE_NAME);
}

You may have noticed that we also listen for StageVideoEvent.RENDER_STATE. Note that this new event also exists for traditional Video objects as VideoEvent.RENDER_STATE; it tells us how the video is being rendered:

private function stageVideoStateChange(event:StageVideoEvent):void
{
    var status:String = event.status;
    resize();
}

In Flash Player 10.1 there was no way to know whether video frames were decoded or composited by the GPU. This new event, added to the StageVideo and Video objects in Flash Player 10.2, removes that limitation. The status property can have the following values, available as constants on the StageVideoEvent and VideoEvent classes:

  1. StageVideoEvent.RENDER_STATUS_ACCELERATED: The video is decoded and composited by the GPU.
  2. StageVideoEvent.RENDER_STATUS_SOFTWARE: The video is decoded in software and composited by the GPU (if the event is dispatched by a StageVideo object) or in software (if dispatched by a Video object).
  3. StageVideoEvent.RENDER_STATUS_UNAVAILABLE: The video hardware can no longer decode and composite the video.

A StageVideoEvent is dispatched when the NetStream is attached to a StageVideo object, and a VideoEvent is dispatched when it is attached to a Video object. For example, if you receive StageVideoEvent.RENDER_STATUS_SOFTWARE, you know the video is being decoded in software; you could then switch to another stream, such as an H.264 stream or a stream of a different size, so the video can be decoded more efficiently on the GPU. Conversely, if hardware decoding suddenly fails, you can force software decoding by switching to a codec that is not GPU accelerated. These events are also useful for tracking the user experience in debugging or logging.
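For example, a fuller version of the stageVideoStateChange handler shown above might react to each status value. This is only a sketch (not part of the original sample); it assumes the sv, video, resize, and toggleStageVideo members used throughout this article.

// Sketch: an extended render-state handler.
private function stageVideoStateChange(event:StageVideoEvent):void
{
    switch (event.status)
    {
        case StageVideoEvent.RENDER_STATUS_ACCELERATED:
            // Full GPU path: decoding and compositing both happen on the GPU.
            break;
        case StageVideoEvent.RENDER_STATUS_SOFTWARE:
            // Decoding happens in software; switching to an H.264 stream
            // could move decoding to the GPU as well.
            break;
        case StageVideoEvent.RENDER_STATUS_UNAVAILABLE:
            // Hardware resources can no longer be used:
            // fall back to the traditional Video object.
            toggleStageVideo(false);
            return;
    }
    // The decoded video size is known at this point,
    // so the viewport can be (re)computed.
    resize();
}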

As you can see, you can also use this event to resize the video, because it tells you when the video dimensions can be read from the Video or StageVideo object, from which the final width and height on screen are calculated:

private function resize():void
{
    rc = computeVideoRect(sv.videoWidth, sv.videoHeight);
    sv.viewPort = rc;
}

Remember that StageVideo is not a DisplayObject, so it does not implement the properties you would normally use to position and scale a DisplayObject in Flash. Instead of width and height, the code above uses the viewPort property to specify the size of the video on screen.
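The computeVideoRect helper used in the resize function is not shown in the sample. A minimal, hypothetical version (assuming flash.geom.Rectangle is imported) that letterboxes the video into the stage while preserving its aspect ratio might look like this:

// Hypothetical computeVideoRect: fit the decoded video into the stage,
// preserving aspect ratio and centering the viewport.
private function computeVideoRect(videoWidth:int, videoHeight:int):Rectangle
{
    var scale:Number = Math.min(stage.stageWidth / videoWidth,
                                stage.stageHeight / videoHeight);
    var w:int = int(videoWidth * scale);
    var h:int = int(videoHeight * scale);
    return new Rectangle((stage.stageWidth - w) / 2,
                         (stage.stageHeight - h) / 2,
                         w, h);
}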

The StageVideo object exposes the following properties (a short usage sketch follows the list):

  1. colorSpaces:Vector.<String>: The color spaces available on the underlying hardware.
  2. depth:int: The depth of the StageVideo object; this property lets you manage z-ordering among multiple StageVideo objects.
  3. pan:Point: The panning (similar to x and y); a Point object must be assigned. The default pan value is (0, 0).
  4. videoHeight:int: The native height of the video stream. Read-only.
  5. videoWidth:int: The native width of the video stream. Read-only.
  6. viewPort:Rectangle: The visible surface (similar to width and height); a Rectangle object must be assigned.
  7. zoom:Point: The zoom factor; a Point object must be assigned. The default zoom value is (1, 1).
Note that StageVideo instances are rendered in order: the first StageVideo object in the stageVideos Vector is rendered first, and each subsequent object is rendered on top of the previous one. To change this order, use the depth property:

sv.depth = 0;
sv2.depth = 1;

You may also have noticed that the StageVideo object exposes a colorSpaces property, which returns information about the color spaces the current hardware can handle. The next section shows why this is useful.

Using color spaces

The colorSpaces property of a StageVideo object returns a Vector of Strings:

var colorSpace:Vector.<String> = stageVideo.colorSpaces;

The color space names are listed as constants on the flash.media.VideoColorSpace class:

VideoColorSpace.BT601 = "BT.601";
VideoColorSpace.UNKNOWN = "unknown";
VideoColorSpace.BT709 = "BT.709";
VideoColorSpace.SMPTE_240M = "SMPTE-240M";
VideoColorSpace.USFCC = "USFCC";

Note that the player tries to match the color space of the StageVideo object with the color space of the video stream. On some machine configurations, if an exact match is not possible, Flash Player uses the closest match available.

Remember that some video containers embed information about the original color space of the video stream, and Flash Player has to take this into account. A common case is H.264: the stream is usually encoded as "BT.709", while the colorSpaces property may only return "BT.601", which means the underlying OS/graphics hardware cannot render the video plane in the "BT.709" color space. In that case you can either fall back to software compositing (by using a traditional Video object) or accept the color mismatch. If "unknown" is returned, the platform cannot report which color space is currently in use.
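As a sketch (assuming a StageVideo instance named sv and the toggleStageVideo fallback shown in this article), you might check for BT.709 support before relying on the hardware video plane:

// Sketch: check whether the hardware video plane supports BT.709;
// if not, either accept the closest color space or fall back to a
// traditional Video object (software compositing).
var supportsBT709:Boolean = sv.colorSpaces.indexOf(VideoColorSpace.BT709) != -1;
if (!supportsBT709)
{
    toggleStageVideo(false); // fall back, as defined elsewhere in this article
}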

Gaining or losing stage video

When a SWF file is instantiated, stage video may not be available, but it can become available some time later. Why? As explained earlier, the wmode value you choose when embedding the SWF determines whether stage video is available across browsers. In some cases you may not be able to set wmode="direct", for example because of integration constraints on the HTML page. In addition, in full-screen mode Flash Player no longer runs inside the browser context, so stage video is always available regardless of the wmode value. When the user exits full-screen mode, however, stage video may become unavailable again, because the SWF file is placed back into the browser context, where stage video is affected by the wmode parameter.

Note: Because of this, you should always keep a traditional Video object as a fallback in any video player that uses the StageVideo API.

You therefore need to structure your video player so that it responds appropriately. Fortunately, this is quite easy: when stage video becomes available, you simply attach the NetStream object to the StageVideo object you obtain. Modify the toggleStageVideo function as follows:

private function toggleStageVideo(on:Boolean):void
{
    // if StageVideo is available, attach the NetStream to StageVideo
    if (on)
    {
        stageVideoInUse = true;
        if ( sv == null )
        {
            sv = stage.stageVideos[0];
            sv.addEventListener(StageVideoEvent.RENDER_STATE, stageVideoStateChange);
        }
        sv.attachNetStream(ns);
        if (classicVideoInUse)
        {
            // If using StageVideo, just remove the Video object from
            // the display list to avoid covering the StageVideo object
            // (always in the background)
            stage.removeChild(video);
            classicVideoInUse = false;
        }
    } else
    {
        // Otherwise attach it to a Video object
        if (stageVideoInUse)
            stageVideoInUse = false;
        classicVideoInUse = true;
        video.attachNetStream(ns);
        stage.addChildAt(video, 0);
    }
    if ( !played )
    {
        played = true;
        ns.play(FILE_NAME);
    }
}

This code correctly handles falling back to a traditional Video object when stage video becomes unavailable, and switching back when it becomes available again. When the NetStream is re-attached to the StageVideo object, hardware resources have to be allocated, so you may notice a short delay between attaching the NetStream and seeing pixels on screen.

For more information about using the StageVideo API, see the attachment to this article, which contains a simple video player illustrating the various scenarios.

Scenarios

As you have seen throughout this article, different scenarios can arise when playing video in Flash Player. The list below covers the scenarios you may encounter when playing video through a Video or StageVideo object; the sketch after the list shows how to detect which scenario applies at run time.

  • Video object playing a non-accelerated codec: The CPU is used for decoding and compositing.
  • Video object playing a GPU-accelerated codec (H.264): The GPU is used for decoding, but the CPU may still be used for compositing.
  • StageVideo object playing a non-accelerated codec: The CPU is used for decoding, but the GPU is used for compositing.
  • StageVideo object playing a GPU-accelerated codec (H.264): The GPU is used for decoding and compositing, and the CPU is not used anywhere in the pipeline. This is the "direct path" scenario that delivers the best performance.
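As a sketch (assuming the handler is registered for both StageVideoEvent.RENDER_STATE on the StageVideo object and VideoEvent.RENDER_STATE on the Video object, as in the samples above), you can tell which scenario applies from the dispatching object and the status value:

// Sketch: identify which of the four scenarios applies at run time.
// The same handler is registered on both the StageVideo and the Video object.
private function onRenderState(event:Event):void
{
    var status:String = Object(event).status;          // StageVideoEvent or VideoEvent
    var onStageVideo:Boolean = event.target is StageVideo;
    var accelerated:Boolean = onStageVideo
        ? status == StageVideoEvent.RENDER_STATUS_ACCELERATED
        : status == VideoEvent.RENDER_STATUS_ACCELERATED;

    if (onStageVideo && accelerated)
        trace("GPU decoding + GPU compositing (the direct path)");
    else if (onStageVideo)
        trace("software decoding + GPU compositing");
    else if (accelerated)
        trace("GPU decoding + software compositing");
    else
        trace("software decoding + software compositing");
}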

Going further

Be sure to try the developer-oriented Adobe Flash Player 10.2 beta, available on Adobe Labs. It introduces new features and enhancements, including the new video hardware acceleration model that significantly improves video playback performance.

In addition to the new StageVideo API, Adobe has introduced two small but very interesting improvements:

  • Full-screen support with multiple monitors: Full-screen content remains in full-screen mode on a second monitor, so users can watch full-screen content while working on another monitor.
  • Full-screen detection: The new allowsFullScreen property on the Stage object lets developers check whether the hosting container/web page allows the player to go full screen (see the sketch after this list).
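A small sketch of full-screen detection follows (fullScreenButton is a hypothetical button in your player UI; flash.display.StageDisplayState and flash.events.MouseEvent are assumed to be imported):

// Sketch: only offer full screen when the hosting page allows it.
private function setupFullScreenButton():void
{
    fullScreenButton.visible = stage.allowsFullScreen;
    fullScreenButton.addEventListener(MouseEvent.CLICK, onFullScreenClick);
}

private function onFullScreenClick(event:MouseEvent):void
{
    stage.displayState = StageDisplayState.FULL_SCREEN;
}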

The StageVideo API will undoubtedly change video playback performance: in some scenarios, stage video can reduce CPU usage by up to 85%. Even though stage video comes with some requirements, make sure you take advantage of it whenever possible. It lets developers benefit from full hardware acceleration of the video rendering pipeline, delivering best-in-class video playback performance.
