ASF/WMV profile parameter settings (reprint)


Profile
A profile is a set of data describing the configuration of an ASF file. A profile must contain at least one stream configuration.
Stream information
The stream information in a profile includes the bit rate, buffer window, and media properties of each stream. For video and audio, the stream information describes exactly how the media is configured in the file, including the codec used to compress the data (if any).
A profile also describes the ASF features used when the file is created, including mutual exclusion, stream prioritization, bandwidth sharing, and data unit extensions.
Every time you write a file you must supply a profile; call IWMWriter::SetProfile to specify one.
A profile can exist in three forms: a profile object in the application, an XML file, or the header of an ASF file.
Profile object
You can use the profile manager to create an empty profile object, or to load a profile from existing data.
XML file
Saved with the .prx extension. Note that the Windows Media Format 9 Series SDK no longer ships its system profiles in this form; custom profiles, however, must be saved as such files.
ASF file header
The reader creates a profile object and loads the format information from the ASF file header. Modifying the header does not affect the file content; to change the format, you must re-encode the file.
Using the Profile Editor
In addition to the Windows Media Format SDK, you can use the Profile Editor included with Windows Media Encoder 9 Series to create profiles. In your application, call IWMProfileManager::LoadProfileByData to load the saved profile. Be aware that enabling the "Video size: Same as input" option sets the video size to 0. Windows Media Encoder 9 Series recognizes and handles this case, but the writer object of the Windows Media Format SDK does not handle it automatically, so your application must deal with it as well.
Below is a profile in XML format:
<profile version="589824" storageformat="1" name="ICW" description="ICW stream">
  <!-- 73647561-0000-0010-8000-00AA00389B71 'auds' = WMMEDIATYPE_Audio -->
  <streamconfig majortype="{73647561-0000-0010-8000-00AA00389B71}"
                streamnumber="1" streamname="Audio Stream" inputname="Audio804"
                bitrate="1411200" bufferwindow="-1" reliabletransport="0"
                decodercomplexity="" rfc1766langid="zh-cn">
    <!-- 00000001-0000-0010-8000-00AA00389B71 = WMMEDIASUBTYPE_PCM -->
    <wmmediatype subtype="{00000001-0000-0010-8000-00AA00389B71}"
                 bfixedsizesamples="1" btemporalcompression="0" lsamplesize="4">
      <waveformatex wformattag="1" nchannels="2" nsamplespersec="44100"
                    navgbytespersec="176400" nblockalign="4" wbitspersample="16"/>
    </wmmediatype>
  </streamconfig>
  <!-- 73646976-0000-0010-8000-00AA00389B71 'vids' = WMMEDIATYPE_Video -->
  <streamconfig majortype="{73646976-0000-0010-8000-00AA00389B71}"
                streamnumber="2" streamname="Video Stream" inputname="Video804"
                bitrate="4000" bufferwindow="1000" reliabletransport="0"
                decodercomplexity="AU" rfc1766langid="zh-cn">
    <videomediaprops maxkeyframespacing="80000000" quality="35"/>
    <!-- 56555949-0000-0010-8000-00AA00389B71 'IYUV' = WMMEDIASUBTYPE_IYUV -->
    <wmmediatype subtype="{56555949-0000-0010-8000-00AA00389B71}"
                 bfixedsizesamples="1" btemporalcompression="0" lsamplesize="0">
      <videoinfoheader dwbitrate="4000" dwbiterrorrate="0" avgtimeperframe="1000000">
        <rcsource left="0" top="0" right="0" bottom="0"/>
        <rctarget left="0" top="0" right="0" bottom="0"/>
        <bitmapinfoheader biwidth="0" biheight="0" biplanes="1" bibitcount="12"
                          bicompression="IYUV" bisizeimage="0" bixpelspermeter="0"
                          biypelspermeter="0" biclrused="0" biclrimportant="0"/>
      </videoinfoheader>
    </wmmediatype>
  </streamconfig>
  <!-- 73636D64-0000-0010-8000-00AA00389B71 'scmd' = WMMEDIATYPE_Script -->
  <streamconfig majortype="{73636D64-0000-0010-8000-00AA00389B71}"
                streamnumber="3" streamname="Script Stream" inputname="Script804"
                bitrate="2560" bufferwindow="-1" reliabletransport="0"
                decodercomplexity="" rfc1766langid="zh-cn">
    <wmmediatype subtype="{00000000-0000-0000-0000-000000000000}"
                 bfixedsizesamples="0" btemporalcompression="0" lsamplesize="0">
      <!-- 82F38A70-C29F-11D1-97AD-00A0C95EA850 = WMScriptType_TwoStrings -->
      <wmscriptformat scripttype="{82F38A70-C29F-11D1-97AD-00A0C95EA850}"/>
    </wmmediatype>
  </streamconfig>
</profile>
Media sample
A media sample, or simply a sample, is a piece of digital media data. A sample is the smallest unit of data the Windows Media Format SDK can read or write. What a sample contains is determined by the media type associated with it. For video, a sample represents a single frame; the amount of data in each individual sample is determined by the profile specified when the ASF file is created.
A sample can hold uncompressed or compressed data; a compressed sample is called a stream sample. When an ASF file is created, samples are passed to the writer object, which compresses the data with the appropriate codec and writes it into the data section of the file. During playback, the reader object reads the compressed data from the file, decompresses it, and delivers uncompressed samples.
Samples are encapsulated in buffer objects that the Windows Media Format SDK allocates automatically. When necessary, you can also allocate your own buffer objects and use their read/write features.
The samples discussed here are not audio samples. Audio quality is usually expressed as the number of data points sampled per second; CD quality, for example, is 44,100 samples per second, or 44.1 kHz.
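The relationship between these audio parameters can be checked with a little arithmetic. The sketch below (plain C++ for illustration, not SDK code; the helper names are invented) derives the nblockalign and navgbytespersec values that appear in the PCM stream of the sample profile above:

```cpp
#include <cassert>

// Bytes occupied by one "block": one sample instant across all channels.
int BlockAlign(int channels, int bitsPerSample) {
    return channels * bitsPerSample / 8;
}

// Average (here: exact) byte rate of uncompressed PCM audio.
int AvgBytesPerSec(int samplesPerSec, int channels, int bitsPerSample) {
    return samplesPerSec * BlockAlign(channels, bitsPerSample);
}
```

For CD-quality stereo 16-bit audio this yields a block alignment of 4 bytes and 176,400 bytes per second, matching the profile.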
Inputs, streams, and outputs
An input is any digital media stream you feed to the writer object, and it must be in a supported format. Many standard RGB and YUV video formats and the PCM audio format are supported. If the codec does not support a particular input format, the writer object initializes a helper object that converts the input stream to a supported format, for example by converting color depth and rescaling video, or by adjusting the sample depth, sample rate, and channel count of audio. In some cases, compressed video and audio can be used as input. An input can also be data of another kind, such as text, script commands, images, or arbitrary file data.
An output is the data the reader object passes to the application for presentation. One output corresponds to one stream, except that when mutual exclusion is used, all mutually exclusive streams share a single output.
A stream is data contained in an ASF file; a stream has exactly one compression setting over its lifetime. A simple ASF file has two streams, one video and one audio; more complex files can contain several audio and several video streams. The audio streams might use the same compression settings but carry different content, while the video streams might carry the same content at different compression ratios. The streams are specified in the profile object.
Streams can also be written pre-compressed. In that case, the reader object must deliver the data by stream number rather than in output order.
Numbering
Streams are numbered starting from 1, and the number is specified in the profile. Streams also have an index, which is used to enumerate the streams in the profile. The two values are unrelated: input 1 is not necessarily stream number 1, and stream number 1 is not necessarily input 1.
Formats
A format is the complete description of a media type. Every format has a major type, such as audio or video, and may have a subtype. The information a format carries depends on the major type; video and audio formats require more information than other formats.
Input format
An input format describes the digital media you pass to the writer object. If a stream in the ASF file is compressed with a codec, only certain input formats are supported. When you use the Windows Media Audio and Video codecs, you can enumerate the supported input formats from the writer object. When writing a file, you are responsible for selecting an input format that matches your input media.
The input format does not always have to match a format the codec requires; the writer can convert the data to the required format.
Stream format
The stream format is the format in which the data is stored in the ASF file. The description in the profile may or may not match the input or output format (for example, when a codec is used). You can set the stream format only after obtaining the codec's format information.
Output format
An output format describes the digital media the reader object delivers to you. If a stream in the ASF file is compressed with a codec, only certain output formats are supported. When you use the Windows Media Audio and Video codecs, you can enumerate the supported output formats from the reader object. When reading a file, you are responsible for selecting an output format that matches your output media.
The output format does not always have to match the format the codec produces; the reader can convert the data to the desired format.
Bit rate
The bit rate is the amount of data transferred per second, measured in bits per second (bps) or kilobits per second (Kbps). It is often confused with bandwidth, which is also measured in bps or Kbps.
If the available bandwidth is lower than the bit rate of the ASF file, playback may be interrupted. In general, insufficient bandwidth leads to skipped samples or longer buffering times.
Every ASF file is assigned a bit rate when it is created, based on the streams it contains; different streams can have different bit rates. A bit rate can be constant (the compressed data is delivered at an essentially steady rate) or variable (the quality of the compressed data is preserved, even though this can cause sudden bursts of data).
The same content can be compressed into several streams with different bit rates, which you can then configure as mutually exclusive. This arrangement is called multiple bit rate, or MBR.
Metadata
Metadata is information describing an ASF file or its content, and it is stored in the file header. A metadata item is called an attribute, and each attribute consists of a name and a value. Global constants identify the attributes; for example, the title of an ASF file is stored in the g_wszWMTitle attribute. The most common attributes are predefined in the Windows Media Format SDK, but you can also define your own. Because other developers may pick the same names as you, conflicts are possible.
Some global attributes can be modified, such as g_wszWMSeekable (whether the file can be read from an arbitrary point).
Other attributes are purely informational and must be set by you, such as g_wszWMAuthor (the author).
An attribute can apply to the whole file or to an individual stream.
You can use the Windows Media Format SDK to edit the metadata of an MP3 file, but you should use ID3-compliant attributes to stay compatible with other MP3 applications.
Media time
Media time is measured from the first sample, in units of 100 nanoseconds, like all other times in the SDK. It allows the different streams in a file to be synchronized. Every sample you write must carry a media time, every data packet in the data section of the ASF file has a media time, and every output sample has one as well.
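As a sketch of the arithmetic (plain C++, not SDK code; the helper names are invented for illustration), media times in 100-nanosecond units can be generated like this:

```cpp
#include <cstdint>

// Media time is counted in 100-nanosecond units.
const int64_t UNITS_PER_SECOND = 10000000;  // 10,000,000 x 100 ns = 1 s

// Media time of the n-th video frame at a given frame rate.
int64_t FrameMediaTime(int64_t frameIndex, int64_t framesPerSecond) {
    return frameIndex * UNITS_PER_SECOND / framesPerSecond;
}

// Media time of the n-th fixed-size audio buffer: each buffer covers
// ticksPerBuffer sample instants at the given sample rate, so successive
// times advance by a constant amount.
int64_t AudioMediaTime(int64_t bufferIndex, int64_t ticksPerBuffer,
                       int64_t sampleRate) {
    return bufferIndex * ticksPerBuffer * UNITS_PER_SECOND / sampleRate;
}
```

At 10 frames per second, each frame advances the media time by 1,000,000 units, which is the avgtimeperframe="1000000" value in the sample profile above.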
Buffer
When the reader object opens a streamed file, the buffer size is determined from information in the file header. The actual bit rate varies, but its average should be the value specified in the profile.
The buffer window is measured as the length of data the buffer can hold. For example, a 32 Kbps stream with a 3-second buffer window implies a buffer of 12,000 bytes (32000 * 3 / 8). The decoder enforces this value, so the average bit rate over any buffer window is never greater than the bit rate of the stream.
The buffer window is usually specified in the profile, and the writer object handles the rest. When you write compressed data into a stream yourself, you must ensure the data rate never exceeds what the buffer window allows.
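The 12,000-byte example above generalizes to a one-line computation (plain C++ for illustration, not SDK code):

```cpp
#include <cstdint>

// Buffer size in bytes implied by a bit rate (bits/second) and a
// buffer window (milliseconds).
int64_t BufferBytes(int64_t bitrateBps, int64_t bufferWindowMs) {
    return bitrateBps * bufferWindowMs / 1000 / 8;
}
```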
Sections of an ASF file
The sections of an ASF file are organized as objects. There are three top-level objects: the header object, the data object, and the optional index object.
Each object begins with a GUID and a size. These allow a file reader to parse the information and load it into the corresponding object. Because of the GUIDs, the lower-level objects can appear in any order and still be recognized. This also allows an incomplete ASF file to be read correctly, as long as the file header is complete and at least one data object is present. Some objects, such as the stream properties object, may occur more than once.
The header object contains the description of the file and is the only top-level object that acts as a container for other objects.
The data object stores the stream data in packet form. It also carries the file ID and the total packet count, although the packet count is meaningless for the streamed format.
Each packet contains a send time and a duration, which lets the reader detect interruptions in the stream.
The data in a packet is organized into payloads. A payload can contain one or more media objects; a frame of the video stream is an example of a media object. A large media object, such as a key frame of the video stream, may span multiple payloads or even multiple packets. To track the fragments, each one is numbered from 0 to 255.
Besides the data, each payload carries a timestamp in milliseconds.
All packets have the uniform size specified in the header object. When a packet holds less data than that size, the remainder is filled with padding data.
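The padding rule can be stated as a tiny helper (illustrative C++ only; the function name is invented):

```cpp
// Packets all use the fixed size declared in the header object; a short
// packet is completed with padding bytes.
int PaddingBytes(int packetSize, int dataBytes) {
    return dataBytes >= packetSize ? 0 : packetSize - dataBytes;
}
```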
The index object contains pairs that match time points to key frames, so that positions in the file can be located efficiently. Because it sits at the end of the file, it is not available to live streams.
Using callbacks
Some methods of the Windows Media Format SDK interfaces execute asynchronously. Many of these methods use callbacks to communicate with the application.
The OnStatus callback
IWMStatusCallback::OnStatus is called by many objects in the Windows Media Format SDK. OnStatus receives notifications when the status of an SDK operation changes. Each object may connect to IWMStatusCallback in a different way.
Making asynchronous calls synchronous with events
1. Create an event object with the Platform SDK function CreateEvent.
2. In the callback, catch the relevant status and call SetEvent to signal the event object.
3. In the application, call WaitForSingleObject to wait on the event object. If you are writing a windowed Windows program, you must also run a message loop so the application keeps responding to user actions.
Using the context parameter
Some callbacks in the Windows Media Format SDK take a pvContext parameter. The value you pass when starting the asynchronous operation is handed back to the callback.
Typically, when several objects share the same callback, an object pointer is passed as this parameter.
Profiles
The main purpose of a profile is to describe the streams of a file and the relationships between them. Whether or not a codec is used, every stream needs some configuration in order to work. Stream configuration information can be obtained through the IWMCodecInfo3 interface; do not configure a stream that uses a Windows Media codec by hand.
Creating or editing a profile
1. Create an empty profile, or open an existing one.
2. Configure each stream, using data obtained from the codec where needed.
3. Configure mutual exclusion (optional).
4. Configure bandwidth sharing (optional).
5. Configure stream prioritization (optional).
Designing a profile
Choosing an encoding method
1-pass constant bit rate (CBR) is the only choice for live streaming. It encodes to a predefined bit rate and gives the lowest quality.
2-pass CBR is for file-based streaming of fixed duration, with better quality than 1-pass CBR.
1-pass variable bit rate (VBR) is used when a quality level must be specified. The result is suitable for local playback or playback after download.
2-pass VBR, unconstrained, is used when a bandwidth must be specified but the actual usage may deviate from it. For local playback or playback after download.
2-pass VBR, constrained, is used when a bandwidth must be specified and the actual usage may never exceed it. For local playback or playback after download.
Bit rate
Packetization consumes some bandwidth in addition to the data itself. If a stream contains data unit extensions, they can increase the stream's bit rate considerably.
In addition, every other connection on the client shares network bandwidth with your application, so the application cannot assume it has the client's full network bandwidth to itself.
Configuring streams
If a stream is video or audio and uses a Windows Media codec, you must use IWMCodecInfo3 to obtain the stream configuration object from the codec.
If the stream is of another type, create a new stream configuration object with IWMProfile::CreateNewStream.
You must set a stream name, a connection name, and a stream number (from 1 to 63) for each stream configuration.
For a two-pass VBR audio stream that uses a Windows Media codec, the VBR settings may be modified; video stream configurations do not need modification.
Configure other stream types according to their type; a bit rate and a buffer window must be set for all of them.
Add each stream to the profile with IWMProfile::AddStream.
Most settings are accessible through IWMMediaProps and are stored in a WM_MEDIA_TYPE structure. For audio and video, the WM_MEDIA_TYPE structure points to further media-specific information, typically a WAVEFORMATEX or WMVIDEOINFOHEADER structure. Video has a third structure, BITMAPINFOHEADER, which describes the video frame.
Obtaining stream configuration information from a codec
Video and audio streams that use a Windows Media codec must get their stream configuration from the codec. Although you could fill in the configuration yourself, obtaining it from the codec guarantees the data is accurate. Do not modify the obtained configuration except where recommended in this documentation.
The information is available from the IWMCodecInfo, IWMCodecInfo2, and IWMCodecInfo3 interfaces of the profile manager.
Enumerating the installed codecs
Codecs are numbered starting from 0; audio and video codecs are numbered independently.
Enumerating the formats a codec supports
Configuring audio streams
Do not modify the quality setting of an obtained configuration directly; change it through the IWMPropertyVault interface instead.
The buffer window of the audio stream should not be larger than that of the video stream, or playback will fall out of sync. Typically the audio buffer window is 1.5 to 3 seconds and the video buffer window is 3 to 5 seconds.
Configuring video streams
Unless the data is 24-bit RGB, the frame dimensions should be multiples of 4; otherwise you will get errors such as invalid format or invalid configuration.
Configuring screen-capture streams
Same as video streams, except that if the decoder complexity is set to 0, the quality set with IWMVideoMediaProps::SetQuality is ignored.
Image stream
Contains JPEG image data.
Video stream seeking performance
You can set the maximum interval between key frames with IWMVideoMediaProps::SetMaxKeyFrameSpacing. Increasing the number of key frames reduces video quality.
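Key frame spacing, like all SDK times, is expressed in 100-nanosecond units, so the maxkeyframespacing="80000000" value in the sample profile above means one key frame at least every 8 seconds. A one-line illustration (plain C++, not SDK code; the helper name is invented):

```cpp
#include <cstdint>

// Convert a key frame interval in seconds to 100-ns units.
int64_t KeyFrameSpacingUnits(int64_t seconds) {
    return seconds * 10000000;  // 10,000,000 x 100 ns = 1 s
}
```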
Uncompressed audio and video formats
These cannot be used for streaming. You must set the bit rate manually, and the buffer window should be set to 0.
Configuring other stream types
In general, such a stream only needs the bit rate, the buffer window, and the major media type in WM_MEDIA_TYPE. Some stream types, however, require additional settings.
Script streams
The formatType member of WM_MEDIA_TYPE must be set to WMFORMAT_Script, indicating that the pbFormat member points to a WMSCRIPTFORMAT structure.
There is only one script media type: WMScriptType_TwoStrings.
File transfer streams
Each sample requires a data unit extension, and you need to implement a data unit extension system.
Call IWMStreamConfig2::AddDataUnitExtension to add the data unit extension to the stream:
hr = pStreamConfig2->AddDataUnitExtension( CLSID_WMTPropertyFileName,
                                           -1, NULL, 0 );
Web page streams
Set the members of WM_MEDIA_TYPE as follows:
WM_MEDIA_TYPE.majortype: WMMEDIATYPE_FileTransfer.
WM_MEDIA_TYPE.subtype: WMMEDIASUBTYPE_WebStream.
WM_MEDIA_TYPE.bFixedSizeSamples: FALSE.
WM_MEDIA_TYPE.bTemporalCompression: TRUE.
WM_MEDIA_TYPE.lSampleSize: 0.
WM_MEDIA_TYPE.formattype: WMFORMAT_WebStream.
WM_MEDIA_TYPE.pUnk: NULL.
WM_MEDIA_TYPE.cbFormat: sizeof( WMT_WEBSTREAM_FORMAT ).
WM_MEDIA_TYPE.pbFormat: a pointer to a configured WMT_WEBSTREAM_FORMAT structure, with:
WMT_WEBSTREAM_FORMAT.cbSampleHeaderFixedData: sizeof( WMT_WEBSTREAM_SAMPLE_HEADER ).
WMT_WEBSTREAM_FORMAT.wVersion: 1.
WMT_WEBSTREAM_FORMAT.wReserved: 0.
Text streams
The media type is WMMEDIATYPE_Text.
Calculating the bit rate and buffer window
The simple approach is data length divided by duration. Image and file streams, however, can produce large bursts of data separated by long idle periods, so the buffer window must be large enough to absorb the bursts. Increase these values as needed.
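This rule of thumb can be sketched as a hypothetical helper (not part of the SDK; names and heuristic are invented for illustration): average the total data over the duration to get a bit rate, then make the buffer window at least big enough to absorb the largest single burst.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct RateEstimate { int64_t bitrate; int64_t bufferWindowMs; };

// 'sizes' holds the byte size of every sample; 'durationMs' is the
// total duration those samples cover.
RateEstimate EstimateRate(const std::vector<int64_t>& sizes, int64_t durationMs) {
    int64_t totalBits = 0, largestBits = 0;
    for (int64_t s : sizes) {
        totalBits += s * 8;
        largestBits = std::max(largestBits, s * 8);
    }
    RateEstimate e;
    e.bitrate = totalBits * 1000 / durationMs;  // data length / time
    // The window must at least absorb the biggest burst: one sample of
    // B bits takes B / bitrate seconds to drain at the stream bit rate.
    e.bufferWindowMs = largestBits * 1000 / e.bitrate;
    return e;
}
```

For bursty streams you would enlarge both values beyond these minimums, as the text suggests.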
Variable bit rate (VBR) streams
Data unit extensions
Saving and reusing profiles
Do not edit .prx files by hand; even a small change can invalidate the profile.
Mutual Exclusion
Stream priority
Bandwidth sharing
Packet size
Writing an ASF file
Configure the writer object with IWMWriter::SetProfile. After that, changes made to the profile object are not reflected in the writer object unless IWMWriter::SetProfile is called again.
Setting the profile on the writer object resets all header attributes, so modify those attributes after the profile has been set.
Inputs
Each connection in the profile object has an input number. Each stream has its own connection, unless mutual exclusion is configured: mutually exclusive streams share a connection.
When writing, each stream is identified by its input number, so you must use the connection names to determine the input number of each stream.
Enumerating input formats
The SDK can preprocess the input, so you can query it to determine whether an input format is supported.
Setting the input format
After finding an input format that matches your data, call IWMWriter::SetInputProps to apply it to the writer object. For video streams, the frame size must also be set.
Other stream types and pre-compressed streams
Other stream types do not need an input format selected.
For a pre-compressed stream, the input format must be set to NULL; this must be done before BeginWriting. You also need to call IWMHeaderInfo3::AddCodecInfo to record the format of the pre-compressed stream.
Before BeginWriting you can also use IWMWriterAdvanced2::SetInputSetting to apply settings to individual inputs.
Metadata
Use the writer object's IWMHeaderInfo or IWMHeaderInfo2 interface to access metadata. Metadata must be written before IWMWriter::BeginWriting.
Note: if you create a writer object without releasing it and then create another writer object, some metadata is copied into the new object.
Writing samples
IWMWriter::BeginWriting must be called before any samples are written.
1. Use IWMWriter::AllocateSample to allocate a buffer and obtain its INSSBuffer interface.
2. Use INSSBuffer::GetBuffer to obtain the address of the buffer.
3. Copy the data into the buffer.
4. Use INSSBuffer::SetLength to record the length of the copied data.
5. Pass the buffer, the input number, and the media time to IWMWriter::WriteSample. Audio buffers all have the same duration, so you can simply add a constant to the previous media time; for video, the media time must be calculated from the frame rate.
WriteSample is an asynchronous call and may not have completed before the next WriteSample call, so you must call AllocateSample to obtain a fresh buffer object before writing each sample.
After all samples have been written, call IWMWriter::EndWriting to complete the operation.
The streams should end at almost the same time, or data from some streams may be lost.
Writing compressed samples
Replace IWMWriter::WriteSample with IWMWriterAdvanced::WriteStreamSample.
Writing image samples
You must use IWMWriterAdvanced2::SetInputSetting to set the image quality attribute (g_wszJPEGCompressionQuality) in the range 1 to 100. The compression ratio of image samples varies widely, so set the buffer window as generously as you can.
Forcing key frames
Use INSSBuffer3::SetProperty to set the WM_SampleExtensionGUID_OutputCleanPoint property of the buffer object to TRUE.
Reading
Outputs
By default, each sample carries an output number that corresponds to a stream in the ASF file. When the reader opens the file, it assigns a number to each stream. Normally each stream has its own output, but each group of mutually exclusive streams shares a single output. For multiple-bit-rate files, or when the reader selects streams itself, the reader determines which stream feeds the output.
Because connection names are not stored in the file, the reader creates a simple connection name for each stream: the string form of the output number, such as "1", "2", "3", and so on.
Each output has one or more supported output formats, determined by the codec. When the file is opened, a default output format is derived from the subtype of the media.
Reading an ASF file asynchronously
1. Implement IWMReaderCallback to handle the reader's messages: OnStatus processes status messages, and OnSample processes the decoded samples.
2. Have the reader open a file; an output number is assigned to each stream.
3. Obtain the output format information from the reader.
4. Start the reader. Samples are delivered to OnSample at the proper media times until the reader is stopped or the end of the file is reached.
5. As the data arrives, the application is responsible for rendering the samples.
6. Close the reader when playback ends.
If the samples are to be delivered compressed, you must implement IWMReaderCallbackAdvanced::OnStreamSample. OnStreamSample is almost identical to OnSample, except that it works with stream numbers rather than output numbers. Before playback, obtain the reader object's IWMReaderAdvanced interface and call IWMReaderAdvanced::SetReceiveStreamSamples for each compressed stream.
Seeking
An ASF file must be configured properly to support seeking to a specified time. By default only audio files are seekable; files containing video must be indexed. If you are not sure how a file was created, call IWMHeaderInfo::GetAttributeByName with g_wszWMSeekable to find out whether it is seekable.
Call IWMReader::Start to seek to the specified time.
[Development notes]
Choosing an encoder
Windows Media
Although the results are satisfactory at low bit rates, encoding consumes too many system resources, and the results at high bit rates are not ideal.
Windows Media Video 9
Windows Media Video 9 Screen
Very picky about formats; for example, the frame dimensions must be DWORD-aligned. In long encoding sessions there are periodic quality swings (the resolution is high for a while, then low for a while).
Windows Media Audio 9
Windows Media Audio 9 Professional
If system resources run short and samples are dropped, the pitch changes and the result is intolerable.
Custom encoders
Mixing several data types into one encoded stream avoids synchronization problems, but you can no longer specify a separate bit rate or priority for each data type.

 

This article is from the CSDN blog. When reproducing it, please cite the source: http://blog.csdn.net/levisqin/archive/2005/08/04/446131.aspx
