UWP Hand-Drawn Video Creation Tool Technology Sharing Series: Video Export
The final product of a hand-drawn video is a video file. The previous articles focused on creating hand-drawn videos; today we will talk about exporting them. This article mainly uses UWP as the example, and also covers some Web-side problems and their solutions.
As mentioned above, once a hand-drawn video has been created, it is ultimately exported as a video file, such as MP4 or WMV; our current choice is MP4. The export process is roughly divided into the following steps:
1. Render the hand-drawn video in the background
We still use Win2D for background rendering. As described in the previous articles, the creation and drawing process also uses Win2D for dynamic rendering: we pass the elements to be rendered, the target time, and other attributes to Win2D, which handles the actual drawing. Since the details were covered earlier, we will not repeat them here.
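To make the time-driven rendering concrete, here is a minimal sketch (in Python rather than the actual Win2D C#, and with hypothetical names such as `Element` and `draw_progress`) of the core idea: each element knows when its hand-drawing animation starts and how long it lasts, so for any timestamp we can compute how much of each element should be drawn.

```python
# Hypothetical sketch of time-based rendering, not the actual Win2D code.
from dataclasses import dataclass

@dataclass
class Element:
    start_ms: int      # when the element begins to be drawn
    duration_ms: int   # how long its hand-drawing animation takes

def draw_progress(element, t_ms):
    """Return 0..1: how much of the element is drawn at time t_ms."""
    if t_ms <= element.start_ms:
        return 0.0
    if t_ms >= element.start_ms + element.duration_ms:
        return 1.0
    return (t_ms - element.start_ms) / element.duration_ms

def render_frame(elements, t_ms):
    # The real renderer would ask Win2D to draw each partially completed
    # element; here we only report each element's progress at time t_ms.
    return [draw_progress(e, t_ms) for e in elements]

scene = [Element(0, 1000), Element(500, 1000)]
print(render_frame(scene, 750))  # → [0.75, 0.25]
```

The real renderer replays the stroke path up to the computed progress; the key point is that a frame is a pure function of the timestamp, which is what makes the frame capture in the next step possible.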
2. Capture frames at a custom frame rate
There are many ways to implement this step. We use the CanvasBitmap.CreateFromBytes and MediaClip.CreateFromSurface methods and save each segment as a video clip file. Sample code:
```csharp
var img = CanvasBitmap.CreateFromBytes(device, screen.GetPixelBytes(),
    (int)screen.SizeInPixels.Width, (int)screen.SizeInPixels.Height, screen.Format);
var clip = MediaClip.CreateFromSurface(img, span);
layerTmp.Overlays.Add(CreateMediaOverlay(clip, size, s - start));

var composition = new MediaComposition();
composition.Clips.Add(MediaClip.CreateFromSurface(bkScreen, TimeSpan.FromMilliseconds(s - start)));
composition.OverlayLayers.Add(layerTmp);

var mediaPartFile = await ApplicationData.Current.TemporaryFolder.CreateFileAsync(
    $"part_{mediafileList.Count}.mp4", CreationCollisionOption.ReplaceExisting);
await composition.RenderToFileAsync(mediaPartFile, MediaTrimmingPreference.Fast,
    MediaEncodingProfile.CreateMp4(quality));
mediafileList.Add(mediaPartFile);
```
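The capture loop behind the code above is driven by the chosen frame rate: at N frames per second, a frame must be grabbed every 1000/N milliseconds. A small illustrative sketch (hypothetical helper, not part of the actual app) of computing the capture timestamps:

```python
def frame_times_ms(total_ms, fps):
    """Timestamps (in ms) at which to capture frames for a clip of total_ms."""
    step = 1000 / fps  # time between consecutive frames
    times = []
    t = 0.0
    while t < total_ms:
        times.append(round(t))
        t += step
    return times

# At 25 fps, one second of video needs 25 captures, 40 ms apart.
print(frame_times_ms(1000, 25)[:3])   # → [0, 40, 80]
print(len(frame_times_ms(1000, 25)))  # → 25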
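The capture loop behind the code above is driven by the chosen frame rate: at N frames per second, a frame must be grabbed every 1000/N milliseconds. A small illustrative sketch (a hypothetical helper, not part of the actual app) of computing the capture timestamps:

```python
def frame_times_ms(total_ms, fps):
    """Timestamps (in ms) at which to capture frames for a clip of total_ms."""
    step = 1000 / fps  # time between consecutive frames
    times = []
    t = 0.0
    while t < total_ms:
        times.append(round(t))
        t += step
    return times

# At 25 fps, one second of video needs 25 captures, 40 ms apart.
print(frame_times_ms(1000, 25)[:3])   # → [0, 40, 80]
print(len(frame_times_ms(1000, 25)))  # → 25
```

Each timestamp is rendered, captured with GetPixelBytes, and appended as a short clip whose span equals the frame interval.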
3. Generate the video from the image sequence
This step is usually implemented with FFmpeg, and many wrapped versions of FFmpeg are available for C#. However, we do not use FFmpeg in the UWP app: on the one hand its code library is large, and on the other hand MediaComposition and MediaClip already cover our needs.
We take the video clips saved in the previous step and use the MediaComposition.RenderToFileAsync method to save them to the video file paiyun.mp4:
```csharp
foreach (var mediaPartFile in mediafileList)
{
    var mediaPartClip = await MediaClip.CreateFromFileAsync(mediaPartFile);
    bkComposition.Clips.Add(mediaPartClip);
}
var saveOperation = bkComposition.RenderToFileAsync(file, MediaTrimmingPreference.Fast,
    MediaEncodingProfile.CreateMp4(quality));
```
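For comparison, outside UWP the same concatenation step could be done with FFmpeg's concat demuxer, as mentioned above. A hedged sketch (the function name and the decision to stream-copy rather than re-encode are our own choices for illustration):

```python
import os
import subprocess
import tempfile

def concat_with_ffmpeg(part_files, out_file, run=False):
    """Concatenate MP4 parts with FFmpeg's concat demuxer (stream copy)."""
    # The concat demuxer reads a text file listing the inputs in order.
    list_path = os.path.join(tempfile.gettempdir(), "parts.txt")
    with open(list_path, "w") as f:
        for p in part_files:
            f.write(f"file '{p}'\n")
    cmd = ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
           "-i", list_path, "-c", "copy", out_file]
    if run:  # requires ffmpeg on PATH
        subprocess.run(cmd, check=True)
    return cmd

print(concat_with_ffmpeg(["part_0.mp4", "part_1.mp4"], "out.mp4"))
```

Stream copy (`-c copy`) avoids re-encoding, which is why this path can be fast; MediaTrimmingPreference.Fast in the UWP code trades precision for speed in a similar spirit.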
4. Process audio tracks inserted into the video
This step is relatively simple, because MediaOverlay supports sound very conveniently. We only need to crop the inserted video according to the configured start and end times, apply the specified rotation and other transformations, and then set MediaOverlay.AudioEnabled = true; set it to false if you want to mute the video.
```csharp
var overlay = CreateMediaOverlay(overlayClip, size, effect.StartAt);
overlay.AudioEnabled = videoGlygh.IsEnableAudio;
layer.Overlays.Add(overlay);
bkComposition.OverlayLayers.Add(layer);
```
5. Process video background music
Background music is handled through MediaComposition's BackgroundAudioTracks: we create a BackgroundAudioTrack from an audio file and add it to the collection. The property is an IList, so multiple audio tracks can be added. A simple example:
```csharp
StorageFile music = await StorageFile.GetFileFromApplicationUriAsync(
    new Uri(DrawOption.Instance.DefaultMusic.url));
var backgroundTrack = await BackgroundAudioTrack.CreateFromFileAsync(music);
bkComposition.BackgroundAudioTracks.Add(backgroundTrack);
```
Here we need to handle some special cases, such as allowing the audio file to loop over the hand-drawn video. In that case we splice the audio manually, based on the video duration and the audio duration:
```csharp
int i = 1;
while (DrawOption.Instance.MusicLoop && duration.TotalMilliseconds * i < total)
{
    var track = await BackgroundAudioTrack.CreateFromFileAsync(music);
    track.Delay = TimeSpan.FromMilliseconds(i * duration.TotalMilliseconds);
    bkComposition.BackgroundAudioTracks.Add(track);
    ++i;
}
```
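The loop above keeps adding delayed copies of the track until the looped audio covers the whole video. The same arithmetic can be written out directly; a small Python sketch (hypothetical helper names) of how many extra copies are needed and at what delays:

```python
import math

def loop_delays_ms(track_ms, video_ms):
    """Delays (in ms) for the extra copies of a background track so that
    the looped audio covers the whole video duration."""
    if track_ms <= 0 or track_ms >= video_ms:
        return []  # one copy is already enough
    # extra copies beyond the first one
    copies = math.ceil(video_ms / track_ms) - 1
    return [i * track_ms for i in range(1, copies + 1)]

# A 30 s track looped over a 100 s video needs 3 extra delayed copies.
print(loop_delays_ms(30_000, 100_000))  # → [30000, 60000, 90000]
```

Each delay corresponds to one BackgroundAudioTrack with its Delay property set, exactly as the C# while loop does.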
That completes hand-drawn video export in UWP. The export time generally depends on the video resolution and the complexity of the rendered elements; currently, exporting a 720p video takes about twice the duration of the hand-drawn video. When the video is very long, say more than 10 minutes, the export takes correspondingly long. We also fixed a performance bug: when a large number of images were saved locally, disk I/O became the bottleneck and disk usage was very high. We later changed the intermediate storage from local files to memory, taking care to release buffers promptly so the GC can keep up.
With that, the time consumed by video export is acceptable. We also have a Web platform that offers the same creation and export features; its export runs on the server, which is Linux. It is not as lucky as UWP: export there is relatively slow, taking roughly 5-10 times the video length. The process is as follows:
The export time is mainly limited by PhantomJS, whose rendering performance is poor; capturing each frame takes a long time and slows down the whole pipeline. At present we see no good way around this short of rewriting the capture in C++, and even then the efficiency gain would likely be limited.
Based on these problems, we came up with another idea: use a browser plug-in or a local application to complete the conversion on the user's machine and synchronize the result to the server. Below is a brief introduction to the solutions we are currently trying:
1. Traditional screen-recording solution
This was our first attempt at moving Web-side video generation to the local device. Compared with the user simply running a third-party screen recorder, the difference is that we can pick up the audio the user selected as background music, and we can upload the result to the server so it appears in the 'my works' list. The process is as follows:
This method is relatively simple, basically a wrapper around FFmpeg, but the drawbacks are obvious. Because the screen itself is recorded, the user's browser window cannot be moved, minimized, or paused during recording; the full preview must play through in real time with no control, so this option was quickly rejected.
2. Web front end combined with a local program
This scheme splits the work between the Web front end and a local program. Simply put, the local program starts a service on the local machine; the Web side captures images from the background-rendered Canvas at the chosen frame rate and sends them to the local program, which generates the video, mixes in the audio tracks, and uploads the result to the server. The process is as follows:
The local program is a background service with no UI and no need for user cooperation: the browser does not have to stay open, and the user does not have to sit through a preview. These are the advantages of this solution. It is currently under development; after it is complete we will share it in detail. It is also quite an imaginative implementation.
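A minimal sketch of the local program's encoding side, under our own assumptions (the function names are hypothetical, and we assume FFmpeg is available on the machine): the service starts one FFmpeg process that reads raw frames from stdin, and each frame the Web page posts is written into that pipe.

```python
import subprocess

def ffmpeg_pipe_cmd(fps, width, height, out_file):
    """FFmpeg command that turns raw RGBA frames on stdin into an MP4."""
    return ["ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "rgba",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-",                      # read frames from stdin
            "-pix_fmt", "yuv420p", out_file]

def start_encoder(fps, width, height, out_file):
    # The local service would call this once, then write each frame that
    # the Web page POSTs into proc.stdin; closing stdin finalizes the MP4.
    # Requires ffmpeg on PATH.
    return subprocess.Popen(ffmpeg_pipe_cmd(fps, width, height, out_file),
                            stdin=subprocess.PIPE)

print(ffmpeg_pipe_cmd(25, 1280, 720, "draw.mp4"))
```

Streaming frames through a pipe avoids the disk I/O bottleneck we hit in the UWP exporter, since no intermediate image files ever touch the disk.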
That covers UWP video export, the Web-side export problems, and the solutions we have come up with. If you have any better ideas, please feel free to give us feedback. Thank you!