The project requires generating a preview of a video based on the frame count and timestamp in seconds returned by the server.
An online search for "ios video frames" mostly turns up results on how to grab the first frame of a video on iOS.
But what if I don't want the first frame, but the frame at second x?
First, here is the code for grabbing the first frame:
```objectivec
- (UIImage *)getVideoPreviewImage {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoPath options:nil];
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    [asset release];
    gen.appliesPreferredTrackTransform = YES;
    CMTime time = CMTimeMakeWithSeconds(0.0, 600);
    NSError *error = nil;
    CMTime actualTime;
    CGImageRef image = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
    UIImage *img = [[[UIImage alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    [gen release];
    return img;
}
```
This was a fairly superficial first attempt, and it leaves a lot of cases unhandled.
Generally, if we want the frame at second x, the first instinct is simply to change
```objectivec
CMTime time = CMTimeMakeWithSeconds(0.0, 600);
```
to the time you want. But if you run it, you'll find the result is far from what you asked for.
Why is that?
Let's start with what CMTime actually is.
CMTime
CMTime is a structure used to describe a point in time in a video.
It has two constructors:

* CMTimeMake
* CMTimeMakeWithSeconds

The difference between the two:

* CMTimeMake(a, b): a is the current frame number, b is the number of frames per second; the current playback time is a/b.
* CMTimeMakeWithSeconds(a, b): a is the current time in seconds, b is the timescale (how many time units per second).
A couple of examples to illustrate:
```objectivec
Float64 seconds = 5;
int32_t preferredTimeScale = 600;
CMTime inTime = CMTimeMakeWithSeconds(seconds, preferredTimeScale);
CMTimeShow(inTime);
```
OUTPUT: {3000/600 = 5.000}
This represents a current time of 5 s: a value of 3000 at a timescale of 600 units per second.
```objectivec
int64_t value = 10000;
int32_t preferredTimeScale = 600;
CMTime inTime = CMTimeMake(value, preferredTimeScale);
CMTimeShow(inTime);
```
OUTPUT: {10000/600 = 16.667}
This represents a time of 16.667 s: a value of 10000 at a timescale of 600 units per second.
In fact, for our purposes, we only care about the resulting time. In other words, replacing (0, 600) with (x, 600) should be no problem... right?
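Concretely, the naive version looks like this (a sketch only; `gen` is an AVAssetImageGenerator created as in the code above, and `x` is the requested second):

```objectivec
// Naive attempt: just request second x instead of 0.
CMTime requestedTime = CMTimeMakeWithSeconds(x, 600);
NSError *error = nil;
CMTime actualTime;
CGImageRef image = [gen copyCGImageAtTime:requestedTime actualTime:&actualTime error:&error];
// Compare what we asked for with what we actually got:
NSLog(@"requested %f s, got %f s",
      CMTimeGetSeconds(requestedTime), CMTimeGetSeconds(actualTime));
CGImageRelease(image);
```

Running this is exactly how you discover the problem described in the next section: the logged times can differ substantially.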
requestedTimeTolerance
So why is the result so far off? Take the call

```objectivec
CGImageRef image = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
```

and log the returned actualTime.
You'll find it can be a long way from the time you requested. Why?
First of all, actualTime uses a timescale of fps * 1000, where fps is the video's frame rate. The fps can be obtained like this:

```objectivec
float fps = [[[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] nominalFrameRate];
```
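With the fps in hand, the returned actualTime can be read back as a frame index (a sketch; assumes `actualTime` and `fps` come from the calls above):

```objectivec
// actualTime's timescale is typically fps * 1000, so value / timescale
// gives seconds, and seconds * fps gives the frame actually returned.
Float64 actualSeconds = CMTimeGetSeconds(actualTime);
int64_t frameIndex = (int64_t)(actualSeconds * fps);
NSLog(@"actual: %f s (frame %lld)", actualSeconds, frameIndex);
```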
Now, why does the API have both a requested time and an actualTime? At first this confused me: why doesn't the frame I requested match the one I actually get?
Then I checked the documentation.
When you request a frame at a point in time, the generator searches within a range around it; if there is a cached image, or a keyframe within that range, it returns that directly, as a performance optimization.
The properties that define this range are requestedTimeToleranceAfter and requestedTimeToleranceBefore.
If we want the exact time, we just need to set:
```objectivec
gen.requestedTimeToleranceAfter = kCMTimeZero;
gen.requestedTimeToleranceBefore = kCMTimeZero;
```
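Putting all of the above together, a frame-at-second-x helper might look like this (a sketch under manual reference counting, matching the style of the earlier code; the method name and parameters are hypothetical, not from the original article):

```objectivec
// Hypothetical helper: decode the frame at second x of the video at videoURL.
- (UIImage *)previewImageForURL:(NSURL *)videoURL atSeconds:(Float64)x {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:nil];
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    [asset release];
    gen.appliesPreferredTrackTransform = YES;
    // Zero tolerance: force the generator to decode the exact requested time
    // instead of snapping to a nearby keyframe or cached image.
    gen.requestedTimeToleranceAfter = kCMTimeZero;
    gen.requestedTimeToleranceBefore = kCMTimeZero;

    CMTime time = CMTimeMakeWithSeconds(x, 600);
    NSError *error = nil;
    CMTime actualTime;
    CGImageRef image = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
    [gen release];
    if (image == NULL) {
        return nil;
    }
    UIImage *img = [[[UIImage alloc] initWithCGImage:image] autorelease];
    CGImageRelease(image);
    return img;
}
```

Note the trade-off: with zero tolerance the generator must decode from the nearest preceding keyframe up to the exact time, which is slower than letting it snap to a keyframe, so only use it when frame accuracy matters.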