Project Description:
GPUImage is an open-source project hosted on GitHub by Brad Larson.
GPUImage is an open-source iOS framework for GPU-based image and video processing. It offers a wide range of image-processing filters, supports real-time filters on live camera video, movies, and still images, and uses GPU acceleration so that filters and other effects can be applied to all of these in real time. It also lets you write custom image filters. In addition, GPUImage supports ARC.
Processing a picture with GPUImage is easier than with Core Image: you simply apply the filter to the picture object, without worrying about a context or device. GPUImage also provides several blur effects besides Gaussian blur. Core Image offers several blur effects as well, but on iOS only its Gaussian blur is currently usable, whereas GPUImage offers GPUImageFastBlurFilter, GPUImageGaussianBlurFilter, GPUImageGaussianSelectiveBlurFilter, and GPUImageBoxBlurFilter. In addition, as an open-source framework, GPUImage supports custom filters.
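To make the "just apply the filter to the picture" point concrete, here is a minimal sketch of blurring a UIImage. The variable sourceImage is a placeholder, and blurRadiusInPixels is the radius property used by recent GPUImage versions (older releases used blurSize), so treat the exact property name as an assumption:

    #import <GPUImage/GPUImage.h>

    // Blur a still image in one step; no CIContext or device handling needed.
    GPUImageGaussianBlurFilter *blurFilter = [[GPUImageGaussianBlurFilter alloc] init];
    blurFilter.blurRadiusInPixels = 4.0; // adjust to taste; older GPUImage used blurSize
    UIImage *blurredImage = [blurFilter imageByFilteringImage:sourceImage];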
Development language: Objective-C
License:
BSD 3-Clause "New" or "Revised" License
Link:
https://github.com/BradLarson/GPUImage
Appendix
GPUImage framework
10 most favorite class libraries for iOS developers
A discussion on GPUImage on Quora
Learning OpenGL with GPUImage
GPUImage is now the mainstream open-source framework for filters, and not merely one among many. The author, Brad Larson, built the image-processing units on top of OpenGL and provides the GPUImageFilter base class; combined with shaders, common filters are no problem at all.
For ease of expression later on, the following is a general explanation of some basic concepts in GPUImage. If you already know them, feel free to skip ahead.
Several concepts in GPUImage
- Output: the source (a GPUImageOutput, such as a camera, a movie, or a still picture, which produces frames)
- Input: the destination (a GPUImageInput, such as a view or a movie writer, which receives frames)
- Filter: the filter itself (a GPUImageFilter, which is both an output and an input, so filters can be chained)
So a complete filter-processing chain looks like this: output + X + input, where X is the filter group (one or more filters). For convenience, newer versions of GPUImage provide GPUImageFilterPipeline, which makes it easy to combine multiple filters without worrying about the chaining logic before and after. A sketch of such a chain follows.
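As an illustration, here is a minimal sketch of the output + X + input chain using the addTarget: API; sourceImage and imageView (a GPUImageView) are assumed to exist in the surrounding code:

    #import <GPUImage/GPUImage.h>

    // output: a still picture acts as the frame source
    GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:sourceImage];

    // X: the filter group, chained one after another
    GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
    GPUImageGaussianBlurFilter *blur = [[GPUImageGaussianBlurFilter alloc] init];
    [source addTarget:sepia];
    [sepia addTarget:blur];

    // input: a GPUImageView that displays the final frames
    [blur addTarget:imageView];

    [source processImage];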
The author of GPUImage separates still-image filter processing from live-video filter processing. Dynamic (video) filtering follows the chain described above, but still-image processing follows the logic of (output + filter) * X + input: if an effect requires a combination of multiple filters, each filter generates a picture as its output, which is then passed to the next filter for processing. If many filters are stacked, or a filter effect is invoked many times, the memory cost is far from small, because the picture exported after each filter pass sits in memory. If the original image is particularly large then, congratulations, memory usage can be expected to explode. (See the sketch below.)
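A sketch of that memory-hungry (output + filter) * X + input pattern, reusing the sepia and blur filters from the earlier sketch; each imageByFilteringImage: call materializes a full-size UIImage:

    // Each call renders the whole image and returns a new UIImage,
    // so every intermediate result occupies its own full-size bitmap.
    UIImage *step1 = [sepia imageByFilteringImage:sourceImage];
    UIImage *step2 = [blur imageByFilteringImage:step1];
    // With a large original and many stacked filters, these intermediates add up fast.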
When I built my filter app, I stuck to the output + X + input pattern: the logic is simple, efficiency is high, and it does not eat much memory. Reading the source explains why: with output + X + input, when X contains multiple filters, the texture produced by filter n lives in GPU memory, and the GPU hands that texture directly to filter n+1 as its input, so across the whole filter chain only a single texture's worth of memory is occupied. The pipeline sketch below shows the same chain in that style.
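For completeness, the same kind of chain expressed with GPUImageFilterPipeline might look like the sketch below; source, sepia, blur, and imageView are from the earlier sketch, and the initWithOrderedFilters:input:output: initializer is the one I recall from recent GPUImage versions, so treat the exact signature as an assumption:

    // The pipeline wires the filters together in order,
    // so no manual addTarget: calls are needed.
    GPUImageFilterPipeline *pipeline =
        [[GPUImageFilterPipeline alloc] initWithOrderedFilters:@[sepia, blur]
                                                         input:source
                                                        output:imageView];
    [source processImage];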
Going down this road, the process itself hit basically no problems, but designing and packaging the code structure took time. Finally, when the filter module was finished and put to practical use in the project, I found that after the filter module was called, memory went up. I checked repeatedly: all the GPUImage-related objects had been released, so where was the extra memory? Later I thought of video memory. An ARC environment is only responsible for reclaiming Objective-C object memory; GPU memory naturally has to be reclaimed by the GPUImage user. Once you realize that, it is easy: flipping through the GPUImage API, I found that
there is a framebufferCache in GPUImageContext, and that's it:
[[GPUImageContext sharedImageProcessingContext].framebufferCache purgeAllUnassignedFramebuffers];
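As for where to call it, a hedged sketch follows; didReceiveMemoryWarning is just one plausible hook, the key point being that the purge runs after the GPUImage objects have been released:

    // After the filter chain is torn down (all GPUImage objects released),
    // ask GPUImage to drop its cached framebuffers and return GPU memory.
    - (void)didReceiveMemoryWarning {
        [super didReceiveMemoryWarning];
        [[GPUImageContext sharedImageProcessingContext].framebufferCache
            purgeAllUnassignedFramebuffers];
    }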