iOS Graphics Programming Summary


Three API families can be used for iOS graphics programming: UIKit, Core Graphics (Quartz 2D), and OpenGL ES with GLKit.

The operations included in these APIs all draw into a graphics context. A graphics context contains the drawing parameters and all the device-specific information needed for drawing. There are screen graphics contexts, offscreen bitmap contexts, and PDF graphics contexts, used respectively to draw to the screen, to a bitmap, or to a PDF file. Drawing in a screen graphics context is limited to an instance of a UIView class or its subclasses and is displayed directly on the screen; drawing in an offscreen bitmap or PDF graphics context is not displayed directly on the screen.

1. UIKit API

UIKit is a set of Objective-C APIs that provides an Objective-C wrapper for line graphics, Quartz images, and color operations, and offers 2D drawing, image processing, and animation at the user-interface level.

UIKit includes UIBezierPath (drawing lines, arcs, ellipses, and other shapes), UIImage (displaying images), UIColor (color operations), UIFont and UIScreen (font and screen information), as well as functions for drawing, and for operating on bitmap and PDF graphics contexts. It also provides support for standard views and for printing.

In UIKit, the UIView class automatically creates a graphics context (corresponding to the CGContext type of the Core Graphics layer) as the current drawing context when drawing begins. You can call the UIGraphicsGetCurrentContext function to obtain the current graphics context, as in the sketch below.
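
A minimal sketch, assuming a hypothetical CircleView subclass (draw(_:) is the current Swift spelling of drawRect:):

import UIKit

// A UIView subclass that draws in the graphics context UIKit
// automatically makes current before calling draw(_:).
class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        // Obtain the current context created by UIKit for this view.
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.systemBlue.cgColor)
        ctx.fillEllipse(in: rect.insetBy(dx: 4, dy: 4))

        // UIBezierPath draws into the same current context.
        let path = UIBezierPath(rect: rect.insetBy(dx: 12, dy: 12))
        UIColor.white.setStroke()
        path.stroke()
    }
}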

2. Core Graphics and Quartz 2D APIs

Core Graphics is a C-based API that supports vector graphics: lines, shapes, patterns, paths, gradients, bitmap images, and PDF content.

Quartz 2D is the 2D rendering engine in Core Graphics. Quartz is device-independent, and it provides path drawing, anti-aliased rendering, gradient fills, patterns, images, transparency and transparency layers, masking and shadows, color management, coordinate transformations, fonts, offscreen rendering, and PDF document creation, display, and parsing.

Quartz 2D can be used together with all graphics and animation technologies (such as Core Animation, OpenGL ES, and UIKit).

Quartz draws using the painter's model: drawing operations are applied in sequence, with later drawing layered over earlier drawing.

The graphics context used in Quartz is likewise represented by the CGContext type.

In Quartz, a graphics context serves as the drawing destination. When you draw with Quartz, all device-specific characteristics are contained in the particular type of graphics context you use, so the same drawing functions can render the same image to different devices simply by being handed a different context; this is how device-independent drawing is achieved.

Quartz provides the following graphics contexts for applications:

1) Bitmap graphics context, used to create a bitmap.

Use the CGBitmapContextCreate function to create one (see the sketch following this list).

2) PDF graphics context, used to create a PDF file.

The Quartz 2D API provides two functions that create a PDF graphics context:

CGPDFContextCreateWithURL creates a PDF graphics context with a Core Foundation URL as the location of the PDF output.

CGPDFContextCreate is used when you want to send the PDF output to a data consumer.


3) Window graphics context, used to draw into a window.

4) Layer context (CGLayer), an offscreen drawing target associated with another graphics context. The layer context is optimized for drawing the layer back into the graphics context that created it, and it offers better offscreen rendering performance than a bitmap graphics context.
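
A minimal sketch of the first two context types in current Swift naming (the initializers correspond to CGBitmapContextCreate and CGPDFContextCreateWithURL; the sizes and output path are illustrative). Note that the same fill calls work against either context, which is the device independence described above:

import UIKit

// Bitmap graphics context: draw offscreen, then capture a CGImage.
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmap = CGContext(data: nil, width: 256, height: 256,
                       bitsPerComponent: 8, bytesPerRow: 0,
                       space: colorSpace,
                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
bitmap.setFillColor(UIColor.orange.cgColor)
bitmap.fill(CGRect(x: 0, y: 0, width: 256, height: 256))
let offscreenImage = bitmap.makeImage()   // never shown on screen directly

// PDF graphics context: the same drawing calls now produce PDF content.
let url = URL(fileURLWithPath: NSTemporaryDirectory() + "demo.pdf")
var mediaBox = CGRect(x: 0, y: 0, width: 612, height: 792)   // US Letter, in points
if let pdf = CGContext(url as CFURL, mediaBox: &mediaBox, nil) {
    pdf.beginPDFPage(nil)
    pdf.setFillColor(UIColor.orange.cgColor)
    pdf.fill(CGRect(x: 72, y: 72, width: 200, height: 100))
    pdf.endPDFPage()
    pdf.closePDF()
}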

Quartz provides the following main types:

CGContext: represents a graphics context;

CGPath: uses vector graphics to create paths that can be filled and stroked;

CGImage: used to represent bitmaps;

CGLayer: used to represent a drawing layer that can be used for repeated drawing and for offscreen rendering;

CGPattern: used to represent patterns for repeated drawing;

CGShading and CGGradient: used to draw gradients;

CGColor and CGColorSpace: used for color and color space management;

CGFont: used to draw text;

CGPDFContentStream, CGPDFScanner, CGPDFPage, CGPDFObject, CGPDFStream, and CGPDFString: used to create, parse, and display PDF files.


3. OpenGL ES and GLKit

OpenGL ES is an open-standard, C-based graphics library for embedded systems, used for visualizing 2D and 3D data. OpenGL is designed to convert a set of graphics function calls into commands for the underlying graphics hardware (the GPU). The GPU executes these commands to carry out complex graphics operations and computations, allowing applications to exploit the GPU's 2D and 3D rendering capabilities at high performance and high frame rates.

The OpenGL ES specification does not define a drawing surface or a drawing window. Therefore, to use it, iOS must provide and create an OpenGL ES rendering context, create and configure a framebuffer that stores the results of drawing commands, and create and configure one or more render targets.

In iOS, the EAGLContext class provided by EAGL implements and supplies the rendering context and maintains the hardware state that OpenGL ES uses. EAGL is an Objective-C API that provides interfaces for integrating OpenGL ES with Core Animation and UIKit.

Before calling any OpenGL ES function, you must first initialize an EAGLContext object.

Every thread of an iOS app has a current context; when you call an OpenGL ES function, it uses or changes the state in this context.

The class method setCurrentContext: sets the current context of the current thread, and the class method currentContext of EAGLContext returns it. Before switching between two contexts on the same thread, you must call the glFlush function to ensure that previously submitted commands are delivered to the graphics hardware, as in the sketch below.
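
A minimal sketch of this setup (EAGLContext(api:) and EAGLContext.setCurrent are the current Swift spellings of the initializer and setCurrentContext:):

import OpenGLES

// Create an OpenGL ES 2.0 rendering context and make it current
// on this thread before issuing any OpenGL ES calls.
guard let context = EAGLContext(api: .openGLES2) else {
    fatalError("OpenGL ES 2.0 is not available on this device")
}
EAGLContext.setCurrent(context)

// ... OpenGL ES calls here use or change state in `context` ...

// Flush pending commands before making a different context current
// on this same thread.
glFlush()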

OpenGL ES content can be presented to different targets in two different ways: GLKit and CAEAGLLayer.

To create a full-screen view or to integrate OpenGL ES content with UIKit views, use GLKit. When GLKit is used, the GLKView class provided by GLKit implements the render target and creates and maintains the framebuffer.

To make OpenGL ES content part of a Core Animation layer, use CAEAGLLayer as the render target; in that case you create the framebuffer yourself and implement and control the entire rendering process.

GLKit is a set of Objective-C classes that provides an object-oriented interface for OpenGL ES, simplifying the development of OpenGL ES applications. GLKit supports four key areas of 3D application development:

1) The GLKView and GLKViewController classes provide a standard OpenGL ES view and an associated rendering loop. GLKView can be used as the render target for OpenGL ES content, and GLKViewController provides rendering control and animation. The view manages and maintains the framebuffer; the application only needs to draw into it.

2) GLKTextureLoader automatically loads texture images into the OpenGL ES context from any image format supported by iOS, performing appropriate conversions; it supports both synchronous and asynchronous loading.

3) The math library provides vector, matrix, quaternion, and matrix stack operations matching OpenGL ES 1.1 functionality.

4) The effect classes provide implementations of standard common shading effects. You can configure an effect and its associated vertex data, and the effect creates and loads the appropriate shaders. GLKit includes three configurable effect classes: GLKBaseEffect implements the lighting and material model of the OpenGL ES 1.1 specification, GLKSkyboxEffect provides a skybox effect, and GLKReflectionMapEffect extends GLKBaseEffect with reflection mapping.

Drawing with GLKView and OpenGL ES:

1) Create a GLKView object

GLKView objects can be created and configured programmatically or with Interface Builder.

When creating the view programmatically, first create a context and then call the initWithFrame:context: method.

When using Interface Builder, after loading the GLKView from the storyboard, create a context and set it as the view's context property.

To use GLKit on iOS, the context you create must be an OpenGL ES 2.0 or later rendering context.

The GLKit view automatically creates and configures all of its OpenGL ES framebuffer objects and renderbuffers. You can control the attributes of these objects through the view's drawable properties.

2) Draw the OpenGL ES content (issue drawing commands)

Drawing OpenGL ES content with a GLKit view takes three sub-steps: prepare the OpenGL ES infrastructure, issue the drawing commands, and present the rendered content to Core Animation. GLKit itself implements the first and third steps; you only need to implement the second, by drawing the content in the view's drawRect: method or in the view delegate's glkView:drawInRect: method, as in the sketch below.
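
A minimal sketch of both sub-steps (assuming the view controller's view is a GLKView, for example one loaded from a storyboard; the glClear call stands in for real drawing commands):

import GLKit

class GameViewController: GLKViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Step 1: create the context first, then hand it to the view.
        let context = EAGLContext(api: .openGLES2)!
        let glkView = view as! GLKView
        glkView.context = context
        glkView.drawableDepthFormat = .format24   // one of the drawable* properties
    }

    // Step 2: issue drawing commands; GLKit prepares the framebuffer
    // beforehand and presents the result to Core Animation afterwards.
    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0.2, 0.3, 0.4, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
    }
}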

The GLKViewController class maintains an animation rendering loop (with separate update and display methods) used to implement complex scenes with continuous animation.

The frame rate of the animation loop is reported by GLKViewController's framesPerSecond property and adjusted through its preferredFramesPerSecond property.

4. Other Graphics Programming APIs

1) Core Animation

Core Animation is a set of Objective-C APIs that implements a high-performance compositing engine and provides a simple, easy-to-use programming interface for adding smooth motion and dynamic feedback to the user interface.

Core Animation is the foundation on which UIKit implements animation and transforms, and it is also responsible for view compositing. Using Core Animation, you can create custom animations with fine-grained animation control, and build complex layered 2D views that support animation and transforms.

Core Animation is not a drawing system itself, but an infrastructure for compositing and manipulating display content in hardware. At the heart of this infrastructure are layer objects, which manage and manipulate display content. In iOS, every view corresponds to a Core Animation layer object, and like views, layers are organized into a layer tree. A layer captures the view's content as a bitmap that is easily manipulated by the graphics hardware. In most applications layers are used as a way to manage views, but you can also create standalone layers in the layer tree to display content that views do not support.

OpenGL ES content can also be integrated with Core Animation content.

To animate with Core Animation, you modify a layer's property values, which triggers the execution of an action object; different action objects implement different animations.

Core Animation provides the following classes that an application can use to support different animation types:

CAAnimation is an abstract public base class. CAAnimation adopts the CAMediaTiming and CAAction protocols to give an animation its timing (duration, speed, repeat count, and so on) and its action behavior (start and stop).

CAPropertyAnimation is an abstract subclass of CAAnimation that provides animation support for a layer property specified by a key path;

CABasicAnimation is a subclass of CAPropertyAnimation that provides simple interpolation for a layer property.

CAKeyframeAnimation, also a subclass of CAPropertyAnimation, provides keyframe animation support.

CATransition is a concrete subclass of CAAnimation that provides transition effects affecting the content of an entire layer.

CAAnimationGroup, also a subclass of CAAnimation, allows animation objects to be grouped together and run simultaneously.
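
A minimal sketch using these classes (the layer and key path are illustrative): a CABasicAnimation interpolates a single layer property, with its timing supplied by the CAMediaTiming properties it inherits:

import UIKit

let layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 50, height: 50)

// Interpolate the layer's opacity from 1 to 0 over half a second.
let fade = CABasicAnimation(keyPath: "opacity")
fade.fromValue = 1.0
fade.toValue = 0.0
fade.duration = 0.5          // CAMediaTiming timing
fade.repeatCount = 3         // CAMediaTiming repetition
layer.add(fade, forKey: "fade")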

2) Image I/O

Image I/O provides interfaces for reading and writing image data in most file formats. It centers on two types: the image source, CGImageSourceRef, and the image destination, CGImageDestinationRef.
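
A minimal sketch of the reading side (the file path is illustrative):

import ImageIO
import Foundation

// Create an image source for a file, then decode its first image.
let url = URL(fileURLWithPath: "/tmp/photo.jpg") as CFURL
if let source = CGImageSourceCreateWithURL(url, nil),
   let image = CGImageSourceCreateImageAtIndex(source, 0, nil) {
    print("Loaded image \(image.width)x\(image.height)")
}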

3) Sprite Kit

Sprite Kit is built on OpenGL ES. It uses the graphics hardware to render animation frames efficiently, so it can animate and render arbitrary 2D textured images, or game sprites, at high frame rates. The content it can display includes sprites, text, shapes described by CGPath, and video.

Animation and rendering in Sprite Kit are performed by an SKView object. The content of a game is organized into scenes represented by SKScene objects. A scene holds the sprites and other content to be rendered, and it also implements the per-frame logic and content processing.

An SKView displays only one scene at a time. While a scene is presented, the animations and per-frame logic associated with it are executed automatically. When switching scenes, the SKTransition class performs animations between the two scenes.
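
A minimal sketch (sizes and colors are illustrative):

import SpriteKit

// A scene containing one sprite.
let scene = SKScene(size: CGSize(width: 320, height: 480))
let sprite = SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40))
sprite.position = CGPoint(x: 160, y: 240)
scene.addChild(sprite)

// An SKView presents one scene at a time.
let skView = SKView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
skView.presentScene(scene)

// Switching scenes through an SKTransition animation.
let next = SKScene(size: scene.size)
skView.presentScene(next, transition: SKTransition.fade(withDuration: 1.0))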

4) SceneKit

SceneKit is an Objective-C framework built on 3D graphics technology. It contains a high-performance rendering engine and a high-level descriptive API. You can use it to create simple games and rich user interfaces. With SceneKit, you only need to use the descriptive API to describe the content of your scene (its geometry, materials, lights, and cameras) and the actions or animations you want that content to perform.

SceneKit content is organized into a tree structure of nodes called the scene graph. A scene contains a root node, which defines the scene's coordinate space, and other nodes that define its visible content. SceneKit displays the scene in a view, processing the scene graph and performing animation before each frame is rendered on the GPU.

SceneKit contains the following main classes:

SCNView and SCNSceneRenderer: SCNView is a view that displays or renders SceneKit content. SCNSceneRenderer is a protocol that defines important rendering methods adopted by views.

SCNScene: the container for all SceneKit content. A scene can be loaded from a file created with a 3D authoring tool or created programmatically, and it must be displayed in a view.

SCNNode: the basic building block of a scene, representing a node in the scene graph tree. The scene graph tree defines the logical structure between the nodes of a scene; the scene's visible content is provided by the geometry, lights, and cameras attached to nodes.

SCNGeometry, SCNLight, and SCNCamera are the classes for geometry, lights, and cameras, respectively. SCNGeometry provides shapes, text, or custom vertex data for a scene; SCNLight provides lighting for the scene; and SCNCamera provides the scene's point of view.

SCNMaterial: defines surface appearance attributes for an SCNGeometry object, specifying how the surface is shaded or textured and how it reflects light.
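
A minimal sketch assembling a scene graph from these classes (sizes and positions are illustrative):

import SceneKit
import UIKit

let scene = SCNScene()

// Geometry with a material, attached to a node.
let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
let material = SCNMaterial()
material.diffuse.contents = UIColor.green
boxNode.geometry?.materials = [material]
scene.rootNode.addChildNode(boxNode)

// A camera provides the point of view.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(cameraNode)

// A light provides the scene's lighting.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .omni
lightNode.position = SCNVector3(x: 0, y: 5, z: 5)
scene.rootNode.addChildNode(lightNode)

// An SCNView displays the scene.
let scnView = SCNView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
scnView.scene = scene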

Animating SceneKit content:

SceneKit animations are based on the Core Animation framework and can be created implicitly or explicitly.

Implicit creation is achieved through the animatable properties of nodes: SceneKit automatically combines all the changes made to a scene during one run-loop pass, including changes to node properties, into a single atomic operation called a transaction, represented by the SCNTransaction class. When the animation duration of the SCNTransaction class is set to a non-zero value, all changes to animatable node properties are animated automatically.

The following code snippet shows implicit animation:

func fallAndFade(sender: AnyObject) {
    SCNTransaction.setAnimationDuration(1.0)
    textNode.position = SCNVector3(x: 0.0, y: -10.0, z: 0.0)
    textNode.opacity = 0.0
}

When creating an animation explicitly, you choose a CAAnimation subclass to create the specific type of animation, use key-value coding to specify the animated property, set the animation parameters, and then attach the animation to one or more elements of the scene. You can use the different Core Animation classes to combine or sequence several animations, or to create animations that interpolate a property between several keyframe values.

The following code snippet shows an example of creating an animation explicitly:

let animation = CABasicAnimation(keyPath: "geometry.extrusionDepth")
animation.fromValue = 0.0
animation.toValue = 100.0
animation.duration = 1.0
animation.autoreverses = true
animation.repeatCount = Float.infinity
textNode.addAnimation(animation, forKey: "extrude")

SceneKit also supports using the SCNSceneSource class to load CAAnimation objects from a scene file and attach them to SCNNode objects.

5) Metal

The Metal framework is a low-level API, similar to OpenGL ES, that provides GPU-accelerated advanced 3D graphics rendering and data-parallel compute tasks. Metal interacts with the 3D graphics hardware through a fine-grained, low-level, modern API that supports stream computing, covering the organization, processing, submission, and management of the associated resources and data. Metal aims to minimize CPU overhead while executing GPU tasks and to eliminate performance bottlenecks when running parallel graphics and data computations on the GPU, and it can efficiently use multiple threads to create and submit commands to the GPU in parallel.

Metal also provides a shading language for writing the graphics shading and compute functions used by Metal applications. Code written in the Metal shading language can be compiled along with the application code at build time and then loaded onto the GPU for execution at runtime; compiling Metal shading language code at runtime is also supported.

The Metal architecture includes the following important classes and protocols:

1. MTLDevice protocol and object

An MTLDevice represents a GPU that executes commands. The MTLDevice protocol defines the interface for it, including interfaces for querying device capabilities and for creating other device-specific objects, for example creating a command queue, allocating a buffer from memory, and creating a texture.

The application calls the MTLCreateSystemDefaultDevice function to obtain the MTLDevice object the system can use.


2. Commands and command encoders

In the Metal framework, 3D graphics rendering commands, compute commands, and blit commands must be encoded into a format the GPU can recognize before they are submitted to a specific device's GPU for execution.

The Metal framework provides an encoder protocol for each kind of command:

MTLRenderCommandEncoder protocol: provides an interface for encoding the 3D graphics rendering commands to be executed during a single render pass. An MTLRenderCommandEncoder object represents the render state and drawing commands of one graphics rendering pass.

MTLComputeCommandEncoder protocol: provides an interface for encoding data-parallel compute tasks.

MTLBlitCommandEncoder protocol: provides an interface for encoding simple copy operations between buffers and textures.

Only one command encoder at a time can be active appending commands to a given command buffer; that is, a command encoder must be ended before another command encoder that uses the same command buffer can be created.

To support parallel encoding of multiple tasks, Metal provides the MTLParallelRenderCommandEncoder protocol, which allows multiple MTLRenderCommandEncoder objects running in different threads to encode commands destined for the same command buffer. Each thread has its own command encoder object, and that encoder may be accessed only by its own thread.

The MTLParallelRenderCommandEncoder object thus allows the command encoding for one render pass to be split across multiple command encoders, using multithreaded parallelism to improve efficiency.

A command encoder is finished by calling its endEncoding method.

Creating a command encoder object:

Command encoder objects are created from an MTLCommandBuffer object. The MTLCommandBuffer protocol defines the following methods, each creating a command encoder of the corresponding type:

renderCommandEncoderWithDescriptor: creates an MTLRenderCommandEncoder object for executing graphics rendering tasks. The method's MTLRenderPassDescriptor parameter represents the destination of the encoded rendering commands: a set of attachments that can contain up to four color attachments, one depth attachment, and one stencil attachment. You specify the drawing destination through the attachment properties of the MTLRenderPassDescriptor object.

The computeCommandEncoder method creates an MTLComputeCommandEncoder object for data-parallel compute tasks.

The blitCommandEncoder method creates an MTLBlitCommandEncoder object for memory blit operations, texture fill operations, and mipmap generation.

parallelRenderCommandEncoderWithDescriptor: creates an MTLParallelRenderCommandEncoder object; the render target is specified by its MTLRenderPassDescriptor parameter.


3. Command buffer: the MTLCommandBuffer object and protocol

The commands produced by a command encoder are appended to an MTLCommandBuffer object, called the command buffer; the command buffer is then committed to the GPU, which executes the commands it contains.

The MTLCommandBuffer protocol defines the interface for command buffer objects and provides methods for creating command encoders, committing the command buffer to its command queue, and checking status.

A command buffer object contains encoded commands intended for execution on a specific device (GPU). Once all encoding is complete, the command buffer itself is committed to a command queue and marked as ready for execution by the GPU.

In a typical application, the rendering commands for one frame are encoded into one command buffer using one thread.

Creating an MTLCommandBuffer object, and its methods:

An MTLCommandBuffer object is created by the commandBuffer method or the commandBufferWithUnretainedReferences method of MTLCommandQueue.

An MTLCommandBuffer object can only be committed to the MTLCommandQueue object that created it.

An MTLCommandBuffer object also implements the following methods defined by the protocol:

The enqueue method reserves a place for the command buffer in the command queue.

The commit method commits the MTLCommandBuffer object for execution.

addScheduledHandler: registers a block of code to be called when the command buffer is scheduled. You can register multiple scheduled handlers for a command buffer object.

The waitUntilScheduled method waits until the command buffer has been scheduled and all scheduled handlers registered for it have finished executing.

addCompletedHandler: registers a block of code to be called after the device has finished executing the command buffer. Likewise, you can register multiple completion handlers for a command buffer object.

The waitUntilCompleted method waits until the device has finished executing the command buffer and all completion handlers registered for it have finished.

presentDrawable: is a convenience method that presents the contents of a displayable resource (a CAMetalDrawable object) once the command buffer has been scheduled.
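
A minimal sketch of this lifecycle in current Swift naming (makeCommandQueue and makeCommandBuffer correspond to the Objective-C methods named above; the handler body is illustrative):

import Metal

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!

// Register a completion handler, then commit and wait.
commandBuffer.addCompletedHandler { _ in
    print("GPU finished executing this command buffer")
}
commandBuffer.commit()
commandBuffer.waitUntilCompleted()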


4. MTLCommandQueue protocol and command queue object

The MTLCommandQueue protocol defines a queue that contains command buffers. The command queue organizes the execution order of the command buffer objects it contains and controls when the commands in those command buffers are executed.

The MTLCommandQueue protocol defines the interface for command queues; its main interface is the creation of command buffer objects.

Creating an MTLCommandQueue object:

Use the newCommandQueue method or the newCommandQueueWithMaxCommandBufferCount: method of the MTLDevice object to create a command queue object.


(Figure: relationships between the Metal objects described above.)

Metal resource objects such as the buffers and textures used for rendering must be set on a render command encoder.

The state specified for a render command encoder includes the render pipeline state, the depth/stencil state, and the sampler state.

A blit command encoder is associated with a buffer and a texture and performs blit operations between the two.

A command encoder can bind three kinds of MTLResource Metal resource objects when specifying graphics or compute functions:

MTLBuffer represents unformatted memory that can contain any type of data. Buffers are typically used for polygon vertex data, shader constants, and compute state data.

MTLTexture represents formatted image data with a specific texture type and pixel format. A texture object can serve as a source for vertex, fragment, or compute functions, or as an output destination for graphics rendering in a render pass descriptor.

An MTLSamplerState object defines the addressing, filtering, and other properties used when a graphics or compute function samples an MTLTexture.

The render command encoder, MTLRenderCommandEncoder, uses the setVertex* and setFragment* method families to bind one or more resources as arguments to the corresponding shading functions.

5. CAMetalLayer object and CAMetalDrawable protocol

Core Animation defines the CAMetalLayer class and the CAMetalDrawable protocol to provide a layer-backed view for presenting Metal content. The CAMetalLayer object defines the position, size, and visual attributes (background color, border, shadow) of the content to be presented, as well as the resources Metal uses to present it. The CAMetalDrawable protocol extends MTLDrawable and specifies that a displayable resource object must conform to the MTLTexture protocol, so that the object can serve as the destination of rendering commands.

To render Metal content into a CAMetalLayer object, create a CAMetalDrawable object for each rendering pass and obtain the MTLTexture object it contains; then use that texture in the color attachment of the render pass (its pixel format must also match the color attachment of the MTLRenderPipelineDescriptor) to designate it as the destination of the graphics rendering commands.

A CAMetalDrawable object is obtained by calling the nextDrawable method of the CAMetalLayer object.

After obtaining a displayable resource to serve as the target of the graphics commands, you can complete the drawing with the following steps; a code sketch follows the list.

1) First create an MTLCommandQueue object, then use it to create an MTLCommandBuffer object;

2) Create an MTLRenderPassDescriptor object specifying the set of attachments that serve as the destination of the commands encoded into the command buffer; then use this MTLRenderPassDescriptor object to create an MTLRenderCommandEncoder object;

3) Create Metal resource objects to store the data used for drawing, such as vertex coordinates and vertex color data; call the setVertexBuffer:offset:atIndex: and setFragmentBuffer:offset:atIndex: methods (and the related setVertex*/setFragment* variants) of MTLRenderCommandEncoder to bind the resources to the render encoder;

4) Create an MTLRenderPipelineDescriptor object and set its vertexFunction and fragmentFunction properties, using the corresponding MTLFunction shading-function objects obtained from the Metal shading language code;

5) Use MTLDevice's newRenderPipelineStateWithDescriptor:error: method (or one of its variants) to create an MTLRenderPipelineState object from the pipeline descriptor; then call MTLRenderCommandEncoder's setRenderPipelineState: method to set the pipeline state on the render encoder object;

6) Call MTLRenderCommandEncoder's drawPrimitives:vertexStart:vertexCount: method to draw the graphics, then call MTLRenderCommandEncoder's endEncoding method to finish encoding the render pass, and finally call MTLCommandBuffer's commit method to execute the whole set of drawing commands on the GPU.
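
A minimal sketch of steps 1-6 in current Swift naming (the shader function names are assumptions; for brevity the pipeline state is built per frame, although in a real renderer it would be created once and reused):

import Metal
import QuartzCore

func renderFrame(device: MTLDevice, layer: CAMetalLayer, library: MTLLibrary) {
    // 1) Command queue and command buffer.
    let queue = device.makeCommandQueue()!
    let commandBuffer = queue.makeCommandBuffer()!

    // A displayable texture from the layer (see section 5 below).
    guard let drawable = layer.nextDrawable() else { return }

    // 2) Render pass descriptor describing the color attachment,
    //    then a render command encoder built from it.
    let passDesc = MTLRenderPassDescriptor()
    passDesc.colorAttachments[0].texture = drawable.texture
    passDesc.colorAttachments[0].loadAction = .clear
    passDesc.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDesc)!

    // 3) Vertex data in an MTLBuffer, bound to the encoder.
    var vertices: [Float] = [ 0, 1, 0, 1,   -1, -1, 0, 1,   1, -1, 0, 1 ]
    let vertexBuffer = device.makeBuffer(bytes: &vertices,
                                         length: vertices.count * MemoryLayout<Float>.size)!
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)

    // 4) Pipeline descriptor with vertex and fragment functions
    //    ("vertexShader" / "fragmentShader" are assumed names).
    let pipeDesc = MTLRenderPipelineDescriptor()
    pipeDesc.vertexFunction = library.makeFunction(name: "vertexShader")
    pipeDesc.fragmentFunction = library.makeFunction(name: "fragmentShader")
    pipeDesc.colorAttachments[0].pixelFormat = layer.pixelFormat

    // 5) Pipeline state, set on the encoder.
    let pipeline = try! device.makeRenderPipelineState(descriptor: pipeDesc)
    encoder.setRenderPipelineState(pipeline)

    // 6) Draw, end encoding, present the drawable, and commit.
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}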

