iOS Graphics Programming Summary

iOS enables graphics programming through three APIs: UIKit, Core Graphics (Quartz 2D), and OpenGL ES together with GLKit.

These APIs draw into a graphics context. A graphics context holds the drawing parameters and all the device-specific information required for drawing. iOS offers screen, offscreen-bitmap, and PDF graphics contexts, which render graphics and images onto the screen, into a bitmap, or into a PDF file respectively. Drawing in a screen graphics context is limited to instances of the UIView class and its subclasses and draws directly onto the screen; rendering into an offscreen bitmap or PDF graphics context is not displayed directly on the screen.

First, the UIKit API

UIKit is a set of Objective-C APIs that provide Objective-C wrappers around Quartz's line, image, and color operations, along with animation support for 2D drawing, image processing, and user-interface-level behavior.

UIKit includes classes such as UIBezierPath (drawing lines, arcs, ellipses, and other shapes), UIImage (displaying images), UIColor (color handling), and UIFont and UIScreen (providing font and screen information), as well as the ability to draw into bitmap and PDF graphics contexts, support for standard views, and support for printing.

In UIKit, a UIView itself automatically creates a graphics context (corresponding to the Core Graphics CGContextRef type) as the current drawing context when it draws. You can call the UIGraphicsGetCurrentContext function while drawing to obtain the current graphics context.
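
For illustration, here is a minimal sketch in current Swift syntax (the view class and what it draws are invented for this example) of drawing inside a UIView, where UIKit has already made a graphics context current:

import UIKit

class BadgeView: UIView {
    override func draw(_ rect: CGRect) {
        // UIKit has already created and pushed a graphics context,
        // so UIBezierPath draws into it directly.
        UIColor.blue.setFill()
        UIBezierPath(ovalIn: rect.insetBy(dx: 4, dy: 4)).fill()

        // The same context is available as a CGContext when needed.
        if let context = UIGraphicsGetCurrentContext() {
            context.setStrokeColor(UIColor.black.cgColor)
            context.stroke(rect.insetBy(dx: 2, dy: 2))
        }
    }
}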

Second, the Core Graphics and Quartz 2D API

Core Graphics is a set of C-based APIs that support the drawing of vector graphics, lines, shapes, patterns, paths, gradients, bitmap images, and PDF content.

Quartz 2D is the 2D rendering engine within Core Graphics. Quartz is resolution- and device-independent, and provides path rendering, anti-aliased rendering, gradient fills, patterns, images, transparent drawing and transparency layers, shadows, color management, coordinate transformations, fonts, offscreen rendering, and PDF document creation, display, and parsing.

Quartz 2D can be used together with all other graphics and animation technologies (such as Core Animation, OpenGL ES, and UIKit).

Quartz draws using the painter's model.

The graphics context used in Quartz is likewise represented by the CGContext type.

A graphics context is the drawing destination in Quartz. When drawing with Quartz, all device-specific characteristics are contained in the particular type of graphics context you use, so the same drawing routines can render the same image to different devices simply by being handed different graphics contexts; this is how Quartz achieves device-independent drawing.

Quartz provides applications with several kinds of graphics contexts, including:

1) Bitmap graphics context, used to create a bitmap.

It is created with the CGBitmapContextCreate function (see the sketch after this list).

2) PDF graphics context, used to create a PDF file.

The Quartz 2D API provides two functions for creating a PDF graphics context:

CGPDFContextCreateWithURL, which creates a PDF graphics context given a Core Foundation URL specifying where the PDF output should go (also shown in the sketch after this list).

CGPDFContextCreate, which is used when you want the PDF output sent to a data consumer.

3) Window graphics context, used to draw into a window.

4) Layer context (CGLayer), an offscreen drawing destination associated with another graphics context. A layer context is optimized for drawing the layer back into the graphics context that created it, and it offers better offscreen rendering performance than a bitmap graphics context.
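
As a sketch of the first two context types in current Swift syntax (the sizes and the output path are illustrative, not from the original):

import Foundation
import CoreGraphics

// Bitmap graphics context: 256 x 256 pixels, RGBA, 8 bits per component
// (CGBitmapContextCreate in the C API).
let bitmapContext = CGContext(data: nil,
                              width: 256, height: 256,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)

// PDF graphics context writing to a URL (CGPDFContextCreateWithURL in the C API).
var mediaBox = CGRect(x: 0, y: 0, width: 612, height: 792)   // US Letter, in points
let pdfContext = CGContext(URL(fileURLWithPath: "/tmp/out.pdf") as CFURL,
                           mediaBox: &mediaBox, nil)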

The main types Quartz provides include:

CGContext: represents a graphics context;

CGPath: builds vector paths that can be filled and stroked;

CGImage: represents bitmaps;

CGLayer: represents a drawing layer usable for repeated drawing and for offscreen drawing;

CGPattern: represents patterns, used for repeated drawing;

CGShading and CGGradient: used to draw gradients (see the sketch after this list);

CGColor and CGColorSpace: used for color and color-space management;

CGFont: used to draw text;

CGPDFContentStream, CGPDFScanner, CGPDFPage, CGPDFObject, CGPDFStream, CGPDFString, and so on: used to create, parse, and display PDF files.
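
As a brief sketch of gradient drawing in current Swift syntax (the function and its colors are illustrative; the context is assumed to be a CGContext obtained elsewhere, for example from UIGraphicsGetCurrentContext):

import UIKit

func drawGradient(in context: CGContext, rect: CGRect) {
    let colors = [UIColor.white.cgColor, UIColor.black.cgColor] as CFArray
    // Locations 0 and 1 map the two colors to the start and end points.
    let locations: [CGFloat] = [0.0, 1.0]
    guard let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                    colors: colors,
                                    locations: locations) else { return }
    // A vertical linear gradient from the top to the bottom of the rect.
    context.drawLinearGradient(gradient,
                               start: CGPoint(x: rect.minX, y: rect.minY),
                               end: CGPoint(x: rect.minX, y: rect.maxY),
                               options: [])
}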

Third, OpenGL ES and GLKit

OpenGL ES is a versatile, open-standard, C-based graphics library for embedded systems, used to visualize 2D and 3D data. OpenGL is designed to translate a set of graphics calls into commands for the underlying graphics hardware (the GPU). Because the GPU carries out the complex graphics operations and computations, this enables high-performance, high-frame-rate use of the 2D and 3D rendering capabilities the GPU provides.

The OpenGL ES specification itself does not define a drawing surface or drawing window, so to use it iOS must provide and create an OpenGL ES rendering context, create and configure a framebuffer that stores the results of drawing commands, and create and configure one or more render targets.

In iOS, the EAGLContext class provided by EAGL implements this rendering context and maintains the hardware state that OpenGL ES uses. EAGL is an Objective-C API providing the interfaces that integrate OpenGL ES with Core Animation and UIKit.

You must initialize an EAGLContext object before calling any OpenGL ES functions.

Each thread in an iOS app has a current context; when you call an OpenGL ES function, it is this context's state that is used or altered.

The EAGLContext class method setCurrentContext: sets the current context of the current thread, and the class method currentContext returns the current thread's current context. Before switching between two contexts on the same thread, you must call the glFlush function to ensure that previously submitted commands are delivered to the graphics hardware.
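
A short sketch in current Swift naming (where setCurrentContext: is imported as EAGLContext.setCurrent):

import OpenGLES

// Create a context for the OpenGL ES 2.0 API; initialization can fail.
guard let context = EAGLContext(api: .openGLES2) else {
    fatalError("OpenGL ES 2.0 is not available")
}

// Make it the current context of this thread before any GL calls.
EAGLContext.setCurrent(context)

// ... issue OpenGL ES commands ...

// Flush before switching this thread to a different context.
glFlush()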

OpenGL ES content can be rendered to different destinations in two ways: GLKit and CAEAGLLayer.

To create a full-screen view or to integrate OpenGL ES content with UIKit views, use GLKit. GLKit provides the GLKView class, which itself implements the rendering target, creating and maintaining the framebuffer.

To make OpenGL ES content part of a Core Animation layer, use a CAEAGLLayer as the rendering target; in that case you must create the framebuffer yourself and implement and control the entire drawing process.

GLKit is a set of Objective-C classes that provide an object-oriented interface to OpenGL ES and simplify the development of OpenGL ES applications. GLKit supports four key areas of 3D application development:

1) The GLKView and GLKViewController classes provide a standard OpenGL ES view and the associated rendering loop. GLKView serves as the rendering target for OpenGL ES content, and GLKViewController provides control and animation of the content rendering. The view manages and maintains its own framebuffer; the application simply draws into that framebuffer.

2) GLKTextureLoader gives applications a way to automatically load texture images into an OpenGL ES context from any of the image formats supported by iOS, performing the appropriate conversions, and supports both synchronous and asynchronous loading.

3) A math library that provides vector, matrix, and quaternion implementations, along with OpenGL ES 1.1-style matrix stack operations.

4) The effect classes provide standard implementations of common shading effects. You can configure an effect and its associated vertex data, and the effect creates and loads the appropriate shaders. GLKit includes three configurable shading-effect classes: GLKBaseEffect implements the lighting and material model of the OpenGL ES 1.1 specification, GLKSkyboxEffect provides an implementation of the skybox effect, and GLKReflectionMapEffect adds reflection-mapping support on top of GLKBaseEffect. A configuration sketch follows this list.
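
A minimal GLKBaseEffect configuration sketch in current Swift (the light and transform values are illustrative):

import GLKit

let effect = GLKBaseEffect()

// Configure an OpenGL ES 1.1-style light and a modelview transform.
effect.light0.enabled = GLboolean(GL_TRUE)
effect.light0.diffuseColor = GLKVector4Make(1.0, 0.4, 0.4, 1.0)
effect.transform.modelviewMatrix = GLKMatrix4MakeTranslation(0.0, 0.0, -5.0)

// Creates and loads the matching shaders; call before issuing draw commands.
effect.prepareToDraw()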

The process of drawing with GLKView and OpenGL ES:

1) Create a GLKView object

GLKView objects can be created and configured programmatically or with Interface Builder.

When creating the view programmatically, first create a context and then call the initWithFrame:context: method.

When using Interface Builder, after loading the GLKView from the storyboard, create a context and set it as the view's context property.

Using GLKit in iOS requires a graphics context of OpenGL ES 2.0 or later.

The GLKit view automatically creates and configures all of its OpenGL ES framebuffer objects and renderbuffers; you control the properties of these objects by modifying the view's drawable properties, as in the sketch below.
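
A sketch of programmatic creation in current Swift (the frame is illustrative):

import GLKit

// GLKit requires an OpenGL ES 2.0 or later context.
guard let context = EAGLContext(api: .openGLES2) else { fatalError() }

let glkView = GLKView(frame: UIScreen.main.bounds, context: context)

// The drawable properties control the framebuffer the view maintains.
glkView.drawableColorFormat = .RGBA8888
glkView.drawableDepthFormat = .format24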

2) Draw the OpenGL content (issue drawing commands)

Drawing OpenGL content with a GLKit view takes three sub-steps: prepare the OpenGL ES infrastructure, issue the drawing commands, and present the rendered content to Core Animation. The GLKView class itself implements the first and third steps; the user implements the second by issuing the appropriate OpenGL ES drawing commands in the view's drawRect: method or in the view delegate's glkView:drawInRect: method.

The GLKViewController class maintains an animation rendering loop (with two methods, update and display) for continuous animation of complex scenes.

The frame rate of this rendering loop is reported by the GLKViewController property framesPerSecond and adjusted through its preferredFramesPerSecond property.
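A sketch of the drawing step in current Swift (the controller and the clear color are illustrative):

import GLKit

class RenderController: GLKViewController {
    // Called by the rendering loop each frame: issue OpenGL ES commands here.
    override func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0.0, 0.4, 0.8, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        // ... draw the scene ...
    }

    // Per-frame logic; the loop rate follows preferredFramesPerSecond.
    func update() {
        // advance animation state here
    }
}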

Fourth, other APIs related to graphics programming

1) Core Animation

Core Animation is a set of Objective-C APIs implementing a high-performance compositing engine, with an easy-to-use programming interface for adding smooth motion and dynamic feedback to the user interface.

Core Animation is the foundation UIKit uses to implement animation and transforms, and it is also responsible for compositing views. Using Core Animation you can create custom animations with fine-grained animation control, and build complex layered 2D views that support animation and transforms.

Core Animation is not itself part of the drawing system; it is the infrastructure for compositing and manipulating displayed content in hardware. At the heart of this infrastructure are layer objects, which manage and manipulate the displayed content. In iOS, each view is backed by a Core Animation layer object, and like views, layers are organized into a layer tree. A layer captures the view's content as a bitmap that the graphics hardware can manipulate cheaply. In most applications layers are used as a way of managing views, but you can also create standalone layers in a layer tree to display content that views do not support.

OpenGL ES content can also be integrated with Core Animation content.

To animate with Core Animation, you modify a layer's property values; this triggers the execution of an action object, and different action objects implement different animations.

Core Animation provides a set of classes that applications can use to support different types of animation:

CAAnimation is the abstract base class. CAAnimation adopts the CAMediaTiming and CAAction protocols to provide animation timing (duration, speed, repetition, and so on) and action behavior (start, stop, and so on).

CAPropertyAnimation is an abstract subclass of CAAnimation that supports animating a layer property specified by a key path.

CABasicAnimation is a concrete subclass of CAPropertyAnimation that provides simple interpolation of a layer property (a sketch follows this list).

CAKeyframeAnimation is another concrete subclass of CAPropertyAnimation that provides support for keyframe animation.

CATransition is a concrete subclass of CAAnimation that provides transition effects affecting the content of an entire layer.

CAAnimationGroup is also a subclass of CAAnimation; it allows animation objects to be grouped together and run at the same time.
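
For example, a minimal CABasicAnimation in current Swift (the key path and values are illustrative):

import QuartzCore

let layer = CALayer()   // in practice, a view's backing layer

let fade = CABasicAnimation(keyPath: "opacity")
fade.fromValue = 1.0
fade.toValue = 0.0
fade.duration = 0.5

// Attaching the animation to the layer starts it.
layer.add(fade, forKey: "fade")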

2) Image I/O

Image I/O provides interfaces for reading and writing image data in most image file formats. It is built mainly around two opaque types: the image source, CGImageSourceRef, and the image destination, CGImageDestinationRef.
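
A reading sketch in current Swift (the file path is illustrative):

import Foundation
import ImageIO

let url = URL(fileURLWithPath: "/tmp/photo.jpg") as CFURL
if let source = CGImageSourceCreateWithURL(url, nil),
   let image = CGImageSourceCreateImageAtIndex(source, 0, nil) {
    // image is a CGImage ready to be drawn or inspected.
    print(image.width, image.height)
}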

3) Sprite Kit

Sprite Kit is built on top of OpenGL ES. It uses the graphics hardware to render animation frames efficiently, so it can animate and render any 2D textured image, or game sprite, at high frame rates; the content can include sprites, text, CGPath shapes, video, and more.

Animation and rendering in Sprite Kit are performed by an SKView view object. A game's content is organized into scenes, represented as SKScene objects. A scene holds the sprites and other content to be rendered, and it also implements the per-frame logic and content handling.

An SKView renders only one scene at a time. While a scene is rendered, the animations and per-frame logic associated with it run automatically; to switch scenes, use the SKTransition class to animate between the two.
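
A minimal sketch in current Swift (sizes and colors are illustrative):

import SpriteKit

let scene = SKScene(size: CGSize(width: 320, height: 480))
scene.addChild(SKSpriteNode(color: .red, size: CGSize(width: 40, height: 40)))

// An SKView renders one scene at a time; SKTransition animates the switch.
let skView = SKView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
skView.presentScene(scene, transition: SKTransition.fade(withDuration: 1.0))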

4) SceneKit

SceneKit is an Objective-C framework built on 3D graphics technology that includes a high-performance rendering engine and a high-level, descriptive API. You can use the framework to create simple games and rich user interfaces: with SceneKit's descriptive API you describe the content of your scene (geometry, materials, lights, cameras, and so on) and the actions or animations you want performed on that content.

SceneKit content is organized into a tree structure of nodes called the scene graph. A scene contains a root node, which defines the scene's coordinate space, and other nodes that define the scene's visible content. SceneKit displays the scene in a view, processing the scene graph and performing animation before each frame is rendered on the GPU.

The main classes in SceneKit are:

Scnview & Scnscenerenderer:scnview is a view that displays or renders scenekit content. Scnscenerenderer is a protocol that defines some important methods for viewing.

Scnscene: A scene that is a container for all scenekit content. Scenes can be loaded from a file created using 3D authoring tools, or they can be created programmatically, and the scene needs to be displayed on a single view.

Scnnode: A basic building block of a scene that represents a node of the scene graph tree. The scene graph tree defines the logical structure between nodes on the scene, providing the visual content of the scene by attaching geometries, lights, and cameras to a node.

Scngeometry, Scnlight, Scncamera: respectively, geometries, lights, cameras corresponding classes. Scngeometry provides shape, text, or custom vertex data for a scene, Scnlight provides a shadow effect for the scene, and Scncamera provides a viewpoint for the scene.

Scnmaterial: Defines the surface appearance properties for the Scngeometry object, stipulating how the object's surface is shaded or textured, and how the light reacts.
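
A sketch in current Swift (the geometry, positions, and view frame are illustrative):

import SceneKit

let scene = SCNScene()

// A node with a geometry attached supplies visible content.
let boxNode = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
scene.rootNode.addChildNode(boxNode)

// A node with a camera attached supplies the point of view.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(cameraNode)

// An SCNView displays the scene.
let view = SCNView(frame: CGRect(x: 0, y: 0, width: 320, height: 480))
view.scene = scene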

Animating SceneKit content:

SceneKit animation is based on the Core Animation framework; animations can be created either implicitly or explicitly.

Implicit creation works through the animatable properties of nodes: during a pass of the run loop, SceneKit automatically combines all the changes made to a scene's node properties into a single atomic operation called a transaction, represented by the SCNTransaction class. When the SCNTransaction class's animation duration is set to a nonzero value, all changes to the nodes' animatable properties are animated automatically.

As shown in the following code snippet:

func fallAndFade(sender: AnyObject) {
    SCNTransaction.setAnimationDuration(1.0)
    textNode.position = SCNVector3(x: 0.0, y: -10.0, z: 0.0)
    textNode.opacity = 0.0
}

When creating an animation explicitly, you choose one of the CAAnimation subclasses to create a specific type of animation, use key-value coding to specify the properties to animate and the animation's parameters, and then attach the created animation to one or more elements of the scene. You can use the different Core Animation classes to combine or sequence several animations, or to create animations that interpolate a property between several keyframe values.

The following code snippet is an example of creating an animation explicitly:

let animation = CABasicAnimation(keyPath: "geometry.extrusionDepth")
animation.fromValue = 0.0
animation.toValue = 100.0
animation.duration = 1.0
animation.autoreverses = true
animation.repeatCount = Float.infinity
textNode.addAnimation(animation, forKey: "extrude")

SceneKit also supports loading CAAnimation animation objects from a scene file with the SCNSceneSource class and then attaching them to SCNNode objects.

5) Metal

The Metal framework is a low-level API, similar in role to OpenGL ES, that provides GPU-accelerated 3D graphics rendering and data-parallel compute tasks. Metal is responsible for interacting with the 3D graphics hardware, providing a fine-grained, low-level, modern API for organizing, processing, submitting, and managing the resources and data associated with graphics and compute commands. The goals of Metal are to minimize the CPU overhead of issuing GPU work, to eliminate performance bottlenecks when the GPU performs graphics and data-parallel compute operations, and to make effective use of multithreading so that commands can be created and submitted to the GPU in parallel.

Metal also provides a shading language for writing the graphics shaders and compute functions that Metal applications use. Code written in the Metal shading language can be compiled along with the application code at build time and then loaded onto the GPU at run time; Metal shading language code can also be compiled at run time.

The Metal architecture includes several important classes and protocols:

1. The MTLDevice protocol and device objects

An MTLDevice represents a GPU that executes commands. The MTLDevice protocol defines the interface to such a device, including methods for querying the device's capabilities and for creating other device-specific objects, such as command queues, buffers allocated from memory, and textures.

An app obtains the MTLDevice object the system can use by calling the MTLCreateSystemDefaultDevice function, as sketched below.
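
In current Swift naming (where the Objective-C newCommandQueue method is imported as makeCommandQueue), obtaining the device looks like this:

import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("Metal is not supported on this device")
}

// Device-specific objects, such as the command queue, come from the device.
let commandQueue = device.makeCommandQueue()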

2. Commands and command encoders

In the Metal framework, 3D graphics rendering commands, compute commands, and blit commands must be encoded appropriately before being committed to a particular GPU, so that the GPU can recognize and execute them.

The Metal framework provides an encoder protocol for each kind of command:

MTLRenderCommandEncoder protocol: provides an interface for encoding the 3D graphics rendering commands executed during a single render pass. An MTLRenderCommandEncoder object represents the render state and drawing commands of one graphics rendering pass.

MTLComputeCommandEncoder protocol: provides an interface for encoding data-parallel compute tasks.

MTLBlitCommandEncoder protocol: provides an interface for encoding simple copy operations between buffers and textures.

Only one command encoder at a time can be active adding commands to a given command buffer; each command encoder must be ended before another command encoder is created for the same command buffer.

To support parallel encoding of multiple tasks, Metal provides the MTLParallelRenderCommandEncoder protocol, which allows multiple subordinate render command encoders, each running on a different thread, to encode commands into the same command buffer. Each thread has its own command encoder object, and each of those encoder objects may be accessed only by its own thread.

The MTLParallelRenderCommandEncoder object allows the command encoding of one render pass to be split across multiple command encoders and processed on multiple threads in parallel, improving processing efficiency.

A command encoder object is finished by calling its endEncoding method.

To create a command encoder object:

Command encoder objects are created by an MTLCommandBuffer object. The MTLCommandBuffer protocol defines the following methods for creating command encoders of the appropriate type:

renderCommandEncoderWithDescriptor: creates an MTLRenderCommandEncoder object for performing graphics rendering tasks. The method's MTLRenderPassDescriptor parameter represents the destination of the encoded rendering commands: a collection of attachment points that can include up to four color attachments, one depth attachment, and one stencil attachment. The render targets are specified through the attachment properties of the MTLRenderPassDescriptor object.

The computeCommandEncoder method creates an MTLComputeCommandEncoder object for data-parallel compute tasks.

The blitCommandEncoder method creates an MTLBlitCommandEncoder object for operations such as memory blits, texture fills, and mipmap generation.

The parallelRenderCommandEncoderWithDescriptor: method creates an MTLParallelRenderCommandEncoder object; the render target is specified by the MTLRenderPassDescriptor parameter.

3. The MTLCommandBuffer protocol and command buffer objects

Commands encoded by a command encoder are appended to an MTLCommandBuffer object, called a command buffer; the command buffer object is then submitted to the GPU to execute the commands it contains.

The MTLCommandBuffer protocol defines the interface of command buffer objects and provides operations such as creating command encoders, committing the command buffer to a command queue, and checking status.

A command buffer object contains encoded commands intended for execution on a specific device (GPU). Once all encoding is complete, the command buffer must itself be committed to a command queue, which marks the command buffer as ready so that it can be executed by the GPU.

In a typical application, the rendering commands for one frame are encoded into a command buffer using a single thread.

Creating an MTLCommandBuffer object, and its methods:

An MTLCommandBuffer object is created by the MTLCommandQueue's commandBuffer method or its commandBufferWithUnretainedReferences method.

An MTLCommandBuffer object can be committed only to the MTLCommandQueue that created it.

An MTLCommandBuffer object also implements the following methods defined by the protocol:

The enqueue method reserves a place for the command buffer in the command queue.

The commit method submits the MTLCommandBuffer object for execution.

The addScheduledHandler: method registers a code block with the command buffer object that is called when the command buffer is scheduled. Multiple scheduled handlers can be registered for one command buffer object.

The waitUntilScheduled method waits until the command buffer has been scheduled and all scheduled handlers registered for it have finished running.

The addCompletedHandler: method registers a code block with the command buffer object that is called after the device has finished executing the command buffer. Multiple completion handlers can likewise be registered for one command buffer object.

The waitUntilCompleted method waits until the device has finished executing the command buffer and all completion handlers registered for it have finished running.

The presentDrawable: method presents the contents of a displayable resource (a CAMetalDrawable object) once the command buffer has been scheduled.

4. The MTLCommandQueue protocol and command queue objects

An MTLCommandQueue is a queue that holds command buffers. The command queue organizes the order of the command buffers it holds and controls when those command buffers execute.

The MTLCommandQueue protocol defines the interface of the command queue; its primary interface is the creation of command buffer objects.

Creating an MTLCommandQueue object:

Use the MTLDevice object's newCommandQueue method or its newCommandQueueWithMaxCommandBufferCount: method to create a command queue object.

The original post includes a diagram of the relationships among these objects. As it shows, a render command encoder must be configured with the relevant render state, and the related Metal resource objects (render buffers, textures, and so on) must be created and set on it.

The state specified on a render command encoder includes the render pipeline state (MTLRenderPipelineState), the depth and stencil state (MTLDepthStencilState), and sampler state (MTLSamplerState).

A blit command encoder is associated with a buffer and a texture between which it performs blit operations.

A command encoder can bind three kinds of MTLResource Metal resource objects for use by graphics or compute functions:

MTLBuffer represents unformatted memory that can contain any type of data. Buffers are typically used for vertex data, shader data, and compute state data.

MTLTexture represents image data with a specific texture type and pixel format. Texture objects can serve as sources for vertex, fragment, or compute functions, and also as output targets for graphics rendering, as specified in the render pass descriptor.

An MTLSamplerState object is used when a graphics or compute function performs texture sampling on an MTLTexture; it defines the addressing, filtering, and other sampling properties.

The graphics render encoder MTLRenderCommandEncoder uses the setVertex* and setFragment* method families to bind one or more resources as arguments to the corresponding shader functions.

5. The CAMetalLayer object and the CAMetalDrawable protocol

Core Animation defines the CAMetalLayer class and the CAMetalDrawable protocol to provide a layer-backed view for rendering Metal content. The CAMetalLayer object holds information about where to render the content, its dimensions, its visual properties (background color, bounds, shadow), and the resources Metal uses to render the content. The CAMetalDrawable protocol is an extension of MTLDrawable; it exposes the conforming displayable resource's MTLTexture object so that the displayable resource can be used as the destination of render commands.

To render Metal content into a CAMetalLayer object, you should obtain a CAMetalDrawable object for each render pass, take the MTLTexture object it holds, and use that texture in the color attachment of the render pass descriptor, thereby designating it as the destination of the graphics rendering commands.

A CAMetalDrawable object is created by calling the CAMetalLayer object's nextDrawable method.

After creating a displayable resource as the destination of the graphics commands, you can complete the drawing with the following steps (a sketch of them follows the list):

1) First create an MTLCommandQueue object, then use it to create an MTLCommandBuffer object;

2) Create an MTLRenderPassDescriptor object that specifies the collection of attachment points serving as the destination of the encoded rendering commands, then use this MTLRenderPassDescriptor object to create an MTLRenderCommandEncoder object;

3) Create the Metal resource objects that store the data used for drawing, such as vertex coordinates and vertex color data, and call the MTLRenderCommandEncoder's setVertex*:offset:atIndex: and setFragment*:offset:atIndex: methods to bind these resources to the render encoder;

4) Create an MTLRenderPipelineDescriptor object and set its vertexFunction and fragmentFunction properties, using the corresponding MTLFunction shader-function objects obtained from the compiled Metal shading language code;

5) Use MTLDevice's newRenderPipelineStateWithDescriptor:error: method (or a similar method) to create an MTLRenderPipelineState object from the MTLRenderPipelineDescriptor, then call MTLRenderCommandEncoder's setRenderPipelineState: method to set the pipeline state on the render encoder;

6) Call MTLRenderCommandEncoder's drawPrimitives:vertexStart:vertexCount: method to perform the drawing, then call the MTLRenderCommandEncoder's endEncoding method to end the encoding of this render pass, and finally call MTLCommandBuffer's commit method to have the whole set of drawing commands executed on the GPU.
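
A condensed sketch of steps 1), 2), 3), and 6) in current Swift naming (the make... methods correspond to the Objective-C new.../...WithDescriptor: methods described above; the helper function and its parameters are illustrative, and pipelineState is assumed to have been built as in steps 4) and 5)):

import Metal
import QuartzCore

func drawFrame(queue: MTLCommandQueue,
               pipelineState: MTLRenderPipelineState,
               metalLayer: CAMetalLayer,
               vertexBuffer: MTLBuffer) {
    // Step 1: a command buffer from the queue; the drawable supplies the target texture.
    guard let drawable = metalLayer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer() else { return }

    // Step 2: describe the render target (one color attachment) and create the encoder.
    let passDescriptor = MTLRenderPassDescriptor()
    passDescriptor.colorAttachments[0].texture = drawable.texture
    passDescriptor.colorAttachments[0].loadAction = .clear
    passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }

    // Steps 3 and 5: bind the pipeline state and the vertex resources.
    encoder.setRenderPipelineState(pipelineState)
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)

    // Step 6: draw, end encoding, present the drawable, and commit.
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
}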

From: http://blog.csdn.net/goohong/article/details/40743883
