A few months ago, at the MAX 2010 conference in Los Angeles, we gave a first look at the Molehill APIs for Flash on desktop and mobile platforms. For details, see the "Molehill" page.
In this article, I want to go deeper into the technical details of Molehill from an ActionScript developer's perspective.
Let's get started.
What is Molehill?
"Molehill" is an API
Project name, the role of this series of APIS is to expose some programmable interfaces to the actionscript3.0 language on the Flash Platform, so that as3 can directly call the underlying 3D acceleration.
It can bring high-quality 3D rendering to the Flash Platform. molehill relies on directx9 on Windows, MacOS and opengl1.3 on Linux,
On mobile platforms such as Android, molehill uses OpenGL ES2 for rendering. Technically speaking, molehill is a 3D rendering based on the coloring tool.
GPU programming (More about the colorant: http://goo.gl/wqKHs
And brings new features to 3D Flash developers, such as vertex and fragment based pasters (In direct3d, it is called a pixel shadow and becomes a fragment shadow in OpenGL.
) Programming not only supports tasks like vertex texture processing, but also includes native depth buffering, template color buffering, cubic textures, and so on (Private
Too many industry terms, with the original article: To enable things like vertex skinning on the GPU for Bones
Animation but also native Z-buffering, stencel color buffer, cube
Textures and more.
)
From a performance perspective, Flash Player 10.1 today renders roughly a few thousand non z-buffered triangles at around 30Hz. With the new 3D APIs, developers can expect hundreds of thousands of z-buffered triangles rendered at around 60Hz in full-screen HD. Molehill makes it possible to deliver high-quality 3D experiences over the web to most screens and devices.
This video shows what Molehill can do.
How it works
The existing 2.5D API introduced in FP10 is not being deprecated: the Molehill APIs add a separate set of advanced, fully GPU-accelerated 3D rendering APIs, and you can choose whichever fits your project.
FP 10.2 introduces the "Stage Video" beta, which can be downloaded from Adobe Labs.
Molehill follows the same design StageVideo uses to GPU-accelerate the rendering of high-definition video. With this rendering path, FP does not composite video frames or the 3D buffer inside the display list, but inside a texture sitting behind the stage, painted through the GPU. This lets FP paint the screen straight from the video card's resources, and the processing is one-way: no read-back is required to retrieve the frames from the GPU and push them on screen through the display list on the CPU.
Because 3D content is rendered behind the FP stage and does not live in the display list, Context3D and Stage3D are not display objects; remember that you cannot use the DisplayObject APIs on them.
The figure below illustrates the idea.
Of course, as you can see, 2D content can overlay 3D content, but not the other way around. As developers, most of our work revolves around the Context3D and Stage3D objects. If you need it, however, an API also lets you draw your 3D content into a BitmapData, as sketched below.
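Here is a minimal sketch of that BitmapData path; it assumes a context3D has already been created and rendered into, and snapshot is just an illustrative name:

// copy the current 3D back buffer into a BitmapData so it can live
// inside the regular display list (e.g. for thumbnails or screenshots)
var snapshot : BitmapData = new BitmapData( 800, 600, false, 0 );
context3D.drawToBitmapData( snapshot ); // read the rendered frame back
addChild( new Bitmap( snapshot ) ); // now ordinary 2D display-list content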
When you need 3D content, you create a Context3D object. You may now wonder: what if the GPU driver is incompatible? Will I silently get a black screen? No. FP will still hand you a Context3D, but one backed by a software rasterizer, so you can still use all the Molehill APIs; rendering is just computed on the CPU.
For this fallback we rely on SwiftShader, a CPU rasterizer from TransGaming. The good news is that even in this software mode, SwiftShader renders about 10 times faster than the existing software rendering in FP10, so you will not hit a dramatic performance cliff when no compatible GPU is available.
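To make this concrete, here is a minimal sketch of requesting a Context3D and checking which backend you were given, based on the beta Stage3D API; the back-buffer size is illustrative:

// ask the first Stage3D layer for a rendering context
stage.stage3Ds[0].addEventListener( Event.CONTEXT3D_CREATE, onContextCreated );
stage.stage3Ds[0].requestContext3D();

function onContextCreated( e : Event ) : void
{
    var context3D : Context3D = stage.stage3Ds[0].context3D;
    // driverInfo reports which backend is in use:
    // DirectX, OpenGL, or the SwiftShader software rasterizer
    trace( context3D.driverInfo );
    context3D.configureBackBuffer( 800, 600, 0, true );
}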
Benefits of the "Molehill" APIs
You do not need to worry about what is happening internally. Am I running on DirectX, OpenGL, or SwiftShader? Do I need a different API for each platform? No: everything is exposed through the same programming model and the same APIs, and FP handles the platform differences internally.
Note that the Molehill APIs use the programmable pipeline only, not the fixed function pipeline (for more about the two, see http://goo.gl/HHPNN), which means you have to express everything you want to display through triangle vertices and pixels processed by your own shaders. For this, you upload your shaders to the graphics card as pure low-level AGAL ("Adobe Graphics Assembly Language") bytecode in a ByteArray.
As a developer, you can work in two ways. The first is the low-level way: write your shaders by hand in AGAL assembly, which requires a clear understanding of how shaders work. The second is to use a higher-level language, such as the shading language integrated in Pixel Bender 3D, and let it compile down to AGAL bytecode for you.
To display your triangles, you work with VertexBuffer3D and IndexBuffer3D objects, constructed by passing in your triangles' vertex coordinates and indices. Once your vertex and fragment shaders are ready, you upload them to the graphics card through a Program3D object. Basically, the vertex shader works with the vertex coordinates of the triangles you draw, while the fragment shader computes per-pixel output such as color and bump mapping.
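Before looking at the shaders themselves, here is a minimal sketch of how these objects fit together for a single triangle; it assumes a context3D created as shown earlier, and the buffer sizes are illustrative:

// wiring VertexBuffer3D, IndexBuffer3D and Program3D together
var vertexBuffer : VertexBuffer3D = context3D.createVertexBuffer( 3, 6 ); // 3 vertices, 6 dwords each (x,y,z,r,g,b)
var indexBuffer : IndexBuffer3D = context3D.createIndexBuffer( 3 ); // 3 indices = 1 triangle
indexBuffer.uploadFromVector( Vector.<uint>([ 0, 1, 2 ]), 0, 3 );

var program : Program3D = context3D.createProgram(); // will receive the AGAL bytecode

// each frame: clear, bind the program, draw, present
context3D.clear( 0, 0, 0 );
context3D.setProgram( program );
context3D.drawTriangles( indexBuffer );
context3D.present();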
The figure below illustrates the difference between the two shaders.
As mentioned earlier, Molehill does not use the fixed function pipeline, so developers are free to create their own custom shaders and keep full control over the rendering pipeline. So let's look at vertex and fragment shaders in Molehill.
A deeper look at vertex and fragment shaders
To illustrate this, we will draw a triangle written with low-level shading assembly and control it at the pixel level in Molehill. Let's get ready: suppose we want to write these low-level shaders ourselves. If that sounds painful, don't worry; you can write the same thing in a high-level language like Pixel Bender 3D.
To create a shading unit the graphics card can process, we need a vertex shader (Context3DProgramType.VERTEX) that at minimum outputs clip-space coordinates (the coordinates after the view-frustum clipping). To achieve this, we multiply va0 (the position attribute of each triangle vertex) by vc0 (constant 0, the index where our projection matrix is stored) and output the result through the op (output position) register:
// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
"m44 op, va0, vc0 \n" // 4x4 matrix transform from stream 0 (vertex position) to output clip space
);
You may wonder what m44 is and where it comes from. It is simply a 4x4 matrix transformation, here applying the projection matrix we defined earlier. We could write a shader that computes each dot product against the matrix rows by hand:

// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
"dp4 op.x, va0, vc0 \n" + // 4x4 matrix transform from stream 0 (vertex position) to output clipspace
"dp4 op.y, va0, vc1 \n" +
"dp4 op.z, va0, vc2 \n" +
"dp4 op.w, va0, vc3"
);

The m44 instruction (which performs the whole 4x4 matrix transform, all four dot products, in a single line) is just far shorter than the version above.
Remember that vc0 (vertex constant 0) is just the projection matrix we stored at constant index 0; it was passed to the Context3D object earlier through the setProgramConstantsFromMatrix method:

context3D.setProgramConstantsFromMatrix( Context3DProgramType.VERTEX, 0, modelMatrix, true );
Just as our matrix is mapped to a constant, the va0 (vertex attribute 0) position stream needs to be defined; we do that through the setVertexBufferAt method of the Context3D object:

context3D.setVertexBufferAt( 0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3 );
In our example, the vertex color (va1) is passed from the vertex shader to the fragment shader through v0 with the mov instruction, so our triangle can be drawn with its vertex colors. To do this, we write:
// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
"m44 op, va0, vc0 \n" + // 4x4 matrix transform from stream 0 (vertex position) to output clipspace
"mov v0, va1" // copy stream 1 (vertex color) to the fragment shader
);
As you might imagine, the color stream va1 (vertex attribute 1) is also defined through setVertexBufferAt, exposing the vertex colors (three floats) to the shader:

context3D.setVertexBufferAt( 1, vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3 );
Our vertex positions and colors are defined in our VertexBuffer3D object:

// create a vertex buffer
// format is (x,y,z,r,g,b) = 3 vertices, 6 dwords per vertex
vertexBuffer.uploadFromVector( Vector.<Number>([
-1, -1, 0, 255/255, 0, 0, // red
0, 1, 0, 193/255, 216/255, 47/255, // green
1, -1, 0, 0, 164/255, 228/255 // blue
]), 0, 3 ); // start at offset 0, count 3
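For completeness, the buffer receiving this data would have been created beforehand with context3D.createVertexBuffer( 3, 6 ), 3 vertices of 6 dwords each, as in the wiring sketch earlier.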
With our vertex shader defined, we now need to define and upload our fragment shader (Context3DProgramType.FRAGMENT). The idea is to retrieve the color of each incoming vertex (va1 copied to v0) and output it through the oc (output color) register:
var fragmentShaderAssembler : AGALMiniAssembler= new AGALMiniAssembler();
fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
"mov oc, v0" // output color
);
As you can imagine, a fragment shader must always output a color. Next, we upload both programs to the Context3D object:
// upload the AGAL bytecode
program = context3D.createProgram();
program.upload( vertexShaderAssembler.agalcode, fragmentShaderAssembler.agalcode );
If we compile and run this, we get the following result:
Now, if we want to invert the color of each pixel, it is very easy. Since the operation only concerns pixel colors, we just modify our fragment shader and subtract the color with the sub opcode:
var fragmentShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();
fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
"sub ft0, fc1, v0 \n" + // subtract the color (1 - color)
"mov oc, ft0" // output color
);

Here we invert the incoming color by computing (1 - color). The white pixel (1, 1, 1, 1) we subtract from is stored in fragment constant 1 (fc1), set through the setProgramConstantsFromVector method:

context3D.setProgramConstantsFromVector( Context3DProgramType.FRAGMENT, 1, Vector.<Number>([ 1, 1, 1, 1 ]) );

The resulting color is stored in a temporary fragment register (ft0) and used as the final output color.
The modified fragment shader gives the following result:
As another exercise, let's build a sepia filter. To achieve this, we first convert the color to grayscale and then tint it sepia, using the following fragment shader:
var fragmentShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();
fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
"dp3 ft0, fc1, v0 \n" + // convert to grayscale
"mul ft1, fc2, ft0 \n" + // convert to sepia
"mov oc, ft1" // output color
);
As before, we pass the constants through the setProgramConstantsFromVector method:

// grayscale weights
context3D.setProgramConstantsFromVector( Context3DProgramType.FRAGMENT, 1, Vector.<Number>([ 0.3, 0.59, 0.11, 1 ]) );
// sepia tint
context3D.setProgramConstantsFromVector( Context3DProgramType.FRAGMENT, 2, Vector.<Number>([ 1.2, 1.0, 0.8, 1 ]) );
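To see the arithmetic: dp3 computes the luminance as 0.3*r + 0.59*g + 0.11*b and broadcasts it to all components of ft0, and mul then scales that gray value per channel. For a pure red input (1, 0, 0), the gray value is 0.3 and the output RGB becomes (1.2, 1.0, 0.8) * 0.3 = (0.36, 0.30, 0.24), a warm sepia tone.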
With this fragment shader, we get the following result:
As you can imagine, this gives you a lot of power, and you can go much deeper with vertex and fragment shaders to handle lighting, fog, and even things like skinned-vertex animation.
One last example: let's apply a texture to our triangle through a BitmapData. To do this, we need to pass UV values to the fragment shader and use them to sample our texture. To pass the UV values, we modify our vertex shader like this:
vertexShaderAssembler = new AGALMiniAssembler();
vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
"m44 op, va0, vc0 \n" + // 4x4 matrix transform from stream 0 to output clipspace
"mov v0, va1" // copy texcoord from stream 1 to the fragment program
);
Our UV coordinates are now copied from va1 to v0, ready to be used by the fragment shader. Note that we are no longer passing vertex colors to the fragment shader, only the UV coordinates.
As expected, the setVertexBufferAt method exposes the va1 values as per-vertex UVs (two floats):

context3D.setVertexBufferAt( 1, vertexBuffer, 2, Context3DVertexBufferFormat.FLOAT_2 );
Our vertex positions and UV values are defined in the VertexBuffer3D object:

// format is (x,y,u,v) = 3 vertices, 4 dwords per vertex
vertexBuffer.uploadFromVector(
Vector.<Number>([
// x, y, u, v
-1, -1, 0, 1,
0, 1, 0.5, 0,
1, -1, 1, 1
]),
0, 3
);
Then we retrieve those values in the fragment shader and sample the texture:
fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
"mov ft0, v0 \n" + // copy the interpolated UVs
"tex ft1, ft0, fs1 <2d, clamp, linear> \n" + // sample texture at sampler 1
"mov oc, ft1"
);
To define our texture, we instantiate our BitmapData, upload it into a Texture object, and push it to the GPU:

texture = context3D.createTexture( 256, 256, Context3DTextureFormat.BGRA, false );
var bitmap : Bitmap = new MolePeopleBitmap();
texture.uploadFromBitmapData( bitmap.bitmapData );
To make it available as fs1, we bind it to sampler index 1:

context3D.setTextureAt( 1, texture );
With this modification, the final result looks like this:
In future tutorials, I will cover more effects, such as per-pixel fog or heat-distortion textures.
So that, in a nutshell, is how Molehill works. To control your geometry, you define triangles through vertices and index values, helped by objects such as VertexBuffer3D and IndexBuffer3D.
The figure below shows the relationships between all the objects:
As you can see, the Molehill APIs are low-level and expose raw power to advanced 3D developers. Of course, many developers prefer to work with high-level frameworks, and those frameworks are getting ready for the new APIs, something we care about deeply.
This matters because we know that many ActionScript 3 developers would rather work with lights, cameras, and planes than with vertex buffers and shader bytecode. To make sure everyone can enjoy the power of Molehill, frameworks such as Alternativa3D, Flare3D, Away3D, Sophie3D, and Yogurt3D are already largely compatible with Molehill and will ship support in their next versions.
Most of the developers behind these frameworks demonstrated at this year's MAX conference how they leverage Molehill in their respective engines. We expect high-level engines to be built on top of Molehill, so that both hardcore 3D programmers and everyday AS3 developers can benefit from it.
I hope this article helped you get a better grip on Molehill. You will hear more about it soon :-)