Pixel vs Point, FrameBuffer vs Renderbuffer

Source: Internet
Author: User

How iOS app MVC works

View, Window

AppDelegate

ViewController, RootViewController

On Pixel vs Point

The point (pt), on the other hand, is a unit of length, commonly used to measure the height of a font, but technically capable of measuring any length. In applications, 1 pt is equal to exactly 1/72 of an inch; in traditional print, 72 pt is technically 0.996264 inches, although I think you'll be forgiven for rounding it up!

How many pixels equal 1 pt depends on the resolution of your image. If your image is 72 ppi (pixels per inch), then one point equals exactly one pixel.

Framebuffer, Renderbuffer

iOS works in points, not pixels. This was done to make it easier to work with sizing and positioning across displays of different scales.

E.g.: an iPhone 3GS at 1x scale has a width of 320 points (which happens to coincide with the 320 pixels the display physically has); then the iPhone 4 came along with the Retina display (at 2x scale), where the width is still 320 points but works out to be 640 physical pixels. The screen renders the UI at twice the resolution of the 3GS but fits it into the same physical space; the increased pixel density makes the display sharper.

The framebuffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which in turn are the actual buffers. You can think of the framebuffer as a C structure where every member is a pointer to a buffer. Without any attachments, a framebuffer object has a very low memory footprint.

Any buffer attached to a framebuffer can be either a renderbuffer or a texture.

The renderbuffer is an actual buffer (an array of bytes, integers, or pixels). A renderbuffer stores pixel values in native format, so it is optimized for offscreen rendering. In other words, drawing to a renderbuffer can be much faster than drawing to a texture. The drawback is that the pixels use a native, implementation-dependent format, so reading from a renderbuffer is much harder than reading from a texture. Nevertheless, once a renderbuffer has been painted, one can copy its content directly to the screen (or to another renderbuffer, I guess) very quickly using pixel transfer operations. This means a renderbuffer can be used to efficiently implement the double-buffering pattern.

Renderbuffers are a relatively new concept. Before them, a framebuffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that is quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw a scene onto a surface of another scene!

The OpenGL wiki has a page on this that shows more details and links.

Apple/Xcode/Objective-C:

[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:layer];

https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html

