Atiti. UI principles and GUI theory
1. Overview
2. UI types
2.1. RMGUI vs. IMGUI
2.2. CLI
2.3. GUI
2.4. NUI (natural user interface)
3. The three phases of UI development
3.1. CLI
3.2. GUI (click)
3.3. NUI (touch)
4. How the GUI works: solving the two problems of "input" and "output"
4.1. GUI structure
4.2. Interface engine and graphics engine
4.3. Window management and control system
4.4. Event system
4.5. Graphics system (rendering mechanism)
4.6. Layout system
4.7. Animation
4.8. Miscellaneous utilities
4.9. Common controls and special controls
4.10. Editing tools
5. UI trend: CLI > GUI > NUI
5.1. Convenient animation (dynamic) mechanism
5.2. Convenient control extension mechanism
5.3. Scripted (native > HTML5)
6. Self-built UI engine
1. Overview
2. UI types
2.1. RMGUI vs. IMGUI
Immediate-mode GUI (IMGUI)
I will add a relatively new GUI implementation mode called immediate-mode GUI (IMGUI). This type is better suited to programs whose display area refreshes in real time, such as games and CAD. The advantage of IMGUI is that it is much easier to implement and use than the traditional retained-mode GUI (RMGUI) such as Qt and MFC. I have implemented both modes, and I think IMGUI is the better choice for people who want to learn how to write GUIs. I have posted a lightweight GUI library I implemented myself, EDXGui, for anyone who wants to try it. As it shows, IMGUI can also implement fairly complex controls, which is more than enough for your own programs.
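As a rough illustration of the difference, here is a minimal immediate-mode sketch (not EDXGui's actual API; all names are hypothetical): widgets are plain function calls made every frame and the library keeps almost no per-widget state, whereas an RMGUI such as Qt or MFC keeps a persistent widget object tree.

```cpp
#include <cstdio>

struct UIContext {          // per-frame input state that widgets read directly
    int mouseX = 0, mouseY = 0;
    bool mouseDown = false;
};

// Hypothetical immediate-mode button: it "draws" itself and reports a click in one call.
bool Button(UIContext& ui, int x, int y, int w, int h, const char* label) {
    bool hovered = ui.mouseX >= x && ui.mouseX < x + w &&
                   ui.mouseY >= y && ui.mouseY < y + h;
    std::printf("draw button '%s' at (%d,%d)\n", label, x, y); // real code: graphics-engine call
    return hovered && ui.mouseDown;                            // "clicked this frame"
}

// The whole UI is re-declared every frame; there is no retained widget tree.
void DoFrame(UIContext& ui) {
    if (Button(ui, 10, 10, 120, 30, "Save")) std::puts("Save clicked");
    if (Button(ui, 10, 50, 120, 30, "Load")) std::puts("Load clicked");
}

int main() {
    UIContext ui;
    ui.mouseX = 20; ui.mouseY = 20; ui.mouseDown = true;  // simulate one click on "Save"
    DoFrame(ui);
}
```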
2.2. CLI
2.3. GUI
2.4. NUI (natural user interface)
3. The three phases of UI development
The development of human-machine interfaces has gone through three main stages.
3.1. CLI
The user behavior pattern of the command line is "memorize" > "input": the user has to memorize commands and type them in, one at a time, to interact with the machine. The usage threshold is high; today it is mainly professionals who use it, plus Hollywood movies that need to show hackers.
Author: Attilax (nickname: old wow's paw; full name: Attilax Akbar Al Rapanui Attila Akba Arla Panui). Chinese name: Ai Long. Email: 1466519819@qq.com
Please indicate the source when reprinting: http://www.cnblogs.com/attilax/
3.2. GUI (click)
Pioneered by Apple and popularized by Windows, the user behavior pattern became "recognize" > "select". You no longer have to memorize commands; you can explore the software just by moving the mouse around. At this point most software user manuals went unread, and the interaction threshold dropped. Still, people unfamiliar with computers, our fathers' generation for example, felt a sense of distance and a learning threshold.
The GUI in fact uses a standard interface vocabulary (buttons, lists, text boxes, and so on). All application requirements are translated into this interface language, and the information content must be expressed in it.
3.3. NUI (touch)
The emergence of the iPhone and iPad greatly lowered the human-machine interaction threshold: basically everyone, including the elderly and children, can pick one up and use it. Because the interaction language changed, interfaces are no longer designed purely out of standardized controls. Here content outweighs form, and the form must follow the use scenario of the information content and match people's natural expectations for how content is presented and interacted with. An interface engine that only provides standard controls can no longer meet these requirements.
4. How the GUI works: solving the two problems of "input" and "output"
The graphical interface solves two problems: "input" and "output". "Input" covers the operations users perform on various hardware devices such as the keyboard, mouse, and touch screen; the drivers of these devices are ultimately converted by the operating system into various event messages, and the graphical interface must respond to and manage these messages. "Output" is the content displayed to users; the graphical interface must draw it on the screen, and the operating system (ultimately the display driver) must provide this capability. If efficiency were not a concern, a single pixel-drawing function would be enough, but drawing pixel by pixel is far too slow. The usual approach is therefore off-screen rendering: draw into a buffer and then blit it to the screen, which also uses the buffering to prevent flickering. "Retrieving input messages" and "presenting the off-screen buffer" are the "wheel" and "fire" of a graphical interface. Both depend on interfaces provided by the operating system, and those interfaces differ between operating systems; even the same operating system may offer several methods. For a cross-platform graphical interface engine, the code for these two parts is not cross-platform and may be implemented differently for each platform. However, this code is small, and once encapsulated it does not affect upper-layer development.
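To make the two seams concrete, here is a minimal sketch under stated assumptions: Platform, BackBuffer, and runFrame are hypothetical names, not any real API. Only the event polling and the final blit of the off-screen buffer are platform-specific.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// "Input": OS-specific events normalized into one engine-level struct.
struct Event {
    enum Type { KeyDown, MouseMove, MouseDown } type;
    int x = 0, y = 0, key = 0;
};

// "Output": an off-screen buffer; the engine draws here, then blits once per
// frame, which is what prevents flickering (double buffering).
struct BackBuffer {
    int width = 0, height = 0;
    std::vector<uint32_t> pixels;                            // 32-bit ARGB
    void clear(uint32_t argb) { pixels.assign(static_cast<std::size_t>(width) * height, argb); }
};

// The only layer that is not cross-platform: each OS gets its own implementation.
class Platform {
public:
    virtual bool pollEvent(Event& out) = 0;                  // wraps GetMessage / XNextEvent / ...
    virtual void present(const BackBuffer& buffer) = 0;      // wraps a screen blit
    virtual ~Platform() = default;
};

void runFrame(Platform& os, BackBuffer& back) {
    Event e;
    while (os.pollEvent(e)) { /* route the message to widgets / sprites */ }
    back.clear(0xFF202020u);                                 // redraw the scene off-screen
    os.present(back);                                        // one blit to the screen
}
```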
4.1. GUI structure
The GUI library has several basic subsystems:
4.2. Interface engine and graphics engine
Graphics engine and basic libraries. The first thing to note is that although we often say "graphical interface engine", the interface engine and the graphics engine are two different things, just like the relationship between MFC and GDI, or between the Android UI library and Skia. The graphics engine's task is to provide basic drawing operations: how to draw and scale a bitmap, how to draw lines, polygons, and other vector graphics, and how to draw text. It in turn depends on some basic libraries, unless you want to parse JPG, PNG, and other image formats yourself and read font files to rasterize text. Fortunately, most of these libraries are open source and commercially friendly (free of charge). There are also supporting libraries for multithreading, file I/O, mathematical algorithms, and so on. A graphical interface engine therefore has the basic components listed in the subsections that follow; a minimal sketch of the graphics-engine layer itself is shown below.
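A minimal sketch of that separation, with hypothetical names (IGraphics is not a real library's interface): the interface engine calls something like this, and only the implementation behind it touches GDI, Skia, OpenGL, or the image and font libraries.

```cpp
#include <cstdint>
#include <string>
#include <vector>

struct Color  { uint8_t r, g, b, a; };
struct Rect   { int x, y, w, h; };
struct Bitmap { int width, height; std::vector<uint32_t> pixels; };

// The graphics engine only knows how to draw; it knows nothing about controls,
// events, or layout. An implementation would sit on top of GDI, Skia, OpenGL, etc.
class IGraphics {
public:
    virtual void drawBitmap(const Bitmap& bmp, const Rect& dst) = 0;           // blit + scale
    virtual void drawLine(int x0, int y0, int x1, int y1, Color c) = 0;        // vector primitives
    virtual void fillPolygon(const std::vector<int>& xy, Color c) = 0;
    virtual void drawText(const std::string& utf8, int x, int y, Color c) = 0; // uses a font library
    virtual ~IGraphics() = default;
};
```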
4.3. Window management and control system
4.4. Event system
4.5. Graphics system (rendering mechanism)
4.6. Layout system
4.7. Animation
4.8. Miscellaneous utilities
4.9. Common controls and special controls
Common controls: once the basic framework is stable, you need to build some common basic controls for direct use at the application layer: buttons, lists, sliders, progress bars, and so on.
Special controls: some controls, such as modal dialog boxes and pop-up menus, differ from ordinary controls in that they essentially require global management.
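A minimal sketch of this traditional control layer, assuming hypothetical names: every control derives from a common base, overrides drawing and event handling, and the controls form a tree.

```cpp
#include <string>
#include <vector>

struct Event   { int type = 0; int x = 0, y = 0; };     // simplified engine event
class  ICanvas {                                        // simplified drawing surface
public:
    virtual void fillRect(int x, int y, int w, int h) = 0;
    virtual void drawText(const std::string& text, int x, int y) = 0;
    virtual ~ICanvas() = default;
};

class Control {
public:
    int x = 0, y = 0, width = 0, height = 0;
    std::vector<Control*> children;                     // controls form a tree
    virtual void draw(ICanvas& canvas) {                // default: draw the children
        for (Control* c : children) c->draw(canvas);
    }
    virtual bool handleEvent(const Event& e) {          // default: route to the children
        for (Control* c : children) if (c->handleEvent(e)) return true;
        return false;
    }
    virtual ~Control() = default;
};

class Button : public Control {                         // a "common control" built on the base
public:
    std::string label;
    void draw(ICanvas& canvas) override {
        canvas.fillRect(x, y, width, height);
        canvas.drawText(label, x + 4, y + 4);
    }
    bool handleEvent(const Event& e) override {         // hit-test; a real control would
        return e.x >= x && e.x < x + width &&           // fire an on-click callback here
               e.y >= y && e.y < y + height;
    }
};
```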
4.10. Editing tools
Whether graphical editing tools are needed: for static interfaces such as Windows dialogs with buttons and list boxes, a WYSIWYG editing tool can greatly improve interface development efficiency.
5. UI trend: CLI > GUI > NUI
The UI engine provides a set of standard controls, but more importantly, it must meet two requirements.
5.1. Convenient animation (dynamic) mechanism
Property animations on interface elements should be possible without the developer having to deal with timers, multithreading, asynchronous processing, and so on.
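A minimal sketch of what such a mechanism could look like, with hypothetical names (Animator, animate, and tick are assumptions, not a real engine's API): the caller declares "animate this property from A to B over N milliseconds" and the engine's own frame loop drives it, so application code never touches timers or threads.

```cpp
#include <algorithm>
#include <functional>
#include <utility>
#include <vector>

struct Animation {
    float from, to, durationMs, elapsedMs;
    std::function<void(float)> apply;                  // writes the animated property
};

class Animator {                                       // owned and ticked by the UI engine itself
public:
    void animate(float from, float to, float durationMs, std::function<void(float)> apply) {
        animations.push_back({from, to, durationMs, 0.0f, std::move(apply)});
    }
    void tick(float dtMs) {                            // called once per rendered frame
        for (auto& a : animations) {
            a.elapsedMs = std::min(a.elapsedMs + dtMs, a.durationMs);
            float t = a.elapsedMs / a.durationMs;      // linear easing for brevity
            a.apply(a.from + (a.to - a.from) * t);
        }
        // finished animations would be removed here
    }
private:
    std::vector<Animation> animations;
};

// Usage: slide a control 200 px to the right over 300 ms; the engine calls tick().
//   animator.animate(0, 200, 300, [&](float v) { button.x = int(v); });
```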
5.2. Convenient control extension mechanism
It should be easier to define a new control, so that users can focus on defining control behavior rather than on the two layers of message processing and drawing. In the old HMI engine, every new control required overriding the Draw and DoMessage methods. The new NUI engine encapsulates a drawable layer similar to the Android library and introduces a signal-slot mechanism, so you can assemble controls like building blocks without having to rewrite those two methods.
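A minimal sketch of that composition style, with hypothetical names and in deliberate contrast to the override-based Control/Button sketch earlier: the new control is assembled from drawable layers and exposes its behavior through a small signal-slot mechanism, instead of overriding Draw and DoMessage.

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

template <typename... Args>
class Signal {                                    // minimal signal-slot mechanism
public:
    void connect(std::function<void(Args...)> slot) { slots.push_back(std::move(slot)); }
    void emit(Args... args) const { for (const auto& s : slots) s(args...); }
private:
    std::vector<std::function<void(Args...)>> slots;
};

struct Drawable { std::string kind; };            // stand-in for rect/text/image layers

class ToggleSwitch {                              // new control: no Draw/DoMessage overrides
public:
    Signal<bool> toggled;                         // behavior is exposed as a signal
    ToggleSwitch() : layers{{"track"}, {"thumb"}, {"label"}} {}  // assembled from drawables
    void click() {                                // called by the engine's hit-testing
        on = !on;
        toggled.emit(on);                         // notify whoever connected a slot
    }
private:
    std::vector<Drawable> layers;
    bool on = false;
};

// Usage:
//   ToggleSwitch sw;
//   sw.toggled.connect([](bool on) { /* react to the new state */ });
//   sw.click();                                  // fires the connected slot with true
```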
5.3. Scripted (native > HTML5)
Many JS libraries on the market point to the future direction of interface libraries: instead of the traditional control system, they use atomic controls with a finer granularity than ordinary controls, such as the atomic controls in the Thunder interface library or the HTML elements used by JS libraries. You will find that with only a few basic elements, plus stronger ways of assembling them, the odd requirements put forward by product teams can be implemented more easily and quickly.
With atomic controls it becomes more convenient to assemble irregular, custom-shaped controls, and the various control animations can be expressed more concisely where the standard controls fall short. The NUI engine now basically achieves these two goals, with a low development threshold.
6. Self-built UI engine
In fact, it is not as complicated as you might think, as long as you grasp the two threads of input and output. The operating system only needs to give the program a window, let it draw into that window, and deliver the window's messages or events; with that, you can develop your own GUI.
Next, you define a logical concept corresponding to windows and controls; call it whatever you like. All of your interface elements derive from this concept, layer by layer, and are then organized into a tree. On my platform it is named Sprite.
Each Sprite object carries a Bitmap. This Bitmap is also a data structure I defined myself, kept as system-independent as possible. The final display is produced by continually compositing each layer's Sprite Bitmap onto the canvas's background to get one combined bitmap, and then calling a system function (DirectX or StretchDIBits on Windows) to show it to the user.
In this scheme, RenderOneTime obviously does two things:
1. Notify the top-level Sprite to update its Bitmap based on the timestamp and its own state; to do this, the top-level Sprite recursively notifies the lower-level Sprites, traversing the whole tree.
2. After the update is complete, compute a rectangular area marking what changed and present it with the display function.
With that, the interface output is complete: the system can refresh the interface at a set frame rate and even support animation.
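A minimal sketch of the Sprite tree and RenderOneTime just described; the names Sprite, Bitmap, and RenderOneTime follow the text, everything else is an assumption.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Bitmap { int width = 0, height = 0; std::vector<uint32_t> pixels; };
struct Rect   { int x = 0, y = 0, w = 0, h = 0; };

static Rect unionRect(const Rect& a, const Rect& b) {      // smallest rect covering both
    if (a.w == 0 || a.h == 0) return b;
    if (b.w == 0 || b.h == 0) return a;
    int x0 = std::min(a.x, b.x), y0 = std::min(a.y, b.y);
    int x1 = std::max(a.x + a.w, b.x + b.w), y1 = std::max(a.y + a.h, b.y + b.h);
    return {x0, y0, x1 - x0, y1 - y0};
}

class Sprite {
public:
    Bitmap bitmap;                                // this layer's own pixels
    std::vector<Sprite*> children;

    // Step 1: update this layer's Bitmap from the timestamp and its own state,
    // then recursively notify the lower-level Sprites (tree traversal).
    Rect update(double timestampMs) {
        Rect dirty = redrawSelf(timestampMs);
        for (Sprite* child : children)
            dirty = unionRect(dirty, child->update(timestampMs));
        return dirty;                             // area that changed this frame
    }
    virtual ~Sprite() = default;
protected:
    virtual Rect redrawSelf(double) { return Rect{}; }     // draw into `bitmap`
};

// Step 2: composite the changed area onto the canvas bitmap and present it with
// the platform function (e.g. StretchDIBits on Windows).
void RenderOneTime(Sprite& root, Bitmap& canvas, double timestampMs) {
    Rect dirty = root.update(timestampMs);        // traverse the tree, collect the dirty rect
    (void)canvas; (void)dirty;
    // ... blend each Sprite's bitmap over `canvas` inside `dirty`, then display it ...
}
```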
-------------------------------------------------
The input side is simpler: a custom message queue and message-dispatch system convert the messages or events received from the operating system into custom messages, which are then dispatched to the corresponding Sprite to respond to.
Note that rendering and message processing run on two different threads, so the synchronization design is very important.
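A minimal sketch of that arrangement, assuming hypothetical names: the OS/message thread pushes converted messages into a custom queue, and the render thread drains it before each frame; the mutex is the synchronization the note above refers to.

```cpp
#include <mutex>
#include <queue>

struct Message { int type = 0; int x = 0, y = 0; };   // engine-level message, already
                                                      // converted from the OS event

class MessageQueue {
public:
    void push(const Message& m) {                     // called from the OS message thread
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(m);
    }
    bool tryPop(Message& out) {                       // called from the render thread
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = queue_.front();
        queue_.pop();
        return true;
    }
private:
    std::queue<Message> queue_;
    std::mutex mutex_;
};

// On the render thread, before drawing each frame:
//   Message m;
//   while (queue.tryPop(m)) { /* dispatch m to the corresponding Sprite */ }
```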
-----------------------------------------------------
The font-handling part is still somewhat system-dependent and has to be adapted as part of porting.
Drawing lines and the other GDI-style operations are all my own functions, because the drawing target is directly the memory Bitmap I defined myself.
-----------------------------------------------------
In this way, you only need to replace the system-dependent code, including message handling, display output, and thread-related code, on the new system to complete the port. My code currently runs across Windows and Mac; parts of it are small and not entirely stable, but it just about reaches product-application level.
References
How to use C++ to write a GUI from scratch? - Graphical user interface - Zhihu