Cooliris-style interface with the Irrlicht game engine (III)


Part (III) focuses on how to make the scene move and how to receive and process messages.

Source code: example_3.zip

 

1. The Irrlicht motion mechanism

 

So-called motion is really just the computer drawing the scene over and over; each drawing pass is called a frame. When the position or appearance of an object changes from one frame to the next, the scene appears to move. In Irrlicht, a frame is drawn inside the run loop:

 

CPP Code
 
while (device->run())
{
    if (device->isWindowActive())
    {
        driver->beginScene(true, true, video::SColor(0, 0, 0, 0));
        scene_mgr->drawAll(); // draw a frame
        driver->endScene();
    }
}
device->drop();

 

All we need to do is update the positions of objects during drawAll(), and the scene starts to move.

 

The key is the ISceneNodeAnimator interface: any class instance that implements this interface can be added to an ISceneNode's animator list through the addAnimator() function.

 

The scene manager's drawAll() function calls OnAnimate() before rendering the scene. OnAnimate() is recursive, so every scene node in the scene gets called. Inside OnAnimate(), each ISceneNodeAnimator attached to the node has its animateNode() function called, which can update the node's position, size, or texture.

 

The calling sequence is as follows:

 

1. ISceneManager --> drawAll() draws a frame

2. ISceneNode --> OnAnimate() animates the scene node

3. ISceneNodeAnimator --> animateNode(ISceneNode*) performs the actual motion

4. ISceneManager --> render() renders the node

 

In other words, as long as a class implements the ISceneNodeAnimator interface and is added to a scene node, that node can be set in motion.
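
To make this concrete, here is a minimal sketch of a custom animator (my own illustration, not code from the example source) that simply spins whatever node it is attached to. It assumes an Irrlicht version around 1.5, as used in this series, where animateNode() is the only pure virtual member of ISceneNodeAnimator; newer releases also require createClone().

CPP Code

class SpinAnimator : public scene::ISceneNodeAnimator
{
public:
    virtual void animateNode(scene::ISceneNode* node, u32 timeMs)
    {
        if (!node)
            return;
        // derive the rotation from the absolute time so it is frame-rate independent
        core::vector3df rot = node->getRotation();
        rot.Y = (f32)((timeMs / 10) % 360);
        node->setRotation(rot);
    }
};

// Usage: attach it to any scene node, e.g. one of the picture nodes.
//   SpinAnimator* anim = new SpinAnimator();
//   node->addAnimator(anim);
//   anim->drop(); // the node keeps its own reference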

 

2. Message transmission in Irrlicht

 

Irrlicht's message transmission starts from device->run(). On Windows, WndProc() collects the system messages and packs them into Irrlicht's internal SEvent structure; postEventFromUser() then sends each event to the user receiver, the GUI, and the 3D scene, in that order.

 

Every class in Irrlicht that processes messages must implement the IEventReceiver interface. The user receiver is the IEventReceiver specified in the createDevice() function. In other words, the priority of message processing is: user receiver > GUI > 3D scene.
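
For reference, this is roughly how a user receiver would be installed (a hedged sketch; the createDevice() parameter list shown matches Irrlicht 1.5, and MyEventReceiver is a made-up name):

CPP Code

// Hypothetical user receiver; returning true consumes the event,
// so the GUI and the 3D scene never see it.
class MyEventReceiver : public IEventReceiver
{
public:
    virtual bool OnEvent(const SEvent& event)
    {
        if (event.EventType == EET_KEY_INPUT_EVENT &&
            event.KeyInput.Key == KEY_ESCAPE)
            return true;  // handled here
        return false;     // pass on to the GUI and the scene manager
    }
};

MyEventReceiver receiver;
IrrlichtDevice* device = createDevice(video::EDT_OPENGL,
    core::dimension2d<s32>(800, 600),  // dimension2d<u32> in Irrlicht 1.6+
    32, false, false, false,
    &receiver);                        // the user receiver described above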

 

In our program no user receiver is specified and there is no GUI yet, so the messages go straight to the scene manager.

The scene manager passes messages on through ISceneManager's postEventFromUser(). Its implementation looks like this:

 

CPP Code
 
bool CSceneManager::postEventFromUser(const SEvent& event)
{
    bool ret = false;
    ICameraSceneNode* cam = getActiveCamera();
    if (cam)
        ret = cam->OnEvent(event);
    _IRR_IMPLEMENT_MANAGED_MARSHALLING_BUGFIX;
    return ret;
}

 

That is, only the currently active ICameraSceneNode receives the message. The camera node checks isEventReceiverEnabled() on each of its ISceneNodeAnimators; if it returns true, that animator's OnEvent() function is called.
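
Conceptually (this is a paraphrase of the behaviour just described, not the exact Irrlicht source), the camera's OnEvent() amounts to:

CPP Code

bool CCameraSceneNode::OnEvent(const SEvent& event)
{
    // forward the event to every attached animator that wants events
    core::list<ISceneNodeAnimator*>::Iterator it = Animators.begin();
    for (; it != Animators.end(); ++it)
    {
        if ((*it)->isEventReceiverEnabled() && (*it)->OnEvent(event))
            return true; // an animator consumed the event
    }
    return false;
}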

 

The message transmission mechanism is as follows:

 

1. IrrlichtDevice --> run() collects and packs messages

2. IrrlichtDevice --> postEventFromUser() passes messages to the user receiver, the GUI, and the scene

3. ISceneManager --> postEventFromUser() passes the message to the active camera node

4. ICameraSceneNode --> OnEvent() calls OnEvent() on each of its animators

5. ISceneNodeAnimator --> OnEvent() responds to the message (only if isEventReceiverEnabled() returns true)

 

In other words, although an ISceneNodeAnimator can animate any scene node, it can receive and process messages only when it is attached to a camera scene node.

 

3. Implementing CameraAnimator

 

If we implement a CameraAnimator that processes messages and updates the positions of the camera node and the image nodes, we get both interaction and animation. The key parts of the class look like this:

 

CPP Code
 
class CameraAnimator : public ISceneNodeAnimator
{
public:
    //! Constructor
    CameraAnimator(...);
    //! Destructor
    virtual ~CameraAnimator() {};
    //! Update the scene node positions
    virtual void animateNode(ISceneNode* node, u32 timeMs);
    //! Process messages
    virtual bool OnEvent(const SEvent& event);
    //! This animator will receive events when attached to the active camera
    virtual bool isEventReceiverEnabled() const
    {
        return true; // must return true to receive messages
    }
    ... // other member functions
private:
    ... // fields
};

 

Three effects need to be implemented:

(1) When dragging with the mouse, the scene follows the drag.

(2) When an image is clicked, the scene moves to that image and the image is highlighted.

(3) The mouse wheel zooms the scene in and out.

 

There are several key points in programming:

(1) Node picking in the 3D scene. Fortunately, Irrlicht does this for us; the following code is all that is needed to find the scene node under a mouse click:

 

CPP Code
 
core::position2d<s32> mouse_position(event.MouseInput.X, event.MouseInput.Y);
ISceneNode* node = scene_manager_->getSceneCollisionManager()
    ->getSceneNodeFromScreenCoordinatesBB(mouse_position, -1);
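
In the animator's OnEvent() this lookup typically sits inside the mouse handling. The member names below (scene_manager_, current_node_, is_drag_, drag_start_point_) follow the example's naming, but the exact bookkeeping is my own sketch:

CPP Code

// Inside CameraAnimator::OnEvent(), sketched:
if (event.EventType == EET_MOUSE_INPUT_EVENT &&
    event.MouseInput.Event == EMIE_LMOUSE_PRESSED_DOWN)
{
    core::position2d<s32> mouse_position(event.MouseInput.X, event.MouseInput.Y);
    ISceneNode* node = scene_manager_->getSceneCollisionManager()
        ->getSceneNodeFromScreenCoordinatesBB(mouse_position, -1);

    if (node)
        current_node_ = node; // remember the picked image; animateNode() flies to it
    else
    {
        current_node_ = 0;    // clicked empty space: start dragging instead
        is_drag_ = true;
        drag_start_point_ = mouse_position;
    }
    return true;
}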

 

(2) Implementing the motion

 

There are three kinds of motion: the position of the camera node, the target (look-at point) of the camera node, and the position of the selected image.

The position of the camera node determines how the scene moves;

The target of the camera node determines the viewing direction, which lets the scene tilt;

Moving the selected image closer to the camera than the other images makes it stand out.
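
One simple way to get the highlight effect is to pull the selected node forward along the Z axis each frame (a sketch with made-up values, not the exact code from animateNode()):

CPP Code

// Pull the selected image towards the camera; the picture wall itself stays at Z = 0.
const f32 highlight_z = -150.0f; // closer to the camera, which sits near Z = -700
if (current_node_)
{
    core::vector3df pos = current_node_->getPosition();
    pos.Z = highlight_z;
    current_node_->setPosition(pos);
}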

 

As described in part (II), the images are arranged on the plane Z = 0; the camera sits on the plane Z = -700 and its initial target is the origin.

 

The following code shows how the camera's target is moved along the X axis:

CPP Code
 
if (is_drag_ && current_node_ == NULL) // dragging
{
    core::position2di mouse_position = device_->getCursorControl()->getPosition();
    s32 xdiff = drag_start_point_.X - mouse_position.X;
    drag_start_point_ = mouse_position;
    x_target += 0.01 * xdiff / core::max_(dt, 0.001f);
}
if (current_node_ != NULL) // an image is selected
{
    x_target = current_node_->getPosition().X;
}
sign = (x_target < x_current) ? -1 : 1;
x_speed = sign * sqrt(fabs(x_target - x_current)) * 50; // motion speed
target_position.X += x_speed * dt;                      // move the camera target

 

Movement along the Y and Z axes, and the movement of the images themselves, is handled in the same way; see the animateNode() function.
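
The per-axis pattern above (speed proportional to the square root of the remaining distance, so the motion slows down smoothly near the target) can be factored into a small helper. This is my own restatement of the idea, not code from the example:

CPP Code

#include <cmath>

// Move `current` one step towards `target`; `dt` is the frame time in seconds.
static f32 approach(f32 current, f32 target, f32 dt)
{
    const f32 sign  = (target < current) ? -1.0f : 1.0f;
    const f32 speed = sign * sqrtf(fabsf(target - current)) * 50.0f;
    return current + speed * dt;
}

// e.g. inside animateNode():
//   target_position.X = approach(target_position.X, x_target, dt);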

 

(3) Zooming with the scroll wheel

You only need to change the camera's position along the Z axis and clamp it to a simple range:

CPP Code
 
z_target += 100 * event.MouseInput.Wheel;
if (z_target > -200)
{
    z_target = -200;
}
if (z_target < -3000)
{
    z_target = -3000;
}
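
The Wheel value comes from the mouse event itself, so in OnEvent() the zoom code sits behind a check like the following (a hedged sketch of where the code above fits):

CPP Code

// Inside CameraAnimator::OnEvent(), sketched:
if (event.EventType == EET_MOUSE_INPUT_EVENT &&
    event.MouseInput.Event == EMIE_MOUSE_WHEEL)
{
    // Wheel is typically +1 or -1 per notch
    z_target += 100 * event.MouseInput.Wheel;
    // ... then clamp z_target as shown above ...
    return true;
}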

 

4. Finally

 

Finally, we attach the animator we have implemented to the camera:

 

CPP Code
 
scene::ICameraSceneNode* camera = scene_mgr->addCameraSceneNode(0,
    core::vector3df(0, 0, -700),
    core::vector3df(0, 0, 0), 0);
scene::CameraAnimator* animator = new scene::CameraAnimator(wall_mgr->getWallItemList(),
    device, scene_mgr,
    device->getTimer()->getTime());
camera->addAnimator(animator);
animator->drop();

 

After compiling and running, you should be able to use the mouse to interact with the scene much like in Cooliris. For further improvement, keyboard support is still needed ......
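
As a starting point for the keyboard support mentioned above, the same OnEvent() can also react to key events (a hedged sketch using standard Irrlicht key codes; the step sizes are arbitrary):

CPP Code

// Inside CameraAnimator::OnEvent(), sketched:
if (event.EventType == EET_KEY_INPUT_EVENT && event.KeyInput.PressedDown)
{
    switch (event.KeyInput.Key)
    {
    case KEY_LEFT:  x_target -= 100; return true; // pan left
    case KEY_RIGHT: x_target += 100; return true; // pan right
    case KEY_PRIOR: z_target += 100; return true; // Page Up: zoom in
    case KEY_NEXT:  z_target -= 100; return true; // Page Down: zoom out
    default: break;
    }
}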

 

P.s.

 

That concludes the introduction of the 3D part of the interface.

Two parts remain:

(1) GUI: introduces writing the GUI and adding support for TrueType Chinese fonts;

(2) AnimatedGui: describes how to write a directory-browsing control and implement a skin system.

 

With graduation approaching, my advisor is urging me to write my thesis, so the rest will have to come later.

In view of my own experience, my advice is: do not study blindly, and do not blindly worship famous schools, lest you end up regretting it.

I studied mechanical engineering and blindly accepted the so-called recommendation for postgraduate study. Sigh ~~ what a poor academic atmosphere ......

 

From: http://arec.iteye.com/blog/338135
