Beneath the Cocoa Touch layer: Core Motion


Introduction to and use of the Core Motion framework in iOS 4

As someone who has only been learning iPhone programming for a week, my purpose in writing this article is not to enlighten the industry but to consolidate my own knowledge; better still if it attracts readers who can teach me something. The content is based on WWDC 2010 session 423, "Sensing Device Motion in iOS 4," and the developer documentation "Event Handling Guide for iPhone OS: Motion Events." Of course, I have added some personal summaries and background along the way. The article may contain mistakes; please point them out if you find any. Thank you.

My previous article, "The cornerstone of mobile device intelligence: iPhone 4 sensors," covered how the gyroscope added in iPhone 4 makes up for many shortcomings of the existing motion sensors, and introduced a lot of background knowledge. The biggest defect of an accelerometer is that it cannot detect rotation around the gravity axis. Moreover, with an accelerometer alone you cannot avoid the interference of gravity. How does the traditional method work? A low-pass filter isolates the slowly-changing gravity component, which behaves like a DC offset in the signal; a high-pass filter then removes it to leave the user-generated acceleration, and further smoothing suppresses the high-frequency noise produced by a trembling hand. Heavy filtering not only distorts the original acceleration signal but also introduces significant lag, hurting the responsiveness of the program. The electronic compass is no help here either: its reading takes a long time to stabilize and is particularly vulnerable to environmental interference. The gyroscope solves these major problems. With it we can measure rotation around the gravity axis, and fast rotation in general, so motion processing is greatly enhanced. More importantly, the gyroscope can be used to determine the current position and attitude of the phone, and from that information a fairly accurate gravity component can be derived, instead of being extracted from the noisy values collected by the accelerometer. The gyroscope is therefore the most critical component for motion sensing. For more on how a gyroscope works, see the previous article.
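
To make the traditional method concrete, here is a minimal sketch of that filtering, assuming simple exponential smoothing; kFilteringFactor and the variable names are illustrative, not from any Apple API:

// Low-pass filter: isolate the slowly-changing gravity component,
// which behaves like a DC offset in the acceleration signal.
// High-pass filter: subtract that estimate to keep only the
// user-generated (fast-changing) part of the acceleration.
#define kFilteringFactor 0.1 // illustrative smoothing constant

static double gravityEstimate[3]; // running low-pass output

void filterSample(const double raw[3], double userAccel[3])
{
    for (int i = 0; i < 3; i++) {
        gravityEstimate[i] = raw[i] * kFilteringFactor
                           + gravityEstimate[i] * (1.0 - kFilteringFactor);
        userAccel[i] = raw[i] - gravityEstimate[i];
    }
}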


Prior to iOS 4, accelerometer data was collected through the UIAccelerometer class, while the electronic compass was handled by Core Location. With the launch of iPhone 4, thanks to the upgraded accelerometer (reportedly a new chip) and the newly introduced gyroscope, motion-related programming became a major topic, and Apple added a framework dedicated to it in iOS 4: the Core Motion framework. What are the benefits of Core Motion? Simply put, it not only gives you real-time acceleration and rotation-rate values; more importantly, Apple has built many algorithms into it. It can directly output acceleration with the gravity component already separated out, saving you the high-pass filtering work, and it gives you dedicated 3D attitude information for the device!

Naturally we want to make good use of such a good thing. The following describes how to use the Core Motion framework to obtain the corresponding motion information.

In iOS 4.0, Core Motion is mainly responsible for three types of data: accelerometer values, gyroscope values, and device motion values. The device motion values are in fact computed by a sensor-fusion algorithm from the acceleration and rotation rate; the basic principle is described later. Inside the system, Core Motion collects the raw data on its own background thread, runs its motion algorithms there to extract more information, and presents the results to the application layer for further processing. The framework contains a dedicated manager class, CMMotionManager, which manages three motion-data encapsulation classes. These are all subclasses of CMLogItem, so the motion data can be saved to a file together with timing information. With the timestamp, the actual interval between two adjacent samples is easy to compute. This is very useful: for example, you may receive samples at 50 Hz but want to know the average acceleration per second.
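
As a sketch of what the timestamp makes possible, the snippet below averages samples over one-second windows; only timestamp (inherited from CMLogItem) and acceleration come from the API, the accumulator variables are illustrative:

#import <CoreMotion/CoreMotion.h>

static NSTimeInterval windowStart = 0; // illustrative accumulators
static double sumX = 0;
static NSUInteger sampleCount = 0;

void accumulateSample(CMAccelerometerData *sample)
{
    if (windowStart == 0) windowStart = sample.timestamp;
    sumX += sample.acceleration.x;
    sampleCount++;
    // timestamp is in seconds, so this fires roughly once per second
    if (sample.timestamp - windowStart >= 1.0) {
        NSLog(@"average x acceleration over the last second: %f", sumX / sampleCount);
        windowStart = sample.timestamp;
        sumX = 0;
        sampleCount = 0;
    }
}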

There are two main ways to get data out of Core Motion. The first is push: you provide an operation queue (NSOperationQueue) and a block (a bit like a callback function in C), and Core Motion invokes the block for every sample that arrives, executing it on the queue you supplied. The second is pull: you actively ask the Core Motion manager for data, and what you get is the most recent sample; if you never ask, the manager never hands you anything. In the pull case, all of Core Motion's work happens on its own background thread and does not interfere with your current thread.

So when should you choose which? Apple's official guide compares the advantages and disadvantages of the two methods in a table and recommends usage scenarios for each: roughly, pull is the recommended approach for most applications, especially games, while push suits applications such as data collectors that must process every single sample. The trade-offs are quite distinct and the use cases well differentiated.

That is the general introduction to Core Motion. The following describes collecting and processing Core Motion data, which is a three-step process: initialization, data retrieval, and releasing resources.

In the initialization phase, no matter what data you want to obtain, the first thing you need to do is

motionManager = [[CMMotionManager alloc] init];

All subsequent operations go through this manager, and the rest of the initialization is quite intuitive. Take the accelerometer with the pull method as an example:

if (!motionManager.accelerometerAvailable) {
    // fail code: the accelerometer is not available on this device
}
motionManager.accelerometerUpdateInterval = 0.01; // tell the manager to update at 100 Hz
[motionManager startAccelerometerUpdates]; // start updates; the background thread starts running (this is the pull method)

If the push method is used, the updated code can be written as follows:

[motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue currentQueue]
                                    withHandler:^(CMAccelerometerData *latestAcc, NSError *error)
{
    // Your code here
}];

The next step is to obtain the data. Again, the code is simple:

CMAccelerometerData *newestAccel = motionManager.accelerometerData;
filteredAcceleration[0] = newestAccel.acceleration.x;
filteredAcceleration[1] = newestAccel.acceleration.y;
filteredAcceleration[2] = newestAccel.acceleration.z;

You obtain the CMAcceleration information through a CMAccelerometerData variable. Much like the earlier UIAccelerometer class, CMAcceleration is defined as a struct in Core Motion:

typedef struct {
    double x;
    double y;
    double z;
} CMAcceleration;

The corresponding motion information, such as acceleration or rotation rate, can be read directly from these three members.
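
The gyroscope follows exactly the same pull-style pattern; here is a sketch mirroring the accelerometer code above (the rotationRate array is an illustrative variable):

if (!motionManager.gyroAvailable) {
    // fail code: no gyroscope on this device (anything before iPhone 4)
}
motionManager.gyroUpdateInterval = 0.01; // 100 Hz, as before
[motionManager startGyroUpdates];

CMGyroData *newestGyro = motionManager.gyroData;
rotationRate[0] = newestGyro.rotationRate.x; // radians per second
rotationRate[1] = newestGyro.rotationRate.y;
rotationRate[2] = newestGyro.rotationRate.z;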

The last step is to release resources when you no longer need Core Motion:

[motionManager stopAccelerometerUpdates];
// [motionManager stopGyroUpdates];
// [motionManager stopDeviceMotionUpdates];
[motionManager release];

You see, it's that simple. Of course, it would be boring if Core Motion were only this simple. In fact, the most interesting part of Core Motion is not the raw acceleration or angular velocity, but the device motion information produced by its sensor-fusion algorithm. Core Motion provides a class named CMDeviceMotion that encapsulates this processed device motion data:

Let's look at each of these pieces of encapsulated data.

The first is attitude, the 3D attitude just mentioned. In plain terms, it tells you the orientation and posture of the phone in space.
The second is gravity. In essence this is the gravity vector expressed in the device's current reference frame. Developers no longer need to extract it through filtering, because Core Motion hands it to you directly.
The third is user acceleration. Likewise, no filtering is needed here (filtering added for your program's own requirements can naturally stay).
The fourth is the real-time rotation rate, which is the output of the gyroscope.

The following describes the four types of data in detail.

1. Attitude. In the CMDeviceMotion object, attitude is defined as the property

@property (readonly, nonatomic) CMAttitude *attitude;

A CMAttitude instance encapsulates the attitude of the device in space. This information can be expressed in the following three mathematical forms:

  1. A quaternion.
  2. A rotation matrix.
  3. Euler angles (roll, pitch, and yaw). For more on these angles, see my previous article.

The quaternion is a storage form frequently used by attitude-determination systems; I am not very familiar with it myself. Euler angles and the rotation matrix complement each other, and each can be derived from the other. So here I will mainly introduce the rotation matrix, which is the most useful for VR or games.

What is the purpose of the rotation transformation matrix?

In essence, the transformation matrix describes the mapping between one vector space and another. For example, many applications need to reason about acceleration, but the user's grip on the phone is constantly changing. We can collect the device's acceleration and gravity at time T1, and again at time T2, but we cannot use them directly in a computation. Why? Because the phone's axes have moved: the T1 readings live in one vector space and the T2 readings in another, so comparing acc.x at T1 with acc.x at T2 naturally cannot tell us how the device moved.

So the problem becomes: find the linear transformation T between the two three-dimensional spaces, so that a value in one space can be mapped into the other, and all comparisons and calculations can then be done within a single space. How does Core Motion solve this? At some initial time T1 (for example, when your program draws its first frame), it lets you capture an attitude to serve as the reference frame; call its basis v_ref. At any later time, say T2, you capture the current attitude; call its basis v_dev, expressed in the device's current frame. The two are related as follows:

v_dev = R · v_ref

where R is the rotation matrix.

Since v_ref and v_dev are each orthonormal bases of their respective vector spaces, this R is an orthogonal matrix, with all of its column vectors linearly independent. The transformation corresponding to R is therefore exactly the linear transformation between the two spaces that we are looking for, and it is one-to-one.

Okay, what does this conclusion buy us? The acceleration measured at the current time T2, in the current frame, not only has a corresponding vector in the T1 reference frame, it has exactly one! If we call the current acceleration vector a_dev, its unique counterpart in the reference frame can certainly be obtained by the following formula:

a_ref = R⁻¹ · a_dev

This formula is always well defined, because an orthogonal matrix always has an inverse (indeed R⁻¹ = Rᵀ). With this transformation you can freely compare and combine acceleration and gravity readings taken at different time points, and derive a precise picture of the user's motion.

Interestingly, the construction of the R matrix also shows its relationship to the rotation data: the transformation between the current frame and the reference frame can be built from the roll, pitch, and yaw angle information derived from the gyroscope. So once again we feel the power of the new gyroscope. Even better, Core Motion hands the R matrix directly to developers, saving a lot of error-prone and tedious work.

With all that preparation behind us, here are the steps for obtaining the R matrix at the current time point.

First, capture the reference attitude:

if (motionManager != nil) {
    CMDeviceMotion *deviceMotion = motionManager.deviceMotion;
    referenceAttitude = [deviceMotion.attitude retain];
}

Then, when you want to obtain the R matrix, perform the following operations:

CMRotationMatrix rotation;
CMDeviceMotion *deviceMotion = motionManager.deviceMotion;
CMAttitude *attitude = deviceMotion.attitude;
if (referenceAttitude != nil) {
    [attitude multiplyByInverseOfAttitude:referenceAttitude];
}
rotation = attitude.rotationMatrix;

Simple and intuitive: the multiplyByInverseOfAttitude: call embodies exactly the matrix relationship we just derived. With the rotationMatrix in hand, we can do whatever we like with it.
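
As one example of "whatever we like," here is a sketch that re-expresses the current user acceleration in the reference frame using the nine fields of CMRotationMatrix; the helper function is illustrative, not part of Core Motion:

// CMRotationMatrix is a struct of nine doubles, m11..m33.
// After multiplyByInverseOfAttitude:, attitude.rotationMatrix relates
// the device frame to the reference frame, so a plain matrix-vector
// multiply maps a device-frame vector into the reference frame.
CMAcceleration rotateIntoReferenceFrame(CMRotationMatrix r, CMAcceleration a)
{
    CMAcceleration result;
    result.x = r.m11 * a.x + r.m12 * a.y + r.m13 * a.z;
    result.y = r.m21 * a.x + r.m22 * a.y + r.m23 * a.z;
    result.z = r.m31 * a.x + r.m32 * a.y + r.m33 * a.z;
    return result;
}

// usage, continuing from the code above:
CMAcceleration userAccRef =
    rotateIntoReferenceFrame(rotation, motionManager.deviceMotion.userAcceleration);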

2. Gravity and userAcceleration. These two are discussed together because they are essentially alike: the raw acceleration (the value obtained via [motionManager startAccelerometerUpdates]) is their sum. In other words, decomposing the raw acceleration yields the two of them, except that now Apple has done the filtering and decomposition for you. In Core Motion they are defined as the properties

@property (readonly, nonatomic) CMAcceleration gravity;
@property (readonly, nonatomic) CMAcceleration userAcceleration;

Both are encapsulated in the CMAcceleration struct introduced above. In addition, both use the same reference frame: the frame fixed to the body of the device.

Both values can be read simply by accessing the three members of motionManager.deviceMotion.userAcceleration and motionManager.deviceMotion.gravity.
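
Here is a sketch of reading them pull-style, following the same pattern as the accelerometer example (the availability check, start call, and the two properties are real API; everything else is illustrative):

if (!motionManager.deviceMotionAvailable) {
    // fail code: device motion requires the gyroscope (iPhone 4 and later)
}
motionManager.deviceMotionUpdateInterval = 0.01;
[motionManager startDeviceMotionUpdates];

CMDeviceMotion *motion = motionManager.deviceMotion;
CMAcceleration gravity = motion.gravity;          // gravity component only
CMAcceleration userAcc = motion.userAcceleration; // user-generated only
// gravity.x + userAcc.x reconstructs the raw accelerometer x reading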

3. Rotation rate. The rotation rate is encapsulated in a struct called CMRotationRate whose members are defined exactly like CMAcceleration's; the sign of each value follows the right-hand rule. Many readers may wonder: how does this differ from the CMGyroData obtained directly from the motion manager? The rotation rate delivered through device motion has been processed to remove the bias present in raw CMGyroData. For example, if we place the device on a table, the gyroscope output should ideally be zero, yet the raw readings you get are non-zero, thanks to many sources of uncertainty. These include drift errors; the gyroscope's temperature drift, for instance, affects the reading. Core Motion runs algorithms to remove this bias for the developer, which greatly eases motion development.

While we are on the subject of rotation, it is worth describing the behavior of the attitude angles. If you write a simple test program that prints the three angles to the screen, you will notice an interesting phenomenon: pitch and roll always correspond to the phone's physical attitude at the moment of reading, while yaw starts at 0 and only changes as the phone's attitude subsequently changes. The reason is that pitch and roll both have an unambiguous reference surface, the horizontal plane, and are calibrated against it at the factory. Yaw has no such reference: when you launch an app the phone may be facing any direction, so Core Motion simply reports a relative value starting from 0, and you track the device's heading through relative changes in yaw.
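
A sketch of such a test, assuming motionManager has already started device motion updates and this method is called from a timer; roll, pitch, and yaw are real CMAttitude properties, reported in radians:

- (void)logAttitude
{
    CMAttitude *attitude = motionManager.deviceMotion.attitude;
    // yaw reads 0 at launch and then tracks relative rotation
    NSLog(@"roll: %f  pitch: %f  yaw: %f",
          attitude.roll, attitude.pitch, attitude.yaw);
}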

In addition, if the device's rotation changes on all three axes, the angles are applied in a default order: roll first, then pitch, and finally yaw.

Summary

That concludes this summary of basic Core Motion knowledge. Before the smartphone era we all said phones were for making calls; smartphones changed everything, and users always have new needs waiting for developers to meet. The key is whether developers can understand users, relate to them, and tailor experiences to them. The arrival of the gyroscope and the launch of Core Motion make possible many applications that previously could not be built, or could only be built poorly, on smartphones. This is an opportunity for imagination, and a challenge of execution.
