Jamendo uses gesture operations to control song playback. There are four gestures, corresponding to play, pause, previous, and next. This article focuses on how the gesture operations are implemented; the media player itself is not covered here and will be analyzed in subsequent articles.
1. The Android gesture API
Android provides the GestureLibrary class to represent a gesture library, and the utility class GestureLibraries to create gesture libraries from different data sources. GestureLibrary is an abstract class with two subclasses, FileGestureLibrary and ResourceGestureLibrary, whose data sources are a file and an Android raw resource, respectively. Both are implemented as private inner classes of GestureLibraries.
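For example, the two kinds of libraries can be obtained through the factory methods of GestureLibraries. In the small sketch below the file path is only a placeholder, a Context named context is assumed to be available, and R.raw.gestures is the raw resource jamendo uses later in this article:

// A minimal sketch; the sdcard path is a placeholder and "context" is
// assumed to be an available Context.
GestureLibrary fileLibrary = GestureLibraries.fromFile("/sdcard/gestures");
GestureLibrary resourceLibrary = GestureLibraries.fromRawResource(context, R.raw.gestures);

// Both kinds must be loaded before use; load() returns false on failure.
if (!resourceLibrary.load()) {
    Log.d("GestureDemo", "could not load the gesture library");
}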
The GestureLibrary class is really a wrapper around another class, GestureStore. Let's look at the code of GestureLibrary:
public abstract class GestureLibrary {
    protected final GestureStore mStore;

    protected GestureLibrary() {
        mStore = new GestureStore();
    }

    // Save the gesture library
    public abstract boolean save();

    // Load the gesture library
    public abstract boolean load();

    public boolean isReadOnly() {
        return false;
    }

    /** @hide */
    public Learner getLearner() {
        return mStore.getLearner();
    }

    public void setOrientationStyle(int style) {
        mStore.setOrientationStyle(style);
    }

    public int getOrientationStyle() {
        return mStore.getOrientationStyle();
    }

    public void setSequenceType(int type) {
        mStore.setSequenceType(type);
    }

    public int getSequenceType() {
        return mStore.getSequenceType();
    }

    public Set<String> getGestureEntries() {
        return mStore.getGestureEntries();
    }

    public ArrayList<Prediction> recognize(Gesture gesture) {
        return mStore.recognize(gesture);
    }

    public void addGesture(String entryName, Gesture gesture) {
        mStore.addGesture(entryName, gesture);
    }

    public void removeGesture(String entryName, Gesture gesture) {
        mStore.removeGesture(entryName, gesture);
    }

    public void removeEntry(String entryName) {
        mStore.removeEntry(entryName);
    }

    public ArrayList<Gesture> getGestures(String entryName) {
        return mStore.getGestures(entryName);
    }
}
The two subclasses mainly implement the two abstract methods of the parent class and override isReadOnly to match their respective data sources. In essence, both simply turn their data source into a stream and hand it to the GestureStore instance, which performs the actual work. The code below makes this clear:
// The file-based gesture library
private static class FileGestureLibrary extends GestureLibrary {
    private final File mPath;

    public FileGestureLibrary(File path) {
        mPath = path;
    }

    @Override
    public boolean isReadOnly() {
        return !mPath.canWrite();
    }

    public boolean save() {
        // The gesture library has not changed, so there is nothing to save
        if (!mStore.hasChanged()) return true;

        final File file = mPath;
        final File parentFile = file.getParentFile();
        // Make sure the file path exists
        if (!parentFile.exists()) {
            if (!parentFile.mkdirs()) {
                return false;
            }
        }

        boolean result = false;
        try {
            file.createNewFile();
            mStore.save(new FileOutputStream(file), true); // the actual save operation
            result = true;
        } catch (FileNotFoundException e) {
            Log.d(LOG_TAG, "Could not save the gesture library in " + mPath, e);
        } catch (IOException e) {
            Log.d(LOG_TAG, "Could not save the gesture library in " + mPath, e);
        }

        return result;
    }

    public boolean load() {
        boolean result = false;
        final File file = mPath;
        if (file.exists() && file.canRead()) {
            try {
                // Delegate to the GestureStore instance to load the gesture library file
                mStore.load(new FileInputStream(file), true);
                result = true;
            } catch (FileNotFoundException e) {
                Log.d(LOG_TAG, "Could not load the gesture library from " + mPath, e);
            } catch (IOException e) {
                Log.d(LOG_TAG, "Could not load the gesture library from " + mPath, e);
            }
        }

        return result;
    }
}

// Loads the gesture library file from the res/raw directory of an Android application
private static class ResourceGestureLibrary extends GestureLibrary {
    private final WeakReference<Context> mContext;
    private final int mResourceId;

    public ResourceGestureLibrary(Context context, int resourceId) {
        mContext = new WeakReference<Context>(context);
        mResourceId = resourceId;
    }

    @Override
    public boolean isReadOnly() {
        // A gesture library in the raw directory is read-only
        return true;
    }

    // This kind of data source can only be loaded
    public boolean save() {
        return false;
    }

    public boolean load() {
        boolean result = false;
        final Context context = mContext.get();
        if (context != null) {
            final InputStream in = context.getResources().openRawResource(mResourceId);
            try {
                mStore.load(in, true);
                result = true;
            } catch (IOException e) {
                Log.d(LOG_TAG, "Could not load the gesture library from raw resource "
                        + context.getResources().getResourceName(mResourceId), e);
            }
        }

        return result;
    }
}
In addition to the two classes above, the Android gesture API also provides GestureOverlayView, which inherits from FrameLayout and lets the user easily draw gestures on the screen. Because GestureOverlayView is not a standard view widget, you must use its fully qualified class name in an XML layout file, as shown below:
<android.gesture.GestureOverlayView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@+id/gestures" android:layout_width="fill_parent" android:layout_height="fill_parent" android:eventsInterceptionEnabled="false" android:gestureStrokeType="multiple" android:orientation="vertical" >
Here, the android:gestureStrokeType attribute controls whether a gesture is completed in a single stroke or may consist of several strokes: single means the gesture is drawn in one stroke, and multiple means it may be drawn in multiple strokes.
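The same stroke type can also be set from code. A small sketch, assuming the layout above has already been inflated inside an Activity:

// Assumes setContentView() has already inflated the layout shown above.
GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
overlay.setGestureStrokeType(GestureOverlayView.GESTURE_STROKE_TYPE_MULTIPLE);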
GestureOverlayView defines three listener interfaces so that the application can respond at each stage of a gesture operation. They are defined as follows:
public static interface OnGesturingListener {
    void onGesturingStarted(GestureOverlayView overlay);
    void onGesturingEnded(GestureOverlayView overlay);
}

public static interface OnGestureListener {
    void onGestureStarted(GestureOverlayView overlay, MotionEvent event);
    void onGesture(GestureOverlayView overlay, MotionEvent event);
    void onGestureEnded(GestureOverlayView overlay, MotionEvent event);
    void onGestureCancelled(GestureOverlayView overlay, MotionEvent event);
}

public static interface OnGesturePerformedListener {
    void onGesturePerformed(GestureOverlayView overlay, Gesture gesture);
}
To understand what these interface methods really mean, we have to look at the GestureOverlayView source code. The user's touch events on the view fall into three phases, MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE, and MotionEvent.ACTION_UP, and the corresponding handler methods in GestureOverlayView are touchDown, touchMove, and touchUp.
1) In the touchDown method you can see the following code:
final ArrayList<OnGestureListener> listeners = mOnGestureListeners;
final int count = listeners.size();
for (int i = 0; i < count; i++) {
    listeners.get(i).onGestureStarted(this, event);
}
Here we find our onGestureStarted interface method, so it is the first one called, i.e. when the user presses down.
2) In the touchMove method, the onGesturingStarted callback appears first:
final ArrayList<OnGesturingListener> listeners = mOnGesturingListeners;
int count = listeners.size();
for (int i = 0; i < count; i++) {
    listeners.get(i).onGesturingStarted(this);
}
The callback to onGesture is found at the end of the same method:
final ArrayList<OnGestureListener> listeners = mOnGestureListeners;
final int count = listeners.size();
for (int i = 0; i < count; i++) {
    listeners.get(i).onGesture(this, event);
}
3) In the touchUp method, onGestureEnded and onGesturingEnded are called in sequence:
final ArrayList<OnGestureListener> listeners = mOnGestureListeners;
int count = listeners.size();
for (int i = 0; i < count; i++) {
    listeners.get(i).onGestureEnded(this, event);
}
clear(mHandleGestureActions && mFadeEnabled, mHandleGestureActions && mIsGesturing, false);
...
final ArrayList<OnGesturingListener> listeners = mOnGesturingListeners;
int count = listeners.size();
for (int i = 0; i < count; i++) {
    listeners.get(i).onGesturingEnded(this);
}
So the interface methods are called in the order onGestureStarted, onGesturingStarted, onGesture, onGestureEnded, onGesturingEnded; the two remaining callbacks are onGestureCancelled and onGesturePerformed. Look for onGestureCancelled first: it is called in the private method cancelGesture, and the code is similar to the other callbacks:
private void cancelGesture(MotionEvent event) {
    // pass the event to handlers
    final ArrayList<OnGestureListener> listeners = mOnGestureListeners;
    final int count = listeners.size();
    for (int i = 0; i < count; i++) {
        listeners.get(i).onGestureCancelled(this, event);
    }
    clear(false);
}
This private method is called only from touchUp, and onGestureCancelled and onGestureEnded are alternatives: when the gesture operation is cancelled, onGestureEnded is not called. onGesturingEnded, however, is still called immediately after onGestureCancelled.
The last interface method, onGesturePerformed, is called indirectly through the inner class FadeOutRunnable; this Runnable is posted with postDelayed from the clear method of GestureOverlayView, so the callback arrives asynchronously, after a delay, via the view's message queue. As its name suggests, clear is responsible for clearing the gesture. It is called in touchUp right after onGestureEnded, and it is also reached when the gesture operation is cancelled, so onGesturePerformed can still be called back; see the touchUp code fragment above.
Therefore, the overall callback sequence is: onGestureStarted, onGesturingStarted, onGesture, onGestureEnded (or onGestureCancelled), onGesturingEnded, and finally onGesturePerformed.
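This order is easy to verify by attaching all three listeners to a GestureOverlayView and logging each callback. A minimal sketch, assuming the layout shown earlier has been inflated and using an illustrative log tag:

// Observing the callback order; "GestureDemo" and R.id.gestures are
// assumptions based on the layout shown earlier, not jamendo code.
GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);

overlay.addOnGestureListener(new GestureOverlayView.OnGestureListener() {
    public void onGestureStarted(GestureOverlayView overlay, MotionEvent event) { Log.v("GestureDemo", "onGestureStarted"); }
    public void onGesture(GestureOverlayView overlay, MotionEvent event) { Log.v("GestureDemo", "onGesture"); }
    public void onGestureEnded(GestureOverlayView overlay, MotionEvent event) { Log.v("GestureDemo", "onGestureEnded"); }
    public void onGestureCancelled(GestureOverlayView overlay, MotionEvent event) { Log.v("GestureDemo", "onGestureCancelled"); }
});

overlay.addOnGesturingListener(new GestureOverlayView.OnGesturingListener() {
    public void onGesturingStarted(GestureOverlayView overlay) { Log.v("GestureDemo", "onGesturingStarted"); }
    public void onGesturingEnded(GestureOverlayView overlay) { Log.v("GestureDemo", "onGesturingEnded"); }
});

overlay.addOnGesturePerformedListener(new GestureOverlayView.OnGesturePerformedListener() {
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) { Log.v("GestureDemo", "onGesturePerformed"); }
});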
2. Command pattern-based gesture control
The command pattern encapsulates a request as an object, which lets you parameterize clients with different requests, queue or log requests, and support undoable operations.
It mainly involves four roles (a generic sketch in code follows the list):
1) Receiver role: the object that actually carries out the request. Any class can act as the receiver, as long as it can perform the operations the command requires.
2) Command role: this role consists of a Command interface and one or more ConcreteCommand classes that implement it. A concrete command object usually holds a reference to a receiver and calls the receiver's methods to carry out the requested operation.
3) Caller role (Invoker): holds a command object and asks it to execute the corresponding request; the invoker is effectively the entry point for using command objects.
4) Assembler role (Client): creates the concrete command objects and sets their receivers.
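Before turning to jamendo, here is a minimal, generic sketch of the four roles; all names in it are illustrative and do not come from jamendo:

// Command: the abstract command role
interface Command {
    void execute();
}

// Receiver: the object that actually does the work
class Light {
    void turnOn() {
        System.out.println("light on");
    }
}

// ConcreteCommand: holds a receiver and delegates to it
class TurnOnCommand implements Command {
    private final Light mLight;

    TurnOnCommand(Light light) {
        mLight = light;
    }

    public void execute() {
        mLight.turnOn();
    }
}

// Invoker: holds a command object and triggers it
class RemoteButton {
    private Command mCommand;

    void setCommand(Command command) {
        mCommand = command;
    }

    void press() {
        if (mCommand != null) mCommand.execute();
    }
}

// Client/assembler: creates the command and wires in its receiver
public class CommandDemo {
    public static void main(String[] args) {
        RemoteButton button = new RemoteButton();
        button.setCommand(new TurnOnCommand(new Light()));
        button.press(); // prints "light on"
    }
}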
The following walks through the implementation of jamendo's gesture operations against the structure of the command pattern. Most of the code lives in the gesture package, and the responsibility of each class can be seen from its name:
1) Abstract command role (Command): the GestureCommand interface. The definition is simple; the whole interface has a single execute method:
public interface GestureCommand {
    void execute();
}
2) Concrete command role (ConcreteCommand): PlayerGestureNextCommand, PlayerGesturePlayCommand, PlayerGesturePrevCommand, and PlayerGestureStopCommand. Each of these four implementation classes holds an instance of the receiver role, PlayerEngine, which performs the actual control operation. Taking one of them as an example:
public class PlayerGestureNextCommand implements GestureCommand {

    PlayerEngine mPlayerEngine; // the receiver; holding the interface keeps it flexible

    public PlayerGestureNextCommand(PlayerEngine engine) {
        mPlayerEngine = engine;
    }

    @Override
    public void execute() {
        Log.v(JamendoApplication.TAG, "PlayerGestureNextCommand");
        mPlayerEngine.next(); // the receiver carries out the concrete operation
    }
}
3) Receiver role: PlayerEngine, an interface that defines the common operations for audio playback. It has two implementations, IntentPlayerEngine and PlayerEngineImpl; since they do not belong to this article's gesture topic, they are analyzed in later articles.
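For the purposes of this article only the shape of the interface matters. A rough sketch of the playback operations the four commands rely on; apart from next(), which appears in the command shown above, the method names are assumptions about jamendo's actual interface:

// A rough sketch of the receiver interface; only next() is confirmed by the
// command above, the other methods are assumptions about a typical engine.
public interface PlayerEngine {
    void play();
    void stop();
    void next();
    void prev();
}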
4) Caller role (Invoker): the GesturesHandler class. This is the key class for gesture control. It implements the OnGesturePerformedListener interface and overrides the onGesturePerformed callback. In that method it first checks whether the gesture library has been loaded; once the library is loaded correctly, it calls GestureLibrary.recognize to recognize the gesture, and a predefined gesture is considered recognized only when the match score is greater than 2.0. Once a gesture is recognized, the execute method defined by the command interface is called to control music playback. The code is as follows:
public class GesturesHandler implements OnGesturePerformedListener {

    private GestureLibrary mLibrary; // the gesture library
    private boolean mLoaded = false;
    private GestureCommandRegister mRegister; // holds the registered control commands (next, prev, play, stop)

    public GesturesHandler(Context context, GestureCommandRegister register) {
        // Load the gesture library from the res/raw directory
        mLibrary = GestureLibraries.fromRawResource(context, R.raw.gestures);
        load();
        setRegister(register);
    }

    /**
     * Load the gesture library
     * @return
     */
    private boolean load() {
        mLoaded = mLibrary.load();
        return mLoaded;
    }

    @Override
    public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
        if (!mLoaded) {
            if (!load()) {
                return;
            }
        }

        ArrayList<Prediction> predictions = mLibrary.recognize(gesture);
        if (predictions.size() > 0) {
            Prediction prediction = predictions.get(0);
            Log.v(JamendoApplication.TAG, "Gesture " + prediction.name
                    + " recognized with score " + prediction.score);
            if (prediction.score > 2.0) {
                // A score greater than 2.0 confirms the predefined gesture;
                // look up the previously registered command and execute it
                GestureCommand command = getRegister().getCommand(prediction.name);
                if (command != null)
                    command.execute();
            }
        }
    }

    public void setRegister(GestureCommandRegister mRegister) {
        this.mRegister = mRegister;
    }

    public GestureCommandRegister getRegister() {
        return mRegister;
    }
}
5) Assembler role (Client): the GestureCommandRegister and PlayerGestureCommandRegiser classes jointly play the client role. GestureCommandRegister maintains a HashMap that serves as the container for the command objects; PlayerGestureCommandRegiser is its subclass and simply registers the next, prev, play, and stop commands:
public class GestureCommandRegister {

    private HashMap<String, GestureCommand> mGestures;

    public GestureCommandRegister() {
        mGestures = new HashMap<String, GestureCommand>();
    }

    public void registerCommand(String name, GestureCommand gestureCommand) {
        mGestures.put(name, gestureCommand);
    }

    public GestureCommand getCommand(String name) {
        return mGestures.get(name);
    }
}

public class PlayerGestureCommandRegiser extends GestureCommandRegister {

    public PlayerGestureCommandRegiser(PlayerEngine playerEngine) {
        super();
        registerCommand("next", new PlayerGestureNextCommand(playerEngine));
        registerCommand("prev", new PlayerGesturePrevCommand(playerEngine));
        registerCommand("play", new PlayerGesturePlayCommand(playerEngine));
        registerCommand("stop", new PlayerGestureStopCommand(playerEngine));
    }
}
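The article does not show how jamendo's own Activity assembles these pieces, so the following is only a hedged sketch of what the wiring might look like inside an Activity; the playerEngine variable and the R.id.gestures id are assumptions:

// A hedged sketch of wiring the gesture classes together; how jamendo
// obtains its PlayerEngine is not shown in this article, so playerEngine
// is treated as a given here.
GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);

PlayerGestureCommandRegiser register = new PlayerGestureCommandRegiser(playerEngine);
GesturesHandler handler = new GesturesHandler(this, register);

// GesturesHandler implements OnGesturePerformedListener, so it can be
// attached to the overlay directly
overlay.addOnGesturePerformedListener(handler);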
With that, the code for jamendo's gesture operations has been covered.