Considerations for developing Adobe AIR mobile applications

Source: http://www.adobe.com/cn/devnet/air/articles/considerations-air-apps-mobile.html

Adobe AIR has evolved beyond its initial goal of being a desktop application platform. Today it supports standalone application development across mobile, desktop, and digital home devices. AIR's broad reach is part of what makes it an attractive development platform. At the same time, each of these environments imposes unique requirements on application development and design.

Mobile applications, for example, are often run in short sessions. They need a UI that works on smaller screens, yet often must scale up to tablets and support different screen orientations. They must support touch input while integrating hardware and software facilities unique to such devices. They must also account for the memory and graphics models of mobile devices.

This article describes the features AIR provides to support mobile application development and the design approaches they enable. Together they will help you develop applications that run on Android, BlackBerry Tablet OS, and iOS devices, on both smartphones and tablets.

Screen

The first and most important consideration when targeting mobile devices is the screen. Mobile screens are relatively small, both physically and in the number of pixels they can display. They also have high densities (pixels per inch), and different devices combine different densities and dimensions. Mobile devices may also be held in landscape or portrait orientation.

To work correctly across such a diverse range of sizes and densities, AIR provides the following key APIs:

    • Stage.stageWidth and Stage.stageHeight: These two properties provide the actual screen dimensions at run time. Note that these values may change when the application enters or exits full-screen mode, or when the screen rotates. (Rotation is covered later in this article.)
    • Capabilities.screenDPI: This property provides the number of pixels per inch on the screen.

By combining the information these properties provide, an application can adapt its display to a wide range of screens, including sizes and densities that were not anticipated when the application was written.
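
As a concrete illustration, here is a minimal sketch of density-aware setup. It assumes it runs in the document class, where stage is available, and the 0.3-inch touch-target size is an arbitrary example value:

    import flash.display.StageAlign;
    import flash.display.StageScaleMode;
    import flash.system.Capabilities;

    // Take manual control of layout: no automatic scaling, origin at top left.
    stage.scaleMode = StageScaleMode.NO_SCALE;
    stage.align = StageAlign.TOP_LEFT;

    // Convert a physical size (in inches) to pixels using the reported density.
    var buttonHeightPx:Number = 0.3 * Capabilities.screenDPI;

    trace("Stage: " + stage.stageWidth + "x" + stage.stageHeight +
          " at " + Capabilities.screenDPI + " dpi");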

Note: If you have built desktop applications on AIR, be aware that mobile applications have only one Stage, and the NativeWindow class is non-operational. Non-operational means that the class can be referenced and instantiated, but doing so has no effect. This makes it possible to write shared code that runs in both environments. To check whether NativeWindow is available, query NativeWindow.isSupported.

Mobile applications are not required to support screen rotation, but at a minimum you should remember that not all mobile devices are portrait (taller than wide) in their default orientation. Applications that do not want to support screen rotation can opt out entirely by setting <autoOrients> to false in the application descriptor. Applications that want to handle rotation can opt in by setting <autoOrients> to true, and then listen for the Stage's REORIENTING and REORIENT events. Note that not all mobile platforms dispatch the REORIENTING event, but all of them dispatch the REORIENT event.
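
In ActionScript these notifications surface as StageOrientationEvent. A minimal sketch of an opt-in handler:

    import flash.events.StageOrientationEvent;

    // ORIENTATION_CHANGING corresponds to REORIENTING and may not be
    // dispatched on every platform; ORIENTATION_CHANGE (REORIENT) always is.
    stage.addEventListener(StageOrientationEvent.ORIENTATION_CHANGE, onReorient);

    function onReorient(event:StageOrientationEvent):void {
        // Re-run layout against the new stage dimensions.
        trace("Now " + event.afterOrientation + ": " +
              stage.stageWidth + "x" + stage.stageHeight);
    }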

It is also worth noting that applications are not obliged to use the built-in auto-orientation feature to handle screen rotation. If you want to match system behavior, however, the built-in events are the most appropriate. For example, on some devices with a slide-out physical keyboard, the system dispatches an orientation change to match the keyboard, even though the device itself has not actually rotated. For applications that require text input, reorienting in this case is probably desirable. Other applications, such as games, may prefer to turn off auto-orientation and instead monitor accelerometer events to determine the physical orientation of the device. I'll cover the accelerometer later in this article.

Touch input

Once an application is displayed on the screen, it is usually ready to accept input from the user. For mobile applications, this means accepting touch input.

AIR automatically maps simple single-finger gestures, such as a single-finger tap on a button, to the corresponding mouse events. This makes it possible to write shared code that behaves sensibly on both mobile and desktop platforms.

For more complex interactions, you need to take advantage of multitouch input. AIR on mobile provides multitouch support through the following key APIs:

    • Multitouch: This controller class lets the application determine which touch and gesture events are available, and select which to use.
    • TouchEvent: The application receives events of this type when handling raw touch events.
    • GestureEvent, PressAndTapGestureEvent, TransformGestureEvent: The application receives these events when gestures are processed.

Applications that want the standard gesture events of the underlying platform (for example, pinching or spreading two fingers to zoom out or in) should set Multitouch.inputMode to MultitouchInputMode.GESTURE. The system then combines multiple touch points into gestures and dispatches an event for each gesture. For example, a zoom gesture is dispatched as a TransformGestureEvent of type TransformGestureEvent.GESTURE_ZOOM.
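
A minimal sketch of gesture handling; photo stands in for any DisplayObject already on the display list:

    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;
    import flash.events.TransformGestureEvent;

    Multitouch.inputMode = MultitouchInputMode.GESTURE;
    photo.addEventListener(TransformGestureEvent.GESTURE_ZOOM, onZoom);

    function onZoom(event:TransformGestureEvent):void {
        // scaleX/scaleY carry the incremental scale since the previous event.
        photo.scaleX *= event.scaleX;
        photo.scaleY *= event.scaleY;
    }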

An application can instead opt to receive raw touch events by setting Multitouch.inputMode to MultitouchInputMode.TOUCH_POINT. The system dispatches a series of events for each touch, indicating when the touch point begins, how it moves, and when it ends; multiple touch points can be active simultaneously. The application is responsible for synthesizing this stream of events into something meaningful.
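
For example, a sketch that tracks each finger independently via touchPointID:

    import flash.ui.Multitouch;
    import flash.ui.MultitouchInputMode;
    import flash.events.TouchEvent;

    Multitouch.inputMode = MultitouchInputMode.TOUCH_POINT;

    stage.addEventListener(TouchEvent.TOUCH_BEGIN, onTouch);
    stage.addEventListener(TouchEvent.TOUCH_MOVE, onTouch);
    stage.addEventListener(TouchEvent.TOUCH_END, onTouch);

    function onTouch(event:TouchEvent):void {
        // touchPointID ties together the begin/move/end events of one
        // finger, so simultaneous touches can be told apart.
        trace(event.type, event.touchPointID, event.stageX, event.stageY);
    }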

Text input

Mobile devices with soft keyboards (keyboards displayed on screen, with no physical keys) also deserve special consideration. Not every mobile device uses a soft keyboard, but such devices are increasingly popular, so you should make sure your application works well on them.

When visible, the soft keyboard inevitably occupies some of the available screen space. To accommodate this, AIR by default adjusts the Stage so that the text input control and the keyboard remain visible at the same time. The adjustment typically pans the Stage upward, so its top edge is pushed off the top of the screen and cannot be seen.

An application can disable this behavior and implement its own logic for supporting the soft keyboard. The behavior is controlled by the softKeyboardBehavior setting in the application descriptor. The default is pan; to implement your own logic, use none.
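
For reference, this is a fragment of the application descriptor (the setting lives under <initialWindow>):

    <initialWindow>
        <!-- "pan" (the default) slides the Stage; "none" leaves layout to you -->
        <softKeyboardBehavior>none</softKeyboardBehavior>
    </initialWindow>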

If the default adjustment behavior is disabled, then when the soft keyboard is activated or deactivated, AIR reports the area of the Stage covered by the keyboard via Stage.softKeyboardRect. The application should listen for the SoftKeyboardEvent dispatched when this value changes, and adjust its layout accordingly. (SoftKeyboardEvent is dispatched under both soft keyboard behaviors.)
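
A minimal sketch of keyboard-aware layout; layoutIn() is a hypothetical routine that fits the UI into the given area:

    import flash.events.SoftKeyboardEvent;
    import flash.geom.Rectangle;

    stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_ACTIVATE, onKeyboard);
    stage.addEventListener(SoftKeyboardEvent.SOFT_KEYBOARD_DEACTIVATE, onKeyboard);

    function onKeyboard(event:SoftKeyboardEvent):void {
        var covered:Rectangle = stage.softKeyboardRect;  // zero-sized when hidden
        // Hypothetical: lay the UI out in the space the keyboard leaves free.
        layoutIn(stage.stageWidth, stage.stageHeight - covered.height);
    }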

Applications usually do not need to worry about activating the soft keyboard, because it is raised automatically when a text field receives focus. An application can also set InteractiveObject.needsSoftKeyboard to request that the soft keyboard be displayed when any interactive object gains focus, and can ask for the keyboard to be raised immediately by calling InteractiveObject.requestSoftKeyboard(). These APIs have no effect on devices that do not use a soft keyboard.

Sensors

Users of mobile devices are often not content to interact with applications through the multitouch screen alone: they also expect applications to understand their location and to respond to the device's physical orientation and movement. AIR supports this through two key APIs:

    • Geolocation: This API dispatches events providing the geographic location (latitude and longitude) of the device, as well as its motion (heading and speed).
    • Accelerometer: This API dispatches events reporting the forces applied to the device along the x, y, and z axes.

For some applications, geolocation is inherent to what the application does; consider an application that finds the nearest ATM. Many more applications can use this information to enhance the user experience. For example, a voice memo application might record where each memo was made in order to provide more context during playback.

As mentioned earlier, accelerometer input is useful when you want to understand the physical orientation of the device, not just its logical orientation. Accelerometer data can also turn the device itself into a controller: many applications let the user steer the application by tilting or rotating the device.

All of these sensor APIs allow the caller to set a desired update interval, so that location and acceleration updates are dispatched to listeners at the requested frequency. Note that no method can guarantee that frequency; the actual update rate depends on a number of factors, including the underlying hardware.
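
A minimal accelerometer sketch; the 100 ms interval is an arbitrary example value:

    import flash.sensors.Accelerometer;
    import flash.events.AccelerometerEvent;

    if (Accelerometer.isSupported) {
        var acc:Accelerometer = new Accelerometer();
        acc.setRequestedUpdateInterval(100);  // a request, not a guarantee
        acc.addEventListener(AccelerometerEvent.UPDATE, onAccUpdate);
    }

    function onAccUpdate(event:AccelerometerEvent):void {
        // Force in g along each axis.
        trace(event.accelerationX, event.accelerationY, event.accelerationZ);
    }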

Web View

No modern application runtime is complete without support for HTML content, and AIR on mobile provides it through the StageWebView API. StageWebView gives an AIR application access to the underlying platform's built-in HTML rendering capability. Note that because StageWebView uses the platform's HTML control, it does not guarantee consistent rendering across platforms; it guarantees rendering that is consistent with the platform on which it runs. If you are using it to host web pages, this likely matches your users' expectations.

Because it relies on a native platform control, StageWebView does not integrate with the display list. Instead, it floats above all other content; think of it as attached directly to the stage, as its name suggests. The contents of a StageWebView control can, however, be captured to a bitmap via drawViewPortToBitmapData() and placed in the display list. This can be used, for example, to let a snapshot of a web page participate in a screen-transition animation.
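
A brief sketch of both techniques, assuming it runs in a display object class with access to stage:

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.geom.Rectangle;
    import flash.media.StageWebView;

    var webView:StageWebView = new StageWebView();
    webView.stage = stage;  // floats above the display list
    webView.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight);
    webView.loadURL("http://www.adobe.com");

    // Later: snapshot the viewport into the display list, e.g. for a transition.
    var snapshot:BitmapData =
        new BitmapData(int(webView.viewPort.width), int(webView.viewPort.height));
    webView.drawViewPortToBitmapData(snapshot);
    addChild(new Bitmap(snapshot));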

If you are familiar with the HTMLLoader API in AIR on the desktop, note that StageWebView is not a drop-in replacement. HTMLLoader contains built-in HTML rendering support and can host HTML and JavaScript that runs outside the browser sandbox, including the application itself. StageWebView can only host HTML and JavaScript content running in a traditional browser sandbox, and it cannot host the application itself.

If your users want to go to the browser instead, you can make that happen by calling navigateToURL(). This can also redirect the user to another application, such as YouTube or Google Maps, if the URL matches a prefix registered by that application.

Images

When it comes to photography, the question on today's mobile devices is not whether there is a camera, but how many. The new APIs in mobile AIR include integration with the camera and with any photos already stored on the device.

CameraUI and CameraRoll classes

The built-in camera functionality is accessible through the new CameraUI class. As its name suggests, it differs from the familiar Camera class in that it is an API to the camera user interface, not a direct API to the camera. Depending on the device, this means the user may be able to choose between still and video capture, select among resolutions, toggle the flash, and switch between front and rear cameras.
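
A minimal sketch of launching the camera UI for a still image:

    import flash.media.CameraUI;
    import flash.media.MediaType;
    import flash.events.MediaEvent;

    if (CameraUI.isSupported) {
        var cameraUI:CameraUI = new CameraUI();
        cameraUI.addEventListener(MediaEvent.COMPLETE, onCapture);
        cameraUI.launch(MediaType.IMAGE);  // or MediaType.VIDEO
    }

    function onCapture(event:MediaEvent):void {
        // event.data is a MediaPromise; see the MediaPromise section below.
    }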

Mobile devices not only take pictures, they also store them. The user's library of existing pictures can be accessed through the CameraRoll class. The browseForImage() method opens the device's standard UI for selecting a photo from the library. The album can also be written to: images can be stored in the library via the addBitmapData() method.
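
A short sketch of both directions; snapshot stands in for a BitmapData your application has produced:

    import flash.media.CameraRoll;
    import flash.events.MediaEvent;

    if (CameraRoll.supportsBrowseForImage) {
        var roll:CameraRoll = new CameraRoll();
        roll.addEventListener(MediaEvent.SELECT, onSelect);
        roll.browseForImage();  // opens the device's standard picker
    }

    function onSelect(event:MediaEvent):void {
        // event.data is a MediaPromise for the chosen photo.
    }

    if (CameraRoll.supportsAddBitmapData) {
        new CameraRoll().addBitmapData(snapshot);  // write back to the album
    }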

MediaPromise class

Both CameraUI and CameraRoll return the selected picture through a new event type called MediaEvent. MediaEvent is extremely simple, adding just one interesting member to its Event parent: data. The data member is of type MediaPromise, and the media item must be accessed through that class.

As its name implies, a MediaPromise is a promise to provide the data for a media item; it does not necessarily hold those bytes itself. This distinction is important, and it is worth taking a few minutes to study the API in order to understand how to use it effectively.

Whether a media item is best kept in memory or in storage depends on a number of factors. Video, for example, usually has to be kept in storage, because available memory tends to be too small. And if a media item is in the device's photo library, it is already in storage and should not be read into memory unless necessary. A freshly captured still photo, on the other hand, is usually kept in memory, because it is likely small enough and will probably be displayed immediately.

The MediaPromise class accommodates this uncertainty in a single object that, used carefully, can be exploited effectively. If your application wants to keep a media item in storage to free up memory, it can easily check whether the item is already in storage by checking MediaPromise.file for a non-null value. This can be the difference between having enough storage and running out partway through processing a video.

If an application wants to process a media item in memory, it can always access the bytes by reading the stream returned by MediaPromise.open(). Depending on where the item resides, MediaPromise automatically serves the bytes from the in-memory copy or from storage. When using open(), be sure to check MediaPromise.isAsync to determine what kind of stream has been returned.
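
A sketch of that check; the completion handler is left as a stub:

    import flash.events.Event;
    import flash.events.IEventDispatcher;
    import flash.media.MediaPromise;
    import flash.utils.IDataInput;

    function readPromise(promise:MediaPromise):void {
        if (promise.file != null) {
            trace("Already in storage at " + promise.file.nativePath);
            return;
        }
        var stream:IDataInput = promise.open();
        if (promise.isAsync) {
            // Asynchronous source: wait for the bytes before reading.
            IEventDispatcher(stream).addEventListener(Event.COMPLETE, onLoaded);
        } else {
            trace(stream.bytesAvailable + " bytes ready to read");
        }
    }

    function onLoaded(event:Event):void {
        // All promised bytes are now available.
    }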

Finally, to handle the common scenario in which the returned media item is added to the display list, the Loader class has been extended with a new method, Loader.loadFilePromise(). This lets the item be added directly to the display list, optimizing away potentially unnecessary copies within the application. As the method's name indicates, it can be used with any file promise: the MediaPromise class implements the IFilePromise interface.
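
For example, a sketch that displays a promised image, assuming it runs in a display object container:

    import flash.display.Loader;
    import flash.media.MediaPromise;

    function showMedia(promise:MediaPromise):void {
        var loader:Loader = new Loader();
        // Streams the promised bytes straight into the Loader,
        // avoiding an extra in-memory copy.
        loader.loadFilePromise(promise);
        addChild(loader);
    }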

Application life cycle

On mobile devices, applications have a life cycle that is largely outside their control. They cannot launch themselves; they are launched either directly by the user (for example, from the home screen) or on the user's behalf (for example, via a registered URL pattern). They can be sent to the background at any time, and while running in the background they can be terminated at any time, typically when device resources are needed by foreground applications.

Mobile applications can neither launch nor shut down by themselves; on some mobile platforms, NativeApplication.exit() is non-operational (a no-op). Applications should not rely on saving state during shutdown; instead, they should save state when they are sent to the background and/or periodically while running.

Applications are notified that they have been sent to the background by the dispatch of a DEACTIVATE event, and that they have returned to the foreground by the dispatch of a corresponding ACTIVATE event. AIR also takes certain actions of its own when an application transitions to the background or foreground; these vary by platform.

Android background behavior

On Android, applications are encouraged to perform as little background processing as possible, but no strict limits are imposed. When an AIR application on Android is sent to the background, its frame rate is reduced to four frames per second; events continue to be dispatched, but the rendering phase of the event loop is skipped.

As a result, AIR applications on Android can continue to perform background tasks, such as finishing an upload or download or synchronizing information periodically. When running in the background, however, the application should take steps such as further reducing its frame rate and shutting down or slowing other timers.
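
A minimal sketch of reacting to those transitions; the 30 fps foreground rate is an arbitrary example value:

    import flash.desktop.NativeApplication;
    import flash.events.Event;

    NativeApplication.nativeApplication.addEventListener(Event.DEACTIVATE, onBackground);
    NativeApplication.nativeApplication.addEventListener(Event.ACTIVATE, onForeground);

    function onBackground(event:Event):void {
        stage.frameRate = 1;  // cut timer/event overhead while backgrounded
    }

    function onForeground(event:Event):void {
        stage.frameRate = 30;  // restore the application's normal rate
    }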

iOS background behavior

On iOS, applications are not normally allowed to run in the background. Instead, they must declare the specific kinds of background processing they wish to perform, such as keeping a voice-over-IP call alive or finishing an incomplete upload.

AIR does not support this iOS background processing model, so when it is sent to the background, an AIR application is simply suspended: its frame rate drops to zero, no events are dispatched, and nothing is rendered. By default, however, the application remains in memory, which allows it to retain its state when returned to the foreground.

Performance

To achieve excellent performance in a mobile application, begin by choosing sound fundamental approaches throughout your application. Squeezing a 10% improvement out of a linear-time algorithm clearly cannot compare with the gains from using a constant-time algorithm where one applies.

Startup time

Startup time is challenging because its costs tend to be spread throughout the application. To keep startup costs to a minimum, focus on running as little code as possible, rather than on making your code faster.

For example, suppose you are writing a game, and on the first screen you want to display the current high scores, which are stored locally. Executing the code that retrieves the scores can be surprisingly expensive. First, because this code path is running for the first time, you may pay the cost of interpreting or compiling the code, so it runs slower than steady-state ActionScript. Second, you wait for the information to be retrieved from the file system. Finally, you pay the cost of laying out and rendering the information on screen.

Consider postponing all of this work until after the first screen is displayed. You can then prepare the high-score list while the user is busy admiring your artwork, and finally bring it onto the screen with a fade or animation.

Note that the optimization here lies in choosing when to perform the work, not in performing it as fast as possible. What matters most is the user's perception of performance: users notice work only when they are waiting for it to finish.

Rendering

The rise of the GPU has transformed the performance characteristics of the typical rendering pipeline. When rendering on the CPU, each pixel touched carries a relatively high cost, so it is best to render from shape descriptions, with preprocessing that ensures each pixel on the screen is drawn only once. This is the basic approach AIR uses when rendering traditional vector-based content.

The GPU, on the other hand, is poor at rendering shapes but excels at moving large numbers of pixels around, often several times more pixels than actually fit on the screen. The best way to exploit the GPU is to build the UI from a set of bitmaps, and then restrict yourself to transforming those bitmaps.

AIR gives you the best of both. You can draw with the full capability of the AIR rendering model, then cache the results as bitmaps, which are rendered to the screen efficiently. Use BitmapData.draw() to capture rendered results in this way.
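
A minimal caching sketch; vectorArt stands for any vector-drawn DisplayObject, and the surrounding code is assumed to run in a display object container:

    import flash.display.Bitmap;
    import flash.display.BitmapData;

    // Rasterize the vector content once on the CPU...
    var cached:BitmapData = new BitmapData(int(vectorArt.width),
                                           int(vectorArt.height),
                                           true, 0x00000000);
    cached.draw(vectorArt);

    // ...then display the bitmap, which is cheap to transform.
    var bitmap:Bitmap = new Bitmap(cached);
    bitmap.smoothing = true;
    addChild(bitmap);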

Note that you could also package pre-rendered bitmaps with your application instead of rendering them dynamically, but the proliferation of screen sizes and densities makes it impractical to pre-generate every necessary variant. Rendering and caching at run time is therefore not only fast but also well adapted to today's expanding range of devices.

Memory

Although today's mobile devices contain plenty of RAM, keep in mind that they use a different memory management model than traditional desktop operating systems. On the desktop, if memory demands grow too high, memory contents can spill over to disk and be brought back later. This lets the operating system keep an almost unlimited number of programs running at once.

On mobile devices, this spill-to-disk mechanism is not available. Instead, if memory demands exceed the physically available memory, background applications are forced to exit, freeing the memory they were consuming. If a memory request still cannot be satisfied, the application requesting the memory exits itself.

There are two takeaways. First, understand the overall memory requirements of your application, to ensure it does not exhaust memory while running. Second, to increase the odds that your application survives in the background, it must use as little memory as possible while backgrounded.

You can achieve these goals by managing your application's memory explicitly. At first blush this may seem odd; after all, the garbage collector is supposed to do that job for you. But it is better to think of the garbage collector as the mechanism that empties your trash: deciding which useless objects to throw into the trash is still up to you.

With an explicit memory management approach, the first step is to make sure you clear references to objects that are no longer needed. For example, suppose your application reads an XML configuration file at startup and copies a few important values out of the document. The XML object tree created in the process is probably no longer needed, yet the application may still hold a reference to the root XML object, pinning the entire document in memory. After reading the configuration values, the application should set its reference to the XML document to null, putting the object in the trash so it can be garbage collected.
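
A tiny sketch of the pattern; configText and the server element are hypothetical:

    // Assume configText was read from a file at startup.
    var config:XML = new XML(configText);
    var serverUrl:String = String(config.server.@url);  // copy out what you need

    // Drop the only reference so the whole tree becomes collectible.
    config = null;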

Explicit memory management also matters when dealing with large numbers of objects. Consider an application that loads a set of pictures: written naively, it will always run out of memory if the set is large enough. If the implementation instead limits how many images are held in memory at a time, it will never exhaust memory no matter how many pictures are in the set. This can be done by releasing old pictures before loading new ones, or, more efficiently, by keeping a fixed pool of objects in memory and cycling the pictures through them.
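
A sketch of the bounded-cache idea, assuming images arrive as Bitmap objects; the limit of three is an arbitrary example value:

    import flash.display.Bitmap;

    const MAX_IMAGES:int = 3;
    var cache:Array = [];

    function retain(image:Bitmap):void {
        cache.push(image);
        if (cache.length > MAX_IMAGES) {
            var oldest:Bitmap = cache.shift();
            oldest.bitmapData.dispose();  // release pixel memory immediately
        }
    }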

Storage

Mobile devices provide a local file system that applications can use to store preferences, documents, and the like. In general, an application should assume that this storage is accessible only to the application itself and cannot be shared with other applications. On all platforms, this store is accessible via the File.applicationStorageDirectory property.
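
A minimal sketch of writing a preferences file; the file name and contents are arbitrary example values:

    import flash.filesystem.File;
    import flash.filesystem.FileMode;
    import flash.filesystem.FileStream;

    var prefsFile:File = File.applicationStorageDirectory.resolvePath("prefs.json");
    var fs:FileStream = new FileStream();
    fs.open(prefsFile, FileMode.WRITE);
    fs.writeUTFBytes('{"volume": 0.8}');
    fs.close();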

Android adds a secondary file system, usually backed by a removable SD card and accessed via the "/sdcard" path. Unlike the main application store, this location can be read and written by all applications on the device. Be aware, however, that this secondary storage is not always available: the SD card may be removed, and even when present it may not be mounted.

The growing popularity of cameras on mobile devices also brings a shared storage location specific to photos. As shown in the "Images" section above, applications should generally access this location through the CameraRoll API. Although on some platforms stored photos can be reached directly through the file system API, this does not work on all platforms.

Deployment

In the mobile world, deployment happens primarily through application markets, which provide discovery, installation, and update capabilities on the device.

To prepare an AIR application for deployment to a given market, package it in the appropriate platform-specific format. For example, to upload your application to the Apple App Store, package it as an .ipa file; to upload it to the Android Market, package it as an .apk file. These options are available within Flash Builder and can be scripted via the ADT command-line tool.

All mobile application markets require published applications to be signed. For iOS, signing must be done with an Apple-issued certificate. For Android, developers should create a self-signed certificate valid for at least 25 years and must sign all updates to an application with the same certificate. Because the certificate requirements differ, publishing to more than one market means keeping track of multiple certificates.

When preparing to deploy a mobile application to the Android Market, keep in mind that AIR itself is deployed separately. (On iOS, each application is packaged with its own copy of AIR, so this discussion does not apply.) If your application is installed on a device that does not have AIR, the user is redirected to install AIR when the application first launches. As far as possible, you should ensure that this redirection returns the user to the same market from which they obtained your application. To do this, pass the appropriate URL for that market via the -airDownloadURL token when invoking the ADT command-line tool. Contact the application market if you need to determine the correct URL to use.
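
For illustration, an ADT packaging invocation with that token might look like the following; the URL, keystore, and file names are placeholders:

    adt -package -target apk -airDownloadURL http://example-market.com/air \
        -storetype pkcs12 -keystore cert.p12 \
        MyApp.apk MyApp-app.xml MyApp.swf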

Where to go from here

Developing mobile applications with Adobe AIR lets you create a single application that can be deployed across Android, iOS, and BlackBerry Tablet OS smartphones and tablets.

AIR can do this because it provides cross-platform abstractions where necessary (for example, for accessing photo albums), dynamically discovers device properties (such as screen size), and gets out of your way when needed (for example, by letting the file system API reach any part of the file system).

Building cross-device applications also requires an understanding of memory, the application life cycle, and the other mobile-specific topics covered here. Combining this knowledge with the AIR runtime enables the rapid creation of powerful cross-device mobile applications.

For more information about developing mobile applications with Adobe AIR, visit the mobile application development resources in the Mobile and Devices Developer Center and the Adobe AIR Developer Center.
