Hyper-Focus: A Practical Testing Guide for Mobile Applications


Testers are often seen as bug seekers, but have you ever wondered how they actually carry out their tests? Are you curious about what they do, and how they add value to a typical technical project?

This article walks you through a tester's thinking process and explores the considerations that come up when testing mobile apps. Its purpose is to reveal how testers think and to show the breadth and depth of what they typically consider.

Testers need to ask questions

The core competency of a tester is asking challenging, relevant questions. If you can combine research, questioning technique, and knowledge of the technology and the product, you will gradually become a good tester.

For example, testers might ask:

· What platform should this app be used on?

· What the heck is this app for?

· What happens if I do this?

And so on.

Testers can find problems in a wide variety of places: conversations, designs, documents, user feedback, or the product itself. There are many possibilities, so let's explore them!

Where to start testing

Ideally, testers would have command of all the latest details of the product under test. In reality this is rare, so, like everyone else, testers have to work with the limited information at hand. But that is no excuse not to test! Testers can collect information from many different sources, both internal and external.

At this stage, testers can ask these questions:

· What sources of information are available: specifications? Project meetings? User documentation? A knowledgeable team member? Is there a support forum or an internal company forum that can help? Is there a record of existing bugs?

· Which operating systems, platforms, and devices does the application run on, and which should it be tested on?

· What types of data (such as personal information or credit card numbers) does the application process?

· Does the application integrate with external services (such as APIs and data sources)?

· Does the app rely on a dedicated mobile web page?

· How do existing customers rate this product?

· How much time is available for testing?

· What are the priorities and risks of testing?

· Which users are unhappy with it, and why?

· How is the app released and updated?

Based on the information gathered above, testers can develop a test plan. The budget usually determines the testing approach: a test that must finish in a day looks quite different from one that has a week or a month. As you become familiar with the team, the workflow, and the answers to these questions, you become better able to predict the outcome.

Case study: user comments on the Facebook app

When collecting information as a tester, I like to use the Facebook app as an example, because user complaints about it are everywhere. Here are just a few of the comments users have left in the iTunes App Store; they are not hard to find.

The Facebook app on the iPhone has a lot of negative comments.

If I took on the challenge of testing the Facebook app, I would certainly take this feedback into account; ignoring it would be foolish.

The creativity of testers

You probably know what the app is supposed to do, but what can it actually do? How do users really use it? Testers are adept at thinking like outsiders, experimenting with different things, and constantly asking "what happens if...?" and "why?".

For example, mobile testers often test in different user roles. That may sound a bit over the top, but the ability to think, analyze, and imagine as a different user is instructive for testing.

Testers may envision themselves as the following users:

· A user with no experience;

· A user with a lot of experience;

· An enthusiast;

· A hacker;

· A competitor.

Of course there are many more possible roles, depending on the product you are developing. Beyond personal characteristics, users' behavior and workflows also matter a great deal. The ways people use products are often strange; for example, they:

· Go back when they shouldn't;

· Get impatient and tap buttons repeatedly;

· Enter the wrong data;

· Don't understand what to do;

· Don't set things up as required;

· Assume they know what to do (for example, they rarely read the instructions).

When testers simulate these behaviors, they often find unexpected bugs. Sometimes these bugs are trivial, but deeper investigation can reveal more serious problems.
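On Android, this kind of impatient, erratic input can be roughly simulated with the platform's built-in monkey tool. The sketch below is illustrative rather than part of the original article; the package name is a placeholder for the app under test.

```python
# Rough sketch: fire pseudo-random taps, swipes, and key presses at an
# Android app using the built-in "monkey" tool via adb.
import subprocess

PACKAGE = "com.example.app"  # placeholder package name for the app under test

subprocess.run(
    [
        "adb", "shell", "monkey",
        "-p", PACKAGE,        # restrict events to this package
        "--throttle", "100",  # 100 ms between events, like a very impatient user
        "-s", "42",           # fixed seed so a crashing sequence can be replayed
        "-v",                 # verbose logging
        "1000",               # number of pseudo-random events to send
    ],
    check=True,
)
```

Any crash or unresponsive screen the monkey provokes is then worth reproducing by hand, because the trivial-looking bug often hides a deeper one.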

Many problems can be anticipated and tested for in advance. Not every question below will apply to your mobile app, but they are all worth asking (a deep-link sketch follows the list):

· Does the app do what it claims to do?

· Does it complete tasks as designed?

· Does it do anything that is not by design?

· What happens under constant use or heavy load? Does it respond slowly? Does it crash? Does it still update? Does it give feedback?

· Are crashes reported back to the team?

· In what creative, logical, or negative ways can users navigate the app? Do users trust your brand?

· How secure is the user's data?

· Can it be intercepted or cracked?

· What happens when you push it to its limits?

· Does it ask the user to enable related services (e.g. GPS, Wi-Fi)? What happens if they enable them? What if they don't?

· Where are users redirected? From the app to the web, or from a web page into the app? Does that cause problems?

· Are the marketing and communications consistent with the app's functionality, design, and content?

· What is the login process? Can users log in directly in the app, or on the web?

· Does login integrate with other services, such as signing in with a Facebook or Twitter account?
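The redirect question above can be probed directly from the command line on Android. A minimal sketch, assuming adb and a connected device; the URL is a placeholder for a link your app claims to handle.

```python
# Minimal sketch: send a deep link to a connected Android device and watch
# whether it opens the app, the browser, or a disambiguation dialog.
import subprocess

DEEP_LINK = "https://example.com/some/path"  # placeholder link

subprocess.run(
    [
        "adb", "shell", "am", "start",
        "-a", "android.intent.action.VIEW",  # standard "view this URI" intent
        "-d", DEEP_LINK,
    ],
    check=True,
)
# Then check on the device: did the app open at the right screen, and does
# going back return the user to where they expect?
```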

Case: RunKeeper's goal-setting update

RunKeeper is an app that tracks your fitness activities. The latest release added a "goal setting" feature that I was very interested in trying, partly from a tester's point of view, but more as a user who genuinely loves the product. Even so, I found some problems:

1. The default weight unit is pounds, but I want to use kilograms;

2. Switching between pounds and kilograms is not easy;

3. Setting a goal causes the wrong data and graphs to be displayed, which confused me;

4. Because of item 3, I wanted to delete the goal, but I could not find any way to delete it;

5. To work around this, I had to change my personal weight value until it fell within the goal's range, so the goal was marked as achieved and a new goal could be set;

6. Then I tried adding the goal again.

Because of these doubts, I spent more time playing with the feature to see whether I could find other problems.

Here are some screenshots of the problems I found:

The latest version of the app contains a new "Goals" section. When I set the dates, I found that the start and end dates begin at 1 A.D., and why are there two "1" options in the year column? (Translator's note: the year column shows as "1, 2, 3".)

Another bug is a misspelling in the "Current Weight" section: when the field is cleared, the word "Enter" appears misspelled as "Etner" in the app. It is only a small bug, but it looks very unprofessional.
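The pounds/kilograms complaint is the kind of observation a tester can turn into a repeatable check. The sketch below is hypothetical; the conversion helpers stand in for whatever the app under test actually uses.

```python
# Hypothetical sketch: a round-trip regression check for a pounds/kilograms
# unit switch. lbs_to_kg/kg_to_lbs are stand-ins, not RunKeeper's real code.
import pytest

LBS_PER_KG = 2.20462

def lbs_to_kg(lbs):
    return lbs / LBS_PER_KG

def kg_to_lbs(kg):
    return kg * LBS_PER_KG

@pytest.mark.parametrize("kg", [0.0, 50.0, 72.5, 250.0])
def test_unit_switch_round_trip(kg):
    # Switching to pounds and back should not distort the stored weight.
    assert lbs_to_kg(kg_to_lbs(kg)) == pytest.approx(kg, abs=0.01)
```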

There is no shortcut to finding problems; you can only keep trying, slowly and repeatedly. Every app and team faces different challenges. But the typical traits of a tester are the same: push past the limits, do something unconventional, change things around, and keep testing over a long period (days, weeks, even months, not a few minutes), even when it seems obvious that nothing could go wrong. These are just some of the things you can uncover.

Where is all the data?

Testers like to find data problems, which sometimes depresses developers. In fact, the flow of information can be genuinely confusing to both users and developers, because so many things can go wrong; data and cloud-based services make this even more important.

Try checking for problems in scenarios like the following (a delete-and-reinstall sketch follows the list):

· The mobile device's storage is full;

· The tester deletes all of the app's data;

· The tester deletes the app. What happens to the data?

· The tester deletes and reinstalls the app. What happens to the data?

· Too much or too little content changes the design and layout;

· The app is used at different times and in different time zones;

· Data is out of sync;

· Synchronization is interrupted;

· Data updates affect other services (such as web and cloud services);

· Data is processed quickly or in large volumes;

· Invalid data is used.
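On Android, the delete-and-reinstall scenarios above can be driven from a script so the effect on stored data is easy to observe. A rough sketch, assuming adb is available; the package name and APK path are placeholders.

```python
# Rough sketch: clear data, uninstall, and reinstall an Android app via adb,
# pausing at each step to observe what happens to the user's data.
import subprocess

PACKAGE = "com.example.app"   # placeholder package name
APK_PATH = "app-release.apk"  # placeholder build artifact

def adb(*args):
    subprocess.run(["adb", *args], check=True)

adb("shell", "pm", "clear", PACKAGE)  # wipe the app's local data only
# ...launch the app: is the user logged out? is cloud data restored?

adb("uninstall", PACKAGE)             # remove the app entirely
adb("install", APK_PATH)              # install a fresh copy
# ...launch again: which data, if any, survived the reinstall?
```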

Case: Soup.me's error

Soup.me, which I tried, is a web service that categorizes photos from a personal Instagram account by map location and color, but I didn't use it for long. When I registered, it told me I didn't have enough photos on Instagram, even though there are more than 500 photos in my account. I don't know where the problem lies: maybe it's a data problem, maybe the presentation layer, or maybe just a bad error message.

Another case: Quicklytics

Quicklytics is a web analytics app for the iPad. While using it, I found that a site configuration I had removed from Google Analytics still appeared in the app. This raises some questions:

· I have deleted the site configuration; why do I still see this information?

· The panel on the left does not explain why "the operation cannot be completed"; could it be improved so users aren't confused?

Testers also like to test with extreme data. They often start out as a typical user in order to understand the app, so it doesn't take long before they begin testing at the limits. Data can be confusing, so testers have to consider the kinds of users the software has and how to test in different data scenarios.

For example, they might try the following scenarios (a boundary-value sketch follows the list):

· Test the limit values a user can enter;

· Test with duplicate data;

· Test on a new phone with no data;

· Test on an old phone;

· Pre-load different types of data;

· Consider pooling your resources for testing;

· Automate some of the tests;

· Test with data that goes beyond what is expected, and see how it is handled;

· Analyze how information and data affect the user experience;

· Always ask whether users are seeing the data correctly.
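Limit values are the easiest of these scenarios to codify. A minimal sketch, assuming a hypothetical 500-character comment limit; the validator is a stand-in for the app's real logic.

```python
# Minimal sketch: boundary-value tests around a hypothetical input limit.
import pytest

MAX_LEN = 500  # assumed character limit, for illustration only

def accepts_comment(text):
    # Stand-in for the app's real validation logic.
    return 0 < len(text) <= MAX_LEN

@pytest.mark.parametrize("text,expected", [
    ("", False),                  # empty input
    ("a", True),                  # shortest valid input
    ("a" * MAX_LEN, True),        # exactly on the limit
    ("a" * (MAX_LEN + 1), False), # just over the limit
])
def test_comment_length_boundaries(text, expected):
    assert accepts_comment(text) is expected
```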

Error alerts and messages

Here I'm not going to discuss how to design good error messages from a designer's point of view; I want to look at the problem from the perspective of a user or a tester. Error alerts and messages are places where testers can easily spot problems.

Questions to ask about error messages:

· Is the UI design of the error alert acceptable?

· Is the content of the error message understandable?

· Are the error messages consistent with one another?

· Are these error messages helpful?

· Is the content of the error message appropriate?

· Are these errors consistent with conventions and standards?

· Are these error messages inherently secure?

· Are logs and crash reports available to users and developers?

· Have all the errors been tested?

· What state is the app in after the user dismisses the error message?

· Are there cases where the user should see an error message but none appears?

Error messages affect the user experience, yet bad or useless error prompts are everywhere. Ideally users would never encounter error messages at all, but that is almost impossible. The design, implementation, and validation of errors may turn out contrary to expectations, and testers are adept at discovering these unexpected issues and looking closely at how to improve them.
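If the error strings live in one catalog, several of the consistency questions above can be checked mechanically. A hypothetical sketch; the catalog and the style rules are illustrative assumptions, not any real app's strings.

```python
# Hypothetical sketch: lint a catalog of error messages for basic consistency.
ERROR_MESSAGES = {
    "network_lost": "Connection lost. Check your network and try again.",
    "login_failed": "We couldn't sign you in. Check your username and password.",
    "weight_empty": "Enter your current weight.",  # recall the "Etner" typo above
}

def lint(messages):
    problems = []
    for key, text in messages.items():
        if not text.strip():
            problems.append(f"{key}: message is empty")
            continue
        if not text[0].isupper():
            problems.append(f"{key}: does not start with a capital letter")
        if not text.rstrip().endswith("."):
            problems.append(f"{key}: does not end with a period")
        if len(text) > 120:
            problems.append(f"{key}: longer than 120 characters")
    return problems

if __name__ == "__main__":
    for problem in lint(ERROR_MESSAGES):
        print(problem)
```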

Case: error messages

I really like the example of the Facebook app on my iPhone. Its lengthy, obscure messages not only try to cover many different scenarios, they can also appear for no apparent reason.

Perhaps the following message box deserves a place in the "Hall of Fame" of counterexamples?

Look at the Guardian app on the iPad: what if I don't want to "Retry"?

Platform-specific considerations

For any project team member, it is critical to understand the business, technical, and design constraints of the relevant platform.

So what platform-related issues should a mobile app tester look out for?

· Does the app follow the design guidelines for this particular platform?

· How does it compare with competitors and with prevailing designs in the industry?

· Does it work with peripheral devices?

· Does the touch screen support gestures such as tap, double-tap, long-press, drag, shake, pinch, flick, and swipe?

· Is the app understandable?

· What changes when you rotate the device?

· Can I use maps and GPS?

· Do you have a user guide?

· Is the email workflow user-friendly?

· Does sharing over the internet work smoothly? Is it integrated with other social apps or websites?

· Does it work when the user multitasks and switches between apps?

· Does it show progress while updating?

· What are the default settings, and what happens when they are changed?

· Does using sound make a difference?

Case: Chimpstats

Chimpstats is an iPad app for looking at the details of email campaigns. The first time I used the app it was in landscape mode. When I needed to enter the API password, I got stuck: I simply could not enter it in landscape mode, and only succeeded after switching to portrait.
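Orientation problems like this are cheap to screen for. On Android, the equivalent check can be scripted, as in the rough sketch below; it assumes adb and a connected device, and the file names are arbitrary.

```python
# Rough sketch: force each orientation on an Android device and capture a
# screenshot of the screen under test so layout problems can be reviewed.
import subprocess

def adb_shell(*args):
    subprocess.run(["adb", "shell", *args], check=True)

# Disable auto-rotation so the script controls the orientation explicitly.
adb_shell("settings", "put", "system", "accelerometer_rotation", "0")

for rotation, name in [("0", "portrait"), ("1", "landscape")]:
    adb_shell("settings", "put", "system", "user_rotation", rotation)
    with open(f"screen_{name}.png", "wb") as f:
        # exec-out streams the raw PNG instead of mangling it through a tty
        subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=f, check=True)
```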

Connection and interruption issues

Many interesting things can happen when a connection is intermittent or an unexpected outage occurs.

Have you tried using the app in the following scenarios?

· While walking around?

· On a Wi-Fi connection?

· With no Wi-Fi available?

· On 3G?

· With an intermittent connection?

· In airplane mode?

· When a phone call comes in?

· When a message is received?

· When a notification arrives?

· When the battery is low, or the device shuts down automatically?

· When a forced update occurs?

· When a voicemail is received?

This type of test is the most likely to find errors and bugs. I strongly recommend testing in these situations: not just launching the app and confirming that it works, but walking through an entire user flow while forcing connections and interruptions at specific intervals; a rough sketch of forcing such interruptions follows.
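On Android, connections can be cut and restored from a script while a tester walks through the app by hand. A rough sketch, assuming adb and a connected device; the 30-second interval and 10 cycles are arbitrary choices.

```python
# Rough sketch: toggle Wi-Fi and mobile data on and off at fixed intervals
# while the app is exercised by hand.
import subprocess
import time

def adb_shell(*args):
    subprocess.run(["adb", "shell", *args], check=True)

for _ in range(10):
    adb_shell("svc", "wifi", "disable")
    adb_shell("svc", "data", "disable")  # cut mobile data as well
    time.sleep(30)                       # use the app while offline
    adb_shell("svc", "wifi", "enable")
    adb_shell("svc", "data", "enable")
    time.sleep(30)                       # use the app while back online
```

While a loop like this runs through a real user flow, keep asking: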

· Does this app provide enough feedback?

· Is the user made aware of data transfers?

· Does it slow down and then crash?

· What happens when you open it?

· What happens when the task is completed?

· Can unsaved work be lost?

· Can you ignore notifications? What happens when you do?

· Can you respond to notifications? What happens after you respond?

· Is it appropriate to use an error message for a given issue?

· What happens when a login expires or times out?

Maintenance of the app

Speeding up the whole testing process is simple: just test everything once, right? Think again.

One of the problems I'm having right now is that some apps on my iPad no longer work after they're updated. This is very frustrating for a user.

Maybe that's something the developers can't control. Who knows? All I know is that it isn't working for users. I tried uninstalling and reinstalling the app, but the problem was never resolved. I searched the internet at length and, apart from some suggestions about updating the operating system, found no other solution. Maybe I'll try again when I have some free time.

The key point is this: if an app is tested once and only once (or only for a short period), you won't find many of its problems. The app itself may not change, but changing external conditions can cause problems to appear.

When the external environment continues to change, how will the app be affected? Let's ask ourselves:

· Can I download this app?

· Can I download and install the update?

· Can I use it after the update?

· Can I update it when many apps are waiting to be updated?

· What happens after the operating system is updated?

· What happens if the operating system is not updated?

· Will it automatically download to other devices via iTunes?

· Does it make sense to automate tasks or tests?

· Does it still connect to its network services? Does anything change?

It's a good idea to test each version of a mobile app after it is released. For every new release, define the highest-priority tests and make sure they are run under a variety of conditions (mainly on the mainstream platforms). Over time this testing can be automated, but keep in mind that automation is not a panacea; many problems are only found through human eyes.
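One way to keep those highest-priority checks repeatable is to parameterize them over the device and OS combinations that matter to you. The sketch below is purely illustrative: the matrix and the launch_app() helper are placeholders that would need to be wired to your own simulators, emulators, or device farm.

```python
# Illustrative sketch: a release smoke test parameterized over a device/OS matrix.
import pytest

DEVICE_MATRIX = [
    ("iPhone", "iOS 16"),
    ("iPhone", "iOS 17"),
    ("iPad", "iPadOS 17"),
    ("Pixel", "Android 14"),
]

def launch_app(device, os_version):
    # Placeholder: start the app on the given device/OS and report whether
    # the home screen loads. Replace with a real automation hook.
    return True

@pytest.mark.parametrize("device,os_version", DEVICE_MATRIX)
def test_app_launches_after_update(device, os_version):
    assert launch_app(device, os_version), f"App failed to launch on {device} / {os_version}"
```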

Case: Analytics app on the iPhone

I've been using this app for two years and it has never been a problem before. But now it shows some of my website data as 0 (when in fact more than one person has visited my site within the past month!). Judging from the App Store reviews, I'm not the only one with this problem.

Another case is Twitter on the iPhone. After updating and launching the app, I was instantly greeted with a message along the lines of "Your timeline is empty and you're not following anyone yet" (even though I've been an active user for five years). I was worried for a moment, but thankfully the message soon disappeared and the historical data loaded.

Testing is not a right or wrong judgment

We have discussed several aspects of mobile testing, all resting on the premise that problems are found by asking questions.

Testing is typically thought of as completely logical and predictable: test scripts and test plans, pass and fail, right and wrong. But that view is far from the whole truth.

Of course we can test that way when necessary, but it is not the purpose of testing. We don't just create test cases and find bugs; more importantly, we surface the key information the team needs to decide when to release the app. And the best way to surface that information is to ask questions!
