The word "app" means that we are dealing with "small applications". Although this may be true in some cases, this article refers to a fairly large application used to remotely monitor the state of a different part of a machine, such as a lamp, airflow, and position. The machine uses a mobile communication network accessed by an available back-end server (our app is accessed over the Internet). In short, the complexity is the same as a desktop app. An important aspect of the app is the different management. Different customer groups accept different functional devices, while different machine types require specific data statements. This creates a variable app--and its components are the same during build and run, depending on which machine we want to use. Therefore, this is definitely not a "small" project. It is not a mobile app that accompanies existing business applications, it is the only solution to this business process. There will be a longer maintenance phase to support product improvements, new features and more versions. Apps are an intrinsic part of the machine and must have the same good quality, usability and user experience. This article provides an overview of the project, as well as our decisions and experiences on QA and test automation, continuous integration, and project management.
Project setting: one concept, two apps, two teams
The project uses the Scrum agile software development framework. A sprint takes two weeks, including one day for the sprint review, retrospective, and planning. The sprint review is attended by the entire development team, one or two customer representatives, and sometimes stakeholders from the QA or infrastructure departments. After the review, the customer refines the specification. A customer representative also takes part in the first half of sprint planning, the selection of user stories. This gives the developers the chance to ask questions about specification details and sometimes to propose changes that allow the app to behave in a more "native" way.
The main development phase was planned to take nine months. During this time, the team size varied between 8 and 13 people, including the product owner and the Scrum master. Part of the fluctuation came from students who joined the project for a while, and part from specialists who joined the team temporarily for particular topics. Our target devices are iPhones and Android phones, specifically the iPhone 3GS and newer (iOS 6+) and Android 4 and newer. The machine to be controlled and the server back end already existed, so our sole task was to develop the apps, including the user interface, back-end communication, variant management, and the integration of platform-specific services such as push notifications, maps, or social networks. To ensure the best possible user experience, we did not use a cross-platform toolkit; instead, we developed two separate native apps.
The developers are divided into two sub-teams, one per platform, each with its own experts. To facilitate communication, both teams work at a single site. Because of these two teams, the product backlog contains most user stories twice, one version for each supported platform. For most user stories, both versions were planned within the same sprint, even though development progressed differently on iOS and Android.
When a story is implemented, the result is compared with the app on the other platform. In the sprint review, we prefer to present a feature on iOS and Android side by side. This way we ensure that we get essentially the same app, with a similar user experience, on both platforms. Beyond the user experience, the Android and iOS apps also share a similar software architecture, albeit implemented separately. A common software architecture document describes the data model, layer design, screen flow management, variant management, and domain-specific algorithms. When a feature is implemented for the second platform, it is therefore easy to understand the first implementation as a template, because it rests on the same foundations. For some parts, because of differing components and user interaction concepts, this approach is not feasible; there, development is platform-specific.
Figure 1. Communication setup from the mobile device to the machine
Automated testing
Our challenge regarding testing and QA is to test each app on multiple levels (unit, integration, and acceptance/UI tests). Because we want to automate as much as possible, a QA consultant is a member of the team and drives our test automation. He is responsible for reviewing the test specifications and the test execution. The actual implementation of the automated tests is done by the developers, triggered by a test task that is generated by default for every user story. Depending on the implemented feature, these are acceptance tests (automated UI tests), unit tests, and/or integration tests. Tests are always implemented for a specific platform. The UI tests on both platforms are based on the same acceptance criteria: there is at least one UI test for each acceptance criterion defined in a (UI-related) user story.

To implement all of these different automated tests, we use platform-specific frameworks. Our lower-level tests are implemented with SenTestingKit or JUnit, respectively. On iOS, additional libraries such as Nocilla and JRSwizzle are used for mocking. For the UI, we use KIF on iOS and Robotium on Android (a brief example follows below). Robotium Recorder (a commercial product) has proven helpful in making the Android tests more stable and eliminating "false negative" results. Although the apps offer the same functionality, the differences between iOS and Android in navigation and user experience mean that the steps required to reach and use a feature differ. Unlike in the desktop world, covering both platforms with a single UI test is, beyond trivial cases, only theoretically possible. This doubles technology and effort, but it also has the benefit of letting us use the specific tools best suited to each platform's specific problems.

Regarding the ratio of the test levels, it is often said that UI tests should make up the smallest share. This is partly because of their execution time, and because they are still considered the hardest to write and the most difficult to maintain. Our experience is that it is worth re-evaluating the ratio of the test levels for each project. With the growing importance of customer acceptance (both in agile and in mobile development) and with UI testing tools becoming more stable, UI tests and GUI logic tests should not be neglected; indeed, in our distribution, unit tests make up a smaller share than UI tests. For this project we have roughly 10% unit tests, 40% integration tests, and 50% UI tests. Because of the quality issues (poor interface specifications) of the back end we receive from an independent vendor, the share of integration tests is comparatively high.
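To illustrate how an acceptance criterion can map to an automated UI test on Android, here is a minimal Robotium sketch. The activity name, button label, and status text are hypothetical placeholders rather than details from the project; the structure follows Robotium's standard instrumentation test setup of that era.

```java
import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

// UI test for a single acceptance criterion: "the user can switch the
// lamp on and the app reflects the new state".
public class LampControlUiTest extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public LampControlUiTest() {
        super(MainActivity.class); // MainActivity is a hypothetical placeholder
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // Robotium's Solo drives the app through the instrumented UI.
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLampCanBeSwitchedOn() {
        // Button label and status text are hypothetical placeholders.
        assertTrue("Lamp screen not shown", solo.waitForText("Lamp"));
        solo.clickOnButton("Switch on");
        assertTrue("Lamp status was not updated to 'On'",
                solo.waitForText("On", 1, 5000));
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}
```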
Continuous integration
We use Git with a basic branching model as our version control system. The model defines a master branch, a release branch, and one branch per feature and per bug fix. When a user story is complete, the developer merges the feature into the master branch as a whole. To ensure that no incomplete feature is integrated into the master branch, each user story generates a set of default tasks: there are default tasks for acceptance (review by the product owner and the QA consultant) and for code quality (code review, static code analysis, and automated tests). The basis for continuous integration is the master branch, because it should always contain a "ready to release" project. Each commit (i.e., each feature merge) triggers a complete build cycle that includes the following steps (a code sketch of the branching workflow follows the list):
- Updating/building dependencies
- Building the apps (currently with three different build variants)
- Static analysis
- Unit tests
- UI tests
- App distribution via the intranet
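To make the branching model concrete, here is a minimal sketch using JGit, the Java Git library. The repository path and branch name are hypothetical, and the team of course works with the regular Git command line; the code merely illustrates the workflow of merging a finished feature into the master branch, which then triggers the CI cycle above.

```java
import java.io.File;
import org.eclipse.jgit.api.Git;
import org.eclipse.jgit.api.MergeResult;

public class BranchingModelSketch {
    public static void main(String[] args) throws Exception {
        // Open an existing repository (path is hypothetical).
        Git git = Git.open(new File("/projects/machine-app"));

        // Create a feature branch for a user story and switch to it.
        git.checkout()
           .setCreateBranch(true)
           .setName("feature/push-notifications")
           .call();

        // ... developers commit their work on the feature branch ...

        // When the story is done (acceptance and code-quality tasks closed),
        // the feature is merged into master as a whole.
        git.checkout().setName("master").call();
        MergeResult result = git.merge()
            .include(git.getRepository().resolve("feature/push-notifications"))
            .setCommit(true) // the resulting merge commit triggers the CI cycle
            .call();

        System.out.println("Merge status: " + result.getMergeStatus());
        git.close();
    }
}
```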
Figure 2. Ratio of the test levels (ideal vs. a typical mobile development project)
For both iOS and Android we use Jenkins, because it is the company default and is supported by the IT department. Especially for iOS development we ran into initial problems with Jenkins (given a free choice, we might have picked Xcode Server). However, additional Jenkins plug-ins ultimately made it possible to integrate our iOS builds into CI, for example the Clang analyzer integration and plug-ins for managing environment variables or shared workspaces.
Final thoughts: where automation doesn't work
As described, our process includes a variety of measures that help us meet our customers' high quality expectations. The entire team takes part in the quality process, and every sprint delivers a high-quality product increment, in large part thanks to automated testing. In some places, however, quality must be ensured manually. Coming from desktop development, a team does not necessarily think of these immediately; they are listed below:
- Usability and user experience:
It is widely accepted that usability has to be tested manually, but customers of mobile apps attach particularly great importance to this quality attribute. We found that we had to pay correspondingly close attention to it; these tests are performed manually, by the team and by the customer representatives. In this setup it is also up to the customer to cover different devices in their manual acceptance tests, since our automated tests are limited to one device and OS version per platform.
- Internationalization:
The normal process for a multilingual desktop app is to have native speakers check the strings outside of the running app. Because the display sizes are smaller, and because we planned to internationalize into more than 15 languages, this takes more time than for desktop apps. Our translations are produced by external staff, and each translation must be checked manually in the app to make sure it fits the available space on the display. We support this with our UI tests by automatically creating screenshots that the translation team can review (see the sketch below).
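A minimal sketch of this screenshot support, again based on Robotium; the screen flow and names are hypothetical. Robotium stores the captured images on the device (by default under /sdcard/Robotium-Screenshots/), from where a build step can collect them for the translation team.

```java
import java.util.Locale;
import android.test.ActivityInstrumentationTestCase2;
import com.robotium.solo.Solo;

// Walks through the main screens and captures one screenshot per screen,
// tagged with the current device language, for review by the translators.
public class LocalizationScreenshotTest
        extends ActivityInstrumentationTestCase2<MainActivity> {

    private Solo solo;

    public LocalizationScreenshotTest() {
        super(MainActivity.class); // MainActivity is a hypothetical placeholder
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testCaptureLocalizedScreenshots() {
        String lang = Locale.getDefault().getLanguage();
        solo.takeScreenshot("dashboard_" + lang);
        solo.clickOnText("Settings"); // hypothetical menu entry
        solo.takeScreenshot("settings_" + lang);
    }

    @Override
    protected void tearDown() throws Exception {
        solo.finishOpenedActivities();
        super.tearDown();
    }
}
```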
Quality first, even (especially) for apps
This article has shown that app development requires at least the same level of testing strategy and quality assurance as desktop business applications. In some respects, because of the challenge of developing the app for two platforms, the demands on quality work are even higher. In many respects we saw that mobile development does not change which quality measures are required. What can change, however, is the relevance or weight of a particular measure. For me, this is evident in our distribution of unit, integration, and acceptance tests, and in the manual tests (which would be less important or less time-consuming in a non-mobile project). Our conclusion? Quality comes first: knowing what you test, why you test it, and how you test it is essential. Even for mobile apps.
Source: http://www.spasvo.com/news/html/20141224140712.html
Original title: Automated testing and QA in the context of multi-platform mobile development