Criterion for our test:
A tested module passes if it works correctly and completes the user scenarios, handles unexpected cases appropriately, and responds to users intuitively and fluently.
Test resources:
Platform: WP 7.1 & Visual Studio 2010 & Windows Phone emulator.
Testers: Dongliang & Ting Zhang
Test matrix:

|                | User Type       | Mobile OS | Language | Network    |
|----------------|-----------------|-----------|----------|------------|
| Variable count | 2               | 1         | 1        | 3          |
|                | Conf. hosts     |           | English  | No network |
|                | Conf. attendees | WP 7.1    |          | GPRS       |
|                |                 |           |          | WiFi       |
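As an illustration (not part of the original plan), the matrix above implies 2 × 1 × 1 × 3 = 6 test configurations. A minimal Python sketch can enumerate them, with the value lists taken directly from the matrix:

```python
from itertools import product

# Variable values taken from the test matrix above
user_types = ["Conf. host", "Conf. attendee"]  # count: 2
mobile_os = ["WP 7.1"]                         # count: 1
languages = ["English"]                        # count: 1
networks = ["No network", "GPRS", "WiFi"]      # count: 3

# Full cross product: every configuration the testers should cover
configs = list(product(user_types, mobile_os, languages, networks))

for cfg in configs:
    print(cfg)

# 2 * 1 * 1 * 3 = 6 configurations in total
assert len(configs) == 6
```

Each tuple printed is one configuration to run the test cases against; since only Network and User Type vary, the six runs differ only in those two fields.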
I) Functional/performance test: We will test the reliability and stability of each functional module.
Part 1: We perform our tests strictly following the written test cases, checking whether each function works or not. Below are some examples of our test cases; for more detail, please refer to our TFS.
| Test Case | Tested feature | Test Data | Operation | Expected result | Actual result | Test Date | Estimated time and Tester |
|---|---|---|---|---|---|---|---|
| 1 |  |  | None | Three tiles on the favorite page |  |  |  |
| 2 |  |  | Click the favorite authors tile | Jump to the favorite panorama and show all authors added to favorites |  |  |  |
| 3 |  |  | Click the favorite publications tile | Jump to the favorite panorama publication item and show all publications added to favorites |  |  |  |
| 4 |  | No favorites | Click the favorite conferences tile | Jump to the introduction page, which shows how to add conferences to favorites |  |  |  |
| 5 |  | Have favorites | Click the favorite conferences tile | Jump to the conferences list page |  |  |  |
| 6 |  | On the favorite panorama | Click an author | Jump to the author detail page |  |  |  |
| 7 |  | On the author detail page | Click the favorite button and some other operations | You can add and remove authors, as well as other common results |  |  |  |
| 8 |  | On the favorite panorama | Click a publication | Jump to the publication detail page |  |  |  |
| 9 |  | On the publication detail page | Click the favorite button and some other operations | You can add and remove publications, as well as other common results |  |  |  |
Part 2: Scenario/integration test: We will test whether modules that are related to each other work together correctly.
We will assume several user scenarios and verify that the app meets the assumed needs of users.
1. A researcher wants to find the agenda of a conference he wants to attend.
2. An attendee would like to search the agenda for topics he is interested in, and look up further information about them.
3. An attendee would also like to be reminded when a topic he is interested in is about to start.
4. While listening to a talk at a conference, a researcher wants to write down his thoughts and questions.
5. While searching for authors, publications, and conferences, a researcher wants to add them to his favorites so he can find them again conveniently.
Part 3: Negative test: We will assume several unexpected situations to test whether our app can still handle them and work well.
1. When the network is unreachable, our app can still work.
2. When the battery runs out, our app can store important information and restore its state cleanly the next time it is started.
3. When many users are looking up the same agenda, our app still responds within a tolerable time.
II) Usability test: We will test whether our app works efficiently and effectively.
We may act as users to test whether our app responds to our queries fluently and whether the interface is easy and friendly to use.
III) Regression test: Last but not least.
During our test, we will run a regression test every day to make sure that previously passed test cases are not affected by our bug fixes. When all the bugs are fixed (some may be postponed to the next iteration), all team members will run this regression test to guarantee that all the bugs are fixed and have not recurred before our release.
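The daily regression idea above can be sketched in miniature: keep the previously passed test cases in a suite, re-run them all after each round of bug fixes, and report any that now fail. This is a minimal illustration in Python, not the team's actual tooling; the two check functions are hypothetical placeholders standing in for test cases 1 and 2 from the table in Part 1.

```python
# Minimal sketch of a daily regression run (illustrative only).
# Each function is a placeholder for a previously passed test case.

def tile_count_is_three():
    # Placeholder standing in for test case 1:
    # "Three tiles on the favorite page"
    return True

def authors_tile_jumps_to_panorama():
    # Placeholder standing in for test case 2:
    # clicking the favorite authors tile jumps to the panorama
    return True

REGRESSION_SUITE = [tile_count_is_three, authors_tile_jumps_to_panorama]

def run_regression(suite):
    """Re-run every previously passed case; return names of regressions."""
    return [case.__name__ for case in suite if not case()]

failures = run_regression(REGRESSION_SUITE)
assert failures == []  # a non-empty list would mean a bug fix broke something
```

Run daily, an empty failure list confirms that the day's bug fixes did not break any test case that had already passed.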