
School of Information Technology

Justification of Evaluation Approach (User Testing)

User testing is the process of observing whether users can carry out their intended tasks efficiently (time on task), effectively (number of errors and completion rate) and to their satisfaction. It can be conducted remotely, in a lab, or in automated form. User testing has long been a reliable way of improving and optimising the user experience and increasing business performance. It is needed to check whether the user interface is easy to understand and use, and to find site-level problems; a single test session can be summarised along exactly these three dimensions, as in the sketch below.
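To make the three measures concrete, here is a minimal sketch of how one task session could be recorded. The structure and field names are illustrative assumptions, not taken from any cited source:

```python
from dataclasses import dataclass

@dataclass
class SessionResult:
    """One participant's result for one task, covering the three
    usability measures named above (illustrative field names)."""
    task_id: str
    time_on_task_s: float   # efficiency: seconds from start to goal (or give-up)
    errors: int             # effectiveness: number of errors made
    completed: bool         # effectiveness: did the participant reach the goal?
    satisfaction: int       # satisfaction: e.g. a 1-5 post-task rating

def completion_rate(results: list[SessionResult]) -> float:
    """Effectiveness summary: share of sessions where the task was completed."""
    return sum(r.completed for r in results) / len(results)
```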

It is important to capitalise on people's increasing propensity to purchase and order online by offering the best possible online user experience, which in turn encourages repeat visits to the site. It is best to run specialised user testing on important pages in order to find page-level usability problems, for example registration pages and the pages where users buy products and make payments. The likely attendees of a user testing session are:

* A representative user
* A test host
* At least one developer
* At least one business representative

In a study, Jakob Nielsen (1998) described two levels of coverage for usability testing problems:

1. Site-level usability: the home page, information architecture, navigation and search, linking strategy, internally vs. externally focused design, overall writing style, page templates, layout and site-wide design standards, graphical language and commonly used icons.
2. Page-level usability: specific issues related to individual pages, such as the understandability of headlines, links and explanations, the intuitiveness of forms and error messages, the inclusion or exclusion of specific information, and individual graphics and icons.

In terms of cost, user testing depends on the size of the website, the number of times you want to test, the number and type of participants, how formal you want the testing to be and, lastly, rental costs if you do not have the testing equipment or a lab to use.

In terms of time, you need to plan the usability test: the specialist and the team need time to become familiar with the website, do a dry run of the scenarios, budget the time it takes to test users, analyse the data, discuss the findings and, finally, write the report. Recent work (Stefan Karytko 2010) describes the Gomez load testing methodology, built around throughput and concurrency tests; these are the cornerstones of the Gomez methodology and, when used properly, can reduce testing time and costs. Throughput refers to the rate at which a web application can process raw data, full pages and entire transactions, while concurrency measures how many simultaneous user sessions are active on a web application at a given point in time; a minimal sketch of the two terms follows.
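To illustrate the two terms (this is a generic sketch, not Gomez's tooling; the URL, duration and concurrency level are placeholder assumptions), the following holds concurrency fixed and measures throughput as completed page requests per second:

```python
import threading
import time
import urllib.request

URL = "http://example.com/"   # hypothetical target, not from the cited source
CONCURRENCY = 10              # simultaneous user sessions held open
DURATION_S = 30               # how long to apply the load

completed = 0
lock = threading.Lock()

def session(stop_at: float) -> None:
    """One simulated user session: request pages until time runs out."""
    global completed
    while time.monotonic() < stop_at:
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        with lock:
            completed += 1

stop_at = time.monotonic() + DURATION_S
threads = [threading.Thread(target=session, args=(stop_at,)) for _ in range(CONCURRENCY)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Approximate throughput at the given concurrency level.
print(f"{CONCURRENCY} concurrent sessions -> {completed / DURATION_S:.1f} pages/s")
```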

Below is a table showing the differences between expert inspection and user testing:

| Expert Inspection (Jim Ross) | User Testing (Paul Sherman) |
| --- | --- |
| 1. Cheaper and quicker to do. | 1. Takes time to plan and organize. |
| 2. A site can be inspected in a few days. | 2. More expensive, due to recruiting users from the target audience. |
| 3. Can miss usability issues that arise during usability testing. | 3. Less conjecture: feedback comes straight from the horse's mouth. |
| 4. Finds some issues usability testing does not find, but also reports false alarms. | 4. Reveals new insights that no one on the project team had considered. |
| 5. Evaluates more parts of a user interface than usability testing does. | 5. Takes design criticism out of the realm of opinion and puts it into the realm of data. |

User testing and expert inspection are both helpful, as they tend to find different issues.

User testing is preferable, however, because it hears the voice of the user: their needs, quotations, frustrations and suggestions for improvement; and it ensures that testing is integrated into the system design.

Evaluation Considerations (User Testing)

Usability measures (nature and size of the test sample)

The objective of user testing is to identify difficulties in use from the users' spontaneous comments and from various performance measures such as task execution time, accuracy of results, and the number and types of errors. Nielsen and Molich (1990) reported that five evaluators found about two thirds of usability problems using heuristic evaluation, and Virzi (1992) indicated that only four or five users are needed to detect 80% of usability problems when the think-aloud method is used for usability testing. Choosing the number of users for testing has both practical (economic) and scientific implications: the aim of inviting participants is to uncover the design flaws a user interface may have at the lowest cost (cost of participants, observers and laboratory facilities, and the time needed to obtain data and provide it to developers in a timely fashion), as stated by Lewis JR (2006).

In the late nineties it was claimed that with 4 to 5 participants, 80% to 85% of the usability problems of an interface design could be uncovered. According to Jakob Nielsen (2000), some people think that usability testing is very costly and complex and that user tests should be reserved for the rare web design project with a huge budget and a lavish time schedule. This is not true: elaborate usability tests are a waste of resources. The best results come from testing no more than five (5) users and running as many small tests as you can afford. He gave a formula showing that the number of usability problems found in a usability test with n users is N(1 - (1 - L)^n), where N is the total number of usability problems in the design and L is the proportion of usability problems discovered while testing a single user (typically around 31%).
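A quick calculation shows the diminishing returns this formula implies. The sketch below assumes Nielsen's typical value L = 0.31 and, purely for illustration, N = 100 problems:

```python
# Nielsen's problem-discovery model: problems found with n users
# is N * (1 - (1 - L)**n), where L is the per-user discovery rate.
N = 100    # total usability problems in the design (assumed for illustration)
L = 0.31   # share found by a single user (Nielsen's typical value)

for n in range(1, 11):
    found = N * (1 - (1 - L) ** n)
    print(f"{n:2d} users -> {found:5.1f} problems ({found / N:.0%})")

# With L = 0.31, five users already uncover about 84% of the problems,
# which is why many small tests beat one elaborate test.
```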

In research discussed by Frank Spillers (2005), several reasons are given for not using large numbers of participants in usability testing:

* We are looking for behaviour-based insight.
* Statistics tell half the story and are often devoid of context (e.g. why did users fail?); this is also one of the major problems with gaining insight from web analytics (website traffic statistics). The objective is to apply findings to fix design problems in a corporate setting, not to perform academic analysis.
* Research shows that even with low numbers you can gain valid data.
* Usability testing has been used industry-wide for the past 25 years, and experts, authors and academics put their reputations and credentials behind the methodology.

Usability measures (type of tasks)

The usability testing tasks themselves represent what users do to achieve a goal; they are an important issue and can heavily influence a usability evaluation. Recent research (M. Alshamari and P. Mayhew 2006) proposed three types of task to be examined in user testing: (i) structured tasks, (ii) uncertain tasks and (iii) problem-solving tasks.

Structured tasks: this type of task guides users step by step, instructing them what to do and where to go. This may reveal potential usability problems. Uncertain tasks: this type of task relies on the fact that users are usually uncertain as to whether they will find the information they are looking for while they surf a website.

Problem-solving tasks: tasks constructed as problem statements, leaving users to behave as they would in reality.

Tasks users perform during user testing:

* Users read the task description aloud.
* Users think aloud (so that a verbal record exists of their interaction with the website or web application).
* Users complete the post-task questionnaire and elaborate on the task session.
* Users complete the post-test satisfaction questionnaire.

Techniques observers use during user testing:

* Monitoring (shadowing): observing and recording participants, how they react to the problems they face, the questions they are asked and how they respond.
* Questionnaire (structured): a list of questions given to users to answer. Questions should be unambiguous, to maximise the number of respondents.
* Interview (unstructured): ad-hoc conversations in which a list of questions is put to users and their responses are recorded (different from a questionnaire because it is interactive).
* Thinking aloud: a technique in which participants are asked to voice their thoughts, feelings and opinions while interacting with the application.
* De-briefing: allows users to voice what they liked, allows collection of subjective preference data about the application and helps develop a good relationship with users so that they return for further testing. A sketch of how the records these techniques produce can be organised follows this list.
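One loose way to organise those records is sketched below; the structure and field names are assumptions for illustration, not a standard format from the cited sources:

```python
from dataclasses import dataclass, field

@dataclass
class TaskSession:
    """Records gathered while one participant works on one task
    (illustrative structure only)."""
    task: str
    think_aloud_notes: list[str] = field(default_factory=list)       # verbal record
    observer_notes: list[str] = field(default_factory=list)          # monitoring/shadowing
    post_task_answers: dict[str, str] = field(default_factory=dict)  # structured questionnaire
    debrief_comments: list[str] = field(default_factory=list)        # de-briefing

session = TaskSession(task="Register a new account")
session.think_aloud_notes.append("I can't tell which field is the username.")
session.post_task_answers["How easy was the task (1-5)?"] = "2"
```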

Usability measures (time on task, error rate and frequency)

Time on task (TOT): the time to complete each scenario, measured from the moment the participant begins the scenario to the moment he or she indicates that the scenario's goal has been reached (whether successfully or unsuccessfully), or the participant requests and receives enough guidance to warrant scoring the scenario as a critical error. A minimal way of taking this measurement is sketched below.
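Measuring TOT amounts to running a stopwatch around the scenario. In this sketch, `run_scenario` is a placeholder for the moderated session itself (an assumption for illustration):

```python
import time

def time_on_task(run_scenario) -> float:
    """Time on task: the clock runs from the start of the scenario
    until the participant declares the goal reached (or gives up)."""
    start = time.monotonic()
    run_scenario()                 # participant works on the scenario
    return time.monotonic() - start

# Example: the moderator stops the clock when the participant says "done".
tot = time_on_task(lambda: input("Press Enter when the goal is reached..."))
print(f"Time on task: {tot:.1f} s")
```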

Error rate: the mistakes made during a task. Errors are useful for pointing out particularly confusing or misleading parts of a product, process or interface. They are of two types: non-critical errors and critical errors. Critical errors are unresolved errors during the process of completing the task, or errors that produce an incorrect outcome. Non-critical errors are procedural errors that are generally frustrating to the participant when they are detected.

Frequency: the percentage of participants who experience a problem when working on a task:

* High: 30% or more of the participants experience the problem
* Moderate: 11% to 29% of participants experience the problem
* Low: 10% or fewer of the participants experience the problem

For studies with fewer than ten participants in a group, the percentages may need to be adjusted: for example, in a study with 8 participants the low-frequency threshold should be 12.5% (1/8 = 0.125). These bands can be applied mechanically, as in the sketch below.
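In this sketch, the small-group adjustment is read as "one affected participant still counts as Low", which reproduces the 1/8 = 12.5% example above (that reading is an assumption):

```python
def classify_frequency(affected: int, participants: int) -> str:
    """Classify problem frequency into the High/Moderate/Low bands above.
    For groups under ten, the Low cutoff is raised so that a single
    affected participant still counts as Low (e.g. 1/8 = 12.5%)."""
    share = affected / participants
    low_cutoff = 0.10 if participants >= 10 else 1 / participants
    if share >= 0.30:
        return "High"
    if share <= low_cutoff:
        return "Low"
    return "Moderate"

print(classify_frequency(1, 8))    # Low  (12.5%, adjusted cutoff)
print(classify_frequency(3, 10))   # High (30%)
```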

Conclusion

In conclusion, user testing is considered one of the best techniques for gaining insight into usability problems. It is preferable because it hears the voice of the user: their needs, quotations, frustrations and suggestions for improvement; and it ensures that testing is integrated into the system design of a website (e.g. http://www.jdsports.co.uk/home). User testing also helps you understand, through observation, whether your product meets users' expectations.

References

* Alshamari, M. and Mayhew, P. 'Task Formulation in Usability Testing' [online]. Available at: http://www.uea.ac.uk/polopoly_fs/1.133551!Alshamari.pdf (Accessed: 28 October 2011)
* Kushniruk, A. W., Triola, M. M., Borycki, E. M., Stein, B. and Kannry, J. L. 'Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application'. International Journal of Medical Informatics 2005; 74: 519-526.
* Whiteside, J., Bennett, J. and Holtzblatt, K. (1988). 'Usability Engineering: Our Experience and Evolution', in M. Helander (ed.) Handbook of Human-Computer Interaction. New York: North-Holland.
* Lewis, J. R. 'Sample sizes for usability tests: mostly math, not magic'. interactions 2006; XIII(6): 29-33.
* Edwardes, A., Burghardt, D. and Krug, K. 'Usability testing template' [online]. Available at: http://www.webparkservices.info/pdfs/WP_D223_Usability_testing_template_v05.pdf (Accessed: 24 October 2011)
* Spillers, F. 'Demystifying Usability' [online]. Available at: http://www.demystifyingusability.com/2005/01/latest_research.html (Accessed: 24 October 2011)
* Nielsen, J. and Landauer, T. K. 'A mathematical model of the finding of usability problems', Proceedings of ACM INTERCHI'93 Conference (Amsterdam, The Netherlands, 24-29 April 1993), pp. 206-213.
* Six, J. M. (2009) 'Usability Testing Versus Expert Reviews' [online]. Available at: http://www.uxmatters.com/mt/archives/2009/10/usability-testing-versus-expert-reviews.php (Accessed: 25 October 2011)
* Karytko, S. (2010) 'Throughput and Concurrency: Reducing Load Testing Time and Costs' [online]. Available at: http://application-performance-blog.com/throughput-and-concurrency-reducing-load-testing-time-and-costs/ (Accessed: 25 October 2011)
* Nielsen, J. (1998) 'Cost of user testing a website' [online]. Available at: http://www.useit.com/alertbox/980503.html (Accessed: 20 October 2011)
* Lindgaard, G. Usability Testing and System Evaluation: A Guide for Designing Useful Computer Systems, pp. 245-249.
* Hartson, H. R., Andre, T. S. and Williges, R. C. (2001) 'Criteria for Evaluating Usability Evaluation Methods'. International Journal of Human-Computer Interaction, 13(4), pp. 373-410.
* de Kock, E., van Biljon, J. and Pretorius, M. 'Usability evaluation methods' [online]. Available at: http://www.benschweitzer.org/WORK/eyetrackingpapers/DeKock-Usability%20evaluation%20methods%20Mind%20the%20gaps.pdf (Accessed: 20 October 2011)
