THE ROLE OF EVALUATION IN THE DEVELOPMENT OF INTERACTIVE SYSTEMS

Recruiting Test Participants

Participants in usability testing should be representative of the intended users of the application. To find potential candidates for a test, a user profile is established, covering characteristics such as age group, computer usage and educational background [Barnum, 2002; Rubin, 1994]. Participants can be recruited directly or through an agency. When recruiting directly, appropriate sources include the company database (if it holds pre-qualified users), the product specification document (which usually identifies the intended users of the product), customer lists obtained from sales and marketing staff, college/university campuses, and qualified friends or relatives. To ensure that potential participants match the user profile, a screening questionnaire is developed for them to complete. It may also be necessary to offer incentives in the form of token gifts. Finally, test participants should be contacted, preferably by phone, a day or two before the test to confirm their appointments [Barnum, 2002; Rubin, 1994].
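To make the screening step concrete, the sketch below shows how responses to a screening questionnaire could be checked against a user profile. This is a minimal illustration only; the profile fields, thresholds and names are hypothetical assumptions, not taken from the thesis or from [Barnum, 2002; Rubin, 1994].

```python
# Hypothetical screening filter: checks whether a candidate matches a
# user profile. Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class UserProfile:
    min_age: int
    max_age: int
    accepted_education: set[str]      # acceptable education levels
    min_weekly_computer_hours: float  # minimum prior computer usage

@dataclass
class Candidate:
    name: str
    age: int
    education: str
    weekly_computer_hours: float

def matches_profile(candidate: Candidate, profile: UserProfile) -> bool:
    """Return True if the candidate satisfies every screening criterion."""
    return (
        profile.min_age <= candidate.age <= profile.max_age
        and candidate.education in profile.accepted_education
        and candidate.weekly_computer_hours >= profile.min_weekly_computer_hours
    )

# Example: screen a pool of recruits against the test's user profile.
profile = UserProfile(18, 35, {"secondary", "tertiary"}, 2.0)
pool = [
    Candidate("A", 24, "secondary", 5.0),
    Candidate("B", 41, "tertiary", 10.0),  # excluded: outside age group
]
eligible = [c for c in pool if matches_profile(c, profile)]
```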

Ethical Considerations

Participants in every evaluation should be treated with respect and dignity. Although usability testing might not expose participants to physical danger, the controlled nature of the test environment, the presence of recording cameras and participants’ awareness of being watched can be a source of distress for some participants. Participants may also feel pressured to perform well, even when they have been told that it is the application being tested, not them. This applies to both novice and advanced participants.

Supplementary Techniques to Usability Testing

Some techniques that can be used to support or supplement usability testing include:

  • Think-aloud: a data-gathering method, used in usability testing and observation, in which participants are encouraged to verbalize their thoughts, feelings, expectations and decisions as they interact with the application being evaluated. This helps evaluators gain insight into the reasoning behind users’ actions [Barnum, 2002; Dix et al., 2004; Preece et al., 2007]. A participant can think aloud while carrying out the specified tasks (concurrent think-aloud) or after the test session is complete (retrospective think-aloud), in which case the tasks themselves are carried out in silence. To address the unnaturalness of thinking aloud, a variation of the method called co-discovery or co-participant testing, in which two participants perform the tasks together and verbalize naturally through conversation, can be used.
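As a concrete illustration of the kind of data a think-aloud session yields, the sketch below models a session record that distinguishes the concurrent and retrospective protocols and timestamps each utterance. All names and fields are hypothetical assumptions, not an instrument from the sources cited above.

```python
# Hypothetical record of think-aloud data. The protocol names follow
# the text (concurrent vs retrospective); everything else is illustrative.
from dataclasses import dataclass, field
from enum import Enum

class Protocol(Enum):
    CONCURRENT = "concurrent"        # verbalize while performing the tasks
    RETROSPECTIVE = "retrospective"  # verbalize afterwards, e.g. over a replay

@dataclass
class Utterance:
    task_id: str
    seconds_into_session: float
    text: str

@dataclass
class ThinkAloudSession:
    participant_id: str
    protocol: Protocol
    utterances: list[Utterance] = field(default_factory=list)

    def log(self, task_id: str, t: float, text: str) -> None:
        """Record one verbalization with its task and timestamp."""
        self.utterances.append(Utterance(task_id, t, text))

# Concurrent protocol: comments are captured as the task is performed.
session = ThinkAloudSession("P01", Protocol.CONCURRENT)
session.log("task-1", 42.5, "I expected this button to open the menu.")
```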

Identifying the Practical Issues

A number of practical issues should be considered before conducting an evaluation: access to appropriate users, facilities and equipment; the feasibility of the evaluation given time and budget constraints; and the evaluators’ expertise. User-based evaluation requires the availability of users who are representative of the target population. Users’ levels of expertise, age, cultural diversity, educational experience and personal differences have to be taken into account, depending on the type of application being evaluated. Another aspect that needs careful consideration is how the users will be involved. Tasks given to users in a usability test should be representative of those for which the application will be used in real life; a minimal checklist sketch follows below.
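One lightweight way to operationalize these considerations is a pre-evaluation checklist. The sketch below is an assumed, illustrative example rather than a standard instrument; the item names simply restate the issues listed above.

```python
# Illustrative pre-evaluation checklist covering the practical issues
# named in the text; the items and names are assumptions, not a standard.
PRACTICAL_ISSUES = {
    "representative_users_available": False,
    "facilities_and_equipment_booked": False,
    "within_time_and_budget": False,
    "evaluator_expertise_confirmed": False,
    "tasks_representative_of_real_use": False,
}

def ready_to_evaluate(checklist: dict[str, bool]) -> bool:
    """An evaluation should only proceed once every item is resolved."""
    return all(checklist.values())

# List the items still blocking the evaluation.
outstanding = [item for item, done in PRACTICAL_ISSUES.items() if not done]
```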

Usability Evaluation Methods Applicable to the Digital Doorway Context

Having examined the various methods for evaluating the usability of interactive systems, the next question is which of these methods can be used in a summative evaluation of applications installed on the DD. The heuristic evaluation method, discussed in section 4.2.3.1, is suitable for both summative and formative evaluation, provided that appropriate evaluation heuristics are used. Its ability to uncover large numbers of potential usability problems, and the relative ease with which I could recruit expert evaluators, make the method appropriate for this study. Cognitive walkthrough (section 4.2.3.2) can also be used in a summative evaluation. However, its assumption that evaluators possess skills in cognitive theory, and its focus on evaluating only the learnability aspect of an application, make it inadequate for evaluating the DD. The two model-based evaluation methods (GOMS and the keystroke-level model) predict the performance of expert users, whereas DDs are aimed at users with little or no computer experience; this makes these methods inappropriate for the DD environment, as the sketch below illustrates.
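To see why the model-based methods presuppose expert users, consider a keystroke-level prediction: it sums fixed operator times that assume practised, error-free execution, an assumption that does not hold for novice DD users. The sketch below uses commonly cited approximate operator values from Card, Moran and Newell; the task sequence is invented for illustration.

```python
# A minimal Keystroke-Level Model (KLM) sketch. Operator times are the
# commonly cited approximate values from Card, Moran and Newell; they
# model practised, error-free (i.e. expert) execution only.
OPERATOR_SECONDS = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point with mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
    "B": 0.10,  # mouse button press or release
}

def predicted_time(sequence: str) -> float:
    """Sum operator times for an expert performing the given sequence."""
    return sum(OPERATOR_SECONDS[op] for op in sequence)

# Example: mentally prepare, point to a field, click, home to the
# keyboard, then type a five-letter word.
print(predicted_time("MPBH" + "K" * 5))  # ~3.95 seconds
```

A novice user would pause, search the interface and make errors at each step, so the model's fixed operator times cannot describe their performance.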

TABLE OF CONTENTS:

  • 1 INTRODUCTION
    • 1.1 INTRODUCTION
    • 1.2 RATIONALE AND MOTIVATION FOR STUDY
    • 1.3 RESEARCH QUESTIONS AND OBJECTIVES
    • 1.4 SCOPE AND LIMITATIONS OF THE STUDY
    • 1.5 RESEARCH DESIGN AND METHODOLOGY
    • 1.6 ETHICAL CONSIDERATIONS
    • 1.7 THE SIGNIFICANCE AND CONTRIBUTION OF THIS STUDY
    • 1.8 LAYOUT OF THE CHAPTERS
  • 2 BACKGROUND AND CONTEXT
    • 2.1 INTRODUCTION
    • 2.2 THE FIELD OF HUMAN-COMPUTER INTERACTION
    • 2.2.1 USABILITY AND ACCESSIBILITY
    • 2.2.2 EVALUATION
    • 2.2.2.1 THE ROLE OF EVALUATION IN THE DEVELOPMENT OF INTERACTIVE SYSTEMS
    • 2.2.2.2 DESIGN PRINCIPLES AND GUIDELINES AS A TOOL FOR EVALUATING THE USABILITY AND ACCESSIBILITY OF INTERACTIVE SYSTEMS
    • 2.3 THE DIGITAL DIVIDE
    • 2.4 INTRODUCING THE DIGITAL DOORWAY
    • 2.4.1 DIGITAL DOORWAY INPUT/OUTPUT DEVICES
    • 2.4.2 TARGET USERS OF THE DIGITAL DOORWAY
    • 2.4.3 TYPICAL APPLICATIONS AND RESOURCES PROVIDED BY THE DIGITAL DOORWAY
    • 2.5 CONCLUSION
  • 3 RESEARCH DESIGN AND METHODOLOGY
    • 3.1 INTRODUCTION
    • 3.2 RESEARCH PHILOSOPHY AND PARADIGMS
    • 3.2.1 DESIGN RESEARCH
    • 3.3 RESEARCH METHODS
    • 3.4 RESEARCH DESIGN AND METHODOLOGY USED IN THIS STUDY
    • 3.4.1 RESEARCH PARADIGM USED – DESIGN RESEARCH
    • 3.4.1.1 OUTER CYCLE OF THE DESIGN RESEARCH PHASES
    • 3.4.1.2 INNER CYCLE OF THE DESIGN RESEARCH PHASES
    • 3.4.2 PRIMARY RESEARCH AND DATA COLLECTION METHODS
    • 3.4.3 DATA ANALYSIS
    • 3.4.4 ETHICAL CONSIDERATIONS
    • 3.5 CONCLUSION
  • 4 USABILITY AND USABILITY EVALUATION
    • 4.1 INTRODUCTION
    • 4.2 USABILITY EVALUATION METHODS AND APPROACHES
    • 4.2.1 EVALUATION IN THE INTERACTIVE SYSTEMS DEVELOPMENT LIFECYCLE
    • 4.2.2 FORMATIVE AND SUMMATIVE EVALUATION
    • 4.2.3 VARIOUS APPROACHES TO USABILITY EVALUATION
    • 4.3.2 GELDERBLOM’S GUIDELINES FOR THE DESIGN OF CHILDREN’S TECHNOLOGY
    • 4.3.2.1 APPLICABILITY OF GELDERBLOM’S GUIDELINES FOR THE DEVELOPMENT OF CHILDREN’S TECHNOLOGY TO THE DIGITAL DOORWAY
    • 4.3.3 NIELSEN’S HEURISTICS
    • 4.3.3.1 APPLICABILITY OF NIELSEN’S HEURISTICS TO THE DIGITAL DOORWAY
    • 4.3.4 USABILITY AND USER EXPERIENCE GOALS OF PREECE, ROGERS AND SHARP
    • 4.3.4.1 APPLICABILITY OF PREECE, ROGERS AND SHARP’S GOALS TO THE DIGITAL DOORWAY
    • 4.3.5 NORMAN’S PRINCIPLES OF DESIGN
    • 4.3.5.1 APPLICABILITY OF NORMAN’S DESIGN PRINCIPLES TO THE DIGITAL DOORWAY
    • 4.3.6 SHNEIDERMAN’S GOLDEN RULES OF INTERFACE DESIGN
    • 4.3.6.1 APPLICABILITY OF SHNEIDERMAN’S GOLDEN RULES TO THE DIGITAL DOORWAY
    • 4.4 INTERFACE STYLES
    • 4.4.1 FORM INTERFACE
    • 4.4.1.1 DESIGN PRINCIPLES AND GUIDELINES FOR FORM INTERFACES
    • 4.4.1.2 APPLICATION OF MAYHEW’S GUIDELINES TO THE DIGITAL DOORWAY
    • 4.4.2 WIMP INTERFACE
    • 4.5 CONCLUSION
  • 5 ACCESSIBILITY DESIGN PRINCIPLES AND GUIDELINES
    • 5.1 INTRODUCTION
    • 5.2 THE CASE FOR ACCESSIBLE INTERACTIVE SYSTEMS
    • 5.3 DISABILITY CATEGORIES AND IMPACT OF INTERACTIVE SYSTEMS’ DESIGN ON DISABLED PEOPLE
    • 5.3.1 VISUAL IMPAIRMENTS
    • 5.3.1.1 COLOUR BLINDNESS
    • 5.3.1.2 BLINDNESS
    • 5.3.2 AUDITORY (HEARING) IMPAIRMENTS
    • 5.3.2.1 TECHNIQUES TO SUPPORT USERS WITH AUDITORY IMPAIRMENTS
    • 5.3.3 PHYSICAL IMPAIRMENTS
    • 5.3.3.1 INTERACTION TECHNIQUES TO SUPPORT USERS WITH PHYSICAL IMPAIRMENTS
    • 5.3.4 COGNITIVE IMPAIRMENTS
    • 5.3.4.1 STRATEGIES TO SUPPORT USERS WITH COGNITIVE IMPAIRMENTS
    • 5.4 ACCESSIBILITY EVALUATION TECHNIQUES
    • 5.4.1 HENRY’S METHODS FOR EVALUATING ACCESSIBILITY
    • 5.4.2 GREEFF AND KOTZÉ’S ACCESSIBILITY EVALUATION METHODOLOGY
    • 5.4.3 ACCESSIBILITY EVALUATION METHODS APPLICABLE TO THE DIGITAL DOORWAY
    • 5.5 GUIDELINES TO SUPPORT THE DESIGN OF ACCESSIBLE SYSTEMS
    • 5.5.1 UNIVERSAL DESIGN PRINCIPLES TO SUPPORT ACCESSIBILITY
    • 5.5.2 WEB CONTENT ACCESSIBILITY GUIDELINES
    • 5.5.2.1 APPLICABILITY OF THE WCAG 1.0 TO THE DIGITAL DOORWAY
    • 5.5.3 ELECTRONIC AND INFORMATION TECHNOLOGY ACCESSIBILITY STANDARDS (SECTION 508)
    • 5.5.3.1 APPLICABILITY OF SECTION 508 TO THE DIGITAL DOORWAY
    • 5.5.4 IBM SOFTWARE ACCESSIBILITY CHECKLIST
    • 5.5.4.1 APPLICABILITY OF THE IBM SOFTWARE ACCESSIBILITY CHECKLIST TO THE DIGITAL DOORWAY
    • 5.6 CONCLUSION
  • 6 DESIGN GUIDELINES FOR COMPUTER-BASED EDUCATIONAL GAMES
    • 6.1 INTRODUCTION
    • 6.2 EDUCATIONAL GAMES: WHAT ARE THEY?
    • 6.3 CHARACTERISTICS OF EDUCATIONAL GAMES
    • 6.4 BENEFITS AND CHALLENGES OF COMPUTER GAMES FOR EDUCATIONAL PURPOSES
    • 6.5 DESIGN GUIDELINES FOR COMPUTER-BASED EDUCATIONAL GAMES
    • 6.5.1 SHELLEY’S GUIDELINES FOR DEVELOPMENT OF SUCCESSFUL COMPUTER GAMES
    • 6.5.1.1 APPLICABILITY OF SHELLEY’S GUIDELINES TO THE DIGITAL DOORWAY
    • 6.5.2 ALESSI AND TROLLIP’S GUIDELINES FOR THE DESIGN OF EDUCATIONAL GAMES
    • 6.5.2.1 INTRODUCTION OF THE PROGRAM
    • 6.5.2.2 BODY OF THE GAME
    • 6.5.2.3 CONCLUDING THE GAME
    • 6.5.2.4 APPLICABILITY OF ALESSI AND TROLLIP’S EDUCATIONAL GAME DESIGN GUIDELINES TO THE DIGITAL DOORWAY
    • 6.5.3 MALONE’S GUIDELINES FOR THE DESIGN OF COMPUTER-BASED EDUCATIONAL GAMES
    • 6.5.3.1 CHALLENGE
    • 6.5.3.2 FANTASY
    • 6.5.3.3 CURIOSITY
    • 6.5.3.4 APPLICABILITY OF MALONE’S EDUCATIONAL GAME DESIGN GUIDELINES TO THE DIGITAL DOORWAY
    • 6.5.4 DESURVIRE, CAPLAN AND TOTH’S HEURISTICS FOR EVALUATING THE PLAYABILITY OF GAMES
    • 6.5.4.1 THE GAME USABILITY HEURISTICS CATEGORY OF HEP
    • 6.5.4.2 APPLICABILITY OF DESURVIRE ET AL.’S GAME USABILITY HEURISTICS TO THE DIGITAL DOORWAY
    • 6.6 USABILITY AS IT RELATES TO COMPUTER-BASED EDUCATIONAL GAMES
    • 6.7 CONCLUSION
  • 7 DERIVED HEURISTICS FOR DIGITAL DOORWAY EVALUATION
    • 7.1 INTRODUCTION
    • 7.2 PROCESS TO DERIVE THE MULTI-CATEGORY HEURISTICS FOR EVALUATING THE DIGITAL DOORWAY
    • 7.2.1 DEVELOPMENT OF THE MULTI-CATEGORY HEURISTICS
    • 7.3 THE MULTI-CATEGORY HEURISTICS
    • 7.3.1 CATEGORY 1: GENERAL USABILITY HEURISTICS
    • 7.3.2 CATEGORY 2: FORM USABILITY HEURISTICS
    • 7.3.3 CATEGORY 3: HEURISTICS TO SUPPORT DIRECT ACCESSIBILITY
    • 7.3.4 CATEGORY 4: EDUCATIONAL GAME USABILITY HEURISTICS
    • 7.4 CONCLUSION
  • 8 HEURISTIC EVALUATION OF THE DIGITAL DOORWAY
    • 8.1 INTRODUCTION
    • 8.2 INTERFACES AND APPLICATIONS EVALUATED
    • 8.2.1 DIGITAL DOORWAY LOGIN SCREEN
    • 8.2.2 THE NEW USER REGISTRATION FORM
    • 8.2.3 DIGITAL DOORWAY DESKTOP
    • 8.2.4 WHAT-WHAT MZANSI
    • 8.2.5 OPENSPELL
    • 8.2.6 THEMBA’S JOURNEY
    • 8.3 HEURISTIC EVALUATION OF THE DIGITAL DOORWAY: RESULTS AND ANALYSIS
    • 8.3.1 THE HEURISTIC EVALUATION PROCESS
    • 8.3.2 TOTAL NUMBER OF PROBLEMS IDENTIFIED BY INDIVIDUAL EVALUATOR
    • 8.3.3 PROBLEMS IDENTIFIED PER HEURISTIC CATEGORY
    • 8.3.3.1 NUMBER OF PROBLEMS IDENTIFIED BY EVALUATORS PER HEURISTIC CATEGORY
    • 8.3.3.2 TOTAL PROBLEMS PER HEURISTIC CATEGORY
    • 8.3.4 LOCATIONS OF USABILITY/ACCESSIBILITY PROBLEMS
    • 8.3.5 THE NATURE OF USABILITY AND DIRECT ACCESSIBILITY PROBLEMS FOUND IN THE DIGITAL DOORWAY
    • 8.4 CONCLUSION
  • 9 TRIANGULATION THROUGH FIELD USABILITY EVALUATION AND
  • 10 CONCLUSION
  • 11 REFERENCES