A usability assessment identifies design issues with the user interface that could be problematic for the end user and, in some cases, prevent access to the system's core functionality. Completing both an accessibility assessment and a usability assessment is essential to providing the best possible user experience.
Method and Procedures
Automated Accessibility Scans
Automated testing provides a quick, broad initial assessment of the system and verifies compliance with governing regulations. However, automated tools alone should not be relied on to validate accessibility. Instead, use them to identify problem areas for additional testing and research.
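As a sketch of how automated scan output can feed into follow-up manual testing, the snippet below triages a hypothetical scan report. The record structure and field names are assumptions, loosely modeled on the JSON that checkers such as axe-core produce; they are not a required format.

```python
from collections import Counter

# Hypothetical scan results; field names ("rule", "impact", "page")
# are illustrative assumptions, not a mandated report format.
scan_results = [
    {"rule": "image-alt", "impact": "critical", "page": "/login"},
    {"rule": "color-contrast", "impact": "serious", "page": "/login"},
    {"rule": "color-contrast", "impact": "serious", "page": "/profile"},
    {"rule": "label", "impact": "moderate", "page": "/profile"},
]

def triage(results):
    """Group violations by page so testers know where to focus manual review."""
    by_page = {}
    for r in results:
        by_page.setdefault(r["page"], []).append(r)
    return by_page

def impact_summary(results):
    """Count violations per impact level for an initial compliance snapshot."""
    return Counter(r["impact"] for r in results)

focus_areas = triage(scan_results)
print(impact_summary(scan_results))
```

Pages with the most, or most severe, findings become candidates for the manual, assistive-technology-based testing described later in this section.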
The accessibility scans should be included as part of the standard pre-production testing activities. Performing post-production accessibility validation on an annual basis or after each major release is also recommended.
For information on available tools, refer to:
- Microsoft Operating System
Manual Accessibility Testing
Manual testing activities identify aspects of the user interface that are technically accessible but still awkward or difficult for users with accessibility needs. Identify and plan testing activities in advance to ensure thorough coverage of the system's core functionality during the assessment. Rather than step-by-step instructions, provide participants with usage scenarios to follow.
Participants and use cases for the assessment need to represent the diverse user community, including users with accessibility needs. When selecting participants, it is highly recommended that the user base include individuals who are skilled with assistive technologies, including screen readers, speech recognition software, and magnification software. For additional information about assistive technologies and testing strategies, or for help finding skilled participants, refer to the "Resources" section below.
While performing the assessment, it is recommended that participants use non-production environments to navigate through scenario-based exercises. A preconfigured persona and credentials should be provided. Participants should not be asked to enter personally identifiable information; only fictitious data should be used. Participants should be encouraged to navigate naturally through the system or website, guided by functional scenarios.
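The guidance above calls for preconfigured personas with entirely fictitious data. A minimal sketch of such test personas is below; every name, credential, and field shown is invented for illustration, and no real personally identifiable information should ever appear in test data.

```python
# Entirely fictitious, preconfigured test personas for non-production use.
# All values are invented examples; no real PII belongs in test data.
TEST_PERSONAS = [
    {
        "persona_id": "P01",
        "display_name": "Test User One",   # fictitious name
        "username": "test.user.01",        # non-production credential
        "assistive_technology": "screen reader",
    },
    {
        "persona_id": "P02",
        "display_name": "Test User Two",
        "username": "test.user.02",
        "assistive_technology": "speech recognition",
    },
]

def persona_for(at_name):
    """Return a preconfigured persona matching the participant's assistive technology."""
    for p in TEST_PERSONAS:
        if p["assistive_technology"] == at_name:
            return p
    raise LookupError(f"No persona configured for: {at_name}")

print(persona_for("screen reader")["username"])
```

Handing each participant a ready-made persona lets them navigate naturally through the scenarios without ever typing their own information.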
Participants should also be provided with any known workarounds for assistive technologies. For example, if alternate keystrokes must be used in a screen reader for certain functionality to work, provide that information to participants before they begin the assessment.
Even if an automated assessment was performed, it is recommended that participants validate the functionality and modules manually to ensure that accessibility issues are properly addressed.
To help focus participants during the assessment, provide assessors with basic procedures for using the system being evaluated, along with a list of tasks to perform to encourage a thorough review. Instructions can help guide users through the assessment, but step-by-step instructions may narrow its scope. Instead, give users enough information to direct them to the modules that need to be evaluated without prescribing every action; this helps evaluate not only the system's accessibility but also its usability. When evaluating the system with specific assistive technologies, provide additional instructions or tasks geared toward those tools by establishing standard procedures for the different technologies:
- Screen Readers
- Speech Recognition Software
- Magnification Software
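The standard, technology-specific procedures above can be captured as a simple lookup structure. The task wording below is an illustrative assumption; real procedures should come from the department's test plan.

```python
# Illustrative standard procedures per assistive technology.
# Task text is an example only, not a prescribed checklist.
AT_PROCEDURES = {
    "screen reader": [
        "Navigate the main menu using only the keyboard",
        "Confirm form fields announce their labels and errors",
    ],
    "speech recognition": [
        "Complete the search scenario using voice commands only",
        "Dictate text into a multi-line input field",
    ],
    "magnification": [
        "Verify content reflows without horizontal scrolling at 200% zoom",
        "Confirm focus indicators remain visible when magnified",
    ],
}

def tasks_for(technology):
    """Look up the standard procedure for a given assistive technology."""
    return AT_PROCEDURES.get(technology, [])

for tech, tasks in AT_PROCEDURES.items():
    print(f"{tech}: {len(tasks)} tasks")
```

Keeping the procedures in one place makes it easy to hand each participant only the tasks relevant to the tools they will use.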
Because different browsers interpret code differently, web-based systems should also be tested in multiple browsers.
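Multi-browser coverage can be planned as a simple test matrix that pairs each supported browser with each assistive technology, so every combination is exercised at least once. The browser and tool names below are examples, not a required support list.

```python
from itertools import product

# Example browser and assistive-technology lists; substitute the
# department's actual supported set.
BROWSERS = ["Chrome", "Firefox", "Edge"]
ASSISTIVE_TECH = ["screen reader", "speech recognition", "magnification"]

def coverage_matrix(browsers, tools):
    """Enumerate every (browser, assistive technology) combination to test."""
    return list(product(browsers, tools))

matrix = coverage_matrix(BROWSERS, ASSISTIVE_TECH)
print(len(matrix))  # 3 browsers x 3 tools = 9 combinations
```

Even a small matrix like this makes gaps visible: if a combination has no assigned participant or session, it has not been tested.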
Setup and Support
Facilities and Equipment
If the scope of the assessment is large and involves outside volunteers, consider setting up a dedicated testing facility for the people involved. When setting up the facility, staff should evaluate all the equipment necessary to perform the assessment.
The room(s) where the assessment will take place should be adequately equipped with all the devices that will be used to access the system being assessed, including desktop computers, laptops, tablets, and mobile devices. To support the assistive technologies that will be employed, headsets and microphones should be installed and properly configured on those devices. Because testing with assistive technologies can be loud, consider spacious areas or private rooms to allow for more effective use of screen readers and speech recognition software. Test systems should have a variety of supported browsers as well as the software tools required to support the objectives of the testing cycle.
Consent Forms
Have test participants sign a standard consent form. The consent form gives the department permission to report on participants' comments and behavior, and informs participants that the department plans to use their comments for internal briefings without disclosing their personal information.
Introduction to a Testing Session
Design the introduction to provide logistical information and ensure participants are comfortable in the designated testing area. Provide an overview of the functional test objectives and discuss the participant's role in the assessment activities using assistive technologies. Encourage participants to ask questions about functionality as needed.
Questionnaire on Computer Use, System and Internet Experience
Prior to beginning the assessment, use a short questionnaire to assess the participant's experience with assistive technologies, the system being assessed, computers, and the Internet.
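A minimal sketch of capturing and summarizing those pre-assessment experience levels follows. The five-point scale and question topics are assumptions for illustration, not a prescribed instrument.

```python
# Illustrative pre-assessment questionnaire; wording and the 1-5 scale
# are assumptions, not a mandated survey design.
QUESTIONNAIRE = {
    "assistive_technology": "How experienced are you with your assistive technology? (1-5)",
    "system": "How familiar are you with the system being assessed? (1-5)",
    "computer_internet": "How experienced are you with computers and the Internet? (1-5)",
}

def summarize(responses):
    """Average a participant's ratings to gauge overall experience level."""
    return round(sum(responses.values()) / len(responses), 1)

sample = {"assistive_technology": 5, "system": 2, "computer_internet": 4}
print(summarize(sample))  # 3.7
```

Recording the responses alongside the test results makes it easier to interpret findings later, for example distinguishing a usability defect from unfamiliarity with the system.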
Debriefing Interviews
Conduct debriefing interviews to gather the participant's impressions of the assessment activities. For each test cycle, include questions about the facility, equipment, coordination, instructions, and proctor support. Develop unique debriefing questions to capture the participant's personal perspective on the usability and accessibility of the department's IT systems or websites, as applicable to the objective of the test cycle. Conduct debriefing interviews in an interactive, conversational style.
Metrics and Reporting
Compile the usability and accessibility assessment results and compare them to the targeted metrics. Document the results in a comprehensive report covering issues found and user experience improvements, and include recommendations for future usability and accessibility work.
Outcome vs Objectives
Measure the usability and accessibility assessment results against the defined objectives and evaluate the details. If the outcome reveals anomalies in the testing approach, adjust the approach as warranted and rerun the tests.
Document the outcome of each major usability and accessibility assessment, including the desired behavior, the impact on the usability and accessibility of the system or website, the severity of the defect, and the recommended adjustments. Give highest priority to outcomes demonstrating non-compliance with required standards.
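The prioritization rule above, non-compliance first, then severity, can be sketched as a simple sort. The severity labels, compliance flag, and finding records are illustrative assumptions; real reports should follow the department's defect management conventions.

```python
# Illustrative documented findings; fields and values are assumptions.
FINDINGS = [
    {"id": "A-1", "severity": "high", "non_compliant": True,
     "summary": "Images missing text alternatives"},
    {"id": "A-2", "severity": "medium", "non_compliant": False,
     "summary": "Inconsistent focus order in navigation"},
    {"id": "A-3", "severity": "low", "non_compliant": True,
     "summary": "Insufficient contrast on footer links"},
]

SEVERITY_RANK = {"high": 0, "medium": 1, "low": 2}

def prioritize(findings):
    """Sort findings so non-compliance with required standards comes first,
    then by severity within each group."""
    return sorted(
        findings,
        key=lambda f: (not f["non_compliant"], SEVERITY_RANK[f["severity"]]),
    )

for f in prioritize(FINDINGS):
    print(f["id"], f["severity"], "non-compliant" if f["non_compliant"] else "")
```

Note that a low-severity non-compliant finding still outranks a compliant medium-severity one, matching the rule that non-compliance with required standards gets the highest priority.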
Follow the department's normal incident management process to manage the accessibility assessment reporting phase. This allows report generation using existing defect management tools.