
Usability Testing

For Company R

Usability testing was conducted on secure command and control software developed by Company R.

Project Information:

A usability evaluation of software developed by Company R was conducted as part of the Usability Testing course at the university. The evaluation focused on a set of tasks identified as important through a prior heuristic evaluation of the software. The project was conducted in a team of four; my responsibilities included conducting a heuristic evaluation of the software and moderating, observing, and recording the usability sessions.

Overview: Introduction

I had the opportunity to evaluate the usability of secure command and control software for Company R, a U.S. company that specializes in developing advanced communication solutions for the highest levels of government and leading Department of Defense communications programs. The software's core tenets of interoperability, command and control, flexibility, and ease of use are critical to ensuring successful deployments in complex environments. In this case study, I describe the research methods used, the usability issues identified, and the recommendations provided to improve the software's usability.

Overview: Goals

The main purpose of this usability study was to identify important usability issues with the product and to determine whether all components of the application function as intended. This matters because glitches can prevent the application from working properly, creating additional challenges for users.

 

The primary goals were to:

  • Assess the overall ease with which users perform various essential tasks.

  • Understand the expectations of the users.

  • Measure perceived system usability on a rating scale.

  • Find important heuristic and usability issues.

  • Make design, workflow, and functional recommendations to address usability issues.

Research: Methodology

This study examined the usability and user experience of the secure command and control software. We conducted a within-subjects test in which each participant completed a session of approximately 45 minutes. Roughly 15 minutes were used to give the participant a brief introduction to the study and the software, obtain signed consent forms, and administer the post-test questionnaires; the remaining 30 minutes were dedicated to usability testing.

Research: Scope

The scope of the evaluation was discussed with the client in January 2022, and it was agreed that the main focus of this usability study would be to identify important usability issues in the Contacts, Chats, Rooms, and Groups sections of the platform. Other areas, such as settings, account setup, audio and video calls, and administrative tasks, were outside the scope of the evaluation.

Research: Timeline
Timeline for the usability study
Usability Evaluation: Participants

The participants in the usability test were RIT students between the ages of 20 and 28; each was an Army ROTC cadet, a Cybersecurity or Networking student, or had prior military experience. Participants had varying levels of computer proficiency, and all met the minimum criterion of holding a high school diploma. The chart below shows the participant demographics:

Participant demographics
Usability Evaluation: Test Design

The tasks and scenarios were narrated to the participants one at a time. We incorporated counterbalancing to eliminate the learning effect: the pilot study showed that participants applied knowledge gained from a previous task to complete the next one, so we alternated the task order for each participant to eliminate this transfer of knowledge, as sketched below.
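
As an illustration, here is a minimal Python sketch of one way to alternate task order across participants using simple rotation; the task labels and the rotation scheme are assumptions for illustration, not necessarily the exact ordering used in this study.

```python
# Minimal sketch of rotation-based counterbalancing (an assumed scheme,
# not necessarily the exact ordering used in this study).
TASKS = ["Task 1", "Task 2", "Task 3", "Task 4", "Task 5", "Task 6", "Task 7"]

def rotated_order(tasks, participant_index):
    """Rotate the task list by the participant's index so that
    consecutive participants never start with the same task."""
    shift = participant_index % len(tasks)
    return tasks[shift:] + tasks[:shift]

for i in range(6):  # e.g., six participants
    print(f"P{i + 1}: {rotated_order(TASKS, i)}")
```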

Usability Evaluation: Tasks

We evaluated tasks revolving around the “Chat” and “Room” features of the software. The tasks included sending a chat message, editing a message, canceling an edit, searching for messages in a chat, clearing chat history, joining a room, privately messaging someone from a room, creating a new room, and popping the main chat panel out and back in.

Usability Evaluation: Scenarios

The scenarios were designed to give context to the participants, with Star Wars characters serving as the actors in each scenario. The scenarios were administered orally as well as on paper, one at a time, and participants were asked to indicate when they had finished each task.

Usability Evaluation: Measurements

The measurements taken during the study were:

  • Success/failure of a task

  • Time of completion for each task

  • Likert-scale ratings for each task

  • Open-ended questions/interview responses
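
As a sketch of how such measures can be summarized per task, the snippet below computes a completion rate, mean time on task, and mean Likert rating; the records and values are hypothetical placeholders, not the study's actual data.

```python
from statistics import mean

# Hypothetical per-participant records for a single task:
# (participant, succeeded, seconds to complete, Likert rating 1-5).
# Values are illustrative only, not the study's data.
records = [
    ("P1", True, 42.0, 4),
    ("P2", False, 95.5, 2),
    ("P3", True, 37.2, 5),
    ("P4", True, 61.8, 3),
]

completion_rate = mean(1 if ok else 0 for _, ok, _, _ in records)
mean_time = mean(t for _, _, t, _ in records)
mean_rating = mean(r for _, _, _, r in records)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean time on task: {mean_time:.1f} s")
print(f"Mean Likert rating: {mean_rating:.2f} / 5")
```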

Analysis: Findings

The following table gives an overall view of the task completion status for each participant across each task. Green checkmarks indicate that a task was completed successfully, and red crosses indicate that the participant failed to complete it. Purple highlighting indicates that the participant was unsure whether they had completed the task, orange indicates that they gave up, and gray indicates that the task was completed with a direction or hint from us.

Task completion status for each participant across tasks

Based on our quantitative data, some tasks were more challenging than others. A few tasks were easy for most participants to complete, but the data suggest that each task had areas where participants experienced some confusion. For example, the window pop-out portion of Task 7 was easy for the majority of participants, but when trying to pop the chat window back in, most were unsure whether they had succeeded. Task 6 is another scenario where participants were confused: many were able to create a new room successfully, but adding members was a confusing process for most of them. By contrast, Task 4 was very easy; most participants completed it successfully and experienced the least confusion of any task. This highlights that certain features of the software have a learning curve, while others are simple for the majority of users.

Participant completion rates for tasks
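
To make the derivation of the chart above concrete, here is a small sketch that tallies a color-coded status matrix like the one shown earlier into per-task completion rates; the tasks chosen and their statuses are hypothetical placeholders, not the study's data.

```python
from collections import Counter

# Hypothetical status matrix: task -> one status per participant.
# "pass" = green check, "fail" = red cross, "unsure" = purple,
# "gave_up" = orange, "hinted" = gray (completed with a hint).
statuses = {
    "Task 4": ["pass", "pass", "pass", "pass", "pass", "unsure"],
    "Task 6": ["pass", "unsure", "hinted", "pass", "gave_up", "unsure"],
    "Task 7": ["pass", "pass", "unsure", "hinted", "pass", "unsure"],
}

for task, results in statuses.items():
    counts = Counter(results)
    completed = counts["pass"] + counts["hinted"]  # hinted runs still finished
    rate = completed / len(results)
    print(f"{task}: {rate:.0%} completed  {dict(counts)}")
```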

P4: “The easiest part is creating a room but adding members is difficult. I don’t know if they got the invites when I press the add button.”

P2: “I feel like I need to guess where the edit feature is.”

P6: “I found the edit message option but I don’t know how to exit edit mode.”

Analysis: Post-study Interview

After completing all the tasks, we asked each participant follow-up questions to gain a deeper understanding of how they felt about the software. These questions captured their opinions about the application, such as what other features the software should include and how their overall navigation experience was.

Analysis: Recommendations

Based on the findings from the study and the post-study interview, we provided a set of recommendations for each feature that was tested in this usability evaluation.

1

Placing the close slider icon and the save/send invites icon closer together may make it easier for users to avoid missing the save icon. Replacing the save icon with a Send Invites button may also make the requirement to press the save/send button more evident. Finally, once the user saves and sends invitations to other contacts to join a room, showing a message that the invites have been sent and are awaiting acceptance may make the system more user friendly.

2

The widgets and functions are not placed near their targets, as suggested by Charles Schneider’s heuristics, which made exiting edit mode difficult. The process of exiting edit mode without actually editing a message could be made easier by showing the user an X (cross) icon in the top right corner of the edit-mode message textbox.

Conclusion

In this study, we evaluated the “Chat” and “Room” features of Company R's software through seven tasks revolving around those features. A pilot study was conducted to check the test design and make any necessary changes to the methodology. From the pilot study, we learned how to effectively incorporate counterbalancing and decided not to impose entry criteria, allowing participants to navigate the software freely.

 

Data was collected through Likert scales and open-ended questions for each task, as well as a post-session interview, and each task was timed using the “Stopwatch” application on a phone. Based on the quantitative results, some tasks were more challenging than others: although participants found certain tasks easier to complete, every task had areas that confused them. Through the qualitative data, we gathered suggestions for additional features, learned what confused the participants, and learned what behavior they expected instead of what was happening. Based on this, we provided recommendations for each task that could offer some improvements to the software.
