Using Heuristics with User Centred Design

Examining the interface and judging compliance


Heuristic evaluation

Heuristic evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface (UI) design. It involves evaluators examining the interface and judging its compliance with recognised usability principles (the “heuristics”). These evaluation methods are now widely taught and practised in the new media sector, where UIs are often designed in a short space of time on a budget that may not stretch to other types of interface testing. There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the same aspects of user interface design.

Nielsen’s Heuristics

Jakob Nielsen’s heuristics are probably the most widely used usability heuristics for user interface design. Nielsen developed them based on work with Rolf Molich in 1990, and the final set used today was released by Nielsen in 1994. The heuristics, as published in Nielsen’s book Usability Engineering, are listed below.

The Benefits?

The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces. Usability consultant Jakob Nielsen developed this method on the basis of several years of experience in teaching and consulting about usability engineering.

Usability problems are categorised according to their estimated impact on user performance or acceptance. The heuristic evaluation is often conducted in the context of use cases (typical user tasks), to give developers feedback on the extent to which the interface is likely to be compatible with the intended users’ needs and preferences.
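A minimal sketch of how such categorised findings might be recorded, using Nielsen's 0-4 severity scale; the `Finding` class and the example findings are hypothetical, not from the source:

```python
# Recording heuristic-evaluation findings with Nielsen's 0-4 severity
# scale (0 = not a problem, 4 = usability catastrophe).
from dataclasses import dataclass

SEVERITY_LABELS = {
    0: "not a usability problem",
    1: "cosmetic problem",
    2: "minor usability problem",
    3: "major usability problem",
    4: "usability catastrophe",
}

@dataclass
class Finding:
    heuristic: str    # which heuristic was violated
    description: str  # what the evaluator observed
    severity: int     # 0-4 on Nielsen's scale

    def label(self) -> str:
        return SEVERITY_LABELS[self.severity]

findings = [
    Finding("Visibility of system status", "No progress indicator during upload", 3),
    Finding("Error prevention", "Delete button has no confirmation step", 4),
]

# Triage: report the most severe problems first.
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print(f"[{f.severity}] {f.heuristic}: {f.description} ({f.label()})")
```

Sorting by severity mirrors how evaluation reports are usually triaged: the catastrophes get fixed before the cosmetic issues.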

The simplicity of heuristic evaluation makes it useful in the early stages of design. This usability inspection method does not require user testing and can be applied to prototypes and early schematics.

Using heuristic evaluation prior to user testing will reduce the number and severity of design errors discovered by users. Although heuristic evaluation can uncover many major usability issues in a short period of time, a criticism often levelled at it is that the results are highly influenced by the individual expert reviewer. This subjective review frequently produces different results from software performance testing; each type of testing uncovers a different set of problems.

Nielsen’s heuristics

Visibility of system status

The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.

 

Match between system and the real world

The system should speak the user’s language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.

User control and freedom

Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

Consistency and standards

Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.

Error prevention

Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.

Recognition rather than recall

Minimize the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

 

Flexibility and efficiency of use

Accelerators—unseen by the novice user—may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.

Aesthetic and minimalist design

Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.

Help users recognize, diagnose, and recover from errors

Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.

Help and documentation

Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.

Software Testing

Testing types

1.1 Load testing
1.2 Stress testing
1.3 Soak testing
1.4 Spike testing
1.5 Configuration testing
1.6 Isolation testing
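
The testing types above differ mainly in the shape of the load applied over time. As a rough, tool-agnostic sketch with invented numbers, each profile can be expressed as a series of concurrent-user counts sampled once per minute:

```python
# Illustrative load shapes for three of the testing types. The user
# counts and durations are made up for the example.

def load_profile(users, minutes):
    """Load test: steady, expected load for the whole run."""
    return [users] * minutes

def spike_profile(base, peak, minutes, spike_at):
    """Spike test: normal load with one sudden short burst."""
    return [peak if t == spike_at else base for t in range(minutes)]

def stress_profile(start, step, minutes):
    """Stress test: ramp load upward to find the breaking point."""
    return [start + step * t for t in range(minutes)]

print(load_profile(100, 5))            # [100, 100, 100, 100, 100]
print(spike_profile(100, 1000, 5, 2))
print(stress_profile(100, 50, 5))
```

A soak test would simply be the load profile run for many hours instead of minutes, to expose leaks and gradual degradation.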

Setting performance goals

2.1 Concurrency/throughput
2.2 Server response time
2.3 Render response time
2.4 Performance specifications
2.5 Questions to ask
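
Server response time goals are often expressed as percentiles rather than averages, because a mean hides the slow tail that users actually notice. A minimal sketch, with made-up timings and an assumed 500 ms p90 acceptance criterion:

```python
# Checking a percentile-based response-time goal. The timings and the
# 500 ms criterion are illustrative assumptions, not from the source.

def percentile(samples, p):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    k = max(0, int(round(p / 100 * len(ordered))) - 1)
    return ordered[k]

timings_ms = [120, 130, 110, 480, 125, 140, 135, 900, 128, 132]

goal_p90_ms = 500  # assumed acceptance criterion
p90 = percentile(timings_ms, 90)
print(f"p90 = {p90} ms -> {'PASS' if p90 <= goal_p90_ms else 'FAIL'}")
```

Note that the mean of these timings looks healthy while the worst request took 900 ms; that is exactly the kind of outlier a percentile goal is designed to surface.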

Pre-requisites for Performance Testing

3.1 Test conditions
3.2 Timing

Tools, Tech & Tasks

4 Tools
5 Technology
6 Tasks to undertake

Performance testing web applications

According to the Microsoft Developer Network, the performance testing methodology consists of the following activities:

Activity 1 - Identify the Test Environment

Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project’s life cycle. 

Activity 2 - Identify Performance Acceptance Criteria

 Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.

Activity 3 - Plan and Design Tests

 Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analysed.
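
One way to capture such a usage model is as a set of weighted scenarios plus a list of metrics to collect. The scenario names, weights, and metrics below are illustrative assumptions, not from the source:

```python
# A toy usage model: each virtual user is assigned a scenario according
# to the weights, simulating variability among representative users.
import random

usage_model = {
    "browse_catalogue": 0.60,  # most virtual users just browse
    "search":           0.25,
    "checkout":         0.15,
}
metrics_to_collect = ["response_time_ms", "throughput_rps", "cpu_percent"]

# The weights must describe the whole user population.
assert abs(sum(usage_model.values()) - 1.0) < 1e-9

def pick_scenario(rng):
    """Choose a scenario for one virtual user according to the weights."""
    return rng.choices(list(usage_model), weights=list(usage_model.values()))[0]

rng = random.Random(42)
sample = [pick_scenario(rng) for _ in range(1000)]
print({s: sample.count(s) for s in usage_model})
```

With 1,000 simulated users the sample counts land close to the 60/25/15 split, which is the property the model is meant to guarantee before it is implemented in a real test tool.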

Activity 4 - Configure the Test Environment

Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary. 

Activity 5 - Implement the Test Design

Develop the performance tests in accordance with the test design.

Activity 6 - Execute the Test

Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Activity 7 - Analyse Results, Tune, and Retest

Analyse, consolidate, and share the results data. Make a tuning change and retest to see whether performance improves or degrades. Each successive improvement tends to be smaller than the last. When do you stop? When you reach a CPU bottleneck; at that point, the choices are either to improve the code or to add more CPU capacity.
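
The stopping rule described above can be sketched as a loop that compares successive runs and halts when the gain falls below a threshold. The run data and the 5% cut-off are invented for illustration:

```python
# The tune-and-retest loop: measure, change one thing, measure again,
# and stop once improvements shrink below a chosen threshold.

def improvement(before_ms, after_ms):
    """Relative improvement; negative means the change degraded performance."""
    return (before_ms - after_ms) / before_ms

runs_p95_ms = [900, 600, 480, 450, 445]  # p95 after successive tuning changes
STOP_BELOW = 0.05                         # stop when gains drop under 5%

for prev, cur in zip(runs_p95_ms, runs_p95_ms[1:]):
    gain = improvement(prev, cur)
    print(f"{prev} -> {cur} ms: {gain:+.1%}")
    if gain < STOP_BELOW:
        print("Diminishing returns: stop tuning or address the bottleneck.")
        break
```

Tracking the gain per iteration makes the diminishing-returns pattern explicit, so the decision to stop (or to invest in code changes or more hardware) is based on data rather than gut feeling.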

 
