Tuesday 28 June 2011

Aims and objectives: ReScript usability / learnability enhancement

Goal

To enhance the ReScript interface so that it matches the expectations of a variety of researchers who work with very different texts and have differing levels of expertise. This will make the project more attractive to a range of third-party projects looking for a collaborative editing solution, strengthening its case for sustainability.

Our primary outcome for this project is to learn how both editors and readers will interact with the source data, and to produce recommendations for ways in which the identification of usability issues can be built into the ongoing managerial process. We will also build two iterations of the service, each with improved quantitative success metrics, thus demonstrating the project's ability to learn from user insight.

The choice of goal was dictated by the Institute of Historical Research's commitment to supporting the study of innovation in humanities research and the study of history more generally. This means looking at the current level of technical skills and proficiencies among researchers, as well as the range of content types which projects are currently producing. Only by using an adaptive and learnable model will the ReScript project be able to match researchers’ expectations, as it is likely that each build phase will, at least in some small way, advance their understanding of both what is required and what is possible.

Success measures

First, produce evidence of improved quantitative ratings and qualitative feedback on revised designs in each of the areas under review, across two iterations so that a longitudinal analysis is possible. Second, reflect on the specific conditions under which the tools and techniques used generate the most value.

All of the usability components will be employed to baseline performance during the initial analysis phase. After the first build phase, remote testing will be used, including both quantitative and qualitative strands, with the intention of comparing the two sets of results. Some tests, such as the SUS, will be used throughout all consultation phases, enabling a complete longitudinal analysis to be produced.
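
As an illustration of how those SUS results could be turned into comparable figures across phases, the Python sketch below applies the standard SUS scoring convention (ten items answered on a five-point scale, converted to a score out of 100). The function name and the sample responses are hypothetical and are not part of the ReScript codebase.

    def sus_score(responses):
        """Convert ten SUS item responses (1-5 Likert scale) to a 0-100 score.

        Standard SUS scoring: odd-numbered items contribute (response - 1),
        even-numbered items contribute (5 - response), and the sum is
        multiplied by 2.5.
        """
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("expected ten responses on a 1-5 scale")
        total = sum((r - 1) if i % 2 == 1 else (5 - r)
                    for i, r in enumerate(responses, start=1))
        return total * 2.5

    # Hypothetical responses from one participant at two consultation phases
    baseline = sus_score([4, 2, 4, 2, 3, 2, 4, 1, 4, 2])          # 75.0
    after_first_build = sus_score([5, 1, 4, 2, 4, 2, 5, 1, 4, 2])  # 85.0

Averaging such per-participant scores within each consultation phase would yield the longitudinal series described above.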

The measures are clear enough to be understood by different roles within the organisation, i.e., they can be used equally to justify change to business managers and to indicate development areas to the information architect or developer.

Conducting the research within the project means that the ambition of the changes proposed is realistically linked to the amount of resource which the project has at its disposal, leading to recommendations that are practicable to implement.

A simpler approach would be to identify and invite a small set of researchers to join a closed list in order to discuss proposed changes. With that approach, however, it becomes impossible to judge where resources should be assigned for maximum effect (are two medieval historians better than one early modernist?). It would also become difficult to act on quantitative data if doing so meant overruling the research group.

The project could also have focused wholly on canvassing either qualitative or quantitative feedback and extended the depth of consultation. However, that would be to assume that what people say and what people do are materially equivalent, which is not necessarily true.

Approach

The following techniques will be used throughout the project: interviews, remote testing (e.g. click, annotation, and labelling tests of system designs), user groups, and the System Usability Scale (SUS).

What people say / What people do, by consultation phase:

Initial analysis
  • What people say: individual interviews; user group; SUS
  • What people do: click tests

Between build phases
  • What people say: annotation tests; SUS
  • What people do: click tests; A/B tests

Final analysis
  • What people say: individual interviews; annotation tests; SUS
  • What people do: click tests; A/B tests


The initial analysis phase will result in a number of identified usability issues which will be presented as report cards. The report card device is easily understood and may not only be re-used by other projects, but also serve as a starting point for discussion around usability issues. This may be critical to the widespread recognition of usability as a core component of academic information service provision.
