The user study presented in this paper examines how well novices can use the provided visual cues to find the documents that are most likely to be relevant.
Nine undergraduate students participated in the user study. The subjects received short instructions but no training prior to being presented with the ten different data sets. Each subject viewed each data set once in the Cluster Bulls-Eye display and once in the RankSpiral display. The order of presentation was randomized with respect to both data set and display type, ensuring that neither the same data set nor the same display type was presented consecutively. After selecting ten documents, a subject submitted his or her selection and received visual feedback about the correct top 10 documents. A subject was not able to change the selection while or after viewing the feedback. When ready, the subject requested to have the next data set displayed. At the end of the experiment, the subjects were asked to answer a few questions in writing.
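The notes do not specify how this constrained randomization was implemented; the following is a minimal sketch, assuming only the stated constraint that neither the data set nor the display type repeats on consecutive trials. The function name schedule_trials and the retry strategy are illustrative, not the study's actual procedure.

import random

def schedule_trials(n_datasets=10, displays=("ClusterBullsEye", "RankSpiral")):
    """Order the 20 trials so that neither the data set nor the display
    type repeats on consecutive trials. With only two display types,
    the no-repeat constraint forces the displays to strictly alternate."""
    while True:
        first, second = random.sample(displays, 2)  # random starting display
        order_a = random.sample(range(n_datasets), n_datasets)
        order_b = random.sample(range(n_datasets), n_datasets)
        trials = []
        for a, b in zip(order_a, order_b):
            trials.append((a, first))
            trials.append((b, second))
        # Retry until no two consecutive trials share a data set.
        if all(t[0] != u[0] for t, u in zip(trials, trials[1:])):
            return trials

for i, (dataset, display) in enumerate(schedule_trials(), start=1):
    print(f"trial {i:2d}: data set {dataset + 1:2d} in {display}")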
For several of the data sets used in the user study, some of the top documents overlap, making it difficult to select all of them, because a selected document can completely occlude the document icons in close proximity. Subjects were instructed verbally to first select an icon that lies underneath other icons before selecting the icon that almost completely occludes it. Subjects also received a one-minute demonstration of the “magnify tool,” which lets users “click & drag” the data display to change its scale. Subjects were told that if the data display is magnified so far that it covers the control panel, its size must be reduced before the pointer tool can be selected.
The hallmark of an effective text retrieval interface is that it guides users toward potentially relevant documents. This section presents the results of a user study whose goals are to:
1) test how well users who have received only a short introduction and no training are able to identify the ten documents that are most likely to be relevant;
2) test whether there is a statistically significant performance difference between the Cluster Bulls-Eye and RankSpiral in terms of effectiveness and/or efficiency (a paired-comparison sketch follows this list);
3) test how much the overall distance of a document from the display center interferes with the size coding used to directly encode its probability of being relevant.
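The notes do not name a statistical test for goal 2. Because the design is within-subjects (every subject saw every data set in both displays), a paired test across the nine subjects is a natural choice; below is a minimal sketch using SciPy's Wilcoxon signed-rank test. The per-subject scores are hypothetical placeholders, not results from the study.

from scipy.stats import wilcoxon

# Hypothetical per-subject effectiveness scores (correct "top 10"
# selections out of 10, averaged over the ten data sets).
# These numbers are placeholders, NOT the study's results.
bulls_eye   = [7.2, 6.8, 8.1, 7.5, 6.9, 7.8, 7.0, 8.3, 7.6]
rank_spiral = [8.0, 7.1, 8.4, 7.9, 7.2, 8.5, 7.4, 8.8, 8.0]

# A paired, non-parametric test suits nine subjects: it respects the
# within-subjects pairing and does not assume normally distributed scores.
stat, p = wilcoxon(bulls_eye, rank_spiral)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.3f}")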
Users need to apply their visual reasoning skills to identify the icons that represent the documents most likely to be relevant. In particular, users need to decide over which document icons to place the mouse pointer to receive “details on demand,” or which icons to select to view the complete document and determine its relevance. A “details-on-demand” display typically shows the document title and a text snippet.
In the user study, however, these aids are not available; subjects can use only the visual information to decide which icons represent the documents that are most likely to be relevant. Once subjects have made and submitted their selections, they receive visual feedback about the location of the documents that are most likely to be relevant (the “top 10”). The feedback consists of surrounding the top ten documents with a “green halo.”
The task for the subjects is to select the ten documents that are most likely to be relevant. The Cluster Bulls-Eye emphasizes how the documents are related to the engines, causing the documents to be “scattered” in their respective concentric rings based on the “forces of attraction” of the engines that found them. The RankSpiral highlights the ranking of the documents based on the number of engines that found them and their average rank position.
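To make the RankSpiral ordering concrete, here is a minimal sketch of the ranking it conveys: documents found by more engines come first, and ties are broken by the better (lower) average rank position. The Doc structure and field names are assumptions for illustration, not the system's actual data model.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    engine_ranks: dict  # engine name -> rank position assigned by that engine

    @property
    def engine_count(self):
        return len(self.engine_ranks)

    @property
    def avg_rank(self):
        return sum(self.engine_ranks.values()) / len(self.engine_ranks)

docs = [
    Doc("A", {"e1": 3, "e2": 1}),
    Doc("B", {"e1": 1}),
    Doc("C", {"e1": 2, "e2": 4, "e3": 1}),
]

# More engines first; among ties, lower (better) average rank first.
spiral_order = sorted(docs, key=lambda d: (-d.engine_count, d.avg_rank))
print([d.title for d in spiral_order])  # -> ['C', 'A', 'B']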