Thursday 3:30–5:00 PM
Chairs: Max van Kleek, Juliana Freire
A Comparison of Visual and Textual Page Previews in Judging the Helpfulness of Web Pages
Anne Aula, Rehan Khan, Peter Hong, Zhiwei Guan, Paul Fontes
We investigated the efficacy of visual and textual web page previews in predicting the helpfulness of web pages related to a specific topic. We ran two studies in the usability lab and collected data through an online survey. Participants (245 in total) were asked to rate the expected helpfulness of a web page based on a preview (four different thumbnail variations, a textual web page summary, a thumbnail/title/URL combination, or a title/URL combination). In the lab studies, the same participants also rated the helpfulness of the actual web pages themselves. In the online study, the web page ratings were collected from a separate group of participants. Our results show that thumbnails add information about the relevance of web pages that is not available in the textual summaries of web pages (title, snippet & URL). However, showing only thumbnails, with no textual information, results in poorer performance than showing only textual summaries. The prediction inaccuracy caused by textual vs. visual previews differed: textual previews tended to make users overestimate the helpfulness of web pages, whereas thumbnails made users underestimate the helpfulness of web pages in most cases. In our study, the best performance was obtained by combining sufficiently large thumbnails (at least 200×200 pixels) with page titles and URLs, and it was better to make users focus primarily on the thumbnail by placing the title and URL below it. Our studies highlighted four key aspects that affect the performance of previews: the visual/textual mode of the preview, the zoom level of the thumbnail, the size of the thumbnail, and the positioning of key information elements.
The “Map Trap”? An Evaluation of Map Versus Text-based Interfaces for Location-based Mobile Search Services
Karen Church, Joachim Neumann, Mauro Cherubini, Nuria Oliver
As the mobile Internet continues to grow, there is an increasing need to provide users with effective search and information access services. To build more effective mobile search services, we must first understand the impact that various interface choices have on mobile users. For example, the majority of mobile location-based search services are built on top of a map visualization, but is this intuitive design decision the optimal interface choice from a human-centric perspective? To tackle this fundamental design question, we developed two proactive mobile search interfaces (one map-based, the other text-based) that use key mobile contexts to improve the search and information discovery experience of mobile users. In this paper, we present the results of an exploratory field study of these two interfaces involving 34 users over a one-month period, focusing in particular on the impact that the type of user interface (map vs. text) has on the search and information discovery experience of mobile users. We highlight the main usage results, including that maps are not the interface of choice for certain information access tasks, and outline key implications for the interface design of next-generation mobile search services.
Sketcha: A Captcha Based on Line Drawings of 3D Models
Adam Finkelstein, Steven Ross, Alex Halderman
This paper introduces a captcha based on the upright orientation of line drawings rendered from 3D models. The models are selected from a large database, and images are rendered from random viewpoints, affording many different drawings from a single 3D model. The captcha presents the user with a set of images, and the user must choose an upright orientation for each image. This task generally requires understanding the semantic content of the image, which is believed to be difficult for automatic algorithms. We describe a process called covert filtering, whereby the image database can be continually refreshed with drawings known to have a high success rate for humans, by randomly inserting new images to be evaluated into the captcha. Our analysis shows that covert filtering can ensure that captchas are likely to be solvable by humans while deterring attackers who wish to learn a portion of the database. We performed several user studies that evaluate how effectively people can solve the captcha. Comparing these results to an attack based on machine learning, we find that humans possess a substantial performance advantage over computers.
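The covert-filtering idea described in the abstract can be sketched in a few lines: each captcha mixes vetted images with a few unvetted probe images, probe answers update per-image success statistics rather than the pass/fail decision, and probes that humans orient correctly often enough are promoted into the vetted pool. This is only an illustrative sketch, not the authors' implementation; the class, parameter names, and thresholds (`PROMOTE_THRESHOLD`, `MIN_TRIALS`) are hypothetical choices.

```python
# Illustrative sketch of covert filtering (hypothetical names and thresholds,
# not the Sketcha authors' code). Probe images ride along in each captcha;
# their answers only update statistics, never the user's pass/fail result.
import random

PROMOTE_THRESHOLD = 0.9   # assumed minimum human success rate for promotion
MIN_TRIALS = 20           # assumed number of trials before judging a probe

class CovertFilter:
    def __init__(self, vetted, candidates):
        self.vetted = list(vetted)                          # known human-solvable images
        self.candidates = {c: [0, 0] for c in candidates}   # image -> [correct, trials]

    def build_captcha(self, n_vetted=6, n_probe=2):
        """Return a shuffled image set plus the (server-side) probe subset."""
        probes = random.sample(list(self.candidates), min(n_probe, len(self.candidates)))
        images = random.sample(self.vetted, n_vetted) + probes
        random.shuffle(images)
        return images, set(probes)

    def record(self, image, was_correct):
        """Update a probe's stats; promote it once it proves human-solvable."""
        stats = self.candidates.get(image)
        if stats is None:
            return
        stats[1] += 1
        if was_correct:
            stats[0] += 1
        if stats[1] >= MIN_TRIALS and stats[0] / stats[1] >= PROMOTE_THRESHOLD:
            self.vetted.append(image)
            del self.candidates[image]
```

Because attackers cannot tell probes from vetted images, guessing on probes poisons their statistics without revealing which answers were graded, which is the deterrence property the abstract claims.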