Understanding credibility judgements for web search snippets

Markus Kattenbeck (Department of Information Science, University of Regensburg, Regensburg, Germany)
David Elsweiler (Department of Information Science, University of Regensburg, Regensburg, Germany)

Aslib Journal of Information Management

ISSN: 2050-3806

Article publication date: 20 March 2019

Issue publication date: 13 June 2019

Abstract

Purpose

It is well known that information behaviour can be biased in countless ways and that users of web search engines have difficulty assessing the credibility of results. Yet, little is known about how search engine result page (SERP) listings are used to judge credibility and in which ways, if any, such judgements are biased. The paper aims to discuss these issues.

Design/methodology/approach

Two studies are presented. The first collects data by means of a controlled, web-based user study (N=105). Studying judgements for three controversial topics, the paper examines the extent to which users agree on credibility, the extent to which judgements relate to those applied by objective assessors and to what extent judgements can be predicted by the users’ position on and prior knowledge of the topic. A second, qualitative study (N=9) utilises the same setup; however, transcribed think-aloud protocols provide an understanding of the cues participants use to estimate credibility.

Findings

The first study reveals that users are very uncertain when assessing credibility and that their impressions often diverge from those of objective judges who have fact-checked the sources. Little evidence is found indicating that judgements are biased by prior beliefs or knowledge, but differences are observed in the accuracy of judgements across topics. Qualitative analysis of the think-aloud transcripts reveals ten categories of cues that participants used to determine the credibility of results. Despite the brevity of the listings, participants drew on diverse cues for the same listings. Even when the same cues were identified and utilised, different participants often interpreted them differently. Example transcripts show how participants reach varying conclusions, illustrate common mistakes made and highlight problems with existing SERP listings.

Originality/value

This study offers a novel perspective on how the credibility of SERP listings is interpreted when assessing search results. Especially striking is how the same short snippets provide diverse informational cues and how these cues can be interpreted differently depending on the user and his or her background. This finding is significant in terms of how search engine results should be presented and opens up the new challenge of discovering technological solutions that allow users to better judge the credibility of information sources on the web.

Citation

Kattenbeck, M. and Elsweiler, D. (2019), "Understanding credibility judgements for web search snippets", Aslib Journal of Information Management, Vol. 71 No. 3, pp. 368-391. https://doi.org/10.1108/AJIM-07-2018-0181

Publisher: Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited
