Algorithmic recommendations enabling and constraining information practices among young people

Ville Jylhä (Information Studies, Faculty of Humanities, University of Oulu, Oulu, Finland)
Noora Hirvonen (Information Studies, Faculty of Humanities, University of Oulu, Oulu, Finland)
Jutta Haider (Swedish School of Library and Information Science, Borås, Sweden)

Journal of Documentation

ISSN: 0022-0418

Article publication date: 16 January 2024


Abstract

Purpose

This study addresses how algorithmic recommendations and their affordances shape everyday information practices among young people.

Design/methodology/approach

Thematic interviews were conducted with 20 Finnish young people aged 15–16 years. The material was analysed using qualitative content analysis, with a focus on everyday information practices involving online platforms.

Findings

The key finding of the study is that the current affordances of algorithmic recommendations enable users to engage in more passive practices instead of active search and evaluation practices. Two major themes emerged from the analysis: enabling not searching, inviting high trust, which highlights how the affordances of algorithmic recommendations enable the delegation of search to a recommender system and, at the same time, invite trust in the system, and constraining finding, discouraging diversity, which focuses on the constraining degree of affordances and the breakdowns associated with algorithmic recommendations.

Originality/value

This study contributes new knowledge regarding the ways in which algorithmic recommendations shape the information practices in young people's everyday lives specifically addressing the constraining nature of affordances.

Citation

Jylhä, V., Hirvonen, N. and Haider, J. (2024), "Algorithmic recommendations enabling and constraining information practices among young people", Journal of Documentation, Vol. 80 No. 7, pp. 25-42. https://doi.org/10.1108/JD-05-2023-0102

Publisher

Emerald Publishing Limited

Copyright © 2023, Ville Jylhä, Noora Hirvonen and Jutta Haider

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

This exploratory study deals with the technology-mediated everyday information practices of young people, focusing on the role of algorithmic recommendations in these practices. Algorithmic recommendations, the attempt to offer people the most appropriate – i.e. often the most profitable – content based on their online behaviour (Lu et al., 2015), are increasingly common. In particular, the development of so-called artificial intelligence (AI), which has become a ubiquitous part of everyday life, has increased the importance of algorithmic recommendations. It is claimed that recommender systems and AI systems more broadly are shaping people's everyday experiences to an unprecedented extent (Gómez et al., 2021). However, little research has been done on this topic to date. Although the role of algorithms in everyday life has been addressed in some previous studies, especially in relation to web search engines (Andersson, 2021; Haider and Sundin, 2019, 2022a; Sundin et al., 2017), there is still a lack of understanding in this area. This study attempts to fill this research gap by examining the role of algorithmic recommendations and their affordances in the emergence and formation of everyday-life information practices among young people.

This study adopts a sociocultural approach, which views technologies as cultural tools (Jones, 2020; Limberg et al., 2012), draws from theorizing on "affordances" (Gibson, 2015) as the action possibilities of an environment or tools, and understands affordances as mediating action (Kaptelinin and Nardi, 2012). Furthermore, affordances are understood as varying from enabling to constraining (Evans et al., 2017; Kitzie, 2019). For example, while the affordance of anonymity may allow people to hide their identity, it may in turn constrain them from evaluating interpersonal information sources (Kitzie, 2019); the affordance of visibility may enable people to view certain information or constrain them from doing so (Evans et al., 2017); and the affordance of association may enable people to connect with information sources but constrain access for those who lack the language necessary to find relevant information (Kitzie, 2019).

Previous research has pointed out the need for more nuanced empirical research that examines youth information practices from the perspective of the youths themselves (Agosto, 2019). Moreover, the Council of Europe (2020) has stated the need for continuous research to understand the full impact of AI on young people. A recent report by UNICEF (2021) reveals that adolescents lack a complex understanding of AI and have low awareness of its associated risks, such as personal data misuse and the absence of clear safeguards concerning the use of AI. Previous research has likewise pointed out risks such as algorithmic overdependence (Banker and Khetani, 2019) and exposure to undesirable content (Gómez et al., 2021). All of this indicates the need to pay more attention to young people's practices in and around online environments. Building on these observations, this study seeks to provide new insights into how young people's everyday information practices emerge and take shape through their interactions with recommender systems, and into how the affordances of recommender systems constitute information for young people within their everyday online practices.

The study was guided by the following central research question: How are the affordances of algorithmic recommendations shaping young people's everyday information practices?

Background

Algorithmic recommendations

In this study, recommender systems refer both to programs that attempt to suggest the most suitable content, products, and services to their users (Lu et al., 2015), and to the recommender system elements that are incorporated in online tools and applications with different primary purposes (e.g. search engines and social media). Algorithmic recommender systems were originally developed to help users – often in their role as consumers – to manage rapidly growing catalogues of information, such as movies or music (Seaver, 2019). For contemporary users of the web, algorithmic recommendations are practically unavoidable: recommender systems are no longer just an isolated aspect of the interface of a media streaming platform but have settled into the regular infrastructure of online culture (Seaver, 2019). In this culture, any form of interaction, such as interactions on a social network, the user's browser history or information captured by a smartphone's sensors, will produce data for a recommender system (Seaver, 2019). As an inseparable part of this development, search engines have increasingly started to turn into "suggest engines" or recommender systems, which are supposed to anticipate what a user wants without a search being performed (Haider and Sundin, 2019).

Recommender systems work “on a large scale, mediating between many users and items” (Seaver, 2021a, p. 511). The content people come across, the information they encounter, when they encounter it and how it is presented all imply the background functioning of algorithmic information intermediaries and curators. The most important feature of a present-day recommender system is, in the words of Seaver (2021b, p. 781) to “give people what they want”. More precisely, it is its ability to anticipate what end-users think they want, without users ever needing to express that desire. Current recommender systems are constantly revised through machine learning (ML) processes and as developers respond to metrics and organizational demands (Seaver, 2021b). Search engine algorithms, for instance, are constantly updated through use, and grow and learn from use and from user data to improve their relevance for the individual and their collective development (Haider and Sundin, 2019). Moreover, as AI systems “learn” from their users' cognitive and behavioural patterns, they, in a sense, become reflections of their users (Lee and Joshi, 2020). Furthermore, the profound interconnectedness of different recommender systems, search engines and social media platforms results in a situation where activities on one platform affect information encountered on another.
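
The mechanism Seaver describes can be made concrete with a toy sketch of item-based collaborative filtering, one common family of recommender techniques. This is a minimal illustration with hypothetical data, not a description of any specific platform's algorithm: items that co-occur in users' interaction histories are treated as similar, and a user is offered the unseen items most similar to their own history.

```python
import numpy as np

# Hypothetical implicit-feedback matrix: rows are users, columns are items;
# 1.0 means the user interacted with the item (watched, liked, clicked).
interactions = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 0],
], dtype=float)

def item_similarities(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between the item columns of a user-item matrix."""
    norms = np.linalg.norm(matrix, axis=0)
    norms[norms == 0] = 1.0              # avoid division by zero for unseen items
    normalized = matrix / norms
    return normalized.T @ normalized

def recommend(user: int, matrix: np.ndarray, k: int = 2) -> list[int]:
    """Score unseen items by their similarity to the user's interaction history."""
    scores = item_similarities(matrix) @ matrix[user]
    scores[matrix[user] > 0] = -np.inf   # never re-recommend items already seen
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user=0, matrix=interactions))
```

Production systems replace this static co-occurrence matrix with continuously retrained machine learning models, which is why, as noted above, interactions anywhere in the infrastructure can feed back into what is recommended next.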

People's constant interactions with algorithms and AI systems result in a situation where the information we find online is always – in one way or another – algorithmically framed (Sundin et al., 2017). So-called AI systems shape our experiences of culture and the everyday, and our interactions with information, as we delegate more and more mundane routines, such as information searching and analysis, consumption, or selecting cultural content, to algorithmic functions. Many of the algorithms we encounter have indiscernible intent, yet they have wide-ranging consequences in shaping our everyday knowledge (Hesmondhalgh et al., 2023; Willson, 2017).

Due to their business models, many current social media platforms and search engines prioritize popular content in their application of algorithms for selection and amplification (Striphas, 2015; Willson, 2017), and users are only able to benefit from these modern information mediators if they adopt and develop their own appropriate information practices (Kostagiolas et al., 2015). Drawing on these ideas, in this study, algorithmic recommendations are viewed as relational to human practices and embedded in the social arrangements of everyday life rather than as just technical solutions (Haider and Sundin, 2019).

Within library and information studies (LIS), recent research considering recommender systems has focused mainly on the technical aspects and on the possibilities for recommender systems in different library settings (Collins et al., 2018; De Nart and Tasso, 2014; Jomsri, 2018; Khademizadeh et al., 2022; Rhanoui et al., 2022; Simović, 2018; Yadav and Pervin, 2022), rather than on their role in shaping information practices. However, some recent studies within LIS have focused on AI and algorithms, with major areas of interest being the use and role of search engines in everyday situations (Andersson, 2017, 2021, 2022; Haider and Sundin, 2019, 2022b; Sundin et al., 2017) and information literacy in relation to algorithms or algorithmic literacy (Lloyd, 2019). This study seeks to contribute to this body of research with novel insights concerning the involvement of algorithmic recommendations in young people's information practices.

Young people's everyday information practices in digital environments

The recent studies selected for further discussion in this section share some common ground with this article. These studies have focused on young people and their everyday information practices in digital environments; they share the view that information practices are situated; that is, that they vary and depend on individual needs, on social and societal demands and on technical features. Moreover, they point to the ways in which people adapt the technical features of digital tools to fit their own information practices. These studies have covered young people's health information practices within an anonymous online forum (Hirvonen, 2022), the identity-related information practices of LGBTQ + individuals (Kitzie, 2019), teenagers' information practices on mobile media (Kimm and Boase, 2019), refugees' ICT-mediated information practices (Díaz Andrade and Doolin, 2019) and young people's information practices in library makerspaces (Li, 2021).

Hirvonen (2022) examined the affordances of an anonymous online forum for young people's health information practices. The study indicated how young people use the online forum as a source for peer experiences, opinions, and experience-based advice, and how this is enabled by both the forum's technical features and the associated social practices. According to Hirvonen (2022), the findings highlight the need to recognize less visible forms of action and the constraining nature of affordances.

Kitzie (2019) conducted semi-structured conversational interviews with 30 LGBTQ + individuals about their identity-related information practices and identified three key affordances associated with online technology use: visibility, anonymity, and association. The study shows how its participants tactically leverage technical features to engage in their desired information practices and how they engage in several information practices mediated by various affordances. The findings demonstrate how an information practice approach with an affordance lens can be used to better understand the interrelationship between people and technology, and that affordances need to be envisioned "as varying in degree from enabling to constraining" (Kitzie, 2019, p. 1348). For example, while a blank search box may afford natural language search queries, the visible results may still not be of value to the users, and while anonymity online may allow people to maintain a hidden identity, it in turn constrains them from evaluating information sources (Kitzie, 2019). Similarly, Li (2021) depicts how library makerspaces create technological and material contexts that simultaneously afford and constrain their users.

In their article, Kimm and Boase (2019) describe the ways in which teenagers use mobile media to interact with their personal networks and to stay connected with their active and dormant social ties. The study points out that teens use social media on their own terms; they change their practices as needed and adapt the material means of mobile media to fit their everyday practices. In a similar manner, Díaz Andrade and Doolin's (2019) study of the ICT-mediated information practices of resettled refugees showed that individuals constantly switch between different temporal orientations, depending on their changing needs and goals, and that their information practices vary as their temporal orientations change. These studies suggest also exploring the negative influences of social connections in mobile media (Kimm and Boase, 2019) and further examining processes where users "respond to their environment(s) in light of the perceived properties and functionalities of ICT available to them" (Díaz Andrade and Doolin, 2019, p. 167).

Drawing on these notions from recent research, this study examines the interrelationship between young people's everyday information practices and algorithmic recommendations, with specific focus on their affordances. This study is informed by sociocultural understandings of everyday information practices and affordances, which are presented next in the theoretical framework.

Theoretical framework

In this study, recommender systems with AI components are viewed as cultural tools that make certain actions possible, while they make other actions, if not impossible, at least less likely (Jones, 2020). As such, they are understood as tools with certain affordances. More specifically, the present study focuses on the role of these technologies in the emergence and shaping of everyday-life information practices. Practices are connected to objects and different objects' affordances are central to shaping these practices because they prefigure what can be done and have their own ability to do things (Cox, 2012).

Everyday information practices and information in social practice

This study concentrates on everyday-life information practices that take place in different digital environments, which include services such as search engines and mobile applications. Everyday practices are those practices that have become routine through regular repetition; they are often performed with minimal thought and recede into the background. Everyday practices are also performative in the sense that they give shape to the forms of time and space (Willson, 2017). A simplified example is how a street defined in urban planning takes its occupied form as a street when people use it for walking (Certeau, 1988, p. 117). Everyday information practices refer to the interdependent modes of information seeking, information use and information sharing, which, according to Savolainen (2008), comprise different information activities, such as identifying information sources and judging the value of information. Everyday information practices become meaningful tools for various kinds of everyday projects. Information practices are social in the sense that they originate from interactions with others and, similar to practices in general, they draw on an accumulated stock of knowledge when determining how to proceed in a typical situation (Savolainen, 2008).

Agosto (2019), who has focused on children and young people, views information practices as complicated, multi-step interactions. For her, information practices comprise multiple individual behaviours; they strongly reflect social contexts and are shaped by social rules and human responses. For example, reliance on texts for communication among peers, including their habitual monitoring, interaction and the resulting peer relationship building and maintenance, are all interactions that together constitute an information practice.

In this study, we draw on both Savolainen's and Agosto's understandings of information practices. However, as Cox (2012) has noted, information activities are woven throughout all social practices, and we should pay attention to the information aspect of such practices. To avoid a narrow preoccupation with goal-oriented information seeking, we need to ask what constitutes information for social actors within a given practice and how they find, use, create and share it (Cox, 2012).

Cox (2012) uses the example of family photography to demonstrate “how, within a familiar but not obviously information-related practice, what is understood as information and how it is created, used and shared is shaped by that particular practice, decentring notions of information needs and seeking” (Cox, 2012, p. 178). Cox also emphasizes that while one would hesitate to describe family or hobbyist photography as information practice, we do seek and manage information within those practices, and therefore, it might be “more expressive to think about the role of information in a social practice” (Cox, 2012, p. 185). For the purposes of our study, we use “information practices” both to refer to information practices and to highlight information in social practices.

Sociocultural understanding of affordances

The research problem is approached from a sociocultural viewpoint. Thus, information and information practices are positioned as contextually constructed, with an emphasis on their emergence through social interactions within a community. Instead of trying to fit practices into previously established models, a sociocultural perspective emphasizes how practices appear in different communities with their respective activities (Hicks, 2018).

From a sociocultural perspective, we participate in various activities by communicating through different cultural tools, and to participate in a certain activity, we need to learn its context-specific language. The sociocultural perspective underlines the ability to use physical and symbolic tools to participate in specific activities. Practices are shaped through interactions between people and tools, and their meanings vary within different contexts. Hence, activities should be studied in relation to the tools used to carry out these activities (Limberg et al., 2012). In accordance with this thinking, this study draws attention to the ways in which young people's information practices are shaped by their interactions with recommender systems. Furthermore, this study adapts the mediated action perspective (MAP) on affordances, as proposed by Kaptelinin and Nardi (2012), which is also concerned with how humans act in different cultural environments.

The original concept of "affordances" was proposed by Gibson (2015) as early as the 1970s and was drawn from his ecological approach to perception. Gibson defined an affordance as "what the environment offers the animal, what it provides or furnishes, either for good or ill" (Gibson, 2015, p. 119). For Gibson, affordances are perceived as possible actions for animals in certain environments, determined by the properties of the environment and by the action capabilities of the animal. The MAP retains the understanding of affordances as action possibilities offered by the environment and as a relational property between an actor and the environment, as originally suggested by Gibson, but it goes further by adopting a sociocultural approach rather than Gibson's ecological psychology approach, which is concerned with how animals act in their natural habitats (Kaptelinin and Nardi, 2012).

In the MAP, affordances are understood as emerging from interactions between actors, their mediational means (cultural tools) and their environments (Kaptelinin and Nardi, 2012). Therefore, when an actor switches to a different tool, the action capabilities of the actor can change quickly. Another difference between these two approaches lies in the way in which they view the relationship between affordances and needs. For Gibson, an object's affordances do not depend on the actor's needs, but in the MAP, tools are viewed as dynamic and adjustable to a situational need. Recognizing the predefined purposes of the various available tools may influence how actors experience their own needs. The tools also contain something comparable to needs, and for them to work properly, the actor should act in a certain way to meet the demands of the particular tool (Kaptelinin and Nardi, 2012). Hence, the affordances of cultural tools allow us to perform certain actions, but at the same time, those affordances also shape our actions, our social relationships, and the way we enact our identities (Jones, 2020). Furthermore, Zhao et al. (2020) propose a shift in focus away from the affordances of technology and towards the affordances of information practices as enacted through technology within a sociocultural environment.

Material and method

The research material for this exploratory study was collected using a qualitative approach and following a structured interview guide. The material consisted of transcriptions of recorded interviews with 20 Finnish 9th grade students. During the interviews, the participants were asked questions about their attitudes towards AI, their conceptions of AI and their usage of AI-based applications and software.

The interviews were conducted as part of a multidisciplinary research project [anonymized]. The participants were taking part in a one-week work experience programme at the university. This work experience is part of the national curriculum, under which students apply to a workplace based on their own interests. The participants were recruited from their respective schools following a presentation of the work experience opportunity by four researchers representing different fields. All students who expressed interest were then invited to take part in the work experience programme and in the study. Taking part in the interviews was completely voluntary, and those students who agreed to participate, together with their legal guardians, were asked to sign consent forms.

The interview guide was prepared by two of the authors in collaboration with members of the research project team. Six members of the research project conducted individual interviews and recorded them with a digital recorder. In line with the interview guide, the issues addressed included AI in everyday apps (apps that young people use most often and their understanding of how AI might work in the apps), AI and personalized content (recommender systems (RS), trust in RSs and search engines), and AI in finding content (search engines and apps and how young people utilize them to find content or information). We also touched upon AI-generated content, such as deep fakes, and future ideas for researching everyday AI use among young people. The interviews lasted between 23 and 50 min.

The interview recordings were first listened to carefully and thoroughly, and they were then transcribed into text with the aid of Microsoft Word's (Office) dictation tool. The interview material was anonymized during the transcription process and the respondents were represented by using the pseudonyms (A1–G21) that they had been assigned during their work practice week at the university. The data analysis followed the strategies of qualitative content analysis with inductive reasoning (Zhang and Wildemuth, 2009), with a focus on information practices and the affordances of recommender systems. The material was condensed and coded thematically in QSR NVivo. The practices and affordances drawn from the interviews were categorized according to the different information practices detected, as described by the participants, and according to positive and negative notions held about the features of different online recommender systems. Therefore, the analysis focused on the information practice-related affordances of the features of various recommender systems, how the participants described those features, whether they performed the information-related activities themselves or delegated them to the system, and the extent to which the recommendations satisfied their needs as users. The analysis was mainly guided by theories of information practice (Agosto, 2019; Cox, 2012; Savolainen, 2008), infrastructural awareness and interconnectedness (Haider and Sundin, 2019, 2022a), and MAP (Kaptelinin and Nardi, 2012). Theories on relevance (Saracevic, 2007), and on the varying degrees of affordances (Evans et al., 2017; Kitzie, 2019) were also considered during the final stages of the analysis and the writing up of the findings.

Findings

The presentation of the findings is organized according to two major themes drawn from the interview analysis: enabling not searching, inviting high trust, which highlights how the affordances of algorithmic recommendations enable the delegation of search to a recommender system and, at the same time, invite trust in the system, and constraining finding, discouraging diversity, which focuses on the constraining degree of affordances and the breakdowns associated with algorithmic recommendations. Both themes are shaped by the practices and affordances they encompass. Rather than listing individual affordances, the findings are presented according to their enabling or constraining nature, to better demonstrate how everyday information practices are shaped through interactions with algorithmic recommendations.

Enabling not searching, inviting high trust

Recommender systems were generally viewed as helpful and time-saving tools. They were frequently used to find content without entering actual search queries or placing much significance on the results. When asked explicitly during the interviews whether the participants found the applications to be good at suggesting content, one of them explained:

Well, it makes your life easier, definitely, ‘cause you don't have to end up having to find the things that you like. You don't have to go around YouTube search bar and looking for the videos that you like, because then all of the recommended ones are there. (B5)

In this excerpt, the participant's notion that using the bare minimum of effort when searching makes life "definitely easier" is interesting. In the same vein, another participant described recommender systems simply as "shortcuts" (G19). A couple of other participants shared the view that recommender systems saved them from having to search for content specifically:

I find it very good, so I don't have to waste time searching videos all the time and like this. And it's very like helpful; it saves time that you don't have to all the time be searching. (A1)

You get the content that you like easily. You do not have to specifically search it or like think about what you want to get. (D10)

In these excerpts, it is important to note that performing a search was experienced as “wasting time”. The latter quote reveals how thinking and decision making can be delegated to a recommender system, thereby significantly shifting the decision-making process to the algorithms owned by private companies and out of people's own hands (Striphas, 2015; Werner, 2020). While these affordances enable the users to delegate actions to the recommender system at hand, at the same time they invite this type of behaviour from their users (Withagen et al., 2017), rather than encouraging them to perform the actions themselves since they do not specifically have to (Davis, 2020; Davis and Chouinard, 2016). Moreover, in the participants' descriptions, their practices of search were to some extent reduced to finding answers to specific questions. One participant described an information search as follows:

Probably just search, like Google or Safari, I think, and it can already suggest answers on what you find; it has an answer always. It's very easy to find answers now. -- for example, if it's something school, like something big project that I need to find something, I just search like from YouTube but if it was just a small [thing] I want to find about, related to school, I would probably google it and it has the answers straight away. (A1)

A couple of things are important to note from this excerpt. First, the participant pointed out that Google and Safari already suggested answers before an actual search was even performed, highlighting how search engines and applications have started to develop into "suggest engines", where the system is supposed to anticipate what a user needs or wants without a search query being entered (Haider and Sundin, 2019), and how deeply recommender system elements are merged into these applications. The reduction of searching to finding answers could be explained by the way contemporary search engines are designed to function; they provide users with answers rather than merely pointing to documents that contain the answers.

Second, the participant speaks synonymously about Google and Safari, further demonstrating how the distinction between everyday apps and search engines is blurred, with the latter being a browser rather than a general-purpose search engine. Also noteworthy is how YouTube is used for finding information that relates to extensive school projects, while Google is preferred for smaller-scale searches. Interestingly, previous research has pinpointed YouTube, which can be described as a combination of search engine and social media, as the main search engine for many teens (Andersson, 2021, 2022; Pires et al., 2021).
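
One way to picture the "suggest engine" logic described above is prefix completion ranked by aggregate query popularity. Real systems also draw on personal history, location and trending signals, none of which are modelled here; the sketch below uses hypothetical data only to show why suggestions can surface before any query is submitted.

```python
# Hypothetical aggregate query log: query -> how often it has been searched.
QUERY_LOG = {
    "weather helsinki": 120,
    "weather oulu": 95,
    "world cup results": 310,
    "ww2 timeline": 40,
}

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return the k most popular logged queries starting with the typed prefix."""
    matches = [(q, n) for q, n in QUERY_LOG.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda pair: -pair[1])[:k]]

print(suggest("w"))  # suggestions surface before the user submits anything
```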

Since searching has become so routine and ubiquitous, it is hardly recognized as performing a search at all, which may account for the lack of reflection on the truthfulness of the information, provided a quick answer is available (Haider and Sundin, 2019). Unsurprisingly, at least one participant did not pay any attention to assessing the search results:

I just click whatever pops up first. Like if I search for Google. Don't really pay that much attention to it. (G21)

This notion is consistent with earlier research in which the search engine was seen as conducting the critical assessment of sources rather than the person interacting with the system (Haider and Sundin, 2022b), and where the search process was considered unproblematic and without any need for reflection and critical attention (Andersson, 2022). Digital intermediaries are seen to be encouraging this inaction, where “a combination of low agency and high trust would create [a stereotype of] naïve evaluators, who tend to believe in what they find or encounter, without considering themselves as being actively involved in making such a judgement” (Haider and Sundin, 2022b, p. 1187). Most of the participants seemed to fit this stereotype of a “naïve evaluator”, instead of a reflective information seeker (Savolainen, 2008). When searching is already frequently seen as just a simple “look-up” search, people tend to outsource any evaluation of the information to the recommender system or to the search engine algorithms, thus becoming dependent on their rankings and basing their trust on the first thing they encounter online (Haider and Sundin, 2019). In this way, the current information infrastructure grants users the affordance of not having to evaluate search results for themselves, at least not actively. However, the participants went on to say that they might eventually do further research if the material they encountered was somewhat questionable:

If it's sceptical, I would do maybe a bit more research. I mean, to see if it's like true or like. -- I mean, there's a lot of feedback about people, like different people. Then they say like it's true or like. If it has like a positive feedback, then I would know that it's reliable. So, then I would trust that source. And then if … if that source is like negative, then I would like stay away from them, 'cause then I would know like they're not trustable. -- On YouTube or like video media, it can depend if you trust a person or or like the person posting the video is like … a stranger. -- I guess I would trust more, like. Well, the written information, they usually have like sources. Then, I can like check the sources if they're like reliable. (G21)

Despite the participant's potential for additional research, a lot of trust is still placed in the system's evaluation skills and in other people's second-hand knowledge (Wilson, 1983). To trust a source that someone refers to, the user must also trust the system that is used for referencing the second-hand knowledge (Haider and Sundin, 2022b). Consistent with this point, when asked if AI applications could help solve some of the problems in the world or in everyday life, one participant stated:

They can be programmed into giving you the correct solution or the correct source and stuff like this. (G20)

In addition to this comment, another participant also seemed to have a rather trusting notion about the possibilities for AI applications:

Like young young people, like how it helps us now, for like suggesting content, like generating it and aiding in like writing and making our own ideas and stuff. (D10)

In addition to trusting other people when accepting something as knowledge, we also place our trust in technologies, systems, and institutions (Haider and Sundin, 2019). When using search engines, we trust the capacity of the search engine to deliver relevant content, rather than trusting only the individuals who produce the links to the content provided on the search results page (Haider and Sundin, 2019). When the participants were asked if they trusted recommender systems and the media content they received, they pointed to the familiarity of the systems and sources as grounds for trusting them:

Well, they're by like trustable companies, like big companies, like Google. And YouTube, so like. Everyone like trust them. (G21)

Some sources lie, but usually the bigger sources like don't lie -- So I trust them. I never come across something I’ve realized that it's like fake news or something. -- If I search something, for example from a website that is not used so much, there might be some fake information much. (A1)

Of interest in the first quote is how the participant dismisses their own responsibility in the trust issue since, according to the participant, everyone trusts the big commercial platforms. Likewise, in the second quote, the participant holds that well-known sources do not lie, while less-visited websites might contain more fake information. Judging from the interviews, this trust in familiarity extended to commercial companies and online sources alike. The notion of the trustworthiness of well-known companies came up again when the participants were asked how they searched for information and why they used specific sources:

If the company is like well-known, or like you can know already know that it's like. Reliable. (G21)

I think Google is one of the most trustworthy search engines because it's one of the biggest and it's the most … I think developed on. (C8)

Constraining finding, discouraging diversity

Despite enabling the users to find content easily, recommender systems seemed to constrain the participants from finding new and relevant content. As earlier research has argued, affordances can also be constraining (Evans et al., 2017; Kitzie, 2019). When asked what was good about personalized content, one participant stated:

I mean, suggests you different stuff. Gives you ideas about stuff you should buy. (A3)

In addition to providing content and answers, recommender systems were described as tools for advertising products to consumers. As content providers and advertisers are often the same actors (Haider and Sundin, 2019, p. 59), it should come as no surprise that commercial platforms treat their users as consumers. This is symbiotic, since at least the participants in this study did not seem to hesitate to accept their role as consumers, and they did not consider ads problematic. Previous research has pointed to concerns over a lack of transparency and increasing manipulation of the user's behaviour in favour of the commercial interests of various search engines and social media platforms (Koene et al., 2015). Yet, many of the participants found the ads and recommendations on everyday apps helpful, since they recommended stuff that they might not otherwise have found:

There's pros and cons definitely, because I mean pros, the ads you can find stuff that you like, that you probably would never have found, unless you just make sense of research. (B5)

You can find stuff you maybe weren't sure you could find or stuff and you couldn't find and you can find it through them, those apps. (A2)

Despite these positive expressions from the participants, this is problematic, especially given that previous research has considered online advertising a risk linked to the commodification of childhood, in which children are treated as young consumers (Gómez et al., 2021), and that recommender systems may lead their users to become overdependent on algorithms, even when the generated recommendations are inferior product offerings (Banker and Khetani, 2019). This dependence poses a risk to the well-being of users and may propagate system biases that affect other users as well (Banker and Khetani, 2019). While advertisements were found to be useful to some degree, they also seemed to constrain the users from accessing a video, or at least made them wait for access:

They are pretty annoying sometimes, advertisements. Want to watch a YouTube video and it takes one minute to watch the advertisements. (A3)

There are, of course, pop-up ads. That always happens. We've go to a website, that's pop-up ads. And you go there multiple times, there's gonna be like pop-up ads happen—they pop up on Instagram and Snapchat, YouTube sometimes. (B5)

This collision between the users and the system can be described in terms of situational and affective relevance (Saracevic, 2007). According to Saracevic (2007), affective relevance manifests the relation between the information and the users' intents, motivations, emotions and goals, and it can be argued to underlie all other manifestations of relevance, particularly situational relevance. Situational relevance manifests the relationship between the information and the situation or problem at hand; it is inferred from the usefulness and appropriateness of the information in resolving the problem and may extend to social and cultural factors as well (Saracevic, 2007). It can be argued that ads lack situational relevance when the user is trying to do something else, as in this case, watch a video.

The above two quotes can also be described in terms of “frictions of relevance”, a notion suggested by Haider and Sundin (2019, p. 57, p. 64) and used to describe specific forms of infrastructural breakdown, which make the constituents of the information infrastructure noticeable. More specifically, from the user's point of view, the depicted situation constitutes a friction of relevance that makes the information infrastructure noticeable through dissonance between the individual's needs and the interests of other stakeholders. Ads circulating across and between different platforms exemplify this dissonance and cause frequent disturbances in the information infrastructure. Moreover, recommender systems appeared to constrain users from finding new or relevant content, making the participants' experiences repetitive and mundane. Two participants reflected on the recommendations as follows:

But like YouTube, their ads most of the time are really repetitive. (B5)

Recommended things can also be boring because they … people want to see experience, new, new things, new videos, I’d say. (B6)

The latter participant did not elaborate on whether the recommended content consisted of the same things popping up repeatedly in search engines and feeds or whether it was merely similar to earlier content the participant had watched or come across. It was, however, implied that the person thought the recommendations somehow constrained people from experiencing and encountering new things or new content. The two quotes given above suggest that users might be dissatisfied with the low diversity and lack of relevance of the algorithmic recommendations. This indicates that while the participants did not appear to actively evaluate the truthfulness of the information, as pointed out in the previous section, they might at least occasionally evaluate the diversity and the affective and situational relevance of the recommendations they receive.

Earlier research has pointed out that even though algorithmic recommendations have the benefit of satisfying users' need for effective similarity in the short term, they are associated with lower levels of diversity in the long term (Anderson et al., 2020). One type of infrastructural breakdown occurs in a situation where an interface's affordances seem to make certain types of searches difficult, if not impossible. One participant described such a breakdown when they were asked if some of the everyday applications could be regarded as intelligent:

Right now, I don't really like the algorithm in some apps. Like, for example, on YouTube, I barely … I can barely find like new videos or new creators to watch, because I don't know, the algorithm changed. And they only show creators, which I’ve already subscribed to. Which is kinda sad. (F17)

Since recommender systems are driven by the notion of similarity, to some extent their affordances enable users to find similar or relevant content, but in turn, they reinforce content based on previous choices, thus constraining their users from experiencing, or giving them fewer opportunities to discover, new content (Anderson et al., 2020). As such, the affordances also discourage (Davis, 2020; Davis and Chouinard, 2016) the users from obtaining more diverse recommendations. While it is possible to obtain new and more heterogeneous recommendations, getting them is not seamless and might require creativity and technical savviness from the users (Davis, 2020). While the aim, at least in the case of Spotify, is to personalize music listening and guide users to music they will enjoy, at the same time, it reinforces existing structures and patterns of power by promoting similarity (Werner, 2020). By the nature of their design, recommender systems also risk isolating their users from exposure to different viewpoints, which can have negative effects on social utility and the normal functioning of public debate (Milano et al., 2020).
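
This narrowing dynamic can be illustrated with a toy simulation of the feedback loop, under the simplifying assumption that a recommender always serves the item most similar to the user's consumption profile and the user always accepts it. The two-dimensional "taste space" and all numbers are hypothetical; the point is only that similarity-driven recommendation concentrates consumption in a small region of the catalogue.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical catalogue: 200 items embedded in a 2-D taste space.
catalogue = rng.normal(size=(200, 2))
history = [0]                             # the user starts from a single item

for _ in range(30):                       # thirty rounds of "watch what is recommended"
    profile = catalogue[history].mean(axis=0)          # centre of past choices
    dists = np.linalg.norm(catalogue - profile, axis=1)
    dists[history] = np.inf                            # skip already-seen items
    history.append(int(np.argmin(dists)))              # consume the most similar item

consumed = catalogue[history]
print("share of catalogue reached:", len(history) / len(catalogue))
print("spread of consumed items: ", consumed.std(axis=0).round(2))
print("spread of full catalogue: ", catalogue.std(axis=0).round(2))
```

Under these assumptions, the consumed items cluster tightly around the starting point and spread far less widely than the catalogue as a whole, which is one way to read the participant's complaint above about only being shown already-subscribed creators.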

The breakdown described by participant F17 demonstrates how affordances can vary from enabling to constraining (Evans et al., 2017; Kitzie, 2019). It also implies that the participants seem to notice the algorithms when these do not match up to their expectations. Taking notice of algorithms is often triggered by changes that affect the user experience or the functionality of a service (Haider and Sundin, 2019). In this case, updates to recommender algorithms seem to have resulted in a situation in which the users felt that the system had failed to perform its primary task, that is, to provide its users with relevant content. However, the different ways in which algorithmic systems are imagined have implications for how they are understood, and the situation may also be perceived as resulting from a failure to enter a search query. Since any interaction with the system shapes future recommendations (Haider and Sundin, 2022a), performing a search would be likely to produce new content in the search results. Judging by the same logic, the participant did not appear to participate actively in this interaction. Likewise, users' previous searches and interactions with an algorithmic system might result in situations where a platform recommends or provides them with content that they did not want to see, depending on their situational mood:

Let's say, if you're scrolling TikTok and you're sad, you don't want to be seeing say like, like happy videos because like you just maybe like like feel sad for the moment, I guess … (B6)

The above quote illustrates how algorithmic recommendations may lack affective and situational relevance. It also implies how recommender systems, much like search engines, are embedded in various social practices in such a way that avoiding emotional distress may require a decision to remove them (Haider and Sundin, 2019, p. 87). To provide users with content that is relevant and suitable for their situational mood, recommender systems remain strongly reliant on user interaction and feedback.

Discussion and conclusion

This study provides novel insights into the emergence and shaping of everyday information practices through interactions with algorithmic recommendations. Specifically, it improves our understanding of the role played by AI-based algorithmic recommender systems in the everyday information practices of young people. Through interviews with young people, the study addressed how they utilized different search engines and apps to find content or information, how they experienced personalized content and recommendations, and the extent to which they trusted algorithmic recommendations.

A key finding of the study is that the current affordances of algorithmic recommendations enable users to engage in more passive practices in comparison to active search and evaluation practices. In other words, search and evaluation functions tend to be delegated to the recommender systems. The findings also highlight the high degree of trust that users place in these algorithmic recommendations, sometimes dismissing their own responsibility for determining their trustworthiness, for example, in the case of big commercial platforms. It is also noteworthy how search is often reduced to a simple "look-up" search; at least, such a description was dominant in the accounts of our interview participants. While this reduction of "search" to mere "look-up" is consistent with some earlier research (Andersson, 2022; Haider and Sundin, 2019, 2022a), and is thus not completely unexpected, how young people integrate these tools into their everyday lives is still of significant interest. However, what previous research has not explicitly noted is the apparent movement from the active to the passive in youth information practices and how the affordances of the technologies further shape and reinforce this development. It should also be noted how the young people created meaning from the recommended content they encountered or, conversely, from content they could not encounter.

As recommender system affordances do not invite users to explicitly search for content or items, their increasing integration into many platforms may be shaping the practice of searching into one of not searching. There are already situations in which the recommender system acts on behalf of the user, such as a music recommender system choosing a song and playing it without consulting the user (Jameson et al., 2015, p. 612), but for the practices of search and evaluation, this is a somewhat new turn. This practice is performative in the sense that it shapes the very space where the online search is performed (Certeau, 1988; Willson, 2017), although the users' role in this shaping seems to be passive rather than active. It can be argued that this development in the practice of searching pushes search engines even further towards being mere "suggest engines".

Judging by our findings, and perhaps unsurprisingly, most participants did not seem to make a strong distinction between recommender systems and search engines, or even web browser applications, indicating how their differences have become almost indistinguishable. For example, YouTube and Safari were referred to as search engines, although that may not be their original function. This indicates how incorporating algorithmic recommendations into various applications and online platforms may further blur the distinctions between different applications and their functions. This suggests that infrastructural awareness must be understood and researched as a cross-platform phenomenon and that conceptualizations of information literacy must draw attention to algorithmic information infrastructures. People have an idea of how algorithms are involved in society's information infrastructure. While their reactions to changes in the algorithms may not be based on a correct understanding of how the algorithms work, these reactions still contribute to their infrastructural meaning-making (Haider and Sundin, 2019, 2022a, b).

Naturally, these findings do not come without limitations. First and foremost, this is an exploratory study, and one limitation is related to the method. The interview guide was originally developed with a wider interest in AI applications in general, rather than focusing only on recommender systems and algorithmic recommendations. Thus, material addressing algorithmic recommendations is often embedded in conversations about other systems and phenomena. Second, the interview participants were of the same age and from the same geographical area, and thus did not provide a comprehensive picture of youth information practices. When considering future research, the perspectives of the youth themselves need to be addressed, for example, by involving young people as co-researchers so as to make more sense of their practices, as Agosto (2019) has pointed out. The findings further underline the need for future research to attend to information evaluation and devices in use, with a focus on everyday situations, very much as Andersson (2022) has argued. While the focus of this article was on young people and the need for youth perspectives is noted, research into these practices is needed across all age groups (Andersson, 2022).

At the time of the interviews, some of the participants viewed AI as an aid for writing and coming up with their own ideas. They imagined features that are now part of the current possibilities of some AI tools, most notably ChatGPT (Truly, 2023). DuckDuckGo's recently released DuckAssist, which scans Wikipedia and Britannica to generate answers to users' questions (McAuliffe, 2023; Weinberg, 2023), is another application with features that some of our interview participants envisioned. These types of AI-supported tools might further reinforce situations in which users are not encouraged to evaluate the trustworthiness of sources themselves. While the affordances of these and similar AI-based applications do not inhibit source evaluation, they do not encourage it either. The findings have implications for both individuals and policymakers. For individuals, new competencies and practices should be developed to avoid overdependence on algorithms and to better understand and recognize possible risks related to algorithmic recommendations and algorithmically generated content. Since individuals have little or no control over how the information infrastructure is constructed, policymakers play a significant role in improving governance over algorithms and AI to steer their development in a direction that supports and encourages users' information evaluation (Hirvonen et al., 2023). Overall, recent developments in AI, and in how AI is debated in the media, highlight the need for further research in this area as steps have been taken towards these systems becoming more powerful tools in mass-producing content (Floridi and Chiriatti, 2020).

This turn in the development of AI systems further shapes the current information infrastructure as infrastructural arrangements and conditions enable certain types of information to exist in a certain way (Haider and Sundin, 2019, p. 143). With the added complexity of being unable to distinguish between human and synthetic sources, AI can develop far more unpredictable consequences and involves serious ethical concerns (Floridi and Chiriatti, 2020). Furthermore, as AI systems (Rudolph et al., 2023) are trained on human-created content, they can also reflect humanity's worst tendencies, such as endorsement of racist stereotypes (Floridi and Chiriatti, 2020). To sum up, recommender systems that incorporate AI elements clearly have the potential for making life easier – at least for some – by suggesting relevant content or paths to follow, as imagined by one participant in this study. However, this also requires people who use them or are otherwise exposed to them to develop new competencies to navigate these systems successfully, and what is more, to navigate society through such systems.

References

Agosto, D.E. (2019), “Thoughts about the past, present and future of research in youth information behaviors and practices”, Information and Learning Sciences, Vol. 120 Nos 1/2, pp. 108-118, doi: 10.1108/ILS-09-2018-0096.

Anderson, A., Maystre, L., Anderson, I., Mehrotra, R. and Lalmas, M. (2020), "Algorithmic effects on the diversity of consumption on Spotify", Proceedings of The Web Conference 2020, New York, NY, Association for Computing Machinery, pp. 2155-2165, doi: 10.1145/3366423.3380281.

Andersson, C. (2017), “‘Google is not fun’: an investigation of how Swedish teenagers frame online searching”, Journal of Documentation, Vol. 73 No. 6, pp. 1244-1260, doi: 10.1108/JD-03-2017-0048.

Andersson, C. (2021), “Performing search: search engines and mobile devices in the everyday life of young people”, PhD Thesis, Department of Arts and Cultural Sciences.

Andersson, C. (2022), “Smartphones and online search: shifting frames in the everyday life of young people”, Information and Learning Sciences, Vol. 123 Nos 7/8, pp. 351-370, doi: 10.1108/ILS-03-2022-0025.

Banker, S. and Khetani, S. (2019), “Algorithm overdependence: how the use of algorithmic recommendation systems can increase risks to consumer well-being”, Journal of Public Policy and Marketing, Vol. 38 No. 4, pp. 500-515, doi: 10.1177/0743915619858057.

Certeau, M. de (1988), The Practice of Everyday Life, translated by Rendall, S., University of California Press, Berkeley, CA.

Collins, A., Tkaczyk, D., Aizawa, A. and Beel, J. (2018), “Position bias in recommender systems for digital libraries”, in Chowdhury, G., McLeod, J., Gillet, V. and Willett, P. (Eds), Transforming Digital Worlds, Springer International Publishing, Cham, Vol. 10766, pp. 335-344, doi: 10.1007/978-3-319-78105-1_37.

Council of Europe (2020), “Artificial intelligence and its impact on young people”, Seminar Report, European Youth Centre Strasbourg, 4-6 December 2019.

Cox, A.M. (2012), “An exploration of the practice approach and its place in information science”, Journal of Information Science, Vol. 38 No. 2, pp. 176-188, doi: 10.1177/0165551511435881.

Davis, J.L. (2020), How Artifacts Afford: The Power and Politics of Everyday Things, The MIT Press, Cambridge, MA.

Davis, J.L. and Chouinard, J.B. (2016), “Theorizing affordances: from request to refuse”, Bulletin of Science, Technology and Society, Vol. 36 No. 4, pp. 241-248, doi: 10.1177/0270467617714944.

De Nart, D. and Tasso, C. (2014), “A personalized concept-driven recommender system for scientific libraries”, Procedia Computer Science, Vol. 38, pp. 84-91, doi: 10.1016/j.procs.2014.10.015.

Díaz Andrade, A. and Doolin, B. (2019), “Temporal enactment of resettled refugees' ICT‐mediated information practices”, Information Systems Journal, Vol. 29 No. 1, pp. 145-174, doi: 10.1111/isj.12189.

Evans, S.K., Pearce, K.E., Vitak, J. and Treem, J.W. (2017), “Explicating affordances: a conceptual framework for understanding affordances in communication research”, Journal of Computer-Mediated Communication, Vol. 22 No. 1, pp. 35-52, doi: 10.1111/jcc4.12180.

Floridi, L. and Chiriatti, M. (2020), “GPT-3: its nature, scope, limits, and consequences”, Minds and Machines, Vol. 30 No. 4, pp. 681-694, doi: 10.1007/s11023-020-09548-1.

Gibson, J.J. (2015), The Ecological Approach to Visual Perception, Psychology Press, New York.

Gómez, E., Charisi, V. and Chaudron, S. (2021), “Evaluating recommender systems with and for children: towards a multi-perspective framework”, in Zangerle, E., Bauer, C. and Said, A. (Eds), Proceedings of the Perspectives on the Evaluation of Recommender Systems Workshop 2021 Co-Located with the 15th ACM Conference on Recommender Systems (RecSys 2021), Amsterdam, September 25, 2021, Vol. 2955, CEUR-WS.org.

Haider, J. and Sundin, O. (2019), Invisible Search and Online Search Engines: The Ubiquity of Search in Everyday Life, 1st ed., Routledge, London, doi: 10.4324/9780429448546.

Haider, J. and Sundin, O. (2022a), Paradoxes of Media and Information Literacy: The Crisis of Information, 1st ed., Routledge, London, doi: 10.4324/9781003163237.

Haider, J. and Sundin, O. (2022b), “Information literacy challenges in digital culture: conflicting engagements of trust and doubt”, Information, Communication and Society, Vol. 25 No. 8, pp. 1176-1191, doi: 10.1080/1369118X.2020.1851389.

Hesmondhalgh, D., Campos Valverde, R., Kaye, D.B.V. and Li, Z. (2023), “The impact of algorithmically driven recommendation systems on music consumption and production: a literature review”, UK Centre for Data Ethics and Innovation Reports, 9 February, SSRN, available at: https://ssrn.com/abstract=4365916

Hicks, A. (2018), “Making the case for a sociocultural perspective on information literacy”, in Nicholson, K.P. and Seale, M. (Eds), The Politics of Theory and the Practice of Critical Librarianship, Library Juice Press, Sacramento, CA, pp. 69-85.

Hirvonen, N. (2022), “Nameless strangers, similar others: the affordances of a young people's anonymous online forum for health information practices”, Journal of Documentation, Vol. 78 No. 7, pp. 506-527, doi: 10.1108/JD-09-2021-0192.

Hirvonen, N., Jylhä, V., Lao, Y. and Larsson, S. (2023), “Artificial intelligence in the information ecosystem: affordances for everyday information seeking”, Journal of the Association for Information Science and Technology, pp. 1-14, doi: 10.1002/asi.24860.

Jameson, A., Willemsen, M.C., Felfernig, A., de Gemmis, M., Lops, P., Semeraro, G. and Chen, L. (2015), “Human decision making and recommender systems”, in Ricci, F., Rokach, L. and Shapira, B. (Eds), Recommender Systems Handbook, Springer US, Boston, MA, pp. 611-648, doi: 10.1007/978-1-4899-7637-6_18.

Jomsri, P. (2018), “FUCL mining technique for book recommender system in library service”, Procedia Manufacturing, Vol. 22, pp. 550-557, doi: 10.1016/j.promfg.2018.03.081.

Jones, R. (2020), “Mediated discourse analysis”, in Adolphs, S. and Knight, D. (Eds), The Routledge Handbook of English Language and Digital Humanities, Routledge, Abingdon, Oxon, New York, NY, pp. 202-219.

Kaptelinin, V. and Nardi, B. (2012), “Affordances in HCI: toward a mediated action perspective”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, Association for Computing Machinery, pp. 967-976, doi: 10.1145/2207676.2208541.

Khademizadeh, S., Nematollahi, Z. and Danesh, F. (2022), “Analysis of book circulation data and a book recommendation system in academic libraries using data mining techniques”, Library and Information Science Research, Vol. 44 No. 4, 101191, doi: 10.1016/j.lisr.2022.101191.

Kimm, J. and Boase, J. (2019), “Teens' everyday information practices on mobile media: ‘catching up’ and ‘reaching out’”, Proceedings of the Association for Information Science and Technology, Vol. 56 No. 1, pp. 137-146, doi: 10.1002/pra2.12.

Kitzie, V. (2019), “‘That looks like me or something i can do’: affordances and constraints in the online identity work of US LGBTQ+ millennials”, Journal of the Association for Information Science and Technology, Vol. 70 No. 12, pp. 1340-1351, doi: 10.1002/asi.24217.

Koene, A., Perez, E., Carter, C.J., Statache, R., Adolphs, S., O’Malley, C., Rodden, T. and McAuley, D. (2015), “Ethics of personalized information filtering”, in Tiropanis, T., Vakali, A., Sartori, L. and Burnap, P. (Eds), Internet Science, Springer International Publishing, Cham, Vol. 9089, pp. 123-132.

Kostagiolas, P.A., Lavranos, C., Korfiatis, N., Papadatos, J. and Papavlasopoulos, S. (2015), “Music, musicians and information seeking behaviour: a case study on a community concert band”, Journal of Documentation, Vol. 71 No. 1, pp. 3-24, doi: 10.1108/JD-07-2013-0083.

Lee, K. and Joshi, K. (2020), “Understanding the role of cultural context and user interaction in artificial intelligence based systems”, Journal of Global Information Technology Management, Vol. 23 No. 3, pp. 171-175, doi: 10.1080/1097198X.2020.1794131.

Li, X. (2021), “Young people's information practices in library makerspaces”, Journal of the Association for Information Science and Technology, Vol. 72 No. 6, pp. 744-758, doi: 10.1002/asi.24442.

Limberg, L., Sundin, O. and Talja, S. (2012), “Three theoretical perspectives on information literacy”, Human IT, Vol. 11 No. 2, pp. 93-130.

Lloyd, A. (2019), “Chasing Frankenstein's monster: information literacy in the black box society”, Journal of Documentation, Vol. 75 No. 6, pp. 1475-1485, doi: 10.1108/JD-02-2019-0035.

Lu, J., Wu, D., Mao, M., Wang, W. and Zhang, G. (2015), “Recommender system application developments: a survey”, Decision Support Systems, Vol. 74, pp. 12-32, doi: 10.1016/j.dss.2015.03.008.

McAuliffe, Z. (2023), “Try DuckDuckGo's new AI feature, DuckAssist, now for free”, CNET, 15 March, available at: https://www.cnet.com/tech/services-and-software/try-duckduckgos-new-ai-feature-duckassist-now-for-free/ (accessed 23 March 2023).

Milano, S., Taddeo, M. and Floridi, L. (2020), “Recommender systems and their ethical challenges”, AI and Society, Vol. 35 No. 4, pp. 957-967, doi: 10.1007/s00146-020-00950-y.

Pires, F., Masanet, M.-J. and Scolari, C.A. (2021), “What are teens doing with YouTube? Practices, uses and metaphors of the most popular audio-visual platform”, Information, Communication and Society, Vol. 24 No. 9, pp. 1175-1191, doi: 10.1080/1369118X.2019.1672766.

Rhanoui, M., Mikram, M., Yousfi, S., Kasmi, A. and Zoubeidi, N. (2022), “A hybrid recommender system for patron driven library acquisition and weeding”, Journal of King Saud University - Computer and Information Sciences, Vol. 34 No. 6, pp. 2809-2819, doi: 10.1016/j.jksuci.2020.10.017.

Rudolph, J., Tan, S. and Tan, S. (2023), “ChatGPT: bullshit spewer or the end of traditional assessments in higher education?”, Journal of Applied Learning and Teaching, Vol. 6 No. 1, doi: 10.37074/jalt.2023.6.1.9.

Saracevic, T. (2007), “Relevance: a review of the literature and a framework for thinking on the notion in information science. Part II: nature and manifestations of relevance”, Journal of the American Society for Information Science and Technology, Vol. 58 No. 13, pp. 1915-1933, doi: 10.1002/asi.20682.

Savolainen, R. (2008), Everyday Information Practices: A Social Phenomenological Perspective, Scarecrow Press, Lanham, MD.

Seaver, N. (2019), “Captivating algorithms: recommender systems as traps”, Journal of Material Culture, Vol. 24 No. 4, pp. 421-436, doi: 10.1177/1359183518820366.

Seaver, N. (2021a), “Care and scale: decorrelative ethics in algorithmic recommendation”, Cultural Anthropology, Vol. 36 No. 3, doi: 10.14506/ca36.3.11.

Seaver, N. (2021b), “Seeing like an infrastructure: avidity and difference in algorithmic recommendation”, Cultural Studies, Vol. 35 Nos 4-5, pp. 771-791, doi: 10.1080/09502386.2021.1895248.

Simović, A. (2018), “A Big Data smart library recommender system for an educational institution”, Library Hi Tech, Vol. 36 No. 3, pp. 498-523, doi: 10.1108/LHT-06-2017-0131.

Striphas, T. (2015), “Algorithmic culture”, European Journal of Cultural Studies, Vol. 18 Nos 4-5, pp. 395-412, doi: 10.1177/1367549415577392.

Sundin, O., Haider, J., Andersson, C., Carlsson, H. and Kjellberg, S. (2017), “The search-ification of everyday life and the mundane-ification of search”, Journal of Documentation, Vol. 73 No. 2, pp. 224-243, doi: 10.1108/JD-06-2016-0081.

Truly, A. (2023), “GPT-4: how to use, new features, availability, and more”, Digital Trends, 6 April, available at: https://www.digitaltrends.com/computing/chatgpt-4-everything-we-know-so-far/ (accessed 12 April 2023).

UNICEF (2021), Adolescent Perspectives on Artificial Intelligence. A Report on Consultations with Adolescents across the World, United Nations Children’s Fund (UNICEF), New York.

Weinberg, G. (2023), “DuckDuckGo launches DuckAssist: a new feature that generates natural language answers to search queries using Wikipedia”, 8 March, available at: https://spreadprivacy.com/duckassist-launch/ (accessed 5 April 2023).

Werner, A. (2020), “Organizing music, organizing gender: algorithmic culture and Spotify recommendations”, Popular Communication, Vol. 18 No. 1, pp. 78-90, doi: 10.1080/15405702.2020.1715980.

Willson, M. (2017), “Algorithms (and the) everyday”, Information, Communication and Society, Vol. 20 No. 1, pp. 137-150, doi: 10.1080/1369118X.2016.1200645.

Wilson, P. (1983), Second-Hand Knowledge: An Inquiry into Cognitive Authority, Greenwood Press, Westport, CT.

Withagen, R., Araújo, D. and De Poel, H.J. (2017), “Inviting affordances and agency”, New Ideas in Psychology, Vol. 45, pp. 11-18, doi: 10.1016/j.newideapsych.2016.12.002.

Yadav, P. and Pervin, N. (2022), “Towards efficient navigation in digital libraries: leveraging popularity, semantics and communities to recommend scholarly articles”, Journal of Informetrics, Vol. 16 No. 4, 101336, doi: 10.1016/j.joi.2022.101336.

Zhang, Y. and Wildemuth, B.M. (2009), “Qualitative analysis of content”, in Wildemuth, B.M. (Ed.), Applications of Social Research Methods to Questions in Information and Library Science, 2nd ed., Libraries Unlimited, Westport, CT, pp. 308-319.

Zhao, Y.C., Zhang, Y., Tang, J. and Song, S. (2020), “Affordances for information practices: theorizing engagement among people, technology, and sociocultural environments”, Journal of Documentation, Vol. 77 No. 1, pp. 229-250, doi: 10.1108/JD-05-2020-0078.

Acknowledgements

This research is connected to the GenZ project, supported by the Academy of Finland (Profi4 318930) and the University of Oulu.

Corresponding author

Ville Jylhä is the corresponding author and can be contacted at: ville.jylha@oulu.fi

About the authors

Ville-Petteri Jylhä is a Doctoral Researcher in Information Studies at the Faculty of Humanities, University of Oulu, Finland.

Noora Hirvonen is Professor of Information Studies at the Faculty of Humanities, University of Oulu, Finland.

Jutta Haider is Professor of Information Studies at the Swedish School of Library and Information Science, University of Borås and Reader at the Department of Arts and Cultural Sciences, Lund University, Sweden.
