Responsible living labs: what can go wrong?

Abdolrasoul Habibipour (Department of Computer Science Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden)

Journal of Information, Communication and Ethics in Society

ISSN: 1477-996X

Article publication date: 13 March 2024


Abstract

Purpose

This study aims to investigate how living lab (LL) activities align with responsible research and innovation (RRI) principles, particularly in artificial intelligence (AI)-driven digital transformation (DT) processes. The study seeks to define a framework termed “responsible living lab” (RLL), emphasizing transparency, stakeholder engagement, ethics and sustainability. This emerging issue paper also proposes several directions for future researchers in the field.

Design/methodology/approach

The research methodology involved a literature review complemented by insights from a workshop on defining RLLs. The literature review followed a concept-centric approach, searching key journals, conferences and online databases and yielding 13 relevant articles; backward and forward citation analysis added a further 19, for a total of 32 reviewed articles. The workshop, conducted in the context of the UrbanTestbeds.JR and SynAir-G projects, used a reverse brainstorming approach to explore potential ethical and responsible issues in LL activities. In total, 13 experts engaged in collaborative discussions, highlighting insights into AI’s role in promoting RRI within LL activities. The workshop facilitated knowledge sharing and a deeper understanding of RLL, particularly in the context of DT and AI.

Findings

This emerging issue paper highlights ethical considerations in LL activities, emphasizing user voluntariness, user interests and unintended participation. AI in DT introduces challenges such as bias, transparency and the digital divide, necessitating responsible practices. Workshop insights underscore challenges such as AI bias, data privacy and transparency, as well as opportunities such as inclusive decision-making and efficient innovation. The synthesis defines RLLs as frameworks ensuring transparency, stakeholder engagement, ethical considerations and sustainability in AI-driven DT within LLs. RLLs aim to align DT with ethical values, fostering inclusivity, responsible resource use and human rights protection.

Originality/value

The proposed definition of RLL introduces a framework prioritizing transparency, stakeholder engagement, ethics and sustainability in LL activities, particularly those involving AI for DT. This definition aligns LL practices with RRI, addressing ethical implications of AI. The value of RLL lies in promoting inclusive and sustainable innovation, prioritizing stakeholder needs, fostering collaboration and ensuring environmental and social responsibility throughout LL activities. This concept serves as a foundational step toward a more responsible and sustainable LL approach in the era of AI-driven technologies.

Citation

Habibipour, A. (2024), "Responsible living labs: what can go wrong?", Journal of Information, Communication and Ethics in Society, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JICES-11-2023-0137

Publisher

Emerald Publishing Limited

Copyright © 2024, Abdolrasoul Habibipour.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Digital transformation (DT) is actively reshaping the nature of society and influencing how we live and carry out our professional activities (Agarwal, 2020). DT can be understood as the “changes that digital technology causes or influences in all aspects of human life” (Stolterman and Fors, 2004, p. 689). DT greatly relies on the use of advanced digital technologies, many of which are driven by artificial intelligence (AI) (Holmström, 2022; Rathore, 2023). AI is a particularly powerful enabler of DT since it has the ability to learn from data, identify patterns and make predictions, all without human intervention (Magistretti et al., 2019; Verhoef et al., 2021). As AI becomes more intertwined with our daily lives, it is of crucial importance to ensure that its development and use are in line with ethical principles and societal values, the so-called responsible research and innovation (RRI), as outlined by Owen et al. (2012). RRI is a transparent and interactive process in which various stakeholders from society and innovators engage in mutual responsiveness (von Schomberg, 2013). The primary objective is to evaluate the ethical acceptability, sustainability and societal desirability of the innovation process and the marketable products it generates. This approach aims to facilitate the appropriate integration of scientific and technological advancements into our society (Bajmócy and Pataki, 2019).

There are various approaches that researchers and practitioners use to ensure that DT activities are in line with RRI principles, such as open science (Foster and Deardorff, 2017), citizen science (Fraisl et al., 2022), living lab (LL) (Habibipour et al., 2021; Leminen and Westerlund, 2012; Schuurman, 2015) and so forth. This research is focused on DT activities and actions, which are supported and facilitated by LLs as the overall approach (Bagalkot, 2009; Schaffers et al., 2009; Schuurman, 2015). LLs have been introduced and proposed as an inclusive and sustainable approach involving various stakeholders, focusing on individuals in their role as citizens, inhabitants, end users, etc., who are engaged throughout the DT process in their real-life setting (Bergvall-Kåreborn et al., 2009; Ståhlbröst, 2008). Accordingly, LLs can be seen as an approach for innovation development processes, as they allow one to simultaneously focus on individuals, technologies, tasks and structures, as well as the interactions between different stakeholders (Schaffers et al., 2009).

Despite this, the implementation of DT in LL activities also poses significant ethical and social challenges (Callari et al., 2019; Hasenauer et al., 2022), particularly when it comes to AI-driven innovations, and there is a dearth of research on how LL activities should become more responsible and ethical while benefiting from advanced technologies such as AI throughout DT processes (Ruffolo, 2022; Saurabh et al., 2021). Accordingly, this research aims to explore “how our LL activities and actions should be in line with RRI, with a particular focus on AI-driven DT processes.” Hence, this study defines the “Responsible Living Lab” (RLL) as an overarching framework for LL researchers and practitioners. The proposed framework emphasizes the need for transparency, stakeholder engagement, ethical considerations and sustainability in all stages of LL activities. Furthermore, this research provides future research directions on this emerging issue, not only for LL researchers but also for practitioners within the field.

To accomplish the research objective, a literature search was conducted, followed by a workshop within the context of two European projects, UrbanTestbeds.JR and SynAir-G. The literature review aimed to categorize existing research on LLs and RRI. The workshop used a Reverse Brainstorming method to creatively extract potential issues related to LL activities’ ethics and responsibility, ultimately contributing to our understanding of RLLs within the context of DT and AI.

The remainder of this article is organized as follows: The next section outlines the overall research methodology, encompassing the approach to conducting the literature review and the workshop. The subsequent section presents the results of the literature review, followed by the findings from the workshop. The next section discusses the findings that resulted in defining RLLs. The paper concludes by highlighting the research contribution, study limitations and the directions for future research.

2. Research methodology

To fulfill the objective of this research, a literature review was conducted, complemented by the insights gathered from a workshop focused on defining RLLs. This approach was useful to identify opportunities for co-creation while simultaneously addressing the challenges and implications of DT for both individuals and society as a whole.

The literature review followed a concept-centric approach, as outlined by Webster and Watson (2002). This approach contrasts with an author-centric approach, in which readers are usually familiar with the main topic and studies discussing it in detail are already available. The concept-centric method was chosen because it allows the literature to be synthesized systematically and enables the creation of a preliminary classification of RLL components.

The process of conducting the literature review on LL research began by identifying the primary journals and conferences in the field, namely, Technology Innovation Management Review, Sustainability, as well as the ISPIM and Open Living Lab Days conferences. These sources are considered the main outlets for LL research within the community. The table of contents of each of these core journals and conferences was searched manually for relevant articles by reviewing titles, abstracts and keywords. In addition to the core journals, the search was expanded to online databases (namely, Scopus, Web of Science, EBSCO, PubMed, MDPI and Taylor and Francis). The keywords used for this literature review were LL, ethics, RRI, artificial intelligence (AI), citizen engagement and DT, and any meaningful combination of these keywords was included as a search term. This step resulted in 66 articles, 13 of which were deemed relevant.
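To make the search-term construction concrete, the following is a minimal sketch, assuming pairwise AND-combinations (the specific combinations used are not listed in the paper), of how Boolean query strings could be generated from the review keywords before being adapted to databases such as Scopus:

```python
# Illustrative sketch only, not the actual search script used in this study:
# generate pairwise AND-queries from the review keywords.
from itertools import combinations

keywords = [
    '"living lab"',
    "ethics",
    '"responsible research and innovation"',
    '"artificial intelligence"',
    '"citizen engagement"',
    '"digital transformation"',
]

# Pair every keyword with every other one to form candidate search terms.
queries = [f"{a} AND {b}" for a, b in combinations(keywords, 2)]

for q in queries:
    print(q)  # e.g. '"living lab" AND ethics'
```

In practice, each database has its own query syntax and field codes, so such generated strings would still be adapted and screened manually, as described above.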

Finally, to identify further relevant studies, backward and forward citation analysis was conducted, following Webster and Watson’s (2002) recommendation. This approach was used because the preliminary step yielded too few relevant articles to obtain reliable results. Only publications in English were included in this review, and no time limitation was set. This step added 19 articles, so that 32 articles in total (13 from the previous step and 19 from backward and forward citation analysis) were reviewed.

The workshop was conducted in the context of the Interreg Baltic Sea Region project UrbanTestbeds.JR and the Horizon Europe project SynAir-G at the Open Living Lab Days 2023 conference, where 13 LL experts from industry and academia participated in a 90-min interactive session. The workshop followed a reverse brainstorming approach, enabling participants to discuss problems and solutions.

The UrbanTestbeds.JR project aims to foster resilient communities through co-designed urban testbeds, emphasizing tangible sustainability experiences for young citizens. The project focuses on enhancing participatory capacity and inclusivity in addressing climate and sustainability challenges, incorporating AI-driven climate plan analysis and urban data storytelling.

The SynAir-G project aims to reveal and quantify synergistic interactions between different pollutants affecting health, from mechanisms to real life, focusing on the school setting. SynAir-G benefits from citizen science as the overall approach to engaging schoolchildren in the development of solutions throughout the project.

As stated, a reverse brainstorming approach was used to extract as many potential issues as possible in an informal, creative way. “Reverse brainstorming is an undirected approach in which a group is asked the question: In how many ways can the area under consideration give trouble?” (Woods and Davies, 1973, p. 26). The point of departure for the workshop discussions was the question: “How can LL activities be made worse regarding ethics and responsibility?” Participants were engaged in collaborative discussions and brainstorming activities to collectively identify the main components of RLLs. Through this collaborative process and group discussions, valuable insights were gained into the potential of AI in promoting RRI within LL activities, particularly in citizen engagement, co-creation and real-life innovation development. Ethical considerations and challenges associated with AI in LL activities, such as dehumanization, responsibility, transparency, power imbalances and the digital divide, were explored. The workshop provided a platform for contributors to share their knowledge, experiences and perspectives and to collectively develop a deeper understanding of RLL, with a particular focus on DT and AI. The workshop followed an interactive approach that enabled participants to share their experiences through a post-it session, resulting in the discovery and discussion of several challenges as the workshop’s outcomes.

3. Literature review results

Ethics has been a central focus in LL research, and many researchers have highlighted its importance. This means that LL practices, by their nature, need to align with RRI principles, especially when dealing with the ethical challenges posed by advanced technologies. The first and foremost ethical consideration in LL activities is whether user engagement is entirely voluntary. As Ley et al. (2015) argue, in many cases users feel obliged to participate in LL activities such as diary studies, testing a prototype or being interviewed because they, for example, received technology in return for their participation. When it comes to group activities, users might have to join due to group pressure, even though their participation is defined as “voluntary” (Löfman et al., 2004). This pressure to participate can make it difficult for voluntary contributors to withdraw from an activity or refuse to participate.

Overlooking the users’ interest in LL activities is also an important ethical consideration. As mentioned by Mulder and Stappers (2009):

“Living Labs seem to operate with the implicit assumption that users are cheap or unpaid contributors, motivated by the anticipation that their participation will solve their problems or lead to ‘better’ designs” (p. 2).

It is important to consider users as a valuable source of knowledge and ideas rather than seeing them as “guinea pigs” for experimentation (Eriksson et al., 2005). Many LL projects tend to overlook participants’ needs and interests because the activities are primarily technology-driven and users are involved with an innovation that is to be designed, tested or evaluated.

Unwitting participation is another ethical consideration in LL activities. As stated earlier, LLs are environments in which individual users are involved in innovation development in their own real-life environment, which could be their home, their workplace, their car or the public spaces where they spend their time. In many kinds of LL activities, users are unable to withdraw from being involved. An example is when monitoring infrastructures are established in public places such as an airport, a train station or a city hall. In such cases it might be impossible for the targeted people to opt out of the activity; however, they must be able to do so (Mensink et al., 2010).

Another angle on unwitting participation concerns informed consent when LL research involves the participation of whole families, such as testing an innovation in energy consumption research (Krogstie et al., 2013). In this case, families are expected to be involved in their homes. As Hindus (1999, p. 202) states: “informed consent is trickier for homes, because of the presence of children and the centrality of children to home life.” Within open innovation activities in which participation is voluntary, it is essential to spend enough time preparing the informed consent. As Neuman (2002) argues: “It is not enough to get permission from people; they need to know what they are being asked to participate in so that they can make an informed decision” (p. 135). To avoid overwhelming users, LL researchers should explain and discuss the content of the informed consent with users as much as possible. The information in the informed consent must be realistic and present participants not only with the purpose and benefits of their engagement but also with the potential risks and costs of their involvement (Mensink et al., 2010). This matters because the informed consent sometimes fails to explain the real issues at stake. As recommended by Vines et al. (2013), these ethical concerns confirm that further research is needed to acquire a better understanding of the procedures and ethical standpoints regarding unwitting participation in participatory LL research.

The increasing use of AI in DT in various domains raises ethical and social considerations, such as bias, accountability, transparency, dehumanization of actions, the digital divide and privacy issues (Kim et al., 2021; Nadoleanu et al., 2022; Saurabh et al., 2021). These challenges are particularly pertinent in the context of LL actions (Harbers and Overdiek, 2022), which are real-world settings for research and innovation that involve stakeholders in co-creation, experimentation and evaluation processes. The unique features of LLs, such as the involvement of end users and other stakeholders in the innovation process, create both opportunities and challenges for the implementation of AI-powered DT at the individual, organizational and societal levels (Frey et al., 2022; Harbers and Overdiek, 2022).

One example of an ethical challenge in LL activities is ensuring that the data to be used to train AI models is representative of the population (Ruffolo, 2022). For example, in a co-creation activity involving the development of a health app, if the data used to train the AI models is biased toward a specific population group, the resulting app may not be effective or safe for other population groups. In addition, if the AI system used to make decisions in the app perpetuates societal biases, it may lead to unequal treatment or discrimination against certain groups (Nebeker et al., 2019).
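To make this concern concrete, the following is a minimal sketch, using entirely synthetic data, groups and a placeholder model (none of which come from the cited studies), of how an LL team might check whether a model trained on skewed data performs worse for an under-represented group:

```python
# Hedged illustration: per-group performance check on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])   # group B under-represented
x = rng.normal(size=(n, 3)) + (group == "B")[:, None]  # groups differ in distribution
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

x_tr, x_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=0)
model = LogisticRegression().fit(x_tr, y_tr)

# Report how well the model does for each group separately.
for g in ["A", "B"]:
    mask = g_te == g
    print(f"group {g}: test share={mask.mean():.2f}, "
          f"accuracy={model.score(x_te[mask], y_te[mask]):.2f}")
```

A real audit would go further (multiple metrics, confidence intervals, clinically meaningful outcomes), but even this simple comparison makes the representativeness problem visible to LL stakeholders.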

Another example of an ethical challenge is ensuring that AI systems used in LL activities are transparent and accountable (Hasenauer et al., 2022). If the data-driven decisions that are made using AI tools are not transparent enough, it might be challenging for stakeholders to assess whether the decisions are fair and unbiased. Moreover, it may be challenging to ensure that the decisions align with ethical and social considerations (Lepri et al., 2017).
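One simple, illustrative way to make such decisions more inspectable is to report which inputs drive a model’s predictions; the sketch below (synthetic data, invented feature names) uses scikit-learn’s permutation importance for that purpose and should be read as a starting point, not a full accountability mechanism:

```python
# Hedged illustration: exposing feature importances as a basic transparency aid.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
feature_names = ["age", "income", "app_usage_hours", "neighbourhood_index"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] > 0).astype(int)  # the decision actually depends on app_usage_hours

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Print features from most to least influential, for stakeholder review.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: mean importance {score:.3f}")
```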

While there are challenges associated with using AI in DT processes that follow the LL approach, there are also significant opportunities to use AI-driven tools to foster DT in LLs. One way in which AI can support RLLs is by enabling more inclusive and participatory decision-making processes (Lepri et al., 2017). By analyzing large amounts of data (so-called big data) from various stakeholders and individual users, AI can provide insights into the needs and preferences of different LL actors, which enhances the inclusion of various parties in decisions (Bibri, 2019). For example, in a co-creation activity to develop a smart city solution, AI can be used to analyze data on traffic patterns, energy consumption and public transport use, helping city planners better identify the needs and preferences of all stakeholders, including the public and private sectors and citizens (Bibri, 2019).
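As a hedged illustration of this smart-city example, the sketch below clusters synthetic district-level indicators (traffic delay, energy use, public-transport trips) to surface groups of districts with similar needs; all figures are invented, not real city data:

```python
# Hedged illustration: grouping districts by mobility and energy indicators.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: avg_traffic_delay_min, energy_kwh_per_capita, transit_trips_per_capita
districts = rng.normal(loc=[15, 3000, 250], scale=[5, 600, 80], size=(30, 3))

scaled = StandardScaler().fit_transform(districts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for cluster in range(3):
    members = districts[labels == cluster]
    print(f"cluster {cluster}: {len(members)} districts, "
          f"mean transit trips per capita {members[:, 2].mean():.0f}")
```

In an LL setting, such clusters would only be an input to discussion with citizens and planners, not a decision in themselves.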

Overall, these opportunities highlight the potential for AI to support RRI in LLs by enabling more inclusive, transparent and efficient decision-making processes and accelerating innovation cycles. However, it is essential to ensure that these opportunities are realized responsibly and ethically, taking into account the challenges and risks associated with using AI in LL activities.

4. Workshop results

Participants were engaged in a collaborative process during the workshop, using a reverse brainstorming approach in two groups. This method encouraged them to explore the numerous ways in which the ethical and responsible aspects of RLLs could be undermined, particularly in relation to DT and AI. By examining these challenges, they were able to co-create innovative solutions and insights that could guide the development of the RLL framework while addressing the complex issues raised by AI and DT.

In doing so, each group identified a range of challenges and opportunities, which were subsequently shared with the whole group of participants and facilitators. These insights covered various facets of RLLs, many of which reflected the LL principles (Ståhlbröst, 2012):

Regarding the challenges, workshop participants noted that dealing with bias and fairness is a central challenge in AI. They recognized that bias could prevent everyone from benefiting equally, so ensuring fairness in AI-driven innovations in RLLs is crucial. Data privacy was another significant challenge, including the secondary use of data, data ownership, data leakage and so on. Participants emphasized the importance of preserving user privacy when using AI for data-driven experimentation, highlighting the need to establish trust and commitment among LL participants.
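A minimal sketch of one fairness check of the kind raised here is the demographic parity difference, i.e. the gap between two groups’ positive decision rates; the decisions and group labels below are invented for illustration:

```python
# Hedged illustration: demographic parity difference on synthetic decisions.
import numpy as np

rng = np.random.default_rng(7)
group = rng.choice(["A", "B"], size=1000, p=[0.7, 0.3])
# 1 = positive decision (e.g. selected for a pilot), skewed towards group A.
decision = np.where(group == "A",
                    rng.random(1000) < 0.6,
                    rng.random(1000) < 0.4).astype(int)

rate_a = decision[group == "A"].mean()
rate_b = decision[group == "B"].mean()
print(f"positive rate A={rate_a:.2f}, B={rate_b:.2f}, "
      f"demographic parity difference={abs(rate_a - rate_b):.2f}")
```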

The participants highlighted the importance of transparency in AI-driven decision-making processes, especially in the context of real-life experimentation. Transparency ensures that the principles of real-life experimentation, accountability and trust remain intact in RLLs, even as AI becomes a fundamental part of the LL project or activity. Addressing the digital divide and promoting social inclusion were identified as other key challenges. Participants emphasized the need to ensure that AI-driven solutions are accessible to all, thereby fostering social equity and inclusion within RLLs. Overcoming the digital divide is essential to support open innovation activities (Chesbrough, 2006). The potential dehumanization of AI-driven actions was also highlighted as an ethical challenge. Keeping humans in the loop (Mosqueira-Rey et al., 2023) in RLL activities, especially in co-creation and real-life experimentation, is crucial to prevent the dehumanization of interactions and decision-making processes.
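The human-in-the-loop principle can be illustrated with a small, assumed sketch in which low-confidence AI outputs are routed to a human reviewer rather than acted on automatically; the threshold and review step are illustrative design choices, not a prescribed mechanism:

```python
# Hedged illustration: route low-confidence predictions to human review.
from dataclasses import dataclass

@dataclass
class Prediction:
    item_id: int
    label: str
    confidence: float

def route(predictions, threshold=0.8):
    """Split predictions into automated decisions and a human review queue."""
    automated = [p for p in predictions if p.confidence >= threshold]
    needs_review = [p for p in predictions if p.confidence < threshold]
    return automated, needs_review

preds = [Prediction(1, "approve", 0.95), Prediction(2, "reject", 0.55),
         Prediction(3, "approve", 0.72)]
auto, review = route(preds)
print(f"{len(auto)} handled automatically, {len(review)} sent to human reviewers")
```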

When it comes to the opportunities, participants recognized the potential of AI to facilitate more inclusive decision-making when more users and stakeholders are engaged in LL activities. AI’s ability to analyze diverse stakeholder data allows users to engage more effectively in decision-making, making their participation a core element of the process. Participants also highlighted the opportunity for AI to enhance efficiency and accelerate innovation within RLLs. This is in line with the principle of co-creation in real-life settings, as AI can advance experimentation, enabling quicker responses to societal challenges and promoting efficient co-creation.

The opportunity to use data-driven insights through AI was also seen as an essential aspect by workshop participants. AI’s capability to contribute to data-driven insights allows for more contextually relevant solutions, enhancing the value created in real-life everyday use contexts. Lastly, AI presents the opportunity to enhance social inclusion within RLLs. By addressing the digital divide and ensuring that technology is accessible to diverse groups, RLLs can strengthen their commitment to open innovation and social equity.

The insights gained from the reverse brainstorming sessions and discussions offer a thorough grasp of the challenges and prospects associated with AI in the context of RLLs. These insights serve as a starting point for examining and evaluating AI’s role in RLL activities, with a particular emphasis on promoting a firm commitment to ethical considerations, engaging users actively, co-creating value and endorsing open innovation. Throughout this process, the core principles of LLs remain central to shaping the future of RLLs.

5. Discussion and conclusion

This section provides the definition for RLLs based on the synthesized results from the literature review and the workshop. It also outlines the research contribution as well as limitations and future research.

5.1 Defining responsible living labs

The integration of findings from the literature review and the workshop revealed that several factors significantly influence LL actions in the context of RRI, particularly when LLs benefit from AI-driven solutions within the DT process. The identified factors can be grouped into four main categories that define RLLs as an overarching framework, namely, transparency, stakeholder engagement, ethical considerations and sustainability (see Figure 1).

5.1.1 Transparency.

Transparency is a fundamental aspect of RLLs (Bajmócy and Pataki, 2019). This element incorporates practices that ensure openness and clarity in all stages of LL activities and actions, including open and transparent communication of goals, processes and outcomes to all quadruple helix actors (Steen and Bueren, 2017), including citizens and users. Transparency also involves providing information on data usage, AI algorithms and decision-making processes (Harbers and Overdiek, 2022). Researchers and practitioners should aim to make their actions and intentions as straightforward as possible, allowing stakeholders to understand the purpose and potential impact of their involvement. Although LLs are, in principle, expected to acknowledge this aspect, ensuring transparency and openness in RLLs becomes more challenging in the era of DT and AI, as complex AI algorithms and decision-making processes may be difficult for nonexperts to comprehend. Balancing the need for transparency with the technical intricacies of AI systems requires innovative approaches to make these processes understandable and accessible to all stakeholders (Nebeker et al., 2019; Rathore, 2023). The rapid development and deployment of AI-driven solutions complicates this further: maintaining transparency in an environment of constant change is a challenge, as stakeholders must stay informed about evolving technologies and their potential impacts on RLL activities, a marked shift from traditional LL activities.
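One minimal, assumed way to operationalize such communication is a plain, stakeholder-readable transparency record covering data usage, the algorithm involved and the decision process; the structure and field names below are illustrative, not a standard:

```python
# Hedged illustration: a simple transparency record for an LL activity.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TransparencyRecord:
    activity: str
    data_collected: list
    data_purpose: str
    algorithm: str
    decision_process: str
    contact: str
    known_limitations: list = field(default_factory=list)

record = TransparencyRecord(
    activity="Pilot test of an AI-assisted mobility service",
    data_collected=["trip start/end zones", "timestamps"],
    data_purpose="Improve route suggestions during the pilot only",
    algorithm="Gradient-boosted ranking model",
    decision_process="Suggestions are ranked by the model; LL staff review them weekly",
    contact="livinglab@example.org",
    known_limitations=["Under-represents residents without smartphones"],
)
print(json.dumps(asdict(record), indent=2))
```

Keeping such a record in plain language, and updating it as the AI component evolves, is one way to address the pace-of-change problem noted above.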

5.1.2 Stakeholder engagement.

Active stakeholder engagement is a fundamental element of RLLs, as in traditional LLs (Ståhlbröst, 2012). It emphasizes the involvement of individuals, communities and various stakeholders throughout the DT process. However, unlike many traditional LL activities, RLLs ensure that all actors have a voice and the power to influence the entire process (Gooch et al., 2018). This engagement should be inclusive, taking into consideration the needs and perspectives of different groups (Buhr et al., 2016). Researchers and practitioners should establish mechanisms for acknowledging feedback, ideas and concerns from all stakeholders, including individual users. Furthermore, engagement approaches must respect the voluntary nature of participation (Nov et al., 2011), ensuring that stakeholders’ involvement is in line with RRI. As with transparency, engaging stakeholders becomes more complex in the digital transition era due to the diversity and global reach of technology users. In contrast to traditional LLs, ensuring meaningful participation of a wide range of stakeholders from different backgrounds and regions poses a challenge to achieving truly inclusive and representative stakeholder engagement within RLLs. AI can introduce new dimensions to stakeholder engagement, as algorithms can process vast amounts of data to identify relevant insights (Gregory et al., 2021). However, striking a balance between automated AI-driven engagement and maintaining the human touch in stakeholder interactions can be challenging, especially in co-creation and real-life experimentation scenarios (Ahmed and Wahed, 2020).

5.1.3 Ethical considerations.

Ethical considerations should be at the forefront of RLL activities. This element involves a deep commitment to ensuring that AI-driven innovations are aligned with ethical principles. Critical ethical considerations include protecting user privacy, addressing biases and discrimination in AI algorithms, ensuring informed consent and upholding principles of fairness, accountability and transparency in decision-making. Ethical considerations should guide the design, implementation and evaluation of digital solutions within LLs. In AI-driven innovations, ethical concerns encompass issues such as algorithmic bias and the risk that AI systems may reinforce existing societal biases (Ahmed and Wahed, 2020; Belk, 2021). To address these issues, a heightened level of attention and the establishment of comprehensive ethical principles are necessary to guarantee fairness, transparency and accountability. AI’s rapid progress and implementation in DT processes can pose difficulties in conducting in-depth ethical evaluations. RLLs must adeptly navigate the ever-evolving realm of AI ethics by staying informed about the most recent advancements and challenges. This allows them to make well-informed and ethical choices (Gregory et al., 2021).

5.1.4 Sustainability.

Sustainability is an essential element of the RLL framework. It involves considering the long-term impact of DT activities on society, the environment and the well-being of stakeholders (Bajmócy and Pataki, 2019; von Schomberg, 2013). Sustainability in RLLs encompasses responsible resource use, reducing negative environmental and social impacts and ensuring that AI-driven innovations do not exacerbate inequalities or harm the environment (Buhr et al., 2016). Researchers and practitioners should aim to create digital solutions that contribute positively to sustainable development goals. AI can either support or undermine sustainability efforts; training and running AI models, for example, can consume substantial energy and have other environmental impacts. RLLs must therefore work out how to use AI in ways that deliver long-term benefits without harming the environment. The rapid advancement of DT does not always match long-term sustainability goals (Masson-Delmotte et al., 2021), so RLLs must strike a balance between quick innovation and ensuring that AI contributes to long-term sustainability (Nishant et al., 2020).
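As a back-of-the-envelope illustration of the energy concern, the footprint of training a model can be roughly estimated from power draw, training time and grid carbon intensity; all figures below are assumed for the example, not measurements:

```python
# Hedged illustration: rough energy and CO2 estimate for an AI training run.
gpu_power_kw = 0.3          # assumed average draw of one GPU (kW)
num_gpus = 4
training_hours = 48
grid_kg_co2_per_kwh = 0.25  # assumed grid carbon intensity (kg CO2e per kWh)

energy_kwh = gpu_power_kw * num_gpus * training_hours
emissions_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"Estimated energy: {energy_kwh:.0f} kWh, "
      f"estimated emissions: {emissions_kg:.1f} kg CO2e")
```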

By integrating these four elements – transparency, stakeholder engagement, ethical considerations and sustainability – into the framework for RLLs, researchers and practitioners can ensure that their LL activities and actions are aligned with RRI principles. This framework will help bridge the gap between advanced technologies such as AI and ethical, responsible and sustainable DT processes within LLs.

Based on the main elements of the RLL framework, RLLs can be defined as:

An overarching framework for conducting AI-enabled DT processes within LLs while adhering to RRI principles. It emphasizes openness and transparency, meaningful stakeholder engagement, ethical considerations, and sustainability throughout all stages of LL activities and actions. RLLs aim to ensure that DT processes align with ethical values and societal well-being, fostering inclusivity, responsible resource use, and the protection of human rights.

5.2 Research contribution

The proposed definition of RLL emphasizes transparency, stakeholder engagement, ethical considerations and sustainability in all stages of LL activities and actions. In addition, researchers can explore how this framework can be implemented in real-world LL activities and actions, particularly those that involve AI-driven solutions for DT processes.

One key benefit of using an RLL framework is that it assures LL researchers and practitioners that their research is conducted in a way that is more aligned with RRI principles, which are becoming increasingly important in the development and use of advanced technologies such as AI (Frey et al., 2022; Nebeker et al., 2019). Another key benefit is that it can help LL practitioners create more inclusive and sustainable innovation processes. RLLs prioritize early stakeholder engagement and inclusion (Habibipour et al., 2021), meaning they involve individuals in their role as citizens, inhabitants, end users, etc., throughout the DT process in their real-life setting. This can help ensure that the innovation process is more aligned with the needs and values of the stakeholders who will ultimately use the digital solutions (Habibipour et al., 2021). RLLs also focus on the interactions between different stakeholders, which can help foster collaboration and co-creation. Finally, RLLs emphasize the need for sustainability in all stages of their activities and actions, which can help ensure that the innovation process is more environmentally and socially responsible (Bajmócy and Pataki, 2019; Harbers and Overdiek, 2022).

The proposed framework can be the first step toward the transition from a traditional LL approach to a more sustainable and responsible LL approach when the activities, processes and final solutions are more intertwined with AI-driven technologies.

Further to this, building upon the insights gained from this research, several key areas emerged for future research that can further advance the understanding and implementation of RLLs. These research directions aim to address existing gaps, tackle emerging challenges and contribute to the sustainable and ethical development of AI-driven DT processes within LLs.

5.3 Limitations and future research agenda

Every research project has its limitations. The proposed definition and framework of RLLs necessitate practical implementation and validation in real-world LL settings. While conceptually aligned with RRI principles, their efficacy and adaptability in diverse LL contexts require empirical testing. This validation process should encompass case studies and practical applications across diverse LL contexts to ensure effectiveness, adaptability and impact on RRI principles. Researchers can investigate stakeholder perceptions and involvement in RLL practices, as well as evaluate outcomes regarding transparency, stakeholder engagement, ethical considerations and sustainability.

Given the complex technical challenges and ethical considerations associated with AI-driven innovations in LLs, future research should focus on developing guidelines and best practices for ensuring ethical AI adoption. Areas of interest include addressing algorithmic bias, data privacy and the human–AI interaction within LL activities (Hasenauer et al., 2022). Researchers can investigate how to integrate ethical principles into AI development processes and evaluate the effectiveness of different approaches in upholding ethical standards in RLLs.

The rapid evolution of digital technologies poses continuous challenges and opportunities for LLs. Future research should explore the implications of emerging technologies beyond AI, such as blockchain, Internet of Things (IoT) and augmented reality, on LL activities and RRI principles. Researchers can investigate how these technologies influence stakeholder engagement, transparency and sustainability in LLs and identify strategies to leverage them effectively while mitigating potential risks.

Assessing the long-term societal and environmental impact of RLLs is essential for ensuring sustainability (Nadoleanu et al., 2022; Nishant et al., 2020). Future research could focus on conducting longitudinal studies and comprehensive impact assessments to evaluate the lasting effects of RLL activities. Researchers can examine how RLLs contribute to achieving sustainable development goals, measure their influence on stakeholder well-being and identify opportunities for continuous improvement and adaptation.

One overarching theme for future research lies in the exploration of ethical breaches and user participation within LL activities. Understanding the dynamics of voluntary participation (Ståhlbröst and Bergvall-Kåreborn, 2013), informed consent procedures (Krogstie et al., 2013) and privacy protection measures is essential for maintaining trust and ensuring the well-being of participants (Kim et al., 2021; Nadoleanu et al., 2022; Saurabh et al., 2021). In addition, investigating strategies to mitigate bias in AI algorithms and decision-making processes is crucial for promoting fairness and inclusivity in LL initiatives (Nebeker et al., 2019).

Transparency and accountability in decision-making represent another critical area for future research (Hasenauer et al., 2022). Enhancing transparency in AI-enabled processes and developing mechanisms for accountability can help build trust among stakeholders and ensure the responsible use of technology. Moreover, promoting meaningful stakeholder engagement and collaboration is essential for fostering inclusive and participatory LL environments.

Research efforts should also be directed toward effective data management and governance practices within LLs. Establishing ethical guidelines for data use, addressing challenges related to data sharing and access and promoting human-centered design principles are necessary in LL actions. Moreover, exploring the adoption and integration of AI solutions into existing LL procedures and workflows presents research opportunities for LL scholars.

Governance and policy considerations play a crucial role in shaping the ethical and regulatory landscape of AI-enabled LL initiatives (Willems et al., 2023). Developing ethical guidelines, providing policymakers with evidence-based recommendations and ensuring compliance with legal and regulatory requirements are essential for fostering responsible innovation.

Lastly, evaluation and impact assessment methodologies are needed to measure the effectiveness and sustainability of AI-driven DT processes within LLs. Further research may focus on developing comprehensive evaluation frameworks and conducting rigorous impact assessments that can help assess the societal, economic and environmental implications of innovation development in LLs.

In summary, future research in LLs should focus on addressing ethical, social and technical challenges associated with AI-driven DT processes, promoting transparency and accountability, fostering stakeholder engagement and integrating RRI principles into LL initiatives. By exploring these research directions, LL researchers and practitioners can contribute to the responsible and sustainable deployment of AI technologies for the benefit of society as a whole.

Figures

Figure 1. RLL framework

References

Agarwal, R. (2020), “Digital transformation: a path to economic and societal value (SSRN scholarly paper ID 3701906)”, Social Science Research Network, available at: www.papers.ssrn.com/abstract=3701906

Ahmed, N. and Wahed, M. (2020), “The de-democratization of AI: deep learning and the compute divide in artificial intelligence research”, arXiv preprint arXiv:2010.15581, doi: 10.48550/arXiv.2010.15581.

Bagalkot, N.L. (2009), “LivingLabs as real-world co-creation platforms in development of ICT in rural India: a reflection”, Mobile Living Labs 09: Methods and Tools for Evaluation in the Wild, 11.

Bajmócy, Z. and Pataki, G. (2019), “Responsible research and innovation and the challenges of co-creation”, Bammé, A.–Getzinger, pp. 15-29.

Belk, R. (2021), “Ethical issues in service robotics and artificial intelligence”, The Service Industries Journal, Vol. 41 Nos 13/14, pp. 860-876, doi: 10.1080/02642069.2020.1727892.

Bergvall-Kåreborn, B., Eriksson, C.I., Ståhlbröst, A. and Svensson, J. (2009), “A milieu for innovation: Defining living labs”, ISPIM Innovation Symposium: 06/12/2009-09/12/2009, available at: www.diva-portal.org/smash/record.jsf?pid=diva2:1004774

Bibri, S.E. (2019), “The anatomy of the data-driven smart sustainable city: instrumentation, datafication, computerization and related applications”, Journal of Big Data, Vol. 6 No. 1, p. 59, doi: 10.1186/s40537-019-0221-4.

Buhr, K., Federley, M. and Karlsson, A. (2016), “Urban living labs for sustainability in suburbs in need of modernization and social uplift”, Technology Innovation Management Review, Vol. 6 No. 1, p. 1.

Callari, T.C., Moody, L., Saunders, J., Ward, G. and Woodley, J. (2019), “Stakeholder requirements for an ethical framework to sustain multiple research projects in an emerging living lab involving older adults”, Journal of Empirical Research on Human Research Ethics, Vol. 15 No. 3, p. 155626461987379, doi: 10.1177/1556264619873790.

Chesbrough, H. (2006), “Open innovation: a new paradigm for understanding industrial innovation”, in Open Innovation: Researching a New Paradigm, Oxford University Press, Oxford.

Eriksson, M., Niitamo, V.-P. and Kulkki, S. (2005), State-of-the-Art in Utilizing Living Labs Approach to User-Centric ICT Innovation – a European Approach, Center for Distance-Spanning Technology, Luleå University of Technology, Luleå.

Foster, E.D. and Deardorff, A. (2017), “Open science framework (OSF)”, Journal of the Medical Library Association, Vol. 105 No. 2, pp. 203-206, doi: 10.5195/jmla.2017.88.

Fraisl, D., Hager, G., Bedessem, B., Gold, M., Hsing, P.-Y., Danielsen, F., Hitchcock, C.B., Hulbert, J.M., Piera, J., Spiers, H., Thiel, M. and Haklay, M. (2022), “Citizen science in environmental and ecological sciences”, Nature Reviews Methods Primers, Vol. 2 No. 1, p. 1, doi: 10.1038/s43586-022-00144-4.

Frey, C., Hertweck, P., Richter, L. and Warweg, O. (2022), “Bauhaus.MobilityLab: a living lab for the development and evaluation of AI-Assisted services”, Smart Cities, Vol. 5 No. 1, p. 1, doi: 10.3390/smartcities5010009.

Gooch, D., Barker, M., Hudson, L., Kelly, R., Kortuem, G., Linden, J.V.D., Petre, M., Brown, R., Klis-Davies, A., Forbes, H., Mackinnon, J., Macpherson, R. and Walton, C. (2018), “Amplifying quiet voices: challenges and opportunities for participatory design at an urban scale”, ACM Transactions on Computer-Human Interaction, Vol. 25 No. 1, p. 1, doi: 10.1145/3139398.

Gregory, R.W., Henfridsson, O., Kaganer, E. and Kyriakou, H. (2021), “The role of artificial intelligence and data network effects for creating user value”, Academy of Management Review, Vol. 46 No. 3, pp. 534-551, doi: 10.5465/amr.2019.0178.

Habibipour, A., Lindberg, J., Runardotter, M., Elmistikawy, Y., Ståhlbröst, A. and Chronéer, D. (2021), “Rural living labs: inclusive digital transformation in the countryside”, Technology Innovation Management Review, Vol. 11 Nos 9/10, pp. 59-72, doi: 10.22215/timreview/1465.

Harbers, M. and Overdiek, A. (2022), “Towards a living lab for responsible applied AI”, DRS Biennial Conference Series, available at: www.dl.designresearchsociety.org/drs-conference-papers/drs2022/researchpapers/123

Hasenauer, R., Ehrenmueller, I. and Belviso, C. (2022), “Living labs in social service institutions: an effective method to improve the ethical, reliable use of digital assistive robots to support social services”, 2022 Portland International Conference on Management of Engineering and Technology (PICMET), 1-9, doi: 10.23919/PICMET53225.2022.9882746.

Hindus, D. (1999), “The importance of homes in technology research”, pp. 199-207.

Holmström, J. (2022), “From AI to digital transformation: the AI readiness framework”, Business Horizons, Vol. 65 No. 3, pp. 329-339, doi: 10.1016/j.bushor.2021.03.006.

Kim, S., Choi, B. and Lew, Y.K. (2021), “Where is the age of digitalization heading? The meaning, characteristics, and implications of contemporary digital transformation”, Sustainability, Vol. 13 No. 16, p. 8909, doi: 10.3390/su13168909.

Krogstie, J., Ståhlbröst, A., Holst, M., Gudmundsdottir, A., Olesen, A., Braskus, L., Jelle, T. and Kulseng, L. (2013), “Using a living lab methodology for developing energy savings solutions”.

Leminen, S. and Westerlund, M. (2012), “Towards innovation in living labs networks”, International Journal of Product Development, Vol. 17 Nos 1/2, pp. 1-2, doi: 10.1504/IJPD.2012.051161.

Lepri, B., Staiano, J., Sangokoya, D., Letouzé, E. and Oliver, N. (2017), “The tyranny of data? The bright and dark sides of data-driven decision-making for social good”, in Cerquitelli, T., Quercia, D. and Pasquale, F. (Eds), Transparent Data Mining for Big and Small Data, Springer International Publishing, pp. 3-24, doi: 10.1007/978-3-319-54024-5_1.

Ley, B., Ogonowski, C., Mu, M., Hess, J., Race, N., Randall, D., Rouncefield, M. and Wulf, V. (2015), “At home with users: a comparative view of living labs”, Interacting with Computers, Vol. 27 No. 1, pp. 21-35.

Löfman, P., Pelkonen, M. and Pietilä, A. (2004), “Ethical issues in participatory action research”, Scandinavian Journal of Caring Sciences, Vol. 18 No. 3, pp. 333-340.

Magistretti, S., Dell’Era, C. and Messeni Petruzzelli, A. (2019), “How intelligent is watson? Enabling digital transformation through artificial intelligence”, Business Horizons, Vol. 62 No. 6, pp. 819-829, doi: 10.1016/j.bushor.2019.08.004.

Masson-Delmotte, V., Zhai, P., Pirani, A., Connors, S. L., Péan, C., Berger, S., Caud, N., Chen, Y., Goldfarb, L., Gomis, M. I., Huang, M., Leitzell, K., Lonnoy, E., Matthews, J. B. R., Maycock, T. K., Waterfield, T., Yelekçi, Ö., Yu, R. and Zhou, B. (Eds), (2021), Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press.

Mensink, W., Birrer, F.A. and Dutilleul, B. (2010), “Unpacking European living labs: analysing innovation’s social dimensions”, Central European Journal of Public Policy, Vol. 4 No. 1, pp. 60-85.

Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J. and Fernández-Leal, Á. (2023), “Human-in-the-loop machine learning: a state of the art”, Artificial Intelligence Review, Vol. 56 No. 4, pp. 3005-3054, doi: 10.1007/s10462-022-10246-w.

Mulder, I. and Stappers, P.J. (2009), “Co-creating in practice: results and challenges”, pp. 1-8.

Nadoleanu, G., Staiculescu, A.R. and Bran, E. (2022), “The multifaceted challenges of the digital transformation: creating a sustainable society”, Postmodern Openings, Vol. 13 No. 1 Sup1, doi: 10.18662/po/13.1Sup1/428.

Nebeker, C., Torous, J. and Bartlett Ellis, R.J. (2019), “Building the case for actionable ethics in digital health research supported by artificial intelligence”, BMC Medicine, Vol. 17 No. 1, p. 137, doi: 10.1186/s12916-019-1377-7.

Neuman, L.W. (2002), Social Research Methods: Qualitative and Quantitative Approaches.

Nishant, R., Kennedy, M. and Corbett, J. (2020), “Artificial intelligence for sustainability: challenges, opportunities, and a research agenda”, International Journal of Information Management, Vol. 53, p. 102104, doi: 10.1016/j.ijinfomgt.2020.102104.

Nov, O., Arazy, O. and Anderson, D. (2011), “Dusting for science: motivation and participation of digital citizen science volunteers”, Proceedings of the 2011 iConference, pp. 68-74, doi: 10.1145/1940761.1940771.

Owen, R., Macnaghten, P. and Stilgoe, J. (2012), “Responsible research and innovation: from science in society to science for society, with society”, Science and Public Policy, Vol. 39 No. 6, pp. 751-760, doi: 10.1093/scipol/scs093.

Rathore, D.B. (2023), “Digital transformation 4.0: integration of artificial intelligence and metaverse in marketing”, Eduzone: International Peer Reviewed/Refereed Multidisciplinary Journal, Vol. 12 No. 1, p. 1.

Ruffolo, M. (2022), “The role of ethical AI in fostering harmonic innovations that support a human-centric digital transformation of economy and society”, in Cicione, F., Filice, L. and Marino, D. (Eds), Harmonic Innovation: Super Smart Society 5.0 and Technological Humanism, Springer International Publishing, pp. 139-143, doi: 10.1007/978-3-030-81190-7_15.

Saurabh, K., Arora, R., Rani, N., Mishra, D. and Ramkumar, M. (2021), “AI led ethical digital transformation: framework, research and managerial implications”, Journal of Information, Communication and Ethics in Society, Vol. 20 No. 2, pp. 229-256, doi: 10.1108/JICES-02-2021-0020.

Schaffers, H., Merz, C. and Guzman, J.G. (2009), “Living labs as instruments for business and social innovation in rural areas”, 2009 IEEE International Technology Management Conference (ICE), pp. 1-8, doi: 10.1109/ITMC.2009.7461429.

Schuurman, D. (2015), “Bridging the gap between open and user innovation? Exploring the value of living labs as a means to structure user contribution and manage distributed innovation”, Dissertation, Ghent University, available at: www.hdl.handle.net/1854/LU-5931264

Ståhlbröst, A. (2008), “Forming future IT – the living lab way of user involvement”, Doctoral dissertation, Luleå tekniska universitet, Luleå.

Ståhlbröst, A. (2012), “A set of key principles to assess the impact of living labs”, International Journal of Product Development, Vol. 17 Nos 1/2, pp. 1-2, doi: 10.1504/IJPD.2012.051154.

Ståhlbröst, A. and Bergvall-Kåreborn, B. (2013), “Voluntary contributors in open innovation processes”, in Managing Open Innovation Technologies, Springer, pp. 133-149.

Steen, K. and Bueren, E. (2017), “The defining characteristics of urban living labs”, Technology Innovation Management Review, Vol. 7 No. 7, p. 7.

Stolterman, E. and Fors, A.C. (2004), “Information technology and the good life”, in Kaplan, B., Truex, D. P., Wastell, D., Wood-Harper, A. T. and DeGross, J. I. (Eds), Information Systems Research: Relevant Theory and Informed Practice, Springer US, pp. 687-692, doi: 10.1007/1-4020-8095-6_45.

Verhoef, P.C., Broekhuizen, T., Bart, Y., Bhattacharya, A., Qi Dong, J., Fabian, N. and Haenlein, M. (2021), “Digital transformation: a multidisciplinary reflection and research agenda”, Journal of Business Research, Vol. 122, pp. 889-901, doi: 10.1016/j.jbusres.2019.09.022.

Vines, J., Clarke, R., Wright, P., McCarthy, J. and Olivier, P. (2013), “Configuring participation: on how we involve people in design”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 429-438.

von Schomberg, R. (2013), “A vision of responsible research and innovation”, In Responsible Innovation, John Wiley and Sons, Ltd, pp. 51-74, doi: 10.1002/9781118551424.ch3.

Webster, J. and Watson, R.T. (2002), “Analyzing the past to prepare for the future: writing a literature review”, MIS Quarterly, Vol. 26 No. 2, pp. xiii-xxiii.

Willems, J.J., Kuitert, L. and Van Buuren, A. (2023), “Policy integration in urban living labs: delivering multi-functional blue-green infrastructure in Antwerp, Dordrecht, and Gothenburg”, Environmental Policy and Governance, Vol. 33 No. 3, pp. 258-271, doi: 10.1002/eet.2028.

Woods, M.F. and Davies, G.B. (1973), “Potential problem analysis: a systematic approach to problem prediction and contingency planning – an aid to the smooth exploitation of research”, R&D Management, Vol. 4 No. 1, pp. 25-32, doi: 10.1111/j.1467-9310.1973.tb01028.x.

Acknowledgements

This work was funded by the European Commission in the context of Interreg Baltic Sea Region project UrbanTestbeds.JR (#S004) and Horizon Europe project SYNAIR-G (Grant Agreement No. 101057271), which is gratefully acknowledged.

Corresponding author

Abdolrasoul Habibipour can be contacted at: abdolrasoul.Habibipour@ltu.se
