Where have the ethical and moral standards landed? Consumer self-congruency and psychological distance in the context of AI-based services

Dan Jin (Department of Retail Hospitality and Tourism Management, The University of Tennessee Knoxville, Knoxville, Tennessee, USA)

International Hospitality Review

ISSN: 2516-8142

Article publication date: 1 November 2023

Abstract

Purpose

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and implementation.

Design/methodology/approach

The research employed two experimental designs and one pilot study to investigate the ethical and moral implications of different levels of AI implementation in the hospitality industry, the intersection of self-congruency and ethical considerations when AI replaces human service providers and the impact of psychological distance associated with AI on individuals' ethical and moral considerations. These research methods included surveys and experimental manipulations to gather and analyze relevant data.

Findings

Findings provide valuable insights into the ethical and moral dimensions of AI implementation, the influence of self-congruency on ethical considerations and the role of psychological distance in individuals’ ethical evaluations. They contribute to the development of guidelines and practices for the responsible and ethical implementation of AI in various industries, including the hospitality sector.

Practical implications

The study highlights the importance of exercising rigorous ethical-moral AI hiring and implementation practices to ensure AI principles and enforcement operations in the restaurant industry. It provides practitioners with useful insights into how AI-robotization can improve ethical and moral standards.

Originality/value

The study contributes to the literature by providing insights into the ethical and moral implications of AI service robots in the hospitality industry. Additionally, the study explores the relationship between psychological distance and acceptance of AI-intervened service, which has not been extensively studied in the literature.

Citation

Jin, D. (2023), "Where have the ethical and moral standards landed? Consumer self-congruency and psychological distance in the context of AI-based services", International Hospitality Review, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/IHR-06-2023-0033

Publisher: Emerald Publishing Limited

Copyright © 2023, Dan Jin

License

Published in International Hospitality Review. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Despite the accelerating digital adoption of robotics and artificial intelligence (AI) that ensures smooth operation and management (Bharwani & Mathews, 2021), AI service robots are ethically challenging in use and morally controversial in their acceptance as labor replacements in hospitality contexts. In the hospitality industry, service robots have been rapidly adopted to replace frontline human service (Park, Jiang, Lee, & Chang, 2021). Service robots refer to “system-based autonomous and adaptable interfaces that interact, communicate, and deliver service to an organization’s customers” (Wirtz et al., 2018, p. 909). Applications of AI-based service robots are promising in the hospitality field owing to their remarkable accuracy in error reduction, portion control and cost control in service operation and delivery (Berezina, Ciftci, & Cobanoglu, 2019). The development of analytical AI is replacing thinking labor with AI employment, even in services better suited to empathetic and feeling tasks (Huang, Rust, & Maksimovic, 2019; Khaliq, Waqas, Nisar, Haider, & Asghar, 2022). For example, hoteliers are making valiant attempts to allay guests’ concerns about AI by redesigning and restructuring consumer experiences through robotic transformation (Hu & Min, 2023). The deployment of hospitality AI-robotization brings tremendous benefits at the operational level, minimizing the need for service personnel (Hao, Xiao, & Chon, 2020; Pillai, Haldorai, Seo, & Kim, 2021) and working around the clock to control noise, cleanliness and humidity using wireless sensors. Hilton, for example, launched a robotic concierge called “Connie” that assists in-house guests with queries and service requests (Hilton, 2020).

Although the adoption of service robots in the hospitality industry is becoming increasingly prevalent, especially as the industry faces labor shortages and rising labor costs, there are concerns about job displacement as robots become more widespread (Ding, Lee, Legendre, & Madera, 2022). Although prior research has shed light on a spectrum of ethical and moral intricacies entailed by the integration of AI in consumer interactions and marketing domains, there has been a debate on ethical and moral principles and values regarding service robots replacing human labor (Cowls, Tsamados, Taddeo, & Floridi, 2021). An illustrative example is provided by Liu-Thompkins, Okazaki, and Li (2022), who underscore the potential hazards associated with blurring the distinction between authentic human emotions and simulated empathy facilitated by AI. In a similar vein, Li, Peluso, and Duan (2023) elucidate how consumer preference for humans over AI in telemarketing raises inquiries concerning AI’s perceived genuineness and cognitive capabilities. Additionally, the level of acceptance of AI chatbots, as emphasized by Zhu, Zhang, Wu, and Liu (2022), is contingent upon the assurance of consumers’ needs, thereby accentuating the pivotal role of transparent AI design and utilization. Yalcin, Lim, Puntoni, and van Osselaer (2022) highlight how consumers react to decisions made by AI programs versus humans, emphasizing concerns about accountability, transparency and trust in AI-influenced decisions. The prospect of conscious empathic AI in service interactions, though promising, gives rise to ethical contemplations pertaining to AI’s emulation of human emotions (Esmaeilzadeh & Vaezi, 2022). Collectively, these articles underscore the urgency for judicious and conscientious AI integration to navigate the intricate ethical terrain accompanying AI’s involvement in consumer experiences.

Furthermore, building upon this line of argument, the ethical and moral dimensions entwined with AI and its myriad applications across diverse domains have drawn substantial scholarly attention. For instance, Belk (2021) delved comprehensively into ethical concerns within the realm of service robotics and indicated that examining the ethical quandaries stemming from interactions between AI systems and humans prompts a plea for a well-balanced approach that seamlessly integrates technological progress and societal well-being. Breidbach and Maglio (2020) offered a compelling analysis of the ethical dimensions intrinsic to data-driven business models and emphasized the ethical responsibility of organizations to ensure that data-driven practices adhere to ethical principles and avoid perpetuating unjust consequences. Bock, Wolter, and Ferrell (2020) examined the transformative impact of AI on services and marketing and underscored the potential of AI systems to mold customer perceptions, trust and personalization, thus prompting the call for ethical guidelines governing AI-driven services to ensure unbiased and credible customer interactions. However, despite these issues, the existing literature on socially acceptable AI services is inconclusive (e.g. Christou, Hadjielias, Simillidou, & Kvasova, 2023; Rust & Huang, 2021; Park et al., 2021). Thus, more research is needed to fully understand the nuances of AI and its impact on consumer attitudes and behaviors.

In light of these scholarly investigations, the necessity for ethical AI development looms large. The prominence of ethical guidelines and practices is underscored, thereby ensuring AI’s positive contribution while safeguarding societal values and overall well-being. Although the adoption of service robots in the hospitality industry may be inevitable to some extent, it is important for hospitality businesses to carefully consider the costs and benefits of implementing these technologies and to ensure that they are being used in a way that complements, rather than replaces, human employees. Accordingly, the purpose of the study is to investigate consumers’ ethical and moral considerations in the context of AI-based services. A pilot study adopted Huang and Rust’s (2021) framework, which identifies three levels of AI applications: mechanical, thinking and feeling AIs. Mechanical AIs refer to the automation of repetitive and routine tasks (e.g. self-service technologies); thinking AIs facilitate rational decision-making based on data processing (e.g. conversational intelligent systems such as Siri); feeling AIs can interact with human emotions (e.g. human-like robots that respond to human emotions, such as Sophia).

Next, Study 1 adopted congruency theory (Aguirre-Rodriguez, Bosnjak, & Sirgy, 2012) to investigate the moderating effect of self-congruency on consumers’ attention toward the (a) ethical and (b) moral aspects of AI-based service. Lastly, in Study 2, we adopted construal level theory (Trope & Liberman, 2003) to examine how different levels of AI benefits influence consumers’ ethical and moral concerns about AI-based service robotization, its replacement of human labor and its social acceptance.

2. Literature review

2.1 AI-based service

The integration of AI robots in marketing endeavors to harness human-like computational capabilities for the purpose of customizing and personalizing service offerings (Huang & Rust, 2021). This study aims to discern the distinct types of AI, categorized as mechanical AI in transactional services, thinking AI in utilitarian services and feeling AI in hedonic services, and their corresponding benefits within the service context, in comparison to interactions with human intelligence (Rust & Huang, 2021). The unequal representation of AIs on the consumer level within AI-enabled e-commerce platforms is propelled by the system's adeptness in interpreting input data and flexibly adapting to tasks (Hermann, 2021; Kaplan & Haenlein, 2019). The ethical and moral discourse surrounding the replacement of human jobs by robotic AI is of heightened significance (Cowls et al., 2021). However, the intricacies pertaining to transparency, autonomy, justice, beneficence and trust underscore the challenge of fully addressing the ethical and moral issues of AI (Floridi et al., 2018). Such a debate gains prominence in the context of AI’s objectives in fostering social goods and sustainable consumption (Hermann, 2021).

AI’s visualization in service settings holds the potential to augment service employees' cognitive capacities and facilitate more intimate consumer relationships (Koo, Xiang, Gretzel, & Sigala, 2021). Notably, thinking AI and feeling AI incorporate emotional data to emulate human empathy, enriching the creation of empathetic elements (Li, Xia, Yu, Xu, & Zhang, 2022). As service jobs emphasize attributes such as judgment, creativity, intuition, emotion and empathy (Khaliq et al., 2022), the literature suggests that feeling and thinking AI evoke more positive consumer attitudes and behaviors compared to mechanical AI (Zhong, Coca-Stefaniak, Morrison, Yang, & Deng, 2022). These emotionally resonant AI types contribute to enhanced consumer experiences, as supported by the Service Robot Acceptance Model (SRAM) (Rességuier & Rodrigues, 2020). Despite its significance, the existing literature remains inconclusive in dissecting the subtleties of socially acceptable AI services. Consequently, this study seeks to elucidate the phenomena of AI by juxtaposing its outcomes against Human Intelligence, thereby evaluating customers' experiential, ethical and moral attitudes within service settings.

2.2 AI-based service levels on ethical and moral issues and social acceptance

The integration of technology and data aggregation within the service domain presents a multifaceted landscape of ethical quandaries. One pivotal ethical concern centers on the deployment of technological advancements such as facial recognition, database linkages and customer surveillance via mobile devices (Condie, Lean, & Wilcockson, 2017). These technologies evoke substantial ethical inquiries, particularly in terms of privacy invasion and responsible data management practices. Another significant moral dilemma, as suggested by Paluch and Tuzovic (2019), revolves around the concept of “persuaded self-tracking”, which involves exerting influence over individuals to monitor their behaviors and activities. The ethical implications of such practices are evident, yet the situation becomes even more intricate when considering the coercive undertones that often underpin these endeavors (Munoko, Brown-Liburd, & Vasarhelyi, 2020).

Moreover, recent developments have sparked ethical and moral concerns in the hospitality industry, particularly concerning the replacement of human labor with robotic AI (Cowls et al., 2021). In the realm of consumption, ethical considerations encompass not only the price and quality of products but also the broader socio-political and environmental implications of purchase decisions (Harrison, Shaw, & Newholm, 2005). This emerging trend reflects an increasing awareness among consumers about the societal impact of their choices and a commitment to aligning their purchases with their values (Hassan, Rahman, & Paul, 2022). In contrast, the concept of morality delves into personal principles and values guiding individual behavior. Shaped by cultural, religious, or philosophical beliefs, morality addresses questions of right and wrong, good and evil and the manner in which individuals should live their lives (Yaprak & Prince, 2019). While ethics tend to be formalized and context-dependent, morality is more subjective and universal across different cultures and belief systems.

Recent discussions in the hospitality industry have likewise underscored these concerns (Cowls et al., 2021). Ethical consumerism is viewed as the conscious selection of products reflecting strong commitments to social responsibility, aligning with an evolving trend in which consumers acknowledge the impact of their choices on the world and strive to harmonize their decisions with their values (Hassan et al., 2022). Morality, in turn, is rooted in an individual's principles and values, often shaped by cultural, religious, or philosophical beliefs, addressing questions of right and wrong and guiding personal conduct (Yaprak & Prince, 2019; Jennings, Mitchell, & Hannah, 2015). The convergence of ethics and morals within the realm of AI service robotization calls for a nuanced understanding, as ethical considerations are more formalized and contextual, while moral values are deeply personal and can transcend cultural and belief boundaries (Paluch & Tuzovic, 2019). In response to the evolving landscape of ethical and moral concerns, organizations are adopting self-regulatory mechanisms to address AI ethics and review AI outcomes responsibly (Cath, Wachter, Mittelstadt, Taddeo, & Floridi, 2018; Martin, Shilton & Smith, 2019). However, these efforts are sometimes criticized for their focus on superficial compliance and the need for more robust regulatory frameworks (Rességuier & Rodrigues, 2020). The ambiguity surrounding transparency, autonomy, justice, beneficence and trust raises significant concerns about the governance of AI’s moral dimensions and social acceptance (Floridi et al., 2018).

This study delves into the pervasive ethical and moral quandaries within the service domain, centering on the intricacies of technology-mediated data aggregation. Accordingly, the sphere of moral considerations of AI takes on a multi-dimensional character, stretching beyond the confines of privacy concerns. The ethical scope of AI encompasses a broader panorama, covering the realm of moral responsibility throughout the stages of design and implementation. This entails the motivations driving designers, the methodologies of persuasion inherent in technological constructs and the potential unintended consequences stemming from the application of persuasive techniques (Paluch & Tuzovic, 2019).

As outlined by the SRAM developed by Wirtz et al. (2018), the acceptance of service robots by consumers is predominantly influenced by three distinct categories of factors: functional components (akin to the Technology Acceptance Model (TAM)), social-emotional aspects and relational attributes. The SRAM and TAM conceptual frameworks serve to enhance our comprehension of how users engage with robots, particularly within service-oriented settings. The varying degrees of social acceptance of AI service robotization are deeply intertwined with the intricate landscape of ethical and moral considerations. Addressing the ethical regulations surrounding AI necessitates a comprehensive analysis of fundamental principles such as transparency, responsibility, privacy, human values, governance motivations and fairness (Jobin, Ienca, & Vayena, 2019; Fjeld, Achten, Hilligoss, Nagy, & Srikumar, 2020; Hu & Min, 2023; Stringam, Gerdes, & Anderson, 2021). This multifaceted examination sets the stage for understanding the intricate interplay between technology and human values, particularly within the domain of service provision.

In this study, the exploration of the social acceptance of robots concerns ethical and moral quandaries. The ethical and moral discourse encompasses a broader vista that includes moral responsibility at every stage of the design and implementation process (Ladeira, Perin, & Santini, 2023). This comprehensive view of the social acceptance of robots prompts an intricate interplay of technological advancements and human values and necessitates a thorough exploration of how the adoption and integration of AI services into various contexts can shape societal perceptions and acceptance. By recognizing ethical and moral considerations within the service domain, this study aims to shed light on the factors that influence social acceptance, thereby contributing to a deeper understanding of the complex relationship between AI technology and human values.

3. Pilot study: stimuli development

Before delving into the specific moral and ethical aspects of AI, it is imperative to explore how consumers perceive the social acceptability of different levels of AI-based services. Categorizations such as mechanical, thinking, or feeling AI hold the potential to elicit varying perceptions of social acceptability based on consumer interactions. Therefore, a pilot study was conducted to test the distinctiveness of mechanical, thinking and feeling AI-based service robots in a service setting. A single-factor, scenario-based experiment with AI type (mechanical vs thinking vs feeling) was conducted with 80 respondents recruited from the SurveyMonkey platform in September 2022, all of whom had experience interacting with service robots. Respondents were randomly assigned to one of the AI conditions. The robotization of AI was manipulated at three levels: mechanical vs thinking vs feeling. In the mechanical AI condition, the service robot was described as a replacement for low-skilled frontline employees, taking and delivering a customer’s orders autonomously. In the thinking AI condition, participants were asked to imagine interacting with a service robot that recommends menus based on dietary information. In the feeling AI condition, participants were guided to imagine a service robot that was socially and emotionally communicable and provided highly interactive service just like a human server at the restaurant. The results of a one-way ANCOVA with AI type as the independent variable and intentions to accept as the dependent variable indicated that social acceptance of AI was significantly higher for feeling (M = 4.87, standard deviation (SD) = 1.25) and thinking AI (M = 4.67, SD = 1.26) than for mechanical AI (M = 4.29, SD = 1.57); F = 3.14, p < 0.05. Further, post hoc analysis showed no statistically significant difference between thinking and feeling AI (mean difference: 0.198, p = 0.379). The pilot study thus demonstrated that social acceptance of thinking and feeling AI service robots does not differ significantly in practical terms (a replication-style analysis sketch follows the list below). Accordingly, we included two levels of AI-robotization in the main studies, mechanical AI and intuitive AI, namely:

  1. Mechanical AI-based service level designed for data collection, by presenting menus and taking orders;

  2. Intuitive (thinking and feeling AI) service level designed for personalization, by recommending food choices and incorporating emotions into service modeling.
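For readers who wish to reproduce this style of analysis, the sketch below simulates acceptance ratings around the reported cell means and runs the omnibus test plus Tukey post hoc contrasts in Python. The simulated data, column names and the omission of covariates are illustrative assumptions, not the authors' materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(42)
n = 80  # pilot sample size reported above
df = pd.DataFrame({"ai_type": rng.choice(["mechanical", "thinking", "feeling"], size=n)})
# Simulate 7-point acceptance ratings centered on the reported cell means
means = {"mechanical": 4.29, "thinking": 4.67, "feeling": 4.87}
df["acceptance"] = [np.clip(rng.normal(means[t], 1.3), 1, 7) for t in df["ai_type"]]

# Omnibus test: does acceptance differ across the three AI types?
model = smf.ols("acceptance ~ C(ai_type)", data=df).fit()
print(anova_lm(model, typ=2))

# Post hoc pairwise contrasts (Tukey HSD), e.g. thinking vs feeling
print(pairwise_tukeyhsd(df["acceptance"], df["ai_type"]))
```

With data resembling the reported means, the thinking-vs-feeling contrast would be nonsignificant, mirroring the rationale for collapsing those two conditions into a single intuitive AI level.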

4. Theoretical framework and hypotheses

4.1 Self-concept – self-brand congruity

Consumers are more likely to perceive products or services as congruent with their self-concept when those offerings project values that align with their own, which increases acceptance and purchase intent. This mental process of comparing the image portrayed by brands, products, or companies to one’s self-image is known as self-congruity. Self-congruency refers to the degree to which a consumer perceives a brand or product as consistent with their self-image or personality (Stevens, Johnson, & Gleim, 2023). Self-congruency theory suggests that consumers are more likely to have positive perceptions and evaluations of products or services that are consistent with their self-concept (Hu, He, & Liu, 2022). According to self-congruency theory, consumers may develop emotional connections with AI robots that are perceived as having a unique personality or identity aligning with the consumer’s self-concept (Shafiee, Ansari, & Mahjob, 2022).

Regarding AI robots, Coeckelbergh (2021) underscores the socio-relational significance embedded within these machines, accentuated by their integration into societal and cultural frameworks. This emphasizes the reciprocal shaping of meaning between humans and robots, illustrating a dynamic interaction. Illustrating the ethical considerations, ethical discourse may inadvertently disregard fundamental individual needs, a perspective evident in the capability approach (Huh, Kim, & Lee, 2023; Kamila & Jasrotia, 2023). While previous ethical studies in technology emphasize basic requirements in technology design, they overlook the pivotal role of nurturing relationships in enabling individuals to optimally harness available resources. In terms of moral concerns, improper human conduct and interactions with robots may potentially lead to moral deterioration. The apprehension extends to the concept of moral deskilling, even in the absence of explicit abusive behavior, which arises because the simulated unconditional recognition by robots could normalize the exertion of control and authority over autonomous entities (Boada, Maestre, & Genís, 2021).

Applying the framework of self-congruency theory, a stronger alignment with AI-based services could heighten attention toward ethical and moral facets. In the realm of replacing human labor with AI robots, substantial implications emerge for the consumer-AI robot relationship (Långstedt, Spohr, & Hellström, 2023). In other words, consumer reception towards AI robots is likely to be positive if the robots align with their values and self-concept. Conversely, if AI robots conflict with the consumer's self-concept, a more negative perception may arise (Wang, Zhu, Jiang, Xia, & Xiao, 2023). This alignment reflects consumers' resonance with the service’s values and principles (Consiglio & van Osselaer, 2022). Studies from Alabed, Javornik, and Gregory-Smith (2022), Confente, Scarpi, and Russo (2020) and Mehta et al. (2022) emphasize the role of self-congruency in shaping consumer perceptions and technology acceptance and suggest that interactions with intuitive AI may evoke concerns about AI’s potential to replace human jobs, thereby affecting social acceptance.

Furthermore, the use of AI robots to replace human labor can have significant implications for the consumer-AI robot relationship (Långstedt et al., 2023). For example, when consumers interact with intuitive AI that has intuitive (i.e. thinking and feeling) characteristics, they may become more concerned about the potential for AI to completely replace human labor in the future, which can lead to contradictions and reduced social acceptance. In this context, if the AI robot replaces human workers in a way that aligns with the consumer’s values and self-concept, such as increasing efficiency or promoting sustainability, the consumer may be more likely to accept the robot (Stevens et al., 2023). However, if the AI robot replaces human workers in a way that conflicts with the consumer’s self-concept, such as causing job loss or decreasing wages, the consumer may have a more negative perception of the robot (Wang et al., 2023). Consumers who interact with mechanical AI and focus on its ability to increase service efficiency may pay less attention to the ethical and moral issues of AI replacing human labor, as they see the use of mechanical AI as a complement rather than a threat to human labor (Lou, Kang, & Tse, 2022). That is, when consumers feel a strong connection with an AI-based service, they may be more likely to scrutinize the service’s ethical and moral implications and hold it to a higher standard. This may lead to increased attention towards these aspects of the service. Therefore, it is predicted that self-congruency theory may play a role in how consumers perceive and interact with AI robots when they replace human labor (See Figure 1).

H1.

Interacting with mechanical AI that aims to increase efficiency aligns more strongly (vs weakly) with the consumer’s ethical congruency than interacting with intuitive AI, which raises concern about human labor replacement, thus resulting in greater social acceptance.

H2.

Interacting with mechanical AI that aims to increase efficiency aligns more strongly (vs weakly) with the consumer’s moral congruency than interacting with intuitive AI, which raises concern about human labor replacement, thus resulting in greater social acceptance.

4.2 Service capabilities — psychological distance

One of the most frequently discussed concepts in human-AI interaction is anthropomorphism (Li & Sung, 2021), defined as the tendency to attribute human-like characteristics, emotions and behavioral intentions to nonhuman agents (Eyssel & Kuchenbrandt, 2012). The underlying mechanism of anthropomorphism has been measured using psychological distance in evaluating how intimately AI agents react when placed within service encounters (Ahn, Kim, & Sung, 2021). According to construal level theory (Trope & Liberman, 2003), high-level construal is more abstract and decontextualized, whereas low-level construal is more concrete and contextualized. When consumers know much about AI service robots, they are more likely to form a low-level, concrete construal and perceive certainty in the event. On the other hand, when AI service events take place with ambiguity, consumers form a high-level, abstract construal and tend to move away from the present service experience (Yudkin, Pick, Hur, Liberman, & Trope, 2019). This phenomenon can be further explained as the psychological distance of a certain object (Trope & Liberman, 2010). Ahn et al. (2021) demonstrated that people develop different levels of psychological construal depending on the extent to which AI agents show human-like features.

Psychological distance refers to the perceived distance between an individual and an object, event, or person (Alaoui, Valette-Florence, & Cova, 2022). Specifically, psychological distance is reflected in a goal’s desirability and feasibility, which correspond to high and low levels of construal assessment, respectively. The feasibility construal of psychological distance describes the ease or difficulty of interacting and achieving the final service outcome, whereas the desirability construal establishes the value of that action’s end state (Alaoui et al., 2022). In the case of AIs, psychological distance can refer to the perceived social, temporal, spatial, or hypothetical distance between the user and the AI. Converging evidence demonstrates that the construal level perceived between consumer and AI-related technology is measured by desirability and feasibility, which reflect the high- and low-level features of construal (Suzuki, 2019). Desirability refers to the outcome value produced by AI-robotization, whereas feasibility resonates with the degree of ease or difficulty of achieving the end outcome (Yudkin et al., 2019). Applied to the AI-robotization context, where the end outcome is most central to the AI-robotization feature, desirability captures the value placed on obtaining the end outcome, while feasibility reflects subordinate features, such as the ease of interactions with the AI-robotization on the path to obtaining that outcome (Trope, Liberman, & Wakslak, 2007).

Construal level theory (Trope & Liberman, 2003) argues that the further the psychological distance a consumer perceives from AI-robotization, the more weight they place on its desirability as opposed to its feasibility. For example, consumers might judge anthropomorphic AI-robotization as more attractive and desirable. However, as human-AI interaction becomes more complex with varying degrees of intuitive robotization, consumers might perceive it as less feasible. Therefore, psychological distance is deemed a reliable predictor of consumer psychological proximity in human-AI interaction (Ahn et al., 2021; Alaoui et al., 2022). When consumers interact with mechanical AI that satisfies their immediate feasibility goal of increasing service efficiency, they pay less attention to the ethical and moral issues of AI replacing human labor, as they view the transaction-based tasks of mechanical AI as a complement to, rather than a threat to, human labor (Wang et al., 2023).

On the other hand, when consumers interact with intuitive AI whose intuitive characteristics heighten desirability, their concerns about AI replacing human labor entirely in the near future increase, and this contradiction reduces social acceptance (Lou et al., 2022). We propose that a person puts more weight on desirability (e.g. attractiveness) than on feasibility when assessing the service experience of an intuitive AI, as it provides higher levels of human-like performance. In turn, a person puts more weight on feasibility (e.g. ease of interaction) than on desirability when assessing the service experience of a mechanical AI, as it offers functional attributes (Figure 2). Thus, we hypothesize that:

H3.

Interacting with mechanical AI that prioritizes feasibility construal leads consumers to have fewer ethical concerns than interacting with intuitive AI that prioritizes desirability construal and heightens concern about human labor replacement, thus resulting in greater social acceptance.

H4.

Interacting with mechanical AI that prioritizes feasibility construal leads consumers to have fewer moral concerns than interacting with intuitive AI that prioritizes desirability construal and heightens concern about human labor replacement, thus resulting in greater social acceptance.

5. Study 1: how do ethical and moral perceptions intervene with self-brand congruency?

Study 1 investigates how consumers respond to service AIs as a function of evaluating self-brand congruency with different AI-based service levels. We predict that consumers pay more attention to the (a) moral awareness of service and (b) ethical use of AI when interacting with intuitive AI than with mechanical AI. Higher self-brand congruency with an AI-based service could, in turn, lead to increased attention to the ethical and moral aspects of the service.

5.1 Design, sample and procedures

Study 1 used a 2 (AI chef: mechanical vs intuitive) × 2 (self-brand congruency: high vs low) between-subjects design. A total of 348 U.S. adult consumers were recruited in the last week of February 2023 (Mage = 39.5, 50.6% female, 79% dining out at least once a week, 78% married, 76% Caucasian, average income between $50,000 and $79,999). The data were collected via a Qualtrics survey panel and distributed on the Prolific platform during January 2023, enabling random assignment to one of the four experimental conditions. Participants first viewed a scenario in which self-brand congruency was manipulated to be high or low. Then, they were asked to imagine themselves in a setting where either a mechanical or an intuitive AI robot served them. Next, participants were asked to indicate their moral and ethical perceptions of the service experience and their desire for ethical purchase evaluation. Three attention check questions were integrated into the survey to ensure the integrity and credibility of participants’ responses. Participants who failed the attention checks or screening questions were automatically excluded from the analysis.

5.2 Experimental stimuli and measures

The robotization of AI was manipulated at two levels: mechanical vs intuitive. The mechanical AI-based service condition described service robots delivering automatic, purely function-based service. In the intuitive AI-based service condition, participants were asked to imagine interacting with service robots that would replace frontline human jobs. Self-congruency was manipulated with instructions adopted from Holmes (2021). In the hypothetical scenario, the self-congruency of the AI-based service was manipulated to investigate its impact on consumer behavior. Participants were instructed to imagine that the AI-based service has consistent characteristics that match their own personality or self-image, which would result in higher self-brand congruity. The high self-congruency group received the high self-congruency instructions, expected to translate into higher attention, emotional connection, attitudes and purchase intent, while the low self-congruency group did not (Table A1).

In this study, the measures of ethical considerations (Reidenbach & Robin, 1988), moral evaluations (Martinez & Jaeger, 2016) and the social acceptance of robots (Savela, Turja, & Oksanen, 2018) were derived from prior research and customized to align with the specific context of this study. All measurement items were adopted from previous studies and measured on a seven-point Likert-type scale after validity and reliability were assured (Table A2).

5.3 Manipulation checks

A two-way analysis of variance (ANOVA) was conducted with the manipulations as the independent variables and the manipulation check questions as the dependent variables. Participants were asked to rate their perception of the AI-based service they just experienced in terms of its design and purpose on a seven-point scale, where 1 indicated a perception aligned with mechanical AI’s characteristics of complementing and enhancing human abilities while prioritizing safety and user-friendliness and 7 indicated a perception aligned with intuitive AI’s characteristics of replacing low-skilled frontline service jobs and considering the needs and capabilities of human workers. Results of the AI-based service manipulation show a significant main effect of the AI stimuli on the manipulation check question in the intended direction (F = 4.14, p < 0.05; Mmechanical AI = 1.34 vs Mintuitive AI = 4.15). The self-congruency check asked whether participants’ experiences aligned with the intended levels of self-congruency in the AI-based service (F = 4.14, p < 0.001; Mhigh = 5.60 vs Mlow = 4.56). Specifically, participants were asked to rate their overall level of emotional connection, attentiveness, attitudes and purchase intent during their interaction with the service robots on a scale ranging from 1 (no emotional connection, attentiveness, attitudes, or purchase intent) to 7 (strong emotional connection, attentiveness, attitudes and purchase intent). No interaction effect was found between the two manipulations, indicating that the manipulations were successful. Furthermore, to assess the respondents’ perception of the scenario’s realism, a set of realism check questions (e.g. “The scenario seems realistic”) was included. The respondents rated the given scenario as realistic, with an average score of 5.86 (SD = 1.12) on a seven-point Likert scale of agreement. These outcomes collectively suggest that the manipulations were effective.
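As a concrete illustration of this kind of check, a minimal sketch of a 2 × 2 factorial ANOVA on one manipulation-check item follows; the simulated data and variable names are assumptions for demonstration only. A successful check shows a main effect of the manipulated factor on its own check item and no interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
n = 348  # Study 1 sample size
df = pd.DataFrame({
    "ai": rng.choice(["mechanical", "intuitive"], size=n),
    "congruency": rng.choice(["high", "low"], size=n),
})
# Check item for the AI manipulation: build in a main effect of AI level only
df["check_item"] = np.clip(
    rng.normal(4, 1.2, n) + np.where(df["ai"] == "intuitive", 1.4, -1.4), 1, 7)

# Two-way ANOVA: expect a significant C(ai) effect and a
# nonsignificant C(ai):C(congruency) interaction
model = smf.ols("check_item ~ C(ai) * C(congruency)", data=df).fit()
print(anova_lm(model, typ=2))
```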

The main results show a significant main effect of AI (F = 4.58, p < 0.01) on the desire for social acceptance. Receiving service from mechanical AI (M = 4.91) induces a lower desire for social acceptance than receiving service from intuitive AI (M = 5.70). The interaction between the AI chef robotization and self-congruency on the desire for social acceptance was significant, F = 4.15, p < 0.05. Simple effects show that when receiving service from the intuitive AI, participants’ social acceptance differed between high self-congruency (M = 4.86) and low self-congruency (M = 5.22; p < 0.05) on a seven-point Likert scale of agreement. Participants’ age, gender, income and technological familiarity were included as covariates in the analysis.

5.4 Moderated mediation of self-brand congruency

To examine the mediating effects of ethical and moral standards on social acceptance, a bootstrapping procedure with 5,000 samples (95% confidence interval (CI)) was conducted (Hayes, 2018; Model 8). The result of the bootstrap test of moderated mediation showed that ethical standards mediated the effect of AI-based service × self-congruency on social acceptance (Index = 0.3798, 95% CI = [1.0557, 1.79285]). Perceived use of AI was a significant mediator in the high self-congruency condition, with mechanical AI leading to stronger consumer ethical concerns on social acceptance (b = 0.93, 95% CI = [0.4881, 1.6386]) compared to intuitive AI (b = 0.65, 95% CI = [0.1840, 1.5487]). In the low self-congruency condition, neither mechanical AI nor intuitive AI was mediated by ethical concerns in predicting social acceptance (mechanical AI: b = −0.55, 95% CI = [−0.0042, 1.8345]; intuitive AI: b = −0.25, 95% CI = [−0.2485, 1.3859]). H1 is supported. Similarly, moral awareness of service also mediated the effect of AI-robotization × self-congruency on social acceptance (Index = 0.1238, 95% CI = [0.2153, 1.4941]). Perceived moral standards were a significant mediator in the high self-congruency condition (b = 0.21, 95% CI = [0.0257, 0.5187]), but not in the low self-congruency condition (b = 0.09, 95% CI = [−0.1484, 0.4413]). In the high self-congruency condition, perceived use of AI was a significant mediator, with mechanical AI leading to stronger consumer moral concerns on social acceptance (b = 0.92, 95% CI = [0.5841, 2.1452]) compared to intuitive AI (b = 0.48, 95% CI = [0.2475, 1.6844]). Therefore, H2 is also supported.
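Because PROCESS Model 8 is an SPSS/SAS/R macro, the sketch below re-creates its core bootstrap logic from scratch in Python: the index of moderated mediation is the product of the interaction path on the mediator (a3) and the mediator-outcome path (b). All variables, effect sizes and condition codings are simulated assumptions, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 348
X = rng.integers(0, 2, n)   # AI level: 0 = mechanical, 1 = intuitive (assumed coding)
W = rng.integers(0, 2, n)   # self-congruency: 0 = low, 1 = high (assumed coding)
M = 0.4 * X + 0.3 * W + 0.5 * X * W + rng.normal(0, 1, n)  # ethical standards
Y = 0.6 * M + 0.2 * X + rng.normal(0, 1, n)                # social acceptance

def index_of_moderated_mediation(idx):
    x, w, m, y = X[idx], W[idx], M[idx], Y[idx]
    # Mediator model: M ~ X + W + X*W; a3 is the interaction path (index 3)
    a = sm.OLS(m, sm.add_constant(np.column_stack([x, w, x * w]))).fit()
    # Outcome model: Y ~ M + X + W + X*W; b is the M -> Y path (index 1)
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x, w, x * w]))).fit()
    return a.params[3] * b.params[1]

# Percentile bootstrap with 5,000 resamples, as in the reported analysis
boot = np.array([index_of_moderated_mediation(rng.integers(0, n, n))
                 for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Index = {boot.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# A CI excluding zero means the indirect effect through ethical standards
# differs between the high and low self-congruency conditions.
```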

6. Study 2: how does psychological distance regulate social acceptance?

Study 2 investigates how consumers’ psychological distance responds to AI-based service levels of robotization together with ethical and moral concerns. Concerning the varied desirability and feasibility of mechanical and intuitive AI, we propose that a person puts more weight on desirability (e.g. attractiveness) than on feasibility when assessing the service experience of an intuitive AI, as it provides higher levels of human-like performance such as thinking and feeling. In turn, a person puts more weight on feasibility (e.g. ease of interaction) than on desirability when assessing the service experience of a mechanical AI, as it offers functional attributes. Such results will lead to varied social acceptance of AI-based services.

6.1 Design, sample and procedures

Study 2 used a 2 (AI-based service level: mechanical vs intuitive) × 2 (psychological distance: desirability vs feasibility) between-subjects design. The survey was administered using the Qualtrics survey panel and made available on the Prolific platform in February 2023. A total of 324 valid responses were gathered in January 2023, with 48.6% female, 80.2% in the age group of 25-44, 74% Caucasian and 85.2% holding a bachelor’s degree. Three attention check questions were incorporated into the survey to safeguard the quality and validity of participant responses. Participants who did not respond to at least one of the screening questions were excluded from subsequent analysis. Participants were randomly allocated to a specific scenario and then required to complete a questionnaire.

6.2 Experimental stimuli and measures

The experiment comprised scenarios combining AI-based service levels and psychological distance stimuli to characterize the situation. The scenarios for the AI-based service levels remained the same as in Study 1. Psychological distance was manipulated two-fold: desirability vs feasibility. In the desirability condition, participants were told that their interactions with a service robot made them feel that the robot’s simulation of human variables was suitably abstract. In the feasibility condition, participants were instructed to imagine that their interactions with the service robots were task-based and straightforward, like a self-service technology (Appendix 1). The dependent variable of social acceptance was measured with five items adopted from a previous study (Savela et al., 2018; 1 = strongly disagree, 7 = strongly agree) that asked about the general acceptance of robots in the service operation (Cronbach’s alpha = 0.75) (Appendix 2).
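Cronbach's alpha for a multi-item scale like this one can be computed directly from the item variances; the helper below is a standard textbook implementation (not the authors' code), demonstrated on simulated seven-point ratings.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Demonstration on simulated correlated 7-point ratings for five items
rng = np.random.default_rng(0)
latent = rng.normal(4.5, 1.0, size=(324, 1))   # shared acceptance factor
items = np.clip(latent + rng.normal(0, 0.9, (324, 5)), 1, 7)
print(round(cronbach_alpha(items), 2))
```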

6.3 Manipulation checks

An ANOVA was performed with the manipulations as independent variables and the manipulation check questions as dependent variables. The findings for the manipulation of AI-based services indicate a significant main effect of the AI stimuli on the manipulation check questions (see Table 1), where the questions evaluated the extent of perceived efficiency-based automatic service offerings (F = 4.64, p < 0.01; Mmechanical AI = 4.18 vs Mintuitive AI = 2.12). The psychological distance check asked about the perceived simulation of human variables by the AI service, aiming to ascertain the perceived feasibility and desirability of interactions with the service robots. Participants were asked to indicate their level of agreement with the following statement: “Your interactions with the service robots appeal to you as easily understandable.” Participants then selected a number on a scale ranging from 1 (“ambiguous”) to 7 (“apparent”). This validation question allowed us to gain insights into how the feasibility and desirability of AI-based service interactions influenced participants’ perceptions of psychological distance (F = 4.53, p < 0.05; Mfeasibility = 4.27 vs Mdesirability = 3.15). To verify the realism and comprehensibility of the scenarios, two questions were posed regarding scenario realism and comprehension on rating scales of 1 to 7 (1 = highly unrealistic, 7 = highly realistic; 1 = very difficult, 7 = very easy). The outcomes revealed that participants considered all the scenarios realistic and easily understandable, with average scores of 5.82 for realism and 5.64 for comprehensibility. Therefore, the manipulation and scenario realism checks were deemed effective.

6.4 AI-based service level and psychological distance

A 2 × 2 ANCOVA was performed to examine the interaction of AI-based service level and psychological distance. Age (F = 0.39, p = 0.53), gender (F = 0.65, p = 0.42) and income (F = 0.55, p = 0.734) were included as covariates and showed nonsignificant effects; the covariate familiarity with technology was significant (F = 42.80, p < 0.01). The main result revealed a significant main effect of AI-based service level, F = 3.54, p < 0.05, and of psychological distance, F = 4.17, p < 0.05, followed by a significant interaction between AI-based service level and psychological distance, F = 4.63, p < 0.05, denoting a significant moderating effect of psychological distance on the direct effect of the AI-based service level robotizing structure on social acceptance of service (Figure 3). Specifically, participants showed higher social acceptance of the mechanical (vs intuitive) AI service under a more feasible (vs desirable) psychological proximity, Mmechanical = 5.65 versus Mintuitive = 4.51; F = 5.50, p < 0.01. Conversely, participants showed higher social acceptance of the intuitive (vs mechanical) AI service under a more desirable (vs feasible) psychological proximity, Mmechanical = 4.26 versus Mintuitive = 5.58; F = 5.19, p < 0.01.
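A hedged sketch of such a factorial ANCOVA follows; the column names, the simulated crossover pattern and the covariate coding are illustrative assumptions rather than the study's actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
n = 324  # Study 2 sample size
df = pd.DataFrame({
    "ai": rng.choice(["mechanical", "intuitive"], size=n),
    "distance": rng.choice(["feasibility", "desirability"], size=n),
    "age": rng.integers(25, 45, n),
    "gender": rng.choice(["f", "m"], size=n),
    "income": rng.normal(60, 15, n),
    "tech_familiarity": rng.normal(5, 1, n),
})
# Build in the reported crossover: mechanical x feasibility and
# intuitive x desirability yield the highest acceptance
match = ((df["ai"] == "mechanical") & (df["distance"] == "feasibility")) | \
        ((df["ai"] == "intuitive") & (df["distance"] == "desirability"))
df["acceptance"] = np.clip(
    3 + 1.1 * match + 0.3 * df["tech_familiarity"] + rng.normal(0, 1, n), 1, 7)

# 2 x 2 ANCOVA: factors, their interaction, plus the four covariates
model = smf.ols(
    "acceptance ~ C(ai) * C(distance) + age + C(gender) + income + tech_familiarity",
    data=df).fit()
print(anova_lm(model, typ=2))
```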

6.5 Moderated mediation of psychological distance

To examine H3-H4, two separate bootstrapping analyses (sample size = 5,000) using PROCESS model 8 were conducted, with AI-based service level as the independent variable (X), psychological distance as the moderator (W), ethical standards (M1) or moral standards (M2) as the mediator and social acceptance of service as the dependent variable. The first analysis showed a significant increase in the amount of explained variance in social acceptance of service mediated by ethical standards given the AI-based service level robotization and psychological distance (ΔR2 = 0.0708, p < 0.05). The second analysis found that the inclusion of AI-based service level robotization and psychological distance produced a significant increase in the explained variance in social acceptance of service mediated by moral standards (ΔR2 = 0.0711, p < 0.05). A subsequent indirect effect analysis, including technology familiarity as a covariate, demonstrated that the relationship between moral standards and social acceptance of service was statistically significant (Index: 0.1136, 95% CI [0.2153, 0.5632]) when interacting with both AI-based service level robotization and psychological distance. The indirect effect analysis found that the effect on social acceptance of service was statistically significant (Index: 0.2796, 95% CI [0.1205, 0.9795]) at all levels of AI-based service level robotization and psychological distance.

Specifically, mechanical AI (b = 0.2643, 95% CI = [0.3499, 0.8786]) that prioritizes feasibility construal (b = 0.2334, 95% CI = [0.0082, 0.6518]) leads consumers to have fewer ethical concerns compared to interacting with intuitive AI (b = 0.0834, 95% CI = [0.5640, 0.7308]) that prioritizes desirability construal (b = 0.1198, 95% CI = [0.0991, 0.4428]), which leads to increased concern about human labor replacement, thus resulting in greater social acceptance. Therefore, H3 is supported. Similarly, mechanical AI (b = 0.2313, 95% CI = [0.3470, 0.8096]) that prioritizes feasibility construal (b = 0.3663, 95% CI = [0.0203, 0.3485]) leads consumers to have fewer moral concerns compared to interacting with intuitive AI (b = 0.0495, 95% CI = [0.5689, 0.6679]) that prioritizes desirability construal (b = 0.0868, 95% CI = [0.2053, 0.4231]), which leads to increased concern about human labor replacement, thus resulting in greater social acceptance, supporting H4 (See Table 2).

7. General discussions

Service robots are the next wave in the hospitality industry’s service model design and an increasingly common route to service improvement owing to green labor practices and service innovation (e.g. Lin, Cui, Wang, Wu, & Lin, 2022). This study echoes prior endeavors that champion the innovation and advantages of service robots (Fu, Zheng, & Wong, 2022) while pinpointing areas within the realm of service that demand a special focus on ethical and moral concerns. Specifically, this study suggests that the potential misuse of advanced AI-based service robots arises when ethical and moral standards are not upheld, particularly regarding the repercussions of service robot interactions that could impact consumer social acceptance of service robots. This highlights the importance of well-being-supportive design, which offers guidelines to enhance psychological well-being in user experiences (Peters, 2023). This perspective aligns with the argument that giving precedence to moral and ethical factors in technology design, including service robots, promotes positive user interactions. Adhering to moral and ethical principles during design can alleviate disruptions in self-congruency and cognitive dissonance, potential contributors to negative effects on consumer psychological well-being.

In addition, the investigation into service robots, customers and service employees brings to light gaps in comprehending human-robot interactions and the role of service robots in shaping consumer experiences (e.g. Lu et al., 2020). Exploring how the moral and ethical facets of service robots impact consumer psychological well-being plausibly constitutes a void in this domain. Addressing these gaps and integrating ethical considerations into service robot design could furnish a more holistic comprehension of how technology influences consumer well-being. Thus, bridging psychological distance with the AI-based service level is noteworthy in the negotiation of the ethical and moral standards that determine whether to accept service robots (Hermann, 2021). This is a critical point drawn from the service Robot Acceptance Model (RAM) (Purkitt, 2019), in which previous robotic research emphasized that human-robot interactions are decided by users’ assessment of the functionality and trustworthiness of humanoid robots (e.g. Parvez, Öztüren, Cobanoglu, Arasli, & Eluwole, 2022). This assessment of robot acceptance also appears to be valid when weighing the ethical and moral standards of service consumption.

Furthermore, Study 1 shows that consumers who sense higher self-brand congruency with the AI service pay more attention to the social acceptance of service when interacting with a mechanical AI-based service level. Interestingly, when interacting with intuitive AI-based service levels, consumers report greater ethical and moral concerns about intuitive AIs, resulting in less social acceptance of the service, especially when the intuitive AI threatens human replacement. The findings of Study 2 showed that the level of social acceptance is higher for mechanical AI with feasibility features than for mechanical AI with desirability features, and higher for intuitive AI with desirability features than for intuitive AI with feasibility features. Our findings echoed previous attempts to advocate the innovation and benefits of service robots (Fu et al., 2022) but also pinpointed the areas of service that require special attention to the ethical and moral issues of AI-robotization. It is critical to know that consumers prefer intuitive AI in terms of social acceptance owing to its preferable desirability. However, when ethical and moral concerns arise from using intuitive AI to replace human labor, the results indicate reduced social acceptance. Such results shed light on what consumers expect when interacting with AI service robots and how such expectations of feasibility and desirability could align with the ethical and moral standards of replacing human labor. Bridging psychological construal with AI-robotization (Hermann, 2021), our findings suggest that marketers need to consider how to balance the levels of AI-robotization and service capabilities to reduce customers’ concerns about human labor replacement. This study advances the Service Robot Acceptance Model (SRAM) literature by addressing the issues of social acceptance concerning the varied desirability and feasibility of mechanical and intuitive AI. Despite the numerous studies examining service robots, little attention has been paid to the ethical and moral attributes of AI service robots for consumers, which could lead to the replacement of service personnel. Our findings suggest that it is important to develop an organizational code of AI ethics regarding human labor replacement.

7.1 Theoretical contribution

Drawing from such a facet, this study advances Service RAM by identifying the functionality and acceptance of the AI type in measuring consumers’ consumption intention. First, this research provides a ground-breaking theoretical contribution by demonstrating that there is no significant difference between “thinking” and “feeling” AI in practice.

It is possible that there is no difference between thinking and feeling AI because current AI technology is still in the early stages of development, and the development of truly “thinking” and “feeling” AI may be decades or even centuries away. Additionally, the fundamental nature of human consciousness and emotions is still not fully understood, which makes it difficult to replicate these traits in machines. It is arguable that emotions are closely linked to physical embodiment, which means that true emotional intelligence may require robots capable of experiencing the physical sensations that underlie emotions in humans. Therefore, as our pilot study result shows, until AI technology advances significantly, it is unlikely that there will be a clear difference between thinking and feeling AI.

Second, findings on self-brand congruency (high vs low) reveal disaccord in favor of human-like robotic AI-based service levels (mechanical vs intuitive) when examining their ethical and moral standards. The existing body of literature provides a foundation for this research development. Nyholm (2020) investigates the ethical aspects of human-robot interaction, examining the ethical and moral facets of anthropomorphism in service robots and emphasizing its consequences for consumer well-being when ethical standards are overlooked. Similarly, Du and Xie (2021) delve into AI’s ethical challenges, aligning with the focus on service robots to underscore the repercussions of disregarding ethical principles in anthropomorphic design. Choi and Wan (2021) also shed light on the rise of service robots, which this research complements by offering an ethical and moral viewpoint on consumer interactions. Ding et al. (2022) conducted a systematic review of anthropomorphism in hospitality, and this research contributes to this discourse by emphasizing the ethical dimensions and potential negative effects on consumer psychological well-being. By extending these discussions, the current research delves further into the ethical and moral aspects of anthropomorphism in service robots, enhancing the literature’s comprehension of its influence on consumer well-being.

Lastly, despite the numerous studies examining service robots, little attention has been paid to the ethical and moral attributes of AI service robots for consumers. The current research contributes to existing service robot research by identifying the effect of AI service robots on consumers’ desire to purchase. This research also advances construal level theory (Trope & Liberman, 2003) by extending its application to AI service robots through psychological distance from AI-based service levels. The findings revealed desirable psychological proximity with intuitive AI and feasible psychological proximity with mechanical AI in shaping the level of social acceptance of AI service encounters. Although intuitive AIs might be the next stream of research given their life-changing applications, consumers’ certainty when interacting with intuitive AI is not in tune with its advanced and complex computations. Therefore, this study argues that when AI service robots are used to perform service tasks, the positive effect of these robots on consumers’ social acceptance may be conditioned by the construal level of psychological proximity with the AI robots, confirming the RAM model. Moreover, as AIs become more pervasive, an advanced service robot model should consider AI trust alongside increasingly greater ethical and moral responsibility.

7.2 Practical implications

By following ethical and moral values, hospitality operators can design and implement AI technologies that are socially acceptable and aligned with consumers' values and beliefs. The study's findings can guide the design of these technologies, including the ethical and moral standards of service robots or AI, to promote greater social acceptance among consumers. In the absence of rules and regulations governing AI technologies, ethical service design and the moral acceptance of AI services have become important topics of discussion. The results could help inform the development of guidelines and regulations for AI-based service robotization in the hospitality industry and beyond, promoting ethical and socially responsible business practices. Social robots, known for their uniquely "interpersonal" interaction with humans, could play a role in shaping human moral character (Boada et al., 2021). Misconduct in human-robot interactions may therefore lead to moral corruption, and the simulation of unconditional recognition by robots could normalize the exertion of control and power over autonomous agents (e.g. Hunkenschroer & Luetge, 2022). Such biases can stem not only from an AI's decision-making processes but also from the underlying criteria used to predict ethical and moral job performance: criteria that may be technologically validated but not ethically or morally aligned.

The study's findings also deepen our understanding of the consumer-AI robot relationship by addressing the often-overlooked fundamental needs of individuals, as evidenced by the self-congruency approach. While this approach addresses basic needs in technology design, it may not adequately account for the caring relationships that empower individuals to harness available resources (Huh et al., 2023; Kamila & Jasrotia, 2023). Coeckelbergh (2021) highlights the profound socio-relational meaning embedded in robots through their integration into social and cultural contexts, underscoring the co-shaping of meaning between humans and robots. The replacement of human labor by AI robots can significantly alter this relationship (Långstedt et al., 2023). The introduction of intuitive AI prompts considerations about AI's potential to replace human labor, influencing consumer acceptance: AI robots that align with consumers' values and self-concept are more likely to be embraced, while those conflicting with the self-concept may elicit negative perceptions (Wang et al., 2023).

As hospitality AI service regulation is currently technology-dominated (vs consumer-dominated), it may leave room for unethical behavior. While business ethics, sustainability and corporate social responsibility have grown in popularity, concerns about management effectiveness occasionally outweigh the ethical issues raised by AI technologies. In essence, this pioneering study provides a roadmap for businesses navigating the ethical and moral implications of AI-based service robotization. By integrating ethical and moral values into AI technology design and implementation, businesses can not only enhance value for consumers but also contribute to the broader social good. These actions resonate with the growing movement to align technology with ethical and moral principles, as highlighted by recent research (Cath et al., 2018; Martin et al., 2019; Jobin et al., 2019; Fjeld et al., 2020; Hu & Min, 2023; Stringam et al., 2021). Ultimately, the integration of AI robots, together with conscientious consideration of ethical and moral dimensions, paves the way for responsible and impactful technological progress and positive societal outcomes. Hospitality companies must consider approaches to addressing ethical issues in service settings. In particular, an organizational code of AI ethics can help ensure the ethical use of AI within organizations and keep firms aware of the potential shortcomings of recruiting algorithms, thereby fostering AI inclusion and equity. It is also crucial to anchor ethics competencies at the level of human employees, for example by developing employees' ability to audit algorithms, data and design processes (Georgieva, Lazo, Timan, & van Veenstra, 2022), and to modify business models in ways that contribute to the trustworthiness of the technology.

When cultivating AI service functionality at the operational level, the use of AI systems should be given careful consideration, particularly in situations involving self-brand congruency and the upholding of consumer rights. AI systems should be specifically monitored in applications affecting consumers' and employees' fundamental rights, and should offer user-friendly, multi-dimensional performance functions that stakeholders accept with sufficient certainty and accuracy. More thoughtful AI controls need to be integrated into corporate strategy alongside overall ethical and moral monitoring regulations, and a feasible ecosystem of AI capabilities, including data management, big data and deep learning, should be implemented at the various stages of AI service artifacts.

7.3 Limitations and future research

Several limitations arise from the study. First, the study focused only on AI service self-brand congruency. In keeping with its ethical and moral orientation, future studies are encouraged to explore other self-congruency factors that can further influence the ethical and moral standards of service acceptance. Second, psychological distance was examined only as a moderator in this research, yet much remains to be discovered about the causes of psychological proximity. For example, future studies could uncover the motivations behind psychological proximity and how its origins further affect ethical and moral standards, thereby enriching the RAM model. The relationship between the psychological feasibility and desirability of AI service revealed new dimensions at the end of the study, which merit further examination. Finally, the terminology used in the AI field may differ or become more clearly classifiable as more comprehensive and universally applicable AI service robot studies are conducted.

While this study provides valuable insights into the development of AI, its limitations must be acknowledged. One notable limitation is that the conceptual model covers a limited set of variables. The scenario of mechanical vs intuitive AI robots, though insightful, represents a simplified context and may not fully capture the complexities of real-world AI interactions. Future studies could develop an updated research framework that incorporates a broader range of variables and more realistic scenarios, allowing a more comprehensive exploration of the factors influencing AI development and its impact on human behavior and decision-making. Future research could also delve deeper into specific industries or applications of AI, enabling a more nuanced understanding of the dynamics at play. How self-congruency and ethical considerations intersect in the context of AI-based services remains an interesting research question. However, the relationship between self-congruency and attention to the ethical and moral aspects of an AI-based service may depend on various factors, such as the individual's values, the specific ethical and moral issues at play and the context in which the service is used.

Figures

Figure 1: Conceptual model of Study 1

Figure 2: Conceptual model of Study 2

Figure 3: Interaction effects of AI-based service level and psychological distance

Study 2 manipulation test results

AI-based service level: Mechanical M = 4.18 vs Intuitive M = 2.12; F = 4.64, p < 0.01
Psychological distance: Feasibility M = 4.27 vs Desirability M = 5.15; F = 4.53, p < 0.05

Source(s): Table by author(s) analysis
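
The manipulation checks above rest on one-way ANOVAs comparing condition means on the manipulated attribute. As a minimal sketch only (the data, sample sizes and variable names below are simulated and hypothetical, not the study's actual analysis script), such a check could look like this in Python:

```python
# Minimal sketch of a manipulation check via one-way ANOVA.
# Hypothetical, simulated data; not the study's analysis script.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical 7-point ratings of how "mechanical" the AI felt, by condition,
# loosely centered on the reported condition means (4.18 vs 2.12).
mechanical = rng.normal(loc=4.18, scale=1.2, size=100).clip(1, 7)
intuitive = rng.normal(loc=2.12, scale=1.2, size=100).clip(1, 7)

# One-way ANOVA: does the mean rating differ across the two conditions?
f_stat, p_value = stats.f_oneway(mechanical, intuitive)
print(f"Mechanical M = {mechanical.mean():.2f}, Intuitive M = {intuitive.mean():.2f}")
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # a successful manipulation yields p < .05
```

A significant F statistic, as reported in the table, indicates that participants perceived the two AI conditions as intended.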

Results of moderated mediation

Indirect effect through ethical standards (Xi → M1 → Y); indirect effect through moral standards (Xi → M2 → Y); total direct effect (Xi → Y):

AI-based service (X1)
Mechanical AI: ethical 0.2643 [0.3499, 0.8786]; moral 0.2313 [0.3470, 0.8096]; direct 0.6121 [0.7627, 1.9869]
Intuitive AI: ethical 0.0834 [0.5640, 0.7308]; moral 0.0495 [0.5689, 0.6679]

Psychological distance (X2)
Desirability: ethical 0.1198 [0.0991, 0.4428]; moral 0.0868 [0.2053, 0.4231]; direct 0.7722 [0.6145, 2.159]
Feasibility: ethical 0.2334 [0.0082, 0.6518]; moral 0.3663 [0.0203, 0.3485]

X1 × X2: ethical 0.2796 [0.1205, 0.9795]; moral 0.1136 [0.2153, 0.5632]; direct 0.3835 [1.0539, 1.8209]

Note(s): Coefficients were standardized, and their 95% confidence intervals were indicated in brackets. The estimates were obtained using bootstrapped standard errors

Source(s): Table by author(s) analysis
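
The note above indicates that the indirect effects were estimated with bootstrapped standard errors, in the spirit of Hayes's (2018) conditional process analysis. As a minimal sketch under simulated data (the variable names, coefficients and sample size below are hypothetical, not the study's code), a percentile-bootstrap confidence interval for a single indirect effect X → M → Y could be computed as follows:

```python
# Sketch of a percentile bootstrap for an indirect effect a*b (X -> M -> Y),
# in the spirit of Hayes (2018). Data and names are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Simulated data: X (condition), M (e.g. ethical standards), Y (e.g. social acceptance).
x = rng.integers(0, 2, n).astype(float)        # 0 = mechanical, 1 = intuitive
m = 0.5 * x + rng.normal(0, 1, n)              # mediator depends on X
y = 0.4 * m + 0.3 * x + rng.normal(0, 1, n)    # outcome depends on M and X

def ab_path(x, m, y):
    """Indirect effect a*b from two OLS fits: M ~ X and Y ~ X + M."""
    a = np.polyfit(x, m, 1)[0]                     # slope of M on X
    X = np.column_stack([np.ones_like(x), x, m])   # design matrix for Y ~ 1 + X + M
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]    # coefficient of M
    return a * b

# Resample cases with replacement and recompute a*b each time.
boot = np.array([
    ab_path(x[idx], m[idx], y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(5000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {ab_path(x, m, y):.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

An indirect effect is treated as significant when its bootstrap confidence interval excludes zero; the moderated version conditions these estimates on the moderator's levels.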

Stimuli

AI-based service
Mechanical AI: Imagine that you interact with self-service technology that is designed to complement and enhance human abilities, rather than replace them, and to operate in ways that are safe, transparent and easy for humans to understand.
Intuitive AI: Imagine that you interact with self-service technology that is designed to replace low-skilled frontline service jobs and takes into account the needs and capabilities of human workers.

Self-brand congruency
High: During the interaction with the service robots, you feel strong emotions that translate into higher attention, emotional connection, attitudes and purchase intent.
Low: During the interaction with the service robots, you feel no strong emotional connection, attitudes or purchase intent.

Psychological distance
Feasibility: Your interactions with the service robots entertain people like you who think the service interaction is task-based and straightforward as a self-service technology.
Desirability: Your interactions with a service robot make you feel that the application of these service robots in the simulation of human variables is abstract enough.

Measurements used in Studies 2 and 3

Ethical evaluation of use (EE): Study 1 Cronbach's alpha = 0.94; Study 2 Cronbach's alpha = 0.95
EE1: The AI [in this scenario] is misleading the appraiser of service
EE2: The AI [in this scenario] is over-eager in selling things that I don't need
EE3: The AI [in this scenario] is withholding information when interacting with me
EE4: The AI [in this scenario] fails to honor a warranty during and after the service

Moral awareness of service (MA): Study 1 Cronbach's alpha = 0.89; Study 2 Cronbach's alpha = 0.92
MA1: I am aware that the purchase from this AI [in this scenario] could harm the original service experience that I had with real service employees
MA2: I am aware that the purchase from this AI [in this scenario] could harm the employees' organizational hiring, training, or recruitment activities (by replacing some of their jobs)
MA3: I am aware that the purchase from this AI [in this scenario] could indirectly support organized activities in evading responsibilities for the consequences of robotic hazards
MA4: I am aware that the purchase from this AI [in this scenario] could neglect labor and service standards

Social acceptance (SA): Study 1 Cronbach's alpha = 0.91; Study 2 Cronbach's alpha = 0.83
SA1: I think such AI is accepted in work tasks related to hospitality by the general public
SA2: I think such AI is accepted as a substitute for tools or equipment and servers and is even referred to as a social actor or a citizen
SA3: I think such AI is perceived as more desirable than servers for providing a service experience
SA4: I think such AI is perceived as having a positive effect on creating service that is humanly desired by consumers
SA5: I think such AI is well accepted by service professionals and consumers

Note(s): EE = ethical evaluation of use; MA = moral awareness of service and SA = social acceptance
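
The reliability figures above are Cronbach's alpha values, alpha = k/(k - 1) × (1 - Σ item variances / variance of the summed score). As a brief illustration of that computation only (the responses and names below are simulated and hypothetical, not the study's data), a Python sketch:

```python
# Minimal Cronbach's alpha computation on hypothetical, simulated responses.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(0, 1, 150)                     # shared trait drives all items
ee_items = np.column_stack([                       # four EE-style items (hypothetical)
    latent + rng.normal(0, 0.4, 150) for _ in range(4)
])
print(f"alpha = {cronbach_alpha(ee_items):.2f}")   # high alpha, akin to the 0.94-0.95 reported
```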

Appendix 1
Appendix 2

References

Aguirre-Rodriguez, A., Bosnjak, M., & Sirgy, M. J. (2012). Moderators of the self-congruity effect on consumer decision-making: A meta-analysis. Journal of Business Research, 65(8), 1179–1188.

Ahn, J., Kim, J., & Sung, Y. (2021). AI-powered recommendations: The roles of perceived similarity and psychological distance on persuasion. International Journal of Advertising, 40(8), 1366–1384.

Alabed, A., Javornik, A., & Gregory-Smith, D. (2022). AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda. Technological Forecasting and Social Change, 182, 121786.

Alaoui, M. D., Valette-Florence, P., & Cova, V. (2022). How psychological distance shapes hedonic consumption: The moderating role of the need to justify. Journal of Business Research, 146, 57–69.

Belk, R. (2021). Ethical issues in service robotics and artificial intelligence. The Service Industries Journal, 41(13-14), 860–876.

Berezina, K., Ciftci, O., & Cobanoglu, C. (2019). Robots, artificial intelligence, and service automation in restaurants. In Ivanov, S., & Webster, C. (Eds), Robots, artificial intelligence, and service automation in travel, tourism and hospitality (pp. 185–219). Bingley: Emerald Publishing.

Bharwani, S., & Mathews, D. (2021). Post-pandemic pressures to pivot: Tech transformations in luxury hotels. Worldwide Hospitality and Tourism Themes, 13(5), 569–583.

Boada, J. P., Maestre, B. R., & Genís, C. T. (2021). The ethical issues of social assistive robotics: A critical literature review. Technology in Society, 67, 101726.

Bock, D. E., Wolter, J. S., & Ferrell, O. C. (2020). Artificial intelligence: Disrupting what we know about services. Journal of Services Marketing, 34(3), 317–334.

Breidbach, C. F., & Maglio, P. (2020). Accountable algorithms? The ethical implications of data-driven business models. Journal of Service Management, 31(2), 163–185.

Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence and the 'good society': The US, EU, and UK approach. Science and Engineering Ethics, 24(2), 505–528.

Choi, S., & Wan, L. C. (2021). The rise of service robots in the hospitality industry: Some actionable insights. Boston Hospitality Review, 1–11.

Christou, P., Hadjielias, E., Simillidou, A., & Kvasova, O. (2023). The use of intelligent automation as a form of digital transformation in tourism: Towards a hybrid experiential offering. Journal of Business Research, 155, 113415. doi: 10.1016/j.jbusres.2022.113415.

Coeckelbergh, M. (2021). How to use virtue ethics for thinking about the moral standing of social robots: A relational interpretation in terms of practices, habits, and performance. International Journal of Social Robotics, 13(1), 31–40.

Condie, J., Lean, G., & Wilcockson, B. (2017). The trouble with Tinder: The ethical complexities of researching location-aware social discovery apps. In The ethics of online research (Vol. 2, pp. 135–158). Emerald Publishing.

Confente, I., Scarpi, D., & Russo, I. (2020). Marketing a new generation of bio-plastics products for a circular economy: The role of green self-identity, self-congruity, and perceived value. Journal of Business Research, 112, 431–439.

Consiglio, I., & van Osselaer, S. M. (2022). The effects of consumption on self-esteem. Current Opinion in Psychology, 46, 101341. doi: 10.1016/j.copsyc.2022.101341.

Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021). A definition, benchmark and database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111–115.

Ding, A., Lee, R. H., Legendre, T. S., & Madera, J. (2022). Anthropomorphism in hospitality and tourism: A systematic review and agenda for future research. Journal of Hospitality and Tourism Management, 52, 404–415.

Du, S., & Xie, C. (2021). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129, 961–974.

Esmaeilzadeh, H., & Vaezi, R. (2022). Conscious empathic AI in service. Journal of Service Research, 25(4), 549–564.

Eyssel, F., & Kuchenbrandt, D. (2012). Social categorization of social robots: Anthropomorphism as a function of robot group membership. British Journal of Social Psychology, 51(4), 724–731.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Cambridge, MA: Berkman Klein Center Research Publication, (2020-1).

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707.

Fu, S., Zheng, X., & Wong, I. A. (2022). The perils of hotel technology: The robot usage resistance model. International Journal of Hospitality Management, 102, 103174.

Georgieva, I., Lazo, C., Timan, T., & van Veenstra, A. F. (2022). From AI ethics principles to data science practice: A reflection and a gap analysis based on recent frameworks and practical experience. AI and Ethics, 2(4), 697–711.

Hao, F., Xiao, Q., & Chon, K. (2020). COVID-19 and China's hotel industry: Impacts, a disaster management framework, and post-pandemic agenda. International Journal of Hospitality Management, 90, 102636.

Harrison, R., Shaw, D., & Newholm, T. (2005). The ethical consumer. London: Sage Publications.

Hassan, S. M., Rahman, Z., & Paul, J. (2022). Consumer ethics: A review and research agenda. Psychology and Marketing, 39(1), 111–130.

Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York: Guilford Publications.

Hermann, E. (2021). Leveraging artificial intelligence in marketing for social good—an ethical perspective. Journal of Business Ethics, 179(1), 43–61.

Hilton (2020). Hilton and IBM pilot "Connie," the world's first Watson-enabled hotel concierge. Available from: https://perspectivemagazine.com/140320164400/hilton-and-ibm-pilot-connie-the-worlds-first-watson-enabled-hotel-concierge (accessed 19 August 2021).

Holmes, T. A. (2021). Effects of self-brand congruity and ad duration on online in-stream video advertising. Journal of Consumer Marketing, 38(4), 374–385.

Hu, Y., & Min, H. K. (2023). The dark side of artificial intelligence in service: The 'watching-eye' effect and privacy concerns. International Journal of Hospitality Management, 110, 103437.

Hu, X., He, L., & Liu, J. (2022). The power of beauty: Be your ideal self in online reviews—an empirical study based on face detection. Journal of Retailing and Consumer Services, 67, 102975.

Huang, M. H., & Rust, R. T. (2021). A strategic framework for artificial intelligence in marketing. Journal of the Academy of Marketing Science, 49(1), 30–50.

Huang, M. H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43–65.

Huh, J., Kim, H.-Y., & Lee, G. (2023). "Oh, happy day!" Examining the role of AI-powered voice assistants as a positive technology in the formation of brand loyalty. Journal of Research in Interactive Marketing, 17(5), 794–812.

Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 178(4), 977–1007.

Jennings, P. L., Mitchell, M. S., & Hannah, S. T. (2015). The moral self: A review and integration of the literature. Journal of Organizational Behavior, 36(S1), S104–S168.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.

Kamila, M. K., & Jasrotia, S. S. (2023). Ethical issues in the development of artificial intelligence.

Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons, 62(1), 15–25.

Khaliq, A., Waqas, A., Nisar, Q. A., Haider, S., & Asghar, Z. (2022). Application of AI and robotics in hospitality sector: A resource gain and resource loss perspective. Technology in Society, 68, 101807.

Koo, C., Xiang, Z., Gretzel, U., & Sigala, M. (2021). Artificial intelligence (AI) and robotics in travel, hospitality and leisure. Electronic Markets, 31(3), 473–476.

Långstedt, J., Spohr, J., & Hellström, M. (2023). Are our values becoming more fit for artificial intelligence society? A longitudinal study of occupational values and occupational susceptibility to technological substitution. Technology in Society, 72, 102205. doi: 10.1016/j.techsoc.2023.102205.

Ladeira, W., Perin, M. G., & Santini, F. (2023). Acceptance of service robots: A meta-analysis in the hospitality and tourism industry. Journal of Hospitality Marketing and Management, 32(6), 694–716.

Li, X., & Sung, Y. (2021). Anthropomorphism brings us closer: The mediating role of psychological distance in user–AI assistant interactions. Computers in Human Behavior, 118, 106680.

Li, Y., Xia, X., Yu, A., Xu, H., & Zhang, C. (2022). Duration of an acute moderate-intensity exercise session affects approach bias toward high-calorie food among individuals with obesity. Appetite, 172, 105955.

Li, S., Peluso, A. M., & Duan, J. (2023). Why do we prefer humans to artificial intelligence in telemarketing? A mind perception explanation. Journal of Retailing and Consumer Services, 70, 103139.

Lin, M., Cui, X., Wang, J., Wu, G., & Lin, J. (2022). Promotors or inhibitors? Role of task type on the effect of humanoid service robots on consumers' use intention. Journal of Hospitality Marketing and Management, 31(6), 710–729.

Liu-Thompkins, Y., Okazaki, S., & Li, H. (2022). Artificial empathy in marketing interactions: Bridging the human-AI gap in affective and social customer experience. Journal of the Academy of Marketing Science, 50(6), 1198–1218.

Lou, C., Kang, H., & Tse, C. H. (2022). Bots vs humans: How schema congruity, contingency-based interactivity, and sympathy influence consumer perceptions and patronage intentions. International Journal of Advertising, 41(4), 655–684.

Lu, V. N., Wirtz, J., Kunz, W. H., Paluch, S., Gruber, T., Martins, A., & Patterson, P. G. (2020). Service robots, customers and service employees: What can we learn from the academic literature and where are the gaps? Journal of Service Theory and Practice, 30(3), 361–391.

Martin, K., Shilton, K., & Smith, J. (2019). Business and the ethical implications of technology: Introduction to the symposium. Journal of Business Ethics, 160(2), 307–317.

Martinez, L. F., & Jaeger, D. S. (2016). Ethical decision making in counterfeit purchase situations: The influence of moral awareness and moral emotions on moral judgment and purchase intentions. Journal of Consumer Marketing, 33(3), 213–223.

Mehta, P., Jebarajakirthy, C., Maseeh, H. I., Anubha, A., Saha, R., & Dhanda, K. (2022). Artificial intelligence in marketing: A meta-analytic review. Psychology and Marketing, 39(11), 2013–2038.

Munoko, I., Brown-Liburd, H. L., & Vasarhelyi, M. (2020). The ethical implications of using artificial intelligence in auditing. Journal of Business Ethics, 167, 209–234.

Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. London: Rowman & Littlefield Publishers.

Paluch, S., & Tuzovic, S. (2019). Persuaded self-tracking with wearable technology: Carrot or stick? Journal of Services Marketing, 33(4), 436–448.

Park, H., Jiang, S., Lee, O. K. D., & Chang, Y. (2021). Exploring the attractiveness of service robots in the hospitality industry: Analysis of online reviews. Information Systems Frontiers, 26(1), 1–21. doi: 10.1007/s10796-021-10207-8.

Parvez, M. O., Öztüren, A., Cobanoglu, C., Arasli, H., & Eluwole, K. K. (2022). Employees' perception of robots and robot-induced unemployment in hospitality industry under COVID-19 pandemic. International Journal of Hospitality Management, 107, 103336. doi: 10.1016/j.ijhm.2022.103336.

Peters, D. (2023). Wellbeing supportive design: Research-based guidelines for supporting psychological wellbeing in user experience. International Journal of Human–Computer Interaction, 39(14), 2965–2977.

Pillai, S. G., Haldorai, K., Seo, W. S., & Kim, W. G. (2021). COVID-19 and hospitality 5.0: Redefining hospitality operations. International Journal of Hospitality Management, 94, 102869.

Purkitt, H. E. (2019). Artificial intelligence and intuitive foreign policy decision-makers viewed as limited information processors: Some conceptual issues and practical concerns for the future. In Artificial Intelligence and International Politics (pp. 35–55). Routledge.

Reidenbach, R. E., & Robin, D. P. (1988). Some initial steps toward improving the measurement of ethical evaluations of marketing activities. Journal of Business Ethics, 7(11), 871–879.

Rességuier, A., & Rodrigues, R. (2020). AI ethics should not remain toothless! A call to bring back the teeth of ethics. Big Data and Society, 7(2), 2053951720942541.

Rust, R. T., & Huang, M. H. (2021). The physical economy. In The Feeling Economy (pp. 7–21). Cham: Palgrave Macmillan.

Rust, R. T., & Huang, M. H. (2021). The feeling economy: How artificial intelligence is creating the era of empathy (pp. 438–441). London: Palgrave Macmillan.

Savela, N., Turja, T., & Oksanen, A. (2018). Social acceptance of robots in different occupational fields: A systematic literature review. International Journal of Social Robotics, 10(4), 493–502.

Shafiee, R., Ansari, F., & Mahjob, H. (2022). Physicians' brand personality: Building brand personality scale. Services Marketing Quarterly, 43(1), 48–66.

Stevens, J. L., Johnson, C. M., & Gleim, M. R. (2023). Why own when you can access? Motivations for engaging in collaborative consumption. Journal of Marketing Theory and Practice, 31(1), 1–17.

Stringam, B., Gerdes, J. H., & Anderson, C. K. (2021). Legal and ethical issues of collecting and using online hospitality data. Cornell Hospitality Quarterly, 64(1), 54–62.

Suzuki, S. (2019). Effects of psychological distance on attraction effect. The Journal of Social Psychology, 159(5), 561–574.

Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110(3), 403.

Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance. Psychological Review, 117(2), 440.

Trope, Y., Liberman, N., & Wakslak, C. (2007). Construal levels and psychological distance: Effects on representation, prediction, evaluation, and behavior. Journal of Consumer Psychology, 17(2), 83–95.

Wang, X., Zhu, H., Jiang, D., Xia, S., & Xiao, C. (2023). "Facilitators" vs "substitutes": The influence of artificial intelligence products' image on consumer evaluation. Nankai Business Review International, 14(1), 177–193.

Wirtz, J., Patterson, P. G., Kunz, W. H., Gruber, T., Lu, V. N., Paluch, S., & Martins, A. (2018). Brave new world: Service robots in the frontline. Journal of Service Management, 29(5), 907–931.

Yalcin, G., Lim, S., Puntoni, S., & van Osselaer, S. M. (2022). Thumbs up or down: Consumer reactions to decisions by algorithms versus humans. Journal of Marketing Research, 59(4), 696–717.

Yaprak, A., & Prince, M. (2019). Consumer morality and moral consumption behavior: Literature domains, current contributions, and future research questions. Journal of Consumer Marketing, 36(3), 349–355.

Yudkin, D. A., Pick, R., Hur, E. Y., Liberman, N., & Trope, Y. (2019). Psychological distance promotes exploration in search of a global maximum. Personality and Social Psychology Bulletin, 45(6), 893–906.

Zhong, L., Coca-Stefaniak, J. A., Morrison, A. M., Yang, L., & Deng, B. (2022). Technology acceptance before and after COVID-19: No-touch service from hotel robots. Tourism Review, 77(4), 1062–1080.

Zhu, Y., Zhang, J., Wu, J., & Liu, Y. (2022). AI is better when I'm sure: The influence of certainty of needs on consumers' acceptance of AI chatbots. Journal of Business Research, 150, 642–652.

Further reading

Saydam, M. B., Arici, H. E., & Koseoglu, M. A. (2022). How does the tourism and hospitality industry use artificial intelligence? A review of empirical studies and future research agenda. Journal of Hospitality Marketing and Management, 31(8), 908–936.

Corresponding author

Dan Jin can be contacted at: djin4@utk.edu
