Available to meet: advances in professional communications

E. Burton Swanson (UCLA Anderson School, Los Angeles, California, USA)

Information Technology & People

ISSN: 0959-3845

Article publication date: 20 July 2020

Issue publication date: 10 October 2020

Abstract

Purpose

This viewpoint paper calls into question the current design approach to personal artificial intelligence (AI) assistance in support of everyday professional communications, where a bot emulates a human in this role. It aims to stimulate fresh thought among designers and users of this technology. It also calls upon scholars to more widely share incidental insights that arise in their own encounters with such new AI.

Design/methodology/approach

The paper employs a case of an email exchange gone wrong to demonstrate the current failings of personal AI assistance in support of professional communications and to yield broader insights into bot design and use. The viewpoint is intended to provoke discussion.

Findings

The case indicates that industrial-strength personal AI assistance is not here yet. In particular, designing a personal AI assistant to emulate a human is found to be deeply problematic. The case illuminates what might be called the problem of blinded agency, arising in performative contexts where human, robotic and organizational identities are at least partially masked and where actions, inactions and intentions can too easily disappear in a thick fog of digital exchange. The problem arises where parties must act in contexts not known to each other, and where who is responsible for what in a mundane exchange is obscured (intentionally or not) by design or by the actions (or inactions) of the parties. One insight is that while humans act with a sense of agency to affect outcomes, which naturally invokes a corresponding sense of responsibility for what transpires, bots in social interaction simply act and feign responsibility, as they have no sense of it beyond their code and data. A personal AI assistant is probably best designed to communicate its artificiality clearly. Missing today are distinctive social conventions for identifying machine agency in everyday interactions, as well as an accepted etiquette for AI deployment in these settings.

Originality/value

As a viewpoint contribution, the paper's value lies in stimulating discussion of alternative approaches to the design and use of personal AI assistance in professional communications and of where this technology should be headed. The presented case of an email exchange gone wrong is simple on the face of it, but its examination reveals a number of complexities and broader insights.

Acknowledgements

An earlier version of this paper was presented at the IFIP WG8.2 Conference on “Living with Monsters?” held in San Francisco, December 11–12, 2018. Edgar Whitley chaired the discussion. The author is grateful for the wide-ranging comments received, some subsequent to the discussion, motivating the present revision. The author also thanks Christine Borgman, Mary Culnan and Matt Beane for earlier notes and suggestions and Cynthia Beath for her more recent reading and remarks. The present version has further benefitted from the thoughtful comments and suggestions of two anonymous reviewers. The views expressed here are the author’s own.

Citation

Swanson, E.B. (2020), "Available to meet: advances in professional communications", Information Technology & People, Vol. 33 No. 6, pp. 1543-1553. https://doi.org/10.1108/ITP-06-2019-0311

Publisher

Emerald Publishing Limited

Copyright © 2020, Emerald Publishing Limited
