Guest editorial

Marty J. Wolf (Department of Mathematics and Computer Science, Bemidji State University, Bemidji, Minnesota, USA)
Alexis M. Elder (Department of Philosophy, University of Minnesota Duluth, Duluth, Minnesota, USA)
Gosia Plotka (De Montfort University, Leicester, UK)

Journal of Information, Communication and Ethics in Society

ISSN: 1477-996X

Article publication date: 4 September 2019

Issue publication date: 4 September 2019


Citation

Wolf, M.J., Elder, A.M. and Plotka, G. (2019), "Guest editorial", Journal of Information, Communication and Ethics in Society, Vol. 17 No. 2, pp. 114-118. https://doi.org/10.1108/JICES-05-2019-097

Publisher

Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited


Editorial for creating, changing and coalescing ways of life with technologies

“Congealing” is a word that evokes a sense of unpleasantness where perhaps something inviting had once been. It also implies that things are becoming less fluid and more rigid.

As we began organizing ETHICOMP 2018, we wanted a theme that reflected the impact of technologies on human cultures, practices and lives. Our initial draft of the theme was “Creating, Changing, and Congealing Ways of Life with Technologies.” And while we were eventually persuaded to use a more congenial way of putting the idea (it became “Creating, Changing, and Coalescing Ways of Life with Technologies”), in some ways, it remains true for us that “congealing,” with its connotations of something less pleasant, gets at the original idea. As we incorporate technologies into our practices, much attention is paid to how they change our ways of doing things. But technologies can also help ways of life set up and harden like yesterday’s leftovers – not appealing, yet difficult to budge and sometimes quite unhealthy. Once a particular process is built around a piece of technology, it can become entrenched and increasingly difficult to change. For a historical example, consider how difficult it was to adapt records and software on the eve of the 21st century in response to the so-called “Y2K” problem. In early software development, efficient use of memory was an important design consideration, and it had become standard practice to use a two-digit field for the year. That field would roll from “99” to “00” in the year 2000, causing problems for date-dependent functions. Technologies can also reflect and reinforce existing cultural tendencies. For a recent example, consider the human resources software created by Amazon that used the company’s existing hiring data to train a machine-learning system to rate applicants. The resulting system turned out to be biased against women applicants, downranking resumes that included the words “woman” or “women’s.” Amazon ended up scrapping the project altogether. Because of these kinds of effects, we wanted to encourage people to think beyond well-worn paradigms like “technologies are disruptive” to consider other kinds of effects they can have.
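To see the arithmetic behind the failure, consider a minimal sketch of the two-digit-year problem (a hypothetical illustration written in Python for readability; the affected systems were typically written in older languages such as COBOL, and this code is not drawn from any actual legacy system):

# Hypothetical illustration of the Y2K two-digit-year problem.
# Years were stored as two digits ("99" for 1999) to save memory.

def years_elapsed(start_yy, end_yy):
    # Naive elapsed-time calculation over two-digit year fields.
    return int(end_yy) - int(start_yy)

print(years_elapsed("85", "99"))  # 14: correct for 1985 to 1999
print(years_elapsed("85", "00"))  # -85: nonsense once "99" rolls to "00"

Any date-dependent function built on such fields – interest accrual, expiry checks, sorting by date – inherited the same failure at the rollover.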

As it happened, the imagery of “congealing” proved distasteful enough that the steering committee opted for a more neutral term. But even with the less dramatic wording, the conference attracted a rich and diverse range of papers that examined technological issues from a variety of angles, exactly as we had hoped.

This diversity of approaches underscores the value of interdisciplinary inquiry into computing ethics. What we have in this special issue represents a cross-section of engagements with the impact of technologies on ways of life, and one that invites readers to consider both positive and negative effects in a staggering range of applications. From an empirical engagement with the challenge of coming up with a “fair” algorithm, to an intersectional feminist examination of the Human Brain Project, to a philosophical analysis of how robotic technologies invite us to confront our assumptions about biological exceptionalism, these papers encourage readers to think carefully and critically about the social and ethical impacts of technologies on ways of life.

For this volume, we have, therefore, organized papers according to these three themes: Creating, Changing and Coalescing.

Creating

From a paper that explores how technologies create ways of life, “Technologies of the self and other,” which provides a detailed examination of how self-tracking technology affects interpersonal relationships, to an investigation of how gender informs work on the Human Brain Project, to a case for incorporating Responsible Research and Innovation (RRI) in additive manufacturing, these papers provide a wealth of resources for thinking about how ways of life can be (responsibly) created with the help of technologies.

In their paper “Technologies of the self and other: how self-tracking technologies also shape the other”, authors Katleen Gabriels and Mark Coeckelbergh argue that so-called “quantified self” technologies – things like fitness trackers that operate in the background of a person’s experience, collecting and collating data on movement, steps taken, web browsing habits and various other activities – are creating new ways of seeing other people. These devices do this in addition to their already-recognized role in reshaping how we see ourselves. For example, because they operate in the background of a particular person’s daily activities, they offer an appearance of objectivity that side-steps that person’s subjective experience. They invite comparisons and competitiveness, because one can see how one’s own numbers line up against others who are represented only as quantified data. And they can collapse a sense of difference among users, because one can compare one’s own data to another’s: “Instead of experiencing the other in terms of otherness (alterity), we are at risk to see more of the same: more numbers, more profiles, more statistics, more objectification.” Through a pair of case studies, the authors explore how interpersonal and workplace relationships are being shaped by these technologies, and make the case that the ethical implications of these new features of relationships deserve ongoing attention.

With the powerful statement “there is no such a thing as ‘a woman’,” the paper “Intersectional observations of the Human Brain Project’s approach to sex and gender” by Tyr Fothergill, William Knight, Bernd Stahl and Inga Ulnicane evaluates the Human Brain Project’s (HBP) efforts toward achieving equal representation of women in its workforce and the implications those efforts have for academia, STEM and ICT projects. Despite efforts made by different parties and a range of policies (including Horizon 2020’s “Gender Equality”), women and individuals from less-advantaged backgrounds remain persistently underrepresented. The authors distinguish sex and gender as a biological and a social construct, respectively. Therefore, to avoid context-dependent ambiguity, they:

[…] agree that every person concomitantly possesses multiple identities and aspects which shape their experiences (particularly their experiences of oppression) and that these cannot simply be individually disentangled from the others.

Only by acknowledging that one’s identity is multivariate, and by monitoring more than a single variable, can we step closer to being truly inclusive. Their method is pseudo-auto-ethnographic: they make intersectional observations based on first-hand experiences supported by findings from the relevant literature. This critical look is helpful in identifying barriers to, and methods for, creating a space that supports meaningful participation by all people in large projects. The work culminates in a list of policy recommendations for increasing diversity.

Additive Manufacturing (AM) is a relatively new way of creating objects. RRI is a new way of creating knowledge and processes. George Inyila Ogoh and N. Ben Fairweather’s paper, “The state of the Responsible Research and Innovation programme: A case for its application in additive manufacturing”, describes a pilot study that begins to establish the relationship professionals in AM have with RRI. At first, the results do not look good: in the qualitative study, only one of the five participants from the AM industry had even heard of RRI. Thankfully, the interviews show that the participants do weigh the sorts of social and ethical concerns one considers as part of the RRI process. The study is framed around the anticipate, reflect, engage, act (AREA) approach to RRI. By carefully analyzing the interview transcripts, the authors find evidence that the participants engage in AREA activities, even though they do not do so in a structured or formalized way. The participants identified seven ethical and social issues during the study. Using discussions surrounding intellectual property rights, health-related concerns and the impact that AM may have on employment, the authors demonstrate that at least three of the five participants may have a basis on which to build a more formal RRI approach to AM. There is room to create here: both broader awareness of RRI in the AM industry and better engagement by AM professionals with a broad range of stakeholders.

Changing

From a paper that surveys users’ experiences online in the service of understanding how to improve those experiences, to an investigation of the way robotic technology prompts us to question our own assumptions about bio-exceptionalism (the belief that there’s something special about organic life), these papers present considerations about how introducing technologies can change ways of life, and how we can or ought to involve ourselves in these changes.

Digital technologies change the lives of everyone, regardless of demographic group. In “‘[…] they don’t really listen to people’: Young people’s concerns and recommendations for improving online experiences”, Helen Creswick, Liz Dowthwaite, Ansgar Koene, Elvira Perez Vallejos, VA Portillo, Monica Cano and Christopher Woodard report results from youth-led discussions among 144 younger users of digital technology (mostly aged between 13 and 17). One important finding was that these young people felt disempowered by the terms and conditions of websites. Despite starting with this somewhat dark side of life with technologies for one of the most vulnerable groups of users, the article’s conclusion leaves us with hope and identifies avenues for improvement. To change the experience for young people on social media platforms and similar sites, the authors stress the importance of including “the experiences and views of young people, and that changes to terms and conditions should be co-produced with the young people themselves”.

In “Exceptionalisms in the ethics of humans, animals and machines,” Wilhelm Klein contributes to ongoing scholarship that considers the moral and ethical questions that arise when one begins to contemplate just what sort of moral place robots and artificial intelligences ought to occupy. Rather than take on those questions directly, he identifies a concern with a number of answers to them. His analysis identifies places where others have introduced exceptionalism into their arguments, and he challenges the reader to consider whether the use of exceptionalism is justified in each of those cases. He carefully avoids refuting any of the claims made in the theories, but rather calls to our attention the fact that exceptionalism is prevalent in some well-established and respected contemporary ethical theories. After demonstrating this prevalence, he asks for a change to a non-exceptionalist stance in ethics. He notes that on an evolutionary time scale, the technology of today is still in its infancy and that the “technological entities, which surround us and shape our present-day lives, are nothing like the agents and entities we had millennia to learn to intuit easily.” By making this change and avoiding exceptionalism, we can “assume an ethical stance that places ‘everyone’ (i.e. all entities) on an equal ontological plane”.

Coalescing

These papers examine the way extant beliefs and values can inform work in information technologies. The first paper identifies weaknesses in using values in the design process and proposes a different approach. The remaining papers consider ideas ranging from differing conceptions of fairness at work in algorithm design, to the tangled conceptions of value that inform cybersecurity practices in healthcare, to concerns that are becoming evident about how smart home technologies are being implemented and may, if they go unchallenged, be carried forward into the next generation of domestic robotics. Considering these papers together puts us in a better position to decide whether to continue to act on existing assumptions, or to take steps (even when they are difficult or costly) to change the practices and ways of thinking that inform design decisions.

Ultimately, Wessel Reijers and Bert Gordijn argue that by studying narratives or life plans and grounding them in the virtues, specific technologies can be made better when they are considered within the technical practices that surround their use and deployment. In “Moving from value sensitive design to virtuous practice design”, Reijers and Gordijn carefully critique value sensitive design. While they find much good in the approach, they argue that basing it on “values” leaves the approach susceptible to arbitrariness, and they advocate basing their design process on a system that has a solid – or at least critiquable – philosophical underpinning. They argue that Shannon Vallor’s approach to virtue ethics helps avoid the shortcomings of other approaches. They also adopt Alasdair MacIntyre’s theory of practice so that the life plans of practitioners using the technology are integrated into the design process. This move allows them to incorporate “a variety of other aspects, such as training, education and regulation of the practice” into the design of the technology at hand. In virtuous practice design, the “point of intervention is always a technical practice in which humans and technologies interact.” Their coalescing of three extant theories brings about a new way to think about designing life with technologies.

Given a choice of different algorithms to solve a particular problem, which one would you choose? Why would you make that choice? Helena Webb, Menisha Patel, Michael Rovatsos, Alan Davoust, Sofia Ceppi, Ansgar Koene, Liz Dowthwaite, VA Portillo, Marina Jirotka and Monica Cano explore these questions in their article “‘It would be pretty immoral to choose a random algorithm’: Opening up algorithmic interpretability and transparency.” They report on their work with a group of research participants with backgrounds in technology who were asked a series of questions about the results of running different algorithms on a particular problem. The participants responded individually to a questionnaire and then engaged in a group discussion where they were expected to justify the choices they made in the questionnaire and “to explore the reasons behind differences of selection.” Qualitative analysis of the group discussion produced three key findings. The analysis shows how normative issues, the specific context of application and technical features of the algorithms coalesced to form a basis for participants’ choices of most and least preferred algorithms. The scholars conclude that “even when provided with the same information, participants make different preference selections and rationalise them differently.” They poignantly add that their findings “demonstrate the complexities around algorithms transparency”.

In their paper “Cybersecurity in health – disentangling value tensions,” Michele Loi, Markus Christen, Nadine Kleine and Karsten Weber tackle the thorny problem of integrating cybersecurity into healthcare. They note that some core practices in cybersecurity, such as protecting data by limiting ease of access, come into tension with basic goals of healthcare, such as beneficence and caring for patients as well as possible, which may require increasing accessibility of important patient information. Their detailed survey of research in healthcare cybersecurity makes it clear that there are no easy answers about how to integrate these concerns, and that a choice to prioritize one value or cluster of values in system design will end up with rapidly congealing problems along other dimensions. For example, a secure system that requires complex passwords and protects autonomy with a sophisticated range of customizable privacy settings will raise concerns about accessibility for vulnerable populations for whom these complicated mechanisms pose a significant challenge, and hence introduce social justice considerations. Enabling ease of use with implantable medical devices via wireless controls can open up security vulnerabilities. And making it easy for medical professionals to access data that enables them to provide better care for their patients along many dimensions will also tend to undercut patients’ autonomy, especially where privacy and control over their information are concerned. Their detailed investigation of how core principles in healthcare interact with ICT and cybersecurity values provides a roadmap of the tradeoffs involved, and reminds readers that whichever values are prioritized, the system will tend to make it harder to promote other important concerns going forward.

In “Responsible domestic robotics: exploring ethical implications of robots in the home,” Lachlan Urquhart, Dominic Reedman-Flint and Natalie Leesakul use a survey of attitudes toward existing robotic home technologies to identify entrenched patterns that concern people. From the ethereal cognitive assistance provided by Amazon’s Alexa, to smart thermostats and Roombas, widespread concerns about issues like data collection troubled both those who have so far refused to embrace these technologies and those who factored such concerns into their decisions even when ultimately bringing the devices into their homes. The authors identify pervasive mismatches between users’ concerns and values and designers’ assumptions about what users “really” want. They use this as an opportunity to reflect on how those assumptions have fared in contemporary products before yet more sophisticated household robotics are committed to production. Because these technologies tend to develop incrementally, building on equipment and practices that were part of previous designs, the authors argue that it will be important to consciously reflect on how things are working so far. Observing how problematic patterns have already congealed in current technologies enables people to make informed decisions and to avoid carrying those patterns forward into successive generations. Their survey and analysis thus provide valuable tools for incorporating people’s concerns into these reflections.

If you have stuck with us this long, you will have noted that we failed at our goal of neatly categorizing these papers. Our reflections on the last two papers have left us with some unease. In the first case, there is at least a threat that undesirable features arise in important systems when human values come into contact with technological limitations. In the second, undesirable features may become baked into a system if we do not pay careful attention. Perhaps we needed both words, congeal and coalesce, in our conference theme. There is much to be excited about in considering life with technology. As you read through the articles in this issue, consider how technology is Creating, Changing, Coalescing and Congealing ways of life.
