Anthropomorphising technology

Floral still life created through machine learning, based on the paintings of 18th Century female Dutch painters

In a recent discussion of my work I was asked whether I was anthropomorphising the computer, and if so why. As I explained at the time, if it sounded like that it wasn’t intentional; I may have referred to the computer as ‘it’ a couple of times, but that was purely sloppy language rather than a philosophical standpoint.

However, it did get me wondering. Did it matter if I had been? I understand many of the debates about the flaws of the Anthropocene and the dominance of the anthropocentric, but does it matter when it’s applied to technology?

Anthropomorphism is the attribution of human-like physical or non-physical features, behaviour, emotions, characteristics and attributes to a non-human agent or to an inanimate object. (Epley et al., 2007)

Anthropomorphism provides an intuitive and readily accessible method for reducing uncertainty in contexts in which alternative nonanthropomorphic models of agency do not exist (such as those provided by science or culture). (Epley et al., 2007: 871)

Three factors have been outlined that might influence the application of anthropomorphism (Epley et al., 2007):

  • Elicited agent knowledge: applying human behaviour to non-human agents. As the human becomes more informed about the non-human agent, this new information should revise the anthropomorphic attribution
  • Effectance: the motivation to understand and interact effectively with a non-human agent. It aims to help reduce uncertainty and anxiety regarding the actions of the non-human agent
  • Sociality: establishing a social connection with the non-human agent to fulfil the human social need. It is suggested that if a human lacks human social connection they are more likely to anthropomorphise the non-human (Epley et al., 2007)

I have outlined these factors because I think they shed light on the motivation for anthropomorphising. While it is often criticised, and there is no doubt our anthropocentric world view has done great harm, anthropomorphising can have a positive side: it may be derived from a desire for understanding rather than a wilful imposition of human characteristics.

This has created an interesting dynamic (or vicious cycle?) in that designers are well aware of these human tendencies and as such have been designing accordingly. It seems to be the case that ‘hardware developers aim to apply anthropomorphic features and designs to give humans a familiar feeling with IT because a natural and personal connection to a piece of hard or software is missing.’ (Pfeuffer et al., 2019: 1)

This poses some interesting questions for me. I don’t agree that I anthropomorphise my computer/IT; in fact, I’m more likely to do that to my car, because she has eyes, a mouth, and a nose! Indeed, I don’t like that Siri or Alexa can speak to me, and they have done so on several occasions unprompted. Maybe that is my ‘uncanny valley’? (Mori et al., 2012)

In some ways I do see my machine learning experiments as a form of non-verbal collaboration, but one that has required me to learn the traits of the non-human entity rather than assigning human characteristics to it.

As for the future, as Anna Ridler has said, ‘everything changes when the machine has an opinion.’

References

EPLEY, N., WAYTZ, A. & CACIOPPO, J. T. 2007. On seeing human: a three-factor theory of anthropomorphism. Psychological Review, 114, 864.

MORI, M., MACDORMAN, K. F. & KAGEKI, N. 2012. The uncanny valley [from the field]. IEEE Robotics & Automation Magazine, 19, 98-100.

PFEUFFER, N., BENLIAN, A., GIMPEL, H. & HINZ, O. 2019. Anthropomorphic information systems. Business & Information Systems Engineering, 61, 523-533.
