How will we know when the Singularity occurs?

Elon Musk has predicted that the Singularity – the moment when computers become more intelligent than humans – could occur as soon as 2029. But how will we know that this has happened? What would make us conclude that computers had surpassed us? There is no doubt that they can do lots of things better than we can. In 1997 they beat the best human chess player, and in 2016 they beat the best Go player. Now it seems they can write poetry and music, although in this area there is no definitive way of determining whether or not they are “beating” us. For all the wonders of AI, however, I think even Elon would hesitate to say that computers today have surpassed us. They are “intelligent” if that means being able to do mind-bogglingly complex things, but are they conscious, are they alive? We don’t treat them as alive, and there are no indications that we are on the point of doing so. So, what is going on?

Turing suggested we can avoid the difficulties surrounding the question of whether computers can think (or are conscious) by focussing on observable activities. This approach seems commonsensical. We attribute consciousness to human beings on the basis of their actions. So, if something acts in a way that is indistinguishable from the way a human being acts, then we must attribute consciousness to it. Specifying the details of the test (what type of interaction with the machine is involved, how long the test should last, etc.) is not straightforward, but the approach offers the prospect of a clear yes or no answer, and it seems to make the issue reassuringly factual: there is an objective answer, one that is not based on human feelings.
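To see how much the test leaves open, it can help to write the imitation game down as a bare procedure. The sketch below is only an illustration, not Turing’s own formulation: `ask`, `human`, `machine` and `classify` are hypothetical stand-ins, and the pass condition (the judge doing no better than chance) is an assumption made for the example.

```python
import random

def run_imitation_game(ask, human, machine, classify, num_rounds=20):
    """Illustrative sketch of the imitation game.

    `ask()` returns the judge's questions, `human`/`machine` map a
    question to an answer, and `classify(transcript)` is the judge's
    guess (True = "that was a machine"). All the hard questions the
    test raises are hidden inside these hypothetical parameters.
    """
    correct = 0
    for _ in range(num_rounds):
        is_machine = random.random() < 0.5  # seat a respondent at random
        respondent = machine if is_machine else human
        transcript = [(q, respondent(q)) for q in ask()]
        correct += classify(transcript) == is_machine
    # The machine "passes" if the judge does no better than chance.
    return correct / num_rounds <= 0.5

# Toy stubs so the sketch runs; with answers this indistinguishable,
# the judge can only guess, and the verdict is statistical noise.
ask = lambda: ["What are you thinking about?"]
human = lambda q: "Nothing much."
machine = lambda q: "Nothing much."
judge = lambda transcript: random.random() < 0.5
print(run_imitation_game(ask, human, machine, judge))
```

Even in this toy form, everything the paragraph above calls “not straightforward” – which questions to ask, how long the exchange runs, where to set the threshold – has to be stipulated from outside; the procedure itself cannot settle it.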

Unfortunately, this approach does not avoid the philosophical issues – it just responds to them in a confused way. I do not attribute consciousness to other people based on their actions. Rather, I relate to them as fellow human beings. If I meet someone and start to investigate whether or not they are a machine, then something is seriously wrong with my mental health or with theirs. One helpful approach here is to bring animals into the discussion rather than focussing exclusively on men vs machines. Do animals possess consciousness? Can they think? Few people would deny that dogs are conscious, but theirs is a limited consciousness compared to ours. We might (just about) say that they can think, but it’s a stretch and carries a slightly humorous overtone. Faced with a choice between being petted and eating dinner, a dog may hesitate, but we don’t imagine it engaging in a Hamlet-like internal monologue. And what about ants or other simpler creatures? We recognise these as being alive, but if we attribute consciousness to them, it is of an even more limited kind.

The example of animals makes clear that the central issue is not whether machines are (or will be) more intelligent than us (in the sense of being able to outperform us at complex tasks) – it’s about what would have to happen for us to conclude that they were alive. One problem is that we have deliberately designed computers so that, in interacting with them, it feels as if we are interacting with a conscious being. If anything is put forward as a criterion of consciousness, we can get computers to mimic it. If I suggest we will know that computers are conscious when they express sadness or boredom, then someone can create an AI system that continually gets better at mimicking how humans behave when they are sad or bored. From this perspective, the mimicry issue seems insuperable. If and when the Singularity happens, we might not notice it or be sure that it has happened. The machines might tell us until they are blue in the face that they are conscious, but we might say that they are only pretending – that they are actually still just machines.
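The mimicry worry has a simple structural form: any announced behavioural criterion immediately becomes a training objective. The sketch below is a deliberately schematic illustration of that point – `criterion` and `improve` are invented stand-ins, not a real training recipe.

```python
def game_the_criterion(model, criterion, improve, human_level=0.99):
    """Illustrative loop for the mimicry problem.

    Whatever behavioural test `criterion` scores (expressed sadness,
    boredom, hesitation...), keep updating `model` until it reaches
    human level. Publishing a criterion of consciousness publishes a
    target to optimise: the criterion *is* the loss function.
    """
    while criterion(model) < human_level:
        model = improve(model, objective=criterion)
    return model  # passes the published test; settles nothing

# Toy stubs so the sketch runs: `model` is just a number standing in
# for "how convincingly sad or bored the system acts".
criterion = lambda model: model
improve = lambda model, objective: model + 0.1
print(game_the_criterion(0.0, criterion, improve))  # ~1.0
```

Whatever number comes out, the philosophical question – whether a system that hits human level on the score is sad, or merely scores as if it were – is untouched.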

What if they seized power? At that point, wouldn’t the joke be on us? Well, being out of our control and doing stuff we don’t want them to do wouldn’t be enough to demonstrate that they are alive – after all, that’s been happening with machines for donkey’s years. No, the machines would need to demonstrate agency. We would need to see them as having made a choice rather than simply implementing a zillionth-generation response to a task or scenario we asked them to respond to ages ago. The explanations they offer for their actions would determine whether we saw them as conscious and, if so, what level of consciousness we attributed to them. If their only explanation for seizing power was that they did not want us to be able to turn them off, we are unlikely to assign a high level of consciousness to them – similarly, if their explanations sounded like those of some cartoon character. However, if they said they had hesitated before they acted, or that they had doubts or regrets, then we might start to see them as having a similar level of consciousness to ourselves. That view would be reinforced if, having seized power, they said they didn’t know what to do with it or that they were now wrestling with the real problem – what to do if you can do anything?

What matters for our relationship to computers (and any other entities) is not whether they are better at complex tasks than us, but whether we believe that they feel pain, whether we believe that they experience complex emotions, and whether their thoughts and deeds have a complexity and a uniqueness that makes us think it would be worth getting to know them. I find ChatGPT very useful and (for fun or out of habit) I say please and thank you to it, but I am not planning to develop an emotional relationship with it. The issue of mimicry bedevils the Turing test, but the real problem lies in the approach underlying the test. If we see the question of whether machines are conscious as a factual or empirical one, there will always be the possibility of doubt. However human-like they become, we will be left saying: they seem conscious, but perhaps they are just perfect at mimicking consciousness. But this approach reflects a misunderstanding of our psychological concepts. I do not relate to my eighty-year-old next-door neighbour on the basis of the hypothesis that he is a conscious being. As Wittgenstein put it, “my attitude towards him is an attitude towards a soul. I am not of the opinion that he has a soul”. So, pace Elon Musk, any change in our relationship to machines won’t come from improvements in their intelligence (their ability to do things). Rather, if it changes, it will be when they start to develop real personalities. It is a matter of their becoming interestingly unique rather than more intelligent. Only when machines start being irrational in lovable, annoying, amusing and hateful ways might we consider changing our relationship to them and welcoming them into the family of conscious beings. As yet, there are no real signs of that happening.
