Attending a lecture
- tags
- #Machine Learning #AI Ethics #Ivan Illich #Geoffrey Hinton
- published
- reading time
- 5 minutes
Yesterday I attended a lecture by Geoffrey Hinton. The part about LLMs and how they work was helpful and sometimes bizarre: the image of a word with thousands of trailing hands and gloves that enable it to connect in specific ways with other words was disturbing and begged for a drawing. It’s interesting that there wasn’t a single image in the presentation: just dense text that served more as speaker’s notes than as a support for audience understanding.
The term ‘understanding’ was used in reference to LLMs in a way that I found unsettling. He stated toward the beginning of the talk, and repeated several times, that LLMs are widely acknowledged to ‘understand’ the meaning of words, rather than simply parroting learned patterns in a way that mimics understanding. I truly don’t see the benefit of anthropomorphizing AI in this way. I may be misunderstanding his argument, but it seems to me that AI can be a stochastic parrot and a convincing imitator of thought at the same time. Whether we treat AI as a person or as a machine is a choice, and retaining a conceptual distinction between animate creatures and machines is something we can choose to do for ethical and cultural reasons. Emphasizing the ways in which AI models are ‘like us’, something that Hinton did at multiple points in his talk, feeds into a narrative that places agency and even ethical responsibility with the AI agent rather than with the companies and individuals responsible for their development. This is self-evidently beneficial for corporations and, perhaps, for people like Hinton who could be seen as bearing some responsibility for the benefits and the harms of AI.
This is where it seems important to consider how AI sits in the spectrum of entities that we interact with, and most significantly where humans fall in this spectrum. While AI is often compared with human intelligence, I don’t often see the comparison made with animal intelligence, although in terms of neural complexity (if we accept this as in some way measuring intelligence) AI is far closer to animals than to people. Are humans closer to AI in their intelligence, or to non-human animals? Listening to Geoffrey Hinton, it sounds like we are practically indistinguishable from the AI we’ve created. Seen another way, the AI model is a profoundly weird machine creation that has little in common with humans or non-human animals. The perspective that we take is, I’m convinced, in the end a matter of choice more than one of fact.
I’ve recently been reading Ivan Illich’s Tools for Conviviality and am struck by the number of passages that anticipate our current moment with AI. Illich is concerned with the harmful effects of machines like the automobile that become self-perpetuating: the infrastructure built to support them, like highways, excludes other forms of transport, like walking and cycling, and thus results in a homogenization of the options for transportation. One of Illich’s key observations is that this development becomes irreversible, because once the speed of a car has been experienced it becomes unappealing to go back to modes of locomotion unaided by the machine. Applied to AI, this insight suggests that once humans have experienced the acceleration of thought enabled by AI, it will be unappealing to return to unaided thinking. Efficiency was another term used frequently by Hinton: efficiency of education, efficiency of thought. There are indications that the transition to thought enhanced by machines is at least as difficult to reverse as the enhancement of movement by machines, and the consequences in terms of homogenization and alienation may be more unsettling.
Hinton made one really interesting observation about the difference between humans and AI in terms of the transmission of knowledge. He made the point that knowledge transmission between humans is fundamentally inefficient, requiring a lot of time to move information from one mind to another, quite unlike digital machines, which can efficiently transfer knowledge in the form of neural networks that run equally well on any hardware. This idea that human brains are a unique encapsulation of knowledge was one of the few admissions in the talk of essential differences between humans and LLMs. In terms of the relation of humans, animals, and AI, humans are far closer to their fellow non-human animals than they are to machines, in this and in so many other ways. I am personally interested in making more of this affinity between people and non-human animals as an important contrast with the machines.
What I find most interesting about Hinton’s argument overall is the comparison of digital and analog brains. The digital brain, in his view, has the benefit of efficient information transfer, but at the cost of high energy consumption. Digital machines also have the benefit of immortality: because their information can be perfectly abstracted from the hardware, it can be transferred to new hardware without loss. This is unlike the analog brain, which has the benefit of incredible energy efficiency but which suffers from the inseparability of hardware and software: when the body dies, the ideas die with it.
It seems that at one time Geoffrey Hinton was interested in developing computers that followed the model of the human brain: imperfect, low-energy, and mortal. In principle this is a far more appealing vision for intelligence than the one we currently have, which relies on essentially limitless energy for its realization and sacrifices any respect for limits in return for immortality.
Somewhere between the anthropomorphism of Anthropic and Geoffrey Hinton, and the vision of AI as a stochastic parrot, there is perhaps a model that advances a new kind of machine intelligence while respecting limits on speed and growth. It is interesting to think about whether any form of AI can meaningfully be understood as a convivial tool in Illich’s sense. This is a question for another day.