The human brain loves patterns. We identify patterns everywhere, even where they don’t exist. In the past, this has been of evolutionary benefit. Being able to recognise poisonous tree frogs, tigers, gaps in a forest floor, or tribe members who look like us has been helpful to our survival. The brain is an expensive engine to run, and any heuristic that helps us be more efficient saves us energy.
Until quite recently, we thought we were the rulers of the world of broad pattern-matching. Sure, animals could recognise patterns for the same evolutionary reasons we could, but when it came to image-based pattern matching (as in reCAPTCHAs, where one has to select all of the images containing traffic lights, for example), or to creating relationships between language, images and sounds (the word “cat”, a picture of a cat and the sound of a cat), we pretty much had the mountain peaks to ourselves.
We learn about patterns by experience. Ray Dalio writes in Principles about the benefit of being able to identify “another one of those” based on the study of past events or past experiences. This recognition of patterns aids in decision-making and provides comfort that we’ve seen this kind of problem before.
We sometimes misidentify patterns. Humans mistake correlation for causation so frequently that it’s truly not funny (although the site Spurious Correlations has some really great examples on it, such as the relationship between the divorce rate in Maine and sales of margarine).
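To make the trap concrete, here is a minimal sketch of how two completely unrelated quantities can produce a striking correlation coefficient. The numbers below are illustrative stand-ins, not the real statistics from Spurious Correlations; the point is only that two series which both happen to trend the same way will score highly.

```python
# Illustrative stand-in data: two unrelated quantities that both
# happen to drift downward over the same ten-year span.
divorce_rate = [5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.1, 4.2, 4.1]  # per 1,000 people
margarine_sales = [8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7]  # lbs per capita

def pearson(xs, ys):
    """Pearson correlation coefficient, computed with the standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(divorce_rate, margarine_sales)
print(f"r = {r:.2f}")  # strongly positive, yet neither series causes the other
```

A shared downward trend is enough to drive the coefficient close to 1.0; correlation measures co-movement, not mechanism, which is exactly why our pattern-hungry brains are so easily fooled.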
Visual patterns can be tremendously appealing. The arrangement of a honeycomb, the fronds of a fern, or the shapes of snowflakes can all be delightful to us. The rule of thirds is a pattern in photographic composition that is almost universally accepted. Similarly, patterns in mathematics or music can be hugely satisfying to our brains.
We’ve communicated these patterns and their ramifications through language, art, science, music, and culture throughout the generations. Being able to recognise patterns is of importance in all sorts of fields. Biologists can use pattern-based knowledge to identify pests. Medical experts can use them to identify human diseases through visual identification or the description of symptoms and their spread. Technologists can use patterns for short-cuts in code or for speedier troubleshooting. This use of patterns has been a huge advantage to us – and we are very good at it.
Max Tegmark, in Life 3.0, argues that we may be losing that evolutionary high ground. Computers are now faster and more effective than humans at identifying patterns in key areas. Self-driving cars have comprehensively demonstrated the capability of AI-driven visual pattern matching (along with many other complex decision-making algorithms). In addition, despite some highly publicised incidents (including one in which Tesla’s Autopilot feature failed to differentiate between the side of a trailer and the brightly-lit sky), self-driving cars are arguably safer than human drivers.
In the medical arena, AI is proving to be as good as, if not better than, trained radiologists and other specialists at identifying injuries and disease. Articles on selective inattention such as this one describe how trained experts missed the image of a gorilla inserted into medical images, despite looking right at it. This work was based on Daniel Simons’ Monkey Business Illusion, which gobsmacked me when I first saw it.
The ability of AI to identify patterns in large amounts of information is considerably better than ours. We don’t have the processing capacity or “attentional” capability to review very large volumes of data. We become tired, we lose focus, we become distracted. Computers can run 24/7, processing trillions of bits of information without fatigue or error.
So does this mean we’re irrelevant, and doomed to be jobless?
I would argue strongly that the answer is a resounding no. There are still areas where human expertise has not been outstripped by computers. We are able to interpret patterns of human behaviour and the expression of emotion, and to develop appropriate responses. Human-to-human interaction is still a huge part of how the world operates (without it we would clearly not exist). Our ability to excel in the workplace and elsewhere should not be threatened by the addition of AI – it should be enhanced by it.
In my opinion, the most interesting writing about AI at the moment is not focussed on how computers will displace us, such as the many sensationalist “AI is coming for your job!” headlines. It is about how we can leverage AI to improve our capabilities in partnership with technology.
We’ve lost the pattern-matching crown (in some areas many years ago), just as we lost the automotive construction battle, and the chess-playing trophy, and as we will cede many other areas to technology in the future. The great thing about humans is we always find new problems to solve (or failing that, create them for ourselves).