Since submitting my big funding application for Instructions For Humans three weeks ago, and with three weeks to go until I hear if I’m getting it or not, I’ve been in a weird kind of limbo where I don’t want to start anything major just in case but I can’t just switch off. A new carpentry habit has helped (I seem to be making a new addition / improvement to the rabbit shed whenever the sun is out) but mostly I’m doing lots of thinking. This is good, in that I’m thinking about things, but it’s also dangerous, because I’m thinking about things with no tangible way of realising them. There’s also the realisation that the funding application, while solid and realistic, is not a very good outline of the actual art, because it’s not for that. It’s for funding, and translating it back into “the art I want to make” has been a surprisingly tricky thing to do.
Thankfully I’m not alone and have discussed this with (or articulated it at) a few people over the last week, mainly my fellow Goodbye Wittgenstein alumni and the Show And Tell group, which has helped a lot. Conversations like these help me put my thoughts into words but they also provide an environment for ideas to glom together and resurface from my memory; ideas that I’d had or concepts I’d gotten interested in but had forgotten about. It’s a little like the process of applying for funding pushed them all away for the sake of focus, and I’ve been slowly dragging them back into the light, only now with the benefit of clarity.
I am, as usual, overthinking the hell out of this, but that’s why I became an artist.
So, here are a few of the things that I plan to make my art about, once a yes/no on the funding dictates the scale I can do it at.
I want the art to deal with Brexit and Trump. These are the biggest things happening in my world at the moment and I need to address them. The everyday mechanics of Brexit and Trump are horrifying but too huge and fast-moving for me to get a handle on. What I want to explore is the divisiveness that characterises them and how media distribution systems have changed to bring this about. So we’re talking filter bubbles, mainly. News feeds that are composed through manual selection (choosing which sources to follow and which to ignore) and augmented by obfuscated algorithmic selection (Google’s personalised search results). This can create a perception of the world where everyone seems to agree with you, so when your Facebook and Twitter feeds are full of people saying Brexit will never happen because it’s insane, it’s a bit of a shock when it does. Algorithms, man. They’re all to blame.
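The feedback loop behind a filter bubble can be sketched as a toy ranking function. This is entirely hypothetical code, not how Facebook or Google actually work, but it shows the mechanic: score each story by how much it overlaps with what you already engaged with, and the feed converges on agreement.

```python
def personalised_feed(stories, liked_topics):
    """Rank stories by overlap with topics the user already engaged with.

    A deliberately crude stand-in for real recommendation systems:
    the more a story resembles what you liked before, the higher it ranks.
    """
    def score(story):
        return len(set(story["topics"]) & set(liked_topics))
    return sorted(stories, key=score, reverse=True)

stories = [
    {"title": "Remain rally draws crowds", "topics": ["remain", "eu"]},
    {"title": "Leave campaign surges",     "topics": ["leave", "eu"]},
    {"title": "Cat stuck in tree",         "topics": ["cats"]},
]

# A user who only ever engaged with "remain" content sees it first,
# and every click strengthens the loop.
feed = personalised_feed(stories, ["remain", "eu"])
```

Run this for a user whose history is all one side of the argument and the other side sinks to the bottom of the feed, which is the “everyone seems to agree with you” effect in miniature.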
But filter bubbles, and their adoption, actually echo a fundamental way that we process and deal with the world. This list of cognitive biases is fascinating but I’m particularly drawn to those which confirm pre-existing beliefs, such as Confirmation Bias:
the tendency to search for, interpret, favour, and recall information in a way that confirms one’s preexisting beliefs or hypotheses. People display this bias when they gather or remember information selectively, or when they interpret it in a biased way. The effect is stronger for emotionally charged issues and for deeply entrenched beliefs. People also tend to interpret ambiguous evidence as supporting their existing position.
This has nothing to do with algorithms and computers. Facebook is not making us do this. We already do it, every day, when we’re moving through the world trying to make sense of it.
Brains in Jars
A classic philosophical conundrum is the brain in a jar paradox which very simply says, if we can only know the world through our senses, and if our senses are simply sending signals to our brains, how do we know that our brains are actually getting these signals from our senses and not from a computer? How do we know we’re not a brain wired up to a simulation? And yes, The Matrix is basically a film about this.
What interests me about this isn’t so much the answer, (like all good philosophy it’s all about the question, not the answer) but the mechanics of it all. Let’s break down how we perceive.
There is the world, full of light and noise and stuff. Our senses are triggered by all this light and noise and stimulate parts of our brains. Our brains then somehow turn these signals into an awareness of the world.
There’s a lot of transduction going on here, converting one sort of energy or signal into another. How does light entering the eye become awareness of colour and shapes? How does sound vibrating bones in the ear become awareness of pitch and volume?
A simple answer is that the brain processes the information given to it and creates a mental model of the world, which is good enough, but leads to another question. To what end?
I know from working with photography that we do not see the world accurately, because cameras often do get the world right and their images have to be corrected so they look “right” to us. Our brains shift blue and orange light towards white, effectively doing White Balance correction, so we can see better. This is helpful for dealing with the world, but it’s not a true reflection of reality. In this, and many other ways, we are constantly Photoshopping our perception of the world so we can function more effectively in it.
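That White Balance correction can be made concrete. A common automatic approach is the “grey-world” method, used here purely as an illustrative stand-in for what our visual system does: scale each colour channel so the scene averages out to neutral grey, pulling a warm orange cast back towards white.

```python
def grey_world_white_balance(pixels):
    """Scale the R, G, B channels so the average colour is neutral grey.

    `pixels` is a list of (r, g, b) tuples with values from 0.0 to 1.0.
    A toy version of the correction our brains apply automatically.
    """
    n = len(pixels)
    # Average brightness of each channel across the whole image.
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(avg) / 3  # target: equal energy in all three channels
    gains = [grey / a if a > 0 else 1.0 for a in avg]
    return [
        tuple(min(1.0, p[c] * gains[c]) for c in range(3))
        for p in pixels
    ]

# A scene lit by warm orange light: too much red, too little blue.
warm_scene = [(0.8, 0.5, 0.3), (0.6, 0.4, 0.2)]
balanced = grey_world_white_balance(warm_scene)
```

After correction the three channel averages come out equal, i.e. the orange cast has been shifted towards white, just as described above.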
So what we think of as our knowledge about the world is filtered through a process which has evolved for a purpose close to but not exactly the same as “showing us an accurate representation of the world”. Which makes sense as we exist to spread our genes and everything we do has to support that. That we also appear to have developed a fairly accurate way of comprehending reality is a bonus.
How does this relate to my bigger picture? There’s a layer between “us” and the world and we don’t, or can’t, understand it. AI and Machine Learning systems are like that.
Magic
This is about the power of words (and by extension all media, which in turn is simply binary data) to change the world. I find exploring “magic” rather perilous as it’s full of mystical bollocks and pretentious wankery, but I’ve found it to be a very fruitful way of understanding how irrational things can come about in societies which profess to be governed by rational thought. Here’s my attempt to synthesise it into something useful.
Magic is about effecting change in the material world without a material cause. This is, of course, impossible. You cannot move objects with your mind. But you can use your mind to convince another mind to move an object. By this definition, which I choose to use, saying “please move that object” to someone and them moving it is a successful magical act. It’s not very impressive, but it’s a start.
Magicians use spells and ritual. A spell is a bunch of words said in a specific order (spell being similar to “spelling” was a big clue to me here) which, when spoken, has the potential to effect change. A ritual is something done over and over again to strengthen an idea or concept.
Politicians campaigning for elections are magicians casting spells which are strengthened through ritual gatherings. By standing on a stage and repeating their rhetoric, amplified by their supporters, they effect change. This can be shockingly powerful, as seen on the face of Boris Johnson the day after he accidentally made Brexit happen.
We live in a weird world where we seem to believe that we are only influenced by tangible things, so we protect our physical security and that of our property but make no attempt to protect what you might call our ideaspace. We don’t like to think that our closely held beliefs and ideas are vulnerable to attack by forces that don’t have our best interests at heart.
All media exists to change your mind in some way. That is its purpose. Media is a vehicle for transferring ideas into your head where they mix with other ideas and create a slightly new version of your mental self, and this is a good thing. Change is good. We call this learning or broadening your horizons. The buzz you get after seeing a great film is your mind being changed. This is also a handy definition of great art.
But it can also be weaponised to enact change in quite specific ways and to bypass or work against your pre-existing ideas and beliefs. This comes in many forms but the most prevalent is advertising and PR. This uses the gaps between how we perceive the world and how we think we perceive the world to trick us into changing our behaviour while making us believe we controlled that change. And the most dangerous thing about it is I don’t think the practitioners of these dark arts know what they’re actually doing.
(As you can probably tell I hate advertising and won’t have it in the house. Evil, evil stuff.)
So Brexit and Trump happened by magic, which is how everything in social situations happens. Think about Theresa May’s phrase, “the will of the people”. How can a collective “will” mean anything? If “the people” stood in front of a mountain and “willed” it to move, would it? But the will of the people can undo 70 years of European stability and plunge the world into a new uncertainty.
Trump used magic, but he also showed how much of the USA is based on magical thinking. The belief system of the United States is so strong that Americans seem to believe it to be invulnerable, but it’s ultimately based on a collection of words. The Constitution is a magical document, probably the most magical secular document of our age, because it supports the infrastructure and systems of a superpower with incredible authority. Trump’s people (I’m thinking of Bannon here) have been poking at this and it’s been fascinating to see Americans suddenly become aware of the fragility of their country’s foundations.
At least in a constitutional monarchy like the UK the absurd magic that keeps the country together is overt, yet many arguments for Brexit are based on pushing this mythology to its limit. I guess having had an empire will do that.
So what’s my point? Sure, magic isn’t real, but then neither is art, and it’s the best way I’ve found to group and explain all the irrational stuff in our societies. And I think it can also explain our approach to, and acceptance of, AI systems.
Lots to unpack though…
Finally, something fairly straightforward. The concept of the Black Box.
This does not refer to the flight recorder on a doomed aircraft. It refers to “a device, system or object which can be viewed in terms of its inputs and outputs without any knowledge of its internal workings.”
This can be a literally closed system where seeing inside is either impossible or restricted by legal or proprietary means, or it can be something so massively complex as to be effectively unknowable. The former happens a lot (Google’s search algorithm being kept secret), but the latter is particularly interesting to me as Deep Learning systems, a form of AI that learns from observation, are black boxes even to the people who create them.
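The definition above can be shown in a few lines. All of the names here are made up for illustration: `probe` is the only thing we can do to a black box, and `opaque_ranker` stands in for some secret algorithm we can call but never read.

```python
def probe(black_box, inputs):
    """All we can do with a black box: feed it inputs, record outputs.

    Returns a mapping from each input to the output it produced,
    which is our entire knowledge of the system.
    """
    return {x: black_box(x) for x in inputs}

# A stand-in for a proprietary or unknowable algorithm. Imagine we can
# call it over the network but have no access to its internals.
def opaque_ranker(query):
    return (7 * len(query) + sum(map(ord, query))) % 100

observations = probe(opaque_ranker, ["cats", "brexit", "art"])
# We can describe what it does for these inputs, but *why* it does it
# stays hidden, which is exactly the situation with deep learning.
```

The point of the sketch is that `observations` is all the knowledge a black box permits: a table of inputs and outputs, with the workings out of reach.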
The Dark Secret at the Heart of AI looks like the kind of click-bait scare article about machine learning, but it’s a really good overview of this conundrum. “No one really knows how the most advanced algorithms do what they do.”
The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.
You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behaviour of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
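The forward pass that quote describes can be sketched in miniature. This is a toy with made-up random weights, not a trained model, and back-propagation is omitted; it just shows neurons receiving inputs, computing, and feeding the next layer.

```python
import math
import random

random.seed(0)  # fixed seed so the toy network is reproducible

def layer(inputs, weights, biases):
    """One layer: each 'neuron' takes every input, computes a weighted
    sum, and squashes it through a non-linearity (tanh) before passing
    the new signal on to the next layer."""
    return [
        math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

# A tiny 3-input -> 4-hidden -> 1-output network with random weights.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [0.0]

pixels = [0.2, 0.9, 0.4]          # e.g. intensities of three pixels
hidden = layer(pixels, w1, b1)    # first layer's "new signals"
output = layer(hidden, w2, b2)    # overall output
```

Even at this scale the “reasoning” is just numbers smeared across the weights; scale it up to millions of weights across hundreds of layers and there is nothing readable to point at, which is the black box the article is describing.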
This is fascinating because it mirrors how we deal with other conscious entities such as people and other animals. Other minds are black boxes to us. I can give you instructions and you may or may not obey to some degree of accuracy. All I can do is tailor my instructions in such a way that I think will ensure the optimal result, maybe using an NLP spell book. How and why your brain processes them the way it does is a total mystery to me. The best I can hope to do is train you, through repeated instruction, until you produce an acceptable result.
That was like lancing a boil. I will sleep well now. But there’s probably more. Expect another one of these.