
A few of the big questions

Instructions For Humans won funding support from the Arts Council, which is great, and means I now have to do it, which requires some rapid refocussing. The thing with the Grants for the Arts scheme, which I applied for, is it's all or nothing. Either they give you thousands of pounds to make art, or they don't. And you have to wait six weeks to find out. It's like Schrödinger's Cat in Limbo, or a really boring Samuel Beckett play, waiting for a result which will either change your life (I get to make art!) or change your life (I have to get a job now!) with no clue as to which way it'll go. So when you find out, it's great and a relief, but also like suddenly having to run a race without any warm-up, and adrenaline will only get you so far.

One thing I quickly realised was I've spent most of the last year working on justifications for this project existing, and now I don't have to make those justifications anymore. As long as I stick within the (fairly broad) remit agreed with the Arts Council I'm free to just Make Art. Which is awesome, of course, if mildly terrifying. Where to begin?

A thing about AI Art is there's not a lot of it (which isn't too surprising, as this current wave has only been around for a couple of years) and of that only a small amount is any good (again, not too surprising, as 90% of everything is bobbins). While there's plenty of established and valuable work in data-art and machine-augmented performance, the Machine Learning genre, using Gene Kogan's ML4A as a useful enough boundary, is pretty open. Especially when you discard the tech demos and illustrative pieces. Which is pretty exciting really. So, where to start?

Art isn't really about showcasing technology. It should ask the big questions and invite you to answer them. So I started writing some down.

What do the debates around AI tell us about the human condition?

I like this idea that the internet, and by extension computer mediation, is a mirror. When we're talking about AI we're actually addressing (in)securities about ourselves.

Why do we trust unknowable human minds but not black-box machine systems?

There's been a lot of talk about how we can't properly interrogate current machine learning systems. Information goes in and instructions come out, but no-one really knows how they work. This seems very analogous to humans, who are famed for their irrationality. If anything, our unpredictability is what makes us human and allows us to evolve. So why the hell do we trust humans so much?

How do we traditionally trust "black-box" systems, such as farming?

I suspect it's a combination of empirical science and faith, informed by personal experience and instructions from priest figures.

Is the threat of AI a threat to our sense of importance?

When we define humans by their work, and we take away that work, what are humans for?

How does AI fit our models of human labour and purpose?

Continuing the "what are humans for?" question, but for Capitalism / Socialism / Christianity / Buddhism / etc.

Does AI threaten the machine-augmentation layer between humans and the world?

Humans are defined by their use of tools, and our perception of the world is directly informed by that use. All humans are cyborgs. But does AI shift the agency in tool use from the human to the tool? Is that even possible?

Do flaws in AI help us to appreciate the flaws in human perceptual systems?

Human brains process sense-data with extreme biases, and those processes are incredibly hard to comprehend. AI systems appear to be becoming analogous in this respect.

Do we need to build new myths, stories and religions around AI to deal with its seeming irrationality?

The God-Emperor AI?


That's a good enough start on some of the bigger questions. Below that is stuff about power and ownership, and below that is the actual aesthetics of the thing. All are of equal importance, but I think it makes sense to stack them like that.

Onwards...