Instructions For Humans won funding support from the Arts Council, which is great, and means I now have to do it, which requires some rapid refocussing. The thing with the Grants for the Arts scheme, which I applied for, is it's all or nothing. Either they give you thousands of pounds to make art, or they don't. And you have to wait six weeks to find out. It's like Schrödinger's Cat in Limbo, or a really boring Samuel Beckett play, waiting for a result which will either change your life (I get to make art!) or change your life (I have to get a job now!) with no clue as to which way it'll go. So when you find out it's great and a relief, but also like suddenly having to run a race without any warm-up, and adrenaline will only get you so far.
One thing I quickly realised was I've spent most of the last year working on justifications for this project existing, and now I don't have to make those justifications anymore. As long as I stick within the (fairly broad) remit agreed with the Arts Council I'm free to just Make Art. Which is awesome, of course, if mildly terrifying. Where to begin?
A thing about AI Art is there's not a lot of it (which isn't too surprising as this current wave has only been around for a couple of years) and of that only a small amount is any good (again, not too surprising as 90% of everything is bobbins). While there's plenty of established and valuable work in data-art and machine-augmented performance, the Machine Learning genre, using Gene Kogan's ML4A as a useful enough boundary, is pretty open. Especially when you discard the tech demos and illustrative pieces. Which is pretty exciting really. So, where to start?
Art isn't really about showcasing technology. It should ask the big questions and invite you to answer them. So I started writing some down.
I like this idea that the internet, and by extension computer mediation, is a mirror. When we're talking about AI we're actually addressing (in)securities about ourselves.
There's been a lot of talk about how we can't properly interrogate current machine learning systems. Information goes in and instructions come out, but no-one really knows how they work. This seems very analogous to humans, who are famed for their irrationality. If anything, our unpredictability is what makes us human and allows us to evolve. So why the hell do we trust humans so much?
I suspect it's a combination of empirical science and faith, informed by personal experience and instructions from priest figures.
When we define humans by their work, and we take away that work, what are humans for?
Continuing the what are humans for question but for Capitalism / Socialism / Christianity / Buddhism / etc.
Humans are defined by their use of tools, and our perception of the world is directly informed by this use of tools. All humans are cyborgs. But does AI shift the agency in that use of tools from the human to the tool? Is that even possible?
Human brains process sense-data with extreme biases, and their processes are incredibly hard to comprehend. AI systems appear to be becoming analogous to this.
The God-Emperor AI?
That's a good enough start with some of the bigger questions. Below that is stuff about power and ownership, and below that is the actual aesthetics of the thing. All are of equal importance but I think it makes sense to stack them like that.
Onwards...