Ahead of their live performance at One.Seventy on the 28th of January, I spoke to DAAT about the philosophy and methodology with which they create and perform their audio works.
Breaking away from the pack several years ago, they have since developed AH-64B, an environmental model and tool for music and sound creation. Jason and Joe from DAAT fill us in on its motivation...
Back in 2015, AH-64A was a machine whose shape was indeterminate, which we were already inside of. The present incarnation can be viewed along the same lines. Even with earlier tracks such as Orange Line and Phytochrome, this same philosophy held, as they were related to “idea machines”.
Now we are working on the elementary units of these ideas, trying to figure out what exactly they are. We tend to go deeper, identifying new levels, since any given piece can't show more than a tiny facet of something larger we interact with. We will only know it for what it is when we find a way to be completely absorbed by it, at which point we will cease to be separate from it.
When working on a track in a DAW, we found we were only able to work with a single set of outcomes. If we instead internalised the activity as an environment, rather than as an output alone, then it too should have its own set of outcomes.
Above: AH-64B Environment in Cycling74's Max
So you’ve created a new premise or reality in which the track becomes an environment? Having made this distinction, what happens next? How does the ideology meet the method?
It depends on how we interact with it. In the most general sense we inhabit an organism or system, but it is an entity unlike ourselves. Its senses, thoughts and actions take place within the abstract environment of audio signals.
It has a direct and an indirect form: direct as literal machine sounds (evocation of physical reality), and indirect as causal relationships between sounds that are ultimately abstract fluctuations in an information medium.
So how does this lead into the bricks-and-mortar techniques and the system you've developed? How do these variables become available to use and direct?
We can generate sound from a stimulus that can then have resultant effects in the defined environment. This process develops as allowed. In an absurd world of clones, drone warfare, accurate sound and image detection, and virtual realities, what (else) is relevant? Given sufficiently powerful tools and enough time, the process will generate its own reality. Every event fully interconnected.
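DAAT don't detail the mechanism behind this, but the general idea of a stimulus producing resultant effects that ripple through an interconnected environment can be sketched as a toy causal network. This is purely a hypothetical illustration in Python; the class, node names, gains and decay threshold are all invented:

```python
# Hypothetical sketch: a stimulus propagating through a small causal network.
# Each node logs the level it receives and passes an attenuated copy onward,
# so one event produces a chain of resultant events until the signal decays.
class Node:
    def __init__(self, name, gain):
        self.name = name
        self.gain = gain      # how strongly this node passes energy on
        self.links = []       # downstream nodes affected by this one
        self.log = []         # levels this node has received

    def stimulate(self, level, depth=0):
        if level < 0.01 or depth > 8:   # stop once the signal has decayed
            return
        self.log.append(round(level, 3))
        for nxt in self.links:
            nxt.stimulate(level * self.gain, depth + 1)

a, b, c = Node("a", 0.6), Node("b", 0.5), Node("c", 0.7)
a.links = [b]; b.links = [c]; c.links = [a]   # a closed loop: effects feed back
a.stimulate(1.0)   # a single stimulus; every node ends up affected repeatedly
```

In a real audio system the "level" would be a control or audio signal rather than a number, but the shape is the same: every event is connected, directly or indirectly, to every other.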
Which tools lend themselves to this way of working?
There are several programming languages people use for music: Pure Data, SuperCollider, Max/MSP, Python, ChucK and Csound, to name a few. We adopted Max/MSP gradually, after using Logic (most recently) and then Live with Max for Live. With any of these general-purpose environments it comes down to building your own tools and finding your own way of working.
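As a small taste of what "building your own tools" looks like in a text-based environment, here is a minimal sketch in Python (one of the languages mentioned above), using only the standard library. The function names are ours, not DAAT's: it synthesises a sine tone sample by sample and writes it out as a mono WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second, CD-quality mono

def sine_tone(freq, duration, amplitude=0.5):
    """Generate a sine tone as a list of float samples in [-amplitude, amplitude]."""
    n = int(SAMPLE_RATE * duration)
    return [amplitude * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

def write_wav(path, samples):
    """Write float samples as 16-bit PCM mono WAV."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)           # 2 bytes = 16-bit
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767))
                               for s in samples))

tone = sine_tone(440.0, 0.5)        # half a second of A440
write_wav("tone.wav", tone)
```

From a seed like this, a bespoke environment grows by replacing the fixed frequency and duration with signals generated by other processes, which is the point at which a tool becomes an instrument.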
In that sense, is the listener secondary to the environment playing out its nature?
The chaotic nature of the system is reined in by selecting and manipulating parameters relevant to our aesthetic decisions. We use a variety of methods/algorithms to try to achieve this. Whether or not it works for the listener is up to the individual and their interests. The interactions with these processes help build awareness of our lives. If we're talking music, you could say it's the music that exists all around us, that emerges due to networks of causality.
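DAAT don't name the methods or algorithms they use, but one common way to "rein in" a chaotic process is to iterate a chaotic map and let the aesthetic decision live in the choice of parameters. As a hypothetical illustration in Python, the logistic map produces bounded but unpredictable values that can be scaled onto any musical parameter (the parameter range below is invented):

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x -> r * x * (1 - x), collecting n values.

    For 0 <= r <= 4 and 0 < x0 < 1 the orbit stays inside [0, 1];
    r near 3.57-4.0 gives chaotic behaviour.
    """
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

def scale(xs, lo, hi):
    """Map unit-interval chaos onto a musically useful parameter range."""
    return [lo + x * (hi - lo) for x in xs]

# Choosing r and the target range is where the chaos gets "reined in":
# e.g. a sequence of 16 hypothetical filter-cutoff values in Hz.
cutoffs = scale(logistic_orbit(3.9, 0.5, 16), 200.0, 2000.0)
```

The system remains deterministic yet unpredictable in detail; the selected parameters, not the individual values, carry the aesthetic intent.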
You can catch DAAT performing live at One.Seventy on 28th January.
For more information about One.Seventy please click HERE