The latest issue of Drunken Boat has a video of a performance of the generative text “Instabilities 2” by Hazel Smith and Roger Dean. It’s fascinating, to say the least, merging fixed, versioned and generative states of text production on-screen at once, each competing for attention and all competing / harmonizing with the varying levels of audio speech that surround the performance. (See the authors’ notes for a better explanation.) The video, being a documentary of one performance rather than the execution of the programs themselves, nonetheless offers a good showcase of the potential of this setup and of the algorithms used to produce the various states of textual transformation.
I’m very interested in how generative algorithms can be used in live performance, and this work seems fairly unique in its merging of three different states of stasis / flux in a live context. The linguistic transformation techniques occasionally remind me of John Cayley’s Overboard, in terms of the text-visual decisions being made by the algorithm. I know nothing about the Python code in question, but it seems to apply arbitrary text transformations that are sometimes, if not always, visually similar to the words they replace (although there are often complete breakdowns in the text, as shown in the video).
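Since I haven’t seen the actual code, here’s a purely hypothetical sketch of how that kind of visually-similar word substitution might work in Python. The similarity metric (character-sequence overlap via `difflib`), the vocabulary, and the threshold are all my own inventions, not anything from the Smith / Dean piece:

```python
# Hypothetical sketch -- NOT the authors' code -- of swapping words in a
# text for visually similar words drawn from a candidate vocabulary.
import difflib
import random


def visual_similarity(a, b):
    """Crude stand-in for visual likeness: shared character sequences."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()


def transform(text, vocabulary, threshold=0.5, rng=None):
    """Replace each word with a randomly chosen visually similar word
    from the vocabulary, when one clears the threshold; otherwise keep
    the word. Lowering the threshold would produce the kind of
    'breakdowns' visible in the video, where replacements drift far
    from the originals."""
    rng = rng or random.Random(0)
    out = []
    for word in text.split():
        candidates = [w for w in vocabulary
                      if w != word and visual_similarity(word, w) >= threshold]
        out.append(rng.choice(candidates) if candidates else word)
    return " ".join(out)


vocab = ["insatiability", "stability", "bloat", "beat"]
print(transform("instability of the boat", vocab))
```

On each run (or each tick of a performance clock) the text mutates again, so feeding the output back in as the next input would give the cascading drift the video seems to show.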
Plus extra kudos for the use of Comic Sans.