There are several Tan Lin videos now being hosted on PennSound.
Pretty fascinating kinetic texts here. I was particularly interested in Disco Eats Itself, which combines typed-out text, animated and obscured / obstructed through Flash, with a corresponding visual track of YouTube videos tagged “Disco”. I could not tell for sure, but I think these videos are recorded and presented through a SWF file, since they seem to play out the same every time. This made me wonder what would happen with an API or suchlike, through which you could loop YouTube videos carrying certain tags as a realtime reflection of the current status / content of videos under those tags, producing a piece that is always transforming in accordance with the YouTube content. I’m sure there’s a project in this (and I guess I should try it out soon), but the Tan Lin work offers a pretty fascinating snapshot of a database’s moment in time directly relating to an unfolding text through meta association.
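As a rough sketch of what that realtime version might look like: the YouTube Data API v3 exposes a `search` endpoint that can return the newest videos matching a query term, which a piece could poll and feed into an embedded player. The function names below (`build_search_url`, `current_video_ids`) are my own hypothetical scaffolding, and you would need your own API key; only the endpoint and its parameters are the actual public API.

```python
import json
import urllib.parse
import urllib.request

# Real endpoint of the YouTube Data API v3 "search" resource.
API_ENDPOINT = "https://www.googleapis.com/youtube/v3/search"


def build_search_url(query: str, api_key: str, max_results: int = 5) -> str:
    """Build a search URL for the newest videos matching `query`."""
    params = {
        "part": "snippet",
        "q": query,            # the tag-like term, e.g. "disco"
        "type": "video",
        "order": "date",       # newest first, so the piece tracks the present
        "maxResults": max_results,
        "key": api_key,        # your own API key goes here
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)


def current_video_ids(query: str, api_key: str) -> list:
    """Fetch the IDs of the newest videos for `query` (live network call)."""
    with urllib.request.urlopen(build_search_url(query, api_key)) as resp:
        data = json.load(resp)
    return [item["id"]["videoId"] for item in data.get("items", [])]
```

Polling `current_video_ids("disco", key)` every few minutes would yield a fresh, ever-shifting playlist in place of the fixed SWF recording, so the visual track transforms along with the database rather than preserving one moment of it.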