
Mar 12, 2012

Click here to view this post as a Google Doc with images etc.


The current features include edited videos of three famous people addressing the camera, as viewed through a video remixer. The Max/MSP patch cycles randomly through each monologue and through group speak, with all three monitors talking at once. There isn’t really a conversation, more a bombardment of information, and it is unclear to the user what, if anything, would contribute to it.
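The cycler logic can be sketched in a few lines of Python (the real version lives in the Max/MSP patch; the state names and the no-immediate-repeat rule here are my illustrative assumptions):

```python
import random

# One state per monitor's monologue, plus "group speak" where all
# three monitors talk at once. Names are illustrative placeholders,
# not labels from the actual Max/MSP patch.
STATES = ["monitor_1", "monitor_2", "monitor_3", "group_speak"]

def next_state(current, rng=random):
    """Pick the next state at random, never repeating the current one,
    so the cycler always appears to move on to something new."""
    choices = [s for s in STATES if s != current]
    return rng.choice(choices)
```

A pace control would simply set how often `next_state` is called.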

If the participant speaks, an “interruptometer” magnifies his/her voice and pauses all other video. There are also speed controls to warp the content and a pace control that sets the speed of the cycler. There is limited voice recognition (certain phrases I have taught the system), but it is proving buggy at best during user testing. These features can be easily mapped; for example, if no one is using the microphone, the videos could slow down gradually to reflect the lack of input.
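The interruptometer and the idle-slowdown mapping amount to a small state machine. A minimal Python sketch, assuming a normalized mic amplitude and made-up threshold and decay values (the actual behavior is patched in Max/MSP):

```python
# Assumed tuning values; the real patch would expose these as controls.
SPEAK_THRESHOLD = 0.1    # normalized mic amplitude that counts as speech
IDLE_DECAY = 0.99        # per-tick slowdown factor when no one speaks
MIN_RATE = 0.25          # videos never slow below a quarter speed

class Interruptometer:
    def __init__(self):
        self.playback_rate = 1.0
        self.paused = False

    def tick(self, mic_level):
        """Called once per control tick with the current mic level (0-1)."""
        if mic_level > SPEAK_THRESHOLD:
            # A participant is speaking: pause all other video and
            # restore full speed for when playback resumes.
            self.paused = True
            self.playback_rate = 1.0
        else:
            # No input: resume, and let the videos slow down gradually
            # to reflect the lack of participation.
            self.paused = False
            self.playback_rate = max(MIN_RATE,
                                     self.playback_rate * IDLE_DECAY)
```

The same `tick` hook is where other mappings (pace, warp) could be attached.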

This is meant to start prototyping the vision, an immersive conversation told through YouTube content exploring the relationship between:
• Human and video performance
• Celebrity and familiarity
• Experience prototyping and futuristic interactions
• Copyright and content appropriation

I’m proposing a New York-themed conversation with a working title of “Only Living Boy NY” (does this make sense?) because I am drawn to the content: the oral history of this amazing city as told by its notable citizens. This conversation prototype is suited to the NY gallery audience that will see it in a couple of months.

I like the disconnected poetry of the current videos I am working with, but I do not think that chaos is the point I originally set out to make. I’d like to create something a little more thoughtful and clear, more pointed in proving my thesis statement, which is shaping up to look something like:

New dialogues emerge from a vast online archive. The installation represents a tool for lonely people, but it can never capture the poetry of real human interaction. It attempts to leave the participant considering his/her own dialogue and performance within that context.

I will need to change my marketing, mock-up, and look book to New York notables, as the characters will now follow from the content. I want the New York theme to dictate the panel of experts, with the panel found through a New York query rather than by searching for random celebrities and then choosing a topic; I think that is a more evolved search for a mashup project. I also want to include a line of people waiting to get into the room on my mock-up. The user scenario should make clear that this is for one participant at a time, though someone exiting may stop to talk about the experience with a friend who was in prior.

• Edit three new videos with a common theme for a more cohesive conversation: NY theme and NY-appropriate characters.
• Instantiate characters as independent video objects. My equipment request is in to Sven Travis via Katherine Moriwaki for three Mac Minis and three monitors.
• Live listen: video loop of the characters pausing to ‘listen’ to the words coming from the participant.
• Since audio recognition is not working well, inputs need to come from a controller of some sort; the microphone cannot be used exclusively to control the action of the video. I plan to print the words coming from the participant so that there is a dedicated display of everything the people have said. The accuracy will be imperfect, but this needs to be part of the upcoming prototype for testing.
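The dedicated transcript display is the simplest of these pieces: every phrase the recognizer returns, however inaccurate, gets appended and the newest lines are shown. A Python sketch of just the display logic (the recognizer itself, and the line count, are assumptions):

```python
from collections import deque

class TranscriptDisplay:
    """Rolling display of everything participants have said.

    The speech recognizer is out of scope here; whatever text it
    produces is passed to add_phrase, errors and all.
    """

    def __init__(self, max_lines=10):
        # deque with maxlen silently drops the oldest line when full.
        self.lines = deque(maxlen=max_lines)

    def add_phrase(self, phrase):
        """Record one recognized phrase, ignoring empty results."""
        phrase = phrase.strip()
        if phrase:
            self.lines.append(phrase)

    def render(self):
        """Return the text to show on the devoted display."""
        return "\n".join(self.lines)
```

Keeping only the newest lines means even a long session never overflows the screen.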

Sign on the door: This piece is intended for one person at a time. Please wait your turn.