CLICK TO WATCH
Can two humans communicate in the absence of a third party intermediary such as language, art, technology...?
I built this model in order to explore how a reduced system of communication operates. An intermediary, a counterpoint, through which two humans connect: they project, extend themselves into the world, connect, and thus recreate themselves and each other. I want to make things. In order to do that, it is important for me to explore the principles behind the way systems operate.
Data derived from simple rules give rise to complex systems.
A structure built with a controlled randomness. It is determined by two pixels (which can be replaced by two humans), each continuously projecting one of two states (white/black). When both are the same, a square is drawn within the system. The system uses the screen size as its starting point parameters and then maps onto a cylindrical structure. The starting point parameters are controlled by the probabilistic relationship between the two pixels (the simple communication model). Some of the parameters controlled this way are the square size and its location relative to the center axis (-/+ radius * this dimension); the height of the cylinder is mapped to a sine wave (also controlled by these values), as is the rotation of the structure along the x, y, z axes. Each pixel within the dimensions of the screen is reconstructed using this system. The drawing ends when every pixel has been reached.
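Roughly, the core rule can be sketched in Processing like this (the specific mapping from attempts to square size is a placeholder, and the cylinder mapping, sine-wave height and rotation are left out here):

// the two "pixels" keep projecting states until they agree;
// the number of attempts that takes is what drives the drawing
int x = 0, y = 0;          // the screen position currently being reconstructed

void setup() {
  size(400, 400);
  background(255);
  noStroke();
}

void draw() {
  int attempts = 0;
  int a, b;
  do {                                   // project until both states match
    a = random(1) < 0.5 ? 0 : 255;
    b = random(1) < 0.5 ? 0 : 255;
    attempts++;
  } while (a != b);

  // placeholder mapping: more attempts -> a larger square; in the actual piece
  // these values also drive the offset from the centre axis, the sine-mapped
  // height of the cylinder and the rotation of the structure
  float s = map(attempts, 1, 8, 1, 6);
  fill(a);
  rect(x, y, s, s);

  x += 4;                                // coarse step through the screen, for speed
  if (x >= width) { x = 0; y += 4; }
  if (y >= height) noLoop();             // the drawing ends when every position is reached
}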
Here are a few screenshots from a piece I just finished. I realized (yesterday my friend Hall pointed this out to me :) ) that in my model of "simple" communication, I ended up with something that is ironically obscure, abstract and difficult to communicate. Nonetheless, it was a process that helped me understand a system. And hopefully someone might enjoy one or two of the pictures.
For more about this visualization model, based on reduced, simple communication, see previous entries.
20080116
simple communication model - v3 - icm final
20071210
simple communication model - v2
How much data can you derive from this simple system? 2 pixels. 2 states. Communication. The center panel is a visualization of the interaction between the left and right pixels. When both pixels are the same, a pixel of their colour is output. The pixel decreases in size depending on how many tries it takes for both to be the same. Behind it is a coloured square (more of the coloured square is revealed when there is miscommunication).
run the code.
This becomes a visualization model, not only of simple, realized communication, but also of missed communication.
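Roughly, the v2 layout could be sketched in Processing like this (panel sizes, the colour of the backing square and the size mapping are placeholders, not the values from the actual piece):

int cx, cy;        // current position in the centre panel
int tries = 0;     // how many frames it has taken to agree at this position

void setup() {
  size(300, 100);
  background(255);
  noStroke();
  fill(255, 150, 0);                      // coloured square behind the centre panel
  rect(100, 0, 100, 100);
}

void draw() {
  // one projection per frame for each side
  int left  = random(1) < 0.5 ? 0 : 255;
  int right = random(1) < 0.5 ? 0 : 255;
  tries++;

  fill(left);  rect(0, 0, 100, 100);      // left pixel panel
  fill(right); rect(200, 0, 100, 100);    // right pixel panel

  if (left == right) {
    // agreement: draw a mark that is smaller the longer it took, so the
    // coloured square behind shows through where communication was missed
    float s = max(1, 6 - tries);
    fill(left);
    rect(100 + cx, cy, s, s);
    tries = 0;
    cx += 5;
    if (cx >= 100) { cx = 0; cy += 5; }
    if (cy >= 100) noLoop();
  }
}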
This simple probabilistic ruleset is being used to reconstruct a panel on my 2d screen. If this ruleset can be extended to the screen, where else can it be applied? What other models can it build?
If you map the 2d screen onto a cylinder, you have a new dimension, a new space to work within.
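The mapping itself is simple: each (x, y) on the screen becomes an angle around the cylinder's axis and a height along it. A small Processing sketch of just that mapping (the radius, spacing and the grey shading are placeholders so the example runs on its own):

float r = 100;                 // cylinder radius (arbitrary)

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(255);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);  // slow spin so the structure is visible

  for (int x = 0; x < width; x += 4) {
    for (int y = 0; y < height; y += 4) {
      float angle = map(x, 0, width, 0, TWO_PI);   // x -> position around the axis
      float h     = map(y, 0, height, -150, 150);  // y -> height along the axis
      stroke(map(x, 0, width, 0, 255));            // placeholder shading
      point(r * cos(angle), h, r * sin(angle));
    }
  }
}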
Problems that I have with these images...
1. the colours need to go away. Any colour other than black and white has an arbitrary relationship to the system. They are useful now in terms of seeing what is going on, but I ultimately need to get rid of them.
2. I'm also struggling with another assumption. I need some parameters that define the "space" within which this system is operating. Right now the starting point parameter is the screen. If I map it to a cylinder, that is another decision. It would be interesting if I didn't have to make this decision, and the ruleset were able to make it for me. These are more things that I am thinking about...
code (won't render because opengl)
20071205
simple communication model - v1
The black white parody. Technology as a mediator for communication. How do two distinct entities communicate? I decided to reduce this down to a model that expresses the bare minimum of communication. Two objects (pixel 1 and pixel 2), two states (white and black). How do they know if they are the same? In order for this communication to happen, there needs to be some 3rd party system, one with physical, perceivable symbols, semantics, and a way to sense and express those. So last night I made a mini model to visualize this system. The 3rd party intermediary (center panel) is a counterpoint for both entities on each side. The program continuously, randomly generates a black or white state for each pixel, checks to see if the pixel colours are the same, and if they are, draws a pixel of the shared colour. If they are not, nothing is drawn. The end result is a pixel map of "realized" communication. Right now, the pixel values on each side are being generated randomly, but you could insert humans into this system and get the same result. Is it possible for two people to communicate without this 3rd party, whether that be language, oral, written, art, technology?... Do we ever truly know each other? HERE IS A LINK
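The minimum model is roughly this kind of loop in Processing (a sketch of the rule as described, not the exact code behind the link above):

int x, y;   // position of the next pixel in the map

void setup() {
  size(200, 200);
  background(128);                         // grey = communication not yet realized
}

void draw() {
  for (int i = 0; i < 100; i++) {          // a batch per frame, just to fill the map faster
    int p1 = random(1) < 0.5 ? 0 : 255;    // pixel 1 projects a state
    int p2 = random(1) < 0.5 ? 0 : 255;    // pixel 2 projects a state

    if (p1 == p2) {
      stroke(p1);
      point(x, y);                         // realized communication: draw the shared colour
    }                                      // otherwise nothing is drawn at this position

    x++;
    if (x >= width) { x = 0; y++; }
    if (y >= height) { noLoop(); break; }
  }
}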
20071116
embedding content in existing material
A sensation of some aspect of the physical world is a key. This key is a segue to some other dimension of content (in your brain) contained within and stimulated by that key. Material in the physical world is finite. Each sensation of this material (i.e. sight, touch, scent, sound) via pattern recognition is finite. The possibilities for sensation combinations, patterns, subsets of existing material are infinite.
Cameras are sensors that are becoming as ubiquitous as eyes. They fit in our pockets, embedded in phones and computers. Even more so, they can go places where our eyes cannot. Cameras are extensions of our physical bodies. The camera is a new sensory tool.
Cameras can sense and quantify patterns, keys from the physical world. They function in a way that is inherently different from the human eye. It is important to view the camera, not as a surrogate for eyes, but instead as its own sensor. One that sees things differently. Instead of trying to create software to use cameras to mimic the way humans see, how can we build software to use cameras as a tool to help us use our eyes differently?
The quantifiable sensations of cameras can be used as keys to segue from physical world material to a new realm of content. This content can exist within the infinite space of the digital world. So...the eyes are to the brain as the camera is to the computer/internet.
As cameras become increasingly ubiquitous, how will this reshape the material of the physical world? The physical spaces inhabited by humans are built as extensions of our senses. We construct our world to smell, hear, taste, touch, see... but cameras are now a part of the human. Will we start building things for cameras? How and where can we construct, change, reshape our space as a place for the human plus camera? Where can we use cameras to help us gain access to content that is otherwise invisible to the naked eye?
There are an infinite number of applications for this principle. Some ideas that I was toying around with this week for my softness assignment... What if you established a database of quantifiable patterns, and an algorithm to scan through an image to find the patterns in that database? Let's say I have a picture of my friends in front of a brick wall, and some company decided to codify the values of that brick wall within a database. Every time a photo was input that contained their designated brick wall value, that would be a pass-key into some other form of content. A way to embed branding within physical world materials... a scary thought. Things to keep in mind: even though code is precise, this process does not have to be. This degree of imprecision, of inaccuracy in pattern recognition, is a valuable tool. Along these lines, I began to think of instances where the visual material is already bound, controlled and quantifiable. The first thing that came to mind was frames in a movie at a theater. What if a movie was released where you were encouraged to go into the theater with your digital camera and take pictures of the screen? Then later on, if you ran those photos of the frames through the database, you could access another dimension of content via those images (which are keys). What if you did this with paintings at a museum? A digital image of a painting could be a passkey to some other dimension of that artwork.
Yesterday before class I was lucky enough to visit "Material Connexion"... an amazing library of materials. I say lucky because they don't let you in unless you have a pricey membership... but since our topic for Softness this week was "materials", our professor Despina arranged for us to go... many many thanks. At Material Connexion, I saw a material by a company called Microtrace that made me think of these things. It is a company that sells microscopic ID tags embedded in paints, dyes, resins... that you can mix with your product's material to give it a unique code. Then you need a reader to detect the code... (microscope, UV light... why not a CAMERA?...)... All of the codes are kept in their database. I'm sure you can imagine the context under which such a material was developed... (and the same for pretty much all materials, I would think)... the military. But it is used now throughout many industries for brand security. Visit their site if interested: www.microtracesolutions.com. After leaving Material Connexion... I had 3 hours to do my project for the week. It was actually quite fun, because I had spent a lot of time thinking about it and I knew exactly how I would model the concept. Here are some images... and the concept overview.
Went to the pharmacy, bought a white t-shirt and some colourful thread. Took some photos of the thread and then got the bounds of their normalized RGB values. Started with one colour... and wrote a program that would scan through a video to check if that colour existed. Then I did the same for another colour of thread. Then I sewed two tiny squares of each colour side by side as a "pattern" on a shirt, and had my program alert me when the "pattern" was recognized. So every time the video tracked one colour with the other one in a nearby pixel (within a threshold)... it would alert you on the screen by saying "you are SOOO cool". This is a pretty simple model of the concept, but I plan on exploring it further...
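The colour-pair check is roughly this kind of scan (a Processing sketch assuming the video library; the RGB bounds and the neighbourhood threshold are placeholders, not the values measured from the actual thread):

import processing.video.*;

Capture cam;
int threshold = 10;   // how close (in pixels) the two colours must be

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

// is this pixel inside the given RGB bounds?
boolean matches(color c, int rLo, int rHi, int gLo, int gHi, int bLo, int bHi) {
  return red(c) >= rLo && red(c) <= rHi
      && green(c) >= gLo && green(c) <= gHi
      && blue(c) >= bLo && blue(c) <= bHi;
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();

  boolean found = false;
  for (int y = 0; y < cam.height && !found; y += 4) {        // coarse scan
    for (int x = 0; x < cam.width && !found; x += 4) {
      // first colour of the pattern (placeholder bounds for a red thread)
      if (matches(cam.pixels[y * cam.width + x], 180, 255, 0, 90, 0, 90)) {
        // look for the second colour nearby (placeholder bounds for a blue thread)
        for (int dy = -threshold; dy <= threshold && !found; dy += 2) {
          for (int dx = -threshold; dx <= threshold && !found; dx += 2) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= cam.width || ny >= cam.height) continue;
            if (matches(cam.pixels[ny * cam.width + nx], 0, 90, 0, 90, 180, 255)) {
              found = true;
            }
          }
        }
      }
    }
  }
  if (found) {
    fill(255);
    text("you are SOOO cool", 10, 20);
  }
}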
20071023
ergonomics and digital photography
This week's assignment was ergonomics. The human experience is described by perceiving and evaluating change: relationships, differences between one sensory perception and the next. I have been interested in playing with the idea of building a space where change is incorporated into the process of making. A more human, ergonomic approach to innovation, maybe...

The core of this idea came from a past ICM project. I became interested in the relationship between the viewer and the screen, and in how and why you could take a static 2d image and, by extending lines from it into space (locked to your mouse), create a focal point. With just this slight amount of extra information, you could perceive an entire 180-degree perspective on a 2d plane. I then started to think about what other information you would need to give a 2d object 360 degrees of dimension. From there I built a cylinder that would read in a 2d image and map the pixels to the surface of the cylinder. My 2d image input was video. Then I made the object on screen dynamic according to the light that was coming in from the video camera, so the video image would not only map to the 3d cylinder but from there onto some other position in space. At this point, though, the image was still a video of me, my hands and whatever was present in front of the camera at that moment. This was cool for a second, because I had achieved my initial programming goal. However, it got boring really quickly, and I got really sad that I had made something very uninteresting. I spoke to my classmate (and crony) Alex about my work. He pretty much affirmed that nothing about this was ergonomic, and that making an image of yourself more 3d on screen is not interesting, but he encouraged me not to get frustrated and to keep plugging away.

So I kept at it, and thought about what I was doing, and what I liked and disliked about it. I played around with my code, experimented a bit, and started to realize that the most interesting part about this was the aliveness of the object on screen as something that responded, that adapted its form immediately to my physical environment. I began to ignore the projected video and focus on the form. From here I went back to the theme of ergonomics. I became interested in the idea of how the physical environment could interact with something inside a computer. How the image on screen could respond to light so immediately that by moving my hands in front of the camera, I could virtually sculpt what was on my screen. The object on screen suddenly felt very alive. I decided to map something else onto the form: instead of the video image, I first used plain colour bitmaps, and then I started to project my digital photos onto the cylinder body. I continued to read in video, using those reads to transmit sensory information about my physical environment, but only as a means to transform a digital image. I realized at this point that I was playing with, sculpting, morphing my digital photos on screen. This started to become really fun. I played around with the code some more, played with the sensitivity to light, the immediacy of response. I began to look through the digital photos on my computer and choose ones to put into my project. I would let them spin around the cylinder, interact with the camera and light... and once I had them in a position I found interesting, I would take a screenshot. My favourites were the ones of faces.
More specifically, there was something cool about taking some of my pictures of people in rural China and sculpting them into this system. I am interested in playing more with this: figuring out how to export vector formats and maybe printing them. Here are some stills of the project. Here is the source code... still needs to be commented... sorry... link
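For anyone curious about the mechanics, the basic set-up is roughly this (a Processing sketch assuming the video library; "photo.jpg" is a placeholder file name and the displacement scale is arbitrary):

import processing.video.*;

Capture cam;
PImage photo;

void setup() {
  size(600, 600, P3D);
  cam = new Capture(this, 160, 120);
  cam.start();
  photo = loadImage("photo.jpg");   // any image in the sketch's data folder (placeholder)
  photo.resize(160, 120);           // match the camera resolution so indices line up
}

void draw() {
  if (cam.available()) cam.read();
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);       // let the photo spin around the cylinder

  cam.loadPixels();
  photo.loadPixels();

  for (int x = 0; x < photo.width; x += 2) {
    for (int y = 0; y < photo.height; y += 2) {
      int i = y * photo.width + x;
      // the camera's brightness at this spot pushes the point off the cylinder,
      // so moving your hands in front of the lens "sculpts" the image
      float displace = map(brightness(cam.pixels[i]), 0, 255, 0, 60);
      float angle = map(x, 0, photo.width, 0, TWO_PI);
      float r = 150 + displace;
      float h = map(y, 0, photo.height, -200, 200);

      stroke(photo.pixels[i]);      // the photo supplies the colour
      point(r * cos(angle), h, r * sin(angle));
    }
  }
}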