embedding content in existing material

A sensation of some aspect of the physical world is a key. This key is a segue to some other dimension of content (in your brain) contained within and stimulated by that key. Material in the physical world is finite. Each sensation of this material (i.e. sight, touch, scent, hearing) via pattern recognition is finite. But the possibilities for sensation combinations, patterns, and subsets of existing material are infinite.

Cameras are sensors that are becoming as ubiquitous as eyes. They fit in our pockets, embedded in phones and computers. And even more: they can go places where our eyes cannot. Cameras are extensions of our physical bodies. The camera is a new sensory tool.

Cameras can sense and quantify patterns, keys from the physical world. They function in a way that is inherently different from the human eye. It is important to view the camera, not as a surrogate for eyes, but instead as its own sensor. One that sees things differently. Instead of trying to create software to use cameras to mimic the way humans see, how can we build software to use cameras as a tool to help us use our eyes differently?

The quantifiable sensations of cameras can be used as keys to segue from physical world material to a new realm of content. This content can exist within the infinite space of the digital world. So...the eyes are to the brain as the camera is to the computer/internet.

As cameras become increasingly ubiquitous, how will this reshape the material of the physical world? The physical spaces inhabited by humans are built as extensions of our senses. We construct our world to smell, hear, taste, touch, see...but cameras are now a part of the human. Will we start building things for cameras? How and where can we construct, change, reshape our space as a place for the human plus camera? Where can we use cameras to help us gain access to content that is otherwise invisible to the naked eye?

There are an infinite number of applications for this principle. Some ideas I was toying around with this week for my softness assignment: what if you established a database of quantifiable patterns, and an algorithm to scan through an image to find patterns from that database? Let's say I have a picture of my friends in front of a brick wall, and some company has codified the values of that brick wall in a database. Every time a photo was input that contained their designated brick wall value, that would be a pass-key into some other form of content. A way to embed branding within physical world materials...a scary thought.

Things to keep in mind: even though code is precise, this process does not have to be precise. This degree of imprecision and inaccuracy in pattern recognition is a valuable tool.

Along these lines, I began to think of instances where the visual material is already bound, controlled and quantifiable. The first thing that came to mind was the frames of a movie at a theater. What if a movie was released where you were encouraged to go into the theater with your digital camera and take pictures of the screen? Later on, if you ran those photos of the frames through the database, you could access via those images (which are keys)...another dimension of content. What if you did this with paintings at a museum? A digital image of a painting could be a pass-key to some other dimension of that artwork.
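The pattern-database idea above could be sketched roughly like this in Python. Everything here is hypothetical (the bucket size, the database contents, the function names): each registered material is stored as a coarse colour signature, and a photo unlocks content when its signature contains a registered one. The coarse quantization deliberately builds in the imprecision I mentioned, so small lighting differences still hit the same key.

```python
# Hypothetical sketch: registered material patterns as coarse colour
# signatures, used as pass-keys into a content database.

def quantize(pixel, step=64):
    """Bucket an (r, g, b) pixel into a coarse cell so small lighting
    variations still map to the same key (imprecision as a feature)."""
    r, g, b = pixel
    return (r // step, g // step, b // step)

def signature(pixels, step=64):
    """The set of colour buckets present in an image's pixels."""
    return {quantize(p, step) for p in pixels}

# Made-up database: pattern signature -> hidden content.
DATABASE = {
    frozenset({(2, 1, 1), (3, 1, 1)}): "brick-wall brand content",
}

def unlock(pixels):
    """Return any content whose registered signature appears in the photo."""
    sig = signature(pixels)
    return [content for key, content in DATABASE.items() if key <= sig]
```

For example, a photo containing brick-ish pixels like `(170, 80, 70)` and `(200, 90, 75)` would quantize to `(2, 1, 1)` and `(3, 1, 1)` and match the registered signature, no matter what else is in the frame.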

Yesterday before class I was lucky enough to visit "Material Connexion"...an amazing library of materials. I say lucky because they don't let you in unless you have a pricey membership, but since our topic for Softness this week was "materials", our professor Despina arranged for us to go...many many thanks. At Material Connexion, I saw material by a company called Microtrace that made me think of these things. It is a company that sells microscopic ID tags embedded in paints, dyes, resins...that you can mix with your product's material to give it a unique code. Then you need a reader to detect the code (microscope, UV light...why not a CAMERA?). All of the codes are kept in their database. I'm sure you can imagine the context under which such a material was developed (and same for pretty much all materials, I would think)...the military. But it is now used throughout many industries for brand security. Visit their site if interested: www.microtracesolutions.com.

After leaving Material Connexion, I had 3 hours to do my project for the week. It was actually quite fun because I had spent a lot of time thinking about it and I knew exactly how I would model the concept. Here are some images...and the concept overview.
Went to the pharmacy, bought a white t-shirt and some colourful thread. Took some photos of the thread and then got the bounds of their normalized RGB values. Started with one colour and wrote a program that would scan through a video to check if that colour existed. Then I did the same for another colour of thread. Then I sewed two tiny squares of each colour side by side as a "pattern" on the shirt, and had my program alert me when the "pattern" was recognized. So every time the video tracked one colour with the other one in a nearby pixel (within a threshold), it would alert you on the screen by saying "you are SOOO cool". This is a pretty simple model of the concept, but I plan on exploring it further...
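The detector described above could be modeled something like this (a Python sketch, not the actual program; the colour bounds, threshold, and frame representation are all made up). A frame is simplified to a dict of pixel positions to RGB tuples; the pattern fires when a pixel inside one colour's normalized-RGB bounds sits within a few pixels of one inside the other's.

```python
# Hypothetical model of the two-colour "pattern" detector:
# normalized RGB bounds per thread colour, plus a proximity check.

def in_bounds(pixel, lo, hi):
    """Check a pixel's normalized RGB against a colour's bounds."""
    total = sum(pixel) or 1
    norm = tuple(c / total for c in pixel)
    return all(lo[i] <= norm[i] <= hi[i] for i in range(3))

def find_colour(frame, lo, hi):
    """All pixel positions in the frame that match the bounds."""
    return [pos for pos, px in frame.items() if in_bounds(px, lo, hi)]

def pattern_found(frame, bounds_a, bounds_b, threshold=3):
    """True when a colour-A pixel lies within `threshold` pixels of a
    colour-B pixel -- the two stitched squares sitting side by side."""
    hits_a = find_colour(frame, *bounds_a)
    hits_b = find_colour(frame, *bounds_b)
    return any(abs(xa - xb) + abs(ya - yb) <= threshold
               for xa, ya in hits_a for xb, yb in hits_b)

# Made-up normalized-RGB bounds for a red and a green thread.
RED = ((0.5, 0.0, 0.0), (1.0, 0.3, 0.3))
GREEN = ((0.0, 0.5, 0.0), (0.3, 1.0, 0.3))

frame = {(10, 10): (200, 40, 30), (12, 10): (30, 210, 40),
         (50, 50): (120, 120, 120)}
if pattern_found(frame, RED, GREEN):
    print("you are SOOO cool")
```

Normalizing by the pixel's total brightness is what makes the bounds somewhat robust to lighting, which is the useful imprecision the concept depends on.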

