Nima Navab

Pneumatic Playground - Week #6

11/20/2016

 

Summary:

With lessons learned from In/Decline's massive, static and extremely heavyweight ceiling, we got started on the frames for walls have ears. Unlike the previous wooden model, each frame is now made of coroplast (corrugated plastic sheets) and styrofoam, making it extremely lightweight. Since each frame mounts separately on the wall, we are no longer limited to one static frame but have a series of modules behind the massive lycra frame. This means we can experiment with the arrangement rather than being locked into a fixed one, especially when installing on site. The setup is now versatile and light.

In addition to building the frames, all modules and our circuit are fully hooked up and ready for programming, as you can see in the test below. As of now the balloons are stripped bare. Next week, with the addition of spandex/lycra on top, walls have ears will really come together.

Pneumatic Playground Week #6 - Design + Build from Nima Navab on Vimeo.

Pneumatic Playground - Week #5

11/14/2016

 

Summary:

A couple of breakthroughs this week. The first had to do with controlling the deflation rate, a problem that has persisted throughout the project. Our plan up to the weekend was to create different flow channels with incrementing openings, but it turns out a piece of fabric does the job: how much you compress that fabric in the 1" deflation chamber sets the rate, as you can see in the video, where the inflation rate matches the deflation rate.
Thierry sizing up a fully inflated balloon

Pneumatic Playground Week #5/ Inflate = Deflate from Nima Navab on Vimeo.

TIP circuit finally replaces the breadboard of doom
To the left you can see the new setup, which eliminates the multiple breadboards and jumper wires that have been bugging us.

Also, as you can see in the video below, Thierry programmed multiple buffers so that every segment of recorded audio is stored separately (up to 30 buffers). This way each balloon inflates on its own dynamic buffer and, when activated, deflates while playing back its associated recorded segment. ​

Pneumatic Playground Week #5/ Multi-Buffer from Nima Navab on Vimeo.
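The multi-buffer scheme can be sketched roughly as follows. The real implementation lives in a Max/MSP patch; the class and method names here are hypothetical, and this is only a minimal illustration of the "one segment per slot, up to 30 slots" idea.

```python
# Hypothetical sketch of the multi-buffer scheme: each recorded speech
# segment lands in its own slot (up to 30), so a balloon can later "deflate"
# exactly the segment associated with it. Not the actual Max patch.
MAX_BUFFERS = 30

class SegmentBank:
    def __init__(self):
        self.buffers = []  # list of recorded segments (lists of samples)

    def record(self, samples):
        """Store a new segment; the oldest is dropped once all 30 slots are full."""
        if len(self.buffers) == MAX_BUFFERS:
            self.buffers.pop(0)
        self.buffers.append(list(samples))

    def play(self, index):
        """Recall the segment associated with a given balloon."""
        return self.buffers[index]

bank = SegmentBank()
bank.record([0.1, 0.2])   # first speech segment
bank.record([0.3])        # second speech segment
print(bank.play(0))       # recalls the first recorded segment
```

Dropping the oldest slot keeps the bank bounded, so long sessions never overflow the 30-buffer limit.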

Pneumatic Playground - Week #2-4

11/8/2016

 

Summary:

Three mics are set up around the lab. Any speech in the room directly inflates a 9' weather balloon. Sections of speech get stored in a buffer. When the balloon is idle, the buffer plays back on a delay, sounding as if lost voices are stored inside the balloon. As of now, hitting a switch deflates the balloon, spilling the contents of the buffer first from inside the balloon (via a speaker mounted inside) and then slowly ramping to a second speaker directly above the balloon chamber's outlet. Please see the video below for clarification.

Atmospheres Prototype #2 from Nima Navab on Vimeo.
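The ramp between the inner and outer speaker described above is essentially a crossfade driven by deflation progress. A minimal sketch, assuming an equal-power curve (the actual patch may use a different ramp):

```python
# Hypothetical sketch of the two-speaker spill: as the balloon deflates,
# playback ramps from the speaker inside the balloon to the speaker above
# the chamber. An equal-power crossfade keeps loudness roughly constant.
import math

def crossfade_gains(progress):
    """progress: 0.0 (fully inflated, inner speaker only) -> 1.0 (outer only)."""
    inner = math.cos(progress * math.pi / 2)  # fades out as balloon empties
    outer = math.sin(progress * math.pi / 2)  # fades in as balloon empties
    return inner, outer
```

At any point in the deflation, inner² + outer² = 1, which is what keeps the perceived level steady as the voice moves from inside the balloon to the room.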

Deflation Experiments
Due to the rapid rate of deflation, the buffer does not have enough time to spill enough speech to become interesting. We tried outputting through 4 empty pneumatic valves but were not successful, as the openings of the flow valves proved too small. This failure led to an important finding:
  • Inflation based on pressure does not depend on the size of the opening, since the pressure is constant
  • For deflation, however, separate valves with various opening sizes are needed to create a range of rates

Pneumatic Playground - Week #1

10/20/2016

 

Week #1: Quick Summary

Simple switch and potentiometer for controlling the valves.
For the first week of pneumatic playground, Thierry and I hooked up a separate balloon with discrete inflation and deflation. After powering up the pneumatics, we made a couple of circuits to control pulsing the inflation switch on and off: first a simple switch, then a potentiometer. The following week we'll be embedding a series of these valves into a surface so we can play with the deformations the balloons cause on the surface of tensile structures. Below you can see a short video of our experiment and the grant application I wrote, trying to get money to make the Proportional Pressure Controller accessible. Click full screen to view it properly:

Week #1: pneumatic playground from Nima Navab on Vimeo.
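The switch/potentiometer circuit above amounts to pulse-width control of the inflation valve: the switch gates the valve, and the pot sets how long the valve stays open each cycle. A hypothetical sketch of that mapping (the real circuit is analog hardware, not code):

```python
# Hypothetical model of the week-1 circuit: a potentiometer reading sets the
# duty cycle, and the valve pulses open/closed over a fixed period.
def valve_duty_cycle(pot_reading, pot_max=1023):
    """Map a 10-bit potentiometer reading to a 0..1 valve duty cycle."""
    return max(0.0, min(1.0, pot_reading / pot_max))

def valve_is_open(t, period=1.0, duty=0.5):
    """Simple on/off pulsing: the valve is open for `duty` fraction of each period."""
    return (t % period) < duty * period
```

Turning the pot toward its maximum lengthens the open portion of each cycle, which is exactly the "pulsating the on/off inflation switch" behavior described above.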

Escape Routes

4/19/2016

 
CONCEPT
One of the most problematic and complex issues rightfully circulating in the media and occupying headlines is the conflict that arises along borderlines, specifically refugee migration driven by catastrophic political, economic, social and cultural circumstances: ISIL and the Syrian regime army in Syria, drug cartels and narco-trafficking in Mexico and throughout South America, and the Libyan Civil War. Much of the relentless response to these massive migration patterns has translated into further spaces of confinement: closed borders, men and women locked up behind bars in migrant detention centers, crimes of non-assistance in the Pacific Ocean, and promises of taller, thicker walls along various borders of conflict. Such responses, fueled by panicked public support in many European and North American countries under organizers and spokesmen such as Trump or Le Pen, to name just two, are not indicators of some far-fetched Orwellian future; they highlight the current political-geographical landscape of confinement and indifference.

Focusing on large numbers of migrants, I would like to address these spaces of confinement and patterns of movement not through any specific political, environmental or social parameters, but through the nature of movement, response, resilience, acceptance and retaliation itself. To create a dynamic environment where a continuous flow of swarming actants acts and reacts to its immediate spatial surroundings, I will turn to a model of objects on a table, where every object is surveilled, registered and recognized, much like the globe, its many borders and geography itself. To program various behaviors, instead of turning to crowd-behavior theories such as contagion theory or convergent and emergent norm theory, I will begin with basic behaviors of spatial intelligence, intelligence here meaning immediate and continuous responses to the periphery, to the dynamic interior borders of objects, and to the relationships that result between them on the table. Continuous relations between objects will place the swarms in various spatial situations, like kettles in a riot, and each situation will spark choreographed yet dynamic and changing responses (kettling, for example, is met with agitation and panic in the movement patterns). Arrangements of objects of different shapes and features, manipulated by viewers in real time, will place a few, some or many autonomous agents in areas of confinement where solid and void demand different routes, patterns and behaviors of movement, consequently highlighting escape routes to spatial freedom against mass control and subversion in the 21st century's world of borders.

TECHNICAL DESCRIPTION
Escape Routes enables the user to bridge the line between the physical world and the virtual world by tracking real-life objects and assigning them to recognizable objects in the processing framework. This allows for all sorts of phenomena, dynamic events and special effects to occur in real-time. Furthermore, if additional real-life objects are placed onto the frame, new interactions/ relations can occur between them, demonstrating the code’s ability to control action flow whilst encouraging participation.

FRAMING

Swarm Urbanism
By Kokkugia

http://www.kokkugia.com/swarm-urbanism

Using three basic rules, avoid crowding neighboring agents, steer toward the average heading of the flock, and move toward the center of the flock, Kokkugia uses swarm intelligence to dynamically plan an urban design that allows maximum comfort for the area's inhabitants. This is the closest example I found that uses swarms with basic rules or behaviors to outline relational concepts in space. Where Kokkugia's goal is to lay the groundwork for the design itself, my project instead provides a dynamic environment where the swarm constantly responds to its environmental constraints.
Left-To-Die Boat
By Forensic Architecture

http://www.forensic-architecture.org/case/left-die-boat/

One of my favorite architectural computation labs, the Forensic Architecture team dissects extremely complex arrangements of events, such as bombs exploding over the Gaza Strip or, in this case, boats full of refugees strategically avoided by rescue teams and left to die. Through algorithmically rich and complex methodologies, the lab maps out spatial realities from all the media it can find, tweets, snippets of video, UN reports and news coverage, to piece together exactly what happened in areas of conflict, in most cases turning covered-up stories into public truth.
SandBox
By Rafael Lozano-Hemmer

http://www.lozano-hemmer.com/sandbox.php

The concept and implementation of Lozano-Hemmer's 'Sandbox' is very closely related to 'Escape Routes'. Focusing on the Mexico-USA border, the project encourages participation at two very different scales: a small sandbox overlooking the beach has miniature versions of the people down below projected into it, while live video of the people above playing with the small sandbox is projected back onto the beach. With a border drawn through the middle of the frame, the project highlights the effects of control and convergence in a playful manner, and it does so using the same surveillance technologies deployed along the border. It is a relational experiment that playfully manipulates and redirects people's movement through space, and it succeeds through a rather simple intervention. The core strength of the project lies in its spatial arrangement: the play between the two scales and their hierarchical relationship (a vertical hierarchy) provides dynamic levels of interaction. A sense of control from the top and response on the beach switches from time to time depending on the social dynamics of each environment, blurring the line between the controllers and the controlled.

SOFTWARE ARCHITECTURE
Picture

OpenCV Library

3/11/2016

 
What is CV (Computer Vision)?

"
Computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images and, in general, high-dimensional data from the real world in order to produce numerical or symbolic information, e.g., in the forms of decisions." (source)

What is OpenCV (Open Source Computer Vision Library)?


"OpenCV is an open source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications...

The library has more than 2500 optimized algorithms... These algorithms can be used to detect and recognize faces, identify objects, classify human actions in videos, track camera movements, track moving objects, extract 3D models of objects, produce 3D point clouds from stereo cameras, stitch images together to produce a high resolution image of an entire scene, find similar images from an image database, remove red eyes from images taken using flash, follow eye movements, recognize scenery and establish markers to overlay it with augmented reality, etc. OpenCV has more than 47 thousand people of user community and estimated number of downloads exceeding 7 million...

It has C++, C, Python, Java and MATLAB interfaces and supports Windows, Linux, Android and Mac OS. OpenCV leans mostly towards real-time vision applications and takes advantage of MMX and SSE instructions when available... OpenCV is written natively in C++ and has a templated interface that works seamlessly with STL containers." (source)
"Making Things See" by Greg Borenstein (click for source)
'OpenCV for Processing' (not to be confused with 'OpenCV' itself)

"OpenCV for Processing is based on OpenCV's official Java bindings. It attempts to provide convenient wrappers for common OpenCV functions that are friendly to beginners and feel familiar to the Processing environment." (source)

What can it do:
  1. Face Detection
  2. Brightness Contrast
  3. Filter Images
  4. Find Contours
  5. Find Edges
  6. Find Lines
  7. Brightest Point
  8. Region Of Interest
  9. Image Diff
  10. Dilation And Erosion Thin
  11. Working With Color Images
  12. Background Subtraction
  13. Color Channels
  14. Find Histogram
  15. Hue Range Selection
  16. Calibration Demo
  17. Histogram Skin Detection
  18. Depth From Stereo
  19. Warp Perspective
  20. Marker Detection

LINK TO LIBRARY

*IMPORTANT NOTE:
OpenCV will not run on 32-bit systems (OS 10.6+). OpenCV will not work with Processing 3.

cv.jit Computer Vision for Jitter (Max/MSP)

Statistics
  • cv.jit.mean Calculates the mean value of a matrix over time.
  • cv.jit.ravg Calculates the running average of a matrix over time.
  • cv.jit.sum Sums all the pixels in a plane; works for any type/dim.
  • cv.jit.variance Estimates the variance of a matrix over time.
  • cv.jit.stddev Estimates the standard deviation of a matrix over time.

Motion Analysis
  • cv.jit.opticalflow Estimates the optical flow using various algorithms.
  • cv.jit.LKflow Estimates the optical flow using the Lucas-Kanade technique.
  • cv.jit.HSflow Estimates the optical flow using the Horn-Schunk technique.
  • cv.jit.track Track the position of up to 255 individual pixels.
  • cv.jit.features2track Initialize cv.jit.track to easiest pixels to track.
  • cv.jit.framesub Difference between consecutive frames.
  • cv.jit.shift Region tracking using the MeanShift and CAMShift algorithms.
  • cv.jit.touches Track multiple regions at a time. (Optimized for multi-touch interfaces.)

Binary Images
  • cv.jit.threshold Adaptive thresholding.
  • cv.jit.canny Extract binary edges from a greyscale image.
  • cv.jit.binedge Returns only edge pixels.
  • cv.jit.dilate Turns a pixel ON if at least one neighbour is ON.
  • cv.jit.erode Removes the edge pixels from an image.
  • cv.jit.open Erode followed by dilate.
  • cv.jit.close Dilate followed by erode.

Image Segmentation
  • cv.jit.floodfill Isolates a single connected component.
  • cv.jit.label Gives each connected component a unique value.
  • cv.jit.blobs.bounds Find bounding boxes for each connected component.
  • cv.jit.blobs.centroids Find center of mass for each connected component.
  • cv.jit.blobs.direction Find direction each connected component points to.
  • cv.jit.blobs.elongation Calculate elongation for each connected component.
  • cv.jit.blobs.moments Calculate moments of inertia for each connected component.
  • cv.jit.blobs.orientation Measure angle of main axis for each connected component.
  • cv.jit.blobs.recon Carry out pattern recognition on each connected components.
  • cv.jit.blobs.sort Re-arranges labels so that each connected component keeps the same label from frame to frame.

Shape Analysis
  • cv.jit.mass Returns the number of non-zero pixels.
  • cv.jit.centroids Calculates the center of mass for an image.
  • cv.jit.moments Computes various invariant shape descriptors.
  • cv.jit.orientation Calculates a shape’s main axis.
  • cv.jit.direction Calculates the direction a shape points to.
  • cv.jit.perimeter Counts the number of edge pixels.
  • cv.jit.elongation Estimates how thin a shape is.
  • cv.jit.circularity Estimates how compact a shape is.
  • cv.jit.undergrad Performs simple pattern recognition.
  • cv.jit.learn Performs pattern analysis and recognition on an incoming list.
  • cv.jit.faces Finds human faces in an image.
  • cv.jit.features Finds areas of high contrast, pixels that are easy to track.
  • cv.jit.lines Finds straight lines.
  • cv.jit.hough Compute Hough space. (by Christopher P. Baker and Mateusz Herczka.)
  • cv.jit.hough2lines Find straight lines in Hough space.
  • cv.jit.snake Fit a point sequence to image edges.

Miscellaneous
  • cv.jit.grab Cross-platform wrapper for jit.qt.grab/jit.dx.grab.
  • cv.jit.changetype Change the type of a matrix without changing other attributes.
  • cv.jit.resize Anti-aliased matrix resize.
  • cv.jit.cartopol Treats the data in two matrices as cartesian coordinates and translates to polar data.
  • cv.jit.poltocar …and vice versa.

Drawing and display
  • cv.jit.track.draw Visualize output of cv.jit.track.
  • cv.jit.lines.draw Visualize output of cv.jit.lines.
  • cv.jit.features.draw Visualize output of cv.jit.features.
  • cv.jit.faces.draw Visualize output of cv.jit.faces.
  • cv.jit.centroids.draw Visualize output of cv.jit.centroids.
  • cv.jit.blobs.orient.draw Visualize output of cv.jit.blobs.orientation.
  • cv.jit.blobs.elongation.draw Visualize output of cv.jit.blobs.elongation.
  • cv.jit.blobs.direction.draw Visualize output of cv.jit.blobs.direction.
  • cv.jit.blobs.centroids.draw Visualize output of cv.jit.blobs.centroids.
  • cv.jit.blobs.bounds.draw Visualize output of cv.jit.blobs.bounds.
  • cv.jit.blobs.color Visualize output of cv.jit.label.
  • cv.jit.shift.draw Drawing utility for cv.jit.shift.
  • cv.jit.flow.draw Display optical flow using hue and saturation.
  • cv.jit.touches.draw Drawing utility for cv.jit.touches.

Obsolete objects
  • cv.jit.covariance Computes the covariance matrix of a vector.
  • cv.jit.mahalanobis Computes the Mahalanobis metric.
  • cv.jit.hmean Calculates the harmonic mean over time.
  • cv.jit.gmean Calculates the geometric mean over time.
  • cv.jit.trackpoints Display utility for cv.jit.track.
  • cv.jit.trackgroup Manager utility for cv.jit.track.
  • cv.jit.shapeinfo Wrapper for cv.jit.moments.

LINK TO LIBRARY
*IMPORTANT NOTE:
Patch requires Kinect 1, Max/MSP, Jitter, CNMAT Externals, tapTools, freenectlib, oscP5

Tangible Physics Engine Prototype Setting & Processing Pipeline

Physical Setup:
  • Kinect mounted above the table, providing an RGB video plane matrix, an IR matrix and a depth map
  • Short-throw projector projecting down onto the table

Mapping:
  • Projection mapping using syphon client mapped through MadMapper

Max/ Jitter:
  • Kinect hooked up using jit.freenect (freenect Kinect library for Max)
  • Freenect in mode 1 (depth map) set at a 5" depth cutoff from the table surface (like a scan)
  • Background subtraction + slide down so only new objects on the surface are seen
  • Using the computer vision library's cv.jit.blobs.centroids, iterate through every blob's x, y, z position and mass
  • For every frame, scale, prepend and route the tracking info via the Open Sound Control protocol (OSC library for Max by CNMAT), through an identified port on localhost

In Processing:
  • Using the oscP5 library in Processing, receive the OSC messages and control the location of the attractor, repellor, generator, etc. as part of the tangible physics engine/particle system.
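The Max-to-Processing handoff above can be sketched in miniature: blob centroids are scaled to normalized coordinates, prepended with an address, and sent as OSC messages to a port on localhost. The address pattern and field names below are assumptions for illustration, not what the actual patch uses:

```python
# Hypothetical sketch of the OSC handoff: scale blob centroid data from
# pixel space to 0..1, then wrap it as an (address, args) OSC message.
def blob_to_osc(blob, width=640, height=480):
    """blob: (x_px, y_px, depth, mass). Returns (osc_address, scaled_args)."""
    x, y, z, mass = blob
    return ("/blob", [x / width, y / height, z, mass])

# One message per blob per frame; a real sender would push these to a UDP port.
msgs = [blob_to_osc(b) for b in [(320, 240, 0.5, 1200)]]
```

On the Processing side, oscP5 would unpack each message and move the corresponding attractor or repellor to the normalized (x, y) position.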

Tangible Physics Engine Prototype:

3/1/2016

 
I want to create an application that will enable me to bridge the line between the physical world and the virtual world by tracking real-life objects and assigning them to objects in the application. The chosen objects thereby become registered as object-oriented objects within the code and can function as such. This allows all sorts of phenomena, events and special effects to occur not on the screen but in your immediate built environment, in real time. Furthermore, if additional real-life objects are placed onto the frame, new interactions/relations can occur between them, demonstrating the code's ability to control action flow whilst encouraging participation! To prototype this application I will create a 'tangible physics engine' revolving around physical objects in space, using OpenCV and releasing particles into the space via Leap Motion's gesture mapping.
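At its core, the tangible physics engine treats each tracked object as a force source acting on particles. A hypothetical Euler-integration sketch of one attractor (the real prototype runs in Processing; names and constants here are assumptions):

```python
# Hypothetical sketch: a tracked real-world object becomes an attractor,
# and each particle accelerates toward its position every frame.
def step(pos, vel, attractor, strength=0.5, dt=1.0):
    """Advance one particle toward the attractor by one frame (Euler step)."""
    ax = (attractor[0] - pos[0]) * strength   # pull along x
    ay = (attractor[1] - pos[1]) * strength   # pull along y
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel
```

A repellor is the same update with the force sign flipped, and moving the physical object simply moves the attractor position fed into each frame's update.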

References:

A Table Where Little People Live
By: Team Lab
Reality Editor:
a new kind of tool for empowering you to connect and manipulate the functionality of physical objects.
By: Fluid Interfaces MIT Media Lab
The Giver of Names:
is, quite simply, a computer system that gives objects names.
By: David Rokeby
Form + Code
By: Casey Reas

Temporality in Play

12/17/2015

 

synthesis: 'now'

12/10/2015

 
collage of various illustrations including wave function collapse and the QBist interpretation
At first I was after simulating the collapse of the wave function based on QBist (Quantum Bayesianism) theory, where through the subjective act of measurement one instance of the many probable locations of collapse is experienced. This was the conceptual basis of the project, derived from a recent fascination with quantum theories of the collapse, which have a far-reaching grip on our understanding of time and space (both micro and macro).

To do this I was trying to create an illusion where a participant in the installation, through the act of looking anywhere in the space in front of them, would make a drop of water appear and hang suspended exactly in that location (for as long as they held their gaze). Technically this would be possible through a series of linear actuators spread throughout the space, precisely releasing x drops of water per second where the viewer is looking (the location in space determined through eye tracking). Furthermore, the position of each drop needs to be tracked, so that whenever a drop passes the y location where the spectator is looking, the strobe flashes and freezes it frame after frame, creating the illusion of suspension (the stroboscope technique) and making it seem as if the act of perceiving literally manifests a material reality, challenging our conception of how the physics of everyday reality works. I am still pursuing this idea, but given the complexity of the installation and the number of nodes (actuators) needed to release drops in space, it became obvious that it needs to evolve gradually.
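The suspension illusion rests on simple timing: if drops are released at a fixed rate and the strobe flashes once per drip period, each flash catches a drop at the same height, so the eye sees a stationary drop. A back-of-envelope sketch under an idealized free-fall assumption:

```python
# Back-of-envelope timing for the stroboscope technique (idealized free fall,
# no air resistance; the installation's real timing would need calibration).
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(distance_m):
    """Time for a drop to free-fall a given distance: t = sqrt(2d/g)."""
    return math.sqrt(2 * distance_m / G)

def strobe_rate_for_suspension(drips_per_second):
    """Flashing at exactly the drip rate freezes the stream in place."""
    return drips_per_second
```

Flashing slightly slower or faster than the drip rate would make the frozen drop appear to drift slowly down or up, another stroboscope effect the installation could exploit.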

now, the actual installation, is the first exploratory step in that direction...
1st: photos of 'now' installed @ topological media lab || 2nd: sketch collage of Einstein's special relativity theory
Instead of tackling the simulation of the collapse, I decided to take a step back and focus on relativity. The main focus then became the construction of time: the concept of now versus the actual slice of time in space (never fixed and the same, constantly shifting position into past and future based on velocity and position in space). Meaning there is no shared moment in time: what we perceive in the moment as now is actually 85 ms behind, and the further an object is from our location, the further it resides in the past. Relationships like these are visualized in the poster above.

Given 4 channels of control through the dimmer pack, I decided to control the frequency and brightness of a series of lights fixed in space. The challenge was to develop a program that would take Einstein's theory of special relativity into consideration and, by tracking movement in space, animate these four lights according to the speed and position of the viewer and their position relative to each node. In a very straightforward way, I programmed these relations based on relativity so that the closest node reacted the fastest with the most brightness; the further away a node, the more delayed its response and the dimmer its light.
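The mapping described above can be sketched as a single function per light node: delay grows with distance from the viewer, brightness falls off, so the nearest light answers fastest and brightest. The constants and linear curves here are hypothetical; the actual patch was built in Max:

```python
# Hypothetical sketch of the per-node mapping: farther from the viewer means
# a later, dimmer response. Linear curves and max_delay are illustrative only.
def node_response(distance, max_distance, max_delay_s=2.0):
    """Return (delay_seconds, brightness 0..1) for one light node."""
    d = min(distance / max_distance, 1.0)  # normalized distance, clamped
    delay = d * max_delay_s                # farther -> responds later
    brightness = 1.0 - d                   # farther -> dimmer
    return delay, brightness
```

Running this for all four nodes each frame, with `distance` updated from the tracking data, reproduces the basic behavior of the first variation; the other variations layer blinking, reversed fades and feedback on top.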

In the video below there are 4 variations of the program. I shot the installation from a total of 3 angles. For every shot I go through the 4 variations, each separated by a white flash in the video. Here are the four variations in order:

  1. fade:
    the closer you are to a node, the brighter it is; the others are adjusted according to your distance from them and will fade faster or slower based on your velocity.

  2. flash fade:
    same as above, except the lights all blink at the same time, at a rate based on how fast you are moving. The faster one moves, the more one's time slows down, and the blinking simulates that.

  3. relative flash + reverse ramp fade:
    the closest node blinks almost instantaneously; the further the node, the more delayed the response and the longer the interval. I reversed the fade because, in the experience, walking towards the 'now' moment made more sense than a now projected above, where it couldn't be seen.

  4. oscillating relative flash + reverse ramp fade & feedback:
    same as above, but with some noise added by reducing the tracking threshold, and with a feedback loop where the presence of the light itself is taken into account, meaning the light emitted from each node increases and decreases the delay of the others (feeding back to itself in a loop).

Please look at the videos of the main patch and tracking to see it working in real-time.
Tracking patch:
Main max patch (3rd variation):
Setup:

Research on Responsive Computational Environments

11/17/2015

 
Spring Dragon Trail, 2015, by Philip Beesley Architects, http://philipbeesleyarchitect.com

Interact | Interaction | Interactive / Interactivity

Interactivity does not have a single, defining meaning. The term has many implications and is broadly used across disciplines such as biology, design, new media, communications and computer science. Interactive systems and networks can be biological, virtual or perceptual, and in general are defined in many forms; in fact most things in our everyday life can be described as interactive. However, some fundamental characteristics make up the most basic description of the term. Interactive by definition means "the pattern of active", rooting back to the mid-19th century. The common thread that makes anything interactive is when messages (patterns) are "related to a number of previous messages [patterns] and to the relationship between them" (Rafaeli 1988). To achieve this there need to be sources that affect each other through communication and simultaneously transform. 'Interactivity' dates back to 1995 and first showed up in Parsons' research, which used the suffix 'ity' as the "quality or condition of interaction" (Parsons 2010). What this research signifies is that while a mobile app, an art installation, a video game and cellular structures are all interactive, it is the quality and condition of interaction, the interactivity, that distinguishes one form of interaction from another. The main question is the way in which something triggers reaction, triggers behavior. Behaviors or actions are required on both sides of the equation, and the quality and conditions of the messages mediated define its interactivity. Therefore a very simple interface such as a website and a very complicated telecommunication system can both be interactive, but the quality, quantity and conditions of the messages that go back and forth create extremely different sets of actions and reactions.
Waving Beans, by A. Kitaoka, http://gizmodo.com
"The idea that there is such a thing as fixed form is actually as much an assumption about perception as it is an assumption about art. It assumes that vision is not dynamic – that it is a passive, transparent registering of something that is just there, simply and inertly. If vision is stable, then to make art dynamic you have to add movement. But if vision is already dynamic, the question changes. It’s not an issue of movement or no movement. The movement is always there in any case. So you have to make distinctions between kinds of movement, kinds of experimental dynamics, and then ask what difference they make."
(Massumi 2008)
What Massumi points to is that the dynamics and shifts in perception are always there. The line between what is 'real' or physical versus 'virtual' or abstract behavior is of no importance; neither is the question "is this interactive?" Assuming everything is interactive, it is the interactiv(ity) of a work, playing with the quality and conditions of the present dynamics, be they perceptual, physical or virtual, that creates a feedback.
Ondulation, 2002, by Thomas McIntosh, estuaire.info
In the book Interact or Die, Arjen Mulder and Joke Brouwer define interaction as the 'formation of forms'. We and other organisms interact to survive; our existence arises from a complex network of actions and reactions bouncing off each other.
How many legs does the elephant have? playbuzz.com
“...perception becomes action, and the action of perceiving adds something to the work. The act of perceiving thereby becomes the act of making the work.”
(Mulder & Brouwer 2007)
In the next two sections we go over some examples of interactive/responsive bodies of work along just two branches: works involving or inspired by natural processes, attending to constantly shifting environmental factors such as atmospheric physics, geology, astronomy and fluid motion; and a series of exemplary works blurring the division between performer and spectator, offering dynamic, playful, flexible ways of perceiving an art object that is not static or prescriptive and is therefore in constant flux.

Vibrant Matter | Dynamic Environments

Neither From Nor Towards, by Cornelia Parker, 1992, londonartreviews.com
From Cultivating Alternatives.com (link)
On Vibrant Matter: A Political Ecology of Things by Jane Bennett
Blogger unknown...
"My ‘own’ body is material, and yet this vital materiality is not fully or exclusively human.  My flesh is populated and constituted by different swarms of foreigners… the bacteria in the human microbiome collectively possess at least 100 times as many genes as the mere 20,000 or so in the human genome… we are, rather, an array of bodies, many different kinds of them in a nested set of microbiomes."  (Bennet, 112-13)

"If human culture is inextricably enmeshed with vibrant, nonhuman agencies, and if human intentionality can be agentic only if accompanied by a vast entourage of nonhumans, then it seems that the appropriate unit of analysis for democratic theory is neither the individual human nor an exclusively human collective but the (ontologically heterogeneous) ‘public’ coalescing around a problem” (Bennet, 108).

Evolution of the Earth
Visual Representation of The History of Life on Earth (link)

The geologic time spiral—A path to the past, Joseph Graham, 2008, wikipedia.org
This timeline of the evolution of life represents the current scientific theory outlining the major events during the development of life on planet Earth. In biology, evolution is any change across successive generations in the heritable characteristics of biological populations. Evolutionary processes give rise to diversity at every level of biological organization, from kingdoms to species, and to individual organisms and molecules such as DNA and proteins. The similarities between all present-day organisms indicate the presence of a common ancestor from which all known species, living and extinct, have diverged through the process of evolution. More than 99 percent of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86 percent have not yet been described.

Robert Smithson
Spiral Jetty, 1970 (link)

Spiral Jetty is an earthwork sculpture constructed in April 1970 that is considered to be the central work of American sculptor Robert Smithson. Smithson documented the construction of the sculpture in a 32-minute color film also titled Spiral Jetty. Built on the northeastern shore of the Great Salt Lake near Rozel Point in Utah entirely of mud, salt crystals, basalt rocks and water, Spiral Jetty forms a 1,500-foot-long (460 m), 15-foot-wide (4.6 m) counterclockwise coil jutting from the shore of the lake. The water level of the lake varies with precipitation in the mountains surrounding the area, revealing the jetty in times of drought and submerging it during times of normal precipitation. Originally black basalt rock against ruddy water, Spiral Jetty is now largely white against pink due to salt encrustation. Since the initial construction of Spiral Jetty, those interested in its fate have dealt with questions of proposed changes in land use in the area surrounding the sculpture and of the proper amount of preservation, if any.

Ned Kahn
Wind Veil, 2000 (link)

Picture
Wind Veil, by Ned Kahn, 2000, nedkahn.com
The confluence of science and art has fascinated me throughout my career. For the last twenty years, I have developed a body of work inspired by atmospheric physics, geology, astronomy and fluid motion. I strive to create artworks that enable viewers to observe and interact with natural processes. I am less interested in creating an alternative reality than I am in capturing, through my art, the mysteriousness of the world around us.
My artworks frequently incorporate flowing water, fog, sand and light to create complex and continually changing systems. Many of these works can be seen as “observatories” in that they frame and enhance our perception of natural phenomena. I am intrigued with the way patterns can emerge when things flow. These patterns are not static objects; they are patterns of behavior – recurring themes in nature.

Philip Beesley Architects
Near-Living Responsive Architecture, Hylozoic Ground, 2010 (link)

Picture
Hylozoic Ground, Philip Beesley, 2010, http://philipbeesleyarchitect.com
The studio's design methods combine the durable crafts of heavy machining and building with advanced digital visualization, industrial design, digital prototyping, and mechatronics engineering. Sculptural work in the past three decades has focused on immersive textile environments, landscape installations and intricate geometric structures. The most recent generations of these works feature interactive lighting systems and kinetic mechanisms that use dense arrays of microprocessors and sensors. Chemical protocell metabolisms are in the early stages of development within many of these environments. These works contemplate the ability of an environment to be near-living, to stimulate intimate evocations of compassion with viewers through artificial intelligence and mechanical empathy. The conceptual roots of this work lie in 'hylozoism', the ancient belief that all matter has life.

Performer/Spectator | Performative Environments

Picture
The Artist is Present, by Marina Abramovic, 2010, www.widewalls.ch
From Rhizome (link)
Performance, All Over the Map: Chris Salter's "Entangled"
By Maria Chatzichristodoulou [aka Maria X]
"...performance in a way that challenges our understanding of what performance is, but also demonstrates the profound connections between diverse sets of interdisciplinary practices that have not, up to now, been approached, considered or articulated as either interconnected or performative...

“everything has become performative” (p. xxi)... a shift in the zeitgeist that occurred at the end of the 20th and the beginning of the 21st century, when the euphoria of the virtual was replaced with a reconsideration and re-foregrounding of the physical body and, with it, “embodiment, situatedness, presence, and materiality.”

As a result, claims Salter, “performance as practice, method, and worldview is becoming one of the major paradigms of the twenty-first century, not only in the arts but also the sciences.”... what performance suggests as a worldview is that 'reality' is not pregiven (and thus cannot be represented), but rather “the world is enacted or actively performed anew.” (p. xxvi) Thus, approaching the world as 'performative' is approaching the world as a 'reality' that “emerges over time” and is “continually transformed through our history of interactions with it.” (Salter, p. xxvii)

Teresa Margolles
In The Air, 2003 (link)

Picture
Teresa Margolles, In The Air, 2003, frieze.com
In the main hall of the museum, soap bubbles are churned into the air by simple, easily purchasable machines. An installation of ethereal beauty, En el aire (In the Air, 2003) turns on us with shocking vengeance when we learn that the water in these soap bubbles comes from the morgue and has been used to clean the dead bodies prior to autopsy.

Chris Salter
ilinx, 2015 (link)

Picture
Chris Salter, ilinx, 2015, chrissalter.com
Ilinx is a performative environment for the general public provoking an intense bodily experience that blurs the senses of sight, sound and touch. In the environment, a group of four visitors at a time wear specially designed garments. These wearables are outfitted with various sensing and actuating devices that enable visitors to interface with the performance space. During the event, a ritualistic progression which lasts approximately twenty minutes, the natural continuum between sound and vibration, vision and feeling becomes increasingly blurred, extending and stretching the body’s boundaries beyond the realm of everyday experience. The project is inspired by work in the area of what is called sensory substitution – the replacement of one sensory input (vision, hearing, touch, taste or smell) by another, while preserving some of the key functions of the original sense. The term ilinx (Greek for whirlpool) comes from the French sociologist Roger Caillois and describes play that creates a temporary but profound disruption of perception, as is common in experiences of vertigo, dizziness, or disorienting changes of speed, direction or the body’s sense in space. “…An attempt to momentarily destroy the stability of perception and inflict a kind of voluptuous panic upon an otherwise lucid mind.” (Salter, ilinx)

Rafael Lozano-Hemmer
Sandbox, 2010 (link)

Picture
Rafael Lozano-Hemmer, Sandbox, 2010, glowsantamonica.org
Sandbox is a large-scale interactive installation created originally for Glow Santa Monica. The piece consists of two small sandboxes where one can see tiny projections of people who are at the beach. As participants reach out to touch these small ghosts, a camera detects their hands and relays them live to two of the world's brightest projectors, which hang from a boom lift and which project the hands over 8,000 square feet of beach. In this way people share three scales: the tiny sandbox images, the real human scale and the monstrous scale of special effects. The project uses ominous infrared surveillance equipment not unlike what might be found at the US-Mexico border to track illegal immigrants, or at a shopping mall to track teenagers. These images are amplified by digital cinema projectors which create an animated topology over the beach, making tangible the power asymmetry inherent in technologies of amplification.

Topological Media Lab
Einstein's Dream, 2013 (link)

Picture
Topological Media Lab, Einstein's Dream, 2013, topologicalmedialab.net
Einstein’s Dream is an environment in which visitors encounter performers in responsive fields of video, light, and spatialized sound, in a set of tableaus. Each tableau is inspired by a vignette from Alan Lightman’s novel, Einstein’s Dreams, set in Berne, Switzerland, in 1905, Albert Einstein’s annus mirabilis. Or rather, a set of parallel 1905s, each of which is a different kind of time. In one, time slows to a halt as you approach a particular place; in another there is no future; in a third, time sticks and slips; in a fourth, age reverses and what is rotten becomes fresh as time passes.

In one version of this project, a large theatrical space (24m x 20m x 8m) will contain multiple tableaux, each accommodating 6-12 people in a pool of light and sound modulating in concert with activity. Visitors and performers can move from tableau to tableau. The performers’ actions, together with the textures and rhythms of lighting, sound and visitors’ expectations, create different kinds of time poetically related to the novel’s vignettes. As a performer walks from place to place she may drag a pool of conditioning light and sound. The pool mutates or merges into another pool with a different type of time.
