Each presentation of Wind takes the form of a temporary intervention, constructed around a site-specific visual environment of objects moving in the wind, such as a tree or a field of tall grass. A video camera records images of this visual environment. The images are directed through a digital processing system that extracts fields of motion and mathematically transforms these into fields of sound. The fields of sound are in turn reproduced through loudspeakers placed within the original environment.
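The processing chain described above can be sketched in miniature. The following is a minimal illustration, not the actual system used in Wind: it assumes motion is estimated by simple frame differencing (a crude stand-in for optical flow) and assumes a hypothetical mapping in which each image row drives one sine partial whose amplitude follows the motion energy in that row.

```python
import numpy as np

def motion_field(prev_frame, next_frame):
    """Approximate a field of motion as per-pixel luminance change
    between two consecutive frames (crude stand-in for optical flow)."""
    return np.abs(next_frame.astype(float) - prev_frame.astype(float))

def sonify(motion, sample_rate=8000, duration=0.5):
    """Map a motion field to a field of sound.
    Hypothetical mapping (not Wind's actual transform): each image row
    contributes one sine partial; its amplitude follows the total
    motion energy in that row."""
    t = np.arange(int(sample_rate * duration)) / sample_rate
    row_energy = motion.sum(axis=1)
    if row_energy.max() > 0:
        row_energy = row_energy / row_energy.max()  # normalise to [0, 1]
    audio = np.zeros_like(t)
    for i, amp in enumerate(row_energy):
        freq = 200.0 + 40.0 * i  # one fixed frequency per row (arbitrary)
        audio += amp * np.sin(2 * np.pi * freq * t)
    return audio / max(motion.shape[0], 1)  # keep amplitude bounded

# Two synthetic 4x4 "frames": motion occurs only in the top row,
# so only the lowest partial sounds.
a = np.zeros((4, 4))
b = np.zeros((4, 4))
b[0] = 1.0
wave = sonify(motion_field(a, b))
```

In the installation itself this loop runs continuously on live camera input, so the sound at any moment reflects the motion in the scene at that moment.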
Heard and seen together, the sound fields and visual environment form a tight cybernetic feedback loop. The sound fields guide and enhance the viewer's visual experience, focusing their visual attention toward particular kinds of visual motion. This strengthened visual experience guides and enhances the viewer's sonic experience, focusing their sonic attention toward particular kinds of sonic motion. This feedback loop creates a strong interconnection between the visual and the sonic within the mind of the viewer, leading to momentary states of complete attention to the different senses, and an overall heightened awareness of the beauty that is present.
Following a grant from NetzNetz/Wien Kultur in 2011, a battery-powered vision-processing module incorporating an amplifier was developed, allowing the work to be exhibited in any outdoor environment.