This program is current as of 5 October 2017 and is subject to change.
The KISS2017 events will take place at the orange-flagged venues shown in the map below:
Thursday 12 October 2017
Computer music pioneer Knut Wiggen was the founding director of EMS (Electronic Music Studio) in Stockholm. During his 10-year tenure, he introduced several innovations and built up the international reputation EMS is known for today.
Featuring music performed live with Kyma.
This piece uses a ribbon controller and an iPad to control complex clouds of flocking oscillators and voices of feedback noise in a multi-channel space in Kyma.
It takes inspiration from natural phenomena to create a musical experience where the listener is put at the center of a virtual space in which sonic elements appear to interact with each other in the physical space and in the parametric space of the composition. The piece also takes ideas from Gestalt psychology to play with the perception of grouping of individual elements constituting a mass of sound.
Algorithms extend the presence of the performer from a single voice to a force that guides the trajectories, in physical and parametric space, of complex clouds of events in a flock. There is a direct but augmented connection between the gestures of the performer and the extended results in the sonic space of the piece, creating an augmented sound atmosphere where virtual forces manifest themselves through sound. There is also a contrasting element of noise, produced through feedback FM and controlled with a 2D fader on the iPad, which acts as a disruptive force affecting the system.
This approach focuses on the role that an extended perception (a subjective view) of the environment’s acoustical properties can play in creating a different experience of any form of sound organization. The environment affects, and is at the same time affected by, any audible sound processing involved and detected.
But the environment is also a discrete part of a more complex instrument. You may conceive of it as one of the actors on a stage: it actively participates in a performance, giving back information on its acoustical properties. It is therefore possible to track those properties and measure how much (and how) they modulate – and are modulated by – whatever audible sound processing is going on. In a manner of speaking, the environment already extends itself “in act” (an objective view) while interacting with every other “sonic actor’s” properties. It thus extends its active role, its field of action. And – even though a composer can design, model, and manipulate the environment’s acoustical properties the way he does with an augmenting instrument – you may think of an environment not as just the sum of static acoustical properties, but as a dynamic part of a dynamic whole (a living system?).
Scenes pulled at random from Google Street View provide the graphic score for a sonic improvisation in Kyma.
Street view images are merely presets in a vast data structure. Crude photographic representations of a time and place created with no artistry or expression.
Through a process of spontaneous sound design drawing on any cues or inspiration that a random view might contain, the improviser re-imbues these banal images with the creative expression they never had, resurrecting them momentarily as glorious presets in music space.
Investigation/Tribute with 8-Channel Audio & Video. Augmenting Reality = ✔
TBD
In the beginning was the Bubble. Before life could evolve out of inorganic compounds, there first had to be a bubble, a container, a boundary separating ‘self’ from ‘other’. It is at this boundary between inside and outside, between self and other, that the metabolic interchanges we call “Life” can occur. Seal off the border and you achieve equilibrium, but equilibrium is death.
During the live performance, the composer explores the boundary between inside and outside, using Kyma to augment sounds coming from familiar everyday objects, transforming them into music.
Continue the discussion over dinner with your newly found friends and old acquaintances. Included with conference registration.
Friday 13 October 2017
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
An overview of the conference theme — from the history of sound and music augmenting our reality, to current developments in AR, concluding by drawing attention to opportunities for sound designers & musicians to enhance augmented, virtual, and mixed reality applications. Intended to welcome everyone to the conference and provide some unifying themes and context for the upcoming talks and concerts.
Planetariums are perfect environments for creating immersive visual and aural experiences. There is some variation in how planetariums organize the playing of sound and image, but in general they can be used for pre-recorded content, live events, or a mixture of the two. In my presentation I will discuss some issues, workflows, and solutions I found in creating the full-dome show Einstein’s Gravity Playlist, a 23 minute 360 degree film with surround sound about the discovery of gravitational waves. Next I will discuss ways to integrate Kyma into the full-dome environment. Finally, I will discuss current practices in many science museum planetariums for mixing live and pre-rendered content with an eye toward creating Kyma-based events and performances.
Theo Lipfert, Director of the School of Film & Photography at Montana State University, Bozeman, is a filmmaker who is exploring the intersection between visual media and sound. He earned an MFA in Painting at Hunter College in New York City and exhibited paintings and prints widely in North America and Germany before devoting his creative energies to film, video, sound, and teaching. His documentary, narrative, and experimental films have been shown widely at film festivals and broadcast on television.
Ambisonic Tools for Kyma is a set of tools for anyone who wants to work with spatial audio and Ambisonics in Kyma. As far as we know, this is a world first.
Developed by composer and musician Anders Tveit in collaboration with NeverEngine Labs (Christian Vogel and Gustav Scholda).
Ambisonic Tools for Kyma is a collection of tools for working with classic first-order Ambisonics (which we intend to extend to Higher Order Ambisonics in the future).
This includes standard plane-wave encoders (mono sources to multiple encoded sources); custom decoders for any speaker setup and configuration possible within the Kyma system (4, 5.1, 7.1, 8, 3D cube, etc.); and transformation tools, from rotation of the soundfield (pitch, roll, and yaw) and the dominance effect to more extended transformations that really take advantage of the power of Kyma!
Also included are the necessary utility tools for converting A-format to first-order B-format Ambisonics for Soundfield microphones such as the Sennheiser Ambeo, Soundfield SPS200, and TetraMic, as well as tools for converting between different Ambisonic standardisations and normalisation schemes – from the classic B-format to the ambiX standard used by YouTube.
We think that these tools will provide the artist, composer, sound designer, and video or game developer with a creative platform for working with 3D sound production, all within the Kyma environment.
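For readers new to Ambisonics, here is a minimal sketch (in Python, purely illustrative and not code from the library) of the classic first-order B-format plane-wave encoding, a yaw rotation of the soundfield, and the FuMa-to-ambiX conversion mentioned above:

```python
import numpy as np

def encode_b_format(signal, azimuth, elevation):
    """Encode a mono signal as classic first-order B-format (FuMa W, X, Y, Z).

    azimuth/elevation are in radians; W carries the -3 dB (1/sqrt(2))
    factor of the traditional B-format convention.
    """
    w = signal * (1.0 / np.sqrt(2.0))
    x = signal * np.cos(azimuth) * np.cos(elevation)
    y = signal * np.sin(azimuth) * np.cos(elevation)
    z = signal * np.sin(elevation)
    return w, x, y, z

def rotate_yaw(w, x, y, z, angle):
    """Rotate the whole soundfield about the vertical axis (yaw).

    W and Z are invariant under yaw; X and Y rotate as a 2D vector.
    """
    xr = x * np.cos(angle) - y * np.sin(angle)
    yr = x * np.sin(angle) + y * np.cos(angle)
    return w, xr, yr, z

def fuma_to_ambix(w, x, y, z):
    """Convert first-order FuMa (W, X, Y, Z) to ambiX (ACN order, SN3D).

    At first order this is a reordering to W, Y, Z, X plus rescaling W
    by sqrt(2) to undo the FuMa -3 dB convention.
    """
    return w * np.sqrt(2.0), y, z, x
```

Decoding to a concrete loudspeaker layout then amounts to multiplying these four channels by a per-speaker gain matrix, which is why a single B-format master can feed quad, 5.1, 7.1, or cube rigs alike.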
Anders Tveit works mainly with electroacoustic composition, improvisation, and sound installations, in which the use of self-developed real-time processing and spatial audio software plays a key role in the musical expression.
“In my work, I am continuously concerned with new approaches to using new technology, both in the compositional process and in performance. For me, this creates a holistic way of thinking about the work and the artistic result. It often means that I like to work in such a way that the differences between developer, performer, and composer are blurred, something I find interesting, challenging and exciting.”
I am conducting research with Mark Gill from the SCSU Visualization Lab exploring the larger question of audio delivery in Virtual Reality settings. Our research considers approaches to live performance of electroacoustic music for an audience wearing VR headgear, prerecorded versions of concert music in a VR environment, and the spatial location of sound sources, with an ear toward environments and applications designed for the output of unique creative works. Our research includes real-time audio production in Kyma which is synced to, transmits control data to, and receives control data from the Unity game engine.
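Since both Kyma and Unity speak OSC, the control link described above can be pictured as a small bidirectional bridge. The sketch below uses Python with the python-osc package; the hosts, ports, and OSC addresses are illustrative assumptions, not the actual project configuration:

```python
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Illustrative addresses and ports only; a real Kyma/Unity setup will differ.
KYMA_HOST, KYMA_PORT = "192.168.1.10", 8000
LISTEN_HOST, LISTEN_PORT = "0.0.0.0", 9000

kyma = SimpleUDPClient(KYMA_HOST, KYMA_PORT)

def on_unity_event(address, *args):
    """Forward a control value received from the game engine to Kyma."""
    # e.g. map a player position coordinate to a hypothetical Kyma control
    kyma.send_message("/vcs/PlayerX", float(args[0]))

dispatcher = Dispatcher()
dispatcher.map("/unity/player/x", on_unity_event)

# Blocks and relays messages until interrupted.
BlockingOSCUDPServer((LISTEN_HOST, LISTEN_PORT), dispatcher).serve_forever()
```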
Discuss what you heard in the morning presentations and in the previous night’s concert. Included with conference registration.
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
An exposition of new features in Kyma 7.
The SphericalPanner allows for arbitrary panning in a 3D space. But without a head-tracker, the illusion is not convincing enough. Is it even possible to simulate a real acoustic space so convincingly that you can no longer tell the difference?
Desktop Demos: Speak one-on-one with composers and performers about their concert pieces.
Kyma Open Lab: Bring your Kyma questions and consult with the experts.
This approach focuses on the role that an extended perception (a subjective view) of the environment’s acoustical properties can play in creating a different experience of any form of sound organization. The environment affects, and is at the same time affected by, any audible sound processing involved and detected.
But the environment is also a discrete part of a more complex instrument. You may conceive of it as one of the actors on a stage: it actively participates in a performance, giving back information on its acoustical properties. It is therefore possible to track those properties and measure how much (and how) they modulate – and are modulated by – whatever audible sound processing is going on. In a manner of speaking, the environment already extends itself “in act” (an objective view) while interacting with every other “sonic actor’s” properties. It thus extends its active role, its field of action. And – even though a composer can design, model, and manipulate the environment’s acoustical properties the way he does with an augmenting instrument – you may think of an environment not as just the sum of static acoustical properties, but as a dynamic part of a dynamic whole (a living system?).
Investigation/Tribute with 8-Channel Audio & Video. Augmenting Reality = ✔
TBD
Samples of DIY instruments allow expanded sonic modulation in live Virtual Percussion performance that would not be possible without Kyma.
In Plurality Spring, players perform music to control robotic avatars exploring an unknown orb in deep space. Using the microphone to track pitches, the live acoustic player/performers control the movement of the robots as well as the emergent sonic environments.
The live acoustic audio mixes with in-game sound, creating an emergent augmented reality musical performance. Randomized levels, real-time decisions, and reactive audio lead to distinct musical results with each playthrough.
Ambisonic Tools for Kyma is a set of tools for anyone who wants to work with spatial audio and Ambisonics in Kyma. As far as we know, this is a world first.
Developed by composer and musician Anders Tveit in collaboration with NeverEngine Labs (Christian Vogel and Gustav Scholda).
Ambisonic Tools for Kyma is a collection of tools for working with classic first-order Ambisonics (which we intend to extend to Higher Order Ambisonics in the future).
This includes standard plane-wave encoders (mono sources to multiple encoded sources); custom decoders for any speaker setup and configuration possible within the Kyma system (4, 5.1, 7.1, 8, 3D cube, etc.); and transformation tools, from rotation of the soundfield (pitch, roll, and yaw) and the dominance effect to more extended transformations that really take advantage of the power of Kyma!
Also included are the necessary utility tools for converting A-format to first-order B-format Ambisonics for Soundfield microphones such as the Sennheiser Ambeo, Soundfield SPS200, and TetraMic, as well as tools for converting between different Ambisonic standardisations and normalisation schemes – from the classic B-format to the ambiX standard used by YouTube.
We think that these tools will provide the artist, composer, sound designer, and video or game developer with a creative platform for working with 3D sound production, all within the Kyma environment.
Anders Tveit works mainly with electroacoustic composition, improvisation, and sound installations, in which the use of self-developed real-time processing and spatial audio software plays a key role in the musical expression.
“In my work, I am continuously concerned with new approaches to using new technology, both in the compositional process and in performance. For me, this creates a holistic way of thinking about the work and the artistic result. It often means that I like to work in such a way that the differences between developer, performer, and composer are blurred, something I find interesting, challenging and exciting.”
The SphericalPanner allows for arbitrary panning in a 3D space. But without a head-tracker, the illusion is not convincing enough. Is it even possible to simulate a real acoustic space so convincingly that you can no longer tell the difference?
Featuring music performed live with Kyma.
RunningSong is an experiment in augmenting the physical act of running with sound. It is a digital re-imagination of Aboriginal “SongLines” and a sonification of the geospatial data of a run path. The performance creates a random trail for a runner through the streets of Oslo. The runner’s live position and speed interact with the performers to create a real-time audio representation of the route. The principles behind the performance can be used as part of a fitness training program that carefully manages distance to prevent over-training injuries while encouraging spontaneity, exploration and a sense of joyous freedom.
At the start of the performance the RunningSong system produces a random 2km path through the streets of Oslo starting and finishing at the performance venue. A performer in running clothes will set out to run the route and send a live video feed and telemetry data back to the venue. The projection of a map at the venue will show the audience and the on-stage performers the current location of the runner. As the runner enters a road, the performers will improvise a musical rendering for that section of the journey based on the rhythm described by the angle of the subsequent turn. The tempo is set by the live cadence data (steps per minute) from the runner.
At the centre of the performance will be an instrument I have created – an electric bullroarer, a re-imagining of the ancient Aboriginal instrument that traditionally comprises a piece of wood swung on a string. In the electric bullroarer, a speaker is secured at the end of the string, and a piezo microphone element picks up the vibrations the speaker creates along the string. As the speaker is swung around the head of the performer, the tension in the string increases and a feedback loop is created. Kyma is used inside this loop to condition the piezo signal and inject other sound sources.
The on-stage ensemble will comprise the electric bullroarer player, a woodwind player and modular synth player.
Kyma is central to the performance, providing the rhythmic backbone by interpreting and sonifying real-time angular mapping data. Kyma also provides spatial processing of the live instruments, injecting the signal into the feedback loop of the electric bullroarer at the appropriate point in its swing and positioning the instruments within a quadraphonic sound field.
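As a rough illustration of the mappings described above (an interpretive sketch in Python, not the actual RunningSong patch), the live cadence can set the tempo while the angle of the upcoming turn selects a rhythmic subdivision:

```python
def tempo_from_cadence(steps_per_minute: float) -> float:
    """RunningSong's tempo follows the runner's live cadence; here we
    simply take steps per minute as beats per minute (an assumption)."""
    return steps_per_minute

def rhythm_from_turn(angle_degrees: float) -> int:
    """Map the angle of the upcoming turn to a rhythmic subdivision.

    Illustrative only: sharper turns yield busier rhythms.
    Returns notes per beat (1 = quarter notes, 4 = sixteenths).
    """
    a = abs(angle_degrees) % 180.0
    if a < 30.0:
        return 1      # gentle bend: sparse pulse
    elif a < 90.0:
        return 2      # moderate turn: eighth notes
    else:
        return 4      # hairpin: sixteenth notes

# Example: a runner at 168 steps/min approaching a 110-degree turn
print(tempo_from_cadence(168.0), "BPM,", rhythm_from_turn(110.0), "notes per beat")
```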
TBD
Math is one of the most essential and widely applied elements in music and the arts.
The idea for this project started from the painting “Ssireum: Korean Wrestling” (https://en.wikipedia.org/wiki/Danwon_pungsokdo_cheop, https://upload.wikimedia.org/wikipedia/commons/d/d0/Danwon-Ssireum.jpg), painted by Kim Hong-Do (https://en.wikipedia.org/wiki/Kim_Hong-do).
He applied mathematical elements internally in his painting to augment the viewers’ perception and expectation. The numbers and placement of the people in the painting are specifically configured according to “Mabangjin” (the magic square), and also imply the balance of yin and yang.
Among the many ways of augmenting the reality of the game of ssireum, he chose a mathematical method and an artificial configuration of numbers. Transformed into augmenting roles, these give the image a more realistic, dynamic presence and a greater degree of completion.
Over 200 years have passed since Kim Hong-Do’s era, and today we have digital technology to serve artists’ intentions and to blur the boundaries between the artistic environments of the real world and the computer-generated virtual world.
In my project for KISS2017, I will transform configurations of numbers and/or texts into musical elements, emphasizing the aesthetics of the music through the combination of a digital system and a physical input device, and expanding the spatial perception of a live performance.
http://www.oner.kr
Fearful and terrible things have been with us always. But in the latter part of the twentieth century we began to think that, with our science and our advanced practices of risk management and regulation, we could at last surpass the fearful and terrible things. To our dismay, this has turned out not to be the case. And to compound our disappointment, our politicians continue to exploit our fears while further estranging us from science. In this piece I present frightful sounds along with correspondingly terrifying, OSC-controlled graphic projections. I too can be a politician, it seems.
Continue the discussion over dinner with your newly found friends and old acquaintances. Included with conference registration.
Featuring music performed live with Kyma.
“Date Night” is a multigrid configuration in Kyma designed to produce club dance music, augmented with the potential for changing pulse, beat, and meter, along with an array of sounds that can be integrated into the beats.
With induction coil pickups scanning electromagnetic fields we will sing with the inaudible voices of the machines around us.
TBA
Saturday 14 October 2017
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
Kyma is more than just a sound-processing computer; it is a set of listening practices. This became apparent when the sound component of a performance at last year’s conference could not take place. What we were given instead was just the visuals, and we were invited to imagine the rest! This created a negative space – not a focus on the environment, as Cage might have offered, but a drawing on our previous interactions with Kyma-derived sound. This talk explores how Kyma transcends sound and becomes a way of listening.
Simon is a sound artist, sound maker, music technician specialising in sound diffusion, and alternative DJ. His interests include listening practices in acousmatic music and experimental improvisation.
For individuals with profound hearing loss, cochlear implants provide the only access to sound. However, learning is required to perceive environmental sounds, speech, or music on the basis of the electric signals that implants deliver. A Kyma-based auditory training program has been developed to present realistic soundscapes that can change in response to the user’s self-directed explorations. The software consists of thematic exercises (modules) that require the user to manipulate acoustic parameters and the implant settings using an intuitive visual interface. The user’s exploratory activities, along with his/her ratings of the related experiences, are tracked and saved for later analysis. This approach provides an alternative to supervised learning, the prevailing paradigm in oral rehabilitation, by allowing a direct, self-structured, and repeatable discovery of the auditory information.
Dragana Barać-Cikoja holds a Ph.D. in Experimental Psychology from the University of Connecticut, Storrs. She is Associate Professor in the Hearing, Speech and Language Sciences (HSLS) Department at Gallaudet University, where she serves as director of research and of the Ph.D. program in HSLS. She has served as a principal investigator on several government-funded research projects. Dr. Barać-Cikoja conducts experimental research aimed at understanding basic mechanisms in perception and cognitive processing. Her investigations of haptic space perception have applications in the areas of robotics and navigation by blind individuals. Her investigations of sensory feedback during speech and sign production are aimed at providing new insights into the control of language production that may lead to improvements in assistive technologies beneficial to deaf and hard-of-hearing individuals. She is currently involved in the development of an interactive learning environment for optimizing the use of hearing assistive technology, aimed at improving the efficiency of aural rehabilitation.
Kevin Cole works for NOVA Web Development and serves as a consultant on various research projects at Gallaudet University. He has extensive experience as a computer programmer, web designer, and systems administrator, with a strong focus on Linux and other open-source technologies. He has developed Kyma applications that have been used in studies of self-hearing and in the aural training program.
Is there a particular sound for a specific historical period? How can the curator at a cultural historical museum use sound in an exhibition to enhance the experience of a particular historical period, event, or place? Through the use of sound in the permanent exhibitions at “Greve Museum” and “Mosede Fort, Denmark 1914-18”, it has been possible to avoid a lot of text in the exhibition. One of the mantras in developing the exhibitions has been that visitors should be able to gain insight into an important chapter of Denmark’s history without reading long texts. The sound design has made this possible. The sound in the exhibitions serves several purposes: it shapes the room, it can give the exhibited objects new life, and it gives the visitor the feeling of being in the place, or at the specific time, that the exhibition conveys.
Kamilla Hjortkjær is curator at “Greve Museum” south of Copenhagen. She holds a Master’s in musicology and has for the past five years conducted research into how sound, sound art, and music can be integrated into the exhibitions at cultural historical museums.
Emotionally dysregulated children and their families frequently endure stereotyped, unprovoked, confrontational scenes with intense aggression. In addition to harsh, negative vocalizations, the acoustic features of such scenes include varying rhythms and tempi. Our Kyma-based approach maps and transforms brief ambient acoustic samples into musical statements that incorporate the rhythms of prevalent musical genres. The agitated subject’s voice emerges in the novel, “musified” product. Stochastic modifications of volume dynamics, melody, or rhythm may coincide with (downward) tempo trends, with or without intelligible words. The agitated subject’s behavior is thus “modeled” melodically and rhythmically from an approximation of his/her self-generated sounds and then incrementally altered. We hypothesize that the introduction of such musical reflections in real time, by a crisis worker, for example, could help to temper these difficult scenes on behalf of emotionally troubled children and their families. Given the opportunity to attend to their own acoustic image as music, a subject could be retrieved from a relatively unresponsive state, and their focus brought into groove. Theoretical supports for this approach include identity validation/acknowledgement through mirroring/reflection (1); the hedonic value of autonomic/neuro-rhythm entrainment (2); and the benefits of novel/entropic stimuli to signal detection by neural systems (stochastic resonance) (3).
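As a toy illustration of the mapping described (purely hypothetical Python; the Simmer Down prototype itself is built in Kyma), one can generate event onsets whose tempo trends downward while each interval is stochastically perturbed:

```python
import random

def musify(onsets_per_min_start, onsets_per_min_end, n_events, wobble=0.1):
    """Generate event times whose tempo ramps downward, with a small
    stochastic wobble on each inter-onset interval (illustrative only)."""
    times, t = [], 0.0
    for i in range(n_events):
        frac = i / max(1, n_events - 1)
        opm = onsets_per_min_start + frac * (onsets_per_min_end - onsets_per_min_start)
        interval = 60.0 / opm                      # seconds between onsets
        t += interval * (1.0 + random.uniform(-wobble, wobble))
        times.append(t)
    return times

# An agitated pace (160 events/min) relaxing to 80, over 32 events
print([round(x, 2) for x in musify(160, 80, 32)])
```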
Peter Bingham is a musician and a professor of pediatric neurology at the University of Vermont with a track record of research and innovation featuring digital tools for neuro-rehabilitation. He developed an odor-releasing pacifier to promote feeding in premature infants, a spirometer-controlled video game for children with asthma and cystic fibrosis, and designed Simmer Down, which will be prototyped by John Mantegna for this presentation.
RunningSong is an experiment in augmenting the physical act of running with sound. It is a digital re-imagination of Aboriginal “SongLines” and a sonification of the geospatial data of a run path. The performance creates a random trail for a runner through the streets of Oslo. The runner’s live position and speed interact with the performers to create a real-time audio representation of the route. The principles behind the performance can be used as part of a fitness training program that carefully manages distance to prevent over-training injuries while encouraging spontaneity, exploration and a sense of joyous freedom.
At the start of the performance the RunningSong system produces a random 2km path through the streets of Oslo starting and finishing at the performance venue. A performer in running clothes will set out to run the route and send a live video feed and telemetry data back to the venue. The projection of a map at the venue will show the audience and the on-stage performers the current location of the runner. As the runner enters a road, the performers will improvise a musical rendering for that section of the journey based on the rhythm described by the angle of the subsequent turn. The tempo is set by the live cadence data (steps per minute) from the runner.
At the centre of the performance will be an instrument I have created – an electric bullroarer, a re-imagining of the ancient Aboriginal instrument that traditionally comprises a piece of wood swung on a string. In the electric bullroarer, a speaker is secured at the end of the string, and a piezo microphone element picks up the vibrations the speaker creates along the string. As the speaker is swung around the head of the performer, the tension in the string increases and a feedback loop is created. Kyma is used inside this loop to condition the piezo signal and inject other sound sources.
The on-stage ensemble will comprise the electric bullroarer player, a woodwind player and modular synth player.
Kyma is central to the performance, providing the rhythmic backbone by interpreting and sonifying real-time angular mapping data. Kyma also provides spatial processing of the live instruments, injecting the signal into the feedback loop of the electric bullroarer at the appropriate point in its swing and positioning the instruments within a quadraphonic sound field.
TBD
Discuss what you heard in the morning presentations and in the previous night’s concert. Included with conference registration.
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
The lecture will focus on the use of scripts in Kyma for generating complex structures through Smalltalk and Capytalk. It will also show how similar results can be achieved with replicators, and when it might be preferable to use one or the other.
I will dissect a patch made for the piece which uses a script to produce clouds of flocking oscillators with a selectable number of voices (up to 64 per cloud in the current implementation). The center frequency of the cloud is controlled by a ribbon controller, while other parameters – such as the spread of voices within the cloud, the jitter, the panning position of each voice, and the waveform – are either controlled live with an iPad or automated in the Kyma timeline.
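In spirit, the cloud generation amounts to scattering voice frequencies around a controllable center. A minimal Python sketch of that idea follows (the piece itself does this with a Kyma script, Smalltalk, and Capytalk, not Python):

```python
import random

def cloud_frequencies(center_hz, spread_ratio, jitter_ratio, n_voices=64):
    """Scatter n_voices oscillator frequencies around a center frequency.

    spread_ratio sets how far voices deviate from the center (as a
    proportion of it); jitter_ratio adds a smaller per-call random
    wobble, so repeated calls make the cloud shimmer.
    """
    freqs = []
    for _ in range(n_voices):
        spread = center_hz * spread_ratio * random.uniform(-1.0, 1.0)
        jitter = center_hz * jitter_ratio * random.uniform(-1.0, 1.0)
        freqs.append(max(1.0, center_hz + spread + jitter))
    return freqs

# e.g. a 64-voice cloud centered at 440 Hz with 10% spread and 1% jitter
print(cloud_frequencies(440.0, 0.10, 0.01)[:8])
```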
The presentation will also briefly touch on another technique used in the piece: feedback FM synthesis, where the output of an oscillator is fed back to its frequency input to produce very dense and noisy textures that are somewhat unstable and very interesting to play with in live performance.
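The feedback FM technique is compact enough to state in a few lines: each output sample modulates the oscillator's own frequency on the next sample. A per-sample Python sketch, illustrative rather than the Kyma implementation:

```python
import math

def feedback_fm(base_hz, feedback, n_samples, sample_rate=44100.0):
    """Sine oscillator whose own output is fed back into its frequency.

    Small feedback values brighten the tone; larger values push it into
    the dense, unstable, noisy territory described above.
    """
    phase, prev, out = 0.0, 0.0, []
    for _ in range(n_samples):
        freq = base_hz * (1.0 + feedback * prev)   # self-modulated frequency
        phase = (phase + freq / sample_rate) % 1.0
        prev = math.sin(2.0 * math.pi * phase)
        out.append(prev)
    return out

# One second of a 220 Hz tone driven well into the noisy regime
samples = feedback_fm(220.0, feedback=2.5, n_samples=44100)
```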
Math is one of the most essential and widely applied elements in music and the arts.
The idea for this project started from the painting “Ssireum: Korean Wrestling” (https://en.wikipedia.org/wiki/Danwon_pungsokdo_cheop, https://upload.wikimedia.org/wikipedia/commons/d/d0/Danwon-Ssireum.jpg), painted by Kim Hong-Do (https://en.wikipedia.org/wiki/Kim_Hong-do).
He applied mathematical elements internally in his painting to augment the viewers’ perception and expectation. The numbers and placement of the people in the painting are specifically configured according to “Mabangjin” (the magic square), and also imply the balance of yin and yang.
Among the many ways of augmenting the reality of the game of ssireum, he chose a mathematical method and an artificial configuration of numbers. Transformed into augmenting roles, these give the image a more realistic, dynamic presence and a greater degree of completion.
Over 200 years have passed since Kim Hong-Do’s era, and today we have digital technology to serve artists’ intentions and to blur the boundaries between the artistic environments of the real world and the computer-generated virtual world.
In my project for KISS2017, I will transform configurations of numbers and/or texts into musical elements, emphasizing the aesthetics of the music through the combination of a digital system and a physical input device, and expanding the spatial perception of a live performance.
http://www.oner.kr
“Date Night” is a multigrid configuration in Kyma designed to produce club dance music, augmented with the potential for changing pulse, beat, and meter, along with an array of sounds that can be integrated into the beats.
We will talk about our year developing the WireFrames Library for frame synced programming in Kyma. We will describe some of its core concepts and outline possible applications.
Desktop Demos: Speak one-on-one with composers and performers about their concert pieces.
Kyma Open Lab: Bring your Kyma questions and consult with the experts.
For individuals with profound hearing loss, cochlear implants provide the only access to sound. However, learning is required to perceive environmental sounds, speech, or music on the basis of the electric signals that implants deliver. A Kyma-based auditory training program has been developed to present realistic soundscapes that can change in response to the user’s self-directed explorations. The software consists of thematic exercises (modules) that require the user to manipulate acoustic parameters and the implant settings using an intuitive visual interface. The user’s exploratory activities, along with his/her ratings of the related experiences, are tracked and saved for later analysis. This approach provides an alternative to supervised learning, the prevailing paradigm in oral rehabilitation, by allowing a direct, self-structured, and repeatable discovery of the auditory information.
Dragana Barać-Cikoja holds a Ph.D. in Experimental Psychology from the University of Connecticut, Storrs. She is Associate Professor in the Hearing, Speech and Language Sciences (HSLS) Department at Gallaudet University, where she serves as director of research and of the Ph.D. program in HSLS. She has served as a principal investigator on several government-funded research projects. Dr. Barać-Cikoja conducts experimental research aimed at understanding basic mechanisms in perception and cognitive processing. Her investigations of haptic space perception have applications in the areas of robotics and navigation by blind individuals. Her investigations of sensory feedback during speech and sign production are aimed at providing new insights into the control of language production that may lead to improvements in assistive technologies beneficial to deaf and hard-of-hearing individuals. She is currently involved in the development of an interactive learning environment for optimizing the use of hearing assistive technology, aimed at improving the efficiency of aural rehabilitation.
Kevin Cole works for NOVA Web Development and serves as a consultant on various research projects at Gallaudet University. He has extensive experience as a computer programmer, web designer, and systems administrator, with a strong focus on Linux and other open-source technologies. He has developed Kyma applications that have been used in studies of self-hearing and in the aural training program.
I have field recordings from my travels around the Indochina peninsula that I’m going to index and synthesize with Capytalk and a Wacom tablet while a jarring timelapse video is simultaneously playing. This is augmenting reality because the audio is being modified/enhanced by computer generated sensory input coming from the Wacom tablet.
With induction coil pickups scanning electromagnetic fields we will sing with the inaudible voices of the machines around us.
Fearful and terrible things have been with us always. But in the latter part of the twentieth century we began to think that, with our science and our advanced practices of risk management and regulation, we could at last surpass the fearful and terrible things. To our dismay, this has turned out not to be the case. And to compound our disappointment, our politicians continue to exploit our fears while further estranging us from science. In this piece I present frightful sounds along with correspondingly terrifying, OSC-controlled graphic projections. I too can be a politician, it seems.
An augmented-stage, butoh-influenced performance.
Through metaphorical use of symbolic references to myths of fallen entities (prince of darkness, demons, etc.), this performance interrogates the notion of taboos as they emerge in social norms as sin-related cultural artefacts.
Our focus extends to how these artefacts manifest internally in an entity/agent, and how these internal manifestations are in turn externalized through embodiment. In the work, this cycle represents the schematic notion of falling, which has been unofficially defined for our performance as a ‘linear, accelerating movement towards A center’ (essentially a dynamically changing vector). Furthermore, movement has been unofficially redefined for the purposes of this work as a ‘continuous, controlled falling’, or a ‘continuous dynamical displacement of A center’.
The continuous redefinition of a ‘falling’ path (which could be visualised as a vector field) creates its own history and its own distinct form, which essentially makes it part of our natural phenomenology. This conclusion contradicts the sin-related metaphorical use of the word in some human cultures.
The entity, as represented on stage by the performer, is used more as an agent of the energetic shapings taking place in the above-mentioned processes than as a ‘person’ or personification of the symbols. An attempt is made to dehumanize and deconstruct the strong presence of the stable human form on stage through the use of almost unnatural symbolism in the choreography and grotesque digital scenery.
The central artistic question, as we define it in the context of this work, can be summed up as follows:
Are aesthetics and artistic appreciation cognitive processes (or forces) primal enough to express and reveal, in their rawness, the simple archetypal mathematical abstractions of our phenomenology?
Most live samples were made by O’Donnell. The imagination and complexity of their vocalization inspired me to “Kymaize” what I heard. The inclusion of Anna’s Tai Ji expands the physical and spiritual dimension.
Postcards explores augmented conversations across time, place, technology, and people. The source material is a collection of long-distance conversations between several people immutably bound together. The central texts are between George Sand and Gustave Flaubert, taken from their prolific correspondence from 1866 to 1876; these are augmented by other, more personal texts between Anne and her husband David, and between Anne and the composer. The images and texts of these ‘Postcards’ are held in a computer environment that works in real time with the performer to co-create a conversation between the here and now of the live performance, the live processing of the sound, and the embedded stories held in the media materials. The overall result is anchored in the present, linking time, place, and people in an augmented, magnified, expanded, illuminating story of the now.
This piece was inspired by the story of how, when the universe was young, the Higgs particle lived in a double-well potential and there was symmetry between the weak and electro-magnetic forces. But, as the universe cooled down, the Higgs settled into just one of the wells and broke that symmetry, resulting in the universe in which we find ourselves today.
A “double-well potential” is a model of a “bi-stable” system shaped like a rounded “W” with two equilibrium points and a hump between them. Depending on its initial energy, a particle tossed into the system can settle into one or the other of the two wells or oscillate between them in a figure-8 shape, crossing the hump in between.
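For the curious, the standard textbook double-well (an assumption for illustration; the piece's exact model is not specified) is V(x) = x^4/4 - x^2/2, with wells at x = ±1 and a hump at x = 0 sitting 1/4 above the well bottoms. A small Python simulation shows the two behaviors described above:

```python
def simulate(x0, v0, dt=0.001, steps=20000):
    """Integrate x'' = -V'(x) = x - x**3, the equation of motion
    for the double-well potential V(x) = x**4/4 - x**2/2."""
    x, v = x0, v0
    path = []
    for _ in range(steps):
        a = x - x**3          # force = -dV/dx
        v += a * dt
        x += v * dt
        path.append(x)
    return path

def energy(x, v):
    """Total energy; the hump at x = 0 has V(0) = 0."""
    return 0.5 * v**2 + x**4 / 4 - x**2 / 2

# Below the hump (E < 0): trapped, oscillating inside one well.
trapped = simulate(x0=0.5, v0=0.0)
# Above the hump (E > 0): crossing back and forth between both wells.
crossing = simulate(x0=0.5, v0=1.0)
print(min(trapped), max(trapped))    # stays on one side of x = 0
print(min(crossing), max(crossing))  # visits both wells (the figure-8)
```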
In this piece, a simple two-dimensional double-well serves as a metaphor for some of the binary decisions we make (and those that are made for us), each one of which can lead to a completely different future. Is a bi-stable system the ideal model for democratic elections or could a system with additional equilibrium states be more representative?
In “Double-well”, you control the sounds. Each of you has a “sound-generator”; the microphones placed around the audience capture your sounds, process them, and play them back at various times throughout the piece. Watch the screen at the front for a “score” indicating when the microphones are “live” and when you can just relax and listen. Your sounds control some of the Kyma-generated sounds — especially when you play in sync with other people in the audience.
Thanks to Nicolas Chanon, physicist and science fiction author, who inspired this piece when he drew a double-well potential on the whiteboard during GVA Sessions 2015.
In Plurality Spring, players perform music to control robotic avatars exploring an unknown orb in deep space. Using the microphone to track pitches, the live acoustic player/performers control the movement of the robots as well as the emergent sonic environments.
The live acoustic audio mixes with in-game sound, creating an emergent augmented reality musical performance. Randomized levels, real-time decisions, and reactive audio lead to distinct musical results with each playthrough.
Continue the discussion over dinner with your newly found friends and old acquaintances. Included with conference registration.
Sunday 15 October 2017
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
This keynote will present a philosophical perspective on the co-operative relationships between musicians and their technology. It will explore notions of embodiment and augmented relationships between musicians and their machines through a concept described as The Living Score. Through this concept, Vear will present ideas from his ongoing research into embodied intelligence and co-operative musicking (Small) within the Flow of music, and he will demonstrate some of his current practice-based research, including a collaboration with Anne La Berge and a robot named Max. The overarching aim is to understand how we can better create and innovate with our friends, the machines and their code.
In Plurality Spring, players perform music to control robotic avatars exploring an unknown orb in deep space. Using the microphone to track pitches, the live acoustic player/performers control the movement of the robots as well as the emergent sonic environments.
The live acoustic audio mixes with in-game sound, creating an emergent augmented reality musical performance. Randomized levels, real-time decisions, and reactive audio lead to distinct musical results with each playthrough.
Discuss what you heard in the morning presentations and in the previous night’s concert. Included with conference registration.
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
- A short film with an augmented soundtrack (10 minutes)
- A short presentation (12 minutes) on how the soundtrack was executed, with 8 minutes left for questions.
The short film would be the next in my evolving YouTube genre of the ‘Augmented Tutorial’. Here is a playlist with relevant examples of precursor work: https://www.youtube.com/playlist?list=PLJ3a2jbxab6jB3CM3c-uvPm89-AdWK5HY
The video will be part tutorial and, hopefully, part art. The material will document the connecting of multiple Pacaranas together in a wormhole and explore the resulting power.
If this proposal were accepted, I would seek to improve the visual quality of my output in a variety of ways: I would shoot the wormhole material on professional cameras at the university and collaborate with visual artists to raise the production values of this venture.
Charles Norton is a music technologist and academic. Charlie lectures at the London College of Music, where he explores Kyma and other modular technologies.
A presentation of practical abstraction and the blending of domains using vectorization techniques in C++, Python, and Kyma; a demonstration of their use in multimodal and multimedia real-time performances; and a narrative interpretation of time series in “performance” time.
Featuring music performed live with Kyma.
I have field recordings from my travels around the Indochina peninsula that I’m going to index and synthesize with Capytalk and a Wacom tablet while a jarring timelapse video is simultaneously playing. This is augmenting reality because the audio is being modified/enhanced by computer generated sensory input coming from the Wacom tablet.
will perform a short live set with a Paca, a Clavia NordDrum 2, and an E-Mu Virtuoso, both MIDI-controlled and processed with Kyma. The MIDI data used to control the external devices, the incoming audio streams, and the parameters of the Kyma sounds used in the performance will simultaneously be used to send OSC messages to GEM (a library which allows the user to create OpenGL graphics within Pure Data) through the NeverEngine Labs OSC tools.
OSC messages will be used to control the generation of simple geometrical shapes, their rotation and movement, and their colours.
A Wii-Mote and a Wii Nunchuk controller will be used to further augment the interaction between the human, the machines, and the generated sounds and real-time computer graphics.
This type of performance, intertwining audio and computer graphics, could also be used as a methodology for an alternative representation of sound properties.
“Where I come from, the words most highly valued are those spoken from the heart, unpremeditated and unrehearsed.”
-Leslie Marmon Silko (From her essay that began as a speech, “Language and Literature from a Pueblo Indian Perspective.”)
Part 1:
The Pueblo tradition of oral storytelling makes no qualitative distinction between types of stories, such as historical, sacred, or family-specific. All have equal weight and are intrinsic to individual and pueblo identity. The telling of such stories—themselves webs within webs of stories, as each word uttered is itself a story—is passed on from one generation to the next, empowering those who come later to understand who they are and where they come from.
Part 2:
Mapping with an unmanned aerial vehicle (UAV) entails collecting spatial data. The UAV carries a lightweight digital camera that captures digital images. Once captured, the images are knit into georectified orthomosaics—that is, they are geometrically corrected to an adjusted uniform scale that adheres to a common geographical coordinate system. These data can then generate elevation models, two-dimensional maps, thermal maps, and 3D maps. But… Why not sound?
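To make “Why not sound?” concrete, here is one naive possibility (an illustrative Python sketch, not the authors' method): map a transect of elevation samples from such a model onto a pitch range, so that traversing the terrain plays its profile:

```python
def elevations_to_pitches(elevations, low_hz=110.0, high_hz=880.0):
    """Map a transect of elevation samples (metres) linearly onto a
    frequency range, so flying across the terrain plays its profile."""
    lo, hi = min(elevations), max(elevations)
    span = (hi - lo) or 1.0   # avoid division by zero on flat terrain
    return [low_hz + (e - lo) / span * (high_hz - low_hz) for e in elevations]

# A hypothetical transect across a mesa: flat, steep rise, flat top
transect = [1610, 1612, 1615, 1680, 1740, 1742, 1741]
print([round(f) for f in elevations_to_pitches(transect)])
```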
Part 3:
Imagine hearing the topography of a land, a land so sacred to its indigenous people that they have stories going back centuries about it. Imagine hearing the story expressed as is only possible via heartfelt utterances embodied in words. Imagine, in juxtaposed synchronicity, hearing in words and in the sounds of the land the essence of what it is to be Pueblo people. The words and the sounds each augment the other, each interlacing and transforming the other. In this piece, we hope to bring these two elements together as follows: (1) Kyma shaping the sound of the land, and (2) the oral telling of a story in the tradition of the Pueblo people, wherein web-like threads radiate outward, crisscrossing each other. Each element shall have equal weight and, as with the radiating spider-web threads, a structure shall arise. And you, listener, you, hearer, must absorb and trust, as Pueblo people do, that meaning shall emerge.
The piece explores how the language of institutional organisational change neutralises our position as workers, engaging with how language is used to reframe conflict and alter our political reality.
Simon is a sound artist, sound maker, music technician specialising in sound diffusion, and alternative DJ. His interests include listening practices in acousmatic music and experimental improvisation.
Presentations on composing and performing with Kyma, sound design using Kyma, and interacting with other controllers and audio equipment.
will perform a short live set with a Paca, a Clavia NordDrum 2, and an E-Mu Virtuoso, both MIDI-controlled and processed with Kyma. The MIDI data used to control the external devices, the incoming audio streams, and the parameters of the Kyma sounds used in the performance will simultaneously be used to send OSC messages to GEM (a library which allows the user to create OpenGL graphics within Pure Data) through the NeverEngine Labs OSC tools.
OSC messages will be used to control the generation of simple geometrical shapes, their rotation and movement, and their colours.
A Wii-Mote and a Wii Nunchuk controller will be used to further augment the interaction between the human, the machines, and the generated sounds and real-time computer graphics.
This type of performance, intertwining audio and computer graphics, could also be used as a methodology for an alternative representation of sound properties.
“Where I come from, the words most highly valued are those spoken from the heart, unpremeditated and unrehearsed.”
-Leslie Marmon Silko (From her essay that began as a speech, “Language and Literature from a Pueblo Indian Perspective.”)
Part 1:
The Pueblo tradition of oral storytelling makes no qualitative distinction between types of stories, such as historical, sacred, or family-specific. All have equal weight and are intrinsic to individual and pueblo identity. The telling of such stories—themselves webs within webs of stories, as each word uttered is itself a story—is passed on from one generation to the next, empowering those who come later to understand who they are and where they come from.
Part 2:
Mapping with an unmanned aerial vehicle (UAV) entails collecting spatial data. The UAV carries a lightweight digital camera that captures digital images. Once captured, the images are knit into georectified orthomosaics—that is, they are geometrically corrected to an adjusted uniform scale that adheres to a common geographical coordinate system. These data can then generate elevation models, two-dimensional maps, thermal maps, and 3D maps. But… Why not sound?
Part 3:
Imagine hearing the topography of a land, a land so sacred to its indigenous people that they have stories going back centuries about it. Imagine hearing the story expressed as is only possible via heartfelt utterances embodied in words. Imagine, in juxtaposed synchronicity, hearing in words and in the sounds of the land the essence of what it is to be Pueblo people. The words and the sounds each augment the other, each interlacing and transforming the other. In this piece, we hope to bring these two elements together as follows: (1) Kyma shaping the sound of the land, and (2) the oral telling of a story in the tradition of the Pueblo people, wherein web-like threads radiate outward, crisscrossing each other. Each element shall have equal weight and, as with the radiating spider-web threads, a structure shall arise. And you, listener, you, hearer, must absorb and trust, as Pueblo people do, that meaning shall emerge.
The piece explores how the language of institutional organisational change neutralises our position as workers, engaging with how language is used to reframe conflict and alter our political reality.
Simon is a sound artist, sound maker, music technician specialising in sound diffusion, and alternative DJ. His interests include listening practices in acousmatic music and experimental improvisation.
Continue the discussion over dinner with your newly found friends and old acquaintances.