Performative Media / Weekly Learning

September 23, 2025

24.09.2025 - 2025 / Week 1 - Week 14

Ge Xianjing / 0377636

Performative Media / Bachelor of Interactive Spatial Design (Honours)

Week 1 — Introduction to Performative Media

Human–Space–Media Interaction

This week marked my first class in Performative Media. The lecture focused on exploring how humans, physical space, and media can interact with each other, transforming the audience from passive viewers into active participants. We were also introduced to TouchDesigner, a node-based visual programming software that allows creators to design real-time visual and sound experiences through sensors like cameras, microphones, or motion detectors. It feels like “programming with LEGO blocks” — logical, yet highly creative.

We explored inspiring examples such as teamLab’s immersive installations and OK Go’s robot performance video. teamLab’s work impressed me deeply — the way every human movement can reshape the projection and sound makes the space feel alive. OK Go’s project, on the other hand, revealed the precision of code-driven performance art, where every movement was perfectly synchronized with the music.

Our first group activity required us to apply the Input → Process → Output model to analyze an interactive artwork. This structure helped me understand the essence of performative media — interaction. The audience provides the input, the system processes it, and the artwork responds. It’s not just about visual display; it’s about communication. I look forward to creating an installation that can respond to people, blurring the line between human and art.
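
To keep this model concrete for myself, I wrote a tiny Python sketch of the loop. It is only a conceptual stand-in: the sensor reading, the processing rule, and the output are all placeholders, not code from any real installation.

    import random
    import time

    def read_sensor():
        # Placeholder input: pretend this is a touch position between 0.0 and 1.0
        return random.random()

    def process(value):
        # Placeholder rule: map the input to a brightness level between 0 and 255
        return int(value * 255)

    def respond(brightness):
        # Placeholder output: in a real installation this would drive light or sound
        print(f"set light brightness to {brightness}")

    # Input -> Process -> Output, repeated as a loop
    for _ in range(5):
        respond(process(read_sensor()))
        time.sleep(0.1)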

IPO Breakdown (Stage / Description / Details & Function)

Input (Sensor): Touchpad / Vibration Module / (Possibly Camera or Microphone)
  • The audience touches the sand surface with their fingers, and the touchpad detects the position and movement of their hands.
  • The vibration module converts sound into physical vibration, influencing the patterns formed in the sand.
  • The camera or microphone may capture surrounding movement or sound, adding additional reactive layers.

Process (Logic): Interactive Rules
  • When the audience touches the sand, the system produces sound.
  • The sound waves cause the sand to vibrate and form shapes.
  • Multiple participants can interact simultaneously, and their touches combine to create mixed soundscapes and evolving sand patterns.
  • There is no fixed choreography — every performance is unique and depends entirely on live audience interaction.

Output (Media / Experience): Visual & Auditory Feedback
  • The sand moves and changes shape with each touch, forming intricate, living patterns.
  • Lights flicker and shift with vibrations, creating a magical, immersive visual effect.
  • The experience feels mysterious, meditative, and empowering, as the audience senses that they are the “gods who control sound and movement.”


Week 2 — Performative Media: What I Learned from Class and Practice

Today was the second week of Performative Media. The lecturer asked us to install TouchDesigner, saying that this week would be more hands-on. As I listened to the lecture while setting up the software, I finally started to connect the abstract idea of “performative media” with something tangible and real.


What I Learned

Creative Coding / Generative Art

The lecturer explained “generative” in a way that made sense to me: we create rules and parameters, and through iteration and emergence, complex forms grow from simple logic. Before today, I thought “randomness” was just randomness, but now I can tell the two apart (a small sketch follows the list):

  • Randomness = pure unpredictability, no pattern.
  • Noise = structured randomness, smooth and controllable, perfect for natural effects like terrain, clouds, or waves.
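
To check my own understanding, I wrote a small Python sketch (outside TouchDesigner) that puts the two side by side: purely random samples versus a noise-like signal built by smoothly interpolating between a few random anchor values. The interpolation is my own simplification, not TouchDesigner's actual noise algorithm.

    import random

    random.seed(1)

    # Pure randomness: every sample is independent, no pattern at all
    pure_random = [random.random() for _ in range(16)]

    # Noise-like signal: random values only at a few anchor points,
    # smoothly blended in between (structured, controllable randomness)
    anchors = [random.random() for _ in range(6)]

    def value_noise(t):
        i = int(t)
        f = t - i
        f = f * f * (3 - 2 * f)                  # smoothstep easing between anchors
        return anchors[i] * (1 - f) + anchors[i + 1] * f

    noise_like = [value_noise(i * 0.3) for i in range(16)]

    print([round(v, 2) for v in pure_random])    # jumps around with no structure
    print([round(v, 2) for v in noise_like])     # drifts smoothly, like terrain or waves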

This framework fits perfectly with interactive work — the sensor collects audience data (movement, sound, etc.), the system processes it with code, and then outputs visuals or sound. That’s the classic Input → Process → Output loop.

The Three Building Blocks of TouchDesigner

I now understand TouchDesigner as three main “languages”:

  • TOP — things you can see (images, pixels, GPU processing)
  • CHOP — things that drive (numbers, time, devices)
  • SOP — things that exist in space (3D geometry, particles)

It’s not about writing long code lines — it’s about connecting visual blocks like LEGO.
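
As a note to myself, the same three families show up when scripting TouchDesigner in Python (for example in the Textport). The operator names below ('noise1', 'lfo1', 'circle1') are just examples from my own network, so treat this as a rough sketch rather than a reference.

    # TOP: pixel data; I can ask a TOP for its resolution
    noise_top = op('noise1')
    print(noise_top.width, noise_top.height)

    # CHOP: channels of numbers over time; I can read the current value of a channel
    lfo_chop = op('lfo1')
    print(lfo_chop['chan1'].eval())

    # SOP: geometry in 3D space; I can count its points
    circle_sop = op('circle1')
    print(len(circle_sop.points))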

What I Did (My First Network)

My goal was simple: create a colorful, breathing noise animation.

  1. Added a Noise TOP, and tweaked Period and Harmonics to observe texture changes.
  2. Added a Ramp TOP for gradient colors, then used Lookup TOP to map the ramp’s colors to the noise values.
  3. Added a Null TOP and enabled its blue viewer flag to preview the output.
  4. Made it move: created an LFO CHOP, exported it to Translate or Rotate of a Transform TOP, then slowed Frequency to ~0.2 to make it “breathe.”
  5. Duplicated the Noise→Lookup chain with different parameters and used Composite TOP to layer them.
  6. Adjusted with Level / HSV Adjust / Bloom / Displace TOP to make it glow and feel alive.

The coolest part was learning how absTime.seconds can continuously drive parameters, creating non-repetitive smooth motion — a nice contrast to the looping rhythm of the LFO. Using different Seed and Period for the two noise layers also created a richer, more organic texture.
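
For my own reference, these are the kinds of small Python expressions I typed directly into parameter fields. The exact parameters and multiplier values are simply what I happened to use, not a recipe.

    # Transform TOP > Rotate: rotates continuously and never loops
    absTime.seconds * 20

    # Transform TOP > Translate: a gentle "breathing" offset
    math.sin(absTime.seconds * 0.2) * 0.1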

Group Progress & Assignment

By Week 4, our group will present a precedent study & critical analysis. We’ll analyze five areas — context/scenario, tools & hardware, I→P→O logic, audience role, and aesthetics — and identify what we can learn for our own project. Today the lecturer walked around and asked each group to clarify their common interests and project direction.

My Reflection

  • Generative art is not about letting randomness take over. It’s about balancing order and chance.
  • I realized the audience isn’t watching the animation — they make it move.
  • The node-based workflow encourages fast experimentation and live response.

What I’ll Do Before Next Week

  • Recreate today’s network in three variations: only LFO-driven / only absTime-driven / two noise layers + different Composite modes.
  • Try linking a microphone or camera input through CHOP to drive visuals directly.
  • Prepare two reference artists and an IPO breakdown for Week 3 slides.

One-sentence takeaway:

I learned to create “breathing” visuals through rules and noise — and to see that, in performative media, the real heartbeat comes from the data flowing between human and system.


Week 3 — Edge Detection, Displacement & Feedback Loops

Step-by-Step Process

Step 1: Edge Detection

  • Started with a simple circle1 TOP to create a base shape.
  • Connected it to edge1 to extract edges and create a high-contrast outline.

👉 This step helped me understand how to transform a solid shape into a line-based visual suitable for further effects.

Step 2: Displacement

  • Created a noise1 texture and used it in displace1 to distort the circular outline.
  • Adjusted Period / Amplitude and Harmonic Gain / Exponent to control scale and complexity.

👉 I learned how noise textures generate natural, fluid motion.
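
My mental model of what the Displace TOP does, written as a rough Python sketch (this is only the idea, not the real implementation; 'weight' stands in for the displace weight setting, and the noise image's red/green channels act as horizontal/vertical offsets):

    def displace(source, noise, weight, x, y):
        # Read the noise texture at this pixel; treat 0.5 as "no displacement"
        dx = (noise(x, y)[0] - 0.5) * weight
        dy = (noise(x, y)[1] - 0.5) * weight
        # Sample the source image from the offset position instead
        return source(x + dx, y + dy)

    # Tiny usage example with stand-in "images" (plain functions):
    flat_noise = lambda x, y: (0.7, 0.5)       # pushes everything slightly to the right
    gradient = lambda x, y: x                  # a simple horizontal gradient image
    print(displace(gradient, flat_noise, weight=0.2, x=0.5, y=0.5))   # samples at x = 0.54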

Step 3: Feedback Loop

  • Built a feedback chain: feedback1 → level1 → transform1 → blur1 → comp1 → bloom1.
  • Set Target TOP in feedback1 to comp1 to close the loop.
  • Learned the role of each operator (decay, subtle scaling, flow softening, glow).
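
To understand why this loop produces trails, I sketched the principle in plain Python. Here 'decay' plays the role of the slight fade from the Level TOP; the real network does this per pixel, per frame.

    # Each frame, new content is added on top of a slightly faded copy of the
    # previous frame; that lingering copy is what we see as a trail.
    frame = 0.0
    decay = 0.9                                   # closer to 1.0 = longer trails

    for step in range(10):
        new_content = 1.0 if step == 0 else 0.0   # one bright flash at the start
        frame = frame * decay + new_content
        print(f"step {step}: brightness {frame:.3f}")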




Learning Outcome

This practice helped me understand time-based layering in real-time visuals. The results resembled flowing water waves, energy pulses, and abstract biological motion. Small parameter changes drastically affect rhythm and direction.

Reflection

At first I didn’t fully understand the feedback target, but once I linked it back to comp1, the loop worked perfectly. Even a minor scale change in transform1 can flip the animation’s flow. Overall, I gained a deeper understanding of real-time generative visuals.


Week 4 — Group Presentation: Precedent Study & Critical Analysis

This week, our group successfully completed Assignment 1: Precedent Study & Critical Analysis for the Performative Media module. Our task was to research and critically analyze how media artists design interactions in performative works using the Input → Process → Output (IPO) model.


We selected two artists — Refik Anadol and Blast Theory — who represent contrasting yet complementary approaches to interactive art. Refik Anadol visualizes data through AI-driven immersive environments, while Blast Theory explores human relationships and storytelling through participation.

CHOPs Deep Dive in TouchDesigner



We explored CHOPs and how real-world data (sound, movement, sensors) can be transformed into responsive visuals. By using Constant, Math, LFO, Pattern, and Filter CHOPs, we learned animation control and rhythmic interaction. Generative art reminded me that rules + randomness create evolving results and emotional dialogue.


Assignment 1 — Precedent Study & Critical Analysis (for the detailed content of Assignment 1, please refer to this post:)

 https://gexianjing.blogspot.com/2025/09/performative-media-poject-1.html

Click to view the group PPT content

https://www.canva.com/design/DAG0z1ILKAk/2CfuSGxJXXgyztmbdO9mnA/edit?utm_content=DAG0z1ILKAk&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton

Artworks analyzed:

  • WDCH Dreams & Machine Hallucinations — Refik Anadol
  • Can You See Me Now? & Rider Spoke — Blast Theory

Our lecturer commented that our presentation was “very complete and well-structured,” especially in connecting theory with emotional audience experience.

I learned that interactivity is not just technology, but human engagement and perception. Emotion, participation, and media systems must align to create meaningful experiences.


Assignment 1 Planning & Task Allocation Summary

Performative Media — Precedent Study & Critical Analysis (20%)
Week 1 – Week 4

To ensure an efficient and well-structured workflow for Assignment 1, our group established a clear plan and task distribution at the beginning of the project. The assignment required both theoretical research and critical analysis, as well as a strong visual and presentation structure. Therefore, we divided responsibilities based on individual strengths while maintaining continuous group discussion and feedback.


Overall Planning Strategy

From Week 1, our group agreed on three key objectives:

  1. To research media artists whose work demonstrates performativity, interaction, and audience participation

  2. To analyse each artist using a clear Input → Process → Output framework

  3. To present our findings in a coherent, visually clear, and critically reflective presentation

We selected Refik Anadol and Blast Theory because their practices represent two complementary approaches to performative media:

  • Anadol focuses on data-driven, immersive visual systems

  • Blast Theory emphasises human participation, narrative, and social interaction

This contrast allowed us to examine performative media from both technological and experiential perspectives.


Task Distribution Within the Group

To manage the workload effectively, tasks were distributed as follows:

  • Ge Xianjing 
    Responsible for:

    • Overall presentation structure and visual layout

    • Writing and editing critical analysis text

    • Integrating case studies into a coherent narrative

    • Ensuring academic tone and clarity across slides

  • Xia Chen
    Responsible for:

    • Researching Refik Anadol’s background and artistic philosophy

    • Analysing WDCH Dreams with a focus on data input and participatory design

    • Explaining technical tools and real-time data processes

  • Zeng Zifeng
    Responsible for:

    • Researching Blast Theory as an interactive art collective

    • Analysing Can You See Me Now? with emphasis on GPS, liveness, and performance

    • Contributing to discussions on accessibility and audience experience

  • Zoey
    Responsible for:

    • Analysing Rider Spoke and its use of sound, memory, and urban space

    • Preparing reflection points on emotional engagement and intimacy

    • Assisting with reference checking and APA formatting

  • We communicated via WhatsApp throughout the project


Collaborative Process

Although tasks were divided, all major decisions were discussed collectively. We held regular informal discussions to:

  • compare interpretations of each artwork

  • refine analytical language

  • ensure consistency in terminology and conceptual depth

Each member reviewed the others’ sections to maintain a unified academic voice and avoid repetitive or disconnected analysis.


Outcome & Learning Value

This structured task allocation allowed us to complete the assignment efficiently while maintaining depth and coherence. Through this process, we learned that:

  • Performative media is not only about advanced technology, but about how audiences are positioned as participants

  • Clear planning and role distribution are essential in collaborative media projects

  • Critical analysis benefits from combining technical understanding with emotional and experiential reflection

This assignment also laid a strong conceptual foundation for our later practical projects, particularly in understanding interaction, audience agency, and performative systems.


Week 5 — Learning Summary: CHOPs Logic and Animation Control

This week focused on building interactive control systems using CHOPs. We used Constant, Math, Pattern, LFO, and Switch to animate shapes, manage parameters, and create smooth transitions in real time.

A key exercise involved switching multiple Constant outputs to change colors dynamically. Then we explored rotation and motion control with Pattern, Count, and Lag. LFO waveforms became a practical tool for rhythmic motion.



What I learned from nodes:

  • Constant stores reusable numeric values.
  • Math remaps value ranges for animation.
  • Rename unifies channel names for cleaner connections.
  • Lag smooths rapid changes into stable motion.
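
To convince myself that these behaviours really are “just math”, I rewrote the Math and Lag ideas as two small Python functions. This is my own simplification, not TouchDesigner code.

    def remap(value, in_lo, in_hi, out_lo, out_hi):
        # Math CHOP-style range remap
        t = (value - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)

    def lag(previous, target, smoothing=0.1):
        # Lag CHOP-style smoothing: move only a fraction of the way each frame
        return previous + (target - previous) * smoothing

    # Example: a raw value from 0..1023 remapped to a rotation of 0..360 degrees,
    # then eased toward that target over a few frames
    angle_target = remap(700, 0, 1023, 0, 360)
    angle = 0.0
    for _ in range(5):
        angle = lag(angle, angle_target)
        print(round(angle, 1))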



This week deepened my understanding that animation is also mathematical — built through logical relationships and continuous feedback.


Week 6 — Noise + Audio Reactivity & Project Planning

Date: Oct 31

Focus: Noise displacement + audio analysis; concept brief setup




  • Mapped motion/brightness data into noise displacement; explored routing with Analyze/Select CHOP.
  • Added Palette → Audio Analysis: linked kick/low/high bands to brightness/offset/strength (Math + Lag smoothing).
  • Began concept → prototype pipeline: modular build; projection framing; plan roles, sensors, IPO.
  • Outcome: a responsive visual sketch plus lean concept slide outline.
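
A rough sketch of the mapping logic we set up, written in plain Python rather than the actual CHOP network (the band values and scaling numbers here are placeholders):

    def smooth(previous, target, amount=0.2):
        # Lag-style smoothing so the visuals don't flicker with every audio frame
        return previous + (target - previous) * amount

    state = {"brightness": 0.0, "offset": 0.0, "strength": 0.0}

    def update(kick, low, high):
        # Map each analysed band to one visual parameter (Math-style scaling)
        state["brightness"] = smooth(state["brightness"], kick * 2.0)
        state["offset"] = smooth(state["offset"], low * 0.5)
        state["strength"] = smooth(state["strength"], high * 1.5)
        return state

    print(update(kick=0.8, low=0.3, high=0.1))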


Week 7 — Learning Log: Gesture Interaction Practice

This week, I continued developing gesture-based interaction in TouchDesigner, focusing on turning hand-tracking data into stable, responsive visuals.

What I learned

  • Using Select CHOP: isolated pinch distance and XY channels for cleaner control.
  • Smoothing & Remapping: Lag for smoothing, Math for range mapping, Limit for clamping negatives.
  • CHOP Reference Control: linked data to Circle TOP radius/center; adjusted center growth; added webcam displacement.
  • Two Hands: used the second hand as a separate modulation source for multi-parameter control.
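
For documentation, this is the kind of CHOP reference I typed into the Circle TOP parameters. The operator and channel names ('null_hand', 'pinch', 'x', 'y') come from my own network and are only examples.

    # Circle TOP > Radius X: pinch distance scales the circle
    op('null_hand')['pinch'] * 0.5

    # Circle TOP > Center X / Center Y: follow the tracked hand position
    op('null_hand')['x'] - 0.5
    op('null_hand')['y'] - 0.5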

Notes on Final Project

The lecturer reminded us that this prototype leads into the final installation. We should keep the physical setup small, modular, and easy to transport, and we can contact the lecturer anytime for feedback.


Week 8 — Interaction Mapping & Prototype Planning

(Performative Media – Week 8)

This week marked a shift in our group’s understanding of interactive artworks. Previously, we tended to focus heavily on visual outcomes — brighter colours, more complex particles, and stronger bloom effects. However, through Activity 1 and Activity 2, we began to understand that interactive work is not driven by visuals alone, but by a relationship: a continuous chain connecting human action, system behaviour, and the meaning behind those interactions.

Within the group, Xia Chen took primary responsibility for the technical development. He reviewed and adjusted the TouchDesigner network, refining how gesture and sound inputs were mapped to particle behaviour. These changes helped the system respond more clearly and consistently, making the cause-and-effect relationship between user action and visual output easier to perceive.

As a team, this process helped us realise that strong interactivity emerges when technical decisions support conceptual clarity. Rather than adding more visual effects, we learned to focus on how each parameter change contributes to the overall experience and reinforces the intended meaning of the interaction.


Activity 1 — Interaction Mapping Sprint

User Action → Input Sensor → Processing → Output Behaviour → Intended Meaning

Writing this line forced me to examine the logical skeleton of the work. Instead of asking “how can I make this look cool?”, I had to ask “why should the system react this way?” and “what does the user’s action mean?”

My Interaction Map: Echo Flow Logic Chain

  1. User Action: audience raises hands, waves, or makes sound, becoming participants.
  2. Input Sensor: hand tracking (direction/position/speed) + audio analysis (amplitude/frequency).
  3. Processing: mapped into force strength, turbulence, brightness/bloom, frequency-based layers.
  4. Output Behaviour: particles swirl/stretch/scatter; bright zones burst; feedback creates trails.
  5. Intended Meaning: light feels soft, responsive, alive; the audience co-shapes it.

Activity 2 — Prototype Priority Setting

Don’t polish visuals before core interaction is stable. If the chain is broken, even the prettiest bloom won’t save the piece.

1. Core Interaction (must be first)

Gesture / sound → force + brightness mapping → particle behaviour change

  • gesture tracking cannot jitter
  • audio input must be clean
  • particles must respond smoothly
  • the loop cannot break

2. Secondary Layer (optional)

  • alternative colour modes
  • additional gesture mappings
  • mode switching by frequency differences

3. Aesthetic Polish (last)

  • smoother particle trails
  • cleaner color grading
  • balanced bloom
  • consistent layers and composition

Personal Reflection

I now understand that meaning comes from how the system interprets intention. Fast movement should explode with energy; slow movement should feel gentle. Soft sound makes light tremble lightly; loud sound bursts the field outward. This isn’t control — it’s relationship.

An interactive artwork is not about control; it is about resonance.

Next Steps

  • Improve smoothing for hand tracking inputs
  • Refine low / mid / high frequency mappings for audio
  • Fix feedback overexposure issues
  • Test proximity-based particle density
  • Apply polish only at the very end

Assignment 2 Research & Concept Development (For the detailed content of Assignment 2, please refer to this post)

Group Progress Plan — Assignment 2

1. Transition from Assignment 1 to Assignment 2

Building on Assignment 1: Precedent Study & Critical Analysis, our group moved from analysing existing performative media artworks to developing our own interactive concept and prototype.

In Assignment 1, we studied artists such as Refik Anadol and Blast Theory, focusing on how interaction, data, sound, and participation transform audiences from observers into active contributors. These studies highlighted a shared principle:

Technology should serve human meaning rather than replace it.

Assignment 2 extends this research by translating theoretical insights into a functional interactive system, preparing us for full-scale development in Assignment 3.


2. Project Introduction (Week 7)

Assignment 2 explores how interactive media can transform digital visuals into a responsive and expressive environment.

Our group proposed Echo Flow, an audio- and gesture-reactive particle performance built in TouchDesigner. The project investigates how simple human actions—movement and sound—can shape a complex visual system, allowing light to behave like a fluid material that shifts, bends, and reacts to human presence.


3. Initial Research Phase (Week 7)

To support meaningful interaction design, our research focused on three main areas:

3.1 Interactive Digital Art & Light-Based Works

We revisited and expanded upon artists studied in Assignment 1, including:

  • Refik Anadol — Machine Hallucinations, Quantum Memories (data as emotional, fluid visuals)

  • Ryoji Ikeda — Test Pattern (sound-driven minimal light systems)

  • GMUNK — volumetric and cinematic light environments

  • United Visual Artists (UVA) — geometry-based emotional light installations

  • Ouchhh Studio — poetic, AI-driven visual systems

These references reinforced our understanding of light as an active behaviour, not a decorative surface.


3.2 Interaction as a Meaning-Making System

Through class discussions, we learned that interaction must communicate emotion and intention, not just respond technically.

We adopted the framework introduced in class:

User Action → Sensor Input → Processing → Output Behaviour → Intended Meaning

This shifted our design thinking from “What looks visually impressive?” to “What emotional response does each action create?”


3.3 Technical Research: TouchDesigner (Week 7–8)

Our technical exploration focused on understanding core TouchDesigner systems:

  • Webcam-based hand tracking (CHOP tracking)

  • Audio analysis (Analyze, Filter, Math CHOPs)

  • Particle systems (SOP instancing, force fields, noise)

  • Feedback loops (temporal trails and motion memory)

  • Bloom / Edge / Level TOPs (glow, contrast, highlight control)

This research provided the technical foundation for our early prototype.


4. Concept Framework Development (Week 8)

Our concept is built around the following idea:

“The world is complex, but if you dare to move closer, it reveals its beauty.
If you are not bold, you do not get nothing—you simply receive less.”

In Echo Flow:

  • Light represents the world — complex and chaotic from a distance

  • Gesture and sound represent courage and action

  • Approaching and interacting causes the system to respond

This framework defined both the emotional tone and interaction logic of the project.


5. Intended User Experience (Week 8)

5.1 Experience Goals

We summarised our experience goals as:

  • Immediate response — actions trigger instant visual changes

  • Intuitive interaction — no controllers; the body is the interface

  • Immersive atmosphere — feedback trails and bloom create spatial depth

  • Emotional reflection — approaching complexity leads to reward


5.2 Narrative Layer

To support the experience, we developed a short narrative titled:

“The Moment You Reach for the Light”

In this metaphor, light represents the world: initially overwhelming, but increasingly responsive as the audience moves closer and engages. This narrative gives emotional meaning to the interaction, rewarding participation rather than passive observation.


6. Early Prototype Development (Week 8)

Our early TouchDesigner prototype includes:

  • A responsive particle field driven by hand movement

  • Audio-reactive brightness and bloom

  • Feedback-based light trails acting as motion memory

During testing, we encountered several challenges:

  • Unstable gesture tracking

  • Overexposed feedback visuals

  • Unclear cause-and-effect between input and output

To address these issues, we refined the system using:

  • Filter / Lag CHOPs to smooth motion

  • Math CHOPs to control value spikes

  • Trail CHOPs to analyse movement speed

  • Adjusted feedback and bloom thresholds

These changes allowed the visuals to respond more intentionally—slow movements created gentle flows, while fast gestures triggered energetic bursts.
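
The behaviour we were tuning boils down to a small rule like the one below, written here as a plain Python sketch with made-up numbers; in the project itself this tuning lives in the Filter, Lag, and Math CHOPs.

    def respond(hand_speed, previous_energy, smoothing=0.15, max_energy=1.0):
        # Fast movement pushes the target energy up; a clamp stops value spikes
        target = min(hand_speed * 2.0, max_energy)
        # Lag-style smoothing keeps slow movements feeling gentle
        return previous_energy + (target - previous_energy) * smoothing

    energy = 0.0
    for speed in [0.05, 0.05, 0.6, 0.9, 0.1]:     # slow, slow, fast, fast, slow
        energy = respond(speed, energy)
        print(round(energy, 3))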


7. Group Roles & Collaboration

To ensure efficient progress, responsibilities were clearly divided:

  • Ge Xianjing  — concept development, narrative writing, presentation structure

  • Xia Chen — TouchDesigner technical research and system testing

  • Zeng Zifeng — material research and support for prototyping and testing

  • Zoey — assisting documentation and research coordination

This collaboration allowed us to develop conceptual clarity and technical functionality in parallel.


8. Lecturer Feedback (Week 9) & Reflection

During the Week 9 group presentation, the lecturer provided constructive feedback:

  • The demo video pacing was too slow

  • Input–output relationships were not immediately clear

  • Interaction strength needed amplification

  • The concept was promising but required sharper narrative focus

  • Presentations should balance technical explanation with audience experience

This feedback highlighted the need to strengthen clarity, decisiveness, and physical readability, guiding our transition toward spatial and installation-based development in later weeks.


9. Summary & Next Direction

Assignment 2 marks a critical transition point in our group’s process:

  • From research → concept → early prototype

  • From abstract visuals → meaningful interaction systems

  • From observation → embodied audience participation

The outcomes of this phase directly inform Assignment 3, where we will translate Echo Flow into a fully realised physical installation.


Week 9 — Weekly To-Do

1. Main Goal of the Week

To present our early prototype, gather feedback, and clarify next steps so the interaction becomes understandable and immediate for the audience.

2. What We Will Test / Improve

  • clarity of input–output relationships
  • visibility and speed of system response
  • rhythm and pacing of demo video (current feels too slow)
  • scale of movement — gestures must trigger bigger changes
  • communication of the idea behind the interaction

📌 In short: The audience should understand what’s happening within the first 2 seconds.

3. One Concrete Task

Re-edit the demo video and adjust particle behavior so gestures and sound produce stronger, more noticeable visual changes. 👉 Essential task: Make the interaction clear and immediately readable.

4. Quick Reflection

✔ What worked well:

  • shorter video with only essential clips
  • stronger force / noise / particle spread makes gestures clearer
  • simpler explanation helps visuals speak

✔ What to review next week:

  • speed up feedback and reduce lag
  • consider simplifying inputs to hand gestures + voice
  • make visual language more intuitive (contrast, bloom, directional flow)


Week 10 — Progress: From Digital Energy Field to an Immersive Fabric-Light Installation

During Week 10, our group shifted Echo Flow from a digital concept (Assignment 2) into a tangible spatial installation for Assignment 3. With feedback from our lecturer and deeper discussions among our group members, we began reconsidering one crucial question:
If our particle system truly “exists,” what physical form should it take? And how should audiences enter, feel, and disturb it?

This blog entry records our Week 10 development — including lecturer feedback, our upgraded installation plan, and the next steps toward building an immersive light-fabric environment.


01|Lecturer Feedback: Strong visuals, but the installation is still too abstract

Our lecturer gave positive comments on the TouchDesigner prototype:

  • Strong visual presence

  • Clear gesture/sound reaction

  • The particle system already feels expressive

But the main issue was also very clear:

The work is still too abstract. It lacks a physical object or spatial structure that audiences can enter, approach, or engage with.

We were encouraged to strengthen:

  • The meaning behind the concept

  • A concrete physical form to anchor the visuals

  • A defined audience flow and spatial behaviour

  • A complete installation sketch and equipment plan

This feedback made us realise that moving toward Assignment 3 means letting the piece fall “out of the screen” and truly occupy physical space.


02|Group Discussion: Making the energy field physical, letting audiences walk inside

After feedback, our discussion focused on how to transform Echo Flow into a hybrid of physical and digital elements. Several key decisions emerged:


① The Energy Core — a cosmic metaphor made physical

We plan to anchor the particle system within a real object:

  • A transparent dome or acrylic sphere

  • Internal LED or fibre-optic lighting

  • Natural elements (plants, stones, soil) to shape the surrounding space

  • Projected visuals forming the “outer energy field”

The core represents a living, reactive energy source.
When the audience moves, it “breathes,” pulses, expands, or contracts.


② Adding ‘Spatial Memory’: the system remembers where people walked

We expanded the interaction logic:

Using TouchDesigner’s:

  • Feedback TOP

  • Optical Flow

  • Position tracking via camera

we plan to record the viewer’s walking path.

This produces short-lived glowing trails —
as if the space remembers their presence.

It deepens the theme:
Even after a person leaves a position,
their “energy trace” remains in the field.
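
The logic we have in mind is essentially a 2D buffer that slowly fades while tracked positions are “stamped” into it each frame. A small NumPy sketch of that idea (grid size, decay rate, and positions are placeholders, not our final settings):

    import numpy as np

    trail = np.zeros((8, 8))     # a tiny stand-in for the projected "memory" image
    decay = 0.92                 # how quickly old traces fade each frame

    def step(position=None):
        trail[:] = trail * decay             # fade everything a little
        if position is not None:
            x, y = position
            trail[y, x] = 1.0                # stamp the tracked position at full brightness
        return trail

    # Someone walks diagonally across the space, then leaves
    for pos in [(1, 1), (2, 2), (3, 3), None, None]:
        step(pos)
    print(np.round(trail, 2))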


③ The major breakthrough: Fabric screen + reflective surfaces

This became our most important advancement this week.

Instead of projecting on a flat screen, we decided to use:


✔ Semi-transparent fabric (sheer curtains) as the projection medium

Fabric allows the particle visuals to appear:

  • floating, vapor-like

  • layered through multiple fabric sheets

  • drifting with air movements

Most importantly:

The audience can step inside the fabric layers,
as if entering a cloud of light or an energy corridor.

When people walk between the curtains, the fabric moves gently.
The projected particles distort physically, bonding digital imagery with real-world motion.


✔ Mirrors / reflective surfaces to shape light behaviour

Mirrors are not for showing the audience’s reflection, but for:

  • bending the projection

  • breaking the particle lines

  • redirecting light into unexpected paths

This reinforces the cosmic idea —
like gravitational lensing or distorted starlight.


03|Initial Exhibition Layout (Week 10 Draft)

(1) Spatial Structure Draft

   ┌─────────────────────────────┐
   │      Overhead soft lighting │
   │    (simulated cosmic glow)  │
   └─────────────────────────────┘

     [Fabric Curtain]     [Fabric Curtain]
           │                    │
           │  Audience enters   │
           │  and walks inside  │

                 ● Energy Core (acrylic dome)
   ← Camera & microphone are hidden in the structure →

(2) Audience Journey (explainable within 30 seconds)

  1. Viewer enters between the fabric layers

  2. Raises hand → particle distortions ripple across the fabric

  3. Makes sound → pulses of light expand from the core

  4. Walks → trails of light follow their movement

  5. Stands still → the energy field slowly returns to equilibrium

The entire experience feels like:

Walking inside a river of light,
where movement and voice reshape the digital universe.


04|Next Steps (for Week 11)

✔ 1. Fabric projection test

  • Buy sheer fabric (white or grey)

  • Test projection distance & brightness

  • Try double-layer fabric for depth

✔ 2. Prototype of the Energy Core

  • Acrylic dome or plastic sphere

  • LED or inner lighting structure

  • Stability + mounting test

✔ 3. Sensor positioning

  • Camera placement

  • Hidden microphone setup

  • Ensure tracking still works when the audience enters the fabric space

✔ 4. TouchDesigner projection layering

  • Core particle layer

  • Outer flowing layer

  • Ground or wall memory trails

✔ 5. Exhibition mockup

  • Top / Front / Side views

  • A near-final conceptual render


05|Conclusion: From particles on screen to an immersive energy field

Week 10 helped us understand that:

Interaction becomes meaningful only when experienced physically.

The new direction of Echo Flow for Assignment 3 transforms it into:

An immersive field of energy that audiences can walk through, disturb, reshape,
and leave temporary traces within.

Fabric, mirrors, particle systems, and spatial memory together create a space where:

  • the audience becomes an active participant

  • their presence has visual impact

  • their movement leaves light behind

  • the digital environment is no longer flat, but alive

In this new iteration, viewers are not “watching a work” —
they are entering a small universe that reacts to them,
recording their existence through momentary echoes of light.



Week 11 — Direction Refinement & Installation Exploration

Based on our lecturer’s feedback from last week, we began to re-evaluate the physical direction of our installation. Although our original idea using sheer fabric and projected particles created an atmospheric and immersive effect, the lecturer pointed out that the concept was still too abstract and lacked a tangible physical anchor.

Fabric Layout Discussion (Initial Direction)

At the beginning of the week, our group focused on using multiple layers of sheer fabric to shape the spatial layout. After class, we continued discussing our ideas through WhatsApp and created several rough sketches to visualise how the audience might walk into the projection instead of simply observing it.

We explored references such as fabric-based light tunnels, layered projection corridors, and soft material installations to understand how particles could “float” across overlapping fabric layers. Our sketches included:

  • approximate projector and sensor positions

  • the audience walking path

  • fabric height and layering structure

This direction helped us imagine a dreamy, fluid visual atmosphere—but it still lacked a strong physical component.




Shift Toward Water-Surface Interaction (Updated Direction)

After further consultation, the lecturer suggested that we explore a more material-driven approach, especially one that allows a clearer physical-digital relationship. The idea of interacting through water resonated strongly with our concept.

We realised that water’s natural properties—ripples, reflection, and surface distortion—could visually and conceptually support our particle behaviour much better.

Thus, we began refining a new setup:

  • finding a container capable of holding water

  • placing a screen or projection surface beneath the water layer

  • designing the container’s outer frame to match the theme

  • allowing the audience to interact directly on the water surface

In this configuration, gestures on the water—such as tapping, stirring, or making ripples—would distort the projected visuals below, generating light ripples, particle scattering, and pulsating effects.

This approach offers a clearer physical anchor, a more intuitive interaction, and a sensory connection that expands the meaning of our original TouchDesigner particle design.


Week 11 — From Concept to Site-Ready Installation (Progress Log)

This week was a turning point for our final performative media project. Instead of keeping our idea at a “visual concept” level, we started translating our TouchDesigner prototype into a real, buildable installation that can function in a public exhibition space.

Our lecturer’s feedback was clear: the visuals are strong, but the project must become less abstract and more physically readable—with a clear object, clear interaction, and a clear spatial setup.


Our Theme & Current Direction

At the core, our project is about re-framing discarded materials.

Even if something is unwanted or abandoned, in another person’s eyes—and in another context—it can be re-seen, re-shaped, and re-valued. We want the installation to feel like a small “reconstructed world,” built from everyday leftovers.

To reflect that theme, we are combining:

  • Soft, flowing fabric (to represent energy, waves, memory, and movement)

  • Collected recycled objects (to represent “discarded life” re-entering a new system)

  • Water-based interaction (to create a physical layer of behaviour and reflection)

  • TouchDesigner visuals (to translate movement + sound + water disturbance into reactive light)



Our Current Installation Plan (Updated)

After multiple revisions, we now have a more concrete plan that connects concept, interaction, and physical form.

1) Core interaction (what the audience does)

The audience interacts through hand movement over water, and potentially through body movement in front of the installation.

  • When a visitor moves their hands near/above the water surface, the system detects motion.

  • When the visitor touches or disturbs the water, ripples become part of the experience.

  • The interaction is designed to feel like “activating a hidden energy field” using the body.

2) System response (what happens)

The system responds by generating reactive visuals (particles / flow / light traces) through TouchDesigner.

  • Motion → particle direction, turbulence, trails

  • Water movement concept → wave-like expansion / distortion patterns

  • Ambient sound (optional) → intensity / pulse / glow

3) Spatial form (what it looks like)

We want the environment to be minimal but immersive, dominated by light and shadow, with fabric acting as both material and atmosphere.

Current layout idea:

  • A central interaction object (water tank + screen system)

  • Surrounding fabric structures (some draped, some grounded)

  • A “recycled layer” that holds meaning through texture, fragments, and found materials







Materials & DIY Purchases (What We Collected)

This week, we visited a DIY shop and began sourcing materials to test physical feasibility.

So far, we have prepared or planned to use:

  • Sheer fabric / net fabric (multiple pieces, different opacity)

  • Rope + glue + ties (for shaping and fixing fabric forms)

  • Artificial grass mat (large rectangle that we can cut)

  • Rocks / small stones (ground texture + weight)

  • Recycled objects (planned): old clothes, plastic bottles, cans, paper scraps, old newspaper text, small broken decor items

  • Broken mirror fragments (new addition): used to scatter light and create a “shattered reflection” effect

Our intention is not to decorate randomly, but to make these objects feel like evidence of a previous life, now embedded in a new environment.






GMBB Site Visit (Exhibition Planning)

During Week 11, we also visited GMBB to check the exhibition space and confirm our installation constraints.

Key requirement:

  • The installation must stay within a 1 m × 1 m footprint

  • The structure needs to be stable and safe for 3 days of continuous exhibition

This visit helped us understand:

  • how close audiences can stand

  • how long they might stay

  • what height feels comfortable for hand-based interaction

  • what type of structure is realistic in that public setting



Problems We Found During Testing (and Lecturer Feedback)

Issue 1 — Fabric waves are not controllable

Our initial idea relied heavily on sheer fabric draping to form “wave folds.”
However, our lecturer pointed out a critical problem:

If we only depend on fabric to create the shape, it becomes unpredictable, unstable, and hard to maintain.

Lecturer suggestion:
We need supporting structures to “fix” fabric in a controlled form, so the design principles are visible and consistent.



Issue 2 — Screen + water tank placement wasn’t working

We tested different physical layouts:

Early idea: screen placed underneath the water tank.
However, the school’s equipment limitations made this difficult.

Tested layout: screen at the back + water tank in front, both at similar height
This created new issues:

  • not visually refined enough

  • not friendly for average visitor height (especially shorter audiences)

  • difficult for the audience to naturally perform “hands into water” interaction

We realised:
If we want visitors to place their hands into the water naturally, the tank cannot be set in a rigid “standing display” way.
A more thoughtful ergonomic setup is needed.



Design Revision (What We Changed After Feedback)

We revised the physical plan in two major ways:

1) Replace wall-hook fabric with a supportive structure

Originally, we wanted to attach fabric to walls using hooks.

Now, we plan to use an umbrella stand / pole support structure so the fabric becomes a real shaped object, not just a hanging surface.

Benefits:

  • controlled form

  • clearer composition

  • easier to maintain for 3-day exhibition

  • supports our theme: “reconstructing order from softness and fragments”

We can also hang colourful old objects (recycled items) from the support structure to strengthen the conceptual layer.





2) Build a protective frame for screen + water tank

We began hand-making a physical frame system to combine:

  • the screen under the tank

  • a transparent structure so the screen is still visible

  • protection for the screen while keeping the installation clean

This was a practical decision that moves the work closer to exhibition reality.







Team Roles (Week 11)

To keep progress efficient, we divided roles clearly:

  • Ge Xianjing and Zoey — visual planning, sketching, spatial design drawings

  • Xia Chen — TouchDesigner technical research, interaction logic testing

  • Zeng Zifeng — sourcing materials + assisting physical build tests with Xia Chen

This division helped us develop the concept and build simultaneously, instead of waiting for one part to finish first.

What’s Next (Our Immediate Plan)

Next week, our priorities are:

  1. Finalize a stable support structure for the fabric (umbrella/pole frame test)

  2. Confirm the best height and orientation for water-based interaction (visitor comfort first)

  3. Test camera distance above water to ensure hand tracking is reliable

  4. Integrate recycled object layer (old clothes, cans, text collage) in a controlled way

  5. Produce clearer documentation:

    • top / front / side plan drawing

    • materials list

    • short test video showing input → process → output

Week 11 Reflection

Week 11 made us realise that strong interactive visuals are only one part of a successful performative media project. What matters equally is:

  • physical readability

  • structural control

  • audience comfort

  • and the ability to sustain the work in a real exhibition environment

This week, our project moved from “an idea” into “a buildable system”—and that shift is what will shape the quality of our final installation.


Week 12 and 13 — Final Exhibition & Installation Realisation (Process Log)

Performative Media / Bachelor of Interactive Spatial Design (Honours)
Final Exhibition Week

Week 12 marked the most intensive and meaningful stage of our Performative Media project, as it was the final exhibition week. From Monday to Friday, our group focused on refining the design, resolving technical and structural issues, and transforming our concept into a fully realised, exhibition-ready installation.

Rather than developing new ideas, this week was about making decisions, executing clearly, and responding critically to feedback in a real exhibition environment.


Collective Effort & Exhibition Preparation

Our group worked closely together throughout the week, coordinating tasks, materials, and installation logistics. We agreed to complete the full setup on Thursday at GMBB, dedicating an entire day to assembling, testing, and adjusting the installation on-site.

Beyond the physical work, we also designed a small poster to accompany the installation. This poster introduces the project concept, interaction logic, and theme, helping visitors quickly understand what they are experiencing.





Revised Structural Strategy: Umbrella Support System

Based on the lecturer’s previous feedback, we decided to fully commit to the umbrella-based support structure, but with a more robust and considered approach.

Instead of relying only on fabric tension, we extended the umbrella structure using water pipes, which provided greater stability and allowed the fabric to form more controlled, wave-like folds. This solution helped us balance softness and structure — a key design principle of our work.

The umbrella frame also allowed the fabric to feel suspended and alive, echoing the theme of fluid energy rather than static decoration.







Screen & Water Tank Protection System

To address safety and visibility, we constructed a steel frame to protect the television screen. This frame also stabilised the water tank above it, ensuring that both elements could function together safely in a public exhibition setting.

Initially, when the water tank was added, we noticed that the transparent back revealed the internal support pipes, which disrupted the visual clarity of the installation. Instead of hiding this purely with fabric, we introduced two mirror panels behind the tank.

This design decision created an unexpected and meaningful effect:

  • From a distance, visitors primarily see the shape and movement of water

  • When approaching closely and looking beneath the tank, viewers can:

    • see the camera-generated visuals projected on the screen

    • experience double reflections and layered colour through the mirrors

This subtle detail rewarded close observation and added depth to the interaction — a small but intentional moment of discovery.




Layered Fabric & Lighting Design

Our fabric system consists of two overlapping layers:

  • The upper layer is khaki-coloured

  • The lower layer is white

This layering creates greater depth, softness, and tonal variation when light passes through the material. Rather than appearing flat, the fabric responds dynamically to both projection and ambient lighting.

We also integrated coloured light bulbs within the fabric structure. These lights enhanced the immersive quality of the space, encouraging visitors to feel as if they were stepping into the installation rather than observing it from outside. The lighting concept reinforces the idea of an environment that reacts above and below — visually connecting the overhead fabric with the grounded interaction area.




Material Language & Ground Composition

We deliberately avoided using simple transparent tape for assembly. Instead, we used hemp rope to bind and weave elements together. This method created a tactile, handmade quality that aligned with our theme of reconstruction and reuse.

On the ground, we laid artificial grass, combined with personal and recycled objects brought from home, including:

  • clothes

  • shoes

  • light bulbs

  • artificial flowers

  • artificial plants

Each object was placed intentionally, creating a space that feels slightly chaotic yet alive — not overly polished, but full of everyday traces. This controlled messiness reflects a lived-in environment rather than a sterile display.




Final Installation Outcome

Through continuous testing and adjustment, the installation gradually came together as a cohesive environment:

  • The fabric forms a soft, enclosing upper layer

  • The water tank and screen establish a clear interaction focus

  • Mirrors, lighting, and recycled materials introduce layers of perception

  • The structure remains stable, readable, and safe for public interaction

More importantly, the work communicates our core intention:
to transform ordinary, discarded materials into an emotionally responsive space through interaction, light, and movement.


Week 13 Reflection (Process-Based)

Weeks 12 and 13 taught us that successful interactive installations are not defined by complexity, but by clarity, sensitivity, and physical intelligence. Every decision — from structural support to material binding — influences how an audience understands and feels the work.

Rather than hiding imperfections, we embraced subtle irregularities, allowing the installation to feel human, approachable, and quietly expressive. This final exhibition became not just a presentation of outcomes, but a reflection of our collective effort, problem-solving, and design growth.

Week 14 — Final Group Presentation & Project Reflection

Week 14 marked the final presentation stage of our performative media project. This week focused on consolidating our ideas, refining our narrative, and clearly communicating how our work evolved from Assignment 2 (Concept Proposal & Early Prototyping) into a fully realised installation for Assignment 3.

For the final group presentation, each team member was responsible for a specific section of the PowerPoint. This structure helped us present the project coherently while highlighting individual contributions. I was responsible for the opening section, which introduced our project and explained the key changes and development from Project 2 to Project 3.


My Role — Explaining the Evolution from Project 2 to Project 3

In my presentation section, I focused on explaining how and why our project transformed.

In Assignment 2, our work primarily existed as a digital system built in TouchDesigner. The focus was on gesture and sound interaction, where particles and light reacted to human input in real time. While the prototype successfully demonstrated an input–process–output logic, the experience remained largely screen-based and abstract.

Moving into Assignment 3, we realised that to make the project more performative and meaningful in a public exhibition context, we needed to extend the digital logic into physical space. This became the key turning point of the project.

I explained to the audience that our core concept did not change — we were still exploring how human presence leaves traces and activates hidden responses — but the way this idea was expressed evolved significantly. Instead of focusing only on visual effects, we shifted toward material, space, and embodied interaction.


Key Changes from Assignment 2 to Assignment 3

During the presentation, I highlighted several major developments:

  • From screen-based interaction to spatial installation
    The project moved from a digital prototype to a physical environment involving fabric, water, recycled objects, and structural supports.

  • From abstract gestures to tangible actions
    Interaction evolved from general hand movement in front of a screen to direct engagement with water, creating ripples that trigger visual responses.

  • From visual emphasis to experiential clarity
    Feedback from Assignment 2 encouraged us to make the interaction more readable. In Assignment 3, cause-and-effect became more immediate and intuitive.

  • From isolated visuals to narrative-driven design
    The final installation integrates materials, light, and interaction into a cohesive story about transformation, care, and re-seeing everyday objects.

By explaining these shifts, I aimed to show that Assignment 3 was not a completely new idea, but a natural and thoughtful extension of the research, testing, and feedback we received earlier in the semester.


Group Collaboration and Presentation Structure

After my introduction, each team member presented their assigned section, including technical implementation, material construction, interaction flow, and exhibition considerations. This division of roles allowed us to cover both conceptual intention and practical execution.

The final presentation reflected our collective effort — from early digital experiments to on-site testing and installation. More importantly, it demonstrated how collaboration allowed the project to grow beyond a single perspective, combining technical research, spatial thinking, and narrative design.


Week 14 Reflection

Week 14 was not only about presenting a finished outcome, but also about articulating our learning journey. Preparing this presentation helped me clearly understand how much the project had evolved — not just in form, but in thinking.

I realised that strong performative media work is built through iteration: responding to feedback, identifying limitations, and being willing to change direction while staying true to the core idea. Explaining this evolution to an audience also strengthened my confidence in communicating design intent, especially when bridging digital systems with physical experience.

Overall, the final group presentation marked the completion of a long process of research, experimentation, collaboration, and reflection — and clearly demonstrated how our project matured from concept to installation.


