Sonic Design / Task 1 - Sound Fundamentals
Sonic Design / Exercise - Sound Fundamentals
September 22, 2025
22.09.2025 - 2025 / Week 1 - Week 4
GeXianjing / 0377636
Sonic Design / Bachelor of Interactive Spatial Design (Honours)
- Exercise 1
- Exercise 2
- Exercise 3
- Exercise 4
Week 1 — Sound Design Fundamentals
Topic: Sound Design and Audio Storytelling
Key Learning Insights
This week’s session focused on the fundamentals of sound design, emphasizing how audio can create immersive environments and emotional depth — much like how visual hierarchy organizes elements in a poster. I learned that sound layers (foreground and background) are essential to building believable and engaging soundscapes.
- Foreground sounds (like footsteps, door creaks, or speech) guide attention and drive the story.
- Background sounds (like wind, city noise, or rain) enrich the atmosphere and give spatial depth.
The concept reminded me that sound design is not just about adding noise — it’s about composing auditory space that shapes how the audience feels and perceives the environment.
Tools and Resources
We explored several sound effect libraries and editing platforms:
- FreeSound.org — a community-based sound library offering thousands of user-contributed effects.
- BBC Sound Effects — a professional archive of royalty-free sounds (e.g., nature, urban, historical).
- Adobe Audition — the main tool for editing, layering, and balancing sound compositions.
I also learned that studio-grade headphones are recommended (e.g., Sennheiser HD-6 series), since regular consumer earphones might distort frequencies — which can lead to inaccurate sound mixing.
Conceptual Understanding
Sound design follows a clear hierarchy and narrative logic, similar to storyboarding in visual design:
- Storyline Development — Every project starts from a structured narrative. For example, a morning scene begins in silence, then builds through subtle layers (e.g., birds, footsteps, rain).
- Sound Curation — Sounds should be collected ethically and modified creatively (pitch adjustment, trimming, reverb) rather than used directly.
- Execution & Feedback — Continuous testing and feedback loops help refine timing, tone, and realism.
I realized that effective sound design balances realism and imagination — thunder must feel real, but magical elements can be entirely abstract.
Audio Storytelling
We were introduced to our upcoming project — a 2–3 minute narrative recording that relies on sound and voice to tell a story.
Requirements:
- Human voice acting (AI voices are strictly prohibited).
- Clear differentiation between character voices.
- Sound effects and music used only for emphasis, not to replace storytelling.
Grading Focus:
- Voice Narration (70%) — clarity, tone, emotion.
- Sound Effects (30%) — appropriate layering and timing.
This project encourages me to think critically about how voice and environment interact — how silence, echo, and rhythm can express meaning even without visuals.
Final Project Preview: Silent Movie Soundscape
In the final phase, we’ll design a soundscape for a silent movie, using only recorded or synthesized sounds (no pre-existing or musical tracks). The aim is to evoke emotion purely through environmental and Foley effects — such as fabric rustling, footsteps, or mechanical sounds.
This project challenges me to listen more consciously — to interpret space through hearing rather than sight.
Common Mistakes to Avoid
- ❌ Using AI-generated voices — immediate disqualification.
- ⚠️ Overproduction — too many sound effects can overwhelm and reduce clarity.
- Poor equipment — inaccurate monitoring affects mixing quality.
Personal Reflection
This week taught me that sound is storytelling. Every small detail — from the distance of a voice to the echo of a corridor — carries emotional meaning. In future projects, I want to apply this sensitivity to my spatial design practice too, integrating sound as an experiential layer that enhances user interaction and atmosphere.
Week 2 — Adobe Audition Beginner Guide
During this week’s online tutorial, we learned how to use Adobe Audition to edit and improve sound quality through the Parametric Equalizer (EQ) and Multitrack Editing. The lesson focused on understanding how different frequency ranges affect the overall tone and clarity of a sound.
Our instructor guided us step by step, beginning with how to import audio clips and open the equalizer interface. We learned to recognize the frequency spectrum—from the low bass (20 Hz) up to the high treble (20 kHz)—and how each range contributes to what we actually hear.
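As a side note, the difference between these frequency ranges is easy to hear by generating simple test tones. The sketch below is only illustrative: it assumes Python with numpy and scipy are available, and the 60 Hz / 1 kHz / 10 kHz frequencies are my own picks to represent bass, midrange, and treble rather than values given in the tutorial.

```python
# Sketch: generate short test tones so each frequency band can be heard in isolation.
# Assumes numpy and scipy are installed; frequencies are illustrative, not from class.
import numpy as np
from scipy.io import wavfile

SR = 44100          # sample rate used in our Audition sessions
DUR = 1.5           # seconds per tone

def tone(freq_hz: float) -> np.ndarray:
    """Return a sine tone with a short fade in/out to avoid clicks."""
    t = np.linspace(0, DUR, int(SR * DUR), endpoint=False)
    sig = 0.3 * np.sin(2 * np.pi * freq_hz * t)
    fade = np.minimum(1.0, np.minimum(t, DUR - t) / 0.02)  # 20 ms ramps
    return sig * fade

# Low (bass), mid, and high (treble) examples strung together with silence between.
gap = np.zeros(int(SR * 0.5))
demo = np.concatenate([tone(60), gap, tone(1000), gap, tone(10000)])
wavfile.write("frequency_demo.wav", SR, (demo * 32767).astype(np.int16))
```

Playing back frequency_demo.wav makes it obvious how different a 60 Hz rumble feels compared with a 10 kHz hiss.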
Step 1: Prepare the Workspace
- Open Adobe Audition → Switch to Multitrack Mode.
- Create a new session:
  - Sample rate: 44,100 Hz (for audio work)
  - Bit depth: 24-bit
- Import all files:
  - Flat.mp3
  - EQ1.mp3 → EQ6.mp3
- Drag each file into a separate track:
  - Track 1 → Flat.mp3
  - Track 2 → EQ1.mp3
  - …
  - Track 7 → EQ6.mp3
Step 2: Use the Solo Button
- To compare files clearly, click the Solo (S) button on one track at a time.
- Avoid playing multiple tracks simultaneously.
- You can set this in: Edit → Preferences → Multitrack → Track Solo → Exclusive
Step 3: Add a Parametric Equalizer (EQ)
For each track (EQ1–EQ6):
- Go to the right side of the track → click the arrow icon (>) → open the FX Rack.
- Choose: Filter and EQ → Parametric Equalizer
- The EQ window will open, showing a graph (20 Hz – 20 kHz).
- This is where you will adjust frequencies.
Understanding the Lesson
The main concept was simple but powerful:
Sound can be shaped the same way color and light can be balanced in a photo.
We practiced this through a set of exercises using six equalizer versions (EQ1–EQ6) and one unedited track (Flat.mp3). Each EQ version was meant to highlight or reduce certain tones so we could train our ears to recognize subtle changes.
By boosting or cutting frequencies, we were able to control the warmth, brightness, and depth of the sound. For instance, increasing the low frequencies made the tone feel heavier and more powerful, while reducing the mids helped clear up muddy parts.
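The boost/cut idea can also be written down as a single filter band. Below is a minimal sketch of a peaking EQ, assuming Python with numpy and scipy; the coefficients follow the widely used Audio EQ Cookbook formulas, and the 120 Hz / 400 Hz settings are illustrative examples rather than the exact values in the EQ1–EQ6 files.

```python
# Sketch: a single peaking-EQ band, similar in spirit to one band of Audition's
# Parametric Equalizer. Coefficients follow the RBJ Audio EQ Cookbook.
# Assumes numpy and scipy; frequencies and gains below are illustrative only.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs: int, f0: float, gain_db: float, q: float = 1.0):
    """Return (b, a) coefficients for a peaking boost/cut at f0 Hz."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
x = np.random.randn(fs * 2) * 0.1           # stand-in signal (2 s of noise)

b, a = peaking_eq(fs, 120, +4.0)            # warm up the lows (+4 dB at 120 Hz)
warm = lfilter(b, a, x)

b, a = peaking_eq(fs, 400, -3.0)            # clean up "muddy" low-mids (-3 dB)
cleaner = lfilter(b, a, warm)
```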
My EQ Adjustment Practice
After importing all six tracks into Multitrack View, I compared them one by one using the Solo button. This allowed me to listen carefully to the differences between the original “Flat” sound and each equalized version.
Here’s a short summary of what I experienced while adjusting each EQ version:
- EQ1 — I boosted the low frequencies, which made the sound warmer and deeper. The voice felt more rounded and powerful.
- EQ2 — I slightly increased the mid-lows, giving more body to the sound. It felt thicker, as if the music had more presence.
- EQ3 — I reduced the bass and raised the highs a bit, which made the sound brighter and cleaner.
- EQ4 — I cut a small amount of low-mid range and emphasized the 1–2 kHz area, helping the vocals stand out clearly.
- EQ5 — I boosted the mid-highs and treble, creating a lighter and more transparent tone overall.
- EQ6 — I balanced the mid-lows and lifted the upper highs, which gave a smooth and airy quality to the sound.
Each version felt slightly different when compared to the original Flat track. Even though the adjustments were small, the tonal balance changed noticeably — which helped me realize how sensitive the human ear is to frequency shifts.
Technical Reflection
Working with the equalizer made me more aware of how every frequency range has its role.
- The bass sets the emotional foundation and warmth.
- The midrange carries clarity and articulation.
- The treble adds openness and sparkle.
By comparing all six EQ versions, I understood that equalization isn’t about making the sound louder — it’s about making it more balanced and meaningful.
Personal Reflection
This exercise helped me train my hearing and learn how to identify frequency problems in recordings. I realized that mixing is not just a technical task — it’s also a creative process of balancing emotion and clarity.
Using Adobe Audition’s Parametric Equalizer and Multitrack Session, I was able to visualize how sound frequencies interact like layers of color in a painting. The process made me more confident in handling future audio projects, especially in designing immersive and emotional sound environments.
Week 3 — Sound Adjustment Practice: Solo Track EQ and Reverb
During this week’s class, we practiced sound shaping by working on each track individually using the Solo (S) function in Adobe Audition.
By turning on S for one track at a time, I could isolate that sound, apply specific Parametric Equalizer (EQ) and Reverb settings, and focus on how the voice changed in different simulated environments.
The exercise helped me understand that even when using the same voice clip, subtle EQ and reverb adjustments can completely alter the mood, distance, and realism of the sound.
Workflow and Method
- Import and Duplicate Audio: I imported the provided sample voice into the multitrack session and duplicated it into six tracks: Airport, Closet, Stadium, Telephone, Walk, and Bath.
- Activate Solo (S) for Each Track: Before adjusting, I clicked the S button for one track at a time. This muted all other tracks, allowing me to clearly hear and focus on the sound changes of that specific environment.
- Apply Parametric Equalizer (EQ): I only modified the mid frequencies (Bands 3–5), as these control clarity and tone in the human voice.
  - If the voice sounded too dull → I slightly raised the midrange (+2 to +5 dB).
  - If it felt too harsh or nasal → I reduced the mids (–2 to –4 dB).
- Add and Tune Reverb: I opened Reverb → Studio Reverb and adjusted the Decay Time and Wet Level to simulate space size (a rough code sketch follows this list):
  - Shorter reverb → small room or closed space.
  - Longer reverb → open area or stadium-like space.
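Studio Reverb itself is a closed plug-in, so the sketch below is only a conceptual stand-in: it approximates Decay Time and Wet Level by convolving the voice with an exponentially decaying noise tail (Python with numpy and scipy assumed; the closet and stadium values are illustrative, not my exact settings).

```python
# Sketch: approximate "decay time" and "wet level" with a decaying-noise impulse
# response. This is a conceptual stand-in for Studio Reverb, not its real algorithm.
import numpy as np
from scipy.signal import fftconvolve

def simple_reverb(dry: np.ndarray, fs: int, decay_s: float, wet: float) -> np.ndarray:
    """Mix the dry signal with a convolved 'room' tail roughly decay_s seconds long."""
    n = int(fs * decay_s)
    t = np.arange(n) / fs
    ir = np.random.randn(n) * np.exp(-3.0 * t / decay_s)   # exponentially decaying tail
    ir /= np.sqrt(np.sum(ir ** 2)) + 1e-9                   # normalise tail energy
    tail = fftconvolve(dry, ir)[: len(dry)]
    return (1 - wet) * dry + wet * tail

fs = 44100
voice = np.random.randn(fs * 2) * 0.1     # placeholder for the imported voice clip

closet  = simple_reverb(voice, fs, decay_s=0.2, wet=0.15)   # small closed space
stadium = simple_reverb(voice, fs, decay_s=2.5, wet=0.45)   # large open space
```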
Exercise 2
1. Airport
For the airport environment, I turned on Solo (S) and focused on the middle frequencies between 1 kHz and 2 kHz. I slightly increased this range to make the voice sound clearer and cut through the background noise. Then I added a short reverb with moderate decay to create the sense of a large but open area, similar to how voices echo slightly in an airport terminal.
2. Closet
In the closet scene, I again used Solo mode and reduced the mid frequencies around 800 Hz to make the sound softer and more muffled. I applied a very small reverb with short decay and low wet level so that the voice would feel trapped inside a tight, closed space. This combination produced a warm but confined sound, like someone speaking from behind clothes or inside a small room.
3. Stadium
For the stadium effect, I boosted the mid frequencies near 2 kHz to give the sound more projection and brightness. Then I added a long reverb with high decay and wet level, making the echo stretch out and fade slowly. This made the voice sound energetic, as if it was being broadcast across a wide open field.
4. Telephone
To simulate a telephone sound, I narrowed the EQ range to emphasize 1 kHz and reduce both the low and high ends. This created a flat, compressed voice tone typical of phone calls. I applied only a very light reverb, almost dry, to preserve that close and metallic feel of a telephone speaker.
However, I found the result was not entirely convincing; the main aim was simply to make the telephone voice stand out (a band-limiting sketch appears after the six environments below).
5. Walk
For the walking scene, I slightly increased the mids around 1.2 kHz, which kept the voice natural but with a bit of presence. The reverb was set to a medium level, with gentle reflections that suggest outdoor space and movement. The result felt realistic — as if the person was talking while walking down a street.
6. Bath
For the bathroom, I raised the higher mids near 1.8 kHz to make the sound bright and sharp. I used a short but noticeable reverb with clear reflections, simulating how voices bounce off tile walls. This made the sound crisp and echoing, very similar to a real bathroom’s acoustics.
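Of the six environments, the telephone is the most "filter-like", so it translates most directly into code. The sketch below assumes Python with scipy; the 300 Hz–3 kHz passband is a common telephone-style choice I am using for illustration, not the exact band I set in Audition.

```python
# Sketch: telephone-style band-limiting -- cut the lows and highs, keep the region
# around 1 kHz. The 300 Hz-3 kHz band is an illustrative, conventional choice.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 44100
voice = np.random.randn(fs * 2) * 0.1                    # placeholder voice clip

sos = butter(4, [300, 3000], btype="bandpass", fs=fs, output="sos")
phone_voice = sosfilt(sos, voice)

# Keep it almost dry: blend a little of the unfiltered voice back in for body.
phone_mix = 0.9 * phone_voice + 0.1 * voice
```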
Reflection
Working in Solo mode made it easier to hear fine details and compare subtle differences between each track.
I realized that the mid-frequency range directly shapes how “alive” or “flat” a voice sounds, while reverb defines the environment around it.
This exercise strengthened my listening accuracy and helped me understand how sound design connects to space, emotion, and storytelling.
Week 4 — Adobe Audition: Spatial Characteristics & Automation in Sound Design
Core Software and Theme
This week’s session focused on the Adobe Audition software and explored the spatial characteristics of sound through automation editing.
The main objective was to make sounds feel more realistic by controlling volume and balance (panning) to simulate how sound moves in space and changes with distance.
Theoretical Knowledge
1. Sound Envelope (BTSR)
We learned about the four main stages of sound:
- B (Begin / Attack): The moment when the sound starts and quickly reaches its peak.
- T (Top / Sustain): The period where the sound remains at its maximum intensity.
- S (Soft / Decay): The sound gradually fades.
- R (Release): The sound finally disappears.
Understanding this BTSR process helps us create more natural and controlled sound transitions.
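Read as an amplitude envelope, BTSR is easy to prototype. The sketch below is my own interpretation in Python/numpy, with made-up stage lengths: a quick rise (B), a hold (T), a gradual softening (S), and a final fade (R) applied to a plain 440 Hz tone.

```python
# Sketch: BTSR read as an amplitude envelope -- Begin (attack), Top (sustain),
# Soft (decay), Release. Stage lengths here are made up for illustration.
import numpy as np

fs = 44100

def seg(n, start, end):
    """Linear ramp of n samples from start to end."""
    return np.linspace(start, end, n, endpoint=False)

env = np.concatenate([
    seg(int(0.05 * fs), 0.0, 1.0),   # B: quick rise to the peak
    seg(int(0.40 * fs), 1.0, 1.0),   # T: hold at maximum intensity
    seg(int(0.30 * fs), 1.0, 0.4),   # S: gradual softening
    seg(int(0.25 * fs), 0.4, 0.0),   # R: fade to silence
])

t = np.arange(len(env)) / fs
sound = 0.3 * np.sin(2 * np.pi * 440 * t) * env   # envelope applied to a 440 Hz tone
```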
2. Spatial Logic and Distance Relationship
Sound changes depending on the distance between the source and the listener:
- The louder and sharper the sound → the closer the source appears.
- The softer and more diffused the sound → the further away it feels.
By adjusting left–right balance (panning), we can simulate sound movement across space.
For example, in the “jet plane fly-over” demonstration (sketched in code below):
- The left channel starts louder while the right channel gradually increases.
- Overall volume decreases over time, creating the illusion of a plane flying past and moving away.
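Written out as automation data, the fly-over is just two ramps applied to a stereo pair. The following Python/numpy sketch is a rough approximation of the class demo, not the actual session file; the constant-power pan law is my own assumption about how to translate the Balance curve.

```python
# Sketch: "jet plane fly-over" -- pan sweeps left to right while overall volume
# falls, so the source seems to pass by and move away. Pan law is my assumption.
import numpy as np

fs = 44100
dur = 6.0
n = int(fs * dur)

jet = np.random.randn(n) * 0.2                  # placeholder for the jet recording

pan = np.linspace(-1.0, 1.0, n)                 # -1 = hard left, +1 = hard right
theta = (pan + 1) * np.pi / 4                   # constant-power pan angle, 0..pi/2
gain = np.linspace(1.0, 0.2, n)                 # overall level falls with distance

left  = jet * np.cos(theta) * gain
right = jet * np.sin(theta) * gain
stereo = np.stack([left, right], axis=1)        # 2-channel result
```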
3. Spatial Texture and Reverb
Besides volume and panning, reverb plays an important role in creating a sense of environment:
- Factory: long reverb time, metallic reflections.
- Laboratory: short reverb, crisp and clean reflections.
- Cave: deeper low frequencies, long echo tails.
By adjusting the reverb settings, we can let the audience “feel” the space — its size, material, and atmosphere.
Key Software Techniques
- Automation Curves: In the Multitrack view of Audition, both Volume and Balance (Panning) can be automated with keyframes:
  - Moving the Volume line up or down controls loudness.
  - Shifting the Balance line left or right controls stereo position.
  These automation envelopes allow us to simulate objects moving through space.
- Examples from Class:
  - Jet Plane Fly-Over: Volume gradually decreases while panning moves from left to right.
  - Character Walking: Balance moves between –50 (left) and +50 (right) to show motion, with small volume fluctuations.
  - Cave Entrance: Volume slowly decreases while reverb increases; when leaving, the effect is reversed.
- Sound Selection and Narrative Logic: The instructor emphasized that effective sound design focuses on clear storytelling rather than layering too many effects. Examples:
  - Mechanical scenes: factory noises, machine movements.
  - Sci-fi scenes: energy charging, laser pulses.
  - Natural scenes: wind, water droplets, footsteps, or echoes.
Recommended resources included the BBC Sound Effects Library and several online sampling sites, reminding us to choose sounds that match the scene’s atmosphere.
Class Exercise 3
Task 1 — Simulating a Character Moving from Left to Right
Goal: Use automation to represent the changing position of a character.
I first imported the footstep sound into the multitrack and adjusted two automation lines:
- The blue Balance line moved smoothly from the left channel (–100) to the right channel (+100), making the footsteps sound as if they passed from left to right across my ears.
- The yellow Volume line started slightly lower (–6 dB), reached its highest point at the center (0 dB), then dropped again (–6 dB), simulating the feeling of someone approaching and then walking away.
After that, I briefly opened the Parametric Equalizer (EQ) for basic tone correction:
- Enabled a high-pass filter at 80 Hz to remove low-frequency noise,
- Slightly boosted 200 Hz–1 kHz to enhance the impact of each step,
- Cut about –2 dB above 5 kHz to reduce harsh scraping sounds.
After these adjustments, the footsteps sounded clean, layered, and naturally moving, as if someone were really walking from my left side to my right.
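The same correction chain can be expressed as three filters in series. This is only a sketch, assuming Python with scipy; the Q values, the 450 Hz centre for the broad mid lift, and the use of a wide peaking cut in place of a true high-frequency shelf are my own approximations of what I dialled in.

```python
# Sketch: the Task 1 clean-up chain -- high-pass at 80 Hz, broad mid boost around
# 200 Hz-1 kHz, gentle cut above 5 kHz. Q values and the wide peaking cut standing
# in for a shelf are approximations, not the exact Audition settings.
import numpy as np
from scipy.signal import butter, sosfilt, lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """RBJ-style peaking band: (b, a) for a boost/cut of gain_db at f0 Hz."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
steps = np.random.randn(fs * 3) * 0.1                 # placeholder footstep clip

sos = butter(2, 80, btype="highpass", fs=fs, output="sos")
x = sosfilt(sos, steps)                               # remove low-frequency rumble

b, a = peaking_eq(fs, 450, +3.0, q=0.7)               # broad lift across 200 Hz-1 kHz
x = lfilter(b, a, x)

b, a = peaking_eq(fs, 8000, -2.0, q=0.5)              # gentle cut in the harsh highs
x = lfilter(b, a, x)
```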
Task 2 — Entering and Exiting a Cave
Goal: Show how distance and environment affect sound clarity and loudness.
Steps:
In this part of my project, I first added the previously created left-to-right footstep audio as the main track to represent a person walking. To build a more immersive valley atmosphere, I layered in a dripping-water sound and some bird chirping. The bird sounds were placed both before entering the valley and after leaving it, acting as natural transitions between environments.
The dripping-water layer was also given reverb and a bit of low-frequency enhancement so it would echo gently across the space, like water droplets resonating off stone walls. Altogether, the combined sounds clearly convey a spatial transition—from open ground into a deep valley and then back out—where each sound element supports the change in depth and atmosphere.
Exercise 4 — Scene-Based Sound Design
Sound Libraries: https://sound-effects.bbcrewind.co.uk/
My goal was to use Reverb, Balance, and Volume automation to make each sound “breathe” and move within the space, creating a layered auditory world that feels both technological and organic.
First, I added a moderate reverb to the main laboratory background sound, setting the decay time to around 2 seconds, early reflections to 50%, and the dry/wet mix to 70:30. This made the sound echo naturally, as if resonating inside a glass chamber.
(Screenshots of the sound layers used: main laboratory body, electronic, reed, atmosphere, suburbia, footsteps.)
Rather than simply layering effects, I treated the entire soundscape as a living narrative—a story told through alarms, radio broadcasts, and the hum of machines inside a cold futuristic lab.
The composition begins with a rising alarm, a deep tone echoing through metallic walls.
I gradually reduced its volume from –4 dB to –20 dB, allowing it to fade into the distance while the low-frequency hum (from bbc_atomic-pow and bbc_power-stat) thickened the atmosphere.
With a reverb decay of 2.4 seconds, the sound filled the space like breath inside a mechanical organism—calm yet uneasy.
As the alarm fades, a radio voice emerges—calm, distant, and strangely authoritarian.
I used automation to fade the broadcast in while the alarm faded out, shifting focus naturally.
Through parametric EQ, I removed low frequencies, slightly boosted 3–6 kHz for clarity, and softened the highs to make it feel like a signal filtered through metal walls.
A Full Reverb effect (room size ≈ 2400 m³, early reflections ≈ 70%) added a sense of depth and isolation, as if the voice were trapped inside the system.
To emphasize instability, I intentionally made the broadcast glitch.
Using volume automation, I inserted short pauses (around 0.3 s) and layered faint electrical noise during each stop.
The result is a stuttering machine voice—still trying to speak, yet disrupted by its own malfunction.
It creates the illusion that the AI or communication system is struggling to maintain order amid chaos.
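Conceptually, the glitching is just gating plus a little static. The sketch below shows the same idea procedurally in Python/numpy; the gap positions are arbitrary examples, whereas in the actual project I placed the pauses by hand with volume keyframes.

```python
# Sketch: "broadcast glitch" -- mute short ~0.3 s windows of the voice and fill
# them with faint electrical noise. Gap positions are arbitrary examples; in the
# project they were placed manually with volume keyframes.
import numpy as np

fs = 44100
broadcast = np.random.randn(fs * 6) * 0.1       # placeholder for the radio voice

glitched = broadcast.copy()
gap_len = int(0.3 * fs)
for start_s in (1.2, 2.7, 4.1):                 # arbitrary glitch times (seconds)
    i = int(start_s * fs)
    glitched[i:i + gap_len] = 0.0                                # hard pause
    glitched[i:i + gap_len] += np.random.randn(gap_len) * 0.02   # faint static
```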
(Screenshot: bottom air-sound layer.)
When the broadcast collapses, the machines reclaim the space.
I boosted low frequencies around 100 Hz to strengthen the pressure and reduced highs by about –5 dB to soften the harshness.
The lingering reverb simulates the echo of energy, a sonic residue after the core system shuts down.
The lab falls silent—but the air still vibrates, filled with mechanical breath.
(Screenshots: reverberation effect and noise reduction settings.)
Reflection on Learning Sound Design
At the beginning of this course, I had very little understanding of what sound design truly involved. My initial assumption was that it was a technical process — editing, trimming, and managing audio clarity. However, through continuous practice and experimentation, I gradually discovered that sound design is both a scientific and perceptual discipline, one that merges precision with emotion.
Understanding Sound as Spatial Experience
One of the first concepts I learned was that sound is not confined to a flat timeline; it exists in space.
Through using tools like Balance and Volume Automation, I learned how to simulate direction and distance — for example, adjusting sound from the left to the right channel to create movement, or varying loudness to represent proximity. This process revealed how sound can convey spatial depth and motion without visual aid.
Learning the Technical Foundations
The technical aspect of the course centered on understanding frequency, amplitude, and spatial reflection.
Using the Parametric Equalizer, I experimented with filtering frequencies to control tone and clarity:
- Reducing low frequencies (<150 Hz) to eliminate rumble,
- Enhancing mid-range (1–3 kHz) for presence, and
- Adjusting higher bands (8–10 kHz) to add air and brightness.
With Reverb, I learned to simulate the acoustic properties of different environments.
By modifying decay time, early reflections, and dry/wet ratios, I could create the illusion of open or enclosed spaces — such as hallways, valleys, or metallic laboratories.
These adjustments transformed simple recordings into soundscapes with measurable spatial logic.
Integrating Automation and Expression
Beyond static editing, I began to use automation curves to control dynamics over time.
By adding keyframes to volume and balance lines, I could make a sound evolve — move closer, drift away, or fade naturally. This taught me how temporal changes shape perception, and how small transitions can define the emotional rhythm of an auditory scene.
From Technical Control to Conceptual Awareness
As I practiced, my understanding of sound moved from mechanical control to conceptual awareness.
I realized that technical operations like EQ and reverb are not just for correction; they are tools for storytelling. For instance, a high decay in reverb can imply distance, while a compressed dynamic range can create tension.
Through this, I learned to make design decisions not only based on aesthetics, but on acoustic logic and psychological impact.









