Performance Visual Mental Model — Wiki
A New Language for Music-Driven Visuals
What Is This?
This wiki defines a mental model for expressing live techno music as visual experiences. It is not a technical specification — it is a language: a shared way of thinking that engineers, visual artists, and collaborators can use to communicate the same ideas.
The core premise: traditional audio analysis exposes data. This model exposes meaning.
Traditional approach:
kick_detected = true
bass_energy = 0.73
bpm = 128.0
This model:
story.foundation.weight = heavy
story.variance = rising
story = "tension_building"
The difference is not technical. It is expressive.
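To make the contrast concrete, here is a minimal sketch, assuming Python, of the kind of translation this implies: raw measurements in, named meaning out. The function names and thresholds are illustrative assumptions, not part of the model.

```python
# Illustrative sketch only: the names and thresholds below are assumptions,
# not part of the model. It shows the kind of translation the model implies,
# from measured data to named meaning.

def describe_weight(bass_energy: float) -> str:
    """Turn a raw bass-energy reading into an expressive label."""
    if bass_energy > 0.7:
        return "heavy"
    if bass_energy > 0.4:
        return "mid"
    return "light"

def describe_direction(recent_energies: list[float]) -> str:
    """Name the direction of change across recent readings."""
    if len(recent_energies) < 2:
        return "stable"
    delta = recent_energies[-1] - recent_energies[0]
    if delta > 0.05:
        return "rising"
    if delta < -0.05:
        return "falling"
    return "stable"

print(describe_weight(0.73))                   # heavy
print(describe_direction([0.55, 0.62, 0.73]))  # rising
```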
Single Source of Truth
This model is built on one assumption: the kick is the only element of a live techno signal that can be reliably detected in real time. Everything in this model derives from what can be measured about the kick — its presence, its energy, its character, and what changes around it. There is no attempt to decompose the full mix into independent sources. The kick is the anchor. Everything else is relative to it.
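A minimal sketch of that assumption, in Python: every measurement either describes the kick directly or is expressed relative to it. The field names are illustrative, not defined by the model.

```python
# Sketch of the single-source-of-truth idea: every field either describes
# the kick itself or is expressed relative to the kick.
# Field names are illustrative assumptions, not part of the model.
from dataclasses import dataclass

@dataclass
class KickObservation:
    detected: bool              # was a kick present on this beat?
    energy: float               # how much energy the kick carried (0..1)
    decay_beats: float          # how long the kick rings out, in beats
    surrounding_energy: float   # energy of everything else, relative to the kick
```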
Concepts
The model is built from a small hierarchy of concepts. Each has its own page with full definitions, examples, diagrams, and edge cases.
Core Hierarchy
STORY ← the highest level: what is this moment of the performance?
│
├── FOUNDATION ← what stays the same (the kick)
│ ├── Weight ← how heavy the kick feels (continuous)
│ ├── Presence ← how clearly the kick cuts through (continuous + gate)
│ └── Sustain ← how the kick decays (categorical)
│
└── VARIANCE ← what differentiates one moment from another
├── Direction ← rising / stable / falling
├── Magnitude ← how fast things are shifting
└── Timescale ← micro / meso / macro Concept Pages
Concept Pages
| Page | What It Covers |
|---|---|
| Story | The highest-level concept. Named moments in a performance, how they are determined, and how they transition. |
| Foundation | The stable anchor of the performance — the kick. Why it is the single source of truth and what properties it exposes. |
| Weight | The primary expressive value. How heavy the kick feels, what makes a kick "heavy" vs "light," and how Weight drives visuals. |
| Presence | How clearly the kick is perceived. The continuous value for smooth visuals and the binary gate for story transitions. |
| Sustain | How the kick decays. Categorical states (dry/mid/wet) and how shifts in sustain signal story changes. |
| Variance | What makes two moments with the same kick feel different. Direction, magnitude, timescale, and indicators. |
| Temporal Thresholds | How the model filters noise from real state changes. Beat-based debouncing, immediate vs. confirmed values. |
| Variance Indicators | The audio signals that feed Variance. Engineering guidance on what to analyze relative to the kick. |
| Story Matrix | How Foundation and Variance combine to produce named stories. The base matrix, Sustain modifiers, and custom story definitions. A minimal sketch follows this table. |
| Transition Map | How stories connect to each other. Which transitions are valid, what triggers them, and example arcs for techno. |
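As a rough illustration of the Story Matrix idea referenced above, the sketch below combines a Foundation label and a Variance direction into a story name. Apart from "tension_building", the story names, keys, and matrix entries are assumptions for illustration only.

```python
# Illustrative sketch of a story matrix: Foundation x Variance -> story name.
# The axes and most story names here are assumptions, not the model's actual matrix.

STORY_MATRIX = {
    # (foundation weight label, variance direction): story name
    ("heavy", "rising"):  "tension_building",
    ("heavy", "stable"):  "full_power",
    ("heavy", "falling"): "release",
    ("light", "rising"):  "emergence",
    ("light", "stable"):  "holding_pattern",
    ("light", "falling"): "breakdown",
}

def name_story(weight_label: str, direction: str) -> str:
    """Look up the story for the current Foundation and Variance state."""
    return STORY_MATRIX.get((weight_label, direction), "undefined")

print(name_story("heavy", "rising"))  # tension_building
```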
Reference
| Page | What It Covers |
|---|---|
| For Visual Artists | What artists receive from this system, how to use it, example code, and what they do not need to know. |
| Principles | The core design principles of the model, condensed. |
| Glossary | Quick-reference definitions of every term in the model. |
Boundaries
This model does not describe: how to detect beats technically, audio analysis algorithms, frame rates or buffer sizes, specific frequency bins, or how to build the visual engine.
This model describes: what the music means at this moment, how that meaning changes over time, what triggers a change in meaning, and how to name and communicate those changes.
The technical layer — Pro DJ Link, audio analysis, FFT processing — feeds into this model. But this model exists above that layer. Engineers implement it. Artists consume it.
This wiki is a living document. As the model evolves through real performance testing, definitions will sharpen and new pages will be added.