Vendetta Visual Stream
Purpose
This page defines the Vendetta Visual Stream Schema (v1) for developers consuming realtime engine output over WebSocket.
This is the contract between vendetta-engine and downstream visual systems.
Transport and Time Model
- Transport format: line events (one JSON event per line)
- Event types: stream.meta, stream.frame, stream.transition
- Time model unit: milliseconds plus beat/bar indices
  - t_ms is wall-clock milliseconds elapsed since playback started
  - beat and bar are musical indices from the engine tick clock
- Live control path is WebSocket-only (no REST calibration endpoint yet)
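The line-events transport can be consumed by splitting each payload on newlines and JSON-parsing every non-empty line. A minimal sketch, assuming one JSON object per line; the parseEventLines helper and the sample payload are illustrative, not part of the schema:

```typescript
// Split a raw payload into newline-delimited events and parse each line.
type EventType = "stream.meta" | "stream.frame" | "stream.transition";

interface StreamEvent {
  type: EventType;
  t_ms?: number;
  [key: string]: unknown;
}

function parseEventLines(payload: string): StreamEvent[] {
  return payload
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)          // tolerate trailing newlines
    .map((line) => JSON.parse(line) as StreamEvent);
}
```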
Line-Spec Compatibility
If your parser consumes tokenized line-spec output, the extended format now supports up to 14 tokens (previously 10). The 4 additional blend tokens are optional for backward compatibility:
- source_blend
- source_blend_reliable
- blend_threshold_scale
- blend_mode
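A backward-compatible tokenizer can branch on token count: 10 tokens means the legacy format (no blend data), 14 tokens means the extended format. A sketch, assuming the four blend tokens appear in the order listed above at positions 10-13; the helper name and string encodings are assumptions:

```typescript
// Parse the optional trailing blend tokens of an extended (14-token) line-spec.
interface BlendTokens {
  source_blend: number;
  source_blend_reliable: boolean;
  blend_threshold_scale: number;
  blend_mode: string;
}

function parseBlendTokens(tokens: string[]): BlendTokens | null {
  if (tokens.length === 10) return null; // legacy format: no blend data present
  if (tokens.length !== 14) throw new Error(`unexpected token count: ${tokens.length}`);
  const [source_blend, source_blend_reliable, blend_threshold_scale, blend_mode] = tokens.slice(10);
  return {
    source_blend: Number(source_blend),
    source_blend_reliable: source_blend_reliable === "true",
    blend_threshold_scale: Number(blend_threshold_scale),
    blend_mode,
  };
}
```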
Enums
story_phase
Entering | Sustaining | Transitioning
variance_direction
Rising | Stable | Falling
sustain_state
dry | mid | wet | reverse
trigger
PresenceOpened
PresenceClosed
VarianceChanged
WeightBandChanged
SustainChanged
SustainReversed
MultiChange
SustainReversed is an optional specialization of SustainChanged for consumers that need dedicated reverse-swell cues.
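Because SustainReversed is a specialization, consumers without a dedicated reverse-swell cue can fold it back into their SustainChanged handling. A sketch; only the trigger enum comes from the schema, the normalization helper is illustrative:

```typescript
// Fold SustainReversed back into SustainChanged for consumers that lack a
// dedicated reverse-swell cue.
type Trigger =
  | "PresenceOpened"
  | "PresenceClosed"
  | "VarianceChanged"
  | "WeightBandChanged"
  | "SustainChanged"
  | "SustainReversed"
  | "MultiChange";

function normalizeTrigger(trigger: Trigger, hasReverseCue: boolean): Trigger {
  if (trigger === "SustainReversed" && !hasReverseCue) return "SustainChanged";
  return trigger;
}
```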
story_label (deterministic, explicit)
absence
escalation_heavy_dry
escalation_heavy_mid
escalation_heavy_wet
escalation_heavy_reverse
groove_heavy_dry
groove_heavy_mid
groove_heavy_wet
groove_heavy_reverse
wind_down_heavy_dry
wind_down_heavy_mid
wind_down_heavy_wet
wind_down_heavy_reverse
build_medium_dry
build_medium_mid
build_medium_wet
build_medium_reverse
drive_medium_dry
drive_medium_mid
drive_medium_wet
drive_medium_reverse
cool_medium_dry
cool_medium_mid
cool_medium_wet
cool_medium_reverse
rise_light_dry
rise_light_mid
rise_light_wet
rise_light_reverse
minimal_light_dry
minimal_light_mid
minimal_light_wet
minimal_light_reverse
fade_light_dry
fade_light_mid
fade_light_wet
fade_light_reverse
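Since the vocabulary is deterministic, a story_label can be split into its family, weight band, and sustain components; absence sits outside the matrix and is handled separately (see the Stability Contract below). A sketch, assuming every non-absence label ends in band and sustain suffixes as listed above:

```typescript
// Decompose a deterministic story_label into family / band / sustain parts.
interface StoryParts {
  family: string;
  band: "heavy" | "medium" | "light" | null;
  sustain: "dry" | "mid" | "wet" | "reverse" | null;
}

function parseStoryLabel(label: string): StoryParts {
  if (label === "absence") return { family: "absence", band: null, sustain: null };
  const parts = label.split("_");
  const sustain = parts.pop() as StoryParts["sustain"]; // last token is sustain
  const band = parts.pop() as StoryParts["band"];       // next-to-last is the band
  return { family: parts.join("_"), band, sustain };    // the rest is the family
}
```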
Event Schemas
stream.meta
playback_started required fields
- audio: string (path)
- bpm: number
- source: string (spec | reactive_audio)
- beatmap: string
- bpmmap: string
- bpm_hint: string (number-formatted or auto)
When source=spec, the playback input should follow the Song Spec Sheet format.
playback_finished required fields
- None
Calibration meta events
In addition to playback lifecycle events, stream.meta includes:
- calibration_state
- calibration_updated
- calibration_reset
- calibration_error
All calibration meta responses include envelope fields:
- version
- t_ms
calibration_state includes timestamped history entries:
- at_unix_ms
- source
- values
WebSocket Calibration Control Messages (Inbound)
Supported controls:
- calibration.get
- calibration.set
- calibration.reset
- calibration.axes.get
- calibration.axes.set (supports reverse_detection_override: true | false | null)
- calibration.reverse_detection.set (enabled: true | false | null)
- calibration.templates.get
- calibration.template.apply (name)
- calibration.template.save (name, optional label)
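Inbound controls are plain JSON messages sent over the same WebSocket. A sketch; the envelope key carrying the control name ("type" here) and the template name in the usage comment are assumptions, only the control names and parameters come from the list above:

```typescript
// Serialize an inbound calibration control message.
interface ControlMessage {
  type: string;
  [key: string]: unknown;
}

function buildControl(type: string, params: Record<string, unknown> = {}): string {
  const msg: ControlMessage = { type, ...params };
  return JSON.stringify(msg);
}

// Usage against a connected socket (ws assumed to be a WebSocket; the
// template name is a hypothetical example):
// ws.send(buildControl("calibration.reverse_detection.set", { enabled: true }));
// ws.send(buildControl("calibration.template.apply", { name: "my_template" }));
```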
Compatibility notes:
- Existing calibration.get / calibration.set / calibration.reset responses now include richer calibration metadata (axes/custom/reverse/blend/template fields)
- calibration.set remains patch-oriented for provided values
stream.frame required fields
- t_ms: integer >= 0
- beat: integer >= 0
- bar: integer >= 0
- story: story_label
- phase: story_phase
- duration: integer >= 0 (beats in current story)
- weight: number [0..1]
- presence: number [0..1]
- sustain: sustain_state
- var_dir: variance_direction
- var_mag: number [0..1]
- source_blend: number [0..1]
- source_blend_reliable: boolean
- calibration: object
- effective_thresholds: object
Semantics:
- Primary continuous controls: weight, presence, var_mag
- Primary structural controls: story, phase
- sustain controls envelope timing mode: dry | mid | wet | reverse
- Use phase=Transitioning to pre-blend visual states
- source_blend is a confidence modifier for mixed-source moments
- In reactive_audio (audio-only) mode, blend may be estimated from competing transients and emitted with source_blend_reliable=false
- calibration includes runtime calibration state, including reverse detection and blend-threshold scaling fields
- effective_thresholds carries the currently applied thresholds after calibration/blend scaling
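When source_blend_reliable is false, consumers may prefer to ease toward the estimated blend rather than jump to it. A minimal sketch; the smoothing factor is an illustrative choice, not part of the schema:

```typescript
// Trust reliable blend values directly; smooth toward unreliable estimates.
function nextBlend(current: number, target: number, reliable: boolean): number {
  if (reliable) return target;       // engine-provided blend: use as-is
  const alpha = 0.2;                 // gentle exponential smoothing for estimates
  return current + (target - current) * alpha;
}
```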
Input Contract: FrameInput
```typescript
type FrameInput = {
  t_ms: number;
  beat: number;
  bar: number;
  story: string;
  phase: "Entering" | "Sustaining" | "Transitioning";
  duration: number;
  weight: number;
  presence: number;
  sustain: "dry" | "mid" | "wet" | "reverse";
  var_dir: "Rising" | "Stable" | "Falling";
  var_mag: number;
  source_blend: number;
  source_blend_reliable: boolean;
  calibration: {
    raw?: Record<string, number>;
    axes?: Record<string, number>;
    axis_custom?: Record<string, boolean>;
    reverse_detection?: boolean;
    reverse_detection_override?: boolean | null;
    blend_threshold_scale_factor?: number;
    active_template?: { name: string; label?: string } | null;
  };
  effective_thresholds: Record<string, number>;
};
```
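Untrusted socket input is worth guarding at runtime before it reaches a renderer. A minimal sketch that checks only the numeric and structural fields a renderer typically depends on; a full validator would also check enum membership and the nested calibration object:

```typescript
// Cheap runtime guard for frame-shaped objects arriving over the socket.
function isFrameLike(value: unknown): boolean {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.t_ms === "number" && v.t_ms >= 0 &&
    typeof v.story === "string" &&
    typeof v.weight === "number" && v.weight >= 0 && v.weight <= 1 &&
    typeof v.presence === "number" && v.presence >= 0 && v.presence <= 1 &&
    typeof v.source_blend_reliable === "boolean"
  );
}
```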
stream.transition required fields
- t_ms: integer >= 0
- beat: integer >= 0
- bar: integer >= 0
- trigger: trigger
- bridged: boolean
- from: Story object/string representation
- to: Story object/string representation
- requested: Story object/string representation
Semantics:
- Fire one-shot cues on this event
- If bridged=true, the engine inserted an intermediate legal transition
- The transition schema is unchanged by Source Blend
- Blend context should be read from preceding stream.frame events
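A one-shot cue dispatcher can use the bridged flag to pick a softer cue when the engine inserted an intermediate step. A sketch; the cue naming scheme is hypothetical, only trigger and bridged come from the schema:

```typescript
// Derive a one-shot cue from a stream.transition event.
interface TransitionEvent {
  trigger: string;
  bridged: boolean;
  from: string;
  to: string;
}

function cueFor(event: TransitionEvent): string {
  // A bridged transition passed through an intermediate legal state, so a
  // softer cue may read better than a hard cut.
  const style = event.bridged ? "soft" : "hard";
  return `${style}:${event.trigger}`;
}
```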
Source Blend Threshold Scaling
Dynamic threshold scaling is applied before transition confirmation:
effective = base * (1 + blend * threshold_scale_factor)
Rules:
- variance_confirm_bars scaling is capped at 2x
- sustain_confirm_bars scaling is capped at 2x
- weight_band_confirm_bars is not scaled
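The scaling rule and its caps can be sketched directly; only the formula, the capped keys, and the unscaled key come from the rules above, the helper itself is illustrative:

```typescript
// effective = base * (1 + blend * factor), with per-threshold cap rules.
const CAPPED = new Set(["variance_confirm_bars", "sustain_confirm_bars"]);
const UNSCALED = new Set(["weight_band_confirm_bars"]);

function effectiveThreshold(key: string, base: number, blend: number, factor: number): number {
  if (UNSCALED.has(key)) return base;               // never scaled
  const scaled = base * (1 + blend * factor);
  return CAPPED.has(key) ? Math.min(scaled, base * 2) : scaled; // cap at 2x base
}
```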
See Source Blend for behavioral details.
Visual Mapping Guidelines
Continuous
- weight -> scale/intensity/depth
- presence -> opacity/clarity/saturation
- var_mag -> motion energy/turbulence rate
Categorical
- story -> palette/layout/scene family
- phase -> enter/sustain/exit animation state machine
Transitions
- Use stream.transition as a hard cue boundary
- Use stream.frame continuity for smooth interpolation
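The continuous mappings above can be combined with per-frame interpolation so renders stay smooth between frames. A sketch; the output ranges and the lerp rate are illustrative choices, not part of the schema:

```typescript
// Map weight/presence/var_mag to visual parameters, easing toward targets.
interface VisualState {
  scale: number;
  opacity: number;
  motionRate: number;
}

function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

function stepVisuals(prev: VisualState, weight: number, presence: number, varMag: number, t = 0.3): VisualState {
  return {
    scale: lerp(prev.scale, 0.5 + weight, t),         // weight drives size/intensity
    opacity: lerp(prev.opacity, presence, t),         // presence drives clarity
    motionRate: lerp(prev.motionRate, varMag * 4, t), // var_mag drives motion energy
  };
}
```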
Stability Contract
- Story changes only on confirmed state changes (temporal thresholds)
- absence is outside the matrix and maps to dedicated visual behavior
- Do not assume every beat changes story
Minimal WebSocket Consumer Pattern
```typescript
type StreamEventType = "stream.meta" | "stream.frame" | "stream.transition";

interface BaseEvent {
  type: StreamEventType;
}

function handleLine(raw: string) {
  const event = JSON.parse(raw) as BaseEvent & Record<string, unknown>;
  switch (event.type) {
    case "stream.meta":
      // playback_started / playback_finished
      break;
    case "stream.frame":
      // continuous + structural frame data
      break;
    case "stream.transition":
      // one-shot transition cue
      break;
  }
}
```
Use stream.frame for smooth state and stream.transition for discrete cue triggers.