Vendetta Visual Stream

โ† Back to Index ยท โ† For Visual Artists


Purpose

This page defines the Vendetta Visual Stream Schema (v1) for developers consuming realtime engine output over WebSocket.

This is the contract between vendetta-engine and downstream visual systems.


Transport and Time Model

  • Transport format: newline-delimited events (one JSON object per line)
  • Event types: stream.meta, stream.frame, stream.transition
  • Time model unit: milliseconds + beat/bar indices
  • t_ms is wall-clock elapsed since playback started
  • beat and bar are musical indices from the engine tick clock
  • Live control path is WebSocket-only (no REST calibration endpoint yet)

Line-Spec Compatibility

If your parser consumes tokenized line-spec output, note that the extended format now supports up to 14 tokens (previously 10). The four additional blend tokens are optional, preserving backward compatibility:

  • source_blend
  • source_blend_reliable
  • blend_threshold_scale
  • blend_mode
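A parser can accept both forms by treating positions 11-14 as optional. The sketch below assumes whitespace-split tokens arriving in the order listed above; the exact delimiter and the layout of the first ten tokens are defined by the line-spec format, not here:

```typescript
// Hypothetical sketch: accept both the legacy 10-token and the
// extended 14-token line-spec forms. Only the 4 optional blend
// tokens (positions 11-14, in schema order) are parsed here.
interface BlendTokens {
  source_blend?: number;
  source_blend_reliable?: boolean;
  blend_threshold_scale?: number;
  blend_mode?: string;
}

function parseBlendTokens(tokens: string[]): BlendTokens {
  if (tokens.length < 10 || tokens.length > 14) {
    throw new Error(`expected 10-14 tokens, got ${tokens.length}`);
  }
  const extra = tokens.slice(10); // optional blend tokens
  return {
    source_blend: extra[0] !== undefined ? Number(extra[0]) : undefined,
    source_blend_reliable: extra[1] !== undefined ? extra[1] === "true" : undefined,
    blend_threshold_scale: extra[2] !== undefined ? Number(extra[2]) : undefined,
    blend_mode: extra[3],
  };
}
```

A 10-token line yields an object with all four fields undefined, so legacy consumers see no behavioral change.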

Enums

story_phase

Entering | Sustaining | Transitioning

variance_direction

Rising | Stable | Falling

sustain_state

dry | mid | wet | reverse

trigger

PresenceOpened
PresenceClosed
VarianceChanged
WeightBandChanged
SustainChanged
SustainReversed
MultiChange

SustainReversed is an optional specialization of SustainChanged for consumers that need dedicated reverse-swell cues.

story_label (deterministic, explicit)

absence
escalation_heavy_dry
escalation_heavy_mid
escalation_heavy_wet
escalation_heavy_reverse
groove_heavy_dry
groove_heavy_mid
groove_heavy_wet
groove_heavy_reverse
wind_down_heavy_dry
wind_down_heavy_mid
wind_down_heavy_wet
wind_down_heavy_reverse
build_medium_dry
build_medium_mid
build_medium_wet
build_medium_reverse
drive_medium_dry
drive_medium_mid
drive_medium_wet
drive_medium_reverse
cool_medium_dry
cool_medium_mid
cool_medium_wet
cool_medium_reverse
rise_light_dry
rise_light_mid
rise_light_wet
rise_light_reverse
minimal_light_dry
minimal_light_mid
minimal_light_wet
minimal_light_reverse
fade_light_dry
fade_light_mid
fade_light_wet
fade_light_reverse
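The label set is deterministic: nine story families crossed with the four sustain suffixes, plus the standalone absence label (37 labels total). A consumer that needs the full set, for example to pre-register scene presets, can enumerate it rather than hard-code it:

```typescript
// Enumerate the deterministic story_label set: 9 families x 4
// sustain suffixes, plus the standalone "absence" label.
const FAMILIES = [
  "escalation_heavy", "groove_heavy", "wind_down_heavy",
  "build_medium", "drive_medium", "cool_medium",
  "rise_light", "minimal_light", "fade_light",
] as const;
const SUSTAINS = ["dry", "mid", "wet", "reverse"] as const;

const STORY_LABELS: string[] = [
  "absence",
  ...FAMILIES.flatMap((f) => SUSTAINS.map((s) => `${f}_${s}`)),
];
```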

Event Schemas

stream.meta

playback_started required fields

  • audio: string (path)
  • bpm: number
  • source: string (spec | reactive_audio)
  • beatmap: string
  • bpmmap: string
  • bpm_hint: string (number-formatted or auto)

When source=spec, the playback input should follow the Song Spec Sheet format.

playback_finished required fields

  • None

Calibration meta events

In addition to playback lifecycle events, stream.meta includes:

  • calibration_state
  • calibration_updated
  • calibration_reset
  • calibration_error

All calibration meta responses include envelope fields:

  • version
  • t_ms

calibration_state includes timestamped history entries:

  • at_unix_ms
  • source
  • values
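As a type sketch, a calibration_state payload combining the envelope and history fields above might be modeled as follows. The `history` container name and the value types are assumptions to be confirmed against the engine output:

```typescript
// Sketch of a calibration_state meta payload, built only from the
// envelope fields (version, t_ms) and history-entry fields
// (at_unix_ms, source, values) listed above. The "history" array
// name and the value types are assumptions.
interface CalibrationHistoryEntry {
  at_unix_ms: number;
  source: string;
  values: Record<string, number>;
}

interface CalibrationStateEvent {
  type: "stream.meta";
  version: string | number; // envelope field
  t_ms: number;             // envelope field
  history: CalibrationHistoryEntry[]; // assumed container name
}
```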

WebSocket Calibration Control Messages (Inbound)

Supported controls:

  • calibration.get
  • calibration.set
  • calibration.reset
  • calibration.axes.get
  • calibration.axes.set (supports reverse_detection_override: true | false | null)
  • calibration.reverse_detection.set (enabled: true | false | null)
  • calibration.templates.get
  • calibration.template.apply (name)
  • calibration.template.save (name, optional label)

Compatibility notes:

  • Existing calibration.get / calibration.set / calibration.reset now include richer calibration metadata (axes/custom/reverse/blend/template fields)
  • calibration.set remains patch-oriented for provided values
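Inbound controls can be sent as JSON messages over the same WebSocket. This sketch assumes a flat payload shape of `{ type, ...params }`; only the control names and parameters listed above come from this page, and the template name in the usage comment is hypothetical:

```typescript
// Minimal sketch for sending inbound calibration controls. The flat
// { type, ...params } payload shape is an assumption; control names
// and parameters are taken from the list above.
function sendControl(
  ws: { send(data: string): void },
  type: string,
  params: Record<string, unknown> = {},
): void {
  ws.send(JSON.stringify({ type, ...params }));
}

// Usage (template name is hypothetical):
// sendControl(ws, "calibration.reverse_detection.set", { enabled: null }); // clear override
// sendControl(ws, "calibration.template.apply", { name: "default" });
```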

stream.frame required fields

  • t_ms: integer >= 0
  • beat: integer >= 0
  • bar: integer >= 0
  • story: story_label
  • phase: story_phase
  • duration: integer >= 0 (beats in current story)
  • weight: number [0..1]
  • presence: number [0..1]
  • sustain: sustain_state
  • var_dir: variance_direction
  • var_mag: number [0..1]
  • source_blend: number [0..1]
  • source_blend_reliable: boolean
  • calibration: object
  • effective_thresholds: object

Semantics:

  • Primary continuous controls: weight, presence, var_mag
  • Primary structural controls: story, phase
  • sustain selects the envelope timing mode: dry | mid | wet | reverse
  • Use phase=Transitioning to pre-blend visual states
  • source_blend is a confidence modifier for mixed-source moments
  • In reactive_audio (audio-only) mode, blend may be estimated from competing transients and emitted with source_blend_reliable=false
  • calibration includes runtime calibration state, including reverse detection and blend-threshold scaling fields
  • effective_thresholds carries currently applied thresholds after calibration/blend scaling

Input Contract: FrameInput

type FrameInput = {
  t_ms: number;
  beat: number;
  bar: number;
  story: string;
  phase: "Entering" | "Sustaining" | "Transitioning";
  duration: number;
  weight: number;
  presence: number;
  sustain: "dry" | "mid" | "wet" | "reverse";
  var_dir: "Rising" | "Stable" | "Falling";
  var_mag: number;
  source_blend: number;
  source_blend_reliable: boolean;
  calibration: {
    raw?: Record<string, number>;
    axes?: Record<string, number>;
    axis_custom?: Record<string, boolean>;
    reverse_detection?: boolean;
    reverse_detection_override?: boolean | null;
    blend_threshold_scale_factor?: number;
    active_template?: { name: string; label?: string } | null;
  };
  effective_thresholds: Record<string, number>;
};
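Before casting incoming frames to FrameInput, a consumer may want a light runtime check. This sketch validates only the fields visuals typically branch on, not the full contract:

```typescript
// Light runtime guard over incoming frame payloads before casting to
// FrameInput. Intentionally partial: it checks the structural and
// continuous fields a visual consumer is most likely to depend on.
function isLikelyFrame(x: unknown): boolean {
  if (typeof x !== "object" || x === null) return false;
  const f = x as Record<string, unknown>;
  return (
    typeof f.t_ms === "number" &&
    typeof f.story === "string" &&
    ["Entering", "Sustaining", "Transitioning"].includes(f.phase as string) &&
    ["dry", "mid", "wet", "reverse"].includes(f.sustain as string) &&
    typeof f.weight === "number" && f.weight >= 0 && f.weight <= 1 &&
    typeof f.presence === "number" &&
    typeof f.source_blend_reliable === "boolean"
  );
}
```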

stream.transition required fields

  • t_ms: integer >= 0
  • beat: integer >= 0
  • bar: integer >= 0
  • trigger: trigger
  • bridged: boolean
  • from: Story object/string representation
  • to: Story object/string representation
  • requested: Story object/string representation

Semantics:

  • Fire one-shot cues on this event
  • If bridged=true, the engine inserted an intermediate legal transition
  • Transition schema is unchanged by Source Blend
  • Blend context should be read from preceding stream.frame events

Source Blend Threshold Scaling

Dynamic threshold scaling is applied before transition confirmation:

effective = base * (1 + blend * threshold_scale_factor)

Rules:

  • variance_confirm_bars scaling is capped at 2x
  • sustain_confirm_bars scaling is capped at 2x
  • weight_band_confirm_bars is not scaled

See Source Blend for behavioral details.


Visual Mapping Guidelines

Continuous

  • weight -> scale/intensity/depth
  • presence -> opacity/clarity/saturation
  • var_mag -> motion energy/turbulence rate
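These mappings can be a simple per-frame lerp. The output parameter names and ranges below are illustrative, not part of the schema:

```typescript
// Map the three continuous controls to hypothetical visual
// parameters. Lerp ranges are illustrative choices, not schema.
const lerp = (a: number, b: number, t: number): number => a + (b - a) * t;

function mapContinuous(frame: { weight: number; presence: number; var_mag: number }) {
  return {
    scale: lerp(0.8, 1.6, frame.weight),       // weight -> scale/intensity
    opacity: lerp(0.2, 1.0, frame.presence),   // presence -> opacity/clarity
    turbulence: lerp(0.0, 4.0, frame.var_mag), // var_mag -> motion energy
  };
}
```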

Categorical

  • story -> palette/layout/scene family
  • phase -> enter/sustain/exit animation state machine

Transitions

  • Use stream.transition as a hard cue boundary
  • Use stream.frame continuity for smooth interpolation

Stability Contract

  • Story changes only on confirmed state changes (temporal thresholds)
  • absence is outside the matrix and maps to dedicated visual behavior
  • Do not assume every beat changes story

Minimal WebSocket Consumer Pattern

type StreamEventType = "stream.meta" | "stream.frame" | "stream.transition";

interface BaseEvent {
  type: StreamEventType;
}

function handleLine(raw: string) {
  const event = JSON.parse(raw) as BaseEvent & Record<string, unknown>;
  switch (event.type) {
    case "stream.meta":
      // playback_started / playback_finished
      break;
    case "stream.frame":
      // continuous + structural frame data
      break;
    case "stream.transition":
      // one-shot transition cue
      break;
  }
}

Use stream.frame for smooth state and stream.transition for discrete cue triggers.
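To wire the handler to a socket, split each payload into individual line events first. The endpoint URL in the usage comment is hypothetical:

```typescript
// Split an incoming socket payload into individual line events,
// dropping blank lines, so each non-empty line can be handed to the
// event handler above.
function splitLines(data: string): string[] {
  return data
    .split("\n")
    .map((l) => l.trim())
    .filter((l) => l.length > 0);
}

// Hypothetical wiring (endpoint URL is an assumption):
// const ws = new WebSocket("ws://localhost:8080/stream");
// ws.onmessage = (msg) => splitLines(String(msg.data)).forEach(handleLine);
```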