CONFIDENTIAL
© 2026 Fold Artists Research · All Rights Reserved

LUMINA Technical Paper

Gradient-Based Influence Attribution for AI Music Generators

Research Paper · Version 7.1 · February 2026

1. Introduction

When an AI music generator produces audio, rightsholders need answers to three critical questions:

  1. Which training songs influenced the output?
  2. How much did each song contribute?
  3. How confident are we in these attributions?
💡 Core Insight

A model's gradients encode which parameters would change to better fit a sample. By comparing gradient signatures, we can identify which training songs share "influence DNA" with a generated output.

Attribution Pipeline

From raw signal to fair influence share — how we find who really taught the model what it used.

  • §2.3 Signature Matching: cos(g₁, g₂)
  • §2.4 Shared Credit: (KKᵀ + λI)⁻¹
  • §5.1 Unusualness: z = (s − μ)/σ
  • §3 Trust Gate: erf(z/√2) ≥ 95%
  • §5.2 Influence Potency: tanh(k·STS)
  • §5.4 Fair Share: wᵢ / Σwⱼ

2. Mathematical Foundations

Cross-Entropy Teacher Forcing

LUMINA uses teacher forcing with cross-entropy loss to extract gradient signatures. Given audio codes from EnCodec:

Loss Function L = CrossEntropy(logits, codes) = −Σₜ log P(codeₜ | code₁, …, codeₜ₋₁)

Chunked Processing

Audio is processed in 10-second chunks with gradients averaged across chunks:

Gradient Averaging g = (1/N) Σᵢ ∇θ L(chunkᵢ)
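As a sketch of the averaging step only (the model, the teacher-forcing backward pass, and EnCodec tokenization are omitted; the random vectors below stand in for the per-chunk gradients ∇θ L(chunkᵢ)):

```python
import numpy as np

def average_chunk_gradients(chunk_grads):
    """g = (1/N) * sum_i grad_i: one signature vector per audio file."""
    return np.mean(np.stack(chunk_grads), axis=0)

# Stand-ins for per-chunk gradients; in LUMINA these would come from
# back-propagating the cross-entropy loss through the model per 10 s chunk.
rng = np.random.default_rng(0)
chunk_grads = [rng.standard_normal(512) for _ in range(5)]  # 5 chunks
g = average_chunk_gradients(chunk_grads)  # the file's gradient signature
```

Averaging keeps the signature size fixed regardless of audio length, so songs of different durations remain comparable.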

Attribution via Cosine Similarity

Cosine Similarity Score score = (g_output · g_song) / (‖g_output‖ ‖g_song‖)
🎓 Signature Matching

Like spotting which teacher taught the exact method a student used on the test. Each training song leaves a unique gradient fingerprint — a record of how it shaped the model's weights. Cosine similarity measures how closely aligned two fingerprints are, revealing causal influence.
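A minimal numpy sketch of the score (function and variable names are illustrative, not LUMINA's internal API):

```python
import numpy as np

def cosine_score(g_output, g_song):
    """Cosine similarity between two gradient fingerprints, in [-1, 1]."""
    return float(g_output @ g_song /
                 (np.linalg.norm(g_output) * np.linalg.norm(g_song)))
```

Because the score is normalized by both magnitudes, a song's fingerprint matters only in direction: a perfectly aligned fingerprint scores 1.0 and an unrelated (orthogonal) one scores 0.0.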

Kernel Regression (SpinTrak-Aligned)

To account for correlations between training songs, we use kernel regression:

Kernel Regression Formula scores = (KKᵀ + λI)⁻¹ K g_output

Where K is the (N×D) training fingerprint matrix and λ=0.01 is the regularization parameter.

🎓 Shared Credit

If two teachers taught the same lesson, they share the credit rather than both getting full marks. Kernel regression decorrelates overlapping training samples — when two songs taught similar patterns, the regularized inverse (KKᵀ + λI)⁻¹ attributes proportionally rather than double-counting.
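The shared-credit effect shows up in a small numpy sketch (the fingerprint values are illustrative; λ = 0.01 as above):

```python
import numpy as np

def kernel_scores(K, g_output, lam=0.01):
    """scores = (K Kᵀ + λI)⁻¹ K g_output, with K of shape (N, D)."""
    n = K.shape[0]
    gram = K @ K.T + lam * np.eye(n)
    return np.linalg.solve(gram, K @ g_output)

# Two songs with identical fingerprints, both matching the output exactly:
v = np.array([1.0, 0.0, 0.0])
K = np.stack([v, v])
scores = kernel_scores(K, v)
# Raw cosine would give each song a full score of 1.0; the regularized
# inverse splits the credit, giving each roughly 0.5.
```

Solving the regularized system instead of forming the inverse explicitly is the standard numerically stable choice.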

3. Statistical Confidence

In high-dimensional space (d = 512), cosine similarities between random vectors scatter around zero with standard deviation σ ≈ 1/√d ≈ 4.4%, the noise floor. Attribution requires signals significantly above this noise.

Confidence Formula confidence(s) = erf(z / √2), where z = s / σ

Songs must achieve ≥ 95% confidence (≈ 1.96σ under the error-function formula above) to qualify for attribution.

🎓 Trust Gate

Only the top performers make the finals. In 512-dimensional space, random noise produces a baseline similarity of ~4.4%. The confidence gate (≥ 95% via the error function) ensures we only attribute influence to songs whose signal is statistically significant — not random coincidence.
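The gate uses only values stated above (σ = 1/√512, 95% threshold); a sketch:

```python
import math

SIGMA = 1 / math.sqrt(512)  # noise floor, about 4.4% for random 512-d vectors

def confidence(score, sigma=SIGMA):
    """confidence(s) = erf(z / sqrt(2)) with z = s / sigma."""
    return math.erf((score / sigma) / math.sqrt(2))

def qualifies(score, threshold=0.95):
    """True only if the score is statistically distinguishable from noise."""
    return confidence(score) >= threshold
```

For example, a cosine score of 0.10 (z ≈ 2.3) passes the gate, while 0.05 (z ≈ 1.1) does not.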

4. Dual-Channel Attribution

LUMINA separates influence into two distinct rights channels:

Attribution Flow: Gradient Extraction → Publishing (Composition) / Master (Production)

  Channel          Source                       Captures
  Composition (P)  Self-Attention (self_attn)   Melody, Chord Progression, Structure
  Production (M)   Output Linears (lm.linears)  Timbre, Texture, Sound Design
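One way to realize the split is to partition gradients by parameter name before flattening into a signature. The substring filters below mirror the table's self_attn / lm.linears sources; the parameter names are hypothetical and the real model's module layout may differ:

```python
def split_channels(named_grads):
    """Partition a {param_name: grad} dict into the two rights channels."""
    composition = {n: g for n, g in named_grads.items() if "self_attn" in n}
    production = {n: g for n, g in named_grads.items() if "lm.linears" in n}
    return composition, production

# Hypothetical parameter names, for illustration only:
grads = {
    "transformer.layers.0.self_attn.q_proj.weight": [0.1],
    "transformer.layers.0.mlp.fc1.weight": [0.2],
    "lm.linears.0.weight": [0.3],
}
composition, production = split_channels(grads)
```

Parameters matching neither filter (like the MLP weight above) fall outside both channels under this sketch.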

5. Share Allocation

Royalty splits are proportional to Standardized TracIn Score (STS) and LUMINA Influence Potency (LIP).

STS: Standardized TracIn Score

Raw cosine similarities are z-score normalized:

Z-Score Normalization STS = (score - μ) / σ
🎓 Unusualness

Like seeing who scored far above the class average. Z-score standardization measures how many standard deviations each song's influence sits above the population mean. A z-score of 2.0 means a song's contribution was 2σ above average — statistically remarkable, not just noise.
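The standardization is computed over the population of per-song scores; a one-function sketch:

```python
import numpy as np

def standardized_tracin(scores):
    """STS = (score - mean) / std, taken over all evaluated songs."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.mean()) / scores.std()

# Illustrative raw cosine scores for four songs; the last clearly stands out.
sts = standardized_tracin([0.02, 0.03, 0.05, 0.30])
```

After standardization the population has mean 0 and standard deviation 1, so STS values from different generation runs are directly comparable.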

LIP: Influence Potency via Tanh

STS is mapped to a bounded percentage using hyperbolic tangent:

LIP Score LIP = tanh(k × STS) where k = 0.5
LIP Percentage LIP% = (LIP + 1) / 2 × 100

This maps STS values in (−∞, +∞) to LIP percentages in (0%, 100%), with 50% as the baseline for an average song.

🎓 Influence Potency

Bonus points for rare, high-level performance. The hyperbolic tangent maps unbounded z-scores to a 0–100% scale where differences between truly exceptional contributors are preserved, while diminishing returns prevent any single song from claiming disproportionate influence.
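The two formulas above in code, with k = 0.5 as stated:

```python
import math

def lip_percent(sts, k=0.5):
    """LIP = tanh(k * STS), rescaled from (-1, 1) to a (0, 100)% range."""
    lip = math.tanh(k * sts)
    return (lip + 1) / 2 * 100
```

An average song (STS = 0) sits exactly at the 50% baseline, STS = 2 maps to roughly 88%, and even extreme outliers stay strictly below 100%.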

Gated Weights

Only signals above 1σ contribute to share allocation:

Gated Weight wᵢ = max(0, STSᵢ − 1)²
Share Formula shareᵢ = wᵢ / Σⱼ wⱼ for all qualified songs
🎓 Fair Share

Like cutting a pizza into fair slices based on effort. Each qualified song's gated weight (the squared excess above 1σ) determines its slice size. The entire 100% is distributed proportionally — a song with 4× the weight gets 4× the royalty share.

🍕 Worked Example: The Pizza Story

Consider 97 training songs evaluated against a generated output:

  Song                Z-Score   Gated Weight             Share
  Song A              3.2σ      max(0, 3.2−1)² = 4.84    71.4%  🍕🍕🍕🍕
  Song B              2.1σ      max(0, 2.1−1)² = 1.21    17.8%  🍕🍕
  Song C              1.8σ      max(0, 1.8−1)² = 0.64     9.4%  🍕
  Song D              1.3σ      max(0, 1.3−1)² = 0.09     1.3%  🤏
  Remaining 93 songs  < 1.0σ    0                         0%

Small contributions are common; exceptional ones are rare, and those are the ones that earn a share.
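The table's percentages can be reproduced directly from the two formulas in this section (a self-contained check, not LUMINA's production code):

```python
def gated_weight(sts):
    """w_i = max(0, STS_i - 1)^2; only signals above 1 sigma contribute."""
    return max(0.0, sts - 1.0) ** 2

# Z-scores of the four qualifying songs from the worked example:
z_scores = {"Song A": 3.2, "Song B": 2.1, "Song C": 1.8, "Song D": 1.3}
weights = {song: gated_weight(z) for song, z in z_scores.items()}
total = sum(weights.values())
shares = {song: 100 * w / total for song, w in weights.items()}
# Song A takes the bulk of the pool (about 71.4%), Song D about 1.3%.
```

Squaring the excess above 1σ is what makes the allocation steep: Song A's z-score is only about 2.5× Song D's, but its share is over 50× larger.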

6. Validation

LUMINA has been validated against 10,000 generation cycles.

  • Reproducibility: < 0.1% variance in signatures.
  • Baseline Confidence: > 68% at 1σ qualification gate.
  • Causal Link: 94% accuracy in identifying ground-truth prompts.

7. Version History

v7.1 February 13, 2026
Added intuitive pipeline overview with teacher-student analogies and worked examples
v7.0 January 16, 2026
Added SpinTrak alignment, kernel regression, dual-channel refinements
v6.0 December 20, 2025
Introduced LIP scoring with tanh normalization, gated weights
v5.0 November 15, 2025
Dual-channel separation (Publishing vs Master), confidence thresholds
v4.0 October 8, 2025
10-second chunked processing, gradient averaging
v3.0 September 1, 2025
Initial cross-entropy teacher forcing implementation

Intellectual Property Notice

This document contains proprietary and confidential information belonging to Fold Artists Research. The methods, algorithms, and technical implementations described herein are protected intellectual property. Unauthorized reproduction, distribution, or disclosure of this document or its contents is strictly prohibited and may violate applicable trade secret and intellectual property laws.