πŸ”§ Technical Docs

Everything you need to evaluate, install, and run BonfyreFPQ and the Bonfyre Stack. Benchmarks, hardware fits with pricing, reproduction commands, and full architecture reference.

4Γ— Compression
0.997 SLI Cosine
15 HF Models
50 Binaries
~2.1 MB Total Disk
BonfyreFPQ β€” Functional Polar Quantization

Compress any model ~4Γ—. Run inference directly on compressed weights.

Pure C model compression engine. No GPU required. No Python dependencies. No training. The SLI bridge is live β€” native .fpq files run inference directly, no decompression step.

βœ“ Runtime gap closed β€” patch_model() and go. TinyLlama: 155 layers, cos=0.997, top-1=97.3%
0.9999 per-weight cosine
0.9976 output cosine (30 layers)
1,790 tensors compressed
54β†’27 GB Wan2.1-T2V-14B
15 models on Hugging Face

Three layers of correctness β€” all verified

Layer 1 β€” Weight Space
cos β‰ˆ 0.9999

Near-lossless per-tensor across all 307 encoded tensors in Wan2.1-T2V-1.3B. Worst tensor: 0.999590.

Layer 2 β€” Network Propagation
cos β‰ˆ 0.9976

Error stays controlled after 30 stacked transformer blocks. PSNR 35.97 dB. MSE 1.01e-3.

Layer 3 β€” System Behavior
stable Γ— all timesteps

Cosine holds 0.9976–0.9983 across the full diffusion schedule. No drift amplification.

Models compressed β€” real files, real numbers

| Model | Domain | Original | Compressed | Tensors | Avg Cos | Worst Cos | Avg bpw | HF Model |
|---|---|---|---|---|---|---|---|---|
| Wan2.1-T2V-14B | Video diffusion | 54 GB | 27 GB | 402 | 0.999882 | 0.999826 | 4.05 | Download β†’ |
| Phi-4 (14B) | Language model | 28 GB | 28 GB | 162 | 1.000614 | 1.000149 | 4.08 | Download β†’ |
| Whisper Large V3 | Speech recognition | 8.7 GB | 5.8 GB | 998 | 0.999916 | 0.999834 | 4.19 | Download β†’ |
| Whisper Large V3 Turbo | Speech recognition | 1.6 GB | 1.6 GB | 228 | 0.999929 | 0.999858 | 4.18 | Download β†’ |
| Wan2.1-T2V-1.3B | Video diffusion | 5.3 GB | 2.7 GB | 307 | 0.999874 | 0.999590 | β€” | local |
| SmolLM2-135M | Language model | 101 MB | 258 MB F16 | 211 | 0.999855 | 0.999589 | β€” | GGUF |
| Gemma 2B-it | Language model | β€” | β€” | sampled | 0.99995 | 0.99995 | β€” | local |
| Whisper base.en | Speech recognition | β€” | β€” | sampled | 0.999808 | 0.999763 | β€” | GGML |

Artifacts on Hugging Face: (1) compatibility safetensors (direct Transformers load), and (2) native .fpq v12 files (rANS entropy-coded, 3.5–5.2Γ— smaller, direct inference via SLI bridge).

Output-Level Proof β€” Wan2.1-T2V-1.3B DiT Forward Pass

Loaded the original Wan2.1 model and FPQ-compressed version into the same WanTransformer3DModel architecture. Fed identical synthetic inputs (seed=42, shape [1,16,1,60,104], BF16 on MPS). Compared full forward pass outputs.

0.99759
Cosine Similarity
35.97 dB
PSNR
1.01e-3
MSE
6.18s β†’ 6.40s
Inference Time
Per-channel: ch0 cos=0.9960, ch1 cos=0.9993, ch2 cos=0.9952, ch3 cos=0.9980
Max absolute error: 0.138 (on a Β±0.45 std output range)
Relative error: 6.97% β€” well within the visually safe zone for video generation
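Metrics like these take only a few lines to compute. Here is a generic NumPy sketch, not the repo's comparison script; PSNR is taken against the reference peak magnitude, which is one common convention:

```python
import numpy as np

def forward_pass_metrics(ref, test):
    """Flattened cosine similarity, MSE, and PSNR between two output tensors."""
    a, b = ref.ravel().astype(np.float64), test.ravel().astype(np.float64)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    mse = float(np.mean((a - b) ** 2))
    # PSNR relative to the reference's peak magnitude
    psnr = float(10.0 * np.log10(np.max(np.abs(a)) ** 2 / mse))
    return cos, mse, psnr

rng = np.random.default_rng(42)
ref = rng.standard_normal((4, 60, 104))
test = ref + 0.01 * rng.standard_normal(ref.shape)  # small perturbation
cos, mse, psnr = forward_pass_metrics(ref, test)
```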

Diffusion timestep sweep β€” Wan2.1-T2V-1.3B

| Timestep | Cosine | PSNR (dB) | MSE |
|---|---|---|---|
| t = 0 | 0.99831 | 34.82 | 1.32e-3 |
| t = 100 | 0.99792 | 35.49 | 1.13e-3 |
| t = 500 | 0.99759 | 35.97 | 1.01e-3 |
| t = 900 | 0.99782 | 36.02 | 1.00e-3 |
| t = 999 | 0.99804 | 35.71 | 1.07e-3 |

Identical inputs at each timestep, BF16 on MPS. Cosine range: 0.99759–0.99831. Zero drift.

Perplexity benchmark β€” Qwen 2.5 0.5B, WikiText-2

994-token slice, max-length 512, stride 256. All runs on same hardware, same data.

| Method | PPL | Ξ” baseline | Avg Cos | Worst Cos |
|---|---|---|---|---|
| Baseline (FP32) | 14.20 | β€” | 1.0000 | 1.0000 |
| BonfyreFPQ @3-bit | 14.48 | +1.97% | 0.999783 | 0.999588 |
| HQQ @3-bit (g64) | 32.38 | +128% | β€” | β€” |
| COORD @3-bit (v4) | 35.59 | +150% | 0.982761 | 0.982327 |

169 tensors. HQQ via standalone benchmark (group-size 64, axis 1, CPU). Proof pack.

Published 3-bit benchmarks β€” Llama-2-7B, WikiText-2

From the authors' own papers. Lower PPL = better. FP16 baseline: 5.12.

| Method | Bits | PPL | Ξ” FP16 | Source |
|---|---|---|---|---|
| FP16 (baseline) | 16 | 5.12 | β€” | β€” |
| AQLM | 3.04 | 5.46 | +6.6% | Egiazarian et al., ICML 2024 |
| SpQR | 2.98 | 6.20 | +21.1% | Dettmers et al., 2023 |
| AWQ | 3 | 6.24 | +21.9% | Lin et al., MLSys 2024 |
| GPTQ | 3.00 | 8.06 | +57.4% | Frantar et al., 2022 |
| HQQ | 3 | Not published on Llama-2 | β€” | Badri & Shaji, 2023 |

What's inside

Low-Rank SVD
Global structure extraction
E8 Lattice
Optimal 8D quantization
16D RVQ
Structured residual correction
Ghost Head
Rank-1 error correction
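The shape of that stack, global structure first and then quantize what remains, can be illustrated with a toy two-stage codec. Plain SVD plus uniform residual quantization stand in for the real E8/RVQ codebooks; this is a sketch, not the actual codec:

```python
import numpy as np

def toy_encode(W, rank=8, bits=3):
    """Stage 1: a low-rank SVD core captures global structure.
    Stage 2: the residual is uniformly quantized (a crude stand-in
    for the E8 lattice + 16D RVQ stages of the real codec)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    core = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    residual = W - core
    scale = np.abs(residual).max() / (2 ** (bits - 1) - 1)
    codes = np.round(residual / scale).astype(np.int8)
    return core, codes, scale

def toy_decode(core, codes, scale):
    return core + codes.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)
W_hat = toy_decode(*toy_encode(W))
cos = W.ravel() @ W_hat.ravel() / (np.linalg.norm(W) * np.linalg.norm(W_hat))
```

At 3 bits the toy lands well below the 0.999+ the structured codebooks reach; that gap is exactly what the E8 lattice and RVQ stages buy.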

GGUF format support (llama.cpp compatible)

Reads & dequantizes
F32, F16, Q4_0, Q5_0, Q8_0
Q4_K, Q5_K, Q6_K
Writes
GGUF v3 F16 β€” direct llama.cpp load
Preserves all metadata verbatim

Quick start

bonfyre-fpq quantize model.gguf compressed.gguf --bits 3
Input formats:
GGUF (llama.cpp, whisper.cpp)
Safetensors (HuggingFace)
GGML (legacy whisper)
Output formats:
GGUF F16 β†’ llama.cpp direct load
BF16 safetensors β†’ PyTorch/diffusers
Preserves all metadata + tokenizer
SLI Bridge β€” Direct Runtime Inference

Direct runtime inference from .fpq is now working. Load a compressed model and run it immediately β€” no conversion, no extra RAM, no hacks. The SLI bridge (Spectral Lattice Inference) is fully integrated.

Results (TinyLlama, 155 SLI layers):
β€’ 97.3% top-1 agreement vs original
β€’ 0.997 cosine similarity
β€’ 2/5 text matches (identical output)
(Full logs & proof pack)
Usage: patch_model(hf_model, fpq, resolver) β€” replaces nn.Linear layers with FPQLinear. No decode step, no weight copy.
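The mechanism can be sketched with a hypothetical per-row-scale layer (not the actual SLI kernel, which operates on the native .fpq encoding): keep integer codes plus one scale per output row, and apply the scale after the matmul, so a dense float copy of the weights never exists.

```python
import numpy as np

class ToyFPQLinear:
    """Hypothetical FPQLinear-style layer: weights live as int8 codes
    plus one scale per output row; inference runs on the codes and the
    scale is applied after the matmul (no decode step, no float copy)."""
    def __init__(self, W, bits=4):
        qmax = 2 ** (bits - 1) - 1
        self.scale = np.abs(W).max(axis=1) / qmax              # per-row scale
        self.codes = np.round(W / self.scale[:, None]).astype(np.int8)

    def __call__(self, x):
        # y = diag(scale) @ (codes @ x)
        return self.scale * (self.codes.astype(np.float32) @ x)

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 64)).astype(np.float32)
x = rng.standard_normal(64).astype(np.float32)
y_ref, y_fpq = W @ x, ToyFPQLinear(W)(x)
rel_err = np.linalg.norm(y_ref - y_fpq) / np.linalg.norm(y_ref)
```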
FPQ-X β€” Generalized Compression Algebra Β· All 6 Operators Live

Six operators. One compiler. Rate–distortion–execution optimized.

FPQ-X evolves BonfyreFPQ from a quantizer into a full compression algebra. All six operator families are implemented and validated in fpqx_ops.c + fpq_bridge.py.

βœ“ A β€” Additive βœ“ M β€” Multiplicative Row Scale βœ“ Ξ  β€” Predictive βœ“ D β€” Distilled βœ“ Ξ› β€” Adaptive Policy βœ“ H β€” NEON Packing
𝒯(x,c,h,t) = (B + R + P) βŠ™ S + Ξ (x,c,h,t) + Ξ”_seq(c,t)
A = Additive core Β· M = Multiplicative manifold Β· Ξ  = Predictive restoration Β· D = Sequence distillation

Six operator families β€” each derived from research published in 2026

A
Additive
Inherited from FPQ v10

Low-rank SVD + E8 lattice + 16D RVQ + QJL projection + Ghost correction. The proven foundation delivering 0.999+ cosine across 1,790 tensors.

M
Multiplicative
Low-rank scaling manifold

Learns S = I + ABᡀ via thin SVD of the ratio matrix Q = W/Ε΄ βˆ’ 1. Captures scaling distortion that additive methods miss. Auto-rollback if cosine doesn't improve.

Derived from: LoRDS, WaterSIC
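A minimal NumPy sketch of the idea (hypothetical helper, not the fpqx_ops.c implementation). The synthetic distortion below is exactly rank-1, so the correction recovers it almost perfectly; a real implementation would also guard near-zero entries of Ε΄:

```python
import numpy as np

def m_operator(W, W_hat, rank=1):
    """Fit S = 1 + A @ B.T to the elementwise ratio error
    Q = W / W_hat - 1, then correct multiplicatively: W_hat * S."""
    Q = W / W_hat - 1.0                       # elementwise scaling error
    U, s, Vt = np.linalg.svd(Q, full_matrices=False)
    S = 1.0 + (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return W_hat * S

rng = np.random.default_rng(2)
W_hat = rng.standard_normal((32, 32))         # "quantized" weights
# Synthetic rank-1 scaling distortion an additive model would miss
S_true = 1.0 + 0.05 * np.outer(rng.standard_normal(32), rng.standard_normal(32))
W = W_hat * S_true                            # true weights
W_fix = m_operator(W, W_hat, rank=1)
err_before = np.linalg.norm(W - W_hat)
err_after = np.linalg.norm(W - W_fix)
```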
Ξ 
Predictive
Context-conditioned restoration

Per-column linear predictor from the low-rank basis to the quantization residual. Uses already-available L factor to predict and cancel systematic error.

Derived from: EchoKV, MoBiQuant
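A sketch of the same idea with a least-squares stand-in for the per-column predictor (hypothetical helper, not the real operator): fit the residual from the already-available low-rank factor and subtract the predictable part.

```python
import numpy as np

def predictive_restore(L, R):
    """Pi-operator sketch: for each column of the residual R, fit a
    least-squares predictor from the low-rank factor L and subtract
    the predicted (systematic) part. What remains is unpredictable."""
    C, *_ = np.linalg.lstsq(L, R, rcond=None)   # solve L @ C ~= R
    return R - L @ C

rng = np.random.default_rng(2)
L = rng.standard_normal((64, 4))                # low-rank factor, already stored
systematic = L @ rng.standard_normal((4, 16))   # error correlated with L
noise = 0.01 * rng.standard_normal((64, 16))    # error the predictor cannot see
R = systematic + noise
R_clean = predictive_restore(L, R)              # systematic part cancelled
```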
D
Distilled
Sequence-axis compression

Attention-weighted K-means++ on KV cache vectors. Compresses along the sequence dimension β€” tokens that attend similarly share one cache atom. Orthogonal to weight quantization.

Derived from: KVSculpt, KV-CoRE
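A sketch with plain weighted Lloyd iterations standing in for attention-weighted k-means++ (hypothetical helper, not the real operator):

```python
import numpy as np

def distill_kv(vectors, attn, k=4, iters=15, seed=0):
    """D-operator sketch: weighted k-means over KV cache vectors.
    Tokens in the same cluster share one cache atom; attention weights
    pull centroids toward heavily-attended tokens. (Random init here;
    the real operator uses k-means++ seeding.)"""
    rng = np.random.default_rng(seed)
    atoms = vectors[rng.choice(len(vectors), k, replace=False)].copy()
    for _ in range(iters):
        assign = np.linalg.norm(vectors[:, None] - atoms[None], axis=2).argmin(1)
        for j in range(k):
            m = assign == j
            if m.any():
                w = attn[m][:, None]
                atoms[j] = (w * vectors[m]).sum(0) / w.sum()
    return atoms, assign

# 64 tokens clustered around 4 "topics", 8-dim cache vectors
rng = np.random.default_rng(3)
centers = 5.0 * rng.standard_normal((4, 8))
vectors = centers[rng.integers(0, 4, 64)] + 0.1 * rng.standard_normal((64, 8))
attn = rng.random(64)                        # per-token attention mass
atoms, assign = distill_kv(vectors, attn)    # 64 vectors -> 4 shared atoms
recon_err = np.linalg.norm(vectors - atoms[assign], axis=1).mean()
```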
Ξ›
Adaptive
Per-tensor policy selection

Profiles each tensor: Ξ·L (low-rank energy), spectral gap, kurtosis, outlier fraction. Decision tree selects which operators to activate and at what rank.

Derived from: KV-CoRE, MoBiQuant
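The profiling statistics themselves are cheap. A sketch with a toy decision rule follows; the real decision tree and its thresholds live in the C code and are not reproduced here:

```python
import numpy as np

def profile_tensor(W, rank=8):
    """Lambda-profiler sketch: a few cheap statistics that indicate how
    compressible a tensor is and which operators might help."""
    s = np.linalg.svd(W, compute_uv=False)
    energy = s ** 2
    stats = {
        "low_rank_energy": float(energy[:rank].sum() / energy.sum()),  # eta_L
        "spectral_gap": float(s[0] / s[1]),
        "kurtosis": float(np.mean((W - W.mean()) ** 4) / W.var() ** 2),
        "outlier_frac": float(np.mean(np.abs(W) > 4 * W.std())),
    }
    # Toy policy: strong low-rank structure -> enable M and Pi operators
    stats["enable_M_Pi"] = stats["low_rank_energy"] > 0.5
    return stats

rng = np.random.default_rng(6)
structured = (5.0 * np.outer(rng.standard_normal(64), rng.standard_normal(64))
              + 0.1 * rng.standard_normal((64, 64)))
stats = profile_tensor(structured)   # near-rank-1 tensor: eta_L close to 1
```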
H
Hardware
Kernel-aligned packing

Inner-group quantization that aligns bit boundaries to SIMD lanes. Stores scales per group, enabling vectorized unpacking without scatter/gather overhead.

Derived from: InnerQ, High-Rate QMM
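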

The FPQ-X encode pipeline

1. Ξ› Profile
β†’
2. BWA Prune
β†’
3. A Encode (v9)
β†’
4. M Scale
β†’
5. Ξ  Predict
Each stage has automatic quality rollback β€” if an operator doesn't improve cosine by >1e-7, it's disabled for that tensor.
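The rollback rule itself is simple. A sketch with toy operators follows; the 1e-7 threshold is the one quoted above, while the operators here are stand-ins:

```python
import numpy as np

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def apply_with_rollback(W, W_hat, operators, eps=1e-7):
    """Per-stage rollback: each operator is kept only if it improves
    cosine against the original by more than eps."""
    best = cosine(W, W_hat)
    for name, op in operators:
        candidate = op(W, W_hat)
        c = cosine(W, candidate)
        if c > best + eps:
            W_hat, best = candidate, c     # keep the improvement
        # else: operator disabled for this tensor
    return W_hat, best

rng = np.random.default_rng(4)
W = rng.standard_normal((16, 16))
W_hat = W + 0.1 * rng.standard_normal((16, 16))   # imperfect encode
ops = [
    ("helps", lambda W, Wh: 0.5 * (W + Wh)),                        # improves cosine
    ("hurts", lambda W, Wh: Wh + rng.standard_normal(Wh.shape)),    # rolled back
]
W_out, c = apply_with_rollback(W, W_hat, ops)
```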

FPQ v10 vs FPQ-X

| Dimension | FPQ v10 | FPQ-X |
|---|---|---|
| Error model | Additive only (W β‰ˆ Ε΄) | Additive Γ— Multiplicative + Predictive |
| Per-tensor policy | Same pipeline for all | Ξ› profiles Ξ·L, gap, kurtosis β†’ selects operators |
| KV cache | Weight-only quantization | D operator: sequence-axis distillation |
| Hardware awareness | Generic packing | H operator: SIMD-lane-aligned groups |
| Objective | min β€–W βˆ’ Ε΄β€– | min Ξ»RΒ·Rate + Ξ»DΒ·Distortion + Ξ»EΒ·Execution |
| Research basis | Original FPQ design | 9 papers from early 2026 |
Research basisOriginal FPQ design9 papers from early 2026

bonfyre-fpqx CLI

# Full A+M+Ξ  pipeline
bonfyre-fpqx compress model.safetensors compressed.safetensors --bits 3

# Roundtrip quality test
bonfyre-fpqx roundtrip model.safetensors --bits 3

# Per-tensor compressibility analysis
bonfyre-fpqx profile model.safetensors

# KV cache distillation
bonfyre-fpqx distill cache.safetensors distilled.safetensors --atoms 256

# Hardware-aligned repacking
bonfyre-fpqx pack model.safetensors packed.safetensors --bits 3 --group-size 128

Research foundation β€” 9 papers synthesized

LoRDS
Multiplicative low-rank scaling
arXiv:2601.22716
WaterSIC
Activation-aware rate–distortion
arXiv:2603.04956
EchoKV
Predictive KV reconstruction
arXiv:2603.22910
KVSculpt
Attention-weighted cache distillation
arXiv:2603.27819
KV-CoRE
Data-dependent compressibility
arXiv:2602.05929
InnerQ
Hardware-aligned inner quantization
arXiv:2602.23200
MoBiQuant
Token-adaptive mixed precision
arXiv:2602.20191
High-Rate QMM
Activation-weighted matrix multiply
arXiv:2601.17187
Codebook Opt.
Optimal codebook initialization
arXiv:2602.06557

KV Cache Compression β€” 9 Optimizations

Baseline cosine numbers (bonfyre-kvcache C benchmark). All 9 Python optimizations live in fpq_bridge.py.

| Bits | KV Cosine | Hardware implication |
|---|---|---|
| 5-bit | 0.99996 | ~3.2Γ— more context β€” 8K ctx β†’ 25K in same VRAM |
| 4-bit | 0.99994 | 4Γ— more context β€” recommended for production |
| 3-bit | 0.99990 | ~5.3Γ— context, some quality loss on long sequences |
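The arithmetic behind these multipliers: per-token KV cost is linear in bit width, so quantizing 16-bit KV to 4 bits fits 4Γ— the tokens in the same budget. The model shape below is a hypothetical 7B-class config, purely for illustration:

```python
def kv_bytes_per_token(n_layers, n_kv_heads, head_dim, bits):
    # Keys and values, for every layer and every KV head
    return 2 * n_layers * n_kv_heads * head_dim * bits / 8

# Hypothetical 7B-class shape: 32 layers, 8 KV heads, head_dim 128
bf16 = kv_bytes_per_token(32, 8, 128, 16)   # BF16 baseline
q4 = kv_bytes_per_token(32, 8, 128, 4)      # 4-bit KV
ctx_gain = bf16 / q4                        # tokens that now fit per byte
```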
#3 Attention-Weighted Tiles

High-attention blocks dominate tile assignment. Codebook quality concentrates where the model actually looks.

#4 Per-Layer Adaptive Bits

Ξ›-profiler analyzes each K/V layer β€” kurtosis, spectral gap, outlier fraction β€” to pick the right bit depth automatically.

#5 Cross-Layer Shared Codebook

One 256-tile codebook learned across 8 sample layers. Skip per-call K-means β€” compress all layers in amortized O(1).

#6 D-Operator Distillation

K-means++ on KV vectors to K atoms (K β‰ͺ N). Tokens that attend similarly share one atom. Bug-free nearest-centroid lookup.

#7 Delta Encoding

Only the delta vs previous frame is compressed. Each new token costs far less than storing a new frame.
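A sketch of the scheme (hypothetical helpers, not the real frame layout): store the first frame whole, then only differences, which are far smaller in magnitude and therefore cheaper to quantize.

```python
import numpy as np

def delta_encode(frames):
    """Store the first KV frame whole, then per-token deltas only."""
    return [frames[0]] + [b - a for a, b in zip(frames, frames[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)     # re-accumulate to recover each frame
    return out

# A slowly drifting cache: each new token changes the state only a little
rng = np.random.default_rng(5)
frames = list(np.cumsum(
    np.concatenate([rng.standard_normal((1, 16)),
                    0.01 * rng.standard_normal((63, 16))]), axis=0))
deltas = delta_encode(frames)       # deltas ~100x smaller than the frames
```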

#8 Huffman PMF Weighting

E8 coordinate magnitude as Huffman code length proxy. High-cost blocks get upweighted β€” rate-quality jointly optimized.

#9 LT_SMALL_INT Fast Path

Near-zero blocks (max abs ≀ 63) bypass E8 lattice β€” 7-bit integer round + clamp. Significant throughput win on embedding layers.

#10 M-Operator Row Scale

Per-row scale vector on each FPQLinear. Applied after SLI matmul: corrects per-output-channel amplitude drift.

#11 H-Operator NEON Packing

ARM NEON 128-bit aligned pre-packing. Eliminates scatter/gather β€” vectorized unpacking on Apple Silicon and Jetson.

Use individually or compose: patch_kv_cache(adaptive_bits=True, shared_tiles=tiles) activates #4 + #5 simultaneously.

Hardware Fits & Pricing

What hardware runs what β€” and what it costs.

FPQ weight compression + KV cache optimizations change the hardware equation. What used to need cloud GPUs now fits on-device. Use cases shift depending on your hardware budget.

| Device | RAM | Approx. Cost (2026) | Before (BF16) | After (FPQ 4-bit + KV) |
|---|---|---|---|---|
| Raspberry Pi 5 | 8 GB | $80 | TinyLlama only, 512-token ctx, no video | TinyLlama + 2K ctx, Whisper turbo inference, local ASR pipeline |
| Jetson Orin Nano | 8 GB | $250 | Qwen 0.5B only, degraded at >512 ctx | Qwen 0.5B @ 4K ctx Β· NEON packing Β· embeddings + FPQ inference co-resident |
| Apple M1 MacBook | 16 GB unified | $900 refurb | Qwen 0.5B (tight), Wan 1.3B (no headroom) | Wan 1.3B + 4K ctx KV Β· HCP speech + SLI co-resident Β· NEON-packed |
| Apple M2/M3 Max | 64 GB unified | $2,500–3,500 | Phi-4 14B (no KV headroom past 2K) | Wan 14B @ 8K ctx Β· Phi-4 @ 32K ctx with delta KV Β· full pipeline concurrent |
| T4 cloud | 16 GB VRAM | ~$0.35/hr spot | Qwen 3B, 2K ctx max before OOM | Qwen 3B @ 8K ctx Β· Wan 1.3B + full diffusion sweep Β· shared KV codebook |
| RTX 4090 | 24 GB VRAM | $1,600 GPU / ~$0.50/hr cloud | Wan 1.3B (tight), Phi-4 14B doesn't fit | Wan 1.3B @ 32K ctx Β· Phi-4 14B fits Β· adaptive bits saves ~30% KV RAM |
| RTX 6000 Ada | 48 GB VRAM | ~$1.10/hr (RunPod) | Wan 14B (tight), long video sequences OOM | Wan 14B Β· 287 SLI layers @ 5-timestep sweep Β· multi-second video KV cached |

Budget tiers β€” what opens up at each price point

Under $100
Raspberry Pi 5

Local ASR with Whisper turbo. TinyLlama inference. Bonfyre pipeline binaries. Edge transcription kiosk.

$250 – $1,000
Jetson Orin / M1 Mac

Qwen 0.5B + 4K ctx. Wan 1.3B video. HCP speech + SLI co-resident. NEON packing. Full local pipeline.

$2,500 – $3,500
M2/M3 Max

Wan 14B @ 8K ctx. Phi-4 @ 32K ctx with delta KV. Full Bonfyre pipeline + inference concurrent. Production-grade local stack.

Cloud spot ($0.35–$1.10/hr)
T4 / RTX 4090 / RTX 6000

Burst GPU for video generation, large model inference, SLI sweeps. Use FPQ to fit bigger models on cheaper instances. T4 now handles what used to need A100.

Weight footprint
~2.2 GB β†’ 1.1 GB

TinyLlama 1.1B β€” stays in .fpq at runtime, no decode step. BF16 copy never materializes.

KV context scaling
8K β†’ 32K tokens

4-bit KV compression in same VRAM budget. Delta encoding makes each new token incremental.

ARM throughput
NEON 128-bit

H-operator pre-packing on Apple Silicon and Jetson β€” vectorized unpacking, no scatter/gather.

Co-residency
Inference + pipeline

FPQ model + HCP speech + vector search + pipeline can run concurrently on a 16 GB Mac.

FPQ Compression Benchmarks

Every number from a real run. Raw logs, scripts, and CSVs in the repo. ↑ FPQ overview

0.999882 Β· Avg cosine β€” Wan2.1-T2V-14B (402 tensors)
0.999916 Β· Avg cosine β€” Whisper Large V3 (998 tensors)
1,790 Β· Tensors compressed across 15 models
+1.97% Β· PPL degradation β€” Qwen 0.5B @3-bit
54β†’27 GB Β· Wan2.1-T2V-14B (50% size reduction)
28 GB Β· Phi-4 14B β€” near-lossless (cos 1.000614)
8.7β†’5.8 GB Β· Whisper Large V3 (33% size reduction)
0.999826 Β· Worst-case tensor cosine (Wan2.1-T2V-14B)
4.05–4.19 Β· Bits per weight range @3-bit
0.99759 Β· DiT output cosine (30 transformer blocks)
+128% Β· HQQ @3-bit PPL (65Γ— the degradation of FPQ)

Artifact links

Hugging Face Model Hub

BF16 safetensors (drop-in) + native .fpq v12 files (rANS, direct SLI inference).

Wan2.1-T2V-14B (54β†’27 GB) β†’ Phi-4 14B (28 GB) β†’ Whisper Large V3 (8.7β†’5.8 GB) β†’ Whisper Large V3 Turbo (1.6 GB) β†’
Proof Pack (2026-04-10)

Qwen PPL (v8 vs v4 vs HQQ), Whisper roundtrip, CSV, PNG chart, reproduction commands.

View proof pack β†’
BENCHMARKS.md

Full benchmark report: version progression, weight tables, KV cache, speed optimization, binary sizes.

View benchmarks doc β†’
BonfyreFPQ Source

Pure C11: main.c, fpq_codec.c, ggml_reader.c, fpq.h. Builds with make on macOS/Linux.

View source β†’

System Benchmarks

Apple M-series, after the P0–P6 optimization passes. All real. FPQ benchmarks ↑

5–8 ms Β· Per-stage latency (was 76 ms)
9.3% Β· Lambda Tensors compression (N=10K)
237 ms Β· ONNX embed (was 600 ms Python)
6 ms Β· fastText inference (was 150 ms Python)
536 bytes Β· Artifact struct (was 1,076)
5 ms Β· SIMD exact vector search
15.5Γ— Β· Batch embed speedup (10 files)
~10Γ— Β· Hash hex (LUT vs snprintf)

Optimization passes

P0 β€” Foundation
Pure C rewrite
Python β†’ C11, ONNX multi-thread, VECF binary format, -O3 -march=native -flto
P1 β€” Tokenizer
Trie + inline DB
Hash table β†’ trie tokenizer, --insert-db zero-file-I/O embed path
P2 β€” SIMD
Batch + cosine
NEON SIMD cosine, batch embed, libbonfyre shared runtime
P3 β€” Native
fastText in C
Pure C fastText, libbonfyre β†’ 29 binaries, DB connection pooling
P4 β€” Architecture
Hardening pass
FNV hash registry, SHA-256 dedup, PGO targets, TCP_NODELAY, SIGPIPE
P5 β€” Datatype
10 syntax wins
Hex LUT ~10Γ—, struct 1076β†’536, O(nΒ²)β†’O(n), raw syscalls, switch dispatch
P6 β€” Runtime RL
Self-tuning relay
RL agent tunes buffer size and path policy live. Gossip mesh + consensus on top of bonfyre-moq.
| Metric | Before (P0) | After P5 | Improvement |
|---|---|---|---|
| Single embed | ~600 ms | 237 ms | 2.5Γ— |
| 10-file batch embed | ~6,000 ms | 386 ms | 15.5Γ— |
| Pipeline (6 stages) | 76 ms | 8 ms | 9.5Γ— |
| Tag inference | ~150 ms | 6 ms | 25Γ— |
| Hash hex | ~100 ns | ~10 ns | ~10Γ— |
| Artifact struct | 1,076 bytes | 536 bytes | 2Γ— cache density |
| Vector file (384-dim) | 6.4 KB JSON | 1,544 bytes VECF | 4.2Γ— smaller |
BonfyreTel β€” MoQ/WebTransport Relay

Pure C relay. No Node. No JS runtime overhead.

bonfyre-moq replaces the Node.js MoQ relay with a pure C11 implementation. Built on ngtcp2 + nghttp3 + OpenSSL 3 + SQLite. Ships with four live extension modules: inline inference, RL self-tuning, gossip mesh, and lightweight consensus β€” all in the same binary.

βœ“ MoQ-Transport draft-14 βœ“ Inline AI Inference βœ“ RL Self-Tuning βœ“ Gossip Mesh βœ“ Consensus / Leader Election βœ“ Zero-Copy Object Forwarding

Four live extension modules

Inline AI Inference
inference.c + inference_onnx.c

Entropy-based scoring (score 0–100) on every forwarded MoQ object. Weak-linked ONNX Runtime via dlopen β€” falls back to entropy estimator when ONNX Runtime is absent. Same hook as bonfyre-embed.

ext_score_object(path, data, len, &tag)
RL Self-Tuning (P6)
optimizer.c

Background RL agent tunes relay buffer size (16 KB–256 KB) and path policy (round-robin vs least-loaded) every 2 s. Reward signal driven by real relay metrics. Exposes current params for use by the forwarding loop.

ext_get_relay_buf_size() / ext_get_relay_path_policy()
Gossip Mesh
mesh.c + bonfyre-mesh.h

UDP multicast gossip beacon on 239.0.0.57:7942. Background thread maintains a live peer table (peer_info[]). Enables distributed relay clusters without a central registry. Shared peer table feeds the consensus module.

ext_mesh_start() / ext_mesh_stop()
Lightweight Consensus
consensus.c

Simulated Raft: returns the stable leader peer from the mesh table. Used to shard MoQ PUBLISH_NAMESPACE announcements and SUBSCRIBE routing across a relay cluster without split-brain.

ext_consensus_leader() β†’ const char *

Relay internals

QUIC / WebTransport (ngtcp2+nghttp3)
 β†³ MoQ stream demux β†’ ext_score_object() (inline inference, every object)
 β†³ zero-copy forward β†’ subscriber fan-out (buf size from RL optimizer)
 β†³ PUBLISH_NAMESPACE β†’ consensus leader routes β†’ mesh peer table
SQLite stream log Β· SIGTERM graceful drain Β· kqueue/epoll event loop

Build & run

# Build relay + all extension modules
make bonfyre-moq

# Run relay (MoQ + mesh + optimizer + inference)
./bonfyre-moq --host 127.0.0.1 --port 4443 \
               --runtime-dir /tmp/bonfyre-moq \
               --db /tmp/bonfyre-moq/relay.db

# Smoke-test all extension modules
make test-bonfyre  # inference + optimizer + mesh + consensus
C11 Β· No Node.js
4 Β· Extension modules
RL Β· Self-tuning (P6)
Mesh Β· Gossip peer discovery
Raft Β· Consensus leader

Architecture

50 separate binaries. Not a monolith. Not a framework. Each is a standalone Unix process.

Unix philosophy

Each binary does one thing. Compose with pipes, files, or the pipeline binary. bonfyre-media-prep audio.wav | bonfyre-transcribe | bonfyre-brief

Process isolation

Every binary runs as its own process. No shared memory. If one crashes, nothing else does. Separate processes clean up on exit.

Dynamic linking

Whisper via libwhisper (Homebrew). LLM via llama-completion subprocess. SQLite via system library.

Pipeline DAG

Audio in β†’ ingest β†’ media-prep β†’ transcribe β†’ transcript-clean β†’ paragraph β†’ brief β†’ proof β†’ pack β†’ distribute
                                    ↳ embed β†’ vec (semantic search)
                                    ↳ tag + tone (enrichment)
                                    ↳ render β†’ emit (HTML/PDF/EPUB/RSS)
                                    ↳ moq (WebTransport relay Β· RL optimizer Β· gossip mesh Β· consensus)

Five layers

Surface: cms Β· api Β· auth Β· pipeline Β· cli Β· transcript-family Β· project Β· tel Β· proxy
Value: offer Β· gate Β· meter Β· ledger Β· finance Β· outreach Β· pay Β· pack Β· distribute
Transform: media-prep Β· transcribe Β· transcript-clean Β· paragraph Β· brief Β· proof Β· embed Β· narrate Β· render Β· emit Β· mfa-dict Β· weaviate-index Β· repurpose Β· segment Β· clips Β· speechloop Β· tone Β· tag Β· canon Β· query
Substrate: ingest Β· hash Β· index Β· compress Β· stitch Β· graph Β· runtime Β· queue Β· sync
Libraries: libbonfyre (runtime, FNV registry, SHA-256) Β· liblambda-tensors (Huffman, arithmetic coding)

All 50 Binaries

Every binary is standalone. Use one or all. ~2.1 MB total disk.

Substrate (9 binaries)

bonfyre-ingest 35 KB β€” intake + type detection
bonfyre-hash 34 KB β€” SHA-256 content addressing
bonfyre-index 68 KB β€” SQLite artifact index + FTS
bonfyre-compress 34 KB β€” zstd family-aware compression
bonfyre-stitch 34 KB β€” DAG materializer
bonfyre-graph 51 KB β€” Merkle-DAG artifact graph
bonfyre-runtime 34 KB β€” process lifecycle
bonfyre-queue 34 KB β€” persistent job queue
bonfyre-sync 34 KB β€” cross-instance replication

Transform (22 binaries)

bonfyre-media-prep 34 KB β€” audio normalization
bonfyre-transcribe 34 KB β€” speech-to-text (Whisper)
bonfyre-transcript-clean 34 KB β€” remove filler words
bonfyre-paragraph 35 KB β€” structure paragraphs
bonfyre-brief 34 KB β€” summary + action items
bonfyre-proof 34 KB β€” quality scoring
bonfyre-embed 52 KB β€” ONNX embeddings, trie tokenizer, batch
bonfyre-vec 35 KB β€” SIMD cosine vector search
bonfyre-narrate 68 KB β€” verified TTS: 6-layer fidelity
bonfyre-render 34 KB β€” template rendering
bonfyre-emit 34 KB β€” HTML/PDF/EPUB/RSS output
bonfyre-mfa-dict 34 KB β€” pronunciation dictionary
bonfyre-weaviate-index 34 KB β€” Weaviate vector search
bonfyre-transcript-family 34 KB β€” full transcription chain
bonfyre-repurpose 34 KB β€” content repurposing
bonfyre-segment 50 KB β€” speaker segmentation
bonfyre-clips 35 KB β€” audio clip extraction
bonfyre-speechloop 34 KB β€” live speech loop
bonfyre-tone 34 KB β€” tone/sentiment
bonfyre-tag 35 KB β€” topic tagging (native fastText)
bonfyre-quant 42 KB β€” v8 RLF weight quantization
bonfyre-kvcache 42 KB β€” KV cache compression

Surface (10 binaries)

bonfyre-cms 287 KB β€” CMS + Lambda Tensors
bonfyre-api 69 KB β€” HTTP gateway + dashboard
bonfyre-auth 35 KB β€” user auth + sessions
bonfyre-pipeline 52 KB β€” unified pipeline (5-8 ms/stage)
bonfyre 34 KB β€” unified CLI dispatcher
bonfyre-project 34 KB β€” project scaffolding
bonfyre-tel 68 KB β€” FreeSWITCH telephony
bonfyre-moq ~120 KB β€” MoQ/WebTransport relay + inference + RL + mesh + consensus
bonfyre-canon 35 KB β€” canonical artifact format
bonfyre-proxy 53 KB β€” OpenAI-compatible API shim

Value (9 binaries)

bonfyre-offer 34 KB β€” dynamic pricing
bonfyre-gate 34 KB β€” API key tiers
bonfyre-meter 34 KB β€” usage tracking
bonfyre-ledger 34 KB β€” financial records
bonfyre-finance 51 KB β€” bundle pricing
bonfyre-outreach 51 KB β€” outreach tracking
bonfyre-pay 35 KB β€” invoicing + payments
bonfyre-pack 34 KB β€” deliverable packaging
bonfyre-distribute 34 KB β€” email/Slack/webhooks

Libraries

libbonfyre 64 KB β€” runtime contract, FNV hash, SHA-256
liblambda-tensors 72 KB β€” structural JSON compression

How-To: Pick Your Entry Point

Each is standalone β€” you don't need to understand the whole system.

Direct .fpq Inference (SLI Bridge)

Run models directly from .fpq files. patch_model() replaces nn.Linear with FPQLinear. TinyLlama: 155 layers, cos=0.997, top-1=97.3%. bonfyre-oss.

patch_model(hf_model, fpq, resolver)
5 min to try

KV Cache Compression

4Γ— more context tokens in same VRAM. 9 optimization passes. Works with nn.Linear and FPQLinear simultaneously.

patch_kv_cache(model, bits=4, adaptive_bits=True)
2 min to try

Model Quantization (v8 RLF)

Quantize LLM weights to 3-bit with 0.9999+ cosine. E8 lattice snap + ΞΌ-law warp + 16D RVQ. 42 KB binary.

bonfyre-quant benchmark model.gguf --bits 3
5 min to try

Lightweight CMS

Replace Strapi's 500 MB with a 287 KB binary. Dynamic schemas, token auth, REST API. bonfyre-cms.

bonfyre-cms serve --port 8800
2 min to try

Local Transcription + HCP

Local speech for public or private audio. Live proof. bonfyre-intake.

bonfyre-transcribe run audio.wav
5 min to try

Audio-to-Invoice Pipeline

Audio β†’ transcript β†’ summary β†’ quality score β†’ pricing β†’ deliverable. 5–8 ms per stage. bonfyre-pipeline.

bonfyre-pipeline run --input audio.mp3
2 min to try

Semantic Vector Search

Embed docs + NEON SIMD cosine search. Replace $250/mo Pinecone. bonfyre-embed.

bonfyre-embed --insert-db my.db
5 min to try

MoQ Relay + Mesh

Pure C WebTransport relay with inline AI scoring, RL self-tuning, gossip peer discovery, and consensus leader election. Replaces Node.js moq-edge.

./bonfyre-moq --port 4443 --db relay.db
make bonfyre-moq to build

OpenAI-Compatible API

Drop-in replacement. Set OPENAI_API_BASE=http://localhost:8787. 53 KB binary.

bonfyre-proxy serve --port 8787
1 min to try

Reproduce Everything

Every number on this site comes from scripts in the repo.

# Weight roundtrip β€” Wan2.1 (307 tensors)
./bonfyre-fpq roundtrip-v9 ~/.local/share/models/wan2.1-t2v-1.3b/diffusion_pytorch_model.safetensors --bits 3

# Compress GGUF (llama.cpp compatible)
./bonfyre-fpq quantize model.gguf compressed.gguf --bits 3

# Compress safetensors
./bonfyre-fpq quantize model.safetensors compressed.safetensors --bits 3

# Perplexity benchmark
python3 perplexity_benchmark.py --model Qwen/Qwen2.5-0.5B --bits 3 --mode v8

# DiT forward-pass comparison
python3 scripts/wan_dit_compare.py

# SLI bridge inference test
python3 test_sli_plus_kvcache.py --device mps

# Full Bonfyre pipeline
git clone https://github.com/Nickgonzales76017/bonfyre.git && cd bonfyre && make
time ./bin/bonfyre-pipeline run --input audio.wav

# Run tests
make test # 167/167 tests

All scripts, logs, and CSVs: 10-Code/BonfyreFPQ/ Β· Proof pack: results/2026-04-10-proof-pack/

Install

Build from source in under 60 seconds.

# From source (recommended)
git clone https://github.com/Nickgonzales76017/bonfyre.git
cd bonfyre
make # builds 2 libraries + 47 binaries
make install # copies to ~/.local/bin
# One command (macOS / Linux)
curl -fsSL https://raw.githubusercontent.com/Nickgonzales76017/bonfyre/main/install.sh | sh
# BonfyreFPQ / bonfyre-oss
git clone https://github.com/Nickgonzales76017/bonfyre-oss.git
cd bonfyre-oss && make

Requirements: C11 compiler (gcc or clang), SQLite3 dev headers, zlib. Optional: ONNX Runtime (embed), FreeSWITCH (tel), PyTorch (SLI bridge).

MIT Licensed. Ship with it.

Full source. Real benchmarks. Reproduction scripts included.