
Chameleon: Mixed-Modal Early-Fusion Foundation Models

Chameleon Team (Meta FAIR) — May 2024 (ICLR 2025)

📄 arXiv:2405.09818

TL;DR: Meta’s early-fusion model tokenizes everything — images, text, and code — into discrete tokens drawn from a single ~73,828-entry vocabulary, and trains one autoregressive transformer with a single cross-entropy loss. 7B and 34B models were trained on ~10 trillion tokens. Chameleon beats GPT-4V on mixed-modal tasks. Key engineering contribution: QK-Norm for training stability. This is the architecture that Transfusion argues against — elegant in its simplicity, but lossy in its image representation.

Level 1 — Beginner

The big idea

Most AI models are specialists. GPT writes text. DALL-E makes images. They can’t truly work together. What if you wanted an AI that could write a travel guide with photos, weaving them naturally?

Chameleon says: turn everything into tokens. Text is already tokens (words broken into pieces). Images? Chop them into 1,024 discrete tokens too via a VQ-VAE. Then one transformer processes the whole thing — text, images, code — as a single stream.

The universal language analogy

Core idea

Imagine a United Nations where every delegate speaks a different language. The old approach: hire specialized translators. Chameleon’s approach: teach everyone Esperanto. Every image is translated into the same “language” as text — discrete numbered tokens. One brain processes it all, no translators needed. The tradeoff? Some nuance is lost in translation.

How does it work? (3 steps)

  1. Tokenize everything. Text → BPE tokens (standard). Images → 1,024 discrete tokens per image via a VQ-VAE with a codebook of 8,192 entries. Combined vocabulary: ~73,828 tokens.
  2. One transformer, one loss. Standard next-token prediction (cross-entropy) over the entire mixed sequence — text tokens and image tokens in one stream.
  3. Train on everything. ~10 trillion tokens of interleaved text, images, and code. Two model sizes: 7B and 34B parameters.
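
The three steps can be sketched with toy numbers. Everything below is illustrative: tiny vocabularies and made-up token IDs, not Chameleon's real ones.

```python
# Toy sketch of "tokenize everything". Real Chameleon uses a 65,536-entry
# BPE vocabulary and 1,024 VQ-VAE tokens per image; these IDs are made up.
TEXT_VOCAB_SIZE = 10              # pretend text vocabulary
IMAGE_START, IMAGE_END = 98, 99   # special tokens (illustrative IDs)

text_tokens = [3, 7, 1]           # a caption as fake BPE IDs
image_tokens = [0, 5, 2, 4]       # fake VQ-VAE codebook indices

# Shift image indices past the text vocabulary so both share one ID space,
# then splice everything into a single stream for next-token prediction
stream = (text_tokens
          + [IMAGE_START]
          + [t + TEXT_VOCAB_SIZE for t in image_tokens]
          + [IMAGE_END])
print(stream)  # [3, 7, 1, 98, 10, 15, 12, 14, 99]
```

One transformer then predicts the next ID in this stream, never knowing which positions are text and which are image.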

How is this different from Transfusion?

Key contrast

Transfusion keeps images as continuous patches and uses diffusion to generate them. Chameleon takes the opposite bet: convert images to discrete tokens (like text) and use next-token prediction for everything. Simpler architecture, one unified loss — but the image-to-token conversion is lossy, and you need 1,024 tokens per image instead of 256 patches.

Key results

  • 60.4%: human preference vs Gemini-Pro
  • 51.6%: human preference vs GPT-4V
  • 34B parameters, trained on ~10T tokens

Two surprising findings

Training mixed-modal models is unstable. Chameleon’s biggest contribution might not be the architecture itself, but figuring out how to train it without it blowing up. The team applied QK-Norm and reordered layer norms to prevent divergence. Without these tricks, training collapses about 20% of the way through.

Interleaved generation actually works. One of the first models where you can prompt “show me cool birds and tell me about them” and get a coherent response weaving text and generated images naturally.

Key takeaway

Chameleon proves that the “tokenize everything” approach can produce a genuinely unified multimodal model that beats GPT-4V on mixed-modal tasks. The cost: image quality is capped by the VQ-VAE tokenizer, and generation requires 4× more tokens per image than Transfusion.

Quiz — Level 1
1. What is Chameleon’s core approach to handling multiple modalities?
Chameleon converts images into 1,024 discrete tokens via VQ-VAE, combines them with text tokens into a unified vocabulary, and trains one transformer with a single cross-entropy loss.
2. How many discrete tokens does Chameleon use to represent a single image?
The VQ-VAE encodes a 512×512 image into a 32×32 grid = 1,024 discrete tokens from a codebook of 8,192 entries.
3. Compared to Transfusion, Chameleon’s biggest architectural advantage is:
Chameleon uses the same cross-entropy next-token prediction for both text and images. No dual losses, no noise scheduling, no iterative denoising at inference.
4. What was one of the biggest engineering challenges the Chameleon team had to solve?
Mixed-modal training causes Q/K norms to grow unboundedly, leading to softmax saturation and gradient collapse. QK-Norm and revised layer norm placement were essential fixes.
5. On mixed-modal tasks (generating interleaved text and images), Chameleon achieved:
Humans preferred Chameleon’s mixed-modal outputs over GPT-4V and Gemini-Pro — the first time a unified model beat these dedicated systems on mixed-modal generation.

Level 2 — Intermediate

The training recipe — 10 trillion tokens

| Data type | Description | Scale |
|---|---|---|
| Text | Web text, books, code (similar to LLaMA) | ~4.5T text tokens |
| Image-text pairs | Image + caption datasets | ~1.4B pairs |
| Interleaved documents | Web pages with images and text naturally mixed | ~400B tokens |

Total: ~10 trillion tokens — 5× more data than Transfusion’s 2T. The interleaved data is crucial: this teaches the model to naturally weave text and images. Most prior work only trained on paired data (one image, one caption).

Two model sizes

| | Chameleon-7B | Chameleon-34B |
|---|---|---|
| Parameters | 7B | 34B |
| Layers | 32 | 48 |
| Hidden dim | 4,096 | 8,192 |
| Attention heads | 32 | 64 |
| Training tokens | ~4.4T | ~9.2T |

The unified vocabulary

| Component | Tokens | Notes |
|---|---|---|
| BPE text vocabulary | 65,536 | Standard text tokens |
| Image codebook | 8,192 | Discrete image tokens from VQ-VAE |
| Special tokens | ~100 | <image_start>, <image_end>, etc. |
| Total vocabulary | ~73,828 | One unified softmax |

Text and image tokens live in the same embedding space. The model doesn’t know it’s switching modalities — it just predicts the next token from a ~73K vocabulary.
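
The vocabulary arithmetic is easy to verify (component counts as stated in the table above; the special-token count, and hence the total, is approximate):

```python
# Components of the unified vocabulary as given above; ~100 special tokens
# is the article's approximation, so the total is likewise approximate.
bpe_text = 65_536
image_codebook = 8_192
special = 100

total = bpe_text + image_codebook + special
print(total)  # 73828 -- one softmax over the whole range
```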

VQ-VAE image tokenizer (Make-A-Scene heritage)

Chameleon uses a VQ-VAE derived from Meta’s Make-A-Scene image tokenizer:

Encoding:
  Input image (512×512×3)
    ↓ Encoder CNN
  Latent grid (32×32×256)
    ↓ Quantize each position → nearest of 8,192 codebook entries
  Token grid (32×32) = 1,024 discrete tokens
    ↓ Flatten row-by-row
  Token sequence [t_1, t_2, ..., t_1024]

Decoding:
  Token sequence → codebook lookup → latent grid → Decoder CNN → image

The VQ-VAE is frozen during Chameleon training. Its reconstruction quality caps the model’s generation quality:

| Metric | Chameleon’s VQ-VAE | Stable Diffusion’s VAE |
|---|---|---|
| rFID (reconstruction) | ~1.5–2.0 | ~0.5–1.0 |
| Resolution | 512×512 | 512×512+ |
| Representation | 1,024 discrete tokens | 4,096 continuous floats |
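
The quantization step in the diagram above can be mimicked with NumPy at toy scale: a 4×4 latent grid and a 16-entry codebook standing in for Chameleon's 32×32 grid and 8,192 entries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the VQ-VAE bottleneck (real: 32x32 grid, 8,192 codes, dim 256)
K, D, H, W = 16, 8, 4, 4
codebook = rng.normal(size=(K, D))
z_e = rng.normal(size=(H, W, D))           # pretend encoder output

# Quantize: each grid position snaps to its nearest codebook entry
dists = np.linalg.norm(z_e[:, :, None, :] - codebook[None, None, :, :], axis=-1)
indices = dists.argmin(axis=-1)            # (H, W) integer token grid
tokens = indices.flatten()                 # row-major token sequence, as in the diagram

# Decode: lookup is exact, but z_q != z_e -- quantization is lossy
z_q = codebook[indices]
print(tokens.shape, float(np.abs(z_q - z_e).mean()))
```

The nonzero reconstruction gap at the end is exactly the lossiness the table above quantifies with rFID.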

Training stability — the hardest problem

This is arguably Chameleon’s most important contribution. Standard transformer training diverges roughly 20% of the way into mixed-modal training.

Problem 1: Softmax attention explodes. When training on mixed modalities, the norms of Q and K vectors grow unboundedly. Image tokens and text tokens produce very different activation magnitudes. Once norms get large enough, softmax saturates (one weight → 1.0, all others → 0.0) and gradients vanish. Training collapses.
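
A small NumPy demo shows the failure mode: scaling the same logits up (which is what unbounded Q/K norms do) drives softmax to one-hot.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Same attention pattern, two magnitudes
logits_small = np.array([1.0, 0.5, 0.2])
logits_big = 50 * logits_small             # what unbounded Q/K norms produce

p_small = softmax(logits_small)
p_big = softmax(logits_big)
print(p_small.round(3))   # broad attention over all positions
print(p_big.round(3))     # [1. 0. 0.] -- saturated: gradients through softmax vanish
```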

Fix: QK-Norm

Normalize Q and K to unit vectors before computing attention, with a learnable temperature τ per head. This bounds attention logits to a fixed range regardless of input magnitudes. Without QK-Norm, training diverges at ~500B tokens. With it, stable through 10T+.

Problem 2: Layer norm placement. Standard Pre-Norm isn’t enough. Chameleon uses a revised Pre-Norm with QK-Norm applied after the query/key projections. The combination of Pre-Norm + QK-Norm prevents gradient explosions.

Problem 3: Image-text loss ratio instabilities. During training, image loss and text loss oscillate in anti-correlation — when image loss drops, text loss spikes, and vice versa. The modalities compete for model capacity. Fix: careful data scheduling with changing ratios during training (more text early, more interleaved later).
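
Chameleon's actual ratios and schedule are not published; a purely illustrative linear schedule (all numbers invented) might look like:

```python
def data_mix(step, total_steps, text_start=0.75, text_end=0.5):
    """Illustrative schedule: mostly text early, more interleaved later.
    Chameleon's real ratios are not published; these numbers are made up."""
    frac = step / total_steps
    text = text_start + (text_end - text_start) * frac
    return {"text": text, "interleaved": 1.0 - text}

print(data_mix(0, 1000))      # {'text': 0.75, 'interleaved': 0.25}
print(data_mix(1000, 1000))   # {'text': 0.5, 'interleaved': 0.5}
```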

Inference: generating mixed documents

Text generation: Identical to any language model — autoregressive next-token prediction.

Image generation: When the model predicts <image_start>, it generates 1,024 image tokens autoregressively — each sampled from the 8,192-entry image vocabulary using the same softmax as text. No diffusion, no iterative denoising. Just next-token prediction, 1,024 times. Then the tokens are decoded through the VQ-VAE decoder into pixels.

Interleaved generation: The model decides when to insert images based on context, generates them inline, then continues text conditioned on everything before.

| | Chameleon | Transfusion |
|---|---|---|
| Forward passes per image | 1,024 (one per token, sequential) | 250 (one per denoising step; all patches in parallel) |
| Parallelizable? | No (autoregressive) | Yes (all patches denoised simultaneously per step) |

Head-to-head: Chameleon vs Transfusion

| Dimension | Chameleon | Transfusion | Winner |
|---|---|---|---|
| Architecture simplicity | One loss, one vocabulary | Two losses, two representations | 🏆 Chameleon |
| Image quality (FID) | Higher (worse) | Lower (better) at same compute | 🏆 Transfusion |
| Compute efficiency | 1,024 tokens × quadratic attention | 256 patches × cheaper attention | 🏆 Transfusion |
| Text quality | Competitive with LLaMA-2 | Competitive with LLaMA-1 | 🏆 Chameleon |
| Mixed-modal generation | ✔ Native, demonstrated | ✔ Possible, not benchmarked | 🏆 Chameleon |
| Training stability | Hard — needed QK-Norm innovations | Easier — diffusion loss is smoother | 🏆 Transfusion |
| Scalability | Proven at 34B | Only tested to 7B | 🏆 Chameleon |
| Information preservation | ✘ VQ-VAE quantization loss | ✔ Continuous, no quantization | 🏆 Transfusion |
| Inference flexibility | Fixed: 1,024 tokens always | Tunable: 16–256 patches, adjustable steps | 🏆 Transfusion |

Benchmarks deep dive

Text benchmarks (Chameleon-34B):

| Benchmark | Chameleon-34B | Mixtral 8x7B | Gemini-Pro | LLaMA-2 70B |
|---|---|---|---|---|
| MMLU | 62.0 | 70.6 | 71.8 | 69.8 |
| ARC-Challenge | 78.1 | 81.4 | — | 78.3 |
| HellaSwag | 83.9 | 86.5 | — | 85.3 |
| WinoGrande | 77.0 | 81.2 | — | 80.2 |

Competitive but doesn’t beat text-only specialists. Small “multimodal tax” — image training doesn’t destroy text capability, but doesn’t help either.

Image captioning (where Chameleon shines):

| Benchmark | Chameleon-34B | LLaVA-1.5 |
|---|---|---|
| COCO CIDEr | 141.1 | 137.2 |
| NoCaps CIDEr | 124.8 | 117.5 |
| Flickr30K CIDEr | 106.3 | 97.8 |

State-of-the-art on image captioning — early fusion helps the model deeply understand images.

Mixed-modal human evaluation:

| | Chameleon preferred | Tie | Other preferred |
|---|---|---|---|
| vs GPT-4V | 51.6% | 8.2% | 40.2% |
| vs Gemini-Pro | 60.4% | 6.1% | 33.5% |

Limitations

| Limitation | Detail |
|---|---|
| Image quality | Generated images decent but not competitive with SDXL/DALL-E 3 — capped by VQ-VAE |
| Training cost | 10T tokens at 34B params = enormous compute budget |
| Safety gating | 7B released with image generation disabled; 34B restricted access |
| No video | Text + images only |
| Self-created benchmark | Mixed-modal human eval designed by the authors — no independent validation |

Quiz — Level 2
1. Chameleon’s unified vocabulary combines ~65K text tokens with 8,192 image codebook tokens. What does this enable?
The entire point of Chameleon’s design: one vocabulary, one softmax, one loss function for everything.
2. QK-Norm was introduced to solve a specific instability. What was the root cause?
Different modalities produce different activation magnitudes. Unbounded Q/K norms cause softmax saturation → one-hot attention → gradient vanishing → training collapse.
3. During image generation, Chameleon produces 1,024 tokens. How does this compare to Transfusion?
Chameleon’s generation is sequential (must generate token 1 before token 2). Transfusion’s is parallel within each denoising step (all 256 patches processed simultaneously).
4. The human eval shows Chameleon-34B preferred over GPT-4V 51.6% of the time. What makes this both impressive and hard to trust?
The authors designed the prompts, chose the criteria, and ran the annotation. 51.6% is also barely above a 50/50 coin flip. No independent replication.
5. Chameleon-34B achieves SOTA on captioning but trails text-only models on MMLU. What does this suggest?
There’s a small capacity cost: shared weights serve both modalities, slightly hurting pure text tasks. But the cross-modal capability (SOTA captioning) is a real, measured gain.

Level 3 — Expert

Mathematical formulations

The unified objective. Chameleon’s beauty is its simplicity — one loss for everything:

L = -Σ_{i=1}^{N} log P_θ(x_i | x_{<i})

Where x_i can be a text token OR an image token. Same cross-entropy, same softmax, same backpropagation path. Compare to Transfusion’s dual loss: L_LM + λ · L_DDPM. No balancing hyperparameter λ, no noise scheduling, no timestep conditioning.

VQ-VAE quantization — the full math

Given encoder output z_e and codebook E = {e_k} with K = 8,192 entries:

Quantization (forward):
  z_q(i,j) = e_{k*}  where  k* = argmin_k ||z_e(i,j) - e_k||_2

Straight-through estimator (gradient hack):
  z_q = z_e + sg(z_q - z_e)

  Forward: equals z_q
  Backward: gradient flows through z_e only (pretend quantization didn't happen)

The argmin is non-differentiable. The straight-through estimator (STE) copies the gradient from z_q directly to z_e — mathematically unjustified but empirically works.
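
A NumPy sketch of the quantize-then-STE forward pass (toy sizes; `sg` here is a numeric no-op standing in for stop-gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 4))          # K=8 toy entries (real: 8,192)
z_e = rng.normal(size=(4,))                 # one encoder output vector

# Forward: snap to nearest codebook entry (the non-differentiable argmin)
k_star = np.linalg.norm(codebook - z_e, axis=-1).argmin()
z_q = codebook[k_star]

# Straight-through trick: z_q = z_e + sg(z_q - z_e).
# sg() means "treat as constant": the forward value is exactly z_q, while
# in the backward pass dL/dz_e = dL/dz_q, as if quantization were identity.
sg = lambda x: x                            # numerically, stop-gradient is a no-op
ste_out = z_e + sg(z_q - z_e)
print(np.allclose(ste_out, z_q))            # True: forward value equals z_q
```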

VQ-VAE training loss (3 terms):

L_VQ-VAE = ||x - D(z_q)||^2       // reconstruction: make output look like input
         + ||sg[z_e] - z_q||^2     // codebook: move codebook entries toward encoder outputs
         + β||z_e - sg[z_q]||^2   // commitment: prevent encoder from drifting from codebook

| Term | Gradient flows to | Purpose |
|---|---|---|
| Reconstruction | Encoder + Decoder (via STE) | Make reconstructions look good |
| Codebook | Codebook vectors only | Move codebook entries toward encoder outputs |
| Commitment | Encoder only | Prevent encoder from “running away” from codebook |

In practice, Chameleon uses EMA codebook updates instead of the gradient-based codebook loss:

e_k ← γ · e_k + (1 - γ) · mean(z_e mapped to k)    (γ = 0.99)
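
One EMA update on a toy 2-dimensional codebook entry (values invented):

```python
import numpy as np

gamma = 0.99
e_k = np.array([1.0, 0.0])                  # current codebook entry k
assigned = np.array([[0.6, 0.2],            # encoder outputs mapped to k this batch
                     [1.0, 0.6]])

# EMA update: nudge the entry slightly toward the mean of its assigned vectors
e_k_new = gamma * e_k + (1 - gamma) * assigned.mean(axis=0)
print(e_k_new)   # small step from [1.0, 0.0] toward the batch mean [0.8, 0.4]
```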

Codebook utilization and collapse

With K = 8,192 entries, typical utilization is only 40–70%. Thousands of entries go unused due to:

  • Rich-get-richer dynamics: Popular codes attract more encoder outputs → get updated more → become even more popular
  • Dead codes: Entries initialized far from data never get selected
  • Redundant codes: Multiple entries converge to similar vectors

Mitigations: code reset (replace dead codes with sampled encoder outputs), EMA decay, entropy regularization. Even with mitigations, the effective information capacity per position is less than the theoretical log2(8192) = 13 bits.
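
The capacity loss is a one-line calculation per utilization level:

```python
import math

# If only a fraction of the 8,192 codes is ever selected, the effective
# per-position capacity drops below the nominal log2(8192) = 13 bits.
K = 8192
for util in (1.0, 0.7, 0.4):
    active = int(K * util)
    print(f"{util:.0%} utilization -> ~{active} codes, "
          f"~{math.log2(active):.1f} bits/position")
```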

QK-Norm — the full formulation

Standard multi-head attention computes:

attn_logits = Q · K^T / √(d_h)

When ||Q|| and ||K|| grow large (which happens with mixed modalities), logits explode → softmax saturates → one-hot attention → gradient vanishing.

Chameleon’s fix:

Q_hat = Q / ||Q||_2      // normalize to unit vector
K_hat = K / ||K||_2      // normalize to unit vector

attn_logits = τ_h · Q_hat · K_hat^T / √(d_h)

where τ_h is a learnable temperature per head

Now dot products are bounded to [-1, 1], and τ_h controls sharpness: higher τ = sharper focus on specific positions, lower τ = broader attention.
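
A quick NumPy check that normalization bounds the logits even when activations are huge (τ here is a fixed stand-in for the learned per-head temperature):

```python
import numpy as np

rng = np.random.default_rng(2)
d_h = 64
Q = rng.normal(scale=100.0, size=(5, d_h))   # deliberately huge activations
K = rng.normal(scale=100.0, size=(5, d_h))

# Without QK-Norm, logits scale with ||Q||*||K|| and can be enormous
raw = Q @ K.T / np.sqrt(d_h)

# With QK-Norm: unit-normalize rows, so every dot product lies in [-1, 1]
Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
tau = 10.0                                    # stand-in for the learned temperature
normed = tau * (Qn @ Kn.T) / np.sqrt(d_h)

print(np.abs(raw).max() > np.abs(normed).max())   # True
print(np.abs(Qn @ Kn.T).max() <= 1.0)             # True: dot products are bounded
```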

| Configuration | Training status |
|---|---|
| Standard attention, no QK-Norm | Diverges at ~500B tokens |
| QK-Norm, standard LayerNorm | Diverges at ~2T tokens |
| QK-Norm + revised Pre-Norm | Stable through 10T+ tokens |

QK-Norm appeared earlier in ViT-22B (and later in nGPT), but Chameleon’s contribution is showing it is essential for mixed-modal early fusion at scale — make-or-break, not nice-to-have.

Training dynamics — the modality tug-of-war

Text loss and image loss exhibit anti-correlated oscillations. When gradient updates optimize for image prediction, shared weights shift toward image-favorable representations — text prediction temporarily suffers, and vice versa.

Management: data ratio scheduling (more text early, more interleaved later), gradient norm monitoring, and a two-stage training process (pre-training on all modalities, then alignment with curated safety-filtered data).

Pseudocode: training step

def chameleon_train_step(batch, model, vqvae, optimizer):
    optimizer.zero_grad()
    total_loss = 0.0
    for document in batch:
        token_sequence = []
        for element in document:
            if element.type == "text":
                token_sequence.extend(bpe_tokenize(element.text))
            elif element.type == "image":
                with torch.no_grad():  # VQ-VAE is frozen
                    z_e = vqvae.encoder(element.pixels)
                    indices = vqvae.quantize(z_e)  # [32, 32] ints
                    img_tokens = indices.flatten().tolist()
                # Shift image indices past the text vocab into the unified ID space
                img_tokens = [t + TEXT_VOCAB_SIZE for t in img_tokens]
                token_sequence.append(IMAGE_START)
                token_sequence.extend(img_tokens)
                token_sequence.append(IMAGE_END)

        # Standard causal LM - no special attention mask needed
        input_ids  = token_sequence[:-1]
        target_ids = token_sequence[1:]
        logits = model(input_ids)  # [seq_len, 73828]

        # ONE cross-entropy loss over ALL positions
        total_loss = total_loss + cross_entropy(logits, target_ids)

    (total_loss / len(batch)).backward()
    clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()

Note the simplicity compared to Transfusion’s training step: no noise sampling, no timestep conditioning, no dual losses.

Pseudocode: QK-Norm attention

class QKNormAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()  # required before registering submodules/parameters
        self.n_heads = n_heads
        self.d_h = d_model // n_heads
        self.W_Q = nn.Linear(d_model, d_model, bias=False)
        self.W_K = nn.Linear(d_model, d_model, bias=False)
        self.W_V = nn.Linear(d_model, d_model, bias=False)
        self.W_O = nn.Linear(d_model, d_model, bias=False)
        self.tau = nn.Parameter(torch.ones(n_heads, 1, 1))  # learnable temperature per head

    def forward(self, x, causal_mask):
        B, S, _ = x.shape
        # Project, then split into heads: [B, n_heads, S, d_h]
        Q = self.W_Q(x).view(B, S, self.n_heads, self.d_h).transpose(1, 2)
        K = self.W_K(x).view(B, S, self.n_heads, self.d_h).transpose(1, 2)
        V = self.W_V(x).view(B, S, self.n_heads, self.d_h).transpose(1, 2)
        # QK-Norm: unit-normalize so logits stay bounded regardless of activation scale
        Q = F.normalize(Q, dim=-1)
        K = F.normalize(K, dim=-1)
        logits = self.tau * (Q @ K.transpose(-2, -1)) / self.d_h ** 0.5
        logits = logits.masked_fill(~causal_mask, float("-inf"))
        weights = logits.softmax(dim=-1)
        out = (weights @ V).transpose(1, 2).reshape(B, S, -1)
        return self.W_O(out)

Pseudocode: image generation at inference

def generate_image(model, text_context, vqvae, temp=0.9, top_p=0.95):
    tokens = tokenize(text_context) + [IMAGE_START]
    for i in range(1024):
        logits = model(tokens)[-1]
        # Mask out text tokens - only sample from image vocab
        logits[:TEXT_VOCAB_SIZE] = -inf
        probs = softmax(logits / temp, dim=-1)
        # Nucleus (top-p) sampling
        sorted_p, sorted_idx = sort(probs, descending=True)
        cumsum = cumulative_sum(sorted_p)
        mask = (cumsum - sorted_p) > top_p
        sorted_p[mask] = 0
        sorted_p /= sorted_p.sum()
        next_token = sorted_idx[multinomial(sorted_p, 1)]
        tokens.append(next_token)
    tokens.append(IMAGE_END)

    # Decode via VQ-VAE
    img_indices = [t - TEXT_VOCAB_SIZE for t in tokens[-1025:-1]]
    indices = tensor(img_indices).reshape(32, 32)
    with torch.no_grad():
        z_q = vqvae.codebook_lookup(indices)
        image = vqvae.decoder(z_q)
    return image

Critical analysis

What’s genuinely novel
| Contribution | Novelty | Notes |
|---|---|---|
| Early fusion at 34B scale | ⭐⭐⭐⭐⭐ 5/5 | First to prove tokenize-everything works at this scale for generation |
| QK-Norm for mixed-modal stability | ⭐⭐⭐⭐ 4/5 | QK-Norm existed, but proving it’s essential for mixed-modal is new |
| Interleaved generation | ⭐⭐⭐⭐⭐ 5/5 | First model to convincingly generate naturally interleaved text-image documents |
| Alignment for multimodal | ⭐⭐⭐ 3/5 | RLHF-style alignment applied to multimodal — needed to be done |
| Architecture itself | ⭐⭐ 2/5 | Standard transformer + VQ-VAE — innovation is in training, not architecture |

Weakest claims

No FID reported. The paper never reports FID on a standard benchmark. The VQ-VAE bottleneck makes FID uncompetitive with SDXL/DALL-E 2 — this is a strategic omission. They focus on mixed-modal evaluation where they’re stronger.

Self-created human eval. The “beats GPT-4V 51.6%” headline comes from a benchmark the authors designed, with prompts they chose, criteria they set, and annotation they ran. 51.6% is barely above a coin flip. No independent replication.

Missing compute comparisons. No total training FLOPs, no GPU-hours, no inference latency. We can’t tell if the multimodal capability is “free” or expensive relative to text-only.

Safety gating tells a story. Meta released the 7B model with image generation disabled and safety-gated the 34B. This means: Meta is confident in understanding capabilities (released openly), but NOT confident they’ve solved safety for generation (restricted access). The alignment stage likely reduced but didn’t eliminate harmful image generation.

Expert verdict: Chameleon vs Transfusion

| | Chameleon | Transfusion |
|---|---|---|
| Core thesis | Simplicity wins — one loss, one vocabulary | Quality wins — continuous is worth the complexity |
| Strongest evidence | Mixed-modal generation works; scales to 34B; human eval | Better FID at lower compute; explicit scaling curves |
| Weakest evidence | No FID; self-created benchmark; compute not reported | Never tested interleaved generation; only 7B |
| Real-world readiness | Closer — actually generates documents | Image quality too low without upsampler |
| What it needs | Better VQ-VAE (or switch to continuous) | Scale to 34B+; test interleaved; add resolution |

Key takeaway

These papers are complementary, not competitive. Chameleon proved the training recipe and the product concept. Transfusion proved the representation and efficiency. The model that wins in production will combine both — Chameleon’s training stability and interleaved capability with Transfusion’s continuous image representation.

Quiz — Level 3
1. In the VQ-VAE training loss, the commitment loss β||z_e - sg[z_q]||² sends gradients to:
sg[z_q] stops gradient flow to the codebook. Only the encoder receives the commitment gradient, keeping its outputs close to codebook entries.
2. The straight-through estimator handles the non-differentiable argmin by:
z_q = z_e + sg(z_q - z_e). During forward pass this equals z_q; during backward pass the gradient flows through z_e only.
3. The learnable temperature τ_h in QK-Norm controls:
Each head learns its own sharpness. Heads needing precise focus (one position) learn high τ; heads needing broad context learn low τ.
4. The paper never reports FID on a standard benchmark. The most likely reason:
The paper’s strength is mixed-modal generation, so they evaluated on that. Reporting FID would show image quality below dedicated models, weakening the story.
5. Codebook collapse means effective utilization is 40–70%. The practical consequence:
With 8,192 codes but only 40–70% active, the effective codebook is ~3,200–5,700 entries. Each position carries less than log2(8192)=13 bits of information.

Phase 4 — Frontier

Chameleon, alongside Transfusion, kicked off a wave of unified multimodal models. Here’s what it sparked and what comes next.

What happened since May 2024

| Paper | Date | Approach | Key result |
|---|---|---|---|
| Emu3 (BAAI → Nature) | Sep 2024 | Tokenize everything, but much better visual tokenizer (SBER-MoVQGAN, 32K codebook) | Proves Chameleon’s thesis: bottleneck was VQ-VAE quality, not the approach. Matches SDXL on FID. |
| Janus-Pro (DeepSeek) | Jan 2025 | Separate vision encoders for understanding vs generation | Key insight: what makes good visual representation for understanding ≠ generation |
| JanusFlow (DeepSeek) | Jan 2025 | Janus’s decoupled encoders + rectified flow (continuous) | Bridges Chameleon vs Transfusion — unified training + continuous generation |
| Discrete Diffusion Timestep Tokens | Apr 2025 | Discrete tokens + diffusion scheduling hybrid | Gets best of both: Chameleon’s simplicity + diffusion’s iterative refinement |
| Show-o | 2024 | Discrete diffusion for images (masking/unmasking) | Another hybrid: Chameleon’s vocabulary + diffusion-like generation |

The Emu3 vindication

Emu3 deserves special attention: same approach (tokenize everything, single next-token prediction loss), but with a much better visual tokenizer. Published in Nature — rare for an ML paper. Matches SDXL on FID while also being a strong language model. This proves Chameleon’s core claim was right. The bottleneck wasn’t the discrete approach — it was the VQ-VAE quality.

The Janus insight

Janus-Pro’s key finding: what makes a good visual representation for understanding (semantic, abstract) is different from what makes a good representation for generation (pixel-precise, detailed). Use SigLIP/CLIP for understanding, VQ-VAE/VAE for generation, share the transformer backbone. The cost of two vision encoders is minimal compared to the transformer.

The evolving landscape

May 2024: Chameleon               Aug 2024: Transfusion
   |  "Tokenize everything"          |  "Continuous + diffusion"
   v                                  v
Sep 2024: Emu3                    Jan 2025: JanusFlow
"Better tokenizer fixes it"       "Decoupled encoders + flow"
   |                                  |
   +----------------+-----------------+
                    |
                    v
             2025-2026: Convergence

   Emerging consensus:
   1. Separate encoders for understanding vs generation
   2. Better tokenizers (continuous OR high-quality discrete)
   3. Unified transformer backbone
   4. Training stability tricks (QK-Norm) are essential

Improvement vectors

1. Fix the tokenizer

Addressed by Emu3

Chameleon’s 8,192-entry VQ-VAE with 40–70% utilization is the biggest bottleneck. Paths: larger codebook (32K–64K), SBER-MoVQGAN (Emu3’s approach), Finite Scalar Quantization (FSQ — eliminates codebook collapse by construction), or Lookup-Free Quantization (LFQ — exponential codebook without explicit entries).

2. Decouple understanding and generation encoders

Addressed by Janus

One VQ-VAE serving double duty is a forced compromise. Use SigLIP/CLIP for understanding (optimized for semantics), VQ-VAE/VAE for generation (optimized for pixel reconstruction), share the transformer. Janus-Pro proved this works.

3. Reduce tokens per image

Area to explore

1,024 tokens per image is expensive (quadratic attention, sequential generation). Paths: higher compression VQ-VAE (256 tokens), hierarchical coarse-to-fine (64 + 256), variable-length tokenization (simple images get fewer tokens), or multi-scale with upsampler. Dropping to 256 tokens would make Chameleon compute-competitive with Transfusion.
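
The compute claim is simple arithmetic: self-attention cost is quadratic in token count, so 4× fewer tokens means roughly 16× fewer query-key pairs per image (a rough count, ignoring everything but attention):

```python
# Back-of-envelope attention cost per image, quadratic in token count
chameleon_tokens = 1024
transfusion_patches = 256

ratio = chameleon_tokens**2 / transfusion_patches**2
print(ratio)   # 16.0: 4x more tokens -> ~16x more pairwise attention work
```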

4. Video generation

Area to explore

At 1,024 tokens per frame and 24 fps: 1 second = 24,576 tokens, 10 seconds = 245,760 tokens. Computationally intractable with current sequence lengths. Needs: temporal compression (1 token-set per keyframe), 3D VQ-VAE for video volumes, sparse attention across frames.

5. Scale to 70B+

Area to explore

Does the multimodal tax shrink at larger scale? Hypothesis: yes — at 70B+, the transformer has enough capacity that text and image objectives stop competing. Only a handful of labs can attempt this.

Scorecard

| Dimension | Rating | Notes |
|---|---|---|
| Novelty | ⭐⭐⭐⭐ 4/5 | Early fusion with discrete tokens at scale for generation + understanding. QK-Norm contribution is real. |
| Rigor | ⭐⭐⭐ 3/5 | Good breadth but key metrics missing (FID, compute). Self-created benchmark. Safety gating limits repro. |
| Impact | ⭐⭐⭐⭐⭐ 5/5 | ICLR 2025. Spawned Emu3 (Nature), Janus-Pro, and the entire “tokenize everything” direction. |
| Clarity | ⭐⭐⭐⭐ 4/5 | Well-written but some key details vague (training data ratios, alignment recipe). Stability section is excellent. |
| Relevance | ⭐⭐⭐⭐⭐ 5/5 | Closest existence proof to a media gen agent product. Proves interleaved generation is viable. |
| Overall | ⭐⭐⭐⭐ 4.2/5 | |

Compared to Transfusion

| | Chameleon | Transfusion |
|---|---|---|
| Novelty | ⭐⭐⭐⭐ 4/5 | ⭐⭐⭐⭐ 4/5 |
| Rigor | ⭐⭐⭐ 3/5 | ⭐⭐⭐⭐ 4/5 |
| Impact | ⭐⭐⭐⭐⭐ 5/5 | ⭐⭐⭐⭐⭐ 5/5 |
| Clarity | ⭐⭐⭐⭐ 4/5 | ⭐⭐⭐⭐⭐ 5/5 |
| Relevance | ⭐⭐⭐⭐⭐ 5/5 | ⭐⭐⭐⭐⭐ 5/5 |

Bottom line

Together, Chameleon and Transfusion define the design space. Chameleon is the “simplicity” pole, Transfusion is the “quality” pole. Every model since sits somewhere on the spectrum. For your media gen agent: start with Chameleon’s training recipe (QK-Norm, interleaved data), use a better tokenizer (Emu3’s or continuous), consider Janus’s decoupled encoders. Multi-image consistency is still unsolved — that’s your product’s biggest technical risk.

| Improvement vector | Status | Key work |
|---|---|---|
| Fix tokenizer | Addressed by Emu3 | SBER-MoVQGAN, FSQ, LFQ |
| Decouple encoders | Addressed by Janus | Separate understanding vs generation |
| Reduce tokens/image | Area to explore | Higher compression, variable-length |
| Video generation | Area to explore | Temporal compression, 3D VQ-VAE |
| Scale to 70B+ | Area to explore | Multimodal tax at larger scale |