EntityBench: Towards Entity-Consistent Long-Range Multi-Shot Video Generation

ByteDance  ·  ByteDance Seed  ·  Rice University

Multi-shot video generation extends single-shot generation to coherent visual narratives, yet maintaining consistent characters, objects, and locations across shots remains a challenge over long sequences. Existing evaluations typically use independently generated prompt sets with limited entity coverage and simple consistency metrics, making standardized comparison difficult. We introduce EntityBench, a benchmark of 140 episodes (2,491 shots) derived from real narrative media, with explicit per-shot entity schedules tracking characters, objects, and locations simultaneously across easy / medium / hard tiers of up to 50 shots, 13 cross-shot characters, 8 cross-shot locations, 22 cross-shot objects, and recurrence gaps spanning up to 48 shots. It is paired with a three-pillar evaluation suite that disentangles intra-shot quality, prompt-following alignment, and cross-shot consistency, with a fidelity gate that admits only accurate entity appearances into cross-shot scoring. As a baseline, we propose EntityMem, a memory-augmented generation system that stores verified per-entity visual references in a persistent memory bank before generation begins. Experiments show that cross-shot entity consistency degrades sharply with recurrence distance in existing methods, and that explicit per-entity memory yields the highest character fidelity (Cohen's d = +2.33) and presence among methods evaluated.


The Benchmark

A Long-Range Cross-Shot Entity Memory Test

EntityBench scripts are derived from real narrative media, then enriched and validated by LLMs into generation-ready prompts. Each shot ships with an explicit entity_schedule naming the characters, objects, and locations expected to appear, along with cut and continuation transition flags. The three difficulty tiers separate long-range memory load from intra-shot complexity: hard-tier episodes hold per-shot composition roughly constant while pushing recurrence gaps past 30 shots and entity-slot re-appearance rates above 80%.
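A per-shot record therefore carries both the entity schedule and the transition flag. A hypothetical example (field names are illustrative, not the benchmark's exact schema):

```python
# Hypothetical shot record illustrating the kind of per-shot metadata
# EntityBench attaches; field names are illustrative, not the exact schema.
shot = {
    "shot_id": 12,
    "prompt": "Chloe pets the white fox pokemon on the red and yellow perch.",
    "transition": "cut",  # "cut" or "continuation" relative to the previous shot
    "entity_schedule": {
        "characters": ["Chloe"],
        "objects": ["the white fox pokemon", "the red and yellow perch"],
        "locations": ["The Geometric Space"],
    },
}

def scheduled_entities(shot):
    """Flatten a shot's entity schedule into one list of expected entities."""
    sched = shot["entity_schedule"]
    return sched["characters"] + sched["objects"] + sched["locations"]
```

The flat entity list is what downstream presence and fidelity checks iterate over, shot by shot.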

140 episodes · 2,491 shots · 3,718 unique entities · 68.6% re-appearance rate · 48 max recurrence gap
Figure: Recurrence-gap CCDF by tier. The hard tier carries a heavy tail past 30 intervening shots, a long-range stress test absent from prior benchmarks.
Figure: Cut / continuation structure. Chains of consecutive non-cut shots extend up to 36, providing transition-fidelity signal at scale.
Figure: Per-episode entity counts. An average episode declares 7 characters, 5 locations, and 15 objects.
Figure: Per-entity persistence. Most entities appear in 2 shots; anchor entities sustain runs of up to 9 consecutive shots.
Characters are tested most aggressively: 80.3% of character slots in the benchmark must be rendered from memory rather than from a prompt-level description.
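The recurrence-gap statistic above can be computed directly from the per-shot entity schedules. A minimal sketch, assuming each shot is given as a set of entity names in episode order:

```python
def max_recurrence_gap(shots):
    """Largest number of intervening shots between consecutive appearances
    of each entity. `shots` is a list of sets of entity names, in order.
    (Illustrative helper; not the benchmark's released tooling.)"""
    last_seen, gaps = {}, {}
    for idx, entities in enumerate(shots):
        for e in entities:
            if e in last_seen:
                gap = idx - last_seen[e] - 1  # shots strictly between appearances
                gaps[e] = max(gaps.get(e, 0), gap)
            last_seen[e] = idx
    return gaps

# An entity seen in shots 0 and 4 has 3 intervening shots:
example = [{"Chloe"}, set(), set(), set(), {"Chloe"}]
```

The hard tier's CCDF tail corresponds to entities whose value under this statistic exceeds 30.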

Evaluation

Three Pillars · 51 Metrics

The evaluation suite asks three progressive questions: is each shot well-formed in isolation? Does each shot match its prompt? Do shots agree with one another? The pillars build on each other: Pillar 2's per-shot fidelity scores filter the cross-shot pool used in Pillar 3, so cross-shot consistency is measured only on appearances the model rendered correctly.

Figure 1. The full evaluation suite: 3 pillars, 51 metrics, organised hierarchically by entity type. Pillar 2's per-entity fidelity scores gate admission into Pillar 3.
Pillar 1 · 6 metrics
Intra-shot quality

VBench-style dimensions: subject consistency, temporal flickering, motion smoothness, dynamic degree, aesthetic and imaging quality. Is each shot well-formed in isolation?

Pillar 2 · 24 metrics
Prompt-following alignment

Presence, per-entity fidelity (face / hair / clothing / build / shape / layout / …) and action correctness, scored shot-by-shot.

Pillar 3 · 21 metrics
Cross-shot consistency

DINOv2 centroid similarity for characters and objects, plus LLM pairwise identity judgment on type-specific criteria.

The fidelity gate. A naive cross-shot metric rewards methods that produce nearly static yet incorrect renderings: because the renderings look similar to one another, they are scored as "consistent." The fidelity gate admits into the Pillar 3 pool only the (shot, entity) pairs that cleared the Pillar 2 fidelity threshold, so consistency is measured only on appearances in which the entity was rendered correctly in the first place.
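The gating logic can be sketched in a few lines. The data layout, the 0.5 threshold, and the function name below are illustrative assumptions, not the suite's actual implementation:

```python
from math import sqrt

def gated_centroid_similarity(appearances, fidelity, embeddings, threshold=0.5):
    """Cross-shot consistency for one entity, computed only over appearances
    that passed the Pillar-2 fidelity gate. Names and threshold are illustrative.

    appearances: shot ids where the entity was scheduled to appear
    fidelity:    shot id -> Pillar-2 per-entity fidelity score in [0, 1]
    embeddings:  shot id -> feature vector for the entity crop (e.g. DINOv2)
    """
    passed = [s for s in appearances if fidelity[s] >= threshold]
    if len(passed) < 2:
        return None  # fewer than two gate-passing appearances: nothing to compare

    def unit(v):
        n = sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    vecs = [unit(embeddings[s]) for s in passed]
    dim = len(vecs[0])
    centroid = unit([sum(v[i] for v in vecs) / len(vecs) for i in range(dim)])
    # mean cosine similarity of each gate-passing appearance to the centroid
    return sum(sum(a * b for a, b in zip(v, centroid)) for v in vecs) / len(vecs)
```

A near-static but wrong rendering never enters `passed`, so it can no longer inflate the consistency score.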


EntityMem

Per-Entity Memory, Established Before Generation

EntityMem stores per-entity visual and textual references in a persistent memory bank before any video generation begins, so each entity's identity is established once and reused consistently throughout the sequence. At generation time, each shot retrieves its entity references independently of the scene in which they previously appeared — disentangling identity from context, and avoiding the autoregressive failure mode where distortions in early shots compound into the reference pool.

01
Entity references

Per-entity portraits and panoramic backgrounds are generated against a chroma-key background, segmented out, and verified by an LLM agent before entering the bank.

02
Keyframe composition

A Layout Agent plans each shot: character positions, camera angle, and how many keyframes are needed to capture the progression of the action.

03
Memory-augmented generation

Labeled portraits and keyframe composites are passed to the video backbone alongside the text prompt, with stored descriptions auto-injected for recurring entities.
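The three steps above can be sketched as a persistent memory bank with verify-then-admit semantics. Class and method names are hypothetical, and the real system stores images and text rather than the strings used here:

```python
class EntityMemoryBank:
    """Hypothetical sketch of EntityMem's persistent per-entity memory:
    references are verified once, before generation begins, then retrieved
    per shot independently of the scenes the entity previously appeared in."""

    def __init__(self):
        self._bank = {}  # entity name -> verified reference record

    def register(self, name, reference, verified):
        """Admit a reference only if the (LLM) verifier approved it."""
        if not verified:
            raise ValueError(f"reference for {name!r} failed verification")
        self._bank[name] = reference

    def retrieve(self, entity_schedule):
        """Look up stored references for every entity a shot expects.
        Returns (found, missing) so missing entities can be generated first."""
        found = {n: self._bank[n] for n in entity_schedule if n in self._bank}
        missing = [n for n in entity_schedule if n not in self._bank]
        return found, missing
```

Because `retrieve` reads from the fixed bank rather than from previously generated shots, an early distorted rendering cannot leak back into the reference pool.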


Experiments

Results on EntityBench

Numbers below are fidelity-gate-corrected means: per-episode scores are weighted by the number of gate-passing instances they contributed, so methods that fail the gate on harder cases are penalised accordingly. Bold values mark the column winner.
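The correction amounts to an instance-weighted mean over episodes. A minimal sketch, assuming each episode reports a (mean score, gate-passing instance count) pair:

```python
def gate_corrected_mean(episodes):
    """Weighted mean of per-episode scores, each weighted by the number of
    gate-passing (shot, entity) instances the episode contributed.
    Illustrative helper, not the benchmark's released tooling.

    episodes: iterable of (mean_score, n_gate_passing) pairs
    """
    episodes = list(episodes)
    total = sum(n for _, n in episodes)
    if total == 0:
        return None  # no episode cleared the gate
    return sum(score * n for score, n in episodes) / total

# An episode that clears the gate on only 1 instance barely moves the mean:
# gate_corrected_mean([(0.9, 10), (0.1, 1)]) sits close to 0.9, whereas an
# unweighted mean of the two episode scores would be 0.5.
```

Methods that fail the gate on harder cases contribute small weights there, which is exactly the penalty the text describes.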

| Metric | Ours | StoryMem | HoloCine | CineTrans |
|---|---|---|---|---|
| **Pillar 1 · Intra-shot quality** | | | | |
| subject_consistency | 0.881 | 0.759 | 0.860 | **0.968** |
| temporal_flickering | 0.976 | 0.838 | 0.957 | **0.979** |
| motion_smoothness | 0.988 | 0.849 | 0.964 | **0.990** |
| dynamic_degree | 0.657 | 0.562 | **0.721** | 0.688 |
| aesthetic_quality | 0.593 | 0.475 | 0.518 | **0.596** |
| imaging_quality [0,100] | 66.00 | 56.41 | 49.97 | **68.57** |
| **Pillar 2 · Intra-shot prompt-following** | | | | |
| *Presence* | | | | |
| intra_character_presence | **0.967** | 0.849 | 0.882 | 0.796 |
| intra_object_presence | 0.888 | **0.893** | 0.723 | 0.776 |
| intra_location_presence | **0.687** | 0.681 | 0.624 | 0.651 |
| *Character fidelity* | | | | |
| intra_face_fidelity | **0.740** | 0.452 | 0.349 | 0.327 |
| intra_face_face | **0.607** | 0.424 | 0.369 | 0.366 |
| intra_face_hair | **0.684** | 0.485 | 0.482 | 0.413 |
| intra_face_clothing | **0.802** | 0.504 | 0.339 | 0.378 |
| intra_face_build | **0.726** | 0.539 | 0.449 | 0.521 |
| *Object fidelity* | | | | |
| intra_object_fidelity | 0.601 | **0.618** | 0.267 | 0.384 |
| intra_object_shape | **0.712** | 0.701 | 0.373 | 0.508 |
| intra_object_color_texture | 0.691 | **0.709** | 0.331 | 0.480 |
| intra_object_proportions | **0.728** | 0.715 | 0.383 | 0.539 |
| intra_object_details | 0.573 | **0.598** | 0.256 | 0.371 |
| *Location fidelity* | | | | |
| intra_location_fidelity | **0.555** | 0.504 | 0.306 | 0.428 |
| intra_location_layout | **0.603** | 0.529 | 0.354 | 0.474 |
| intra_location_color_mood | **0.706** | 0.627 | 0.474 | 0.588 |
| intra_location_landmarks | **0.562** | 0.522 | 0.305 | 0.429 |
| intra_location_perspective | **0.557** | 0.520 | 0.346 | 0.488 |
| *Action correctness* | | | | |
| intra_action_overall | **0.618** | 0.547 | 0.569 | 0.273 |
| intra_action_depicted | **0.519** | 0.446 | 0.458 | 0.124 |
| intra_action_subject_identity | **0.706** | 0.595 | 0.606 | 0.478 |
| intra_action_subject_action | **0.697** | 0.626 | 0.695 | 0.323 |
| intra_action_object_interaction | **0.781** | 0.712 | 0.616 | 0.346 |
| intra_action_motion_quality | 0.716 | 0.723 | **0.772** | 0.528 |
| **Pillar 3 · Cross-shot consistency** | | | | |
| *DINOv2 embedding similarity* | | | | |
| cs_face | 0.737 | **0.792** | 0.751 | 0.772 |
| cs_object | 0.798 | **0.839** | 0.803 | 0.794 |
| cs_transition_boundary | **0.738** | 0.663 | 0.498 | 0.508 |
| *LLM pairwise · characters* | | | | |
| llm_face_accuracy | **0.406** | 0.226 | 0.228 | 0.091 |
| llm_face_mean_score | **0.426** | 0.234 | 0.242 | 0.145 |
| llm_face_face | **0.381** | 0.216 | 0.223 | 0.145 |
| llm_face_hair | **0.447** | 0.248 | 0.282 | 0.175 |
| llm_face_clothing | **0.464** | 0.241 | 0.242 | 0.143 |
| llm_face_build | **0.489** | 0.260 | 0.285 | 0.217 |
| *LLM pairwise · objects* | | | | |
| llm_object_accuracy | 0.164 | **0.203** | 0.088 | 0.092 |
| llm_object_mean_score | 0.202 | **0.222** | 0.094 | 0.145 |
| llm_object_shape | 0.232 | **0.239** | 0.104 | 0.180 |
| llm_object_color_texture | 0.235 | **0.243** | 0.104 | 0.190 |
| llm_object_proportions | 0.238 | **0.244** | 0.105 | 0.195 |
| llm_object_details | 0.184 | **0.209** | 0.087 | 0.124 |
| *LLM pairwise · locations (camera-invariant)* | | | | |
| llm_scene_accuracy | 0.309 | **0.398** | 0.304 | 0.119 |
| llm_scene_mean_score | 0.659 | **0.671** | 0.616 | 0.432 |
| llm_scene_layout | **0.697** | 0.684 | 0.641 | 0.449 |
| llm_scene_color_mood | 0.716 | **0.724** | 0.669 | 0.619 |
| llm_scene_landmarks | 0.603 | **0.637** | 0.563 | 0.346 |
| llm_scene_perspective | **0.727** | 0.696 | 0.713 | 0.467 |

Bold = column winner per row. All values are fidelity-gate-corrected means (imaging_quality on [0,100]; all others on [0,1]).

Qualitative examples

Qualitative examples comparing the strongest persistent-memory baseline (StoryMem) with our per-entity memory bank (EntityMem).

Example 1

[Videos: shots 1, 3, 4, 7, 8; StoryMem vs. Ours (EntityMem)]
Shot 1.
<registry>
Marcus: Dark-skinned man with a goatee, bare-chested, wearing an open white lab coat, a white cap with a rainbow emblem, and green-rimmed glasses. Leo: Man with blonde hair wearing an open white lab coat. Akira: Shirtless young man with spiky red and black hair, wearing a necklace. Chloe: A young girl with long blonde hair and green eyes. The School Classroom: Classroom with light green walls, a wooden floor, and bookshelves. the striped t-shirt: Short-sleeved crew-neck t-shirt with a pattern of horizontal blue and white stripes. the classroom bookshelves: Bookshelves with framed pictures on the shelves.
<prompts>
Marcus, wearing the white lab coat, and Leo, wearing the striped t-shirt under his white lab coat, are talking with Chloe near the classroom bookshelves. They all turn as Akira walks into the frame from the left to join them.
Shot 3.
<registry>
The Geometric Space: Abstract geometric space with a brown geometric surface and large, static red and yellow graphic shapes and structures. the white fox pokémon: Small, fox-like Pokémon (Alolan Vulpix) with fluffy white fur that has some reddish-brown coloration, large pointed ears, a curled tuft of fur on its head, and multiple curled tails.
<prompts>
Medium shot in The Geometric Space. Chloe walks over to the red and yellow perch where the white fox pokémon sits. Rohan stands beside her, watching intently as Chloe gently pets the white fox pokémon.
Shot 4.
<registry>
no new entities — all recurring from earlier shots
<prompts>
High-angle medium shot of the white fox pokémon standing on a brown geometric surface in The Geometric Space, its tails twitching slightly. Nearby, Chloe takes a small step toward the white fox pokémon, her eyes widening in awe and her mouth falling slightly open. The large, red and yellow structure of the red and yellow perch is in the background.
Shot 7.
<registry>
the pink flower hair clip: A bright pink flower hair ornament composed of five simple, rounded petals and a small, circular yellow center, all with a smooth appearance and a thin black outline. the green chalkboard: Large, rectangular chalkboard with a clean, dark green, matte surface. the yellow electric pokémon: Small, yellow, rodent-like Pokémon with long black-tipped ears, circular red cheeks, and a lightning-bolt-shaped tail.
<prompts>
Chloe, wearing the pink flower hair clip, and Marcus, standing in front of the green chalkboard with the yellow electric pokémon, turn and smile at Akira, who smiles back.
Shot 8.
<registry>
The Open Field: An open outdoor space under a blue sky.
<prompts>
Close-up shot of Leo smiling in The Open Field while wearing the striped t-shirt. The yellow electric pokémon suddenly jumps onto Leo's shoulder.

Example 2

[Videos: shots 1–5; StoryMem vs. Ours (EntityMem)]
Shot 1.
<registry>
Tingo: A large, rounded character in a plush, full-body purple suit. It has a silver rectangular screen on its abdomen and a single antenna on its head shaped like an inverted triangle. The Playroom: A colorful indoor playroom featuring a bright red floor, a red door, and a large blue slide. The Hillside Dome: A surreal, dome-shaped house covered in green grass, built into the side of a vibrant green rolling hill dotted with colorful flowers.
<prompts>
Medium shot of Tingo happily dancing and singing in The Playroom.
Shot 2.
<registry>
Sunny: A stylized sun with the smiling face of a baby.
<prompts>
Eye-level medium shot in The Playroom. Chloe smiles in the foreground, sitting amidst the balls in the colourful ball pit. In the background, Tingo, Poppy, Zippy and Gigi jump energetically with their arms raised in the air, and Sunny bobs energetically in the air.
Shot 3.
<registry>
The Sunny Sky: Bright blue sky with fluffy white clouds and a stylized, glowing sun with radiating light beams.
<prompts>
Extreme close-up of Sunny in The Sunny Sky. Sunny smiles and giggles as bright light beams radiate outwards against the blue sky and fluffy clouds.
Shot 4.
<registry>
The Sun Meadow: A surreal outdoor landscape of rolling, bright green hills under a sky that features a stylized sun with a smiling baby's face.
<prompts>
In the vibrant, green landscape of The Sun Meadow, the doors of The Hillside Dome open. Tingo emerges, runs forward, and begins to dance under the sky with its smiling baby-faced sun.
Shot 5.
<registry>
the intricate blue door: Large, round, arched blue door with an intricate, repeating raised pattern of concentric circles and geometric shapes across its surface.
<prompts>
Wide shot of The Hillside Dome set into a rolling green hill in The Sun Meadow. Tingo, Zippy, Poppy and Chloe run towards the house and enter through the intricate blue door. The door closes, then reopens, and the four characters run back out into the sunny landscape one by one.

Example 3

[Videos: shots 1–5; StoryMem vs. Ours (EntityMem)]
Shot 1.
<registry>
Young-ho: Older Korean official with a grey beard and a wrinkled face, in a traditional red and black robe and a wide-brimmed black hat (Gat) with beaded strings. The Grand Throne Hall: Interior of a traditional Korean palace throne room with a neutral color palette. Hwan: Older Korean King with a beard, a large, wide-brimmed black hat (Gat) with beaded strings and a gold ornament, and a traditional red King's Robe layered with a dark vest. the ceremonial blue and pink robe: Traditional Korean official's robe made of a fine, silky fabric with a vibrant light blue or teal body, wide flowing sleeves, and contrasting bright pink fabric on the collar, cuffs, and front panels. the official's wing-tipped hat: Formal black Joseon-era official's hat (Samo) with a rounded crown and two thin, upright, wing-like projections extending from the back.
<prompts>
Young-ho stands in The Grand Throne Hall before the wooden lattice screen, addressing the unseen Hwan. As he speaks, his expression slowly shifts into a sinister grin.
Shot 2.
<registry>
Do-yun: A man in a traditional turquoise and purple hanbok and a tall, wide-brimmed black hat (gat).
<prompts>
A close-up of Do-yun inside The Grand Throne Hall. He wears the traditional turquoise and purple hanbok and a tall, wide-brimmed black hat (gat). Do-yun's eyes drift downwards, a pensive and somber expression settling on his face.
Shot 3.
<registry>
Jun-seo: Middle-aged Korean man wearing a traditional purple King's Robe and a black hat (Gat).
<prompts>
In The Royal Quarters, surrounded by the dark wood furniture, Mi-kyung, wearing the green and maroon hanbok, sits on the polished wood floor, breathing shakily as she cradles her arm, her gaze fixed downward. Jun-seo takes a slow step closer, looking down at her with a somber and concerned expression.
Shot 4.
<registry>
no new entities — all recurring from earlier shots
<prompts>
Extreme close-up in The Infirmary Wing on the face of Jin-woo, who wears the black headband of the black assassin's garb. He lies on the wooden floor near Hwan with his eyes shut as a single tear rolls down his cheek.
Shot 5.
<registry>
The Snowy Courtyard: A spacious outdoor courtyard of a traditional Korean palace, with the grounds entirely covered in white snow.
<prompts>
In The Snowy Courtyard, Mi-kyung kneels and bows her head deeply in the snow in front of a palace building. Standing together in the foreground, Hwan, Sang-hoon, Do-yun, Jun-seo and Young-ho watch her, their expressions growing more solemn.

Citation

BibTeX

@article{he2026entitybench,
  title = {EntityBench: Towards Entity-Consistent Long-Range Multi-Shot Video Generation},
  author = {He, Ruozhen and Meng, Wei and Yang, Ziyan and Ordonez, Vicente},
  journal = {Preprint},
  year = {2026},
}