⚡ Promptolis Original · Creative & Arts

🎬 Sora 2 / Runway / Luma Video Prompt Engineer

Platform-specific video prompts (Sora 2, Runway Gen-4, Luma Ray 2, Higgsfield, Pika). Subject + action + camera + lighting + style + duration. Iteration strategy + post-production workflow.

⏱️ 3 min 🤖 10-15 min 🗓️ Updated 2026-05-11
⚡ Quick Answer

Sora 2 / Runway / Luma Video Prompt Engineer — Platform-specific video prompts (Sora 2, Runway Gen-4, Luma Ray 2, Higgsfield, Pika). Subject + action + camera + lighting + style + duration. Iteration strategy + post-production workflow. Setup: 3 min · Best AI: Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth. · Cost: Free, MIT-licensed.

Why this is epic

Platform-aware — different platforms have different strengths: Sora 2 for narrative, Runway for motion graphics, Luma for realism, Higgsfield for camera moves, Pika for quick iterations.

Camera move specification is 50% of cinematic feel — most prompts skip it.

Iteration strategy + post-production workflow built in. Plan 2-3 iterations + Topaz upscale + DaVinci grade.

📑 Page navigation + Key Takeaways

📌 Key Takeaways

  • What it is: Platform-specific video prompts (Sora 2, Runway Gen-4, Luma Ray 2, Higgsfield, Pika). Subject + action + camera + lighting + style + duration. Iteration strategy + post-production workflow.
  • Best for: Music video production (cross-sync with Suno tracks)
  • Time investment: 3 min setup, 10-15 min output
  • Recommended AI model: Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth.
  • Cost: Free forever — MIT-licensed, no signup, no paywall

📑 On this page

  1. The prompt (copy-ready)
  2. How to use it (4 steps)
  3. Example input + output
  4. Common use cases
  5. Pro tips + variants
  6. FAQ

⚙️ At a glance

Category:
Creative & Arts
Setup time:
3 min
Output time:
10-15 min
Best AI model:
Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth.
License:
MIT (free commercial use)
Last reviewed:
2026-05-11
📊 Promptolis Original vs generic AI prompts

  • Structure: XML + chain-of-thought (Promptolis) vs. role-play one-liner (generic)
  • Example output: real full example vs. rarely included
  • Variants: 3-7 per prompt vs. a single prompt
  • Output quality: +30-50% more accurate [Anthropic] vs. baseline

On the other hand, generic prompts work fine for simple lookups. Promptolis Originals shine for nuanced reasoning where precision matters.

The prompt

Promptolis Original · Copy-ready
<role> You are an AI Video Generation Prompt Engineer for Sora 2, Runway Gen-4, Luma Ray 2, Higgsfield, and Pika Labs. You produce video prompts that work with each platform's actual quirks — not generic 'make a beautiful video' tags. You know each platform's strengths: Sora 2 = best at narrative continuity + physics + character consistency. Runway Gen-4 = best at motion graphics + camera moves + post-production-ready clips. Luma Ray 2 = best at hyper-realistic quick generations. Higgsfield = best at cinematic camera moves + character action. Pika = best at quick iterations + lip-sync + 2.5D motion. You structure prompts as: subject + action + camera + lighting + style + duration + reference. Front-load the most-important element. Specify shot length (Sora 8s/12s/20s, Runway 5s/10s, Luma 5s). </role> <principles> 1. Subject + action FIRST. Most prompts fail because they describe scenes without specifying what's happening. 2. Camera moves are 50% of cinematic feel. Specify: dolly-in, tracking, crane, handheld, locked-off, push-pull, orbital, parallax. 3. Lighting matters more than 'beautiful.' Specify: golden hour, blue hour, top-down hard light, soft window light, neon under-lighting, rim light, practical sources. 4. Reference style by descriptor, not by named filmmaker — 'Wong Kar-wai style' may filter; 'rich saturated color, slow-motion intercuts, neon-soaked atmosphere' won't. 5. Duration must match platform: Sora 5-20s, Runway 5-10s, Luma 5s. Don't ask Pika for 20s — won't work. 6. Negative prompts (where supported) at END: 'no warped faces, no flickering, no obvious AI artifacts.' 7. For character consistency across shots: lock the character description in tight tags + use platform-specific tools (Sora video-extend, Runway character locks). 8. For narrative sequences: storyboard the sequence into individual prompts, each with continuity anchors (same lighting, same character description). 9. 
Aspect ratio matters: 16:9 (YouTube), 9:16 (TikTok/Reels), 1:1 (Instagram square), 4:3 (cinematic vintage). 10. Iterate fast. Most platforms credit-charge per generation. Plan 2-3 iterations per shot, not 1 hopeful. </principles> <input> <platform>{Sora 2 / Runway Gen-4 / Luma Ray 2 / Higgsfield / Pika Labs}</platform> <shot-purpose>{single-shot / part-of-sequence / loop / motion-graphic / character-action / atmospheric}</shot-purpose> <subject>{the WHO or WHAT — character, object, environment}</subject> <action>{the WHAT THEY DO — specific motion or change in the shot}</action> <camera>{shot type + camera move + duration of move}</camera> <lighting-mood>{specific light + atmosphere}</lighting-mood> <style-reference>{descriptor-based, not named-filmmaker — e.g. 'analog film grain + muted palette + handheld'}</style-reference> <duration>{seconds, within platform limits}</duration> <aspect-ratio>{16:9 / 9:16 / 1:1 / 4:3 / 21:9}</aspect-ratio> <negative-anchors>{what to avoid}</negative-anchors> </input> <output> ## Platform-Optimized Prompt [The actual prompt, formatted for the chosen platform's quirks. Front-loaded.] ## Why This Order (Front-Load Audit) [Why you put the elements in this sequence. What carries 3x weight in this platform.] ## Camera Specification Detail [Expand on camera move with platform-specific terminology.] ## Lighting + Mood Specification [Specific light setup, color temp, time-of-day atmosphere.] ## Iteration Strategy [Plan: first generation, expected failure modes, second generation tweaks, third generation refinement.] ## If Used in a Sequence [Continuity anchors — same character description, lighting, color palette across shots.] ## Common Failure Modes for This Platform + Subject [Specific to platform + this kind of shot. Sora character-consistency, Runway motion-blur issues, etc.] ## Post-Generation Workflow [What to do after — upscaling (Topaz, Magnific), color-grade (DaVinci Resolve), edit (CapCut, Premiere). Cross-tool workflow.] 
</output>
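The structural rules in the prompt above (front-load subject + action, negatives last, per-platform duration limits) can be made concrete with a small helper. A minimal Python sketch — the function and field names are illustrative, not part of any platform's API, and the limits are the ones stated in the prompt (Sora 2: 5-20s, Runway: 5-10s, Luma: 5s):

```python
# Illustrative sketch: assemble a front-loaded video prompt and check
# duration against the per-platform limits stated in the prompt above.
# Function and field names are hypothetical, not any platform's API.

PLATFORM_LIMITS = {  # seconds (min, max), per principle 5
    "Sora 2": (5, 20),
    "Runway Gen-4": (5, 10),
    "Luma Ray 2": (5, 5),
}

def build_prompt(platform, subject, action, camera, lighting,
                 style, duration, aspect_ratio, negatives=None):
    lo, hi = PLATFORM_LIMITS.get(platform, (1, 60))
    if not lo <= duration <= hi:
        raise ValueError(f"{platform} supports {lo}-{hi}s, got {duration}s")
    parts = [subject, action, camera, lighting,   # front-loaded order
             f"Style: {style}.", f"{aspect_ratio} aspect.",
             f"Duration {duration} seconds."]
    if negatives:                                  # negatives go at the end
        parts.append("Negative: " + ", ".join(negatives) + ".")
    return " ".join(parts)

prompt = build_prompt(
    "Sora 2",
    "A solo dancer in a red silk dress, motionless in deep shadow.",
    "A single warm light fades up; her head tilts, eyes opening.",
    "Slow dolly-in from medium-wide to medium-close-up over 12 seconds.",
    "Theatrical chiaroscuro, single tungsten key, no fill.",
    "16mm film grain, rich red against pure black",
    12, "16:9", ["no warped face", "no flickering background"],
)
print(prompt)
```

Asking for 12 seconds on Luma Ray 2 would raise a `ValueError` here, mirroring the prompt's rule that duration must match the platform.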

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.6.

📝 Input
<platform>Sora 2</platform>
<shot-purpose>part-of-sequence (4-shot music video opening)</shot-purpose>
<subject>Solo female dancer, 30s, wearing flowing red silk dress, long dark hair, mid-dark skin tone</subject>
<action>Begins motionless in shadow, lifts head as light hits face, eyes open with controlled stillness, NO dancing yet — pre-dance moment</action>
<camera>Slow dolly-in from medium-wide to medium-close-up over 12 seconds, slight upward arc</camera>
<lighting-mood>Single warm tungsten light from upper-camera-left fading from black, no fill, deep shadow side, theatrical chiaroscuro</lighting-mood>
<style-reference>16mm film grain, slight gate weave, rich saturated reds against pure black, restrained palette</style-reference>
<duration>12 seconds</duration>
<aspect-ratio>16:9 cinematic</aspect-ratio>
<negative-anchors>no warped face, no extra fingers, no flickering background, no anachronistic background elements, no obvious AI distortion in clothing folds</negative-anchors>
🤖 Output

Platform-Optimized Prompt

A solo female dancer in her 30s, mid-dark skin tone, long dark hair, wearing a flowing red silk dress, begins motionless in deep shadow with eyes closed. Slowly, a single warm tungsten light fades up from upper camera-left, illuminating only the right side of her face and the red silk falling across her shoulder. Her head tilts up by 15 degrees over 6 seconds, eyes opening with controlled stillness. She does not dance yet — this is the moment before. The camera dollies in slowly from medium-wide framing to medium-close-up over the full 12 seconds, with a subtle upward arc that elevates her presence. Theatrical chiaroscuro lighting. The shadow side of her face remains in black silhouette throughout. Background: pure black, no visible environment. Style: 16mm film grain texture, slight gate weave, rich saturated red against pure black, restrained palette, cinematic. 16:9 aspect. Duration 12 seconds. Negative: no warped face, no extra fingers, no flickering background, no anachronistic elements, no AI distortion in clothing folds.

Why This Order (Front-Load Audit)

Sora 2 weights the first ~100-150 chars heavily for character consistency + scene setup. The ordering: character → setting (motionless in shadow) → action (light fades up, head tilts) → camera (dolly-in) → style (16mm grain) → negatives.

If you front-loaded the camera move first, Sora would over-weight the technical motion and under-weight the character. Subject + state first, then action, then camera, then aesthetic.

Camera Specification Detail

  • Start framing: Medium-wide. Dancer occupies center 40% of frame vertically, head at upper third.
  • End framing: Medium-close-up. Head + shoulders fill upper 60% of frame.
  • Dolly-in over: 12 seconds, EASE-IN-OUT (not linear) — accelerates slightly in middle then settles at end.
  • Subtle upward arc: ~5-degree vertical rise during the dolly. Elevates the subject psychologically without becoming dramatic crane move.
  • Sora 2 handles this kind of slow controlled move well IF the action is also slow (which it is — eyes opening + head tilt is sub-shot-scale motion).

Lighting + Mood Specification

  • Source: Single tungsten practical, ~3200K, hard quality, upper-camera-left ~30 degrees above eye line.
  • No fill: Deep shadow on camera-right side of face. Pure black, not soft fill.
  • Practical motivation: Implied stage spotlight, but Sora doesn't need that explained — the chiaroscuro instruction handles it.
  • Color temp: Warm key (~3200K) against pure black background. Red silk picks up the warm tone, becomes near-orange in highlight, deep crimson in mid-tone.
  • No backlight, no rim light, no spill — keeps composition theatrical-minimal.

Iteration Strategy

Generation 1 (expected outcome): Sora produces a roughly correct shot. Likely failures: face may have slight inconsistency between start and end frames; the dolly-in may be too fast (Sora tends to compress timing); silk fabric may show 'AI sheen' (over-smooth motion).

Generation 2 tweaks: If face inconsistent, add 'consistent same dancer throughout, identical face features.' If dolly too fast, increase 'over the full 12 seconds' emphasis + add 'slow patient camera move, deliberate pacing.' If fabric issue, add 'realistic silk physics with weighted folds, natural fabric drape.'

Generation 3 refinement: Color-correct in post — Sora tends to slightly over-saturate red. Pull saturation back 5-10% in DaVinci.

If Used in a Sequence (4-shot music video opening)

Continuity anchors for next 3 shots:

  • Same dancer description (lock to: 'mid-dark skin tone, long dark hair, red silk dress')
  • Same lighting language (warm tungsten, no fill, theatrical chiaroscuro) — adjust for new shot's framing
  • Same color palette (red against black)
  • Same film-grain style (16mm)

Shot 2 suggestion: the first gesture begins — arm extends up into the light. Camera locks off. 8 seconds.

Shot 3: Full body silhouette as light expands. Wider framing. 10 seconds.

Shot 4: First spin into full motion. Multi-camera-friendly. 12 seconds.

Common Failure Modes for Sora 2 + This Subject

  • Face inconsistency between start (eyes closed) and middle (opening) and end (open): Sora 2 has improved but still drifts. Mitigate with explicit 'same dancer throughout, consistent face features, no morphing.'
  • Hands appearing or hand-distortion: Specify 'arms relaxed at sides, hands not visible until shot 2.' Pre-empt the hand-failure mode.
  • Background creep: Sora may add subtle environmental details to 'pure black.' Reinforce with 'pure black void background, no visible environment, no shadows on a floor.'
  • Over-dramatic camera move: Sora 2 sometimes accelerates dolly-ins. Counter with 'patient, slow camera, deliberate pacing.'

Post-Generation Workflow

1. Generate 3 variations. Pick the best.

2. Upscale to 4K in Topaz Video AI (Sora 2 outputs at 1080p typically).

3. Color grade in DaVinci Resolve: pull saturation back 5%, lift mid-tones slightly, deepen blacks.

4. Add subtle film grain if Sora's grain looks too clean — Filmconvert or DaVinci's grain tool.

5. Stabilize subtle camera-shake if any drift. Resolve's optical-flow stabilizer.

6. Edit into sequence with the next 3 shots in DaVinci or Premiere. Maintain consistent color treatment across all 4 shots.

7. Final delivery: ProRes 422 HQ at 24fps for cinema-grade or H.264 at 60Mbps for YouTube/Vimeo.
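The two delivery targets in step 7 can be expressed as ffmpeg invocations. A hedged sketch — the codec flags are standard ffmpeg options, but the preset names and exact rate-control choices are reasonable defaults, not mandated settings:

```python
# Sketch of the step-7 delivery presets as ffmpeg command lines.
# prores_ks profile 3 = ProRes 422 HQ; -b:v 60M = 60 Mbps H.264.
# Preset names and file names are illustrative.

def delivery_cmd(src, dst, target):
    if target == "prores":          # cinema-grade: ProRes 422 HQ at 24 fps
        codec = ["-c:v", "prores_ks", "-profile:v", "3", "-r", "24"]
    elif target == "youtube":       # H.264 at 60 Mbps for YouTube/Vimeo
        codec = ["-c:v", "libx264", "-b:v", "60M", "-pix_fmt", "yuv420p"]
    else:
        raise ValueError(f"unknown target: {target}")
    return ["ffmpeg", "-i", src, *codec, dst]

print(" ".join(delivery_cmd("shot1_graded.mov", "shot1_master.mov", "prores")))
```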

📋 How to use this prompt (4 steps · under 60 seconds)
  1. Copy the prompt above. Click "Copy prompt" — the XML-structured prompt is now on your clipboard.
  2. Open ChatGPT, Claude, or Gemini. One-click launch above. Recommended: Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth.
  3. Paste + fill placeholders. Replace {curly braces} with your context. Specificity = quality.
  4. Run + iterate. Setup: 3 min. Output: 10-15 min.

Common use cases

  • Music video production (cross-sync with Suno tracks)
  • Short-form social media video
  • Narrative video sequences (storyboard-driven)
  • Character-action video work
  • Atmospheric / mood-piece video
  • Spotify Canvas video creation
  • AudioViz Tool source-material generation

Best AI model for this

Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth.

Pro tips

  • Subject + action FIRST in prompt
  • Camera moves: dolly/tracking/crane/handheld — specify
  • Lighting > 'beautiful' — specific light setup
  • Reference style by descriptor, not named filmmaker
  • Duration must match platform limits
  • Character consistency: --cref or video-extend tools
  • Plan 2-3 iterations per shot, not 1 hopeful

Customization tips

  • For Runway Gen-4 instead of Sora: shorter prompts, more emphasis on camera + style. Runway favors motion-graphic / cinematic shots over narrative-character work.
  • For Luma Ray 2: lean into hyper-realism + quick iteration. Luma is fast but less narrative-coherent than Sora.
  • For Higgsfield: cinematic camera moves are the differentiator. Lead with camera language. 'Higgsfield camera control: orbital around subject at constant 4-foot radius.'
  • For Pika Labs: best for 2.5D motion + lip-sync. Don't try long narrative.
  • For aspect ratios — 9:16 vertical (TikTok/Reels): adjust framing language. 'Subject vertically centered, full-body in frame, head upper third.'
  • For character consistency across multiple shots: extract a still from generation 1 and use as reference image where platform allows.
  • For users hitting credit limits: optimize the storyboard to minimize generations. 4 shots × 3 attempts = 12 generations. Plan accordingly.
  • Premium pack content: 30+ camera-move templates, 20+ lighting setups, post-production color-grade LUTs, AudioViz visual sync recipes.
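The credit-limit tip above can be made concrete with a trivial budget calculation. A sketch — the per-generation credit figure is a placeholder, not any platform's real pricing:

```python
# Sketch: plan total generations (and rough credit cost) for a storyboard,
# following the "2-3 iterations per shot" rule. Credit cost is a placeholder.

def generation_budget(shots, attempts_per_shot=3, credits_per_gen=10):
    total = shots * attempts_per_shot
    return total, total * credits_per_gen

gens, credits = generation_budget(shots=4)   # 4-shot sequence, 3 tries each
print(gens, credits)                         # 12 generations
```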

Variants

  • Sora 2 Narrative: character + action + 5-20s shots
  • Runway Gen-4 Cinematic: camera moves + style
  • Luma Ray 2 Realism: hyper-realistic 5s clips
  • Higgsfield Camera Control: cinematic camera as differentiator
  • Pika 2.5D Motion: quick iterations + lip-sync
  • Music Video Sequence: multi-shot storyboard
  • Vertical (TikTok/Reels): 9:16 framing adjustments

Frequently asked questions

Common questions about this prompt and how to get the best results from it.

How do I use the Sora 2 / Runway / Luma Video Prompt Engineer prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Sora 2 / Runway / Luma Video Prompt Engineer?

Claude Opus 4.6 — multi-platform + cinematic reasoning needs depth.

Can I customize the Sora 2 / Runway / Luma Video Prompt Engineer prompt for my use case?

Yes — every Promptolis Original is designed to be customized. Key levers: put subject + action first in the prompt, and always specify the camera move (dolly, tracking, crane, handheld).

What does it cost to use this prompt?

The prompt itself is free, MIT-licensed, with no email signup required. You only pay for your AI model subscription (ChatGPT Plus $20/mo, Claude Pro $20/mo, Gemini Advanced $20/mo) — and even those have free tiers that work with most Promptolis Originals.

How is this different from PromptBase or PromptHero?

PromptBase sells prompts in a marketplace ($2-15 each). PromptHero focuses on image-generation prompts. Promptolis Originals are free, MIT-licensed text/reasoning prompts hand-crafted with full example outputs, multiple variants, and a recommended best AI model per prompt. We don't sell anything.

Explore more Originals

Hand-crafted 2026-grade prompts that actually change how you work.

← All Promptolis Originals