Grading AI-Generated Footage in DaVinci Resolve
AI-generated video breaks assumptions that colour grading software makes. DaVinci Resolve is exceptional at correcting camera footage because it's built around how cameras behave. AI footage doesn't behave like a camera. This is what we've learned grading it.
The Core Problem
Camera footage contains structured noise. CMOS sensors produce predictable grain patterns that Resolve's noise reduction understands. AI footage contains unstructured artifacts — temporal inconsistencies, edge shimmering, colour shifts that occur mid-clip without corresponding to any physical process. Resolve's auto-tools see this as camera noise and handle it incorrectly.
The fix: turn off all automatic analysis on AI clips. Don't let Resolve's colour science attempt to interpret the footage. Work manually.
The Node Structure
For a mixed production — live footage with AI inserts — we use a specific node tree:
Input Transform → Exposure Normalise → Primary Correction → AI Match → Output Transform
The AI Match node is where the work happens. We use the Colour Warper to push the AI clip's colour distribution toward the live camera package. The goal is matching the midtones. Highlights and shadows in AI footage often look off regardless, but nailing the midtones makes the cut work.
One thing that helps: shoot a colour chart on every live production day. Reference that chart in Resolve to establish the "truth" of the camera package. When AI footage comes in, you have something concrete to match against.
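One way to make that chart "truth" concrete outside the GUI: read the chart's grey patch off the camera package, sample a neutral area in the AI clip, and compute per-channel gains that pull one toward the other. The helper below is a hypothetical sketch (not part of Resolve's API), assuming linear RGB values read from the scopes.

```python
import numpy as np

def chart_gains(camera_grey_rgb, ai_grey_rgb):
    """Per-channel gains that pull an AI clip's sampled neutral toward
    the camera chart's grey patch. Inputs are 0-1 linear RGB triples."""
    cam = np.asarray(camera_grey_rgb, dtype=float)
    ai = np.asarray(ai_grey_rgb, dtype=float)
    return cam / np.maximum(ai, 1e-6)  # guard against divide-by-zero

# Example: neutral camera grey vs a slightly warm AI render
gains = chart_gains([0.42, 0.42, 0.42], [0.46, 0.42, 0.38])
```

The resulting gains (red pulled down, blue pushed up, in this example) translate directly into gain-wheel moves on the AI Match node.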
Handling Temporal Drift
The hardest problem: AI clips often shift in brightness or colour between frames, even on what should be a static shot. A single grade node won't fix this — it needs frame-level correction.
We use Resolve's tracker not for motion tracking, but to generate a motion analysis curve. This tells us where the clip has significant frame-to-frame changes. We then go to those frames and apply keyframed colour corrections.
For clips with severe drift, we isolate a neutral area of the frame using a window and use it as a per-frame reference point. Time-consuming, but the only approach that works reliably.
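The per-frame reference idea reduces to a simple computation: average the neutral window on each frame, anchor to the first frame, and derive per-frame RGB gains that hold the window steady. A minimal numpy sketch, assuming frames are already decoded to float arrays (the function and its inputs are illustrative, not Resolve API calls):

```python
import numpy as np

def drift_gains(frames, window):
    """Per-frame RGB gains that hold a neutral window steady over time.

    frames: iterable of HxWx3 float arrays (0-1 linear RGB)
    window: (row_slice, col_slice) isolating the neutral area
    """
    # Mean RGB of the neutral window on each frame
    means = np.array([f[window].reshape(-1, 3).mean(axis=0) for f in frames])
    ref = means[0]                        # first frame is the anchor
    return ref / np.maximum(means, 1e-6)  # gain per frame, per channel
```

Each row of the result maps onto a keyframed gain correction: a frame whose neutral window drifted brighter gets a gain below 1.0 to pull it back to the anchor.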
Grain Addition
AI footage has no film grain, so it reads as "digital" against organic camera footage. We add grain after all colour work is done:
- Resolve's Film Grain node only — no third-party plugins
- Grain size matched to the camera package (we document grain characteristics for every camera we shoot on)
- Grain strength: 10–18% — less than you'd think
The instinct is to over-grain because the AI footage looks flat. Resist it: match the measured grain strength of the live footage exactly, and no stronger.
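Documenting grain characteristics per camera is easier with a number attached. One rough proxy: the standard deviation of a flat, evenly lit patch (a defocused grey card works). The sketch below is a hypothetical measurement helper, not a Resolve feature, assuming frames decoded to 0-1 float arrays:

```python
import numpy as np

def grain_strength(frame, patch):
    """Std-dev of a flat patch — a rough proxy for grain amplitude.
    Measured once per camera body and logged alongside grain-size notes."""
    p = np.asarray(frame, dtype=float)[patch]
    return float(p.std())

# Example on synthetic data: a grey card with known noise
rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.02, (200, 200, 3))  # simulated grain, std 0.02
measured = grain_strength(noisy, (slice(50, 150), slice(50, 150)))
```

The same measurement run on the graded AI clip, after the Film Grain node, gives a direct check that the added grain lands at the camera package's level rather than above it.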
When to Stop
Some AI clips can't be saved in the grade. The most common failure: Higgsfield or Kling generates a clip with a strong internal colour temperature shift (the scene reads warmer in the second half). That isn't fixable in the grade; it's a re-generation job.
The tell is the vectorscope. If the skin tone line rotates during playback rather than sitting still, that clip goes back to the prompt stack.
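The rotation test can also be automated as a triage pass before anyone opens the clip. The sketch below computes each frame's mean vectorscope angle from BT.709 chroma (those coefficients are standard; everything else — the function names, the whole-frame average in place of a proper skin-pixel mask — is an assumption for illustration):

```python
import numpy as np

def mean_hue_angle(frame):
    """Mean vectorscope angle (degrees) of a frame, BT.709 chroma.
    A production version would mask to skin pixels first; this sketch
    averages the whole frame for brevity."""
    f = np.asarray(frame, dtype=float)
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # BT.709 luma
    cb, cr = (b - y) / 1.8556, (r - y) / 1.5748  # BT.709 chroma axes
    return float(np.degrees(np.arctan2(cr.mean(), cb.mean())))

def hue_rotates(frames, tol_deg=2.0):
    """True if the clip's hue line rotates more than tol_deg overall."""
    angles = [mean_hue_angle(f) for f in frames]
    return (max(angles) - min(angles)) > tol_deg
```

A clip that trips the threshold goes straight back to the prompt stack without anyone spending grade time on it; the tolerance is a judgment call per project.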
Set that boundary early. Spending three hours on a clip that should have been regenerated in twenty minutes is a decision that compounds across a project.