Agent Opus

Designing prompt editing experiences for an AI video platform, including inpainting, asset replacement, and keyframe regeneration across six content patterns. Improved retention rate by 38% and publish rate by 29% in the most recent Beta release. The project is ongoing; I'm currently iterating through prototyping and usability testing.

AI Video Agent

Prompt Editing

Company

Opus Clip

Role

Product Designer

Tools

Figma

Time

Dec 2025 - Present

CONTEXT

To make AI-powered video editing intuitive for creators

Generating a video is only half the experience; users need to edit and refine the output before publishing. I joined to design the prompt editing experience across six content patterns, with the goal of increasing editor adoption and boosting publish rates for the upcoming Beta release.

COMPETITIVE ANALYSIS

How others handle AI editing

I analyzed prompt editing experiences across AI video platforms, with a deep focus on HeyGen.

Most platforms focus on generation and treat editing as an afterthought. No platform offers consistent prompt editing across multiple content patterns. Granular controls like inpainting, keyframe regeneration, and asset replacement are rare or absent. This framed our design opportunity: build a unified prompt editing language that works across all content types, not just better generation.

USABILITY TESTING

What are the existing problems in the Beta?

We ran usability tests on the newest Beta release and uncovered specific friction points in the editing flow.

Users struggled with three main friction points: not knowing what was editable, unclear feedback from the AI, and getting lost between content patterns. These findings directly reshaped the scope and priorities of the next Beta.

NEXT STEPS

What's ahead

I'm continuing to explore prompt editing interaction patterns across content types, with a focus on making the AI's editable boundaries more visible and giving users more granular control over generation outputs.

Stay connected and let's build something great together.