Reflect: The Style-Cloning Video Editing Agent
Built for the Cerebral Valley Agentic Orchestration Hackathon
Reflect is an AI-powered system that automates the "rough cut" video editing process. Instead of manually scrubbing through hours of footage, creators upload their raw clips and Reflect generates a professional timeline ready for refinement.
Key insight: Reflect doesn't render video. It produces an industry-standard OpenTimelineIO (.otio) file that imports directly into DaVinci Resolve, Premiere Pro, or any NLE that supports OTIO.
Raw Footage + Music → AI Analysis → Timeline Blueprint → .otio Export
- Upload your video clips and background audio
- Optionally provide a reference edit to clone its style (see the sketch after this list)
- Generate — AI agents analyze footage, plan cuts, and build the timeline
- Download the .otio file and import into your editor for final touches
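When a reference edit is supplied, the idea is to mine its cut rhythm. As a rough, hypothetical sketch (not Reflect's actual Style Extractor), here is how a reference timeline's shot lengths could be read with OpenTimelineIO; the file name is a placeholder and a recent OTIO release is assumed:

```python
# Illustrative only: read a reference edit and measure its cut rhythm.
# "reference_edit.otio" is a placeholder file name.
import opentimelineio as otio

reference = otio.adapters.read_from_file("reference_edit.otio")

# Duration of every clip in the reference timeline, in seconds.
cut_lengths = [clip.duration().to_seconds() for clip in reference.find_clips()]

avg_cut = sum(cut_lengths) / len(cut_lengths)
print(f"{len(cut_lengths)} cuts, average shot length {avg_cut:.2f}s")
```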
Reflect orchestrates four AI-powered services:
| Stage | Service | What It Does |
|---|---|---|
| 1 | Asset Annotator | Analyzes footage and audio for speech, stability, and musical beats |
| 2 | Style Extractor | Learns editing patterns from reference timelines |
| 3 | Edit Planner | AI agents determine cuts, pacing, and clip selection |
| 4 | Edit Executor | Converts the plan into a valid .otio timeline |
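To make Stage 1 concrete, the sketch below shows the kind of analysis the Asset Annotator performs, using librosa for beat detection and faster-whisper for speech segments (both in the tech stack below). The model size, device settings, and file names are assumptions, and stability analysis is omitted; this is an illustration, not Reflect's implementation:

```python
# Illustrative annotation pass: beat times from the music track (librosa)
# and speech segments from a raw clip (faster-whisper).
# Model choice ("base"), device, and file names are assumptions for the sketch.
import librosa
from faster_whisper import WhisperModel

# Musical beats in the background track
y, sr = librosa.load("music.wav")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

# Speech segments in a raw clip
model = WhisperModel("base", device="cpu", compute_type="int8")
segments, _ = model.transcribe("clip_001.mp4")
speech = [(seg.start, seg.end, seg.text) for seg in segments]

print(f"~{float(tempo):.0f} BPM, {len(beat_times)} beats, {len(speech)} speech segments")
```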
Tech stack:
- Backend: Python, FastAPI, OpenAI Agents, MongoDB Atlas
- Frontend: React, TypeScript, Tailwind CSS, Vite
- Media: OpenTimelineIO, FFmpeg, librosa, faster-whisper
Quick start:

```bash
# Install dependencies
make install

# Start backend (terminal 1)
make backend

# Start frontend (terminal 2)
make frontend
```

Requires: Python 3.11+, Node.js 18+, a MongoDB Atlas connection string, and an OpenAI API key.
OpenTimelineIO (OTIO) is an open interchange format for editorial timelines. By outputting .otio instead of rendered video, Reflect:
- Preserves full editing flexibility
- Keeps source media at original quality
- Integrates with professional workflows
- Lets creators refine AI decisions, not fight them
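For a concrete sense of the output format, here is a minimal OpenTimelineIO sketch that assembles a one-clip timeline and writes an .otio file; the clip name, path, frame rate, and timings are invented for the example and are not what Reflect emits:

```python
# Minimal .otio construction sketch; names, paths, and timings are placeholders.
import opentimelineio as otio

timeline = otio.schema.Timeline(name="reflect_rough_cut")
video = otio.schema.Track(name="V1", kind=otio.schema.TrackKind.Video)
timeline.tracks.append(video)

# One cut: 4 seconds of a source clip, starting 1 second in (24 fps).
video.append(
    otio.schema.Clip(
        name="clip_001",
        media_reference=otio.schema.ExternalReference(
            target_url="file:///footage/clip_001.mp4"
        ),
        source_range=otio.opentime.TimeRange(
            start_time=otio.opentime.RationalTime(24, 24),
            duration=otio.opentime.RationalTime(96, 24),
        ),
    )
)

# Any NLE with OTIO support can import the resulting file.
otio.adapters.write_to_file(timeline, "rough_cut.otio")
```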
License: MIT