Powerful media preparation and focused editing for modern workflows.
This video editing suite addressed the challenges of media preparation and editing with a streamlined, direct experience focused on 4K, Sony RAW, and HD workflows.
Based on previous research and conversations we had with editors, we knew that audio and video effects were a core part of the editing workflow.
With the redesign effort, we saw an opportunity to explore how we might help our personas use and manage video and audio effects.
Senior Product Designer (UX) and Researcher
Professional video editors; 20 user interviews and contextual inquiry with 5 users
Product Design team within Engineering; tested with stakeholders, 6 internal team members, and 10 users
Proto-persona workshop, interviews, contextual inquiry, sketching, IA/content modeling, high-fidelity comps, moderated usability testing, iterative refinement (60+ iterations)
The existing effects workflow was hidden behind a small icon on a timeline event, which made it easy to miss and hard to learn.
That hurt usability, slowed down editing, and increased the risk that users would assume the product lacked capabilities they expected from a professional editor.
The design needed to support a wide range of editor needs, from solo freelancers to larger productions where plug-ins and color workflows are central to getting work out the door.
Improved discoverability of effects by moving from a hidden icon to a consistently visible entry point, reducing “missed feature” risk during onboarding.
Faster application of effects to events through fewer clicks and clearer affordances, with a ~30% reduction in median time-to-apply in moderated testing.
Better scalability for complex projects by prioritizing space for plug-in lists and effect chains, reducing scrolling and context switching.
Reduced design churn and rework by aligning early on personas and a shared effects model, leading to smoother stakeholder reviews and fewer late-stage reversals during sprint planning.
Created a reusable content model and decision record that engineering and design could reference, improving consistency across related features.
Moderated tests demonstrated comparable or better task success than alternatives, with fewer clicks, supporting stakeholder confidence to proceed with the space-prioritized design.
Ran a proto-persona workshop to align on who our research, design, and development efforts should focus on.
I created the agenda and protocol, organized activities, facilitated the workshop, kept time, took notes, and quickly synthesized proto-personas from stakeholder worksheets. We achieved our primary objective.
The team identified primary personas to guide research for the remainder of the project.
From the research, we learned:
Editors used video effects to varying degrees.
A strong audio workflow and tools were essential.
Both were considered table stakes for a professional editing application.
During discovery, I conducted 20 user interviews and ran contextual inquiry with 5 users.
Interviews were conducted in person or via Skype. Contextual inquiry sessions were conducted in person.
For this project, we wanted to learn how editors worked with effects. Key insights included:
Color grading was often treated as a workflow of its own.
Users expected the product to act as a plug-in host and to include plug-ins that helped them push their ideas further. We wanted to support both.
Users had to access effects through a small icon on the video event.
Usability suffered: the entry point was hard to discover and tested poorly.
We used this as a starting point to address the problems in the current design.
For this feature, the challenge was:
How might we help editors feel more creative and empowered to achieve their creative vision while working on long-form projects?
I explored multiple sketch concepts for how we could better support video plug-ins.
Based on conversations with stakeholders and users, we started by creating a content model of all the places an effect could appear.
This meant examining the timeline, tracks, and events, and understanding not only what editors needed to accomplish with effects, but also where they would look for effects.
I started with mind mapping: what properties did timeline-, track-, event-, and media-level effects share? I mapped out everything I could think of.
These mind maps captured how I organized my thinking after reviewing the research and before moving into sketching.
I shared this with the team for feedback and incorporated their comments.
Next I began sketching potential approaches.
From the sketches, a few ideas stood out more than others.
We developed the strongest contenders into high-fidelity comps for deliverables, which is what you see in the proposed versions. Each leveraged existing patterns in the app.
We created these high-fidelity wireframes to test with stakeholders and users. They went through two sprints and more than 60 iterations.
Testing included 6 internal team members and 10 users. All sessions were moderated, with a mix of in-person and remote testing.
These designs were created to evaluate the usability of the proposed approaches.
Key question:
Can you apply an effect to an event?
This reflects what exists today with minor differences.
Maximizing available space for the plug-in list and plug-in chain was a high priority.
At the time, the thinking was that this approach would make better use of available space and improve discoverability, because effects lived in a dedicated tab that stayed visible.
We built and tested three prototypes. This version required fewer clicks overall and achieved better task success rates than the comparable approaches.