Generated video results (GIF captions):

A hot air balloon floating past a sunset

An ice cream cone melting in front of a brick wall

A plant with green leaves growing in a clay pot

A man walking on the beach in front of the ocean

A yellow kayak and oars floating down a river

A rocket blasting off in front of a planet

Sketching the Future:
Applying Conditional Control Techniques to Text-to-Video Models

Abstract

The proliferation of video content demands efficient and flexible neural-network-based approaches for generating new video content. In this paper, we propose a novel approach that combines zero-shot text-to-video generation with ControlNet to improve the output of these models. Our method takes multiple sketched frames as input and generates video output that matches the flow of these frames, building upon the Text-to-Video Zero architecture and incorporating ControlNet to enable additional input conditions. By first interpolating frames between the input sketches and then running Text-to-Video Zero with the interpolated frame sequence as the control signal, we leverage the benefits of both zero-shot text-to-video generation and the robust control provided by ControlNet. Experiments demonstrate that our method excels at producing high-quality, temporally consistent video content that more accurately aligns with the user’s intended motion for the subject within the video.
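The snippet below is a minimal sketch of the two-stage pipeline described above, not the authors’ released implementation. It assumes the Hugging Face diffusers library, a scribble-conditioned ControlNet (lllyasviel/sd-controlnet-scribble) on top of Stable Diffusion 1.5, and hypothetical input files sketch_first.png and sketch_last.png; it approximates the interpolation step with a simple linear blend and the Text-to-Video Zero stage with per-frame ControlNet denoising under a shared seed, whereas the full method also uses cross-frame attention and latent motion dynamics for temporal consistency.

```python
# Hypothetical sketch: (1) interpolate between two user sketches to get a
# control-frame sequence, (2) generate one video frame per control frame.
# Assumes: pip install diffusers transformers accelerate torch pillow numpy
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline


def interpolate_sketches(sketch_a: Image.Image, sketch_b: Image.Image, n_frames: int):
    """Linearly blend two sketches into n_frames control images.
    (A stand-in for the frame-interpolation step described in the abstract.)"""
    a = np.asarray(sketch_a.convert("RGB"), dtype=np.float32)
    b = np.asarray(sketch_b.convert("RGB"), dtype=np.float32)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        blend = (1.0 - t) * a + t * b
        frames.append(Image.fromarray(blend.astype(np.uint8)))
    return frames


# Scribble-conditioned ControlNet paired with Stable Diffusion 1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

prompt = "a rocket blasting off in front of a planet"
control_frames = interpolate_sketches(
    Image.open("sketch_first.png"), Image.open("sketch_last.png"), n_frames=8
)

# Re-seeding with the same value per frame keeps the subject's appearance
# roughly stable; Text-to-Video Zero goes further by replacing self-attention
# with cross-frame attention to the first frame.
video_frames = []
for control in control_frames:
    generator = torch.Generator(device="cuda").manual_seed(42)
    frame = pipe(
        prompt, image=control, num_inference_steps=20, generator=generator
    ).images[0]
    video_frames.append(frame)

video_frames[0].save(
    "output.gif", save_all=True, append_images=video_frames[1:], duration=125, loop=0
)
```

In the full method, the interpolated control frames are passed to Text-to-Video Zero with ControlNet conditioning in a single pass, so the per-frame loop above should be read only as an illustration of how the sketch sequence steers each generated frame.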


Acknowledgements

The authors would like to thank Professor Pathak and the course staff of Visual Learning and Recognition for their support and Mrinal Verghese for his compute resources.

The website template was borrowed from Jon Barron.