Make-A-Video
Make-A-Video is a state-of-the-art AI system that generates videos from text.
Read research paper


Make-A-Video builds on recent progress in text-to-image generation technology, extending it to enable text-to-video generation. The system uses images with descriptions to learn what the world looks like and how it is often described, and unlabeled videos to learn how the world moves. With this data, Make-A-Video lets you bring your imagination to life by generating whimsical, one-of-a-kind videos from just a few words or lines of text.

A dog wearing a Superhero outfit with red cape flying through the sky

Make-A-Video with text

Bring your imagination to life and create one-of-a-kind videos

Surreal
Realistic
Stylized
A teddy bear painting a portrait
Robot dancing in times square
Cat watching TV with a remote in hand
A fluffy baby sloth with an orange knitted hat trying to figure out a laptop, close up, highly detailed, studio lighting, screen reflecting in its eye

From static to magic

Add motion to a single image, or fill in the motion between two images.

Single image
Pair of images
Input image
Input images
Make-A-Video output
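Make-A-Video learns the in-between motion from video data, so its interpolation is far richer than any hand-written rule. As a toy illustration of the task itself (not Meta's method), the simplest possible baseline is a linear cross-fade between the two input images; the function name and shapes below are illustrative assumptions.

```python
import numpy as np

def interpolate_frames(img_a, img_b, n_frames):
    """Naive linear cross-fade between two images, as a toy stand-in
    for the learned in-between motion described above.

    img_a, img_b: float arrays of the same shape (H, W) or (H, W, C).
    Returns a list of n_frames arrays, starting at img_a and ending at img_b.
    """
    ts = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - t) * img_a + t * img_b for t in ts]
```

A learned model replaces this pixel-space blend with plausible object and camera motion; the cross-fade only shows what "filling in frames between two endpoints" means.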

Adding extra creativity to your video

Create variations of your video based on the original.

Input video
Make-A-Video output

The New State of the Art in Video Generation

3x better representation of text input*
3x higher quality*
*When compared to the previous state of the art, via user studies.

Interested in trying Make-A-Video?

Let us know if you’re interested in gaining access to any future releases of our Make-A-Video research.

Advancing AI responsibly

Meta AI is committed to developing responsible AI and ensuring the safe use of this state-of-the-art video technology. Our research takes the following steps to reduce the creation of harmful, biased, or misleading content.

Source Data

This technology analyzes millions of pieces of data to learn about the world. To reduce the risk of harmful content being generated, we examine, apply, and iterate on filters that reduce the potential for harmful content to surface in videos.

Identifying as AI-generated content

Since Make-A-Video can create content that looks realistic, we add a watermark to all videos we generate. This will help ensure viewers know the video was generated with AI and is not a captured video.
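The page does not describe how the watermark is applied. Purely as an illustration of the general idea of a visible watermark (not Meta's actual technique), a small mark can be alpha-blended into a corner of every frame; all names and parameters below are assumptions.

```python
import numpy as np

def add_watermark(frame, mark, alpha=0.6):
    """Alpha-blend a small mark into the bottom-right corner of a frame.

    Illustrative sketch only -- not Meta's watermarking method.
    frame: float array (H, W); mark: float array (h, w) with h<=H, w<=W.
    Returns a new frame; the input is left unmodified.
    """
    h, w = mark.shape[:2]
    out = frame.copy()
    region = out[-h:, -w:]
    out[-h:, -w:] = (1.0 - alpha) * region + alpha * mark
    return out
```

Applying this to each frame of a generated clip gives viewers a persistent visual cue that the video is AI-generated.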

A Work-in-Progress

Our goal is to eventually make this technology available to the public, but for now we will continue to analyze, test, and trial Make-A-Video to ensure that each step of release is safe and intentional.

Acknowledgement

Research Authors

Uriel Singer*, Adam Polyak*, Thomas Hayes*, Xi Yin*, Jie An, Songyang Zhang, Qiyuan (Isabelle) Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh*, Sonal Gupta*, Yaniv Taigman*

*Core Contributors

Project Contributors

Mustafa Said Mehmetoglu, Jacob Xu, Katayoun Zand, Jia-Bin Huang, Jiebo Luo, Shelly Sheynin, Nadav Benedek, Shoshana Swell, Chantal Mora, Ana Paula Kirschner Mofarrej, Raghu Nayani, Eric Kaplan, Aiman Farooq, Alyssa Newcomb, Anne Davidson, Tamara Piksa, Michelle Restrepo, Natalie Hereth, Mallika Malhotra, Harrison Rudolph, Michael Friedrichs, Aran Mun, Angela Fan, Kelly Freed


Huge thanks to all the people internal to FAIR who helped enable this work by providing extra compute for our experimentation.
