Meta has announced the launch of ‘Movie Gen,’ an artificial intelligence-powered video generation tool designed to help creators produce custom videos and audio from their own prompts. It builds on the tech giant’s two earlier waves of generative AI model work: the Make-A-Scene series of models and the Llama Image foundation models.
In a blog post, Meta stated that ‘Movie Gen’ has four capabilities: video generation, personalised video generation, precise video editing, and audio generation. According to the post, the company trained these models on a combination of licensed and publicly available datasets.
“These foundation models required us to push on multiple technical innovations on architecture, training objectives, data recipes, evaluation protocols, and inference optimisations,” the company said.
Meta acknowledged that while the research it is sharing shows tremendous potential for future applications, its current models have limitations.
“Notably, there are lots of optimisations we can do to further decrease inference time and improve the quality of the models by scaling up further,” it added.
It has encouraged creators to imagine animating a ‘day in the life’ video to share on Reels and editing it with text prompts, or creating a customised animated birthday greeting for a friend and sending it on WhatsApp. For the company, with creativity and self-expression taking charge, the possibilities for creators are endless.
“As we continue to improve our models and move toward a potential future release, we’ll work closely with filmmakers and creators to integrate their feedback. By taking a collaborative approach, we want to ensure we’re creating tools that help people enhance their inherent creativity in new ways they may have never dreamed would be possible,” the company said.