Meta has unveiled a new artificial intelligence system called "Make-A-Video," which lets users generate short video clips by entering a text description of the desired scene. The company has taken its generative technology a step further, moving beyond text-to-image to text-to-video production. However, Meta has not yet provided user access to the model.
The prompt-generated clips are five seconds or shorter and contain no audio, but Meta says the model can handle a broad range of prompts.
In announcing the system through a blog post, Meta stated that, in a commitment to "open science," it would share details of the generative technology, and it confirmed its intention to offer a demo experience for users.
In the blog post announcing the work, the company, which is the parent of Facebook and Instagram, said that generative AI research is pushing creative expression forward by giving people tools to quickly and easily create new content, and that Make-A-Video can bring imagination to life in one-of-a-kind videos full of vivid colors and landscapes.
To develop the model, the company combined paired images and captions with unlabeled video footage from the WebVid-10M and HD-VILA-100M datasets, which include stock footage produced by sites like Shutterstock as well as clips scraped from the web, together spanning hundreds of thousands of hours.
Mark Zuckerberg, the company's CEO, took to Facebook to describe the work as an extraordinary breakthrough, adding that video is much harder to generate than photos because, beyond correctly generating each pixel, the system also has to predict how the pixels will change over time.
According to a report from The Washington Post, there are concerns about AI-generated media, with some suggesting it could fuel an increase in misinformation, propaganda, and non-consensual pornography. Meta says it is being deliberate about how it builds such generative models and intends to limit access to them, though it has yet to announce a timeline for the demo experience or explain how access would be restricted.