Those who've been following this blog know I've been working intensively in recent months on authoring AI-generated imagery with apps such as Stable Diffusion, DALL-E 2, and Lexica. As a photographer, my greatest interest up to now has quite naturally been in creating still images. Now, however, just as I've begun working seriously with video for the first time in my career, an article in The Verge has announced that a start-up company called Runway is about to make available to the public a new text-to-video app called Gen-2, a follow-up to the far more primitive Gen-1. Given the almost rabid interest shown in the text-to-image apps named above, it was probably inevitable that something like this would soon appear. It's not likely, though, that anyone is going to be making full-length motion pictures with Gen-2 in the foreseeable future. The examples I've seen are only 4 seconds long, and even at that short length they are too jumpy, and too lacking in anything like 4K resolution, to be usable. (As The Verge article notes, "...all we have to judge Gen-2 right now is a demo reel and a handful of clips, most of which were already being advertised as part of Gen 1.") Nevertheless, it can safely be assumed that the specs will eventually improve to the point that viewers will be unable to differentiate AI-generated videos from those created with traditional media.
Runway's pricing plans seem quite reasonable, but until the app improves substantially I will most likely stick with the free basic plan, which should be more than sufficient to let me experiment with creating short AI-generated videos of my own.