Well, this is the first tech revolution that's happened since I came of age, and I've seen its various steps (and possibly instigated some).
To be fair, by the time I left uni, Suresh was already using AI to generate things. That was from a consciousness-studies perspective, while my approach was creative.
Rotoscoping wasn't something I thought of doing until I'd made 100 issues of the Dream O'Clock comic strip. By the way, I still love that comic strip: it was so exciting to make.
Essentially, I knew a lot about how animation was made on a technical level, but I wasn't artistically talented enough to draw my own cartoons. So I got really into rotoscoping, as you can see from my political caricatures and the rotoscopes of my cellphone videos.
I'd also say that AI is creating a linguistic shift. Suddenly, extremely literal English is a highly prized skill. A lot of jobs that were once done by a human editor are now done by AI, and the gig for the human worker is to proofread and challenge the AI's output.
DDG and Runway make a powerful combo. It's interesting that everyone in the States and elsewhere was talking about DALL-E and Stable Diffusion, but my film only used Stable Diffusion for one bit: the Gaza Strip montage set to the theme from 1492, or, as we know it in NZ, the Crusaders theme song. DDG and Runway did most of the AI work, alongside some colourisations by DeepAI.
My advice to filmmakers is basically that AI is another tool in the arsenal, but it's really unlikely to fully replace VFX artists. This new DDG video tool is a step in the right direction, but the AI can't really understand spatial geography, so it's not useful for most FX work.
Right now, I'd say the best way for a film person to use AI is to paint over real footage. That way, the spatial geography is real, and the remaining challenges are consistency of character models and the ability to program motion (both of which are still nascent). Scaling up the Deep Style roto is an immediate way of painting over footage while maintaining real physics; a rough sketch of that per-frame approach is below.
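To make the paint-over idea concrete, here's a minimal sketch, not DDG's actual pipeline: apply one fixed style image to every frame of a real video, so the motion and spatial geography come from the camera and only the look is AI-generated. It assumes TensorFlow, TensorFlow Hub, and OpenCV are installed, and it uses Google's publicly released Magenta arbitrary-image-stylization model; `input.mp4` and `style.jpg` are hypothetical filenames.

```python
# Minimal per-frame "paint over real footage" sketch (assumptions noted above).
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Google's Magenta arbitrary-image-stylization model from TF Hub.
stylize = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

def to_tensor(bgr):
    # The model expects float32 RGB in [0, 1] with a batch dimension.
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return tf.constant(rgb[np.newaxis, ...])

# One fixed style reference for the whole clip keeps the look consistent.
style = to_tensor(cv2.resize(cv2.imread("style.jpg"), (256, 256)))

cap = cv2.VideoCapture("input.mp4")  # hypothetical source footage
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("painted.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The real frame supplies the spatial geography; the model only restyles it.
    styled = stylize(to_tensor(frame), style)[0].numpy()[0]
    styled = cv2.cvtColor((styled * 255).astype(np.uint8), cv2.COLOR_RGB2BGR)
    out.write(cv2.resize(styled, (w, h)))  # resize in case the model rescales

cap.release()
out.release()
```

Reusing one style image for every frame is the cheapest route to a consistent look, though flicker can still creep in between frames: that's exactly the character-consistency problem I mentioned above.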