Nicolas Neubert did not set out to make international news, but that’s exactly what happened after he sat down at his home desktop computer in Cologne, Germany, near the end of June 2023 and began playing around with Gen2, a new generative AI video creation tool from well-funded New York City startup RunwayML.
The 29-year-old senior product designer at Elli, a subsidiary of automaker giant Volkswagen Group focused on electrification and charging experiences, had already been using his free time to generate sci-fi-inspired images with Midjourney, a separate, popular text-to-image AI tool. When Neubert caught wind of Runway’s Gen2, which lets users upload a limited number of images and automatically converts them into short four-second animations with realistic depth and movement, he decided to turn some of his Midjourney imagery into a concept film trailer.
He posted the result, “Genesis,” on his account on the social network X (formerly Twitter). The thrilling, cinematic, 45-second video sketches out a variation on the age-old sci-fi theme of man vs. machine — this time, with humanoid robots that have taken over the world and a human rebellion fighting back against them, reminiscent of the Terminator franchise or the upcoming major motion picture The Creator. Neubert didn’t expect much in the way of a response, maybe some attention from the highly active community around AI art there. Instead, the trailer quickly went viral, racking up 1.5 million views as of this article’s publication just a week and a half later, and earning him coverage on CNN and in Forbes.
Neubert recently joined VentureBeat for an interview about his process for creating the trailer, his inspirations, his thoughts on the current debate in Hollywood and the arts over the use of AI, and what he has planned next. The following transcript of our question-and-answer (Q&A) session has been edited for length and clarity.
VentureBeat: Congratulations on all your success and attention so far on the “Genesis” trailer. It seems like you’re enjoying it, and that it’s opening people’s eyes to some of the possibilities and potential with generative AI tools. Tell me how you’re feeling about it all.
Nicolas Neubert: I think the feedback has been overwhelmingly positive. It was definitely not meant to blow up like this.
I’m a generally curious person who likes to try out tools. When Runway announced they had an image-to-video tool, Gen2, of course I thought, ‘Let’s try it out.’
I had these pictures lying around from previous Midjourney explorations, which gave me a good base to get started. I told myself: ‘Why not? Let’s try to do a 60-second movie trailer and share it. What’s the worst that can happen?’
I guess people really liked that. I think it’s a great tech demo to see where we’re heading. And I think it definitely opened some discussions as to where AI can already be utilized, to some extent, from a professional standpoint.
Let me back up a little bit and ask you about your job. You’re at a Volkswagen subsidiary, is that right?
Exactly. I’ve always had a full-time job, but I’ve enjoyed side ventures as well. Prior to this year, I freelanced on the side, working with startups and helping them scale. Then, at the beginning of this year, I kind of replaced that side hustle with getting invested in AI. Product design is my main job — I’ve been doing it for eight years — and the artistic, creative part has always been a hobby.
Since I was a child, I always liked sketching, art, music, all of it. So when Midjourney came out in public beta [July 2022], it was kind of like a dream come true, right? You could suddenly visualize your thoughts and your creativity like never before. That’s when I built my Twitter [X] platform around it, and I started growing that and kept looking at how to combine different tools.
In your role as a product designer over the past year at Volkswagen, and even prior to that, what tools were you using?
I explore every tool on the market, but I think you can really boil the toolset of a product designer down. I would say 95% of all creation comes from Figma. We spend our days creating screens, creating prototypes, designing pretty user interfaces, and all of that. Of course, if you’re working with advanced animations, or you need certain graphics, you might go out into a different tool. But 95% also means most of the job currently doesn’t involve a lot of AI. I would say that Midjourney is entering the ring as a more and more attractive option for brainstorming, ideation, or illustration, but I would still label that as playing around.
What was the time frame and process for making the Genesis trailer? Did you make all the images beforehand not knowing about Gen2, or did you make some specifically for the trailer?
The week prior to having the idea for the trailer, I posted three photo series on Twitter [X]. Those photo series were, so to speak, already set in that world. I already had those themes of robots versus humans in a dystopian world, and I already had a prompt that went very much in that direction. So when I decided to do the trailer, I realized I already had prompts and a great foundation, which I then quickly tweaked. Sitting down at my computer, it took seven hours from beginning to end.
All in one time frame? Or did you have to take a break for your day job and go back to it? What was the kind of burst of work that you were able to do?
I’m a night owl, so I did the first five hours at night. At some point the responsibility factor kicked in and I had to cut it off for the day job. But I would say I finished everything at night except for the last edits. It was just one or two scenes that were missing; everything else was finalized. Then on the next day, after work, I quickly made those scenes, polished it all up and posted it. So I would say it was like a five-hour and a two-hour session.
And you primarily used Midjourney to create still images and then animated them in Runway? Or did you use any other tools, such as CapCut, or something else for the music?
To go back a step, one of the goals of not only this trailer, but of what I do with Midjourney, is to show the accessibility of it — of all the tools I use. AI is a fascinating technology. For people who are not that confident in their creativity, these tools are finely tuned to actually get them to a result. Maybe they can’t draw, but they can visualize something, and then they can take their ideas further with these tools. This is a very important point for me personally.
So with this trailer, I wanted to keep the entry barrier as low as possible. I wanted to show people they only need a couple of tools, and beyond that, all you need is your imagination. So we have Midjourney and Runway; those are the two paid applications. And then to keep everything else low barrier, for music, I went to Pixabay and took something out of their pool of commercially free soundtracks.
For the editing, I used CapCut because it’s free, and I did not have Adobe Premiere installed on the machine I was working on. It was surprisingly good, and I was surprised how much you can do in the graphics editor. It all just kind of came together perfectly.
How long do you think it would have taken you if you had not had artificial intelligence? Would it have even been possible for you to create the Genesis trailer, if you had to edit it and animate it manually?
Without AI, would I have had the skills to do it today? No. Is it possible for someone else? Yes, of course. But it would take much more effort, right? You would probably approach it differently, because right now with AI, we work with a couple of restrictions.
We’re working with images and we’re animating those images. If I were to approach this from a non-AI standpoint, I would certainly consider using gaming engines to get 3D assets, or tools like Blender and Cinema 4D, and build it completely differently from the ground up.
That method results in higher quality and gives you more control, but it also takes considerably longer. And if I may add, a lot of those tools can also get very expensive with their licensing.
So, I think this is a perfect example of opening up this field of creating original videos at a very low entry barrier.
And these AI models will get better: we will see the quality go up, and we will get more control in the future. But for right now, we have to live with the compromises. Even setting the quality aside, you don’t need a professional reason to do it. You can also just throw in some images, see what happens and laugh about it.
Did you post it on X (Twitter) first, or where did you make it available initially?
Well, I currently post primarily on Twitter [X]. But after the reaction there, I also started up my Instagram and posted it on LinkedIn. LinkedIn was a risk, as it’s for business, so I’m always a bit more reserved there.
I saw recently you were celebrating that you crossed 20,000 followers on X (Twitter). Was that all from the trailer?
Before the trailer, I was around 17,000. Now, almost one week later, I’m sitting on 22,000. So it got me something around 4,000 or 5,000 new followers.
It also got you coverage on CNN and in Forbes, and I’m sure some other media as well. What were the reactions you were getting, and how did they make you feel as you saw them coming in?
Of course, it was exciting and positive. I remember at some point, I got a comment at night, ‘Hey, I want to interview you for Forbes.’ It was an amazing moment to see a comment like that. I was like, ‘oh, okay!’
That’s when I realized the trailer had gotten into a different bubble. I had been active on Twitter [X], and I knew it was receptive to AI and that there was a nice community around it already. But at this moment, I saw we had gone beyond that, we had reached something else.
Then I was at work the next day, and suddenly I got a notification: ‘Hey, by the way, you are being streamed on CNN!’ And then I was like, ‘Oh, shit, wow. This is really picking up steam!’
Then from there, of course, it’s really nice and happy and cool, but it also gets tiring, in the sense that all my notifications were blowing up and I was getting a ton of comments. And I wanted to do good community management, so I spent a lot of time interacting with commenters and people who asked questions or left responses.
And I saw you posted a walkthrough or step-by-step of how you made it?
Yeah, and I had those things planned out. Once I knew I was going to do a trailer, I had already decided to post it on Twitter [X] and to share the making of it, because I always share my prompts and my process. I think that combination of the trailer plus the making-of very much boosted the algorithm and made it more popular.
This trailer came out at a time when the actors in Hollywood are striking and the writers are striking. They’re concerned about AI; they’ve openly said, ‘We don’t want AI to replace us or take our jobs.’ How do you respond to those concerns? Did you get any feedback or concerns that this type of technology, and your usage of it, is a good illustration of how things are going, of how we may need less human labor to create these kinds of cool movies and scenes?
It’s a discussion that is happening in a lot of industries. I completely understand the concern and the importance of having these discussions. Personally, I always try to see the optimistic side of new technology. Rather than saying it will replace jobs, I see it much more as empowering somebody to do more, because I think the true skill is still storytelling and creativity. And storytelling and creativity are done best when performed by a human we can relate to, bringing their emotions into it. Therefore, while I do understand the concerns, I really believe it will help us become better at what we do instead of replacing us in what we do.
I find myself sharing that perspective as well. I also saw some comments saying the quality of the ‘Genesis’ trailer was not high enough to replace a Hollywood movie. But it sounds like that was never your goal, necessarily. What was your thinking when you made it: ‘I’m going to do this more as a proof of concept rather than achieve the highest quality’?
Absolutely. I primarily work in Midjourney, and we’ve reached a very good quality standard there. While I appreciate what it is, and it is truly impressive how good the tools are, I wouldn’t say the quality is where it needs to be to actually do proper commercial projects with it. I don’t see it replacing an official trailer for Netflix anytime soon. For me it was more of a tech demo to show what we can do today and how few resources you need to do something like that. The plot, the whole idea: that wasn’t generated by AI. Only the visuals were.
But it is a good test case, and the reception to the trailer showed that it can be used to test ideas. That’s something companies today could do. As a filmmaker or studio — Netflix, Amazon Prime, you name it — you wouldn’t have to film the movie or take on high production costs to find out if an idea works with an audience, based on their reaction to an AI-generated trailer.
It’s kind of similar to fashion companies using Midjourney to do mood boards or inspiration boards. It gives us a very low-budget tool to visualize ideas. That’s where I draw the line, but I’m sure there are artists and companies that will dare to go beyond that and will use it to do commercials or shows.
Have you had interest from people in Hollywood, in the filmmaking business? Has anyone reached out to you to say, ‘Hey, I want to learn from you!’ or ‘Hey, I want to collaborate or turn this into a full movie’? What’s the response been from that field?
There have certainly been requests coming in from filmmakers and other ventures that are interested in the technology. There are more people interested in collaborating or finding out more about it than there are people sending negative reactions.
Do you plan to pursue those collaborations or turn this into a longer film? How are you thinking about what happens next, in particular with the Genesis trailer or that world?
Look, taking the Genesis trailer on my own, with the tools we have today, and making a full feature film probably won’t happen.
But I will definitely explore the world and expand it. The rest kind of depends on what happens, right? If a Netflix or somebody would approach me and be like, ‘Hey, we liked the IP. Want to do something?’ Of course. I’m not saying that I’m not interested in making this something real.
However, I know that at the pace we’re currently running, by the time we’re halfway done with that, new tech would already be available. So for now, I will definitely scale that world, tell more stories and create new generations around it. For something like a feature film to happen, let’s see who approaches me with what ideas.
How defined was the story going into the trailer and going into those images? Did you write it down, or was it more loose in your head? And do you have names for these characters and items in the trailer?
I didn’t write anything down, for the simple reason that I’m a visual person. I have different mood boards with my pictures on them. In this case, I thought more in a visual sense. Before starting the trailer, I had a pool of roughly 40 images I had generated, which were enough to at least inspire me to start weaving them together into a story.
Some ideas happened while making it. There’s the scene with the boy holding the glowing amulet, which adds a little depth. Since posting the trailer, I’ve built an image world of roughly 500 images to weave together into stories. But again, it was a tech demo; I created the story to optimize for that. I think that’s a different process than actually building a whole story. Not saying it’s impossible.
Do you have a strong background in sci-fi? What led you to this genre and these themes of man vs. machine?
Well, I grew up with Star Wars and science fiction. Both of my parents are physicists, so that also played a large role in my life. More recently, there are things like Silo from Apple, the upcoming Starfield game from Bethesda, and the Cyberpunk 2077 game. Those are interesting topics for me and interesting experiences that I love delving into. So on the one hand, I am genuinely interested in that genre; on the other hand, I wanted to create a trailer in a theme where I know these AI models are capable of producing really good imagery.
What do you plan to do next with AI?
Creativity always has the opportunity to take me somewhere else, but I think there’s some foundational stuff that I’ll always pursue. I have a Twitter [X] platform, and I have a strong emphasis on Midjourney. For the foreseeable future, I’ll be there teaching people how to use these tools, trying to empower people to work with their creativity. Runway is now a new tool in the box; I will be experimenting more with the two in tandem and with Runway itself. The world will be expanded: new stories will be made, and always will be.