    Runway Gen-2 adds multiple motion controls to AI videos

    19 January 2024


    AI video is still in its early days, but the tech is advancing fast. Case in point: New York City-based Runway, a generative AI startup that enables individuals and enterprises to produce videos in different styles, today updated its Gen-2 foundation model with a new tool, Multi Motion Brush, which lets creators add multiple directions and types of motion to their AI video creations.

    The advance is a first of its kind among commercially available AI video products: at this stage, rival products on the market use AI to add motion either to the entire image or to a single selected area, not to multiple areas.

    The offering builds on the Motion Brush feature that debuted in November 2023, which let creators add only a single type of motion to their video at a time.

    Multi Motion Brush was previewed earlier this year through Runway’s Creative Partners Program, which rewards select power users with unlimited plans and pre-release features. But it’s now available for every user of Gen-2, adding to the 30+ tools the model already has on offer for creative video producers.

    The move strengthens the product in the rapidly growing creative AI market, which includes players such as Pika Labs and Leonardo AI.

    What to expect from Multi Motion Brush?

    The idea behind Multi Motion Brush is simple: give users better control over the AI videos they generate by allowing them to add independent motion to areas of their choice.

    This could be anything, from the movement of a face to the direction of clouds in the sky.

    The user starts by uploading a still image as a prompt and “painting” the areas they want to animate with a digital brush controlled by their computer cursor.

    The user then uses slider controls in Runway’s web interface to select which way the painted portions of the image should move and how much (intensity), with each paint color controlled independently.

    The user can adjust the horizontal, vertical and proximity sliders to define the direction of the motion (left/right, up/down, or closer/further) and hit save.

    “Each slider is controlled with a decimal point value with a range from -10 to +10. You can manually input numerical value, drag the text field left or right or use the sliders. If you need to reset everything, click the ‘Clear’ button to reset everything back to 0,” Runway notes on its website.
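
    To make those controls concrete, below is a minimal Python sketch of the slider state described above. It is purely illustrative: the class, field names and sign conventions are assumptions rather than Runway’s actual API; only the -10 to +10 decimal range, the per-color independence and the reset-to-zero “Clear” behavior are documented.

        from dataclasses import dataclass

        # Hypothetical model of the Multi Motion Brush slider state.
        # Class and field names are illustrative assumptions, not Runway's API.

        SLIDER_MIN, SLIDER_MAX = -10.0, 10.0  # documented slider range

        def clamp(value: float) -> float:
            """Keep a slider value inside the -10 to +10 range."""
            return max(SLIDER_MIN, min(SLIDER_MAX, value))

        @dataclass
        class BrushMotion:
            """Motion settings for one painted region (one brush color)."""
            horizontal: float = 0.0  # left/right (sign convention assumed)
            vertical: float = 0.0    # up/down (sign convention assumed)
            proximity: float = 0.0   # closer/further (sign convention assumed)

            def __post_init__(self) -> None:
                self.horizontal = clamp(self.horizontal)
                self.vertical = clamp(self.vertical)
                self.proximity = clamp(self.proximity)

            def clear(self) -> None:
                """Mirror the 'Clear' button: reset every slider back to 0."""
                self.horizontal = self.vertical = self.proximity = 0.0

        # Each paint color carries its own independent motion, as in the UI.
        brushes: dict[str, BrushMotion] = {
            "red": BrushMotion(horizontal=3.5),                # drift sideways
            "blue": BrushMotion(vertical=-2.0, proximity=4.0), # independent motion
        }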

    Gen-2 has been getting improved controls

    The introduction of Multi Motion Brush strengthens the set of tools Runway has on offer to control the video outputs from the Gen-2 model. 

    Originally unveiled in March 2023, the model introduced text-, video- and image-based generation and came as a major upgrade over Gen-1, which only supported video inputs.

    However, initially it generated clips only up to four seconds long. This changed in August, when the company added the ability to extend clips to up to 18 seconds.

    It also debuted additional features such as a “Director Mode,” allowing users to choose the direction and intensity/speed of the camera movement in generated videos, as well as options to choose the style of the video to be produced – from 3D cartoon and render to cinematic to advertising.
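
    As a rough illustration only, the Director Mode options above could be modeled as a small configuration object; the field names and string values below are assumptions made for the sketch, not Runway’s actual settings.

        from dataclasses import dataclass

        # Hypothetical sketch of the Director Mode options described above.
        # Names and values are illustrative assumptions, not Runway's API.

        @dataclass
        class DirectorModeSettings:
            camera_direction: str = "pan_right"  # assumed label for a camera move
            camera_intensity: float = 1.0        # speed of the camera movement
            style: str = "cinematic"             # e.g. "3d_cartoon", "advertising"

        # Example: a fast zoom with an advertising look.
        settings = DirectorModeSettings("zoom_in", 2.5, "advertising")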

    In the space of AI-driven video generation, Runway takes on players like Pika Labs, which recently debuted its web platform Pika 1.0 for video generation, as well as Stability AI’s Stable Video Diffusion models.

    The company also offers a text-to-image tool, which takes on offerings like Midjourney and DALL-E 3. However, it is important to note that while the outputs from these tools have improved over time, they are still not perfect and can generate images/videos that are blurred, incomplete or inconsistent in different ways.



