    CoreWeave secures $2.3 billion in new financing for GPU cloud, data centers

3 August 2023



    CoreWeave, a specialized cloud provider of large-scale GPU-accelerated workloads, announced today that it has secured a $2.3 billion debt facility.

    The startup, which is poised to make billions off the generative AI boom with its GPU cloud, said the new financing will be used to purchase compute to serve its customers, open new data centers and add to CoreWeave’s staff. The funding was led by Magnetar Capital and funds managed by Blackstone with strategic participation from leading asset management firms Coatue and DigitalBridge Credit, and additional support from funds and accounts managed by BlackRock, PIMCO, and Carlyle.

    “AI has the potential to transform the way we engage with technology, power the industries of the future, and make society’s vital services more efficient – as long as the infrastructure is in place to deliver performance at scale,” said Michael Intrator, CoreWeave CEO and co-founder in a press release. “CoreWeave is delivering on this unprecedented level of demand with the most reliable, flexible, and highly performant compute resources to lead the industry forward. The new resources from these world class investors are a vote of confidence in our accomplishments to date and validate our future strategy.”

    Jasvinder Khaira, a Blackstone Senior Managing Director, said: “The soaring computing demand from generative AI will require significant investment in specialized GPU cloud infrastructure – where CoreWeave is a clear leader in powering innovation.”


CoreWeave has ‘come out of nowhere’ over the last few months

Brannin McBee, the company’s cofounder and chief strategy officer, told VentureBeat in a story published this week that CoreWeave has “come out of nowhere” over the last few months.

    With more than $400 million in new funding; a new $1.6 billion data center in Plano, Texas; and the world’s fastest AI supercomputer built in partnership with Nvidia unveiled last month, the company’s fortunes have shifted dramatically since it was founded in 2017 as an Ethereum mining company.

    In 2019, the founders had pivoted — fortuitously, in hindsight — to building a specialized cloud infrastructure spanning seven facilities that offered GPU acceleration at scale. And as ChatGPT and gen AI drove high demand for GPUs this year, CoreWeave was perfectly placed to give large language model (LLM) companies, including Inflection AI, what they needed.

    McBee said CoreWeave did $30 million in revenue last year, will score $500 million this year and has nearly $2 billion already contracted for next year. CNBC reported in June that Microsoft “has agreed to spend potentially billions of dollars over multiple years on cloud computing infrastructure from startup CoreWeave.”

    “It’s happening very, very quickly,” McBee told VentureBeat. “We have a massive backlog of client demand we’re trying to build for. We’re also building at 12 different data centers right now. I’m engaged in something like one of the largest builds of this infrastructure on the planet today, at a company that you had never heard of three months ago.”

CoreWeave has benefitted from Nvidia’s AI dominance strategy

    In addition to being in the right place with the right technology at the right time, CoreWeave has also benefitted significantly from Nvidia’s strategy to stay dominant in the AI space.

Even though supply is tight, Nvidia has allotted a generous number of its latest AI server chips to CoreWeave while steering them away from top cloud providers like AWS. That’s because those providers are developing their own AI chips in an attempt to reduce their reliance on Nvidia.

“It certainly isn’t a disadvantage to not be building our own chips,” McBee admitted. “I would imagine that that certainly helps us in our constant effort to get more GPUs from Nvidia at the expense of our peers.”

