Eric Boyd, the Microsoft executive responsible for the company’s AI platform, suggested in an interview Wednesday that the company’s AI service will soon offer more LLMs beyond OpenAI’s, acknowledging that customers want to have choice.
Boyd’s comments came in an exclusive video interview with VentureBeat, where the main focus of the conversation was the readiness of enterprise companies to adopt AI. Boyd’s hint that more LLMs are coming follows Amazon AWS CEO Adam Selipsky’s veiled criticism of Microsoft last week, in which Selipsky said companies “don’t want a cloud provider that’s beholden primarily to one model provider.”
When I asked Boyd if Microsoft would move to offering more models outside of OpenAI, perhaps even through a relationship with Anthropic, Boyd responded: “I mean, there’s always things coming. I would stay tuned to this space. There’s definitely… we’ve got some things cooking, that’s for sure.”
A Microsoft spokeswoman said the company isn’t ready to share more details.
Microsoft has deployed OpenAI’s models across its consumer and enterprise products, such as Bing, GitHub Copilot and the Office Copilots. Microsoft also gives customers the choice to use other models through its Azure Machine Learning platform, such as the open source models provided by Hugging Face. However, closed-source models such as OpenAI’s tend to be the easiest and fastest way for many enterprise companies to go to market, because they often come with more support and services. Amazon has made a big deal about offering more choice in this area, boasting a newly expanded partnership with OpenAI’s top competitor, Anthropic, as well as offerings from Stability AI, Cohere, and AI21.
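To make the contrast concrete, here is a minimal, illustrative sketch of calling a hosted model through the Azure OpenAI Service with the openai Python SDK. The endpoint, key, API version and deployment name are placeholders, not details confirmed by Microsoft.

```python
# Illustrative sketch only: calling a hosted model through the Azure OpenAI
# Service with the openai Python SDK (v1.x). The endpoint, key, API version
# and the deployment name "gpt-4" are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4",  # the name of your Azure deployment, not the raw model ID
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize these customer reviews: ..."},
    ],
)
print(response.choices[0].message.content)
```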
In a wide-ranging interview, Boyd asserted that Microsoft plans to stay competitive on the choice front. He said the company’s generative AI applications, and the LLMs that power them, are safe to use, and that the companies most focused on where the models work really well – for example, in text generation – are able to move the fastest.
Watch the whole video by clicking above, but here’s a transcript (edited for brevity and clarity):
Matt: You’ve got one of the broadest sets of services and compute and data, and the huge investment in OpenAI. You’re positioned well to be a top player in AI as a result. But with recent events, there’s a bunch of questions about whether companies are ready for AI. Do you agree that there is a readiness issue with AI?
Eric: You know, we talk to a lot of different companies from all industries and we’re seeing tremendous uptake in generative AI and applications built on top of OpenAI models. We have over 18,000 customers currently using the service. And we see everyone from healthcare companies to financial institutions, to big industrial players, to a lot of startups. And so there’s a lot of eagerness and companies moving really quite quickly. And really what we see is the more a company is focused on the places where these models really work well and their core use cases, the faster they’re really moving in this space.
Matt: OpenAI, a company you rely on for a lot of your models, you own a big portion of it. It’s suffered a major crisis in the past few weeks. Its leadership team apparently divided because of safety issues. How is this impacting enterprise readiness to use OpenAI solutions through Microsoft?
Eric: OpenAI has been a key partner of ours for years and we work very closely with them. And we feel very confident that at Microsoft we have all the things we need to continue operating and working really well with OpenAI. We also offer customers a breadth of models: they can choose the best frontier models, which really come from OpenAI, as well as the best open source models – models like Llama 2 and others that are available on the service that companies can go and use. And so, we really want to make sure that we’re helping companies bring all of that together. And as companies work with us, we want to make sure that they’ve got the right set of tools to build these applications as quickly as they can and as maturely as they can, and put it all together into a single place.
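[Editor’s note: as a rough illustration of the open source side Boyd describes, the sketch below runs Llama 2 through the Hugging Face transformers library. The model ID, sampling settings and hardware assumptions are ours, not Microsoft’s.]

```python
# Illustrative sketch only: running an open source model such as Llama 2 with
# the Hugging Face transformers library. The gated model ID assumes you have
# accepted Meta's license; device_map="auto" additionally requires accelerate.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder; any open model works
    device_map="auto",  # spread the weights across available GPUs/CPU
)

out = generator(
    "Draft a short product description for a reusable water bottle.",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)
print(out[0]["generated_text"])
```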
Matt: Are there any other key factors that determine an enterprise’s readiness for adopting gen AI solutions?
Eric: We see the most success with companies that have a clear vision for, hey, here’s a problem that’s going to get solved. But particularly when it’s in one of the key categories. These models are great at creating content. And so if you’re trying to create content, that’s a great application. They’re great at summarizing, if you’ve got a lot of user reviews and want to summarize them. They’re great at generating code. They’re great at sort of semantic search: You have a bunch of data and you’re trying to reason over it. And so as long as companies are building applications in those four application areas, which are really broad, then we see a lot of success with companies because that’s what the models work really well at. We do occasionally talk to companies that have grandiose ideas of how AI is going to solve some fanciful problem for them. And so we have to sort of walk them back to, look, this is an amazing tool that does incredible things, but it doesn’t do everything. And so let’s make sure that we really use this tool in the way that it can best work. And then we get great results out of that. We work with Instacart, and they’re making it so that you can take a picture of your shopping list and you can go right off of that. I think just thinking through what are the layers of convenience that we can bring to our customers, and how can companies really adopt that, is really going to help them accelerate where they’re going.
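[Editor’s note: the “semantic search over your own data” pattern Boyd describes boils down to embedding documents and queries, then ranking by similarity. The sketch below is an illustrative example using the Azure OpenAI embeddings API; the deployment name and sample documents are placeholders.]

```python
# Illustrative sketch only: semantic search by embedding documents and a query,
# then ranking by cosine similarity. "text-embedding-ada-002" is a placeholder
# Azure deployment name.
import os

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

docs = [
    "Refund policy: items can be returned within 30 days.",
    "Shipping usually takes 3-5 business days.",
    "Our support line is open 9am-5pm on weekdays.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-ada-002", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
query_vec = embed(["How long do I have to send something back?"])[0]

# Rank documents by cosine similarity to the query.
scores = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
print(docs[int(np.argmax(scores))])  # most semantically relevant document
```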
Matt: Your competitors are chomping at the bit to get into the mix, maybe to exploit what’s been happening at OpenAI and the drama around that. You know, Amazon, Google, little companies I’m sure you’ve heard of. What unique value propositions does Microsoft offer with its GenAI solutions that set it apart from those competitors?
Eric: Yeah, I mean, one of the things that we think about is, you know, we’ve been the first in this industry and we’ve been at it for a while now. We’ve had GPT-4 in the market for a year. We’ve been building copilots and other applications on top of it that have been in market for most of this year. We’ve taken all of the learnings of what people are building into those products and put them into the Azure AI Studio and other products that make it easy for customers to build their own applications.
And on top of that, we’ve been thinking very carefully from the start about how do you build these applications in a responsible way? And how do we give customers the toolkit and things that they need to build their own applications in the right responsible way. And so, you know, as I mentioned, we’ve got over 18,000 customers. That’s a lot of customers who are seeing really valuable adoption from using these models. And it’s having a real impact on their products and services.
Matt: You saw a lot of companies trying to exploit the instability at OpenAI. You saw Benioff from Salesforce offering jobs to any OpenAI developer that wanted to walk across the street. You’ve seen Amazon taking a veiled slap at Microsoft for being dependent on OpenAI. How does Microsoft think about its partnerships now, in particular with OpenAI, and how do you structure those partnerships to reassure companies – your customers, those thousands of customers – that those models and other products will be safe and well governed?
Eric: We have, as I mentioned, a very close collaboration with OpenAI. We work together in really all phases of building and developing the models. And so we approach it with safety from the outset and thinking through how we’re going to build and deploy those models. We then take those models and host them completely on Azure. And so when a company is working with Azure, they know they get all of the promises that Azure brings. Look, we have a lot of history working with customers’ most private data, their emails, their documents. We know how to manage that to some of the strictest privacy regulations in the industry. And we bring all of that knowledge to how we work with AI and approach it in the exact same manner. And so companies should have a lot of confidence with us. At the same time, we’ve partnered deeply with OpenAI. We’ve partnered with a host of other companies. We’ve partnered with Meta on the Llama model. We’ve partnered with NVIDIA, with Hugging Face, and a number of others. And so we really want to make sure that customers have the choice among the best foundation models, the frontier models that are pushing the envelope for what’s possible, along with the full breadth of everything else that the industry is doing in this space.
Matt: You mentioned Llama and Hugging Face. A lot of the experimentation is happening on open source. I think what you’re also hearing is that closed source sometimes can be the fastest to market. And we heard Amazon’s Adam Selipsky last week kind of making a veiled remark – I don’t think he mentioned Microsoft by name – but saying Microsoft’s dependent, highly dependent, on OpenAI for that closed model. And he was boasting about [AWS’s] relationships with Anthropic, Cohere, AI21 and Stability AI. Is that a vulnerability, to be so reliant on OpenAI, given everything that’s going on there?
Eric: I don’t see it that way at all. I think we have a really strong partnership that together has produced the world’s leading models that we’ve been in market with for the longest amount of time, and have the most customers, and are really pushing the frontier on this. But we also have a breadth of partnerships with other companies. And so, we’re not single-minded in this. We know customers are going to want to have choice and we want to make sure we provide it to them. The way that this industry is moving at such a quick pace, we want to make sure that customers have all the tools that they need so that they can build the best applications possible.
Matt: Do you see a time over the next few weeks, months, where you’re gonna be maybe delivering more models outside of OpenAI, maybe a relationship with Anthropic or others?
Eric: I mean, there’s always things coming. I would stay tuned to this space. There’s definitely… we’ve got some things cooking, that’s for sure.
Matt: Many companies see a risk in adopting gen AI, including that this technology hallucinates in unpredictable ways. There have been a lot of things that companies such as yours have been doing to reduce those hallucinations. How are you tackling that problem?
Eric: Yeah, it’s a really interesting space. There are a couple of ways that we look at this. One is we want to make the models work as well as possible. And so we’ve innovated a lot of new techniques in terms of how you can fine-tune and actually steer the model to give the types of responses that you like to see. The other ways are through how you actually prompt the model and give it specific sets of data. And again, we’ve pioneered a lot of techniques there, where we see dramatically higher accuracy in terms of the results that come through with the model. And we continue to iterate on this. And the last dimension is really in thinking through how people use the models. We’ve really used the metaphor of a co-pilot. If you think about the developer space, if I’m writing code, the model is helping me write code, but I’m still the author of it. I take that to my Word document: “Help me expand these bullet points into a much richer conversation and document that I want to have.” It’s still my voice. It’s still my document. And so that’s where that metaphor really works. You and I are used to having a conversation with another person, and occasionally someone misspeaks or says something wrong. You correct it and you move on and it’s not unusual. And so that metaphor works really well for these models. And so the more people learn the best ways to use them, the better off they’re going to get, the better results they’re going to get.
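[Editor’s note: the “give it specific sets of data” approach Boyd mentions is commonly implemented as grounded, retrieval-augmented prompting. The sketch below illustrates that general pattern, not Microsoft’s internal technique; the retrieval step is stubbed out and the deployment name is a placeholder.]

```python
# Illustrative sketch only: ground the prompt in retrieved text and instruct
# the model to answer from it alone. Retrieval itself (search index, vector
# store) is omitted; `retrieved_passages` stands in for your pipeline's output.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

retrieved_passages = [
    "Contoso's standard warranty covers parts and labor for 24 months.",
    "Accidental damage is not covered under the standard warranty.",
]

system_prompt = (
    "Answer using only the CONTEXT below. If the answer is not in the "
    "context, say you don't know.\n\nCONTEXT:\n" + "\n".join(retrieved_passages)
)

resp = client.chat.completions.create(
    model="gpt-4",  # placeholder deployment name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Does the warranty cover accidental damage?"},
    ],
    temperature=0,  # a lower temperature further reduces off-script answers
)
print(resp.choices[0].message.content)
```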
Matt: Eric, you talked a little bit about reinforcement learning from human feedback, you know, the fine-tuning process to make some of these models safer. One area that’s been talked about, but hasn’t gotten a lot of attention, is this area of interpretability (or explainability). There’s some research into that, some work being done. Is that promising, or is that something that’s just going to be impossible to do now that these models are so complex?
Eric: I mean, it’s definitely a research area. And so we see a lot of research continuing to push into this, trying counterfactuals, trying different training steps and things like that. We’re at early stages and so we see a lot of that continuing to grow and move. I’m encouraged by some of the responsible AI tooling that we’ve put into our products and that we’ve open sourced as well. And so things like Fairlearn and InterpretML that will help you understand some simpler models, we have a lot of techniques and ideas. The question really is, hey, how do we continue to scale that up to these larger sets of models? I think we’ll continue to see innovation in that space. It’s really hard to predict where this space is going. And so I think we know there are a lot of people working on it and we’ll be excited to see where they get.
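[Editor’s note: InterpretML is one of the open source Microsoft packages Boyd references. The sketch below shows its typical usage on one of the “simpler models” he mentions, an Explainable Boosting Machine trained on a public scikit-learn dataset; it is illustrative only.]

```python
# Illustrative sketch only: a glassbox model whose global and local feature
# contributions can be inspected directly with InterpretML.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features drive the model's predictions overall.
show(ebm.explain_global())
# Local explanation: why the model scored these specific rows the way it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```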
Matt: Eric, one of the luminaries in AI, Yann LeCun at Meta, has talked for a while about how important it is for models to be open sourced. But your main bet, OpenAI, is closed. Can you talk about whether this will be a problem, this idea of closed models? We talked about the problem of the research into explainability being limited. Do you see that debate continuing, or are you going to bring that to a close pretty soon?
Eric: I mean, we’re very invested in both sides of that. So we obviously work very closely with OpenAI in producing the leading frontier models. And so we want to make sure that those are available to customers to build the best applications they can. We not only partner with customers, we produce a lot of our own models. And so there’s a family of five models that we’ve produced that are open source models. And there’s a whole host of technology around how to optimize your models around ONNX and the ONNX runtime that we’ve open-sourced. And so there’s a lot of things that we contribute to the open source space. And so we really feel like both are going to be really valuable spaces for how this, you know, these new large language models continue to evolve and grow.
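[Editor’s note: for readers unfamiliar with the ONNX tooling Boyd references, the sketch below exports a toy PyTorch model to the ONNX format and runs it with ONNX Runtime. The model, file name and input shapes are placeholders.]

```python
# Illustrative sketch only: export a toy PyTorch model to ONNX, then run it
# with ONNX Runtime, which can apply graph optimizations and hardware-specific
# execution providers.
import numpy as np
import torch
import onnxruntime as ort

model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)
model.eval()

dummy_input = torch.randn(1, 8)
torch.onnx.export(
    model, dummy_input, "toy_model.onnx",
    input_names=["input"], output_names=["logits"],
)

session = ort.InferenceSession("toy_model.onnx", providers=["CPUExecutionProvider"])
logits = session.run(["logits"], {"input": np.random.randn(1, 8).astype(np.float32)})[0]
print(logits)
```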
Matt: Microsoft has done some of the best work on governance. You had the 45-page white paper released [in May], though any white paper is going to be dated with the pace that things are moving now. But I found it interesting that one of your anchor tenets in that paper was transparency. You have transparency notes on a lot of your features. And I saw one on Azure OpenAI where it was full of cautions: Don’t use OpenAI in scenarios where up-to-date, accurate information is crucial, or where high-stakes scenarios exist, and so on. Will those cautions be removed soon with the work that you’re doing?
Eric: Again, it’s about thinking through what are the best ways to use the models and what are they good at? And so as customers learn more about what to expect from using this new tool that they have, I think they’ll get more comfortable and more familiar with it. But yeah, I mean, you’re right. We’ve been thinking about responsible AI for years now. We published our responsible AI principles. You’re referencing our Responsible AI standard where we really showed companies that this is the process that we follow internally to make sure that we’re building products in a responsible way. And the impact assessments where we think through all the potential ways a person might use a product and how do we make sure that it’s used in the most beneficial ways possible. We spend a lot of time sort of working through that and we want to make sure that everybody has the same tools available to go and develop those same things that we do.
Matt: You’ve also been on the lead for helping companies think about this. I saw you and Susan Etlinger had a session at your [Ignite] event where you released a paper on the various elements of readiness. One area I’d love to ask you about related to this is you’ve got the Azure AI Studio, Azure ML Studio, Copilot Studio, a lot of products. How do companies get a singular governance framework from Microsoft given these multiple products? Or is it the responsibility of companies to [manage governance] in-house?
Eric: I mean, we work with companies all the time and they’re building products for their own enterprises. And so of course they have their own, different standards that they operate by and that we need to sort of work with. And we work very closely with large financial institutions, we do security reviews and detailed reviews of how these products work and what they should expect from them. And across the board, they have the same consistent set of promises from Microsoft.
They know that we’re going to adhere to our responsible AI standard. They know that we’re going to live up to our responsibility principles. They know that all of these products are going to be protected by Azure Content Safety, and that the customers will have the tools and dials to set those safety systems where they want them to. And so that’s the way that we want to work with customers: giving them the confidence in how all these products work, and the way that Microsoft works, and to bring it into their particular enterprise and their particular situation to figure out how’s that best going to work for their products, for their customers, for their employees.
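[Editor’s note: as an illustration of the kind of “dials” Boyd describes, the sketch below screens model output with the Azure AI Content Safety Python SDK (azure-ai-contentsafety). The endpoint, key and severity threshold are placeholder assumptions, and exact response fields can vary by SDK version.]

```python
# Illustrative sketch only: screening LLM output with Azure AI Content Safety.
# Endpoint, key and the severity threshold are placeholder assumptions.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

model_output = "Some text produced by an LLM that you want to screen."
result = client.analyze_text(AnalyzeTextOptions(text=model_output))

# Each analyzed category (hate, sexual, violence, self-harm) comes back with a
# severity score; the threshold of 2 here is an arbitrary example "dial".
flagged = [c for c in result.categories_analysis if c.severity and c.severity >= 2]
if flagged:
    print("Blocked categories:", [c.category for c in flagged])
else:
    print("Output passed the configured safety thresholds.")
```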
Matt: Are there any companies that act as standard bearers, or as great precedents, for you – that have done a particularly good job at setting the governance framework or blueprint for AI?
Eric: We work with everyone from healthcare companies to large financial institutions, to industrial companies that are making machines and hardware that has lots of safety concerns and regulations and rules around every sort of aspect of it. In each case, we’ve been able to work with those companies to figure out how do we satisfy the rules and concerns that they have in their industry.
In the healthcare space, [Microsoft acquired] Nuance. We’ve been able to use these models in products that are directly going to be involved in the doctor and patient conversation, helping to directly produce the medical record as a part of what Nuance provides and so, thinking through how to do that in the right way to meet all the regulatory rules that healthcare has – this has been a real journey for us, but it’s also been something where we’ve learned a whole lot along the way about how you do this in the best ways possible.
Matt: Microsoft has a huge advantage with its Office suite and the fact that you have millions of users using these applications. You have this expertise and research in personal computing and UX. Presumably, you have some of the most experience in seeing where users get lost and then needing to get them back on track again. Are there specific ways you’re leveraging that already, over the last couple of months since you’ve rolled out [copilots]?
Eric: I think it’s been interesting to watch as customers adopt these new technologies. We saw it first with GitHub Copilot, which was the first copilot we launched, and that’s been almost two years in market. GitHub Copilot really helps developers write code more productively. But just because I have a new tool doesn’t mean I know how to use it effectively. And so I’m a developer. When I write code, I sit down and I just start typing. And I don’t think I should ask someone, hey, how can I do this? Can you do some of this for me? And so it’s kind of a change in mindset. And so we’re seeing similar things, as we work with customers that are using these copilots across our suite of Office products, M365 and the like, where now I can ask questions that I don’t know that I should be able to get an answer to. And so just being able to ask, hey, what are the last three documents that I reviewed with my boss, and see them and be like, oh, right, this is super helpful. And hey, I’m meeting with this person tomorrow. What are the things that are most relevant to that? And so I kind of have to learn that this is a new tool and a new capability that I’ve got. And so I think that’s one of the things that we’re seeing: how do customers learn about all the capabilities that are now available to them, you know, because they didn’t used to be, so that they get the most productive benefit out of the tools that they have.
There definitely is a learning curve that the actual end users have to go through. And so how you design and build those experiences is something that we’ve definitely spent a lot of time thinking through as we build and roll out our products.
Matt: You’ve seen a lot of people, including Sam Altman very recently talking about the need for more reasoning in these models. Do you see that happening anytime soon with Microsoft’s efforts or in conjunction with OpenAI?
Eric: I think reasoning is such an interesting capability. We’d like to bring more open-ended problems to the models and have them give us sort of step-by-step, here’s how you sort of approach and solve them. And honestly, they’re really pretty good at it today. What would it take to sort of make them great at it, to sort of make them amazing, so that we start to rely on them in more ways? And so I think that’s something that we’re thinking through. There are a lot of research directions that we’re working through. How do you bring different modalities? You see vision and text, and so expect speech and all of those things kind of coming together. And how do you just sort of bring more capabilities into what the models can do? All of those are research directions, and so I would expect to see a lot of interesting things coming. But I always hesitate to make predictions. The space has moved so far so fast in the last year, it’s really hard to even guess what we’ll see coming next.
Matt: Eric, thank you so much for joining us at VentureBeat. I wish you the best and hope to stay in touch as we cover your journey in this really incredibly exciting area. Until next time.
Eric: Thanks so much, I really appreciate it.