Sam Altman’s Latest Interview: Why Is OpenAI Breaking Up with Microsoft?
- Key Insight: The focus of AI competition is shifting from “the most powerful model” to “enterprise-grade infrastructure.” OpenAI’s collaboration with AWS on Bedrock Managed Agents is not a simple listing of models, but a deep embedding of AI into AWS’s native identity, permission, data, and security systems, enabling enterprises to build “virtual colleagues” that can execute internal tasks.
- Key Elements:
  - Amendment to the OpenAI-Microsoft agreement: Azure no longer has exclusive access to OpenAI's models. OpenAI can now expand its products to other cloud platforms like AWS. Microsoft has stopped paying revenue share and given up its exclusive IP license.
  - Product Core: Bedrock Managed Agents packages OpenAI's frontier models into AWS's agent runtime, including native services such as authentication, permission management, logging, and governance, lowering the barrier for enterprises to deploy AI agents.
  - Enterprise Deployment Challenges: Enterprise AI agents must interact with databases, SaaS, permission systems, and compliance requirements, which is far more complex than a local environment. This partnership aims to provide a one-click enterprise-grade solution.
  - Changing Competitive Landscape: Future AI competition will depend on who can build a platform that enterprises feel comfortable using, can scale continuously, and can actually perform work, rather than just competing on API pricing, chip performance, or model benchmarks.
  - Roles of Both Parties: Enterprise customers can contact AWS support directly, and data remains within the AWS VPC. OpenAI and AWS jointly build the product and commit to deploying models on AWS's in-house Trainium chips.
Original Title: An Interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman About Bedrock Managed Agents
Original Author: Ben Thompson, Stratechery
Original Translation & Compilation: Peggy, BlockBeats
Editor's Note: On April 27, OpenAI and Microsoft just revised their cooperation agreement, meaning Azure no longer exclusively hosts OpenAI models. Consequently, OpenAI can now expand its products to other cloud platforms like AWS.
Note: Azure is Microsoft's cloud computing platform, usually referred to as Microsoft Azure. Like AWS and Google Cloud, it primarily offers enterprise cloud services such as servers, databases, storage, networking, security, and AI model deployment.
To outsiders, this might seem like a simple change in cloud service distribution channels. However, judging from the discussion between Sam Altman and AWS CEO Matt Garman, the more critical shift is that AI is moving from "model invocation" into an "enterprise-level workflow" phase.
This article is compiled from an interview by the tech business analysis media Stratechery with Sam Altman and Matt Garman. It revolves around the Bedrock Managed Agents launched through the OpenAI and AWS collaboration, discussing the parallels between cloud computing and AI platform shifts, the deployment challenges of enterprise-level agents, the difference between AgentCore and managed services, and AWS's position in the AI infrastructure competition.
Note: Stratechery was founded by tech analyst Ben Thompson. It focuses on tech company strategy, platform economics, cloud computing, AI, and media industry changes. Its content features in-depth analysis and executive interviews, holding significant influence in Silicon Valley tech and investment circles, often seen as a key window into major tech companies' strategic moves.
The core of Bedrock Managed Agents isn't just about giving AWS customers access to OpenAI models, but embedding those models into AWS's native identity, permissions, logging, governance, deployment, and security systems. In other words, what enterprises truly need isn't a smarter chat window, but a system of "virtual colleagues" that can operate within the organization, access data, execute tasks, and adhere to permission boundaries.
This is the most noteworthy aspect of this collaboration: the focus of AI competition is shifting from "who has the strongest model" to "who can turn models into usable enterprise infrastructure." In individual developer scenarios, Codex can leverage the local environment to solve many complex problems. But in enterprise scenarios, agents must contend with databases, SaaS, permission systems, security boundaries, and compliance requirements.
In a way, this collaboration echoes the early logic of cloud computing. AWS lowered the startup costs for entrepreneurs, allowing small teams to build internet products without managing their own servers. Now, OpenAI and AWS aim to lower the barrier for enterprises to deploy AI agents, enabling companies to integrate AI into real business processes without having to piece together models, permissions, data, and security systems themselves. The difference this time is a faster adoption rate and more urgent enterprise demand.
Therefore, this article isn't really about OpenAI models being "listed" on AWS, but about the next phase of AI infrastructure: the deep integration of models, cloud, data, and enterprise permission systems. Future competition might not just be about API prices, chip performance, or model benchmarks, but about who can build an AI platform that enterprises trust, that can scale continuously, and that truly gets work done.
Below is the original text:
Introduction
Good morning. As I mentioned yesterday, today's Stratechery interview is both early in my publishing schedule (moved up from Thursday to Tuesday) and late in terms of delivery time (pushed from 6 AM ET to 1 PM ET), because the topic was subject to an embargo.
Over the past few days, this embargo put me in a slightly delicate position. Last Friday, I interviewed OpenAI CEO Sam Altman and AWS CEO Matt Garman about Bedrock Managed Agents powered by OpenAI. Naturally, one of my questions was: how does this collaboration square with the agreement between OpenAI and Microsoft that gave Azure exclusive access to OpenAI models?
Note: Bedrock Managed Agents is AWS's managed AI agent service, powered by OpenAI's models. It's not just about allowing enterprises to call OpenAI models on AWS, but embedding the models into AWS's native identity authentication, permission management, logging, security, governance, and deployment systems. This allows enterprises to build AI agents within their own cloud environment that can execute tasks, access internal data, and adhere to permission boundaries. Simply put, it's OpenAI agent infrastructure running within an AWS enterprise environment.
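To make the "embedded in AWS" idea concrete, here is a minimal sketch of the request shape used by Amazon Bedrock's existing Converse runtime API, which a managed-agent layer like this conceptually sits above. This is an illustration only: the actual Bedrock Managed Agents API was not public at the time of writing, and the model identifier `openai.gpt-5` is a placeholder assumption, not a confirmed ID.

```python
# Hypothetical sketch: the body of a Bedrock Converse API request.
# The model ID below is a placeholder -- the real identifier for
# OpenAI models on Bedrock was not public when this was written.

def build_converse_request(prompt: str, model_id: str = "openai.gpt-5") -> dict:
    """Assemble a request body in the shape Bedrock's Converse API expects:
    a model ID plus a list of role-tagged messages with text content blocks."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
    }

# With boto3 the call would look roughly like this (not executed here;
# it requires AWS credentials and access to the model in your region):
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**build_converse_request("Summarize Q3 sales"))
```

The point of the managed-agent layer is that everything around this call (who is allowed to invoke which model, which data the agent may touch, and what gets logged) is handled by AWS-native identity and governance services rather than by application code.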
Late Sunday, I heard through the grapevine that Microsoft would announce something on Monday morning. I wondered if it might be a preemptive lawsuit!
By Monday, Microsoft and OpenAI announced they had revised their agreement, allowing OpenAI to offer its products on other cloud providers, including AWS.
Thus, this interview came to be.
I believe this new arrangement between Microsoft and OpenAI is very reasonable for both parties. Here are the key points of the new agreement outlined in Microsoft's official announcement:
· Microsoft remains OpenAI's primary cloud partner; OpenAI products will launch on Azure first, unless Microsoft cannot support or chooses not to support the necessary capabilities. OpenAI can now offer its full product suite to customers through any cloud provider.
· Microsoft will continue to receive licenses for OpenAI model and product-related IP until 2032. However, Microsoft's license will no longer be exclusive.
· Microsoft will no longer pay revenue share to OpenAI.
· The revenue share OpenAI pays to Microsoft will continue until 2030, unaffected by OpenAI's technological progress, maintaining the same percentage but with a total cap.
· Microsoft, as a major shareholder, will continue to directly participate in OpenAI's growth.
I believe the last point is the most important. Previously, Azure had a real competitive advantage as the only hyperscaler offering OpenAI models. However, this exclusivity was also constraining OpenAI, especially as more enterprises prioritize accessing models on the cloud platform they already use. I've pointed out many times that this was a significant competitive advantage for Anthropic. In other words, Azure's exclusive rights were actually harming Microsoft's investment in OpenAI. Given Anthropic's rapid growth this year, Microsoft needs to protect that investment, even if it means Azure's differentiation is weakened.
At the same time, OpenAI clearly sees AWS as a huge opportunity – big enough to forego some revenue associated with Azure for several years. Combined with the previous point, this likely makes it easier for Azure management to accept losing exclusivity: after all, ceasing payments to OpenAI will significantly improve Azure's profit and loss statement. OpenAI has also released Microsoft from the AGI clause; now, whatever happens, the agreement between the two companies will last until 2032.
It's now quite clear that OpenAI's next focus will be on AWS. The strongest evidence is the subject of this interview: Bedrock Managed Agents powered by OpenAI. The simplest way to understand this product is as "Codex within AWS." Codex works well largely because it's local, naturally solving many complex problems, especially security. But making agents work across departments and systems within an organization is an entirely different matter. This product aims to make these workflows easier for organizations that already have most of their data on AWS.
Around this point, in this interview we discussed how AWS pioneered the cloud computing category and its impact on startups; the similarities and differences between AI and that paradigm shift. Then we talked about Bedrock Managed Agents: what it is and how it differs from Amazon's existing AgentCore product. We also touched on Trainium, why chips won't be that important for most AI users, and why collaboration is a sensible choice compared to Google's emphasis on full-stack integration.
A reminder: all Stratechery content, including interviews, is available as a podcast. Click the link at the top of this email to add Stratechery to your podcast player.
Onto the interview.
Interview Content
This interview has been lightly edited for clarity.
OpenAI Enters AWS, the Era of Azure Exclusivity Ends
Ben Thompson (Host): Matt Garman, Sam Altman – Matt, welcome to Stratechery; Sam, welcome back. I previously interviewed Sam in October 2025, March 2025, and February 2023.
Sam Altman (OpenAI CEO): Thank you.
Matt Garman (AWS CEO): Thank you for having me.
Host: Matt, this is your first time on Stratechery. Sadly, I think Sam's presence means we can't do our usual "getting to know the guest" segment. Besides, he probably doesn't want to hear us reminisce about our days at Kellogg School of Management. Still, glad to have an alum on the podcast.
Matt Garman: Yes, I'm happy to be here. I can come back another time, and we can delve deeper.
Host: That would be great. You started at AWS as an intern and are now leading the entire AWS organization during the AI wave. How do you see building the AI business compared to building the original general-purpose computing business – let's put it that way for now – what are the similarities and what are the real differences?
Matt Garman: I think the similarity is seeing the same excitement, and seeing builders out there able to do things they couldn't do before. The cool thing when we started AWS was that suddenly, developers could access infrastructure that only the largest companies previously had. You needed millions in budget to build a data center. Now, a developer with a credit card and a few dollars could launch an application. It dramatically expanded what internet builders could do.
Our philosophy was that people could build whatever they wanted. We wouldn't presume what they should do. We believed creativity existed everywhere; put powerful tools in their hands, and they'd create interesting, amazing things.
I think AI's empowerment of builders is at least as transformative, perhaps even more so. Think about what's now possible: you don't need ten years of coding school to build an application; you don't need a huge team of hundreds or months and months to build things. You can build and iterate quickly with a small team. AI is unlocking innovation across all sectors of the world. In many ways, it's very similar to back then. Seeing what it enables for our customer base is truly exciting.
Host: But back when AWS emerged, you were the only player, so the benefits (and drawbacks) naturally fell your way. Is there a sense that the AWS era was about general-purpose computing – making compute interchangeable, elastic, cheap – whereas in AI, especially during training, the winning abstraction seems more like highly vertically integrated superclusters, advanced networking, and extremely tight coupling between software and hardware. Was that a surprise for you? Because this time, you weren't starting from scratch with "we're the only one here"; you had a specific understanding of large-scale computing, but at least in AI's early years, it didn't perfectly align with that past experience.
Matt Garman: I'm not sure it's that different for us. I think what's truly different is the speed of adoption. It's been astonishingly fast. Sam, maybe you'll disagree. But the rate at which people have grabbed these capabilities has exceeded everyone's expectations, I think.
It's very different from when we started cloud computing. We spent a very long time explaining why a book-selling company would offer computing power. We had to work hard to explain what cloud computing was. There was a lot of heavy lifting that people forget nowadays. But in 2006, nobody took it for granted that the world's computing would move to the cloud. There was a lot of difficult explanation and evangelism.
Host: Do you feel there's still explaining to do now? Because many people were anchored in the training era, and you would say, "we're thinking about the inference era," which would be a different thing. Do you need to re-engage that explanatory capability?
Matt Garman: You do, but the speed at which people understand what you're saying is completely different. So yes, there's some education needed to move people from "this is cool, I can talk to a smart chatbot" to "it can actually do work in your enterprise." But relative to the pace of technological evolution, it's already pretty fast.
Host: I promise we'll get to today's product soon. But Sam, from an entrepreneurial ecosystem perspective, AWS was clearly transformative. It fundamentally changed the barrier to entry for startups. Anyone could start a company. Seed rounds and angel investors followed; the funding hurdle was pushed back. You didn't need "we're buying servers" in your pitch deck; you could build an app first and then raise an A round or something.
From your perspective, what are the differences and similarities between the world AWS opened up and the world AI is opening up now?
Sam Altman: I think there have been four massive platform moments that empowered startups: the internet, cloud, mobile, and AI. Of these four, cloud was the first I experienced as an adult. In YC's early days, it's hard to overstate how transformative that was for startups.
Before that, startups had to rent colo space, assemble their own servers, rack them. It was incredibly complex and you needed a lot of funding first. Then suddenly, the cloud appeared. Cloud emerged right after YC started, maybe in its second year.
Host: I was going to ask that – ultimately, YC and the cloud: were they more intertwined than you realized at the time?
Sam Altman: It felt very intertwined at the time. It felt like YC was riding the cloud wave from the beginning, though there were some early precursors to AWS.
Host: If AWS exists, the capital required to get a startup off the ground is indeed much lower than before.
Sam Altman: It was a massive enablement change, and a big part of why YC sounded so crazy back then. People said, "You can't fund a startup with tens of thousands of dollars, that's impossible, servers alone cost more than that." So it completely changed what startups could accomplish with little capital.
Generally, when a major platform shift occurs and you can do things faster and with less capital than before, startups win. It's the classic way startups beat big companies. I witnessed that firsthand early in my career with cloud. Now, watching companies build with AI feels directionally very similar. But as Matt said, the speed this time is insane.
Host: Is there a situation where incumbents, the big players, are adopting AI much faster than they adopted cloud computing?
Sam Altman: There's definitely more of that. But I'm also including the speed at which startup revenue grows. I gave a talk at YC recently and asked: "What's the revenue expectation for a good company at the end of YC now?" They said, "The answer changes every month. It might change between the start and end of a single YC batch." I've never seen that before. The speed at which people are building significant businesses on this new platform is unlike anything I've seen.
Host: Matt, throughout the cloud era, AWS was the default cloud for almost all startups, giving you a massive advantage. What makes you the default cloud today? Because many people are building directly on the OpenAI API. Do you feel, "we're coming at this from a very different angle. We have a massive existing customer base demanding AI capabilities, but we don't have that same visibility into this whole new wave of startups Sam is talking about"?
Matt Garman: I think there are a few things. First, we're very excited about this collaboration, and I think it will be very significant for startups too. But even today, if you talk to startups, most scaling startups are still scaling on AWS. There are lots of reasons: the scale is there, along with availability, security, and reliability; the partner ecosystem of other ISVs is on AWS; their customers are on AWS.
Host: (Laughs) Whether they like it or not, everyone has used the AWS console, so they're used to it.
Matt Garman: And we help them. We spend a huge amount of time enabling startups – not just with credits, but with advice on how to architect systems, how to think about go-to-market, and many things like that. I think a lot of startups really appreciate that. We put enormous time and effort into ensuring that, because we truly believe startups are the lifeblood of AWS. It started that way, as Sam said, and it's true today. I still go to Silicon Valley or other places every quarter to meet directly with startups, hear what they're doing, and confirm that what we're building actually meets their needs.
So, the competition for startup attention is definitely higher than 20 years ago. But it remains just as important to us. We invest massive amounts of time to ensure we meet those startup needs.
Host: Would it be fair to say that people building directly on the OpenAI API (as opposed to using the Azure version of the OpenAI service) are more likely to have a tech stack where their general computing is on AWS and their AI is through OpenAI?
Matt Garman: I think that's a very common pattern for many startups today. Absolutely.
Bedrock Managed Agents: Bringing AI Agents into Enterprise Workflows
Host: That brings us to today's announcement: Bedrock Managed Agents powered by OpenAI. I hope I'm stating that correctly. My understanding is that the selling point isn't just that OpenAI models are available within AWS – I don't think that was allowed yet – but that OpenAI's frontier models are packaged into an AWS-native agent runtime that includes identity, permission states, logging, governance, and deployment. Sam, is that an accurate description?
Sam Altman: Yes, that's a good summary.
Host: Great. So what exactly is this? Explain it in plain English now.
Sam Altman: I think the next phase of AI will move from "you give an agent some text and get more text back," even from "you give it a bunch of code and get more code back," to a new stage: these agents will operate inside companies, doing various different types of work.
"Virtual colleague" is the best description I've heard, but no one has really found the perfect language for it yet. We're co


