Sam Altman's Latest Interview: Why Is OpenAI Parting Ways with Microsoft?

BlockBeats
Invited columnist
2026-04-29 09:00
This article runs about 22,249 characters; reading it in full takes roughly 32 minutes.
With Microsoft Azure's exclusivity over, OpenAI is trying to bring AI agents into real enterprise workflows
AI Summary
  • Core viewpoint: The focus of AI competition is shifting from "the strongest model" to "enterprise-grade infrastructure." Bedrock Managed Agents, launched through the OpenAI–AWS partnership, is not a simple model listing; it embeds AI deeply into AWS's native identity, permissions, data, and security systems, letting enterprises build "virtual colleagues" that can execute internal tasks.
  • Key points:
    1. Revised OpenAI–Microsoft agreement: Azure no longer has exclusive access to OpenAI models; OpenAI can expand its products to other cloud platforms such as AWS; Microsoft stops paying revenue share and gives up its exclusive IP license.
    2. Product core: Bedrock Managed Agents wraps OpenAI's frontier models in AWS's Agent Runtime, including native services for identity authentication, permission management, logging, and governance, lowering the barrier for enterprises to deploy AI agents.
    3. Enterprise deployment challenges: enterprise AI agents must contend with databases, SaaS, permission systems, and compliance requirements, far more complex than a local environment; this collaboration aims to provide a turnkey enterprise-grade solution.
    4. Shifting competitive landscape: future AI competition will hinge on who can build a platform that enterprises trust, can continuously expand, and that actually gets work done, rather than merely competing on API prices, chip performance, or model leaderboards.
    5. The two parties' roles: enterprise customers can contact AWS support directly, and data stays within the AWS VPC; OpenAI and AWS are building the product jointly, and OpenAI has committed to deploying models on AWS's in-house Trainium chips.

Original title: An Interview with OpenAI CEO Sam Altman and AWS CEO Matt Garman About Bedrock Managed Agents

Original author: Ben Thompson, Stratechery

Original compilation: Peggy, BlockBeats

Editor's note: On April 27, OpenAI and Microsoft just revised their cooperation agreement. Azure no longer holds exclusive access to OpenAI models, allowing OpenAI to expand its products to other cloud platforms like AWS.

Note: Azure is the cloud computing platform under Microsoft, usually referred to as Microsoft Azure. Like AWS and Google Cloud, it primarily provides enterprises with cloud services such as servers, databases, storage, networking, security, and AI model deployment.

To the outside world, this may seem like merely a change in cloud service distribution channels. However, judging from the discussion between Sam Altman and AWS CEO Matt Garman, the more critical shift is that AI is moving from a "model calling" phase into an "enterprise workflow" phase.

This article is compiled from an interview by Stratechery, a tech business analysis media outlet, with Sam Altman and Matt Garman. It focuses on Bedrock Managed Agents, launched through the collaboration between OpenAI and AWS, discussing the similarities between cloud computing and the AI platform shift, the deployment challenges of enterprise-grade agents, the difference between AgentCore and managed services, and AWS's position in the competition for AI infrastructure.

Note: Stratechery was founded by tech analyst Ben Thompson. It has long focused on tech company strategies, platform economics, cloud computing, AI, and media industry changes. Its content primarily features in-depth analysis and executive interviews, wielding significant influence in Silicon Valley tech and investment circles, often regarded as a key window into the strategic moves of major tech companies.

The core of Bedrock Managed Agents is not just allowing AWS customers to use OpenAI models, but embedding the models into AWS's native identity, permissions, logging, governance, deployment, and security systems. In other words, what enterprises truly need is not a smarter chat window, but a "virtual colleague" system capable of operating within the organization, accessing data, executing tasks, and adhering to permission boundaries.

This is also the most noteworthy aspect of this collaboration: the focus of AI competition is shifting from "who has the strongest model" to "who can turn models into usable enterprise infrastructure." In personal developer scenarios, Codex can rely on the local environment to solve many complex problems. But in enterprise scenarios, agents must contend with databases, SaaS, permission systems, security boundaries, and compliance requirements.

In a sense, this collaboration also reenacts the early logic of cloud computing. AWS once lowered the startup costs for entrepreneurs, allowing small teams to build internet products without setting up their own servers. Today, OpenAI and AWS are trying to lower the barriers for enterprises to deploy AI agents, enabling companies to integrate AI into real business processes without having to piece together models, permissions, data, and security systems themselves. The difference is that the adoption speed is faster this time, and enterprise demand is more urgent.

Therefore, this article isn't really about OpenAI models being "listed" on AWS. It's about AI infrastructure entering the next phase: models, cloud, data, and enterprise permission systems beginning to deeply integrate. Future competition may no longer just be about API pricing, chip performance, or model leaderboards, but about who can build an AI platform that enterprises can confidently use, continuously expand, and truly rely upon to get work done.

The following is the original text:

Introduction

Good morning. As I mentioned yesterday, today's Stratechery interview came early on my publication schedule (moved up from Thursday to Tuesday) but late in actual delivery time (pushed from 6:00 AM to 1:00 PM Eastern), because the topic was under embargo.

Over the past few days, this embargo put me in a bit of a delicate position. Last Friday, I interviewed OpenAI CEO Sam Altman and AWS CEO Matt Garman, centered on Bedrock Managed Agents powered by OpenAI. Naturally, one of my questions was: How does this collaboration coordinate with the agreement between OpenAI and Microsoft that granted Azure exclusive access to OpenAI models?

Note: Bedrock Managed Agents is a managed AI agent service launched by AWS, powered by OpenAI's model capabilities. It's not just about allowing enterprises to call OpenAI models on AWS, but embedding the models into AWS's native identity authentication, permission management, logging, security, governance, and deployment systems. This enables enterprises to build AI agents within their own cloud environment that can execute tasks, access internal data, and adhere to permission boundaries. Simply put, it can be understood as an OpenAI agent infrastructure running within the AWS enterprise environment.
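To make the "agent infrastructure running within the AWS enterprise environment" idea concrete, here is a minimal sketch of invoking a Bedrock-hosted agent from Python with boto3. Since Bedrock Managed Agents as described in the article is a newly announced product, this uses the existing `bedrock-agent-runtime` API as an approximation; the agent ID, alias ID, and prompt are placeholders.

```python
import uuid

def build_invoke_params(agent_id: str, alias_id: str, prompt: str) -> dict:
    """Assemble parameters for bedrock-agent-runtime's invoke_agent call.

    Kept as a separate pure function so the request shape can be
    inspected and tested without AWS credentials.
    """
    return {
        "agentId": agent_id,
        "agentAliasId": alias_id,
        "sessionId": str(uuid.uuid4()),  # one session per conversation
        "inputText": prompt,
    }

# In a real AWS environment (credentials and region configured), the call
# would look like this; the response is a streamed sequence of events:
#
#   import boto3
#   client = boto3.client("bedrock-agent-runtime")
#   response = client.invoke_agent(
#       **build_invoke_params("AGENT_ID", "ALIAS_ID",
#                             "Summarize yesterday's support tickets"))
#   for event in response["completion"]:
#       chunk = event.get("chunk", {})
#       print(chunk.get("bytes", b"").decode("utf-8"), end="")
```

The point of the article's argument is everything this snippet does *not* show: the agent's identity, the data it may touch, and the actions it may take are governed by AWS-side IAM policies and logging rather than by application code.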

Late Sunday, I heard through the grapevine that Microsoft would announce something on Monday. At the time, I even wondered if it might be a preemptive lawsuit!

By Monday, Microsoft and OpenAI announced that both parties had revised their agreement, allowing OpenAI to offer its products on other cloud service providers, including AWS.

Hence, this interview.

I believe this new arrangement between Microsoft and OpenAI makes a lot of sense for both parties. Here are the key points of the new agreement listed in Microsoft's official announcement:

· Microsoft remains OpenAI's primary cloud partner; OpenAI products will be prioritized for release on Azure, unless Microsoft cannot support or chooses not to support the necessary capabilities. OpenAI can now offer its full range of products to customers through any cloud service provider.

· Microsoft will continue to receive licenses to IP related to OpenAI models and products, valid until 2032. However, Microsoft's license will no longer be exclusive.

· Microsoft will no longer pay revenue share to OpenAI.

· The revenue share paid by OpenAI to Microsoft will continue until 2030, unaffected by OpenAI's technological progress, with the proportion remaining unchanged but subject to an overall cap.

· Microsoft, as a major shareholder, will continue to directly participate in OpenAI's growth.

I think the last point is the most important. Previously, Azure did have a genuine competitive advantage as the only hyperscale cloud provider capable of offering OpenAI models. But this exclusivity was also constraining OpenAI, especially as more and more enterprises' primary concern became whether they could access models on the cloud platform they were already using. I have pointed out multiple times that this was a significant competitive advantage for Anthropic. In other words, Azure's exclusive rights were actually harming Microsoft's investment in OpenAI. Given Anthropic's rapid growth this year, Microsoft needs to protect this investment, even if it means Azure's differentiation is diluted.

Meanwhile, OpenAI clearly sees AWS as a massive opportunity – large enough that it's willing to forgo some revenue related to Azure in the coming years. Combined with the previous point, this also makes it easier for Azure management to accept losing exclusive rights: after all, without paying revenue share to OpenAI anymore, Azure's profit and loss statement will look much better. OpenAI has also released Microsoft from the AGI clause; now, regardless of what happens, the agreement between the two companies will last until 2032.

It now seems quite clear that OpenAI's next focus will be on AWS. And the strongest evidence is the subject of this interview: Bedrock Managed Agents powered by OpenAI. The simplest way to understand this product is to think of it as Codex within AWS. Codex works well largely because it's local, which naturally resolves many complex issues, especially security. But making agents work across departments and systems within an organization is an entirely different matter. The goal of this product is to make it easier for organizations, whose data largely resides on AWS, to use such workflows.

Around this point, in this interview, we discussed how AWS pioneered the entire cloud computing category and its impact on startups; the similarities and differences between AI and that paradigm shift. Then we talked about Bedrock Managed Agents: what it is, and its difference from Amazon's existing AgentCore product. We also touched upon Trainium, why chips won't be that important for most AI users, and why collaboration is a reasonable choice compared to Google's emphasis on full-stack integration.

As a reminder, all Stratechery content, including interviews, can be listened to via podcast; click the link at the top of this email to add Stratechery to your podcast player.

Let's start the interview.

Interview Content

This interview has been lightly edited for clarity.

OpenAI Enters AWS, Azure's Exclusive Era Ends

Ben Thompson (Host): Matt Garman, Sam Altman – Matt, welcome to Stratechery; Sam, welcome back. I have previously interviewed Altman in October 2025, March 2025, and February 2023.

Sam Altman (OpenAI CEO): Thank you.

Matt Garman (AWS CEO): Thank you, grateful for the invitation.

Host: Matt, this is your first time on Stratechery. Unfortunately, I think Sam's presence will prevent us from having our usual "getting to know the guest" segment. Besides, he probably doesn't want to hear us reminisce about our days at Kellogg School of Management. Still, it's a pleasure to have an alumnus on the podcast.

Matt Garman: Yes, I'm very happy to be here. I can come again next time, and we can delve deeper into things.

Host: That would be great. You've been involved with AWS since your internship, and now you're leading the entire AWS organization amidst the AI wave. In what ways do you see building an AI business as similar to building the original general-purpose computing business – let's call it that for now? And in what ways is it truly different?

Matt Garman: I think the similarity lies in seeing the same excitement, and seeing builders out there able to do things they couldn't do before. The cool thing back when we started AWS was that developers suddenly gained access to infrastructure previously only available to the largest companies. Only companies with multi-million dollar budgets to build data centers had such capabilities. Now, a developer only needed a credit card and a few dollars to launch an application. This massively expanded what internet builders could achieve.

Our thinking at the time was that people could go build whatever they wanted. We weren't going to prescribe what they should do. We believed that creativity existed everywhere in the world; if we put powerful tools in their hands, they would create interesting and amazing things.

I believe AI's empowerment of builders is at least as transformative, possibly even more so. Think about what's possible now: you don't have to study programming in school for ten years to build an application; you don't need a huge team of hundreds, nor months and months and months to build things. You can build quickly with small teams and iterate fast. AI is unlocking innovation across all fields globally. In many ways, it's very similar to back then. Seeing the capabilities it gives our customer base is genuinely exciting.

Host: But back when AWS emerged, you were the only player, so the benefits and drawbacks naturally fell to you. Is there a feeling that during the AWS era, it was largely about general-purpose computing – making compute interchangeable, elastic, and cheap? Whereas in AI, especially during the training phase, the winning abstraction seems to be highly vertically integrated superclusters, very advanced networking, and extremely tight coupling between software and hardware. Was this unexpected for you? Because this time, you're not starting from scratch as the 'only one here'; you had a specific understanding of large-scale computing, but at least in the early years of AI, it didn't seem to align perfectly.

Matt Garman: I'm not sure how different it is for us. I think the real difference is the astonishing speed of adoption. I think this caught everyone by surprise, maybe even Sam. People's acceptance and adoption of these capabilities happened faster than anyone predicted.

This is very different from when we started cloud computing. We spent a very long time explaining why a book-selling company would offer computing power. We had to work hard to explain what cloud computing was. There was a lot of hard work that people often forget now. But in 2006, no one took for granted that the world's computing would move to the cloud. There was indeed a huge amount of difficult explanation and pushing effort required.

Host: Do you feel the need for similar explanation now? Because many people initially anchored on the training era, and you might say, 'We're thinking about the inference era,' which is a different beast. Do you need to revive that explanatory capability?

Matt Garman: Yes, but the speed at which people understand what you're saying is completely different. So, I think, yes, there is an educational process when moving people from 'this looks cool, I can chat with a smart AI' to 'it can actually do work inside your enterprise.' But the speed of technological evolution makes this process relatively fast.

Host: I promise we'll get to today's product topic soon. But Sam, from a startup ecosystem perspective, looking back, AWS was clearly transformative, fundamentally changing the barriers to starting a company. Now anyone can start. Seed rounds and angel investors emerged, pushing milestone-based funding further down the road. You didn't have to write 'we will buy servers' in your pitch deck; you could build an app first, then raise a Series A or later round.

From your perspective, compared to the world AWS opened up, what are the differences and similarities with the world AI is opening up today?

Sam Altman: I think there have been four major platform moments that massively empowered startups: the internet, cloud, mobile, and AI. Of these four, the first I experienced as an adult was the cloud. In the early days of YC, it's hard to overstate how much that changed things for startups.

Before that, startups had to rent colo space, assemble their own servers, and rack the equipment. It was incredibly complex, and you had to raise a lot of money first. Then suddenly the cloud appeared; it actually emerged shortly after YC was founded, probably in our second year.
Host: I was just going to ask that – ultimately, are YC and the cloud more intertwined than you realized at the time?

Sam Altman: We felt they were highly intertwined back then. It felt like YC was riding the cloud wave from the beginning, as there were some early cloud service examples even before AWS.

Host: If AWS existed, the capital needed to start a startup became much less than before.

Sam Altman: It was a massive enabling change, and a big reason why YC sounded so crazy at the time. People would say, 'You can't invest tens of thousands of dollars in a startup; it's impossible, server costs alone exceed that.' So it completely changed what startups could achieve with small amounts of capital.

Typically, when a major platform shift occurs, and you can operate on faster cycles with less capital, startups win. This is the classic way startups beat large companies. Early in my career, I witnessed this change firsthand thanks to the cloud. Now, watching companies build products based on AI feels very similar in direction. But as Matt said, the speed this time is insane.

Host: Is it the case that existing large companies, the incumbents, are adopting AI much faster than they adopted the cloud?

Sam Altman: That's certainly happening more. But I'm also talking about the speed at which startup revenues are growing. Recently I spoke at YC and asked at the end: 'What's the revenue expectation for a good company at demo day now?' They said, 'The answer changes every month. Maybe even within the same YC batch, from start to finish.' This never happened before. The speed at which people are building scaled businesses on this new platform is something I have never seen before.

Host: Matt, throughout the cloud era, AWS was essentially the default cloud for all startups, giving you a huge advantage. What makes you the default cloud today? Because many people are building on the OpenAI API. Do you feel you're entering this market from a very different angle? You have a massive existing customer base demanding AI capabilities, but perhaps less visibility into the entire startup cohort Sam mentioned?

Matt Garman: I think there are several aspects. First, we are very excited about this partnership, and I believe it will be very significant for many startups. But even today, if you talk to startups, most that are scaling are still scaling on AWS, for many reasons. The scale is there, the availability is there, the security is there, the reliability is there, the partner ecosystem of other ISVs is on AWS, and the customers are on AWS.

Host: (Laughs) Whether they like it or not, everyone has used the AWS console, so they're used to it.

Matt Garman: And we help them. We spend a tremendous amount of time enabling startups, not just with credits, but also advising them on how to set up their systems, how to think about go-to-market, and many similar things. I think many startups appreciate that a lot. We invest a huge amount of time and effort to ensure this, because we genuinely believe startups are the lifeblood of AWS. It was true from the beginning, as Sam said, and it remains true today. I still go to Silicon Valley or other places every quarter to meet with startups directly, listen to what they're doing, and confirm that what we're building truly meets their needs.

So, yes, the competition for startup attention is more intense today than 20 years ago. But this remains as important to us as ever. We invest a lot of time ensuring we can meet the needs of these startups.

Host: Would it be fair to say that people building directly on the OpenAI API, rather than using the Azure version of the OpenAI service, are more likely to adopt a tech stack where regular computing is on AWS and the AI part uses OpenAI?

Matt Garman: I think that's a very common pattern for many startups today, absolutely.
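The split-stack pattern the two are describing can be sketched in a few lines: general compute runs on AWS (here, a Lambda-style handler) while the AI step is delegated to the OpenAI API. The model name, event shape, and client injection are assumptions for illustration, not part of either company's product.

```python
import json

def build_chat_request(user_text: str) -> dict:
    """Build the OpenAI chat payload; separated so it can be tested offline."""
    return {
        "model": "gpt-4o-mini",  # hypothetical model choice
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": user_text},
        ],
    }

def handler(event, context=None, client=None):
    """Lambda-style entry point. `client` is injected for testability;
    in production it would be `openai.OpenAI()` with credentials set."""
    req = build_chat_request(event["text"])
    if client is None:
        # from openai import OpenAI; client = OpenAI()
        return {"statusCode": 501, "body": "no client configured"}
    resp = client.chat.completions.create(**req)
    answer = resp.choices[0].message.content
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

What the rest of the article argues is that this pattern breaks down once the agent needs enterprise data and permissions, which is exactly the gap Bedrock Managed Agents is meant to close.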

Bedrock Managed Agents: Bringing AI Agents into Enterprise Workflows

Host: That brings us to today's announcement: Bedrock Managed Agents powered by OpenAI. I think I got that right. As I understand it, the selling point is not just that OpenAI models are available on AWS – I assume that wasn't previously allowed – but that OpenAI's frontier models are embedded in an AWS-native agent runtime, including identity, permissions state, logging
