OpenAI Enterprise API Expands Offerings, Eyes Growth and Safety

In an interview with InformationWeek, OpenAI’s enterprise API product lead opens up about the company’s increasing focus on business users.

Shane Snider, Senior Writer, InformationWeek

April 26, 2024

6 Min Read

OpenAI's success in enterprise IT comes as no shock to Miqdad Jaffer. The GenAI juggernaut’s API product lead says enterprise has always been at the core of its product offerings, but the stakes are higher as businesses race to harness competitive advantages.

When OpenAI unleashed ChatGPT in 2022, generative artificial intelligence (GenAI) went from a tech-world talking point to a global household phenomenon. Business leaders immediately recognized the value in finding use cases and GenAI has been on the tip of every tongue in the C-suite ever since.

So, it was no surprise when OpenAI began launching enterprise-focused ChatGPT products last year, targeting large enterprises first, then expanding its offerings to small and medium-sized businesses.

The enterprise API is a separate product focused on business use; the API itself has been available since 2020. Highlighting its expanding enterprise offerings in a blog post this week, OpenAI touts improvements to security, the Assistants API, administrative controls, new pricing options and more.

Jaffer tells InformationWeek that the company’s efforts will continue to evolve quickly to keep pace with a rapidly changing enterprise AI landscape. More than 2 million developers across several hundred companies are using OpenAI’s API platform, Jaffer says.


(Editor’s note: The following interview is edited for brevity and clarity).

Can you give our readers the 10,000-foot overview of OpenAI’s enterprise efforts, and how the services are evolving?

We’ve been doing enterprise from the start, but I think the key for us was that we wanted to put a focus on enterprise as a package, simply because it wasn’t obvious [as a standalone enterprise product] to as many people. What we’ve seen already since the launch of ChatGPT is that 92% of Fortune 500 companies are already using ChatGPT in some way.

What we are announcing now are changes from a pricing and cost perspective: we introduced a batch API that lets people make asynchronous requests, and we introduced provisioned throughput, which lets people commit to a certain throughput and get a discount for it. We want more people to build with our API and not have to worry about spending more.

The other piece of the release is on actual capabilities. Our new Assistants API has a better ability to follow instructions and better retrieval -- we went from a 20-file limit to a 10,000-file limit -- so that people can build more real-time applications. On the enterprise management side, Projects is our means of creating a hierarchy within an organization, allowing each individual enterprise and even solo developers to start to sequester their work and set individual rate limits and individual cost limits -- to really give that management oversight into the deployment of AI.
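For readers curious what the asynchronous batch workflow Jaffer describes looks like in practice: OpenAI’s batch API takes a JSONL file in which each line is a self-contained request. The sketch below (the model name, prompts, and file name are placeholders, not from the interview) shows how such an input file might be assembled before being uploaded for asynchronous processing.

```python
import json

def build_batch_line(custom_id: str, prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Build one JSONL line for a batch API input file.

    Each line identifies the request (custom_id), the endpoint to hit,
    and the request body that would otherwise be sent synchronously.
    """
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    })

# Hypothetical prompts standing in for a real enterprise workload.
prompts = ["Summarize Q1 results.", "Draft a release note."]
lines = [build_batch_line(f"req-{i}", p) for i, p in enumerate(prompts)]

# Write the JSONL input file that would be uploaded to the batch endpoint.
with open("batch_input.jsonl", "w") as f:
    f.write("\n".join(lines))
```

Because the requests run asynchronously rather than in real time, this is the trade-off that enables the discounted pricing mentioned above.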


We’re seeing new enterprise AI offerings from many different vendors recently. How does OpenAI differentiate its enterprise offerings from competitors?

There are a few different ways we think about it. When we think about our overall enterprise platform, we think in terms of the models, the products, and the platform. When we think about the models, we believe that we’re differentiating there quite a bit. [OpenAI’s competitors] are catching up to a model that we finished training three years ago. And we’re only going to be releasing newer and newer models that are pushing the boundaries of what’s possible. And what’s important for us is to keep in mind the full level of intelligence of these models. In some cases, we will improve the intelligence levels with some of our frontier models. In other cases, we’ll drive costs down by improving the efficiency of existing models and improving latency. We have the models, which can be a differentiator. But from an enterprise package perspective, it’s the additional pieces along with it that make the difference.


I know we’re only in the infancy of GenAI’s use in enterprise, but can you talk a little about how the market has changed already? How is it growing? Are smaller businesses keeping pace with some of the Fortune 500 companies?

The way we think about our offerings is not to build something that’s for enterprise, and something else that’s for startups or solo developers. We’ve tried to build a platform in which, whether you’re a solo developer or a very large enterprise, you should have the tools to continue to grow as things change for you. So, as things get more complex, all the capabilities that are available to enterprise should be available for everyone else. From a market standpoint, we see all aspects of the market growing and the way we’re building is to ensure that regardless of who you are, you’ll get what you need.

How are you approaching responsible AI for enterprise products? Can small businesses count on the same level of safety as your larger enterprise customers?

All our APIs, all our safety checks, all of our compliance and safety requirements are for everyone across the board. And we spend an inordinate amount of time making sure that those things are perfect before we show them to anyone. It’s critical for us that AI is deployed safely. And if that means that we take longer to release models because of that, then so be it. We’re not trying to push the boundaries of what people can build if it’s not going to be built safely.

I know OpenAI is treading carefully with its Text-to-Speech voice cloning model (Voice Engine). But do you see that being a big part of the enterprise offerings in the future?

I think it will be part of the roadmap for offerings in general. Obviously, that one is a place where things can get a little dicey from a release perspective. Safety is paramount there and we’re working very closely with government agencies to make sure that we’re very clear on what the use cases are and how we deploy those things. And until it is completely safe, we’re not going to just put it out there.

Where do you see areas of growth from enterprise AI in the future?

We’re very bullish on the notion of agentic workflows. [Agentic AI refers to AI systems that can autonomously pursue workflows with limited human supervision]. We think that’s going to make a big difference in how people deploy and think about AI … those agents are going to be able to help scale organizations that are having issues in hiring -- it’s not always easy to hire in all these environments, so having an agent to augment an existing workforce will be very helpful. Imagine having a software engineering agent for when you’re right down to the deadline and you need support -- having that ability will be helpful. We also see the industry moving forward with textual intelligence, which is core to our belief that improving textual intelligence will improve the capabilities of these models across the board. We’ll continue to invest there.

About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
