Written by Charlie Cowan
Published on Nov 27, 2025
All I Want for Shipmas: An Enterprise User's Wish List for OpenAI
Last December, OpenAI gave us the "12 Days of Shipmas" - a product announcement every day for 12 days. We got Sora, o1, ChatGPT Pro, Projects, and more. It was a beautiful demonstration of shipping velocity.
As we approach December 2025, the chatter on X is building about what's next. But rather than predict what OpenAI will ship, I want to share what enterprise users actually need.
At Kowalah we spend our days helping companies deploy ChatGPT Enterprise. We see what works, what frustrates, and what's missing. This is our wish list - the features that would make ChatGPT the first thing people open every morning for work.
Some of these are likely coming. Others are long shots. And a few? OpenAI probably isn't even thinking about them.
Let's dig in.
Projects: Where Work Actually Happens
Projects was one of last year's Shipmas announcements, and it's become essential for enterprise users. But it needs work to become truly powerful.
A Real File Manager
Right now, finding anything in Projects feels like searching through a messy drawer. We need:
- Folders - Obvious, but missing. Let me organize projects by client, by quarter, by team.
- List and card views - Sometimes I want to scan titles. Sometimes I want to see previews.
- Better search - Find that project from three months ago without scrolling forever.
This sounds basic because it is. These are table-stakes features that every file management system has had for decades.
Self-Updating Context
Here's a powerful idea: what if a project conversation could update its own instructions?
Imagine a project for an employee's personal development plan. As the conversation explores their priorities and goals for the year, the project could automatically add those details to its own instructions. No more manually updating the context after every call.
This is more than memory - it is the ability to guide the project's own instructions through the chat.
Claude Code has a similar feature: typing # adds a memory to the CLAUDE.md file. That's close to what I have in mind.
Bonus point - add documents created in chat (Canvas/Deep Research) directly to the project files without having to download and re-upload them.
Live Document Sync
This is my biggest wish for Projects: sync with Google Docs and SharePoint files directly.
Today, if I want ChatGPT to work with my documents, I have to upload static copies. The moment those documents change - and business documents change constantly - my project context is stale.
Imagine instead: you connect a Google Doc or SharePoint folder to the project you use for 121s with a direct report. ChatGPT always has the latest version.
When your direct report updates the 121 notes, their forecast, their career goals - ChatGPT knows immediately.
This would transform Projects from a workspace into a true command center for knowledge work.
GPTs: The Forgotten Feature
Custom GPTs launched over two years ago. Since then? Barely any meaningful updates. The GPT Store remains a ghost town of consumer novelty apps.
For enterprise users, GPTs are one of the most powerful features in ChatGPT. They let you encode expertise, create repeatable workflows, and scale knowledge across teams. But they're missing critical capabilities.
Verified GPTs
In any enterprise with more than a handful of GPT creators, chaos emerges quickly. Duplicate GPTs. Outdated versions. Nobody knows which "Sales Playbook GPT" is the official one.
We need admin-verified GPTs - a way for workspace administrators to badge official, approved GPTs. A simple checkmark that says: "This is the real one. Use this."
Secured External Sharing
Today, if you want to share a GPT outside your organization, you have one option: make it public. Anyone with the link can access it.
That's useless for business.
I want to share a GPT with a potential client so they can ask about our implementation process. I want to give my customers access to a support GPT. I want to let contractors use specific GPTs without giving them full workspace access.
The solution: email-based access or secure password sharing. Share a GPT with specific email addresses or domains. Or generate a secure code that expires. Simple access control that keeps your competitive advantage safe.
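None of this exists in ChatGPT today, but the mechanics are well understood. Here's a rough sketch of how an expiring, tamper-proof share code could work - the secret key, function names, and payload format are all hypothetical, not anything from OpenAI:

```python
import base64
import hashlib
import hmac
import time

# Hypothetical workspace-level signing key, held server-side only
SECRET = b"workspace-signing-secret"


def make_share_code(gpt_id: str, email: str, ttl_seconds: int = 86400) -> str:
    """Issue a share code bound to one GPT and one email, valid for ttl_seconds."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{gpt_id}|{email}|{expires}".encode()
    # Short HMAC signature prevents anyone from forging or editing the payload
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    return base64.urlsafe_b64encode(payload).decode() + "." + sig


def verify_share_code(code: str) -> bool:
    """Return True only if the code is well-formed, untampered, and unexpired."""
    try:
        payload_b64, sig = code.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
    except Exception:
        return False
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()[:16]
    if not hmac.compare_digest(expected, sig):
        return False
    _gpt_id, _email, expires = payload.decode().split("|")
    return int(expires) > time.time()
```

A revocation list on top of this (store issued codes, let an admin kill one) would give you the audit trail enterprises need - all standard web-app plumbing, which is exactly why the absence of it stings.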
Version History for Instructions
You can see that someone changed a GPT's instructions. But you can't see what they changed.
For GPTs where the instructions are the product - coaching GPTs, process GPTs, expert-knowledge GPTs - this is a significant gap. We need word-by-word diff views, just like Google Docs or any version control system.
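Word-level diffing is a solved problem. Here's a minimal sketch of the kind of view we mean, using Python's standard difflib on a made-up pair of GPT instructions:

```python
import difflib

# Hypothetical before/after versions of a GPT's custom instructions
old = "Respond in formal British English and cite UK legal frameworks.".split()
new = "Respond in plain British English and cite UK and EU legal frameworks.".split()

# Walk the opcodes and mark removed words [-like this] and added words {+like this+}
diff = []
for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old, new).get_opcodes():
    if op == "equal":
        diff.extend(old[i1:i2])
    else:
        diff.extend(f"[-{w}]" for w in old[i1:i2])
        diff.extend(f"{{+{w}+}}" for w in new[j1:j2])

print(" ".join(diff))
# → Respond in [-formal] {+plain+} British English and cite UK {+and+} {+EU+} legal frameworks.
```

If a few lines of standard-library Python can do this, there's no technical excuse for a GPT editor that only tells you "instructions were changed."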
GPTs Inside Projects
This is the feature I dream about.
Right now, Projects and GPTs are separate. But the most powerful workflow would combine them: use a specialized GPT within a specific project context.
Example: I have a "Proposal Writer" GPT that knows our methodology, pricing, and case studies. I have a Project for a specific prospect containing their requirements, their industry context, their stakeholder map.
I want to say: "Write a proposal using Proposal Writer GPT, but informed by everything in this prospect's Project."
The GPT provides the expertise. The Project provides the context. Together, they produce work that neither could alone.
Voice: Ready for the Boardroom
ChatGPT's voice mode is impressive. But it's not ready for serious business use.
Wake Word
Here's the problem: voice mode is always listening. Say anything, and it responds.
This makes it impossible to bring ChatGPT into a meeting as a participant. You can't have a side conversation. You can't think out loud. Every utterance triggers a response.
The fix is simple: a wake word. "Hey ChatGPT" to activate, silence otherwise.
This would unlock a powerful use case: ChatGPT as a meeting participant. Listening, taking notes, ready to answer when called upon - but not interrupting every five seconds.
My Voice in My GPTs
Tools like Delphi already let you clone your voice from a 20-minute sample. Imagine this capability built into ChatGPT.
As a manager, I could create a "How to Work with Charlie" GPT for my direct reports. When they use it in voice mode, they hear my voice giving them guidance. It's not a gimmick - it's a fundamentally different experience when coaching feels personal.
This would make GPTs feel less like tools and more like extensions of the people who created them.
Stop Reading the Instructions
A small but maddening issue: when you start a voice conversation with a GPT, it often reads out portions of its custom instructions. "I'm here to help you in good old British English and using UK legal frameworks..."
Nobody needs this. We wrote the custom instructions. We know what they are. Just start the conversation. Please!
Canvas: Great Demo, Needs Work
Canvas - the document and code editing interface - was a standout feature when it launched. In demos, it's magical.
In real work? It struggles.
The editing is too slow for serious documents. It rewrites sections you didn't ask it to touch. Comments don't work reliably. For anything longer than a few pages, you're better off in Google Docs.
Canvas needs to become a true collaborative editing experience - fast, predictable, and respectful of your existing content.
Agent Mode: Almost Useful
Operator and agent mode represent the future of AI. But for enterprise use cases, we're not there yet.
The Credential Problem
Agent mode can browse the web and use tools. But the moment it encounters a login screen, it stops. You have to take over, enter credentials, maybe complete MFA, then hand control back.
This makes sense from a security perspective. Autonomous agents with your credentials are a genuine risk. But it also makes agent mode impractical for the use cases enterprises actually want.
The dream: "Run this workflow every Monday morning. Log into the Financial Times website using our paid subscription, pull the latest stories relevant to our sector, format a summary, and email it to the team."
The reality: You'd have to be there every Monday to handle the login.
We need secure credential storage - a way to give agent mode authenticated access to specific systems, with proper audit trails and the ability to revoke access. This is a hard problem, but it's the unlock for agent mode becoming genuinely useful.
What's Probably Coming
Not everything on this list is a long shot. Some features feel imminent:
Salesforce Integration - OpenAI and Salesforce announced a partnership at Dreamforce. Expect ChatGPT to get deep CRM integration: pull customer data, update records, maybe even draft emails based on account context.
Image and Video Improvements - Sora exists but isn't in core ChatGPT yet. Image generation still feels like a first draft (whilst Google's Nano Banana Pro is leading the image gen field). Expect significant upgrades here, including better images and likely video generation - perhaps a 5o model for full multi-modality.
Faster Reasoning - The 5.1 Thinking and 5.1 Pro models are impressive but slow. As these models mature, expect dramatically faster complex reasoning without sacrificing quality.
What's Probably Not Coming
If I have a concern - it is that OpenAI is overly consumer-focused.
Look at recent launches - Shopping Research, Group Chat, Sora Video. These are consumer plays.
Enterprise features - admin tools, governance, collaboration, workflow automation - tend to lag. GPTs haven't had a meaningful update in two years. The GPT Store is clearly not a priority.
My prediction for this Shipmas: GPTs get nothing. I hope I am wrong.
If I could make one request to OpenAI beyond any specific feature, it would be this: put the business user at the heart of your roadmap.
Think about the type of work people do. Think about where decisions get made. Think about how ChatGPT can be present in every meeting, every conversation, every workflow.
Help non-technical users create, share and improve their own GPTs. Make collaboration seamless. Bring multiplayer mode to enterprise.
The companies that figure out how to integrate AI into every knowledge worker's daily routine will win the next decade.
Now that Google has got its act together with Gemini, and Microsoft is investing heavily in its Copilot platform, OpenAI (and Anthropic) have to demonstrate why an 'independent' platform is worth paying for over and above the ones embedded in Google's and Microsoft's enterprise suites.
Key Takeaways:
- Projects need file management basics - folders, views, and search that actually works
- GPTs need enterprise features - verified badges, secure external sharing, version history
- Voice needs a wake word - to become useful in meetings and professional settings
- Agent mode needs credential handling - secure authentication for automated workflows
- The real wish: put business users first - enterprise is where the money will come from as AI goes mainstream
What's On Your List?
These are my requests as someone who lives in ChatGPT Enterprise every day. But I'm curious what I've missed.
- What features would transform how your team uses ChatGPT?
- What's frustrating you?
- What would make ChatGPT the first thing you open every morning?
Go Deeper
Ready to implement what you've learned? These free resources will help:
- Wednesday Webinars - Every week we dive into one ChatGPT topic, review the last week's AI news, and answer your burning questions.
- AI Use Case Discovery Workshop Template - Gather business requirements and ideas from your functional teams
Need help getting enterprise value from ChatGPT?
At Kowalah, we help organisations deploy ChatGPT Enterprise and build the workflows, GPTs, and change programs that drive real adoption. Start a conversation at kowalah.com to see how we can help.
