According to VentureBeat, Anthropic announced on Wednesday that it is releasing its Agent Skills technology as an open standard, a strategic bet to cement its position in the enterprise AI market. The company also unveiled organization-wide management tools for Team and Enterprise plan customers and launched a partner directory with skills from companies like Atlassian, Figma, Canva, Stripe, Notion, and Zapier. The skills repository has crossed 20,000 stars on GitHub, and Microsoft has already adopted the standard in VS Code and GitHub. The capability is included at no extra cost across all Claude plans and the API. Notably, developer Elias Judin discovered that OpenAI has quietly implemented a structurally identical architecture in both ChatGPT and its Codex CLI tool.
Why giving it away is smart
Here’s the thing: this isn’t just a nice open-source gesture. It’s a calculated power move. By making Skills an open standard, Anthropic is betting that ecosystem growth will benefit it more than locking everyone into a proprietary system would. And the early evidence suggests the bet is paying off. When OpenAI, its biggest rival, starts copying your file structure and metadata format, you’ve basically won the architecture war. It means the industry is converging on Anthropic’s answer to a hard problem: how do you make a general AI model reliably good at specific, specialized work without slow, expensive fine-tuning?
What skills actually do
So what are these things? Basically, they’re reusable modules of procedural knowledge. Think of them as folders with instructions, scripts, and resources that tell Claude (or any AI that adopts the standard) exactly how to perform a specialized task. Instead of crafting a perfect prompt every time you need a legal document reviewed or a specific type of data visualization, you just invoke a “skill” that packages all that know-how. Anthropic’s clever “progressive disclosure” design means the AI only loads the full details when needed, so you can have a huge library without blowing up the context window. It’s a way to encode institutional expertise directly into the AI’s workflow.
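To make that concrete, a skill is essentially a directory whose entry point is a SKILL.md file with YAML frontmatter. The sketch below shows the general shape; the skill name, description text, and referenced helper files are illustrative examples, not taken from Anthropic’s documentation. Under progressive disclosure, only the frontmatter (name and description) is loaded up front so the model knows the skill exists; the full instructions and any bundled scripts or templates are read only when the skill is actually invoked.

```markdown
---
name: brand-deck-builder
description: Build a presentation that follows the company style guide. Use when asked for a branded slide deck.
---

# Instructions
1. Read templates/style-guide.md for approved fonts and colors.
2. Run scripts/build_deck.py to generate the output file.
3. Check every slide against the approved palette before returning it.
```

The frontmatter is the cheap, always-visible index entry; everything below the second `---`, plus the `scripts/` and `templates/` folders it references, is the expensive payload that stays out of the context window until needed. That two-tier split is what lets an organization maintain a large skill library without paying a token cost for skills that never fire.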
The enterprise play and the risks
For businesses, this is huge. It turns AI from a smart chatbot into a customizable productivity engine. An admin can provision a “create company-standard PowerPoint” skill or a “run quarterly financial reconciliation” skill across the whole organization. That’s why the partner list is such a who’s who of enterprise software—Anthropic is positioning Skills as the connective tissue between Claude and the tools companies already use. But it’s not without complications. There are real security questions: because skills can bundle executable scripts, a malicious or compromised skill could run untrusted code inside an organization’s workflows. And there’s a philosophical worry about “skill atrophy.” If the AI can do a backend developer’s UI work or a researcher’s data viz with a click, does that person stop learning how to do it themselves? It’s a valid concern.
The bigger picture
Look, the trajectory here is wild. Two months ago, this was a niche developer feature. Today, it’s an open spec that Microsoft is building into core tools and that OpenAI is mimicking. This is how you build a platform, not just a product. It echoes strategies from companies like Google, which have found that defining how an industry works is often more valuable than trying to own every piece of it. For any tech leader, the message is clear: skills are becoming infrastructure. The knowledge you encode into them today will shape your AI effectiveness tomorrow, no matter whose model you use. The AI model wars will rage on, but on this specific front—how to make AI *usefully capable*—Anthropic just set the rules of the game. And they did it by giving the playbook away.
