According to Neowin, Microsoft announced at its Ignite 2025 conference that alt text generation in Word and PowerPoint is getting a major AI upgrade. The company is replacing the existing Azure Vision model with generative AI to create richer, more contextual image descriptions. For example, instead of just “Diagram of a house with solar panels,” the new system might describe it as “Diagram illustrating passive solar design principles with photovoltaic panels, insulated walls, and ventilation windows.” The feature is currently available to Microsoft 365 Insiders running Version 2510 (Build 19328.20000) or later, while perpetual-license users will get it in future updates. Interestingly, Microsoft is making this an opt-in feature: users must manually select “Generate alt text for me” rather than having it happen automatically.
Why this actually matters
Here’s the thing about accessibility features – they’re often treated as checkboxes rather than genuinely useful tools. The old Azure Vision system was basically like having a robot glance at your image and mutter something generic. It technically met requirements, but didn’t actually help someone understand what the image was communicating. This upgrade changes that completely. We’re talking about descriptions that actually explain the content and context, not just identify objects. For someone using a screen reader, that’s the difference between understanding a document and just getting through it.
The opt-in paradox
Now here’s where it gets interesting. Microsoft is making this an intentional choice rather than an automatic one. You have to go to Picture Format > Alt Text > Generate alt text for me. On one hand, I get it – giving authors more control over when and how AI modifies their content is probably smart. But doesn’t this risk the feature going unused? When something becomes an extra step, even a small one, adoption tends to drop. And let’s be honest – how many people even know where the alt text options are buried in those menus? It’s a classic case of balancing automation against user agency, and I’m curious to see how it plays out in real usage.
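Whichever way the text gets written – by hand or by the new generator – it ends up in the same place: a .docx file is a zip archive, and per the ECMA-376 Office Open XML spec, an image’s alt text is stored as the descr attribute of its wp:docPr element inside word/document.xml. Here is a minimal sketch of reading it back with the Python standard library; the function names are my own for illustration, not part of any Microsoft API.

```python
import zipfile
from xml.etree import ElementTree as ET

# DrawingML namespace for images embedded in Word documents (wp:docPr lives here)
WP_NS = "http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing"

def extract_alt_text(document_xml: str) -> list:
    """Return the alt text (descr attribute) of every wp:docPr element."""
    root = ET.fromstring(document_xml)
    # Images with no alt text set come back as empty strings
    return [el.get("descr", "") for el in root.iter(f"{{{WP_NS}}}docPr")]

def alt_text_from_docx(path: str) -> list:
    """A .docx is a zip archive; the document body sits in word/document.xml."""
    with zipfile.ZipFile(path) as zf:
        return extract_alt_text(zf.read("word/document.xml").decode("utf-8"))

# Quick demonstration on a hand-rolled XML fragment
sample = (
    '<root xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/'
    'wordprocessingDrawing">'
    '<wp:docPr id="1" name="Picture 1" '
    'descr="Diagram illustrating passive solar design"/>'
    "</root>"
)
print(extract_alt_text(sample))  # → ['Diagram illustrating passive solar design']
```

A simple audit script like this is one way to check whether a batch of documents actually has descriptions filled in – which matters precisely because, as an opt-in step, the new feature only helps when authors remember to use it.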
Beyond accessibility
What’s really fascinating is how this reflects Microsoft’s broader AI strategy. They’re not just slapping ChatGPT onto everything – they’re identifying specific pain points where AI can actually solve real problems. Automatic alt text has existed for years, but it was basically useless for anything beyond the simplest images. This upgrade makes it genuinely valuable. And think about the enterprise implications – better accessibility isn’t just good ethics; it’s becoming a legal and compliance requirement in many industries. When you’re dealing with complex technical documentation, having AI that can accurately describe schematics, flowcharts, or engineering diagrams? That’s huge. It’s the kind of practical application that makes AI feel less like magic and more like a tool that actually helps people get work done.
