AI Writing All the Code? What Top Engineers at Anthropic and OpenAI Are Saying
Hey there—have you seen this wild claim going around that AI now writes 100% of the code at companies like Anthropic and OpenAI? I first saw it on Reddit, linked to a Fortune article, and it’s been buzzing in tech circles. But what does it actually mean? Let’s unpack it without the hype.
First, let’s back up. Tools like GitHub Copilot or Tabnine have been around for years, offering code snippets or suggestions. But this claim suggests AI isn’t just helping—it’s doing the whole job. Top engineers from Anthropic and OpenAI say their teams now use AI to write entire systems, not just assist with code. That’s a big shift.
I’m not here to say this is literally 100% true. The Fortune article is dated 2026, so it reads more like a projection than a snapshot of today. Still, the idea reflects a real trend: AI code tools are getting smarter, faster, and more trusted. Imagine telling an AI, “Build me a REST API for a weather app,” and it just… does. No debugging, no Stack Overflow searches. That’s the promise.
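To make that concrete, here’s roughly the kind of first pass an assistant might produce for that prompt. This is a minimal sketch of my own, not anything from the article: the endpoint shape, the `FAKE_FORECASTS` data, and the FastAPI choice are all assumptions so the example runs on its own.

```python
# A hypothetical sketch of an AI-generated "weather app" REST API.
# The canned data below stands in for a real weather provider.
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in data so the example runs without an external service.
FAKE_FORECASTS = {
    "london": {"temp_c": 12.5, "conditions": "overcast"},
    "tokyo": {"temp_c": 21.0, "conditions": "clear"},
}

@app.get("/weather/{city}")
def get_weather(city: str):
    """Return the forecast for a city, or 404 if we don't have one."""
    forecast = FAKE_FORECASTS.get(city.lower())
    if forecast is None:
        raise HTTPException(status_code=404, detail=f"No forecast for {city}")
    return {"city": city, **forecast}
```

Twenty lines like this in a few seconds is exactly the kind of output these tools are already decent at; the open question is everything around it.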
But here’s the catch. Code written by humans has bugs. Code written by AI? It might have different kinds of bugs. Or worse, no obvious bugs until it’s deployed and reality hits. AI tools can’t yet replace human judgment about edge cases, security, or system design. So maybe these engineers aren’t saying AI writes all the code, but that AI handles the *first drafts* while humans refine them.
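Here’s a toy illustration of that draft-versus-refinement split. Both functions are invented for this post; the point is only how much of the value sits in the second pass.

```python
# Hypothetical "AI drafts, human refines" example.

# A plausible first draft: fine on the happy path, but it crashes on an
# empty list and happily averages garbage values.
def average_temp_draft(readings):
    return sum(readings) / len(readings)

# What a human reviewer might turn it into: the contract (non-empty,
# plausible Celsius values) is made explicit instead of assumed.
def average_temp(readings: list[float]) -> float:
    """Average a non-empty list of Celsius readings."""
    if not readings:
        raise ValueError("need at least one reading")
    if any(t < -90.0 or t > 60.0 for t in readings):
        raise ValueError("reading outside a plausible Celsius range")
    return sum(readings) / len(readings)
```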
Either way, this feels like a glimpse into the future. If AI code tools keep improving, what does that mean for developers? Will we shift toward being more like architects than coders? How do we ensure the AI-written code aligns with our values or security standards? These are the questions I’m curious about.
If you’re into this, I’d suggest checking out the Reddit comments or the Fortune article for deeper insights. And hey—if you’ve tried AI code tools, what’s your take? Is this the next big step, or are humans here to stay in the loop?