A couple of days ago, Cursor went down during the ChatGPT outage.
I stared at my terminal, facing the red error messages I hate to see. An AWS error glared back at me, and I realized I didn’t want to figure it out without AI’s help.
After 12 years of coding, I’d somehow become worse at my own craft. And this isn’t hyperbole—this is the new reality for software developers.
It crept up on me subtly.
First, I stopped reading documentation. Why bother when AI could explain things instantly?
Then my debugging skills took a hit. Stack traces now feel unapproachable without AI. I don’t even read error messages anymore; I just copy and paste them.
I’ve become a human clipboard, a mere intermediary between my code and an LLM.
Every error message used to teach me something. Now? The solution appears magically, and I learn nothing. The dopamine hit of instant answers has replaced the satisfaction of genuine understanding.
Deep comprehension was the next thing to go. Remember spending hours understanding why a solution works? Now I simply implement AI suggestions. If they don’t work, I improve the context and ask the AI again. It’s a cycle of increasing dependency.
Then came the emotional changes. Solving new problems used to be part of the joy of programming. Now I get frustrated if AI doesn’t hand me a solution within five minutes.
The scariest part? I’m building an AI-powered development tool, but I can’t shake the feeling I’m contributing to the very problem that’s eroding our collective skills.
I’m not suggesting anything radical like going completely AI-free; that’s unrealistic. Instead, I’m starting with “No-AI Days”: one day a week where I write code, debug, and read documentation entirely on my own, with no AI assistance.
I won’t lie, it sucks. I feel slower, dumber, and more frustrated.
But I can also see the difference. I feel a stronger connection with my code and a sense of ownership, which had slowly disappeared with AI. Plus, I learn a lot more.
We’re not becoming 10x developers with AI.
We’re becoming 10x dependent on AI. There’s a difference.
Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long-term understanding for short-term productivity. We’re optimizing for today’s commit at the cost of tomorrow’s ability.
I’m not suggesting we abandon AI tools; that ship has sailed. But we need rules of engagement. Here are some ideas I have: try to solve every problem myself before asking AI, read error messages and documentation in full before pasting anything into a chat window, and protect the weekly No-AI Day.
I won’t lie: I don’t think I’ll be able to follow these rules all the time. But it’s a start, and I strongly believe anyone who’s new to programming should follow all of them.
Right now, somewhere, a new programmer is learning to code. They’ll never know the satisfaction of solving problems truly on their own. They’ll never experience the deep understanding that comes from wrestling with a bug for hours.
We’re creating a generation of developers who can ask AI the right questions but can’t understand the answers. Every time AI goes down, they’re exposed as increasingly helpless. As of now, AI isn’t capable enough to replace programmers fully, but the dependence will only deepen as it improves. The real question isn’t whether AI will replace programmers. It’s whether we’re replacing ourselves.
Try coding without AI for just one day. The results might surprise you.
Apparently the most valuable language for a coder to learn now may be English. Or so Sarah Butcher reports:
If you’re wondering which coding language to learn for a software engineering job in banking, Goldman Sachs CIO Marco Argenti seems to be aligning himself with the people who suggest that an advanced knowledge of the English language, and an ability to articulate your thoughts clearly and coherently in it, is now up there alongside Python and C++.
Writing in Harvard Business Review, Argenti says he’s advised his daughter to study philosophy as well as engineering because coding in the age of large language models is partly about the “quality of the prompt.”
“Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer,” says Argenti. In the future, he says, the most pertinent question won’t be “Can you code?” but “Can you get the best code out of your AI by asking the right question?”
Asking the right question will partly depend on being able to articulate yourself in English, and that will depend on “reasoning, logic, and first-principles thinking,” says Argenti. Philosophical thinking skills are suddenly all-important. “I’ve seen people on social media create entire games with just a few skillfully written prompts that in the very recent past would have taken months to develop,” he adds.
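To make Argenti’s point concrete, here is a small, hypothetical illustration (mine, not from his article) of the difference between an ambiguous request and a well-formed one; the task and wording are invented for the example:

```python
# Hypothetical example: the same request to an AI coding assistant,
# phrased vaguely versus precisely. The task itself is invented.

vague_prompt = "Make my sorting code faster."

precise_prompt = (
    "I have a Python list of about one million (timestamp, user_id) tuples, "
    "already ordered by timestamp. I need them re-sorted by user_id while "
    "preserving the timestamp order within each user (i.e. a stable sort). "
    "Using only the standard library, show an approach and state its time "
    "complexity."
)
```

The first prompt forces the model to guess what “faster” means and what the data looks like; the second leaves it almost nothing to invent.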
I know nothing about coding, but Stu Clayton, who sent me the link, does, and since he thinks this is of interest, I’m passing it along. Anything that places value on “advanced knowledge of the English language and an ability to articulate your thoughts” is probably a good thing.