Mark Zuckerberg recently made a bold statement: AI will soon take over the work of mid-level engineers (Forbes). While this may sound like another tech CEO hyping AI, my latest experience with OpenAI’s o3-mini-high model suggests he might not be too far off.
Thanks to DeepSeek, OpenAI was compelled to make o3-mini-high available in the regular ChatGPT subscription instead of locking it behind a steep $200 paywall. I would never have paid the original $200 for a model, but since I already have a regular ChatGPT subscription, trying it out was an obvious choice. With this in mind, I decided to run an experiment: could o3-mini-high generate a functional Go codebase for my GFSM library?
The Experiment
For context, GFSM is my Go Finite State Machine library, and I needed a new generator to extract and save state machines in formats like PlantUML and Mermaid. Writing such a generator requires a solid understanding of Go’s Abstract Syntax Tree (AST) package, something I hadn’t used in years.
Instead of writing the code myself, I handed most of the heavy lifting to o3-mini-high. The result? Almost all the code for the generator was AI-generated, with minimal manual adjustments. You can check out the generated code in this GitHub commit.
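To give a flavor of what such a generator has to do, here is a minimal sketch of the go/ast work involved. To be clear, this is not the code from the commit, and the fsm.AddTransition API in the toy input is made up for illustration: parse a source file, find the transition-defining calls, and emit Mermaid edges.

```go
package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"strconv"
)

// Toy input standing in for a state machine definition. The fsm.AddTransition
// API here is hypothetical; GFSM's real builder calls look different.
const src = `package demo

func build() {
	fsm.AddTransition("Idle", "Running")
	fsm.AddTransition("Running", "Done")
}`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "demo.go", src, 0)
	if err != nil {
		panic(err)
	}

	fmt.Println("stateDiagram-v2") // Mermaid state-diagram header

	// Walk every node; keep only AddTransition(from, to) calls with two
	// string-literal arguments, and print one Mermaid edge per call.
	ast.Inspect(file, func(n ast.Node) bool {
		call, ok := n.(*ast.CallExpr)
		if !ok {
			return true
		}
		sel, ok := call.Fun.(*ast.SelectorExpr)
		if !ok || sel.Sel.Name != "AddTransition" || len(call.Args) != 2 {
			return true
		}
		from, okFrom := call.Args[0].(*ast.BasicLit)
		to, okTo := call.Args[1].(*ast.BasicLit)
		if okFrom && okTo {
			f, _ := strconv.Unquote(from.Value) // BasicLit.Value keeps the quotes
			t, _ := strconv.Unquote(to.Value)
			fmt.Printf("    %s --> %s\n", f, t)
		}
		return true
	})
}
```

Running this prints a valid Mermaid state diagram (`Idle --> Running`, `Running --> Done`). The real generator has to handle many more node shapes and edge cases, which is exactly the AST bookkeeping I was happy to delegate.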
Code Quality: Mid-Level Engineer Competency (But Better Documentation)
The AI-generated code wasn't just functional: it worked on the first attempt! Still, it needed some refinement:
- Improving logging and error handling
- Refactoring functions for better readability and maintainability
Overall, the output quality was comparable to that of a solid mid-level engineer—not perfect, but good enough. But here’s the kicker: Mid-level engineers usually suck at writing documentation (if they write any at all), while the documentation generated by o3-mini-high was actually pretty nice—clear, structured, and covering all the necessary details.
My productivity gains may not be impressive, but they are reasonably good for a first attempt:
- Time spent “coding” with ChatGPT: 3 hours
- Time it would have taken me manually (my estimate): 6+ hours
The biggest time-saver? AI handled the AST package interactions, which would have required me to refresh my knowledge.
The Future of Software Engineering
What does this mean for engineers? I see this as an evolutionary shift rather than a revolutionary one—similar to previous transitions from Assembly to C or from C to Java. AI isn’t outright replacing engineers; it’s redefining what it means to be one.
The reality is that “boot camp engineers” and other IT specialists without a fundamental education are likely headed for obsolescence. Knowing how to piece together APIs or follow tutorials won't be enough when AI can do it better and faster, with roughly the same number of mistakes. Instead, a fundamental Computer Science education, with actual deep IT knowledge, becomes more critical than ever.
Why? Because understanding how and why a system works will distinguish those who can drive AI-assisted development from those who are replaced by it. Engineers with a strong foundation in algorithms, data structures, OS internals, networking, cybersecurity, and distributed systems will remain invaluable.
The AI-driven migration process will not be fast or simple. Even at first glance, the challenges ahead are significant.
The crucial part of the AI migration challenge is system verification. If AI is going to generate more and more of our code, who is going to verify that it actually works? This is where Software Development Engineers in Test (SDETs), or rather their next evolution, Software Verification Engineers (SVEs), become indispensable. SVEs will focus not just on functional correctness but also on formal verification, property-based testing, and system reliability. AI can generate code, but it doesn't inherently know whether that code is correct, secure, or efficient; that's where human engineers must step in.
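As a sketch of what I mean by property-based testing, Go's standard testing/quick package can hammer a function with randomized inputs and report the first counterexample. Everything here is hypothetical; the step function stands in for some AI-generated transition logic:

```go
package fsm_test

import (
	"testing"
	"testing/quick"
)

const numStates = 4

// step stands in for an AI-generated transition function: current state plus
// event in, next state out. The logic is a toy; the property below is the point.
func step(state, event uint8) uint8 {
	if event == 1 {
		return (state + 1) % numStates // advance
	}
	return 0 // reset
}

// TestStepStaysClosed verifies a closure property: whatever inputs arrive,
// the machine must land in one of its defined states. quick.Check generates
// randomized (state, event) pairs and fails on the first counterexample.
func TestStepStaysClosed(t *testing.T) {
	property := func(state, event uint8) bool {
		return step(state%numStates, event) < numStates
	}
	if err := quick.Check(property, nil); err != nil {
		t.Error(err)
	}
}
```

A human still has to decide which properties matter (closure, determinism, no unreachable states); that judgment is the SVE's job, not the model's.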
Another major challenge? AI-generated code will include security vulnerabilities. It's not a question of if but when. AI models don't “understand” security; they just generate plausible-looking code (the sketch after this list shows how plausible and vulnerable can coincide). This means:
- Secure coding practices must be enforced at the architecture level.
- Penetration testing and threat modeling will be more crucial than ever.
- Automated security analysis tools will need to evolve alongside AI-generated code.
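As a hypothetical illustration (the function names are mine, not from any real codebase), this is the classic pattern an LLM happily produces and a security review has to catch:

```go
package storage

import "database/sql"

// lookupUserVulnerable is the kind of plausible-looking code a model can emit:
// it compiles, it works in a happy-path demo, and it is an SQL injection hole,
// because user input is concatenated straight into the query string.
func lookupUserVulnerable(db *sql.DB, name string) (*sql.Rows, error) {
	return db.Query("SELECT id FROM users WHERE name = '" + name + "'")
}

// lookupUserSafe is the fix a reviewer (or a static analyzer such as gosec)
// should insist on: a parameterized query, letting the driver handle escaping.
func lookupUserSafe(db *sql.DB, name string) (*sql.Rows, error) {
	return db.Query("SELECT id FROM users WHERE name = ?", name)
}
```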
Security engineers will no longer be an “optional” hire for big enterprises. Every AI-assisted development team will need a strong cybersecurity presence, or they risk shipping highly vulnerable software.
And the ultimate pain in the ass? AI suggesting fixes for AI-generated code. This is where I start crying. Right now, AI is great at creating “reasonable-looking” code but lacks true reasoning about correctness and intent. This means:
- AI suggests code that works—but may be subtly wrong.
- The engineer spots a bug and asks AI for a fix.
- AI proposes a fix—but does it actually solve the problem, or does it introduce new ones?
- Repeat until frustration kicks in.
This loop is already frustrating enough when debugging human-written code. With AI generating both the problem and the solution, we might end up in an infinite AI-fix cycle, in which engineers waste more time validating AI's reasoning than they would have spent fixing the bug themselves.
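To make “subtly wrong” concrete, here is a hypothetical Go snippet of the kind I mean: it compiles, it even logs the failures, and it still always reports success, because `:=` inside the if statement shadows the outer err:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

func fetch(id int) (string, error) {
	if id == 0 {
		return "", errNotFound
	}
	return "value", nil
}

// loadAll looks reasonable at a glance, and an AI "fix" that reshuffles it can
// easily preserve the real bug: `:=` declares a NEW err inside the if,
// shadowing the outer one, so the function returns nil even when fetches fail.
func loadAll(ids []int) error {
	var err error
	for _, id := range ids {
		if _, err := fetch(id); err != nil { // shadows the outer err
			fmt.Println("fetch failed:", err)
			continue
		}
	}
	return err // always nil
}

func main() {
	fmt.Println(loadAll([]int{0, 1})) // logs a failure, then prints <nil>
}
```

An assistant asked to “fix the error handling” here will often add logging or wrap the error while keeping the shadowed variable, and the loop above begins.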
How do I see my own future? Honestly, it looks bright. With AI tools, I can do more, build faster, and focus on higher-level problem-solving instead of getting bogged down in boilerplate coding. AI is an amplifier, not a replacement: it extends my capabilities rather than diminishing my value.
And as for job security? I’m not worried in the slightest. With my specialization and experience in system architecture, distributed computing, and software engineering at scale, I will always have a place in this industry. If anything, AI clears the way for me to work on more interesting, complex problems—things AI alone can’t solve.