AI is Coming for Software Engineers (I'm a Software Engineer)
There seems to be a lot of AI anxiety across professions and I'm here to tell you that no one should worry as much as software engineers.
Whether or not AI is going to take the jobs of software engineers seems to be a pretty controversial topic (at least among software engineers). It shouldn’t be. There are some specific qualities of software engineering that predispose it to AI takeover and there are some specific qualities of software engineers that will accelerate the process.
What this means for software engineering jobs depends on one’s role and experience. That said, for the industry as a whole I predict a contraction of total positions over the next five years.
I. Why software engineering will be first
First, it’s important to understand the basic output of a software engineer: code. And code is just text. In fact, it’s structured text, specifically written for a machine to understand. Of course there are all sorts of other things software engineers do, but the building blocks are just text files at the end of the day. Programs are written in code. Databases? Just code. Provisioning and managing infrastructure? (It’s ok for non-engineers to not know what this one is. The point is…) It’s also done with code.
The new LLMs (e.g., GPT) are specifically designed to write text! There are plenty of professions whose primary output is text, so what makes programming different (and more susceptible)?
The text programmers write can also be executed, tested and verified by a computer in a tight feedback loop. When code fails, most systems produce specific errors, indicating the place of failure and the reason, which can be fed back into LLMs to debug and improve. Programmers write automated tests that a computer can run to ensure the code does what they expect with given inputs. Because success can be verified by a computer and not a human, infrastructure can (and will) be built around the LLMs to do exactly that.
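That loop is simple enough to sketch. Below is a minimal, self-contained illustration of the generate-run-feedback cycle; `fake_model` is a stubbed stand-in I'm inventing for the LLM call (a real system would hit a model API there), and `run_candidate` is just "execute the code and capture the error":

```python
import subprocess
import sys
import tempfile

def run_candidate(code):
    """Write candidate code to a file, execute it, return (passed, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path],
                            capture_output=True, text=True, timeout=10)
    return result.returncode == 0, result.stderr

def fake_model(task, error):
    """Stand-in for an LLM. Its first attempt is buggy; shown the
    traceback, it returns a 'fixed' version on the next call."""
    if error is None:
        return "assert add(2, 2) == 4\n"  # NameError: add() was never defined
    return "def add(a, b):\n    return a + b\n\nassert add(2, 2) == 4\n"

error = None
for attempt in range(5):
    code = fake_model("implement add() and test it", error)
    passed, stderr = run_candidate(code)
    if passed:
        break
    error = stderr  # feed the failure back into the next prompt
```

The key property is that nothing in this loop needs a human: the computer itself decides whether the output is correct, which is exactly what makes code different from, say, legal memos.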
And the foundations of this infrastructure are advancing as fast as (or faster than) the models themselves (look at LangChain and ControlNet for examples). This shouldn’t come as a surprise. Software engineers can’t help themselves. In our inexorable drive to automate our jobs, we’ll rush to build and improve this infrastructure as fast as possible.
The amazing part about all of this is that AI doesn’t need to be better or even as good as a human coder. It will work tirelessly to write and improve what’s requested of it. The current models are good enough already and we’ll soon write the infrastructure that will birth our first junior coders. They’ll almost immediately out-code us all.
II. Where it starts
When does it start? It already did, sometime in 2022. Software engineers already use GPT-4 and GitHub’s Copilot to augment their work. The real question is when we hit the tipping point.
Soon, teams will have junior AI engineers that understand the codebase in which they’re operating. Imagine filing an issue on GitHub and assigning it to your AI programmer. It will ingest the issue, consider relevant examples, reference its own corpus of knowledge (which is much larger than yours) and have code ready for review within minutes. In the beginning you’ll comment on its code just like you do now with your colleagues. That comment will get sent to the LLM and it will update the code with a revision.
It won’t stop there. We’ll build proper tooling to have it make revisions on its own until the code works as requested almost every time. As the underlying AI models advance both in their abilities and the amount of context they can consider (GPT-4 32K can already consider about 50 pages of text as context in a single prompt), they’ll become more and more reliable and accurate. They will consider your entire codebase when making features. This will lead to significantly better code than the average engineer. The whole process will cost about two dollars a pop—and the AI won’t demand free lunch either.
III. Where it ends
Most new code will be generated by AI—and not just new features, but refactors and bug fixes too. The role of software engineer will slowly shift to something that maybe deserves a title more like “AIInterOps Engineer.”
We’ll (AIInterOps Engineers née Software Engineers) focus on the systems necessary to allow a codebase to be accessed and iterated upon by AI. This will include sandboxes for AI to run and test in, good staging environments and strong test suites (the AI will write these too). We’ll build systems for them to constantly revise and refactor the work. We’ll connect our maintenance systems to AI as well. Alerts will be fed into tickets, and AI will pick them up immediately.
At the Beginning of the End, we’ll want to review all the code, but quickly that will get boring and we’ll just start stamping PRs¹ (like we do now). Soon much of the code will be foreign and we’ll have developed a dependence on our AI engineers (Engineering Managers out there already know what I’m talking about).
As we drift further away from the code, really strong automated testing will become more and more valuable. We’ll develop tools to translate our words into exhaustive test suites so we can feel sure that when AI writes code and the tests pass, it’s shippable.
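That shipping gate already exists in miniature today: a behavioral spec expressed as plain assertions that any implementation, human- or AI-written, must satisfy. A toy sketch (the `slugify` function and its spec are hypothetical; the implementation stands in for model output):

```python
# Hypothetical AI-generated implementation under review.
def slugify(title):
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# The behavioral spec: pairs of (input, required output).
# In the scenario above, a tool would derive these from plain English.
spec = [
    ("Hello World", "hello-world"),
    ("  extra   spaces ", "extra-spaces"),
    ("Already-Slug", "already-slug"),
]

# "Shippable" means every assertion in the spec holds.
shippable = all(slugify(inp) == out for inp, out in spec)
```

The harder the spec is to satisfy by accident, the less we need to read the code it gates.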
One day, our kids will gape in awe and say things to us like, “You could read code?!”
IV. Why “Increasing output rather than replacing it” is wrong
One of the most common rebuttals to this claim is that new developer tools have only ever increased developer efficiency and expanded the field. The real endgame here, the argument goes, is an explosion of software and commensurate growth within the industry.
This, I think (to your credit, straw man), is what the industry will look like over the next couple of years. And many people will say, “See, I told you so.” But there are a couple of things this argument misses.
First, to date, the job has never been fully automatable. To the extent you agree with any of the future I’ve outlined, once we get there—when AI agents can reliably be expected to develop complex features—we’ve reached an infinity point. Some number of existing engineers will be safe, but we certainly won’t need new ones.
Second, software engineering has traditionally been the bottleneck for product development. It’s not exactly clear at which point it won’t be, but it is true that at some point it won’t be. For example, at Meta we run lots of experiments (A/B tests, etc.) to see which features are successful and which aren’t. Those take real-world time. It’s reasonable that eventually the total amount of software that exists, plus the velocity of its development, will hit external rather than internal limits: we won’t be able to ship code fast enough to verify that it’s actually making a difference to our customers. How many more Todo apps does the world need?
V. Some systems need more human intervention
Roles for software engineers will continue to exist in a variety of areas. For example, legacy systems may have trouble properly integrating into the new AI tooling we build. There will always be jobs at Southwest Airlines and hospitals integrating their EMRs.
Likewise, highly regulated companies will move much slower. Companies like Lockheed Martin and Raytheon will not hand their systems over to AI mostly because any unintentional error might be too geopolitically catastrophic (and then a little because of SkyNet). They will probably still use AI, but much more like it’s being used today, as a simple sidekick.
On the other hand, it’s possible that companies with legacy systems will fall behind so quickly that we’ll see those jobs disappear as well, ceded to their competition (i.e. AI). And we might even see the same with big tech. How long until paying exorbitant amounts to the best software engineers becomes a weakness rather than a moat?
VI. Should I learn to code?
It’s a pretty dour blog post if you’re either a software engineer or someone hoping to be one. I expect most software engineers will continue to be employed for the foreseeable future (incidentally, “foreseeable future” becomes a more interesting phrase by the day). There’s too much imminent need and too many legacy systems that will require support.
Also, there will continue to be cutting edge jobs in AI research as well as infrastructure, networking, scaling and tooling. It’s one thing for an AI bot to iterate on top of existing codebases and tools and another for it to invent new systems and paradigms. I expect AI to be very good at writing things like React components very soon and not as good at inventing things like React for quite a while.
If you’re considering going into a coding profession, first, please do far more research than reading this post. But second, think deeply about it. The world of software engineering is going to look radically different in five years.
VII. Predictions
I’m going to try to actually record predictions for each post to keep me honest. Here they are:
We’ll see AI bots writing code and submitting it to GitHub repos within…
6 months (80%)
3 months (60%)
it already existing and my not knowing about it (25%)
The total number of employed software engineers in North America will be lower than it is today (4.4MM as of 2022 according to this source)…
by the end of 2024 (10%)
by the end of 2026 (50%)
by the end of 2028 (95%)
Disclaimer: These are my own opinions and not those of my employer.
¹ For non-eng folks: PR stands for “Pull Request” and it’s a submission of code to be reviewed before it goes into the main codebase.