This article is really only useful if LLMs are actually able to close the gap from where they are now to where they need to be in a reasonable amount of time. There are plenty of historical examples of technologies where the last few milestones are nearly impossible to achieve: hypersonic/supersonic travel, nuclear waste disposal, curing cancer, error-free language translation, etc. All of them have had periods of great immediate success, but development/research always gets stuck in the mud (sometimes for decades) because the level of complexity required to finish the race is exponentially higher than it was at the start.
I'm not saying you should disregard today's AI advancements; I think some level of preparedness is a necessity. But going all in on the idea that deep learning will power us to true AGI is a gamble. We've dumped billions of dollars and countless hours of research into developing a cancer cure for decades, but we still don't have one.
There are no technical hurdles remaining with respect to nuclear waste disposal. The only obstacle is social opposition.
In software we are always 90% there. It's that last 10% that gives us jobs. I don't see LLMs as that different from, let's say, the time compilers or high-level languages appeared.
Until LLMs become as reliable as compilers, this isn't a meaningful comparison IMO.
To put it in your words, I don't think LLMs get us 90% of the way there because they get us almost 100% of the way there sometimes, and other times less than 0%.
Compilers reliably solve 90% of your problems; LLMs unreliably solve 100% of your problems.
So yeah, very different.
I would argue that "augmented programming" (as the article terms it) both is and isn't analogous to the other things you mentioned.
"Augmented programming" can be used to refer to a fully-general-purpose tool that one always programs with/through, akin in its ubiquity to the choice to use an IDE or a high-level language. And in that sense, I think your analogies make sense.
But "augmented programming" can also be used to refer to use of LLMs under constrained problem domains, where the problem already can be 100% solved with current technology. Your analogies fall apart here.
A better analogy that covers both of these cases might be something like grid-scale power storage. We don't have any fully-general grid-scale power storage technologies that we could e.g. stick in front of every individual windmill or solar farm, regardless of context. But we do have domain-constrained grid-scale power storage technologies that work today to buffer power in specific contexts. Pumped hydroelectric storage is slow and huge and only really reasonable in terms of CapEx in places where you're free to convert an existing hilltop into a reservoir, but it provides tons of capacity where it can be deployed; battery-storage power stations are far too high-OpEx to scale to meet full grid needs, but work great for demand smoothing to loosen the design ramp-rate tolerances for upstream power stations built after the battery-storage station is in place; etc. Each thing has trade-offs that make it inapplicable to general use, but perfect for certain uses.
I would argue that "augmented programming" is in exactly that position: not something you expect to be using 100% of the time you're programming, but something where there are already very specific problems that are constrained enough that we can design agentive systems that have been empirically observed to solve those problems 100% of the time.
We're 90%... we're almost half way there!
It costs 10% to get 90% of the way there. Nobody ever wants to spend the remaining 90% to get us all the way there.
I think the parent was making an (apt) reference to Old School RuneScape, where level 92 represents half of the total XP needed to reach the max level of 99.
Exactly this.
LLMs are noisy channels. There's some P(correct|context). You can increase the reliability of noisy channels to an arbitrary epsilon using codes. The simplest example of this in action is majority decoding, which maps 1:1 to running parallel LLM implementations and having the parallel implementers debate solutions. You can implement more advanced codes, but in most cases that requires being able to decompose structured LLM output and having some sort of correctness oracle.
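A minimal sketch of that majority-decoding idea, assuming a hypothetical call_llm() wrapper around whatever provider is actually in use, and answers that can be compared verbatim (structured output would need to be canonicalized first):

```python
from collections import Counter

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for one sample from a model of your choice.
    Assumed to return a short answer string; not a real API."""
    raise NotImplementedError

def majority_vote(prompt: str, n_samples: int = 5) -> str:
    """Repetition-code view of the noisy channel: take n independent
    samples and return the most common answer. If each sample is correct
    with probability p > 0.5 and errors are independent, the majority
    answer is correct with probability greater than p."""
    answers = [call_llm(prompt) for _ in range(n_samples)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```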
100%. Exactly as you've pointed out, some technologies - or their "last" milestones - might never arrive, or could be much further in the future than people initially anticipated.
I'm already not going back to the way things were before LLMs. This is fortunately not a technology where you have to go all-in. Having it generate tests and classes, solve painful typing errors and help me brainstorm interfaces is already life-changing.
Based on my experience with LLMs and the hype around them, we will need more experienced programmers, because they will have to clean up the huge mess that is coming.
In my experience, if you look at what effect democratizing code actually has, this is exactly the case. People are generating code, but that's always been the easy part. The mess this is going to make is gonna need a lot of mops.
> Will this lead to fewer programmers or more programmers?
> Economics gives us two contradictory answers simultaneously.
> Substitution. The substitution effect says we'll need fewer programmers—machines are replacing human labor.
> Jevons’. Jevons’ paradox predicts that when something becomes cheaper, demand increases as the cheaper good is economically viable in a wider variety of cases.
The answer is a little more nuanced. Assuming the above, the economy will demand fewer programmers for the previous set of demanded programs.
However. The set of demanded programs will likely evolve. So to over-simplify it absurdly: if before we needed 10 programmers to write different fibonacci generators, now we'll need 1 to write those and 9 to write more complicated stuff.
Additionally, the total number of people doing "programming" may go up or down.
My intuition is that the total number will increase but that the programs we write will be substantially different.
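One rough way to make the substitution-vs-Jevons tension concrete (an illustration, not anything from the thread) is price elasticity of demand: if demand for software is elastic enough, cheaper programming increases total spending on programming and Jevons wins; if not, substitution wins. A toy sketch with a constant-elasticity demand curve and made-up numbers:

```python
def total_spend(price: float, elasticity: float, k: float = 100.0) -> float:
    """Constant-elasticity demand: quantity = k * price**(-elasticity).
    Total spend on programming = price * quantity. Illustrative only."""
    quantity = k * price ** (-elasticity)
    return price * quantity

# Suppose the effective price of 'a unit of programming' halves.
for e in (0.5, 1.0, 2.0):
    print(f"elasticity {e}: spend {total_spend(1.0, e):.0f} -> {total_spend(0.5, e):.0f}")
# With elasticity below 1, spend falls (substitution dominates);
# above 1, it rises (Jevons dominates); at exactly 1, it is unchanged.
```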
That's what happened once with PCs in the 80s/90s and then again with the web in the 90s/2000s. The number of developers went up. Maybe the number of developers on the previous technology went down, but I'm not sure about it. Example: there are still developers for Windows native apps. Are there more or fewer of them than in 1995? I would bet on fewer, but I wouldn't bet anything of value.
This is insightful. Which programs will the new tech make profitable (be it cash, psychic/emotional, or some other form) to write?
The Keynesian bogeyman of the deflationary spiral ignores intertemporal effects. Cell phones and laptops are getting cheaper all the time, but no one drops into an infinite wait, because of time preference. In the context of producing software becoming cheaper, people at some point value having a usable system today over a marginally cheaper version tomorrow.
> now we'll need 1 to write those and 9 to write more complicated stuff.
Or simpler :) I'd argue that in the past we needed more programmers for more complicated stuff (more hand-rolled databases, auth solutions etc. - a lot of stuff was reinvented in each company), whereas now we need many more people to glue libraries and external solutions together.
The future could look similar, a lot of LLM vibe coders and a handful of specialized fixers.
Who knows though. Real life has a lot of inertia. One will probably do just fine writing just enterprise Java or React (or both!) for the next 30 years. I plan to be dead or retired in the next 30 years.
> Don’t bother predicting which future we'll get. Build capabilities that thrive in either scenario.
I feel this is a bit like the "don't be poor" advice (I'm being a little mean here maybe, but not too much). Sure, focus on improving understanding & judgement - I don't think anybody really disagrees that having good judgement is a valuable skill, but how do you improve that? That's a lot trickier to answer, and that's the part where most people struggle. We all intuitively understand that good judgement is valuable, but that doesn't make it any easier to make good judgements.
The role of the entrepreneur is predicting future states of the market and deploying present capital accordingly. Beck is advocating a game-theory optimal strategy.
Judgment is a skill improved through reps. Sturgeon’s law (ninety percent of everything is crap) combined with vibe code spewage will create lots of volume quickly. What this does not accelerate is the process of learning from how bad choices ripple through the system lifecycle.
Make lots of predictions and write down your thought process (seriously, write it down!). Once the result is in, analyze whether you were right. Were you right for the right reasons? Were you wrong but with a mostly right thought process?
Do it every day for years.
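One way to keep that loop honest (my sketch, not the commenter's prescription) is a tiny prediction journal scored once outcomes are known, e.g. with a Brier score:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str                   # what you predicted
    probability: float           # your stated confidence, 0..1
    reasoning: str               # the thought process, written down
    outcome: bool | None = None  # filled in once the result is known

def brier_score(journal: list[Prediction]) -> float:
    """Mean squared gap between stated confidence and what actually happened.
    0.0 is perfect calibration; always saying 50% scores 0.25."""
    resolved = [p for p in journal if p.outcome is not None]
    if not resolved:
        raise ValueError("no resolved predictions yet")
    return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)
```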
It's just experience, i.e. a collection of personal reference points built up from seeing how said judgements have played out over time in reality. This is what can't be replaced.
I think the current state of AI is absolutely abysmal, borderline harmful for junior inexperienced devs who will get led down a rabbit hole they cannot recognize. But for someone who really knows what they are doing it has been transformative.
The IT industry has been trying to find ways to cheaply produce low-quality code for decades. AI might be the final chapter of that (I'm not even sure about that), but low-quality code is not what programming is about. Even if the context windows and models are scaled 10x, they will be forgetful and they will try to cheat their way to some kind of success. If you are building software because you care about the craft and the result, AI will not replace you in the coming decades. You will just be more in the architectural position, not hands-on coding. I personally see that as the core of what programming is.
The article reduces programming to its economic and utilitarian components to make this analysis. It's coherent and valuable for analysing decision-making in the context of programming as a means to an end, where the end is to make money.
However, there are other aspects to programming that can't be quantified, subjective components that are stripped away when delegating coding to machines.
The first, most immediate effect, I think, is loss of the sense of ownership of the code. The second, which takes a bit of time to sink in and is at first buried by the excitement of making something work that is beyond your technical capability, is loss of enjoyment.
Take both of these out and you create what I can only describe as soul-less code.
The impact of soul-less code is not obvious, not measurable, but I'd argue quite real. We will need time to see these effects in practice. Will companies that go all-in on machine-generated code have the upper hand, or those that value traditional approaches more?
This mindset that the value of code is always positive is responsible for a lot of problems in industry.
Additional code is additional complexity, "cheap" code is cheap complexity. The decreasing cost of code is comparable to the decreasing costs of chainsaws, table saws, or high powered lasers. If you are a power user of these things then having them cheaply available is great. If you don't know what you're doing, then you may be exposing yourself to more risk than reward by having easier access to them. You could accidentally create an important piece of infrastructure for your business that gives the wrong answers, or requires expensive software engineers to come in and fix. You accidentally cost yourself more in time dealing with the complexity you created than the automation ever brought in benefit.
Well, this has happened to me when putting pieces of code directly in front of an AI. You go 800% faster or more, and then you have to go and finish it. All the increase in speed is lost in debugging, fixing, fitting and other mundane tasks.
I believe the reason for this is that we still need judgement to do those tasks. AIs are not perfect at it, and they spit out a lot of extra code and complexity at times. Then you need to reduce that complexity. But to reduce it, you need to understand the code in the first place. So you cut here and there, you find a bug, but you are diving into code you do not fully understand yet.
So human cognition has to keep pace with what the AI is doing.
What ended up happening to me (not all the time; for one-off or small scripts this is irrelevant, as it is for authoring a well-known algorithm that is short enough to get right without bugs) is that I get a sense of speed that turns out not to be real once I have to complete the task as a whole.
On top of that, as a human you tend to lose more context if you generate a lot of code with AI, and the judgement must be yours anyway. At least, until AIs get really brilliant at it.
They are good at other things. For example, I think they do decently well at reviewing code and finding potential improvements. Because if they say bullsh*t, as any of us could in a review, you just move on to the next comment, and you can always find something valuable in there.
Same for "combinatoric thinking". But for tasks that need more "surgery" and precision, I do not think they are particularly good; they just make you feel like they are, and when you have to deal with the whole task, you notice this is not the case.
Interesting, but way too optimistic and biased towards the scenario in which infinite progress of LLMs and similar tools is just a given, when it's not.
"Every small business becomes a software company. Every individual becomes a developer. The cost of "what if we tried..." approaches zero.
Publishing was expensive in 1995, exclusive. Then it became free. Did we get less publishing? Quite the opposite. We got an explosion of content, most of it terrible, some of it revolutionary."
If only it were the same and that simple.
> AI isn't just redistributing the same pie; it's making the pie-making process fundamentally cheaper
Not if you believe most other articles related to AI posted here, including the one from today (from Singularity is Nearer).
A related idea is sub-linear cost growth, where the unit cost of operating software gets cheaper the more it’s used. This should be common, right? But it’s oddly rare in practice.
I suspect the reality around programming will be the same - a chasm between perception and reality around the cost.
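For concreteness, a toy model of what sub-linear cost growth would look like (made-up numbers, not a claim about any real system): a fixed operating cost plus a variable cost that grows slower than usage, so the cost per request keeps falling as usage rises.

```python
def unit_cost(requests: int, fixed: float = 10_000.0,
              marginal: float = 0.002, scale_exponent: float = 0.8) -> float:
    """Toy cost model: total cost = fixed + marginal * requests**scale_exponent,
    with the exponent below 1 standing in for caching, batching, or bulk pricing.
    Returns cost per request."""
    total = fixed + marginal * requests ** scale_exponent
    return total / requests

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} requests -> {unit_cost(n):.5f} per request")
# The per-request cost keeps dropping as usage grows; the comment's point is
# that this curve is surprisingly rare in practice, not that it's impossible.
```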
Some valid questions are asked in the article, but I don't like the terminology used, from title to content, to assess the situation and the options. I'd rather call it the Commoditization of Software Engineering.
I’ve been thinking about the impact of LLMs on software engineering through a Marxist lens. Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably. This contradiction (between productive capacity and limited demand) leads to bankruptcies, layoffs, and recessions until value and capital are destroyed, paving the way for the next cycle.
Something similar might be happening in software. LLMs allow us to produce more software, faster and cheaper, than companies can realistically absorb. In the short term this looks amazing: there’s always some backlog of features and technical debt to address, so everyone’s happy.
But a year or two from now, we may reach saturation. Businesses won’t be able to use or even need all the software we’re capable of producing. At that point, wages may fall, unemployment among engineers may grow, and some companies could collapse.
In other words, the bottleneck in software production is shifting from labor capacity to market absorption. And that could trigger something very much like an overproduction crisis. Only this time, not for physical goods, but for code.
> Marx described one of capitalism’s recurring problems as the crisis of overproduction: the economy becomes capable of producing far more goods than the market can absorb profitably
Is that capability a problem? We don't tend to do this unless the state subsidises things or labour unions protect things that don't have customers.
"Understanding. Judgment. The ability to see how pieces fit together. The wisdom to know what not to build."
How would one even market oneself in a world where this is what is most valued?
> How would one even market oneself in a world where this is what is most valued?
That's basically the job description of any senior software development role, at least at any place I've worked. As a senior, pumping out straightforward features takes a backseat to problem analysis and architectural decisions, including being able to describe tradeoffs and how they impact the business.
This is (at least in theory) the current job of executives. You market yourself with a narrative that explains why projects have succeeded and failed such that you can project high confidence that you will make future projects successful.
Question 1: is this indeed what is most valued at the moment?
Question 2: Do you think this will ever become valuable?
I think this is a bit like attempting your own plumbing. Knowledge was never the barrier to entry nor was getting your code to compile. It just means more laypeople can add "programming" to their DIY project skills.
Maybe a few of them will pursue it further, but most won't. People don't like hard labor or higher-level planning.
Long term, software engineering will have to be more tightly regulated like the rest of engineering.
I agree with the first part of your comment, but don't follow the rest - why should SE be more tightly regulated? It doesn't need to be; if anything, that would just stifle its progress and evolution.
I think AI will make more visible where code diverges from the average. Maybe auditing will be the killer app for near-future AI.
I'm also thinking about a world where more programmers are trying to enter the workforce self-taught using AI. The current reality is a continued lowering of education standards and a political climate hostile to universities.
The answer to all of the above, from the perspective of those who don't know or really care about the details, may be to cut the knot and impose regulation.
Delegate the details to auditors with AI. We're kinda already doing this on the cybersecurity front. Think about all the ads you see nowadays for earning your "cybersecurity certification" from an online-only university. Those jobs are real and people are hiring, but the expertise is still lacking because there aren't clearer guidelines yet.
With the current technology and generations of people we have, how else but with AI can you translate NIST requirements, vulnerability reports, and other docs that don't even exist yet but soon will, into pointing someone who doesn't really know how to code towards a line of code they can investigate? The tools we have right now, like SAST and DAST, are full of false positives, and non-devs are stumped over how to assess them.
Even before all this latest round of AI stuff it's been a concern that we overwork and overtrust devs. Principle of least privilege isn't really enough and is often violated in any scenario that isn't the usual day-to-day work.
Literally all new products nowadays come with a great deal of software and hardware, whether they are a SaaS or a kitchen product.
Programming will still exist; it will just be different. Programming has changed a lot of times before as well. I don't think this time is different.
If programming suddenly became that easy to iterate on, people would be building new competitors to SAP, Salesforce, Shopify and other solutions overnight, yet you rarely see any good competitor coming along.
The necessary work behind understanding your customers' needs and iterating on it between product and tech is not to be underestimated. AI doesn't help with that at all; at most it's a marginal iteration improvement.
Knowing what to build has been for a long time the real challenge.
> Value Migration: Writing code becomes like typing—a basic skill, not a career. Value moves to understanding what to build, how systems fit together, and navigating the complexity of infinite cheap software pieces.
I saw this happening way before LLMs came on the scene in 2015. Back then, I was an ordinary journeyman enterprise developer who had spent the last 7 years both surviving the shit show of the 2008 recession and getting to the other side of being an “expert beginner” after staying at my second job out of college for 9 years until 2008.
I saw that, as an enterprise dev in a second-tier tech city, no matter what I learned well - mobile, web, “full stack development”, or even “cloud” - they were all commodities that anyone could learn “well enough”, so I wouldn’t command a premium; I was going to plateau at around $150-$160K and it would be hard to stand out.
I did start focusing on just what the author said and took a chance the next year, leaving a full-time salaried job for a contract-to-perm opportunity that would give me the chance to lead a major initiative driven by a then-new director of a company [1].
I didn’t learn until 5 years later at BigTech that promotions were about “scope”, “impact” and “dealing with ambiguity”.
https://www.levels.fyi/blog/swe-level-framework.html
I had never had a job before with real documented leveling guidelines.
Long story short, I left there and led the architecture of a B2B startup, and then a job working (full time) in the cloud consulting department of AWS fell into my lap.
After leaving AWS in 2023, I found out how prescient I was: the regular old enterprise dev jobs I was being offered, even as a “senior” or “architect”, were still topping out at around $160K-$175K and hadn’t kept up with inflation. I have friends who are making around that much with 20 years of experience in Atlanta.
Luckily, I was able to quickly get a job as a staff consultant at a third party consulting company. But I did have to spend 5 years honing my soft skills to get here. I still do some coding. But that’s not my value proposition.
[1] Thanks to having a wife working part time in the school system with good benefits, I could go from full time to contract to permanent in 2016.