I see much more hysteria and false but extremely high hopes than real results from where I sit (developer at a FAANG-like high-tech company).
It looks like the higher up the management chain, and the farther away from real engineering work, the more excitement there is and the less common sense and real understanding of how developers and LLMs work.
> Are you 10x more efficient?
90% of my time is spent thinking and talking about the problem and solutions. 10% is spent coding (sometimes 1%, with the other 9% spent integrating it into existing infrastructure and processes). Even with an ideal AGI coding agent I'd be only about 10% more efficient.
Imagine a very bright junior developer. You are still heavily taxed for time mentoring and communicating with them.
Not many non-technical people (to my surprise) get it.
Based on posts and comments here, there are plenty of "technical enough" people who don't understand the essence of engineering work (software engineering in particular).
Spitting out barely-working (for now) throwaway-grade code is an impressive accomplishment for TikTok, but it has very little to do with the complex, business-critical software most real engineers deal with every day.
phyalow 59 days ago [-]
On the contrary. I would class myself as a mid-to-high-skill dev; I have a CS degree and about 10 years of Java/C++/Rust/Python under my belt (focused on financial market applications).
I would consider myself today 2-3x more effective than where I was 12 months ago.
I can grok a new code base much faster by having an AI explain things to me that previously only a greybeard could. I can ask Gemini 2.5 (1M context length) crazy things like "please create a sprint program for new feature xyz" and get really high-quality answers. Even crazier, I can feed those sprints to Claude Code (with CI/CD tests all running) and it will do a very good job of implementing them. My other option is to farm those sprints out to the human dev resources I have at hand and then spend 90% of my time "thinking, hand-holding and talking about code and solutions" and working with other devs to get code into prod.
Imo that is a false victory; the emphasis should be on shipping. Although each domain/pipeline/field needs and prioritises different things, and rightfully so. AI lets me ship so much faster, and for me that means $$$.
I think I am a realist, and your last point about "engineering" is a contradiction. Maybe try better tools? Lastly:
"While the problem of AI can be viewed as, 'Which of all the things humans do can machines also do?', I would prefer to ask the question in another form: 'Of all of life's burdens, which are those machines can relieve, or significantly ease, for us?'"
Richard Hamming, The Art of Doing Science and Engineering: Learning to Learn, p. 43
aristofun 59 days ago [-]
> I can grok on a new code bases much faster
How often do you grok a new code base per year? If that's the core of your work, then yes, you benefit from AI much more than some other engineers.
Every situation is unique for sure.
> I would class myself a mid to high skill dev
It's not about your skill level, but rather about the nature of your job (working on a single product, an outsourcing company with time-boxed projects, R&D, etc.).
moribvndvs 59 days ago [-]
Are you able to qualify a "2-3x" improvement? That's an honest question. The anecdotes out there are wildly all over the place and don't match up with my own experience or that of my peers. I've only seen a marginal uplift, which includes productivity offsets caused by mistakes and hallucinations, not only in my own work but in LLM-assisted output from coworkers.
timkofu 55 days ago [-]
Exactly.
It's like saying "robots are replacing civil engineers". Asphalt laying is maybe 10% of the work required to commission a road. Deciding whether to build a road at all, the costs, where to build it, the math: all of that still needs to be done by a civil engineer.
The bulk of software engineering is feasibility study, requirements gathering, and detailed design (architecture), then finally the implementation phase, where AI comes in.
Those stages are in order of importance. Getting it wrong, especially in the first two, results at best in a high-quality, shiny white elephant.
The implementation phase is at most 20%, and on average 10%, of the work required to commission reliable, maintainable software.
markus_zhang 59 days ago [-]
It makes sense. The business stakeholders always want to get things done ASAP, and they don't really care how they get done. This is especially true if the stakeholders want to run many one-time trials.
I think those stakeholders are the true engine promoting AI.
ookblah 59 days ago [-]
okay, but code still has to be written. you can be a master architect, but if the codebase requires X lines they have to come from somewhere. i'm just having a hard time grokking how you can spend 1-10% of your time coding and actually ship anything at speed, esp. if you imply you're not far away from the real engineering work.
or maybe at these companies the product is pretty stable, or you're in an area where it's more optimization than feature building?
aristofun 59 days ago [-]
> i'm just having a hard time grokking how you can spend 1-10% of your time coding and actually ship anything at speed
Because if the other 90% is spent well enough, you do the right thing in the remaining 10%.
Just try working at a company with 100+ engineers and a profitable product that's at least a few years old, with real customers, and you'll get it.
player1234 54 days ago [-]
Who ships anything at "speed"?
aristofun 54 days ago [-]
That’s a good one :)
juancn 58 days ago [-]
I see the same mistake made everywhere: thinking that the hard part of software engineering is writing new code.
A large chunk of the work is dealing with people, understanding what they really want/need, and helping them understand it.
On the technical side, most of the work is around fixing issues with existing software (protecting an investment).
Then, maybe 1 to 10% of the workload is making something new.
AI kinda works for the "making something new" part but sucks at the rest. And when it works, it's at most "average" (in the sense of its training set: it prefers what it sees most often, regardless of quality).
My gut instinct is that there's going to be an AI crash, much like in the late 90s/early 2000s. Too much hype, and then, after the crash, maybe we'll start to see something a bit more sane and realistic.
byoung2 59 days ago [-]
I am a tech lead, and while I don't use AI at my company (Disney) for writing code, since I have a team of contractors at my disposal, in my spare time I am working on a side project where ChatGPT is writing all of the code. It is mainly an experiment to see if it can be done. I am getting better at writing prompts to get better results, but I don't think we are at the point where a nontechnical project manager could get good results. I feel like a lead or senior can use AI to replace interns and juniors, but more likely it is currently being used to make them more productive rather than to replace people. It will be interesting to see the next few years, when it may become possible for one lead or senior to do the work of entire teams.
frank_nitti 59 days ago [-]
I wonder how a nontechnical PM would ever be able to evaluate the outputs well enough for their business to deploy model-generated code into production where end users’ safety/privacy/etc are at stake.
If the answer would be related to extensive testing, who verifies the model-generated tests?
Given that a nontechnical PM would neither be able to inspect the system code nor its tests, this is the part that does not add up for me. It seems at least one person still has to really understand the “hard part” of computing as it relates to their domain.
byoung2 59 days ago [-]
Yeah, I often catch bugs in AI-generated code, and AI generally just codes for the happy path. There's a big chance a nontechnical person wouldn't think of the edge cases a seasoned engineer would.
variadix 59 days ago [-]
Not directly, but I wouldn’t be surprised if there’s enough of an efficiency improvement to obviate hiring an engineer or two (across 100+ people). In the same way that Google and StackOverflow made people more efficient when compared to having to otherwise search through and read physical documentation (to debug, to understand some API or hardware thing), LLMs have made me more efficient by being able to get tailored answers to my questions without having to do as much searching or reading. They can provide small code examples as clarification too.
In many ways LLMs feel like the next iteration of search engines: they’re easier to use, you can ask follow up questions or for examples and get an immediate response tailored to your scenario, you can provide the code and get a response for what the issue is and how to fix it, you can let it read internal documentation and get specialized support that wouldn’t be on the internet, you can let it read whole code bases and get reasonable answers to queries about said code, etc.
I don't really see LLMs automating engineers end-to-end any time soon. They are really incapable of deductive reasoning; the extent to which they can do it is emergent from inductive phenomena, and it breaks down massively when the input is outside the training distribution (see all the examples of LLMs failing basic deductive puzzles that are very similar to a well-known one, but slightly tweaked).
Reading, understanding, and checking someone else’s code is harder than writing it correctly in the first place, and letting LLMs write entire code bases has produced immense garbage in all the examples I’ve seen. It’s not even junior level output, it’s something like _panicked CS major who started programming a year ago_ level output.
Eventually I think AI will automate software engineering, but by the time it’s capable of doing so _all_ intellectual pursuits will be automated because it requires human level cognition and adaptability. Until then it’s a moderate efficiency improvement.
timtas 57 days ago [-]
Excellent breakdown. Software engineering will be automated end-to-end around the same time as doctors and lawyers.
jmisavage 59 days ago [-]
We’re in the early stages of this transition. There’s no formal hiring freeze, but leadership has made it clear we should exhaust AI options before considering new hires. At the same time, raises and promotions are frozen this year, which has definitely caused a lot of frustration internally.
As part of this AI-first shift, all engineers now have access to Cursor, and we’re still figuring out how to integrate it. We just started defining .cursorrules files for projects.
What’s been most noticeable is how quickly some people rely too much on AI outputs, especially the first pass. I’ve seen PRs where it’s obvious that the generated code wasn’t even run or reviewed. I know this is part of the messy adjustment period, but right now, it feels like I’m spending more time reviewing and cleaning up code than I did before.
markus_zhang 59 days ago [-]
For now it's more like AI boosting productivity so the company doesn't have to hire more.
We are a team of 5, down from 8 a few months ago, and we are working on more stuff. I would not be able to survive without AI writing some queries and scripts for me. It really saves a ton of time.
baq 59 days ago [-]
Regardless of its realized effectiveness improvements, it froze the intern/junior hiring pipelines.
TuringNYC 59 days ago [-]
> Regardless of its realized effectiveness improvements it froze the intern/junior hiring pipelines.
Would be great to see some industry-wide stats here. There are three OTHER factors at play here:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
4. AI (???)
Not sure how much can be attributed to AI. That said, I'd confidently say our team is at least 2x more productive than three years ago. Huge numbers of loose-change problems get thrown to the LLM to solve instead of writing clever algos, etc.
marifjeren 59 days ago [-]
a fifth plausible force here is that tech companies simply hired too aggressively during the zero-interest-rate-policy, COVID-related-digitalization engineering hiring rush of 2021-2023.
things are actually getting back to normal, and the companies who are embarrassed to be making cutbacks are saying it's because they're using AI, not because they over-hired.
rsynnott 58 days ago [-]
Also, looming potential financial crisis, and massive uncertainty. Like, under the current economic conditions, a lot of companies will be in wait-and-see mode, and will be slowing or pausing hiring _anyway_.
pajamasam 59 days ago [-]
The company I used to work for did the opposite by freezing senior+ hiring. Their argument is that juniors are more value for money.
TuringNYC 59 days ago [-]
I've seen the opposite -- more senior hiring because seniors can now outsource a portion of rote work to the LLM -- everything from trivial utilities to docs to testing.
Also, I get dozens of calls/emails a month from my undergrad/grad alma mater. The bottom seems to have fallen out of the labor market when even Ivy League and top-5 CS/tech schools have students desperately seeking entry-level jobs.
To be fair, as I mentioned on another comment, there are other factors:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
bpt3 59 days ago [-]
That is a poor argument, regardless of the salary. Juniors are a net loss for at least the first year in almost any environment.
pajamasam 58 days ago [-]
They made it work well. They had highly decentralised teams and used a bunch of no-code and serverless tools. They also knew that juniors and interns would work longer hours and give less push-back when given somewhat crazy requests.
I'm not saying everyone should do this. Just wanted to give another perspective.
soco 58 days ago [-]
Nevertheless, I see it happening as well: juniors left to roam freely in new projects where a senior drafted a happy path and was then sent on to the next project. Say hello to the outsourcing way of working.
WalterGR 59 days ago [-]
Where?
gitfan86 59 days ago [-]
Most, but not all, companies are bottlenecked by organizational issues, not the speed of completing a Jira ticket. A lot of those companies have moats or sales issues that prevent competitors from easily taking market share.
So instead of seeing a mass drop in job openings, you will see companies that are not bottlenecked by org issues start to move very fast. In general that will create new markets and have a positive effect on jobs.
chadQuinlan 58 days ago [-]
Agree. For interrupt work on my team, I often spend more time writing the story and prodding coworkers for PR review than I do writing code. GitHub Copilot has been our tool of choice, and it's been a great advanced tab-complete. We haven't used it for much more.
chrisgd 59 days ago [-]
Duolingo made this announcement today about replacing contractors with AI
Those are content-creation jobs, not engineering, which for Duolingo is very low quality and repetitive anyway.
Duolingo needs better content, not a faster way of producing the same stuff.
amelius 59 days ago [-]
What kind of jobs?
ilaksh 59 days ago [-]
I use AI as much as possible for programming, but the specific wording "replacing" is not quite accurate yet. The confusing thing for people is that it probably will be at full-replacement level within a couple of years.
The leading edge models surpass humans in some ways, but still make weird oversights routinely. I think the models will continue to get bigger and have more comprehensive world models and the remaining brittleness will go away over the next few years.
We are early on in a process that will go from only a few jobs to almost all (existing) jobs very quickly as the models and tools continue to rapidly improve.
frank_nitti 59 days ago [-]
If there are no human software engineers, who will be legally and financially responsible for the code that is shipped into production? Will OpenAI, Anthropic et al assume responsibility for damages when critical systems fail for any of their users, or will it be the non-SWE user who gave the prompt(s)?
ilaksh 58 days ago [-]
Technically, it should be the company that is responsible.
frank_nitti 58 days ago [-]
That is what I would assume. So, unless I am overlooking something, it seems like a very bad idea for a company to have zero in-house engineering capability: no one who can read and validate the generated application code, test suites, etc. before it deploys into production, where real end users could be harmed by faulty code generated by the LLM.
For throwaway prototypes and coding and design assistance, I can see it being leveraged very effectively, but for mission-critical software systems I can't see it going this route, or we'll have some very big problems incoming.
markus_zhang 59 days ago [-]
I just pray I still have work in the next 10 years. Then I'll semi retire and get done with it.
joshuanapoli 59 days ago [-]
In software development, we usually don’t really have a firm scope that gets completed in a clean way. So when developers get more efficient (from high level languages, OOP, Agile, Internet, AI, etc.) I think that we normally slide into a bigger scope, rather than finishing sooner or reducing the team size. Everyone usually gets the same boost from a coding productivity innovation at about the same time. So team size for products in competitive markets isn’t affected much by productivity. Improvements received by the customers accelerate, rather than developer job cuts.
jamil7 59 days ago [-]
This is kind of my take as well. Most places I've worked at, including my current one, have more work than the team can get done; features and fixes get cut or deprioritized all the time to try to release at a reasonable cadence. If the product you're selling is software, then to me it makes sense that you'd not cut anyone from a software engineering team if your margins suddenly get better via LLM productivity gains. Rather, you can argue to increase team size, because increased velocity is a competitive advantage. On the other hand, if you work somewhere where software is not the end product but a support function, you might be seen as a cost centre, in which case LLM productivity gains could be seen as a means of freezing hiring or reducing headcount.
due-rr 59 days ago [-]
Just like with traffic: you build more roads, but congestion stays the same. It's induced demand[1].
Yes. No one knows where the top is in terms of demand for software engineering. Software continues to eat the world with no end in sight.
zooom 57 days ago [-]
My impression is that LLMs, at least currently, are just like any other modest increase in the power of our tooling. CRUD apps get even easier. Simple UIs get even easier. And on the business side, no-code/low-code (no coder, no human) solutions are hyped to the moon.
But the end result will be that, once (if) the economy becomes healthy again, businesses will just become more ambitious, and software will get more hardware-intensive and slower. Same ole, same ole.
jonplackett 59 days ago [-]
Is 'replacing' the right way to think of it though?
I don't see any AI yet anywhere near good enough to literally do a person's job.
But I can easily see it making someone, say 20%-50% more effective, based on my own experience using it for coding, data processing, lots of other things.
So now you need 8 people instead of 10 to do a job.
That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
bondarchuk 59 days ago [-]
>That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
That's exactly what people mean by "replacing".
frank_nitti 59 days ago [-]
There are several commenters in this thread who actually foresee having no software engineering team at all; there would not be a single person on staff who could read and understand application code.
I would agree about needing fewer heads performing certain types of roles, and I could even buy that tech staff would hardly ever need to handwrite code directly.
For serious projects where critical data, physical safety, etc for end users are at stake, I still don’t see the path toward simply having no in-house engineer to certify changes generated by an LLM
jonplackett 59 days ago [-]
Basically I'm saying the same: I don't think it's gonna actually do an engineer's job, but that doesn't mean engineers are all gonna be fine.
To argue against myself though - it might just mean more/better code is written by the same number of engineers.
If code gets cheaper, people will use more of it
jonplackett 59 days ago [-]
I don’t think they do - I think people feel like they’re somehow ‘safe’ because AI can’t do ALL of their job. My point is that it doesn’t have to and can still screw up the economics of your job.
Maybe I’m just stating the obvious…
voidUpdate 59 days ago [-]
I can see a lot of artists and graphic designers losing their jobs because salespeople just generate AI crap instead.
f1shy 59 days ago [-]
Should it be called "enhancing" instead?
scarface_74 59 days ago [-]
My anecdote is that I am in cloud consulting specializing in app dev. 90%+ of my projects are greenfield development.
Before LLMs got good enough, there were projects I would scope with the expectation of having one junior consultant do the coding grunt work - simple Lambdas, Python utility scripts, bash scripts, infrastructure as code, translating some preexisting code to the target language of the customer.
This is the perfect use case for ChatGPT. It’s simple well contained work that can fit in its context window, the AWS SDKs in various languages are well documented, there is plenty of sample code, and it’s easy enough to test.
I can tell it to “verify all AWS SDK functions on the web” or give it the links to newer SDK functionality.
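To make the "simple, well-contained work" concrete, here is a minimal sketch of the kind of utility Lambda being described. The event shape follows the standard S3 notification format; the handler itself and its field names are a hypothetical illustration, not code from this comment. Its logic is pure transformation with no AWS calls, which is exactly why it is easy to test locally:

```python
import json


def handler(event, context=None):
    """Summarize object keys and sizes from an S3 PUT notification event.

    Pure transformation logic: no boto3 calls, so it can be unit tested
    locally before being deployed as a Lambda.
    """
    records = event.get("Records", [])
    objects = [
        {
            "bucket": r["s3"]["bucket"]["name"],
            "key": r["s3"]["object"]["key"],
            "size": r["s3"]["object"]["size"],
        }
        for r in records
    ]
    return {
        "statusCode": 200,
        "body": json.dumps({"count": len(objects), "objects": objects}),
    }
```

Because the handler is self-contained, verifying an LLM's output here amounts to feeding it a sample event and checking the response, which is far cheaper than reviewing a sprawling service.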
I don’t really ever need a junior developer for anything. If I have to be explicit about the requirements anyway, I can use an LLM.
And before the gatekeeping starts: I've been coding as a hobby since 1986, when I started in assembly language, and have been coding professionally since 1996.
throwaw12 59 days ago [-]
> where they stop recruiting and replace human engineers with AI
I don't think it is possible NOW.
But for specific areas, the productivity gain you get from a single developer with an LLM is much higher than before. Some areas where I see it shining:
* building independent React/UI components
* boilerplate code
* reusing already-solved solutions (e.g., try algorithms X, Y, Z; plot the chart in 2D/3D, ...)
> What changed significantly in your workflow?
Hiring freeze, because leaders are not yet sure about the gains from AI. What if we hire a bunch of people and can't come up with projects for them (not because we are out of ideas, but because getting investment is hard if you are not an AI company) while the LLM is generating so much code?
> Are you 10x more efficient?
Not always, but I am filtering things out faster, which gives me the opportunity to get into the code concepts sooner (because AI summarizes a 10-page blog post for me before I read it).
Balgair 59 days ago [-]
My company isn't backfilling positions anymore and we had a ~10% company wide layoff about 2 weeks ago. My team was told to use AI to fill in the roles that were lost on the team [0].
What's changed in the workflow is a lot, really. We do a lot of documentation, so most of that boilerplate is now done via AI-based workflows. In the past, that would have been one of us copy-pasting from older documents for about a month. Now it takes seconds. Most of the content is still us and the other stakeholders, but the editing passes are mostly AI too now. Still, we very much need humans in the loop.
We don't use copilot as we're doing documentation, not code. We mostly use internal AIs that the company is building and then a vendor that supports workflow-style AI. So, like, iterative passes under the token limits for writing. These workflows do get pretty long, like 100+ steps, just to get to boilerplate.
We're easily 100x more efficient. Four of us can get a document done in a week that took the whole team years to do before.
The effort is more concentrated now. I can shepherd a document to near-final review with a meeting or two with the specialist engineers; that used to take many meetings with much of both teams. We were actually able to keep up and not fall behind for about 3 months. But management sees us as a big pointless cost center of silly legal compliance, so we're permanently doomed to never catch up. Whatever, still have a job for now.
I guess my questions back are:
- How do you think AI is going to change the other parts of your company than coding/engineering?
- Have you seen other non engineering roles be changed due to AI?
- What do your SOs/family think of AI in their lives and work?
- How fast do you think we're getting to the 'scary' phase of AI? 2 years? 20 years? 200 years?
[0] I try to keep this account anonymous as possible, so no, I'm not sharing the company.
nusl 59 days ago [-]
Salesforce is one company I'm aware of that announced this sort of thing.
I'm personally only more productive with the help of AI if one of the following conditions is met:
1. It's something I was going to type anyway, but I can just press Tab and/or make a minor edit
2. The code produced doesn't require many changes or much time to understand; in the cases where it has required many changes or deeper understanding, it probably would have been faster to just code it myself
Where it has been helpful, though, is in debugging errors or replacing search engines for help with docs or syntax. But sometimes it produces bullsh*t that doesn't exist, and this can lead you down a rabbit hole to nowhere.
More than once it's suggested something to me that solved all of the things I needed, only to realise none of it existed.
biggestdoofus 59 days ago [-]
I think it just depends on the complexity of what you are working with. For the simpler stuff it seems to work very well. However, it becomes a gigantic time sink if you try to use it for more complex tasks; you just go in circles while it has no real idea what to do. It's not just more complex code it struggles with; it can be simple code in complex systems as well, where a foundational understanding of the different parts is essential.
The people writing boring crud apps should be scared (but I think it's a failure in our industry that this is still a thing).
The technical debt that will be amassed by AI coding is worrying, however. Coworkers here routinely try to merge in stuff that is just absolute slop, and now I even have to argue with them because they think it's right simply because the AI wrote it...
anonzzzies 59 days ago [-]
For frontend, we don't need people anymore. For backend, especially complex stuff, LLMs really waste our time by not getting it and going in circles, so we just don't really use them there anymore. Plowing through hundreds of lines of code, or PRs that are wrong, don't conform to standards, include libs we don't need, etc., is pretty useless, as agents will just go in circles until things are so messy that it would have been many times easier to just write the code ourselves. And often it's hard to get out of that state other than by rolling back.
frabjoused 59 days ago [-]
You're definitely a backend engineer aren't you?
jjani 58 days ago [-]
Not OP, but most of my experience is in backend yet I find current LLMs better at backend code than at frontend code :)
jjani 59 days ago [-]
> For frontend, we don't need people anymore
Does this include UI design? We're finding tools like v0 decent, but nowhere near production design quality. Same for just using Claude or Gemini directly.
anonzzzies 59 days ago [-]
A designer does the UI design; I mean frontend as in code.
dimgl 59 days ago [-]
> For frontend, we don't need people anymore
We've found that most, if not all, models are extremely bad at writing frontend code. I really hope you all know what you're doing here; you could end up with unmaintainable, incomprehensible AI slop...
byoung2 59 days ago [-]
I found ChatGPT can write terrible code for the simplest React components but beautiful code for complex react-three-fiber components. I suppose that is because it was trained on beginner tutorials for basic components and on advanced 3D modeling examples for react-three-fiber.
For basic components, I've found that asking for more complexity (e.g., asking it to wrap your nav component in a React context or a custom hook) yields better overall code.
ojr 59 days ago [-]
My friend wants me to build his app for free because he heard the cost of software labor has gone to zero. After I told him the complexity was too high, he gave me ChatGPT responses, thinking they could help me add the features.
In a sprint-planning scenario, I think tasks that were 1, 2, 3, 5, 8, 13, etc. points get put down a notch with the advent of AI, nothing more. AI has not turned an 8-point task into a 3-point one at all. There is a 50/50 chance that an old pre-AI 8-point task remains 8 points, with it sometimes dropping to 5.
anant90 59 days ago [-]
The AI-powered "replacement of engineers" that everyone keeps talking about will look less like existing engineers being laid off, and more like reduced hiring of recent engineering graduates. And, as with any large-scale technology trend, it will take a while before we can say we've come out of the innovator/early adopter phase, which we are clearly still in. In my opinion, it's always easier to invent new technology than to get people to change the ways they currently do things.
MisterTea 59 days ago [-]
> As someone who
... accidentally hit reply before the post was ready?
vvojd 57 days ago [-]
There are two main areas where AI still differs from humans: long enough thought chains, and proper memory/learning.
On the former, you really have to consider a number of options when refactoring or adding to a codebase. On the latter, you may be able to get away with an extremely detailed manual, but ultimately a lot of day-to-day things aren't suitable for RAG.
So no, no one’s getting fired anytime soon.
fhd2 59 days ago [-]
Can't directly answer your question, since I'm not working at a company that makes any claims about hiring fewer human engineers because of automation.
But I think the central question is not how much of software development can be automated. It's rather how many engineers companies _believe_ they need.
Having spent some time in mid-sized companies adjacent to large companies, the sheer size of teams working on relatively simple stuff can be stunning. I think companies with a lot of money have overstaffed on engineers for at least a decade now. And the thing is: it kinda works. An individual or a small team can only go so far, and a good team can only grow at a certain rate. If you throw hundreds of engineers at something, they _will_ figure it out, even if you could theoretically do it with far fewer by optimising for quality hires and effective ways of working. That's difficult and takes time, so if you have the money for it, you can throw more bodies at it instead. You won't get it done cheaper, probably also not better, but most likely faster.
The mere _idea_ that LLMs can replace human engineers kinda resets this. The base expectation is now that you can do stuff with a fraction of the work force. And the thing is: You can, you always could, before LLMs. I've been preaching this for probably 20 years now. It's just that few companies dared to attempt it, investors would scoff at it, think you're being too timid. Now they celebrate it.
So like many, I think any claims of replacing developers with AI are likely cost savings in disguise, presented in a way the stock market might accept more than "it's not going so well, we're reducing investments".
All that aside, I also find it difficult as a layperson to separate the advent of coding LLMs from other, probably more consequential effects, like economic uncertainty. When the economy is stable, companies invest. When it's unstable, they wait.
rcarmo 58 days ago [-]
I want to read about AI replacing C-levels, because most of the non-coding output I see has the same verbosity and general vagueness that comes with too much abstraction.
orwin 59 days ago [-]
Let's say that as long as the complexity is low, it is really worth using AI, and you can be twice as effective, maybe more, because it can take care of most of the coding/testing/integration, which is 80% of a project, and can help you with the architecture part as long as it is easy/standard.
As the complexity grows, the usefulness of AI agents decreases: a lot, and quite fast.
In particular, integration of microservices is a really hard case to crack for any AI agent, as it often mixes training data with context data.
It is more useful in centralised apps, and especially for front-end dev, as long as you don't use finite state machines. I don't understand why, but even Claude/Cursor trips on otherwise really easy code (and btw, if you don't use state machines for your complex front-end code, you're doing it wrong).
As long as you know what your agent is shitty at, however, using AI is a net benefit: you don't lose time trying to communicate your needs, you just do it, so it is only gains and no losses.
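For readers unfamiliar with the FSM point: a minimal sketch of the kind of explicit state machine being advocated for front-end flows, in TypeScript (the state, event, and function names here are invented for illustration, not taken from any library):

```typescript
// Explicit finite state machine for a typical front-end data-fetching flow:
// idle -> loading -> success | failure, with a reset back to idle.
type State =
  | { kind: "idle" }
  | { kind: "loading" }
  | { kind: "success"; data: string }
  | { kind: "failure"; error: string };

type Event =
  | { kind: "FETCH" }
  | { kind: "RESOLVE"; data: string }
  | { kind: "REJECT"; error: string }
  | { kind: "RESET" };

// Every legal transition is enumerated; anything else is a no-op.
// A stray RESOLVE while idle is simply ignored instead of corrupting
// a tangle of boolean flags like isLoading/hasError/hasData.
function transition(state: State, event: Event): State {
  switch (state.kind) {
    case "idle":
      return event.kind === "FETCH" ? { kind: "loading" } : state;
    case "loading":
      if (event.kind === "RESOLVE") return { kind: "success", data: event.data };
      if (event.kind === "REJECT") return { kind: "failure", error: event.error };
      return state;
    case "success":
    case "failure":
      return event.kind === "RESET" ? { kind: "idle" } : state;
  }
}
```

Because every transition is enumerated in one place, a reviewer (human or AI) can check the table against the spec instead of chasing scattered flags.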
babyent 59 days ago [-]
I use AI/LLM to run thought experiments randomly or learn about topics. Sometimes I use it to help me with code (not to write code).
jakeoverflow 57 days ago [-]
AI is a great 20% productivity booster - but 20% and 20x are orders of magnitude apart.
teeray 59 days ago [-]
It will be interesting if and when these companies reach a “find out” stage with AI where their entire codebase is incomprehensible slop that not even the AIs can help them with.
notanitguy 58 days ago [-]
[dead]
kypro 59 days ago [-]
Depending on the project, Devin.ai should be able to replace ~50% of development. No one should be hiring junior devs in 2025, imo.
flanked-evergl 59 days ago [-]
And according to physics, if you put wings on pigs they should be able to fly.
kypro 59 days ago [-]
What's your argument? I assume you've used devin.ai then? What is and isn't it able to do in your honest experience? I'd be shocked if it can't do at least 20-30% of your development.
gauche 57 days ago [-]
[dead]
voidUpdate 59 days ago [-]
Well if nobody hires junior devs, how do you make more senior devs?
gorbachev 59 days ago [-]
Companies that do that are going to start "training" non-developers to use AI tools to "code".
Once this starts happening and senior developers in these companies are doing nothing but code reviewing PRs written by AI and fixing bugs in that code, they will leave and the company will have no developers.
TuringNYC 59 days ago [-]
> Well if nobody hires junior devs, how do you make more senior devs?
This is an excellent question for society to answer, and hopefully for policy-makers to think about. A challenge with capitalism as I see it practiced -- is that most for-profit orgs think quarter to quarter about earnings, costs, etc. They are not focused on second-order issues arising 5-10yrs later.
We've all seen this play out in our own lives -- with the gutting of American manufacturing...and the resulting discord a generation later.
kypro 59 days ago [-]
You don't need to. Most of the cost of senior developers can be cut soon too, when the AI improves. In ~2 years I'd guess you could cut 50% of your senior dev costs with AI.
johnbellone 59 days ago [-]
I laugh every time that I read these claims. How long have you been at it?
kypro 59 days ago [-]
If you're a senior dev and not already 20% more productive from AI, you're doing something wrong. In 2-3 years, 1 dev will easily be able to do what 2 devs could 5 years ago – e.g. you can halve your senior dev team and operate at the same velocity as you were previously able to. Given this (and other market factors) I don't see demand for senior devs being so high that a company would need to hire a junior dev because they are completely unable to hire a senior dev. The tech job market isn't likely to get that hot again in my opinion.
So yes, given this broadly speaking junior devs are not needed. If someone is a junior dev and can't find a job they'll need to prove they can function at a senior level if they want to be employable going forward. But this is basically the market today anyway.
But these are my predictions – you can disagree, and I'm sure I will be off to some degree, but I'd put money on being mostly correct in these claims.
johnbellone 59 days ago [-]
Productivity enhancements are one thing, but the full elimination of junior development roles is completely different. The dynamics also change with the size and scale of the team and company (components, services, multiple customers, etc).
The role will change and individuals will become more productive. These tools are impressive and moving in the direction of your prediction. But, personally, I think it is naive to think that the need for junior roles will be entirely eliminated in 5 years.
kypro 59 days ago [-]
What would a junior developer be doing? Genuinely wondering what you would pay a junior developer to do today which couldn't be done more cost-effectively with AI. If you're talking about someone with a bit of technical knowledge who's cheaper than hiring a senior dev to prompt/manage AI agents then, yeah, I suspect there will be people doing this, but I don't think these are junior developers.
In my opinion there would be no point in getting a junior developer to do anything right now, in the same way I'm not going to pay a rookie artist or web designer to do anything for me anymore, because I'd get better results from AI. Obviously companies which are not productivity and cost optimised might not care/realise they can do this right away (there will always be the odd inefficient hire here and there), but my guess is that 99.9% of these hires make no economic sense and will be so few and far between that the role will effectively be eliminated in place of something else. And this happens often in tech. I used to know "webmasters" who just did HTML/CSS. The web still runs on HTML/CSS, but those jobs no longer exist, and people who used to do that work are now doing other things. Again, why the hell would I pay someone to write HTML/CSS when there are plenty of WYSIWYGs and AI tools which could do a better job, cheaper and quicker?
johnbellone 58 days ago [-]
If we're being honest, by your statements alone I wouldn't consider you a senior developer in my organization. I pay them to learn my business, understand the requirements, and plan/build a solution within the scope of my company's means. The value, for me, is in the intersection of the institutional and technical knowledge of the areas of business. A senior developer needs to be able to mentor business/product/technical juniors about much more than the code they're working on at the moment.
Please don't take it as me attacking you. I'd actually love to have this conversation with you on my podcast (link in description). I think it'd be a great one!
kypro 57 days ago [-]
I was commenting during work hours the other day so I was quickly brain dumping and not considering my words as well as I would like.
Let me try to address what you're saying and see if we still disagree.
> I pay them to learn my business, understand the requirements, and plan/build a solution within the scope of my company's means.
So yeah, I do understand that this is what people pay their most senior developers to do. At a certain level you're not just paying someone to write good code, but also to be able to take loose user requirements and plan a technical solution.
Typically in technical teams you'll have three layers of expertise:
- First you have your lead developers & TAs who are responsible for making, documenting and communicating high-level architectural decisions. They'll also work with business/product people to take their requirements and formulate technical solutions with their tech team.
- Below this you have your "senior developers" who are typically going to be experts in their technologies and will know an area of the codebase very well. This allows them to take those high level technical solutions and produce high-quality code which satisfies the requirements without hand-holding.
- Then finally you have your "junior developers" who are generally going to be less experienced at software engineering and will have less experience with the technologies they're using. When given clear technical requirements they can produce workable code, but they will typically require some hand-holding from more senior members before their code is production-ready.
My argument is that tools like devin.ai make this last group redundant today. Once you have the technical requirements, devin.ai in almost all cases will be able to produce code equally as good or better than a junior developer.
devin.ai is also quite competitive with senior developers, because a lot of what senior developers do isn't that complicated. A senior developer might need to create a new database model and some queries, for example, and while they can do that very easily, generally speaking devin.ai will be faster and more cost-effective than getting a senior developer to write that code. When it comes to more complicated requirements, though, devin.ai will struggle today, and while it will generally produce 80% of the solution, you'll still need a senior developer to do that last 20%.
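To make the "new database model and some queries" chore concrete, here is a hedged sketch of that routine work (the table, columns, and helper are all invented for the example; this is not output from devin.ai or tied to any specific stack):

```typescript
// A new model plus parameterised SQL of the kind a dev writes routinely.
// Names are illustrative only.
interface Invoice {
  id: number;
  customerId: number;
  amountCents: number;
  issuedAt: string; // ISO 8601 timestamp
}

const CREATE_INVOICES = `
  CREATE TABLE IF NOT EXISTS invoices (
    id INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    amount_cents INTEGER NOT NULL,
    issued_at TEXT NOT NULL
  )`;

// Builds a parameterised INSERT so values are bound by the driver,
// never string-interpolated into the statement.
function insertSql(table: string, columns: string[]): string {
  const placeholders = columns.map(() => "?").join(", ");
  return `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${placeholders})`;
}
```

Well-trodden, low-context work like this is exactly the territory where coding agents are at their strongest.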
I agree with what you said about senior developers, although I would separate this group out further into senior developers and leads. If I was unclear, I don't think you can replace either with AI today and I don't think you'll be able to replace lead developers completely in < 3 years, but I do think senior developers will likely be replaceable in the vast majority of cases in 3-5 years.
I know some companies are still hiring juniors, but I'd argue if you're hiring junior developers today you're just not productivity and cost optimised. At the company I work at we don't have junior developers anymore because they slow us down. We actually tried to bring one in recently to help out with a few bits (more as a favour) and it was a waste of time.
It's not that junior developers are useless, but that in the vast majority of cases it's quicker, cheaper and easier to work with devin.ai than go back and forth with a junior developer.
> Please don't take it as me attacking you. I'd actually love to have this conversation with you on my podcast (link in description). I think it'd be a great one!
It's cool lol. I'm too autistic to care. It's good to attack if you feel strongly. I'd love to come on your podcast and would be interested to hear a debate on this, but I struggle with speech so it just wouldn't work.
I'm kinda surprised people are even finding what I'm saying controversial to be honest. I'd be interested if those disagreeing are even using AI in development stacks.
ukuina 58 days ago [-]
> What would a junior developer be doing? Genuinely wondering what you would pay a junior developer to do today which couldn't do more cost effectively with AI?
Learn. We pay them to learn.
voidUpdate 59 days ago [-]
Well, hopefully the number of companies needing senior devs will double in 2-3 years then, otherwise we won't be able to find a job, since we'll have been automated away.
looofooo0 59 days ago [-]
Jevons paradox here, if you increase the efficiency of anything, the demand goes up, not down. So any Dev-AI Cyborg will be in hot demand.
daniel_iversen 59 days ago [-]
I agree the parent statement seemed a bit "incomplete", high-level and hasty; however, it's not inconceivable to me that AI will continue to help even senior devs be way more productive, and even doubling productivity (or more) feels very plausible.
Imo this is a false victory, emphasis should be on shipping. Although each domain / pipeline / field needs and prioritises different things and rightfully so. AI lets me ship so much faster and for me that means $$$.
I think I am a realist and your last point about “engineering” - is a contradiction. Maybe try better tools? Lastly:
“While the problem of AI can be viewed as, “Which of all the things humans do can machines also do?,” I would prefer to ask the question in another form: “Of all of life’s burdens, which are those machines can relieve, or significantly ease, for us?”
Richard Hamming, pg.43 The Art of Doing Science and Engineering: Learning to Learn
How often do you grok a new code base per year? If that's the core of your work, then yes, you benefit from AI much more than some other engineers.
Every situation is unique for sure.
> I would class myself a mid to high skill dev
It's not about your skill level, rather about the nature of your job (working on a single product, an outsourcing company with time-framed projects, R&D, etc.)
It's like saying "robots are replacing civil engineers". Asphalt laying is maybe 10% of the work required in commissioning a road. Deciding whether to build a road at all, the costs, where to build it, the math: all of that needs to be done by a civil engineer.
The bulk of software engineering is feasibility study, requirements gathering, and detailed design (architecture), then finally the implementation phase, where AI comes in.
Those stages are in order of importance. Getting it wrong, especially in the first two, results in a high-quality shiny white elephant at best.
The implementation phase is at most 20%, but on average 10%, of the work required to commission reliable, maintainable software.
I think those stakeholders are the true engine of promoting AI.
Or maybe at these companies the product is pretty stable, or you're in an area where it's more optimization than feature building?
Because if the other 90% is spent well enough, you do the right thing in the remaining 10%.
Just try to work in a company with 100+ engineers and an at least few-years-old profitable product with real customers, and you'll get it.
A large chunk of the work is dealing with people, understanding what they really want/need, and helping them understand it.
On the technical side, most of the work is around fixing issues with existing software (protecting an investment).
Then, maybe 1 to 10% of the workload is making something new.
AI kinda works for the "making something new" part but sucks at the rest. And when it works, it's at most "average" (in the sense of how good its training set was; it prefers things it sees more commonly, regardless of quality).
My gut instinct is that there's going to be an AI crash, much like in the late 90s/early 2000s. Too much hype, and then, after the crash, maybe we'll start to see something a bit more sane and realistic.
If the answer is related to extensive testing, who verifies the model-generated tests?
Given that a nontechnical PM would neither be able to inspect the system code nor its tests, this is the part that does not add up for me. It seems at least one person still has to really understand the “hard part” of computing as it relates to their domain.
In many ways LLMs feel like the next iteration of search engines: they’re easier to use, you can ask follow up questions or for examples and get an immediate response tailored to your scenario, you can provide the code and get a response for what the issue is and how to fix it, you can let it read internal documentation and get specialized support that wouldn’t be on the internet, you can let it read whole code bases and get reasonable answers to queries about said code, etc.
I don’t really see LLMs automating engineers end-to-end any time soon. They really are incapable of deductive reasoning, the extent to which they are is emergent from inductive phenomena, and breaks down massively when the input is outside the training distribution (see all the examples of LLMs failing basic deductive puzzles that are very similar to a well known one, but slightly tweaked).
Reading, understanding, and checking someone else’s code is harder than writing it correctly in the first place, and letting LLMs write entire code bases has produced immense garbage in all the examples I’ve seen. It’s not even junior level output, it’s something like _panicked CS major who started programming a year ago_ level output.
Eventually I think AI will automate software engineering, but by the time it’s capable of doing so _all_ intellectual pursuits will be automated because it requires human level cognition and adaptability. Until then it’s a moderate efficiency improvement.
As part of this AI-first shift, all engineers now have access to Cursor, and we’re still figuring out how to integrate it. We just started defining .cursorrules files for projects.
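For context, a .cursorrules file is plain-text, project-specific instructions that Cursor includes in the model's context. A hypothetical example of the kind of rules a team might start with (contents invented for illustration):

```
# Hypothetical .cursorrules for a TypeScript service
- Use the existing logger; never call console.log directly.
- Every new endpoint needs an integration test under tests/api/.
- Prefer small pure functions; do not add new dependencies without discussion.
- Match the surrounding code style; do not reformat unrelated files.
```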
What’s been most noticeable is how quickly some people rely too much on AI outputs, especially the first pass. I’ve seen PRs where it’s obvious that the generated code wasn’t even run or reviewed. I know this is part of the messy adjustment period, but right now, it feels like I’m spending more time reviewing and cleaning up code than I did before.
We are a team of 5, down from 8 a few months ago, and we are working on more stuff. I would not be able to survive without AI writing some queries and scripts for me. It really saves a ton of time.
Would be great to see some industry-wide stats here. There are three OTHER factors at play here:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
4. AI (???)
Not sure how much can be attributed to AI. That said, I'd confidently say our team is at least 2x more productive than 3 yrs ago. Huge amounts of loose change get thrown at the LLM to solve, instead of writing clever algos, etc.
Things are getting back to normal, actually, and the companies who are embarrassed to be making cutbacks are saying it's actually because they're using AI, not because they over-hired.
Also, I get dozens of calls/emails a month from my undergrad/grad alma mater. The bottom seems to have fallen out of the labor market when even Ivy League and top-5 CS/tech schools have students desperately seeking entry-level jobs.
To be fair, as I mentioned on another comment, there are other factors:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
I'm not saying everyone should do this. Just wanted to give another perspective.
So instead of seeing a mass drop in job openings, you will see companies that are not bottlenecked by org issues start to move very fast. In general that will create new markets and have a positive effect on jobs.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
Duolingo needs better content, not a faster way of producing the same stuff.
The leading edge models surpass humans in some ways, but still make weird oversights routinely. I think the models will continue to get bigger and have more comprehensive world models and the remaining brittleness will go away over the next few years.
We are early on in a process that will go from only a few jobs to almost all (existing) jobs very quickly as the models and tools continue to rapidly improve.
For throwaway prototypes, coding and design assistance, I can see it being leveraged very effectively, but for mission-critical software systems I can’t see it going this route or we’ll have some very big problems incoming
[1] https://en.wikipedia.org/wiki/Induced_demand
But the end result will be, once (if) the economy becomes healthy again, businesses will just become more ambitious and software get more hardware intensive and slower. Same ole same ole.
I don't see any AI yet anywhere near good enough to literally do a person's job.
But I can easily see it making someone, say 20%-50% more effective, based on my own experience using it for coding, data processing, lots of other things.
So now you need 8 people instead of 10 people to do a job.
That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
That's exactly what people mean by "replacing".
I would agree about needing fewer heads for certain types of roles, and I could even buy that tech staff would hardly ever need to handwrite code directly.
For serious projects where critical data, physical safety, etc for end users are at stake, I still don’t see the path toward simply having no in-house engineer to certify changes generated by an LLM
To argue against myself though - it might just mean more/better code is written by the same number of engineers.
If code gets cheaper, people will use more of it
Maybe I’m just stating the obvious…
Before LLMs got good enough, there were projects I would scope with the expectation of having one junior consultant do the coding grunt work - simple Lambdas, Python utility scripts, bash scripts, infrastructure as code, translating some preexisting code to the target language of the customer.
This is the perfect use case for ChatGPT. It’s simple well contained work that can fit in its context window, the AWS SDKs in various languages are well documented, there is plenty of sample code, and it’s easy enough to test.
I can tell it to “verify all AWS SDK functions on the web” or give it the links to newer SDK functionality.
I don’t really ever need a junior developer for anything. If I have to be explicit about the requirements anyway, I can use an LLM.
And before the gatekeeping starts: I've been coding as a hobby since 1986, when I started in assembly language, and have been coding professionally since 1996.
I don't think it is possible NOW.
But for specific areas, the productivity gain you get from a single developer with an LLM is much higher than before. Some areas where I see it shining:
> What changed significantly in your workflow?
Hiring freeze, because leaders are not sure yet about the gains from AI. What if we hire a bunch of people and can't come up with projects for them (not because we are out of ideas, but because getting investment is hard if you are not an AI company), while the LLM is generating so much code?
> Are you 10x more efficient?
Not always, but I am filtering things out faster, which gives me the opportunity to get into the code concepts sooner (because AI summarizes the 10-page blog post for me before I read it).
What's changed in the workflow is a lot, really. We do a lot of documentation, so most of that boilerplate is now done via AI-based workflows. In the past, that would have been one of us copy-pasting from older documents for about a month. Now it takes seconds. Most of the content is still us and the other stakeholders. But the editing passes are mostly AI too now. Still, we very much need humans in the loop.
We don't use copilot as we're doing documentation, not code. We mostly use internal AIs that the company is building and then a vendor that supports workflow-style AI. So, like, iterative passes under the token limits for writing. These workflows do get pretty long, like 100+ steps, just to get to boilerplate.
We're easily 100x more efficient. Four of us can get a document done in a week that took the whole team years to do before.
The effort is more concentrated now. I can shepherd a document to near-final review with a meeting or two with the specialist engineers; that used to take many meetings with much of both teams. We were actually able to keep up and not fall behind for about 3 months. But management sees us as a big pointless cost center of silly legal compliance, so we're permanently doomed to never get caught up. Whatever, still have a job for now.
I guess my questions back are:
- How do you think AI is going to change the other parts of your company than coding/engineering?
- Have you seen other non engineering roles be changed due to AI?
- What do your SOs/family think of AI in their lives and work?
- How fast do you think we're getting to the 'scary' phase of AI? 2 years? 20 years? 200 years?
[0] I try to keep this account anonymous as possible, so no, I'm not sharing the company.
I'm personally only more productive with the help of AI if one of the following conditions is met:
1. It's something I was going to type anyway, but I can just press Tab and/or make a minor edit
2. The code produced doesn't require many changes or much time to understand, as the times where it has required many changes or deeper understanding would probably have been faster to just code myself
Where it has been helpful, though, is debugging errors or replacing search engines for helping out with docs or syntax. But, sometimes it produces bullsh*t that doesn't exist and this can lead you down a rabbithole to nowhere.
More than once it's suggested something to me that solved all of the things I needed, only to realise none of it existed.
The people writing boring crud apps should be scared (but I think it's a failure in our industry that this is still a thing).
The technical debt that will be amassed by AI coding is worrying, however. Coworkers here routinely try to merge in stuff that is just absolute slop, and now I even have to argue with them on the basis that they think it's right because the AI wrote it...
Does this include UI design? We're finding tools like v0 decent, but nowhere near production design quality. Same for just using Claude or Gemini directly.
We've found that most, if not all, models are extremely bad at writing frontend code. I really hope you all know what you're doing here; you could end up with unmaintainable, incomprehensible AI slop...
For basic components, I've found that asking for more complexity (e.g. asking it to wrap your nav component in a React context or a custom hook) yields better overall code.
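One reading of that tip: asking for an explicit structure (context, hook, store) gives the model a well-known shape to fill in. A framework-agnostic sketch of nav state lifted into a tiny store that a React context provider or custom hook would then wrap (all names here are invented for illustration):

```typescript
// Tiny observable store for nav state; a useNav() hook or context
// provider would subscribe to this rather than holding ad-hoc state.
type NavState = { open: boolean; activeSection: string | null };

function createNavStore(initial: NavState) {
  let state = initial;
  const listeners = new Set<(s: NavState) => void>();
  const notify = () => listeners.forEach((l) => l(state));
  return {
    getState: () => state,
    toggle() {
      state = { ...state, open: !state.open };
      notify();
    },
    setSection(section: string) {
      // Selecting a section also opens the nav.
      state = { ...state, activeSection: section, open: true };
      notify();
    },
    subscribe(l: (s: NavState) => void) {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe
    },
  };
}
```

Centralising the updates like this keeps the component tree dumb, which in my experience is also the shape models reproduce most reliably.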
In a sprint planning scenario, I think tasks that were 1,2,3,5,8,13, etc., get put down a notch, nothing more, with the invention of AI. AIs have not made an 8-point task into a 3-point one at all. There is a 50/50 chance that an old 8-point task before AI remains 8 points, with it sometimes dropping to 5.
... accidentally hit reply before the post was ready?
On the first, you really have to consider a number of options when refactoring or adding to a codebase. On the latter, you may be able to get away with having an extremely detailed manual but ultimately a lot of day to day things aren’t suitable to a RAG.
So no, no one’s getting fired anytime soon.
But I think the central question is not how much of software development can be automated. It's rather how many engineers companies _believe_ they need.
Having spent some time in mid sized companies adjacent to large companies, the sheer size of teams working on relatively simple stuff can be stunning. I think companies with a lot of money have overstaffed on engineers for at least a decade now. And the thing is: It kinda works. An individual or a small team can only go so far, a good team can only grow at a certain rate. If you throw hundreds of engineers at something, they _will_ figure it out, even if you could theoretically do it with far less, by optimising for quality hires and effective ways of working. That's difficult and takes time, so if you have the money for it, you can throw more bodies at it instead. You won't get it done cheaper, probably also not better, but most likely faster.
The mere _idea_ that LLMs can replace human engineers kinda resets this. The base expectation is now that you can do stuff with a fraction of the work force. And the thing is: You can, you always could, before LLMs. I've been preaching this for probably 20 years now. It's just that few companies dared to attempt it, investors would scoff at it, think you're being too timid. Now they celebrate it.
So like many, I think any claims of replacing developers with AI are likely cost savings in disguise, presented in a way the stock market might accept more than "it's not going so well, we're reducing investments".
All that aside, I also find it difficult as a layperson to separate the advent of coding LLMs from other, probably more consequential effects, like economic uncertainty. When the economy is stable, companies invest. When it's unstable, they wait.
As the complexity grow, the usefulness of AI agents decrease: a lot, and quite fast.
In particular, integration of microservices are a really hard case to crack for any AI agent as it often mix training data with context data.
It is more useful in centralised apps, and especially for front dev, as long as you don't use finite state machines. I don't understand why, even Claude/Cursor trip on otherwise really easy code (and btw if you don't use state machines for your complex front end code, you're doing it wrong).
As long as you know what your agent is shitty at however, using AI is a net benefit as you don't loose time trying to communicate your needs and just do it, so it is only gains and no loses.
Once this starts happening and senior developers in these companies are doing nothing but code reviewing PRs written by AI and fixing bugs in that code, they will leave and the company will have no developers.
This is an excellent question for society to answer, and hopefully for policy-makers to think about. A challenge with capitalism as I see it practices -- is that most for-profit orgs think quarter to quarter about earnings, costs, etc. They are not focused on second order issues arising 5-10yrs later.
We've all seen this play out in our own lives -- with the gutting of American manufacturing...and the resulting discord a generation later.
So yes, given this broadly speaking junior devs are not needed. If someone is a junior dev and can't find a job they'll need to prove they can function at a senior level if they want to be employable going forward. But this is basically the market today anyway.
But these are my predictions – you can disagree and I'm sure I will be out to some degree, but I'd put money on being mostly correct in these claims.
The role will change and individuals will become more productive. These tools are impressive and moving in the right direction to your prediction. But, personally, I think it is naive to think that the need for junior roles will be entirely eliminated in 5 years.
In my opinion there would be no point in getting a junior developer to do anything right now, in the same way I'm not going to pay a rookie artist or web designer to do anything for me anymore, because I'd get better results from AI. Obviously companies which are not productivity and cost optimised might not care/realise they can do this right away (there will always be the odd inefficient hire here and there), but my guess is that 99.9% of these hires make no economic sense and will be so few and far between that the role will effectively be eliminated in place of something else. And this happens often in tech. I used to know "webmasters" who just did HTML/CSS. The web still runs on HTML/CSS, but those jobs no longer exist and people who used to do that work are now doing other things. Again, why the hell would I pay someone to write HTML/CSS when there are plenty of WYSIWYGs and AI tools which could do a better job, cheaper and quicker?
Please don't take it as me attacking you. I'd actually love to have this conversation with you on my podcast (link in description). I think it'd be a great one!
Let me try to address what you're saying and see if we still disagree.
> I pay them to learn my business, understand the requirements, and plan/build a solution within the scope of my company's means.
So yeah, I do understand that this is what people pay their most senior developers to do. At a certain level you're not just paying someone to write good code, but also to be able to take loose user requirements and plan a technical solution.
Typically in technical teams you'll have three layers of expertise:
- First you have your lead developers & TAs who are responsible for making, documenting and communicating high-level architectural decisions. They'll also work with business/product people to take their requirements and formulate technical solutions with their tech team.
- Below this you have your "senior developers" who are typically going to be experts in their technologies and will know an area of the codebase very well. This allows them to take those high level technical solutions and produce high-quality code which satisfies the requirements without hand-holding.
- Then finally you have your "junior developers" who are generally going to be less experienced at software engineering and will have less experience with the technologies they're using. When given clear technical requirements they can produce workable code, but they will typically require some hand-holding from more senior members before their code is production-ready.
My argument is that tools like devin.ai make this last group redundant today. Once you have the technical requirements, devin.ai will in almost all cases be able to produce code as good as or better than a junior developer's.
devin.ai is also quite competitive with senior developers, because a lot of what senior developers do isn't that complicated. A senior developer might need to create a new database model and some queries, for example, and while they can do that very easily, generally speaking devin.ai will be faster and more cost-effective than getting a senior developer to write that code. When it comes to more complicated requirements, though, devin.ai will struggle today: while it will generally produce 80% of the solution, you'll still need a senior developer to do that last 20%.
I agree with what you said about senior developers, although I would separate this group out further into senior developers and leads. If I was unclear, I don't think you can replace either with AI today and I don't think you'll be able to replace lead developers completely in < 3 years, but I do think senior developers will likely be replaceable in the vast majority of cases in 3-5 years.
I know some companies are still hiring juniors, but I'd argue if you're hiring junior developers today you're just not productivity and cost optimised. At the company I work at we don't have junior developers anymore because they slow us down. We actually tried to bring one in recently to help out with a few bits (more as a favour) and it was a waste of time.
It's not that junior developers are useless, but that in the vast majority of cases it's quicker, cheaper and easier to work with devin.ai than go back and forth with a junior developer.
> Please don't take it as me attacking you. I'd actually love to have this conversation with you on my podcast (link in description). I think it'd be a great one!
It's cool lol. I'm too autistic to care. It's good to attack if you feel strongly. I'd love to come on your podcast and would be interested to hear a debate on this, but I struggle with speech so it just wouldn't work.
I'm kinda surprised people are even finding what I'm saying controversial to be honest. I'd be interested if those disagreeing are even using AI in development stacks.
Learn. We pay them to learn.