I see much more hysteria and false but extremely high hopes than the real deal from where I sit (developer at a FAANG-like high-tech company).
Looks like the higher the management, the farther away from real engineering work — the more excitement there is and the less common sense and real understanding of how developers and llms work.
> Are you 10x more efficient?
90% of my time is spent thinking and talking about the problem and solutions. 10% is spent coding (sometimes 1%, with 9% spent integrating it into existing infrastructure and processes). Even with an ideal AGI coding agent I'd be only 10% more efficient.
Imagine a very bright junior developer. You are still heavily time-taxed mentoring and communicating with them.
Not many non-technical people (to my surprise) get it.
Based on posts and comments here, there are plenty of “technical enough” people who don’t understand the essence of engineering work (software engineering in particular).
Spitting out barely (yet) working, throwaway-grade code is an impressive accomplishment for TikTok, but it has very little to do with the complex, business-critical software most real engineers deal with every day.
phyalow 5 hours ago [-]
On the contrary: I would class myself as a mid-to-high-skill dev; I have a CS degree and about 10 years of Java/C++/Rust/Python under my belt (focused on financial-market applications).
I would consider myself today 2-3x more effective than where I was 12 months ago.
I can grok a new code base much faster by having an AI explain things to me that previously only a greybeard could. I can ask Gemini 2.5 (1M context length) crazy things like “please create a sprint program for new feature xyz” and get really good, high-quality answers. Even crazier, I can feed those sprints to Claude Code (CI/CD tests all running) and it will do a very good job of implementing. My other option is to farm those sprints out to the human dev resources I have at hand and then spend 90% of my time “thinking, hand holding and talking about code and solutions” and working with other devs to get code into prod.
Imo this is a false victory; the emphasis should be on shipping. Although each domain / pipeline / field needs and prioritises different things, and rightfully so. AI lets me ship so much faster, and for me that means $$$.
I think I am a realist, and your last point about “engineering” is a contradiction. Maybe try better tools? Lastly:
“While the problem of AI can be viewed as, ‘Which of all the things humans do can machines also do?’, I would prefer to ask the question in another form: ‘Of all of life’s burdens, which are those machines can relieve, or significantly ease, for us?’”
Richard Hamming, p. 43, The Art of Doing Science and Engineering: Learning to Learn
moribvndvs 1 hour ago [-]
Are you able to qualify a “2-3x” improvement? That’s an honest question. The anecdotes out there are all over the place and don’t match up with my own experience or that of my peers. I’ve only seen a marginal uplift, which includes productivity offsets caused by mistakes and hallucinations, not only in my own work, but in LLM-assisted output from coworkers.
aristofun 4 hours ago [-]
> I can grok on a new code bases much faster
How often do you grok a new code base per year? If that's the core of your work, then yes, you benefit from AI much more than some other engineers.
Every situation is unique for sure.
> I would class myself a mid to high skill dev
It's not about your skill level, but rather about the nature of your job (working on a single product, an outsourcing company with time-framed projects, R&D, etc.).
markus_zhang 5 hours ago [-]
It makes sense. The business stakeholders always want to get things done ASAP, and they don't really care about how it is done. This is especially true if the stakeholders want to do many one-time trials.
I think those stakeholders are the true engine of promoting AI.
ookblah 5 hours ago [-]
okay, but code still has to be written. you can be a master architect, but if the codebase requires X lines they have to come from somewhere. i'm just having a hard time grokking how you can spend 1-10% of your time coding and actually ship anything at speed, especially if you imply you're not far away from the real engineering work.
or maybe at these companies the product is pretty stable or you're in an area where it's more optimizations vs. feature building?
aristofun 4 hours ago [-]
> i'm just having a hard time grokking how you you can spend 1-10% of your time coding and actually ship anything at speed
Because if the other 90% is spent well enough, you do the right thing in the remaining 10%.
Just try working in a company with 100+ engineers and a profitable product that is at least a few years old with real customers, and you'll get it.
juancn 11 minutes ago [-]
I see the same mistake made everywhere: thinking that the hard part of software engineering is writing new code.
A large chunk of the work is dealing with people: understanding what they really want/need and helping them understand it.
On the technical side, most of the work is around fixing issues with existing software (protecting an investment).
Then, maybe 1 to 10% of the workload is making something new.
AI kinda works for the "making something new" part but sucks at the rest. And when it works, it's at most "average" (in the sense of how good its training set was; it prefers the things it sees more commonly, regardless of quality).
My gut instinct is that there's going to be an AI crash, much like in the late 90s/early 2000s. Too much hype, and then, after the crash, maybe we'll start to see something a bit more sane and realistic.
byoung2 5 hours ago [-]
I am a tech lead and while I don't use AI at my company (Disney) for writing code, since I have a team of contractors at my disposal, in my spare time I am working on a side project where ChatGPT is writing all of the code. It is an experiment mainly to see if it can be done. I am getting better at writing prompts to get better results, but I don't think we are at the point where a nontechnical project manager could get good results. I feel like a lead or senior can use AI to replace interns and juniors, but more likely it is currently being used to make them more productive rather than replace people. It will be interesting to see the next few years, when it will be possible for one lead or senior to do the work of entire teams.
frank_nitti 4 hours ago [-]
I wonder how a nontechnical PM would ever be able to evaluate the outputs well enough for their business to deploy model-generated code into production where end users’ safety/privacy/etc are at stake.
If the answer is extensive testing, who verifies the model-generated tests?
Given that a nontechnical PM would neither be able to inspect the system code nor its tests, this is the part that does not add up for me. It seems at least one person still has to really understand the “hard part” of computing as it relates to their domain.
byoung2 3 hours ago [-]
Yeah, I often catch bugs in AI-generated code, and AI generally just codes for the happy path. There’s a big chance a nontechnical person wouldn’t think of the edge cases a seasoned engineer would.
markus_zhang 5 hours ago [-]
For now it's more like AI boosting productivity so the company doesn't have to hire more.
We are a team of 5, down from 8 a few months ago, and we are working on more stuff. I would not be able to survive without AI writing some queries and scripts for me. It really saves a ton of time.
baq 6 hours ago [-]
Regardless of its realized effectiveness improvements, it froze the intern/junior hiring pipelines.
TuringNYC 5 hours ago [-]
> Regardless of its realized effectiveness improvements it froze the intern/junior hiring pipelines.
Would be great to see some industry-wide stats here. There are three OTHER factors at play here:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
4. AI (???)
Not sure how much can be attributed to AI. That said, I'd confidently say our team is at least 2x more productive than 3 years ago. Huge amounts of loose-change work get thrown at the LLM to solve, instead of writing clever algos, etc.
marifjeren 5 hours ago [-]
a fifth plausible force here is that tech companies simply hired too aggressively during the zero-interest-rate, COVID-digitalization engineering hiring rush of 2021-2023.
things are actually getting back to normal, and the companies that are embarrassed to be making cutbacks are saying it's because they're using AI, not because they over-hired.
pajamasam 5 hours ago [-]
The company I used to work for did the opposite, freezing senior+ hiring. Their argument was that juniors are better value for money.
TuringNYC 5 hours ago [-]
I've seen the opposite -- more senior hiring because seniors can now outsource a portion of rote work to the LLM -- everything from trivial utilities to docs to testing.
Also, I get dozens of calls/emails a month from my undergrad/grad alma mater. The bottom seems to have fallen out of the labor market when even Ivy League and top-5 CS/tech schools have students desperately seeking entry-level jobs.
To be fair, as I mentioned on another comment, there are other factors:
1. Record numbers of CS undergrads (more supply)
2. More remote-CS/tech grad programs (yet more supply, many overseas)
3. Bursting of the tech-vc bubble (less demand)
bpt3 4 hours ago [-]
That is a poor argument, regardless of the salary. Juniors are a net loss for at least the first year in almost any environment.
WalterGR 5 hours ago [-]
Where?
chrisgd 6 hours ago [-]
Duolingo made this announcement today about replacing contractors with AI: https://www.theverge.com/news/657594/duolingo-ai-first-repla...
Those are content-creation jobs, not engineering. Which for Duolingo is very low quality and repetitive anyway.
Duolingo needs better content, not a faster way of producing the same stuff.
ilaksh 5 hours ago [-]
I use AI as much as possible for programming, but the specific word "replacing" is not quite accurate yet. The confusing thing for people is that it probably will be at full-replacement level within a couple of years.
The leading-edge models surpass humans in some ways but still routinely make weird oversights. I think the models will continue to get bigger and have more comprehensive world models, and the remaining brittleness will go away over the next few years.
We are early on in a process that will go from only a few jobs to almost all (existing) jobs very quickly as the models and tools continue to rapidly improve.
frank_nitti 5 hours ago [-]
If there are no human software engineers, who will be legally and financially responsible for the code that is shipped into production? Will OpenAI, Anthropic et al assume responsibility for damages when critical systems fail for any of their users, or will it be the non-SWE user who gave the prompt(s)?
markus_zhang 5 hours ago [-]
I just pray I still have work in the next 10 years. Then I'll semi-retire and be done with it.
joshuanapoli 5 hours ago [-]
In software development, we usually don’t have a firm scope that gets completed in a clean way. So when developers get more efficient (from high-level languages, OOP, Agile, the Internet, AI, etc.), I think we normally slide into a bigger scope rather than finishing sooner or reducing team size. Everyone usually gets the same boost from a coding-productivity innovation at about the same time, so team size for products in competitive markets isn’t affected much by productivity. What accelerates is the improvements customers receive, not developer job cuts.
jamil7 3 hours ago [-]
This is kind of my take as well. Most places I've worked at, including my current one, have more work than the team can get done; features and fixes get cut or deprioritised all the time to try to release at a reasonable cadence. If the product you're selling is software, then to me it makes sense that you'd not cut anyone from a software engineering team if your margins suddenly get better via LLM productivity gains. Rather, you can argue to increase team size, because increased velocity is a competitive advantage. On the other hand, if you work somewhere where software is not the end product but a support function, you might be seen as a cost centre, in which case LLM productivity gains could be seen as a means of freezing hiring or reducing headcount.
due-rr 5 hours ago [-]
Just like with traffic: you build more roads, but congestion stays the same. It's induced demand [1].
[1] https://en.wikipedia.org/wiki/Induced_demand
My company isn't backfilling positions anymore and we had a ~10% company wide layoff about 2 weeks ago. My team was told to use AI to fill in the roles that were lost on the team [0].
What's changed in the workflow? A lot, really. We do a lot of documentation, and most of that boilerplate is now done via AI-based workflows. In the past, that would have been one of us copy-pasting from older documents for about a month; now it takes seconds. Most of the content is still us and the other stakeholders, but the editing passes are mostly AI now too. Still, we very much need humans in the loop.
We don't use Copilot, as we're doing documentation, not code. We mostly use internal AIs that the company is building, plus a vendor that supports workflow-style AI. So, like, iterative passes under the token limits for writing. These workflows do get pretty long, like 100+ steps, just to get to boilerplate.
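For readers curious what "iterative passes under the token limits" might look like mechanically, here is a minimal sketch under stated assumptions: editChunk is a hypothetical stand-in for whatever call the internal tooling actually makes, and the token estimate is deliberately crude.

```ts
// Hypothetical sketch of a chunked, multi-pass document-editing workflow.
const TOKEN_LIMIT = 4000;
const approxTokens = (text: string): number => Math.ceil(text.length / 4); // crude estimate

// Split on paragraph boundaries so each chunk stays under the model's limit.
function splitUnderLimit(doc: string): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of doc.split("\n\n")) {
    if (current && approxTokens(current + "\n\n" + para) > TOKEN_LIMIT) {
      chunks.push(current);
      current = "";
    }
    current = current ? current + "\n\n" + para : para;
  }
  if (current) chunks.push(current);
  return chunks;
}

// Hypothetical model call: applies one editing instruction to one chunk.
declare function editChunk(chunk: string, instruction: string): Promise<string>;

// One pass per instruction, chunk by chunk; 100+ instructions = 100+ steps.
async function runPasses(doc: string, instructions: string[]): Promise<string> {
  let text = doc;
  for (const instruction of instructions) {
    const edited = await Promise.all(
      splitUnderLimit(text).map((chunk) => editChunk(chunk, instruction))
    );
    text = edited.join("\n\n");
  }
  return text;
}
```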
We're easily 100x more efficient. Four of us can get a document done in a week that took the whole team years to do before.
The effort is more concentrated now. I can shepherd a document to near-final review with a meeting or two with the specialist engineers; that used to take many meetings with much of both teams. We were actually able to keep up and not fall behind for about 3 months. But management sees us as a big, pointless cost center of silly legal compliance, so we're permanently doomed to never get caught up. Whatever, still have a job for now.
I guess my questions back are:
- How do you think AI is going to change the other parts of your company than coding/engineering?
- Have you seen other non engineering roles be changed due to AI?
- What do your SOs/family think of AI in their lives and work?
- How fast do you think we're getting to the 'scary' phase of AI? 2 years? 20 years? 200 years?
[0] I try to keep this account anonymous as possible, so no, I'm not sharing the company.
variadix 4 hours ago [-]
Not directly, but I wouldn’t be surprised if there’s enough of an efficiency improvement to obviate hiring an engineer or two (across 100+ people). In the same way that Google and StackOverflow made people more efficient when compared to having to otherwise search through and read physical documentation (to debug, to understand some API or hardware thing), LLMs have made me more efficient by being able to get tailored answers to my questions without having to do as much searching or reading. They can provide small code examples as clarification too.
In many ways LLMs feel like the next iteration of search engines: they’re easier to use, you can ask follow up questions or for examples and get an immediate response tailored to your scenario, you can provide the code and get a response for what the issue is and how to fix it, you can let it read internal documentation and get specialized support that wouldn’t be on the internet, you can let it read whole code bases and get reasonable answers to queries about said code, etc.
I don’t really see LLMs automating engineers end-to-end any time soon. They really are incapable of deductive reasoning; the extent to which they manage it is emergent from inductive phenomena and breaks down massively when the input is outside the training distribution (see all the examples of LLMs failing basic deductive puzzles that are very similar to a well-known one, but slightly tweaked).
Reading, understanding, and checking someone else’s code is harder than writing it correctly in the first place, and letting LLMs write entire code bases has produced immense garbage in all the examples I’ve seen. It’s not even junior level output, it’s something like _panicked CS major who started programming a year ago_ level output.
Eventually I think AI will automate software engineering, but by the time it’s capable of doing so _all_ intellectual pursuits will be automated because it requires human level cognition and adaptability. Until then it’s a moderate efficiency improvement.
gitfan86 5 hours ago [-]
Most but not all companies are bottlenecked by organizational issues, not the speed of completing a Jira ticket. A lot of those companies have moats or sales issues that prevent competitors from easily taking market share.
So instead of seeing a mass drop in job openings, you will see companies that are not bottlenecked by org issues start to move very fast. In general that will create new markets and have a positive effect on jobs.
jonplackett 5 hours ago [-]
Is 'replacing' the right way to think of it though?
I don't see any AI yet anywhere near good enough to literally do a person's job.
But I can easily see it making someone, say 20%-50% more effective, based on my own experience using it for coding, data processing, lots of other things.
So now you need 8 people instead of 10 people to do a job.
That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
bondarchuk 5 hours ago [-]
>That's still 2 people who won't be employed, but they haven't been 'replaced' by AI in the way people seem to think they will be.
That's exactly what people mean by "replacing".
jonplackett 2 hours ago [-]
I don’t think they do. I think people feel like they’re somehow ‘safe’ because AI can’t do ALL of their job. My point is that it doesn’t have to, and it can still screw up the economics of your job.
Maybe I’m just stating the obvious…
frank_nitti 4 hours ago [-]
There are several commenters in this thread who actually envision having no software engineering team at all: not a single person on staff who could read and understand application code.
I would agree about needing fewer heads performing certain types of roles, and I could even buy that tech staff would hardly ever need to handwrite code directly.
For serious projects where critical data, physical safety, etc. are at stake for end users, I still don’t see the path toward simply having no in-house engineer to certify changes generated by an LLM.
jonplackett 2 hours ago [-]
Basically I’m saying the same: I don’t think it’s actually going to do an engineer’s job. But that doesn’t mean engineers are all going to be fine.
To argue against myself though - it might just mean more/better code is written by the same number of engineers.
If code gets cheaper, people will use more of it
voidUpdate 5 hours ago [-]
I can see a lot of artists and graphic designers losing their jobs because salespeople just generate AI crap instead.
f1shy 5 hours ago [-]
Should it be called “enhancing” instead?
throwaw12 5 hours ago [-]
> where they stop recruiting and replace human engineers with AI
I don't think it is possible NOW.
But for specific areas, the productivity gain you get from a single developer with an LLM is much higher than before. Some areas where I see it shining:
* building independent React/UI components (see the sketch after this list)
* boilerplate code
* reusing already solved solutions (e.g. try algorithms X, Y, Z; plot the chart in 2D/3D, ...)
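To make the first item concrete, here is a minimal sketch (our illustrative example, not the commenter's) of the kind of self-contained React component an LLM produces reliably, precisely because nothing about it depends on the rest of the app:

```tsx
import React, { useState } from "react";

// A fully self-contained UI component: local state only, no app
// wiring, which is exactly the shape of task LLMs handle well.
export function CollapsiblePanel({
  title,
  children,
}: {
  title: string;
  children: React.ReactNode;
}) {
  const [open, setOpen] = useState(false);
  return (
    <section>
      <button aria-expanded={open} onClick={() => setOpen((o) => !o)}>
        {open ? "Hide" : "Show"} {title}
      </button>
      {open && <div>{children}</div>}
    </section>
  );
}
```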
> What changed significantly in your workflow?
A hiring freeze, because leaders are not yet sure about the gains from AI. What if we hire a bunch of people and can't come up with projects for them (not because we are out of ideas, but because getting investment is hard if you are not an AI company) while the LLM is generating so much code?
> Are you 10x more efficient?
Not always, but I am filtering things out faster, which gives me the opportunity to get into the code concepts sooner (because AI summarizes it for me before I read a 10-page blog post).
jmisavage 5 hours ago [-]
We’re in the early stages of this transition. There’s no formal hiring freeze, but leadership has made it clear we should exhaust AI options before considering new hires. At the same time, raises and promotions are frozen this year, which has definitely caused a lot of frustration internally.
As part of this AI-first shift, all engineers now have access to Cursor, and we’re still figuring out how to integrate it. We just started defining .cursorrules files for projects.
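For anyone who hasn't used them, a .cursorrules file is just free-form project guidance the editor feeds to the model with every request. A minimal hypothetical example (these specific rules are illustrative, not the commenter's):

```
# .cursorrules (hypothetical example)
- Use TypeScript strict mode; never introduce `any`.
- Prefer functional React components and hooks over classes.
- Every new module gets a unit test under tests/.
- Run generated code locally before opening a PR.
```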
What’s been most noticeable is how quickly some people rely too much on AI outputs, especially the first pass. I’ve seen PRs where it’s obvious that the generated code wasn’t even run or reviewed. I know this is part of the messy adjustment period, but right now, it feels like I’m spending more time reviewing and cleaning up code than I did before.
nusl 6 hours ago [-]
Salesforce is one company I'm aware of that announced this sort of thing.
I'm personally only more productive with the help of AI if one of the following conditions is met:
1. It's something I was going to type anyway, but I can just press Tab and/or make a minor edit
2. The code produced doesn't require many changes or much time to understand, since in the cases where it required many changes or deeper understanding, it probably would have been faster to just code it myself
Where it has been helpful, though, is debugging errors or replacing search engines for help with docs or syntax. But sometimes it produces bullsh*t that doesn't exist, and this can lead you down a rabbit hole to nowhere.
More than once it's suggested something to me that solved all of the things I needed, only to realise none of it existed.
ojr 5 hours ago [-]
My friend wants me to build his app for free because he heard the cost of software labor has gone to zero. After I told him the complexity was too high, he gave me ChatGPT responses, thinking they could help me add the features.
In a sprint-planning scenario, I think tasks that were 1, 2, 3, 5, 8, 13, etc. points get put down a notch with the invention of AI, nothing more. AI has not turned an 8-point task into a 3-point one at all. There is a 50/50 chance that a task that was 8 points before AI remains 8 points, with it sometimes dropping to 5.
anant90 5 hours ago [-]
The AI-powered "replacement of engineers" that everyone keeps talking about will look less like existing engineers being laid off, and more like reduced hiring of recent engineering graduates. And, as with any large-scale technology trend, it will take a while before we can say we've come out of the innovator/early adopter phase, which we are clearly still in. In my opinion, it's always easier to invent new technology than to get people to change the ways they currently do things.
MisterTea 5 hours ago [-]
> As someone who
... accidentally hit reply before the post was ready?
anonzzzies 5 hours ago [-]
For frontend, we don't need people anymore. For backend, especially complex stuff, LLMs really waste our time by not getting it and going in circles, so we just don't really use them there anymore: plowing through hundreds of lines of code, or PRs that are wrong, don't conform to standards, include libs we don't need, etc., is pretty useless. Agents will just go in circles until it's so messy that it would've been many times easier to just write the code ourselves, and often it's hard to get out of that other than by rolling back.
jjani 5 hours ago [-]
> For frontend, we don't need people anymore
Does this include UI design? We're finding tools like v0 decent, but nowhere near production design quality. Same for just using Claude or Gemini directly.
anonzzzies 4 hours ago [-]
A designer does the UI design; I mean frontend as in code.
frabjoused 5 hours ago [-]
You're definitely a backend engineer aren't you?
dimgl 5 hours ago [-]
> For frontend, we don't need people anymore
We've found that most, if not all, models are extremely bad at writing frontend code. I really hope you all know what you're doing here; you could end up with unmaintainable, incomprehensible AI slop...
byoung2 3 hours ago [-]
I’ve found ChatGPT can write terrible code for the simplest React components, but beautiful code for complex react-three-fiber components. I suppose that is because it was trained on beginner tutorials for basic components and on advanced 3D-modeling examples for react-three-fiber.
For basic components, I’ve found that asking for more complexity (e.g. asking it to wrap your nav component in a React context or a custom hook) yields better overall code.
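As a hedged illustration of that trick (component names and structure are ours, not the commenter's), asking for the nav state to live in a context plus a custom hook tends to produce something along these lines:

```tsx
import React, { createContext, useContext, useState } from "react";

type NavState = { active: string; setActive: (id: string) => void };

const NavContext = createContext<NavState | null>(null);

// Custom hook: fails loudly if used outside the provider.
function useNav(): NavState {
  const ctx = useContext(NavContext);
  if (!ctx) throw new Error("useNav must be used inside <NavProvider>");
  return ctx;
}

export function NavProvider({ children }: { children: React.ReactNode }) {
  const [active, setActive] = useState("home");
  return (
    <NavContext.Provider value={{ active, setActive }}>
      {children}
    </NavContext.Provider>
  );
}

// Any nav item can read/update the shared state without prop drilling.
export function NavLink({ id, label }: { id: string; label: string }) {
  const { active, setActive } = useNav();
  return (
    <button aria-current={active === id} onClick={() => setActive(id)}>
      {label}
    </button>
  );
}
```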
fhd2 5 hours ago [-]
Can't directly answer your question, since I'm not working at a company that makes any claims about hiring fewer human engineers because of automation.
But I think the central question is not how much of software development can be automated. It's rather how many engineers companies _believe_ they need.
Having spent some time in mid sized companies adjacent to large companies, the sheer size of teams working on relatively simple stuff can be stunning. I think companies with a lot of money have overstaffed on engineers for at least a decade now. And the thing is: It kinda works. An individual or a small team can only go so far, a good team can only grow at a certain rate. If you throw hundreds of engineers at something, they _will_ figure it out, even if you could theoretically do it with far less, by optimising for quality hires and effective ways of working. That's difficult and takes time, so if you have the money for it, you can throw more bodies at it instead. You won't get it done cheaper, probably also not better, but most likely faster.
The mere _idea_ that LLMs can replace human engineers kinda resets this. The base expectation is now that you can do stuff with a fraction of the workforce. And the thing is: you can, and you always could, before LLMs. I've been preaching this for probably 20 years now. It's just that few companies dared to attempt it; investors would scoff at it and think you're being too timid. Now they celebrate it.
So like many, I think any claims of replacing developers with AI are likely cost savings in disguise, presented in a way the stock market might accept more than "it's not going so well, we're reducing investments".
All that aside, I also find it difficult as a layperson to separate the advent of coding LLMs from other, probably more consequential effects, like economic uncertainty. When the economy is stable, companies invest. When it's unstable, they wait.
babyent 4 hours ago [-]
I use AI/LLM to run thought experiments randomly or learn about topics. Sometimes I use it to help me with code (not to write code).
biggestdoofus 4 hours ago [-]
I think it just depends on the complexity of what you are working with. For the simpler stuff it seems to work very well. However, it becomes a gigantic time sink if you try to use it for more complex tasks; you just go in circles while it has no real idea of what to do. It's not just more complex code it struggles with; it can be simple code in complex systems as well, where a foundational understanding of the different parts is essential.
The people writing boring CRUD apps should be scared (but I think it's a failure of our industry that this is still a thing).
The technical debt that will be amassed by AI coding is worrying, however. Coworkers here routinely try to merge in stuff that is just absolute slop, and now I even have to argue with them because they think it's right simply because the AI wrote it...
teeray 5 hours ago [-]
It will be interesting if and when these companies reach a “find out” stage with AI where their entire codebase is incomprehensible slop that not even the AIs can help them with.
orwin 5 hours ago [-]
Let's say that as long as the complexity is low, it is really worth using AI, and you can be twice as effective, maybe more, because it can take care of most of the coding/testing/integration, which is 80% of a project, and can help you with the architecture part as long as it is easy/standard.
As the complexity grows, the usefulness of AI agents decreases: a lot, and quite fast.
In particular, integration of microservices is a really hard case to crack for any AI agent, as it often mixes training data with context data.
It is more useful in centralised apps, and especially for frontend dev, as long as you don't use finite state machines. I don't understand why, but even Claude/Cursor trips on otherwise really easy code (and btw, if you don't use state machines for your complex frontend code, you're doing it wrong).
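For readers unfamiliar with the pattern, here is a minimal sketch of the kind of frontend state machine being described (TypeScript; the fetch-flow example and all names are ours): a discriminated union makes illegal states unrepresentable, and the reducer drops straight into React's useReducer.

```ts
// States and events for a typical fetch flow.
type State =
  | { tag: "idle" }
  | { tag: "loading" }
  | { tag: "success"; items: string[] }
  | { tag: "failure"; message: string };

type Action =
  | { type: "FETCH" }
  | { type: "RESOLVE"; items: string[] }
  | { type: "REJECT"; message: string }
  | { type: "RESET" };

// Transitions are explicit per state; anything else is a no-op.
function reducer(state: State, action: Action): State {
  switch (state.tag) {
    case "idle":
      return action.type === "FETCH" ? { tag: "loading" } : state;
    case "loading":
      if (action.type === "RESOLVE") return { tag: "success", items: action.items };
      if (action.type === "REJECT") return { tag: "failure", message: action.message };
      return state;
    case "success":
    case "failure":
      return action.type === "RESET" ? { tag: "idle" } : state;
  }
}
// In React: const [state, dispatch] = useReducer(reducer, { tag: "idle" });
```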
As long as you know what your agent is shitty at, however, using AI is a net benefit: you don't lose time trying to communicate your needs and just do it yourself, so it is only gains and no losses.
scarface_74 5 hours ago [-]
My anecdote is that I am in cloud consulting specializing in app dev. 90%+ of my projects are greenfield development.
Before LLMs got good enough, there were projects I would scope with the expectation of having one junior consultant do the coding grunt work: simple Lambdas, Python utility scripts, bash scripts, infrastructure as code, translating some preexisting code into the customer's target language.
This is the perfect use case for ChatGPT. It's simple, well-contained work that can fit in its context window; the AWS SDKs in various languages are well documented; there is plenty of sample code; and it's easy enough to test.
I can tell it to “verify all AWS SDK functions on the web” or give it the links to newer SDK functionality.
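For a flavor of that grunt-work category, here is a minimal sketch of an S3-triggered Lambda of the sort described, using the AWS SDK for JavaScript v3 (our own illustrative example, not the author's actual project code):

```ts
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";
import type { S3Event } from "aws-lambda"; // types from @types/aws-lambda

const s3 = new S3Client({});

// Log size and content type for every object uploaded to the bucket.
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    // S3 event keys are URL-encoded, with '+' standing in for spaces.
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
    const head = await s3.send(new HeadObjectCommand({ Bucket, Key }));
    console.log(`${Bucket}/${Key}: ${head.ContentLength} bytes, ${head.ContentType}`);
  }
};
```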
I don’t really ever need a junior developer for anything. If I have to be explicit about the requirements anyway, I can use an LLM.
And before the gatekeeping starts: I've been coding as a hobby since 1986, when I started in assembly language, and coding professionally since 1996.
kypro 5 hours ago [-]
Depending on the project, Devin.ai should be able to replace ~50% of development. No one should be hiring junior devs in 2025, imo.
flanked-evergl 5 hours ago [-]
And according to physics, if you put wings on pigs they should be able to fly.
kypro 5 hours ago [-]
What's your argument? I assume you've used Devin.ai, then? What is and isn't it able to do, in your honest experience? I'd be shocked if it couldn't do at least 20-30% of your development.
voidUpdate 5 hours ago [-]
Well if nobody hires junior devs, how do you make more senior devs?
gorbachev 5 hours ago [-]
Companies that do that are going to start "training" non-developers to use AI tools to "code".
Once this starts happening and senior developers in these companies are doing nothing but code reviewing PRs written by AI and fixing bugs in that code, they will leave and the company will have no developers.
TuringNYC 5 hours ago [-]
> Well if nobody hires junior devs, how do you make more senior devs?
This is an excellent question for society to answer, and hopefully for policy-makers to think about. A challenge with capitalism as I see it practiced is that most for-profit orgs think quarter to quarter about earnings, costs, etc. They are not focused on second-order issues arising 5-10 years later.
We've all seen this play out in our own lives, with the gutting of American manufacturing... and the resulting discord a generation later.
kypro 5 hours ago [-]
You don't need to. Most of the cost of senior developers can be cut soon too, as the AI improves. In ~2 years, I'd guess you could cut 50% of your senior dev costs with AI.
johnbellone 5 hours ago [-]
I laugh every time that I read these claims. How long have you been at it?
daniel_iversen 4 hours ago [-]
I agree the parent statement seemed a bit "incomplete", high-level and hasty. However, it's not inconceivable to me that AI will continue to help even senior devs be way more productive, and even doubling productivity (or more) feels very plausible.
kypro 4 hours ago [-]
If you're a senior dev and not already 20% more productive from AI, you're doing something wrong. In 2-3 years, 1 dev will easily be able to do what 2 devs could 5 years ago, i.e. you can halve your senior dev team and operate at the same velocity you were previously able to. Given this (and other market factors), I don't see demand for senior devs being so high that a company would need to hire a junior dev because it is completely unable to hire a senior one. The tech job market isn't likely to get that hot again, in my opinion.
So yes, broadly speaking, junior devs are not needed. If someone is a junior dev and can't find a job, they'll need to prove they can function at a senior level if they want to be employable going forward. But this is basically the market today anyway.
These are my predictions; you can disagree, and I'm sure I will be off to some degree, but I'd put money on being mostly correct in these claims.
johnbellone 3 hours ago [-]
Productivity enhancements are one thing, but the full elimination of junior development roles is completely different. The dynamics also change with the size and scale of the team and company (components, services, multiple customers, etc).
The role will change and individuals will become more productive. These tools are impressive and moving in the direction of your prediction. But, personally, I think it is naive to think that the need for junior roles will be entirely eliminated in 5 years.
kypro 2 hours ago [-]
What would a junior developer be doing? I'm genuinely wondering what you would pay a junior developer to do today that you couldn't do more cost-effectively with AI. If you're talking about someone with a bit of technical knowledge who's cheaper than a senior dev and prompts/manages AI agents, then yeah, I suspect there will be people doing this, but I don't think those are junior developers.
In my opinion there would be no point in having a junior developer do anything right now, in the same way I'm not going to pay a rookie artist or web designer to do anything for me anymore, because I'd get better results from AI. Obviously companies that are not productivity- and cost-optimised might not care or realise they can do this right away (there will always be the odd inefficient hire here and there), but my guess is that 99.9% of these hires make no economic sense and will be so few and far between that the role will effectively be eliminated in favor of something else. And this happens often in tech: I used to know "webmasters" who just did HTML/CSS. The web still runs on HTML/CSS, but those jobs no longer exist, and the people who used to do that work are now doing other things. Again, why the hell would I pay someone to write HTML/CSS when there are plenty of WYSIWYGs and AI tools that could do a better job, cheaper and quicker?
voidUpdate 4 hours ago [-]
Well, hopefully the number of companies needing senior devs will double in 2-3 years then, otherwise we won't be able to find jobs, since we'll have been automated away.
looofooo0 4 hours ago [-]
Jevons paradox here: if you increase the efficiency of anything, demand goes up, not down. So any dev-AI cyborg will be in hot demand.