• by fizx on 3/30/2025, 10:31:10 PM

    AIs today basically fail because they've been trained to be aggressive editors of code. This makes the first steps feel amazing, gives you the most out of your first tokens, and helps win the evals focused on coding simple-to-moderate tasks.

    Once they hit some threshold of project size, they overcommit: they bite off too much, or don't recognize that they're missing context. Agents help with this by allowing them to see their mistakes and try again, but eventually you hit a death loop.

    I think someone around now-ish will realize that there should be two separate RLHF tunes--one for the initial prototype, and another for the hard engineering that follows. I doubt it's that hard to make a methodical, engineering-minded tune, but the emphasis has been on the flashy demos and the quick wins. Cursor and folks should be collecting this data as we speak, and I expect curmudgeonly agents to start appearing within a year.

    Combine this with better feedback loops (e.g. mcp-accessible debuggers), the agent doing its own stackoverflow/github searches, and continued efficiency work driving token costs down by an order-of-magnitude every year or so, and agents will get very very good very fast.

    In this atmosphere, humans will shortly exist to get context for the agent that it can't fetch itself, either for security reasons, or because no one's built the integration yet. And that will be short-lived, because integrations will always be built.

    So I guess there's a window for the "copilot" reality, but it feels very very brief. I don't think agents will need humans for very long.

  • by drweevil on 3/28/2025, 2:59:11 PM

    > It just needs to be able to deliver 80% of your output at 20% of your cost

    What, though, is the actual cost? Are AI tools still loss leaders? What will happen if the AI bubble bursts and there is a severe shortage of software engineers? It is this uncertainty that people are having to deal with now.

  • by g9yuayon on 3/30/2025, 10:40:43 PM

    > My thesis is that AI will fragment the role of software engineering. It will become a role with a large pool of low-skilled coders who move forward with AI and a few specialists that will unblock those coders when stuck as well as address performance bottlenecks for production-scale.

    This sounds like outsourcing on steroids. Jokes aside, what software engineering will become really depends on the growth of the industry. Many people thought that most software engineering jobs would be outsourced to India and that software engineering as a profession would soon die in the US. It turned out that investment in software engineering far outpaced outsourcing, and as software engineers we were incredibly lucky to work in this field. The trend will not last forever, though. If it turns out that the growth areas in the world do not require much novel software engineering, then demand for this profession will dwindle and investment will diminish. As a result, our jobs will be outsourced or replaced by AI to a large degree, as AI is really good at slicing and dicing mature code for mature use cases.

  • by mycentstoo on 3/30/2025, 10:38:32 PM

    I’m genuinely curious about the point that companies will reduce headcount because AI will make engineers more efficient. I’ve seen it articulated here and elsewhere that a company will be able to employ fewer engineers because each will be more productive. What if companies instead kept the same number of engineers but now massively out-produced what they used to? And I disagree with the example that this is like typewriters replacing typists. Typists had a fixed amount of material that needed to be typed. Software is different: a company with a better or more feature-rich product could gain on its competitors.

    Curious if anyone else thinks this. Maybe it’s just optimism, but I’ve yet to be convinced that a company would want to merely maintain its productivity by trading engineers for AI if it had the same opportunity to grow its productivity with AI while maintaining headcount.

  • by SamuelAdams on 3/30/2025, 10:22:18 PM

    > It will become a role with a large pool of low-skilled coders who move forward with AI and a few specialists that will unblock those coders when stuck as well as address performance bottlenecks for production-scale.

    You see this already in medicine. An anesthesiologist can oversee up to 6 concurrent cases, with NPs or CRNAs doing the actual work.

    This only works for straightforward, medically uncomplicated cases. The more complicated cases (pregnancy, cancer, obesity, etc.) are still typically fully managed by an MD/DO.

    The results are controversial. Healthcare systems can save cost, but patient care is hit or miss.

  • by recursivedoubts on 3/30/2025, 10:24:42 PM

    The cleanup on aisle 9 when this whole thing is over is going to be biblical.

  • by sansseriff on 3/30/2025, 10:48:40 PM

    I think about the reviewer problem. An AI can write 3000 lines in less than a minute. But it might take me an hour to understand the architecture it's decided on.

    There are a couple of possibilities here:

    1. Agents become so powerful that a human can't conceivably keep up with them, and it becomes a drain on efficiency for any human to try. The only important questions are whether the output fits the desired outcome, and whether the creation is 'safe'. Safe can mean many things: will not crash, will not leak data, will not take over the world... Atlas Computing is one startup taking this view, by ensuring an AI can only do 'safe' things as defined by some formal ontology/methods.

    2. A human stays in the loop, and tries to stay at least reasonably up to date on the code architecture. For this to work long term, the weak link is human understanding. In that case there are interesting opportunities for AI-generated lessons, animations, and examples used to get the human up to speed as fast as possible. If I see a very nice 3Blue1Brown-style animation generated by AI about how a piece of software functions, then I can probably start working with it more quickly than if I only had the code--at least if the animation links very closely with the code itself.

  • by precompute on 3/30/2025, 10:06:00 PM

    The article is right: the low-tier tech jobs will likely not exist in a few years. These jobs mostly involve gluing APIs together.

    However, I think the assumed usefulness of humans can be slashed even further. Right now, LLMs interface with languages and systems that were abstracted to a human level of understanding. There's an empty spot for new languages and frameworks with thousands of primitive "patterns", all represented by unique symbols, that could be put together much quicker by a LLM than by a human.

    LLMs have monstrously high associative horizons -- this means the way they segment info requires many more "boxes" / classifications / names, while humans top out at some arbitrarily low value but are able to generate more / new categories on demand.

    Instead of faffing about with a thousand examples to get a certain indentation right, it could be something akin to a spoken language but far more logical (or perhaps like a language with incredibly long compound words).

    Removing all computer programmers would require a bottom-up unity of the hardware stack with software, and that's an almost impossible ask by today's standards. Would need to start over and get rid of old systems in many areas.

  • by rglover on 3/30/2025, 10:26:37 PM

    This post is spot on. It will be incredibly lucrative to be one of the "ones who knows" in the relatively near future.

    What's scary is what happens when those types cease to exist (due to retirement or age) and all you're left with is the semi-coders described here. There's a similar problem with outdated technologies that few-to-no developers understand anymore.

  • by hintymad on 3/30/2025, 11:08:10 PM

    > It just needs to be able to deliver 80% of your output at 20% of your cost

    Yeah. The average cost of a senior engineer at an IPO'd company in the Bay Area is about $500K per year (salary, stock grants, and all the company expenses). That's enough to buy 500 Cursor business licenses for 2 years. It's a no-brainer that companies will be trying to figure out how to replace as many engineers as they can with AI licenses.
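    A quick back-of-envelope check of that comparison (the $40-per-seat-per-month Cursor business price used here is an assumption for illustration, not a quoted figure):

    ```python
    # Sanity check of the cost comparison above.
    seat_price_per_month = 40       # USD per Cursor seat, assumed
    engineer_annual_cost = 500_000  # fully loaded senior-engineer cost, from the comment

    seats = 500
    years = 2
    license_cost = seats * seat_price_per_month * 12 * years
    print(license_cost)  # 480000 -- just under one year of engineer cost
    ```

    Under that assumed price, 500 seats for two years comes in slightly below a single year of one fully loaded engineer, which is the rough equivalence the comment is pointing at.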

  • by hintymad on 3/30/2025, 11:12:27 PM

    > In the last century, typesetting used to be a big industry. It once required specialized skills and machinery. You could make great money being one. However, in the 1980s, desktop publishing software suddenly enabled anyone with a computer to design and print content. This democratized publishing led to a decline in traditional print jobs and a rise in graphic design and DIY publishing in its stead.

    A big difference here is that typesetting can be done by individuals as an ongoing task accompanying writing, so software did indeed replace that profession. Software engineering, on the other hand, is something we do all day long; it's a stand-alone profession. A more relatable historical example would be automation in the chip industry. With CAD and automation, the chip industry requires far fewer engineers, even though chip design and manufacturing are still very challenging. But then, that probably has more to do with limited investment (and therefore limited demand for talent) than with how much automation exists in the field.