• by hintymad on 6/24/2025, 1:57:46 AM

    Copying from another post. I’m very puzzled about why people don’t talk more about the essential complexity of specifying systems anymore:

    In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

    If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

    So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

    What am I missing here?

  • by agentultra on 6/23/2025, 10:06:39 PM

    … because programming languages are the right level of precision for specifying a program you want. Natural language isn’t it. Of course you need to review and edit what it generates. Of course it’s often easier to make the change yourself instead of describing how to make the change.

    I wonder if the independent studies that show Copilot increasing the rate of errors in software have anything to do with this less bold attitude. Most people selling AI are predicting the obsolescence of human authors.

  • by sysmax on 6/23/2025, 9:25:40 PM

    AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.

    Here's a fresh example that I stumbled upon just a few hours ago. I needed to refactor some code that first computes the size of a popup, and then separately, the top left corner.

    For brevity, one part used an "if", while the other one had a "switch":

        if (orientation == Dock.Left || orientation == Dock.Right)
            size = /* horizontal placement */
        else
            size = /* vertical placement */
    
        var point = orientation switch
        {
            Dock.Left => ...
            Dock.Right => ...
            Dock.Top => ...
            Dock.Bottom => ...
        };
    
    I wanted the LLM to refactor it to store the position rather than applying it immediately. Turns out, it just could not handle different things (if vs. switch) doing a similar thing. I tried several variations of prompts, but it leaned very strongly toward either two ifs or two switches, despite rather explicit instructions not to do so.
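
    Roughly what I was after, for reference: keep the if/switch mix as-is and just store the result instead of applying it. This is only a sketch; hostBounds, popupWidth/popupHeight, m_PendingBounds and the placement math are illustrative placeholders, not the real code:

        // Sketch only: placeholder names and math; just the if/switch shape mirrors the real code.
        if (orientation == Dock.Left || orientation == Dock.Right)
            size = new Size(popupWidth, hostBounds.Height);    // horizontal placement
        else
            size = new Size(hostBounds.Width, popupHeight);    // vertical placement

        var point = orientation switch
        {
            Dock.Left   => new Point(hostBounds.Left - popupWidth, hostBounds.Top),
            Dock.Right  => new Point(hostBounds.Right, hostBounds.Top),
            Dock.Top    => new Point(hostBounds.Left, hostBounds.Top - popupHeight),
            Dock.Bottom => new Point(hostBounds.Left, hostBounds.Bottom),
            _           => throw new ArgumentOutOfRangeException(nameof(orientation)),
        };

        // Stored and applied later (e.g. on render) instead of immediately:
        m_PendingBounds = new Rect(point, size);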

    It sort of makes sense: once the model has "completed" an if, and then encounters the need for a similar thing, it will pick an "if" again, because, well, it is completing the previous tokens.

    Harmless here, but in many slightly less trivial examples, it would just steamroll over nuance and produce code that appears good, but fails in weird ways.

    That said, splitting tasks into smaller parts devoid of such ambiguities works really well. It's way easier to say "store size in m_StateStorage and apply on render" than to manually edit 5 different points in the code, especially with stuff like Cerebras, which can chew through complex code at several kilobytes per second, expanding simple thoughts faster than you could physically type them.

  • by taysix on 6/23/2025, 10:23:16 PM

    I had a fun result the other day from Claude. I opened a script in Zed and asked it to "fix the error on line 71". Claude happily went and fixed the error on line 91....

    1. There was no error on line 91; it just did some inconsequential formatting on that line.

    2. More importantly, it ignored the very specific line I told it to go to. It's like I was playing telephone with the LLM, which felt so strange with text-based communication.

    This was me trying to get better at using the LLM while coding and seeing if I could "one-shot" some very simple things. Of course, doing this _very_ tiny fix myself would have been faster. It just felt weird and reinforces the idea that the LLM isn't actually thinking at all.

  • by astariul on 6/24/2025, 7:40:14 AM

    > The continued shortage of software engineers, coupled with research showing that AI tools are particularly beneficial for junior developers

    Am I living in a different timeline than these guys?

    In my timeline, the tech job market is terrible, and AI is the worst for junior developers, because they aren't learning what they should by doing.

  • by exiguus on 6/23/2025, 11:00:54 PM

    Personally, I define the job of a software engineer as transforming requirements into software. Software is not only code. Requirements are not only natural language. At the moment I can't manage to be faster with AI than manually, unless it's a simple task or simple software. In my experience, AIs are currently junior or mid-level developers, and in the last two years they haven't gotten significantly better.

  • by layer8 on 6/23/2025, 10:15:18 PM

    One of the most useful properties of computers is that they enable reliable, eminently reproducible automation. Formal languages (like programming languages) not only allow us to unambiguously specify the desired automation to the utmost level of precision, they also allow humans to reason about the automation with precision and confidence. Natural language is a poor substitute for that. The ground truth of programs will always be the code, and if humans want to precisely control what a program does, they’ll be best served by understanding, manipulating, and reasoning about the code.

  • by CoffeeOnWrite on 6/23/2025, 9:11:53 PM

    “Manual” has a negative connotation. If I understand the article correctly, they mean “human coding remains key”. It’s not clear to me that the GitHub CEO actually used the word “manual”; that would surprise me. Is there another source on this that’s either more neutral or better at choosing accurate words? The last thing we need is to put down human coding as “manual”; human coders have a large toolbox of non-AI tools to automate their coding.

    (Wow I sound triggered! sigh)

  • by voidhorse on 6/23/2025, 11:37:55 PM

    Hopefully a CEO finally tempering some expectations and the recent Apple paper bring some sanity back into the discourse around these tools.[^1]

    Are they cool and useful? Yes.

    Do they reason? No. (Before you complain, please first define reason).

    Are they the end-all-be-all of problem solving and the dawn of AGI? Also no.

    Once we bring some rationality back into the discourse, maybe we'll actually begin to figure out how these things fit into an engineering workflow and stop throwing around ridiculous terms like vibe coding.

    If you are an engineer, you are signing up to build rigorously verified and validated systems, preferably with some amount of certainty about their behavior under boundary conditions. All the current hype-addled discussion around LLMs seems to have had everything but correctness as its focus.

    [^1]: It shouldn't take a CEO, but many people, even technologists, who should be more rational about whose opinions they deem worthy of consideration, seem to overvalue the opinions of the C-suite for some bizarre, inexplicable reason.

  • by jasonthorsness on 6/23/2025, 9:23:38 PM

    "He warned that depending solely on automated agents could lead to inefficiencies. For instance, spending too much time explaining simple changes in natural language instead of editing the code directly."

    For lots of changes, describing them in English takes longer than just performing the change. I think the most effective workflow with AI agents will be a sort of active back-and-forth.

  • by lvl155 on 6/24/2025, 1:06:10 AM

    AI is still not there yet. For example, I constantly have to deal with mis-referenced documents and specifications. I think part of the problem is that it was trained on outdated materials and is unable to actively update on the fly. I can get around this problem but…

    The problem with today’s LLM and GI solutions is that they try to solve all n steps when solving the first i steps would be infinitely more useful for human consumption. I’ve yet to see a fully modular solution (though MCPs partly solve it), where I can just say, “Hey, using my specific coding style based on my github, solve problem x, based on resources a, b, and c and only a, b, and c.” I would also like to see a more verbose/interactive coding AI, where it asks incremental questions as it traverses a problem tree.

  • by strict9 on 6/23/2025, 9:19:56 PM

    It's interesting to see a CEO express thoughts on AI and coding go in a slightly different direction.

    Usually the CEO or investor says 30% (or some other made up number) of all code is written by AI and the number will only increase, implying that developers will soon be obsolete.

    It's implied that 30% of all code submitted and shipped to production is from AI agents with zero human interaction. But of course this is not the case; it's the same developers as before, using tools to write code more rapidly.

    And writing code is only one part of a developer's job in building software.

  • by hnthrow90348765 on 6/23/2025, 9:45:35 PM

    My guess is they will settle on 2x the productivity of a before-AI developer as the skill floor, but then not take a look at how long meetings and other processes take.

    Why not look at Bob, who takes like 2 weeks to write tickets on what they actually want in a feature? Or Alice, who's really slow getting Figma designs done and validated? How nice would it be to have a "someone's bothered a developer" metric, with the business seeking to get it to zero and talking about it as loudly as they have about developers?

  • by h4kunamata on 6/23/2025, 11:48:48 PM

    Too late. I am seeing developer after developer copy/pasting from AI tools, and when asked, they have no idea how the code works because "it just works".

    Google itself said 30% of their code is AI-generated, and yet they recently had a worldwide outage. Coincidence??

    You tell me.

  • by stego-tech on 6/24/2025, 4:46:54 PM

    I cannot help but read this in the subtext of, “We need more training data from humans; please don’t stop uploading your code to us.”

    Look, manual coding (and art, and media, and IT, and others) will always be needed. GenAI was never going to replace these jobs wholesale, or even permanently displace workers. The fact this is trending on an APAC-focused site suggests the real message is, “we don’t want our outsource farms thinking this [GenAI] will replace you.”

    Prediction of tokens was never going to lead to AGI. All it’s going to accomplish is instilling a deep, traumatic hostility in the populace towards further automation right when we need it the most.

  • by deadbabe on 6/24/2025, 3:21:15 AM

    The other day we had a fairly complex conditional modified incorrectly by AI, and it cost the company a lot of money and harmed our reputation with partners.

    AI just cannot reason about logical things reliably. And no one noticed because the AI had made several other sweeping changes, and the noise drowned it out.

  • by jstummbillig on 6/23/2025, 9:22:07 PM

    Going by the content of the linked post, this is very much a misleading headline. There is nothing in the quotes that I would read as an endorsement of "manual coding", at least not in the sense that we have used the term "coding" for the past decades.

  • by klysm on 6/23/2025, 10:23:38 PM

    CEOs are possibly the last person you should listen to on any given subject.

  • by careful_ai on 6/24/2025, 12:21:19 PM

    Great post—Dohmke’s call to preserve hands‑on coding while leveraging AI resonates strongly. It’s not about replacing devs but enabling them to build faster while staying in control.

    In practice, pure LLM suggestions often feel detached from your actual codebase—missing intent, architectural constraints, or team conventions. What helped us was adopting a repo‑aware evaluation approach with tooling that:

    - Scans entire repos, generates architecture diagrams, dependency maps, and feature breakdowns.
    - Surfaces AI suggestions grounded in context—so prompts don’t float in isolation.
    - Supports human-in-the-loop validation, making it easy to vet AI‑generated PRs before merging.
    - Tracks drift, technical debt, and cost per eval, so AI usage isn’t a black box.

    The result isn’t autopilot coding—it’s contextual assistance that amplifies developer decisions. That aligns exactly with Dohmke: use AI to accelerate, but keep the engineer firmly in the driver’s seat.

    Curious if others have tried similar repo‑aware AI workflows that don’t sacrifice control for speed?

  • by notyouraibot on 6/24/2025, 9:37:30 AM

    I recently started consulting for a company that's building an AI first tool.

    The entire application is powered by several AI agents. I had a look at their code and had to throw up: each agent is a single Python file of over 4000 lines, and you can just look at the first 100 lines and tell it's all LLM-generated code. The hilarious part is that if I paste the code into ChatGPT to help me break it down, it hits the context window in like 1 response!

    I think this is one of the main problems with AI code. 5 years ago, every time I took on a project, I knew the code I was reading and diving into was written by a human, well thought out and structured. These days almost all the code I see is glue code; AI has done severe damage by enabling engineers who do not understand basic structures and fundamentals to write 'code that just works'.

    At the same time, I don't blame just the AI, because several times I have written glue code myself, then asked AI to refactor it the way I want, and it is usually really good at that.

  • by boshalfoshal on 6/23/2025, 11:30:11 PM

    Imo this is a misunderstanding of what AI companies want AI tools to be and where the industry is heading in the near future. The endgame for many companies is SWE automation, not augmentation.

    To expand -

    1. Models "reason" and can increasingly generate code given natural language. It's not just fancy autocomplete; it's like having an intern-to-mid-level engineer at your beck and call to implement some feature. Natural language is generally sufficient when I interact with other engineers, so why is it not sufficient for an AI, which (in the limit) approaches an actual human engineer?

    2. Business-wise, companies will not settle for augmentation. Software companies pay tons of money in headcount; it's probably most mid-sized companies' top or second line item. The endgame for leadership at these companies is to do more with less. This necessitates automation (in addition to augmenting the remaining roles).

    People need to stop thinking of LLMs as "autocomplete on steroids" and actually start thinking of them as a "24/7 junior SWE who doesn't need to eat or sleep and can do small tasks at 90% accuracy with some reasonable spec." Yeah, you'll need to edit their code once in a while, but they also get better and cost less than an actual person.

  • by alganet on 6/24/2025, 8:21:49 AM

    > Dohmke described an effective workflow where AI tools generate code and submit pull requests.

    This is kind of vague. What kinds of contributions is he talking about? Adding features, refactoring, removing dead code, optimizing, adding new tests?

    The article mentions boilerplate. Is that all current coding AIs can do?

    > For instance, spending too much time explaining simple changes in natural language instead of editing the code directly.

    Why not write the code directly then? Even less friction.

    Again, _it all depends on the kinds of contributions AI can make_.

    The fact that they're being vague about it tells me something. If you had a bot that could fix tests (or even get you 90% of the way), you'd be boasting about it everywhere. It would be extraordinary evidence and you'd be very proud of it.

    If someone made a PR to my repo adding a bunch of automated boilerplate, I'd be pissed, not encouraged to make it work.

  • by mycocola on 6/23/2025, 9:56:30 PM

    I think most programmers would agree that thinking represents the majority of our time. Writing code is no different than writing down your thoughts, and that process in itself can be immensely productive -- it can spark new ideas, grant epiphanies, or take you in an entirely new direction altogether. Writing is thinking.

    I think an over-reliance, or perhaps any reliance, on AI tools will turn good programmers into slop factories, as they consistently skip over a vital part of creating high-quality software.

    You could argue that the prompt == code, but then you are adding an intermediary step between you and the code, and something will always be lost in translation.

    I'd say just write the code.

  • by mewc on 6/23/2025, 11:14:06 PM

    More complaining & pessimism means better signal for teams building the AI coding tools! Keep it up! The ceiling for AI is not even close to being met. We have to be practical with what's reasonable, but getting 90% complete in a few prompts is magic.

  • by bad_haircut72 on 6/23/2025, 11:34:30 PM

    I think so many devs fail to realise that, to your product manager / team lead, the interface between you and the LLM is basically the same. They write a ticket/prompt and get back a bunch of code that undoubtedly has bugs and misinterpretations in it. It will probably go through a few rounds of back-and-forth revisions until it's good enough to ship (i.e. they tested it black-box style and it worked once), and then they can move on to the next thing until whatever this ticket was about rears its ugly head again at some point in the future. If you aren't used to writing user stories / planning, you're really gonna be obsolete soon.

  • by guicen on 6/24/2025, 11:22:12 AM

    I believe it's important to learn the basics of manual programming and also figure out how to work with AI tools in a smart way. It's not just about letting AI do the coding for us. We still need to think clearly and improve our own problem-solving skills. AI can help turn ideas into reality, but we need to grow too if we want to really make use of it.

  • by Daisywh on 6/24/2025, 11:43:56 AM

    I'm not really afraid that AI will replace programmers. What I worry about is that it might make programmers stop thinking. I've seen beginners who quickly get used to asking AI for answers and no longer try to understand how things actually work. It may feel fast at first, but over time they lose the ability to solve problems on their own.

  • by randomNumber7 on 6/23/2025, 10:12:48 PM

    Code monkeys that don't understand the limits of LLMs and can't solve problems where the LLM fails are not needed in the world of tomorrow.

    Why wouldn't your boss ask ChatGPT directly?

  • by WolfOliver on 6/24/2025, 10:48:00 AM

    I think 5 years from now we will see a major software dev shortage, mainly because of all the reports that big tech companies are replacing devs with AI, which scares new students away from signing up for a CS degree.

    The truth is that those devs who have been replaced should not have been hired in the first place, as the companies were already overstaffed.

  • by swyx on 6/23/2025, 10:15:19 PM

    > In an appearance on “The MAD Podcast with Matt Turck,” Dohmke said that

    > Source: The Times of India

    what in the recycled content is this trash?

  • by bamboozled on 6/24/2025, 12:10:07 AM

    We've generated a lot of code with Claude Code recently... then we've had to go back and rationalize it... :) fun times... you absolutely must have a well-defined architecture established before using these tools.

  • by bmitc on 6/23/2025, 10:59:50 PM

    It would be nice if he would back that up by increasing focus on quality-of-life issues on GitHub. GitHub's feature set seems to get very little attention unless it overlaps with Copilot.

  • by OJFord on 6/23/2025, 10:03:22 PM

    This seems to be an AI summary of a (not linked) podcast.

  • by sneak on 6/24/2025, 6:55:54 AM

    > GitHub CEO Thomas Dohmke

    Did I miss Nat Friedman stepping down? Does the new guy think having ICE as a customer is ok, too?

  • by FirmwareBurner on 6/23/2025, 9:13:17 PM

    I wonder how much coding he does, and how he knows which code is human-written and which is by machine.

  • by treefarmer on 6/23/2025, 9:16:02 PM

    I get a 403 forbidden error when trying to view the page. Anyone else get that?

  • by exabrial on 6/23/2025, 10:13:32 PM

    Amazingly, so does air and water. What AI salesman could have predicted this?

  • by lawgimenez on 6/23/2025, 10:18:38 PM

    Not gonna lie, first time I've heard of manual coding.

  • by another_twist on 6/23/2025, 10:19:43 PM

    I think these are coordinated posts by Microsoft execs. First their director of product, now this. It's like they're trying to calm the auto-coding hype until they catch up, and thus keep OpenAI from running away.

  • by user4673568345 on 6/25/2025, 9:29:34 AM

    WELL YOU DONT SAY

  • by dboreham on 6/23/2025, 10:17:36 PM

    That's him out the Illuminati then.

  • by lunarboy on 6/23/2025, 10:02:16 PM

    It was only 2 years ago that we were still talking about GPTs making up complete nonsense, and now hallucinations are almost gone from the discussions. I assume it will get even better, but I also think there is an inherent plateau. Just like how machines solved mass manufacturing, we still have factory workers and overseers. Also, "manually" hand-crafted pieces like fashion and watches continue to be the most expensive luxury goods. So I don't believe good design architects and consulting will ever be fully replaced.

  • by usmanmehmood55 on 6/24/2025, 7:31:41 AM

    > The continued shortage of software engineers, coupled with research showing that AI tools are particularly beneficial for junior developers...

    I stopped reading after this.

  • by thtuothy57747 on 6/24/2025, 11:11:21 AM

    Yawn... Epistemic standards have fallen to rock-bottom levels across STEM due to metric matching, more so in the cookie-cutter Jira world of soulless software factories.

    That these "geniuses" think they can replace humans, not only for knowledge work but also for labour, is laughable to anyone who has worked with LLMs.

  • by GiorgioG on 6/23/2025, 10:59:07 PM

    No fucking shit Sherlock.

  • by rufius on 6/24/2025, 12:29:10 AM

    Water’s wet, fire’s hot. News at 5.

  • by guluarte on 6/23/2025, 10:14:45 PM

    AI is good for boilerplate, suggestions, nothing more.

  • by Zaylan on 6/24/2025, 2:18:30 AM

    AI tools are great for speeding up small tasks, but when it comes to debugging, refactoring, or designing anything more complex, manual coding is still essential. Especially when things break, if you do not understand the underlying logic, you cannot really fix it properly.

    AI is becoming a helpful assistant, but it is still far from replacing real engineering skill.