• by gblargg on 6/26/2025, 6:44:07 AM

    https://archive.is/5I9sB

    (Works on older browsers and doesn't require JavaScript except to get past CloudSnare).

  • by dirkc on 6/26/2025, 1:56:39 PM

    I have a friend that always says "innovation happens at the speed of trust". Ever since GPT3, that quote comes to mind over and over.

    Verification has a high cost and trust is the main way to lower that cost. I don't see how one can build trust in LLMs. While they are extremely articulate in both code and natural language, they will also happily go down fractal rabbit holes and show behavior I would consider malicious in a person.

  • by HardCodedBias on 6/26/2025, 3:22:33 PM

    All of this fighting against LLMs is pissing in the wind.

    It seems that LLMs, as they work today, make developers more productive. It is possible that they benefit less experienced developers even more than experienced developers.

    More productivity, and perhaps very large multiples of productivity, will not be abandoned due to roadblocks constructed by those who oppose the technology for one reason or another.

    Examples of the new productivity tool causing enormous harm (e.g. a bug that brings down some large service for a considerable amount of time) will not stop the technology if it brings considerable productivity.

    Working with the technology and mitigating its weaknesses is the only rational path forward. And those mitigations can't be a set of rules that completely strip the new technology of its productivity gains. The mitigations have to work with the technology to increase its adoption, or they will be worked around.

  • by stavros on 6/26/2025, 9:29:30 AM

    I don't understand the premise. If I trust someone to write good code, I learned to trust them because their code works well, not because I have an a priori theory of mind for them that says they "produce good code".

    If someone uses an LLM and produces bug-free code, I'll trust them. If someone uses an LLM and produces buggy code, I won't trust them. How is this different from when they were only using their brain to produce the code?

  • by axegon_ on 6/26/2025, 10:00:28 AM

    That is already the case for me. The number of times I've read "apologies for the oversight, you are absolutely correct" is staggering: 8 or 9 out of 10 times. Meanwhile I constantly see people mindlessly copy-pasting LLM-generated code and then getting furious when it doesn't do what they expected it to do. Which, btw, is the better option: I'd rather have something obviously broken than something that only seems to work.

  • by pu_pe on 6/26/2025, 12:15:40 PM

    > While the industry leaping abstractions that came before focused on removing complexity, they did so with the fundamental assertion that the abstraction they created was correct. That is not to say they were perfect, or they never caused bugs or failures. But those events were a failure of the given implementation, a departure from what the abstraction was SUPPOSED to do; every mistake, once patched, led to a safer, more robust system. LLMs by their very fundamental design are a probabilistic prediction engine; they merely approximate correctness for varying amounts of time.

    I think what the author misses here is that imperfect, probabilistic agents can build reliable, deterministic systems. No one would trust a garbage collection tool based on how reliable its author was, but rather on whether it proves it can do what it is intended to do after extensive testing.

    I can certainly see an erosion of trust in the future, with the result being that test-driven development gains even more momentum. Don't trust, and verify.
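
    To make that concrete, here is a minimal sketch (hypothetical, not from the article) of the "don't trust, and verify" stance: the helper below is a stand-in for any LLM-generated code, and the test exercises its contract on randomized inputs rather than trusting whoever, or whatever, wrote it.

        import random
        import unittest

        def dedupe_preserve_order(items):
            """Hypothetical stand-in for LLM-generated code: drop duplicates, keep first-seen order."""
            seen = set()
            result = []
            for item in items:
                if item not in seen:
                    seen.add(item)
                    result.append(item)
            return result

        class TestDedupe(unittest.TestCase):
            def test_randomized_inputs(self):
                # Verify the contract on many random inputs instead of trusting the author.
                for _ in range(1000):
                    data = [random.randint(0, 20) for _ in range(random.randint(0, 50))]
                    out = dedupe_preserve_order(data)
                    self.assertEqual(len(out), len(set(out)))           # no duplicates remain
                    self.assertEqual(set(out), set(data))               # nothing lost or invented
                    self.assertEqual(out, sorted(out, key=data.index))  # first-seen order preserved

        if __name__ == "__main__":
            unittest.main()

    Provenance doesn't enter into it: the same test gates a human-written patch and an LLM-written one equally.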

  • by geor9e on 6/26/2025, 2:11:09 PM

    They changed the headline to "Yes, I will judge you for using AI..." so I feel like I got the whole story already.

  • by cheriot on 6/26/2025, 6:44:17 AM

    > promises that the contributed code is not the product of an LLM but rather original and understood completely.

    > require them to be majority hand written.

    We should specify the outcome, not the process. Expecting the contributor to understand the patch is a good idea.

    > Juniors may be encouraged/required to elide LLM-assisted tooling for a period of time during their onboarding.

    This is a terrible idea. Onboarding is a lot of random environment-setup hitches that LLMs are often really good at. It's also getting up to speed on code and docs, and I've got some great text search/summarizing tools to share.

  • by namenotrequired on 6/26/2025, 6:44:19 AM

    > LLMs … approximate correctness for varying amounts of time. Once that time runs out there is a sharp drop off in model accuracy, it simply cannot continue to offer you an output that even approximates something workable. I have taken to calling this phenomenon the "AI Cliff," as it is very sharp and very sudden

    I’ve never heard of this cliff before. Has anyone else experienced this?

  • by acedTrex on 6/26/2025, 11:25:48 AM

    Hi everyone, author here.

    Sorry about the JS stuff; I wrote this while also fooling around with alpine.js for fun. I never expected it to make it to HN. I'll get a static version up and running.

    Happy to answer any questions or hear other thoughts.

    Edit: https://static.jaysthoughts.com/

    Static version here with slightly wonky formatting, sorry for the hassle.

    Edit2: Should work on mobile now as well; added a quick breakpoint.

  • by beau_g on 6/26/2025, 6:46:02 AM

    The article opens with a statement saying the author isn't going to reword what others are writing, but the article reads as that and only that.

    That said, I do think it would be nice for people to note in pull requests which files in the diff contain AI-generated code. It's still a good idea to look at LLM-generated code and human code through slightly different lenses; the mistakes each makes are often a bit different in flavor, and it would save me time in a review to know which is which. Has anyone seen this at a larger org, and is it of value to you as a reviewer? Maybe some tool sets can already do this automatically (I suppose all these companies that report the % of code that is LLM-generated must have one, if they actually have such granular metrics?)

  • by davidthewatson on 6/26/2025, 7:04:22 AM

    Well said. The death of trust in software is a well-worn path, from the money that funds and founds it to the design and engineering that builds it - at least in the two-guys-in-a-garage startup work I was involved in for decades. HITL is key. Yet even with a human in the loop, you can wind up at Therac-25. That's exactly where hybrid closed-loop insulin pumps are right now: autonomy and insulin don't mix well. If there weren't a moat of attorneys keeping the signal/noise ratio down, we'd already realize that at scale - like the PR teams at three-letter technical universities designed to protect parents from the exploding pressure inside the halls there.

  • by satisfice on 6/26/2025, 2:11:32 PM

    LLMs make bad work— of any kind— look like plausibly good work. That’s why it is rational to automatically discount the products of anyone who has used AI.

    I once had a member of my extended family who turned out to be a con artist. After she was caught, I cut off contact, saying I didn’t know her. She said “I am the same person you’ve known for ten years.” And I replied “I suppose so. And now I realized I have never known who that is, and that I never can know.”

    We all assume the people in our lives are not actively trying to hurt us. When that trust breaks, it breaks hard.

    No one who uses AI can claim “this is my work.” I don’t know that it is your work.

    No one who uses AI can claim that it is good work, unless they thoroughly understand it, which they probably don’t.

    A great many students of mine have claimed to have read and understood articles I have written, yet I discovered they hadn't. What if I were an AI and they received my work and put their name on it as author? They'd be unable to explain, defend, or follow up on anything.

    This kind of problem is not new to AI. But it has become ten times worse.

  • by pfdietz on 6/26/2025, 10:33:23 AM

    There was trust?

  • by DyslexicAtheist on 6/26/2025, 7:01:13 AM

    It's really hard (though not impossible) to use AI to produce meaningful offensive security work that improves defense, because there are way too many guard rails.

    Real nation-state threat actors, on the other hand, face no such limitations.

    On a more general level, what concerns me isn't whether people use it to get utility out of it (that would be silly), but the power imbalance in the hands of a few, and, with ever more people pouring their questions into it, this divide getting wider. But it's not just the people using AI directly; it's also every post online that eventually gets used for training. So to be against it would mean to stop producing digital content.

  • by atemerev on 6/26/2025, 10:00:43 AM

    I am a software engineer who writes 80-90% of my code with AI (sorry, can't ignore the productivity boost), and I mostly agree with this sentiment.

    I found out very early that under no circumstances may you have code you don't understand, anywhere. Well, you may, but not in public, and you should commit to understanding it before anyone else sees it. Particularly before the sales guys do.

    However, AI can help you with learning too. You can run experiments, test hypotheses and burn your fingers so fast. I like it.

  • by tomhow on 6/26/2025, 9:24:43 AM

    [Stub for offtopicness, including but not limited to comments replying to original title rather than article's content]