• by taylodl on 4/29/2025, 3:09:50 PM

    > In modern, well-structured codebases with decent linting and tests, human-written code is often the weakest link.

    This is a statement written by someone with little work experience. It seems trivially obvious until you get to work and realize what the real problem is: incomplete and/or inconsistent requirements. Close on their heels are poorly documented APIs. Poor architecture and design round out the list.

    Writing the code? That's the easy part. Maybe that's what we should be emphasizing to people wanting to be professional software developers: the code itself has never been the hard part. That's table stakes.

    Am I supposed to be impressed that AI has taken the easiest part of code development and has made it a little bit easier? Maybe? Don't forget I still have to create tests because I need evidence the code actually does what it's claimed to do. Which is ironic, because test creation and management is an area software developers really struggle with and now it's more important than ever!

  • by addoo on 4/29/2025, 2:52:24 PM

    > Humans

    Please describe in more specific terms. Are we talking non-technical, intern, junior, or senior experienced humans?

    Literally just yesterday I was diverted to help someone who is a senior developer, but a novice in Python itself, figure out why code that AI helped them write was completely busted. They were extra perplexed because most of the code was identical to a block of logic we use today (and works), but in this new context didn’t work at all. Turns out whatever the AI did, it didn’t have a concept of method overloads, so the types being passed around were suddenly wrong.

    AI works well for people who know nothing (it can do things for them that work well enough), or people who know ‘everything’ (it can get them 95% of the way, they can use their experience to find and fill the remaining 5%). It’s absolutely terrible for people with middling experience.
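    The overload pitfall addoo describes can be sketched in a few lines. This is a hypothetical reconstruction (the class and method names are invented, not from the actual codebase): Python has no method overloading, so a second definition with the same name silently replaces the first, and calls with the "other" type still run but with the wrong semantics.

    ```python
    class Parser:
        def load(self, path: str) -> str:
            # First definition: accepts a file path.
            return f"loaded from {path}"

        def load(self, data: bytes) -> str:
            # Second definition with the same name: in Python this
            # *replaces* the str version entirely -- no overloading.
            return f"parsed {len(data)} bytes"

    p = Parser()
    print(p.load(b"\x00\x01"))   # calls the surviving (bytes) version
    print(p.load("config.yml"))  # no error, but it's the bytes version:
                                 # len() of a str, wrong semantics slip through
    ```

    The code runs without raising anything, which is exactly why it's confusing to debug: the bug only shows up in behavior. The idiomatic fix is a single method that dispatches on type explicitly (or `functools.singledispatchmethod`), rather than two same-named definitions.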

  • by Juliate on 4/29/2025, 2:41:40 PM

    Sounds very much like generated LinkedIn content: no substance, a miracle-PR tone, signs that the author has no experience with the promoted solution's limitations, and no call to action or discussion.

  • by ferguess_k on 4/29/2025, 3:03:09 PM

    > Tools like Cursor, properly configured, consistently produce higher-quality code than humans—cleaner, faster, and bug-free.

    Just checked my phone. We are still in 2025, not 2035 or further.

  • by JohnFen on 4/29/2025, 2:36:58 PM

    > Tools like Cursor, properly configured, consistently produce higher-quality code than humans

    This seems very doubtful to me. Do you have evidence?

  • by thesuperbigfrog on 4/29/2025, 2:48:14 PM

    >> In modern, well-structured codebases with decent linting and tests, human-written code is often the weakest link.

    I call BS.

    If a human has never written the code, how will AI generate it?

    If a human does not know how to verify that code works correctly, how can the code (regardless of who writes it) be verified?

    Do you trust AI to write the code that controls the airplane you fly in?

    Would you trust your life (literally) to AI-generated code?