by BrenBarn on 4/11/2025, 9:19:37 AM
by doctoboggan on 4/11/2025, 6:35:18 AM
This is an excellent essay, and I feel similarly to the author but couldn't express it as nicely.
However, if we are counting on AI researchers to take the advice and slow down, I wouldn't hold my breath waiting. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
by unwind on 4/11/2025, 6:37:31 AM
Ah, this [1] meaning of tillering (bending wood to form a bow), not this [2] (production of side shoots in grasses). The joys of new words.
by red_admiral on 4/11/2025, 2:42:29 PM
The story of playing at damming the creek or on the sand at the seaside is wholesome and brought a smile to my face. Cracking the "puzzle" is almost the bad ending of the game, if playing it isn't fun for you anymore.
People should spend more of their time doing things because they're fun, not because they want to get better at it.
Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have as much fun as I can in my life either way.
by migueldeicaza on 4/11/2025, 1:00:46 PM
Vonnegut said it best:
https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...
by A_D_E_P_T on 4/11/2025, 10:08:24 AM
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
by FollowingTheDao on 4/11/2025, 11:17:28 AM
"It was only once I got it that I realized I no longer could play the game "make as much money as I can.""
Funny, that is what my father taught me when I was 12, because we had compassion. What is it with glorifying all these logic-loving, Spock-like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
It is no wonder the Zizians were birthed from LW.
by praptak on 4/11/2025, 9:55:46 AM
If there's money to be made, there will always be someone with a shovel or a truckload of sparklers who is willing to take the risk (especially if the risk can be externalized to the public) and reap the reward.
by khazhoux on 4/11/2025, 8:31:33 AM
Parents: you know how every day you look at your child and you’re struck with wonder at the amazing and quirky and unique person your little one is?
I swear that’s what lesswrong posters see every day in the mirror.
by profsummergig on 4/11/2025, 6:21:25 AM
Requesting someone to please explain the "coquina" metaphor.
by ziofill on 4/11/2025, 4:41:51 PM
This is a tangent, but I would love so much to be able to give my kids memories of playing in a creek in the backyard...
by Isamu on 4/11/2025, 1:31:01 PM
>After I cracked the trick of tillering
Guide to Bow Tillering:
https://straightgrainedboard.com/beginners-guide-on-bow-till...
by MrBuddyCasino on 4/11/2025, 8:25:44 AM
That was a well-written essay with a non-sequitur AI Safety thing tacked onto the end. His real-world examples were concrete, and the reason to stop escalating was easy to understand ("don't flood the neighbourhood by building a real dam").
The AI angle is purely hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there that is sounder than "the high velocities of steam locomotives might kill you" that people made 200 years ago.
by bogdanoff_2 on 4/11/2025, 3:03:50 PM
The solution to this problem is to choose a "game" that you 100% believe will positively impact the world.
by appleorchard46 on 4/11/2025, 12:49:47 PM
Could someone explain the metaphor? I'm struggling to see the connection between AI and the rest of the post.
by axpvms on 4/11/2025, 7:00:23 AM
My backyard creek also had crocodiles in it.
by DrSiemer on 4/11/2025, 8:42:42 AM
So many articles and comments claim AI will destroy critical thinking in our youths. Is there any evidence that this conviction that many people share is even remotely true?
To me it just seems like the same old knee-jerk Luddite response people have had, since the dawn of time, to any powerful new technology that challenges the status quo. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.
Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.
The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.
It's a nice article. In a way though it kind of bypasses what I see as the main takeaways.
It's not about AI development, it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development"; they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting, because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale beyond what you personally can enjoy. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed, because people would be more interested in tinkering with them than in making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models, and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.