by mrweasel on 6/2/2025, 8:33:55 AM
by rednafi on 6/2/2025, 9:07:57 AM
Not saying people should read the article, but it's a short one and hides some nuanced points behind an ambiguous title. Probably intentional. I liked how it showed plenty of examples of scaleups ending up with egg on their faces after overpromising AI capabilities. LLMs are useful, but still too costly to run and too error-prone to be let loose on anything mission-critical. I use these tools daily, but that doesn't mean I want AI rammed down my throat at every turn.
A few of my favorite tech authors and OSS contributors have gone full AI-parrot mode lately. Everything they post is either "how they use AI," "how amazing it is," or "how much it sucks." I think a bunch of us have had enough and just want to move on. But try saying anything even slightly critical about AI, or about how the big companies are forcing it into everything in the name of "progress," and suddenly you're a Luddite.
I'm just glad more folks in the trenches are tired of the noise too and are starting to call it out.
by dgb23 on 6/2/2025, 8:47:46 AM
> For a good 12 hours, over the course of 1 1/2 days, I tried to prompt it such that it yields what we needed. Eventually, I noticed that my prompts converged more and more to be almost the code I wanted. After still not getting a working result, I ended up implementing it myself in less than 30 minutes.
This is very common. We need a name for this phenomenon. Any analogies/metaphors?
I generated (huh!) some suggestions and the funniest one seems to be:
> Just One More Prompt Syndrome (JOMPS)
But I assume there's a better one that describes or hints at the irony that describing precisely what a program should be doing _is_ programming.
by fennecfoxy on 6/2/2025, 8:30:41 AM
Revontulet is my favourite myth. The fire fox of the northern lights. There's lots of great stories and animations of the myth, like this one: https://www.youtube.com/watch?v=sN5goxeTfjc
by 6510 on 6/2/2025, 7:24:10 PM
I gave Copilot a screenshot of my work schedule three times. The first time it extracted the data flawlessly and organized it the way I asked. The second time it did everything wrong until I talked it through the process in tiny steps. The third time I asked it to repeat the same steps for the new schedule, and it got it wrong just like the initial second attempt. I then tried to talk it through the process step by step and it got everything wrong!? With each step it messed up things it had gotten right the step before. It eventually asked me to upload the image again.
I suppose I could upload the image to a table-to-JSON website and provide Copilot with the JSON, but the point was to make things easier. In my mind there is nothing complicated about the structure of a table, but if I ask Copilot to merely extract the text from a row starting with some text, it goes insane. Optical character recognition is an idea from the 1870s, and there were working implementations 50 years ago. I've read a bunch of comments online about models getting progressively worse, but my experience was over the span of a few weeks.
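For what it's worth, the table-to-JSON route really does sidestep the model entirely for the lookup step. A minimal sketch, assuming the converter emits one object per row (the field names and sample data here are made up for illustration):

```python
import json

# Hypothetical output from a table-to-JSON converter: one object per
# schedule row. Field names are invented for this example.
schedule_json = """
[
  {"name": "Alice", "mon": "08-16", "tue": "off"},
  {"name": "Bob",   "mon": "off",   "tue": "12-20"}
]
"""

def row_starting_with(rows, text):
    """Return the first row whose name starts with the given text, or None."""
    return next((r for r in rows if r["name"].startswith(text)), None)

rows = json.loads(schedule_json)
print(row_starting_with(rows, "Ali"))  # the Alice row, deterministically
```

Unlike re-prompting, this gives the same answer every time, which is exactly the property the screenshot workflow was missing.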
Could they be degrading the quality to make newer models look even better?
by FranzFerdiNaN on 6/2/2025, 8:28:09 AM
Wonder how many people are only going to read the headline, because it isn't claiming that AGI is coming in 6 months at all.
by creesch on 6/2/2025, 8:24:50 AM
Did you drop that ";-)" from the title because you want to showcase how few people actually read articles posted on HN and just respond to titles, or did you not read further yourself and not even notice it? :P
by onion2k on 6/2/2025, 9:20:09 AM
> Google didn’t make books obsolete.
I don't think anyone expected it to make all books obsolete, but you'd struggle to buy an encyclopaedia that isn't for kids these days.
by 29athrowaway on 6/2/2025, 8:23:06 AM
No, we are not. Pretrained models do not learn from their prompts. Their state is volatile.
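That "volatile state" point is easy to demonstrate: chat models only appear to remember because the client re-sends the entire conversation on every request. A sketch, with `call_model` as a stand-in for any stateless chat-completion API (not a real library call):

```python
# The model's weights never change between requests; it sees only the
# messages passed in this one call. Drop the history and the "memory"
# is gone. call_model is a placeholder for a real stateless API.
def call_model(messages):
    # A real API call would go here; the reply can only depend on
    # the messages provided in this single request.
    return f"(reply based on {len(messages)} messages)"

history = []
for user_turn in ["My name is Ada.", "What is my name?"]:
    history.append({"role": "user", "content": user_turn})
    reply = call_model(history)  # the full history is re-sent every time
    history.append({"role": "assistant", "content": reply})

# A fresh call with an empty history has no access to earlier turns:
print(call_model([{"role": "user", "content": "What is my name?"}]))
```

The only "learning" happening at inference time is whatever context the caller chooses to include in the prompt.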
by Flemlo on 6/2/2025, 8:36:17 AM
Don't compare Bitcoin and the metaverse with AI.
Millions of people use LLMs daily.
Veo 3 just came out.
People get fired due to LLMs and GenAI replacing them.
Google has been using AI-generated code in the background for years.
Playing around with Cursor for a few days is way too naive a basis for judging the current state of coding and AI.
That's pretty much my take. LLMs aren't a bad idea; they are useful in certain fields, but they aren't living up to the sales pitch and they are too expensive to run.
My personal take is that the whole chat-based interface is a mistake. I have no better solution, but for anything beyond a hallucinating search engine, it's not really the way we need to interact with AI.
In my mind we're 6 months away from one of the biggest crashes in tech history.