by KomoD on 3/9/2024, 4:59:43 PM
> Will "classic servers" be treated like php (still used a lot, but not by new projects)?
No.
by gorkish on 3/11/2024, 3:59:01 PM
I think there is a severe miscommunication of what is meant by "edge".
To me, a technical person, the "edge" is the work-at-home office, a retail store, a 5-person branch, a jobsite, a delivery van. If "edge" were really happening at the scale that marketing departments want you to believe, there would be better hardware available -- short-depth servers, cluster-in-a-box hardware, etc. I don't see this stuff really coming out in any quantity outside of some niche stuff that is clearly targeting Fortune-1000-type large businesses.
I've been around a while, and I think I finally figured out that all of the above applications are just the carrot on the stick. For the everyday business, "edge" just means "server room." All the companies that were stupid enough to remove all on-prem computing not so long ago now need to buy it back, but the MBAs need a new word to avoid the untenable position of having made a mistake. Somehow it reminds me of a cat burying its own shit.
We still want legitimate edge compute hardware, btw!
Hello everyone! I'm sorry in advance if the same question was asked and discussed before, but I'm genuinely looking for your thoughts and experience.
I'm working on a project to simplify architecture and deployment approaches and workflows, very similar to railway.app, but with more focus on the freedom to build more customizable architectures, while being open-source and easy to use without touching terraform, k8s, aws, and stuff like that (this could be described better, I know, but it's not that important for this topic).
The point is, lately everything seems to be shifting to the "edge". At first I thought of it as a very cool way to do frontend, then for all your computing, and now, with things like turso and FLAME by fly.io, you can theoretically forget about servers in the more classic sense entirely.
Another thing is the added abstraction. Perhaps we should treat it like the move from actual hardware to VMs back in the day, but something rubs me the wrong way when I think about deploying everything on infra that is very hard to replicate on your own, instead of having nearly identical Linux boxes available from numerous providers.
Don't get me wrong, I'm really happy that we as an industry are moving forward and coming up with new cool things, but what are your thoughts on the near future? Will "classic servers" be treated like php (still used a lot, but not by new projects)? Or will we have another throwback, like what happened with SSR, htmx and statically typed languages?
I know that all we can do is speculate, but I think that insights from different people can help us build a more solid picture of what's going on today and why.
Please keep in mind that I'm not rooting for crazy microservices, 10-year-old monoliths, or huge dev-ops teams; I prefer simplicity and performance, and I'm generally excited about new ideas. I don't know if my brain is acting defensively about the things I'm working on, if it's my Linux-user soul ranting about "proprietary XYZ", or if it's just the way things go and I should stop bothering.
Thanks to everyone in advance for your thoughts and opinions! And again, sorry if this post is repeating older discussions (I'm not an HN regular).