by paxys on 6/13/2023, 6:40:02 PM
> For use by startup investments of Nat Friedman and Daniel Gross
> Reach out if you want access
I'm confused by the last two bullet points. Is this website only meant to be used by these "startup investments" or can anyone fill out the linked form?
by dekhn on 6/13/2023, 6:04:21 PM
Can the creators explain in more detail: how is this different from (for example) the OpenAI cluster that MSFT built in Azure? Is it hosted in an existing cloud provider, or in its own data center? Which data center? Who admins the system, and is there an SRE team in case it goes down during training? And can you attempt to run the same benchmarks that Top500 uses to determine what your double-precision FLOPS are, and give that number in addition to your "10 exaflops" (which I believe is single precision)?
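For rough scale, here's a back-of-envelope sketch of the precision gap being asked about. Assumptions (not official figures): the announced count of 2,512 H100s, and NVIDIA's approximate per-GPU spec-sheet peaks for the H100 SXM, dense (no sparsity); sustained throughput would be lower.

    # Cluster peak throughput varies by ~2 orders of magnitude with precision.
    # GPU count and per-GPU TFLOPS are assumptions from public spec sheets.
    GPUS = 2512
    PEAK_TFLOPS = {            # per H100 SXM, approximate, dense
        "fp64":        34,     # vector FP64
        "fp64_tensor": 67,     # FP64 on tensor cores
        "bf16_tensor": 990,    # BF16/FP16 on tensor cores
        "fp8_tensor":  1979,   # FP8 on tensor cores
    }
    for prec, tflops in PEAK_TFLOPS.items():
        print(f"{prec:12s} {GPUS * tflops / 1e6:6.2f} exaflops peak")

This prints ~0.09 EF at FP64 vs ~4.97 EF at dense FP8 (roughly 10 EF with 2:4 sparsity), which is presumably where the headline "10 exaflops" comes from, and why a Top500 (FP64 LINPACK) number would look very different.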
by mobileexpert on 6/13/2023, 5:47:18 PM
Emad from Stability estimates this at ~$4M/month: https://twitter.com/emostaque/status/1668666509298745344
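That figure is easy to sanity-check. A minimal sketch, assuming 2,512 GPUs (the announced count) and a hypothetical ~$2/GPU-hour reserved rate (not a quoted price):

    # Rough sanity check on the ~$4M/month estimate.
    gpus = 2512
    rate_per_gpu_hour = 2.00    # USD, assumed reserved-cluster rate
    hours_per_month = 730       # ~24 * 365 / 12
    print(f"${gpus * rate_per_gpu_hour * hours_per_month / 1e6:.1f}M / month")
    # -> $3.7M / month, in the same ballpark as Emad's estimate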
by bigdict on 6/13/2023, 5:59:19 PM
lmao are they trolling with the naming
by MR4D on 6/13/2023, 7:34:34 PM
The mainframe is dead. Long live the new mainframe. We just call it DGX because it’s cool.
Even the leasing model has made a comeback!
by ryanwaggoner on 6/13/2023, 6:21:48 PM
Same guys behind https://aigrant.org; maybe it's mainly a way to get deal flow?
by Ameo on 6/13/2023, 5:45:59 PM
Would be very cool to see some pictures of the cluster; sounds like an impressive build!
by braindead_in on 6/13/2023, 6:44:21 PM
Someone please start the GPT-5 training before the regulation kicks in.
by 005 on 6/13/2023, 7:38:43 PM
Looks like they've reserved a bunch of compute from Lambda Labs?
Edit: Based on this tweet, it looks very similar: https://twitter.com/LambdaAPI/status/1668676838044868620
by brucethemoose2 on 6/13/2023, 7:41:21 PM
> Big enough to train llama 65B in ~10 days
Y'all could totally eat Meta's lunch and train an open LLM with all the innovations that have come since LLaMA's release. Other startups are trying, but they all seem bottlenecked by training time/resources.
This could be where the next Stable Diffusion 1.5 comes from.
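The "~10 days" claim checks out roughly with the standard ~6 * params * tokens estimate for training FLOPs. A sketch under stated assumptions: 2,512 H100s at ~990 dense BF16 TFLOPS each, LLaMA 65B's reported 1.4T training tokens, and a guessed model FLOPs utilization (MFU); none of this is official.

    # Sanity check: time to train a 65B model on this cluster.
    params = 65e9
    tokens = 1.4e12                    # LLaMA 65B's reported training tokens
    train_flops = 6 * params * tokens  # ~5.5e23 FLOPs (standard approximation)
    cluster_peak = 2512 * 990e12       # ~2.5e18 FLOP/s, dense BF16, assumed
    for mfu in (0.25, 0.40):
        days = train_flops / (cluster_peak * mfu) / 86400
        print(f"MFU {mfu:.0%}: ~{days:.1f} days")
    # -> ~10 days at 25% MFU, ~6 days at 40%, consistent with the claim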
by redsh on 6/13/2023, 7:31:00 PM
The new lean startup: 3,291 kg.
Forget LINPACK and friends. Jack Dongarra is going to need to switch to a new metric for supercomputers: kilograms of H100 GPUs (about 3,300, give or take a few grams, for this system).