Would love to see this happen as well. But in practice, I'm unsure how to implement something like this.
Does the fed govt simply subsidize the cost for open labs to acquire this hardware? That would be an ongoing subsidy (new hardware is needed frequently now), and how do you select which labs get it? Napkin math for 10K H200s is probably north of $300M if we use the simple assumption of ~$30K per H200. And this is just GPU hardware acquisition; you also need somewhere to run these, plus opex to maintain them.
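A rough sketch of that napkin math (the unit price, power draw, PUE, and electricity rate are all illustrative assumptions, not quoted figures):

```python
# Back-of-envelope cost of a 10K-GPU open lab.
# Every constant here is an assumption for illustration.
NUM_GPUS = 10_000
UNIT_PRICE_USD = 30_000          # assumed ~$30K per H200

hardware_cost = NUM_GPUS * UNIT_PRICE_USD
print(f"Hardware acquisition: ${hardware_cost / 1e6:,.0f}M")     # $300M

# Opex floor: power alone (ignores staff, networking, real estate).
POWER_W_PER_GPU = 700            # assumed H200-class board power
PUE = 1.3                        # assumed datacenter efficiency overhead
USD_PER_KWH = 0.08               # assumed industrial electricity rate
HOURS_PER_YEAR = 24 * 365

annual_power_cost = (
    NUM_GPUS * POWER_W_PER_GPU / 1_000 * PUE * USD_PER_KWH * HOURS_PER_YEAR
)
print(f"Annual power alone:   ${annual_power_cost / 1e6:,.0f}M")  # ~$6M/yr
```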
If you force existing compute owners to carve out some amount of their supply to provide to these labs, they'll need some form of subsidy as well. Most of these companies say they are supply constrained right now too.
In any case, it does seem like we're headed toward the creation of a new compute paradigm. The paradigm so far has revolved around scaling up co-located compute. No doubt, there will still be Manhattan-sized datacenter buildouts in the US and elsewhere. But there will also be smaller compute islands of varying size, connected by fiber and the like. When these become the standard constraints and fundamental limitations, they will push the broader AI research community in new, unexplored directions.
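To make that constraint concrete, here's a hedged back-of-envelope (the model size, gradient precision, and link speeds are all assumed) showing why fiber-linked islands push toward different training techniques than a single co-located cluster:

```python
# Lower-bound time to exchange one full set of gradients over a link.
# All numbers are illustrative assumptions, not measurements.
MODEL_PARAMS = 70e9              # assume a 70B-parameter model
BYTES_PER_PARAM = 2              # assume bf16 gradients
GRADIENT_BYTES = MODEL_PARAMS * BYTES_PER_PARAM

INTRA_DC_GBPS = 400              # assumed InfiniBand-class fabric in one DC
INTER_ISLAND_GBPS = 10           # assumed long-haul fiber between islands

def sync_seconds(payload_bytes: float, link_gbps: float) -> float:
    """Bandwidth-only lower bound for one full gradient exchange."""
    return payload_bytes * 8 / (link_gbps * 1e9)

print(f"co-located sync:   {sync_seconds(GRADIENT_BYTES, INTRA_DC_GBPS):.1f} s")
print(f"inter-island sync: {sync_seconds(GRADIENT_BYTES, INTER_ISLAND_GBPS):.1f} s")
# ~2.8 s vs ~112 s per naive full sync: distributed setups have to cut
# communication (sync less often, compress gradients, change the algorithm).
```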
The downstream impact could be a large and growing divergence in the research, model architectures, economics, etc. produced by the largest, closed AI labs (those working with effectively massive single datacenters) and by those (likely academics and decentralized AI companies) that use more distributed compute clusters (i.e., the small but numerous compute islands). It's unclear how this turns out for either side (and ultimately for the consumers of the models each produces), but it seems like the direction things are headed.
You could even argue we've seen glimpses of this already. Chinese labs have fundamentally different compute constraints than OpenAI, for example, and they had to innovate on training/inference techniques because of it. Not a perfect analogy, but maybe it helps elucidate what "small steps" toward a new paradigm look like; over time, these small steps compound and produce something that looks and functions quite differently from what the other path produces.

Aug 4 at 22:08
To solve this, the key resource threshold is to have multiple open labs with 10000+ GPUs each.
Multiple labs make it so we are not beholden to big technology companies' good graces to release models. These institutions increase innovation + derisk this crucial technology.