I think GPT-5 should only be a tiny update against short timelines.
EPOCH argues that GPT-5 isn't based on a base model scale-up. Let's assume this is true.
What does this say about pre-training?
Option 1: pre-training scaling has hit a wall (or at least massively reduced gains).
Option 2: It just takes longer to get the next pre-training scale-up step right. There is no fundamental limit; we just haven't figured it out yet.
Option 3: No pre-training wall, just basic economics. Most tasks people use the models for right now might not require bigger base models, so focusing on usability is more important.
What is required for AGI?
Option 1: More base model improvements required.
Option 2: RL is all you need. The current base models will scale all the way if we throw enough RL at them.
Timelines only shift in the world where pre-training has hit a wall and more base model improvements are required. In all other worlds, there is no major update.
I personally think GPT-5 should be a tiny update toward slower timelines, but most of my short-timeline beliefs come from RL scaling anyway.