
Jim Fan
NVIDIA Director of Robotics & Distinguished Scientist. Co-Lead of GEAR lab. Solving Physical AGI, one motor at a time. Stanford Ph.D. OpenAI's 1st intern.
Vibe Minecraft: a multi-player, self-consistent, real-time world model that allows building anything and conjuring any object. The function of tools and even the game mechanics themselves can be programmed by natural language, such as "chrono-pickaxe: revert any block to a previous state in time" and "waterfalls turn into rainbow bridges when unicorns pass by". Players collectively define and manipulate a shared world.
The neural sim takes as input a *multimodal* system prompt: game rules, asset pngs, a global map, and easter eggs. It periodically saves game states as a sequence of latent vectors that can be loaded back into context, optionally with interleaved "guidance texts" to allow easy editing. Each gamer has their own explicit stat json (health, inventory, 3D coordinate) as well as implicit "player vectors" that capture higher-order interaction history.
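The state described above splits cleanly into explicit and implicit parts. Here is a minimal sketch of what such a checkpoint could look like; all class and field names are hypothetical, invented for illustration:

```python
from dataclasses import dataclass, field

import numpy as np

@dataclass
class PlayerState:
    # Explicit per-gamer stats, serializable as a flat JSON blob.
    health: int
    inventory: dict[str, int]
    position: tuple[float, float, float]   # 3D world coordinate

@dataclass
class Checkpoint:
    # A saved game state: a sequence of latent vectors, optionally
    # interleaved with "guidance texts" (index, text) for easy editing.
    latents: np.ndarray
    guidance: list[tuple[int, str]] = field(default_factory=list)
    players: dict[str, PlayerState] = field(default_factory=dict)
```

The implicit "player vectors" would live inside `latents`; only the stat JSON is human-readable.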
Game admins can create a Minecraft multiverse because latents from different servers are compatible. Each world can seamlessly cross with another to spawn new worlds in seconds. People can mix & match with their friends' or their own past states. "Rare vectors" can emerge as some players would inevitably wander into the bizarre, uncharted latent space of the world model. Those float matrices can be traded as NFTs. The wilder the things you try, the more likely you are to mine rare vectors.
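The world-crossing operation could be as simple as blending two latent sequences. A minimal sketch, assuming compatible latent dimensions; linear interpolation is a placeholder, since a real world model would more likely learn the crossover operator:

```python
import numpy as np

def crossover(latents_a: np.ndarray, latents_b: np.ndarray,
              alpha: float = 0.5) -> np.ndarray:
    # Blend two worlds' latent sequences into a seed for a new world.
    # alpha controls how much of world B bleeds into world A.
    T = min(len(latents_a), len(latents_b))
    return (1.0 - alpha) * latents_a[:T] + alpha * latents_b[:T]
```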
Whoever ships Vibe Minecraft first will go down in history as altering the course of gaming forever.

Would love to see the FSD Scaling Law, as it’s the only physical data flywheel at planetary scale. What’s the “emergent ability threshold” for model/data size?

Elon Musk · Aug 6, 16:02
Tesla is training a new FSD model with ~10X params and a big improvement to video compression loss.
Probably ready for public release end of next month if testing goes well.
This is game engine 2.0. Some day, all the complexity of UE5 will be absorbed by a data-driven blob of attention weights. Those weights take as input game controller commands and directly animate a spacetime chunk of pixels.
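The "game engine 2.0" loop reduces to a single autoregressive map from controller input to pixels. A toy sketch of that interface, with a dummy stand-in for the learned attention weights (nothing here is a real API):

```python
import numpy as np

def toy_model(latent, cmd):
    # Toy stand-in for the data-driven blob of attention weights: shift the
    # latent by the controller command and "render" a flat frame from it.
    latent = latent + cmd
    value = int(abs(latent.sum())) % 256
    frame = np.full((720, 1280, 3), value, dtype=np.uint8)
    return latent, frame

def run_neural_engine(model, controller_stream, latent):
    # One autoregressive step per controller command: the model maps
    # (latent, command) -> (next latent, next spacetime chunk of pixels).
    frames = []
    for cmd in controller_stream:
        latent, frame = model(latent, cmd)
        frames.append(frame)
    return frames
```

The point of the sketch: there is no scene graph, no renderer, no physics solver in the loop, only weights.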
Agrim and I were close friends and coauthors back at Stanford Vision Lab. So great to see him at the frontier of such cool research! Congrats!

Agrim Gupta · Aug 5, 22:14
Introducing Genie 3, our state-of-the-art world model that generates interactive worlds from text, enabling real-time interaction at 24 fps with minutes-long consistency at 720p. 🧵👇
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It leads to a cognitive dissonance for people outside the field, "so, robots can parkour & breakdance, but why can't they take care of my dog?" Trust me, I got asked by my parents about this more than you think ...
The "Robot Moravec's paradox" also creates the illusion that physical AI capabilities are way more advanced than they truly are. I'm not singling out Unitree, as it applies widely to all recent acrobatic demos in the industry. Here's a simple test: if you set up a wall in front of the side-flipping robot, it will slam into it at full force and make a spectacle. Because it's just overfitting that single reference motion, without any awareness of the surroundings.
Here's why the paradox exists: it's much easier to train a "blind gymnast" than a robot that sees and manipulates. The former can be solved entirely in simulation and transferred zero-shot to the real world, while the latter demands extremely realistic rendering, contact physics, and messy real-world object dynamics - none of which can be simulated well.
Imagine training LLMs not on the internet, but on a purely hand-crafted text console game. Roboticists got lucky. We happen to live in a world where accelerated physics engines are so good that we can get away with impressive acrobatics using literally zero real data. But we haven't yet discovered the same cheat code for general dexterity.
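The "blind gymnast" cheat code usually involves domain randomization: jitter the physics every episode so the sim-trained policy transfers zero-shot. A minimal sketch; the parameter names and ranges are illustrative, not taken from any real engine:

```python
import random

def randomize_sim(base_params: dict) -> dict:
    # Resample physics parameters per episode so the policy never
    # overfits to one simulator configuration.
    return {
        "friction":       base_params["friction"] * random.uniform(0.5, 1.5),
        "mass_scale":     base_params["mass_scale"] * random.uniform(0.8, 1.2),
        "motor_delay_ms": random.uniform(0.0, 20.0),
    }
```

Note what this does not cover: photorealistic rendering, contact-rich manipulation, deformable objects. That gap is exactly where the paradox lives.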
Till then, we'll still get questioned by our confused parents.
My bar for AGI is far simpler: an AI cooking a nice dinner at anyone’s house for any cuisine. The Physical Turing Test is very likely harder than the Nobel Prize. Moravec’s paradox will continue to haunt us, looming larger and darker, for the decade to come.

Thomas Wolf · Jul 19, 2025
My bar for AGI is an AI winning a Nobel Prize for a new theory it originated.
I've been a bit quiet on X recently. The past year has been a transformational experience. Grok-4 and Kimi K2 are awesome, but the world of robotics is a wondrous wild west. It feels like NLP in 2018 when GPT-1 was published, along with BERT and a thousand other flowers that bloomed. No one knew which one would eventually become ChatGPT. Debates were heated. Entropy was sky high. Ideas were insanely fun.
I believe the GPT-1 of robotics is already somewhere on arXiv, but we don't know exactly which one. Could be world models, RL, learning from human video, sim2real, real2sim, etc., or any combo of them. Debates are heated. Entropy is sky high. Ideas are insanely fun, instead of squeezing the last few % on AIME & GPQA.
The nature of robotics also greatly complicates the design space. Unlike the clean world of bits for LLMs (text strings), we roboticists have to deal with the messy world of atoms. After all, there's a lump of software-defined metal in the loop. LLM normies may find it hard to believe, but so far roboticists still can't agree on a benchmark! Different robots have different capability envelopes - some are better at acrobatics while others at object manipulation. Some are meant for industrial use while others are for household tasks. Cross-embodiment isn't just a research novelty, but an essential feature for a universal robot brain.
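Cross-embodiment means one brain must target many bodies with different capability envelopes. The simplest possible sketch of that interface, with hypothetical names throughout; a real cross-embodiment policy would learn this mapping rather than truncate and pad:

```python
from dataclasses import dataclass

@dataclass
class EmbodimentSpec:
    # Hypothetical description of one robot's capability envelope.
    name: str
    dof: int          # actuated degrees of freedom
    has_hands: bool

def project_action(action: list[float], spec: EmbodimentSpec) -> list[float]:
    # Map a shared, embodiment-agnostic action vector onto one robot's
    # joint space: truncate if the body has fewer DoF, zero-pad if more.
    if len(action) >= spec.dof:
        return action[:spec.dof]
    return action + [0.0] * (spec.dof - len(action))
```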
I've talked to dozens of C-suite leads from various robot companies, old and new. Some sell the whole body. Some sell body parts such as dexterous hands. Many others sell the shovels to manufacture new bodies, create simulations, or collect massive troves of data. The business idea space is as wild as research itself. It's a new gold rush, the likes of which we haven't seen since the 2022 ChatGPT wave.
The best time to enter is when non-consensus peaks. We're still at the start of a loss curve: there are strong signs of life, but we're far, far from convergence. Every gradient step takes us into the unknown. But one thing I do know for sure: there's no AGI without touching, feeling, and being embodied in the messy world.
On a more personal note: running a research lab comes with a whole new level of responsibility. Giving updates directly to the CEO of a $4T company is, to put it mildly, both thrilling and all-consuming of my attention weights. Gone are the days when I could stay on top of, and dive deep into, every piece of AI news.
I’ll try to carve out time to share more of my journey.

The Physical Turing Test: your house is a complete mess after a Sunday hackathon. On Monday night, you come home to an immaculate living room and a candlelight dinner. And you couldn't tell whether a human or a machine had been there. Deceptively simple, insanely hard.
It is the next North Star of AI. The dream that keeps me awake at 12 am in the lab. The vision for the next computing platform that automates chunks of atoms instead of chunks of bits.
Thanks Sequoia for hosting me at AI Ascent! Below is my full talk on the first principles to solve general-purpose robotics: how we think about the data strategy and scaling laws. I assure you it will be 17 minutes you don't regret!
Some day in the next decade, we will have robots in every home, every hospital, and every factory, doing every dull and dangerous job with superhuman dexterity. That day will be known as "Thursday". Not even Turing would dare to dream up our lifetime in his wildest dreams.

signüll · Apr 21, 2025
we crossed the turing test & no one gave a shit. no parades. no front page headlines. just… a casual shrug. like “oh yeah, the machines are smart enough to fool us now. anyway, what’s for lunch?”
that silence tells you everything about the pace we’re moving at.
back in my cs classes, the turing test was treated like the final boss. now every breakthrough is another god damn tuesday.