suffering from chatbot fatigue?
frustrated that singularity was cancelled?
looking for something new to give you hope?
here is my delusional, but "hey, it kinda makes sense" plan to build super-intelligence in my small indie research lab
(note: I'll trade accuracy for pedagogy)
first, a background:
I'm a 33-year-old guy who has spent the last 22 years programming. over that time, I've asked many questions about the nature of computing, and accumulated some quite... peculiar... insights. a few years ago, I built HVM, a system capable of running programs in an esoteric language called "Haskell" on the GPU - yes, the same chip that made deep learning work and sparked this entire AI cycle.
but how does Haskell relate to AI?
well, that's a long story. as the elders might remember, back then, what we called "AI" was... different. nearly 3 decades ago, for the first time ever, a computer defeated the reigning world chess champion, sparking many debates about AGI and the singularity - just like today!
the system, named Deep Blue, was very different from the models we have today. it didn't use transformers. it didn't use neural networks at all. in fact, there was no "model". it was a pure "symbolic AI", meaning it was just a plain old algorithm, which scanned billions of possible moves, faster and deeper than any human could, beating us by sheer brute force.
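(for the curious, here's a tiny Haskell sketch of what "scan every move by brute force" means mechanically - a bare minimax over a toy game tree. it's an illustration only, not Deep Blue's actual algorithm, which used alpha-beta pruning and custom chess hardware; the GameTree type and the scores are made up for the example.)

```haskell
-- Toy sketch of brute-force game-tree search (minimax).
-- NOT Deep Blue's real algorithm; just the general flavor of
-- "explore every line of play and pick the best one".
data GameTree = Leaf Int            -- terminal position with a score
              | Node [GameTree]     -- position with a list of possible moves

-- Score a position by exhaustively exploring every continuation:
-- the maximizing player picks the best child, the minimizer the worst.
minimax :: Bool -> GameTree -> Int
minimax _          (Leaf score) = score
minimax maximizing (Node moves)
  | maximizing = maximum (map (minimax False) moves)
  | otherwise  = minimum (map (minimax True)  moves)

main :: IO ()
main = print (minimax True (Node [Node [Leaf 3, Leaf 5], Node [Leaf 2, Leaf 9]]))
-- prints 3: max of (min 3 5 = 3) and (min 2 9 = 2)
```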
this sparked a wave of promising symbolic AI research: evolutionary algorithms, knowledge graphs, automated theorem proving, SAT/SMT solvers, constraint solvers, expert systems, and much more. sadly, over time, the approach hit a wall. hand-built rules didn't scale, symbolic systems weren't able to *learn* dynamically, and the bubble burst. a new AI winter started.
it was only years later that a curious alignment of factors changed everything. researchers dusted off an old idea - neural networks - but this time, they had something new: GPUs. these graphics chips, originally built for rendering video games, turned out to be perfect for the massive matrix multiplications that neural nets required. suddenly, what took weeks could be done in hours. deep learning exploded, and here we are today, with transformers eating the world.
but here's the thing: we only ported *one* branch of AI to GPUs - the connectionist, numerical one. the symbolic side? it's still stuck in the CPU stone age.
Haskell is a special language because it unifies the language of proofs (i.e., the idiom mathematicians use to express theorems) with the language of programming (i.e., what devs use to build apps). this makes it uniquely suited for symbolic reasoning - the exact kind of computation Deep Blue used, but now we can run it massively in parallel on modern hardware.
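(if "types as theorems, programs as proofs" sounds abstract, here's a toy illustration of the Curry-Howard idea in plain Haskell - not HVM or Bend code, just a hint of how a type can be read as a proposition and a function of that type as its proof.)

```haskell
-- A glimpse of the "proofs as programs" idea (Curry-Howard).
-- Read each type as a logical proposition and the program as its proof.
-- Toy illustration only; real proof assistants are stricter about totality.

-- Proposition: A and B implies B and A.  Proof: swap the pair.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Proposition: (A implies B) and (B implies C) implies (A implies C).
-- Proof: function composition.
implyTrans :: (a -> b) -> (b -> c) -> (a -> c)
implyTrans f g = g . f

main :: IO ()
main = print (andComm (1 :: Int, "ok"))
```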
(to be more accurate, massive GPU parallelism isn't the only thing HVM brings to the table. it turns out it also yields *asymptotic* speedups in some cases, and this is a key reason to believe in our approach: past symbolic methods weren't just computationally starved. they were exponentially slow, in an algorithmic sense. no wonder they didn't work. they had no chance to.)
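(as a loose analogy - and it is only an analogy, plain Haskell sharing rather than HVM's optimal evaluation - here's how sharing repeated subcomputations can change the asymptotics of a computation, not just its constant factor.)

```haskell
-- Analogy only: ordinary lazy sharing in Haskell, not HVM's mechanism,
-- but it shows how sharing turns an exponential algorithm into a linear one.

-- Exponential: every call recomputes both branches from scratch.
fibSlow :: Int -> Integer
fibSlow n | n < 2     = fromIntegral n
          | otherwise = fibSlow (n - 1) + fibSlow (n - 2)

-- Linear: each value is computed once and shared through the lazy list.
fibsShared :: [Integer]
fibsShared = 0 : 1 : zipWith (+) fibsShared (tail fibsShared)

main :: IO ()
main = do
  print (fibSlow 30)         -- number of calls grows exponentially with n
  print (fibsShared !! 200)  -- still instant, thanks to sharing
```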
my thesis is simple: now that I can run Haskell on GPUs, and given this asymptotic speedup, I'm in a position to resurrect these old symbolic AI methods, scale them up by orders of magnitude, and see what happens. maybe, just maybe, one of them will surprise us.
our first milestone is already in motion: we've built the world's fastest program/proof synthesizer, which I call SupGen. or NeoGen. or QuickGen? we'll release it as an update to our "Bend" language, making it publicly available around late October.
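(SupGen itself isn't shown here, so take this as a deliberately naive Haskell sketch of what "program synthesis" means mechanically: enumerate candidate programs, smallest first, and keep the ones consistent with a few input/output examples. the toy expression language is made up for the example.)

```haskell
-- Naive enumeration-based synthesis over a toy integer-expression language.
-- Not SupGen; only an illustration of the general idea.
data Expr = X | Lit Int | Add Expr Expr | Mul Expr Expr
  deriving Show

eval :: Int -> Expr -> Int
eval x X         = x
eval _ (Lit n)   = n
eval x (Add a b) = eval x a + eval x b
eval x (Mul a b) = eval x a * eval x b

-- Enumerate all expressions of exactly the given size.
exprsOfSize :: Int -> [Expr]
exprsOfSize 1 = X : map Lit [0 .. 3]
exprsOfSize n =
  [ op a b
  | op <- [Add, Mul]
  , k  <- [1 .. n - 2]
  , a  <- exprsOfSize k
  , b  <- exprsOfSize (n - 1 - k)
  ]

-- Return the first program (smallest first) matching every example.
synthesize :: [(Int, Int)] -> Maybe Expr
synthesize examples =
  case [ e | n <- [1 .. 7], e <- exprsOfSize n
           , all (\(i, o) -> eval i e == o) examples ] of
    (e : _) -> Just e
    []      -> Nothing

main :: IO ()
main = print (synthesize [(1, 3), (2, 5), (3, 7)])
-- finds an expression equivalent to 2*x + 1
```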
then, later this year, we'll use it as the foundation for a new research program, seeking a pure symbolic architecture that can actually learn from data and build generalizations - not through gradient descent and backpropagation, but through logical reasoning and program synthesis.
our first experiments will be very simple (not unlike GPT-2), and the main milestone will be to have a "next token completion tool" that is 100% free from neural nets.
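(just to make "next token completion with zero neural nets" concrete, here's a trivial, purely symbolic strawman in Haskell - no weights, no gradients, just pattern matching over a corpus. it is nowhere near a real symbolic learner, only a baseline to anchor the idea.)

```haskell
import Data.List (isSuffixOf)
import Data.Maybe (listToMaybe)

-- Predict the token that followed the longest suffix of the context
-- seen anywhere in the training corpus. Purely symbolic: no weights.
predictNext :: Eq a => [a] -> [a] -> Maybe a
predictNext corpus context =
  listToMaybe
    [ next
    | k <- [length context, length context - 1 .. 1]  -- longest suffix first
    , let suffix = drop (length context - k) context
    , (prefix, next : _) <- splits corpus
    , suffix `isSuffixOf` prefix
    ]
  where
    splits xs = [ splitAt i xs | i <- [1 .. length xs - 1] ]

main :: IO ()
main = do
  let corpus = words "the cat sat on the mat and the cat ran away"
  print (predictNext corpus (words "the cat"))  -- Just "sat"
```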
if this works, it could be a ground-breaking leap beyond transformers and deep learning, because it is an entirely new approach that would most likely get rid of many of the GPT-inherited limitations AIs have today: not just tokenizer issues (like counting the R's in "strawberry"), but fundamental issues that prevent GPTs from learning efficiently and generalizing.
delusional? probably
worth trying? absolutely
(now guess how much was AI-generated, and which model I used)