The much-anticipated GPT-5 is finally here. However, two testers of the new model interviewed by Reuters said that despite GPT-5's strong programming ability and its capacity to solve scientific and mathematical problems, the leap from GPT-4 to GPT-5 is not as big as the one from GPT-3 to GPT-4. The reason for the bottleneck is not hard to understand: there is not enough data. There is an imprecise but vivid saying that during training GPT-4 essentially scraped everything that could be pulled from the Internet, as if it had already read all there is to read. That is why Ilya Sutskever, former chief scientist of OpenAI, said last year that while compute keeps growing, the amount of data is not growing with it.

Frankly, GPT-5 may mark the ceiling for generalist AI for quite a while; the next move for AI companies has to be expert AI. In the interview referenced below, for example, Ram Kumar of OpenLedger mentions that many parties (such as Trust Wallet) want to integrate AI into their wallets but cannot use general-purpose models directly: the models do not meet their specific needs and must be customized for the scenario. Bloomberg, likewise, built BloombergGPT, trained on Bloomberg's vast proprietary terminal, news, enterprise, and text data (a corpus of more than 700 billion tokens). It is precisely this closed corpus that lets it outperform general-purpose LLMs on financial tasks. Another example is Tesla's FSD autonomous driving, trained on billions of miles of fleet video and telemetry data that only Tesla has and its competitors do not. A few days ago Musk hinted that if Tesla could also use locally collected driving data in China, FSD could have swept the recent Dongchedi (懂车帝) assisted-driving comparison test.

So the coming white-hot phase of AI competition will be fought on the expert-data track; relying on freely scraped, ordinary Internet data will not be enough. That is why data attribution systems like OpenLedger will become the new infrastructure. Think of it this way: valuable data is valuable not only because it is scarce, but because, treated as an asset, it can generate returns for its holder. Just as a house generates rent, data should generate data rent (a toy sketch of this idea follows below). Ram says in the video that Hugging Face is great, but roughly 90% of the datasets there are not that useful for commercial deployment. So if you want to commercialize expert AI, you first need a data ownership system, so that holders of valuable data are willing to bring it out, get rewarded, and taste the upside, which in turn encourages more holders of valuable data to contribute, forming a positive cycle.

In the past, expert resources were expensive and reserved for the privileged, since an expert's time was limited. In the AI era, what happens when the expert is an expert AI? Marginal cost drops dramatically, and expert or quasi-expert services become accessible to ordinary people. Looking forward to the OL mainnet launch.
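For illustration only, here is a minimal Python sketch of the "data rent" idea: an inference fee gets split among data contributors in proportion to attribution scores. Every name and number below is hypothetical, and this is not OpenLedger's actual mechanism; it is just a sketch of how attribution could turn data into a yield-bearing asset.

```python
# Toy illustration of "data rent": split an inference fee among data
# contributors in proportion to attribution scores. All names and numbers
# are hypothetical; this is not OpenLedger's actual protocol.

def split_data_rent(fee: float, attribution: dict[str, float]) -> dict[str, float]:
    """Return each contributor's share of `fee`, proportional to its score."""
    total = sum(attribution.values())
    if total <= 0:
        return {name: 0.0 for name in attribution}
    return {name: fee * score / total for name, score in attribution.items()}

if __name__ == "__main__":
    # A single paid inference; three hypothetical specialist datasets influenced it.
    rent = split_data_rent(
        fee=0.02,
        attribution={
            "wallet_support_logs": 0.5,
            "onchain_tx_annotations": 0.3,
            "generic_web_text": 0.2,
        },
    )
    for name, share in rent.items():
        print(f"{name}: {share:.4f}")  # e.g. wallet_support_logs earns half the fee
```

In a real system the attribution scores would come from the model's attribution pipeline and the payout would be settled on-chain, but the proportional split above is enough to capture the "rent" intuition: data that keeps influencing paid inferences keeps earning.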
Quoted post — Openledger (@OpenLedgerHQ), 5 Aug at 14:11:
.@TrustWallet is now an @OpenLedgerHQ client, officially building with our tech. Proud to support one of Web3’s most trusted wallets as it embraces verifiable AI. Hear @Ramkumartweet and @EowynChen break it down on @therollupco.