The pre-baked-in-CoT era was the best because you had the freedom to structure the CoT however you wanted. gpt4-0314 was awesome for this.
Once they baked CoT in, things started to go downhill. Every prompt became subject to the same abstractions.
All baked-in CoT does is flood the context window to steer the output.
Even back in gpt 3.5, people knew the best outputs came after "priming the pump" to set up a frame first (a sketch of that pattern follows below).
The obsession with being able to one-shot everything made the product less malleable.
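For the curious, a minimal sketch of that "priming the pump" pattern, assuming the OpenAI Python SDK; the model name, prompt wording, and the primed_answer helper are hypothetical placeholders, not anything the post prescribes:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MODEL = "gpt-4"  # placeholder; swap in whatever model you use

def primed_answer(question: str) -> str:
    """Two-pass prompt: first build a frame, then answer inside it."""
    # Pass 1: "prime the pump" -- have the model lay out its own
    # structure for the problem before attempting an answer.
    frame = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "user",
             "content": f"Before answering, list the key facts, "
                        f"constraints, and steps relevant to: {question}"},
        ],
    ).choices[0].message.content

    # Pass 2: answer with the frame in context, so the output is guided
    # by a structure we chose rather than a baked-in one.
    return client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "user",
             "content": f"Frame:\n{frame}\n\n"
                        f"Using this frame, answer: {question}"},
        ],
    ).choices[0].message.content

print(primed_answer("Why did latency spike after the cache change?"))
```

The point of the two calls is malleability: the frame is yours to shape per task, instead of every prompt flowing through the same built-in reasoning scaffold.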

Aug 8, 07:29
Is Chain-of-Thought Reasoning of LLMs a Mirage?
... Our results reveal that CoT reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions. This work offers a deeper understanding of why and when CoT reasoning fails, emphasizing the ongoing challenge of achieving genuine and generalizable reasoning.
... Our findings reveal that CoT reasoning works effectively when applied to in-distribution or near in-distribution data but becomes fragile and prone to failure even under moderate distribution shifts.
In some cases, LLMs generate fluent yet logically inconsistent reasoning steps. The results suggest that what appears to be structured reasoning can be a mirage, emerging from memorized or interpolated patterns in the training data rather than logical inference.
... Together, these findings suggest that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text.
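To make the distribution-shift claim concrete, here is an illustrative probe, not the paper's actual benchmark: hold the task fixed, vary only the operand range between training-like and shifted prompts, and score whether the final stated answer is correct. Everything below (make_problem, accuracy, the solve callable) is a hypothetical sketch.

```python
import random
from typing import Callable

def make_problem(lo: int, hi: int) -> tuple[str, int]:
    """Two-operand addition; the (lo, hi) range sets the 'distribution'."""
    a, b = random.randint(lo, hi), random.randint(lo, hi)
    return f"What is {a} + {b}? Think step by step, then end with 'Answer: <n>'.", a + b

def accuracy(solve: Callable[[str], str], lo: int, hi: int, n: int = 50) -> float:
    """Fraction of problems where the model's final 'Answer:' line is correct.

    A fluent chain of steps with a wrong final number counts as a failure,
    matching the fluent-but-inconsistent behavior the abstract describes.
    """
    correct = 0
    for _ in range(n):
        prompt, truth = make_problem(lo, hi)
        for line in reversed(solve(prompt).splitlines()):
            if line.strip().lower().startswith("answer:"):
                digits = "".join(c for c in line if c.isdigit())
                correct += digits == str(truth)
                break
    return correct / n

# With a real model wrapper `solve` (not shown), the probe would be:
#   in_dist = accuracy(solve, 1, 99)            # operand range seen everywhere
#   shifted = accuracy(solve, 10_000, 99_999)   # moderate shift: longer operands
# A large gap between the two scores is the "mirage" signature described above.

if __name__ == "__main__":
    # Dummy stand-in so the sketch runs end to end; swap in a model call.
    print(accuracy(lambda p: "step 1... step 2...\nAnswer: 0", 1, 99, n=10))
```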
