LcStudy:
> practice beating human-like chess engines
> by predicting Leela's best moves
> track progress over time
> distill AlphaZero-like chess into your brain

this project was my claude code vs gpt5 eval; thoughts below
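the core loop (guess a move, check it against the engine's pick, log the result) could be sketched roughly like this — a minimal stdlib-only sketch, assuming positions come pre-annotated with Leela's top move; the `Puzzle`/`Tracker` names and fields are made up for illustration, not LcStudy's actual code:

```python
from dataclasses import dataclass, field

# hypothetical shapes -- not LcStudy's actual code
@dataclass
class Puzzle:
    fen: str        # position the user guesses from
    best_uci: str   # Leela's top move, precomputed offline (e.g. via lc0 analysis)

@dataclass
class Tracker:
    results: list = field(default_factory=list)  # 1 = hit, 0 = miss

    def record(self, puzzle: Puzzle, guess_uci: str) -> bool:
        hit = guess_uci == puzzle.best_uci
        self.results.append(1 if hit else 0)
        return hit

    def accuracy(self, window: int = 50) -> float:
        # rolling accuracy over the last `window` guesses,
        # which is enough to "track progress over time"
        recent = self.results[-window:]
        return sum(recent) / len(recent) if recent else 0.0

t = Tracker()
p = Puzzle(fen="rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1",
           best_uci="e2e4")  # toy example position/move
t.record(p, "d2d4")   # miss
t.record(p, "e2e4")   # hit
print(t.accuracy())   # 0.5
```

a real version would generate puzzles from actual games and call the engine live, but the scoring-and-tracking core is this simple.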
obviously i'm incentivized to praise gpt5, but i'll try to keep this unbiased (and i have some nits / things claude code does better)
1. gpt5 is more accurate than opus

while a ton of stuff beyond accuracy makes agents good (i.e. agency, knowing when to ask questions, communication, etc.), gpt5 just makes fewer mistakes than opus. feels like it has more 'horsepower'
2. claude code is more polished in some ways

claude code is somewhat cleaner than codex cli (i.e. prettier terminal layout, better command truncation, readability), but just in the last three days the feel of codex cli has improved a ton. next week will be even better
3. gpt5 is steerable; opus is opinionated

i find gpt5 more literal but better at instruction following. it might not add flair unless you ask, but it won't needlessly delete files without asking. i overall prefer this, but sometimes opus just goes off and helpfully beautifies
4. gpt5 has more endurance

i'm most surprised by gpt5's clear SOTA endurance (something claude has been great at). i encourage everyone using gpt5 to try being 100x more ambitious than they thought possible. it really *can* tackle day-long edits