Most use cases don't need fine-tuning.
Before you look into fine-tuning, try the following:
1. Better prompts
2. Richer context
3. A more suitable model
Fine-tuning is expensive, brittle, and often unnecessary. It should be the last thing you try, not the first.
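To make the "richer context" alternative concrete, here is a minimal sketch: instead of fine-tuning a model on labeled tickets, inline a few examples and any relevant context directly into the prompt. The task, labels, and field names below are invented for illustration.

```python
# Hypothetical support-ticket classification via few-shot prompting
# instead of fine-tuning. Examples and labels are made up.

FEW_SHOT_EXAMPLES = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I upload a photo", "bug"),
]

def build_prompt(ticket: str, context: str = "") -> str:
    """Assemble a classification prompt from examples and optional context."""
    lines = ["Classify the support ticket as 'billing' or 'bug'.", ""]
    if context:
        lines += [f"Context: {context}", ""]
    for text, label in FEW_SHOT_EXAMPLES:  # richer context via examples
        lines += [f"Ticket: {text}", f"Label: {label}", ""]
    lines += [f"Ticket: {ticket}", "Label:"]
    return "\n".join(lines)

print(build_prompt("My invoice shows the wrong amount"))
```

Swapping examples in and out of the prompt is a config change; fine-tuning the same behavior means a dataset, a training run, and a new model to deploy.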
You can probably solve 90% of the problems at a company with a combination of branching (if/else) and iteration (for/while).
Stop overthinking and overengineering everything.
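Taken literally, the point above looks like this: many everyday tasks reduce to a loop and a conditional. A toy example with invented data, no framework or abstraction layer required:

```python
# Flag overdue invoices with nothing but iteration and branching.
# The data and the 30-day threshold are made up for illustration.

invoices = [
    {"id": 1, "days_overdue": 0},
    {"id": 2, "days_overdue": 45},
    {"id": 3, "days_overdue": 12},
]

overdue = []
for invoice in invoices:              # iteration (for/while)
    if invoice["days_overdue"] > 30:  # branching (if/else)
        overdue.append(invoice["id"])

print(overdue)  # → [2]
```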
One of the amazing details of GPT-5 is the router that sits in front of the different models.
It’s been a rocky start so far, but routing is one of the ideas that will last and become crucial going forward.
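How GPT-5's router works internally isn't public, so this is only a hypothetical sketch of the general pattern: a cheap classifier sits in front of several models and picks one per request. The model names and the keyword heuristic are invented.

```python
# Hypothetical model router: route each query to a model tier based on a
# crude complexity heuristic. In practice the classifier would itself be
# a learned model, not keyword matching.

def route(query: str) -> str:
    """Pick a model tier for a query (names and rules are made up)."""
    reasoning_markers = ("prove", "derive", "step by step", "debug")
    if any(marker in query.lower() for marker in reasoning_markers):
        return "large-reasoning-model"   # hard queries get the big model
    if len(query.split()) > 100:
        return "standard-model"          # long inputs get a mid tier
    return "small-fast-model"            # everything else stays cheap

print(route("What's the capital of France?"))     # small-fast-model
print(route("Prove that sqrt(2) is irrational"))  # large-reasoning-model
```

The appeal is economic as much as architectural: most traffic is easy, so sending it to a small model cuts cost and latency while the expensive model handles the queries that actually need it.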