Honestly, I’m not shocked. I even posted that scaling would hit a point of diminishing returns.
“• OpenAI cofounder Ilya Sutskever told Reuters that results from scaling up AI models like this have plateaued.
• "At some point, the scaling paradigm breaks down," OpenAI researcher Noam Brown said at a recent conference.”
The world is messy and has a LOT of conflicting information. Humans can call “Bullshit!”
How do you draw sound inferences when the information itself conflicts?
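One mechanized stand-in for the human “Bullshit!” reflex is natural-language inference (NLI): score pairs of claims for contradiction and flag the ones that can’t both be true. Here’s a minimal sketch in Python, assuming the Hugging Face transformers library and the public roberta-large-mnli checkpoint — my choice for illustration, not anything the quoted researchers proposed:

```python
# Toy contradiction detector: score a pair of claims with an
# off-the-shelf NLI model. Assumes `transformers` and `torch`
# are installed; "roberta-large-mnli" is a public checkpoint
# chosen here for illustration only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

def contradiction_prob(claim_a: str, claim_b: str) -> float:
    """Probability that claim_b contradicts claim_a, per the NLI model."""
    inputs = tokenizer(claim_a, claim_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze()
    # roberta-large-mnli label order: 0=CONTRADICTION, 1=NEUTRAL, 2=ENTAILMENT
    return probs[0].item()

if __name__ == "__main__":
    a = "Scaling up model size keeps yielding large gains."
    b = "Results from scaling up models have plateaued."
    print(f"contradiction probability: {contradiction_prob(a, b):.2f}")
```

Flagging the conflict is the easy half, though. Deciding which side to trust is the part humans still do with context the model doesn’t have.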