The four hats concept is clean, and "understand the effect well enough to explain it" feels like the right bar for discretionary strategies, but maybe less so for machine learning?
Nobody demands a chess engine explain its moves in human-intuitive terms - we just care that it wins. For automated systems, Hat 1 might be less about interpreting why a model finds edge and more about stress-testing whether your validation can fool you.
Machine learning is a Hat 2 tool. It helps you implement. It doesn't tell you whether an effect is real, why it exists, or whether it'll persist.
It’s definitely not an edge. It’s not a class of strategy. It’s a tool. Confusing a tool with an actual edge is a really common mistake. I made this one early on and I’m sure I’m not alone.
If anything, using ML makes Hat 1 more critical. The more expressive your model, the better it is at fitting noise. Without a mechanism, you have no way to distinguish an edge from a beautifully overfit backtest.
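The "expressive models fit noise" point is easy to demonstrate. A minimal sketch (pure NumPy, hypothetical toy data, not a real backtest): fit a high-degree polynomial to pure noise and it will score well in sample while carrying nothing out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: 40 "returns" with no signal in them at all.
x = np.linspace(0.0, 1.0, 40)
y = rng.standard_normal(40)

def r2(y_true, y_pred):
    """Coefficient of determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Expressive model: degree-15 polynomial (16 free parameters on 40 points).
model = np.polynomial.Polynomial.fit(x, y, deg=15)
in_sample = r2(y, model(x))

# Fresh draws from the same signal-free process = "live trading".
y_new = rng.standard_normal(40)
out_of_sample = r2(y_new, model(x))

print(f"in-sample R^2:     {in_sample:.2f}")      # positive: the model "explains" noise
print(f"out-of-sample R^2: {out_of_sample:.2f}")  # typically negative: there was no signal
```

The backtest-equivalent here is the in-sample score: it looks like edge, but the generating process had none, which is exactly why a mechanism (Hat 1) is the only tiebreaker.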
The chess engine analogy actually illustrates the confusion: a chess engine wins because the rules never change. Markets change precisely because people try to exploit the same patterns.
Love the framework, and it seems applicable to all analysis, even beyond trading. Similar to Adam Grant’s modes of thinking model a few years back. Looking forward to reading more of your work!
Powerful framework indeed
Thank you! I will have to check out “Think Again”. Looks very useful.