For The Love of The Game
Why the path to making money in trading runs through work you'd better find interesting
Data mining and vibe quanting are essentially the same thing. Both fundamentally and philosophically.
Fundamentally, data mining says: “I’ll try enough rules until something sticks.” Vibe quanting says: “I’ll get AI to try enough rules until something sticks.” Same thing, different packaging.
Philosophically, they’re both ways of asking the market to reward you for skipping the hard part. Both are attempts to extract money from markets without understanding why the money is there.
I get it. I went through this phase myself. Building increasingly elaborate systems, torturing parameters until the equity curve looked right, confusing a good backtest with a good idea. It felt like progress. It felt like doing something smart.
It was neither of those things.
If you’ve tried some version of this and felt like something was off, you’re right. Something is off. And today I want to show you something better.
The key thing about an edge is that every dollar of trading profit you make comes from someone else. That someone is losing money on the trade you’re winning. So the first question, before you do anything else, is: who’s paying you, and why are they willing to keep doing it? Extra points for asking: why do I, some random dude trading in my pyjamas from a place most of the world has never heard of, get to compete in this trade?
This isn’t an abstract, philosophical exercise. It’s the cornerstone.
A wealth manager running a balanced portfolio has to rebalance when allocations get out of whack. They’re mandated to do it. Near month-end, if stocks ran hard, they sell stocks and buy bonds. The timing is semi-predictable. The flows are price-insensitive and large enough to push price around. They’re not choosing to lose money on the rebalance; it’s just not their primary concern. Their job is to maintain the target allocation.
I get to make money from it as an unsophisticated degenerate trading from a laptop because the edge sucks enough that the serious players aren’t interested. It doesn’t come around every day. It’s noisy. It doesn’t always work out. It’s not worth their time.
That’s an edge. You know who’s paying, why they’re paying, and why they’ll keep paying. And crucially, why you get to take the other side.
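To put rough numbers on that rebalancing flow, here’s a back-of-envelope sketch. The 60/40 split and the 8% rally are made up for illustration; the mechanics are the point, not the figures.

```python
# Back-of-envelope: how big is the month-end rebalance after stocks run hard?
# All numbers below are illustrative, not real mandates.

def rebalance_trades(stock_val: float, bond_val: float, target_stock_wt: float) -> dict:
    """Dollar trades needed to restore a target stock/bond allocation."""
    total = stock_val + bond_val
    target_stock = target_stock_wt * total
    stock_trade = target_stock - stock_val  # negative = sell stocks
    return {"stocks": stock_trade, "bonds": -stock_trade}

# A 60/40 portfolio starts the month at $60m stocks / $40m bonds.
# Stocks rally 8% over the month; bonds go nowhere.
stock_val = 60e6 * 1.08   # $64.8m
bond_val = 40e6           # $40.0m

trades = rebalance_trades(stock_val, bond_val, target_stock_wt=0.60)
print(f"Sell ${-trades['stocks'] / 1e6:.1f}m of stocks, buy ${trades['bonds'] / 1e6:.1f}m of bonds")
```

Call it almost two percent of the portfolio, hitting the market on a semi-predictable schedule. Multiply that across every balanced mandate running roughly the same calendar and you can see how it pushes price around.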
You could learn every machine learning algorithm ever invented and you still wouldn’t find this by running optimisations. You’d find it by understanding how markets work, then going and looking for it in the data.
Same with crypto carry. Leveraged speculators on perpetual futures pay funding to the other side. That funding is the price of their leverage. They keep paying because they want the leverage more than they care about the cost. You collect it by being the boring counterparty. You know who pays and why.
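If you want to see the shape of the arithmetic, here’s a minimal sketch. The 0.01% per eight hours is a common exchange baseline, but actual funding varies constantly, and this ignores fees, basis moves, and the fact that funding can and does flip negative.

```python
# Minimal sketch of delta-neutral crypto carry: hold spot, short the perp
# against it, collect funding from the leveraged longs.
# The rate below is illustrative, not a forecast.

funding_rate_8h = 0.0001       # 0.01% per 8-hour funding interval
intervals_per_year = 3 * 365   # funding typically accrues every 8 hours

# Gross annualised carry, before fees, slippage and basis risk,
# and assuming funding stays positive (it won't always):
annual_carry = funding_rate_8h * intervals_per_year
print(f"Gross annualised funding carry: {annual_carry:.1%}")  # roughly 11%
```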
Now compare: “My backtest of a 14-day RSI crossover with a 50-day moving average filter returned 23% annually from 2019-2025.” Cool. Who’s paying you? Why? Will they keep paying?
You have no idea. You’ve found a pattern. Maybe it’s real. Probably it’s noise.
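For concreteness, here’s one plausible reading of that rule in Python. The 30/70 thresholds are my guess at what “crossover” means here, which is exactly the problem: every literal is a number somebody chose.

```python
# One plausible reading of "14-day RSI crossover with a 50-day MA filter".
# The 30/70 thresholds are assumptions; the spec doesn't pin them down.
import numpy as np
import pandas as pd

def rsi(prices: pd.Series, period: int = 14) -> pd.Series:
    """Wilder-smoothed RSI (itself one of several common variants)."""
    delta = prices.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def positions(prices: pd.Series) -> pd.Series:
    """Long when RSI crosses up through 30 with price above the 50-day MA;
    flat when RSI crosses down through 70."""
    r = rsi(prices, period=14)      # dial 1: RSI lookback
    ma = prices.rolling(50).mean()  # dial 2: filter length
    entry = (r > 30) & (r.shift() <= 30) & (prices > ma)  # dial 3: entry level
    exit_ = (r < 70) & (r.shift() >= 70)                  # dial 4: exit level
    pos = pd.Series(np.nan, index=prices.index)
    pos[entry] = 1.0
    pos[exit_] = 0.0
    return pos.ffill().fillna(0.0)

# Demo on a random-walk "price" series (illustration only).
px = pd.Series(100 * np.exp(np.random.default_rng(0).normal(0, 0.01, 500).cumsum()))
print(positions(px).value_counts())
```

Lookback, filter length, two thresholds, the RSI variant itself, which flavour of moving average. That’s six-ish dials before you’ve touched position sizing, and every dial is a chance to fit noise. Hold that thought.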
People resist this. I think I know why. Creating backtests feels technical. Like you’re doing serious work. It’s the most adjacent thing to a trading strategy, so making a good backtest feels like a sensible objective.
But it’s not about the backtest. It’s about the edge.
The backtest simulates a set of rules on past market data. Those rules are not the edge, but they’re designed to harness it. The backtest’s purpose is to tell you if you could have made money in the past by harnessing the edge in a particular way. A more sophisticated use is to explore trade-offs in how you operationalise the strategy to fit your constraints. But that’s a story for another time.
The point I want to make is that because the backtest sits above and outside of the edge, it’s a terrible research tool for understanding the edge. Technically, the backtest is like a complicated transformation of market data into a set of realised returns. It’s an aggregator. You lose a ton of information in the process.
There’s path dependency. Randomness. If the set of rules made money today, maybe it was a result of the edge, maybe it was luck. It’s hard to untangle. At best, it’s a wasteful, inefficient use of data. And it looks at the edge only indirectly.
Market data is insanely low on signal relative to noise. Go looking for patterns without a hypothesis or organising principle and you will find them, because that’s what happens when you search a noisy dataset with enough degrees of freedom (and even simple backtests have many degrees of freedom). Some will pass your robustness tests. A few will make money for a while. But you won’t know which ones, and you won’t know for how long, because you never understood why they worked. This is the same problem whether you’re the one running the optimisations or you’ve outsourced it to ChatGPT. The AI is faster at finding patterns that don’t mean anything. That’s nothing to cheer about.
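You can demonstrate this to yourself in a few lines. The toy simulation below mines 1,000 random “strategies” over pure noise. By construction there is no edge anywhere, and it finds respectable Sharpe ratios anyway.

```python
# Toy demonstration: mine enough rules over pure noise and some look
# like winners. There is no signal here, by construction.
import numpy as np

rng = np.random.default_rng(42)
n_days, n_rules = 1250, 1000   # ~5 years of daily data, 1,000 candidate rules

# Daily "returns" of 1,000 random strategies on a market with zero edge.
daily_rets = rng.normal(loc=0.0, scale=0.01, size=(n_rules, n_days))

# Annualised Sharpe ratio of each candidate.
sharpes = daily_rets.mean(axis=1) / daily_rets.std(axis=1) * np.sqrt(252)

print(f"Best Sharpe found: {sharpes.max():.2f}")        # typically ~1.4
print(f"Rules with Sharpe > 1: {(sharpes > 1).sum()}")  # typically ~10
```

Pick the best of that thousand and it will backtest beautifully. It still means nothing, and nothing in the backtest will tell you that.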
And before you say it: no, solving the multiple testing problem doesn’t fix this either.
I’ve talked to a lot of smart people who think the answer is better statistical hygiene. Track everything you’ve tried. Apply corrections for multiple comparisons. Use AI to keep a ledger of every hypothesis tested so you can adjust your significance thresholds accordingly. It sounds smart. It sounds rigorous. But it isn’t solving the problem you actually have.
Even if you perfectly correct for multiple testing, all you’ve established is that a pattern is unlikely to be noise. That’s a statistical statement about the past. It tells you nothing about whether the pattern has a reason to persist. “Unlikely to be noise” and “driven by a structural mechanism that will keep generating returns” are completely different claims, and no amount of statistical correction bridges the gap between them.
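For what it’s worth, the mechanics of the correction are trivial, which is part of the seduction. Here’s the simplest version, Bonferroni; fancier tools like the deflated Sharpe ratio make the same kind of statement.

```python
# Bonferroni correction: if you ran n tests, your best candidate must
# clear a p-value of alpha / n before you call it "not noise".
n_tests = 1000    # backtests you (or your AI) actually ran
alpha = 0.05      # your original significance threshold

adjusted_alpha = alpha / n_tests
print(f"Required p-value after {n_tests} tests: {adjusted_alpha:.5f}")  # 0.00005

# Clearing this bar says the pattern probably isn't a fluke of this
# dataset. It says nothing about who was paying, why they were paying,
# or whether they'll keep paying.
```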
It’s like fishing a river you’ve never studied. You’ve got the best rod money can buy. You’ve logged every cast so you never repeat one that didn’t produce a fish. Your casting methodology is statistically impeccable. But you’ve never learned where fish hold, what they feed on, or how the current moves. You’re just casting into open water, very efficiently, catching nothing.
The bloke down the bank with a dodgy reel and ten years on this river knows the fish sit behind that rock when the water’s up, and move to the eddy below the bend when it drops. He catches fish every time. Not because his gear is better or his technique is fancier. Because he understands the river.
These statistical techniques feel like doing the hard, sophisticated work. That’s what makes them so seductive. Monte Carlo simulations, walk-forward optimisation, combinatorial cross-validation, tracking tested hypotheses with AI. It all sounds like rigour. But the actual hard work, the work that separates people who make money from people who don’t, is sitting with a blank page and asking: “Who would pay me to take this trade, and why?” No algorithm does that for you. Because the question requires understanding markets, not processing data.
There’s a second problem with data mining, and I think it’s worse.
You never learn anything.
When you do the work of understanding an edge, something happens beyond the immediate trade. Each time you go through the loop, observe something in markets, form a hypothesis about why it exists, look for evidence in the data, learn from what you find, you get a little smarter about how markets work. A few years of this and something changes. You start to develop intuition about where edges live. You can sniff out a good idea before you touch the data. You waste less time on dead ends. When a strategy stops working, you have a framework for understanding why, which tells you whether to wait it out or move on.
That’s compound learning. Each cycle builds on the last. Your understanding deepens, your pattern recognition sharpens, your research becomes more productive.
Now picture the data miner. Three years in. Thousands of backtests. Maybe a couple of things that worked for a while. When they stopped, no idea why, so the only option was to mine again. Backtest number one-thousand taught exactly as much about markets as backtest number one: nothing.
Zero compounds to zero, no matter how many cycles you do.
One path is a treadmill. The other is a staircase.
Your curiosity is a cheat code. Let me explain.
The hypothesis-driven path requires doing slow, careful thinking that looks nothing like “building a trading strategy.” It looks like reading about market structure. Thinking about who participates and what their constraints are. Sketching out why a particular flow might create a predictable distortion. It’s more like scientific exploration than engineering. And if you’re only focused on the money, this step isn’t interesting enough. You’ll skip it, or rush through it, because the interesting part, the part that feels like progress, is building the system. Writing the code, running the backtest, seeing the equity curve. That’s where the dopamine is. And it’s a trap.
But here’s what I’ve seen, running Trade Like a Quant for years now.
About a third of the people who go through the Bootcamp discover that they love this work. They’re wired for it. The market puzzles are interesting to them, not just as a means to make money, but as problems worth solving. Understanding why wealth managers create predictable flows, or why funding rates behave differently on Hyperliquid versus Binance, that stuff gets them going. Those people have a massive advantage, and it has nothing to do with maths or programming. Their curiosity means they’ll do the work that everyone else skips. They’ll build the compound learning that turns into real trader smarts. The money follows.
Another third realise they’d rather harvest risk premia semi-passively than do deep active research. Something they can manage with a small monthly time commitment and appropriate expectations. Great outcome. There’s nothing wrong with knowing what you want.
And the remaining third go “this isn’t for me” and get a refund. Also fine (more than fine, actually - I consider this a massive win).
The worst outcome isn’t any of these. The worst outcome is spending years on a path you don’t enjoy and doesn’t work, because you never stopped to figure out whether the actual work of trading (the thinking, the research, the uncertainty) was something you found interesting in its own right.
It’s got to be about more than the money. Not because money doesn’t matter. Of course it matters. No one ever came to trading intending not to make money. But because the path to making money in active trading runs directly through work that you’ll only do well if you find it worth doing for its own sake.
This is what we teach in the Trade Like a Quant Bootcamp: how to think about edge, how to identify who pays you and why, how to do research that builds compound learning instead of compound frustration. If any of this landed, go check it out. And if you’ve been nodding along because you already know you’re wired this way, you might be one of the rare breed who ends up joining us in RW Pro.

