Judah Taub is the Managing Partner at Hetz Ventures, a venture capital firm based in Tel Aviv. He has joined CTech to share a review of “The Black Swan: The Impact of the Highly Improbable” by Nassim Nicholas Taleb.
Title: “The Black Swan: The Impact of the Highly Improbable”
Author: Nassim Nicholas Taleb
“The Black Swan” is centered on the idea of “our blindness with respect to randomness, particularly large deviations” – that is, extremely rare outlier events called ‘black swans’. These events are by definition unpredictable, but author Nassim Nicholas Taleb, a mathematical statistician, former options trader, and risk analyst, argues that we don’t have to be left completely unprepared when hit with them. Examples of black swans include World War I, the personal computer, the fall of the Soviet Union, the rise of the internet, the September 11th attacks, the financial crisis of 2008, and of course, most recently – the Covid-19 pandemic.
Looking to the past to predict the future assumes the future will behave as the past did. The black swan factor shows why this is not the best way to make predictions. In fact, Taleb is not attempting to predict black swan events – rather, he wants us to understand that they are not as rare as we assume. Black swans are moments of deviation: events that in theory are very rare but actually happen more often than we imagine, and that carry hugely disproportionate influence.
For example: to estimate average height, you could take 20 people at random and calculate the mean, and chances are you won’t be too far off. But what if you do that with wealth? Pick 20 people at random and calculate the average wealth – if Bill Gates happens to be one of them, your ‘average’ person is worth around $6 billion. Taleb argues that much of our world works like the wealth example, where a single outlier or deviation can have a dramatic effect that averages or historical sampling can’t account for.
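The arithmetic behind this contrast can be sketched in a few lines. The figures below are made-up round numbers for illustration (a $120 billion Gates-scale fortune, not real data), but they show why one outlier barely moves a height average yet completely takes over a wealth average:

```python
# Height: values cluster around a typical human scale, so one tall
# person barely shifts a 20-person average.
heights_cm = [170] * 19 + [200]  # one unusually tall person

# Wealth: a single Gates-scale outlier dominates everything else.
# (Illustrative round numbers, not real data.)
wealth_usd = [100_000] * 19 + [120_000_000_000]

avg_height = sum(heights_cm) / len(heights_cm)
avg_wealth = sum(wealth_usd) / len(wealth_usd)

print(avg_height)  # 171.5 cm – the outlier barely moves the average
print(avg_wealth)  # ~6,000,000,000 – the outlier IS the average
```

In Taleb's terms, height lives in a domain where averages are informative, while wealth lives in a domain where a single observation can swamp the rest of the sample.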
Black swans happen more often than we expect, and their disproportionate swings have real-life effects on society. According to Taleb, we can be better prepared for them, despite the assumption that they are rare or impossible to predict. While individual black swans may be unpredictable, preparing for them is not impossible – we can build a ‘robustness’ to negative outcomes and position ourselves to take fuller advantage of positive ones.
The message of “The Black Swan” is even more relevant today than when it was written (2007), especially in the industry we’re in. I identify three reasons:
We’re now coming out of the Covid-19 pandemic, something few people saw coming – a true black swan that will have far-reaching implications for years to come. In the past year alone, we can pile on a sharp downturn in tech and Russia’s invasion of Ukraine – pick your black swan – but it sure seems they are creeping into our day-to-day lives more and more.
We are increasingly surrounded by machines (call them Machine Learning algorithms or AI if you want to sound clever, or deep neural networks if you want to sound really clever), but what they all do is take statistics of what has already happened and project them into the future in some shape or form. Taleb argues this is not always the best way to think about the future, because when it doesn’t work, you’re horribly wrong. You could be a million miles away – statistically speaking, many standard deviations off, not just one or two margins of error.
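A back-of-the-envelope sketch (my own, not from the book) makes this concrete: a Gaussian model fitted to ordinary history treats a 10-sigma move as essentially impossible, yet markets have produced moves of that magnitude within living memory. The function below uses the standard Mills-ratio approximation for the normal tail:

```python
import math

def gaussian_tail(x: float) -> float:
    """Approximate P(Z > x) for a standard normal distribution using
    the Mills-ratio asymptotic exp(-x^2/2) / (x * sqrt(2*pi)),
    which is accurate for large x."""
    return math.exp(-x * x / 2) / (x * math.sqrt(2 * math.pi))

# Under a "the future resembles the past" Gaussian model:
print(gaussian_tail(2))   # ~0.027 – a 2-sigma day: routine
print(gaussian_tail(10))  # ~7.7e-24 – a 10-sigma day: "never", per the model
```

A probability of 10^-24 means the model expects such a day roughly never in the lifetime of the universe – so when one arrives anyway, the model isn't off by a margin of error; it's off by the million miles Taleb describes.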
In VC, it’s the black swans that you’re betting on. Our business as VCs is not to bet on 10 companies each making 2-3x; it’s to bet on 10 companies hoping that one of them returns 100x. We’re trying to live in the world of the black swans rather than the ‘safe geese’, and Taleb applies this thinking to many other areas of our lives as well.
Who Should Read This Book:
I do think it’s especially useful and relevant for those of us in VC investing, as stated above. But there is something so human in all this; everyone can read this book and find something that feels familiar. There’s something universally human about trying to foresee the future. Consider a coin that has landed on ‘heads’ 99 times in a row, and you’re about to toss it one more time. The machine will still tell you it’s a 50/50 chance of heads versus tails, but most people will have very human intuitions about why they’d still bet on heads.
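That human intuition can actually be made precise with a small Bayesian sketch (mine, not the book's): a model that insists the coin is fair says 50/50 forever, but a model that allows for a biased coin learns from 99 straight heads. The helper below applies Laplace's rule of succession under a uniform prior on the coin's bias:

```python
def predictive_p_heads(heads: int, tails: int) -> float:
    """P(next toss is heads) under a uniform Beta(1, 1) prior on the
    coin's bias, updated on the observed tosses (Laplace's rule of
    succession)."""
    return (heads + 1) / (heads + tails + 2)

fixed_model = 0.5                    # "the coin is fair, period"
learned = predictive_p_heads(99, 0)  # allow for the coin being biased

print(fixed_model)         # 0.5
print(round(learned, 3))   # 0.99 – bet on heads
```

The ‘very human feeling’ is, in effect, refusing to assume the model is right – exactly the habit Taleb wants us to cultivate.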
It’s a nice reminder of the limits of ML/AI prediction abilities, and of how many unknown variables are actually all around us.