Pre-Script: While writing this, I had a feeling that I was just describing the process of good science (i.e. not saying anything new) and putting a modern lens on it. With that said, this lens was still novel to me (and may be to others lacking an academic understanding of good science) — so I’ll put it out there.
It’s a win-win — and an anti-fragile contribution. Perhaps everybody is already aware of what I’m about to say, imposing a small cost in time and attention on all those who read it (and perhaps I even get some things wrong, increasing that cost).
But if there is a chance that some people get a lot from it, and can utilize this lens in a meaningful way, that outsized benefit is well worth the small risk of a shitty Medium blog (in an anti-fragile calculus).
Without further ado …
When knowledge is competitively hoarded — think academia, medical research, insider trading — we are all worse for it.
And yet, today’s systems are built to reward those who legally “possess” knowledge — and therefore, those who “discover” knowledge — which means this competitive knowledge hoarding is the status quo.
As a consequence:
- We make slower progress in medicine
- We make slower progress in science
- We have unfair systems of wealth creation, where personal connections and access to inside information make the competition fundamentally unfair
- We make missteps across all fields of human endeavor, because failed efforts — a valuable form of knowing what doesn’t work — are hoarded as well.
How can we move beyond this phase, to a status quo in which any discovery, failure, or knowledge gained is a benefit for all … and so sharing discoveries, failures, and new knowledge is in the interest of all?
Consider the example of Beth, a 25-year-old student pursuing a PhD in math.
Beth’s PhD is contingent on her solving a math problem that has never been solved before.
After 3 years of doctoral work, Beth has solved 80% of the problem (she doesn’t know this precisely, but she can sense that she is quite close).
And then, the news hits — Adam, a PhD student on the other side of the world, solved Beth’s problem.
What happens?
With her success based on the task “solving a problem that has never been solved before,” does Beth trash her years of work and start a new problem?
What happens to all the knowledge and discoveries she made along the way — knowledge that Adam may not even have discovered?
In short, over these years of work, Beth has become a subject-matter expert on this problem. What will happen to her subject-matter expertise?
What about this:
Once Adam solves Beth’s problem, Beth’s PhD is no longer contingent on solving the same problem (because that would be redundant: a consolation-prize waste of energy, clearly not to the benefit of all).
Instead, Beth’s PhD is contingent on using her subject-matter expertise to break Adam’s solution.
If she succeeds, we avoid ingesting a scientific “discovery” that is fundamentally flawed (critical, given that roughly 40% of “discoveries” across social science, psychology, economics, and medicine — the basis of many of our most fundamental ideas about modern humanity — fail to replicate in later experiments, meaning we cannot rely on them).
At that point, Beth, Adam, or any other PhD student who has been working on the same problem has a new opportunity to solve it, and all of their subject-matter expertise can still be put to fruitful use towards scientific discovery.
And if Beth fails, we have a true scientific discovery that has passed replication and “stress-testing”, and is fundamentally stronger for it.
In short, we have a win-win — an anti-fragile system of knowledge discovery.
In a fragile system of knowledge discovery, we would take every new “discovery” as a given — hoping that it is true.
In each of those unsubstantiated discoveries, we face a significant “black swan” risk: the discovery may be false without us even realizing it, which could lead to decades of time, and incalculable resources, misused based on a misunderstanding of the world.
In an anti-fragile system of knowledge discovery, we would take each new “discovery” as something new to be falsified.
Perhaps in 80% of cases (or 60%, looking at our track record in social science, psychology, economics, and medicine), the original discovery would hold up — in which case there is some small cost to attempting to break something that cannot be broken.
But in the 20% of cases (40% in a replication crisis) when that discovery is flawed?
We recognize the flaw at the outset — and potentially save ourselves decades of time, and incalculable resources, because we corrected our understanding of the world.
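To make this trade-off concrete, here is a toy expected-cost comparison of the two systems. Every number in it (the share of flawed discoveries, the cost of a stress test, the cost of an undetected flaw) is an illustrative assumption for the sake of the sketch, not data:

```python
def expected_cost(p_flawed, cost_stress_test, cost_undetected_flaw):
    """Expected cost per discovery under each system.

    Fragile: every discovery is accepted as given, so a flawed one
    goes undetected and incurs the large downstream cost.
    Anti-fragile: every discovery is stress-tested (a small fixed
    cost), so flawed ones are caught before the downstream cost hits.
    """
    fragile = p_flawed * cost_undetected_flaw
    anti_fragile = cost_stress_test  # paid on every discovery, sound or not
    return fragile, anti_fragile

# Assumed numbers: 40% of findings are flawed (the replication-crisis
# figure above), a stress test costs 1 unit, an undetected flaw costs 50.
fragile, anti_fragile = expected_cost(0.4, 1, 50)
print(fragile, anti_fragile)  # 20.0 vs 1
```

Under these made-up numbers, stress-testing everything is cheaper by an order of magnitude; the point of the sketch is only that the conclusion holds whenever the downstream cost of an undetected flaw dwarfs the cost of a replication attempt.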
From the perspective of all the people involved in an anti-fragile system of knowledge discovery …
Academics and researchers no longer need to worry about being the first to discover new knowledge (although they would all like to be, for reasons we can understand) — because they will each have an opportunity to meaningfully contribute to that specific discovery, whether they come first or not.
With less to lose from others’ success, this would ideally lower the risks of sharing knowledge, and accelerate everyone involved towards faster true (i.e. validated, replicable) discoveries.
Across medicine (e.g. cancer research), science (e.g. efficiency of renewable energy), technology (e.g. responsible uses of AI), and infrastructure (e.g. economic systems like UBI), we can appreciate the global benefits of faster true discoveries — and faster recognition of flawed discoveries.
However, going into the private sphere, there are additional complications. Presumably, when money is involved, there is significantly more to gain by being first (e.g. valuable knowledge protection).
Is there any way to reconcile this?