
💡 Single Engine Medicine: Why We Don't Have Cancer Cures Yet

Medicine is missing the one thing that's revolutionised every other field: the network effect. While AI runs on distributed intelligence from millions, medicine operates like a hospital hierarchy. Why the next cure might come from India, not Harvard.
Photo by Dibakar Roy / Unsplash

After two decades in surgery, I thought I understood why medical progress felt frustratingly slow. Then a simple question from a running buddy shattered that assumption completely.

"Doctor, we've made incredible advances in AI and space—why hasn't medicine found a cure for cancer?"

Most doctors would deflect this question. I could have given him the standard response about how cancer treatment has progressed tremendously. But as I pushed up those brutal Persian steps in the pre-dawn darkness, something clicked.

Medicine is missing the one thing that's revolutionised every other innovative field: the network effect. While AI runs on distributed intelligence from millions of contributors, medicine still operates like a traditional hospital hierarchy—knowledge flows down from the top, and innovation gets bottlenecked by institutional gatekeeping.

What I discovered will change how you think about medical breakthroughs forever—and why the next cancer cure might come from a village doctor in India rather than Harvard.

The Single Engine Problem

Here's what struck me about that conversation: medicine operates on a fundamentally different engine from every other advancing field.

Picture a modern aircraft with multiple engines. If one fails, the others keep it airborne. Now imagine trying to fly the same route with just one engine—slower, more fragile, and completely grounded when that single engine falters.

That's medicine today.

Fields making rapid breakthroughs—smartphone technology, space exploration, even mapping applications—all run on what I call "multi-engine" systems. Hundreds, sometimes thousands of contributors working simultaneously, each adding their expertise to solve different pieces of the same puzzle.

Medicine? We're still running on single-engine institutional gatekeeping.

Elite institutions work in isolation. Limited teams. Closed systems. When that one engine hits turbulence—when gatekeepers protect interests over progress, when institutional priorities clash with medical advancement—everything slows down.

Other fields learned long ago that breakthrough insights don't respect institutional boundaries. The solution that unlocks rapid progress might come from anywhere—a small team, an unexpected source, even an amateur enthusiast.

Medicine hasn't learned this lesson yet.

We're still flying single-engine whilst the rest of the innovative world has moved to distributed power systems that are faster, more resilient, and fundamentally more effective at reaching their destination.

The Power of Networks

What medicine is missing becomes crystal clear when you examine how breakthroughs actually happen in other fields.

Photo by Nastya Dulhiier / Unsplash

Consider any major collaborative project—whether it's improving maps, building software, or advancing artificial intelligence. You'll discover something fascinating: whilst most contributions come from a core group of experts, the most critical breakthrough often comes from an unexpected source in the "long tail."[1]

Someone working part-time on the project spots the one thing that's been blocking everyone else. A newcomer asks the "obvious" question that challenges fundamental assumptions. An amateur contributor provides the missing piece that suddenly accelerates progress for thousands of others.

This is the network effect in action, a principle formalised in Metcalfe's Law[2]: a network's value grows roughly with the square of the number of connected participants, because every newcomer can connect with everyone already there. And it rests on one profound truth:

"Human thought process and the ability to land upon something interesting is universal and not limited to only a few players."

The capacity for breakthrough insight isn't confined to prestigious institutions. It's distributed across humanity—from rural clinics to metropolitan hospitals, from junior residents to seasoned practitioners. The observation that changes everything might come from anyone, anywhere.

Networks harness this distributed intelligence by democratising contribution. They effectively erase the artificial boundary between amateur and expert when it comes to problem-solving. What matters isn't your institutional affiliation—it's whether your insight moves collective understanding forward.
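
A rough sketch of the arithmetic behind Metcalfe's Law makes the point concrete. The participant counts below are purely illustrative, but the shape of the curve is what matters: every new contributor adds a potential connection to everyone already in the network, so the number of possible connections grows roughly with the square of the number of participants.

```python
# Illustrative sketch of Metcalfe's Law: treat a network's potential value as
# proportional to the number of distinct pairs who can exchange ideas, n*(n-1)/2.
# The participant counts are arbitrary examples, not data from any real network.

def pairwise_connections(n: int) -> int:
    """Distinct pairs that can share observations in a network of n contributors."""
    return n * (n - 1) // 2

for n in (10, 1_000, 100_000):  # a department, a speciality society, a global network
    print(f"{n:,} contributors -> {pairwise_connections(n):,} possible connections")

# 10 contributors -> 45 possible connections
# 1,000 contributors -> 499,500 possible connections
# 100,000 contributors -> 4,999,950,000 possible connections
```

The same arithmetic applies to clinical observation: ten isolated departments can compare notes in forty-five ways, whilst a connected global network of practitioners can do so in billions.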

Medicine operates on the opposite principle.

We're still trapped in gatekeeping patterns that have held back human progress for centuries. Academic gatekeepers replaced royal gatekeepers, but the control mechanism remains identical. Knowledge flows downward from self-appointed authorities who decide what's valid and what isn't.

This creates a critical gap: where can a doctor from a remote village in India or Africa contribute to medical research? Nowhere. That perceptive practitioner seeing patterns metropolitan hospitals miss entirely has no mechanism to share insights with the global medical community.

This is why we desperately need the GitHub of Medicine.

A platform where medical problem-solving is democratised. Where a surgeon from rural India can contribute equally to solving problems that plague patients worldwide. Where the network effect replaces elitism, and the best ideas rise regardless of their source.

When Insights Die in Isolation

Let me share two discoveries from my own practice that illustrate exactly how our single-engine system buries breakthrough insights.

The Hidden Assessment Problem

In hip replacement surgery, we routinely assess cup placement using standard X-rays. The textbooks teach us to evaluate two parameters—how deep the cup sits and its angle in one plane. But what about the cup's position in the third dimension, the sagittal plane?

There's virtually no mention of this in the literature.

When I raised this observation, the response was predictable: "Get a CT scan." But here's the reality—not everyone in India or Africa has access to CT scans. Moreover, CT isn't the standard for routine post-operative assessment worldwide.

This is "conclusion predetermination" in action. Instead of exploring a potentially important clinical question, the elite gatekeepers dismiss it with an expensive solution that serves the "CT is better" narrative. The insight gets buried because it doesn't align with existing power structures and commercial interests.

The Socioeconomic Pattern Nobody Wants to See

My second observation came from treating patients with ankylosing spondylitis—a severe inflammatory condition affecting the spine. The established narrative focuses on immunological causes, leading to expensive targeted therapies.

But I noticed something the literature doesn't discuss: these patients were consistently from lower socioeconomic groups.

The standard explanation? "Poor people get worse because they can't afford early treatment." Except this didn't explain why patients from the US or Europe who came for surgery never presented with ankylosing spondylitis. We'd see plenty of Western patients with rheumatoid arthritis—another inflammatory condition—but never with ankylosing spondylitis.

Genes don't differentiate between socioeconomic groups. Infections do.

Could there be an infectious component to ankylosing spondylitis that we're missing? Has anyone systematically explored this angle?

Not really. Because elite institutions have no mechanism to capture patterns from populations they don't treat.

The Speed Differential

Here's what frustrates me most: if we had a network of medical practitioners sharing observations across borders, we could have assembled this data and reached definitive conclusions within months.

Instead, such questions remain buried. They don't fit the budget priorities of elite institutions. They don't serve existing therapeutic narratives. They challenge established thinking.

Meanwhile, patients continue receiving treatments based on incomplete understanding because our single-engine system has no mechanism to harness distributed clinical intelligence.

The Corruption Shield

The most damning evidence against our single-engine system isn't theoretical—it's the scandals that institutional gatekeeping enables and protects.

When Prestige Becomes a Shield

Consider the Karolinska Stem Cell Trachea Scandal[3]. Dr. Paolo Macchiarini performed experimental trachea transplants that resulted in patient deaths, yet continued operating for years at one of the world's most prestigious medical institutions.

How did this happen at an institute known for groundbreaking research and Nobel Prize connections?

Photo by National Cancer Institute / Unsplash

The uncomfortable truth: institutional reputation prevented people from speaking up. Colleagues who witnessed problems were silenced by the institution's prestige. Questioning a celebrated surgeon at a celebrated institution meant risking career suicide.

This is how badges of honour enable bad actors to flourish with impunity.

The Opacity Advantage

Here's what makes institutional gatekeeping so dangerous: opacity allows corruption to hide behind reputation.

In Macchiarini's case, if the underlying surgical data had been published openly from the beginning, someone in the global medical network would have detected the abnormalities immediately. A surgeon in Mumbai, a researcher in Toronto, a data analyst in São Paulo—any one of them could have raised red flags that institutional insiders were incentivised to ignore.

Distributed scrutiny exposes problems that institutional loyalty conceals.

The existence of Retraction Watch—a blog dedicated to reporting withdrawn scientific papers—tells you everything about our current system's failures. The fact that such a platform is necessary, and busy, reveals how our peer review system consistently fails to prevent bad actors.

The Ant Colony Alternative

So what exactly would a distributed medical network look like? And why would it work better than our current institutional model?

The Wikipedia Lesson

When Wikipedia gained prominence in the early 2000s, countless articles compared it unfavourably to the expert-oriented Encyclopaedia Britannica. Critics pointed to numerous inaccuracies in Wikipedia articles, dismissing it as unreliable amateur work.

Today, two decades later, Britannica's print edition is gone and only one of them still matters: Wikipedia.

A 2005 Nature study[4] comparing the two found similar error rates, but Wikipedia's self-correcting mechanism meant errors got fixed within hours or days, whilst Britannica's errors persisted until the next edition.

In this contest between experts and common contributors, the experts lost. Not because individual Wikipedia contributors knew more than Britannica's scholars, but because the combined knowledge of many individuals consistently trumped isolated experts.

Visualising Distributed Intelligence

The best way to understand how distributed networks operate is to observe an ant colony.

Move in close and individual ants appear to be running helter-skelter in random directions—picking up food, tending to eggs, bumping antennae with others. Complete chaos.

But step back and look from above.

Suddenly you see an organised army working towards singular goals. Food streams flowing toward the nest. Construction projects coordinated across thousands of participants. What seemed like random individual actions reveals itself as purposeful collective intelligence.

E.O. Wilson's research on ant colonies[5] demonstrates how simple rules followed by many individuals create complex, intelligent behaviour at the group level—exactly what medical research needs.

The Self-Correcting Advantage

Here's what makes the ant colony model superior: when individual ants fail or make mistakes, others immediately step in to correct course. The colony continues progressing because it's not dependent on any single contributor's perfection.

Distributed networks prevent the cascade effect we see in medical research. When Retraction Watch exposes fraudulent research, the individual study gets withdrawn, but what about the dozens of studies that built upon those false conclusions?

In network models, flawed thinking gets caught and corrected before entire research programmes build upon false foundations.

The Lame Seventh Horse

In Dharamvir Bharati's classic Hindi novel Suraj Ka Satvan Ghoda (The Sun's Seventh Horse), the seventh horse meant to pull humanity into the future is depicted as lame and struggling.[6]

That horse finds its human counterpart in the narrator, Manik Mulla, a man who sees everything but does nothing. When he has chances to intervene, to change outcomes, he retreats into inhibitions and excuses. He blames fate, society, and circumstances for the tragedies around him, never realising that his own inaction was the critical point of failure.

Medicine's Mirror Moment

Now I want you to ask yourself a difficult question: For the last fifty years, has modern medicine been playing the part of Manik Mulla?

We blame the problem, just as Manik blamed fate. When discussing slow medical progress, we point to the immense complexity of biological research, the intricacies of the human body, the lengthy clinical trial requirements. We blame the inherent nature of our work rather than examining our own systemic failures.

We are brilliant witnesses, just as Manik was a keen observer. As surgeons and clinicians, we see disease and suffering with incredible clarity. We collect meticulous data, observe outcomes with precision, and document failures with scientific rigour.

But we fail to act collectively, just as Manik was paralysed by hesitation. Our system forces us into isolated silos. Groundbreaking data from Mumbai remains inaccessible to researchers in Munich. A surgeon's breakthrough technique in Tokyo gets shared only through slow peer-reviewed papers months or years later—if at all.

The Self-Sabotage Revelation

Here's the uncomfortable truth: we are throttling our own pace of progress.

The future of medicine isn't just about discovering new molecules or perfecting surgical techniques. It's about building distributed networks to develop, refine, and spread breakthroughs at light speed.

Yet we resist this future with the same paralysing fear that defined Manik Mulla.

Medicine tells itself that distributed networks are unreliable because they include "unqualified contributors." This is our Manik Mulla moment—blaming external forces for our own internal limitations.

The reality is the opposite: it's our elitist institutional gatekeeping that's the lame horse slowing the entire chariot.

The Speed-Truth Revolution

For decades, medicine's unofficial motto has been: "Certainty over speed. Perfection over iteration."

Our gold standard isn't Silicon Valley's "move fast and break things"—it's "move slow and fix things, but only when we're absolutely certain we can." This philosophy gave us the medical miracles of the 20th century.

But it's strangling progress in the 21st.

Photo by Joshua Woroniecki / Unsplash

The New Path to Perfection

The pioneers of network effects have proven something counterintuitive: rapid iteration is the new path to perfection.

Wikipedia isn't perfect on any given day, yet it's more comprehensive and accurate than any encyclopaedia that took decades to produce. Linux isn't flawless, but it runs the world because thousands of developers iterate on it daily.

Meanwhile, we cling to a paradox that's killing patients.

To insist on being "wholly correct" whilst millions of patients suffer in waiting queues is a tragedy of modern medical thinking. The irony is breathtaking: on the ground, physicians and surgeons constantly make calls on uncertain information. We wear "clinical uncertainty" as a badge of professional honour.

The Publishing Bottleneck

Consider something as seemingly objective as journal impact factor. At its core, it measures how often papers get cited. But how it's calculated reveals everything wrong with our system.

A journal's impact factor for a given year counts the citations received that year by the papers it published in the two previous years, divided by the number of citable items it published in those two years. It's a system designed to reward past glory, not measure present velocity.
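
As a concrete illustration, here is the arithmetic with entirely hypothetical numbers; no real journal is being described.

```python
# Hypothetical two-year impact factor calculation for an imaginary journal.
# All figures below are invented purely for illustration.

citations_in_2024_to_2022_23_papers = 1_200  # citations received during 2024
citable_items_2022 = 150                     # articles and reviews published in 2022
citable_items_2023 = 170                     # articles and reviews published in 2023

impact_factor_2024 = citations_in_2024_to_2022_23_papers / (
    citable_items_2022 + citable_items_2023
)

print(f"2024 impact factor: {impact_factor_2024:.2f}")  # -> 3.75
```

Nothing in that calculation measures how long the work took to reach print, whether anyone acted on it, or how quickly an insight reached a patient.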

Most medical breakthroughs sit in editorial limbo for months or years whilst patients who could benefit from those insights continue suffering. When errors emerge in published research, our correction mechanism resembles something from the pre-internet era: writing letters to editors.

No wonder Retraction Watch has exploded in popularity, essentially taking over quality control jobs that journal editors were supposed to perform.

Photo by VrĂźnceanu Iulia / Unsplash

The Serendipity Problem

For decades, medical science has relied on serendipity for major discoveries. Fleming's accidental penicillin discovery. The chance observation that led to Viagra. The coincidental pattern recognition that revealed Helicobacter pylori's role in ulcers.

But why should life-saving breakthroughs depend on random occurrences?

This isn't romantic scientific tradition—it's systematic failure. We've built a system so rigid, so gatekept, that genuine progress only happens when luck bypasses our institutional barriers.

Network effects could transform serendipity from rare accident to daily occurrence.

Breaking Global Barriers

The single-engine problem in medicine isn't just institutional—it's geographical. We've built artificial barriers that fragment the very intelligence we need to solve global health challenges.

The Collaboration Illusion

Yes, there are countless "international collaborative studies" crossing national borders. But look closer and you'll see the same gatekeeping mechanism operating at a global scale: elite institutions developing channels with sister elite institutions.

It appears global in nature but remains fundamentally limited and isolated.

Meanwhile, technology companies like Meta, Google, and Amazon leverage massive network effects across continents. Their algorithms improve because they capture diverse global data patterns that no single country could provide.

The Human Diversity Problem

Only a global network can truly capture data that represents the full spectrum of human diversity.

Consider the medical insights we're missing:

  • Genetic variations that exist in some populations but not others
  • Environmental factors unique to specific regions affecting disease patterns
  • Traditional remedies that could inspire breakthrough treatments
  • Socioeconomic patterns that reveal hidden disease mechanisms

When medical research operates in national silos, we lose this distributed intelligence. A pattern visible in rural India might be invisible to researchers in urban America, and vice versa.

The Stark Reality Check

Here's a simple question that exposes everything: Can you think of a large pan-world medical organisation that takes advantage of network effects?

None.

COVID-19 revealed this dysfunction dramatically. Crucial early data from one country took months to reach practitioners in other countries. Treatment insights developed in Italy couldn't quickly benefit doctors in India.

We literally fought a global pandemic with fragmented local responses.

Medicine's Internet Moment

Medicine needs its internet moment.

Throughout this exploration, we've seen how medical research remains trapped in a medieval knowledge model whilst every other innovative field has evolved to harness network effects.

The pattern is clear: our single-engine system is systematically failing patients worldwide.

The Answer to Cancer

I finally have an answer to my running buddy's frustrated question about why cancer cures lag so far behind our progress in AI and space:

We don't have cancer cures because medicine operates like a corrupt monarchy when it should operate like an ant colony.

Whilst your phone gets smarter every month through global network effects, cancer research remains locked behind institutional gates, moving at the pace of academic politics rather than patient need.

The Transformation Timeline

This isn't a pipe dream. Breakthroughs should not take years or decades; they should take weeks or months. When we build the GitHub of Medicine, when we harness distributed medical intelligence, when we prioritise speed-to-truth over institutional prestige, medical progress will accelerate exponentially.

We should not rely on serendipity anymore. Random discoveries should be our backup plan, not our primary mechanism for medical advancement.

The Choice Before Us

We stand at a crossroads. We can continue flying single-engine whilst other fields soar on distributed networks. We can keep playing Manik Mulla, paralysed by our own institutional inhibitions whilst patients suffer.

Or we can finally allow medicine's lame seventh horse to run.

The ant colony model isn't just waiting to transform medical discovery—it's waiting for us to stop being the primary obstacle to our own progress.

Because every day we delay this revolution, promising treatments remain buried, corrupt practices stay hidden, breakthrough insights get siloed, and patients continue dying from conditions that distributed medical intelligence could solve.

Medicine's internet moment isn't coming someday. It's here, waiting for us to step away from our own stranglehold and embrace the network-powered future that patients desperately need.

The only question left is: Are we brave enough to take that step?

References

[1] Anderson, C. (2004). "The Long Tail." Wired Magazine.

[2] Metcalfe, R. (2013). "Metcalfe's Law after 40 Years of Ethernet." Computer, 46(12), 26-31.

[3] BBC News (2019). "The surgeon who operated in secret."

[4] Giles, J. (2005). "Internet encyclopaedias go head to head." Nature, 438, 900-901.

[5] Wilson, E.O. (2008). "One giant leap: How insects achieved altruism and colonial life." Proceedings of the National Academy of Sciences.

[6] Bharati, D. (1952). Suraj Ka Satvan Ghoda [The Sun's Seventh Horse].