For decades, the development of new drugs has been one of the slowest, most expensive, and most failure-prone processes in modern science. With artificial intelligence, however, that bottleneck may be breaking. An experimental drug designed largely by AI has now entered late-stage trials at record speed, and is on track to become the first AI-designed medicine approved for human use. Some call it a medical breakthrough, while others see an unsettling shortcut that replaces proper medical understanding with machine-based optimisation.

AI Designs a New Drug
The drug nearing approval was developed by Insilico Medicine, an AI-focused biotech company that uses machine-learning models to identify disease targets and generate potential compounds for their treatment. The target of its new medicine is idiopathic pulmonary fibrosis (IPF) – a fatal lung disease that kills about 40,000 Americans every year with no known cure. Incredibly, the development has now progressed from target discovery to human trials in under two years.
To put that in context, conventional drug discovery typically takes around 5 years to reach human trials, followed by another 6-8 years of clinical testing and regulatory review. From start to finish, most drugs take 10-15 years, with an estimated 90% failure rate once human trials begin.
AI can dramatically compress the early discovery phase – the slowest and most expensive part of the pipeline. Insilico’s process replaces years of laboratory iteration with algorithmic screening of millions of molecular structures, predicting toxicity, simulating protein folding, and proposing candidate compounds in a matter of weeks rather than years.
How It Speeds Up the Process
Traditional drug discovery relies on slow iterative lab work: hypothesis, experiment, failure, revision. AI systems shortcut this by training on enormous datasets of chemical structures, biological pathways, and historical trial results. This allows researchers to discard unlikely candidates instantly and focus resources on compounds with the highest predicted success rates.
In simple terms, AI does not understand biology, but rather recognises patterns at scale. It can test millions of theoretical molecules digitally before a human chemist synthesises even one.
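To make that idea concrete, here is a minimal toy sketch of pattern-based virtual screening. This is not Insilico’s actual pipeline: the molecule names, feature values, weights, and threshold below are all invented for illustration. The point is only the shape of the process – score every candidate against a learned pattern, and discard the unlikely ones before anything is synthesised.

```python
# Toy illustration of pattern-based virtual screening.
# All molecules, features, weights, and the 0.5 threshold are hypothetical.

def predicted_activity(features, weights):
    """Score a candidate molecule as a weighted sum of its numeric features."""
    return sum(f * w for f, w in zip(features, weights))

def screen(candidates, weights, threshold):
    """Keep only candidates whose predicted activity clears the threshold."""
    return {name: predicted_activity(feats, weights)
            for name, feats in candidates.items()
            if predicted_activity(feats, weights) >= threshold}

# Hypothetical feature vectors (e.g. size, lipophilicity, polarity)
# and weights a model might have learned from past assay data.
candidates = {
    "mol_A": [0.9, 0.2, 0.7],
    "mol_B": [0.1, 0.8, 0.3],
    "mol_C": [0.6, 0.6, 0.6],
}
learned_weights = [0.5, -0.2, 0.6]

survivors = screen(candidates, learned_weights, threshold=0.5)
print(survivors)  # only the high-scoring candidates reach a chemist's bench
```

A real system scores millions of structures with far richer models, but the logic is the same: statistical filtering, not biological understanding.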
This efficiency means drug firms can cut development costs by an estimated 30-70%, which explains why venture capital is now pouring into the industry. According to estimates, over $60 billion has been invested into AI-biotech startups globally over the past five years, with major pharmaceutical companies partnering or investing themselves to avoid being left behind.
An Optimistic View
There are genuine humanitarian arguments for AI-accelerated drugs. Rare diseases, neglected conditions, or illnesses with small patient populations have always been commercially unattractive. Making development faster and cheaper using AI could finally make unviable treatments realistic. There’s also the possibility of creating personalised medicine, tailoring treatments to genetic profiles in ways human-led research is unable or unwilling to explore.
An important clarification here is that AI-designed drugs are still tested on humans. Regulators have not waived safety standards, and clinical trials remain mandatory. Therefore, a positive outlook highlights that AI is not replacing scientific judgement but rather augmenting it with rapid trial-and-error. Faster discovery does not automatically mean lower standards.
So, What’s the Problem?
The concern lies less in speed itself and more in what that speed displaces. AI systems often operate as black boxes, producing effective outputs without clear explanations of their causal reasoning. In many industries, that opacity isn’t a big deal. In medicine, it is.
Knowing exactly how and why a drug works is crucial to anticipating side effects, long-term risks, and interactions with other treatments. If development timelines shrink dramatically, there is less opportunity for exploratory science – the slow, often inconclusive human-led work that builds conceptual understanding rather than statistical confidence. What happens if regulators approve drugs that performed well in trials, but whose mechanisms remain only partially understood?
AI Has a Concerning Track Record Beyond Pharma
In recent years, AI systems have repeatedly demonstrated a tendency to generate confident but incorrect outputs – a concept we explored in more detail in this article. Large language models fabricate technical details and citations; image-recognition tools misclassify objects in safety-critical environments; automated decision software continues to amplify bias in politics and beyond.
These failures do not imply malicious intent, but they do reveal structural limitations. AI models optimise for probability rather than truth. They perform best in environments where patterns are stable, recognisable, and the consequences are reversible. Biology is none of those things. Errors in pharmaceutical development – misjudging toxicity, side effects, or long-term interactions – are costly, irreversible, and sometimes lethal.
Trading Speed for Safety Has Backfired Before
Medical history offers stark warnings. Some of the most infamous drug disasters of the 20th century occurred within fully human-led systems that followed the scientific standards of their day. Thalidomide, one of the best-known examples, was approved in multiple countries in the late 1950s and passed all required tests before causing catastrophic birth defects. The safeguards that now slow drug development were built in response to such failures.
The worry is not that AI will produce more bad drugs, but that it could produce bad drugs faster, at scale, and before institutional safeguards can adapt.
The Precedent
If Insilico’s AI-designed IPF treatment is approved, it would set a powerful precedent. Suddenly the idea that medicines can be generated faster than scientists can fully understand them would become normalised, and over time, that could reshape diagnostics, treatment protocols, and clinical trial designs too.
The challenge now is for regulators and society to decide how much opacity they can afford to trade for speed. Trust in medicine must rest on more than just outcomes – confidence in the process is critical as well.
Final Thought
Patients long underserved by traditional research models – because their disease is deemed economically unviable to cure – may be offered hope if AI-designed drugs succeed. But the promise of speed should not obscure the risks of substituting optimisation for understanding. The consequences of being wrong are profound in medicine, and slow development has, in a way, protected against such risks. As AI accelerates discovery, how do we ensure progress doesn’t outrun the guardrails that keep us all safe?
The problem is not with the process or the speed of the process. The problem is corruption. That begins with the false narrative around the disease in question. Human scientists are already focused on identifying “disease targets” that will “generate potential compounds for their treatment”. All that matters is that these compounds are profitable. Safety and efficacy do not matter.
All AI will do is help maximise these profits by cutting “development costs by 30-70%”. The clinical trials are already rigged to exaggerate benefits and hide harm. Perhaps AI could be used to better rig these already rigged trials?
It is very naïve to assume that regulators currently have any effective “safety standards” since it has been blatantly obvious for many years that the regulators have been captured by the drug companies and have total disregard for public safety.
It is also very naïve to assume that “scientific judgement” drives current human drug discovery. It’s all about the money. Faster discovery will just speed up the already very low standards that exist.
If the “exploratory science” currently performed by humans is bad for profits then it is simply buried. This is already based on poor “conceptual understanding” of disease causation and “only partially understood” mechanisms of supposedly beneficial interventions.
Often toxicity, side effects and long-term interactions are already understood but the drug still gets approved. Statins are a notable example of this. Corruption is the problem that needs to be fixed.
Drug disasters did not end in the 20th century but continue on to this day. Vioxx and Celebrex spring to mind but there are many other unacknowledged disasters too. “The safeguards that now slow drug development” and other “institutional safeguards” are clearly not working.
I am gobsmacked that the author thinks that society has any remaining trust in medicine following the covid scamdemic which demonstrated both catastrophic outcomes and processes. We clearly do not have “guardrails that keep us all safe”.
Until the corruption is dealt with AI will indeed just “produce bad drugs faster, at scale”.
I think the author was just trying to be even handed and is skeptical too. I know I am skeptical and have little trust in AI.
Hi,
Indeed, it’s a new development and I’m interested to explore all angles. I, however, am skeptical of where this is headed. For now, nothing is approved, but it feels like it’s only a matter of time before we implement AI decision-making in more critical areas like medicine.
We’ll continue to keep a close eye on how this one goes.
Regards,
G Calder
Hi Sam,
Your angle is certainly valid. This article is intended to give a high-level overview of the potential outcomes here. Of course there is some corruption in the pharmaceutical industry and I am under no illusion that we live in a perfectly fair world. However, this development could highlight a possible upside for a minority: those whose diseases are not profitable enough. There are plenty of less common conditions that are totally curable, but not “worth” exploring from a £££ point of view. I am indeed very skeptical, like you, about all of this, but I’m always interested to investigate both sides.
Thanks for your comment.
G Calder
I have been investigating both sides for 35 years. I regard the pharmaceutical industry as being completely corrupt and completely evil in its present form. They make the world’s most prolific mass murderers look like amateurs.
The cardiovascular risks of Vioxx were known before it was approved. A very conservative estimate of the number of people killed by that ONE drug is 60,000. Stalin and Pol Pot would be impressed with these numbers.
I think this development will just allow those with less common diseases to be poisoned too. All for a tidy profit of course. Can you think of a disease that is “totally curable” but is currently not being cured? Is intervention with synthetic compounds ever helpful?
Are you familiar with the computer term Garbage in = Garbage out? The AI will be trained on flawed and fraudulent data, e.g. “historical trial results”. We already know that predicting toxicity and protein folding using computer modelling is extremely unreliable. Will AI know this? Will the humans using it care? Will the biological pathways it uses be accurate and relevant to the disease in question?
AI will just amplify the bias of its human handlers by analysing garbage data and consequently it will produce garbage solutions.
Thanks for a thought provoking article.
I know it gets pretty old, but I have to ask again, “What could possibly go wrong?”
Probably more than we could imagine.
I am very leery of an AI drug.
AI lacks moral and ethical bumper guards.