In 1906, William David Coolidge had a problem. Ever since Thomas Edison had commercialised the filament lightbulb, back in 1879, scientists had struggled to make the technology truly viable. The challenge lay in the filaments themselves: made from carbonised cardboard, bamboo or cotton thread, they glowed dimly and died quickly. To be fair, Edison’s General Electric Company had made some improvements. But with a lifespan of just 500 hours, and new European competition looming, something had to change.

Coolidge knew that tungsten, with the highest melting point of any metal, was a potential replacement. But no matter how many hours he spent at his research laboratory in Schenectady, New York, Coolidge couldn’t turn this hard and brittle metal into a filament worth the name. Then, a breakthrough. In March 1906, the scientist accidentally dropped a rod of tungsten into a nearby pool of liquid mercury. Soon enough, the mercury filled the metal’s pores, strengthening it while also making it flexible.

From these unsure beginnings, so-called ‘ductile tungsten’ filaments would go on to dominate lightbulbs for a century, with 795 million sold in 1945 alone. The point, at any rate, is that many of the great scientific discoveries of earlier times rested on moments like tungsten falling into mercury. They rested, in other words, on pure luck. But what if scientists could remove the trial and error from their research? What if, instead, they could use powerful algorithms to discover thousands of viable materials? As a team of researchers at Google DeepMind are vividly proving, both these revolutions could soon become reality – especially when dovetailed with similar advances in manufacturing labs.

Discovering materials faster and more intentionally can help unlock future tech that is ‘critical’ for devices. Image credit: nevodka/Shutterstock.com

Trials and errors

Discovering new materials is central to scientific development. That’s clear enough in the tungsten example – but also in the medical device space. Consider, for instance, the rise of so-called nanomaterials, and how something like nanosilver can be used to prevent or reduce inflammation. Humbler materials doubtless have a role to play here too. Robust kinds of synthetic rubber can, for instance, be used to develop oncology drug-delivery devices. Lightweight and heat-resistant, borosilicate glass wafers are central to X-ray machines, even as novel alloys like nitinol, with its shape-memory properties, are proving useful in self-expanding stents. “Every single technology is enabled and limited by the materials that are involved,” says Ekin Dogus Cubuk, a research scientist at Google DeepMind. “Being able to discover and develop materials faster and more intentionally can help improve current technologies and unlock future technologies that are critical for medical devices.”

If you examine the numbers, it’s hard to disagree. According to work by Precedence Research, for example, the global advanced materials market was already worth some $61bn in 2022, a figure expected to reach over $112bn by 2032. Listen to Cubuk, however, and it’s obvious that much of the progress here has traditionally been painstaking. As he stresses: “Many materials discoveries involve a lot of trial and error, luck and serendipity.” That’s apparent enough far beyond lightbulbs, with everything from implantable pacemakers to coronary angiography arriving as accidental finds.

It goes without saying that – in principle anyway – the immense power of computers offers a solution here. Given, after all, that AI can now sift through millions of theoretical crystal structures at speed, then predict the most viable for a range of medical uses, medical science should by rights be on the verge of a revolution. Yet if the worldwide materials informatics market is set to enjoy a CAGR of 13.7% through 2030, bringing it to over $702m, Cubuk equally warns that computational approaches to material design have traditionally been beset by problems.

That begins, Cubuk explains, with something called ‘density functional theory’ (DFT). A computational quantum mechanical modelling method used to simulate materials, it struggles with larger and more complex structures. No less important, the researcher continues, modelling how materials behave over even short time frames can be prohibitively expensive with DFT. “Similarly,” he adds, “DFT is not necessarily suitable for predicting materials with quantum properties or more complex physics under increasing temperature.” Because of their unusual properties, including superconductivity, so-called quantum materials could soon transform medical life. The fact that existing algorithms can’t truly understand them is therefore an obvious difficulty, not least when international private investment into quantum technology has topped $7bn since 2012.
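
To see why size is such a barrier, it helps to know that conventional DFT’s cost grows roughly with the cube of the system size. The toy Python estimate below uses an invented baseline purely to illustrate that cubic growth – it is not a benchmark of any real DFT code:

```python
# Illustrative only: conventional DFT cost grows roughly as O(N^3)
# in the number of atoms/electrons. The baseline below is a made-up
# constant, not a measurement of any real DFT package.

BASE_HOURS_PER_100_ATOMS = 1.0  # hypothetical wall-time for a 100-atom cell

def dft_cost_hours(n_atoms: int) -> float:
    """Estimate relative DFT wall-time under a simple cubic-scaling model."""
    return BASE_HOURS_PER_100_ATOMS * (n_atoms / 100) ** 3

for n in (100, 200, 400, 800):
    print(f"{n:4d} atoms -> ~{dft_cost_hours(n):7.1f} hours")

# An 8x larger cell (800 atoms) costs ~512x more compute, which is
# why large structures and time-resolved simulations quickly become
# prohibitive.
```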

$112bn
The estimated size of the global advanced materials market by 2032.
Precedence Research

Going deep

Into this exciting if underdeveloped field steps Cubuk. Together with a colleague at Google DeepMind, he’s developed Graph Networks for Materials Exploration (GNoME), a state-of-the-art graph neural network (GNN) model. The name refers to the form the input data takes: a graph whose nodes and edges the researchers compare to atoms and the connections between them, a structure that makes GNNs ideal for discovering new crystalline materials. For their part, candidates for research are taken from something called the Materials Project, an open-access database encompassing around 35,000 molecules and over 130,000 inorganic compounds.
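
GNoME’s exact data pipeline isn’t detailed here, but the underlying crystal-as-graph idea is simple to sketch: atoms become nodes, and pairs of atoms within some cutoff distance become edges. In the minimal Python illustration below, the toy structure, coordinates and cutoff are all invented for demonstration:

```python
import itertools
import math

# Toy crystal fragment: element symbols with Cartesian coordinates
# (angstroms). The structure and cutoff are illustrative only, not
# GNoME's actual featurisation.
atoms = [("Na", (0.0, 0.0, 0.0)), ("Cl", (2.8, 0.0, 0.0)),
         ("Na", (2.8, 2.8, 0.0)), ("Cl", (0.0, 2.8, 0.0))]

CUTOFF = 3.0  # connect atom pairs closer than this distance

def distance(a, b):
    return math.dist(a, b)

# Nodes carry the chemistry; edges carry the geometry.
nodes = [symbol for symbol, _ in atoms]
edges = [(i, j, distance(atoms[i][1], atoms[j][1]))
         for i, j in itertools.combinations(range(len(atoms)), 2)
         if distance(atoms[i][1], atoms[j][1]) < CUTOFF]

print(nodes)  # ['Na', 'Cl', 'Na', 'Cl']
print(edges)  # e.g. (0, 1, 2.8): an Na-Cl neighbour pair within the cutoff
```

A GNN then passes messages along exactly these edges, letting each atom’s representation absorb information about its neighbours before the model predicts a property such as formation energy.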

From there, Cubuk takes up the story of how his platform actually works. “We used a training process called ‘active learning’ that dramatically boosted GNoME’s performance,” he explains. “GNoME would generate predictions for the structures of novel, stable crystals – which were then tested using DFT. The resulting high-quality training data was then fed back into our model training.” Examine GNoME’s results, meanwhile, and it’s clear this approach is doing well. As Cubuk says, by scaling up GNN training, it’s possible to boost the platform’s “discovery efficiency” from just a few percent to over 80%. That’s bearing fruit in other ways too. Beyond discovering over two million new crystals, Cubuk and his colleague Amil Merchant are also releasing the predicted structures of the 380,000 materials they feel have the best chance of being replicated in the lab.
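
In outline, that loop can be sketched in a few lines of Python. Everything below – generate_candidates, predict_stability, dft_relax_and_energy, retrain – is a placeholder standing in for GNoME’s real components, not DeepMind’s actual code:

```python
import random

# Hedged sketch of an active-learning loop in the spirit of GNoME:
# the model proposes candidate crystals, DFT verifies the most
# promising, and the verified results grow the training set.

def generate_candidates(model, n=1000):
    """Propose candidate structures, e.g. by substituting elements
    into known crystal prototypes (placeholder)."""
    return [f"candidate_{random.randrange(10**6)}" for _ in range(n)]

def predict_stability(model, structure):
    """Predicted decomposition energy (eV/atom); negative suggests
    a new stable phase (placeholder)."""
    return random.uniform(-0.1, 0.5)

def dft_relax_and_energy(structure):
    """Expensive first-principles check of the prediction (placeholder)."""
    return random.uniform(-0.1, 0.5)

def retrain(model, new_data):
    """Fold the DFT-verified examples back into training (placeholder)."""
    return model

model, training_data = None, []
for round_ in range(3):
    candidates = generate_candidates(model)
    # Keep the candidates the model rates most likely to be stable.
    shortlist = sorted(candidates, key=lambda s: predict_stability(model, s))[:50]
    verified = [(s, dft_relax_and_energy(s)) for s in shortlist]
    training_data.extend(verified)
    model = retrain(model, verified)
    stable = sum(1 for _, e in verified if e < 0)
    print(f"round {round_}: {stable}/{len(verified)} verified stable")
```

Each pass through the loop concentrates expensive DFT compute on the model’s best guesses, which is what lets the “discovery efficiency” climb over successive rounds.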

It goes without saying, of course, that success with GNoME isn’t an end in itself. Rather, the idea is to use its theoretical discoveries to transform science in the real world. Fortunately, there are signs that here, too, the DeepMind team are prodding their sector forward, with outside researchers already creating 736 of GNoME’s new materials in the lab. Just as important, some of these creations show real promise. Take, for example, Li4MgGe2S7, an alkaline-earth compound that shows huge promise in optics. Mo5GeB2, on the other hand, is a potential superconductor. Considering the need for exactly such materials right across medical life, from beam therapy to diagnostic imaging, that’s doubtless good news.

736
The number of GNoME materials external researchers have successfully replicated in the lab.
Google DeepMind

Given these achievements, should we expect a flood of exciting new materials imminently? Perhaps not. As Cubuk warns, one of the fundamental issues involves the machine learning (ML) that underpins GNoME. “One challenge,” he says, “was in being able to create a large enough training set while also preserving the amount of ‘novelty’ in the training samples. This is in general an unsolved problem in deep learning: we don’t know how to quantify the diversity of a training set sample in an automated fashion.”

To be fair, this stumbling block is hardly insurmountable. By training several GNNs to predict the energy of a new sample, then comparing how much those models disagreed, the experts were able to estimate a sample’s novelty. A more pressing hurdle, rather, is securing the manufacturing capabilities needed to get new wonder materials into the hands of doctors. As Cubuk says: “Computational tools and predictions do not mean much unless they impact the actual materials being developed in the lab. This has been our goal from the beginning, but it takes a lot of time and research, and we are still working on it.” Fair enough: while it’s obviously possible to simulate hundreds of thousands of materials at once, actually making and testing them in a physical lab is a wholly different matter.
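
That ensemble trick is simple enough to illustrate. In the hedged Python sketch below, the prediction values are invented, and the spread of an ensemble’s energy predictions stands in for the novelty signal described above:

```python
import statistics

# Sketch of ensemble disagreement as a novelty signal: several
# independently trained models predict the same sample's energy,
# and the spread of those predictions flags how unfamiliar the
# sample is. All numbers here are made up for illustration.

def novelty_score(predictions):
    """Standard deviation of ensemble predictions (eV/atom): high
    disagreement suggests the sample lies outside the training
    distribution and is worth an expensive DFT check."""
    return statistics.stdev(predictions)

familiar = [-0.42, -0.40, -0.41, -0.43, -0.42]  # models agree
novel    = [-0.10, -0.55, 0.20, -0.35, -0.80]   # models disagree

print(f"familiar sample: spread = {novelty_score(familiar):.3f} eV/atom")
print(f"novel sample:    spread = {novelty_score(novel):.3f} eV/atom")
```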

Yet here too, there are signs that progress is imminent. After partnering with DeepMind, to give one example, the Lawrence Berkeley National Laboratory successfully synthesised more than 40 new materials. Tellingly – and just like GNoME generally – this practical work is being prodded along with the help of new technology. Rather than relying on flesh-and-blood researchers, after all, the Berkeley lab programmed robots to do the work, presaging a future where autonomous labs could build on the victories of AI scientists. Certainly, Cubuk seems to be looking this way himself. “The next goal,” as he puts it, “is to use DFT and ML to predict and influence experiments.” An exciting prospect, and surely one William David Coolidge would have admired.