AI that can design new proteins could unlock new cures and materials


The new tool, ProteinMPNN, described by a group of researchers from the University of Washington in two papers published in Science today (available here and here), offers a powerful complement to that technology.

The papers are the latest example of how deep learning is revolutionizing protein design by giving scientists new research tools. Traditionally, researchers have engineered proteins by tweaking ones that occur in nature, but ProteinMPNN will open up an entirely new universe of possible proteins for researchers to design from scratch.

“In nature, proteins solve basically all the problems of life, ranging from harvesting energy from sunlight to making molecules. Everything in biology happens from proteins,” says David Baker, one of the scientists behind the paper and director of the Institute for Protein Design at the University of Washington.

“They evolved over the course of evolution to solve the problems that organisms faced during evolution. But we face new problems today, like covid. If we could design proteins that were as good at solving new problems as the ones that evolved during evolution are at solving old problems, it would be really, really powerful.”

Proteins consist of hundreds to thousands of amino acids linked up in long chains, which then fold into three-dimensional shapes. AlphaFold helps researchers predict the resulting structure from a sequence, offering insight into how a protein will behave.

ProteinMPNN will help researchers with the inverse problem. If they already have an exact protein structure in mind, it will help them find an amino acid sequence that folds into that shape. The system uses a neural network trained on a very large number of examples pairing amino acid sequences with the three-dimensional structures they fold into.
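To make the forward/inverse distinction concrete, here is a deliberately toy sketch in Python. The `toy_fold` function and the tiny four-letter alphabet are invented stand-ins, not anything from ProteinMPNN or AlphaFold: the forward direction maps a sequence to a "structure," and the inverse direction searches for a sequence that produces a given structure (ProteinMPNN replaces this brute-force search with a learned neural network).

```python
import itertools

AMINO_ACIDS = "ACDE"  # tiny toy alphabet; real proteins use 20 amino acids

def toy_fold(sequence):
    """Stand-in for structure prediction (the forward problem AlphaFold
    addresses): map a sequence to a fake per-residue 'structure' signature."""
    scores = {"A": 1, "C": 2, "D": 3, "E": 4}
    return tuple(scores[aa] for aa in sequence)

def inverse_design(target_structure, length):
    """Stand-in for inverse folding (the problem ProteinMPNN addresses):
    find a sequence whose folded structure matches the target."""
    for candidate in itertools.product(AMINO_ACIDS, repeat=length):
        if toy_fold(candidate) == target_structure:
            return "".join(candidate)
    return None  # no sequence in the toy alphabet folds into this target

# Forward: sequence -> structure.
structure = toy_fold("ACDE")
# Inverse: structure -> sequence.
print(inverse_design(structure, 4))  # recovers "ACDE"
```

Brute force only works here because the toy sequence space has 4^4 = 256 candidates; real proteins have astronomically many possible sequences, which is why a trained model is needed instead.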

But researchers also need to solve another problem. To design proteins that tackle real-world problems, such as a new enzyme that digests plastic, they first have to figure out what protein backbone would have that function.

To do that, researchers in Baker’s lab use two machine-learning methods, detailed in an article in Science last July, that the team calls “constrained hallucination” and “inpainting.”
