FREED++: Improving RL Agents for Fragment-Based Molecule Generation by Thorough Reproduction

Alexander Telepov · Artem Tsypin · Kuzma Khrabrov · Sergey Yakukhnov · Pavel Strashnov · Petr Zhilyaev · Egor Rumiantsev · Daniel Ezhov · Manvel Avetisian · Olga Popova · Artur Kadurin

Video

Paper PDF


Abstract

Rational design of new therapeutic drugs aims to find a molecular structure with desired biological functionality, e.g., the ability to activate or suppress a specific protein by binding to it. Molecular docking is a common technique for evaluating protein-molecule interactions. Recently, Reinforcement Learning (RL) has emerged as a promising approach to generating molecules with the docking score (DS) as a reward. In this work, we reproduce, scrutinize, and improve the recent RL model for molecule generation called FREED (Yang et al., 2021). Despite the outstanding results reported for three target proteins, our extensive evaluation of the proposed method reveals several limitations and challenges. Our contributions include fixing numerous implementation bugs, simplifying the model while increasing its quality, significantly extending the experiments, and conducting an accurate comparison with current state-of-the-art methods for protein-conditioned molecule generation. We show that the resulting fixed model produces molecules with superior docking scores compared to alternative approaches.