Derivative-based pre-training of graph neural networks for materials property predictions

Abstract

While pre-training has transformed many fields in deep learning tremendously, its application to three-dimensional crystal structures and materials science remains limited and under-explored. In particular, devising a general pre-training objective which is transferable to many potential downstream tasks remains challenging. In this paper, we demonstrate the benefits of pre-training graph neural networks (GNNs) with the objective of implicitly learning an approximate force field via denoising, or explicitly via supervised learning on energy, force, or stress labels. For implicit learning of the force field, we find there are significant benefits to training the model on the derivatives of the output, rather than on the output itself. We further show that explicitly training the force field using labelled data yields an even greater benefit than implicit training, and similarly benefits from a derivative-based training objective. We find that overall, the best pre-training performance can be achieved by explicitly learning the full combination of energy, force, and stress labels using output derivatives. This pre-training approach is advantageous as it leverages readily available forces from non-equilibrium structures produced during ab initio calculations, enabling the usage of significantly larger datasets for pre-training than using only equilibrium structures in denoising. We demonstrate the effectiveness of this approach on a wide range of materials property benchmarks across many materials systems and properties. These results suggest exciting future opportunities for scaling up pre-training on GNNs to build foundational models in materials science.
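The derivative-based objective described above can be illustrated with a deliberately tiny toy model (our own construction for illustration, not the paper's GNN architecture): an "energy" model is fit not only to energy labels but also to force labels, where forces are the negative derivatives of the energy with respect to atomic coordinates.

```python
import numpy as np

# Toy sketch (hypothetical model, NOT the paper's GNN): take an energy
# model E(x; w) = 0.5 * w * sum(x_i^2), so the forces are the analytic
# derivatives F = -dE/dx = -w * x. We fit the parameter w against BOTH
# energy and force labels, mirroring a derivative-based training objective.

rng = np.random.default_rng(0)
w_true = 2.0
X = rng.normal(size=(64, 3))                    # 64 "structures", 3 coordinates each
E_label = 0.5 * w_true * (X ** 2).sum(axis=1)   # energy labels
F_label = -w_true * X                           # force (derivative) labels

w = 0.1      # initial parameter guess
lr = 0.05
for _ in range(200):
    E_pred = 0.5 * w * (X ** 2).sum(axis=1)
    F_pred = -w * X
    dE = E_pred - E_label
    dF = F_pred - F_label
    # Gradient of the combined (energy + force) squared loss w.r.t. w,
    # computed analytically for this toy model.
    grad = (dE * 0.5 * (X ** 2).sum(axis=1)).mean() + (dF * -X).mean() * 3
    w -= lr * grad

print(round(w, 2))  # converges to w_true = 2.0
```

In a real GNN setting the force term would come from automatic differentiation of the predicted energy with respect to input coordinates rather than a hand-derived expression; the supervision signal per structure is then much richer than the single energy scalar, which is the intuition behind training on output derivatives.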


Citation (APA)
Jia, S., Parthasarathy, A. R., Feng, R., Cong, G., Zhang, C., & Fung, V. (2024). Derivative-based pre-training of graph neural networks for materials property predictions. Digital Discovery, 3(3), 586–593. https://doi.org/10.1039/d3dd00214d


