News

Jonas Eichelsdörfer presents his M.Sc. thesis on "Physics Informed Hamiltonian Neural Networks for System Identification"


Abstract:

Neural networks excel at finding patterns in large amounts of data, yet they struggle to learn the basic laws of physics. Applying the methods of machine learning to build accurate models of the world thus requires a strong inductive bias, e.g., a notion of symmetry, invariance, or conservation principles. The focus of this thesis is on two different approaches to incorporating physically motivated inductive bias into data-driven system identification methods.
Greydanus [25] suggested learning a neural network representation of a physical system's Hamiltonian. Cranmer [13], on the other hand, proposed parameterizing arbitrary Lagrangians with neural networks. The performance of these two algorithms is compared, and practical advice on implementation details is given; illustrative sketches of both approaches follow below. Furthermore, a combination of the two algorithms based on the Routhian formulation of mechanics is developed.
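
To make the Hamiltonian approach concrete, here is a minimal sketch of the idea, assuming a PyTorch setup; class and variable names are illustrative and not taken from the thesis. A network outputs a scalar H(q, p), and Hamilton's equations dq/dt = ∂H/∂p, dp/dt = -∂H/∂q turn that scalar into a vector field that is regressed onto observed time derivatives:

    import torch
    import torch.nn as nn

    class HNN(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.dim = dim
            self.net = nn.Sequential(
                nn.Linear(2 * dim, 200), nn.Tanh(), nn.Linear(200, 1))

        def time_derivative(self, x):
            # x holds states (q, p), shape (batch, 2*dim)
            x = x.requires_grad_(True)
            H = self.net(x).sum()                    # scalar total energy
            dH = torch.autograd.grad(H, x, create_graph=True)[0]
            dHdq, dHdp = dH[:, :self.dim], dH[:, self.dim:]
            return torch.cat([dHdp, -dHdq], dim=-1)  # (dq/dt, dp/dt)

    model = HNN(dim=1)                               # e.g. a 1-D oscillator
    x = torch.randn(32, 2)                           # observed states (q, p)
    x_dot = torch.randn(32, 2)                       # observed derivatives
    loss = ((model.time_derivative(x) - x_dot) ** 2).mean()
    loss.backward()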
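
The Lagrangian variant learns L(q, q̇) instead and recovers accelerations from the Euler-Lagrange equation; the following identity is the standard one behind Lagrangian neural networks, written here in generic notation rather than the thesis's own:

    \ddot{q} = \big( \nabla_{\dot q} \nabla_{\dot q}^{\top} L \big)^{-1}
               \big( \nabla_{q} L - \big( \nabla_{q} \nabla_{\dot q}^{\top} L \big) \dot{q} \big)

Training regresses these predicted accelerations onto observed ones, so the method works in arbitrary coordinates and does not require canonical momenta.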
Raissi [56] suggested computing data-driven solutions to partial differential equations by adding the residual of the respective differential equation to the loss term. Inspired by Raissi's work, a domain-specific regularization term based on the total energy of the physical system is proposed; a sketch of such a term is given below. The presented numerical schemes are evaluated on three problems from classical mechanics in which conservation of energy is important. The regularized models outperform baselines without domain-specific regularization in both energy accuracy and data efficiency.
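
The exact regularizer used in the thesis is not reproduced in this announcement, but in the spirit of Raissi's residual loss, one plausible form, reusing the HNN sketch above (lam and all names are assumptions), penalizes drift of the learned energy along a predicted rollout:

    def energy_drift(model, traj):
        # traj: (steps, batch, 2*dim) states along a predicted rollout
        H = model.net(traj.reshape(-1, traj.shape[-1]))
        H = H.reshape(traj.shape[0], -1)    # energy per step and sample
        return ((H - H[0]) ** 2).mean()     # penalize deviation from t = 0

    lam = 1e-2                              # regularization weight (assumed)
    trajectory = torch.randn(10, 32, 2)     # placeholder rollout
    total_loss = loss + lam * energy_drift(model, trajectory)

Because the penalty only compares the learned H to its own initial value, it encodes conservation of energy without requiring the true energy to be known.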
Recently introduced graph neural networks [61] can be interpreted as a family of general weight-sharing methods. In this work, the graph network architecture is leveraged to introduce geometrically motivated inductive bias into the Hamiltonian neural network scheme.
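
As a rough illustration of how weight sharing can enter the Hamiltonian scheme (a simplification, not the thesis architecture; a full graph network would also pass messages along edges to model pairwise interactions), one shared network can score every particle, with the system Hamiltonian taken as the sum of the per-particle scores:

    class SharedHNN(nn.Module):
        # One MLP shared across all particles; H is the sum of its outputs.
        def __init__(self, d):
            super().__init__()
            self.node_net = nn.Sequential(
                nn.Linear(2 * d, 200), nn.Tanh(), nn.Linear(200, 1))

        def forward(self, x):
            # x: (batch, n_particles, 2*d) per-particle states (q_i, p_i)
            return self.node_net(x).sum(dim=(1, 2))  # one H per sample

    H = SharedHNN(d=2)(torch.randn(32, 5, 4))        # 5 particles in 2-D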