Geometry-Informed Neural Networks

* equal contribution
The GINN learning paradigm applied to four different geometry-constrained problems. A given set of constraints on the shape $\Omega$ defines the set of feasible shapes $\mathcal{K}$. A GINN is a neural network trained to find feasible shapes, which are unique in the top two rows. However, as is often the case in geometry, the problems in the bottom two rows admit multiple solutions. To produce diverse solutions $S\subset\mathcal{K}$, we add a diversity loss. Using only constraints and diversity, the GINN paradigm for generative shape modeling is entirely data-free.

Abstract

Geometry is a ubiquitous language of computer graphics, design, and engineering. However, the lack of large shape datasets limits the application of state-of-the-art supervised learning methods and motivates the exploration of alternative learning strategies. To this end, we introduce geometry-informed neural networks (GINNs) to train shape generative models without any data. GINNs combine
  1. learning under constraints,
  2. neural fields as a suitable representation, and
  3. generating diverse solutions to under-determined problems.
We apply GINNs to several two- and three-dimensional problems of increasing complexity. Our results demonstrate the feasibility of training shape generative models in a data-free setting. This new paradigm opens several exciting research directions, expanding the application of generative models into domains where data is sparse.
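The three ingredients above can be illustrated with a minimal sketch. All names and loss forms here are our own illustration, not the paper's exact implementation: a neural field $f(x, z)$ is trained from constraint violations alone, plus a diversity term over latent codes $z$, with no shape dataset involved.

```python
import numpy as np

# Hypothetical GINN-style objective: constraint losses on a neural field
# plus a diversity term over latents -- entirely data-free.

rng = np.random.default_rng(0)

# Tiny fixed random MLP standing in for the neural field; input = (x, y, z).
W1, b1 = rng.normal(size=(3, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def field(xy, z):
    """Implicit field value f(x, z) at 2D points xy with latent scalar z."""
    inp = np.concatenate([xy, np.full((len(xy), 1), z)], axis=1)
    return (np.tanh(inp @ W1 + b1) @ W2 + b2).ravel()

def interface_loss(xy_iface, z):
    """Constraint: the zero level set should pass through interface points."""
    return np.mean(field(xy_iface, z) ** 2)

def diversity_loss(xy_probe, z1, z2, margin=1.0):
    """Penalise latent codes whose fields are closer than a margin."""
    d = np.mean((field(xy_probe, z1) - field(xy_probe, z2)) ** 2)
    return max(0.0, margin - d)

# Data-free objective: only constraint and diversity terms, no data term.
iface = rng.uniform(-1.0, 1.0, size=(64, 2))
probe = rng.uniform(-1.0, 1.0, size=(256, 2))
total_loss = (interface_loss(iface, 0.0) + interface_loss(iface, 1.0)
              + diversity_loss(probe, 0.0, 1.0))
```

In a real setup the field would be a trainable network optimised by gradient descent on this objective; the sketch only shows how the objective is assembled from constraint violations rather than data.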

Examples

Min Surf
Minimal surface. GINN finds the unique surface that attaches to the prescribed boundary while having zero mean curvature everywhere, a setting known as Plateau's problem.
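For an implicit neural field $f$, the zero mean-curvature condition on the surface $\{f = 0\}$ can be expressed directly on the field via the standard identity

$$ H = \nabla \cdot \frac{\nabla f}{\lVert \nabla f \rVert} = 0, $$

so a natural constraint loss (an illustrative choice, not necessarily the paper's exact formulation) penalises $H^2$ at points sampled on the surface.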
Mirror
Parabolic mirror. GINN finds the unique mirror surface that reflects incoming rays into a single point -- a parabolic mirror.
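The target surface can be checked analytically. For a parabola with focal length $f$ (a standard parametrisation, stated here for illustration),

$$ y = \frac{x^2}{4f}, $$

every incoming ray parallel to the axis reflects through the focus $(0, f)$, which is exactly the focusing constraint the GINN has to satisfy.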
Simple SCC Stack
Connectedness. The connectedness loss allows the GINN to find the bottom shape, which connects the two interfaces within the allowed design region; the top shape results from training without the connectedness loss.
Generative obstacle.
Latent space interpolation of a GINN trained to produce diverse solutions to the aforementioned obstacle problem. Compare the simplicity bias of a softplus-MLP to different SIREN models in the figure below.
Training of jet-engine bracket.
Jet-engine bracket. The training animation shows the search for a shape with interface, design-space, and connectedness constraints inspired by a realistic 3D engineering design use-case.
Generative physics-informed neural network.
Under-determined physics. A generative physics-informed neural network (PINN) applied to an under-determined system of reaction-diffusion. Traversing the latent space of the trained network produces morphing Turing patterns.
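A PINN's training signal is the residual of the governing PDE at collocation points. The sketch below computes such a residual for a 1D reaction-diffusion equation $u_t = D\,u_{xx} + u - u^3$ (an assumed Allen-Cahn-style system; the Turing-pattern system in the paper differs). A toy latent-conditioned candidate $u(x, t; z)$ stands in for the network, and derivatives are estimated with finite differences instead of automatic differentiation.

```python
import numpy as np

# Illustrative physics residual for a 1D reaction-diffusion equation
# u_t = D*u_xx + u - u**3 (assumed system, not the paper's exact one).

D = 0.01

def u(x, t, z):
    """Toy latent-conditioned candidate; the latent z shifts the profile."""
    return np.tanh((x - z * t) / np.sqrt(2.0 * D))

def residual(x, t, z, eps=1e-4):
    """Pointwise PDE residual u_t - (D*u_xx + u - u**3), via finite differences."""
    u_t = (u(x, t + eps, z) - u(x, t - eps, z)) / (2.0 * eps)
    u_xx = (u(x + eps, t, z) - 2.0 * u(x, t, z) + u(x - eps, t, z)) / eps**2
    return u_t - (D * u_xx + u(x, t, z) - u(x, t, z) ** 3)

# The physics loss is the mean squared residual over collocation points;
# a generative PINN minimises it jointly over many latent codes z.
x = np.linspace(-1.0, 1.0, 101)
physics_loss = float(np.mean(residual(x, t=0.5, z=0.3) ** 2))
```

Because the system is under-determined, many latents can achieve a low residual simultaneously, which is what makes latent-space traversal produce a family of morphing solutions.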

BibTeX

@article{berzins2024ginn,
  title={Geometry-Informed Neural Networks},
  author={Berzins, Arturs and Radler, Andreas and Sanokowski, Sebastian and Hochreiter, Sepp and Brandstetter, Johannes},
  journal={arXiv preprint arXiv:2402.14009},
  year={2024}
}

Acknowledgements

This project was supported by the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement number 860843.