Short Bio
Fabian Jogl is a third-year PhD student in Computer Science at TU Wien, advised by Thomas Gärtner. His research focuses on the expressivity of graph neural networks (GNNs), combining machine learning with classical algorithmics. He holds a BSc in Physics and an MSc in Logic and Computation. As part of his PhD, he works on the project “Expressivity of GNNs.” His work has been published at NeurIPS and ICML, and he has contributed as a reviewer for major conferences.
PhD Project - Graph Neural Networks
Supervised by Thomas Gärtner
Objectives:
We investigate which functions can be learned by graph neural networks.
Common graph neural networks cannot represent some seemingly simple functions on the space of all graphs, which limits the learning problems and applications for which they can be used. The objective of our research is
1. to improve the understanding of what functions graph neural networks can express
2. to develop graph neural networks that can express more functions.
For this we intend to use existing concepts of theoretical computer science and graph theory in the context of graph neural networks.
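A standard illustration of these expressivity limits (not taken from the project description, just a well-known example) is the Weisfeiler-Leman (1-WL) test: ordinary message-passing GNNs cannot distinguish any pair of graphs that 1-WL cannot distinguish. The sketch below runs 1-WL color refinement on two non-isomorphic graphs, two disjoint triangles versus a single 6-cycle, and shows that refinement assigns them identical color multisets because both graphs are 2-regular.

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    # 1-WL color refinement: every node starts with the same color, then
    # repeatedly re-colors itself by hashing its own color together with
    # the sorted multiset of its neighbors' colors.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())  # multiset of final colors

# Two disjoint triangles on 6 nodes ...
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
# ... versus a single 6-cycle.
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# Both graphs are 2-regular, so refinement never splits the colors:
# 1-WL, and hence standard message passing, cannot tell them apart.
print(wl_colors(two_triangles) == wl_colors(six_cycle))  # prints True
```

More expressive architectures, such as those studied in the project, are designed to separate graph pairs like this one while keeping runtime guarantees.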
Expected Results:
For (1) we intend to develop mathematical techniques that allow us to more easily compare the expressivity of different graph neural networks. In our paper “Expressivity-Preserving GNN Simulation” (NeurIPS, 2023) we proved that many different graph neural networks can be represented as graph transformations combined with message passing, which lays the foundation for this task. For (2) we intend to propose novel graph neural networks with guarantees on their expressivity and runtime.
Publications and Conferences
*indicates equal contributions
Conference and Journal Papers
- Franka Bause*, Fabian Jogl*, Patrick Indri, Tamara Drucks, Nils Morten Kriege, Thomas Gärtner, Pascal Welke, and Maximilian Thiessen. Maximally expressive GNNs for outerplanar graphs. In TMLR, 2024
- Tamara Drucks*, Caterina Graziani*, Fabian Jogl, Monica Bianchini, Franco Scarselli, and Thomas Gärtner. The expressive power of path based graph neural networks. In ICML, 2024
- Fabian Jogl, Maximilian Thiessen, and Thomas Gärtner. Expressivity-preserving GNN simulation. In NeurIPS, 2023
- Pascal Welke, Maximilian Thiessen, Fabian Jogl, and Thomas Gärtner. Expectation-complete graph representations with homomorphisms. In ICML, 2023
- Daniel Helm, Fabian Jogl, and Martin Kampel. HISTORIAN: A large-scale historical film dataset with cinematographic annotation. In ICIP, 2022
- Jiehua Chen*, Adrian Chmurovic*, Fabian Jogl*, and Manuel Sorge*. On (coalitional) exchange-stable matching. In SAGT, 2021
Peer-Reviewed Workshop Papers and Extended Abstracts
- Fabrizio Frasca, Fabian Jogl, Moshe Eliasof, Matan Ostrovsky, Carola-Bibiane Schönlieb, Thomas Gärtner, and Haggai Maron. Towards foundation models on graphs: An analysis on cross-dataset transfer of pretrained GNNs. In NeurReps @ NeurIPS, 2024
- Fabian Jogl, Pascal Welke, and Thomas Gärtner. Is expressivity essential for the predictive performance of graph neural networks? In Workshop on Scientific Methods for Understanding Deep Learning @ NeurIPS, 2024
- Franka Bause*, Fabian Jogl*, Patrick Indri, Tamara Drucks, Nils Morten Kriege, Thomas Gärtner, Pascal Welke, and Maximilian Thiessen. Maximally expressive GNNs for outerplanar graphs. In Workshop on New Frontiers in Graph Learning @ NeurIPS (accepted as oral), 2023
- Franka Bause*, Fabian Jogl*, Patrick Indri, Tamara Drucks, Nils Morten Kriege, Thomas Gärtner, Pascal Welke, and Maximilian Thiessen. Maximally expressive GNNs for outerplanar graphs. In Learning on Graphs Conference (Extended Abstract), 2023
- Andrei Dragos Brasoveanu, Fabian Jogl, Pascal Welke, and Maximilian Thiessen. Extending graph neural networks with global features. In Learning on Graphs Conference (Extended Abstract), 2023
- Fabian Jogl, Maximilian Thiessen, and Thomas Gärtner. Weisfeiler and Leman return with graph transformations. In Workshop on Mining and Learning with Graphs at ECMLPKDD, 2022
- Fabian Jogl, Maximilian Thiessen, and Thomas Gärtner. Reducing learning on cell complexes to graphs. In Workshop on Geometrical and Topological Representation Learning at ICLR, 2022
- Jiehua Chen*, Adrian Chmurovic*, Fabian Jogl*, and Manuel Sorge*. On (coalitional) exchange-stable matching. In COMSOC, 2021
Invited Talks
- Expressivity-Preserving GNN Simulation
Reading group at the lab of Haggai Maron – Technion (April 2024)
- AI as Opportunity and Challenge
BeSt Vienna (March 2024)
AHS Wolkersdorf (September 2024)
- Do We Need to Improve Message Passing?
Workshop on Hot Topics in Graph Neural Networks, University of Kassel (October 2022)
Talk at the Laboratory of Computational and Quantitative Biology, Sorbonne University (September 2022)
Reading group at the Complexity Science Hub Vienna (June 2022)