TopoBenchmarkX: A Modular Open-Source Library Designed to Standardize Benchmarking and Accelerate Research in Topological Deep Learning (TDL)

Topological Deep Learning (TDL) advances beyond traditional Graph Neural Networks (GNNs), which capture only pairwise interactions, by modeling complex multi-way relationships. This capability is critical for understanding complex systems like social networks and protein interactions. Topological Neural Networks (TNNs), a subset of TDL, excel at handling higher-order relational data and have shown superior performance in various machine-learning tasks. Despite rapid advancements in TDL, challenges in reproducibility, standardization, and benchmarking remain. Recent efforts, including unified theories and software implementations, aim to address these issues and enhance TDL research and applications.

Researchers from several institutions, including Sapienza University and UC Santa Barbara, have developed TopoBenchmarkX, a flexible, open-source library for benchmarking in TDL. TopoBenchmarkX organizes TDL workflows into modular components for data processing, model training, and evaluation, making it adaptable and user-friendly. It transforms graph data into higher-order topological forms, such as simplicial and cell complexes, enhancing data representation and analysis. This framework addresses challenges in TDL, including the scarcity of topological data, standardization across domains, and the diversity of TNN architectures, thus facilitating robust benchmarking and reproducibility in TDL research.
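The modular decomposition described above (data loading, lifting into a topological domain, model training, and evaluation) can be sketched as interchangeable components. The names and structure below are purely illustrative assumptions and do not reflect TopoBenchmarkX's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Any

# Hypothetical sketch of a modular TDL benchmarking pipeline.
# None of these names come from TopoBenchmarkX itself; they only
# illustrate how swappable stages compose into one workflow.

@dataclass
class Pipeline:
    loader: Callable[[Any], Any]    # raw source -> graph
    lifting: Callable[[Any], Any]   # graph -> higher-order domain
    model: Callable[[Any], Any]     # domain -> predictions
    metric: Callable[[Any], Any]    # predictions -> score

    def run(self, source):
        graph = self.loader(source)
        domain = self.lifting(graph)
        preds = self.model(domain)
        return self.metric(preds)
```

Because each stage is just a callable, swapping a lifting or a model requires changing one field rather than rewriting the workflow, which is the kind of adaptability the library's design aims for.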

Several software packages support graph-based learning and Geometric Deep Learning (GDL). NetworkX allows for graph computations, while KarateClub provides unsupervised learning algorithms for graph data. PyG and DGL cater to both GDL and general graph learning. For higher-order domains, tools like HyperNetX and XGI handle hypergraphs and simplicial complexes, and DHG offers deep learning for graphs and hypergraphs. The TopoX suite, which includes TopoNetX, TopoEmbedX, and TopoModelX, supports computations, embeddings, and learning with TNNs across diverse topological structures. Unlike the Open Graph Benchmark (OGB), which focuses on graph-based learning, TopoBenchmarkX specifically benchmarks TNNs and generates higher-order datasets for TDL.

In TopoBenchmarkX, graphs can be enriched into “featured topological domains,” extending to structures like cell complexes. A featured graph maps nodes and edges to feature vectors; cell complexes extend this by also mapping faces to feature vectors. Lifting transforms a graph into a higher-order domain, embedding its nodes and edges into more complex structures, such as cell complexes or simplicial complexes. A lifting can be fixed, following predefined rules, or learnable, optimized jointly with the model. Lifting methods include converting graphs to cell complexes via cycles, to simplicial complexes via cliques or neighborhoods, and to hypergraphs via k-hop neighborhoods.
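To make the lifting idea concrete, the clique-based simplicial lifting and the k-hop hypergraph lifting mentioned above can be sketched in plain Python. This is an illustrative sketch of the general technique, not TopoBenchmarkX's implementation:

```python
from itertools import combinations

def clique_lift(edges, max_dim=2):
    """Lift a graph to a simplicial complex by promoting cliques to simplices.

    Illustrative sketch only. Returns {dimension: set of simplices},
    each simplex a frozenset of nodes: 0-simplices are nodes,
    1-simplices are edges, 2-simplices are triangles, etc.
    """
    nodes = {u for e in edges for u in e}
    simplices = {
        0: {frozenset([u]) for u in nodes},
        1: {frozenset(e) for e in edges},
    }
    for dim in range(2, max_dim + 1):
        simplices[dim] = set()
        for cand in combinations(sorted(nodes), dim + 1):
            # A (dim+1)-clique becomes a dim-simplex: every pair must be an edge.
            if all(frozenset(p) in simplices[1] for p in combinations(cand, 2)):
                simplices[dim].add(frozenset(cand))
    return simplices

def khop_hyperedges(edges, k=1):
    """Lift a graph to a hypergraph: one hyperedge per node,
    containing that node's k-hop neighborhood (node included)."""
    nodes = {u for e in edges for u in e}
    adj = {u: set() for u in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    hyperedges = []
    for u in sorted(nodes):
        frontier, reached = {u}, {u}
        for _ in range(k):
            frontier = {w for v in frontier for w in adj[v]} - reached
            reached |= frontier
        hyperedges.append(frozenset(reached))
    return hyperedges
```

For example, lifting the graph with edges `[(0, 1), (1, 2), (0, 2), (2, 3)]` promotes the triangle `{0, 1, 2}` to a 2-simplex, while the 1-hop hypergraph lifting yields a hyperedge `{2, 3}` for node 3.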

TopoBenchmarkX’s numerical experiments tested twelve neural network models across graphs, hypergraphs, and topological domains on four tasks (node classification, node regression, graph classification, graph regression) using 22 datasets. Higher-order neural networks outperformed graph neural networks on 16 of 22 datasets, especially in node regression. Ablation studies demonstrated the importance of signal propagation strategies, showing varied performance impacts based on model architecture. TopoBenchmarkX enables comprehensive model comparisons, enhancing insights in topological deep learning.

TopoBenchmarkX is an open-source benchmarking tool for TDL designed to streamline the research process by organizing TDL tasks into modular steps. It excels at transforming graph data into richer topological representations, facilitating comprehensive model evaluations. While the framework showcases promising results, it lacks features like learnable liftings and built-in higher-order datasets. Future work will integrate these capabilities and improve performance metrics to evaluate models’ expressivity, explainability, and fairness. Researchers are encouraged to contribute to these enhancements and extend the framework’s capabilities.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.
