Explainable and privacy-preserving graph ML

Explainability and privacy-preserving techniques for GraphML, and their interplay

Explainable GraphML: Learning complex neighborhood aggregations and latent feature extraction has enabled GNNs to achieve state-of-the-art performance on node and graph classification tasks. This complexity, however, renders the learned models opaque and hard to interpret. Numerous approaches to the interpretability of general machine learning models have been proposed in the literature, but models learned over graph-structured data pose unique challenges: predictions on graphs are induced by a complex combination of nodes and the paths of edges between them, in addition to the node features. This makes it infeasible to directly apply existing interpretability methods to graph models. We develop methods to explain the decisions of already trained GNN models as well as of shallow node embeddings.
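As a concrete illustration of the post-hoc setting, the sketch below learns a soft mask over the edges of the input graph such that the masked graph still yields the original prediction while the mask stays sparse, in the spirit of perturbation-based explainers such as GNNExplainer. It is a minimal sketch in pure PyTorch: the TinyGCN model, the function names, and all hyperparameters are illustrative stand-ins, not our actual method.

```python
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """Hypothetical one-layer GCN standing in for any trained GNN."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, num_classes)

    def forward(self, x, adj):
        # Mean aggregation over (possibly masked) neighborhoods, then a linear map.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return self.lin(adj @ x / deg)

def explain_node(model, x, adj, node, epochs=200, lam=0.005):
    """Learn a soft edge mask that keeps the prediction for `node` stable
    while penalizing mask size (a perturbation-based explanation)."""
    model.eval()
    with torch.no_grad():
        target = model(x, adj)[node].argmax()        # prediction to be explained
    mask_logits = torch.nn.Parameter(torch.randn_like(adj))
    opt = torch.optim.Adam([mask_logits], lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        mask = torch.sigmoid(mask_logits) * adj      # only mask existing edges
        pred = model(x, mask)[node]
        loss = F.cross_entropy(pred.unsqueeze(0), target.unsqueeze(0)) \
             + lam * mask.sum()                      # sparsity penalty
        loss.backward()
        opt.step()
    return (torch.sigmoid(mask_logits) * adj).detach()   # edge importance scores
```

In practice, the resulting soft scores are thresholded to extract a compact explanation subgraph.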

Moreover, suitable notions of what constitutes an explanation in a graph model, and of how to evaluate it, are still missing. Due to the abstract nature of graphs, human evaluation is usually impossible. Besides developing methods for explaining or interpreting the rationale underlying a given GNN prediction, we also focus on effective evaluation strategies for measuring the goodness of an explanation, as in our recent evaluation benchmark BAGEL.
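To make one such strategy concrete, the sketch below computes a fidelity-style faithfulness score: it compares the model's confidence on the full graph against its confidence once the highest-scoring explanation edges are removed, with a large drop indicating that the explanation captured decisive edges. This is a minimal illustration of the general idea; the function name and details such as the top-k selection are assumptions, and concrete metric definitions vary.

```python
import torch

def fidelity_drop(model, x, adj, node, edge_scores, top_k=10):
    """Confidence drop when the top-k explanation edges are removed.
    Higher values mean the explanation was more faithful to the model."""
    model.eval()
    with torch.no_grad():
        full = torch.softmax(model(x, adj)[node], dim=-1)
        label = full.argmax()
        # Remove the k edges the explainer scored highest.
        flat_scores = edge_scores.flatten()
        k = min(top_k, int((flat_scores > 0).sum()))
        top = torch.topk(flat_scores, k=k)
        pruned = adj.clone().flatten()
        pruned[top.indices] = 0.0
        reduced = torch.softmax(model(x, pruned.view_as(adj))[node], dim=-1)
    return (full[label] - reduced[label]).item()
```

Faithfulness alone would reward trivially large explanations, which is why it is typically reported together with a sparsity measure.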

Privacy in GraphML: Graph-based learning methods, specifically graph neural networks (GNNs), are being applied in several domains, including sensitive ones such as medicine. It is important to ensure that such models do not inadvertently leak information about the data used to train them. We work on exposing the privacy vulnerabilities of graph neural networks as well as on developing mitigation techniques that ensure privacy-preserving learning under differential privacy guarantees.
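On the mitigation side, the canonical recipe behind differentially private training is DP-SGD: clip each per-example gradient to bound its sensitivity, then add calibrated Gaussian noise to the aggregated gradient. The sketch below shows this recipe in plain PyTorch under the simplifying assumption of independent training examples; for GNNs the examples are not independent (a node influences its neighbors' predictions), which is exactly what makes private graph learning harder. All names and constants are illustrative.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, optimizer, clip_norm=1.0, noise_mult=1.0):
    """One DP-SGD step: clip every per-example gradient to `clip_norm`,
    sum, add Gaussian noise scaled by `noise_mult * clip_norm`, and average.
    Naive per-example loop for clarity; real implementations vectorize this."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in batch:                                   # one example at a time
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)   # bound sensitivity
        for s, p in zip(summed, params):
            s += p.grad * scale
    optimizer.zero_grad()
    for s, p in zip(summed, params):
        noise = torch.randn_like(s) * noise_mult * clip_norm  # calibrated noise
        p.grad = (s + noise) / len(batch)
    optimizer.step()
```

The differential privacy guarantee then follows from accounting for the injected noise over all training steps.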

Privacy and explainability are both fundamental to ensuring the trustworthiness of graph machine learning. However, the two objectives can be at odds: an explanation deliberately reveals more about how a model uses its (potentially sensitive) training data. In our recent research, we study this interplay between explainability and privacy in GraphML, offering new insights and perspectives.
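To make the tension concrete, the toy sketch below illustrates why released explanations themselves become an attack surface: an adversary can threshold a simple statistic of an explanation (here, the entropy of the normalized edge scores) as a membership-inference signal. This is a hypothetical illustration of the attack surface, not a specific attack from our work; the statistic, the threshold, and the interface of `explainer` are all assumptions.

```python
import torch

def membership_guess(explainer, model, x, adj, node, threshold=2.0):
    """Toy membership-inference signal computed from a released explanation:
    the adversary guesses 'member' when the explanation mass is unusually
    concentrated (low entropy). Purely illustrative of the attack surface."""
    scores = explainer(model, x, adj, node)      # e.g. edge importance scores
    p = scores.flatten().abs()
    p = p / p.sum().clamp(min=1e-12)             # normalize to a distribution
    entropy = -(p * (p + 1e-12).log()).sum()
    return entropy.item() < threshold            # low entropy -> guess "member"
```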