Privacy-preserving graph ML

Attacks and defenses for graph neural networks

Graph-based learning methods, in particular graph neural networks (GNNs), are being applied in many domains, including sensitive ones such as medicine. It is important to ensure that such models do not inadvertently leak information about the data used to train them. We work on exposing privacy vulnerabilities of graph neural networks and on developing mitigation techniques that enable privacy-preserving learning under differential privacy guarantees.
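The standard route to training under differential privacy guarantees is DP-SGD-style gradient perturbation: clip each example's gradient to bound its influence, then add calibrated Gaussian noise before averaging. The sketch below illustrates that aggregation step only; it is a minimal illustration with hypothetical function and parameter names (`dp_gradient_step`, `clip_norm`, `noise_multiplier`), not the project's actual implementation, and a real GNN setting additionally has to account for the fact that neighboring nodes' gradients are coupled through message passing.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_norm=1.0,
                     noise_multiplier=1.1, rng=None):
    """One differentially private gradient aggregation step (DP-SGD style).

    Each per-example gradient is clipped to L2 norm `clip_norm` (bounding
    the sensitivity of the sum), the clipped gradients are summed, and
    Gaussian noise with scale noise_multiplier * clip_norm is added before
    averaging. All names here are illustrative assumptions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        factor = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(g * factor)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: two toy "per-node" gradients; the first exceeds the clip norm.
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.0])]
noisy_avg = dp_gradient_step(grads)
```

Clipping caps how much any single training example (here, a node or graph) can shift the update, which is what lets the Gaussian mechanism translate the noise scale into a formal privacy guarantee.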