Privacy-Preserving Graph ML
Attacks and defenses for graph neural networks
Graph-based learning methods, in particular graph neural networks (GNNs), are being applied in many domains, including sensitive ones such as medicine. It is therefore important to ensure that such models do not inadvertently leak information about the data used to train them. We work on exposing the vulnerabilities of GNNs to privacy attacks, and on developing mitigation techniques that enable privacy-preserving learning under differential privacy guarantees.
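To make the threat model concrete, the sketch below illustrates the general idea behind a confidence-based membership inference test. This is a minimal, self-contained illustration, not the attack from the papers below: the confidence distributions are simulated with hypothetical parameters, standing in for the softmax outputs an attacker could obtain by querying a trained model.

```python
# Illustrative sketch of membership inference via confidence thresholding.
# Assumption: trained models tend to be more confident on training (member)
# inputs than on unseen (non-member) inputs; the attacker exploits this gap.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical max-softmax confidences for 1000 member and 1000 non-member
# nodes (simulated here; in a real attack these come from model queries).
member_conf = rng.beta(8, 2, size=1000)     # skewed toward 1.0
nonmember_conf = rng.beta(4, 4, size=1000)  # centered near 0.5

scores = np.concatenate([member_conf, nonmember_conf])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])

# Attack rule: guess "member" whenever confidence exceeds a threshold.
threshold = 0.7
guesses = (scores > threshold).astype(float)

accuracy = (guesses == labels).mean()
# Attack accuracy well above 0.5 (random guessing) signals membership leakage.
```

Differential privacy defends against exactly this kind of test: with a suitable privacy budget, the model's outputs on members and non-members become statistically hard to distinguish, pushing the attack accuracy back toward 0.5.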
- Olatunji, Iyiola E., Nejdl, Wolfgang, and Khosla, Megha. Membership Inference Attack on Graph Neural Networks. In IEEE International Conference on Trust, Privacy and Security in Intelligent Systems, and Applications, 2021. [Best Student Paper award]
- Olatunji, Iyiola E., Funke, Thorben, and Khosla, Megha. Releasing Graph Neural Networks with Differential Privacy Guarantees. arXiv preprint, 2021.