August 7, 2024
Conference Paper

Harnessing ML Privacy by Design Through Crossbar Array Non-idealities

Abstract

Deep Neural Networks (DNNs), which handle compute- and data-intensive tasks, often rely on accelerators such as Resistive-switching Random-access Memory (RRAM) crossbars for energy-efficient in-memory computation. Although RRAM's inherent non-idealities cause deviations in DNN output, this study turns that weakness into a strength. By leveraging RRAM non-idealities, the work strengthens privacy protection against Membership Inference Attacks (MIAs), which expose private information about the training data. RRAM non-idealities disrupt the features MIAs rely on, increasing model robustness and revealing a privacy-accuracy trade-off. Empirical results with four MIAs and DNNs trained on different datasets demonstrate a significant reduction in privacy leakage with only a minor accuracy drop (e.g., up to 2.8% for ResNet-18 on CIFAR-100).
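The mechanism described above can be illustrated with a minimal sketch. Here RRAM non-ideality is modeled as multiplicative Gaussian conductance variation on the programmed weights; this particular noise model, the `sigma` value, and the toy layer sizes are assumptions for illustration, not the paper's actual device model.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_crossbar_matvec(W, x, sigma=0.05, rng=rng):
    """Matrix-vector product through a crossbar whose conductances
    deviate from the programmed weights W by a multiplicative
    factor drawn from N(1, sigma^2). (Assumed noise model.)"""
    W_dev = W * rng.normal(1.0, sigma, size=W.shape)
    return W_dev @ x

# Ideal vs. non-ideal output for a toy 8-input, 4-output layer.
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
ideal = W @ x
noisy = noisy_crossbar_matvec(W, x)

# The small output perturbation blurs the per-sample confidence
# signals that membership inference attacks exploit, while leaving
# the computation approximately correct.
print(np.abs(ideal - noisy).max())
```

The intuition is that the same stochastic deviation that would normally be treated as a hardware defect acts here as noise injection on the model's outputs, degrading an attacker's membership signal at a small cost in accuracy.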

Citation

Islam, M., S.B. Dutta, A. Marquez, I. Alouani, and K.N. Khasawneh. 2024. Harnessing ML Privacy by Design Through Crossbar Array Non-idealities. In Design, Automation and Test in Europe Conference (DATE 2024), March 25-27, 2024, Valencia, Spain, 1-2. Piscataway, New Jersey: IEEE. PNNL-SA-194845.