August 7, 2024
Conference Paper
Harnessing ML Privacy by Design Through Crossbar Array Non-idealities
Abstract
Deep Neural Networks (DNNs), which handle compute- and data-intensive tasks, often rely on accelerators such as Resistive-switching Random-access Memory (RRAM) crossbars for energy-efficient in-memory computation. Although RRAM's inherent non-idealities cause deviations in DNN output, this study turns that weakness into a strength. By leveraging RRAM non-idealities, the research enhances privacy protection against Membership Inference Attacks (MIAs), which reveal private information about the training data. RRAM non-idealities disrupt the features MIAs rely on, increasing model robustness and exposing a privacy-accuracy trade-off. Empirical results with four MIAs and DNNs trained on different datasets demonstrate a significant reduction in privacy leakage with only a minor accuracy drop (e.g., up to 2.8% for ResNet-18 on CIFAR-100).
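The core idea above can be illustrated with a minimal sketch. This is not the paper's method or noise model; it is a hypothetical toy that perturbs a model's weights with multiplicative Gaussian noise (a common first-order approximation of RRAM conductance variation) and shows how the output confidence, a feature many MIAs exploit, shifts. All names and the noise scale `sigma` are illustrative assumptions.

```python
# Toy sketch (NOT the paper's model): perturb weights with
# multiplicative Gaussian noise to mimic RRAM device variation,
# then compare the top-class confidence before and after.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "model": a single linear layer mapping 64 features to 10 classes.
W = rng.normal(size=(10, 64))
x = rng.normal(size=64)

clean_conf = softmax(W @ x).max()

# Multiplicative noise on the weights; sigma is an illustrative
# value, not a measured device parameter.
sigma = 0.1
W_noisy = W * (1.0 + rng.normal(scale=sigma, size=W.shape))
noisy_conf = softmax(W_noisy @ x).max()

print(float(clean_conf), float(noisy_conf))
```

Because MIAs often threshold on such confidence scores, even modest, random deviations of this kind can blur the member/non-member separation, which is the intuition behind the privacy-accuracy trade-off the abstract reports.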