This paper investigates the effectiveness of machine unlearning techniques in removing sensitive data from pre-trained ResNet-18 models on the CIFAR-10 dataset. Specifically, it compares Fine-Tuning and the Fisher Noise-based Impair-Repair method in minimizing data leakage while preserving model performance. The study evaluates each technique's ability to reduce Membership Inference Attack (MIA) scores while maintaining comparable accuracy on the retained data. The findings show that the Impair-Repair technique reduces MIA scores significantly more than Fine-Tuning, demonstrating its potential for responsible AI development: it protects data privacy without compromising the model's performance. The research contributes to advancing techniques that address the challenges of data privacy in machine learning.
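To illustrate the kind of Fisher noise-based impair-repair procedure the abstract refers to, the following is a minimal NumPy sketch on a toy logistic-regression model rather than a ResNet-18. All names, data, and hyperparameters (`alpha`, the diagonal Fisher estimate, the epoch counts) are illustrative assumptions, not the paper's actual implementation: the impair step perturbs each parameter with Gaussian noise scaled by the inverse square root of its estimated Fisher information on the retained data (so parameters unimportant to the retain set are perturbed most), and the repair step briefly fine-tunes on the retain set only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_nll(w, x, y):
    # Gradient of the negative log-likelihood for one example.
    return (sigmoid(x @ w) - y) * x

def train(w, X, y, lr=0.1, epochs=50):
    # Plain per-example SGD.
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * grad_nll(w, xi, yi)
    return w

def accuracy(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# Synthetic retain/forget split (illustrative assumption).
X_retain = rng.normal(size=(200, 5))
y_retain = (X_retain[:, 0] > 0).astype(float)
X_forget = rng.normal(size=(20, 5))
y_forget = (X_forget[:, 0] > 0).astype(float)

# 1) Train on all data (retain + forget).
w = train(np.zeros(5),
          np.vstack([X_retain, X_forget]),
          np.concatenate([y_retain, y_forget]))

# 2) Impair: diagonal Fisher estimate on the retain set
#    (mean squared per-example gradient), then add noise
#    inversely scaled by it.
fisher = np.mean([grad_nll(w, xi, yi) ** 2
                  for xi, yi in zip(X_retain, y_retain)], axis=0)
alpha = 0.1  # noise scale, an assumed hyperparameter
w_impaired = w + alpha * rng.normal(size=w.shape) / np.sqrt(fisher + 1e-8)

# 3) Repair: short fine-tune on the retain set only.
w_repaired = train(w_impaired, X_retain, y_retain, epochs=10)

print("retain accuracy:", accuracy(w_repaired, X_retain, y_retain))
```

The sketch shows the structure only; in the paper's setting the model is a ResNet-18 and unlearning quality is additionally measured via MIA scores, which this toy example does not compute.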