User Privacy

Patient-Driven Privacy Control

By Berkay Celik, PhD

Patients are asked to disclose personal information such as genetic markers, lifestyle habits, and clinical history. Statistical models then use this data to predict personalized treatments. However, due to privacy concerns, patients often wish to withhold sensitive information. This self-censorship can impede proper diagnosis and treatment, which may lead to serious health complications. We implemented privacy distillation, a mechanism that lets patients control the type and amount of information they disclose to healthcare providers for use in machine-learning models.

This project led to two papers published in IEEE PAC 2017: Patient-Driven Privacy Control through Generalized Distillation and Achieving Secure and Differentially Private Computations in Multiparty Settings.

Agility Maneuvers to Mitigate Inference Attacks on Sensed Location Data

By Giuseppe Petracca, PhD

Sensed location data is subject to inference attacks by cybercriminals who aim to obtain the exact position of sensitive locations, such as the victim's home and workplace, in order to launch a variety of attacks. Various Location-Privacy Preserving Mechanisms (LPPMs) exist to reduce the probability that inference attacks on location data succeed. However, such mechanisms have been shown to be less effective when the adversary knows the protection mechanism in use (a white-box attack). We propose a novel approach that uses targeted agility maneuvers as a more robust defense against white-box attacks. Agility maneuvers are systematically activated in response to specific system events to rapidly and continuously control the rate of change in system configurations and increase diversity in the space of readings, decreasing the probability that an adversary's inference attack succeeds. Experiments on a real dataset show that adopting agility maneuvers reduces the success probability of white-box attacks to 2.68% on average, compared to 56.92% when using state-of-the-art LPPMs.
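A minimal sketch of the idea, under stated assumptions: the base LPPM below is simple per-axis Laplace noise (a simplified stand-in for mechanisms like planar Laplace / geo-indistinguishability), and the agility maneuver re-randomizes the obfuscation configuration whenever a system event fires, so successive readings are not drawn from one fixed mechanism a white-box adversary could invert. The parameter ranges, event trigger, and class name are illustrative, not the paper's implementation.

```python
# Hypothetical sketch: an agility maneuver layered on a simple LPPM.
import math
import random

def laplace(scale, rng):
    """Draw from a zero-mean Laplace distribution via its inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

class AgileLPPM:
    """Adds per-axis Laplace noise to location readings and, on each
    system event, re-randomizes its own configuration (noise scale and
    offset) to diversify the space of released readings."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.maneuver()  # pick an initial configuration

    def maneuver(self):
        # Agility maneuver: change the obfuscation parameters so the
        # release distribution does not stay fixed (ranges are assumed).
        self.scale = self.rng.uniform(0.01, 0.05)
        self.offset = (self.rng.uniform(-0.02, 0.02),
                       self.rng.uniform(-0.02, 0.02))

    def release(self, lat, lon, event=False):
        if event:  # e.g. app switch, screen unlock, geofence crossing
            self.maneuver()
        return (lat + self.offset[0] + laplace(self.scale, self.rng),
                lon + self.offset[1] + laplace(self.scale, self.rng))

lppm = AgileLPPM(seed=42)
home = (40.793, -77.860)  # hypothetical sensitive location
readings = [lppm.release(*home, event=(i % 10 == 0)) for i in range(50)]

# A naive white-box attacker who knows the base mechanism might average
# the readings to estimate "home"; the shifting configuration adds
# estimation error beyond what the base noise alone would give.
est = (sum(r[0] for r in readings) / 50,
       sum(r[1] for r in readings) / 50)
```

The key property is that the attacker's model of the mechanism goes stale each time an event triggers a maneuver, which is what degrades white-box inference relative to a static LPPM.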

Agility Maneuvers to Mitigate Inference Attacks on Sensed Location Data was published in MILCOM 2016.