
Oral in Workshop: Resource-Constrained Learning in Wireless Networks

Trained-MPC: A Private Inference by Training-Based Multiparty Computation

Hamidreza Ehteram · Mohammad Ali Maddah-Ali · Mahtab Mirmohseni


Abstract:

How can we perform inference on data using cloud servers without leaking any information to them? The answer lies in Trained-MPC, an approach to inference privacy that can be applied to deep learning models. It relies on a cluster of servers, each running a learning model, each of which is fed the client data perturbed by strong noise. The noise is independent of the user data but dependent across the servers, and its variance is set large enough to make the information leakage to the servers negligible. The dependency among the noise terms in the queries allows the parameters of the models running on different servers to be trained such that the client can cancel the contribution of the noise by combining the outputs of the servers, recovering the final result with high accuracy and minor computational effort. In other words, in the proposed method, we develop a multiparty computation (MPC) by training for a specific inference task, while avoiding the extensive communication overhead that MPC entails. Simulation results demonstrate that Trained-MPC resolves the tension between privacy and accuracy while avoiding the computational and communication load required by cryptographic schemes.
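The core mechanism can be illustrated with a minimal sketch. The sketch below uses a single shared linear model, where the cancellation at the client is exact; the paper's contribution is to *train* (possibly nonlinear) per-server models so that the combination step approximately removes the noise. All names, dimensions, and the zero-sum noise construction here are illustrative assumptions, not the authors' exact protocol.

```python
import random

random.seed(0)

K = 3  # number of servers (illustrative)
D = 4  # input dimension (illustrative)
x = [random.gauss(0, 1) for _ in range(D)]  # client data

# Correlated noise shares: each share is high-variance and independent
# of x, but the shares sum to zero across servers, so the client can
# cancel them when combining the replies.
noise = [[random.gauss(0, 100.0) for _ in range(D)] for _ in range(K - 1)]
noise.append([-sum(col) for col in zip(*noise)])

# Hypothetical linear model shared by all servers; with a linear map
# the noise cancels exactly in the average of the outputs.
w = [random.gauss(0, 1) for _ in range(D)]

def model(v):
    """Server-side model: a single linear layer (assumption)."""
    return sum(wi * vi for wi, vi in zip(w, v))

# Each server only ever sees the heavily noised query x + noise[i].
server_outputs = [
    model([xi + ni for xi, ni in zip(x, noise[i])]) for i in range(K)
]

# Client combines the replies; the dependent noise terms cancel in the mean.
recovered = sum(server_outputs) / K
true_output = model(x)
```

Because the noise variance (here 100) is far larger than the data scale, each individual query reveals essentially nothing about `x`, yet `recovered` matches `model(x)` up to floating-point error.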
