Convolutional Neural Networks (CNNs) have been widely used in various areas. As training CNNs requires powerful computing resources, data owners now employ clouds to accomplish the task. However, this inevitably introduces serious privacy issues for the data owners, since the training images are outsourced to the clouds, which may illegally inspect the content of the images for potential benefit. In this work, we propose HeHe, a framework for training CNNs over encrypted images with practical efficiency, built on additively homomorphic encryption and a delicate interaction scheme in the CryptoHeader, the shallow layers of the network. To evaluate whether image content is preserved through a processing system, we propose (α, β)-recoverable, a novel image privacy model, and theoretically prove that HeHe is robust under it. We evaluate HeHe on several datasets in terms of accuracy, efficiency, and privacy. The empirical study shows that HeHe is practical for CNN training over encrypted images, preserving accuracy with acceptable training cost and limited content leakage.
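The abstract names additively homomorphic encryption as the core primitive protecting the outsourced images. As a minimal illustration of that property, the sketch below implements a toy Paillier cryptosystem, a standard additively homomorphic scheme: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts, so a cloud can aggregate encrypted values without seeing them. The tiny fixed primes and the specific scheme choice are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import random
from math import gcd

# Toy Paillier keypair with small fixed primes (insecure; illustration only,
# not the parameters used by HeHe).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                              # standard simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(x):
    # Paillier's L function: L(x) = (x - 1) / n
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse (Python 3.8+)

def encrypt(m):
    # c = g^m * r^n mod n^2, with random r coprime to n
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: Enc(a) * Enc(b) mod n^2 decrypts to a + b
a, b = 17, 25
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```

The homomorphic addition is what lets the shallow CryptoHeader layers operate on ciphertexts: weighted sums (the linear part of a convolution) can be evaluated under encryption, while the interaction scheme handles the parts that additive homomorphism alone cannot.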
Sun, L., Li, H., Yu, S., Ma, X., Peng, Y., & Cui, J. (2022). HeHe: Balancing the Privacy and Efficiency in Training CNNs over the Semi-honest Cloud. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13640 LNCS, pp. 422–442). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-22390-7_25