Show simple item record

dc.contributor.author: Dagan, Yuval
dc.contributor.author: Feldman, Vitaly
dc.date.accessioned: 2025-09-02T18:42:27Z
dc.date.available: 2025-09-02T18:42:27Z
dc.date.issued: 2020-06-22
dc.identifier.isbn: 978-1-4503-6979-4
dc.identifier.uri: https://hdl.handle.net/1721.1/162595
dc.description: STOC ’20, June 22–26, 2020, Chicago, IL, USA [en_US]
dc.description.abstract: Local differential privacy (LDP) is a model where users send privatized data to an untrusted central server whose goal is to solve some data analysis task. In the non-interactive version of this model the protocol consists of a single round in which a server sends requests to all users and then receives their responses. This version is deployed in industry due to its practical advantages and has attracted significant research interest. Our main result is an exponential lower bound on the number of samples necessary to solve the standard task of learning a large-margin linear separator in the non-interactive LDP model. Via a standard reduction this lower bound implies an exponential lower bound for stochastic convex optimization and, specifically, for learning linear models with a convex, Lipschitz and smooth loss. These results answer the questions posed by Smith, Thakurta, and Upadhyay (IEEE Symposium on Security and Privacy 2017) and Daniely and Feldman (NeurIPS 2019). Our lower bound relies on a new technique for constructing pairs of distributions with nearly matching moments but whose supports can be nearly separated by a large-margin hyperplane. These lower bounds also hold in the model where communication from each user is limited and follow from a lower bound on learning using non-adaptive statistical queries. [en_US]
dc.publisher: ACM | Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing [en_US]
dc.relation.isversionof: https://doi.org/10.1145/3357713.3384315 [en_US]
dc.rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. [en_US]
dc.source: Association for Computing Machinery [en_US]
dc.title: Interaction Is Necessary for Distributed Learning with Privacy or Communication Constraints [en_US]
dc.type: Article [en_US]
dc.identifier.citation: Yuval Dagan and Vitaly Feldman. 2020. Interaction is necessary for distributed learning with privacy or communication constraints. In Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC 2020). Association for Computing Machinery, New York, NY, USA, 450–462. [en_US]
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science [en_US]
dc.identifier.mitlicense: PUBLISHER_POLICY
dc.eprint.version: Final published version [en_US]
dc.type.uri: http://purl.org/eprint/type/ConferencePaper [en_US]
eprint.status: http://purl.org/eprint/status/NonPeerReviewed [en_US]
dc.date.updated: 2025-09-01T07:48:44Z
dc.language.rfc3066: en
dc.rights.holder: The author(s)
dspace.date.submission: 2025-09-01T07:48:45Z
mit.license: PUBLISHER_POLICY
mit.metadata.status: Authority Work and Publication Information Needed [en_US]
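Note: The abstract above defines the non-interactive LDP model as a single round in which the server sends one request to every user and then collects their privatized responses. As a loose illustration only (the paper proves lower bounds and does not propose this mechanism), the hypothetical Python sketch below runs one such round using classic randomized response, a standard epsilon-LDP local randomizer, to estimate the mean of users' bits; all names and parameters are illustrative assumptions.

    import math
    import random

    def randomized_response(bit: int, epsilon: float) -> int:
        # Classic epsilon-LDP local randomizer: report the true bit with
        # probability e^eps / (e^eps + 1), otherwise report its flip.
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
        return bit if random.random() < p_truth else 1 - bit

    def noninteractive_round(user_bits, epsilon: float) -> float:
        # One round of the non-interactive model: the server issues a single,
        # fixed request to every user, collects all privatized responses at
        # once, and post-processes them; no further queries are allowed.
        p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
        responses = [randomized_response(b, epsilon) for b in user_bits]
        observed = sum(responses) / len(responses)
        # Debias: E[observed] = (1 - p) + q * (2p - 1) for true mean q.
        return (observed - (1.0 - p)) / (2.0 * p - 1.0)

    if __name__ == "__main__":
        bits = [1] * 700 + [0] * 300                    # hypothetical user data
        print(noninteractive_round(bits, epsilon=1.0))  # roughly 0.7 for large n

The point of the sketch is the interaction pattern, not the estimator: every user's randomizer is fixed before any responses are seen, which is exactly the constraint under which the paper's exponential sample-complexity lower bounds apply.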

