New Project Aims to Prevent Abuse in Encrypted Communication
Mitigating abuse of encrypted communication on platforms such as WhatsApp and Signal, while ensuring user privacy, is a massive challenge on a range of fronts, including technological, legal and social.
A five-year, $3 million National Science Foundation grant to a multidisciplinary team of Cornell researchers aims to take early but significant steps on that arduous journey toward safe, secure online communication.
Thomas Ristenpart, associate professor of computer science at the Cornell Ann S. Bowers College of Computing and Information Science and at Cornell Tech, is principal investigator (PI) of the project, “Privacy-Preserving Abuse Prevention for Encrypted Communications Platforms.”
“This is a charged topic area, because of the fears that these types of abuse mitigations will come at the cost of degrading privacy guarantees,” Ristenpart said. “So the real trick is trying to preserve privacy in a meaningful way, while still empowering and enabling users to be more protected from these kinds of abuse.”
Co-PIs are Mor Naaman, professor of information science at Cornell Bowers CIS and at Cornell Tech; James Grimmelmann, the Tessler Family Professor of Digital and Information Law at Cornell Tech and at Cornell Law School; J. Nathan Matias, assistant professor of communication in the College of Agriculture and Life Sciences; and Amy Zhang, assistant professor in the Allen School of Computer Science and Engineering at the University of Washington.
“This problem needs an approach that goes well beyond just the technical aspects,” Naaman said. “In putting our team together, we aimed to get broad coverage: everything from the design of the systems and how different communities use them, to legal frameworks that can enable innovation in this space, and questions about the social norms and expectations around these areas.”
The team has been working on this challenge for some time; in fact, a new paper just released on arXiv, “Increasing Adversarial Uncertainty to Scale Private Similarity Testing,” addresses the challenge of enabling privacy-preserving client-side warnings of potential abuse in encrypted communication. First author Yiqing Hua, a doctoral student in the field of computer science at Cornell Tech, will present the work next summer at USENIX Security 2022.
Ristenpart, whose research spans a wide range of computer security topics, said abuse mitigation in encrypted messaging is a wide-open field.
“For the most part, the protections are pretty rudimentary in this space,” he said. “And part of that is due to kind of fundamental tensions that arise because you’re trying to provide strong privacy guarantees … while working to build out these (abuse mitigation) features.”
The NSF-funded research is organized around two overlapping approaches: one algorithm-driven, the other community-driven.
The former will focus on developing better cryptographic tools for privacy-aware abuse detection in encrypted settings, such as detecting viral, fast-spreading content. These designs will be informed by a human-centered approach to understanding people’s privacy expectations, and supported by legal analyses that ensure the tools are consistent with applicable privacy and content-moderation laws.
The latter will focus on giving online communities the tools they need to address abuse challenges in encrypted settings. Given the challenges and pitfalls of centralized approaches to abuse mitigation, the project will explore building distributed moderation capabilities to support communities and groups on these platforms.
The new paper, of which Ristenpart and Naaman are co-authors, addresses the algorithmic side of abuse mitigation with a prototype concept called “similarity-based bucketization,” or SBB. To check an image, a client reveals a small amount of information about it to a database-holding server, which uses that information to generate a “bucket” of potentially similar items.
“This bucket,” Hua said, “would be small enough for efficient computation, but big enough to provide ambiguity so the server doesn’t know exactly what the image is, protecting the privacy of the user.”
The key to SBB, as with secure encrypted systems generally, is striking the right balance: revealing enough information to detect possible abuse while preserving user privacy.
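To make the idea concrete, here is a minimal sketch of what a bucketized similarity check might look like, assuming a 64-bit perceptual image hash. The names, parameters and bucketing scheme here are illustrative assumptions, much simplified from the protocol in the paper:

```python
# Hypothetical sketch of the bucketization idea only; the hash scheme,
# parameters and API are assumptions, not the protocol from the paper.
from collections import defaultdict

HASH_BITS = 64    # length of a perceptual image hash (assumed)
BUCKET_BITS = 8   # bits the client reveals; fewer bits = bigger, more ambiguous buckets

def bucket_key(phash: int) -> int:
    # The client discloses only the top BUCKET_BITS bits of its hash,
    # so many different images map to the same request.
    return phash >> (HASH_BITS - BUCKET_BITS)

class SimilarityServer:
    """Holds perceptual hashes of known abusive content, indexed by coarse key."""
    def __init__(self, known_hashes):
        self.buckets = defaultdict(list)
        for h in known_hashes:
            self.buckets[bucket_key(h)].append(h)

    def fetch_bucket(self, key: int) -> list:
        # The server never sees the client's full hash, only the coarse key.
        return self.buckets.get(key, [])

def client_check(phash: int, server: SimilarityServer, max_dist: int = 10) -> bool:
    # The client downloads the candidate bucket and runs the exact similarity
    # test (Hamming distance) locally, keeping the image itself private.
    candidates = server.fetch_bucket(bucket_key(phash))
    return any(bin(phash ^ h).count("1") <= max_dist for h in candidates)

if __name__ == "__main__":
    server = SimilarityServer([0xFACEB00CDEADBEEF, 0x0123456789ABCDEF])
    # An image whose hash differs from a known item in only four bits:
    print(client_check(0xFACEB00CDEADBEE0, server))  # True
```

In this toy version, BUCKET_BITS is the balance knob Hua describes: revealing fewer bits makes each bucket larger and more ambiguous (better privacy, more client-side work), while revealing more bits shrinks the bucket and leaks more about the image. The actual SBB design, as the paper’s title suggests, is aimed at increasing the server’s uncertainty at scale.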
Ristenpart said questions about the usability and implementation of SBB will be addressed in future research, but the work has given his group a running start on the five-year grant and its focus on how tech companies detect abuse.
“There are a lot of usability questions,” Ristenpart said. “We don’t really understand how users react to information on these private channels already, let alone when we do interventions, such as warning people about disinformation. So there are a lot of questions, but we’re excited to work on it.”
Funding for this work was provided by the National Science Foundation.