UNIVERSITY PARK, Pa. -- Online social networks such as Facebook, Twitter and Google+ have become popular vehicles for sharing information and socializing. However, there is often a discrepancy between what social network users intend to share and the information that is actually disclosed. A group of researchers at Penn State’s College of Information Sciences and Technology (IST), in partnership with two professors at the University of Kansas, is working to develop a theory and system that would help social network users close the gap between perceived and actual privacy.
“People don’t clearly understand the boundaries of personal information versus sharing boundaries,” said Dongwon Lee, the principal investigator (PI) of the project.
Lee, along with Peng Liu, professor of IST; Mary Beth Rosson, interim dean of the College of IST; and Bo Luo, co-PI, and Jun (Luke) Huan, both professors in the Department of Electrical Engineering and Computer Science at the University of Kansas, was recently awarded a collaborative grant ($279,154 for Penn State and $220,162 for the University of Kansas) from the National Science Foundation (NSF) to support the project “Privacy Protection in Social Networks: Bridging the Gap Between User Perception and Privacy Enforcement.” The project has three goals: to develop methods that detect discrepancies between users’ information-sharing expectations and their actual information disclosure; to design a user-centered, computationally efficient formal model of user privacy in social networks; and to develop a mechanism that effectively enforces privacy policies.
The widespread use of online social networks has become a double-edged sword, according to Lee. While the websites promote online socialization, he said, there is a “discrepancy between privacy expectations and the sharing nature of social sites.” Hackers can infiltrate social networks and steal personal information, and cyber attackers can link mainstream social network accounts (e.g., Facebook and LinkedIn) with more obscure, anonymous accounts. For example, he said, someone who anonymously participates in an online community on a medical information site such as WebMD may have a disease that they don’t wish to make public. However, attackers can connect an identity-revealing clue from the medical site with a publicly known identity on social media, enabling them to access information that was intended to be private.
Further complicating the issue, Lee said, is a “privacy paradox” that is common among social network users. While many people are concerned about controlling the information they share on social media, they often don’t take the protective measures needed to guard their privacy, such as setting strong passwords or modifying access control policies. Lee explained the paradox in terms of “bounded rationality” — while people understand the possible consequences of their lax behavior, they don’t believe that the risk is great enough to warrant extra vigilance.
While other researchers have attempted to solve the privacy conundrum of online social networks, Lee said, the solutions they have generated have been either technological or human-oriented. The technological framework, which relies on algorithms to develop privacy controls, lacks real-world context, while the human-oriented perspective, which studies user behaviors, does not provide a model that can be efficiently implemented in a practical setting. The research of Lee’s team is unique in that it encompasses both technological and human-oriented solutions.
“We feel that if we take advantage of both frameworks, we’ll be able to come up with a better solution,” he said.
After conducting a large-scale user study to demonstrate the effectiveness of their proposed model, Lee said, he and his fellow researchers hope to develop a mechanism that would enable social network users to set their privacy preferences with greater ease and receive warnings about possible leaks. The technology could take the form of a social app integrated into a user’s social media accounts.
“Hopefully, we will develop better, very rigorous underpinnings of the privacy model and a slew of technological tools to enforce this newly developed model,” Lee said.