LinkedIn may have a reputation as a relatively boring social network—a virtual version of a networking happy hour filled with people wearing lanyards—but it’s in the news thanks to recent research it undertook and the interesting findings that resulted.
A group of researchers from Harvard, Stanford, MIT, and LinkedIn recently published the results of a five-year-long study on social connections and job mobility in the journal Science.
From 2015 to 2019, LinkedIn tweaked the algorithm underlying its “People You May Know” feature, randomly varying the proportion of weak and strong contacts suggested as new connections to 20 million of its users. LinkedIn measured mutual connections and interactions between users in order to classify “strong ties” as close friends and “weak ties” as more occasional acquaintances.
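The Science paper doesn’t include LinkedIn’s production code, but the basic shape of such an experiment—hashing users into randomized arms that surface different fractions of weak-tie suggestions—can be sketched in a few lines. The Python below is a hypothetical illustration only: the tie-strength formula, the cutoff, the arm weights, and names like `mutual_connections` are assumptions for the sketch, not LinkedIn’s actual implementation.

```python
import hashlib
import random
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    mutual_connections: int  # shared connections with the viewing user
    interactions: int        # messages, comments, profile visits, etc.

def tie_strength(contact: Contact) -> float:
    """Crude proxy: more mutual connections and interactions imply a
    stronger tie. The study used these kinds of signals; this exact
    formula is an assumption."""
    return contact.mutual_connections + 2 * contact.interactions

def assign_arm(user_id: str, weak_fractions=(0.2, 0.4, 0.6, 0.8)) -> float:
    """Deterministically hash each user into one experimental arm.
    Each arm shows a different fraction of weak-tie suggestions."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return weak_fractions[h % len(weak_fractions)]

def suggest_connections(user_id: str, candidates, k=10, cutoff=5.0):
    """Return k 'People You May Know' suggestions, mixing weak and
    strong ties in the proportion set by the user's arm."""
    weak = [c for c in candidates if tie_strength(c) < cutoff]
    strong = [c for c in candidates if tie_strength(c) >= cutoff]
    n_weak = round(k * assign_arm(user_id))
    picks = random.sample(weak, min(n_weak, len(weak)))
    picks += random.sample(strong, min(k - len(picks), len(strong)))
    return picks
```

Outcomes such as job applications and job changes would then be compared across arms; it’s the randomized assignment that lets researchers read differences between groups as causal rather than as mere correlation.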
In a series of micro-experiments that it later analyzed with other experts, it found that people were more likely to get jobs through “weak ties,” especially in more digital industries. This finding is in line with sociologist Mark Granovetter’s influential 1973 theory of “the strength of weak ties,” which holds that casual contacts tend to be more important sources of new information and opportunities than close friends.
LinkedIn, which is owned by Microsoft, intended to use these insights to build a better algorithm for all of its users. And in its privacy policy, the company does note that users’ personal data can be used for research purposes. But experts recently voiced concerns to The New York Times that these behind-the-scenes tweaks could have long-term negative consequences for users.
“The findings suggest that some users had better access to job opportunities or a meaningful difference in access to job opportunities,” Michael Zimmer, an associate professor of computer science and the director of the Center for Data, Ethics and Society at Marquette University, told the NYT. “These are the kind of long-term consequences that need to be contemplated when we think of the ethics of engaging in this kind of big data research.”
Transparency is not the only issue these companies are grappling with. LinkedIn has also been dealing with a rise in connection fraud: a recent investigation by MIT Technology Review showed that scammers with false identities took advantage of mutual connections to gain their victims’ trust.
It’s not unusual for tech companies to pilot test new features on small groups of users. However, large-scale, undisclosed social experiments by big tech companies have historically been met with mixed receptions. A 2014 Facebook study analyzing how user moods could be influenced by manipulating News Feed content, for example, was met with backlash. That same year, OkCupid also admitted to tampering with compatibility scores in order to see the effect on user behavior on the site.
On the other hand, Spotify is conducting more passive, observational studies, and YouTube and Twitter have both been actively testing features like misinformation-identification education and crowdsourced content labeling in an attempt to help users have better experiences on their platforms.
Modern psychologists and sociologists, too, are looking to use the internet and its various applications to study friendships, social networks, online culture, and their impact on behavior. But psychology, as a field, has long contended with questions about experimental ethics and the deception of participants. Many of the classic studies from the 1900s would, thankfully, not be possible to conduct today (just think of the twin experiments and the Stanford prison experiment). Understanding where the boundaries lie between researchers, tech platforms, and unwitting users is, in some ways, simply a 21st-century iteration of this ongoing challenge.