Despite a massive base of more than 750 million members, employment-focused social media giant LinkedIn distills the role of artificial intelligence (AI) and data at the company into three categories, according to its head of data, Ya Xu: talent, knowledge and product.
Regardless of where data or AI is applied, Xu asserts that LinkedIn ultimately aims to approach everything in a way that “creates economic opportunities and value for members, customers and companies.”
But how is the networking giant addressing key issues like bias while innovating for the future? How is it protecting privacy while also providing its swath of useful data to inform research?
As the field of AI continues to evolve, conversations among professionals in the industry are continuing to evolve with it — perhaps as a way of holding the technology and its developers accountable.
Threads and posts about AI bias, sentience and opportunity are plentiful across Twitter and, of course, LinkedIn as well. These conversations often turn to how AI is shaping the use and experience of social platforms themselves.
At VentureBeat’s Transform 2022 Data and AI Executive Summit, the conversations further underscored these trends.
LinkedIn is uniquely positioned as both a developer and deployer of its own AI models and a researcher with its spectrum of data collection. It’s also a platform that’s central to professional connections, fostering a space for dialogue about industry issues and evolution.
Xu said that because LinkedIn aims to provide economic opportunities for every member, the company simply, “can’t afford to not do AI responsibly.”
LinkedIn integrates responsible AI across the company, with checks and balances in place that measure and aim to catch any unintended consequences or biased results from the models that power the company's AI. Importantly, Xu said, teams interacting with any part of the AI algorithms should plan together, meet with one another and communicate effectively. Doing so, particularly at a large company, provides another way to keep AI "in check."
For instance, Xu said that if a recruiter conducts a search on LinkedIn's platform for "nurses" or "data scientists," her team works hard to make sure responsible results are shown — meaning that the algorithm isn't pulling up a disproportionate number of female results for the nurse profession or male results for the data scientist search. Xu noted that regardless of whether a LinkedIn member is taking advantage of learning features, applying to jobs, making connections or seeking potential candidates for recruitment, the AI must always be developed with the end user's interactions in mind.
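The kind of check Xu describes can be illustrated with a small sketch. This is a hypothetical example, not LinkedIn's actual system: it compares each demographic group's share of a result page against that group's share of the qualified candidate pool, and flags any group whose representation drifts beyond a tolerance.

```python
from collections import Counter

def representation_check(results, qualified_pool, attribute_key, tolerance=0.1):
    """Compare each group's share of the results against its share of the
    qualified pool; return groups whose deviation exceeds the tolerance.
    (Illustrative only; attribute names and thresholds are assumptions.)"""
    result_counts = Counter(r[attribute_key] for r in results)
    pool_counts = Counter(c[attribute_key] for c in qualified_pool)
    flagged = {}
    for group, pool_count in pool_counts.items():
        expected = pool_count / len(qualified_pool)
        observed = result_counts.get(group, 0) / len(results)
        if abs(observed - expected) > tolerance:
            flagged[group] = {"expected": round(expected, 2),
                              "observed": round(observed, 2)}
    return flagged

# Example: a qualified pool that is 50/50, but a result page that skews 80/20.
pool = [{"gender": "f"}] * 50 + [{"gender": "m"}] * 50
results = [{"gender": "f"}] * 8 + [{"gender": "m"}] * 2
print(representation_check(results, pool, "gender"))
# → {'f': {'expected': 0.5, 'observed': 0.8}, 'm': {'expected': 0.5, 'observed': 0.2}}
```

A real ranking system would apply a statistical test rather than a fixed tolerance, but the idea is the same: measure the result distribution against a fairness baseline before results ship.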
“It has been interesting to see AI become such a buzzword with many business leaders,” Xu said. “But this [AI] is an area LinkedIn does particularly well with. AI development should be very well-integrated with the rest of the product development processes and [across] teams… It’s different from software development, and it’s also not deterministic. When you are designing, you have to think about, ‘does the AI interact [with] a user in a way that makes sense?’ It’s important to differentiate and understand the nuances [in its functionalities].”
However, algorithm tweaks to the platform's social media aspect, which one Transform audience member said has "become a lot more like Facebook" as of late, are still a work in progress, according to Xu. She acknowledged that it's feedback the company has heard before and that "our user experience is very important to us," but noted that striking the right balance here has been challenging. Ultimately, Xu said, LinkedIn is "a professional social network and it is important to us that we stay true to that."
An abundance of data, but at what cost?
Of course, as a social media-infused professional engagement platform, the company has access to an abundance of data — an aspect that grows every day. The data LinkedIn has positioned itself to harness about our professional world is enormous — ranging from which university graduates are getting the most jobs in a certain field to just how work has evolved because of COVID-19.
It’s this AI-driven data that garners interest from the likes of government entities, nonprofits and global economic forums. With the company’s swath of data comes great responsibility for maintaining user privacy, while also fueling global economic research with its detailed insights.
“We recognize LinkedIn is in a unique position … our economic graph team, run by our chief economist, ensures that data insights are synthesized in an aggregated format to make sure there are no privacy concerns,” Xu said.
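Aggregation of the kind Xu describes is commonly paired with a minimum-group-size rule so that no published figure can be traced back to a small set of individuals. A rough sketch of that idea (an assumed, k-anonymity-style technique, not LinkedIn's actual pipeline):

```python
from collections import defaultdict

def aggregate_with_threshold(records, group_key, min_group_size=20):
    """Roll raw records up into per-group counts, suppressing any group
    too small to publish safely. (Illustrative only; the threshold and
    field names are assumptions, not LinkedIn's real parameters.)"""
    counts = defaultdict(int)
    for record in records:
        counts[record[group_key]] += 1
    # Only groups with at least min_group_size members are released.
    return {group: n for group, n in counts.items() if n >= min_group_size}

records = [{"industry": "nursing"}] * 25 + [{"industry": "astronautics"}] * 3
print(aggregate_with_threshold(records, "industry"))
# → {'nursing': 25}  (the three-person group is suppressed)
```

Production systems often go further, adding noise via differential privacy, but suppression of small cells is the simplest form of the "aggregated format" Xu mentions.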
She went on to note that the company recognizes the power of the AI algorithms that harness its data and shape the user experience, and that it prioritizes the problems of bias and privacy that come with them.
Balancing a spectrum of challenges
The evolving landscape of privacy, specifically, is one of the top challenges Xu and her team at LinkedIn are focused on.
“When we talk about responsible AI, it’s not just about fairness, privacy is also an important pillar of it,” she said.
On one hand, the machine learning algorithms LinkedIn has developed try to glean data, learn from it and personalize experiences for users; on the other, privacy is rooted in not wanting certain information known. Xu said the company is constantly navigating these two contrasting priorities.
“They’re pulling two different directions on the surface. How can we advance things while also preserving privacy?” Xu explained. “We are always balancing the privacy challenges… It is really important for us to work on how we continue to enjoy the benefits of AI and honor the privacy of individuals.”
Another opportunity the networking giant is focused on is employing MLOps for data observability across its critical functions to ensure smooth pipelines and user experiences, which Xu noted is no small feat.
“A lot of people think AI is a magic thing. That if you sprinkle some, then things will be better,” she said. “But it’s not. What’s going to make AI work at any company is engineering, hard work and most importantly data … You need good data to make a good experience for your users.”
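The data-observability work Xu alludes to often starts with simple freshness and volume checks on the feeds a model depends on. A toy sketch of that idea (illustrative only; real MLOps stacks use dedicated monitoring tools, and the names and thresholds here are invented):

```python
import datetime

def check_feed(name, row_count, last_updated, min_rows, max_age_hours):
    """Flag a data feed that is too small or too stale to trust.
    Returns a list of human-readable issues (empty if healthy)."""
    issues = []
    if row_count < min_rows:
        issues.append(f"{name}: row count {row_count} below minimum {min_rows}")
    age = datetime.datetime.now(datetime.timezone.utc) - last_updated
    if age > datetime.timedelta(hours=max_age_hours):
        issues.append(f"{name}: data is {age} old, exceeds {max_age_hours}h limit")
    return issues

# A feed that is both undersized and two days stale trips both checks.
stale = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(hours=48)
for issue in check_feed("member_profiles", row_count=500, last_updated=stale,
                        min_rows=1000, max_age_hours=24):
    print(issue)
```

Checks like these are the unglamorous "engineering and hard work" Xu describes: they catch bad data before it reaches a model, which is usually cheaper than debugging the model afterward.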