Community-Specific AI: Building Solutions for Any Audience

With half of the world's population online, spending over five hours a day there, online communities are flourishing. It is now easier than ever for niche communities to form: gamers can find other players and form teams, dating adults can find better matches, and students of particular subjects can find teachers and help each other. With faster networks, images, audio, and video are increasingly complementing text, creating a richer experience.

[Related article: Trust, Control, and Personalization Through Human-Centric AI]

When the Problem is at Internet (and Global) Scale

However, as these communities grow wider and deeper, they can become a target for toxic behavior. Forums for underage users can be subverted by users attempting illicit solicitations and exploitation. Chat rooms can see participants engaging in cyberbullying and toxic language. According to Pew Research, over half of online users have seen offensive name-calling and intentional embarrassment, while a quarter have witnessed physical threats or even prolonged harassment. This clearly has to stop. Businesses must now manage their communities in a way that, first and foremost, protects their users, while also guarding their brand's reputation.

The traditional way to address this issue was through human moderation. Companies like Google and Facebook hire thousands of moderators to respond to flagged content and unwanted activities while respecting users' desire for sharing and self-expression. While it achieves its goal, this approach is unscalable and beyond the means of most other businesses. More recently, advancements in technologies such as Natural Language Processing (NLP) have signaled great promise. But off-the-shelf solutions typically lack the power to represent the unique shared terminology and conversational patterns (e.g., dating chats vs. gaming chats) that each community exhibits, limiting their usefulness.

A Local Solution for a Global Problem

Although the problem is ubiquitous across platforms, the style, language, and nature of the problem are unique to each community. For some simple examples: the use of "weed" may be suspicious on an educational site, unless the topic is gardening; "slave" may be suspicious unless we're having a technical discussion on Stack Overflow; "PM me pics" would be common on an e-commerce site, but could raise an issue on sites with underage users. That is why we cannot expect a single one-size-fits-all approach to tackle this issue. Instead, we need community-specific AI solutions that aim to identify and adapt to the unique toxic online behaviors of a particular group. Nor can we expect that our solution, even after iterating and adapting, will be fully automated. Rather, we seek a synergistic relationship, where the AI multiplies the human power of a moderation team and the team's feedback further refines and evolves the AI.
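To make the "weed"/"PM me pics" examples concrete, here is a minimal sketch of community-scoped flagging: the same term triggers a flag in one community and not in another. The community names and watchlists below are purely illustrative assumptions, not from any real product; a production system would replace exact term matching with a learned, community-tuned model.

```python
# Hypothetical sketch: flag rules are scoped per community, because a term
# that is benign in one community can be suspicious in another.
# All community names and term lists here are illustrative assumptions.

COMMUNITY_WATCHLISTS = {
    "education":  {"weed", "pm me pics"},
    "gardening":  {"pm me pics"},        # "weed" is on-topic here, so not listed
    "kids_forum": {"weed", "pm me pics"},
    "ecommerce":  set(),                 # "PM me pics" is routine here
}

def flag_message(text: str, community: str) -> bool:
    """Return True if the message contains a term suspicious *for this community*."""
    lowered = text.lower()
    watchlist = COMMUNITY_WATCHLISTS.get(community, set())
    return any(term in lowered for term in watchlist)
```

For instance, `flag_message("how do I remove weeds?", "gardening")` stays clean, while the same message on the education community would be surfaced to a human moderator, whose decision then feeds back into refining the lists or model.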

Scaling a Local Solution to a Global Scale

One final obstacle to consider is that this isn't just a global problem in the sense that it affects any platform working with user-generated content; it's a global problem that spans languages, countries, and cultures. If we've gathered training data and built up our solution for our English-speaking users, what do we do for our Spanish-speaking users? Do we wait to gather another sufficiently large dataset to train a Spanish model, meanwhile desperately throwing human Spanish-speaking moderators at the issue? We need something faster, so that platforms can roll out to new audiences while still delivering quality service.

A solution to this cold-start problem is to bring what we learned from our English users to bear for our Spanish users. Word embeddings and language models can help us understand the nature of conversations. By understanding how languages map to one another semantically, we can identify correlated meanings and thereby transfer what we’ve learned about toxicity in one language to a new language with minimal cross-language data. Even with as little as a multilingual dictionary, we can build a solution for our Spanish users that can start addressing toxic behavior and provide a starting point for the iterative process of refining that solution for the unique nature of the Spanish language. 
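A toy sketch of the "as little as a multilingual dictionary" idea: learn per-word toxicity scores from labeled English messages, then score Spanish messages by mapping each word through a small bilingual dictionary before looking up its score. All of the data, the dictionary, and the scoring scheme below are illustrative assumptions; a real system would use aligned multilingual embeddings or language models rather than word-for-word lookup.

```python
import math
from collections import Counter

# Toy labeled English data (text, is_toxic) -- illustrative only.
english_data = [
    ("you are an idiot", 1),
    ("shut up idiot", 1),
    ("you stupid loser", 1),
    ("have a nice day", 0),
    ("thanks for the help", 0),
    ("see you tomorrow friend", 0),
]

# Tiny Spanish->English dictionary standing in for a real bilingual lexicon.
es_to_en = {
    "eres": "are", "un": "an", "idiota": "idiot",
    "gracias": "thanks", "amigo": "friend",
}

def train_word_scores(data, smoothing=1.0):
    """Smoothed log-odds of toxicity for each English word seen in training."""
    toxic, clean = Counter(), Counter()
    for text, label in data:
        (toxic if label else clean).update(text.lower().split())
    vocab = set(toxic) | set(clean)
    return {w: math.log((toxic[w] + smoothing) / (clean[w] + smoothing))
            for w in vocab}

def score_spanish(text, word_scores):
    """Map Spanish words into English via the dictionary, then sum scores.
    Positive totals lean toxic; unknown words contribute nothing."""
    words = (es_to_en.get(w, w) for w in text.lower().split())
    return sum(word_scores.get(w, 0.0) for w in words)

scores = train_word_scores(english_data)
```

With this cold-start scaffold, "eres un idiota" scores positive (toxic) and "gracias amigo" scores negative, before a single labeled Spanish example exists; the Spanish-specific refinement then proceeds iteratively, exactly as described above for English.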

[Related: Check out the global ODSC Slack channel!]

Recognizing and Responding 

It’s an exciting and challenging task ahead of us: to continue to grow the Internet while learning how to use it responsibly for the betterment of all. Recent advances in Deep Learning and NLP have given us new tools to make this vision a reality, but we still need the skill and forethought to use them effectively. By integrating our understanding of the nuances of language and online communities with the latest technology, we can develop the solutions and processes that help us recognize and respond to toxic behavior today and in the future.

Want to learn more? At ODSC West 2019, Spectrum Lab’s VP of Data Science Jonathan Purnell will go into greater detail about building community-centric solutions and be available for additional questions. 

Jonathan Purnell

Jonathan Purnell is the VP of Data Science at Spectrum, building tools to recognize and respond to harmful user-generated content and behaviors. Previously a Data Scientist for Krux and Salesforce DMP, Jon delivered Internet-scale distributed products using innovative machine learning techniques, including Deep Learning and NLP. Before that, he was an Applied Scientist at Bing Ads and a collaborative researcher with BBN Technologies (a division of Raytheon). He holds a Ph.D. in computer science focused on machine learning.