A few years ago, Silicon Valley engineer Bindu Reddy was raising money for a new startup. An investor offered to contribute — not because of what she was trying to do, but because she was a woman.

That rubbed Reddy the wrong way, and she wrote about it — then the backlash began.

Now Reddy's goal, with the new social network Candid, is to facilitate online conversations without the trolls. She spoke to NPR's Kelly McEvers about finding the balance between free speech and moderation on social media.


Interview Highlights

On the dangers of online speech

As a person who's in a professional field, you have to censor your opinions. And if you don't, you get attacked by trolls, and sometimes you get judged negatively by your peers, and that might even affect your career. And what I've seen again and again is a lot of my friends and my peers kind of hold back and not speak openly.

On distinguishing Candid from anonymous platforms like Yik Yak and Secret

Now, all of these other platforms were founded a couple of years ago. Over the last two years, there have been a lot of advances in natural language processing [NLP], in machine learning and in artificial intelligence, to the point that a machine can understand what you're saying. ...

The algorithm on Candid is a learning algorithm, so as we get more data we learn more. But the idea is to kind of weed out the bad posts, as I call them.
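Reddy doesn't detail the algorithm, but a "learning algorithm" that improves as more data arrives can be sketched in Python. This is an illustration only, not Candid's actual system: scikit-learn's online-learning API stands in for whatever Candid uses, and the example posts, labels and "bad post" definition are invented.

```python
# Illustrative sketch only. Shows a text classifier that keeps learning as
# new moderated posts arrive, via scikit-learn's incremental-training API.
# Labels: 1 = "bad post", 0 = acceptable (assumed labeling scheme).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # stateless text featurizer
model = SGDClassifier(loss="log_loss")            # logistic regression, trainable in increments

def learn_from_batch(posts, labels):
    """Fold a new batch of moderated posts into the model."""
    model.partial_fit(vectorizer.transform(posts), labels, classes=[0, 1])

def looks_bad(post):
    return model.predict(vectorizer.transform([post]))[0] == 1

# Each moderation batch makes the classifier a little better:
learn_from_batch(["You are an idiot", "Great point, thanks for sharing"], [1, 0])
print(looks_bad("You are an idiot"))
```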

The social media network Candid uses an algorithm to classify posts as negative or positive statements. (Courtesy of Bindu Reddy)

On how automated moderation works

The way our artificial intelligence works is that it basically parses your sentence. We use a deep learning NLP algorithm, which looks at what you're saying and decides ... whether it's positive or negative. So it classifies things as having a negative sentiment or a positive sentiment.
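Candid has not published its model, but the step Reddy describes here, scoring a sentence as positive or negative with a deep learning NLP model, can be sketched with an off-the-shelf library. The model and example text below are stand-ins, not Candid's own:

```python
# A minimal sketch of sentence-level sentiment classification using a
# pretrained deep learning NLP model from Hugging Face Transformers.
# Candid's classifier is proprietary; this only illustrates the technique.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # loads a default pretrained model

result = sentiment("I can't believe how dishonest that post was.")[0]
print(result["label"], result["score"])  # e.g. NEGATIVE 0.99
```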

It then gives it a score for how strong your statement is. Let's say you said something about someone, or you threatened someone: it classifies that as, "Hey, this is a very strong statement," because those kinds of statements are not good for social discourse. And when we do that, we basically say: if this thing has a score above a particular level, a cutoff, then we take out the whole post. So whether it's self-harm or bullying or harassment, we look for certain phrases and the context of those phrases.
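The cutoff logic she describes might look like the sketch below. The category names and the threshold value are assumptions for illustration; Candid has not disclosed its real categories or cutoff:

```python
# Hedged sketch of score-plus-cutoff moderation. Per-category scores would
# come from a classifier like the one above; here they are passed in directly.
THRESHOLD = 0.85  # assumed cutoff, not Candid's real value

def moderate(category_scores):
    """category_scores: dict like {'bullying': 0.91, 'self_harm': 0.05}."""
    flagged = [c for c, s in category_scores.items() if s >= THRESHOLD]
    # If any category crosses the cutoff, the whole post is taken down.
    return ("remove_post", flagged) if flagged else ("keep_post", [])

print(moderate({"bullying": 0.91, "harassment": 0.40, "self_harm": 0.05}))
# ('remove_post', ['bullying'])
```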

On the line between moderation and censorship

I mean, here is the thing about what is allowed, "free speech," quote-unquote, right? At some level you should be able to say what you want to say, but on another level, you also want to facilitate, you know, what I would call constructive social discussion. ...

There is a trade-off, a fine line that you need to walk, because if you let everything in, the fear is that social discussion stops and it just becomes a name-calling game. And that's what happens if you leave certain discussions alone, just let them be and don't pull things down: you will see they quickly devolve into people calling each other names and not having any kind of constructive conversation.

Copyright 2016 NPR. To see more, visit NPR.
