To make it easier to catch and resolve volatile situations early on, Facebook is changing how content is reported, the company announced Wednesday. It’s giving users tools to better communicate their feelings and handle conflicts themselves. The changes are the result of collaborations with Yale, Columbia and Berkeley that involved months of research and focus groups with kids, teachers and clinical psychologists.
The first change is specifically for 13- and 14-year-olds (the minimum age required to sign up for a Facebook account is 13). If a boy in that age range wants to report a mean or threatening post or image a schoolmate has put on Facebook, he can click “This post is a problem”, a new phrase chosen to replace “Report”, and go through a series of casually worded questions to determine what kind of issue he’s having and how serious it is. There’s even a grid for ranking his emotions.
Once he finishes the questions, a list of suggested actions is generated based on how pressing his complaint is. If the boy is more annoyed than fearful, he might choose to send a pre-written message to the other person saying that the post makes him uncomfortable. If he is afraid, he will be prompted to get help from a trusted friend or adult. There are links to catch anyone who may be feeling suicidal and direct them to professionals and Facebook’s own suicide chat hotline.
“We feel it is important that Facebook provide encouragement for kids to seek out their own support network,” said Robin Stern, a psychoanalyst from Columbia University who worked on the project. “The children tell us they are spending hours online… they are living their lives with Facebook on in the background.”
Kids aren’t the only ones who need a little help communicating their feelings on the Internet. Facebook looked at photos reported for removal by users of all ages, flagged for offenses like being pornographic, containing hate speech or depicting drug use. When they started to dig in, the team noticed images were frequently being flagged for more personal reasons — someone didn’t like how they looked in the photo, was embarrassed their boss could see them dancing on a table or maybe was just trying to wipe away evidence of an old romance.
Usually when a photo is reported for violating community standards, it goes to a Facebook employee who has to determine what steps to take. That adds up to a lot of requests. By expanding the options and directing people to ask the person who posted a photo to take it down, Facebook is putting its members in charge of their own issues and saving itself some resources as a bonus.
“How do you build in emotion, this ancient part of human nature, into the Facebook site?” asked Dacher Keltner, director of the Social Interaction Laboratory at Berkeley.
Keltner’s team worked with Facebook to add some feeling to the process, customizing the stock requests based on the reason for wanting the photo removed and how important it was to the offended party that it come down. The wording was made more polite, and the recipient was given pre-written answers to choose from too, opening up a dialogue between the two sides.
By researching, changing wording and tracking response rates, Facebook is also figuring out how to better engage its users. The motive was a good one, conflict resolution and helping kids, but the method could also have handy applications for delivering paid content.
“Language really matters and design really matters for this stuff,” said Jake Brill, a Facebook product manager. “The smallest change can have a really notable impact.”
The changes have been available to many Facebook users as part of a test period and are rolling out to all U.S. members this week. The team hopes to expand the program to other languages and countries but only after carefully recalibrating the wording for those cultures.
Facebook’s anti-cyberbullying push was started shortly after the Tyler Clementi suicide, though Facebook says it wasn’t inspired by a single event.