Molly Russell: The suicide of a British girl triggers the debate on the responsibility of social networks
Molly Russell, 14, was found dead in her bedroom on the morning of November 21, 2017, in Harrow, north-west London. She had taken her own life. Her family had never noticed any strange behavior in her, beyond the fact that during her last year she spent more time shut away in her room. They put it down to the changes of adolescence. But when her father, Ian Russell, checked Molly’s email for any possible explanation for the tragedy, he came across a two-week-old message from Pinterest titled “Depression Pins You Might Like.” He kept investigating and found that, during the six months prior to her death, the young woman had shared or reacted to more than 2,000 Instagram posts related to suicide, self-harm or depression.
Five years later, Instagram and Pinterest have been called to account by the British authorities. Elizabeth Lagone, head of health and wellbeing policy at Meta, Instagram’s parent company, and Jud Hoffman, global head of community operations at Pinterest, testified in early October before a British court. It is the first time that two technology companies have taken part in legal proceedings related to the suicide of a user.
“[Molly Russell] died of self-inflicted injuries while suffering from depression and the negative effects of online content,” said Andrew Walker, lawyer and senior coroner for the northern district of London. In the United Kingdom, this official has the power to open independent inquests to determine the causes of a person’s death. Walker did not record the death as a suicide: the jurist found that the internet had “affected her mental health [referring to Russell] negatively and contributed to her death.”
The companies face no fines or penalties. They were summoned not to a criminal or civil trial but to coroner’s hearings. Still, a debate has opened about their shared responsibility in certain cases of suicide, something that had not happened before. “Our thoughts are with Molly’s family and with other families who have been affected by suicide or self-harm,” Hoffman tells EL PAÍS by email. “Molly’s story has meant a significant change for us and we will continue working to create a safe and positive place for our pinners [Pinterest users].”
Frances Haugen pointed the way a year ago. The former Meta employee plunged the company into its worst existential crisis by leaking internal documents that fueled a sweeping journalistic investigation by The Wall Street Journal. Among the many revelations the engineer provided, one had a special impact: Instagram executives knowingly served toxic content to young people because it was more addictive and monetized better, to the point that an internal presentation showed that 13% of British girls and 6% of American girls who said they had had suicidal thoughts had developed them through the social network.
Data from the Pew Research Center show that lawsuits are multiplying in the US from parents who believe that social network algorithms cause physical harm to their children. So far this year, more than 70 lawsuits have been filed against Meta, Snap (owner of Snapchat), ByteDance (parent of TikTok) and Google for allegedly causing anxiety, depression, eating disorders or lack of sleep in adolescents and young adults through their addiction to social networks. According to Bloomberg Businessweek, at least seven of these cases were brought by parents whose children took their own lives.
Janet Majewski, whose 14-year-old daughter took her own life, sued TikTok, Snapchat and Meta in August, alleging that these social networks were responsible for setting the young woman on a path of no return. “They have to change what they show the children, modify the algorithm so that they don’t lead them into the dark,” she told Bloomberg Businessweek.
The lawsuits facing social networks ask them to take responsibility for the harmful effects of their products, just as happened 30 years ago with the tobacco companies. “The technology companies believe that this is not their problem. It is not in their business culture to really fight against the spread of content that can encourage suicide,” says Albert Gimeno, spokesperson for the Padres 2.0 association, which specializes in cyberbullying, technological addictions and digital violence, among other issues. “The measures they have put in place and the teams they have created to remove harmful content not only have to deal with a huge volume of information to review, but also with other departments of the companies themselves that pull in the opposite direction, such as marketing, advertising, sales or communication,” he adds.
Social networks are enormously influential in the lives of young people. “Adolescents with certain personality traits and emotional vulnerabilities come together in an environment where they can show pain, desperation and disconnection from traditional channels of contact,” explains the psychologist and psychotherapist Luis Fernando López, co-director of the ISNISS Project and technical coordinator of the Let’s Talk About Suicide program of the Official College of Psychologists of Madrid. “These profiles meet on social networks because they feel accompanied on the issues that concern them, they keep a certain anonymity and they feel they belong to a group, with the security of not being judged or rejected. They start with public communications and then move on to private settings, where behaviors such as self-harm or suicidal thoughts begin to develop,” he describes. In Spain, the number of child suicides has tripled since 2006.
Gimeno is not aware of lawsuits against social networks proliferating in Spain as they are in the US. Nor does he believe they would get very far, much less solve the problem. “The parents themselves, the Administration, the schools, the rest of internet users and other technology and communication companies also have a role to play,” he explains.
Algorithms and manual supervision
Every minute, 2.4 million images are shared on Snapchat, 1.7 million posts on Facebook and 66,000 photos on Instagram, according to the consultancy Domo. The technological approach to sifting through all this information combines automated and human means. “Pinterest’s current policy on self-harm provides a detailed list of content for removal and limitation of distribution, with over 25,000 terms on the block list,” Hoffman notes. “When content violates our policies, we take action on it through human and automated processes. If a user searches for content related to suicide or self-harm, they should not be shown results and should instead see a notice directing them to experts who can help if they are struggling.”
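The mechanism Hoffman describes, a block list that suppresses results for self-harm queries and surfaces a help notice instead, can be pictured with a minimal sketch. The Python code below is a hypothetical illustration, not Pinterest’s implementation: the term list, function name and notice text are invented, and the real block list runs to more than 25,000 terms.

```python
# Hypothetical sketch of a blocklist-based search filter; not Pinterest's code.

# Tiny stand-in for the 25,000-term block list mentioned in the article.
SELF_HARM_BLOCKLIST = {"self-harm", "suicide", "depression pins"}

HELP_NOTICE = "You are not alone. If you are struggling, these organisations can help: ..."

def search(query: str, index: dict) -> dict:
    """Return matching results, or no results plus a help notice
    when the query touches a blocked self-harm term."""
    normalised = query.lower().strip()
    if any(term in normalised for term in SELF_HARM_BLOCKLIST):
        # Blocked query: show no content, only a notice pointing to experts.
        return {"results": [], "notice": HELP_NOTICE}
    return {"results": index.get(normalised, []), "notice": None}

# A blocked query returns an empty result set and the notice.
print(search("suicide", {"cats": ["pin_1", "pin_2"]}))
```

A real system would pair this kind of query-time filtering with the “human and automated processes” the quote refers to, which act on content that is uploaded rather than searched for.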
Instagram’s approach is more flexible. On the one hand, the platform provides parental control tools over the content adolescents see and bans posts that promote suicide or self-harm. “We found and removed 98% of that content before we were told about it,” says a Meta spokesperson. On the other hand, the company allows people to talk about their own feelings and share content that deals with suicide, as long as it does not promote it.
This blended approach, combining automated detection of problematic material with human content moderation, dominates the industry. TikTok, for example, publishes quarterly reports on compliance with its standards. The most recent one, covering April to July of this year, shows that 113.8 million videos were removed, around 1% of all videos published. “Of these, 6.1% were removed for breaching policies related to suicide and dangerous challenges,” say sources at ByteDance, owner of the social network.
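Taken at face value, those two percentages let the report’s figures be translated into rough orders of magnitude. The back-of-the-envelope calculation below is an illustration of scale, not data published by TikTok:

```python
# Rough arithmetic from the figures cited in the article; illustrative only.
removed_videos = 113_800_000      # videos removed in the quarter, per the report
removed_share_of_total = 0.01     # removals are roughly 1% of all videos posted
suicide_policy_share = 0.061      # 6.1% of removals: suicide/dangerous-challenge policies

total_videos_posted = removed_videos / removed_share_of_total      # ~11.4 billion
suicide_policy_removals = removed_videos * suicide_policy_share    # ~6.9 million

print(f"Implied videos posted in the quarter: ~{total_videos_posted:,.0f}")
print(f"Removals under suicide-related policies: ~{suicide_policy_removals:,.0f}")
```

In other words, on the order of 11 billion videos posted in a single quarter, of which some 6.9 million were removed under those policies, gives a sense of the volume moderation teams face.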
A more porous measure, since it can be falsified, but one that technology companies take seriously, is the minimum access age. Facebook, TikTok, Instagram, Pinterest and Snapchat do not accept children under 13; YouTube requires users to be 14. Google blocks certain searches and shows helplines to those looking for content related to self-harm or suicide. So does TikTok, which is beginning to replace Google as the preferred search engine among the youngest users.