
Farcebook

"Your post didn't follow our Community Standards"

I realise I won't win any prize for originality, but one cannot possibly deny the appropriateness of the title of this article when it comes to evaluating Facebook's "Community Standards" and how they are applied in relation to nudity and sex.  For any organisation promoting or celebrating the naked human body, having a Facebook page requires extreme vigilance and a highly developed skill of walking on eggshells in order to avoid having accounts suspended (placed in Facebook Jail) or worse - having accounts closed down permanently.

[Cartoon]

In order to set the stage for this discourse, and to get a basic understanding of the problem, let's first have a look at the wording of these "Community Standards."

[Screenshots: the Facebook Community Standards header, the adult nudity section header, and its "Do not post" provisions]

Nudity equals sex

From the very outset, one thing is clear: "nudity" and "sex" are lumped together in this section as though they are synonymous.  This isn't accidental; it's typical of modern western society's mindset - to some extent here in New Zealand, but far more so in the U.S., where Mark Zuckerberg's Meta Platforms company was founded and is based.


Is this surprising?  Not at all.  The U.S.A. is the world's most prolific supplier of pornography of all kinds, including child porn.  Americans took the lead in global porn production following the Second World War, when thousands of soldiers were trained as semi-professional filmmakers.  Using smaller, portable 8mm and 16mm cameras to shoot combat footage, these troops learned how to make movies cheaply and flexibly.  Today, the U.S. produces an estimated 89% of the world's porn.

And yet, despite this, America is viewed by much of the rest of the world as a very prudish nation when it comes to nudity.  While there are no laws in the U.S. that specifically prohibit public nudity at a Federal level, most local jurisdictions make nudity in public places against the law.  Moreover, nudity is also generally illegal on a person's own property if the nude person is visible to the public - for example, through an open window or while sunbathing nude in their own yard.  While most state laws are clear about nudity around children and nudity meant to arouse, other wording is vague, and violations are often a matter of community standards for indecency.  Anyone intending to go on a clothing-optional holiday in the U.S. will need to do a thorough investigation of the law in the jurisdictions they intend to visit.  It really is a minefield of confusion!

But when you ask Americans if they consider their country to be prudish about nudity, they almost inevitably reply by saying "Oh, no.  Not at all.  Look at all the porn we produce!"  If you ask the question on sites like Quora, that's the response that's most often returned.  And so the mindset continues to be reinforced:  Nudity = pornography = sex!  Americans simply cannot distinguish those terms!

So, it should come as no surprise, then, to see nudity and sex linked as one in Facebook's Community Standards.  Note that this only applies to photographs of REAL humans.  It doesn't apply to paintings, drawings or computer-generated images of humans, no matter how realistic or close to photographic quality they are.  That's art!  More on that later!

Let's have a look at the biggest headache that confronts Facebook in its quest for a happy balance between its idea of public safety and its goal of free speech: moderation.

A machine decides!

In 2017 the company said it would hire 3,000 extra moderators, bringing its total headcount for reviewing posts to 7,500.  Now, five years later, that number has doubled to 15,000.  However, this remains a drop in the ocean for a service that now has 3 billion users sharing an aggregate of tens of billions of pieces of content daily!  Enter: Artificial Intelligence!

Artificial Intelligence (AI) algorithms are Facebook's first line of defence against content that goes against its Community Standards.  These algorithms are highly complex pieces of software designed to detect patterns of imagery and text in every piece of content that users post on the platform.  A human moderator generally becomes involved only when the algorithm is unsure whether the content contravenes the Standards, and even then only when a user disputes the violation.


But how accurate are these algorithms?  Not accurate enough, it seems!  Although the platform has invested significantly in artificial intelligence and machine learning, its content moderation is still woeful.  For example, despite Facebook having technology on its livestream feature that can detect images, audio and text that potentially violate the company's Community Standards, the Christchurch terrorist was still able to livestream his attack in New Zealand.  The company has also faced significant criticism when its automated tools have erroneously removed user content.  Not even MPs are safe from censorship, as the Danish MP Mette Gjerskov found out when Facebook decided that her photograph of the "Little Mermaid" had "too much skin and sexual undertones".

[Screenshot: Facebook's apology]

The decision was reversed, but ordinary users have found Facebook much more difficult to deal with: the company typically refuses to enter into any discussion of its rules and refuses to provide any evidence of the harm it claims to be preventing.  The idea that algorithms can intelligently judge such human complexities as when nudity may be either beneficial or harmful is a huge leap of faith on the part of the techno-utopians, as my first example will demonstrate...


This photo of me wading across the Kauaeranga River was deleted by Facebook's algorithms on 26 May 2022 within one minute of it being posted, and I was given a 4-day ban from posting or commenting.  The reason?  It supposedly went against their Community Standards on Sex and Nudity!  Obviously it doesn't, and so I submitted a disagreement.  After another 5 minutes the post had been reviewed (I suspect by a human moderator) and the photo was back up.

Although initially feeling somewhat exasperated with yet another Facebook ban, I was happy with the response time for correcting their mistake.  But the happiness was short-lived when I discovered that, although the post had been restored, my 4-day ban wasn't lifted!  I still had to do the FB jail time!  There is simply no means available to contact any form of support service in order to resolve such problems.  And Facebook's systems certainly have a number of them!  How about this one . . .

Facebook scores an "own goal"

On 18 June 2022 I was in a discussion with someone who, I felt, could benefit from being a member of Hauraki Naturally, and included a link to our website in the content of my post.  Within a few seconds the post was deleted by Facebook's A.I. and I was sentenced to a further 6 days of Facebook Jail.  Why?  Because Facebook's system generates an accompanying preview image of the page being linked to, and on this occasion that image included nudity.

[Screenshot: the notice banning the link because of its preview photo]

If you have a quick look at the home page of this website, you'll see the image that Facebook managed to grab.  How an algorithm managed to detect an "offensive" body part in that image is nothing short of miraculous.  But, be that as it may, what is most concerning is the fact that I didn't post the image - Facebook did, and it offended itself while blaming me for the offence!  There is, in fact, a way to dodge this bullet if you are the author of a post: click the X at the top right of the offending image before hitting the POST button.  However, if you are including a link in your comment on someone else's post, you're not given that option.  Beware!

As long as it's art, the kids are fine!

In countless articles easily found online, Mark Zuckerberg and his various spokespeople make a huge song and dance over Meta Platforms' commitment to keep children safe from both exploitation and viewing what they believe to be inappropriate content.  But what is appropriate for kids to see?  And what isn't?  Consider the following examples.

Some time ago I posted a photo of myself at work, feeding our newborn calves.  I was very careful, of course, to "fix" the photo with an icon to conceal the body parts that Facebook's Community Standards disallow.

[Photo: feeding the calves, with the concealing icon in place]

There was nothing remotely sexual about the photo, of course; it was simply an image of me without clothing, going about my daily activities.  But remember, to Facebook nudity equals sex, and children must be protected from witnessing adults being sexual.  Well - real adults, anyway.  And so Facebook's artificial brain somehow decided that I was naked, i.e. sexual, to the extent that it was indecent despite the strategically placed logo, and took the photo down.  But what about adults that aren't real?

A couple of weeks ago I was invited to join a Facebook group called "Nude is Normal" - a private group with more than 1,600 members.  The name of the group suggested to me that their aims would include promoting the normality of nudity to society, and helping to dispel the myth held by many that nudity is inherently sexual.  Within the first few minutes of being accepted into the group it became obvious that my assumptions were completely wrong!

[Images captioned "Not ok" and "Perfectly ok"]

The image of the girl in bondage was the first thing to draw my attention.  It had already gathered many positive comments from the group members, but not one suggesting that the image might not be helping the aims of the group.  So I dared to suggest that, if the group wanted to make nudity normal to a society that struggles to separate nudity from sex, the image might not be all that helpful.  And if nudity should be normalised across the whole of society, including children, is this image the way to achieve that?

My comment was immediately met with scorn and derision!  "You're more of a prude than Facebook!" someone barked.  "It's Art!  It's not real!" they protested, convinced that I was stupid enough not to realise that much.  "And anyway, there are no children in this group!" they continued.  And that's where their own stupidity showed itself - naively believing that private groups are completely secure, even from minors who falsify their age.  But, of course, their stupidity really lay in the fact that they couldn't see the point of my comment: that the image is counterproductive to making nudity normal.  Eventually the admins of the group stepped in and sided with their members, and I left the group.

Yes, of course the image is not a photo.  But it may as well be!  Anyone with half a brain can see what it depicts and the message it portrays.  What is really disturbing is that Meta Platforms permits this image to be posted because it doesn't go against Facebook's Community Standards, simply because it's art - not an actual photo.  If it were a photo, it would be deemed inappropriate sexual content for children to see.  But because it is "art" it's allowed - no matter how realistic the image is.

And so, after reporting the image to Facebook for review (and the review will have been carried out by a human), the feedback shown below was returned.

It's also worth noting that the image didn't qualify for either of the provisions shown with a yellow exclamation mark at the bottom of the Community Standards - the sensitive-content warning label (because it's not a real person) or the over-18 restriction (because it's not deemed an actual sex act).

[Screenshot: Facebook's response to the report]

If you think that's absurd, the lack of concern over child exploitation is even more worrying.  BBC News journalist Angus Crawford revealed on 28 October 2021 that an unnamed whistleblower had told US authorities that the company's efforts to remove child abuse material from the platform were "inadequate" and "under-resourced".  Facebook said in a statement: "We have no tolerance for this abhorrent abuse of children and use sophisticated technologies known as PhotoDNA and VideoDNA to combat it."

But in a sworn statement to the SEC, which regulates securities markets and protects investors, the whistleblower said there was no solution to illegal material at Facebook because there had not been "adequate assets devoted to the problem".  He claimed that a small team set up to develop software that could detect indecent videos of children was broken up and redeployed because the work was seen as "too complex".  Facebook doesn't know the full scale of the problem of child abuse material because it "doesn't track it".

And a constant question allegedly asked by senior managers was "what's the return on investment?"  The whistleblower told the SEC that this was a legitimate business question, "but not when it comes to public safety issues as critical as child sex abuse".


He also warned about Facebook "Groups", which were described as "facilitating harm".  The groups, many of which are only visible to members, are where "a lot of terrifying and abhorrent behaviours occur".  Paedophiles "use code words to describe the type of child, the type of sexual activity they want, and then use Facebook's encrypted Messenger service to share these codes, which change routinely".

These groups are not hard to find.  All it takes is a simple search on Facebook.  

Lara Putnam, writing in Wired, stumbled on these groups while researching an entirely different topic: disinformation being spread on social media.  She typed the numbers 10 11 12 into the search bar and was immediately confronted with a list of groups enticing children of those ages, along with the paedophiles interested in them.

The page headers were usually a cartoon or photoshopped image of a child with cute, oversized eyes and long lashes, embellished with hearts and pastels.  Within the groups, posts that made explicit references to photographed genitalia were gamified and spangled with emoticons: "See your age in this list? Type it into the replies and I'll show 'it' to you."

Again, none of the content was detectable by Facebook's algorithms because no image or text went specifically against the Community Standards.  Most often the posts were just doorways to connections; the real danger lay offstage.  "Looking for a perverted girlfriend of 11," read one post, on a purple background with heart emojis.  Replies asked for friend requests so the conversation could continue via Facebook Messenger or WhatsApp chats, which are encrypted beyond the eyes of the platforms' artificial bots.


Lara discovered one more alarming problem.  The AI-driven algorithms were actually working to expand, not reduce, child endangerment.  She soon noticed that new child sexualisation groups were being recommended to her as "Groups You May Like."  Each new group recommended to her had the same mix of cartoon-filled come-ons, emotional grooming, and gamified invites to share sexual material as the groups she had initially reported to Facebook, to no avail.

[Screenshots: the recommended groups]

Facebook offers no ability to debate or discuss a concern.  Once a "support message" on a complaint is received from its moderators, no further recourse is offered.  The only means left is a long and winding road via official agencies, and nobody is going to embark on that mission for every instance of possible child predation they find on the platform.  Facebook is neither proactively set up to prevent harm nor consistent in acting when harm is flagged.  That demonstrates more than anything else just how the balance between engineering for protection and engineering for expansion is working in practice, and it should make us very afraid!

Why?  Because Facebook knows that its audience is mostly made up of older users these days, and Mr Zuckerberg realises that the younger set have been departing Facebook in droves in recent years - preferring more hip platforms such as TikTok, Twitter and Snapchat.  Facebook has tried to buy Snapchat on multiple occasions, but Snapchat's founders have not been willing to sell.  And Snapchat wasn't the only social media competitor Facebook has tried to buy: Twitter, too, has been in its sights.  Both platforms turned Facebook down.

In his desperation to attract more young users, Mark Zuckerberg has built his vision for Meta as a virtual reality emporium around the enticing nature of multiplayer games.  The ease with which gamification pulls children in is on sickening display in these groups.  Given Lara Putnam's experience, how is Meta ever going to ensure these groups are safe for kids?  With the interlocking complexity of mutable algorithms and stacked internal policy choices that determine how platforms actually work, effective external regulation seems pretty much a pipe dream!

Rok

8 July 2022
