On Monday, 2 May 2016, the Facebook page of a citizen journalist titled ‘Fahroong Srikhao [2]’ published an interview from jail with Harit Mahaton, one of the eight junta critics abducted by the military on 27 April. Harit said that the authorities showed him a capture of his Facebook chat and asked him who he was chatting with.
Harit added that he wants Facebook Thailand to investigate the case and warned the public that chatting via the Facebook inbox is no longer safe and private.
-- from Prachatai: "Facebook chat is no longer safe: abducted junta critic"
Facebook privacy?
The news here is that some unfortunate person, living in a land where the military took power by force and continues to rule, is now imprisoned and surprised that a Facebook chat is being used as evidence against him. Let's be clear: there is no "justice" under military rule - not in the sense that we apply to the term under civil rule. So the military does not need the aid of Facebook to imprison. But, apparently, there was a belief that Facebook offered some form of sanctuary from the totalitarian gaze of military rule (and perhaps even that Facebook Thailand has some sort of human rights "watchdog" status): score one for successful corporate propaganda. Yet, for years now, Facebook has been well established as a tool of state security, well-deserving of the nickname Snitchbook.
On Facebook's Terms: We Know Who You Are
You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity.
– Facebook CEO Mark Zuckerberg, 2009
If this fatwa on "integrity" from His High Holiness sounds a tad pretentious to you, just relax. Return to it after reading the next section and you'll laugh your ass off.
Facebook's terms state:
Facebook users provide their real names and information...
You will not provide any false personal information on Facebook, or create an account for anyone other than yourself without permission.
You will not create more than one personal account.
In short, Facebook does not permit the privacy and protection of anonymous or pseudonymous logins.
Facebook defends this policy, claiming “It’s part of what made Facebook special in the first place...By differentiating the service from the rest of the internet where pseudonymity, anonymity, or often random names were the social norm.” Facebook also argues that the policy protects people from anonymous attackers: “This policy, on balance, and when applied carefully, is a very powerful force for good”.
Of course, in reality, Facebook has no automatic mechanism preventing the use of a pseudonym: proof of identity is only required if a complaint is filed. So people who comply with Facebook rules and provide their legal identity are completely exposed to the threat of attackers - who, by definition, do not play by the rules and thus will likely remain anonymous. The way to protect oneself from such attackers is obvious: don't comply with Facebook's identity restriction.
Obviously, a more realistic explanation for Facebook's identity policy lies in the nature of its business: the Facebook community is the Facebook product. Pimping your data is the company's bread and butter - and it has been quite profitable.
On Facebook's Terms: We Sell Who You Are
Facebook's Data Policy provides the long list of johns that your data is passed around to, including:
- Apps, websites and third-party integrations on or using Facebook services
- The family of companies that are part of Facebook
- Advertising, measurement and analytics services (policy notes that only non-personally identifiable information is shared)
- Vendors, service providers and other partners (policy places no restriction on the type of data shared)
In 2007, Facebook launched Beacon, an online tracking program that forced Facebook members to unknowingly and unwillingly endorse products. Beacon sent messages to a member's friends about what that member was buying on websites. Advertisers then ran ads next to these purchase messages. Facebook trumpeted the ads as being like a “recommendation from a trusted friend”.
But, in fact, the "trusted friend" never agreed to make any such "recommendation". An opt-out box would appear on the website for only a few seconds, and members complained that it was hard to find. And, initially, Facebook refused to allow members to completely opt out of the program. Two weeks after the launch of Beacon, an online petition against it was created; within 10 days, more than 50,000 Facebook members had signed. Still, Facebook insisted: “Whenever we innovate and create great new experiences and new features, if they are not well understood at the outset... After a while, they [members] fall in love with them.” Meanwhile, security researchers found that Beacon transmitted data even when the member was logged out of Facebook.
Less than a week later, Facebook retreated and announced that it would allow people to opt out. Eight months later, a class action lawsuit was filed against Facebook and its Beacon affiliates. The suit was settled in 2009, a year after filing, and forced Facebook to shut down Beacon. After Beacon, Facebook learned to be more covert in pimping data. Indeed, pimpin' ain't easy.
In 2008, representatives of the Canadian Internet Policy and Public Interest Clinic (CIPPIC) filed a complaint against Facebook Inc. on topics ranging from the collection of date of birth at registration to the sharing of members’ personal information with third-party application developers. In a report of findings, the Assistant Privacy Commissioner of Canada noted that, with regard to default privacy settings and advertising, Facebook was in contravention of the Personal Information Protection and Electronic Documents Act, but concluded that the allegations were "resolved on the basis of corrective measures proposed by Facebook in response to her recommendations... On the remaining subjects of third-party applications, account deactivation and deletion, accounts of deceased users, and non-users’ personal information, the Assistant Commissioner likewise found Facebook to be in contravention of the Act and concluded that the allegations were well-founded. In these four cases, there remain unresolved issues where Facebook has not yet [2009] agreed to adopt her recommendations. Most notably, regarding third-party applications, the Assistant Commissioner determined that Facebook did not have adequate safeguards in place to prevent unauthorized access by application developers to users’ personal information, and furthermore was not doing enough to ensure that meaningful consent was obtained from individuals for the disclosure of their personal information to application developers."
In 2009, Facebook pulled off a classic bait-and-switch: after winning new members by promising a more private alternative to the wider Web and to sites like MySpace - members could restrict their information to a limited audience using privacy settings - Facebook revised members’ privacy settings to publicly disclose personal information that was previously restricted. The new privacy settings also disclosed to third-party application developers personal information that was previously not available to them. As a result, 15 privacy and consumer protection organizations filed a complaint with the US Federal Trade Commission (FTC), charging that Facebook had engaged in unfair and deceptive trade practices in violation of consumer protection law. Roughly two years later, Facebook agreed to settle the FTC's eight-count complaint, which charged that Facebook's claims were indeed unfair and deceptive and violated federal law. As part of the settlement, Facebook was required to obtain independent, third-party audits every two years for 20 years, certifying that it has a privacy program in place that meets or exceeds the requirements of the FTC order.
So now whenever you see Facebook touting its privacy controls, keep in mind that you are seeing a fraudster tout punishment for past crimes.
This is an appropriate moment to surface from the sleaze for a bit of comic relief:
Having two identities for yourself is an example of a lack of integrity.
– Facebook CEO Mark Zuckerberg, 2009
The punchline is: "lack of integrity".
Facebook and Governments: Pimp Snitchin'
The Data Policy provides a simple description of Facebook's handling of your data in the event of government requests:
We may access, preserve and share your information in response to a legal request (like a search warrant, court order or subpoena) if we have a good faith belief that the law requires us to do so.
This policy must be understood in light of the facts outlined so far:
- Facebook is a business, not a human rights organization. It is not concerned with privacy (its CEO frequently preaches that privacy is a dead notion from another era). Its focus is on maximizing the monetization of customer data - a.k.a. data pimping.
- Facebook has a history of shady business practices that have run afoul of US, Canadian, and EU law, among others. Facebook is also one of the corporate giants at the center of the ongoing debate on legal corporate tax avoidance vs illegal tax evasion.
Fact #1 makes it clear that Facebook has no ideological interest in resisting government requests for data. Moreover, Fact #2 suggests that collaborating with governments offers Facebook an opportunity to earn goodwill to offset its other troubles - i.e., to rat out its customers to save its own hide.
State security agencies worldwide have benefitted from the global epidemic of addiction to Internet sharing. With the help of years of confusion about Facebook's privacy controls, along with Facebook's constant push to force its members to share more data, surveillance has become simpler. Authorities no longer need to get a court to sanction wiretapping when communication can easily be monitored on Facebook.
Beyond passive surveillance and requests for the data of individuals, the US National Security Agency (NSA) and its global partners also enlist the help of Facebook (along with Google, Apple, Microsoft, and others) in the global mass surveillance program called PRISM, according to documents leaked by ex-NSA contractor Edward Snowden in 2013. The PRISM program, launched in 2007, collects targeted stored Internet communications from Internet companies - including chat (voice, video), email, file transfers, photos, videos, and social networking details. The extent of the program, the nature of the targeting, and the storage life of collected data are all withheld as state secrets. Although companies are legally required to provide the government the data it requests, they are not required to build systems that make this easier for the government. In this way, the various Internet companies of PRISM can be distinguished: Twitter declined to make it easier, but Facebook built a system dedicated to providing the government access to requested data.
But in 2012 Facebook revealed that it had gone even beyond this passive role and put its technology to work to proactively snoop and snitch, using monitoring software to scan chats and posts for criminal activity. If the software detects "suspicious behavior", it automatically flags the content for Facebook staff to determine whether further steps, such as informing the police, are required. Law enforcement officials praise Facebook for speedy action. And, predictably, publicity for the program uses the all-purpose bogeyman, the child predator (where would state security be without him?), to win public approval. No details of the program are revealed, so there is no way of determining its definition of "suspicious behavior" or "criminal activity". Would it flag chats of Facebook execs discussing new deceptive claims, or planning tax evasion? Or is the Thailand Facebook monitor programmed to detect content that can be used in a lèse majesté case?
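To make the general mechanics concrete, here is a minimal, purely hypothetical sketch (in Python) of how automated chat flagging of this kind can work: a scanner matches each message against an opaque, operator-defined watch list and queues hits for human review. The watch list, the `scan_message` and `flag_for_review` names, and the example phrases are all invented for illustration; Facebook has never published the actual criteria or implementation of its system.

```python
# Hypothetical illustration only: a naive keyword-based chat scanner.
# The watch list and review handler are invented for this sketch;
# nothing here reflects Facebook's actual (unpublished) implementation.

from dataclasses import dataclass, field
from typing import List, Optional

# An opaque, operator-defined watch list -- the member never sees it.
WATCH_LIST = {"meet me alone", "don't tell your parents", "send photos"}

@dataclass
class Flag:
    user_id: str
    message: str
    matched_terms: List[str] = field(default_factory=list)

def scan_message(user_id: str, message: str) -> Optional[Flag]:
    """Return a Flag if the message contains any watch-list phrase, else None."""
    text = message.lower()
    hits = [term for term in WATCH_LIST if term in text]
    return Flag(user_id, message, hits) if hits else None

def flag_for_review(flag: Flag) -> None:
    # In a real system this would enqueue the item for human moderators,
    # who then decide whether to escalate it (e.g. to the police).
    print(f"REVIEW QUEUE: user={flag.user_id} matched={flag.matched_terms}")

if __name__ == "__main__":
    chats = [("u1", "See you at practice"), ("u2", "Meet me alone after school")]
    for uid, msg in chats:
        flag = scan_message(uid, msg)
        if flag:
            flag_for_review(flag)
```

Even this toy version shows the member's side of the problem: the criteria are invisible, the matching happens silently on private messages, and the "review" step is entirely at the operator's discretion.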