Duopoly, Personal Data and the Tech Giants That Sell Your Secrets

Amnesty International has released a damning report into surveillance-based business models and how this new field of data-driven commerce is increasingly impacting human rights. Largely pointing the finger at the usual suspects, such as Facebook and Google, the report is a stark look at the companies that have rapidly come to dominate the personal data market at the expense of privacy.

The notion of a ‘surveillance economy’ is a relatively new concept, and it is only in recent years that our infrastructure and technology have advanced enough to facilitate this level of data collection. Personal data, which allows a select few companies to generate billions of pounds in advertising revenue, is now considered a raw material, not unlike oil.

In the last several weeks, we have seen platforms take varying, and often controversial, stances on political advertising policies. This topic is one of the main battlegrounds for those concerned about the level of control a very small number of companies command over our political and social discourse, and just one of the areas of concern illuminated in the Amnesty report.

How We Arrived Here

Both Google’s and Facebook’s revenues are almost exclusively ad-based – 84% and 98% respectively. Both companies rely on user input: vast amounts of information, access to which is sold to advertisers and marketers the world over in the hope of pinpointing valuable customers. If you’re searching for a specific car manufacturer online, you can be sure the platform is charging that company a premium to send you its message.

“Google and Facebook dominate our modern lives – amassing unparalleled power over the digital world by harvesting and monetizing the personal data of billions of people”

Kumi Naidoo, Secretary General of Amnesty International

Anyone surprised by the business models of Facebook and its peers at this point is perhaps a few years behind the times. It must be noted, though, that it has not always been this way. In the early days of Silicon Valley, these huge databases of information were not looked upon as the commodity they are now understood to be; they were considered merely a by-product.

Despite this underappreciation of their assets in bygone times, these platforms understood that internet users were not especially tolerant of intrusions into privacy and, given the chance, would move to less Orwellian competitors; as a result, the platforms were far less focused on data extraction.

Google and Facebook both understood that in order to have a viable product, what they needed more than anything was users. Lots of them. Users are the fountainheads from which data springs, and only once you have accumulated a large enough user base can you fiddle with your terms of service enough to begin siphoning off your greatest asset: information.

Big Data

Though most users realise that, as the now common heuristic goes, ‘if the service is free, you’re the product’, the extent to which businesses keep tabs on their users continues to shock. A recent study showed that an undisturbed Android phone sent around 900 data points to Google’s servers in a 24-hour period, roughly 35% of which related to location.

Testing the frequency with which different devices communicated with the company’s servers, the same study found that the Google-owned Android device did so ten times more frequently than the comparable iPhone. These practices don’t, at this point, seem all too uncharacteristic of a company that trades in some of our most intimate thoughts, feelings and data.

Facebook has also made a similar fortune from convincing users to hand over their information in exchange for access to ‘free’ products, including Facebook’s ‘Free Basics’ internet service. The Amnesty report goes as far as to state of this duopoly: “[it] is now effectively impossible to engage with the internet without “consenting” to their surveillance-based business model.”

“We don’t exactly have the strongest reputation on privacy right now, to put it lightly”

Mark Zuckerberg, CEO of Facebook, F8 Conference 2019

Since March 2018, these practices have drawn widespread criticism from users and governments alike, which, alongside the introduction of GDPR, has forced tech platforms (at least to some extent) to engage in a dialogue about their duties, responsibilities and ethical practices. Whether they are committed to genuine change, however, remains to be seen.

The End of The Beginning

Despite proclamations that “the future is private”, it’s hard to believe that tech platforms are likely to turn their backs on an ad-based revenue stream any time soon. What’s more is the concern that these technologies are being ever more leaned upon by governments to gather intelligence on their citizens and even sow political unrest.

Across the world, campaigns of disinformation have been waged on tech platforms, with consequences as disastrous as genocide. It’s unlikely that the platforms’ creators could have anticipated how their innovations were destined to change the world.

Although NSA-style data-gathering practices have long been known, new forms of social manipulation continue to be exposed, and platforms must persist in grappling with the monsters they have created. But to what degree are these problems fixable?

Considering that engagement and ‘time on platform’ are the main metrics by which platform health and profitability are judged, the drivers underlying those metrics give us some indication of the consequences of seeking to maximise engagement.

Speaking in 2018, Mark Zuckerberg acknowledged that his platform invariably incentivises content which pushes the limits. Be it violent, sexual or plain old divisive, Facebook’s research “suggests that no matter where [they] draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average.”

Human Rights Implications

It can admittedly seem sensational to talk about Facebook and human rights in the same sentence, but as many are quickly coming to realise – concerns are undoubtedly well founded. Article 19 of the UN’s International Covenant on Civil and Political Rights states that “everyone shall have the right to hold opinions without interference”.

It is a right that is jeopardised, some have argued, in a context where “rather than individuals being exposed to parity and diversity in political messaging, for example, the deployment of microtargeting through social media platforms is creating a curated worldview inhospitable to pluralistic political discourse.”

As pressure mounts from governments and fines are levied, there is also a growing expectation that platforms should do more to combat issues such as misinformation, hate speech and generally disagreeable content. This expectation forces platforms to moderate user-generated content and to lean on algorithms to aid in policing – a process which, it is not unreasonable to anticipate, may itself end up “curating a worldview antithetical to pluralistic discourse”.

Like these companies, we are at an impasse, forced to balance our needs, objectives and desired outcomes – all on relatively new social terrain.

An Individual Trade-Off

When considering both the trajectory and implications of the growing surveillance-based economy, we must consider how we as individuals wish to engage with it. We must ask ourselves whether we are happy to trade our privacy, however we personally define that, for the convenience and the services that are increasingly ubiquitous and necessary to our modern lives.

Those who feel a moral pang when acknowledging the role they play in ‘feeding the beast’ are, in some sense, stuck between a rock and a hard place. We can recognise that allowing tech companies unfettered access to various layers of our consciousness may lead down a dark road, while also knowing that without these tools our ability to access information and resources is severely hindered. It is a conundrum and a Faustian pact, the consequences of which are unknown.

“Facebook’s business model to date is based on a blatant disregard for people’s personal data and privacy”

Damian Collins, Chair of Digital, Culture, Media and Sports Committee, March 2019

Big tech, big data and predictive algorithms allow us to find and access information in ways unimaginable only a few decades ago. Likewise, these platforms enable us to stay connected to those we care about, teach ourselves new skills and perform all manner of tasks more efficiently, and they enable businesses across the globe to grow.

Yet, at the same time, these systems are collecting and aggregating data about us at every opportunity, often without our direct consent and commonly in ways that are at odds with international standards of human rights. They are also fundamentally changing how we communicate in unforeseen and potentially damaging ways and perhaps most worryingly of all, they are being utilised to affect democratic processes in ways that are scarcely understood.

Data privacy, information security and personal sovereignty are ideas that increasingly intersect in mainstream public conversation – a discussion as difficult to navigate as it is necessary to participate in.

To solve these issues, legislators obviously need to make a greater effort to keep abreast of the evolving tech landscape, and though efforts in this direction are certainly mounting (GDPR among them), the most vital piece of this puzzle is for individuals to understand how this new style of company turns their secrets, interests and profiles into profit – and to decide: is the trade worth it?