Why does the European Union have its head on straight about reining in big-tech abuses while the US continues to dither?
Although most of us never read the endless expanses of fine print in social-media-user “terms of service” agreements, the European Union’s (EU) “big-tech” regulators do.
These agreements, and the corporate abuses they accommodate, can have far-reaching negative effects on societies—such as the social unrest that culminated in the lethal, riotous assault on the US Capitol on Jan. 6, and that has turbocharged fascist movements in other nations. More on this later.
What the EU found is that social-media titans, notably Meta (the parent company of Facebook and Instagram), have for years illegally forced users of their websites—through mandated acceptance of these fine-print user “contracts”—to allow Meta to target them with ads based on their tracked private on-site behavior.
This targeting practice, which earned Meta $118 billion in 2022, “effectively means users must allow their data to be used for personalized ads or stop using Meta’s social media services altogether,” the New York Times explained in a news article this week. It added,
E.U. authorities determined that placing the legal consent within the terms of service essentially forced users to accept personalized ads, violating the European law known as the General Data Protection Regulation, or G.D.P.R. … If a large number of users choose not to share their data, it would cut off one of the most valuable parts of Meta’s business.
The EU decision included a fine of 390 million euros ($414 million).
Meta has three months to show the EU how it will comply.
The Times added,
The ruling is one of the most consequential judgments since the 27-nation bloc, home to roughly 450 million people, enacted a landmark data-privacy law aimed at restricting the ability of Facebook and other companies from collecting information about users without their prior consent. The law took effect in 2018.
Although the United States has no federal data privacy law (a few states, like California, do), the Times points out that any policy changes Meta makes related to the new ruling could also impact users in the US.
“Many tech companies apply E.U. rules globally because that is easier to put in effect than limiting them to Europe,” the Times reported.
Although it took four years for the EU to begin forcefully enforcing the law against big-tech transgressions of customer rights, it is now surging ahead with prosecutions.
Along with this most recent “behavioral advertising” case, EU regulators, particularly in Ireland, where Meta’s European headquarters is located, have issued edicts against Meta and its subsidiaries, including WhatsApp, totaling $900 million, the Wall Street Journal reported. Infractions included the leak of half a billion users’ private information to so-called “data scrapers,” Instagram’s questionable handling of children’s data, and WhatsApp’s lack of transparency in how it processes user information.
Of particular note was the recent EU decision against Meta over how it, in effect, “forces” users to accept its use of data revealing their private behavior on its sites—both to direct personalized advertising at them and to feed them customized information, often mis- and disinformation, that meshes with their predominant search and viewing choices.
A CNN article this week warned,
January 6 was a clear turning point for major social media companies—proving that they would, under certain circumstances, be willing to deplatform a sitting US president. But some experts worry that they still haven’t done enough to address the underlying issues that allowed Trump supporters and others on the far-right to be misled and radicalized, and to get organized using their platforms.
In the meantime, Facebook, Twitter, and other influential social-media platforms in the US can still turbocharge ideological emotions, particularly among far-right Americans, by purposefully directing information and disinformation at them—their online habits reveal what excites and enrages them—thereby sharply exacerbating existing and latent prejudices and animosities.
A recent article in Forbes magazine surmised that social media could be viewed as at least partly responsible for the Jan. 6 Capitol riots because those platforms were used as tools to communicate lies and deceits—“and the various networks did little to stop it.”
William V. Pelfrey, Jr., Ph.D., a professor in the Wilder School of Government and Public Affairs at Virginia Commonwealth University, explained in a recent email to Forbes:
Social media companies must know that one, actions have consequences; and two scale matters. A person with 40 followers is very different from a person with a million followers. Review and regulation efforts should be concordant with the possible implications of the post and the history of the person posting. Social media companies have an ethical responsibility to review the posts of persons with a problematic history and block, or quickly remove, dangerous posts. January 6 should have taught the leaders of social media organizations that actions have consequences. Conversely, failing to act—or remove/block a post/tweet— also has consequences. Continued abrogation of ethical responsibilities to protect the public will likely lead to government regulation.
Which is exactly what the EU’s big-tech regulators are doing.
It’s by now crystal clear that largely unregulated social media and lack of common-sense big-tech rules and enforcement are dangerous to human beings and societies.
What’s the holdup for similar enforceable regulation in the US?