A 'dangerous situation' at Facebook could get 'much worse' if we don't take 'immediate steps'

Social media platforms that can be used as weapons by malicious parties need to be regulated. As we process the details unfolding around what truly happened at Cambridge Analytica and Facebook, it becomes increasingly apparent that we have allowed a dangerous situation to develop, and that things could get much worse unless we take immediate steps to address the larger risks associated with social media platforms.

Recent events require a careful examination of the risks social media platforms pose to individuals and society, and a plan for dealing with those risks. The statement by the head of Cambridge Analytica that "we just put information into the bloodstream of the Internet, and then watch it grow, give it a little push every now and then" is a chilling but accurate metaphor for a new kind of "digital infection" to which we are susceptible without adequate mechanisms for protection.

We need a plan to guard against platforms and personal data being misused by malicious parties. Finance provides a good roadmap, which I outline below.

Back in December, I wrote an editorial arguing that social media platforms should be subject to "Know Your Customer" (KYC) laws like those in finance. The financial services industry provides an appropriate lens for regulating social media platforms.

In finance, the objectives are well defined: protecting investors, preventing market manipulation, knowing who is paying you and why, and protecting data. Compliance is verifiable after the fact, as long as regulators have access to the right data and sufficient time for analysis.

While KYC starts to follow the money, it doesn't address the deeper problem of the legitimate but malicious use of these platforms. For example, it doesn't address the perfectly legal creation of millions of fake accounts and troll networks that can be activated at will with the sole objective of deception.

The science demonstrates that influence at scale is possible, and emerging data are revealing how these platforms were used before the last US presidential election. What is to stop them from being used more broadly for attacks in the future?

Not surprisingly, the platforms themselves are best equipped, in terms of data and capability, to ferret out such attempts, but they lack the appropriate incentives. Despite lofty vision statements about creating better societies, we should have no illusions: their primary obligation is to their shareholders.

While fake news itself is legal, given the influence social media platforms wield, a systematic pattern of neglect on their part should not be. So how do we strike the right balance between preserving the freedom of expression of individual users and protecting ourselves from malicious parties?

In finance, it works as follows. It is assumed that compliance in the areas of concern, such as market manipulation and customer treatment, can be ascertained, given sufficient time, through an analysis of the relevant data. Material non-compliance is determined post facto for randomly selected entities.

An occasional error, such as a misallocated trade, isn't considered "material," whereas a systematic pattern of misallocation is, and appropriate action is taken. Social media platforms have the data. They also have the technology for detecting suspicious activity, and they are investing heavily in beefing it up in light of the embarrassing facts emerging by the day. All we need is an appropriate incentive and verification mechanism.
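To make the materiality idea concrete, here is a minimal sketch, in Python, of how an auditor might distinguish occasional errors from a systematic pattern in a random sample. It assumes a tolerable error rate and a significance level have been negotiated in advance; the names and numbers (materiality_rate, alpha, the 1% rate) are hypothetical illustrations, not any regulator's actual procedure.

```python
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    errors in n audited items if the true error rate were only p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def is_material(errors_found: int, sample_size: int,
                materiality_rate: float = 0.01, alpha: float = 0.05) -> bool:
    """Flag non-compliance as material when the observed errors are
    statistically inconsistent with the agreed tolerable error rate."""
    return binomial_tail(errors_found, sample_size, materiality_rate) < alpha

# 3 misallocated trades in 1,000 audited items: plausibly occasional error.
print(is_material(3, 1000))   # False
# 25 misallocated trades in 1,000 audited items: a systematic pattern.
print(is_material(25, 1000))  # True
```

Note the design choice this inherits from finance: the test runs post facto on a random sample, so the regulator never needs real-time access to the platform's internal systems, only to the audited data.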

The solution isn't difficult in principle. Research has shown that it is possible to identify fake accounts and suspicious activity with reasonable accuracy. The errors in this exercise are false positives (flagging legitimate accounts or activity as suspicious) and false negatives (failing to detect malicious activity).

The important step would be to negotiate acceptable rates of false positives and false negatives in light of their relative costs, and to estimate these rates using rigorous, state-of-the-art testing methods from data science.
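As an illustration of that negotiation, the sketch below picks the flagging threshold for a hypothetical fake-account detector that minimizes total cost on labelled audit data. The scores, labels, and cost ratio (cost_fp, cost_fn) are invented for the example; in practice the relative costs would be set through exactly the kind of negotiation described above.

```python
import numpy as np

def best_threshold(scores, labels, cost_fp=1.0, cost_fn=10.0):
    """Scan candidate thresholds and return the one with the lowest total
    cost. labels: 1 = known malicious, 0 = legitimate."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best_t, best_cost = 0.5, float("inf")
    for t in np.linspace(0.0, 1.0, 101):
        flagged = scores >= t
        false_pos = np.sum(flagged & (labels == 0))   # legitimate but flagged
        false_neg = np.sum(~flagged & (labels == 1))  # malicious but missed
        cost = cost_fp * false_pos + cost_fn * false_neg
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# Toy audit set: scores from a hypothetical fake-account detector, where
# malicious accounts tend to score higher than legitimate ones.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)                     # ground truth from audit
scores = np.clip(rng.normal(0.3 + 0.4 * labels, 0.15), 0.0, 1.0)
threshold, cost = best_threshold(scores, labels)
print(f"chosen threshold: {threshold:.2f}, total cost on audit set: {cost:.0f}")
```

Because missed malicious accounts are costed ten times higher than wrongly flagged ones in this toy setting, the chosen threshold shifts downward, accepting more false positives to catch more attacks; changing the cost ratio moves the threshold accordingly.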

The analogs to market manipulation would be fake news, suspicious accounts, and episodes of malicious activity. As in finance, occasional errors would not be material, whereas a consistent pattern of negligence would be, providing sufficient cause for penalty.

An accompanying benefit of determining materiality would be enhanced transparency about threats and the measures taken to combat them. Even though the definitions of the things of interest, such as "market manipulation," might change over time with technological advances, it is relatively easy to ascertain compliance post facto.

While social manipulation is harder to define a priori than market manipulation, it isn't impossible. If social media platforms periodically shared their analyses with regulators, it would go a long way toward making them more accountable without endangering the First Amendment.

The alternative, trusting these platforms to fix the problem without any oversight, would be akin to trusting financial institutions in 2009 to fix things on their own, without oversight or future accountability. That's not a risk we should take.
