Monday, October 12, 2020

The Social Dilemma

The Social Dilemma is a Netflix documentary about the mushrooming problems caused by the various social networking platforms and their use of artificial intelligence (AI) to grow, consolidate and ultimately control their user base.  The movie takes a powerful approach to illustrating the issues, primarily by interviewing some of the major players in the tech industry.  These software developers, for the most part white, male and in their 20s when the social networking software was being created, have come to realize that their creations have followed a Frankenstein-like pattern of exponential growth and unintended consequences.  There are, of course, many problems engendered by the unfettered evolution of the internet, its many tentacles invading all parts of modern life.  It has spawned new and different forms of harm, from viruses, phishing, doxing, industrial espionage and DDoSing, never mind its contributions to video game and porn addiction, the promulgation of misogyny and racial hatred, and on and on.  This documentary does not address all of this, focusing specifically on the rise of the social networking platforms.


The movie presents Tristan Harris, a former Design Ethicist at Google, as a focal point.  He was involved in the design of Google's Gmail product, and at the time began to wonder why there was never any discussion of the ethics of creating a product with the ability to addict its consumer base, a difficulty Tristan was personally experiencing.  He attempted to start such an internal debate, but it went nowhere.  Tristan eventually quit Google and subsequently founded an organization called the Center for Humane Technology, which he uses as a pulpit to deliver information on these issues and some proposed solutions.


The documentary first centers its attention on the issue of phone addiction among both pre-teens and teenagers, driven by the notifications, likes, pings and messages favoured by the social networking platforms.  In one intense dramatization, a mother attempts to lock away her children's phones in a glass safe, in order to have one uninterrupted meal.  Her pre-teen ends up breaking the glass to retrieve her phone.  Alarming statistics on the rise of self-harm and suicide among pre-teens and teens show a disconcerting correlation with the advent of smartphones and social networks.  Cyber-bullying is also briefly touched upon, conveying the ease with which the self-esteem of the vulnerable pre-teen cohort can be damaged.


Teenagers are susceptible to the same phenomena, given their need to fit in and form relationships with their peers.  Unfortunately, it appears that social networking is actually reducing the quantity and quality of real-world connections, in favour of virtual ones.  The movie delves, via an effective simulation, into how the AI algorithms work to maximise the time that their users spend on the platform.  Using data compiled from every click and view, the software attempts to find the correct mix of offerings, often by trial and error, that will continue to engage the hapless user's interest.  And the AI learns from every interaction; it gets better and better with time.  The reason for this is clear: the platforms are selling their users' attention to the highest-bidding advertisers - this is how the social networks monetize!  What's more, as developer Jaron Lanier (often called the father of virtual reality) proclaims, what is actually being sold is the ability to shift behavior, in tiny imperceptible increments, without the user ever being aware of it.
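The trial-and-error loop described above is, at its core, a reinforcement-learning idea.  A minimal sketch of the flavour of algorithm involved - an epsilon-greedy bandit that learns which content category keeps a simulated user engaged longest - is below.  This is purely illustrative; the category names, numbers and structure are hypothetical, and the platforms' actual systems are vastly more sophisticated:

```python
import random

def recommend(engagement, counts, epsilon=0.1):
    """Pick a content category: mostly exploit the best known,
    occasionally explore at random (the 'trial' in trial and error)."""
    if random.random() < epsilon:
        return random.choice(list(engagement))
    # Exploit: the category with the highest average observed engagement.
    return max(engagement, key=lambda c: engagement[c] / max(counts[c], 1))

def update(engagement, counts, category, minutes_watched):
    """Learn from every interaction: accumulate observed engagement."""
    engagement[category] += minutes_watched
    counts[category] += 1

categories = ["news", "sports", "conspiracy"]
engagement = {c: 0.0 for c in categories}
counts = {c: 0 for c in categories}

# A simulated user who (unknown to the algorithm) lingers far longer
# on one category; the true means are invented for illustration.
true_mean = {"news": 2.0, "sports": 3.0, "conspiracy": 8.0}
random.seed(42)
for _ in range(1000):
    c = recommend(engagement, counts)
    update(engagement, counts, c, random.gauss(true_mean[c], 1.0))
```

After enough rounds, the algorithm converges on whichever category holds the user's attention longest, with no notion of whether that content is true or healthy - which is exactly the rabbit-hole dynamic the documentary describes.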


It is the data that the tech companies collect and analyze that feeds the hungry intelligence of the software algorithms.  It is harvested, stored, managed and monetized, simply because no one told the tech companies that it could not or should not be done.  After all, the data should rightfully be the property of the user who generated it.  Is there any valid reason, from the users' point of view, that Google stores the history of every search string, or that all the platforms offer free applications and gigabytes of email storage?  The answer is no; there is no real benefit to the user other than lazy convenience.  The data is priceless, not because it is being sold, but because it is used to manipulate.


The solution is therefore obvious; the tech companies need to be regulated such that user data belongs to the user and cannot be accessed by the software for any purpose.  Users can keep their data or send it straight to oblivion, where it belongs.  If that means we will be required to return to a business model where we have to pay for software, so be it.


Another equally problematic issue is being exacerbated by the AI algorithms, one that is leading the world headlong into an abyss of non-truth, societal discord, and perhaps the end of democratic civilization as we know it.  In a perfect illustration, the documentary asks us to imagine a version of Wikipedia that is customized to each user, doling out 'facts' based on a person's preferences, online history and worldview.  This is exactly what the Facebook and YouTube feeds are doing, searching for rabbit holes for their users to dive into and spend ever more time on the platform.  The AI software has not been programmed with any sense of morality or right and wrong, of what is true or not.  It is simply and ever so efficiently learning what attracts the mind of the person at its mercy.  Social media plays a pivotal role in enabling grouping and recruitment by political ideology and by racial, gender or sexual identity.  Of course, there are other forces in the world that are also contributing to and profiting from a siloed society.  Cable news shows are a prime example, aided by politicians who do not let facts get in the way of their quest for power.


The solution to this problem does not seem obvious.  It is impossible to regulate the 'truth'.  But it does not need to be, at least in terms of AI.  If the software is not allowed to harvest user data, it will not be able to lead its users into ever further realms of controversy.  Perhaps this may happen anyway.  But at least this would be a human problem that can be solved by the human abilities to think logically, to reason out and understand other points of view, to compromise and find a better path to the future.

 
