Thank you, Frances Haugen. For anyone who has been living under a rock, Ms. Haugen is the ex-Facebook employee who has gone public about how Facebook’s algorithms are designed in a manner that amplifies anger, extremism, and playing fast and loose with facts and truth. (By the way, an algorithm is simply a set of rules a computer follows in performing its operations and calculations.)

There are many reasons why what she has said is true. I can touch on only a few. Most simply, it is true because anger and extremism “get more clicks,” keep more eyes on the site for longer, and thereby increase Facebook’s revenue. Also, their algorithms keep feeding each reader with more of the same type of material that attracted the reader’s first clicks — creating a feedback loop that fosters “same thinking” rather than critical thinking. Each reader gets the impression “there are so many people who think like me, and there’s so much information ‘out there’ that agrees with me that I must be right.” But this process is merely tautological feedback and circular reasoning.
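To make that mechanism concrete, here is a minimal, hypothetical sketch in Python. None of the names, topics or numbers come from Facebook; it only illustrates, under those assumptions, how ranking a feed by a reader’s past clicks keeps serving more of the same:

```python
from collections import Counter

# Hypothetical catalog of posts, each tagged with a topic (illustrative only).
POSTS = [
    {"id": 1, "topic": "outrage"},
    {"id": 2, "topic": "outrage"},
    {"id": 3, "topic": "local news"},
    {"id": 4, "topic": "science"},
    {"id": 5, "topic": "outrage"},
]

def rank_feed(posts, click_history):
    """Score each post by how often the reader has already clicked its topic."""
    topic_counts = Counter(click_history)
    return sorted(posts, key=lambda p: topic_counts[p["topic"]], reverse=True)

clicks = ["outrage"]              # the reader's first click was an angry post
for _ in range(3):
    feed = rank_feed(POSTS, clicks)
    top_post = feed[0]            # the reader sees, and clicks, the top-ranked item
    clicks.append(top_post["topic"])

print(clicks)                     # the history fills up with a single topic
```

Run it and the simulated reader’s click history converges on one topic after just a few rounds, which is the “same thinking” loop described above.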

Make no mistake about it, the above process has been carefully orchestrated by thousands of Facebook employees who have created thousands of invisible algorithms operating as a form of Artificial Intelligence (AI) — all working to lure people onto the site and keep them there. The same is true, of course, for all mass-market social media. The gargantuan flaw in social media has been summed up in an aphorism coined by Kate Crawford: “AI is neither artificial nor intelligent.”

I highly recommend that everyone view the 2020 Netflix documentary “The Social Dilemma,” in which a number of the original creators of social media expose the flaws and dangers of this technology. More than 99% of social media sites are designed to make money, period; they are not designed to foster rational thinking, to promote democracy, to promote ethical behavior, to solve problems, or to create a kinder and gentler world.

I will go a step further: Any social enterprise that depends on algorithms and AI is the born enemy of society and human persons. This is because, as Oxford professor Carissa Veliz has recently pointed out, algorithms have no morality (see “Moral zombies: Why algorithms are not moral agents,” AI & Society, vol. 36, pp. 487-497, 2021). Professor Veliz employs the word “zombie” because, while an algorithm mimics a human being’s capacity for classification, problem solving and decision making, an algorithm (like all AI) has no sentience, consciousness or conscience, nor will it ever.

Living in the midst of all the revolutions of the Digital Age, I’m afraid we’ve become too prone to forget that this revolution’s star general (the almighty algorithm) is fundamentally an amoral beast. This beast can be programmed to execute immoral commands, but in and of itself it is merely a well-designed, amoral cog in a machine, and that is precisely what makes it even more dangerous. A smooth-talking guy in a well-designed suit, with a knife up his sleeve, is more dangerous than a rough-talking guy in a dirty sweatsuit with a knife brandished in his bloodied hand. We know real quick which one to avoid, but both of them can be equally deadly.

Because algorithms are a kind of law that operates invisibly beneath the surface of social media and the digital landscape, they resemble, to some degree, something philosophers have long called Natural Law. However, the two types of law could not be more different.

Natural Law is a moral/ethical system underlying people’s intrinsic sense that all persons have inherent rights conferred by God, nature and/or reason. Our Declaration of Independence makes pointed reference to Natural Law in its most famous phrase: that all men are “endowed by their Creator with certain unalienable Rights …” At the opposite extreme, the algorithmic laws underlying AI and social media are merely products of human enterprise and whim, hugely influenced by the monetary profit motive.

A fascinating book has just come out that deals with these issues. It is titled “Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford (Yale University Press, 2021). Among Professor Crawford’s many striking points is that AI (especially when it involves the World Wide Web) has become the “atlas” by which we view the world. But every atlas is constructed from a point of view that determines the shape of its geographical imaging, whether we realize it or not. Professor Crawford’s research has also shown that AI is not very truthful. The AI Alignment Forum has conducted tests using 817 questions in 38 categories, including law, finance, health and politics. The best AI system they could find was truthful 58% of the time, whereas humans were truthful 94% of the time.

Further along these troubling lines, Douglas Heaven published an article in the prestigious journal Nature (Oct. 9, 2019) discussing research at major universities showing that it is not difficult to fool AI; and often, the larger the AI system (i.e., the deeper its data), the more frequently it is fooled.

Two anti-human beasts have now been unleashed into our world: 1) the operational tenet of postmodernism that there is no ultimate, unchangeable source of truth and goodness (no Natural Law or Divine Law); and 2) the operational tendency to hand over more and more of our collective understanding of ourselves to various forms of Artificial Intelligence run by soulless, amoral algorithms (zombies).

Nature abhors a vacuum. As we have emptied out more and more of Natural and Divine Law from our social, legal and philosophical systems, we have created a vacuum. Something will rush into that vacuum. We have hardly noticed it, but that something is, to a large extent, Artificial Intelligence, operating on massive worldwide platforms. We are depending upon it more and more, and far too many of us think of it as our friend. AI is not our friend. At best, AI is a servant, but one that is dishonest, stupid and amoral.

Philosopher Blaise Pascal (1623-62), in his masterwork “Pensées,” created an image that has come to be called “a God-shaped vacuum in the heart of mankind,” which we constantly attempt to fill. He went on to say, “Since mankind has abandoned God … nothing in nature has been found to take God’s place. Since losing his true good, mankind is capable of seeing it in anything, even in his own destruction — although it is so contrary at once to God, to reason and to nature.”

Frances Haugen’s warning about Facebook is about something much bigger than Facebook. Ultimately, it is a warning about the vacuum we are encountering in the center of our being, in our souls. It is about mankind’s attempt to fill this vacuum with anything we can — including useless nonsense spun out by soulless algorithms.

The devil is in the details, and the details are in the algorithms.

John Nassivera is a former professor who retains affiliation with Columbia University’s Society of Fellows in the Humanities. He lives in Vermont and part time in Mexico.
