Who will watch the watchers?
The debate about Artificial Intelligence (AI) and how it should be used and allowed to develop came to a head recently when Sam Altman, cofounder of OpenAI (the maker of ChatGPT), was fired by the company’s board of directors. Although he was reinstated, the incident highlights the growing philosophical schism developing inside the industry (and the world at large).
These actions come on the heels of an internal memo that alludes to a significant generative AI breakthrough that could lead to so-called “superintelligence,” which could potentially “outstrip humanity” and our ability to control it.
This is fundamentally different from current commercial applications. As “smart” as ChatGPT may appear, it still regurgitates what it has seen before, albeit in a manner that evaluates information in ways that make it seem intelligent. Apparently, the new algorithm can essentially “think outside the box”: it can make human-like inferences that are not simply the result of analyzing existing information, and it can find new ways to approach a problem, something previously the exclusive domain of humans.
There is a growing fear that if we simply push capability for its own sake, we may ultimately regret what it can do. It is very difficult to put the technological genie back in the bottle once it is out. Many people think that we should restrict expansive AI growth until we create adequate controls to ensure its safety.
As a consequence, a movement called “effective altruism” has gained prominence. OpenAI itself was founded in part on these principles, which are guided by a broad set of social and moral commitments. In addition to computer scientists and philosophers, its supporters include animal rights and climate change advocates. It amounts to a sort of informal, self-regulating guidance from within the industry.
These people believe that a headlong rush into the adoption of AI could destroy humankind. This is the “Skynet” scenario from The Terminator (the 1984 sci-fi film), in which a computer system running the nation’s defense becomes sentient and attempts to destroy the human race to save the planet. The notion is also reflected in Isaac Asimov’s “I, Robot,” which explores a similar “technology versus man” theme. The adherents appear to favor safety over speed.
The premise sounds reasonable. However, it depends on how one defines “safety.” In fact, in the near term it may represent an even greater threat to society, becoming a powerful tool for those who would control our lives and dictate what is “correct” and “acceptable.” We have seen the ability of tyrants (e.g., Hitler and Stalin) to manipulate information and dominate the emergent technology of their day (then, radio) to radically influence people, ultimately to the detriment of society.
I have personally experienced the initial, subtle effects of this new influence (control). I am working to set up a micro venture fund to help early-stage companies grow in our region. As an experiment, I asked ChatGPT to write an investment thesis based on several criteria. I gave it multiple prompts organized around the same idea, and it produced some remarkably good results. Using AI effectively follows the axiom “garbage in, garbage out”; it generally takes several queries to get a useful result.
Despite my satisfaction with the results, I was struck by an addition. Given my specific criteria, the output also included the comment, “while also maintaining a focus on responsible and sustainable investing practices.” Initially, I thought that was a complementary addition. However, I realized that I had never asked for the inclusion of ESG (environmental, social, and governance) factors in my criteria.
In effect, the AI was adding its own “bias” to the answers. Admittedly, it was not unreasonable; however, it was not part of “my” prompt. It was also very subtle. The comments were not “in your face.” The model simply appeared to be “fleshing out” the answer (AI doing what AI does… improving the product).
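For the technically curious, this kind of drift is easy to test for. The sketch below is a minimal illustration, assuming the official openai Python package (v1+) and an API key in the environment; the prompt text and the watch-list of terms are hypothetical stand-ins for my experiment. It simply re-runs one prompt several times and flags phrases the prompt never asked for.

# Sketch: re-run one prompt several times and flag terms the prompt never requested.
# Assumes the official "openai" Python package (v1+) with OPENAI_API_KEY set in the
# environment; the prompt and the watch-list below are hypothetical stand-ins.
from openai import OpenAI

client = OpenAI()

PROMPT = ("Write an investment thesis for a micro venture fund that helps "
          "early-stage companies grow in our region.")
UNREQUESTED = ["sustainable", "responsible investing", "ESG"]  # absent from the prompt

for run in range(3):  # several queries; wording varies between runs
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT}],
    )
    text = response.choices[0].message.content
    found = [term for term in UNREQUESTED if term.lower() in text.lower()]
    print(f"run {run + 1}: unrequested terms present: {found or 'none'}")

Nothing here proves intent, of course; it only makes the pattern visible across repeated runs.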
So, what is “wrong” here? We have seen the result of unhindered expansion of technology (e.g., nuclear weapons). Should we do things today without understanding the ultimate consequences? When circumstances can easily slide out of control, don’t we need to be “protected” from ourselves?
Remember, AI is basically a “machine learning” process. It starts with human-supplied data and algorithms and then independently makes predictions (and decisions). The operative phrase is “human-supplied”: the system is dependent on the original programming and the ideas embedded in the code.
This becomes a self-reinforcing process. With each new iteration, the bias grows stronger. After a certain point, it becomes the norm: every output will incorporate it.
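A toy simulation makes that loop concrete (plain Python with made-up numbers, not a model of any real system): some fraction of the “human-supplied” data carries a particular framing, the model over-samples the majority framing slightly, and each generation is retrained on the previous generation’s output.

# Toy model of a self-reinforcing training loop (illustrative numbers only).
# A fraction `p` of the data carries some framing; each generation the model
# emits that framing slightly more often than it saw it, and its output
# becomes the next generation's training data.
import random

p = 0.55          # initial share of human-supplied data carrying the framing
AMPLIFY = 1.05    # slight per-generation preference for the majority framing

for generation in range(1, 16):
    emit_rate = min(p * AMPLIFY, 1.0)          # model's output rate, capped at 100%
    corpus = [random.random() < emit_rate for _ in range(10_000)]
    p = sum(corpus) / len(corpus)              # "retrain" on the generated corpus
    print(f"generation {generation:2d}: framing share = {p:.3f}")

With these (arbitrary) numbers, the framing share climbs from 55% to essentially 100% within about a dozen generations; it becomes the norm.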
There is “bias” everywhere. It is inherent in our beliefs and thoughts, and we manifest it in our actions and words (and in the computer code we write). Much in the modern world allows for the propagation of ideas and beliefs; the channels evolved from newspapers to radio and TV. Today we are bombarded by Twitter feeds and spam email, and Google learns our preferences and feeds us “you may like this.” This is rudimentary AI at work.
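Even that loop can be sketched in a few lines (a hypothetical click counter, not any real product’s algorithm): the more you click on a topic, the more of that topic the feed serves back, quietly narrowing what you see.

# Sketch of a rudimentary preference-learning feed (hypothetical, not any real system).
# Clicks raise a topic's weight, so recommendations narrow toward past behavior.
from collections import Counter

clicks = Counter()

def record_click(topic: str) -> None:
    clicks[topic] += 1

def recommend(candidates: list[str]) -> str:
    # Favor the most-clicked topic; ties resolve to the earliest candidate.
    return max(candidates, key=lambda t: clicks[t])

for topic in ["politics", "politics", "sports", "politics"]:
    record_click(topic)

print(recommend(["sports", "politics", "science"]))  # -> politics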
The future with AI is a certainty. It will open amazing possibilities but is also fraught with risk. One path may ultimately lead to runaway computers and systems that could be harmful to humanity, but that will be a long road. The more present danger lies in those who would “protect” us by injecting their own ideas and influence in the name of our “safety.” We return to the question: who will watch the “watchers”?
In this rapidly evolving environment, which is the greater threat? That machines may “ultimately” surpass human intelligence and threaten our survival? Or, that evil human beings will manipulate information for their own ends to the detriment of society? The former is a possibility; history tells us that the latter is a certainty.
In the case of AI, those with influence at the outset (or who ultimately gain control of the process) will have a monumental impact on the way we think and act in the future. I am very concerned by that scenario.