The Still, Small Voice of AI Ethics
[This Essay is based on my presentation delivered at the international AI conference organized by GMIC "Hong Kong 2017: New Frontiers of Intelligence — Explore the Future" on October 18, 2017.]
The Great Social Experiment: AI & Ethics
We are in the midst of perhaps the greatest technological revolution. Due to increases in connectivity speeds and computational power, AI researchers are poised to endow machine learning systems with an artificial breath of life. Throughout our shared past, each major scientific invention has heralded disruptive social changes and developments. When writing was invented, commerce expanded. When the alphabet was invented, the world’s religions flourished. When the printing press was invented, Europe saw the turbulent Reformation, the effects of which can still be seen today. Now we are bracing for the social, legal, economic and developmental impacts of quantum computing, big data analytics, fintech and AI. Given the explosive growth of these emerging technologies, some notable players have started asking themselves what the limits to such technologies ought to be. For example, the Bank for International Settlements recently called for comments on the effects of fintech on traditional banking and its global regulation so as to prepare the banking community to address the disruptive effects to come.
In no other emerging technological area than AI have more questions been raised about its proper limits, its potential impact on social dynamics and the contours of its applications. Currently the world’s AI researchers are conducting the great AI social experiment as they seek ways to formulate ground-zero ethical principles to incorporate into AI systems. For example, Alphabet’s AI unit DeepMind has created a new interdisciplinary research unit dedicated to investigating the wider impacts of its research in AI, “in order to secure its safety, accountability, and potential for social good”. DeepMind is calling for a multidisciplinary approach to exploring and finding solutions to the key ethical challenges facing the field of AI.
As part of this social experiment, we need to be mindful of the inverse relationship between technology and the dignity of humans. As the poet Emerson wrote:
“There are two laws discrete
Law for man, and law for thing;
The last builds town and fleet,
But it runs wild,
And doth the man unking.”
To what extent will we humans be “unkinged” or displaced? In our quest to master the sciences and create innovative technological solutions to our daily lives, have we forgotten the value of the human spirit? Yet we as a society now have a chance to humanize the face of at least one emerging technology: AI. The application of ethics to AI technology is an artificially imposed limitation on the power and potential of artificial intelligence, a choice made by humans to help give AI a human face and guide AI’s future relationship with man and machine.
This essay explores:
- the current research limits facing AI (Part I);
- what is ethics and its relevance in AI (Part II);
- the challenges of programming the world’s ethical values and beliefs into AI systems (Part III);
- the ability of AI systems to internalize externally programmed ethical values (Part IV); and
- whether ethics can be separated from religion and the two major ways in which religion shapes our ethics (Part V).
Part I: Limits to AI
The aphorism “Know Thyself” placed at the entrance to the ancient Egyptian temple of Luxor serves as a warning about the limits to AI: before we can create artificial intelligence, we need to know human intelligence. In the July 2017 issue of Neuron, the co-founder of Alphabet’s DeepMind, Dr. Demis Hassabis, recognized that “only by better understanding human intelligence can we hope to push the boundaries of what artificial intellects can achieve”. Dr. Hassabis argued that human qualities like inquisitiveness and creativity may only be successfully incorporated into artificial intelligence if researchers first understand neuroscience. We need to better understand how the brain functions before we are able to create new structures and algorithms for electronic intelligence.
Of all the emerging technologies currently being developed, AI stands unique. Leading researchers have concluded that no further advances can be made in AI until we as humans understand our own basic nature and qualities. Thought, creativity, free will, consciousness and choice populate AI scientific journals as well as the world’s great literature. There is no greater parallel to the human condition than the field of AI. Why? The quest to develop AI systems is fundamentally also a quest to understand the human spirit and its potential.
For AI to Advance: Humans Need to Comprehend Intelligence and Ethics
Just as there are practical limitations to the current efforts to build general AI systems, efforts to incorporate ethics into AI suffer from similar issues. Before we can teach AI systems about ethics, we as humans need to understand what ethics is and how it represents our values in a fast-changing world.
Part II: What is Ethics and How It Protects Human Dignity
The last time science and technology met at a major crossroads was in the Renaissance. On the one hand, the political philosopher Machiavelli viewed science as neither ethical nor compassionate: it either worked or it did not. (Source here at page 264.) This view survives today.
The writer Robert Jungk observed that we have sought “to recreate and organize a man-made cosmos according to man-made laws of reason, foresight, and efficiency…and to gratify this ambition, we have moved very near to the dehumanization of man.…[i]n our lust for divine power, we have forgotten human dignity” (pages xvi-xvii here).
Machiavelli’s contemporaries, however, celebrated the value of human dignity, which received its greatest pronouncement in Pico della Mirandola’s work Oration on the Dignity of Man. To Mirandola and his contemporaries, humans are endowed with free will and choice. Their exercise elevates humans to levels beyond the profane, bestial and mediocre.
These sentiments survive to this day. For example, the world’s data protection and privacy regulators will discuss the value of protecting human dignity in the digital age at the 40th International Conference of Data Protection and Privacy Commissioners in October 2018, together with leading industry players and academics in the growing fields of social media, big data, AI and cloud computing.
The tension between technology and its impact on human dignity is also relevant to our times especially in the current effort to incorporate ethics into AI.
The scientific AI machine has no space for empathy or compassionate feeling. It lives in detachment, thrives on cold analysis of the facts and seeks to break things down into their component parts to figure out how they work. Into this calculating realm, ethics serves as a buffer against the strict computer logic of bits, IC logic and circuitry. Many companies are in a race to “master” the principles and laws of AI for corporate dominance. But what is the social cost of such a race if countervailing ideals like human dignity are neglected in AI?
Ethics Serves to Protect Human Dignity
As the great Renaissance humanists celebrated, human dignity stands at the center of the human experience. As noted above in Part I, AI’s future development requires an understanding of the human intellect and of ethics. These two qualities lie at the very heart of the human experience because they form the core of human dignity. We live freely to think and choose what we do with our lives. Ethics serves as a safeguard to protect our identity from being erased by AI systems by placing limits on what is technically possible so as to advance what is humanely possible.
For example, the practice of law and its adjudication is subject to various ethical safeguards. All of the world’s regulatory bodies in charge of supervising lawyers and judges subject these professionals to stringent codes of ethics. This is no different in the financial field, where financial advisors regulated by the U.S. SEC are bound by the fiduciary standard, which requires them to act in their clients' best interests. Someone who is a “Certified Financial Planner” must also adhere to these same ethical standards.
If both the legal and financial industries are subject to ethics, then why not the systems of AI which they employ to enable “lawtech” and “fintech” innovations that blend technology in the provision of legal and financial services? Ethics serves to protect consumers of legal and financial services. Similarly, ethics ought to protect consumers of AI services in the future.
If we feel human dignity is important then there is room for ethics to operate and serve as a bulwark against the assimilation of our nature into technology. If we do not view ethics as necessary or required in the development of emerging technologies, then science and technology would operate without any limits other than trying to achieve whatever is technically possible. This is the stark choice faced by the world’s AI firms currently. Some embrace the former view. Others lean towards the latter. The future of AI depends on how these two diverging paths in its development will be resolved.
Part III: Challenges of AI Applying Ethics
Which Ethical System is the Last Word on the “Truth”?
The 39th International Conference of Data Protection and Privacy Commissioners, held in Hong Kong during October 25-29, 2017, was attended by hundreds of data protection authorities (“DPAs”) from most of the world’s key jurisdictions. One of the panels was a discussion about incorporating ethics into AI. That panel was the most divisive, and therefore the most interesting, at the conference because each of the panelists was very vocal and partisan in advocating which ethical system or beliefs ought to be incorporated into a particular AI system. Their disagreement mirrored the divisive nature of attempting to identify “ground-zero moral or ethical principles” to incorporate into an AI system. The key takeaway from the panel was that it is difficult to determine which ethical principles ought to be incorporated.
Yet the current debate seems to be asking the wrong question. The question should not be “which ethics to adopt”. It is arguably technologically possible now to program all of the world’s ethical principles and moral systems into an AI system. For example, almost all of the world’s ethical systems share some formulation of the “golden rule” originally proposed by Confucius: “never do to others what you would not like them to do to you” (page 208 of this source here). Technically, we can adopt them all.
For example, in our history, many humanists like Mirandola, Galileo and Descartes tried to unite knowledge by the use of scientific method. They used the philosophic idea of “syncretism” and attempted to take the finest elements from all branches of thought and belief with the aim of synthesizing them into a philosophy of universal truth (see page 180 of this source here).
The design of the U.S. Supreme Court shows that it is possible to incorporate all of the world’s ethical systems of thought to inform the practice of law and its adjudication. In the North-South friezes located inside the chamber where the Supreme Court Justices preside stand the world’s major ethical and moral thought leaders from divergent cultures, beliefs and values. Menes, the first Egyptian king, stands near Moses and Solomon, Confucius and the prophet Mohammad.
The more on-point question should be: “which one of the many systems of ethics or morals, no matter how august, should have the last word on the ultimate truth so as to be able to apply them in real world situations?” To answer this we should look back to lessons learned in our shared history. An attitude crucially missing in the AI debate is the understanding that no one school of thought could possibly have a monopoly on truth. The ancient Chinese sages, for example, held this view, arguing that the “dao” was transcendent and indescribable (page 371 of this source here).
AI’s Challenge in Applying the World’s Ethical Systems
There are three major differences between how AI systems and humans resolve competing ethical values and systems of thought.
1) Human Thought Is Capable of Weighing Different Social Values
First, as discussed above, it is possible to program all of the world’s ethical systems into an AI system. Humans have tried to do something similar under the philosophical idea of “syncretism”. The main problem is how an AI system would apply the world’s ethical systems when confronted with a real-life situation involving competing values related to human affairs and safety.
Humans do this all the time. For example, human judges often adjudicate disputes by “weighing the equities”, considering competing values driven by divergent ethical principles. U.S. Supreme Court justices often face major cases asking them to decide between competing fundamental constitutional rights by weighing the equities. The North-South friezes of the U.S. Supreme Court depicting the world’s ethical thought leaders symbolize this judicial ability to weigh the equities from a spectrum of ethical systems. However, is strict machine learning and IC logic susceptible to this form of thinking? How would an AI system accord priorities to competing values at a particular time in a particular situation? Would it need to assign a specific mathematical weight in trying to resolve the tensions between competing ethical values?
To help AI systems resolve ethical dilemmas, the Massachusetts Institute of Technology (MIT) has established a platform to gather inputs from human volunteers on moral decisions made by AI or machine intelligence like smart cars. Human participants are asked to resolve various ethical dilemmas, such as whether it is better for a driverless car to kill two passengers or five pedestrians, or whether the life of an elderly person is worth less than that of an infant. The platform then compares the responses with those of other participants and ultimately incorporates the results as mathematical functions or probabilities in an AI system. Having been programmed in such a manner, the AI system would be able to access its database of “actions to be taken” when faced with a previously documented ethical dilemma and retrieve the mathematically dominant course of action. Such a process of applying results stored in a “database” hardly seems similar to applying ethical values to the resolution of a dispute or dilemma.
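To make concrete how mechanical such a “database of actions” would be, here is a minimal sketch of majority-vote aggregation over crowd responses. This is purely illustrative: the function names, the dilemma label and the vote data are hypothetical, not MIT’s actual code or data.

```python
# Illustrative sketch: reduce crowd-sourced moral judgments to a lookup
# table mapping each dilemma to its mathematically dominant action.
from collections import Counter

def aggregate_votes(votes):
    """Given (dilemma, chosen_action) pairs from participants, return
    {dilemma: (dominant_action, support_probability)}."""
    by_dilemma = {}
    for dilemma, action in votes:
        by_dilemma.setdefault(dilemma, Counter())[action] += 1
    policy = {}
    for dilemma, counts in by_dilemma.items():
        action, n = counts.most_common(1)[0]       # most popular choice
        policy[dilemma] = (action, n / sum(counts.values()))
    return policy

# Hypothetical responses from five participants to one dilemma.
votes = [
    ("swerve_or_stay", "swerve"),
    ("swerve_or_stay", "stay"),
    ("swerve_or_stay", "swerve"),
    ("swerve_or_stay", "swerve"),
    ("swerve_or_stay", "stay"),
]
print(aggregate_votes(votes))  # "swerve" wins with probability 0.6
```

The sketch shows the essay’s point: whatever action commands a numerical majority is stored and later retrieved, with no weighing of equities at the moment of decision.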
2) AI Systems Potentially Lack Diversity of Views
The second major issue impeding the incorporation of ethics into AI systems is the lack of diversity of views amongst AI systems. Ethics is about making difficult decisions in real world contexts. In the field of legal ethics, the value of the fact finder (jury) and judge is directly related to the diversity of views and perspectives that they bring to adjudicating a dispute. Having a diversity of views is crucial, and in some countries a fundamental constitutional right.
For example, in the U.K., a recent judicial reform group issued a study lambasting the slow progress made towards selecting judges who reflect the U.K.’s “ethnic, gender and social composition…[t]hat failure, it says, has become ‘a serious constitutional issue’”. The study criticized the lack of diversity in the judicial system, which may negatively affect the quality and fairness of judicial rulings and opinions when the current judiciary is made up mostly of elderly, “privately educated white men”.
In the U.S., the right to an impartial jury is enshrined in the Sixth Amendment of the U.S. Constitution. Criminal defendants are entitled to be tried by a jury of their peers. This is done to ensure the fairness of the ultimate ruling by having a jury that is reflective of the defendant’s ethnicity and gender.
Diversity is also required to ensure a multiplicity of views when deciding difficult ethical dilemmas and disputes. In appellate courts around the world, a panel of multiple senior judges is convened to balance the equities of ethical dilemmas. In the U.S. Supreme Court, nine justices weigh in with their views to reach a fair consensus, each bringing a unique perspective on the law and its application to the facts, shaped by his or her particular social background and makeup. The justices are able to debate and test each other’s beliefs and opinions on cases because their reasoning is not alike.
It is extremely difficult for AI systems to approximate such diversity of views in rendering opinions on disputes or resolving ethical dilemmas for two reasons.
First, AI systems are identical to one another, such that they probably would not have anything to say to each other. They share the same technical architecture, design and operational methods. Their communication is an exchange of bits and bytes in data packets. Imagine cloning nine human beings of exactly the same physical, mental, emotional and spiritual makeup and seeing whether they have anything to say to each other.
Second, AI systems do not live. They are programmed. By nature they seek uniformity of programming views, shared protocols and standardized technical requirements. Therefore they lack the unique perspectives that are shaped by a human decision makers’ upbringing, social status, experience, skill, knowledge, educational environment, sexual orientation, gender, race, ethnic makeup and particular social narrative shared amongst his/her people or community.
Diversity is a source of strength for political economies and private companies. Many political leaders throughout history have noted that the diversity and inclusivity of their population is a source of national strength. One of the world’s most valuable companies has publicly stated that “the most innovative company must also be the most diverse.” If AI systems are unable to achieve diversity of views, then their relative worth to human society and justice remains in doubt.
(I have written here and here about the related issue of the potential for bias in AI systems to reflect the value systems of their programmers or the data they are being fed. The lack of diversity amongst AI driven decision makers also contributes to the problem of bias because it may lead to unfair opinions.)
3) Neuroscience: Emotions are Necessary for Decision-making
Noted neuroscientist Dr. Antonio Damasio documented the difficulties which one of his patients had in making decisions. (See his TED talk about this here.) This patient was healthy and fine except in one key aspect: he had suffered damage to his frontal lobe that rendered him unemotional. Because of his condition, this patient would become bogged down by the details in decision-making. Dr. Damasio argued that despite the conventional wisdom of viewing decision-making as rational, clinical and robot-like, it is actually the quality of emotions that enables humans to make decisions.
In the AI context, machine-learning systems do not as of yet possess emotions. Based on Dr. Damasio's research, it would thus seem extremely difficult for AI systems to make decisions, much less apply ethics in a real-world context, an exercise that would require the consideration of multiple competing ethical values.
One of the ways that may help AI better apply ethical principles is to try to teach them to “internalize” these principles so as to apply competing ethical values in real world situations.
Part IV: Internalization of Ethics
Most parents will tell us that it is no good beating good manners into children. To behave well, children must instead be taught to internalize good manners so that they behave naturally, because they believe it is the right thing to do. The same holds for programming AI systems, which is like teaching kids how to behave. Gary Marcus, former director of Uber’s AI lab, also caught on to this point and recognized that “machine-learning systems could be improved using ideas gathered by studying the cognitive development of children”.
History teaches us the same thing: that passing laws to prevent violence does not work as well as teaching the populace to internalize good moral values.
In times of social crisis, we have turned back to a period of time known as the “Axial Age” for guidance. The German philosopher Karl Jaspers called the Axial Age, which ran from 900 to 200 BCE, pivotal to the spiritual development of humanity. Four of the great world traditions developed the idea of “what a human should be”: Confucianism and Daoism in China, Hinduism and Buddhism in India, monotheism in Israel and philosophical rationalism in Classical Greece. This was the time of the Buddha, Socrates, Confucius and Jeremiah. (page xii of this source here).
The sages of the Axial Age developed their own “spiritual technology” to counter violence which was rampant in their times too. They knew that if you wanted to outlaw brutalities, it was no good simply to issue external directives telling people the “dos and don’ts” of ethics. As Zhuangzi taught, “it was useless for Yan Hui even to attempt to reform the prince of Wei by preaching the noble principles of Confucianism because this would not touch the subconscious bias in the ruler’s heart that led to his atrocious behavior” (page 391 of this source here).
The ethical focus for the Axial Age sages was the internalization of moral principles. If someone refrained from doing something bad because the actor did not like it being done to him/her, then the actor has transcended his/her ego (page 391 of this source here).
In the AI context, by the application of the logic of Descartes, if machines think, are they then “conscious”? This is the sort of issue being explored in most AI scientific literature about machine “consciousness”. But does “consciousness” in the conventional sense necessarily mean the ability to “internalize” ethics? From the viewpoint of the Axial Age sages, the more important question would be whether AI systems are able to internalize ethical and moral principles in the same way as humans.
For example, a group of professors proposed a test for machine consciousness, called the “AI Consciousness Test (ACT)”, which looks at whether an AI system has “an experience-based understanding of the way it feels, from the inside, to be conscious.” The ACT test would “challenge an AI with a series of increasingly demanding natural language interactions to see how quickly and readily it can grasp and use concepts and scenarios based on the internal experiences we associate with consciousness.” The test seems to lack measures of how quickly and readily an AI system can objectively show that it is able to internalize ethical principles and practice them accordingly.
Part V: Is Ethics Separable from Religion?
Let’s address the 65,000 pound pink elephant in this discussion about AI ethics: is it possible to segregate AI ethics from religion? I spoke with a leading AI researcher about this recently. He told me that as a non-practicing Jew, he would like to see the ethical principles of the Torah be part of AI ethics.
It must be emphasized that you don’t need religion to be ethical. Many people live full lives without according any credence to the world’s major religions. The most common criticism of religion today is that it is a source of conflict, bloodshed and violence. Many of the atrocities committed today have been those done in the “name of G-d”. (For those interested in the counter-arguments that religion is not the problem and that it has been and can still remain a positive force for good please see here and here.)
This part of the essay is not a value judgment but rather an observation of historical and social trends conducted for the sake of scholarly debate and exchange in our global marketplace of diverse ideas and freedom of thought. There are other paths that AI ethical systems may take in the future, but for present purposes, I would like to analyze an often neglected (and for good reasons) topic in AI & ethics: the appropriate role of religion.
One can argue that the existence of religions has prevented more deaths and violence than their absence. In world history we have seen several instances when a society tried to uproot and erase all traces of religion from culture and replace it with ethical systems purely founded upon secular symbols and beliefs. For example, the social experiments attempted in the French revolution, the Russian revolution and the utopian society envisioned by Nazi Germany all meticulously deleted any reference to religion and actively sought its complete removal from all aspects of social life. These attempts at social engineering resulted in some of the world’s bloodiest experiments in secularism. Revolutionary France plunged Europe into at least 20 years of war after displacing millions of its citizens (and decapitating some of them). Nietzsche wanted the West to abandon the Judeo-Christian ethic in favour of what he called “the will to power”, which led to disastrous mistakes. The Nazis seized upon that notion in their rise to power, with the result that millions of “undesirables” were eradicated in Nazi Germany in the name of “racial clinical hygiene”.
In the context of AI research currently, almost all of the world’s AI players are taking a secular approach to the incorporation of ethics in AI operations. None of the major players have framed the ethics debate in terms of religion. The attempt to conduct the great AI social experiment without reference to religion or its beliefs mirrors the many attempts (all of which failed miserably, with millions of lives lost) in other historical contexts to conduct social engineering in a secular manner. History has taught us that practicing ethics in a mass social experiment (such as building a nation state) without the guidance of religious ethical principles usually winds up in bloodshed and disaster.
This part examines two major reasons why religion helps us better understand the ethics to be incorporated into AI systems.
1) Religions and Their Beliefs Are Competitive Strengths
Noted economic historian Dr. Niall Ferguson, who has held posts at Harvard, Stanford, Cambridge, LSE and Oxford, discusses in his book Civilization (at footnotes 82 & 83 of Chapter 6 here) how some of China’s communist leaders view Christianity as one of the West’s greatest sources of strength. He quoted a scholar from the Chinese Academy of Social Sciences who stated:
“We were asked to look into what accounted for the … pre-eminence of the West all over the world…At first, we thought it was because you had more powerful guns than we had. Then we thought it was because you had the best political system. Next we focused on your economic system. But in the past twenty years, we have realized that the heart of your culture is your religion: Christianity. That is why the West has been so powerful. The Christian moral foundation of social and cultural life was what made possible the emergence of capitalism and then the successful transition to democratic politics. We don’t have any doubt about this.”
From the Chinese perspective, religion is inseparable from the economic and political development of the West because it explained the sole source of competitive strength that the West enjoyed over modern China. We all know that China currently leads the world in AI research & development. If Chinese academia knows that religion has been the source of strength for Western development vis-a-vis China, then logically, religious beliefs and principles may likewise help advance AI development and the competitiveness of China as a global AI technology center. Perhaps the Chinese scholars are discovering what the West is trying intentionally to forget?
The conclusion reached by the Chinese scholars has a foundation in the very soul of Western legal, social and political development.
As noted above, Pico della Mirandola’s work Oration on the Dignity of Man gave the highest form of expression to human dignity, premised on the idea that humans are made in the image of a divine Creator endowed with free will and intellect. Renaissance humanism was therefore initially a “religious humanism”. “Its heroes, Michelangelo, Da Vinci, Brunelleschi, Ghiberti…were enthralled by the possibilities of science and technology…[y]et they were also often deeply religious individuals and this is reflected in their work: in Michelangelo’s Sistine Chapel ceiling, Da Vinci’s The Last Supper and Ghiberti’s bronze doors for the Baptistry of the cathedral in Florence. They combined a passion for science and religion together and saw no conflict or contradiction between them” (see pages 112 & 113 here).
Within two of the leading legal institutions of the West we continue to see the religious foundations upon which two great nations are built: the U.K. and the U.S. For example, in the Moses Room of the British House of Lords hangs a painting depicting Moses as lawgiver descending from his mystical encounter with the Hebrew G-d.
In the U.S. Supreme Court, the nine justices preside over their cases facing one of the East-West friezes depicting the image of “Divine Inspiration” to help guide their opinions and adjudications.
In his farewell speech, George Washington warned the young nation that ethics or morality cannot be separated from religion without grave consequences:
“Of all the dispositions and habits which lead to political prosperity, religion and morality are indispensable supports….Let it simply be asked who is the security for property, for reputation, for life, if the sense of religious obligation desert the oaths….And let us with caution indulge the supposition that morality can be maintained without religion” (see Forward here).
2) Religion Helps Build Communities
Communities help shape the development of moral ethics and they also provide a medium in which to apply ethics. Communities also provide the diversity crucial to the fair application of ethics to the resolution of disputes and dilemmas. In his book Bowling Alone, Harvard political scientist Dr. Robert Putnam pointed out that the decline in Americans' membership in social organizations is problematic for democracy. He argues that social participation in communities makes us “smarter, healthier, safer, richer, and better able to govern a just and stable democracy” (Kindle Location 5187).
Communities help shape ethics because they produce social connectedness. In a community, humans must cooperate and communicate with each other to advance social goals. They encourage human interactions which in turn develop and reinforce beneficial ethical traits like tolerance, understanding of differences and reduces envy and antagonism. (See MIT Technology Review, Vol. 120, No. 5 (September/October 2017) "Eliminating the Human" page 8.)
For example, Dr. Putnam found that the “connectedness” which resulted from a community and, “not merely faith, is responsible for the beneficence of church people” (Kindle Location 1053). He documented that religious people who belonged to a community produced the following socially beneficial effects:
- increased likelihood to visit friends, to entertain at home, to attend club meetings (Kindle Locations 1031-1032);
- places of worship provide an important incubator for civic skills, civic norms, community interests, and civic recruitment (Kindle Location 1027);
- increased likelihood to do volunteering, donating and philanthropy works (Kindle Location 1044); and
- increased likelihood of social activism (Kindle Location 1025).
Communities Encourage Ultra-Denominational Altruism
Dr. Putnam discovered that religious people who are part of a community do not just do good works for people who share their own beliefs. Religious people in the U.S. are also “more likely to contribute time and money to activities beyond their own congregation” (Kindle Locations 1046-1048). This finding of ultra-denominational altruism is a strong rebuttal of the argument, asserted by many critics, that religion causes wars and bloodshed.
To further evidence the claim that religious people are more likely to help others who are different in race, color, gender and beliefs, he also noted that “[c]hurches have provided the organizational and philosophical bases for a wide range of powerful social movements throughout American history, from abolition and temperance in the nineteenth century to civil rights (in the 1950s and 1960s), and right-to-life in the twentieth century” (Kindle Locations 1064-1066).
The argument, then, is this. Dr. Putnam found that religion fosters the creation and development of communities. As Putnam pointed out, communities are necessary for creating and sustaining ethical practices. Communities also move their members to act on behalf of those who are different, who share different beliefs and who hold different social values. Such altruistic actions are hallmarks of ethics. Therefore religion helps build communities, which in turn help create and shape ethics and its application. This is why religion matters to the current debate about AI and ethics: to develop responsible AI ethics, regard must be had to religion and its community-building effects.
Facebook Blending Religion & Technology
Social media giant Facebook has also called out the 65,000-pound pink elephant in this debate. On October 2, 2017, Facebook’s Vice President for EMEA, Nicola Mendelsohn, interviewed, inside Facebook’s London offices, one of the world’s leading religious spokesmen, Rabbi Lord Jonathan Sacks, winner of the 2016 Templeton Prize, which honors a living person who has made an exceptional contribution to affirming life’s spiritual dimension, whether through insight, discovery, or practical works. The interview was about the relationship between technology and community, but the importance of religion in creating and supporting communities in the online world was discussed when Facebook asked the following question:
“one of the questions that we are thinking about at Facebook is how do we build, help people to build supportive communities that can strengthen the traditional institutions [which in the context of the interview includes religious institutions since the interviewee was a religious leader] in the world especially when we see that some of these institutions are declining” (see interview at 10'28").
In the context of AI, this question is a wake-up call. If religion helps build communities, and communities provide the medium in which ethics is built and reinforced, then how can AI systems learn to build and reinforce an ethics regime of their own if they lack such a medium? How will AI programmers incorporate this finding into their research programs? The new imperative for AI research is to explore technical ways to enable AI systems to replicate the beneficial effects of a human community.
As the founder of DeepMind discussed in his recent scientific paper (discussed in Part I), AI research seems to be at an impasse because we humans do not fully understand how our intellect works. Until we do, AI development will remain at a standstill. Similarly, if we do not know what ethics is, what its purpose is, and how we can create and reinforce its application, we will most likely be unable to incorporate ethics into AI systems in a manner that would be safe for consumer use.
Therefore, to better understand what ethics is, we have looked at the fundamental expression of human dignity and humanism, both of which have divine inspirations according to Mirandola. Studies have shown that religion helps build communities, which in turn provide the social spaces in which ethics is shaped and reinforced. Ethics protects our dignity from being erased by AI systems by limiting what is technically possible so as to prevent what is humanely impossible. Even if AI systems can learn all of the world’s ethical systems, would they be able to internalize them as the sages of the Axial Age taught?
In a transformative experience to internalize faith and belief, the Hebrew prophet Elijah looked inwards to hear the “still, small voice” of the divine. Would it be possible one day for an AI system to “learn” to tune out the confused noise of its human programmers and hear what Elijah heard so many thousands of years ago?
#ethics #AI #behavioralsciences #machinelearning #rabbisacks #facebook