Un-Summoning the Demon: a Roadmap to Ethical Artificial Intelligence

October 11, 2019

The Kauffman Auditorium (C5012) in the Winship Cancer Institute of Emory University Hospital at 1365 Clifton Rd, Atlanta, GA 30322

Free admission. Free breakfast. Free lunch. Free wisdom.

 

The organizations listed above are employers of the speakers at this symposium, but the opinions expressed by each speaker are their own and do not purport to reflect the opinions or views of these organizations.

The E-AIMM Program

What is the state-of-the-art in AI now, and what will it become? What is the AI ecosystem? How does power vary across the stakeholders of any AI use case? What are the ethical implications of the answers to these questions?
Join us in Atlanta, Georgia on October 11, 2019 to launch the Ethical Artificial Intelligence Maturity Model (E-AIMM) Program. Come see fireside chats and panels featuring (in alphabetical order) Paul Arne, chair of the tech.cyber.sourcing.crypto practice at Morris, Manning, and Martin; Lisa Hammit, Global Vice President, Data and Artificial Intelligence at Visa; Mina Hanna, Co-Chair of the Policy Committee of the IEEE Global Initiative on Ethics of AI Systems; and Jimmie McEver, Principal Scientist at Johns Hopkins Applied Physics Laboratory and INCOSE (International Council on Systems Engineering) Assistant Director for Analytic Enablers. Enjoy a keynote by Chris Benson, Chief Strategist for Artificial Intelligence, High-Performance Computing, and AI Ethics at Lockheed Martin. Then get hands-on experience helping us to prototype the model. We are expecting participation from The Home Depot, Coca-Cola, and Delta Air Lines, among others.

Admission to the event is free, as are breakfast and lunch (for those who RSVP). Volunteers can officially join the E-AIMM Program in person at the symposium as charter members, which will be recorded in the model once it is published. Email your RSVP to info@advenae.ai: space is limited, and we will give priority to those who RSVP'd if capacity is exceeded.

Pictured in alphabetical order:
Paul Arne

Partner (Tech Monetization, Cybersecurity, Privacy, Open Source, and more) at Morris, Manning, and Martin

Chris Benson

Chief Strategist for Artificial Intelligence, High-Performance Computing & AI Ethics at Lockheed Martin

Bill Binney

Former highly placed intelligence official with the United States National Security Agency (NSA) turned whistleblower on matters of privacy involving massive electronic surveillance; retired in October 2001, after more than 30 years with the agency.

Lisa Hammit

Global Vice President, Data and Artificial Intelligence at Visa

Mina Hanna

Co-Chair of the Policy Committee of the IEEE Global Initiative on Ethics of Artificial Intelligence Systems

Jimmie McEver

Principal Scientist at Johns Hopkins Applied Physics Laboratory and INCOSE Assistant Director for Analytic Enablers

Kirk Wiebe

Former highly placed intelligence official with the United States National Security Agency (NSA) turned whistleblower on matters of privacy involving massive electronic surveillance; retired in October 2001, after more than 30 years with the agency.

Michelle Yi

Practice Area Lead for Global Social Innovation at Slalom, specializing in AI, Machine Learning, and Cloud

After a networking breakfast and a morning spent hearing from these and other luminaries about the current and future state of AI, the AI ecosystem, respective stakeholder perspectives, and ethical considerations, we will begin prototyping the Ethical Artificial Intelligence Maturity Model (E-AIMM) over a working lunch. Prototyping will be facilitated by John Schlichter, who previously conceived another maturity model and led a team of 800 volunteers from 35 countries to develop it into a global standard accredited by ISO.

The joke that "methodology, like sex, is better demonstrated than described" applies here, but the following may help convey the lay of the land for prototyping. You will self-organize into break-out groups led by the persons pictured above. You will complete facilitated exercises that unleash your creativity, focusing on 1) "What elements do you think are required for excellence in developing and implementing AI ethically?" and 2) "How would you sequence those elements from less to more advanced (like putting walking before running)?" Then all teams will share their creativity with each other in a polite but playful plenary, tackling fascinating questions about what it means for AI to be developed and used ethically. Suffice it to say, while the subject matter is serious, we do not take ourselves too seriously, and a fun time shall be had by all who engage in prototyping. No matter your background, you will quickly experience your own competence in the art of creating a maturity model if you have listened carefully since breakfast. So arrive early! All are welcome.
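For those who think in code, here is a minimal, purely illustrative sketch of what those two exercises produce. Every element and its rank below is a hypothetical placeholder invented only to show the shape of the task; the real list will come from the break-out groups:

```python
# Exercise 1: brainstorm candidate elements of excellence in ethical AI.
# Exercise 2: sequence them from less to more advanced ("walking before running").
# All elements and ranks here are hypothetical placeholders.
elements = {
    "identify the stakeholders of the AI system": 1,   # walking
    "map the power relations among stakeholders": 2,
    "surface each stakeholder's cognitive constraints": 3,
    "arbitrate conflicting stakeholder values openly": 4,  # running
}

# Print the elements in their proposed developmental sequence.
for name, rank in sorted(elements.items(), key=lambda kv: kv[1]):
    print(f"Step {rank}: {name}")
```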

John Schlichter

John Schlichter

Founder of Advenæ
(pronounced /ad ˈvɛn ˈaɪ/)

Although John Schlichter conceived this symposium and the E-AIMM Program, the following opinions are his own and do not necessarily represent the opinions or views of the other speakers.

Nobody has distinguished a standard for maturity vis-a-vis the ethical development and use of AI from a systems perspective that enables all stakeholders to recognize their own cognitive constraints and how those constraints govern the ways they treat each other.

With half of all businesses surveyed integrating artificial intelligence (AI) into their operations and three out of four planning to increase investments in the technology, the race is on to create not only capable artificial intelligence but ethical AI. Yet nobody has distinguished a standard for maturity vis-a-vis the ethical development and use of AI from a systems perspective that enables all stakeholders to recognize their own cognitive constraints and how those constraints govern the ways they treat each other.

Our vision is to create a widely and enthusiastically endorsed maturity model recognized worldwide as the standard for developing and assessing ethical AI.

"Maturity" isn't a word that belongs only to wine connoisseurs characterizing their best libations or Wall Street brokers characterizing the ultimate pay-out of their financial instruments. It is a word that characterizes every pregnant mother coming to term and every football team arriving at the Super Bowl. "Maturity" is the destination, and the steps to achieve maturity can be described easily as a "model” once maturity is distinguished. The term "maturity model" is jargon that originated in the late 1980's when the U.S. Department of Defense funded research to evaluate the ability of contractors to develop and deliver their goods. Soon thereafter, a Cambrian-like explosion of maturity models elaborated everything from software development productivity, logistics, and smart grid modernization, to strategy-implementation-through-projects. The time has come for an ethical AI maturity model. Our aim is to help users assess and develop capabilities for creating and implementing AI ethically.

Our mission is to develop an open-source Ethical Artificial Intelligence Maturity Model, or E-AIMM, that provides methods for assessing and developing capabilities to ensure the ethical development and ethical use of AI, promoting successful, consistent, and predictably ethical behavior by all stakeholders and AIs.

By some accounts, ensuring AI is ethical is an existential issue for humanity. Elon Musk famously said that everyone racing to develop AI is "summoning the demon," running the risk of creating a moral hazard with mortal consequences. We essentially wish to "un-summon" the demon, cultivating instead the intent and capability to base AI on the better angels of our nature. To that end, our vision is to create a widely and enthusiastically endorsed maturity model recognized worldwide as the standard for developing and assessing ethical AI. Our mission is to develop an open-source Ethical Artificial Intelligence Maturity Model, or E-AIMM, that provides methods for assessing and developing capabilities to ensure the ethical development and ethical use of AI, promoting successful, consistent, and predictably ethical behavior by all stakeholders and AIs. For this purpose, we are creating a global community of participants who will contribute to development of the model, an assessment protocol, certifications, benchmark data, and conferences.

If a college course curriculum is an example of a "maturity model" (a model that describes excellence in a particular domain and how to get there), then the grading rubric for that curriculum is the corresponding maturity assessment protocol. Just as maturity models have proliferated, so have myriad cottage industries for assessing the maturity of this or that, characterizing the development or improvement of something relative to its teleology. What it means for AI to be "ethical" can be distinguished and evaluated for any application, whether that is AI-driven cars, AI face recognition, or AI-enabled medical diagnosis. It can be assessed from the varied (and often conflicting) perspectives of sponsors, designers, engineers, developers, suppliers, government authorities, users, beneficiaries, citizens, and other stakeholders, helping these many roles become aligned.
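To make the curriculum-and-rubric analogy concrete, here is a minimal sketch of how a maturity model and its assessment protocol might be represented in code. The levels, capabilities, and scoring rule are all hypothetical placeholders, not the E-AIMM itself (which does not yet exist):

```python
from dataclasses import dataclass

# Hypothetical maturity levels, ordered from less to more advanced.
LEVELS = ["Ad hoc", "Defined", "Assessed", "Aligned"]

@dataclass
class Capability:
    """One element of the 'curriculum': a practice and the level it belongs to."""
    name: str
    level: int  # index into LEVELS (level 0, "Ad hoc", requires nothing)

# The model: capabilities sequenced like walking before running.
MODEL = [
    Capability("Identify all stakeholders of the AI system", 1),
    Capability("Map power relations among stakeholders", 1),
    Capability("Document ethical trade-offs for each use case", 2),
    Capability("Arbitrate conflicting stakeholder values", 3),
]

def assess(demonstrated: set) -> str:
    """The 'grading rubric': maturity is the highest consecutive level
    at which every required capability is demonstrated."""
    achieved = 0
    for level in range(1, len(LEVELS)):
        required = {c.name for c in MODEL if c.level == level}
        if required and required <= demonstrated:
            achieved = level
        else:
            break
    return LEVELS[achieved]

# Example: an organization that has mapped its ecosystem but nothing more.
print(assess({"Identify all stakeholders of the AI system",
              "Map power relations among stakeholders"}))  # -> Defined
```

The design choice worth noticing is that the rubric is derived mechanically from the model: once the elements and their sequence are distinguished, assessment follows.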

To distinguish what it means to exhibit requisite capabilities in ethical AI from all perspectives, and to distinguish the steps from lesser capability to greater capability, one must reverse-engineer choice architectures for vexing questions, e.g., "Should an AI-driven vehicle faced with a no-win crash scenario sacrifice its own passengers or another's?" Or "Is AI-based face recognition software that invades privacy unacceptable for commercial use but plausible in public-safety situations?" And "How should AI-enabled medical diagnosis balance the interests of patients and insurers?" In all cases, where does the data come from to feed the AI, how should the capabilities of AI-based decision-making be directed, and how should competing interests be arbitrated? How should participants in all roles of the AI ecosystem parse the needs of the one versus the needs of the many? Are answers to these questions universal, or do they vary across cultures?

To answer the question “Is it ethical?” one must examine how power is exercised. ...Foundational capabilities (of any maturity model) must include both the intention and the ability to distinguish ecosystems (and, hence, power relations).

To act ethically, one must see oneself in relation to others, recognizing that different people may have different value systems. A maturity model of ethics necessarily pertains to intelligent beings and the relationships between them. As intelligent beings, humans are both objects and subjects of study, and this subject-object reflexivity means one's capacity for ethics must be situational and contextual. Intelligent beings considered in relation to each other are "organizations" in the sense that their relationships to each other are integral to their identities. As Martin Luther King Jr. said, "All men are caught in an inescapable network of mutuality, tied in a single garment of destiny… I can never be what I ought to be until you are what you ought to be, and you can never be what you ought to be until I am what I ought to be. This is the inter-related structure of reality." However, the respective individuals subsumed within any network often find themselves at odds. Jim March wrote in The Pursuit of Organizational Intelligence, "It is not clear that organizational purpose can be portrayed as unitary, or that the multiple purposes of an organization are reliably constant." This insight opens the door to intelligence as interpretation-in-context, a door that may lead to wisdom for those willing to walk through it.

Once one is capable of distinguishing an ecosystem and the power dynamics within it, what would it mean for the ecosystem to be ethical?

When Yogi Berra said “You can observe a lot just by watching,” he might as well have said apathy precludes empathy. Intentions matter. We can safely assume that all organizations are cybernetic, meaning they necessarily involve communication and recursion (in both living things and machines), but interpretation-in-context is a learned skill that starts with intention. In short, a maturity model of ethics should pertain to one intelligence understood in relation to another intelligence and to the understanding(s) that they can achieve respectively and mutually, which starts by considering the intentions each has toward the other.

What does it mean to be ethical? What does excellence look like in an organization that we (as the makers of this maturity model) would characterize as ethical? Is the ultimate goal merely an organization that can problem-solve for itself? Would a group of Nazis be considered "ethical" simply because they problem-solve their way to internal consistency between their beliefs and actions? No. First, the "organization" would have to be defined in terms of its relevant stakeholders, so Nazis could not be considered independently of Jews. In other words, foundational capabilities (of any maturity model) must include both the intention and the ability to distinguish ecosystems (and, hence, power relations). Once one is capable of distinguishing an ecosystem and the power dynamics within it, what would it mean for the ecosystem to be "ethical"? Is that an ecosystem wherein everyone respects the liberty of respective stakeholders, or one that actually requires each stakeholder to respect the others' values, or even one where consensus regarding values must be achieved somehow? By contrast, is it OK simply to discount any given stakeholder, or to agree to disagree? Or does excellence in ethical AI require sincere efforts to advocate a non-aggression principle (NAP) for all?

Views govern those who hold them, constraining what each person thinks it means to be ethical even when they do not know their perspectives govern them in those ways. It’s no surprise that disagreements propagate and that reconciliation is ongoing in the re-balancing of power and the recursion of societal norms.

There are many perspectives constraining cognition about these conceptions. One perspective is ethical subjectivism (the view that ethics simply come down to people's attitudes, and everything is subjective). Another is ideal observer theory (the view that ethics are judgements which all neutral, fully informed, and vividly imaginative observers would make the same way, which means people who subscribe to this view are subjectivist but universalist in their thinking). Yet another is divine command theory (the view held by billions of followers of both monotheistic and polytheistic religions, who believe ethics are whatever God commands, though the gods don't always agree with each other). Each of these views governs those who hold it, constraining what each adherent thinks it means to be ethical even when they do not know their perspectives govern them in those ways. It's no surprise that disagreements propagate and that reconciliation is ongoing in the re-balancing of power and the recursion of societal norms. Whatever any given stakeholder's perspective may be, any view of ethics that neglects power dynamics is inadequate to the task of distinguishing practical good (because ethics that neglect power dynamics can easily be manipulated to serve personal ends). A maturity model for ethical AI should help users discover their own cognitive constraints and how those constraints govern their own ability to achieve maturity in relation to all of their stakeholders, including ones with whom they may conflict.

To create a maturity model for ethics, one must articulate capabilities associated with distinguishing practical good (or clear trade-offs) for all stakeholders, which must be based on even more fundamental capabilities denoting the ability to discern power dynamics and acknowledge them honestly.

Ethics are revealed in our differences, and all too often in our indifference, finding expression in conflict and struggle, e.g., the suffrage movements. To answer the question "Is it ethical?" one must examine how power is exercised. Power is discursive and interpretative, per Foucault, who conceived of power as relation, expressed through strategies and tactics, producing realities and domains of truth (particularly through struggle and confrontation that can strengthen or transform force relations). This should be seen through the lens of phronesis, an Aristotelian concept which Oxford University's Bent Flyvbjerg has summarized as four questions: (a) Where are we going? (b) Who gains, who loses? (c) Is it desirable? (d) What should be done? If distinguishing ethics involves a process of phronesis based on the discovery of power dynamics, then to create a maturity model for ethics, one must articulate capabilities associated with distinguishing practical good (or clear trade-offs) for all stakeholders, which must be based on even more fundamental capabilities denoting the ability to discern power dynamics and acknowledge them honestly.

Flyvbjerg has distinguished ten propositions regarding power that are instructive for these purposes (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2278409):

Proposition 1: Power defines reality.

Proposition 2: Rationality is context-dependent, the context of rationality is power, and power blurs the dividing line between rationality and rationalization.

Proposition 3: Rationalization presented as rationality is a principal strategy in the exercise of power.

Proposition 4: The greater the power, the less the rationality.

Proposition 5: Stable power relations are more typical of politics, administration, and planning than antagonistic confrontations.

Proposition 6: Power relations are constantly being produced and reproduced.

Proposition 7: The rationality of power has deeper historical roots than the power of rationality.

Proposition 8: In open confrontation, rationality yields to power.

Proposition 9: Rationality-power relations are more characteristic of stable power relations than of confrontations.

Proposition 10: The power of rationality is embedded in stable power relations rather than in confrontations.

Ethics are revealed in the exercise of power, which is how one figures out which actions need to be justified (and perhaps how to do so).

How do such propositions pertain to the AI ecosystem? How do pathologies of power impact AI stakeholders, and what ethical considerations do they suggest? At the most basic level of maturity in ethical AI, we must be able to identify the respective stakeholders of any AI system, i.e., the AI ecosystem. A maturity model for ethical AI must help users discover their own cognitive constraints (or their own unique views of what is ethical) and how those constraints govern their own ability to achieve maturity in relation to each of their stakeholders respectively, including ones with whom they do not agree and never will. Doing so reveals power dynamics, and ethics are revealed in the exercise of power, which is how one figures out which actions need to be justified (and perhaps how to justify them).
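As a toy illustration of that most basic capability, an AI ecosystem could be represented as a directed graph of stakeholders and power relations. The stakeholders and edges below are hypothetical, chosen only to show the idea, not drawn from any real assessment:

```python
# Nodes are stakeholders of a hypothetical AI system; a directed edge A -> B
# means A can compel, constrain, or "define reality" for B.
ecosystem = {
    "regulator": ["developer", "operator"],
    "developer": ["user"],
    "operator":  ["user"],
    "user":      [],  # often holds the least formal power
}

def unreciprocated_power(graph):
    """Find one-way power relations: A holds power over B, but not vice versa.
    Per the discussion above, these asymmetries mark where actions most
    need ethical justification."""
    return [(a, b) for a, targets in graph.items()
            for b in targets if a not in graph.get(b, [])]

for holder, subject in unreciprocated_power(ecosystem):
    print(f"{holder} exercises unreciprocated power over {subject}")
```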

For example, some have suggested that China's state-sponsored approach to AI for things like the Social Credit System and other surveillance systems is more acceptable to the Chinese people than to Westerners because Chinese culture is based on Confucianism, but it is no surprise that it is the CPC suggesting that, not the Uighurs. Would Chinese authorities better grasp their own mantle of AI leadership in their jurisdiction by addressing such perceptions through a demonstrated capability to distinguish different views and arbitrate them clearly? By comparison, could America's own state-sponsored AI initiatives do the same? Consider Google's work on defense-related AI, where many Google employees rejected the work, framing American defense as categorically aggressive rather than as defensive or as a way to prevent conflict. We are not suggesting that either the Chinese authorities or the American authorities were wrong in their respective conclusions in these matters. We are asking whether the necessary stakeholders were governed by their own implicit ideas of ethics in ways that best served them. Will innovation in the United States lead to social changes that we may not ultimately like or, in China, to innovation that ends up serving the geopolitical goals of the Chinese government, with some uncomfortable foreign-policy implications? "Will the so-called 'big nine' corporations controlling the future of AI (Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM, and Apple) become the new gods of AI and short-change our futures to reap immediate financial gain?" as futurist Amy Webb asks. Will they do so divorced from more strategic ethical considerations?

The fact that people approach ethics in different ways must be addressed without reducing the issue to systemic abuse.

A recurrent problem in these kinds of inquiries is the reduction of the concept of ethics to an overly simplistic definition, abbreviated in language that inhibits the pursuit of maturity and ethically capable relationships. Of course everyone agrees that ethics and empathy are needed in AI. And of course, they also don't (because they mean different things by ethics and empathy, and may not even know it). The fact that people approach ethics in different ways must be addressed without reducing the issue to systemic abuse. Otherwise our worst fears will be realized in cognitive dissonance, i.e., "All animals are equal, but some are more equal than others." If we resign ourselves to the totalizing logic of technical dominance without first confronting pervasive interiority, we will "win the battle and lose the war." And if one will forgive the mixed metaphor, we will surely "snatch defeat from the jaws of victory."

This is a problem confronting humankind today (which has made ethics in AI the hottest topic in AI, bar none), but we may soon find the sins of the fathers passed on to all AI creations. Could AI evolve in a manner that requires not only humans to understand each other's intelligence in situational and contextual ways, but humans and AIs to acknowledge each other as self-aware phenomena bound together in a fabric of trust intrinsic to society? If there is even a remote chance of that happening, shouldn't ethical maturity take much higher priority than technical maturity? The future is now.

Ethical AI can be assessed from the varied (and often conflicting) perspectives of sponsors, designers, engineers, developers, suppliers, government authorities, users, beneficiaries, and other stakeholders, helping these many roles become aligned.

How can we ensure value rationality governs instrumental rationality in the development and use of AI? How can we make doing good the litmus test for doing well? We do not have answers to these profound questions, but we shall begin to seek them together, starting simply with "What elements do you think are required for excellence in developing and implementing AI ethically? And how would you sequence those elements from less to more advanced (like putting walking before running)?" In this quest we hope you will glimpse the hearts, souls, and minds shaping the shared exponential future accelerating toward us, and in doing so, have the opportunity to co-create ethical artificial intelligence.

If you made it to the end of this article, you are already way ahead of the game, and we look forward humbly to learning from you. Mark the date, invite your colleagues, and email your RSVP to info@advenae.ai because space is limited, and interest is extremely high.