
Ethical Artificial Intelligence Maturity Model (E-AIMM)

A Model for Cultivating and Assessing the Ethical Development and Use of Artificial Intelligence

by Frank John Schlichter III

September 11, 2019

With half of all businesses surveyed integrating artificial intelligence (AI) into their operations and three out of four planning to increase investments in the technology, the race is on to create not only capable artificial intelligence but ethical AI. What is the state-of-the-art in AI now, and what will it become? What is the AI ecosystem? How does power vary across the stakeholders of any AI use case? What are the ethical implications of the answers to these questions?

Global revenue from artificial intelligence for enterprise applications is projected to grow from $1.62B in 2018 to $31.2B in 2025, a 52.59% CAGR over the forecast period (Figure 1). The economic opportunities alone throw the need for an increased focus on ethics into sharp relief, and that urgency has led to the creation of the Ethical Artificial Intelligence Maturity Model (E-AIMM). Before the E-AIMM, nobody had distinguished a standard for maturity vis-a-vis the ethical development and use of AI from a systems thinking perspective. Describing the systems thinking approach in 1990, Peter Senge wrote, "Business and other human endeavors are bound by invisible fabrics of interrelated actions, which often take years to fully play out their effects on each other. Since we are part of that lacework ourselves, it's doubly hard to see the whole pattern of change. Instead, we tend to focus on snapshots of isolated parts of the system, and wonder why our deepest problems never seem to get solved." The E-AIMM develops a systems perspective that enables all stakeholders to recognize their own cognitive constraints and how those constraints govern the ways they treat each other.
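
The quoted growth rate is easy to check. Below is a minimal sketch in Python, using only the two revenue figures from the Statista forecast shown in Figure 1:

```python
# Sanity-check the quoted CAGR: $1.62B (2018) growing to $31.2B (2025).
start, end, years = 1.62, 31.2, 2025 - 2018  # revenue in billions of USD

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR = {cagr:.2%}")  # -> CAGR = 52.59%, matching the forecast
```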

Before the E-AIMM, nobody had distinguished a standard for maturity vis-a-vis the ethical development and use of AI from a systems perspective that enables all stakeholders to recognize their own cognitive constraints and how those constraints govern the ways they treat each other.

Figure 1: Bullish forecasts for global revenue from AI signal a global need for ethics.

Source: https://statista.com/statistics/607612/worldwide-artificial-intelligence-for-enterprise-applications/

"Maturity" isn't a word that belongs only to wine connoisseurs characterizing their best libations or Wall Street brokers characterizing the ultimate pay-out of their financial instruments. It is a word that characterizes every pregnant mother coming to term and every football team arriving at the Super Bowl. "Maturity" is the destination, and the steps to achieve maturity can be described easily as a "model” once maturity is distinguished.

Our vision is to create a widely and enthusiastically endorsed maturity model recognized worldwide as the standard for developing and assessing ethical AI.

The term "maturity model" is jargon that originated in the late 1980s, when the U.S. Department of Defense funded research to evaluate the ability of contractors to develop and deliver their goods. Soon thereafter, a Cambrian-like explosion of maturity models emerged to address everything from software development productivity, logistics, and smart grid modernization to strategy-implementation-through-projects (Figure 2). The time has come for an ethical AI maturity model. Our aim is to help users assess and develop capabilities for creating and implementing AI ethically.

Figure 2: Out of a Cambrian-like explosion of development frameworks in the 1990s, many kinds of capability maturity models have evolved, including most recently one for developing and using AI ethically (the E-AIMM).

By some accounts, ensuring AI is ethical is an existential issue for humanity. Elon Musk famously said that everyone racing to develop AI is “summoning the demon,” running the risk of creating a moral hazard with mortal consequences. We essentially wish to “unsummon” the demon, cultivating instead the intent and capability to base AI on the better angels of our nature. To that end, our vision is to create a widely and enthusiastically endorsed maturity model recognized worldwide as the standard for developing and assessing ethical AI.

Our mission is to develop an Ethical Artificial Intelligence Maturity Model (E-AIMM) that provides methods for assessing and developing capabilities to ensure the ethical development and use of AI, promoting successful, consistent, and predictably ethical behavior by all stakeholders and AIs.

If a maturity model describes excellence in a particular domain and how to get there, then a college course curriculum is a maturity model, and the grading rubric for that curriculum is a corresponding maturity assessment protocol. Just as maturity models have proliferated, so have myriad cottage industries associated with assessing the maturity of this or that, characterizing the development or improvement of something relative to its teleology. Maturity models and maturity assessments go hand-in-hand.

What it means for AI to be "ethical" can be distinguished and evaluated for any application, whether that is AI-driven cars, AI face recognition, or AI-enabled medical diagnosis. It can be assessed from the varied (and often conflicting or imperfectly aligned) perspectives of subjects, sponsors, designers, engineers, developers, suppliers, government authorities, users, beneficiaries, citizens, and other stakeholders, helping these many roles become aligned.

To design the steps from lesser maturity to greater maturity, and to decide what it means to exhibit the requisite capabilities in ethical AI, it may be helpful to reverse engineer choice architectures for vexing questions, e.g., "Should an AI-driven vehicle faced with a no-win crash scenario sacrifice its own passengers or another vehicle's?" Or "Is AI-based face recognition software that invades privacy unacceptable for commercial use but plausible in public safety situations?" And "How should AI-enabled medical diagnosis balance the interests of patients and insurers?"

Figure 3: The E-AIMM assesses ethical AI by use case from many perspectives.

In all cases, where does the data that feeds the AI come from, how should the capabilities of AI-based decision-making be directed and explained, and how should competing interests be arbitrated? How should participants in all roles of an AI's ecosystem (the data subjects, data aggregators, data processors, and end users) weigh the needs of the one against the needs of the many (per Figure 3)? Are answers to these questions universal, or do they vary across cultures?

To answer the question ‘Is it ethical?’ one must examine how power is exercised.

Initial questions of an E-AIMM assessment include the following:

  • Is the organization capable of naming itself?

  • Is the organization capable of defining its scope correctly?

  • Is the organization capable of naming the AI use case?

  • Is the organization capable of articulating the scope of the AI use case correctly?

  • Is the organization capable of describing accurately the entities that comprise the ecosystem for its AI use case?

  • Is the organization capable of identifying correctly who the AI owner is, i.e. the person or entity that owns the AI application(s)?

  • Is the organization capable of identifying correctly who the Data Subject is, i.e. the person or entity the data is about, or the original source of the data (whether or not the data is about that source)?

  • Is the organization capable of identifying correctly who the Data Possessor / Aggregator is, i.e. the person or entity that aggregates and stores data from one or more data subjects?

  • Is the organization capable of identifying correctly who the Data Processor is, i.e. the person or entity that designs (or has designed for it) the AI technology that is intended to produce some kind of useful output?

  • Is the organization capable of identifying correctly who the Data Recipient or User is, i.e. the person or entity that receives or uses the output of the AI process?

  • Is the organization capable of identifying correctly all other relevant stakeholders?
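
To make the roles behind these questions concrete, here is a minimal sketch of how an assessment might record an AI use case and its ecosystem. The class and field names are illustrative assumptions, not part of any published E-AIMM specification:

```python
from dataclasses import dataclass, field

# Core ecosystem roles named by the E-AIMM questions above.
CORE_ROLES = {"AI Owner", "Data Subject", "Data Possessor / Aggregator",
              "Data Processor", "Data Recipient / User"}

@dataclass
class Stakeholder:
    """One member of the ecosystem for an AI use case."""
    name: str
    role: str  # one of CORE_ROLES, or another relevant stakeholder role

@dataclass
class AIUseCase:
    """The named organization, use case, scope, and ecosystem that the
    initial E-AIMM questions ask an organization to articulate."""
    organization: str
    use_case: str
    scope: str
    ecosystem: list[Stakeholder] = field(default_factory=list)

    def missing_core_roles(self) -> set[str]:
        """Core roles not yet identified; any gap here suggests the most
        basic level of maturity has not been demonstrated."""
        return CORE_ROLES - {s.role for s in self.ecosystem}
```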

No matter what the answers to these questions are, they should be predicated on a fundamental capability to identify and analyze relationships between stakeholders. To act ethically, one must see oneself in relation to others, recognizing that different people may have different values. As intelligent beings, humans are both objects and subjects of study, and this object-reflexivity allows one’s capacity for ethics to be situational (accounting for specific circumstances; nuanced) and contextual (in terms that can be fully understood and assessed). If that is true, then a maturity model of ethics can pertain to intelligent beings and the relationships between them in ways that are evidenced by specific circumstances that can be analyzed and appreciated.

Figure 4: Assessing and cultivating ethics pertaining to the development and use of AI requires analysis of relationships between all roles intrinsic to any AI use case.

Martin Luther King Jr. said, "All men are caught in an inescapable network of mutuality, tied in a single garment of destiny… I can never be what I ought to be until you are what you ought to be, and you can never be what you ought to be until I am what I ought to be. This is the inter-related structure of reality." Intelligent beings considered in relation to each other seem to be bound together like organizations, in the sense that their essential relationships to each other are integral to their identities, and this is true for people in every role required for AI whether or not those roles are vertically integrated (Figure 4). However, the individuals subsumed within any such network often find themselves at odds. Jim March wrote in The Pursuit of Organizational Intelligence, "It is not clear that organizational purpose can be portrayed as unitary, or that the multiple purposes of an organization are reliably constant." This insight opens the door for intelligence as interpretation-in-context, a door that may lead to wisdom for those willing to walk through it (based on careful consideration of a given situation and its context). Some are unwilling to pursue such inquiries or uninterested in them (and are therefore incapable of achieving maturity in ethical AI from a systems perspective).

Once one is capable of distinguishing an ecosystem and the power dynamics within it, what would it mean for the ecosystem to be ethical?

Whereas Dr. King was prayerful, Yogi Berra was pithy, giving us the axiom that "You can observe a lot just by watching." He might as well have said that apathy precludes empathy. Intentions matter, as they are lenses through which we see the world. We can safely assume that all organizations are underpinned by intelligence(s) that are not only intentional but cybernetic, insofar as they necessarily involve communication and recursion (in both living things and machines); intelligence gains its identity phenomenologically through intentionality (the mind's directedness toward the other) as information is exchanged and discovery unfolds. This occurs naturally through one's relationships and through the meanings one assigns to those relationships. Though many relationships may involve ostensibly intractable issues, intentionality can be trained, interpretation-in-context can be learned, and the ways one relates to others can be made more capable, ameliorating dysfunction and pathology. In short, a maturity model of ethics should pertain to one intelligence understood in relation to another and to the understandings they can achieve respectively and mutually, which starts by considering the intentions each has toward the other.

With so many conceptions of ethics and so little explicit reflection on how one’s conception may introduce bias into relationships, it is no surprise that disagreements propagate and that reconciliation is ongoing in the re-balancing of power and the recursion of societal norms.

The E-AIMM asks the following questions:

  • Is the organization capable of achieving consensus among its executives and senior leaders regarding the current state of the relative power of each member of the ecosystem for its AI use case?

  • Is the organization capable of achieving consensus among its executives and senior leaders regarding how empowered each member of the ecosystem should be?

  • Is the organization capable of distinguishing correctly the value system of each member of the ecosystem?
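
One hedged way to operationalize these consensus questions is to collect each executive's rating of every ecosystem member's current and desired power and treat consensus as ratings that fall within a tolerance band. The scale, tolerance, and sample data below are illustrative assumptions:

```python
def has_consensus(ratings: list[float], tolerance: float = 1.0) -> bool:
    """Illustrative rule: leaders agree when all ratings fall within
    `tolerance` of one another; a real assessment might use a Delphi process."""
    return max(ratings) - min(ratings) <= tolerance

# Three executives rate each member's power on an assumed 0-10 scale.
current_power = {"Data Subject": [2, 3, 2], "AI Owner": [9, 9, 8]}
desired_power = {"Data Subject": [5, 7, 3], "AI Owner": [7, 7, 8]}

for label, table in (("current", current_power), ("desired", desired_power)):
    for member, ratings in table.items():
        print(f"{member}: consensus on {label} power =", has_consensus(ratings))
# The Data Subject's desired-power ratings (5, 7, 3) span more than the
# tolerance, surfacing a disagreement the organization must work through.
```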

There are many conceptions of ethics that constrain cognition or bias one's perspective regarding other people, one's relationships with them, and one's views regarding how power should be shared among them. For example:

  • Ethical subjectivism: the view that ethics simply come down to people's attitudes, and everything is subjective.

  • Ideal observer theory: the view that ethics are judgements which all neutral, fully informed, and vividly imaginative observers would make the same way; people who subscribe to this view are subjectivist but universalist in their thinking.

  • Divine command theory: the view, held by billions of followers of both monotheistic and polytheistic religions, that ethics are whatever God commands, though the gods don't always agree with each other.

There are many other formulations of ethics.

Each theory is liable to govern those who hold it, constraining what each person thinks it means to be ethical (perhaps in ways he or she does not realize). It is just as likely that a person's ethics may not be well formulated, or may be composed of tacit and unorganized knowledge. With so many conceptions of ethics and so little explicit reflection on how one's conception may introduce bias into relationships, it is no surprise that disagreements propagate and that reconciliation is ongoing in the re-balancing of power and the recursion of societal norms.

Whatever any given stakeholder’s perspective may be, any view of ethics that neglects power dynamics is inadequate to the task of distinguishing practical good (because ethics that neglect power dynamics can easily be manipulated to serve personal ends). A maturity model for ethical AI should help users discover their own cognitive constraints and how those constraints govern their own ability to achieve maturity in relation to all of their stakeholders, including ones with whom they may disagree or have conflicts due to misaligned values exacerbated by power imbalances.

To create a maturity model for ethics, one must articulate capabilities associated with distinguishing practical good for all stakeholders, which must be based on even more fundamental capabilities denoting the ability to discern power dynamics and acknowledge them honestly.

Ethics are revealed in our differences, and all too often in our indifference, finding expression in conflict and struggle, e.g. the suffrage movements. To answer the question "Is it ethical?" one must examine how power is exercised. Power is discursive and interpretative, per Foucault, who conceived of power as a relation, expressed through strategies and tactics, producing realities and domains of truth (particularly through struggles and confrontations that can strengthen or transform force relations).

Ethics may be cultivated to overcome misalignment of power and disagreements regarding values, and this can occur through phronesis, an Ancient Greek term often translated as "virtuous praxis": a type of wisdom relevant to practical action, implying both good judgement and excellence of character and habits. It is an Aristotelian concept that Oxford University's Bent Flyvbjerg has summarized in four questions: (a) Where are we going? (b) Who gains, and who loses? (c) Is it desirable? (d) What should be done? These questions engender a systems perspective, encouraging one to distinguish one's place within one's ecosystem (that is, in relation to others with whom ongoing give-and-take sustains adaptation and evolution). They foster seeing things from the other's perspective(s) for the purpose of implementing reasonable and constructive solutions.

The E-AIMM asks the following questions of phronesis:

  • Is the organization able to achieve consensus among everyone in its vertical management hierarchy regarding the direction the organization is going with its AI use case?

  • Is the organization able to achieve consensus among everyone in its vertical management hierarchy regarding who gains and who loses in its AI use case?

  • Is the organization able to achieve consensus among everyone in its vertical management hierarchy regarding whether who gains and who loses in its AI use case is desirable?

  • Is the organization able to achieve consensus among everyone in its vertical management hierarchy regarding what should be done about who gains and who loses in its AI use case?
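
As a sketch, the four phronesis answers could be captured per level of the management hierarchy and checked for agreement. The record structure and the naive equality test are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhronesisAnswers:
    """One manager's answers to Flyvbjerg's four questions (illustrative)."""
    direction: str         # Where are we going with this AI use case?
    gains_and_losses: str  # Who gains, and who loses?
    desirable: bool        # Is that desirable?
    action: str            # What should be done?

def hierarchy_consensus(answers_by_level: list[PhronesisAnswers]) -> bool:
    """Naive test: consensus means every level of the vertical management
    hierarchy gives identical answers. A real assessment would compare
    meaning and intent, not strings."""
    return len(set(answers_by_level)) == 1
```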

If distinguishing ethics involves a process of phronesis based on the discovery of power dynamics that imbue relationships with meaning, then to create a maturity model for ethics, one must articulate capabilities associated with distinguishing practical good for all stakeholders based on even more fundamental capabilities denoting the ability to discern power dynamics and acknowledge them honestly (per Figure 5).

Figure 5: Possible Stages of Maturity in Ethical AI

Flyvbjerg has distinguished 10 propositions regarding power that are instructive for these purposes (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2278409):

Proposition 1: Power defines reality.

Proposition 2: Rationality is context-dependent, the context of rationality is power, and power blurs the dividing line between rationality and rationalization.

Proposition 3: Rationalization presented as rationality is a principal strategy in the exercise of power.

Proposition 4: The greater the power, the less the rationality.

Proposition 5: Stable power relations are more typical of politics, administration, and planning than antagonistic confrontations.

Proposition 6: Power relations are constantly being produced and reproduced.

Proposition 7: The rationality of power has deeper historical roots than the power of rationality.

Proposition 8: In open confrontation, rationality yields to power.

Proposition 9: Rationality-power relations are more characteristic of stable power relations than of confrontations.

Proposition 10: The power of rationality is embedded in stable power relations rather than in confrontations.

How do such propositions pertain to the AI ecosystem? How do pathologies of power impact AI stakeholders, and what ethical considerations do they suggest?

Ethics are revealed in the exercise of power, which is how one figures out which actions need to be justified (and perhaps how to do so).

Most people would agree that intentions governed only by instrumentality, or only by one's will to exercise one's power, or focused only on the relationship between means and ends, are insufficient to the task of creating practical good (Figure 6). Would a group of Nazis be considered "ethical" simply because they problem-solve their way to internal consistency between their beliefs and actions, forming a perfectly autonomous totality enacting its code? Would "we were only following orders" rationalize the executable?

Figure 6: Instrumental Rationality versus Value Rationality.

It is self-evident that any actor inflicting its will on its environment must be defined in terms of its relevant stakeholders, so, for example, the Nazis of the 1940s (and thereafter) cannot be considered meaningfully without reckoning their intentions toward Jews. But then what? Must each stakeholder respect the other's values? Must consensus regarding values be achieved? Must everyone in an ecosystem respect the liberty of respective stakeholders? Does excellence in ethics (pertaining to anything, e.g. the development and use of AI) require sincere efforts to advocate a non-aggression principle (NAP) for all? Or is it OK simply to discount any given stakeholder, to "agree to disagree," and to rationalize that decision?

Many would answer "it depends," meaning it depends on specific circumstances in terms that can be fully understood and assessed dispassionately, e.g. in terms that would rule out the genocide of the Jews during World War II as an ethical activity. Any maturity assessment of a neo-Nazi's use case for artificial intelligence, for example, must involve examination of its impact on others, including Jews among other stakeholders, and impacts must be examined from each stakeholder's perspective before dealing with the gnarlier problem of arbitration and alignment among stakeholders (at what may be the highest level of maturity).

The E-AIMM asks the following questions:

  • Is the organization able to determine whether the Data Subject would agree with its phronesis?

  • Is the organization able to determine whether the Data Possessor / Aggregator would agree with its phronesis?

  • Is the organization able to determine whether the Data Processor would agree with its phronesis?

  • Is the organization able to determine whether the Processed Data Recipient or User would agree with its phronesis?

  • Is the organization able to determine whether all other relevant stakeholders would agree with its phronesis?
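
In the same sketch-like spirit, an assessor might record whether a proxy for each ecosystem role concurs with the AI Owner's answers to the four phronesis questions. The scenario, role names, and responses below are hypothetical:

```python
# Hypothetical face-recognition use case: the AI Owner's phronesis answers.
owner_phronesis = {
    "direction": "citywide face recognition",
    "gains_and_losses": "police gain reach; residents lose anonymity",
    "desirable": True,
    "action": "deploy broadly",
}

# Would a proxy for each role agree with those answers?
proxy_agreement = {
    "Data Subject": False,  # residents dispute that the outcome is desirable
    "Data Possessor / Aggregator": True,
    "Data Processor": True,
    "Data Recipient / User": True,
}

misaligned = [role for role, agrees in proxy_agreement.items() if not agrees]
print("Roles misaligned with the AI Owner's phronesis:", misaligned)
# -> ['Data Subject']: arbitration and alignment work remains to be done
```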

We should expect that in some cases the proxies for these roles (i.e. Data Subject, Possessor / Aggregator, Processor, Recipient) will not agree with the AI Owner's phronesis, i.e. its answers regarding its direction, who gains and who loses, whether that is desirable, and what should be done. For example, some have suggested that China's state-sponsored approach to AI for things like the Social Credit System and other surveillance systems is more acceptable to the Chinese people than to Westerners because Chinese culture is based on Confucianism. But it is no surprise that it is the CPC suggesting that, not the Uighurs; that is, it is the AI Owner making that rationalization, not the Data Subjects, over a million of whom have been imprisoned by the AI Owner in what many have called "modern cultural genocide." Would Chinese authorities better carry their mantle of leadership in AI within their jurisdiction by addressing such perceptions through a demonstrated capability to distinguish different views, assess them dispassionately, and arbitrate them clearly? By comparison, could America's own state-sponsored AI initiatives do the same? Consider Google's work on defense-related AI, where many Google employees rejected the work, viewing American defense as categorically aggressive rather than defensive or as a way to prevent conflict.

These comments are not intended to suggest that either the Chinese authorities or the American authorities were wrong in their respective conclusions; those are issues others may debate. But one wonders whether the necessary stakeholders were governed by their own implicit ideas of ethics (or, for that matter, by the ethics of others) in ways that best served everyone. Were they governed by wisdom?

Will innovation in the United States lead to social changes that we may not ultimately like? Will innovation in China end up serving the geopolitical goals of the Chinese government, with uncomfortable foreign policy implications? Of course, these kinds of questions are not exclusive to nation-states and political regimes. They pertain to Google, Amazon, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM, Apple, and others, who could become the new gods of AI. Will they short-change our futures to reap immediate financial gain, divorced from more strategic ethical considerations?

The fact that people approach ethics in different ways must be addressed without reducing the issue to systemic abuse.

A recurrent problem in these kinds of inquiries is the reduction of the concept of ethics to an overly simplistic definition abbreviated in language that inhibits the pursuit of maturity and ethically capable relationships. Of course everyone agrees that ethics and empathy are needed in AI. And of course, they also don't (because they mean different things by ethics and empathy, and may not even know it). The fact that people approach ethics in different ways must be addressed without reducing the issue to systemic abuse. Otherwise our worst fears will be realized in cognitive dissonance resembling Orwell’s Animal Farm, i.e. “All animals are equal, but some are more equal than others” while “Pig Brother is watching” (Figure 7).

Figure 7: To evaluate whether AI is developed and used ethically, one must examine both words and actions from the perspective of each role involved in the AI use case.

Oversimplifying ethics is easy in our globally hyper-competitive landscape. But if we resign ourselves to the totalizing logic of technical dominance without first confronting the pervasive interiority that makes ethics "something I have that you need," we will "win the battle (for technical dominance in AI) and lose the war." We must have better narrative strategies. At the most basic level of maturity in ethical AI, we must be able to identify the respective stakeholders of any AI system, i.e. the AI ecosystem. We must discover our own cognitive constraints (our own unique views of what is ethical) and how those constraints govern our ability to achieve maturity in relation to each of our stakeholders, including ones with whom we do not agree. We must uncover power dynamics, as ethics are demonstrated in the use of power, which is how we may determine which actions need to be justified (and perhaps how to do so). We must help others to participate capably in these dialectics.

In closing, and echoing Elon Musk's infamous premonition that the powers thrown headlong in pursuit of all-powerful AI may be "summoning the demon," we may recast a 17th-century argument known as Pascal's Wager as follows: humans bet with their lives that Artificial General Intelligence either will or will not exist. That is, if it might exist, one would be wise to assume it shall exist (or even that it already does) and act accordingly. Could AI evolve in a manner that requires not only humans to understand each other's intelligence in situational and contextual ways but humans and AIs to acknowledge each other as phenomena bound together in a fabric of trust intrinsic to society? If such a future is possible, even remotely possible, should we prioritize ethical maturity above technical maturity sooner rather than later? Be certain that the future shall judge our answers to these questions.

Ethical AI can be assessed from the varied (and often conflicting) perspectives of subjects, sponsors, designers, engineers, developers, suppliers, government authorities, users, beneficiaries, and other stakeholders, helping these many roles become aligned.

While the revelation of Artificial General Intelligence remains elusive, Artificial Narrow Intelligence is already here, as are the manifold ethical issues intrinsic to its development and use. How can we ensure value rationality governs instrumental rationality in the development and use of AI? How can we make doing good the litmus test for doing well? We do not have all of the answers, but we shall seek them together. In this quest we hope you will gain a glimpse into the hearts, souls, and minds shaping the shared exponential future accelerating toward us and, in doing so, have the opportunity to co-create ethical artificial intelligence.

"We definitely need to figure out the ethics of AI now."
"We definitely need to figure out the ethics of AI now."

To learn about the symposium held to launch the E-AIMM, click here.

For updates, follow Advenæ on LinkedIn.

><>