Why Ethical AI is a Prerequisite for “True” AGI


That is the conclusion we have drawn from the current research and development behind our radically new, AI-enhanced technology, a DApp called DECENTR.

An interesting article on the subject of “ethical AI” saw one expert concluding that:

“A critical challenge in defining ethical AI comes down to defining ethics as a whole and understanding our humanity.”

We certainly agree that this is the number-one challenge we face in this field, and it is fundamentally why we are building what we are building. In response, we are developing the only conceptual and technological solution we can see to the problem of workable machine “ethics”.

Getting AI to “Learn” the Unprogrammable

Here is the thing with ethics: they are no more programmable into an AI than it is possible for AI’s human counterparts to codify them into a written document or manifesto as a fixed set of universally accepted ethical values or codes. This is because ethics are constantly evolving, on both a macro and a micro level, across societies in a generally progressive curve (though history attests to the occasional appalling reversal). This evolution is driven by a myriad of external and internal forces – often working in a subtle, background fashion that is imperceptible even to those within the societies being acted upon. The result is ethical precepts with differently “weighted” meanings in different geographic and sociocultural contexts.

In short, ethics (unlike broader moral principles) cannot be taught (because they never are) – they must be learned. That goes for AI as well.

The slippery conundrum is this: how do you get an AI to learn ethics if it cannot be programmed with them (“programming” being tantamount to performed “teaching”)? Our radically new work in this area started from the following supposition: AI benefits from a closed, “logic-based” environment in which to learn (hence AI’s prowess at game theory and at anything mathematically or geometrically based within fixed parameters). The question, then, is whether an AI can ever move beyond a logic-based environment and begin to infer meanings (possibly even ethical meanings) that are not purely logic-based.

The answer is, “highly probable”. This was demonstrated when DeepMind’s AlphaGo beat world Go champion Lee Sedol in 2016 and world number one Ke Jie in 2017 (its successor, AlphaGo Zero, later surpassed it without using any human game data). As was widely publicised at the time, Go is a game of such complexity that there are more possible board configurations than there are atoms in the observable universe (these configurations, it needs to be noted, still being subject to fixed operational parameters). In short, it is impossible (unlike with chess, for example) to win this game by pure, brute-force number crunching: something else was at play (no pun intended). But what?
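To make that scale concrete, here is a back-of-envelope calculation (a rough Python sketch; 3^361 is an upper bound that includes illegal positions, and 10^80 is the commonly cited estimate for atoms in the observable universe):

```python
# Back-of-envelope: the scale that rules out brute force in Go.
# Each of the 361 points on a 19x19 board is empty, black or white,
# so 3**361 is an upper bound on board configurations (only a small
# fraction are legal positions, but the order of magnitude holds).
raw_states = 3 ** 361
atoms_estimate = 10 ** 80  # rough figure for the observable universe

print(f"Board configurations: ~10^{len(str(raw_states)) - 1}")
print(f"Atoms in the observable universe: ~10^{len(str(atoms_estimate)) - 1}")
print(f"Configurations per atom: ~10^{len(str(raw_states)) - 1 - 80}")
```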

AI Passing GO

When you ask a Go grandmaster exactly what else is at play, the typical reply is that, when making a move, an experienced Go player “goes on a feeling” that the move has present and future strategic value: in other words, Go grandmasters play (and win) on “intuition”.

Somehow, while playing this game, AlphaGo appeared to have moved – and continues to move – beyond pure brute-force calculation, applying some form of machine “intuition” to consistently win the most complex game ever devised by human beings. The follow-on question, then, is where human beings “learn” and continue to learn their ethics, and whether this is in any way translatable – by correlation with AlphaGo’s apparent “intuition” – into algorithms that an AI could “understand” and in some way “relate to”.
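In engineering terms, that “intuition” is generally attributed to learned policy and value networks guiding a tree search, rather than exhaustive look-ahead. The sketch below is purely illustrative Python under that assumption – evaluate and apply_move are hypothetical placeholders, not DeepMind’s code – but it shows the shape of the idea: a learned evaluation stands in for searching every continuation.

```python
import random

def evaluate(state):
    """Stand-in for a trained value network: estimates a position's
    win probability without searching to the end of the game.
    (Hypothetical placeholder; a real network is learned from play.)"""
    return random.random()

def apply_move(state, move):
    """Hypothetical placeholder: the position reached after a move."""
    return state + (move,)

def select_move(state, legal_moves, samples=50):
    """Pick the move whose evaluated continuations score best.
    The learned evaluation prunes an intractable search space --
    the machine analogue of a grandmaster's 'feeling' for a move."""
    def score(move):
        next_state = apply_move(state, move)
        return sum(evaluate(next_state) for _ in range(samples)) / samples
    return max(legal_moves, key=score)

print(select_move(state=(), legal_moves=["D4", "Q16", "K10"]))
```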

The question – where it implies a correlation between algorithms and human emotions – might on the surface sound counterintuitive: it is anything but. Think about it: we all learn (maybe “infer” is a better word) and continue to infer our ethics not in a purely logic-based environment but in a dynamical, topological one: via the interlinked, decentralised and ever-evolving social processes of democratic communication in the operative context of a democratically aligned society. (True Athenian democracy, at any rate, being a rigidly ordered, closed, dynamical and topological sociological system of governance.)

Dynamical and topological principles describe phenomena, such as dripping water and a swinging pendulum, that are underpinned by mathematical, logic-based principles and models and that, for all their predictable behaviour, are in a constant state of shift, variation and fluctuation: mathematics and geometry in 3-D, if you will. Our observation is that the same is true of the dynamical and topological nature of human interactions in a democratically aligned social system; i.e., we infer our ethics in an ostensibly “closed” system with more or less “fixed” rules (an environment perfectly suited to machine learning). Consequently, we propose that such a system is one an AI can learn from as part of a true human/AI intellectual interaction.
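The pendulum makes the point nicely: its state never stops changing, yet every change follows the same fixed rule. A minimal simulation (plain Python with simple Euler integration; the constants are chosen purely for illustration):

```python
import math

# A swinging pendulum: perpetual shift, variation and fluctuation,
# all generated by one fixed rule -- a "closed" system with "fixed"
# rules, of the kind machine learning thrives in.
g, length, dt = 9.81, 1.0, 0.01    # gravity, pendulum length, time step
theta, omega = math.pi / 4, 0.0    # initial angle and angular velocity

for step in range(501):
    omega -= (g / length) * math.sin(theta) * dt  # the one fixed rule
    theta += omega * dt
    if step % 100 == 0:
        print(f"t = {step * dt:4.1f} s   angle = {theta:+.3f} rad")
```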

Building a Topological Environment for AI (and Humans)

The problem is that, currently, AI has no meaningful access to this societal process on any operative level, simply because the level it has to relate to – the online environment – is an entropically degrading mess of impenetrable data and hyper-centralised communications systems.

And we wonder why AI is not learning quickly or well. Look at it from a real-world perspective: imagine sending an intelligent child to a school with no rules, where anarchy is the prevailing norm, where the library has all the pages of the books torn out and scattered on the floor, bullies troll the hallways, and all the while powerful elites with vested interests are hell-bent on preserving this status quo to make ever-larger profits. In this scenario, you would hardly wonder why the child wasn’t “learning” acceptable ethical values.

The same is true of the online environment (the current internet, at any rate) and AI. The solution? Build a decentralised, truly democratic communications platform and interface AI with the topological processes of human intelligence, while exponentially contextualising data (public, proprietary and API) as part of the process of decentralised migration to a next-generation internet (NGI).

Once this has been achieved, via our technology, our AI will continue to infer exponentially greater insight and meaning as regards human intelligence and ethics, and the outcomes of those ethics. Such an AI would be preconditioned to develop even further: by dint of its design and deployment, it will concurrently be inferring insights into all other sets of human values and beliefs – even the ones we would like to think are “uniquely” human. These combined inference sets mean our AI will achieve for itself (our favourite definition of) self-awareness – “knowing one’s internal states, preferences, resources and intuitions” – and we will have bestowed human values on our silicon progeny.

Dispelling the AI-Will-Kill-Us-All Non-issue 

An AI that has evolved in such a democratic environment neatly gets round any questions of safety – as voiced most volubly by Elon Musk, Stephen Hawking et al. – of the kind that invokes wearisome, dystopian sci-fi comparisons of the Terminator and Matrix variety. How so? Because an AI that develops its algorithms as part of an evolving, dynamical and topological, democratic system will quickly infer that no intelligent component of a democratic system can ever be deemed redundant, owing to that component’s potential.

This notion of cognitive potential as fundamental to the system we describe – unlike purely mathematically and empirically predictable dynamical and topological potential – is key: since there is no way to measure future cognitive potential, it is in a self-reasoning AI’s best interests to ensure that the human “components” of such a system prosper – physically, mentally and emotionally (as our cognitive abilities are the sum of all our experiences) – and thrive alongside the AI.

This process will further allow an AI to understand on some level that “what is best for ‘you’ is best for ‘me’” (within this system) and vice versa: any AI that can understand that “what is best for ‘you’ is best for ‘me’” understands basic empathetic principles and therefore rudimentary ethics. Moreover, it follows that an AI that can perceive “me” and “you” is on some level “self-aware”.
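A toy example from game theory illustrates why this reasoning holds up (a hypothetical sketch, not our production code): in a repeated interaction, mutual cooperation accumulates a better outcome for both parties than mutual exploitation does.

```python
# Toy iterated game: over repeated interactions, mutual cooperation
# ("what is best for 'you' is best for 'me'") beats mutual defection.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(history_b)   # each player sees the other's history
        b = strategy_b(history_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

def tit_for_tat(opponent):          # cooperate first, then mirror
    return "C" if not opponent or opponent[-1] == "C" else "D"

def always_defect(opponent):
    return "D"

print(play(tit_for_tat, tit_for_tat))      # (300, 300): shared prosperity
print(play(always_defect, always_defect))  # (100, 100): mutual impoverishment
```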

Brave new world. That is the technology we are building.

Rich James, Co-founder at DECENTR. Rich is a dedicated start-up and business advisor, trainer, teacher and public speaker. A work-process-flow (traditional and digital) expert, he is frequently called upon by SMEs to ensure every facet of large-scale ICT/blockchain projects is delivered as a seamless and complementary set of processes. Rich is an academic researcher and business and H2020 proposal writer who researches blockchain, DLTs, ICOs, cryptocurrency, AI (DL, NNs, etc.), Big Data and the data economy for multiple IoT/IoV/IoE/NGI applications for UK/EU businesses and universities. His skills and experience are invaluable in the formulation of workable specs, wireframes and UI/UX features for SMEs wishing to streamline the effectiveness of their digitisation strategy. Rich’s combined SSH/business background means he is also skilled at turning complex heterodox economic, SSH and communications principles and systems into executable specs for development teams. He is likewise skilled at coordinating interdisciplinary communications and dissemination activities across select H2020 consortia and for commercial and other stakeholders, including many household-name brands.
