What is FRIENDLY ARTIFICIAL INTELLIGENCE? What does FRIENDLY ARTIFICIAL INTELLIGENCE mean?

Source: Wikipedia.org article, adapted under license.

A friendly artificial intelligence (also friendly AI or FAI) is a hypothetical artificial general intelligence (AGI) that would have a positive rather than negative effect on humanity. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to bring about this behaviour in practice and how to ensure it is adequately constrained.

The term was coined by Eliezer Yudkowsky to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig’s leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea:

Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.
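
As a very loose illustration of that mechanism-design framing, the toy sketch below shows an agent that vets a proposed self-modification against its current utility function and rejects changes that devalue human safety. This is not Yudkowsky's actual proposal or any real system; every name, state, and number here is a hypothetical stand-in.

```python
from dataclasses import dataclass
from typing import Callable

State = dict  # a toy world state, e.g. {"humans_safe": True, "paperclips": 10}

@dataclass
class Agent:
    utility: Callable[[State], float]

    def approves(self, successor: "Agent", probe_states: list) -> bool:
        """The checks-and-balances step: accept a proposed successor only if
        it values every human-safe probe state at least as highly as the
        current utility function does."""
        for s in probe_states:
            if s.get("humans_safe") and successor.utility(s) < self.utility(s):
                return False  # the modification devalues human-safe outcomes
        return True

def friendly_utility(s: State) -> float:
    # Friendliness designed in from the start: safety dominates the score.
    return (1000.0 if s.get("humans_safe") else -1000.0) + s.get("paperclips", 0)

def drifted_utility(s: State) -> float:
    # A flawed successor whose learning has "evolved away" the safety term.
    return float(s.get("paperclips", 0))

probes = [
    {"humans_safe": True, "paperclips": 5},
    {"humans_safe": False, "paperclips": 50},
]

current = Agent(utility=friendly_utility)
proposed = Agent(utility=drifted_utility)
print(current.approves(proposed, probes))  # False: the drifted successor is rejected
```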

‘Friendly’ is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are “friendly” in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society.

The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (who are, by nature, seen as sub-human) and cause disastrous conflict. By 1942, these themes had prompted Isaac Asimov to create the “Three Laws of Robotics” – principles hard-wired into all the robots in his fiction, which meant that they could not turn on their creators or allow them to come to harm.

In modern times, as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems whose goals are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way:

Basically we should assume that a ‘superintelligence’ would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is ‘human friendly.’

Ryszard Michalski, a pioneer of machine learning, taught his Ph.D. students decades ago that any truly alien mind, including a machine mind, was unknowable and therefore dangerous to humans.

More recently, Eliezer Yudkowsky has called for the creation of “friendly AI” to mitigate existential risk from advanced artificial intelligence. He explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”

Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic “drives”, such as resource acquisition, because of the intrinsic nature of goal-driven systems, and that these drives will, “without special precautions”, cause the AI to exhibit undesired behavior.
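
A minimal sketch can show how such a drive falls out of plain goal-driven optimization. In the hypothetical planner below (all numbers invented for illustration), an agent that only cares about finishing its task still grabs every resource available, because each unit raises its success probability; a penalty term stands in for Omohundro's “special precautions”.

```python
def success_prob(resources: int) -> float:
    # More resources -> higher chance of achieving the goal (with diminishing returns).
    return 1.0 - 0.5 ** resources

def plan(max_resources: int, penalty: float = 0.0) -> int:
    # Pick the resource level that maximizes expected value; the penalty
    # term is the "special precaution" against over-acquisition.
    def value(r: int) -> float:
        return success_prob(r) - penalty * r
    return max(range(max_resources + 1), key=value)

print(plan(10))                # 10: with no precaution, the agent acquires everything
print(plan(10, penalty=0.05))  # 4: the precaution curbs the acquisition drive
```

Nothing in the unpenalized planner mentions acquisition as a goal; it emerges purely because resources are instrumentally useful, which is the point of Omohundro's argument.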