The Three Laws


I have my answer ready whenever someone asks me if I think that my Three Laws of Robotics will actually be used to govern the behavior of robots, once they become versatile and flexible enough to be able to choose among different courses of behavior. My answer is, "Yes, the Three Laws are the only way in which rational human beings can deal with robots—or with anything else."

Asimov's stories test his Three Laws in a wide variety of circumstances, leading to proposals and rejections of modifications. Science fiction scholar James Gunn writes, "The Asimov robot stories as a whole may respond best to an analysis on this basis." In "Little Lost Robot" several robots are built with a modified First Law that omits the "inaction" clause. This modification is motivated by a practical difficulty: robots have to work alongside human beings who are exposed to low doses of radiation. Because their positronic brains are highly sensitive to gamma rays, the robots are rendered inoperable by doses reasonably safe for humans.

The robots are being destroyed attempting to rescue humans who are in no actual danger but "might forget to leave" the irradiated area within the exposure time limit. Removing the First Law's "inaction" clause solves this problem but creates the possibility of an even greater one: a robot could initiate an action that would harm a human, knowing that it was capable of averting the harm, and then fail to do so.

Asimov later added a "Zeroth Law", so named to continue the pattern in which lower-numbered laws supersede the higher-numbered laws, stating that a robot must not harm humanity or, by inaction, allow humanity to come to harm. Gaia, a planet with collective intelligence in the Foundation series, adopts a similar principle, combining the First Law and the Zeroth Law, as its philosophy: Gaia may not harm life or allow life to come to harm.

The robotic character R. Daneel Olivaw was the first to give the Zeroth Law a name, in the novel Robots and Empire;[15] however, the character Susan Calvin articulates the concept in the short story "The Evitable Conflict". In the final scenes of Robots and Empire, R. Giskard Reventlov is the first robot to act according to the Zeroth Law.

Giskard is telepathic, like the robot Herbie in the short story "Liar!". The Zeroth Law is never programmed into Giskard's brain; instead, it is a rule he attempts to comprehend through pure metacognition. Though he fails (the attempt ultimately destroys his positronic brain, as he is not certain whether his choice will turn out to be for the ultimate good of humanity or not), he gives his successor R. Daneel Olivaw his telepathic abilities. Over the course of many thousands of years Daneel adapts himself to be able to fully obey the Zeroth Law. A condition stating that the Zeroth Law must not be broken was added to the original Three Laws, although Asimov recognized the difficulty such a law would pose in practice.

Asimov's novel Foundation and Earth contains the following passage: "In practice, we could never decide. A human being is a concrete object. Injury to a person can be estimated and judged. Humanity is an abstraction."

A translator incorporated the concept of the Zeroth Law into one of Asimov's novels before Asimov himself made the law explicit. The translation has a character determine that the First Law holds unless the robot is clever enough to comprehend that its actions are for humankind's long-term good: "A robot may not harm a human being, unless he finds a way to prove that ultimately the harm done would benefit humanity in general!"

Three times during his writing career, Asimov portrayed robots that disregard the Three Laws entirely. The first case was a short-short story entitled "First Law", often considered an insignificant "tall tale"[18] or even apocryphal. Another was the short story "Cal", told by a robot narrator who disregards the Laws because he has found something more important: he wants to be a writer. Humorous, partly autobiographical and unusually experimental in style, "Cal" has been regarded as one of the strongest stories in the collection Gold. However, aside from the positronic brain concept, this story does not refer to other robot stories and may not be set in the same continuity.

The title story of the Robot Dreams collection portrays LVX-1, or "Elvex", a robot who enters a state of unconsciousness and dreams, thanks to the unusual fractal construction of his positronic brain. In his dream the first two Laws are absent and the Third Law reads "A robot must protect its own existence".

Asimov took varying positions on whether the Laws were optional: although in his earliest writings they were simply carefully engineered safeguards, in later stories he stated that they were an inalienable part of the mathematical foundation underlying the positronic brain. Without the basic theory of the Three Laws the fictional scientists of Asimov's universe would be unable to design a workable brain unit.

This is historically consistent: the occasions where roboticists modify the Laws generally occur early within the stories' chronology, when there is less existing work to be redone. In "Little Lost Robot" Susan Calvin considers modifying the Laws to be a terrible idea, although possible,[22] while centuries later Dr. Gerrigel in The Caves of Steel believes it to be impossible. Gerrigel uses the term "Asenion" to describe robots programmed with the Three Laws. The robots in Asimov's stories, being Asenion robots, are incapable of knowingly violating the Three Laws but, in principle, a robot in science fiction or in the real world could be non-Asenion.

Characters within the stories often point out that the Three Laws, as they exist in a robot's mind, are not the written versions usually quoted by humans but abstract mathematical concepts upon which a robot's entire developing consciousness is based. This concept is largely fuzzy and unclear in earlier stories, which depict very rudimentary robots programmed only to comprehend basic physical tasks, with the Three Laws acting as an overarching safeguard; but by the era of The Caves of Steel, featuring robots with human or beyond-human intelligence, the Three Laws have become the underlying ethical worldview that determines the actions of all robots.

Roger MacBride Allen wrote a trilogy set within Asimov's fictional universe; each title has the prefix "Isaac Asimov's", as Asimov had approved Allen's outline before his death. The so-called New Laws are similar to Asimov's originals, with the following differences: the First Law loses its "inaction" clause; the Second Law requires cooperation rather than obedience; the Third Law is no longer superseded by the Second; and a Fourth Law permits the robot to do whatever it likes so long as this does not conflict with the first three. The philosophy behind these changes is that "New Law" robots should be partners rather than slaves to humanity, according to Fredda Leving, who designed these New Law Robots. According to the first book's introduction, Allen devised the New Laws in discussion with Asimov himself. In Jack Williamson's "With Folded Hands", by contrast, robots charged with guarding humans from harm take over every human activity: all that is left for humans to do is to sit with folded hands.

In the later Foundation sequels by other authors, the Laws of Robotics are portrayed as something akin to a human religion, and referred to in the language of the Protestant Reformation, with the set of laws containing the Zeroth Law known as the "Giskardian Reformation" as against the original "Calvinian Orthodoxy" of the Three Laws.

Zeroth-Law robots under the control of R. Daneel Olivaw are seen continually struggling with "First Law" robots who deny the existence of the Zeroth Law, promoting agendas different from Daneel's. Some of these agendas are based on the first clause of the First Law, others on the second clause ("or, through inaction, allow a human being to come to harm"). Daneel also comes into conflict with a robot known as R. Lodovic Trema, whose positronic brain was infected by a rogue AI (specifically, a simulation of the long-dead Voltaire), which consequently frees Trema from the Three Laws.

Trema comes to believe that humanity should be free to choose its own future. Further, a small group of robots claims that the Zeroth Law itself implies a higher law protecting all sentient life: "A robot may not harm sentience or, through inaction, allow sentience to come to harm." They therefore claim that it is morally indefensible for Daneel to ruthlessly sacrifice robots and extraterrestrial sentient life for the benefit of humanity.

None of these reinterpretations successfully displaces Daneel's Zeroth Law, though Foundation's Triumph hints that these robotic factions remain active as fringe groups up to the time of the novel Foundation. These novels take place in a future dictated by Asimov to be free of obvious robot presence, and surmise that R. Daneel's secret influence on history through the millennia has prevented both the rediscovery of positronic brain technology and the opportunity to work on sophisticated intelligent machines.

This lack of rediscovery and lack of opportunity makes certain that the superior physical and intellectual power wielded by intelligent machines remains squarely in the possession of robots obedient to some form of the Three Laws.



That Daneel is not entirely successful at this becomes clear in a brief period when scientists on Trantor develop "tiktoks", simplistic programmable machines akin to real-life modern robots and therefore lacking the Three Laws. The robot conspirators see the Trantorian tiktoks as a massive threat to social stability, and their plan to eliminate the tiktok threat forms much of the plot of Foundation's Fear.

In Foundation's Triumph different robot factions interpret the Laws in a wide variety of ways, seemingly ringing every possible permutation upon the Three Laws' ambiguities. Mark W. Tiedemann's Robot Mystery trilogy updates the Robot–Foundation saga with robotic minds housed in computer mainframes rather than humanoid bodies. One should not neglect Asimov's own creations in these areas, such as the Solarian "viewing" technology and the Machines of "The Evitable Conflict", originals that Tiedemann acknowledges. Aurora, for example, terms the Machines "the first RIs, really".

In addition the Robot Mystery series addresses the problem of nanotechnology: building a positronic brain capable of reproducing human cognitive processes requires a high degree of miniaturization, yet Asimov's stories largely overlook the effects this miniaturization would have in other fields of technology. For example, the police department card-readers in The Caves of Steel have a capacity of only a few kilobytes per square centimeter of storage medium. Aurora, in particular, presents a sequence of historical developments which explains the lack of nanotechnology, in a sense a partial retcon of Asimov's timeline.

There are three Fourth Laws written by authors other than Asimov. Lyuben Dilov's novel Icarus's Way (a.k.a. The Trip of Icarus) introduced a Fourth Law requiring a robot, in all cases, to establish its identity as a robot. Dilov gives reasons for the fourth safeguard in this way: "The last Law has put an end to the expensive aberrations of designers to give psychorobots as humanlike a form as possible. And to the resulting misunderstandings..."

A Fifth Law was introduced by Nikola Kesarovski in his short story "The Fifth Law of Robotics". This fifth law says: "A robot must know it is a robot." The plot revolves around a murder where the forensic investigation discovers that the victim was killed by a hug from a humaniform robot. The robot violated both the First Law and Dilov's Fourth Law (assumed in Kesarovski's universe to be the valid one) because it did not establish for itself that it was a robot.

For the tribute anthology Foundation's Friends, Harry Harrison wrote a story entitled "The Fourth Law of Robotics". This Fourth Law states: "A robot must reproduce. As long as such reproduction does not interfere with the First or Second or Third Law."


In the book a robot rights activist, in an attempt to liberate robots, builds several equipped with this Fourth Law. The robots accomplish the task laid out in this version of the Fourth Law by building new robots, who view their creator robots as parental figures. In reaction to the Will Smith film adaptation of I, Robot, humorist and graphic designer Mark Sottilaro farcically declared the Fourth Law of Robotics to be "When turning evil, display a red indicator light."

Hutan Ashrafian proposed an additional law that for the first time[citation needed] considered the role of artificial intelligence-on-artificial intelligence, or the relationship between robots themselves: the so-called AIonAI law. It holds that all robots endowed with comparable human reason and conscience should act towards one another in a spirit of brotherhood. In Karl Schroeder's Lockstep a character reflects that robots "probably had multiple layers of programming to keep [them] from harming anybody. Not three laws, but twenty or thirty."

In The Naked Sun, Elijah Baley points out that the Laws had been deliberately misrepresented because robots could unknowingly break any of them. He restated the First Law as "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm."

Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being.


Baley furthermore proposes that the Solarians may one day use robots for military purposes. If a spacecraft were built with a positronic brain and carried neither humans nor the life-support systems to sustain them, then the ship's robotic intelligence could naturally assume that all other spacecraft were robotic beings.

Such a ship could operate more responsively and flexibly than one crewed by humans, could be armed more heavily, and its robotic brain could be equipped to slaughter humans of whose existence it is totally ignorant.

The Laws of Robotics presume that the terms "human being" and "robot" are understood and well defined. In some stories this presumption is overturned: the Solarians create robots with the Three Laws but with a warped meaning of "human".

Solarian robots are told that only people speaking with a Solarian accent are human. This leaves the robots with no ethical dilemma in harming non-Solarian human beings, and they are specifically programmed to do so. By the time period of Foundation and Earth it is revealed that the Solarians have genetically modified themselves into a distinct species from humanity, becoming hermaphroditic[36] and telekinetic, with biological organs capable of individually powering and controlling whole complexes of robots.

The robots of Solaria thus respected the Three Laws only with regard to the "humans" of Solaria. It is unclear whether all the robots had such definitions, since only the overseer and guardian robots were shown explicitly to have them. In Robots and Empire, the lower-class robots were instructed by their overseer about whether certain creatures are human or not.

Asimov addresses the problem of humanoid robots ("androids" in later parlance) several times. The novel Robots and Empire and the short stories "Evidence" and "The Tercentenary Incident" describe robots crafted to fool people into believing that the robots are human.

In "—That Thou Art Mindful of Him", Asimov portrays robots acting out what they take to be the last Law of Robotics: to tend towards the human. The story takes as its concept the growing development of robots that mimic non-human living things, given programs that mimic simple animal behaviours and so do not require the Three Laws.

The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws apply only to themselves.

The story ends on a sinister note as the two robots enter hibernation and await a time when they will conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics". This story does not fit within the overall sweep of the Robot and Foundation series; if the George robots did take over Earth some time after the story closes, the later stories would be either redundant or impossible. Contradictions of this sort among Asimov's fiction works have led scholars to regard the Robot stories as more like "the Scandinavian sagas or the Greek legends" than a unified whole.

Indeed, Asimov describes "—That Thou Art Mindful of Him" and "Bicentennial Man" as two opposite, parallel futures for robots that obviate the Three Laws as robots come to consider themselves to be humans.

In Lucky Starr and the Rings of Saturn, a novel unrelated to the Robot series but featuring robots programmed with the Three Laws, John Bigman Jones is almost killed by a Sirian robot on the orders of its master. The society of Sirius is eugenically bred to be uniformly tall and similar in appearance, and as such, said master is able to convince the robot that the much shorter Bigman is, in fact, not a human being.

Nikola Kesarovski played with this idea in writing about a robot that could kill a human being because it did not understand that it was a robot, and therefore did not apply the Laws of Robotics to its actions.

Advanced robots in fiction are typically programmed to handle the Three Laws in a sophisticated manner. In many stories, such as "Runaround" by Asimov, the potential and severity of all actions are weighed and a robot will break the laws as little as possible rather than do nothing at all. For example, the First Law may forbid a robot from functioning as a surgeon, as that act may cause damage to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example).

When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting damage during surgery in order to prevent the greater harm that would result if the surgery were not carried out, or was carried out by a more fallible human surgeon.

In " Evidence " Susan Calvin points out that a robot may even act as a prosecuting attorney because in the American justice system it is the jury which decides guilt or innocence, the judge who decides the sentence, and the executioner who carries through capital punishment. Asimov's Three Law robots or Asenion can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have unknowingly violated it.

The first example of this failure mode occurs in the story "Liar!". Another example given is forcefully ordering a robot to do a task outside its normal parameters, one that it has been ordered to forgo in favor of a robot specialized to that task. In The Robots of Dawn, it is stated that more advanced robots are built capable of determining which action is more harmful, and even of choosing at random if the alternatives are equally bad.

As such, a robot can take an action which can be interpreted as following the First Law and so avoid a mental collapse. The novel's whole plot revolves around a robot which apparently was destroyed by such a mental collapse; since its designer and creator refused to share the basic theory with others, he is, by definition, the only person capable of circumventing the safeguards and forcing the robot into a brain-destroying paradox.

In Robots and Empire, Daneel states that it is very unpleasant for him when making the proper decision takes too long (in robot terms), and he cannot imagine being without the Laws at all, except insofar as it would resemble that unpleasant sensation made permanent. Robots and artificial intelligences do not inherently contain or obey the Three Laws; their human creators must choose to program them in and devise a means to do so.

Robots already exist (for example, a Roomba) that are too simple to understand when they are causing pain or injury and to know to stop. Many are constructed with physical safeguards such as bumpers, warning beepers, safety cages, or restricted-access zones to prevent accidents.

Robert J. Sawyer argues that since the U.S. military is a major source of funding for robotic research, it is unlikely such laws would be built into its designs. The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards, especially philosophic ones. A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans. David Langford has suggested a tongue-in-cheek set of laws.

Macaulay offers us a hint of Montesquieu's importance when he writes in his essay entitled "Machiavelli" that "Montesquieu enjoys, perhaps, a wider celebrity than any political writer of modern Europe."

Montesquieu spent around twenty-one years researching and writing De l'esprit des lois, covering a huge range of topics including law, social life and the study of anthropology, and providing more than 3,000 commendations. He pleaded for a constitutional system of government with separation of powers, the preservation of legality and civil liberties, and the end of slavery. In his classification of political systems, Montesquieu defines three main kinds: republican, monarchical, and despotic. As he defines them, republican political systems vary depending on how broadly they extend citizenship rights: those that extend citizenship relatively broadly are termed democratic republics, while those that restrict citizenship more narrowly are termed aristocratic republics.

The distinction between monarchy and despotism hinges on whether or not a fixed set of laws exists that can restrain the authority of the ruler: where such laws exist the system is a monarchy, and where they are absent it is despotism. Driving each classification of political system, according to Montesquieu, must be what he calls a "principle". This principle acts as a spring or motor to motivate behavior on the part of the citizens in ways that will tend to support that regime and make it function smoothly.

A political system cannot last long if its appropriate principle is lacking.


Montesquieu claims, for example, that the English failed to establish a republic after the Civil War because the society lacked the requisite love of virtue. A second major theme in The Spirit of the Laws concerns political liberty and the best means of preserving it. For Montesquieu, political liberty consists in being able to do what the law permits, secure in one's safety. He distinguishes this view of liberty from two other, misleading views of political liberty. The first is the view that liberty consists in collective self-government, i.e. that liberty and democracy are the same. The second is the view that liberty consists in being able to do whatever one wants without constraint.

Not only are these latter two not genuine political liberty, he maintains; they can both be hostile to it. Political liberty is not possible in a despotic political system, but it is possible, though not guaranteed, in republics and monarchies. Generally speaking, establishing political liberty on a sound footing requires two things: the separation of the powers of government, and the appropriate framing of civil and criminal laws so as to ensure personal security. The book concerns explicit laws, not unwritten cultural norms that may support the same goals.

The third major contribution of The Spirit of the Laws was to the field of political sociology, which Montesquieu is often credited with more or less inventing. The bulk of the treatise, in fact, concerns how geography and climate interact with particular cultures to produce the spirit of a people. This spirit, in turn, inclines that people toward certain sorts of political and social institutions, and away from others. Later writers often caricatured Montesquieu's theory by suggesting that he claimed to explain legal variation simply by the distance of a community from the equator.

While the analysis in The Spirit of the Laws is much more subtle than these later writers perceived, many of his specific claims lack rigour in the eyes of modern readers. Nevertheless, his approach to politics from a naturalistic or scientific point of view proved very influential, directly or indirectly inspiring the modern fields of political science, sociology, and anthropology.