Power and Principle in the Market Place: On Ethics and Economics (Law, Ethics and Economics)


The management team sets the tone for how the entire company runs on a day-to-day basis. When the prevailing management philosophy is based on ethical practices and behavior, leaders within an organization can direct employees by example and guide them in making decisions that are beneficial not only to them as individuals but also to the organization as a whole. Building on a foundation of ethical behavior helps create long-lasting positive effects for a company, including the ability to attract and retain highly talented individuals and to build and maintain a positive reputation within the community.

Running a business in an ethical manner from the top down builds a stronger bond between individuals on the management team, further creating stability within the company. When management is leading an organization in an ethical manner, employees follow in those footsteps. Employees make better decisions in less time with business ethics as a guiding principle; this increases productivity and overall employee morale. When employees complete work in a way that is based on honesty and integrity, the whole organization benefits.

Employees who work for a corporation that demands a high standard of business ethics in all facets of operations are more likely to perform their job duties at a higher level and are also more inclined to stay loyal to that organization. Business ethics differ from industry to industry, and nation to nation.

The nature of a business's operations has a major influence on the ethical issues with which it must contend. For example, an ethical quandary arises for an investment brokerage when the best decision for a client and his or her money does not coincide with what pays the brokerage the highest commission.

A media company that produces TV content aimed at children may feel an ethical obligation to promote good values and eschew off-color material in its programming. A striking example of industry-specific business ethics is in the energy field. One misstep — whether it is a minor coal ash spill at a power plant or a major disaster such as the BP oil spill — forces a company to answer to numerous regulatory bodies and society at large regarding whether it skirted its duty to protect the environment in an aggressive pursuit of higher profits.

A stringent, clearly defined system of environmental ethics is paramount for an energy company that wants to thrive in a climate of increased regulation and public awareness of environmental issues.

These two characteristics are intimately connected with the goals that the individuals who created the object pursue with it, so that the object does not stray from its intended purposes (Vermaas et al.). Faced with this inseparability, the questioning of the morality of human objectives and actions extends to the morality of technical artifacts themselves (Vermaas et al.).

Considering that the objectives sought by the humans when creating a technical artifact are not separated from the characteristics of the object itself, we can conclude that the technical artifacts have an intrinsically moral character.


For a regulatory analysis, this concept is even more fundamental (Vermaas et al.). To illustrate the difference between the concepts of technical artifact and sociotechnical system, we can think of the former as represented by an airplane and the latter by the complex aviation system. The sociotechnical system is formed by the set of interrelated agents (human and non-human actants, i.e. things), institutions, and so on. The materiality and effects of a sociotechnical system depend on the sum of the agency of each actant. There are, however, parameters for how the system should be used, which means that these systems have pre-defined operational processes and can be affected by regulatory laws and policies.

Thus, when a tragic accident involving an airplane occurs, it is necessary to analyse what was within the sphere of control and influence of each actor and each technical artifact in the sociotechnical network. Quite possibly we will observe a very complex and symbiotic relationship between the components that led to this fateful result (Saraiva). Moreover, this result is often unpredictable, owing to the autonomy of the system, which rests on an agency diffused and distributed among all its components (actants).

These complex systems bring us to debate the liability and ethics concerning technical artifacts and sociotechnical systems.



Issues such as the liability of developers and the existence of morality in non-human agents (with a focus here on technological objects) need a response or, at least, reflections that contribute to the debate in the public sphere. Given this context, from a legal and regulatory point of view, it is justifiable to assign technical artifacts and sociotechnical systems a different status according to their capacity for agency and influence, endowing them with different moral statuses and levels of liability.

It is necessary, then, to distinguish the influence and importance that each thing has in the network and, above all, in the public sphere (Latour). Colin Allen and Wendell Wallach argue that as intelligent things, such as robots, become more autonomous and assume more responsibility, they must be programmed with moral decision-making skills for our own safety (Wallach and Allen).

Corroborating this thesis, Peter-Paul Verbeek, writing on the morality of things, understands that as machines now operate more frequently in open social environments, such as connected public spheres, it becomes increasingly important to design a type of functional morality that is sensitive to ethically relevant features and applicable to the intended situations (Verbeek). A good example is Microsoft's bot Tay, which illustrates the effects that a non-human element can have on society.

In 2016, Microsoft launched an artificial intelligence programme named Tay. The goal was to have Tay interact with human users on Twitter, learning human patterns of conversation. Endowed with a deep-learning ability, the bot shaped its worldview from online interactions with other people and produced authentic expressions based on them. The experience, however, proved disastrous: in less than a day the chatbot was generating utterly inappropriate comments, including racist, sexist and antisemitic publications, and the company had to deactivate the tool within 24 hours.

Google Photos behaved similarly: this was a programme that also learned from users, in this case to tag photos automatically. Its results, too, were outright discriminatory; it was noticed, for example, that the bot was labelling Black people as gorillas. The implementation of programmes capable of learning and adapting to perform functions that relate to people creates new ethical and regulatory challenges, since it increases the possibility of obtaining results other than those intended, or even totally unexpected ones. In addition, these results can cause harm to other actors, as with the discriminatory offenses generated by Tay and Google Photos.
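The mechanism behind failures of this kind can be illustrated with a deliberately naive sketch: a bot that stores user-supplied replies without any vetting will reproduce whatever it is taught, abusive content included. The `NaiveChatbot` class below is a hypothetical toy, not a description of Microsoft's actual architecture.

```python
import random
from collections import defaultdict

class NaiveChatbot:
    """A toy bot that parrots back phrases it has seen, with no filtering.

    It keeps a table mapping trigger words to replies learned from users.
    Whatever users teach it, good or bad, it will reproduce verbatim.
    """

    def __init__(self):
        self.replies = defaultdict(list)

    def learn(self, trigger, reply):
        # No vetting step: every user-supplied reply is stored as-is.
        self.replies[trigger.lower()].append(reply)

    def respond(self, message):
        # Answer with a random learned reply for the first known trigger word.
        for word in message.lower().split():
            if word in self.replies:
                return random.choice(self.replies[word])
        return "I have nothing to say about that yet."

bot = NaiveChatbot()
bot.learn("weather", "Lovely day, isn't it?")
# A hostile user can poison the model just as easily:
bot.learn("weather", "<any abusive phrase a troll supplies>")
print(bot.respond("How is the weather?"))  # may echo the troll's input
```

The point of the sketch is that the learning rule itself is neutral: the system's output morality is entirely a function of its training inputs, which is why unfiltered public interaction is so risky.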

In particular, the use of artificial intelligence tools that interact through social media requires reflection on the ethical requirements that must accompany the development of this type of technology. This is because, as previously argued, these mechanisms also act as agents in society and end up influencing the environment around them, even though they are non-human elements. For Miller et al., however, the fact that the creators did not expect this outcome is part of the very unpredictable nature of this type of system (Miller et al.).

The attempt to make artificial intelligence systems increasingly adaptable and capable of acting in a human-like manner makes their behaviour less predictable. Thus, they begin to act not only as tools that perform pre-established functions in the various fields in which they are employed, but also to develop a way of acting of their own. They impact the world in a way that is less determinable or controllable by human agents. And the more adaptable artificial intelligence programmes become, the more unpredictable their actions are, bringing new risks.

This makes it necessary for developers of this type of programme to be more aware of the ethical and legal responsibilities involved in this activity. In addition, there is a need for dedicated monitoring to verify the actions taken by such a programme, especially in the early stages of implementation.

In the Tay case, for instance, the developers should have monitored the bot's behaviour intensely within the first 24 hours of its launch, which is not known to have occurred (Miller et al.). The logic should be one of preventing possible damages and monitoring in advance, rather than remediating losses, especially when they may be unforeseeable. To limit the possibility of negative consequences, software developers must recognise potentially dangerous and unpredictable programmes and restrict their interaction with the public until they have been intensively tested in a controlled environment.
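A minimal version of this "monitor first, remediate never" logic can be sketched in code. The wrapper below gates an untested generator behind a blocklist check and a crude automatic kill switch; the class name, blocklist approach and thresholds are all illustrative assumptions, not a description of any real moderation system.

```python
class MonitoredBot:
    """Wraps an unpredictable text generator behind a simple safety gate.

    During an initial observation window every output is checked against a
    blocklist; if too many outputs are flagged, the bot deactivates itself
    rather than continuing to post. All parameters here are illustrative
    placeholders, not a production moderation design.
    """

    def __init__(self, generate, blocklist, max_flag_rate=0.05, min_samples=20):
        self.generate = generate                      # the underlying, unvetted model
        self.blocklist = [w.lower() for w in blocklist]
        self.max_flag_rate = max_flag_rate
        self.min_samples = min_samples
        self.flagged = 0
        self.total = 0
        self.active = True

    def respond(self, message):
        if not self.active:
            return None                               # bot has been pulled offline
        text = self.generate(message)
        self.total += 1
        if any(term in text.lower() for term in self.blocklist):
            self.flagged += 1
            text = "[output withheld for human review]"
        if (self.total >= self.min_samples
                and self.flagged / self.total > self.max_flag_rate):
            self.active = False                       # automatic kill switch
        return text
```

The design choice worth noting is that the gate sits outside the learning system: even when the model's behaviour is unforeseeable, the monitoring layer remains deterministic and auditable.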

After this stage, consumers should be informed about the vulnerabilities of a programme that is essentially unpredictable, and about the possible consequences of unexpected behaviour (Miller et al.). The use of technology, with an emphasis on artificial intelligence, can cause unpredictable and uncontrollable consequences, so that often the only solution is to deactivate the system. The increase in the autonomy and complexity of technical artifacts is therefore evident.

They are endowed with an increased agency and are capable of influencing others, but also of being influenced by the sociotechnical system in a significant way, often composing even more autonomous and unpredictable networks. Although no artificial intelligence system is yet completely autonomous, at the current pace of technological development it is possible to create machines that will have the ability to make decisions in an increasingly autonomous way, which raises questions about who would be responsible for the results of their actions and for eventual damages caused to others (Vladeck). The ability to amass experiences and learn from massive data processing, coupled with the capacity to act independently and make choices autonomously, can be considered preconditions for legal liability.

However, since artificial intelligence is not recognised today as a subject of law, it cannot be held individually liable for the potential damage it may cause. In this sense, under Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, a person (natural or legal) on whose behalf a programme was created must, ultimately, be liable for any action generated by the machine.


On the other hand, in the case of damage caused by the acts of an artifact with artificial intelligence, another type of responsibility draws an analogy with the liability attributed to parents for the actions of their children, or even the liability of animal owners in cases of damage.

Another possibility is the model that focuses on the ability of programmers or users to predict the potential for these damages to occur.


According to this model, the programmer or user can be held liable if they acted deceitfully, or negligently with respect to a result that was predictable (Hallevy). George S. Cole refers to four predetermined types of civil liability: (i) product liability, (ii) service liability, (iii) malpractice, and (iv) negligence. The author maintains that the standards, in this case, should be set by the professional community.

Still, as the field develops, the negligence model would, for Cole, be the most applicable. It can, however, be difficult to implement, especially when some errors are unpredictable or even unavoidable (Cole). To date, courts worldwide have not formulated a clear definition of the duty of care involved in creating AIs, the breach of which should lead to liability in negligence. This model will depend on standards set by the professional community, but also on clearer guidelines from legislation and case law.

The choice between a negligence rule and a strict liability rule may have different impacts on the treatment of the subject and, especially, on the level of precaution that one intends to impose on the victim or on the developer of the AI. Establishing strict liability creates a significant incentive for the injurer to act diligently in order to reduce the expected cost of harm.

Indeed, in the economic model of strict liability, the injurer is liable even if he adopts a high level of precaution. This does not mean that there is no interest in adopting cautious behaviour: there is a level of precaution at which the injurer, within the scope of strict liability, prevents the occurrence of damage. In this sense, if the cost of adopting that level of precaution is lower than the expected cost of damages, it is economically desirable to adopt it (Shavell). But even if the injurer behaves diligently, the victim who suffers damage will be compensated, which in this case favours the victim's position (Magrani, Viola, and Silva). The negligence rule, however, presents a completely different picture.

Since the injurer is liable only when he acts culpably, if he behaves diligently the burden of the injury necessarily falls on the victim, even if the damage is produced by a potentially dangerous activity. The incentive for victims to adopt precautionary measures is therefore greater, because if they suffer any kind of loss they will bear it themselves (Magrani, Viola, and Silva). However, it is often not easy to know how these programmes reach their conclusions, or why they lead to unexpected and possibly unpleasant consequences.
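The contrast between the two rules can be made concrete with a toy calculation in the spirit of the standard law-and-economics accident model; all numbers and curves below are invented for illustration.

```python
def expected_harm(care, base_harm=1000.0):
    """Expected accident loss falls as the injurer spends more on care.

    A stylised curve: each unit of care halves the accident probability.
    All figures are illustrative, not empirical.
    """
    probability = 0.5 ** care * 0.2
    return probability * base_harm

def total_social_cost(care, care_price=30.0):
    # Cost of taking care plus the harm still expected at that care level.
    return care * care_price + expected_harm(care)

# Socially optimal care minimises care cost plus expected harm.
optimal_care = min(range(0, 10), key=total_social_cost)

# Under STRICT liability the injurer pays the harm regardless of care,
# so he internalises the total social cost and chooses optimal care himself.
injurer_cost_strict = total_social_cost(optimal_care)

# Under a NEGLIGENCE rule the injurer escapes liability once he meets the
# due-care standard; any residual harm then falls on the victim.
due_care = optimal_care
injurer_cost_negligence = due_care * 30.0
victim_residual_loss = expected_harm(due_care)

print(optimal_care, injurer_cost_strict,
      injurer_cost_negligence, victim_residual_loss)
```

Both rules push the injurer toward the same precaution level, but they allocate the residual loss differently: under strict liability the injurer also pays the remaining expected harm, while under negligence a diligent injurer pays only for care and the victim bears what is left.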


This harmful potential is especially dangerous in artificial intelligence programmes that rely on machine learning, and particularly on deep learning, in which the very nature of the software involves producing behaviour that is not predictable in advance and that will only be determined by the processing of all the data with which the programme has come into contact.

Existing laws are not adequate to guarantee fair regulation for the coming artificial intelligence context. A structure produced in a UNESCO study contains important parameters that help us think about these issues while trying to identify the different agencies involved, for instance whether a system chooses out of a range of options, with room for flexibility, according to a preset policy. Although the proposed structure is quite simple and gives us important insights, its implementation in terms of assigning responsibility and regulating usage is complex and challenging for scientists and engineers, policymakers and ethicists, and it may ultimately not be sufficient for a fair and adequate response.

Hence the importance of taking into consideration and investigating the spheres of control and influence of designers and other agents during the creation and functional development of technical artifacts (Vladeck). Often, during the design phase, the consequences are indeterminate because they depend partly on the actions of agents and factors other than the designers. Also, since making a decision can be a complex process, it may be difficult even for a human to explain it. As the behaviour of an advanced AI is not totally predictable, and is the result of interaction between the several human and non-human agents that make up the sociotechnical system, and even of self-learning processes, it can be difficult to determine the causal nexus between the damage caused and the action of a human being or legal entity.

This will occur mainly when the damage transpires within a complex sociotechnical system, in which the liability of the intelligent thing itself, or of a natural or legal person, will not be obvious. When dealing with artificial intelligence, it is essential for the research community and academia to promote an extensive debate about the ethical guidelines that should guide the construction of these intelligent machines. This segment of scientific research is growing strongly.


The need to establish a regulatory framework for this type of technology has been highlighted by some initiatives, as mentioned in this section. The guidelines put forward a set of seven key requirements that AI systems should meet in order to be deemed trustworthy. According to the document, a specific assessment list aims to help verify the application of each of the key requirements. Similar to this well-grounded initiative, many countries, companies and professional communities are publishing guidelines for AI with analogous values and principles, intending to secure the positive aspects and diminish the risks involved in AI development.

In that sense, it is worth mentioning the recent and important initiatives coming from governments, companies and professional communities alike.


The different degrees of autonomy allotted to machines must be thought through, determining what degree of autonomy is reasonable and where substantial human control should be maintained. The different levels of intelligence and autonomy that certain technical artifacts may have must directly influence the ethical and legal considerations applied to them.
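A graded scheme of this kind can be sketched in code. The autonomy levels and oversight mapping below are illustrative assumptions loosely inspired by the "human-in-the-loop" / "human-on-the-loop" vocabulary used in policy debates, not an established legal standard.

```python
from enum import Enum

class Autonomy(Enum):
    """Illustrative autonomy grades; the scale is a sketch, not a standard."""
    TOOL = 1            # executes fixed, pre-programmed instructions
    SUPERVISED = 2      # chooses among options, human confirms each action
    MONITORED = 3       # acts alone, human can veto or halt (on the loop)
    AUTONOMOUS = 4      # acts without real-time human control

# Hypothetical mapping from autonomy grade to the oversight regime it might call for.
OVERSIGHT = {
    Autonomy.TOOL: "ordinary product-liability treatment",
    Autonomy.SUPERVISED: "human-in-the-loop approval required",
    Autonomy.MONITORED: "human-on-the-loop monitoring and kill switch",
    Autonomy.AUTONOMOUS: "pre-deployment certification and strict liability",
}

def required_oversight(level: Autonomy) -> str:
    """Look up the oversight regime suggested for a given autonomy grade."""
    return OVERSIGHT[level]

print(required_oversight(Autonomy.MONITORED))
```

The value of making the scale explicit, even in toy form, is that it forces the regulatory question the text raises: at which grade does responsibility shift from user to developer, and at which grade is real-time human control no longer a meaningful safeguard?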

On 16 February 2017, the European Parliament issued a resolution with recommendations to the European Commission on civil law rules in robotics.