Technology without ethics can harm humans, so it is imperative for manufacturers to consider the ethical implications of their inventions. Programming machines to instill morals in them is a noble idea, although it may not succeed. Nonetheless, technology is helpful in the medical field, where it aids incapacitated patients. To prevent machines from overtaking humans and sparking dilemmas, stakeholders should keenly monitor and regulate automated technology. Programming ethics into automated machines helps them make ethical life-and-death decisions, although it would be challenging to program such parameters. Nonetheless, moral dilemmas, especially in AI, can be resolved in ways such as adopting a code of ethics.
Ethical Decision-Making Parameters
There are benefits to being able to program ethical decision-making parameters into automated technologies. First, it will reduce unwanted behavior (Torresen, 2018). Ethical decision-making will enable robots to make just decisions without human intervention. Correspondingly, the machines should be able to evaluate the ethical implications of different actions, even those that are not illegal. Unethical technologies could cost society economically, politically, and socially. With ethics ingrained in technology, however, it will be difficult to manipulate robots to cause harm. Unwanted behavior will diminish significantly if ethical parameters are incorporated into automated technologies.
Conversely, there are reasons why ethical parameters should not be included. There is no clear way of assigning responsibility to the machines. Currently, if a machine harms a person, the manufacturer is held responsible (Hammond, 2016). Disciplining an unethical device, however, would result in a quandary: there is no straightforward way of holding the machine accountable should it go wrong. Furthermore, there is no guarantee that the computer would feel remorse for its wrongdoing. In cases where the unethical act occurs due to a malfunction, the machine would probably not even be aware that it is in the wrong, adding to the lack of responsibility. It is arduous to hold a robot responsible for being unethical.
Further, the ethical guidelines will likely be biased. There is already concern that machines, created mainly by white males, are biased in their conduct. The situation would be even worse if ethics were involved, given its subjective nature. Adhering to a particular moral philosophy may be challenging for the machines, as what one individual considers an ethical outcome may be wrong to another (Etzioni & Etzioni, 2017). The devices will thus behave according to the ethical guidelines that the manufacturer sets. Notably, even if a wide range of ethical schools is included, there is no guarantee that the actions will be perceived as just by all. Ethical guidelines are subject to bias from the manufacturer and from the subjectivity of ethics itself.
Automated technologies may be unable to resolve ethical dilemmas. There is no proof that the technologies would make the best choice when faced with a moral dilemma. A prime example is the trolley dilemma: a trolley loses control, and the only way to stop it is to throw a large object onto the track, which, unfortunately, is the person seated next to you (Hammond, 2016). A machine in this situation would face the same problem. Even though it would be programmed with an ethical decision procedure, it is highly unlikely that the outcome would be favorable to all. Although the probability of such events is said to be very low, one cannot take chances (Etzioni & Etzioni, 2017). Car manufacturers claim that their machines would be equipped for such events, but success rates are still low. Worse still, the machine cannot be held accountable for any decision made, which makes justification even harder. Automated technologies would prove ineffective when ethical dilemmas arise.
Making Life or Death Decisions
I would be comfortable allowing an automated technology to make an ethical life-or-death decision for me. The machine would have enough knowledge of me to predict my patterns. An AI would use details from my health records and social media platforms to understand my preferences regarding health care (Lamanna & Byrne, 2018). Given its knowledge of my needs, I would be confident that the machine would act in my best interest. Moreover, by the time I needed the computer to decide on my behalf, I would be completely incapacitated. In such a state, I would need someone who knows me well enough to make the critical health choices of that moment. I would regularly update the machine so that it captures any changes in my preferences or medical conditions; with such updates, it would be unlikely to go wrong. Through data analysis, I trust that the machine would have enough knowledge of me to make the right choice.
Equally, the machine would avoid errors that patient surrogates and family members may make. In a life-and-death situation, stress and frail health could hamper the surrogate's choices, which happens in about one-third of such instances (Lamanna & Byrne, 2018). I would not want to take chances with my health by leaving the decision to these parties, who may get it wrong. Moreover, it would relieve my family of the guilt of wondering whether they made the right choice. As long as the machine has my records, it will make accurate predictions through algorithmic analysis. With my computer, I trust that minimal errors will occur, and when they do happen, my loved ones will not be blamed.
The machine would also play an instrumental role in detecting problems. AI technology helps identify medical conditions such as stroke (Jiang et al., 2018). With 90.5% accuracy in predicting such a condition, I am at least assured that the machine will not only make the right choice should I have a stroke but will also warn me in advance. Once I customize the device, it will help predict any health problem and guide me toward the right choice. I am confident that the machine will help me anticipate illness and take preventive measures.
The machine would give me patient-centered care. In my worst health moments, I would be comfortable knowing that the computer would accord me care based on my preferences (Lamanna & Byrne, 2018). Its intimate knowledge of me would guide it in deciding what would suit me best. Further, it would help healthcare providers learn about me and respond appropriately. Patient-centered care is especially vital when one's health is frail.
My comfort would also arise from the ability to give the machine instructional parameters in advance. Before making any choices, I would be aware of challenges such as algorithmic bias, which could affect health outcomes (Lamanna & Byrne, 2018). Because I would be mindful of this issue, I would create the instructions with full knowledge of such weaknesses. My participation in shaping the instructional parameters would boost my confidence that the machine will make the right choice for me.
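As a rough illustration of how such instructional parameters might work, the idea of an "autonomy algorithm" can be sketched as a simple scoring rule: the patient records weighted preferences in advance, and the machine scores candidate interventions against them. This is a minimal, hypothetical sketch; the preference names, weights, and options below are illustrative assumptions, not the method described by Lamanna and Byrne (2018).

```python
# Hypothetical sketch of an advance-directive "autonomy algorithm".
# All weights and option attributes are illustrative assumptions.

def choose_intervention(preferences, options):
    """Score each candidate intervention against the patient's
    pre-recorded preference weights and return the best match."""
    def score(option):
        return sum(preferences.get(attr, 0) * value
                   for attr, value in option["attributes"].items())
    return max(options, key=score)

# Preferences the patient set in advance (higher = more valued).
preferences = {"quality_of_life": 0.7, "longevity": 0.2, "low_burden": 0.1}

options = [
    {"name": "aggressive_treatment",
     "attributes": {"quality_of_life": 0.3, "longevity": 0.9, "low_burden": 0.1}},
    {"name": "palliative_care",
     "attributes": {"quality_of_life": 0.9, "longevity": 0.3, "low_burden": 0.8}},
]

best = choose_intervention(preferences, options)
print(best["name"])  # palliative_care under these illustrative weights
```

A real system would of course face the bias and subjectivity problems discussed above: the outcome depends entirely on how the preferences are elicited and weighted.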
Avoiding Encountering Ethical Dilemmas in AI
Hiring ethicists will guide organizations in ethical adherence, thus helping them avoid dilemmas. AI companies should have ethicists on staff who play a critical role in the development and deployment of the machines (West, 2018). With such workers, it will be possible to work out scenarios that could result in ethical dilemmas and take preventive measures. These employees will also stress the importance of transparency in the development of the machines, without which ethical dilemmas would arise. Ethicists will ensure that manufacturers do not create devices that threaten human existence by taking over human responsibilities or causing harm. Ethicists will foresee ethical dilemmas and work with the organization to eliminate them.
Transparency concerning ethical outcomes and metrics would also abate dilemmas. Policymakers should stress the implementation of transparent moral decisions (Polonski, 2018). Manufacturers should take full responsibility for any mistakes their inventions make. In programming ethical conduct into the machines, engineers should be frank about the quantifications used and their effects. Correspondingly, transparency audits should be conducted regularly to review the ethical performance of the machines (West, 2018). Such examinations would be especially useful if the company faces litigation for harming consumers. Audits should be done to ensure ethical compliance by the engineers.
AI should have a code of ethics. AI manufacturers should adopt an ethics code that lays out principles for resolving ethical problems (West, 2018). Manufacturers should adhere to these rules or face harsh penalties. The codes should also be transparent and easily accessible to the public, so that customers can give their opinions on ethical issues and monitor company activities. In addition, review boards should be instituted to review the codes regularly and make necessary changes. The members should come from multiple fields, such as health, psychology, and law, to ensure that the codes are comprehensive. There should be a code of ethics guiding practice, reviewed by a qualified board.
Giving human beings precedence will reduce dilemmas. A major ethical dilemma is that technology will displace human labor and inflict harm (Torresen, 2018). People worry that machines will take over their jobs because of their accuracy and efficiency. Further, there is concern that governments will use AI to spy on citizens and to wage war. The prospect that AI will at some point be smarter than humans is a worrying trend. Therefore, AI manufacturers should put people's interests first. The technology should not take over jobs to the point of massive unemployment. Likewise, governments and organizations should respect people's privacy. The concerns and needs of human beings should take precedence over AI advancement.
Ethics in automated machines helps them make ethical life-and-death decisions, although it would be challenging to program such parameters. Moral dilemmas, especially in AI, can nonetheless be resolved in ways such as adopting a code of ethics. Programming ethical conduct into machines is difficult because it is hard to hold the devices responsible for breaches, because manufacturers' biases and differing ideas of ethics creep in, and because the machines are unlikely to resolve ethical problems effectively. As for dilemmas, the problem can be addressed by hiring ethicists to aid in production, adopting an ethics code, and giving humans priority. Ethics in automation is essential and should be thoroughly analyzed to preserve human nature.
References
Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. The Journal of Ethics, 21(4). Retrieved from http://ai2-website.s3.amazonaws.com/publications/etzioni-ethics-into-ai.pdf
Hammond, K. (2016). Ethics and Artificial Intelligence: The Moral Compass of a Machine. Recode. Retrieved from https://www.recode.net/2016/4/13/11644890/ethics-and-artificial-intelligence-the-moral-compass-of-a-machine
Jiang, F., Jiang, Y., Dong, Y., & Wang, Y. (2018). Artificial Intelligence in Healthcare: Past, Present, and Future. Stroke and Vascular Neurology, 2(4). Retrieved from https://svn.bmj.com/content/2/4/230
Lamanna, C., & Byrne, L. (2018). Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. AMA Journal of Ethics, 20(9). Retrieved from https://journalofethics.ama-assn.org/article/should-artificial-intelligence-augment-medical-decision-making-case-autonomy-algorithm/2018-09
Polonski, V. (2018). The Hard Problem of AI Ethics – Three Guidelines for Building Morality into Machines. The Forum Network. Retrieved from https://www.oecd-forum.org/users/80891-dr-vyacheslav-polonski/posts/30743-the-hard-problem-of-ai-ethics-three-guidelines-for-building-morality-into-machines
Torresen, J. (2018). A Review of Future and Ethical Perspectives of Robotics and AI. Frontiers in Robotics and AI. Retrieved from https://www.frontiersin.org/articles/10.3389/frobt.2017.00075/full
West, D. (2018). The Role of Corporations in Addressing AI’s Ethical Dilemmas. Brookings. Retrieved from https://www.brookings.edu/research/how-to-address-ai-ethical-dilemmas/