Unit 3, Lab 4, Page 3: Implications of AI
On this page, you will consider some of the ethical issues of robots and AI.
-
Humanoid Robots: What are the social implications of building robots that look and act like humans?
The “uncanny valley” is a term for the eerie feeling we get from robots or animated characters that look and move almost, but not quite, like real humans.
-
Ethics and AI: Should there be limits to the development of AI? How can we develop AI responsibly? Below are three possible questions in this area. They’re not all equally good questions; for each one, start by asking whether there might be a more productive question to pose about the issue. (What’s the best question to ask about the ethics of self-driving cars? About AI and jobs? About AI and laws?)
- What about a self-driving car that has to make a choice between the life of its passenger and the life of a pedestrian? How does that choice get made?
- There’s also the issue of who gets the benefits of AI. The people whose jobs are being replaced by robots are disproportionately lower-income. How can we make sure that everyone benefits from developments in AI?
- What laws apply to AI? What happens when a robot commits a crime? Who gets punished?
-
Bias and AI: Machine learning algorithms have enabled innovation in medicine, business, and science. However, because they use existing data to develop “understanding” of the world, they are influenced by existing biases in the data, and their results have been used to discriminate against groups of individuals. Research this issue and discuss how AI researchers might overcome this problem.
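To make the mechanism concrete, here is a minimal sketch in Python (the data, group names, and the toy “model” are all hypothetical, invented for illustration, not part of this page): a trivial program that learns only the historical approval rate for each group will reproduce whatever disparity is already in its training data.

```python
# Toy illustration: a "model" that learns each group's historical approval
# rate will inherit any bias already present in that history.
# The groups and decisions below are made up for this sketch.

# Hypothetical past loan decisions: (group, approved?)
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def learn_approval_rates(data):
    """Compute each group's approval rate from the historical decisions."""
    counts, approvals = {}, {}
    for group, approved in data:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + (1 if approved else 0)
    return {group: approvals[group] / counts[group] for group in counts}

def predict(rates, group):
    """Approve an applicant whenever their group's learned rate is at least 50%."""
    return rates[group] >= 0.5

rates = learn_approval_rates(training_data)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(predict(rates, "group_a"))  # True  -- the old pattern is repeated
print(predict(rates, "group_b"))  # False -- the old disadvantage is repeated
```

Real machine learning systems are far more complex than this, but the underlying issue is the same: a model can only reflect the patterns, fair or unfair, that exist in the data it is trained on.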
-
Throughout this course, you have seen that technology has both benefits and risks. Imagine yourself working in the field of AI or robotics. What would you be interested in working on? What benefits might your work bring? How would you minimize its risks?