Artificial Intelligence holds great promise for advancing human understanding. However, when used inappropriately, or when its consequences and limitations are not considered, AI has the potential to do great harm. Thus, the question of alignment lies at the heart of AI ethics: how do we ensure the technology supports our values? Although this module focuses on ethical considerations in AI, it is not the only place ethics is discussed in the Practicum AI curriculum. Here we address broad frameworks to facilitate ongoing discussions of ethical, equitable, and just AI.
Topics: This module covers the following topics:
- The ethical dimension of a good research question or service.
- The philosophical frameworks for ethical thinking.
- The importance of representative data samples.
- The recognition of model assumptions and limitations.
By the end of this module, students will be able to:
- Compare and contrast the ethical frameworks of Consequentialism and Deontology.
- Describe the problem of biased data and explain why training data ought to be representative of the population it is meant to model.
- Evaluate data collection processes and datasets to ensure alignment with a research question or AI service.
- Recognize the inherent complexity of ethical thinking and the need to balance competing interests.
- Summarize the limitations of narrow AI:
  a. The open category problem
  b. Model brittleness
  c. Others
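As a minimal illustration of the representative-data point above, one simple check is to compare a training sample's class proportions against those of the population it is meant to represent. This sketch is purely hypothetical: the group labels, numbers, and the 10-point drift threshold are illustrative assumptions, not part of the curriculum.

```python
# Hypothetical sketch: flagging a training sample whose class proportions
# drift from the population's. All data and thresholds are illustrative.
from collections import Counter


def proportions(labels):
    """Return each label's share of the list as a fraction of the total."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}


# Illustrative population vs. a biased training sample.
population = ["A"] * 50 + ["B"] * 50   # balanced 50/50
sample = ["A"] * 80 + ["B"] * 20       # skewed toward group A

pop_props = proportions(population)
sample_props = proportions(sample)

# Flag any group whose share in the sample drifts more than 10 points
# from its share in the population (an arbitrary threshold for this demo).
for group in pop_props:
    gap = abs(sample_props.get(group, 0.0) - pop_props[group])
    if gap > 0.10:
        print(f"Group {group} is misrepresented by {gap:.0%}")
```

In practice, a model trained on the skewed sample above would see group A four times as often as group B, even though both occur equally in the population, which is exactly the kind of mismatch this module asks students to evaluate.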
Optional Content / Additional Resources
- The Alignment Problem by Brian Christian.