Amy Jackson

Complex algorithms can create inadvertent bias. Universities like Stanford have created new courses that combine computer science and ethics, in an effort to prevent bias from occurring.

Engineering Education Now: Combining Ethics and AI

In our last post we mentioned how some universities are concerned about the ethical implications of artificial intelligence (AI). Schools that have recently developed courses centered around ethics and computer science include Harvard, Stanford, M.I.T., Cornell, Carnegie Mellon, the University of Washington, and the University of Texas at Austin.

According to a recent New York Times report, these courses “amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.”

Before we focus on what these universities are doing to address the issues that AI raises, let’s explore a bit more how AI algorithms can have consequences that their designers did not foresee.

Combining Ethics and AI: The Issue of Racial Bias

Back in 2016, ProPublica released a comprehensive examination of risk assessments generated by an algorithm called COMPAS for over 7,000 people arrested in Broward County, Florida.

COMPAS’ conclusions showed significant racial disparities: African-Americans were almost twice as likely as whites to be labeled higher risk without actually re-offending. COMPAS made the opposite mistake for whites, who were much more likely to be labeled lower risk yet go on to commit other crimes.

As of 2016, the states of Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington and Wisconsin used risk assessments like this one, with the results given to judges during criminal sentencing.

Combining Ethics and AI: The Issue of Gender Bias

Here’s another example of AI that demonstrated bias: Amazon at one time planned to use an algorithm to score applicant resumes from one to five stars. But in 2015 the company realized that the algorithm was giving women’s resumes lower scores across the board. That’s in large part because the data fed into the algorithm consisted mainly of men’s resumes.

Amazon attempted to build an algorithm that was more gender-neutral, but in the end abandoned the project.

Combining Ethics and AI: The Issue of Automated Bias

As Kristian Lum, the lead statistician at the San Francisco-based nonprofit Human Rights Data Analysis Group (HRDAG), states, “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate.”

The Guardian notes that AI has “flagged the innocent as terrorists, sent sick patients home from hospital, lost people their jobs and car licenses, had people kicked off the electoral register, and chased the wrong men for child support bills.”

That’s not because coders intentionally build bias into AI. But as algorithms grow and learn, their decision-making processes become more and more opaque.

Another issue is the data that is fed to algorithms. The universal rule that applies to all data holds here as well: garbage in, garbage out.
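To see how “garbage in, garbage out” plays out in practice, here is a deliberately simplified sketch in the spirit of the Amazon resume example above. The dataset, keywords, and scoring rule are all hypothetical inventions for illustration, not any real system: a scorer trained on skewed historical hiring decisions simply learns the skew as if it were a signal.

```python
from collections import Counter

# Toy historical data: (resume keywords, past hiring decision 1/0).
# The labels reflect a biased history, not actual merit -- in this
# invented dataset, resumes mentioning "women's" (as in "women's
# chess club captain") were consistently rejected.
history = [
    (["python", "leadership"], 1),
    (["java", "captain"], 1),
    (["python", "women's"], 0),
    (["leadership", "women's"], 0),
    (["java", "python"], 1),
]

def train(history):
    """Score each keyword by how often it co-occurs with a 'hire'."""
    hired, seen = Counter(), Counter()
    for words, label in history:
        for w in words:
            seen[w] += 1
            hired[w] += label
    return {w: hired[w] / seen[w] for w in seen}

weights = train(history)
print(weights["python"])   # mixed outcomes, so a middling score
print(weights["women's"])  # 0.0 -- the skewed history becomes a learned penalty
```

Nothing in the code mentions gender explicitly; the penalty emerges entirely from the biased training data, which is exactly why such problems are hard to spot from the code alone.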

Combining Ethics and AI: Ethics Coursework

ABET, a global organization that accredits computing, applied and natural science, and engineering technology programs, requires that students acquire an understanding of ethical responsibility as part of their coursework. But how that requirement is executed has varied widely from school to school.

Some engineering and computing schools have held stand-alone courses on ethics, while others have incorporated ethics into courses with broader scope. Ethics has, in the past, been a footnote for students eager to learn coding skills.

That’s no longer the case. “As we start to see things, like autonomous vehicles, that clearly have the ability to save people but also cause harm, I think that people are scrambling to build a system of ethics,” Joi Ito, director of the M.I.T. Media Lab, noted. He is co-teaching a course on ethics offered jointly by Harvard and M.I.T.

This course encourages students to ask questions such as: Is the technology fair? How do you make sure that the data is not biased? Should machines be judging humans?

Combining Ethics and AI: Stanford University Ethics Course

Stanford University now has a course developed by political scientist Rob Reich, computer scientist Mehran Sahami, political scientist Jeremy Weinstein, and research fellow and course manager Hilary Cohen, called “Computers, Ethics, and Public Policy.”

Students are given assignments in three areas: coding exercises, a philosophy paper, and policy memos, as they consider topics like civil rights and privacy from the point of view of software engineers, product designers and policymakers.

Rob Reich notes that computer science has become hugely popular at Stanford. The course he helped develop is designed for both engineering and non-technical students, so that both types come away with an understanding of algorithms and ethical and policy questions.

Jeremy Weinstein, a co-developer of the course, states, “Stanford absolutely has a responsibility to play a leadership role in integrating these perspectives, but so does Carnegie Mellon and Caltech and Berkeley and M.I.T. The set of institutions that are generating the next generation of leaders in the technology sector have all got to get on this train.”

In our next post, we’ll look at how schools like Harvard University and the University of California at Berkeley are combining engineering and ethics.

What do you think? Let us know in the comments.
