AI Education Now: A Complex Subject
Artificial intelligence (AI), “the software engine that drives the Fourth Industrial Revolution,” is a complex subject to teach. There is a burgeoning awareness at many universities that students need to learn not only how to run powerful AI algorithms but also how to grapple with the ethical complexities those algorithms raise.
Ethical and policy questions are therefore being incorporated into many undergraduate computer science classes, where future designers and engineers are asked to consider the urgent questions raised by the products they are learning to create.
Levent Burak Kara, a professor of mechanical engineering at Carnegie Mellon, notes that there is a tension in teaching AI between ensuring “students understand what’s under the hood and what industry wants.”
AI Education Now: Deciphering the Decisions That AI Will Make
One aspect of AI education is ensuring that students understand the mechanisms of deep neural networks, mathematical algorithms that can learn tasks on their own by analyzing large sets of data.
Deep neural networks translate foreign languages in Microsoft’s Skype phone service, recognize commands spoken to Apple’s Siri and Amazon’s Echo, and help identify faces in Facebook photographs. This neural architecture, loosely modeled on the human brain, is a key basis for the excitement around AI.
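The core idea can be seen at toy scale. Below is a minimal sketch of a two-layer neural network that learns the XOR function purely by analyzing example data; every detail (network size, learning rate, iteration count) is illustrative, and production systems like those behind Skype or Siri are vastly larger and built with frameworks such as PyTorch or TensorFlow.

```python
import numpy as np

# Toy two-layer neural network learning XOR from four examples.
# All hyperparameters here are illustrative, not from any real system.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8))   # input-to-hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1))   # hidden-to-output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

first_loss = None
for step in range(5000):        # plain batch gradient descent
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)
    loss = ((out - y) ** 2).mean()
    if first_loss is None:
        first_loss = loss
    # Backpropagation: push the error back through each layer.
    grad_out = (out - y) * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

print(f"loss fell from {first_loss:.3f} to {loss:.3f}")
```

The point is that no one writes a rule for XOR: the network discovers it by repeatedly adjusting millions of such weights to reduce its error on data, which is also why its internal “reasoning” is hard to inspect.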
However, AI systems do not reason the way humans do: their decisions emerge from statistical patterns in training data, which makes them a tricky business that even the best programmers can get wrong.
AI Education Now: Shaping the Right Algorithm
One example of a “wrong” algorithm was recently the subject of a paper by Joy Buolamwini, a researcher at the MIT Media Lab, and Timnit Gebru, a Microsoft researcher. They studied gender-recognition technologies from Microsoft, IBM, and China’s Megvii.
What these researchers found was that the technology consistently identified lighter-skinned men more accurately than darker-skinned women. They concluded that this was the result of bias through omission: when certain groups are underrepresented in training data, a failure mode common in deep learning and image recognition, the system performs worse on them.
After the paper was published, both Microsoft and IBM announced they were taking steps to improve this tech.
If AI were to automate job recommendations, says Eric Horvitz, chair of Microsoft’s internal AI and Ethics in Engineering and Research group, there’s always a chance that it could “amplify biases in society that we may not be proud of.”
AI Education Now: The Responsibility of Universities
Considering how deeply AI has already woven itself into modern society, and the repercussions of algorithms that are not examined from all angles, it’s no wonder that some universities have begun to pair ethics with computer science beyond stand-alone elective classes.
In our next post, we’ll explore in more detail how some universities are addressing the need for engineers to learn about the ethical implications of AI.
What do you think? Let us know in the comments.