Michael Kearns

From SI410
Revision as of 13:23, 26 March 2021 by Sickham


Michael Kearns is a professor and the National Center Chair in the Department of Computer and Information Science at the University of Pennsylvania, and the Founding Director of the Warren Center for Network and Data Sciences. He holds secondary appointments in the Department of Economics and in Wharton's departments of Statistics and of Operations, Information and Decisions. He is the founder and former director of Penn Engineering's Networked and Social Systems Engineering Program, and a former co-director of Penn's interdisciplinary Institute for Research in Cognitive Science.


He has also worked in the private sector, with Bank of America, Lehman Brothers, SAC Capital, and Morgan Stanley in roles related to quantitative and algorithmic trading, and he has advised technology companies and venture capital firms. In June 2020, Kearns joined Amazon as part of its Scholars Program, focusing on algorithmic fairness, privacy, machine learning, and related topics within Amazon Web Services.

Kearns is an elected Fellow of the American Academy of Arts and Sciences, the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and the Society for the Advancement of Economic Theory. [1]

Educational Background

Kearns was born into a highly academic family. His father, David R. Kearns, is Professor Emeritus of Chemistry at the University of California, San Diego, and won a Guggenheim Fellowship in 1969; his uncle Thomas R. Kearns is Professor Emeritus of Philosophy at Amherst College; his paternal grandfather, Clyde W. Kearns, was a well-known professor at the University of Illinois specializing in insecticide toxicology; and his maternal grandfather, Chen Shou-Yi, was a professor of history and literature at Pomona College.

Michael Kearns completed his B.S. in math and computer science at the University of California, Berkeley, graduating in 1985. In 1989, he completed a Ph.D. in computer science at Harvard University; his dissertation, "The Computational Complexity of Machine Learning," was published thereafter. Before joining AT&T Bell Labs in 1991, he held postdoctoral positions at MIT's Laboratory for Computer Science and at the International Computer Science Institute (ICSI) at UC Berkeley. [1]

Research

Kearns's research background is broad; he describes it in his University of Pennsylvania bio:

My research interests include topics in machine learning, algorithmic game theory and microeconomics, computational social science, and quantitative finance and algorithmic trading. I often examine problems in these areas using methods and models from theoretical computer science and related disciplines. While much of my work is mathematical in nature, I also often participate in empirical and experimental projects, including applications of machine learning to problems in algorithmic trading and quantitative finance, and human-subject experiments on strategic and economic interaction in social networks. [1]

He has published over 140 research papers dating back to 1987.

Books

The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Jointly authored with Aaron Roth, The Ethical Algorithm is a general-audience book about the science of designing algorithms that embed social values like privacy and fairness.

An Introduction to Computational Learning Theory. Jointly authored with Umesh Vazirani of U.C. Berkeley, this MIT Press publication is intended to be an intuitive but precise treatment of some interesting and fundamental topics in computational learning theory. The level is appropriate for graduate students and researchers in machine learning, artificial intelligence, neural networks, and theoretical computer science.


Issues of Algorithmic Fairness

The book "The Ethical Algorithm" presents many of Kearns's ideas on fairness, privacy, and equality in light of the recent explosion of machine learning in daily life.

Issues and Notions of Algorithmic Privacy

While some view absolute data privacy as the ideal, with only aggregate data ever shared, Kearns warns that complete privacy carries real costs. The release of data is often what drives advances in many daily technological applications: navigation apps, the medical field, and science broadly all rely heavily on individual data. If society prohibited the release of all such data, these core areas would find it far harder to improve.

Kearns also rejects k-anonymity, a notion of anonymized data introduced by Latanya Sweeney and Pierangela Samarati that has gained traction in the computer science community. A dataset is k-anonymous if every combination of quasi-identifying attributes (like zip code and age range) is shared by at least k records. According to Kearns, such anonymization offers only a weak guarantee of privacy: seemingly anonymized records can often be re-identified by linking them with auxiliary data sources.
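As a concrete illustration (a generic sketch, not code from the book), checking whether a table satisfies k-anonymity amounts to counting how many records share each combination of quasi-identifier values. The field names here are hypothetical:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """A dataset is k-anonymous if every combination of quasi-identifier
    values appears in at least k records."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

# Toy medical table: zip code and age range are the quasi-identifiers.
records = [
    {"zip": "19104", "age": "30-39", "diagnosis": "flu"},
    {"zip": "19104", "age": "30-39", "diagnosis": "cold"},
    {"zip": "19104", "age": "40-49", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["zip", "age"], 2))  # False: the 40-49 group has only one record
```

Even when such a check passes, Kearns's point stands: an attacker with an outside dataset linking zip code and age range to names may still re-identify individuals.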

Instead, Kearns advocates the idea of "differential privacy." Under differential privacy, an analysis may be released only if including any one person's data changes its outcome, and therefore the probability of any harm befalling that person, by a negligible amount. Following the health example, a study should publish only statistics whose release barely changes the probability that any individual subject's health insurance premiums go up.
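A standard construction for achieving differential privacy (the Laplace mechanism, a textbook technique rather than anything specific to Kearns's book) releases a count after adding noise scaled to the privacy parameter epsilon. A minimal sketch:

```python
import math
import random

def laplace_mechanism(true_count, epsilon):
    """Release a count with Laplace noise of scale 1/epsilon.
    A counting query has sensitivity 1 (one person's data changes it
    by at most 1), so this satisfies epsilon-differential privacy."""
    # Sample Laplace(0, 1/epsilon) via the inverse CDF of a uniform draw.
    u = random.random() - 0.5
    scale = 1.0 / epsilon
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Smaller epsilon means stronger privacy but noisier answers.
print(laplace_mechanism(100, epsilon=1.0))
```

The design tradeoff Kearns describes is visible in the parameter: as epsilon shrinks, each individual's influence on the output becomes harder to detect, but the released statistic becomes less accurate.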


Algorithmic Fairness

Kearns puts forth the idea that there are several distinct notions of fairness, and that they can conflict with one another. This raises the questions of which notion should be the primary target of optimization and how to balance the three notions below.

1. Statistical parity: The percentage of individuals who receive some treatment should be approximately equal across (our defined) groups. In simpler terms, for an algorithm to be thought of as fair, each "group" of people impacted by the algorithm should receive the treatment in the same proportion. For example, an algorithm determining whether applicants have high enough credit for a home loan should approve the same percentage of applicants across race, gender, sexuality, etc. This can conflict with the next definitions of fairness.

2. Approximate equality of false negatives: The percentage of mistakes in terms of denying treatment to individuals who deserved it should be approximately equal across groups. Following the loan example, this notion aims for the same proportion of applicants of each race/gender/sexuality to be unjustly turned down. Optimizing for this definition rather than the first may change the decision on some individuals' applications.

3. Approximate equality of false positives: Similar to (2), the percentage of mistakes in terms of giving treatment to individuals who did not deserve it (false positives) should be approximately equal across groups. Optimizing fairness in this sense will also yield different loan decisions for some applicants than optimizing either of the two prior definitions.
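All three notions can be measured directly from an algorithm's decisions. A minimal Python sketch (with a hypothetical loan dataset, not an example from the book) shows how statistical parity can hold while false-negative rates still differ across groups:

```python
def group_rates(outcomes):
    """outcomes: list of (group, predicted_approve, truly_qualified).
    Returns per-group approval rate (statistical parity),
    false-negative rate, and false-positive rate."""
    stats = {}
    for g, pred, truth in outcomes:
        s = stats.setdefault(g, {"n": 0, "approved": 0, "qual": 0,
                                 "fn": 0, "unqual": 0, "fp": 0})
        s["n"] += 1
        s["approved"] += pred
        if truth:
            s["qual"] += 1
            s["fn"] += (not pred)   # qualified but denied
        else:
            s["unqual"] += 1
            s["fp"] += pred         # unqualified but approved
    return {
        g: {
            "approval_rate": s["approved"] / s["n"],
            "false_negative_rate": s["fn"] / s["qual"] if s["qual"] else 0.0,
            "false_positive_rate": s["fp"] / s["unqual"] if s["unqual"] else 0.0,
        }
        for g, s in stats.items()
    }

# Both groups are approved at the same rate (statistical parity holds),
# yet every qualified denial falls on group A (false-negative rates differ).
outcomes = [
    ("A", True, True), ("A", False, True), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, True), ("B", False, False), ("B", False, False),
]
print(group_rates(outcomes))
```

This is exactly the tension Kearns describes: equalizing approval rates says nothing about how the mistakes are distributed, so the three notions must be traded off against one another.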


Scientific Reproducibility Issues

Morality and Scope of AI

  1. Bio: https://www.cis.upenn.edu/~mkearns/