
What is Fair and Equal Treatment? Policy Challenges in the Machine Learning Age

November 2, 2016

In the 1976 case General Electric Company v. Gilbert, the Supreme Court held that an employer's disability plan that declined to cover pregnancy did not discriminate on the basis of sex. The Court concluded that the plan did not exclude anyone from benefit eligibility because of gender; it merely removed pregnancy from the list of compensable disabilities. In short, the plan did not discriminate against women; it just discriminated against pregnant people. By that logic, men and women were treated equally.

The question of what constitutes fair and equal treatment was under discussion on October 6, 2016, during the second of four workshops in the Optimizing Government project. Funded by the Fels Policy Research Initiative, Optimizing Government brings together researchers at the University of Pennsylvania to collaborate on studying the implementation of machine learning in government. This session brought together scholars from philosophy (Samuel Freeman, Avalon Professor of the Humanities, Department of Philosophy), law (Seth Kreimer, Kenneth W. Gemmill Professor of Law), and political science (Nancy Hirschmann, Professor of Political Science) to explain how fairness and equality are conceptualized in their respective disciplines. Interestingly, none of the speakers has published scholarly work on machine learning. Rather, each has studied how society has grappled with concepts of equality and accountability in policy making since long before the advent of computers.

One aspiration for machine learning is to help eliminate implicit bias. However, as Hirschmann noted, it is possible to perpetuate unfairness under the guise of equality. As long as humans select the data fed into a machine and interpret its results, some level of implicit bias will be encoded into even the most seemingly impartial algorithm. Empirical facts have no single objective interpretation and can be twisted to suit an existing bias. Feminist political theory demands that some room be left for the recognition of difference in order to achieve true equality. At a minimum, it must be acknowledged that there is value in including different groups and peoples in the creation of a machine learning policy framework.
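The Gilbert case suggests one concrete way this happens: a facially neutral rule can reproduce bias through a proxy variable. The sketch below is a hypothetical illustration with synthetic data, not an example from the workshop; it shows how a "sex-blind" coverage rule that never sees the protected attribute can still produce unequal outcomes, because a feature it does use is correlated with sex.

```python
# Hypothetical sketch of proxy discrimination: even when the protected
# attribute ("sex") is excluded from a decision rule's inputs, a
# correlated feature ("pregnancy_claim") can reproduce the bias.
# All data here are synthetic and invented for illustration.
import random

random.seed(0)

# Generate a synthetic population. Only women file pregnancy claims,
# so the feature is perfectly correlated with sex among claimants.
population = []
for _ in range(10_000):
    sex = random.choice(["F", "M"])
    pregnancy_claim = sex == "F" and random.random() < 0.2
    population.append({"sex": sex, "pregnancy_claim": pregnancy_claim})

# A "sex-blind" coverage rule, echoing the plan in GE v. Gilbert:
# the rule never consults `sex`, only `pregnancy_claim`.
def covered(person):
    return not person["pregnancy_claim"]

# Measure coverage rates by sex, even though the rule never used sex.
for sex in ("F", "M"):
    group = [p for p in population if p["sex"] == sex]
    rate = sum(covered(p) for p in group) / len(group)
    print(f"coverage rate for {sex}: {rate:.2%}")

# Roughly 80% coverage for women versus 100% for men:
# formally "equal treatment," disparate impact in practice.
```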

Publicity, transparency, and comprehensibility are basic requirements of a democratic society. These requirements apply not just to laws, but also to the principles and reasons behind them. A philosophical condition of individual freedom is that we should know, or at least have available to us, the reasons behind decisions made and enforced through coercive political power. From a legal perspective, the government must be able to demonstrate a rational relationship between any differentiation among individuals and a legitimate purpose.
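To make that requirement concrete, here is a minimal, hypothetical sketch of what "giving reasons" could look like for an algorithmic decision. The feature names, weights, and threshold are all invented; the point is only that a simple linear score lets a decision-maker cite the contribution of each factor, whereas an opaque model offers no comparable account.

```python
# Hypothetical sketch: a linear scoring rule whose per-feature
# contributions can be read off and cited as reasons for a decision.
# Features, weights, and the threshold are invented for illustration.
FEATURE_WEIGHTS = {
    "years_of_residency": 0.5,
    "prior_violations": -2.0,
    "income_verified": 1.5,
}
THRESHOLD = 1.0

def decide_with_reasons(applicant):
    """Return a decision plus each feature's contribution to it,
    so the decision-maker can articulate a rational basis."""
    contributions = {
        name: weight * applicant[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, reasons = decide_with_reasons(
    {"years_of_residency": 4, "prior_violations": 1, "income_verified": 1}
)
print("approved:", approved)
# List the reasons, largest influence first.
for feature, contribution in sorted(reasons.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")
```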

Can machine learning fulfill these requirements? If a policy maker does not understand the mechanism by which a machine arrived at a conclusion, will she be able to satisfy the “rational relationship” requirement? Can “the machine told me so” possibly be considered adequate justification for a policy? Or will machine learning make it easier for policy makers to evade personal responsibility for their decisions?

The conversation on machine learning will continue on Thursday, November 3, 2016, at 4:30 p.m. with a session on Fairness and Performance Trade-Offs in Machine Learning. The Optimizing Government project is funded by the Fels Policy Research Initiative. This event is open to the public. RSVP here.
