A Path Forward for Eliminating Structural Bias in AI Tech

Law360

As national protests over systemic and structural racism in America continue, community organizers, Black scholars, and others fighting injustice and unequal treatment are once again raising awareness of a long-standing problem lurking within data-based artificial intelligence technologies: bias.

Creating fair and equitable data-based decision systems, devoid of the conscious and unconscious bias that causes disparate impacts, especially on racial minorities, is a critical task and part of a much larger national anti-discrimination priority. The imperative for federal and state lawmakers, government agencies and the companies that develop, deploy and operate AI technologies is to pass meaningful laws and adopt governance strategies that ensure no biased data-based system has a role in American society.

Calls for Artificial Intelligence Regulations Get Louder

Spurred by protests, negative media attention and employee activism, as well as the current uncertainty surrounding legal liability and damages facing those operating in the artificial intelligence sector, some of the top technology companies have petitioned Congress and the White House to regulate artificial intelligence.

Congress, for its part, needs to set aside partisanship and the distractions of the pandemic and the upcoming election, and take meaningful action now to stop bias in data-based systems.

As an initial step, Congress should listen to technologists and pass laws containing milestones leading to the development of technical and nontechnical standards concerning fairness, accountability, transparency, and the prevention of bias and discrimination in artificial intelligence system outputs, consistent with existing civil rights and anti-discrimination laws.

Both regulators and those they regulate will need standards and criteria to assess compliance with anti-bias measures, whether those assessments are risk-based, rely on objective testing or use some other investigatory approach.
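To make "objective testing" concrete, the following Python sketch shows one well-known form such a test could take: the four-fifths, or 80%, disparate impact comparison drawn from federal employment-selection guidance. The decision-record format, function names and threshold here are illustrative assumptions, not requirements of any existing or proposed AI law.

```python
# Hypothetical sketch of one "objective test" for bias in a data-based
# decision system's outputs: the four-fifths (80%) disparate impact rule,
# borrowed from EEOC employment-selection guidance for illustration only.

def selection_rate(decisions, group, favorable="approve"):
    """Fraction of a group's decisions that were favorable."""
    outcomes = [d["outcome"] for d in decisions if d["group"] == group]
    if not outcomes:
        return None
    return sum(1 for o in outcomes if o == favorable) / len(outcomes)

def disparate_impact_ratio(decisions, protected_group, reference_group):
    """Ratio of the protected group's selection rate to the reference group's."""
    p = selection_rate(decisions, protected_group)
    r = selection_rate(decisions, reference_group)
    if p is None or r in (None, 0):
        return None
    return p / r

# Toy decision log; a real assessment would use the system's actual outputs.
decisions = [
    {"group": "A", "outcome": "approve"},
    {"group": "A", "outcome": "deny"},
    {"group": "B", "outcome": "approve"},
    {"group": "B", "outcome": "approve"},
]
ratio = disparate_impact_ratio(decisions, protected_group="A", reference_group="B")
if ratio is not None and ratio < 0.8:  # the four-fifths rule of thumb
    print(f"Potential disparate impact: selection-rate ratio = {ratio:.2f}")
```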

Lawmakers could also pass legislation consistent with the goals of the data ethics framework in the Federal Data Strategy's 2020 action plan, expanded to encompass private sector datasets. This could include appropriating funds for the formation of repositories of high-quality, nonproprietary datasets, updated regularly to reflect a diverse world and made available to the public.

Lawmakers could also look to state legislatures for examples of other types of guardrails. Washington state, for example, was the first state to curb law enforcement's use of facial recognition technology; it requires its public agencies to independently test for accuracy and for unfair performance differences when a decision system relies on skin tone, gender, age and certain other characteristics.

Similar tests could be developed for data-based decision systems that use other features, including demographic, behavioral and geographic characteristics that historically have been used to discriminate.
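As a sketch of what such performance-difference testing might involve in practice, the short example below compares a system's accuracy across demographic subgroups and flags the largest gap. The record format and the five-percentage-point tolerance are hypothetical choices made for illustration; they are not drawn from Washington's statute or any published standard.

```python
# Illustrative test for "unfair performance differences": compare a model's
# accuracy across demographic subgroups and report the largest gap. The
# 5-percentage-point tolerance is a hypothetical choice, not a statutory one.

from collections import defaultdict

def subgroup_accuracies(records):
    """Per-group accuracy, given records with 'group', 'label', 'prediction'."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["label"])
    return {g: correct[g] / total[g] for g in total}

def max_accuracy_gap(records):
    """Largest difference in accuracy between any two subgroups."""
    accs = subgroup_accuracies(records)
    return max(accs.values()) - min(accs.values()), accs

# Toy labeled audit set; a real test would use independent audit data.
records = [
    {"group": "darker_skin", "label": 1, "prediction": 0},
    {"group": "darker_skin", "label": 1, "prediction": 1},
    {"group": "lighter_skin", "label": 1, "prediction": 1},
    {"group": "lighter_skin", "label": 0, "prediction": 0},
]
gap, accs = max_accuracy_gap(records)
print(f"Per-group accuracy: {accs}")
if gap > 0.05:  # hypothetical tolerance
    print(f"Unfair performance difference: gap = {gap:.2f}")
```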

As a backstop, lawmakers could also provide a private right of action in the event unregulated biased systems enter commerce and cause harm. Without a means for aggrieved persons to sue for damages, any law aimed at reducing structural bias in data-based systems could be weakened from the start.

The alternative to a private right of action is statutory damages or civil penalties, but to be effective deterrents, those damages or fines must be large enough to tilt a company's cost-benefit analysis toward compliance.

Finally, lawmakers can legislate protections for whistleblowers who identify developers or agencies that ignore anti-bias/anti-discrimination laws and intentionally rely on biased outputs or permit biased systems to enter the stream of commerce.

For its part, the White House should heed the words of its artificial intelligence experts and the National Security Commission on Artificial Intelligence, which has acknowledged, at least in the context of national security, the presence of bias in data-based systems. The White House should also heed the majority of those who commented on the Office of Management and Budget's Jan. 13 draft memorandum to the heads of executive departments and agencies, "Guidance for Regulation of Artificial Intelligence Applications," and expressed at least some support for fairness and nondiscrimination as guiding principles federal agencies should consider when acquiring or developing AI systems.

And finally, both Congress and the White House should back the National Institute of Standards and Technology's efforts to research and develop technical standards for artificial intelligence. As required by Executive Order No. 13859, "Maintaining American Leadership in Artificial Intelligence," issued Feb. 11, 2019, NIST is leading a public effort to assess how technical standards could be used to evaluate AI technologies, including bias in data-based systems.

But NIST's mid-August public workshop on bias issues revealed just how far the government remains from regulating data-based decision systems with meaningful, achievable and enforceable technical standards. Concrete recommendations from the working group may not be forthcoming until after the election in November.

Companies, Their Lawyers and the Call to Action

Until Congress and the states pass appropriate and targeted legislation, the White House issues new regulations, and NIST publishes technical and nontechnical standards, the burden of eliminating, or at least minimizing the potential for, bias in data-based systems will fall mostly on the shoulders of the companies that make and deploy those systems.

Legal uncertainty around artificial intelligence should be enough to drive more companies toward action, which may include embedding a diverse group of ethics and law domain experts with system designers to help develop socially responsible and ethical governance approaches that advance anti-bias/anti-discrimination principles. Some technology companies have already taken meaningful steps in that direction, and many others need to follow their lead.

Lawyers especially, both in-house and in private practice, will be crucial in helping craft appropriate measures for those companies. Compliance efforts under future laws and regulations will need to account for the data, feature selection, use cases, interfaces and other aspects of data-based systems to ensure fair and equal treatment of those the systems affect.

Lawyers will also be instrumental in vetting third-party data-based software-as-a-service systems to ensure those systems meet company anti-bias/anti-discrimination standards before outputs from those systems are used to drive company decisions. Lawyers can also help establish company incident-response plans and risk-assessment procedures specific to artificial intelligence data-based decision systems.
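One way such vendor vetting might be operationalized is sketched below: replay a held-out audit set through the vendor's scoring function and adopt its outputs only if subgroup accuracy gaps stay within a company-set tolerance. The vendor_score function, the audit-set format and the tolerance are all assumptions made for illustration, not features of any real vendor's API.

```python
# Hypothetical pre-adoption gate for a third-party SaaS decision system:
# replay a held-out audit set through the vendor's scoring function and
# accept its outputs only if subgroup accuracy gaps stay within tolerance.
# `vendor_score`, the audit-set format and the tolerance are assumptions.

from collections import defaultdict

def audit_vendor_system(vendor_score, audit_set, tolerance=0.05):
    """Return True only if per-group accuracy gaps stay within tolerance."""
    correct, total = defaultdict(int), defaultdict(int)
    for case in audit_set:
        total[case["group"]] += 1
        correct[case["group"]] += int(vendor_score(case["features"]) == case["label"])
    accs = {g: correct[g] / total[g] for g in total}
    gap = max(accs.values()) - min(accs.values())
    print(f"Vendor per-group accuracy: {accs}")
    return gap <= tolerance

# Example with a toy vendor model that approves everyone.
audit_set = [
    {"group": "A", "features": [1], "label": 1},
    {"group": "B", "features": [0], "label": 0},
]
print("Adopt vendor outputs:", audit_vendor_system(lambda features: 1, audit_set))
```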

Many have been moved by the tumultuous events involving police brutality that disproportionately targets Black individuals, and many are demanding action on that and on the broader problem of structural racism and discrimination in America and elsewhere. Doing nothing is unacceptable, especially now, when Americans by a large margin believe racism is a significant problem and that things in this country need to change.

While bias in data-based systems may seem like a small part of the larger structural discrimination problem, no part of the problem can be ignored. If demonstrable changes are not made even in small areas, trust will erode more broadly, the consequence being continued structural bias and, in the case of artificial intelligence, a failure to fully deliver the benefits the technology's advocates have long promised.

“A Path Forward for Eliminating Structural Bias in AI Tech,” by Brian Wm. Higgins was published in Law360 on September 2, 2020.