
Will “Leaky” Machine Learning Usher in a New Wave of Lawsuits?

RAIL: The Journal of Robotics, Artificial Intelligence & Law

A computer science professor at Cornell University has put a new twist on Marc Andreessen’s pronouncement that software is “eating the world.” According to Vitaly Shmatikov, it is “machine learning [that] is eating the world” today. The point of his personification is clear: machine learning and other applications of artificial intelligence (“AI”) are disrupting society at a rate that shows little sign of leveling off. And with growing numbers of companies and individual developers producing customer-facing AI systems, it seems all but inevitable that some of those systems will create unintended and unforeseen consequences, including harm to individuals and society at large.

Researchers like Shmatikov and his colleagues are starting to reveal those consequences, including one—“leaky” machine learning models—that could have serious legal implications. The article explores the causes of action that might be asserted against a developer who publishes a leaky machine learning model, either directly or via a machine-learning-as-a-service (“MLaaS”) cloud platform, along with possible defenses, using the lessons of cybersecurity litigation to frame the discussion.


“Will ‘Leaky’ Machine Learning Usher in a New Wave of Lawsuits?” by Brian Wm. Higgins was published in the January–February 2019 edition of RAIL: The Journal of Robotics, Artificial Intelligence & Law (Vol. 2, No. 1), a Fastcase, Inc. publication. Reprinted with permission.