Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends
Blank Rome's Brian Wm. Higgins and Anastasia Dodd published the article, "Artificial Intelligence and Trust: Improving Transparency and Explainability Policies to Reverse Data Hyper-Localization Trends," in The Journal of Science and Law on March 12, 2021.
Access to data is an essential part of artificial intelligence ("AI") technology development efforts. Government and corporate actors have increasingly imposed localized and hyper-localized restrictions on data due to rising mistrust: fear and uncertainty about what countries and companies are doing with data, including perceived and real efforts to exploit user data or to create more powerful and possibly dangerous AI systems that could threaten civil rights and national security. If the trend is not reversed, over-restriction could impede AI development to the detriment of all. The authors offer solutions to improve trust through the adoption of legal and social policies that ensure transparency in data collection and use, and explainability of decisions made by AI systems that affect people's lives.