
US government use of AI is shoddy and failing citizens – because no one knows how it works

The AI Now Institute's report ain't pretty

New York University's AI Now Institute, a research hub investigating the wider social impacts of machine learning algorithms, has published a report critiquing how the US government uses the technology.

The report, emitted this week, is based on a series of case studies discussed at a workshop held in June this year. Research into the ethics of algorithms is flourishing, and the most troubling pitfalls of machine learning are by now widely recognized.

AI systems have been described as black boxes: with so many hidden variables, it’s impossible to see what’s going on inside or to understand how the machines reach their decisions. The result is a lack of transparency and accountability, and biases in the training data can slip through unnoticed. The report highlights how algorithms used by the US government can often lead to disasters in exactly this way.
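
To see how that can happen in practice, here’s a minimal, entirely hypothetical Python sketch, not taken from the report or from any real government system, of how bias baked into historical decisions can slip through into a trained model. The data, feature names and weights are invented purely for illustration.

```python
# Hypothetical sketch: a model trained on biased historical decisions
# quietly reproduces that bias for new cases, and nobody affected can
# see why.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
need = rng.normal(0, 1, n)        # the genuine signal decisions should rest on
group = rng.integers(0, 2, n)     # membership of a historically disadvantaged group

# Past human decisions systematically disfavored group 1, so the
# training labels are already skewed.
past_decision = ((need - 0.7 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([need, group]), past_decision)

# Two people with identical need get different odds of a favorable outcome.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```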

Aid for whom?

The Medicaid program, which helps subsidize medical costs for people with low incomes, uses software to assess a person’s background to decide what they are entitled to. In the worst cases, faulty AI decisions “terminated benefits and services to individuals with intellectual, developmental, and physical disabilities.”

For example, in Arkansas, algorithmic systems failed to cater for patients with cerebral palsy or diabetes who were looking for health care options at home.

“Many states simply pick an assessment tool used by another state, trained on that other state’s historical data, and then apply it to the new population, thus perpetuating historical patterns of inadequate funding and support. In addition, there are frequent flaws and errors in how these assessment systems are implemented and in how they calculate the need for care,” the report states.

Algorithms have also been used to assess how well teachers are performing. Martha Owen, lead counsel for the Houston Federation of Teachers, sued the Houston Independent School District for not disclosing how such systems operated after several teachers were sanctioned or terminated.

The assessments were based on students’ scores on standardized tests, but little information was revealed beyond that. The code was considered the “private property of third party vendors.” The lawsuit allowed an expert to pry into some parts of the system, which was ultimately deemed impenetrable.

Ultimately, this led to a successful outcome for Owen. The two parties reached a settlement whereby the Houston Independent School District agreed to stop using standardized test scores to justify terminating teachers for as long as those systems remained impossible to understand.

Algorithms and crime

The use of algorithms in criminal sentencing, based on how likely a defendant is to reoffend, has been increasing. When it comes to juveniles, the report found it particularly disturbing that one factor in assessing recidivism was “parental criminality.”

“Given the long and well-documented history of racial bias in law enforcement, including the over-policing of communities of color, [this factor] can easily skew 'high risk' ratings on the basis of a proxy for race,” the report explained.


“Community disorganization” is another influential risk factor: if an individual lives in a neighborhood considered to be 'violent' or near gang activity, this too could skew 'high risk' ratings on the basis of a proxy for race.
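
As a toy illustration of that proxy problem, and emphatically not any vendor’s actual model, consider a risk score that never asks about race but does weight neighborhood factors. The weights and thresholds below are invented.

```python
# Hypothetical toy risk score: race is never an input, but a neighborhood
# factor that correlates with over-policed communities drags scores
# towards "high risk" anyway.
from dataclasses import dataclass

@dataclass
class Defendant:
    prior_offenses: int
    parental_criminality: bool       # risk factor flagged in the report
    community_disorganization: bool  # neighborhood labelled 'violent' or near gang activity

def risk_rating(d: Defendant) -> str:
    score = 2 * d.prior_offenses
    score += 3 if d.parental_criminality else 0
    score += 3 if d.community_disorganization else 0  # the proxy at work
    return "high risk" if score >= 4 else "low risk"

# Identical records, different neighborhoods: only the proxy changes the outcome.
print(risk_rating(Defendant(1, False, False)))  # low risk
print(risk_rating(Defendant(1, False, True)))   # high risk
```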

DNA testing was another area identified where an algorithm’s decision could severely impact a person’s life.

Software is used to check how closely a suspect’s DNA matches the evidence found at a crime scene. Forensic testing has been around for a while, but the report found that the “algorithms used now are so complex that most medical examiners and laboratory technicians are unable to replicate the computational results without the assistance of the system.”

“We also learned that while DNA laboratories are often tested and certified to ensure they maintain minimum standards for biological or chemical testing, the systems they use to perform the probabilistic genotyping are often untested, especially for bugs in their code,” the report said.
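
For a sense of what these tools compute, here’s a deliberately oversimplified, hypothetical sketch of a likelihood ratio, the kind of figure probabilistic genotyping software reports. Real systems model mixed samples, allele dropout and measurement noise across many loci, which is precisely why their output is so hard to replicate by hand; nothing below reflects any particular forensic package.

```python
# Oversimplified toy: a likelihood ratio compares P(evidence | suspect
# contributed the DNA) against P(evidence | some unknown person did).
# Real probabilistic genotyping handles mixtures, dropout and noise.

def single_locus_lr(suspect_matches: bool, genotype_freq: float) -> float:
    """genotype_freq: how common the observed genotype is in the population."""
    p_if_suspect = 1.0 if suspect_matches else 0.0
    p_if_unknown = genotype_freq
    return p_if_suspect / p_if_unknown

def combined_lr(loci: list[tuple[bool, float]]) -> float:
    lr = 1.0
    for matches, freq in loci:
        lr *= single_locus_lr(matches, freq)  # loci treated as independent: a toy assumption
    return lr

# Three matching loci with population frequencies of 10%, 5% and 2%.
print(combined_lr([(True, 0.10), (True, 0.05), (True, 0.02)]))  # 10 * 20 * 50 = 10,000
```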

The AI Now Institute has recommended external auditing, which could help companies weed out bugs and assess how effective their systems really are before they’re rolled out, reducing the potential for harm.
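
What might such an audit look for? One simple check, sketched below with an invented stand-in scoring function rather than any real vendor API, is whether people with otherwise comparable records are flagged at very different rates depending on which group they belong to.

```python
# Hypothetical audit-style check: compare flag rates across groups before
# rollout. score_applicant is an invented stand-in, not a real vendor API.

def score_applicant(record: dict) -> bool:
    """Stand-in decision function under audit: True means 'flagged'."""
    return record["neighborhood_risk"] > 0.5

def audit_flag_rates(records: list[dict]) -> dict[str, float]:
    rates = {}
    for group in {r["group"] for r in records}:
        members = [r for r in records if r["group"] == group]
        rates[group] = sum(score_applicant(r) for r in members) / len(members)
    return rates

records = [
    {"group": "A", "neighborhood_risk": 0.2},
    {"group": "A", "neighborhood_risk": 0.4},
    {"group": "B", "neighborhood_risk": 0.7},
    {"group": "B", "neighborhood_risk": 0.9},
]
print(audit_flag_rates(records))  # a big gap between groups is a red flag
```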

“As evidence about the risks and harms of these systems grow, we’re hopeful we’ll see greater support for assessing and overseeing the use of algorithms in government,” the institute said in a statement.

"Further, that by continuing to convene experts from across disciplines to understand the challenges and discuss solutions, we’ll build a groundswell of strategies and best practices to protect fundamental rights, and ensure that algorithmic decision making actually benefits the citizens it is meant to serve." ®
