
AI Weekly: AI prosecutors closed out 2021

December 31, 2021 9:30 AM

Image Credit: Getty Images

In the week that drew 2021 to a close, the tech news cycle died down, as it typically does. Even an industry as fast-paced as AI sometimes needs a reprieve, especially as a new COVID-19 variant upends plans and major conferences.

But this is not to say that December was uneventful.

One of the most talked-about stories came from the South China Morning Post (SCMP), which described an “AI prosecutor” developed by Chinese researchers that can reportedly identify crimes and press charges “with 97% accuracy.” The system, which was trained on 1,000 “traits” sourced from 17,000 real-life cases of crimes committed from 2015 to 2020, such as gambling, reckless driving, theft, and fraud, recommends sentences given a brief text description. According to SCMP, it has been tested by the Shanghai Pudong People’s Procuratorate, China’s largest district prosecution office.
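SCMP offers no implementation details, but the system as described, a model trained on text-derived “traits” that maps a brief case description to a charge, resembles a standard supervised text classifier. Below is a minimal, purely hypothetical Python sketch of that pattern; the TF-IDF features, logistic regression model, and toy data are all assumptions for illustration, not details of the actual system.

```python
# Hypothetical sketch of a charge-recommending text classifier in the
# spirit of the system SCMP describes. The real system's architecture,
# features, and training data are not public; nothing here reflects it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented stand-ins for the 17,000 real case descriptions and charges.
case_descriptions = [
    "defendant operated an unlicensed gambling ring out of a warehouse",
    "driver ran three red lights at high speed in a school zone",
    "suspect took merchandise from the store without paying",
    "accused solicited investments in a nonexistent fund",
]
charges = ["gambling", "reckless driving", "theft", "fraud"]

# TF-IDF features play the role of the "traits"; logistic regression
# maps them to a charge. A production system would be far larger.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_descriptions, charges)

print(model.predict(["defendant wired investor money to a fake account"])[0])
```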

It’s not surprising that a country such as China, which, like some parts of the U.S., has embraced predictive crime technology, is pursuing an AI prosecutor. Still, the implications for anyone subject to the AI prosecutor’s judgment are troubling, considering how inequitable the justice system has historically been shown to be.

A study published last December by researchers at Harvard and the University of Massachusetts found that the Public Safety Assessment (PSA), a risk-gauging tool that judges can opt to use when deciding whether a defendant should be released before trial, tends to recommend sentencing that’s too severe. The researchers also found that the PSA is more likely to impose a cash bail on male arrestees than on female arrestees, a possible sign of gender bias.

The U.S. justice system has a history of adopting AI tools that are later found to exhibit bias against defendants belonging to certain demographic groups. Perhaps the most well-known is Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), which is used to predict whether a person will become a recidivist. A ProPublica report found that COMPAS was far more likely to incorrectly judge black defendants to be at higher risk of recidivism than white defendants, while at the same time flagging white defendants as low risk more often than black defendants.
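ProPublica’s finding boils down to comparing error rates across groups: among defendants who did not reoffend, how often was each group wrongly labeled high risk? A minimal sketch of that kind of disparity audit, with invented records standing in for ProPublica’s actual dataset:

```python
# Toy audit comparing false positive rates (labeled "high risk" but did
# not reoffend) across demographic groups. The records are invented.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, False), ("white", True, True), ("white", False, True),
]

fp = defaultdict(int)   # predicted high risk but did not reoffend
neg = defaultdict(int)  # everyone who did not reoffend

for group, predicted_high, reoffended in records:
    if not reoffended:
        neg[group] += 1
        if predicted_high:
            fp[group] += 1

# A large gap between groups here is the disparity ProPublica reported.
for group in sorted(neg):
    print(f"{group}: false positive rate = {fp[group] / neg[group]:.2f}")
```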

With new research showing that even training predictive policing tools in ways meant to lessen bias has little effect, it’s become clear, if it wasn’t before, that deploying these systems responsibly today is infeasible. Some early adopters of predictive policing tools, such as the Los Angeles and Pittsburgh police departments, have said they will no longer use them.

But with less scrupulous law enforcement agencies, courtrooms, and municipalities plowing ahead, regulation driven by public pressure is perhaps the best bet for reining in and setting standards for the technology. Santa Cruz, Oakland, and New Orleans have all outlawed predictive policing tools. And the nonprofit Fair Trials is calling on the European Union to include a prohibition on predictive crime tools in its proposed AI regulatory framework.

“We don’t condone the use [of tools like the PSA],” Ben Winters, author of an Electronic Privacy Information Center report that called pretrial risk assessment tools “strikes against individual liberties,” said in a statement. “We would say they should be heavily regulated in areas where they are used.”

A new approach to AI

It’s unclear whether even the most sophisticated AI systems understand the world the way that humans do, which is another argument for regulating predictive policing. Cycorp, which Business Insider profiled this week, is taking a different approach: it seeks to codify general human knowledge so that AI can make use of it.

Cycorp’s prototype software, which has been in development for nearly 30 years, isn’t programmed in the traditional sense. It can draw inferences that a human reader might expect, and it can even pretend to be a confused sixth-grader and ask users to help it learn sixth-grade math.
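Cyc’s actual knowledge base and inference engine are proprietary and vastly larger, but the underlying idea, hand-encoded facts plus rules that an engine chains together to reach conclusions no one stated explicitly, can be illustrated with a toy forward-chainer. Everything below is an invented sketch, not Cycorp’s representation:

```python
# Illustrative only: a tiny forward-chaining inference loop showing how
# encoded facts and rules yield the inferences a human reader expects.
facts = {("alice", "is_a", "sixth_grader")}
rules = [
    # if X is_a sixth_grader, then X is_a student
    (("is_a", "sixth_grader"), ("is_a", "student")),
    # if X is_a student, then X attends school
    (("is_a", "student"), ("attends", "school")),
]

changed = True
while changed:  # keep chaining until no rule adds a new fact
    changed = False
    for (p_rel, p_obj), (c_rel, c_obj) in rules:
        for subj, rel, obj in list(facts):
            if rel == p_rel and obj == p_obj and (subj, c_rel, c_obj) not in facts:
                facts.add((subj, c_rel, c_obj))
                changed = True

# Includes ("alice", "attends", "school"), which was never stated directly.
print(facts)
```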

Is it possible to create AI with human-level intelligence? That’s the million-dollar question. Experts like Yann LeCun, vice president and chief AI scientist at Facebook, and Yoshua Bengio, renowned professor of computer science and artificial neural networks expert, don’t believe it’s within reach, but others beg to differ. One promising direction is neuro-symbolic reasoning, which merges learning and logic to make algorithms “smarter.” The thought is that neuro-symbolic reasoning could help incorporate …
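Neuro-symbolic systems take many forms, and the article doesn’t specify one. At its simplest, “merging learning and logic” can mean letting a learned model propose answers while symbolic rules constrain them, as in this deliberately toy sketch (the stub scores and the underwater rule are invented):

```python
# Toy neuro-symbolic pattern: a learned model proposes, symbolic rules
# veto. The "network" is a stub; real systems integrate far more deeply.
def neural_scores(image):
    # Stand-in for a trained classifier's label probabilities.
    return {"cat": 0.48, "fish": 0.45, "dog": 0.07}

def consistent(label, context):
    # Symbolic knowledge the learner may not contradict.
    if context["underwater"] and label != "fish":
        return False  # rule out land animals in underwater scenes
    return True

def predict(image, context):
    scores = neural_scores(image)
    # Keep only labels that satisfy the logic, then take the best survivor.
    allowed = {label: s for label, s in scores.items() if consistent(label, context)}
    return max(allowed, key=allowed.get)

print(predict("photo.jpg", {"underwater": True}))  # "fish", despite a lower score
```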
