Imagine that your identity was stolen or mishandled online, causing you serious personal harm. The cause was a bug rooted in artificial intelligence (AI), a technology with no name and no face.
So, who is to blame? Is it the company hosting the technology, the state that deployed it, the worker who wrote a particular piece of code, or someone else entirely?
This real-life game of Clue has had, and will continue to have, massive consequences and open questions for years to come. But UNM law professor Sonia Gibson Rankin is one step closer to finding the answers.
“What happens when a state uses an algorithm intended to help society and it actually harms society?” asked Gibson Rankin.
In a soon-to-be-published paper in the New York University Law Review, titled “MiDAS Touch: Atuahene’s ‘Stategraft’ and the Effects of Unregulated AI,” Gibson Rankin explores the Michigan Integrated Data Automated System (MiDAS) incident.
To combat the billions of dollars owed to the federal government in the wake of the Great Recession, Michigan set its sights on modernizing the Unemployment Insurance Agency (UIA) and cutting what was seen as unnecessary expense, launching MiDAS in 2013.
The state spent years and $47 million on the program. The goal of MiDAS was to automatically detect unemployment fraud, as well as determine eligibility for benefits, track cases, and intercept income tax refunds.
From October 2013 to September 2016, MiDAS did its job. In fact, fraud findings tripled to over 25,000 in just one year; within two years, the total exceeded 40,000. With fraud claims stretching back years, tens of thousands of people faced penalties 400% higher than usual. This generated $96 million, a glowing total that would have put a dent in Michigan’s enormous debt.
The only problem is that 93% of these accusations were false.
“The concern with applying AI in society without proper oversight is that by the time we understand the damage that has been done, it has already affected hundreds of thousands of people,” said Gibson Rankin.
Something within the AI bypassed the due process rights of individuals who had done absolutely nothing wrong. It was given permission to automatically flag people, garnish their wages, or seize their tax refunds, regardless of how long ago they had been unemployed.
When Michigan citizens called to find out why this was happening, no one could give them an answer. Likewise, state officials found no evidence of fraud in the vast majority of cases.
“When people called, there was no one who could explain what happened or why. The response was basically, ‘The AI says you did this.’”
Initially, the accused turned to the UIA for answers. The UIA pointed to the state. The state pointed to technology vendors Fast Enterprises and SAS Institute. They pointed to management consultant CSG Government Solutions.
They all faced the same predicament: a guessing game of who is to blame.
“If you sue the state, they say the AI did it. If you sue the third-party vendor, there’s a clause protecting them, saying the state made the decisions. It leaves the actual person who has been harmed by the AI without a lot of options,” she said.
After several trips to court, the state of Michigan has so far agreed to pay $20.8 million in restitution, compensating those falsely accused of fraud for the money taken from them.
That wasn’t enough, according to Gibson Rankin. Many of those affected felt the same.
In Cahoo v. SAS Analytics, the state argued that compensation was satisfied by issuing refunds. The plaintiffs argued that their due process rights were violated beyond the financial harm, as they had to disentangle themselves from the fraud allegations.
“How do I give back, or address the fact, that you may have had to file for bankruptcy? How do I address the fact that, while all this was happening, you may have lost out on a new job because you were labeled in the system as having committed unemployment fraud? How do I address the fact that families may have broken up, that people may have been driven from their homes, because of those labels?”
The Michigan Supreme Court sided with the plaintiffs, finding that an “artificial intelligence made me do it” defense was insufficient.
Not only that, but the state is still working on making the rest of the payments.
As residents await their restitution, questions remain for legal minds like Gibson Rankin.
How do you prevent the biases that exist in artificial intelligence to begin with? Can you really hold tech accountable for the results? How far will artificial intelligence go unchecked?
“When technology is unregulated, it can thrive and produce all kinds of unique innovations. But there are some areas where a lack of regulation leads to serious disaster.” – Sonia Gibson Rankin
In March 2022, Michigan Governor Gretchen Whitmer proposed allocating $75 million to replace the MiDAS system with a more “human-centered” one.
Gibson Rankin believes that, going forward, there must be groups and discussions in place to answer these questions before AI grows too big and gets into the weeds.
“I think we’re going to see a lot more of this if the AI community continues to operate underground, where people can’t uncover the source of the damage,” she said.
She is also working with other professors on the potential development of a computational justice course at UNM.
“It’s going to take all of us sitting at the table to get it right from the start,” she said.
You can read the full research paper and learn more about the MiDAS incident by following the link here.