The Israeli Occupation Forces have introduced more AI-enabled weaponry into their genocidal war on Gaza. This includes Smart Shooter, an optical scope for small arms that uses image processing to lock onto targets, as well as drones used to map tunnels and identify human presence. As Rafah, where 1.4 million Palestinians are taking refuge, comes under attack and the number of Palestinians killed by Israel rises above 28,000, Israeli weapons manufacturers continue to use the war on Gaza as a testing ground for new military technologies.
Feb 16, 2024
Connections
Is AI innovation serving the needs of the public? The federal government has announced a public-private partnership to provide funding and resources for AI research. But, as Amba Kak and Sarah Myers West point out, the partnership has not articulated how “these investments will meaningfully benefit society at large.” Just as behavioral ad targeting came to fund the “free” internet, AI companies under pressure to turn a profit will likely resort to predatory business models, surveillance, and worker exploitation.
Credit and Finance
People can sue the federal government for Fair Credit Reporting Act violations. The Supreme Court held that sovereign immunity does not shield the federal government from liability when it violates the FCRA. The plaintiff in the case, Reginald Kirtz, alleged that his credit was damaged after he secured a loan from the US Department of Agriculture (USDA) and the USDA incorrectly reported to TransUnion that his account was past due.
Criminal Legal System
Chicago will discontinue its use of ShotSpotter gunshot detection technology. Mayor Brandon Johnson announced that the city would not renew its contract, following a review by the Cook County State’s Attorney that found the tool ineffective at reducing crime, leading to arrests in only 1% of 12,000 incidents. Mayor Johnson campaigned on ending the city’s contract with ShotSpotter, and community members protested the contract renewal because of the harm the technology causes to Black and brown communities.
How misleading face recognition test scores can lead policymakers astray. While testing face recognition systems for accuracy across demographics is important, “the tests do not take full account of practical realities” and “no laboratory test [represents] the conditions and reality of how police use face recognition in real-world scenarios.” For example, testing labs do not have access to the exact “matching database” that a specific police department uses, nor can they account for the range of low-quality images that police feed into the systems.
Housing
Los Angeles will change its housing intake system, which has been shown to be racially biased against Black unhoused people. A motion approved unanimously by the LA City Council “called specifically for greater fairness in the vulnerability scoring system,” which had rated Black people as significantly less vulnerable than white people, making Black unhoused Angelenos less likely to obtain subsidized permanent housing.
Labor and Employment
FTC signals scrutiny of worker surveillance, AI management tools. In a recent speech, the Associate Director of the Division of Privacy and Identity Protection at the FTC said that “[c]ompanies that mislead workers about worker surveillance technologies, that fail to be transparent with workers about their collection of personal information, or that deploy technologies in ways that harm workers without corresponding benefits may face liability under the FTC Act,” as “a consumer’s right to be protected from privacy harms and other injuries doesn’t evaporate the minute they enter a factory or log into their computer.”
Customer and food service jobs increasingly require a long and “bizarre” personality quiz. The test, created by the company Paradox.ai, does not actually seem to involve artificial intelligence. Rather, it requires applicants to click through more than 80 slides designed to sort people according to the “Big Five” personality profile model (the usefulness of which has been disputed for years). On Reddit, job seekers complain that the results of the test are inaccurate, bewildering, and of unclear relevance to the job(s) to which they’ve applied.
Online Life
Meta discusses a proposal to expand censorship of the word “Zionist,” alarming Palestinian rights groups. Meta already removes hate speech that uses the word as a proxy for Jewish or Israeli people, but the expansion could ban important criticism of the ideology itself. It comes at a dire moment, as Palestinians in Gaza face genocidal attacks by the Zionist Israeli government in their last refuge of Rafah this week. Meta has already been criticized for over-moderating pro-Palestinian speech, and groups have organized a petition against the proposed policy.
Public Benefits
CMS puts private Medicare insurers on notice about their use of algorithms to deny care. In a memo issued last week, the Centers for Medicare and Medicaid Services (CMS) clarified that the private insurers that administer Medicare Advantage, recently the subject of multiple lawsuits over automatic denials of coverage, may use algorithms only to check alignment with coverage criteria and must base decisions on each patient’s individual circumstances. The memo further specifies that insurers must make their coverage criteria publicly available online and cannot use automated systems to shift or add criteria.