Written by Dave Nyczepir
Artificial intelligence research groups are urging the National AI Research Resource (NAIRR) Task Force to reconsider investing in shared computing and data infrastructure, which they say will subsidize the tech giants that control it rather than democratize access.
The AI Now Institute of New York University and Data & Society Research Institute submitted a joint response to the task force’s request for information, encouraging it to pause efforts to establish NAIRR until it explores alternative investments in AI research and puts controls in place to ensure the accountable and ethical use of government data.
Despite the White House Office of Science and Technology Policy and National Science Foundation's insistence that NAIRR will democratize access to AI infrastructure for the benefit of academics and startups, researchers say that goal is jeopardized if the government continues to license that infrastructure from technology giants.
“What we’re looking at with the National AI Research Resource, unless it’s fundamentally transformed, is a large subsidy going directly to Big Tech in a way that will extend and entrench its power by licensing its infrastructure and making it even more core to a national research agenda,” Meredith Whittaker, cofounder of the AI Now Institute, told FedScoop. “At exactly the same time we’re seeing increased pressure on these industry players due to their concentrated power, due to their regulatory arbitrage and due to fundamental questions about their compatibility with democratic government.”
Only tech giants have the billions of dollars to employ hundreds of site reliability engineers and data center operators to maintain AI infrastructure, while building the software, tools and application programming interfaces that make up the AI research environment. That’s why the CIA contracts with Amazon for its AI infrastructure instead of building its own, Whittaker said.
That same infrastructure gives tech giants the ability to aggregate dossiers of personal information on global populations and use them to increase profits, while declining to reveal how such systems work by citing corporate secrecy, Whittaker said.
Facebook whistleblower Frances Haugen testified before Congress earlier this month, accusing the company of prioritizing profits over taking proper steps to combat misinformation and other harmful content on its platforms. At the same time, a worldwide outage of Facebook and its platforms disrupted global communications, exposing overreliance on apps like WhatsApp.
AI systems remain brittle, fallible and encode patterns of bias that can harm vulnerable communities when deployed at scale, yet the Department of Defense continues to spend billions on the technology — which often amounts to a handful of statistical techniques useful for crunching data but marketed as AI.
“The right move is then to pause on the rapid development of these technologies and to develop the democratic infrastructure for meaningful oversight — particularly centering the people who are subject to AI,” Whittaker said.
That means the people on whom agencies want to use facial recognition, or whose eligibility for government benefits agencies want AI to determine.
The AI Now Institute and Data & Society don't suggest in their RFI response that AI research should be scuttled entirely. Rather, they propose that NSF instead expand its National AI Research Institutes by funding under-resourced research domains, scholarships for underrepresented students, fellowships placing them in agencies, and forums where communities harmed by AI systems can weigh in on their design and deployment. NSF should also preserve the independence of its research by ending the practice of having companies jointly fund the institutes, according to the response.
Federal resources also need to be put toward auditing corporate AI systems because, at their heart, those systems rely on mass surveillance for the vast amounts of data required to train their models, the groups argue.
“Yes, we should be auditing, we should be overseeing, we should know where these systems are; that’s just the floor,” Whittaker said. “But we also need to ask deeper questions about whether we’re comfortable with the level of surveillance and the level of concentrated power that is required to create these systems and, particularly, whether we’re comfortable with that in the hands of a handful of for-profit corporations.”