Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models

By Wired
March 14, 2025


The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI). The instructions eliminate mention of “AI safety,” “responsible AI,” and “AI fairness” from the skills it expects of members, and introduce a request to prioritize “reducing ideological bias, to enable human flourishing and economic competitiveness.”

The information comes as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases are hugely important because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.

The new agreement removes mention of developing tools “for authenticating content and tracking its provenance” as well as “labeling synthetic content,” signaling less interest in tracking misinformation and deepfakes. It also adds emphasis on putting America first, asking one working group to develop testing tools “to expand America’s global AI position.”

“The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,” says one researcher at an organization working with the AI Safety Institute, who asked not to be named for fear of reprisal.

The researcher believes that ignoring these issues could harm regular users by possibly allowing algorithms that discriminate based on income or other demographics to go unchecked. “Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly,” the researcher claims.

“It’s wild,” says another researcher who has worked with the AI Safety Institute in the past. “What does it even mean for humans to flourish?”

Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled “racist” and “woke.” He often cites an incident where one of Google’s models debated whether it would be wrong to misgender someone even if it would prevent a nuclear apocalypse—a highly unlikely scenario. Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. A researcher who advises xAI recently developed a novel technique for possibly altering the political leanings of large language models, as reported by WIRED.

A growing body of research shows that political bias in AI models can impact both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.

Since January, Musk’s so-called Department of Government Efficiency (DOGE) has been sweeping through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration’s aims. Some government departments such as the Department of Education have archived and deleted documents that mention DEI. DOGE has also targeted NIST, the parent organization of AISI, in recent weeks. Dozens of employees have been fired.




Tags: Artificial Intelligence, DOGE, Donald Trump, government, models, politics, research