I’m Not Convinced Ethical Generative AI Currently Exists

By Wired
February 20, 2025


Are there generative AI tools I can use that are perhaps slightly more ethical than others?
—Better Choices

No, I don’t think any one generative AI tool from the major players is more ethical than any other. Here’s why.

For me, the ethics of generative AI use can be broken down into issues with how the models are developed, specifically how the data used to train them was accessed, as well as ongoing concerns about their environmental impact. In order to power a chatbot or image generator, an obscene amount of data is required, and the decisions developers have made in the past, and continue to make, to obtain this repository of data are questionable and shrouded in secrecy. Even what people in Silicon Valley call “open source” models keep their training datasets hidden.

Despite complaints from authors, artists, filmmakers, YouTube creators, and even just social media users who don’t want their posts scraped and turned into chatbot sludge, AI companies have typically behaved as if consent from those creators isn’t necessary for their output to be used as training data. One familiar claim from AI proponents is that to obtain this vast amount of data with the consent of the humans who crafted it would be too unwieldy and would impede innovation. Even for companies that have struck licensing deals with major publishers, that “clean” data is an infinitesimal part of the colossal machine.

Although some devs are working on approaches to fairly compensate people when their work is used to train AI models, these projects remain niche alternatives to the mainstream behemoths.

And then there are the ecological consequences. The current environmental impact of generative AI usage is similarly outsized across the major options. While generative AI still represents a small slice of humanity’s aggregate stress on the environment, gen-AI software tools require vastly more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance contributes far more to the climate crisis than simply searching the web on Google.

It’s possible the amount of energy required to run the tools could be lowered—new approaches like DeepSeek’s latest model sip precious energy resources rather than chug them—but the big AI companies appear more interested in accelerating development than pausing to consider approaches less harmful to the planet.

How do we make AI wiser and more ethical rather than smarter and more powerful?
—Galaxy Brain

Thank you for your wise question, fellow human. This predicament may be a more common topic of discussion among those building generative AI tools than you might expect. For example, Anthropic’s “constitutional” approach to its Claude chatbot attempts to instill a sense of core values into the machine.

The confusion at the heart of your question traces back to how we talk about the software. Recently, multiple companies have released models focused on “reasoning” and “chain-of-thought” approaches to perform research. Describing what the AI tools do with humanlike terms and phrases makes the line between human and machine unnecessarily hazy. I mean, if the model can truly reason and have chains of thought, why wouldn’t we be able to send the software down some path of self-enlightenment?

Because it doesn’t think. Words like reasoning, deep thought, and understanding are all just ways to describe how the algorithm processes information. When I pause over the ethics of how these models are trained and their environmental impact, my stance isn’t based on an amalgamation of predictive patterns or text, but rather on the sum of my individual experiences and closely held beliefs.

The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user’s prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions.




Tags: Artificial Intelligence, chatbots, Climate, ethics, the prompt