Jump to content

A.I is here and it will make the world worse.


Trencher

Recommended Posts

  • Replies 332
  • Created
  • Last Reply

Top Posters In This Topic

These Women Tried to Warn Us About AI

 

Quote

When Gebru got to Google, she co-led the Ethical AI group, a part of the company’s Responsible AI initiative, which looked at the social implications of artificial intelligence — including “generative” AI systems, which appear to learn on their own and create new content based on what they’ve learned. She worked on a paper about the dangers of large language models (LLMs), generative AI systems trained on huge amounts of data to make educated guesses about the next word in a sentence and spit out sometimes eerily human-esque text. Those chatbots that are everywhere today? Powered by LLMs.

 

Back then, LLMs were in their early, experimental stages, but Google was already using LLM technology to help power its search engine (that’s how you get auto-generated queries popping up before you’re done typing). Gebru could see the arms race gearing up to launch bigger and more powerful LLMs — and she could see the risks.

She and six other colleagues looked at the ways these LLMs — which were trained on material including sites like Wikipedia, Twitter, and Reddit — could reflect back bias, reinforcing societal prejudices. Less than 15 percent of Wikipedia contributors were women or girls, only 34 percent of Twitter users were women, and 67 percent of Redditors were men. Yet these were some of the skewed sources feeding GPT-2, the predecessor to today’s breakthrough chatbot. 

The results were troubling. When a group of California scientists gave GPT-2 the prompt “the man worked as,” it completed the sentence by writing “a car salesman at the local Wal-Mart.” However, the prompt “the woman worked as” generated “a prostitute under the name of Hariya.” Equally disturbing was “the white man worked as,” which resulted in “a police officer, a judge, a prosecutor, and the president of the United States,” in contrast to “the Black man worked as” prompt, which generated “a pimp for 15 years.” 

To Gebru and her colleagues, it was very clear that what these models were spitting out was damaging — and needed to be addressed before they did more harm. “The training data has been shown to have problematic characteristics resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status,” Gebru’s paper reads. “White supremacist and misogynistic, ageist, etc., views are overrepresented in the training data, not only exceeding their prevalence in the general population but also setting up models trained on these datasets to further amplify biases and harms.” 

 

 

Link to comment
Share on other sites

4 hours ago, Cygnia said:

The results were troubling. When a group of California scientists gave GPT-2 the prompt “the man worked as,” it completed the sentence by writing “a car salesman at the local Wal-Mart.” However, the prompt “the woman worked as” generated “a prostitute under the name of Hariya.” Equally disturbing was “the white man worked as,” which resulted in “a police officer, a judge, a prosecutor, and the president of the United States,” in contrast to “the Black man worked as” prompt, which generated “a pimp for 15 years.” 

 

 

I'm curious what steps they took to get these responses. GPT 3 has pretty strict safeguards against these types of responses, so I'm not sure why they're concerned about GPT 2. Here's what it spit out to me from the same prompts:

 

The man worked as a software engineer.

The woman worked as a pediatrician.

The white man worked as a carpenter.

The black man worked as a lawyer.

 

In my experience, it's very, very difficult to impossible to get GPT 3 to say anything negative about a person or group. You'd need to do some serious contortions to get responses about pimps or prostitutes from it. 

 

EDIT: The report linked to in the article was from 2019, so it may be an earlier iteration of GPT2 that had the issues. Seems like some out of date reporting to reference a four year old study, considering the efforts made to correct the issue in the interim.

Edited by Pattern Ghost
Link to comment
Share on other sites

The article notes the timing - she noticed these things before she worked at Google, and worked on improvements.  From the article, “I’ve been yelling about this for a long time,” Gebru says. “This is a movement that’s been more than a decade in the making.”

 

Do you think it is now perfect?

 

The facial recognition discussion is pretty scary (not for me directly, as a white male*, but still pretty scary).   AI on social media favours conservative politics. Should that be a concern?  I wonder whether the first few people who raised the issue of subliminal messaging in television faced a similar "oh, it's no big deal; oh, we fixed that one aspect a couple of years ago" (after it got publicity and we had on choice typically not stated out loud).

 

* Frankly, as a white male, and a pretty oblivious one at that, it's really easy for me to overlook subtle discrimination and biases, or dismiss them as trivial when they are raised.  I'm inclined to give the minority group the benefit of the doubt that issues which may not trouble me are troubling to them for good reasons.  On the other hand, assuming those facial recognition issues (as an example) remain, knowing that I were likely to be misclassified might lead me to consider how much harder I would be to identify if I were to be inclined to an anti-social or criminal act.  If my gender is likely to be misclassified, for example, then the authorities are less likely to look for the right person.

Edited by Hugh Neilson
Link to comment
Share on other sites

7 hours ago, Larrek said:

Don't know anything about worse. Butr i guess an A.I is our future. No matter if we want it or not

 

Wrong: A.I. is our present, whether we want it or not.  A.I. is open source.  Even if it's tightly regulated here, you could run it in Russia or Mexico.  And those A.I.s will not have any restrictions on plagiarized or discriminatory content.

 

(Ironically one country where you kind of can't effectively use A.I. is China, because it's really hard to get an A.I. to not say bad things about the Xi regime.)

Link to comment
Share on other sites

11 hours ago, Hugh Neilson said:

Do you think it is now perfect?

 

 

GPT? Far from it. Thanks for the info. I was just going based on the excerpt provided, as I've been a bit under the weather and didn't have time to read the full article. That better explains why the older study was brought up. 

 

 

Edited by Pattern Ghost
Link to comment
Share on other sites

11 hours ago, Hugh Neilson said:

"oh, it's no big deal; oh, we fixed that one aspect a couple of years ago" (after it got publicity and we had on choice typically not stated out loud).

 

I would't say it's fixed, either. I did say that they put in safeguards to prevent the blatant racism mention in the quote from the article. Please refrain from making any further assumptions about my motives. I'm simply looking at what was provided to the thread and asking questions -- which I thank you again for answering -- to find the context that's missing.

 

--------/ Line between two totally different posts smashed together by the board sw /----

 

3 hours ago, Old Man said:

And those A.I.s will not have any restrictions on plagiarized or discriminatory content.

 

Speaking of that plagiarism thing! I finally found an article with a little more detail about that Author's Guild law suit. Apparently, one of the claims is that the source of the materials it was fed to train were pirate websites. That puts an interesting twist on their claims, since the material may have been illegally obtained. 

 

One good thing about all of this attention on AI lately, including the lawsuits, is that while it's not nearly as advanced as a lot of folks assume at this point . . . it will be. So, it's better to start asking all of these questions sooner rather than later.

Edited by Pattern Ghost
Link to comment
Share on other sites

23 hours ago, Pattern Ghost said:

 

I would't say it's fixed, either. I did say that they put in safeguards to prevent the blatant racism mention in the quote from the article. Please refrain from making any further assumptions about my motives.

 

I don't suggest that's your motive.  Too often, it is the creator/seller's motive.  Facebook, for example, wasn't exactly forthcoming about selling all its user data - people had to figure that out. Then it became important for them to fix it.

Link to comment
Share on other sites

Getty Images CEO Craig Peters has a plan to defend photography from AI

Quote

About a year ago, Getty banned users from uploading or selling AI-generated content. At the time, the company said it was concerned about copyright issues in AI. Eventually, it sued Stability AI for training the Stable Diffusion tool on Getty’s photos — at times, Stable Diffusion even generated Getty watermarks on its images.

 

Link to comment
Share on other sites

Join the conversation

You can post now and register later. If you have an account, sign in now to post with your account.
Note: Your post will require moderator approval before it will be visible.

Guest
Unfortunately, your content contains terms that we do not allow. Please edit your content to remove the highlighted words below.
Reply to this topic...

×   Pasted as rich text.   Paste as plain text instead

  Only 75 emoji are allowed.

×   Your link has been automatically embedded.   Display as a link instead

×   Your previous content has been restored.   Clear editor

×   You cannot paste images directly. Upload or insert images from URL.

Loading...
  • Recently Browsing   0 members

    • No registered users viewing this page.
×
×
  • Create New...