A.I. is here and it will make the world worse.


Trencher


Fiction Analytics Site Prosecraft Shut Down After Author Backlash

Quote

Part of this is because Prosecraft has admitted to using “AI algorithms.” In a blog post dated October 5, 2018, Benji Smith, the developer of both Prosecraft and the writing program Shaxpir that was based on the data mined from Prosecraft’s library, stated that “we taught our machine-learning [AI] algorithms to recognize which kinds of words can be used in which kinds of contexts, by looking at the types of words and phrases that tend to occur within similar sentences and paragraphs.” Additionally, he wrote that Shaxpir “[analyzed] more than 560 million words of fiction, from more than 5,800 books, written by more than 3,300 popular authors.” He does not disclose where he received those works of fiction, or whether or not he received permission to do so.

While the technology used is not necessarily a generative large language model like ChatGPT, it is not a stretch to say that incorporating generative LLM capabilities could have been on the horizon for Prosecraft. And since the site had a massive library of books, authors' fears are entirely valid. In the wake of this backlash, Smith has written a lengthy blog post on Medium explaining why he voluntarily took down Prosecraft.

 


AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype

Quote

Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.

Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.

Nevertheless, in May the nonprofit Center for AI Safety released a statement—co-signed by hundreds of industry leaders, including OpenAI's CEO Sam Altman—warning of "the risk of extinction from AI," which it asserted was akin to nuclear war and pandemics. Altman had previously alluded to such a risk in a congressional hearing, suggesting that generative AI tools could go "quite wrong." And in July executives from AI companies met with President Joe Biden and made several toothless voluntary commitments to curtail "the most significant sources of AI risks," hinting at existential threats over real ones. Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as "existential risk."

The broader public and regulatory agencies must not fall for this science-fiction maneuver. Rather we should look to scholars and activists who practice peer review and have pushed back on AI hype in order to understand its detrimental effects here and now.

 


An Iowa school district is using AI to ban books

 

Quote

In May, the Republican-controlled state legislature passed, and Governor Kim Reynolds subsequently signed, Senate File 496 (SF 496), which enacted sweeping changes to the state's education curriculum. Specifically it limits what books can be made available in school libraries and classrooms, requiring titles to be "age appropriate” and without “descriptions or visual depictions of a sex act,” per Iowa Code 702.17.

But ensuring that every book in the district's archives adheres to these new rules is quickly turning into a mammoth undertaking. "Our classroom and school libraries have vast collections, consisting of texts purchased, donated, and found," Bridgette Exman, assistant superintendent of curriculum and instruction at Mason City Community School District, said in a statement. "It is simply not feasible to read every book and filter for these new requirements."

As such, the Mason City School District is bringing in AI to parse suspect texts for banned ideas and descriptions since there are simply too many titles for human reviewers to cover on their own. Per the district, a "master list" is first cobbled together from "several sources" based on whether there were previous complaints of sexual content. Books from that list are then scanned by "AI software" which tells the state censors whether or not there actually is a depiction of sex in the book. 

 


Carefully choosing which books are and are not appropriate for a library is a legitimate practice that's as old as libraries themselves, especially school libraries. That said, I would not trust a computer program to know well enough which books belong in my library; it's going to make some very odd calls.

 

Edited by Christopher R Taylor


My current employer, a large financial services company, had an all-hands meeting regarding The Future of Generative AI at (companyname) this morning.  Rest assured that corporate America is moving at flank speed toward adopting AI in all areas where it's appropriate, and some more where it isn't. 

 

And as important as ChatGPT and Midjourney are, they're nothing compared to what AI is going to do to coding and user interfaces. For example, I'd say most customer service call center reps will be replaced by AI chatbots within two years. And it might take about as long for coding itself to be replaced by writing requirements documents that AIs turn into code.


1 hour ago, Old Man said:

My current employer, a large financial services company, had an all-hands meeting regarding The Future of Generative AI at (companyname) this morning.  Rest assured that corporate America is moving at flank speed toward adopting AI in all areas where it's appropriate, and some more where it isn't. 

 

And as important as ChatGPT and Midjourney are, they're nothing compared to what AI is going to do to coding and user interfaces. For example, I'd say most customer service call center reps will be replaced by AI chatbots within two years. And it might take about as long for coding itself to be replaced by writing requirements documents that AIs turn into code.


Cyberdyne Systems was supposed to be a warning, not a blueprint to be adopted.


By 2029 we may end up getting there. 


6 hours ago, Cygnia said:

 

I just tried generating some examples "in the style of" various authors with ChatGPT 3.5. The results were pretty laughable, and all clearly recognizable as ChatGPT's own style, which I suspect is imposed by its safety constraints. It's pretty bad at style mimicry, unless it's something like Dr. Seuss, where it can pick up some obvious things.

 

At least so far.

 

Edit: To comment on the actual lawsuit: It's not illegal to ape someone's style. AI may eventually be able to do so, but it's not currently illegal. They're poking the wrong branch of government to get the protections they want. They need new law, because existing law doesn't really cover their concern.

Edited by Pattern Ghost

Quote

To comment on the actual lawsuit: It's not illegal to ape someone's style. AI may eventually be able to do so, but it's not currently illegal. They're poking the wrong branch of government to get the protections they want. They need new law, because existing law doesn't really cover their concern.

 

I agree with this analysis.  Besides, who do you find liable?  The program?


Not sure if any AIs are quite that bad at English, TBH. That reads like a very bad translation, as do the other examples. They probably had an AI or cheap labor generate the content in Portuguese and then just ran it through Google Translate. Or they ran it through a word-substituting plagiarizer site.

 

I wonder if part of the problem is that they're trying to automate editing on the site, since they got rid of their editors? This quote, ". . . and we continue to enhance our systems to identify and prevent inaccurate information from appearing on our channels," might well refer to automated systems rather than processes and procedures performed by humans.

 

Edit:

 

Here's an example of the workflow these . . . people . . . use to plagiarize sites and generate spammy content. Typically, the flow is cheap labor or AI (or straight-up stolen text), then a paraphrasing program, then Grammarly, and then a final check. The source site probably just swiped the text and ran it through the plagiarizer site without double-checking. MS should be ashamed of itself for reposting garbage content, but MS has no shame.

 

 

Edited by Pattern Ghost
