
A.I. is here and it will make the world worse.



There is a new AI bot called Sora. Its prompts can now generate up to one minute of full-motion video. Take a look:

This is under a company lockdown, so it's not yet available to the public, but it would be a useful tool for generating B-roll for YouTube. As of now it cannot generate porn, celebrity look-alikes, or graphic violence. How long does one think those restrictions will stay in place?

Edited by Scott Ruggels

1 hour ago, Scott Ruggels said:

There is a new AI bot called Sora. Its prompts can now generate up to one minute of full-motion video. Take a look:

This is under a company lockdown, so it's not yet available to the public, but it would be a useful tool for generating B-roll for YouTube. As of now it cannot generate porn, celebrity look-alikes, or graphic violence. How long does one think those restrictions will stay in place?

 

For this specific tool?  Depends on the law and the people who run the company.  But as I explained to a family member who is also in cyber, it is not possible to stop development of AI any more than it would be possible to stop development of, say, video games.  The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico.


Quote

The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico.

 

Yeah, it's like nukes or guns or porn or whatever. The cat is out of the bag, you cannot put it back. Superman could grab all the nuclear weapons in a net and throw them into the sun, and nations would just build more and keep them in lead silos or disguise them as something Superman ignored. You can't unlearn tech, unless there is a horrendous catastrophe that resets civilization. You just have to learn how to use things responsibly and how to respond when people do not.

 

Approximately 1.2 million people die each year as a result of auto collisions. That's a price we have come to accept as being worth having cars; how many lives are saved as a result of automobiles? Ten times that, if not more. New tech requires new responses, moral judgement, and law. It takes time, study, analysis, and cultural change.


We're in the process now of getting used to the idea of instant communication on the internet. We're trying to learn socially how to handle that, legally how to approach it, and it takes time, philosophical thought, theology, legal study, etc. Every new wave of tech makes that necessary, and people adapt. The problem we're facing right now is that tech is happening so fast and is so potent in terms of cultural impact that it's rough trying to get it all straight. Making matters worse is that our culture has removed nearly all consequence for certain kinds of behavior, so a lot of corrosive things are consequence-free, or consequence-light, at least.

 

It will all get worked out to at least a functional level, but not perfectly, in time. Until then it's a rough ride, like when the Model T drove through town and scared all the horses and womenfolk.

Edited by Christopher R Taylor

7 hours ago, Old Man said:

 

For this specific tool?  Depends on the law and the people who run the company.  But as I explained to a family member who is also in cyber, it is not possible to stop development of AI any more than it would be possible to stop development of, say, video games.  The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico.

 We could not do "Gain of Function experiments in the U.S., so a few scientists talked to Chinese colleagues, and We spent two years in lock down anyway.


I'm amazed that people missed the "labs in Ukraine" story, LOL.

 

They aren't secret; you can read about them on the CDC page, where they are described as "cooperative" labs:

 

https://www.cdc.gov/globalhealth/countries/ukraine/pdf/ukraine_09262022.pdf

 

More data here:

 

https://crsreports.congress.gov/product/pdf/IN/IN11886

 

The Pentagon reported on these labs as well:

 

https://www.statesman.com/story/news/politics/politifact/2022/06/18/fact-check-pentagon-military-funded-labs-ukraine-russia-invasion/7646221001/

 

They were reported by Russia as "secret" labs for bioweapons, which may or may not be true (I don't trust anything in the official news from Ukraine or Russia), but the labs exist.

 

It is inescapably true that China operates labs with the US that research bioweapons and do "gain of function" research.


None of this supports this claim, *particularly* the part I emphasize:

 

5 hours ago, Christopher R Taylor said:

CDC has labs all over the world in the worst places on earth doing experiments not legally permissible in the US.  

 

The only location would be Ukraine, and well, before the invasion?  I wouldn't have called that one of the worst places on earth by a WIDE margin.  And when you combine "illegal in the US" with "worst places on earth"...you invite interpretations like the Tuskegee study.

https://www.cdc.gov/tuskegee/timeline.htm

 

 


On 2/17/2024 at 2:50 PM, Christopher R Taylor said:

The cat is out of the bag, you cannot put it back.

 

...

 

You just have to learn how to use things responsibly and how to respond when people do not.

 

Obviously, there are limits to scientific/technological progress as it relates to the Average Joe; not all knowledge can be freely promulgated and access to certain resources is either severely restricted or outright banned.


Quote

The only location would be Ukraine, and well, before the invasion?

 

And China, as I listed, where they were doing "gain of function" research. And it's rumored in several other nations. Not exactly the kind of thing they like to let people know about, and the press is conspicuously uninterested.


Why The New York Times might win its copyright lawsuit against OpenAI

Quote

The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times “has a near zero probability of winning” its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views.

“Trying to get everyone to license training data is not going to work because that's not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”


This article is written by two authors. One of us is a journalist who has been on the copyright beat for nearly 20 years. The other is a law professor who has taught dozens of courses on IP and Internet law. We’re pretty sure we understand how copyright works. And we’re here to warn the AI community that it needs to take these lawsuits seriously.

In its blog post responding to the Times lawsuit, OpenAI wrote that “training AI models using publicly available Internet materials is fair use, as supported by long-standing and widely accepted precedents.”

The most important of these precedents is a 2015 decision that allowed Google to scan millions of copyrighted books to create a search engine. We expect OpenAI to argue that the Google ruling allows OpenAI to use copyrighted documents to train its generative models. Stability AI and Anthropic will undoubtedly make similar arguments as they face copyright lawsuits of their own.

These defendants could win in court—but they could lose, too. As we’ll see, AI companies are on shakier legal ground than Google was in its book search case. And the courts don’t always side with technology companies in cases where companies make copies to build their systems. The story of MP3.com illustrates the kind of legal peril AI companies could face in the coming years.

 

