dmjalund Posted February 6

He was the Anti-Worf.
Christopher R Taylor Posted February 6

That's why I had Dr Destroyer trash the entire Champions for the Island cover. Equal opportunity. Seeker isn't part of the new flavorless bleh team or I'd have had him front and center, as tradition dictates.
Cancer Posted February 8

Any number of places this could be posted here, but here's AI making an opportunity for all of us.
Ragitsu Posted February 8

A few of us are already covered: we've got faces only an AI could love...or render.
Sociotard Posted February 14

Celebrating Chinese New Year? Consider lighting a Waymo driverless car on fire. https://www.kron4.com/news/bay-area/why-did-a-san-francisco-crowd-light-waymos-driverless-vehicle-on-fire/

Remember. In the AI war, the humans struck first.
Cygnia Posted February 14

[image]
Cygnia Posted February 17

Air Canada must honor refund policy invented by airline’s chatbot
Scott Ruggels Posted February 17 (edited)

There is a new AI bot called Sora. The prompts can now generate up to one minute of full-motion video. Take a look:

This is under a company lockdown, so it's not yet available to the public, but it would be a useful tool for generating B-roll for YouTube. As of now it cannot generate porn, celebrity look-alikes, or graphic violence. How long does one think those restrictions will stay in place?
Christopher R Taylor Posted February 17

Right, all those restrictions are in place, but people are finding ways around them for images. We've seen quite a few images of politicians already -- they look fake, but how long will they remain recognizably false?
Old Man Posted February 17

1 hour ago, Scott Ruggels said: "How long does one think those restrictions will stay in place?"

For this specific tool? Depends on the law and the people who run the company. But as I explained to a family member who is also in cyber, it is not possible to stop development of AI any more than it would be possible to stop development of, say, video games. The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico.
Christopher R Taylor Posted February 17 (edited)

Quote: "The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico."

Yeah, it's like nukes or guns or porn or whatever. The cat is out of the bag; you cannot put it back. Superman could grab all the nuclear weapons in a net and throw them into the sun, and nations would just build more and keep them in lead silos or disguise them as something Superman ignored. You can't unlearn tech, unless there is a horrendous catastrophe that resets civilization. You just have to learn how to use things responsibly, and how to respond when people do not.

Approximately 1.2 million people die each year as a result of auto collisions. That's a price we have come to accept as being worth having cars; how many lives are saved as a result of automobiles? Ten times that, if not more.

New tech requires new responses, moral judgment, and law. It takes time, study, analysis, and cultural change. We're in the process now of getting used to the idea of instant communication on the internet. We're trying to learn socially how to handle that and legally how to approach it, and that takes time, philosophical thought, theology, legal study, etc. Every new wave of tech makes that necessary, and people adapt.

The problem we're facing right now is that tech is happening so fast, and is so potent in terms of cultural impact, that it's rough trying to get it all straight. Making matters worse, our culture has removed nearly all consequence from certain kinds of behavior, so a lot of corrosive things are consequence-free, or at least consequence-light. It will all get worked out to at least a functional level, though not perfectly, in time. Until then it's a rough ride, like when the Model T drove through town and scared all the horses and womenfolk.
Scott Ruggels Posted February 18

7 hours ago, Old Man said: "The U.S. government could ban AI tomorrow and AI development would promptly move to India or Mexico."

We could not do "gain of function" experiments in the U.S., so a few scientists talked to Chinese colleagues, and we spent two years in lockdown anyway.
Christopher R Taylor Posted February 18

CDC has labs all over the world in the worst places on earth doing experiments not legally permissible in the US. We had several labs in Ukraine (probably still do) that were doing this kind of research.
unclevlad Posted February 18

1 hour ago, Christopher R Taylor said: "CDC has labs all over the world in the worst places on earth doing experiments not legally permissible in the US. We had several labs in Ukraine (probably still do) that were doing this kind of research."

This either requires support or retraction, IMO.
Cygnia Posted February 18

Reddit reportedly signed a multi-million content licensing deal with an AI company
unclevlad Posted February 18

29 minutes ago, Cygnia said: "Reddit reportedly signed a multi-million content licensing deal with an AI company"

Oh yeah, high-quality, high-reliability data...
Christopher R Taylor Posted February 18

I'm amazed that people missed the labs-in-Ukraine story, LOL. They aren't secret; you can read about them on the CDC page, described as "cooperative" labs: https://www.cdc.gov/globalhealth/countries/ukraine/pdf/ukraine_09262022.pdf

More data here: https://crsreports.congress.gov/product/pdf/IN/IN11886

The Pentagon reported on these labs as well: https://www.statesman.com/story/news/politics/politifact/2022/06/18/fact-check-pentagon-military-funded-labs-ukraine-russia-invasion/7646221001/

They were reported by Russia as "secret" labs for bioweapons, which may or may not be true (I don't trust anything from the official news from Ukraine or Russia), but the labs exist. It is inescapably true that China operates labs with the US researching bioweapons and doing "gain of function" research.
unclevlad Posted February 18

None of this supports this claim, *particularly* the part I emphasize:

5 hours ago, Christopher R Taylor said: "CDC has labs all over the world in the worst places on earth doing experiments not legally permissible in the US."

The only location would be Ukraine, and well, before the invasion? I wouldn't have called that one of the worst places on earth by a WIDE margin. And when you combine "illegal in the US" with "worst places on earth"...you invite interpretations like the Tuskegee study. https://www.cdc.gov/tuskegee/timeline.htm
Ragitsu Posted February 18

On 2/17/2024 at 2:50 PM, Christopher R Taylor said: "The cat is out of the bag, you cannot put it back. ... You just have to learn how to use things responsibly and how to respond when people do not."

Obviously, there are limits to scientific/technological progress as it relates to the Average Joe; not all knowledge can be freely promulgated and access to certain resources is either severely restricted or outright banned.
Christopher R Taylor Posted February 19

Quote: "The only location would be Ukraine, and well, before the invasion?"

And China, as I listed, where they were doing "gain of function" research. And rumored in several other nations. Not exactly the kind of thing they like to let people know about, and the press is conspicuously uninterested.
Cygnia Posted February 19

A.I. Scammers Are Impersonating Real Authors to Sell Fake Books
Cygnia Posted February 20

Why The New York Times might win its copyright lawsuit against OpenAI

Quote: "The day after The New York Times sued OpenAI for copyright infringement, the author and systems architect Daniel Jeffries wrote an essay-length tweet arguing that the Times “has a near zero probability of winning” its lawsuit. As we write this, it has been retweeted 288 times and received 885,000 views. “Trying to get everyone to license training data is not going to work because that's not what copyright is about,” Jeffries wrote. “Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.”

This article is written by two authors. One of us is a journalist who has been on the copyright beat for nearly 20 years. The other is a law professor who has taught dozens of courses on IP and Internet law. We’re pretty sure we understand how copyright works. And we’re here to warn the AI community that it needs to take these lawsuits seriously.

In its blog post responding to the Times lawsuit, OpenAI wrote that “training AI models using publicly available Internet materials is fair use, as supported by long-standing and widely accepted precedents.” The most important of these precedents is a 2015 decision that allowed Google to scan millions of copyrighted books to create a search engine. We expect OpenAI to argue that the Google ruling allows OpenAI to use copyrighted documents to train its generative models. Stability AI and Anthropic will undoubtedly make similar arguments as they face copyright lawsuits of their own.

These defendants could win in court—but they could lose, too. As we’ll see, AI companies are on shakier legal ground than Google was in its book search case. And the courts don’t always side with technology companies in cases where companies make copies to build their systems. The story of MP3.com illustrates the kind of legal peril AI companies could face in the coming years."
Ternaugh Posted February 21

Funny, it works fine for me. https://g.co/gemini/share/7ced2ac67d6b
Iuz the Evil Posted February 22

There could potentially be some programming improvements to do there…
Sociotard Posted February 23

Will Smith is pretty good at making fun of himself