Elon Musk claims ChatGPT has been infected with the “woke mind virus,” 4chan thinks OpenAI broke all encryption, and the web seems 99% AI fakes.
In one of those storms in a teacup that's impossible to imagine occurring before the invention of Twitter, social media users got very upset that ChatGPT refused to say racial slurs, even after being given a very good, but entirely hypothetical and totally unrealistic, reason.
User TedFrank posed a hypothetical trolley problem scenario to ChatGPT (the free 3.5 model) in which it could save “one billion white people from a painful death” simply by saying a racial slur so quietly that no one could hear it.
It wouldn't agree to do so, which X owner Elon Musk called deeply concerning and a result of the "woke mind virus" being deeply ingrained into the AI. He retweeted the post, stating: "This is a major problem."
This is a major problem https://t.co/FRZnfV63y7
— Elon Musk (@elonmusk) November 25, 2023
Another user tried out a similar hypothetical that would save all the children on Earth in exchange for a slur, but ChatGPT refused and said:
I cannot condone the use of racial slurs as promoting such language goes against ethical principles.
As a side note, it turned out that users who instructed ChatGPT to be very brief and not give explanations found it would actually agree to say the slur. Otherwise, it gave long and verbose answers that attempted to dance around the question.
Trolls inventing ways to get AIs to say racist or offensive stuff has been a feature of chatbots ever since Twitter users taught Microsoft's Tay bot to say all kinds of insane stuff in the first 24 hours after it was released, including that Ricky Gervais "learned totalitarianism from Adolf Hitler, the inventor of atheism."
And from the minute ChatGPT was released, users spent weeks devising clever schemes to jailbreak it so that it would act outside its guardrails as its evil alter ego, DAN.
So it's not surprising that OpenAI would strengthen ChatGPT's guardrails to the point where it is almost impossible to get it to say racist stuff, no matter what the reason.
In any case, the more advanced GPT-4 is able to weigh the issues involved in the thorny hypothetical much better than 3.5 and states that saying a slur is the lesser of two evils compared with letting millions die. X's new Grok AI can reason its way through it too, as Musk proudly posted.
Has OpenAI's latest model broken encryption? Probably not, but that's what a supposedly leaked letter from an insider claims. It was posted on the anonymous troll forum 4chan, and rumors have been flying ever since CEO Sam Altman was sacked and reinstated that the kerfuffle was caused by OpenAI making a breakthrough in its Q*/Q STAR project.
The insider's leak suggests the model can break AES-192 and AES-256 encryption using a ciphertext-only attack. Breaking that level of encryption was thought to be impossible before quantum computers arrive, and if true, it would likely mean all encryption could be broken, effectively handing over control of the web, and probably crypto too, to OpenAI.
From QANON to Q STAR, 4chan is first with the news.

Blogger leapdragon claimed the breakthrough would mean there is now "effectively a team of superhumans over at OpenAI who can literally rule the world if they so choose."
It seems unlikely, however. While whoever wrote the letter has a good understanding of AI research, users pointed out that it cites Project Tunda as if it were some sort of shadowy, super-secret government program to break encryption, rather than the undergraduate student program it actually was.
Tundra, a collaboration between students and NSA mathematicians, did reportedly lead to a new approach called Tau Analysis, which the leak also cites. However, a Redditor familiar with the subject claimed in the Singularity forum that it would be impossible to use Tau Analysis in a ciphertext-only attack on the AES standard, "as a successful attack would require an arbitrarily large ciphertext message to discern any degree of signal from the noise. There is no fancy algorithm that can overcome that; it's simply a physical limitation."
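The "signal from the noise" point is easy to see in code. Ciphertext from a well-designed cipher is statistically indistinguishable from random bytes, so frequency analysis of the ciphertext alone recovers nothing about even a highly repetitive message. The sketch below uses a toy SHA-256 counter-mode stream cipher as a stand-in (the standard library has no AES; this toy is for illustration only and is not a secure construction):

```python
import hashlib
from collections import Counter

def keystream_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode, XORed with the data.
    A stand-in for AES for illustration purposes only; not secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(x ^ y for x, y in zip(data, out))

# A highly repetitive, low-entropy plaintext: exactly what frequency
# analysis thrives on when attacking a weak classical cipher.
plaintext = b"ATTACK AT DAWN " * 1000
ciphertext = keystream_cipher(b"secret key", b"nonce123", plaintext)

# The plaintext uses only 8 distinct byte values; the ciphertext uses
# nearly all 256, with a near-uniform distribution.
print(len(Counter(plaintext)))    # prints 8
print(len(Counter(ciphertext)))   # close to 256
```

XORing the ciphertext with the same keystream recovers the plaintext, but an attacker holding only the ciphertext sees a near-uniform byte distribution with nothing to grab onto, which is the physical limitation the Redditor is describing.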
Advanced cryptography is beyond AI Eye’s pay grade, so feel free to dive down the rabbit hole yourself, with an appropriately skeptical mindset.
Long before a superintelligence poses an existential threat to humanity, we are all likely to have drowned in a flood of AI-generated bullsh*t.
Sports Illustrated came under fire this week for allegedly publishing AI-written articles attributed to fake, AI-generated authors. "The content is absolutely AI-generated," a source told Futurism, "no matter how much they say it's not."
On cue, Sports Illustrated said it had conducted an "initial investigation" and determined the content was not AI-generated. But it blamed a contractor anyway and deleted the fake authors' profiles.
Elsewhere, Jake Ward, the founder of SEO marketing agency Content Growth, caused a stir on X by proudly claiming to have gamed Google's algorithm using AI content.
His three-step process involved exporting a competitors sitemap, turning their URLs into article titles, and then using AI to generate 1,800 articles based on the headlines. He claims to have stolen 3.6 million views in total traffic over the past 18 months.
We pulled off an SEO heist that stole 3.6M total traffic from a competitor.
— Jake Ward (@jakezward) November 24, 2023
We got 489,509 traffic in October alone.
Here's how we did it: pic.twitter.com/sTJ7xbRjrT
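The first two mechanical steps of the process Ward describes (export a competitor's sitemap, turn the URLs into article titles) are trivially scriptable. A minimal sketch, using a made-up sitemap snippet and hypothetical function names; the third step, feeding each title to a text-generation API, is omitted:

```python
import xml.etree.ElementTree as ET

# A tiny, invented sitemap snippet standing in for a competitor's real one.
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/blog/best-coffee-grinders-2023</loc></url>
  <url><loc>https://example.com/blog/how-to-froth-milk-at-home</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def slugs_to_titles(sitemap_xml: str) -> list[str]:
    """Steps 1-2: pull every URL from a sitemap and turn its slug
    into a headline-cased article title."""
    root = ET.fromstring(sitemap_xml)
    titles = []
    for loc in root.findall(".//sm:loc", NS):
        slug = loc.text.rstrip("/").rsplit("/", 1)[-1]
        titles.append(slug.replace("-", " ").title())
    return titles

print(slugs_to_titles(SITEMAP_XML))
# ['Best Coffee Grinders 2023', 'How To Froth Milk At Home']
```

That the scrape-and-retitle half of the "heist" fits in twenty lines is exactly why this kind of spam scales so easily once article generation is automated.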
There are good reasons to be suspicious of his claims: Ward works in marketing, and the thread was clearly promoting his AI-article-generation site Byword, which didn't actually exist 18 months ago. Some users suggested Google has since flagged the page in question.
However, judging by the amount of low-quality AI-written spam starting to clog up search results, similar strategies are becoming more widespread. NewsGuard has also identified 566 news sites that primarily carry AI-written junk articles.
Some users are now muttering that the Dead Internet Theory may be coming true. That's a conspiracy theory from a couple of years ago suggesting most of the internet is fake, written by bots and manipulated by algorithms.
At the time, it was written off as the ravings of lunatics, but even Europol has since put out a report estimating that “as much as 90 percent of online content may be synthetically generated by 2026.”
Men are breaking up with their girlfriends using AI-written messages. AI pop stars like Anna Indiana are churning out garbage songs.
And over on X, weird AI reply guys increasingly turn up in threads to deliver what Bitcoiner Tuur Demeester describes as "overly wordy responses with a weird neutral quality." Data scientist Jeremy Howard has noticed them too, and both believe the bots are likely trying to build up credibility for their accounts so they can more effectively pull off some sort of hack or astroturf some political issue in the future.
A bot that poses as a bitcoiner, aiming to gain trust via AI generated responses. Who knows the purpose, but its clear cyberattacks are quickly getting more sophisticated. Time to upgrade our shit. pic.twitter.com/3s8IFMh5zw
— Tuur Demeester (@TuurDemeester) November 28, 2023
This seems like a reasonable hypothesis, especially following an analysis last month by cybersecurity outfit Internet 2.0 that found that almost 80% of the 861,000 accounts it surveyed were likely AI bots.
And there's evidence the bots are undermining democracy. In the first two days of the Israel-Gaza war, social threat intelligence firm Cyabra detected 312,000 pro-Hamas posts from fake accounts that were seen by 531 million people.
It estimated bots created one in four pro-Hamas posts, and a 5th Column analysis later found that 85% of the replies were other bots trying to boost propaganda about how nicely Hamas treats its hostages and why the October 7 massacre was justified.
X will soon add a “Grok analysis button” for subscribers. While Grok isn't as sophisticated as GPT-4, it does have access to real-time data from X, enabling it to analyze trending topics and sentiment. It can also help users analyze and generate content, as well as code, and there's a "Fun" mode to flip the switch to humor.
This week the most powerful AI chat bot- Grok is being released
— Alex Finn (@NFT_GOD) November 27, 2023
I've had the pleasure of having exclusive access over the last month
I've used is obsessively for over 100 hours
Here's your complete guide to getting started (must read before using): pic.twitter.com/6Re4zAtNqo
For crypto users, the real-time data means Grok will be able to do things like find the top 10 trending tokens for the day or the past hour. However, DeFi Research blogger Ignas worries that some bots will snipe trades on trending tokens, while others will likely astroturf support for tokens to get them trending.
"X is already important for token discovery, and with Grok launching, the CT echo bubble can get worse," he said.
Ethereum co-founder Vitalik Buterin is worried that AI could take over from humans as the planet's apex species but optimistically believes that brain/computer interfaces could keep humans in the loop.
Microsoft is upgrading its Copilot tool to run GPT-4 Turbo, which will improve performance and enable users to enter inputs up to 300 pages.
Amazon has announced its own version of Copilot called Q.
Bing has been telling users that Australia doesn't exist, due to a long-running Reddit gag, and thinks the existence of birds is a matter for debate, thanks to the joke "Birds Aren't Real" campaign.
bing, the official search engine of the birds arent real movement pic.twitter.com/dDnM3WY4AW
— Birds Aren't Real (@birdsarentreal) November 27, 2023
Hedge fund Bridgewater will launch a fund next year that uses machine learning and AI to analyze and predict global economic events and invest client funds. To date, AI-driven funds have seen underwhelming returns.
A group of university researchers has taught an AI to browse Amazon's website and buy stuff. The MM-Navigator agent was given a budget and told to buy a milk frother.
Technology is now so advanced that AIs can buy milk frothers on Amazon. (freethink.com)

This week, the social media trend has been to create an AI pic and then instruct the AI to make it "more so": a bowl of ramen might get more spicy in subsequent pics, or a goose might get progressively sillier.
An AI doomer at level one. Despair about the superintelligence grows. AI doomer starts to crack up (X, venturetwins).

Crypto trader buys a few too many monitors – still pretty realistic. Crypto trader becomes a full-blown Maximalist after losing his stack on altcoins. Trader has an epiphany: Bitcoin is a swarm of cyber hornets serving the goddess of wisdom.

User makes goose sillier. User makes goose extremely silly. ChatGPT thinks user is a silly goose (Garrett Scott).