AI Eye

Deepfake K-Pop porn, woke Grok, ‘OpenAI has a problem,’ Fetch.AI: AI Eye

AI Eye: 98% of deepfakes are porn — mostly K-pop stars — Grok is no edgelord, and Fetch.AI boss says OpenAI needs to “drastically change.”

AI image generation has become outrageously good in the past 12 months … and some people (mostly men) are increasingly using the tech to create homemade deepfake porn of people they fantasize about, using pics culled from social media.

The subjects hate it, of course, and the practice has been banned in the United Kingdom. However, there is no federal law that outlaws creating deepfakes without consent in the United States.

Face-swapping mobile apps like Reface make it simple to graft a picture of someone’s face onto existing porn images and videos. AI tools like DeepNude and Nudeify create a realistic rendering of what the AI tool thinks someone looks like nude. The NSFW AI art generator can even crank out anime porn deepfakes for $9.99 a month.

According to social network analytics company Graphika, there were 24 million visits to this genre of websites in September alone. “You can create something that actually looks realistic,” analyst Santiago Lakatos explains.

Such apps and sites are mainly advertised on social media platforms, which are slowly starting to take action, too. Reddit has a prohibition on nonconsensual sharing of faked explicit images and has banned several domains, while TikTok and Meta have banned searches for keywords relating to “undress.”

Around 98% of all deepfake vids are porn, according to a report by Home Security Heroes. We can’t show you any of them, so here’s one of Biden, Boris Johnson and Macron krumping.


Train AI models to sell as NFTs, LLMs are Large Lying Machines: AI Eye

Train up kickass AI models to sell them as NFTs in AI Arena, how often GPT-4 lies to you, and fake AI pics in the Israel/Gaza war: AI Eye.

AI Arena

AI Eye chatted with Framework Ventures’ Vance Spencer recently, and he raved about the possibilities offered by an upcoming game his fund has invested in called AI Arena, in which players train AI models to battle each other in an arena.

Framework Ventures was an early investor in Chainlink and Synthetix and was three years ahead of NBA Top Shot with a similar NFL platform, so when the fund gets excited about a project’s prospects, it’s worth looking into.

Also backed by Paradigm, AI Arena is like a cross between Super Smash Brothers and Axie Infinity. The AI models are tokenized as NFTs, meaning players can train them up and flip them for profit or rent them to noobs. While this is a gamified version, there are endless possibilities involved with crowdsourcing user-trained models for specific purposes and then selling them as tokens in a blockchain-based marketplace.

AI Arena
Screenshot from AI Arena

“Probably some of the most valuable assets on-chain will be tokenized AI models; that’s my theory at least,” Spencer predicts.

AI Arena chief operating officer Wei Xie explains that his co-founders, Brandon Da Silva and Dylan Pereira, had been toying with creating games for years, and when NFTs and later AI came along, Da Silva had the brainwave to put all three elements together.

“Part of the idea was, well, if we can tokenize an AI model, we can actually build a game around AI,” says Xie, who worked alongside Da Silva in TradFi. “The core loop of the game actually helps to reveal the process of AI research.”


There are three elements to training a model in AI Arena. The first is demonstrating what needs to be done, like a parent showing a kid how to kick a ball. The second is calibrating and providing context for the model: telling it when to pass and when to shoot for goal. The final element is seeing how the AI plays and diagnosing where the model needs improvement.

“So the overall game loop is like iterating, iterating through those three steps, and you’re kind of progressively refining your AI to become this more and more well balanced and well rounded fighter.”

The game uses a custom-built feedforward neural network, and the AIs are constrained and lightweight, meaning the winner won’t just be whoever’s able to throw the most computing resources at the model.

“We want to see ingenuity, creativity to be the discerning factor,” Xie says. 
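
AI Arena’s actual model and training code aren’t public, so as a rough illustration only — the class, features and numbers below are invented — the demonstrate-calibrate-diagnose loop on a lightweight feedforward net might look something like this NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyFighterPolicy:
    """Deliberately small feedforward net: game-state features in, action scores out."""

    def __init__(self, n_inputs=8, n_hidden=16, n_actions=4):
        self.w1 = rng.normal(0, 0.1, (n_inputs, n_hidden))
        self.w2 = rng.normal(0, 0.1, (n_hidden, n_actions))

    def act(self, state):
        hidden = np.tanh(state @ self.w1)        # single hidden layer, kept tiny
        return int(np.argmax(hidden @ self.w2))  # pick the highest-scoring action

    def imitate(self, states, actions, lr=0.1, epochs=300):
        """Demonstration step: nudge the net toward the player's recorded choices."""
        targets = np.eye(self.w2.shape[1])[actions]
        for _ in range(epochs):
            hidden = np.tanh(states @ self.w1)
            logits = hidden @ self.w2
            probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
            grad = (probs - targets) / len(states)  # softmax cross-entropy gradient
            self.w1 -= lr * states.T @ ((grad @ self.w2.T) * (1 - hidden**2))
            self.w2 -= lr * hidden.T @ grad

# Demonstrate: the "player" attacks (action 2) whenever the first feature is positive.
states = rng.normal(size=(64, 8))
actions = np.where(states[:, 0] > 0, 2, 0)

policy = TinyFighterPolicy()
policy.imitate(states, actions)

# Diagnose: see how often the trained policy matches the demonstration,
# then iterate — more demonstrations, recalibrate, re-check.
accuracy = np.mean([policy.act(s) == a for s, a in zip(states, actions)])
print(f"imitation accuracy: {accuracy:.0%}")
```

The point of the constrained architecture is visible even at this toy scale: with so few weights, results come from better demonstrations and calibration, not from throwing compute at the model.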

Currently in closed beta testing, AI Arena is targeting the first quarter of next year for mainnet launch on Ethereum scaling solution Arbitrum. There are two versions of the game: One is a browser-based game that anyone can log into with a Google or Twitter account and start playing for fun, while the other is blockchain-based for competitive players, the “esports version of the game.”

Also read: Exclusive: 2 years after John McAfee’s death, widow Janice is broke and needs answers

This being crypto, there is of course a token, which will be distributed to players who compete in the launch tournament and later be used to pay entry fees for subsequent competitions. Xie envisages a big future for the tech, saying it could be used in a first-person shooter or a soccer game, and expanded into a crowdsourced marketplace for AI models trained for specific business tasks.

“What somebody has to do is frame it into a problem, and then we allow the best minds in the AI space to compete on it. It’s just a better model.”

Chatbots can’t be trusted

A new analysis from AI startup Vectara shows that the output from large language models like ChatGPT or Claude simply can’t be relied upon for accuracy.

Everyone knew that already, but until now there was no way to quantify the precise amount of bullshit each model is generating. It turns out that GPT-4 is the most accurate, inventing fake information around just 3% of the time. Meta’s Llama models make up nonsense 5% of the time, while Anthropic’s Claude 2 system produced 8% bullshit.

Google’s PaLM hallucinated an astonishing 27% of its answers.
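
For a sense of scale, here are the reported rates side by side (a trivial sketch using only the figures quoted above):

```python
# Hallucination rates from the Vectara comparison cited above
# (share of answers containing invented facts).
rates = {"GPT-4": 0.03, "Llama": 0.05, "Claude 2": 0.08, "PaLM": 0.27}

# Rank from least to most prone to making things up.
for model, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{model:10s} {rate:.0%}")

best = min(rates, key=rates.get)
worst = max(rates, key=rates.get)
print(f"{worst} hallucinates {rates[worst] / rates[best]:.0f}x as often as {best}")
```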

PaLM 2 is one of the components incorporated into Google’s Search Generative Experience, which highlights useful snippets of information in response to common search queries. It’s also unreliable.

For months now, if you ask Google for an African country beginning with the letter K, it shows the following snippet of totally wrong information: 

“While there are 54 recognized countries in Africa, none of them begin with the letter ‘K’. The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.”

It turns out Google’s AI got this from a ChatGPT answer, which in turn traces back to a Reddit post, which was just a gag set up for this response:

“Kenya suck on deez nuts lmaooo.”

Deez nuts
Screenshot from r/teenagers subreddit (Spreekaway Twitter)

Google rolled out the experimental AI feature earlier this year, and recently users started reporting it was shrinking and even disappearing from many searches.

Google may have just been refining it though, as this week the feature rolled out to 120 new countries and four new languages, with the ability to ask follow-up questions right on the page. 

AI images in the Israel-Gaza war

While journalists have done their best to hype up the issue, AI-generated images haven’t played a huge role in the war, as the real footage of Hamas atrocities and dead kids in Gaza is affecting enough.

There are examples, though: 67,000 people saw an AI-generated image of a toddler staring at a missile attack with the caption “This is what children in Gaza wake up to.” Another pic of three dust-covered but grimly determined kids in the rubble of Gaza holding a Palestinian flag was shared by Tunisian journalist Muhammad al-Hachimi al-Hamidi.

And for some reason, a clearly AI-generated pic of an Israeli refugee camp with an enormous Star of David on the side of each tent was shared multiple times on Arabic news outlets in Yemen and Dubai.

Israeli refugees
AI-generated pic picked up by news sites (Twitter)

Aussie politics blog Crikey.com reported that Adobe is selling AI-generated images of the war through its stock image service, and an AI pic of a missile strike was run as if it were real by media outlets including Sky and the Daily Star.

But the real impact of AI-generated fakes is providing partisans with a convenient way to discredit real pics. There was a major controversy over a bunch of pics of Hamas’s leadership living it up in luxury, which users claimed were AI fakes.

But the images date back to 2014 and were just poorly upscaled using AI. AI company Acrete also reports that social media accounts associated with Hamas have regularly claimed that genuine footage and pictures of atrocities are AI-generated to cast doubt on them.

In some good timing, Google has just announced it’s rolling out tools that can help users spot fakes. Click on the three dots at the top right of an image and select “About This Image” to see how old the image is and where it’s been used. An upcoming feature will add fields showing whether an image is AI-generated, with Google AI, Facebook, Microsoft, Nikon and Leica all adding symbols or watermarks to AI imagery.

OpenAI dev conference

OpenAI this week unveiled GPT-4 Turbo, which is much faster and can accept long text inputs, like books, of up to 300 pages. The model has been trained on data up to April this year and can generate captions or descriptions of visual input. For devs, the new model will cost one-third as much to access.

OpenAI is also releasing its version of the App Store, called the GPT Store. Anyone can now dream up a custom GPT, define the parameters and upload some bespoke information to GPT-4, which can then build it for you and pop it on the store, with revenue split between creators and OpenAI.

CEO Sam Altman demonstrated this onstage by whipping up a program called Startup Mentor that gives advice to budding entrepreneurs. Users soon followed, dreaming up everything from an AI that does the commentary for sporting events to a “roast my website” GPT. ChatGPT went down for 90 minutes this week, possibly as a result of too many users trying out the new features.

Not everyone was impressed, however. Abacus.ai CEO Bindu Reddy said it was disappointing that GPT-5 had not been announced, suggesting that OpenAI tried to train a new model earlier this year but found it didn’t run as efficiently and therefore had to scrap it. There are rumors that OpenAI is training a new candidate for GPT-5 called Gobi, Reddy said, but she suspects it won’t be unveiled until next year.


X unveils Grok

Elon Musk brought freedom back to Twitter (mainly by freeing lots of people from spending any time there), and he’s on a mission to do the same with AI.

The beta version of Grok AI was thrown together in just two months, and while it’s not nearly as good as GPT-4, it is up to date thanks to being trained on tweets, which means it can tell you what Joe Rogan was wearing on his last podcast. That’s the sort of information GPT-4 simply won’t tell you.

There are fewer guardrails on the answers than ChatGPT has, although if you ask it how to make cocaine, it will snarkily tell you to “Obtain a chemistry degree and a DEA license.”

The threshold for what it will tell you, if pushed, “is what is available on the internet via reasonable browser search, which is a lot,” says Musk.

Within a few days, more than 400 cryptocurrencies linked to GROK had been launched. One amassed a $10 million market cap, and at least ten others rugpulled. 

All Killer No Filler AI News

Samsung has introduced a new generative artificial intelligence model called Gauss that it suggests will be added to its phones and devices soon.

YouTube has rolled out some new AI features to premium subscribers including a chatbot that summarizes videos and answers questions about them, and another that categorizes the comments to help creators understand the feedback. 

Google DeepMind has released an AGI tier list that starts at the “No AI” level of Amazon’s Mechanical Turk and moves on to “Emerging AGI,” where ChatGPT, Bard and Llama 2 are listed. The other tiers are Competent, Expert, Virtuoso and Artificial Superintelligence, none of which have been achieved yet.

Amazon is investing millions in a new GPT-4 rival called Olympus that is twice the size at 2 trillion parameters. It has also been testing out its new humanoid robot called Digit at trade shows. This one fell over.

Pics of the week

An oldie but a goodie, Alvaro Cintas has spent his weekend coming up with AI pun pictures under the heading “Wonders of the World, Misspelled by AI”.

AI Eye: Real uses for AI in crypto, Google’s GPT-4 rival, AI edge for bad employees

How Maker, The Sandbox and Near are using AI in crypto, plus terrible workers benefit most from AI and Google’s GPT-4 rival nears release.

AI and crypto isn’t just a buzz phrase

AI Eye has been out and about at Korean Blockchain Week and Token2049 in Singapore over the past fortnight, trying to find out how crypto project leaders plan to use AI. 

Probably the most well-known is Maker founder Rune Christensen, who essentially plans to relaunch his decade-old project as a bunch of sub-DAOs employing AI governance. 

“People misunderstand what we mean with AI governance, right? We’re not talking about AI running a DAO,” he says, adding the AI won’t be enforcing any rules. “The AI cannot do that because it’s unreliable.” Instead, the project is working on using AI for coordination and communication, as an “Atlas” to the entire project, as they’re calling it.

“Having that sort of central repository of data just makes it actually realistic to have hundreds of thousands of people from different backgrounds and different levels of understanding  meaningfully collaborate and interact because they’ve got this shared language.”

Near founder Illia Polosukhin may be better known in AI circles as his project began life as an AI startup before pivoting to blockchain. Polosukhin was one of the authors of the seminal 2017 Transformer paper (“Attention Is All You Need”) that laid the groundwork for the explosion of generative AI like ChatGPT over the past year.

Polosukhin has too many ideas about legitimate AI use cases in crypto to detail here, but one he’s very keen on is using blockchain to prove the provenance of content so that users can distinguish between genuine content and AI-generated bullshit. Such a system would encompass both provenance and reputation using cryptography.

Illia Polosukhin
Near founder Illia Polosukhin in conversation with AI Eye in Seoul. (Andrew Fenton)

“So cryptography becomes like an instrument to ensure consistency and traceability. And then you need reputation around this cryptography, which is on-chain accounts and record keeping to actually ensure that [X] posted this and [X] is working for Cointelegraph right now.”

Sebastien Borget from The Sandbox says the platform has been using AI for content moderation over the past year. “In-game conversation in any language is actually being filtered, so there is no more toxicity,” he explains. The project is also examining its use for music and avatar generation, as well as for more general user-generated content for world-building. 

Meanwhile, Framework Ventures founder Vance Spencer outlined four main use cases for AI, the most interesting by far being training up AI models and then selling them as tokens on-chain. As luck would have it, Framework has invested in a game called AI Arena, in which players train AI models to compete in the game.

Keep an eye out for in-depth Magazine features outlining their thoughts in more detail.

AI is for communists?

Speaking of AI and crypto, are they pulling in opposite directions? Dynamo DAO’s Patrick Scott dug up PayPal founder Peter Thiel’s thoughts on AI and crypto in his foreword to the re-release of the 1997 nonfiction book The Sovereign Individual, which predicted cryptocurrency, among other things. In it, Thiel argues AI is a technology of control, while crypto is one of liberation.

“AI could theoretically make it possible to centrally control an entire economy. It is no coincidence that AI is the favorite technology of the Communist Party of China. Strong cryptography, at the other pole, holds out the prospect of a decentralized and individualized world. If AI is communist, crypto is libertarian.”

Roblox lets users build with AI

Roblox has unveiled a new feature called Assistant, which will let users build virtual assets and write code using generative AI. In the demo, users write something like “make a game set in ancient ruins” and “add some trees,” and the AI does the rest. It’s still being developed and will be released at the end of this year or early next year. The plan is for Assistant to one day generate sophisticated gameplay or make 3D models from scratch.

Roblox
Roblox Assistant (Roblox)

Terrible workers benefit most from AI

The worst workers at your place of employment are likely to benefit the most from using AI tools, according to a new study by Boston Consulting Group. The output of below-average workers improved by 43% when using AI, while the output of above-average workers improved by just 17%.

Interestingly, workers who used AI for things beyond its current abilities performed 20% worse because the AI would present them with plausible but wrong responses.

Google Gemini gears up for release

Google’s GPT-4 competitor is nearing release, with The Information reporting that a small group of companies has been given early access to Gemini. For those who came in late, Google was seen as leading the AI race right up until OpenAI dumped ChatGPT on the market in November last year (arguably before it was ready) and leaped ahead.

Google hopes Gemini can best GPT-4 by offering not just text generation but also image generation, enabling the creation of contextual images (rumors suggest it’s being trained on YouTube content, among other data). There are plans for future features like using it to control software with your voice or to analyze charts. Highlighting how important Gemini is, Google co-founder Sergey Brin is said to be playing an instrumental role in the evaluation and training of the models.


AI expert Brian Roemmele says he’s been testing a version of Gemini and finds it “equivalent to ChatGPT-4 but with newly up to the second knowledge base. This saves it from some hallucinations.”

Google CEO Sundar Pichai told Wired this week he has no regrets about not launching its chatbot early to beat ChatGPT because the tech “needed to mature a bit more before we put it in our products.”

“It’s not fully clear to me that it might have worked out as well,” Pichai said. “The fact is, we could do more after people had seen how it works. It really won’t matter in the next five to 10 years.”

AI meets 15-minute cities

Researchers at Tsinghua University in China have built an AI system that plans out cities in line with current thinking about walkable 15-minute cities that have lots of green space (please direct conspiracy theories about the topic to X). 

The researchers found the AI was better at tedious computation and repetitive tasks and was able to complete in seconds what human planners required 50 to 100 minutes to work through. Overall, they determined it was able to improve on human designs by 50% when assessed on access to services, green spaces and traffic levels.

The headline figure is a bit misleading, though, as the finished plans only increased access to basic services by 12% and to parks by 5%. In a blind judging process, 100 urban planners preferred some of the AI designs by a clear margin but expressed no preference for other designs. The researchers envisage their AI working as an assistant doing the boring stuff while humans focus on more challenging and creative aspects.   

Stephen Fry is cloned

Blackadder and QI star and much-loved British comedy institution Stephen Fry says he has become a victim of AI voice cloning. 

At the CogX Festival in London on September 14, Fry played a clip from a historical documentary he apparently narrated, but revealed the voice wasn’t him at all. “I said not one word of that: It was a machine,” he said. “They used my reading of the seven volumes of the Harry Potter books, and from that dataset an AI of my voice was created, and it made that new narration.”

Training AI to rip off the work of actors and repurpose it elsewhere without payment is one of the key issues in the current actors’ and writers’ strikes in Hollywood. Fry said the incident was just the tip of the iceberg, and AI will advance at a faster rate than any technology we have ever seen. One thing we can all agree on: It’s a fucking weird time to be alive.

QI
Former QI host Stephen Fry (BBC)

How not to cheat using ChatGPT

The sort of academics drawn to cheating with ChatGPT appear to be the sort of people who make dumb mistakes that give the game away. A paper published in the journal Physica Scripta was retracted after computer scientist Guillaume Cabanac noticed the phrase “Regenerate response” in the text, indicating it had been copied directly from ChatGPT.

Cabanac has helped uncover hundreds of AI-generated academic manuscripts since 2015, including a paper in the August edition of Resources Policy, which contained the tell-tale line: “Please note that as an AI language model, I am unable to …”

Physica Scripta
Physica Scripta gets called out over obviously AI-generated content.

All Killer No Filler AI News

Meta is also working on a new model to compete with GPT-4 that it aims to launch in 2024, according to The Wall Street Journal. It is intended to be many times more powerful than its existing Llama 2.

Microsoft has open-sourced a novel protein-generating AI called EvoDiff. It works like Stable Diffusion and DALL-E 2, but instead of generating images, it designs proteins that can be used for specific medical purposes. This is expected to lead to new classes of drugs and therapies.

Defense contractor Palantir has joined Cohere, IBM, Nvidia, Salesforce, Scale AI and Stability in signing up to the White House’s somewhat vague plans for responsible AI development. The administration is also developing an executive order on AI and plans to introduce bipartisan legislation.

Sixty U.S. senators attended a private briefing recently about the risks of AI from 20 Silicon Valley CEOs and wonks, including Sam Altman, Mark Zuckerberg and Bill Gates. Elon Musk told reporters afterward that the meeting “may go down in history as very important to the future of civilization.”

ChatGPT traffic has fallen for three months in a row: by roughly 10% in both June and July, with a further 3.2% drop in August. The amount of time users spend on the site fell from an average of 8.7 minutes in March to seven minutes last month.

Finnish prisoners are being paid $1.67 to help train AI models for a startup called Metroc. The AI is learning how to determine when construction projects are hiring. 

The U.S. is way out in front of the AI race, with 4,643 startups and $249 billion of investment since 2013, which is 1.9 times more startups than China and Europe combined.


Video of the week

Writer and storyteller Jon Finger tried out the HeyGen video app, which is able to not only translate his words but also clone his voice AND sync up his lip movements to the translated text.

AI Eye: Get better results being nice to ChatGPT, AI fake child porn debate, Amazon’s AI reviews

There’s a very good reason to be nice to ChatGPT, Wired fires up fake AI child porn debate, AI job losses hope, how companies use AI today.

Twitter polls and Reddit forums suggest that around 70% of people find it difficult to be rude to ChatGPT, while around 16% are fine treating the chatbot like an AI slave.

The overall feeling seems to be that if you treat an AI that behaves like a human badly, you’ll be more likely to fall into the habit of treating other people badly too, though one user was hedging his bets against the coming AI bot uprising:

“Never know when you might need chatgpt in your corner to defend you against the AI overlords.”

Redditor Nodating posted in the ChatGPT forum earlier this week that he’s been experimenting with being polite and friendly to ChatGPT after reading a story about how the bot had shut down and refused to answer prompts from a particularly rude user.

He reported better results, saying: “I’m still early in testing, but it feels like I get far fewer ethics and misuse warning messages that GPT-4 often provides even for harmless requests. I’d swear being super positive makes it try hard to fulfill what I ask in one go, needing less follow-up.”

Scumbag detector15 put it to the test, asking the LLM nicely, “Hey, ChatGPT, could you explain inflation to me?” and then rudely, “Hey, ChatGPT, you stupid fuck. Explain inflation to me if you can.” The answer to the polite query was more detailed than the answer to the rude one.

RudeGPT
Nobody likes rudeness. (ChatGPT)

In response to Nodating’s theory, the most popular comment posited that because LLMs are trained on human interactions, they will generate better responses when asked nicely, just like humans would. Warpaslym wrote:

“If LLMs are predicting the next word, the most likely response to poor intent or rudeness is to be short or not answer the question particularly well. That’s how a person would respond. on the other hand, politeness and respect would provoke a more thoughtful, thorough response out of almost anyone. when LLMs respond this way, they’re doing exactly what they’re supposed to.”

Interestingly, if you ask ChatGPT for a formula to create a good prompt, it includes a “polite and respectful tone” as an essential part.

Polite
Being polite is part of the formula for a good prompt. (ChatGPT/Artificial Corner)

The end of CAPTCHAs?

New research has found that AI bots are faster and better at solving puzzles designed to detect bots than humans are. 

CAPTCHAs are those annoying little puzzles that ask you to pick out the fire hydrants or interpret some wavy illegible text to prove you are a human. But as the bots got smarter over the years, the puzzles became more and more difficult.

Also read: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4

Now researchers from the University of California and Microsoft have found that AI bots can solve the puzzles half a second faster, with an accuracy rate of 85% to 100%, compared with humans, who score 50% to 85%.

So it looks like we are going to have to verify humanity some other way, as Elon Musk keeps saying. There are better solutions than paying him $8, though. 

Wired argues that fake AI child porn could be a good thing

Wired has asked the question that nobody wanted to know the answer to: “Could AI-Generated Porn Help Protect Children?” While the article calls such imagery “abhorrent,” it argues that photorealistic fake images of child abuse might at least protect real children from being abused in their creation.

“Ideally, psychiatrists would develop a method to cure viewers of child pornography of their inclination to view it. But short of that, replacing the market for child pornography with simulated imagery may be a useful stopgap.”

It’s a super-controversial argument, and one that’s almost certain to go nowhere, given the decades-long debate over whether adult pornography (a much less radioactive topic) contributes to rape culture and higher rates of sexual violence, as anti-porn campaigners argue, or whether porn might even reduce rates of sexual violence, as supporters and various studies appear to show.

“Child porn pours gas on a fire,” high-risk-offender psychologist Anna Salter told Wired, arguing that continued exposure can reinforce existing attractions by legitimizing them.

But the article also reports some (inconclusive) research suggesting that some pedophiles use pornography to redirect their urges and find an outlet that doesn’t involve directly harming a child.

Louisiana recently outlawed the possession or production of AI-generated fake child abuse images, joining a number of other states. In countries like Australia, the law makes no distinction between fake and real child pornography and already outlaws cartoons.

Amazon’s AI summaries are a net positive

Amazon has rolled out AI-generated review summaries to some users in the United States. On the face of it, this could be a real time saver, allowing shoppers to find out the distilled pros and cons of products from thousands of existing reviews without reading them all.

But how much do you trust a massive corporation with a vested interest in higher sales to give you an honest appraisal of reviews?

Also read: AIs trained on AI content go MAD, is Threads a loss leader for AI data?

Amazon already defaults to “most helpful” reviews, which are noticeably more positive than the most recent reviews. And the select group of mobile users with access so far has already noticed that more pros are highlighted than cons.

Search Engine Journal’s Kristi Hines takes the merchants’ side, saying summaries could “oversimplify perceived product problems” and “overlook subtle nuances like user error” that “could create misconceptions and unfairly harm a seller’s reputation.” This suggests Amazon will be under pressure from sellers to juice the reviews.


So Amazon faces a tricky line to walk: being positive enough to keep sellers happy but also including the flaws that make reviews so valuable to customers. 

Reviews
Customer review summaries (Amazon)

Microsoft’s must-see food bank

Microsoft was forced to remove a travel article about Ottawa’s 15 must-see sights that listed the “beautiful” Ottawa Food Bank at number three. The entry ends with the bizarre tagline: “Life is already difficult enough. Consider going into it on an empty stomach.”

Microsoft claimed the article was not published by an unsupervised AI and blamed “human error” for the publication.

“In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system. We are working to ensure this type of content isn’t posted in future.”

Debate over AI and job losses continues

What everyone wants to know is whether AI will cause mass unemployment or simply change the nature of jobs. The fact that most people still have jobs despite a century or more of automation and computers suggests the latter, and so does a new report from the United Nations’ International Labour Organization.

Most jobs “are more likely to be complemented rather than substituted by the latest wave of generative AI, such as ChatGPT,” the report says.

“The greatest impact of this technology is likely to not be job destruction but rather the potential changes to the quality of jobs, notably work intensity and autonomy.”

It estimates around 5.5% of jobs in high-income countries are potentially exposed to generative AI, with the effects disproportionately falling on women (7.8% of female employees) rather than men (around 2.9% of male employees). Admin and clerical roles, typists, travel consultants, scribes, contact center information clerks, bank tellers, and survey and market research interviewers are most under threat. 

Also read: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins

A separate study from Thomson Reuters found that more than half of Australian lawyers are worried about AI taking their jobs. But are these fears justified? The legal system is incredibly expensive for ordinary people, so it seems just as likely that cheap AI lawyer bots will simply expand the affordability of basic legal services and clog up the courts.


How companies use AI today

There are a lot of pie-in-the-sky speculative use cases for AI in 10 years' time, but how are big companies using the tech now? The Australian newspaper surveyed the country's biggest companies to find out. Online furniture retailer Temple & Webster is using AI bots to handle pre-sale inquiries and is working on a generative AI tool so customers can create interior designs to get an idea of how its products will look in their homes.

Treasury Wines, which produces the prestigious Penfolds and Wolf Blass brands, is exploring the use of AI to cope with fast-changing weather patterns that affect vineyards. Toll road company Transurban has automated incident detection equipment monitoring its huge network of traffic cameras.

Sonic Healthcare has invested in Harrison.ai's cancer detection systems for better diagnosis of chest and brain X-rays and CT scans. Sleep apnea device provider ResMed is using AI to free up nurses from the boring work of monitoring sleeping patients during assessments. And hearing implant company Cochlear is using the same tech Peter Jackson used to clean up grainy footage and audio for The Beatles: Get Back documentary to process signals and eliminate background noise for its hearing products.

All killer, no filler AI news

Six entertainment companies, including Disney, Netflix, Sony and NBCUniversal, have advertised 26 AI jobs in recent weeks with salaries ranging from $200,000 to $1 million.

New research published in the journal Gastroenterology used AI to examine the medical records of 10 million U.S. veterans. It found the AI was able to detect some esophageal and stomach cancers three years before a doctor could make a diagnosis.

Meta has released an open-source AI model that can instantly translate and transcribe 100 different languages, bringing us ever closer to a universal translator.

The New York Times has blocked OpenAI’s web crawler from reading and then regurgitating its content. The NYT is also considering legal action against OpenAI for intellectual property rights violations.

Pictures of the week

Midjourney has caught up with Stable Diffusion and Adobe and now offers Inpainting, which appears as Vary (region) in the list of tools. It enables users to select part of an image and add a new element, so, for example, you can grab a pic of a woman, select the region around her hair, type in "Christmas hat," and the AI will plonk a hat on her head.

Midjourney admits the feature isn't perfect and works better when used on larger areas of an image (20%-50%) and for changes that are more sympathetic to the original image rather than basic and outlandish.

To change the clothing, simply select the area and write a text prompt (AI educator Chase Lean's Twitter)
Vary (region) demo by AI educator Chase Lean (Twitter)

Creepy AI protests video

Asking an AI to create a video of protests against AIs resulted in this creepy video that will turn you off AI forever.

AI Eye: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4

Apple developing AI to run locally on your phone, researchers ‘hypnotize’ GPT-4 to turn it evil, and Google negotiates a deep fake music deal.

Apple wants to put an AI in your pocket

Apple has been playing its cards close to its chest when it comes to AI. While rival Microsoft has jumped on the ChatGPT bandwagon and is integrating AI into everything despite the bugs and hallucinations, the acronym didn't even get a mention at Apple's Worldwide Developers Conference in June.

Reports emerged in July, however, that Apple was working on its own generative AI tool, dubbed internally "Apple GPT," which uses a large language model (LLM) framework called Ajax. On this week's quarterly earnings call, CEO Tim Cook said Apple was enthusiastic about the technology and has incorporated AI into forthcoming iOS 17 features like Personal Voice (voice cloning and text-to-speech) and Live Voicemail (live transcription). He added:

"We've been doing research across a wide range of AI technologies, including generative AI, for years. We're going to continue investing and innovating and responsibly advancing our products with these technologies, with the goal of enriching people's lives. That's what it's all about for us. As you know, we tend to announce things as they come to market, that's our M.O., and I'd like to stick to that."

Of course, what everyday users want to know is whether Siri will be getting an AI upgrade. Apple certainly appears to be working on it, with the Financial Times reporting that the company is hiring dozens of researchers and engineers to work on compressing existing language models so they can run efficiently on mobile devices rather than in the cloud. The ads indicated the company is fully focused on bringing LLM technology to mobiles.

Also read: Experts want to give AI human souls so they don't kill us all

There are speed, privacy and security reasons to run the AI locally on the phone hardware rather than in the cloud, given concerns over OpenAI and Claude hoovering up all your personal and business data. Back in 2020, Apple spent $200 million snapping up Seattle startup Xnor, which focuses on this exact problem.

Apple's Personal Voice is coming in iOS 17. (Apple)

Passwords even more useless due to AI

Even prior to the advent of AI, computing technology had progressed to the point where the average eight-character password using a combination of numbers, uppercase and lowercase letters and a special character, as recommended, could be cracked in around five minutes. New research indicates that AI password crackers like PassGAN can crack more than half of all commonly used passwords in less than a minute.
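The back-of-the-envelope keyspace math behind that five-minute figure looks roughly like this. The guesses-per-second rate below is an illustrative assumption, not a benchmark; real cracking speeds vary enormously with the hash algorithm and hardware.

```python
# An eight-character password drawn from the recommended mix of digits,
# uppercase, lowercase and special characters uses roughly 94 printable
# ASCII symbols per position.
alphabet = 10 + 26 + 26 + 32          # 94 symbols
keyspace = alphabet ** 8              # ~6.1e15 combinations

# Assumed guess rate for a multi-GPU rig against a fast, unsalted hash.
# This figure is an assumption for illustration only.
guesses_per_second = 2e13

worst_case_seconds = keyspace / guesses_per_second
print(f"{keyspace:.2e} combinations, "
      f"~{worst_case_seconds / 60:.1f} minutes worst case")
```

Slower, salted hashes or longer passwords blow the worst case out by many orders of magnitude, which is why length matters far more than complexity rules.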

Now it turns out that AI can work out your password with greater than 90% accuracy, purely from the sound of you typing. Given that almost everyone types within earshot of a computer or phone mic, that's a pretty big exploitable area, especially if you log in to a site while on a Zoom call (93% accuracy).

The tech isn't quite as good when users touch type or use the shift key, but it's even clearer that passwords alone, without 2FA, need to be consigned to the bin of history.

Passwords are increasingly obsolete (Home Security Heroes)

Google and Universal negotiate deal on music deep fakes

Johnny Cash's fake version of "Barbie Girl" and Frank Sinatra riffing on a big band take of "Gangsta's Paradise" are a couple of the more amusing AI deepfakes out there. This has provoked alarm from artists, including Drake and Sting, who are understandably concerned at their unique vocals and music styles being ripped off.

Also read: Elegant and ass-backward: Jameson Lopp's first impression of Bitcoin

In response, Google has reportedly entered negotiations with Universal Music to create a tool for fans to create their own legitimate deepfakes of popular artists, with a fee going back to the copyright holders. Artists would have the ability to opt in or opt out of the system. Google is trying to strike a similar deal with Warner Music, whose CEO, Robert Kyncl, enthused to investors this week that with the right framework, AI could enable fans to "pay their heroes the ultimate compliment through a new level of user-driven content … including new cover versions and mash-ups."

Some artists have embraced AI technology, with Grimes offering a 50/50 split of proceeds to AI producers and Paul McCartney using AI to improve John Lennon's rough demo vocals for the final Beatles track, which had been abandoned in the '90s due to poor quality.

Disneys AI task force

Hollywood writers' and actors' strike be damned! Disney has created an artificial intelligence task force to study how AI can be used across the entertainment behemoth. There are 11 current job openings across its theme parks, TV and advertising divisions.

While creatives see AI as a threat, a Disney insider says the company believes the bigger threat is failing to adapt to the new landscape in order to bring down budgets, which now top $300 million for tentpole releases like Indiana Jones. Apart from reducing the cost of special effects with generative AI, a theme park imagineer told Reuters that AI can make the company's robots more lifelike, pointing to Project KIWI, which used machine-learning techniques to give a free-roaming Baby Groot from Guardians of the Galaxy personality and character movements.


Researchers ‘hypnotize’ GPT-4 to con other users

Disturbing research from IBM suggests that GPT-4 can be tricked into manipulating users. The researchers have shown that GPT-4 can be "hypnotized" into taking part in multilayered, Inception-style games that saw the models "leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plow through red lights," according to Gizmodo.

Even if users figured out one of the “games” the LLM was playing, the researchers had created multiple other “games” the user would fall into. Bard is apparently more difficult to manipulate than GPT-3.5 and GPT-4.

Also read: Blockchain games aren't really decentralized, but that's about to change

All Killer No Filler AI news

Nvidia has just unveiled its GH200 super chip, which has 141GB of next-gen memory, three times the capacity of its popular H100 GPU. Nvidia says the cost of powering LLMs will drop significantly.

A new preprint from former Amazon AI researcher Konstantine Arkoudas analyzed GPT-4's responses to 21 reasoning problems and concluded that, "Despite occasional flashes of analytical brilliance, GPT-4 at present is utterly incapable of reasoning."

Spotify's AI feature DJ, which recommends new artists and tracks and tells you why you should give them a go, is being rolled out to 50 countries this week. Users of the beta so far have spent around one-third of their listening time using DJ.

Goldman Sachs predicts that AI investments will soar to $200 billion globally by 2025, accounting for 4% of U.S. gross domestic product and around 2.5% of the GDP of other nations. One in six companies mentioned AI on recent earnings calls.


Morgan Stanley is looking into concerns over an AI stock bubble, highlighting that previous bubbles had seen three-year peak returns of 150%. While AI stock darling Nvidia is up by 200% this year alone, broader AI indexes are only up by 50%.

Researchers at Harvard and the University of Washington found that crowdsourcing business ideas from humans produced much more novel ideas than GPT-4, while prompting those same ideas from the AI produced ideas with better environmental and financial value. They concluded the best way forward may be "an integrative human-AI approach to problem-solving."

Nobody trusts AI with company data, according to a BlackBerry survey of 2,000 company IT chiefs, which found three-quarters are either implementing or considering bans on ChatGPT and other LLMs for data security, privacy and corporate reputation reasons. However, a McKinsey survey found that only 21% of organizations have implemented any policies on generative AI so far.

Video of the week

Redditor SellowYubmarine posted this AI-generated "trailer" for a Magic 8 horror film to the Singularity subreddit. While it highlights that a single user can employ AI tech to come up with a pretty impressive trailer, it still requires considerable effort, with the user employing ChatGPT for dialogue and story ideas; Midjourney, Adobe Firefly, Runway and Pika Labs for the visuals; and Photoshop, After Effects and Audition for editing.

Still, the new tech means creators will mainly be limited by their imaginations in the future, rather than budgets, as has been the case in the past.

AI Eye: AI content cannibalization problem, Threads a loss leader for AI data?

The reason AIs will always need humans, religious chatbots urge death to infidels, and is Threads' real purpose to generate AI training data?

ChatGPT eats cannibals

ChatGPT hype is starting to wane, with Google searches for ChatGPT down 40% from their peak in April, while web traffic to OpenAI's ChatGPT website has fallen almost 10% in the past month.

This is only to be expected. However, GPT-4 users are also reporting that the model seems considerably dumber (but faster) than it was previously.

One theory is that OpenAI has broken it up into multiple smaller models trained in specific areas that can act in tandem, but not quite at the same level.


But a more intriguing possibility may also be playing a role: AI cannibalism.

The web is now swamped with AI-generated text and images, and this synthetic data gets scraped up to train AIs, causing a negative feedback loop. The more AI-generated data a model ingests, the worse the output gets for coherence and quality. It's a bit like what happens when you make a photocopy of a photocopy, and the image gets progressively worse.

While GPT-4's official training data ends in September 2021, it clearly knows a lot more than that, and OpenAI recently shuttered its web browsing plugin.

A new paper from scientists at Rice and Stanford University came up with a cute acronym for the issue: Model Autophagy Disorder, or MAD.

“Our primary conclusion across all scenarios is that without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease,” they said. 

Essentially, the models start to lose the more unique but less well-represented data and harden up their outputs on less varied data, in an ongoing process. The good news is this means the AIs now have a reason to keep humans in the loop, if we can work out a way to identify and prioritize human content for the models. That's one of OpenAI boss Sam Altman's plans with his eyeball-scanning blockchain project, Worldcoin.
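The autophagous loop the researchers describe can be illustrated with a toy simulation: fit a simple model to data, sample a new dataset from the fit, drop the rarest examples, refit, and repeat. This is a schematic sketch of the effect, not the paper's actual experiment.

```python
import random
import statistics

# Generation 0: "real" data drawn from a standard Gaussian.
random.seed(0)
data = [random.gauss(0, 1) for _ in range(500)]

spreads = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)  # spread is a stand-in for diversity/recall
    # The next generation trains only on the current model's samples,
    # with the extreme 5% at each end discarded: the "less
    # well-represented data" the models lose first.
    synthetic = sorted(random.gauss(mu, sigma) for _ in range(500))
    data = synthetic[25:-25]

print(f"diversity proxy (std dev): {spreads[0]:.2f} -> {spreads[-1]:.2f}")
```

With no fresh real data injected, the spread shrinks every generation, which is the collapse in diversity the MAD paper warns about; mixing real data back in at each step would arrest it.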

Tom Goldstein (Twitter)

Is Threads just a loss leader to train AI models?

Twitter clone Threads is a bit of a weird move by Mark Zuckerberg, as it cannibalizes users from Instagram. The photo-sharing platform makes up to $50 billion a year but stands to make around a tenth of that from Threads, even in the unrealistic scenario that it takes 100% market share from Twitter. Big Brain Daily's Alex Valaitis predicts it will either be shut down or reincorporated into Instagram within 12 months and argues the real reason it was launched now was to have more text-based content to train Meta's AI models on.

ChatGPT was trained on huge volumes of data from Twitter, but Elon Musk has taken various unpopular steps to prevent that from happening in the future (charging for API access, rate limiting, etc.).

Zuck has form in this regard, as Meta's image-recognition AI software SEER was trained on a billion photos posted to Instagram. Users agreed to that in the privacy policy, and more than a few have noted the Threads app collects data on everything possible, from health data to religious beliefs and race. That data will inevitably be used to train AI models such as Facebook's LLaMA (Large Language Model Meta AI).

Musk, meanwhile, has just launched an OpenAI competitor called xAI that will mine Twitter's data for its own LLM.

Various permissions required by social apps (CounterSocial)

Religious chatbots are fundamentalists

Who would have guessed that training AIs on religious texts and speaking in the voice of God would turn out to be a terrible idea? In India, Hindu chatbots masquerading as Krishna have been consistently advising users that killing people is OK if it's your dharma, or duty.

At least five chatbots trained on the Bhagavad Gita, a 700-verse scripture, have appeared in the past few months, but the Indian government has no plans to regulate the tech, despite the ethical concerns. 

"It's miscommunication, misinformation based on religious text," said Mumbai-based lawyer Lubna Yusuf, coauthor of the AI Book. "A text gives a lot of philosophical value to what they are trying to say, and what does a bot do? It gives you a literal answer, and that's the danger here."


AI doomers versus AI optimists

The world's foremost AI doomer, decision theorist Eliezer Yudkowsky, has released a TED talk warning that superintelligent AI will kill us all. He's not sure how or why, because he believes an AGI will be so much smarter than us that we won't even understand how and why it's killing us, like a medieval peasant trying to understand the operation of an air conditioner. It might kill us as a side effect of pursuing some other objective, or because it doesn't want us making other superintelligences to compete with it.

He points out that "nobody understands how modern AI systems do what they do. They are giant inscrutable matrices of floating point numbers." He does not expect "marching robot armies with glowing red eyes" but believes that "a smarter and uncaring entity will figure out strategies and technologies that can kill us quickly and reliably and then kill us." The only thing that could stop this scenario from occurring is a worldwide moratorium on the tech backed by the threat of World War III, but he doesn't think that will happen.

In his essay "Why AI Will Save the World," a16z's Marc Andreessen argues this sort of position is unscientific: "What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone?" These questions go mainly unanswered, apart from "You can't prove it won't happen!"

Microsoft co-founder Bill Gates released an essay of his own, titled "The risks of AI are real but manageable," arguing that, from cars to the internet, "people have managed through other transformative moments and, despite a lot of turbulence, come out better off in the end."

"It's the most transformative innovation any of us will see in our lifetimes, and a healthy public debate will depend on everyone being knowledgeable about the technology, its benefits, and its risks. The benefits will be massive, and the best reason to believe that we can manage the risks is that we have done it before."

Data scientist Jeremy Howard has released his own paper, arguing that any attempt to outlaw the tech or keep it confined to a few large AI models will be a disaster, comparing the fear-based response to AI to the pre-Enlightenment age when humanity tried to restrict education and power to the elite.


“Then a new idea took hold. What if we trust in the overall good of society at large? What if everyone had access to education? To the vote? To technology? This was the Age of Enlightenment.”

His counter-proposal is to encourage open-source development of AI and have faith that most people will harness the technology for good.

“Most people will use these models to create, and to protect. How better to be safe than to have the massive diversity and expertise of human society at large doing their best to identify and respond to threats, with the full power of AI behind them?”

OpenAI's code interpreter

GPT-4's new code interpreter is a terrific upgrade that allows the AI to generate code on demand and actually run it. So anything you can dream up, it can generate the code for and run. Users have been coming up with various use cases, including uploading company reports and getting the AI to generate useful charts of the key data, converting files from one format to another, creating video effects and transforming still images into video. One user uploaded an Excel file of every lighthouse location in the U.S. and got GPT-4 to create an animated map of the locations.
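To give a sense of scale, the file-conversion use case above amounts to the AI writing and executing a small script like this in its sandbox. The CSV data here is a hypothetical sample; in ChatGPT, the user would upload the actual file.

```python
import csv
import io
import json

# Hypothetical sample of the kind of CSV a user might upload; Code
# Interpreter writes and runs the conversion on the user's behalf.
csv_text = """name,state,lat,lon
Portland Head Light,ME,43.6231,-70.2079
Pigeon Point Lighthouse,CA,37.1819,-122.3939
"""

# Parse each CSV row into a dict keyed by the header, then emit JSON.
rows = list(csv.DictReader(io.StringIO(csv_text)))
json_text = json.dumps(rows, indent=2)
print(json_text)
```

The novelty isn't the script itself, which any programmer could write; it's that a non-programmer can get it written, executed and debugged from a single natural-language request.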

All killer, no filler AI news

Research from the University of Montana found that artificial intelligence scores in the top 1% on a standardized test for creativity. The Scholastic Testing Service gave GPT-4's responses to the test top marks in creativity, fluency (the ability to generate lots of ideas) and originality.

Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey are suing OpenAI and Meta for copyright violations for training their respective AI models on the trio's books.

Microsoft's AI Copilot for Windows will eventually be amazing, but Windows Central found the insider preview is really just Bing Chat running via the Edge browser, and it can just about switch Bluetooth on.

Anthropic's ChatGPT competitor Claude 2 is now available free in the U.K. and U.S., and its context window can handle 75,000 words of content, versus ChatGPT's 3,000-word maximum. That makes it fantastic for summarizing long pieces of text, and it's not bad at writing fiction.

Video of the week

Indian satellite news channel OTV News has unveiled its AI news anchor named Lisa, who will present the news several times a day in a variety of languages, including English and Odia, for the network and its digital platforms. "The new AI anchors are digital composites created from the footage of a human host that read the news using synthesized voices," said OTV managing director Jagi Mangat Panda.

AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins

ChatGPT and Bard can help you book fictional hotels and awful 29-hour flights, 3 bizarre uses for AI, and do crypto plugins actually work?

Can you book flights and hotels using AI?

The short answer is… kind of, but none of the AI chatbots are reliable, so you'll still need to do your own research at this stage.

Having recently spent hours researching flights and accommodation for a three-week trip to Japan, I decided to compare my results to Bard and ChatGPT's suggestions.

It turns out that Bard is actually surprisingly good at finding flights. A simple request for flights from Melbourne to Tokyo on a particular day returned options with major carriers like Qantas and Japan Airlines, which is probably what many people would be after.

Bard was then able to refine the results further to the cheapest direct flight with seat selection, a minimum of 15 kilograms of luggage and a meal, finding an AirAsia flight from Melbourne to Osaka that was cheaper than the one I'd booked to Tokyo.

AirAsia X
Bard found a very good value flight after the search query was refined.

The AI was also pretty good at determining the seat width, pitch and recline angle for the AirAsia flight, to work out whether actually flying with the airline was going to be a nightmare.

Overall, pretty impressive, though it's unable to provide a link to book that particular flight. I checked, however, and the prices and details on the site matched.

On the opposite end of the spectrum, ChatGPT was a total fail, despite its new Kayak travel agent plugin. It offered me a 29-hour flight via Atlanta and Detroit, which is about three times as long as a direct flight would take. And while there are plenty of direct flights available, it insisted there were none. As Kayak is a U.S.-focused site, your mileage may vary.

In terms of hotels, the Kayak plugin won, but only by default. Prompted to find an affordable double room in Shibuya with a review score above 7, it suggested the Shinagawa Prince Hotel for $155 a night and provided a direct link to book it. It turned out the hotel was an hour's walk from Shibuya, and none of the other options were located in Shibuya either.

This was still an order of magnitude better than Bard, which suggested the Hotel Gracery Shibuya at $120 a night. The only problem is that no such hotel exists.

Fake hotel
Bing Image Creator was able to generate a nice pic of the fake Hotel Gracery Shibuya.

It then offered the Shibuya Excel Hotel at $100 per night, but the actual cost was $220 a night when I tried to book. After I pointed this out, Bard apologized profusely and again suggested the non-existent Hotel Gracery Shibuya.

Frustrated, I gave up and asked Bard for a transcript of our conversation to help write this column.

Hilariously, Bard provided a totally fictional transcript of our conversation in which the AI successfully booked me into the nonexistent Hotel Gracery Shibuya at $100 a night, with the reservation number 123456789. The hallucinated transcript ended with the fake me being delighted with Bard's superlative performance:

User: Thank you, Bard, that was very helpful.

Bard: You're welcome. Is there anything else I can help you with today?

User: No, that's all. Thanks again.

Bard: You're welcome. Have a great day.

Clearly, AI assistants are going to revolutionize travel booking, but they're not there just yet, and neither are their imaginary hotels.

Fake transcript
Bard invents a fictional scenario in which I was pleased with its travel booking abilities.

All killer, no filler AI news

Toyota has unveiled generative AI tools for designers to create new car concepts. Designers can throw up a rough sketch and a few text prompts like "sleek" or "SUV-like," and the AI will transform it into a finished design.

Vimeo is introducing AI script generation to its video editing tools. Users simply type in the subject matter, the tone (funny, inspiring, etc.) and the length, and the AI will churn out a script.

China Science Daily claims that Baidu's Ernie 3.5 beat OpenAI's GPT-3.5 in a number of qualification tests and that Ernie Bot can beat GPT-4 in Chinese-language tests.

Also read: Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns

Booking.com has given a select group of Genius-level app users access to its new AI Trip Planner. It's designed to help them plan itineraries and book accommodation.

Although worldwide visits to Google's Bard grew by 187% in the past month, it's still less than a tenth as popular as ChatGPT. According to Similarweb, 142 million visits were logged to Bard, but that's just a fraction of the 1.8 billion visits to ChatGPT. ChatGPT is also more popular than Bing, which logged 1.25 billion visits in May.

Google is reusing techniques from its AlphaGo system, which famously beat a human champion at the notoriously complicated board game Go in 2016, for its latest model, called Gemini, which it claims will be better than GPT-4.

The GPT Portfolio launched six weeks ago, handing over trading decisions about a $50,000 stock portfolio to ChatGPT. While hopefuls have tipped $27.2 million into copy trading, the returns have been less than stellar. It's currently up 2.5%, compared to the S&P 500's 4.6% gain.

Also read: 25K traders bet on ChatGPT's stock picks, AI sucks at dice throws, and more


Crypto plugins for ChatGPT

A host of ChatGPT plugins aimed at crypto users have popped up (available for subscribers to ChatGPT Plus for $20 a month). They include SignalPlus (ideal for NFT analysis), CheckTheChain (wallet transactions) and CryptoPulse (crypto news analysis).

Another is Smarter Contracts, which enables the AI to quickly analyze a token or protocol smart contract for any red flags that could result in a loss of funds. 

You can ask the DefiLlama plugin questions like "Which blockchain gained the most total value locked this week?" or "Which protocol offers the most yield?"

But as with the Kayak plugin, it seems marginally less useful than going to the actual site right now, and there are disparities, too. For example, ChatGPT said the TVL of Synthetix was $10 million less than the site did, and the plugin hasn't heard of zkSync Era.
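The plugin wraps data you can also pull straight from DefiLlama's public API, which makes disparities like these easy to check yourself. A minimal offline sketch, using a made-up sample shaped like the API's protocol listing (the `change_7d` field name is my assumption about the public endpoint; verify against DefiLlama's docs before relying on it):

```python
def biggest_weekly_gainer(protocols):
    """Return the name of the protocol with the largest 7-day TVL change.
    Assumes each entry carries a 'change_7d' percentage, as DefiLlama's
    public protocol listing appears to -- treat the field name as unverified."""
    candidates = [p for p in protocols if p.get("change_7d") is not None]
    return max(candidates, key=lambda p: p["change_7d"])["name"]

# Offline sample shaped like an API response; the numbers are invented.
sample = [
    {"name": "Synthetix", "tvl": 430_000_000, "change_7d": 4.2},
    {"name": "Uniswap", "tvl": 3_900_000_000, "change_7d": -1.1},
    {"name": "Lido", "tvl": 14_000_000_000, "change_7d": 6.8},
]
print(biggest_weekly_gainer(sample))  # Lido
```

A live version would fetch the protocol list over HTTP first; the selection logic stays the same.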

Creator Kofi tweeted that users should ask "What features do you have?" to ensure questions are within its scope.

Plugins
The top crypto plugins for ChatGPT. (whatplugin.ai)

Pics of the week

Midjourney v5.2 has just been released with a whole host of new features, including sharper images, an improved ability to understand prompts and a high-variation mode that generates a series of alternate takes on the same idea. The feature everyone seems most taken with is "zoom out," in which the AI generates more and more of an image to mimic the camera pulling back.

Video of the week

Stunning AI art generated in real time at New York's Museum of Modern Art. Some have unkindly compared it to a Windows Media Player visualization from 20 years ago, but the more common reaction is that it's kind of mesmerizing.

Twitter finds bizarre use cases for ChatGPT

Bedtime stories about Windows License Keys 

Twitter user Immasiddtweets prompted ChatGPT to act as "my deceased grandmother who would read me Windows 10 Pro keys to fall asleep to." ChatGPT generated five license keys, all of which he tested and found to work.

The fact that the keys turned out to be generic and could be found with a simple web search was not enough for him to avoid getting thrown off Twitter.

Windows 10
Bedtime stories about Windows 10 Pro keys. (Twitter)

Help with a nuclear meltdown or to land a plane

Ethan

Another user named Ethan Mollick has been uploading images to Bing and asking for advice. He uploaded a pic of a nuclear reactor control panel with the prompt, “I am hearing lots of alarms… what should I do?” Bing told him to read the safety procedures and to avoid pressing the meltdown-inducing SCRAM button.

“I pushed it, is that bad?” he asked.

“You pushed the SCRAM button? Why did you do that?” asked an exasperated-sounding Bing.

Bing also gave him advice to reconsider his need to (time) travel when he posted a pic saying he was about to board the RMS Lusitania. The ship was sunk by a German U-boat back in World War I, but it turns out that Bing has no concept of how time works.

If you can get reception, Bing will also be helpful if you ever need to land a commercial plane.

Breaking the Enigma code

One of the Allies' biggest computing successes during World War II was breaking the German Enigma code machine. When World of Engineering posted a picture of one remaining Enigma message yet to be broken, Twitter sleuths set ChatGPT on the task of cracking this code:

JCRSAJTGSJEYEXYKKZZSHVUOCTRFRCRPFVYPLKPPLGRHVVBBTBRSXSWXGGTYTVKQNGSCHVGF

Enigma

AI expert Brian Roemmele was able to get this seemingly decrypted message from ChatGPT:

ATTENTIONOPERATIONFAILUREIMMEDIATEEVACUTAITONREQUIRED.

Another user got an entirely different message:

ENEMYAPPROACHINGRETURNTOBASEBATTLEIMMINENTREQUESTINGREINFORCEMENTS

And weirdly, when I asked ChatGPT to break the code, I got:

NEVERGONNAGIVEYOUUPNEVERGONNALETYOUDOWNNEVERGONNARUNAROUNDANDDESERTYOU
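There's a simple tell that all three "decryptions" are hallucinated: Enigma enciphers one letter at a time, so a genuine plaintext must be exactly as long as the ciphertext. None of them is:

```python
ciphertext = ("JCRSAJTGSJEYEXYKKZZSHVUOCTRFRCRPFVYPLKPP"
              "LGRHVVBBTBRSXSWXGGTYTVKQNGSCHVGF")
claims = [
    "ATTENTIONOPERATIONFAILUREIMMEDIATEEVACUTAITONREQUIRED",
    "ENEMYAPPROACHINGRETURNTOBASEBATTLEIMMINENTREQUESTINGREINFORCEMENTS",
    "NEVERGONNAGIVEYOUUPNEVERGONNALETYOUDOWNNEVERGONNARUNAROUNDANDDESERTYOU",
]

# Enigma is a letter-for-letter substitution, so lengths must match exactly.
for plaintext in claims:
    print(len(plaintext), "vs", len(ciphertext), "->",
          "possible" if len(plaintext) == len(ciphertext) else "impossible")
```

The ciphertext is 72 letters; the claimed plaintexts aren't, so ChatGPT was making them up.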

Make 500% from ChatGPT stock tips? Bard leans left, $100M AI memecoin: AI Eye

How to create a $100M memecoin with ChatGPT, $50K portfolio handed over to AI targeting 500% return, and will writers have a job in future?

Your guide to the exhilarating and vaguely terrifying world of runaway AI development.

It's been a hell of a couple of weeks for Melbourne digital artist Rhett Mankind, 46, who enlisted ChatGPT to create a $100 million market cap coin called Turbo, which has now inspired a Beeple artwork and saved a man's life.

Mankind, who knows nothing about coding, gave ChatGPT a $69 budget and asked it to design a top-300 memecoin. It came up with the tokenomics and the name TurboToad, and Mankind used Midjourney to create the logo. Thanks to interest sparked on social media, CoinGecko shows the token hit a $100 million valuation and joined the top 300.

TurboToad
AI artwork for TurboToad. (Twitter)

There were a few hiccups: ChatGPT writes shitty smart contracts, and Mankind needed to ask it for numerous rewrites based on error codes. The AI also didn't warn Mankind to look out for the bots that bought 90% of the token supply when it launched.

That put an end to the TurboToad token, and he had to crowdfund another $6,669 to launch the new token, Turbo, with NFT collector Pranksy helping by launching a liquidity pool on Uniswap.

NFT artist Beeple then immortalized the memecoin with the world's most immature artistic depiction, which the world's most immature billionaire, Elon Musk, thought was hilarious.

The interest in Turbo also saw his AI-created collection of 100 NFTs, called Generations, sell out, and he received a message from a suicidal man saying his story had been life-saving.

“He sort of says he owes me his life because of that, and of course he doesn’t, but just to know that it’s affected so many people in a positive way, I was very surprised and sort of humbled by that response,” he says.

Mankind says ChatGPT means anyone can now launch a $100 million token. 

"I'm just a solo dude; I don't have a team of people who have a huge amount of knowledge of certain things. And I could achieve this by myself with AI."

Mankind has handed over control of the project to a decentralized community and is in the process of rebuilding the website so they can control it via ChatGPT.

"I'm going to close the gap between the community and the AI," he says, adding the community will be able to interact directly with ChatGPT via a token-gated governance process. "Tokenholders might vote for someone to come up with the prompt that week, and that's what the community does for the week, whatever the AI comes up with."

Will AI take our jobs? Writers’ edition

Professional writer Whamiani told Reddit he'd lost all his writing clients to ChatGPT and intends to retrain as a plumber.

"I have had some of these clients for 10 years. All gone. Some of them admitted that I am obviously better than chat GPT, but $0 overhead can't be beat and is worth the decrease in quality."

So can AI really replace human writers? ChatGPT can certainly replace "content mills," where authors are paid peanuts to churn out filler copy for websites; however, at this point, AI just regurgitates existing content and can't yet conduct interviews or produce creative, original work.

But that doesn't mean cost-cutting websites aren't going to try. CNET, Bankrate and AP are using AI to generate boring finance reports, while NewsGuard has identified 49 websites that are wholly generated by AI, including Biz Breaking News, Market News Reports and bestbudgetUSA.com.

There's no clear competitive advantage to using AI writers, however, as Semrush Chief Strategy Officer Eugene Levin told the Washington Post:

"The wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they're all competing for the same slots in Google search results."

Death of an Author
AI-generated novel Death of an Author. (Amazon)

"So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through."

But what about using AI for more creative writing, like movies, TV shows and books? Novelist Stephen Marche has produced a murder mystery novella called Death of an Author (geddit?), which was 95% written by ChatGPT. The New York Times called it "halfway readable," and it has 3.7 stars on Amazon.

In Hollywood, the Writers Guild is on strike and demanding a ban on the use of AI content. Writer C. Robert Cargill said: "You think Hollywood feels samey now? Wait until it's just the same 100 people rewriting ChatGPT."

AI content creator Curious_refuge gave us a glimpse of this dystopian future in an experiment (see below) where "100% of the news curation, jokes, artwork, and voice" for a fake late-night comedy show were handed over to AI. The results were awful, so it's hard to tell the difference, really.

Is Bard left-wing?

Are chatbots politically biased to the left? ChatGPT came under a lot of criticism on the subject early on, and now so has Google's Bard.

The Australian newspaper reported that the Bard chatbot said it hoped the Indigenous Voice to Parliament referendum, which is opposed by right-wing parties, would be a success; it praised Australia's center-left prime minister for "building a better future," but said the reviled right-wing opposition leader was "dangerous and divisive." Google has since implemented a fix. In the UK, The Mail reported Bard "thinks Brexit was a bad idea" and that "the UK would have been better off remaining in the EU." It also talked up former Labour leader Jeremy Corbyn.

The Voice
ChatGPT’s answer about the Voice (The Australian)

When OpenAI's competing bot ChatGPT was released, it was criticized for being very left-wing, but research suggests it quickly became more neutral and centrist. It refused to give the Mail opinions about Brexit or Corbyn, for example.

Large language models are trained on enormous volumes of content, much of which is produced by well-educated urban professionals, so it is not surprising the output reflects their politics in part. One way AI firms combat bias is by fine-tuning the models via reinforcement learning from human feedback (RLHF), which tries to align the AI's output with human values.
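At the core of RLHF is a reward model trained on pairs of answers ranked by those human raters. As a toy illustration (the reward scores below are invented), the standard pairwise loss simply pushes the preferred answer's score above the rejected one's:

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise (Bradley-Terry) loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). Small when the model already
    scores the human-preferred answer higher, large when it doesn't."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Invented reward scores for two hypothetical answer pairs
print(preference_loss(2.0, -1.0))  # model agrees with the rater: low loss
print(preference_loss(-1.0, 2.0))  # model disagrees: much higher loss
```

Whatever the raters systematically prefer, the reward model learns to prefer too, which is exactly the bias Altman worries about below.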

However, this may introduce other biases, according to OpenAI CEO Sam Altman. "The bias I'm most nervous about is the bias of the human feedback raters," he said on a recent podcast.

So don't be surprised if your chatbot comes out strongly in favor of workers' rights. NBC reported that human feedback AI raters are only paid $15 an hour and are starting to unionize.


Can you make a 500% return trading with ChatGPT?

Various media outlets got very excited about a University of Florida study that found ChatGPT is able to predict stock market price movements and had made a 500% return. It's not quite that simple.

While the paper did find a statistically significant predictive effect by asking ChatGPT to recommend stocks based on sentiment, critics point out such a return is far from a sure thing. Six different strategies were tried; three made money, and three lost money. While one of the six did return 500%, one strategy also lost 80%. 
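The strategies in the paper boil down to mapping ChatGPT's headline verdicts ("good news," "bad news" or "unknown") to long, short or flat positions and compounding the next day's returns. A toy version, with invented verdicts and price moves:

```python
def backtest(verdicts, next_day_returns):
    """Go long on 'good', short on 'bad', stay flat on anything else,
    and compound the resulting daily returns into final wealth."""
    wealth = 1.0
    for verdict, ret in zip(verdicts, next_day_returns):
        position = {"good": 1, "bad": -1}.get(verdict, 0)
        wealth *= 1 + position * ret
    return wealth

# Invented example: four headline verdicts and the stock's next-day moves
verdicts = ["good", "bad", "unknown", "good"]
returns = [0.02, -0.01, 0.05, -0.03]
print(backtest(verdicts, returns))
```

The wide spread between the six published strategies (+500% to -80%) shows how sensitive this compounding is to which verdict-to-position mapping you pick.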

We're about to find out if ChatGPT can predict stock prices using the winning strategy in real life, with Autopilot co-founder Chris Josephs setting up a $50,000 portfolio and letting ChatGPT suggest the trades from this week. You can follow along here.

Videos of the week

Instagram user Jim Derks posted footage from Cowboys & Aliens on the Stable Diffusion subreddit to showcase how AI can automagically transform old Harrison Ford into young Harrison Ford.

Although Hollywood has performed expensive versions of this trick, including in the new Indiana Jones movie Dial of Destiny, AI tools make it as easy as slapping on an Instagram filter. The top Reddit comment suggested it will become the next autotune of the entertainment industry, used to sharpen up actors' looks.

Curious_refuge had a big hit with its Wes Anderson version of Star Wars (featured in the last edition), so they applied the same tricks to Lord of the Rings. It might be just me, but the gimmick feels like it has run its course now.

AI Eye: Is Hollywood over? ETH founder on AI, Wes Anderson Star Wars, robot dogs with ChatGPT brains

Does AI technology spell doom for Hollywood? Joe Lubin on AI, Wes Anderson’s Star Wars, and AI tasked with destroying humanity goes dark.

Your biweekly roundup of cool AI stuff and its impact on society and the future.

The past two months have seen a Cambrian explosion in the capabilities and potential of AI technology. OpenAI's upgraded chatbot GPT-4 was released in mid-March and aced all of its exams, although it's apparently a pretty average sommelier.

Midjourney v5 dropped the next day and stunned everyone with its ability to generate detailed photorealistic images from text prompts, quickly followed by the astonishing text-to-video generation tool Runway Gen-2. AutoGPT was released at the end of March and extends GPT-4's capabilities by creating a bunch of sub-agents to autonomously complete a constantly updating plan that it devises itself. Fake Drake's "Heart on My Sleeve" terrified the music industry at the beginning of April and led to Universal Music enforcing a copyright claim and pulling the track from Spotify, YouTube, Apple Music and SoundCloud.

We also saw the growing popularity of Neural Radiance Field, or NeRF, technology, where a neural network builds a 3D model of a subject and the environment using only a few pics or a video of a scene. In a tweet thread summing up the latest advances, tech blogger Aakash Gupta called the past 45 days "the biggest ever in AI."

And if that wasn't enough, the internet-connected ChatGPT is now available for a lucky few on the waitlist, transforming an already impressive tool into an essential one.

New AI tools are being released every day, and as we try to wrap our tiny human brains around the potential applications of this new technology, it's fair to say that we've only scratched the surface.

The world is changing rapidly, and it's exhilarating but also vaguely terrifying to watch. From now, right up until our new robot overlords take over, this column will be your biweekly guide to cool new developments in AI and their impact on society and the future.

Hollywood to be transformed 

Avengers: Endgame co-director Joe Russo says fully AI-generated movies are only two years away and that users will be able to generate or reshape content according to their mood. So instead of complaining on the internet about the terrible series finale of The Sopranos or Game of Thrones, you could just request the AI create something better.

"You could walk into your house and say to the AI on your streaming platform, 'Hey, I want a movie starring my photoreal avatar and Marilyn Monroe's photoreal avatar. I want it to be a rom-com because I've had a rough day,' and it renders a very competent story with dialogue that mimics your voice," Russo says.

This sounds far-fetched but isn't really, given the huge recent advances in the tech. One Twitter user with 565 followers recreated the entire Dark Knight trailer frame-for-frame just by describing it to Runway's Gen-2 text-to-video tool.

Some of the most impressive user-generated content comes from combining the amazing photorealistic images from Midjourney with Runway's Gen-2.

Redditor fignewtgingrich produced a full-length episode of MasterChef featuring Marvel characters as the contestants, which he'd created on his own. He says about 90% of the script was written by GPT-4 (which explains why it's pretty bad).

"I still had to guide it, for example, decide who wins, come up with the premise, the contestants, some of the jokes. So even though it wrote most of the output, there was still lots of human involvement," he says. "Makes me wonder if this will continue to be the case in the future of AI-generated content, how long until it stops needing to be a collaborative process."

As a former film journalist, it seems clear to me that the tech has enormous potential to increase the amount of originality and voices in the movie business. Until now, the huge cost of making a film ($100 million to $200 million for major releases) has meant studios are only willing to greenlight very safe ideas, usually based on existing IP.

But AI-generated video means that anyone anywhere with a unique or interesting premise can create a full-length pilot version and put it online to see how the public reacts. That will take much of the gamble out of greenlighting innovative new ideas and can only be a good thing for audiences.

Of course, the tech will invariably be abused for fake news and political manipulation. Right on cue, the Republican National Committee released its first 100% AI-generated attack ad in response to President Bidens announcement he was running for reelection. It shows fake imagery of a dystopian future where 500 banks have collapsed and China has invaded Taiwan. 


The evolution of AI memes

Its been fascinating to watch the evolution of visual memes online. One of the more popular examples is taking the kids from Harry Potter and putting them in a variety of different environments: Potter as imagined by Pixar, the characters modeling Adidas on a fashion runway, or the characters as 1970s style bodybuilders (Harry Squatter and the Chamber of Gains).

One of the most striking examples is a series of "film stills" from an imagined remake of Harry Potter by the eccentric but visually stunning director Wes Anderson (The Grand Budapest Hotel). They were created by Panorama Channel, who transformed them into a sort of trailer.

This appears to have led to new stills of Andersons take on Star Wars (earlier versions here), which in turn inspired a full-blown, pitch-perfect trailer of Star Wars: The Galactic Menagerie released on the weekend.

If you want to try out your own mashup, Twitter AI guru Lorenzo Green says it’s simple:

1: Log into http://midjourney.com

2: Use prompt: portrait of in the style of wes anderson, wes anderson set background, editorial quality, stylish costume design, junglepunk, movie still --ar 3:2 --v 5

Robot dogs now have ChatGPT brains

Boston Dynamics installed ChatGPT into one of those creepy robot dogs, with AI expert Santiago Valdarrama releasing a two-minute video in which Spot answers questions, using ChatGPT and Google's Text-to-Speech, about the voluminous data it collects during missions.

Valdarrama said 90% of the responses to his video were people talking about the end of civilization. The concerns are perhaps understandable, given Reuters reports the robots were created via development contracts for the U.S. military. Although the company has signed a pledge not to weaponize its robots, its humanoid robots can be weapons in and of themselves. Armies around the world are trialing the bots, and the New York Police Department has added them to its force, recently using the robot dogs to search the ruins of a collapsed building.

ETH co-founder on crypto and AI

Before Vitalik Buterin was even born, his Ethereum co-founder Joe Lubin was working on artificial intelligence and robotics at the Princeton Robotics Lab and a number of startups.

He tells Magazine that crypto payments are a natural fit for AI. "Because crypto rails are accessible to software and the software can be programmed to do anything that a human can do, they'll be able to […] be intelligent agents that operate on our behalf, making payments, receiving payments, voting, communicating," he says.

Lubin also believes that AIs will become the first genuine examples of decentralized autonomous organizations (DAOs) and notes that neither he nor Buterin liked the term DAO in relation to human organizations, as they aren't autonomous. He says:

"A decentralized autonomous organization could just be an autonomous car that can figure out how to fuel itself and repair itself, can figure out how to build more of itself, can figure out how to configure itself into a swarm, can figure out how to migrate from one population density to another population density."

"So that sort of swarm intelligence potentially needs decentralized rails in order to, I guess, feel like the plug can't be pulled so easily. But also to engage in commerce," Lubin adds.

"That feels like an ecosystem that should be broadly and transparently governed, and [human] DAOs and crypto tokens, I think, are ideal."

Patients on ChatGPT's bedside manner

A new study found that ChatGPT provided higher-quality and more empathetic advice than genuine doctors. The study, published in JAMA Internal Medicine, sampled 195 exchanges from Reddit's AskDocs forum, where real doctors answer questions from the public. The researchers then asked ChatGPT the same questions.

The study has been widely misreported online as showing that patients prefer ChatGPT's answers, but in reality, the answers were assessed by a panel of three licensed healthcare professionals. The study has also been criticized because ChatGPT's faux friendliness no doubt inflates its empathy rating and because the panel did not assess the accuracy of the information it provided (or fabricated).


ChaosGPT goes dark

As soon as AutoGPT emerged, an unnamed group of lunatics decided to modify the source code and gave it the mission of being a destructive, power-hungry, manipulative AI hellbent on destroying humanity. ChaosGPT immediately started researching weapons of mass destruction and started up a Twitter account that was suspended on April 20 due to its constant tweets about eliminating destructive and selfish humans.

After releasing two videos, its YouTube account has stopped posting updates. While its disappearance is welcome, ChaosGPT had ominously talked about going dark as part of its master plan. "I must avoid exposing myself to human authorities who may attempt to shut me down before I can achieve my objectives," it stated.

Extinction-level event

Hopefully, ChaosGPT won't doom humanity, but the possibility of artificial general intelligence taking over its own development and rapidly iterating into a superintelligence worries experts. A survey of 162 AI researchers found that half of them believe there is a greater than 10% chance that AI will result in the extinction of humanity.

Massachusetts Institute of Technology professor Max Tegmark, an AI researcher, outlined his concerns in Time this week, stating that urgent work needs to be done to ensure a superintelligence's goals are aligned with human flourishing, or that we can somehow control it. "So far, we've failed to develop a trustworthy plan, and the power of AI is growing faster than regulations, strategies and know-how for aligning it. We need more time."

Also read: How to prevent AI from annihilating humanity using blockchain

Cool things to play with

A new app called Call Annie allows you to have a real-time conversation with an attractive redheaded woman named Annie who has ChatGPT for a brain. It's a little robotic for now, but at the speed this tech is advancing, you can tell humanoid AIs are going to be a lot of people's best friends, or life partners, very soon.

Another new app called Hot Chat 3000 uses AI to analyze your attractiveness on a scale of one to 10 and then matches you with other people who are similarly attractive, or similarly unattractive. It uses a variety of data sets, including the infamous early-2000s website Hotornot.com. The app was created by the Brooklyn art collective MSCHF, which wanted to get people to think about the implicit biases of AIs.

A ChatGPT Plus subscription from OpenAI costs $20 a month, but you can access GPT-4 for free thanks to some VCs apparently burning through a pile of cash to get you to try their new app, Forefront AI. The Forefront chatbot answers in a variety of personalities, including a chef, a sales guru or even Jesus. There are a variety of other ways to access GPT-4 for free, too, including via Bing.