Elon Musk

EU Commission targets X over ‘dissemination of illegal content’

X owner Elon Musk told advertisers to “go f— yourself” on Nov. 29 after many left the social media platform in response to antisemitic content and a report on hate speech.

The European Commission said it had opened formal proceedings to investigate X — formerly Twitter — over content related to the terrorist group Hamas’ attacks against Israel.

In a Dec. 18 notice, the commission said it planned to assess whether X violated the Digital Services Act in its response to misinformation and illegal content on the platform. According to the government body, X was under investigation over the effectiveness of its Community Notes — comments added to specific tweets aimed at providing context — as well as its policies for “mitigating risks to civic discourse and electoral processes.”

“The opening of formal proceedings empowers the Commission to take further enforcement steps, such as interim measures, and non-compliance decisions,” said the notice. “The Commission is also empowered to accept any commitment made by X to remedy on the matters subject to the proceeding.”

Tesla’s humanoid robot is now 30% faster, 22 pounds lighter

Tesla CEO Elon Musk has revealed a new prototype of the company’s humanoid robot, Optimus Gen 2. It can dance and do squats.

Elon Musk, the CEO of Tesla and executive chair of X (formerly Twitter), has revealed a new prototype of Tesla’s humanoid robot, Optimus, which is lighter and faster than previous versions. Musk shared a video presentation of Optimus Gen 2 via his X account on Dec. 13.

Musk has repeatedly called for more regulatory oversight of artificial intelligence (AI), believing it may be “smarter than all humans at everything” in the future.

In December, the entrepreneur claimed that a “digital god” would make the copyright lawsuits regarding AI irrelevant. Musk previously predicted that artificial general intelligence would arrive before 2030, an estimate many industry experts disputed as overly optimistic.

Elon Musk’s xAI files with SEC for private sale of $1B in unregistered securities

“Spicy” AI chatbot Grok hasn’t been seen by the public yet, but it’ll be worth plenty after this securities issue.

Elon Musk’s X-linked artificial intelligence developer xAI has reached an agreement for the private sale of $865.3 million in unregistered equity securities, according to a filing made with the United States Securities and Exchange Commission on Dec. 5.

xAI filed the SEC’s Form D, which allows it to engage in the private sale of securities without registration. The form is used to comply with Regulation D of the Securities Act of 1933, which provides exemptions from the standard registration requirements. On the form, Musk is listed as the executive officer and director of the business.

The xAI Form D further clarifies that the securities will be sold to accredited investors, with restrictions on their resale, under Rule 506(b). The form also indicates that $134.7 million in such securities has already been sold, with the first sale taking place on Nov. 29. Combined with the $865.3 million still to be placed, that brings the total the company is seeking to raise to $1 billion.
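
The arithmetic behind that total can be checked in a few lines. The following minimal sketch simply adds the two figures reported above; the variable names are illustrative and not terms used in the filing itself.

```python
# Back-of-the-envelope check of the xAI Form D figures reported above.
# The variable names are illustrative, not terms used in the filing.

remaining_to_sell_musd = 865.3  # unregistered equity still to be placed, in $ millions
already_sold_musd = 134.7       # securities sold since the first sale on Nov. 29, in $ millions

total_offering_musd = remaining_to_sell_musd + already_sold_musd
print(f"Total offering: ${total_offering_musd:,.1f} million")  # -> $1,000.0 million, i.e. $1 billion
```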

Elon Musk says “digital god” will make AI copyright lawsuits irrelevant

The billionaire mogul also claimed that OpenAI was lying about its training methods, but interviewer Andrew Ross Sorkin may have flubbed the question.

Elon Musk made some of his boldest claims yet concerning the future of artificial intelligence (AI) during an interview with CNBC’s Andrew Ross Sorkin.

In the wide-ranging interview, Musk responded to questions about recent lawsuits alleging copyright infringement filed against some of his competitors in the AI space.

Related: Elon Musk launches AI chatbot ‘Grok,’ says it can outperform ChatGPT

“So, you think it’s a lie,” Sorkin asked Musk during the interview, “when OpenAI says that… none of these guys say that they’re training on copyrighted data.”

Musk’s response: “Yeah, that’s a lie.”

Elon Musk’s digital god

It’s unclear what Sorkin meant by his query, as OpenAI has testified in court to the effect that it does train models on copyrighted material.

Under further prodding from Sorkin, Musk dismissed the efficacy of the lawsuits by claiming that a “digital god” would make the copyright lawsuits irrelevant:

Elon Musk to advertisers trying to ‘blackmail’ X — ‘Go fuck yourself’

The billionaire X owner lashed out at advertisers ditching the platform due to his controversial posts.

Billionaire entrepreneur Elon Musk is making headlines again, this time for an expletive-laden outburst on live TV at an annual conference hosted by The New York Times.

Speaking at the 2023 DealBook Summit in New York on Nov. 29, Elon Musk, the owner of micro-blogging platform X (formerly Twitter), lashed out at advertisers leaving the social media site due to antisemitic posts he amplified.

Musk recently publicly endorsed a post on the platform that the White House labeled “antisemitic and racist hate,” an endorsement he has since apologized for.

However, when interviewer Andrew Ross Sorkin asked about advertisers leaving the platform, Musk stated:

“If someone is going to try to blackmail me with advertising, blackmail me with money, go fuck yourself …. Is that clear? I hope it is.”

Musk also gave a shout-out to Disney CEO Bob Iger, who was reportedly in the audience, saying, “Hi Bob!” Disney is one of several advertisers that have left X.

Crypto Biz: EU looks under the hood of Big Tech algorithms, Musk’s TruthGPT and more

This week’s Crypto Biz explores the fast-growing AI market, MicroStrategy’s integration with the Bitcoin Lightning Network, and Microsoft’s efforts to power AI development.

Artificial intelligence (AI) might soon answer a profound philosophical question. Finding out what truth is will be the focus of Elon Musk’s new endeavor, TruthGPT, an AI dedicated to finding the fundamental nature of the universe and addressing an alleged “left-wing” bias in the industry.

Whether the intelligence can provide truthful answers remains to be seen, but the move would undoubtedly strengthen Musk’s business portfolio, which already includes SpaceX and Twitter, both companies sharing Musk’s curiosity about the universe and his approach to truth.

Speaking of facts, European authorities are strengthening regulations on AI projects by launching a new research hub to investigate Big Tech algorithms. A team of multidisciplinary experts will be in charge of looking “under the hood” of large search engines and online platforms to examine how their algorithms contribute to the spread of illegal and harmful content.

This week’s Crypto Biz looks at the latest developments in the fast-growing AI market, MicroStrategy’s integration with the Bitcoin Lightning Network, and Microsoft’s efforts to power AI development.

Microsoft is developing its own AI chip to power ChatGPT

Since 2019, Microsoft has been developing its own artificial intelligence chips to cut down on the growing costs of both its own and OpenAI’s projects, reducing its reliance on Nvidia’s GPUs. The move also reflects a chip shortage that affected many industries worldwide during the pandemic. The Nvidia H100, one of the most popular GPUs for training machine learning systems, can fetch as much as $40,000 on reseller services such as eBay amid increasing market scarcity.

MicroStrategy’s Saylor fuses work email address with Bitcoin Lightning

MicroStrategy CEO Michael Saylor disclosed the integration of email addresses with the Bitcoin Lightning Network, allowing transactions to be sent using email addresses instead of wallet addresses. In a screenshot, Saylor showed a few transactions sent to his corporate email account in the form of satoshis, the smallest unit of Bitcoin (BTC). It’s unclear whether the solution is available for other MicroStrategy email addresses. The Lightning Network is a popular Bitcoin scaling solution capable of processing 1 million transactions per second at a cost of 1 satoshi per transaction.

Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT

Elon Musk is developing a ChatGPT rival known as “TruthGPT,” a large language model that will be trained to explore the “nature of the universe.” During an interview with American cable network Fox News, Musk said the truth-seeking AI would also push back against what he perceives as “left-wing” bias in the industry. ChatGPT “is programmed by left-wing experts, which train the chatbots to lie,” according to Musk. This is not the first time Musk has attacked ChatGPT; he recently spearheaded a letter calling for a halt to advanced AI development, citing societal concerns.

Before you go: OpenAI has until April 30 to comply with EU laws, ‘next to impossible’ say experts

OpenAI faces its biggest regulatory challenge yet as Italian authorities insist the company has until April 30 to comply with local and European data protection and privacy laws. Under the EU’s laws, tech outfits must solicit user consent to train with personal data. Companies operating in Europe must also allow Europeans to opt out of data collection and sharing. According to experts, this will be difficult for OpenAI because its models are trained on massive data troves scraped from the internet and combined into training sets.

Crypto Biz is your weekly pulse of the business behind blockchain and crypto, delivered directly to your inbox every Thursday.

Microsoft is developing its own AI chip to power ChatGPT: Report

The software giant is reportedly developing its own machine learning chips to power AI projects for OpenAI and its own internal teams.

Microsoft has secretly been developing its own artificial intelligence (AI) chips to deal with the rising costs of development for in-house and OpenAI projects, per a report from The Information.

Reportedly in the works since 2019, Microsoft’s recently revealed hardware venture appears designed to reduce the Redmond, Washington-based company’s reliance on Nvidia’s GPUs.

A Google search reveals that the Nvidia H100, one of the more popular GPUs for training machine learning systems, costs as much as $40,000 on reseller services such as eBay amid increasing market scarcity.

These high costs have pushed several Big Tech companies to develop their own hardware, with Meta, Google and Amazon all developing machine-learning chips over the past few years.

Details remain scarce as Microsoft hasn’t officially commented yet, but The Information’s report claims that the chips are being developed under the code name “Athena” — perhaps a nod to the Greek goddess of war, as the generative AI arms race continues to heat up.

Related: Italy ChatGPT ban: Data watchdog demands transparency to lift restriction

The report also mentions that the new chips are already being tested by team members from Microsoft’s internal machine-learning staff and OpenAI’s developers.

While one can only speculate at this time as to how OpenAI intends to use Microsoft’s AI chips, the company’s co-founder and CEO, Sam Altman, recently told a crowd at the Massachusetts Institute of Technology that the infrastructure and design that got the company from GPT-1 to GPT-4 is “played out” and will need to be rethought:

“I think we’re at the end of the era where it’s going to be these, like, giant, giant models. We’ll make them better in other ways.”

This comes on the heels of a busy news cycle for the AI sector, with Amazon recently entering the arena as a (somewhat) new challenger with its first self-developed models leaping onto the scene as part of its Bedrock AI infrastructure rollout.

And, on April 17, tech mogul and world’s richest person Elon Musk announced the impending launch of TruthGPT, a supposed “truth-seeking” large language model designed to take on ChatGPT’s alleged left-wing bias, during an interview with Fox News’ Tucker Carlson.

EU legislators call for ‘safe’ AI as Google’s CEO cautions on rapid development

A group of EU politicians called for a united front in developing artificial intelligence while tech executives expressed concerns about its societal impacts.

A dozen European Union politicians have signed a letter calling for the “safe” development of artificial intelligence as Google’s CEO cautioned against releasing powerful AI tech before society has had a chance to adapt.

An April 16 open letter shared on Twitter by EU Parliament member Dragoș Tudorache called for a collaborative effort and a universal set of rules around the development of AI.

Tudorache, along with 11 other EU politicians named in the letter, asked European Commission President Ursula von der Leyen and United States President Joe Biden to convene a summit on AI and agree on a set of governing principles for the development, control and deployment of the technology.

“Recent advances in the field of artificial intelligence (AI) have demonstrated that the speed of technological progress is faster and more unpredictable than policymakers around the world have anticipated,” the letter reads.

”We are moving very fast.”

The letter further asks the principals of the Trade and Technology Council (TTC), a forum for the U.S. and EU to coordinate approaches to economic and technology issues, to agree on a preliminary agenda for the proposed AI summit and for companies and countries worldwide to “strive for an ever-increasing sense of responsibility” while developing AI.

“Our message to industry, researchers, and decision-makers, in Europe and worldwide, is that the development of very powerful artificial intelligence demonstrates the need for attention and careful consideration. Together, we can steer history in the right direction,” the letter said.

Google CEO Pichai Sundararajan, better known as Sundar Pichai, also expressed caution around the rapid development of AI in an April 16 interview on CBS’ 60 Minutes, saying that society might need time to adapt to the new tech.

“You don’t want to put a technology out like this when it’s very, very powerful because it gives society no time to adapt. I think that’s one reasonable perspective,” he said.

“The pace at which we can think and adapt as social institutions compared to the pace at which the technology is evolving, there seems to be a mismatch,” he added.

However, Pichai also noted that while there are causes for concern, he does feel “optimistic” because of the number of people worrying about the implications of AI so early in its life cycle compared to other technical advancements in the past.

“I think there are responsible people there trying to figure out how to approach this technology, and so are we,” he said.

Related: Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT

The European Union is already looking at AI with its Artificial Intelligence Act, while the European Data Protection Board has created a task force to examine the generative AI chatbot ChatGPT.

The letter from the EU politicians echoes the same concerns put forward by more than 2,600 tech leaders and researchers who called for a temporary pause on further AI development, fearing “profound risks to society and humanity.”

Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and other AI CEOs, chief technology officers and researchers were among the other signatories of the letter, which was published by the United States think tank Future of Life Institute (FOLI) on March 22.

While the EU politicians agree with the “core message” of the FOLI letter and share “some of the concerns,” they have come out in disagreement with “some of its more alarmist statements.”

In an April 16 interview with Fox News, Musk continued to highlight the risk he believes AI could pose, saying that, just like any other technology, AI has the potential to be misused if it is developed with ill intentions.

Magazine: ZK-rollups are ‘the endgame’ for scaling blockchains: Polygon Miden founder

Elon Musk to launch truth-seeking artificial intelligence platform TruthGPT

The tech billionaire made the announcement during a Fox interview on April 17.

According to an April 17 Fox News report, Elon Musk told Fox anchor Tucker Carlson that he’s developing a ChatGPT rival known as “TruthGPT,” a large language model (LLM) that Musk says will be trained to explore the mysteries of the universe. 

“I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe.” 

This truth-seeking AI, as per Musk, will also push back against what he perceives as “left-wing” bias in the industry. Musk told Carlson that ChatGPT “is programmed by left-wing experts, which train the chatbots to lie.” Carlson, for his part, also stated that “the deeper problem is not simply that it will become autonomous and turn us all into slaves, but that it will control our understanding of reality and do it in a really dishonest way,” adding that “it could be programmed to lie to us for political effect.”

Musk also appeared to address concerns over his entering the crowded LLM market — a move he signaled with the purchase of a reported 10,000 GPUs — just weeks after signing a petition calling for a pause on related research in order to evaluate safety concerns:

“I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.”

Artificial intelligence has also demonstrated its capabilities in blockchain applications. Since March 17, Cointelegraph has been reporting on a series of token trades conducted by ChatGPT-4. When prompted on how to allocate $100 across certain coins or tokens, ChatGPT-4 recommended $50 to Bitcoin (BTC), $25 to Ether (ETH), $15 to Cosmos’ ATOM (ATOM) and $10 to nonfungible tokens and other Web3 projects (a breakdown sketched in the short example after the quote below). According to the chatbot:

“The overall trend shows that Bitcoin acts as a safe haven during times of financial instability, such as the recent Silicon Valley Bank and Signature Bank failures. Additionally, Bitcoin’s dominance is nearing 50%, and some analysts predict a move towards $100k.”
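
For illustration only, here is a minimal sketch of that reported $100 allocation expressed as data, with a check that the figures account for the full amount. The labels and variable names are descriptive choices, not terms used by the chatbot.

```python
# Illustrative only: the $100 allocation ChatGPT-4 reportedly recommended,
# as described in this article. Labels are descriptive, not the chatbot's own.
allocation_usd = {
    "Bitcoin (BTC)": 50,
    "Ether (ETH)": 25,
    "Cosmos (ATOM)": 15,
    "NFTs and other Web3 projects": 10,
}

total = sum(allocation_usd.values())
assert total == 100  # the recommendation accounts for the full $100

for asset, amount in allocation_usd.items():
    print(f"{asset}: ${amount} ({amount / total:.0%})")
```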

Magazine: FTX considers reboot, Ethereum’s fork goes live and OpenAI news

Cointelegraph journalist and editor Zhiyuan Sun contributed to this story. 

Elon Musk reaffirms AI’s potential to destroy civilization

Speaking about artificial intelligence’s potential for civilizational destruction, Musk said, “Anyone who thinks this risk is 0% is an idiot.”

While tech giants worldwide work to make generative artificial intelligence (AI) part of people’s daily lives, some believe the risk of the nascent technology going rogue remains very real. With this possibility in mind, Tesla and Twitter chief Elon Musk reminded people of AI’s potential to destroy civilization.

On March 15, Musk’s plan to create a new AI startup surfaced after the entrepreneur reportedly assembled a team of AI researchers and engineers. However, Musk continues to highlight the destructive potential of AI — just like any other technology — if it falls into the wrong hands or is developed with ill intentions.

According to Musk, AI can be dangerous. In a Fox News interview, he said AI could be more dangerous than mismanaged aircraft design or production maintenance, for example. While acknowledging the low probability, he stated:

“However small one may regard that probability, but it is non-trivial — it has the potential of civilizational destruction.”

As Crypto Twitter picked up on the discussion, Musk followed up with strong support for his statement:

“Anyone who thinks this risk is 0% is an idiot.”

On the other hand, tech entrepreneurs like Bill Gates remain more optimistic about AI and the positive impacts it can bring to humanity.

Related: Elon Musk reportedly buys thousands of GPUs for Twitter AI project

On April 13, Amazon became the latest tech giant to join the race to create AI services. Amazon Bedrock allows users to build and scale generative AI apps.

According to a blog post announcing the service, Bedrock allows users to “privately customize foundation models with their own data, and easily integrate and deploy them into their applications.”