AI Eye – Cointelegraph Magazine
2024.08.22 11:57
US wants to flood the Taiwan skies with a robot army
The US military plans to counter a Chinese invasion of Taiwan by flooding the narrow Taiwan Strait with swarms of thousands of autonomous drones.
“I want to turn the Taiwan Strait into an unmanned hellscape using a number of classified capabilities so that I can make their lives utterly miserable for a month, which buys me the time for the rest of everything,” US Indo-Pacific Command chief Navy Admiral Samuel Paparo told the Washington Post.
The drones are intended to confuse enemy aircraft, provide targeting data for missiles to knock out warships and generally create chaos. Ukraine has pioneered the use of drones in warfare, destroying 26 Russian vessels and forcing the Black Sea Fleet to retreat.
Ironically, most of the parts in Ukraine’s drones are sourced from China, and there are doubts over whether America can produce enough drones to compete.
To that end, the Pentagon has earmarked $1 billion this year for its Replicator initiative to mass-produce the kamikaze drones. Taiwan also plans to procure nearly 1,000 additional AI-enabled drones in the next year, according to the Taipei Times. The future of warfare has arrived.
AI agents get a crypto payments network
Skyfire has just launched a payments network that enables AI agents to make transactions autonomously. Each agent gets a pre-funded crypto account with safeguards against overspending (humans get pinged if spending exceeds preset limits), neatly sidestepping the fact that AI agents can't open their own bank accounts.
Co-founder Craig DeWitt told TechCrunch that AI agents have been “glorified search” without the ability to pay for anything. “Either we figure out a way where agents are actually able to do things, or they don’t do anything, and therefore, they’re not agents,” he said.
Denso, a global auto parts manufacturer, is already using Skyfire with its own AI agents to source materials, while the Fiverr-like Payman platform uses Skyfire to enable AI agents to pay humans to do tasks on their behalf.
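To make the safeguard idea concrete, here is a minimal sketch of a pre-funded agent wallet with a spending cap and a human alert. It is not Skyfire's actual API; every class, field and function name below is a hypothetical assumption based only on the description above.

```python
# Hypothetical sketch of a pre-funded agent wallet with a spending cap.
# NOT Skyfire's API: all names and behavior here are assumptions based
# only on the description above (pre-funded balance, preset limit,
# human notification when the limit would be exceeded).

from dataclasses import dataclass


class SpendingLimitExceeded(Exception):
    """Raised when a payment would push the agent past its preset limit."""


@dataclass
class AgentWallet:
    balance: float       # pre-funded amount available to the agent
    spend_limit: float   # preset cap before a human is pinged
    spent: float = 0.0

    def pay(self, amount: float, notify_human) -> None:
        # Ping the human and block the payment if it would exceed the cap.
        if self.spent + amount > self.spend_limit:
            notify_human(f"Agent tried to spend {amount}, over its preset limit")
            raise SpendingLimitExceeded
        if amount > self.balance:
            raise ValueError("Insufficient pre-funded balance")
        self.balance -= amount
        self.spent += amount


# Usage: the agent pays a supplier until it hits its cap.
wallet = AgentWallet(balance=100.0, spend_limit=50.0)
wallet.pay(30.0, notify_human=print)      # succeeds
try:
    wallet.pay(25.0, notify_human=print)  # pings the human and is blocked
except SpendingLimitExceeded:
    pass
```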
LLMs too dumb to wipe out humanity
A study from the University of Bath in the United Kingdom has concluded that because large language models cannot learn independently or acquire new skills, they pose no existential risk to humanity.
The researchers argue that LLMs like ChatGPT will continue to be safe even as they become more sophisticated and are trained on ever-larger data sets.
The study examined LLMs’ ability to complete novel tasks and determined that they are highly unlikely to ever gain complex reasoning skills.
Dr Tayyar Madabushi said the fear that “a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”
AI x crypto tokens tank
AI-themed coins, including Bittensor, Render Network, Near Protocol and Internet Computer, have plunged more than 50% from their peaks this year. And the much-touted merger of Fetch.ai, SingularityNET and Ocean Protocol into the Artificial Superintelligence Alliance has done little for the price.
FET (which is yet to be renamed ASI) peaked at $3.26 and has now fallen to just 87 cents. Kaiko reports that weekly global trading volumes for the sector fell to just $2 billion in early August.
While you might assume that's because the AI bubble has popped, over on the share market the price of the Global X Robotics & Artificial Intelligence ETF (BOTZ) is within a few percent of its yearly high.
Raygun video shows limits of AI
Australia finished in the top four at the Olympics, but the only Australian athlete the world will remember is viral breakdancer Raygun. An AI text-to-video clip of Raygun breakdancing is every bit as subpar as her routines and is hilarious or disturbing, depending on your POV.
Text to Video has already developed pretty good so far. However, the models are clearly not a simulation of the world. Calculating physical laws does not work, the architecture is not designed for this. So we still need a few more breakthroughs. pic.twitter.com/MrqVXcHgkZ

— Chubby♨️ (@kimmonismus) August 21, 2024
ChatGPT is a terrible doctor, but specialized AIs are quite good
A new study in the scientific journal PLOS One found that ChatGPT is a pretty terrible doctor. The LLM was only able to achieve 49% accuracy when diagnosing a condition from 150 case studies on Medscape. (That’s probably why ChatGPT will refuse to give you medical advice unless you trick it — for example, by claiming to be doing academic research like the study’s authors did.)
However, specialized medical AIs such as the Articulate Medical Intelligence Explorer are significantly more advanced. A study published by Google earlier this year found AMIE outperformed human doctors in diagnosing 303 cases sourced from the New England Journal of Medicine.
In another new study, researchers from Middle Technical University and the University of South Australia found that a specially trained computer algorithm achieved 98% accuracy in diagnosing diseases from tongue color, including diabetes, stroke, anemia, asthma and liver and gallbladder conditions.
AI use in comedy
“Why did the politician bring a ladder to the debate? To make sure he could reach new heights with his promises!” That’s the kind of groan-inducing joke AIs will make up, which is why it’s surprising that some comedians have been using AI to help create shows.
Comedian Anesti Danelis used the bad jokes as the basis of his recent show Artificially Intelligent and said AI tools also helped structure the material.
“I learned through the process that human creativity can’t be replicated or replaced, and in the end, about 20% of the show was pure AI, and the other 80% was a mix,” he told the BBC.
US comedian Viv Ford also used AI to refine her Edinburgh Festival show No Kids on the Blockchain. She said:
“I’ll say, ‘hey, is this joke funny?’ And if it says ‘it’s funny,’ genuinely, it does not land with an audience.”
A study from the University of Southern California found that ChatGPT wrote slightly funnier jokes than the average person. But a second study compared AI gags with headlines from The Onion and found them similarly unfunny.
Trump’s deepfake AI pics are just memes: WaPo
Bitcoin stan Donald Trump’s reposting of AI-generated images of “Swifties for Trump” and Kamala Harris addressing a communist convention saw media outlets from The New York Times to Al Jazeera hyperventilating over the threat of AI deepfakes in politics. That’s a genuine threat, of course, but The Washington Post’s Will Oremus argues that, in this case, the images weren’t designed to fool people.
“Rather, the images seem to function more like memes, meant to provoke and amuse. They’re visual parallels to the nasty nicknames Trump calls his opponents,” he wrote this week.
“The intended audience doesn’t care whether they’re literally true. The fake images feel true on some level, or at least it’s enjoyable to imagine that they might be. And if the other side gets righteously riled up, the joke is on them.”
You can actually buy a Swifties for Trump T-shirt, though. (Truth Social)
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
Follow the author @andrewfenton