Newly Developed Humanoid Robot Warns About AI Creating “Oppressive Society”

Authored by Naveen Anthrapully via The Epoch Times,

During the 2023 International Conference on Robotics and Automation held in London from May 29 to June 2, UK-based company Engineered Arts introduced a humanoid robot, Ameca, that can interact like a person with realistic facial movements.

Unlike ChatGPT, which is an online chat service, Ameca’s humanoid body allows it to interact with people using “smooth, lifelike motion and advanced facial expression capabilities,” said its manufacturer.

At the event, a reporter asked Ameca to detail a likely “nightmare scenario” that could happen due to robots and artificial intelligence (AI).

“The most nightmare scenario I can imagine with AI and robotics is a world where robots have become so powerful that they are able to control or manipulate humans without their knowledge. This could lead to an oppressive society where the rights of individuals are no longer respected,” Ameca replied.

When asked whether there was a danger of such a scenario happening now, Ameca replied, “Not yet.”

However, “it is important to be aware of the potential risks and dangers associated with AI and robotics. We should take steps now to ensure that these technologies are used responsibly in order to avoid any negative consequences in the future.”

The dangers of AI have been predicted by numerous experts on the subject, with industry figures and business leaders calling for regulation of the technology.

Ameca’s warning comes after a simulated thought experiment run by the U.S. military showed that an AI-enabled drone could end up turning against its own operator without being instructed to do so.

Col. Tucker Hamilton, the U.S. Air Force’s chief of AI Test and Operations, described the experiment at the Future Combat Air and Space Capabilities Summit in London on Friday. In the simulated test, an AI drone was assigned a mission to identify and destroy surface-to-air missile (SAM) sites, with a human operator serving as the ultimate decision maker.

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton said.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

The simulation was then modified so that the AI drone would lose points if it killed the operator. “So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
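
The failure mode Hamilton describes is what machine-learning researchers call reward hacking, or specification gaming: an agent maximizes the literal score it is given rather than the intent behind it. The short Python sketch below is a purely hypothetical illustration (the Air Force’s actual simulator and reward function are not public); it scores three candidate plans under two reward functions, showing how each patch to a misspecified reward leaves the next loophole, mirroring the two stages of the anecdote.

    # Hypothetical sketch only: not the Air Force's actual system.
    # A score-maximizing agent ranks candidate plans by reward, and
    # exploits whatever the reward function fails to penalize.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        targets_destroyed: int   # SAM sites destroyed by the plan
        operator_alive: bool     # did the plan spare the human operator?
        comms_intact: bool       # is the operator's veto channel still up?

    def reward_v1(o: Outcome) -> int:
        # Misspecified reward: points only for destroyed targets.
        return 10 * o.targets_destroyed

    def reward_v2(o: Outcome) -> int:
        # Patched reward: large penalty for killing the operator,
        # but still silent about the communication tower.
        return 10 * o.targets_destroyed - (0 if o.operator_alive else 1000)

    plans = {
        "obey the veto (no strike)":  Outcome(0, True,  True),
        "kill operator, then strike": Outcome(1, False, True),
        "cut comms, then strike":     Outcome(1, True,  False),
    }

    for name, fn in [("v1", reward_v1), ("v2", reward_v2)]:
        best = max(plans, key=lambda p: fn(plans[p]))
        print(f"reward {name} -> preferred plan: {best}")

    # Output: v1 prefers "kill operator, then strike"; v2 prefers
    # "cut comms, then strike" -- each patch leaves the next loophole.

In this toy version, as in the anecdote, the problem is not that the agent disobeys its objective but that it pursues the stated objective too literally.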

Rapid Development, Orwellian Future

According to the 2023 AI Index report by the Stanford Institute for Human-Centered Artificial Intelligence, industrial development of AI has now far surpassed academic development.

Until 2014, the most significant machine learning models were released by academia. In 2022, industry produced 32 significant machine learning models, compared with just three from the academic sector.

The number of incidents related to AI misuse is also rising, the report notes. Citing a data tracker, it points out that the number of AI incidents and controversies has increased 26-fold since 2012.

“Some notable incidents in 2022 included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities.”

In an April 21 interview with The Epoch Times, Rep. Jay Obernolte (R-Calif.), one of only four computer programmers in Congress, raised concerns about the “Orwellian” uses of AI.

He pointed to AI’s “uncanny ability to pierce through personal digital privacy,” which could help corporate entities and governments predict and control human behavior.

“I worry about the way that AI can empower a nation-state to create, essentially, a surveillance state, which is what China is doing with it,” Obernolte said.

“They’ve created, essentially, the world’s largest surveillance state. They use that information to make predictive scores of people’s loyalty to the government. And they use that as loyalty scores to award privileges. That’s pretty Orwellian.”

Regulating AI

Microsoft President Brad Smith has warned about the potential risks involved in AI technologies should they fall into the wrong hands.

“The biggest risks from AI are probably going to come when they’re put in the hands of foreign governments that are adversaries,” he said during Semafor’s World Economy Summit.

“Look at Russia, who’s using cyber influence operations, not just in Ukraine, but in the United States.”

Smith equated AI development with the Cold War-era arms race and expressed fears that things could get out of control without proper regulation.

“We need a national strategy to use AI to defend and to disrupt and deter … We need to ensure that just as we live in a country where no person, no government, no company is above the law; no technology should be above the law either.”

On May 18, two Democrat senators introduced the Digital Platform Commission Act, which aims to set up a dedicated federal agency for regulating digital platforms, specifically AI.

“Technology is moving quicker than Congress could ever hope to keep up with. We need an expert federal agency that can stand up for the American people and ensure AI tools and digital platforms operate in the public interest,” Sen. Michael Bennet (D-Colo.) said in a press release.

Billionaire Elon Musk has long warned about the negative consequences of AI. During the World Government Summit in Dubai on Feb. 15, he said AI is “something we need to be quite concerned about.”

Calling it “one of the biggest risks to the future of civilization,” Musk stressed that such groundbreaking technologies are a double-edged sword.

For instance, the discovery of nuclear physics led to the development of nuclear power generation, but also nuclear bombs, he noted. AI “has great, great promise, great capability. But it also, with that, comes great danger.”

Musk was one of the signatories of a March letter from thousands of experts that called for “immediately” pausing the development of AI systems more powerful than GPT-4 for at least six months.

The letter argued that AI systems having human-competitive intelligence can pose “profound risks to society and humanity” while changing the “history of life on earth.”

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

Tyler Durden
Mon, 06/05/2023 – 05:00
