
Words reading “Artificial Intelligence AI”, a miniature robot and a toy hand are pictured in this illustration taken December 14, 2023. — Reuters
Experts urge greater regulation to prevent loss of control
Experts around the world have urged stronger regulation of artificial intelligence (AI) to prevent it from escaping human control, as global leaders gather in Paris for a high-level summit on the technology.
France is co-hosting the summit with India on Monday and Tuesday, having chosen to emphasise AI “action” in 2025 rather than the safety concerns that dominated the previous summits at Bletchley Park in the UK in 2023 and in Seoul in 2024.
The French approach calls on governments, businesses and other key stakeholders to support global AI governance and commit to sustainable development, while stopping short of imposing binding rules.
“We do not want to spend our time talking only about the dangers,” said President Emmanuel Macron’s envoy on AI.
Max Tegmark, head of the US-based Future of Life Institute, which has regularly warned of AI’s risks, told AFP that France should not squander the opportunity to act.
“France has been a wonderful champion of international cooperation and has a real opportunity to lead the rest of the world,” the MIT physicist said.
“There is a big fork in the road here at the Paris summit, and it should be embraced.”
‘Will to survive’
Tegmark’s institute on Sunday backed the launch of a platform, named the Global Risk and AI Safety Preparedness (GRASP), that aims to map the major risks linked to AI and the solutions being developed worldwide.
“We have identified close to 300 tools and technologies in response to these risks,” said GRASP coordinator Cyrus Hodes.
The survey results will be passed to members of the OECD, the club of rich countries, and of the Global Partnership on Artificial Intelligence (GPAI), a grouping of around 30 nations including Japan, South Korea and the United States, which met in Paris on Sunday.
Last Thursday also saw the presentation of the first International AI Safety Report, compiled by 96 experts and backed by 30 countries, the United Nations, the EU and the OECD.
The risks described in the document range from the familiar, such as fake content online, to the far more alarming.
“Evidence of additional risks, such as biological attacks or cyberattacks, is steadily emerging,” said the report’s coordinator, renowned computer scientist Yoshua Bengio.
In the longer term, the 2018 Turing Award winner fears a “potential loss of control” by humans over AI systems, potentially driven by “their own will to survive”.
“Many people thought that mastering AI at the level of GPT-4 was science fiction as recently as six years ago, and then it happened,” said Tegmark, referring to OpenAI’s chatbot.
“The big problem now is that many people in power have still not understood that we are closer to building artificial general intelligence (AGI) than to figuring out how to control it.”
Surpassing human intelligence?
AGI refers to an artificial intelligence that would equal or surpass humans in all fields.
OpenAI chief Sam Altman has suggested it could arrive within a few years.
“If you just eyeball the rate at which these capabilities are increasing, it does make you think that we’ll get there by 2026 or 2027,” Dario Amodei, his counterpart at rival Anthropic, said in November.
“At worst, these American or Chinese companies lose control of this, and after that the Earth is run by machines,” Tegmark said.
Stuart Russell, a professor of computer science at the University of California, Berkeley, said his biggest fear was “weapons systems where the AI that is controlling them is deciding who to attack, when to attack, and so on.”
Russell, who is also the coordinator of the International Association for Safe and Ethical AI (IASEI), insisted that it falls to governments to establish safeguards against armed AI systems.
Tegmark said the solution is very simple: treating the AI industry the way all other industries are treated.
“Before somebody can build a new nuclear reactor outside Paris, they have to demonstrate to government-appointed experts that the reactor is safe, that you are not going to lose control over it … it should be the same for AI,” Tegmark said.