China is recruiting its brightest high schoolers to build AI bots for the military
Some of China's smartest students have been recruited straight from high school to begin training as the world's youngest AI weapons scientists.
The 27 boys and four girls, all aged 18 and under, were selected for the four-year "experimental programme for intelligent weapons systems" at the Beijing Institute of Technology (BIT) from more than 5,000 candidates, the school said on its website.
The BIT is one of the country's top weapons research institutes, and the launch of the new programme is evidence of the weight it places on the development of AI technology for military use.
China is in competition with the United States and other nations in the race to develop deadly AI applications - from nuclear submarines with self-learning chips to microscopic robots that can crawl into human blood vessels.
"These kids are all exceptionally bright, but being bright is not enough," said a BIT professor who was involved in the screening process but asked not to be named because of the sensitivity of the subject.
"We are looking for other qualities such as creative thinking, willingness to fight, a persistence when facing challenges," he said. "A passion for developing new weapons is a must … and they must also be patriots."
Each student will be mentored by two senior weapons scientists, one from an academic background and the other from the defence industry, according to the programme's brochure.
After completing a short programme of course work in the first semester, the students will be asked to choose a speciality field, such as mechanical engineering, electronics or overall weapon design. They will then be assigned to a relevant defence laboratory where they will be able to develop their skills through hands-on experience.
One of the students is Qi Yishen from east China's Shandong province, who said he had had a keen interest in guns and weapons since he was a young boy and enjoyed reading books and magazines on the subject.
He had been offered an interview for the BIT programme and was also in the running for a place at Tsinghua University, one of China's top seats of learning, but both visits were scheduled for the same day.
"When I arrived in Beijing, I loitered at the railway station for a long time. But then I went to BIT … I couldn't resist the attraction," he was quoted as saying on the institute's website.
He said his decision was also influenced by his father, who wanted him to work in the defence industry.
BIT launched the programme at the headquarters of Norinco, one of China's biggest defence contractors, on October 28.
"We are walking a new path, doing things that nobody has done before," said student representative Cui Liyuan in an official statement.
After completing the four-year course, the students are expected to continue on to a PhD programme and become the next leaders of China's AI weapons programme, the institute said.
Eleonore Pauwels, a fellow in emerging cybertechnologies at the Centre for Policy Research, United Nations University in New York, said she was concerned about the launch of the BIT course.
"This is the first university programme in the world designed to aggressively and strategically encourage the next generation to think, design and deploy AI for military research and use."
While the US had similar programmes, such as those run by the Defence Advanced Research Projects Agency, they operated in relative secrecy and employed only the cream of established scientists, Pauwels said.
In contrast, the BIT programme seemed more focused on training the next generation of students in weaponising AI, she said. "This concept is both extremely powerful and troubling."
Students would conceive and design AI as an engine or an enabling force to weaponise self-learning, intelligent and automated systems, she said.
That knowledge could also be used alongside other new and existing technologies such as biotechnologies, quantum computing, nanotechnology and robotics, which would have "drastic implications for security and military dominance", Pauwels said.
"Think of robot swarms capable of delivering harmful toxins in food or biotech supply chains," she said.
With the undergraduate programme, "you could envision students starting to think about how to harness the convergence of AI and genetics systems to design and deploy powerful combinations of weapons that can target, with surgical precision, specific populations", she said.
"[It] may also lead to new forms of warfare, from highly sophisticated automated cyberattacks to what you could call an 'internet of Battle Things', where an array of robots and sensors play a role in defence, offence and in collecting intelligence."
When asked to comment on the BIT programme, China's foreign ministry said the country was actively engaged in the development and application of AI technology to serve its economic and social development and its scientific and technological progress.
At the same time, it said it was well aware of the potential problems posed by lethal autonomous weapons systems, and promoted the exploration of preventative measures by the international community.
Indeed, AI offers a new security arsenal for China, which has its sights firmly set on technological advancement as a way to achieve its goal to become a global leader.
"The fact that China's AI national strategy is built on a doctrine of civil-military fusion means that an AI prototype for military use could be co-opted and perverted for surveillance or harm in the civilian context," Pauwels said.
Stuart Russell, director of the Centre for Intelligent Systems at the University of California, Berkeley, described the BIT programme as "a very bad idea".
"Machines should never be allowed to decide to kill humans. Such weapons quickly become weapons of mass destruction. Moreover, they increase the likelihood of war," he said.
"I hope all these students will begin their course by watching the movie Slaughterbots."
He was referring to a seven-minute film screened at a United Nations arms control convention in Geneva last year, which depicts a disturbing future where swarms of low-cost drones can slaughter humans like cattle with the help of artificial intelligence technology like facial recognition.
The Chinese government submitted a position paper to the UN on the use of AI weapons in April.
"As products of emerging high technologies, development and use of lethal autonomous weapons systems would reduce the threshold of war, and the cost of warfare on the part of the user countries. This would make it easier and more frequent for wars to break out," Beijing said, appealing for more discussions.
"Until such discussions have been had, there should not be any preset premises or prejudged outcome which may impede the development of AI technology," it said.
Source: businessinsider.com
###
Countering Military Recruitment
WRI's new booklet, Countering Military Recruitment: Learning the lessons of counter-recruitment campaigns internationally, is out now. The booklet includes examples of campaigning against youth militarisation across different countries, with contributions from grassroots activists.
You can order a paperback version here.