- Chinese researchers reportedly use Meta’s Llama model for military intelligence purposes
- ChatBIT demonstrates the risks of open source artificial intelligence technology
- Meta says any military use of Llama is unauthorized and violates its acceptable use policy
Meta’s Llama AI model is open source and free to use, but the company’s licensing terms clearly state that it is for non-military applications only.
However, there have long been concerns about how open source technology can be policed to ensure it is not put to the wrong purposes, and recent reports appear to confirm them: Chinese researchers with links to the People’s Liberation Army (PLA) are said to have used Llama to build a military artificial intelligence model called ChatBIT.
The emergence of ChatBIT highlights the potential and challenges of open source technology in today’s world, where access to advanced artificial intelligence is increasingly viewed as a national security issue.
An artificial intelligence model for Chinese military intelligence
A recent study by six Chinese researchers from three institutions, two of which are associated with the Academy of Military Sciences (AMS) of the Chinese People’s Liberation Army, describes the development of ChatBIT, which was created using an early version of Meta’s Llama model.
By incorporating their own parameters into the Llama 2 13B large language model, the researchers aimed to develop a military-focused artificial intelligence tool. A follow-up academic paper outlined how ChatBIT could be adapted to handle military-specific conversations and assist operational decision-making, reportedly reaching approximately 90% of GPT-4’s capabilities. However, it is unclear how these performance figures were calculated, as no detailed testing procedures or field applications were disclosed.
Analysts familiar with Chinese artificial intelligence and military research reportedly reviewed the documents and backed up claims about ChatBIT’s development and capabilities. They say the performance figures reported for ChatBIT are consistent with experimental AI applications, but note that the lack of clear benchmarking methods or accessible datasets makes these claims difficult to confirm.
In addition, an investigation by Reuters provided another layer of support, citing sources and analysts who reviewed materials linking PLA-affiliated researchers to the development of ChatBIT. The investigation noted that documents and interviews showed the Chinese military had attempted to repurpose Meta’s open source model for intelligence and strategic tasks, making this the first public instance of the country’s military using the Llama language model for defense purposes.
The use of open source artificial intelligence for military purposes has reignited debate about the potential security risks associated with publicly available technology. Like other technology companies, Meta licenses Llama but expressly prohibits its use for military applications. However, as with many open source projects, enforcing such restrictions is virtually impossible. Once the model weights and code are available, they can be modified and repurposed, allowing foreign governments to adapt the technology to their specific needs. The case of ChatBIT is a stark example of this challenge, as Meta’s intentions are being bypassed by those with different priorities.
This has led to renewed calls in the United States to strengthen export controls and further restrict China’s access to open source and open standard technologies such as RISC-V. These measures are intended to prevent U.S. technology from supporting potentially hostile military advances. Lawmakers are also exploring ways to limit U.S. investment in China’s artificial intelligence, semiconductor, and quantum computing sectors to curb the flow of expertise and resources that could fuel the growth of China’s tech industry.
Despite concerns about ChatBIT, some experts question its effectiveness given the limited data used in its development. The model was reportedly trained on 100,000 military conversation records, a small corpus compared with the vast datasets used to train state-of-the-art language models in the West. Analysts believe this could limit ChatBIT’s ability to handle complex military tasks, especially when leading large language models are trained on trillions of tokens.
Meta also responded to these reports, stating that the Llama 2 13B LLM used to develop ChatBIT is now an outdated version, that the company is already working on Llama 4, and that such use was never authorized. “Any use of our model by the People’s Liberation Army is unauthorized and violates our acceptable use policy,” said Meta public policy director Molly Montgomery.
via Tom’s Hardware