
Introduction
As the global AI arms race accelerates, China is emerging not only as a technological superpower but also as a nation attempting to anchor its military innovation in ethical and legal frameworks. An article published on July 10 in the Liberation Army Daily offers a rare glimpse into China’s evolving doctrine on humanoid combat robots—arguably among the most controversial military technologies of the 21st century.
China at the Forefront of AI Ethics
In contrast to narratives portraying AI as a disruptive or unregulated force, China has sought to position itself as a leading voice in AI governance, emphasizing principles such as human welfare, fairness, privacy, and security. According to Chinese military commentators, this same philosophy is now being applied to humanoid robots designed for combat.
“Military humanoid robots are by far the most human-like weapons, and the potential for indiscriminate manslaughter when used on a large scale will inevitably lead to legal charges and moral condemnation.” — Liberation Army Daily, July 10
Humanoid Robots as a Strategic Military Asset
The PLA article describes military humanoid robots as the “ultimate form of intelligent machines”—fusing artificial intelligence, mechanization, and information systems. Seen as successors to drones and unmanned vehicles, these robots are framed as a new growth pole in the military application of AI. Their potential lies in high-risk missions, frontline replacement of human soldiers, and precision engagement—ideally contributing to the goal of “zero casualties” for China’s forces.
The ICRC and the Global Ethical Debate on Autonomous Weapons
China’s ethical reflections are not happening in a vacuum. The International Committee of the Red Cross (ICRC) has taken a clear and firm stance on autonomous weapons systems, including humanoid platforms. In its official position paper, the ICRC advocates for:
- A ban on unpredictable autonomous weapons that could act in unintended ways.
- A prohibition on using autonomous weapons against human targets, especially in populated areas.
- A requirement of meaningful human control over all decisions involving the use of force.
These principles are rooted in international humanitarian law and resonate with China’s stated emphasis on obedience, respect, and protection in humanoid robot programming.
“There must always be human control over decisions to use force. Autonomous weapons that operate without such control are unacceptable, illegal, and should be prohibited.” — ICRC, 2021 Position Paper
📖 ICRC Official Position on Autonomous Weapons
Legal and Ethical Framing in China
Drawing from Asimov’s Three Laws of Robotics and principles of the law of armed conflict, China’s military thinkers insist that humanoid combat robots must:
- Obey human control without independent lethal decision-making.
- Respect human life, clearly distinguishing combatants from civilians.
- Protect humanity, including halting excessive force and preventing atrocities.
These align—rhetorically at least—with the ICRC’s humanitarian framework, though their real-world application remains to be tested.
Conclusion: A Technological Morality or Strategic Optics?
Will humanoid robots ever become the world’s most ethical combatants? Unlikely. But China’s proactive discourse on their legal and ethical deployment places it ahead of many states that remain vague or silent on the matter. Whether this represents genuine moral leadership or savvy diplomatic optics, it underscores a new phase of AI geopolitics—one in which ethics and deterrence increasingly converge.
Photo credit: © Kyodo News / CCTV via Reuters
