Embodied intelligence is not only an emerging technological direction but also one of the disruptive growth curves that will define the global technology industry over the next decade. As artificial intelligence continues to break through the boundaries of perception, cognition, and action control, AI is moving from the virtual world into the physical world, beginning to touch every corner of industrial, commercial, household, and social life. Embodied intelligence not only brings a leap in technical capability; it also opens a market space worth hundreds of billions of dollars.
On December 26, 2025, Frost & Sullivan released the 'Global and Chinese Embodied Intelligent Robots and Composite Robots Industry White Paper 2025' (hereinafter the 'Report'). The Report presents a comprehensive, in-depth analysis of the embodied intelligent and composite robots industry, examining the drivers of industry development across policies and regulations, industrial development, market demand, product lines, and technological innovation, in order to reveal the industry's huge potential and track its future development trends.
The Report starts from the evolution of embodied intelligence technology and its industrial implementation, systematically laying out the development background, core technology systems, and application paths of the embodied intelligent and composite robots industry. Drawing on industrial, service, and emerging application scenarios, it analyzes the global and Chinese market scale, competitive landscape, and industrial chain structure, and offers forward-looking judgments on the industry's future development trends and commercialization paths.
PART.01
Global and Chinese Embodied Intelligent Robot Industry Analysis
Definition of Embodied Intelligent Robots
Embodied intelligent robots refer to robots that integrate artificial intelligence into physical entities, enabling them to perceive, learn, and dynamically interact with their physical environment like humans, thereby generating intelligent behavior and adaptability.
Traditional AI mostly processes data and makes decisions in the digital world, as with computer vision software and conversational AI; embodied intelligence, by contrast, emphasizes interaction between AI and the real physical world: the robot physically perceives its environment (through vision, touch, hearing, and force sensing), understands the problem, makes decisions, and takes action.

Data source: Analysis by Frost & Sullivan
The core difference between embodied intelligent robots and traditional robots
Traditional robots (mainly industrial robots) typically focus on precisely and repeatedly executing preset tasks in closed, structured environments; embodied intelligent robots, by contrast, are intelligent entities that combine AI-based decision-making systems with robot bodies, capable of autonomously perceiving, understanding, learning, deciding, and acting in open, dynamic environments.
The emergence of embodied intelligence indicates that robots will evolve from production tools into more general-purpose, adaptable intelligent workers and assistants, penetrating a wider range of industrial production, professional services, and daily-life scenarios.
Core technology path of embodied intelligent robots
Embodied intelligent robots are achieving a leap from single-function execution to environment-adaptive intelligent agents through three major technical paths: multimodal perception and interaction, autonomous decision-making and learning algorithms based on large models, and deep software-hardware system integration.
● Multimodal perception and interaction
Embodied perception and interaction technologies are evolving from single-modal perception to deep cross-modal integration and collaborative understanding. This progress enables intelligent agents not only to parse geometric information with high precision but also to dynamically estimate objects' physical attributes (such as material and hardness) and interaction states (such as sliding and applied force). At the interaction level, the focus has shifted from simple command transmission to intention-driven natural interaction. By combining vision-language models with physical interaction strategies, intelligent agents are gradually learning to understand and respond to ambiguous, continuous human actions and natural-language commands, laying a solid foundation for human-machine coexistence and collaboration.
● Autonomous decision-making and learning algorithms based on large models
On the path of autonomous decision-making and learning algorithms, the main thread of technological development has shifted from relying on manually preset rules to letting machines acquire the ability to complete complex tasks through data and interaction. Early methods such as reinforcement learning optimized decisions through repeated trial and error on specific tasks, proving the feasibility of autonomous machine learning. However, these methods usually require large amounts of training, and the learned skills are difficult to transfer directly to new environments.
Currently, the key driving force for development comes from foundational models such as large language models. These models possess rich background knowledge and powerful decision-making capabilities, enabling them to directly understand abstract instructions given by humans in natural language and automatically decompose them into a series of specific, actionable steps. In the future, by combining the high-level understanding and planning capabilities of large models with the precise optimization and adaptive learning abilities of reinforcement learning in low-level physical control, a general decision-making system capable of quickly understanding intentions, adapting to new scenarios, and continuously improving from experience can be constructed.
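The hierarchical division of labor described above can be sketched in a few lines. Here a stub function stands in for the large model's high-level planner and another stub stands in for a learned low-level control policy; all names and the example task are illustrative, not from the Report.

```python
# Sketch of a hierarchical decision loop: a high-level planner (a stand-in
# for a large language model) decomposes an abstract instruction into steps,
# and a low-level controller (a stand-in for a learned RL policy) executes
# each step. Everything here is a hypothetical illustration.

def high_level_plan(instruction: str) -> list[str]:
    """Stub for an LLM planner: map an abstract instruction to subtasks."""
    plans = {
        "tidy the table": ["locate objects", "grasp object", "move to bin", "release"],
    }
    return plans.get(instruction, [])

def low_level_execute(subtask: str) -> bool:
    """Stub for a low-level control policy: pretend each primitive succeeds."""
    print(f"executing: {subtask}")
    return True

def run(instruction: str) -> bool:
    steps = high_level_plan(instruction)
    # The planner handles abstraction; the controller handles physics.
    return bool(steps) and all(low_level_execute(s) for s in steps)

if __name__ == "__main__":
    print(run("tidy the table"))
```

In a real system the planner would generalize to unseen instructions and the controller would adapt its motions online; the point of the sketch is only the separation between high-level understanding and low-level control.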
● System integration with deep software-hardware collaboration
Software and hardware system integration refers to the organic integration of robot hardware and software components to achieve coordinated operation of the robot system and execution of specific tasks. This process involves multiple aspects, including the robot's sensors, actuators, controller, programming, and user interface, among other components.
Hardware:
1) Select and configure the mechanical structure according to task requirements and working-environment conditions;
2) Select and install sensors such as vision, force, and lidar, and connect them to the robot hardware system;
3) Integrate drive actuators such as motors, hydraulic drives, or pneumatic drives to control mechanical motion and couple them effectively to the mechanical body;
4) Integrate safety systems such as emergency-stop switches and collision-detection modules into the hardware to ensure safe and reliable operation.
Software:
1) The control system coordinates and manages motion control, sensor-information fusion, and task decision-making;
2) Path-planning and collision-detection algorithms generate safe and efficient motion trajectories;
3) The user programming interface provides an intuitive operating and debugging environment;
4) The communication system enables data interaction with external devices or control centers;
5) The software also processes multiple classes of sensor data in real time, such as visual recognition and torque measurement.
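A minimal control cycle can illustrate how these hardware and software components meet at runtime: sensors are read, a planning step checks for collisions, and a safety gate can override everything. The sensor fields and the one-dimensional "planner" below are made-up placeholders, not an actual robot API.

```python
# Illustrative integration loop tying together the components listed above:
# sensor reading, path planning with a collision check, and an
# emergency-stop safety gate. All values and names are hypothetical.

def read_sensors() -> dict:
    # In a real system: camera frames, force/torque readings, lidar scans.
    return {"obstacle_distance_m": 1.5, "estop_pressed": False}

def plan_step(state: dict, goal: float) -> float:
    # Trivial 1-D stand-in for path planning with collision detection.
    if state["obstacle_distance_m"] < 0.2:
        return 0.0            # obstacle too close: stop
    return min(0.1, goal)     # bounded step toward the goal

def control_cycle(goal: float) -> float:
    state = read_sensors()
    if state["estop_pressed"]:    # safety system overrides everything
        return 0.0
    return plan_step(state, goal)
```

A real controller would run this cycle at a fixed rate and send the commanded step to the drive actuators; the sketch only shows the order in which the integrated components are consulted.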
Main driving factors and development trends of embodied intelligent robots
The main driving factors and development trends of embodied intelligent robots include the application and development of large model technology, as well as the rapid expansion and penetration of application scenarios from industrial manufacturing to multiple fields.
● Applications and Development of Large Model Technologies
The application of large model technology has driven breakthroughs in cross-modal perception and reasoning capabilities among embodied intelligent robots, significantly enhancing their understanding and decision-making abilities in complex environments.
Embodied-intelligence foundation models, pre-trained on large-scale multimodal data, possess strong generalization capabilities. This enables embodied intelligent robots to process input in modalities such as text and images, understand complex instructions, and autonomously reason about and plan task-execution strategies.
The ability to generalize across complex tasks is a key technical prerequisite for commercializing embodied intelligent robots. As large model technology develops, embodied intelligent robots will combine structured spatial-intelligence techniques to enhance data synthesis in simulated worlds, continuously optimizing models and extending their generalization in mixed 3D representation environments. This will free them from dependence on extensive on-site data collection and markedly ease task generalization and environmental adaptation. In turn, deployment and maintenance costs will fall, commercialization in unstructured scenarios will accelerate, and the global and Chinese embodied intelligent robot markets will continue to expand.
● Application scenarios are rapidly expanding from industrial manufacturing into multiple fields
In the early stages of market development, embodied intelligent robot suppliers focused on achieving large-scale production and iterative product updates in structured industrial scenarios with clear precision and efficiency requirements, such as precision assembly on automobile production lines and flexible logistics tasks.
With the continuous progress of large model technology in handling unstructured environments and generalizing tasks, the application scenarios of embodied intelligent robots are rapidly expanding from traditional structured industrial environments to broader unstructured application domains.
In commercial services, embodied intelligent robots are being widely deployed in retail, catering, and building management, performing tasks such as goods sorting, meal delivery, and cleaning and maintenance. In healthcare, prospective applications include high-value-added tasks such as rehabilitation care, remote diagnostic assistance, and surgical assistance.
The continuous expansion of application scenarios has provided a huge new market space for embodied intelligent robots, driving the sustained growth of the overall market scale.
Global market size of embodied intelligent robots
With the continuous increase in investment by major global economies in artificial intelligence and robotics technology, the deep integration of generative AI models with robot bodies has led to stronger generalization capabilities. The global market size for embodied intelligent robots is expected to grow significantly from $11.71 billion in 2024 to $101 billion in 2030, with a compound annual growth rate of 43.2% between 2024 and 2030. Looking ahead, with the gradual establishment of technical standards and safety regulations, as well as the continuous reduction in manufacturing costs, embodied intelligent robots are expected to achieve large-scale commercial applications globally, fundamentally changing the composition of the workforce and working methods.

Data source: Analysis by Frost & Sullivan
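The compound annual growth rate quoted above can be checked directly from the two endpoint figures (growth from $11.71 billion in 2024 to $101 billion in 2030, i.e. over 6 years):

```python
# CAGR check for the embodied intelligent robot market figures cited above.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate from endpoint values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(11.71, 101.0, 6)
print(f"{growth:.1%}")  # prints 43.2%, matching the Report's figure
```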
PART.02
Global and Chinese Composite Robot Industry Analysis
Definition of composite robot
A composite robot consists of a lower-level mobile platform and an upper-level collaborative robotic arm, forming a body-integrated intelligent robot with both operational and mobility capabilities. By integrating advanced perception systems such as visual and tactile multimodal sensors, composite robots can perceive environmental changes in real time, optimize their behavior using efficient learning algorithms, achieve dynamic interaction with the physical environment, demonstrate high intelligence and adaptability, and thus efficiently complete complex and variable tasks.
By locomotion mode, composite robots can be divided into wheeled, wheel-tracked, and wheel-legged composite robots. By number of arms, they can be categorized into single-arm and dual-arm composite robots.
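The "mobile platform plus collaborative arm" composition described above can be sketched as two subsystems sequenced by one body-integrated controller. The classes and the pick-and-place task below are hypothetical illustrations, not an actual product interface.

```python
# Minimal sketch of a composite robot: a mobile base supplies mobility,
# a collaborative arm supplies manipulation, and the composite robot
# sequences the two to complete a fetch task. All names are illustrative.

class MobileBase:
    def __init__(self):
        self.position = (0.0, 0.0)
    def move_to(self, xy):
        self.position = xy          # teleport stand-in for navigation

class CollaborativeArm:
    def __init__(self):
        self.holding = None
    def pick(self, item):
        self.holding = item
    def place(self):
        item, self.holding = self.holding, None
        return item

class CompositeRobot:
    """Mobility and manipulation combined in one body-integrated system."""
    def __init__(self):
        self.base = MobileBase()
        self.arm = CollaborativeArm()
    def fetch(self, item, pick_at, drop_at):
        self.base.move_to(pick_at)   # mobility: reach the item
        self.arm.pick(item)          # manipulation: grasp it
        self.base.move_to(drop_at)   # mobility: carry it
        return self.arm.place()      # manipulation: release it
```

The point of the composition is that neither subsystem alone can complete the task; the composite controller interleaves base motion and arm motion, which is exactly what distinguishes composite robots from single-function ones.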
Comparison between Composite Robots and Single-function Robots
Composite robots, by integrating mobility and manipulation capabilities, significantly surpass single-function robots in technical complexity, environmental adaptability, and task versatility, achieving a technological leap from single-command execution to autonomous collaborative operation.

Future development trends of composite robot technology
The future development trends and frontier explorations of composite robot technology mainly include multi-robot collaboration and swarm intelligence, cloud-edge integration, generative AI empowering task planning and interaction, as well as the leap in cognitive intelligence.
● Multi-robot collaboration and swarm intelligence
Multi-robot collaboration and swarm intelligence are the core directions for advancing composite robot technology. This technology enables composite robot systems to evolve from independent working units to intelligent groups that achieve collaborative operations through shared perception and distributed decision-making. Composite robots achieve dynamic task allocation and collaborative planning through local perception networks, significantly enhancing adaptability in complex scenarios. With the application of biomimetic algorithms and embodied intelligence models, composite robot swarms have made a leap from behavioral collaboration to cognitive collaboration, capable of semantic parsing and autonomous task decomposition. They demonstrate stronger system robustness and scalability in scenarios such as intelligent manufacturing and logistics.
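Dynamic task allocation, one element of the swarm behavior described above, can be illustrated with a toy market/auction-style scheme in which each task goes to the nearest currently free robot. The one-dimensional positions and robot names are made up for the example.

```python
# Toy illustration of dynamic task allocation in a robot swarm: each task
# is greedily assigned to the closest free robot, a simple auction-style
# scheme. Positions are 1-D and all values are hypothetical.

def allocate(robots: dict[str, float], tasks: list[float]) -> dict[str, float]:
    """robots: name -> position; tasks: task locations. Returns name -> task."""
    assignment = {}
    free = dict(robots)
    for task in tasks:
        # Auction step: the closest free robot "wins" the task.
        winner = min(free, key=lambda name: abs(free[name] - task))
        assignment[winner] = task
        del free[winner]
    return assignment

print(allocate({"r1": 0.0, "r2": 10.0}, [9.0, 1.0]))
# prints {'r2': 9.0, 'r1': 1.0}
```

Real swarm systems replace this centralized greedy loop with distributed bidding over a perception-sharing network, but the objective (matching tasks to robots by cost) is the same.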
● Cloud-edge computing integration
The architecture model of cloud-edge computing collaboration is becoming a key technical path to enhance the intelligence level of composite robots. This model effectively solves the multiple requirements of composite robots for real-time response, complex cognition, and continuous learning by constructing a hierarchical and distributed computing network. It not only ensures the real-time performance and reliability of composite robots in dynamic environments but also provides a feasible foundation for their knowledge sharing, continuous evolution, and large-scale cluster collaboration. It is the core support for driving the industry from 'single-machine automation' towards 'system autonomy'.
● Generative AI empowers task planning and interaction
Generative AI can decompose complex tasks into executable sub-task sequences. For example, facing a multi-step assembly task, a composite robot understands the instruction through natural language processing and uses generative models to produce detailed execution steps and action trajectories.
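The decomposition step can be sketched as a function that maps an instruction to an ordered list of sub-tasks, each paired with a motion target. The stub below stands in for a generative model; the sub-task names and coordinates are invented for illustration.

```python
# Hedged sketch of generative task decomposition: a stand-in for a
# generative model turns a multi-step assembly instruction into an ordered
# sub-task sequence, each with a made-up xyz motion target.

def decompose(instruction: str) -> list[tuple[str, tuple[float, float, float]]]:
    """Stub for a generative planner: instruction -> [(sub-task, xyz target)]."""
    if "assemble" in instruction:
        return [
            ("pick screw",  (0.10, 0.00, 0.05)),
            ("align screw", (0.25, 0.10, 0.08)),
            ("drive screw", (0.25, 0.10, 0.02)),
        ]
    return []   # unrecognized instruction: no plan

subtasks = decompose("assemble bracket")
print([name for name, _ in subtasks])
```

A real generative planner would produce such sequences for open-ended instructions rather than from a hard-coded template; the sketch only fixes the input/output shape of the decomposition step.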
Generative AI enables composite robots to better understand human instructions and intentions
Through speech recognition and natural language processing, composite robots can hold natural conversations with humans, receive task instructions, and give real-time feedback on execution. In collaborative scenarios, robots can autonomously adjust their working rhythm and methods to human actions and needs, achieving efficient human-robot collaboration.
● Cognitive Intelligence Leap
The rapid development of cutting-edge technologies such as AI and multimodal large models is driving the intelligent upgrade of composite robots. These technologies comprehensively empower composite robots, significantly enhancing their environmental understanding, autonomous decision-making, and collaborative interaction, and greatly improving their adaptability in complex environments.
The integrated innovation of AI technology and robotics will endow composite robots with more precise, intelligent, and flexible movement and execution capabilities, achieving a leap in cognitive intelligence. This will directly drive the substantial expansion of composite robot application scenarios, extending from material handling and precise assembly on industrial manufacturing production lines to medical and agricultural applications.
Global potential market size for composite robots
The global potential market for composite robots will continue to grow over the medium to long term, expanding from around $65 billion in 2030 to nearly $200 billion by 2035, a compound annual growth rate of 25.2% between 2030 and 2035.

Global market competition landscape for composite robots
The global composite robot market exhibits a pyramid-shaped competitive landscape. Market leaders stay ahead through their technological strength and mature business models, and by increasing R&D investment and expanding into new scenarios they further consolidate their brand influence and leading position in the global market.


