Company Profile
Independent Variable Robotics has recently completed a 1-billion-yuan A++ round of financing. This round was co-led by top-tier investment institutions and multi-regional platforms, including ByteDance, Sequoia China, Beijing Information Industry Development Fund, Shenzhen Venture Capital (SCVC), Nanshan Strategic Emerging Investment, and Wuxi Venture Capital. Notably, this marks the first investment made by SCVC's AI Fund since its establishment. Of particular significance, prior to ByteDance's investment, Independent Variable had also received backing from Meituan and Alibaba, making it the only embodied intelligence enterprise in China to be invested in by all three major internet giants simultaneously. The coordinated investment from cross-domain capital not only underscores the market's collective consensus on the critical importance of embodied foundational models but also confirms deep recognition of Independent Variable's technological leadership and growth potential.
Foundational Model for the Physical World: Enabling Robots to Truly "Get Work Done"

In recent years, embodied intelligence has continued to capture market attention, with significant progress made in the "body" of robots, i.e. their locomotion and control capabilities. The industry's competitive focus has now shifted from "limbs" to the "brain": the key breakthrough lies in constructing an intelligent "brain" for robots that can understand the physical world, manipulate objects, and flexibly adapt to complex dynamic scenarios, enabling them to effectively perform diverse real-world physical tasks. Embodied intelligence foundational models are foundational models for the physical world, independent of and parallel to virtual-world foundational models such as large language models (LLMs) and multimodal models. Their core challenge lies in breaking through bottlenecks in generalisation and universality: the complexity of the physical world requires robots to possess real-time capabilities to handle unstructured, dynamic, and stochastic tasks. Independent Variable's embodied foundational model takes perceptual information from all robots (e.g. vision, touch, speech) as input and directly outputs the robot's actions, visual outputs, and language responses.
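The input/output contract described above, i.e. every perceptual modality in, and actions plus visual and language outputs out, can be sketched as follows. This is a minimal illustration of the idea only; all names, types, and shapes are assumptions, not Independent Variable's actual API.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    vision: list   # camera frames, e.g. pixel arrays (assumed shape)
    touch: list    # tactile sensor readings
    speech: str    # transcribed spoken instruction

@dataclass
class ModelOutput:
    actions: list           # joint or end-effector commands
    predicted_frames: list  # visual outputs (e.g. imagined future frames)
    language: str           # natural-language response

def embodied_step(obs: Observation) -> ModelOutput:
    """Placeholder for one forward pass of a multimodal embodied model:
    every input modality conditions every output modality."""
    # A real model would fuse modalities in a shared network; here we
    # only echo structure to make the contract concrete.
    return ModelOutput(
        actions=[0.0] * 7,                 # e.g. a 7-DoF arm command
        predicted_frames=obs.vision[-1:],  # trivially repeat the last frame
        language=f"Acknowledged: {obs.speech}",
    )
```

The point of the sketch is the signature: unlike a text-only LLM, a single call consumes heterogeneous sensor streams and emits heterogeneous outputs, with actions as a first-class output rather than an afterthought.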
Wang Qian, Founder and CEO of Independent Variable Robotics, stated: "The next stage of competition in embodied intelligence is essentially competition in foundational models built through data closed loops and model evolution capabilities." Guided by this judgement, the company is accelerating investments across data, models, computing power, and other dimensions to rapidly advance the development of embodied intelligence.

1. Deep Fusion of VLA and World Models: Autonomous Evolution via Real-Robot Reinforcement Learning

Independent Variable's self-developed WALL-A model pioneers a system paradigm that deeply fuses Vision-Language-Action (VLA) models with world models. As a native multimodal input-output architecture, WALL-A was the first to achieve an embodied multimodal chain of thought. WALL-A leverages world model mechanisms for spatio-temporal state prediction, collaborates with visual causal reasoning to understand environmental feedback, and internalises physical common sense from data through learnable memory mechanisms. These mechanisms have significantly improved the robot's zero-shot generalisation ability for mobile manipulation tasks in unstructured environments.
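The VLA plus world-model loop described above can be sketched in toy form: the policy proposes an action, the world model predicts the resulting state before acting, and the mismatch between prediction and observed feedback is the signal that drives causal reasoning and correction. Every function here is an illustrative assumption on a one-dimensional state, not WALL-A's implementation.

```python
def policy(state, instruction):
    """VLA head: map (state, language instruction) -> action.
    Toy version: step toward the instructed target position."""
    return 1.0 if instruction["target"] > state else -1.0

def world_model(state, action):
    """Spatio-temporal prediction: expected next state before acting."""
    return state + action

def step_env(state, action, disturbance=0.0):
    """The real physical world: the action lands with an unmodelled
    disturbance (e.g. wind) that the world model did not anticipate."""
    return state + action + disturbance

def act_with_prediction(state, instruction, disturbance=0.0):
    """One control step: act, then compare prediction with reality.
    A large 'surprise' would trigger correction or replanning."""
    action = policy(state, instruction)
    predicted = world_model(state, action)
    observed = step_env(state, action, disturbance)
    surprise = abs(observed - predicted)  # feedback signal for reasoning
    return observed, surprise
```

Under a 0.5-unit disturbance, the surprise comes out as exactly 0.5: the agent can detect that reality diverged from its internal prediction, which is the hook for the self-correction behaviour the article describes.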
Furthermore, relying on large-scale real-robot reinforcement learning, the foundational model continuously gains high-quality learning experiences through interaction with the real physical world, autonomously solving long-tail problems and achieving ongoing evolution of robot capabilities. Independent Variable has built a technical closed loop of "physical world foundational model - real-robot autonomous evolution" through a fully end-to-end technical route.

2. High-Quality Real-Robot Data: Building the Engine for Model Evolution

Data is the core fuel for foundational model evolution. Since its inception, Independent Variable has heavily invested in the closed-loop iteration of hardware, data, and models. As one of the earliest companies in China to scale up real-robot data collection, Independent Variable has developed a variety of data collection devices, including master-slave teleoperation systems, exoskeletons, and robot-body-free setups, achieving data validation and model breakthroughs across diverse data collection hardware. The company has also built a model-driven data pipeline, continuously generating large-scale, high-quality data through data generation, filtering, augmentation, and labelling. Independent Variable insists on using foundational models to provide feedback to all stages of data processing and hardware design, iterating toward higher-quality data and more efficient data collection devices to further enhance foundational model performance.
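The model-driven pipeline stages named above (generation, filtering, augmentation, labelling, with the model scoring data quality in the loop) can be sketched as follows. The scoring rule, threshold, and episode schema are illustrative assumptions, not the company's actual pipeline.

```python
def model_quality_score(episode):
    """Stand-in for the foundational model grading an episode:
    here, success weighted by whether the episode is long enough."""
    return episode["success"] * min(1.0, len(episode["frames"]) / 10)

def filter_episodes(episodes, threshold=0.5):
    """Model-feedback filtering: keep only episodes the model rates well."""
    return [e for e in episodes if model_quality_score(e) >= threshold]

def augment(episode):
    """Toy augmentation: add a time-reversed copy, doubling the data."""
    return [episode, {**episode, "frames": episode["frames"][::-1]}]

def label(episode):
    """Attach a training label derived from the episode outcome."""
    return {**episode, "label": "success" if episode["success"] else "failure"}

def pipeline(raw_episodes):
    """Filter -> augment -> label, the model-driven loop in miniature."""
    kept = filter_episodes(raw_episodes)
    augmented = [a for e in kept for a in augment(e)]
    return [label(e) for e in augmented]
```

The design point is that the model's own score gates what enters the training set, so better models yield better-curated data, which in turn trains better models: the closed loop the article describes.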
3. Model Iteration Drives Capability Leaps: Autonomous Task Completion in the Real World

Continuous model iteration has endowed Independent Variable's robots with exceptional adaptability in real-world scenarios. As the world's first successful example of mobile manipulation spanning both outdoor and indoor environments based on a physical world foundational model, the robot demonstrated its capabilities in tasks such as food delivery. Even when faced with strong wind interference or occluded vision, the robot relied on the foundational model's generalisation ability and the world model's causal reasoning to:
- "Imagine" the full appearance of occluded objects, just as humans do
- Autonomously correct errors via reinforcement learning strategies when encountering bottlenecks
- Complete task closed loops without any human intervention