Overview

Asimov is a complete software stack for humanoid robots, designed for a future where robots operate at scale. It provides everything from low-level motor control to cloud simulation, built on the principle that software infrastructure matters as much as hardware.

The project includes Menlo-ROS (our real-time middleware), Neocortex (sensor processing), simulation tools (Isaac Lab and MuJoCo integration), and cloud services for data collection and continuous improvement. All components work together to create a platform where robots can be teleoperated, trained in simulation, and deployed to physical hardware with minimal friction.

We’re validating the entire stack on a Unitree G1 humanoid robot. What works in simulation must work on hardware. What works on one robot should work on any robot through proper abstraction.

Architecture

The Asimov stack is organized in layers, each with clear responsibilities:

Robot Hardware Layer

  • Physical robots (Unitree G1, future platforms) or simulation environments
  • Hardware-specific SDKs and firmware
  • Sensors, actuators, and communication buses

Menlo-ROS Middleware

  • Robot Abstraction Layer (RAL): Hardware-agnostic interface with automatic robot detection
  • Robot Management Layer (RML): Cloud gateway for streaming, telemetry, and commands
  • Robot Communication Layer (RCL): Zero-copy shared memory IPC
  • Guaranteed 1 kHz control loops on PREEMPT_RT Linux

Perception Layer

  • Neocortex: Dedicated sensor processing unit
  • 4+ camera feeds with synchronized capture
  • Audio processing with microphone arrays
  • First-person and 360° video generation
  • Real-time streaming via WebRTC

Cloud Infrastructure

  • Data Engine: Collection, processing, and storage of robot operation data
  • Cloud Simulation: Automated policy refinement from real-world data
  • Model training pipelines for vision-language-action models
  • Over-the-air deployment and monitoring

Application Layer

  • Teleoperation interfaces with sub-100ms video latency
  • Autonomy systems for navigation and manipulation
  • Simulation environments for policy training and validation

Design Principles

Hardware Abstraction

Menlo-ROS treats all robots identically at the API level. Applications written for simulation work unchanged on physical hardware. Switching between robot platforms requires no application code changes. RAL detects connected hardware automatically and loads the appropriate driver.
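
To make this concrete, here is a minimal C++ sketch of what a hardware-agnostic interface of this kind can look like. The `Robot` interface, `SimRobot` back end, and `detect_robot` factory are illustrative names for this sketch, not the actual RAL API.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Hypothetical hardware-agnostic robot interface in the spirit of RAL.
// Applications program against this interface; the concrete driver is
// chosen at runtime based on the detected hardware.
class Robot {
public:
    virtual ~Robot() = default;
    virtual std::size_t num_joints() const = 0;
    virtual void read_joint_positions(std::vector<double>& out) = 0;
    virtual void write_joint_torques(const std::vector<double>& torques) = 0;
};

// A simulation back end implements the same interface as a hardware
// driver, so application code runs unchanged against either one.
class SimRobot final : public Robot {
public:
    std::size_t num_joints() const override { return 29; }  // e.g. G1: 29 DOF
    void read_joint_positions(std::vector<double>& out) override {
        out.assign(num_joints(), 0.0);  // stub: query the simulator here
    }
    void write_joint_torques(const std::vector<double>&) override {
        // stub: forward torques to the simulator
    }
};

// Hypothetical factory: probe connected hardware and load the matching
// driver, falling back to simulation when nothing is attached.
std::unique_ptr<Robot> detect_robot() {
    // Real detection would enumerate buses and SDKs; this stub simulates.
    return std::make_unique<SimRobot>();
}
```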

Real-Time Guarantees

Control loops run at deterministic 1 kHz on PREEMPT_RT Linux. Timing jitter is measured in microseconds, not milliseconds. This enables precise motor control and reliable sensor fusion required for dynamic locomotion and manipulation.
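
The standard Linux recipe for a loop like this is a SCHED_FIFO thread that locks its memory and sleeps to absolute deadlines with clock_nanosleep, so timing error never accumulates. The sketch below illustrates that general pattern, not Menlo-ROS source; step() is a placeholder for the real control step.

```cpp
#include <cstdio>
#include <ctime>
#include <pthread.h>
#include <sched.h>
#include <sys/mman.h>

// Placeholder for the real control step: read sensors, compute, actuate.
static void step() {}

int main() {
    // Lock all pages into RAM so page faults cannot stall the loop.
    mlockall(MCL_CURRENT | MCL_FUTURE);

    // Run under the real-time FIFO scheduler at high priority
    // (requires privileges; on failure we keep running best-effort).
    sched_param sp{};
    sp.sched_priority = 80;
    if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) != 0)
        std::fprintf(stderr, "warning: SCHED_FIFO unavailable\n");

    // Sleep to absolute deadlines: each wakeup is scheduled relative to
    // the previous deadline, not to when we actually woke, so the loop
    // holds 1 kHz without drift.
    constexpr long kPeriodNs = 1'000'000;  // 1 ms period -> 1 kHz
    timespec next{};
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        step();
        next.tv_nsec += kPeriodNs;
        if (next.tv_nsec >= 1'000'000'000L) {
            next.tv_nsec -= 1'000'000'000L;
            next.tv_sec += 1;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, nullptr);
    }
}
```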

Efficient Resource Usage

Built entirely in C/C++ with zero-copy IPC, static memory allocation, and minimal CPU overhead. When scaled to thousands of robots, wasted cycles translate to significant energy costs and degraded performance. The stack is designed to run on embedded compute with strict power budgets.
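
Zero-copy IPC of this kind usually means placing fixed-size messages directly in a shared-memory segment, so producer and consumer touch the same bytes with no serialization or copying in between. Below is a minimal POSIX sketch of the technique; the segment name and message layout are assumptions for illustration, not the RCL wire format.

```cpp
#include <atomic>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Fixed-size message that lives directly in shared memory. The writer
// bumps seq before and after each update (odd while writing), so a
// reader can detect and retry torn reads, seqlock-style.
struct JointState {
    std::atomic<std::uint64_t> seq;
    double position[29];  // e.g. G1: 29 DOF
    double velocity[29];
};

// Map (creating if needed) a named shared-memory segment. The name
// "/asimov_joint_state" is illustrative only.
JointState* map_segment() {
    int fd = shm_open("/asimov_joint_state", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(JointState)) != 0) { close(fd); return nullptr; }
    void* p = mmap(nullptr, sizeof(JointState),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  // the mapping keeps the segment alive
    return p == MAP_FAILED ? nullptr : static_cast<JointState*>(p);
}
```

Because the message size is fixed at compile time, no heap allocation happens on the data path.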

Simulation-First Development

Policies train in Isaac Lab with thousands of parallel environments on GPU. Validation happens in MuJoCo before hardware deployment. The sim-to-real gap is minimized through domain randomization and accurate physics modeling. What succeeds in simulation has a high probability of working on hardware.
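
At its core, domain randomization just resamples physical parameters every episode so the policy cannot overfit to one exact simulated world. The C++ sketch below shows the idea only; the parameter set and ranges are made up for illustration, not the values used in G1 training.

```cpp
#include <random>

// Illustrative randomization ranges; real training would cover many
// more parameters (latency, sensor noise, terrain, and so on).
struct PhysicsParams {
    double friction;
    double payload_kg;
    double motor_strength_scale;
};

// Sample a fresh world for each training episode.
PhysicsParams randomize(std::mt19937& rng) {
    std::uniform_real_distribution<double> friction(0.4, 1.2);
    std::uniform_real_distribution<double> payload(0.0, 3.0);
    std::uniform_real_distribution<double> strength(0.8, 1.1);
    return {friction(rng), payload(rng), strength(rng)};
}

int main() {
    std::mt19937 rng{42};
    PhysicsParams p = randomize(rng);  // new physics for this episode
    (void)p;  // hand off to the simulator here
}
```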

Continuous Improvement

Deployed robots collect data that flows back to the cloud. Simulation replays real-world scenarios and refines policies automatically. Improved behaviors deploy via over-the-air updates. The fleet gets better through operation rather than manual engineering.

Current Capabilities

Locomotion

  • Bipedal walking on flat and rough terrain
  • Trained in Isaac Lab, validated in MuJoCo
  • Sim-to-real transfer to Unitree G1 hardware

Teleoperation

  • WebRTC video streaming at 30 FPS
  • Sub-100ms glass-to-glass latency
  • Dual joystick control through web interface
  • Works identically for simulated and physical robots

Simulation

  • Complete Isaac Lab to web frontend video pipeline
  • ZMQ bridge connecting simulation to the production stack (sketched after this list)
  • MuJoCo validation and hardware-in-the-loop testing
  • Identical interfaces for virtual and physical robots
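
As a rough illustration of such a bridge, the sketch below publishes simulation frames over a ZeroMQ PUB socket using cppzmq. The endpoint, topic name, and payload are assumptions for the sketch, not the project's actual wiring.

```cpp
#include <string>
#include <zmq.hpp>  // cppzmq, assumed available in the build

int main() {
    zmq::context_t ctx{1};
    zmq::socket_t pub{ctx, zmq::socket_type::pub};
    pub.bind("tcp://*:5556");  // illustrative endpoint

    const std::string topic = "sim.camera0";  // illustrative topic
    for (;;) {
        // In a real bridge this payload would be a rendered camera
        // frame or robot state pulled from the simulator each tick.
        const std::string frame = "frame-bytes";
        pub.send(zmq::buffer(topic), zmq::send_flags::sndmore);
        pub.send(zmq::buffer(frame), zmq::send_flags::none);
    }
}
```

A subscriber on the production side would connect a SUB socket to the same endpoint, subscribe to the topic, and feed incoming frames into the video pipeline.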

Cloud Services

  • Data collection from deployed robots
  • Automated scenario replay in simulation
  • Policy refinement from real-world data
  • Over-the-air model deployment

Hardware Platform

We’re developing on a Unitree G1 29-DOF humanoid robot. The G1 provides:

  • Fully articulated legs, arms, torso, and head (the head is currently removed)
  • High-torque actuators suitable for dynamic locomotion
  • Embedded compute for real-time control
  • Production-quality hardware to validate our software stack

The goal is to create software infrastructure that works across robot platforms. The G1 validates our architecture on real hardware with real constraints.

What’s Next

The repository goes public October 1st, 2025. Initial release includes:

  • Complete Menlo-ROS middleware with RAL/RML/RCL
  • Neocortex vision and audio processing
  • Isaac Lab and MuJoCo simulation integration
  • Teleoperation interfaces and WebRTC streaming
  • Documentation for setup, training, and deployment
  • Example policies trained on Unitree G1

Future development focuses on:

  • Expanding robot platform support beyond Unitree
  • Enhanced manipulation capabilities
  • Multi-robot coordination
  • Cloud simulation service for continuous improvement
  • Community-contributed policies and datasets

Philosophy

Robotics software has traditionally been research-focused: flexible, exploratory, prioritizing ease of experimentation. This makes sense for labs running a handful of robots.

When robots deploy at scale, in the hundreds, thousands, and eventually millions, different constraints dominate. Energy efficiency compounds across the fleet. Deterministic behavior becomes non-negotiable. Manual intervention doesn’t scale. Software must be production-grade from the start.

Asimov is built for that future. Every architectural decision optimizes for scale, efficiency, and reliability. The stack isn’t trying to be a general-purpose robotics toolkit. It’s an opinionated platform designed for ubiquitous humanoid robots.

True ownership means full control over every layer of the stack. Complete access means you can modify, repair, and improve anything. Collaborative development accelerates capability advancement.

That’s what we’re building.
