Search — mujoco
Issues
10 matches
- github:google-deepmind/mujoco · 5/14/2026 · rendering
Request to add native point cloud import and visualization (e.g., PLY/PCD) to the MuJoCo viewer as an efficient vertex rendering overlay. Current workaround renders each vertex as a sphere, which is inefficient for ground-truth verification.
Tags: mujoco, point-cloud, pcd, ply, viewer, debugging, rendering
- github:newton-physics/newton · 5/13/2026 · integration
Newton’s `add_joint_free()` allows parent bodies other than the world, but MuJoCo requires the parent to be the world. The reporter asks for a warning when a non-world parent is used, to manage the discrepancy and avoid confusing behavior differences.
Tags: newton-physics, mujoco, api-compat, joints, migration, warnings
- github:newton-physics/newton · 5/13/2026 · crashes-stability
A dexterous hand imported via URDF fails to grasp and lift a bottle; the object slides and remains unliftable. The same bottle can be lifted using a Franka example, suggesting contact/friction or grasp modeling differences for the hand.
Tags: crash, usd, rendering, manipulation, isaac-lab, newton
- github:newton-physics/newton · 5/13/2026 · crashes-stability
A dexterous hand imported via URDF cannot grasp a bottle reliably; the bottle slides and cannot be lifted. The reporter notes the Franka example can lift the same object, implying a hand-specific contact/friction issue.
Tags: crash, usd, rendering, hardware, manipulation, isaac-lab, newton, warp
- github:newton-physics/newton · 5/12/2026 · other
Adding a D6 joint with 1 angular DOF and 1 linear DOF to SolverMuJoCo can produce a repeated joint name error because the name-amending logic doesn't change names in this case. This causes model build failure due to name collisions.
Tags: newton
- kamino_basic_heterogeneous: rigid box exhibits collision glitches after settling on platform · Friction · github:newton-physics/newton · 5/8/2026 · crashes-stability
Tags: crash, usd, rendering, hardware, manipulation, mujoco, newton, warp
- github:newton-physics/newton · 5/8/2026 · rendering
Tags: rendering, hardware, mujoco, newton, warp
- github:google-deepmind/mujoco · 5/6/2026 · crashes-stability
Tags: crash, usd, mujoco, newton
- github:google-deepmind/mujoco · 5/6/2026 · crashes-stability
Tags: crash, feature-request, mujoco, warp
- github:google-deepmind/mujoco · 5/5/2026 · other
Tags: mujoco
Papers
3 matches
- SR-Platform: An Agentic Pipeline for Natural Language-Driven Robot Simulation Environment Synthesis · 2605.14700 · 5/14/2026 · Ben Wei Lim, Minh Duc Le, Thang Truong, Thanh Nguyen Canh
Generating robot simulation environments remains a major bottleneck in simulation-based robot learning. Constructing a training-ready MuJoCo scene typically requires expertise in 3D asset modeling, MJCF specification, spatial layout, collision avoidance, and robot-model integration. We present SR-Platform, a production-deployed agentic system that converts free-form natural language descriptions into executable, physically valid MuJoCo environments. SR-Platform decomposes scene synthesis into four stages: an LLM-based orchestrator that converts user intent into a structured scene plan; an asset forge that retrieves cached assets or generates new 3D geometry through LLM-to-CadQuery synthesis; a layout architect that assigns object poses and verifies industrial constraints; and a bridge layer that assembles the final MJCF scene and merges the selected robot model. The system is deployed as a nine-service Docker stack with WebSocket progress streaming, MinIO-backed mesh storage, Qdrant-based semantic asset retrieval, Redis job state, and InfluxDB telemetry. Using 30 days of production telemetry covering 611 successful LLM calls, SR-Platform generates five-object scenes with a median end-to-end latency of approximately 50 s, while cache-accelerated scenes complete in approximately 30-40 s. The asset forge shows an 11.3% first-attempt retry rate with automatic recovery, and cached asset retrieval removes per-object LLM calls for previously generated object types. These results show that agentic scene synthesis can reduce the manual effort required to create diverse robot training environments, enabling users to produce executable MuJoCo scenes from plain English prompts in under one minute.
Tags: crash, usd, deployment, integration, mujoco
- Real-Time Whole-Body Teleoperation of a Humanoid Robot Using IMU-Based Motion Capture with Sim2Sim and Sim2Real Validation · 2605.12347 · 5/12/2026 · Hamza Ahmed Durrani, Suleman Khan
Stable, low-latency whole-body teleoperation of humanoid robots is an open research challenge, complicated by kinematic mismatches between human and robot morphologies, accumulated inertial sensor noise, non-trivial control latency, and persistent sim-to-real transfer gaps. This paper presents a complete real-time whole-body teleoperation system that maps human motion, recorded with a Virdyn IMU-based full-body motion capture suit, directly onto a Unitree G1 humanoid robot. We introduce a custom motion-processing, kinematic retargeting, and control pipeline engineered for continuous, low-latency operation without any offline buffering or learning-based components. The system is first validated in simulation using the MuJoCo physics model of the Unitree G1 (sim2sim), and then deployed without modification on the physical platform (sim2real). Experimental results demonstrate stable, synchronized reproduction of a broad motion repertoire, including walking, standing, sitting, turning, bowing, and coordinated expressive full-body gestures. This work establishes a practical, scalable framework for whole-body humanoid teleoperation using commodity wearable motion capture hardware.
Tags: sim2real, locomotion, sensors, mujoco, humanoid, unitree
- Monocular Biomechanical Tracking of Fingers with Inverse Kinematics to Foundation Models · 2605.09258 · 5/9/2026 · R. James Cotton, Pouyan Firouzabadi, Wendy Murray
Accurate hand and finger tracking from video has significant clinical applications for monitoring activities of daily living and measuring range of motion, yet monocular video approaches for obtaining hand biomechanics remain under-developed. We present a method that combines the SAM 3D Body foundation model with inverse kinematics optimization in a full-body biomechanical model to extract anatomically-constrained finger joint angles from single-view video. We port SAM 3D Body from PyTorch to JAX for integration with MuJoCo-MJX, enabling GPU-accelerated optimization, and develop a novel mapping between the Momentum Human Rig (MHR) outputs and biomechanical model markers. Validation against 8-camera multiview reconstruction on 4,590 frames from 7 participants performing a variety of hand poses and object manipulation tasks shows finger joint angle errors of approximately 10 degrees and hand position errors of approximately 6 mm, after Procrustes alignment. Results were consistent across camera viewpoints and robust to different methods for producing reference values from multiview video. This work extends monocular biomechanical analysis to detailed finger tracking, expanding access to quantitative characterization of hand movement from readily available video.
Tags: manipulation, sensors, integration, mujoco, foundation-model