Search — isaac-sim
Issues
25 matches

- nvidia-forum:simulation | 5/14/2026 | tooling-dx
User reports the Robot Wizard “Add Colliders” button does not show in the window. This likely blocks or slows collider generation during robot import/setup.
  Tags: isaac-sim, robot-wizard, colliders, ui, usd, authoring

- Isaac Sim 5.1.0 crashes shortly after startup on Windows Server 2025 with RTX Pro 6000 Blackwell [Blocker] | nvidia-forum:simulation | 5/13/2026 | crashes-stability
Report that Isaac Sim 5.1.0 crashes shortly after startup on Windows Server 2025 with an RTX Pro 6000 Blackwell GPU. User cannot run the simulator in this environment.
  Tags: isaac-sim, 5-1-0, windows-server-2025, rtx-pro-6000, blackwell, startup-crash, drivers

- nvidia-forum:simulation | 5/13/2026 | integration
User asks how to run Isaac Sim via a Python script with the ROS2 bridge enabled. This implies friction in documentation or APIs for programmatic launch/configuration of ROS2 integration.
  Tags: isaac-sim, python, ros2-bridge, automation, headless, integration

- github:isaac-sim/IsaacSim | 5/13/2026 | tooling-dx
Isaac Sim Content Browser (planned to replace the Asset Browser in Isaac Sim 6) lacks key UX features of the 5.1.0 Asset Browser: no thumbnail previews, incomplete search (non-SimReady assets are excluded), and some global searches within categories are unavailable. Users can’t reliably find common assets like tables.
  Tags: isaac-sim, content-browser, asset-browser, ux, assets, search, isaac-sim-6

- nvidia-forum:simulation | 5/13/2026 | crashes-stability
Isaac Sim is reported to crash when using RTX Sensors. The post provides no additional details beyond the crash condition.
  Tags: crash, rendering, isaac-sim

- github:isaac-sim/IsaacLab | 5/13/2026 | crashes-stability
TacSL force-field readings in an Isaac Lab demo do not increase smoothly with stepped applied normal force and instead appear irregular. The user is running the Isaac Lab main branch with Isaac Sim 5.1.
  Tags: crash, usd, rendering, sensors, isaac-sim, isaac-lab

- github:isaac-sim/IsaacLab | 5/12/2026 | other
Velocity-only write paths on Isaac Lab Articulation do not invalidate cached derived body state buffers. Downstream code may read stale body velocity/state after writing velocities to sim.
  Tags: isaac-sim

- github:isaac-sim/IsaacLab | 5/12/2026 | asset-pipeline
According to the report, relative texture paths do not work in the IsaacLab Beta, even when the image sits in the same folder as the USD file. Loading the asset through IsaacLab code triggers errors.
  Tags: usd, rendering, hardware, isaac-sim, isaac-lab

- nvidia-forum:simulation | 5/12/2026 | crashes-stability
Isaac Sim 4.5 GUI crashes with errors when running IsaacLab examples. This blocks users from executing standard example pipelines.
  Tags: crash, isaac-sim, isaac-lab

- github:isaac-sim/IsaacLab | 5/12/2026 | training-infra
Proposal to integrate DiffRL into IsaacLab via an isaaclab_diffrl extension and the Mineral algorithm library. It targets the Direct workflow and Newton (Warp) backend to enable end-to-end backprop through physics for algorithms like SHAC and APG/BPTT.
  Tags: rl, integration, isaac-sim, isaac-lab, newton, warp

- Isaac Sim Crash [Blocker] | nvidia-forum:robotics-edge-computing | 5/11/2026 | crashes-stability
User reports an Isaac Sim crash. No additional details are provided, but the issue blocks use of the simulator.
  Tags: isaac-sim, crash, stability, simulation

- Isaac Sim Crash [Pain] | nvidia-forum:isaac-ros | 5/11/2026 | crashes-stability
Same 'Isaac Sim Crash' report appears in Isaac ROS forum. This suggests downstream robotics workflows may be blocked by simulator instability.
  Tags: crash, isaac-sim

- Isaac Sim Crash [Pain] | nvidia-forum:isaac | 5/11/2026 | crashes-stability
User reports Isaac Sim crash. This indicates instability affecting simulator usage.
  Tags: crash, isaac-sim

- github:isaac-sim/IsaacLab | 5/11/2026 | rendering
In a CloudXR + OpenXR setup, frames stream correctly but inbound messages and hand-tracking poses are silently dropped between client and Isaac Sim’s OpenXR plugin. This blocks teleop commands and hand tracking for interactive workflows.
  Tags: rendering, hardware, deployment, integration, isaac-sim, isaac-lab

- github:isaac-sim/IsaacLab | 5/11/2026 | crashes-stability
In Isaac Lab v3.0.0-beta, lift_cube_sm.py ignores the --viz kit option and no Kit/Isaac Sim window opens despite the process running. A one-line change to AppLauncher initialization appears to fix it locally.
  Tags: crash, rendering, hardware, docs, isaac-sim, isaac-lab

- nvidia-forum:simulation | 5/11/2026 | other
Character animation in Isaac Sim does not play back as expected, pointing to issues in animation playback or sequencing.
  Tags: isaac-sim

- nvidia-forum:simulation | 5/11/2026 | deployment
User cannot start IsaacSim 5.1.0 because of the ROS2 Bridge. This prevents launching the simulator with ROS integration enabled.
  Tags: deployment, isaac-sim

- nvidia-forum:isaac-ros | 5/10/2026 | docs-onboarding
User requests a Franka tutorial for the cuMotion MoveIt plugin with Isaac Sim. This indicates a documentation gap for a common manipulator workflow.
  Tags: docs, integration, isaac-sim

- nvidia-forum:isaac | 5/10/2026 | docs-onboarding
User requests a Franka tutorial for the cuMotion MoveIt plugin with Isaac Sim. This suggests users struggle to connect the pieces without a guided example.
  Tags: docs, integration, isaac-sim

- nvidia-forum:simulation | 5/9/2026 | other
  Tags: isaac-sim

- github:isaac-sim/IsaacLab | 5/9/2026 | training-infra
  Tags: rl, rendering, hardware, docs, integration, isaac-sim, isaac-lab

- github:isaac-sim/IsaacLab | 5/9/2026 | synthetic-data
  Tags: synthetic-data, rl, deployment, docs, integration, feature-request, isaac-sim, isaac-lab

- github:isaac-sim/IsaacLab | 5/8/2026 | other
  Tags: isaac-sim, isaac-lab

- github:isaac-sim/IsaacSim | 5/8/2026 | training-infra
  Tags: rl, usd, rendering, hardware, deployment, locomotion, isaac-sim, unitree

- github:isaac-sim/IsaacLab | 5/8/2026 | docs-onboarding
  Tags: docs, isaac-sim, isaac-lab
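One of the Isaac Lab reports above (velocity-only write paths not invalidating cached derived body-state buffers) follows a common caching pitfall: a lazily built derived buffer survives a write to the authoritative state. A minimal Python sketch of the pattern — the class and method names here are hypothetical illustrations, not the real Isaac Lab API:

```python
class Articulation:
    """Toy model of an articulation with a lazily cached derived buffer."""

    def __init__(self, velocity):
        self._sim_velocity = velocity   # authoritative sim-side state
        self._body_state_cache = None   # derived buffer, built on demand

    @property
    def body_state(self):
        # Derived state is rebuilt only when the cache is empty.
        if self._body_state_cache is None:
            self._body_state_cache = {"velocity": self._sim_velocity}
        return self._body_state_cache

    def write_velocity_buggy(self, velocity):
        # Bug: updates sim state but leaves the cache untouched,
        # so later reads of body_state return stale values.
        self._sim_velocity = velocity

    def write_velocity_fixed(self, velocity):
        # Fix: every write path also invalidates the cache.
        self._sim_velocity = velocity
        self._body_state_cache = None


art = Articulation(velocity=0.0)
_ = art.body_state                  # populate the cache
art.write_velocity_buggy(2.5)
stale = art.body_state["velocity"]  # still 0.0 — stale read

art.write_velocity_fixed(2.5)
fresh = art.body_state["velocity"]  # cache rebuilt — reads 2.5
```

The fix the report implies is the second write path: invalidate (or update) every cached buffer derived from the state being written, on every write route.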
Papers
3 matches

- SceneFactory: GPU-Accelerated Multi-Agent Driving Simulation with Physics-Based Vehicle Dynamics | arXiv:2605.08528 | 5/8/2026 | Yicheng Zhu, Yang Chen, Tao Li, Zilin Bian
Autonomous-driving simulators typically trade physical fidelity for scalable parallelism. Physics-based platforms such as CARLA and MetaDrive provide articulated vehicle dynamics and contact, but their non-vectorized interfaces make batched training difficult. GPU-batched systems such as Waymax and GPUDrive scale to hundreds of scenarios by replacing rigid-body physics with simplified kinematic models, omitting tire--road interaction, suspension, contact dynamics, and road-condition-dependent friction. We introduce SceneFactory, a GPU-vectorized platform for procedural scene construction, physics-based multi-agent simulation, and RL in autonomous-driving environments. Built on NVIDIA Isaac Sim + Isaac Lab, SceneFactory represents worlds and agents as batched tensors: control, observations, rewards, resets, and policy inference run as GPU tensor operations over the Isaac Lab tensor API. SceneFactory converts Waymo Open Motion Dataset road topologies into simulation-ready USD worlds, runs many worlds concurrently on one GPU, populates each with multiple articulated PhysX vehicles, and maps precipitation and road-surface type to PhysX material friction coefficients. With GPU vectorization, SceneFactory achieves up to 127$\times$ higher throughput than a non-vectorized PhysX baseline on the same GPU and physics solver, reaching 19,250 controlled-agent simulation steps per second at 256 worlds $\times$ 16 agents. Cross-simulator transfer reveals an asymmetric dynamics gap: physics-grounded RL policies transfer to a simplified kinematic bicycle model with 99.5% success, whereas reverse transfer drops to 47.3%. Under wet-road friction, friction-aware policies reduce mean peak DRAC from 58.7 to 27.8,m/s$^2$ without sacrificing goal reach. SceneFactory shows that scalable autonomous-driving training need not discard articulated rigid-body dynamics or physically grounded road-condition variation.
  Tags: crash, rl, usd, rendering, multi-agent, isaac-sim, isaac-lab

- Action Agent: Agentic Video Generation Meets Flow-Constrained Diffusion | arXiv:2605.01477 | 5/2/2026 | Jeffrin Sam, Nguyen Khang, Yara Mahmoud, Miguel Altamirano Cabrera …
We present Action Agent, a two-stage framework that unifies agentic navigation video generation with flow-constrained diffusion control for multi-embodiment robot navigation. In Stage I, a large language model (LLM) acts as an orchestration module that selects video diffusion models, refines prompts through iterative validation, and accumulates cross-task memory to synthesize physically plausible first-person navigation videos from language and image inputs. This increases video generation success from 35% (single-shot) to 86% across 50 navigation tasks. In Stage II, we introduce FlowDiT, a Flow-Constrained Diffusion Transformer that converts optimized goal videos and language instructions into continuous velocity commands using action-space denoising diffusion. FlowDiT integrates DINOv2 visual features, learned optical flow for ego-motion representation, and CLIP language embeddings for semantic stopping. We pretrain on the RECON outdoor navigation dataset and fine-tune on 203 Unitree G1 humanoid episodes collected in Isaac Sim to calibrate velocity dynamics. A single 43M-parameter checkpoint achieves 73.2% navigation success in simulation and 64.7% task completion on a real Unitree G1 in unseen indoor environments under open-loop execution, while operating at 40--47 Hz. We evaluate Action Agent across three embodiments: a Unitree G1 humanoid (real hardware), a drone, and a wheeled mobile robot (Isaac Sim), demonstrating that decoupling trajectory imagination from execution yields a scalable and embodiment-aware paradigm for language-guided navigation.
  Tags: isaac-sim, humanoid, unitree

- Evidence-Based Landing Site Selection and Vision-Based Landing for UAVs in Unstructured Environments | arXiv:2605.01432 | 5/2/2026 | Sina Sajjadi, Jacopo Panerati, Sina Soleymanpour, Varunkumar Mehta …
Autonomous landing in cluttered or unstructured environments remains a safety-critical challenge for unmanned aerial vehicles (UAVs), particularly under noisy perception caused by sensor uncertainty and platform-induced disturbances such as vibration. This paper presents an evidence-based probabilistic framework for autonomous UAV landing that explicitly separates decision-making under uncertainty from execution via visual servoing. Landing safety is modeled as a latent variable and inferred through recursive accumulation of frame-wise visual likelihoods derived from flatness, slope, and obstacle cues, yielding a temporally consistent belief map that is robust to transient perception errors. Physical feasibility is enforced through a hard geometric constraint based on the minimum required landing radius of the UAV, ensuring that undersized but visually appealing regions are rejected. The final landing site is selected using constrained maximum a posteriori estimation. Once selected, the UAV locks onto the target region using ORB feature tracking and performs precise alignment and descent via image-based visual servoing (IBVS). The proposed approach is validated through both real-world laboratory experiments and high-fidelity simulations in Nvidia Isaac Sim, demonstrating consistent, cautious, and stable landing behavior across domains.
  Tags: perception, isaac-sim
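The SceneFactory abstract above contrasts articulated PhysX dynamics with the kinematic bicycle models that GPU-batched simulators substitute for rigid-body physics. For reference, a single Euler step of such a model — variable names, the update order, and the 2.8 m wheelbase are illustrative choices, not taken from the paper:

```python
import math

def bicycle_step(x, y, heading, speed, steer, accel, dt, wheelbase=2.8):
    """One Euler step of a kinematic bicycle model.

    The pose follows the heading, yaw rate is speed / wheelbase *
    tan(steer), and there is no tire, suspension, or contact modeling —
    exactly the simplifications the abstract's transfer experiment probes.
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt
    speed += accel * dt
    return x, y, heading, speed

# Straight-line sanity check: zero steer leaves heading and y unchanged.
pose = bicycle_step(0.0, 0.0, 0.0, 10.0, steer=0.0, accel=0.0, dt=0.1)
```

A model like this explains the asymmetric transfer gap the paper reports: policies trained on it never encounter friction or contact effects, so they struggle on physics-grounded dynamics, while the reverse transfer is easier.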