Containerizing Omniverse services
Omniverse Services can be containerized, and to help with this an official Omniverse Kit Kernel container is available on NGC. That container contains the Kit Kernel and comes with support for GPU and non-GPU workloads alike.
Getting started
The Kit Kernel container sits behind a sign-in wall on NGC, NVIDIA's container registry, so the Docker engine needs to be configured with an NGC API key. Instructions for setting up environments are available on the NGC website.
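As a minimal sketch of that configuration (assuming an API key has already been generated from an NGC account), authenticating the Docker engine against NGC typically looks like this:

```bash
# Log in to nvcr.io; the username is the literal string "$oauthtoken"
# and the password is the NGC API key generated from the account.
docker login nvcr.io --username '$oauthtoken'
```

When prompted for a password, paste the NGC API key.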
Non-GPU containers: The Kit Kernel container can be used without a GPU for non-GPU workloads (e.g. a service that copies some files around). For those use cases, the standard Docker runtime can be used.
GPU containers: The Kit Kernel container comes with the necessary libraries and configuration to support GPUs. Docker, however, needs to be configured to support GPU passthrough; documentation can be found on the nvidia-docker website.
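As an illustration, and assuming the NVIDIA Container Toolkit is installed on the host, GPU passthrough is typically enabled per run with the `--gpus` flag (the image name below is a placeholder):

```bash
# Expose all host GPUs to the container.
docker run --gpus all -it <kit-kernel-image>

# Alternatively, on older setups, select the NVIDIA runtime explicitly.
docker run --runtime nvidia -it <kit-kernel-image>
```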
Using the vanilla Kit Kernel container
The Kit Kernel container can be used as is; it comes with an embedded bash shell, and the Kit Kernel is installed under /opt/nvidia/omniverse/kit-kernel.
At the root of the container are also two example scripts: `startup.sh` and `startup_nogpu.sh`.

When running a GPU workload, it is important to note that `startup.sh` performs configuration that will need to be replicated in case custom startup scripts are used:
```bash
#!/usr/bin/env bash

# Check for libGLX_nvidia.so.0 (needed for Vulkan)
ldconfig -p | grep libGLX_nvidia.so.0 || NOTFOUND=1
if [[ -v NOTFOUND ]]; then
    # Quote the heredoc delimiter so the backticks in the message are printed
    # literally rather than being expanded as a command substitution.
    cat << "EOF" > /dev/stderr

Fatal Error: Cannot find libGLX_nvidia.so.0...

Ensure running with NVIDIA runtime (--gpus all) or (--runtime nvidia).
If no GPU is required, please run `startup_nogpu.sh` as the entrypoint.

EOF
    exit 1
fi

# Detect NVIDIA Vulkan API version, and create ICD:
export VK_DRIVER_FILES=/tmp/nvidia_icd.json
LD_LIBRARY_PATH=/opt/nvidia/omniverse/kit-kernel/plugins/carb_gfx \
    /opt/nvidia/omniverse/vkapiversion/bin/vkapiversion \
    "${VK_DRIVER_FILES}"
```
Building a Hello World Omniverse service container
Standard Docker container workflows can be used for installing and configuring an Omniverse services container.
Using the example from the "Getting started" guide, we'll assume the following Hello World service, saved in `hello_world.py`:
```python
from omni.services.core import main

def hello_world():
    return "hello world"

main.register_endpoint("get", "/hello-world", hello_world)
```
Alongside it, in the same directory, we create a Dockerfile with the following content:
```dockerfile
# syntax=docker/dockerfile:1

FROM nvcr.io/nvidian/omniverse/ov-kit-kernel:106.0.0-release.118399.fcefe91f

# Install Kit services dependencies.
# This code is pulled from an extension registry, and `--ext-precache-mode` will pull down the extensions and exit.
RUN /opt/nvidia/omniverse/kit-kernel/kit \
    --ext-precache-mode \
    --enable omni.services.core \
    --enable omni.services.transport.server.http \
    --/exts/omni.kit.registry.nucleus/registries/0/name=kit/services \
    --/exts/omni.kit.registry.nucleus/registries/0/url=https://dw290v42wisod.cloudfront.net/exts/kit/services \
    --allow-root

COPY hello_world.py hello_world.py

EXPOSE 8011/tcp

ENTRYPOINT [ \
    "/opt/nvidia/omniverse/kit-kernel/kit", \
    "--exec", "hello_world.py", \
    "--enable", "omni.services.core", \
    "--enable", "omni.services.transport.server.http", \
    "--allow-root" \
]
```
Building and running the example:

```bash
docker build -t kit-service .
docker run -it -p 8011:8011 kit-service:latest
```
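Once the container is running, the endpoint can also be exercised from the command line, for example with curl:

```bash
# Call the Hello World endpoint; the service replies with "hello world".
curl http://localhost:8011/hello-world
```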
By default, an API documentation page is created listing the exposed services, accessible at http://localhost:8011/docs. It is possible to exercise the API from that webpage by using the Try it out feature within the different endpoints.
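The services stack is built on FastAPI, so the machine-readable OpenAPI schema should also be retrievable; note that the /openapi.json path below is the FastAPI default and is an assumption rather than something this guide documents:

```bash
curl http://localhost:8011/openapi.json
```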
Advanced services
All the usual techniques for building Omniverse Extensions and Omniverse app files can be used when building a containerized version. The deep dive into Omniverse services, and the extension demonstrated there, can be copied and built into a container in a similar way and run identically. When using extensions requiring a GPU, make sure to refer back to the section on using a vanilla container and the requirement for a startup script that configures the Kit Kernel so it can detect the drivers.
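As a sketch of what such a custom startup script might look like, combining the GPU configuration from the vanilla `startup.sh` shown earlier with the Hello World service from this guide (the GPU availability check is omitted here for brevity):

```bash
#!/usr/bin/env bash
set -e

# Replicate the GPU configuration from the vanilla startup.sh:
# detect the NVIDIA Vulkan API version and generate the ICD file.
export VK_DRIVER_FILES=/tmp/nvidia_icd.json
LD_LIBRARY_PATH=/opt/nvidia/omniverse/kit-kernel/plugins/carb_gfx \
    /opt/nvidia/omniverse/vkapiversion/bin/vkapiversion \
    "${VK_DRIVER_FILES}"

# Launch the service, mirroring the ENTRYPOINT from the Dockerfile above.
exec /opt/nvidia/omniverse/kit-kernel/kit \
    --exec hello_world.py \
    --enable omni.services.core \
    --enable omni.services.transport.server.http \
    --allow-root
```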
Container startup time and shader caches
It is important to note that this container does not ship with shader caches, as shader caches are likely to differ based on the GPU and drivers used. This means that a container using RTX rendering in the service may take longer to start up, since the cache gets generated on the first run. If the container is going to run on the same driver/GPU hardware, it is worth running it once on that hardware and using `docker commit` to capture the state of the container after the first run, which will significantly reduce subsequent startup times.
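A sketch of that warm-up flow, assuming the `kit-service` image built above and a hypothetical `warm-cache` tag:

```bash
# Run the service once on the target GPU/driver so the shader cache is generated,
# then stop the container (Ctrl+C or `docker stop kit-service-warmup`).
docker run --gpus all --name kit-service-warmup -p 8011:8011 kit-service:latest

# Capture the container state, including the generated shader caches, as a new image.
docker commit kit-service-warmup kit-service:warm-cache

# Subsequent runs start from the pre-warmed image.
docker run --gpus all -it -p 8011:8011 kit-service:warm-cache
```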