Deploying Custom Applications

First, following the previous containerizing document, make sure you have a “Fat Package” containing your application together with the full Omniverse Kit SDK version that matches your application.
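
If you built your application from the kit-app-template repo, the fat package is typically produced by the repo packaging tooling; as a rough sketch (the exact command depends on your template version, so treat this as an assumption):

$ ./repo.sh package
# The resulting .zip is written to the _build/packages directory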

Copy the fat package from the location where you built it to the /opt/dockerfile folder, then change into that directory.

$ cp my_app_fatpack.zip /opt/dockerfile
$ cd /opt/dockerfile

Create a file named dockerfile within the /opt/dockerfile directory.

$ cd /opt/dockerfile
$ sudo nano -l dockerfile

Use the kit 105.1.2 streaming base image available publicly via NVIDIA NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/omniverse/containers/ov-kit-appstreaming
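
Optionally, pre-pull the base image to confirm that you can reach NGC before starting the build:

$ docker pull nvcr.io/nvidia/omniverse/ov-kit-appstreaming:105.1.2-135279.16b4b239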

The FROM instruction below pulls the Kit base image from NGC at build time. Copy the following into the dockerfile:

# Use the kit 105.1.2 streaming base image available publicly via NVIDIA NGC
FROM nvcr.io/nvidia/omniverse/ov-kit-appstreaming:105.1.2-135279.16b4b239

ARG FAT_PACK
ARG OVC_KIT

RUN if [ -z "$FAT_PACK" ]; then \
    >&2 echo "\n****************Warning!!!!*************\n"; \
    >&2 echo "Define docker build --build-arg FAT_PACK=<path_to_your_fat_package>.zip, it cannot be empty!"; false; \
    fi

RUN if [ -z "$OVC_KIT" ]; then \
    >&2 echo "\n****************Warning!!!!*************\n"; \
    >&2 echo "Define docker build --build-arg OVC_KIT=<some.file.ovc.kit>, it cannot be empty!"; false; \
    fi

ENV OVC_KIT=$OVC_KIT
ENV OVC_APP_PATH="/opt/nvidia/omniverse"

# Cleanup the embedded kit-sdk-launcher package, as the fat package is a full package that includes kit-sdk
RUN rm -rf /opt/nvidia/omniverse/kit-sdk-launcher

# Copy the application fat package into the container's application directory ($OVC_APP_PATH).
COPY --chown=ubuntu:ubuntu $FAT_PACK $OVC_APP_PATH

# Unzip the application package into the container's application directory and then delete the zip
WORKDIR $OVC_APP_PATH
RUN FAT_PACK_BASE=$(basename $FAT_PACK) && unzip $FAT_PACK_BASE -d . && rm $FAT_PACK_BASE

# Pull in any additional required dependencies
RUN ./pull_kit_sdk.sh

# Copy the startup.sh script from the repos source/scripts directory.
# This is what will be called when the container image is started.
COPY --chown=ubuntu:ubuntu startup.sh /startup.sh
RUN chmod +x /startup.sh

# This specifies the container's default entrypoint that will be called by "> docker run".
ENTRYPOINT [ "/startup.sh" ]

Press ctrl+o to save
Press ctrl+x to exit

In /opt/dockerfile, create another file called startup.sh.
$ sudo nano -l /opt/dockerfile/startup.sh

Copy the following into your startup.sh file.

#!/usr/bin/env bash
set -e
set -u

# Check for libGLX_nvidia.so.0 (needed for vulkan)
ldconfig -p | grep libGLX_nvidia.so.0 || NOTFOUND=1
if [[ -v NOTFOUND ]]; then
    cat << EOF > /dev/stderr

Fatal Error: Can't find libGLX_nvidia.so.0...

Ensure running with NVIDIA runtime. (--gpus all) or (--runtime nvidia)

EOF
    exit 1
fi

# Detect NVIDIA Vulkan API version, and create ICD:
export VK_ICD_FILENAMES=/tmp/nvidia_icd.json

USD_PATH="${USD_PATH:-${OVC_APP_PATH}/data/Forklift_A/Forklift_A01_PR_V_NVD_01.usd}"


USER_ID="${USER_ID:-""}"
if [ -z "${USER_ID}" ]; then
    echo "User id is not set"
fi

WORKSTREAM="${OV_WORKSTREAM:-"omni-saas-int"}"

export HSSC_SC_MEMCACHED_SERVICE_NAME="memcached-service-r3"
export HSSC_SC_MEMCACHED_REDISCOVER="1"
export HSSC_SC_CLIENT_LOGFILE_ROOT=/tmp/renders/hssc
mkdir -p /tmp/renders

__GL_F32B90a0=$(find /opt/nvidia/omniverse/hssc_shader_cache_client_lib -path \*release/lib\* -name libhssc_shader_cache_client.so)
echo "Found hssc client so in: $__GL_F32B90a0"
export __GL_F32B90a0
export __GL_a011d7=1   # OGL_VULKAN_GFN_SHADER_CACHE_CONTROL=ON
export __GL_43787d32=0 #  OGL_VULKAN_SHADER_CACHE_TYPE=NONE
export __GL_3489FB=1   # OGL_VULKAN_IGNORE_PIPELINE_CACHE=ON

export OPENBLAS_NUM_THREADS=10 # OM-97400, optimize thread count for numpy(OpenBlas)

CMD="${OVC_APP_PATH}/kit/kit"
ARGS=(
    "${OVC_APP_PATH}/apps/${OVC_KIT}"
    "--no-window"
    "--/privacy/userId=${USER_ID}"
    "--/crashreporter/data/workstream=${WORKSTREAM}"
    "--/exts/omni.kit.window.content_browser/show_only_collections/2=" # OM-98801
    "--/exts/omni.kit.window.filepicker/show_only_collections/2=" # OM-98801
    "--ext-folder /home/ubuntu/.local/share/ov/data/exts/v2"
    "--/crashreporter/gatherUserStory=0" # Workaround for OMFP-2908 while carb fix is deployed.
    "--/crashreporter/includePythonTraceback=0" # Workaround for OMFP-2908 while carb fix is deployed.
    "--${OVC_APP_PATH}/auto_load_usd=${USD_PATH}" # TODO: replace with USD_PATH
)

# Since we won't have access to the container at runtime, print the kit config here for debugging
echo "==== Print out kit config OVC_KIT=${OVC_KIT} for debugging ===="
cat ${OVC_APP_PATH}/apps/${OVC_KIT}
echo "==== End of kit config ${OVC_KIT} ===="

echo "Starting usd viewer with $CMD ${ARGS[@]} $@"

exec "$CMD" "${ARGS[@]}" "$@"

List the contents of the fat package to find the name of the kit app (.ovc.kit) file. You will pass this name to the build as the OVC_KIT build argument.

$ cd /opt/dockerfile
$ sudo unzip -Z my_app_fatpack.zip
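
If the listing is long, you can narrow it to just the kit app files; the startup script expects them under the package's apps/ directory:

$ unzip -Z1 my_app_fatpack.zip | grep '\.kit$'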

Building a containerized application

Change the permissions of the dockerfile and startup.sh files to ensure they have Linux permissions of 775. Without changing the permissions of these files, the build will fail with an error.
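
For example, from within /opt/dockerfile:

$ sudo chmod 775 dockerfile startup.sh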

In the build command, enter the name of the kit file and the path to the fat package zip. From within /opt/dockerfile, issue the following command:

$ docker build --build-arg OVC_KIT=<the_name_of_ovc_kit_file_inside_the_fat_package.ovc.kit> --build-arg FAT_PACK=<path_to_your_fat_package_release.zip> . -t <name of your container>

Replace the placeholders with appropriate values: OVC_KIT is the name of the app's .ovc.kit file, and FAT_PACK is the zip file that contains it.

Example:

$ sudo docker build --build-arg OVC_KIT=omni.usd_explorer.ovc.kit --build-arg FAT_PACK=./kit-app-template-fat@2023.2.1+main.0.a4f26f51.local.linux-x86_64.release.zip . -t kitappcontainer:0.1

You should see output similar to this:
[+] Building 0.2s (15/15) FINISHED                                                                                                                                              docker:default
=> [internal] load build definition from Dockerfile                                                                                                                                      0.0s
=> => transferring dockerfile: 1.91kB                                                                                                                                                    0.0s
=> [internal] load metadata for nvcr.io/nvidia/omniverse/ov-kit-appstreaming:105.1.2-135279.16b4b239                                                                                     0.1s
=> [internal] load .dockerignore                                                                                                                                                         0.0s
=> => transferring context: 2B                                                                                                                                                           0.0s
=> [internal] load build context                                                                                                                                                         0.0s
=> => transferring context: 131B                                                                                                                                                         0.0s
=> [ 1/10] FROM nvcr.io/nvidia/omniverse/ov-kit-appstreaming:105.1.2-135279.16b4b239@sha256:a11a11ce8d4b5a8b25b691e36af38d183843c315e150ef09667be9f7f6996833                             0.0s
=> CACHED [ 2/10] RUN if [  -z ./kit-app-template-fat@2023.2.1+main.0.a4f26f51.local.linux-x86_64.release.zip ];then   >&2 echo  "\n****************Warning!!!!*************\n";   >&2   0.0s
=> CACHED [ 3/10] RUN if [  -z omni.usd_explorer.ovc.kit ];then   >&2 echo  "\n****************Warning!!!!*************\n";   >&2 echo "Define docker build --build-arg OVC_KIT=<some.f  0.0s
=> CACHED [ 4/10] RUN rm -rf /opt/nvidia/omniverse/kit-sdk-launcher                                                                                                                      0.0s
=> CACHED [ 5/10] COPY --chown=ubuntu:ubuntu ./kit-app-template-fat@2023.2.1+main.0.a4f26f51.local.linux-x86_64.release.zip /app/                                                        0.0s
=> CACHED [ 6/10] WORKDIR /app                                                                                                                                                           0.0s
=> CACHED [ 7/10] RUN FAT_PACK_BASE=$(basename ./kit-app-template-fat@2023.2.1+main.0.a4f26f51.local.linux-x86_64.release.zip) && unzip $FAT_PACK_BASE -d . && rm $FAT_PACK_BASE         0.0s
=> CACHED [ 8/10] RUN /app/pull_kit_sdk.sh                                                                                                                                               0.0s
=> CACHED [ 9/10] COPY --chown=ubuntu:ubuntu startup.sh /startup.sh                                                                                                                      0.0s
=> CACHED [10/10] RUN chmod +x /startup.sh                                                                                                                                               0.0s
=> exporting to image                                                                                                                                                                    0.0s
=> => exporting layers                                                                                                                                                                   0.0s
=> => writing image sha256:973cf8e31bdf262ca67774404805d0cc0274a8bd446be1f1895ec0120bb3b7b2                                                                                              0.0s
=> => naming to docker.io/library/kitappcontainer:0.1                                                                                                                                    0.0s
List the images to confirm the build:
$ sudo docker image list
REPOSITORY        TAG         IMAGE ID       CREATED             SIZE
kitappcontainer   0.1         973cf8e31bdf   About an hour ago   15.2GB
linux_exporter    latest      c7b733c8d82b   6 weeks ago         135MB
python            3.10-slim   af6a90a1d65e   7 weeks ago         128MB
hello-world       latest      d2c94e258dcb   12 months ago       13.3kB
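
Optionally, smoke-test the image locally before pushing it anywhere. The startup.sh entrypoint requires the NVIDIA runtime (it checks for libGLX_nvidia.so.0) and reads USER_ID from the environment, so a minimal test run, shown here as a sketch, looks like:

$ docker run --gpus all --rm -e USER_ID=test-user kitappcontainer:0.1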

Push the Docker container image to NGC private registry

Your new container image has been built locally and resides in your local Docker instance. For it to be broadly deployable, it must be registered with a container registry: a repository that stores container images to facilitate deployments to Kubernetes clusters. This can be an entirely self-hosted registry, or a private or public registry in a commercial offering such as NVIDIA’s NGC. Where you register your container is an important decision, and you must ensure that every Kubernetes cluster you want to deploy to has access to your my_app container image in that registry.

Registering your new container image with a repository is called “pushing” and is done by tagging your container image appropriately and then pushing it. Below is a simple example of pushing a locally created container to NGC, using a placeholder company name of “my_company”.

# Login to NVIDIA's NGC (nvcr = NVIDIA Container Registry).
$ docker login nvcr.io
Username: $oauthtoken
Password: <paste your personal access key here> (note that nothing will display)
<you may see a warning similar to this...>
WARNING! Your password will be stored unencrypted in /home/horde/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Tag the Docker image you wish to push to the NGC private registry. Documentation for this process is found here: https://docs.nvidia.com/ngc/gpu-cloud/ngc-private-registry-user-guide/index.html#loading-nvidia-docker-containers

An example of the process would be:

$ docker tag my_app:0.2.0 nvcr.io/my_company/omniverse/my_app:0.2.0

The my_company value can be found under your username menu in the upper-right portion of the NGC Dashboard page. Click the down caret, then select Organization.

The 0.2.0 is an example version tag, which further identifies the image.

Once the image is tagged, issue the docker push command to upload the Docker image to your private registry.

$ docker push nvcr.io/my_company/omniverse/my_app:0.2.0

You will see output indicating that the container image is being uploaded. This process could take a few minutes depending on the size of your application. Once finished, you will see a message similar to:

0.1: digest: sha256:33263c1cb27d3ad8417581c4b3448b792baf03eb4a890a87bf1f6e2642ddc4a7 size: 7196

Below is an example of two container images that have been tagged for upload via the docker push command.

$ docker image ls
REPOSITORY                             TAG         IMAGE ID       CREATED         SIZE
ak_explorer                            0.1         d6099a819edd   21 hours ago    15.2GB
nvcr.io/c12dbd4carxf/ovc/ak_explorer   0.1         d6099a819edd   21 hours ago    15.2GB
axel_editor                            0.1         0b2bcded1f8d   21 hours ago    10.7GB
nvcr.io/c12dbd4carxf/ovc/axel_editor   0.1         0b2bcded1f8d   21 hours ago    10.7GB
ubuntu                                 latest      bf3dc08bfed0   3 weeks ago     76.2MB
linux_exporter                         latest      c7b733c8d82b   7 weeks ago     135MB
python                                 3.10-slim   af6a90a1d65e   2 months ago    128MB
hello-world                            latest      d2c94e258dcb   12 months ago   13.3kB
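
To verify the upload, you can pull a tagged image back from the registry, for example:

$ docker pull nvcr.io/my_company/omniverse/my_app:0.2.0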