In the interest of making the app cross-platform across Linux-based desktops as well as L4T for Jetson, convert the weights for your model to ONNX. The TRTPose repository comes with a Python utility for converting PyTorch weights to ONNX. If you already have PyTorch installed locally on your system, you can use this utility to carry out the conversion as follows:

export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth

Alternatively, you can use the NVIDIA NGC PyTorch Docker container for L4T or x86 to run the export script. First, pull the Docker container for your platform:

For Jetson: docker pull nvcr.io/nvidia/l4t-pytorch:r32.4.4-pth1.6-py3
For NVIDIA GPUs: docker pull nvcr.io/nvidia/pytorch:20.10-py3

Then, clone the TRTPose repository and navigate to the folder containing the export script inside the container:

$ git clone

Place the PyTorch weights for the model to export to ONNX within this directory. Finally, convert the model as follows:

export_for_isaac.py --input_checkpoint model_weights.pth

The utility generates an ONNX model in the same directory.
The repository consists of two models: one on the ResNet backbone and the other on the denser DenseNet backbone. While the two models perform differently under different scenarios, there is no difference in how you would go about deploying either one with a DeepStream app. While PyTorch models provide a quick and convenient way to get an app up and running, they are often not portable between frameworks, so I used the Open Neural Network Exchange (ONNX) format to deploy the model with DeepStream.
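In practice, "no difference in deployment" means the DeepStream side only sees an ONNX file: the Gst-nvinfer element is pointed at whichever exported model you chose through its configuration file. A hedged sketch of the relevant [property] entries follows; the file names are assumptions, not values from this walkthrough:

```ini
[property]
# Exported model; nvinfer builds a TensorRT engine from it on first run.
onnx-file=pose_estimation.onnx
# Cached engine from that first run (name is an assumption).
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
batch-size=1
```

Swapping the ResNet model for the DenseNet one would only change which ONNX file these entries reference.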
Step 2: Download the human pose estimation model and convert it to ONNX

Download the PyTorch model weights for the TRTPose model.
If you are using the Jetson platform, CUDA and TensorRT are preinstalled as a part of JetPack. For this post, I assume that DeepStream is installed at $DEEPSTREAM_DIR$. The actual installation directory could change depending on whether you're using a container or the bare-metal version of DeepStream.

Deploying a pose estimation model with DeepStream

To streamline the process, I built on the sample apps provided with the DeepStream SDK at $DEEPSTREAM_DIR$/sources/sample_apps. I have broken down the workflow into six main steps:

Figure 2. Workflow for developing a pose estimation application with DeepStream.

If this is your first time building a GStreamer pipeline, the GStreamer Foundations page is a good resource to cross-reference while building your pipeline.

Step 1: Create a directory for the application

Start by creating a directory for the pose estimation application. Though you can create the directory anywhere, create your app within the $DEEPSTREAM_DIR$/sources/apps/sample_apps/ directory. This ensures that there are no problems with DeepStream-related symlinks when you later compile the app using a makefile. For this walkthrough, create a directory called deepstream-pose-estimation in the sample_apps folder.
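The directory step can be sketched as the shell commands below. DEEPSTREAM_DIR is an assumption standing in for your actual install location (for example, /opt/nvidia/deepstream/deepstream-5.0 on a bare-metal install); the fallback to a scratch path here exists purely so the sketch is runnable anywhere:

```shell
# Sketch of Step 1: create the app directory inside the DeepStream sources tree.
# Set DEEPSTREAM_DIR to your real install; the scratch default is for illustration.
DEEPSTREAM_DIR="${DEEPSTREAM_DIR:-$HOME/deepstream}"
APP_DIR="$DEEPSTREAM_DIR/sources/apps/sample_apps/deepstream-pose-estimation"

mkdir -p "$APP_DIR"   # -p also creates the intermediate sources/apps/... folders
cd "$APP_DIR"
```

Keeping the app under sample_apps means the relative paths and symlinks that the SDK's sample makefiles rely on resolve without modification.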
Here are the software toolkits to install:
In this post, I discuss building a human pose estimation application with DeepStream. I used one of the sample apps from the DeepStream SDK as a starting point and added custom code to detect human poses using various pose estimation AI models. I show the TRTPose model, an open-source NVIDIA project that aims to enable real-time pose estimation on NVIDIA platforms, and the CMU OpenPose model with DeepStream. To get started with this project, you need either the NVIDIA Jetson platform or a system with an NVIDIA GPU running Linux.