

Setting MCR_CACHE_ROOT is especially important when running the executable from a network (NFS) location, since unpacking onto a network location can be quite slow. If the executable is run in parallel on different machines (for example, a computer cluster running a parallel program), this might even cause lock-outs when different processes try to access the same network location. In both cases, the solution is to set MCR_CACHE_ROOT to a local folder (e.g., /tmp or %TEMP%). If you plan to reuse the extracted files again, then perhaps you should not delete them; otherwise, simply delete the temporary folder after the executable ends. If you wish to set this env variable permanently on Windows, look at the explanation provided here. In the following example, $RANDOM is a bash variable that returns a random number:
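A minimal sketch of such a wrapper script, using $RANDOM to create a unique local cache folder (the executable name myExecutable is a placeholder for your compiled program):

```shell
#!/bin/bash
# Point the MCR cache at a unique local folder instead of the
# (slow, shared) network location.
export MCR_CACHE_ROOT=/tmp/mcr_cache_root_${USER}_$RANDOM
mkdir -p "$MCR_CACHE_ROOT"
echo "MCR cache folder: $MCR_CACHE_ROOT"

# ./myExecutable           # run the deployed executable here (placeholder name)

rm -rf "$MCR_CACHE_ROOT"   # delete the cache, unless you plan to reuse it
```

Keep the final rm -rf only if you do not intend to reuse the extracted files on the next run.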

REM set MCR_CACHE_ROOT=%TEMP%
set MCR_CACHE_ROOT="C:\Documents and Settings\Yair\Matlab Cache\"

The support package supports the NVIDIA Jetson TK1, Jetson TX1, Jetson TX2, Jetson Xavier, and Jetson Nano developer kits. It also supports the NVIDIA DRIVE platform. You can build and deploy the generated CUDA code from your MATLAB algorithm, along with the interfaces to the peripherals and the sensors, on the Jetson platform.

Standalone execution: You can deploy the generated CUDA® code as a standalone embedded application on the DRIVE platform.

MATLAB Coder™ support package for NVIDIA® Jetson™ and NVIDIA DRIVE™ platforms automates the deployment of a MATLAB® algorithm or Simulink® design on embedded NVIDIA GPUs such as the Jetson platform. Interactive communication: You can remotely communicate with the NVIDIA target from MATLAB to acquire data from supported sensors and imaging devices connected to the target, and then analyze and visualize it in MATLAB. You can log data from supported sensors to help fine-tune your algorithm for early prototyping. You can deploy a variety of trained deep learning networks, such as YOLO, ResNet-50, SegNet, and MobileNet, from Deep Learning Toolbox™ to NVIDIA GPUs, and you can generate optimized code for preprocessing and postprocessing along with your trained deep learning networks to deploy complete algorithms. Use the interactive communication to prototype and develop your MATLAB algorithm, then automatically generate equivalent C code and deploy it to the DRIVE platform to run as a standalone application.
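As a rough sketch of this workflow (assuming the support package is installed and GPU Coder is available; the board address, credentials, build directory, and the entry-point name myAlgorithm are all placeholders, not names from this article), connecting to a Jetson board and deploying generated code might look like:

```matlab
% Connect to the Jetson board (address and credentials are placeholders)
hwobj = jetson('192.168.1.15', 'ubuntu', 'ubuntu');

% Configure generation of a standalone executable for the board
cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.Hardware.BuildDir = '~/remoteBuildDir';

% Generate, build, and deploy code for the entry-point function myAlgorithm.m
codegen('-config', cfg, 'myAlgorithm', '-report')
```

The flow for the DRIVE platform is analogous, using the corresponding DRIVE hardware connection and hardware configuration objects.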
