Development

This document collects getting-started notes and recipes for development.

Git, GNU Emacs, and GNU Screen

sudo apt install emacs-nox screen
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
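
A typical workflow keeps long-running work inside a named, detachable screen session; as a minimal sketch (the session name dev is arbitrary):

screen -S dev        # start a named session
# detach with C-a d, then later:
screen -ls           # list sessions
screen -r dev        # reattach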

Multi-user Development

sudo adduser <username>
sudo adduser <username> sudo

Then add SSH keys for connecting.
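
For example, a public key can be installed for the new account from the client machine with ssh-copy-id (the key path and host below are placeholders):

ssh-copy-id -i ~/.ssh/id_ed25519.pub <username>@<jetson-nano-ip>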

Note that it is not clear whether multi-user development is practical on the Jetson Nano. Much of the OS image configuration is at the jetbot user level or system-wide, and only one process can use the camera at a time anyway. It may therefore make more sense to treat the entire device as a single-user system despite the multi-user OS capabilities.

Python Virtual Environment

Ensure the system has the python3-venv package installed.

sudo apt install python3-venv
python3 -m venv --system-site-packages ~/ilf_env
source ~/ilf_env/bin/activate
pip install --upgrade pip
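
As a quick sanity check that --system-site-packages exposes the pre-installed toolkits inside the environment (assuming, for example, that numpy ships with the image), run:

python3 -c "import numpy; print(numpy.__version__)"
pip list | grep -i numpy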

Note that it is not clear whether a virtual environment approach is consistent with how OS images are designed for the Jetson Nano: ML/AI toolkits come pre-configured at the jetbot user level or system-wide.

NVIDIA Deep Learning Institute

See “Getting Started with AI on Jetson Nano” for introductory information. Example code is in /srv/nvdli-nano and symlinked from the jetbot user home directory. The OS image starts Jupyter Lab at boot by default on port 8888, running as the jetbot user, and it can be accessed remotely via port forwarding. A good starting point is nvdli-nano/hello_camera, which tests the camera feed.
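
A minimal way to reach Jupyter Lab from another machine is SSH local port forwarding (the host below is a placeholder); it is then available at http://localhost:8888 in the local browser:

ssh -L 8888:localhost:8888 jetbot@<jetson-nano-ip>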