From d7c0b32f4d2b8c3fae6b560b7f9192831dcf1fb2 Mon Sep 17 00:00:00 2001
From: Wenxuan Tan
Date: Fri, 31 Jan 2025 17:59:28 -0600
Subject: [PATCH] [Docs] Add more details to profiling docs (#3221)

---
 docs/references/benchmark_and_profiling.md | 83 ++++++++++++----------
 1 file changed, 44 insertions(+), 39 deletions(-)

diff --git a/docs/references/benchmark_and_profiling.md b/docs/references/benchmark_and_profiling.md
index 0600b192b4f..762cae27671 100644
--- a/docs/references/benchmark_and_profiling.md
+++ b/docs/references/benchmark_and_profiling.md
@@ -15,8 +15,46 @@ python3 -m sglang.bench_serving --backend sglang --num-prompt 10
 ```
+## Profile with PyTorch Profiler
+PyTorch Profiler is a convenient, basic tool for inspecting kernel execution time, call stacks, and kernel overlap and occupancy.
+- To profile a server
+```bash
+# set trace path
+export SGLANG_TORCH_PROFILER_DIR=/root/sglang/profile_log
+
+# start server
+python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct
+
+# send profiling request from client
+python -m sglang.bench_serving --backend sglang --model-path meta-llama/Llama-3.1-8B-Instruct --num-prompts 10 --sharegpt-output-len 100 --profile
+```
+Please make sure `SGLANG_TORCH_PROFILER_DIR` is set on both the server and the client side; otherwise the trace file cannot be generated correctly. A reliable way is to set `SGLANG_TORCH_PROFILER_DIR` in your shell's `.*rc` file (e.g. `~/.bashrc` for bash); a short example is shown at the end of this section.
+
+- To profile offline
+```bash
+export SGLANG_TORCH_PROFILER_DIR=/root/sglang/profile_log
+python -m sglang.bench_offline_throughput --model-path meta-llama/Llama-3.1-8B-Instruct --dataset-name random --num-prompts 10 --profile --mem-frac=0.8
+```
+
+- View Traces
+
+Trace files can be loaded and visualized in:
+1. https://ui.perfetto.dev/ (any browser)
+2. chrome://tracing (Chrome browser only)
+
+If the browser cannot open a trace file because it is too large,
+the client can generate a smaller trace file (<100 MB) by limiting the number of prompts and the output length.
+For example, when profiling a server,
+```bash
+python -m sglang.bench_serving --backend sglang --model-path meta-llama/Llama-3.1-8B-Instruct --num-prompts 2 --sharegpt-output-len 100 --profile
+```
+sets the number of prompts to 2 with the `--num-prompts` argument and limits the output length to 100 tokens with the `--sharegpt-output-len` argument, which produces a small trace file that the browser can open smoothly.
+
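+As a concrete sketch of the `~/.bashrc` suggestion above (assuming a bash shell and the `/root/sglang/profile_log` path used in these examples), you can persist the trace directory once and later check how large the generated traces are:
+```bash
+# persist the trace directory so both the server and client shells see it
+echo 'export SGLANG_TORCH_PROFILER_DIR=/root/sglang/profile_log' >> ~/.bashrc
+source ~/.bashrc
+
+# make sure the directory exists before profiling
+mkdir -p "$SGLANG_TORCH_PROFILER_DIR"
+
+# after a profiled run, list the generated trace files and their sizes
+ls -lh "$SGLANG_TORCH_PROFILER_DIR"
+```
+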
 ## Profile with Nsight
-0. Prerequisite
+Nsight Systems is an advanced tool that exposes more profiling details, such as register and shared memory usage, annotated code regions, and low-level CUDA APIs and events.
+
+0. Prerequisite: install it using apt, or run inside an [NVIDIA Docker container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch/tags) or an [SGLang Docker container](https://github.com/sgl-project/sglang/tree/main/docker).
+
 ```bash
 # install nsys
 # https://docs.nvidia.com/nsight-systems/InstallationGuide/index.html
@@ -41,12 +79,13 @@ nsys profile --trace-fork-before-exec=true --cuda-graph-trace=node -o sglang.out
 python3 -m sglang.bench_serving --backend sglang --num-prompts 1000 --dataset-name random --random-input 1024 --random-output 512
 ```
-3. Use NVTX, e.g.
+3. Use NVTX to annotate code regions, e.g. to see their execution time.
 ```bash
 # install nvtx
 pip install nvtx
-
+```
+```python
 # code snippets
 import nvtx
 with nvtx.annotate("description", color="color"):
@@ -54,41 +93,7 @@ with nvtx.annotate("description", color="color"):
 ```
 ## Other tips
-
 1. You can benchmark a model using dummy weights by only providing the config.json file. This allows for quick testing of model variants without training. To do so, add `--load-format dummy` to the above commands and then you only need a correct `config.json` under the checkpoint folder.
 2. You can benchmark a model with modified configs (e.g., less layers) by using `--json-model-override-args`. For example, you can benchmark a model with only 2 layers and 2 kv heads using `python -m sglang.bench_one_batch --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --batch 32 --input-len 256 --output-len 32 --load-format dummy --json-model-override-args '{"num_hidden_layers": 1, "num_key_value_heads": 1}'`
-
-
-## Profile with PyTorch Profiler
-- To profile a server
-```bash
-# set trace path
-export SGLANG_TORCH_PROFILER_DIR=/root/sglang/profile_log
-
-# start server
-python -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct
-
-# send profiling request from client
-python -m sglang.bench_serving --backend sglang --model-path meta-llama/Llama-3.1-8B-Instruct --num-prompts 10 --sharegpt-output-len 100 --profile
-```
-Please make sure that the `SGLANG_TORCH_PROFILER_DIR` should be set at both server and client side, otherwise the trace file cannot be generated correctly . A secure way will be setting `SGLANG_TORCH_PROFILER_DIR` in the `.*rc` file of shell (e.g. `~/.bashrc` for bash shells).
-
-- To profile offline
-```bash
-export SGLANG_TORCH_PROFILER_DIR=/root/sglang/profile_log
-python -m sglang.bench_offline_throughput --model-path meta-llama/Llama-3.1-8B-Instruct --dataset-name random --num-prompts 10 --profile --mem-frac=0.8
-```
-
-- View Traces
-
-Trace files can be loaded and visualized from:
-1. https://ui.perfetto.dev/ (any browser)
-2. chrome://tracing (Chrome browser only)
-
-If browser cannot open trace file due to its large size,
-client can generate a small trace file (<100MB) by controlling number of prompts and lengths of prompt outputs.
-For example, when profiling a server,
-```bash
-python -m sglang.bench_serving --backend sglang --model-path meta-llama/Llama-3.1-8B-Instruct --num-prompts 2 --sharegpt-output-len 100 --profile
-```
-sets the number of prompts to 2 with `--num-prompts` argument and limits the length of output sequences to 100 with `--sharegpt-output-len` argument, which can generate a small trace file for browser to open smoothly.
+3. You can use `--python-backtrace=cuda` to see the Python call stack for all CUDA kernels, as in PyTorch Profiler. (Caveat: this can cause inaccurately long kernel runtimes for CUDA-event-based timing.)
+4. For more arguments, please see https://docs.nvidia.com/nsight-systems/UserGuide/index.html
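+
+As an illustration of tip 3, the flag can be combined with the `nsys profile` invocation shown earlier (the output name `sglang_offline.out` and the benchmark arguments are only examples; adjust them to your setup):
+```bash
+# offline single-batch benchmark under nsys, with Python backtraces recorded for CUDA kernels
+nsys profile --trace-fork-before-exec=true --cuda-graph-trace=node --python-backtrace=cuda -o sglang_offline.out \
+  python3 -m sglang.bench_one_batch --model-path meta-llama/Meta-Llama-3.1-8B-Instruct --batch 32 --input-len 256 --output-len 32 --load-format dummy
+```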