Tracer integrates with Slurm, providing detailed observability for batch jobs and array tasks. It works by running the Tracer agent on each compute node — no modification to job scripts required.

Why use Tracer with Slurm

Slurm gives job-level scheduling visibility, but not what happens inside the job. Tracer adds that missing layer:
  • Per-process telemetry inside each job allocation
  • Job array correlation and node-level performance
  • Resource and cost insights across users and queues
  • Zero changes to job submission or scripts
  • Real-time updates in the Tracer dashboard

Getting Started

Prerequisites

  • Slurm cluster access with sudo or admin privileges for installation
  • Tracer installed on your operating system

Just run your pipeline and Tracer will attach automatically

If Tracer is already installed on your system, you only need to enable the Tracer agent for pipelines that have not been run under Tracer before.
In that case, run the following command:
sudo tracer init --token <your-token>
Go to our onboarding page to get your personal token.
When you run this command, you will be asked to name your pipeline so it is clearly labeled in the dashboard.
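Putting the steps together, a first run of a new pipeline looks like this sketch (the token is a placeholder, and my_job.sh refers to the example script in the next section; later runs of the same pipeline skip the init step):
sudo tracer init --token <your-token>   # prompts once for a pipeline name
sbatch my_job.sh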

Examples

Run a Slurm pipeline under Tracer. Save the following script as my_job.sh:
#!/bin/bash
#SBATCH --job-name=test
#SBATCH --cpus-per-task=8
#SBATCH --time=01:00:00

module load python
python analysis.py
Submit this job as usual with:
sbatch my_job.sh
or launch the Tracer demo workflow:
sudo tracer demo
Once the pipeline starts, open the Tracer dashboard, and you’ll see each Slurm job as a timeline step updating in real time.
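Job arrays work the same way: Tracer correlates the individual array tasks without any changes to the script. A sketch of a standard Slurm array job (the array range and the way analysis.py consumes the task ID are illustrative):
#!/bin/bash
#SBATCH --job-name=array-test
#SBATCH --array=1-10
#SBATCH --cpus-per-task=4
#SBATCH --time=00:30:00

# Each task in the array receives its index via SLURM_ARRAY_TASK_ID
module load python
python analysis.py "$SLURM_ARRAY_TASK_ID"
Submit it with sbatch as usual; the array tasks then appear in the dashboard and can be correlated as described above.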

Watch your pipeline run in the Tracer dashboard
View real-time metrics, resource usage, and performance insights for your pipeline runs.