Explore your complete experiment history with Tracer’s powerful search and filtering capabilities.

Overview

Tracer maintains a comprehensive history of all your experiments, making it easy to find past runs, compare results, and reproduce successful experiments.

Accessing Experiment History

1. Open History View - Navigate to the Experiments section in the Tracer dashboard.
2. Browse or Search - Use the search bar or browse through your experiment timeline.
3. Filter Results - Apply filters to narrow down your search.

Search Capabilities

Search experiments by:
  • Experiment Name - Find experiments by their identifier
  • Date Range - Filter by when experiments were run
  • Status - Show only successful, failed, or running experiments
  • Tags - Search by custom tags you’ve applied

Advanced Filters

Combine multiple filters to find exactly what you’re looking for.
Filter by:
Filter Type    | Description             | Example
Duration       | Execution time          | Tasks longer than 1 hour
Resource Usage | CPU, memory, I/O        | Experiments using >16 GB RAM
User           | Who ran the experiment  | Experiments by team member
Environment    | Where it ran            | AWS Batch, local, Docker
Parameters     | Input parameters        | Specific configuration values
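
Conceptually, combining filters means keeping only the experiments that satisfy every predicate. A minimal Python sketch of that idea (the record fields and values below are hypothetical, not Tracer's actual data model):

```python
# Sketch: combining multiple filters over experiment records.
# The record schema and values are hypothetical, for illustration only.
experiments = [
    {"id": "exp1", "duration_s": 5400, "mem_gb": 24, "user": "alice", "env": "aws-batch"},
    {"id": "exp2", "duration_s": 1200, "mem_gb": 8,  "user": "bob",   "env": "local"},
    {"id": "exp3", "duration_s": 7200, "mem_gb": 32, "user": "alice", "env": "docker"},
]

# Each filter is a predicate; an experiment matches when all predicates hold.
filters = [
    lambda e: e["duration_s"] > 3600,  # Duration: longer than 1 hour
    lambda e: e["mem_gb"] > 16,        # Resource usage: more than 16 GB RAM
    lambda e: e["user"] == "alice",    # User: who ran the experiment
]

matches = [e["id"] for e in experiments if all(f(e) for f in filters)]
print(matches)  # exp1 and exp3 satisfy all three filters
```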

Comparing Experiments

Side-by-Side Comparison

Select multiple experiments to compare:
  • Performance Metrics - Execution time, resource usage
  • Outputs - Result files and data
  • Configuration - Parameter differences
  • Resource Consumption - Cost and efficiency
Use the comparison view to identify which parameters led to better performance.
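
Under the hood, a configuration comparison amounts to diffing two parameter sets. A small illustrative sketch (the parameter names and values are made up):

```python
# Sketch: finding parameter differences between two experiment configurations.
# Parameter names and values are hypothetical examples.
config_a = {"batch_size": 32, "learning_rate": 0.001, "threads": 8}
config_b = {"batch_size": 64, "learning_rate": 0.001, "threads": 16}

# Keep only keys whose values differ; record both sides of the difference.
diff = {
    k: (config_a.get(k), config_b.get(k))
    for k in set(config_a) | set(config_b)
    if config_a.get(k) != config_b.get(k)
}
print(diff)  # batch_size and threads differ; learning_rate is unchanged
```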

Trend Analysis

View trends across experiments:
# View resource trends over time
tracer trends --metric memory --range 30d

# Compare success rates
tracer stats --group-by workflow --range 7d
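
A trend like the one `tracer trends` reports can be reproduced offline from exported run metrics, for example as a least-squares slope. A self-contained sketch with hypothetical data points (day index, peak memory in GB):

```python
# Sketch: fitting a simple memory-usage trend to exported run metrics.
# The data points are hypothetical: (days since start, peak memory in GB).
runs = [(0, 10.0), (7, 12.5), (14, 14.0), (21, 16.5), (28, 18.0)]

# Ordinary least-squares slope: GB of growth per day.
n = len(runs)
mean_x = sum(x for x, _ in runs) / n
mean_y = sum(y for _, y in runs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in runs)
         / sum((x - mean_x) ** 2 for x, _ in runs))
print(f"{slope:.3f} GB/day")  # ~0.286 GB/day for this sample
```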

Reproducing Experiments

Exact Reproduction

Tracer captures everything needed to reproduce an experiment:
  • Operating system and kernel version
  • Software dependencies and versions
  • Environment variables
  • Container images (if used)
  • Input file checksums
  • Data locations
  • Parameter values
  • Configuration files
  • Resource allocations
  • Execution order
  • Parallelization settings
  • Random seeds (if applicable)
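
One of the captured ingredients, input file checksums, can be illustrated with a short stand-alone sketch. The file name, contents, and manifest shape below are hypothetical, not Tracer's internal format:

```python
# Sketch: recording an input-file checksum, one ingredient of exact reproduction.
# The file, its contents, and the manifest layout are hypothetical.
import hashlib
import json
import os
import tempfile

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Create a throwaway input file so the sketch is self-contained.
with tempfile.NamedTemporaryFile("wb", delete=False, suffix=".fastq") as f:
    f.write(b"@read1\nACGT\n+\n!!!!\n")
    path = f.name

manifest = {"inputs": {os.path.basename(path): sha256_of(path)}}
print(json.dumps(manifest, indent=2))
os.remove(path)
```

Verifying these checksums before a re-run is what lets a tool detect that an input has silently changed since the original experiment.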

Re-running Experiments

# Re-run an experiment by ID
tracer rerun experiment-id

# Re-run with modified parameters
tracer rerun experiment-id --param batch_size=64

# Re-run on different infrastructure
tracer rerun experiment-id --env aws-batch

Organizing Experiments

Tagging

Add tags to organize experiments:
# Tag an experiment
tracer tag experiment-id production validated

# Search by tag
tracer search --tag production

Collections

Group related experiments:
  • By Project - Organize by research project
  • By Pipeline - Group by workflow type
  • By Goal - Categorize by objective

Exporting History

Export Options

Export experiment data for external analysis:

  • CSV Export - Export metrics and metadata to CSV
  • JSON Export - Full experiment details in JSON format
  • Report Generation - Generate PDF reports with visualizations
  • API Access - Programmatic access via REST API

Example Export

# Export last 30 days to CSV
tracer export --format csv --range 30d --output experiments.csv

# Export specific experiments
tracer export --ids exp1,exp2,exp3 --format json
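
Once exported, the CSV can be analyzed with any scripting language. A minimal Python sketch using only the standard library (the column names and rows below are a hypothetical stand-in for a real export):

```python
# Sketch: summarizing an exported experiments CSV.
# Column names and rows are hypothetical stand-ins for a real export.
import csv
import io

exported = io.StringIO(
    "id,status,duration_s\n"
    "exp1,success,5400\n"
    "exp2,failed,120\n"
    "exp3,success,7200\n"
)

rows = list(csv.DictReader(exported))
succeeded = [r for r in rows if r["status"] == "success"]
success_rate = len(succeeded) / len(rows)
avg_duration = sum(int(r["duration_s"]) for r in succeeded) / len(succeeded)
print(f"success rate {success_rate:.0%}, avg duration {avg_duration:.0f}s")
```

For a real export, replace the in-memory `io.StringIO` with `open("experiments.csv")`.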

Search Tips

Pro Tip: Save frequently used searches as “Saved Views” for quick access.

Keyboard Shortcuts

  • Ctrl/Cmd + K - Quick search
  • Ctrl/Cmd + F - Filter panel
  • Ctrl/Cmd + Click - Select multiple experiments

Search Syntax

Use advanced search syntax:
# Search by date range
created:>2024-01-01 created:<2024-12-31

# Search by status
status:failed

# Search by resource usage
memory:>8GB cpu:>4

# Combine filters
status:success duration:>1h tag:production
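
To see how `field:value` and `field:>value` terms decompose, here is an illustrative tokenizer. The parsing rules are an assumption for demonstration, not Tracer's actual implementation:

```python
# Sketch: tokenizing 'field:op value' search terms such as status:failed
# or duration:>1h. Illustrative only; not Tracer's real parser.
import re

def parse_query(query):
    """Split a query string into (field, operator, value) triples."""
    terms = []
    for token in query.split():
        m = re.fullmatch(r"(\w+):(>=|<=|>|<)?(.+)", token)
        if m:
            field, op, value = m.groups()
            terms.append((field, op or "=", value))
    return terms

print(parse_query("status:success duration:>1h tag:production"))
```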

Integration with Analysis Tools

Connect Tracer history to your analysis workflow:
  • Jupyter Notebooks - Query experiment data programmatically
  • R/Python Scripts - Analyze trends and patterns
  • BI Tools - Connect to Tableau, PowerBI, etc.
  • Custom Dashboards - Build visualizations with Grafana
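
Programmatic access typically starts with an HTTP query. A minimal sketch of assembling such a request with the standard library; note the base URL, path, and parameter names here are assumptions for illustration, not Tracer's documented endpoint:

```python
# Sketch: assembling a query URL for a hypothetical Tracer REST endpoint.
# The host, path, and parameter names are assumptions, not the real API.
from urllib.parse import urlencode

base = "https://api.tracer.example/v1/experiments"
params = {"status": "success", "range": "30d", "tag": "production"}
url = f"{base}?{urlencode(params)}"
print(url)
```

From a Jupyter notebook or script, the same URL could then be fetched with any HTTP client and the JSON response loaded for analysis.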

Next Steps