What Are Test Artifacts in Software Testing?

By Alex Gandy · January 19, 2026

Test artifacts are the outputs generated when automated tests run. Reports, logs, screenshots, coverage data - anything produced during test execution that isn’t the test code itself. They’re essential for debugging failures, tracking test health, and understanding what actually happened when a test ran.

Types of Test Artifacts

Test Reports

The most common artifact. Test frameworks generate reports showing which tests passed, failed, or were skipped.

Formats vary by framework:

  • HTML reports - Human-readable, browsable (Playwright, pytest-html, jest-html-reporter)
  • JSON reports - Machine-readable, good for parsing (Jest, Vitest)
  • JUnit XML - Industry standard, supported by most CI systems
  • CTRF - Common Test Report Format, a newer universal JSON schema
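The machine-readable formats are what downstream tooling builds on. As a rough sketch of why that matters, here is what consuming a JSON report might look like in TypeScript, assuming a simplified CTRF-like shape (the field names and path here are illustrative, not the exact schema):

import { readFileSync } from 'node:fs';

// Illustrative shape only -- loosely modeled on CTRF, not the exact schema.
interface TestReport {
  results: {
    summary: { tests: number; passed: number; failed: number; skipped: number };
    tests: Array<{ name: string; status: 'passed' | 'failed' | 'skipped'; duration: number }>;
  };
}

// Load a report produced by the test run (path is an example).
const report: TestReport = JSON.parse(readFileSync('test-results/report.json', 'utf8'));

// Summarize failures so a script or CI step can act on them.
const failed = report.results.tests.filter((t) => t.status === 'failed');
console.log(`${report.results.summary.passed} passed, ${failed.length} failed`);
for (const t of failed) {
  console.log(`FAIL: ${t.name} (${t.duration}ms)`);
}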

A typical HTML report includes:

  • Test names and status (pass/fail/skip)
  • Execution time per test
  • Error messages and stack traces
  • Sometimes screenshots or video links

Screenshots and Videos

E2E testing frameworks like Playwright and Cypress capture visual evidence when tests fail:

// Playwright - automatic screenshot on failure
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
  },
});

These artifacts are invaluable for debugging. “The button wasn’t visible” is vague. A screenshot showing the button was covered by a modal is actionable.

Log Files

Console output, debug logs, and application logs captured during test execution:

  • Browser console logs (for frontend tests)
  • Server logs (for integration tests)
  • Test runner output
  • Custom debug logging
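Browser console logs in particular are easy to lose: most setups don't capture them unless you wire it up yourself. A minimal Playwright sketch that appends console messages to a log file alongside the other artifacts (the test name and output path are just examples):

import { test } from '@playwright/test';
import { appendFileSync } from 'node:fs';

test('checkout flow', async ({ page }) => {
  // Forward every browser console message into a log file next to other artifacts.
  page.on('console', (msg) => {
    appendFileSync('test-results/console.log', `[${msg.type()}] ${msg.text()}\n`);
  });

  await page.goto('https://example.com');
  // ...rest of the test
});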

Trace Files

Some frameworks generate trace files for deep debugging:

  • Playwright traces - Complete recording of network requests, DOM snapshots, and actions
  • HAR files - HTTP archive format for network activity
  • Performance profiles - CPU and memory usage during tests
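Playwright traces, for example, are enabled in the same config block as screenshots and videos; recording only on retry is a common way to keep artifact sizes manageable:

// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    // Record a full trace only when a failing test is retried.
    trace: 'on-first-retry',
  },
});

The resulting trace.zip can be opened later with npx playwright show-trace, which replays the actions, network requests, and DOM snapshots from the run.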

Coverage Reports

Code coverage data showing which lines of code were executed during tests:

  • Line coverage (which lines ran)
  • Branch coverage (which conditional paths executed)
  • Function coverage (which functions were called)

Coverage artifacts are typically HTML reports or JSON/LCOV data for CI integration.
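With Vitest, for instance, the emitted coverage formats are just a reporter list. A sketch that produces console output, a browsable HTML report, and LCOV for CI tooling (assumes the @vitest/coverage-v8 package is installed; the output directory is an example):

// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      // 'text' for the console, 'html' for browsing, 'lcov' for CI integrations.
      reporter: ['text', 'html', 'lcov'],
      reportsDirectory: './test-results/coverage',
    },
  },
});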

Why Test Artifacts Matter

Debugging Failures

When a test fails in CI, you need to understand what happened. Without artifacts, you’re limited to:

  • The test name
  • A pass/fail status
  • Maybe an error message

With artifacts, you get:

  • Full stack traces
  • Screenshots of the failure state
  • Network requests that were made
  • Console logs leading up to the failure

The difference between “Element not found” and seeing a screenshot where the element is clearly there but obscured by a loading spinner is the difference between 5 minutes of debugging and 2 hours.

Historical Analysis

Individual test runs are data points. Artifacts across many runs reveal patterns:

  • Which tests fail intermittently (flaky tests)
  • When a test started failing (regression detection)
  • Whether test duration is increasing over time
  • Coverage trends across releases

This requires storing artifacts, not just generating them.
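As a sketch of the kind of analysis stored artifacts enable: flaky tests fall out of a simple aggregation over per-run JSON reports. The report shape and directory below are hypothetical; adapt them to whatever your reporter actually emits.

import { readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

// Hypothetical per-run report shape; adjust to your reporter's output.
interface RunReport {
  tests: Array<{ name: string; status: 'passed' | 'failed' | 'skipped' }>;
}

const runsDir = 'stored-artifacts'; // one JSON report per CI run
const failures = new Map<string, number>();
const totals = new Map<string, number>();

for (const file of readdirSync(runsDir).filter((f) => f.endsWith('.json'))) {
  const run: RunReport = JSON.parse(readFileSync(join(runsDir, file), 'utf8'));
  for (const t of run.tests) {
    totals.set(t.name, (totals.get(t.name) ?? 0) + 1);
    if (t.status === 'failed') failures.set(t.name, (failures.get(t.name) ?? 0) + 1);
  }
}

// Tests that fail sometimes but not always are flaky candidates.
for (const [name, total] of totals) {
  const failed = failures.get(name) ?? 0;
  if (failed > 0 && failed < total) {
    console.log(`Flaky candidate: ${name} (${failed}/${total} runs failed)`);
  }
}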

Compliance and Auditing

Some industries require test evidence retention:

  • Healthcare - FDA 21 CFR Part 11 requires documented testing
  • Finance - SOX compliance may require test records
  • Automotive - ISO 26262 functional safety standards

CI artifacts that expire after 30 days don’t meet these requirements.

Team Communication

Test artifacts are communication tools:

  • Share a failure report with a teammate
  • Attach test results to a bug ticket
  • Review test coverage in a PR
  • Show stakeholders that tests passed before release

The Artifact Lifecycle Problem

Test artifacts have a lifecycle issue: they’re generated during CI but often inaccessible when you need them.

Generation

Most CI pipelines generate artifacts automatically. The test framework runs, produces output files, and the CI system captures them.

# GitHub Actions - tests generate artifacts
- run: npm test
- uses: actions/upload-artifact@v4
  if: always()  # upload artifacts even when the test step fails
  with:
    name: test-results
    path: ./test-results/

This part works fine.

Storage

CI providers store artifacts, but with limits:

Provider          Default Retention    Max Retention
GitHub Actions    90 days              400 days
GitLab CI         30 days              Configurable
CircleCI          30 days              30 days
Bitbucket         14 days              14 days

After the retention period, artifacts are deleted. The test run record exists, but the actual reports are gone.

Access

Even while artifacts exist, accessing them is tedious:

  1. Navigate to the CI provider
  2. Find the workflow run
  3. Download the artifact zip file
  4. Extract locally
  5. Open the report

This friction means people often skip checking artifacts unless they’re actively debugging.

Sharing

Sharing CI artifacts with teammates who don’t have CI access is painful. You end up downloading the artifact and re-uploading it somewhere else.

Managing Test Artifacts

Option 1: Rely on CI Retention

Accept that artifacts expire. Debug quickly or lose the data.

Pros: No additional setup
Cons: Historical analysis is impossible; artifacts disappear when you need them

Option 2: Upload to Cloud Storage

Add a CI step to copy artifacts to S3, GCS, or Azure Blob:

- run: aws s3 cp ./test-results s3://my-bucket/tests/${{ github.sha }}/ --recursive

Pros: Control over retention, potentially cheaper at scale
Cons: Manual setup, no built-in UI, need to manage access control and cleanup

Option 3: Dedicated Test Report Hosting

Use a service built for test artifacts - we built Gaffer for this.

Pros: Shareable URLs, built-in analytics, team access control
Cons: Another service to pay for

Best Practices

Generate Structured Reports

Plain console output is hard to parse. Generate structured reports (HTML, JSON, JUnit XML) that tools can work with.

// Playwright - multiple reporters
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [
    ['list'],  // Console output for humans
    ['html'],  // Browsable HTML report
    ['json', { outputFile: 'results.json' }],  // Machine-readable
  ],
});

Capture Screenshots on Failure

For E2E tests, always capture screenshots on failure. The 100KB per screenshot is worth it.

Use Self-Contained HTML

When generating HTML reports, use self-contained mode so the report doesn’t depend on external CSS/JS files:

pytest --html=report.html --self-contained-html

Separate Artifacts by Type

Keep different artifact types in different directories:

test-results/
├── reports/        # HTML and JSON reports
├── screenshots/    # Failure screenshots
├── videos/         # Test recordings
├── coverage/       # Coverage data
└── traces/         # Playwright traces

This makes it easier to apply different retention policies to each type.
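If a framework dumps everything into one flat directory, a small post-processing step can sort files into that layout. A rough sketch (the extension-to-directory mapping is an assumption about your setup):

import { mkdirSync, readdirSync, renameSync } from 'node:fs';
import { extname, join } from 'node:path';

// Map file extensions to artifact subdirectories (adjust for your frameworks).
const destByExt: Record<string, string> = {
  '.html': 'reports',
  '.json': 'reports',
  '.png': 'screenshots',
  '.webm': 'videos',
  '.zip': 'traces', // Playwright traces are zip files
};

const root = 'test-results';
for (const entry of readdirSync(root, { withFileTypes: true })) {
  if (!entry.isFile()) continue;
  const dest = destByExt[extname(entry.name)];
  if (!dest) continue;
  mkdirSync(join(root, dest), { recursive: true });
  renameSync(join(root, entry.name), join(root, dest, entry.name));
}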

Tag Artifacts with Commit Metadata

Tag artifacts with the commit SHA and branch. When you’re debugging a failure from last week, you need to know what code produced those results.

curl -X POST https://api.example.com/upload \
  -F "file=@report.html" \
  -F "commit=$CI_COMMIT_SHA" \
  -F "branch=$CI_BRANCH"