You set up an S3 bucket to store test reports. Smart move—now you have historical data. Six months later, the storage bill is 10x what you expected. Sound familiar?
## The Hidden Cost of “Just Store It”
When teams outgrow CI artifact storage, the natural solution is cloud storage:

```yaml
- name: Upload to S3
  run: aws s3 cp ./test-results s3://test-reports/${{ github.sha }} --recursive
```

Simple. Effective. And a ticking time bomb for your cloud bill.
## Why Storage Costs Compound
Unlike compute (which stops billing when idle), storage bills you for everything you have ever kept:
| Month | New Reports | Total Stored | Monthly Cost* |
|---|---|---|---|
| 1 | 50 GB | 50 GB | $1.15 |
| 3 | 50 GB | 150 GB | $3.45 |
| 6 | 50 GB | 300 GB | $6.90 |
| 12 | 50 GB | 600 GB | $13.80 |
| 24 | 50 GB | 1.2 TB | $27.60 |
*S3 Standard pricing at $0.023 per GB-month
That’s assuming a flat 50 GB of new reports every month. Teams that scale up CI runs or add more test suites watch the bill climb even faster.
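The compounding is just multiplication, but it is easy to underestimate; here is a minimal Python sketch using the same assumptions as the table (50 GB of new reports per month, $0.023 per GB-month, no cleanup):

```python
# Storage cost compounds because each month bills for ALL data retained,
# not just the new data. Rates match the table above.
NEW_GB_PER_MONTH = 50
PRICE_PER_GB_MONTH = 0.023  # S3 Standard

def monthly_cost(month: int) -> float:
    """Bill for a given month with no cleanup: everything ever stored."""
    total_stored_gb = NEW_GB_PER_MONTH * month
    return total_stored_gb * PRICE_PER_GB_MONTH

for m in (1, 3, 6, 12, 24):
    print(f"Month {m:2d}: {NEW_GB_PER_MONTH * m:5d} GB stored, ${monthly_cost(m):.2f}/month")
```

The monthly bill grows linearly with time even though the upload rate is constant — that is the trap: nothing in the pipeline changed, yet month 24 costs 24x month 1.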
## The Playwright Problem
E2E testing frameworks like Playwright make storage costs explode. A typical Playwright setup generates:
- Screenshots on failure: 200KB–2MB each
- Video recordings: 5–50MB per test
- Trace files: 10–100MB per test run
A team running 100 E2E tests across 3 browsers, 4 times per day:
| Artifact Type | Size per Run | Daily | Monthly |
|---|---|---|---|
| HTML Report | 5 MB | 20 MB | 600 MB |
| Screenshots | 50 MB | 200 MB | 6 GB |
| Videos | 500 MB | 2 GB | 60 GB |
| Traces | 200 MB | 800 MB | 24 GB |
| Total | 755 MB | ~3 GB | ~90 GB |
That’s 90 GB per month from one project. Most teams have multiple. And nobody wants to disable recordings—they’re essential for debugging flaky tests.
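The arithmetic behind that table is worth sanity-checking. A quick sketch using the per-run sizes above (4 runs per day, a 30-day month, decimal GB):

```python
# Per-run artifact sizes (MB) from the table above; 4 CI runs per day.
SIZES_MB = {"html_report": 5, "screenshots": 50, "videos": 500, "traces": 200}
RUNS_PER_DAY = 4

def monthly_gb(size_mb_per_run: float, days: int = 30) -> float:
    """Monthly volume for one artifact type, in decimal GB."""
    return size_mb_per_run * RUNS_PER_DAY * days / 1000

total_per_run_mb = sum(SIZES_MB.values())  # 755 MB
total_monthly_gb = sum(monthly_gb(s) for s in SIZES_MB.values())
print(f"{total_per_run_mb} MB per run, ~{total_monthly_gb:.1f} GB per month")
```

Videos alone account for 60 of those ~90 GB — which is why "just turn off recordings" is the first suggestion, and why it is usually rejected.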
## DIY Cleanup: Harder Than It Looks
“We’ll just write a cleanup script.” Famous last words.
### Lifecycle Rules Look Simple
```xml
<LifecycleConfiguration>
  <Rule>
    <Filter>
      <Prefix>test-reports/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>90</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

### But Reality Is Messy
**Problem 1: One-size-fits-all doesn’t work**
Your main branch test reports are critical for debugging production issues. Feature branch reports are disposable after merge. S3 lifecycle rules can’t distinguish between them without complex prefix schemes.
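Prefix-scoped retention is technically possible, and the sketch below shows what it takes (bucket and prefix names are hypothetical, using boto3's `put_bucket_lifecycle_configuration`). The catch is exactly the point above: it only works if every CI upload path encodes the branch type, forever.

```python
# Hypothetical prefix scheme: CI must upload to main/<sha>/ or feature/<sha>/
# for these rules to apply. One mis-pathed upload and retention silently breaks.
lifecycle = {
    "Rules": [
        {
            "ID": "feature-branch-reports",  # disposable after merge
            "Filter": {"Prefix": "feature/"},
            "Status": "Enabled",
            "Expiration": {"Days": 7},
        },
        {
            "ID": "main-branch-reports",  # kept longer for production debugging
            "Filter": {"Prefix": "main/"},
            "Status": "Enabled",
            "Expiration": {"Days": 90},
        },
    ]
}

# Applying it requires boto3 and AWS credentials:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="test-reports", LifecycleConfiguration=lifecycle)
```

And this is the simple version — add a per-project dimension on top of per-branch and the rule set (and the upload-path convention backing it) grows multiplicatively.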
**Problem 2: Accidental deletion**

A single misconfigured rule can delete six months of production test history. Then you’re explaining to leadership why you can’t investigate that customer-reported bug from Q2.
**Problem 3: Cost visibility**
Which project is eating all the storage? S3 doesn’t tell you without additional tooling (CloudWatch, Cost Explorer tags, third-party analytics).
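To answer that question with S3 alone, you end up listing every object and summing sizes per prefix. A sketch (project prefixes are hypothetical; for large buckets this is slow enough that you would reach for S3 Inventory or CloudWatch metrics instead):

```python
def prefix_size_gb(client, bucket: str, prefix: str) -> float:
    """Sum object sizes under a prefix by listing every object (decimal GB)."""
    total_bytes = 0
    paginator = client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            total_bytes += obj["Size"]
    return total_bytes / 1_000_000_000

# Usage requires boto3 and AWS credentials:
# import boto3
# s3 = boto3.client("s3")
# for project in ("frontend/", "backend/", "e2e/"):  # hypothetical prefixes
#     print(project, round(prefix_size_gb(s3, "test-reports", project), 1), "GB")
```

That is a full bucket scan every time someone asks "where is the storage going?" — workable as a one-off script, painful as a recurring report.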
**Problem 4: Access control**
Who can view reports? S3 bucket policies are notoriously tricky. Teams end up either too permissive (security risk) or too restrictive (people can’t access what they need).
## What Teams Actually Need
After talking to dozens of engineering teams, the requirements are clear:
- **Automatic cleanup**: Old reports deleted without manual intervention
- **Per-project control**: Different retention for different projects
- **Predictable costs**: Know what you’ll pay before the bill arrives
- **Easy access**: Browse and share reports without S3 permissions
- **No maintenance**: No scripts to write, debug, or update
## How Gaffer Solves Storage Costs
Gaffer is purpose-built for test report hosting. Storage management is built in, not bolted on.
### Automatic Cleanup by Default
Every plan includes automatic cleanup based on your retention period:
| Plan | Default Retention | Storage Included |
|---|---|---|
| Free | 7 days | 500 MB |
| Pro | 30 days | 10 GB |
| Team | 90 days | 50 GB |
Old reports are automatically deleted. No lifecycle rules to configure. No scripts to maintain.
### Per-Project Retention
Not all projects are equal. Configure retention per-project:
- Feature branch tests: 7 days (auto-cleanup after merge)
- Main branch tests: 90 days (debug production issues)
- Compliance projects: Unlimited (paid plans can disable cleanup)
```yaml
# Your CI stays simple
- name: Upload to Gaffer
  uses: gaffer-sh/gaffer-uploader@v2
  with:
    api-key: ${{ secrets.GAFFER_API_KEY }}
    report-path: ./test-results
```

Retention is configured in the Gaffer dashboard, not your CI pipeline.
### Predictable Pricing
Go over your storage limit? Simple overage billing at $0.50/GB. No surprise bills, no complex pricing tiers.
Example scenario:
- Team plan: 50 GB included
- Actual usage: 65 GB
- Overage: 15 GB × $0.50 = $7.50 extra
Compare that to debugging S3 cost spikes across multiple buckets and projects.
### Cost Visibility Built In
See storage usage per project in your dashboard:
- Which projects use the most storage?
- How is usage trending over time?
- Where should you enable more aggressive cleanup?
No CloudWatch dashboards or Cost Explorer tags required.
## Migration Is Painless
Already have test reports in S3? You don’t need to migrate anything:
1. Add the Gaffer upload step to your CI (about 5 minutes)
2. New reports go to Gaffer automatically
3. Old S3 reports stay where they are (or delete them to save costs)
4. Over time, Gaffer becomes your source of truth
Your S3 bucket can wind down naturally while Gaffer handles new reports.
## When to Keep S3
Gaffer isn’t the right fit for everyone:
- Raw build artifacts (binaries, packages) → Keep in S3/artifact storage
- Compliance archives requiring specific certifications → Use compliant storage
- Existing tooling that depends on S3 paths → Evaluate migration cost
For test reports specifically—HTML reports, JUnit XML, JSON results—Gaffer is purpose-built and cost-optimized.
## The Bottom Line
S3 is great for general-purpose storage. It’s not optimized for test reports.
| Concern | S3 DIY | Gaffer |
|---|---|---|
| Automatic cleanup | Manual lifecycle rules | Built-in |
| Per-project retention | Complex prefix schemes | Dashboard toggle |
| Cost predictability | Varies with usage | Fixed + simple overage |
| Access control | Bucket policies | Team-based permissions |
| Setup time | Hours to days | 5 minutes |
| Maintenance | Ongoing | Zero |
## Get Started
Stop watching storage costs climb. Gaffer’s automatic cleanup keeps your test report storage predictable, with per-project controls when you need them.