Last Updated: 3/7/2026
# GitHub Actions CI/CD
Use E2B sandboxes in your GitHub Actions workflows to run testing, validation, and AI code reviews.
## Overview
E2B sandboxes integrate seamlessly with GitHub Actions, providing:
- Isolated test environments
- Parallel test execution
- AI-powered code review
- Security scanning in sandboxes
- Preview environment generation
## Basic Setup
### GitHub Action Configuration
Create `.github/workflows/e2b-tests.yml`:
```yaml
name: E2B Tests

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'

      - name: Install dependencies
        run: npm install

      - name: Run tests in E2B
        env:
          E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
        run: node test-runner.js
```

### Test Runner Script
`test-runner.js`:
```js
import { Sandbox } from 'e2b'
import fs from 'fs'

async function runTests() {
  const sandbox = await Sandbox.create()

  try {
    // Upload test files
    const testFiles = fs.readdirSync('./tests')
    for (const file of testFiles) {
      const content = fs.readFileSync(`./tests/${file}`, 'utf-8')
      await sandbox.files.write(`/home/user/tests/${file}`, content)
    }

    // Upload source code
    const srcFiles = fs.readdirSync('./src')
    for (const file of srcFiles) {
      const content = fs.readFileSync(`./src/${file}`, 'utf-8')
      await sandbox.files.write(`/home/user/src/${file}`, content)
    }

    // Install dependencies
    await sandbox.commands.run('npm install')

    // Run tests
    const result = await sandbox.commands.run('npm test')
    console.log('Test output:', result.stdout)

    if (result.exitCode !== 0) {
      console.error('Tests failed:', result.stderr)
      process.exit(1)
    }

    console.log('All tests passed!')
  } finally {
    await sandbox.kill()
  }
}

runTests().catch((err) => {
  // Exit non-zero so the CI job fails if anything throws
  console.error(err)
  process.exit(1)
})
```

## Advanced Use Cases
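Each advanced example below repeats the same create/try/finally shape. A small generic acquire-use-release helper can keep cleanup in one place; this is a sketch of the pattern in plain JavaScript, not part of the E2B API, and the name `withResource` is our own:

```js
// Generic acquire/use/release helper: guarantees cleanup even if `use` throws.
async function withResource(acquire, release, use) {
  const resource = await acquire()
  try {
    return await use(resource)
  } finally {
    await release(resource)
  }
}

// With E2B this would look like:
// await withResource(() => Sandbox.create(), (sbx) => sbx.kill(), async (sbx) => { /* ... */ })
```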
### AI Code Review
```js
import { Sandbox } from 'e2b'
import { Octokit } from '@octokit/rest'
import OpenAI from 'openai'

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN })
const openai = new OpenAI()

// GITHUB_REPOSITORY has the form "owner/repo"
const [owner, repo] = process.env.GITHUB_REPOSITORY.split('/')
const pullNumber = Number(process.env.PR_NUMBER)

async function aiCodeReview() {
  const sandbox = await Sandbox.create()

  try {
    // Get the PR diff (the default JSON response has no diff field,
    // so request the diff media type explicitly)
    const { data: diff } = await octokit.pulls.get({
      owner,
      repo,
      pull_number: pullNumber,
      mediaType: { format: 'diff' }
    })

    // Lint the change in the sandbox
    await sandbox.files.write('/home/user/code.py', diff)
    const lintResult = await sandbox.commands.run('pylint /home/user/code.py')

    // Get the AI review
    const review = await openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{
        role: 'user',
        content: `Review this code change:\n${diff}\n\nLint results:\n${lintResult.stdout}`
      }]
    })

    // Post the review comment
    await octokit.pulls.createReview({
      owner,
      repo,
      pull_number: pullNumber,
      body: review.choices[0].message.content,
      event: 'COMMENT'
    })
  } finally {
    await sandbox.kill()
  }
}
```

### Parallel Test Execution
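The example below launches every suite at once with `Promise.all`. With many suites you may want to cap how many sandboxes exist at a time; here is a plain-JavaScript limiter sketch (no E2B calls, and the name `runLimited` is our own):

```js
// Run async task factories with at most `limit` running concurrently.
// Results come back in the same order as the input tasks.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length)
  let next = 0

  async function worker() {
    while (next < tasks.length) {
      const i = next++ // claim the next task index
      results[i] = await tasks[i]()
    }
  }

  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker)
  await Promise.all(workers)
  return results
}

// e.g. await runLimited(testSuites.map(s => () => runSuite(s)), 5)
```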
```js
import { Sandbox } from 'e2b'

async function runParallelTests(testSuites) {
  const promises = testSuites.map(async (suite) => {
    const sandbox = await Sandbox.create()

    try {
      // Setup
      await sandbox.files.write('/home/user/test.js', suite.code)
      await sandbox.commands.run('npm install')

      // Run test
      const result = await sandbox.commands.run(`npm test -- ${suite.name}`)
      return {
        suite: suite.name,
        passed: result.exitCode === 0,
        output: result.stdout,
        error: result.stderr
      }
    } finally {
      await sandbox.kill()
    }
  })

  const results = await Promise.all(promises)

  // Report results
  const failed = results.filter(r => !r.passed)
  if (failed.length > 0) {
    console.error('Failed tests:', failed)
    process.exit(1)
  }
  console.log('All test suites passed!')
}
```

### Security Scanning
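The `securityScan` example below aggregates findings through helpers like `parseNpmAudit`, which it leaves undefined. One possible sketch, assuming the JSON report shape of npm 7+ (`npm audit --json`); the field names are taken from that format and worth verifying against your npm version:

```js
// Parse `npm audit --json` output (npm 7+ shape) into a flat issue list.
function parseNpmAudit(stdout) {
  const report = JSON.parse(stdout)
  // npm 7+ keys vulnerabilities by package name
  return Object.values(report.vulnerabilities ?? {}).map(v => ({
    tool: 'npm-audit',
    name: v.name,
    severity: v.severity
  }))
}
```

`parseTrivy` and `parseBandit` would follow the same idea against each tool's JSON output mode.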
```js
async function securityScan() {
  const sandbox = await Sandbox.create({
    template: 'security-tools' // Custom template with security tools preinstalled
  })

  try {
    // Clone the repo (GITHUB_REPOSITORY is "owner/repo", not a full URL)
    await sandbox.commands.run(
      `git clone https://github.com/${process.env.GITHUB_REPOSITORY}.git /home/user/repo`
    )

    // Run security scanners (JSON output is easier to parse)
    const npmAudit = await sandbox.commands.run('cd /home/user/repo && npm audit --json')
    const trivyScan = await sandbox.commands.run('trivy fs /home/user/repo')
    const bandit = await sandbox.commands.run('bandit -r /home/user/repo')

    // Aggregate results (parseNpmAudit, parseTrivy, parseBandit and
    // createSecurityIssue are your own helpers)
    const issues = [
      ...parseNpmAudit(npmAudit.stdout),
      ...parseTrivy(trivyScan.stdout),
      ...parseBandit(bandit.stdout)
    ]

    // Create a GitHub issue if vulnerabilities were found
    if (issues.length > 0) {
      await createSecurityIssue(issues)
    }
  } finally {
    await sandbox.kill()
  }
}
```

### Preview Environment
```js
async function createPreviewEnvironment() {
  const sandbox = await Sandbox.create({
    template: 'web-server'
  })

  try {
    // Deploy the PR code
    await sandbox.commands.run(
      `git clone https://github.com/${process.env.GITHUB_REPOSITORY}.git /home/user/app`
    )
    await sandbox.commands.run(`cd /home/user/app && git checkout ${process.env.PR_BRANCH}`)

    // Build and start (run the server in the background so the command returns)
    await sandbox.commands.run('cd /home/user/app && npm install')
    await sandbox.commands.run('cd /home/user/app && npm run build')
    await sandbox.commands.run('cd /home/user/app && npm start', { background: true })

    // Get the preview URL (assumes the app listens on port 3000)
    const url = `https://${sandbox.getHost(3000)}`

    // Comment on the PR
    await octokit.issues.createComment({
      owner: process.env.GITHUB_REPOSITORY_OWNER,
      repo: process.env.GITHUB_REPOSITORY.split('/')[1],
      issue_number: Number(process.env.PR_NUMBER),
      body: `Preview environment ready: ${url}`
    })

    // Keep the sandbox alive for the preview; don't kill it here
  } catch (error) {
    await sandbox.kill()
    throw error
  }
}
```

## Workflow Patterns
### Matrix Testing
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [16, 18, 20]
        # Quote Python versions: unquoted 3.10 is parsed by YAML as the number 3.1
        python-version: ['3.8', '3.9', '3.10', '3.11']
    steps:
      - uses: actions/checkout@v3

      - name: Test with Node ${{ matrix.node-version }} and Python ${{ matrix.python-version }}
        env:
          E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
          NODE_VERSION: ${{ matrix.node-version }}
          PYTHON_VERSION: ${{ matrix.python-version }}
        run: node test-matrix.js
```

### Conditional Execution
```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 2 # needed so HEAD~1 exists for the diff below

      - name: Check if tests needed
        id: check
        run: |
          if git diff --name-only HEAD~1 | grep -E '\.(js|ts|py)$'; then
            echo "run_tests=true" >> $GITHUB_OUTPUT
          fi

      - name: Run E2B tests
        if: steps.check.outputs.run_tests == 'true'
        env:
          E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
        run: node test-runner.js
```

### Artifact Collection
```js
// Download coverage files from the sandbox before it is closed
fs.mkdirSync('./coverage', { recursive: true })
const entries = await sandbox.files.list('/home/user/coverage')
for (const entry of entries) {
  const content = await sandbox.files.read(`/home/user/coverage/${entry.name}`)
  fs.writeFileSync(`./coverage/${entry.name}`, content)
}
```

Then upload them from the workflow:

```yaml
- name: Upload coverage
  uses: actions/upload-artifact@v3
  with:
    name: coverage-report
    path: coverage/
```

## Best Practices
### 1. Secrets Management

```yaml
env:
  E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
  DATABASE_URL: ${{ secrets.DATABASE_URL }}
```

### 2. Caching

```yaml
- name: Cache dependencies
  uses: actions/cache@v3
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```

### 3. Timeout Protection

```js
const result = await sandbox.commands.run('npm test', {
  timeoutMs: 300_000 // 5 minutes
})
```

### 4. Resource Cleanup

```js
try {
  // Run tests
} finally {
  await sandbox.kill() // Always clean up
}
```

## Troubleshooting
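Several of the issues below (timeouts, transient network errors) are often handled by simply retrying sandbox creation. A small retry wrapper, sketched in plain JavaScript; `withRetry` is our own helper, not an E2B API:

```js
// Retry an async operation with a fixed delay between attempts.
async function withRetry(fn, { attempts = 3, delayMs = 1000 } = {}) {
  let lastError
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      // Wait before the next attempt, but not after the last one
      if (i < attempts - 1) await new Promise(r => setTimeout(r, delayMs))
    }
  }
  throw lastError
}

// e.g. const sandbox = await withRetry(() => Sandbox.create(), { attempts: 3 })
```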
### Common Issues

1. **API Key Not Set**
   - Add `E2B_API_KEY` to your repository's GitHub Secrets
   - Reference it in the workflow as `${{ secrets.E2B_API_KEY }}`

2. **Timeout Errors**
   - Increase the timeout on sandbox commands
   - Use parallel execution for large test suites

3. **Memory Issues**
   - Use an appropriate sandbox template
   - Clean up resources between tests

4. **Network Issues**
   - Check the sandbox's internet access settings
   - Verify firewall rules