GitHub - calebevans/gha-failure-analysis: Automated AI-powered root cause analysis for GitHub Actions workflow failures.
GitHub Actions Failure Analysis
When your GitHub Actions workflow fails, this action automatically analyzes logs, correlates failures with code changes, and generates actionable root cause analysis reports.
Features
Quick Start
Same Workflow (Recommended)
Add as a final job that runs on failure:
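A minimal sketch of such a final job, assuming a `v1` release tag and input names (`llm-api-key`) that are illustrative, not confirmed — check the action's inputs table for the real names:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: make test

  analyze-failure:
    # Run after the other jobs, but only when at least one of them failed
    needs: [build]
    if: failure()
    runs-on: ubuntu-latest
    permissions:
      actions: read    # read workflow run logs
      contents: read   # correlate failures with code changes
    steps:
      - uses: calebevans/gha-failure-analysis@v1   # version tag assumed
        with:
          llm-api-key: ${{ secrets.LLM_API_KEY }}  # input name assumed
```

Listing every job in `needs` ensures the analysis job runs last; `if: failure()` keeps it skipped on green runs.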
Separate Workflow
Create a separate workflow that triggers on completion:
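A sketch of the separate-workflow variant using the standard `workflow_run` trigger; the watched workflow name, version tag, and input names are assumptions:

```yaml
name: Failure Analysis
on:
  workflow_run:
    workflows: ["CI"]     # name of the workflow to watch (example)
    types: [completed]

jobs:
  analyze:
    # Only analyze runs that actually failed
    if: ${{ github.event.workflow_run.conclusion == 'failure' }}
    runs-on: ubuntu-latest
    steps:
      - uses: calebevans/gha-failure-analysis@v1          # version tag assumed
        with:
          llm-api-key: ${{ secrets.LLM_API_KEY }}         # input name assumed
```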
Note: The github-token parameter is optional and defaults to ${{ github.token }}. Only specify it if you need to use a custom token with different permissions.
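For example, to pass a personal access token instead of the default token (secret name illustrative):

```yaml
with:
  # Only needed when the default token's permissions are insufficient,
  # e.g. to read logs across repositories.
  github-token: ${{ secrets.CUSTOM_PAT }}
```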
How It Works
When a workflow fails, the action:
PR Context Analysis
What It Does
When analyzing PR-triggered workflow failures, the action automatically:
This helps you quickly determine whether failures are caused by your changes or by unrelated infrastructure issues.
Example Output
When PR changes cause failures, you'll see:
Configuration
PR context analysis is enabled by default for PR-triggered runs. To customize:
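A sketch of disabling it, assuming a boolean input; the input name `pr-context` is hypothetical — consult the action's inputs table for the actual name:

```yaml
with:
  # Input name assumed for illustration
  pr-context: "false"   # skip PR diff correlation
```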
Important: The action analyzes the specific commit that triggered the workflow, not the current PR state. This ensures accurate analysis even if the PR has been updated since the failure.
Configuration
Required Inputs
Optional Inputs
Cordon Options (Log Preprocessing)
Outputs
LLM Providers
The action supports any LLM provider compatible with DSPy/LiteLLM:
OpenAI
Anthropic
Google Gemini
Ollama (Local)
Custom Endpoint
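Because the action goes through DSPy/LiteLLM, providers are selected with LiteLLM-style `provider/model` identifiers. A sketch covering the providers above; the `model` and `api-base` input names are assumptions, and the model names are examples:

```yaml
with:
  model: openai/gpt-4o                              # OpenAI
  # model: anthropic/claude-3-5-sonnet-20240620     # Anthropic
  # model: gemini/gemini-1.5-pro                    # Google Gemini
  # model: ollama/llama3                            # Ollama (local)
  llm-api-key: ${{ secrets.LLM_API_KEY }}           # input name assumed
  # For a custom OpenAI-compatible endpoint (input name assumed):
  # api-base: https://llm.internal.example/v1
```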
Cordon Configuration
Cordon preprocesses logs to extract semantically relevant sections. Configure it to use remote or local embeddings.
Remote Embeddings (Recommended)
Fast, with no model download required. Uses your LLM provider's embedding API:
Supported providers:
Example with Gemini:
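A sketch of remote embeddings with Gemini; the `embedding-model` input name is an assumption, while the model identifier follows LiteLLM's `provider/model` convention:

```yaml
with:
  model: gemini/gemini-1.5-pro                 # example model
  embedding-model: gemini/text-embedding-004   # input name assumed
  llm-api-key: ${{ secrets.GEMINI_API_KEY }}   # input name assumed
```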
Local Embeddings
For local embedding generation (slower, requires model download):
GPU Acceleration: If your runner has a GPU, use it for 5-15x faster preprocessing:
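A sketch of a local-embedding configuration with GPU acceleration; all three input names are assumptions, and the model shown is just a common sentence-transformers example:

```yaml
with:
  # Input names assumed for illustration
  embedding-mode: local
  embedding-model: sentence-transformers/all-MiniLM-L6-v2  # example model
  embedding-device: cuda   # use the runner's GPU when available
```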
Example Output
The action generates structured analyses like:
Security
The action automatically detects and redacts secrets from all outputs using detect-secrets. This prevents accidental exposure in: