| # | Job Name | Repo | Runs | Failures | Failure Rate | Flakiness Index | Last Failure | Top Failing Step |
|---|---|---|---|---|---|---|---|---|
| 1 | test / lint | cli | 28 | 19 | 67.9% | — | yesterday | Run build |
| 2 | test / empty-files-build | cli | 28 | 17 | 60.7% | — | yesterday | — |
| 3 | build / arm64-prebuild | cli | 28 | 17 | 60.7% | — | yesterday | Run build |
| 4 | build / amd64-prebuild | cli | 28 | 18 | 64.3% | — | yesterday | Run earthly/actions-setup@43211… |
| 5 | test / multi-platform-buildah | cli | 28 | 19 | 67.9% | — | yesterday | Run Build |
| 6 | test / container-podman-build-chunked-o… | cli | 28 | 19 | 67.9% | — | yesterday | Run Build |
| 7 | test / iso-from-image | cli | 28 | 18 | 64.3% | — | yesterday | — |
| 8 | test / iso-from-recipe | cli | 28 | 18 | 64.3% | — | yesterday | — |
| 9 | test / container-podman-build | cli | 28 | 18 | 64.3% | — | yesterday | Run Build |
| 10 | test / build-chunked-oci-build | cli | 28 | 18 | 64.3% | — | yesterday | — |
| Repo | Success Rate 7d | Success Rate 30d | Avg Duration | Total Runs (7d) | Last Stream Status |
|---|---|---|---|---|---|
| base-images | 57.1% | 64.7% | 297m | 7 | 🟢 success |
| cli | 0.0% | 0.0% | 19m | 1 | 🔴 failure |
| modules | 0.0% | 0.0% | 0m | 0 | ⚪ No data |
| Repo | Stream | 7d Rate | 30d Rate | Runs (7d) | Avg Duration | Last Run | Status |
|---|---|---|---|---|---|---|---|
| base-images | build | 57.1% | 64.7% | 7 | 297m | 23h ago | 🟢 |
| cli | main | 0.0% | 0.0% | 1 | 19m | 1d ago | 🔴 |
Publish step tracking requires live build data; it is shown after the first successful CI run.
Rates are computed from workflow step names over the last 30 days. Steps not detected in the pipeline are shown as —.
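For reference, a minimal sketch of that computation, assuming run records that expose step names and conclusions; the field names and the `cosign`/`sbom` patterns are illustrative, not the dashboard's actual schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical run records: each run lists its step names and conclusions.
runs = [
    {"started": "2026-04-06T22:52:11Z",
     "steps": {"Run build": "failure", "cosign sign": "success", "Generate SBOM": "success"}},
    {"started": "2026-04-04T07:09:56Z",
     "steps": {"Run build": "success", "cosign sign": "success"}},
]

PUBLISH_PATTERNS = ("cosign", "sbom")   # illustrative substrings used to detect publish steps
WINDOW = timedelta(days=30)

def publish_step_rates(runs, now=None):
    """Success rate per publish-step pattern over the last 30 days ('—' if never detected)."""
    now = now or datetime.now(timezone.utc)
    recent = [r for r in runs
              if now - datetime.fromisoformat(r["started"].replace("Z", "+00:00")) <= WINDOW]
    rates = {}
    for pattern in PUBLISH_PATTERNS:
        outcomes = [c for r in recent
                    for name, c in r["steps"].items() if pattern in name.lower()]
        rates[pattern] = (f"{100 * outcomes.count('success') / len(outcomes):.1f}%"
                          if outcomes else "—")
    return rates

print(publish_step_rates(runs))
```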
Collecting Scorecard data; check back after the next sync.
Scores from OpenSSF Scorecard. Click a card to view the full report.
| Status | Repo | Workflow | Branch | Trigger | Duration | Started | Jobs |
|---|---|---|---|---|---|---|---|
| 🟢 | base-images | bluebuild | main | sched | 326m | 2026-04-07T07:42:11Z | 24/24 |
| 🔴 | cli | Main branch build | main | push | 62m | 2026-04-06T22:52:11Z | 37/41 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:51:33Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:51:03Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:50:36Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:49:54Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:49:19Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:48:46Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:48:16Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | <1m | 2026-04-06T22:47:35Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | 1m | 2026-04-06T22:46:24Z | 27/27 |
| 🟡 | cli | Main branch build | main | push | 2m | 2026-04-06T22:44:57Z | 27/27 |
| 🔴 | base-images | bluebuild | main | sched | 276m | 2026-04-06T07:55:15Z | 23/24 |
| 🔴 | base-images | bluebuild | main | sched | 313m | 2026-04-05T07:17:12Z | 23/24 |
| 🟢 | base-images | bluebuild | main | sched | 300m | 2026-04-04T07:09:56Z | 24/24 |
| 🔴 | base-images | bluebuild | main | sched | 304m | 2026-04-03T07:20:07Z | 23/24 |
| 🟢 | base-images | bluebuild | main | sched | 300m | 2026-04-02T07:23:12Z | 24/24 |
| 🟢 | base-images | bluebuild | main | sched | 334m | 2026-04-01T07:44:20Z | 24/24 |
| 🟢 | base-images | bluebuild | main | sched | 54m | 2026-03-31T12:46:32Z | 24/24 |
| 🔴 | cli | Main branch build | main | push | 98m | 2026-03-30T16:15:28Z | 40/41 |
The metrics on this page are grounded in peer-reviewed research and open standards. These resources explain what each metric means, why it predicts software delivery performance, and how to improve it.
The canonical framework for measuring software delivery: Deployment Frequency, Lead Time for Changes, Change Failure Rate, and Time to Restore Service. Published by Google Cloud's DevOps Research and Assessment team and validated across thousands of organizations since 2014.
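As a quick illustration of how the four keys reduce to arithmetic over deployment records, here is a minimal Python sketch; the field names (`deployed_at`, `commit_at`, `caused_incident`, `restore_hours`) are assumptions made for the example, not fields this dashboard exports.

```python
from datetime import datetime
from statistics import median

# Illustrative deployment records; the schema is invented for this sketch.
deploys = [
    {"deployed_at": "2026-04-01T10:00:00", "commit_at": "2026-03-31T16:00:00",
     "caused_incident": False, "restore_hours": 0.0},
    {"deployed_at": "2026-04-03T09:30:00", "commit_at": "2026-04-02T11:00:00",
     "caused_incident": True, "restore_hours": 3.0},
    {"deployed_at": "2026-04-06T14:15:00", "commit_at": "2026-04-06T08:00:00",
     "caused_incident": False, "restore_hours": 0.0},
]

def dora_four_keys(deploys, window_days=7):
    """Compute the four DORA keys for a window of deployment records."""
    ts = datetime.fromisoformat
    lead_hours = [(ts(d["deployed_at"]) - ts(d["commit_at"])).total_seconds() / 3600
                  for d in deploys]
    failures = [d for d in deploys if d["caused_incident"]]
    return {
        "deployment_frequency_per_day": len(deploys) / window_days,
        "median_lead_time_hours": median(lead_hours),
        "change_failure_rate": len(failures) / len(deploys),
        "time_to_restore_hours": (sum(d["restore_hours"] for d in failures) / len(failures)
                                  if failures else 0.0),
    }

print(dora_four_keys(deploys))
```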
Automated security health checks for open source projects, scoring 0–10 across checks including signed releases, SBOM presence, branch protection, pinned dependencies, and CI test coverage. Produced by the Open Source Security Foundation (OpenSSF), a Linux Foundation project.
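To see the per-check breakdown behind an aggregate score, one option is the public Scorecard REST API; the sketch below assumes the `api.securityscorecards.dev` endpoint shape (a top-level `score` plus a `checks` array) and uses `github.com/ossf/scorecard` purely as an example repo.

```python
import json
from urllib.request import urlopen

# Example repo; any public GitHub project already scanned by Scorecard works.
repo = "github.com/ossf/scorecard"
url = f"https://api.securityscorecards.dev/projects/{repo}"

with urlopen(url, timeout=30) as resp:
    result = json.load(resp)

print(f"{repo}: aggregate score {result['score']}/10")
for check in sorted(result["checks"], key=lambda c: c["score"]):
    # Each check is scored 0-10; -1 means the check could not be evaluated.
    print(f"  {check['name']:<25} {check['score']}")
```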
A graduated framework (L0–L3) for verifiable software build integrity. Each level adds stronger guarantees: L1 means provenance exists, L2 means it is signed by a hosted build platform, L3 means the build environment itself is hardened and isolated. Developed by Google and adopted as an OpenSSF standard.
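As a hedged sketch of what that level mapping can look like in code, the snippet below buckets a SLSA v1.0-style provenance predicate by the claims it carries; the builder ID and trusted-builder set are placeholders, and real verification should go through slsa-verifier or cosign rather than ad-hoc logic like this.

```python
# Abbreviated provenance document for illustration; a real attestation would be
# cryptographically verified before any of these fields are trusted.
provenance = {
    "predicateType": "https://slsa.dev/provenance/v1",
    "predicate": {
        "runDetails": {
            "builder": {"id": "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml"}
        }
    },
}

TRUSTED_L3_BUILDERS = {
    # Example entry; which builders count as L3-capable is a policy decision.
    "https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml",
}

def rough_slsa_level(provenance: dict, signed: bool) -> int:
    """Very rough mapping from provenance claims to a SLSA build level."""
    pred = provenance.get("predicate", {})
    builder_id = pred.get("runDetails", {}).get("builder", {}).get("id", "")
    if not pred:
        return 0          # L0: no provenance at all
    if not signed:
        return 1          # L1: provenance exists but carries no signature
    if builder_id in TRUSTED_L3_BUILDERS:
        return 3          # L3: hardened, isolated hosted builder
    return 2              # L2: signed provenance from a hosted build platform

print("approximate SLSA build level:", rough_slsa_level(provenance, signed=True))
```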
Keyless, identity-based artifact signing backed by a public transparency log (Rekor). Cosign signs and verifies container images and release artifacts using short-lived OIDC certificates, so there are no long-lived private keys to manage or rotate. A CNCF project used by Kubernetes, Tekton, and the Bluefin image pipeline.
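A minimal keyless-verification sketch that shells out to the cosign CLI (flags as in cosign v2.x); the image reference and identity regexp are placeholders to be replaced with your own registry path and release workflow.

```python
import json
import subprocess

# Placeholder image; in practice this would be the tag your pipeline just pushed.
image = "ghcr.io/example-org/example-image:latest"

result = subprocess.run(
    [
        "cosign", "verify", image,
        "--certificate-oidc-issuer", "https://token.actions.githubusercontent.com",
        "--certificate-identity-regexp",
        r"^https://github\.com/example-org/.+/\.github/workflows/.+$",
        "--output", "json",
    ],
    capture_output=True, text=True, check=True,   # raises if verification fails
)

signatures = json.loads(result.stdout)
print(f"{len(signatures)} valid signature(s); entries are recorded in the Rekor transparency log")
```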
A machine-readable inventory of every component and dependency in a software artifact. SBOMs make vulnerability response faster: when a new CVE is published, you can immediately know which of your images are affected. The OpenSSF SBOM Everywhere SIG maintains tooling guidance and naming conventions.
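A small sketch of that lookup, assuming a CycloneDX-style SBOM; the package name, versions, and component list are invented for the example (a real SBOM would come from a generator such as syft).

```python
# Abbreviated CycloneDX-style SBOM for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"name": "openssl", "version": "3.0.12", "purl": "pkg:apk/alpine/openssl@3.0.12"},
        {"name": "zlib", "version": "1.3.1"},
    ],
}

# Advisory details are made up for the example.
AFFECTED = {"name": "openssl", "versions": {"3.0.11", "3.0.12"}}

def is_affected(sbom: dict) -> bool:
    """True if any component matches the advisory's package and version set."""
    return any(c.get("name") == AFFECTED["name"] and c.get("version") in AFFECTED["versions"]
               for c in sbom.get("components", []))

print("image affected by the advisory:", is_affected(sbom))
```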
The CNCF Technical Advisory Group for Security publishes authoritative whitepapers on cloud-native supply chain security. The Software Supply Chain Best Practices paper (v2, 2025) and the Secure Software Factory reference architecture define the practices that the Scorecard and SLSA checks encode.
The authoritative CNCF definition of what internal developer platforms are, what they should measure (user satisfaction, self-service rate, onboarding time), and how platform teams should operate. Published by the TAG App Delivery Platforms Working Group. Recommends DORA metrics as the delivery measurement standard for platform teams.
A 4-level model (Provisional → Operational → Scalable → Optimizing) across five aspects: Investment, Adoption, Interfaces, Operations, and Measurement. Helps platform teams understand where they are and what practices characterize the next level. Published by the CNCF TAG App Delivery Platforms Working Group.
A 5-level model (Build → Operate → Scale → Improve → Adapt) across Business Outcomes, People, Process, Policy, and Technology. Maintained by the CNCF Cartografos Working Group. Version 4 (2025) adds AI and FinOps dimensions. Useful for understanding where cloud-native adoption fits in the broader organizational journey.
The peer-reviewed research behind DORA metrics. Nicole Forsgren, Jez Humble, and Gene Kim identified 24 technical, process, and cultural capabilities that predict software delivery performance and organizational outcomes. Required reading for understanding why deployment frequency and lead time matter.
Google's Site Reliability Engineering book defines four signals sufficient to monitor any user-facing service: Latency, Traffic, Errors, and Saturation. These are the production observability complement to DORA: they define what a "failure" actually is (without them, Change Failure Rate cannot be accurately measured) and predict when MTTR will spike before incidents occur.
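For concreteness, a toy computation of the four signals from request records; the record fields, the window, and the CPU stand-in for saturation are illustrative rather than definitions from the SRE book.

```python
# Toy request sample; in production these come from your metrics pipeline.
requests = [
    {"latency_ms": 42, "status": 200}, {"latency_ms": 380, "status": 200},
    {"latency_ms": 51, "status": 500}, {"latency_ms": 47, "status": 200},
]
window_seconds = 60
cpu_utilization = 0.72                      # stand-in saturation measure

latencies = sorted(r["latency_ms"] for r in requests)
signals = {
    # crude p99 for a tiny sample: take the value at the 99th-percentile index
    "latency_p99_ms": latencies[min(len(latencies) - 1, int(0.99 * len(latencies)))],
    "traffic_rps": len(requests) / window_seconds,
    "error_rate": sum(r["status"] >= 500 for r in requests) / len(requests),
    "saturation": cpu_utilization,
}
print(signals)   # a spike in error_rate here is what marks a change as "failed" for CFR
```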
DORA measures what the pipeline does; SPACE measures how developers experience it. Developed by Nicole Forsgren (GitHub), Margaret-Anne Storey, and colleagues at Microsoft Research. Five dimensions: Satisfaction, Performance, Activity, Communication/Collaboration, and Efficiency. Never measure activity in isolation.