Hyperlight sandbox

A lightweight Virtual Machine Manager (VMM) designed to be embedded within applications, enabling safe execution of untrusted code in micro virtual machines with very low latency and minimal overhead

Latest prerelease from main branch

What's Changed

Full Changelog (excl. dependencies)

Full Changelog (dependencies)

Full Changelog: v0.13.0...dev-latest

Harbor graduated

Harbor is an open source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. It can be installed on any Kubernetes environment or on a system with Docker support.

v2.15.0

What's Changed

Exciting New Features 🎉

Enhancement 🚀

  • Add max_upstream_conn parameter for each proxy_cache project by @stonezdj in #22348
  • add per-endpoint CA certificate support for registry endpoints by @wy65701436 in #22535
  • Add oci type support for jfrog registry by @stonezdj in #22589
  • add DeleteTag support for both aws and azure cr adapters by @shaiatias in #22227

Component updates ⬆️

Docs update 🗄️

Community update 🧑🏻‍🤝‍🧑🏾

Bump Component Version 🤖

Other Changes

  • Remove port 9443 from harbor IP for webhook event check by @stonezdj in #22320
  • Remove GCR replication because GCR account is removed by @stonezdj in #22309
  • chore(deps): bump aws-actions/configure-aws-credentials from 4.2.1 to 5.0.0 by @dependabot[bot] in #22324
  • chore(deps): bump actions/setup-node from 4 to 5 by @dependabot[bot] in #22325
  • chore(deps): bump actions/stale from 9.1.0 to 10.0.0 by @dependabot[bot] in #22323
  • chore(deps): bump actions/upload-artifact from 4 to 5 by @dependabot[bot] in #22509
  • docs: minor improvement for docs by @geogrego in #22526
  • Add HARBOR_ADMIN to run upgrade script by @stonezdj in #22615
  • Correct the log upload path and make sure it always run by @stonezdj in #22616
  • chore(deps): bump actions/checkout from 5 to 6 by @dependabot[bot] in #22584
  • chore(deps): bump aws-actions/configure-aws-credentials from 5.0.0 to 5.1.1 by @dependabot[bot] in #22599
  • chore(deps): bump actions/upload-artifact from 5 to 6 by @dependabot[bot] in #22642
  • Add Cosign keyless signing for Harbor release artifacts by @Aloui-Ikram in #22578
  • chore(deps): bump sigstore/cosign-installer from 3.7.0 to 4.0.0 by @dependabot[bot] in #22732
  • ci: migrate build workflows to ubuntu-latest runners by @chlins in #22750
  • ci: fix the publishImage script with new docker version by @chlins in #22753
  • chore(deps): bump kentaro-m/auto-assign-action from 2.0.0 to 2.0.1 by @dependabot[bot] in #22748
  • chore(deps): bump github/codeql-action from 3 to 4 by @dependabot[bot] in #22432

New Contributors

Full Changelog: v2.14.0...v2.15.0

Keycloak incubating

Keycloak is an open-source identity and access management solution for modern applications and services, built on top of industry security standard protocols.

nightly

Translations update from Hosted Weblate (#47175)

Updated translations for Turkish (tr), German (de), Romanian (ro), Italian (it), Indonesian (id), Dutch (nl), French (fr), and Swedish (sv), along with repeated translation-file cleanups applied by the Weblate "Cleanup translation files" and "Remove blank strings" hooks.

Signed-off-by: Arif EROL <arif.erol16@gmail.com>
Signed-off-by: Hosted Weblate <hosted@weblate.org>
Signed-off-by: Alexander Schwartz <alexander.schwartz@gmx.net>
Signed-off-by: Robin <39960884+robson90@users.noreply.github.com>
Signed-off-by: Liviu Roman <contact@liviuroman.com>
Signed-off-by: albanobattistella <albano_battistella@hotmail.com>
Signed-off-by: Andika Triwidada <andika@gmail.com>
Signed-off-by: Andy Airey <airey.andy@gmail.com>
Signed-off-by: Jan Herrygers <jherrygers@vaa.com>
Signed-off-by: Sylvain Pichon <service@spichon.fr>
Signed-off-by: Daniel Nylander <daniel@danielnylander.se>
Co-authored-by: Arif EROL <arif.erol16@gmail.com>
Co-authored-by: Alexander Schwartz <alexander.schwartz@gmx.net>
Co-authored-by: Robin <39960884+robson90@users.noreply.github.com>
Co-authored-by: Liviu Roman <contact@liviuroman.com>
Co-authored-by: albanobattistella <albano_battistella@hotmail.com>
Co-authored-by: Andika Triwidada <andika@gmail.com>
Co-authored-by: Andy Airey <airey.andy@gmail.com>
Co-authored-by: Jan Herrygers <jherrygers@vaa.com>
Co-authored-by: Sylvain Pichon <service@spichon.fr>
Co-authored-by: Daniel Nylander <daniel@danielnylander.se>

Kubernetes graduated

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications

v1.36.0-beta.0

See kubernetes-announce@. Additional binary downloads are linked in the CHANGELOG.

See the CHANGELOG for more details.

Argo graduated

Kubernetes-native tools to run workflows, manage clusters, and do GitOps right.

v3.4.0-rc2

Quick Start

Non-HA:

kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/argoproj/argo-cd/v3.4.0-rc2/manifests/install.yaml

HA:

kubectl create namespace argocd
kubectl apply -n argocd --server-side --force-conflicts -f https://raw.githubusercontent.com/argoproj/argo-cd/v3.4.0-rc2/manifests/ha/install.yaml

Release Signatures and Provenance

All Argo CD container images are signed by cosign. Provenance meeting the SLSA Level 3 specification is generated for container images and CLI binaries. See the documentation on how to verify.

Release Notes Blog Post

For a detailed breakdown of the key changes and improvements in this release, check out the official blog post.

Upgrading

If upgrading from a different minor version, be sure to read the upgrading documentation.

Changelog

Features

  • 3157fb1: feat(helm): support wildcard glob patterns for valueFiles (cherry-pick #26768 for 3.4) (#26919) (@argo-cd-cherry-pick-bot[bot])

Bug fixes

  • 21e13a6: fix(UI): show RollingSync step clearly when labels match no step (cherry-pick #26877 for 3.4) (#26882) (@argo-cd-cherry-pick-bot[bot])
  • e70034a: fix(ci): add .gitkeep to images dir (cherry-pick #26892 for 3.4) (#26912) (@argo-cd-cherry-pick-bot[bot])
  • 5deef68: fix(ui): include _-prefixed dirs in embedded assets (cherry-pick #26589 for 3.4) (#26909) (@argo-cd-cherry-pick-bot[bot])
  • 226178c: fix: stack overflow when processing circular ownerrefs in resource graph (#26783) (cherry-pick #26790 for 3.4) (#26878) (@argo-cd-cherry-pick-bot[bot])

Full Changelog: v3.4.0-rc1...v3.4.0-rc2

LoxiLB sandbox

eBPF-based cloud-native load balancer. Powering Kubernetes|Edge|5G|IoT|XaaS apps.

vlatest

Merge pull request #874 from TrekkieCoder/main

gh-868 Generate packages runnable with systemd

Hyperlight sandbox

A lightweight Virtual Machine Manager (VMM) designed to be embedded within applications, enabling safe execution of untrusted code in micro virtual machines with very low latency and minimal overhead

Release v0.13.1

What's Changed

Fixed

  • Explicitly error out on host-guest version mismatch by @ludfjig in #1252

Added

Full Changelog (excl. dependencies)

Full Changelog (dependencies)

Full Changelog: v0.13.0...v0.13.1

OpenFGA incubating

OpenFGA is a high-performance and flexible authorization/permission system built for developers and inspired by Google Zanzibar

v1.12.1

Changed

  • The ListObjects "pipeline" algorithm ditches its custom Pipe implementation and replaces it with Go native channels (see the sketch after this list). #2977
  • Refactor tuple validation and manipulation functions for optimal performance. #2984
  • Update grpc-go version to v1.79.3 and grpc-health-probe to v0.4.47. #2988
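
The first item above replaces a bespoke pipe with native channels; for readers unfamiliar with the pattern, here is a minimal, generic sketch of a channel-based pipeline stage (names are illustrative, not OpenFGA's actual types):

package main

import "fmt"

// stage reads values from in, applies f, and forwards the results on the
// returned channel, closing it once the input is exhausted.
func stage(in <-chan int, f func(int) int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for v := range in {
            out <- f(v)
        }
    }()
    return out
}

func main() {
    in := make(chan int)
    go func() {
        defer close(in)
        for i := 1; i <= 3; i++ {
            in <- i
        }
    }()
    for v := range stage(in, func(v int) int { return v * v }) {
        fmt.Println(v) // 1, 4, 9
    }
}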

Fixed

  • Fixed OTEL_EXPORTER_OTLP_ENDPOINT not accepting URIs with schemes (e.g. http://host:4317). The scheme is now stripped before passing to the gRPC exporter, and an https:// scheme enables TLS regardless of the trace.otlp.tls.enabled flag. #2981
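
A minimal sketch of the described normalization, assuming the endpoint arrives as a plain string such as "http://host:4317"; the function name is illustrative, not OpenFGA's internal API:

package main

import (
    "fmt"
    "net/url"
    "strings"
)

// normalizeOTLPEndpoint strips a URI scheme before the value is handed to
// the gRPC exporter, and reports whether an https:// scheme implies TLS.
func normalizeOTLPEndpoint(endpoint string) (target string, useTLS bool, err error) {
    if !strings.Contains(endpoint, "://") {
        return endpoint, false, nil // no scheme: pass through unchanged
    }
    u, err := url.Parse(endpoint)
    if err != nil {
        return "", false, err
    }
    return u.Host, u.Scheme == "https", nil
}

func main() {
    target, useTLS, _ := normalizeOTLPEndpoint("http://host:4317")
    fmt.Println(target, useTLS) // host:4317 false
}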

Full Changelog: v1.12.0...v1.12.1

SPIRE graduated

SPIRE implements the SPIFFE standards to provide cryptographic service identity (e.g. X.509 certificates and JWTs) and identity federation to workloads, independent of where those workloads are running. SPIRE provides secure attestation of both the workload itself and the environment it is running within, and uses that information against custom-defined policy to determine the identity of the workload and issue the appropriate credentials.

v1.14.4

Fixed

  • The version that the agent reported at startup was replaced by an empty string every time the agent re-attested or renewed its SVID (#6763)

k3s sandbox

Lightweight Kubernetes

v1.35.3-rc1+k3s1

[release-1.35] Update to v1.35.3-k3s1 and Go 1.25.7 (#13835)

* Update to v1.35.3

* Update how VERSION_GOLANG is set

Signed-off-by: Rafael Breno <rafael_breno@outlook.com>

Linkerd graduated

Ultra light, ultra simple, ultra powerful. Linkerd adds security, observability, and reliability to Kubernetes, without the complexity.

edge-26.3.3

What's Changed

Full Changelog: edge-26.3.2...edge-26.3.3

Krkn sandbox

Chaos testing tool for Kubernetes to identify bottlenecks and improve resilience and performance under failure conditions.

v5.0.1

Release v5.0.1

Download Artifacts

Changes

What's Changed

Full Changelog: v5.0.1-beta.1...v5.0.1

Dapr graduated

The Distributed Application Runtime (Dapr) provides APIs that simplify microservice architecture development and increase developer productivity. Whether your communication pattern is service-to-service invocation or pub/sub messaging, Dapr helps you write resilient and secure microservices…

Dapr Runtime v1.17.2

Dapr 1.17.2

This update includes security fixes, a breaking change, a new component, and bug fixes:

Go standard library vulnerabilities fixed by upgrading to Go 1.25.8

Problem

Three vulnerabilities were identified in the Go standard library used by Dapr 1.17.1 (Go 1.24.13):

  • GO-2026-4603: URLs in meta content attribute actions are not escaped in html/template, allowing potential cross-site scripting via crafted URLs.
  • GO-2026-4602: FileInfo can escape from a Root in os, potentially allowing access to files outside an intended directory boundary.
  • GO-2026-4601: Incorrect parsing of IPv6 host literals in net/url, which could lead to unexpected URL routing or SSRF in applications that parse user-supplied URLs.

Impact

Applications using html/template, os.Root-scoped file operations, or net/url URL parsing are potentially affected by these vulnerabilities. All three are fixed in Go 1.25.8.

Root Cause

The vulnerabilities are in the Go standard library and are not specific to Dapr code. They affect any Go program compiled with Go versions prior to 1.25.8.

Solution

Upgraded the Go toolchain from 1.24.13 to 1.25.8 across all modules and Docker images in the repository.

Register RavenDB state store component

Problem

The RavenDB state store component from components-contrib was not registered in the Dapr runtime, so it could not be used as a state store in Dapr applications.

Impact

Users could not use RavenDB as a state store backend with Dapr, despite the component implementation being available in components-contrib.

Root Cause

The component registration file for the RavenDB state store was missing from the Dapr runtime's component loader (cmd/daprd/components/).

Solution

Added the state_ravendb.go registration file to register the RavenDB state store component with the default state store registry. The component is available when building with the allcomponents build tag. The ravendb-go-client dependency was added to go.mod.

Workflow state retention policy CRD fields use incorrect type (Breaking Change)

Problem

The Configuration CRD defined the stateRetentionPolicy fields (anyTerminal, completed, failed, terminated) as type: integer, format: int64, but the Go API types use metav1.Duration which serializes as strings (e.g. "1s", "168h").
This mismatch caused Kubernetes to reject valid duration string values for these fields, and prevented the workflow state retention policy from being configured correctly via the Kubernetes Configuration CRD.

Impact

Users running Dapr in Kubernetes mode could not configure the workflow state retention policy using the Configuration CRD with human-readable duration strings like "1s" or "168h". Kubernetes validation rejected these values because the CRD schema expected integers.
Additionally, even if integer nanosecond values were used to bypass the CRD schema validation, the internal configuration deserializer could not correctly unmarshal the metav1.Duration string format sent by the operator, causing daprd to fail with:

Fatal error from runtime: error loading configuration: json: cannot unmarshal string into Go struct field WorkflowStateRetentionPolicy.spec.workflow.stateRetentionPolicy.anyTerminal of type time.Duration

Root Cause

The Configuration CRD YAML (charts/dapr/crds/configuration.yaml) was not regenerated after the Go API type WorkflowStateRetentionPolicy was updated to use *metav1.Duration fields.

Solution

Updated the CRD schema to use type: string for all stateRetentionPolicy fields, matching the metav1.Duration serialization format.
Added a custom UnmarshalJSON method on the internal config.WorkflowStateRetentionPolicy struct that deserializes via the configapi.WorkflowStateRetentionPolicy type (which uses *metav1.Duration), correctly handling both the Kubernetes CRD string format and the standalone YAML format.
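
A minimal sketch of that deserialization pattern, with simplified type names standing in for the real Dapr structs: unmarshal into a metav1.Duration-based API shape first, then copy the parsed values into the internal representation.

package main

import (
    "encoding/json"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// apiPolicy mirrors the Kubernetes API shape, where durations serialize as
// strings such as "1s" or "168h".
type apiPolicy struct {
    AnyTerminal *metav1.Duration `json:"anyTerminal,omitempty"`
}

// Policy is the internal representation using plain time.Duration.
type Policy struct {
    AnyTerminal time.Duration
}

// UnmarshalJSON routes through the API type so string durations parse.
func (p *Policy) UnmarshalJSON(data []byte) error {
    var api apiPolicy
    if err := json.Unmarshal(data, &api); err != nil {
        return err
    }
    if api.AnyTerminal != nil {
        p.AnyTerminal = api.AnyTerminal.Duration
    }
    return nil
}

func main() {
    var p Policy
    if err := json.Unmarshal([]byte(`{"anyTerminal":"168h"}`), &p); err != nil {
        panic(err)
    }
    fmt.Println(p.AnyTerminal) // 168h0m0s
}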

Upgrading

This is a change that requires a CRD update. Kubernetes does not automatically update CRDs when upgrading Dapr via Helm.
You must manually update the CRDs before upgrading.
See the Kubernetes upgrade guide for detailed instructions on how to force update CRDs.

To update CRDs manually:

kubectl apply -f https://raw.githubusercontent.com/dapr/dapr/v1.17.2/charts/dapr/crds/configuration.yaml

Pub/sub messages incorrectly routed to dead-letter queue during graceful shutdown

Problem

During graceful shutdown (or hot-reload of a pub/sub component), messages arriving after the subscription began closing were immediately NACKed by Dapr.
Brokers that support dead-letter queues interpreted these NACKs as permanent delivery failures and routed the messages to the dead-letter queue, where they were never retried.

Impact

Applications using pub/sub with dead-letter queues configured could lose messages during rolling deployments, restarts, or any event that triggers graceful shutdown.
Rather than being redelivered to another healthy consumer, these messages were silently diverted to the dead-letter queue.
This affected all subscription types: declarative, programmatic (HTTP and gRPC), and streaming subscriptions.

Root Cause

When a subscription was closing, Dapr rejected new incoming messages with a "subscription is closed" error.
The pluggable pub/sub component layer translated this error into a NACK sent back to the broker.
The broker then treated the message as a permanent failure and routed it to the configured dead-letter topic.

Solution

Dapr now holds messages that arrive during subscription shutdown instead of rejecting them.
The message handler blocks until the broker connection is torn down, at which point the broker treats the message as unacknowledged and redelivers it to another available consumer.
In-flight messages that were already being processed continue to complete normally before the subscription fully closes.
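
A simplified sketch of the hold-instead-of-NACK behavior described above; the types and handler shape are illustrative, not Dapr's actual subscription API:

package main

import (
    "context"
    "fmt"
)

type message struct{ data string }

// handle parks messages that arrive while the subscription is closing,
// blocking until the broker connection is torn down, so the broker sees
// them as unacknowledged and redelivers them to another consumer.
func handle(ctx context.Context, closing <-chan struct{}, msg message, process func(message) error) error {
    select {
    case <-closing:
        <-ctx.Done() // block until the connection drops; no NACK is sent
        return ctx.Err()
    default:
        return process(msg)
    }
}

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    closing := make(chan struct{})
    close(closing) // subscription is already shutting down
    go cancel()    // broker connection is torn down shortly after

    err := handle(ctx, closing, message{"m1"}, func(message) error { return nil })
    fmt.Println(err) // context canceled: the message was never NACKed
}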

Scheduler jobs with Drop failure policy may fire more than once during host reconnection

Problem

When the scheduler cluster membership changed (including during initial startup), one-shot jobs or jobs with a Drop failure policy could be triggered more than once.

Impact

Jobs configured with DueTime (one-shot) or a Drop failure policy could be delivered to the application multiple times instead of at most once.
This was more likely to occur during scheduler startup or when the scheduler cluster membership changed, as etcd can emit multiple membership events in quick succession.

Root Cause

A race condition existed between two asynchronous event loops in daprd's scheduler connection management.
The hosts loop manages gRPC client connections to the scheduler, and the connector loop manages the stream-based cluster that runs on those connections.

When the hosts loop received a second set of scheduler host addresses (e.g. from an etcd membership event during startup), it immediately closed the first set of gRPC client connections before the connector loop had a chance to gracefully stop the cluster running on those connections.
This caused active streams to break mid-flight, in-flight job triggers to be marked as undeliverable and re-staged, and jobs to fire again when new streams connected.

Solution

Moved gRPC connection lifecycle management from the hosts loop to the connector loop.
The hosts loop now passes connection close functions to the connector via the Connect event, and the connector closes old connections only after it has gracefully stopped the previous cluster.
This ensures connections are never closed while streams are still active.

Scheduler fails to start due to trailing dot in cluster domain DNS lookup

Problem

The Dapr Scheduler service fails to start in Kubernetes with a fatal error:

Fatal error running scheduler: failed to create etcd config: peer certificate does not contain the expected DNS name dapr-scheduler-server-1.dapr-scheduler-server.dapr-system.svc.cluster.local. got [dapr-scheduler-server-0.dapr-scheduler-server.dapr-system.svc.cluster.local dapr-scheduler-server-1.dapr-scheduler-server.dapr-system.svc.cluster.local dapr-scheduler-server-2.dapr-scheduler-server.dapr-system.svc.cluster.local]

Impact

The Scheduler service cannot start in any Kubernetes cluster where the DNS CNAME lookup for the cluster domain returns a fully-qualified domain name with a trailing dot (standard DNS behavior). This prevents all scheduler-based functionality including job scheduling.

Root Cause

The scheduler resolves the Kubernetes cluster domain via a DNS CNAME lookup. Per DNS convention, CNAME responses include a trailing dot (e.g. cluster.local.).
The code only stripped leading dots from the result, leaving the trailing dot intact.
This caused the etcd peer TLS server name to end with an extra dot, which did not match the certificate SANs and failed validation.

Solution

Changed strings.TrimLeft to strings.Trim to strip dots from both ends of the parsed cluster domain, ensuring the trailing dot from DNS CNAME responses is removed.
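
The difference between the two calls is easy to demonstrate:

package main

import (
    "fmt"
    "strings"
)

func main() {
    // DNS CNAME answers are fully qualified and end with a dot.
    cname := ".cluster.local."

    fmt.Println(strings.TrimLeft(cname, ".")) // "cluster.local." (trailing dot kept)
    fmt.Println(strings.Trim(cname, "."))     // "cluster.local"
}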

Service invocation buffers entire streaming request body in memory

Problem

When sending a request with a streaming body (chunked transfer encoding) through Dapr HTTP service invocation, the sidecar buffered the entire request body in memory before forwarding it.
For large payloads, such as file uploads or long-running data streams, this caused excessive memory usage and potential out-of-memory crashes.

Impact

Any HTTP service invocation request without a known Content-Length (e.g. chunked uploads, streamed data, piped bodies) had its entire body buffered in memory by the sending sidecar.
This made Dapr unsuitable for streaming large payloads between services and could cause sidecar OOM kills in production.

Root Cause

The sidecar's retry mechanism unconditionally buffered the request body into memory so it could replay the body on retry.
For streaming requests, the body cannot be replayed because it is consumed as it is read, making the buffering both unnecessary and harmful.

Solution

The sidecar now detects streaming requests (those with no known content length) and skips request body buffering entirely.
Both the built-in retry logic and any user-configured resiliency retry policies are automatically bypassed for streaming requests, since retrying would require re-reading a body that has already been consumed.
Non-streaming requests with a known Content-Length continue to support retries as before.
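
In Go's net/http, a body of unknown length (for example, chunked transfer encoding) reports ContentLength == -1, which is one plausible way to detect a streaming request; a minimal sketch under that assumption:

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
)

// isStreaming reports whether the request body length is unknown, as with
// chunked transfer encoding; such a body cannot be buffered and replayed.
func isStreaming(r *http.Request) bool {
    return r.ContentLength == -1
}

func main() {
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Println("streaming:", isStreaming(r)) // streaming: true
    }))
    defer srv.Close()

    // An io.Pipe has no known length, so the client sends it chunked.
    pr, pw := io.Pipe()
    go func() {
        pw.Write([]byte("data"))
        pw.Close()
    }()
    http.Post(srv.URL, "application/octet-stream", pr)
}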

Service invocation buffers entire streaming response body in memory

Problem

When proxying HTTP responses through service invocation, the sidecar buffered the entire response body in memory before forwarding it to the caller.
For large or unbounded streaming responses, this caused excessive memory usage and potential out-of-memory crashes.

Impact

Any service invocation response with a large or streaming body could cause sidecar OOM kills, regardless of HTTP status code.
This made Dapr unsuitable for proxying streaming responses such as server-sent events, file downloads, or long-running data streams between services.

Root Cause

The sidecar's resiliency mechanism read the full response body into memory so it could evaluate whether to retry the request.
When the request itself is a stream that has already been consumed, retries are impossible regardless of the response, making the buffering unnecessary.

Solution

For streaming requests, the sidecar now forwards response bodies directly to the caller without buffering them in memory.
Resiliency features like circuit breakers continue to track failures normally.
Non-streaming requests continue to support retries and buffered error handling as before.

Oracle Database state store BulkGet returns HTTP 500 instead of per-key errors

Problem

When using the Oracle Database state store component, a BulkGet request that encountered an error for one or more keys returned an HTTP 500 error for the entire request instead of returning per-key errors alongside successful results.

Impact

Applications using BulkGet with the Oracle Database state store could not retrieve any results if even a single key encountered an error. Instead of receiving successful results for valid keys with per-key errors for failed keys, the entire operation failed with an HTTP 500 response.

Root Cause

The BulkGet implementation in the Oracle Database state store component returned a top-level error when any individual key retrieval failed, rather than collecting the error and associating it with the specific key in the response.

Solution

Updated the BulkGet implementation to return per-key errors in the BulkGetResponse items instead of returning a top-level error. Successful key retrievals are now returned alongside any per-key errors, matching the expected state store BulkGet contract.

Pulsar pub/sub publishes invalid JSON messages when Avro schema is configured

Problem

When the Pulsar pub/sub component was configured with an Avro schema, JSON messages were published without being validated against the schema. Invalid messages that did not conform to the Avro schema were accepted and published to the topic.

Impact

Applications relying on Avro schema enforcement at the Pulsar pub/sub layer could publish malformed messages that did not conform to the expected schema. Downstream consumers expecting schema-compliant messages could encounter deserialization failures or data integrity issues.

Root Cause

The Pulsar pub/sub component did not validate JSON message payloads against the configured Avro schema before publishing. The schema was used only for consumer-side deserialization, not for producer-side validation.

Solution

Added JSON-to-Avro schema validation in the publish path. Before publishing, the component now validates JSON message payloads against the configured Avro schema and returns an error if the message does not conform, preventing invalid messages from being published to the topic.
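
A hedged sketch of producer-side validation using the linkedin/goavro library (whether this is the library Dapr uses is not stated above): textual decoding fails when the JSON does not conform to the schema, so a failed decode can be surfaced as a publish error.

package main

import (
    "fmt"

    "github.com/linkedin/goavro/v2"
)

// validateJSONAgainstAvro rejects payloads that do not conform to schema.
func validateJSONAgainstAvro(schema string, payload []byte) error {
    codec, err := goavro.NewCodec(schema)
    if err != nil {
        return fmt.Errorf("invalid avro schema: %w", err)
    }
    if _, _, err := codec.NativeFromTextual(payload); err != nil {
        return fmt.Errorf("message does not conform to schema: %w", err)
    }
    return nil
}

func main() {
    schema := `{"type":"record","name":"msg","fields":[{"name":"id","type":"string"}]}`
    fmt.Println(validateJSONAgainstAvro(schema, []byte(`{"id":"42"}`))) // <nil>
    fmt.Println(validateJSONAgainstAvro(schema, []byte(`{"id":42}`)))   // error
}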

Actor placement dissemination failures with many replicas

Problem

After upgrading to Dapr 1.17.x, deployments with many replicas (e.g. 50+) experience frequent "dissemination timeout after 8s" errors, with /placement/state showing only a fraction of the expected hosts.

Impact

Actor invocations fail intermittently because most sidecars never receive a complete placement table. Rolling restarts and scaling events amplify the problem, making large actor deployments unstable.

Root Cause

Three issues combined to cause a cascading failure during dissemination:

  1. Stale UNLOCK version accepted: The sidecar disseminator assigned the incoming version before comparing it against the current version, so the guard currentVersion > version always evaluated to false. Stale UNLOCK messages were incorrectly applied.
  2. Errors killed the disseminator permanently: When the sidecar detected a version mismatch on UPDATE or received an unknown operation, it returned a fatal error that terminated the disseminator loop entirely. The sidecar never reconnected to the placement service and remained stuck.
  3. Sequential dissemination rounds for concurrent connections: When many replicas connected to the placement service simultaneously while a dissemination round was in progress, each waiting connection triggered its own sequential dissemination round on completion. With N waiting replicas, this created N rounds instead of 1, causing timeouts that disconnected other sidecars and produced the cascading failure.

Solution

  1. Fixed the UNLOCK version guard to compare before assignment, so stale versions are correctly rejected (sketched after this list).
  2. Changed version mismatch and unknown operation handling to cancel the stream and trigger a clean reconnection instead of killing the disseminator.
  3. Batched all connections that arrive during an active dissemination round into a single round, reducing N sequential rounds to 1.
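
A minimal sketch of the guard-ordering fix in item 1, with illustrative names: the comparison must run before the incoming version is assigned.

package main

import "fmt"

type disseminator struct{ currentVersion uint64 }

// applyUnlock rejects stale UNLOCK messages. The bug was assigning
// d.currentVersion = version first, which made the comparison always false.
func (d *disseminator) applyUnlock(version uint64) bool {
    if d.currentVersion > version {
        return false // stale: an older version must not be applied
    }
    d.currentVersion = version
    return true
}

func main() {
    d := &disseminator{currentVersion: 7}
    fmt.Println(d.applyUnlock(5)) // false: stale UNLOCK rejected
    fmt.Println(d.applyUnlock(8)) // true: newer version applied
}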

Nil pointer dereference in conversation LangChain Go Kit LLM logger

Problem

The conversation component using the LangChain Go Kit could panic with a nil pointer dereference when the LLM logger was invoked.

Impact

Applications using the conversation API with the LangChain Go Kit-based component could experience unexpected crashes due to a nil pointer dereference, causing the Dapr sidecar to restart.

Root Cause

The LLM logger callback in the LangChain Go Kit conversation component was called with a nil pointer, and the logger did not perform a nil check before accessing the pointer.

Solution

Added a nil pointer check in the LangChain Go Kit LLM logger to prevent the dereference, ensuring the conversation component handles the case gracefully without panicking.

Workflow activities with large results fail with gRPC ResourceExhausted error

Problem

Workflow activities that return results larger than ~2MB fail with a ResourceExhausted gRPC error when scheduling the activity result reminder via the scheduler:

Error scheduling reminder job activity-result-XXXX due to: rpc error: code = ResourceExhausted desc = trying to send message larger than max (37950104 vs. 2097152)

Impact

Any workflow activity returning a result larger than the default gRPC send message size limit (~2MB) fails to deliver its result back to the parent orchestration. The orchestration hangs indefinitely waiting for the activity result, eventually timing out or stalling.

Root Cause

The scheduler gRPC client configured MaxCallRecvMsgSize to allow receiving large messages, but did not configure MaxCallSendMsgSize. This left the send-side limit at the gRPC default (~2MB). When an activity completes, its result is serialized into a reminder job request sent to the scheduler. If the activity result exceeds the default limit, the gRPC client rejects the outgoing message before it reaches the server.

Solution

Added MaxCallSendMsgSize to the scheduler gRPC client dial options, matching the existing MaxCallRecvMsgSize configuration.
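
A minimal sketch of symmetric dial options; the target address and the 64 MiB limit are illustrative, not the values Dapr actually uses:

package main

import (
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
)

func main() {
    const maxMsgSize = 64 << 20 // 64 MiB, illustrative only

    // Setting only MaxCallRecvMsgSize leaves outgoing messages capped at
    // the client's default send limit; configure both directions.
    conn, err := grpc.NewClient("scheduler.example:50006",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithDefaultCallOptions(
            grpc.MaxCallRecvMsgSize(maxMsgSize),
            grpc.MaxCallSendMsgSize(maxMsgSize),
        ),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
}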

Bulk publish does not apply namespace prefix to topic

Problem

When using the Bulk Publish API with a pub/sub component that has NamespaceScoped enabled, messages were published to the un-namespaced topic instead of the namespace-prefixed topic.

Impact

Applications using namespace-scoped pub/sub components with the Bulk Publish API experienced silent message loss. Bulk-published messages were routed to the wrong topic (e.g. the un-namespaced exchange), while subscribers were listening on the namespace-prefixed topic. The regular Publish API was not affected, so only bulk publish users encountered this issue.

Root Cause

The Publish method in publisher.go prepends the namespace to req.Topic when NamespaceScoped is true, but the BulkPublish method did not include this same namespace-prefixing step. This caused bulk-published messages to bypass the namespace scoping entirely.

Solution

Added the namespace prefix guard to BulkPublish in publisher.go, immediately after scope validation and before either the native BulkPublisher or defaultBulkPublisher fallback path is invoked. This ensures bulk-published messages are routed to the same namespace-prefixed topic as regular published messages.
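
A simplified sketch of the described guard, with illustrative types standing in for Dapr's actual publisher:

package main

import "fmt"

type bulkPublishRequest struct {
    Topic    string
    Messages []string
}

type publisher struct {
    namespace       string
    namespaceScoped bool
}

// BulkPublish now applies the same namespace prefix as Publish before the
// request reaches the underlying bulk-publisher path.
func (p *publisher) BulkPublish(req *bulkPublishRequest) {
    if p.namespaceScoped {
        req.Topic = p.namespace + req.Topic
    }
    fmt.Println("publishing to topic:", req.Topic)
}

func main() {
    p := &publisher{namespace: "prod.", namespaceScoped: true}
    p.BulkPublish(&bulkPublishRequest{Topic: "orders", Messages: []string{"m1"}})
    // publishing to topic: prod.orders
}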

Workflow timer reminders not deleted when external event is received before timeout

Problem

When a workflow used WaitForSingleEvent with a timeout, a timer reminder was created in the scheduler. If the external event was raised before the timer fired, the timer reminder was never deleted and remained as an orphan in the scheduler until it eventually fired unnecessarily.
Additionally, when a workflow completed while timers were still pending (e.g. a CreateTimer that had not yet fired), those timer reminders were also left behind.

Impact

Workflows using WaitForSingleEvent with timeouts accumulated orphan timer reminders in the scheduler. These timers would eventually fire and trigger unnecessary workflow actor invocations that were silently ignored, wasting scheduler and actor resources.
For long-running workflows with many WaitForSingleEvent calls or long timeouts, the number of orphan reminders could grow significantly.

Root Cause

The durable task SDK completes the event task when an external event is received, but does not signal the Dapr runtime to delete the associated timer reminder. The runtime had no mechanism to detect that a timer was no longer needed because its associated event had already been received.
Similarly, when a workflow completed, there was no cleanup of pending timer reminders that had not yet fired.

Solution

Added two timer cleanup mechanisms to the workflow orchestrator:

  1. Mid-execution cleanup (deleteCancelledEventTimers): After each workflow execution step, the runtime scans the history for TimerCreated events associated with WaitForSingleEvent calls (identified by the Name field on TimerCreated). When a matching EventRaised event is found in the new events, the corresponding timer reminder is deleted from the scheduler. Event name matching is case-insensitive, and already-deleted timers (e.g. from a crash recovery) are handled gracefully by ignoring NotFound errors.

  2. Completion cleanup (deleteAllReminders): When a workflow completes and has unfired timers (detected by comparing TimerCreated vs TimerFired event counts), all reminders for the workflow and its activities are bulk-deleted via DeleteByActorID. This handles timers without a Name field (e.g. CreateTimer) that cannot be matched to specific events.

Ollama conversation component missing endpoint metadata field in spec

Problem

The Ollama conversation component's metadata spec was missing the endpoint metadata field, which is required to configure the Ollama server URL.

Impact

Users configuring the Ollama conversation component could not discover the endpoint metadata field through the component spec. The field was functional in code but not declared in the component metadata spec, making it invisible to tooling and documentation that relies on the spec.

Root Cause

The endpoint metadata field was omitted from the Ollama conversation component's metadata.yaml spec file.

Solution

Added the endpoint metadata field to the Ollama conversation component spec (conversation/ollama/metadata.yaml).

Dapr CLI cannot list workflow instances when using MongoDB as workflow actor state store

Problem

The Dapr CLI dapr workflow list command failed when MongoDB was configured as the workflow actor state store.

Impact

Users using MongoDB as their workflow actor state store could not list workflow instances via the Dapr CLI. The list operation requires prefix-based key queries to enumerate workflow instances, which MongoDB did not support.

Root Cause

The MongoDB state store component did not implement the KeysLiker interface, which provides prefix-based key listing functionality. The Dapr CLI's workflow list operation depends on this interface to query workflow instance keys by prefix.

Solution

Implemented the KeysLiker interface on the MongoDB state store component, enabling the prefix-based key listing queries required by the Dapr CLI workflow list command.

LangChain Go Kit conversation component does not return error when required tool calls are not invoked

Problem

When the LangChain Go Kit conversation component received a response from the LLM that included required tool calls, but those tool calls were not actually invoked, no error was returned to the caller.

Impact

Applications using the conversation API with the LangChain Go Kit component could silently receive incomplete responses when the LLM requested tool calls that were not executed. The caller had no indication that the response was missing expected tool call results.

Root Cause

The LangChain Go Kit conversation component did not check whether tool calls flagged as required by the LLM were actually invoked during the conversation turn.

Solution

Added error handling to return an error when the LLM response includes required tool calls that were not invoked, ensuring the caller is informed of the incomplete response.

Sentry fails to sign certificates when issuer key type does not match CSR signature algorithm

Problem

Sentry fails to sign workload certificates with the error:

x509: requested SignatureAlgorithm does not match private key type

This occurs when the CSR signature algorithm does not match the issuer key type. For example, when a sidecar generates an Ed25519 CSR but the Sentry issuer key is ECDSA, or vice versa. This breaks version skew scenarios where the sidecar and control plane use different key types.

Impact

Sidecars cannot obtain workload certificates from Sentry during version skew upgrades where the sidecar and Sentry use different cryptographic key types. All mTLS-secured communication fails, preventing the sidecar from starting.

Root Cause

Sentry copied the SignatureAlgorithm from the incoming CSR onto the workload certificate template. When x509.CreateCertificate was called, Go's x509 library rejected the mismatch between the template's signature algorithm (from the CSR) and the issuer's private key type.

Solution

Removed the hardcoded SignatureAlgorithm from certificate templates and the SignRequest struct. Go's x509.CreateCertificate now infers the correct signature algorithm from the issuer's signing key, allowing Sentry to sign certificates regardless of the CSR's key type.
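
A minimal sketch of the behavior the fix relies on: with the template's SignatureAlgorithm left at its zero value, Go's x509.CreateCertificate infers the algorithm from the issuer's key, even when the leaf public key is a different type.

package main

import (
    "crypto/ecdsa"
    "crypto/ed25519"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "time"
)

func main() {
    // ECDSA issuer (self-signed here for brevity).
    issuerKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "issuer"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(time.Hour),
        IsCA:                  true,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &issuerKey.PublicKey, issuerKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Ed25519 workload key, as in the version-skew scenario.
    workloadPub, _, _ := ed25519.GenerateKey(rand.Reader)
    leafTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "workload"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(time.Hour),
        // SignatureAlgorithm deliberately unset: copying it from the CSR
        // is what caused the mismatch error.
    }
    _, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, workloadPub, issuerKey)
    fmt.Println(err) // <nil>: signed with ECDSA despite the Ed25519 leaf key
}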

Kubewarden sandbox

Kubewarden is a Policy Engine powered by WebAssembly policies. Its policies can be written in CEL, Rego (OPA & Gatekeeper flavours), Rust, Go, YAML, and others…

v1.33.1

  • fix: Use correct chart versions for 1.33.1 (#1594)
  • test: Consume mirrored kalaksi:tinyproxy image (#1593)
  • ci(release): Bump attestation GHA to 4.6.0 (#1592)
  • fix(ci): fix broken release, address issue during attestation step (#1589)
  • deps: Consume testcontainers tinyproxy image from gitlab registry (#1588)
  • feat: forward image pull secrets (#1583)

๐Ÿ› Bug Fixes

  • fix(deps): update rust crate cached to 0.58 (#1565)
  • fix(deps): update go dependencies (#1568)

🧰 Maintenance

  • chore(deps): Bump google.golang.org/grpc from 1.79.2 to 1.79.3 (#1590)
  • build: v1.33.1 release (#1587)
  • build(deps): lock file maintenance (#1584)
  • chore(deps): Update Helm chart dependencies (#1582)
  • chore(deps): update module github.com/opencontainers/runc to v1.4.1 (#1578)
  • build(deps): lock file maintenance (#1581)
  • chore(deps): update rust dependencies (#1579)
  • chore(deps): update github actions (#1577)
  • fix(deps): update rust crate cached to 0.58 (#1565)
  • fix(deps): update go dependencies (#1568)
  • chore(deps): Update Helm chart dependencies (#1570)
  • chore(deps): update golang docker tag to v1.26.1 (#1562)
  • build(deps): lock file maintenance (#1569)
  • chore(deps): update github actions (#1566)
  • chore(deps): update otel/opentelemetry-collector docker tag to v0.147.0 (#1567)

xRegistry sandbox

The xRegistry project defines an abstract model for managing metadata about resources and provides a REST-based interface to discover, create, modify and delete those resources.

dev

Latest development build of the 'xr(server)' executables. The commit pointer and zip/tar files are old; do not use them.

kagent sandbox

Kagent is an open source programming framework designed for DevOps and platform engineers to run AI agents in Kubernetes

v0.8.0-beta9

What's Changed

New Contributors

Full Changelog: v0.8.0-beta8...v0.8.0-beta9

metal3-io incubating

Provision bare metal hardware via k8s-native APIs, including integration with the Cluster API.

v0.11.6

Changes since v0.11.5

๐Ÿ› Bug Fixes

  • Add handling of paused annotation to the HFS controller (#3089)
  • Fix return values of the HFC controller in case of provisioner errors (#3077)

🌱 Others

  • Bump CAPI to v1.11.7 (#3100)
  • bump google.golang.org/grpc to v1.79.3 (#3098)
  • Bump golangci-lint to v2.5.0 (#3086)
  • bump x/net to v0.49.0 (#3084)
  • harden pr-verifier workflow trigger (#3087)
  • Update Go version to 1.25.8 (#3066)
  • Bump github.com/cloudflare/circl to v1.6.3 (#3051)
  • Bump github.com/cert-manager/cert-manager from 1.18.5 to 1.18.6 in /test (#3042)
  • Bump the kubernetes group to v0.33.9 (#3041)
  • Bump go.etcd.io/etcd/client/pkg/v3 from 3.6.7 to 3.6.8 (#2995)
  • bump osv-scanner in hack/verify-release.sh (#3031)
  • add Sunnatillo as reviewer (#3025)
  • Bump opentelemetry.io/otel/sdk to v1.40.0 (#3019)
  • add smoshiur1237 as reviewer (#3016)
  • E2E: Avoid pre-pulling release-0.8 (#3010)

โ™ป๏ธ Superseded or Reverted

The image for this release is: v0.11.6

Thanks to all our contributors! 😊

metal3-io incubating

Provision bare metal hardware via k8s-native APIs, including integration with the Cluster API.

v0.12.3

Changes since v0.12.2

๐Ÿ› Bug Fixes

  • Add handling of paused annotation to the HFS controller (#3090)
  • Fix return values of the HFC controller in case of provisioner errors (#3076)

🌱 Others

  • Bump CAPI to v1.12.4 (#3099)
  • bump google.golang.org/grpc to v1.79.3 (#3097)
  • bump x/net to v0.49.0 (#3083)
  • Bump golangci-lint to v2.5.0 in workflow (#3085)
  • harden pr-verifier workflow trigger (#3088)
  • Update Go version to 1.25.8 (#3064)
  • Bump github.com/cloudflare/circl to v1.6.3 (#3050)
  • Bump github.com/cert-manager/cert-manager from 1.18.5 to 1.18.6 in /test (#3040)
  • Bump the kubernetes group to v0.34.5 (#3039)
  • Bump golangci-lint to v2.5.0 and fix linter findings (#3032)
  • Bump opentelemetry.io/otel/sdk to v1.40.0 (#3020)
  • bump osv-scanner in hack/verify-release.sh (#3030)
  • add Sunnatillo as reviewer (#3026)
  • add smoshiur1237 as reviewer (#3017)
  • Bump sigs.k8s.io/kustomize/kustomize/v5 from 5.8.0 to 5.8.1 in /hack/tools (#2990)
  • Bump go.etcd.io/etcd/client/pkg/v3 from 3.6.7 to 3.6.8 (#2988)
  • Bump sigs.k8s.io/kustomize/api from 0.21.0 to 0.21.1 in /test (#2992)

โ™ป๏ธ Superseded or Reverted

The image for this release is: v0.12.3

Thanks to all our contributors! 😊

Cozystack sandbox

Cozystack is a free PaaS platform and framework for building private clouds and providing users/customers with managed Kubernetes, KubeVirt-based VMs, databases as a service, NATS, message brokers, etc. with GPU support in VMs and Kubernetes clusters.

v1.1.3

Fixes

  • [kubernetes] Fix CiliumNetworkPolicy endpointSelector not updated for multi-node RWX volumes: When an NFS-backed RWX volume was published to multiple VMs, the CiliumNetworkPolicy endpointSelector.matchLabels was set only for the first VM and never broadened on subsequent ControllerPublishVolume calls. This caused Cilium to block NFS egress so that mounts hung on all nodes except the first. The selector now uses matchExpressions with operator: In and is rebuilt whenever owner references are added or removed (@mattia-eleuteri in #2227, #2229).

  • [dashboard] Fix dashboard-client secret desynchronization with Keycloak after upgrades: When the dashboard-client Kubernetes Secret was recreated with a new value after an upgrade or reinstall, the KeycloakClient spec remained unchanged and the EDP Keycloak operator skipped reconciliation, leaving Keycloak with the stale secret and causing authentication failures for the dashboard. A secret-hash annotation containing the SHA256 hash of the client secret is now added to the KeycloakClient resource; any secret rotation updates the hash in metadata, triggering operator reconciliation and syncing the new secret to Keycloak; see the sketch after this list (@sircthulhu in #2231, #2241).

  • [etcd] Fix defrag CronJob accumulating hundreds of pods during cluster upgrades: After upgrading CozyStack, the etcd defrag CronJob could accumulate hundreds of running and failed pods when etcd was temporarily unavailable during the upgrade, because no concurrency or retry limits were configured. Added concurrencyPolicy: Forbid to prevent parallel jobs, startingDeadlineSeconds: 300 to discard missed schedules older than 5 minutes, failedJobsHistoryLimit: 1 to limit failure retention, activeDeadlineSeconds: 1800 for a 30-minute per-job timeout, and backoffLimit: 2 to cap retries (@sircthulhu in #2233, #2234).
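
A minimal sketch of the hash-annotation pattern from the dashboard fix above; the annotation key is hypothetical, not necessarily the one Cozystack uses:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

func main() {
    clientSecret := []byte("rotated-secret-value") // illustrative value

    // Stamping the hash into metadata changes the object on every secret
    // rotation, which forces the operator to reconcile and re-sync.
    sum := sha256.Sum256(clientSecret)
    annotations := map[string]string{
        "cozystack.io/secret-hash": hex.EncodeToString(sum[:]), // hypothetical key
    }
    fmt.Println(annotations)
}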

Documentation

  • [website] Document keycloakInternalUrl platform value: Added reference documentation for the authentication.oidc.keycloakInternalUrl platform value to the Platform Package Reference, Self-Signed Certificates guide, and Enable OIDC Server guide, explaining how to route dashboard backend OIDC requests through the internal Keycloak service URL (@sircthulhu in cozystack/website#452).

Full Changelog: v1.1.2...v1.1.3

Download cozystack

Cozystack sandbox

Cozystack is a free PaaS platform and framework for building private clouds and providing users/customers with managed Kubernetes, KubeVirt-based VMs, databases as a service, NATS, message brokers, etc. with GPU support in VMs and Kubernetes clusters.

v1.0.3

Fixes

  • [platform] Fix package name conversion in migration script: Fixed the migrate-to-version-1.0.sh script to correctly prepend the cozystack. prefix when converting BUNDLE_DISABLE and BUNDLE_ENABLE package name lists, ensuring packages are properly identified during the v0.41→v1.0 upgrade (@myasnikovdaniil in #2144, #2148).

Documentation

  • [website] Add white labeling guide: Added a comprehensive guide for configuring white labeling (branding) in Cozystack v1, covering Dashboard fields (titleText, footerText, tenantText, logoText, logoSvg, iconSvg) and Keycloak fields (brandName, brandHtmlName). Includes SVG preparation workflow with theme-aware template variables, portable base64 encoding, and migration notes from the v0 ConfigMap approach (@lexfrei in cozystack/website#441).

  • [website] Actualize backup and recovery documentation: Reworked the backup and recovery docs to be user-focused, separating operator and tenant workflows. Added tenant-facing documentation for BackupJob and Plan resources and status inspection commands, and added a new Velero administration guide for operators covering storage credentials and backup storage configuration (@androndo in cozystack/website#434).


Full Changelog: v1.0.2...v1.0.3

Download cozystack

Cozystack sandbox

Cozystack is a free PaaS platform and framework for building private clouds and providing users/customers with managed Kubernetes, KubeVirt-based VMs, databases as a service, NATS, message brokers, etc. with GPU support in VMs and Kubernetes clusters.

v1.1.1

Fixes

  • [dashboard] Fix hidden MarketplacePanel resources appearing in sidebar menu: The sidebar was generated independently from MarketplacePanels, always showing all resources regardless of their hidden state. Fixed by fetching MarketplacePanels during sidebar reconciliation and skipping resources where hidden=true, so hiding a resource from the marketplace also removes it from the sidebar navigation (@IvanHunters in #2177, #2203).

  • [dashboard] Fix disabled/hidden state overwritten on every MarketplacePanel reconciliation: The controller was hardcoding disabled=false and hidden=false on every reconciliation, silently overwriting any user changes made through the dashboard UI. Fixed by reading and preserving the current disabled/hidden values from the existing resource before updating (@IvanHunters in #2176, #2201).

  • [dashboard] Fix External IPs factory EnrichedTable rendering: The external-IPs table displayed empty rows because the factory used incorrect EnrichedTable properties. Replaced clusterNamePartOfUrl with cluster and changed pathToItems from array to dot-path string format, consistent with all other working EnrichedTable instances (@IvanHunters in #2175, #2193).

  • [platform] Fix VM MAC address not preserved during virtual-machine to vm-instance migration: Kube-OVN reads MAC address exclusively from the pod annotation ovn.kubernetes.io/mac_address, not from the IP resource spec.macAddress. Without the annotation, migrated VMs received a new random MAC, breaking OS-level network configurations that match by MAC (e.g. netplan). Added a Helm lookup for the Kube-OVN IP resource in the vm-instance chart so that MAC and IP addresses are automatically injected as pod annotations when the resource exists (@sircthulhu in #2169, #2190).

  • [etcd-operator] Replace deprecated kube-rbac-proxy image: The gcr.io/kubebuilder/kube-rbac-proxy image became unavailable after Google Container Registry was deprecated. Replaced it with quay.io/brancz/kube-rbac-proxy from the original upstream author, restoring etcd-operator functionality (@kvaps in #2181, #2182).

  • [migrations] Handle missing RabbitMQ CRD in migration 34: Migration 34 failed with an error when the rabbitmqs.apps.cozystack.io CRD did not exist, which occurs on clusters where RabbitMQ was never installed. Added a CRD presence check before attempting to list resources so that migration 34 completes cleanly on such clusters (@IvanHunters in #2168, #2180).

  • [keycloak] Fix Keycloak crashloop due to misconfigured health probes: Keycloak 26.x redirects all HTTP requests on port 8080 to the configured HTTPS hostname; since kubelet does not follow redirects, liveness and readiness probes failed, causing a crashloop. Fixed by enabling KC_HEALTH_ENABLED=true, exposing management port 9000, and switching all probes to /health/live and /health/ready on port 9000. Also added a startupProbe for improved startup tolerance (@mattia-eleuteri in #2162, #2179).


Full Changelog: v1.1.0...v1.1.1

Download cozystack

Cozystack sandbox

Cozystack is a free PaaS platform and framework for building private clouds and providing users/customers with managed Kubernetes, KubeVirt-based VMs, databases as a service, NATS, message brokers, etc. with GPU support in VMs and Kubernetes clusters.

v1.0.4

Fixes

  • [system] Fix Keycloak probe crashloop with management port health endpoints: Fixed a crashloop where Keycloak 26.x was endlessly restarting because liveness and readiness probes were sending HTTP requests to port 8080. Keycloak 26.x redirects all requests on port 8080 to KC_HOSTNAME (HTTPS), and since kubelet does not follow redirects, probes failed, eventually triggering container restarts. The fix switches probes to the dedicated management port 9000 (/health/live, /health/ready) enabled via KC_HEALTH_ENABLED=true, exposes management port 9000, and adds a startupProbe with appropriate failure thresholds for better startup tolerance (@mattia-eleuteri in #2162, #2178).

  • [system] Fix etcd-operator deprecated kube-rbac-proxy image: Replaced the deprecated gcr.io/kubebuilder/kube-rbac-proxy:v0.16.0 image with quay.io/brancz/kube-rbac-proxy:v0.18.1 in the vendored etcd-operator chart. The GCR-hosted image became unavailable after March 18, 2025, causing etcd-operator pods to fail on image pull (@kvaps in #2181, #2183).

  • [platform] Fix VM MAC address not preserved during virtual-machine to vm-instance migration: During the virtual-machine → vm-instance migration (script 29), VM MAC addresses were not preserved. Kube-OVN reads MAC addresses exclusively from the pod annotation ovn.kubernetes.io/mac_address, not from spec.macAddress of the IP resource. Without this annotation, migrated VMs received a new random MAC address, breaking OS-level network configuration that matches by MAC (e.g., netplan). The fix adds a Helm lookup in the vm-instance chart template to read the Kube-OVN IP resource and automatically inject the MAC and IP addresses as pod annotations (@sircthulhu in #2169, #2191).

  • [dashboard] Fix External IPs page showing empty rows: Fixed the External IPs administration page displaying empty rows instead of service data. The EnrichedTable configuration in the external-ips factory was using incorrect property names: replaced clusterNamePartOfUrl with cluster and changed pathToItems from array format to dot-path string format, matching the convention used by all other EnrichedTable instances (@IvanHunters in #2175, #2192).

  • [dashboard] Fix disabled/hidden state reset on MarketplacePanel reconciliation: Fixed a bug where the dashboard controller was hardcoding disabled=false and hidden=false on every reconcile loop, overwriting changes made through the dashboard UI. Services disabled or hidden via the marketplace panel now correctly retain their state after controller reconciliation (@IvanHunters in #2176, #2202).

  • [dashboard] Fix hidden MarketplacePanel resources appearing in sidebar menu: Fixed the sidebar navigation showing all resources regardless of their MarketplacePanel hidden state. The controller now fetches MarketplacePanels during sidebar reconciliation and filters out resources where hidden=true, ensuring that hiding a resource from the marketplace also removes it from the sidebar navigation. Listing failures are non-fatal: if the configuration fetch fails, no hiding is applied and the dashboard remains functional (@IvanHunters in #2177, #2204).

Documentation

  • [website] Add OIDC self-signed certificates configuration guide: Added a comprehensive guide for configuring OIDC authentication with Keycloak when using self-signed certificates (the default in Cozystack). Covers Talos machine configuration with certificate mounting and host entries, kubelogin setup instructions, and a troubleshooting section. The guide is available for both v0 and v1 versioned documentation paths (@IvanHunters in cozystack/website#443).

Full Changelog: v1.0.3...v1.0.4

Download cozystack

Cozystack sandbox

Cozystack is a free PaaS platform and framework for building private clouds and providing users/customers with managed Kubernetes, KubeVirt-based VMs, databases as a service, NATS, message brokers, etc. with GPU support in VMs and Kubernetes clusters.

v1.1.2

Fixes

  • [bucket] Fix S3 Manager endpoint mismatch with COSI credentials: The S3 Manager UI previously constructed an s3.<tenant>.<cluster-domain> endpoint even though COSI-issued bucket credentials point to the root-level S3 endpoint. This caused login failures with "invalid credentials" despite valid secrets. The deployment now uses the actual endpoint from BucketInfo, with the old namespace-based endpoint kept only as a fallback before BucketAccess secrets exist (@IvanHunters in #2211, #2215).

  • [platform] Fix spurious OpenAPI post-processing errors on cozystack-api startup: The OpenAPI post-processor was being invoked for non-apps.cozystack.io group versions where the base Application* schemas do not exist, producing noisy startup errors on every API server launch. It now skips those non-apps group versions gracefully instead of returning an error (@kvaps in #2212, #2217).

Documentation

  • [website] Add troubleshooting for packages stuck in DependenciesNotReady: Added an operations guide that explains how to diagnose missing package dependencies in operator logs and corrected the packages management development docs to use the current make image-packages target (@kvaps in cozystack/website#450).

  • [website] Reorder installation docs to install the operator before the platform package: Updated the platform installation guide and tutorial so the setup sequence consistently installs the Cozystack operator first, then prepares and applies the Platform Package, matching the rest of the documentation set (@sircthulhu in cozystack/website#449).

  • [website] Add automated installation guide for the Ansible collection: Added a full guide for deploying Cozystack with the cozystack.installer collection, including inventory examples, distro-specific playbooks, configuration reference, and explicit version pinning guidance (@lexfrei in cozystack/website#442).

  • [website] Expand monitoring and platform architecture reference docs: Added a tenant custom metrics collection guide for VMServiceScrape and VMPodScrape, and documented PackageSource/Package architecture, reconciliation flow, rollback behavior, and the cozypkg workflow in Key Concepts (@IvanHunters in cozystack/website#444, cozystack/website#445).

  • [website] Improve operations guides for CA rotation and Velero backups: Completed the CA rotation documentation with dry-run and post-rotation credential retrieval steps, and expanded the backup configuration guide with concrete examples, verification commands, and clearer operator procedures (@kvaps in cozystack/website#406; @androndo in cozystack/website#440).


Full Changelog: v1.1.1...v1.1.2

Download cozystack

werf sandbox

werf is a solution for implementing efficient and consistent software delivery to Kubernetes. It covers the entire CI/CD lifecycle and all related artifacts, glues commonly used tools (Git, Docker/Buildah, Helm, K8s) and facilitates best practices.

v2.62.1

Changelog

Bug Fixes

  • cleanup: do not require docker daemon for registry mirrors (7a42d33)

Installation

To install werf, we strongly recommend following these instructions.

Alternatively, you can download werf binaries from here:

These binaries are signed with PGP and can be verified with the werf PGP public key. For example, the werf binary can be downloaded and verified with gpg on Linux using these commands:

curl -sSLO "https://tuf.werf.io/targets/releases/2.62.1/linux-amd64/bin/werf" -O "https://tuf.werf.io/targets/signatures/2.62.1/linux-amd64/bin/werf.sig"
curl -sSL https://werf.io/werf.asc | gpg --import
gpg --verify werf.sig werf

werf sandbox

werf is a solution for implementing efficient and consistent software delivery to Kubernetes. It covers the entire CI/CD lifecycle and all related artifacts, glues commonly used tools (Git, Docker/Buildah, Helm, K8s) and facilitates best practices.

v2.63.0 [alpha,beta,ea]

Changelog

Features

  • build, stapel, git: add WERF_DISABLE_GIT_COMMIT_ANCESTRY_CHECK to disable git commit ancestry check (bae3300)
  • deploy: switch to goccy/go-yaml and improve parse error context (#7398) (3097703)
  • import: provide WERF_EXPERIMENTAL_IMPORT_BY_SOURCE_IMAGE_TAG env to change calculation import checksums method to reduce FD (#7392) (9abe1b5)
  • telemetry: extend build metrics with metadata fields (#7384) (09a324e)

Bug Fixes

  • deploy: print engine.Render() result on debug level (#7396) (7237218)
  • deploy: tracking absence for release namespace deletion (#7397) (6d885d9)
  • includes: add empty args check (a6b8766)

Installation

To install werf, we strongly recommend following these instructions.

Alternatively, you can download werf binaries from here:

These binaries are signed with PGP and can be verified with the werf PGP public key. For example, the werf binary can be downloaded and verified with gpg on Linux using these commands:

curl -sSLO "https://tuf.werf.io/targets/releases/2.63.0/linux-amd64/bin/werf" -O "https://tuf.werf.io/targets/signatures/2.63.0/linux-amd64/bin/werf.sig"
curl -sSL https://werf.io/werf.asc | gpg --import
gpg --verify werf.sig werf

werf sandbox

werf is a solution for implementing efficient and consistent software delivery to Kubernetes. It covers the entire CI/CD lifecycle and all related artifacts, glues commonly used tools (Git, Docker/Buildah, Helm, K8s) and facilitates best practices.

latest-signature

signed

KAITO sandbox

Kubernetes AI Toolchain Operator (KAITO) simplifies LLM inference, tuning, and RAG workloads on Kubernetes.

v0.9.3

v0.9.3 - 2026-03-19

Changelog

Bug Fixes ๐Ÿž

Maintenance ๐Ÿ”ง

  • 4ce9f87 chore: bump google.golang.org/grpc from 1.78.0 to 1.79.3 (#1856)