
chore(deps): update salamandre cluster #24

Merged: 1 commit into main on Jan 8, 2025

Conversation

renovate[bot]
Contributor

@renovate renovate bot commented Nov 1, 2024

This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| cert-manager (source) | minor | 1.15.3 -> v1.16.2 |
| cloudnative-pg (source) | minor | 0.22.0 -> 0.23.0 |
| cloudnative-pg-source | minor | v1.24.0 -> v1.25.0 |
| collabora/code | patch | 24.04.7.2.1 -> 24.04.11.1.1 |
| docker | minor | 27.3.1-dind-rootless -> 27.4.1-dind-rootless |
| mailhog (source) | minor | 5.2.3 -> 5.7.0 |
| nextcloud (source) | minor | 6.0.3 -> 6.6.2 |
| vault (source) | minor | 0.28.1 -> 0.29.1 |
| vaultwarden/server | patch | 1.32.0-alpine -> 1.32.7-alpine |
| zitadel | minor | 8.5.0 -> 8.11.1 |

Release Notes

cert-manager/cert-manager (cert-manager)

v1.16.2

Compare Source

cert-manager is the easiest way to automatically manage certificates in Kubernetes and OpenShift clusters.

This patch release of cert-manager 1.16 makes several changes to how PEM input is validated, adding maximum sizes appropriate to the type of PEM data which is being parsed.

This is to prevent an unacceptable slow-down in parsing specially crafted PEM data. The issue was found by Google's OSS-Fuzz project.

The issue is low severity; to exploit the PEM issue would require privileged access which would likely allow Denial-of-Service through other methods.

Note also that since most PEM data parsed by cert-manager comes from ConfigMap or Secret resources which have a max size limit of approximately 1MB, it's difficult to force cert-manager to parse large amounts of PEM data.

Further information is available in GHSA-r4pg-vg54-wxx4.
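The mitigation is simple to picture: bound the input size before invoking the parser. A minimal sketch in Python (illustrative only; cert-manager's actual fix is in Go and applies per-type limits, and `MAX_PEM_SIZE` here is a hypothetical cap):

```python
# Illustrative sketch of the mitigation pattern: reject oversized input
# before it reaches an expensive parser. The constant and function are
# hypothetical, not cert-manager's real (Go) implementation.
MAX_PEM_SIZE = 64 * 1024  # hypothetical cap; the real fix sizes limits per PEM type

def parse_pem_capped(data: bytes, max_size: int = MAX_PEM_SIZE) -> bytes:
    """Raise before parsing if the input exceeds the size cap."""
    if len(data) > max_size:
        raise ValueError(
            f"PEM input is {len(data)} bytes; refusing to parse more than {max_size}"
        )
    return data  # in real code: hand off to the PEM decoder here
```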

In addition, the version of Go used to build cert-manager 1.16 was updated along with the base images.

Changes by Kind

Bug or Regression
  • Set a maximum size for PEM inputs which cert-manager will accept to remove possibility of taking a long time to process an input (#​7401, @​SgtCoDFish)
Other (Cleanup or Flake)

v1.16.1

Compare Source


The cert-manager 1.16 release includes: new Helm chart features, more Prometheus metrics, memory optimizations, and various improvements and bug fixes for the ACME issuer and Venafi Issuer.

📖 Read the complete 1.16 release notes before upgrading.

📜Changes since v1.16.0

Bug or Regression
  • BUGFIX: Helm schema validation: the new schema validation was too strict for the "global" section. Since the global section is shared across all charts and sub-charts, we must also allow unknown fields. (#​7348, @inteon)
  • BUGFIX: Helm will now accept percentages for the podDisruptionBudget.minAvailable and podDisruptionBudget.maxAvailable values. (#​7345, @inteon)
  • Helm: allow enabled to be set as a value to toggle cert-manager as a dependency. (#​7356, @inteon)
  • BUGFIX: A change in v1.16.0 caused cert-manager's ACME ClusterIssuer to look in the wrong namespace for resources required for the issuance (e.g. credential Secrets). This is now fixed in v1.16.1. (#​7342, @inteon)
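For illustration, a values fragment exercising the two Helm fixes above (field names taken from the release notes; the specific values are examples, not recommendations):

```yaml
# Hypothetical values.yaml fragment for the cert-manager chart.
podDisruptionBudget:
  enabled: true
  minAvailable: "50%"      # percentage values accepted again (#7345)
global:
  companyWideSetting: ok   # unknown fields under "global" are tolerated again (#7348)
```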

v1.16.0

Compare Source


The cert-manager 1.16 release includes: new Helm chart features, more Prometheus metrics, memory optimizations, and various improvements and bug fixes for the ACME issuer and Venafi Issuer.

📖 Read the complete 1.16 release notes at cert-manager.io.

⚠️ Known issues

  1. Helm Chart: JSON schema validation prevents the chart from being used as a sub-chart on Rancher RKE.
  2. ACME DNS01 ClusterIssuers fail while loading credentials from Secret resources.

❗ Breaking changes

  1. Helm schema validation may reject your existing Helm values files if they contain typos or unrecognized fields.
  2. Venafi Issuer may fail to renew certificates if the requested duration conflicts with the CA’s minimum or maximum policy settings in Venafi.
  3. Venafi Issuer may fail to renew Certificates if the issuer has been configured for TPP with username-password authentication.


📜 Changes since v1.15.0


Feature
  • Add SecretRef support for Venafi TPP issuer CA Bundle (#​7036, @sankalp-at-gh)
  • Add renewBeforePercentage alternative to renewBefore (#​6987, @cbroglie)
  • Add a metrics server to the cainjector (#​7194, @wallrj)
  • Add a metrics server to the webhook (#​7182, @wallrj)
  • Add client certificate auth method for Vault issuer (#​4330, @joshmue)
  • Add process and go runtime metrics for controller (#​6966, @mindw)
  • Added app.kubernetes.io/managed-by: cert-manager label to the cert-manager-webhook-ca Secret (#​7154, @jrcichra)
  • Allow the user to specify a Pod template when using GatewayAPI HTTP01 solver, this mirrors the behavior when using the Ingress HTTP01 solver. (#​7211, @ThatsMrTalbot)
  • Create token request RBAC for the cert-manager ServiceAccount by default (#​7213, @Jasper-Ben)
  • Feature: Append cert-manager user-agent string to all AWS API requests, including IMDS and STS requests. (#​7295, @wallrj)
  • Feature: Log AWS SDK warnings and API requests at cert-manager debug level to help debug AWS Route53 problems in the field. (#​7292, @wallrj)
  • Feature: The Route53 DNS solver of the ACME Issuer will now use regional STS endpoints computed from the region that is supplied in the Issuer spec or in the AWS_REGION environment variable.
    Feature: The Route53 DNS solver of the ACME Issuer now uses the "ambient" region (AWS_REGION or AWS_DEFAULT_REGION) if issuer.spec.acme.solvers.dns01.route53.region is empty; regardless of the flags --issuer-ambient-credentials and --cluster-issuer-ambient-credentials. (#​7299, @wallrj)
  • Helm: adds JSON schema validation for the Helm values. (#​7069, @inteon)
  • If the --controllers flag only specifies disabled controllers, the default controllers are now enabled implicitly.
    Added disableAutoApproval and approveSignerNames Helm chart options. (#​7049, @inteon)
  • Make it easier to configure cert-manager using Helm by defaulting config.apiVersion and config.kind within the Helm chart. (#​7126, @ThatsMrTalbot)
  • Now passes down specified duration to Venafi client instead of using the CA default only. (#​7104, @Guitarkalle)
  • Reduce the memory usage of cainjector, by only caching the metadata of Secret resources.
    Reduce the load on the K8S API server when cainjector starts up, by only listing the metadata of Secret resources. (#​7161, @wallrj)
  • The Route53 DNS01 solver of the ACME Issuer can now detect the AWS region from the AWS_REGION and AWS_DEFAULT_REGION environment variables, which is set by the IAM for Service Accounts (IRSA) webhook and by the Pod Identity webhook.
    The issuer.spec.acme.solvers.dns01.route53.region field is now optional.
    The API documentation of the region field has been updated to explain when and how the region value is used. (#​7287, @wallrj)
  • Venafi TPP issuer can now be used with a username & password combination with OAuth. Fixes #​4653.
    Breaking: cert-manager will no longer use the API Key authentication method which was deprecated in 20.2 and since removed in 24.1 of TPP. (#​7084, @hawksight)
  • You can now configure the pod security context of HTTP-01 solver pods. (#​5373, @aidy)
  • Helm: New value webhook.extraEnv, allows you to set custom environment variables in the webhook Pod.
    Helm: New value cainjector.extraEnv, allows you to set custom environment variables in the cainjector Pod.
    Helm: New value startupapicheck.extraEnv, allows you to set custom environment variables in the startupapicheck Pod. (#​7319, @wallrj)
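As an example of the new renewBeforePercentage field (#​6987), a sketch of a Certificate using it in place of renewBefore; the names and issuer are hypothetical:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com           # hypothetical
spec:
  secretName: example-com-tls
  duration: 2160h             # 90 days
  renewBeforePercentage: 33   # renew once roughly 33% of the lifetime remains (#6987)
  issuerRef:
    name: my-issuer           # hypothetical ClusterIssuer
    kind: ClusterIssuer
  dnsNames:
    - example.com
```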
Bug or Regression
  • Adds support (behind a flag) to use a domain qualified finalizer. If the feature is enabled (which is not by default), it should prevent Kubernetes from reporting: metadata.finalizers: "finalizer.acme.cert-manager.io": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers (#​7273, @jsoref)
  • BUGFIX Route53: explicitly set the aws-global STS region which is now required by the github.com/aws/aws-sdk-go-v2 library. (#​7108, @inteon)
  • BUGFIX: fix issue that caused Vault issuer to not retry signing when an error was encountered. (#​7105, @inteon)
  • BUGFIX: the dynamic certificate source used by the webhook TLS server failed to detect a root CA approaching expiration, due to a calculation error. This would cause the webhook TLS server to fail to renew its CA certificate; please upgrade before that CA certificate expires. (#​7230, @inteon)
  • Bugfix: Prevent aggressive Route53 retries caused by IRSA authentication failures by removing the Amazon Request ID from errors wrapped by the default credential cache. (#​7291, @wallrj)
  • Bugfix: Prevent aggressive Route53 retries caused by STS authentication failures by removing the Amazon Request ID from STS errors. (#​7259, @wallrj)
  • Bump grpc-go to fix GHSA-xr7q-jx4m-x55m (#​7164, @SgtCoDFish)
  • Bump the go-retryablehttp dependency to fix CVE-2024-6104 (#​7125, @SgtCoDFish)
  • Fix Azure DNS causing panics whenever authentication error happens (#​7177, @eplightning)
  • Fix incorrect indentation of endpointAdditionalProperties in the PodMonitor template of the Helm chart (#​7190, @wallrj)
  • Fixes ACME HTTP01 challenge behavior when using Gateway API to prevent unbounded creation of HTTPRoute resources (#​7178, @miguelvr)
  • Handle errors arising from challenges missing from the ACME server (#​7202, @bdols)
  • Helm BUGFIX: the cainjector ConfigMap was not mounted in the cainjector deployment. (#​7052, @inteon)
  • Improve the startupapicheck: validate that the validating and mutating webhooks are doing their job. (#​7057, @inteon)
  • The KeyUsages X.509 extension is no longer added when there are no key usages set (in accordance with RFC 5280, Section 4.2.1.3) (#​7250, @inteon)
  • Update github.com/Azure/azure-sdk-for-go/sdk/azidentity to address CVE-2024-35255 (#​7087, @dependabot[bot])
Other (Cleanup or Flake)
  • Old API versions were removed from the codebase.
    Removed:
    (acme.)cert-manager.io/v1alpha2
    (acme.)cert-manager.io/v1alpha3
    (acme.)cert-manager.io/v1beta1 (#​7278, @inteon)
  • Upgrading to client-go v0.31.0 removes a lot of noisy reflector.go: unable to sync list result: internal error: cannot cast object DeletedFinalStateUnknown errors from logs. (#​7237, @inteon)
  • Bump Go to v1.23.2 (#​7324, @cert-manager-bot)

v1.15.4

Compare Source


This patch release of cert-manager 1.15 makes several changes to how PEM input is validated, adding maximum sizes appropriate to the type of PEM data which is being parsed.

This is to prevent an unacceptable slow-down in parsing specially crafted PEM data. The issue was found by Google's OSS-Fuzz project.

The issue is low severity; to exploit the PEM issue would require privileged access which would likely allow Denial-of-Service through other methods.

Note also that since most PEM data parsed by cert-manager comes from ConfigMap or Secret resources which have a max size limit of approximately 1MB, it's difficult to force cert-manager to parse large amounts of PEM data.

Further information is available in GHSA-r4pg-vg54-wxx4.

In addition, the version of Go used to build cert-manager 1.15 was updated along with the base images, and a Route53 bug fix was backported.

Changes by Kind

Bug or Regression
  • Bugfix: Prevent aggressive Route53 retries caused by STS authentication failures by removing the Amazon Request ID from STS errors. (#​7261, @​cert-manager-bot)
  • Set a maximum size for PEM inputs which cert-manager will accept to remove possibility of taking a long time to process an input (#​7402, @​SgtCoDFish)
Other (Cleanup or Flake)
cloudnative-pg/charts (cloudnative-pg)

v0.23.0

Compare Source

CloudNativePG Operator Helm Chart


Full Changelog: cloudnative-pg/charts@cloudnative-pg-v0.23.0-rc1...cloudnative-pg-v0.23.0

v0.22.1

Compare Source


Full Changelog: cloudnative-pg/charts@cluster-v0.0.11...cloudnative-pg-v0.22.1

cloudnative-pg/cloudnative-pg (cloudnative-pg-source)

v1.25.0

Compare Source

Release Date: December 23, 2024

Features
  • Declarative Database Management: Introduce the Database Custom Resource Definition (CRD), enabling users to create and manage PostgreSQL databases declaratively within a cluster. (#​5325)

  • Logical Replication Management: Add Publication and Subscription CRDs for declarative management of PostgreSQL logical replication. These simplify replication setup and facilitate online migrations to CloudNativePG. (#​5329)

  • Experimental Support for CNPG-I: Introducing CNPG-I (CloudNativePG Interface), a standardized framework designed to extend CloudNativePG functionality through third-party plugins and foster the growth of the CNPG ecosystem. The Barman Cloud Plugin serves as a live example, illustrating how plugins can be developed to enhance backup and recovery workflows. Although CNPG-I support is currently experimental, it offers a powerful approach to extending CloudNativePG without modifying the operator’s core code—akin to PostgreSQL extensions. We welcome community feedback and contributions to shape this exciting new capability.
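A minimal sketch of the new Database CRD (#​5325); the names are hypothetical and the field set follows the release description:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Database
metadata:
  name: app-database        # hypothetical
spec:
  name: app                 # PostgreSQL database to create declaratively
  owner: app                # role that will own the database
  cluster:
    name: cluster-example   # target CloudNativePG Cluster
```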

Enhancements
  • Add the dataDurability option to the .spec.postgresql.synchronous stanza, allowing users to choose between required (default) or preferred durability in synchronous replication. (#​5878)
  • Enable customization of startup, liveness, and readiness probes through the .spec.probes stanza. (#​6266)
  • Support additional pg_dump and pg_restore options to enhance database import flexibility. (#​6214)
  • Add support for maxConcurrentReconciles in the CloudNativePG controller and set the default to 10, improving the operator's ability to efficiently manage larger deployments out of the box. (#​5678)
  • Add the cnpg.io/userType label to secrets generated for predefined users, specifically superuser and app. (#​4392)
  • Improved validation for the spec.schedule field in ScheduledBackups, raising warnings for potential misconfigurations. (#​5396)
  • cnpg plugin:
    • Enhance the backup command to support plugins. (#​6045)
    • Honor the User-Agent header in HTTP requests with the API server. (#​6153)
Bug Fixes
  • Ensure the former primary flushes its WAL file queue to the archive before re-synchronizing as a replica, reducing recovery times and enhancing data consistency during failovers. (#​6141)
  • Clean the WAL volume along with the PGDATA volume during bootstrap. (#​6265)
  • Update the operator to set the cluster phase to Unrecoverable when all previously generated PersistentVolumeClaims are missing. (#​6170)
  • Fix the parsing of the synchronous_standby_names GUC when .spec.postgresql.synchronous.method is set to first. (#​5955)
  • Resolved a potential race condition when patching certain conditions in CRD statuses, improving reliability in concurrent updates. (#​6328)
  • Correct role changes to apply at the transaction level instead of the database context. (#​6064)
  • Remove the primary_slot_name definition from the override.conf file on the primary to ensure it is always empty. (#​6219)
  • Configure libpq environment variables, including PGHOST, in PgBouncer pods to enable seamless access to the pgbouncer virtual database using psql from within the container. (#​6247)
  • Remove unnecessary updates to the Cluster status when verifying changes in the image catalog. (#​6277)
  • Prevent panic during recovery from an external server without proper backup configuration. (#​6300)
  • Resolved a key collision issue in structured logs, where the name field was inconsistently used to log two distinct values. (#​6324)
  • Ensure proper quoting of the inRoles field in SQL statements to prevent syntax errors in generated SQL during role management. (#​6346)
  • cnpg plugin:
    • Ensure the kubectl context is properly passed in the psql command. (#​6257)
    • Avoid displaying physical backups block when empty with status command. (#​5998)
Supported Versions
  • Kubernetes: 1.32, 1.31, 1.30, and 1.29
  • PostgreSQL: 17, 16, 15, 14, and 13
    • Default image: PostgreSQL 17.2
    • Officially dropped support for PostgreSQL 12
    • PostgreSQL 13 support ends on November 12, 2025
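The synchronous-replication and probe enhancements above can be sketched in a Cluster manifest (a hedged example: field paths follow the stanza names given in the notes, and all values are illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-example
spec:
  instances: 3
  storage:
    size: 1Gi
  postgresql:
    synchronous:
      method: any
      number: 1
      dataDurability: preferred   # new option (#5878); the default is "required"
  probes:                         # customizable probes via .spec.probes (#6266)
    readiness:
      periodSeconds: 10
      failureThreshold: 3
```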

v1.24.2

Compare Source

Release Date: December 23, 2024

Enhancements
  • Enable customization of startup, liveness, and readiness probes through the .spec.probes stanza. (#​6266)
  • Add the cnpg.io/userType label to secrets generated for predefined users, specifically superuser and app. (#​4392)
  • Improved validation for the spec.schedule field in ScheduledBackups, raising warnings for potential misconfigurations. (#​5396)
  • cnpg plugin:
    • Honor the User-Agent header in HTTP requests with the API server. (#​6153)
Bug Fixes
  • Ensure the former primary flushes its WAL file queue to the archive before re-synchronizing as a replica, reducing recovery times and enhancing data consistency during failovers. (#​6141)
  • Clean the WAL volume along with the PGDATA volume during bootstrap. (#​6265)
  • Update the operator to set the cluster phase to Unrecoverable when all previously generated PersistentVolumeClaims are missing. (#​6170)
  • Fix the parsing of the synchronous_standby_names GUC when .spec.postgresql.synchronous.method is set to first. (#​5955)
  • Resolved a potential race condition when patching certain conditions in CRD statuses, improving reliability in concurrent updates. (#​6328)
  • Correct role changes to apply at the transaction level instead of the database context. (#​6064)
  • Remove the primary_slot_name definition from the override.conf file on the primary to ensure it is always empty. (#​6219)
  • Configure libpq environment variables, including PGHOST, in PgBouncer pods to enable seamless access to the pgbouncer virtual database using psql from within the container. (#​6247)
  • Remove unnecessary updates to the Cluster status when verifying changes in the image catalog. (#​6277)
  • Prevent panic during recovery from an external server without proper backup configuration. (#​6300)
  • Resolved a key collision issue in structured logs, where the name field was inconsistently used to log two distinct values. (#​6324)
  • Ensure proper quoting of the inRoles field in SQL statements to prevent syntax errors in generated SQL during role management. (#​6346)
  • cnpg plugin:
    • Ensure the kubectl context is properly passed in the psql command. (#​6257)
    • Avoid displaying physical backups block when empty with status command. (#​5998)

v1.24.1

Compare Source

Release date: Oct 16, 2024

Enhancements:
  • Remove the use of pg_database_size from the status probe, as it caused high resource utilization by scanning the entire PGDATA directory to compute database sizes. The kubectl status plugin now relies on du to retrieve detailed size information (#​5689).
  • Add the ability to configure the full_page_writes parameter in PostgreSQL. This setting defaults to on, in line with PostgreSQL's recommendations (#​5516).
  • Plugin:
    • Add the logs pretty command in the cnpg plugin to read a log stream from standard input and output a human-readable format, with options to filter log entries (#​5770)
    • Enhance the status command by allowing multiple -v options to increase verbosity for more detailed output (#​5765).
    • Add support for specifying a custom Docker image using the --image flag in the pgadmin4 plugin command, giving users control over the Docker image used for pgAdmin4 deployments (#​5515).
Fixes:
  • Resolve an issue with concurrent status updates when demoting a primary to a designated primary, ensuring smoother transitions during cluster role changes (#​5755).
  • Ensure that replica PodDisruptionBudgets (PDB) are removed when scaling down to two instances, enabling easier maintenance on the node hosting the replica (#​5487).
  • Prioritize full rollout over inplace restarts (#​5407).
  • When using .spec.postgresql.synchronous, ensure that the synchronous_standby_names parameter is correctly set, even when no replicas are reachable (#​5831).
  • Fix an issue that could lead to double failover in cases of lost connectivity (#​5788).
  • Correctly set the TMPDIR and PSQL_HISTORY environment variables for pods and jobs, improving temporary file and history management (#​5503).
  • Plugin:
    • Resolve a race condition in the logs cluster command (#​5775).
    • Display the potential sync status in the status plugin (#​5533).
    • Fix the issue where pods deployed by the pgadmin4 command didn’t have a writable home directory (#​5800).
Supported versions
  • PostgreSQL 17 (PostgreSQL 17.0 is the default image)
codecentric/helm-charts (mailhog)

v5.7.0

Compare Source

An e-mail testing tool for developers

v5.6.0

Compare Source


v5.5.0

Compare Source


v5.4.0

Compare Source


v5.3.0

Compare Source


nextcloud/helm (nextcloud)

v6.6.2

Compare Source

A file sharing server that puts the control and security of your own data back into your hands.


Full Changelog: nextcloud/helm@nextcloud-6.5.2...nextcloud-6.6.2

v6.5.2

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.5.1...nextcloud-6.5.2

v6.5.1

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.5.0...nextcloud-6.5.1

v6.5.0

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.4.1...nextcloud-6.5.0

v6.4.1

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.3.2...nextcloud-6.4.1

v6.3.2

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.3.1...nextcloud-6.3.2

v6.3.1

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.3.0...nextcloud-6.3.1

v6.3.0

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.2.4...nextcloud-6.3.0

v6.2.4

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.2.3...nextcloud-6.2.4

v6.2.3

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.2.2...nextcloud-6.2.3

v6.2.2

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.2.1...nextcloud-6.2.2

v6.2.1

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.2.0...nextcloud-6.2.1

v6.2.0

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.1.1...nextcloud-6.2.0

v6.1.1

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.1.0...nextcloud-6.1.1

v6.1.0

Compare Source


Full Changelog: nextcloud/helm@nextcloud-6.0.3...nextcloud-6.1.0

hashicorp/vault-helm (vault)

v0.29.1

Compare Source

Bugs:

  • server: restore support for templated config GH-1073

v0.29.0

Compare Source

Changes:

  • Default vault version updated to 1.18.1
  • Default vault-k8s version updated to 1.5.0
  • Default vault-csi-provider version updated to 1.5.0
  • Tested with Kubernetes versions 1.28-1.31

Features:

  • csi: Allow modification of the hostNetwork parameter on the DaemonSet GH-1046

Bugs:

  • Properly handle JSON formatted server config GH-1049
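For context on the templated-config fix (GH-1073), a hedged sketch of a vault-helm values fragment; the HCL body is an example, and the `{{ .Release.Name }}` reference illustrates the Helm templating that is supported again:

```yaml
server:
  standalone:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        address     = "[::]:8200"
        tls_disable = 1
      }
      storage "file" {
        # Helm templating inside the config string works again (GH-1073)
        path = "/vault/{{ .Release.Name }}-data"
      }
```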
dani-garcia/vaultwarden (vaultwarden/server)

v1.32.7

Compare Source

Security Fixes

This release contains a security fix for the following CVE GHSA-g65h-982x-4m5m.

This vulnerability affects any installations that have the ORG_GROUPS_ENABLED setting enabled, and we urge anyone doing so to update as soon as possible.


Full Changelog: dani-garcia/vaultwarden@1.32.6...1.32.7

v1.32.6

Compare Source


Configuration

📅 Schedule: Branch creation - "* 0-3 1 * *" in timezone Europe/Paris, Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate renovate bot requested a review from orblazer as a code owner November 1, 2024 00:07
@renovate renovate bot force-pushed the renovate/salamandre branch repeatedly between November 12, 2024 and January 8, 2025, most recently from c976b0d to 1285688 on January 8, 2025 03:04
@orblazer orblazer merged commit 0632766 into main Jan 8, 2025
1 check passed
@orblazer orblazer deleted the renovate/salamandre branch January 8, 2025 03:39