Feature Requests

Feature Requests for Harness. Select 'Category' based on the module you are requesting the feature for.
Feature Request: Allow Steps to Reuse a Shared Named Container Within a CI Stage (Opt-In)
Problem

In Harness CI, each step creates its own ephemeral container, even when multiple steps use the same image. This increases build time, image-pull overhead, pod/container churn, and Kubernetes load. Many pipelines naturally group related steps (e.g., build + test in the same toolchain), but Harness currently cannot express "reuse this container across steps". This differs from established patterns in Jenkins, Tekton, and other Kubernetes-based CI systems.

⸻

Proposal (Opt-In)

Introduce an optional stage-level feature:
1. Users define named containers in the stage (e.g., build, node, tools).
2. Steps are assigned to those containers via a simple container: <name> field.
3. Harness initialises each named container once per pod and runs all referencing steps inside it sequentially.
4. The default behaviour (one container per step) remains unchanged for users who want strict isolation.

⸻

Value
• Performance: avoids repeated container startup and image checks; reduces CI latency.
• Efficiency: lowers registry traffic and node/kubelet workload; improves cluster utilisation.
• Better modelling: aligns with how teams naturally structure pipelines and with patterns from other CI systems.
• Safe & incremental: fully optional; no breaking changes; ideal for advanced users who want faster pipelines.

⸻

Example YAML

The spec section could be updated to hold the pod spec before the steps. Something like:

```yaml
stage:
  name: Full SDLC Validation
  type: CI
  spec:
    # Opt-in: define shared, named containers once per stage
    pod:
      containers:
        # Build & test toolchain
        - name: build
          image: maven:3.9-eclipse-temurin-21
          # could mount a cache volume here in future
        # Image build toolchain (e.g. Kaniko / BuildKit / Docker CLI wrapper)
        - name: image-builder
          image: gcr.io/kaniko-project/executor:latest
        # Security tools (SCA, image scan, SBOM)
        - name: security
          image: aquasec/trivy:latest
    steps:
      # 1) Compile + unit test (classic inner-loop)
      - name: compile-and-unit-tests
        container: build
        command: mvn -B clean verify
      # 2) Static analysis / linting using Maven plugins
      - name: static-analysis
        container: build
        command: mvn -B spotbugs:check checkstyle:check
      # 3) Package JAR / artefacts (if not already produced by a previous goal)
      - name: package-artifacts
        container: build
        command: mvn -B package
      # 4) Build container image from Dockerfile using dedicated image builder
      - name: build-container-image
        container: image-builder
        command: >
          /kaniko/executor
          --dockerfile=Dockerfile
          --context=.
          --destination=registry.example.com/my-team/my-service:${HARNESS_BUILD_ID}
      # 5) Image security scan & SBOM generation
      - name: security-scan-and-sbom
        container: security
        command: >
          trivy image
          --scanners vuln,secret,config
          --format sarif
          --output trivy-report.sarif
          registry.example.com/my-team/my-service:${HARNESS_BUILD_ID}
      # 6) Contract/integration tests against the built image or a test deployment
      - name: contract-tests
        container: build
        command: mvn -B -Pintegration-test verify
```
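For reference, a minimal sketch of the single Kubernetes pod such a stage might map to, assuming Harness injects a keep-alive mechanism so that steps can be exec'd into each named container in sequence (the pod name is illustrative; Harness would generate the real spec):

```yaml
# Illustrative pod only — one pod per stage, one long-lived container per name.
# Steps referencing the same name are exec'd into that container sequentially.
apiVersion: v1
kind: Pod
metadata:
  name: harness-ci-shared-stage   # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: build
      image: maven:3.9-eclipse-temurin-21
    - name: image-builder
      image: gcr.io/kaniko-project/executor:latest
    - name: security
      image: aquasec/trivy:latest
```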
1 · Continuous Integration · long-term
Pre-Execution Pod YAML Size Validation and Threshold Controls for CI Stages
The Problem

When using Kubernetes-based CI builds, pod YAML specifications can exceed GKE's 1.5 MiB etcd limit, causing pipeline failures with "etcdserver: request is too large" errors. Currently, these failures are only discovered at runtime, when Kubernetes rejects the pod specification; there is no way to proactively validate or prevent this before execution. We appreciate the recent CI_COMMON_ENV_POD optimization (delegate 25.11.87300), which helps reduce pod size, but it does not provide visibility or governance controls to prevent future breaches.

What We Need

A proactive validation mechanism that:
• Calculates pod YAML size before execution — estimate the pod specification size before submitting it to Kubernetes.
• Provides configurable thresholds — allow administrators to set size limits at account, project, or stage level (sketched below).
• Supports warning and fail behaviors — option to warn when approaching limits, or fail fast before Kubernetes rejection.
• Offers visibility — display pod size in execution logs for troubleshooting and template governance.

Use Case

Platform teams managing CI/CD templates need to validate that adding new stages, security-scanning steps, or template changes won't breach etcd limits before releasing to production. Currently this is trial and error through runtime failures, impacting developer productivity and release velocity.

Business Value
• Shifts from reactive incident response to proactive prevention.
• Enables safe evolution of CI templates without risk of platform-wide failures.
• Reduces failed builds and investigation time for development teams.
• Applicable to any enterprise customer using Kubernetes-based CI infrastructure.
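A minimal sketch of what such threshold controls might look like, assuming a hypothetical podValidation block on the Kubernetes infrastructure spec (all field names below are illustrative, not existing Harness syntax):

```yaml
# Hypothetical fields for illustration only
stage:
  name: build
  type: CI
  spec:
    infrastructure:
      type: KubernetesDirect
      spec:
        podValidation:               # hypothetical block
          maxPodSpecBytes: 1258291   # ~1.2 MiB safety margin under the ~1.5 MiB etcd limit
          onBreach: Warn             # Warn: log and continue; Fail: stop before pod submission
          logPodSpecSize: true       # print the estimated size in execution logs
```

The same block could plausibly live at account or project scope, with stage-level overrides, so platform teams can set one governance policy and tighten it per template.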
3 · Continuous Integration · long-term
Conditional Stage-level execution based on changed files
Subject: Conditional stage-level execution based on changed files (without JEXL)

Expected behavior

Users should be able to configure stage-level execution rules based on changed files in a PR or push directly in the trigger, without needing to write JEXL expressions. Concretely, for each stage in a CI pipeline, users want to:
• Specify one or more file path / glob patterns (e.g. services/api/**, infra/**, frontend/**) that determine whether that stage should run.
• Use a simple, UI-driven configuration (fields similar to the trigger "file path" filters) in the trigger details, instead of manually writing and maintaining JEXL against the webhook payload.
• Leverage the same "changed files" information already used at the trigger level, but applied at stage granularity, so that different stages can run (or be skipped) depending on which parts of the repo changed.

This is a common CI pattern in other tools (e.g., "run this job only if files under X changed"), and users expect it to be straightforward to configure.

Observed behavior / current limitation

Today, Harness allows file path conditions at the trigger level, including "run this trigger only if certain files changed". However, stage-level control is either:
• Static: you can choose which stages run in the trigger configuration, but this is not dynamic per execution; or
• Dynamic only via JEXL, where users must:
  1. Parse the trigger payload (e.g. <+trigger.payload> / webhook body).
  2. Derive changed files manually.
  3. Write a JEXL condition per stage to decide whether it should execute.

This JEXL-based approach:
• Is not intuitive for many users.
• Is error-prone and hard to maintain.
• Feels like an anti-pattern for what customers consider a "simple CI feature".

How to observe / example use case

The customer has a PR checks pipeline with multiple stages, for example:
• backend-tests → should run when backend/** changes.
• frontend-tests → should run when frontend/** changes.
• infra-validation → should run when infra/** changes.

A trigger is configured for PR events on a Git repo. They want each stage to automatically run only when the corresponding path patterns change in the PR. Today:
• Trigger-level path filters can decide whether to fire the pipeline at all, but cannot selectively run stages.
• Stage-level dynamic behavior requires manual JEXL against the payload to simulate "changed files" logic in the conditional execution settings.
• There is no built-in, discoverable UI option at the stage level to say: "Run this stage only when these files change." (See the sketch below.)
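A sketch of how this might look in stage YAML, assuming a hypothetical changedFiles field alongside the existing when conditional-execution block (changedFiles is illustrative, not current Harness syntax):

```yaml
# "changedFiles" is a hypothetical field, shown for illustration only
stages:
  - stage:
      name: backend-tests
      type: CI
      when:
        pipelineStatus: Success
        changedFiles:          # run only if the PR/push touches these paths
          - backend/**
  - stage:
      name: frontend-tests
      type: CI
      when:
        pipelineStatus: Success
        changedFiles:
          - frontend/**
  - stage:
      name: infra-validation
      type: CI
      when:
        pipelineStatus: Success
        changedFiles:
          - infra/**
```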
2 · Continuous Integration · long-term