
Feature Requests

Anonymous

Feature Requests for Harness. Select 'Category' based on the module you are requesting the feature for.
Feature Request: Allow Steps to Reuse a Shared Named Container Within a CI Stage (Opt-In)
Problem

In Harness CI, each step creates its own ephemeral container even when multiple steps use the same image. This increases build time, image pull overhead, pod/container churn, and Kubernetes load. Many pipelines naturally group related steps (e.g., build + test in the same toolchain), but Harness currently cannot express "reuse this container across steps". This differs from established patterns in Jenkins, Tekton, and other Kubernetes-based CI systems.

⸻

Proposal (Opt-In)

Introduce an optional stage-level feature that allows users to:
1. Define named containers in the stage (e.g., build, node, tools).
2. Assign steps to those containers via a simple container: <name> field.
3. Harness initialises each named container once per pod and runs all referencing steps inside it sequentially.
4. Default behaviour (one container per step) remains unchanged for users who want strict isolation.

⸻

Value

• Performance: avoids repeated container startup and image checks; reduces CI latency.
• Efficiency: lowers registry traffic and node/kubelet workload; improves cluster utilisation.
• Better modelling: aligns with how teams naturally structure pipelines and with patterns from other CI systems.
• Safe & incremental: fully optional; no breaking changes; ideal for advanced users who want faster pipelines.

⸻

Example YAML

The spec section could be updated to hold the pod spec before the steps. Something like:

```yaml
stage:
  name: Full SDLC Validation
  type: CI
  spec:
    # Opt-in: define shared, named containers once per stage
    pod:
      containers:
        # Build & test toolchain
        - name: build
          image: maven:3.9-eclipse-temurin-21
          # could mount a cache volume here in future
        # Image build toolchain (e.g. Kaniko / BuildKit / Docker CLI wrapper)
        - name: image-builder
          image: gcr.io/kaniko-project/executor:latest
        # Security tools (SCA, image scan, SBOM)
        - name: security
          image: aquasec/trivy:latest
    steps:
      # 1) Compile + unit test (classic inner-loop)
      - name: compile-and-unit-tests
        container: build
        command: mvn -B clean verify
      # 2) Static analysis / linting using Maven plugins
      - name: static-analysis
        container: build
        command: mvn -B spotbugs:check checkstyle:check
      # 3) Package JAR / artefacts (if not already from previous goal)
      - name: package-artifacts
        container: build
        command: mvn -B package
      # 4) Build container image from Dockerfile using dedicated image builder
      - name: build-container-image
        container: image-builder
        command: >
          /kaniko/executor
          --dockerfile=Dockerfile
          --context=.
          --destination=registry.example.com/my-team/my-service:${HARNESS_BUILD_ID}
      # 5) Image security scan & SBOM generation
      - name: security-scan-and-sbom
        container: security
        command: >
          trivy image
          --scanners vuln,secret,config
          --format sarif
          --output trivy-report.sarif
          registry.example.com/my-team/my-service:${HARNESS_BUILD_ID}
      # 6) Contract/integration tests against the built image or a test deployment
      - name: contract-tests
        container: build
        command: mvn -B -Pintegration-test verify
```
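For comparison, the "shared container" pattern the request refers to exists in other CI systems today. As one illustration, GitHub Actions lets a job declare a single container image, and every `run` step in that job then executes inside that one container (the job name and steps below are made-up placeholders mirroring the example above):

```yaml
# GitHub Actions: all `run` steps of a job share one container.
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    container:
      image: maven:3.9-eclipse-temurin-21
    steps:
      - uses: actions/checkout@v4
      - name: compile-and-unit-tests
        run: mvn -B clean verify
      - name: static-analysis
        run: mvn -B spotbugs:check checkstyle:check
```

The proposal generalises this to multiple named containers per stage, with a per-step `container:` reference rather than a single job-wide image.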
1 · Continuous Integration · long-term
Pre-Execution Pod YAML Size Validation and Threshold Controls for CI Stages
The Problem

When using Kubernetes-based CI builds, pod YAML specifications can exceed GKE's 1.5 MiB etcd limit, causing pipeline failures with "etcdserver: request is too large" errors. Currently, these failures are only discovered at runtime, when Kubernetes rejects the pod specification; there is no way to proactively validate or prevent this before execution. We appreciate the recent CI_COMMON_ENV_POD optimization (delegate 25.11.87300), which helps reduce pod size, but it does not provide visibility or governance controls to prevent future breaches.

What We Need

A proactive validation mechanism that:
• Calculates pod YAML size before execution: estimates the pod specification size before submitting it to Kubernetes.
• Provides configurable thresholds: allows administrators to set size limits at the account, project, or stage level.
• Supports warning and fail behaviors: option to warn when approaching limits or fail fast before Kubernetes rejection.
• Offers visibility: displays pod size in execution logs for troubleshooting and template governance.

Use Case

Platform teams managing CI/CD templates need to validate that adding new stages, security scanning steps, or template changes won't breach etcd limits before releasing to production. Currently this is trial and error through runtime failures, impacting developer productivity and release velocity.

Business Value

• Shifts from reactive incident response to proactive prevention.
• Enables safe evolution of CI templates without risk of platform-wide failures.
• Reduces failed builds and investigation time for development teams.
• Applicable to any enterprise customer using Kubernetes-based CI infrastructure.
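The requested check can be sketched in a few lines. This is a hypothetical illustration, not a Harness feature: it serializes a pod spec (here a made-up stand-in dict; a real check would use the fully rendered pod Harness submits to Kubernetes) and compares its byte size against the ~1.5 MiB etcd limit, with an assumed warn-at-80% policy:

```python
import json

# Assumption: GKE's etcd request limit is ~1.5 MiB; warn ratio is illustrative.
ETCD_LIMIT_BYTES = int(1.5 * 1024 * 1024)
WARN_RATIO = 0.8

def pod_spec_size(spec: dict) -> int:
    """Rough size of the pod spec as serialized for the API server."""
    return len(json.dumps(spec).encode("utf-8"))

def check_pod_size(spec: dict) -> str:
    """Classify a pod spec as OK / WARN / FAIL against the etcd limit."""
    size = pod_spec_size(spec)
    if size >= ETCD_LIMIT_BYTES:
        return f"FAIL: pod spec is {size} bytes (limit {ETCD_LIMIT_BYTES})"
    if size >= WARN_RATIO * ETCD_LIMIT_BYTES:
        return f"WARN: pod spec is {size} bytes, approaching the limit"
    return f"OK: pod spec is {size} bytes"

# Stand-in pod spec with some padded env vars to simulate CI-injected config.
example_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "ci-build"},
    "spec": {
        "containers": [
            {
                "name": "build",
                "image": "maven:3.9-eclipse-temurin-21",
                "env": [{"name": f"VAR_{i}", "value": "x" * 100} for i in range(10)],
            }
        ]
    },
}

print(check_pod_size(example_spec))
```

A fail-fast variant of the same check could run in the delegate before pod submission, emitting the computed size to the execution log either way so template authors can track growth over time.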
0 · Continuous Integration