The scenario is as follows:
I run a containerised run step in an application namespace so that I can mount a service account that exists within that namespace and use it to connect to an external service.
Currently the pod that spins up in the application namespace to run the containerised step needs direct connectivity to app.harness.io in order to send logs back to Harness. This is not appropriate, as some namespaces will have very restrictive Network Policies, and we don't want to have to open an egress route when one already exists in-cluster via the delegate.
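For illustration, an application namespace like this might enforce a default-deny external egress policy similar to the sketch below, under which a step pod cannot reach app.harness.io at all (all names are hypothetical):

```yaml
# Hypothetical default-deny egress policy of the kind a locked-down
# application namespace might enforce: pods may talk within the cluster
# but cannot reach external endpoints such as app.harness.io.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-external-egress
  namespace: app-namespace      # hypothetical application namespace
spec:
  podSelector: {}               # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector: {} # in-cluster traffic only
```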
Either of the following two options would be preferable:
  1. The containerised step pod logs to stdout/stderr and the Harness delegate streams the output via the Kubernetes API, then sends it (critically, from within the delegate's own namespace) to app.harness.io on behalf of the step. This would require no additional Network Policies to be created. (A rough sketch of the streaming side appears after this list.)
  2. When running a containerised run step outside of the delegate's own namespace, the delegate would start a proxy container in its namespace and configure the log streamer to send the logs via this proxy. This would require an internal Network Policy to allow traffic to the proxy container, but that policy could easily be restricted via selectors so it applies only to pods running Harness steps (an example policy appears below).
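As a rough sketch of option 1, assuming a Kubernetes client comparable to client-go (the delegate's actual implementation may differ), streaming a step pod's output from the delegate side could look like this; the namespace, pod, and container names are hypothetical:

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// streamStepLogs follows the stdout/stderr of a step pod via the Kubernetes
// API, so the network hop to app.harness.io can happen from the delegate's
// own namespace rather than from the application namespace.
func streamStepLogs(ctx context.Context, client kubernetes.Interface, namespace, pod, container string, sink io.Writer) error {
	req := client.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep streaming while the step runs
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	// In the real delegate this sink would be its existing log uploader;
	// os.Stdout stands in for it here.
	_, err = io.Copy(sink, stream)
	return err
}

func main() {
	cfg, err := rest.InClusterConfig() // the delegate runs in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Hypothetical names for the application namespace and step pod.
	if err := streamStepLogs(context.Background(), client, "app-namespace", "run-step-pod", "step", os.Stdout); err != nil {
		panic(err)
	}
}
```

Because the delegate already has read access to pod logs in namespaces where it runs steps, this route needs no new egress from the application namespace at all.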
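For option 2, the internal Network Policy could be scoped with selectors roughly as follows; every label, namespace, and port here is a hypothetical placeholder:

```yaml
# Hypothetical policy in the delegate's namespace admitting log traffic to
# the proxy container only from pods labelled as Harness step pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-step-logs-to-proxy
  namespace: harness-delegate           # hypothetical delegate namespace
spec:
  podSelector:
    matchLabels:
      app: harness-log-proxy            # hypothetical proxy pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector: {}         # any application namespace...
          podSelector:
            matchLabels:
              harness.io/step-pod: "true" # ...but only Harness step pods
      ports:
        - protocol: TCP
          port: 8080                    # hypothetical proxy port
```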