Certain steps should always fail, even when the resilience option `failOnError=false` is used.
* Docker execution typically happens in another step. We should not hide errors here but rather handle resilience in the step that uses `dockerExecute` or `dockerExecuteOnKubernetes`.
* Wrapper steps like `pipelineExecute` and `pipelineRestartSteps` should not hide errors. If an error occurred it has to be considered **intentional** and must not be hidden accidentally when the resilience option is switched on.
* handlePipelineStepErrors - allow step timeouts
This adds another resilience option:
A timeout can be configured per step in order to stop step execution and continue with the pipeline while setting the build status to "UNSTABLE".
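A minimal sketch of the idea, using a hypothetical `runWithTimeout` helper (the actual `handlePipelineStepErrors` signature and configuration keys may differ):

```groovy
// Hypothetical helper illustrating the new resilience behavior:
// abort the step body after a time budget, but keep the pipeline going
// and only degrade the build result to UNSTABLE.
def runWithTimeout(int minutes, Closure body) {
    try {
        timeout(time: minutes, unit: 'MINUTES') {
            body()
        }
    } catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException e) {
        echo "Step aborted after ${minutes} minutes, continuing with build status UNSTABLE"
        currentBuild.result = 'UNSTABLE'
    }
}

// usage inside a Jenkinsfile
runWithTimeout(10) {
    sh './long-running-task.sh'
}
```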
When dealing with stashes in dockerExecuteOnKubernetes the global
stash list was updated from within the step. As a result stashes
were transported between steps, which in turn meant that old
stashes were unstashed in a pod later in the build. E.g. mtaBuild
followed by neoDeploy: mtaBuild created a stash, the stash was
remembered in the default stash list and re-used later on by
neoDeploy. Since that stash had been created before the mtaBuild
ran, the deployable was missing in the later step.
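A hedged sketch of the fix as I understand it (names like `config.stashContent` and `stashWorkspace` are illustrative, not necessarily the real ones): the step works on a local copy of the stash list instead of appending to the shared one.

```groovy
// Before (problematic): appending to the shared list leaks pod-internal
// stashes into subsequent steps of the build.
// config.stashContent += stashWorkspace()

// After (sketch): copy the list, so changes stay local to this pod execution.
def localStashes = (config.stashContent ?: []).collect()   // defensive copy
localStashes += stashWorkspace()                            // hypothetical helper
// stash/unstash only via localStashes inside the pod;
// the caller's stash list remains untouched.
```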
* alpine does not support the date option `--universal`
Replaced by `--utc` as this seems to be more universal than `--universal` (see the snippet after this list).
* Fix unit tests after date parameter change
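For illustration (the timestamp format is just an example, not the one used by the step):

```groovy
// --universal fails on Alpine's busybox date; --utc works on both
// GNU coreutils and busybox.
def timestamp = sh(returnStdout: true, script: 'date --utc +"%Y%m%d%H%M%S"').trim()
```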
Up to now the presence of the deployable (source) was checked late
by the NeoCommandLineHelper. The code doing this is surrounded by
the try/catch which finally also puts the log written by the neo
toolset into the job log in case an exception occurred.
The check for the deployable fails with the same type of exception
as a failed neo command, hence we cannot distinguish the two (ok,
it would be possible to parse the exception message, but that is ugly).
When the exception is triggered by the missing deployable we try to
cat the neo log into the job log. But at this point no neo log has
been written, since neo has not been called at all in this case.
Hence the `cat logs/neo/*` in turn fails.
In order to avoid such a failure we now check for the presence of the
deployable earlier, before launching the neo toolset.
Since the deployable is used in every deploy mode, no further check
of the deploy mode is required prior to the check for the deployable.
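Roughly, the change amounts to this kind of early guard (the variable names and the neo invocation are illustrative):

```groovy
// fail fast with a clear message while no neo log exists yet
if (!fileExists(deployable)) {
    error "Deployable '${deployable}' does not exist."
}

try {
    sh "${neoExecutable} deploy ${commandLineArgs}"   // neo is called only after the check
} catch (Exception e) {
    sh 'cat logs/neo/*'   // now a neo log can be expected to exist
    throw e
}
```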
From the property dockerImage we cannot conclude that we are in fact running inside a docker environment.
Step dockerExecute has checks whether we are in a docker context; if not, there is a fallback to the
local environment.
The docker image property is provided from resources/default_pipeline_environment (value: 's4sdk/docker-neo-cli').
Hence a value will be present basically all the time (exception: someone explicitly configured null or an empty
string), so we enter the corresponding code block anyway.
It is IMO also desirable to have the neo log in the job log when running inside a non-docker setup, since this
simplifies troubleshooting.
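A sketch of the consequence for the error handling (the real code structure may differ):

```groovy
// Before (sketch): the neo log was only surfaced when a docker image was
// configured, although that property is effectively always set.
// if (configuration.dockerImage) {
//     sh 'cat logs/neo/*'
// }

// After (sketch): surface the neo log regardless of docker vs. local execution.
sh 'cat logs/neo/*'
```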
* Fix sanity checks for warPropertiesFile deploy mode.
* Improve tests for the sanity checks
The sanity checks are performed per deploy mode.
All parameters of a deploy mode are checked at once (see the sketch after this list).
* Explicit check for host and account, which are not covered by the sanity checks for the warPropertiesFile deploy mode
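A hypothetical sketch of the pattern (the parameter and mode names are illustrative): collect all missing parameters for the active deploy mode and report them in one go, with a separate explicit check for host and account in the warPropertiesFile mode.

```groovy
// check everything required for the active deploy mode at once
def requiredParameters = [
    warParams        : ['host', 'account', 'application', 'runtime', 'runtimeVersion'],
    warPropertiesFile: ['propertiesFile'],
][deployMode]

def missing = requiredParameters.findAll { !configuration[it] }
if (missing) {
    error "Deploy mode '${deployMode}': missing parameter(s): ${missing.join(', ')}"
}

// warPropertiesFile mode: host and account live inside the properties file,
// hence they need their own explicit check there.
```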
* Define pod using k8s yaml manifest
The Kubernetes plugin allows pods to be defined directly via the Kubernetes
API specification:
https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
This has the advantage of unlocking Kubernetes features which are not
exposed via the Kubernetes plugin, including all Kubernetes security
features.
Using the Kubernetes API directly is also better from a development
point of view because it is stable and better designed than the API the
plugin offers.
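For reference, the plugin-level feature this builds on (example adapted from the linked documentation; `dockerExecuteOnKubernetes` generates such a manifest internally):

```groovy
podTemplate(label: 'mypod', yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3-alpine
    command: ['cat']
    tty: true
''') {
    node('mypod') {
        container('maven') {
            sh 'mvn -version'
        }
    }
}
```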
* Make the Kubernetes ns configurable
If one Jenkins master is used by multiple teams, it is desirable to
schedule K8S workloads in separate namespaces.
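On the plugin level this maps to the `namespace` parameter of `podTemplate`; how the library exposes it in its own configuration is not spelled out here:

```groovy
// podManifest: a pod spec as in the previous example (assumed to be defined)
podTemplate(label: 'team-a-pod', namespace: 'team-a', yaml: podManifest) {
    node('team-a-pod') {
        // team A's workload runs in its own namespace
    }
}
```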
* Add securityContext to define uid and fsGroup
In the context of the Jenkins k8s plugin, uids and fsGroups play an
important role because the containers of a pod share a common file system.
Therefore it is beneficial to configure this data centrally.
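A sketch of what a centrally configured securityContext could look like once rendered into the pod spec (the keys follow the Kubernetes API; the library's own configuration key names are not shown, and the values are examples):

```groovy
import groovy.json.JsonOutput

// centrally maintained defaults, e.g. in the pipeline defaults
def securityContext = [runAsUser: 1000, fsGroup: 1000]

def podSpec = [
    apiVersion: 'v1',
    kind      : 'Pod',
    spec      : [
        securityContext: securityContext,   // uid/fsGroup apply to all containers of the pod
        containers     : [[name: 'container-exec', image: 'maven:3-alpine']]
    ]
]

// serialize the map into the manifest handed to the Kubernetes plugin
// (JSON is valid YAML, so it can be passed as a pod manifest)
echo JsonOutput.prettyPrint(JsonOutput.toJson(podSpec))
```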
* Fix indentation
* Undo format changes
* Extend and fix unit tests
* Fix port mapping
* Don't set uid globally
This does not work with jaas due to permission problems.
* Fix sidecar test
* Make security context configurable at stage level
* Extract json serialization
* Cleanup unit tests