* Add new config mtarName for mtaBuild step
* Remove unnecessary whitespace changes in unit test
* Sort new config & avoid file operation when this config is provided
* Modify the test to take the custom name without extension
* Update new config documentation
Co-Authored-By: Christopher Fenner <26137398+CCFenner@users.noreply.github.com>
* custom mtar name should be given with the `.mtar` extension (usage sketched below)
* Updated the config documentation
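A minimal usage sketch, assuming the new key is named `mtarName` as stated above and that the value has to include the `.mtar` extension; not a documented contract:

```groovy
// Hypothetical sketch of the new mtaBuild option described above.
mtaBuild(
    script: this,
    mtarName: 'my-application.mtar'  // custom name, given including the .mtar extension
)
```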
* Streamline url parsing in piperPipelineStageInit
* Remove .git appendix only once
* Improve the regex for parsing urls
Now the colon for the port is contained in the port group. This makes the regex
easier to understand.
* Improve the regex for parsing the urls again
Now the leading slash of the path is contained in the path group. This makes the regex
easier to understand.
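An illustrative sketch of such a regex (not the exact one used in piperPipelineStageInit), showing the port group keeping its colon and the path group keeping its leading slash:

```groovy
// Illustrative only - the real regex in piperPipelineStageInit may differ.
def urlPattern = '^(https?://)([^:/]+)(:\\d+)?(/.*)?$'
def groups = ('https://github.example.com:8443/org/repo.git' =~ urlPattern)[0]
assert groups[2] == 'github.example.com'
assert groups[3] == ':8443'          // the colon is part of the port group
assert groups[4] == '/org/repo.git'  // the leading slash is part of the path group
```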
The parameters map is handed over directly from outside into the step via the signature of the call method.
The container map is defined as a step parameter, not as a parameter handed over (only) via the parameters map.
With the current approach only the container map from the parameters map is taken into account; in case it
is defined elsewhere it is not taken into account.
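A minimal sketch of the intended lookup (the key name `containerMap` is taken from the text above, the fallback order is an assumption): the explicitly passed parameter wins, with the step configuration as fallback.

```groovy
// Sketch only - consider the container map from the step configuration as well,
// not only the one passed via the parameters map.
Map containerMap = parameters.containerMap ?: config.containerMap ?: [:]
```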
* [refactoring] condense common coding for cf deploy
Small change beyond refactoring: for mtaDeploy the user is now quoted.
* more general name: logoutAction -> postDeployAction
# Changes
This PR adds a new step: cloudFoundryServiceCreate
There is a cf community plugin [Create-Service-Push](https://github.com/dawu415/CF-CLI-Create-Service-Push-Plugin) available to apply infrastructure as code to Cloud Foundry. The plugin uses a manifest.yml to create services in a targeted CF space.
The proposed step provides an interface to this plugin (see the usage sketch after the lists below).
Already done:
- [x] Tests
- [x] Documentation
Further actions:
- Refactoring: Move varOptions and varsFileOption code into a class and make use of it here and in the cloudFoundryDeploy step. -> Is it ok to use CfManifestUtils, or add it as a new class to the variablesubstitution package?
- enhance the s4sdk cf cli docker image to include the plugin.
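A hypothetical usage sketch of the proposed step; the parameter names are assumptions derived from the description above, not the final API:

```groovy
// Hypothetical sketch - parameter names are assumptions, not the final API.
cloudFoundryServiceCreate(
    script: this,
    cloudFoundry: [
        apiEndpoint  : 'https://api.cf.example.com',
        org          : 'myOrg',
        space        : 'mySpace',
        credentialsId: 'CF_CREDENTIALS'
    ],
    serviceManifest: 'services-manifest.yml'  // consumed by the Create-Service-Push plugin
)
```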
* Provide additional opts for cf deploy
Inside cloudFoundryDeploy we use these cf commands:
- login
- plugins
- blue-green-deploy
- push
- deploy
- bg-deploy
- stop
- logout
- logout and stop do not provide any options.
- plugins provides options (--checksum, --outdated), but it is unlikely that
these options can be used in a reasonable way during the deploy process.
- login now uses `loginOpts`.
- The other commands now use `deployOpts`.
* provide additional opts also for cf api calls
* Provide more log when verbose
* re-use mtaDeployParameters and adjust names of other params (api, login) accordingly
* Streamline naming
* distinguish between cfNative and mta deploy params
* Add cfNativeDeployParam default
* login and api parameters are not under cloudFoundry (see the configuration sketch below)
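A hypothetical configuration sketch reflecting the final naming discussed above (`apiParameters`, `loginParameters`, `cfNativeDeployParameters`, `mtaDeployParameters`); treat the keys and values as assumptions rather than a documented contract:

```groovy
// Hypothetical sketch - key names follow the commit messages above and are assumptions.
cloudFoundryDeploy(
    script: this,
    deployTool: 'cf_native',
    apiParameters: '--skip-ssl-validation',    // extra options for 'cf api'
    loginParameters: '--skip-ssl-validation',  // extra options for 'cf login'
    cfNativeDeployParameters: '--no-start',    // extra options for push / blue-green-deploy
    cloudFoundry: [
        org: 'myOrg',
        space: 'mySpace',
        credentialsId: 'CF_CREDENTIALS'
    ]
)
```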
* Back commonPipelineEnvironment step by shared class
Each pipeline step comes with its own instance of a commonPipelineEnvironment.
Properties stored on one instance were not shared with the other instances.
Now we strip down the commonPipelineEnvironment step and forward basically
everything to a shared singleton instance.
With that approach all instances of commonPipelineEnvironment share the
same data and can now really be used for information exchange between the steps.
Before that change only the commonPipelineEnvironment instance associated with
the pipeline script itself could be used for that purpose.
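A minimal sketch of the delegation idea (class and method names are assumptions, not the actual piper code): the step itself keeps no state and forwards everything to one shared object.

```groovy
// Sketch only - names are assumptions, not the actual piper implementation.
class SharedPipelineEnvironment implements Serializable {
    private static final SharedPipelineEnvironment INSTANCE = new SharedPipelineEnvironment()
    static SharedPipelineEnvironment getInstance() { INSTANCE }

    private final Map values = [:]
    void setValue(String key, Object value) { values[key] = value }
    Object getValue(String key) { values[key] }
}

// Every commonPipelineEnvironment step instance would forward get/set calls to
// SharedPipelineEnvironment.instance, so all steps see the same data.
```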
* Remove unneeded commented line
* Changes:
- New YamlSubstituteVariables step to substitute variables in YAML files with values from another YAML
- New Tests, that check the different substitution patterns.
- Added test resources, including various manifest and variables files.
- Improved usage of JenkinsLoggingRule
- Improved JenkinsReadYamlRule to properly reflect the mocked library's behaviour.
- Added a new JenkinsWriteYamlRule.
* Changes:
- added a Logger that checks a config.verbose flag before it logs debug messages.
- changed error handling to rethrow Yaml parsing exception in case of wrongly-formatted Yaml files.
- changed JenkinsWriteYamlRule to capture Yaml file details of every invocation of writeYaml. This allows sanity checks at end of tests, even if there were multiple invocations.
- adjusted tests.
* Changes:
- Removed javadoc-code blocks from API documentation since they are not supported.
- Removed skipDeletion boolean.
- Added a new deleteFile script which deletes a file if present.
- Added a new JenkinsDeleteFileRule to mock deleteFile script and optionally skip deletion for tests.
- Adjusted yamlSubstituteVariables script.
- Adjusted tests to include new JenkinsDeleteFileRule.
- Changed code that deletes an already existing output file to produce better logs.
* Changes:
- Turned yamlSubstituteVariables into a script that works purely based on Yaml data (not files).
- Added a new cfManifestSubstituteVariables that uses yamlSubstituteVariables under the hood but works based on files (see the sketch below).
- Adjusted tests, and added new ones.
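A conceptual sketch of the substitution itself, using the `((variable))` reference syntax of CF manifests; this is not the actual implementation, which works on parsed Yaml data structures.

```groovy
// Conceptual sketch only - the real implementation works on parsed Yaml data.
String substitute(String manifestText, Map variables) {
    variables.inject(manifestText) { text, entry ->
        text.replace("((${entry.key}))", entry.value.toString())
    }
}

assert substitute('instances: ((instanceCount))', [instanceCount: 2]) == 'instances: 2'
```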
* Adjusted documentation and a few log statements.
* Changed documentation to no longer include javadoc code statements.
* Made mocking of deletion of a file a default. Adjusted tests.
* Changed signature of yamlSubstituteVariables' call method to return void.
* Changes:
- Fixed naming issues in deleteFile.
- Renamed Logger to DebugHelper.
- Fixed some documentation.
* Changed implementation of deleteFile not to use java.io.File - which is evil when using it for file operations.
* PROPERLY Changed implementation of deleteFile not to use java.io.File - which is evil when using it for file operations.
* Changes:
- Added tests for deleteFile script
- Changed JenkinsFileExistsRule to also keep track of which files have been queried for existence.
* Changes:
- Removed java.io.File usage from cfManifestSubstituteVariables and using fileExists instead now.
- Adjusted tests.
* Wrapped file path inside ticks to allow spaces in file path when calling deleteFile.
* Removed null checks of mandatory parameters, and resorted to ConfigurationHelper.withMandatoryProperty
* Fixed a NullPointer due to weird Jenkins / Groovy behaviour.
* Changes:
- Turned yamlSubstituteVariables step into a utils class.
- Added tests
- Adjusted cfManifestSubstituteVariables to use utils class instead of step.
- Adjusted tests
- Adjusted APIs of DebugHelper.
* Re-introduced log statement that shows what variables are being replaced and with what.
* Changing API of YamlUtils to take the script and config as input.
* Test
* Test
* Test
* Test
* Test
* Fixing issue.
* Fixing issue.
* Changes:
- Refactored DebugHelper and YamlUtils to make usage nicer and rely on dependency injection.
- Removed Field for DebugHelper and turned it into local variable.
- Adjusted classes using the above.
- Adjusted tests where necessary.
* Added link to CF standards to YamlUtils also.
* Add docu for step cfManifestSubstituteVariables.md
* Added documentation.
* Added missing script parameter to documentation. Some steps document it, some don't. Right now you need it, so we document it.
* Fixed some layout issues and typos
* Beautified exception listing.
* Removed trailing whitespaces to make code climate checks pass.
* Trying to get documentation generated, with all the exceptions to markup one should not use.
* cosmetics.
* cosmetics, part 2
* Code climate changes...
* Inlined deleteFile step.
* Added two more tests to properly check file deletion and output handling.
* Changes:
- adjusted API to take a list of variables files, as does 'cf push --vars-file'
- adjusted API to allow for an optional list of variable key-value-maps as does 'cf push --vars'
- reproduced conflict resolution and overriding behavior of variables files and vars lists
- adjusted tests and documentation (see the usage sketch below)
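A hypothetical usage sketch mirroring the `cf push --vars-file` / `--vars` semantics described above; the parameter names are assumptions:

```groovy
// Hypothetical sketch - parameter names are assumptions based on the description above.
cfManifestSubstituteVariables(
    script: this,
    manifestFile: 'manifest.yml',
    variablesFiles: ['manifest-variables.yml'],       // like repeated 'cf push --vars-file'
    variables: [[instances: 2], [appName: 'my-app']]  // like repeated 'cf push --vars'
)
```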
* Added missing parameter to doc comment.
* Re-checked docs for missing parameters or params that have no counterpart in the method signature.
* Adjusted documentation.
* Removed absolute path usage from documentation.
* corrected documentation.
* Changed javadoc comment to plain comment.
* Turned all comments to plain comments.
Before: complete scmInfo was handed over via method signature.
After: Only the relevant part (GIT_URL from scmInfo) is handed over.
All the other properties from scmInfo are not used in the method body.
With this approach it is more obvious what is used inside the method.
* dockerExecuteOnKubernetes - add stashBack configuration
For certain cases it is valuable to bring only some of the files from an execution inside a container back to the workspace.
This is now added (see the sketch below).
Closes #753
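A hypothetical usage sketch; `stashContent` and `stashBack` are assumed parameter names based on the commit title above, not a documented contract:

```groovy
// Hypothetical sketch - parameter names are assumptions.
dockerExecuteOnKubernetes(
    script: this,
    dockerImage: 'maven:3.6-jdk-8',
    stashContent: ['source'],        // stashes made available inside the pod
    stashBack: ['**/target/*.jar']   // only bring these files back to the workspace
) {
    sh 'mvn -B package'
}
```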
* refactor according to PR review
* Take proper jnlp image as default for Kubernetes execution
The following changes are contained:
* removal of custom jnlp image as default
* allow customization of jnlp image via system environment (see the sketch below)
fixes #757
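A sketch of the customization described above; the environment variable name and the container list layout are assumptions.

```groovy
// Sketch only - the environment variable name is an assumption.
// If a system-wide jnlp image is configured, add an explicit jnlp container;
// otherwise the Kubernetes plugin falls back to its own default jnlp agent image.
def containers = [[name: 'container-exec', image: config.dockerImage]]
if (env.JENKINS_JNLP_IMAGE) {
    containers << [name: 'jnlp', image: env.JENKINS_JNLP_IMAGE]
}
```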
* add documentation
This step should serve as a generic entry point in pipelines for building artifacts.
Build principle: build once.
Purpose of the step:
- build using a defined build technology
- store build result for future use in testing etc.
* dockerExecuteOnKubernetes - hide yaml by default
* hide step parameters to not leak sensitive parameter values into the log
* add more details to log output
Stashing the .git directory had negative side-effects.
A solution would be to stash the `.git` folder and unstash it in `dockerExecuteOnKubernetes` only if required for a dedicated scenario.
* add Slack notification to post stage
* add Slack notification to init stage
* add trigger condition for Slack notification
* fix whitespaces
* use capital stage name
* add tests for init stage
* remove unused import
* add tests for post stage
* minor changes
* fix typo
* Pipeline resilience - be more verbose
Be more verbose about when a pipeline gets into 'UNSTABLE' state.
Collect step name centrally to be able to inform end-users at a later point inside a pipeline (e.g. during an approval step).
* address PR feedback
Extends the init stage conditions with the condition `configKeys`.
This condition allows specifying a list of configuration keys; if any of these keys is set, the respective step & stage is activated (sketched below).
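A conceptual sketch of the new activation condition; the configuration keys and content used here are examples only.

```groovy
// Conceptual sketch - keys and configuration content are examples only.
Map stepConfig = [cfTargets: [[org: 'myOrg', space: 'mySpace']]]  // resolved step/stage configuration
List configKeys = ['cfTargets', 'neoTargets']
boolean stepActive = configKeys.any { String key -> stepConfig[key] }
assert stepActive
```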
Allow setting a custom settings file for maven in the mta build, which is for example required if a custom maven repository (e.g. a company-internal one) needs to be used (see the sketch below).
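A hypothetical usage sketch; the parameter name is an assumption based on the change described above.

```groovy
// Hypothetical sketch - the parameter name is an assumption.
mtaBuild(
    script: this,
    projectSettingsFile: 'settings.xml'  // maven settings pointing to a company-internal repository
)
```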
* add new step buildSetResult
* set pipeline result in post stage
* exclude buildSetResult from commonStepTests
* extend pipeline test
* remove post stage reference
Certain steps should always fail, even though the resilience option `failOnError=false` is used.
* Docker execution typically happens in another step. We should not hide errors here but rather handle their resilience in the step which uses `dockerExecute` and `dockerExecuteOnKubernetes`.
* Wrapper steps like `pipelineExecute`, `pipelineRestartSteps` should not hide errors. If an error occurred this has to be considered **intentional** and not hidden accidentally in case the resilience option is switched on.
* handlePipelineStepErrors - allow step timeouts
This adds another resilience option:
A timeout can be configured for steps in order to stop step execution and continue with the pipeline while setting the build status to "UNSTABLE" (see the configuration sketch below).
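A hypothetical configuration sketch of the resilience options discussed above; the key names are assumptions, not a documented contract.

```groovy
// Hypothetical sketch - key names are assumptions.
handlePipelineStepErrors(
    stepName: 'myStep',
    stepParameters: [script: this],
    failOnError: false,        // continue the pipeline and mark the build UNSTABLE on error
    stepTimeouts: [myStep: 10] // abort the step after 10 minutes and continue
) {
    // step body
}
```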
When dealing with stashes in dockerExecuteOnKubernetes the global
stash list was updated from the step. This resulted in stashes
transported between the steps, which in turn resulted in having
old stashes unstashed in a pod later down the build. E.g.: mtaBuild
followed by neoDeploy: mtaBuild created a stash, the stash was
remembered in the default stash list and re-used later on by
neoDeploy. Since the stash was created before the mtaBuild had produced the
deployable, the deployable was missing in that step.
* alpine does not support date option --universal
Replaced by --utc as this seems to be more universal than --universal
* Fix unit tests after date parameter change
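Per the change above, a timestamp is now produced roughly like this (minimal sketch):

```groovy
// '--utc' instead of '--universal', so the command also works on alpine-based images.
def timestamp = sh(returnStdout: true, script: 'date --utc +"%Y%m%d%H%M%S"').trim()
```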
Up to now the presence of the deployable (source) was checked late
by the NeoCommandLineHelper. The code doing this is surrounded by
the try/catch which finally also puts the log written by the neo
toolset into the job log in case an exception occurred.
The check for the deployable returns with the same type of
exception like a failed neo command. Hence we cannot distinguish (ok,
would be possible to parse the exception message, but that is ugly).
When the exception is triggered by the missing deployable we try to
cat the neo log into the job log. But at this point the neo log has
not been provided - neo has not been called at all in this case.
Hence `cat logs/neo/*` in turn fails.
In order to avoid such a failure we check now for the presence of the
deployable earlier before launching the neo toolset.
Since the deployable is used in any deploy mode case no further check
for the deploy mode is required prior to the check for the deployable.
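A conceptual sketch of the earlier check (variable names are assumptions): verify the deployable before the neo toolset is launched, so the catch block never tries to cat a neo log that was never written.

```groovy
// Conceptual sketch - 'deployable' stands for the configured source artifact (an assumption).
String deployable = config.source
if (!fileExists(deployable)) {
    error "Deployable '${deployable}' does not exist."
}
// Only now launch the neo toolset inside the try/catch that cats logs/neo/* on failure.
```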
From the property dockerImage we cannot conclude that we are in fact running inside a docker environment.
Step dockerExecute has some checks if we are in a docker context. If not there is a fallback to the
local environment.
The docker image property is provided from resources/default_pipeline_environment (value: 's4sdk/docker-neo-cli').
Hence a value will be present all the time (exception: someone configured null/empty string explicitly). So we
will enter the corresponding code block anyway.
It is IMO also desirable to have the neo log in the job log when running inside a non-docker setup since this
simplifies troubleshooting anyway.
* Fix sanity checks for warPropertiesFile deploy mode.
* improve tests for the sanity checks
The sanity checks are performed per deploy mode.
All parameters are checked at once.
* Explicit check for host and account, which are not covered by the sanity checks for the war properties deploy mode
* Define pod using k8s yaml manifest
The Kubernetes plugin allows to define pods directly via the Kubernetes
API specification:
https://github.com/jenkinsci/kubernetes-plugin#using-yaml-to-define-pod-templates
This has the advantage of unlocking Kubernetes features which are not
exposed via the Kubernetes plugin, including all Kubernetes security
features.
Using the Kubernetes API directly is also better from a development
point of view because it is stable and better designed than the API the
plugin offers.
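An illustrative sketch of defining the pod via the Kubernetes API specification as supported by the Kubernetes plugin (see the link above); the manifest content is an example, not the one generated by the step.

```groovy
// Illustrative sketch - example manifest, not the one generated by dockerExecuteOnKubernetes.
def podSpec = '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: container-exec
    image: maven:3.6-jdk-8
    command: ['cat']
    tty: true
'''
podTemplate(label: 'piper-pod', yaml: podSpec) {
    node('piper-pod') {
        container('container-exec') {
            sh 'mvn --version'
        }
    }
}
```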
* Make the Kubernetes ns configurable
If one Jenkins master is used by multiple teams, it is desirable to
schedule K8S workloads in separate namespaces.
* Add securityContext to define uid and fsGroup
In the context of the Jenkins k8s plugin, uids and fsGroups play an
important role because the containers share a common file system.
Therefore it is beneficial to configure this data centrally.
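A hypothetical sketch of the central configuration; the key name `securityContext` follows the commit title above, and the uid/fsGroup values are examples.

```groovy
// Hypothetical sketch - the configuration key follows the commit title above.
dockerExecuteOnKubernetes(
    script: this,
    dockerImage: 'maven:3.6-jdk-8',
    securityContext: [runAsUser: 1000, fsGroup: 1000]  // shared file system needs matching uid/fsGroup
) {
    sh 'id'
}
```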
* fix indentation
* Undo format changes
* Extend and fix unit tests
* Fix port mapping
* Don't set uid globally
This does not work with jaas due to permissions problems.
* Fix sidecar test
* Make security context configurable at stage level
* Extract json serialization
* Cleanup unit tests
In case of a mis-configuration we get a hint like "host is missing".
Actually it should be "neo/host is missing" since the parameter "host" is nested inside "neo".
Having simply "host" confuses the person troubleshooting this issue.
With this change the input validation is performed right at the beginning of the step.
The NeoCommandLineHelper does not check a second time now.
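A conceptual sketch of validating the nested parameters up front, using the withMandatoryProperty mechanism mentioned earlier in this log; the exact ConfigurationHelper call chain and key syntax are an approximation, not the actual step code.

```groovy
// Conceptual sketch - the exact call chain and key syntax are an approximation.
import com.sap.piper.ConfigurationHelper

def helper = ConfigurationHelper.newInstance(this).mixin(parameters)
// the error message now reads 'neo/host is missing' instead of just 'host is missing'
Map config = helper.withMandatoryProperty('neo/host').withMandatoryProperty('neo/account').use()
```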