mirror of https://github.com/SAP/jenkins-library.git synced 2025-01-04 04:07:16 +02:00

InfluxDB support (#52)

* adding step for writing metrics to InfluxDB including dependencies
* added documentation
* incorporated PR feedback
This commit is contained in:
Oliver Nocon 2018-01-24 09:55:38 +01:00 committed by GitHub
parent b9eedda38e
commit 749aa5e7ed
17 changed files with 751 additions and 14 deletions


@@ -1,7 +1,7 @@
# ConfigurationMerger
## Description
A helper script that can merge the configurations from multiple sources.
## Static Method Details
@@ -10,8 +10,8 @@ A helper script that can merge the configurations from multiple sources.
#### Description
A step is usually configured by default values, configuration values from the configuration file and the parameters.
The method can merge these sources.
Default values are overwritten by configuration file values.
These are overwritten by parameters.
#### Parameters
@@ -25,9 +25,9 @@ These are overwritten by parameters.
| `defaults` | yes | Map |
* `parameters` Parameters map given to the step
* `parameterKeys` List of parameter names (keys) that should be considered while merging.
* `configurationMap` Configuration map loaded from the configuration file.
* `configurationKeys` List of configuration keys that should be considered while merging.
* `defaults` Map of default values, e.g. loaded from the default value configuration file.
#### Side effects
@@ -62,3 +62,62 @@ List stepConfigurationKeys = [
Map configuration = ConfigurationMerger.merge(parameters, parameterKeys, stepConfiguration, stepConfigurationKeys, stepDefaults)
```
### mergeWithPipelineData
#### Description
A step is usually configured by default values, configuration values from the configuration file and the parameters.
In certain cases also information previously generated in the pipeline should be mixed in, like for example an artifactVersion created earlier.
The method can merge these sources.
Default values are overwritten by configuration file values.
Those are overwritten by information previously generated in the pipeline (e.g. stored in [commonPipelineEnvironment](../steps/commonPipelineEnvironment.md)).
These are overwritten by parameters passed directly to the step.
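The precedence chain can be sketched with plain `Map.putAll` calls (the values below are made up for illustration; the real implementation additionally filters each source by the key lists described under Parameters):

```groovy
// Sketch of the precedence: later putAll calls win over earlier ones
Map defaults      = [influxServer: 'default', influxPrefix: null]
Map configuration = [influxServer: 'jenkins']
Map pipelineData  = [artifactVersion: '1.2.3']
Map parameters    = [influxPrefix: 'myPrefix']

Map merged = [:]
merged.putAll(defaults)      // lowest precedence
merged.putAll(configuration) // overwrites defaults
merged.putAll(pipelineData)  // overwrites configuration values
merged.putAll(parameters)    // highest precedence

assert merged.influxServer == 'jenkins'
assert merged.artifactVersion == '1.2.3'
assert merged.influxPrefix == 'myPrefix'
```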
#### Parameters
| parameter | mandatory | Class |
| -------------------|-----------|-----------------------------------|
| `parameters` | yes | Map |
| `parameterKeys` | yes | List |
| `pipelineDataMap` | yes | Map |
| `configurationMap` | yes | Map |
| `configurationKeys`| yes | List |
| `defaults` | yes | Map |
* `parameters` Parameters map given to the step
* `parameterKeys` List of parameter names (keys) that should be considered while merging.
* `configurationMap` Configuration map loaded from the configuration file.
* `pipelineDataMap` Values available to the step during pipeline run.
* `configurationKeys` List of configuration keys that should be considered while merging.
* `defaults` Map of default values, e.g. loaded from the default value configuration file.
#### Side effects
none
#### Example
```groovy
def stepName = 'influxWriteData'
prepareDefaultValues script: script
final Map stepDefaults = ConfigurationLoader.defaultStepConfiguration(script, stepName)
final Map stepConfiguration = ConfigurationLoader.stepConfiguration(script, stepName)
final Map generalConfiguration = ConfigurationLoader.generalConfiguration(script)
List parameterKeys = [
'artifactVersion',
'influxServer',
'influxPrefix'
]
Map pipelineDataMap = [
artifactVersion: commonPipelineEnvironment.getArtifactVersion()
]
List stepConfigurationKeys = [
'influxServer',
'influxPrefix'
]
Map configuration = ConfigurationMerger.mergeWithPipelineData(parameters, parameterKeys, pipelineDataMap, stepConfiguration, stepConfigurationKeys, stepDefaults)
```


@@ -0,0 +1,30 @@
# JsonUtils
## Description
Provides JSON-related utility functions.
## Constructors
### JsonUtils()
Default no-argument constructor. Instances of the `JsonUtils` class do not hold any instance-specific state.
#### Example
```groovy
new JsonUtils()
```
## Method Details
### getPrettyJsonString(object)
#### Description
Creates a pretty-printed JSON string.
#### Parameters
* `object` - An object (e.g. a Map or List).
#### Return value
A pretty-printed `String`.
#### Side effects
none
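#### Example
A minimal standalone sketch of the behavior, using `groovy.json.JsonOutput` directly (which is what the method wraps internally); the example data is made up:

```groovy
// Serialize a Map to JSON and pretty-print it, as getPrettyJsonString does
def data = [artifactVersion: '1.2.3', influxServer: 'jenkins']
def pretty = groovy.json.JsonOutput.prettyPrint(groovy.json.JsonOutput.toJson(data))
println pretty
```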


@@ -24,7 +24,7 @@ Retrieves the parameter value for parameter `paramName` from parameter map `map`
#### Parameters
* `map` - A map containing configuration parameters.
* `paramName` - The key of the parameter which should be looked up.
* optional: `defaultValue` - The value which is returned in case there is no parameter with key `paramName` contained in `map`. If it is not provided the default is `null`.
#### Return value
The value of the parameter to be retrieved, or the default value if the former is `null` (either because there is no such key or because the key is associated with the value `null`). If the parameter is not defined or its value is `null` and no default value is provided, an exception is thrown.
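These lookup semantics can be sketched with a standalone stand-in (not the library's actual implementation; the helper name, exception type, and message are assumptions for illustration):

```groovy
// Stand-in mimicking the lookup semantics described above:
// value from map wins, then the default; if both are null, fail
def lookup(Map map, paramName, defaultValue = null) {
    def value = map[paramName] != null ? map[paramName] : defaultValue
    if (value == null)
        throw new IllegalArgumentException("No value available for parameter '${paramName}'.")
    return value
}

assert lookup([host: 'example.org'], 'host') == 'example.org' // value present
assert lookup([:], 'host', 'fallback') == 'fallback'          // default used
```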


@@ -6,9 +6,52 @@ Provides project specific settings.
## Prerequisites
none
## Method details
### getArtifactVersion()
#### Description
Returns the version of the artifact which is built in the pipeline.
#### Parameters
none
#### Return value
A `String` containing the version.
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
def myVersion = commonPipelineEnvironment.getArtifactVersion()
```
### setArtifactVersion(version)
#### Description
Sets the version of the artifact which is built in the pipeline.
#### Parameters
* `version` - The version to be set.
#### Return value
none
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
commonPipelineEnvironment.setArtifactVersion('1.2.3')
```
### getConfigProperties()
#### Description
@@ -102,6 +145,53 @@ none
commonPipelineEnvironment.setConfigProperty('DEPLOY_HOST', 'my-deploy-host.com')
```
### getInfluxCustomData()
#### Description
Returns the Influx custom data which can be collected during pipeline run.
#### Parameters
none
#### Return value
A `Map` containing the data collected.
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
def myInfluxData = commonPipelineEnvironment.getInfluxCustomData()
```
### getInfluxCustomDataMap()
#### Description
Returns the Influx custom data map which can be collected during pipeline run.
It is used for example by step [`influxWriteData`](../steps/influxWriteData.md).
The data map is a map of maps, like `[pipeline_data: [:], my_measurement: [:]]`
Each map inside the map represents a dedicated measurement in the InfluxDB.
#### Parameters
none
#### Return value
A `Map` of `Map`s containing the collected data.
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
def myInfluxDataMap = commonPipelineEnvironment.getInfluxCustomDataMap()
```
### getMtarFileName()
@@ -143,3 +233,50 @@ none
```groovy
commonPipelineEnvironment.setMtarFileName('path/to/foo.mtar')
```
### getPipelineMeasurement(measurementName)
#### Description
Returns the value of a specific pipeline measurement.
The measurements are collected with step [`durationMeasure`](../steps/durationMeasure.md).
#### Parameters
* `measurementName` - Name of the measurement
#### Return value
Value of the measurement
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
def myMeasurementValue = commonPipelineEnvironment.getPipelineMeasurement('build_stage_duration')
```
### setPipelineMeasurement(measurementName, value)
#### Description
**This is an internal function!**
Sets the value of a specific pipeline measurement.
Please use the step [`durationMeasure`](../steps/durationMeasure.md) in a pipeline, instead.
#### Parameters
* `measurementName` - Name of the measurement
* `value` - Value to be stored for the measurement
#### Return value
none
#### Side effects
none
#### Exceptions
none
#### Example
```groovy
commonPipelineEnvironment.setPipelineMeasurement('build_stage_duration', 2345)
```


@@ -0,0 +1,37 @@
# durationMeasure
## Description
This step is used to measure the duration of a set of steps, e.g. a certain stage.
The duration is stored in a Map. The measurement data can then be written to an Influx database using step [influxWriteData](influxWriteData.md).
!!! tip
Measuring for example the duration of pipeline stages helps to identify potential bottlenecks within the deployment pipeline.
This then helps to counter identified issues with respective optimization measures, e.g. parallelization of tests.
## Prerequisites
none
## Pipeline configuration
none
## Explanation of pipeline step
Usage of pipeline step:
```groovy
durationMeasure (script: this, measurementName: 'build_duration') {
//execute your build
}
```
Available parameters:
| parameter | mandatory | default | possible values |
| ----------|-----------|---------|-----------------|
| script | no | empty `commonPipelineEnvironment` | |
| measurementName | no | test_duration | |
Details:
* `script` defines the global script environment of the Jenkinsfile run. Typically `this` is passed to this parameter. This allows the function to access the [`commonPipelineEnvironment`](commonPipelineEnvironment.md) for storing the measured duration.
* `measurementName` defines the name of the measurement which is written to the Influx database.
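The timing mechanism itself can be sketched standalone (the step's real implementation additionally stores the value via `commonPipelineEnvironment.setPipelineMeasurement`; the sleep is only a stand-in for the measured body):

```groovy
// Standalone sketch: time a closure and record the duration under a measurement name
Map measurements = [:]
def durationMeasureSketch = { String name, Closure body ->
    def start = System.currentTimeMillis()
    body()
    measurements[name] = System.currentTimeMillis() - start
}

durationMeasureSketch('build_duration') { Thread.sleep(50) }
assert measurements.build_duration >= 40 // at least roughly the sleep time
```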


@@ -0,0 +1,195 @@
# influxWriteData
## Description
Since your Continuous Delivery pipeline in Jenkins is part of your productive development and delivery infrastructure, you should monitor the pipeline to ensure it runs as expected. How to set up this monitoring is described in the following.
You basically need three components:
- The [InfluxDB Jenkins plugin](https://wiki.jenkins-ci.org/display/JENKINS/InfluxDB+Plugin) which allows you to send build metrics to InfluxDB servers
- The [InfluxDB](https://www.influxdata.com/time-series-platform/influxdb/) to store this data (Docker available)
- A [Grafana](http://grafana.org/) dashboard to visualize the data stored in InfluxDB (Docker available)
!!! note "no InfluxDB available?"
If you don't have an InfluxDB available yet, this step still provides some benefit.
It will create the following files for you and archive them into your build:
* `jenkins_data.json`: This file gives you build-specific information, e.g. the build result and the stage in which the build failed
* `pipeline_data.json`: This file gives you detailed information about your pipeline, e.g. stage durations, steps executed, ...
## Prerequisites
### Setting up InfluxDB with Grafana
The easiest way to start is to use the official Docker images.
You can either run these Docker containers on the same host as your Jenkins or run each container on an individual VM (host).
A very basic setup can be done like this (with user "admin" and password "adminPwd" for both InfluxDB and Grafana):
docker run -d -p 8083:8083 -p 8086:8086 --restart=always --name influxdb -v /var/influx_data:/var/lib/influxdb influxdb
docker run -d -p 3000:3000 --name grafana --restart=always --link influxdb:influxdb -e "GF_SECURITY_ADMIN_PASSWORD=adminPwd" grafana/grafana
For a more advanced setup please refer to the respective documentation:
- https://hub.docker.com/_/influxdb/ (and https://github.com/docker-library/docs/tree/master/influxdb)
- https://hub.docker.com/r/grafana/grafana/ (and https://github.com/grafana/grafana-docker)
After you have started your InfluxDB container you need to create a database:
- in a web browser, open the InfluxDB Web-UI using the following URL: `<host of your docker>:8083` (port 8083 is used for access via the Web-UI; for Jenkins you use port 8086 to access the DB)
- create a new DB (you need to provide the name of this DB to Jenkins later)
- create an admin user (you need to provide this user to Jenkins later)
!!! hint "With InfluxDB version 1.1 the InfluxDB Web-UI is deprecated"
You can perform the above steps via the command line:
- The following command will create a database with the name &lt;databasename&gt;
`curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE \<databasename\>"`
- The admin user with the name &lt;adminusername&gt; and the password &lt;adminuserpwd&gt; can be created with
`curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE USER \<adminusername\> WITH PASSWORD '\<adminuserpwd\>' WITH ALL PRIVILEGES"`
Once both Docker containers are started and InfluxDB and Grafana are running, you need to configure the Jenkins plugin according to your settings.
## Pipeline configuration
To setup your Jenkins you need to do two configuration steps:
1. Configure Jenkins (via Manage Jenkins)
2. Adapt pipeline configuration
### Configure Jenkins
Once the plugin is available in your Jenkins:
* go to "Manage Jenkins" > "Configure System" > scroll down to section "influxdb target"
* maintain Influx data
!!! note "Jenkins as a Service"
For Jenkins as a Service instances this is already preset to the local InfluxDB with the name `jenkins`. In this case there is no need to do any additional configuration.
### Adapt pipeline configuration
You need to define the InfluxDB server in your pipeline configuration, using the name defined in the InfluxDB plugin configuration (see above).
```properties
influxDBServer=jenkins
```
## Explanation of pipeline step
Example usage of pipeline step:
```groovy
influxWriteData script: this
```
Available parameters:
| parameter | mandatory | default | possible values |
| ----------|-----------|---------|-----------------|
| script | no | empty `commonPipelineEnvironment` | |
| artifactVersion | yes | commonPipelineEnvironment.getArtifactVersion() | |
| influxServer | no | `jenkins` | |
| influxPrefix | no | `null` | |
## Work with InfluxDB and Grafana
You can access your **Grafana** via Web-UI: &lt;host of your grafana(-docker)&gt;:&lt;port 3000&gt;
(or another port in case you have defined another one when starting your docker)
As a first step you need to add your InfluxDB as Data source to your Grafana:
- Login as user admin (PW as defined when starting your docker)
- in the navigation go to data sources -> add data source:
- name
- type: InfluxDB
- Url: \http://&lt;host of your InfluxDB server&gt;:&lt;port&gt;
- Access: direct (not via proxy)
- database: &lt;name of the DB as specified above&gt;
- User: &lt;name of the admin user as specified in step above&gt;
- Password: &lt;password of the admin user as specified in step above&gt;
!!! note "Jenkins as a Service"
For Jenkins as a Service the data source configuration is already available.
Therefore there is no need to go through the data source configuration step unless you want to add additional data sources.
## Data collected in InfluxDB
The Influx plugin collects the following data in the Piper context:
* All data as per default [InfluxDB plugin capabilities](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin)
* Additional data collected via `commonPipelineEnvironment.setInfluxCustomDataProperty()` and via `commonPipelineEnvironment.setPipelineMeasurement()`
!!! note "Add custom information to your InfluxDB"
You can simply add custom data collected during your pipeline runs via available data objects.
Example:
```groovy
//add data to measurement jenkins_custom_data - value can be a String or a Number
commonPipelineEnvironment.setInfluxCustomDataProperty('myProperty', 2018)
```
### Collected InfluxDB measurements
Measurements are potentially prefixed - see parameter `influxPrefix` above.
| Measurement name | data column | description |
| ---------------- | ----------- | ----------- |
| **All measurements** |<ul><li>build_number</li><li>project_name</li></ul>| All measurements below have these columns. <br />For details see the [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin)|
| jenkins_data | <ul><li>build_result</li><li>build_time</li><li>last_successful_build</li><li>tests_failed</li><li>tests_skipped</li><li>tests_total</li><li>...</li></ul> | Details see [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin)|
| cobertura_data | <ul><li>cobertura_branch_coverage_rate</li><li>cobertura_class_coverage_rate</li><li>cobertura_line_coverage_rate</li><li>cobertura_package_coverage_rate</li><li>...</li></ul> | Details see [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin) |
| jacoco_data | <ul><li>jacoco_branch_coverage_rate</li><li>jacoco_class_coverage_rate</li><li>jacoco_instruction_coverage_rate</li><li>jacoco_line_coverage_rate</li><li>jacoco_method_coverage_rate</li></ul> | Details see [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin) |
| performance_data | <ul><li>90Percentile</li><li>average</li><li>max</li><li>median</li><li>min</li><li>error_count</li><li>error_percent</li><li>...</li></ul> | Details see [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin) |
| sonarqube_data | <ul><li>blocker_issues</li><li>critical_issues</li><li>info_issues</li><li>major_issues</li><li>minor_issues</li><li>lines_of_code</li><li>...</li></ul> | Details see [InfluxDB plugin documentation](https://wiki.jenkins.io/display/JENKINS/InfluxDB+Plugin) |
| jenkins_custom_data | Piper fills the following columns by default: <br /><ul><li>build_result</li><li>build_result_key</li><li>build_step (->step in case of error)</li><li>build_error (->error message in case of error)</li></ul> | filled by `commonPipelineEnvironment.setInfluxCustomDataProperty()` |
| pipeline_data | Examples from the Piper templates:<br /><ul><li>build_duration</li><li>opa_duration</li><li>deploy_test_duration</li><li>fortify_duration</li><li>release_duration</li><li>...</li></ul>| filled by step [`durationMeasure`](durationMeasure.md) using parameter `measurementName`|
| step_data | Considered, e.g.:<br /><ul><li>build_quality (Milestone/Release)</li><li>build_url</li><li>bats</li><li>checkmarx</li><li>fortify</li><li>gauge</li><li>nsp</li><li>opa</li><li>opensourcedependency</li><li>ppms</li><li>jmeter</li><li>supa</li><li>snyk</li><li>sonar</li><li>sourceclear</li><li>uiveri5</li><li>vulas</li><li>whitesource</li><li>traceability</li><li>...</li><li>xmakestage</li><li>xmakepromote</li></ul>| filled by `commonPipelineEnvironment.setInfluxStepData()` |
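The map-of-maps structure behind these measurements can be sketched as follows (`my_measurement` and its column are made-up names; in a pipeline the map is obtained via `commonPipelineEnvironment.getInfluxCustomDataMap()`):

```groovy
// Each top-level key of the map becomes a dedicated measurement in InfluxDB
Map influxCustomDataMap = [pipeline_data: [:]]

// this is what setPipelineMeasurement('build_duration', 2345) writes
influxCustomDataMap.pipeline_data.build_duration = 2345

// a custom measurement with its own data column (hypothetical name)
influxCustomDataMap.my_measurement = [my_column: 42]

assert influxCustomDataMap.keySet().sort() == ['my_measurement', 'pipeline_data']
```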
### Examples for InfluxDB queries which can be used in Grafana
!!! caution "Project Names containing dashes (-)"
The InfluxDB plugin replaces dashes (-) with underscores (\_).
Please keep this in mind when specifying your project_name for an InfluxDB query.
#### Example 1: Select last 10 successful builds
```
select top(build_number,10), build_result from jenkins_data WHERE build_result = 'SUCCESS'
```
#### Example 2: Select last 10 step names of failed builds
```
select top(build_number,10), build_result, build_step from jenkins_custom_data WHERE build_result = 'FAILURE'
```
#### Example 3: Select build duration of step for a specific project
```
select build_duration / 1000 from "pipeline_data" WHERE project_name='PiperTestOrg_piper_test_master'
```
#### Example 4: Get transparency about successful/failed steps for a specific project
```
select top(build_number,10) AS "Build", build_url, build_quality, fortify, gauge, vulas, opa from step_data WHERE project_name='PiperTestOrg_piper_test_master'
```
!!! note
With this query you can create transparency about which steps ran successfully or unsuccessfully in your pipeline and which ones were not executed at all.
By specifying all the steps you consider relevant in your select statement it is very easy to create this transparency.


@@ -3,17 +3,20 @@ pages:
- Home: index.md
- 'Library steps':
- commonPipelineEnvironment: steps/commonPipelineEnvironment.md
- dockerExecute: steps/dockerExecute.md
- durationMeasure: steps/durationMeasure.md
- handlePipelineStepErrors: steps/handlePipelineStepErrors.md
- influxWriteData: steps/influxWriteData.md
- mavenExecute: steps/mavenExecute.md
- mtaBuild: steps/mtaBuild.md
- neoDeploy: steps/neoDeploy.md
- pipelineExecute: steps/pipelineExecute.md
- prepareDefaultValues: steps/prepareDefaultValues.md
- setupCommonPipelineEnvironment: steps/setupCommonPipelineEnvironment.md
- toolValidate: steps/toolValidate.md
- 'Library scripts':
- FileUtils: scripts/fileUtils.md
- JsonUtils: scripts/jsonUtils.md
- Utils: scripts/utils.md
- Version: scripts/version.md
- ConfigurationLoader: scripts/configurationLoader.md


@@ -6,3 +6,6 @@ general:
steps:
mavenExecute:
dockerImage: 'maven:3.5-jdk-7'
influxWriteData:
influxServer: 'jenkins'


@@ -20,6 +20,21 @@ class ConfigurationMerger {
return merged
}
@NonCPS
def static mergeWithPipelineData(Map parameters, List parameterKeys,
Map pipelineDataMap,
Map configurationMap, List configurationKeys,
Map stepDefaults=[:]
){
Map merged = [:]
merged.putAll(stepDefaults)
merged.putAll(filterByKeyAndNull(configurationMap, configurationKeys))
merged.putAll(pipelineDataMap)
merged.putAll(filterByKeyAndNull(parameters, parameterKeys))
return merged
}
@NonCPS
private static filterByKeyAndNull(Map map, List keys) {
Map filteredMap = map.findAll {


@@ -0,0 +1,8 @@
package com.sap.piper
import com.cloudbees.groovy.cps.NonCPS
@NonCPS
def getPrettyJsonString(object) {
return groovy.json.JsonOutput.prettyPrint(groovy.json.JsonOutput.toJson(object))
}


@@ -3,7 +3,7 @@ package com.sap.piper
import com.cloudbees.groovy.cps.NonCPS
@NonCPS
def getMandatoryParameter(Map map, paramName, defaultValue = null) {
def paramValue = map[paramName]


@@ -0,0 +1,25 @@
#!groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Rule
import org.junit.Test
import util.JenkinsSetupRule
import static org.junit.Assert.assertTrue
class DurationMeasureTest extends BasePipelineTest {
@Rule
public JenkinsSetupRule setupRule = new JenkinsSetupRule(this)
@Test
void testDurationMeasurement() throws Exception {
def cpe = loadScript("commonPipelineEnvironment.groovy").commonPipelineEnvironment
def script = loadScript("durationMeasure.groovy")
def bodyExecuted = false
script.call(script: [commonPipelineEnvironment: cpe], measurementName: 'test') {
bodyExecuted = true
}
assertTrue(cpe.getPipelineMeasurement('test') != null)
assertTrue(bodyExecuted)
assertJobStatusSuccess()
}
}


@@ -0,0 +1,99 @@
#!groovy
import com.lesfurets.jenkins.unit.BasePipelineTest
import org.junit.Before
import org.junit.Rule
import org.junit.Test
import org.junit.rules.RuleChain
import util.JenkinsLoggingRule
import util.JenkinsSetupRule
import static org.junit.Assert.assertTrue
import static org.junit.Assert.assertEquals
class InfluxWriteDataTest extends BasePipelineTest {
Script influxWriteDataScript
Map fileMap = [:]
Map stepMap = [:]
String echoLog = ''
def cpe
public JenkinsSetupRule setupRule = new JenkinsSetupRule(this)
public JenkinsLoggingRule loggingRule = new JenkinsLoggingRule(this)
@Rule
public RuleChain ruleChain =
RuleChain.outerRule(setupRule)
.around(loggingRule)
@Before
void init() throws Exception {
//reset stepMap
stepMap = [:]
//reset fileMap
fileMap = [:]
helper.registerAllowedMethod('readYaml', [Map.class], { map ->
return [
general: [productiveBranch: 'develop'],
steps : [influxWriteData: [influxServer: 'testInflux']]
]
})
helper.registerAllowedMethod('writeFile', [Map.class],{m -> fileMap[m.file] = m.text})
helper.registerAllowedMethod('step', [Map.class],{m -> stepMap = m})
cpe = loadScript('commonPipelineEnvironment.groovy').commonPipelineEnvironment
influxWriteDataScript = loadScript("influxWriteData.groovy")
}
@Test
void testInfluxWriteDataWithDefault() throws Exception {
cpe.setArtifactVersion('1.2.3')
influxWriteDataScript.call(script: [commonPipelineEnvironment: cpe])
assertTrue(loggingRule.log.contains('Artifact version: 1.2.3'))
assertEquals('testInflux', stepMap.selectedTarget)
assertEquals(null, stepMap.customPrefix)
assertEquals([:], stepMap.customData)
assertEquals([pipeline_data:[:]], stepMap.customDataMap)
assertTrue(fileMap.containsKey('jenkins_data.json'))
assertTrue(fileMap.containsKey('pipeline_data.json'))
assertJobStatusSuccess()
}
@Test
void testInfluxWriteDataNoInflux() throws Exception {
cpe.setArtifactVersion('1.2.3')
influxWriteDataScript.call(script: [commonPipelineEnvironment: cpe], influxServer: '')
assertEquals(0, stepMap.size())
assertTrue(fileMap.containsKey('jenkins_data.json'))
assertTrue(fileMap.containsKey('pipeline_data.json'))
assertJobStatusSuccess()
}
@Test
void testInfluxWriteDataNoArtifactVersion() throws Exception {
influxWriteDataScript.call(script: [commonPipelineEnvironment: cpe])
assertEquals(0, stepMap.size())
assertEquals(0, fileMap.size())
assertTrue(loggingRule.log.contains('no artifact version available -> exiting writeInflux without writing data'))
assertJobStatusSuccess()
}
}


@@ -26,4 +26,17 @@ class ConfigurationMergerTest {
Map merged = ConfigurationMerger.merge(parameters, parameterKeys, defaults)
Assert.assertEquals([], merged.nonErpDestinations)
}
@Test
void testMergeCustomPipelineValues(){
Map defaults = [dockerImage: 'mvn']
Map parameters = [goals: 'install', flags: '']
List parameterKeys = ['flags']
Map configuration = [flags: '-B']
List configurationKeys = ['flags']
Map pipelineDataMap = [artifactVersion: '1.2.3', flags: 'test']
Map merged = ConfigurationMerger.mergeWithPipelineData(parameters, parameterKeys, pipelineDataMap, configuration, configurationKeys, defaults)
Assert.assertEquals('', merged.flags)
Assert.assertEquals('1.2.3', merged.artifactVersion)
}
}


@@ -1,11 +1,27 @@
class commonPipelineEnvironment implements Serializable {
private Map configProperties = [:]
//stores the version of the artifact which is built during the pipeline run
def artifactVersion
Map configuration = [:]
Map defaultConfiguration = [:]
//each Map in influxCustomDataMap represents a measurement in Influx. Additional measurements can be added as a new Map entry of influxCustomDataMap
private Map influxCustomDataMap = [pipeline_data: [:]]
//influxCustomData represents measurement jenkins_custom_data in Influx. Metrics can be written into this map
private Map influxCustomData = [:]
private String mtarFilePath
def setArtifactVersion(version) {
artifactVersion = version
}
def getArtifactVersion() {
return artifactVersion
}
def setConfigProperties(map) {
configProperties = map
}
@@ -25,6 +41,14 @@ class commonPipelineEnvironment implements Serializable {
return configProperties[property]
}
def getInfluxCustomData() {
return influxCustomData
}
def getInfluxCustomDataMap() {
return influxCustomDataMap
}
def getMtarFilePath() {
return mtarFilePath
}
@@ -32,4 +56,12 @@ class commonPipelineEnvironment implements Serializable {
void setMtarFilePath(mtarFilePath) {
this.mtarFilePath = mtarFilePath
}
def setPipelineMeasurement (measurementName, value) {
influxCustomDataMap.pipeline_data[measurementName] = value
}
def getPipelineMeasurement (measurementName) {
return influxCustomDataMap.pipeline_data[measurementName]
}
}


@@ -0,0 +1,19 @@
def call(Map parameters = [:], body) {
def script = parameters.script
def measurementName = parameters.get('measurementName', 'test_duration')
//start measurement
def start = System.currentTimeMillis()
body()
//record measurement
def duration = System.currentTimeMillis() - start
if (script != null)
script.commonPipelineEnvironment.setPipelineMeasurement(measurementName, duration)
return duration
}


@@ -0,0 +1,62 @@
import com.sap.piper.ConfigurationLoader
import com.sap.piper.ConfigurationMerger
import com.sap.piper.JsonUtils
def call(Map parameters = [:]) {
def stepName = 'influxWriteData'
handlePipelineStepErrors (stepName: stepName, stepParameters: parameters, allowBuildFailure: true) {
def script = parameters.script
if (script == null)
script = [commonPipelineEnvironment: commonPipelineEnvironment]
prepareDefaultValues script: script
final Map stepDefaults = ConfigurationLoader.defaultStepConfiguration(script, stepName)
final Map stepConfiguration = ConfigurationLoader.stepConfiguration(script, stepName)
List parameterKeys = [
'artifactVersion',
'influxServer',
'influxPrefix'
]
Map pipelineDataMap = [
artifactVersion: commonPipelineEnvironment.getArtifactVersion()
]
List stepConfigurationKeys = [
'influxServer',
'influxPrefix'
]
Map configuration = ConfigurationMerger.mergeWithPipelineData(parameters, parameterKeys, pipelineDataMap, stepConfiguration, stepConfigurationKeys, stepDefaults)
def artifactVersion = configuration.artifactVersion
if (!artifactVersion) {
//this ensures that builds terminated due to milestone-locking do not cause an error
echo "[${stepName}] no artifact version available -> exiting writeInflux without writing data"
return
}
def influxServer = configuration.influxServer
def influxPrefix = configuration.influxPrefix
echo """[${stepName}]----------------------------------------------------------
Artifact version: ${artifactVersion}
Influx server: ${influxServer}
Influx prefix: ${influxPrefix}
InfluxDB data: ${script.commonPipelineEnvironment.getInfluxCustomData()}
InfluxDB data map: ${script.commonPipelineEnvironment.getInfluxCustomDataMap()}
[${stepName}]----------------------------------------------------------"""
if (influxServer)
step([$class: 'InfluxDbPublisher', selectedTarget: influxServer, customPrefix: influxPrefix, customData: script.commonPipelineEnvironment.getInfluxCustomData(), customDataMap: script.commonPipelineEnvironment.getInfluxCustomDataMap()])
//write results into json file for archiving - also beneficial when no InfluxDB is available yet
def jsonUtils = new JsonUtils()
writeFile file: 'jenkins_data.json', text: jsonUtils.getPrettyJsonString(script.commonPipelineEnvironment.getInfluxCustomData())
writeFile file: 'pipeline_data.json', text: jsonUtils.getPrettyJsonString(script.commonPipelineEnvironment.getInfluxCustomDataMap())
archiveArtifacts artifacts: '*data.json', allowEmptyArchive: true
}
}