# influxWriteData

## Description
Since your Continuous Delivery Pipeline in Jenkins provides your productive development and delivery infrastructure, you should monitor the pipeline to ensure it runs as expected. How to set up this monitoring is described below.

You basically need three components:

- The InfluxDB Jenkins plugin, which allows you to send build metrics to InfluxDB servers
- An InfluxDB instance to store this data (official Docker image available)
- A Grafana dashboard to visualize the data stored in InfluxDB (official Docker image available)
!!! note "No InfluxDB available?"
    If you don't have an InfluxDB available yet, this step will still provide you some benefit. It will create the following files for you and archive them into your build:

    * `jenkins_data.json`: build-specific information, e.g. the build result and the stage in which the build failed
    * `influx_data.json`: detailed information about your pipeline, e.g. stage durations, executed steps, ...
## Prerequisites

### Setting up InfluxDB with Grafana
The easiest way to get started is to use the official Docker images. You can either run these Docker containers on the same host as your Jenkins or run each container on an individual VM (host). A very basic setup can be done as follows (with user "admin" and password "adminPwd" for both InfluxDB and Grafana):
```sh
docker run -d -p 8083:8083 -p 8086:8086 --restart=always --name influxdb -v /var/influx_data:/var/lib/influxdb influxdb
docker run -d -p 3000:3000 --name grafana --restart=always --link influxdb:influxdb -e "GF_SECURITY_ADMIN_PASSWORD=adminPwd" grafana/grafana
```
For a more advanced setup, please refer to the respective documentation:
- https://hub.docker.com/_/influxdb/ (and https://github.com/docker-library/docs/tree/master/influxdb)
- https://hub.docker.com/r/grafana/grafana/ (and https://github.com/grafana/grafana-docker)
After you have started your InfluxDB container, you need to create a database:

- In a web browser, open the InfluxDB Web-UI using the following URL: `<host of your docker>:8083` (port 8083 is used for access via the Web-UI; Jenkins accesses the database via port 8086)
- Create a new database (you need to provide the name of this database to Jenkins later)
- Create an admin user (you need to provide this user to Jenkins later)
!!! hint "With InfluxDB version 1.1 the InfluxDB Web-UI is deprecated"
    You can perform the above steps via the command line instead.

    The following command will create a database with the name `<databasename>`:

    ```
    curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE <databasename>"
    ```

    The admin user with the name `<adminusername>` and the password `<adminuserpwd>` can be created with:

    ```
    curl -i -XPOST http://localhost:8086/query --data-urlencode "q=CREATE USER <adminusername> WITH PASSWORD '<adminuserpwd>' WITH ALL PRIVILEGES"
    ```
Once you have started both Docker containers and InfluxDB and Grafana are running, you need to configure the Jenkins plugin according to your settings.
## Pipeline configuration

To set up your Jenkins, you need to perform two configuration steps:
- Configure Jenkins (via Manage Jenkins)
- Adapt pipeline configuration
### Configure Jenkins
Once the plugin is available in your Jenkins:

- Go to "Manage Jenkins" > "Configure System" and scroll down to the section "influxdb target"
- Maintain your InfluxDB connection data there
!!! note "Jenkins as a Service"
    For Jenkins as a Service instances this is already preset to the local InfluxDB with the name `jenkins`. In this case there is no need to do any additional configuration.
### Adapt pipeline configuration

You need to define the InfluxDB server in your pipeline with the name it has in the InfluxDB plugin configuration (see above):
```
influxDBServer=jenkins
```
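The server can also be passed directly to the step via its `influxServer` parameter (see the parameters table below). The following call is only a sketch; the name `jenkins` is an example and has to match the target name maintained in the Jenkins plugin configuration:

```groovy
// pass the InfluxDB target maintained in the Jenkins plugin configuration to the step;
// 'jenkins' is an example name and must match your plugin configuration
influxWriteData script: this, influxServer: 'jenkins'
```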
## Parameters
parameter | mandatory | default | possible values |
---|---|---|---|
script | yes | | |
artifactVersion | no | `commonPipelineEnvironment.getArtifactVersion()` | |
customData | no | `commonPipelineEnvironment.getInfluxCustomData()` | |
customDataMap | no | `commonPipelineEnvironment.getInfluxCustomDataMap()` | |
customDataMapTags | no | `commonPipelineEnvironment.getInfluxCustomDataTags()` | |
customDataTags | no | `commonPipelineEnvironment.getInfluxCustomDataTags()` | |
influxPrefix | no | | |
influxServer | no | `''` | |
wrapInNode | no | `false` | |
## Step configuration

We recommend defining values of step parameters via the `config.yml` file.

Configuration of the parameters is possible in the following sections:
parameter | general | step | stage |
---|---|---|---|
script | | | |
artifactVersion | X | X | |
customData | X | X | |
customDataMap | X | X | |
customDataMapTags | X | X | |
customDataTags | X | X | |
influxPrefix | X | X | |
influxServer | X | X | |
wrapInNode | X | X | |
## Example

```groovy
influxWriteData script: this
```
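A call that sets some of the optional parameters could look like the following sketch; the server name, the prefix, and the custom data are purely illustrative values, not taken from a real pipeline:

```groovy
// sketch: override selected optional parameters (values are illustrative)
influxWriteData(
    script: this,
    influxServer: 'jenkins',      // target name maintained in the Jenkins InfluxDB plugin configuration
    influxPrefix: 'myProject_',   // optional prefix added to the measurement names
    customData: [myMetric: 42],   // custom fields, written to the jenkins_custom_data measurement
    wrapInNode: true              // wrap the step execution in a node block
)
```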
## Work with InfluxDB and Grafana

You can access Grafana via its Web-UI: `<host of your grafana(-docker)>:3000` (or another port in case you mapped a different one when starting your Docker container).

As a first step you need to add your InfluxDB as a data source to your Grafana:
- Log in as user `admin` (password as defined when starting your Docker container)
- In the navigation go to data sources -> add data source:
    - Name
    - Type: InfluxDB
    - Url: `http://<host of your InfluxDB server>:<port>`
    - Access: direct (not via proxy)
    - Database: `<name of the DB as specified above>`
    - User: `<name of the admin user as specified in the step above>`
    - Password: `<password of the admin user as specified in the step above>`
!!! note "Jenkins as a Service"
    For Jenkins as a Service the data source configuration is already available. Therefore there is no need to go through the data source configuration step unless you want to add additional data sources.
## Data collected in InfluxDB

The InfluxDB plugin collects the following data in the Piper context:

- All data as per the default InfluxDB plugin capabilities
- Additional data collected via `commonPipelineEnvironment.setInfluxCustomDataProperty()` and via `commonPipelineEnvironment.setPipelineMeasurement()`
!!! note "Add custom information to your InfluxDB"
    You can simply add custom data collected during your pipeline runs via the available data objects. Example:
```groovy
//add data to measurement jenkins_custom_data - value can be a String or a Number
commonPipelineEnvironment.setInfluxCustomDataProperty('myProperty', 2018)
```
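Measurements for the pipeline itself can be provided via `commonPipelineEnvironment.setPipelineMeasurement()` as mentioned above. The following sketch assumes a `(name, value)` signature analogous to `setInfluxCustomDataProperty()`; the measurement name and value are purely illustrative:

```groovy
// add an illustrative duration value (in milliseconds) to the pipeline measurements
// (assumption: setPipelineMeasurement takes a measurement name and a value)
commonPipelineEnvironment.setPipelineMeasurement('docker_build_duration', 12000)
```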
## Collected InfluxDB measurements

Measurements are potentially prefixed, see the parameter `influxPrefix` above.
Measurement name | data column | description |
---|---|---|
All measurements | | All measurements below share a common set of columns. For details see the InfluxDB plugin documentation |
jenkins_data | | For details see the InfluxDB plugin documentation |
cobertura_data | | For details see the InfluxDB plugin documentation |
jacoco_data | | For details see the InfluxDB plugin documentation |
performance_data | | For details see the InfluxDB plugin documentation |
sonarqube_data | | For details see the InfluxDB plugin documentation |
jenkins_custom_data | columns filled by Piper by default | filled by `commonPipelineEnvironment.setInfluxCustomDataProperty()` |
pipeline_data | e.g. durations measured in the Piper templates | filled by step `measureDuration` using the parameter `measurementName` |
step_data | step-related columns, e.g. those used in Example 4 below | filled by `commonPipelineEnvironment.setInfluxStepData()` |
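As a sketch of how the `step_data` columns get filled: a step (or custom pipeline code) can report a flag via `commonPipelineEnvironment.setInfluxStepData()`. The `(key, value)` signature and the key `fortify` are assumptions chosen to match Example 4 below:

```groovy
// report that a (hypothetical) security scan step ran successfully;
// the key becomes a column of the step_data measurement
commonPipelineEnvironment.setInfluxStepData('fortify', true)
```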
## Examples for InfluxDB queries which can be used in Grafana

!!! caution "Project names containing dashes (-)"
    The InfluxDB plugin replaces dashes (-) with underscores (_). Please keep this in mind when specifying the `project_name` for an InfluxDB query.
### Example 1: Select last 10 successful builds

```
select top(build_number,10), build_result from jenkins_data WHERE build_result = 'SUCCESS'
```

### Example 2: Select last 10 step names of failed builds

```
select top(build_number,10), build_result, build_step from jenkins_custom_data WHERE build_result = 'FAILURE'
```

### Example 3: Select build duration of step for a specific project

```
select build_duration / 1000 from "pipeline_data" WHERE project_name='PiperTestOrg_piper_test_master'
```

### Example 4: Get transparency about successful/failed steps for a specific project

```
select top(build_number,10) AS "Build", build_url, build_quality, fortify, gauge, vulas, opa from step_data WHERE project_name='PiperTestOrg_piper_test_master'
```
!!! note
    With this query you can create transparency about which steps ran successfully or unsuccessfully in your pipeline and which ones were not executed at all. By specifying all the steps you consider relevant in your select statement, it is very easy to create this overview.