
Use docker compose in otel collector example (#5244)

* Remove k8s files

* Add docker compose file

* Update endpoint in main.go

* Update README to use docker compose

* Update CHANGELOG

* Add Shutting down section for cleanup steps

* Replace logging exporter with debug exporter

---------

Co-authored-by: Chester Cheung <cheung.zhy.csu@gmail.com>
Sam Xie 2024-05-02 07:44:54 -07:00 committed by GitHub
parent 7ee6ff19b5
commit dbfc75817a
GPG Key ID: B5690EEEBB952194
12 changed files with 85 additions and 429 deletions


@ -48,6 +48,7 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.htm
- Update `go.opentelemetry.io/proto/otlp` from v1.1.0 to v1.2.0. (#5177)
- Improve performance of baggage member character validation in `go.opentelemetry.io/otel/baggage`. (#5214)
- The `otel-collector` example now uses docker compose to bring up services instead of kubernetes. (#5244)
## [1.25.0/0.47.0/0.0.8/0.1.0-alpha] 2024-04-05


@ -1,28 +0,0 @@
JAEGER_OPERATOR_VERSION = v1.36.0

namespace-k8s:
	kubectl apply -f k8s/namespace.yaml

jaeger-operator-k8s:
	# Create the jaeger operator and necessary artifacts in ns observability
	kubectl create -n observability -f https://github.com/jaegertracing/jaeger-operator/releases/download/$(JAEGER_OPERATOR_VERSION)/jaeger-operator.yaml

jaeger-k8s:
	kubectl apply -f k8s/jaeger.yaml

prometheus-k8s:
	kubectl apply -f k8s/prometheus-service.yaml # Prometheus instance
	kubectl apply -f k8s/prometheus-monitor.yaml # Service monitor

otel-collector-k8s:
	kubectl apply -f k8s/otel-collector.yaml

clean-k8s:
	- kubectl delete -f k8s/otel-collector.yaml
	- kubectl delete -f k8s/prometheus-monitor.yaml
	- kubectl delete -f k8s/prometheus-service.yaml
	- kubectl delete -f k8s/jaeger.yaml
	- kubectl delete -n observability -f https://github.com/jaegertracing/jaeger-operator/releases/download/$(JAEGER_OPERATOR_VERSION)/jaeger-operator.yaml


@ -13,165 +13,17 @@ App + SDK ---> OpenTelemetry Collector ---|
# Prerequisites
You will need access to a Kubernetes cluster for this demo. We use a local
instance of [microk8s](https://microk8s.io/), but please feel free to pick
your favorite. If you do decide to use microk8s, please ensure that dns
and storage addons are enabled.
You will need [Docker Compose V2](https://docs.docker.com/compose/) installed for this demo.
# Deploying to docker compose
This command will bring up the OpenTelemetry Collector, Jaeger, and Prometheus, and
expose the necessary ports for you to view the data.
```bash
microk8s enable dns storage
docker compose up -d
```
For simplicity, the demo application is not part of the k8s cluster, and will
access the OpenTelemetry Collector through a NodePort on the cluster. Note that
the NodePort opened by this demo is not secured.
Ideally you'd want to either have your application running as part of the
kubernetes cluster, or use a secured connection (NodePort/LoadBalancer with TLS
or an ingress extension).
If not using microk8s, ensure that cert-manager is installed by following [the
instructions here](https://cert-manager.io/docs/installation/).
# Deploying to Kubernetes
All the necessary Kubernetes deployment files are available in this demo, in the
[k8s](./k8s) folder. For your convenience, we assembled a [makefile](./Makefile)
with deployment commands (see below). For those with subtly different systems,
you are, of course, welcome to poke inside the Makefile and run the commands
manually. If you use microk8s and alias `microk8s kubectl` to `kubectl`, the
Makefile will not recognize the alias, and so the commands will have to be run
manually.
## Setting up the Prometheus operator
If you're using microk8s like us, simply do
```bash
microk8s enable prometheus
```
and you're good to go. Move on to [Using the makefile](#using-the-makefile).
Otherwise, obtain a copy of the Prometheus Operator stack from
[prometheus-operator](https://github.com/prometheus-operator/kube-prometheus):
```bash
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup
# wait for namespaces and CRDs to become available, then
kubectl create -f manifests/
```
And to tear down the stack when you're finished:
```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```
## Using the makefile
Next, we can deploy our Jaeger instance, Prometheus monitor, and Collector
using the [makefile](./Makefile).
```bash
# Create the namespace
make namespace-k8s
# Deploy Jaeger operator
make jaeger-operator-k8s
# After the operator is deployed, create the Jaeger instance
make jaeger-k8s
# Then the Prometheus instance. Ensure you have enabled a Prometheus operator
# before executing (see above).
make prometheus-k8s
# Finally, deploy the OpenTelemetry Collector
make otel-collector-k8s
```
If you want to clean up after this, you can use the `make clean-k8s` target to delete
all the resources created above. Note that this will not remove the namespace.
Because Kubernetes sometimes gets stuck when removing namespaces, please remove
this namespace manually after all the resources inside have been deleted,
for example with
```bash
kubectl delete namespaces observability
```
# Configuring the OpenTelemetry Collector
Although the above steps should deploy and configure everything, let's spend
some time on the [configuration](./k8s/otel-collector.yaml) of the Collector.
One important part here is that, in order to enable our application to send data
to the OpenTelemetry Collector, we need to first configure the `otlp` receiver:
```yml
...
  otel-collector-config: |
    receivers:
      # Make sure to add the otlp receiver.
      # This will open up the receiver on port 4317.
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    processors:
...
```
This will create the receiver on the Collector side, and open up port `4317`
for receiving traces.
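For context, the application's half of this handshake is just an OTLP gRPC exporter pointed at that port. A minimal sketch, assuming the `otlptracegrpc` exporter used by this example and the `localhost:4317` address from the compose setup, might look like:
```go
import (
	"context"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// newTraceExporter is illustrative; see main.go for the full initProvider setup.
func newTraceExporter(ctx context.Context) (*otlptrace.Exporter, error) {
	// Dial the Collector's OTLP/gRPC receiver opened on port 4317.
	conn, err := grpc.NewClient("localhost:4317",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		return nil, err
	}
	// Send spans to the Collector over that connection.
	return otlptracegrpc.New(ctx, otlptracegrpc.WithGRPCConn(conn))
}
```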
The rest of the configuration is quite standard; the only other thing to note is that we
need to create the Jaeger and Prometheus exporters:
```yml
...
exporters:
  jaeger:
    endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: "testapp"
...
```
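On the application side, metrics reach the Collector over the same OTLP/gRPC connection, and the `prometheus` exporter above then exposes them for scraping. A hedged sketch of that wiring, assuming the `otlpmetricgrpc` exporter and SDK meter provider this example relies on, could be:
```go
import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"google.golang.org/grpc"
)

// installMeterProvider is illustrative; the real setup lives in initProvider in main.go.
func installMeterProvider(ctx context.Context, conn *grpc.ClientConn) error {
	// Export metrics over the existing gRPC connection to the Collector.
	exp, err := otlpmetricgrpc.New(ctx, otlpmetricgrpc.WithGRPCConn(conn))
	if err != nil {
		return err
	}
	// Push metrics to the Collector every few seconds.
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp, sdkmetric.WithInterval(2*time.Second))),
	)
	otel.SetMeterProvider(mp)
	return nil
}
```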
## OpenTelemetry Collector service
One more aspect in the OpenTelemetry Collector [configuration](./k8s/otel-collector.yaml) worth looking at is the NodePort service used for accessing it:
```yaml
apiVersion: v1
kind: Service
metadata:
  ...
spec:
  ports:
    - name: otlp # Default endpoint for otlp receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
      nodePort: 30080
    - name: metrics # Endpoint for metrics from our app.
      port: 8889
      protocol: TCP
      targetPort: 8889
  selector:
    component: otel-collector
  type: NodePort
```
This service will bind the `4317` port used to access the otlp receiver to port `30080` on your cluster's node. By doing so, it makes it possible for us to access the Collector by using the static address `<node-ip>:30080`. In case you are running a local cluster, this will be `localhost:30080`. Note that you can also change this to a LoadBalancer or have an ingress extension for accessing the service.
# Running the code
You can find the complete code for this example in the [main.go](./main.go)
@ -192,40 +44,20 @@ sample application
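The sample application is what produces the traces and metrics you will look at next. As a rough, hedged sketch (the function, instrument, and span names below are illustrative, not necessarily the ones in main.go):
```go
import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/metric"
)

// doWork is an illustrative stand-in for the loop in main.go: it records one
// span (exported to Jaeger via the Collector) and one counter increment
// (exported to Prometheus via the Collector).
func doWork(ctx context.Context) {
	tracer := otel.Tracer("test-tracer")
	meter := otel.Meter("test-meter")

	counter, err := meter.Int64Counter(
		"demo_iterations_total", // illustrative name, not the one used by main.go
		metric.WithDescription("Number of iterations the demo loop has run."),
	)
	if err != nil {
		log.Fatal(err)
	}

	ctx, span := tracer.Start(ctx, "demo-work")
	defer span.End()
	counter.Add(ctx, 1)
}
```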
## Jaeger UI
First, we need to enable an ingress provider. If you've been using microk8s,
do
```bash
microk8s enable ingress
```
Then find out where the Jaeger console is living:
```bash
kubectl get ingress --all-namespaces
```
For us, we get the output
```
NAMESPACE       NAME           CLASS    HOSTS   ADDRESS     PORTS   AGE
observability   jaeger-query   <none>   *       127.0.0.1   80      5h40m
```
indicating that the Jaeger UI is available at
[http://localhost:80](http://localhost:80). Navigate there in your favorite
The Jaeger UI is available at
[http://localhost:16686](http://localhost:16686). Navigate there in your favorite
web-browser to view the generated traces.
## Prometheus
Unfortunately, the Prometheus operator doesn't provide a convenient
out-of-the-box ingress route for us to use, so we'll use port-forwarding
instead. Note: this is a quick-and-dirty solution for the sake of example.
You *will* be attacked by shady people if you do this in production!
The Prometheus UI is available at
[http://localhost:9090](http://localhost:9090). Navigate there in your favorite
web-browser to view the generated metrics.
# Shutting down
To shut down and clean up the example, run
```bash
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
docker compose down
```
Then navigate to [http://localhost:9090](http://localhost:9090) to view
the Prometheus dashboard.


@ -0,0 +1,23 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.91.0
    command: ["--config=/etc/otel-collector.yaml"]
    volumes:
      - ./otel-collector.yaml:/etc/otel-collector.yaml
    ports:
      - 4317:4317
  prometheus:
    image: prom/prometheus:v2.45.2
    volumes:
      - ./prometheus.yaml:/etc/prometheus/prometheus.yml
    ports:
      - 9090:9090
  jaeger:
    image: jaegertracing/all-in-one:1.52
    ports:
      - 16686:16686


@ -1,8 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: jaeger
  namespace: observability


@ -1,7 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
apiVersion: v1
kind: Namespace
metadata:
  name: observability


@ -1,142 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-conf
  namespace: observability
  labels:
    app: opentelemetry
    component: otel-collector-conf
data:
  otel-collector-config: |
    receivers:
      # Make sure to add the otlp receiver.
      # This will open up the receiver on port 4317
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    processors:
    extensions:
      health_check: {}
    exporters:
      jaeger:
        endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
        insecure: true
      prometheus:
        endpoint: 0.0.0.0:8889
        namespace: "testapp"
      logging:
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: []
          exporters: [jaeger]
        metrics:
          receivers: [otlp]
          processors: []
          exporters: [prometheus, logging]
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  namespace: observability
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  ports:
    - name: otlp # Default endpoint for otlp receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
      nodePort: 30080
    - name: metrics # Default endpoint for metrics.
      port: 8889
      protocol: TCP
      targetPort: 8889
  selector:
    component: otel-collector
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  namespace: observability
  labels:
    app: opentelemetry
    component: otel-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry
      component: otel-collector
  minReadySeconds: 5
  progressDeadlineSeconds: 120
  replicas: 1 #TODO - adjust this to your own requirements
  template:
    metadata:
      annotations:
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8889"
        prometheus.io/scrape: "true"
      labels:
        app: opentelemetry
        component: otel-collector
    spec:
      containers:
        - command:
            - "/otelcol"
            - "--config=/conf/otel-collector-config.yaml"
            # Memory Ballast size should be max 1/3 to 1/2 of memory.
            - "--mem-ballast-size-mib=683"
          env:
            - name: GOGC
              value: "80"
          image: otel/opentelemetry-collector:0.6.0
          name: otel-collector
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 200m
              memory: 400Mi
          ports:
            - containerPort: 4317 # Default endpoint for otlp receiver.
            - containerPort: 8889 # Default endpoint for querying metrics.
          volumeMounts:
            - name: otel-collector-config-vol
              mountPath: /conf
            # - name: otel-collector-secrets
            #   mountPath: /secrets
          livenessProbe:
            httpGet:
              path: /
              port: 13133 # Health Check extension default port.
          readinessProbe:
            httpGet:
              path: /
              port: 13133 # Health Check extension default port.
      volumes:
        - configMap:
            name: otel-collector-conf
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
          name: otel-collector-config-vol
        # - secret:
        #     name: otel-collector-secrets
        #     items:
        #       - key: cert.pem
        #         path: cert.pem
        #       - key: key.pem
        #         path: key.pem


@ -1,32 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    app: prometheus
    prometheus: service-prometheus
  name: service-prometheus
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
      - name: alertmanager-main
        namespace: monitoring
        port: web
  baseImage: quay.io/prometheus/prometheus
  logLevel: info
  paused: false
  replicas: 2
  retention: 2d
  routePrefix: /
  ruleSelector:
    matchLabels:
      prometheus: service-prometheus
      role: alert-rules
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
      - key: serviceapp
        operator: Exists


@ -1,21 +0,0 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    serviceapp: otel-collector
  name: otel-collector
  namespace: observability
spec:
  endpoints:
    - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      interval: 30s
      port: metrics
  namespaceSelector:
    matchNames:
      - observability
  selector:
    matchLabels:
      app: opentelemetry


@ -42,12 +42,9 @@ func initProvider() (func(context.Context) error, error) {
		return nil, fmt.Errorf("failed to create resource: %w", err)
	}

	// If the OpenTelemetry Collector is running on a local cluster (minikube or
	// microk8s), it should be accessible through the NodePort service at the
	// `localhost:30080` endpoint. Otherwise, replace `localhost` with the
	// endpoint of your cluster. If you run the app inside k8s, then you can
	// probably connect directly to the service through dns.
	conn, err := grpc.NewClient("localhost:30080",
	// It connects to the OpenTelemetry Collector through a local gRPC connection.
	// You may replace `localhost:4317` with your endpoint.
	conn, err := grpc.NewClient("localhost:4317",
		// Note the use of insecure transport here. TLS is recommended in production.
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
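As the last comment notes, the insecure transport is only suitable for this local demo. A hedged sketch of swapping in TLS transport credentials instead, assuming the `google.golang.org/grpc/credentials` package (the CA file path and host below are illustrative, not part of this example):
```go
// Illustrative only: load a CA certificate used to verify the Collector's TLS certificate.
creds, err := credentials.NewClientTLSFromFile("ca.pem", "")
if err != nil {
	return nil, fmt.Errorf("failed to load TLS credentials: %w", err)
}
conn, err := grpc.NewClient("collector.example.com:4317",
	grpc.WithTransportCredentials(creds),
)
```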


@ -0,0 +1,33 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
processors:
extensions:
  health_check: {}
exporters:
  otlp:
    endpoint: jaeger:4317
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:9090
    namespace: testapp
  debug:
service:
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [otlp, debug]
    metrics:
      receivers: [otlp]
      processors: []
      exporters: [prometheus, debug]


@ -0,0 +1,8 @@
# Copyright The OpenTelemetry Authors
# SPDX-License-Identifier: Apache-2.0
scrape_configs:
  - job_name: 'otel-collector'
    scrape_interval: 5s
    static_configs:
      - targets: ['otel-collector:9090']