OpenTelemetry Collector Traces Example
This example illustrates how to export trace and metric data from the OpenTelemetry-Go SDK to the OpenTelemetry Collector. From there, we bring the trace data to Jaeger and the metric data to Prometheus. The complete flow is:
                                          -----> Jaeger (trace)
App + SDK ---> OpenTelemetry Collector ---|
                                          -----> Prometheus (metrics)
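On the application side, this flow starts with an OTLP gRPC exporter plugged into a tracer provider. Below is a minimal sketch of that wiring, not a copy of this example's main.go (which also sets up metrics and resource attributes); the endpoint value is a placeholder, and the address actually used by this demo is covered in the NodePort section below.

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Create an OTLP trace exporter that sends spans to the Collector over
	// gRPC. The endpoint is a placeholder; see the NodePort discussion below.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithEndpoint("localhost:30080"),
	)
	if err != nil {
		log.Fatalf("failed to create trace exporter: %v", err)
	}

	// Register the exporter with a batching tracer provider and install it
	// globally so instrumented code picks it up via otel.Tracer.
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)
}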
Prerequisites
You will need access to a Kubernetes cluster for this demo. We use a local instance of microk8s, but please feel free to pick your favorite. If you do decide to use microk8s, please ensure that the dns and storage addons are enabled:
microk8s enable dns storage
For simplicity, the demo application is not part of the k8s cluster, and will access the OpenTelemetry Collector through a NodePort on the cluster. Note that the NodePort opened by this demo is not secured.
Ideally you'd want to either have your application running as part of the Kubernetes cluster, or use a secured connection (NodePort/LoadBalancer with TLS or an ingress extension).
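If you do go the secured route, the OTLP gRPC exporter can be given TLS credentials instead of running insecure. The snippet below is only an illustrative sketch: the certificate path and endpoint are assumptions, not something this demo provides.

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"google.golang.org/grpc/credentials"
)

// newSecureExporter sketches a TLS-secured exporter. The certificate file and
// endpoint below are hypothetical; this demo does not ship either.
func newSecureExporter(ctx context.Context) (*otlptrace.Exporter, error) {
	creds, err := credentials.NewClientTLSFromFile("/path/to/collector-cert.pem", "")
	if err != nil {
		return nil, err
	}
	return otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("collector.example.com:4317"),
		otlptracegrpc.WithTLSCredentials(creds),
	)
}

func main() {
	exp, err := newSecureExporter(context.Background())
	if err != nil {
		log.Fatalf("failed to create secure exporter: %v", err)
	}
	_ = exp
}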
Deploying to Kubernetes
All the necessary Kubernetes deployment files are available in this demo, in the k8s folder. For your convenience, we assembled a makefile with deployment commands (see below). For those with subtly different systems, you are, of course, welcome to poke inside the Makefile and run the commands manually. If you use microk8s and alias microk8s kubectl to kubectl, the Makefile will not recognize the alias, and so the commands will have to be run manually.
Setting up the Prometheus operator
If you're using microk8s like us, simply do
microk8s enable prometheus
and you're good to go. Move on to Using the makefile.
Otherwise, obtain a copy of the Prometheus Operator stack from coreos:
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
kubectl create -f manifests/setup
# wait for namespaces and CRDs to become available, then
kubectl create -f manifests/
And to tear down the stack when you're finished:
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
Using the makefile
Next, we can deploy our Jaeger instance, Prometheus monitor, and Collector using the makefile.
# Create the namespace
make namespace-k8s
# Deploy Jaeger operator
make jaeger-operator-k8s
# After the operator is deployed, create the Jaeger instance
make jaeger-k8s
# Then the Prometheus instance. Ensure you have enabled a Prometheus operator
# before executing (see above).
make prometheus-k8s
# Finally, deploy the OpenTelemetry Collector
make otel-collector-k8s
If you want to clean up after this, you can use make clean-k8s to delete all the resources created above. Note that this will not remove the namespace. Because Kubernetes sometimes gets stuck when removing namespaces, please remove this namespace manually after all the resources inside have been deleted, for example with
kubectl delete namespaces observability
Configuring the OpenTelemetry Collector
Although the above steps should deploy and configure everything, let's spend some time on the configuration of the Collector.
One important part here is that, in order to enable our application to send data to the OpenTelemetry Collector, we need to first configure the otlp receiver:
...
  otel-collector-config: |
    receivers:
      # Make sure to add the otlp receiver.
      # This will open up the receiver on port 4317.
      otlp:
        protocols:
          grpc:
            endpoint: "0.0.0.0:4317"
    processors:
...
This will create the receiver on the Collector side, and open up port 4317
for receiving traces.
The rest of the configuration is quite standard; the only notable addition is that we need to create the Jaeger and Prometheus exporters:
...
exporters:
  jaeger:
    endpoint: "jaeger-collector.observability.svc.cluster.local:14250"
  prometheus:
    endpoint: 0.0.0.0:8889
    namespace: "testapp"
...
OpenTelemetry Collector service
One more aspect in the OpenTelemetry Collector configuration worth looking at is the NodePort service used for accessing it:
apiVersion: v1
kind: Service
metadata:
  ...
spec:
  ports:
  - name: otlp # Default endpoint for otlp receiver.
    port: 4317
    protocol: TCP
    targetPort: 4317
    nodePort: 30080
  - name: metrics # Endpoint for metrics from our app.
    port: 8889
    protocol: TCP
    targetPort: 8889
  selector:
    component: otel-collector
  type: NodePort
This service binds port 4317, which is used to access the otlp receiver, to port 30080 on your cluster's node. By doing so, it makes it possible for us to access the Collector at the static address <node-ip>:30080. In case you are running a local cluster, this will be localhost:30080. Note that you can also change this to a LoadBalancer or have an ingress extension for accessing the service.
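On the application side, pointing the exporter at this NodePort is just an option choice. Here is a hedged sketch, assuming a local cluster where localhost:30080 reaches the node: you can either hand the exporter the address directly, or dial your own grpc.ClientConn and pass it in.

package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx := context.Background()

	// Option 1: let the exporter manage the gRPC connection to the NodePort.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithInsecure(),
		otlptracegrpc.WithEndpoint("localhost:30080"),
	)
	if err != nil {
		log.Fatalf("failed to create exporter: %v", err)
	}
	_ = exp

	// Option 2: dial the connection yourself and hand it to the exporter,
	// useful if you want to reuse or tune the grpc.ClientConn.
	conn, err := grpc.DialContext(ctx, "localhost:30080",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		log.Fatalf("failed to dial collector: %v", err)
	}
	exp2, err := otlptracegrpc.New(ctx, otlptracegrpc.WithGRPCConn(conn))
	if err != nil {
		log.Fatalf("failed to create exporter: %v", err)
	}
	_ = exp2
}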
Running the code
You can find the complete code for this example in the main.go file. To run it, ensure you have a somewhat recent version of Go (preferably >= 1.13) and do
go run main.go
The example simulates an application, hard at work, computing for ten seconds and then finishing.
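If you are curious what that simulated work looks like in terms of the tracing API, here is a sketch. The tracer and span names are illustrative and not necessarily the ones used in main.go.

package main

import (
	"context"
	"fmt"
	"time"

	"go.opentelemetry.io/otel"
)

// simulateWork sketches the kind of loop the example runs: a parent span
// wrapping ten one-second iterations, each recorded as its own child span.
func simulateWork(ctx context.Context) {
	tracer := otel.Tracer("test-tracer")

	ctx, parent := tracer.Start(ctx, "CollectorExporter-Example")
	defer parent.End()

	for i := 0; i < 10; i++ {
		_, iter := tracer.Start(ctx, fmt.Sprintf("Sample-%d", i))
		time.Sleep(time.Second) // pretend to compute something
		iter.End()
	}
}

func main() {
	simulateWork(context.Background())
}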
Viewing instrumentation data
Now the exciting part! Let's check out the telemetry data generated by our sample application.
Jaeger UI
First, we need to enable an ingress provider. If you've been using microk8s, do
microk8s enable ingress
Then find out where the Jaeger console is living:
kubectl get ingress --all-namespaces
For us, we get the output
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
observability jaeger-query <none> * 127.0.0.1 80 5h40m
indicating that the Jaeger UI is available at http://localhost:80. Navigate there in your favorite web browser to view the generated traces.
Prometheus
Unfortunately, the Prometheus operator doesn't provide a convenient out-of-the-box ingress route for us to use, so we'll use port-forwarding instead. Note: this is a quick-and-dirty solution for the sake of example. You will be attacked by shady people if you do this in production!
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
Then navigate to http://localhost:9090 to view the Prometheus dashboard.