
Fix existing markdown lint issues (#1866)

* Remove empty sdk README

* Fix markdown lint issues

* Update markdownlint config to ignore single title header

* Remove broken link
Tyler Yahn 2021-04-30 17:51:19 +00:00 committed by GitHub
parent 08f4c2707f
commit 9bc28f6bc6
9 changed files with 34 additions and 8 deletions


@@ -14,6 +14,9 @@ MD013: false
MD024:
  siblings_only: true
# single-title
MD025: false
# ol-prefix
MD029:
  style: ordered


@@ -54,6 +54,7 @@ Starting from an application using entirely OpenCensus APIs:
4. Remove OpenCensus exporters and configuration
To override OpenCensus' DefaultTracer with the bridge:
```go
import (
octrace "go.opencensus.io/trace"
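	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/bridge/opencensus"
)

// Hedged sketch of the override step (not necessarily the upstream example
// verbatim): it assumes the bridge's opencensus.NewTracer wrapper, which
// adapts an OpenTelemetry tracer to the OpenCensus Tracer interface.
func installTracerBridge() {
	tracer := otel.Tracer("ocbridge")
	octrace.DefaultTracer = opencensus.NewTracer(tracer)
}
```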
@@ -82,7 +83,7 @@ OpenCensus and OpenTelemetry APIs are not entirely compatible. If the bridge fi
The problem for monitoring is simpler than the problem for tracing, since there
are no context propagation issues to deal with. However, it still is difficult
for users to migrate an entire application's monitoring at once. It
should be possible to send metrics generated by OpenCensus libraries to an
OpenTelemetry pipeline so that migrating a metric does not require maintaining
separate export pipelines for OpenCensus and OpenTelemetry.
@@ -102,11 +103,12 @@ Starting from an application using entirely OpenCensus APIs:
4. Remove OpenCensus Exporters and configuration.
For example, to swap out the OpenCensus logging exporter for the OpenTelemetry stdout exporter:
```go
import (
"go.opencensus.io/metric/metricexport"
"go.opentelemetry.io/otel/bridge/opencensus"
"go.opentelemetry.io/otel/exporters/stdout"
"go.opentelemetry.io/otel/exporters/stdout"
"go.opentelemetry.io/otel"
)
// With OpenCensus, you could have previously configured the logging exporter like this:
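// (the original OpenCensus logging-exporter setup is omitted in this sketch)

// Hedged sketch of the OpenTelemetry side (not necessarily the upstream
// example verbatim): it assumes stdout.NewExporter and the bridge's
// opencensus.NewMetricExporter; check the package docs for your version.
func exportOpenCensusMetricsViaOTel() {
	// Create an OpenTelemetry stdout exporter to replace the OpenCensus logging exporter.
	otelExporter, err := stdout.NewExporter()
	if err != nil {
		otel.Handle(err)
		return
	}
	// Wrap it with the bridge so it can serve as an OpenCensus metric exporter.
	ocExporter := opencensus.NewMetricExporter(otelExporter)
	// Periodically read OpenCensus metrics and push them through the bridge.
	reader, err := metricexport.NewIntervalReader(&metricexport.Reader{}, ocExporter)
	if err != nil {
		otel.Handle(err)
		return
	}
	if err := reader.Start(); err != nil {
		otel.Handle(err)
	}
}
```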


@@ -12,6 +12,7 @@ App + SDK ---> OpenTelemetry Collector ---|
```
# Prerequisites
You will need access to a Kubernetes cluster for this demo. We use a local
instance of [microk8s](https://microk8s.io/), but please feel free to pick
your favorite. If you do decide to use microk8s, please ensure that dns
@@ -30,6 +31,7 @@ kubernetes cluster, or use a secured connection (NodePort/LoadBalancer with TLS
or an ingress extension).
# Deploying to Kubernetes
All the necessary Kubernetes deployment files are available in this demo, in the
[k8s](./k8s) folder. For your convenience, we assembled a [makefile](./Makefile)
with deployment commands (see below). For those with subtly different systems,
@@ -39,14 +41,18 @@ Makefile will not recognize the alias, and so the commands will have to be run
manually.
## Setting up the Prometheus operator
If you're using microk8s like us, simply do
```bash
microk8s enable prometheus
```
and you're good to go. Move on to [Using the makefile](#using-the-makefile).
Otherwise, obtain a copy of the Prometheus Operator stack from
[coreos](https://github.com/coreos/kube-prometheus):
```bash
git clone https://github.com/coreos/kube-prometheus.git
cd kube-prometheus
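# (sketch) kube-prometheus applies its manifests in two passes; the exact
# sequence may change upstream, so follow the kube-prometheus README:
kubectl create -f manifests/setup
kubectl create -f manifests/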
@@ -57,11 +63,13 @@ kubectl create -f manifests/
```
And to tear down the stack when you're finished:
```bash
kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
```
## Using the makefile
Next, we can deploy our Jaeger instance, Prometheus monitor, and Collector
using the [makefile](./Makefile).
@@ -94,6 +102,7 @@ kubectl delete namespaces observability
```
# Configuring the OpenTelemetry Collector
Although the above steps should deploy and configure everything, let's spend
some time on the [configuration](./k8s/otel-collector.yaml) of the Collector.
@@ -133,6 +142,7 @@ need to create the Jaeger and Prometheus exporters:
## OpenTelemetry Collector service
One more aspect in the OpenTelemetry Collector [configuration](./k8s/otel-collector.yaml) worth looking at is the NodePort service used for accessing it:
```yaml
apiVersion: v1
kind: Service
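# (sketch) the rest of this Service is not shown here; based on the
# description that follows, it exposes the Collector's OTLP port 55680 as
# NodePort 30080, roughly like this (the names below are illustrative):
metadata:
  name: otel-collector
  namespace: observability
spec:
  type: NodePort
  ports:
    - name: otlp
      port: 55680
      targetPort: 55680
      nodePort: 30080
  selector:
    app: otel-collector
```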
@@ -157,8 +167,8 @@ spec:
This service binds port `55680`, used to access the OTLP receiver, to port `30080` on your cluster's node. This makes it possible to reach the Collector at the static address `<node-ip>:30080`; if you are running a local cluster, this will be `localhost:30080`. Note that you can also switch to a LoadBalancer service or an ingress extension for accessing the service.
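
As a rough illustration (not taken from this example's [main.go](./main.go)), an application could point an OTLP gRPC exporter at that NodePort address roughly as follows; the `otlp`/`otlpgrpc` packages and options shown reflect the pre-1.0 exporter API and should be checked against the version you have installed:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel/exporters/otlp"
	"go.opentelemetry.io/otel/exporters/otlp/otlpgrpc"
)

func main() {
	ctx := context.Background()
	// Plain-text gRPC connection to the Collector's NodePort service.
	driver := otlpgrpc.NewDriver(
		otlpgrpc.WithInsecure(),
		otlpgrpc.WithEndpoint("localhost:30080"),
	)
	exporter, err := otlp.NewExporter(ctx, driver)
	if err != nil {
		log.Fatalf("failed to create OTLP exporter: %v", err)
	}
	defer func() { _ = exporter.Shutdown(ctx) }()
	// Register the exporter with the trace/metric SDKs as usual (omitted here).
}
```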
# Running the code
You can find the complete code for this example in the [main.go](./main.go)
file. To run it, ensure you have a somewhat recent version of Go (preferably >=
1.13) and do
@@ -171,10 +181,12 @@ The example simulates an application, hard at work, computing for ten seconds
then finishing.
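
As a hypothetical sketch of what that simulated work can look like (this is not the actual [main.go](./main.go)), a ten-second busy loop instrumented with a parent span and one child span per iteration might be written as:

```go
package main

import (
	"context"
	"time"

	"go.opentelemetry.io/otel"
)

// simulateWork is an illustrative stand-in for the example's busy loop:
// a parent span wraps ten one-second "computation" iterations, each of
// which records its own child span.
func simulateWork(ctx context.Context) {
	tracer := otel.Tracer("demo-app")
	ctx, span := tracer.Start(ctx, "simulated-work")
	defer span.End()

	for i := 0; i < 10; i++ {
		_, iterSpan := tracer.Start(ctx, "iteration")
		time.Sleep(time.Second) // pretend to compute for one second
		iterSpan.End()
	}
}

func main() {
	// Exporter/TracerProvider setup is omitted; see main.go for the real wiring.
	simulateWork(context.Background())
}
```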
# Viewing instrumentation data
Now the exciting part! Let's check out the telemetry data generated by our
sample application.
## Jaeger UI
First, we need to enable an ingress provider. If you've been using microk8s,
do
@@ -183,20 +195,24 @@ microk8s enable ingress
```
Then find out where the Jaeger console is living:
```bash
kubectl get ingress --all-namespaces
```
In our case, we get the output:
```
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
observability jaeger-query <none> * 127.0.0.1 80 5h40m
```
indicating that the Jaeger UI is available at
[http://localhost:80](http://localhost:80). Navigate there in your favorite
web-browser to view the generated traces.
## Prometheus
Unfortunately, the Prometheus operator doesn't provide a convenient
out-of-the-box ingress route for us to use, so we'll use port-forwarding
instead. Note: this is a quick-and-dirty solution for the sake of example.


@@ -1,3 +1,5 @@
# Prometheus Collector Example
This example demonstrates a metrics export pipeline that supports
Prometheus (pull) and simultaneously exports OTLP to an OpenTelemetry
endpoint (push).


@@ -4,19 +4,21 @@ Send an example span to a [Zipkin](https://zipkin.io/) service.
These instructions assume you have [docker-compose](https://docs.docker.com/compose/) installed.
Bring up the `zipkin-collector` service and example `zipkin-client` service to send an example trace:
```sh
docker-compose up --detach zipkin-collector zipkin-client
```
The `zipkin-client` service sends just one trace and exits. Retrieve the `traceId` generated by the `zipkin-client` service; it should be the last line in the logs:
```sh
docker-compose logs --tail=1 zipkin-client
```
With the `traceId` you can view the trace from the `zipkin-collector` service UI hosted on port `9411`, e.g. with `traceId` of `f5695ba3b2ed00ea583fa4fa0badbeef`: [http://localhost:9411/zipkin/traces/f5695ba3b2ed00ea583fa4fa0badbeef](http://localhost:9411/zipkin/traces/f5695ba3b2ed00ea583fa4fa0badbeef)
Shut down the services when you are finished with the example:
```sh
docker-compose down
```


@@ -10,7 +10,6 @@ Additionally, there are [metric](./metric) and [trace](./trace) only exporters.
## Metric Telemetry Only
- [prometheus](./metric/prometheus): Exposes metric telemetry as Prometheus metrics.
- [test](./metric/test): A development tool when testing the telemetry pipeline.
## Trace Telemetry Only


@@ -1,8 +1,9 @@
# OpenTelemetry-Go Prometheus Exporter
OpenTelemetry Prometheus exporter
## Installation
```
go get -u go.opentelemetry.io/otel/exporters/metric/prometheus
```
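
The README stops at installation. Purely as a hedged sketch (not part of the upstream documentation), a minimal pipeline using this exporter with the pre-1.0 API might look like the following; `prometheus.InstallNewPipeline` and the exporter's `ServeHTTP` handler are assumptions to verify against your installed version:

```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/otel/exporters/metric/prometheus"
)

func main() {
	// Install a global metric pipeline backed by this exporter
	// (assumed pre-1.0 API; newer releases restructured these packages).
	exporter, err := prometheus.InstallNewPipeline(prometheus.Config{})
	if err != nil {
		log.Fatalf("failed to install Prometheus pipeline: %v", err)
	}

	// The exporter serves the Prometheus scrape endpoint.
	http.HandleFunc("/metrics", exporter.ServeHTTP)
	log.Fatal(http.ListenAndServe(":2222", nil))
}
```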


@@ -1,8 +1,9 @@
# OpenTelemetry-Go Jaeger Exporter
OpenTelemetry Jaeger exporter
## Installation
```
go get -u go.opentelemetry.io/otel/exporters/trace/jaeger
```
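
Likewise, as an illustration only (not from the upstream documentation), wiring this exporter into a tracer provider with the pre-1.0 constructors could look roughly like the sketch below; `NewRawExporter` and a `WithCollectorEndpoint` that takes a URL string are assumptions that changed in later releases:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/trace/jaeger"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Export spans to a Jaeger collector (assumed pre-1.0 API; verify the
	// constructor and options against your installed version).
	exp, err := jaeger.NewRawExporter(
		jaeger.WithCollectorEndpoint("http://localhost:14268/api/traces"),
	)
	if err != nil {
		log.Fatalf("failed to create Jaeger exporter: %v", err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)

	// Application code using otel.Tracer(...) would go here.
}
```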
