I am looking into
https://promlabs.com/blog/2025/07/17/why-i-recommend-native-prometheus-instrumentation-over-opentelemetry/#comparing-counter-increment-performance,
and was trying to figure out why incrementing a counter with 10
attributes was so much slower than incrementing a counter with no
attributes, or 1 attribute:
```
$ go test -run=xxxxxMatchNothingxxxxx -cpu=1 -test.benchtime=1s -bench=BenchmarkSyncMeasure/NoView/Int64Counter/Attributes
goos: linux
goarch: amd64
pkg: go.opentelemetry.io/otel/sdk/metric
cpu: Intel(R) Xeon(R) CPU @ 2.20GHz
BenchmarkSyncMeasure/NoView/Int64Counter/Attributes/0 9905773 121.3 ns/op
BenchmarkSyncMeasure/NoView/Int64Counter/Attributes/1 4079145 296.5 ns/op
BenchmarkSyncMeasure/NoView/Int64Counter/Attributes/10 781627 1531 ns/op
```
Looking at the profile, most of the time is spent in
"runtime.mapKeyError2" within "runtime.mapaccess2". My best guess is
that the value we use as a map key, the attribute.Distinct returned by
Equivalent(), is not very performant to hash. This seems like a good
opportunity to greatly improve the performance of our metrics API + SDK
(and probably other signals as well).
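For context, the SDK aggregates measurements in maps keyed by the attribute.Distinct value returned by Set.Equivalent(). A minimal sketch of that lookup pattern (illustrative only, not the SDK's actual aggregator code) looks like this:
```go
package main

import (
	"fmt"

	"go.opentelemetry.io/otel/attribute"
)

// counter mimics the lookup pattern the SDK aggregators use: values are
// stored in a map keyed by the attribute set's Distinct value.
type counter struct {
	values map[attribute.Distinct]int64
}

func (c *counter) add(set attribute.Set, incr int64) {
	// Equivalent returns an attribute.Distinct, a comparable value wrapping
	// the set's key-value data in an interface. Hashing that key on every
	// lookup is where the profile points.
	c.values[set.Equivalent()] += incr
}

func main() {
	c := &counter{values: map[attribute.Distinct]int64{}}
	set := attribute.NewSet(
		attribute.String("host", "a"),
		attribute.Int("code", 200),
	)
	c.add(set, 1)
	fmt.Println(len(c.values))
}
```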
To start, I'm adding a simple benchmark within the attribute package to
isolate the issue. Results:
```
$ go test -run '^$' -bench '^BenchmarkEquivalentMapAccess' -benchtime .1s -cpu 1 -benchmem
goos: linux
goarch: amd64
pkg: go.opentelemetry.io/otel/attribute
cpu: Intel(R) Xeon(R) CPU @ 2.20GHz
BenchmarkEquivalentMapAccess/Empty 2220508 53.58 ns/op 0 B/op 0 allocs/op
BenchmarkEquivalentMapAccess/1_string_attribute 622770 196.7 ns/op 0 B/op 0 allocs/op
BenchmarkEquivalentMapAccess/10_string_attributes 77462 1558 ns/op 0 B/op 0 allocs/op
BenchmarkEquivalentMapAccess/1_int_attribute 602163 197.7 ns/op 0 B/op 0 allocs/op
BenchmarkEquivalentMapAccess/10_int_attributes 76603 1569 ns/op 0 B/op 0 allocs/op
```
This shows that it is the map lookup and storage itself that is making
the metrics API+SDK perform much worse with more attributes.
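For reference, a minimal sketch of such a benchmark (names and set sizes are illustrative, not necessarily the exact benchmark added in this change) could look like:
```go
package attribute_test

import (
	"strconv"
	"testing"

	"go.opentelemetry.io/otel/attribute"
)

func BenchmarkEquivalentMapAccess(b *testing.B) {
	for _, n := range []int{0, 1, 10} {
		kvs := make([]attribute.KeyValue, n)
		for i := range kvs {
			kvs[i] = attribute.String("key"+strconv.Itoa(i), "value")
		}
		set := attribute.NewSet(kvs...)
		m := map[attribute.Distinct]struct{}{set.Equivalent(): {}}

		b.Run(strconv.Itoa(n)+"_string_attributes", func(b *testing.B) {
			b.ReportAllocs()
			var ok bool
			for i := 0; i < b.N; i++ {
				// The map access dominates: hashing the Distinct key
				// requires hashing every attribute in the set.
				_, ok = m[set.Equivalent()]
			}
			_ = ok
		})
	}
}
```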
Some optimization ideas include:
* Most attribute sets are likely to be just numbers and strings. Can we
make a fast path for sets that don't include complex attributes?
* We already encourage users to improve metrics API performance by
re-using attribute sets where possible. If we can lazily compute and
cache a "faster" map key, that would be a big win when attribute sets
are re-used.
* Compute a uint64 hash using something like
https://github.com/gohugoio/hashstructure, or something similar to what
prometheus/client_golang does:
c79a891c6c/model/signature.go (L31). A rough sketch of this idea is
shown after this list.
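A sketch of the uint64-signature idea, folding each attribute's key and stringified value into an FNV-1a hash (similar in spirit to Prometheus' label signatures). This is illustrative only; a real implementation would need to handle hash collisions and non-string value types more carefully:
```go
package main

import (
	"fmt"
	"hash/fnv"

	"go.opentelemetry.io/otel/attribute"
)

// hashSet computes a uint64 signature for an attribute set by hashing each
// key and emitted value with FNV-1a, separated by a sentinel byte to avoid
// ambiguity between key/value boundaries.
func hashSet(set attribute.Set) uint64 {
	h := fnv.New64a()
	for _, kv := range set.ToSlice() {
		h.Write([]byte(kv.Key))
		h.Write([]byte{0xff})
		h.Write([]byte(kv.Value.Emit()))
		h.Write([]byte{0xff})
	}
	return h.Sum64()
}

func main() {
	set := attribute.NewSet(
		attribute.String("host", "a"),
		attribute.Int("code", 200),
	)
	fmt.Printf("signature: %x\n", hashSet(set))
}
```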
---------
Co-authored-by: Tyler Yahn <MrAlias@users.noreply.github.com>
Co-authored-by: Flc゛ <four_leaf_clover@foxmail.com>
OpenTelemetry-Go
OpenTelemetry-Go is the Go implementation of OpenTelemetry. It provides a set of APIs to directly measure performance and behavior of your software and send this data to observability platforms.
Project Status
Signal | Status |
---|---|
Traces | Stable |
Metrics | Stable |
Logs | Beta |
Progress and status specific to this repository is tracked in our project boards and milestones.
Project versioning information and stability guarantees can be found in the versioning documentation.
Compatibility
OpenTelemetry-Go ensures compatibility with the current supported versions of the Go language:
Each major Go release is supported until there are two newer major releases. For example, Go 1.5 was supported until the Go 1.7 release, and Go 1.6 was supported until the Go 1.8 release.
For versions of Go that are no longer supported upstream, opentelemetry-go will stop ensuring compatibility with these versions in the following manner:
- A minor release of opentelemetry-go will be made to add support for the new supported release of Go.
- The following minor release of opentelemetry-go will remove compatibility testing for the oldest (now archived upstream) version of Go. This, and future, releases of opentelemetry-go may include features only supported by the currently supported versions of Go.
Currently, this project supports the following environments.
OS | Go Version | Architecture |
---|---|---|
Ubuntu | 1.24 | amd64 |
Ubuntu | 1.23 | amd64 |
Ubuntu | 1.24 | 386 |
Ubuntu | 1.23 | 386 |
Ubuntu | 1.24 | arm64 |
Ubuntu | 1.23 | arm64 |
macOS 13 | 1.24 | amd64 |
macOS 13 | 1.23 | amd64 |
macOS | 1.24 | arm64 |
macOS | 1.23 | arm64 |
Windows | 1.24 | amd64 |
Windows | 1.23 | amd64 |
Windows | 1.24 | 386 |
Windows | 1.23 | 386 |
While this project should work for other systems, no compatibility guarantees are made for those systems currently.
Getting Started
You can find a getting started guide on opentelemetry.io.
OpenTelemetry's goal is to provide a single set of APIs to capture distributed traces and metrics from your application and send them to an observability platform. This project allows you to do just that for applications written in Go. There are two steps to this process: instrument your application, and configure an exporter.
Instrumentation
To start capturing distributed traces and metric events from your application, it first needs to be instrumented. The easiest way to do this is by using an instrumentation library for your code. Be sure to check out the officially supported instrumentation libraries.
If you need to extend the telemetry an instrumentation library provides, or want to build your own instrumentation for your application directly, you will need to use the Go otel package. The examples are a good way to see some practical uses of this process.
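For illustration, a minimal direct-instrumentation sketch using the otel package (the tracer name and span name are placeholders) might look like this:
```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
)

func main() {
	// Acquire a tracer from the globally registered provider and record a
	// span around some unit of work.
	tracer := otel.Tracer("example.com/app")
	ctx, span := tracer.Start(context.Background(), "do-work")
	defer span.End()

	// Pass ctx to downstream calls so child spans nest correctly.
	_ = ctx
}
```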
Export
Now that your application is instrumented to collect telemetry, it needs an export pipeline to send that telemetry to an observability platform.
All officially supported exporters for the OpenTelemetry project are contained in the exporters directory.
Exporter | Logs | Metrics | Traces |
---|---|---|---|
OTLP | ✓ | ✓ | ✓ |
Prometheus | | ✓ | |
stdout | ✓ | ✓ | ✓ |
Zipkin | | | ✓ |
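As an illustration of an export pipeline, the following sketch wires the stdout trace exporter into a TracerProvider and registers it globally so instrumentation using otel.Tracer exports through it:
```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Create a stdout trace exporter and attach it to a TracerProvider.
	exp, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}
	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(context.Background()) }()

	// Register the provider globally for use by instrumentation.
	otel.SetTracerProvider(tp)
}
```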
Contributing
See the contributing documentation.