* Add a key benchmark, optimize SDK SetAttribute
* Use reflect in key.Infer
* Move to separate benchmark file; remove pointer test; remove dead comment
* Run go mod tidy
* Add license header
* Use the reflect scalar accessors
Co-authored-by: Liz Fong-Jones <lizf@honeycomb.io>
* Do not put span context into go context if extraction failed
This causes problems if multiple trace propagators are chained,
because the first propagator in the chain may extract a valid span
context, and then the next propagator would overwrite it with an empty
span context when the headers it requires are missing from the supplier.
* Test for clobbering propagators
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
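A minimal sketch of the extraction behaviour described above, with hypothetical `Supplier` and `SpanContext` types standing in for the real API: the extractor only writes into the context when it actually found a valid span context, so a later propagator in the chain cannot clobber an earlier result.

```go
package main

import (
	"context"
	"fmt"
)

// Hypothetical stand-ins for the real propagation types.
type Supplier map[string]string

func (s Supplier) Get(key string) string { return s[key] }

type SpanContext struct{ TraceID string }

func (sc SpanContext) IsValid() bool { return sc.TraceID != "" }

type ctxKey struct{}

func contextWithRemoteSpanContext(ctx context.Context, sc SpanContext) context.Context {
	return context.WithValue(ctx, ctxKey{}, sc)
}

func remoteSpanContext(ctx context.Context) SpanContext {
	sc, _ := ctx.Value(ctxKey{}).(SpanContext)
	return sc
}

// extract only updates the context when the required header is present,
// leaving whatever an earlier propagator stored untouched otherwise.
func extract(ctx context.Context, s Supplier) context.Context {
	sc := SpanContext{TraceID: s.Get("traceparent")}
	if !sc.IsValid() {
		return ctx // extraction failed: do not clobber the existing span context
	}
	return contextWithRemoteSpanContext(ctx, sc)
}

func main() {
	ctx := context.Background()
	ctx = extract(ctx, Supplier{"traceparent": "abc123"}) // first propagator succeeds
	ctx = extract(ctx, Supplier{})                        // second one finds nothing
	fmt.Println(remoteSpanContext(ctx).TraceID)           // still "abc123"
}
```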
* New label set API
* Checkpoint
* Remove label.Labels interface
* Fix trace
* Remove label storage
* Restore metric_test.go
* Tidy tests
* More comments
* More comments
* Same changes as 654
* Checkpoint
* Fix batch labels
* Avoid Resource.Attributes() where possible
* Update comments and restore order in resource.go
* From feedback
* From feedback
* Move iterator_test & feedback
* Strengthen the label.Set test
* Feedback on typos
* Fix the set test per @krnowak
* Nit
* Point to the convenience functions in api/key package
This is to increase the visibility of the api/key package through the
api/core package; otherwise developers often miss the api/key package
altogether, write `core.Key(name).TYPE(value)`, and complain about the
verbosity of such a construction. The api/key package lets them write
`key.TYPE(name, value)` instead.
* Use the api/key package where applicable
This transforms all uses of `core.Key(name).TYPE(value)` into
`key.TYPE(name, value)`. It should also help increase the visibility
of the api/key package for developers reading the otel-go code.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
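For illustration, the two spellings look roughly like this, assuming the `api/core` and `api/key` packages as they existed at the time:

```go
package example

import (
	"go.opentelemetry.io/otel/api/core"
	"go.opentelemetry.io/otel/api/key"
)

func attrs() []core.KeyValue {
	return []core.KeyValue{
		core.Key("http.method").String("GET"), // verbose form
		key.String("http.method", "GET"),      // equivalent convenience form
	}
}
```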
* TraceID and SpanID implementations for Stringer Interface
* Hex encode while stringifying
* Modify format specifiers wherever SpanID is used
* comment changes
* Remove TraceIdString() and SpanIdString()
* Comment fixes
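A minimal, self-contained sketch of the idea with an illustrative type (the real SpanID is also 8 bytes, and TraceID is 16 bytes and works the same way): implementing fmt.Stringer with hex encoding lets the usual %s/%v format specifiers print the familiar lowercase-hex form.

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// SpanID is an illustrative stand-in for the real 8-byte span ID.
type SpanID [8]byte

// String implements fmt.Stringer by hex-encoding the bytes, so %s and %v
// print the familiar lowercase-hex form.
func (s SpanID) String() string { return hex.EncodeToString(s[:]) }

func main() {
	sid := SpanID{0xde, 0xad, 0xbe, 0xef, 0x00, 0x00, 0x00, 0x01}
	fmt.Printf("span %s\n", sid) // prints: span deadbeef00000001
}
```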
* Remove LabelSet from api/metric
* SDK tests pass
* Restore benchmarks
* All tests pass
* Remove all mentions of LabelSet
* Test RecordBatch
* Batch test
* Improves benchmark (some)
* Move the benchmark to match HEAD
* Align labels for GOARCH=386
* Add alignment test
* Disable the stress test for GOARCH=386
* Fix bug
* Move atomic fields into their own file
* Add a TODO
* Comments
* Remove metric.Labels(...)
* FTB
Co-authored-by: Liz Fong-Jones <lizf@honeycomb.io>
Update license header to standard format for source files previously missed.
Add license header to new source files.
Add Makefile check to test all `*.go` and `*.sh` files have a copyright
notice (or comment about them being auto-generated) within the first few
lines.
* Add skeleton uniqueness checker
* Fix the build w/ new code in place
* Add sync tests
* More test
* Implement global uniqueness checking
* Set the library name
* Ensure ordered global initialization
* Use proper require statement for errors
* Comment
* Apply feedback fixes
* Comment and rename from feedback
* Temporarily opt export.Labels out of the label encoding stuff
* Stop passing label encoding stuff to export.Labels
* Drop label encoding stuff from SDK
* Dogstatsd exporter does not need to implement the label exporter anymore
* more dogstatsd exporter fixes
* export labels get the encoding stuff back
in a lame way for now, but improvements are coming in the following commits
* Get encoded labels through export.Labels
* make the SDK provide its own implementation of export.Labels
* drop dead code
* add noop label exporter
* make the simple export labels immutable
* Move the default label encoder to export package
* Simplify the simple export labels a bit
* Reserve some label exporter IDs
* Document and shuffle the code a bit
* Prepare to bring the iterator benchmark test back
We can install a callback in the Batcher's process function - this is
the place where we can access the labels, and thus test the label
iterator.
* Bring back the iterator benchmarks
* Simplifications and docs
* Fix copyright to be consistent with the rest
* Fix typo
* Put reserved label encoder IDs into constants
We get fewer comments about magic numbers that way.
* Fix the label encoder as label exporter thinko
* Update License header for all source files
- Add Apache 2.0 header to source files that did not have one.
- Update all existing headers dated to 2019 to be 2020
- Remove comma from License header to comply with the Apache 2.0
guidelines.
* Update Copyright notice
Use the standard Copyright notices outlined by the
[CNCF](https://github.com/cncf/foundation/blob/master/copyright-notices.md#copyright-notices)
* Add support for Resources in the SDK
Add `Config` types for the push `Controller` and the `SDK`. Included
with this are helper functions to configure the `ErrorHandler` and
`Resource`.
Add a `Resource` to the Meter `Descriptor`. The choice to add the
`Resource` here (instead of, say, a `Record` or the `Instrument` itself)
was motivated by the definition of the `Descriptor` as the way to
uniquely describe a metric instrument.
Update the push `Controller` and default `SDK` to pass down their configured
`Resource` from instantiation to the metric instruments.
* Update New SDK constructor documentation
* Change NewDescriptor constructor to take opts
Add DescriptorConfig and DescriptorOption to configure the metric
Descriptor with the description, unit, keys, and resource.
Update all calls to NewDescriptor to use the new function signature
(the option shape is sketched at the end of this change).
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
* Update and add copyright notices
* Update push controller creator func
Pass the configured ErrorHandler for the controller to the SDK.
* Update Resource integration with the SDK
Add back the Resource field to the Descriptor that was moved in the
last merge with master.
Add a resource.Provider interface.
Have the default SDK implement the new resource.Provider interface and
integrate the new interface into the newSync/newAsync workflows. Now, if
the SDK has a Resource defined it will be passed to all Descriptors
created for the instruments it creates.
* Remove nil check for metric SDK config
* Fix and add test for API Options
Add an `Equal` method to the Resource so it can be compared with
github.com/google/go-cmp/cmp.
Add additional test of the API Option unit tests to ensure WithResource
correctly sets a new resource.
* Move the resource.Provider interface to the API package
Move the interface to where it is used.
Fix spelling.
* Remove errant line
* Remove nil checks for the push controller config
* Fix check SDK implements Resourcer
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
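A hedged sketch of the functional-option shape described in this change; DescriptorConfig, DescriptorOption, WithDescription, WithUnit, and WithResource mirror the names mentioned above, but the fields and types here are illustrative rather than the exact SDK API.

```go
package example

// Resource is an illustrative stand-in for the real resource type.
type Resource struct{ Attributes map[string]string }

// DescriptorConfig holds the options applied to a Descriptor.
type DescriptorConfig struct {
	Description string
	Unit        string
	Resource    Resource
}

// DescriptorOption mutates a DescriptorConfig.
type DescriptorOption func(*DescriptorConfig)

func WithDescription(d string) DescriptorOption {
	return func(c *DescriptorConfig) { c.Description = d }
}

func WithUnit(u string) DescriptorOption {
	return func(c *DescriptorConfig) { c.Unit = u }
}

func WithResource(r Resource) DescriptorOption {
	return func(c *DescriptorConfig) { c.Resource = r }
}

// Descriptor uniquely describes a metric instrument.
type Descriptor struct {
	Name   string
	Config DescriptorConfig
}

// NewDescriptor applies the options in order over a zero config.
func NewDescriptor(name string, opts ...DescriptorOption) Descriptor {
	cfg := DescriptorConfig{}
	for _, opt := range opts {
		opt(&cfg)
	}
	return Descriptor{Name: name, Config: cfg}
}
```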
* Create MeterImpl interface
* Checkpoint w/ sdk.go building
* Checkpoint working on global
* api/global builds (test fails)
* Test fix
* All tests pass
* Comments
* Add two tests
* Comments and uncomment tests
* Precommit part 1
* Still working on tests
* Lint
* Add a test and a TODO
* Cleanup
* Lint
* Interface()->Implementation()
* Apply some feedback
* From feedback
* (A)Synchronous -> (A)Sync
* Add a missing comment
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Rename a variable
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
* update always and never sample descriptions
* fix typo
* rename always on / off sampler files, structs and variables to match
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Update api for Must constructors, with SDK helpers
* Update for Must constructors, leaving TODOs about global errors
* Add tests
* Move Must methods into metric.Must
* Apply the feedback
* Remove interfaces
* Remove more interfaces
* Again...
* Remove a sentence about a dead interface
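A generic sketch of the Must-constructor pattern referred to above, using illustrative stand-in types rather than the real api/metric ones: the helper wraps the error-returning constructors and panics on error, which is convenient for instruments created once at initialization time.

```go
package example

import "errors"

// Illustrative stand-ins for the real metric API.
type Int64Counter struct{ name string }

type Meter struct{}

func (Meter) NewInt64Counter(name string) (Int64Counter, error) {
	if name == "" {
		return Int64Counter{}, errors.New("invalid instrument name")
	}
	return Int64Counter{name: name}, nil
}

// MeterMust wraps a Meter so constructor errors panic instead of being returned.
type MeterMust struct{ meter Meter }

func Must(m Meter) MeterMust { return MeterMust{meter: m} }

func (mm MeterMust) NewInt64Counter(name string) Int64Counter {
	c, err := mm.meter.NewInt64Counter(name)
	if err != nil {
		panic(err)
	}
	return c
}
```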
* drop gauge instrument
* Restore the benchmark and stress test for lastvalue aggregator, but remove monotonic last-value support
* Rename gauge->lastvalue and remove remaining uses of the word 'gauge'
Co-authored-by: Krzesimir Nowak <krzesimir@kinvolk.io>
* Propagate context changes in mix tests
We will need this for testing the correlation context and baggage
items propagation between the APIs.
* Add baggage interoperation tests
The test adds a baggage item to the active OT span and a correlation
key-value to the current Otel span. It then makes sure that the OT span
contains both the baggage item and a translated version of the
correlation key-value its Otel sibling got, and that the Otel span
contains both the correlation key-value and the baggage item its OT
sibling got.
* Add hooks functionality to baggage propagation
This introduces two kinds of hooks into the correlation context
code.
The set hook gets called every time we set a Map in the context. The
hook receives a context with the Map and returns a new context.
The get hook gets called every time we get a Map from the context. The
hook receives the context and the map, and returns a new Map.
These hooks will be used for propagating correlation context and
baggage items between the Otel and OT APIs (the hook shapes are
sketched at the end of this change).
* Warn on foreign opentracing span
* fixup for using otel propagators
* Add utility function for setting up bridge and context
This prepares the context by installing the hooks, so the correlation
context and baggage items can be propagated between the APIs.
* Add bridge span constructor
So I do not need to remember to initialize a newly added member in
several places now.
* Propagate baggage across otel and OT APIs
This uses the set hook functionality to propagate correlation context
changes from Otel to OT spans by inserting keys and values into the
baggage items. The get hook functionality is used to propagate baggage
items from the active OT span into the otel correlation context.
* Use correlation Map for baggage items
We will put this map into the context with correlation context
functions, and that is easier if we have correlation.Map, not
map[string]string.
* Use otel propagators in bridge
The otel propagators are now more or less usable for the opentracing
bridge. Some more work is needed to make them fully work, though -
correlation context set with the otel API is not yet propagated to OT
spans as baggage items.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
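A hedged sketch of the two hook shapes described in this change, with an illustrative Map type instead of the real correlation.Map (the real API installs the set and get hooks separately): the set hook runs whenever a Map is stored in a context, and the get hook runs whenever a Map is read back out.

```go
package example

import "context"

// Map stands in for the real correlation map.
type Map map[string]string

// SetHook is called with the context that already contains the new Map and
// may return a replacement context (e.g. one where an OT span's baggage was
// updated to match).
type SetHook func(ctx context.Context) context.Context

// GetHook is called with the context and the Map read from it, and may return
// an enriched Map (e.g. one that also carries the active OT span's baggage).
type GetHook func(ctx context.Context, m Map) Map

type hooksKey struct{}
type mapKey struct{}

type hooks struct {
	set SetHook
	get GetHook
}

// ContextWithHooks installs both hooks at once (illustrative; the real API
// installs them separately).
func ContextWithHooks(ctx context.Context, set SetHook, get GetHook) context.Context {
	return context.WithValue(ctx, hooksKey{}, hooks{set: set, get: get})
}

// ContextWithMap stores the Map and then runs the set hook, if any.
func ContextWithMap(ctx context.Context, m Map) context.Context {
	ctx = context.WithValue(ctx, mapKey{}, m)
	if h, ok := ctx.Value(hooksKey{}).(hooks); ok && h.set != nil {
		ctx = h.set(ctx)
	}
	return ctx
}

// MapFromContext reads the Map and then runs the get hook, if any.
func MapFromContext(ctx context.Context) Map {
	m, _ := ctx.Value(mapKey{}).(Map)
	if h, ok := ctx.Value(hooksKey{}).(hooks); ok && h.get != nil {
		m = h.get(ctx, m)
	}
	return m
}
```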
* Refactor SDK Sampler API to conform to Spec
* Sampler is now an interface rather than a function type
* SamplingParameters include the span Kind, Attributes, and Links
* SamplingResult includes a SamplingDecision with three possible values, as well as Attributes
* Add attributes returned from a Sampler to the span
* Add SpanKind, Attributes, and Links to API Sampler.ShouldSample() parameters
* Drop "Get" from sdk Sampler.GetDescription to match api Sampler
* Make spanID parameter in API Sampler interface a core.SpanID
* Fix types and printf format per PR feedback from krnowak
* Ensure unit test error messages reflect new reality
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
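A hedged sketch of the resulting shape; the field, constant, and type names below are illustrative stand-ins, not copied from the SDK.

```go
package example

import "context"

// Illustrative stand-ins for core types.
type (
	TraceID  [16]byte
	SpanID   [8]byte
	SpanKind int
	KeyValue struct{ Key, Value string }
	Link     struct{}
)

// SamplingParameters carries everything a Sampler may consult.
type SamplingParameters struct {
	ParentContext context.Context
	TraceID       TraceID
	SpanID        SpanID
	Name          string
	Kind          SpanKind
	Attributes    []KeyValue
	Links         []Link
}

// SamplingDecision has three possible values.
type SamplingDecision int

const (
	NotRecord        SamplingDecision = iota // drop
	Record                                   // record only
	RecordAndSampled                         // record and export
)

// SamplingResult is what ShouldSample returns; its attributes are added to the span.
type SamplingResult struct {
	Decision   SamplingDecision
	Attributes []KeyValue
}

// Sampler is an interface now, not a function type.
type Sampler interface {
	ShouldSample(SamplingParameters) SamplingResult
	Description() string
}

// alwaysOn is a trivial Sampler implementation.
type alwaysOn struct{}

func (alwaysOn) ShouldSample(SamplingParameters) SamplingResult {
	return SamplingResult{Decision: RecordAndSampled}
}

func (alwaysOn) Description() string { return "AlwaysOnSampler" }
```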
* wip: observers
* wip: float observers
* fix copy pasta
* wip: rework observers in sdk
* small fix in global meter
* wip: aggregators and selectors
* wip: monotonicity option for observers
* some refactor
* wip: docs
needs more package docs (especially for api/metric and sdk/metric)
* fix ci
* Fix copy-pasta in docs
Co-Authored-By: Mauricio Vásquez <mauricio@kinvolk.io>
* recycle unused recorders in observers
if a recorder for a labelset is unused for a second collection cycle
in a row, drop it
* unregister
* thread-safe set callback
* Fix docs
* Revert "wip: aggregators and selectors"
This reverts commit 37b7d05aed5dc90f6d5593325b6eb77494e21736.
* update selector
* tests
* Rework number equality
Compare concrete numbers, so we get the actual numbers in the error
message when they are not equal, not some uint64 representation. This
also uses InDelta for comparing floats (a sketch follows at the end of
this change).
* Ensure that Observers are registered in the same order
* Run observers in fixed order
So the tests can be reproducible - iterating a map made the order of
measurements random.
* Ensure the proper alignment of the delegates
This wasn't checked at all. After adding the checks, the test-386
failed.
* Small tweaks to the global meter test
* Ensure proper alignment of the callback pointer
test-386 was complaining about it
* update docs
* update a TODO
* address review issues
* drop SetCallback
Co-authored-by: Mauricio Vásquez <mauricio@kinvolk.io>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
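A small sketch of the number-comparison idea mentioned in "Rework number equality" above; the helper name and delta are illustrative. Compare as concrete Go numbers so a failing assertion prints real values, and use a tolerance for floats.

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/require"
)

// requireNumbersEqual compares concrete numbers instead of their raw uint64
// representation, so the failure message shows the actual values; floats are
// compared with a tolerance.
func requireNumbersEqual(t *testing.T, want, got interface{}) {
	t.Helper()
	if w, ok := want.(float64); ok {
		g, ok := got.(float64)
		require.True(t, ok, "kind mismatch: want float64, got %T", got)
		require.InDelta(t, w, g, 1e-9)
		return
	}
	require.Equal(t, want, got)
}
```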
* Add global propagators
The default global propagators are set to the chained W3C trace and
correlation context propagators.
* Use global propagators in plugins
The httptrace and grpc plugins should also get some API for setting a
propagator to use (the othttp plugin already has such an API), but
that can come in some other PR.
* Decrease indentation in trace propagators
* Drop obsolete TODOs
Now we do "something" with correlation context - it ends up in the
context, and we put the context into the request, so the chained HTTP
handler can access it too.
The other TODO was about tag.Upsert which is long gone.
* Do not unnecessarily update the request context
The request context already contains the span (and we add some
attribute there), so inserting it into the context again is pointless.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Drop bogus comment, fix typo
A result of copy-pasting.
* Unexport HTTP header constants
* Rename instruments in tests
"ajwaj" is my favourite placeholder text, but seems like I forgot to
replace its occurences with proper names.
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
The `go.opentelemetry.io/otel/exporter/trace/jaeger` package was
mistakenly released with a `v1.0.0` tag instead of `v0.1.0`. This
resulted in all subsequent releases not becoming the default latest,
meaning that `go get`s pulled in the incompatible `v0.1.0` release of
that package when pulling in more recent packages from other otel
packages. Renaming the `exporter` directory to `exporters` fixes this
issue by consequentially renaming the package.
Additionally, this action also renames *all* exporters. This is
understood to be a disruptive action to existing users as they will need
to update any dependencies they currently have on our exporters.
However, it was decided to take this action regardless. The need to
resolve the existing issue explained above is highly important, and
given the Alpha state of this project these kinds of breaking changes
should be expected (though not without reason).
Resolves #331
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
* Add `Span#Error` method to simplify setting an error status and message.
* `Span#Error` should no-op on nil errors
* Record errors as a span event rather than status/attributes.
The implementation in the SDK package now relies on existing API methods.
* Add WithErrorStatus() ErrorOption to allow setting span status on error.
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Address code review feedback
* Clean up RecordError tests
* Ensure complete and unique error type is recorded for defined types
* Avoid duplicating logic under test in tests
* Move TestError to internal/testing package, improve RecordError test scenarios
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
It would be nice to follow a single scheme for naming context
functions. In the trace package we followed the forms FooFromContext
and ContextWithFoo. Do the same in the correlation package. The
WithFoo scheme is mainly used for functions following the options
pattern. Not sure about the name of the NewContext function, though.
For now I have left it alone.
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
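The naming scheme in question, sketched with an illustrative Map type:

```go
package example

import "context"

// Map stands in for the correlation map.
type Map map[string]string

type mapKey struct{}

// ContextWithMap follows the ContextWithFoo form: it returns a new context
// carrying the value.
func ContextWithMap(ctx context.Context, m Map) context.Context {
	return context.WithValue(ctx, mapKey{}, m)
}

// MapFromContext follows the FooFromContext form: it extracts the value,
// returning a zero value when none is stored.
func MapFromContext(ctx context.Context) Map {
	m, _ := ctx.Value(mapKey{}).(Map)
	return m
}
```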
* Test for a panic inside global internal meter instrument's Unbind
* Fix a possible nil-dereference crash
There is a nil-dereference crash if we perform some operations in a
certain order:
- get a global meter
- create an instrument
- bind it
- set the delegate
- unbind the instrument
- call some recording function on the not-really-bound-anymore
instrument
Unbind will run the no-op run-once initialization routine, so the
follow-up RecordOne call will not run its own initialization
routine. With RecordOne's initialization routine skipped, the
delegate to the bound instrument is never set, but the code still
tries to get a pointer to it and then unconditionally dereferences it.
Add an extra check for a nil pointer - if the pointer is nil, Unbind
ran first and RecordOne should effectively be a no-op.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
Correlation context propagation shouldn't be a part of the trace
package - it is a different aspect of the propagation cross-cutting
concern.
This commit also adds a DefaultHTTPPropagator function for correlation
context propagation and makes the plugins use it.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Remove binary propagators
They are in the process of being dropped from the specification and we
haven't been using them anywhere in the project. We can reintroduce
them later.
* Rename Supplier to HTTPSupplier
The supplier is used only in HTTP propagators currently. It's not
clear if it will be useful for binary propagators if they get to be
specified at some point.
* Rework propagation interfaces
The biggest change here is that HTTP extractors return a new context
with whatever information the propagator is able to retrieve from the
supplier. Such an interface does not hardcode any extractor's
functionality (as the old one did by explicitly returning a span
context and a correlation context) and makes it easy to chain multiple
propagators.
The injection part hasn't changed.
* Add Propagators interface
This interface (and its default implementation) is likely going to be
the propagation API used the most. Single injectors, extractors or
propagators are likely going to be used just as parameters to the
Option functions that configure the Propagators implementation.
* Drop noop propagator
It's rather pointless - just create an empty Propagators instance.
* Fix wrong name in docs
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
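A hedged sketch of the reworked interfaces described above and of why returning a context makes chaining trivial; the method sets are close to, but not necessarily identical to, the api/propagation package.

```go
package example

import "context"

// HTTPSupplier abstracts access to the carrier's headers.
type HTTPSupplier interface {
	Get(key string) string
	Set(key, value string)
}

// HTTPExtractor returns a new context with whatever it could read from the
// supplier, instead of returning specific span/correlation values.
type HTTPExtractor interface {
	Extract(ctx context.Context, supplier HTTPSupplier) context.Context
}

// HTTPInjector writes whatever it knows from the context into the supplier.
type HTTPInjector interface {
	Inject(ctx context.Context, supplier HTTPSupplier)
}

// extractAll shows why this shape chains so easily: each propagator just
// threads the context through.
func extractAll(ctx context.Context, supplier HTTPSupplier, extractors ...HTTPExtractor) context.Context {
	for _, e := range extractors {
		ctx = e.Extract(ctx, supplier)
	}
	return ctx
}
```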
Tracer.WithSpan() will now accept StartOptions as a variadic final parameter `opts`
that will be passed to the Tracer.Start() invocation that creates the Span
wrapping the user-provided function.
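A minimal sketch of that forwarding, with illustrative stand-ins for the trace API types:

```go
package example

import "context"

// Illustrative stand-ins.
type startConfig struct{ attributes []string }

type StartOption func(*startConfig)

type Span struct{}

func (Span) End() {}

type Tracer struct{}

func (t Tracer) Start(ctx context.Context, name string, opts ...StartOption) (context.Context, Span) {
	cfg := startConfig{}
	for _, opt := range opts {
		opt(&cfg)
	}
	return ctx, Span{}
}

// WithSpan now forwards the variadic opts to Start, so the wrapping span can
// be configured the same way as a directly started one.
func (t Tracer) WithSpan(ctx context.Context, name string, fn func(context.Context) error, opts ...StartOption) error {
	ctx, span := t.Start(ctx, name, opts...)
	defer span.End()
	return fn(ctx)
}
```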
The methods on the `Float64Gauge`, `Int64Gauge`, `Float64Counter`,
`Int64Counter`, `Float64Measure`, and `Int64Measure` `struct`s do not
need to mutate the internal state of the `struct` and can therefore be
defined with value receivers instead. This aligns more closely with the
function signature of each instrument's constructor. Additionally, this
change means calls to these methods do not require a heap allocation.
Resolves #440
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
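A tiny sketch of the receiver change with an illustrative instrument shape: the method reads but never mutates the struct, so a value receiver works and calling it needs no pointer to the receiver.

```go
package example

import "context"

// recorder is the SDK-side implementation the instrument delegates to.
type recorder interface {
	record(ctx context.Context, value int64)
}

// Int64Counter holds only an implementation handle that is set at
// construction time and never mutated afterwards.
type Int64Counter struct {
	impl recorder
}

// Add uses a value receiver: it never mutates c, so no pointer receiver is
// needed and the call does not force the counter onto the heap.
func (c Int64Counter) Add(ctx context.Context, value int64) {
	if c.impl != nil {
		c.impl.record(ctx, value)
	}
}
```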
This PR removes the non-compliant ChildOf and FollowsFrom interfaces
and the Relation type, which were inherited from OpenTracing via the
initial prototype. Instead allow adding a span context to the go
context as a remote span context and use a simple algorithm for
figuring out an actual parent of the new span, which was proposed for
the OpenTelemetry specification.
Also add a way to ignore current span and remote span context in go
context, so we can force the tracer to create a new root span - a span
with a new trace ID.
That required some moderate changes in the opentracing bridge - the
first reference with the ChildOfRef reference type becomes the local
parent, and the rest become links. This also fixes link handling. The
downside of the approach proposed here is that we can only set the
remote parent when creating a span through the opentracing API.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
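A hedged sketch of the parent-selection idea described above; the function and type names are illustrative. The current span in the context wins, then a remote span context, and an explicit "new root" flag forces an empty parent, i.e. a new trace ID.

```go
package example

import "context"

// SpanContext stands in for the real span context type.
type SpanContext struct{ TraceID [16]byte }

func (sc SpanContext) IsValid() bool { return sc.TraceID != [16]byte{} }

type currentKey struct{}
type remoteKey struct{}

func currentSpanContext(ctx context.Context) SpanContext {
	sc, _ := ctx.Value(currentKey{}).(SpanContext)
	return sc
}

func remoteSpanContext(ctx context.Context) SpanContext {
	sc, _ := ctx.Value(remoteKey{}).(SpanContext)
	return sc
}

// parent picks the parent for a new span. The second result reports whether
// the parent is remote. newRoot forces an empty result, i.e. a new trace ID.
func parent(ctx context.Context, newRoot bool) (SpanContext, bool) {
	if newRoot {
		return SpanContext{}, false
	}
	if sc := currentSpanContext(ctx); sc.IsValid() {
		return sc, false
	}
	if rsc := remoteSpanContext(ctx); rsc.IsValid() {
		return rsc, true
	}
	return SpanContext{}, false
}
```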
* Drop entry from correlation map
Entry used to contain stuff like TTL, but the notion of an entry has
since been dropped from the spec.
* Compute exact size of the correlations map
The map will be immutable, so spend some extra time ensuring that we
do not unnecessarily waste memory on a basically read-only map.
* Allow dropping keys from correlations map
This is to follow the spec. Before this change it was only possible in
an awkward way: using Foreach to gather and filter the key-value pairs,
and then calling NewMap with a MultiKV MapUpdate. A sketch of the new
drop support follows at the end of this change.
* Document the correlation package
* Add missing license blurbs in correlation package
* Add tests for deleting items in correlations
* Factor out getting map size and test it
This is an implementation detail that can't be tested in a black-box
manner.
* Fix copyright dates
* Simplify/disambiguate keySet function parameters/return values
* Fix typo in Apply docs
* Fix test names
* Explain the nonsense of dropping keys from new map
* Remove Vendor constants from tracing plugins
Unused. And confusing, since "ot" may mean "opentracing" as well.
* Simplify current span key declaration
No need for a block.
* Fix typo
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
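A hedged, self-contained sketch of the drop support and the exact-size computation described in this change; the MapUpdate field names are modelled on the ones mentioned above but are illustrative, not the exact correlation API.

```go
package example

// KeyValue and MapUpdate are illustrative stand-ins for the correlation types.
type KeyValue struct{ Key, Value string }

// MapUpdate describes both insertions and deletions; a key listed in
// DropMultiK wins over the same key in MultiKV in this sketch.
type MapUpdate struct {
	MultiKV    []KeyValue // entries to insert or overwrite
	DropMultiK []string   // keys to remove
}

// Map is immutable once built.
type Map struct{ m map[string]string }

// Apply builds a new Map, sizing the backing map exactly so the read-only
// result does not over-allocate (assumes MultiKV has no duplicate keys).
func (src Map) Apply(update MapUpdate) Map {
	drop := make(map[string]struct{}, len(update.DropMultiK))
	for _, k := range update.DropMultiK {
		drop[k] = struct{}{}
	}

	// Exact size: surviving existing entries plus genuinely new entries.
	size := 0
	for k := range src.m {
		if _, dropped := drop[k]; !dropped {
			size++
		}
	}
	for _, kv := range update.MultiKV {
		if _, dropped := drop[kv.Key]; dropped {
			continue
		}
		if _, exists := src.m[kv.Key]; !exists {
			size++
		}
	}

	out := make(map[string]string, size)
	for k, v := range src.m {
		if _, dropped := drop[k]; !dropped {
			out[k] = v
		}
	}
	for _, kv := range update.MultiKV {
		if _, dropped := drop[kv.Key]; !dropped {
			out[kv.Key] = kv.Value
		}
	}
	return Map{m: out}
}
```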
* Rename distributedcontext package to correlation
Correlation is the name we agreed upon.
* Move trace propagators to api/trace
The trace propagator tests had to be moved to a testtrace subpackage
to avoid import cycles between api/trace and internal/trace.
This was also needed to silence golint about the stutter in
trace.TraceContext - TraceContext is the name of a W3C spec, so the
stutter is expected. It is certainly still better than golint's
suggestion of trace.Context.
* Rename api/propagators to api/propagation
This package will not contain any propagators in the long run, just
the interface definitions.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Switch stdout exporter to use ungrouped batcher
* Add unspecified keys to name without equals signs
* Fix tests for stdout exporter
* Add test for unspecified keys
* Move test to stdout_test.go
Spans should not have the Tracer name as a prefix for their names. This
removes the `spanNameWithPrefix` function and instead passes the span
name through unmodified wherever it had been called.
Tests that checked span names are updated to expect the non-prefixed
names.
* Add comments on needed field alignment
Add a comment about alignment requirements to all struct fields whose
values are passed to 64-bit atomic operations.
Update any struct's field ordering if one or more of those fields has
alignment requirements to support 64-bit atomic operations.
* Add 64-bit alignment tests
Most `struct`s that have field alignment requirements are now statically
validated prior to testing. The only `struct`s with these requirements
that are not validated are ones defined in the tests themselves, where
multiple `TestMain` functions would be needed to test them. Given the
fields are already identified with comments specifying the alignment
requirements and they are in the tests themselves, this seems like an
acceptable omission.
Co-authored-by: Liz Fong-Jones <elizabeth@ctyalcove.org>
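A hedged sketch of both pieces (struct and test names are illustrative): fields used with 64-bit atomics are placed first so they stay 8-byte aligned even on 32-bit platforms such as GOARCH=386, and a test statically checks the offsets.

```go
package example

import (
	"sync/atomic"
	"testing"
	"unsafe"
)

// counter keeps its atomically accessed field first so the field's offset is
// 0 and therefore 8-byte aligned even on 32-bit platforms (GOARCH=386, arm).
type counter struct {
	// value is accessed with sync/atomic; it must be 64-bit aligned.
	value int64
	name  string
}

func (c *counter) add(n int64) { atomic.AddInt64(&c.value, n) }

// TestFieldAlignment statically verifies that every atomically accessed field
// sits at an offset that is a multiple of 8.
func TestFieldAlignment(t *testing.T) {
	offsets := map[string]uintptr{
		"counter.value": unsafe.Offsetof(counter{}.value),
	}
	for name, off := range offsets {
		if off%8 != 0 {
			t.Errorf("%s offset is %d; want a multiple of 8 for 64-bit atomic access", name, off)
		}
	}
}
```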