* Add skeleton uniqueness checker
* Fix the build w/ new code in place
* Add sync tests
* More tests
* Implement global uniqueness checking
* Set the library name
* Ensure ordered global initialization
* Use proper require statement for errors
* Comment
* Apply feedback fixes
* Comment and rename from feedback
* Make pre_release.sh script work with GNU
The `-i` sed flag is for doing in-place updates of the file. On some
systems it requires a parameter, on others (GNU) the parameter is
optional, but should be "glued" to the flag (`-i.bak` for example).
That is problematic, so just emulate the in-place updates with
copies and removals.
* Update the version string in sdk with the pre-release script
This was a missing step, which is why we still have a version string
"0.2.3" while in reality it should be "0.3.0". Hopefully it will be
something to fix in "0.3.1" or what the next release version will be.
* Ensure clean git state when doing a release
The script does `git add .` which adds everything in the tree, even
the untracked files. Make sure that there are none of those.
* Temporarily opt-out export.Labels from label encoding stuff
* Stop passing label encoding stuff to export.Labels
* Drop label encoding stuff from SDK
* Dogstatsd exporter does not need to implement label exporter anymore
* more dogstatsd exporter fixes
* export labels get back to encoding stuff
in a lame way, but improvements are coming in the following commits
* Get encoded labels through export.Labels
* make the SDK provide its own implementation of export.Labels
* drop dead code
* add noop label exporter
* make export simple labels immutable
* Move the default label encoder to export package
* Simplify the simple export labels a bit
* Reserve some label exporter IDs
* Document and shuffle the code a bit
* Prepare for bringing the iterator benchmark test back
We can install a callback to the Batcher's process function - this is
the place where we can access the labels, and thus test the label
iterator.
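As a rough sketch of that idea (type and field names here are illustrative, not the SDK's exact ones), the test batcher can simply forward each processed record to a caller-supplied callback:

```go
// record is a stand-in for the real export record carrying labels.
type record struct {
	labels []string // stand-in for the real label storage
}

// testBatcher forwards every processed record to a caller-supplied callback,
// which is where a benchmark can reach the labels and drive the iterator.
type testBatcher struct {
	processCallback func(record) // invoked once per processed record
}

func (b *testBatcher) Process(r record) error {
	if b.processCallback != nil {
		b.processCallback(r)
	}
	return nil
}
```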
* Bring back the iterator benchmarks
* Simplifications and docs
* Fix copyright to be consistent with the rest
* Fix typo
* Put reserved label encoder IDs into constants
We get fewer comments about magic numbers that way.
* Fix the label encoder as label exporter thinko
* Update License header for all source files
- Add Apache 2.0 header to source files that did not have one.
- Update all existing headers dated to 2019 to be 2020
- Remove comma from License header to comply with the Apache 2.0
guidelines.
* Update Copyright notice
Use the standard Copyright notices outlined by the
[CNCF](https://github.com/cncf/foundation/blob/master/copyright-notices.md#copyright-notices)
The `checkpoint` function is executed in a single thread so we can do
the encoding lazily before passing the encoded version of labels to
the exporter. This is a cheap and quick way to avoid encoding the
labels on every collection interval.
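A minimal sketch of the lazy-encoding idea, assuming a cached string computed on first use (the type and field names are illustrative):

```go
// kv is a stand-in for a label key-value pair.
type kv struct{ Key, Value string }

// labels caches its encoded form; checkpoint runs in a single goroutine, so
// no locking is needed around the cached field.
type labels struct {
	ordered []kv              // ordered label storage
	encoder func([]kv) string // label encoder in use
	cached  string            // lazily computed encoding
}

// Encoded computes the encoding at most once and reuses it afterwards, so
// every collection interval after the first one skips the encoding cost.
func (l *labels) Encoded() string {
	if l.cached == "" {
		l.cached = l.encoder(l.ordered)
	}
	return l.cached
}
```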
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Add support for Resources in the SDK
Add `Config` types for the push `Controller` and the `SDK`. Included
with this are helper functions to configure the `ErrorHandler` and
`Resource`.
Add a `Resource` to the Meter `Descriptor`. The choice to add the
`Resource` here (instead of say a `Record` or the `Instrument` itself)
was motivated by the definition of the `Descriptor` as the way to
uniquely describe a metric instrument.
Update the push `Controller` and default `SDK` to pass down their configured
`Resource` from instantiation to the metric instruments.
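The helper functions follow the usual functional-option pattern; a minimal sketch under assumed names (the actual fields and option names in the SDK may differ):

```go
// Resource is a stand-in for the SDK's resource type.
type Resource struct{ /* resource labels */ }

// Config carries the optional settings for the push Controller and the SDK.
type Config struct {
	ErrorHandler func(error)
	Resource     Resource
}

// Option applies a setting to a Config.
type Option func(*Config)

func WithErrorHandler(h func(error)) Option { return func(c *Config) { c.ErrorHandler = h } }
func WithResource(r Resource) Option        { return func(c *Config) { c.Resource = r } }

// configure builds a Config from the supplied options.
func configure(opts ...Option) Config {
	var c Config
	for _, opt := range opts {
		opt(&c)
	}
	return c
}
```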
* Update New SDK constructor documentation
* Change NewDescriptor constructor to take opts
Add DescriptorConfig and DescriptorOption to configure the metric
Descriptor with the description, unit, keys, and resource.
Update all function calls to NewDescriptor to use new function
signature.
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
* Update and add copyright notices
* Update push controller creator func
Pass the configured ErrorHandler for the controller to the SDK.
* Update Resource integration with the SDK
Add back the Resource field to the Descriptor that was moved in the
last merge with master.
Add a resource.Provider interface.
Have the default SDK implement the new resource.Provider interface and
integrate the new interface into the newSync/newAsync workflows. Now, if
the SDK has a Resource defined it will be passed to all Descriptors
created for the instruments it creates.
* Remove nil check for metric SDK config
* Fix and add test for API Options
Add an `Equal` method to the Resource so it can be compared with
github.com/google/go-cmp/cmp.
Add additional test of the API Option unit tests to ensure WithResource
correctly sets a new resource.
* Move the resource.Provider interface to the API package
Move the interface to where it is used.
Fix spelling.
* Remove errant line
* Remove nil checks for the push controller config
* Fix check SDK implements Resourcer
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
* Do not expose a slice of labels in export.Record
This is really an inconvenient implementation detail leak - we may
want to store labels in a different way. Replace it with an iterator -
it does not force us to use a slice of key values as storage in the
long run.
* Add Len to LabelIterator
It may come in handy in several situations where we don't have access
to the export.Labels object, but only to the label iterator.
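The consuming side then looks roughly like this; a sketch of the iterator pattern, not the exact export API:

```go
// keyValue is a stand-in for the API's key-value pair.
type keyValue struct{ Key, Value string }

// labelIterator walks an ordered set of key-value pairs without exposing the
// underlying storage. Typical use: for it.Next() { kv := it.Label(); ... }
type labelIterator struct {
	labels []keyValue
	idx    int
}

func newLabelIterator(labels []keyValue) labelIterator {
	return labelIterator{labels: labels, idx: -1}
}

// Next advances the iterator and reports whether a label is available.
func (it *labelIterator) Next() bool { it.idx++; return it.idx < len(it.labels) }

// Label returns the label the iterator currently points at.
func (it *labelIterator) Label() keyValue { return it.labels[it.idx] }

// Len reports the number of labels, which is handy when only the iterator is
// available, not the export.Labels it came from.
func (it *labelIterator) Len() int { return len(it.labels) }
```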
* Use reflect value label iterator for the fixed labels
* add reset operation to iterator
Makes my life easier when writing a benchmark. Might also be an
alternative to cloning the iterator.
* Add benchmarks for iterators
* Add import comment
* Add clone operation to label iterator
* Move iterator tests to a separate package
* Add tests for cloning iterators
* Pass label iterator to export labels
* Use non-addressable array reflect values
By not using the value created by `reflect.New()`, but rather by
`reflect.ValueOf()`, we get a non-addressable array in the value,
which does not incur an allocation cost when getting an element from
the array.
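For illustration, here are the two ways of wrapping an array in a `reflect.Value`; the commit's point is that the non-addressable form avoided an allocation when indexing into the array (the performance numbers are not reproduced here):

```go
package main

import (
	"fmt"
	"reflect"
)

func main() {
	arr := [2]int{1, 2}

	// Addressable: backed by storage allocated through reflect.New.
	addressable := reflect.New(reflect.TypeOf(arr)).Elem()
	addressable.Set(reflect.ValueOf(arr))

	// Non-addressable: a plain copy wrapped by reflect.ValueOf.
	nonAddressable := reflect.ValueOf(arr)

	fmt.Println(addressable.CanAddr())    // true
	fmt.Println(nonAddressable.CanAddr()) // false
	fmt.Println(nonAddressable.Index(0))  // 1
}
```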
* Drop zero iterator
This can be substituted by a reflect value iterator that goes over a
value with a zero-sized array.
* Add a simple iterator that implements label iterator
In the long run this will completely replace the LabelIterator
interface.
* Replace reflect value iterator with simple iterator
* Pass label storage to new export labels, not label iterator
* Drop label iterator interface, rename storage iterator to label iterator
* Drop clone operation from iterator
It's a leftover from interface times and now it's pointless - the
iterator is a simple struct, so cloning it is a simple copy.
* Drop Reset from label iterator
Reset existed solely for benchmarking convenience. Now we can just
copy the iterator cheaply, so there is no need for Reset anymore.
* Drop noop iterator tests
* Move back iterator tests to export package
* Eagerly get the reflect value of ordered labels
So we won't get into problems when several goroutines want to iterate
the same labels at the same time. Not sure if this would be a big
deal, since every goroutine would compute the same reflect.Value, but
concurrent writes to the same memory are bad anyway. And it doesn't
cost us any extra allocations.
* Replace NewSliceLabelIterator() with a method of LabelSlice
* Add some documentation
* Documentation fixes
* Create MeterImpl interface
* Checkpoint w/ sdk.go building
* Checkpoint working on global
* api/global builds (test fails)
* Test fix
* All tests pass
* Comments
* Add two tests
* Comments and uncomment tests
* Precommit part 1
* Still working on tests
* Lint
* Add a test and a TODO
* Cleanup
* Lint
* Interface()->Implementation()
* Apply some feedback
* From feedback
* (A)Synchronous -> (A)Sync
* Add a missing comment
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Rename a variable
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
* Add request filtering capability to othttp.Handler
* Add simple and useful filters for othttp plugin
* Add note that all requests are traced in the absence of any filters
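The shape of a filter is a simple predicate over the incoming request; a hedged sketch, with the helper name made up for illustration (see the filters package for the real ones):

```go
package filters

import (
	"net/http"
	"strings"
)

// Filter decides whether an incoming request should be traced.
type Filter func(*http.Request) bool

// PathPrefix traces only requests whose URL path starts with prefix; with no
// filters installed at all, every request is traced.
func PathPrefix(prefix string) Filter {
	return func(r *http.Request) bool {
		return strings.HasPrefix(r.URL.Path, prefix)
	}
}
```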
* Add copyright notice to plugin/othttp/filters/filters_test.go
Co-Authored-By: Tyler Yahn <MrAlias@users.noreply.github.com>
* Add package docstring for filters package
Co-authored-by: Tyler Yahn <MrAlias@users.noreply.github.com>
Co-authored-by: Rahul Patel <rahulpa@google.com>
* update always and never sample descriptions
* fix typo
* rename always on / off sampler files, structs and variables to match
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Initial metrics addition to the OTLP exporter
* Fixes
Update to incorporate merged changes.
Fix lint issues.
* Add sum float64 transform unit test
* Fix static check
* Update comments
Fix malformed License header.
Add documentation for new transform functions.
Remove errant TODO.
* Fix test failures and handle ErrEmptyDataSet
Use `assert.NoError` instead of `assert.Nil` to correctly display
checked errors.
Use the result of `assert.NoError` to guard against `nil` pointer
dereferences.
Add check to skip `Record`s that return an `ErrEmptyDataSet` error and
include test to check this error is correctly returned from the
transform package.
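For example, guarding on the boolean that testify's `assert.NoError` returns keeps a failed check from turning into a nil-pointer panic; the `doTransform` call and field name below are made up for illustration:

```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestTransformSketch(t *testing.T) {
	got, err := doTransform() // hypothetical call under test
	// assert.NoError prints the error value on failure (assert.Nil would not)
	// and returns false, letting us skip the dereference below.
	if assert.NoError(t, err) && assert.NotNil(t, got) {
		assert.Equal(t, "expected-name", got.Name)
	}
}
```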
Co-authored-by: Rahul Patel <rahulpa@google.com>
Update copyright date to when file was created (2020)
Create random numbers between 0 and 100 to more evenly match the
buckets defined in the histogram test (25, 50, 75)
* update README with import instructions and how to build / test
* fix typo
* remove building the code section from README.md
* add clone instructions to CONTRIBUTING.md
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Add zipkin exporter
The zipkin exporter implements the SpanBatcher interface. It follows
the current-at-the-time-of-writing document about conversion from
OpenTelemetry span data to Zipkin spans, which means that endpoint
information is not yet filled in.
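A rough sketch of what "implements the SpanBatcher interface" means here; the real SpanData fields and the exporter's constructor live in the exporter and the SDK's trace export packages:

```go
import "context"

// SpanData is a stand-in for the SDK's exported span representation.
type SpanData struct {
	Name string
	// trace ID, span ID, timestamps, attributes, ... omitted
}

// Exporter batches finished spans and ships them to a Zipkin collector.
type Exporter struct {
	url         string // collector endpoint, e.g. http://localhost:9411/api/v2/spans
	serviceName string
}

// ExportSpans converts the batch to Zipkin's span model and POSTs it to the
// collector; with endpoint information not filled in yet, the Zipkin
// localEndpoint stays empty.
func (e *Exporter) ExportSpans(ctx context.Context, batch []*SpanData) {
	// conversion and HTTP POST omitted from this sketch
}
```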
* Fix typo in docs
* Add a zipkin example
This sends span information to a locally running zipkin collector.
Currently I have a problem getting the collector to show me the spans
after it accepts them with HTTP 202. Not sure if this is because of
missing endpoint information.
* Make gitignore consistent
The fixed paths should be prefixed with a slash. The "relative" paths
mean that git will ignore all the files that end with the path.
* Add tests for zipkin exporter
* Update api for Must constructors, with SDK helpers
* Update for Must constructors, leaving TODOs about global errors
* Add tests
* Move Must methods into metric.Must
* Apply the feedback
* Remove interfaces
* Remove more interfaces
* Again...
* Remove a sentence about a dead interface
* change the histogram aggregator to have a consistent but blocking Checkpoint()
* docs
* wrapping docs
* remove currentIdx from the 8-byte alignment check
* stress test
* add export and move lock-free write algorithm to an external struct.
* move state locker to another package.
* add todos
* minimal tests
* renaming and docs
* change to context.Background()
* add link to algorithm and grammars
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Use an array key to label encoding in the SDK
* Comment
* Precommit
* Comment
* Comment
* Feedback from krnowak
* Do not overwrite the Key
* Add the value test requested
* Add a comment
* drop gauge instrument
* Restore the benchmark and stress test for lastvalue aggregator, but remove monotonic last-value support
* Rename gauge->lastvalue and remove remaining uses of the word 'gauge'
Co-authored-by: Krzesimir Nowak <krzesimir@kinvolk.io>
* Propagate context changes in mix tests
We will need this for testing the correlation context and baggage
items propagation between the APIs.
* Add baggage interoperation tests
The test adds a baggage item to active OT span and some correlation
key value to current Otel span. Then makes sure that the OT span
contains both the baggage item and some translated version of the
correlation key value its Otel sibling got, and that the Otel span
contains both the correlation key value and the baggage item its OT
sibling got.
* Add hooks functionality to baggage propagation
This introduces two kinds of hooks into the correlation context
code.
The set hook gets called every time we set a Map in the context. The
hook receives a context with the Map and returns a new context.
The get hook gets called every time we get a Map from the context. The
hook receives the context and the map, and returns a new Map.
These hooks will be used for correlation context and baggage items
propagation between the Otel and OT APIs.
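Spelled out as Go function types (the names are illustrative; the real declarations live in the correlation package), the two hooks look like this:

```go
import "context"

// Map is a stand-in for correlation.Map, the key-value storage kept in the
// context.
type Map map[string]string

// SetHookFunc runs every time a Map is set in the context; it receives the
// context already carrying the Map and returns the context to use instead.
type SetHookFunc func(ctx context.Context) context.Context

// GetHookFunc runs every time a Map is read from the context; it receives the
// context and the Map and returns the Map to hand back to the caller.
type GetHookFunc func(ctx context.Context, m Map) Map
```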
* Warn on foreign opentracing span
* fixup for using otel propagators
* Add utility function for setting up bridge and context
This prepares the context by installing the hooks, so the correlation
context and baggage items can be propagated between the APIs.
* Add bridge span constructor
So I do not need to remember about initializing a newly added member
in several places now.
* Propagate baggage across otel and OT APIs
This uses the set hook functionality to propagate correlation context
changes from Otel to OT spans by inserting keys and values into the
baggage items. The get hook functionality is used to propagate baggage
items from active OT span into the otel correlation context.
* Use correlation Map for baggage items
We will put this map into the context with correlation context
functions, and that is easier if we have correlation.Map, not
map[string]string.
* Use otel propagators in bridge
The otel propagators are now kinda sorta usable for opentracing
bridge. Some more work is needed to make it fully work, though -
correlation context set with the otel API is not propagated to OT
spans as baggage items yet.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Refactor SDK Sampler API to conform to Spec
* Sampler is now an interface rather than a function type
* SamplingParameters include the span Kind, Attributes, and Links
* SamplingResult includes a SamplingDecision with three possible values, as well as Attributes
* Add attributes returned from a Sampler to the span
* Add SpanKind, Attributes, and Links to API Sampler.ShouldSample() parameters
* Drop "Get" from sdk Sampler.GetDescription to match api Sampler
* Make spanID parameter in API Sampler interface a core.SpanID
* Fix types and printf format per PR feedback from krnowak
* Ensure unit test error messages reflect new reality
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
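Put together, the refactored sampler surface looks roughly like this; the constant and field names are a sketch under assumed stand-in types, not the exact ones in sdk/trace:

```go
// Stand-ins for the API types referenced below.
type (
	KeyValue struct{ Key, Value string }
	Link     struct{ /* linked span context + attributes */ }
	SpanKind int
)

// SamplingDecision has three possible values.
type SamplingDecision int

const (
	NotRecord       SamplingDecision = iota // drop the span
	Record                                  // record, but do not sample
	RecordAndSample                         // record and sample
)

// SamplingParameters now carries the span Kind, Attributes, and Links.
type SamplingParameters struct {
	Name       string
	Kind       SpanKind
	Attributes []KeyValue
	Links      []Link
	// parent span context, trace ID, span ID, ... omitted
}

// SamplingResult bundles the decision with attributes to add to the span.
type SamplingResult struct {
	Decision   SamplingDecision
	Attributes []KeyValue
}

// Sampler is an interface rather than a function type.
type Sampler interface {
	ShouldSample(SamplingParameters) SamplingResult
	Description() string
}
```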
* wip: observers
* wip: float observers
* fix copy pasta
* wip: rework observers in sdk
* small fix in global meter
* wip: aggregators and selectors
* wip: monotonicity option for observers
* some refactor
* wip: docs
needs more package docs (especially for api/metric and sdk/metric)
* fix ci
* Fix copy-pasta in docs
Co-Authored-By: Mauricio Vásquez <mauricio@kinvolk.io>
* recycle unused recorders in observers
if a recorder for a labelset is unused for a second collection cycle
in a row, drop it
* unregister
* thread-safe set callback
* Fix docs
* Revert "wip: aggregators and selectors"
This reverts commit 37b7d05aed5dc90f6d5593325b6eb77494e21736.
* update selector
* tests
* Rework number equality
Compare concrete numbers, so we can get actual numbers in the error
message when they are not equal, not some uint64 representation. This
also uses InDelta for comparing floats.
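As a small sketch of that comparison style, assuming testify's assert package:

```go
import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// assertFloatsEqual compares concrete float values, so a failure prints the
// actual numbers (not a uint64 bit pattern) and tolerates rounding via InDelta.
func assertFloatsEqual(t *testing.T, want, got float64) {
	assert.InDelta(t, want, got, 1e-9)
}
```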
* Ensure that Observers are registered in the same order
* Run observers in fixed order
So the tests can be reproducible - iterating a map made the order of
measurements random.
* Ensure the proper alignment of the delegates
This wasn't checked at all. After adding the checks, the test-386
failed.
* Small tweaks to the global meter test
* Ensure proper alignment of the callback pointer
test-386 was complaining about it
* update docs
* update a TODO
* address review issues
* drop SetCallback
Co-authored-by: Mauricio Vásquez <mauricio@kinvolk.io>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
* Add global propagators
The default global propagators are set to the chained W3C trace and
correlation context propagators.
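What "chained" means here, as a minimal self-contained sketch (the interface shape and names are illustrative, not the global package's actual API):

```go
import (
	"context"
	"net/http"
)

// Propagator injects context into and extracts context from an HTTP-header
// style carrier.
type Propagator interface {
	Inject(ctx context.Context, carrier http.Header)
	Extract(ctx context.Context, carrier http.Header) context.Context
}

// chain runs every propagator in order, so both the W3C trace context and the
// correlation context get injected and extracted.
type chain []Propagator

func (c chain) Inject(ctx context.Context, carrier http.Header) {
	for _, p := range c {
		p.Inject(ctx, carrier)
	}
}

func (c chain) Extract(ctx context.Context, carrier http.Header) context.Context {
	for _, p := range c {
		ctx = p.Extract(ctx, carrier)
	}
	return ctx
}
```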
* Use global propagators in plugins
The httptrace and grpc plugins should also get some API for setting a
propagator to use (the othttp plugin already has such an API), but
that can come in some other PR.
* Decrease indentation in trace propagators
* Drop obsolete TODOs
Now we do "something" with correlation context - it ends up in the
context, and we put the context into the request, so the chained HTTP
handler can access it too.
The other TODO was about tag.Upsert which is long gone.
* Do not unnecessarily update the request context
The request context already contains the span (and we add some
attribute there), so inserting it into the context again is pointless.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>