* Do not expose a slice of labels in export.Record
This leaks an inconvenient implementation detail - we may want to
store labels in a different way. Replace it with an iterator - it does
not force us to use a slice of key-values as storage in the long run.
* Add Len to LabelIterator
It may come in handy in several situations where we don't have access
to the export.Labels object, but only to the label iterator.
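A minimal sketch of the iterator shape at this stage (it was still an interface; later bullets turn it into a plain struct). Names and types here are illustrative stand-ins, not the exact otel API:

```go
package export

// KeyValue stands in for the SDK's key-value label type in this sketch.
type KeyValue struct{ Key, Value string }

// LabelIterator hides how export.Record stores its labels, so the
// storage can change without breaking callers.
type LabelIterator interface {
	// Next advances the iterator, returning false once it is exhausted.
	Next() bool
	// Label returns the label at the current position.
	Label() KeyValue
	// Len reports the number of labels, which is useful when only the
	// iterator, not the export.Labels object, is in scope.
	Len() int
}
```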
* Use reflect value label iterator for the fixed labels
* Add reset operation to iterator
This makes writing benchmarks easier. It might also be an alternative
to cloning the iterator.
* Add benchmarks for iterators
* Add import comment
* Add clone operation to label iterator
* Move iterator tests to a separate package
* Add tests for cloning iterators
* Pass label iterator to export labels
* Use non-addressable array reflect values
By using the value created by `reflect.ValueOf()` rather than by
`reflect.New()`, we get a non-addressable array value, which does not
incur an allocation cost when getting an element from the array.
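A standalone illustration of the difference (plain Go, not the SDK code; `kv` is an assumed stand-in for the label type): `reflect.New()` yields an addressable value, while `reflect.ValueOf()` yields a non-addressable one, which is enough for read-only element access.

```go
package main

import (
	"fmt"
	"reflect"
)

// kv is an illustrative stand-in for the SDK's key-value label type.
type kv struct{ key, value string }

func main() {
	labels := [2]kv{{"a", "1"}, {"b", "2"}}

	// reflect.New allocates a fresh array; its Elem() is addressable.
	viaNew := reflect.New(reflect.TypeOf(labels)).Elem()
	viaNew.Set(reflect.ValueOf(labels))
	fmt.Println(viaNew.CanAddr()) // true

	// reflect.ValueOf wraps a copy of the array and yields a
	// non-addressable value, which is enough for read-only access.
	viaValueOf := reflect.ValueOf(labels)
	fmt.Println(viaValueOf.CanAddr())                 // false
	fmt.Println(viaValueOf.Index(1).Interface().(kv)) // {b 2}
}
```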
* Drop zero iterator
This can be substituted by a reflect value iterator that goes over a
zero-sized array value.
* Add a simple iterator that implements label iterator
In the long run this will completely replace the LabelIterator
interface.
* Replace reflect value iterator with simple iterator
* Pass label storage to new export labels, not label iterator
* Drop label iterator interface, rename storage iterator to label iterator
* Drop clone operation from iterator
It's a leftover from when the iterator was an interface and is now
pointless - the iterator is a simple struct, so cloning it is just a
plain copy.
* Drop Reset from label iterator
Reset existed solely for benchmarking convenience. Now that the
iterator can be copied cheaply, there is no more need for Reset.
* Drop noop iterator tests
* Move iterator tests back to the export package
* Eagerly get the reflect value of ordered labels
This way we won't run into problems when several goroutines want to
iterate the same labels at the same time. It might not be a big deal,
since every goroutine would compute the same reflect.Value, but a
concurrent write to the same memory is bad regardless. And it doesn't
cost us any extra allocations.
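A sketch of the eager pattern under assumed names (`orderedLabels` and `kv` are illustrative, not the SDK types): the reflect.Value is computed once in the constructor, so concurrent iterations only read shared state.

```go
package main

import (
	"fmt"
	"reflect"
)

// kv stands in for the SDK's key-value label type; all names here are
// illustrative, not the actual otel types.
type kv struct{ key, value string }

// orderedLabels sketches the eager pattern: the reflect.Value for the
// ordered label storage is computed once, up front, so Iter() never
// writes shared state.
type orderedLabels struct {
	ordered reflect.Value
}

func newOrderedLabels(storage interface{}) orderedLabels {
	// Eager: computed here in the constructor, not lazily on the first
	// Iter() call, so concurrent iterations only read.
	return orderedLabels{ordered: reflect.ValueOf(storage)}
}

// Iter is safe to call from many goroutines because it only reads.
func (o orderedLabels) Iter() reflect.Value { return o.ordered }

func main() {
	o := newOrderedLabels([2]kv{{"a", "1"}, {"b", "2"}})
	done := make(chan struct{})
	for g := 0; g < 2; g++ {
		go func() {
			v := o.Iter()
			for i := 0; i < v.Len(); i++ {
				fmt.Println(v.Index(i).Interface().(kv))
			}
			done <- struct{}{}
		}()
	}
	<-done
	<-done
}
```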
* Replace NewSliceLabelIterator() with a method of LabelSlice
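Putting the preceding bullets together, a hedged sketch of the final shape with illustrative names (not the exact SDK types): the iterator is a plain struct over the label storage, constructed via a method on `LabelSlice`, and copying it covers what Clone and Reset used to do.

```go
package main

import "fmt"

// KeyValue and LabelSlice stand in for the SDK's types; the names are
// illustrative, not the exact otel API.
type KeyValue struct{ Key, Value string }

type LabelSlice []KeyValue

// LabelIterator is a plain struct over the label storage, so "cloning"
// it (and the old Reset) is just a value copy.
type LabelIterator struct {
	storage LabelSlice
	idx     int
}

// Iter replaces the former NewSliceLabelIterator() constructor.
func (s LabelSlice) Iter() LabelIterator { return LabelIterator{storage: s, idx: -1} }

// Next advances the iterator, returning false once it is exhausted.
func (i *LabelIterator) Next() bool { i.idx++; return i.idx < len(i.storage) }

// Label returns the label at the current position.
func (i *LabelIterator) Label() KeyValue { return i.storage[i.idx] }

// Len reports the number of labels.
func (i *LabelIterator) Len() int { return len(i.storage) }

func main() {
	s := LabelSlice{{"k1", "v1"}, {"k2", "v2"}}
	it := s.Iter()
	copyOfIt := it // a plain copy serves where Clone/Reset used to
	for it.Next() {
		fmt.Println(it.Label())
	}
	fmt.Println(copyOfIt.Len(), copyOfIt.Next()) // 2 true
}
```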
* Add some documentation
* Documentation fixes
* Create MeterImpl interface
* Checkpoint w/ sdk.go building
* Checkpoint working on global
* api/global builds (test fails)
* Test fix
* All tests pass
* Comments
* Add two tests
* Comments and uncomment tests
* Precommit part 1
* Still working on tests
* Lint
* Add a test and a TODO
* Cleanup
* Lint
* Interface()->Implementation()
* Apply some feedback
* From feedback
* (A)Synchronous -> (A)Sync
* Add a missing comment
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Rename a variable
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
* drop gauge instrument
* Restore the benchmark and stress test for the lastvalue aggregator, but remove monotonic last-value support
* Rename gauge->lastvalue and remove remaining uses of the word 'gauge'
Co-authored-by: Krzesimir Nowak <krzesimir@kinvolk.io>
The `go.opentelemetry.io/otel/exporter/trace/jaeger` package was
mistakenly released with a `v1.0.0` tag instead of `v0.1.0`. This
resulted in all subsequent releases not becoming the default latest,
meaning that `go get` pulled in the incompatible `v1.0.0` tag (the
release that should have been `v0.1.0`) when pulling in more recent
versions of other otel packages. Renaming the `exporter` directory to
`exporters` fixes this issue by changing the package import path
accordingly.
Additionally, this renames *all* exporters. This is understood to be
disruptive to existing users, as they will need to update any
dependencies they currently have on our exporters.
However, it was decided to take this action regardless. The need to
resolve the existing issue explained above is highly important, and
given the Alpha state of this project these kinds of breaking changes
should be expected (though not without reason).
Resolves #331
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
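For existing users the migration is an import path update; using the jaeger exporter named above as a representative example:

```go
// Before the rename:
import "go.opentelemetry.io/otel/exporter/trace/jaeger"

// After the rename:
import "go.opentelemetry.io/otel/exporters/trace/jaeger"
```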