A banner at the top of godoc.org pages is already asking users to switch
to pkg.go.dev, as mentioned in the blog post
https://blog.golang.org/pkg.go.dev-2020
Signed-off-by: Andrew Hsu <xuzuan@gmail.com>
* Remove LabelSet from api/metric
* SDK tests pass
* Restore benchmarks
* All tests pass
* Remove all mentions of LabelSet
* Test RecordBatch
* Batch test
* Improves benchmark (some)
* Move the benchmark to match HEAD
* Align labels for GOARCH=386
* Add alignment test
* Disable the stress test for GOARCH=386
* Fix bug
* Move atomic fields into their own file
* Add a TODO
* Comments
* Remove metric.Labels(...)
* FTB
Co-authored-by: Liz Fong-Jones <lizf@honeycomb.io>
This PR changes the CircleCI build config to build the project with both Go 1.13 and Go 1.14.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
Co-authored-by: Liz Fong-Jones <lizf@honeycomb.io>
Update license header to standard format for source files missed previously.
Add license header to new source files.
Add a Makefile check to verify that all `*.go` and `*.sh` files have a copyright
notice (or a comment noting they are auto-generated) within the first few
lines.
* Move span transforms of the OTLP exporter to internal
Break up and move the functionality of the `transform_spans.go` file into
appropriate files in the `internal/transform` sub-package. This is in
preparation for using some of the overlapping functionality to implement
Resource support in the metric side of the exporter.
Adds more specific unit tests for some of the transferred functionality.
The removed tests used the exporter as a processing engine and the
replacement tests do not do this. The tests found in `otlp_test.go`
seem to comprehensively cover this type of testing.
Include the Link `Name` in the exported span links and add a test to check
for this.
Resolves #527
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
* Fix SpanData doc
* Consolidate span comparison
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Add skeleton uniqueness checker
* Fix the build w/ new code in place
* Add sync tests
* More test
* Implement global uniqueness checking
* Set the library name
* Ensure ordered global initialization
* Use proper require statement for errors
* Comment
* Apply feedback fixes
* Comment and rename from feedback
* Make the pre_release.sh script work with GNU sed
The `-i` sed flag does in-place updates of the file. On some
systems it requires a parameter; on others (GNU) the parameter is
optional, but must be "glued" to the flag (`-i.bak`, for example).
That is problematic, so just emulate the in-place updates with
copies and removals.
* Update the version string in sdk with the pre-release script
This was a missing step, which is why we still have the version string
"0.2.3" while in reality it should be "0.3.0". Hopefully this will be
fixed in "0.3.1" or whatever the next release version will be.
* Ensure a clean git state when doing a release
The script does `git add .`, which adds everything in the tree, even
untracked files. Make sure that there are none of those.
* Temporarily opt-out export.Labels from label encoding stuff
* Stop passing label encoding stuff to export.Labels
* Drop label encoding stuff from SDK
* Dogstatsd exporter does not need to implement label exporter anymore
* more dogstatsd exporter fixes
* Get export labels back to the encoding stuff
in a lame way, but improvements are coming in the following commits
* Get encoded labels through export.Labels
* make the SDK provide its own implementation of export.Labels
* drop dead code
* add noop label exporter
* make export simple labels immutable
* Move the default label encoder to export package
* Simplify the simple export labels a bit
* Reserve some label exporter IDs
* Document and shuffle the code a bit
* Prepare for bringing the iterator benchmark test back
We can install a callback into the Batcher's process function - this is
the place where we can access the labels, and thus test the label
iterator.
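For illustration, a callback-driven Batcher could look roughly like this; all the types below are stripped-down stand-ins, not the SDK's actual export interfaces:

```go
package main

import "fmt"

// Record and Batcher are stand-ins for the SDK export types; they only
// exist here to illustrate the callback idea used by the benchmark.
type Record struct {
	Labels []string // stand-in for the label storage behind the iterator
}

type Batcher interface {
	Process(rec Record) error
}

// callbackBatcher forwards every processed record to fn, which gives the
// benchmark direct access to the record's labels (and thus the iterator).
type callbackBatcher struct {
	fn func(Record)
}

func (b callbackBatcher) Process(rec Record) error {
	b.fn(rec)
	return nil
}

func main() {
	var batcher Batcher = callbackBatcher{fn: func(rec Record) {
		for _, l := range rec.Labels {
			fmt.Println(l) // the benchmark would walk the label iterator here
		}
	}}
	_ = batcher.Process(Record{Labels: []string{"A=1", "B=2"}})
}
```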
* Bring back the iterator benchmarks
* Simplifications and docs
* Fix copyright to be consistent with the rest
* Fix typo
* Put reserved label encoder IDs into constants
We get fewer comments about magic numbers that way.
* Fix the label encoder as label exporter thinko
* Update License header for all source files
- Add Apache 2.0 header to source files that did not have one.
- Update all existing headers dated to 2019 to be 2020
- Remove comma from License header to comply with the Apache 2.0
guidelines.
* Update Copyright notice
Use the standard Copyright notices outlined by the
[CNCF](https://github.com/cncf/foundation/blob/master/copyright-notices.md#copyright-notices)
The `checkpoint` function is executed in a single thread so we can do
the encoding lazily before passing the encoded version of labels to
the exporter. This is a cheap and quick way to avoid encoding the
labels on every collection interval.
Co-authored-by: Rahul Patel <rahulpa@google.com>
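A rough sketch of that lazy-encoding idea, with hypothetical names rather than the SDK's real types:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// labels is a stand-in for the SDK's label set. The encoded form is
// computed once, during checkpoint, and then handed to the exporter.
type labels struct {
	kvs     map[string]string
	encoded string // filled in lazily by encode()
}

// encode builds a canonical "k1=v1,k2=v2" form. Because checkpoint runs in
// a single thread, doing this here needs no synchronization and happens at
// most once per collection interval.
func (l *labels) encode() string {
	if l.encoded != "" {
		return l.encoded
	}
	keys := make([]string, 0, len(l.kvs))
	for k := range l.kvs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+l.kvs[k])
	}
	l.encoded = strings.Join(parts, ",")
	return l.encoded
}

func main() {
	l := &labels{kvs: map[string]string{"B": "2", "A": "1"}}
	fmt.Println(l.encode()) // computed once, reused on later exports
}
```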
* Add support for Resources in the SDK
Add `Config` types for the push `Controller` and the `SDK`. Included
with this are helper functions to configure the `ErrorHandler` and
`Resource`.
Add a `Resource` to the Meter `Descriptor`. The choice to add the
`Resource` here (instead of, say, a `Record` or the `Instrument` itself)
was motivated by the definition of the `Descriptor` as the way to
uniquely describe a metric instrument.
Update the push `Controller` and default `SDK` to pass down their configured
`Resource` from instantiation to the metric instruments.
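For illustration, the helper functions follow Go's functional-options pattern; the sketch below uses assumed names, not the SDK's actual declarations:

```go
package main

import "fmt"

// ErrorHandler and Resource are stand-ins; the SDK's types are richer.
type ErrorHandler func(error)

type Resource struct{ Name string }

// Config collects the settings shared by the push Controller and the SDK.
type Config struct {
	ErrorHandler ErrorHandler
	Resource     Resource
}

// Option applies a setting to a Config.
type Option func(*Config)

// WithErrorHandler sets the handler invoked when an export fails.
func WithErrorHandler(h ErrorHandler) Option {
	return func(c *Config) { c.ErrorHandler = h }
}

// WithResource sets the Resource that ends up on every metric Descriptor.
func WithResource(r Resource) Option {
	return func(c *Config) { c.Resource = r }
}

func main() {
	cfg := Config{}
	opts := []Option{
		WithErrorHandler(func(err error) { fmt.Println("export error:", err) }),
		WithResource(Resource{Name: "demo-service"}),
	}
	for _, opt := range opts {
		opt(&cfg)
	}
	fmt.Println(cfg.Resource.Name)
}
```

Keeping the ErrorHandler and Resource behind options keeps the controller and SDK constructors backward compatible.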
* Update New SDK constructor documentation
* Change NewDescriptor constructor to take opts
Add DescriptorConfig and DescriptorOption to configure the metric
Descriptor with the description, unit, keys, and resource.
Update all function calls to NewDescriptor to use new function
signature.
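In sketch form, the constructor then folds its options into the config before building the Descriptor; all identifiers below are stand-ins, not the api/metric declarations:

```go
package main

import "fmt"

// DescriptorConfig and DescriptorOption stand in for the option machinery
// described above.
type DescriptorConfig struct {
	Description string
	Unit        string
}

type DescriptorOption func(*DescriptorConfig)

func WithDescription(d string) DescriptorOption {
	return func(c *DescriptorConfig) { c.Description = d }
}

func WithUnit(u string) DescriptorOption {
	return func(c *DescriptorConfig) { c.Unit = u }
}

type Descriptor struct {
	Name   string
	Config DescriptorConfig
}

// NewDescriptor keeps the required name positional and folds everything
// else into options, which is the signature change described above.
func NewDescriptor(name string, opts ...DescriptorOption) Descriptor {
	cfg := DescriptorConfig{}
	for _, opt := range opts {
		opt(&cfg)
	}
	return Descriptor{Name: name, Config: cfg}
}

func main() {
	d := NewDescriptor("http.requests",
		WithDescription("handled HTTP requests"),
		WithUnit("1"),
	)
	fmt.Printf("%+v\n", d)
}
```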
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
* Update and add copyright notices
* Update push controller creator func
Pass the configured ErrorHandler for the controller to the SDK.
* Update Resource integration with the SDK
Add back the Resource field to the Descriptor that was moved in the
last merge with master.
Add a resource.Provider interface.
Have the default SDK implement the new resource.Provider interface and
integrate the new interface into the newSync/newAsync workflows. Now, if
the SDK has a Resource defined it will be passed to all Descriptors
created for the instruments it creates.
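A minimal sketch of what such a provider interface and its use could look like, with names assumed from the description above:

```go
package main

import "fmt"

// Resource is a stand-in for the SDK resource type.
type Resource struct{ attrs string }

// Provider is the assumed shape of the resource.Provider interface:
// anything that can hand out the Resource it was configured with.
type Provider interface {
	Resource() Resource
}

// sdk stands in for the default meter SDK; it implements Provider.
type sdk struct{ res Resource }

func (s sdk) Resource() Resource { return s.res }

// Descriptor stands in for the metric Descriptor built in the SDK's
// newSync/newAsync paths; the provider's Resource is copied into it.
type Descriptor struct {
	Name     string
	Resource Resource
}

func newDescriptor(name string, p Provider) Descriptor {
	return Descriptor{Name: name, Resource: p.Resource()}
}

func main() {
	s := sdk{res: Resource{attrs: "service.name=demo"}}
	fmt.Println(newDescriptor("requests", s).Resource.attrs)
}
```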
* Remove nil check for metric SDK config
* Fix and add test for API Options
Add an `Equal` method to the Resource so it can be compared with
github.com/google/go-cmp/cmp.
Add additional test of the API Option unit tests to ensure WithResource
correctly sets a new resource.
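go-cmp uses an `Equal(T) bool` method when one is defined, so a sketch like the one below (with an assumed internal representation) is enough for `cmp.Equal` and `cmp.Diff` to compare Resources without reaching into unexported fields:

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// Resource is a stand-in; the real type stores key-value attributes.
type Resource struct {
	encoded string // assumed canonical encoding of the attributes
}

// Equal reports whether two Resources carry the same attributes. cmp picks
// this method up automatically because of its (T) Equal(T) bool shape.
func (r Resource) Equal(o Resource) bool { return r.encoded == o.encoded }

func main() {
	a := Resource{encoded: "k1=v1,k2=v2"}
	b := Resource{encoded: "k1=v1,k2=v2"}
	fmt.Println(cmp.Equal(a, b)) // true
}
```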
* Move the resource.Provider interface to the API package
Move the interface to where it is used.
Fix spelling.
* Remove errant line
* Remove nil checks for the push controller config
* Fix check that the SDK implements Resourcer
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
* Do not expose a slice of labels in export.Record
This is really an inconvenient implementation detail leak - we may
want to store labels in a different way. Replace it with an iterator -
it does not force us to use a slice of key values as storage in the
long run.
* Add Len to LabelIterator
It may come in handy in several situations where we don't have access
to the export.Labels object, but only to the label iterator.
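The consumer-side pattern would then read roughly like this; the iterator below is a self-contained stand-in with assumed method names:

```go
package main

import "fmt"

// KeyValue and Iterator are stand-ins for the API's label types; the real
// iterator is backed by the export.Labels storage rather than a slice.
type KeyValue struct{ Key, Value string }

type Iterator struct {
	kvs []KeyValue
	idx int
}

// Next advances the iterator; it must be called before the first Label.
func (i *Iterator) Next() bool { i.idx++; return i.idx <= len(i.kvs) }

// Label returns the label the iterator currently points at.
func (i *Iterator) Label() KeyValue { return i.kvs[i.idx-1] }

// Len reports how many labels the iterator yields in total, which is handy
// when only the iterator (not export.Labels) is available.
func (i *Iterator) Len() int { return len(i.kvs) }

func main() {
	it := Iterator{kvs: []KeyValue{{"A", "1"}, {"B", "2"}}}
	out := make([]string, 0, it.Len())
	for it.Next() {
		kv := it.Label()
		out = append(out, kv.Key+"="+kv.Value)
	}
	fmt.Println(out)
}
```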
* Use reflect value label iterator for the fixed labels
* add reset operation to iterator
Makes my life easier when writing a benchmark. Might also be an
alternative to cloning the iterator.
* Add benchmarks for iterators
* Add import comment
* Add clone operation to label iterator
* Move iterator tests to a separate package
* Add tests for cloning iterators
* Pass label iterator to export labels
* Use non-addressable array reflect values
By not using the value created by `reflect.New()`, but rather by
`reflect.ValueOf()`, we get a non-addressable array in the value,
which does not incur an allocation cost when getting an element from
the array.
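One way to check that claim on a given Go version is a quick allocation count; this is an illustrative throwaway, not the exporter's code:

```go
package main

import (
	"fmt"
	"reflect"
	"testing"
)

type kv struct{ Key, Value string }

func main() {
	arr := [2]kv{{"A", "1"}, {"B", "2"}}

	// Addressable: the array lives behind a pointer created by reflect.New.
	addressable := reflect.New(reflect.TypeOf(arr)).Elem()
	addressable.Set(reflect.ValueOf(arr))

	// Non-addressable: reflect.ValueOf holds its own copy of the array value.
	nonAddressable := reflect.ValueOf(arr)

	measure := func(name string, v reflect.Value) {
		allocs := testing.AllocsPerRun(1000, func() {
			_ = v.Index(1).Interface()
		})
		fmt.Printf("%-16s %.1f allocs per element access\n", name, allocs)
	}
	measure("addressable", addressable)
	measure("non-addressable", nonAddressable)
}
```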
* Drop zero iterator
This can be substituted by a reflect value iterator that goes over a
value with a zero-sized array.
* Add a simple iterator that implements label iterator
In the long run this will completely replace the LabelIterator
interface.
* Replace reflect value iterator with simple iterator
* Pass label storage to new export labels, not label iterator
* Drop label iterator interface, rename storage iterator to label iterator
* Drop clone operation from iterator
It's a leftover from interface times and now it's pointless - the
iterator is a simple struct, so cloning it is a simple copy.
* Drop Reset from label iterator
Reset existed solely for benchmarking convenience. Now we can just copy
the iterator cheaply, so there is no more need for Reset.
* Drop noop iterator tests
* Move back iterator tests to export package
* Eagerly get the reflect value of ordered labels
So we won't get into problems when several goroutines want to iterate
over the same labels at the same time. Not sure if this would be a big
deal, since every goroutine would compute the same reflect.Value, but
concurrent writes to the same memory are bad anyway. And it doesn't cost
us any extra allocations.
* Replace NewSliceLabelIterator() with a method of LabelSlice
* Add some documentation
* Documentation fixes
* Create MeterImpl interface
* Checkpoint w/ sdk.go building
* Checkpoint working on global
* api/global builds (test fails)
* Test fix
* All tests pass
* Comments
* Add two tests
* Comments and uncomment tests
* Precommit part 1
* Still working on tests
* Lint
* Add a test and a TODO
* Cleanup
* Lint
* Interface()->Implementation()
* Apply some feedback
* From feedback
* (A)Synchronous -> (A)Sync
* Add a missing comment
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Rename a variable
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
* Add request filtering capability to othttp.Handler
* Add simple and useful filters for othttp plugin
* Add note that all requests are traced in the absence of any filters
* Add copyright notice to plugin/othttp/filters/filters_test.go
Co-Authored-By: Tyler Yahn <MrAlias@users.noreply.github.com>
* Add package docstring for filters package
Co-authored-by: Tyler Yahn <MrAlias@users.noreply.github.com>
Co-authored-by: Rahul Patel <rahulpa@google.com>
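Conceptually a filter is just a predicate over the incoming request: with no filters configured every request is traced, otherwise the request must pass all of them. A self-contained stand-in sketch (not the plugin's actual types):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// Filter decides whether a request should be traced. This is a stand-in for
// the plugin's filter type; the real helpers live in plugin/othttp/filters.
type Filter func(*http.Request) bool

// PathPrefix mirrors the kind of helper the filters package provides:
// accept only requests under the given path prefix.
func PathPrefix(prefix string) Filter {
	return func(r *http.Request) bool { return strings.HasPrefix(r.URL.Path, prefix) }
}

// traced applies the note above: with no filters configured every request
// is traced; otherwise the request must pass all of them.
func traced(r *http.Request, filters []Filter) bool {
	for _, f := range filters {
		if !f(r) {
			return false
		}
	}
	return true
}

func main() {
	filters := []Filter{PathPrefix("/api/")}
	for _, path := range []string{"/api/users", "/healthz"} {
		r := httptest.NewRequest(http.MethodGet, path, nil)
		fmt.Printf("%-11s traced=%v\n", path, traced(r, filters))
	}
}
```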
* update always and never sample descriptions
* fix typo
* rename always on / off sampler files, structs and variables to match
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Initial metrics addition to the OTLP exporter
* Fixes
Update to incorporate merged changes.
Fix lint issues.
* Add sum float64 transform unit test
* Fix static check
* Update comments
Fix malformed License header.
Add documentation for new transform functions.
Remove errant TODO.
* Fix test failures and handle ErrEmptyDataSet
Use `assert.NoError` instead of `assert.Nil` to correctly display
checked errors.
Use the result of `assert.NoError` to guard against `nil` pointer
dereferences.
Add check to skip `Record`s that return an `ErrEmptyDataSet` error and
include test to check this error is correctly returned from the
transform package.
Co-authored-by: Rahul Patel <rahulpa@google.com>
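The pattern in question, shown on a toy test; testify's assert functions return a bool, which is what makes the guard possible (the sentinel error name here is a stand-in):

```go
package transform

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
)

// errEmptyDataSet stands in for the sentinel error mentioned above; the
// real one is defined in the exporter's transform package.
var errEmptyDataSet = errors.New("empty data set")

type point struct{ value int }

func load() (*point, error) { return &point{value: 42}, nil }

func TestGuardedAssert(t *testing.T) {
	p, err := load()
	// assert.NoError prints the error message on failure and returns a
	// bool, so the dereference below only runs when p is actually usable.
	if assert.NoError(t, err) {
		assert.Equal(t, 42, p.value)
	}

	// Records producing an empty data set are skipped rather than failed.
	if errors.Is(err, errEmptyDataSet) {
		t.SkipNow()
	}
}
```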
Update copyright date to when file was created (2020)
Create random numbers between 0 and 100 to more evenly match the
buckets defined in the histogram test (25, 50, 75)
* update README with import instructions and how to build / test
* fix typo
* remove building the code section from README.md
* add clone instructions to CONTRIBUTING.md
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
Co-authored-by: Rahul Patel <rahulpa@google.com>
* Add zipkin exporter
The zipkin exporter implements the SpanBatcher interface. It follows
the current-at-the-time-of-writing document about conversion from
OpenTelemetry span data to Zipkin spans, which means that endpoint
information is not yet filled in.
* Fix typo in docs
* Add a zipkin example
This sends span information to a locally running zipkin collector.
Currently I have a problem getting the collector to show me the spans
after it accepts them with HTTP 202. Not sure if this is because of
missing endpoint information.
* Make gitignore consistent
The fixed paths should be prefixed with a slash. The "relative" paths
mean that git will ignore all files whose paths end with that pattern.
* Add tests for zipkin exporter
* Update api for Must constructors, with SDK helpers
* Update for Must constructors, leaving TODOs about global errors
* Add tests
* Move Must methods into metric.Must
* Apply the feedback
* Remove interfaces
* Remove more interfaces
* Again...
* Remove a sentence about a dead interface
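The Must pattern wraps the error-returning constructors with panicking ones for use at initialization time; the sketch below uses stand-in types, not the api/metric definitions:

```go
package main

import (
	"errors"
	"fmt"
)

// Meter stands in for the metric API's Meter: constructors return errors.
type Meter struct{}

type Counter struct{ name string }

func (m Meter) NewInt64Counter(name string) (Counter, error) {
	if name == "" {
		return Counter{}, errors.New("empty instrument name")
	}
	return Counter{name: name}, nil
}

// MeterMust wraps a Meter so constructors panic instead of returning
// errors, which keeps package-level instrument declarations terse.
type MeterMust struct{ meter Meter }

func Must(m Meter) MeterMust { return MeterMust{meter: m} }

func (mm MeterMust) NewInt64Counter(name string) Counter {
	c, err := mm.meter.NewInt64Counter(name)
	if err != nil {
		panic(err)
	}
	return c
}

func main() {
	counter := Must(Meter{}).NewInt64Counter("requests")
	fmt.Println(counter.name)
}
```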
* change the histogram aggregator to have a consistent but blocking Checkpoint()
* docs
* wrapping docs
* remove currentIdx from the 8-byte alignment check
* stress test
* add export and move the lock-free write algorithm to an external struct.
* move state locker to another package.
* add todos
* minimal tests
* renaming and docs
* change to context.Background()
* add link to algorithm and grammars
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
* Use an array as the key for label encoding in the SDK
* Comment
* Precommit
* Comment
* Comment
* Feedback from krnowak
* Do not overwrite the Key
* Add the value test requested
* Add a comment
* drop gauge instrument
* Restore the benchmark and stress test for lastvalue aggregator, but remove monotonic last-value support
* Rename gauge->lastvalue and remove remaining uses of the word 'gauge'
Co-authored-by: Krzesimir Nowak <krzesimir@kinvolk.io>
* Propagate context changes in mix tests
We will need this for testing the correlation context and baggage
items propagation between the APIs.
* Add baggage interoperation tests
The test adds a baggage item to the active OT span and a correlation
key-value to the current Otel span. Then it makes sure that the OT span
contains both the baggage item and a translated version of the
correlation key-value its Otel sibling got, and that the Otel span
contains both the correlation key-value and the baggage item its OT
sibling got.
* Add hooks functionality to baggage propagation
This introduces two kinds of hooks into the correlation context
code.
The set hook gets called every time we set a Map in the context. The
hook receives a context with the Map and returns a new context.
The get hook gets called every time we get a Map from the context. The
hook receives the context and the map, and returns a new Map.
These hooks will be used for correlation context and baggage items
propagation between the Otel and OT APIs.
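In sketch form, the two hooks could look like this; the signatures are assumed from the description above and are not the correlation package's actual API:

```go
package main

import (
	"context"
	"fmt"
)

// Map is a stand-in for the correlation context Map.
type Map map[string]string

// SetHook is called every time a Map is set in a context; it receives the
// context already carrying the Map and may return a replacement context.
type SetHook func(ctx context.Context) context.Context

// GetHook is called every time a Map is read from a context; it may return
// a replacement Map (e.g. merged with OT baggage items).
type GetHook func(ctx context.Context, m Map) Map

type hooksKeyType int

const hooksKey hooksKeyType = 0

type hooks struct {
	set SetHook
	get GetHook
}

// ContextWithHooks installs the hooks; a bridge setup helper would call
// this once when preparing the context.
func ContextWithHooks(ctx context.Context, set SetHook, get GetHook) context.Context {
	return context.WithValue(ctx, hooksKey, hooks{set: set, get: get})
}

type mapKeyType int

const mapKey mapKeyType = 0

// ContextWithMap stores the Map and then runs the set hook, mirroring the
// "set hook gets called every time we set a Map" behavior.
func ContextWithMap(ctx context.Context, m Map) context.Context {
	ctx = context.WithValue(ctx, mapKey, m)
	if h, ok := ctx.Value(hooksKey).(hooks); ok && h.set != nil {
		ctx = h.set(ctx)
	}
	return ctx
}

// MapFromContext reads the Map and then runs the get hook.
func MapFromContext(ctx context.Context) Map {
	m, _ := ctx.Value(mapKey).(Map)
	if h, ok := ctx.Value(hooksKey).(hooks); ok && h.get != nil {
		m = h.get(ctx, m)
	}
	return m
}

func main() {
	ctx := ContextWithHooks(context.Background(),
		func(ctx context.Context) context.Context {
			fmt.Println("set hook: copy correlation entries into OT baggage")
			return ctx
		},
		func(ctx context.Context, m Map) Map {
			fmt.Println("get hook: merge OT baggage items into the Map")
			return m
		},
	)
	ctx = ContextWithMap(ctx, Map{"user": "alice"})
	fmt.Println(MapFromContext(ctx))
}
```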
* Warn on foreign opentracing span
* fixup for using otel propagators
* Add utility function for setting up bridge and context
This prepares the context by installing the hooks, so the correlation
context and baggage items can be propagated between the APIs.
* Add bridge span constructor
So I do not need to remember to initialize a newly added member
in several places now.
* Propagate baggage across otel and OT APIs
This uses the set hook functionality to propagate correlation context
changes from Otel to OT spans by inserting keys and values into the
baggage items. The get hook functionality is used to propagate baggage
items from active OT span into the otel correlation context.
* Use correlation Map for baggage items
We will put this map into the context with correlation context
functions, and that is easier if we have correlation.Map, not
map[string]string.
* Use otel propagators in bridge
The otel propagators are now kinda sorta usable for the opentracing
bridge. Some more work is needed to make them fully work, though -
correlation context set with the otel API is not propagated to OT
spans as baggage items yet.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>