* Remove the push controller's named Meter map
* Checkpoint
* Remove Provider impls
* Add a test
* Expose Provider() getter instead of implementing the interface
* api/metric changes from jmacd:jmacd/batch_obs_2
* Add an SDK test
* Use a single collector method
* Two fixes
* Comments; embed AsyncRunner
* Comments
* Comment fix
* More comments
* Renaming for clarity
* Renaming for clarity (fix)
* Lint
* Reimplement histogram using mutex instead of stateLocker
Move existing implementation to histogram_statelocker.go. Implement
benchmarks for single thread and parallel histogram updates comparing
mutex version to stateLocker version
* Drop statelocker implementation and alignment tests, benchmarks
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
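The benchmarks mentioned above compare single-threaded and parallel updates of a mutex-guarded histogram against the stateLocker variant. The following is a minimal, self-contained sketch of how such benchmarks can be structured with the standard `testing` package; `mutexHist` and its fields are illustrative stand-ins, not the repository's aggregator, and the stateLocker counterpart is not reproduced here.

```go
package histbench

import (
	"sync"
	"testing"
)

// mutexHist is an illustrative histogram state guarded by a single mutex.
type mutexHist struct {
	lock   sync.Mutex
	counts []int64
	sum    float64
}

func (h *mutexHist) update(v float64, bucket int) {
	h.lock.Lock()
	h.counts[bucket]++
	h.sum += v
	h.lock.Unlock()
}

// BenchmarkMutexSingle measures uncontended, single-threaded update cost.
func BenchmarkMutexSingle(b *testing.B) {
	h := &mutexHist{counts: make([]int64, 4)}
	for i := 0; i < b.N; i++ {
		h.update(1.0, i%len(h.counts))
	}
}

// BenchmarkMutexParallel measures contended update cost across goroutines.
func BenchmarkMutexParallel(b *testing.B) {
	h := &mutexHist{counts: make([]int64, 4)}
	b.RunParallel(func(pb *testing.PB) {
		i := 0
		for pb.Next() {
			h.update(1.0, i%len(h.counts))
			i++
		}
	})
}
```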
* Switch MinMaxSumCount to a mutex lock instead of StateLocker
With multiple values being modified on each Update(), taking a single
mutex lock and using non-atomic operations ends up being faster than
making each value update a separate atomic operation (a minimal sketch
follows this entry).
* Remove StateLocker implementation and comparison benchmarks
* Remove field offset tests. No longer required with no atomics.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
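As a rough illustration of the trade-off described in the entry above, the sketch below updates min, max, sum, and count under one mutex acquisition; the type and field names are hypothetical, not the SDK's MinMaxSumCount aggregator.

```go
package minmaxsketch

import "sync"

// minMaxSumCount is an illustrative aggregator guarded by a single mutex.
type minMaxSumCount struct {
	lock  sync.Mutex
	min   float64
	max   float64
	sum   float64
	count int64
}

// update modifies all four values under one lock acquisition, which can be
// cheaper than turning each value update into a separate atomic operation.
func (a *minMaxSumCount) update(v float64) {
	a.lock.Lock()
	defer a.lock.Unlock()
	if a.count == 0 || v < a.min {
		a.min = v
	}
	if a.count == 0 || v > a.max {
		a.max = v
	}
	a.sum += v
	a.count++
}
```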
* New label set API
* Checkpoint
* Remove label.Labels interface
* Fix trace
* Remove label storage
* Restore metric_test.go
* Tidy tests
* More comments
* More comments
* Same changes as 654
* Checkpoint
* Fix batch labels
* Avoid Resource.Attributes() where possible
* Update comments and restore order in resource.go
* From feedback
* From feedback
* Move iterator_test & feedback
* Strengthen the label.Set test
* Feedback on typos
* Fix the set test per @krnowak
* Nit
* Point to the convenience functions in api/key package
This is to increase the visibility of the api/key package through the
api/core package; otherwise developers often miss the api/key package
altogether, write `core.Key(name).TYPE(value)`, and complain about the
verbosity of such a construction. The api/key package allows them to
write `key.TYPE(name, value)` instead.
* Use the api/key package where applicable
This transforms all uses of `core.Key(name).TYPE(value)` into
`key.TYPE(name, value)`. It should also help increase the visibility
of the api/key package for developers reading the otel-go code.
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
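The ergonomic difference described in the two entries above can be shown with a self-contained sketch; the `Key`, `Value`, and `KeyValue` types below are simplified stand-ins for the api/core types, and `KeyString` mirrors the role of an api/key helper.

```go
package keysketch

// Key, Value, and KeyValue are simplified stand-ins for the api/core types.
type Key string

type Value struct{ S string }

type KeyValue struct {
	Key   Key
	Value Value
}

// String mirrors the verbose core-style construction: core.Key(name).String(value).
func (k Key) String(v string) KeyValue {
	return KeyValue{Key: k, Value: Value{S: v}}
}

// KeyString mirrors the api/key-style convenience: key.String(name, value).
func KeyString(name, value string) KeyValue {
	return Key(name).String(value)
}

var (
	verbose = Key("http.method").String("GET") // core-style construction
	concise = KeyString("http.method", "GET")  // key-style construction
)
```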
This PR adds histogram support to the prometheus exporter.
- Adds a new aggregator selector that returns a histogram for `MeasureKind`.
The selector can be constructed using `simple.NewWithHistogramMeasure`.
- Adds support for histogram aggregators in the prometheus collect method.
With this PR, the default selector is changed to use histograms.
In order to support the prometheus histogram, the `aggregator.Histogram`
interface is extended with the `Sum` method (a sketch of the idea follows
below).
Fixes #487
Co-authored-by: Rahul Patel <rahulpa@google.com>
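A Prometheus histogram needs cumulative bucket counts plus the total count and sum of observations, which is why a `Sum` accessor was added to the histogram aggregator interface. The sketch below is a simplified, hypothetical aggregator showing that shape; it is not the exporter's actual code.

```go
package histsketch

import "sort"

// histogram is an illustrative aggregator exposing what a Prometheus
// histogram metric requires: per-bucket counts, a total count, and a sum.
type histogram struct {
	boundaries []float64 // sorted upper bounds
	counts     []uint64  // len(boundaries)+1 buckets
	count      uint64
	sum        float64
}

func newHistogram(boundaries []float64) *histogram {
	b := append([]float64(nil), boundaries...)
	sort.Float64s(b)
	return &histogram{boundaries: b, counts: make([]uint64, len(b)+1)}
}

// Update places v into the first bucket whose boundary is greater than v.
func (h *histogram) Update(v float64) {
	i := sort.Search(len(h.boundaries), func(i int) bool { return v < h.boundaries[i] })
	h.counts[i]++
	h.count++
	h.sum += v
}

// Sum is the accessor the Prometheus exporter needs in addition to the
// bucket counts and the total count.
func (h *histogram) Sum() float64 { return h.sum }
```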
* Remove LabelSet from api/metric
* SDK tests pass
* Restore benchmarks
* All tests pass
* Remove all mentions of LabelSet
* Test RecordBatch
* Batch test
* Improves benchmark (some)
* Move the benchmark to match HEAD
* Align labels for GOARCH=386
* Add alignment test
* Disable the stress test for GOARCH=386
* Fix bug
* Move atomic fields into their own file
* Add a TODO
* Comments
* Remove metric.Labels(...)
* FTB
Co-authored-by: Liz Fong-Jones <lizf@honeycomb.io>
Update the license header to the standard format for source files that were previously missed.
Add license header to new source files.
Add a Makefile check verifying that all `*.go` and `*.sh` files have a
copyright notice (or a comment noting they are auto-generated) within the
first few lines.
* Add skeleton uniqueness checker
* Fix the build w/ new code in place
* Add sync tests
* More test
* Implement global uniqueness checking
* Set the library name
* Ensure ordered global initialization
* Use proper require statement for errors
* Comment
* Apply feedback fixes
* Comment and rename from feedback
* Temporarily opt-out export.Labels from label encoding stuff
* Stop passing label encoding stuff to export.Labels
* Drop label encoding stuff from SDK
* Dogstatsd exporter does not need to implement the label exporter anymore
* more dogstatsd exporter fixes
* export labels get back to encoding stuff
in a lame way, but improvements are coming in following commits
* Get encoded labels through export.Labels
* make SDK to provide its own implementation of export.Labels
* drop dead code
* add noop label exporter
* make export simple labels immutable
* Move the default label encoder to export package
* Simplify the simple export labels a bit
* Reserve some label exporter IDs
* Document and shuffle the code a bit
* Prepare for bringing the iterator benchmark test back
We can install a callback in the Batcher's Process function - this is
the place where we can access the labels, and thus test the label
iterator (a sketch follows below).
* Bring back the iterator benchmarks
* Simplifications and docs
* Fix copyright to be consistent with the rest
* Fix typo
* Put reserved label encoder IDs into constants
We get fewer comments about magic numbers that way.
* Fix the label encoder as label exporter thinko
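The callback idea mentioned in the "Prepare for bringing the iterator benchmark test back" entry can be sketched as follows; `record` and `testBatcher` are hypothetical simplifications of the export types, shown only to illustrate where the labels become accessible during Process.

```go
package batchsketch

// record is a hypothetical stand-in for an export record carrying labels.
type record struct {
	labels []string
}

// testBatcher forwards each processed record to a user-supplied callback,
// which is where a test or benchmark can reach the labels and drive the
// label iterator.
type testBatcher struct {
	process func(record)
}

func (b *testBatcher) Process(r record) {
	if b.process != nil {
		b.process(r)
	}
}

// countLabels shows the pattern: the callback observes labels during Process.
func countLabels(records []record) int {
	total := 0
	b := &testBatcher{process: func(r record) { total += len(r.labels) }}
	for _, r := range records {
		b.Process(r)
	}
	return total
}
```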
* Update License header for all source files
- Add Apache 2.0 header to source files that did not have one.
- Update all existing headers dated 2019 to 2020
- Remove comma from License header to comply with the Apache 2.0
guidelines.
* Update Copyright notice
Use the standard Copyright notices outlined by the
[CNCF](https://github.com/cncf/foundation/blob/master/copyright-notices.md#copyright-notices)
The `checkpoint` function is executed in a single thread, so we can do
the encoding lazily before passing the encoded version of the labels to
the exporter. This is a cheap and quick way to avoid encoding the
labels on every collection interval (a sketch of the idea follows below).
Co-authored-by: Rahul Patel <rahulpa@google.com>
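A minimal sketch of the lazy-encoding idea, with hypothetical names rather than the SDK's export types: the labels cache their encoded form, and the single-threaded checkpoint path is what makes the unsynchronized caching safe.

```go
package lazysketch

import (
	"sort"
	"strings"
)

// labels caches its encoded form so encoding happens at most once per
// checkpoint instead of on every collection interval.
type labels struct {
	kvs     map[string]string
	encoded string
	done    bool
}

// Encoded lazily computes a canonical "k=v,k=v" encoding. This is safe
// without locking only because the checkpoint path is single-threaded.
func (l *labels) Encoded() string {
	if l.done {
		return l.encoded
	}
	keys := make([]string, 0, len(l.kvs))
	for k := range l.kvs {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, k+"="+l.kvs[k])
	}
	l.encoded = strings.Join(parts, ",")
	l.done = true
	return l.encoded
}
```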
* Add support for Resources in the SDK
Add `Config` types for the push `Controller` and the `SDK`. Included
with this are helper functions to configure the `ErrorHandler` and
`Resource`.
Add a `Resource` to the Meter `Descriptor`. The choice to add the
`Resource` here (instead of, say, a `Record` or the `Instrument` itself)
was motivated by the definition of the `Descriptor` as the way to
uniquely describe a metric instrument.
Update the push `Controller` and default `SDK` to pass down their configured
`Resource` from instantiation to the metric instruments.
* Update New SDK constructor documentation
* Change NewDescriptor constructor to take opts
Add DescriptorConfig and DescriptorOption to configure the metric
Descriptor with the description, unit, keys, and resource.
Update all function calls to NewDescriptor to use new function
signature.
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
* Update and add copyright notices
* Update push controller creator func
Pass the configured ErrorHandler for the controller to the SDK.
* Update Resource integration with the SDK
Add back the Resource field to the Descriptor that was moved in the
last merge with master.
Add a resource.Provider interface.
Have the default SDK implement the new resource.Provider interface and
integrate the new interface into the newSync/newAsync workflows. Now, if
the SDK has a Resource defined, it will be passed to all Descriptors
created for the instruments it creates.
* Remove nil check for metric SDK config
* Fix and add test for API Options
Add an `Equal` method to the Resource so it can be compared with
github.com/google/go-cmp/cmp.
Add an additional API Option unit test to ensure WithResource
correctly sets a new resource.
* Move the resource.Provider interface to the API package
Move the interface to where it is used.
Fix spelling.
* Remove errant line
* Remove nil checks for the push controller config
* Fix check SDK implements Resourcer
* Apply suggestions from code review
Co-Authored-By: Rahul Patel <rghetia@yahoo.com>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
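The `Config` types described above follow Go's functional-options pattern. The sketch below illustrates that pattern with hypothetical `Config`, `Option`, `WithResource`, and `WithErrorHandler` names; it is not the SDK's exact API.

```go
package optsketch

// Resource and ErrorHandler are simplified stand-ins for the real types.
type Resource struct {
	attrs map[string]string
}

type ErrorHandler func(error)

// Config collects the options shared by the push Controller and the SDK.
type Config struct {
	Resource     *Resource
	ErrorHandler ErrorHandler
}

// Option applies a setting to a Config.
type Option func(*Config)

func WithResource(r *Resource) Option {
	return func(c *Config) { c.Resource = r }
}

func WithErrorHandler(h ErrorHandler) Option {
	return func(c *Config) { c.ErrorHandler = h }
}

// newConfig shows how a constructor folds the options together before the
// configured Resource is handed down to the instruments it creates.
func newConfig(opts ...Option) Config {
	var c Config
	for _, opt := range opts {
		opt(&c)
	}
	return c
}
```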
* Do not expose a slice of labels in export.Record
This is really an inconvenient implementation-detail leak - we may
want to store labels in a different way. Replace it with an iterator -
it does not force us to use a slice of key-values as the storage in
the long run.
* Add Len to LabelIterator
It may come in handy in situations where we don't have access to the
export.Labels object, but only to the label iterator.
* Use reflect value label iterator for the fixed labels
* add reset operation to iterator
Makes my life easier when writing a benchmark. Might also be an
alternative to cloning the iterator.
* Add benchmarks for iterators
* Add import comment
* Add clone operation to label iterator
* Move iterator tests to a separate package
* Add tests for cloning iterators
* Pass label iterator to export labels
* Use non-addressable array reflect values
By using the value created by `reflect.ValueOf()` rather than
`reflect.New()`, we get a non-addressable array in the value, which
does not incur an allocation cost when getting an element from the
array.
* Drop zero iterator
This can be substituted by a reflect value iterator that goes over a
value with a zero-sized array.
* Add a simple iterator that implements label iterator
In the long run this will completely replace the LabelIterator
interface.
* Replace reflect value iterator with simple iterator
* Pass label storage to new export labels, not label iterator
* Drop label iterator interface, rename storage iterator to label iterator
* Drop clone operation from iterator
It's a leftover from interface times and now it's pointless - the
iterator is a simple struct, so cloning it is a simple copy.
* Drop Reset from label iterator
Reset existed solely for benchmarking convenience. Now we can just
copy the iterator cheaply, so there is no longer a need for Reset.
* Drop noop iterator tests
* Move back iterator tests to export package
* Eagerly get the reflect value of ordered labels
So we won't get into problems when several goroutines want to iterate
the same labels at the same time. Not sure if this would be a big
deal, since every goroutine would compute the same reflect.Value, but
a concurrent write to the same memory is bad regardless. And it doesn't
cost us any extra allocations.
* Replace NewSliceLabelIterator() with a method of LabelSlice
* Add some documentation
* Documentation fixes
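The end state of this series is a plain struct iterator over the label storage: cheap to copy (which is the clone), exposing `Len`, with no interface or Reset. A self-contained sketch of that shape, using illustrative names rather than the exact export API:

```go
package itersketch

// KeyValue is a simplified stand-in for a label key-value pair.
type KeyValue struct {
	Key   string
	Value string
}

// Iterator walks a label slice. Because it is a small struct, copying it is
// the clone operation; no interface or Reset method is needed.
type Iterator struct {
	storage []KeyValue
	idx     int
}

func NewIterator(storage []KeyValue) Iterator {
	return Iterator{storage: storage, idx: -1}
}

// Next advances the iterator and reports whether a label is available.
func (i *Iterator) Next() bool {
	i.idx++
	return i.idx < len(i.storage)
}

// Label returns the label at the current position.
func (i *Iterator) Label() KeyValue {
	return i.storage[i.idx]
}

// Len reports the number of labels, useful even without an export.Labels value.
func (i *Iterator) Len() int {
	return len(i.storage)
}
```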
* Create MeterImpl interface
* Checkpoint w/ sdk.go building
* Checkpoint working on global
* api/global builds (test fails)
* Test fix
* All tests pass
* Comments
* Add two tests
* Comments and uncomment tests
* Precommit part 1
* Still working on tests
* Lint
* Add a test and a TODO
* Cleanup
* Lint
* Interface()->Implementation()
* Apply some feedback
* From feedback
* (A)Synchronous -> (A)Sync
* Add a missing comment
* Apply suggestions from code review
Co-Authored-By: Krzesimir Nowak <qdlacz@gmail.com>
* Rename a variable
Co-authored-by: Krzesimir Nowak <qdlacz@gmail.com>
Update copyright date to when file was created (2020)
Create random numbers between 0 and 100 to more evenly match the
buckets defined in the histogram test (25, 50, 75)
* Update api for Must constructors, with SDK helpers
* Update for Must constructors, leaving TODOs about global errors
* Add tests
* Move Must methods into metric.Must
* Apply the feedback
* Remove interfaces
* Remove more interfaces
* Again...
* Remove a sentence about a dead interface
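The `Must` helpers described above wrap fallible instrument constructors and panic on error, which keeps package-level instrument declarations terse. The sketch below shows the general shape with hypothetical `Meter` and `Counter` types, not the actual api/metric signatures.

```go
package mustsketch

import "fmt"

// Counter and Meter are simplified stand-ins for the metric API types.
type Counter struct {
	name string
}

type Meter struct{}

func (Meter) NewInt64Counter(name string) (Counter, error) {
	if name == "" {
		return Counter{}, fmt.Errorf("invalid instrument name")
	}
	return Counter{name: name}, nil
}

// MustMeter wraps a Meter so constructor errors panic instead of being returned.
type MustMeter struct {
	meter Meter
}

func Must(m Meter) MustMeter { return MustMeter{meter: m} }

func (m MustMeter) NewInt64Counter(name string) Counter {
	c, err := m.meter.NewInt64Counter(name)
	if err != nil {
		panic(err)
	}
	return c
}

// Typical use: declare instruments at package scope without error plumbing.
var requests = Must(Meter{}).NewInt64Counter("requests")
```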
* change the histogram aggregator to have a consistent but blocking Checkpoint()
* docs
* wrapping docs
* remove currentIdx from the 8-byte alignment check
* stress test
* add export and move the lock-free write algorithm to an external struct.
* move state locker to another package.
* add todos
* minimal tests
* renaming and docs
* change to context.Background()
* add link to algorithm and grammars
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
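A consistent but blocking Checkpoint() can be built by taking the same lock as Update and swapping the accumulated state out wholesale, so the exporter never observes a half-updated snapshot. The following is a hedged sketch of that idea, not the repository's state-locker algorithm.

```go
package ckptsketch

import "sync"

// blockingAgg accumulates under a mutex; Checkpoint swaps the state out
// consistently with respect to Update, at the cost of briefly blocking writers.
type blockingAgg struct {
	lock    sync.Mutex
	current []int64 // live bucket counts
	ckpt    []int64 // last checkpointed counts
}

func newBlockingAgg(buckets int) *blockingAgg {
	return &blockingAgg{
		current: make([]int64, buckets),
		ckpt:    make([]int64, buckets),
	}
}

func (a *blockingAgg) Update(bucket int) {
	a.lock.Lock()
	a.current[bucket]++
	a.lock.Unlock()
}

// Checkpoint moves the accumulated counts into the checkpoint slice and
// resets the live state, all under the same lock that Update takes.
func (a *blockingAgg) Checkpoint() {
	a.lock.Lock()
	a.current, a.ckpt = a.ckpt, a.current
	for i := range a.current {
		a.current[i] = 0
	}
	a.lock.Unlock()
}
```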
* Use an array key to label encoding in the SDK
* Comment
* Precommit
* Comment
* Comment
* Feedback from krnowak
* Do not overwrite the Key
* Add the value test requested
* Add a comment
* drop gauge instrument
* Restore the benchmark and stress test for lastvalue aggregator, but remove monotonic last-value support
* Rename gauge->lastvalue and remove remaining uses of the word 'gauge'
Co-authored-by: Krzesimir Nowak <krzesimir@kinvolk.io>
* wip: observers
* wip: float observers
* fix copy pasta
* wip: rework observers in sdk
* small fix in global meter
* wip: aggregators and selectors
* wip: monotonicity option for observers
* some refactor
* wip: docs
needs more package docs (especially for api/metric and sdk/metric)
* fix ci
* Fix copy-pasta in docs
Co-Authored-By: Mauricio Vásquez <mauricio@kinvolk.io>
* recycle unused recorders in observers
if a recorder for a labelset is unused for a second collection cycle
in a row, drop it
* unregister
* thread-safe set callback
* Fix docs
* Revert "wip: aggregators and selectors"
This reverts commit 37b7d05aed5dc90f6d5593325b6eb77494e21736.
* update selector
* tests
* Rework number equality
Compare concrete numbers, so we can get actual numbers in the error
message when they are not equal, not some uint64 representation. This
also uses InDelta for comparing floats.
* Ensure that Observers are registered in the same order
* Run observers in fixed order
So the tests can be reproducible - iterating a map made the order of
measurements random.
* Ensure the proper alignment of the delegates
This wasn't checked at all. After adding the checks, the test-386
failed.
* Small tweaks to the global meter test
* Ensure proper alignment of the callback pointer
test-386 was complaining about it
* update docs
* update a TODO
* address review issues
* drop SetCallback
Co-authored-by: Mauricio Vásquez <mauricio@kinvolk.io>
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
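The recycling rule mentioned above, dropping a recorder whose label set went unused for two collection cycles in a row, can be sketched with an epoch counter per recorder; the names below are illustrative, not the SDK's observer implementation.

```go
package recyclesketch

// recorder tracks the collection epoch in which it was last observed.
type recorder struct {
	value     int64
	lastEpoch int64
}

// observerState holds one recorder per encoded label set.
type observerState struct {
	epoch     int64
	recorders map[string]*recorder
}

func newObserverState() *observerState {
	return &observerState{recorders: map[string]*recorder{}}
}

// observe records a value for a label set during the current cycle.
func (s *observerState) observe(encodedLabels string, v int64) {
	r, ok := s.recorders[encodedLabels]
	if !ok {
		r = &recorder{}
		s.recorders[encodedLabels] = r
	}
	r.value = v
	r.lastEpoch = s.epoch
}

// collect drops recorders that went unused for two consecutive collection
// cycles and then advances the epoch.
func (s *observerState) collect() {
	for k, r := range s.recorders {
		if s.epoch-r.lastEpoch >= 2 {
			delete(s.recorders, k)
		}
	}
	s.epoch++
}
```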
The `go.opentelemetry.io/otel/exporter/trace/jaeger` package was
mistakenly released with a `v1.0.0` tag instead of `v0.1.0`. This
resulted in all subsequent releases not becoming the default latest,
meaning that `go get` pulled in the incompatible `v0.1.0` release of
that package when pulling in more recent packages from other otel
packages. Renaming the `exporter` directory to `exporters` fixes this
issue by consequently renaming the package.
Additionally, this action renames *all* exporters. This is
understood to be a disruptive action to existing users as they will need
to update any dependencies they currently have on our exporters.
However, it was decided to take this action regardless. The need to
resolve the existing issue explained above is highly important, and
given the Alpha state of this project these kinds of breaking changes
should be expected (though not without reason).
Resolves #331
Co-authored-by: Rahul Patel <rghetia@yahoo.com>
* Refactor metric records logic.
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Fix lint errors
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Fix a bug where we tried to re-add the old entry instead of a new one.
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Update comments in refcount_mapped.
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Remove the need to use a records list, iterate over the map.
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Fix comments and typos
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Fix more comments
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Clarify tryUnmap comment
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
* Fix one more typo.
Signed-off-by: Bogdan Cristian Drutu <bogdandrutu@gmail.com>
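The refcount_mapped helper coordinates the instrument path, which acquires records, with the collector, which deletes idle map entries. A simplified, hypothetical variant using a plain atomic reference count is sketched below; the actual SDK packs an extra "unmapped" bit into the counter, so treat this only as the general idea.

```go
package refsketch

import "sync/atomic"

// refcount guards whether a map entry may still be handed out to callers.
// A negative value marks the entry as unmapped (about to be deleted).
type refcount struct {
	value int64
}

// ref tries to acquire a reference; it fails once the entry is unmapped.
func (r *refcount) ref() bool {
	for {
		v := atomic.LoadInt64(&r.value)
		if v < 0 {
			return false
		}
		if atomic.CompareAndSwapInt64(&r.value, v, v+1) {
			return true
		}
	}
}

// unref releases a previously acquired reference.
func (r *refcount) unref() {
	atomic.AddInt64(&r.value, -1)
}

// tryUnmap succeeds only when no references are held, preventing new
// references and making it safe to delete the entry from the map.
func (r *refcount) tryUnmap() bool {
	return atomic.CompareAndSwapInt64(&r.value, 0, -1)
}
```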
* histogram aggregator draft
* add tests for buckets
* naming stuffs
* docs
* add tests for buckets
* fix doc
* update year
* adds docs for Histogram
* docs for boundaries.
* addresses review comments
Change to less-than buckets. Add offset checks. Unexport fields that don't need to be exported. Fix tests when running on a profile with the int64 number kind.
* sort boundaries
* remove testing field
* fixes import order
* remove print 🙈
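With sorted boundaries and less-than semantics, finding a value's bucket is a binary search for the first boundary strictly greater than the value. A small, illustrative sketch of that lookup:

```go
package bucketsketch

import "sort"

// bucketIndex returns the "less-than" bucket for v given sorted boundaries:
// bucket i holds values with boundaries[i-1] <= v < boundaries[i], and the
// last bucket (index len(boundaries)) holds everything at or above the top.
func bucketIndex(boundaries []float64, v float64) int {
	return sort.Search(len(boundaries), func(i int) bool { return v < boundaries[i] })
}

// Example: boundaries {25, 50, 75} split values into four buckets:
// [<25), [25,50), [50,75), [>=75); a value of 50 lands in bucket 2.
var example = bucketIndex([]float64{25, 50, 75}, 50)
```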
Rename the package from "export" to "metric". Note that all the existing
imports of this package use an explicit name of `export` and, therefore,
no import declaration changes are included.
Rename the `MetricKind` to `Kind` to not stutter in the type usage. Note
this does not include a method name change for the `Descriptor` method
`MetricKind`.
* Add comments on needed field alignment
Add comments about alignment requirements to all struct fields whose
values are passed to 64-bit atomic operations.
Update any struct's field ordering if one or more of its fields has
alignment requirements to support 64-bit atomic operations.
* Add 64-bit alignment tests
Most `struct`s that have field alignment requirements are now statically
validated prior to testing. The only `struct`s with these requirements
that are not validated are ones defined in tests themselves, where
multiple `TestMain` functions would be needed to test them. Given the
fields are already identified with comments specifying the alignment
requirements and they live in the tests themselves, this seems like an
acceptable omission (a sketch of such a check follows below).
Co-authored-by: Liz Fong-Jones <elizabeth@ctyalcove.org>
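An alignment check of the kind described above can live in a `TestMain` (in a `_test.go` file) and use `unsafe.Offsetof` to fail fast when a 64-bit atomic field is not on an 8-byte offset. A hedged sketch with hypothetical struct and field names:

```go
package alignsketch

import (
	"fmt"
	"os"
	"testing"
	"unsafe"
)

// counters is a hypothetical struct whose int64 fields are used with
// sync/atomic and therefore must sit at 8-byte-aligned offsets, which is not
// guaranteed automatically on 32-bit architectures such as GOARCH=386.
type counters struct {
	sum   int64 // used atomically; keep 64-bit fields first
	count int64 // used atomically
	name  string
}

func TestMain(m *testing.M) {
	var c counters
	offsets := map[string]uintptr{
		"counters.sum":   unsafe.Offsetof(c.sum),
		"counters.count": unsafe.Offsetof(c.count),
	}
	for field, offset := range offsets {
		if offset%8 != 0 {
			fmt.Fprintf(os.Stderr, "%s is not 8-byte aligned (offset %d)\n", field, offset)
			os.Exit(1)
		}
	}
	os.Exit(m.Run())
}
```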
* Initial skeleton
* Revert noop provider removal
* Checkpoint
* Checkpoint
* Implement Bound instrument and LabelSet
* Add test
* Add a benchmark
* Add a release test
* Document LabelSetDelegator
* Lint and comments
* Add a second Meter test; fix typo; add a panic
* Add a test for the builtin SDK
* Address feedback
* draft: stop aggregating on the prometheus client (counters)
* remove prometheus client aggregations
Measures are being exported as summaries since histograms don't
exist in OpenTelemetry yet.
Better error handling must be done.
* make pre commit
* add simple error callback
* remove options from collector
* refactor exporter to smaller methods
* wording
* change to snapshot
* lock collection and checkpointset read
* remove histogram options and unexported fields from the Exporter
* documenting why prometheus uses a stateful batcher
* add todo for histograms
* change summaries objects to summary quantiles
* remove histogram buckets from tests
* wording
* rename 'lockedCheckpoint' to 'syncCheckpointSet'
* default summary quantiles should default to no buckets.
* add quantiles options
* refactor test.CheckpointSet and add docs
* flip aggregators merge
Co-authored-by: Joshua MacDonald <jmacd@users.noreply.github.com>
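One of the items above locks the collection against checkpoint-set reads so that an HTTP scrape never observes a half-written snapshot. A hedged sketch of that synchronization with hypothetical types, not the exporter's code:

```go
package promsketch

import "sync"

// checkpointSet is an illustrative snapshot of aggregated records.
type checkpointSet struct {
	records map[string]float64
}

// exporter guards the snapshot with a mutex shared by the collection path
// (which replaces it) and the scrape path (which reads it).
type exporter struct {
	lock     sync.Mutex
	snapshot *checkpointSet
}

// export is called on each collection interval with a fresh checkpoint set.
func (e *exporter) export(cs *checkpointSet) {
	e.lock.Lock()
	e.snapshot = cs
	e.lock.Unlock()
}

// collect is called by the scrape handler; it reads under the same lock.
func (e *exporter) collect(fn func(name string, value float64)) {
	e.lock.Lock()
	defer e.lock.Unlock()
	if e.snapshot == nil {
		return
	}
	for name, value := range e.snapshot.records {
		fn(name, value)
	}
}
```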
Change all occurrences of value receivers to pointer receivers
Add meta sys files to .gitignore
Code cleanup, e.g.:
- Don't capitalize error statements
- Fix ignored errors
- Fix ambiguous variable naming
- Remove unnecessary type casting
- Use named params
Fix #306
* Add Min() interface - rename MaxSumCount to MinMaxSumCount
Fixes https://github.com/open-telemetry/opentelemetry-go/issues/319
* update stdout exporter to collect and output the minimum value
* update min and max atomically in Aggregator Update
* changed all references to maxsumcount to minmaxsumcount
* Address PR comments
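Updating a minimum and maximum atomically is typically done with a compare-and-swap loop over the stored bits. A self-contained sketch for float64 values, illustrative rather than the aggregator's exact code (the caller is assumed to seed the bits with -Inf/+Inf as appropriate):

```go
package atomicsketch

import (
	"math"
	"sync/atomic"
)

// updateMax atomically raises *bits (a float64 stored as uint64 bits) to v,
// retrying if another goroutine updates it concurrently.
func updateMax(bits *uint64, v float64) {
	for {
		old := atomic.LoadUint64(bits)
		if v <= math.Float64frombits(old) {
			return // current max is already >= v
		}
		if atomic.CompareAndSwapUint64(bits, old, math.Float64bits(v)) {
			return
		}
	}
}

// updateMin is the symmetric operation for the minimum.
func updateMin(bits *uint64, v float64) {
	for {
		old := atomic.LoadUint64(bits)
		if v >= math.Float64frombits(old) {
			return
		}
		if atomic.CompareAndSwapUint64(bits, old, math.Float64bits(v)) {
			return
		}
	}
}
```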
* Prom exporter structure
* update prometheus exporter with master and add example.
* remove distributedcontext from prometheus example
* docs and interface checker
* make precommit
* make precommit & remove "OnRegisterError"
* coerce values to float
* return register errors and maybe fix precommit?
* add option to specify a prometheus.Registry
* make exporter implement http.Handler interface
* fix map keys bugs
* remove unused const
* fix modules dependencies.
* add support for histogram
* get metrics with label values only instead of a labels map
* make exporter implement the label encoder interface
* encode labels if the encoder is different.
* split metrics into several files and encapsulate them in structs
* make pre commit
* unexport 'sanitize'
* remove 'AllValues' in favor of 'Points' and change to 'NewDefaultLabelEncoder'
* add prometheus tests
* remove newlines on struct declaration
* formatting
* rewording
* imports
* add todo on labelValues
* blame myself for todo (:
* add todos on sanitize
* add support for summaries; remove the custom label encoder.
* imports
* imports
* update with upstream
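Implementing `http.Handler` means the exporter can be mounted directly on any mux, and accepting a caller-supplied `prometheus.Registry` keeps registration under the user's control. The sketch below wires those two pieces together using the standard client_golang library; the surrounding program and metric names are hypothetical.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A caller-supplied registry, as the registry option above allows.
	registry := prometheus.NewRegistry()

	// Stand-in collector; the real exporter registers itself instead.
	scrapes := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_scrapes_total",
		Help: "Number of scrapes served by this example.",
	})
	registry.MustRegister(scrapes)

	// Serving through an http.Handler is what lets the exporter be mounted
	// on any mux alongside the rest of an application.
	http.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":2222", nil))
}
```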
* Add tests for nonabsolute and varying sign values
* Implement support for NonAbsolute Measurement MaxSumCount
Previously, the MaxSumCount aggregator failed to work correctly with
negative numbers (e.g. MeasureKind Alternate()==true).
* Pass NumberKind to MaxSumCount New() function
Allows it to set the initial state (current.max) to the correct value
based on the NumberKind.
* Revert extraneous local change
* Pass full descriptor to msc New()
This is analogous to the DDSketch New() constructor
* Remember to run make precommit first
* Add tests for empty checkpoint of MaxSumCount aggregator
An empty checkpoint should have Sum() == 0, Count() == 0 and Max()
still equal to the numberKind.Minimum()
* Return ErrEmptyDataSet if no value set by the aggregator
Remove TODO from stdout exporter to ensure that if a maxsumcount or
ddsketch aggregator returns ErrEmptyDataSet from Max(), then the
entire record will be skipped by the exporter.
Added tests to ensure the exporter doesn't send any updates for
EmptyDataSet checkpoints - for both ddsketch and maxsumcount.
* Re-layout the Aggregator struct to ensure int64s are 8-byte aligned
On 32-bit architectures, Go only guarantees that primitive
values are aligned to a 4-byte boundary. 64-bit atomic operations on
32-bit machines require 8-byte alignment.
See https://github.com/golang/go/issues/599
* Addressing PR comments
Using Minimum() as the default uninitialized maximum value means that,
in the unlikely case that every recorded value for a measure equals
NumberKind.Minimum(), the aggregator's Max() will return ErrEmptyDataSet
(see the sketch below).
* Fix PR merge issue
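The trade-off described above, seeding the maximum with the number kind's minimum and treating that value as empty, can be sketched as follows; the names are hypothetical and the aggregator is simplified to int64.

```go
package mscsketch

import "errors"

var errEmptyDataSet = errors.New("empty data set")

const minInt64 int64 = -1 << 63

// maxSumCount seeds max with the smallest representable value so that
// negative measurements (non-absolute measures) still produce a correct max.
type maxSumCount struct {
	max   int64
	sum   int64
	count int64
}

func newMaxSumCount() *maxSumCount {
	return &maxSumCount{max: minInt64}
}

func (a *maxSumCount) update(v int64) {
	if v > a.max {
		a.max = v
	}
	a.sum += v
	a.count++
}

// Max reports errEmptyDataSet when nothing was recorded. As noted in the
// entry above, the check also triggers in the unlikely case that every
// recorded value equals the minimum.
func (a *maxSumCount) Max() (int64, error) {
	if a.max == minInt64 {
		return 0, errEmptyDataSet
	}
	return a.max, nil
}
```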
* Add MetricAggregator.Merge() implementations
* Update from feedback
* Type
* Ckpt
* Ckpt
* Add push controller
* Ckpt
* Add aggregator interfaces, stdout encoder
* Modify basic main.go
* Main is working
* Batch stdout output
* Sum update
* Rename stdout
* Add stateless/stateful Batcher options
* Undo a for-loop in the example, remove a done TODO
* Update imports
* Add note
* Rename defaultkeys
* Support variable label encoder to speed OpenMetrics/Statsd export
* Lint
* Checkpoint
* Checkpoint
* Doc
* Precommit/lint
* Simplify Aggregator API
* Record->Identifier
* Remove export.Record a.k.a. Identifier
* Checkpoint
* Propagate errors to the SDK, remove a bunch of 'TODO warn'
* Checkpoint
* Introduce export.Labels
* Comments in export/metric.go
* Comment
* More merge
* More doc
* Complete example
* Lint fixes
* Add a testable example
* Lint
* Dogstats
* Let Export return an error
* Checkpoint
* add a basic stdout exporter test
* Add measure test; fix aggregator APIs
* Use JSON numbers, not strings
* Test stdout exporter error
* Add a test for the call to RangeTest
* Add error handler API to improve correctness test; return errors from RecordOne
* Undo the previous -- do not expose errors
* Add simple selector variations, test
* Repair examples
* Test push controller error handling
* Add SDK label encoder tests
* Add a defaultkeys batcher test
* Add an ungrouped batcher test
* Lint new tests
* Respond to krnowak's feedback
* Checkpoint
* Functional example using unixgram
* Tidy the example
* Add a packet-split test
* More tests
* Undo comment
* Use concrete receivers for export records and labels, since the constructors return structs not pointers
* Bug fix for stateful batchers; clone an aggregator for long term storage
* Remove TODO addressed in #318
* Add errors to all aggregator interfaces
* Handle ErrNoLastValue case in stdout exporter
* Move aggregator API into sdk/export/metric/aggregator
* Update all aggregator exported-method comments
* Document the aggregator APIs
* More aggregator comments
* Add multiple updates to the ungrouped test
* Fixes for feedback from Gustavo and Liz
* Producer->CheckpointSet; add FinishedCollection
* Process takes an export.Record
* ReadCheckpoint->CheckpointSet
* EncodeLabels->Encode
* Format a better inconsistent type error; add more aggregator API tests
* More RangeTest test coverage
* Make benbjohnson/clock a test-only dependency
* Handle ErrNoLastValue in stress_test
* Update comments; use a pipe vs a unix socket in the example test
* Update test
* Spelling
* Typo fix
* Rename DefaultLabelEncoder to NewDefaultLabelEncoder for clarity
* Rename DefaultLabelEncoder to NewDefaultLabelEncoder for clarity
* Test different adapters; add ForceEncode to statsd label encoder
* Add MetricAggregator.Merge() implementations
* Update from feedback
* Type
* Ckpt
* Ckpt
* Add push controller
* Ckpt
* Add aggregator interfaces, stdout encoder
* Modify basic main.go
* Main is working
* Batch stdout output
* Sum update
* Rename stdout
* Add stateless/stateful Batcher options
* Undo a for-loop in the example, remove a done TODO
* Update imports
* Add note
* Rename defaultkeys
* Support variable label encoder to speed OpenMetrics/Statsd export
* Lint
* Doc
* Precommit/lint
* Simplify Aggregator API
* Record->Identifier
* Remove export.Record a.k.a. Identifier
* Checkpoint
* Propagate errors to the SDK, remove a bunch of 'TODO warn'
* Checkpoint
* Introduce export.Labels
* Comments in export/metric.go
* Comment
* More merge
* More doc
* Complete example
* Lint fixes
* Add a testable example
* Lint
* Let Export return an error
* add a basic stdout exporter test
* Add measure test; fix aggregator APIs
* Use JSON numbers, not strings
* Test stdout exporter error
* Add a test for the call to RangeTest
* Add error handler API to improve correctness test; return errors from RecordOne
* Undo the previous -- do not expose errors
* Add simple selector variations, test
* Repair examples
* Test push controller error handling
* Add SDK label encoder tests
* Add a defaultkeys batcher test
* Add an ungrouped batcher test
* Lint new tests
* Respond to krnowak's feedback
* Undo comment
* Use concrete receivers for export records and labels, since the constructors return structs not pointers
* Bug fix for stateful batchers; clone an aggregator for long term storage
* Remove TODO addressed in #318
* Add errors to all aggregator interfaces
* Handle ErrNoLastValue case in stdout exporter
* Move aggregator API into sdk/export/metric/aggregator
* Update all aggregator exported-method comments
* Document the aggregator APIs
* More aggregator comments
* Add multiple updates to the ungrouped test
* Fixes for feedback from Gustavo and Liz
* Producer->CheckpointSet; add FinishedCollection
* Process takes an export.Record
* ReadCheckpoint->CheckpointSet
* EncodeLabels->Encode
* Format a better inconsistent type error; add more aggregator API tests
* More RangeTest test coverage
* Make benbjohnson/clock a test-only dependency
* Handle ErrNoLastValue in stress_test
* Array aggregator part 1
* Improve median testing
* More testing
* More testing
* Update other dist tests
* Add to the benchmark
* Move errors into aggregator package, use from ddsketch; update Max/Min/Quantile to return errors for array/ddsketch/maxsumcount
* Lint
* Test non-absolute ddsketch
* Lint
* Comment
* Add note
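An array aggregator keeps the raw values and answers quantile queries from a sorted copy, returning errors for empty data sets or out-of-range quantiles in the spirit of the error handling added above. A simplified sketch, not the SDK's exact implementation:

```go
package arraysketch

import (
	"errors"
	"sort"
)

var (
	errEmptyDataSet    = errors.New("empty data set")
	errInvalidQuantile = errors.New("quantile out of range [0, 1]")
)

// arrayAggregator stores every value so it can report exact quantiles.
type arrayAggregator struct {
	points []float64
}

func (a *arrayAggregator) Update(v float64) {
	a.points = append(a.points, v)
}

// Quantile sorts a copy of the points and returns the requested quantile,
// propagating errors instead of panicking.
func (a *arrayAggregator) Quantile(q float64) (float64, error) {
	if q < 0 || q > 1 {
		return 0, errInvalidQuantile
	}
	if len(a.points) == 0 {
		return 0, errEmptyDataSet
	}
	sorted := append([]float64(nil), a.points...)
	sort.Float64s(sorted)
	idx := int(q * float64(len(sorted)-1))
	return sorted[idx], nil
}
```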