Split connection management away from exporter (#1369)
* Split protocol handling away from exporter
This commit adds a ProtocolDriver interface, which the exporter
uses to connect to the collector and send both metrics and traces
to it. That way, the Exporter type is free from dealing with any
connection/protocol details; this business is taken over by the
implementations of the ProtocolDriver interface.
The gRPC code from the exporter is moved into the implementation of
ProtocolDriver. Currently it only maintains a single connection,
just as the Exporter used to do.
With the split, most of the Exporter options actually became gRPC
connection manager options. Currently the only option that remains
on the Exporter is the one setting the export kind selector.
* Update changelog
* Increase the test coverage of the gRPC driver
* Do not close a channel with multiple senders
The disconnected channel can be used for sending by multiple
goroutines (for example, by the metric controller and the span
processor), so this channel should not be closed at all. Dropping this
line closes a race between closing the channel and sending to it.
* Simplify the new-connection handler
The callbacks never return an error, so drop the return type.
* Access clients under a lock
The client may change as a result of reconnection in the background,
so guard against racy access.
* Simplify the gRPC driver a bit
The config type was exported earlier to have a consistent way of
configuring the driver once the multiple-connection driver also
appeared. Since we are not going to add a multiple-connection driver,
pass the options directly to the driver constructor. Also shorten the
name of the constructor to `NewGRPCDriver`.
* Merge common gRPC code back into the driver
The common code was supposed to be shared between the single-connection
driver and the multiple-connection driver, but since the latter won't
be happening, it makes no sense to keep the not-so-common code in a
separate file. Also drop some abstraction.
* Rename the file with the gRPC driver implementation
* Update changelog
* Sleep for a second to trigger the timeout
Sometimes CI has its better moments, so it's blazing fast and manages
to finish shutting the exporter down within the 1 microsecond timeout.
* Increase the timeout for shutting down the exporter
One millisecond is quite short, and I was getting failures locally or
in CI:
go test ./... + race in ./exporters/otlp
2020/12/14 18:27:54 rpc error: code = Canceled desc = context canceled
2020/12/14 18:27:54 context deadline exceeded
--- FAIL: TestNewExporter_withMultipleAttributeTypes (0.37s)
    otlp_integration_test.go:541: resource span count: got 0, want 1
FAIL
FAIL go.opentelemetry.io/otel/exporters/otlp 5.278s
or
go test ./... + coverage in ./exporters/otlp
2020/12/14 17:41:16 rpc error: code = Canceled desc = context canceled
2020/12/14 17:41:16 exporter disconnected
--- FAIL: TestNewExporter_endToEnd (1.53s)
    --- FAIL: TestNewExporter_endToEnd/WithCompressor (0.41s)
        otlp_integration_test.go:246: span counts: got 3, want 4
2020/12/14 17:41:18 context canceled
FAIL
coverage: 35.3% of statements in ./...
FAIL go.opentelemetry.io/otel/exporters/otlp 4.753s
* Shut down the providers in the end-to-end test
This makes sure that all batched spans are actually flushed
before closing the exporter.
2020-12-21 22:49:45 +02:00
// Copyright The OpenTelemetry Authors
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package otlp // import "go.opentelemetry.io/otel/exporters/otlp"

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)
const (
	// DefaultGRPCServiceConfig is the gRPC service config used if none is
	// provided by the user.
	//
	// For more info on gRPC service configs:
	// https://github.com/grpc/proposal/blob/master/A6-client-retries.md
	//
	// For more info on the RetryableStatusCodes we allow here:
	// https://github.com/open-telemetry/oteps/blob/be2a3fcbaa417ebbf5845cd485d34fdf0ab4a2a4/text/0035-opentelemetry-protocol.md#export-response
	//
	// Note: MaxAttempts > 5 are treated as 5. See
	// https://github.com/grpc/proposal/blob/master/A6-client-retries.md#validation-of-retrypolicy
	// for more details.
	DefaultGRPCServiceConfig = `{
	"methodConfig":[{
		"name":[
			{ "service":"opentelemetry.proto.collector.metrics.v1.MetricsService" },
			{ "service":"opentelemetry.proto.collector.trace.v1.TraceService" }
		],
		"retryPolicy":{
			"MaxAttempts":5,
			"InitialBackoff":"0.3s",
			"MaxBackoff":"5s",
			"BackoffMultiplier":2,
			"RetryableStatusCodes":[
				"UNAVAILABLE",
				"CANCELLED",
				"DEADLINE_EXCEEDED",
				"RESOURCE_EXHAUSTED",
				"ABORTED",
				"OUT_OF_RANGE",
				"DATA_LOSS"
			]
		}
	}]
}`
)

type grpcConnectionConfig struct {
	canDialInsecure    bool
	collectorEndpoint  string
	compressor         string
	reconnectionPeriod time.Duration
	grpcServiceConfig  string
	grpcDialOptions    []grpc.DialOption
	headers            map[string]string
	clientCredentials  credentials.TransportCredentials
}

// GRPCConnectionOption applies a configuration option to the gRPC
// connection used by the exporter.
type GRPCConnectionOption func(cfg *grpcConnectionConfig)

// WithInsecure disables client transport security for the exporter's gRPC connection,
// just like grpc.WithInsecure() (https://pkg.go.dev/google.golang.org/grpc#WithInsecure)
// does. Note that by default client security is required unless WithInsecure is used.
func WithInsecure() GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.canDialInsecure = true
	}
}

// WithEndpoint allows one to set the endpoint that the exporter will
// connect to the collector on. If unset, it will instead try to
// connect to DefaultCollectorHost:DefaultCollectorPort.
func WithEndpoint(endpoint string) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.collectorEndpoint = endpoint
	}
}

// WithReconnectionPeriod allows one to set the delay between connection
// attempts after failing to connect with the collector.
func WithReconnectionPeriod(rp time.Duration) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.reconnectionPeriod = rp
	}
}

// WithCompressor sets the compressor for the gRPC client to use when sending
// requests. It is the responsibility of the caller to ensure that the
// compressor set has been registered with google.golang.org/grpc/encoding.
// This can be done by encoding.RegisterCompressor. Some compressors
// auto-register on import, such as gzip, which can be registered by calling
// `import _ "google.golang.org/grpc/encoding/gzip"`.
func WithCompressor(compressor string) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.compressor = compressor
	}
}

// WithHeaders will send the provided headers with gRPC requests.
func WithHeaders(headers map[string]string) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.headers = headers
	}
}

// WithTLSCredentials allows the connection to use TLS credentials when
// talking to the server. It takes in grpc.TransportCredentials instead of,
// say, a certificate file or a tls.Certificate, because retrieving these
// credentials can be done in many ways, e.g. from a plain file, from an
// in-code tls.Config, or by certificate rotation, so it is up to the caller
// to decide what to use.
func WithTLSCredentials(creds credentials.TransportCredentials) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.clientCredentials = creds
	}
}

// WithGRPCServiceConfig defines the default gRPC service config used.
func WithGRPCServiceConfig(serviceConfig string) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.grpcServiceConfig = serviceConfig
	}
}

// WithGRPCDialOption opens support to any grpc.DialOption to be used. If it
// conflicts with some other configured gRPC setting, the options set here
// take precedence since they are applied last.
func WithGRPCDialOption(opts ...grpc.DialOption) GRPCConnectionOption {
	return func(cfg *grpcConnectionConfig) {
		cfg.grpcDialOptions = opts
	}
}
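All of the options above follow the same functional-options pattern: each returns a closure that mutates the config, and a constructor applies them in order, so a later option wins on conflict. A self-contained sketch of that mechanic (with a simplified config struct and a made-up default endpoint, not the real grpcConnectionConfig or DefaultCollectorPort):

```go
package main

import (
	"fmt"
	"time"
)

// cfgDemo mirrors a few fields of grpcConnectionConfig for illustration.
type cfgDemo struct {
	endpoint           string
	compressor         string
	reconnectionPeriod time.Duration
}

// option is the demo analogue of GRPCConnectionOption.
type option func(*cfgDemo)

func withEndpoint(e string) option   { return func(c *cfgDemo) { c.endpoint = e } }
func withCompressor(n string) option { return func(c *cfgDemo) { c.compressor = n } }
func withReconnectionPeriod(d time.Duration) option {
	return func(c *cfgDemo) { c.reconnectionPeriod = d }
}

// newDriverConfig applies options in order, so later options override
// earlier ones, mirroring the "set last takes precedence" behavior
// noted for WithGRPCDialOption.
func newDriverConfig(opts ...option) cfgDemo {
	c := cfgDemo{endpoint: "localhost:55680"} // hypothetical default
	for _, o := range opts {
		o(&c)
	}
	return c
}

func main() {
	c := newDriverConfig(
		withEndpoint("collector:4317"),
		withCompressor("gzip"),
		withReconnectionPeriod(5*time.Second),
	)
	fmt.Println(c.endpoint, c.compressor) // collector:4317 gzip
}
```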