mirror of https://github.com/woodpecker-ci/woodpecker.git synced 2024-12-24 10:07:21 +02:00
woodpecker/agent
nemunaire 8e45ddd58b
agent: Continue to retry indefinitely (#3599)
When the Woodpecker server is unreachable for a long period of time (e.g. during an update,
maintenance, or an agent connection issue), the agent continuously tries to
reconnect without any delay. This produces **several GB** of logs in a short
period of time.

Here is a sample line, repeated indefinitely:

```
{"level":"error","error":"rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial tcp x.x.x.x:xxx: connect: connection refused\"","time":"2024-04-07T17:29:59Z","message":"grpc error: done(): code: Unavailable"}
```

It appears that the [backoff
package](https://pkg.go.dev/github.com/cenkalti/backoff/v4#BackOff),
after a certain amount of time, returns `backoff.Stop` (-1) instead of a
valid delay to wait. This means that no more retries should be made, [as
shown in the
example](https://pkg.go.dev/github.com/cenkalti/backoff/v4#BackOff). But
the code doesn't handle that case and uses -1 as the next delay, which
leads to continuous retries with no delay between them and generates a
huge amount of logs.
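
For illustration only, here is a minimal, hypothetical retry loop (not the agent's actual code, and `dial` is a made-up stand-in) showing why an unguarded loop spins: `time.Sleep` with a negative duration returns immediately, so once `NextBackOff` starts returning `backoff.Stop` the loop retries without any pause.

```go
package main

import (
	"errors"
	"log"
	"time"

	"github.com/cenkalti/backoff/v4"
)

// dial is a hypothetical stand-in for the real gRPC connection attempt.
func dial() error { return errors.New("connection refused") }

func main() {
	b := backoff.NewExponentialBackOff() // MaxElapsedTime defaults to 15 minutes

	for {
		if err := dial(); err == nil {
			return
		}

		d := b.NextBackOff()
		if d == backoff.Stop {
			// Without this check the loop would call time.Sleep(-1),
			// which returns immediately, and retry in a tight loop.
			log.Println("giving up after MaxElapsedTime")
			return
		}
		time.Sleep(d)
	}
}
```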

[`MaxElapsedTime` defaults to 15
minutes](https://pkg.go.dev/github.com/cenkalti/backoff/v4#pkg-constants);
past this time, `NextBackOff` returns `backoff.Stop` (-1) instead of
`MaxInterval`.
This commit sets `MaxElapsedTime` to 0, [so that `Stop` is never
returned](https://pkg.go.dev/github.com/cenkalti/backoff/v4#ExponentialBackOff).
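
As a minimal sketch of this configuration idea (not the exact patch; `connect` is a hypothetical stand-in for the agent's gRPC dial), setting `MaxElapsedTime` to 0 makes the back-off retry indefinitely:

```go
package main

import (
	"errors"
	"log"

	"github.com/cenkalti/backoff/v4"
)

// connect is a hypothetical stand-in for dialing the Woodpecker server.
func connect() error { return errors.New("server not reachable yet") }

func main() {
	b := backoff.NewExponentialBackOff()
	// MaxElapsedTime == 0 means "never stop": NextBackOff keeps returning
	// delays (capped at MaxInterval) instead of backoff.Stop.
	b.MaxElapsedTime = 0

	// backoff.Retry sleeps between attempts according to b and now
	// retries until connect succeeds.
	if err := backoff.Retry(connect, b); err != nil {
		log.Fatal(err)
	}
}
```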
2024-04-09 03:24:19 +03:00
| File | Last commit | Date |
|------|-------------|------|
| rpc | agent: Continue to retry indefinitely (#3599) | 2024-04-09 03:24:19 +03:00 |
| logger.go | Clean up models (#3228) | 2024-01-22 07:56:18 +01:00 |
| runner.go | Update golang (packages) (#3564) | 2024-03-29 09:21:54 +01:00 |
| state.go | Enable some linters (#3129) | 2024-01-09 21:35:37 +01:00 |
| tracer.go | Use step uuid instead of name in GRPC status calls (#3143) | 2024-01-09 15:39:09 +01:00 |