* add tests to ensure functionality even after switching the docker client api version
* switch docker client api version to remove import of Sirupsen and get rid of the casing workaround
* migrate dependency management from glide to dep, then to go modules
* rewrite ci workflow
* only run publish on version tags
* only run build on branches
* update goreleaser config
* disable automated latest tag push
* remove dependency on v2tec/docker-gobuilder
* remove dead code and files
* add GoLand's .idea folder to .gitignore
* add label to released docker images
* add test reporting, add some unit tests
* change test output dir
* fix goreleaser versions
* add debug output for circleci and goreleaser
* disable cgo
With insights from https://github.com/docker/docker/issues/29265, the behaviour now matches that of docker-compose (see the sketch after this list):
* connect to 1 network (at random) at start
* disconnect from that network
* reconnect to all the networks from the previous configuration
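Roughly, that flow maps onto the Docker Go client as in this sketch; `restoreNetworks`, `startedOn`, and the `desired` endpoint map are illustrative names, not the project's actual code:

```go
package main

import (
	"context"

	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

// restoreNetworks re-attaches a freshly started container to every network
// from its previous configuration. The container is created attached to a
// single network (startedOn), since ContainerCreate only honours one
// endpoint, so we detach it first and then reconnect to all of them.
func restoreNetworks(ctx context.Context, cli *client.Client, containerID string,
	startedOn string, desired map[string]*network.EndpointSettings) error {

	if err := cli.NetworkDisconnect(ctx, startedOn, containerID, false); err != nil {
		return err
	}
	for name, endpoint := range desired {
		// EndpointSettings carries aliases, static IPs, etc. from the
		// previous configuration.
		if err := cli.NetworkConnect(ctx, name, containerID, endpoint); err != nil {
			return err
		}
	}
	return nil
}
```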
This causes authentication failures on registries the credentials don't match, including public registries.
Fall back to no authentication to handle the case of public registries.
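A minimal sketch of that fallback, assuming the upstream Docker Go client; `pullWithFallback` and the retry-on-error shape are illustrations, not necessarily the exact fix:

```go
package main

import (
	"context"
	"io"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// pullWithFallback tries the pull with the resolved credentials first; if
// the registry rejects them (e.g. a public registry the stored credentials
// don't match), it retries without an auth header.
func pullWithFallback(ctx context.Context, cli *client.Client, ref, encodedAuth string) (io.ReadCloser, error) {
	rc, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{RegistryAuth: encodedAuth})
	if err == nil {
		return rc, nil
	}
	// Fall back to an unauthenticated pull for the public-registry case.
	return cli.ImagePull(ctx, ref, types.ImagePullOptions{})
}
```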
Load authentication credentials from the available credential stores in order of preference:
1. Environment variables REPO_USER, REPO_PASS
2. Docker config files
Request the image pull with the authentication header.
Wait until the pull is complete before exiting the function.
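A sketch of that lookup order and pull flow, assuming the Docker Go client; `resolveAuth`, `credsFromDockerConfig`, and `pullImage` are illustrative names, and the Docker config-file lookup is stubbed out:

```go
package main

import (
	"context"
	"encoding/base64"
	"encoding/json"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// resolveAuth prefers the REPO_USER/REPO_PASS environment variables and
// only falls back to Docker config files when they are unset.
func resolveAuth() (string, error) {
	user, pass := os.Getenv("REPO_USER"), os.Getenv("REPO_PASS")
	if user == "" || pass == "" {
		user, pass = credsFromDockerConfig()
	}
	if user == "" {
		return "", nil // no credentials found: pull unauthenticated
	}
	buf, err := json.Marshal(types.AuthConfig{Username: user, Password: pass})
	if err != nil {
		return "", err
	}
	// The authentication header is the base64-encoded JSON auth config.
	return base64.URLEncoding.EncodeToString(buf), nil
}

// credsFromDockerConfig is a stand-in for parsing ~/.docker/config.json;
// a real implementation could use github.com/docker/cli/cli/config.
func credsFromDockerConfig() (string, string) { return "", "" }

// pullImage requests the pull and drains the progress stream: the pull is
// only finished once the stream is fully consumed, so we must not return
// before that.
func pullImage(ctx context.Context, cli *client.Client, ref, auth string) error {
	rc, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{RegistryAuth: auth})
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(io.Discard, rc)
	return err
}
```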
Since Zodiac always uses image IDs for deployments we can't rely on the
standard container image field to determine the image that was used to
start the container. Luckily, Zodiac writes the original image name to a
label in the container metadata. If we find that Zodiac-specific label
on a running container we will use the associated value when trying to
determine if the container's image has changed.
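A sketch of that lookup; the label key follows Zodiac's naming convention but should be treated as an assumption here:

```go
package main

import "github.com/docker/docker/api/types"

// Assumption: Zodiac stores the original image name under this label.
const zodiacLabel = "com.centurylinklabs.zodiac.original-image"

// imageName returns the name to use when checking whether the container's
// image has changed: prefer the Zodiac label, since Zodiac deployments set
// the standard image field to an image ID rather than a name.
func imageName(info types.ContainerJSON) string {
	if info.Config == nil {
		return ""
	}
	if name, ok := info.Config.Labels[zodiacLabel]; ok {
		return name
	}
	return info.Config.Image
}
```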
No need to export this particular struct since we already have a public
Client interface available and a NewClient function which can be used to
instantiate the concrete struct.
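The resulting shape is roughly the following; the method set is illustrative:

```go
package container

// Client is the public surface; callers program against this interface.
type Client interface {
	StopContainer(name string) error
}

// dockerClient stays unexported: the only way to obtain one is through
// NewClient, so its internals remain private to the package.
type dockerClient struct {
	// wrapped Docker API client, options, etc.
}

// NewClient returns the concrete struct behind the public interface.
func NewClient() Client {
	return dockerClient{}
}

func (c dockerClient) StopContainer(name string) error {
	return nil // placeholder body for the sketch
}
```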
If we receive an error while trying to shut down or start up a particular
container, we don't want to immediately terminate the current update
cycle. Instead, we should continue processing the remaining containers
and simply log the error.
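A sketch of that behaviour with illustrative names: per-container errors are logged and the loop continues instead of aborting the update cycle.

```go
package main

import (
	"log"
	"time"
)

// updater abstracts stop/start; illustrative, not the project's interface.
type updater interface {
	StopContainer(name string, timeout time.Duration) error
	StartContainer(name string) error
}

// updateAll restarts every container, logging failures so a single bad
// container cannot terminate the whole update cycle.
func updateAll(client updater, names []string) {
	for _, name := range names {
		if err := client.StopContainer(name, 10*time.Second); err != nil {
			log.Printf("error stopping %s: %v", name, err)
			continue
		}
		if err := client.StartContainer(name); err != nil {
			log.Printf("error starting %s: %v", name, err)
		}
	}
}
```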