Conversation
```go
item2 := &Condition{
	Id:   "sync-volume-missing",
	Name: "Sync Volume is Missing",
	Test: []string{fmt.Sprintf("$(id=$(docker container ps -aq --filter 'name=^/%s$'); docker top $id &>/dev/null)", syncName)},
```
Is this supposed to be the exact same test as the one that checks to see if the container is running?
Perhaps it should be `docker volume ls $syncName`, which is pretty easy to do as a shell test (assuming the volume name would get hard coded in outrigger.yml).
I'm working on a PR which adds a `rig project sync:name` command to get that.
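For illustration, a minimal sketch of what that suggestion could look like using the Condition struct from the diff. The `syncVolumeName` variable is hypothetical, standing in for whatever supplies the hard-coded volume name, and the test may need inverting depending on whether a passing Test is meant to signal the problem or a healthy state:

```go
// Hypothetical sketch: detect a missing sync volume directly instead of
// re-using the container test. `docker volume ls -q --filter name=<name>`
// prints matching volume names (substring match), so empty output means
// no such volume was found.
item2 := &Condition{
	Id:   "sync-volume-missing",
	Name: "Sync Volume is Missing",
	// Exits 0 when no matching volume exists; swap test -z for test -n
	// if a passing test should mean "volume present" instead.
	Test: []string{fmt.Sprintf("test -z \"$(docker volume ls -q --filter 'name=%s')\"", syncVolumeName)},
}
```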
```go
	conditions["sync-container-not-running"] = item1
}

item2 := &Condition{
```
Should we also add a test to the effect of `ps aux | grep unison | $syncName` to check for the process on the local machine?
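A hedged sketch of such a condition, reusing the Condition struct from the diff; `pgrep -f` stands in for the `ps aux | grep ...` pipeline since it gives a usable exit code, and the variable name, id, and name values here are placeholders:

```go
// Hypothetical sketch: check for a local unison process whose command line
// mentions the sync name. `pgrep -f` matches against the full command line
// and exits non-zero when nothing matches, avoiding the self-matching
// pitfall of `ps aux | grep ...`.
item3 := &Condition{
	Id:   "sync-local-process-missing",
	Name: "Local Unison Process Not Running",
	// Exits 0 when a matching unison process is found; invert with `!` if
	// a passing test is meant to signal the problem instead.
	Test: []string{fmt.Sprintf("pgrep -f 'unison.*%s' >/dev/null", syncName)},
}
```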
```go
item1 := &Condition{
	Id:   "sync-container-not-running",
	Name: "Sync Container Not Working",
	Test: []string{fmt.Sprintf("$(id=$(docker container ps -aq --filter 'name=^/%s$'); docker top $id &>/dev/null)", syncName)},
```
Some potential shell versions, assuming you are OK with the sync volume being hard coded in outrigger.yml:

`test -z $(docker container ps -aq --filter 'name=^/projectname-sync$')` should tell you whether the container exists, but not much about whether it is running.

`test -z $(docker container ps -q --filter 'name=^/projectname-sync$')`, dropping the `-a` flag, should let you know whether it is running, but it can't tell you anything about the processes inside it. I don't think that is different from what the return code from `docker top` is doing for you, but I'm not positive.
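For reference, those two variants written as Go-formatted Test strings (the variable names are illustrative, and the command substitutions are quoted so an empty result still leaves `test` with a valid argument):

```go
// Exits 0 when no container with the sync name exists at all;
// -a includes stopped containers.
containerMissingTest := fmt.Sprintf(
	"test -z \"$(docker container ps -aq --filter 'name=^/%s$')\"", syncName)

// Exits 0 when the container is not currently running; dropping -a limits
// the listing to running containers, but says nothing about the processes
// inside it (which is what `docker top` checks).
containerNotRunningTest := fmt.Sprintf(
	"test -z \"$(docker container ps -q --filter 'name=^/%s$')\"", syncName)
```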
The problem is `docker container ps` never has a failing exit code, whereas `docker top` does.
After creating these examples of conditions, I've since gone on to work on a more hard-coded set of checks around project sync; I hope to have a PR ready later today/tomorrow.
With all the issues around unison, I want to be able to tell people to upgrade rig and have a comprehensive solution in place for all projects using unison.
Here's a reboot on the project doctor work. This is based on #157 and should be reviewed after that.
With some tweaks to the built-in conditions we could merge this as-is, but there are a few things I'd like to address:
Example
Uncomfortable Things
Future Enhancements
- `rig project sync:name` command to supply the sync volume and sync container name.
- `rig project sync:doctor` command which verifies the health of the sync process. (I'm planning to tackle that to split the current checking `rig project sync` does as part of setup into a separately callable API method and command.)
- `rig project doctor --analyze=conditionId` to support testing that a condition behaves in isolation.