I have DNS round-robin over two virtual IPs in front of a service. (Among others, the services tested were apache, nginx, varnish, postfix, … It really does not matter; let's just call it the service.) I have a Corosync/Pacemaker configuration where the service runs on both nodes (as a clone with clone-max=2 and clone-node-max=1), and each node holds one of the two virtual IPs.
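For reference, a minimal sketch of such a configuration in crm shell syntax; the resource names, the 192.0.2.x documentation addresses, and nginx standing in for the service are placeholders of mine, not from the original setup:

```
# Two virtual IPs, one per node (IPaddr2 is the standard OCF agent).
primitive p_vip1 ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 cidr_netmask=24 \
    op monitor interval=10s
primitive p_vip2 ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.11 cidr_netmask=24 \
    op monitor interval=10s
# The service itself, cloned so one instance runs on each node.
primitive p_service systemd:nginx \
    op monitor interval=10s
clone cl_service p_service \
    meta clone-max=2 clone-node-max=1
```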
- In case of node failure (Corosync stopped, node put into standby): the other node takes over its virtual IP.
- In case of the service being stopped: the cluster brings it back up.
But:
- In case of the service's configuration being broken: the cluster cannot start it, the resource stays stopped/failed, yet the virtual IP remains on the node.
When the cluster was active/passive there was no clone: the service primitive was in a group with the IP, so when the service failed, the virtual IP was not started either. I cannot put a clone into a group.
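For comparison, the old active/passive layout looked roughly like this (again a sketch with my placeholder names). crm rejects a clone as a group member, since groups may only contain primitives, which is why this layout does not carry over:

```
# Active/passive: service and IP in one group, so a failed service
# also took the IP offline and both moved to the other node together.
group g_service p_vip1 p_service
```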
How do I solve this?
Please note that this seems to have nothing to do with ordering constraints, which work just fine.
Answer:
I have added on-fail="standby" to the op start definition of the primitive. Now, when my service (the only primitive in the clone) cannot start due to a faulty config, the node also loses its virtual IP.
This way I end up with the resources migrated to the healthy node.
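A minimal sketch of the changed primitive (same placeholder names as above); on-fail=standby tells Pacemaker to move all resources off a node where the operation fails:

```
# If the start operation fails, put the node into standby, which
# migrates every resource on it (including the virtual IP) away.
primitive p_service systemd:nginx \
    op start interval=0 timeout=60s on-fail=standby \
    op monitor interval=10s
```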