Jobs get stuck after processing the first batch, remaining hosts stay in N/A
By accounting for missing hosts in a job, we broke running jobs on hosts past
the first batch. There is a module which periodically polls the states of
sub-tasks and overwrites the numbers we were using to decide whether to spawn
the next batch or not.
With this change we store the expected total count under a different key in
the task's output so it won't get overwritten by the included polling module.
For compatibility with existing jobs, we fall back to the value under the
original key in the task's output when the task doesn't have a value set under
the new key.
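A minimal sketch of the fallback lookup, assuming hypothetical key names (planned_count for the new key, total_count for the original one) and a task object exposing its output as a plain hash; the plugin's actual identifiers and task API may differ:

```ruby
# Hypothetical illustration of the fix: the expected total is written once
# under a dedicated key, so the polling module's periodic updates to the
# original key no longer clobber it. Key names here are assumptions.
NEW_KEY      = :planned_count # written once when the job is planned
ORIGINAL_KEY = :total_count   # periodically overwritten by the polling module

# Stand-in for a task whose output is a hash (assumption for this sketch).
Task = Struct.new(:output)

def expected_total(task)
  # Prefer the value under the new key; fall back to the original key so
  # jobs created before this change keep working.
  task.output[NEW_KEY] || task.output[ORIGINAL_KEY]
end

def spawn_next_batch?(task, processed_count)
  # Spawn another batch only while unprocessed hosts remain.
  processed_count < expected_total(task)
end

new_style_task = Task.new(planned_count: 100, total_count: 10)
old_style_task = Task.new(total_count: 100)
puts spawn_next_batch?(new_style_task, 10)  # true: 10 of 100 processed
puts spawn_next_batch?(old_style_task, 100) # false: all 100 processed
```

With this lookup order, a stale value written by the polling module under the original key is ignored for new jobs, while old jobs that only ever had the original key still resolve a total.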
#3 Updated by Anonymous almost 2 years ago
- Status changed from Ready For Testing to Closed
Applied in changeset foreman_plugin|2d1bb488360850a8014c079ece75d911fa85af16.