Job invocations should happen asynchronously
|Status:||Ready For Testing|
|Assigned To:||Adam Ruzicka|
|Target version:||Foreman - Team Ivan Iteration 17|
|Difficulty:||-|
|Pull request:||https://github.com/theforeman/smart_proxy_remote_execution_ssh/pull/32, https://github.com/theforeman/smart_proxy_dynflow/pull/33, https://github.com/theforeman/foreman_remote_execution/pull/249|
|Velocity based estimate:||-|
Description of problem:
Customer needs to run many long-running job invocations at the same time on multiple machines.
These machines are located in a network with low bandwidth, so keeping many connections alive is not feasible, as some jobs (e.g., reposync, yum update) can take a long time.
These connections also waste resources on the client hosts, which are not very powerful machines.
This could be implemented either by adding a provider other than SSH, or by having SSH start the job and return right away (the capsule could then check the status of the job somehow).
Currently the customer runs their own custom remote execution scripts, which use Ansible core libraries to make calls asynchronously and poll for the status of the execution. The solution provided by Satellite does not necessarily have to poll for the status, but it would need to provide a way to check it.
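The "start the job and return right away" idea described above can be sketched as a detach-and-poll pattern: launch the job under `nohup` so the connection can close immediately, write the exit code to a file when the job finishes, and check that file later over short-lived connections. This is only an illustrative sketch, not the implementation from the linked pull requests; the `run_remote` helper, job directory, and file names are hypothetical, and in a real setup `run_remote` would wrap `ssh "$HOST" ...` instead of a local shell.

```shell
#!/bin/sh
# Hypothetical sketch of asynchronous job invocation over short-lived
# connections. All names here are illustrative, not the plugin's API.

JOB_DIR=$(mktemp -d)            # per-job state directory on the "remote" host

# Stand-in for a remote call; a real setup would use: ssh "$HOST" "$1"
run_remote() { sh -c "$1"; }

# 1. Launch: nohup detaches the job, so the connection returns immediately
#    instead of staying open for the job's whole runtime.
run_remote "nohup sh -c 'sleep 1; echo done > $JOB_DIR/output; echo 0 > $JOB_DIR/exit_code' >/dev/null 2>&1 &"

# 2. Poll: a later, cheap connection reads the status file. While the job
#    is still running the exit_code file does not exist yet.
run_remote "cat $JOB_DIR/exit_code 2>/dev/null || echo running"

# 3. After the job finishes, the same check returns the recorded exit code.
sleep 2
run_remote "cat $JOB_DIR/exit_code"
```

The key property for the low-bandwidth network is that no connection lives as long as the job itself: one short connection starts it, and each status check is another short connection.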