Feature #7640: Support for multi-host setups

Added by Ivan Necas about 10 years ago. Updated 8 months ago.

Status: Closed
Priority: Normal
Assignee: -
Category: -
Target version: -
Difficulty: -
Triaged: No
Fixed in Releases: -
Found in Releases: -

Description

For now, foreman-tasks is limited to running the executor
on a single host, for two reasons:

1. the communication between the web server and the executor
goes through a local socket (for simplicity in simple setups)

2. Dynflow does not support running multiple executors
(as a consequence of 1.)

The solution consists of two parts: choosing a communication mechanism
for the multi-host setup and adding support to Dynflow for running multiple executors.

The communication takes the form of exchanging events (a rough sketch follows below):
1. web server -> executor - when a new task is created, the executor gets notified
to start executing it
2. executor -> web server - when the task is finished, the web server gets notified
(in case we want to make the task synchronous and not return anything to the client
before the task has finished: useful for short orchestration tasks)
3. web server -> executor - when an event is triggered in the web server that needs
to be propagated to the executor
4. executor -> executor - when one task triggers a sub-task on another executor (applicable
only in multi-executor setups)
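
A compact, runnable sketch of the four event flows above as plain Ruby data; the event kind names are made up for illustration and are not the actual foreman-tasks/Dynflow identifiers.

    # Hypothetical event kinds mirroring the four flows above (not real Dynflow names).
    Event = Struct.new(:kind, :from, :to, :task_id)

    EVENTS = [
      Event.new(:task_created,   :web_server, :executor,   42), # 1. start executing a new task
      Event.new(:task_finished,  :executor,   :web_server, 42), # 2. unblock a synchronous request
      Event.new(:external_event, :web_server, :executor,   42), # 3. e.g. a user cancel via the API
      Event.new(:sub_task,       :executor,   :executor,   43)  # 4. multi-executor setups only
    ]

    EVENTS.each { |e| puts "#{e.from} -> #{e.to}: #{e.kind} (task #{e.task_id})" }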

For a single-host setup, the current approach with a local socket file
should be enough, as it brings no performance penalty and low setup complexity (with optional
persistence of the events, as there is still a risk of losing an event in the socket-only setup).
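
A minimal, runnable sketch of the single-host transport, assuming a JSON-over-UNIX-socket framing with one event per line; the socket path and message format are illustrative, not the actual foreman-tasks protocol.

    require 'socket'
    require 'json'
    require 'tmpdir'

    socket_path = File.join(Dir.tmpdir, 'dynflow_demo.sock') # hypothetical path

    # Executor side: listen on the local UNIX socket and read one event per line.
    server = UNIXServer.new(socket_path)
    listener = Thread.new do
      client = server.accept
      event = JSON.parse(client.gets)
      puts "executor received: #{event.inspect}" # hand the event over to the executor core here
      client.close
    end

    # Web-server side: notify the executor about a newly created task.
    UNIXSocket.open(socket_path) { |s| s.puts({ kind: 'task_created', task_id: 42 }.to_json) }

    listener.join
    File.delete(socket_path)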

For a multi-host setup, there are three possible options for cross-host communication (maybe more?).

a. using a shared database

The communication events would be shared through a database table,
with every participant periodically polling for the events addressed to it
(a rough polling sketch follows after the lists below).

Advantages:

  • simpler setup (all the components are already present)
  • failover - one executor is enough to handle the tasks

Disadvantages:

  • slow - the multi-host setup might in fact handle tasks more slowly than
    the single-host setup
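
A rough sketch of option (a), using an in-memory SQLite table as a stand-in for the shared database; the table name and columns are hypothetical and do not reflect Dynflow's actual schema.

    require 'sqlite3' # stand-in for the shared database; illustration only

    db = SQLite3::Database.new(':memory:')
    db.execute <<~SQL
      CREATE TABLE dynflow_envelopes (   -- hypothetical table
        id       INTEGER PRIMARY KEY,
        receiver TEXT NOT NULL,          -- e.g. 'executor-1' or 'web-1'
        payload  TEXT NOT NULL,          -- serialized event
        handled  INTEGER DEFAULT 0
      )
    SQL

    # Web server side: enqueue an event by inserting a row addressed to an executor.
    db.execute("INSERT INTO dynflow_envelopes (receiver, payload) VALUES (?, ?)",
               ['executor-1', '{"kind":"task_created","task_id":42}'])

    # Executor side: poll periodically for unhandled rows addressed to it.
    2.times do
      rows = db.execute("SELECT id, payload FROM dynflow_envelopes " \
                        "WHERE receiver = ? AND handled = 0", ['executor-1'])
      rows.each do |id, payload|
        puts "handling #{payload}" # hand the event over to the executor core here
        db.execute("UPDATE dynflow_envelopes SET handled = 1 WHERE id = ?", [id])
      end
      sleep 1 # polling interval - the source of the latency mentioned above
    end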

b. using a shared database, plus the local socket when possible

Advantages:

  • simpler setup (all the components are already present)
  • we can reduce the cross-host communication: when a new task is created, the same host
    would be used for its execution (using the local socket), limiting
    the cross-host communication to external events, such as a user cancel or some
    event triggered by calling the Foreman API

Disadvantages:

  • added complexity when distinguishing between the local executor and remote ones
    (the routing sketch below illustrates the decision)
  • slower cross-host communication: periodic polling for the events to be handled
  • if some executor is down, the tasks triggered on the corresponding web server will not get handled
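
A rough sketch of the routing decision in option (b); the helper names and the "socket file exists" check are assumptions made for illustration, not how foreman-tasks actually detects a local executor.

    require 'json'

    SOCKET_PATH = '/var/run/foreman/dynflow_executor.sock' # hypothetical path

    # Hypothetical check: treat the executor as local when its socket file exists.
    def local_executor_running?
      File.socket?(SOCKET_PATH)
    end

    def send_over_local_socket(event)
      # fast path, as in the single-host sketch above (UNIXSocket + JSON line)
      puts "local socket: #{event.to_json}"
    end

    def insert_into_shared_table(event)
      # slow path, as in the option (a) sketch above (row picked up by a remote executor's poll)
      puts "shared table: #{event.to_json}"
    end

    # Prefer the local executor; fall back to a remote one via the shared database.
    def dispatch(event)
      local_executor_running? ? send_over_local_socket(event) : insert_into_shared_table(event)
    end

    dispatch(kind: 'task_created', task_id: 42)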

c. messaging

Advantages:

  • simpler, more transparent implementation once the messaging layer is in place
  • the web server does not determine which executor will be used
    for handling a task
  • better failover - an executor being down on one host would not mean some tasks
    are not getting handled (provided at least one executor is up)

Disadvantages:

  • more complex setup
  • the choice of messaging implementation: we should be fine with anything that
    supports STOMP, but I would not want to maintain all the different implementations
    in the installer (see the STOMP sketch below)
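
A rough sketch of option (c) using the Ruby stomp gem; the destination name, credentials and port are made up for illustration, and a running STOMP broker (e.g. ActiveMQ) is needed for this to actually work.

    require 'stomp' # the 'stomp' gem
    require 'json'

    # Connection details are illustrative only.
    client = Stomp::Client.new('guest', 'guest', 'localhost', 61613)

    # Executor side: subscribe and hand incoming events over to the executor core.
    client.subscribe('/queue/dynflow.events') do |msg|
      puts "executor received: #{msg.body}"
    end

    # Web-server side: publish the event; whichever executor is subscribed picks it up,
    # which is what gives this option its failover behaviour.
    client.publish('/queue/dynflow.events', { kind: 'task_created', task_id: 42 }.to_json)

    sleep 1        # give the listener a moment to receive the message
    client.close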

Related issues (1 open, 0 closed)

Blocks Foreman - Feature #7514: add foreman tasks into core (New)
#1

Updated by Ivan Necas about 10 years ago

#2

Updated by Ivan Necas about 10 years ago

  • Blocked by deleted (Feature #7514: add foreman tasks into core)
#3

Updated by Ivan Necas about 10 years ago

#4

Updated by Ivan Necas about 10 years ago

  • Description updated (diff)
#5

Updated by Ivan Necas about 10 years ago

  • Status changed from New to Assigned
#6

Updated by Adam Ruzicka 8 months ago

  • Status changed from Assigned to Closed
  • Triaged set to No

In theory this is already possible since the move to Sidekiq. Getting it working is a somewhat involved process, but it is possible. As long as all the involved instances talk to the same Redis and the same database and all run the same code, it should work.
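
A minimal sketch of what "talking to the same Redis and the same database" means in practice, using Sidekiq's standard Ruby configuration API; the URLs and environment variable names are illustrative and do not reflect Foreman's actual settings layout.

    require 'sidekiq'

    # Point every web and worker instance at the same shared Redis.
    shared_redis = { url: ENV.fetch('REDIS_URL', 'redis://shared-redis.example.com:6379/0') }

    Sidekiq.configure_server { |config| config.redis = shared_redis }
    Sidekiq.configure_client { |config| config.redis = shared_redis }

    # Every instance must also point at the same database (e.g. via DATABASE_URL or
    # the equivalent database.yml entry) and run the same code version.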
