Feature #17175: max_memory_per_executor support

Added by Ivan Necas about 8 years ago. Updated over 6 years ago.

Status: Closed
Priority: Normal
Assignee:
Category: -
Target version:
Fixed in Releases:
Found in Releases:

Description

Since Ruby doesn't give memory back to the system once it has allocated it, a
bigger set of larger actions can lead to quite high memory consumption that
persists and accumulates over time. This makes it hard to keep
memory consumption fully under control, especially in an environment
shared with other systems (passenger, pulp, candlepin, qpid). Since the
executors can terminate nicely without affecting the tasks themselves,
it should be pretty easy to extend them to watch their memory consumption.

The idea:

1. config options:
max_memory_per_executor - the memory threshold per executor
min_executors_count - the minimal number of executors to keep running (default 1)
minimal_executor_age - the minimal age an executor has to reach before it can be restarted for exceeding the memory threshold (default 1h)

2. the executor will periodically check its memory usage
(http://stackoverflow.com/a/24423978/457560 seems to be a sane
approach for us); see the sketch after this list

3. if the memory usage exceeds `max_memory_per_executor`, the executor is
older than `minimal_executor_age` (to prevent a situation where the
memory grows past `max_memory_per_executor` so quickly that we would do
nothing but restart executors without getting any work done), and the
number of running executors would not drop below `min_executors_count`,
politely terminate the executor

4. the polite termination should be able to hand over all the tasks to
the other executors; once everything is finalized on the executor, it would just exit

5. the daemon monitor would notice the executor getting closed and would run a new executor
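
A minimal sketch of how steps 2-4 could fit together (the `ps`-based RSS reading follows the spirit of the linked StackOverflow answer; `executor.age`, `executor.terminate_politely!` and the `world.config`/`world.running_executors_count` accessors are hypothetical names for illustration, not the existing Dynflow API):

```ruby
# Sketch only: the executor/world accessors below are hypothetical, not Dynflow's real API.
def current_rss_bytes
  # Resident set size of the current process, as reported by `ps` (in kB).
  `ps -o rss= -p #{Process.pid}`.to_i * 1024
end

def check_memory!(executor, world)
  return if current_rss_bytes <= world.config.max_memory_per_executor
  return if executor.age < world.config.minimal_executor_age
  return if world.running_executors_count <= world.config.min_executors_count

  # Hand the tasks over to the other executors, finish what is in flight and exit;
  # the daemon monitor then notices the exit and starts a replacement executor.
  executor.terminate_politely!
end
```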

It would be configurable and turned off by default (for development), but we would enable it
in production, where we can rely on the monitor being present.
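
For illustration, the defaults for the options from point 1 could look something like this (a sketch only; the option names come from this proposal and the final representation may well differ):

```ruby
# Hypothetical defaults sketch (option names taken from the proposal above).
MEMORY_WATCH_DEFAULTS = {
  max_memory_per_executor: nil,     # nil = feature turned off (development default)
  min_executors_count:     1,       # never terminate the last running executor
  minimal_executor_age:    60 * 60  # 1 hour before an executor may be restarted
}.freeze
```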


Related issues 4 (0 open, 4 closed)

Related to foreman-tasks - Bug #16488: ruby consumes 43GB RSS when there is lots of stucked errata apply tasks (Closed, 09/08/2016)
Related to foreman-tasks - Bug #16487: Continuous memory leak while tasks are getting run (Closed, 09/08/2016)
Related to foreman-tasks - Bug #20875: max_memory_per_executor can lead to stuck executor, waiting for an event that would not arrive (Closed, Ivan Necas, 09/07/2017)
Blocked by foreman-tasks - Bug #14806: Add option to set the amount of dynflow executors to be running (Closed, Ivan Necas, 04/25/2016)
#1

Updated by Ivan Necas about 8 years ago

  • Target version set to 1.3.2
#2

Updated by Ivan Necas about 8 years ago

  • Blocks Bug #16488: ruby consumes 43GB RSS when there is lots of stucked errata apply tasks added
#3

Updated by Ivan Necas about 8 years ago

  • Blocks deleted (Bug #16488: ruby consumes 43GB RSS when there is lots of stucked errata apply tasks)
#4

Updated by Ivan Necas about 8 years ago

  • Related to Bug #16488: ruby consumes 43GB RSS when there is lots of stucked errata apply tasks added
#5

Updated by Ivan Necas about 8 years ago

  • Blocked by Bug #14806: Add option to set the amount of dynflow executors to be running added
#6

Updated by Shimon Shtein about 8 years ago

A couple of thoughts:

First, since the process stabilizes at some point, the real problem is that task leftovers stay "stuck" in memory for too long.
Maybe we can address it with more aggressive cleanup after a task finishes - maybe calling a full GC after each task, so its leftovers are purged (a rough sketch is after this comment).

Second, again since the process stabilizes at some point, maybe we should enhance the algorithm that spawns new executors.
I mean monitoring the amount of memory consumed by all executors, and when a threshold is passed, reducing the number of live executors. The memory would then be divided between fewer executors, allowing them to reach the point where they can stabilize. The same can go the other way around: if the executors have stabilized before reaching the memory threshold, we can spawn an extra executor and get the task queue cleared faster.
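
A rough sketch of the "full GC after each task" idea (the helper and the place it would be called from are made up for illustration; foreman-tasks would have to trigger it wherever a task actually finishes):

```ruby
# Hypothetical helper: force a full garbage collection once a task has finished,
# so the task's leftovers are purged instead of lingering in the heap.
def cleanup_after_task
  GC.start(full_mark: true, immediate_sweep: true)
end
```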

#7

Updated by The Foreman Bot about 8 years ago

  • Status changed from New to Ready For Testing
  • Assignee set to Shimon Shtein
  • Pull request https://github.com/theforeman/foreman-tasks/pull/216 added
#8

Updated by Shimon Shtein about 8 years ago

  • Pull request https://github.com/Dynflow/dynflow/pull/211 added
#9

Updated by Ivan Necas about 8 years ago

  • Related to Bug #16487: Continuous memory leak while tasks are getting run added
#10

Updated by Ivan Necas almost 8 years ago

  • Target version changed from 1.3.2 to 1.11.3
#11

Updated by Ivan Necas almost 8 years ago

  • Target version changed from 1.11.3 to 1.12.2
#12

Updated by Mike McCune almost 8 years ago

  • Bugzilla link set to 1434069
#13

Updated by Shimon Shtein over 7 years ago

  • Status changed from Ready For Testing to Closed
  • % Done changed from 0 to 100
#14

Updated by Ivan Necas over 7 years ago

  • Release set to 252
#15

Updated by Ivan Necas over 7 years ago

  • Related to Bug #20875: max_memory_per_executor can lead to stuck executor, waiting for an event that would not arrive added