Bug #5568 (Closed)
Possible memory leak in trends
Description
By Sean Alderman (https://groups.google.com/forum/#!msg/foreman-users/hi7TagIRE8A/5ZRzlI8ChxEJ):
I am collecting a few trends in foreman using host facts. I'm using a cron entry for the foreman user: /usr/sbin/foreman-rake trends:counter every half hour.
My Foreman machine runs puppet, puppetdb, foreman, foreman-proxy, and postgres on 2 vCPUs and 6GB of RAM. It seems that the rake commands are running away with all the memory and swap. Is there a way to control this?
Here's what top shows currently:
PID   USER    PR NI VIRT  RES  SHR S %CPU %MEM TIME+    COMMAND
30002 foreman 20 0  6602m 3.3g 104 D 13.3 58.3 17:06.22 /opt/rh/ruby193/root/usr/bin/ruby /opt/rh/ruby193/root/usr/bin/rake trends:clean
Seems like a lot of RAM and accumulated CPU time. I can add more RAM, but other than rake commands like this one, the server doesn't seem to utilize the memory allocated.
Updated by Dominic Cleal over 10 years ago
- Related to Bug #4114: Trends don't scale well and become very slow added
Updated by Edson Manners over 10 years ago
I seem to have a similar issue. Currently running 1.4.5 on RHEL 6.5.
We have 545 hosts, and I'm not running any trends manually; Foreman seems to be running these trends on its own.
I'm running this in a KVM VM with 6 cores and 16GB RAM at this point, but it's obviously never enough.
I'll be upgrading through 1.5 and 1.6 in the coming days and looking to see if that helps.
top shows this after 'service foreman stop; service httpd stop':
30143 foreman 20 0 8407m 7.9g 6544 D 11.6 50.9 42:44.19 rake
I have to 'kill -9' the rake process, as it doesn't die on its own.
'ps -lef | grep 30143' shows:
/opt/rh/ruby193/root/usr/bin/ruby /opt/rh/ruby193/root/usr/bin/rake trends:clean
Updated by Shimon Shtein almost 10 years ago
Can it be related to the amount of data in the trend_counters table?
Updated by Jon McKenzie over 9 years ago
We're running into this issue too. I haven't done a ton with ActiveRecord, but the issue is these lines in the trends:clean rake task:
counts = TrendCounter.group([:trend_id, :created_at]).count
dupes = counts.select{ |attrs, count| count > 1}
Rather than doing the aggregation inside the database, the task first executes a very expensive GROUP BY query, loads every group's count into application memory, and then filters for the rows it needs in plain Ruby.
Again, not an ActiveRecord expert, but I think replacing those two lines with the following should significantly speed it up:
dupes = TrendCounter.having('count(*) > 1').group([:trend_id, :created_at]).count
This translates the SQL from this:
SELECT COUNT(*) AS count_all, trend_id AS trend_id, created_at AS created_at FROM "trend_counters" GROUP BY trend_id, created_at ORDER BY created_at
...to this:
SELECT COUNT(*) AS count_all, trend_id AS trend_id, created_at AS created_at FROM "trend_counters" GROUP BY trend_id, created_at HAVING count(*) > 1 ORDER BY created_at
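The memory difference can be illustrated in plain Ruby (a minimal sketch with fabricated sample rows, no ActiveRecord involved): the old code materializes a count for every (trend_id, created_at) group and only then discards the singletons, which is exactly the work the HAVING clause pushes into the database.

```ruby
# Minimal sketch with made-up sample data: this mimics what the old
# trends:clean code does in application memory. With millions of rows in
# trend_counters, the intermediate `counts` hash alone can exhaust RAM.
rows = [
  { trend_id: 1, created_at: "2014-05-06 00:00" },
  { trend_id: 1, created_at: "2014-05-06 00:00" },  # duplicate counter
  { trend_id: 2, created_at: "2014-05-06 00:30" },
]

# Old approach: group and count every row in Ruby...
counts = rows.group_by { |r| [r[:trend_id], r[:created_at]] }
             .transform_values(&:size)

# ...then throw away the (usually vast) majority that are not duplicates.
dupes = counts.select { |_key, count| count > 1 }

puts dupes.inspect  # => {[1, "2014-05-06 00:00"]=>2}
```

With the HAVING variant, only the duplicate groups ever leave the database, so the memory used by the rake process is proportional to the number of duplicates rather than to the size of the whole trend_counters table.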
Updated by Jon McKenzie over 9 years ago
Huh, I must have misread this issue initially; it is about trends:counter, not trends:clean. Oh well, I submitted a PR for the trends:clean issue anyway.
Updated by Dominic Cleal over 9 years ago
- Status changed from Assigned to Ready For Testing
- Assignee deleted (Lukas Zapletal)
- Pull request https://github.com/theforeman/foreman/pull/2365 added
- Pull request deleted ()
Updated by Jon McKenzie over 9 years ago
- Status changed from Ready For Testing to Closed
- % Done changed from 0 to 100
Applied in changeset 040abfa32f326de7cd488f34f243377f76fd70ae.
Updated by Dominic Cleal over 9 years ago
- Assignee set to Jon McKenzie
- Release set to 50