[Devel] sched: Count loadavg under rq::lock in calc_load_nohz_start()

Submitted by Kirill Tkhai on July 7, 2017, 5:01 p.m.


Message ID 149944681880.6758.291211676925521108.stgit@localhost.localdomain
State New
Series "sched: Count loadavg under rq::lock in calc_load_nohz_start()"

Commit Message

Since calc_load_fold_active() reads two variables (nr_running
and nr_uninterruptible), it may race with a parallel
try_to_wake_up(), so it must be executed under rq::lock to
prevent that. This appears to be the cause of the negative
calc_load_tasks values observed on several machines.

https://jira.sw.ru/browse/PSBM-68052

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 kernel/sched/core.c |    3 +++
 1 file changed, 3 insertions(+)


diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7c1c3f2fda6..2bc51b15fc7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2885,7 +2885,10 @@  void calc_load_enter_idle(void)
 	 * We're going into NOHZ mode, if there's any pending delta, fold it
 	 * into the pending idle delta.
 	 */
+
+	raw_spin_lock(&this_rq->lock);
 	delta = calc_load_fold_active(this_rq);
+	raw_spin_unlock(&this_rq->lock);
 	if (delta) {
 		int idx = calc_load_write_idx();
 		atomic_long_add(delta, &calc_load_idle[idx]);