[RHEL7,COMMIT] asm-generic/tlb: avoid potential double flush

Submitted by Vasily Averin on July 21, 2020, 2:58 p.m.


Message ID 202007211458.06LEwu8A007560@vvs.co7.work.ct
State New
Series "Series without cover letter"

Commit Message

The commit is pushed to "branch-rh7-3.10.0-1127.10.1.vz7.162.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-1127.10.1.vz7.162.13
commit f64be64b0e3d460cd44eef5371e2142e8099bdc4
Author: Peter Zijlstra <peterz@infradead.org>
Date:   Tue Jul 21 17:58:56 2020 +0300

    asm-generic/tlb: avoid potential double flush

    Aneesh reported that:

                tlb_flush()                 <-- #1
                      tlb_flush()           <-- #2

    does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
    clear tlb->end in that case.

    Observe that any caller of __tlb_adjust_range() also sets at least one
    of the tlb->freed_tables || tlb->cleared_p* bits, and those are
    unconditionally cleared by __tlb_reset_range().

    Change the condition for actually issuing a TLBI to having one of those
    bits set, as opposed to having tlb->end != 0.
    Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.kumar@linux.ibm.com
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
    Reported-by: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
    Cc: Michael Ellerman <mpe@ellerman.id.au>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    (cherry picked from commit 0758cd8304942292e95a0f750c374533db378b32)
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
 mm/memory.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)


diff --git a/mm/memory.c b/mm/memory.c
index f4874a2b8be84..4370dd4008220 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -252,8 +252,14 @@ void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 
 static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	if (!tlb->end)
+	/*
+	 * Anything calling __tlb_adjust_range() also sets at least one of
+	 * these bits.
+	 */
+	if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+	    tlb->cleared_puds || tlb->cleared_p4ds))
 		return;
 
 	tlb_flush(tlb);
 	mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);