[Devel] ms/mm/slub.c: list_lock may not be held in some circumstances

Submitted by Dmitry Safonov on Nov. 15, 2016, 3:51 p.m.

Details

Message ID 20161115155136.5326-1-dsafonov@virtuozzo.com
State New
Series "ms/mm/slub.c: list_lock may not be held in some circumstances"

Commit Message

Dmitry Safonov Nov. 15, 2016, 3:51 p.m.
From: David Rientjes <rientjes@google.com>

Commit c65c1877bd68 ("slub: use lockdep_assert_held") incorrectly
required that add_full() and remove_full() hold n->list_lock.  The lock
is only taken when kmem_cache_debug(s), since that's the only time it
actually does anything.

Require that the lock only be taken under such a condition.
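
For context, a simplified sketch (not the exact mainline caller) of the
deactivate_slab()-style pattern that triggers the false positive: the
caller takes n->list_lock only for debug caches, while add_full() used
to assert the lock before its SLAB_STORE_USER early return, so lockdep
warned even on paths where the call is a no-op:

	/*
	 * Simplified sketch of the deactivate_slab()-style caller, not
	 * the exact mainline code: n->list_lock is taken only for
	 * debug caches.
	 */
	if (kmem_cache_debug(s))
		spin_lock(&n->list_lock);

	/*
	 * Before this fix, add_full() asserted list_lock before its
	 * SLAB_STORE_USER early return, so a non-debug cache (where
	 * the call does nothing and the lock is not held) tripped
	 * lockdep here.
	 */
	add_full(s, n, page);

	if (kmem_cache_debug(s))
		spin_unlock(&n->list_lock);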

Reported-by: Larry Finger <Larry.Finger@lwfinger.net>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

[backported from ms commit 255d0884f563 ("mm/slub.c: list_lock may not
be held in some circumstances")]
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
---
 mm/slub.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index c930b022c5be..fcebd145a1b6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -960,21 +960,19 @@ static void trace(struct kmem_cache *s, struct page *page, void *object,
 static void add_full(struct kmem_cache *s,
 	struct kmem_cache_node *n, struct page *page)
 {
-	lockdep_assert_held(&n->list_lock);
-
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
+	lockdep_assert_held(&n->list_lock);
 	list_add(&page->lru, &n->full);
 }
 
 static void remove_full(struct kmem_cache *s, struct kmem_cache_node *n, struct page *page)
 {
-	lockdep_assert_held(&n->list_lock);
-
 	if (!(s->flags & SLAB_STORE_USER))
 		return;
 
+	lockdep_assert_held(&n->list_lock);
 	list_del(&page->lru);
 }
 

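For reference, lockdep_assert_held() in kernels of that era boils down
to roughly the following (from <linux/lockdep.h>; a no-op unless lock
debugging is enabled):

	#define lockdep_assert_held(l)	do {				\
			WARN_ON(debug_locks && !lockdep_is_held(l));	\
		} while (0)

Moving the assertion below the SLAB_STORE_USER check means it is only
evaluated on paths that actually touch the list, which is exactly when
callers hold n->list_lock.
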
Comments

Andrey Ryabinin Nov. 15, 2016, 3:56 p.m.
On 11/15/2016 06:51 PM, Dmitry Safonov wrote:
> [...]

Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
