[Devel,rh7,v2,10/21] ms/mm: huge_memory: use GFP_TRANSHUGE when charging huge pages

Submitted by Andrey Ryabinin on Jan. 12, 2017, 9:47 a.m.


Message ID 20170112094738.7720-10-aryabinin@virtuozzo.com
State New
Series "Series without cover letter"

Commit Message

From: Johannes Weiner <hannes@cmpxchg.org>

Transparent huge page charges prefer falling back to regular pages
rather than spending a lot of time in direct reclaim.

Desired reclaim behavior is usually declared in the gfp mask, but THP
charges use GFP_KERNEL and then rely on the fact that OOM is disabled
for THP charges, and that OOM-disabled charges don't retry reclaim.
Needless to say, this is anything but obvious and quite error prone.

Convert THP charges to use GFP_TRANSHUGE instead, which implies
__GFP_NORETRY, to indicate the low-latency requirement.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vladimir Davydov <vdavydov@parallels.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

(cherry picked from commit d51d885bbb137cc8e1704e76be1846c5e0d5e8b4)
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
 mm/huge_memory.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)


diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c406494..14ed98b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -708,7 +708,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	if (unlikely(mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))) {
+	if (unlikely(mem_cgroup_newpage_charge(page, mm, GFP_TRANSHUGE))) {
@@ -1241,7 +1241,7 @@ alloc:
 		goto out;
-	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
+	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_TRANSHUGE))) {
 		if (page) {
@@ -2524,7 +2524,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	if (!new_page)
-	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL)))
+	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_TRANSHUGE)))