[rh7,2/2] mm/vmscan/HACK: scan only file if global file inactive isn't low.

Submitted by Andrey Ryabinin on Nov. 23, 2017, 9:40 a.m.

Details

Message ID: 20171123094033.9954-2-aryabinin@virtuozzo.com
State: New
Series: Series without cover letter

Commit Message

Andrey Ryabinin Nov. 23, 2017, 9:40 a.m.
Avoid swapping if the global inactive file list is big. When the inactive
file list isn't low relative to the active one, reclaim file pages only
and leave the anonymous working set alone.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
---
 mm/vmscan.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)


diff --git a/mm/vmscan.c b/mm/vmscan.c
index 524d1452deb1..798e013757f1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2064,6 +2064,22 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 		}
 	}
 
+	if (global_reclaim(sc)) {
+		unsigned long inactive = zone_page_state(zone, NR_INACTIVE_FILE);
+		unsigned long active = zone_page_state(zone, NR_ACTIVE_FILE);
+		unsigned long gb, inactive_ratio;
+
+		gb = (inactive + active) >> (30 - PAGE_SHIFT);
+		if (gb)
+			inactive_ratio = int_sqrt(10 * gb);
+		else
+			inactive_ratio = 1;
+		if (inactive_ratio * inactive >= active) {
+			scan_balance = SCAN_FILE;
+			goto out;
+		}
+	}
+
 	/*
 	 * There is enough inactive page cache, do not reclaim
 	 * anything from the anonymous working set right now.

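The check reuses the square-root rule the kernel applies elsewhere when
sizing inactive lists: the tolerated active:inactive ratio grows as
int_sqrt(10 * gb), where gb is the file LRU size in gigabytes. The
following is a minimal userspace sketch of the heuristic, assuming
4 KiB pages; int_sqrt() is reimplemented here since it is a kernel
helper, and file_inactive_isnt_low() is just an illustrative name, not
a function from the patch:

#include <stdio.h>

#define PAGE_SHIFT 12				/* assume 4 KiB pages */

/* Integer square root; same result as the kernel's int_sqrt(). */
static unsigned long int_sqrt(unsigned long x)
{
	unsigned long b, m, y = 0;

	m = 1UL << (sizeof(x) * 8 - 2);
	while (m != 0) {
		b = y + m;
		y >>= 1;
		if (x >= b) {
			x -= b;
			y += m;
		}
		m >>= 2;
	}
	return y;
}

/* Nonzero if the patch would force file-only reclaim (no swapping). */
static int file_inactive_isnt_low(unsigned long inactive, unsigned long active)
{
	unsigned long gb = (inactive + active) >> (30 - PAGE_SHIFT);
	unsigned long inactive_ratio = gb ? int_sqrt(10 * gb) : 1;

	return inactive_ratio * inactive >= active;
}

int main(void)
{
	unsigned long gib = 1UL << (30 - PAGE_SHIFT);	/* pages per GiB */

	/* 4 GiB file LRU: gb = 4, ratio = int_sqrt(40) = 6.
	 * 1 GiB inactive vs 3 GiB active: 6 * 1 >= 3, so file-only reclaim. */
	printf("%d\n", file_inactive_isnt_low(1 * gib, 3 * gib));	/* 1 */

	/* 128 MiB inactive vs 4 GiB active: inactive is low, anon is scanned. */
	printf("%d\n", file_inactive_isnt_low(gib / 8, 4 * gib));	/* 0 */
	return 0;
}

So on a machine with a few gigabytes of page cache, anon scanning only
resumes once the active file list outgrows the inactive one several
times over; with under 1 GiB of file pages gb is 0, the ratio
degenerates to 1, and the check reduces to inactive >= active.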
Comments

Konstantin Khorenko Dec. 4, 2017, 4:14 p.m.
Won't take it.
We've implemented a per-cgroup pagecache limit; it should suit us better.

--
Best regards,

Konstantin Khorenko,
Virtuozzo Linux Kernel Team
