linux-mm.kvack.org archive mirror
* [PATCH 0/3] follow up nodereclaim for 32b fix
@ 2017-01-17 10:36 Michal Hocko
  2017-01-17 10:37 ` [PATCH 1/3] mm, vmscan: cleanup lru size calculations Michal Hocko
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Michal Hocko @ 2017-01-17 10:36 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, Mel Gorman, Minchan Kim, Hillf Danton, linux-mm, LKML

Hi,
I have previously posted this as an RFC [1], but there didn't seem to
be any objections other than some requests to reorganize the changes in
a slightly different way, so I am reposting the series and asking for
inclusion.

This is a follow up on top of [2]. Patch 1 cleans up the code a bit.
I haven't seen any real issues or bug reports, but conceptually,
ignoring the maximum eligible zone in get_scan_count is wrong by
definition. This is what patch 2 fixes. Patch 3 removes
inactive_reclaimable_pages, which was a kind of workaround for the
problem that should have been addressed in get_scan_count.

There is one more place which needs special handling and is not part
of this series: too_many_isolated can get confused as well. I already
have some preliminary work, but it still needs some testing, so I will
post it separately.

Michal Hocko (3):
      mm, vmscan: cleanup lru size calculations
      mm, vmscan: consider eligible zones in get_scan_count
      Revert "mm: bail out in shrink_inactive_list()"

 include/linux/mmzone.h |   2 +-
 mm/vmscan.c            | 116 +++++++++++++++++++------------------------------
 mm/workingset.c        |   2 +-
 3 files changed, 46 insertions(+), 74 deletions(-)

[1] http://lkml.kernel.org/r/20170110125552.4170-1-mhocko@kernel.org
[2] http://lkml.kernel.org/r/20170104100825.3729-1-mhocko@kernel.org

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

* Re: [RFC PATCH 1/2] mm, vmscan: consider eligible zones in get_scan_count
@ 2017-01-16 16:01 Johannes Weiner
  2017-01-16 19:33 ` [PATCH 1/3] mm, vmscan: cleanup lru size calculations Michal Hocko
  0 siblings, 1 reply; 12+ messages in thread
From: Johannes Weiner @ 2017-01-16 16:01 UTC (permalink / raw)
  To: Michal Hocko; +Cc: linux-mm, Mel Gorman, Minchan Kim, Andrew Morton

On Mon, Jan 16, 2017 at 10:29:56AM +0100, Michal Hocko wrote:
> From 39824aac7504b38f943a80b7d98ec4e87a5607a7 Mon Sep 17 00:00:00 2001
> From: Michal Hocko <mhocko@suse.com>
> Date: Tue, 27 Dec 2016 16:28:44 +0100
> Subject: [PATCH] mm, vmscan: consider eligible zones in get_scan_count
> 
> get_scan_count considers the whole node LRU size when
> - doing SCAN_FILE due to many page cache inactive pages
> - calculating the number of pages to scan
> 
> In both cases this might lead to unexpected behavior, especially on 32b
> systems where we can expect lowmem memory pressure very often.
> 
> A large highmem zone can easily distort the SCAN_FILE heuristic because
> there might be only a few file pages from the eligible zones on the node
> LRU and we would still enforce file LRU scanning, which can lead to
> thrashing while we could still scan anonymous pages.
> 
> The latter use of lruvec_lru_size can be problematic as well, especially
> when there are not many pages from the eligible zones. We would have to
> skip over many pages to find anything to reclaim, but shrink_node_memcg
> would only reduce the remaining number to scan by SWAP_CLUSTER_MAX
> at maximum. Therefore we can end up going over a large LRU many times
> without actually having a chance to reclaim much, if anything at all.
> The closer the lowmem zone is to being out of memory, the worse the
> problem becomes.
> 
> Fix this by making lruvec_lru_size zone aware. zone_idx will tell
> the maximum eligible zone.
> 
> Changes since v2
> - move the zone filtering logic to lruvec_lru_size so that we do not
>   have too many lruvec_lru_size* functions - Johannes
> 
> Changes since v1
> - s@lruvec_lru_size_zone_idx@lruvec_lru_size_eligibe_zones@
> 
> Acked-by: Minchan Kim <minchan@kernel.org>
> Signed-off-by: Michal Hocko <mhocko@suse.com>

Thanks, that looks better IMO. Two tiny things:

> @@ -234,22 +234,44 @@ bool pgdat_reclaimable(struct pglist_data *pgdat)
>  		pgdat_reclaimable_pages(pgdat) * 6;
>  }
>  
> -unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru)
> +static unsigned long lruvec_zone_lru_size(struct lruvec *lruvec,
> +		enum lru_list lru, int zone_idx)
>  {
>  	if (!mem_cgroup_disabled())
> -		return mem_cgroup_get_lru_size(lruvec, lru);
> +		return mem_cgroup_get_zone_lru_size(lruvec, lru, zone_idx);
>  
> -	return node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
> +	return zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zone_idx],
> +			       NR_ZONE_LRU_BASE + lru);
>  }
>  
> -unsigned long lruvec_zone_lru_size(struct lruvec *lruvec, enum lru_list lru,
> -				   int zone_idx)
> +/** lruvec_lru_size -  Returns the number of pages on the given LRU list.
> + * @lruvec: lru vector
> + * @lru: lru to use
> + * @zone_idx: zones to consider (use MAX_NR_ZONES for the whole LRU list)
> + */
> +unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
>  {
> +	unsigned long lru_size;
> +	int zid;
> +
>  	if (!mem_cgroup_disabled())
> -		return mem_cgroup_get_zone_lru_size(lruvec, lru, zone_idx);
> +		lru_size = mem_cgroup_get_lru_size(lruvec, lru);
> +	else
> +		lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
> +
> +	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
> +		struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];
> +		unsigned long size;
> +
> +		if (!managed_zone(zone))
> +			continue;
> +
> +		size = lruvec_zone_lru_size(lruvec, lru, zid);
> +		lru_size -= min(size, lru_size);

Fold lruvec_zone_lru_size() in here? Its body goes well with how we
get lru_size at the start of the function; there is no need to maintain
that abstraction.
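For illustration, here is a minimal userspace sketch of what the folded-in
version could look like. It models only the arithmetic (node-wide total
minus the ineligible zones above zone_idx); `fake_lruvec`, its fixed-size
`zone_lru` array, and the four-zone `MAX_NR_ZONES` are hypothetical
stand-ins for the kernel structures, not the real interfaces:

```c
#include <assert.h>

#define MAX_NR_ZONES 4

/* Hypothetical stand-in for the per-zone LRU page counts of one lruvec. */
struct fake_lruvec {
	unsigned long zone_lru[MAX_NR_ZONES];
};

/*
 * Folded version: compute the node-wide LRU size and subtract the
 * zones above zone_idx directly, instead of calling a separate
 * lruvec_zone_lru_size() helper. Pass MAX_NR_ZONES for the whole list.
 */
static unsigned long lruvec_lru_size(const struct fake_lruvec *lruvec,
				     int zone_idx)
{
	unsigned long lru_size = 0;
	int zid;

	for (zid = 0; zid < MAX_NR_ZONES; zid++)
		lru_size += lruvec->zone_lru[zid];	/* node-wide total */

	/* Subtract pages sitting in ineligible (higher) zones. */
	for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
		unsigned long size = lruvec->zone_lru[zid];

		lru_size -= size < lru_size ? size : lru_size;
	}
	return lru_size;
}
```

The clamped subtraction (`min(size, lru_size)` in the kernel) keeps the
result from underflowing when the per-zone counters are momentarily out of
sync with the node-wide total.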

> @@ -2064,8 +2086,8 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
>  	if (!file && !total_swap_pages)
>  		return false;
>  
> -	total_inactive = inactive = lruvec_lru_size(lruvec, file * LRU_FILE);
> -	total_active = active = lruvec_lru_size(lruvec, file * LRU_FILE + LRU_ACTIVE);
> +	total_inactive = inactive = lruvec_lru_size(lruvec, file * LRU_FILE, MAX_NR_ZONES);
> +	total_active = active = lruvec_lru_size(lruvec, file * LRU_FILE + LRU_ACTIVE, MAX_NR_ZONES);
>  
>  	/*
>  	 * For zone-constrained allocations, it is necessary to check if

It might be a better patch order to do the refactoring of the zone
filtering from inactive_list_is_low() to lruvec_lru_size() in 1/2,
without any change of behavior, and then update the other callers
in 2/2.

Hm?



end of thread, other threads:[~2017-02-06 23:40 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-17 10:36 [PATCH 0/3] follow up nodereclaim for 32b fix Michal Hocko
2017-01-17 10:37 ` [PATCH 1/3] mm, vmscan: cleanup lru size calculations Michal Hocko
2017-01-17 10:37 ` [PATCH 2/3] mm, vmscan: consider eligible zones in get_scan_count Michal Hocko
2017-01-18 16:46   ` Johannes Weiner
2017-02-06  8:10   ` Michal Hocko
2017-02-06 23:40     ` Andrew Morton
2017-01-17 10:37 ` [PATCH 3/3] Revert "mm: bail out in shrink_inactive_list()" Michal Hocko
2017-01-18 16:48   ` Johannes Weiner
2017-01-17 11:13 ` [PATCH 0/3] follow up nodereclaim for 32b fix Mel Gorman
  -- strict thread matches above, loose matches on Subject: below --
2017-01-16 16:01 [RFC PATCH 1/2] mm, vmscan: consider eligible zones in get_scan_count Johannes Weiner
2017-01-16 19:33 ` [PATCH 1/3] mm, vmscan: cleanup lru size calculations Michal Hocko
2017-01-17  3:40   ` Hillf Danton
2017-01-17  6:58   ` Minchan Kim
