From: Michal Hocko <mhocko@suse.cz>
To: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"nishimura@mxp.nes.nec.co.jp" <nishimura@mxp.nes.nec.co.jp>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [FIX][PATCH 2/3] memcg: fix numa scan information update to be triggered by memory event
Date: Wed, 29 Jun 2011 15:12:22 +0200	[thread overview]
Message-ID: <20110629131222.GB24262@tiehlicka.suse.cz> (raw)
In-Reply-To: <20110628174150.6b32e51c.kamezawa.hiroyu@jp.fujitsu.com>

On Tue 28-06-11 17:41:50, KAMEZAWA Hiroyuki wrote:
> From 646ca5cd1e1ab0633892b86a1bbb6cf600d79d58 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Date: Tue, 28 Jun 2011 17:09:25 +0900
> Subject: [PATCH 2/3] Fix numa scan information update to be triggered by memory event
> 
> commit 889976 adds a NUMA node round-robin for memcg, but the information
> is updated only once every 10 seconds.
> 
> This patch changes the update trigger from jiffies to memcg's event count.
> After this patch, the NUMA scan information is updated once 1024
> pagein/pageout events have been seen under a memcg.
> 
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Reviewed-by: Michal Hocko <mhocko@suse.cz>

See the note about wasted memory for MAX_NUMNODES==1 below.

> 
> Changelog:
>   - simplified
>   - removed mutex
>   - removed the 3% check. Any such heuristic needs a magic value, so
>     the heuristic was dropped entirely.
> ---
>  mm/memcontrol.c |   29 +++++++++++++++++++++++++----
>  1 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index c624312..3e7d5e6 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -108,10 +108,12 @@ enum mem_cgroup_events_index {
>  enum mem_cgroup_events_target {
>  	MEM_CGROUP_TARGET_THRESH,
>  	MEM_CGROUP_TARGET_SOFTLIMIT,
> +	MEM_CGROUP_TARGET_NUMAINFO,

This still wastes sizeof(unsigned long) of per-CPU space on non-NUMA
machines (i.e. MAX_NUMNODES == 1).
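
A possible way to avoid that (an untested sketch only, and it assumes
the enum is still terminated by a MEM_CGROUP_NTARGETS sentinel that
sizes the per-CPU targets array) would be to guard the enum entry
itself, in line with the MAX_NUMNODES > 1 check the patch already adds
to memcg_check_events():

enum mem_cgroup_events_target {
	MEM_CGROUP_TARGET_THRESH,
	MEM_CGROUP_TARGET_SOFTLIMIT,
#if MAX_NUMNODES > 1
	MEM_CGROUP_TARGET_NUMAINFO,
#endif
	MEM_CGROUP_NTARGETS,	/* assumed sentinel sizing the per-CPU array */
};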

[...]
> @@ -703,6 +709,14 @@ static void memcg_check_events(struct mem_cgroup *mem, struct page *page)
>  			__mem_cgroup_target_update(mem,
>  				MEM_CGROUP_TARGET_SOFTLIMIT);
>  		}
> +#if MAX_NUMNODES > 1
> +		if (unlikely(__memcg_event_check(mem,
> +			MEM_CGROUP_TARGET_NUMAINFO))) {
> +			atomic_inc(&mem->numainfo_events);
> +			__mem_cgroup_target_update(mem,
> +				MEM_CGROUP_TARGET_NUMAINFO);
> +		}
> +#endif
>  	}
>  }
>  
> @@ -1582,11 +1596,15 @@ static bool test_mem_cgroup_node_reclaimable(struct mem_cgroup *mem,
>  static void mem_cgroup_may_update_nodemask(struct mem_cgroup *mem)
>  {
>  	int nid;
> -
> -	if (time_after(mem->next_scan_node_update, jiffies))
> +	/*
> +	 * numainfo_events > 0 means there was at least NUMAINFO_EVENTS_TARGET
> +	 * pagein/pageout changes since the last update.
> +	 */
> +	if (!atomic_read(&mem->numainfo_events))
> +		return;

At first I was worried about memory barriers here, because the
atomic_{set,inc} operations used for numainfo_events do not imply memory
barriers. That is not a problem, though: memcg_check_events will always
make numainfo_events > 0 again (even if it doesn't see the atomic_set
from this function), and we are not interested in the exact value.

> +	if (atomic_inc_return(&mem->numainfo_updating) > 1)
>  		return;

OK, this one should be barrier safe as well, because atomic_inc_return
implies a barrier on both sides (before and after the operation), so the
atomic_set shouldn't break it, AFAIU.

>  
> -	mem->next_scan_node_update = jiffies + 10*HZ;
>  	/* make a nodemask where this memcg uses memory from */
>  	mem->scan_nodes = node_states[N_HIGH_MEMORY];
>  
> @@ -1595,6 +1613,9 @@ static void mem_cgroup_may_update_nodemask(struct mem_cgroup *mem)
>  		if (!test_mem_cgroup_node_reclaimable(mem, nid, false))
>  			node_clear(nid, mem->scan_nodes);
>  	}
> +
> +	atomic_set(&mem->numainfo_events, 0);
> +	atomic_set(&mem->numainfo_updating, 0);
>  }
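
For anyone following along, here is a small userspace analogue of the
guard pattern above (the atomic_inc_return() check plus the atomic_set()
resets). It is a sketch only, not kernel code: the names events,
updating, may_update and worker are invented here and merely mirror
numainfo_events/numainfo_updating, with atomic_fetch_add(&x, 1) > 0
playing the role of atomic_inc_return(&x) > 1:

/* Build with: cc -std=c11 -pthread demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int events;	/* bumped on every "memory event" */
static atomic_int updating;	/* guards the expensive rebuild */
static int rebuilds;		/* written only inside the guarded section */

static void may_update(void)
{
	/* Nothing accumulated since the last rebuild: bail out cheaply. */
	if (atomic_load(&events) == 0)
		return;

	/* Only the caller that takes the counter from 0 to 1 proceeds;
	 * everyone else returns and retries on a later event. */
	if (atomic_fetch_add(&updating, 1) > 0)
		return;

	rebuilds++;		/* stands in for rebuilding scan_nodes */

	atomic_store(&events, 0);
	atomic_store(&updating, 0);
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		atomic_fetch_add(&events, 1);	/* a "pagein/pageout" event */
		may_update();
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];

	for (int i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < 4; i++)
		pthread_join(t[i], NULL);

	printf("%d rebuilds for %d events\n", rebuilds, 4 * 100000);
	return 0;
}

The printed count varies between runs; the point is only that the
rebuild never runs concurrently, which is what the atomic_inc_return()
guard buys us.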
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic



Thread overview: 6+ messages
2011-06-28  8:31 [FIX][PATCH 0/3] memcg: 3 fixes for memory cgroup's memory reclaim KAMEZAWA Hiroyuki
2011-06-28  8:39 ` [PATCH 1/3] memcg: fix reclaimable lru check in memcg KAMEZAWA Hiroyuki
2011-06-29 13:40   ` Michal Hocko
2011-06-28  8:41 ` [FIX][PATCH 2/3] memcg: fix numa scan information update to be triggered by memory event KAMEZAWA Hiroyuki
2011-06-29 13:12   ` Michal Hocko [this message]
2011-06-28  8:54 ` [PATCH 3/3] mm: preallocate page before lock_page() at filemap COW KAMEZAWA Hiroyuki
