From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: balbir@linux.vnet.ibm.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
nishimura@mxp.nes.nec.co.jp, menage@google.com, xemul@openvz.org,
linux-mm@kvack.org, lizf@cn.fujitsu.com
Subject: Re: [RFC] Reduce the resource counter lock overhead
Date: Thu, 25 Jun 2009 13:37:25 +0900
Message-ID: <20090625133725.c5af0998.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090625032717.GX8642@balbir.in.ibm.com>
On Thu, 25 Jun 2009 08:57:17 +0530
Balbir Singh <balbir@linux.vnet.ibm.com> wrote:
> > What kind of workload can be much improved?
> > IIUC, in general, using seq_lock for a frequently modified counter just
> > makes it slow.
>
> Why do you think so? I've been looking primarily at do_gettimeofday().
IIUC, modifications to xtime are _not_ frequent.
> Yes, frequent updates can hurt readers in the worst case.
You don't understand my point. The write side of a seqlock is itself
heavy; I have no interest in the read side.
What needs to be faster is this path:
==
	while (1) {
		int ret;
		bool noswap = false;

		ret = res_counter_charge(&mem->res, PAGE_SIZE, &fail_res);
		if (likely(!ret)) {
			if (!do_swap_account)
				break;
			ret = res_counter_charge(&mem->memsw, PAGE_SIZE,
							&fail_res);
			if (likely(!ret))
				break;
			/* mem+swap counter fails */
			res_counter_uncharge(&mem->res, PAGE_SIZE);
			noswap = true;
			mem_over_limit = mem_cgroup_from_res_counter(fail_res,
									memsw);
		} else
			/* mem counter fails */
			mem_over_limit = mem_cgroup_from_res_counter(fail_res,
									res);
==
And using seq_lock would add more overhead here.
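To make the cost concrete, here is a minimal sketch (hypothetical counter
structures, not the real res_counter code) of the charge fast path with the
current spinlock versus a seqlock. The seqlock write side is the same
spinlock plus a sequence-count increment and write barrier on entry and
exit, so the charge path can only get slower; only lockless readers would
benefit.
==
#include <linux/spinlock.h>
#include <linux/seqlock.h>
#include <linux/errno.h>

/* hypothetical counters, only to compare the two locking schemes */
struct counter_spin {
	spinlock_t lock;
	unsigned long long usage, limit;
};

struct counter_seq {
	seqlock_t lock;
	unsigned long long usage, limit;
};

/* current style: one spinlock round trip per charge */
static int charge_spin(struct counter_spin *c, unsigned long val)
{
	int ret = 0;

	spin_lock(&c->lock);
	if (c->usage + val > c->limit)
		ret = -ENOMEM;
	else
		c->usage += val;
	spin_unlock(&c->lock);
	return ret;
}

/* seqlock style: the same spinlock plus seqcount bumps and barriers */
static int charge_seq(struct counter_seq *c, unsigned long val)
{
	int ret = 0;

	write_seqlock(&c->lock);
	if (c->usage + val > c->limit)
		ret = -ENOMEM;
	else
		c->usage += val;
	write_sequnlock(&c->lock);
	return ret;
}
==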
> I've been meaning to experiment with percpu counters as well, but we'll
> need to decide what the tolerance limit is, since the batch value adds
> fuzziness before all CPUs see that the limit is exceeded. Still, it might
> be worth experimenting.
>
A per-cpu counter is one option, but choosing the "batch" value is very
difficult if we never allow the limit to be exceeded. And if the batch is
too small, a percpu counter is slower than the current one.
And if hierarchy is used, the jitter caused by batching becomes very large
in the parent nodes.
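As an illustration of that fuzziness (a hypothetical batched counter, not
any existing memcg interface): each CPU folds its local count into the
shared value only every BATCH pages, so the shared usage can lag reality by
up to num_online_cpus() * BATCH pages, and that error accumulates again at
each parent in the hierarchy.
==
#include <linux/percpu.h>
#include <linux/atomic.h>
#include <linux/errno.h>

#define BATCH	32	/* pages accumulated per CPU before folding */

/* hypothetical batched counter, for illustration only */
struct batched_counter {
	atomic_long_t shared;	/* folded usage, in pages */
	long limit;		/* hard limit, in pages */
	long __percpu *local;	/* per-cpu pages not yet folded */
};

static int batched_charge(struct batched_counter *c, long pages)
{
	long *cnt;
	int ret = 0;

	preempt_disable();
	cnt = this_cpu_ptr(c->local);
	*cnt += pages;
	if (*cnt >= BATCH) {
		/*
		 * The limit is checked only when a CPU folds its batch,
		 * so up to num_online_cpus() * BATCH pages can already be
		 * charged past the limit before anyone notices.
		 */
		if (atomic_long_add_return(*cnt, &c->shared) > c->limit)
			ret = -ENOMEM;
		*cnt = 0;
	}
	preempt_enable();
	return ret;
}
==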
Thanks,
-Kame