From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail172.messagelabs.com (mail172.messagelabs.com [216.82.254.3])
	by kanga.kvack.org (Postfix) with SMTP id D77B15F0047
	for ; Sun, 17 Oct 2010 20:50:05 -0400 (EDT)
Received: from m1.gw.fujitsu.co.jp ([10.0.50.71])
	by fgwmail6.fujitsu.co.jp (Fujitsu Gateway) with ESMTP id o9I0o3Ca029332
	for (envelope-from kamezawa.hiroyu@jp.fujitsu.com);
	Mon, 18 Oct 2010 09:50:03 +0900
Received: from smail (m1 [127.0.0.1])
	by outgoing.m1.gw.fujitsu.co.jp (Postfix) with ESMTP id A778345DE51
	for ; Mon, 18 Oct 2010 09:50:02 +0900 (JST)
Received: from s1.gw.fujitsu.co.jp (s1.gw.fujitsu.co.jp [10.0.50.91])
	by m1.gw.fujitsu.co.jp (Postfix) with ESMTP id 20E6745DE50
	for ; Mon, 18 Oct 2010 09:50:02 +0900 (JST)
Received: from s1.gw.fujitsu.co.jp (localhost.localdomain [127.0.0.1])
	by s1.gw.fujitsu.co.jp (Postfix) with ESMTP id AFBE11DB804E
	for ; Mon, 18 Oct 2010 09:50:01 +0900 (JST)
Received: from m108.s.css.fujitsu.com (m108.s.css.fujitsu.com [10.249.87.108])
	by s1.gw.fujitsu.co.jp (Postfix) with ESMTP id 5269C1DB804D
	for ; Mon, 18 Oct 2010 09:50:01 +0900 (JST)
Date: Mon, 18 Oct 2010 09:44:37 +0900
From: KAMEZAWA Hiroyuki
Subject: Re: [PATCH v2 04/11] memcg: disable softirq in lock_page_cgroup()
Message-Id: <20101018094437.e12aae49.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <1287177279-30876-5-git-send-email-gthelen@google.com>
References: <1287177279-30876-1-git-send-email-gthelen@google.com>
	<1287177279-30876-5-git-send-email-gthelen@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
To: Greg Thelen
Cc: Andrew Morton , linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	containers@lists.osdl.org, Andrea Righi , Balbir Singh ,
	Daisuke Nishimura , Minchan Kim , Ciju Rajan K , David Rientjes
List-ID:

On Fri, 15 Oct 2010 14:14:32 -0700
Greg Thelen wrote:

> If pages are being migrated from a memcg, then updates to that
> memcg's page statistics are protected by grabbing a bit spin lock
> using lock_page_cgroup(). In an upcoming commit memcg dirty page
> accounting will be updating memcg page accounting (specifically:
> num writeback pages) from softirq. Avoid a deadlocking nested
> spin lock attempt by disabling softirq on the local processor
> when grabbing the page_cgroup bit_spin_lock in lock_page_cgroup().
> This avoids the following deadlock:
> statistic
>       CPU 0                        CPU 1
>     inc_file_mapped
>     rcu_read_lock
>                                    start move
>                                    synchronize_rcu
>     lock_page_cgroup
>       softirq
>       test_clear_page_writeback
>       mem_cgroup_dec_page_stat(NR_WRITEBACK)
>       rcu_read_lock
>       lock_page_cgroup  /* deadlock */
>       unlock_page_cgroup
>       rcu_read_unlock
>     unlock_page_cgroup
>     rcu_read_unlock
>
> By disabling softirq in lock_page_cgroup, nested calls are avoided.
> The softirq would be delayed until after inc_file_mapped enables
> softirq when calling unlock_page_cgroup().
>
> The normal, fast path, of memcg page stat updates typically
> does not need to call lock_page_cgroup(), so this change does
> not affect the performance of the common case page accounting.
>
> Signed-off-by: Andrea Righi
> Signed-off-by: Greg Thelen

I have a patch for this problem, so could you reorder the patches as
1,2,3,5,6,7,8,9,10,11? I'll post an add-on patch "12".

The move_charge performance improvement patches will be posted later.

Thanks,
-Kame

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
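
For reference, a minimal sketch of the locking change the quoted changelog
describes: wrap the PCG_LOCK bit spinlock with local_bh_disable()/
local_bh_enable() so a softirq cannot run on the local CPU while the lock
is held. This is illustrative only, not the exact hunk from the posted
patch; the sketch_* names are hypothetical stand-ins for the
lock_page_cgroup()/unlock_page_cgroup() helpers, and it assumes the
struct page_cgroup / PCG_LOCK definitions from include/linux/page_cgroup.h
of that kernel series.

    /*
     * Illustrative sketch (not the exact patch hunk): keep softirqs off
     * on this CPU for as long as the PCG_LOCK bit spinlock is held.
     */
    #include <linux/bit_spinlock.h>   /* bit_spin_lock(), bit_spin_unlock() */
    #include <linux/bottom_half.h>    /* local_bh_disable(), local_bh_enable() */
    #include <linux/page_cgroup.h>    /* struct page_cgroup, PCG_LOCK */

    static inline void sketch_lock_page_cgroup(struct page_cgroup *pc)
    {
    	/*
    	 * With softirqs disabled, a path such as
    	 * test_clear_page_writeback() ->
    	 * mem_cgroup_dec_page_stat(NR_WRITEBACK) cannot interrupt the
    	 * lock holder and spin on the same bit (the deadlock in the
    	 * diagram above).
    	 */
    	local_bh_disable();
    	bit_spin_lock(PCG_LOCK, &pc->flags);
    }

    static inline void sketch_unlock_page_cgroup(struct page_cgroup *pc)
    {
    	bit_spin_unlock(PCG_LOCK, &pc->flags);
    	/* Any softirq raised while the lock was held runs here. */
    	local_bh_enable();
    }

The point the changelog relies on is visible in the unlock side: the
writeback-accounting softirq is only deferred until bottom halves are
re-enabled, and the common fast path, which never takes the page_cgroup
lock, is not touched at all.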