From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4FEC0B3F.7070108@jp.fujitsu.com>
Date: Thu, 28 Jun 2012 16:43:59 +0900
From: Kamezawa Hiroyuki
MIME-Version: 1.0
Subject: Re: needed lru_add_drain_all() change
References: <20120626143703.396d6d66.akpm@linux-foundation.org>
In-Reply-To: <20120626143703.396d6d66.akpm@linux-foundation.org>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Sender: owner-linux-mm@kvack.org
List-ID:
To: Andrew Morton
Cc: linux-mm@kvack.org

(2012/06/27 6:37), Andrew Morton wrote:
> https://bugzilla.kernel.org/show_bug.cgi?id=43811
>
> lru_add_drain_all() uses schedule_on_each_cpu().  But
> schedule_on_each_cpu() hangs if a realtime thread is spinning, pinned
> to a CPU.  There's no intention to change the scheduler behaviour, so I
> think we should remove schedule_on_each_cpu() from the kernel.
>
> The biggest user of schedule_on_each_cpu() is lru_add_drain_all().
>
> Does anyone have any thoughts on how we can do this?  The obvious
> approach is to declare these:
>
> static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
> static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
>
> to be irq-safe and use on_each_cpu().  lru_rotate_pvecs is already
> irq-safe and converting lru_add_pvecs and lru_deactivate_pvecs looks
> pretty simple.
>
> Thoughts?
>

How about this kind of RCU synchronization?
==
/*
 * Double-buffered pagevecs for quick drain.
 * The usual per-cpu pvec user needs to take rcu_read_lock() before accessing.
 * An external drainer of pvecs replaces the active pvec vector, calls
 * synchronize_rcu(), and then drains all pages on the now-unused pvecs in turn.
 */
static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS * 2], lru_pvecs);
atomic_t pvec_idx; /* must be placed at some aligned address... */

struct pagevec *my_pagevec(enum lru_list lru)
{
	return &__get_cpu_var(lru_pvecs)[lru + NR_LRU_LISTS * atomic_read(&pvec_idx)];
}

/*
 * Per-cpu pagevec access should be surrounded by these calls.
 */
static inline void pagevec_start_access(void)
{
	rcu_read_lock();
}

static inline void pagevec_end_access(void)
{
	rcu_read_unlock();
}

/*
 * Flip the active pagevec array: 0 <-> 1.
 */
static void lru_pvec_update(void)
{
	if (atomic_read(&pvec_idx))
		atomic_set(&pvec_idx, 0);
	else
		atomic_set(&pvec_idx, 1);
}

/*
 * Drain all LRUs on the per-cpu pagevecs.
 */
static DEFINE_MUTEX(lru_add_drain_all_mutex);

static void lru_add_drain_all(void)
{
	int cpu;

	mutex_lock(&lru_add_drain_all_mutex);
	lru_pvec_update();
	synchronize_rcu(); /* waits for all accessors of the old pvecs to quit */
	for_each_online_cpu(cpu)
		drain_pvec_of_the_cpu(cpu);
	mutex_unlock(&lru_add_drain_all_mutex);
}
==

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org