From: Gilad Ben-Yossef
Subject: [v7 4/8] smp: add func to IPI cpus based on parameter func
Date: Thu, 26 Jan 2012 12:01:57 +0200
Message-Id: <1327572121-13673-5-git-send-email-gilad@benyossef.com>
In-Reply-To: <1327572121-13673-1-git-send-email-gilad@benyossef.com>
References: <1327572121-13673-1-git-send-email-gilad@benyossef.com>
To: linux-kernel@vger.kernel.org
Cc: Gilad Ben-Yossef, Chris Metcalf, Christoph Lameter, Peter Zijlstra,
    Frederic Weisbecker, Russell King, linux-mm@kvack.org, Pekka Enberg,
    Matt Mackall, Sasha Levin, Rik van Riel, Andi Kleen, Alexander Viro,
    linux-fsdevel@vger.kernel.org, Avi Kivity, Michal Nazarewicz,
    Kosaki Motohiro, Andrew Morton, Milton Miller

Add the on_each_cpu_cond() function, which wraps on_each_cpu_mask() and
builds the cpumask of CPUs to IPI by calling a caller-supplied predicate
for each CPU in order to decide whether that CPU should be IPIed.

With CONFIG_CPUMASK_OFFSTACK=y the function works around allocation
failure of the cpumask variable by falling back to iterating over the
CPUs and sending one IPI at a time via smp_call_function_single().

The function is useful because it separates the case-specific code that
decides whether to IPI a particular CPU for a particular request from
the common boilerplate of creating the mask, handling allocation
failures, and so on.

Signed-off-by: Gilad Ben-Yossef
CC: Chris Metcalf
CC: Christoph Lameter
CC: Peter Zijlstra
CC: Frederic Weisbecker
CC: Russell King
CC: linux-mm@kvack.org
CC: Pekka Enberg
CC: Matt Mackall
CC: Sasha Levin
CC: Rik van Riel
CC: Andi Kleen
CC: Alexander Viro
CC: linux-fsdevel@vger.kernel.org
CC: Avi Kivity
CC: Michal Nazarewicz
CC: Kosaki Motohiro
CC: Andrew Morton
CC: Milton Miller
---
 include/linux/smp.h |   19 ++++++++++++++++
 kernel/smp.c        |   58 +++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 77 insertions(+), 0 deletions(-)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index d0adb78..e1ea702 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -109,6 +109,15 @@ void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 		void *info, bool wait);
 
 /*
+ * Call a function on each processor for which the supplied function
+ * cond_func returns a positive value. This may include the local
+ * processor.
+ */
+void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
+		smp_call_func_t func, void *info, bool wait,
+		gfp_t gfpflags);
+
+/*
  * Mark the boot cpu "online" so that it can call console drivers in
  * printk() and can access its per-cpu storage.
  */
@@ -153,6 +162,16 @@ static inline int up_smp_call_function(smp_call_func_t func, void *info)
 			local_irq_enable();		\
 		}					\
 	} while (0)
+#define on_each_cpu_cond(cond_func, func, info, wait, gfpflags) \
+	do {						\
+		preempt_disable();			\
+		if (cond_func(0, info)) {		\
+			local_irq_disable();		\
+			(func)(info);			\
+			local_irq_enable();		\
+		}					\
+		preempt_enable();			\
+	} while (0)
 
 static inline void smp_send_reschedule(int cpu) { }
 #define num_booting_cpus()			1
diff --git a/kernel/smp.c b/kernel/smp.c
index a081e6c..fa0912a 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -730,3 +730,61 @@ void on_each_cpu_mask(const struct cpumask *mask, smp_call_func_t func,
 	put_cpu();
 }
 EXPORT_SYMBOL(on_each_cpu_mask);
+
+/*
+ * on_each_cpu_cond(): Call a function on each processor for which
+ * the supplied function cond_func returns true, optionally waiting
+ * for all the required CPUs to finish. This may include the local
+ * processor.
+ * @cond_func:	A callback function that is passed a cpu id and
+ *		the info parameter. The function is called
+ *		with preemption disabled. The function should
+ *		return a boolean value indicating whether to IPI
+ *		the specified CPU.
+ * @func:	The function to run on all applicable CPUs.
+ *		This must be fast and non-blocking.
+ * @info:	An arbitrary pointer to pass to both functions.
+ * @wait:	If true, wait (atomically) until function has
+ *		completed on other CPUs.
+ * @gfpflags:	GFP flags to use when allocating the cpumask
+ *		used internally by the function.
+ *
+ * The function might sleep if the GFP flags indicate a non
+ * atomic allocation is allowed.
+ *
+ * You must not call this function with disabled interrupts or
+ * from a hardware interrupt handler or from a bottom half handler.
+ */
+void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
+			smp_call_func_t func, void *info, bool wait,
+			gfp_t gfpflags)
+{
+	cpumask_var_t cpus;
+	int cpu, ret;
+
+	might_sleep_if(gfpflags & __GFP_WAIT);
+
+	if (likely(zalloc_cpumask_var(&cpus, (gfpflags|__GFP_NOWARN)))) {
+		preempt_disable();
+		for_each_online_cpu(cpu)
+			if (cond_func(cpu, info))
+				cpumask_set_cpu(cpu, cpus);
+		on_each_cpu_mask(cpus, func, info, wait);
+		preempt_enable();
+		free_cpumask_var(cpus);
+	} else {
+		/*
+		 * No free cpumask, bother. No matter, we'll
+		 * just have to IPI them one by one.
+		 */
+		preempt_disable();
+		for_each_online_cpu(cpu)
+			if (cond_func(cpu, info)) {
+				ret = smp_call_function_single(cpu, func,
+								info, wait);
+				WARN_ON_ONCE(ret);
+			}
+		preempt_enable();
+	}
+}
+EXPORT_SYMBOL(on_each_cpu_cond);
-- 
1.7.0.4
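
A minimal usage sketch of the new interface (illustration only, not part
of the patch): a caller that flushes a hypothetical per-CPU counter only
on the CPUs that actually have work pending. The names pending_work,
cpu_has_pending_work() and flush_pending() are invented for this example;
only on_each_cpu_cond() itself comes from the patch above.

#include <linux/smp.h>
#include <linux/percpu.h>
#include <linux/gfp.h>

/* Hypothetical per-cpu state, used only for this illustration. */
static DEFINE_PER_CPU(unsigned int, pending_work);

/* cond_func: called with preemption disabled; says whether @cpu needs an IPI. */
static bool cpu_has_pending_work(int cpu, void *info)
{
	return per_cpu(pending_work, cpu) != 0;
}

/* func: runs on each selected cpu in IPI context; must be fast and non-blocking. */
static void flush_pending(void *info)
{
	this_cpu_write(pending_work, 0);
}

static void flush_all_pending(void)
{
	/*
	 * GFP_KERNEL lets the internal cpumask allocation sleep, so this
	 * must be called from process context with interrupts enabled.
	 */
	on_each_cpu_cond(cpu_has_pending_work, flush_pending, NULL,
			 true, GFP_KERNEL);
}

If the cpumask allocation fails (for example when a caller passes
GFP_ATOMIC under memory pressure), on_each_cpu_cond() transparently
falls back to IPIing the selected CPUs one at a time via
smp_call_function_single(), as described in the changelog above.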