From: Peter Zijlstra
To: Jeremy Fitzhardinge
Cc: Avi Kivity, Nick Piggin, Linux Kernel Mailing List, Linux Memory Management List, the arch/x86 maintainers
Subject: Re: [PATCH 1/2] x86/mm: maintain a percpu "in get_user_pages_fast" flag
Date: Sat, 28 Mar 2009 13:27:15 +0100
Message-Id: <1238243235.4039.715.camel@laptop>
In-Reply-To: <49CDAF17.5060207@goop.org>
References: <49CD37B8.4070109@goop.org> <49CD9E25.2090407@redhat.com> <49CDAF17.5060207@goop.org>

On Fri, 2009-03-27 at 22:01 -0700, Jeremy Fitzhardinge wrote:
> Avi Kivity wrote:
> > Jeremy Fitzhardinge wrote:
> >> get_user_pages_fast() relies on cross-cpu tlb flushes being a barrier
> >> between clearing and setting a pte, and before freeing a pagetable page.
> >> It usually does this by disabling interrupts to hold off IPIs, but
> >> some tlb flush implementations don't use IPIs for tlb flushes, and
> >> must use another mechanism.
> >>
> >> In this change, add in_gup_cpumask, which is a cpumask of cpus currently
> >> performing a get_user_pages_fast traversal of a pagetable. A cross-cpu
> >> tlb flush function can use this to determine whether it should hold off
> >> on the flush until the gup_fast has finished.
> >>
> >> @@ -255,6 +260,10 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
> >>  	 * address down to the page and take a ref on it.
> >>  	 */
> >>  	local_irq_disable();
> >> +
> >> +	cpu = smp_processor_id();
> >> +	cpumask_set_cpu(cpu, in_gup_cpumask);
> >> +
> >
> > This will bounce a cacheline, every time. Please wrap in CONFIG_XEN
> > and skip at runtime if Xen is not enabled.
>
> Every time? Only when running successive gup_fasts on different cpus,
> and only twice per gup_fast. (What's the typical page count? I see that
> kvm and lguest are page-at-a-time users, but presumably direct IO has
> larger batches.)

The larger the batch, the longer the irq-off latency; I've just
proposed adding a batch mechanism to gup_fast() to limit this.
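
For context, the hold-off described in the patch text amounts, on the
flush side, to waiting until no cpu in the flush target mask has its
bit set in in_gup_cpumask. A minimal sketch of that check follows; it
illustrates the mechanism under discussion, not the actual code of
patch 2/2 (which is not quoted in this thread), and
wait_for_gup_fast() is a hypothetical helper name:

	/*
	 * Sketch: a tlb flush implementation that does not use IPIs
	 * cannot rely on gup_fast's local_irq_disable() to hold it
	 * off, so before freeing pagetable pages it spins until no
	 * target cpu is inside a gup_fast pagetable walk.
	 */
	static void wait_for_gup_fast(const struct cpumask *flush_mask)
	{
		int cpu;

		for_each_cpu(cpu, flush_mask) {
			/*
			 * gup_fast sets its bit before walking and
			 * clears it before re-enabling interrupts, so
			 * once the bit drops, that cpu's walk can no
			 * longer see the ptes already cleared by the
			 * flusher.
			 */
			while (cpumask_test_cpu(cpu, in_gup_cpumask))
				cpu_relax();
		}
	}

Since both gup_fast and this wait touch the shared mask, Avi's
cacheline-bounce concern above applies to exactly these accesses,
which is why the suggestion is to skip them unless a non-IPI flush
implementation (e.g. Xen) is actually in use.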