linux-mm.kvack.org archive mirror
* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2008-12-11 19:05 [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush Jeremy Fitzhardinge
@ 2007-07-24  0:52 ` Nick Piggin
  2008-12-12  1:59   ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2007-07-24  0:52 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Hi,

On Friday 12 December 2008 06:05, Jeremy Fitzhardinge wrote:
> Hi Nick,
>
> In Xen when we're killing the lazy vmalloc aliases, we're only concerned
> about the pagetable references to the mapped pages, not the TLB entries.

Hm? Why is that? Why wouldn't it matter if some page table page gets
written to via a stale TLB?


> For the most part eliminating the TLB flushes would be a performance
> optimisation, but there's at least one case where we need to shoot down
> aliases in an interrupt-disabled section, so the TLB shootdown IPIs
> would potentially deadlock.

So... 2.6.28 is deadlocky for you?


> I'm wondering what your thoughts are about this approach?

Doesn't work, because that's allowing virtual addresses to be reused
before they have TLBs flushed.

You could have a xen specific function which goes through the lazy maps
and unmaps their page tables, but leaves them in the virtual address
allocator (so a subsequent lazy flush will still do the TLB flush before
allowing the addresses to be reused).
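
Something like the below, completely untested and just to illustrate --
it pokes at mm/vmalloc.c internals (vmap_area_list, vmap_area_lock,
vunmap_page_range), so it would have to live there, and the name is
made up:

/* Unmap the page tables of lazily-freed areas, but leave the
 * vmap_areas in the allocator, so the normal purge still does the
 * TLB flush before those addresses can be handed out again.
 * (The per-cpu vmap blocks would need the same treatment.) */
void vm_unmap_alias_ptes(void)
{
	struct vmap_area *va;

	spin_lock(&vmap_area_lock);
	list_for_each_entry(va, &vmap_area_list, list) {
		if (va->flags & VM_LAZY_FREE)
			vunmap_page_range(va->va_start, va->va_end);
	}
	spin_unlock(&vmap_area_lock);
}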


> I'm not super-happy with the changes to __purge_vmap_area_lazy(), but
> given that we need a tri-state policy selection there, adding an enum is
> clearer than adding another boolean argument.
>
> It also raises the question of how many callers of vm_unmap_aliases()
> really care about flushing the tlbs. Presumably if we're shooting down
> some stray vmalloc mappings then nobody is actually using them at the
> time, and any corresponding TLB entries are residual. Or does leaving
> them around leave open the possibility of unwanted speculative
> references which could violate memory type rules?  Perhaps callers who
> care about that could arrange their own tlb flush?

The whole lazy-flush scheme requires the TLB flush because the core vmalloc
code needs to flush before reallocating, so it can't really be pushed out
to the caller.

PAT needs the TLBs flushed, definitely.

I'm surprised that Xen doesn't... but let's hear it :)

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2008-12-12  1:59   ` Jeremy Fitzhardinge
@ 2007-07-24  1:40     ` Nick Piggin
  2008-12-16  1:28       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2007-07-24  1:40 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Friday 12 December 2008 12:59, Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > Hi,
> >
> > On Friday 12 December 2008 06:05, Jeremy Fitzhardinge wrote:
> >> Hi Nick,
> >>
> >> In Xen when we're killing the lazy vmalloc aliases, we're only concerned
> >> about the pagetable references to the mapped pages, not the TLB entries.
> >
> > Hm? Why is that? Why wouldn't it matter if some page table page gets
> > written to via a stale TLB?
>
> No.  Well, yes, it would, but Xen itself will do whatever tlb flushes
> are necessary to keep it safe (it must, since it doesn't trust guest
> kernels).  It's fairly clever about working out which cpus need flushing
> and if other flushes have already done the job.

OK. Yeah, then the problem is simply that the guest may reuse that virtual
memory for another vmap.


> >> For the most part eliminating the TLB flushes would be a performance
> >> optimisation, but there's at least one case where we need to shoot down
> >> aliases in an interrupt-disabled section, so the TLB shootdown IPIs
> >> would potentially deadlock.
> >
> > So... 2.6.28 is deadlocky for you?
>
> No.  The deadlock is in the new dom0 code I'm working on.  I haven't
> posted it yet (well, it hasn't been merged).

OK, good.


> In this case, I'm swizzling the physical pages underlying a piece of
> guest pseudo-physical memory so that it is physically contiguous and/or
> under the device limit, so I can set up DMA buffers, swiotlb memory,
> etc.  This requires removing the mappings to the old pages and replacing
> them with new mappings, but I need to make sure the old pages have no
> other aliases before I can release them back to Xen.  (This can all
> happen in dma_alloc_coherent in a device driver with interrupts
> disabled, so the IPI causes deadlock warnings.)
>
> The TLB is irrelevant because Xen will make sure any stale entries are
> flushed appropriately before giving those pages out to any other domain.

OK.


> >> I'm wondering what your thoughts are about this approach?
> >
> > Doesn't work, because that's allowing virtual addresses to be reused
> > before they have TLBs flushed.
>
> Right, I see.  It's a question of flush on unmap or flush on map.

Yes. And flushing on unmap is easier of course, because we know exactly
what we've just unmapped.


> > You could have a xen specific function which goes through the lazy maps
> > and unmaps their page tables, but leaves them in the virtual address
> > allocator (so a subsequent lazy flush will still do the TLB flush before
> > allowing the addresses to be reused).
>
> Yes, that would work.

That would be my preferred approach.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
@ 2008-12-11 19:05 Jeremy Fitzhardinge
  2007-07-24  0:52 ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2008-12-11 19:05 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Hi Nick,

In Xen when we're killing the lazy vmalloc aliases, we're only concerned 
about the pagetable references to the mapped pages, not the TLB entries. 
For the most part eliminating the TLB flushes would be a performance 
optimisation, but there's at least one case where we need to shoot down 
aliases in an interrupt-disabled section, so the TLB shootdown IPIs 
would potentially deadlock.

I'm wondering what your thoughts are about this approach?

I'm not super-happy with the changes to __purge_vmap_area_lazy(), but 
given that we need a tri-state policy selection there, adding an enum is 
clearer than adding another boolean argument.

It also raises the question of how many callers of vm_unmap_aliases() 
really care about flushing the tlbs. Presumably if we're shooting down 
some stray vmalloc mappings then nobody is actually using them at the 
time, and any corresponding TLB entries are residual. Or does leaving 
them around leave open the possibility of unwanted speculative 
references which could violate memory type rules?  Perhaps callers who 
care about that could arrange their own tlb flush?

Thanks,
    J

===================================================================
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -41,6 +41,7 @@
 extern void *vm_map_ram(struct page **pages, unsigned int count,
 				int node, pgprot_t prot);
 extern void vm_unmap_aliases(void);
+extern void __vm_unmap_aliases(int allow_flush);
 
 #ifdef CONFIG_MMU
 extern void __init vmalloc_init(void);
===================================================================
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -458,18 +458,26 @@
 
 static atomic_t vmap_lazy_nr = ATOMIC_INIT(0);
 
+enum purge_flush {
+	PURGE_FLUSH_NEVER,
+	PURGE_FLUSH_IF_NEEDED,
+	PURGE_FLUSH_FORCE
+};
+
 /*
  * Purges all lazily-freed vmap areas.
  *
  * If sync is 0 then don't purge if there is already a purge in progress.
- * If force_flush is 1, then flush kernel TLBs between *start and *end even
- * if we found no lazy vmap areas to unmap (callers can use this to optimise
- * their own TLB flushing).
+ * 'flush' sets the TLB flushing policy between *start and *end:
+ *    PURGE_FLUSH_NEVER     caller doesn't care about TLB state, so don't flush
+ *    PURGE_FLUSH_IF_NEEDED flush if we found a lazy vmap area to unmap
+ *    PURGE_FLUSH_FORCE     always flush, to allow callers to optimise their own flushing
+ *
  * Returns with *start = min(*start, lowest purged address)
  *              *end = max(*end, highest purged address)
  */
 static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
-					int sync, int force_flush)
+				   int sync, enum purge_flush flush)
 {
 	static DEFINE_SPINLOCK(purge_lock);
 	LIST_HEAD(valist);
@@ -481,7 +489,7 @@
 	 * should not expect such behaviour. This just simplifies locking for
 	 * the case that isn't actually used at the moment anyway.
 	 */
-	if (!sync && !force_flush) {
+	if (!sync && flush != PURGE_FLUSH_FORCE) {
 		if (!spin_trylock(&purge_lock))
 			return;
 	} else
@@ -508,7 +516,7 @@
 		atomic_sub(nr, &vmap_lazy_nr);
 	}
 
-	if (nr || force_flush)
+	if ((nr && flush == PURGE_FLUSH_IF_NEEDED) || flush == PURGE_FLUSH_FORCE)
 		flush_tlb_kernel_range(*start, *end);
 
 	if (nr) {
@@ -528,7 +536,7 @@
 {
 	unsigned long start = ULONG_MAX, end = 0;
 
-	__purge_vmap_area_lazy(&start, &end, 0, 0);
+	__purge_vmap_area_lazy(&start, &end, 0, PURGE_FLUSH_IF_NEEDED);
 }
 
 /*
@@ -538,7 +546,7 @@
 {
 	unsigned long start = ULONG_MAX, end = 0;
 
-	__purge_vmap_area_lazy(&start, &end, 1, 0);
+	__purge_vmap_area_lazy(&start, &end, 1, PURGE_FLUSH_IF_NEEDED);
 }
 
 /*
@@ -847,11 +855,11 @@
  * be sure that none of the pages we have control over will have any aliases
  * from the vmap layer.
  */
-void vm_unmap_aliases(void)
+void __vm_unmap_aliases(int allow_flush)
 {
 	unsigned long start = ULONG_MAX, end = 0;
 	int cpu;
-	int flush = 0;
+	enum purge_flush flush = PURGE_FLUSH_IF_NEEDED;
 
 	if (unlikely(!vmap_initialized))
 		return;
@@ -875,7 +883,7 @@
 				s = vb->va->va_start + (i << PAGE_SHIFT);
 				e = vb->va->va_start + (j << PAGE_SHIFT);
 				vunmap_page_range(s, e);
-				flush = 1;
+				flush = PURGE_FLUSH_FORCE;
 
 				if (s < start)
 					start = s;
@@ -891,7 +899,13 @@
 		rcu_read_unlock();
 	}
 
-	__purge_vmap_area_lazy(&start, &end, 1, flush);
+	__purge_vmap_area_lazy(&start, &end, 1,
+			       allow_flush ? flush : PURGE_FLUSH_NEVER);
+}
+
+void vm_unmap_aliases(void)
+{
+	__vm_unmap_aliases(1);
 }
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2007-07-24  0:52 ` Nick Piggin
@ 2008-12-12  1:59   ` Jeremy Fitzhardinge
  2007-07-24  1:40     ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2008-12-12  1:59 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> Hi,
>
> On Friday 12 December 2008 06:05, Jeremy Fitzhardinge wrote:
>   
>> Hi Nick,
>>
>> In Xen when we're killing the lazy vmalloc aliases, we're only concerned
>> about the pagetable references to the mapped pages, not the TLB entries.
>>     
>
> Hm? Why is that? Why wouldn't it matter if some page table page gets
> written to via a stale TLB?
>   

No.  Well, yes, it would, but Xen itself will do whatever tlb flushes 
are necessary to keep it safe (it must, since it doesn't trust guest 
kernels).  It's fairly clever about working out which cpus need flushing 
and if other flushes have already done the job.

>> For the most part eliminating the TLB flushes would be a performance
>> optimisation, but there's at least one case where we need to shoot down
>> aliases in an interrupt-disabled section, so the TLB shootdown IPIs
>> would potentially deadlock.
>>     
>
> So... 2.6.28 is deadlocky for you?
>   

No.  The deadlock is in the new dom0 code I'm working on.  I haven't 
posted it yet (well, it hasn't been merged).

In this case, I'm swizzling the physical pages underlying a piece of 
guest pseudo-physical memory so that it is physically contiguous and/or 
under the device limit, so I can set up DMA buffers, swiotlb memory, 
etc.  This requires removing the mappings to the old pages and replacing 
them with new mappings, but I need to make sure the old pages have no 
other aliases before I can release them back to Xen.  (This can all 
happen in dma_alloc_coherent in a device driver with interrupts 
disabled, so the IPI causes deadlock warnings.)

The TLB is irrelevant because Xen will make sure any stale entries are 
flushed appropriately before giving those pages out to any other domain.
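
To make it concrete, the swizzle looks roughly like this; everything
except vm_unmap_aliases() is a placeholder name for code that hasn't
been posted yet:

/* Rough outline only -- the real code is still in my tree. */
static int xen_swizzle_pages(unsigned long vstart, unsigned int order,
			     unsigned int address_bits)
{
	/* Kill any lazy vmap aliases so the linear mapping is the
	 * only remaining reference to the underlying pages; Xen
	 * refuses the exchange otherwise. */
	vm_unmap_aliases();

	/* Drop the linear-map ptes for the region (placeholder). */
	xen_zap_linear_range(vstart, 1UL << order);

	/* Trade the old frames for machine-contiguous frames below
	 * the device's limit and re-establish the mapping
	 * (placeholder wrapping the XENMEM_exchange hypercall). */
	return xen_exchange_and_remap(vstart, order, address_bits);
}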

>> I'm wondering what your thoughts are about this approach?
>>     
>
> Doesn't work, because that's allowing virtual addresses to be reused
> before they have TLBs flushed.
>   

Right, I see.  It's a question of flush on unmap or flush on map.

> You could have a xen specific function which goes through the lazy maps
> and unmaps their page tables, but leaves them in the virtual address
> allocator (so a subsequent lazy flush will still do the TLB flush before
> allowing the addresses to be reused).
>   

Yes, that would work.

    J

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2007-07-24  1:40     ` Nick Piggin
@ 2008-12-16  1:28       ` Jeremy Fitzhardinge
  2008-12-30  3:42         ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2008-12-16  1:28 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> On Friday 12 December 2008 12:59, Jeremy Fitzhardinge wrote:
>   
>> Nick Piggin wrote:
>>     
>>> Hi,
>>>
>>> On Friday 12 December 2008 06:05, Jeremy Fitzhardinge wrote:
>>>       
>>>> Hi Nick,
>>>>
>>>> In Xen when we're killing the lazy vmalloc aliases, we're only concerned
>>>> about the pagetable references to the mapped pages, not the TLB entries.
>>>>         
>>> Hm? Why is that? Why wouldn't it matter if some page table page gets
>>> written to via a stale TLB?
>>>       
>> No.  Well, yes, it would, but Xen itself will do whatever tlb flushes
>> are necessary to keep it safe (it must, since it doesn't trust guest
>> kernels).  It's fairly clever about working out which cpus need flushing
>> and if other flushes have already done the job.
>>     
>
> OK. Yeah, then the problem is simply that the guest may reuse that virtual
> memory for another vmap.
>   

Hm.  What would you think of a "deferred tlb flush" flag (or 
something) to cause the next vmap to do the tlb flushes, for the case 
where the vunmap happens in a context where the flushes can't be done?
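
Something along these lines, say (pure sketch, the names are invented):

static atomic_t vmap_flush_pending = ATOMIC_INIT(0);

/* Called instead of flush_tlb_kernel_range() when the vunmap happens
 * somewhere we can't send IPIs from. */
static void defer_vmap_tlb_flush(void)
{
	atomic_set(&vmap_flush_pending, 1);
}

/* Called from alloc_vmap_area()/vb_alloc() before handing out any
 * virtual addresses, so nothing is reused behind a stale TLB entry. */
static void run_deferred_vmap_tlb_flush(void)
{
	if (atomic_xchg(&vmap_flush_pending, 0))
		flush_tlb_kernel_range(VMALLOC_START, VMALLOC_END);
}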

    J

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2008-12-16  1:28       ` Jeremy Fitzhardinge
@ 2008-12-30  3:42         ` Nick Piggin
  2008-12-30 11:27           ` Jeremy Fitzhardinge
  2009-02-17 21:57           ` Jeremy Fitzhardinge
  0 siblings, 2 replies; 17+ messages in thread
From: Nick Piggin @ 2008-12-30  3:42 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Tuesday 16 December 2008 12:28:19 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > On Friday 12 December 2008 12:59, Jeremy Fitzhardinge wrote:

> >> No.  Well, yes, it would, but Xen itself will do whatever tlb flushes
> >> are necessary to keep it safe (it must, since it doesn't trust guest
> >> kernels).  It's fairly clever about working out which cpus need flushing
> >> and if other flushes have already done the job.
> >
> > OK. Yeah, then the problem is simply that the guest may reuse that
> > virtual memory for another vmap.
>
> Hm.  What would you think of a "deferred tlb flush" flag (or
> something) to cause the next vmap to do the tlb flushes, for the case
> where the vunmap happens in a context where the flushes can't be done?

Sorry to get back to you late... I would just prefer to have a flushing mode
that clears page tables but leaves the vm entries there that will get picked
up and flushed naturally as needed.

I have patches to move the tlb flushing to an asynchronous process context...
but all tweaks to that (including flushing at vmap) are just variations on the
existing flushing scheme and don't solve your problem, so I don't think we
really need to change that for the moment (my patches are mainly for latency
improvement and to allow vunmap to be usable from interrupt context).




^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2008-12-30  3:42         ` Nick Piggin
@ 2008-12-30 11:27           ` Jeremy Fitzhardinge
  2009-02-17 21:57           ` Jeremy Fitzhardinge
  1 sibling, 0 replies; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2008-12-30 11:27 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> I have patches to move the tlb flushing to an asynchronous process context...
> but all tweaks to that (including flushing at vmap) are just variations on the
> existing flushing scheme and don't solve your problem, so I don't think we
> really need to change that for the moment (my patches are mainly for latency
> improvement and to allow vunmap to be usable from interrupt context).
>   

Well, that's basically what I want - I want to use vunmap in an 
interrupts-disabled context.  Any other possibility of deferring tlb 
flushes is pure bonus and not all that important.

But it also occurred to me that Xen doesn't use IPIs for cross-cpu TLB 
flushes (it uses a hypercall instead), so it shouldn't be an issue anyway.  
I haven't had a chance to look at what's really going on there.
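
(For reference, the pv backend ends up doing something roughly like the
below rather than sending an IPI -- simplified from memory, so don't
take the details as gospel; the point is just that it's a single
hypercall and Xen works out which physical cpus actually need flushing.)

static void xen_flush_tlb_all_vcpus(void)
{
	struct mmuext_op op = {
		.cmd = MMUEXT_TLB_FLUSH_ALL,	/* all vcpus of this domain */
	};

	if (HYPERVISOR_mmuext_op(&op, 1, NULL, DOMID_SELF))
		BUG();
}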

    J


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2008-12-30  3:42         ` Nick Piggin
  2008-12-30 11:27           ` Jeremy Fitzhardinge
@ 2009-02-17 21:57           ` Jeremy Fitzhardinge
  2009-02-19 11:54             ` Nick Piggin
  1 sibling, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2009-02-17 21:57 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> I have patches to move the tlb flushing to an asynchronous process context...
> but all tweaks to that (including flushing at vmap) are just variations on the
> existing flushing scheme and don't solve your problem, so I don't think we
> really need to change that for the moment (my patches are mainly for latency
> improvement and to allow vunmap to be usable from interrupt context).
>   

Hi Nick,

I'm very interested in being able to call vm_unmap_aliases() from 
interrupt context.  Does the work you mention here encompass that?

For Xen dom0, when someone does something like dma_alloc_coherent, we 
allocate the memory as normal, and then swizzle the underlying physical 
pages to be machine physically contiguous (vs contiguous pseudo-physical 
guest memory), and within the addressable range for the device.  In 
order to do that, we need to make sure the pages are only mapped by the 
linear mapping, and there are no other aliases.

And since drivers are free to allocate dma memory at interrupt time, 
this needs to happen at interrupt time too.

(The tlb flush issue that started this thread should be a non-issue for 
Xen, at least, because all cross-cpu tlb flushes should happen via a 
hypercall rather than kernel-initiated IPIs, so there's no possibility 
of deadlock.  Though I'll happily admit that taking advantage of the 
properties of a particular implementation is not very 
pretty...)

Thanks,
    J



^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-17 21:57           ` Jeremy Fitzhardinge
@ 2009-02-19 11:54             ` Nick Piggin
  2009-02-19 17:02               ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2009-02-19 11:54 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Wednesday 18 February 2009 08:57:56 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > I have patches to move the tlb flushing to an asynchronous process
> > context... but all tweaks to that (including flushing at vmap) are just
> > variations on the existing flushing scheme and don't solve your problem,
> > so I don't think we really need to change that for the moment (my patches
> > are mainly for latency improvement and to allow vunmap to be usable from
> > interrupt context).
>
> Hi Nick,
>
> I'm very interested in being able to call vm_unmap_aliases() from
> interrupt context.  Does the work you mention here encompass that?

No, and it can't because we can't do the global kernel tlb flush
from interrupt context.

There is basically no point in doing the vm_unmap_aliases from
interrupt context without doing the global TLB flush as well,
because you still cannot reuse the virtual memory, you still have
possible aliases to it, and you still need to schedule a TLB flush
at some point anyway.


> For Xen dom0, when someone does something like dma_alloc_coherent, we
> allocate the memory as normal, and then swizzle the underlying physical
> pages to be machine physically contiguous (vs contiguous pseudo-physical
> guest memory), and within the addressable range for the device.  In
> order to do that, we need to make sure the pages are only mapped by the
> linear mapping, and there are no other aliases.

These are just stale aliases that will no longer be operated on
unless there is a kernel bug -- so can you just live with them,
or is it a security issue of memory access escaping its domain?


> And since drivers are free to allocate dma memory at interrupt time,
> this needs to happen at interrupt time too.
>
> (The tlb flush issue that started this thread should be a non-issue for
> Xen, at least, because all cross-cpu tlb flushes should happen via a
> hypercall rather than kernel-initiated IPIs, so there's no possibility
> of deadlock.  Though I'll happily admit that taking advantage of the
> properties of a particular implementation is not very
> pretty...)

If there is really no other way around it, it would be possible to
allow arch code to take advantage of this if it knows its TLB
flush is interrupt safe.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-19 11:54             ` Nick Piggin
@ 2009-02-19 17:02               ` Jeremy Fitzhardinge
  2009-02-19 17:41                 ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2009-02-19 17:02 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> On Wednesday 18 February 2009 08:57:56 Jeremy Fitzhardinge wrote:
>   
>> Nick Piggin wrote:
>>     
>>> I have patches to move the tlb flushing to an asynchronous process
>>> context... but all tweaks to that (including flushing at vmap) are just
>>> variations on the existing flushing scheme and don't solve your problem,
>>> so I don't think we really need to change that for the moment (my patches
>>> are mainly for latency improvement and to allow vunmap to be usable from
>>> interrupt context).
>>>       
>> Hi Nick,
>>
>> I'm very interested in being able to call vm_unmap_aliases() from
>> interrupt context.  Does the work you mention here encompass that?
>>     
>
> No, and it can't because we can't do the global kernel tlb flush
> from interrupt context.
>
> There is basically no point in doing the vm_unmap_aliases from
> interrupt context without doing the global TLB flush as well,
> because you still cannot reuse the virtual memory, you still have
> possible aliases to it, and you still need to schedule a TLB flush
> at some point anyway.
>   

But that's only an issue when you actually do want to reuse the virtual 
address space.  Couldn't you set a flag saying "tlb flush needed", so 
when cpu X is about to use some of that address space, it flushes 
first?  Avoids the need for synchronous cross-cpu tlb flushes.  It 
assumes they're not currently using that address space, but I think that 
would indicate a bug anyway.

(Xen does something like this internally to either defer or avoid many 
expensive tlb operations.)

>> For Xen dom0, when someone does something like dma_alloc_coherent, we
>> allocate the memory as normal, and then swizzle the underlying physical
>> pages to be machine physically contiguous (vs contiguous pseudo-physical
>> guest memory), and within the addressable range for the device.  In
>> order to do that, we need to make sure the pages are only mapped by the
>> linear mapping, and there are no other aliases.
>>     
>
> These are just stale aliases that will no longer be operated on
> unless there is a kernel bug -- so can you just live with them,
> or is it a security issue of memory access escaping its domain?
>   

The underlying physical page is being exchanged, so the old page is 
being returned to Xen's free page pool.  It will refuse to do the 
exchange if the guest still has pagetable references to the page.


>> And since drivers are free to allocate dma memory at interrupt time,
>> this needs to happen at interrupt time too.
>>
>> (The tlb flush issue that started this thread should be a non-issue for
>> Xen, at least, because all cross-cpu tlb flushes should happen via a
>> hypercall rather than kernel-initiated IPIs, so there's no possibility
>> of deadlock.  Though I'll happily admit that taking advantage of the
>> properties of a particular implementation is not very
>> pretty...)
>>     
>
> > If there is really no other way around it, it would be possible to
> allow arch code to take advantage of this if it knows its TLB
> flush is interrupt safe.
>   

It's almost safe.  I've got this patch in my tree to tie up the 
flush_tlb_all loose end, though I won't claim it's pretty.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-19 17:02               ` Jeremy Fitzhardinge
@ 2009-02-19 17:41                 ` Nick Piggin
  2009-02-19 19:11                   ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2009-02-19 17:41 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Friday 20 February 2009 04:02:38 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > On Wednesday 18 February 2009 08:57:56 Jeremy Fitzhardinge wrote:
> >> Nick Piggin wrote:
> >>> I have patches to move the tlb flushing to an asynchronous process
> >>> context... but all tweaks to that (including flushing at vmap) are just
> >>> variations on the existing flushing scheme and don't solve your
> >>> problem, so I don't think we really need to change that for the moment
> >>> (my patches are mainly for latency improvement and to allow vunmap to
> >>> be usable from interrupt context).
> >>
> >> Hi Nick,
> >>
> >> I'm very interested in being able to call vm_unmap_aliases() from
> >> interrupt context.  Does the work you mention here encompass that?
> >
> > No, and it can't because we can't do the global kernel tlb flush
> > from interrupt context.
> >
> > There is basically no point in doing the vm_unmap_aliases from
> > interrupt context without doing the global TLB flush as well,
> > because you still cannot reuse the virtual memory, you still have
> > possible aliases to it, and you still need to schedule a TLB flush
> > at some point anyway.
>
> But that's only an issue when you actually do want to reuse the virtual
> address space.  Couldn't you set a flag saying "tlb flush needed", so
> when cpu X is about to use some of that address space, it flushes
> first?  Avoids the need for synchronous cross-cpu tlb flushes.  It
> assumes they're not currently using that address space, but I think that
> would indicate a bug anyway.

Then what is the point of the vm_unmap_aliases? If you are doing it
for security it won't work because other CPUs might still be able
to write through dangling TLBs. If you are not doing it for
security then it does not need to be done at all.

Unless it is something strange that Xen does with the page table
structure and you just need to get rid of those?


> (Xen does something like this internally to either defer or avoid many
> expensive tlb operations.)
>
> >> For Xen dom0, when someone does something like dma_alloc_coherent, we
> >> allocate the memory as normal, and then swizzle the underlying physical
> >> pages to be machine physically contiguous (vs contiguous pseudo-physical
> >> guest memory), and within the addressable range for the device.  In
> >> order to do that, we need to make sure the pages are only mapped by the
> >> linear mapping, and there are no other aliases.
> >
> > These are just stale aliases that will no longer be operated on
> > unless there is a kernel bug -- so can you just live with them,
> > or is it a security issue of memory access escaping its domain?
>
> The underlying physical page is being exchanged, so the old page is
> being returned to Xen's free page pool.  It will refuse to do the
> exchange if the guest still has pagetable references to the page.

But it refuses to do this because it is worried about dangling TLBs?
Or some implementation detail that can't handle the page table
entries?


> > If there is really no other way around it, it would be possible to
> > allow arch code to take advantage of this if it knows its TLB
> > flush is interrupt safe.
>
> It's almost safe.  I've got this patch in my tree to tie up the
> flush_tlb_all loose end, though I won't claim it's pretty.

Hmm. Let's just try to establish that it is really required first.

Or... what if we just allow a compile and/or boot time flag to direct
that it does not want lazy vmap unmapping and it will just revert to
synchronous unmapping? If Xen needs lots of flushing anyway it might
not be a win anyway.


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-19 17:41                 ` Nick Piggin
@ 2009-02-19 19:11                   ` Jeremy Fitzhardinge
  2009-02-23  4:14                     ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2009-02-19 19:11 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> Then what is the point of the vm_unmap_aliases? If you are doing it
> for security it won't work because other CPUs might still be able
> to write through dangling TLBs. If you are not doing it for
> security then it does not need to be done at all.
>   

Xen will make sure any dangling tlb entries are flushed before handing 
the page out to anyone else.

> Unless it is something strange that Xen does with the page table
> structure and you just need to get rid of those?
>   

Yeah.  A pte pointing at a page holds a reference on it, saying that it 
belongs to the domain.  You can't return it to Xen until the refcount is 0.

>> (Xen does something like this internally to either defer or avoid many
>> expensive tlb operations.)
>>
>>     
>>>> For Xen dom0, when someone does something like dma_alloc_coherent, we
>>>> allocate the memory as normal, and then swizzle the underlying physical
>>>> pages to be machine physically contiguous (vs contiguous pseudo-physical
>>>> guest memory), and within the addressable range for the device.  In
>>>> order to do that, we need to make sure the pages are only mapped by the
>>>> linear mapping, and there are no other aliases.
>>>>         
>>> These are just stale aliases that will no longer be operated on
>>> unless there is a kernel bug -- so can you just live with them,
>>> or is it a security issue of memory access escaping its domain?
>>>       
>> The underlying physical page is being exchanged, so the old page is
>> being returned to Xen's free page pool.  It will refuse to do the
>> exchange if the guest still has pagetable references to the page.
>>     
>
> But it refuses to do this because it is worried about dangling TLBs?
> Or some implementation detail that can't handle the page table
> entries?
>   

Right.  The actual pte pointing at the page holds the reference.  We need 
to drop all the references before doing the exchange.

> Hmm. Let's just try to establish that it is really required first.
>   

Well, it's desirable anyway.  Using IPIs for any kind of tlb flushing 
is pretty pessimal under Xen (or any virtual environment); Xen has a 
much better idea of which real cpus have stale tlb state for which vcpus.

> Or... what if we just allow a compile and/or boot time flag to direct
> that it does not want lazy vmap unmapping and it will just revert to
> synchronous unmapping? If Xen needs lots of flushing anyway it might
> not be a win anyway.
>   

That may be worth considering.

    J


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-19 19:11                   ` Jeremy Fitzhardinge
@ 2009-02-23  4:14                     ` Nick Piggin
  2009-02-23  7:30                       ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2009-02-23  4:14 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Friday 20 February 2009 06:11:32 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > Then what is the point of the vm_unmap_aliases? If you are doing it
> > for security it won't work because other CPUs might still be able
> > to write through dangling TLBs. If you are not doing it for
> > security then it does not need to be done at all.
>
> Xen will make sure any dangling tlb entries are flushed before handing
> the page out to anyone else.
>
> > Unless it is something strange that Xen does with the page table
> > structure and you just need to get rid of those?
>
> Yeah.  A pte pointing at a page holds a reference on it, saying that it
> belongs to the domain.  You can't return it to Xen until the refcount is 0.

OK. Then I will remember to find some time to get the interrupt
safe patches working. I wonder why you can't just return it to
Xen when (or have Xen hold it somewhere until) the refcount
reaches 0?

Anyway...

> > Or... what if we just allow a compile and/or boot time flag to direct
> > that it does not want lazy vmap unmapping and it will just revert to
> > synchronous unmapping? If Xen needs lots of flushing anyway it might
> > not be a win anyway.
>
> That may be worth considering.

... in the meantime, shall we just do this for Xen? It is probably
safer and may end up with no worse performance on Xen anyway. If
we get more vmap users and it becomes important, you could look at
more sophisticated ways of doing this. Eg. a page could be flagged
if it potentially has lazy vmaps.
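
E.g. something like the below -- the flag is completely made up here,
and it would need a spare page flag or have to overload an existing one:

/* Illustration only. */
#define PG_vmap_lazy	PG_owner_priv_1

static inline void mark_page_vmap_lazy(struct page *page)
{
	set_bit(PG_vmap_lazy, &page->flags);
}

static inline int page_has_lazy_vmap(struct page *page)
{
	return test_and_clear_bit(PG_vmap_lazy, &page->flags);
}

Then Xen would only need to hunt down aliases for pages that actually
went through vmap.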


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-23  4:14                     ` Nick Piggin
@ 2009-02-23  7:30                       ` Jeremy Fitzhardinge
  2009-02-23  9:13                         ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2009-02-23  7:30 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> On Friday 20 February 2009 06:11:32 Jeremy Fitzhardinge wrote:
>   
>> Nick Piggin wrote:
>>     
>>> Then what is the point of the vm_unmap_aliases? If you are doing it
>>> for security it won't work because other CPUs might still be able
>>> to write through dangling TLBs. If you are not doing it for
>>> security then it does not need to be done at all.
>>>       
>> Xen will make sure any dangling tlb entries are flushed before handing
>> the page out to anyone else.
>>
>>     
>>> Unless it is something strange that Xen does with the page table
>>> structure and you just need to get rid of those?
>>>       
>> Yeah.  A pte pointing at a page holds a reference on it, saying that it
>> belongs to the domain.  You can't return it to Xen until the refcount is 0.
>>     
>
> OK. Then I will remember to find some time to get the interrupt
> safe patches working. I wonder why you can't just return it to
> Xen when (or have Xen hold it somewhere until) the refcount
> reaches 0?
>   

It would still need to allocate a page in the meantime, which could fail 
because the domain has hit its hard memory limit (which will be the 
common case, because a domain generally starts with its full complement 
of memory).  The nice thing about the exchange is that there's no 
extra accounting to worry about.

>>> Or... what if we just allow a compile and/or boot time flag to direct
>>> that it does not want lazy vmap unmapping and it will just revert to
>>> synchronous unmapping? If Xen needs lots of flushing anyway it might
>>> not be a win anyway.
>>>       
>> That may be worth considering.
>>     
>
> ... in the meantime, shall we just do this for Xen? It is probably
> safer and may end up with no worse performance on Xen anyway. If
> we get more vmap users and it becomes important, you could look at
> more sophisticated ways of doing this. Eg. a page could be flagged
> if it potentially has lazy vmaps.
>   

OK.  Do you want to do the patch, or shall I?

    J


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-23  7:30                       ` Jeremy Fitzhardinge
@ 2009-02-23  9:13                         ` Nick Piggin
  2009-02-23 19:27                           ` Jeremy Fitzhardinge
  0 siblings, 1 reply; 17+ messages in thread
From: Nick Piggin @ 2009-02-23  9:13 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Monday 23 February 2009 18:30:14 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > On Friday 20 February 2009 06:11:32 Jeremy Fitzhardinge wrote:
> >> Nick Piggin wrote:
> >>> Then what is the point of the vm_unmap_aliases? If you are doing it
> >>> for security it won't work because other CPUs might still be able
> >>> to write through dangling TLBs. If you are not doing it for
> >>> security then it does not need to be done at all.
> >>
> >> Xen will make sure any dangling tlb entries are flushed before handing
> >> the page out to anyone else.
> >>
> >>> Unless it is something strange that Xen does with the page table
> >>> structure and you just need to get rid of those?
> >>
> >> Yeah.  A pte pointing at a page holds a reference on it, saying that it
> >> belongs to the domain.  You can't return it to Xen until the refcount is
> >> 0.
> >
> > OK. Then I will remember to find some time to get the interrupt
> > safe patches working. I wonder why you can't just return it to
> > Xen when (or have Xen hold it somewhere until) the refcount
> > reaches 0?
>
> It would still need to allocate a page in the meantime, which could fail
> because the domain has hit its hard memory limit (which will be the
> common case, because a domain generally starts with its full complement
> of memory).  The nice thing about the exchange is that there's no
> extra accounting to worry about.

OK, well I don't really understand the details but I trust you if
you say it's hard :)


> >>> Or... what if we just allow a compile and/or boot time flag to direct
> >>> that it does not want lazy vmap unmapping and it will just revert to
> >>> synchronous unmapping? If Xen needs lots of flushing anyway it might
> >>> not be a win anyway.
> >>
> >> That may be worth considering.
> >
> > ... in the meantime, shall we just do this for Xen? It is probably
> > safer and may end up with no worse performance on Xen anyway. If
> > we get more vmap users and it becomes important, you could look at
> > more sophisticated ways of doing this. Eg. a page could be flagged
> > if it potentially has lazy vmaps.
>
> OK.  Do you want to do the patch, or shall I?

Here's a start for you. I think it gets rid of all the dead code and
data without introducing any actual conditional compilation...

---
 mm/vmalloc.c |   66 ++++++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 48 insertions(+), 18 deletions(-)

Index: linux-2.6/mm/vmalloc.c
===================================================================
--- linux-2.6.orig/mm/vmalloc.c
+++ linux-2.6/mm/vmalloc.c
@@ -29,6 +29,11 @@
 #include <asm/uaccess.h>
 #include <asm/tlbflush.h>
 
+#ifdef CONFIG_VMAP_NO_LAZY_FLUSH
+#define VMAP_LAZY_FLUSHES 0
+#else
+#define VMAP_LAZY_FLUSHES 1
+#endif
 
 /*** Page table manipulation functions ***/
 
@@ -376,7 +381,7 @@ retry:
 found:
 	if (addr + size > vend) {
 		spin_unlock(&vmap_area_lock);
-		if (!purged) {
+		if (VMAP_LAZY_FLUSHES && !purged) {
 			purge_vmap_area_lazy();
 			purged = 1;
 			goto retry;
@@ -413,7 +418,10 @@ static void __free_vmap_area(struct vmap
 	RB_CLEAR_NODE(&va->rb_node);
 	list_del_rcu(&va->list);
 
-	call_rcu(&va->rcu_head, rcu_free_va);
+	if (VMAP_LAZY_FLUSHES)
+		call_rcu(&va->rcu_head, rcu_free_va);
+	else
+		kfree(va);
 }
 
 /*
@@ -450,8 +458,10 @@ static void vmap_debug_free_range(unsign
 	 * faster).
 	 */
 #ifdef CONFIG_DEBUG_PAGEALLOC
-	vunmap_page_range(start, end);
-	flush_tlb_kernel_range(start, end);
+	if (VMAP_LAZY_FLUSHES) {
+		vunmap_page_range(start, end);
+		flush_tlb_kernel_range(start, end);
+	}
 #endif
 }
 
@@ -571,10 +581,16 @@ static void purge_vmap_area_lazy(void)
  */
 static void free_unmap_vmap_area_noflush(struct vmap_area *va)
 {
-	va->flags |= VM_LAZY_FREE;
-	atomic_add((va->va_end - va->va_start) >> PAGE_SHIFT, &vmap_lazy_nr);
-	if (unlikely(atomic_read(&vmap_lazy_nr) > lazy_max_pages()))
-		try_purge_vmap_area_lazy();
+	if (VMAP_LAZY_FLUSHES) {
+		va->flags |= VM_LAZY_FREE;
+		atomic_add((va->va_end - va->va_start) >> PAGE_SHIFT,
+							&vmap_lazy_nr);
+		if (unlikely(atomic_read(&vmap_lazy_nr) > lazy_max_pages()))
+			try_purge_vmap_area_lazy();
+	} else {
+		vunmap_page_range(va->va_start, va->va_end);
+		flush_tlb_kernel_range(va->va_start, va->va_end);
+	}
 }
 
 /*
@@ -610,6 +626,15 @@ static void free_unmap_vmap_area_addr(un
 /*** Per cpu kva allocator ***/
 
 /*
+ * This does lazy flushing as well, so don't call it if the arch doesn't want
+ * lazy vmap kva flushes... The scalability aspect should be less important
+ * in that case anyway seeing as kernel tlb flushing tends not to be scalable.
+ * It would be possible to make this work without lazy tlb flushing if it
+ * was really a big deal.
+ */
+
+
+/*
  * vmap space is limited especially on 32 bit architectures. Ensure there is
  * room for at least 16 percpu vmap blocks per CPU.
  */
@@ -877,6 +902,9 @@ void vm_unmap_aliases(void)
 	int cpu;
 	int flush = 0;
 
+	if (!VMAP_LAZY_FLUSHES)
+		return;
+
 	if (unlikely(!vmap_initialized))
 		return;
 
@@ -937,7 +965,7 @@ void vm_unmap_ram(const void *mem, unsig
 	debug_check_no_locks_freed(mem, size);
 	vmap_debug_free_range(addr, addr+size);
 
-	if (likely(count <= VMAP_MAX_ALLOC))
+	if (VMAP_LAZY_FLUSHES && likely(count <= VMAP_MAX_ALLOC))
 		vb_free(mem, size);
 	else
 		free_unmap_vmap_area_addr(addr);
@@ -959,7 +987,7 @@ void *vm_map_ram(struct page **pages, un
 	unsigned long addr;
 	void *mem;
 
-	if (likely(count <= VMAP_MAX_ALLOC)) {
+	if (VMAP_LAZY_FLUSHES && likely(count <= VMAP_MAX_ALLOC)) {
 		mem = vb_alloc(size, GFP_KERNEL);
 		if (IS_ERR(mem))
 			return NULL;
@@ -988,14 +1016,16 @@ void __init vmalloc_init(void)
 	struct vm_struct *tmp;
 	int i;
 
-	for_each_possible_cpu(i) {
-		struct vmap_block_queue *vbq;
-
-		vbq = &per_cpu(vmap_block_queue, i);
-		spin_lock_init(&vbq->lock);
-		INIT_LIST_HEAD(&vbq->free);
-		INIT_LIST_HEAD(&vbq->dirty);
-		vbq->nr_dirty = 0;
+	if (VMAP_LAZY_FLUSHES) {
+		for_each_possible_cpu(i) {
+			struct vmap_block_queue *vbq;
+
+			vbq = &per_cpu(vmap_block_queue, i);
+			spin_lock_init(&vbq->lock);
+			INIT_LIST_HEAD(&vbq->free);
+			INIT_LIST_HEAD(&vbq->dirty);
+			vbq->nr_dirty = 0;
+		}
 	}
 
 	/* Import existing vmlist entries. */

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-23  9:13                         ` Nick Piggin
@ 2009-02-23 19:27                           ` Jeremy Fitzhardinge
  2009-02-24 12:23                             ` Nick Piggin
  0 siblings, 1 reply; 17+ messages in thread
From: Jeremy Fitzhardinge @ 2009-02-23 19:27 UTC (permalink / raw)
  To: Nick Piggin
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

Nick Piggin wrote:
> Here's a start for you. I think it gets rid of all the dead code and
> data without introducing any actual conditional compilation...
>   

OK, I can get started with this, but it will need to be a runtime 
switch; a Xen kernel running native is just a normal kernel, and I don't 
think we want to disable lazy flushes in that case.

    J


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush
  2009-02-23 19:27                           ` Jeremy Fitzhardinge
@ 2009-02-24 12:23                             ` Nick Piggin
  0 siblings, 0 replies; 17+ messages in thread
From: Nick Piggin @ 2009-02-24 12:23 UTC (permalink / raw)
  To: Jeremy Fitzhardinge
  Cc: Andrew Morton, Linux Kernel Mailing List,
	Linux Memory Management List, the arch/x86 maintainers,
	Arjan van de Ven

On Tuesday 24 February 2009 06:27:01 Jeremy Fitzhardinge wrote:
> Nick Piggin wrote:
> > Here's a start for you. I think it gets rid of all the dead code and
> > data without introducing any actual conditional compilation...
>
> OK, I can get started with this, but it will need to be a runtime
> switch; a Xen kernel running native is just a normal kernel, and I don't
> think we want to disable lazy flushes in that case.

That's fine, just make it a constant 1 if !CONFIG_XEN? And otherwise
a variable?
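
I.e. something like this (sketch only):

#ifdef CONFIG_XEN
extern int vmap_lazy_flushes;		/* cleared by the Xen init code when
					   running paravirtualized */
#define VMAP_LAZY_FLUSHES	vmap_lazy_flushes
#else
#define VMAP_LAZY_FLUSHES	1	/* constant, so the lazy paths
					   compile away as before */
#endif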


^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2009-02-24 12:23 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-12-11 19:05 [PATCH RFC] vm_unmap_aliases: allow callers to inhibit TLB flush Jeremy Fitzhardinge
2007-07-24  0:52 ` Nick Piggin
2008-12-12  1:59   ` Jeremy Fitzhardinge
2007-07-24  1:40     ` Nick Piggin
2008-12-16  1:28       ` Jeremy Fitzhardinge
2008-12-30  3:42         ` Nick Piggin
2008-12-30 11:27           ` Jeremy Fitzhardinge
2009-02-17 21:57           ` Jeremy Fitzhardinge
2009-02-19 11:54             ` Nick Piggin
2009-02-19 17:02               ` Jeremy Fitzhardinge
2009-02-19 17:41                 ` Nick Piggin
2009-02-19 19:11                   ` Jeremy Fitzhardinge
2009-02-23  4:14                     ` Nick Piggin
2009-02-23  7:30                       ` Jeremy Fitzhardinge
2009-02-23  9:13                         ` Nick Piggin
2009-02-23 19:27                           ` Jeremy Fitzhardinge
2009-02-24 12:23                             ` Nick Piggin
