linux-mm.kvack.org archive mirror
* [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
@ 2023-11-27 15:55 Fabio M. De Francesco
  2023-11-27 18:07 ` Nhat Pham
  2023-11-27 20:16 ` Chris Li
  0 siblings, 2 replies; 7+ messages in thread
From: Fabio M. De Francesco @ 2023-11-27 15:55 UTC (permalink / raw)
  To: Seth Jennings, Dan Streetman, Vitaly Wool, Andrew Morton,
	linux-mm, linux-kernel
  Cc: Fabio M. De Francesco, Ira Weiny

kmap_atomic() has been deprecated in favor of kmap_local_page().

Therefore, replace kmap_atomic() with kmap_local_page() in
zswap.c.

kmap_atomic() is implemented like kmap_local_page(), except that it
additionally disables page faults and preemption (the latter only in
!PREEMPT_RT kernels). The kernel virtual addresses returned by these
two APIs are only valid in the context of the callers (i.e., they
cannot be handed to other threads).

With kmap_local_page() the mappings are per thread and CPU local like
in kmap_atomic(); however, they can handle page-faults and can be called
from any context (including interrupts). The tasks that call
kmap_local_page() can be preempted and, when they are scheduled to run
again, the kernel virtual addresses are restored and are still valid.

In mm/zswap.c, the blocks of code between the mappings and un-mappings
do not depend on the above-mentioned side effects of kmap_atomic(), so
a mere replacement of the old API with the new one is all that is
required (i.e., there is no need to explicitly call pagefault_disable()
and/or preempt_disable()).

Cc: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com>
---
 mm/zswap.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 74411dfdad92..699c6ee11222 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1267,16 +1267,16 @@ bool zswap_store(struct folio *folio)
 	}
 
 	if (zswap_same_filled_pages_enabled) {
-		src = kmap_atomic(page);
+		src = kmap_local_page(page);
 		if (zswap_is_page_same_filled(src, &value)) {
-			kunmap_atomic(src);
+			kunmap_local(src);
 			entry->swpentry = swp_entry(type, offset);
 			entry->length = 0;
 			entry->value = value;
 			atomic_inc(&zswap_same_filled_pages);
 			goto insert_entry;
 		}
-		kunmap_atomic(src);
+		kunmap_local(src);
 	}
 
 	if (!zswap_non_same_filled_pages_enabled)
@@ -1422,9 +1422,9 @@ bool zswap_load(struct folio *folio)
 	spin_unlock(&tree->lock);
 
 	if (!entry->length) {
-		dst = kmap_atomic(page);
+		dst = kmap_local_page(page);
 		zswap_fill_page(dst, entry->value);
-		kunmap_atomic(dst);
+		kunmap_local(dst);
 		ret = true;
 		goto stats;
 	}
-- 
2.42.0




* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-27 15:55 [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page() Fabio M. De Francesco
@ 2023-11-27 18:07 ` Nhat Pham
  2023-11-27 20:16 ` Chris Li
  1 sibling, 0 replies; 7+ messages in thread
From: Nhat Pham @ 2023-11-27 18:07 UTC (permalink / raw)
  To: Fabio M. De Francesco
  Cc: Seth Jennings, Dan Streetman, Vitaly Wool, Andrew Morton,
	linux-mm, linux-kernel, Ira Weiny

On Mon, Nov 27, 2023 at 8:03 AM Fabio M. De Francesco
<fabio.maria.de.francesco@linux.intel.com> wrote:
>
> kmap_atomic() has been deprecated in favor of kmap_local_page().
>
> Therefore, replace kmap_atomic() with kmap_local_page() in
> zswap.c.
>
> kmap_atomic() is implemented like a kmap_local_page() which also
> disables page-faults and preemption (the latter only in !PREEMPT_RT
> kernels). The kernel virtual addresses returned by these two API are
> only valid in the context of the callers (i.e., they cannot be handed to
> other threads).
>
> With kmap_local_page() the mappings are per thread and CPU local like
> in kmap_atomic(); however, they can handle page-faults and can be called
> from any context (including interrupts). The tasks that call
> kmap_local_page() can be preempted and, when they are scheduled to run
> again, the kernel virtual addresses are restored and are still valid.
>
> In mm/zswap.c, the blocks of code between the mappings and un-mappings
> do not depend on the above-mentioned side effects of kmap_atomic(), so
> that the mere replacements of the old API with the new one is all that is
> required (i.e., there is no need to explicitly call pagefault_disable()
> and/or preempt_disable()).
>
> Cc: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: Fabio M. De Francesco <fabio.maria.de.francesco@linux.intel.com>
> ---
>  mm/zswap.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 74411dfdad92..699c6ee11222 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1267,16 +1267,16 @@ bool zswap_store(struct folio *folio)
>         }
>
>         if (zswap_same_filled_pages_enabled) {
> -               src = kmap_atomic(page);
> +               src = kmap_local_page(page);
>                 if (zswap_is_page_same_filled(src, &value)) {
> -                       kunmap_atomic(src);
> +                       kunmap_local(src);
>                         entry->swpentry = swp_entry(type, offset);
>                         entry->length = 0;
>                         entry->value = value;
>                         atomic_inc(&zswap_same_filled_pages);
>                         goto insert_entry;
>                 }
> -               kunmap_atomic(src);
> +               kunmap_local(src);
>         }
>
>         if (!zswap_non_same_filled_pages_enabled)
> @@ -1422,9 +1422,9 @@ bool zswap_load(struct folio *folio)
>         spin_unlock(&tree->lock);
>
>         if (!entry->length) {
> -               dst = kmap_atomic(page);
> +               dst = kmap_local_page(page);
>                 zswap_fill_page(dst, entry->value);
> -               kunmap_atomic(dst);
> +               kunmap_local(dst);
>                 ret = true;
>                 goto stats;
>         }
> --
> 2.42.0
>
>
Probably worth running a couple rounds of stress tests, but otherwise
LGTM. FWIW, I've wanted to do this ever since I worked on the
storing-uncompressed-pages patch.


Reviewed-by: Nhat Pham <nphamcs@gmail.com>



* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-27 15:55 [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page() Fabio M. De Francesco
  2023-11-27 18:07 ` Nhat Pham
@ 2023-11-27 20:16 ` Chris Li
  2023-11-28 14:09   ` Matthew Wilcox
  2023-11-29 11:41   ` Fabio M. De Francesco
  1 sibling, 2 replies; 7+ messages in thread
From: Chris Li @ 2023-11-27 20:16 UTC (permalink / raw)
  To: Fabio M. De Francesco
  Cc: Seth Jennings, Dan Streetman, Vitaly Wool, Andrew Morton,
	linux-mm, linux-kernel, Ira Weiny, Nhat Pham

Hi Fabio,

On Mon, Nov 27, 2023 at 8:01 AM Fabio M. De Francesco
<fabio.maria.de.francesco@linux.intel.com> wrote:
>
> kmap_atomic() has been deprecated in favor of kmap_local_page().
>
> Therefore, replace kmap_atomic() with kmap_local_page() in
> zswap.c.
>
> kmap_atomic() is implemented like a kmap_local_page() which also
> disables page-faults and preemption (the latter only in !PREEMPT_RT
> kernels). The kernel virtual addresses returned by these two API are
> only valid in the context of the callers (i.e., they cannot be handed to
> other threads).
>
> With kmap_local_page() the mappings are per thread and CPU local like
> in kmap_atomic(); however, they can handle page-faults and can be called
> from any context (including interrupts). The tasks that call
> kmap_local_page() can be preempted and, when they are scheduled to run
> again, the kernel virtual addresses are restored and are still valid.

As far as I can tell, kmap_atomic() is the same as kmap_local_page()
with the following additional code before the call to
"__kmap_local_page_prot(page, prot)", which is common between these
two functions.

        if (IS_ENABLED(CONFIG_PREEMPT_RT))
                migrate_disable();
        else
                preempt_disable();

        pagefault_disable();
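
For reference, here is a simplified sketch of how the two entry points
relate (loosely modeled on include/linux/highmem-internal.h; the
kmap_atomic_prot() helper is folded into kmap_atomic() here, so the
real code differs in detail):

        /* Plain thread-local mapping, no extra disables. */
        static inline void *kmap_local_page(struct page *page)
        {
                return __kmap_local_page_prot(page, kmap_prot);
        }

        /* Same mapping, but preceded by the disables quoted above. */
        static inline void *kmap_atomic(struct page *page)
        {
                if (IS_ENABLED(CONFIG_PREEMPT_RT))
                        migrate_disable();
                else
                        preempt_disable();

                pagefault_disable();
                return __kmap_local_page_prot(page, kmap_prot);
        }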

From the performance perspective, kmap_local_page() does less, so it
has some performance gain. I am trying to think whether it would have
an unwanted side effect of enabling interrupts and page faults while
zswap is decompressing a page.
The decompression should not generate page faults. The interrupt
enabling might introduce extra latency, but most of the page fault path
runs with interrupts enabled anyway. The time spent in decompression is
relatively small compared to the whole duration of the page fault, so
having interrupts enabled during those short windows should be fine.
"Should" is the famous last word.

I am tempted to Ack it. Let me sleep on it a bit more. BTW, thanks
for the patch.

Chris



* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-27 20:16 ` Chris Li
@ 2023-11-28 14:09   ` Matthew Wilcox
  2023-11-28 20:43     ` Chris Li
  2023-11-29 11:41   ` Fabio M. De Francesco
  1 sibling, 1 reply; 7+ messages in thread
From: Matthew Wilcox @ 2023-11-28 14:09 UTC (permalink / raw)
  To: Chris Li
  Cc: Fabio M. De Francesco, Seth Jennings, Dan Streetman, Vitaly Wool,
	Andrew Morton, linux-mm, linux-kernel, Ira Weiny, Nhat Pham

On Mon, Nov 27, 2023 at 12:16:56PM -0800, Chris Li wrote:
> Hi Fabio,
> 
> On Mon, Nov 27, 2023 at 8:01 AM Fabio M. De Francesco
> <fabio.maria.de.francesco@linux.intel.com> wrote:
> >
> > kmap_atomic() has been deprecated in favor of kmap_local_page().
> >
> > Therefore, replace kmap_atomic() with kmap_local_page() in
> > zswap.c.
> >
> > kmap_atomic() is implemented like a kmap_local_page() which also
> > disables page-faults and preemption (the latter only in !PREEMPT_RT
> > kernels). The kernel virtual addresses returned by these two API are
> > only valid in the context of the callers (i.e., they cannot be handed to
> > other threads).
> >
> > With kmap_local_page() the mappings are per thread and CPU local like
> > in kmap_atomic(); however, they can handle page-faults and can be called
> > from any context (including interrupts). The tasks that call
> > kmap_local_page() can be preempted and, when they are scheduled to run
> > again, the kernel virtual addresses are restored and are still valid.
> 
> As far as I can tell, the kmap_atomic() is the same as
> kmap_local_page() with the following additional code before calling to
> "__kmap_local_page_prot(page, prot)", which is common between these
> two functions.
> 
>         if (IS_ENABLED(CONFIG_PREEMPT_RT))
>                 migrate_disable();
>         else
>                 preempt_disable();
> 
>         pagefault_disable();
> 
> From the performance perspective, kmap_local_page() does less so it
> has some performance gain. I am trying to think would it have another
> unwanted side effect of enabling interrupt and page fault while zswap
> decompressing a page.
> The decompression should not generate page fault. The interrupt
> enabling might introduce extra latency, but most of the page fault was
> having interrupt enabled anyway. The time spent in decompression is
> relatively small compared to the whole duration of the page fault. So
> the interrupt enabling during those short windows should be fine.
> "Should" is the famous last word.

Interrupts are enabled with kmap_atomic() too.  The difference is
whether we can be preempted by a higher-priority process.



* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-28 14:09   ` Matthew Wilcox
@ 2023-11-28 20:43     ` Chris Li
  0 siblings, 0 replies; 7+ messages in thread
From: Chris Li @ 2023-11-28 20:43 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Fabio M. De Francesco, Seth Jennings, Dan Streetman, Vitaly Wool,
	Andrew Morton, linux-mm, linux-kernel, Ira Weiny, Nhat Pham

Hi Matthew,

On Tue, Nov 28, 2023 at 6:09 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > From the performance perspective, kmap_local_page() does less so it
> > has some performance gain. I am trying to think would it have another
> > unwanted side effect of enabling interrupt and page fault while zswap
> > decompressing a page.
> > The decompression should not generate page fault. The interrupt
> > enabling might introduce extra latency, but most of the page fault was
> > having interrupt enabled anyway. The time spent in decompression is
> > relatively small compared to the whole duration of the page fault. So
> > the interrupt enabling during those short windows should be fine.
> > "Should" is the famous last word.
>
> Interrupts are enabled with kmap_atomic() too.  The difference is
> whether we can be preempted by a higher-priority process.
>
You are right, thanks for the clarification.

Hi Fabio,

Acked-by: Chris Li <chrisl@kernel.org> (Google)

Chris



* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-27 20:16 ` Chris Li
  2023-11-28 14:09   ` Matthew Wilcox
@ 2023-11-29 11:41   ` Fabio M. De Francesco
  2023-11-29 19:03     ` Christopher Li
  1 sibling, 1 reply; 7+ messages in thread
From: Fabio M. De Francesco @ 2023-11-29 11:41 UTC (permalink / raw)
  To: Chris Li
  Cc: Seth Jennings, Dan Streetman, Vitaly Wool, Andrew Morton,
	linux-mm, linux-kernel, Ira Weiny, Nhat Pham, Matthew Wilcox

Hi Chris,

On Monday, 27 November 2023 21:16:56 CET Chris Li wrote:
> Hi Fabio,
> 
> On Mon, Nov 27, 2023 at 8:01 AM Fabio M. De Francesco
> 
> <fabio.maria.de.francesco@linux.intel.com> wrote:
> > kmap_atomic() has been deprecated in favor of kmap_local_page().
> > 
> > Therefore, replace kmap_atomic() with kmap_local_page() in
> > zswap.c.
> > 
> > kmap_atomic() is implemented like a kmap_local_page() which also
> > disables page-faults and preemption (the latter only in !PREEMPT_RT
> > kernels). 

Please read the sentence above again.

> > The kernel virtual addresses returned by these two API are
> > only valid in the context of the callers (i.e., they cannot be handed to
> > other threads).
> > 
> > With kmap_local_page() the mappings are per thread and CPU local like
> > in kmap_atomic(); however, they can handle page-faults and can be called
> > from any context (including interrupts). The tasks that call
> > kmap_local_page() can be preempted and, when they are scheduled to run
> > again, the kernel virtual addresses are restored and are still valid.
> 
> As far as I can tell, the kmap_atomic() is the same as
> kmap_local_page() with the following additional code before calling to
> "__kmap_local_page_prot(page, prot)", which is common between these
> two functions.
> 
>         if (IS_ENABLED(CONFIG_PREEMPT_RT))
>                 migrate_disable();
>         else
>                 preempt_disable();
> 
>         pagefault_disable();
> 

This is what I tried to explain with that sentence. I think you overlooked it 
:)

BTW, please have a look at the Highmem documentation. It was initially
written by Peter Z.; I reworked and largely extended it, authoring the
patches with my gmail address (6-7 different patches, if I remember
correctly).

There you will find everything you may want to know about these APIs
and how to do conversions from the older API to the newer one.
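
As a rough illustration (not taken from any particular file; addr and
page here just stand for a local void * and the struct page being
mapped), the usual conversion pattern looks like this; the explicit
pagefault_disable() variant is only needed when the code between the
mapping and un-mapping actually relies on page faults being disabled:

        /* Old pattern: implicitly disables page faults and preemption. */
        addr = kmap_atomic(page);
        /* ... access the page through addr ... */
        kunmap_atomic(addr);

        /* New pattern: thread-local mapping, the caller stays preemptible. */
        addr = kmap_local_page(page);
        /* ... access the page through addr ... */
        kunmap_local(addr);

        /* If the old code relied on the implicit disables, make them
         * explicit around the mapped section.
         */
        pagefault_disable();
        addr = kmap_local_page(page);
        /* ... code that must not fault ... */
        kunmap_local(addr);
        pagefault_enable();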

Thanks for acking this :)

> From the performance perspective, kmap_local_page() does less so it
> has some performance gain. I am trying to think would it have another
> unwanted side effect of enabling interrupt and page fault while zswap
> decompressing a page.
> The decompression should not generate page fault. The interrupt
> enabling might introduce extra latency, but most of the page fault was
> having interrupt enabled anyway. The time spent in decompression is
> relatively small compared to the whole duration of the page fault. So
> the interrupt enabling during those short windows should be fine.
> "Should" is the famous last word.

Here, Matthew chimed in to clarify. Thanks Matthew.
 
> I am tempted to Ack on it. Let me sleep on it a before more. BTW,
> thanks for the patch.
> 
> Chris







* Re: [PATCH] mm/zswap: Replace kmap_atomic() with kmap_local_page()
  2023-11-29 11:41   ` Fabio M. De Francesco
@ 2023-11-29 19:03     ` Christopher Li
  0 siblings, 0 replies; 7+ messages in thread
From: Christopher Li @ 2023-11-29 19:03 UTC (permalink / raw)
  To: Fabio M. De Francesco
  Cc: Seth Jennings, Dan Streetman, Vitaly Wool, Andrew Morton,
	linux-mm, linux-kernel, Ira Weiny, Nhat Pham, Matthew Wilcox

Hi Fabio,

On Wed, Nov 29, 2023 at 3:41 AM Fabio M. De Francesco
<fabio.maria.de.francesco@linux.intel.com> wrote:
> > > The kernel virtual addresses returned by these two API are
> > > only valid in the context of the callers (i.e., they cannot be handed to
> > > other threads).
> > >
> > > With kmap_local_page() the mappings are per thread and CPU local like
> > > in kmap_atomic(); however, they can handle page-faults and can be called
> > > from any context (including interrupts). The tasks that call
> > > kmap_local_page() can be preempted and, when they are scheduled to run
> > > again, the kernel virtual addresses are restored and are still valid.
> >
> > As far as I can tell, the kmap_atomic() is the same as
> > kmap_local_page() with the following additional code before calling to
> > "__kmap_local_page_prot(page, prot)", which is common between these
> > two functions.
> >
> >         if (IS_ENABLED(CONFIG_PREEMPT_RT))
> >                 migrate_disable();
> >         else
> >                 preempt_disable();
> >
> >         pagefault_disable();
> >
>
> This is what I tried to explain with that sentence. I think you overlooked it
> :)

I did read your description. It is not that I don't trust it; I want
to see how the code does what you describe at the source level. In
this case, the related code is fairly easy to isolate.

>
> BTW, please have a look at the Highmem documentation. It has initially been
> written by Peter Z. and I reworked and largely extended it authoring  the
> patches with my gmail address (6 - 7 different patches, if I remember
> correctly).
>
> You will find there everything you may want to know about these API and how to
> do conversions from the older to the newer.

Will do, thanks for the pointer.

Chris


