From: Yosry Ahmed <yosry.ahmed@linux.dev>
To: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Minchan Kim <minchan@kernel.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Nhat Pham <nphamcs@gmail.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv1 4/6] zsmalloc: introduce new object mapping API
Date: Wed, 29 Jan 2025 17:31:20 +0000
Message-ID: <Z5pl6OOVcb_rsgTC@google.com>
In-Reply-To: <20250129064853.2210753-5-senozhatsky@chromium.org>

On Wed, Jan 29, 2025 at 03:43:50PM +0900, Sergey Senozhatsky wrote:
> The current object mapping API is a little cumbersome. First, it's
> inconsistent: sometimes it returns with page-faults disabled and
> sometimes with page-faults enabled. Second, and most importantly,
> it enforces atomicity restrictions on its users. zs_map_object()
> has to return a linear object address, which is not always possible
> because some objects span multiple physical (non-contiguous)
> pages. For such objects zsmalloc uses a per-CPU buffer to which
> the object's data is copied before a pointer to that per-CPU
> buffer is returned to the caller. This leads to the final issue:
> an extra memcpy(). Since the caller gets a pointer to the per-CPU
> buffer, it can memcpy() data only to that buffer, and during
> zs_unmap_object() zsmalloc will memcpy() from that per-CPU buffer
> to the physical pages that the object in question spans.
>
> The new API splits functions by access mode:
> - zs_obj_read_begin(handle, local_copy)
>   Returns a pointer to handle memory. For objects that span two
>   physical pages, a local_copy buffer is used to store the
>   object's data before the address is returned to the caller.
>   Otherwise the object's page is kmap_local mapped directly.
>
> - zs_obj_read_end(handle, buf)
>   Unmaps the page if it was kmap_local mapped by zs_obj_read_begin().
>
> - zs_obj_write(handle, buf, len)
>   Copies len bytes from the compression buffer to handle memory
>   (takes care of objects that span two pages). This does not
>   need any additional (e.g. per-CPU) buffers and writes the data
>   directly to the zsmalloc pool pages.
>
> The old API will stay around until the remaining users switch
> to the new one. After that we'll also remove the zsmalloc per-CPU
> buffer and CPU hotplug handling.
I will propose removing zbud (in addition to z3fold) soon. If that gets
in, we'd only need to update the zpool and zswap code to use the new
API. I can take care of that if you want.
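
For anyone else reading along, my understanding of the intended
calling pattern is roughly the following (a sketch on my side, not
taken from this series; decompress(), dst, comp_buf and comp_len are
made-up placeholders):

	/*
	 * local_copy must be able to hold the largest object we read;
	 * it is only used when the object spans two physical pages.
	 */
	void *mem = zs_obj_read_begin(pool, handle, local_copy);

	/* read (e.g. decompress) directly from handle memory */
	decompress(dst, mem, comp_len);
	zs_obj_read_end(pool, handle, mem);

	/* writes copy straight into pool pages, no scratch buffer */
	zs_obj_write(pool, handle, comp_buf, comp_len);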
>
> Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
I have a couple of questions below, but generally LGTM:
Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>
> ---
> include/linux/zsmalloc.h | 8 +++
> mm/zsmalloc.c | 129 +++++++++++++++++++++++++++++++++++++++
> 2 files changed, 137 insertions(+)
>
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index a48cd0ffe57d..625adae8e547 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -58,4 +58,12 @@ unsigned long zs_compact(struct zs_pool *pool);
> unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
>
> void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
> +
> +void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
> + void *handle_mem);
> +void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> + void *local_copy);
Nit: Any reason to put 'end' before 'begin'? Same for the function
definitions.
> +void zs_obj_write(struct zs_pool *pool, unsigned long handle,
> + void *handle_mem, size_t mem_len);
> +
> #endif
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 8f4011713bc8..0e21bc57470b 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1371,6 +1371,135 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
> }
> EXPORT_SYMBOL_GPL(zs_unmap_object);
>
> +void zs_obj_write(struct zs_pool *pool, unsigned long handle,
> + void *handle_mem, size_t mem_len)
> +{
> + struct zspage *zspage;
> + struct zpdesc *zpdesc;
> + unsigned long obj, off;
> + unsigned int obj_idx;
> + struct size_class *class;
> +
> + WARN_ON(in_interrupt());
> +
> + /* Guarantee we can get zspage from handle safely */
> + pool_read_lock(pool);
> + obj = handle_to_obj(handle);
> + obj_to_location(obj, &zpdesc, &obj_idx);
> + zspage = get_zspage(zpdesc);
> +
> + /* Make sure migration doesn't move any pages in this zspage */
> + zspage_read_lock(zspage);
> + pool_read_unlock(pool);
> +
> + class = zspage_class(pool, zspage);
> + off = offset_in_page(class->size * obj_idx);
> +
> + if (off + class->size <= PAGE_SIZE) {
> + /* this object is contained entirely within a page */
> + void *dst = kmap_local_zpdesc(zpdesc);
> +
> + if (!ZsHugePage(zspage))
> + off += ZS_HANDLE_SIZE;
> + memcpy(dst + off, handle_mem, mem_len);
> + kunmap_local(dst);
> + } else {
> + size_t sizes[2];
> +
> + /* this object spans two pages */
> + off += ZS_HANDLE_SIZE;
Are huge pages always stored in a single page? If yes, can we just do
this before the if block for both cases:
	if (!ZsHugePage(zspage))
		off += ZS_HANDLE_SIZE;
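
With the caveat that the containment check would then have to test the
payload length rather than class->size, since off would already include
the handle. Roughly (just a sketch, not tested):

	if (!ZsHugePage(zspage))
		off += ZS_HANDLE_SIZE;

	if (off + mem_len <= PAGE_SIZE) {
		/* this object is contained entirely within a page */
		void *dst = kmap_local_zpdesc(zpdesc);

		memcpy(dst + off, handle_mem, mem_len);
		kunmap_local(dst);
	} else {
		size_t sizes[2];

		/* this object spans two pages */
		sizes[0] = PAGE_SIZE - off;
		sizes[1] = mem_len - sizes[0];
		...
	}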
> + sizes[0] = PAGE_SIZE - off;
> + sizes[1] = mem_len - sizes[0];
> +
> + memcpy_to_page(zpdesc_page(zpdesc), off,
> + handle_mem, sizes[0]);
> + zpdesc = get_next_zpdesc(zpdesc);
> + memcpy_to_page(zpdesc_page(zpdesc), 0,
> + handle_mem + sizes[0], sizes[1]);
> + }
> +
> + zspage_read_unlock(zspage);
> +}
> +EXPORT_SYMBOL_GPL(zs_obj_write);
> +
> +void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
> + void *handle_mem)
> +{
> + struct zspage *zspage;
> + struct zpdesc *zpdesc;
> + unsigned long obj, off;
> + unsigned int obj_idx;
> + struct size_class *class;
> +
> + obj = handle_to_obj(handle);
> + obj_to_location(obj, &zpdesc, &obj_idx);
> + zspage = get_zspage(zpdesc);
> + class = zspage_class(pool, zspage);
> + off = offset_in_page(class->size * obj_idx);
> +
> + if (off + class->size <= PAGE_SIZE) {
> + if (!ZsHugePage(zspage))
> + off += ZS_HANDLE_SIZE;
> + handle_mem -= off;
> + kunmap_local(handle_mem);
> + }
> +
> + zspage_read_unlock(zspage);
> +}
> +EXPORT_SYMBOL_GPL(zs_obj_read_end);
> +
> +void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
> + void *local_copy)
> +{
> + struct zspage *zspage;
> + struct zpdesc *zpdesc;
> + unsigned long obj, off;
> + unsigned int obj_idx;
> + struct size_class *class;
> + void *addr;
> +
> + WARN_ON(in_interrupt());
> +
> + /* Guarantee we can get zspage from handle safely */
> + pool_read_lock(pool);
> + obj = handle_to_obj(handle);
> + obj_to_location(obj, &zpdesc, &obj_idx);
> + zspage = get_zspage(zpdesc);
> +
> + /* Make sure migration doesn't move any pages in this zspage */
> + zspage_read_lock(zspage);
> + pool_read_unlock(pool);
> +
> + class = zspage_class(pool, zspage);
> + off = offset_in_page(class->size * obj_idx);
> +
> + if (off + class->size <= PAGE_SIZE) {
> + /* this object is contained entirely within a page */
> + addr = kmap_local_zpdesc(zpdesc);
> + addr += off;
> + } else {
> + size_t sizes[2];
> +
> + /* this object spans two pages */
> + sizes[0] = PAGE_SIZE - off;
> + sizes[1] = class->size - sizes[0];
> + addr = local_copy;
> +
> + memcpy_from_page(addr, zpdesc_page(zpdesc),
> + off, sizes[0]);
> + zpdesc = get_next_zpdesc(zpdesc);
> + memcpy_from_page(addr + sizes[0],
> + zpdesc_page(zpdesc),
> + 0, sizes[1]);
> + }
> +
> + if (!ZsHugePage(zspage))
> + addr += ZS_HANDLE_SIZE;
> +
> + return addr;
> +}
> +EXPORT_SYMBOL_GPL(zs_obj_read_begin);
> +
> /**
> * zs_huge_class_size() - Returns the size (in bytes) of the first huge
> * zsmalloc &size_class.
> --
> 2.48.1.262.g85cc9f2d1e-goog
>