From: Sergey Senozhatsky <senozhatsky@chromium.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Yosry Ahmed <yosry.ahmed@linux.dev>,
Hillf Danton <hdanton@sina.com>, Kairui Song <ryncsn@gmail.com>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Minchan Kim <minchan@kernel.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Sergey Senozhatsky <senozhatsky@chromium.org>
Subject: [PATCH v10 14/19] zsmalloc: introduce new object mapping API
Date: Mon, 3 Mar 2025 11:03:23 +0900
Message-ID: <20250303022425.285971-15-senozhatsky@chromium.org>
In-Reply-To: <20250303022425.285971-1-senozhatsky@chromium.org>

The current object mapping API is a little cumbersome.  First, it is
inconsistent: it sometimes returns with page-faults disabled and
sometimes with page-faults enabled.  Second, and most importantly, it
enforces atomicity restrictions on its users.  zs_map_object() has to
return a linear object address, which is not always possible because
some objects span multiple physical (non-contiguous) pages.  For such
objects zsmalloc uses a per-CPU buffer to which the object's data is
copied before a pointer to that per-CPU buffer is returned to the
caller.  This leads to another, final, issue: an extra memcpy().
Since the caller gets a pointer to the per-CPU buffer, it can
memcpy() data only into that buffer, and during zs_unmap_object()
zsmalloc memcpy()-s from that per-CPU buffer to the physical pages
that the object in question spans.
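
For illustration, a rough sketch of a typical write with the current
API (comp_buf/comp_len are made-up caller-side names):

        /* old API: caller runs in an atomic (non-preemptible) section */
        dst = zs_map_object(pool, handle, ZS_MM_WO);
        /* for an object that spans two pages dst is a per-CPU buffer */
        memcpy(dst, comp_buf, comp_len);
        /* ... and unmap does a second memcpy() into the pool pages */
        zs_unmap_object(pool, handle);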

The new API splits the functions by access mode:

- zs_obj_read_begin(handle, local_copy)
  Returns a pointer to the handle's memory.  For objects that span
  two physical pages, the caller-provided local_copy buffer is used
  to hold the object's data before the address is returned to the
  caller.  Otherwise, the object's page is kmap_local mapped directly.

- zs_obj_read_end(handle, buf)
  Unmaps the page if it was kmap_local mapped by zs_obj_read_begin().

- zs_obj_write(handle, buf, len)
  Copies len bytes from the compression buffer to the handle's memory
  (taking care of objects that span two pages).  This needs no
  additional (e.g. per-CPU) buffers and writes the data directly to
  the zsmalloc pool pages.
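
A rough usage sketch of the new API (local_copy stands for any
caller-provided buffer large enough to hold the object; the
decompress() call and the buffer names are illustrative only):

        void *mem;

        /* read: an extra copy happens only if the object spans two pages */
        mem = zs_obj_read_begin(pool, handle, local_copy);
        decompress(mem, obj_len, page_buf);
        zs_obj_read_end(pool, handle, mem);

        /* write: data is copied straight to the pool pages */
        zs_obj_write(pool, handle, comp_buf, comp_len);

The object's zspage stays pinned (migration is blocked) between
zs_obj_read_begin() and zs_obj_read_end().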

In terms of performance, on a synthetic and completely reproducible
test that allocates a fixed number of objects of fixed sizes and
iterates over those objects, first mapping them in RO and then in RW
mode:

OLD API
=======
First 3 results out of 10
369,205,778 instructions # 0.80 insn per cycle
40,467,926 branches # 113.732 M/sec
369,002,122 instructions # 0.62 insn per cycle
40,426,145 branches # 189.361 M/sec
369,036,706 instructions # 0.63 insn per cycle
40,430,860 branches # 204.105 M/sec
[..]
NEW API
=======
First 3 results out of 10
265,799,293 instructions # 0.51 insn per cycle
29,834,567 branches # 170.281 M/sec
265,765,970 instructions # 0.55 insn per cycle
29,829,019 branches # 161.602 M/sec
265,764,702 instructions # 0.51 insn per cycle
29,828,015 branches # 189.677 M/sec
[..]
T-test on all 10 runs
=====================
Difference at 95.0% confidence
-1.03219e+08 +/- 55308.7
-27.9705% +/- 0.0149878%
(Student's t, pooled s = 58864.4)

The old API will stay around until the remaining users switch to the
new one.  After that we will also remove the zsmalloc per-CPU buffer
and the CPU hotplug handling.

The split of map(RO) and map(WO) into read_{begin/end}/write was
suggested by Yosry Ahmed.

Suggested-by: Yosry Ahmed <yosry.ahmed@linux.dev>
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
include/linux/zsmalloc.h | 8 +++
mm/zsmalloc.c | 125 +++++++++++++++++++++++++++++++++++++++
2 files changed, 133 insertions(+)
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index a48cd0ffe57d..7d70983cf398 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -58,4 +58,12 @@ unsigned long zs_compact(struct zs_pool *pool);
unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
+
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+ void *local_copy);
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+ void *handle_mem);
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+ void *handle_mem, size_t mem_len);
+
#endif
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index afbd72363731..7566070729ee 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1362,6 +1362,131 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
}
EXPORT_SYMBOL_GPL(zs_unmap_object);
+void *zs_obj_read_begin(struct zs_pool *pool, unsigned long handle,
+ void *local_copy)
+{
+ struct zspage *zspage;
+ struct zpdesc *zpdesc;
+ unsigned long obj, off;
+ unsigned int obj_idx;
+ struct size_class *class;
+ void *addr;
+
+ /* Guarantee we can get zspage from handle safely */
+ read_lock(&pool->lock);
+ obj = handle_to_obj(handle);
+ obj_to_location(obj, &zpdesc, &obj_idx);
+ zspage = get_zspage(zpdesc);
+
+ /* Make sure migration doesn't move any pages in this zspage */
+ zspage_read_lock(zspage);
+ read_unlock(&pool->lock);
+
+ class = zspage_class(pool, zspage);
+ off = offset_in_page(class->size * obj_idx);
+
+ if (off + class->size <= PAGE_SIZE) {
+ /* this object is contained entirely within a page */
+ addr = kmap_local_zpdesc(zpdesc);
+ addr += off;
+ } else {
+ size_t sizes[2];
+
+ /* this object spans two pages */
+ sizes[0] = PAGE_SIZE - off;
+ sizes[1] = class->size - sizes[0];
+ addr = local_copy;
+
+ memcpy_from_page(addr, zpdesc_page(zpdesc),
+ off, sizes[0]);
+ zpdesc = get_next_zpdesc(zpdesc);
+ memcpy_from_page(addr + sizes[0],
+ zpdesc_page(zpdesc),
+ 0, sizes[1]);
+ }
+
+ if (!ZsHugePage(zspage))
+ addr += ZS_HANDLE_SIZE;
+
+ return addr;
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_begin);
+
+void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
+ void *handle_mem)
+{
+ struct zspage *zspage;
+ struct zpdesc *zpdesc;
+ unsigned long obj, off;
+ unsigned int obj_idx;
+ struct size_class *class;
+
+ obj = handle_to_obj(handle);
+ obj_to_location(obj, &zpdesc, &obj_idx);
+ zspage = get_zspage(zpdesc);
+ class = zspage_class(pool, zspage);
+ off = offset_in_page(class->size * obj_idx);
+
+ if (off + class->size <= PAGE_SIZE) {
+ if (!ZsHugePage(zspage))
+ off += ZS_HANDLE_SIZE;
+ handle_mem -= off;
+ kunmap_local(handle_mem);
+ }
+
+ zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_read_end);
+
+void zs_obj_write(struct zs_pool *pool, unsigned long handle,
+ void *handle_mem, size_t mem_len)
+{
+ struct zspage *zspage;
+ struct zpdesc *zpdesc;
+ unsigned long obj, off;
+ unsigned int obj_idx;
+ struct size_class *class;
+
+ /* Guarantee we can get zspage from handle safely */
+ read_lock(&pool->lock);
+ obj = handle_to_obj(handle);
+ obj_to_location(obj, &zpdesc, &obj_idx);
+ zspage = get_zspage(zpdesc);
+
+ /* Make sure migration doesn't move any pages in this zspage */
+ zspage_read_lock(zspage);
+ read_unlock(&pool->lock);
+
+ class = zspage_class(pool, zspage);
+ off = offset_in_page(class->size * obj_idx);
+
+ if (off + class->size <= PAGE_SIZE) {
+ /* this object is contained entirely within a page */
+ void *dst = kmap_local_zpdesc(zpdesc);
+
+ if (!ZsHugePage(zspage))
+ off += ZS_HANDLE_SIZE;
+ memcpy(dst + off, handle_mem, mem_len);
+ kunmap_local(dst);
+ } else {
+ /* this object spans two pages */
+ size_t sizes[2];
+
+ off += ZS_HANDLE_SIZE;
+ sizes[0] = PAGE_SIZE - off;
+ sizes[1] = mem_len - sizes[0];
+
+ memcpy_to_page(zpdesc_page(zpdesc), off,
+ handle_mem, sizes[0]);
+ zpdesc = get_next_zpdesc(zpdesc);
+ memcpy_to_page(zpdesc_page(zpdesc), 0,
+ handle_mem + sizes[0], sizes[1]);
+ }
+
+ zspage_read_unlock(zspage);
+}
+EXPORT_SYMBOL_GPL(zs_obj_write);
+
/**
* zs_huge_class_size() - Returns the size (in bytes) of the first huge
* zsmalloc &size_class.
--
2.48.1.711.g2feabab25a-goog