* [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
@ 2024-10-28 11:53 Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag Yunsheng Lin
` (9 more replies)
0 siblings, 10 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck, Shuah Khan,
Andrew Morton, Linux-MM
This is part 1 of "Replace page_frag with page_frag_cache",
which mainly contains refactoring and optimization of the
page_frag API implementation before the replacement.
As per the discussion in [1], it is better to target the
net-next tree to get more testing, as all the callers of the
page_frag API are in networking, and the chance of conflicting
with the MM tree seems low as the page_frag implementation is
quite self-contained.
After [2], there are still two implementations for page frag:
1. mm/page_alloc.c: net stack seems to be using it in the
rx part with 'struct page_frag_cache' and the main API
being page_frag_alloc_align().
2. net/core/sock.c: net stack seems to be using it in the
tx part with 'struct page_frag' and the main API being
skb_page_frag_refill().
This patchset tries to unify the page frag implementation
by replacing page_frag with page_frag_cache for sk_page_frag()
first. net_high_order_alloc_disable_key for the implementation
in net/core/sock.c doesn't seem to matter that much now as pcp
is also supported for high-order pages:
commit 44042b449872 ("mm/page_alloc: allow high-order pages to
be stored on the per-cpu lists")
As the changes are mostly related to networking, this targets
the net-next tree. The rest of the page_frag users will be
replaced in a follow-up patchset.
After this patchset:
1. Unify the page frag implementation by taking the best out of
the two existing implementations: we are able to save some space
for the 'page_frag_cache' API user, and avoid 'get_page()' for
the old 'page_frag' API user.
2. Future bugfix and performance can be done in one place, hence
improving maintainability of page_frag's implementation.
Kernel image size change:
Linux Kernel total | text data bss
------------------------------------------------------
after 45250307 | 27274279 17209996 766032
before 45254134 | 27278118 17209984 766032
delta -3827 | -3839 +12 +0
Performance validation:
1. Using the micro-benchmark module added in patch 1 to test the
aligned and non-aligned API performance impact for the existing
users, there is no noticeable performance degradation. Instead we
seem to have a major performance boost for both the aligned and
non-aligned API after switching to ptr_ring for testing,
respectively about 200% and 10% improvement on an arm64 server
as below.
2. Using the netcat test case below, we also see a minor
performance boost when replacing 'page_frag' with
'page_frag_cache' after this patchset.
server: taskset -c 32 nc -l -k 1234 > /dev/null
client: perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
In order to avoid performance noise as much as possible, the testing
is done on a system without any other load and with enough iterations
to show the data is stable; the complete testing log is below:
perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000
perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1
taskset -c 32 nc -l -k 1234 > /dev/null
perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
*After* this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
17.758393 task-clock (msec) # 0.004 CPUs utilized ( +- 0.51% )
5 context-switches # 0.293 K/sec ( +- 0.65% )
0 cpu-migrations # 0.008 K/sec ( +- 17.21% )
74 page-faults # 0.004 M/sec ( +- 0.12% )
46128650 cycles # 2.598 GHz ( +- 0.51% )
60810511 instructions # 1.32 insn per cycle ( +- 0.04% )
14764914 branches # 831.433 M/sec ( +- 0.04% )
19281 branch-misses # 0.13% of all branches ( +- 0.13% )
4.240273854 seconds time elapsed ( +- 0.13% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
17.348690 task-clock (msec) # 0.019 CPUs utilized ( +- 0.66% )
5 context-switches # 0.310 K/sec ( +- 0.84% )
0 cpu-migrations # 0.009 K/sec ( +- 16.55% )
74 page-faults # 0.004 M/sec ( +- 0.11% )
45065287 cycles # 2.598 GHz ( +- 0.66% )
60755389 instructions # 1.35 insn per cycle ( +- 0.05% )
14747865 branches # 850.085 M/sec ( +- 0.05% )
19272 branch-misses # 0.13% of all branches ( +- 0.13% )
0.935251375 seconds time elapsed ( +- 0.07% )
Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
16626.042731 task-clock (msec) # 0.607 CPUs utilized ( +- 0.03% )
3291020 context-switches # 0.198 M/sec ( +- 0.05% )
1 cpu-migrations # 0.000 K/sec ( +- 0.50% )
85 page-faults # 0.005 K/sec ( +- 0.16% )
30581044838 cycles # 1.839 GHz ( +- 0.05% )
34962744631 instructions # 1.14 insn per cycle ( +- 0.01% )
6483883671 branches # 389.984 M/sec ( +- 0.02% )
99624551 branch-misses # 1.54% of all branches ( +- 0.17% )
27.370305077 seconds time elapsed ( +- 0.01% )
*Before* this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
21.587934 task-clock (msec) # 0.005 CPUs utilized ( +- 0.72% )
6 context-switches # 0.281 K/sec ( +- 0.28% )
1 cpu-migrations # 0.047 K/sec ( +- 0.50% )
73 page-faults # 0.003 M/sec ( +- 0.12% )
56080697 cycles # 2.598 GHz ( +- 0.72% )
61605150 instructions # 1.10 insn per cycle ( +- 0.05% )
14950196 branches # 692.526 M/sec ( +- 0.05% )
19410 branch-misses # 0.13% of all branches ( +- 0.18% )
4.603530546 seconds time elapsed ( +- 0.11% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
20.988297 task-clock (msec) # 0.006 CPUs utilized ( +- 0.81% )
7 context-switches # 0.316 K/sec ( +- 0.54% )
1 cpu-migrations # 0.048 K/sec ( +- 0.70% )
73 page-faults # 0.003 M/sec ( +- 0.11% )
54512166 cycles # 2.597 GHz ( +- 0.81% )
61440941 instructions # 1.13 insn per cycle ( +- 0.08% )
14906043 branches # 710.207 M/sec ( +- 0.08% )
19927 branch-misses # 0.13% of all branches ( +- 0.17% )
3.438041238 seconds time elapsed ( +- 1.11% )
Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
17364.040855 task-clock (msec) # 0.624 CPUs utilized ( +- 0.02% )
3340375 context-switches # 0.192 M/sec ( +- 0.06% )
1 cpu-migrations # 0.000 K/sec
85 page-faults # 0.005 K/sec ( +- 0.15% )
32077623335 cycles # 1.847 GHz ( +- 0.03% )
35121047596 instructions # 1.09 insn per cycle ( +- 0.01% )
6519872824 branches # 375.481 M/sec ( +- 0.02% )
101877022 branch-misses # 1.56% of all branches ( +- 0.14% )
27.842745343 seconds time elapsed ( +- 0.02% )
Note, ipv4-udp, ipv6-tcp and ipv6-udp are also tested with the below script:
nc -u -l -k 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -N -u 127.0.0.1 1234
nc -l6 -k 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -N ::1 1234
nc -l6 -k -u 1234 > /dev/null
perf stat -r 4 -- head -c 51200000000 /dev/zero | nc -u -N ::1 1234
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Shuah Khan <skhan@linuxfoundation.org>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
1. https://lore.kernel.org/all/add10dd4-7f5d-4aa1-aa04-767590f944e0@redhat.com/
2. https://lore.kernel.org/all/20240228093013.8263-1-linyunsheng@huawei.com/
Change log:
V23:
1. CC Andrew and MM ML explicitly.
2. Split into two parts according to the discussion in v22; this is
part 1.
V22:
1. Fix some typo as noted by Bagas.
2. Remove page_frag_cache_page_offset() as it is not really related to
this patchset.
V21:
1. Do renaming as suggested by Alexander.
2. Filter out the test results of dmesg in script as suggested by
Shuah.
V20:
1. Rename skb_copy_to_page_nocache() to skb_add_frag_nocache().
2. Define the PFMEMALLOC_BIT as the ORDER_MASK + 1 as suggested by
Alexander.
V19:
1. Rebased on latest net-next.
2. Use wait_for_completion_timeout() instead of wait_for_completion()
in page_frag_test.c
V18:
1. Fix a typo in test_page_frag.sh pointed out by Alexander.
2. Move some inline helpers into the .c file, use the ternary
operator and move the getting of the size as suggested by Alexander.
V17:
1. Add TEST_FILES in Makefile for test_page_frag.sh.
V16:
1. Add test_page_frag.sh to handle page_frag_test.ko and add testing
for prepare API.
2. Move inline helpers not needed outside of page_frag_cache.c into
page_frag_cache.c.
3. Reset nc->offset when reusing an old page.
V15:
1. Fix the compile error pointed out by Simon.
2. Fix other mistakes when using the new API naming and refactoring.
V14:
1. Drop '_va' Renaming patch and use new API naming.
2. Use new refactoring to enable more codes to be reusable.
3. And other minor suggestions from Alexander.
V13:
1. Move page_frag_test from mm/ to tools/testing/selftest/mm
2. Use ptr_ring to replace ptr_pool for page_frag_test.c
3. Retest based on the new testing module, which shows a very
different result than using ptr_pool.
V12:
1. Do not treat page_frag_test ko as DEBUG feature.
2. Make some improvement for the refactoring in patch 8.
3. Some other minor improvement as Alexander's comment.
RFC v11:
1. Fold 'page_frag_cache' moving change into patch 2.
2. Optimize patch 3 according to discussion in v9.
V10:
1. Change Subject to "Replace page_frag with page_frag_cache for sk_page_frag()".
2. Move 'struct page_frag_cache' to sched.h as suggested by Alexander.
3. Rename skb_copy_to_page_nocache().
4. Adjust change between patches to make it more reviewable as Alexander's comment.
5. Use 'aligned_remaining' variable to generate virtual address as Alexander's
comment.
6. Some included header and typo fix as Alexander's comment.
7. Add back the get_order() opt patch for xtensa arch
V9:
1. Add a check for test_alloc_len and change the perm of module_param()
to 0 as per Wang Wei's comment.
2. Rebased on latest net-next.
V8: Remove patch 2 & 3 in V7, as free_unref_page() is changed to call
pcp_allowed_order() and used in page_frag API recently in:
commit 5b8d75913a0e ("mm: combine free_the_page() and free_unref_page()")
V7: Fix doc build warning and error.
V6:
1. Fix some typo and compiler error for x86 pointed out by Jakub and
Simon.
2. Add two refactoring and optimization patches.
V5:
1. Add page_frag_alloc_pg() API for the tls_device.c case and refactor
some implementation; update the kernel binary size change as the
size increased after that.
2. Add ack from Mat.
RFC v4:
1. Update doc according to Randy and Mat's suggestion.
2. Change probe API to "probe" for a specific amount of available space,
rather than "nonzero" space according to Mat's suggestion.
3. Retest and update the test result.
v3:
1. Use a new layout for 'struct page_frag_cache' as per the discussion
with Alexander and other suggestions from Alexander.
2. Add probe API to address Mat's comment about the mptcp use case.
3. Some doc updating according to Bagas' suggestion.
v2:
1. reorder test module to patch 1.
2. split doc and maintainer updating to two patches.
3. refactor the page_frag before moving.
4. fix a typo and a 'static' warning in the test module.
5. add a patch for xtensa arch to enable using get_order() in
BUILD_BUG_ON().
6. Add test case and performance data for the socket code.
Yunsheng Lin (7):
mm: page_frag: add a test module for page_frag
mm: move the page fragment allocator from page_alloc into its own file
mm: page_frag: use initial zero offset for page_frag_alloc_align()
mm: page_frag: avoid caller accessing 'page_frag_cache' directly
xtensa: remove the get_order() implementation
mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
arch/xtensa/include/asm/page.h | 18 --
drivers/vhost/net.c | 2 +-
include/linux/gfp.h | 22 --
include/linux/mm_types.h | 18 --
include/linux/mm_types_task.h | 21 ++
include/linux/page_frag_cache.h | 61 ++++++
include/linux/skbuff.h | 1 +
mm/Makefile | 1 +
mm/page_alloc.c | 136 ------------
mm/page_frag_cache.c | 171 +++++++++++++++
net/core/skbuff.c | 6 +-
net/rxrpc/conn_object.c | 4 +-
net/rxrpc/local_object.c | 4 +-
net/sunrpc/svcsock.c | 6 +-
tools/testing/selftests/mm/Makefile | 3 +
tools/testing/selftests/mm/page_frag/Makefile | 18 ++
.../selftests/mm/page_frag/page_frag_test.c | 198 ++++++++++++++++++
tools/testing/selftests/mm/run_vmtests.sh | 8 +
tools/testing/selftests/mm/test_page_frag.sh | 175 ++++++++++++++++
19 files changed, 665 insertions(+), 208 deletions(-)
create mode 100644 include/linux/page_frag_cache.h
create mode 100644 mm/page_frag_cache.c
create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
create mode 100755 tools/testing/selftests/mm/test_page_frag.sh
--
2.33.0
* [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-11-14 16:02 ` Mark Brown
2024-10-28 11:53 ` [PATCH net-next v23 2/7] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
` (8 subsequent siblings)
9 siblings, 1 reply; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest
The testing is done by ensuring that fragments allocated
from a page_frag_cache instance are pushed into a ptr_ring
instance by a kthread bound to a specified cpu, while
another kthread bound to a specified cpu pops the fragments
from the ptr_ring and frees them.
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
tools/testing/selftests/mm/Makefile | 3 +
tools/testing/selftests/mm/page_frag/Makefile | 18 ++
.../selftests/mm/page_frag/page_frag_test.c | 198 ++++++++++++++++++
tools/testing/selftests/mm/run_vmtests.sh | 8 +
tools/testing/selftests/mm/test_page_frag.sh | 175 ++++++++++++++++
5 files changed, 402 insertions(+)
create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
create mode 100755 tools/testing/selftests/mm/test_page_frag.sh
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 02e1204971b0..acec529baaca 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules
CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
LDLIBS = -lrt -lpthread -lm
+TEST_GEN_MODS_DIR := page_frag
+
TEST_GEN_FILES = cow
TEST_GEN_FILES += compaction_test
TEST_GEN_FILES += gup_longterm
@@ -126,6 +128,7 @@ TEST_FILES += test_hmm.sh
TEST_FILES += va_high_addr_switch.sh
TEST_FILES += charge_reserved_hugetlb.sh
TEST_FILES += hugetlb_reparenting_test.sh
+TEST_FILES += test_page_frag.sh
# required by charge_reserved_hugetlb.sh
TEST_FILES += write_hugetlb_memory.sh
diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
new file mode 100644
index 000000000000..58dda74d50a3
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -0,0 +1,18 @@
+PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = page_frag_test.ko
+
+obj-m += page_frag_test.o
+
+all:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules
+
+clean:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
new file mode 100644
index 000000000000..912d97b99107
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/cpumask.h>
+#include <linux/completion.h>
+#include <linux/ptr_ring.h>
+#include <linux/kthread.h>
+
+#define TEST_FAILED_PREFIX "page_frag_test failed: "
+
+static struct ptr_ring ptr_ring;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_nc;
+static int test_popped;
+static int test_pushed;
+static bool force_exit;
+
+static int nr_test = 2000000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+
+ pr_info("page_frag pop test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (test_popped < nr_test) {
+ void *obj = __ptr_ring_consume(ring);
+
+ if (obj) {
+ test_popped++;
+ page_frag_free(obj);
+ } else {
+ if (force_exit)
+ break;
+
+ cond_resched();
+ }
+ }
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ pr_info("page_frag pop test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+
+ pr_info("page_frag push test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (test_pushed < nr_test && !force_exit) {
+ void *va;
+ int ret;
+
+ if (test_align) {
+ va = page_frag_alloc_align(&test_nc, test_alloc_len,
+ GFP_KERNEL, SMP_CACHE_BYTES);
+
+ if ((unsigned long)va & (SMP_CACHE_BYTES - 1)) {
+ force_exit = true;
+ WARN_ONCE(true, TEST_FAILED_PREFIX "unaligned va returned\n");
+ }
+ } else {
+ va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+ }
+
+ if (!va)
+ continue;
+
+ ret = __ptr_ring_produce(ring, va);
+ if (ret) {
+ page_frag_free(va);
+ cond_resched();
+ } else {
+ test_pushed++;
+ }
+ }
+
+ pr_info("page_frag push test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+ struct task_struct *tsk_push, *tsk_pop;
+ int last_pushed = 0, last_popped = 0;
+ ktime_t start;
+ u64 duration;
+ int ret;
+
+ test_nc.va = NULL;
+ atomic_set(&nthreads, 2);
+ init_completion(&wait);
+
+ if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 ||
+ !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu))
+ return -EINVAL;
+
+ ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL);
+ if (ret)
+ return ret;
+
+ tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring,
+ test_push_cpu, "page_frag_push");
+ if (IS_ERR(tsk_push))
+ return PTR_ERR(tsk_push);
+
+ tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring,
+ test_pop_cpu, "page_frag_pop");
+ if (IS_ERR(tsk_pop)) {
+ kthread_stop(tsk_push);
+ return PTR_ERR(tsk_pop);
+ }
+
+ start = ktime_get();
+ wake_up_process(tsk_push);
+ wake_up_process(tsk_pop);
+
+ pr_info("waiting for test to complete\n");
+
+ while (!wait_for_completion_timeout(&wait, msecs_to_jiffies(10000))) {
+ /* exit if there is no progress on the push or pop side */
+ if (last_pushed == test_pushed || last_popped == test_popped) {
+ WARN_ONCE(true, TEST_FAILED_PREFIX "no progress\n");
+ force_exit = true;
+ continue;
+ }
+
+ last_pushed = test_pushed;
+ last_popped = test_popped;
+ pr_info("page_frag_test progress: pushed = %d, popped = %d\n",
+ test_pushed, test_popped);
+ }
+
+ if (force_exit) {
+ pr_err(TEST_FAILED_PREFIX "exit with error\n");
+ goto out;
+ }
+
+ duration = (u64)ktime_us_delta(ktime_get(), start);
+ pr_info("%d iterations of %s testing took: %lluus\n", nr_test,
+ test_align ? "aligned" : "non-aligned", duration);
+
+out:
+ ptr_ring_cleanup(&ptr_ring, NULL);
+ page_frag_cache_drain(&test_nc);
+
+ return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin <linyunsheng@huawei.com>");
+MODULE_DESCRIPTION("Test module for page_frag");
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index c5797ad1d37b..2c5394584af4 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -75,6 +75,8 @@ separated by spaces:
read-only VMAs
- mdwe
test prctl(PR_SET_MDWE, ...)
+- page_frag
+ test handling of page fragment allocation and freeing
example: ./run_vmtests.sh -t "hmm mmap ksm"
EOF
@@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty
CATEGORY="mdwe" run_test ./mdwe_test
+CATEGORY="page_frag" run_test ./test_page_frag.sh smoke
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
+
echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
new file mode 100755
index 000000000000..f55b105084cf
--- /dev/null
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -0,0 +1,175 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
+# Copyright (C) 2018 Uladzislau Rezki (Sony) <urezki@gmail.com>
+#
+# This is a test script for the kernel test driver to test the
+# correctness and performance of page_frag's implementation.
+# Therefore it is just a kernel module loader. You can specify
+# and pass different parameters in order to:
+# a) analyse performance of page fragment allocations;
+# b) stress and stability checking of the page_frag subsystem.
+
+DRIVER="./page_frag/page_frag_test.ko"
+CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2)
+TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}')
+
+if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then
+ TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}')
+ NR_TEST=100000000
+else
+ TEST_CPU_1=$TEST_CPU_0
+ NR_TEST=1000000
+fi
+
+# 1 if fails
+exitcode=1
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+check_test_failed_prefix() {
+ if dmesg | grep -q 'page_frag_test failed:';then
+ echo "page_frag_test failed, please check dmesg"
+ exit $exitcode
+ fi
+}
+
+#
+# Static templates for testing of page_frag APIs.
+# Also it is possible to pass any supported parameters manually.
+#
+SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
+NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
+ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+
+check_test_requirements()
+{
+ uid=$(id -u)
+ if [ $uid -ne 0 ]; then
+ echo "$0: Must be run as root"
+ exit $ksft_skip
+ fi
+
+ if ! which insmod > /dev/null 2>&1; then
+ echo "$0: You need insmod installed"
+ exit $ksft_skip
+ fi
+
+ if [ ! -f $DRIVER ]; then
+ echo "$0: You need to compile page_frag_test module"
+ exit $ksft_skip
+ fi
+}
+
+run_nonaligned_check()
+{
+ echo "Run performance tests to evaluate how fast nonaligned alloc API is."
+
+ insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1
+}
+
+run_aligned_check()
+{
+ echo "Run performance tests to evaluate how fast aligned alloc API is."
+
+ insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
+}
+
+run_smoke_check()
+{
+ echo "Run smoke test."
+
+ insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1
+}
+
+usage()
+{
+ echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | [ smoke ] | "
+ echo "manual parameters"
+ echo
+ echo "Valid tests and parameters:"
+ echo
+ modinfo $DRIVER
+ echo
+ echo "Example usage:"
+ echo
+ echo "# Shows help message"
+ echo "$0"
+ echo
+ echo "# Smoke testing"
+ echo "$0 smoke"
+ echo
+ echo "# Performance testing for nonaligned alloc API"
+ echo "$0 nonaligned"
+ echo
+ echo "# Performance testing for aligned alloc API"
+ echo "$0 aligned"
+ echo
+ exit 0
+}
+
+function validate_passed_args()
+{
+ VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'`
+
+ #
+ # Something has been passed, check it.
+ #
+ for passed_arg in $@; do
+ key=${passed_arg//=*/}
+ valid=0
+
+ for valid_arg in $VALID_ARGS; do
+ if [[ $key = $valid_arg ]]; then
+ valid=1
+ break
+ fi
+ done
+
+ if [[ $valid -ne 1 ]]; then
+ echo "Error: key is not correct: ${key}"
+ exit $exitcode
+ fi
+ done
+}
+
+function run_manual_check()
+{
+ #
+ # Validate passed parameters. If there is a wrong one,
+ # the script exits and does not execute further.
+ #
+ validate_passed_args $@
+
+ echo "Run the test with following parameters: $@"
+ insmod $DRIVER $@ > /dev/null 2>&1
+}
+
+function run_test()
+{
+ if [ $# -eq 0 ]; then
+ usage
+ else
+ if [[ "$1" = "smoke" ]]; then
+ run_smoke_check
+ elif [[ "$1" = "nonaligned" ]]; then
+ run_nonaligned_check
+ elif [[ "$1" = "aligned" ]]; then
+ run_aligned_check
+ else
+ run_manual_check $@
+ fi
+ fi
+
+ check_test_failed_prefix
+
+ echo "Done."
+ echo "Check the kernel ring buffer to see the summary."
+}
+
+check_test_requirements
+run_test $@
+
+exit 0
--
2.33.0
* [PATCH net-next v23 2/7] mm: move the page fragment allocator from page_alloc into its own file
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
` (7 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, David Howells,
Alexander Duyck, Andrew Morton, Linux-MM, Alexander Duyck,
Eric Dumazet, Simon Horman, Shuah Khan, linux-kselftest
Inspired by [1], move the page fragment allocator from page_alloc
into its own .c file and header file, as we are about to make more
changes to it in order to replace another page_frag implementation
in sock.c.
As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h
in sched.h causes a compiler error due to the interdependence
between mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid the
compiler error by moving 'struct page_frag_cache' to mm_types_task.h
as suggested by Alexander, see [3].
1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/
CC: David Howells <dhowells@redhat.com>
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
include/linux/gfp.h | 22 ---
include/linux/mm_types.h | 18 ---
include/linux/mm_types_task.h | 18 +++
include/linux/page_frag_cache.h | 31 ++++
include/linux/skbuff.h | 1 +
mm/Makefile | 1 +
mm/page_alloc.c | 136 ----------------
mm/page_frag_cache.c | 145 ++++++++++++++++++
.../selftests/mm/page_frag/page_frag_test.c | 2 +-
9 files changed, 197 insertions(+), 177 deletions(-)
create mode 100644 include/linux/page_frag_cache.h
create mode 100644 mm/page_frag_cache.c
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..a0a6d25f883f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
extern void __free_pages(struct page *page, unsigned int order);
extern void free_pages(unsigned long addr, unsigned int order);
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
- gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align)
-{
- WARN_ON_ONCE(!is_power_of_2(align));
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask)
-{
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
#define __free_page(page) __free_pages((page), 0)
#define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..92314ef2d978 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
*/
#define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page)))
-#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
/*
* page_private can be used on tail pages. However, PagePrivate is only
* checked by the VM on the head page. So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
return folio->private;
}
-struct page_frag_cache {
- void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- __u16 offset;
- __u16 size;
-#else
- __u32 offset;
-#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
-};
-
typedef unsigned long vm_flags_t;
/*
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index bff5706b76e1..0ac6daebdd5c 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
* (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
*/
+#include <linux/align.h>
#include <linux/types.h>
#include <asm/page.h>
@@ -43,6 +44,23 @@ struct page_frag {
#endif
};
+#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+ void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ __u16 offset;
+ __u16 size;
+#else
+ __u32 offset;
+#endif
+ /* we maintain a pagecount bias, so that we dont dirty cache line
+ * containing page->_refcount every time we allocate a fragment.
+ */
+ unsigned int pagecnt_bias;
+ bool pfmemalloc;
+};
+
/* Track pages that require TLB flushes */
struct tlbflush_unmap_batch {
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..67ac8626ed9b
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/mm_types_task.h>
+#include <linux/types.h>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+ gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask)
+{
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 48f1e0fa2a13..7adca0fa2602 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
#include <linux/in6.h>
#include <linux/if_packet.h>
#include <linux/llist.h>
+#include <linux/page_frag_cache.h>
#include <net/flow.h>
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
#include <linux/netfilter/nf_conntrack_common.h>
diff --git a/mm/Makefile b/mm/Makefile
index d5639b036166..dba52bb0da8a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-y += page-alloc.o
+obj-y += page_frag_cache.o
obj-y += init-mm.o
obj-y += memblock.o
obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..6ca2abce857b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order)
EXPORT_SYMBOL(free_pages);
-/*
- * Page Fragment:
- * An arbitrary-length arbitrary-offset area of memory which resides
- * within a 0 or higher order page. Multiple fragments within that page
- * are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments. This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
-{
- struct page *page = NULL;
- gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
- __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
- if (unlikely(!page))
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
- nc->va = page ? page_address(page) : NULL;
-
- return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
- if (!nc->va)
- return;
-
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
- VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
- if (page_ref_sub_and_test(page, count))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align_mask)
-{
- unsigned int size = PAGE_SIZE;
- struct page *page;
- int offset;
-
- if (unlikely(!nc->va)) {
-refill:
- page = __page_frag_cache_refill(nc, gfp_mask);
- if (!page)
- return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* Even if we own the page, we do not use atomic_set().
- * This would break get_page_unless_zero() users.
- */
- page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
- /* reset page count bias and offset to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
- }
-
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
- page = virt_to_page(nc->va);
-
- if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
- goto refill;
-
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
- goto refill;
- }
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* OK, page count is 0, we can safely set it */
- set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
- /* reset page count bias and offset to start of new frag */
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
- }
-
- nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
-
- return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
- struct page *page = virt_to_head_page(addr);
-
- if (unlikely(put_page_testzero(page)))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
static void *make_alloc_exact(unsigned long addr, unsigned int order,
size_t size)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ * An arbitrary-length arbitrary-offset area of memory which resides within a
+ * 0 or higher order page. Multiple fragments within that page are
+ * individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments. This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/gfp_types.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/page_frag_cache.h>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
+{
+ struct page *page = NULL;
+ gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+ __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+ page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+ PAGE_FRAG_CACHE_MAX_ORDER);
+ nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+ if (unlikely(!page))
+ page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+ nc->va = page ? page_address(page) : NULL;
+
+ return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+ if (!nc->va)
+ return;
+
+ __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+ nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+ VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+ if (page_ref_sub_and_test(page, count))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ unsigned int size = PAGE_SIZE;
+ struct page *page;
+ int offset;
+
+ if (unlikely(!nc->va)) {
+refill:
+ page = __page_frag_cache_refill(nc, gfp_mask);
+ if (!page)
+ return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* Even if we own the page, we do not use atomic_set().
+ * This would break get_page_unless_zero() users.
+ */
+ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pfmemalloc = page_is_pfmemalloc(page);
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ nc->offset = size;
+ }
+
+ offset = nc->offset - fragsz;
+ if (unlikely(offset < 0)) {
+ page = virt_to_page(nc->va);
+
+ if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+ goto refill;
+
+ if (unlikely(nc->pfmemalloc)) {
+ free_unref_page(page, compound_order(page));
+ goto refill;
+ }
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* OK, page count is 0, we can safely set it */
+ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ offset = size - fragsz;
+ if (unlikely(offset < 0)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+ }
+
+ nc->pagecnt_bias--;
+ offset &= align_mask;
+ nc->offset = offset;
+
+ return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+ struct page *page = virt_to_head_page(addr);
+
+ if (unlikely(put_page_testzero(page)))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 912d97b99107..13c44133e009 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
* Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
*/
-#include <linux/mm.h>
#include <linux/module.h>
#include <linux/cpumask.h>
#include <linux/completion.h>
#include <linux/ptr_ring.h>
#include <linux/kthread.h>
+#include <linux/page_frag_cache.h>
#define TEST_FAILED_PREFIX "page_frag_test failed: "
--
2.33.0
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 2/7] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2025-01-23 19:15 ` Florian Fainelli
2024-10-28 11:53 ` [PATCH net-next v23 4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
` (6 subsequent siblings)
9 siblings, 1 reply; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Linux-MM, Alexander Duyck
We are about to use the page_frag_alloc_*() API not just to
allocate memory for skb->data, but also to do the memory
allocation for skb frags. Currently the page_frag
implementation in the mm subsystem runs the offset as a
countdown rather than a count-up value; that has several
advantages, as mentioned in [1], but also some disadvantages,
for example, it may prevent skb frag coalescing and more
effective cache prefetching.

There is a trade-off to make in order to have a unified
implementation and API for page_frag, so use an initial zero
offset in this patch; the following patch will try to
mitigate the disadvantages as much as possible.
1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
mm/page_frag_cache.c | 46 ++++++++++++++++++++++----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..4c8e04379cb3 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ unsigned int size = nc->size;
+#else
unsigned int size = PAGE_SIZE;
+#endif
+ unsigned int offset;
struct page *page;
- int offset;
if (unlikely(!nc->va)) {
refill:
@@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
/* reset page count bias and offset to start of new frag */
nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
+ nc->offset = 0;
}
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
+ offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+ if (unlikely(offset + fragsz > size)) {
+ if (unlikely(fragsz > PAGE_SIZE)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+
page = virt_to_page(nc->va);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
goto refill;
}
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
/* OK, page count is 0, we can safely set it */
set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
/* reset page count bias and offset to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
+ offset = 0;
}
nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
+ nc->offset = offset + fragsz;
return nc->va + offset;
}
--
2.33.0
* [PATCH net-next v23 4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (2 preceding siblings ...)
2024-10-28 11:53 ` [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 5/7] xtensa: remove the get_order() implementation Yunsheng Lin
` (5 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Linux-MM, Alexander Duyck, Chuck Lever,
Michael S. Tsirkin, Jason Wang, Eugenio Pérez, Eric Dumazet,
Simon Horman, David Howells, Marc Dionne, Jeff Layton,
Neil Brown, Olga Kornievskaia, Dai Ngo, Tom Talpey,
Trond Myklebust, Anna Schumaker, Shuah Khan, kvm, virtualization,
linux-afs, linux-nfs, linux-kselftest
Use the appropriate page_frag API instead of having callers
access the internals of 'page_frag_cache' directly.
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
---
drivers/vhost/net.c | 2 +-
include/linux/page_frag_cache.h | 10 ++++++++++
net/core/skbuff.c | 6 +++---
net/rxrpc/conn_object.c | 4 +---
net/rxrpc/local_object.c | 4 +---
net/sunrpc/svcsock.c | 6 ++----
tools/testing/selftests/mm/page_frag/page_frag_test.c | 2 +-
7 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..9ad37c012189 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
vqs[VHOST_NET_VQ_RX]);
f->private_data = n;
- n->pf_cache.va = NULL;
+ page_frag_cache_init(&n->pf_cache);
return 0;
}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 67ac8626ed9b..0a52f7a179c8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,16 @@
#include <linux/mm_types_task.h>
#include <linux/types.h>
+static inline void page_frag_cache_init(struct page_frag_cache *nc)
+{
+ nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+ return !!nc->pfmemalloc;
+}
+
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 00afeb90c23a..6841e61a6bd0 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -753,14 +753,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
if (in_hardirq() || irqs_disabled()) {
nc = this_cpu_ptr(&netdev_alloc_cache);
data = page_frag_alloc(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
} else {
local_bh_disable();
local_lock_nested_bh(&napi_alloc_cache.bh_lock);
nc = this_cpu_ptr(&napi_alloc_cache.page);
data = page_frag_alloc(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
local_bh_enable();
@@ -850,7 +850,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
len = SKB_HEAD_ALIGN(len);
data = page_frag_alloc(&nc->page, len, gfp_mask);
- pfmemalloc = nc->page.pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page);
}
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 1539d315afe7..694c4df7a1a3 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
*/
rxrpc_purge_queue(&conn->rx_queue);
- if (conn->tx_data_alloc.va)
- __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va),
- conn->tx_data_alloc.pagecnt_bias);
+ page_frag_cache_drain(&conn->tx_data_alloc);
call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
}
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index f9623ace2201..2792d2304605 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
#endif
rxrpc_purge_queue(&local->rx_queue);
rxrpc_purge_client_connections(local);
- if (local->tx_alloc.va)
- __page_frag_cache_drain(virt_to_page(local->tx_alloc.va),
- local->tx_alloc.pagecnt_bias);
+ page_frag_cache_drain(&local->tx_alloc);
}
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 825ec5357691..b785425c3315 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1608,7 +1608,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
static void svc_sock_free(struct svc_xprt *xprt)
{
struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
- struct page_frag_cache *pfc = &svsk->sk_frag_cache;
struct socket *sock = svsk->sk_sock;
trace_svcsock_free(svsk, sock);
@@ -1618,8 +1617,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
sockfd_put(sock);
else
sock_release(sock);
- if (pfc->va)
- __page_frag_cache_drain(virt_to_head_page(pfc->va),
- pfc->pagecnt_bias);
+
+ page_frag_cache_drain(&svsk->sk_frag_cache);
kfree(svsk);
}
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 13c44133e009..e806c1866e36 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -126,7 +126,7 @@ static int __init page_frag_test_init(void)
u64 duration;
int ret;
- test_nc.va = NULL;
+ page_frag_cache_init(&test_nc);
atomic_set(&nthreads, 2);
init_completion(&wait);
--
2.33.0
* [PATCH net-next v23 5/7] xtensa: remove the get_order() implementation
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (3 preceding siblings ...)
2024-10-28 11:53 ` [PATCH net-next v23 4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
` (4 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Linux-MM, Max Filippov, Alexander Duyck,
Chris Zankel
The get_order() implemented by xtensa using the 'nsau'
instruction is effectively the same as the generic
implementation in include/asm-generic/getorder.h when size is
not a constant value, since the fls*() called by the generic
implementation also utilizes the 'nsau' instruction on xtensa.

So remove the xtensa get_order() implementation; using the
generic one lets the compiler do the computation at compile
time when size is a constant value instead of at runtime, and
enables using get_order() in the BUILD_BUG_ON() macro in the
next patch.
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
arch/xtensa/include/asm/page.h | 18 ------------------
1 file changed, 18 deletions(-)
diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
index 4db56ef052d2..8665d57991dd 100644
--- a/arch/xtensa/include/asm/page.h
+++ b/arch/xtensa/include/asm/page.h
@@ -109,26 +109,8 @@ typedef struct page *pgtable_t;
#define __pgd(x) ((pgd_t) { (x) } )
#define __pgprot(x) ((pgprot_t) { (x) } )
-/*
- * Pure 2^n version of get_order
- * Use 'nsau' instructions if supported by the processor or the generic version.
- */
-
-#if XCHAL_HAVE_NSA
-
-static inline __attribute_const__ int get_order(unsigned long size)
-{
- int lz;
- asm ("nsau %0, %1" : "=r" (lz) : "r" ((size - 1) >> PAGE_SHIFT));
- return 32 - lz;
-}
-
-#else
-
# include <asm-generic/getorder.h>
-#endif
-
struct page;
struct vm_area_struct;
extern void clear_page(void *page);
--
2.33.0
* [PATCH net-next v23 6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (4 preceding siblings ...)
2024-10-28 11:53 ` [PATCH net-next v23 5/7] xtensa: remove the get_order() implementation Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
` (3 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Linux-MM, Alexander Duyck
Currently there is one 'struct page_frag' for every 'struct
sock' and 'struct task_struct'; we are about to replace
'struct page_frag' with 'struct page_frag_cache' for them.
Before beginning the replacement, we need to ensure the size
of 'struct page_frag_cache' is no bigger than the size of
'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in the
system.

By OR'ing the page order and the pfmemalloc bit into the
lower bits of 'va' instead of using a 'u16' or 'u32' for the
page size and a 'u8' for pfmemalloc, we are able to avoid
wasting 3 or 5 bytes. And since the page address, pfmemalloc
bit and order are unchanged for the same page in the same
'page_frag_cache' instance, it makes sense to pack them
together.

After this patch, the size of 'struct page_frag_cache' is
the same as the size of 'struct page_frag'.
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
include/linux/mm_types_task.h | 19 +++++----
include/linux/page_frag_cache.h | 24 ++++++++++-
mm/page_frag_cache.c | 70 ++++++++++++++++++++++-----------
3 files changed, 81 insertions(+), 32 deletions(-)
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
struct page_frag_cache {
- void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* encoded_page consists of the virtual address, pfmemalloc bit and
+ * order of a page.
+ */
+ unsigned long encoded_page;
+
+ /* we maintain a pagecount bias, so that we dont dirty cache line
+ * containing page->_refcount every time we allocate a fragment.
+ */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
__u16 offset;
- __u16 size;
+ __u16 pagecnt_bias;
#else
__u32 offset;
+ __u32 pagecnt_bias;
#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
};
/* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..41a91df82631 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,38 @@
#ifndef _LINUX_PAGE_FRAG_CACHE_H
#define _LINUX_PAGE_FRAG_CACHE_H
+#include <linux/bits.h>
#include <linux/log2.h>
#include <linux/mm_types_task.h>
#include <linux/types.h>
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK 0
+#endif
+
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT (PAGE_FRAG_CACHE_ORDER_MASK + 1)
+
+static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
+{
+ return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
- nc->va = NULL;
+ nc->encoded_page = 0;
}
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
- return !!nc->pfmemalloc;
+ return encoded_page_decode_pfmemalloc(nc->encoded_page);
}
void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..a36fd09bf275 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
* be used in the "frags" portion of skb_shared_info.
*/
+#include <linux/build_bug.h>
#include <linux/export.h>
#include <linux/gfp_types.h>
#include <linux/init.h>
@@ -19,9 +20,36 @@
#include <linux/page_frag_cache.h>
#include "internal.h"
+static unsigned long encoded_page_create(struct page *page, unsigned int order,
+ bool pfmemalloc)
+{
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+ return (unsigned long)page_address(page) |
+ (order & PAGE_FRAG_CACHE_ORDER_MASK) |
+ ((unsigned long)pfmemalloc * PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static unsigned long encoded_page_decode_order(unsigned long encoded_page)
+{
+ return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *encoded_page_decode_virt(unsigned long encoded_page)
+{
+ return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *encoded_page_decode_page(unsigned long encoded_page)
+{
+ return virt_to_page((void *)encoded_page);
+}
+
static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
gfp_t gfp_mask)
{
+ unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
struct page *page = NULL;
gfp_t gfp = gfp_mask;
@@ -30,23 +58,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
#endif
- if (unlikely(!page))
+ if (unlikely(!page)) {
page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ order = 0;
+ }
- nc->va = page ? page_address(page) : NULL;
+ nc->encoded_page = page ?
+ encoded_page_create(page, order, page_is_pfmemalloc(page)) : 0;
return page;
}
void page_frag_cache_drain(struct page_frag_cache *nc)
{
- if (!nc->va)
+ if (!nc->encoded_page)
return;
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
+ __page_frag_cache_drain(encoded_page_decode_page(nc->encoded_page),
+ nc->pagecnt_bias);
+ nc->encoded_page = 0;
}
EXPORT_SYMBOL(page_frag_cache_drain);
@@ -63,35 +94,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- unsigned int size = nc->size;
-#else
- unsigned int size = PAGE_SIZE;
-#endif
- unsigned int offset;
+ unsigned long encoded_page = nc->encoded_page;
+ unsigned int size, offset;
struct page *page;
- if (unlikely(!nc->va)) {
+ if (unlikely(!encoded_page)) {
refill:
page = __page_frag_cache_refill(nc, gfp_mask);
if (!page)
return NULL;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
+ encoded_page = nc->encoded_page;
+
/* Even if we own the page, we do not use atomic_set().
* This would break get_page_unless_zero() users.
*/
page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
/* reset page count bias and offset to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
nc->offset = 0;
}
+ size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
if (unlikely(offset + fragsz > size)) {
if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +132,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
return NULL;
}
- page = virt_to_page(nc->va);
+ page = encoded_page_decode_page(encoded_page);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
goto refill;
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
+ if (unlikely(encoded_page_decode_pfmemalloc(encoded_page))) {
+ free_unref_page(page,
+ encoded_page_decode_order(encoded_page));
goto refill;
}
@@ -128,7 +154,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
nc->pagecnt_bias--;
nc->offset = offset + fragsz;
- return nc->va + offset;
+ return encoded_page_decode_virt(encoded_page) + offset;
}
EXPORT_SYMBOL(__page_frag_alloc_align);
--
2.33.0
^ permalink raw reply [flat|nested] 23+ messages in thread
* [PATCH net-next v23 7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (5 preceding siblings ...)
2024-10-28 11:53 ` [PATCH net-next v23 6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
@ 2024-10-28 11:53 ` Yunsheng Lin
2024-10-28 15:30 ` [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Alexander Duyck
` (2 subsequent siblings)
9 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-28 11:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Linux-MM, Alexander Duyck
There is about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64
system with 64K PAGE_SIZE. Disassembling with gdb suggests we
can get a decrease of more than 100 bytes in binary size by
using __alloc_pages() to replace alloc_pages_node(), as the
latter does some seemingly unnecessary checking for nid being
NUMA_NO_NODE, which can be avoided since page_frag is part of
the mm system.
CC: Alexander Duyck <alexander.duyck@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Linux-MM <linux-mm@kvack.org>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
mm/page_frag_cache.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a36fd09bf275..3f7a203d35c6 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -56,11 +56,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
+ page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+ numa_mem_id(), NULL);
#endif
if (unlikely(!page)) {
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
order = 0;
}
--
2.33.0
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (6 preceding siblings ...)
2024-10-28 11:53 ` [PATCH net-next v23 7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
@ 2024-10-28 15:30 ` Alexander Duyck
2024-10-29 9:36 ` Yunsheng Lin
2024-11-05 23:57 ` Jakub Kicinski
2024-11-11 22:20 ` patchwork-bot+netdevbpf
9 siblings, 1 reply; 23+ messages in thread
From: Alexander Duyck @ 2024-10-28 15:30 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Shuah Khan,
Andrew Morton, Linux-MM
On Mon, Oct 28, 2024 at 5:00 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> This is part 1 of "Replace page_frag with page_frag_cache",
> which mainly contains refactoring and optimization of the
> page_frag API implementation before the replacement.
>
> As per the discussion in [1], it would be better to target the
> net-next tree to get more testing, as all the callers of the
> page_frag API are in networking, and the chance of conflicting
> with the MM tree seems low as the implementation of the
> page_frag API is quite self-contained.
>
> After [2], there are still two implementations for page frag:
>
> 1. mm/page_alloc.c: net stack seems to be using it in the
> rx part with 'struct page_frag_cache' and the main API
> being page_frag_alloc_align().
> 2. net/core/sock.c: net stack seems to be using it in the
> tx part with 'struct page_frag' and the main API being
> skb_page_frag_refill().
>
> This patchset tries to unify the page frag implementation
> by replacing page_frag with page_frag_cache for sk_page_frag()
> first. net_high_order_alloc_disable_key for the implementation
> in net/core/sock.c doesn't seem to matter that much now, as the
> pcp also supports high-order pages:
> commit 44042b449872 ("mm/page_alloc: allow high-order pages to
> be stored on the per-cpu lists")
>
> As the related change mostly touches networking, this targets
> net-next. The rest of page_frag will be replaced in a follow-up
> patchset.
>
> After this patchset:
> 1. Unify the page frag implementation by taking the best of the
> two existing implementations: we are able to save some space
> for the 'page_frag_cache' API user, and avoid 'get_page()' for
> the old 'page_frag' API user.
> 2. Future bug fixes and performance work can be done in one
> place, improving the maintainability of page_frag's
> implementation.
>
> Kernel Image changing:
> Linux Kernel total | text data bss
> ------------------------------------------------------
> after 45250307 | 27274279 17209996 766032
> before 45254134 | 27278118 17209984 766032
> delta -3827 | -3839 +12 +0
>
> Performance validation:
> 1. Using the micro-benchmark ko added in patch 1 to test the
> aligned and non-aligned API performance impact for the existing
> users, there is no noticeable performance degradation. Instead
> we seem to have a major performance boost for both the aligned
> and non-aligned API after switching to ptr_ring for testing,
> about 200% and 10% improvement respectively on an arm64 server,
> as below.
>
> 2. Using the netcat test case below, we also get a minor
> performance boost from replacing 'page_frag' with
> 'page_frag_cache' after this patchset.
> server: taskset -c 32 nc -l -k 1234 > /dev/null
> client: perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
>
> In order to avoid performance noise as much as possible, the
> testing is done on a system without any other load and with
> enough iterations to show the data is stable; the complete
> testing log is below:
>
> perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000
> perf stat -r 200 -- insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1
> taskset -c 32 nc -l -k 1234 > /dev/null
> perf stat -r 200 -- taskset -c 0 head -c 20G /dev/zero | taskset -c 1 nc 127.0.0.1 1234
>
> *After* this patchset:
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
> 17.758393 task-clock (msec) # 0.004 CPUs utilized ( +- 0.51% )
> 5 context-switches # 0.293 K/sec ( +- 0.65% )
> 0 cpu-migrations # 0.008 K/sec ( +- 17.21% )
> 74 page-faults # 0.004 M/sec ( +- 0.12% )
> 46128650 cycles # 2.598 GHz ( +- 0.51% )
> 60810511 instructions # 1.32 insn per cycle ( +- 0.04% )
> 14764914 branches # 831.433 M/sec ( +- 0.04% )
> 19281 branch-misses # 0.13% of all branches ( +- 0.13% )
>
> 4.240273854 seconds time elapsed ( +- 0.13% )
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
> 17.348690 task-clock (msec) # 0.019 CPUs utilized ( +- 0.66% )
> 5 context-switches # 0.310 K/sec ( +- 0.84% )
> 0 cpu-migrations # 0.009 K/sec ( +- 16.55% )
> 74 page-faults # 0.004 M/sec ( +- 0.11% )
> 45065287 cycles # 2.598 GHz ( +- 0.66% )
> 60755389 instructions # 1.35 insn per cycle ( +- 0.05% )
> 14747865 branches # 850.085 M/sec ( +- 0.05% )
> 19272 branch-misses # 0.13% of all branches ( +- 0.13% )
>
> 0.935251375 seconds time elapsed ( +- 0.07% )
>
> Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
>
> 16626.042731 task-clock (msec) # 0.607 CPUs utilized ( +- 0.03% )
> 3291020 context-switches # 0.198 M/sec ( +- 0.05% )
> 1 cpu-migrations # 0.000 K/sec ( +- 0.50% )
> 85 page-faults # 0.005 K/sec ( +- 0.16% )
> 30581044838 cycles # 1.839 GHz ( +- 0.05% )
> 34962744631 instructions # 1.14 insn per cycle ( +- 0.01% )
> 6483883671 branches # 389.984 M/sec ( +- 0.02% )
> 99624551 branch-misses # 1.54% of all branches ( +- 0.17% )
>
> 27.370305077 seconds time elapsed ( +- 0.01% )
>
>
> *Before* this patchset:
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
> 21.587934 task-clock (msec) # 0.005 CPUs utilized ( +- 0.72% )
> 6 context-switches # 0.281 K/sec ( +- 0.28% )
> 1 cpu-migrations # 0.047 K/sec ( +- 0.50% )
> 73 page-faults # 0.003 M/sec ( +- 0.12% )
> 56080697 cycles # 2.598 GHz ( +- 0.72% )
> 61605150 instructions # 1.10 insn per cycle ( +- 0.05% )
> 14950196 branches # 692.526 M/sec ( +- 0.05% )
> 19410 branch-misses # 0.13% of all branches ( +- 0.18% )
>
> 4.603530546 seconds time elapsed ( +- 0.11% )
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
> 20.988297 task-clock (msec) # 0.006 CPUs utilized ( +- 0.81% )
> 7 context-switches # 0.316 K/sec ( +- 0.54% )
> 1 cpu-migrations # 0.048 K/sec ( +- 0.70% )
> 73 page-faults # 0.003 M/sec ( +- 0.11% )
> 54512166 cycles # 2.597 GHz ( +- 0.81% )
> 61440941 instructions # 1.13 insn per cycle ( +- 0.08% )
> 14906043 branches # 710.207 M/sec ( +- 0.08% )
> 19927 branch-misses # 0.13% of all branches ( +- 0.17% )
>
> 3.438041238 seconds time elapsed ( +- 1.11% )
>
> Performance counter stats for 'taskset -c 0 head -c 20G /dev/zero' (200 runs):
>
> 17364.040855 task-clock (msec) # 0.624 CPUs utilized ( +- 0.02% )
> 3340375 context-switches # 0.192 M/sec ( +- 0.06% )
> 1 cpu-migrations # 0.000 K/sec
> 85 page-faults # 0.005 K/sec ( +- 0.15% )
> 32077623335 cycles # 1.847 GHz ( +- 0.03% )
> 35121047596 instructions # 1.09 insn per cycle ( +- 0.01% )
> 6519872824 branches # 375.481 M/sec ( +- 0.02% )
> 101877022 branch-misses # 1.56% of all branches ( +- 0.14% )
>
> 27.842745343 seconds time elapsed ( +- 0.02% )
>
>
Are these actually the numbers for this patch set? It seems like
you have been using the same numbers for the last several
releases. I can understand the "before" being mostly the same,
but since we have factored out the refactor portion of it, the
numbers for the "after" should have deviated, as I find it highly
unlikely the numbers are exactly the same down to the nanosecond
as in the previous patch set.
Also it wouldn't hurt to have an explanation for the 3.4->0.9
second performance change, as the samples don't seem to match up
with the elapsed time data.
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-10-28 15:30 ` [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Alexander Duyck
@ 2024-10-29 9:36 ` Yunsheng Lin
2024-10-29 15:45 ` Alexander Duyck
0 siblings, 1 reply; 23+ messages in thread
From: Yunsheng Lin @ 2024-10-29 9:36 UTC (permalink / raw)
To: Alexander Duyck
Cc: davem, kuba, pabeni, netdev, linux-kernel, Shuah Khan,
Andrew Morton, Linux-MM
On 2024/10/28 23:30, Alexander Duyck wrote:
...
>>
>>
>
> Is this actually the numbers for this patch set? Seems like you have
> been using the same numbers for the last several releases. I can
Yes, as the recent refactoring doesn't seem big enough to affect
the numbers, the perf data was reused for the last several releases.
> understand the "before" being mostly the same, but since we have
Because of rebasing onto the latest net-next tree, even the
'before' numbers might not be the same, as the testing seems
sensitive to other changes, like binary size and page allocator
changes between versions.
So 'before' and 'after' might need to use the same kernel and config.
> factored out the refactor portion of it the numbers for the "after"
> should have deviated as I find it highly unlikely the numbers are
> exactly the same down to the nanosecond. from the previous patch set.
Below is the performance data for Part-1 with the latest net-next:
Before this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
17.990790 task-clock (msec) # 0.003 CPUs utilized ( +- 0.19% )
8 context-switches # 0.444 K/sec ( +- 0.09% )
0 cpu-migrations # 0.000 K/sec ( +-100.00% )
81 page-faults # 0.004 M/sec ( +- 0.09% )
46712295 cycles # 2.596 GHz ( +- 0.19% )
34466157 instructions # 0.74 insn per cycle ( +- 0.01% )
8011755 branches # 445.325 M/sec ( +- 0.01% )
39913 branch-misses # 0.50% of all branches ( +- 0.07% )
6.382252558 seconds time elapsed ( +- 0.07% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
17.638466 task-clock (msec) # 0.003 CPUs utilized ( +- 0.01% )
8 context-switches # 0.451 K/sec ( +- 0.20% )
0 cpu-migrations # 0.001 K/sec ( +- 70.53% )
81 page-faults # 0.005 M/sec ( +- 0.08% )
45794305 cycles # 2.596 GHz ( +- 0.01% )
34435077 instructions # 0.75 insn per cycle ( +- 0.00% )
8004416 branches # 453.805 M/sec ( +- 0.00% )
39758 branch-misses # 0.50% of all branches ( +- 0.06% )
5.328976590 seconds time elapsed ( +- 0.60% )
After this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
18.647432 task-clock (msec) # 0.003 CPUs utilized ( +- 1.11% )
8 context-switches # 0.422 K/sec ( +- 0.36% )
0 cpu-migrations # 0.005 K/sec ( +- 22.54% )
81 page-faults # 0.004 M/sec ( +- 0.08% )
48418108 cycles # 2.597 GHz ( +- 1.11% )
35889299 instructions # 0.74 insn per cycle ( +- 0.11% )
8318363 branches # 446.086 M/sec ( +- 0.11% )
19263 branch-misses # 0.23% of all branches ( +- 0.13% )
5.624666079 seconds time elapsed ( +- 0.07% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
18.466768 task-clock (msec) # 0.007 CPUs utilized ( +- 1.23% )
8 context-switches # 0.428 K/sec ( +- 0.26% )
0 cpu-migrations # 0.002 K/sec ( +- 34.73% )
81 page-faults # 0.004 M/sec ( +- 0.09% )
47949220 cycles # 2.597 GHz ( +- 1.23% )
35859039 instructions # 0.75 insn per cycle ( +- 0.12% )
8309086 branches # 449.948 M/sec ( +- 0.11% )
19246 branch-misses # 0.23% of all branches ( +- 0.08% )
2.573546035 seconds time elapsed ( +- 0.04% )
>
> Also it wouldn't hurt to have an explanation for the 3.4->0.9 second
> performance change as it seems like the samples don't seem to match up
> with the elapsed time data.
As there is also a 4.6->3.4 second performance change for the
'before' part, I did not think much of it.
I am guessing some timing in the ptr_ring implementation or the
cpu cache causes the above performance change?
When I use the same cpu for both the pop and push threads, the
performance change doesn't seem to exist anymore, and neither
does the performance improvement:
After this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
13.293402 task-clock (msec) # 0.002 CPUs utilized ( +- 5.05% )
7 context-switches # 0.534 K/sec ( +- 1.41% )
0 cpu-migrations # 0.015 K/sec ( +-100.00% )
80 page-faults # 0.006 M/sec ( +- 0.38% )
34494793 cycles # 2.595 GHz ( +- 5.05% )
9663299 instructions # 0.28 insn per cycle ( +- 1.45% )
1767284 branches # 132.944 M/sec ( +- 1.70% )
19798 branch-misses # 1.12% of all branches ( +- 1.18% )
8.119681413 seconds time elapsed ( +- 0.01% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
12.289096 task-clock (msec) # 0.002 CPUs utilized ( +- 0.07% )
7 context-switches # 0.570 K/sec ( +- 2.13% )
0 cpu-migrations # 0.033 K/sec ( +- 66.67% )
81 page-faults # 0.007 M/sec ( +- 0.43% )
31886319 cycles # 2.595 GHz ( +- 0.07% )
9468850 instructions # 0.30 insn per cycle ( +- 0.06% )
1723487 branches # 140.245 M/sec ( +- 0.05% )
19263 branch-misses # 1.12% of all branches ( +- 0.47% )
8.119686950 seconds time elapsed ( +- 0.01% )
Before this patchset:
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
13.320328 task-clock (msec) # 0.002 CPUs utilized ( +- 5.00% )
7 context-switches # 0.541 K/sec ( +- 1.85% )
0 cpu-migrations # 0.008 K/sec ( +-100.00% )
80 page-faults # 0.006 M/sec ( +- 0.36% )
34572091 cycles # 2.595 GHz ( +- 5.01% )
9664910 instructions # 0.28 insn per cycle ( +- 1.51% )
1768276 branches # 132.750 M/sec ( +- 1.80% )
19592 branch-misses # 1.11% of all branches ( +- 1.33% )
8.119686381 seconds time elapsed ( +- 0.01% )
Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
12.306471 task-clock (msec) # 0.002 CPUs utilized ( +- 0.08% )
7 context-switches # 0.585 K/sec ( +- 1.85% )
0 cpu-migrations # 0.000 K/sec
80 page-faults # 0.007 M/sec ( +- 0.28% )
31937686 cycles # 2.595 GHz ( +- 0.08% )
9462218 instructions # 0.30 insn per cycle ( +- 0.08% )
1721989 branches # 139.925 M/sec ( +- 0.07% )
19114 branch-misses # 1.11% of all branches ( +- 0.31% )
8.118897296 seconds time elapsed ( +- 0.00% )
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-10-29 9:36 ` Yunsheng Lin
@ 2024-10-29 15:45 ` Alexander Duyck
0 siblings, 0 replies; 23+ messages in thread
From: Alexander Duyck @ 2024-10-29 15:45 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Shuah Khan,
Andrew Morton, Linux-MM
On Tue, Oct 29, 2024 at 2:36 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/10/28 23:30, Alexander Duyck wrote:
>
> ...
>
> >>
> >>
> >
> > Is this actually the numbers for this patch set? Seems like you have
> > been using the same numbers for the last several releases. I can
>
> Yes, as the recent refactoring doesn't seem big enough to affect
> the numbers, the perf data was reused for the last several releases.
>
> > understand the "before" being mostly the same, but since we have
>
> Because of rebasing onto the latest net-next tree, even the
> 'before' numbers might not be the same, as the testing seems
> sensitive to other changes, like binary size and page allocator
> changes between versions.
>
> So 'before' and 'after' might need to use the same kernel and config.
>
> > factored out the refactor portion of it the numbers for the "after"
> > should have deviated as I find it highly unlikely the numbers are
> > exactly the same down to the nanosecond. from the previous patch set.
> Below is the performance data for Part-1 with the latest net-next:
>
> Before this patchset:
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
> 17.990790 task-clock (msec) # 0.003 CPUs utilized ( +- 0.19% )
> 8 context-switches # 0.444 K/sec ( +- 0.09% )
> 0 cpu-migrations # 0.000 K/sec ( +-100.00% )
> 81 page-faults # 0.004 M/sec ( +- 0.09% )
> 46712295 cycles # 2.596 GHz ( +- 0.19% )
> 34466157 instructions # 0.74 insn per cycle ( +- 0.01% )
> 8011755 branches # 445.325 M/sec ( +- 0.01% )
> 39913 branch-misses # 0.50% of all branches ( +- 0.07% )
>
> 6.382252558 seconds time elapsed ( +- 0.07% )
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
> 17.638466 task-clock (msec) # 0.003 CPUs utilized ( +- 0.01% )
> 8 context-switches # 0.451 K/sec ( +- 0.20% )
> 0 cpu-migrations # 0.001 K/sec ( +- 70.53% )
> 81 page-faults # 0.005 M/sec ( +- 0.08% )
> 45794305 cycles # 2.596 GHz ( +- 0.01% )
> 34435077 instructions # 0.75 insn per cycle ( +- 0.00% )
> 8004416 branches # 453.805 M/sec ( +- 0.00% )
> 39758 branch-misses # 0.50% of all branches ( +- 0.06% )
>
> 5.328976590 seconds time elapsed ( +- 0.60% )
>
>
> After this patchset:
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
> 18.647432 task-clock (msec) # 0.003 CPUs utilized ( +- 1.11% )
> 8 context-switches # 0.422 K/sec ( +- 0.36% )
> 0 cpu-migrations # 0.005 K/sec ( +- 22.54% )
> 81 page-faults # 0.004 M/sec ( +- 0.08% )
> 48418108 cycles # 2.597 GHz ( +- 1.11% )
> 35889299 instructions # 0.74 insn per cycle ( +- 0.11% )
> 8318363 branches # 446.086 M/sec ( +- 0.11% )
> 19263 branch-misses # 0.23% of all branches ( +- 0.13% )
>
> 5.624666079 seconds time elapsed ( +- 0.07% )
>
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
> 18.466768 task-clock (msec) # 0.007 CPUs utilized ( +- 1.23% )
> 8 context-switches # 0.428 K/sec ( +- 0.26% )
> 0 cpu-migrations # 0.002 K/sec ( +- 34.73% )
> 81 page-faults # 0.004 M/sec ( +- 0.09% )
> 47949220 cycles # 2.597 GHz ( +- 1.23% )
> 35859039 instructions # 0.75 insn per cycle ( +- 0.12% )
> 8309086 branches # 449.948 M/sec ( +- 0.11% )
> 19246 branch-misses # 0.23% of all branches ( +- 0.08% )
>
> 2.573546035 seconds time elapsed ( +- 0.04% )
>
Interesting. It doesn't look like too much changed in terms of most of
the metrics other than the fact that we reduced the number of branch
misses by just over half.
> >
> > Also it wouldn't hurt to have an explanation for the 3.4->0.9 second
> > performance change as it seems like the samples don't seem to match up
> > with the elapsed time data.
>
> As there is also a 4.6->3.4 second performance change for the
> 'before' part, I did not think much of it.
>
> I am guessing some timing in the ptr_ring implementation or the
> cpu cache causes the above performance change?
>
> When I use the same cpu for both the pop and push threads, the
> performance change doesn't seem to exist anymore, and neither
> does the performance improvement:
>
> After this patchset:
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
>
> 13.293402 task-clock (msec) # 0.002 CPUs utilized ( +- 5.05% )
> 7 context-switches # 0.534 K/sec ( +- 1.41% )
> 0 cpu-migrations # 0.015 K/sec ( +-100.00% )
> 80 page-faults # 0.006 M/sec ( +- 0.38% )
> 34494793 cycles # 2.595 GHz ( +- 5.05% )
> 9663299 instructions # 0.28 insn per cycle ( +- 1.45% )
> 1767284 branches # 132.944 M/sec ( +- 1.70% )
> 19798 branch-misses # 1.12% of all branches ( +- 1.18% )
>
> 8.119681413 seconds time elapsed ( +- 0.01% )
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
>
> 12.289096 task-clock (msec) # 0.002 CPUs utilized ( +- 0.07% )
> 7 context-switches # 0.570 K/sec ( +- 2.13% )
> 0 cpu-migrations # 0.033 K/sec ( +- 66.67% )
> 81 page-faults # 0.007 M/sec ( +- 0.43% )
> 31886319 cycles # 2.595 GHz ( +- 0.07% )
> 9468850 instructions # 0.30 insn per cycle ( +- 0.06% )
> 1723487 branches # 140.245 M/sec ( +- 0.05% )
> 19263 branch-misses # 1.12% of all branches ( +- 0.47% )
>
> 8.119686950 seconds time elapsed ( +- 0.01% )
>
> Before this patchset:
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
>
> 13.320328 task-clock (msec) # 0.002 CPUs utilized ( +- 5.00% )
> 7 context-switches # 0.541 K/sec ( +- 1.85% )
> 0 cpu-migrations # 0.008 K/sec ( +-100.00% )
> 80 page-faults # 0.006 M/sec ( +- 0.36% )
> 34572091 cycles # 2.595 GHz ( +- 5.01% )
> 9664910 instructions # 0.28 insn per cycle ( +- 1.51% )
> 1768276 branches # 132.750 M/sec ( +- 1.80% )
> 19592 branch-misses # 1.11% of all branches ( +- 1.33% )
>
> 8.119686381 seconds time elapsed ( +- 0.01% )
>
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
>
> 12.306471 task-clock (msec) # 0.002 CPUs utilized ( +- 0.08% )
> 7 context-switches # 0.585 K/sec ( +- 1.85% )
> 0 cpu-migrations # 0.000 K/sec
> 80 page-faults # 0.007 M/sec ( +- 0.28% )
> 31937686 cycles # 2.595 GHz ( +- 0.08% )
> 9462218 instructions # 0.30 insn per cycle ( +- 0.08% )
> 1721989 branches # 139.925 M/sec ( +- 0.07% )
> 19114 branch-misses # 1.11% of all branches ( +- 0.31% )
>
> 8.118897296 seconds time elapsed ( +- 0.00% )
That isn't too surprising. Most likely you are at the mercy of the
scheduler, and you are just waiting for it to cycle back and forth
between producer and consumer in order to complete the test.
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (7 preceding siblings ...)
2024-10-28 15:30 ` [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Alexander Duyck
@ 2024-11-05 23:57 ` Jakub Kicinski
2024-11-08 0:02 ` Alexander Duyck
2024-11-11 22:20 ` patchwork-bot+netdevbpf
9 siblings, 1 reply; 23+ messages in thread
From: Jakub Kicinski @ 2024-11-05 23:57 UTC (permalink / raw)
To: Andrew Morton, Linux-MM
Cc: Yunsheng Lin, davem, pabeni, netdev, linux-kernel,
Alexander Duyck, Shuah Khan
On Mon, 28 Oct 2024 19:53:35 +0800 Yunsheng Lin wrote:
> This is part 1 of "Replace page_frag with page_frag_cache",
> which mainly contains refactoring and optimization of the
> page_frag API implementation before the replacement.
Looks like Alex is happy with all of these patches. Since
page_frag_cache is primarily used in networking I think it's
okay for us to apply it but I wanted to ask if anyone:
- thinks this shouldn't go in;
- needs more time to review;
- prefers to take it via their own tree.
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-11-05 23:57 ` Jakub Kicinski
@ 2024-11-08 0:02 ` Alexander Duyck
0 siblings, 0 replies; 23+ messages in thread
From: Alexander Duyck @ 2024-11-08 0:02 UTC (permalink / raw)
To: Jakub Kicinski
Cc: Andrew Morton, Linux-MM, Yunsheng Lin, davem, pabeni, netdev,
linux-kernel, Shuah Khan
On Tue, Nov 5, 2024 at 3:57 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon, 28 Oct 2024 19:53:35 +0800 Yunsheng Lin wrote:
> > This is part 1 of "Replace page_frag with page_frag_cache",
> > which mainly contains refactoring and optimization of the
> > page_frag API implementation before the replacement.
>
> Looks like Alex is happy with all of these patches. Since
> page_frag_cache is primarily used in networking I think it's
> okay for us to apply it but I wanted to ask if anyone:
> - thinks this shouldn't go in;
> - needs more time to review;
> - prefers to take it via their own tree.
Yeah. I was happy with the set. Just curious about the numbers as they
hadn't been updated, but I am satisfied with the numbers provided
after I pointed that out.
- Alex
* Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
` (8 preceding siblings ...)
2024-11-05 23:57 ` Jakub Kicinski
@ 2024-11-11 22:20 ` patchwork-bot+netdevbpf
9 siblings, 0 replies; 23+ messages in thread
From: patchwork-bot+netdevbpf @ 2024-11-11 22:20 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, alexander.duyck,
skhan, akpm, linux-mm
Hello:
This series was applied to netdev/net-next.git (main)
by Jakub Kicinski <kuba@kernel.org>:
On Mon, 28 Oct 2024 19:53:35 +0800 you wrote:
> This is part 1 of "Replace page_frag with page_frag_cache",
> which mainly contains refactoring and optimization of the
> page_frag API implementation before the replacement.
>
> As per the discussion in [1], it would be better to target the
> net-next tree to get more testing, as all the callers of the
> page_frag API are in networking, and the chance of conflicting
> with the MM tree seems low as the implementation of the
> page_frag API is quite self-contained.
>
> [...]
Here is the summary with links:
- [net-next,v23,1/7] mm: page_frag: add a test module for page_frag
https://git.kernel.org/netdev/net-next/c/7fef0dec415c
- [net-next,v23,2/7] mm: move the page fragment allocator from page_alloc into its own file
https://git.kernel.org/netdev/net-next/c/65941f10caf2
- [net-next,v23,3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
https://git.kernel.org/netdev/net-next/c/8218f62c9c9b
- [net-next,v23,4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
https://git.kernel.org/netdev/net-next/c/3d18dfe69ce4
- [net-next,v23,5/7] xtensa: remove the get_order() implementation
https://git.kernel.org/netdev/net-next/c/49e302be73f1
- [net-next,v23,6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
https://git.kernel.org/netdev/net-next/c/0c3ce2f50261
- [net-next,v23,7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
https://git.kernel.org/netdev/net-next/c/ec397ea00cb3
You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-10-28 11:53 ` [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag Yunsheng Lin
@ 2024-11-14 16:02 ` Mark Brown
2024-11-15 9:03 ` Yunsheng Lin
0 siblings, 1 reply; 23+ messages in thread
From: Mark Brown @ 2024-11-14 16:02 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On Mon, Oct 28, 2024 at 07:53:36PM +0800, Yunsheng Lin wrote:
> The testing is done by ensuring that a fragment allocated
> from a page_frag_cache instance is pushed into a ptr_ring
> instance by a kthread bound to a specified cpu, and a kthread
> bound to another specified cpu pops the fragment from the
> ptr_ring and frees it.
This is breaking the build in -next on at least arm64 and x86_64,
since it's trying to build an out-of-tree kernel module which is
included in the selftests directory; the kselftest build system
just isn't set up to do that in a sensible and robust fashion. The
module should probably be in the main kernel tree and enabled by
the config file for the mm tests.
KernelCI sees:
***
*** Configuration file ".config" not found!
***
*** Please run some configurator (e.g. "make oldconfig" or
*** "make menuconfig" or "make xconfig").
***
Makefile:810: include/config/auto.conf.cmd: No such file or directory
(see https://storage.kernelci.org/next/master/next-20241114/x86_64/x86_64_defconfig%2Bkselftest/gcc-12/logs/kselftest.log)
and I've seen:
ERROR: Kernel configuration is invalid.
include/generated/autoconf.h or include/config/auto.conf are missing.
Run 'make oldconfig && make prepare' on kernel src to fix it.
make[3]: *** [Makefile:788: include/config/auto.conf] Error 1
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-11-14 16:02 ` Mark Brown
@ 2024-11-15 9:03 ` Yunsheng Lin
2024-11-15 14:12 ` Mark Brown
0 siblings, 1 reply; 23+ messages in thread
From: Yunsheng Lin @ 2024-11-15 9:03 UTC (permalink / raw)
To: Mark Brown
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On 2024/11/15 0:02, Mark Brown wrote:
> On Mon, Oct 28, 2024 at 07:53:36PM +0800, Yunsheng Lin wrote:
>> The testing is done by ensuring that a fragment allocated
>> from a page_frag_cache instance is pushed into a ptr_ring
>> instance by a kthread bound to a specified CPU, while a kthread
>> bound to another specified CPU pops the fragment from the
>> ptr_ring and frees it.
Hi,
Thanks for reporting.
>
> This is breaking the build in -next on at least arm64 and x86_64 since
> it's trying to build an out-of-tree kernel module which is included in
> the selftests directory, the kselftest build system just isn't set up to
> do that in a sensible and robust fashion. The module should probably be
I tried the kernel modules below in the testing directory; they seem to
have a similar problem if the kernel has not been compiled yet.
make -C tools/testing/nvdimm
make -C tools/testing/selftests/bpf/bpf_testmod/
make -C tools/testing/selftests/livepatch/test_modules/
> in the main kernel tree and enabled by the config file for the mm tests.
As discussed in [1], this module is not really a valid kernel module, as
it returns '-EAGAIN', which is the main reason it is set up in the
selftests instead of the main kernel tree.
1. https://lore.kernel.org/all/CAKgT0UdL77J4reY0JRaQfCJAxww3R=jOkHfDmkuJHSkd1uE55A@mail.gmail.com/
>
> KernelCI sees:
>
> ***
> *** Configuration file ".config" not found!
> ***
> *** Please run some configurator (e.g. "make oldconfig" or
> *** "make menuconfig" or "make xconfig").
> ***
> Makefile:810: include/config/auto.conf.cmd: No such file or directory
>
> (see https://storage.kernelci.org/next/master/next-20241114/x86_64/x86_64_defconfig%2Bkselftest/gcc-12/logs/kselftest.log)
>
> and I've seen:
>
> ERROR: Kernel configuration is invalid.
> include/generated/autoconf.h or include/config/auto.conf are missing.
> Run 'make oldconfig && make prepare' on kernel src to fix it.
>
> make[3]: *** [Makefile:788: include/config/auto.conf] Error 1
As above, I am not sure there is an elegant way to avoid the above error
in the selftest core. One possible way is to skip compiling, as below,
since tools/testing/selftests/mm/test_page_frag.sh already skips the
page_frag testing if the test module is not compiled:
diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
index 58dda74d50a3..ab5f457bd39e 100644
--- a/tools/testing/selftests/mm/page_frag/Makefile
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -7,6 +7,8 @@ else
 Q = @
 endif
 
+ifneq (,$(wildcard $(KDIR)/Module.symvers))
+
 MODULES = page_frag_test.ko
 
 obj-m += page_frag_test.o
@@ -16,3 +18,10 @@ all:
 
 clean:
 	+$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean
+
+else
+
+all:
+	$(warning Please build the kernel before building the test ko)
+
+endif
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-11-15 9:03 ` Yunsheng Lin
@ 2024-11-15 14:12 ` Mark Brown
2024-11-15 22:34 ` Jakub Kicinski
2024-11-16 4:59 ` Yunsheng Lin
0 siblings, 2 replies; 23+ messages in thread
From: Mark Brown @ 2024-11-15 14:12 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On Fri, Nov 15, 2024 at 05:03:34PM +0800, Yunsheng Lin wrote:
> On 2024/11/15 0:02, Mark Brown wrote:
> > On Mon, Oct 28, 2024 at 07:53:36PM +0800, Yunsheng Lin wrote:
> > This is breaking the build in -next on at least arm64 and x86_64 since
> > it's trying to build an out-of-tree kernel module which is included in
> > the selftests directory, the kselftest build system just isn't set up to
> > do that in a sensible and robust fashion. The module should probably be
> I tried the kernel modules below in the testing directory; they seem to
> have a similar problem if the kernel has not been compiled yet.
> make -C tools/testing/nvdimm
This is not included in the top level selftests Makefile.
> make -C tools/testing/selftests/bpf/bpf_testmod/
The BPF tests aren't built as standard due to a number of issues;
originally they required very shiny toolchains, though that's starting
to come under control.
> make -C tools/testing/selftests/livepatch/test_modules/
Ah, this one is actually using some framework support for building
modules - it's putting the modules in a separate directory and using
TEST_GEN_MODS_DIR. Crucially, though, it has guards which ensure that
we don't try to build the modules if KDIR doesn't exist - you should
follow that pattern.
> > in the main kernel tree and enabled by the config file for the mm tests.
> As discussed in [1], this module is not really a valid kernel module, as
> it returns '-EAGAIN', which is the main reason it is set up in the
> selftests instead of the main kernel tree.
Sure, we have other test stuff in the main kernel.
> As above, I am not sure there is an elegant way to avoid the above error
> in the selftest core. One possible way is to skip compiling, as below,
> since tools/testing/selftests/mm/test_page_frag.sh already skips the
> page_frag testing if the test module is not compiled:
Since the tests currently don't build, the test systems are by and
large not getting as far as trying to run anything; the entire mm suite
is just getting skipped.
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-11-15 14:12 ` Mark Brown
@ 2024-11-15 22:34 ` Jakub Kicinski
2024-11-16 5:08 ` Yunsheng Lin
2024-11-16 4:59 ` Yunsheng Lin
1 sibling, 1 reply; 23+ messages in thread
From: Jakub Kicinski @ 2024-11-15 22:34 UTC (permalink / raw)
To: Yunsheng Lin
Cc: Mark Brown, davem, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On Fri, 15 Nov 2024 14:12:09 +0000 Mark Brown wrote:
> > As above, I am not sure there is an elegant way to avoid the above error
> > in the selftest core. One possible way is to skip compiling, as below,
> > since tools/testing/selftests/mm/test_page_frag.sh already skips the
> > page_frag testing if the test module is not compiled:
>
> Since the tests currently don't build, the test systems are by and
> large not getting as far as trying to run anything; the entire mm suite
> is just getting skipped.
Yunsheng, please try to resolve this ASAP, or just send a revert
removing the selftest for now. We can't ship net-next to Linus breaking
other subsystem's selftests, and merge window will likely open next
week.
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-11-15 14:12 ` Mark Brown
2024-11-15 22:34 ` Jakub Kicinski
@ 2024-11-16 4:59 ` Yunsheng Lin
1 sibling, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-11-16 4:59 UTC (permalink / raw)
To: Mark Brown, Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On 11/15/2024 10:12 PM, Mark Brown wrote:
...
>
>> make -C tools/testing/selftests/livepatch/test_modules/
>
> Ah, this one is actually using some framework support for building
> modules - it's putting the modules in a separate directory and using
> TEST_GEN_MODS_DIR. Crucially, though, it has guards which ensure that
> we don't try to build the modules if KDIR doesn't exist - you should
> follow that pattern.
Will add a check around the TEST_GEN_MODS_DIR setup for whether to
build the test modules, to avoid an rsync copy error when compiling the
test module needs to be skipped.
>
>>> in the main kernel tree and enabled by the config file for the mm tests.
>
>> As discussed in [1], this module is not really a valid kernel module, as
>> it returns '-EAGAIN', which is the main reason it is set up in the
>> selftests instead of the main kernel tree.
>
> Sure, we have other test stuff in the main kernel.
>
>> As above, I am not sure there is an elegant way to avoid the above error
>> in the selftest core. One possible way is to skip compiling, as below,
>> since tools/testing/selftests/mm/test_page_frag.sh already skips the
>> page_frag testing if the test module is not compiled:
>
> Since the tests currently don't build, the test systems are by and
> large not getting as far as trying to run anything; the entire mm suite
> is just getting skipped.
I just sent a fix for the above; it would be good if you could test
whether it fixes the problem.
I tested it with both the latest net-next kernel and an older host
kernel, and it seems to work.
1.
https://lore.kernel.org/lkml/20241116042314.100400-1-yunshenglin0825@gmail.com/
* Re: [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag
2024-11-15 22:34 ` Jakub Kicinski
@ 2024-11-16 5:08 ` Yunsheng Lin
0 siblings, 0 replies; 23+ messages in thread
From: Yunsheng Lin @ 2024-11-16 5:08 UTC (permalink / raw)
To: Jakub Kicinski, Yunsheng Lin
Cc: Mark Brown, davem, pabeni, netdev, linux-kernel, Andrew Morton,
Alexander Duyck, Linux-MM, Alexander Duyck, Shuah Khan,
linux-kselftest, Aishwarya.TCV
On 11/16/2024 6:34 AM, Jakub Kicinski wrote:
> On Fri, 15 Nov 2024 14:12:09 +0000 Mark Brown wrote:
>>> As above, I am not sure there is an elegant way to avoid the above error
>>> in the selftest core. One possible way is to skip compiling, as below,
>>> since tools/testing/selftests/mm/test_page_frag.sh already skips the
>>> page_frag testing if the test module is not compiled:
>>
>> Since the tests currently don't build, the test systems are by and
>> large not getting as far as trying to run anything; the entire mm suite
>> is just getting skipped.
>
> Yunsheng, please try to resolve this ASAP, or just send a revert
> removing the selftest for now. We can't ship net-next to Linus breaking
> other subsystem's selftests, and merge window will likely open next
> week.
Sure, let me try to fix it first; the revert can be used as a last
resort. A possible fix was sent in [1], but somehow I missed adding the
netdev ML to it :(
1.
https://lore.kernel.org/lkml/20241116042314.100400-1-yunshenglin0825@gmail.com/
>
* Re: [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
2024-10-28 11:53 ` [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
@ 2025-01-23 19:15 ` Florian Fainelli
2025-01-24 9:52 ` Yunsheng Lin
0 siblings, 1 reply; 23+ messages in thread
From: Florian Fainelli @ 2025-01-23 19:15 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni, Eric Dumazet
Cc: netdev, linux-kernel, Alexander Duyck, Andrew Morton, Linux-MM,
Alexander Duyck
Hi Yunsheng,
On 10/28/24 04:53, Yunsheng Lin wrote:
> We are about to use the page_frag_alloc_*() API not just to
> allocate memory for skb->data, but also to do the memory
> allocation for skb frags. Currently the implementation of
> page_frag in the mm subsystem runs the offset as a countdown
> rather than a count-up value; there may be several advantages
> to that as mentioned in [1], but it also has some disadvantages,
> for example, it may prevent skb frag coalescing and more
> correct cache prefetching.
>
> We have a trade-off to make in order to have a unified
> implementation and API for page_frag, so use an initial zero
> offset in this patch; the following patch will try to
> mitigate the disadvantages as much as possible.
>
> 1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> CC: Andrew Morton <akpm@linux-foundation.org>
> CC: Linux-MM <linux-mm@kvack.org>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Sorry for the late feedback: this patch causes the bgmac driver's
.ndo_open() function to return -ENOMEM; the call trace looks like this:
bgmac_open
-> bgmac_dma_init
-> bgmac_dma_rx_skb_for_slot
-> netdev_alloc_frag
BGMAC_RX_ALLOC_SIZE = 10048 and PAGE_FRAG_CACHE_MAX_SIZE = 32768.
Eventually we land into __page_frag_alloc_align() with the following
parameters across multiple successive calls:
__page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=0
__page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=10048
__page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=20096
__page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=30144
So in that case we do indeed have offset + fragsz (40192) > size (32768)
and so we would eventually return NULL.
Any idea on how to best fix that within the bgmac driver?
Thanks!
> ---
> mm/page_frag_cache.c | 46 ++++++++++++++++++++++----------------------
> 1 file changed, 23 insertions(+), 23 deletions(-)
>
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index 609a485cd02a..4c8e04379cb3 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> unsigned int fragsz, gfp_t gfp_mask,
> unsigned int align_mask)
> {
> +#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> + unsigned int size = nc->size;
> +#else
> unsigned int size = PAGE_SIZE;
> +#endif
> + unsigned int offset;
> struct page *page;
> - int offset;
>
> if (unlikely(!nc->va)) {
> refill:
> @@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> /* reset page count bias and offset to start of new frag */
> nc->pfmemalloc = page_is_pfmemalloc(page);
> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> - nc->offset = size;
> + nc->offset = 0;
> }
>
> - offset = nc->offset - fragsz;
> - if (unlikely(offset < 0)) {
> + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
> + if (unlikely(offset + fragsz > size)) {
> + if (unlikely(fragsz > PAGE_SIZE)) {
> + /*
> + * The caller is trying to allocate a fragment
> + * with fragsz > PAGE_SIZE but the cache isn't big
> + * enough to satisfy the request, this may
> + * happen in low memory conditions.
> + * We don't release the cache page because
> + * it could make memory pressure worse
> + * so we simply return NULL here.
> + */
> + return NULL;
> + }
> +
> page = virt_to_page(nc->va);
>
> if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
> @@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> goto refill;
> }
>
> -#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> - /* if size can vary use size else just use PAGE_SIZE */
> - size = nc->size;
> -#endif
> /* OK, page count is 0, we can safely set it */
> set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
>
> /* reset page count bias and offset to start of new frag */
> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> - offset = size - fragsz;
> - if (unlikely(offset < 0)) {
> - /*
> - * The caller is trying to allocate a fragment
> - * with fragsz > PAGE_SIZE but the cache isn't big
> - * enough to satisfy the request, this may
> - * happen in low memory conditions.
> - * We don't release the cache page because
> - * it could make memory pressure worse
> - * so we simply return NULL here.
> - */
> - return NULL;
> - }
> + offset = 0;
> }
>
> nc->pagecnt_bias--;
> - offset &= align_mask;
> - nc->offset = offset;
> + nc->offset = offset + fragsz;
>
> return nc->va + offset;
> }
--
Florian
* Re: [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
2025-01-23 19:15 ` Florian Fainelli
@ 2025-01-24 9:52 ` Yunsheng Lin
2025-01-24 18:55 ` Florian Fainelli
0 siblings, 1 reply; 23+ messages in thread
From: Yunsheng Lin @ 2025-01-24 9:52 UTC (permalink / raw)
To: Florian Fainelli, davem, kuba, pabeni, Eric Dumazet
Cc: netdev, linux-kernel, Alexander Duyck, Andrew Morton, Linux-MM,
Alexander Duyck
On 2025/1/24 3:15, Florian Fainelli wrote:
>
> Sorry for the late feedback: this patch causes the bgmac driver's .ndo_open() function to return -ENOMEM; the call trace looks like this:
Hi, Florian
Thanks for the report.
>
> bgmac_open
> -> bgmac_dma_init
> -> bgmac_dma_rx_skb_for_slot
> -> netdev_alloc_frag
>
> BGMAC_RX_ALLOC_SIZE = 10048 and PAGE_FRAG_CACHE_MAX_SIZE = 32768.
I guess BGMAC_RX_ALLOC_SIZE being bigger than PAGE_SIZE is the
problem here, as the frag API does not really support allocating a
fragment bigger than PAGE_SIZE: it falls back to allocating a base
page when the order-3 compound page allocation fails, see
__page_frag_cache_refill().
Also, it seems strange that the bgmac driver always uses the jumbo
frame size to allocate fragments; wouldn't it be more appropriate to
allocate the fragment according to the MTU?
>
> Eventually we land into __page_frag_alloc_align() with the following parameters across multiple successive calls:
>
> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=0
> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=10048
> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=20096
> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=30144
>
> So in that case we do indeed have offset + fragsz (40192) > size (32768) and so we would eventually return NULL.
It seems changing the '(unlikely(offset < 0))' check to
"fragsz > PAGE_SIZE" causes the bgmac driver to break more easily
here. But the bgmac driver would likely have broken before this
patch too, if severely fragmented system memory had forced the fall
back to allocating a base page.
>
> Any idea on how to best fix that within the bgmac driver?
Maybe use the page allocator API directly for a quick fix when
allocating a fragment with BGMAC_RX_ALLOC_SIZE > PAGE_SIZE.
In the long term, it may make sense to use the page_pool API, as more
drivers are already converting to it.
* Re: [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align()
2025-01-24 9:52 ` Yunsheng Lin
@ 2025-01-24 18:55 ` Florian Fainelli
0 siblings, 0 replies; 23+ messages in thread
From: Florian Fainelli @ 2025-01-24 18:55 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni, Eric Dumazet
Cc: netdev, linux-kernel, Alexander Duyck, Andrew Morton, Linux-MM,
Alexander Duyck
On 1/24/25 01:52, Yunsheng Lin wrote:
> On 2025/1/24 3:15, Florian Fainelli wrote:
>>
>> Sorry for the late feedback: this patch causes the bgmac driver's .ndo_open() function to return -ENOMEM; the call trace looks like this:
>
> Hi, Florian
> Thanks for the report.
>
>>
>> bgmac_open
>> -> bgmac_dma_init
>> -> bgmac_dma_rx_skb_for_slot
>> -> netdev_alloc_frag
>>
>> BGMAC_RX_ALLOC_SIZE = 10048 and PAGE_FRAG_CACHE_MAX_SIZE = 32768.
>
> I guess BGMAC_RX_ALLOC_SIZE being bigger than PAGE_SIZE is the
> problem here, as the frag API does not really support allocating a
> fragment bigger than PAGE_SIZE: it falls back to allocating a base
> page when the order-3 compound page allocation fails, see
> __page_frag_cache_refill().
>
> Also, it seems strange that the bgmac driver always uses the jumbo
> frame size to allocate fragments; wouldn't it be more appropriate to
> allocate the fragment according to the MTU?
Totally. Even though my email domain would suggest otherwise, I am just
a user of that driver here, not its maintainer. Though I do have some
familiarity with it, I don't know why that choice was made.
>
>>
>> Eventually we land into __page_frag_alloc_align() with the following parameters across multiple successive calls:
>>
>> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=0
>> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=10048
>> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=20096
>> __page_frag_alloc_align: fragsz=10048, align_mask=-1, size=32768, offset=30144
>>
>> So in that case we do indeed have offset + fragsz (40192) > size (32768) and so we would eventually return NULL.
>
> It seems changing the '(unlikely(offset < 0))' check to
> "fragsz > PAGE_SIZE" causes the bgmac driver to break more easily
> here. But the bgmac driver would likely have broken before this
> patch too, if severely fragmented system memory had forced the fall
> back to allocating a base page.
>
>>
>> Any idea on how to best fix that within the bgmac driver?
>
> Maybe use the page allocator API directly for a quick fix when
> allocating a fragment with BGMAC_RX_ALLOC_SIZE > PAGE_SIZE.
>
> In the long term, it may make sense to use the page_pool API, as more
> drivers are already converting to it.
Short term, I think I am going to submit a quick fix that is inspired by
the out-of-tree patches carried in OpenWrt:
https://git.openwrt.org/?p=openwrt/openwrt.git;a=blob;f=target/linux/bcm53xx/patches-6.6/700-bgmac-reduce-max-frame-size-to-support-just-MTU-1500.patch;h=3a2f4b06ed6d8cda1f3f0be23e1066267234766b;hb=HEAD
Thanks!
--
Florian
end of thread, other threads:[~2025-01-24 18:55 UTC | newest]
Thread overview: 23+ messages
2024-10-28 11:53 [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 1/7] mm: page_frag: add a test module for page_frag Yunsheng Lin
2024-11-14 16:02 ` Mark Brown
2024-11-15 9:03 ` Yunsheng Lin
2024-11-15 14:12 ` Mark Brown
2024-11-15 22:34 ` Jakub Kicinski
2024-11-16 5:08 ` Yunsheng Lin
2024-11-16 4:59 ` Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 2/7] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 3/7] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
2025-01-23 19:15 ` Florian Fainelli
2025-01-24 9:52 ` Yunsheng Lin
2025-01-24 18:55 ` Florian Fainelli
2024-10-28 11:53 ` [PATCH net-next v23 4/7] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 5/7] xtensa: remove the get_order() implementation Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 6/7] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
2024-10-28 11:53 ` [PATCH net-next v23 7/7] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
2024-10-28 15:30 ` [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1) Alexander Duyck
2024-10-29 9:36 ` Yunsheng Lin
2024-10-29 15:45 ` Alexander Duyck
2024-11-05 23:57 ` Jakub Kicinski
2024-11-08 0:02 ` Alexander Duyck
2024-11-11 22:20 ` patchwork-bot+netdevbpf