* [PATCH net-next v22 01/14] mm: page_frag: add a test module for page_frag
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
` (9 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexander Duyck, Andrew Morton, Shuah Khan, linux-mm,
linux-kselftest
The testing is done by having a kthread bound to a specified
cpu allocate fragments from a page_frag_cache instance and push
them into a ptr_ring instance, while another kthread bound to a
specified cpu pops the fragments from the ptr_ring and frees
them.
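As a rough userspace sketch of the push/pop scheme above (POSIX threads and a minimal single-producer/single-consumer ring stand in for pinned kthreads and ptr_ring; all names here — ring_push, producer, NR_OBJS, etc. — are illustrative, not kernel API):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

#define RING_SIZE 512
#define NR_OBJS   10000

static void *ring[RING_SIZE];
static int head; /* next slot to push; written only by the producer */
static int tail; /* next slot to pop; written only by the consumer */

static int ring_push(void *obj)
{
	int next = (head + 1) % RING_SIZE;

	if (next == __atomic_load_n(&tail, __ATOMIC_ACQUIRE))
		return -1; /* ring full */
	ring[head] = obj;
	__atomic_store_n(&head, next, __ATOMIC_RELEASE);
	return 0;
}

static void *ring_pop(void)
{
	void *obj;

	if (tail == __atomic_load_n(&head, __ATOMIC_ACQUIRE))
		return NULL; /* ring empty */
	obj = ring[tail];
	__atomic_store_n(&tail, (tail + 1) % RING_SIZE, __ATOMIC_RELEASE);
	return obj;
}

static void *producer(void *arg)
{
	int pushed = 0;

	(void)arg;
	while (pushed < NR_OBJS) {
		void *obj = malloc(64); /* stands in for page_frag_alloc() */

		if (ring_push(obj))
			free(obj); /* full: drop and retry, as the module does */
		else
			pushed++;
	}
	return NULL;
}

static void *consumer(void *arg)
{
	long popped = 0;

	(void)arg;
	while (popped < NR_OBJS) {
		void *obj = ring_pop();

		if (obj) {
			free(obj); /* stands in for page_frag_free() */
			popped++;
		}
	}
	return (void *)popped;
}
```

The kernel module additionally pins each thread to a cpu via
kthread_create_on_cpu() and detects stalls with a completion
timeout; this sketch only shows the data flow.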
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
tools/testing/selftests/mm/Makefile | 3 +
tools/testing/selftests/mm/page_frag/Makefile | 18 ++
.../selftests/mm/page_frag/page_frag_test.c | 198 ++++++++++++++++++
tools/testing/selftests/mm/run_vmtests.sh | 8 +
tools/testing/selftests/mm/test_page_frag.sh | 175 ++++++++++++++++
5 files changed, 402 insertions(+)
create mode 100644 tools/testing/selftests/mm/page_frag/Makefile
create mode 100644 tools/testing/selftests/mm/page_frag/page_frag_test.c
create mode 100755 tools/testing/selftests/mm/test_page_frag.sh
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 02e1204971b0..acec529baaca 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -36,6 +36,8 @@ MAKEFLAGS += --no-builtin-rules
CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
LDLIBS = -lrt -lpthread -lm
+TEST_GEN_MODS_DIR := page_frag
+
TEST_GEN_FILES = cow
TEST_GEN_FILES += compaction_test
TEST_GEN_FILES += gup_longterm
@@ -126,6 +128,7 @@ TEST_FILES += test_hmm.sh
TEST_FILES += va_high_addr_switch.sh
TEST_FILES += charge_reserved_hugetlb.sh
TEST_FILES += hugetlb_reparenting_test.sh
+TEST_FILES += test_page_frag.sh
# required by charge_reserved_hugetlb.sh
TEST_FILES += write_hugetlb_memory.sh
diff --git a/tools/testing/selftests/mm/page_frag/Makefile b/tools/testing/selftests/mm/page_frag/Makefile
new file mode 100644
index 000000000000..58dda74d50a3
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/Makefile
@@ -0,0 +1,18 @@
+PAGE_FRAG_TEST_DIR := $(realpath $(dir $(abspath $(lastword $(MAKEFILE_LIST)))))
+KDIR ?= $(abspath $(PAGE_FRAG_TEST_DIR)/../../../../..)
+
+ifeq ($(V),1)
+Q =
+else
+Q = @
+endif
+
+MODULES = page_frag_test.ko
+
+obj-m += page_frag_test.o
+
+all:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) modules
+
+clean:
+ +$(Q)make -C $(KDIR) M=$(PAGE_FRAG_TEST_DIR) clean
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
new file mode 100644
index 000000000000..912d97b99107
--- /dev/null
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -0,0 +1,198 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Test module for page_frag cache
+ *
+ * Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/cpumask.h>
+#include <linux/completion.h>
+#include <linux/ptr_ring.h>
+#include <linux/kthread.h>
+
+#define TEST_FAILED_PREFIX "page_frag_test failed: "
+
+static struct ptr_ring ptr_ring;
+static int nr_objs = 512;
+static atomic_t nthreads;
+static struct completion wait;
+static struct page_frag_cache test_nc;
+static int test_popped;
+static int test_pushed;
+static bool force_exit;
+
+static int nr_test = 2000000;
+module_param(nr_test, int, 0);
+MODULE_PARM_DESC(nr_test, "number of iterations to test");
+
+static bool test_align;
+module_param(test_align, bool, 0);
+MODULE_PARM_DESC(test_align, "use align API for testing");
+
+static int test_alloc_len = 2048;
+module_param(test_alloc_len, int, 0);
+MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
+
+static int test_push_cpu;
+module_param(test_push_cpu, int, 0);
+MODULE_PARM_DESC(test_push_cpu, "test cpu for pushing fragment");
+
+static int test_pop_cpu;
+module_param(test_pop_cpu, int, 0);
+MODULE_PARM_DESC(test_pop_cpu, "test cpu for popping fragment");
+
+static int page_frag_pop_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+
+ pr_info("page_frag pop test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (test_popped < nr_test) {
+ void *obj = __ptr_ring_consume(ring);
+
+ if (obj) {
+ test_popped++;
+ page_frag_free(obj);
+ } else {
+ if (force_exit)
+ break;
+
+ cond_resched();
+ }
+ }
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ pr_info("page_frag pop test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ return 0;
+}
+
+static int page_frag_push_thread(void *arg)
+{
+ struct ptr_ring *ring = arg;
+
+ pr_info("page_frag push test thread begins on cpu %d\n",
+ smp_processor_id());
+
+ while (test_pushed < nr_test && !force_exit) {
+ void *va;
+ int ret;
+
+ if (test_align) {
+ va = page_frag_alloc_align(&test_nc, test_alloc_len,
+ GFP_KERNEL, SMP_CACHE_BYTES);
+
+ if ((unsigned long)va & (SMP_CACHE_BYTES - 1)) {
+ force_exit = true;
+ WARN_ONCE(true, TEST_FAILED_PREFIX "unaligned va returned\n");
+ }
+ } else {
+ va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+ }
+
+ if (!va)
+ continue;
+
+ ret = __ptr_ring_produce(ring, va);
+ if (ret) {
+ page_frag_free(va);
+ cond_resched();
+ } else {
+ test_pushed++;
+ }
+ }
+
+ pr_info("page_frag push test thread exits on cpu %d\n",
+ smp_processor_id());
+
+ if (atomic_dec_and_test(&nthreads))
+ complete(&wait);
+
+ return 0;
+}
+
+static int __init page_frag_test_init(void)
+{
+ struct task_struct *tsk_push, *tsk_pop;
+ int last_pushed = 0, last_popped = 0;
+ ktime_t start;
+ u64 duration;
+ int ret;
+
+ test_nc.va = NULL;
+ atomic_set(&nthreads, 2);
+ init_completion(&wait);
+
+ if (test_alloc_len > PAGE_SIZE || test_alloc_len <= 0 ||
+ !cpu_active(test_push_cpu) || !cpu_active(test_pop_cpu))
+ return -EINVAL;
+
+ ret = ptr_ring_init(&ptr_ring, nr_objs, GFP_KERNEL);
+ if (ret)
+ return ret;
+
+ tsk_push = kthread_create_on_cpu(page_frag_push_thread, &ptr_ring,
+ test_push_cpu, "page_frag_push");
+ if (IS_ERR(tsk_push))
+ return PTR_ERR(tsk_push);
+
+ tsk_pop = kthread_create_on_cpu(page_frag_pop_thread, &ptr_ring,
+ test_pop_cpu, "page_frag_pop");
+ if (IS_ERR(tsk_pop)) {
+ kthread_stop(tsk_push);
+ return PTR_ERR(tsk_pop);
+ }
+
+ start = ktime_get();
+ wake_up_process(tsk_push);
+ wake_up_process(tsk_pop);
+
+ pr_info("waiting for test to complete\n");
+
+ while (!wait_for_completion_timeout(&wait, msecs_to_jiffies(10000))) {
+ /* exit if there is no progress on the push or pop side */
+ if (last_pushed == test_pushed || last_popped == test_popped) {
+ WARN_ONCE(true, TEST_FAILED_PREFIX "no progress\n");
+ force_exit = true;
+ continue;
+ }
+
+ last_pushed = test_pushed;
+ last_popped = test_popped;
+ pr_info("page_frag_test progress: pushed = %d, popped = %d\n",
+ test_pushed, test_popped);
+ }
+
+ if (force_exit) {
+ pr_err(TEST_FAILED_PREFIX "exit with error\n");
+ goto out;
+ }
+
+ duration = (u64)ktime_us_delta(ktime_get(), start);
+ pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
+ test_align ? "aligned" : "non-aligned", duration);
+
+out:
+ ptr_ring_cleanup(&ptr_ring, NULL);
+ page_frag_cache_drain(&test_nc);
+
+ return -EAGAIN;
+}
+
+static void __exit page_frag_test_exit(void)
+{
+}
+
+module_init(page_frag_test_init);
+module_exit(page_frag_test_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Yunsheng Lin <linyunsheng@huawei.com>");
+MODULE_DESCRIPTION("Test module for page_frag");
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index c5797ad1d37b..2c5394584af4 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -75,6 +75,8 @@ separated by spaces:
read-only VMAs
- mdwe
test prctl(PR_SET_MDWE, ...)
+- page_frag
+ test handling of page fragment allocation and freeing
example: ./run_vmtests.sh -t "hmm mmap ksm"
EOF
@@ -456,6 +458,12 @@ CATEGORY="mkdirty" run_test ./mkdirty
CATEGORY="mdwe" run_test ./mdwe_test
+CATEGORY="page_frag" run_test ./test_page_frag.sh smoke
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
+
echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
new file mode 100755
index 000000000000..f55b105084cf
--- /dev/null
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -0,0 +1,175 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
+# Copyright (C) 2018 Uladzislau Rezki (Sony) <urezki@gmail.com>
+#
+# This is a test script for the kernel test driver to test the
+# correctness and performance of page_frag's implementation.
+# Therefore it is just a kernel module loader. You can specify
+# and pass different parameters in order to:
+# a) analyse performance of page fragment allocations;
+# b) stress and stability check of the page_frag subsystem.
+
+DRIVER="./page_frag/page_frag_test.ko"
+CPU_LIST=$(grep -m 2 processor /proc/cpuinfo | cut -d ' ' -f 2)
+TEST_CPU_0=$(echo $CPU_LIST | awk '{print $1}')
+
+if [ $(echo $CPU_LIST | wc -w) -gt 1 ]; then
+ TEST_CPU_1=$(echo $CPU_LIST | awk '{print $2}')
+ NR_TEST=100000000
+else
+ TEST_CPU_1=$TEST_CPU_0
+ NR_TEST=1000000
+fi
+
+# 1 if fails
+exitcode=1
+
+# Kselftest framework requirement - SKIP code is 4.
+ksft_skip=4
+
+check_test_failed_prefix() {
+ if dmesg | grep -q 'page_frag_test failed:'; then
+ echo "page_frag_test failed, please check dmesg"
+ exit $exitcode
+ fi
+}
+
+#
+# Static templates for testing of page_frag APIs.
+# Also it is possible to pass any supported parameters manually.
+#
+SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
+NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
+ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+
+check_test_requirements()
+{
+ uid=$(id -u)
+ if [ $uid -ne 0 ]; then
+ echo "$0: Must be run as root"
+ exit $ksft_skip
+ fi
+
+ if ! which insmod > /dev/null 2>&1; then
+ echo "$0: You need insmod installed"
+ exit $ksft_skip
+ fi
+
+ if [ ! -f $DRIVER ]; then
+ echo "$0: You need to compile page_frag_test module"
+ exit $ksft_skip
+ fi
+}
+
+run_nonaligned_check()
+{
+ echo "Run performance tests to evaluate how fast the nonaligned alloc API is."
+
+ insmod $DRIVER $NONALIGNED_PARAM > /dev/null 2>&1
+}
+
+run_aligned_check()
+{
+ echo "Run performance tests to evaluate how fast the aligned alloc API is."
+
+ insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
+}
+
+run_smoke_check()
+{
+ echo "Run smoke test."
+
+ insmod $DRIVER $SMOKE_PARAM > /dev/null 2>&1
+}
+
+usage()
+{
+ echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | [ smoke ] | "
+ echo "manual parameters"
+ echo
+ echo "Valid tests and parameters:"
+ echo
+ modinfo $DRIVER
+ echo
+ echo "Example usage:"
+ echo
+ echo "# Shows help message"
+ echo "$0"
+ echo
+ echo "# Smoke testing"
+ echo "$0 smoke"
+ echo
+ echo "# Performance testing for nonaligned alloc API"
+ echo "$0 nonaligned"
+ echo
+ echo "# Performance testing for aligned alloc API"
+ echo "$0 aligned"
+ echo
+ exit 0
+}
+
+function validate_passed_args()
+{
+ VALID_ARGS=`modinfo $DRIVER | awk '/parm:/ {print $2}' | sed 's/:.*//'`
+
+ #
+ # Something has been passed, check it.
+ #
+ for passed_arg in $@; do
+ key=${passed_arg//=*/}
+ valid=0
+
+ for valid_arg in $VALID_ARGS; do
+ if [[ $key = $valid_arg ]]; then
+ valid=1
+ break
+ fi
+ done
+
+ if [[ $valid -ne 1 ]]; then
+ echo "Error: key is not correct: ${key}"
+ exit $exitcode
+ fi
+ done
+}
+
+function run_manual_check()
+{
+ #
+ # Validate passed parameters. If there is wrong one,
+ # the script exits and does not execute further.
+ #
+ validate_passed_args $@
+
+ echo "Run the test with the following parameters: $@"
+ insmod $DRIVER $@ > /dev/null 2>&1
+}
+
+function run_test()
+{
+ if [ $# -eq 0 ]; then
+ usage
+ else
+ if [[ "$1" = "smoke" ]]; then
+ run_smoke_check
+ elif [[ "$1" = "nonaligned" ]]; then
+ run_nonaligned_check
+ elif [[ "$1" = "aligned" ]]; then
+ run_aligned_check
+ else
+ run_manual_check $@
+ fi
+ fi
+
+ check_test_failed_prefix
+
+ echo "Done."
+ echo "Check the kernel ring buffer to see the summary."
+}
+
+check_test_requirements
+run_test $@
+
+exit 0
--
2.33.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH net-next v22 02/14] mm: move the page fragment allocator from page_alloc into its own file
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
2024-10-18 10:53 ` [PATCH net-next v22 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
` (8 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, David Howells,
Alexander Duyck, Andrew Morton, Alexander Duyck, Eric Dumazet,
Shuah Khan, linux-mm, linux-kselftest
Inspired by [1], move the page fragment allocator from page_alloc
into its own C file and header file, as we are about to make more
changes to it so that it can replace another page_frag
implementation in sock.c.
As this patchset is going to replace 'struct page_frag' with
'struct page_frag_cache' in sched.h, including page_frag_cache.h
in sched.h would cause a compiler error due to the interdependence
between mm_types.h and mm.h for asm-offsets.c, see [2]. So avoid
the compiler error by moving 'struct page_frag_cache' to
mm_types_task.h, as suggested by Alexander, see [3].
1. https://lore.kernel.org/all/20230411160902.4134381-3-dhowells@redhat.com/
2. https://lore.kernel.org/all/15623dac-9358-4597-b3ee-3694a5956920@gmail.com/
3. https://lore.kernel.org/all/CAKgT0UdH1yD=LSCXFJ=YM_aiA4OomD-2wXykO42bizaWMt_HOA@mail.gmail.com/
CC: David Howells <dhowells@redhat.com>
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
include/linux/gfp.h | 22 ---
include/linux/mm_types.h | 18 ---
include/linux/mm_types_task.h | 18 +++
include/linux/page_frag_cache.h | 31 ++++
include/linux/skbuff.h | 1 +
mm/Makefile | 1 +
mm/page_alloc.c | 136 ----------------
mm/page_frag_cache.c | 145 ++++++++++++++++++
.../selftests/mm/page_frag/page_frag_test.c | 2 +-
9 files changed, 197 insertions(+), 177 deletions(-)
create mode 100644 include/linux/page_frag_cache.h
create mode 100644 mm/page_frag_cache.c
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index a951de920e20..a0a6d25f883f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -371,28 +371,6 @@ __meminit void *alloc_pages_exact_nid_noprof(int nid, size_t size, gfp_t gfp_mas
extern void __free_pages(struct page *page, unsigned int order);
extern void free_pages(unsigned long addr, unsigned int order);
-struct page_frag_cache;
-void page_frag_cache_drain(struct page_frag_cache *nc);
-extern void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
- gfp_t gfp_mask, unsigned int align_mask);
-
-static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align)
-{
- WARN_ON_ONCE(!is_power_of_2(align));
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
-}
-
-static inline void *page_frag_alloc(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask)
-{
- return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
-}
-
-extern void page_frag_free(void *addr);
-
#define __free_page(page) __free_pages((page), 0)
#define free_page(addr) free_pages((addr), 0)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..92314ef2d978 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -521,9 +521,6 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
*/
#define STRUCT_PAGE_MAX_SHIFT (order_base_2(sizeof(struct page)))
-#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
-#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
-
/*
* page_private can be used on tail pages. However, PagePrivate is only
* checked by the VM on the head page. So page_private on the tail pages
@@ -542,21 +539,6 @@ static inline void *folio_get_private(struct folio *folio)
return folio->private;
}
-struct page_frag_cache {
- void * va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- __u16 offset;
- __u16 size;
-#else
- __u32 offset;
-#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
-};
-
typedef unsigned long vm_flags_t;
/*
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index bff5706b76e1..0ac6daebdd5c 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -8,6 +8,7 @@
* (These are defined separately to decouple sched.h from mm_types.h as much as possible.)
*/
+#include <linux/align.h>
#include <linux/types.h>
#include <asm/page.h>
@@ -43,6 +44,23 @@ struct page_frag {
#endif
};
+#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
+#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
+struct page_frag_cache {
+ void *va;
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ __u16 offset;
+ __u16 size;
+#else
+ __u32 offset;
+#endif
+ /* we maintain a pagecount bias, so that we dont dirty cache line
+ * containing page->_refcount every time we allocate a fragment.
+ */
+ unsigned int pagecnt_bias;
+ bool pfmemalloc;
+};
+
/* Track pages that require TLB flushes */
struct tlbflush_unmap_batch {
#ifdef CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
new file mode 100644
index 000000000000..67ac8626ed9b
--- /dev/null
+++ b/include/linux/page_frag_cache.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _LINUX_PAGE_FRAG_CACHE_H
+#define _LINUX_PAGE_FRAG_CACHE_H
+
+#include <linux/log2.h>
+#include <linux/mm_types_task.h>
+#include <linux/types.h>
+
+void page_frag_cache_drain(struct page_frag_cache *nc);
+void __page_frag_cache_drain(struct page *page, unsigned int count);
+void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
+ gfp_t gfp_mask, unsigned int align_mask);
+
+static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask)
+{
+ return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
+}
+
+void page_frag_free(void *addr);
+
+#endif
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 48f1e0fa2a13..7adca0fa2602 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -31,6 +31,7 @@
#include <linux/in6.h>
#include <linux/if_packet.h>
#include <linux/llist.h>
+#include <linux/page_frag_cache.h>
#include <net/flow.h>
#if IS_ENABLED(CONFIG_NF_CONNTRACK)
#include <linux/netfilter/nf_conntrack_common.h>
diff --git a/mm/Makefile b/mm/Makefile
index d5639b036166..dba52bb0da8a 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -65,6 +65,7 @@ page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
obj-y += page-alloc.o
+obj-y += page_frag_cache.o
obj-y += init-mm.o
obj-y += memblock.o
obj-y += $(memory-hotplug-y)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afab64814dc..6ca2abce857b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4836,142 +4836,6 @@ void free_pages(unsigned long addr, unsigned int order)
EXPORT_SYMBOL(free_pages);
-/*
- * Page Fragment:
- * An arbitrary-length arbitrary-offset area of memory which resides
- * within a 0 or higher order page. Multiple fragments within that page
- * are individually refcounted, in the page's reference counter.
- *
- * The page_frag functions below provide a simple allocation framework for
- * page fragments. This is used by the network stack and network device
- * drivers to provide a backing region of memory for use as either an
- * sk_buff->head, or to be used in the "frags" portion of skb_shared_info.
- */
-static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
- gfp_t gfp_mask)
-{
- struct page *page = NULL;
- gfp_t gfp = gfp_mask;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
- __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
-#endif
- if (unlikely(!page))
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
-
- nc->va = page ? page_address(page) : NULL;
-
- return page;
-}
-
-void page_frag_cache_drain(struct page_frag_cache *nc)
-{
- if (!nc->va)
- return;
-
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
-}
-EXPORT_SYMBOL(page_frag_cache_drain);
-
-void __page_frag_cache_drain(struct page *page, unsigned int count)
-{
- VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
-
- if (page_ref_sub_and_test(page, count))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(__page_frag_cache_drain);
-
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align_mask)
-{
- unsigned int size = PAGE_SIZE;
- struct page *page;
- int offset;
-
- if (unlikely(!nc->va)) {
-refill:
- page = __page_frag_cache_refill(nc, gfp_mask);
- if (!page)
- return NULL;
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* Even if we own the page, we do not use atomic_set().
- * This would break get_page_unless_zero() users.
- */
- page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
-
- /* reset page count bias and offset to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
- }
-
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
- page = virt_to_page(nc->va);
-
- if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
- goto refill;
-
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
- goto refill;
- }
-
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
- /* OK, page count is 0, we can safely set it */
- set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
-
- /* reset page count bias and offset to start of new frag */
- nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
- }
-
- nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
-
- return nc->va + offset;
-}
-EXPORT_SYMBOL(__page_frag_alloc_align);
-
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
- */
-void page_frag_free(void *addr)
-{
- struct page *page = virt_to_head_page(addr);
-
- if (unlikely(put_page_testzero(page)))
- free_unref_page(page, compound_order(page));
-}
-EXPORT_SYMBOL(page_frag_free);
-
static void *make_alloc_exact(unsigned long addr, unsigned int order,
size_t size)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
new file mode 100644
index 000000000000..609a485cd02a
--- /dev/null
+++ b/mm/page_frag_cache.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Page fragment allocator
+ *
+ * Page Fragment:
+ * An arbitrary-length arbitrary-offset area of memory which resides within a
+ * 0 or higher order page. Multiple fragments within that page are
+ * individually refcounted, in the page's reference counter.
+ *
+ * The page_frag functions provide a simple allocation framework for page
+ * fragments. This is used by the network stack and network device drivers to
+ * provide a backing region of memory for use as either an sk_buff->head, or to
+ * be used in the "frags" portion of skb_shared_info.
+ */
+
+#include <linux/export.h>
+#include <linux/gfp_types.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/page_frag_cache.h>
+#include "internal.h"
+
+static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
+ gfp_t gfp_mask)
+{
+ struct page *page = NULL;
+ gfp_t gfp = gfp_mask;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
+ __GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
+ page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+ PAGE_FRAG_CACHE_MAX_ORDER);
+ nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
+#endif
+ if (unlikely(!page))
+ page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+
+ nc->va = page ? page_address(page) : NULL;
+
+ return page;
+}
+
+void page_frag_cache_drain(struct page_frag_cache *nc)
+{
+ if (!nc->va)
+ return;
+
+ __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
+ nc->va = NULL;
+}
+EXPORT_SYMBOL(page_frag_cache_drain);
+
+void __page_frag_cache_drain(struct page *page, unsigned int count)
+{
+ VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
+
+ if (page_ref_sub_and_test(page, count))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(__page_frag_cache_drain);
+
+void *__page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ unsigned int size = PAGE_SIZE;
+ struct page *page;
+ int offset;
+
+ if (unlikely(!nc->va)) {
+refill:
+ page = __page_frag_cache_refill(nc, gfp_mask);
+ if (!page)
+ return NULL;
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* Even if we own the page, we do not use atomic_set().
+ * This would break get_page_unless_zero() users.
+ */
+ page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pfmemalloc = page_is_pfmemalloc(page);
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ nc->offset = size;
+ }
+
+ offset = nc->offset - fragsz;
+ if (unlikely(offset < 0)) {
+ page = virt_to_page(nc->va);
+
+ if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
+ goto refill;
+
+ if (unlikely(nc->pfmemalloc)) {
+ free_unref_page(page, compound_order(page));
+ goto refill;
+ }
+
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* if size can vary use size else just use PAGE_SIZE */
+ size = nc->size;
+#endif
+ /* OK, page count is 0, we can safely set it */
+ set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
+
+ /* reset page count bias and offset to start of new frag */
+ nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ offset = size - fragsz;
+ if (unlikely(offset < 0)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+ }
+
+ nc->pagecnt_bias--;
+ offset &= align_mask;
+ nc->offset = offset;
+
+ return nc->va + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_align);
+
+/*
+ * Frees a page fragment allocated out of either a compound or order 0 page.
+ */
+void page_frag_free(void *addr)
+{
+ struct page *page = virt_to_head_page(addr);
+
+ if (unlikely(put_page_testzero(page)))
+ free_unref_page(page, compound_order(page));
+}
+EXPORT_SYMBOL(page_frag_free);
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 912d97b99107..13c44133e009 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -6,12 +6,12 @@
* Copyright (C) 2024 Yunsheng Lin <linyunsheng@huawei.com>
*/
-#include <linux/mm.h>
#include <linux/module.h>
#include <linux/cpumask.h>
#include <linux/completion.h>
#include <linux/ptr_ring.h>
#include <linux/kthread.h>
+#include <linux/page_frag_cache.h>
#define TEST_FAILED_PREFIX "page_frag_test failed: "
--
2.33.0
* [PATCH net-next v22 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align()
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
2024-10-18 10:53 ` [PATCH net-next v22 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
` (7 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexander Duyck, Andrew Morton, linux-mm
We are about to use the page_frag_alloc_*() API not just to
allocate memory for skb->data, but also to do the memory
allocation for skb frags. Currently the page_frag implementation
in the mm subsystem runs the offset as a countdown rather than a
count-up value. There may be several advantages to that, as
mentioned in [1], but it also has some disadvantages: for
example, it may prevent skb frag coalescing and more effective
cache prefetching.
We have a trade-off to make in order to have a unified
implementation and API for page_frag, so use an initial zero
offset in this patch; the following patch will try to optimize
away the disadvantages as much as possible.
1. https://lore.kernel.org/all/f4abe71b3439b39d17a6fb2d410180f367cadf5c.camel@gmail.com/
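As a rough userspace illustration of the two offset schemes (the helper names and test values here are hypothetical, not kernel code): for a power-of-two align, callers pass align_mask == -align, so ~align_mask == align - 1, and the new scheme rounds the running offset up via __ALIGN_KERNEL_MASK(offset, ~align_mask), whereas the old scheme subtracted fragsz first and rounded the result down with '&= align_mask':

```c
#include <assert.h>

/* New count-up scheme: round the running offset up to the next
 * aligned boundary; the fragment starts there and the cached
 * offset then becomes start + fragsz. */
static unsigned int count_up_start(unsigned int offset,
				   unsigned int align_mask)
{
	return (offset + ~align_mask) & align_mask;
}

/* Old countdown scheme: subtract fragsz from the running offset,
 * then round the result down to an aligned boundary. */
static unsigned int count_down_start(unsigned int offset,
				     unsigned int fragsz,
				     unsigned int align_mask)
{
	return (offset - fragsz) & align_mask;
}
```

With count-up, successive fragments sit at increasing addresses,
which is what allows skb frag coalescing with a previously
allocated fragment.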
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
mm/page_frag_cache.c | 46 ++++++++++++++++++++++----------------------
1 file changed, 23 insertions(+), 23 deletions(-)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 609a485cd02a..4c8e04379cb3 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -63,9 +63,13 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ unsigned int size = nc->size;
+#else
unsigned int size = PAGE_SIZE;
+#endif
+ unsigned int offset;
struct page *page;
- int offset;
if (unlikely(!nc->va)) {
refill:
@@ -85,11 +89,24 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
/* reset page count bias and offset to start of new frag */
nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- nc->offset = size;
+ nc->offset = 0;
}
- offset = nc->offset - fragsz;
- if (unlikely(offset < 0)) {
+ offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+ if (unlikely(offset + fragsz > size)) {
+ if (unlikely(fragsz > PAGE_SIZE)) {
+ /*
+ * The caller is trying to allocate a fragment
+ * with fragsz > PAGE_SIZE but the cache isn't big
+ * enough to satisfy the request, this may
+ * happen in low memory conditions.
+ * We don't release the cache page because
+ * it could make memory pressure worse
+ * so we simply return NULL here.
+ */
+ return NULL;
+ }
+
page = virt_to_page(nc->va);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
@@ -100,33 +117,16 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
goto refill;
}
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
/* OK, page count is 0, we can safely set it */
set_page_count(page, PAGE_FRAG_CACHE_MAX_SIZE + 1);
/* reset page count bias and offset to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
- offset = size - fragsz;
- if (unlikely(offset < 0)) {
- /*
- * The caller is trying to allocate a fragment
- * with fragsz > PAGE_SIZE but the cache isn't big
- * enough to satisfy the request, this may
- * happen in low memory conditions.
- * We don't release the cache page because
- * it could make memory pressure worse
- * so we simply return NULL here.
- */
- return NULL;
- }
+ offset = 0;
}
nc->pagecnt_bias--;
- offset &= align_mask;
- nc->offset = offset;
+ nc->offset = offset + fragsz;
return nc->va + offset;
}
--
2.33.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH net-next v22 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (2 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
` (6 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexander Duyck, Chuck Lever, Michael S. Tsirkin, Jason Wang,
Eugenio Pérez, Andrew Morton, Eric Dumazet, David Howells,
Marc Dionne, Jeff Layton, Neil Brown, Olga Kornievskaia, Dai Ngo,
Tom Talpey, Trond Myklebust, Anna Schumaker, Shuah Khan, kvm,
virtualization, linux-mm, linux-afs, linux-nfs, linux-kselftest
Use the appropriate page_frag API helpers instead of having
callers access 'page_frag_cache' internals directly.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
Acked-by: Chuck Lever <chuck.lever@oracle.com>
---
drivers/vhost/net.c | 2 +-
include/linux/page_frag_cache.h | 10 ++++++++++
net/core/skbuff.c | 6 +++---
net/rxrpc/conn_object.c | 4 +---
net/rxrpc/local_object.c | 4 +---
net/sunrpc/svcsock.c | 6 ++----
tools/testing/selftests/mm/page_frag/page_frag_test.c | 2 +-
7 files changed, 19 insertions(+), 15 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f16279351db5..9ad37c012189 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -1325,7 +1325,7 @@ static int vhost_net_open(struct inode *inode, struct file *f)
vqs[VHOST_NET_VQ_RX]);
f->private_data = n;
- n->pf_cache.va = NULL;
+ page_frag_cache_init(&n->pf_cache);
return 0;
}
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 67ac8626ed9b..0a52f7a179c8 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -7,6 +7,16 @@
#include <linux/mm_types_task.h>
#include <linux/types.h>
+static inline void page_frag_cache_init(struct page_frag_cache *nc)
+{
+ nc->va = NULL;
+}
+
+static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
+{
+ return !!nc->pfmemalloc;
+}
+
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 00afeb90c23a..6841e61a6bd0 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -753,14 +753,14 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
if (in_hardirq() || irqs_disabled()) {
nc = this_cpu_ptr(&netdev_alloc_cache);
data = page_frag_alloc(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
} else {
local_bh_disable();
local_lock_nested_bh(&napi_alloc_cache.bh_lock);
nc = this_cpu_ptr(&napi_alloc_cache.page);
data = page_frag_alloc(nc, len, gfp_mask);
- pfmemalloc = nc->pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(nc);
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
local_bh_enable();
@@ -850,7 +850,7 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
len = SKB_HEAD_ALIGN(len);
data = page_frag_alloc(&nc->page, len, gfp_mask);
- pfmemalloc = nc->page.pfmemalloc;
+ pfmemalloc = page_frag_cache_is_pfmemalloc(&nc->page);
}
local_unlock_nested_bh(&napi_alloc_cache.bh_lock);
diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
index 1539d315afe7..694c4df7a1a3 100644
--- a/net/rxrpc/conn_object.c
+++ b/net/rxrpc/conn_object.c
@@ -337,9 +337,7 @@ static void rxrpc_clean_up_connection(struct work_struct *work)
*/
rxrpc_purge_queue(&conn->rx_queue);
- if (conn->tx_data_alloc.va)
- __page_frag_cache_drain(virt_to_page(conn->tx_data_alloc.va),
- conn->tx_data_alloc.pagecnt_bias);
+ page_frag_cache_drain(&conn->tx_data_alloc);
call_rcu(&conn->rcu, rxrpc_rcu_free_connection);
}
diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
index f9623ace2201..2792d2304605 100644
--- a/net/rxrpc/local_object.c
+++ b/net/rxrpc/local_object.c
@@ -452,9 +452,7 @@ void rxrpc_destroy_local(struct rxrpc_local *local)
#endif
rxrpc_purge_queue(&local->rx_queue);
rxrpc_purge_client_connections(local);
- if (local->tx_alloc.va)
- __page_frag_cache_drain(virt_to_page(local->tx_alloc.va),
- local->tx_alloc.pagecnt_bias);
+ page_frag_cache_drain(&local->tx_alloc);
}
/*
diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 825ec5357691..b785425c3315 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1608,7 +1608,6 @@ static void svc_tcp_sock_detach(struct svc_xprt *xprt)
static void svc_sock_free(struct svc_xprt *xprt)
{
struct svc_sock *svsk = container_of(xprt, struct svc_sock, sk_xprt);
- struct page_frag_cache *pfc = &svsk->sk_frag_cache;
struct socket *sock = svsk->sk_sock;
trace_svcsock_free(svsk, sock);
@@ -1618,8 +1617,7 @@ static void svc_sock_free(struct svc_xprt *xprt)
sockfd_put(sock);
else
sock_release(sock);
- if (pfc->va)
- __page_frag_cache_drain(virt_to_head_page(pfc->va),
- pfc->pagecnt_bias);
+
+ page_frag_cache_drain(&svsk->sk_frag_cache);
kfree(svsk);
}
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index 13c44133e009..e806c1866e36 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -126,7 +126,7 @@ static int __init page_frag_test_init(void)
u64 duration;
int ret;
- test_nc.va = NULL;
+ page_frag_cache_init(&test_nc);
atomic_set(&nthreads, 2);
init_completion(&wait);
--
2.33.0
* [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (3 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 16:43 ` Alexander Duyck
2024-10-18 10:53 ` [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
` (5 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
Currently there is one 'struct page_frag' for every 'struct
sock' and 'struct task_struct'; we are about to replace the
'struct page_frag' with 'struct page_frag_cache' for them.
Before beginning that replacement, we need to ensure the size
of 'struct page_frag_cache' is not bigger than the size of
'struct page_frag', as there may be tens of thousands of
'struct sock' and 'struct task_struct' instances in the
system.
By OR'ing the page order and pfmemalloc bit into the lower
bits of 'va', instead of using a 'u16' or 'u32' for the page
size and a 'u8' for pfmemalloc, we avoid wasting 3 or 5 bytes.
And since the page address, pfmemalloc bit and order are
unchanged for the same page in the same 'page_frag_cache'
instance, it makes sense to fit them together.
After this patch, the size of 'struct page_frag_cache' should be
the same as the size of 'struct page_frag'.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/mm_types_task.h | 19 +++++----
include/linux/page_frag_cache.h | 24 ++++++++++-
mm/page_frag_cache.c | 70 ++++++++++++++++++++++-----------
3 files changed, 81 insertions(+), 32 deletions(-)
diff --git a/include/linux/mm_types_task.h b/include/linux/mm_types_task.h
index 0ac6daebdd5c..a82aa80c0ba4 100644
--- a/include/linux/mm_types_task.h
+++ b/include/linux/mm_types_task.h
@@ -47,18 +47,21 @@ struct page_frag {
#define PAGE_FRAG_CACHE_MAX_SIZE __ALIGN_MASK(32768, ~PAGE_MASK)
#define PAGE_FRAG_CACHE_MAX_ORDER get_order(PAGE_FRAG_CACHE_MAX_SIZE)
struct page_frag_cache {
- void *va;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+ /* encoded_page consists of the virtual address, pfmemalloc bit and
+ * order of a page.
+ */
+ unsigned long encoded_page;
+
+ /* we maintain a pagecount bias, so that we dont dirty cache line
+ * containing page->_refcount every time we allocate a fragment.
+ */
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE) && (BITS_PER_LONG <= 32)
__u16 offset;
- __u16 size;
+ __u16 pagecnt_bias;
#else
__u32 offset;
+ __u32 pagecnt_bias;
#endif
- /* we maintain a pagecount bias, so that we dont dirty cache line
- * containing page->_refcount every time we allocate a fragment.
- */
- unsigned int pagecnt_bias;
- bool pfmemalloc;
};
/* Track pages that require TLB flushes */
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 0a52f7a179c8..41a91df82631 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -3,18 +3,38 @@
#ifndef _LINUX_PAGE_FRAG_CACHE_H
#define _LINUX_PAGE_FRAG_CACHE_H
+#include <linux/bits.h>
#include <linux/log2.h>
#include <linux/mm_types_task.h>
#include <linux/types.h>
+#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
+/* Use a full byte here to enable assembler optimization as the shift
+ * operation is usually expecting a byte.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK GENMASK(7, 0)
+#else
+/* Compiler should be able to figure out we don't read things as any value
+ * ANDed with 0 is 0.
+ */
+#define PAGE_FRAG_CACHE_ORDER_MASK 0
+#endif
+
+#define PAGE_FRAG_CACHE_PFMEMALLOC_BIT (PAGE_FRAG_CACHE_ORDER_MASK + 1)
+
+static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
+{
+ return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
- nc->va = NULL;
+ nc->encoded_page = 0;
}
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
- return !!nc->pfmemalloc;
+ return encoded_page_decode_pfmemalloc(nc->encoded_page);
}
void page_frag_cache_drain(struct page_frag_cache *nc);
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 4c8e04379cb3..a36fd09bf275 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -12,6 +12,7 @@
* be used in the "frags" portion of skb_shared_info.
*/
+#include <linux/build_bug.h>
#include <linux/export.h>
#include <linux/gfp_types.h>
#include <linux/init.h>
@@ -19,9 +20,36 @@
#include <linux/page_frag_cache.h>
#include "internal.h"
+static unsigned long encoded_page_create(struct page *page, unsigned int order,
+ bool pfmemalloc)
+{
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_MAX_ORDER > PAGE_FRAG_CACHE_ORDER_MASK);
+ BUILD_BUG_ON(PAGE_FRAG_CACHE_PFMEMALLOC_BIT >= PAGE_SIZE);
+
+ return (unsigned long)page_address(page) |
+ (order & PAGE_FRAG_CACHE_ORDER_MASK) |
+ ((unsigned long)pfmemalloc * PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
+}
+
+static unsigned long encoded_page_decode_order(unsigned long encoded_page)
+{
+ return encoded_page & PAGE_FRAG_CACHE_ORDER_MASK;
+}
+
+static void *encoded_page_decode_virt(unsigned long encoded_page)
+{
+ return (void *)(encoded_page & PAGE_MASK);
+}
+
+static struct page *encoded_page_decode_page(unsigned long encoded_page)
+{
+ return virt_to_page((void *)encoded_page);
+}
+
static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
gfp_t gfp_mask)
{
+ unsigned long order = PAGE_FRAG_CACHE_MAX_ORDER;
struct page *page = NULL;
gfp_t gfp = gfp_mask;
@@ -30,23 +58,26 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
PAGE_FRAG_CACHE_MAX_ORDER);
- nc->size = page ? PAGE_FRAG_CACHE_MAX_SIZE : PAGE_SIZE;
#endif
- if (unlikely(!page))
+ if (unlikely(!page)) {
page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ order = 0;
+ }
- nc->va = page ? page_address(page) : NULL;
+ nc->encoded_page = page ?
+ encoded_page_create(page, order, page_is_pfmemalloc(page)) : 0;
return page;
}
void page_frag_cache_drain(struct page_frag_cache *nc)
{
- if (!nc->va)
+ if (!nc->encoded_page)
return;
- __page_frag_cache_drain(virt_to_head_page(nc->va), nc->pagecnt_bias);
- nc->va = NULL;
+ __page_frag_cache_drain(encoded_page_decode_page(nc->encoded_page),
+ nc->pagecnt_bias);
+ nc->encoded_page = 0;
}
EXPORT_SYMBOL(page_frag_cache_drain);
@@ -63,35 +94,29 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
{
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- unsigned int size = nc->size;
-#else
- unsigned int size = PAGE_SIZE;
-#endif
- unsigned int offset;
+ unsigned long encoded_page = nc->encoded_page;
+ unsigned int size, offset;
struct page *page;
- if (unlikely(!nc->va)) {
+ if (unlikely(!encoded_page)) {
refill:
page = __page_frag_cache_refill(nc, gfp_mask);
if (!page)
return NULL;
-#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
- /* if size can vary use size else just use PAGE_SIZE */
- size = nc->size;
-#endif
+ encoded_page = nc->encoded_page;
+
/* Even if we own the page, we do not use atomic_set().
* This would break get_page_unless_zero() users.
*/
page_ref_add(page, PAGE_FRAG_CACHE_MAX_SIZE);
/* reset page count bias and offset to start of new frag */
- nc->pfmemalloc = page_is_pfmemalloc(page);
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
nc->offset = 0;
}
+ size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
if (unlikely(offset + fragsz > size)) {
if (unlikely(fragsz > PAGE_SIZE)) {
@@ -107,13 +132,14 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
return NULL;
}
- page = virt_to_page(nc->va);
+ page = encoded_page_decode_page(encoded_page);
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
goto refill;
- if (unlikely(nc->pfmemalloc)) {
- free_unref_page(page, compound_order(page));
+ if (unlikely(encoded_page_decode_pfmemalloc(encoded_page))) {
+ free_unref_page(page,
+ encoded_page_decode_order(encoded_page));
goto refill;
}
@@ -128,7 +154,7 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
nc->pagecnt_bias--;
nc->offset = offset + fragsz;
- return nc->va + offset;
+ return encoded_page_decode_virt(encoded_page) + offset;
}
EXPORT_SYMBOL(__page_frag_alloc_align);
--
2.33.0
* [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (4 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 17:26 ` Alexander Duyck
2024-10-18 10:53 ` [PATCH net-next v22 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
` (4 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
Refactor the common code from __page_frag_alloc_va_align()
into __page_frag_cache_prepare() and __page_frag_cache_commit(),
so that the new API can make use of it.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/page_frag_cache.h | 36 +++++++++++++++++++++++++++--
mm/page_frag_cache.c | 40 ++++++++++++++++++++++++++-------
2 files changed, 66 insertions(+), 10 deletions(-)
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 41a91df82631..feed99d0cddb 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -5,6 +5,7 @@
#include <linux/bits.h>
#include <linux/log2.h>
+#include <linux/mmdebug.h>
#include <linux/mm_types_task.h>
#include <linux/types.h>
@@ -39,8 +40,39 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
void page_frag_cache_drain(struct page_frag_cache *nc);
void __page_frag_cache_drain(struct page *page, unsigned int count);
-void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
- gfp_t gfp_mask, unsigned int align_mask);
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+ struct page_frag *pfrag, gfp_t gfp_mask,
+ unsigned int align_mask);
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz);
+
+static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
+{
+ VM_BUG_ON(!nc->pagecnt_bias);
+ nc->pagecnt_bias--;
+
+ return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
+ unsigned int fragsz, gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ struct page_frag page_frag;
+ void *va;
+
+ va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
+ align_mask);
+ if (unlikely(!va))
+ return NULL;
+
+ __page_frag_cache_commit(nc, &page_frag, fragsz);
+
+ return va;
+}
static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a36fd09bf275..a852523bc8ca 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -90,9 +90,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
}
EXPORT_SYMBOL(__page_frag_cache_drain);
-void *__page_frag_alloc_align(struct page_frag_cache *nc,
- unsigned int fragsz, gfp_t gfp_mask,
- unsigned int align_mask)
+unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
+{
+ unsigned int orig_offset;
+
+ VM_BUG_ON(used_sz > pfrag->size);
+ VM_BUG_ON(pfrag->page != encoded_page_decode_page(nc->encoded_page));
+ VM_BUG_ON(pfrag->offset + pfrag->size >
+ (PAGE_SIZE << encoded_page_decode_order(nc->encoded_page)));
+
+ /* pfrag->offset might be bigger than the nc->offset due to alignment */
+ VM_BUG_ON(nc->offset > pfrag->offset);
+
+ orig_offset = nc->offset;
+ nc->offset = pfrag->offset + used_sz;
+
+ /* Return true size back to caller considering the offset alignment */
+ return nc->offset - orig_offset;
+}
+EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+
+void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
+ struct page_frag *pfrag, gfp_t gfp_mask,
+ unsigned int align_mask)
{
unsigned long encoded_page = nc->encoded_page;
unsigned int size, offset;
@@ -114,6 +136,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
/* reset page count bias and offset to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
nc->offset = 0;
+ } else {
+ page = encoded_page_decode_page(encoded_page);
}
size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
@@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
return NULL;
}
- page = encoded_page_decode_page(encoded_page);
-
if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
goto refill;
@@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
/* reset page count bias and offset to start of new frag */
nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
+ nc->offset = 0;
offset = 0;
}
- nc->pagecnt_bias--;
- nc->offset = offset + fragsz;
+ pfrag->page = page;
+ pfrag->offset = offset;
+ pfrag->size = size - offset;
return encoded_page_decode_virt(encoded_page) + offset;
}
-EXPORT_SYMBOL(__page_frag_alloc_align);
+EXPORT_SYMBOL(__page_frag_cache_prepare);
/*
* Frees a page fragment allocated out of either a compound or order 0 page.
--
2.33.0
* [PATCH net-next v22 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (5 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
` (3 subsequent siblings)
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Alexander Duyck, Andrew Morton, linux-mm
There is about a 24-byte binary size increase for
__page_frag_cache_refill() after the refactoring on an arm64
system with 64K PAGE_SIZE. From gdb disassembly, it seems we
can shrink the binary size by more than 100 bytes by using
__alloc_pages() to replace alloc_pages_node(), as the latter
performs some unnecessary checking for nid being
NUMA_NO_NODE, especially since page_frag is part of the mm
system.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
---
mm/page_frag_cache.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index a852523bc8ca..f55d34cf7d43 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -56,11 +56,11 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
gfp_mask = (gfp_mask & ~__GFP_DIRECT_RECLAIM) | __GFP_COMP |
__GFP_NOWARN | __GFP_NORETRY | __GFP_NOMEMALLOC;
- page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
- PAGE_FRAG_CACHE_MAX_ORDER);
+ page = __alloc_pages(gfp_mask, PAGE_FRAG_CACHE_MAX_ORDER,
+ numa_mem_id(), NULL);
#endif
if (unlikely(!page)) {
- page = alloc_pages_node(NUMA_NO_NODE, gfp, 0);
+ page = __alloc_pages(gfp, 0, numa_mem_id(), NULL);
order = 0;
}
--
2.33.0
* [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (6 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 18:03 ` Alexander Duyck
2024-10-18 10:53 ` [PATCH net-next v22 11/14] mm: page_frag: add testing for the newly added prepare API Yunsheng Lin
` (2 subsequent siblings)
10 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, linux-mm
There are many use cases that need a minimum amount of memory
in order to make forward progress, but are more performant if
more memory is available, or that need to probe the cache info
so any available memory can be used for frag coalescing.
Currently the skb_page_frag_refill() API is used to handle the
above use cases, but the caller needs to know about the
internal details and access the data fields of 'struct
page_frag' directly, and its implementation is similar to the
one in the mm subsystem.
To unify those two page_frag implementations, introduce a
prepare API to ensure a minimum amount of memory is satisfied
and return how much memory is actually available to the
caller, and a probe API to report the currently available
memory to the caller without doing any cache refilling. The
caller then either calls the commit API to report how much
memory it actually used, or does nothing if it decides not to
use any memory.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
include/linux/page_frag_cache.h | 130 ++++++++++++++++++++++++++++++++
mm/page_frag_cache.c | 21 ++++++
2 files changed, 151 insertions(+)
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index feed99d0cddb..1c0c11250b66 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -46,6 +46,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
struct page_frag *pfrag,
unsigned int used_sz);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ unsigned int align_mask);
static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
struct page_frag *pfrag,
@@ -88,6 +92,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
}
+static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+ align_mask)))
+ return false;
+
+ __page_frag_cache_commit(nc, pfrag, fragsz);
+ return true;
+}
+
+static inline bool page_frag_refill_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask, unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
+}
+
+static inline bool page_frag_refill(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag, gfp_t gfp_mask)
+{
+ return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
+}
+
+static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
+ align_mask);
+}
+
+static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+ -align);
+}
+
+static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask)
+{
+ return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
+ ~0u);
+}
+
+static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask,
+ unsigned int align_mask)
+{
+ return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
+}
+
+static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask,
+ unsigned int align)
+{
+ WARN_ON_ONCE(!is_power_of_2(align));
+ return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+ gfp_mask, -align);
+}
+
+static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ gfp_t gfp_mask)
+{
+ return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
+ gfp_mask, ~0u);
+}
+
+static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag)
+{
+ return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
+}
+
+static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag)
+{
+ return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
+}
+
+static inline void page_frag_commit(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
+{
+ __page_frag_cache_commit(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_commit_noref(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
+{
+ __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+}
+
+static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
+ unsigned int fragsz)
+{
+ VM_BUG_ON(fragsz > nc->offset);
+
+ nc->pagecnt_bias++;
+ nc->offset -= fragsz;
+}
+
void page_frag_free(void *addr);
#endif
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index f55d34cf7d43..5ea4b663ab8e 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -112,6 +112,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
+ unsigned int fragsz,
+ struct page_frag *pfrag,
+ unsigned int align_mask)
+{
+ unsigned long encoded_page = nc->encoded_page;
+ unsigned int size, offset;
+
+ size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
+ offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
+ if (unlikely(!encoded_page || offset + fragsz > size))
+ return NULL;
+
+ pfrag->page = encoded_page_decode_page(encoded_page);
+ pfrag->size = size - offset;
+ pfrag->offset = offset;
+
+ return encoded_page_decode_virt(encoded_page) + offset;
+}
+EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
+
void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
struct page_frag *pfrag, gfp_t gfp_mask,
unsigned int align_mask)
--
2.33.0
* [PATCH net-next v22 11/14] mm: page_frag: add testing for the newly added prepare API
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (7 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag Yunsheng Lin
[not found] ` <CAKgT0Uft5Ga0ub_Fj6nonV6E0hRYcej8x_axmGBBX_Nm_wZ_8w@mail.gmail.com>
10 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Andrew Morton, Shuah Khan, linux-mm, linux-kselftest
Add testing for the newly added prepare API, for both the
aligned and non-aligned variants; the probe API is also tested
along with the prepare API.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
.../selftests/mm/page_frag/page_frag_test.c | 76 +++++++++++++++++--
tools/testing/selftests/mm/run_vmtests.sh | 4 +
tools/testing/selftests/mm/test_page_frag.sh | 27 +++++++
3 files changed, 102 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/mm/page_frag/page_frag_test.c b/tools/testing/selftests/mm/page_frag/page_frag_test.c
index e806c1866e36..1e47e9ad66f0 100644
--- a/tools/testing/selftests/mm/page_frag/page_frag_test.c
+++ b/tools/testing/selftests/mm/page_frag/page_frag_test.c
@@ -32,6 +32,10 @@ static bool test_align;
module_param(test_align, bool, 0);
MODULE_PARM_DESC(test_align, "use align API for testing");
+static bool test_prepare;
+module_param(test_prepare, bool, 0);
+MODULE_PARM_DESC(test_prepare, "use prepare API for testing");
+
static int test_alloc_len = 2048;
module_param(test_alloc_len, int, 0);
MODULE_PARM_DESC(test_alloc_len, "alloc len for testing");
@@ -74,6 +78,21 @@ static int page_frag_pop_thread(void *arg)
return 0;
}
+static void frag_frag_test_commit(struct page_frag_cache *nc,
+ struct page_frag *prepare_pfrag,
+ struct page_frag *probe_pfrag,
+ unsigned int used_sz)
+{
+ if (prepare_pfrag->page != probe_pfrag->page ||
+ prepare_pfrag->offset != probe_pfrag->offset ||
+ prepare_pfrag->size != probe_pfrag->size) {
+ force_exit = true;
+ WARN_ONCE(true, TEST_FAILED_PREFIX "wrong probed info\n");
+ }
+
+ page_frag_commit(nc, prepare_pfrag, used_sz);
+}
+
static int page_frag_push_thread(void *arg)
{
struct ptr_ring *ring = arg;
@@ -86,15 +105,61 @@ static int page_frag_push_thread(void *arg)
int ret;
if (test_align) {
- va = page_frag_alloc_align(&test_nc, test_alloc_len,
- GFP_KERNEL, SMP_CACHE_BYTES);
+ if (test_prepare) {
+ struct page_frag prepare_frag, probe_frag;
+ void *probe_va;
+
+ va = page_frag_alloc_refill_prepare_align(&test_nc,
+ test_alloc_len,
+ &prepare_frag,
+ GFP_KERNEL,
+ SMP_CACHE_BYTES);
+
+ probe_va = __page_frag_alloc_refill_probe_align(&test_nc,
+ test_alloc_len,
+ &probe_frag,
+ -SMP_CACHE_BYTES);
+ if (va != probe_va) {
+ force_exit = true;
+ WARN_ONCE(true, TEST_FAILED_PREFIX "wrong va\n");
+ }
+
+ if (likely(va))
+ frag_frag_test_commit(&test_nc, &prepare_frag,
+ &probe_frag, test_alloc_len);
+ } else {
+ va = page_frag_alloc_align(&test_nc,
+ test_alloc_len,
+ GFP_KERNEL,
+ SMP_CACHE_BYTES);
+ }
if ((unsigned long)va & (SMP_CACHE_BYTES - 1)) {
force_exit = true;
WARN_ONCE(true, TEST_FAILED_PREFIX "unaligned va returned\n");
}
} else {
- va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+ if (test_prepare) {
+ struct page_frag prepare_frag, probe_frag;
+ void *probe_va;
+
+ va = page_frag_alloc_refill_prepare(&test_nc, test_alloc_len,
+ &prepare_frag, GFP_KERNEL);
+
+ probe_va = page_frag_alloc_refill_probe(&test_nc, test_alloc_len,
+ &probe_frag);
+
+ if (va != probe_va) {
+ force_exit = true;
+ WARN_ONCE(true, TEST_FAILED_PREFIX "wrong va\n");
+ }
+
+ if (likely(va))
+ frag_frag_test_commit(&test_nc, &prepare_frag,
+ &probe_frag, test_alloc_len);
+ } else {
+ va = page_frag_alloc(&test_nc, test_alloc_len, GFP_KERNEL);
+ }
}
if (!va)
@@ -176,8 +241,9 @@ static int __init page_frag_test_init(void)
}
duration = (u64)ktime_us_delta(ktime_get(), start);
- pr_info("%d of iterations for %s testing took: %lluus\n", nr_test,
- test_align ? "aligned" : "non-aligned", duration);
+ pr_info("%d iterations of %s %s API testing took: %lluus\n", nr_test,
+ test_align ? "aligned" : "non-aligned",
+ test_prepare ? "prepare" : "alloc", duration);
out:
ptr_ring_cleanup(&ptr_ring, NULL);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 2c5394584af4..f6ff9080a6f2 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -464,6 +464,10 @@ CATEGORY="page_frag" run_test ./test_page_frag.sh aligned
CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned
+CATEGORY="page_frag" run_test ./test_page_frag.sh aligned_prepare
+
+CATEGORY="page_frag" run_test ./test_page_frag.sh nonaligned_prepare
+
echo "SUMMARY: PASS=${count_pass} SKIP=${count_skip} FAIL=${count_fail}" | tap_prefix
echo "1..${count_total}" | tap_output
diff --git a/tools/testing/selftests/mm/test_page_frag.sh b/tools/testing/selftests/mm/test_page_frag.sh
index f55b105084cf..1c757fd11844 100755
--- a/tools/testing/selftests/mm/test_page_frag.sh
+++ b/tools/testing/selftests/mm/test_page_frag.sh
@@ -43,6 +43,8 @@ check_test_failed_prefix() {
SMOKE_PARAM="test_push_cpu=$TEST_CPU_0 test_pop_cpu=$TEST_CPU_1"
NONALIGNED_PARAM="$SMOKE_PARAM test_alloc_len=75 nr_test=$NR_TEST"
ALIGNED_PARAM="$NONALIGNED_PARAM test_align=1"
+NONALIGNED_PREPARE_PARAM="$NONALIGNED_PARAM test_prepare=1"
+ALIGNED_PREPARE_PARAM="$ALIGNED_PARAM test_prepare=1"
check_test_requirements()
{
@@ -77,6 +79,20 @@ run_aligned_check()
insmod $DRIVER $ALIGNED_PARAM > /dev/null 2>&1
}
+run_nonaligned_prepare_check()
+{
+ echo "Run performance tests to evaluate how fast the nonaligned prepare API is."
+
+ insmod $DRIVER $NONALIGNED_PREPARE_PARAM > /dev/null 2>&1
+}
+
+run_aligned_prepare_check()
+{
+ echo "Run performance tests to evaluate how fast the aligned prepare API is."
+
+ insmod $DRIVER $ALIGNED_PREPARE_PARAM > /dev/null 2>&1
+}
+
run_smoke_check()
{
echo "Run smoke test."
@@ -87,6 +103,7 @@ run_smoke_check()
usage()
{
echo -n "Usage: $0 [ aligned ] | [ nonaligned ] | | [ smoke ] | "
+ echo "[ aligned_prepare ] | [ nonaligned_prepare ] | "
echo "manual parameters"
echo
echo "Valid tests and parameters:"
@@ -107,6 +124,12 @@ usage()
echo "# Performance testing for aligned alloc API"
echo "$0 aligned"
echo
+ echo "# Performance testing for nonaligned prepare API"
+ echo "$0 nonaligned_prepare"
+ echo
+ echo "# Performance testing for aligned prepare API"
+ echo "$0 aligned_prepare"
+ echo
exit 0
}
@@ -158,6 +181,10 @@ function run_test()
run_nonaligned_check
elif [[ "$1" = "aligned" ]]; then
run_aligned_check
+ elif [[ "$1" = "nonaligned_prepare" ]]; then
+ run_nonaligned_prepare_check
+ elif [[ "$1" = "aligned_prepare" ]]; then
+ run_aligned_prepare_check
else
run_manual_check $@
fi
--
2.33.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
` (8 preceding siblings ...)
2024-10-18 10:53 ` [PATCH net-next v22 11/14] mm: page_frag: add testing for the newly added prepare API Yunsheng Lin
@ 2024-10-18 10:53 ` Yunsheng Lin
2024-10-20 10:02 ` Bagas Sanjaya
[not found] ` <CAKgT0Uft5Ga0ub_Fj6nonV6E0hRYcej8x_axmGBBX_Nm_wZ_8w@mail.gmail.com>
10 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-18 10:53 UTC (permalink / raw)
To: davem, kuba, pabeni
Cc: netdev, linux-kernel, Yunsheng Lin, Alexander Duyck,
Jonathan Corbet, Andrew Morton, linux-doc, linux-mm
Update documentation about design, implementation and API usages
for page_frag.
CC: Alexander Duyck <alexander.duyck@gmail.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
Documentation/mm/page_frags.rst | 176 +++++++++++++++++++++-
include/linux/page_frag_cache.h | 250 +++++++++++++++++++++++++++++++-
mm/page_frag_cache.c | 26 +++-
3 files changed, 441 insertions(+), 11 deletions(-)
diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
index 503ca6cdb804..7fd9398aca4e 100644
--- a/Documentation/mm/page_frags.rst
+++ b/Documentation/mm/page_frags.rst
@@ -1,3 +1,5 @@
+.. SPDX-License-Identifier: GPL-2.0
+
==============
Page fragments
==============
@@ -40,4 +42,176 @@ page via a single call. The advantage to doing this is that it allows for
cleaning up the multiple references that were added to a page in order to
avoid calling get_page per allocation.
-Alexander Duyck, Nov 29, 2016.
+
+Architecture overview
+=====================
+
+.. code-block:: none
+
+ +----------------------+
+ | page_frag API caller |
+ +----------------------+
+ |
+ |
+ v
+ +------------------------------------------------------------------+
+ | request page fragment |
+ +------------------------------------------------------------------+
+ | | |
+ | | |
+ | Cache not enough |
+ | | |
+ | +-----------------+ |
+ | | reuse old cache |--Usable-->|
+ | +-----------------+ |
+ | | |
+ | Not usable |
+ | | |
+ | v |
+ Cache empty +-----------------+ |
+ | | drain old cache | |
+ | +-----------------+ |
+ | | |
+ v_________________________________v |
+ | |
+ | |
+ _________________v_______________ |
+ | | Cache is enough
+ | | |
+ PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | |
+ | | |
+ | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE |
+ v | |
+ +----------------------------------+ | |
+ | refill cache with order > 0 page | | |
+ +----------------------------------+ | |
+ | | | |
+ | | | |
+ | Refill failed | |
+ | | | |
+ | v v |
+ | +------------------------------------+ |
+ | | refill cache with order 0 page | |
+ | +------------------------------------+ |
+ | | |
+ Refill succeed | |
+ | Refill succeed |
+ | | |
+ v v v
+ +------------------------------------------------------------------+
+ | allocate fragment from cache |
+ +------------------------------------------------------------------+
+
+API interface
+=============
+As the design and implementation of the page_frag API implies, the allocation
+side does not support concurrent calling. It is assumed that the caller ensures
+there is no concurrent allocation from the same page_frag_cache instance,
+either by using its own lock or by relying on a lockless guarantee such as
+NAPI softirq context.
+
+Depending on the alignment requirement, the page_frag API caller may call
+page_frag_*_align*() to ensure the returned virtual address or page offset is
+aligned according to the 'align/alignment' parameter. Note that the size of
+the allocated fragment is not aligned; the caller needs to provide an aligned
+fragsz if there is an alignment requirement for the fragment size.
+
+Depending on the use case, callers expecting to deal with the virtual address,
+the page, or both may call the page_frag_alloc, page_frag_refill, or
+page_frag_alloc_refill API accordingly.
+
+There is also a use case that needs a minimum amount of memory in order to
+make forward progress, but can perform better if more memory is available.
+Using the page_frag_*_prepare() and page_frag_commit*() related APIs, the
+caller requests the minimum memory it needs and the prepare API returns the
+maximum size of the fragment available. The caller then either calls the
+commit API to report how much memory it actually used, or skips the commit if
+it decides not to use any memory.
+
+.. kernel-doc:: include/linux/page_frag_cache.h
+ :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
+ __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
+ __page_frag_refill_align page_frag_refill_align
+ page_frag_refill __page_frag_refill_prepare_align
+ page_frag_refill_prepare_align page_frag_refill_prepare
+ __page_frag_alloc_refill_prepare_align
+ page_frag_alloc_refill_prepare_align
+ page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
+ page_frag_refill_probe page_frag_commit
+ page_frag_commit_noref page_frag_alloc_abort
+
+.. kernel-doc:: mm/page_frag_cache.c
+ :identifiers: page_frag_cache_drain page_frag_free
+ __page_frag_alloc_refill_probe_align
+
+Coding examples
+===============
+
+Initialization and draining API
+-------------------------------
+
+.. code-block:: c
+
+ page_frag_cache_init(nc);
+ ...
+ page_frag_cache_drain(nc);
+
+
+Allocation & freeing API
+------------------------
+
+.. code-block:: c
+
+ void *va;
+
+ va = page_frag_alloc_align(nc, size, gfp, align);
+ if (!va)
+ goto do_error;
+
+ err = do_something(va, size);
+ if (err) {
+ page_frag_alloc_abort(nc, size);
+ goto do_error;
+ }
+
+ ...
+
+ page_frag_free(va);
+
+
+Preparation & committing API
+----------------------------
+
+.. code-block:: c
+
+ struct page_frag page_frag, *pfrag;
+ bool merge = true;
+ void *va;
+
+ pfrag = &page_frag;
+ va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+ if (!va)
+ goto wait_for_space;
+
+ copy = min_t(unsigned int, copy, pfrag->size);
+ if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+ if (i >= max_skb_frags)
+ goto new_segment;
+
+ merge = false;
+ }
+
+ copy = mem_schedule(copy);
+ if (!copy)
+ goto wait_for_space;
+
+ err = copy_from_iter_full_nocache(va, copy, iter);
+ if (err)
+ goto do_error;
+
+ if (merge) {
+ skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_commit_noref(nc, pfrag, copy);
+ } else {
+ skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+ page_frag_commit(nc, pfrag, copy);
+ }
diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
index 1c0c11250b66..806d4b8d4bed 100644
--- a/include/linux/page_frag_cache.h
+++ b/include/linux/page_frag_cache.h
@@ -28,11 +28,29 @@ static inline bool encoded_page_decode_pfmemalloc(unsigned long encoded_page)
return !!(encoded_page & PAGE_FRAG_CACHE_PFMEMALLOC_BIT);
}
+/**
+ * page_frag_cache_init() - Init page_frag cache.
+ * @nc: page_frag cache from which to init
+ *
+ * Inline helper to initialize the page_frag cache.
+ */
static inline void page_frag_cache_init(struct page_frag_cache *nc)
{
nc->encoded_page = 0;
}
+/**
+ * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
+ * @nc: page_frag cache from which to check
+ *
+ * Used to check if the current page in page_frag cache is allocated from the
+ * pfmemalloc reserves. It has the same calling context expectation as the
+ * allocation API.
+ *
+ * Return:
+ * true if the current page in page_frag cache is allocated from the pfmemalloc
+ * reserves, otherwise return false.
+ */
static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
{
return encoded_page_decode_pfmemalloc(nc->encoded_page);
@@ -61,6 +79,19 @@ static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
}
+/**
+ * __page_frag_alloc_align() - Allocate a page fragment with aligning
+ * requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the 'va'
+ *
+ * Allocate a page fragment from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * Virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align_mask)
@@ -78,6 +109,19 @@ static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
return va;
}
+/**
+ * page_frag_alloc_align() - Allocate a page fragment with aligning requirement.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before allocating a page fragment from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask,
unsigned int align)
@@ -86,12 +130,36 @@ static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
return __page_frag_alloc_align(nc, fragsz, gfp_mask, -align);
}
+/**
+ * page_frag_alloc() - Allocate a page fragment.
+ * @nc: page_frag cache from which to allocate
+ * @fragsz: the requested fragment size
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Allocate a page fragment from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc(struct page_frag_cache *nc,
unsigned int fragsz, gfp_t gfp_mask)
{
return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
}
+/**
+ * __page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Refill a page_frag from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -106,6 +174,20 @@ static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
return true;
}
+/**
+ * page_frag_refill_align() - Refill a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before refilling a page_frag from
+ * page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -115,6 +197,18 @@ static inline bool page_frag_refill_align(struct page_frag_cache *nc,
return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
}
+/**
+ * page_frag_refill() - Refill a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Refill a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag, gfp_t gfp_mask)
@@ -122,6 +216,20 @@ static inline bool page_frag_refill(struct page_frag_cache *nc,
return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
}
+/**
+ * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment
+ *
+ * Prepare refilling a page_frag from page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -132,6 +240,21 @@ static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
align_mask);
}
+/**
+ * page_frag_refill_prepare_align() - Prepare refilling a page_frag with
+ * aligning requirement.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to refill a page_frag
+ * from the page_frag cache with aligning requirement.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -143,6 +266,18 @@ static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
-align);
}
+/**
+ * page_frag_refill_prepare() - Prepare refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if prepare refilling succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -152,6 +287,20 @@ static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
~0u);
}
+/**
+ * __page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -161,6 +310,21 @@ static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cach
return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
}
+/**
+ * page_frag_alloc_refill_prepare_align() - Prepare allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ * @align: the requested aligning requirement for the fragment.
+ *
+ * WARN_ON_ONCE() checking for @align before preparing to allocate a fragment
+ * and refill a page_frag from the page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -172,6 +336,19 @@ static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache
gfp_mask, -align);
}
+/**
+ * page_frag_alloc_refill_prepare() - Prepare allocating a fragment and
+ * refilling a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @gfp_mask: the allocation gfp to use when cache needs to be refilled
+ *
+ * Prepare allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -181,6 +358,18 @@ static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
gfp_mask, ~0u);
}
+/**
+ * page_frag_alloc_refill_probe() - Probe allocating a fragment and refilling
+ * a page_frag.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag)
@@ -188,6 +377,17 @@ static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
}
+/**
+ * page_frag_refill_probe() - Probe refilling a page_frag.
+ * @nc: page_frag cache from which to refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled
+ *
+ * Probe refilling a page_frag from page_frag cache.
+ *
+ * Return:
+ * True if refill succeeds, otherwise return false.
+ */
static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag)
@@ -195,20 +395,54 @@ static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
}
-static inline void page_frag_commit(struct page_frag_cache *nc,
- struct page_frag *pfrag,
- unsigned int used_sz)
+/**
+ * page_frag_commit - Commit a prepared page fragment.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the actual used size for the allocation that was either prepared
+ * or probed.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
{
- __page_frag_cache_commit(nc, pfrag, used_sz);
+ return __page_frag_cache_commit(nc, pfrag, used_sz);
}
-static inline void page_frag_commit_noref(struct page_frag_cache *nc,
- struct page_frag *pfrag,
- unsigned int used_sz)
+/**
+ * page_frag_commit_noref - Commit a prepared page fragment without taking
+ * page refcount.
+ * @nc: page_frag cache from which to commit
+ * @pfrag: the page_frag to be committed
+ * @used_sz: size of the page fragment that has been used
+ *
+ * Commit the prepared or probed fragment by passing the actual used size,
+ * without taking a page refcount. Mostly used for the fragment coalescing
+ * case, where the current fragment can share the same refcount as the
+ * previous fragment.
+ *
+ * Return:
+ * The true size of the fragment considering the offset alignment.
+ */
+static inline unsigned int page_frag_commit_noref(struct page_frag_cache *nc,
+ struct page_frag *pfrag,
+ unsigned int used_sz)
{
- __page_frag_cache_commit_noref(nc, pfrag, used_sz);
+ return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
}
+/**
+ * page_frag_alloc_abort - Abort the page fragment allocation.
+ * @nc: page_frag cache to which the page fragment is returned
+ * @fragsz: size of the page fragment to be aborted
+ *
+ * It is expected to be called from the same context as the allocation API.
+ * Mostly used for error handling cases where the fragment is no longer needed.
+ */
static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
unsigned int fragsz)
{
diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
index 5ea4b663ab8e..51f4eb4b2169 100644
--- a/mm/page_frag_cache.c
+++ b/mm/page_frag_cache.c
@@ -70,6 +70,10 @@ static struct page *__page_frag_cache_refill(struct page_frag_cache *nc,
return page;
}
+/**
+ * page_frag_cache_drain - Drain the current page from page_frag cache.
+ * @nc: page_frag cache from which to drain
+ */
void page_frag_cache_drain(struct page_frag_cache *nc)
{
if (!nc->encoded_page)
@@ -112,6 +116,20 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
}
EXPORT_SYMBOL(__page_frag_cache_commit_noref);
+/**
+ * __page_frag_alloc_refill_probe_align() - Probe allocating a fragment and
+ * refilling a page_frag with aligning requirement.
+ * @nc: page_frag cache from which to allocate and refill
+ * @fragsz: the requested fragment size
+ * @pfrag: the page_frag to be refilled.
+ * @align_mask: the requested aligning requirement for the fragment.
+ *
+ * Probe allocating a fragment and refilling a page_frag from page_frag cache
+ * with aligning requirement.
+ *
+ * Return:
+ * virtual address of the page fragment, otherwise return NULL.
+ */
void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
unsigned int fragsz,
struct page_frag *pfrag,
@@ -203,8 +221,12 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
}
EXPORT_SYMBOL(__page_frag_cache_prepare);
-/*
- * Frees a page fragment allocated out of either a compound or order 0 page.
+/**
+ * page_frag_free - Free a page fragment.
+ * @addr: va of page fragment to be freed
+ *
+ * Free a page fragment allocated out of either a compound or order 0 page by
+ * virtual address.
*/
void page_frag_free(void *addr)
{
--
2.33.0
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc'
2024-10-18 10:53 ` [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
@ 2024-10-18 16:43 ` Alexander Duyck
0 siblings, 0 replies; 24+ messages in thread
From: Alexander Duyck @ 2024-10-18 16:43 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On Fri, Oct 18, 2024 at 4:00 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> Currently there is one 'struct page_frag' for every 'struct
> sock' and 'struct task_struct', and we are about to replace the
> 'struct page_frag' with 'struct page_frag_cache' for them.
> Before beginning the replacement, we need to ensure the size of
> 'struct page_frag_cache' is not bigger than the size of
> 'struct page_frag', as there may be tens of thousands of
> 'struct sock' and 'struct task_struct' instances in the
> system.
>
> By or'ing the page order & pfmemalloc with the lower bits of
> 'va' instead of using 'u16' or 'u32' for the page size and 'u8'
> for pfmemalloc, we are able to avoid wasting 3 or 5 bytes.
> And since the page address, pfmemalloc flag and order are
> unchanged for the same page in the same 'page_frag_cache'
> instance, it makes sense to fit them together.
>
> After this patch, the size of 'struct page_frag_cache' should be
> the same as the size of 'struct page_frag'.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/mm_types_task.h | 19 +++++----
> include/linux/page_frag_cache.h | 24 ++++++++++-
> mm/page_frag_cache.c | 70 ++++++++++++++++++++++-----------
> 3 files changed, 81 insertions(+), 32 deletions(-)
>
LGTM
Reviewed-by: Alexander Duyck <alexanderduyck@fb.com>
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API
2024-10-18 10:53 ` [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
@ 2024-10-18 17:26 ` Alexander Duyck
2024-10-19 8:29 ` Yunsheng Lin
0 siblings, 1 reply; 24+ messages in thread
From: Alexander Duyck @ 2024-10-18 17:26 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On Fri, Oct 18, 2024 at 4:00 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> Refactor common code from __page_frag_alloc_va_align() to
> __page_frag_cache_prepare() and __page_frag_cache_commit(),
> so that the new API can make use of them.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/page_frag_cache.h | 36 +++++++++++++++++++++++++++--
> mm/page_frag_cache.c | 40 ++++++++++++++++++++++++++-------
> 2 files changed, 66 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index 41a91df82631..feed99d0cddb 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -5,6 +5,7 @@
>
> #include <linux/bits.h>
> #include <linux/log2.h>
> +#include <linux/mmdebug.h>
> #include <linux/mm_types_task.h>
> #include <linux/types.h>
>
> @@ -39,8 +40,39 @@ static inline bool page_frag_cache_is_pfmemalloc(struct page_frag_cache *nc)
>
> void page_frag_cache_drain(struct page_frag_cache *nc);
> void __page_frag_cache_drain(struct page *page, unsigned int count);
> -void *__page_frag_alloc_align(struct page_frag_cache *nc, unsigned int fragsz,
> - gfp_t gfp_mask, unsigned int align_mask);
> +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
> + struct page_frag *pfrag, gfp_t gfp_mask,
> + unsigned int align_mask);
> +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> + struct page_frag *pfrag,
> + unsigned int used_sz);
> +
> +static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
> + struct page_frag *pfrag,
> + unsigned int used_sz)
> +{
> + VM_BUG_ON(!nc->pagecnt_bias);
> + nc->pagecnt_bias--;
> +
> + return __page_frag_cache_commit_noref(nc, pfrag, used_sz);
> +}
> +
> +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
> + unsigned int fragsz, gfp_t gfp_mask,
> + unsigned int align_mask)
> +{
> + struct page_frag page_frag;
> + void *va;
> +
> + va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
> + align_mask);
> + if (unlikely(!va))
> + return NULL;
> +
> + __page_frag_cache_commit(nc, &page_frag, fragsz);
Minor nit here. Rather than if (!va) return I think it might be better
to just go with if (likely(va)) __page_frag_cache_commit.
> +
> + return va;
> +}
>
> static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
> unsigned int fragsz, gfp_t gfp_mask,
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index a36fd09bf275..a852523bc8ca 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -90,9 +90,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
> }
> EXPORT_SYMBOL(__page_frag_cache_drain);
>
> -void *__page_frag_alloc_align(struct page_frag_cache *nc,
> - unsigned int fragsz, gfp_t gfp_mask,
> - unsigned int align_mask)
> +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> + struct page_frag *pfrag,
> + unsigned int used_sz)
> +{
> + unsigned int orig_offset;
> +
> + VM_BUG_ON(used_sz > pfrag->size);
> + VM_BUG_ON(pfrag->page != encoded_page_decode_page(nc->encoded_page));
> + VM_BUG_ON(pfrag->offset + pfrag->size >
> + (PAGE_SIZE << encoded_page_decode_order(nc->encoded_page)));
> +
> + /* pfrag->offset might be bigger than the nc->offset due to alignment */
> + VM_BUG_ON(nc->offset > pfrag->offset);
> +
> + orig_offset = nc->offset;
> + nc->offset = pfrag->offset + used_sz;
> +
> + /* Return true size back to caller considering the offset alignment */
> + return nc->offset - orig_offset;
> +}
> +EXPORT_SYMBOL(__page_frag_cache_commit_noref);
> +
I have a question. How often is it that we are committing versus just
dropping the fragment? It seems like this approach is designed around
optimizing for not committing the page, as we are having to take an
extra function call to commit the change every time. Would it make
more sense to have an abort versus a commit?
> +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
> + struct page_frag *pfrag, gfp_t gfp_mask,
> + unsigned int align_mask)
> {
> unsigned long encoded_page = nc->encoded_page;
> unsigned int size, offset;
> @@ -114,6 +136,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> /* reset page count bias and offset to start of new frag */
> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> nc->offset = 0;
> + } else {
> + page = encoded_page_decode_page(encoded_page);
> }
>
> size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
This makes no sense to me. Seems like there are scenarios where you
are grabbing the page even if you aren't going to use it? Why?
I think you would be better off just waiting to the end and then
fetching it instead of trying to grab it and potentially throw it away
if there is no space left in the page. Otherwise what you might do is
something along the lines of:
pfrag->page = page ? : encoded_page_decode_page(encoded_page);
> @@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> return NULL;
> }
>
> - page = encoded_page_decode_page(encoded_page);
> -
> if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
> goto refill;
>
> @@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>
> /* reset page count bias and offset to start of new frag */
> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> + nc->offset = 0;
> offset = 0;
> }
>
> - nc->pagecnt_bias--;
> - nc->offset = offset + fragsz;
> + pfrag->page = page;
> + pfrag->offset = offset;
> + pfrag->size = size - offset;
I really think we should still be moving the nc->offset forward at
least with each allocation. It seems like you end up doing two flavors
of commit, one with and one without the decrement of the bias. So I
would be okay with that being pulled out into some separate logic to
avoid the extra increment in the case of merging the pages. However in
both cases you need to move the offset, so I would recommend keeping
that bit there as it would allow us to essentially call this multiple
times without having to do a commit in between to keep the offset
correct. With that your commit logic only has to verify nothing
changes out from underneath us and then update the pagecnt_bias if
needed.
>
> return encoded_page_decode_virt(encoded_page) + offset;
> }
> -EXPORT_SYMBOL(__page_frag_alloc_align);
> +EXPORT_SYMBOL(__page_frag_cache_prepare);
>
> /*
> * Frees a page fragment allocated out of either a compound or order 0 page.
> --
> 2.33.0
>
* Re: [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API
2024-10-18 10:53 ` [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
@ 2024-10-18 18:03 ` Alexander Duyck
2024-10-19 8:33 ` Yunsheng Lin
0 siblings, 1 reply; 24+ messages in thread
From: Alexander Duyck @ 2024-10-18 18:03 UTC (permalink / raw)
To: Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On Fri, Oct 18, 2024 at 4:00 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> There are many use cases that need a minimum amount of memory in order
> to make forward progress, but are more performant if more memory is
> available, or need to probe the cache info in order to use any available
> memory for fragment coalescing.
>
> Currently the skb_page_frag_refill() API is used to solve the
> above use cases, but the caller needs to know about the internal
> details and access the data fields of 'struct page_frag' to
> meet the requirements of the above use cases, and its
> implementation is similar to the one in the mm subsystem.
>
> To unify those two page_frag implementations, introduce a
> prepare API to ensure the minimum memory is satisfied and return
> how much memory is actually available to the caller, and a
> probe API to report the currently available memory to the caller
> without doing any cache refilling. The caller needs to either call
> the commit API to report how much memory it actually used, or
> not do so if it decides not to use any memory.
>
> CC: Alexander Duyck <alexander.duyck@gmail.com>
> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
> ---
> include/linux/page_frag_cache.h | 130 ++++++++++++++++++++++++++++++++
> mm/page_frag_cache.c | 21 ++++++
> 2 files changed, 151 insertions(+)
>
> diff --git a/include/linux/page_frag_cache.h b/include/linux/page_frag_cache.h
> index feed99d0cddb..1c0c11250b66 100644
> --- a/include/linux/page_frag_cache.h
> +++ b/include/linux/page_frag_cache.h
> @@ -46,6 +46,10 @@ void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
> unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> struct page_frag *pfrag,
> unsigned int used_sz);
> +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + unsigned int align_mask);
>
> static inline unsigned int __page_frag_cache_commit(struct page_frag_cache *nc,
> struct page_frag *pfrag,
> @@ -88,6 +92,132 @@ static inline void *page_frag_alloc(struct page_frag_cache *nc,
> return __page_frag_alloc_align(nc, fragsz, gfp_mask, ~0u);
> }
>
> +static inline bool __page_frag_refill_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask,
> + unsigned int align_mask)
> +{
> + if (unlikely(!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
> + align_mask)))
> + return false;
> +
> + __page_frag_cache_commit(nc, pfrag, fragsz);
> + return true;
> +}
> +
> +static inline bool page_frag_refill_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask, unsigned int align)
> +{
> + WARN_ON_ONCE(!is_power_of_2(align));
> + return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, -align);
> +}
> +
> +static inline bool page_frag_refill(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag, gfp_t gfp_mask)
> +{
> + return __page_frag_refill_align(nc, fragsz, pfrag, gfp_mask, ~0u);
> +}
> +
> +static inline bool __page_frag_refill_prepare_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask,
> + unsigned int align_mask)
> +{
> + return !!__page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask,
> + align_mask);
> +}
> +
> +static inline bool page_frag_refill_prepare_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask,
> + unsigned int align)
> +{
> + WARN_ON_ONCE(!is_power_of_2(align));
> + return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
> + -align);
> +}
> +
> +static inline bool page_frag_refill_prepare(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask)
> +{
> + return __page_frag_refill_prepare_align(nc, fragsz, pfrag, gfp_mask,
> + ~0u);
> +}
> +
> +static inline void *__page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask,
> + unsigned int align_mask)
> +{
> + return __page_frag_cache_prepare(nc, fragsz, pfrag, gfp_mask, align_mask);
> +}
> +
> +static inline void *page_frag_alloc_refill_prepare_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask,
> + unsigned int align)
> +{
> + WARN_ON_ONCE(!is_power_of_2(align));
> + return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
> + gfp_mask, -align);
> +}
> +
> +static inline void *page_frag_alloc_refill_prepare(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + gfp_t gfp_mask)
> +{
> + return __page_frag_alloc_refill_prepare_align(nc, fragsz, pfrag,
> + gfp_mask, ~0u);
> +}
> +
> +static inline void *page_frag_alloc_refill_probe(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag)
> +{
> + return __page_frag_alloc_refill_probe_align(nc, fragsz, pfrag, ~0u);
> +}
> +
> +static inline bool page_frag_refill_probe(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag)
> +{
> + return !!page_frag_alloc_refill_probe(nc, fragsz, pfrag);
> +}
> +
> +static inline void page_frag_commit(struct page_frag_cache *nc,
> + struct page_frag *pfrag,
> + unsigned int used_sz)
> +{
> + __page_frag_cache_commit(nc, pfrag, used_sz);
> +}
> +
> +static inline void page_frag_commit_noref(struct page_frag_cache *nc,
> + struct page_frag *pfrag,
> + unsigned int used_sz)
> +{
> + __page_frag_cache_commit_noref(nc, pfrag, used_sz);
> +}
> +
Not a huge fan of introducing a ton of new API calls and then having
to have them all applied at once in the follow-on patches. Ideally the
functions and the header documentation for them would be introduced in
the same patch as well as examples on how it would be used.
I really think we should break these up as some are used in one case,
and others in another and it is a pain to have a pile of abstractions
that are all using these functions in different ways.
> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
> + unsigned int fragsz)
> +{
> + VM_BUG_ON(fragsz > nc->offset);
> +
> + nc->pagecnt_bias++;
> + nc->offset -= fragsz;
> +}
> +
We should probably have the same checks here you had on the earlier
commit. We should not be allowing blind changes. If we are using the
commit or abort interfaces we should be verifying a page frag with
them to verify that the request to modify this is legitimate.
> void page_frag_free(void *addr);
>
> #endif
> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> index f55d34cf7d43..5ea4b663ab8e 100644
> --- a/mm/page_frag_cache.c
> +++ b/mm/page_frag_cache.c
> @@ -112,6 +112,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> }
> EXPORT_SYMBOL(__page_frag_cache_commit_noref);
>
> +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
> + unsigned int fragsz,
> + struct page_frag *pfrag,
> + unsigned int align_mask)
> +{
> + unsigned long encoded_page = nc->encoded_page;
> + unsigned int size, offset;
> +
> + size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
> + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
> + if (unlikely(!encoded_page || offset + fragsz > size))
> + return NULL;
> +
> + pfrag->page = encoded_page_decode_page(encoded_page);
> + pfrag->size = size - offset;
> + pfrag->offset = offset;
> +
> + return encoded_page_decode_virt(encoded_page) + offset;
> +}
> +EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
> +
If I am not mistaken this would be the equivalent of allocating a size
0 fragment right? The only difference is that you are copying out the
"remaining" size, but we could get that from the offset if we knew the
size couldn't we? Would it maybe make sense to look at limiting this
to PAGE_SIZE instead of passing the size of the actual fragment?
> void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
> struct page_frag *pfrag, gfp_t gfp_mask,
> unsigned int align_mask)
> --
> 2.33.0
>
* Re: [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API
2024-10-18 17:26 ` Alexander Duyck
@ 2024-10-19 8:29 ` Yunsheng Lin
2024-10-20 15:45 ` Alexander Duyck
0 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-19 8:29 UTC (permalink / raw)
To: Alexander Duyck, Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On 10/19/2024 1:26 AM, Alexander Duyck wrote:
...
>> +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
>> + unsigned int fragsz, gfp_t gfp_mask,
>> + unsigned int align_mask)
>> +{
>> + struct page_frag page_frag;
>> + void *va;
>> +
>> + va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
>> + align_mask);
>> + if (unlikely(!va))
>> + return NULL;
>> +
>> + __page_frag_cache_commit(nc, &page_frag, fragsz);
>
> Minor nit here. Rather than if (!va) return I think it might be better
> to just go with if (likely(va)) __page_frag_cache_commit.
Ack.
>
>> +
>> + return va;
>> +}
>>
>> static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
>> unsigned int fragsz, gfp_t gfp_mask,
>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>> index a36fd09bf275..a852523bc8ca 100644
>> --- a/mm/page_frag_cache.c
>> +++ b/mm/page_frag_cache.c
>> @@ -90,9 +90,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
>> }
>> EXPORT_SYMBOL(__page_frag_cache_drain);
>>
>> -void *__page_frag_alloc_align(struct page_frag_cache *nc,
>> - unsigned int fragsz, gfp_t gfp_mask,
>> - unsigned int align_mask)
>> +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
>> + struct page_frag *pfrag,
>> + unsigned int used_sz)
>> +{
>> + unsigned int orig_offset;
>> +
>> + VM_BUG_ON(used_sz > pfrag->size);
>> + VM_BUG_ON(pfrag->page != encoded_page_decode_page(nc->encoded_page));
>> + VM_BUG_ON(pfrag->offset + pfrag->size >
>> + (PAGE_SIZE << encoded_page_decode_order(nc->encoded_page)));
>> +
>> + /* pfrag->offset might be bigger than the nc->offset due to alignment */
>> + VM_BUG_ON(nc->offset > pfrag->offset);
>> +
>> + orig_offset = nc->offset;
>> + nc->offset = pfrag->offset + used_sz;
>> +
>> + /* Return true size back to caller considering the offset alignment */
>> + return nc->offset - orig_offset;
>> +}
>> +EXPORT_SYMBOL(__page_frag_cache_commit_noref);
>> +
>
> I have a question. How often is it that we are committing versus just
> dropping the fragment? It seems like this approach is designed around
> optimizing for not committing the page, as we are having to take an
> extra function call to commit the change every time. Would it make
> more sense to have an abort versus a commit?
Before this patch, the page_frag_alloc() related APIs seemed to be
mostly used for skb data or frags on the rx side, see napi_alloc_skb()
or some drivers like e1000, but with more drivers using page_pool for
skb rx frags, skb data on the tx side seems to be the main use case.
And the prepare and commit APIs added in this patchset seem to be mainly
used for skb frags on the tx side, except for af_packet.
It is not very clear which is the most used one; if I have to guess,
most likely the prepare and commit APIs will be, as more memory might be
needed for skb frags than for skb data.
>
>> +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
>> + struct page_frag *pfrag, gfp_t gfp_mask,
>> + unsigned int align_mask)
>> {
>> unsigned long encoded_page = nc->encoded_page;
>> unsigned int size, offset;
>> @@ -114,6 +136,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>> /* reset page count bias and offset to start of new frag */
>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> nc->offset = 0;
>> + } else {
>> + page = encoded_page_decode_page(encoded_page);
>> }
>>
>> size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
>
> This makes no sense to me. Seems like there are scenarios where you
> are grabbing the page even if you aren't going to use it? Why?
>
> I think you would be better off just waiting to the end and then
> fetching it instead of trying to grab it and potentially throw it away
> if there is no space left in the page. Otherwise what you might do is
> something along the lines of:
> pfrag->page = page ? : encoded_page_decode_page(encoded_page);
But doesn't that mean an additional check is needed to decide whether
we need to grab the page?
That said, './scripts/bloat-o-meter' does show some binary size
shrinkage using the above.
>
>
>> @@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>> return NULL;
>> }
>>
>> - page = encoded_page_decode_page(encoded_page);
>> -
>> if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
>> goto refill;
>>
>> @@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>>
>> /* reset page count bias and offset to start of new frag */
>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>> + nc->offset = 0;
>> offset = 0;
>> }
>>
>> - nc->pagecnt_bias--;
>> - nc->offset = offset + fragsz;
>> + pfrag->page = page;
>> + pfrag->offset = offset;
>> + pfrag->size = size - offset;
>
> I really think we should still be moving the nc->offset forward at
> least with each allocation. It seems like you end up doing two flavors
> of commit, one with and one without the decrement of the bias. So I
> would be okay with that being pulled out into some separate logic to
> avoid the extra increment in the case of merging the pages. However in
> both cases you need to move the offset, so I would recommend keeping
> that bit there as it would allow us to essentially call this multiple
> times without having to do a commit in between to keep the offset
> correct. With that your commit logic only has to verify nothing
> changes out from underneath us and then update the pagecnt_bias if
> needed.
The problem is that we don't really know how far nc->offset needs to
be moved forward, and the caller needs the original offset for the
skb_fill_page_desc() related call when the prepare API is used, as in
the example in the 'Preparation & committing API' section of patch 13:
+Preparation & committing API
+----------------------------
+
+.. code-block:: c
+
+ struct page_frag page_frag, *pfrag;
+ bool merge = true;
+ void *va;
+
+ pfrag = &page_frag;
+ va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
+ if (!va)
+ goto wait_for_space;
+
+ copy = min_t(unsigned int, copy, pfrag->size);
+ if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
+ if (i >= max_skb_frags)
+ goto new_segment;
+
+ merge = false;
+ }
+
+ copy = mem_schedule(copy);
+ if (!copy)
+ goto wait_for_space;
+
+ err = copy_from_iter_full_nocache(va, copy, iter);
+ if (err)
+ goto do_error;
+
+ if (merge) {
+ skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
+ page_frag_commit_noref(nc, pfrag, copy);
+ } else {
+ skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
+ page_frag_commit(nc, pfrag, copy);
+ }
* Re: [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API
2024-10-18 18:03 ` Alexander Duyck
@ 2024-10-19 8:33 ` Yunsheng Lin
2024-10-20 16:04 ` Alexander Duyck
0 siblings, 1 reply; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-19 8:33 UTC (permalink / raw)
To: Alexander Duyck, Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On 10/19/2024 2:03 AM, Alexander Duyck wrote:
>
> Not a huge fan of introducing a ton of new API calls and then having
> to have them all applied at once in the follow-on patches. Ideally the
> functions and the header documentation for them would be introduced in
> the same patch as well as examples on how it would be used.
>
> I really think we should break these up as some are used in one case,
> and others in another and it is a pain to have a pile of abstractions
> that are all using these functions in different ways.
I am guessing this patch may be split into three parts to make it more
reviewable and easier to discuss here:
1. Prepare & commit related APIs, which is still the large one.
2. Probe related APIs.
3. Abort API.
And it is worth mentioning that even if this patch is split into more
patches, it seems impossible to break patch 12 up, as almost everything
related to changing "page_frag" to "page_frag_cache" needs to be in one
patch to avoid compile errors.
>
>> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
>> + unsigned int fragsz)
>> +{
>> + VM_BUG_ON(fragsz > nc->offset);
>> +
>> + nc->pagecnt_bias++;
>> + nc->offset -= fragsz;
>> +}
>> +
>
> We should probably have the same checks here you had on the earlier
> commit. We should not be allowing blind changes. If we are using the
> commit or abort interfaces we should be verifying a page frag with
> them to verify that the request to modify this is legitimate.
As shown in the 'Allocation & freeing API' example from patch 13 below,
the abort API is used to abort the operation of the page_frag_alloc_*()
related APIs, so a 'page_frag' is not available for doing the kind of
checking the commit API does. For some cases that do not need the more
complicated prepare & commit APIs, like tun_build_skb(), the abort API
can be used to undo the page_frag_alloc_*() operation when
bpf_prog_run_xdp() returns XDP_DROP, knowing that no one else has taken
an extra reference to the just allocated fragment.
+Allocation & freeing API
+------------------------
+
+.. code-block:: c
+
+ void *va;
+
+ va = page_frag_alloc_align(nc, size, gfp, align);
+ if (!va)
+ goto do_error;
+
+ err = do_something(va, size);
+ if (err) {
+ page_frag_alloc_abort(nc, size);
+ goto do_error;
+ }
+
+ ...
+
+ page_frag_free(va);
If there is a need to abort the commit API operation, we could probably
call it something like page_frag_commit_abort()?
>
>> void page_frag_free(void *addr);
>>
>> #endif
>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>> index f55d34cf7d43..5ea4b663ab8e 100644
>> --- a/mm/page_frag_cache.c
>> +++ b/mm/page_frag_cache.c
>> @@ -112,6 +112,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
>> }
>> EXPORT_SYMBOL(__page_frag_cache_commit_noref);
>>
>> +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
>> + unsigned int fragsz,
>> + struct page_frag *pfrag,
>> + unsigned int align_mask)
>> +{
>> + unsigned long encoded_page = nc->encoded_page;
>> + unsigned int size, offset;
>> +
>> + size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
>> + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
>> + if (unlikely(!encoded_page || offset + fragsz > size))
>> + return NULL;
>> +
>> + pfrag->page = encoded_page_decode_page(encoded_page);
>> + pfrag->size = size - offset;
>> + pfrag->offset = offset;
>> +
>> + return encoded_page_decode_virt(encoded_page) + offset;
>> +}
>> +EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
>> +
>
> If I am not mistaken this would be the equivalent of allocating a size
> 0 fragment right? The only difference is that you are copying out the
> "remaining" size, but we could get that from the offset if we knew the
> size couldn't we? Would it maybe make sense to look at limiting this
> to PAGE_SIZE instead of passing the size of the actual fragment?
I am not sure I understand what "limiting this to PAGE_SIZE" means
here.
I should probably mention the use case of the probe API here. For the
mptcp_sendmsg() use case, the minimum size of a fragment can be smaller
when the new fragment can be coalesced with the previous fragment, as
extra memory is needed for some header when the fragment cannot be
coalesced with the previous fragment. The probe API is mainly used to
see if there is any memory left in the 'page_frag_cache' that can be
coalesced with the previous fragment.
>
>> void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
>> struct page_frag *pfrag, gfp_t gfp_mask,
>> unsigned int align_mask)
>> --
>> 2.33.0
>>
>
* Re: [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag
2024-10-18 10:53 ` [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag Yunsheng Lin
@ 2024-10-20 10:02 ` Bagas Sanjaya
2024-10-21 9:32 ` Yunsheng Lin
0 siblings, 1 reply; 24+ messages in thread
From: Bagas Sanjaya @ 2024-10-20 10:02 UTC (permalink / raw)
To: Yunsheng Lin, davem, kuba, pabeni
Cc: netdev, linux-kernel, Alexander Duyck, Jonathan Corbet,
Andrew Morton, linux-doc, linux-mm
On Fri, Oct 18, 2024 at 06:53:50PM +0800, Yunsheng Lin wrote:
> diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
> index 503ca6cdb804..7fd9398aca4e 100644
> --- a/Documentation/mm/page_frags.rst
> +++ b/Documentation/mm/page_frags.rst
> @@ -1,3 +1,5 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> ==============
> Page fragments
> ==============
> @@ -40,4 +42,176 @@ page via a single call. The advantage to doing this is that it allows for
> cleaning up the multiple references that were added to a page in order to
> avoid calling get_page per allocation.
>
> -Alexander Duyck, Nov 29, 2016.
> +
> +Architecture overview
> +=====================
> +
> +.. code-block:: none
> +
> + +----------------------+
> + | page_frag API caller |
> + +----------------------+
> + |
> + |
> + v
> + +------------------------------------------------------------------+
> + | request page fragment |
> + +------------------------------------------------------------------+
> + | | |
> + | | |
> + | Cache not enough |
> + | | |
> + | +-----------------+ |
> + | | reuse old cache |--Usable-->|
> + | +-----------------+ |
> + | | |
> + | Not usable |
> + | | |
> + | v |
> + Cache empty +-----------------+ |
> + | | drain old cache | |
> + | +-----------------+ |
> + | | |
> + v_________________________________v |
> + | |
> + | |
> + _________________v_______________ |
> + | | Cache is enough
> + | | |
> + PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | |
> + | | |
> + | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE |
> + v | |
> + +----------------------------------+ | |
> + | refill cache with order > 0 page | | |
> + +----------------------------------+ | |
> + | | | |
> + | | | |
> + | Refill failed | |
> + | | | |
> + | v v |
> + | +------------------------------------+ |
> + | | refill cache with order 0 page | |
> + | +------------------------------------+ |
> + | | |
> + Refill succeed | |
> + | Refill succeed |
> + | | |
> + v v v
> + +------------------------------------------------------------------+
> + | allocate fragment from cache |
> + +------------------------------------------------------------------+
> +
> +API interface
> +=============
> +As the design and implementation of the page_frag API implies, the allocation
> +side does not allow concurrent calling. Instead, it is assumed that the caller
> +must ensure there are no concurrent alloc calls to the same page_frag_cache
> +instance, by using its own lock or relying on some lockless guarantee like the
> +NAPI softirq.
> +
> +Depending on the alignment requirement, the page_frag API caller may call
> +page_frag_*_align*() to ensure the returned virtual address or offset of the
> +page is aligned according to the 'align/alignment' parameter. Note that the
> +size of the allocated fragment is not aligned; the caller needs to provide an
> +aligned fragsz if there is an alignment requirement for the size of the
> +fragment.
> +
> +Depending on the use case, callers expecting to deal with va, page, or
> +both va and page may call the page_frag_alloc, page_frag_refill, or
> +page_frag_alloc_refill APIs accordingly.
> +
> +There is also a use case that needs a minimum amount of memory in order to
> +make forward progress, but is more performant if more memory is available.
> +Using the page_frag_*_prepare() and page_frag_commit*() related APIs, the
> +caller requests the minimum memory it needs and the prepare API will return
> +the maximum size of the fragment returned. The caller needs to either call
> +the commit API to report how much memory it actually used, or not do so if
> +it decides not to use any memory.
> +
> +.. kernel-doc:: include/linux/page_frag_cache.h
> + :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
> + __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
> + __page_frag_refill_align page_frag_refill_align
> + page_frag_refill __page_frag_refill_prepare_align
> + page_frag_refill_prepare_align page_frag_refill_prepare
> + __page_frag_alloc_refill_prepare_align
> + page_frag_alloc_refill_prepare_align
> + page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
> + page_frag_refill_probe page_frag_commit
> + page_frag_commit_noref page_frag_alloc_abort
> +
> +.. kernel-doc:: mm/page_frag_cache.c
> + :identifiers: page_frag_cache_drain page_frag_free
> + __page_frag_alloc_refill_probe_align
> +
> +Coding examples
> +===============
> +
> +Initialization and draining API
> +-------------------------------
> +
> +.. code-block:: c
> +
> + page_frag_cache_init(nc);
> + ...
> + page_frag_cache_drain(nc);
> +
> +
> +Allocation & freeing API
> +------------------------
> +
> +.. code-block:: c
> +
> + void *va;
> +
> + va = page_frag_alloc_align(nc, size, gfp, align);
> + if (!va)
> + goto do_error;
> +
> + err = do_something(va, size);
> + if (err) {
> + page_frag_alloc_abort(nc, size);
> + goto do_error;
> + }
> +
> + ...
> +
> + page_frag_free(va);
> +
> +
> +Preparation & committing API
> +----------------------------
> +
> +.. code-block:: c
> +
> + struct page_frag page_frag, *pfrag;
> + bool merge = true;
> + void *va;
> +
> + pfrag = &page_frag;
> + va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
> + if (!va)
> + goto wait_for_space;
> +
> + copy = min_t(unsigned int, copy, pfrag->size);
> + if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
> + if (i >= max_skb_frags)
> + goto new_segment;
> +
> + merge = false;
> + }
> +
> + copy = mem_schedule(copy);
> + if (!copy)
> + goto wait_for_space;
> +
> + err = copy_from_iter_full_nocache(va, copy, iter);
> + if (err)
> + goto do_error;
> +
> + if (merge) {
> + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
> + page_frag_commit_noref(nc, pfrag, copy);
> + } else {
> + skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
> + page_frag_commit(nc, pfrag, copy);
> + }
Looks good.
> +/**
> + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
> + * @nc: page_frag cache from which to check
> + *
> + * Used to check if the current page in page_frag cache is allocated from the
"Check if ..."
> + * pfmemalloc reserves. It has the same calling context expectation as the
> + * allocation API.
> + *
> + * Return:
> + * true if the current page in page_frag cache is allocated from the pfmemalloc
> + * reserves, otherwise return false.
> + */
> <snipped>...
> +/**
> + * page_frag_alloc() - Allocate a page fragment.
> + * @nc: page_frag cache from which to allocate
> + * @fragsz: the requested fragment size
> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
> + *
> + * Alloc a page fragment from page_frag cache.
"Allocate a page fragment ..."
> + *
> + * Return:
> + * virtual address of the page fragment, otherwise return NULL.
> + */
> static inline void *page_frag_alloc(struct page_frag_cache *nc,
> <snipped>...
> +/**
> + * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
> + * aligning requirement.
> + * @nc: page_frag cache from which to refill
> + * @fragsz: the requested fragment size
> + * @pfrag: the page_frag to be refilled.
> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
> + * @align_mask: the requested aligning requirement for the fragment
> + *
> + * Prepare refill a page_frag from page_frag cache with aligning requirement.
"Prepare refilling ..."
> + *
> + * Return:
> + * True if prepare refilling succeeds, otherwise return false.
> + */
> <snipped>...
> +/**
> + * __page_frag_alloc_refill_probe_align() - Probe allocing a fragment and
> + * refilling a page_frag with aligning requirement.
> + * @nc: page_frag cache from which to allocate and refill
> + * @fragsz: the requested fragment size
> + * @pfrag: the page_frag to be refilled.
> + * @align_mask: the requested aligning requirement for the fragment.
> + *
> + * Probe allocing a fragment and refilling a page_frag from page_frag cache with
"Probe allocating..."
> + * aligning requirement.
> + *
> + * Return:
> + * virtual address of the page fragment, otherwise return NULL.
> + */
Thanks.
* Re: [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API
2024-10-19 8:29 ` Yunsheng Lin
@ 2024-10-20 15:45 ` Alexander Duyck
2024-10-21 9:34 ` Yunsheng Lin
0 siblings, 1 reply; 24+ messages in thread
From: Alexander Duyck @ 2024-10-20 15:45 UTC (permalink / raw)
To: Yunsheng Lin
Cc: Yunsheng Lin, davem, kuba, pabeni, netdev, linux-kernel,
Andrew Morton, linux-mm
On Sat, Oct 19, 2024 at 1:30 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote:
>
> On 10/19/2024 1:26 AM, Alexander Duyck wrote:
>
> ...
>
> >> +static inline void *__page_frag_alloc_align(struct page_frag_cache *nc,
> >> + unsigned int fragsz, gfp_t gfp_mask,
> >> + unsigned int align_mask)
> >> +{
> >> + struct page_frag page_frag;
> >> + void *va;
> >> +
> >> + va = __page_frag_cache_prepare(nc, fragsz, &page_frag, gfp_mask,
> >> + align_mask);
> >> + if (unlikely(!va))
> >> + return NULL;
> >> +
> >> + __page_frag_cache_commit(nc, &page_frag, fragsz);
> >
> > Minor nit here. Rather than if (!va) return I think it might be better
> > to just go with if (likely(va)) __page_frag_cache_commit.
>
> Ack.
>
> >
> >> +
> >> + return va;
> >> +}
> >>
> >> static inline void *page_frag_alloc_align(struct page_frag_cache *nc,
> >> unsigned int fragsz, gfp_t gfp_mask,
> >> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> >> index a36fd09bf275..a852523bc8ca 100644
> >> --- a/mm/page_frag_cache.c
> >> +++ b/mm/page_frag_cache.c
> >> @@ -90,9 +90,31 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
> >> }
> >> EXPORT_SYMBOL(__page_frag_cache_drain);
> >>
> >> -void *__page_frag_alloc_align(struct page_frag_cache *nc,
> >> - unsigned int fragsz, gfp_t gfp_mask,
> >> - unsigned int align_mask)
> >> +unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> >> + struct page_frag *pfrag,
> >> + unsigned int used_sz)
> >> +{
> >> + unsigned int orig_offset;
> >> +
> >> + VM_BUG_ON(used_sz > pfrag->size);
> >> + VM_BUG_ON(pfrag->page != encoded_page_decode_page(nc->encoded_page));
> >> + VM_BUG_ON(pfrag->offset + pfrag->size >
> >> + (PAGE_SIZE << encoded_page_decode_order(nc->encoded_page)));
> >> +
> >> + /* pfrag->offset might be bigger than the nc->offset due to alignment */
> >> + VM_BUG_ON(nc->offset > pfrag->offset);
> >> +
> >> + orig_offset = nc->offset;
> >> + nc->offset = pfrag->offset + used_sz;
> >> +
> >> + /* Return true size back to caller considering the offset alignment */
> >> + return nc->offset - orig_offset;
> >> +}
> >> +EXPORT_SYMBOL(__page_frag_cache_commit_noref);
> >> +
> >
> > I have a question. How often is it that we are committing versus just
> > dropping the fragment? It seems like this approach is designed around
> > optimizing for not commiting the page as we are having to take an
> > extra function call to commit the change every time. Would it make
> > more sense to have an abort versus a commit?
>
> Before this patch, page_frag_alloc() related API seems to be mostly used
> for skb data or frag for rx part, see napi_alloc_skb() or some drivers
> like e1000, but with more drivers using the page_pool for skb rx frag,
> it seems skb data for tx is the main usecase.
>
> And the prepare and commit API added in the patchset seems to be mainly
> used for skb frag for tx part except af_packet.
>
> It seems it is not very clear which is mostly used one, mostly likely
> the prepare and commit API might be the mostly used one if I have to
> guess as there might be more memory needed for skb frag than skb data.
Well one of the things I am noticing is that you have essentially two
API setups in the later patches.
In one you are calling the page_frag_alloc_align and then later
calling an abort function that is added later. In the other you have
the probe/commit approach. In my mind it might make sense to think
about breaking those up to be handled as two separate APIs rather than
trying to replace everything all at once.
> >
> >> +void *__page_frag_cache_prepare(struct page_frag_cache *nc, unsigned int fragsz,
> >> + struct page_frag *pfrag, gfp_t gfp_mask,
> >> + unsigned int align_mask)
> >> {
> >> unsigned long encoded_page = nc->encoded_page;
> >> unsigned int size, offset;
> >> @@ -114,6 +136,8 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> >> /* reset page count bias and offset to start of new frag */
> >> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >> nc->offset = 0;
> >> + } else {
> >> + page = encoded_page_decode_page(encoded_page);
> >> }
> >>
> >> size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
> >
> > This makes no sense to me. Seems like there are scenarios where you
> > are grabbing the page even if you aren't going to use it? Why?
> >
> > I think you would be better off just waiting to the end and then
> > fetching it instead of trying to grab it and potentially throw it away
> > if there is no space left in the page. Otherwise what you might do is
> > something along the lines of:
> > pfrag->page = page ? : encoded_page_decode_page(encoded_page);
>
> But doesn't that mean an additional check is needed to decide if we
> need to grab the page?
>
> But the './scripts/bloat-o-meter' does show some binary size shrink
> using the above.
You are probably correct on this one. I think your approach may be
better. I think the only case my approach would be optimizing for
would probably be the size > 4K case, which isn't appropriate anyway.
> >
> >
> >> @@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> >> return NULL;
> >> }
> >>
> >> - page = encoded_page_decode_page(encoded_page);
> >> -
> >> if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
> >> goto refill;
> >>
> >> @@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
> >>
> >> /* reset page count bias and offset to start of new frag */
> >> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
> >> + nc->offset = 0;
> >> offset = 0;
> >> }
> >>
> >> - nc->pagecnt_bias--;
> >> - nc->offset = offset + fragsz;
> >> + pfrag->page = page;
> >> + pfrag->offset = offset;
> >> + pfrag->size = size - offset;
> >
> > I really think we should still be moving the nc->offset forward at
> > least with each allocation. It seems like you end up doing two flavors
> > of commit, one with and one without the decrement of the bias. So I
> > would be okay with that being pulled out into some separate logic to
> > avoid the extra increment in the case of merging the pages. However in
> > both cases you need to move the offset, so I would recommend keeping
> > that bit there as it would allow us to essentially call this multiple
> > times without having to do a commit in between to keep the offset
> > correct. With that your commit logic only has to verify nothing
> > changes out from underneath us and then update the pagecnt_bias if
> > needed.
>
> The problem is that we don't really know how much the nc->offset
> need to be moved forward to and the caller needs the original offset
> for skb_fill_page_desc() related calling when prepare API is used as
> an example in 'Preparation & committing API' section of patch 13:
The thing is you really have 2 different APIs. You have one you were
doing which was an alloc/abort approach and another that is a
probe/commit approach. I think for the probe/commit you could probably
get away with using an "alloc" type approach with a size of 0 which
would correctly set the start of your offset and then you would need
to update it later once you know the total size for your commit. For
the probe/commit we could use the nc->offset as a kind of cookie to
verify we are working with the expected page and offset.
For the alloc/abort it would be something similar but more the
reverse. With that one we would need to have the size + offset and
then verify the current offset is equal to that before we allow
reverting the previous nc->offset update. The current patch set is a
bit too permissive on the abort in my opinion and should be verifying
that we are updating the correct offset.
* Re: [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API
2024-10-19 8:33 ` Yunsheng Lin
@ 2024-10-20 16:04 ` Alexander Duyck
2024-10-21 9:36 ` Yunsheng Lin
0 siblings, 1 reply; 24+ messages in thread
From: Alexander Duyck @ 2024-10-20 16:04 UTC (permalink / raw)
To: Yunsheng Lin
Cc: Yunsheng Lin, davem, kuba, pabeni, netdev, linux-kernel,
Andrew Morton, linux-mm
On Sat, Oct 19, 2024 at 1:33 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote:
>
> On 10/19/2024 2:03 AM, Alexander Duyck wrote:
>
> >
> > Not a huge fan of introducing a ton of new API calls and then having
> > to have them all applied at once in the follow-on patches. Ideally the
> > functions and the header documentation for them would be introduced in
> > the same patch as well as examples on how it would be used.
> >
> > I really think we should break these up as some are used in one case,
> > and others in another and it is a pain to have a pile of abstractions
> > that are all using these functions in different ways.
>
> I am guessing this patch may be split into three parts to make it more
> reviewable and easier to discuss here:
> 1. Prepare & commit related API, which is still the large one.
> 2. Probe API related API.
In my mind the first two listed here are much more related to each
other than to this abort API.
> 3. Abort API.
I wonder if we couldn't look at introducing this first as it is
actually closer to the existing API in terms of how you might use it.
The only spot of commonality I can think of in terms of all these is
that we would need to be able to verify the VA, offset, and size. I
partly wonder if for our page frag API we couldn't get away with
passing a virtual address instead of a page for the page frag. It
would save us having to do the virt_to_page or page_to_virt when
trying to verify a commit or a revert.
> And it is worth mentioning that even if this patch is split into more
> patches, it seems impossible to break patch 12 up as almost everything
> related to changing "page_frag" to "page_frag_cache" need to be one
> patch to avoid compile error.
That is partly true. One issue is that there are more changes there
than just changing out the page APIs. It seems like you went in
performing optimizations as soon as you were changing out the page
allocation method used. For example one thing that jumps out at me was
the removal of linear_to_page and its replacement with
spd_fill_linear_page which seems to take on other pieces of the
function as well, and you made it a return path of its own when that
section wasn't one previously.
Ideally changing out the APIs used should be more about doing just
that and avoiding additional optimization or deviations from the
original coded path if possible.
> >
> >> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
> >> + unsigned int fragsz)
> >> +{
> >> + VM_BUG_ON(fragsz > nc->offset);
> >> +
> >> + nc->pagecnt_bias++;
> >> + nc->offset -= fragsz;
> >> +}
> >> +
> >
> > We should probably have the same checks here you had on the earlier
> > commit. We should not be allowing blind changes. If we are using the
> > commit or abort interfaces we should be verifying a page frag with
> > them to verify that the request to modify this is legitimate.
>
> As an example in 'Preparation & committing API' section of patch 13, the
> abort API is used to abort the operation of page_frag_alloc_*() related
> API, so 'page_frag' is not available for doing those checking like the
> commit API. For some case without the needing of complicated prepare &
> commit API like tun_build_skb(), the abort API can be used to abort the
> operation of page_frag_alloc_*() related API when bpf_prog_run_xdp()
> returns XDP_DROP knowing that no one else is taking extra reference to
> the just allocated fragment.
>
> +Allocation & freeing API
> +------------------------
> +
> +.. code-block:: c
> +
> + void *va;
> +
> + va = page_frag_alloc_align(nc, size, gfp, align);
> + if (!va)
> + goto do_error;
> +
> + err = do_something(va, size);
> + if (err) {
> + page_frag_alloc_abort(nc, size);
> + goto do_error;
> + }
> +
> + ...
> +
> + page_frag_free(va);
>
>
> If there is a need to abort the commit API operation, we probably call
> it something like page_frag_commit_abort()?
I would argue that using an abort API in such a case is likely not
valid then. What we most likely need to be doing is passing the va as
a part of the abort request. With that we should be able to work our
way backwards to get back to verifying the fragment came from the
correct page before we allow stuffing it back on the page.
> >
> >> void page_frag_free(void *addr);
> >>
> >> #endif
> >> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
> >> index f55d34cf7d43..5ea4b663ab8e 100644
> >> --- a/mm/page_frag_cache.c
> >> +++ b/mm/page_frag_cache.c
> >> @@ -112,6 +112,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
> >> }
> >> EXPORT_SYMBOL(__page_frag_cache_commit_noref);
> >>
> >> +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
> >> + unsigned int fragsz,
> >> + struct page_frag *pfrag,
> >> + unsigned int align_mask)
> >> +{
> >> + unsigned long encoded_page = nc->encoded_page;
> >> + unsigned int size, offset;
> >> +
> >> + size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
> >> + offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
> >> + if (unlikely(!encoded_page || offset + fragsz > size))
> >> + return NULL;
> >> +
> >> + pfrag->page = encoded_page_decode_page(encoded_page);
> >> + pfrag->size = size - offset;
> >> + pfrag->offset = offset;
> >> +
> >> + return encoded_page_decode_virt(encoded_page) + offset;
> >> +}
> >> +EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
> >> +
> >
> > If I am not mistaken this would be the equivalent of allocating a size
> > 0 fragment right? The only difference is that you are copying out the
> > "remaining" size, but we could get that from the offset if we knew the
> > size couldn't we? Would it maybe make sense to look at limiting this
> > to PAGE_SIZE instead of passing the size of the actual fragment?
>
> I am not sure if I understand what does "limiting this to PAGE_SIZE"
> mean here.
Right now you are returning pfrag->size = size - offset. I am
wondering if we should be returning something more like "pfrag->size =
PAGE_SIZE - (offset % PAGE_SIZE)".
> I should probably mention the use case of the probe API here. For
> mptcp_sendmsg(), the minimum size of a fragment can be smaller when the
> new fragment can be coalesced into the previous fragment, as extra
> memory is needed for some header when the fragment cannot be coalesced
> into the previous fragment. The probe API is mainly used to see if
> there is any memory left in the 'page_frag_cache' that can be coalesced
> into the previous fragment.
What is the fragment size we are talking about? In my example above we
would basically be looking at rounding the page off to the nearest
PAGE_SIZE block before we would have to repeat the call to grab the
next PAGE_SIZE block. Since the request size for the page frag alloc
API is supposed to be limited to 4K or less it would make sense to
limit the probe API similarly.
* Re: [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag
2024-10-20 10:02 ` Bagas Sanjaya
@ 2024-10-21 9:32 ` Yunsheng Lin
0 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-21 9:32 UTC (permalink / raw)
To: Bagas Sanjaya, davem, kuba, pabeni
Cc: netdev, linux-kernel, Alexander Duyck, Jonathan Corbet,
Andrew Morton, linux-doc, linux-mm
On 2024/10/20 18:02, Bagas Sanjaya wrote:
Thanks, I will try my best not to miss any 'alloc' typos in the next
version of the doc patch :(
> On Fri, Oct 18, 2024 at 06:53:50PM +0800, Yunsheng Lin wrote:
>> diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
>> index 503ca6cdb804..7fd9398aca4e 100644
>> --- a/Documentation/mm/page_frags.rst
>> +++ b/Documentation/mm/page_frags.rst
>> @@ -1,3 +1,5 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>> ==============
>> Page fragments
>> ==============
>> @@ -40,4 +42,176 @@ page via a single call. The advantage to doing this is that it allows for
>> cleaning up the multiple references that were added to a page in order to
>> avoid calling get_page per allocation.
>>
>> -Alexander Duyck, Nov 29, 2016.
>> +
>> +Architecture overview
>> +=====================
>> +
>> +.. code-block:: none
>> +
>> + +----------------------+
>> + | page_frag API caller |
>> + +----------------------+
>> + |
>> + |
>> + v
>> + +------------------------------------------------------------------+
>> + | request page fragment |
>> + +------------------------------------------------------------------+
>> + | | |
>> + | | |
>> + | Cache not enough |
>> + | | |
>> + | +-----------------+ |
>> + | | reuse old cache |--Usable-->|
>> + | +-----------------+ |
>> + | | |
>> + | Not usable |
>> + | | |
>> + | v |
>> + Cache empty +-----------------+ |
>> + | | drain old cache | |
>> + | +-----------------+ |
>> + | | |
>> + v_________________________________v |
>> + | |
>> + | |
>> + _________________v_______________ |
>> + | | Cache is enough
>> + | | |
>> + PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE | |
>> + | | |
>> + | PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE |
>> + v | |
>> + +----------------------------------+ | |
>> + | refill cache with order > 0 page | | |
>> + +----------------------------------+ | |
>> + | | | |
>> + | | | |
>> + | Refill failed | |
>> + | | | |
>> + | v v |
>> + | +------------------------------------+ |
>> + | | refill cache with order 0 page | |
>> + | +----------------------------------=-+ |
>> + | | |
>> + Refill succeed | |
>> + | Refill succeed |
>> + | | |
>> + v v v
>> + +------------------------------------------------------------------+
>> + | allocate fragment from cache |
>> + +------------------------------------------------------------------+
>> +
>> +API interface
>> +=============
>> +As the design and implementation of page_frag API implies, the allocation side
>> +does not allow concurrent calling. Instead it is assumed that the caller must
>> +ensure there is not concurrent alloc calling to the same page_frag_cache
>> +instance by using its own lock or rely on some lockless guarantee like NAPI
>> +softirq.
>> +
>> +Depending on different aligning requirement, the page_frag API caller may call
>> +page_frag_*_align*() to ensure the returned virtual address or offset of the
>> +page is aligned according to the 'align/alignment' parameter. Note the size of
>> +the allocated fragment is not aligned, the caller needs to provide an aligned
>> +fragsz if there is an alignment requirement for the size of the fragment.
>> +
>> +Depending on different use cases, callers expecting to deal with va, page or
>> +both va and page for them may call page_frag_alloc, page_frag_refill, or
>> +page_frag_alloc_refill API accordingly.
>> +
>> +There is also a use case that needs minimum memory in order for forward progress,
>> +but more performant if more memory is available. Using page_frag_*_prepare() and
>> +page_frag_commit*() related API, the caller requests the minimum memory it needs
>> +and the prepare API will return the maximum size of the fragment returned. The
>> +caller needs to either call the commit API to report how much memory it actually
>> +uses, or not do so if deciding to not use any memory.
>> +
>> +.. kernel-doc:: include/linux/page_frag_cache.h
>> + :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
>> + __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
>> + __page_frag_refill_align page_frag_refill_align
>> + page_frag_refill __page_frag_refill_prepare_align
>> + page_frag_refill_prepare_align page_frag_refill_prepare
>> + __page_frag_alloc_refill_prepare_align
>> + page_frag_alloc_refill_prepare_align
>> + page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
>> + page_frag_refill_probe page_frag_commit
>> + page_frag_commit_noref page_frag_alloc_abort
>> +
>> +.. kernel-doc:: mm/page_frag_cache.c
>> + :identifiers: page_frag_cache_drain page_frag_free
>> + __page_frag_alloc_refill_probe_align
>> +
>> +Coding examples
>> +===============
>> +
>> +Initialization and draining API
>> +-------------------------------
>> +
>> +.. code-block:: c
>> +
>> + page_frag_cache_init(nc);
>> + ...
>> + page_frag_cache_drain(nc);
>> +
>> +
>> +Allocation & freeing API
>> +------------------------
>> +
>> +.. code-block:: c
>> +
>> + void *va;
>> +
>> + va = page_frag_alloc_align(nc, size, gfp, align);
>> + if (!va)
>> + goto do_error;
>> +
>> + err = do_something(va, size);
>> + if (err) {
>> + page_frag_abort(nc, size);
>> + goto do_error;
>> + }
>> +
>> + ...
>> +
>> + page_frag_free(va);
>> +
>> +
>> +Preparation & committing API
>> +----------------------------
>> +
>> +.. code-block:: c
>> +
>> + struct page_frag page_frag, *pfrag;
>> + bool merge = true;
>> + void *va;
>> +
>> + pfrag = &page_frag;
>> + va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
>> + if (!va)
>> + goto wait_for_space;
>> +
>> + copy = min_t(unsigned int, copy, pfrag->size);
>> + if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
>> + if (i >= max_skb_frags)
>> + goto new_segment;
>> +
>> + merge = false;
>> + }
>> +
>> + copy = mem_schedule(copy);
>> + if (!copy)
>> + goto wait_for_space;
>> +
>> + err = copy_from_iter_full_nocache(va, copy, iter);
>> + if (err)
>> + goto do_error;
>> +
>> + if (merge) {
>> + skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
>> + page_frag_commit_noref(nc, pfrag, copy);
>> + } else {
>> + skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
>> + page_frag_commit(nc, pfrag, copy);
>> + }
>
> Looks good.
>
>> +/**
>> + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
>> + * @nc: page_frag cache from which to check
>> + *
>> + * Used to check if the current page in page_frag cache is allocated from the
> "Check if ..."
>> + * pfmemalloc reserves. It has the same calling context expectation as the
>> + * allocation API.
>> + *
>> + * Return:
>> + * true if the current page in page_frag cache is allocated from the pfmemalloc
>> + * reserves, otherwise return false.
>> + */
>> <snipped>...
>> +/**
>> + * page_frag_alloc() - Allocate a page fragment.
>> + * @nc: page_frag cache from which to allocate
>> + * @fragsz: the requested fragment size
>> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
>> + *
>> + * Alloc a page fragment from page_frag cache.
> "Allocate a page fragment ..."
>> + *
>> + * Return:
>> + * virtual address of the page fragment, otherwise return NULL.
>> + */
>> static inline void *page_frag_alloc(struct page_frag_cache *nc,
>> <snipped>...
>> +/**
>> + * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
>> + * aligning requirement.
>> + * @nc: page_frag cache from which to refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
>> + * @align_mask: the requested aligning requirement for the fragment
>> + *
>> + * Prepare refill a page_frag from page_frag cache with aligning requirement.
> "Prepare refilling ..."
>> + *
>> + * Return:
>> + * True if prepare refilling succeeds, otherwise return false.
>> + */
>> <snipped>...
>> +/**
>> + * __page_frag_alloc_refill_probe_align() - Probe allocing a fragment and
>> + * refilling a page_frag with aligning requirement.
>> + * @nc: page_frag cache from which to allocate and refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @align_mask: the requested aligning requirement for the fragment.
>> + *
>> + * Probe allocing a fragment and refilling a page_frag from page_frag cache with
> "Probe allocating..."
>> + * aligning requirement.
>> + *
>> + * Return:
>> + * virtual address of the page fragment, otherwise return NULL.
>> + */
>
> Thanks.
>
* Re: [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API
2024-10-20 15:45 ` Alexander Duyck
@ 2024-10-21 9:34 ` Yunsheng Lin
0 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-21 9:34 UTC (permalink / raw)
To: Alexander Duyck, Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On 2024/10/20 23:45, Alexander Duyck wrote:
...
>
>>>
>>>
>>>> @@ -132,8 +156,6 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>>>> return NULL;
>>>> }
>>>>
>>>> - page = encoded_page_decode_page(encoded_page);
>>>> -
>>>> if (!page_ref_sub_and_test(page, nc->pagecnt_bias))
>>>> goto refill;
>>>>
>>>> @@ -148,15 +170,17 @@ void *__page_frag_alloc_align(struct page_frag_cache *nc,
>>>>
>>>> /* reset page count bias and offset to start of new frag */
>>>> nc->pagecnt_bias = PAGE_FRAG_CACHE_MAX_SIZE + 1;
>>>> + nc->offset = 0;
>>>> offset = 0;
>>>> }
>>>>
>>>> - nc->pagecnt_bias--;
>>>> - nc->offset = offset + fragsz;
>>>> + pfrag->page = page;
>>>> + pfrag->offset = offset;
>>>> + pfrag->size = size - offset;
>>>
>>> I really think we should still be moving the nc->offset forward at
>>> least with each allocation. It seems like you end up doing two flavors
>>> of commit, one with and one without the decrement of the bias. So I
>>> would be okay with that being pulled out into some separate logic to
>>> avoid the extra increment in the case of merging the pages. However in
>>> both cases you need to move the offset, so I would recommend keeping
>>> that bit there as it would allow us to essentially call this multiple
>>> times without having to do a commit in between to keep the offset
>>> correct. With that your commit logic only has to verify nothing
>>> changes out from underneath us and then update the pagecnt_bias if
>>> needed.
>>
>> The problem is that we don't really know how much the nc->offset
>> need to be moved forward to and the caller needs the original offset
>> for skb_fill_page_desc() related calling when prepare API is used as
>> an example in 'Preparation & committing API' section of patch 13:
>
> The thing is you really have 2 different APIs. You have one you were
> doing which was a alloc/abort approach and another that is a
> probe/commit approach. I think for the probe/commit you could probably
> get away with using an "alloc" type approach with a size of 0 which
> would correctly set the start of your offset and then you would need
> to update it later once you know the total size for your commit. For
It seems there are some issues with the above approach, as far as I can
see for now:
1. When nc->encoded_page is 0, calling the "alloc" type API with fragsz
being zero may still allocate a new page from the allocator, which seems
to go against the purpose of the probe API, right?
2. It doesn't allow the caller to specify a fragsz for probing; instead
it relies on the caller to check whether the size of the probed fragment
is big enough for its use case.
> the probe/commit we could use the nc->offset as a kind of cookie to
> verify we are working with the expected page and offset.
I am not sure I am following the above, but I should mention that
nc->offset is not updated for the prepare/probe API because the original
offset might be used for calculating the truesize of the fragment when
the commit API is called, and the offset returned to the caller might
need to be updated according to the alignment requirement, so I am not
sure how nc->offset can be used to verify the exact offset here.
If it is really about catching misuse of the page_frag API, it might be
better to add something like nc->last_offset to record the offset of the
allocated fragment under some config like PAGE_FRAG_DEBUG, as there are
other ways the caller might mess up here, like violating the allocation
context assumption.
>
> For the alloc/abort it would be something similar but more the
> reverse. With that one we would need to have the size + offset and
> then verify the current offset is equal to that before we allow
> reverting the previous nc->offset update. The current patch set is a
> bit too permissive on the abort in my opinion and should be verifying
> that we are updating the correct offset.
I am not sure I understand your idea about how to do exact verification
for the abort API here.
For the abort API, it seems we can do exact verification if the 'va' is
also passed to it, as nc->offset is already updated, something like
below:
static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
void *va, unsigned int fragsz)
{
VM_BUG_ON((nc->offset - fragsz) !=
          (va - encoded_page_decode_virt(nc->encoded_page)));
nc->pagecnt_bias++;
nc->offset -= fragsz;
}
But it also might mean we need to put page_frag_alloc_abort() in
page_frag_cache.c instead of keeping it as an inline helper in
page_frag_cache.h, as encoded_page_decode_virt() is a static function in
the .c file. Or put encoded_page_decode_virt() in the .h file.
* Re: [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API
2024-10-20 16:04 ` Alexander Duyck
@ 2024-10-21 9:36 ` Yunsheng Lin
0 siblings, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-21 9:36 UTC (permalink / raw)
To: Alexander Duyck, Yunsheng Lin
Cc: davem, kuba, pabeni, netdev, linux-kernel, Andrew Morton, linux-mm
On 2024/10/21 0:04, Alexander Duyck wrote:
> On Sat, Oct 19, 2024 at 1:33 AM Yunsheng Lin <yunshenglin0825@gmail.com> wrote:
>>
>> On 10/19/2024 2:03 AM, Alexander Duyck wrote:
>>
>>>
>>> Not a huge fan of introducing a ton of new API calls and then having
>>> to have them all applied at once in the follow-on patches. Ideally the
>>> functions and the header documentation for them would be introduced in
>>> the same patch as well as examples on how it would be used.
>>>
>>> I really think we should break these up as some are used in one case,
>>> and others in another and it is a pain to have a pile of abstractions
>>> that are all using these functions in different ways.
>>
>> I am guessing this patch may be split into three parts to make it more
>> reviewable and easier to discuss here:
>> 1. Prepare & commit related API, which is still the large one.
>> 2. Probe API related API.
>
> In my mind the first two listed here are much more related to each
> other than this abort api.
>
>> 3. Abort API.
>
> I wonder if we couldn't look at introducing this first as it is
> actually closer to the existing API in terms of how you might use it.
> The only spot of commonality I can think of in terms of all these is
> that we would need to be able to verify the VA, offset, and size. I
> partly wonder if for our page frag API we couldn't get away with
> passing a virtual address instead of a page for the page frag. It
> would save us having to do the virt_to_page or page_to_virt when
> trying to verify a commit or a revert.
Perhaps break this patch into more patches, in the order below?
mm: page_frag: introduce page_frag_alloc_abort() API
mm: page_frag: introduce refill prepare & commit API
mm: page_frag: introduce alloc_refill prepare & commit API
mm: page_frag: introduce probe related API
>
>
>> And it is worthing mentioning that even if this patch is split into more
>> patches, it seems impossible to break patch 12 up as almost everything
>> related to changing "page_frag" to "page_frag_cache" need to be one
>> patch to avoid compile error.
>
> That is partly true. One issue is that there are more changes there
> than just changing out the page APIs. It seems like you went in
> performing optimizations as soon as you were changing out the page
> allocation method used. For example one thing that jumps out at me was
> the removal of linear_to_page and its replacement with
> spd_fill_linear_page which seems to take on other pieces of the
> function as well as you made it a return path of its own when that
> section wasn't previously.
The reason for the new spd_fill_linear_page() is that the reference
counting in spd_fill_page() is no longer reusable for the new API, which
uses page_frag_commit() and page_frag_commit_noref() instead of the
get_page() in spd_fill_page().
>
> Ideally changing out the APIs used should be more about doing just
> that and avoiding additional optimization or deviations from the
> original coded path if possible.
Yes, we can always do better; I am just not sure it is worth it.
>
>>>
>>>> +static inline void page_frag_alloc_abort(struct page_frag_cache *nc,
>>>> +					 unsigned int fragsz)
>>>> +{
>>>> +	VM_BUG_ON(fragsz > nc->offset);
>>>> +
>>>> +	nc->pagecnt_bias++;
>>>> +	nc->offset -= fragsz;
>>>> +}
>>>> +
>>>
>>> We should probably have the same checks here you had on the earlier
>>> commit. We should not be allowing blind changes. If we are using the
>>> commit or abort interfaces we should be verifying a page frag with
>>> them to verify that the request to modify this is legitimate.
>>
>> As an example, in the 'Preparation & committing API' section of patch
>> 13, the abort API is used to abort the operation of the
>> page_frag_alloc_*() APIs, so a 'page_frag' is not available for doing
>> the checks the commit API does. For cases that do not need the
>> complicated prepare & commit API, like tun_build_skb(), the abort API
>> can be used to abort the operation of the page_frag_alloc_*() APIs when
>> bpf_prog_run_xdp() returns XDP_DROP, knowing that no one else has taken
>> an extra reference on the just-allocated fragment.
>>
>> +Allocation & freeing API
>> +------------------------
>> +
>> +.. code-block:: c
>> +
>> +    void *va;
>> +
>> +    va = page_frag_alloc_align(nc, size, gfp, align);
>> +    if (!va)
>> +        goto do_error;
>> +
>> +    err = do_something(va, size);
>> +    if (err) {
>> +        page_frag_alloc_abort(nc, size);
>> +        goto do_error;
>> +    }
>> +
>> +    ...
>> +
>> +    page_frag_free(va);
>>
>>
>> If there is a need to abort the commit API operation, we probably call
>> it something like page_frag_commit_abort()?
>
> I would argue that using an abort API in such a case is likely not
> valid then. What we most likely need to be doing is passing the va as
> a part of the abort request. With that we should be able to work our
> way backwards to get back to verifying the fragment came from the
> correct page before we allow stuffing it back on the page.
How about something like the below, as mentioned in the previous comment:
page_frag_alloc_abort(nc, va, size);
>
>>>
>>>> void page_frag_free(void *addr);
>>>>
>>>> #endif
>>>> diff --git a/mm/page_frag_cache.c b/mm/page_frag_cache.c
>>>> index f55d34cf7d43..5ea4b663ab8e 100644
>>>> --- a/mm/page_frag_cache.c
>>>> +++ b/mm/page_frag_cache.c
>>>> @@ -112,6 +112,27 @@ unsigned int __page_frag_cache_commit_noref(struct page_frag_cache *nc,
>>>> }
>>>> EXPORT_SYMBOL(__page_frag_cache_commit_noref);
>>>>
>>>> +void *__page_frag_alloc_refill_probe_align(struct page_frag_cache *nc,
>>>> +					   unsigned int fragsz,
>>>> +					   struct page_frag *pfrag,
>>>> +					   unsigned int align_mask)
>>>> +{
>>>> +	unsigned long encoded_page = nc->encoded_page;
>>>> +	unsigned int size, offset;
>>>> +
>>>> +	size = PAGE_SIZE << encoded_page_decode_order(encoded_page);
>>>> +	offset = __ALIGN_KERNEL_MASK(nc->offset, ~align_mask);
>>>> +	if (unlikely(!encoded_page || offset + fragsz > size))
>>>> +		return NULL;
>>>> +
>>>> +	pfrag->page = encoded_page_decode_page(encoded_page);
>>>> +	pfrag->size = size - offset;
>>>> +	pfrag->offset = offset;
>>>> +
>>>> +	return encoded_page_decode_virt(encoded_page) + offset;
>>>> +}
>>>> +EXPORT_SYMBOL(__page_frag_alloc_refill_probe_align);
>>>> +
>>>
>>> If I am not mistaken this would be the equivalent of allocating a size
>>> 0 fragment right? The only difference is that you are copying out the
>>> "remaining" size, but we could get that from the offset if we knew the
>>> size couldn't we? Would it maybe make sense to look at limiting this
>>> to PAGE_SIZE instead of passing the size of the actual fragment?
>>
>> I am not sure I understand what "limiting this to PAGE_SIZE" means
>> here.
>
> Right now you are returning pfrag->size = size - offset. I am
> wondering if we should be returning something more like "pfrag->size =
> PAGE_SIZE - (offset % PAGE_SIZE)".
Doesn't doing the above defeat the purpose of the 'performant' part
mentioned in the commit log? With the above, the new page_frag API would
not provide the expected semantics of skb_page_frag_refill(), where the
caller can use up the whole order-3 page by accessing pfrag->size
directly:
"There are many use cases that need minimum memory in order
for forward progress, but more performant if more memory is
available"
>
>> I should probably mention the use case of the probe API here. For the
>> mptcp_sendmsg() use case, the minimum size of a fragment can be smaller
>> when the new fragment can be coalesced with the previous fragment, as
>> extra memory is needed for a header when the fragment cannot be
>> coalesced with the previous fragment. The probe API is mainly used to
>> see if there is any memory left in the 'page_frag_cache' that can be
>> coalesced with the previous fragment.
>
> What is the fragment size we are talking about? In my example above we
I am talking about the minimum fragment size required by the caller. If
there is more space, the caller can decide how much to use via the
'used_sz' parameter passed to the commit API.
We only need to limit the caller to not passing a fragsz larger than
PAGE_SIZE when calling a prepare API, not when calling the commit API.
> would basically be looking at rounding the page off to the nearest
> PAGE_SIZE block before we would have to repeat the call to grab the
> next PAGE_SIZE block. Since the request size for the page frag alloc
> API is supposed to be limited to 4K or less it would make sense to
> limit the probe API similarly.
That is partly true for the prepare/commit API: it is true for the
prepare API, but not for the commit API.
* Re: [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
[not found] ` <02d4971c-a906-44e8-b694-bd54a89cf671@gmail.com>
@ 2024-10-24 9:05 ` Paolo Abeni
2024-10-24 11:39 ` Yunsheng Lin
2024-10-27 3:42 ` Yunsheng Lin
0 siblings, 2 replies; 24+ messages in thread
From: Paolo Abeni @ 2024-10-24 9:05 UTC (permalink / raw)
To: Yunsheng Lin, Andrew Morton
Cc: netdev, linux-kernel, Shuah Khan, Eric Dumazet, Alexander Duyck,
davem, Yunsheng Lin, kuba, linux-mm
Hi,
I just noted that the MM maintainer and ML were not CCed on the cover
letter (though they were on the relevant patches); adding them now.
On 10/19/24 10:27, Yunsheng Lin wrote:
> On 10/19/2024 1:39 AM, Alexander Duyck wrote:
>> So I still think this set should be split in half in order to make
>> this easier to review. The ones I have provided a review-by for so far
>> seem fine to me. I really think if you just submitted that batch first
>> we can get that landed and let them stew in the kernel for a bit to
>> make sure we didn't miss anything there.
>
> It makes sense to me too that it might be better to get those submitted
> to get more testing if there is no more comment about it.
>
> I am guessing they should be targetting net-next tree to get more
> testing as all the callers of page_frag API seem to be in the
> networking, right?
>
> Hi, David, Jakub & Paolo
> It would be good if those patches are just cherry-picked from this
> patchset as those patches with 'Reviewed-by' tag seem to be applying
> cleanly. Or any better suggestion here?
We can cherry-pick the patches from the posted series, applying the
review tags as needed, but we need an explicit ack from the mm
maintainer, given that the mentioned patches mostly touch mm code.
I would like to avoid repeating a recent incident of unintentionally
stepping on another subsystem's toes.
@Andrew: are you ok with the above plan?
Thank you,
Paolo
* Re: [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
2024-10-24 9:05 ` [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Paolo Abeni
@ 2024-10-24 11:39 ` Yunsheng Lin
2024-10-27 3:42 ` Yunsheng Lin
1 sibling, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-24 11:39 UTC (permalink / raw)
To: Paolo Abeni, Yunsheng Lin, Andrew Morton
Cc: netdev, linux-kernel, Shuah Khan, Eric Dumazet, Alexander Duyck,
davem, kuba, linux-mm
On 2024/10/24 17:05, Paolo Abeni wrote:
> Hi,
>
> I just noted MM maintainer and ML was not CC on the cover-letter (but
> they were on the relevant patches), adding them now.
>
> On 10/19/24 10:27, Yunsheng Lin wrote:
>> On 10/19/2024 1:39 AM, Alexander Duyck wrote:
>>> So I still think this set should be split in half in order to make
>>> this easier to review. The ones I have provided a review-by for so far
>>> seem fine to me. I really think if you just submitted that batch first
>>> we can get that landed and let them stew in the kernel for a bit to
>>> make sure we didn't miss anything there.
>>
>> It makes sense to me too that it might be better to get those submitted
>> to get more testing if there is no more comment about it.
>>
>> I am guessing they should be targetting net-next tree to get more
>> testing as all the callers of page_frag API seem to be in the
>> networking, right?
>>
>> Hi, David, Jakub & Paolo
>> It would be good if those patches are just cherry-picked from this
>> patchset as those patches with 'Reviewed-by' tag seem to be applying
>> cleanly. Or any better suggestion here?
>
> We can cherry pick the patches from the posted series, applying the
> review tags as needed, but we need an explicit ack from the mm
Thanks.
It would be good to cherry-pick the below one too, as it also has a
'Reviewed-by' tag. I mention it because it might be easy to miss, since
it sits after a patch without a 'Reviewed-by' tag, and it also seems to
apply cleanly:
[net-next,v22,08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node()
https://patchwork.kernel.org/project/netdevbpf/patch/20241018105351.1960345-9-linyunsheng@huawei.com/
> maintainer, given the mentioned patches touch mostly such code.
Sorry for failing to CC Andrew and the MM ML.
Maybe I should have mentioned that Andrew provided an 'Acked-by' on
patch 2, but it is always safer to double-check.
>
> I would like to avoid repeating a recent incident of unintentionally
> stepping on other subsystem toes.
>
> @Andrew: are you ok with the above plan?
>
> Thank you,
>
> Paolo
>
>
* Re: [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag()
2024-10-24 9:05 ` [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Paolo Abeni
2024-10-24 11:39 ` Yunsheng Lin
@ 2024-10-27 3:42 ` Yunsheng Lin
1 sibling, 0 replies; 24+ messages in thread
From: Yunsheng Lin @ 2024-10-27 3:42 UTC (permalink / raw)
To: Andrew Morton
Cc: netdev, linux-kernel, Shuah Khan, Eric Dumazet, Alexander Duyck,
davem, Yunsheng Lin, kuba, linux-mm, Paolo Abeni
Hi, Andrew
On 10/24/2024 5:05 PM, Paolo Abeni wrote:
> Hi,
>
> I just noted MM maintainer and ML was not CC on the cover-letter (but
> they were on the relevant patches), adding them now.
>
> On 10/19/24 10:27, Yunsheng Lin wrote:
>> On 10/19/2024 1:39 AM, Alexander Duyck wrote:
>>> So I still think this set should be split in half in order to make
>>> this easier to review. The ones I have provided a review-by for so far
>>> seem fine to me. I really think if you just submitted that batch first
>>> we can get that landed and let them stew in the kernel for a bit to
>>> make sure we didn't miss anything there.
>>
>> It makes sense to me too that it might be better to get those submitted
>> to get more testing if there is no more comment about it.
>>
>> I am guessing they should be targetting net-next tree to get more
>> testing as all the callers of page_frag API seem to be in the
>> networking, right?
>>
>> Hi, David, Jakub & Paolo
>> It would be good if those patches are just cherry-picked from this
>> patchset as those patches with 'Reviewed-by' tag seem to be applying
>> cleanly. Or any better suggestion here?
>
> We can cherry pick the patches from the posted series, applying the
> review tags as needed, but we need an explicit ack from the mm
> maintainer, given the mentioned patches touch mostly such code.
>
> I would like to avoid repeating a recent incident of unintentionally
> stepping on other subsystem toes.
>
> @Andrew: are you ok with the above plan?
Is cherry-picking the above patches to the net-next tree OK with you?
More specifically, they are patches 1, 2, 3, 4, 5, 6 and 8, each with at
least one 'Acked-by' or 'Reviewed-by' tag.
Or do you have a better suggestion about the plan?
[not found] <20241018105351.1960345-1-linyunsheng@huawei.com>
2024-10-18 10:53 ` [PATCH net-next v22 01/14] mm: page_frag: add a test module for page_frag Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 02/14] mm: move the page fragment allocator from page_alloc into its own file Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 03/14] mm: page_frag: use initial zero offset for page_frag_alloc_align() Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 04/14] mm: page_frag: avoid caller accessing 'page_frag_cache' directly Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 06/14] mm: page_frag: reuse existing space for 'size' and 'pfmemalloc' Yunsheng Lin
2024-10-18 16:43 ` Alexander Duyck
2024-10-18 10:53 ` [PATCH net-next v22 07/14] mm: page_frag: some minor refactoring before adding new API Yunsheng Lin
2024-10-18 17:26 ` Alexander Duyck
2024-10-19 8:29 ` Yunsheng Lin
2024-10-20 15:45 ` Alexander Duyck
2024-10-21 9:34 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 08/14] mm: page_frag: use __alloc_pages() to replace alloc_pages_node() Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 10/14] mm: page_frag: introduce prepare/probe/commit API Yunsheng Lin
2024-10-18 18:03 ` Alexander Duyck
2024-10-19 8:33 ` Yunsheng Lin
2024-10-20 16:04 ` Alexander Duyck
2024-10-21 9:36 ` Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 11/14] mm: page_frag: add testing for the newly added prepare API Yunsheng Lin
2024-10-18 10:53 ` [PATCH net-next v22 13/14] mm: page_frag: update documentation for page_frag Yunsheng Lin
2024-10-20 10:02 ` Bagas Sanjaya
2024-10-21 9:32 ` Yunsheng Lin
[not found] ` <CAKgT0Uft5Ga0ub_Fj6nonV6E0hRYcej8x_axmGBBX_Nm_wZ_8w@mail.gmail.com>
[not found] ` <02d4971c-a906-44e8-b694-bd54a89cf671@gmail.com>
2024-10-24 9:05 ` [PATCH net-next v22 00/14] Replace page_frag with page_frag_cache for sk_page_frag() Paolo Abeni
2024-10-24 11:39 ` Yunsheng Lin
2024-10-27 3:42 ` Yunsheng Lin