From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <6f9840b3-66c4-485e-b6bb-baeaa641e720@huawei.com>
Date: Mon, 21 Oct 2024 17:32:35 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH net-next v22 13/14] mm: page_frag: update documentation for
 page_frag
To: Bagas Sanjaya
CC: Alexander Duyck, Jonathan Corbet, Andrew Morton
References: <20241018105351.1960345-1-linyunsheng@huawei.com>
 <20241018105351.1960345-14-linyunsheng@huawei.com>
Content-Language: en-US
From: Yunsheng Lin
In-Reply-To: 
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
On 2024/10/20 18:02, Bagas Sanjaya wrote:

Thanks, will try my best to not miss any 'alloc' typo for the doc patch in
the next version :(

> On Fri, Oct 18, 2024 at 06:53:50PM +0800, Yunsheng Lin wrote:
>> diff --git a/Documentation/mm/page_frags.rst b/Documentation/mm/page_frags.rst
>> index 503ca6cdb804..7fd9398aca4e 100644
>> --- a/Documentation/mm/page_frags.rst
>> +++ b/Documentation/mm/page_frags.rst
>> @@ -1,3 +1,5 @@
>> +.. SPDX-License-Identifier: GPL-2.0
>> +
>>  ==============
>>  Page fragments
>>  ==============
>> @@ -40,4 +42,176 @@ page via a single call. The advantage to doing this is that it allows for
>>  cleaning up the multiple references that were added to a page in order to
>>  avoid calling get_page per allocation.
>>  
>> -Alexander Duyck, Nov 29, 2016.
>> +
>> +Architecture overview
>> +=====================
>> +
>> +.. code-block:: none
>> +
>> +                 +----------------------+
>> +                 | page_frag API caller |
>> +                 +----------------------+
>> +                            |
>> +                            |
>> +                            v
>> +  +------------------------------------------------------------------+
>> +  |                      request page fragment                       |
>> +  +------------------------------------------------------------------+
>> +      |                             |                            |
>> +      |                             |                            |
>> +      |                      Cache not enough                    |
>> +      |                             |                            |
>> +      |                    +-----------------+                   |
>> +      |                    | reuse old cache |--Usable---------->|
>> +      |                    +-----------------+                   |
>> +      |                             |                            |
>> +      |                         Not usable                       |
>> +      |                             |                            |
>> +      |                             v                            |
>> +  Cache empty               +-----------------+                  |
>> +      |                     | drain old cache |                  |
>> +      |                     +-----------------+                  |
>> +      |                             |                            |
>> +      v_________________________________v                        |
>> +                       |                                         |
>> +                       |                                         |
>> +       _________________v_______________                         |
>> +      |                                 |               Cache is enough
>> +      |                                 |                        |
>> + PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE  |                        |
>> +      |                                 |                        |
>> +      |     PAGE_SIZE >= PAGE_FRAG_CACHE_MAX_SIZE               |
>> +      v                                 |                        |
>> + +----------------------------------+   |                        |
>> + | refill cache with order > 0 page |   |                        |
>> + +----------------------------------+   |                        |
>> +      |              |                  |                        |
>> +      |              |                  |                        |
>> +      |        Refill failed            |                        |
>> +      |              |                  |                        |
>> +      |              v                  v                        |
>> +      |    +------------------------------------+                |
>> +      |    |   refill cache with order 0 page   |                |
>> +      |    +------------------------------------+                |
>> +      |                   |                                      |
>> + Refill succeed           |                                      |
>> +      |            Refill succeed                                |
>> +      |                   |                                      |
>> +      v                   v                                      v
>> +  +------------------------------------------------------------------+
>> +  |                  allocate fragment from cache                    |
>> +  +------------------------------------------------------------------+
>> +
>> +API interface
>> +=============
>> +As the design and implementation of page_frag API implies, the allocation side
>> +does not allow concurrent calling. Instead it is assumed that the caller must
>> +ensure there is not concurrent alloc calling to the same page_frag_cache
>> +instance by using its own lock or rely on some lockless guarantee like NAPI
>> +softirq.
>> +
>> +Depending on different aligning requirement, the page_frag API caller may call
>> +page_frag_*_align*() to ensure the returned virtual address or offset of the
>> +page is aligned according to the 'align/alignment' parameter. Note the size of
>> +the allocated fragment is not aligned, the caller needs to provide an aligned
>> +fragsz if there is an alignment requirement for the size of the fragment.
>> +
>> +Depending on different use cases, callers expecting to deal with va, page or
>> +both va and page for them may call page_frag_alloc, page_frag_refill, or
>> +page_frag_alloc_refill API accordingly.
>> +
>> +There is also a use case that needs minimum memory in order for forward progress,
>> +but more performant if more memory is available. Using page_frag_*_prepare() and
>> +page_frag_commit*() related API, the caller requests the minimum memory it needs
>> +and the prepare API will return the maximum size of the fragment returned. The
>> +caller needs to either call the commit API to report how much memory it actually
>> +uses, or not do so if deciding to not use any memory.
>> +
>> +.. kernel-doc:: include/linux/page_frag_cache.h
>> +   :identifiers: page_frag_cache_init page_frag_cache_is_pfmemalloc
>> +                 __page_frag_alloc_align page_frag_alloc_align page_frag_alloc
>> +                 __page_frag_refill_align page_frag_refill_align
>> +                 page_frag_refill __page_frag_refill_prepare_align
>> +                 page_frag_refill_prepare_align page_frag_refill_prepare
>> +                 __page_frag_alloc_refill_prepare_align
>> +                 page_frag_alloc_refill_prepare_align
>> +                 page_frag_alloc_refill_prepare page_frag_alloc_refill_probe
>> +                 page_frag_refill_probe page_frag_commit
>> +                 page_frag_commit_noref page_frag_alloc_abort
>> +
>> +.. kernel-doc:: mm/page_frag_cache.c
>> +   :identifiers: page_frag_cache_drain page_frag_free
>> +                 __page_frag_alloc_refill_probe_align
>> +
>> +Coding examples
>> +===============
>> +
>> +Initialization and draining API
>> +-------------------------------
>> +
>> +.. code-block:: c
>> +
>> +   page_frag_cache_init(nc);
>> +   ...
>> +   page_frag_cache_drain(nc);
>> +
>> +
>> +Allocation & freeing API
>> +------------------------
>> +
>> +.. code-block:: c
>> +
>> +   void *va;
>> +
>> +   va = page_frag_alloc_align(nc, size, gfp, align);
>> +   if (!va)
>> +           goto do_error;
>> +
>> +   err = do_something(va, size);
>> +   if (err) {
>> +           page_frag_abort(nc, size);
>> +           goto do_error;
>> +   }
>> +
>> +   ...
>> +
>> +   page_frag_free(va);
>> +
>> +
>> +Preparation & committing API
>> +----------------------------
>> +
>> +.. code-block:: c
>> +
>> +   struct page_frag page_frag, *pfrag;
>> +   bool merge = true;
>> +   void *va;
>> +
>> +   pfrag = &page_frag;
>> +   va = page_frag_alloc_refill_prepare(nc, 32U, pfrag, GFP_KERNEL);
>> +   if (!va)
>> +           goto wait_for_space;
>> +
>> +   copy = min_t(unsigned int, copy, pfrag->size);
>> +   if (!skb_can_coalesce(skb, i, pfrag->page, pfrag->offset)) {
>> +           if (i >= max_skb_frags)
>> +                   goto new_segment;
>> +
>> +           merge = false;
>> +   }
>> +
>> +   copy = mem_schedule(copy);
>> +   if (!copy)
>> +           goto wait_for_space;
>> +
>> +   err = copy_from_iter_full_nocache(va, copy, iter);
>> +   if (err)
>> +           goto do_error;
>> +
>> +   if (merge) {
>> +           skb_frag_size_add(&skb_shinfo(skb)->frags[i - 1], copy);
>> +           page_frag_commit_noref(nc, pfrag, copy);
>> +   } else {
>> +           skb_fill_page_desc(skb, i, pfrag->page, pfrag->offset, copy);
>> +           page_frag_commit(nc, pfrag, copy);
>> +   }
>
> Looks good.
>
>> +/**
>> + * page_frag_cache_is_pfmemalloc() - Check for pfmemalloc.
>> + * @nc: page_frag cache from which to check
>> + *
>> + * Used to check if the current page in page_frag cache is allocated from the
> "Check if ..."
>> + * pfmemalloc reserves. It has the same calling context expectation as the
>> + * allocation API.
>> + *
>> + * Return:
>> + * true if the current page in page_frag cache is allocated from the pfmemalloc
>> + * reserves, otherwise return false.
>> + */
>> ...
>> +/**
>> + * page_frag_alloc() - Allocate a page fragment.
>> + * @nc: page_frag cache from which to allocate
>> + * @fragsz: the requested fragment size
>> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
>> + *
>> + * Alloc a page fragment from page_frag cache.
> "Allocate a page fragment ..."
>> + *
>> + * Return:
>> + * virtual address of the page fragment, otherwise return NULL.
>> + */
>> static inline void *page_frag_alloc(struct page_frag_cache *nc,
>> ...
>> +/**
>> + * __page_frag_refill_prepare_align() - Prepare refilling a page_frag with
>> + *                                      aligning requirement.
>> + * @nc: page_frag cache from which to refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @gfp_mask: the allocation gfp to use when cache need to be refilled
>> + * @align_mask: the requested aligning requirement for the fragment
>> + *
>> + * Prepare refill a page_frag from page_frag cache with aligning requirement.
> "Prepare refilling ..."
>> + *
>> + * Return:
>> + * True if prepare refilling succeeds, otherwise return false.
>> + */
>> ...
>> +/**
>> + * __page_frag_alloc_refill_probe_align() - Probe allocing a fragment and
>> + *                                          refilling a page_frag with
>> + *                                          aligning requirement.
>> + * @nc: page_frag cache from which to allocate and refill
>> + * @fragsz: the requested fragment size
>> + * @pfrag: the page_frag to be refilled.
>> + * @align_mask: the requested aligning requirement for the fragment.
>> + *
>> + * Probe allocing a fragment and refilling a page_frag from page_frag cache with
> "Probe allocating..."
>> + * aligning requirement.
>> + *
>> + * Return:
>> + * virtual address of the page fragment, otherwise return NULL.
>> + */
>
> Thanks.
>