From: SeongJae Park <sj@kernel.org>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: akpm@linux-foundation.org, torvalds@linux-foundation.org,
sj@kernel.org, hannes@cmpxchg.org, mhocko@kernel.org,
roman.gushchin@linux.dev, shakeelb@google.com,
muchun.song@linux.dev, naoya.horiguchi@nec.com,
linmiaohe@huawei.com, david@redhat.com, osalvador@suse.de,
mike.kravetz@oracle.com, willy@infradead.org,
damon@lists.linux.dev, cgroups@vger.kernel.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 1/4] mm: change to return bool for folio_isolate_lru()
Date: Tue, 14 Feb 2023 17:46:26 +0000 [thread overview]
Message-ID: <20230214174626.71336-1-sj@kernel.org> (raw)
In-Reply-To: <7abe2897803410fdc8f167b46bb5b0467de0e3c9.1676382188.git.baolin.wang@linux.alibaba.com>

On Tue, 14 Feb 2023 21:59:29 +0800 Baolin Wang <baolin.wang@linux.alibaba.com> wrote:

> Currently folio_isolate_lru() does not return a boolean value to
> indicate whether isolation succeeded, yet code like the following,
> which tests its return value, reads as if it did, and that makes
> mistakes easy (see the fix patch[1]).
>
> if (folio_isolate_lru(folio))
> continue;
>
> Thus it's better to check the negative error value explicitly returned
> by folio_isolate_lru(), which makes the code clearer per Linus's
> suggestion[2]. Moreover, Matthew suggested converting the isolation
> functions to return a boolean[3], since most callers do not care about
> the specific negative error value, and this also removes the confusion
> around checking the return value.
>
> So this patch converts folio_isolate_lru() to return a boolean value:
> 'true' means the folio was isolated successfully, and 'false' means
> isolation failed. All callers are updated to check the isolation
> result accordingly.
>
> No functional changes intended.
>
> [1] https://lore.kernel.org/all/20230131063206.28820-1-Kuan-Ying.Lee@mediatek.com/T/#u
> [2] https://lore.kernel.org/all/CAHk-=wiBrY+O-4=2mrbVyxR+hOqfdJ=Do6xoucfJ9_5az01L4Q@mail.gmail.com/
> [3] https://lore.kernel.org/all/Y+sTFqwMNAjDvxw3@casper.infradead.org/
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/damon/paddr.c | 2 +-
> mm/folio-compat.c | 8 +++++++-
> mm/gup.c | 2 +-
> mm/internal.h | 2 +-
> mm/khugepaged.c | 2 +-
> mm/madvise.c | 4 ++--
> mm/mempolicy.c | 2 +-
> mm/vmscan.c | 10 +++++-----
> 8 files changed, 19 insertions(+), 13 deletions(-)
>
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index b4df9b9bcc0a..607bb69e526c 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -246,7 +246,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
>
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> - if (folio_isolate_lru(folio)) {
> + if (!folio_isolate_lru(folio)) {
> folio_put(folio);
> continue;
> }
> diff --git a/mm/folio-compat.c b/mm/folio-compat.c
> index 18c48b557926..540373cf904e 100644
> --- a/mm/folio-compat.c
> +++ b/mm/folio-compat.c
> @@ -115,9 +115,15 @@ EXPORT_SYMBOL(grab_cache_page_write_begin);
>
> int isolate_lru_page(struct page *page)
> {
> + bool ret;
> +
> if (WARN_RATELIMIT(PageTail(page), "trying to isolate tail page"))
> return -EBUSY;
> - return folio_isolate_lru((struct folio *)page);
> + ret = folio_isolate_lru((struct folio *)page);
> + if (ret)
> + return 0;
> +
> + return -EBUSY;
> }
>
> void putback_lru_page(struct page *page)
> diff --git a/mm/gup.c b/mm/gup.c
> index b0885f70579c..eab18ba045db 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1939,7 +1939,7 @@ static unsigned long collect_longterm_unpinnable_pages(
> drain_allow = false;
> }
>
> - if (folio_isolate_lru(folio))
> + if (!folio_isolate_lru(folio))
> continue;
>
> list_add_tail(&folio->lru, movable_page_list);
> diff --git a/mm/internal.h b/mm/internal.h
> index dfb37e94e140..8645e8496537 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -188,7 +188,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
> * in mm/vmscan.c:
> */
> int isolate_lru_page(struct page *page);
> -int folio_isolate_lru(struct folio *folio);
> +bool folio_isolate_lru(struct folio *folio);
> void putback_lru_page(struct page *page);
> void folio_putback_lru(struct folio *folio);
> extern void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason);
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index a5d32231bfad..cee659cfa3c1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2047,7 +2047,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> goto out_unlock;
> }
>
> - if (folio_isolate_lru(folio)) {
> + if (!folio_isolate_lru(folio)) {
> result = SCAN_DEL_PAGE_LRU;
> goto out_unlock;
> }
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 5a5a687d03c2..c2202f51e9dd 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -406,7 +406,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> if (pageout) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> if (folio_test_unevictable(folio))
> folio_putback_lru(folio);
> else
> @@ -500,7 +500,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> folio_clear_referenced(folio);
> folio_test_clear_young(folio);
> if (pageout) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> if (folio_test_unevictable(folio))
> folio_putback_lru(folio);
> else
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index 0919c7a719d4..2751bc3310fd 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -1033,7 +1033,7 @@ static int migrate_folio_add(struct folio *folio, struct list_head *foliolist,
> * expensive, so check the estimated mapcount of the folio instead.
> */
> if ((flags & MPOL_MF_MOVE_ALL) || folio_estimated_sharers(folio) == 1) {
> - if (!folio_isolate_lru(folio)) {
> + if (folio_isolate_lru(folio)) {
> list_add_tail(&folio->lru, foliolist);
> node_stat_mod_folio(folio,
> NR_ISOLATED_ANON + folio_is_file_lru(folio),
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 34535bbd4fe9..7658b40df947 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2337,12 +2337,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
> * (2) The lru_lock must not be held.
> * (3) Interrupts must be enabled.
> *
> - * Return: 0 if the folio was removed from an LRU list.
> - * -EBUSY if the folio was not on an LRU list.
> + * Return: true if the folio was removed from an LRU list.
> + * false if the folio was not on an LRU list.
> */
> -int folio_isolate_lru(struct folio *folio)
> +bool folio_isolate_lru(struct folio *folio)
> {
> - int ret = -EBUSY;
> + bool ret = false;
>
> VM_BUG_ON_FOLIO(!folio_ref_count(folio), folio);
>
> @@ -2353,7 +2353,7 @@ int folio_isolate_lru(struct folio *folio)
> lruvec = folio_lruvec_lock_irq(folio);
> lruvec_del_folio(lruvec, folio);
> unlock_page_lruvec_irq(lruvec);
> - ret = 0;
> + ret = true;
> }
>
> return ret;
> --
> 2.27.0
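
As a side note for readers skimming the diff: below is a minimal
user-space sketch (hypothetical stubs, not the kernel functions) of why
the old int convention reads backwards at call sites and how the bool
convention matches the reader's intuition:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Old convention: 0 on success, -EBUSY when the folio is not on an LRU. */
    static int isolate_old(bool on_lru) { return on_lru ? 0 : -EBUSY; }

    /* New convention: true on success, false on failure. */
    static bool isolate_new(bool on_lru) { return on_lru; }

    int main(void)
    {
        bool on_lru = true;

        /* Old style: a bare truth test selects the *failure* path. */
        if (isolate_old(on_lru))
            printf("old: isolation failed, skip this folio\n");
        else
            printf("old: isolation succeeded\n");

        /* New style: the truth test means success, as most readers expect. */
        if (isolate_new(on_lru))
            printf("new: isolation succeeded, move the folio\n");
        else
            printf("new: isolation failed, skip this folio\n");

        return 0;
    }

With the bool return, call sites such as the gup.c and damon/paddr.c
hunks above simply negate the result to skip folios that could not be
isolated, instead of relying on a non-zero error code being truthy.
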
Reviewed-by: SeongJae Park <sj@kernel.org>

Thanks,
SJ