From mboxrd@z Thu Jan  1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Wed, 8 May 2024 20:30:12 +1200
Subject: Re: [PATCH v3 2/6] mm: remove swap_free() and always use swap_free_nr()
To: "Huang, Ying", Christoph Hellwig, chrisl@kernel.org
Cc: Ryan Roberts, akpm@linux-foundation.org, linux-mm@kvack.org,
	baolin.wang@linux.alibaba.com, david@redhat.com, hanchuanhua@oppo.com,
	hannes@cmpxchg.org, hughd@google.com, kasong@tencent.com,
	linux-kernel@vger.kernel.org, surenb@google.com, v-songbaohua@oppo.com,
	willy@infradead.org, xiang@kernel.org, yosryahmed@google.com,
	yuzhao@google.com, ziy@nvidia.com, "Rafael J. Wysocki", Pavel Machek,
	Len Brown
In-Reply-To: <87y18kivny.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240503005023.174597-1-21cnbao@gmail.com>
	<20240503005023.174597-3-21cnbao@gmail.com>
	<87y18kivny.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Wed, May 8, 2024 at 7:58 PM Huang, Ying wrote:
>
> Ryan Roberts writes:
>
> > On 03/05/2024 01:50, Barry Song wrote:
> >> From: Barry Song
> >>
> >> To streamline maintenance efforts, we propose discontinuing the use of
> >> swap_free(). Instead, we can simply invoke swap_free_nr() with nr set
> >> to 1. This adjustment offers the advantage of enabling batch processing
> >> within kernel/power/swap.c. Furthermore, swap_free_nr() is designed with
> >> a bitmap consisting of only one long, resulting in overhead that can be
> >> ignored for cases where nr equals 1.
> >>
> >> Suggested-by: "Huang, Ying"
> >> Signed-off-by: Barry Song
> >> Cc: "Rafael J. Wysocki"
> >> Cc: Pavel Machek
> >> Cc: Len Brown
> >> Cc: Hugh Dickins
> >> ---
> >>  include/linux/swap.h |  5 -----
> >>  kernel/power/swap.c  |  7 +++----
> >>  mm/memory.c          |  2 +-
> >>  mm/rmap.c            |  4 ++--
> >>  mm/shmem.c           |  4 ++--
> >>  mm/swapfile.c        | 19 +++++--------------
> >>  6 files changed, 13 insertions(+), 28 deletions(-)
> >>
> >> diff --git a/include/linux/swap.h b/include/linux/swap.h
> >> index d1d35e92d7e9..f03cb446124e 100644
> >> --- a/include/linux/swap.h
> >> +++ b/include/linux/swap.h
> >> @@ -482,7 +482,6 @@ extern int add_swap_count_continuation(swp_entry_t, gfp_t);
> >>  extern void swap_shmem_alloc(swp_entry_t);
> >>  extern int swap_duplicate(swp_entry_t);
> >>  extern int swapcache_prepare(swp_entry_t);
> >> -extern void swap_free(swp_entry_t);
> >
> > I wonder if it would be cleaner to:
> >
> > #define swap_free(entry) swap_free_nr((entry), 1)
> >
> > To save all the churn for the callsites that just want to pass a single entry?
>
> I prefer this way.  Although I prefer inline functions.

Yes, using static inline is preferable.
I've recently submitted a checkpatch/codestyle patch for this, which can be
found at:

https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-everything&id=39c58d5ed036
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?h=mm-everything&id=8379bf0b0e1f5

Using static inline aligns with the established rule.

>
> Otherwise, LGTM.  Feel free to add
>
> Reviewed-by: "Huang, Ying"

Thanks!

>
> in the future version.

I believe Christoph's vote leans towards simply removing swap_free_nr()
and renaming it to swap_free() while adding a new parameter, as follows:

void swap_free(swp_entry_t entry, int nr);

Now I see Ryan and you prefer:

static inline void swap_free(swp_entry_t entry)
{
	swap_free_nr(entry, 1);
}

Chris slightly favors discouraging the use of swap_free() without the new
parameter; removing swap_free() would address that concern.

It seems that keeping swap_free() and having it call swap_free_nr() with a
default value of 1 has received the most support. To align with
free_swap_and_cache() and free_swap_and_cache_nr(), I'll proceed with the
"static inline" approach in the new version. Please voice any objections
you may have, Christoph, Chris.
> >>  extern void swap_free_nr(swp_entry_t entry, int nr_pages);
> >>  extern void swapcache_free_entries(swp_entry_t *entries, int n);
> >>  extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
> >> @@ -561,10 +560,6 @@ static inline int swapcache_prepare(swp_entry_t swp)
> >>  	return 0;
> >>  }
> >>
> >> -static inline void swap_free(swp_entry_t swp)
> >> -{
> >> -}
> >> -
> >>  static inline void swap_free_nr(swp_entry_t entry, int nr_pages)
> >>  {
> >>  }
> >> diff --git a/kernel/power/swap.c b/kernel/power/swap.c
> >> index 5bc04bfe2db1..6befaa88a342 100644
> >> --- a/kernel/power/swap.c
> >> +++ b/kernel/power/swap.c
> >> @@ -181,7 +181,7 @@ sector_t alloc_swapdev_block(int swap)
> >>  	offset = swp_offset(get_swap_page_of_type(swap));
> >>  	if (offset) {
> >>  		if (swsusp_extents_insert(offset))
> >> -			swap_free(swp_entry(swap, offset));
> >> +			swap_free_nr(swp_entry(swap, offset), 1);
> >>  		else
> >>  			return swapdev_block(swap, offset);
> >>  	}
> >> @@ -200,12 +200,11 @@ void free_all_swap_pages(int swap)
> >>
> >>  	while ((node = swsusp_extents.rb_node)) {
> >>  		struct swsusp_extent *ext;
> >> -		unsigned long offset;
> >>
> >>  		ext = rb_entry(node, struct swsusp_extent, node);
> >>  		rb_erase(node, &swsusp_extents);
> >> -		for (offset = ext->start; offset <= ext->end; offset++)
> >> -			swap_free(swp_entry(swap, offset));
> >> +		swap_free_nr(swp_entry(swap, ext->start),
> >> +			     ext->end - ext->start + 1);
> >>
> >>  		kfree(ext);
> >>  	}
> >> diff --git a/mm/memory.c b/mm/memory.c
> >> index eea6e4984eae..f033eb3528ba 100644
> >> --- a/mm/memory.c
> >> +++ b/mm/memory.c
> >> @@ -4225,7 +4225,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>  	 * We're already holding a reference on the page but haven't mapped it
> >>  	 * yet.
> >>  	 */
> >> -	swap_free(entry);
> >> +	swap_free_nr(entry, 1);
> >>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
> >>  		folio_free_swap(folio);
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index 087a79f1f611..39ec7742acec 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -1865,7 +1865,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>  				goto walk_done_err;
> >>  			}
> >>  			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
> >> -				swap_free(entry);
> >> +				swap_free_nr(entry, 1);
> >>  				set_pte_at(mm, address, pvmw.pte, pteval);
> >>  				goto walk_done_err;
> >>  			}
> >> @@ -1873,7 +1873,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>  			/* See folio_try_share_anon_rmap(): clear PTE first. */
> >>  			if (anon_exclusive &&
> >>  			    folio_try_share_anon_rmap_pte(folio, subpage)) {
> >> -				swap_free(entry);
> >> +				swap_free_nr(entry, 1);
> >>  				set_pte_at(mm, address, pvmw.pte, pteval);
> >>  				goto walk_done_err;
> >>  			}
> >> diff --git a/mm/shmem.c b/mm/shmem.c
> >> index fa2a0ed97507..bfc8a2beb24f 100644
> >> --- a/mm/shmem.c
> >> +++ b/mm/shmem.c
> >> @@ -1836,7 +1836,7 @@ static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
> >>  	 * in shmem_evict_inode().
> >>  	 */
> >>  	shmem_recalc_inode(inode, -1, -1);
> >> -	swap_free(swap);
> >> +	swap_free_nr(swap, 1);
> >>  }
> >>
> >>  /*
> >> @@ -1927,7 +1927,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> >>
> >>  	delete_from_swap_cache(folio);
> >>  	folio_mark_dirty(folio);
> >> -	swap_free(swap);
> >> +	swap_free_nr(swap, 1);
> >>  	put_swap_device(si);
> >>
> >>  	*foliop = folio;
> >> diff --git a/mm/swapfile.c b/mm/swapfile.c
> >> index ec12f2b9d229..ddcd0f24b9a1 100644
> >> --- a/mm/swapfile.c
> >> +++ b/mm/swapfile.c
> >> @@ -1343,19 +1343,6 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
> >>  	swap_range_free(p, offset, 1);
> >>  }
> >>
> >> -/*
> >> - * Caller has made sure that the swap device corresponding to entry
> >> - * is still around or has not been recycled.
> >> - */
> >> -void swap_free(swp_entry_t entry)
> >> -{
> >> -	struct swap_info_struct *p;
> >> -
> >> -	p = _swap_info_get(entry);
> >> -	if (p)
> >> -		__swap_entry_free(p, entry);
> >> -}
> >> -
> >>  static void cluster_swap_free_nr(struct swap_info_struct *sis,
> >>  				 unsigned long offset, int nr_pages)
> >>  {
> >> @@ -1385,6 +1372,10 @@ static void cluster_swap_free_nr(struct swap_info_struct *sis,
> >>  	unlock_cluster_or_swap_info(sis, ci);
> >>  }
> >>
> >> +/*
> >> + * Caller has made sure that the swap device corresponding to entry
> >> + * is still around or has not been recycled.
> >> + */
> >>  void swap_free_nr(swp_entry_t entry, int nr_pages)
> >>  {
> >>  	int nr;
> >> @@ -1930,7 +1921,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
> >>  	new_pte = pte_mkuffd_wp(new_pte);
> >> setpte:
> >>  	set_pte_at(vma->vm_mm, addr, pte, new_pte);
> >> -	swap_free(entry);
> >> +	swap_free_nr(entry, 1);
> >>  out:
> >>  	if (pte)
> >>  		pte_unmap_unlock(pte, ptl);

> --
> Best Regards,
> Huang, Ying

Thanks
Barry