From mboxrd@z Thu Jan  1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Thu, 5 Sep 2024 11:44:03 +1200
Subject: Re: [PATCH v4 1/2] mm: store zero pages to be swapped out in a bitmap
To: Usama Arif
Cc: akpm@linux-foundation.org, chengming.zhou@linux.dev, david@redhat.com,
	hannes@cmpxchg.org, hughd@google.com, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, nphamcs@gmail.com,
	shakeel.butt@linux.dev, willy@infradead.org, ying.huang@intel.com,
	yosryahmed@google.com, hanchuanhua@oppo.com
In-Reply-To: <7a91ff31-1f56-4d0c-a4a7-a305331ba97a@gmail.com>
References: <20240612124750.2220726-2-usamaarif642@gmail.com>
	<20240904055522.2376-1-21cnbao@gmail.com>
	<7a91ff31-1f56-4d0c-a4a7-a305331ba97a@gmail.com>
Content-Type: text/plain; charset="UTF-8"
On Wed, Sep 4, 2024 at 11:14 PM Usama Arif wrote:
>
>
>
> On 04/09/2024 06:55, Barry Song wrote:
> > On Thu, Jun 13, 2024 at 12:48 AM Usama Arif wrote:
> >>
> >> Approximately 10-20% of pages to be swapped out are zero pages [1].
> >> Rather than reading/writing these pages to flash resulting
> >> in increased I/O and flash wear, a bitmap can be used to mark these
> >> pages as zero at write time, and the pages can be filled at
> >> read time if the bit corresponding to the page is set.
> >> With this patch, NVMe writes in Meta server fleet decreased
> >> by almost 10% with conventional swap setup (zswap disabled).
> >>
> >> [1] https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
> >>
> >> Signed-off-by: Usama Arif
> >> ---
> >>  include/linux/swap.h |   1 +
> >>  mm/page_io.c         | 114 +++++++++++++++++++++++++++++++++++++++++-
> >>  mm/swapfile.c        |  24 ++++++++-
> >>  3 files changed, 136 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/include/linux/swap.h b/include/linux/swap.h
> >> index a11c75e897ec..e88563978441 100644
> >> --- a/include/linux/swap.h
> >> +++ b/include/linux/swap.h
> >> @@ -299,6 +299,7 @@ struct swap_info_struct {
> >>         signed char type;               /* strange name for an index */
> >>         unsigned int max;               /* extent of the swap_map */
> >>         unsigned char *swap_map;        /* vmalloc'ed array of usage counts */
> >> +       unsigned long *zeromap;         /* vmalloc'ed bitmap to track zero pages */
> >>         struct swap_cluster_info *cluster_info; /* cluster info. Only for SSD */
> >>         struct swap_cluster_list free_clusters; /* free clusters list */
> >>         unsigned int lowest_bit;        /* index of first free in swap_map */
> >> diff --git a/mm/page_io.c b/mm/page_io.c
> >> index a360857cf75d..39fc3919ce15 100644
> >> --- a/mm/page_io.c
> >> +++ b/mm/page_io.c
> >> @@ -172,6 +172,88 @@ int generic_swapfile_activate(struct swap_info_struct *sis,
> >>         goto out;
> >>  }
> >>
> >> +static bool is_folio_page_zero_filled(struct folio *folio, int i)
> >> +{
> >> +       unsigned long *data;
> >> +       unsigned int pos, last_pos = PAGE_SIZE / sizeof(*data) - 1;
> >> +       bool ret = false;
> >> +
> >> +       data = kmap_local_folio(folio, i * PAGE_SIZE);
> >> +       if (data[last_pos])
> >> +               goto out;
> >> +       for (pos = 0; pos < PAGE_SIZE / sizeof(*data); pos++) {
> >> +               if (data[pos])
> >> +                       goto out;
> >> +       }
> >> +       ret = true;
> >> +out:
> >> +       kunmap_local(data);
> >> +       return ret;
> >> +}
> >> +
> >> +static bool is_folio_zero_filled(struct folio *folio)
> >> +{
> >> +       unsigned int i;
> >> +
> >> +       for (i = 0; i < folio_nr_pages(folio); i++) {
> >> +               if (!is_folio_page_zero_filled(folio, i))
> >> +                       return false;
> >> +       }
> >> +       return true;
> >> +}
> >> +
> >> +static void folio_zero_fill(struct folio *folio)
> >> +{
> >> +       unsigned int i;
> >> +
> >> +       for (i = 0; i < folio_nr_pages(folio); i++)
> >> +               clear_highpage(folio_page(folio, i));
> >> +}
> >> +
> >> +static void swap_zeromap_folio_set(struct folio *folio)
> >> +{
> >> +       struct swap_info_struct *sis = swp_swap_info(folio->swap);
> >> +       swp_entry_t entry;
> >> +       unsigned int i;
> >> +
> >> +       for (i = 0; i < folio_nr_pages(folio); i++) {
> >> +               entry = page_swap_entry(folio_page(folio, i));
> >> +               set_bit(swp_offset(entry), sis->zeromap);
> >> +       }
> >> +}
> >> +
> >> +static void swap_zeromap_folio_clear(struct folio *folio)
> >> +{
> >> +       struct swap_info_struct *sis = swp_swap_info(folio->swap);
> >> +       swp_entry_t entry;
> >> +       unsigned int i;
> >> +
> >> +       for (i = 0; i < folio_nr_pages(folio); i++) {
> >> +               entry = page_swap_entry(folio_page(folio, i));
> >> +               clear_bit(swp_offset(entry), sis->zeromap);
> >> +       }
> >> +}
> >> +
> >> +/*
> >> + * Return the index of the first subpage which is not zero-filled
> >> + * according to swap_info_struct->zeromap.
> >> + * If all pages are zero-filled according to zeromap, it will return
> >> + * folio_nr_pages(folio).
> >> + */
> >> +static unsigned int swap_zeromap_folio_test(struct folio *folio)
> >> +{
> >> +       struct swap_info_struct *sis = swp_swap_info(folio->swap);
> >> +       swp_entry_t entry;
> >> +       unsigned int i;
> >> +
> >> +       for (i = 0; i < folio_nr_pages(folio); i++) {
> >> +               entry = page_swap_entry(folio_page(folio, i));
> >> +               if (!test_bit(swp_offset(entry), sis->zeromap))
> >> +                       return i;
> >> +       }
> >> +       return i;
> >> +}
> >> +
> >>  /*
> >>   * We may have stale swap cache pages in memory: notice
> >>   * them here and get rid of the unnecessary final write.
> >> @@ -195,6 +277,13 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
> >>                 folio_unlock(folio);
> >>                 return ret;
> >>         }
> >> +
> >> +       if (is_folio_zero_filled(folio)) {
> >> +               swap_zeromap_folio_set(folio);
> >> +               folio_unlock(folio);
> >> +               return 0;
> >> +       }
> >> +       swap_zeromap_folio_clear(folio);
> >>         if (zswap_store(folio)) {
> >>                 folio_start_writeback(folio);
> >>                 folio_unlock(folio);
> >> @@ -426,6 +515,26 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> >>         mempool_free(sio, sio_pool);
> >>  }
> >>
> >> +static bool swap_read_folio_zeromap(struct folio *folio)
> >> +{
> >> +       unsigned int idx = swap_zeromap_folio_test(folio);
> >> +
> >> +       if (idx == 0)
> >> +               return false;
> >> +
> >> +       /*
> >> +        * Swapping in a large folio that is partially in the zeromap is not
> >> +        * currently handled. Return true without marking the folio uptodate so
> >> +        * that an IO error is emitted (e.g. do_swap_page() will sigbus).
> >> +        */
> >> +       if (WARN_ON_ONCE(idx < folio_nr_pages(folio)))
> >> +               return true;
> >
> > Hi Usama, Yosry,
> >
> > I feel the warning is wrong, as we could have the case where idx==0
> > is not in the zeromap but idx==1 is. idx == 0 doesn't necessarily
> > mean we should return false.
> >
> > What about the below change, which both fixes the warning and unblocks
> > large folios swap-in?
> >
>
> Hi Barry,
>
> I remembered, when resending the zeromap series, the comment Yosry had
> made earlier, but checked that the mTHP swap-in was not in mm-unstable.
> I should have checked the mailing list and commented!
>
> I have not tested the below diff yet (will do in a few hours). But there
> might be a small issue with it. Have commented inline.
>
> > diff --git a/mm/page_io.c b/mm/page_io.c
> > index 4bc77d1c6bfa..7d7ff7064e2b 100644
> > --- a/mm/page_io.c
> > +++ b/mm/page_io.c
> > @@ -226,26 +226,6 @@ static void swap_zeromap_folio_clear(struct folio *folio)
> >  }
> >
> > -/*
> > - * Return the index of the first subpage which is not zero-filled
> > - * according to swap_info_struct->zeromap.
> > - * If all pages are zero-filled according to zeromap, it will return
> > - * folio_nr_pages(folio).
> > - */
> > -static unsigned int swap_zeromap_folio_test(struct folio *folio)
> > -{
> > -       struct swap_info_struct *sis = swp_swap_info(folio->swap);
> > -       swp_entry_t entry;
> > -       unsigned int i;
> > -
> > -       for (i = 0; i < folio_nr_pages(folio); i++) {
> > -               entry = page_swap_entry(folio_page(folio, i));
> > -               if (!test_bit(swp_offset(entry), sis->zeromap))
> > -                       return i;
> > -       }
> > -       return i;
> > -}
> > -
> >  /*
> >   * We may have stale swap cache pages in memory: notice
> >   * them here and get rid of the unnecessary final write.
> > @@ -524,9 +504,10 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> >
> >  static bool swap_read_folio_zeromap(struct folio *folio)
> >  {
> > -       unsigned int idx = swap_zeromap_folio_test(folio);
> > +       unsigned int nr_pages = folio_nr_pages(folio);
> > +       unsigned int nr = swap_zeromap_entries_count(folio->swap, nr_pages);
> >
> > -       if (idx == 0)
> > +       if (nr == 0)
> >                 return false;
> >
> >         /*
> > @@ -534,7 +515,7 @@ static bool swap_read_folio_zeromap(struct folio *folio)
> >          * currently handled. Return true without marking the folio uptodate so
> >          * that an IO error is emitted (e.g. do_swap_page() will sigbus).
> >          */
> > -       if (WARN_ON_ONCE(idx < folio_nr_pages(folio)))
> > +       if (WARN_ON_ONCE(nr < nr_pages))
> >                 return true;
> >
> >         folio_zero_range(folio, 0, folio_size(folio));
> > diff --git a/mm/swap.h b/mm/swap.h
> > index f8711ff82f84..2d59e9d89e95 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -80,6 +80,32 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
> >  {
> >         return swp_swap_info(folio->swap)->flags;
> >  }
> > +
> > +/*
> > + * Return the number of entries which are zero-filled according to
> > + * swap_info_struct->zeromap. It isn't precise if the return value
> > + * is larger than 0 and smaller than nr, to avoid extra iterations.
> > + * In this case, it means the entries don't have a consistent zeromap.
> > + */
> > +static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry, int nr)
> > +{
> > +       struct swap_info_struct *sis = swp_swap_info(entry);
> > +       unsigned long offset = swp_offset(entry);
> > +       unsigned int type = swp_type(entry);
> > +       unsigned int n = 0;
> > +
> > +       for (int i = 0; i < nr; i++) {
> > +               entry = swp_entry(type, offset + i);
> > +               if (test_bit(offset + i, sis->zeromap)) {
>
> Should this be if (test_bit(swp_offset(entry), sis->zeromap))?
>

Well, I feel I have a much cheaper way to implement this, which can avoid
the iteration entirely, even in your original code:

+/*
+ * Return the number of entries which are zero-filled according to
+ * swap_info_struct->zeromap. It isn't precise if the return value
+ * is 1 for nr > 1. In this case, it means the entries have an
+ * inconsistent zeromap.
+ */
+static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry, int nr)
+{
+       struct swap_info_struct *sis = swp_swap_info(entry);
+       unsigned long start = swp_offset(entry);
+       unsigned long end = start + nr;
+       unsigned long idx = 0;
+
+       idx = find_next_bit(sis->zeromap, end, start);
+       if (idx == end)
+               return 0;
+       if (idx > start)
+               return 1;
+       return nr;
+}
+

> Also, are you going to use this in alloc_swap_folio?
> You mentioned above that this unblocks large folios swap-in, but I don't see
> it in the diff here. I am guessing there is some change in alloc_swap_folio that
> uses swap_zeromap_entries_count?
>
> Thanks
> Usama
>
> > +                       if (i != n)
> > +                               return i;
> > +                       n++;
> > +               }
> > +       }
> > +
> > +       return n;
> > +}
> > +
> >  #else /* CONFIG_SWAP */
> >  struct swap_iocb;
> >  static inline void swap_read_folio(struct folio *folio, struct swap_iocb **plug)
> > @@ -171,6 +197,11 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
> >  {
> >         return 0;
> >  }
> > +
> > +static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry, int nr)
> > +{
> > +       return 0;
> > +}
> >  #endif /* CONFIG_SWAP */
> >
> >  #endif /* _MM_SWAP_H */

Thanks
Barry