Date: Thu, 30 May 2024 08:27:15 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Usama Arif <usamaarif642@gmail.com>
Cc: akpm@linux-foundation.org, yosryahmed@google.com, nphamcs@gmail.com,
	chengming.zhou@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@meta.com,
	Hugh Dickins, Huang Ying
Subject: Re: [PATCH 1/2] mm: store zero pages to be swapped out in a bitmap
Message-ID: <20240530122715.GB1222079@cmpxchg.org>
References: <20240530102126.357438-1-usamaarif642@gmail.com>
 <20240530102126.357438-2-usamaarif642@gmail.com>
In-Reply-To: <20240530102126.357438-2-usamaarif642@gmail.com>

On Thu, May 30, 2024 at 11:19:07AM +0100, Usama Arif wrote:
> Approximately 10-20% of pages to be swapped out are zero pages [1].
> Rather than reading/writing these pages to flash resulting
> in increased I/O and flash wear, a bitmap can be used to mark these
> pages as zero at write time, and the pages can be filled at
> read time if the bit corresponding to the page is set.
> With this patch, NVMe writes in Meta server fleet decreased
> by almost 10% with conventional swap setup (zswap disabled).
>
> [1]https://lore.kernel.org/all/20171018104832epcms5p1b2232e2236258de3d03d1344dde9fce0@epcms5p1/
>
> Signed-off-by: Usama Arif <usamaarif642@gmail.com>

This is awesome.

> ---
>  include/linux/swap.h |  1 +
>  mm/page_io.c         | 86 ++++++++++++++++++++++++++++++++++++++++++--
>  mm/swapfile.c        | 10 ++++++
>  3 files changed, 95 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index a11c75e897ec..e88563978441 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -299,6 +299,7 @@ struct swap_info_struct {
>  	signed char	type;		/* strange name for an index */
>  	unsigned int	max;		/* extent of the swap_map */
>  	unsigned char *swap_map;	/* vmalloc'ed array of usage counts */
> +	unsigned long *zeromap;		/* vmalloc'ed bitmap to track zero pages */

One bit per swap slot, so 1 / (4096 * 8) = 0.003% static memory
overhead for configured swap space. That seems reasonable for what
appears to be a fairly universal 10% reduction in swap IO.
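(Sanity-checking that number: a 64G swapfile has 16M 4k slots, so the
zeromap comes to 16M bits = 2M, and 2M / 64G is indeed ~0.003%.)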
An alternative implementation would be to reserve a bit in
swap_map. This would be no overhead at idle, but would force
continuation counts earlier on heavily shared page tables, and AFAICS
would get complicated in terms of locking, whereas this one is pretty
simple (atomic ops protect the map, swapcache lock protects the bit).

So I prefer this version. But a few comments below:

>  	struct swap_cluster_info *cluster_info; /* cluster info. Only for SSD */
>  	struct swap_cluster_list free_clusters; /* free clusters list */
>  	unsigned int lowest_bit;	/* index of first free in swap_map */
> diff --git a/mm/page_io.c b/mm/page_io.c
> index a360857cf75d..ab043b4ad577 100644
> --- a/mm/page_io.c
> +++ b/mm/page_io.c
> @@ -172,6 +172,77 @@ int generic_swapfile_activate(struct swap_info_struct *sis,
>  	goto out;
>  }
>
> +static bool is_folio_page_zero_filled(struct folio *folio, int i)
> +{
> +	unsigned long *page;
> +	unsigned int pos;
> +	bool ret = false;
> +
> +	page = kmap_local_folio(folio, i * PAGE_SIZE);
> +	for (pos = 0; pos < PAGE_SIZE / sizeof(*page); pos++) {
> +		if (page[pos] != 0)
> +			goto out;
> +	}
> +	ret = true;
> +out:
> +	kunmap_local(page);
> +	return ret;
> +}
> +
> +static bool is_folio_zero_filled(struct folio *folio)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < folio_nr_pages(folio); i++) {
> +		if (!is_folio_page_zero_filled(folio, i))
> +			return false;
> +	}
> +	return true;
> +}
> +
> +static void folio_page_zero_fill(struct folio *folio, int i)
> +{
> +	unsigned long *page;
> +
> +	page = kmap_local_folio(folio, i * PAGE_SIZE);
> +	memset_l(page, 0, PAGE_SIZE / sizeof(unsigned long));
> +	kunmap_local(page);
> +}
> +
> +static void folio_zero_fill(struct folio *folio)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < folio_nr_pages(folio); i++)
> +		folio_page_zero_fill(folio, i);
> +}
> +
> +static void swap_zeromap_folio_set(struct folio *folio)
> +{
> +	struct swap_info_struct *sis = swp_swap_info(folio->swap);
> +	swp_entry_t entry;
> +	unsigned int i;
> +
> +	for (i = 0; i < folio_nr_pages(folio); i++) {
> +		entry = page_swap_entry(folio_page(folio, i));
> +		bitmap_set(sis->zeromap, swp_offset(entry), 1);

This should be set_bit(). bitmap_set() isn't atomic, so it would
corrupt the map on concurrent swapping of other zero pages. And you
don't need a range op here anyway.

> +	}
> +}
> +
> +static bool swap_zeromap_folio_test(struct folio *folio)
> +{
> +	struct swap_info_struct *sis = swp_swap_info(folio->swap);
> +	swp_entry_t entry;
> +	unsigned int i;
> +
> +	for (i = 0; i < folio_nr_pages(folio); i++) {
> +		entry = page_swap_entry(folio_page(folio, i));
> +		if (!test_bit(swp_offset(entry), sis->zeromap))
> +			return false;
> +	}
> +	return true;
> +}
> +
>  /*
>   * We may have stale swap cache pages in memory: notice
>   * them here and get rid of the unnecessary final write.
> @@ -195,6 +266,14 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
>  		folio_unlock(folio);
>  		return ret;
>  	}
> +
> +	if (is_folio_zero_filled(folio)) {
> +		swap_zeromap_folio_set(folio);
> +		folio_start_writeback(folio);
> +		folio_unlock(folio);
> +		folio_end_writeback(folio);
> +		return 0;
> +	}

You need to clear the zeromap bit in the else branch.

Userspace can change the contents of a swapcached page, which
redirties the page and forces an overwrite of the slot when the page
is reclaimed again. So if the page goes from zeroes to something else
and then gets reclaimed again, a subsequent swapin would read the
stale zeroes.
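To spell out what I mean for both points - completely untested sketch,
and swap_zeromap_folio_clear() is a helper name I'm making up here to
mirror swap_zeromap_folio_set():

	static void swap_zeromap_folio_set(struct folio *folio)
	{
		struct swap_info_struct *sis = swp_swap_info(folio->swap);
		swp_entry_t entry;
		unsigned int i;

		for (i = 0; i < folio_nr_pages(folio); i++) {
			entry = page_swap_entry(folio_page(folio, i));
			/* Atomic per-bit op, safe against concurrent updates */
			set_bit(swp_offset(entry), sis->zeromap);
		}
	}

	/* Hypothetical helper mirroring swap_zeromap_folio_set() */
	static void swap_zeromap_folio_clear(struct folio *folio)
	{
		struct swap_info_struct *sis = swp_swap_info(folio->swap);
		swp_entry_t entry;
		unsigned int i;

		for (i = 0; i < folio_nr_pages(folio); i++) {
			entry = page_swap_entry(folio_page(folio, i));
			clear_bit(swp_offset(entry), sis->zeromap);
		}
	}

and then in swap_writepage():

	if (is_folio_zero_filled(folio)) {
		swap_zeromap_folio_set(folio);
		folio_start_writeback(folio);
		folio_unlock(folio);
		folio_end_writeback(folio);
		return 0;
	}
	/*
	 * The slot may still have its zeromap bit set from an earlier
	 * writeback of this page, before userspace redirtied it. Clear
	 * it, or a later swapin would see zeroes instead of the data
	 * written out below.
	 */
	swap_zeromap_folio_clear(folio);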
>  	if (zswap_store(folio)) {
>  		folio_start_writeback(folio);
>  		folio_unlock(folio);
> @@ -515,8 +594,11 @@ void swap_read_folio(struct folio *folio, bool synchronous,
>  		psi_memstall_enter(&pflags);
>  	}
>  	delayacct_swapin_start();
> -
> -	if (zswap_load(folio)) {
> +	if (swap_zeromap_folio_test(folio)) {
> +		folio_zero_fill(folio);
> +		folio_mark_uptodate(folio);
> +		folio_unlock(folio);
> +	} else if (zswap_load(folio)) {
>  		folio_mark_uptodate(folio);
>  		folio_unlock(folio);
>  	} else if (data_race(sis->flags & SWP_FS_OPS)) {
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index f1e559e216bd..3f00a1cce5ba 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -461,6 +461,7 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
>  	 */
>  	memset(si->swap_map + idx * SWAPFILE_CLUSTER,
>  			SWAP_MAP_BAD, SWAPFILE_CLUSTER);
> +	bitmap_clear(si->zeromap, idx * SWAPFILE_CLUSTER, SWAPFILE_CLUSTER);

AFAICS this needs to be atomic as well. The swap_info and cluster are
locked, which stabilizes si->swap_map, but zeromap can see updates
from concurrent swap_writepage() and swap_read_folio() on other
slots. I think you need to use a loop over clear_bit(). Please also
add a comment with the above.

>
>  	cluster_list_add_tail(&si->discard_clusters, si->cluster_info, idx);
>
> @@ -498,6 +499,7 @@ static void swap_do_scheduled_discard(struct swap_info_struct *si)
>  		__free_cluster(si, idx);
>  		memset(si->swap_map + idx * SWAPFILE_CLUSTER,
>  				0, SWAPFILE_CLUSTER);
> +		bitmap_clear(si->zeromap, idx * SWAPFILE_CLUSTER, SWAPFILE_CLUSTER);

Same.

>  		unlock_cluster(ci);
>  	}
>  }
> @@ -1336,6 +1338,7 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
>  	count = p->swap_map[offset];
>  	VM_BUG_ON(count != SWAP_HAS_CACHE);
>  	p->swap_map[offset] = 0;
> +	bitmap_clear(p->zeromap, offset, 1);

This too needs to be atomic, IOW clear_bit().

Otherwise this looks good to me.

>  	dec_cluster_info_page(p, p->cluster_info, offset);
>  	unlock_cluster(ci);
>
> @@ -2597,6 +2600,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
>  	free_percpu(p->cluster_next_cpu);
>  	p->cluster_next_cpu = NULL;
>  	vfree(swap_map);
> +	bitmap_free(p->zeromap);
>  	kvfree(cluster_info);
>  	/* Destroy swap account information */
>  	swap_cgroup_swapoff(p->type);
> @@ -3123,6 +3127,12 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
>  		goto bad_swap_unlock_inode;
>  	}
>
> +	p->zeromap = bitmap_zalloc(maxpages, GFP_KERNEL);
> +	if (!p->zeromap) {
> +		error = -ENOMEM;
> +		goto bad_swap_unlock_inode;
> +	}
> +
>  	if (p->bdev && bdev_stable_writes(p->bdev))
>  		p->flags |= SWP_STABLE_WRITES;
>
> --
> 2.43.0
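For the swapfile.c sites, I'm thinking of something along these lines
(again untested, and swap_zeromap_clear_range() is a name I'm
inventing):

	/*
	 * swap_map is stabilized by the swap_info and cluster locks,
	 * but zeromap can see concurrent updates from swap_writepage()
	 * on other slots, so a non-atomic bitmap_clear() could corrupt
	 * neighboring bits. Clear the range with atomic per-bit ops.
	 */
	static void swap_zeromap_clear_range(struct swap_info_struct *si,
					     unsigned long start,
					     unsigned long nr)
	{
		unsigned long offset;

		for (offset = start; offset < start + nr; offset++)
			clear_bit(offset, si->zeromap);
	}

with the two cluster paths calling

	swap_zeromap_clear_range(si, idx * SWAPFILE_CLUSTER, SWAPFILE_CLUSTER);

and swap_entry_free() using a plain clear_bit(offset, p->zeromap).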