From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <85804484-9973-41a1-a05d-000833285f39@gmail.com>
Date: Thu, 13 Jun 2024 12:37:44 +0100
From: Usama Arif <usamaarif642@gmail.com>
To: Yosry Ahmed
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, shakeel.butt@linux.dev,
 david@redhat.com, ying.huang@intel.com, hughd@google.com,
 willy@infradead.org, nphamcs@gmail.com, chengming.zhou@linux.dev,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: Re: [PATCH v4 1/2] mm: store zero pages to be swapped out in a bitmap
References: <20240612124750.2220726-1-usamaarif642@gmail.com>
 <20240612124750.2220726-2-usamaarif642@gmail.com>
In-Reply-To:
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 12/06/2024 21:13, Yosry Ahmed wrote:
> On Wed, Jun 12, 2024 at 01:43:35PM +0100, Usama Arif wrote:
> [..]
>
> Hi Usama,
>
> A few more comments/questions, sorry for not looking closely earlier.

No worries, thanks for the reviews!

>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index f1e559e216bd..48d8dca0b94b 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -453,6 +453,8 @@ static unsigned int cluster_list_del_first(struct swap_cluster_list *list,
>>  static void swap_cluster_schedule_discard(struct swap_info_struct *si,
>>  		unsigned int idx)
>>  {
>> +	unsigned int i;
>> +
>>  	/*
>>  	 * If scan_swap_map_slots() can't find a free cluster, it will check
>>  	 * si->swap_map directly. To make sure the discarding cluster isn't
>> @@ -461,6 +463,13 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
>>  	 */
>>  	memset(si->swap_map + idx * SWAPFILE_CLUSTER,
>>  			SWAP_MAP_BAD, SWAPFILE_CLUSTER);
>> +	/*
>> +	 * zeromap can see updates from concurrent swap_writepage() and swap_read_folio()
>> +	 * call on other slots, hence use atomic clear_bit for zeromap instead of the
>> +	 * non-atomic bitmap_clear.
>> +	 */
> I don't think this is accurate. swap_read_folio() does not update the
> zeromap. I think the need for an atomic operation here is because we may
> be updating adjacent bits simultaneously, so we may cause lost updates
> otherwise (i.e. corrupting adjacent bits).

Thanks, will change it to "Use atomic clear_bit instead of non-atomic
bitmap_clear to prevent adjacent bits corruption due to simultaneous
writes." in the next revision.

>
>> +	for (i = 0; i < SWAPFILE_CLUSTER; i++)
>> +		clear_bit(idx * SWAPFILE_CLUSTER + i, si->zeromap);
> Could you explain why we need to clear the zeromap here?
>
> swap_cluster_schedule_discard() is called from:
> - swap_free_cluster() -> free_cluster()
>
> This is already covered below.
>
> - swap_entry_free() -> dec_cluster_info_page() -> free_cluster()
>
> Each entry in the cluster should have its zeromap bit cleared in
> swap_entry_free() before the entire cluster is free and we call
> free_cluster().
>
> Am I missing something?

Yes, it looks like this one is not needed, as swap_entry_free() and
swap_free_cluster() would already have cleared the bits. Will remove it.

I had initially started checking which code paths need the zeromap cleared,
but then thought I could do it wherever si->swap_map is cleared or set to
SWAP_MAP_BAD, which is why I added it here.
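To spell out the race that the new comment wording refers to, here is a
rough userspace sketch (illustration only: the helper names are made up,
and it uses the GCC __atomic builtin rather than the kernel's clear_bit()).
A plain clear is a read-modify-write of the whole word, so two CPUs
clearing adjacent bits of the same zeromap word can lose one update;
an atomic clear makes that read-modify-write indivisible:

/* Illustration only: one word standing in for part of si->zeromap. */
static unsigned long zeromap_word;

/*
 * Non-atomic clear: read the whole word, clear one bit, write the whole
 * word back. Two CPUs doing this for different bits of the same word can
 * both read the same old value, and the later store silently undoes the
 * other CPU's clear (a lost update / corrupted adjacent bit).
 */
static void clear_bit_nonatomic(unsigned int nr)
{
	zeromap_word &= ~(1UL << nr);
}

/*
 * Atomic clear: the read-modify-write happens as one indivisible
 * operation, so concurrent clears of adjacent bits in the same word
 * cannot corrupt each other. This is the guarantee clear_bit() gives.
 */
static void clear_bit_atomic(unsigned int nr)
{
	__atomic_fetch_and(&zeromap_word, ~(1UL << nr), __ATOMIC_RELAXED);
}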
>>
>>  	cluster_list_add_tail(&si->discard_clusters, si->cluster_info, idx);
>>
>> @@ -482,7 +491,7 @@ static void __free_cluster(struct swap_info_struct *si, unsigned long idx)
>>  static void swap_do_scheduled_discard(struct swap_info_struct *si)
>>  {
>>  	struct swap_cluster_info *info, *ci;
>> -	unsigned int idx;
>> +	unsigned int idx, i;
>>
>>  	info = si->cluster_info;
>>
>> @@ -498,6 +507,8 @@ static void swap_do_scheduled_discard(struct swap_info_struct *si)
>>  		__free_cluster(si, idx);
>>  		memset(si->swap_map + idx * SWAPFILE_CLUSTER,
>>  				0, SWAPFILE_CLUSTER);
>> +		for (i = 0; i < SWAPFILE_CLUSTER; i++)
>> +			clear_bit(idx * SWAPFILE_CLUSTER + i, si->zeromap);
> Same here. I didn't look into the specific code paths, but shouldn't the
> cluster be unused (and hence its zeromap bits already cleared?).
>

I think this one is needed (or at least very good to have). There are 2 paths:

1) swap_cluster_schedule_discard (clears zeromap) -> swap_discard_work ->
   swap_do_scheduled_discard (clears zeromap)

   Path 1 doesn't need it, as swap_cluster_schedule_discard already clears it.

2) scan_swap_map_slots -> scan_swap_map_try_ssd_cluster ->
   swap_do_scheduled_discard (clears zeromap)

   Path 2 might need it, as I believe the zeromap isn't cleared earlier
   (even though I think it might already be 0).

Even if it is cleared in path 2, I think it is good to keep this one: the
function is swap_do_scheduled_discard, so in case it gets called directly,
or si->discard_work gets scheduled anywhere else in the future, it should
do what the function name suggests, i.e. discard the swap (clear the
zeromap).

>>  		unlock_cluster(ci);
>>  	}
>>  }
>> @@ -1059,9 +1070,12 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
>>  {
>>  	unsigned long offset = idx * SWAPFILE_CLUSTER;
>>  	struct swap_cluster_info *ci;
>> +	unsigned int i;
>>
>>  	ci = lock_cluster(si, offset);
>>  	memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
>> +	for (i = 0; i < SWAPFILE_CLUSTER; i++)
>> +		clear_bit(offset + i, si->zeromap);
>>  	cluster_set_count_flag(ci, 0, 0);
>>  	free_cluster(si, idx);
>>  	unlock_cluster(ci);
>> @@ -1336,6 +1350,7 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
>>  	count = p->swap_map[offset];
>>  	VM_BUG_ON(count != SWAP_HAS_CACHE);
>>  	p->swap_map[offset] = 0;
>> +	clear_bit(offset, p->zeromap);
> I think instead of clearing the zeromap in swap_free_cluster() and here
> separately, we can just do it in swap_range_free(). I suspect this may
> be the only place we really need to clear the zero in the swapfile code.

Sure, we could move it to swap_range_free, but then we should also move the
clearing of swap_map. When it comes to clearing the zeromap, I think it is
generally a good idea to clear it wherever swap_map is cleared.

So the diff over v4 looks like below (it should address all comments,
except that it does not remove the clearing from swap_do_scheduled_discard,
and it moves the si->swap_map/zeromap clearing from
swap_free_cluster/swap_entry_free to swap_range_free):

diff --git a/mm/swapfile.c b/mm/swapfile.c
index 48d8dca0b94b..39cad0d09525 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -463,13 +463,6 @@ static void swap_cluster_schedule_discard(struct swap_info_struct *si,
 	 */
 	memset(si->swap_map + idx * SWAPFILE_CLUSTER,
 			SWAP_MAP_BAD, SWAPFILE_CLUSTER);
-	/*
-	 * zeromap can see updates from concurrent swap_writepage() and swap_read_folio()
-	 * call on other slots, hence use atomic clear_bit for zeromap instead of the
-	 * non-atomic bitmap_clear.
-	 */
-	for (i = 0; i < SWAPFILE_CLUSTER; i++)
-		clear_bit(idx * SWAPFILE_CLUSTER + i, si->zeromap);
 
 	cluster_list_add_tail(&si->discard_clusters, si->cluster_info, idx);
 
@@ -758,6 +751,15 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	unsigned long begin = offset;
 	unsigned long end = offset + nr_entries - 1;
 	void (*swap_slot_free_notify)(struct block_device *, unsigned long);
+	unsigned int i;
+
+	memset(si->swap_map + offset, 0, nr_entries);
+	/*
+	 * Use atomic clear_bit operations only on zeromap instead of non-atomic
+	 * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
+	 */
+	for (i = 0; i < nr_entries; i++)
+		clear_bit(offset + i, si->zeromap);
 
 	if (offset < si->lowest_bit)
 		si->lowest_bit = offset;
@@ -1070,12 +1072,8 @@ static void swap_free_cluster(struct swap_info_struct *si, unsigned long idx)
 {
 	unsigned long offset = idx * SWAPFILE_CLUSTER;
 	struct swap_cluster_info *ci;
-	unsigned int i;
 
 	ci = lock_cluster(si, offset);
-	memset(si->swap_map + offset, 0, SWAPFILE_CLUSTER);
-	for (i = 0; i < SWAPFILE_CLUSTER; i++)
-		clear_bit(offset + i, si->zeromap);
 	cluster_set_count_flag(ci, 0, 0);
 	free_cluster(si, idx);
 	unlock_cluster(ci);
@@ -1349,8 +1347,6 @@ static void swap_entry_free(struct swap_info_struct *p, swp_entry_t entry)
 	ci = lock_cluster(p, offset);
 	count = p->swap_map[offset];
 	VM_BUG_ON(count != SWAP_HAS_CACHE);
-	p->swap_map[offset] = 0;
-	clear_bit(offset, p->zeromap);
 	dec_cluster_info_page(p, p->cluster_info, offset);
 	unlock_cluster(ci);
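
For reference, with the hunk above applied, the relevant start of
swap_range_free() would look roughly like this (just a sketch assembled
from the diff, untested; the rest of the function is unchanged):

static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
			    unsigned int nr_entries)
{
	unsigned long begin = offset;
	unsigned long end = offset + nr_entries - 1;
	void (*swap_slot_free_notify)(struct block_device *, unsigned long);
	unsigned int i;

	/* Clear swap_map and zeromap together for the whole freed range. */
	memset(si->swap_map + offset, 0, nr_entries);
	/*
	 * Use atomic clear_bit operations only on zeromap instead of non-atomic
	 * bitmap_clear to prevent adjacent bits corruption due to simultaneous writes.
	 */
	for (i = 0; i < nr_entries; i++)
		clear_bit(offset + i, si->zeromap);

	if (offset < si->lowest_bit)
		si->lowest_bit = offset;
	/* ... rest of swap_range_free() unchanged ... */
}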