Date: Fri, 22 Mar 2024 21:55:43 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Yosry Ahmed
Cc: Barry Song <21cnbao@gmail.com>, chengming.zhou@linux.dev, nphamcs@gmail.com,
	akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Zhongkun He
Subject: Re: [RFC PATCH] mm: add folio in swapcache if swapin from zswap
Message-ID: <20240323015543.GB448621@cmpxchg.org>
References: <20240322163939.17846-1-chengming.zhou@linux.dev>
	<20240322234826.GA448621@cmpxchg.org>
On Fri, Mar 22, 2024 at 05:14:37PM -0700, Yosry Ahmed wrote:
> [..]
> > > > I don't think we want to stop doing exclusive loads in zswap due to this
> > > > interaction with zram, which shouldn't be common.
> > > >
> > > > I think we can solve this by just writing the folio back to zswap upon
> > > > failure as I mentioned.
> > >
> > > Instead of storing again, can we avoid invalidating the entry in the
> > > first place if the load is not "exclusive"?
> > >
> > > The reason for exclusive loads is that the ownership is transferred to
> > > the swapcache, so there is no point in keeping our copy. With an
> > > optimistic read that doesn't transfer ownership, this doesn't
> > > apply. And we can easily tell inside zswap_load() if we're dealing
> > > with a swapcache read or not by testing the folio.
> > >
> > > The synchronous read already has to pin the swp_entry_t to be safe,
> > > using swapcache_prepare(). That blocks __read_swap_cache_async(),
> > > which means no other (exclusive) loads and no invalidates can occur.
> > >
> > > The zswap entry is freed during the regular swap_free() path, which
> > > the sync fault calls on success. Otherwise we keep it.
> >
> > I thought about this, but I was particularly worried about the need to
> > bring back the refcount that was removed when we switched to only
> > supporting exclusive loads:
> > https://lore.kernel.org/lkml/20240201-b4-zswap-invalidate-entry-v2-6-99d4084260a0@bytedance.com/
> >
> > It seems to me that we don't need it, because swap_free() will free
> > the entry as you mentioned before anyone else has the chance to load
> > it or invalidate it. Writeback used to grab a reference as well, but
> > it removes the entry from the tree anyway, takes full ownership of
> > it, and then frees it, so that should be okay.
> >
> > It makes me nervous though, to be honest. For example, not long ago
> > swap_free() didn't call zswap_invalidate() directly (it used to happen
> > in swap slots cache draining). Without it, a subsequent load could
> > race with writeback without refcount protection, right? We would need
> > to make sure to backport 0827a1fb143f ("mm/zswap: invalidate zswap
> > entry when swap entry free") with the fix to stable, for instance.
> >
> > I can't find a problem with your diff, but it just makes me nervous to
> > have non-exclusive loads without a refcount.
> > >
> > > diff --git a/mm/zswap.c b/mm/zswap.c
> > > index 535c907345e0..686364a6dd86 100644
> > > --- a/mm/zswap.c
> > > +++ b/mm/zswap.c
> > > @@ -1622,6 +1622,7 @@ bool zswap_load(struct folio *folio)
> > >  	swp_entry_t swp = folio->swap;
> > >  	pgoff_t offset = swp_offset(swp);
> > >  	struct page *page = &folio->page;
> > > +	bool swapcache = folio_test_swapcache(folio);
> > >  	struct zswap_tree *tree = swap_zswap_tree(swp);
> > >  	struct zswap_entry *entry;
> > >  	u8 *dst;
> > > @@ -1634,7 +1635,8 @@ bool zswap_load(struct folio *folio)
> > >  		spin_unlock(&tree->lock);
> > >  		return false;
> > >  	}
> > > -	zswap_rb_erase(&tree->rbroot, entry);
> > > +	if (swapcache)
> > > +		zswap_rb_erase(&tree->rbroot, entry);
>
> On second thought, if we don't remove the entry from the tree here,
> writeback could free the entry from under us after we drop the lock
> here, right?

The sync-swapin does swapcache_prepare() and holds SWAP_HAS_CACHE, so
racing writeback would loop on the -EEXIST in __read_swap_cache_async().
(Or, if writeback wins the race, sync-swapin fails on
swapcache_prepare() instead and bails on the fault.)

This isn't coincidental. The sync-swapin needs to, and does, serialize
against the swap entry moving into swapcache or being invalidated for
it to be safe. Which is the same requirement that zswap ops have.
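For the archives, here is a rough sketch, distilled from the above, of
the ordering the synchronous (swapcache-bypassing) swapin path relies
on. The function sync_swapin() and its error handling are hypothetical
simplifications, not the actual do_swap_page() code; the calls it makes
(swapcache_prepare(), zswap_load(), swap_free(), swapcache_clear()) are
the real 6.8-era APIs:

	/*
	 * Sketch only: hypothetical condensation of the sync swapin
	 * path, to illustrate why a non-exclusive zswap load is safe
	 * here without a refcount.
	 */
	static vm_fault_t sync_swapin(struct swap_info_struct *si,
				      swp_entry_t entry, struct folio *folio)
	{
		/*
		 * Pin the swap slot with SWAP_HAS_CACHE. If writeback
		 * (or another fault) already won the race, bail and let
		 * the fault retry. Once this succeeds,
		 * __read_swap_cache_async() -- and thus zswap writeback
		 * -- keeps hitting -EEXIST until we're done, so the
		 * zswap entry cannot be freed or moved under us.
		 */
		if (swapcache_prepare(entry))
			return VM_FAULT_RETRY;

		/*
		 * The folio is private, not in the swapcache. With the
		 * diff above, zswap_load() sees
		 * folio_test_swapcache() == false and leaves the tree
		 * entry in place instead of erasing it.
		 */
		if (!zswap_load(folio)) {
			/* Miss: read the folio from the backing device. */
		}

		/* ... install the pte ... */

		/*
		 * On success, drop the swap slot. swap_free() calls
		 * zswap_invalidate(), which is what finally frees the
		 * zswap copy for the non-swapcache fault.
		 */
		swap_free(entry);
		swapcache_clear(si, entry);	/* drop SWAP_HAS_CACHE */
		return 0;
	}

The point of the sketch is the bracket: swapcache_prepare() before
touching zswap, swap_free()/swapcache_clear() after, with every zswap
op on the entry happening inside that pinned window.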