Date: Wed, 10 Aug 2022 19:28:33 -0700 (PDT)
From: Hugh Dickins
To: "Matthew Wilcox (Oracle)"
cc: linux-mm@kvack.org, hughd@google.com, Andrew Morton
Subject: Re: [PATCH 18/59] mm: Convert do_swap_page()'s swapcache variable to a folio
In-Reply-To: <20220808193430.3378317-19-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org> <20220808193430.3378317-19-willy@infradead.org>

On Mon, 8 Aug 2022, Matthew Wilcox (Oracle) wrote:

> The 'swapcache' variable is used to track whether the page is from the
> swapcache or not.  It can do this equally well by being the folio of
> the page rather than the page itself, and this saves a number of calls
> to compound_head().
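
For anyone following the folio conversion from a distance: the saving
comes from each page-based helper re-deriving the head page internally,
while page_folio() does that derivation once and the folio calls then
start from the head.  A rough before/after sketch of the same cleanup
sequence (illustrative only, not from the patch; the two function names
are invented for the comparison):

	/* Page API: each helper derives the head page internally. */
	static void release_swapcache_page(struct page *page)
	{
		if (!PageKsm(page) && !PageLRU(page))	/* two head lookups */
			lru_add_drain();
		unlock_page(page);			/* a third */
		put_page(page);				/* a fourth */
	}

	/* Folio API: derive the head once, then operate on the folio. */
	static void release_swapcache_folio(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* one head lookup */

		if (!folio_test_ksm(folio) && !folio_test_lru(folio))
			lru_add_drain();
		folio_unlock(folio);
		folio_put(folio);
	}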
> 
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  mm/memory.c | 32 ++++++++++++++++----------------
>  1 file changed, 16 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index f172b148e29b..471102f0cbf2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
>  vm_fault_t do_swap_page(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	struct folio *folio;
> -	struct page *page = NULL, *swapcache;
> +	struct folio *swapcache, *folio = NULL;
> +	struct page *page;
>  	struct swap_info_struct *si = NULL;
>  	rmap_t rmap_flags = RMAP_NONE;
>  	bool exclusive = false;
> @@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		goto out;
>  
>  	page = lookup_swap_cache(entry, vma, vmf->address);
> -	swapcache = page;
>  	if (page)
>  		folio = page_folio(page);
> +	swapcache = folio;
>  
> -	if (!page) {
> +	if (!folio) {
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
>  			/* skip swapcache */
> @@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		} else {
>  			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
>  						vmf);
> -			swapcache = page;
>  			if (page)
>  				folio = page_folio(page);
> +			swapcache = folio;
>  		}
>  
> -		if (!page) {
> +		if (!folio) {
>  			/*
>  			 * Back out if somebody else faulted in this pte
>  			 * while we released the pte lock.
> @@ -3856,10 +3856,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		page = ksm_might_need_to_copy(page, vma, vmf->address);
>  		if (unlikely(!page)) {
>  			ret = VM_FAULT_OOM;
> -			page = swapcache;
>  			goto out_page;
>  		}
>  		folio = page_folio(page);
> +		swapcache = folio;

I couldn't get further than one iteration into my swapping loads:
processes hung waiting for the folio lock.  Delete that
"swapcache = folio;" line: here is (one place) where swapcache and
folio may diverge, and shall need to be unlocked and put separately.
All working okay since I deleted that.  (A sketch of the resulting
hunk follows the remaining quoted context below.)

Hugh

> 
>  		/*
>  		 * If we want to map a page that's in the swapcache writable, we
> @@ -3867,7 +3867,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * owner. Try removing the extra reference from the local LRU
>  		 * pagevecs if required.
>  		 */
> -		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
> +		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
>  		    !folio_test_ksm(folio) && !folio_test_lru(folio))
>  			lru_add_drain();
>  	}
> @@ -3908,7 +3908,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
>  		 */
>  		exclusive = pte_swp_exclusive(vmf->orig_pte);
> -		if (page != swapcache) {
> +		if (folio != swapcache) {
>  			/*
>  			 * We have a fresh page that is not exposed to the
>  			 * swapcache -> certainly exclusive.
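
To spell out the deletion suggested above (a sketch only: the quoted
hunk with the offending line dropped, not a tested patch):

		page = ksm_might_need_to_copy(page, vma, vmf->address);
		if (unlikely(!page)) {
			ret = VM_FAULT_OOM;
			goto out_page;
		}
		folio = page_folio(page);
		/*
		 * No "swapcache = folio;" here: ksm_might_need_to_copy()
		 * may hand back a freshly allocated copy, so from this
		 * point swapcache (the locked folio from the swap cache)
		 * and folio (the folio actually being mapped) can differ,
		 * and the exit paths must unlock and put each separately.
		 */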
> @@ -3976,7 +3976,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	vmf->orig_pte = pte;
>  
>  	/* ksm created a completely new copy */
> -	if (unlikely(page != swapcache && swapcache)) {
> +	if (unlikely(folio != swapcache && swapcache)) {
>  		page_add_new_anon_rmap(page, vma, vmf->address);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> @@ -3989,7 +3989,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
>  
>  	folio_unlock(folio);
> -	if (page != swapcache && swapcache) {
> +	if (folio != swapcache && swapcache) {
>  		/*
>  		 * Hold the lock to avoid the swap entry to be reused
>  		 * until we take the PT lock for the pte_same() check
> @@ -3998,8 +3998,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  		 * so that the swap count won't change under a
>  		 * parallel locked swapcache.
>  		 */
> -		unlock_page(swapcache);
> -		put_page(swapcache);
> +		folio_unlock(swapcache);
> +		folio_put(swapcache);
>  	}
>  
>  	if (vmf->flags & FAULT_FLAG_WRITE) {
> @@ -4023,9 +4023,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	folio_unlock(folio);
> out_release:
>  	folio_put(folio);
> -	if (page != swapcache && swapcache) {
> -		unlock_page(swapcache);
> -		put_page(swapcache);
> +	if (folio != swapcache && swapcache) {
> +		folio_unlock(swapcache);
> +		folio_put(swapcache);
>  	}
>  	if (si)
>  		put_swap_device(si);
> -- 
> 2.35.1
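
Connecting that to the hang: once swapcache has been clobbered to equal
folio, the divergent-folio cleanup quoted above can never trigger for a
KSM copy, so the original swap cache folio stays locked.  Roughly (an
annotated excerpt of the exit path above; the comments are editorial):

	folio_unlock(folio);			/* the possibly-new folio */
	if (folio != swapcache && swapcache) {	/* false once clobbered */
		/*
		 * For a KSM copy this must also release the original swap
		 * cache folio; skipping it leaks the lock and a reference,
		 * and the next fault on that entry waits forever on the
		 * folio lock - the hang reported above.
		 */
		folio_unlock(swapcache);
		folio_put(swapcache);
	}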