Date: Sun, 25 Aug 2024 15:05:30 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: Baolin Wang
cc: akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
    david@redhat.com, wangkefeng.wang@huawei.com, chrisl@kernel.org,
    ying.huang@intel.com, 21cnbao@gmail.com, ryan.roberts@arm.com,
    shy828301@gmail.com, ziy@nvidia.com, ioworker0@gmail.com,
    da.gomez@samsung.com, p.raghav@samsung.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH v5 6/9] mm: shmem: support large folio allocation for shmem_replace_folio()
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
On Mon, 12 Aug 2024, Baolin Wang wrote:

> To support large folio swapin for shmem in the following patches, add
> large folio allocation for the new replacement folio in
> shmem_replace_folio(). Moreover, large folios occupy N consecutive
> entries in the swap cache instead of using multi-index entries like
> the page cache, therefore we should replace each consecutive entry in
> the swap cache instead of using shmem_replace_entry().
>
> Also update the statistics and the folio reference count using the
> number of pages in the folio.
>
> Signed-off-by: Baolin Wang
> ---
>  mm/shmem.c | 54 +++++++++++++++++++++++++++++++-----------------------
>  1 file changed, 31 insertions(+), 23 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index f6bab42180ea..d94f02ad7bd1 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1889,28 +1889,24 @@ static bool shmem_should_replace_folio(struct folio *folio, gfp_t gfp)
>  static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>  				struct shmem_inode_info *info, pgoff_t index)
>  {
> -	struct folio *old, *new;
> -	struct address_space *swap_mapping;
> -	swp_entry_t entry;
> -	pgoff_t swap_index;
> -	int error;
> -
> -	old = *foliop;
> -	entry = old->swap;
> -	swap_index = swap_cache_index(entry);
> -	swap_mapping = swap_address_space(entry);
> +	struct folio *new, *old = *foliop;
> +	swp_entry_t entry = old->swap;
> +	struct address_space *swap_mapping = swap_address_space(entry);
> +	pgoff_t swap_index = swap_cache_index(entry);
> +	XA_STATE(xas, &swap_mapping->i_pages, swap_index);
> +	int nr_pages = folio_nr_pages(old);
> +	int error = 0, i;
>
>  	/*
>  	 * We have arrived here because our zones are constrained, so don't
>  	 * limit chance of success by further cpuset and node constraints.
>  	 */
>  	gfp &= ~GFP_CONSTRAINT_MASK;
> -	VM_BUG_ON_FOLIO(folio_test_large(old), old);
> -	new = shmem_alloc_folio(gfp, 0, info, index);
> +	new = shmem_alloc_folio(gfp, folio_order(old), info, index);

It is not clear to me whether folio_order(old) will ever be more than 0
here: but if it can be, then care will need to be taken over the gfp
flags, so that they are suited to allocating the large folio; and there
will need to be (could be awkward!) fallback to order 0 when that
allocation fails.

My own testing never comes to shmem_replace_folio(): it was originally
for one low-end graphics driver; but IIRC there's now a more common
case for it.

Hugh
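
To illustrate the kind of fallback Hugh describes, here is a minimal
sketch, not part of the posted series, built around a hypothetical
helper shmem_replace_alloc() next to shmem_replace_folio() in
mm/shmem.c; only shmem_alloc_folio() and folio_order() are taken from
the patch above, the rest is assumption:

	/*
	 * Hypothetical sketch: allocate the replacement at the old
	 * folio's order, keeping the costly high-order attempt quiet
	 * and non-blocking, and show where an order-0 fallback would
	 * have to happen.  Falling back is only straightforward when
	 * the old folio is itself order 0; otherwise the old folio
	 * would first have to be split, which is the awkward part.
	 */
	static struct folio *shmem_replace_alloc(gfp_t gfp,
			struct shmem_inode_info *info, pgoff_t index,
			struct folio *old)
	{
		int order = folio_order(old);
		struct folio *new;

		if (order > 0) {
			/* Don't retry hard or warn for the large attempt. */
			gfp_t huge_gfp = gfp | __GFP_NOWARN | __GFP_NORETRY;

			new = shmem_alloc_folio(huge_gfp, order, info, index);
			if (new)
				return new;
			/*
			 * An order-0 replacement cannot stand in for a
			 * large folio unless the old folio is split
			 * first, so give up here in this sketch.
			 */
			return NULL;
		}
		return shmem_alloc_folio(gfp, 0, info, index);
	}

The gfp additions mirror what other high-order allocation sites do to
avoid stalling in reclaim when a smaller allocation could serve; whether
those exact flags are appropriate here is precisely the question Hugh
raises.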