Date: Wed, 14 Feb 2024 09:00:46 +1100
From: Dave Chinner <david@fromorbit.com>
To: "Pankaj Raghav (Samsung)"
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	mcgrof@kernel.org, gost.dev@samsung.com, akpm@linux-foundation.org,
	kbusch@kernel.org, djwong@kernel.org, chandan.babu@oracle.com,
	p.raghav@samsung.com, linux-kernel@vger.kernel.org, hare@suse.de,
	willy@infradead.org, linux-mm@kvack.org
Subject: Re: [RFC v2 02/14] filemap: align the index to mapping_min_order in the page cache
Message-ID:
References: <20240213093713.1753368-1-kernel@pankajraghav.com>
 <20240213093713.1753368-3-kernel@pankajraghav.com>
In-Reply-To: <20240213093713.1753368-3-kernel@pankajraghav.com>
On Tue, Feb 13, 2024 at 10:37:01AM +0100, Pankaj Raghav (Samsung) wrote:
> From: Luis Chamberlain
>
> Supporting mapping_min_order implies that we guarantee each folio in the
> page cache has at least an order of
> mapping_min_order. So when adding new
> folios to the page cache we must ensure the index used is aligned to the
> mapping_min_order as the page cache requires the index to be aligned to
> the order of the folio.
>
> A higher order folio than min_order by definition is a multiple of the
> min_order. If an index is aligned to an order higher than a min_order, it
> will also be aligned to the min order.
>
> This effectively introduces no new functional changes when min order is
> not set other than a few rounding computations that should result in the
> same value.
>
> Signed-off-by: Luis Chamberlain
> Signed-off-by: Pankaj Raghav
> ---
>  mm/filemap.c | 34 ++++++++++++++++++++++++++--------
>  1 file changed, 26 insertions(+), 8 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 750e779c23db..323a8e169581 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2479,14 +2479,16 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
>  {
>  	struct file *filp = iocb->ki_filp;
>  	struct address_space *mapping = filp->f_mapping;
> +	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
>  	struct file_ra_state *ra = &filp->f_ra;
> -	pgoff_t index = iocb->ki_pos >> PAGE_SHIFT;
> +	pgoff_t index = round_down(iocb->ki_pos >> PAGE_SHIFT, min_nrpages);

This is pretty magic - this patch adds a bunch of what appear to be
random rounding operations for some undocumented reason.

>  	pgoff_t last_index;
>  	struct folio *folio;
>  	int err = 0;
>
>  	/* "last_index" is the index of the page beyond the end of the read */
>  	last_index = DIV_ROUND_UP(iocb->ki_pos + count, PAGE_SIZE);
> +	last_index = round_up(last_index, min_nrpages);

Same here - this is pretty nasty - we round up twice, but there's no
obvious reason as to why the second round up exists or why it can't be
done by the DIV_ROUND_UP() call. Just looking at the code it's
impossible to reason about why this is being done, let alone determine
if it has been implemented correctly.
> retry:
>  	if (fatal_signal_pending(current))
>  		return -EINTR;
> @@ -2502,8 +2504,7 @@ static int filemap_get_pages(struct kiocb *iocb, size_t count,
>  	if (!folio_batch_count(fbatch)) {
>  		if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
>  			return -EAGAIN;
> -		err = filemap_create_folio(filp, mapping,
> -				iocb->ki_pos >> PAGE_SHIFT, fbatch);
> +		err = filemap_create_folio(filp, mapping, index, fbatch);
>  		if (err == AOP_TRUNCATED_PAGE)
>  			goto retry;
>  		return err;
> @@ -3095,7 +3096,10 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	struct file *file = vmf->vma->vm_file;
>  	struct file_ra_state *ra = &file->f_ra;
>  	struct address_space *mapping = file->f_mapping;
> -	DEFINE_READAHEAD(ractl, file, ra, mapping, vmf->pgoff);
> +	unsigned int min_order = mapping_min_folio_order(mapping);
> +	unsigned int min_nrpages = mapping_min_folio_nrpages(file->f_mapping);

Why use file->f_mapping here and not mapping? And why not just

	unsigned int min_nrpages = 1U << min_order;

so it's obvious how the index alignment is related to the folio order?

> +	pgoff_t index = round_down(vmf->pgoff, min_nrpages);
> +	DEFINE_READAHEAD(ractl, file, ra, mapping, index);
>  	struct file *fpin = NULL;
>  	unsigned long vm_flags = vmf->vma->vm_flags;
>  	unsigned int mmap_miss;
> @@ -3147,10 +3151,11 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	 */
>  	fpin = maybe_unlock_mmap_for_io(vmf, fpin);
>  	ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
> +	ra->start = round_down(ra->start, min_nrpages);

Again, another random rounding operation....
>  	ra->size = ra->ra_pages;
>  	ra->async_size = ra->ra_pages / 4;
>  	ractl._index = ra->start;
> -	page_cache_ra_order(&ractl, ra, 0);
> +	page_cache_ra_order(&ractl, ra, min_order);
>  	return fpin;
>  }
>
> @@ -3164,7 +3169,9 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf,
>  {
>  	struct file *file = vmf->vma->vm_file;
>  	struct file_ra_state *ra = &file->f_ra;
> -	DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, vmf->pgoff);
> +	unsigned int min_nrpages = mapping_min_folio_nrpages(file->f_mapping);
> +	pgoff_t index = round_down(vmf->pgoff, min_nrpages);
> +	DEFINE_READAHEAD(ractl, file, ra, file->f_mapping, index);

Ok, this is begging for a new DEFINE_READAHEAD_ALIGNED() macro which
internally grabs the mapping_min_folio_nrpages() from the mapping
passed to the macro.

>  	struct file *fpin = NULL;
>  	unsigned int mmap_miss;
>
> @@ -3212,13 +3219,17 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  	struct file *file = vmf->vma->vm_file;
>  	struct file *fpin = NULL;
>  	struct address_space *mapping = file->f_mapping;
> +	unsigned int min_order = mapping_min_folio_order(mapping);
> +	unsigned int nrpages = 1UL << min_order;

You didn't use mapping_min_folio_nrpages() for this. At least
initialise all the variables the same way in the same patch!

>  	struct inode *inode = mapping->host;
> -	pgoff_t max_idx, index = vmf->pgoff;
> +	pgoff_t max_idx, index = round_down(vmf->pgoff, nrpages);

Yup, I can't help but think that with how many times this is being
repeated in this patchset that a helper or two is in order:

	index = mapping_align_start_index(mapping, vmf->pgoff);

And then most of the calls to mapping_min_folio_order() and
mapping_min_folio_nrpages() can be removed from this code, too.
>  	struct folio *folio;
>  	vm_fault_t ret = 0;
>  	bool mapping_locked = false;
>
>  	max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> +	max_idx = round_up(max_idx, nrpages);

	max_index = mapping_align_end_index(mapping,
			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));

>  	if (unlikely(index >= max_idx))
>  		return VM_FAULT_SIGBUS;
>
> @@ -3317,13 +3328,17 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
>  	 * We must recheck i_size under page lock.
>  	 */
>  	max_idx = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> +	max_idx = round_up(max_idx, nrpages);

Same again:

	max_index = mapping_align_end_index(mapping,
			DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE));

> +
>  	if (unlikely(index >= max_idx)) {
>  		folio_unlock(folio);
>  		folio_put(folio);
>  		return VM_FAULT_SIGBUS;
>  	}
>
> -	vmf->page = folio_file_page(folio, index);
> +	VM_BUG_ON_FOLIO(folio_order(folio) < min_order, folio);
> +
> +	vmf->page = folio_file_page(folio, vmf->pgoff);
>  	return ret | VM_FAULT_LOCKED;
>
> page_not_uptodate:
> @@ -3658,6 +3673,9 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
>  {
>  	struct folio *folio;
>  	int err;
> +	unsigned int min_nrpages = mapping_min_folio_nrpages(mapping);
> +
> +	index = round_down(index, min_nrpages);

And more magic rounding.

	index = mapping_align_start_index(mapping, index);

-Dave.

-- 
Dave Chinner
david@fromorbit.com