linux-mm.kvack.org archive mirror

From: Saurabh Singh Sengar <ssengar@microsoft.com>
To: Zach O'Keefe <zokeefe@google.com>, Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Yang Shi <shy828301@gmail.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: RE: [EXTERNAL] [PATCH] mm/thp: fix "mm: thp: kill __transhuge_page_enabled()"
Date: Wed, 16 Aug 2023 16:49:57 +0000	[thread overview]
Message-ID: <PUZP153MB0635DBD4E63A1A90C25F67ADBE15A@PUZP153MB0635.APCP153.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <CAAa6QmSN4NhaDL0DQsRd-F8HTnCCjq1ULRNk88LAA9gVbDXE4g@mail.gmail.com>



> -----Original Message-----
> From: Zach O'Keefe <zokeefe@google.com>
> Sent: Tuesday, August 15, 2023 5:35 AM
> To: Matthew Wilcox <willy@infradead.org>
> Cc: Saurabh Singh Sengar <ssengar@microsoft.com>; Dan Williams
> <dan.j.williams@intel.com>; linux-mm@kvack.org; Yang Shi
> <shy828301@gmail.com>; linux-kernel@vger.kernel.org
> Subject: Re: [EXTERNAL] [PATCH] mm/thp: fix "mm: thp: kill
> __transhuge_page_enabled()"
> 
> On Mon, Aug 14, 2023 at 12:06 PM Matthew Wilcox <willy@infradead.org>
> wrote:
> >
> > On Mon, Aug 14, 2023 at 11:47:50AM -0700, Zach O'Keefe wrote:
> > > Willy -- I'm not up-to-date on what is happening on the THP-fs front.
> > > Should we be checking for a ->huge_fault handler here?
> >
> > Oh, thank goodness, I thought you were cc'ing me to ask a DAX question ...
> 
> :)
> 
> > From a large folios perspective, filesystems do not implement a
> > special handler.  They call filemap_fault() (directly or indirectly)
> > from their
> > ->fault handler.  If there is already a folio in the page cache which
> > satisfies this fault, we insert it into the page tables (no matter
> > what size it is).  If there is no folio, we call readahead to populate
> > that index in the page cache, and probably some other indices around it.
> > That's do_sync_mmap_readahead().
> >
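[ For reference, the wiring described above usually looks something like the
  sketch below -- the filesystem's ->fault simply goes to filemap_fault(), and
  there is no large-folio-specific handler. "examplefs" is a made-up name;
  real filesystems differ in detail: ]

static const struct vm_operations_struct examplefs_file_vm_ops = {
	.fault		= filemap_fault,	/* reuse a cached folio or kick off readahead */
	.map_pages	= filemap_map_pages,	/* lockless fault-around */
	.page_mkwrite	= filemap_page_mkwrite,
};

static int examplefs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	file_accessed(file);
	vma->vm_ops = &examplefs_file_vm_ops;
	return 0;
}
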
> > If you look at that, you'll see that we check the VM_HUGEPAGE flag,
> > and if set we align to a PMD boundary and read two PMD-size pages
> > (so that we can do async readahead for the second page, if we're
> > doing a linear scan).
> > If the VM_HUGEPAGE flag isn't set, we'll use the readahead algorithm
> > to decide how large the folio should be that we're reading into; if
> > it's a random read workload, we'll stick to order-0 pages, but if
> > we're getting good hit rate from the linear scan, we'll increase the
> > size (although we won't go past PMD size).
> >
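[ Paraphrasing the VM_HUGEPAGE branch of do_sync_mmap_readahead() in
  mm/filemap.c -- simplified, not the exact code of any particular release: ]

	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
		/* Align the readahead window down to a PMD boundary ... */
		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
		ra->size = HPAGE_PMD_NR;
		/*
		 * ... and, unless this is a random-read VMA, fetch two
		 * PMD-sized folios so the second one can be read ahead
		 * asynchronously during a linear scan.
		 */
		if (!(vmf->vma->vm_flags & VM_RAND_READ))
			ra->size *= 2;
		ra->async_size = HPAGE_PMD_NR;
		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
		return fpin;
	}
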
> > There's also the ->map_pages() optimisation which handles page faults
> > locklessly, and will fail back to ->fault() if there's even a light
> > breeze.  I don't think that's of any particular use in answering your
> > question, so I'm not going into details about it.
> >
> > I'm not sure I understand the code that's being modified well enough
> > to be able to give you a straight answer to your question, but
> > hopefully this is helpful to you.
> 
> Thank you, this was great info. I had thought, incorrectly, that large
> folio work would eventually tie into that ->huge_fault() handler
> (should be dax_huge_fault()?).
> 
> If that's the case, then faulting file-backed, non-DAX memory as
> (pmd-mapped) THPs isn't supported at all, and no fault lies with the
> aforementioned patches.
> 
> Saurabh, perhaps you can elaborate on your use case a bit more, and how
> that anonymous check broke you?

Zach,

We have an out-of-tree driver that maps huge pages through a file handle and
relies on ->huge_fault. It used to work on 5.19 kernels, but 6.1 changed this
behaviour.
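
The general shape is something like the sketch below (names are placeholders,
not our actual code, and the ->huge_fault signature shown is the 5.19/6.1-era
one):

static vm_fault_t mydrv_huge_fault(struct vm_fault *vmf,
				   enum page_entry_size pe_size)
{
	if (pe_size != PE_SIZE_PMD)
		return VM_FAULT_FALLBACK;

	/*
	 * mydrv_pfn_for() is a placeholder for the driver's own lookup of
	 * the PFN backing this file offset.
	 */
	return vmf_insert_pfn_pmd(vmf, mydrv_pfn_for(vmf),
				  vmf->flags & FAULT_FLAG_WRITE);
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.fault		= mydrv_fault,		/* order-0 fallback, not shown */
	.huge_fault	= mydrv_huge_fault,
};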

I don't think reverting to the earlier behaviour of the fault path for huge
pages would impact the kernel negatively.

- Saurabh

> 
> Best,
> Zach


Thread overview: 20+ messages
2023-08-12 21:00 Zach O'Keefe
2023-08-12 21:24 ` Zach O'Keefe
2023-08-13  6:19 ` [EXTERNAL] " Saurabh Singh Sengar
2023-08-14 18:47   ` Zach O'Keefe
2023-08-14 19:06     ` Matthew Wilcox
2023-08-15  0:04       ` Zach O'Keefe
2023-08-15  2:24         ` Matthew Wilcox
2023-08-16 16:52           ` Saurabh Singh Sengar
2023-08-16 21:47             ` Zach O'Keefe
2023-08-17 17:46               ` Yang Shi
2023-08-17 18:29                 ` Zach O'Keefe
2023-08-18 21:21                   ` Yang Shi
2023-08-21 15:08                     ` Zach O'Keefe
2023-08-21 22:59                       ` Yang Shi
2023-08-16 21:31           ` Zach O'Keefe
2023-08-17 12:18             ` Matthew Wilcox
2023-08-17 18:13               ` Zach O'Keefe
2023-08-17 19:01                 ` Matthew Wilcox
2023-08-17 21:12                   ` Zach O'Keefe
2023-08-16 16:49         ` Saurabh Singh Sengar [this message]
