From: "Daniel Gomez" <d@kruces.com>
To: "Daniel Gomez" <d@kruces.com>,
"David Hildenbrand" <david@redhat.com>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Daniel Gomez" <da.gomez@samsung.com>,
"Kirill A. Shutemov" <kirill@shutemov.name>
Cc: "Matthew Wilcox" <willy@infradead.org>,
<akpm@linux-foundation.org>, <hughd@google.com>,
<wangkefeng.wang@huawei.com>, <21cnbao@gmail.com>,
<ryan.roberts@arm.com>, <ioworker0@gmail.com>,
<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [RFC PATCH v3 0/4] Support large folios for tmpfs
Date: Thu, 24 Oct 2024 12:52:46 +0200
Message-ID: <D53ZAA760YS1.B4SVBIPY8MTV@kruces.com>
In-Reply-To: <D53Z7I8D6MRB.XN14XUEFQFG7@kruces.com>

On Thu Oct 24, 2024 at 12:49 PM CEST, Daniel Gomez wrote:
> On Wed Oct 23, 2024 at 11:27 AM CEST, David Hildenbrand wrote:
> > On 23.10.24 10:04, Baolin Wang wrote:
> > >
> > >
> > > On 2024/10/22 23:31, David Hildenbrand wrote:
> > >> On 22.10.24 05:41, Baolin Wang wrote:
> > >>>
> > >>>
> > >>> On 2024/10/21 21:34, Daniel Gomez wrote:
> > >>>> On Mon Oct 21, 2024 at 10:54 AM CEST, Kirill A. Shutemov wrote:
> > >>>>> On Mon, Oct 21, 2024 at 02:24:18PM +0800, Baolin Wang wrote:
> > >>>>>>
> > >>>>>>
> > >>>>>> On 2024/10/17 19:26, Kirill A. Shutemov wrote:
> > >>>>>>> On Thu, Oct 17, 2024 at 05:34:15PM +0800, Baolin Wang wrote:
> > >>>>>>>> + Kirill
> > >>>>>>>>
> > >>>>>>>> On 2024/10/16 22:06, Matthew Wilcox wrote:
> > >>>>>>>>> On Thu, Oct 10, 2024 at 05:58:10PM +0800, Baolin Wang wrote:
> > >>>>>>>>>> Considering that tmpfs already has the 'huge=' option to
> > >>>>>>>>>> control THP allocation, it is necessary to maintain
> > >>>>>>>>>> compatibility with the 'huge=' option, as well as the
> > >>>>>>>>>> 'deny' and 'force' options controlled by
> > >>>>>>>>>> '/sys/kernel/mm/transparent_hugepage/shmem_enabled'.
> > >>>>>>>>>
> > >>>>>>>>> No, it's not. No other filesystem honours these settings.
> > >>>>>>>>> tmpfs would not have had these settings if it were written
> > >>>>>>>>> today. It should simply ignore them, the way that NFS
> > >>>>>>>>> ignores the "intr" mount option now that we have a better
> > >>>>>>>>> solution to the original problem.
> > >>>>>>>>>
> > >>>>>>>>> To reiterate my position:
> > >>>>>>>>>
> > >>>>>>>>>     - When using tmpfs as a filesystem, it should behave
> > >>>>>>>>>       like other filesystems.
> > >>>>>>>>>     - When using tmpfs to implement MAP_ANONYMOUS | MAP_SHARED,
> > >>>>>>>>>       it should behave like anonymous memory.
> > >>>>>>>>
> > >>>>>>>> I do agree with your point to some extent, but the 'huge='
> > >>>>>>>> option has existed for nearly 8 years, and huge orders based
> > >>>>>>>> on write size may not achieve the performance of PMD-sized
> > >>>>>>>> THP in some scenarios, such as when the write length is
> > >>>>>>>> consistently 4K. So, I am still concerned that ignoring the
> > >>>>>>>> 'huge=' option could lead to compatibility issues.
> > >>>>>>>
> > >>>>>>> Yeah, I don't think we are there yet to ignore the mount option.
> > >>>>>>
> > >>>>>> OK.
> > >>>>>>
> > >>>>>>> Maybe we need a new generic interface to request the
> > >>>>>>> semantics tmpfs has with huge= at a per-inode level on any
> > >>>>>>> fs. Like a set of FADV_* handles to make the kernel allocate
> > >>>>>>> PMD-sized folios on any allocation, or on allocations within
> > >>>>>>> i_size. I think this behaviour is useful beyond tmpfs.
> > >>>>>>>
> > >>>>>>> Then the huge= implementation for tmpfs can be re-defined to
> > >>>>>>> set these per-inode FADV_* flags by default. This way we can
> > >>>>>>> keep tmpfs compatible with current deployments and less
> > >>>>>>> special compared to the rest of the filesystems on the
> > >>>>>>> kernel side.
> > >>>>>>
> > >>>>>> I did a quick search, and I didn't find any other fs that
> > >>>>>> requires PMD-sized huge pages, so I am not sure if FADV_* is
> > >>>>>> useful for filesystems other than tmpfs. Please correct me if
> > >>>>>> I missed something.
> > >>>>>
> > >>>>> What do you mean by "require"? THPs are always opportunistic.
> > >>>>>
> > >>>>> IIUC, we don't have a way to hint the kernel to use huge pages
> > >>>>> for a file on read from backing storage. Readahead is not
> > >>>>> always the right way.
> > >>>>>
> > >>>>>>> If huge= is not set, tmpfs would behave the same way as the
> > >>>>>>> rest of the filesystems.
> > >>>>>>
> > >>>>>> So if 'huge=' is not set, tmpfs write()/fallocate() can still
> > >>>>>> allocate large folios based on the write size? If so, that
> > >>>>>> means it will change the default huge behavior for tmpfs,
> > >>>>>> because previously leaving 'huge=' unset meant the huge option
> > >>>>>> was 'SHMEM_HUGE_NEVER'. This is similar to what I mentioned:
> > >>>>>> "Another possible choice is to make the huge pages allocation
> > >>>>>> based on write size as the *default* behavior for tmpfs, ..."
> > >>>>>
> > >>>>> I am more worried about breaking existing users of huge pages,
> > >>>>> so changing the behaviour for users who don't specify huge= is
> > >>>>> okay with me.
> > >>>>
> > >>>> I think moving tmpfs to allocate large folios opportunistically
> > >>>> by default (as it was proposed initially) doesn't necessarily
> > >>>> conflict with the default behaviour (huge=never). We just need
> > >>>> to clarify that in the documentation.
> > >>>>
> > >>>> However, and IIRC, one of the requests from Hugh was to have a
> > >>>> way to disable large folios, which is something other
> > >>>> filesystems have no control over as of today. Ryan sent a
> > >>>> proposal to actually control that globally, but I think it
> > >>>> didn't move forward. So, what are we missing to go back to
> > >>>> implementing large folios in tmpfs in the default case, as any
> > >>>> other fs leveraging large folios?
> > >>>
> > >>> IMHO, as I discussed with Kirill, we still need to maintain
> > >>> compatibility with the 'huge=' mount option. This means that if
> > >>> 'huge=never' is set for tmpfs, huge page allocation will still be
> > >>> prohibited (which can address Hugh's request?). However, if
> > >>> 'huge=' is not set, we can allocate large folios based on the
> > >>> write size.
>
> So, to make tmpfs behave like other filesystems, we need to allocate
> large folios by default. According to the documentation, not setting
> 'huge=' is the same as setting 'huge=never'. However, 'huge=' is
> intended to control THP, not large folios, so there shouldn't be a
> conflict in this case. Can you clarify what specific scenario or
> conflict you're considering here? Perhaps when the large folio order
> is the same as the PMD size?
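
For context, here is a minimal sketch of the mount-time knob under
discussion, using the option values documented in
Documentation/admin-guide/mm/transhuge.rst (the helper name is mine):

	/* Mount tmpfs with an explicit huge= policy via mount(2).
	 * huge= accepts: never, always, within_size, advise. The extra
	 * 'deny' and 'force' values exist only in the global override
	 * at /sys/kernel/mm/transparent_hugepage/shmem_enabled. */
	#include <stdio.h>
	#include <sys/mount.h>

	static int mount_tmpfs_huge(const char *target, const char *policy)
	{
		char opts[64];

		snprintf(opts, sizeof(opts), "huge=%s", policy);
		return mount("tmpfs", target, "tmpfs", 0, opts);
	}

E.g. mount_tmpfs_huge("/mnt/tmp", "within_size") is equivalent to
'mount -t tmpfs -o huge=within_size tmpfs /mnt/tmp'.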
>
> > >>
> > >> I consider allocating large folios in shmem/tmpfs on the write
> > >> path less controversial than allocating them on the page fault
> > >> path -- especially as long as we stay within the size
> > >> to-be-written.
> > >>
> > >> I think THP on shmem/tmpfs is disabled by default in RHEL (e.g.,
> > >> shmem_enabled=never), maybe because of some rather undesired
> > >> side-effects (maybe some are historical?): I recall issues with
> > >> VMs combining THP and memory ballooning, as we cannot reclaim the
> > >> pages of a folio if splitting fails. I assume most of these
> > >> problematic use cases don't use tmpfs as an ordinary file system
> > >> (write()/read()), but mmap() the whole thing.
> > >>
> > >> Sadly, I don't find any information about shmem/tmpfs + THP in
> > >> the RHEL documentation; most documentation is only concerned with
> > >> anon THP. Which makes me conclude that it is not suggested as of
> > >> now.
> > >>
> > >> I see more issues with allocating them on the page fault path and
> > >> not having a way to disable it -- compared to allocating them on
> > >> the write() path.
> > >
> > > I may not understand your issues. IIUC, you can disable allocating huge
> > > pages on the page fault path by using the 'huge=never' mount option or
> > > setting shmem_enabled=deny. No?
> >
> > That's what I am saying: if there is some way to disable it that will
> > keep working, great.
>
> I agree. That aligns with what I recall Hugh requested. However, I
> believe if that is the way to go, we shouldn't limit it to tmpfs.
> Otherwise, why should tmpfs be prevented from allocating large folios if
> other filesystems in the system are allowed to allocate them? I think
> that if we want to disable large folios, we should make it more
> generic, something similar to Ryan's proposal [1] for controlling
> folio sizes.
>
> [1] https://lore.kernel.org/all/20240717071257.4141363-1-ryan.roberts@arm.com/
>
> That said, there has already been disagreement on this point here [2].
>
> [2] https://lore.kernel.org/all/ZvVRiJYfaXD645Nh@casper.infradead.org/
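
To make the FADV_* idea Kirill floated above a bit more concrete, here
is a hypothetical sketch; none of these flags exist today, and the
names and values are made up purely for illustration:

	/* Hypothetical fadvise() hints in the spirit of Kirill's
	 * suggestion: ask the kernel to prefer PMD-sized folios for
	 * this inode, either on any allocation or only within i_size.
	 * Neither constant exists in any kernel today. */
	#include <fcntl.h>

	#define FADV_HUGE_ALWAYS	64	/* hypothetical */
	#define FADV_HUGE_WITHIN_SIZE	65	/* hypothetical */

	static int hint_huge_within_size(int fd)
	{
		/* Without kernel support, current kernels would return
		 * EINVAL for an unknown advice value. */
		return posix_fadvise(fd, 0, 0, FADV_HUGE_WITHIN_SIZE);
	}

tmpfs's huge= could then be re-defined to set one of these hints
per-inode by default, as Kirill outlined, keeping existing deployments
working while making tmpfs less special than other filesystems.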