From: David Hildenbrand <david@redhat.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
Daniel Gomez <da.gomez@kernel.org>
Cc: "Ville Syrjälä" <ville.syrjala@linux.intel.com>,
akpm@linux-foundation.org, hughd@google.com, willy@infradead.org,
wangkefeng.wang@huawei.com, 21cnbao@gmail.com,
ryan.roberts@arm.com, ioworker0@gmail.com, da.gomez@samsung.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
regressions@lists.linux.dev, intel-gfx@lists.freedesktop.org,
"Eero Tamminen" <eero.t.tamminen@intel.com>
Subject: Re: [REGRESSION] Re: [PATCH v3 3/6] mm: shmem: add large folio support for tmpfs
Date: Tue, 6 May 2025 16:36:23 +0200 [thread overview]
Message-ID: <f0187458-e576-4894-b728-5914d3d9ed36@redhat.com> (raw)
In-Reply-To: <e54e0b31-1b92-4110-b8ac-4737893fe197@linux.alibaba.com>
On 06.05.25 05:33, Baolin Wang wrote:
>
>
> On 2025/5/2 23:31, David Hildenbrand wrote:
>> On 02.05.25 15:10, Daniel Gomez wrote:
>>> On Fri, May 02, 2025 at 09:18:41AM +0100, David Hildenbrand wrote:
>>>> On 02.05.25 03:02, Baolin Wang wrote:
>>>>>
>>>>>
>>>>> On 2025/4/30 21:24, Daniel Gomez wrote:
>>>>>> On Wed, Apr 30, 2025 at 02:20:02PM +0100, Ville Syrjälä wrote:
>>>>>>> On Wed, Apr 30, 2025 at 02:32:39PM +0800, Baolin Wang wrote:
>>>>>>>> On 2025/4/30 01:44, Ville Syrjälä wrote:
>>>>>>>>> On Thu, Nov 28, 2024 at 03:40:41PM +0800, Baolin Wang wrote:
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> This causes a huge regression in Intel iGPU texturing performance.
>>>>>>>>
>>>>>>>> Unfortunately, I don't have such platform to test it.
>>>>>>>>
>>>>>>>>>
>>>>>>>>> I haven't had time to look at this in detail, but presumably the
>>>>>>>>> problem is that we're no longer getting huge pages from our
>>>>>>>>> private tmpfs mount (done in i915_gemfs_init()).
>>>>>>>>
>>>>>>>> IIUC, the i915 driver still limits the maximum write size to
>>>>>>>> PAGE_SIZE in shmem_pwrite(),
>>>>>>>
>>>>>>> pwrite is just one random way to write to objects, and probably
>>>>>>> not something that's even used by current Mesa.
>>>>>>>
>>>>>>>> which prevents tmpfs from allocating large
>>>>>>>> folios. As mentioned in the comments below, tmpfs, like other file
>>>>>>>> systems that support large folios, will allow getting a highest-order
>>>>>>>> hint based on the size of the write and fallocate paths, and will
>>>>>>>> then attempt each allowable huge order.
>>>>>>>>
>>>>>>>> Therefore, I think the shmem_pwrite() function should be changed to
>>>>>>>> remove the limitation that the write size cannot exceed PAGE_SIZE.
>>>>>>
>>>>>> To enable mTHP on tmpfs, the necessary knobs must first be enabled
>>>>>> in sysfs, as they are not enabled by default IIRC (only THP at the
>>>>>> PMD level). Ville, I see the huge=within_size mount option is passed
>>>>>> in i915_gemfs. Can you confirm whether
>>>>>> /sys/kernel/mm/transparent_hugepage/hugepages-*/enabled is also
>>>>>> marked as 'always' when the regression is found?
>>>>>
>>>>> The tmpfs mount will not be controlled by
>>>>> '/sys/kernel/mm/transparent_hugepage/hugepages-*Kb/enabled' (except for
>>>>> the debugging options 'deny' and 'force').
>>>>
>>>> Right, IIRC as requested by Willy, it should behave like other FSes
>>>> where
>>>> there is no control over the folio size to be used.
>>>
>>> Thanks for reminding me. I forgot we finally changed it.
>>>
>>> Could the performance drop be due to the driver no longer using
>>> PMD-level pages?
>>
>> I suspect that the faulting logic will now go to a smaller order first,
>> indeed.
>>
>> ... trying to digest shmem_allowable_huge_orders() and
>> shmem_huge_global_enabled(), having a hard time trying to isolate the
>> tmpfs case: especially whether we run into the vma vs. !vma case here.
>>
>> Without a VMA, I think we should have "tmpfs will allow getting a
>> highest order hint based on the size of the write and fallocate paths,
>> then will try each allowable order".
>>
>> With a VMA (no access hint), "we still use PMD-sized order to locate
>> huge pages due to lack of a write size hint."
>>
>> So if we get a fallocate()/write() that is, say, 1 MiB, we'd now
>> allocate a 1 MiB folio instead of a 2 MiB one.
>
> Right.
>
> So I asked Ville how the shmem folios are allocated in the i915 driver,
> to see whether we can make some improvements.

Preallocation (using fallocate) might be reasonable for their use case
if they know they will consume all that memory either way. If it's
sparse, it's more problematic.
--
Cheers,
David / dhildenb
Thread overview: 17+ messages
2024-11-28 7:40 [PATCH v3 0/6] Support large folios " Baolin Wang
2024-11-28 7:40 ` [PATCH v3 1/6] mm: factor out the order calculation into a new helper Baolin Wang
2024-11-28 7:40 ` [PATCH v3 2/6] mm: shmem: change shmem_huge_global_enabled() to return huge order bitmap Baolin Wang
2024-11-28 7:40 ` [PATCH v3 3/6] mm: shmem: add large folio support for tmpfs Baolin Wang
2025-04-29 17:44 ` [REGRESSION] " Ville Syrjälä
2025-04-30 6:32 ` Baolin Wang
2025-04-30 11:20 ` Ville Syrjälä
2025-04-30 13:24 ` Daniel Gomez
2025-05-02 1:02 ` Baolin Wang
2025-05-02 7:18 ` David Hildenbrand
2025-05-02 13:10 ` Daniel Gomez
2025-05-02 15:31 ` David Hildenbrand
2025-05-06 3:33 ` Baolin Wang
2025-05-06 14:36 ` David Hildenbrand [this message]
2024-11-28 7:40 ` [PATCH v3 4/6] mm: shmem: add a kernel command line to change the default huge policy " Baolin Wang
2024-11-28 7:40 ` [PATCH v3 5/6] docs: tmpfs: update the large folios policy for tmpfs and shmem Baolin Wang
2024-11-28 7:40 ` [PATCH v3 6/6] docs: tmpfs: drop 'fadvise()' from the documentation Baolin Wang