From: Vernon Yang <vernon2gm@gmail.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Vernon Yang <vernon2gm@gmail.com>,
	hughd@google.com, akpm@linux-foundation.org,
	da.gomez@samsung.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org,
	Vernon Yang <yanglincheng@kylinos.cn>
Subject: Re: [PATCH] mm: shmem: fix too little space for tmpfs only fallback 4KB
Date: Tue, 9 Sep 2025 20:29:14 +0800	[thread overview]
Message-ID: <3349E5A6-BCDC-47B9-956B-CB0D0BC02D84@gmail.com> (raw)
In-Reply-To: <c245dbb5-2e2b-4308-a296-f711b74002eb@linux.alibaba.com>



> On Sep 9, 2025, at 13:58, Baolin Wang <baolin.wang@linux.alibaba.com> wrote:
> 
> 
> 
> On 2025/9/8 20:31, Vernon Yang wrote:
>> From: Vernon Yang <yanglincheng@kylinos.cn>
>> When system memory is sufficient, the memory allocation itself always
>> succeeds, but when the tmpfs size is small (e.g. 1MB), the allocation
>> falls back directly from 2MB to 4KB, and the intermediate granularities
>> (8KB ~ 1024KB) are never tried.
>> Therefore, add a check of whether the remaining tmpfs space is
>> sufficient for the allocation. If there is too little space left, try a
>> smaller large folio.
> 
> I don't think so.
> 
> For a tmpfs mount with 'huge=within_size' and 'size=1M', if you try to write 1M of data, it will allocate an order 8 large folio and will not fall back to order 0.
> 
> For a tmpfs mount with 'huge=always' and 'size=1M', if you try to write 1M of data, it will not fall back completely to order 0 either; instead, it will still allocate some order 1 to order 7 large folios.
> 
> I'm not sure if this is your actual user scenario. If your files are small and you are concerned about not getting large folio allocations, I recommend using the 'huge=within_size' mount option.
> 

No, this is not my user scenario.

Based on your previous patch [1], this scenario can be easily reproduced as 
follows.

$ mount -t tmpfs -o size=1024K,huge=always tmpfs /xxx/test
$ echo hello > /xxx/test/README
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.0M  4.0K 1020K   1% /xxx/test

The code logic is as follows:

shmem_get_folio_gfp()
    orders = shmem_allowable_huge_orders()
    shmem_alloc_and_add_folio(orders)           <- returns -ENOSPC
        shmem_alloc_folio()                     <- 2MB folio allocated successfully
        shmem_inode_acct_blocks()
            percpu_counter_limited_add()        <- limit exceeded, goto unacct
        filemap_remove_folio()
    shmem_alloc_and_add_folio(order = 0)        <- falls straight back to 4KB


As long as the remaining tmpfs space is too small and the system is still able
to allocate a 2MB folio, the path above is triggered.

[1] https://lore.kernel.org/linux-mm/10e7ac6cebe6535c137c064d5c5a235643eebb4a.1756888965.git.baolin.wang@linux.alibaba.com/
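
To make the numbers concrete, here is a minimal userspace sketch (my own
illustration, not kernel code; highest_order()/next_order() below are crude
stand-ins for the kernel helpers of the same name, and MAX_BLOCKS models the
size=1024K mount) of the order walk for the state shown in the df output above:

/*
 * Minimal userspace sketch, illustration only.
 * Models size=1024K (256 blocks of 4KB) with the 4KB README already written.
 */
#include <stdio.h>

#define MAX_BLOCKS	256	/* size=1024K / 4KB blocks */

static int highest_order(unsigned long orders)
{
	int order = 0;

	while (orders >> (order + 1))
		order++;
	return order;
}

static int next_order(unsigned long *orders, int order)
{
	/* clear the order just tried and pick the next highest one */
	*orders &= ~(1UL << order);
	return highest_order(*orders);
}

int main(void)
{
	unsigned long orders = 0x1fe;	/* orders 1..8 allowed */
	long used_blocks = 1;		/* the 4KB README is already accounted */
	int order = highest_order(orders);

	while (orders) {
		long pages = 1L << order;

		printf("order %d (%4ld KB): %s\n", order, pages * 4,
		       used_blocks + pages > MAX_BLOCKS ?
		       "exceeds remaining space" : "fits in remaining space");
		order = next_order(&orders, order);
	}
	return 0;
}

Only order 8 exceeds the remaining space; orders 7 down to 1 would all have
fit, yet with the current code the order-8 accounting failure falls straight
back to order 0.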

>> Fixes: acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs")
> 
> No, this doesn't fix anything.
> 
>> Signed-off-by: Vernon Yang <yanglincheng@kylinos.cn>
>> ---
>>  mm/shmem.c | 13 +++++++++++++
>>  1 file changed, 13 insertions(+)
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index 8c592c6db2a0..b20affd57b23 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1820,6 +1820,7 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>>  					   unsigned long orders)
>>  {
>>  	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
>> +	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>>  	pgoff_t aligned_index;
>>  	unsigned long pages;
>>  	int order;
>> @@ -1835,6 +1836,18 @@ static unsigned long shmem_suitable_orders(struct inode *inode, struct vm_fault
>>  	while (orders) {
>>  		pages = 1UL << order;
>>  		aligned_index = round_down(index, pages);
>> +
>> +		/*
>> +		 * Check whether the remaining space of tmpfs is sufficient for
>> +		 * allocation. If there is too little space left, try smaller
>> +		 * large folio.
>> +		 */
>> +		if (sbinfo->max_blocks && percpu_counter_read(&sbinfo->used_blocks)
>> +				+ pages > sbinfo->max_blocks) {
>> +			order = next_order(&orders, order);
>> +			continue;
>> +		}
>> +
>>  		/*
>>  		 * Check for conflict before waiting on a huge allocation.
>>  		 * Conflict might be that a huge page has just been allocated
> 


