From: Oscar Salvador <osalvador@suse.de>
To: Andrey Alekhin <andrei.aleohin@gmail.com>
Cc: muchun.song@linux.dev, linux-mm@kvack.org
Subject: Re: [PATCH] mm: free surplus huge pages properly on NUMA systems
Date: Tue, 20 May 2025 12:26:00 +0200	[thread overview]
Message-ID: <aCxYuIY-5sD8xRKf@localhost.localdomain> (raw)
In-Reply-To: <20250515191327.41089-1-andrei.aleohin@gmail.com>

On Thu, May 15, 2025 at 10:13:27PM +0300, Andrey Alekhin wrote:
> The following sequence is possible on a NUMA system:
> 
> n - overall number of huge pages on a node
> f - number of free huge pages on a node
> s - number of surplus huge pages on a node
> r - number of reserved huge pages (system-wide)
> huge page counters:  [before]
>                         |
>                      [after]
> 
> 		Process runs on node #1
> 			     |
>        node0               node1
> 1) addr1 = mmap(MAP_SHARED, ...) // 1 huge page is mmaped (cur_nid=1)
>    [n=2 f=2 s=0]       [n=1 f=1 s=0]  r=0
>                   |
>    [n=2 f=2 s=0]       [n=1 f=1 s=0]  r=1
> 
> 2) echo 1 > /proc/sys/vm/nr_hugepages (cur_nid=1)
>    [n=2 f=2 s=0]       [n=1 f=1 s=0]  r=1
>                   |
>    [n=0 f=0 s=0]       [n=1 f=1 s=0]  r=1
> 3) addr2 = mmap(MAP_SHARED, ...) // 1 huge page is mmaped (cur_nid=1)
>    [n=0 f=0 s=0]       [n=1 f=1 s=0]  r=1
>                   |
>    [n=1 f=1 s=1]       [n=1 f=1 s=0]  r=2
>    A new surplus huge page is reserved on node0, not on node1. In Linux 6.14
>    this is unlikely, but possible and legal.
> 
> 4) write to second page (touch)
>    [n=1 f=1 s=1]       [n=1 f=1 s=0]  r=2
>                   |
>    [n=1 f=1 s=1]       [n=1 f=0 s=0]  r=1
>    The reserved page is mapped on node1.
> 
> 5) munmap(addr2) // 1 huge page is unmapped
>    [n=1 f=1 s=1]       [n=1 f=0 s=0]  r=1
>                   |
>    [n=1 f=1 s=1]       [n=1 f=1 s=0]  r=1
>    The huge page is freed, but it is not accounted as a freed surplus page.
>    The system-wide huge page counters are now [nr_hugepages=2
>    free_huge_pages=2 surplus_hugepages=1], but they should be
>    [nr_hugepages=1 free_huge_pages=1 surplus_hugepages=0].

But surely once you do the munmap for addr1, the stats will be corrected
again, right?
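
FWIW, here is a tiny user-space model of the per-node accounting in the
sequence quoted above, just to illustrate how the counters end up stale.
The struct and counter names are simplified stand-ins, not the kernel's
hstate fields:

/*
 * Minimal model of the sequence above: two nodes, per-node page
 * counters, and a global surplus counter. Names are illustrative.
 */
#include <stdio.h>

#define NR_NODES 2

struct hstate_model {
	int nr_pages[NR_NODES];		/* models nr_hugepages per node */
	int free_pages[NR_NODES];	/* models free_huge_pages per node */
	int surplus[NR_NODES];		/* models surplus_huge_pages per node */
	int surplus_total;		/* models the global surplus count */
};

static void show(const struct hstate_model *h, const char *step)
{
	printf("%-28s node0 [n=%d f=%d s=%d]  node1 [n=%d f=%d s=%d]  surplus=%d\n",
	       step,
	       h->nr_pages[0], h->free_pages[0], h->surplus[0],
	       h->nr_pages[1], h->free_pages[1], h->surplus[1],
	       h->surplus_total);
}

int main(void)
{
	struct hstate_model h = {
		.nr_pages   = { 2, 1 },
		.free_pages = { 2, 1 },
	};

	show(&h, "1) mmap addr1 (reserve only)");

	/* 2) echo 1 > nr_hugepages: node0's free persistent pages go away */
	h.nr_pages[0] = 0;
	h.free_pages[0] = 0;
	show(&h, "2) shrink pool");

	/* 3) mmap addr2: a surplus page is allocated on node0 */
	h.nr_pages[0]++;
	h.free_pages[0]++;
	h.surplus[0]++;
	h.surplus_total++;
	show(&h, "3) surplus alloc on node0");

	/* 4) fault on addr2: the mapping is satisfied from node1's free page */
	h.free_pages[1]--;
	show(&h, "4) fault lands on node1");

	/*
	 * 5) munmap addr2: the page being freed sits on node1, whose
	 * per-node surplus count is 0, so it goes back to the free pool
	 * and the global surplus count stays at 1.
	 */
	if (h.surplus[1]) {
		h.nr_pages[1]--;
		h.surplus[1]--;
		h.surplus_total--;
	} else {
		h.free_pages[1]++;
	}
	show(&h, "5) munmap addr2");

	return 0;
}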

>  void free_huge_folio(struct folio *folio)
>  {
>  	/*
> @@ -1833,6 +1850,8 @@ void free_huge_folio(struct folio *folio)
>  	struct hugepage_subpool *spool = hugetlb_folio_subpool(folio);
>  	bool restore_reserve;
>  	unsigned long flags;
> +	int node;
> +	nodemask_t *mbind_nodemask, alloc_nodemask;
>  
>  	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
>  	VM_BUG_ON_FOLIO(folio_mapcount(folio), folio);
> @@ -1883,6 +1902,25 @@ void free_huge_folio(struct folio *folio)
>  		remove_hugetlb_folio(h, folio, true);
>  		spin_unlock_irqrestore(&hugetlb_lock, flags);
>  		update_and_free_hugetlb_folio(h, folio, true);
> +	} else if (h->surplus_huge_pages) {
> +		mbind_nodemask = policy_mbind_nodemask(htlb_alloc_mask(h));
> +		if (mbind_nodemask)
> +			nodes_and(alloc_nodemask, *mbind_nodemask,
> +					cpuset_current_mems_allowed);
> +		else
> +			alloc_nodemask = cpuset_current_mems_allowed;
> +
> +		for_each_node_mask(node, alloc_nodemask) {
> +			if (h->surplus_huge_pages_node[node]) {
> +				h->surplus_huge_pages_node[node]--;
> +				h->surplus_huge_pages--;
> +				break;
> +			}
> +		}

And I am not convinced about this one.
Apart from the fact that free_huge_folio() can be called from a workqueue,
why would we need to do this dance?
What if the node is not in the policy anymore? What happens to its
counters?

I have to think about this some more, but I am not really convinced we
need this.
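
To make the policy concern concrete, here is a small user-space sketch of
the proposed loop. The mask handling and names are simplified stand-ins,
not the kernel's APIs; it only shows that a nodemask taken from the
*freeing* context can miss the node whose surplus counter was bumped at
allocation time:

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

/* The surplus page was allocated on node 0. */
static int surplus_node[NR_NODES] = { 1, 0, 0, 0 };
static int surplus_total = 1;

/* Decrement the first node in @allowed that has a surplus page. */
static bool drop_one_surplus(const bool allowed[NR_NODES])
{
	for (int node = 0; node < NR_NODES; node++) {
		if (allowed[node] && surplus_node[node]) {
			surplus_node[node]--;
			surplus_total--;
			return true;
		}
	}
	return false;
}

int main(void)
{
	/*
	 * The context doing the free (a task whose mempolicy/cpuset has
	 * changed, or a workqueue worker) is now restricted to nodes 2-3.
	 */
	bool allowed[NR_NODES] = { false, false, true, true };

	if (!drop_one_surplus(allowed))
		printf("node 0 is not in the freeing context's mask; surplus stays at %d\n",
		       surplus_total);

	return 0;
}

So even with the loop, the accounting can stay stale whenever the mask of
the freeing context no longer covers the node that got the surplus page.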


-- 
Oscar Salvador
SUSE Labs

