From: Mike Kravetz <mike.kravetz@oracle.com>
To: Miaohe Lin <linmiaohe@huawei.com>, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/hugetlb: Fix use after free when subpool max_hpages accounting is not enabled
Date: Tue, 26 Jan 2021 16:06:40 -0800
Message-ID: <a5952a6f-aaf4-b542-f9f1-5603658a602a@oracle.com>
In-Reply-To: <20210126115510.53374-1-linmiaohe@huawei.com>

On 1/26/21 3:55 AM, Miaohe Lin wrote:
> When subpool max_hpages accounting is not enabled, used_hpages is always 0.
> This can lead to the subpool being released prematurely, because a zero
> used_hpages suggests no pages are in use while some may still be.

It might be good to say that you need min_hpages accounting (min_size mount
option) enabled for this issue to occur.  Or, perhaps say this is possible
if a hugetlbfs filesystem is created with the min_size option and without
the size option.

That might better explain the conditions in which a user could see the issue.
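
For reference, a rough sketch of the subpool state that sets this up,
assuming something like 'mount -t hugetlbfs -o min_size=2M none
/mnt/huge' (mount point and size are only examples), i.e. min_size
given and size left out:

	/*
	 * Sketch: subpool state for a min_size-only hugetlbfs mount.
	 *
	 *	spool->max_hpages  == -1   no size= option, so maximum
	 *				   size accounting is off and
	 *				   used_hpages is never updated
	 *	spool->used_hpages == 0    always, even with pages in use
	 *	spool->min_hpages  == N    from min_size=
	 */
	bool free = (spool->count == 0) && (spool->used_hpages == 0);
	/* the old check: true on the final unref even while pages may
	   still be in use, so the subpool can be freed under them */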

> In order to fix this issue, we should only check used_hpages == 0 when
> max_hpages accounting is enabled. As max_hpages accounting is enabled in
> the most common cases, this is not worth a Cc stable.

I agree that such a combination of mount options is very uncommon.
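
To spell out why checking rsv_hpages against min_hpages is the right
idle test for that uncommon case: as I read
hugepage_subpool_get_pages()/put_pages(), rsv_hpages tracks how much of
the minimum reservation is currently unconsumed, roughly:

	/*
	 * Sketch: lifecycle of rsv_hpages in a min_size-only subpool.
	 *
	 *	at setup:      rsv_hpages == min_hpages
	 *	get_pages(d):  rsv_hpages -= min(d, rsv_hpages)
	 *	put_pages(d):  rsv_hpages += d, capped at min_hpages
	 *
	 * so rsv_hpages == min_hpages holds again only once everything
	 * taken from the reservation has been returned.
	 */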

> 
> Signed-off-by: Hongxiang Lou <louhongxiang@huawei.com>
> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> ---
>  mm/hugetlb.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)

Thanks,

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

-- 
Mike Kravetz

> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 777bc0e45bf3..53ea65d1c5ab 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -97,16 +97,26 @@ static inline void ClearPageHugeFreed(struct page *head)
>  /* Forward declaration */
>  static int hugetlb_acct_memory(struct hstate *h, long delta);
>  
> -static inline void unlock_or_release_subpool(struct hugepage_subpool *spool)
> +static inline bool subpool_is_free(struct hugepage_subpool *spool)
>  {
> -	bool free = (spool->count == 0) && (spool->used_hpages == 0);
> +	if (spool->count)
> +		return false;
> +	if (spool->max_hpages != -1)
> +		return spool->used_hpages == 0;
> +	if (spool->min_hpages != -1)
> +		return spool->rsv_hpages == spool->min_hpages;
>  
> +	return true;
> +}
> +
> +static inline void unlock_or_release_subpool(struct hugepage_subpool *spool)
> +{
>  	spin_unlock(&spool->lock);
>  
>  	/* If no pages are used, and no other handles to the subpool
>  	 * remain, give up any reservations based on minimum size and
>  	 * free the subpool */
> -	if (free) {
> +	if (subpool_is_free(spool)) {
>  		if (spool->min_hpages != -1)
>  			hugetlb_acct_memory(spool->hstate,
>  						-spool->min_hpages);
> 

