From: "Zhuo, Qiuxu" <qiuxu.zhuo@intel.com>
To: Miaohe Lin <linmiaohe@huawei.com>
Cc: "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"Luck, Tony" <tony.luck@intel.com>,
	"Huang, Ying" <ying.huang@intel.com>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	HORIGUCHI NAOYA <naoya.horiguchi@nec.com>,
	"Yin, Fengwei" <fengwei.yin@intel.com>
Subject: RE: [PATCH 1/1] mm: memory-failure: Re-split hw-poisoned huge page on -EAGAIN
Date: Wed, 20 Dec 2023 08:56:45 +0000
Message-ID: <CY8PR11MB7134A31039FA79E85300DA2A8996A@CY8PR11MB7134.namprd11.prod.outlook.com>
In-Reply-To: <81eebf23-fce3-3bb3-857d-8aab5a75d788@huawei.com>

Hi Miaohe,

Thanks for the review.
Please see the comments below.

> From: Miaohe Lin <linmiaohe@huawei.com>
> ...
> > +
> > +static void split_thp_work_fn(struct work_struct *work)
> > +{
> > +	struct split_thp_req *req = container_of(work, typeof(*req),
> > +						  work.work);
> > +	int ret;
> > +
> > +	/* Split the thp. */
> > +	get_page(req->thp);
> 
> Can req->thp be freed when split_thp_work_fn is scheduled ?

It's possible. Thanks for catching this.

Instead of adding a new work item to re-split the thp,
I'll leverage the existing memory_failure_queue() to re-split it in v2.
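
Roughly, the direction I have in mind (just a sketch, not the final v2
code; the variable names and the exact call site below are assumptions)
is to re-queue the pfn via memory_failure_queue() when split_huge_page()
fails, so no extra reference to the page is held across the retry delay:

	/* Sketch only: in the path where split_huge_page() fails with -EAGAIN. */
	if (split_huge_page(hpage)) {
		/*
		 * Re-queue the pfn so the existing memory-failure work
		 * handler calls memory_failure() again later; that path
		 * takes its own page reference and page lock, so nothing
		 * is pinned here across the delay.
		 */
		memory_failure_queue(pfn, flags);
		res = -EAGAIN;
	}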

> 
> > +	lock_page(req->thp);
> > +	ret = split_huge_page(req->thp);
> > +	unlock_page(req->thp);
> > +	put_page(req->thp);
> > +
> > +	/* Retry with an exponential backoff. */
> > +	if (ret && ++req->retries < SPLIT_THP_MAX_RETRY_CNT) {
> > +		schedule_delayed_work(to_delayed_work(work),
> > +				      msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS << req->retries));
> > +		return;
> > +	}
> > +
> > +	pr_err("%#lx: split unsplit thp %ssuccessfully.\n",
> > +	       page_to_pfn(req->thp), ret ? "un" : "");
> > +	kfree(req);
> > +	split_thp_pending = false;
> 
> split_thp_pending is not protected against split_thp_delayed? Though this
> race should be benign.

Thanks for raising this concern.

The read-check-set of "split_thp_pending" is protected by the mutex
&mf_mutex, and the worker only sets it to false (it never reads it),
so in theory there is no race here.

That said, I'll leverage the existing memory_failure_queue() in v2,
so this concern about the race goes away entirely. 😊
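
To illustrate the ordering argument (simplified, with helper names made
up for illustration; the real v1 code is spread across memory_failure()
and the worker):

	static bool split_thp_pending;

	/* memory_failure() side: the read-check-set happens only under mf_mutex. */
	static void queue_split_thp(struct split_thp_req *req)
	{
		mutex_lock(&mf_mutex);
		if (!split_thp_pending) {
			split_thp_pending = true;
			schedule_delayed_work(&req->work,
					      msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS));
		}
		mutex_unlock(&mf_mutex);
	}

	/* Worker side: the only access is a plain store back to false. */
	static void split_thp_done(void)
	{
		split_thp_pending = false;
	}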

-Qiuxu


