From: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
To: naoya.horiguchi@nec.com
Cc: linmiaohe@huawei.com, akpm@linux-foundation.org,
tony.luck@intel.com, ying.huang@intel.com, qiuxu.zhuo@intel.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/1] mm: memory-failure: Re-split hw-poisoned huge page on -EAGAIN
Date: Fri, 15 Dec 2023 16:12:04 +0800
Message-ID: <20231215081204.8802-1-qiuxu.zhuo@intel.com>

While a hw-poisoned huge page is being split, threads within the
affected process can increase its reference count, causing the split
of the hw-poisoned huge page to fail with -EAGAIN.

This issue can be reproduced by injecting a memory error into a
multi-threaded process such that the error lands within a huge page.

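For illustration, a reproducer along the following lines can hit the
race (a hypothetical sketch, not the actual test program: the real
test used memory error injection, while this sketch substitutes
MADV_HWPOISON as a software stand-in; it requires root and
CONFIG_MEMORY_FAILURE):

  /*
   * Hypothetical reproducer sketch: several threads keep referencing
   * a THP while a (software-injected) error lands in it.
   * Build: cc -O2 -pthread repro.c
   */
  #include <pthread.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define THP_SIZE (2UL << 20)	/* assume 2MB huge pages */

  static volatile char *buf;

  static void *toucher(void *arg)
  {
  	for (;;)	/* keep taking transient references to the page */
  		(void)buf[rand() % THP_SIZE];
  	return NULL;
  }

  int main(void)
  {
  	pthread_t tid[8];
  	int i;

  	buf = aligned_alloc(THP_SIZE, THP_SIZE);
  	madvise((char *)buf, THP_SIZE, MADV_HUGEPAGE);
  	memset((char *)buf, 1, THP_SIZE);	/* fault in, ideally as a THP */

  	for (i = 0; i < 8; i++)
  		pthread_create(&tid[i], NULL, toucher, NULL);
  	sleep(1);

  	/* software stand-in for the hardware error injection */
  	madvise((char *)buf + THP_SIZE / 2, getpagesize(), MADV_HWPOISON);
  	pause();
  	return 0;
  }

With several threads continuously touching the page, the window
between Step A and Step C in the call path below is easy to hit.
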
The call path that returned -EAGAIN during testing is shown below:

  memory_failure()
    try_to_split_thp_page()
      split_huge_page()
        split_huge_page_to_list() {
          ...
          Step A: can_split_folio()   - Checked that the thp can be split.
          Step B: unmap_folio()
          Step C: folio_ref_freeze()  - Failed and returned -EAGAIN.
          ...
        }

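Step C is where the race window closes: folio_ref_freeze() succeeds
only if the folio's reference count still equals the value computed at
Step A. Roughly (a simplified sketch of the freeze semantics, not the
exact code in include/linux/page_ref.h):

  /*
   * Simplified sketch: atomically freeze the refcount to 0, but only
   * if it still equals the expected 'count'. Any extra reference
   * taken by another thread in the meantime makes the cmpxchg fail,
   * and the split then returns -EAGAIN.
   */
  static inline bool folio_ref_freeze(struct folio *folio, int count)
  {
  	return atomic_cmpxchg(&folio->_refcount, count, 0) == count;
  }
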
The testing logs indicated that some huge pages were split
successfully via the call path above (Step C succeeded for them).
However, other huge pages failed to split because Step C failed: their
reference counts had been increased between Step A and Step C.

Testing has shown that after receiving -EAGAIN, simply re-splitting the
hw-poisoned huge page within memory_failure() always results in the
same -EAGAIN. This is because memory_failure() runs in the context of
the affected process: until that process exits memory_failure() and is
terminated, its threads can keep increasing the reference count of the
hw-poisoned page.

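In other words, a naive in-place retry (a hypothetical fragment, not
part of this patch) does not help:

  /* Hypothetical naive fix -- every iteration loses the same race. */
  for (i = 0; i < 10; i++) {
  	res = try_to_split_thp_page(p);
  	if (res != -EAGAIN)
  		break;
  }

The loop runs while the racing threads are still alive, so it keeps
returning -EAGAIN.
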
To address this issue, defer the re-split of the hw-poisoned huge page
to a kernel worker. By the time the worker begins re-splitting the
hw-poisoned huge page, the affected process has already been
terminated, so its threads can no longer increase the reference count.
Experimental results have consistently shown that this worker
successfully re-splits these hw-poisoned huge pages on its first
attempt.

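Should the first attempt ever fail, the worker retries with
exponential backoff. With the constants used in the patch below
(SPLIT_THP_INIT_DELAYED_MS = 1, SPLIT_THP_MAX_RETRY_CNT = 10), the
per-attempt delays grow as 1 << retries:

  attempt      1   2   3   4    5    6    7    8    9   10
  delay (ms)   1   2   4   8   16   32   64  128  256  512

so the worker gives up after roughly one second (1023 ms of cumulative
delay) in the worst case.
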
The kernel log (before):

  [ 1116.862895] Memory failure: 0x4097fa7: recovery action for unsplit thp: Ignored

The kernel log (after):

  [ 793.573536] Memory failure: 0x2100dda: recovery action for unsplit thp: Delayed
  [ 793.574666] Memory failure: 0x2100dda: split unsplit thp successfully.

Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
---
 mm/memory-failure.c | 73 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 71 insertions(+), 2 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 660c21859118..0db4cf712a78 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -72,6 +72,60 @@ atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
 static bool hw_memory_failure __read_mostly = false;
 
+#define SPLIT_THP_MAX_RETRY_CNT		10
+#define SPLIT_THP_INIT_DELAYED_MS	1
+
+static bool split_thp_pending;
+
+struct split_thp_req {
+        struct delayed_work work;
+        struct page *thp;
+        int retries;
+};
+
+static void split_thp_work_fn(struct work_struct *work)
+{
+        struct split_thp_req *req = container_of(work, typeof(*req), work.work);
+        int ret;
+
+        /* Split the thp. */
+        get_page(req->thp);
+        lock_page(req->thp);
+        ret = split_huge_page(req->thp);
+        unlock_page(req->thp);
+        put_page(req->thp);
+
+        /* Retry with an exponential backoff. */
+        if (ret && ++req->retries < SPLIT_THP_MAX_RETRY_CNT) {
+                schedule_delayed_work(to_delayed_work(work),
+                                      msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS << req->retries));
+                return;
+        }
+
+        pr_err("%#lx: split unsplit thp %ssuccessfully.\n", page_to_pfn(req->thp), ret ? "un" : "");
+        kfree(req);
+        split_thp_pending = false;
+}
+
+static bool split_thp_delayed(struct page *thp)
+{
+        struct split_thp_req *req;
+
+        if (split_thp_pending)
+                return false;
+
+        req = kmalloc(sizeof(*req), GFP_ATOMIC);
+        if (!req)
+                return false;
+
+        req->thp = thp;
+        req->retries = 0;
+        INIT_DELAYED_WORK(&req->work, split_thp_work_fn);
+        split_thp_pending = true;
+        schedule_delayed_work(&req->work, msecs_to_jiffies(SPLIT_THP_INIT_DELAYED_MS));
+        return true;
+}
+
 static DEFINE_MUTEX(mf_mutex);
 
 void num_poisoned_pages_inc(unsigned long pfn)
 {
@@ -2275,8 +2329,23 @@ int memory_failure(unsigned long pfn, int flags)
                 * page is a valid handlable page.
                 */
                SetPageHasHWPoisoned(hpage);
-               if (try_to_split_thp_page(p) < 0) {
-                       res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
+               res = try_to_split_thp_page(p);
+               if (res < 0) {
+                       /*
+                        * Re-attempting try_to_split_thp_page() here could consistently
+                        * yield -EAGAIN, as the threads of the process may increment the
+                        * reference count of the huge page before the process exits
+                        * memory_failure() and terminates.
+                        *
+                        * Employ the kernel worker to re-split the huge page. By the time
+                        * this worker initiates the re-splitting process, the affected
+                        * process has already been terminated, preventing its threads from
+                        * incrementing the reference count.
+                        */
+                       if (res == -EAGAIN && split_thp_delayed(p))
+                               res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_DELAYED);
+                       else
+                               res = action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
                        goto unlock_mutex;
                }
                VM_BUG_ON_PAGE(!page_count(p), p);
--
2.17.1