From: Muchun Song <songmuchun@bytedance.com>
To: mike.kravetz@oracle.com, akpm@linux-foundation.org
Cc: n-horiguchi@ah.jp.nec.com, ak@linux.intel.com, mhocko@suse.cz,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Muchun Song <songmuchun@bytedance.com>, stable@vger.kernel.org
Subject: [PATCH v5 3/5] mm: hugetlb: fix a race between freeing and dissolving the page
Date: Thu, 14 Jan 2021 18:35:13 +0800
Message-Id: <20210114103515.12955-4-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210114103515.12955-1-songmuchun@bytedance.com>
References: <20210114103515.12955-1-songmuchun@bytedance.com>

There is a race condition between __free_huge_page() and
dissolve_free_huge_page().

CPU0:                                   CPU1:

// page_count(page) == 1
put_page(page)
  __free_huge_page(page)
                                        dissolve_free_huge_page(page)
                                          spin_lock(&hugetlb_lock)
                                          // PageHuge(page) && !page_count(page)
                                          update_and_free_page(page)
                                          // page is freed to the buddy
                                          spin_unlock(&hugetlb_lock)
    spin_lock(&hugetlb_lock)
    clear_page_huge_active(page)
    enqueue_huge_page(page)
    // It is wrong, the page is already freed
    spin_unlock(&hugetlb_lock)

The race window is between put_page() and dissolve_free_huge_page():
we must make sure that the page is already on the free list before it
is dissolved.
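To illustrate the idea outside the kernel, here is a minimal userspace
sketch (an analogy only, not the hugetlb code; all names in it are made
up for the illustration): a pthread mutex stands in for hugetlb_lock, a
plain boolean stands in for the PageHugeFreed() tail-page flag, and
sched_yield() stands in for cond_resched(). The dissolver only acts once
it observes, under the lock, that the page has reached the free list;
otherwise it drops the lock and retries.

#include <pthread.h>
#include <sched.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* ~ hugetlb_lock */
static bool on_free_list;  /* ~ PageHugeFreed(): set once enqueue is done */
static bool dissolved;

/* Models __free_huge_page(): the page reaches the free list under the lock. */
static void *free_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	on_free_list = true;       /* ~ enqueue_huge_page() + SetPageHugeFreed() */
	pthread_mutex_unlock(&lock);
	return NULL;
}

/* Models dissolve_free_huge_page(): retry until the page is really freed. */
static void *dissolve_path(void *arg)
{
	(void)arg;
retry:
	pthread_mutex_lock(&lock);
	if (!on_free_list) {
		/* Not on the free list yet: drop the lock and try again. */
		pthread_mutex_unlock(&lock);
		sched_yield();     /* ~ cond_resched() */
		goto retry;
	}
	dissolved = true;          /* safe: the page really is on the free list */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, dissolve_path, NULL);
	pthread_create(&b, NULL, free_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("dissolved after the page reached the free list: %d\n", dissolved);
	return 0;
}

The sketch builds with "gcc -pthread". The actual fix below enforces the
same ordering with a flag stored in page_private() of a tail page
(head + 4), likely to avoid consuming a scarce page flag.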
Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: stable@vger.kernel.org
---
 mm/hugetlb.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4741d60f8955..1b789d1fd06b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -79,6 +79,21 @@ DEFINE_SPINLOCK(hugetlb_lock);
 static int num_fault_mutexes;
 struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
+static inline bool PageHugeFreed(struct page *head)
+{
+	return page_private(head + 4) == -1UL;
+}
+
+static inline void SetPageHugeFreed(struct page *head)
+{
+	set_page_private(head + 4, -1UL);
+}
+
+static inline void ClearPageHugeFreed(struct page *head)
+{
+	set_page_private(head + 4, 0);
+}
+
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
 
@@ -1028,6 +1043,7 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
 	list_move(&page->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
+	SetPageHugeFreed(page);
 }
 
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
@@ -1044,6 +1060,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
 
 	list_move(&page->lru, &h->hugepage_activelist);
 	set_page_refcounted(page);
+	ClearPageHugeFreed(page);
 	h->free_huge_pages--;
 	h->free_huge_pages_node[nid]--;
 	return page;
@@ -1504,6 +1521,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
+	ClearPageHugeFreed(page);
 	spin_unlock(&hugetlb_lock);
 }
 
@@ -1754,6 +1772,7 @@ int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
 
+retry:
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
 		return 0;
@@ -1770,6 +1789,28 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
+
+		/*
+		 * We should make sure that the page is already on the free list
+		 * when it is dissolved.
+		 */
+		if (unlikely(!PageHugeFreed(head))) {
+			spin_unlock(&hugetlb_lock);
+
+			/*
+			 * Theoretically, we should return -EBUSY when we
+			 * encounter this race. In fact, we have a chance
+			 * to successfully dissolve the page if we do a
+			 * retry. Because the race window is quite small.
+			 * If we seize this opportunity, it is an optimization
+			 * for increasing the success rate of dissolving page.
+			 */
+			while (PageHeadHuge(head) && !PageHugeFreed(head))
+				cond_resched();
+
+			goto retry;
+		}
+
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
-- 
2.11.0