Date: Mon, 2 Sep 2024 15:00:37 +0100
From: Matthew Wilcox <willy@infradead.org>
To: "Pankaj Raghav (Samsung)"
Cc: brauner@kernel.org, sfr@canb.auug.org.au, akpm@linux-foundation.org,
	linux-next@vger.kernel.org, mcgrof@kernel.org, ziy@nvidia.com,
	da.gomez@samsung.com, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, Pankaj Raghav, Sven Schnelle
Subject: Re: [PATCH] mm: don't convert the page to folio before splitting in split_huge_page()
References:
 <20240902124931.506061-2-kernel@pankajraghav.com>
In-Reply-To: <20240902124931.506061-2-kernel@pankajraghav.com>

On Mon, Sep 02, 2024 at 02:49:32PM +0200, Pankaj Raghav (Samsung) wrote:
> From: Pankaj Raghav
>
> Sven reported that a commit from the bs > ps series was breaking the
> ksm LTP test [1].
>
> split_huge_page() takes precisely a page that is locked, and it also
> expects the folio that contains that page to be locked after that
> huge page has been split. The changes introduced converted the page to
> folio, and passed the head page to be split, which might not be locked,
> resulting in a kernel panic.
>
> This commit fixes it by always passing the correct page to be split from
> split_huge_page() with the appropriate minimum order for splitting.

This should be folded into the patch that is broken rather than made a
separate fix commit; otherwise it introduces a bisection hazard, which is
to be avoided when possible.

> [1] https://lore.kernel.org/linux-xfs/yt9dttf3r49e.fsf@linux.ibm.com/
>
> Reported-by: Sven Schnelle
> Fixes: fd031210c9ce ("mm: split a folio in minimum folio order chunks")
> Signed-off-by: Pankaj Raghav
> ---
> This applies to the vfs.blocksize branch on the vfs tree.
>
> @Christian, Stephen already sent a mail saying that there is a conflict
> between these changes and mm-unstable. For now, I have based these
> patches on your tree. Let me know if you need the same patch based on
> linux-next.
>
>  include/linux/huge_mm.h | 16 +++++++++++++++-
>  mm/huge_memory.c        | 21 +++++++++++++--------
>  2 files changed, 28 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 7c50aeed0522..7a570e0437c9 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -319,10 +319,24 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>  bool can_split_folio(struct folio *folio, int *pextra_pins);
>  int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>  		unsigned int new_order);
> +int min_order_for_split(struct folio *folio);
>  int split_folio_to_list(struct folio *folio, struct list_head *list);
>  static inline int split_huge_page(struct page *page)
>  {
> -	return split_folio(page_folio(page));
> +	struct folio *folio = page_folio(page);
> +	int ret = min_order_for_split(folio);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	/*
> +	 * split_huge_page() locks the page before splitting and
> +	 * expects the same page that has been split to be locked when
> +	 * returned. split_folio(page_folio(page)) cannot be used here
> +	 * because it converts the page to folio and passes the head
> +	 * page to be split.
> +	 */
> +	return split_huge_page_to_list_to_order(page, NULL, ret);
>  }
>  void deferred_split_folio(struct folio *folio);
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index c29af9451d92..9931ff1d9a9d 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3297,12 +3297,10 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>  	return ret;
>  }
>
> -int split_folio_to_list(struct folio *folio, struct list_head *list)
> +int min_order_for_split(struct folio *folio)
>  {
> -	unsigned int min_order = 0;
> -
>  	if (folio_test_anon(folio))
> -		goto out;
> +		return 0;
>
>  	if (!folio->mapping) {
>  		if (folio_test_pmd_mappable(folio))
> @@ -3310,10 +3308,17 @@ int split_folio_to_list(struct folio *folio, struct list_head *list)
>  		return -EBUSY;
>  	}
>
> -	min_order = mapping_min_folio_order(folio->mapping);
> -out:
> -	return split_huge_page_to_list_to_order(&folio->page, list,
> -					min_order);
> +	return mapping_min_folio_order(folio->mapping);
> +}
> +
> +int split_folio_to_list(struct folio *folio, struct list_head *list)
> +{
> +	int ret = min_order_for_split(folio);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	return split_huge_page_to_list_to_order(&folio->page, list, ret);
>  }
>
>  void __folio_undo_large_rmappable(struct folio *folio)
> --
> 2.44.1