From: "Huang, Ying"
To: Li Xinhai
Cc: Zi Yan, linux-mm@kvack.org, "Kirill A. Shutemov", akpm@linux-foundation.org
Shutemov" , akpm@linux-foundation.org Subject: Re: [PATCH V2] mm/gup.c: stricter check on THP migration entry during follow_pmd_mask References: <20211211151014.650778-1-lixinhai.lxh@gmail.com> <5c0f5b5c-7c22-a7ab-4add-fa6bd11f7af8@gmail.com> Date: Thu, 16 Dec 2021 16:55:49 +0800 In-Reply-To: (Li Xinhai's message of "Thu, 16 Dec 2021 10:00:04 +0800") Message-ID: <875yrpszca.fsf@yhuang6-desk2.ccr.corp.intel.com> User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux) MIME-Version: 1.0 Content-Type: text/plain; charset=ascii Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=HeLZXTn7; spf=none (imf06.hostedemail.com: domain of ying.huang@intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=ying.huang@intel.com; dmarc=pass (policy=none) header.from=intel.com X-Rspamd-Queue-Id: 5FDCC180016 X-Stat-Signature: cmmjdbisoadu4cpf76w3apq8rwen9fwi X-Rspamd-Server: rspam04 X-HE-Tag: 1639644957-28900 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Li Xinhai writes: > On 12/15/21 10:46 PM, Zi Yan wrote: >> On 15 Dec 2021, at 7:47, Li Xinhai wrote: >>=20 >>> On 12/11/21 11:16 PM, Li Xinhai wrote: >>>> >>>> >>>> On 12/11/21 11:10 PM, Li Xinhai wrote: >>>>> When BUG_ON check for THP migration entry, the exsiting code only c= heck >>>>> thp_migration_supported case, but not for !thp_migration_supported = case. >>>>> To make the BUG_ON check consistent, we need catch both cases. >>>>> >>>>> Move the BUG_ON check one step eariler, because if the bug happen w= e >>>>> should know it instead of depend on FOLL_MIGRATION been used by cal= ler. >>>>> >>>>> Because pmdval instead of *pmd is read by the is_pmd_migration_entr= y() >>>>> check, the existing code don't help to avoid useless locking within >>>>> pmd_migration_entry_wait(), so remove that check. >>>>> >>>>> Signed-off-by: Li Xinhai >>>>> --- >>>> V1->V2: >>>> Move the BUG_ON() check before if(!(flags & FOLL_MIGRATION)); and ad= d comments >>>> for it. >>>> >>> Yan, Ying and Kirill have been worked on this part of code, may help = to review. >>> >>> This change was based on my code review, no real bug has been observe= d. >>> >>> >>>>> mm/gup.c | 13 +++++++++---- >>>>> 1 file changed, 9 insertions(+), 4 deletions(-) >>>>> >>>>> diff --git a/mm/gup.c b/mm/gup.c >>>>> index 2c51e9748a6a..94d0e586ca0b 100644 >>>>> --- a/mm/gup.c >>>>> +++ b/mm/gup.c >>>>> @@ -642,12 +642,17 @@ static struct page *follow_pmd_mask(struct vm= _area_struct *vma, >>>>> =A0=A0 } >>>>> retry: >>>>> =A0=A0 if (!pmd_present(pmdval)) { >>>>> +=A0=A0=A0 /* >>>>> +=A0=A0=A0=A0 * Should never reach here, if thp migration is not su= pported; >>>>> +=A0=A0=A0=A0 * Otherwise, it must be a thp miration entry. >>>>> +=A0=A0=A0=A0 */ >>>>> +=A0=A0=A0 VM_BUG_ON(!thp_migration_supported() || >>>>> +=A0=A0=A0=A0=A0=A0=A0=A0 !is_pmd_migration_entry(pmdval)); >>>>> + >> This means VM_BUG_ON will be triggered when pmdval is not present >> and THP migration >> is not enabled. This can happen when it is pmd_none(). That is not a b= ug and should >> just return no_page_table(vma, flags). >>=20 > > Thanks for review! > The pmd_none() has been checked at the beginning of follow_pmd_mask() a= nd before > the 'retry' loop start again, so that possibility is excluded. 
>
> We will have VM_BUG_ON(1) if thp_migration_supported() is false, or
> VM_BUG_ON(!is_pmd_migration_entry(pmdval)) if thp_migration_supported()
> is true, by the compiler.
>
> If we left the !thp_migration_supported() case out of the VM_BUG_ON(),
> it would cause the misunderstanding that that case is not a bug in this
> code context.

I think your code works. The patch description may be improved. If
!thp_migration_supported() and !pmd_present(), the original code may loop
forever in theory.

Best Regards,
Huang, Ying

>>>>>  		if (likely(!(flags & FOLL_MIGRATION)))
>>>>>  			return no_page_table(vma, flags);
>>>>> -		VM_BUG_ON(thp_migration_supported() &&
>>>>> -			  !is_pmd_migration_entry(pmdval));
>>>>> -		if (is_pmd_migration_entry(pmdval))
>>>>> -			pmd_migration_entry_wait(mm, pmd);
>>>>> +
>>>>> +		pmd_migration_entry_wait(mm, pmd);
>>>>>  		pmdval = READ_ONCE(*pmd);
>>>>>  		/*
>>>>>  		 * MADV_DONTNEED may convert the pmd to null because
>>>>>
>> --
>> Best Regards,
>> Yan, Zi
>>
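For context, below is a minimal sketch of the pre-patch retry loop under
discussion. It is a simplified reconstruction from the hunks quoted above,
not the kernel's actual follow_pmd_mask(); the function name
sketch_follow_pmd() and the surrounding declarations are filled in only for
illustration, while pmd_present(), pmd_none(), is_pmd_migration_entry(),
pmd_migration_entry_wait(), no_page_table(), thp_migration_supported() and
VM_BUG_ON() are the helpers named in the diff and discussion. It shows
Ying's theoretical dead loop: with THP migration compiled out,
is_pmd_migration_entry() is always false, so a FOLL_MIGRATION caller seeing
a non-present, non-none pmd never waits and never makes progress. It also
shows why the old check folds to VM_BUG_ON(0) in that configuration,
whereas the new check in the patch folds to VM_BUG_ON(1), which is Xinhai's
point about catching that case.

/*
 * Simplified reconstruction of the pre-patch loop (not the full
 * follow_pmd_mask()); only the !pmd_present() path is shown.
 */
static struct page *sketch_follow_pmd(struct vm_area_struct *vma,
				      pmd_t *pmd, unsigned int flags)
{
	struct mm_struct *mm = vma->vm_mm;
	pmd_t pmdval = READ_ONCE(*pmd);

	/* pmd_none() is already rejected before the retry loop. */
	if (pmd_none(pmdval))
		return no_page_table(vma, flags);
retry:
	if (!pmd_present(pmdval)) {
		if (likely(!(flags & FOLL_MIGRATION)))
			return no_page_table(vma, flags);
		/* Folds to VM_BUG_ON(0) when THP migration is not supported. */
		VM_BUG_ON(thp_migration_supported() &&
			  !is_pmd_migration_entry(pmdval));
		/*
		 * With !thp_migration_supported(), is_pmd_migration_entry()
		 * is always false, so we never wait ...
		 */
		if (is_pmd_migration_entry(pmdval))
			pmd_migration_entry_wait(mm, pmd);
		/* ... and *pmd has no particular reason to change ... */
		pmdval = READ_ONCE(*pmd);
		if (pmd_none(pmdval))
			return no_page_table(vma, flags);
		/* ... so a non-present, non-none pmdval retries forever. */
		goto retry;
	}
	/* present-pmd handling elided */
	return NULL;
}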