From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 11 Jun 2022 14:43:37 -0700
From: Zach O'Keefe <zokeefe@google.com>
To: Yang Shi
Cc: Vlastimil Babka, "Kirill A. Shutemov", Matthew Wilcox, Andrew Morton,
 Linux MM <linux-mm@kvack.org>, Linux Kernel Mailing List
Subject: Re: [v3 PATCH 4/7] mm: khugepaged: use transhuge_vma_suitable replace open-code
References: <20220606214414.736109-1-shy828301@gmail.com>
 <20220606214414.736109-5-shy828301@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On 10 Jun 20:25, Yang Shi wrote:
> On Fri, Jun 10, 2022 at 5:28 PM Zach O'Keefe wrote:
> >
> > On Fri, Jun 10, 2022 at 3:04 PM Yang Shi wrote:
> > >
> > > On Fri, Jun 10, 2022 at 9:59 AM Yang Shi wrote:
> > > >
> > > > On Thu, Jun 9, 2022 at 6:52 PM Zach O'Keefe wrote:
> > > > >
> > > > > On Mon, Jun 6, 2022 at 2:44 PM Yang Shi wrote:
> > > > > >
> > > > > > The hugepage_vma_revalidate() needs to check if the address is still in
> > > > > > the aligned HPAGE_PMD_SIZE area of the vma when reacquiring mmap_lock,
> > > > > > but it was open-coded, use transhuge_vma_suitable() to do the job. And
> > > > > > add proper comments for transhuge_vma_suitable().
> > > > > >
> > > > > > Signed-off-by: Yang Shi
> > > > > > ---
> > > > > >  include/linux/huge_mm.h | 6 ++++++
> > > > > >  mm/khugepaged.c         | 5 +----
> > > > > >  2 files changed, 7 insertions(+), 4 deletions(-)
> > > > > >
> > > > > > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > > > > > index a8f61db47f2a..79d5919beb83 100644
> > > > > > --- a/include/linux/huge_mm.h
> > > > > > +++ b/include/linux/huge_mm.h
> > > > > > @@ -128,6 +128,12 @@ static inline bool transhuge_vma_size_ok(struct vm_area_struct *vma)
> > > > > >  	return false;
> > > > > >  }
> > > > > >
> > > > > > +/*
> > > > > > + * Do the below checks:
> > > > > > + * - For non-anon vma, check if the vm_pgoff is HPAGE_PMD_NR aligned.
> > > > > > + * - For all vmas, check if the haddr is in an aligned HPAGE_PMD_SIZE
> > > > > > + *   area.
> > > > > > + */
> > > > >
> > > > > AFAIK we aren't checking if vm_pgoff is HPAGE_PMD_NR aligned, but
> > > > > rather that linear_page_index(vma, round_up(vma->vm_start,
> > > > > HPAGE_PMD_SIZE)) is HPAGE_PMD_NR aligned within vma->vm_file. I was
> > > >
> > > > Yeah, you are right.
> > > >
> > > > > pretty confused about this (hopefully I have it right now - if not -
> > > > > case and point :) ), so it might be a good opportunity to add some
> > > > > extra commentary to help future travelers understand why this
> > > > > constraint exists.
> > > >
> > > > I'm not fully sure I understand this 100%. I think this is related to
> > > > how page cache is structured. I will try to add more comments.
> > >
> > > How's about "The underlying THP is always properly aligned in page
> > > cache, but it may be across the boundary of VMA if the VMA is
> > > misaligned, so the THP can't be PMD mapped for this case."
> >
> > I could certainly still be wrong / am learning here - but I *thought*
> > the reason for this check was to make sure that the hugepage
> > to-be-collapsed is naturally aligned within the file (since, AFAIK,
> > without this constraint, different mm's might have different ideas
> > about where hugepages in the file should be).
>
> The hugepage is definitely naturally aligned within the file, this is
> guaranteed by how page cache is organized, you could find some example
> code from shmem fault, for example, the below code snippet:
>
>         hindex = round_down(index, folio_nr_pages(folio));
>         error = shmem_add_to_page_cache(folio, mapping, hindex, NULL,
>                                         gfp & GFP_RECLAIM_MASK, charge_mm);
>
> The index is actually rounded down to HPAGE_PMD_NR aligned.

Thanks for the reference here.

> The check in hugepage_vma_check() is used to guarantee there is an PMD
> aligned area in the vma exactly overlapping with a PMD range in the
> page cache. For example, you have a vma starting from 0x1000 maps to
> the file's page offset of 0, even though you get THP for the file, it
> can not be PMD mapped to the vma. But if it maps to the file's page
> offset of 1, then starting from 0x200000 (assuming the vma is big
> enough) it can PMD map the second THP in the page cache. Does it make
> sense?
>

Yes, this makes sense - thanks for providing your insight.
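For my own notes, the file-offset alignment being described can be written
out as a small userspace sketch. The constants assume 4KiB pages and 2MiB
PMDs (x86-64), and the helper names below are mine - it only mirrors the
kernel's linear_page_index() arithmetic, it is not the kernel code:

```c
#include <assert.h>

/* Assumed x86-64 values: 4KiB base pages, 2MiB PMD-sized huge pages. */
#define PAGE_SHIFT     12
#define HPAGE_PMD_SIZE (2UL << 20)
#define HPAGE_PMD_NR   (HPAGE_PMD_SIZE >> PAGE_SHIFT)	/* 512 pages */

/*
 * File page index backing address 'addr' in a vma that starts at
 * 'vm_start' and maps the file from page offset 'vm_pgoff'
 * (mirrors the kernel's linear_page_index()).
 */
static unsigned long file_page_index(unsigned long vm_start,
				     unsigned long vm_pgoff,
				     unsigned long addr)
{
	return ((addr - vm_start) >> PAGE_SHIFT) + vm_pgoff;
}

/*
 * A PMD-aligned address 'haddr' can be PMD-mapped only if it maps to a
 * HPAGE_PMD_NR-aligned page offset in the file, i.e. the start of a THP
 * in the page cache.
 */
static int pmd_mappable(unsigned long vm_start, unsigned long vm_pgoff,
			unsigned long haddr)
{
	return (file_page_index(vm_start, vm_pgoff, haddr)
		& (HPAGE_PMD_NR - 1)) == 0;
}
```

Plugging in your example: a vma at 0x1000 mapping file page offset 0 puts
address 0x200000 at file page index 511, so no PMD-aligned address lines
up with a THP; with page offset 1 the same address hits index 512, the
file's second THP.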
I think I was basically thinking the same thing, except your description
is more accurate: namely, that it is *some* PMD-aligned range covered by
the vma that maps to a hugepage-aligned offset in the file (I mistakenly
took this to be the *first* PMD-aligned address >= vma->vm_start). Also,
with this in mind, your previously suggested comment makes sense.

If I had to take a stab at it, I would say something like: "The hugepage
is guaranteed to be hugepage-aligned within the file, but we must check
that the PMD-aligned addresses in the VMA map to PMD-aligned offsets
within the file, else the hugepage will not be PMD-mappable". WDYT?

> > > > > Also I wonder while we're at it if we can rename this to
> > > > > transhuge_addr_aligned() or transhuge_addr_suitable() or something.
> > > >
> > > > I think it is still actually used to check vma.
> > > >
> > > > >
> > > > > Otherwise I think the change is a nice cleanup.
> > > > >
> > > > > >  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
> > > > > >  		unsigned long addr)
> > > > > >  {
> > > > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > > > index 7a5d1c1a1833..ca1754d3a827 100644
> > > > > > --- a/mm/khugepaged.c
> > > > > > +++ b/mm/khugepaged.c
> > > > > > @@ -951,7 +951,6 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > >  					struct vm_area_struct **vmap)
> > > > > >  {
> > > > > >  	struct vm_area_struct *vma;
> > > > > > -	unsigned long hstart, hend;
> > > > > >
> > > > > >  	if (unlikely(khugepaged_test_exit(mm)))
> > > > > >  		return SCAN_ANY_PROCESS;
> > > > > >
> > > > > > @@ -960,9 +959,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> > > > > >  	if (!vma)
> > > > > >  		return SCAN_VMA_NULL;
> > > > > >
> > > > > > -	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
> > > > > > -	hend = vma->vm_end & HPAGE_PMD_MASK;
> > > > > > -	if (address < hstart || address + HPAGE_PMD_SIZE > hend)
> > > > > > +	if (!transhuge_vma_suitable(vma, address))
> > > > > >  		return SCAN_ADDRESS_RANGE;
> > > > > >  	if (!hugepage_vma_check(vma, vma->vm_flags))
> > > > > >  		return SCAN_VMA_CHECK;
> > > > > > --
> > > > > > 2.26.3
> > > > > >
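For the archives, the open-coded range check that this patch replaces
boils down to the arithmetic below. This is a userspace sketch only
(2MiB PMDs assumed, helper name is mine), not the kernel implementation:

```c
#include <assert.h>

#define HPAGE_PMD_SIZE (2UL << 20)		/* assume 2MiB PMDs */
#define HPAGE_PMD_MASK (~(HPAGE_PMD_SIZE - 1))

/*
 * The check removed from hugepage_vma_revalidate(): does the PMD-sized
 * region starting at 'address' fit entirely inside the PMD-aligned
 * portion of [vm_start, vm_end)?
 */
static int addr_in_aligned_range(unsigned long vm_start,
				 unsigned long vm_end,
				 unsigned long address)
{
	/* First PMD boundary at or above vm_start (round up). */
	unsigned long hstart = (vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
	/* Last PMD boundary at or below vm_end (round down). */
	unsigned long hend = vm_end & HPAGE_PMD_MASK;

	return !(address < hstart || address + HPAGE_PMD_SIZE > hend);
}
```

For a vma [0x1000, 0x600000), hstart rounds up to 0x200000 and hend rounds
down to 0x600000, so 0x200000 and 0x400000 are the only addresses that
pass - exactly the haddr check transhuge_vma_suitable() performs.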