From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: Wei Xu <weixugc@google.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux MM <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Rientjes <rientjes@google.com>,
Paul Turner <pjt@google.com>, Greg Thelen <gthelen@google.com>,
Ingo Molnar <mingo@redhat.com>, Will Deacon <will@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
"H. Peter Anvin" <hpa@zytor.com>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Jiri Slaby <jirislaby@kernel.org>,
Muchun Song <songmuchun@bytedance.com>,
Fusion Future <qydwhotmail@gmail.com>,
Hugh Dickins <hughd@google.com>, Zi Yan <ziy@nvidia.com>,
Anshuman Khandual <anshuman.khandual@arm.com>
Subject: Re: [PATCH 2/3] mm/page_table_check: check entries at pud and pmd levels
Date: Thu, 20 Jan 2022 13:25:45 -0500
Message-ID: <CA+CK2bC5WJxdb9tdZj8eiB1PeOfV-D4v1Tpi+TEZhMskbZ6k4A@mail.gmail.com>
In-Reply-To: <CAAPL-u_kP5E5iOtyztsXLJG0YNFzganD3djr2nW0vTkUpv=A9Q@mail.gmail.com>
Hi Wei,
Thank you for looking at this patch.
> > +static void pmd_clear_level(struct mm_struct *mm, unsigned long addr,
> > +                            pmd_t *pmdp)
> > +{
> > +        unsigned long i;
> > +
> > +        for (i = 0; i < PTRS_PER_PMD; i++) {
> > +                pmd_t old_pmd = *pmdp;
> > +
> > +                if (pmd_user_accessible_page(old_pmd)) {
> > +                        page_table_check_clear(mm, addr, pmd_pfn(old_pmd),
> > +                                               PMD_PAGE_SIZE >> PAGE_SHIFT);
> > +                } else if (!pmd_bad(old_pmd) && !pmd_leaf(old_pmd)) {
> > +                        pte_t *ptep = pte_offset_map(&old_pmd, addr);
> > +
> > +                        pte_clear_level(mm, addr, ptep);
> > +                        pte_unmap(ptep);
> > +                }
>
> You can call __page_table_check_pmd_clear(mm, addr, old_pmd)
> instead to share the new code.
Yeap.
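
With that, the whole helper collapses to roughly this (just a sketch on
top of this patch, untested):

static void pmd_clear_level(struct mm_struct *mm, unsigned long addr,
                            pmd_t *pmdp)
{
        unsigned long i;

        for (i = 0; i < PTRS_PER_PMD; i++) {
                /*
                 * __page_table_check_pmd_clear() already handles both the
                 * user-accessible leaf case and the descent into the pte
                 * level, so the open-coded branches go away.
                 */
                __page_table_check_pmd_clear(mm, addr, *pmdp);

                addr += PMD_PAGE_SIZE;
                pmdp++;
        }
}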
>
> >
> > +                addr += PMD_PAGE_SIZE;
> > +                pmdp++;
> > +        }
> > +}
> > +
> >  /*
> >   * page is on free list, or is being allocated, verify that counters are zeroes
> >   * crash if they are not.
> > @@ -186,6 +220,11 @@ void __page_table_check_pmd_clear(struct mm_struct *mm, unsigned long addr,
> >          if (pmd_user_accessible_page(pmd)) {
> >                  page_table_check_clear(mm, addr, pmd_pfn(pmd),
> >                                         PMD_PAGE_SIZE >> PAGE_SHIFT);
> > +        } else if (!pmd_bad(pmd) && !pmd_leaf(pmd)) {
> > +                pte_t *ptep = pte_offset_map(&pmd, addr);
> > +
> > +                pte_clear_level(mm, addr, ptep);
> > +                pte_unmap(ptep);
> >          }
> >  }
> >  EXPORT_SYMBOL(__page_table_check_pmd_clear);
> > @@ -199,6 +238,10 @@ void __page_table_check_pud_clear(struct mm_struct *mm, unsigned long addr,
> >          if (pud_user_accessible_page(pud)) {
> >                  page_table_check_clear(mm, addr, pud_pfn(pud),
> >                                         PUD_PAGE_SIZE >> PAGE_SHIFT);
> > +        } else if (!pud_bad(pud) && !pud_leaf(pud)) {
> > +                pmd_t *pmdp = pmd_offset(&pud, addr);
> > +
> > +                pmd_clear_level(mm, addr, pmdp);
> >          }
> >  }
> >  EXPORT_SYMBOL(__page_table_check_pud_clear);
> > @@ -237,6 +280,11 @@ void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,
> >          if (pmd_user_accessible_page(old_pmd)) {
> >                  page_table_check_clear(mm, addr, pmd_pfn(old_pmd),
> >                                         PMD_PAGE_SIZE >> PAGE_SHIFT);
> > +        } else if (!pmd_bad(old_pmd) && !pmd_leaf(old_pmd)) {
> > +                pte_t *ptep = pte_offset_map(&old_pmd, addr);
> > +
> > +                pte_clear_level(mm, addr, ptep);
> > +                pte_unmap(ptep);
> >          }
> >
>
> How about replacing the above code with
> __page_table_check_pmd_clear(mm, addr, old_pmd)?
OK
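
The old_pmd handling in __page_table_check_pmd_set() then becomes a single
call, something like this (sketch only, modulo the init_mm early return at
the top of the function and the exact page_table_check_set() arguments,
untested):

        pmd_t old_pmd = *pmdp;

        /* Drop accounting for whatever old_pmd covered, at any level. */
        __page_table_check_pmd_clear(mm, addr, old_pmd);

        if (pmd_user_accessible_page(pmd)) {
                page_table_check_set(mm, addr, pmd_pfn(pmd),
                                     PMD_PAGE_SIZE >> PAGE_SHIFT,
                                     pmd_write(pmd));
        }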
>
> >          if (pmd_user_accessible_page(pmd)) {
> > @@ -259,6 +307,10 @@ void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,
> >          if (pud_user_accessible_page(old_pud)) {
> >                  page_table_check_clear(mm, addr, pud_pfn(old_pud),
> >                                         PUD_PAGE_SIZE >> PAGE_SHIFT);
> > +        } else if (!pud_bad(old_pud) && !pud_leaf(old_pud)) {
> > +                pmd_t *pmdp = pmd_offset(&old_pud, addr);
> > +
> > +                pmd_clear_level(mm, addr, pmdp);
> >          }
>
> Replacing with __page_table_check_pud_clear(mm, addr, old_pud)?
OK
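
Same shape at the pud level, with the old_pud branches replaced by a single
call (again just a sketch, untested):

        pud_t old_pud = *pudp;

        __page_table_check_pud_clear(mm, addr, old_pud);

        if (pud_user_accessible_page(pud)) {
                page_table_check_set(mm, addr, pud_pfn(pud),
                                     PUD_PAGE_SIZE >> PAGE_SHIFT,
                                     pud_write(pud));
        }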
Good suggestions, I will simplify the code accordingly.
Pasha