From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Yin Fengwei <fengwei.yin@intel.com>, Yu Zhao <yuzhao@google.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Yang Shi <shy828301@gmail.com>,
	"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
	Luis Chamberlain <mcgrof@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 1/4] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap()
Date: Mon, 17 Jul 2023 14:21:31 +0100	[thread overview]
Message-ID: <e08619d6-1d4a-6507-05a3-1ce901355d18@arm.com> (raw)
In-Reply-To: <bf84d967-545f-92f8-bb82-2cbc0a54ddbc@redhat.com>

On 17/07/2023 14:19, David Hildenbrand wrote:
> On 17.07.23 15:13, Ryan Roberts wrote:
>> On 17/07/2023 14:00, David Hildenbrand wrote:
>>> On 14.07.23 18:17, Ryan Roberts wrote:
>>>> In preparation for FLEXIBLE_THP support, improve
>>>> folio_add_new_anon_rmap() to allow a non-pmd-mappable, large folio to be
>>>> passed to it. In this case, all contained pages are accounted using the
>>>> order-0 folio (or base page) scheme.
>>>>
>>>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>>>> Reviewed-by: Yu Zhao <yuzhao@google.com>
>>>> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
>>>> ---
>>>>    mm/rmap.c | 28 +++++++++++++++++++++-------
>>>>    1 file changed, 21 insertions(+), 7 deletions(-)
>>>>
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index 0c0d8857dfce..f293d072368a 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1278,31 +1278,45 @@ void page_add_anon_rmap(struct page *page, struct vm_area_struct *vma,
>>>>     * This means the inc-and-test can be bypassed.
>>>>     * The folio does not have to be locked.
>>>>     *
>>>> - * If the folio is large, it is accounted as a THP.  As the folio
>>>> + * If the folio is pmd-mappable, it is accounted as a THP.  As the folio
>>>>     * is new, it's assumed to be mapped exclusively by a single process.
>>>>     */
>>>>    void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>>>>            unsigned long address)
>>>>    {
>>>> -    int nr;
>>>> +    int nr = folio_nr_pages(folio);
>>>>
>>>> -    VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
>>>> +    VM_BUG_ON_VMA(address < vma->vm_start ||
>>>> +            address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
>>>>        __folio_set_swapbacked(folio);
>>>>
>>>> -    if (likely(!folio_test_pmd_mappable(folio))) {
>>>> +    if (!folio_test_large(folio)) {
>>>
>>> Why remove the "likely" here? The patch itself does not change anything about
>>> that condition.
>>
>> Good question; I'm not sure why. Will have to put it down to bad copy/paste
>> fixup. Will put it back in the next version.
>>
>>>
>>>>            /* increment count (starts at -1) */
>>>>            atomic_set(&folio->_mapcount, 0);
>>>> -        nr = 1;
>>>> +        __page_set_anon_rmap(folio, &folio->page, vma, address, 1);
>>>> +    } else if (!folio_test_pmd_mappable(folio)) {
>>>> +        int i;
>>>> +
>>>> +        for (i = 0; i < nr; i++) {
>>>> +            struct page *page = folio_page(folio, i);
>>>> +
>>>> +            /* increment count (starts at -1) */
>>>> +            atomic_set(&page->_mapcount, 0);
>>>> +            __page_set_anon_rmap(folio, page, vma,
>>>> +                    address + (i << PAGE_SHIFT), 1);
>>>> +        }
>>>> +
>>>> +        /* increment count (starts at 0) */
>>>
>>> That comment is a bit misleading. We're not talking about a mapcount as in the
>>> other cases here.
>>
>> Correct, I'm talking about _nr_pages_mapped, which starts at 0, not -1 like
>> _mapcount. The comment was intended to be in the style used in other similar
>> places in rmap.c. I could change it to: "_nr_pages_mapped is 0-based, so set it
>> to the number of pages in the folio" or remove it entirely? What do you prefer?
>>
> 
> We only have to comment what's weird, not what's normal.
> 
> IOW, we also didn't have such a comment in the existing code when doing
> atomic_set(&folio->_nr_pages_mapped, COMPOUND_MAPPED);
> 
> 
> What might make sense here is a simple
> 
> "All pages of the folio are PTE-mapped."
> 

ACK - thanks.

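For reference, a rough sketch of how the non-pmd-mappable path could read after folding in the two points agreed above (likely() restored on the small-folio test, and David's suggested comment). The trailing atomic_set() on _nr_pages_mapped is inferred from the discussion rather than quoted from the patch, so treat this as a sketch, not the actual v4 hunk:

void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
		unsigned long address)
{
	int nr = folio_nr_pages(folio);

	VM_BUG_ON_VMA(address < vma->vm_start ||
			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
	__folio_set_swapbacked(folio);

	if (likely(!folio_test_large(folio))) {
		/* increment count (starts at -1) */
		atomic_set(&folio->_mapcount, 0);
		__page_set_anon_rmap(folio, &folio->page, vma, address, 1);
	} else if (!folio_test_pmd_mappable(folio)) {
		int i;

		/* Large but not pmd-mappable: account each page as order-0 */
		for (i = 0; i < nr; i++) {
			struct page *page = folio_page(folio, i);

			/* increment count (starts at -1) */
			atomic_set(&page->_mapcount, 0);
			__page_set_anon_rmap(folio, page, vma,
					address + (i << PAGE_SHIFT), 1);
		}

		/* All pages of the folio are PTE-mapped. */
		atomic_set(&folio->_nr_pages_mapped, nr);
	} else {
		/* pmd-mappable THP case: unchanged from the existing code, not quoted here */
	}
	...
}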

Thread overview: 38+ messages
2023-07-14 16:04 [PATCH v3 0/4] variable-order, large folios for anonymous memory Ryan Roberts
2023-07-14 16:17 ` [PATCH v3 1/4] mm: Non-pmd-mappable, large folios for folio_add_new_anon_rmap() Ryan Roberts
2023-07-14 16:52   ` Yu Zhao
2023-07-14 18:01     ` Ryan Roberts
2023-07-17 13:00   ` David Hildenbrand
2023-07-17 13:13     ` Ryan Roberts
2023-07-17 13:19       ` David Hildenbrand
2023-07-17 13:21         ` Ryan Roberts [this message]
2023-07-14 16:17 ` [PATCH v3 2/4] mm: Default implementation of arch_wants_pte_order() Ryan Roberts
2023-07-14 16:54   ` Yu Zhao
2023-07-17 11:13   ` Yin Fengwei
2023-07-17 13:01   ` David Hildenbrand
2023-07-17 13:15     ` Ryan Roberts
2023-07-14 16:17 ` [PATCH v3 3/4] mm: FLEXIBLE_THP for improved performance Ryan Roberts
2023-07-14 17:17   ` Yu Zhao
2023-07-14 17:59     ` Ryan Roberts
2023-07-14 22:11       ` Yu Zhao
2023-07-17 13:36         ` Ryan Roberts
2023-07-17 19:31           ` Yu Zhao
2023-07-17 20:35             ` Yu Zhao
2023-07-17 23:37           ` Hugh Dickins
2023-07-18 10:36             ` Ryan Roberts
2023-07-17 13:06     ` David Hildenbrand
2023-07-17 13:20       ` Ryan Roberts
2023-07-17 13:56         ` David Hildenbrand
2023-07-17 14:47           ` Ryan Roberts
2023-07-17 14:55             ` David Hildenbrand
2023-07-17 17:07       ` Yu Zhao
2023-07-17 17:16         ` David Hildenbrand
2023-07-21 10:57   ` Ryan Roberts
2023-07-14 16:17 ` [PATCH v3 4/4] arm64: mm: Override arch_wants_pte_order() Ryan Roberts
2023-07-14 16:47   ` Yu Zhao
2023-07-24 11:59 ` [PATCH v3 0/4] variable-order, large folios for anonymous memory Ryan Roberts
2023-07-24 14:58   ` Zi Yan
2023-07-24 15:41     ` Ryan Roberts
2023-07-26  7:36       ` Itaru Kitayama
2023-07-26  8:42         ` Ryan Roberts
2023-07-26  8:47           ` Itaru Kitayama
