From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Yin Fengwei <fengwei.yin@intel.com>, Yu Zhao <yuzhao@google.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Anshuman Khandual <anshuman.khandual@arm.com>,
Yang Shi <shy828301@gmail.com>,
"Huang, Ying" <ying.huang@intel.com>, Zi Yan <ziy@nvidia.com>,
Luis Chamberlain <mcgrof@kernel.org>,
Itaru Kitayama <itaru.kitayama@gmail.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
John Hubbard <jhubbard@nvidia.com>,
David Rientjes <rientjes@google.com>,
Vlastimil Babka <vbabka@suse.cz>, Hugh Dickins <hughd@google.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Barry Song <21cnbao@gmail.com>,
Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v8 03/10] mm: thp: Introduce multi-size THP sysfs interface
Date: Thu, 7 Dec 2023 11:44:03 +0000
Message-ID: <856f4317-43d9-41bb-8096-1eef69c86d3b@arm.com>
In-Reply-To: <44fdd46b-ad46-4ae2-a20f-20374acdf464@redhat.com>

On 07/12/2023 11:25, David Hildenbrand wrote:
> On 07.12.23 12:22, Ryan Roberts wrote:
>> On 07/12/2023 11:13, David Hildenbrand wrote:
>>>>>
>>>>>> +
>>>>>> if (!vma->vm_mm) /* vdso */
>>>>>> - return false;
>>>>>> + return 0;
>>>>>> /*
>>>>>> * Explicitly disabled through madvise or prctl, or some
>>>>>> @@ -88,16 +141,16 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
>>>>>> * */
>>>>>> if ((vm_flags & VM_NOHUGEPAGE) ||
>>>>>> test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
>>>>>> - return false;
>>>>>> + return 0;
>>>>>> /*
>>>>>> * If the hardware/firmware marked hugepage support disabled.
>>>>>> */
>>>>>> if (transparent_hugepage_flags & (1 <<
>>>>>> TRANSPARENT_HUGEPAGE_UNSUPPORTED))
>>>>>> - return false;
>>>>>> + return 0;
>>>>>> /* khugepaged doesn't collapse DAX vma, but page fault is fine. */
>>>>>> if (vma_is_dax(vma))
>>>>>> - return in_pf;
>>>>>> + return in_pf ? orders : 0;
>>>>>> /*
>>>>>> * khugepaged special VMA and hugetlb VMA.
>>>>>> @@ -105,17 +158,29 @@ bool hugepage_vma_check(struct vm_area_struct *vma, unsigned long vm_flags,
>>>>>> * VM_MIXEDMAP set.
>>>>>> */
>>>>>> if (!in_pf && !smaps && (vm_flags & VM_NO_KHUGEPAGED))
>>>>>> - return false;
>>>>>> + return 0;
>>>>>> /*
>>>>>> - * Check alignment for file vma and size for both file and anon vma.
>>>>>> + * Check alignment for file vma and size for both file and anon vma by
>>>>>> + * filtering out the unsuitable orders.
>>>>>> *
>>>>>> * Skip the check for page fault. Huge fault does the check in fault
>>>>>> - * handlers. And this check is not suitable for huge PUD fault.
>>>>>> + * handlers.
>>>>>> */
>>>>>> - if (!in_pf &&
>>>>>> - !transhuge_vma_suitable(vma, (vma->vm_end - HPAGE_PMD_SIZE)))
>>>>>> - return false;
>>>>>> + if (!in_pf) {
>>>>>> + int order = first_order(orders);
>>>>>> + unsigned long addr;
>>>>>> +
>>>>>> + while (orders) {
>>>>>> + addr = vma->vm_end - (PAGE_SIZE << order);
>>>>>> + if (thp_vma_suitable_orders(vma, addr, BIT(order)))
>>>>>> + break;
>>>>>
>>>>> Comment: you'd want a "thp_vma_suitable_order" helper here. But maybe the
>>>>> compiler is smart enough to optimize the loop and everything else out.
>>>>
>>>> I'm happy to refactor so that thp_vma_suitable_order() is the basic primitive,
>>>> then make thp_vma_suitable_orders() a loop that calls thp_vma_suitable_order()
>>>> (that's basically how it is laid out already, just all in one function). Is
>>>> that
>>>> what you are requesting?
>>>
>>> You got the spirit, yes.
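
To make sure we're imagining the same shape, something like the below is what
I had in mind (untested sketch; thp_vma_suitable_order() is just the body of
the existing orders version specialized to a single order, and first_order()
is the helper from this series):

static inline bool thp_vma_suitable_order(struct vm_area_struct *vma,
		unsigned long addr, int order)
{
	unsigned long hpage_size = PAGE_SIZE << order;
	unsigned long haddr;

	/* Don't have to check pgoff for anonymous vma. */
	if (!vma_is_anonymous(vma) &&
	    !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
			hpage_size >> PAGE_SHIFT))
		return false;

	/* The naturally aligned range must lie entirely within the vma. */
	haddr = ALIGN_DOWN(addr, hpage_size);
	return haddr >= vma->vm_start && haddr + hpage_size <= vma->vm_end;
}

static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
		unsigned long addr, unsigned long orders)
{
	/*
	 * Filter unsuitable orders out of the set, highest first. We can
	 * stop at the first suitable order: if an order fits at addr, every
	 * lower order must fit too.
	 */
	while (orders) {
		int order = first_order(orders);

		if (thp_vma_suitable_order(vma, addr, order))
			break;
		orders &= ~BIT(order);
	}

	return orders;
}

If that matches what you had in mind, I'll fold it in for the next version.
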
>>>
>>>>>
>>>>> [...]
>>>>>
>>>>>> +
>>>>>> +static ssize_t thpsize_enabled_store(struct kobject *kobj,
>>>>>> + struct kobj_attribute *attr,
>>>>>> + const char *buf, size_t count)
>>>>>> +{
>>>>>> + int order = to_thpsize(kobj)->order;
>>>>>> + ssize_t ret = count;
>>>>>> +
>>>>>> + if (sysfs_streq(buf, "always")) {
>>>>>> + set_bit(order, &huge_anon_orders_always);
>>>>>> + clear_bit(order, &huge_anon_orders_inherit);
>>>>>> + clear_bit(order, &huge_anon_orders_madvise);
>>>>>> + } else if (sysfs_streq(buf, "inherit")) {
>>>>>> + set_bit(order, &huge_anon_orders_inherit);
>>>>>> + clear_bit(order, &huge_anon_orders_always);
>>>>>> + clear_bit(order, &huge_anon_orders_madvise);
>>>>>> + } else if (sysfs_streq(buf, "madvise")) {
>>>>>> + set_bit(order, &huge_anon_orders_madvise);
>>>>>> + clear_bit(order, &huge_anon_orders_always);
>>>>>> + clear_bit(order, &huge_anon_orders_inherit);
>>>>>> + } else if (sysfs_streq(buf, "never")) {
>>>>>> + clear_bit(order, &huge_anon_orders_always);
>>>>>> + clear_bit(order, &huge_anon_orders_inherit);
>>>>>> + clear_bit(order, &huge_anon_orders_madvise);
>>>>>
>>>>> Note: I was wondering for a second if some concurrent calls could lead to an
>>>>> inconsistent state. I think in the worst case we'll simply end up with "never"
>>>>> on races.
>>>>
>>>> You mean if different threads try to write different values to this file
>>>> concurrently? Or if there is a concurrent fault that tries to read the flags
>>>> while they are being modified?
>>>
>>> I thought about what you said first, but what you said last might also apply. As
>>> long as "nothing breaks", all good.
>>>
>>>>
>>>> I thought about this for a long time too and wasn't sure what was best. The
>>>> existing global enabled store impl clears the bits first then sets the bit.
>>>> With
>>>> this approach you can end up with multiple bits set if there is a race to set
>>>> different values, and you can end up with a faulting thread seeing "never" if it
>>>> reads the bits after they have been cleared but before the new one is set.
>>>
>>> Right, but user space is playing stupid games and can win stupid prizes. As long
>>> as nothing breaks, we're good.
>>>
>>>>
>>>> I decided to set the new bit before clearing the old bits, which is
>>>> different; a
>>>> racing fault will never see "never" but as you say, a race to set the file
>>>> could
>>>> result in "never" being set.
>>>>
>>>> On reflection, it's probably best to set the bit *last* like the global control
>>>> does?
>>>
>>> Probably might just slap a simple spinlock in there, so at least the writer side
>>> is completely serialized. Then you can just set the bit last. It's unlikely that
>>> readers will actually run into issues, and if they ever would, we could use some
>>> rcu magic to let them read a consistent state.
>>
>> I'd prefer to leave it as it is now; clear first, set last without any explicit
>> serialization. I've convinced myself that nothing breaks and it's the same
>> pattern used by the global control, so it's consistent. Unless you're insisting on
>> the spin lock?
>
> No, not at all. But it would certainly remove any possible concerns :)
OK fine, you win :). I'll add a spin lock on the writer side.
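
For the record, I'm thinking of something like the below (sketch only; the
lock name is a placeholder, and the -EINVAL tail is just the unhappy path
that the hunk quoted above truncates):

static DEFINE_SPINLOCK(huge_anon_orders_lock);

static ssize_t thpsize_enabled_store(struct kobject *kobj,
				     struct kobj_attribute *attr,
				     const char *buf, size_t count)
{
	int order = to_thpsize(kobj)->order;
	ssize_t ret = count;

	if (sysfs_streq(buf, "always")) {
		spin_lock(&huge_anon_orders_lock);
		/* Clear the other policies first; set the requested one last. */
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_always);
		spin_unlock(&huge_anon_orders_lock);
	} else if (sysfs_streq(buf, "inherit")) {
		spin_lock(&huge_anon_orders_lock);
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_madvise);
		set_bit(order, &huge_anon_orders_inherit);
		spin_unlock(&huge_anon_orders_lock);
	} else if (sysfs_streq(buf, "madvise")) {
		spin_lock(&huge_anon_orders_lock);
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		set_bit(order, &huge_anon_orders_madvise);
		spin_unlock(&huge_anon_orders_lock);
	} else if (sysfs_streq(buf, "never")) {
		spin_lock(&huge_anon_orders_lock);
		clear_bit(order, &huge_anon_orders_always);
		clear_bit(order, &huge_anon_orders_inherit);
		clear_bit(order, &huge_anon_orders_madvise);
		spin_unlock(&huge_anon_orders_lock);
	} else
		ret = -EINVAL;

	return ret;
}

Writers are then fully serialized and the requested bit is always set last,
so concurrent writers can't interleave into a state none of them asked for;
the worst a lockless reader can see is a transient "never", which we agreed
is harmless.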