linux-mm.kvack.org archive mirror
From: Frank van der Linden <fvdl@google.com>
To: David Hildenbrand <david@redhat.com>
Cc: Yu Zhao <yuzhao@google.com>, Muchun Song <muchun.song@linux.dev>,
	 Matthew Wilcox <willy@infradead.org>,
	Jane Chu <jane.chu@oracle.com>, Will Deacon <will@kernel.org>,
	 Nanyong Sun <sunnanyong@huawei.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	 akpm@linux-foundation.org, anshuman.khandual@arm.com,
	 wangkefeng.wang@huawei.com,
	linux-arm-kernel@lists.infradead.org,
	 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize
Date: Fri, 7 Jun 2024 09:55:55 -0700
Message-ID: <CAPTztWb0ZMHB74=KGxqRpTejDXNVJZ+Y9LGH1KEaPy_cnUmABA@mail.gmail.com>
In-Reply-To: <be130a96-a27e-4240-ad78-776802f57cad@redhat.com>

I had an offline discussion with Yu on this, and he pointed out
something I hadn't realized: the x86 cmpxchg instruction always
produces a write cycle, even if it doesn't modify the data - it just
writes back the original data in that case.

So, get_page_unless_zero() will always produce a fault on RO-mapped page
structures on x86, even when it fails to take a reference.

Maybe this was obvious to other people, but I didn't see it explicitly
mentioned, so I figured I'd add the datapoint.

- Frank

On Thu, Jun 6, 2024 at 1:30 AM David Hildenbrand <david@redhat.com> wrote:
>
> >> Additionally, we also should alter RO permission of those 7 tail pages
> >> to RW to avoid panic().
> >
> > We can use RCU, which IMO is a better choice, as the following:
> >
> > get_page_unless_zero()
> > {
> >    bool rc = false;
> >
> >    rcu_read_lock();
> >
> >    if (page_is_fake_head(page) || !page_ref_count(page)) {
> >          smp_mb(); // implied by atomic_add_unless()
> >          goto unlock;
> >    }
> >
> >    rc = page_ref_add_unless(page, 1, 0);
> >
> > unlock:
> >    rcu_read_unlock();
> >
> >    return rc;
> > }
> >
> > And on the HVO/de-HOV sides:
> >
> >    folio_ref_unfreeze();
> >    synchronize_rcu();
> >    HVO/de-HVO;
> >
> > I think this is a lot better than making tail page metadata RW because:
> > 1. it helps debug, IMO, a lot;
> > 2. I don't think HVO is the only one that needs this.
> >
> > David (we missed you in today's THP meeting),
>
> Sorry, I had a private meeting conflict :)
>
> >
> > Please correct me if I'm wrong -- I think virtio-mem also suffers from
> > the same problem when freeing offlined struct page, since I wasn't
> > able to find anything that would prevent a **speculative** struct page
> > walker from trying to access struct pages belonging to pages being
> > concurrently offlined.
>
> virtio-mem does not currently optimize fake-offlined memory like HVO
> would. So the only way we really remove "struct page" metadata is by
> actually offlining+removing a complete Linux memory block, like ordinary
> memory hotunplug would.
>
> It might be an interesting project to optimize "struct page" metadata
> consumption for fake-offlined memory chunks within an online Linux
> memory block.
>
> The biggest challenge might be interaction with memory hotplug, which
> requires all "struct page" metadata to be allocated. So that would make
> cases where virtio-mem hot-plugs a Linux memory block but keeps parts of
> it fake-offline a bit more problematic to handle.
>
> In a world with memdesc this might all be nicer to handle I think :)
>
>
> There is one possible interaction between virtio-mem and speculative
> page references: all fake-offline chunks in a Linux memory block do have
> on each page a refcount of 1 and PageOffline() set. When actually
> offlining the Linux memory block to remove it, virtio-mem will drop that
> reference during MEM_GOING_OFFLINE, such that memory offlining can
> proceed (seeing refcount==0 and PageOffline()).
>
> In virtio_mem_fake_offline_going_offline() we have:
>
> if (WARN_ON(!page_ref_dec_and_test(page)))
>         dump_page(page, "fake-offline page referenced");
>
> which would trigger on a speculative reference.
>
> We have never seen that trigger so far, because quite a long time must
> have passed since a page was last part of the page cache / page
> tables before virtio-mem fake-offlined it (using alloc_contig_range())
> and the Linux memory block actually gets offlined.
>
> But yes, RCU (e.g., on the memory offlining path) would likely be the
> right approach to make sure GUP-fast and the pagecache will no longer
> grab this page by accident.
>
> >
> > If this is true, we might want to map a "zero struct page" rather than
> > leave a hole in vmemmap when offlining pages. And the logic on the hot
> > removal side would be similar to that of HVO.
>
> Once virtio-mem would do something like HVO, yes. Right now virtio-mem
> only removes struct-page metadata by removing/unplugging its owned Linux
> memory blocks once they are fully "logically offline".
>
> --
> Cheers,
>
> David / dhildenb
>


Thread overview: 43+ messages
2024-01-13  9:44 Nanyong Sun
2024-01-13  9:44 ` [PATCH v3 1/3] mm: HVO: introduce helper function to update and flush pgtable Nanyong Sun
2024-01-13  9:44 ` [PATCH v3 2/3] arm64: mm: HVO: support BBM of vmemmap pgtable safely Nanyong Sun
2024-01-15  2:38   ` Muchun Song
2024-02-07 12:21   ` Mark Rutland
2024-02-08  9:30     ` Nanyong Sun
2024-01-13  9:44 ` [PATCH v3 3/3] arm64: mm: Re-enable OPTIMIZE_HUGETLB_VMEMMAP Nanyong Sun
2024-01-25 18:06 ` [PATCH v3 0/3] A Solution to Re-enable hugetlb vmemmap optimize Catalin Marinas
2024-01-27  5:04   ` Nanyong Sun
2024-02-07 11:12     ` Will Deacon
2024-02-07 11:21       ` Matthew Wilcox
2024-02-07 12:11         ` Will Deacon
2024-02-07 12:24           ` Mark Rutland
2024-02-07 14:17           ` Matthew Wilcox
2024-02-08  2:24             ` Jane Chu
2024-02-08 15:49               ` Matthew Wilcox
2024-02-08 19:21                 ` Jane Chu
2024-02-11 11:59                 ` Muchun Song
2024-06-05 20:50                   ` Yu Zhao
2024-06-06  8:30                     ` David Hildenbrand
2024-06-07 16:55                       ` Frank van der Linden [this message]
2024-02-07 12:20         ` Catalin Marinas
2024-02-08  9:44           ` Nanyong Sun
2024-02-08 13:17             ` Will Deacon
2024-03-13 23:32               ` David Rientjes
2024-03-25 15:24                 ` Nanyong Sun
2024-03-26 12:54                   ` Will Deacon
2024-06-24  5:39                   ` Yu Zhao
2024-06-27 14:33                     ` Nanyong Sun
2024-06-27 21:03                       ` Yu Zhao
2024-07-04 11:47                         ` Nanyong Sun
2024-07-04 19:45                           ` Yu Zhao
2024-02-07 12:44     ` Catalin Marinas
2024-06-27 21:19       ` Yu Zhao
2024-07-05 15:49         ` Catalin Marinas
2024-07-05 17:41           ` Yu Zhao
2024-07-10 16:51             ` Catalin Marinas
2024-07-10 17:12               ` Yu Zhao
2024-07-10 22:29                 ` Catalin Marinas
2024-07-10 23:07                   ` Yu Zhao
2024-07-11  8:31                     ` Yu Zhao
2024-07-11 11:39                       ` Catalin Marinas
2024-07-11 17:38                         ` Yu Zhao
