* Re: [PATCH] ceph: switch back to testing for NULL folio->private in ceph_dirty_folio
From: Matthew Wilcox @ 2022-06-19 3:49 UTC
To: Xiubo Li; +Cc: Jeff Layton, idryomov, ceph-devel, linux-fsdevel, linux-mm
On Mon, Jun 13, 2022 at 08:48:40AM +0800, Xiubo Li wrote:
>
> On 6/10/22 11:40 PM, Jeff Layton wrote:
> > Willy requested that we change this back to warning on folio->private
> > being non-NULL. He's trying to kill off the PG_private flag, and so we'd
> > like to catch where it's non-NULL.
> >
> > Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
> > using that instead of VM_BUG_ON_FOLIO along with testing the ->private
> > pointer.
> >
> > Cc: Matthew Wilcox <willy@infradead.org>
> > Signed-off-by: Jeff Layton <jlayton@kernel.org>
> > ---
> >   fs/ceph/addr.c          | 2 +-
> >   include/linux/mmdebug.h | 9 +++++++++
> >   2 files changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index b43cc01a61db..b24d6bdb91db 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
> >  	 * Reference snap context in folio->private. Also set
> >  	 * PagePrivate so that we get invalidate_folio callback.
> >  	 */
> > -	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
> > +	VM_WARN_ON_FOLIO(folio->private, folio);
> >  	folio_attach_private(folio, snapc);
> >  	return ceph_fscache_dirty_folio(mapping, folio);
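The include/linux/mmdebug.h hunk that adds VM_WARN_ON_FOLIO is trimmed from the quote above. As a rough sketch only, assuming the new macro mirrors the existing VM_BUG_ON_FOLIO pattern in mmdebug.h (the exact form in Jeff's patch may differ):

/*
 * Hypothetical sketch of the CONFIG_DEBUG_VM variant: dump the folio
 * and warn (without oopsing) when the condition is true, returning the
 * condition so the macro can also be used inside an if().
 */
#define VM_WARN_ON_FOLIO(cond, folio)		({			\
	int __ret_warn = !!(cond);					\
									\
	if (unlikely(__ret_warn)) {					\
		dump_page(&folio->page,					\
			  "VM_WARN_ON_FOLIO(" __stringify(cond) ")");	\
		WARN_ON(1);						\
	}								\
	unlikely(__ret_warn);						\
})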
I found a couple of places where page->private needs to be NULLed out.
Neither of them is Ceph's fault. I decided that testing whether
folio->private and PG_private are in agreement was better done in
folio_unlock() than in any of the other potential places we could
check for it.
diff --git a/mm/filemap.c b/mm/filemap.c
index 8ef861297ffb..acef71f75e78 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1535,6 +1535,9 @@ void folio_unlock(struct folio *folio)
 	BUILD_BUG_ON(PG_waiters != 7);
 	BUILD_BUG_ON(PG_locked > 7);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_private(folio) &&
+			!folio_test_swapbacked(folio) &&
+			folio_get_private(folio), folio);
 	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
 		folio_wake_bit(folio, PG_locked);
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2e2a8b5bc567..af0751a79c19 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2438,6 +2438,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
 			page_tail);
 	page_tail->mapping = head->mapping;
 	page_tail->index = head->index + tail;
+	page_tail->private = 0;
 
 	/* Page flags must be visible before we make the page non-compound. */
 	smp_wmb();
diff --git a/mm/migrate.c b/mm/migrate.c
index eb62e026c501..fa8e36e74f0d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1157,6 +1157,8 @@ static int unmap_and_move(new_page_t get_new_page,
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
+	BUG_ON(compound_order(newpage) != compound_order(page));
+	newpage->private = 0;
 
 	rc = __unmap_and_move(page, newpage, force, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
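For context on the invariant the new folio_unlock() assertion checks: folio->private and PG_private are normally kept in agreement by the attach/detach helpers, which is why code that copies or splits pages by hand (as in the hunks above) has to clear ->private itself. Swap-backed folios are skipped by the check because they can carry a swap entry in ->private without PG_private being set. A simplified sketch of the helper pairing, along the lines of include/linux/pagemap.h and not part of this patch:

/*
 * Attaching takes a folio reference, stores the cookie and sets
 * PG_private; detaching undoes all three.  If either step is bypassed,
 * the flag and the pointer fall out of sync, which is exactly what the
 * folio_unlock() assertion above is meant to catch.
 */
static inline void folio_attach_private(struct folio *folio, void *data)
{
	folio_get(folio);
	folio->private = data;
	folio_set_private(folio);
}

static inline void *folio_detach_private(struct folio *folio)
{
	void *data = folio_get_private(folio);

	if (!folio_test_private(folio))
		return NULL;
	folio_clear_private(folio);
	folio->private = NULL;
	folio_put(folio);

	return data;
}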
* Re: [PATCH] ceph: switch back to testing for NULL folio->private in ceph_dirty_folio
From: Xiubo Li @ 2022-06-20 1:30 UTC
To: Matthew Wilcox; +Cc: Jeff Layton, idryomov, ceph-devel, linux-fsdevel, linux-mm
On 6/19/22 11:49 AM, Matthew Wilcox wrote:
> On Mon, Jun 13, 2022 at 08:48:40AM +0800, Xiubo Li wrote:
>> On 6/10/22 11:40 PM, Jeff Layton wrote:
>>> Willy requested that we change this back to warning on folio->private
>>> being non-NULL. He's trying to kill off the PG_private flag, and so we'd
>>> like to catch where it's non-NULL.
>>>
>>> Add a VM_WARN_ON_FOLIO (since it doesn't exist yet) and change over to
>>> using that instead of VM_BUG_ON_FOLIO along with testing the ->private
>>> pointer.
>>>
>>> Cc: Matthew Wilcox <willy@infradead.org>
>>> Signed-off-by: Jeff Layton <jlayton@kernel.org>
>>> ---
>>>   fs/ceph/addr.c          | 2 +-
>>>   include/linux/mmdebug.h | 9 +++++++++
>>>   2 files changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
>>> index b43cc01a61db..b24d6bdb91db 100644
>>> --- a/fs/ceph/addr.c
>>> +++ b/fs/ceph/addr.c
>>> @@ -122,7 +122,7 @@ static bool ceph_dirty_folio(struct address_space *mapping, struct folio *folio)
>>>  	 * Reference snap context in folio->private. Also set
>>>  	 * PagePrivate so that we get invalidate_folio callback.
>>>  	 */
>>> -	VM_BUG_ON_FOLIO(folio_test_private(folio), folio);
>>> +	VM_WARN_ON_FOLIO(folio->private, folio);
>>>  	folio_attach_private(folio, snapc);
>>>  	return ceph_fscache_dirty_folio(mapping, folio);
> I found a couple of places where page->private needs to be NULLed out.
> Neither of them is Ceph's fault. I decided that testing whether
> folio->private and PG_private are in agreement was better done in
> folio_unlock() than in any of the other potential places we could
> check for it.
Hi Willy,
Cool. I will test this patch today. Thanks!
-- Xiubo
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 8ef861297ffb..acef71f75e78 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -1535,6 +1535,9 @@ void folio_unlock(struct folio *folio)
>  	BUILD_BUG_ON(PG_waiters != 7);
>  	BUILD_BUG_ON(PG_locked > 7);
>  	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> +	VM_BUG_ON_FOLIO(!folio_test_private(folio) &&
> +			!folio_test_swapbacked(folio) &&
> +			folio_get_private(folio), folio);
>  	if (clear_bit_unlock_is_negative_byte(PG_locked, folio_flags(folio, 0)))
>  		folio_wake_bit(folio, PG_locked);
>  }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2e2a8b5bc567..af0751a79c19 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2438,6 +2438,7 @@ static void __split_huge_page_tail(struct page *head, int tail,
>  			page_tail);
>  	page_tail->mapping = head->mapping;
>  	page_tail->index = head->index + tail;
> +	page_tail->private = 0;
>  
>  	/* Page flags must be visible before we make the page non-compound. */
>  	smp_wmb();
> diff --git a/mm/migrate.c b/mm/migrate.c
> index eb62e026c501..fa8e36e74f0d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1157,6 +1157,8 @@ static int unmap_and_move(new_page_t get_new_page,
>  	newpage = get_new_page(page, private);
>  	if (!newpage)
>  		return -ENOMEM;
> +	BUG_ON(compound_order(newpage) != compound_order(page));
> +	newpage->private = 0;
>  
>  	rc = __unmap_and_move(page, newpage, force, mode);
>  	if (rc == MIGRATEPAGE_SUCCESS)
>