linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Charan Teja Kalla <quic_charante@quicinc.com>,
	akpm@linux-foundation.org, quic_pkondeti@quicinc.com,
	pasha.tatashin@soleen.com, sjpark@amazon.de, sieberf@amazon.com,
	shakeelb@google.com, dhowells@redhat.com, willy@infradead.org,
	liuting.0x7c00@bytedance.com, minchan@kernel.org,
	Michal Hocko <mhocko@suse.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH V2] mm: fix use-after-free of page_ext after race with memory-offline
Date: Mon, 1 Aug 2022 14:04:57 +0200	[thread overview]
Message-ID: <a6f28a45-ee0a-183f-fa60-28a56e1c506c@redhat.com> (raw)
In-Reply-To: <f670c6ee-1c20-570f-68f9-42a3e1e85557@quicinc.com>

On 01.08.22 13:50, Charan Teja Kalla wrote:
> Thanks David!!
> 
> On 8/1/2022 2:00 PM, David Hildenbrand wrote:
>>> Having said that, I am open to going with call_rcu(), and in fact it
>>> would be a much simpler change where I can do the freeing of page_ext
>>> in __free_page_ext() itself, which is called for every section, thereby
>>> avoiding the extra tracking flag PAGE_EXT_INVALID.
>>>       ...........
>>>         WRITE_ONCE(ms->page_ext, NULL);
>>>         call_rcu(rcu_head, fun); // Free in fun()
>>>       .............
>>>
>>> Or is your opinion to use call_rcu() only once, in place of
>>> synchronize_rcu(), after invalidating all the page_exts of the memory
>>> block?
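
For concreteness, a minimal sketch of that per-section call_rcu() variant, in
mm/page_ext.c terms, might look like the following. The page_ext_rcu wrapper
and page_ext_rcu_free() are invented names for illustration; get_entry() and
free_page_ext() are the file's existing static helpers, and the wrapper
allocation plus synchronize_rcu() fallback anticipates the point made in the
reply below:

struct page_ext_rcu {
	struct rcu_head rcu;
	void *addr;		/* base of this section's page_ext storage */
};

static void page_ext_rcu_free(struct rcu_head *head)
{
	struct page_ext_rcu *pr = container_of(head, struct page_ext_rcu, rcu);

	free_page_ext(pr->addr);
	kfree(pr);
}

static void __free_page_ext(unsigned long pfn)
{
	struct mem_section *ms = __pfn_to_section(pfn);
	struct page_ext_rcu *pr;
	void *addr;

	if (!ms || !ms->page_ext)
		return;

	addr = get_entry(ms->page_ext, pfn);
	/* New lookups now see NULL; existing readers hold rcu_read_lock(). */
	WRITE_ONCE(ms->page_ext, NULL);

	pr = kmalloc(sizeof(*pr), GFP_KERNEL);
	if (pr) {
		pr->addr = addr;
		call_rcu(&pr->rcu, page_ext_rcu_free);
	} else {
		/* No wrapper to carry the rcu_head: wait synchronously. */
		synchronize_rcu();
		free_page_ext(addr);
	}
}

Clearing ms->page_ext before queuing the callback is the key ordering: any
reader still dereferencing the old pointer is inside an RCU read-side critical
section, so the storage cannot be freed under it.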
>>
>> Yeah, that would be an option. And if you fail to allocate a temporary
>> buffer to hold the data-to-free (structure containing rcu_head), the
>> slower fallback path would be synchronize_rcu().
>>
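
And a rough sketch of the batched alternative being suggested here: one
temporary wrapper (and hence one grace period) for the whole memory block,
with synchronize_rcu() only on the allocation-failure fallback path. Again,
the names and the simplified function shape are illustrative, not the posted
patch, which keeps the PAGE_EXT_INVALID marker and a single synchronize_rcu()
instead:

struct page_ext_batch {
	struct rcu_head rcu;
	unsigned int nr;
	void *addr[];		/* one entry per section of the block */
};

static void page_ext_batch_free(struct rcu_head *head)
{
	struct page_ext_batch *b = container_of(head, struct page_ext_batch, rcu);
	unsigned int i;

	for (i = 0; i < b->nr; i++)
		free_page_ext(b->addr[i]);
	kfree(b);
}

/* start_pfn/end_pfn assumed section-aligned for brevity. */
static void offline_page_ext(unsigned long start_pfn, unsigned long end_pfn)
{
	unsigned int nr_sections = (end_pfn - start_pfn) / PAGES_PER_SECTION;
	struct page_ext_batch *b;
	unsigned long pfn;

	b = kmalloc(struct_size(b, addr, nr_sections), GFP_KERNEL);
	if (b)
		b->nr = 0;

	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
		struct mem_section *ms = __pfn_to_section(pfn);
		void *addr;

		if (!ms || !ms->page_ext)
			continue;

		addr = get_entry(ms->page_ext, pfn);
		WRITE_ONCE(ms->page_ext, NULL);

		if (b) {
			b->addr[b->nr++] = addr;
		} else {
			/* Slower fallback: one grace period per section. */
			synchronize_rcu();
			free_page_ext(addr);
		}
	}

	/* One grace period and one callback for the whole memory block. */
	if (b)
		call_rcu(&b->rcu, page_ext_batch_free);
}

Either way the invariant is the same: ms->page_ext is cleared before any grace
period starts, so lookup_page_ext() callers either observe NULL or finish
their RCU read-side critical section before the storage is released.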
> 
> I will add a note in the code saying that, if some optimization needs
> to be done in this path in the future, this option can be considered.
> Hope this is fine for now?

IMHO yes. But no need to add all these details to the patch description
(try to keep it short and precise). You can always just link to the
discussion, e.g., via

https://lkml.kernel.org/r/a26ce299-aed1-b8ad-711e-a49e82bdd180@quicinc.com

-- 
Thanks,

David / dhildenb




Thread overview: 12+ messages
2022-07-27 14:15 Charan Teja Kalla
2022-07-27 14:19 ` Charan Teja Kalla
2022-07-27 17:29 ` David Hildenbrand
2022-07-28  9:53   ` Charan Teja Kalla
2022-08-01  8:30     ` David Hildenbrand
2022-08-01 11:50       ` Charan Teja Kalla
2022-08-01 12:04         ` David Hildenbrand [this message]
2022-07-28 14:37 ` Michal Hocko
2022-07-29 15:47   ` Charan Teja Kalla
2022-08-01  8:27     ` Michal Hocko
2022-08-01 13:01       ` Charan Teja Kalla
2022-08-01 13:08         ` Michal Hocko

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=a6f28a45-ee0a-183f-fa60-28a56e1c506c@redhat.com \
    --to=david@redhat.com \
    --cc=akpm@linux-foundation.org \
    --cc=dhowells@redhat.com \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=liuting.0x7c00@bytedance.com \
    --cc=mhocko@suse.com \
    --cc=minchan@kernel.org \
    --cc=pasha.tatashin@soleen.com \
    --cc=quic_charante@quicinc.com \
    --cc=quic_pkondeti@quicinc.com \
    --cc=shakeelb@google.com \
    --cc=sieberf@amazon.com \
    --cc=sjpark@amazon.de \
    --cc=willy@infradead.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox