From: Michal Hocko <mhocko@suse.com>
To: "Zhijian Li (Fujitsu)" <lizhijian@fujitsu.com>
Cc: "linux-mm@kvack.org" <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Oscar Salvador <osalvador@suse.de>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Yasunori Gotou (Fujitsu)" <y-goto@fujitsu.com>
Subject: Re: [PATCH RFC] mm: Avoid triggering oom-killer during memory hot-remove operations
Date: Mon, 29 Jul 2024 08:13:33 +0200
Message-ID: <ZqczDQ_qAjOGmBk0@tiehlicka>
In-Reply-To: <2ab277af-06ed-41a9-a2b4-91dd1ffce733@fujitsu.com>
On Mon 29-07-24 02:14:13, Zhijian Li (Fujitsu) wrote:
>
>
> On 29/07/2024 08:37, Li Zhijian wrote:
> > Michal,
> >
> > Sorry for the late reply.
> >
> >
> > On 26/07/2024 17:17, Michal Hocko wrote:
> >> On Fri 26-07-24 16:44:56, Li Zhijian wrote:
> >>> When a process is bound to a node that is being hot-removed, any memory
> >>> allocation attempts from that node should fail gracefully without
> >>> triggering the OOM-killer. However, the current behavior can cause the
> >>> oom-killer to be invoked, leading to the termination of processes on other
> >>> nodes, even when there is sufficient memory available in the system.
> >>
> >> But you said they are bound to the node that is offlined.
> >>> Prevent the oom-killer from being triggered by processes bound to a
> >>> node undergoing hot-remove operations. Instead, the allocation attempts
> >>> from the offlining node will simply fail, allowing the process to handle
> >>> the failure appropriately without causing disruption to the system.
> >>
> >> NAK.
> >>
> >> Also it is not really clear why the process of offlining should behave any
> >> differently from after the node is offlined. Could you describe the actual
> >> problem you are facing in much more detail, please?
> >
> > We encountered that some processes (including system-critical services such as sshd, rsyslogd and login)
> > were killed during our memory hot-remove testing. Our test program is described in the previous mail [1].
> >
> > In short, we have 3 memory nodes: node0 and node1 are DRAM, while node2 is CXL volatile memory that is onlined
> > to ZONE_MOVABLE. When we attempted to remove node2, the oom-killer was invoked and killed other processes
> > (sshd, rsyslogd, login) even though there was enough memory on node0+node1.
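
(For reference, the scenario above boils down to a task whose allocations are
bound to the node being hot-removed. Below is a minimal sketch of such a
node-bound allocation loop, assuming the binding is done with
set_mempolicy(MPOL_BIND) via libnuma; it is not the actual test program from
[1], and the node id and allocation size are assumptions.)

/* Sketch of a node-bound allocation loop; not the actual test program
 * from [1].  The node id and per-iteration size are assumptions.
 * Build with: gcc -O2 -o bindalloc bindalloc.c -lnuma
 */
#include <numa.h>
#include <numaif.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	const int node = 2;            /* assumed: the CXL node being hot-removed */
	const size_t sz = 128UL << 20; /* assumed: 128 MiB per allocation */
	struct bitmask *mask;

	if (numa_available() < 0) {
		fprintf(stderr, "NUMA is not available\n");
		return 1;
	}

	/* Bind all further allocations of this task to the target node. */
	mask = numa_allocate_nodemask();
	numa_bitmask_setbit(mask, node);
	if (set_mempolicy(MPOL_BIND, mask->maskp, mask->size + 1) < 0) {
		perror("set_mempolicy");
		return 1;
	}

	/*
	 * Keep faulting in memory on the bound node.  Once the node is being
	 * hot-removed, the question is whether these allocations simply fail
	 * or whether the oom-killer picks unrelated victims.
	 */
	for (;;) {
		char *p = malloc(sz);
		if (!p) {
			fprintf(stderr, "allocation failed\n");
			break;
		}
		memset(p, 0, sz);      /* force page faults on the bound node */
	}
	return 0;
}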
What are the sizes of those nodes, how much memory does the testing program
consume, and do you have an oom report without the patch applied?
--
Michal Hocko
SUSE Labs
Thread overview: 11+ messages
2024-07-26 8:44 Li Zhijian
2024-07-26 9:17 ` Michal Hocko
2024-07-29 0:37 ` Zhijian Li (Fujitsu)
2024-07-29 2:14 ` Zhijian Li (Fujitsu)
2024-07-29 6:13 ` Michal Hocko [this message]
2024-07-29 6:34 ` Zhijian Li (Fujitsu)
2024-07-29 7:40 ` Michal Hocko
2024-07-29 8:04 ` Zhijian Li (Fujitsu)
2024-07-29 8:15 ` Michal Hocko
2024-07-29 8:53 ` Zhijian Li (Fujitsu)
2024-07-29 9:16 ` Michal Hocko