From: "Michael S. Tsirkin" <mst@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-s390@vger.kernel.org, virtualization@lists.linux.dev,
kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org,
kexec@lists.infradead.org, "Heiko Carstens" <hca@linux.ibm.com>,
"Vasily Gorbik" <gor@linux.ibm.com>,
"Alexander Gordeev" <agordeev@linux.ibm.com>,
"Christian Borntraeger" <borntraeger@linux.ibm.com>,
"Sven Schnelle" <svens@linux.ibm.com>,
"Jason Wang" <jasowang@redhat.com>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Baoquan He" <bhe@redhat.com>, "Vivek Goyal" <vgoyal@redhat.com>,
"Dave Young" <dyoung@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Cornelia Huck" <cohuck@redhat.com>,
"Janosch Frank" <frankja@linux.ibm.com>,
"Claudio Imbrenda" <imbrenda@linux.ibm.com>,
"Eric Farman" <farman@linux.ibm.com>,
"Andrew Morton" <akpm@linux-foundation.org>
Subject: Re: [PATCH v2 00/12] fs/proc/vmcore: kdump support for virtio-mem on s390
Date: Wed, 8 Jan 2025 07:04:23 -0500
Message-ID: <20250108070407-mutt-send-email-mst@kernel.org>
In-Reply-To: <20241204125444.1734652-1-david@redhat.com>

On Wed, Dec 04, 2024 at 01:54:31PM +0100, David Hildenbrand wrote:
> The only "different than everything else" thing about virtio-mem on s390
> is kdump: The crash (2nd) kernel allocates+prepares the elfcore hdr
> during fs_init()->vmcore_init()->elfcorehdr_alloc(). Consequently, the
> kdump kernel must detect memory ranges of the crashed kernel to
> include via PT_LOAD in the vmcore.
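
For readers less familiar with that path: roughly, the vmcore code hands the
header allocation to an architecture hook that s390 overrides. The sketch
below is reconstructed from memory, not code taken from this series.

#include <linux/crash_dump.h>
#include <linux/init.h>
#include <linux/printk.h>

/*
 * Rough sketch: the vmcore code lets the architecture allocate/prepare
 * the ELF core header in the crash (2nd) kernel via elfcorehdr_alloc(),
 * which s390 overrides in arch/s390/kernel/crash_dump.c.
 */
static int __init vmcore_init_sketch(void)
{
        unsigned long long addr = 0, size = 0;
        int rc;

        rc = elfcorehdr_alloc(&addr, &size);    /* arch hook */
        if (rc)
                return rc;

        pr_info("vmcore: ELF core header at 0x%llx (%llu bytes)\n",
                addr, size);
        return 0;
}
fs_initcall(vmcore_init_sketch);
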
>
> On other architectures, all RAM regions (boot + hotplugged) can easily be
> observed on the old (to-be-crashed) kernel (e.g., using /proc/iomem) to
> create the elfcore hdr.
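
As a rough illustration of that (a sketch of mine, not code from the series):
inside the kernel those ranges are the "System RAM" resources, i.e. exactly
what /proc/iomem shows, and they can be walked with walk_system_ram_res():

#include <linux/ioport.h>
#include <linux/printk.h>

/* Print every "System RAM" range, i.e. what /proc/iomem reports. */
static int dump_ram_range(struct resource *res, void *arg)
{
        unsigned int *nr = arg;

        pr_info("RAM range %u: %pa-%pa\n", *nr, &res->start, &res->end);
        (*nr)++;
        return 0;                       /* keep iterating */
}

static unsigned int dump_system_ram(void)
{
        unsigned int nr = 0;

        walk_system_ram_res(0, -1, &nr, dump_ram_range);
        return nr;
}
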
>
> On s390, information about "ordinary" memory (heh, "storage") can be
> obtained by querying the hypervisor/ultravisor via SCLP/diag260, and
> that information is stored early during boot in the "physmem" memblock
> data structure.
>
> But virtio-mem memory is always detected by its device driver, which is
> usually built as a module. So in the crash kernel, this memory can only be
> properly detected once the virtio-mem driver has started up.
>
> The virtio-mem driver already supports "kdump mode", where it won't
> hotplug any memory but instead queries the device to implement the
> pfn_is_ram() callback, so that unplugged memory holes are skipped when
> the vmcore is read.
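
For context, a registration against that callback interface looks roughly
like the sketch below (from memory; the authoritative definitions live in
include/linux/crash_dump.h, and my_device_pfn_is_plugged() is a made-up
placeholder for the driver's own bookkeeping):

#include <linux/crash_dump.h>

/* Hypothetical placeholder for the driver's plugged-memory bookkeeping. */
static bool my_device_pfn_is_plugged(unsigned long pfn)
{
        return true;
}

/* Called by the vmcore code before copying a page of old memory. */
static bool my_pfn_is_ram(struct vmcore_cb *cb, unsigned long pfn)
{
        return my_device_pfn_is_plugged(pfn);
}

static struct vmcore_cb my_vmcore_cb = {
        .pfn_is_ram = my_pfn_is_ram,
};

static void my_setup_kdump_mode(void)
{
        /* Unplugged holes now read back as zeroes instead of being accessed. */
        register_vmcore_cb(&my_vmcore_cb);
}
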
>
> With this series, if the virtio-mem driver is included in the kdump
> initrd -- which dracut already takes care of under Fedora/RHEL -- it will
> now detect the device RAM ranges on s390 once it probes the devices, and
> add them to the vmcore using the same callback mechanism we already have
> for pfn_is_ram().
>
> To add these device RAM ranges to the vmcore ("patch the vmcore"), we will
> add new PT_LOAD entries that describe these memory ranges, and update all
> offsets and the vmcore size so everything stays consistent.
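
Conceptually (my sketch of the idea, not the code from patch #12), each such
range becomes one more Elf64 PT_LOAD entry, and the running data offset plus
the total size are bumped accordingly:

#include <linux/elf.h>
#include <linux/types.h>

/* Describe one device RAM range and account for it in the vmcore size. */
static void add_device_ram_phdr(Elf64_Phdr *phdr, u64 paddr, u64 size,
                                u64 *vmcore_off, u64 *vmcore_size)
{
        phdr->p_type   = PT_LOAD;
        phdr->p_flags  = PF_R | PF_W | PF_X;
        phdr->p_offset = *vmcore_off;   /* where the data sits in /proc/vmcore */
        phdr->p_paddr  = paddr;         /* physical start of the range */
        phdr->p_vaddr  = 0;
        phdr->p_filesz = size;
        phdr->p_memsz  = size;
        phdr->p_align  = 0;

        *vmcore_off  += size;
        *vmcore_size += size;
}
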
>
> My testing when creating+analyzing crash dumps with hotplugged virtio-mem
> memory (incl. holes) did not reveal any surprises.
>
> Patches #1 -- #7 are vmcore preparations and cleanups
> Patch #8 adds the infrastructure for drivers to report device RAM
> Patches #9 + #10 are virtio-mem preparations
> Patch #11 implements virtio-mem support to report device RAM
> Patch #12 activates it for s390, implementing a new function to fill a
>           PT_LOAD entry for device RAM

Who is merging this?

virtio parts:
Acked-by: Michael S. Tsirkin <mst@redhat.com>

> v1 -> v2:
> * "fs/proc/vmcore: convert vmcore_cb_lock into vmcore_mutex"
> -> Extend patch description
> * "fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex"
> -> Extend patch description
> * "fs/proc/vmcore: disallow vmcore modifications while the vmcore is open"
> -> Disallow modifications only if it is currently open, but warn if it
> was already open and got closed again.
> -> Track vmcore_open vs. vmcore_opened
> -> Extend patch description
> * "fs/proc/vmcore: prefix all pr_* with "vmcore:""
> -> Added
> * "fs/proc/vmcore: move vmcore definitions out of kcore.h"
> -> Call it "vmcore_range"
> -> Place vmcoredd_node into vmcore.c
> -> Adjust patch subject + description
> * "fs/proc/vmcore: factor out allocating a vmcore range and adding it to a
> list"
> -> Adjust to "vmcore_range"
> * "fs/proc/vmcore: factor out freeing a list of vmcore ranges"
> -> Adjust to "vmcore_range"
> * "fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM
> ranges in 2nd kernel"
> -> Drop PROVIDE_PROC_VMCORE_DEVICE_RAM for now
> -> Simplify Kconfig a bit
> -> Drop "Kdump:" from warnings/errors
> -> Perform Elf64 check first
> -> Add regions also if the vmcore was opened, but got closed again. But
> warn in any case, because it is unexpected.
> -> Adjust patch description
> * "virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM"
> -> "depends on VIRTIO_MEM" for PROC_VMCORE_DEVICE_RAM
>
>
> Cc: Heiko Carstens <hca@linux.ibm.com>
> Cc: Vasily Gorbik <gor@linux.ibm.com>
> Cc: Alexander Gordeev <agordeev@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@linux.ibm.com>
> Cc: Sven Schnelle <svens@linux.ibm.com>
> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> Cc: Jason Wang <jasowang@redhat.com>
> Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Cc: "Eugenio Pérez" <eperezma@redhat.com>
> Cc: Baoquan He <bhe@redhat.com>
> Cc: Vivek Goyal <vgoyal@redhat.com>
> Cc: Dave Young <dyoung@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Janosch Frank <frankja@linux.ibm.com>
> Cc: Claudio Imbrenda <imbrenda@linux.ibm.com>
> Cc: Eric Farman <farman@linux.ibm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
>
> David Hildenbrand (12):
> fs/proc/vmcore: convert vmcore_cb_lock into vmcore_mutex
> fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex
> fs/proc/vmcore: disallow vmcore modifications while the vmcore is open
> fs/proc/vmcore: prefix all pr_* with "vmcore:"
> fs/proc/vmcore: move vmcore definitions out of kcore.h
> fs/proc/vmcore: factor out allocating a vmcore range and adding it to
> a list
> fs/proc/vmcore: factor out freeing a list of vmcore ranges
> fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM
> ranges in 2nd kernel
> virtio-mem: mark device ready before registering callbacks in kdump
> mode
> virtio-mem: remember usable region size
> virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM
> s390/kdump: virtio-mem kdump support (CONFIG_PROC_VMCORE_DEVICE_RAM)
>
> arch/s390/Kconfig | 1 +
> arch/s390/kernel/crash_dump.c | 39 ++++-
> drivers/virtio/virtio_mem.c | 103 ++++++++++++-
> fs/proc/Kconfig | 19 +++
> fs/proc/vmcore.c | 283 ++++++++++++++++++++++++++--------
> include/linux/crash_dump.h | 41 +++++
> include/linux/kcore.h | 13 --
> 7 files changed, 407 insertions(+), 92 deletions(-)
>
>
> base-commit: feffde684ac29a3b7aec82d2df850fbdbdee55e4
> --
> 2.47.1
Thread overview: 16+ messages
2024-12-04 12:54 David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 01/12] fs/proc/vmcore: convert vmcore_cb_lock into vmcore_mutex David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 02/12] fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 03/12] fs/proc/vmcore: disallow vmcore modifications while the vmcore is open David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 04/12] fs/proc/vmcore: prefix all pr_* with "vmcore:" David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 05/12] fs/proc/vmcore: move vmcore definitions out of kcore.h David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 06/12] fs/proc/vmcore: factor out allocating a vmcore range and adding it to a list David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 07/12] fs/proc/vmcore: factor out freeing a list of vmcore ranges David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 08/12] fs/proc/vmcore: introduce PROC_VMCORE_DEVICE_RAM to detect device RAM ranges in 2nd kernel David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 09/12] virtio-mem: mark device ready before registering callbacks in kdump mode David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 10/12] virtio-mem: remember usable region size David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 11/12] virtio-mem: support CONFIG_PROC_VMCORE_DEVICE_RAM David Hildenbrand
2024-12-04 12:54 ` [PATCH v2 12/12] s390/kdump: virtio-mem kdump support (CONFIG_PROC_VMCORE_DEVICE_RAM) David Hildenbrand
2025-01-08 12:04 ` Michael S. Tsirkin [this message]
2025-01-08 12:10 ` [PATCH v2 00/12] fs/proc/vmcore: kdump support for virtio-mem on s390 Heiko Carstens
2025-01-08 12:14 ` David Hildenbrand