From: Baoquan He <bhe@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-s390@vger.kernel.org, virtualization@lists.linux.dev,
kvm@vger.kernel.org, linux-fsdevel@vger.kernel.org,
kexec@lists.infradead.org, "Heiko Carstens" <hca@linux.ibm.com>,
"Vasily Gorbik" <gor@linux.ibm.com>,
"Alexander Gordeev" <agordeev@linux.ibm.com>,
"Christian Borntraeger" <borntraeger@linux.ibm.com>,
"Sven Schnelle" <svens@linux.ibm.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Jason Wang" <jasowang@redhat.com>,
"Xuan Zhuo" <xuanzhuo@linux.alibaba.com>,
"Eugenio Pérez" <eperezma@redhat.com>,
"Vivek Goyal" <vgoyal@redhat.com>,
"Dave Young" <dyoung@redhat.com>,
"Thomas Huth" <thuth@redhat.com>,
"Cornelia Huck" <cohuck@redhat.com>,
"Janosch Frank" <frankja@linux.ibm.com>,
"Claudio Imbrenda" <imbrenda@linux.ibm.com>,
"Eric Farman" <farman@linux.ibm.com>,
"Andrew Morton" <akpm@linux-foundation.org>
Subject: Re: [PATCH v1 02/11] fs/proc/vmcore: replace vmcoredd_mutex by vmcore_mutex
Date: Fri, 15 Nov 2024 17:32:10 +0800
Message-ID: <ZzcVGrUcgNMXPkqw@MiWiFi-R3L-srv>
In-Reply-To: <20241025151134.1275575-3-david@redhat.com>
On 10/25/24 at 05:11pm, David Hildenbrand wrote:
> Let's use our new mutex instead.

Is there a reason vmcoredd_mutex needs to be replaced and merged into
vmcore_mutex? Is it because, with the old separate vmcoredd_mutex, the
device dump list could be modified concurrently with an open of vmcore?
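
To check my understanding, here is a rough sketch of the locking model
this patch seems to move to (assuming vmcore_mutex is the mutex
introduced in patch 01; the list and node names come from the quoted
diff, and the helper function below is only illustrative, not part of
the patch):

#include <linux/list.h>
#include <linux/mutex.h>

/*
 * After this patch, a single vmcore_mutex serializes both the vmcore
 * state and the device dump list; the dedicated vmcoredd_mutex is gone.
 */
static DEFINE_MUTEX(vmcore_mutex);      /* introduced in patch 01 */
static LIST_HEAD(vmcoredd_list);

/* Illustrative helper only, not a function in the patch. */
static void vmcoredd_add_locked(struct vmcoredd_node *dump)
{
        mutex_lock(&vmcore_mutex);
        list_add_tail(&dump->list, &vmcoredd_list);
        mutex_unlock(&vmcore_mutex);
}

If the intent is that adding device dumps must not race with opening
/proc/vmcore, then taking the same mutex in both paths would close that
window; I just want to confirm that is the motivation.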
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> fs/proc/vmcore.c | 17 ++++++++---------
> 1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
> index 110ce193d20f..b91c304463c9 100644
> --- a/fs/proc/vmcore.c
> +++ b/fs/proc/vmcore.c
> @@ -53,7 +53,6 @@ static struct proc_dir_entry *proc_vmcore;
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> /* Device Dump list and mutex to synchronize access to list */
> static LIST_HEAD(vmcoredd_list);
> -static DEFINE_MUTEX(vmcoredd_mutex);
>
> static bool vmcoredd_disabled;
> core_param(novmcoredd, vmcoredd_disabled, bool, 0);
> @@ -248,7 +247,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
> size_t tsz;
> char *buf;
>
> - mutex_lock(&vmcoredd_mutex);
> + mutex_lock(&vmcore_mutex);
> list_for_each_entry(dump, &vmcoredd_list, list) {
> if (start < offset + dump->size) {
> tsz = min(offset + (u64)dump->size - start, (u64)size);
> @@ -269,7 +268,7 @@ static int vmcoredd_copy_dumps(struct iov_iter *iter, u64 start, size_t size)
> }
>
> out_unlock:
> - mutex_unlock(&vmcoredd_mutex);
> + mutex_unlock(&vmcore_mutex);
> return ret;
> }
>
> @@ -283,7 +282,7 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
> size_t tsz;
> char *buf;
>
> - mutex_lock(&vmcoredd_mutex);
> + mutex_lock(&vmcore_mutex);
> list_for_each_entry(dump, &vmcoredd_list, list) {
> if (start < offset + dump->size) {
> tsz = min(offset + (u64)dump->size - start, (u64)size);
> @@ -306,7 +305,7 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
> }
>
> out_unlock:
> - mutex_unlock(&vmcoredd_mutex);
> + mutex_unlock(&vmcore_mutex);
> return ret;
> }
> #endif /* CONFIG_MMU */
> @@ -1517,9 +1516,9 @@ int vmcore_add_device_dump(struct vmcoredd_data *data)
> dump->size = data_size;
>
> /* Add the dump to driver sysfs list */
> - mutex_lock(&vmcoredd_mutex);
> + mutex_lock(&vmcore_mutex);
> list_add_tail(&dump->list, &vmcoredd_list);
> - mutex_unlock(&vmcoredd_mutex);
> + mutex_unlock(&vmcore_mutex);
>
> vmcoredd_update_size(data_size);
> return 0;
> @@ -1537,7 +1536,7 @@ EXPORT_SYMBOL(vmcore_add_device_dump);
> static void vmcore_free_device_dumps(void)
> {
> #ifdef CONFIG_PROC_VMCORE_DEVICE_DUMP
> - mutex_lock(&vmcoredd_mutex);
> + mutex_lock(&vmcore_mutex);
> while (!list_empty(&vmcoredd_list)) {
> struct vmcoredd_node *dump;
>
> @@ -1547,7 +1546,7 @@ static void vmcore_free_device_dumps(void)
> vfree(dump->buf);
> vfree(dump);
> }
> - mutex_unlock(&vmcoredd_mutex);
> + mutex_unlock(&vmcore_mutex);
> #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */
> }
>
> --
> 2.46.1
>