From: Barry Song <21cnbao@gmail.com>
To: jasowang@redhat.com
Cc: 21cnbao@gmail.com, 42.hyeyoo@gmail.com,
akpm@linux-foundation.org, cl@linux.com, eperezma@redhat.com,
hailong.liu@oppo.com, hch@infradead.org, iamjoonsoo.kim@lge.com,
linux-mm@kvack.org, lstoakes@gmail.com,
maxime.coquelin@redhat.com, mhocko@suse.com, mst@redhat.com,
penberg@kernel.org, rientjes@google.com,
roman.gushchin@linux.dev, torvalds@linux-foundation.org,
urezki@gmail.com, v-songbaohua@oppo.com, vbabka@suse.cz,
virtualization@lists.linux.dev, xuanzhuo@linux.alibaba.com
Subject: Re: [PATCH RFC 1/5] vdpa: try to fix the potential crash due to misusing __GFP_NOFAIL
Date: Mon, 29 Jul 2024 18:05:29 +1200
Message-ID: <20240729060529.93243-1-21cnbao@gmail.com>
In-Reply-To: <CACGkMEs49KckODWSpe7VPfTeshogni0_eOdkMO0b7zW2A5YX2w@mail.gmail.com>
On Mon, Jul 29, 2024 at 3:42 PM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Jul 25, 2024 at 3:00 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Thu, Jul 25, 2024 at 6:08 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Thu 25-07-24 10:50:45, Barry Song wrote:
> > > > On Thu, Jul 25, 2024 at 12:27 AM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Wed 24-07-24 20:55:40, Barry Song wrote:
> > > [...]
> > > > > > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > > > index 791d38d6284c..eff700e5f7a2 100644
> > > > > > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > > > > > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > > > > > @@ -287,28 +287,44 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
> > > > > >  {
> > > > > >  	struct vduse_bounce_map *map;
> > > > > >  	unsigned long i, count;
> > > > > > +	struct page **pages = NULL;
> > > > > >
> > > > > >  	write_lock(&domain->bounce_lock);
> > > > > >  	if (!domain->user_bounce_pages)
> > > > > >  		goto out;
> > > > > > -
> > > > > >  	count = domain->bounce_size >> PAGE_SHIFT;
> > > > > > +	write_unlock(&domain->bounce_lock);
> > > > > > +
> > > > > > +	pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
> > > > > > +	for (i = 0; i < count; i++)
> > > > > > +		pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
> > > > >
> > > > > AFAICS vduse_domain_release calls this function with
> > > > > spin_lock(&domain->iotlb_lock) so dropping &domain->bounce_lock is not
> > > > > sufficient.
> > > >
> > > > yes. this is true:
> > > >
> > > > static int vduse_domain_release(struct inode *inode, struct file *file)
> > > > {
> > > > 	struct vduse_iova_domain *domain = file->private_data;
> > > >
> > > > 	spin_lock(&domain->iotlb_lock);
> > > > 	vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
> > > > 	vduse_domain_remove_user_bounce_pages(domain);
> > > > 	vduse_domain_free_kernel_bounce_pages(domain);
> > > > 	spin_unlock(&domain->iotlb_lock);
> > > > 	put_iova_domain(&domain->stream_iovad);
> > > > 	put_iova_domain(&domain->consistent_iovad);
> > > > 	vhost_iotlb_free(domain->iotlb);
> > > > 	vfree(domain->bounce_maps);
> > > > 	kfree(domain);
> > > >
> > > > 	return 0;
> > > > }
> > > >
> > > > This is quite a pain. I admit I don't have knowledge of this driver, and I don't
> > > > think it's safe to release two locks and then reacquire them. The situation is
> > > > rather complex. Therefore, I would prefer if the VDPA maintainers could
> > > > take the lead in implementing a proper fix.
> > >
> > > Would it be possible to move all that work to a deferred context?
> >
> > My understanding is that we need to be aware of both the iotlb_lock and
> > bounce_lock to implement the correct changes. As long as we still need
> > to acquire these two locks in a deferred context, there doesn't seem to
> > be any difference.
> >
> > I can do the memory pre-allocation before spin_lock(&domain->iotlb_lock),
> > but I have no knowledge whether the "count" will change after I make
> > the preallocation.
> >
> > diff --git a/drivers/vdpa/vdpa_user/iova_domain.c
> > b/drivers/vdpa/vdpa_user/iova_domain.c
> > index 791d38d6284c..7ec87ef33d42 100644
> > --- a/drivers/vdpa/vdpa_user/iova_domain.c
> > +++ b/drivers/vdpa/vdpa_user/iova_domain.c
> > @@ -544,9 +544,12 @@ static int vduse_domain_release(struct inode
> > *inode, struct file *file)
> > {
> > struct vduse_iova_domain *domain = file->private_data;
> >
> > + struct page **pages;
> > + spin_lock(&domain->iotlb_lock); maybe also + bounce_lock?
> > + count = domain->bounce_size >> PAGE_SHIFT;
> > + spin_unlock(&domain->iotlb_lock);
>
> We probably don't need any lock here as bounce_size won't be changed .
>
> > +
> > + preallocate_count_pages(pages, count);
> > +
> > ....
> > spin_lock(&domain->iotlb_lock);
> > vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
> > - vduse_domain_remove_user_bounce_pages(domain);
> > + vduse_domain_remove_user_bounce_pages(domain, pages);
> > vduse_domain_free_kernel_bounce_pages(domain);
> > spin_unlock(&domain->iotlb_lock);
> > put_iova_domain(&domain->stream_iovad);
>
> This seems to work.
Thanks, Jason. I personally have no knowledge of vDPA. Could you please help
review and test the patch below?
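
In short, the patch preallocates the bounce pages in a sleepable context
before the spinlocks are taken, and only consumes the preallocated array
while the locks are held. A minimal sketch of the pattern (illustrative
only; consume_pages() just stands in for whatever work needs the pages
under the lock):

	/* sleepable context: may block, so __GFP_NOFAIL is legitimate here */
	pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
	for (i = 0; i < count; i++)
		pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);

	/* atomic context: no allocation, only consume what was preallocated */
	spin_lock(&lock);
	consume_pages(pages, count);
	spin_unlock(&lock);

	kfree(pages);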
From 1f3cae091159bfcaffdb4a999a4a8e37db2eacf1 Mon Sep 17 00:00:00 2001
From: Barry Song <v-songbaohua@oppo.com>
Date: Wed, 24 Jul 2024 20:55:40 +1200
Subject: [PATCH RFC v2] vdpa: try to fix the potential crash due to misusing
__GFP_NOFAIL
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
mm doesn't support non-blockable __GFP_NOFAIL allocation, because
__GFP_NOFAIL without direct reclamation may just result in a busy loop
within non-sleepable contexts. The page allocator therefore warns on such
requests and lets them fail:
static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
						struct alloc_context *ac)
{
	...
	/*
	 * Make sure that __GFP_NOFAIL request doesn't leak out and make sure
	 * we always retry
	 */
	if (gfp_mask & __GFP_NOFAIL) {
		/*
		 * All existing users of the __GFP_NOFAIL are blockable, so warn
		 * of any new users that actually require GFP_NOWAIT
		 */
		if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
			goto fail;
		...
	}
	...
fail:
	warn_alloc(gfp_mask, ac->nodemask,
			"page allocation failure: order:%u", order);
got_pg:
	return page;
}
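
So a __GFP_NOFAIL request made from a non-sleepable context can in fact
fail and return NULL, and a caller that relies on the nofail semantics will
dereference that NULL. That is the pattern in the current driver, where the
allocation happens under write_lock(&domain->bounce_lock), roughly:

	/* atomic context: GFP_ATOMIC | __GFP_NOFAIL may still return NULL */
	page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
	memcpy_from_page(page_address(page),	/* NULL dereference on failure */
			 map->bounce_page, 0, PAGE_SIZE);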
Let's move the memory allocation out of the atomic context and use
the normal sleepable context to get pages.
[RFC]: This has only been compile-tested; I'd prefer that the VDPA
maintainers take it from here.
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: "Eugenio Pérez" <eperezma@redhat.com>
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
drivers/vdpa/vdpa_user/iova_domain.c | 21 ++++++++++++++++-----
drivers/vdpa/vdpa_user/iova_domain.h | 3 ++-
drivers/vdpa/vdpa_user/vduse_dev.c | 13 ++++++++++++-
3 files changed, 30 insertions(+), 7 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 791d38d6284c..014809ac2b7c 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -283,7 +283,7 @@ int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
 	return ret;
 }
 
-void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
+void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain, struct page **pages)
 {
 	struct vduse_bounce_map *map;
 	unsigned long i, count;
@@ -294,15 +294,16 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
 
 	count = domain->bounce_size >> PAGE_SHIFT;
 	for (i = 0; i < count; i++) {
-		struct page *page = NULL;
+		struct page *page = pages[i];
 
 		map = &domain->bounce_maps[i];
-		if (WARN_ON(!map->bounce_page))
+		if (WARN_ON(!map->bounce_page)) {
+			put_page(page);
 			continue;
+		}
 
 		/* Copy user page to kernel page if it's in use */
 		if (map->orig_phys != INVALID_PHYS_ADDR) {
-			page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
 			memcpy_from_page(page_address(page),
 					 map->bounce_page, 0, PAGE_SIZE);
 		}
@@ -543,10 +544,19 @@ static int vduse_domain_mmap(struct file *file, struct vm_area_struct *vma)
 static int vduse_domain_release(struct inode *inode, struct file *file)
 {
 	struct vduse_iova_domain *domain = file->private_data;
+	struct page **pages = NULL;
+	unsigned long count, i;
+
+	if (domain->user_bounce_pages) {
+		count = domain->bounce_size >> PAGE_SHIFT;
+		pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
+		for (i = 0; i < count; i++)
+			pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
+	}
 
 	spin_lock(&domain->iotlb_lock);
 	vduse_iotlb_del_range(domain, 0, ULLONG_MAX);
-	vduse_domain_remove_user_bounce_pages(domain);
+	vduse_domain_remove_user_bounce_pages(domain, pages);
 	vduse_domain_free_kernel_bounce_pages(domain);
 	spin_unlock(&domain->iotlb_lock);
 	put_iova_domain(&domain->stream_iovad);
@@ -554,6 +564,7 @@ static int vduse_domain_release(struct inode *inode, struct file *file)
 	vhost_iotlb_free(domain->iotlb);
 	vfree(domain->bounce_maps);
 	kfree(domain);
+	kfree(pages);
 
 	return 0;
 }
diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/iova_domain.h
index f92f22a7267d..db0b793d86db 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.h
+++ b/drivers/vdpa/vdpa_user/iova_domain.h
@@ -74,7 +74,8 @@ void vduse_domain_reset_bounce_map(struct vduse_iova_domain *domain);
 int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
 				       struct page **pages, int count);
 
-void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain);
+void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain,
+					   struct page **pages);
 
 void vduse_domain_destroy(struct vduse_iova_domain *domain);
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 7ae99691efdf..df7c1b6f1350 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -1030,6 +1030,8 @@ static int vduse_dev_queue_irq_work(struct vduse_dev *dev,
 static int vduse_dev_dereg_umem(struct vduse_dev *dev,
 				u64 iova, u64 size)
 {
+	struct page **pages = NULL;
+	unsigned long count, i;
 	int ret;
 
 	mutex_lock(&dev->mem_lock);
@@ -1044,13 +1046,22 @@ static int vduse_dev_dereg_umem(struct vduse_dev *dev,
 	if (dev->umem->iova != iova || size != dev->domain->bounce_size)
 		goto unlock;
 
-	vduse_domain_remove_user_bounce_pages(dev->domain);
+	if (dev->domain->user_bounce_pages) {
+		count = dev->domain->bounce_size >> PAGE_SHIFT;
+		pages = kmalloc_array(count, sizeof(*pages),
+				      GFP_KERNEL | __GFP_NOFAIL);
+		for (i = 0; i < count; i++)
+			pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
+	}
+
+	vduse_domain_remove_user_bounce_pages(dev->domain, pages);
 	unpin_user_pages_dirty_lock(dev->umem->pages,
 				    dev->umem->npages, true);
 	atomic64_sub(dev->umem->npages, &dev->umem->mm->pinned_vm);
 	mmdrop(dev->umem->mm);
 	vfree(dev->umem->pages);
 	kfree(dev->umem);
+	kfree(pages);
 	dev->umem = NULL;
 	ret = 0;
 unlock:
--
2.34.1
>
> Thanks
>
> >
> >
> > > --
> > > Michal Hocko
> > > SUSE Labs
> >
Thanks
Barry