From: Andrew Morton <akpm@linux-foundation.org>
To: David Vrabel <david.vrabel@citrix.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
xen-devel@lists.xenproject.org,
Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
Subject: Re: [PATCHv2] mm/vmalloc: avoid soft lockup warnings when vunmap()'ing large ranges
Date: Tue, 11 Mar 2014 12:46:59 -0700
Message-ID: <20140311124659.9565a5cc86ade7084eabe24d@linux-foundation.org>
In-Reply-To: <1394563223-5045-1-git-send-email-david.vrabel@citrix.com>
On Tue, 11 Mar 2014 18:40:23 +0000 David Vrabel <david.vrabel@citrix.com> wrote:
> If vunmap() is used to unmap a large (e.g., 50 GB) region, it may take
> sufficiently long that it triggers soft lockup warnings.
>
> Add a cond_resched() into vunmap_pmd_range() so the calling task may
> be rescheduled after unmapping each PMD entry. This is how
> zap_pmd_range() fixes the same problem for userspace mappings.
>
> All callers may sleep except for the APEI GHES driver (apei/ghes.c)
> which calls unmap_kernel_range_no_flush() from NMI and IRQ contexts.
> This driver only unmaps a single page, so don't call cond_resched() if
> the unmap doesn't cross a PMD boundary.
>
> Reported-by: Dietmar Hahn <dietmar.hahn@ts.fujitsu.com>
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> v2: don't call cond_resched() at the end of a PMD range.
> ---
> mm/vmalloc.c | 2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 0fdf968..1a8b162 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -75,6 +75,8 @@ static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end)
> if (pmd_none_or_clear_bad(pmd))
> continue;
> vunmap_pte_range(pmd, addr, next);
> + if (next != end)
> + cond_resched();
> } while (pmd++, addr = next, addr != end);
> }
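
For reference, with this hunk applied the loop reads roughly as follows
(a sketch reconstructed from the mainline mm/vmalloc.c of this vintage,
not a verbatim copy of the patched file):

  static void vunmap_pmd_range(pud_t *pud, unsigned long addr, unsigned long end)
  {
          pmd_t *pmd;
          unsigned long next;

          pmd = pmd_offset(pud, addr);
          do {
                  next = pmd_addr_end(addr, end);
                  if (pmd_none_or_clear_bad(pmd))
                          continue;       /* nothing mapped in this pmd */
                  vunmap_pte_range(pmd, addr, next);
                  /*
                   * Give up the CPU between pmds, but not after the last
                   * one, so a single-page unmap (e.g. GHES in NMI/IRQ
                   * context) never reaches cond_resched().
                   */
                  if (next != end)
                          cond_resched();
          } while (pmd++, addr = next, addr != end);
  }
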
Worried. This adds a scheduling point into a previously atomic function.
Are there any callers which call into here from interrupt context or with
a lock held, etc?
I started doing an audit, got to
mvebu_hwcc_dma_ops.free->__dma_free_remap->unmap_kernel_range->vunmap_page_range
and gave up - there's just too much.
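
The one caller we know has to stay atomic is the GHES one mentioned in the
changelog.  The pattern it depends on is roughly this (illustrative sketch,
not the driver's actual code):

  /*
   * Illustrative sketch only: an NMI/IRQ-context caller unmapping a
   * single page.  One page fits inside one pmd, so the loop body runs
   * once with next == end and the new cond_resched() is skipped.
   */
  static void example_unmap_one_page_atomic(unsigned long vaddr)
  {
          unmap_kernel_range_no_flush(vaddr, PAGE_SIZE);
  }
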
The best I can suggest is to do
--- a/mm/vmalloc.c~mm-vmalloc-avoid-soft-lockup-warnings-when-vunmaping-large-ranges-fix
+++ a/mm/vmalloc.c
@@ -71,6 +71,8 @@ static void vunmap_pmd_range(pud_t *pud,
pmd_t *pmd;
unsigned long next;
+ might_sleep();
+
pmd = pmd_offset(pud, addr);
do {
next = pmd_addr_end(addr, end);
so we at least find out about bugs promptly, but that's a pretty lame
approach.
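
To illustrate what that buys: with CONFIG_DEBUG_ATOMIC_SLEEP enabled, a
caller like the hypothetical one below would splat immediately, whether or
not cond_resched() ever actually reschedules (again a sketch; the lock and
size are made up):

  /* Hypothetical atomic caller, for illustration only. */
  spin_lock(&some_lock);
  /*
   * might_sleep() at the top of vunmap_pmd_range() warns
   * "BUG: sleeping function called from invalid context" here,
   * even if no reschedule actually happens.
   */
  unmap_kernel_range(vaddr, 2 * PMD_SIZE);
  spin_unlock(&some_lock);
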
Who the heck is mapping 50GB?