From: Hillf Danton <hdanton@sina.com>
To: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Roman Gushchin <guro@fb.com>, Michal Hocko <mhocko@suse.com>,
Hillf Danton <hdanton@sina.com>,
Matthew Wilcox <willy@infradead.org>,
linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
Thomas Garnier <thgarnie@google.com>,
Oleksiy Avramchenko <oleksiy.avramchenko@sonymobile.com>,
Steven Rostedt <rostedt@goodmis.org>,
Joel Fernandes <joelaf@google.com>,
Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
Tejun Heo <tj@kernel.org>
Subject: Re: [PATCH 2/4] mm/vmap: preload a CPU with one object for split purpose
Date: Fri, 24 May 2019 18:33:16 +0800 [thread overview]
Message-ID: <20190524103316.1352-1-hdanton@sina.com> (raw)
In-Reply-To: <20190522150939.24605-1-urezki@gmail.com>
On Wed, 22 May 2019 17:09:37 +0200 Uladzislau Rezki (Sony) wrote:
> /*
> + * Preload this CPU with one extra vmap_area object to ensure
> + * that we have it available when the fit type of a free area
> + * is NE_FIT_TYPE.
> + *
> + * The preload is done in non-atomic context, thus it allows us
> + * to use more permissive allocation masks and therefore to be
> + * more stable under low-memory conditions and high memory
> + * pressure.
> + *
> + * On success, *preloaded is set to 1 with preemption disabled.
> + * On allocation failure, *preloaded is set to 0 and preemption
> + * is left enabled. Note it has to be paired with
> + * ne_fit_preload_end().
> + */
> +static void
> +ne_fit_preload(int *preloaded)
> +{
> + preempt_disable();
> +
> + if (!__this_cpu_read(ne_fit_preload_node)) {
> + struct vmap_area *node;
> +
> + preempt_enable();
> + node = kmem_cache_alloc(vmap_area_cachep, GFP_KERNEL);
Alternatively, can you please take another look at the upside of using
the memory node parameter of alloc_vmap_area() when allocating the va
slab object, given that this preload, unlike adjust_va_to_fit_type(),
is invoked without the vmap_area_lock acquired?
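
Purely for illustration, a minimal sketch of what that could look
like, assuming the "node" argument (dropped in patch 1/4) were kept
or passed down to the preload path; kmem_cache_alloc_node() is the
only slab API change relative to your version:

static void
ne_fit_preload(int *preloaded, int node)
{
	preempt_disable();

	if (!__this_cpu_read(ne_fit_preload_node)) {
		struct vmap_area *va;

		preempt_enable();

		/*
		 * Allocate the spare object on the requested node.
		 * Sleeping is fine here since the vmap_area_lock
		 * is not held at this point.
		 */
		va = kmem_cache_alloc_node(vmap_area_cachep,
				GFP_KERNEL, node);
		if (va == NULL) {
			*preloaded = 0;
			return;
		}

		preempt_disable();

		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, va))
			kmem_cache_free(vmap_area_cachep, va);
	}

	*preloaded = 1;
}

The caller in alloc_vmap_area() would then pass its node down, so the
spare object ends up on the same node as the area being allocated.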
> + if (node == NULL) {
> + *preloaded = 0;
> + return;
> + }
> +
> + preempt_disable();
> +
> + if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, node))
> + kmem_cache_free(vmap_area_cachep, node);
> + }
> +
> + *preloaded = 1;
> +}
> +
> +static void
> +ne_fit_preload_end(int preloaded)
> +{
> + if (preloaded)
> + preempt_enable();
> +}
> +
> +/*
> * Allocate a region of KVA of the specified size and alignment, within the
> * vstart and vend.
> */
> @@ -1034,6 +1100,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> struct vmap_area *va;
> unsigned long addr;
> int purged = 0;
> + int preloaded;
>
> BUG_ON(!size);
> BUG_ON(offset_in_page(size));
> @@ -1056,6 +1123,12 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
>
> retry:
> + /*
> + * Even if the preload fails we do not really care;
> + * just proceed as is. The "overflow" path will refill
> + * the cache we allocate from.
> + */
> + ne_fit_preload(&preloaded);
> spin_lock(&vmap_area_lock);
>
> /*
> @@ -1063,6 +1136,8 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
> * returned. Therefore trigger the overflow path.
> */
> addr = __alloc_vmap_area(size, align, vstart, vend);
> + ne_fit_preload_end(preloaded);
> +
> if (unlikely(addr == vend))
> goto overflow;
>
> --
> 2.11.0
>
Best Regards
Hillf
Thread overview: 12+ messages
2019-05-22 15:09 [PATCH 1/4] mm/vmap: remove "node" argument Uladzislau Rezki (Sony)
2019-05-22 15:09 ` [PATCH 2/4] mm/vmap: preload a CPU with one object for split purpose Uladzislau Rezki (Sony)
2019-05-22 18:19 ` Andrew Morton
2019-05-23 11:42 ` Uladzislau Rezki
2019-05-22 15:09 ` [PATCH 3/4] mm/vmap: get rid of one single unlink_va() when merge Uladzislau Rezki (Sony)
2019-05-22 18:19 ` Andrew Morton
2019-05-23 11:49 ` Uladzislau Rezki
2019-05-22 15:09 ` [PATCH 4/4] mm/vmap: move BUG_ON() check to the unlink_va() Uladzislau Rezki (Sony)
2019-05-22 18:19 ` Andrew Morton
2019-05-23 12:07 ` Uladzislau Rezki
2019-05-24 10:33 ` Hillf Danton [this message]
2019-05-24 14:14 ` [PATCH 2/4] mm/vmap: preload a CPU with one object for split purpose Uladzislau Rezki