From: Minchan Kim <minchan@kernel.org>
To: Jungseok Lee <jungseoklee85@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>,
	linux-arm-kernel@lists.infradead.org,
	Catalin Marinas <catalin.marinas@arm.com>,
	barami97@gmail.com, Will Deacon <will.deacon@arm.com>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 2/2] arm64: Implement vmalloc based thread_info allocator
Date: Mon, 25 May 2015 23:58:57 +0900
Message-ID: <20150525145857.GF14922@blaptop>
In-Reply-To: <B873B881-4972-4524-B1D9-4BB05D7248A4@gmail.com>

On Mon, May 25, 2015 at 07:01:33PM +0900, Jungseok Lee wrote:
> On May 25, 2015, at 2:49 AM, Arnd Bergmann wrote:
> > On Monday 25 May 2015 01:02:20 Jungseok Lee wrote:
> >> The fork routine sometimes fails to get a physically contiguous region
> >> for thread_info on 4KB-page systems even though plenty of memory is free.
> >> That is, the physically contiguous region, currently 16KB, is unavailable
> >> because system memory is fragmented.
> >> 
> >> This patch tries to solve the problem by allocating thread_info memory
> >> from vmalloc space instead of the 1:1 (linear) mapping. The downside is
> >> one additional page allocation in the vmalloc case. However, vmalloc
> >> space is large enough, around 240GB with a 39-bit VA and 4KB pages, so
> >> it is not a big trade-off for the fork path.
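> >> 
> >> Conceptually, the allocation path becomes something like the sketch
> >> below (illustrative only; the helper names match kernel/fork.c, but
> >> the exact code in the patch may differ):
> >> 
> >> 	/* thread_info comes from vmalloc space, so only order-0 pages
> >> 	 * are needed; no physically contiguous 16KB block is required. */
> >> 	static struct thread_info *alloc_thread_info_node(struct task_struct *tsk,
> >> 							  int node)
> >> 	{
> >> 		return __vmalloc(THREAD_SIZE, GFP_KERNEL, PAGE_KERNEL);
> >> 	}
> >> 
> >> 	static void free_thread_info(struct thread_info *ti)
> >> 	{
> >> 		vfree(ti);	/* unmaps and frees the individual pages */
> >> 	}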
> > 
> > vmalloc has a rather large runtime cost. I'd argue that failing to allocate
> > thread_info structures means something has gone very wrong.
> 
> That is why the feature is marked "N" by default.
> I focused on fork-routine stability rather than performance.

If the VM has trouble with an order-2 allocation, your system will be in
trouble soon even if fork manages to succeed for the moment, because such
small high-order allocations (i.e., order <= PAGE_ALLOC_COSTLY_ORDER) are
common in the kernel, and the VM should handle them smoothly.
If it doesn't, that means we should fix the VM itself, not a specific
allocation site; fork is just one victim of the underlying problem.
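For reference, PAGE_ALLOC_COSTLY_ORDER is defined as 3 in
include/linux/mmzone.h, so order-2 is well inside the range the page
allocator is expected to satisfy. Roughly (a simplified sketch of the
slow-path policy, not the literal allocator code):

	#include <linux/mmzone.h>	/* PAGE_ALLOC_COSTLY_ORDER == 3 */

	/* Simplified: for orders 0..3 the allocator keeps reclaiming and
	 * retrying rather than failing the allocation outright. */
	static inline bool small_high_order(unsigned int order)
	{
		return order <= PAGE_ALLOC_COSTLY_ORDER;
	}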

> 
> Could you give me an idea of how to evaluate the performance degradation?
> Running some benchmarks would be helpful, but I would like to gather data
> based on a meaningful methodology.
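> 
> For example, a naive in-kernel microbenchmark along these lines might be
> a starting point (an untested sketch comparing the two allocation paths
> for a 16KB stack; it measures allocation cost only, not the TLB effects
> of actually running on a vmalloc'ed stack):
> 
> 	#include <linux/module.h>
> 	#include <linux/slab.h>
> 	#include <linux/vmalloc.h>
> 	#include <linux/ktime.h>
> 
> 	static int __init stackbench_init(void)
> 	{
> 		ktime_t t0;
> 		void *p;
> 		int i;
> 
> 		t0 = ktime_get();
> 		for (i = 0; i < 1000; i++) {
> 			p = kmalloc(16384, GFP_KERNEL);	/* order-2, linear map */
> 			kfree(p);
> 		}
> 		pr_info("kmalloc: %lld ns\n",
> 			ktime_to_ns(ktime_sub(ktime_get(), t0)));
> 
> 		t0 = ktime_get();
> 		for (i = 0; i < 1000; i++) {
> 			p = vmalloc(16384);	/* order-0 pages, mapped */
> 			vfree(p);
> 		}
> 		pr_info("vmalloc: %lld ns\n",
> 			ktime_to_ns(ktime_sub(ktime_get(), t0)));
> 		return 0;
> 	}
> 
> 	static void __exit stackbench_exit(void) { }
> 
> 	module_init(stackbench_init);
> 	module_exit(stackbench_exit);
> 	MODULE_LICENSE("GPL");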
> 
> > Can you describe the scenario that leads to fragmentation this bad?
> 
> Android, but I cannot describe an exact step-by-step reproduction
> procedure since the failure appears randomly. Reading the following thread
> from the mm mailing list, a similar symptom is observed on other systems.
> 
> https://lkml.org/lkml/2015/4/28/59
> 
> Although I do not know the details of the system mentioned in that thread,
> even order-2 page allocations do not proceed smoothly there due to
> fragmentation on a low-memory system.
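> 
> (The symptom is easy to see in /proc/buddyinfo: each column counts free
> blocks of order 0, 1, 2, ... per zone, so a row with large order-0/1
> counts but a near-zero order-2 column is exactly this case. A trivial
> userspace reader, just as a sketch:)
> 
> 	#include <stdio.h>
> 
> 	int main(void)
> 	{
> 		char line[256];
> 		FILE *f = fopen("/proc/buddyinfo", "r");
> 
> 		if (!f)
> 			return 1;
> 		/* Columns after the zone name are free-block counts for
> 		 * order 0 upward; order-2 is the third count. */
> 		while (fgets(line, sizeof(line), f))
> 			fputs(line, stdout);
> 		fclose(f);
> 		return 0;
> 	}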

What Joonsoo has tackled is the generic fragmentation problem, not *a*
fork failure, which is the more correct approach to handling the small
high-order allocation problem.

> 
> I think the point is *low-memory systems*. A 64-bit kernel is usually a
> feasible option when system memory is plentiful, but the combination of a
> 64-bit kernel and a low-memory system is not unusual in the case of ARM64.
> 
> > Could the stack size be reduced to 8KB perhaps?
> 
> I guess probably not.
> 
> A commit, 845ad05e, says that 8KB is not enough to cover the SpecWeb
> benchmark. The stack size is 16KB on x86_64, and I am not sure whether all
> applications that work fine on an x86_64 machine would run well on ARM64
> with an 8KB stack.
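> 
> For reference, the current definitions (as I read them; arm64's in
> arch/arm64/include/asm/thread_info.h, x86_64's in
> arch/x86/include/asm/thread_info.h) are:
> 
> 	/* arm64: a fixed 16KB stack, i.e. order-2 with 4KB pages */
> 	#define THREAD_SIZE		16384
> 	#define THREAD_START_SP		(THREAD_SIZE - 16)
> 
> 	/* x86_64: the same size, expressed as an order */
> 	#define THREAD_SIZE_ORDER	2
> 	#define THREAD_SIZE		(PAGE_SIZE << THREAD_SIZE_ORDER)
> 
> Dropping arm64 to 8KB would make it an order-1 allocation: less likely to
> fail, but still not order-0.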
> 
> Best Regards
> Jungseok Lee

-- 
Kind regards,
Minchan Kim


Thread overview: 22+ messages
2015-05-24 16:02 Jungseok Lee
2015-05-24 17:49 ` Arnd Bergmann
2015-05-25 10:01   ` Jungseok Lee
2015-05-25 14:58     ` Minchan Kim [this message]
2015-05-26 12:10       ` Jungseok Lee
2015-05-27  4:24         ` Minchan Kim
2015-05-27 16:00           ` Jungseok Lee
2015-05-25 16:47     ` Catalin Marinas
2015-05-25 20:29       ` Arnd Bergmann
2015-05-25 22:36         ` Catalin Marinas
2015-05-26  9:51           ` Arnd Bergmann
2015-05-26 13:02       ` Jungseok Lee
2015-05-25 21:20     ` Arnd Bergmann
2015-05-25 14:40 ` Minchan Kim
2015-05-26 11:29   ` Jungseok Lee
2015-05-27  4:10     ` Minchan Kim
2015-05-27  6:22       ` Sergey Senozhatsky
2015-05-27  7:31         ` Arnd Bergmann
2015-05-27 16:05           ` Jungseok Lee
2015-05-27 16:08       ` Jungseok Lee
2015-05-26  2:52 ` yalin wang
2015-05-26 12:21   ` Jungseok Lee
