From: Catalin Marinas <catalin.marinas@arm.com>
To: Christoph Lameter <cl@linux.com>
Cc: Robert Richter <rric@kernel.org>, Joonsoo Kim <js1304@gmail.com>,
	Linux-sh list <linux-sh@vger.kernel.org>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Robert Richter <rrichter@cavium.com>,
	Tirumalesh Chalamarla <tchalamarla@cavium.com>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	linux-mm@kvack.org
Subject: Re: [PATCH] arm64: Increase the max granular size
Date: Wed, 4 Nov 2015 12:36:41 +0000	[thread overview]
Message-ID: <20151104123640.GK7637@e104818-lin.cambridge.arm.com> (raw)
In-Reply-To: <alpine.DEB.2.20.1511031724010.8178@east.gentwo.org>

(+ linux-mm)

On Tue, Nov 03, 2015 at 05:33:25PM -0600, Christoph Lameter wrote:
> On Tue, 3 Nov 2015, Catalin Marinas wrote:
> > (cc'ing Jonsoo and Christoph; summary: slab failure with L1_CACHE_BYTES
> > of 128 and sizeof(kmem_cache_node) of 152)
> 
> Hmmm... Yes, that would mean using the 192-sized kmalloc array, which is
> not a power-of-two slab. But the code looks fine to me.

I'm not entirely sure that cache gets used (or even created):
kmalloc_index(152) returns 8 (INDEX_NODE == 8) since KMALLOC_MIN_SIZE ==
128, so the "kmalloc-node" cache size is 256.
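For reference, the relevant part of the kmalloc_index() rounding,
condensed from include/linux/slab.h of that era (a paraphrase, not the
exact code):

	static __always_inline int kmalloc_index(size_t size)
	{
		if (!size)
			return 0;
		if (size <= KMALLOC_MIN_SIZE)
			return KMALLOC_SHIFT_LOW; /* 7 when KMALLOC_MIN_SIZE == 128 */

		/* the 96/192-byte caches exist only for small minimum alignments */
		if (KMALLOC_MIN_SIZE <= 32 && size > 64 && size <= 96)
			return 1;
		if (KMALLOC_MIN_SIZE <= 64 && size > 128 && size <= 192)
			return 2;

		/* indices 3..6 for the 8..64-byte sizes elided */
		if (size <= 128)
			return 7;
		if (size <= 256)
			return 8;	/* 152 ends up here */
		/* ... larger power-of-two sizes elided ... */
		BUG();
	}

With KMALLOC_MIN_SIZE == 128 the 192-byte branch is compiled out, so a
152-byte kmem_cache_node rounds up to the 256-byte cache at index 8.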

> > If I revert commit 8fc9cf420b36 ("slab: make more slab management
> > structure off the slab") it works but I still need to figure out how
> > slab indices are calculated. The size_index[] array is overridden so
> > that 0..15 are 7 and 16..23 are 8. But the kmalloc_caches[7] has never
> > been populated, hence the BUG_ON. Another option may be to change
> > kmalloc_size and kmalloc_index to cope with KMALLOC_MIN_SIZE of 128.
> >
> > I'll do some more investigation tomorrow.
> 
> The commit allows off-slab management for sizes from PAGE_SIZE >> 5
> upwards, that is 128 bytes.

This means that the first kmalloc cache to be created, "kmalloc-128", is
off-slab.
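For context, the off-slab heuristic in __kmem_cache_create() after that
commit reads roughly as follows (paraphrased from mm/slab.c; with 4K
pages, PAGE_SIZE >> 5 == 128):

	if (size >= (PAGE_SIZE >> 5) && !slab_early_init &&
	    !(flags & SLAB_NOLEAKTRACE))
		/*
		 * Size is large, assume it is best to place the slab
		 * management object off-slab (allows better packing).
		 */
		flags |= CFLGS_OFF_SLAB;

Once slab_early_init has been cleared, "kmalloc-128" itself meets the
size threshold.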

> After that commit kmem_cache_create() would try to allocate an off-slab
> management structure, which is not available during early boot.
> But slab_early_init is set, which should prevent the use of an off-slab
> management infrastructure in kmem_cache_create().
> 
> However, the failure at line 2283 shows that the OFF SLAB flag was
> mistakenly set anyway!!!! Something must have cleared slab_early_init?

slab_early_init is cleared after the "kmem_cache" and "kmalloc-node"
caches are successfully created. After that point, with
KMALLOC_MIN_SIZE == 128, even the smallest kmalloc cache gets off-slab
management.

When trying to create "kmalloc-128" (via create_kmalloc_caches(), with
slab_early_init already 0), __kmem_cache_create() requires a 32-byte
allocation for the freelist (freelist_size), which maps to index 7:
exactly the kmalloc_caches[7] entry we are trying to create.
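To illustrate (a paraphrase of the size_index[] fixups in
mm/slab_common.c; each size_index[] element covers an 8-byte range of
sizes):

	/* KMALLOC_MIN_SIZE == 128: redirect all smaller sizes to index 7 */
	for (i = 8; i <= 128; i += 8)
		size_index[size_index_elem(i)] = 7;	/* "kmalloc-128" */
	/* the 192-byte cache does not exist either: redirect to index 8 */
	for (i = 136; i <= 192; i += 8)
		size_index[size_index_elem(i)] = 8;	/* "kmalloc-256" */

So the 32-byte freelist allocation resolves to kmalloc_caches[7], i.e.
the very "kmalloc-128" cache that is still NULL at this point, and the
BUG_ON on the freelist cache fires.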

The simplest option would be to make sure that off-slab management is
not allowed for caches of KMALLOC_MIN_SIZE or smaller, with the drawback
that not only "kmalloc-128" but any other such caches would be on-slab.

I think a better option would be to first check that there is a
kmalloc_caches[] entry for freelist_size before deciding to go off-slab.
See below:

-----8<------------------------------
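As a rough, hypothetical sketch of the check described above (placement
and variable names are illustrative only, not the actual diff):

	if (flags & CFLGS_OFF_SLAB) {
		/*
		 * Hypothetical guard: fall back to on-slab management
		 * when no kmalloc cache can back the freelist
		 * allocation yet.
		 */
		struct kmem_cache *fc = kmalloc_slab(freelist_size, 0u);

		if (ZERO_OR_NULL_PTR(fc))
			flags &= ~CFLGS_OFF_SLAB;
	}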
