From: Uladzislau Rezki <urezki@gmail.com>
To: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, LKML <linux-kernel@vger.kernel.org>,
	Baoquan He <bhe@redhat.com>,
	Christoph Hellwig <hch@infradead.org>,
	Matthew Wilcox <willy@infradead.org>,
	Dave Chinner <david@fromorbit.com>,
	Oleksiy Avramchenko <oleksiy.avramchenko@sony.com>
Subject: Re: [PATCH v3 1/2] mm: vmalloc: Remove a global vmap_blocks xarray
Date: Wed, 29 Mar 2023 17:01:11 +0200
Message-ID: <ZCRStzHE06l21T0c@pc636>
In-Reply-To: <132e2d5c-0c1f-4fff-850c-b3fb084455bb@lucifer.local>

Hello, Lorenzo!

> >  /*
> > - * XArray of vmap blocks, indexed by address, to quickly find a vmap block
> > - * in the free path. Could get rid of this if we change the API to return a
> > - * "cookie" from alloc, to be passed to free. But no big deal yet.
> > + * In order to fast access to any "vmap_block" associated with a
> > + * specific address, we store them into a per-cpu xarray. A hash
> > + * function is addr_to_vbq() whereas a key is a vb->va->va_start
> > + * value.
> > + *
> > + * Please note, a vmap_block_queue, which is a per-cpu, is not
> > + * serialized by a raw_smp_processor_id() current CPU, instead
> > + * it is chosen based on a CPU-index it belongs to, i.e. it is
> > + * a hash-table.
> > + *
> > + * An example:
> > + *
> > + *  CPU_1  CPU_2  CPU_0
> > + *    |      |      |
> > + *    V      V      V
> > + * 0     10     20     30     40     50     60
> > + * |------|------|------|------|------|------|...<vmap address space>
> > + *   CPU0   CPU1   CPU2   CPU0   CPU1   CPU2
> > + *
> > + * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to CPU0 zone, thus
> > + *   it access: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to CPU1 zone, thus
> > + *   it access: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
> > + *
> > + * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to CPU2 zone, thus
> > + *   it access: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
> >   */
> 
> OK so if I understand this correctly, you're overloading the per-CPU
> vmap_block_queue array to use as a simple hash based on the address and
> relying on the xa_lock() in xa_insert() to serialise in case of contention?
> 
> I like the general heft of your comment but I feel this could be spelled
> out a little more clearly, something like:-
> 
>   In order to have fast access to any vmap_block object associated with a
>   specific address, we use a hash.
> 
>   Rather than waste space on defining a new hash table, we take advantage
>   of the fact we already have a static per-cpu array vmap_block_queue.
> 
>   This is already used for per-CPU access to the block queue, however we
>   overload this to _also_ act as a vmap_block hash. The hash function is
>   addr_to_vbq() which hashes on vb->va->va_start.
> 
>   This then uses per_cpu() to lookup the _index_ rather than the
>   _cpu_. Each vmap_block_queue contains an xarray of vmap blocks which are
>   indexed on the same key as the hash (vb->va->va_start).
> 
>   xarray read accesses are protected by RCU lock and inserts are protected
>   by a spin lock so there is no risk of a race here.
> 
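Right, that is exactly the idea. In code terms the two sides boil down
to roughly the following (illustration only, not the actual patch hunks;
the *_sketch() names are made up, the key handling on the free path is
simplified to the va_start case, and everything else relies on what is
already defined in mm/vmalloc.c):

/* Alloc side: publish the block under its va_start key. xa_insert()
 * takes the xarray's internal xa_lock, so concurrent inserts that
 * hash to the same bucket serialize on that lock. */
static int vb_insert_sketch(struct vmap_block *vb)
{
        struct vmap_block_queue *vbq = addr_to_vbq(vb->va->va_start);

        return xa_insert(&vbq->vmap_blocks, vb->va->va_start, vb, GFP_KERNEL);
}

/* Free side: the same hash and the same key find the block again.
 * xa_load() is an RCU-protected read, so it cannot race the insert. */
static struct vmap_block *vb_lookup_sketch(unsigned long va_start)
{
        struct vmap_block_queue *vbq = addr_to_vbq(va_start);

        return xa_load(&vbq->vmap_blocks, va_start);
}

With that in mind, how about the following wording: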
/*
 * In order to have fast access to any "vmap_block" associated with a
 * specific address, we use a hash.
 *
 * The per-cpu vmap_block_queue is used in two ways: to serialize
 * access to the free block chains among CPUs (alloc path), and it
 * also acts as a vmap_block hash (alloc/free paths). That is, we
 * overload the already existing per-cpu array and reuse it as a
 * hash table.
 *
 * The hash function is addr_to_vbq(), which maps any address to
 * the index (in the hash) of the zone it belongs to. The per_cpu()
 * macro is then used to access the array entry at that index.
 *
 * An example:
 *
 *  CPU_1  CPU_2  CPU_0
 *    |      |      |
 *    V      V      V
 * 0     10     20     30     40     50     60
 * |------|------|------|------|------|------|...<vmap address space>
 *   CPU0   CPU1   CPU2   CPU0   CPU1   CPU2
 *
 * - CPU_1 invokes vm_unmap_ram(6), 6 belongs to the CPU0 zone, thus
 *   it accesses: CPU0/INDEX0 -> vmap_blocks -> xa_lock;
 *
 * - CPU_2 invokes vm_unmap_ram(11), 11 belongs to the CPU1 zone, thus
 *   it accesses: CPU1/INDEX1 -> vmap_blocks -> xa_lock;
 *
 * - CPU_0 invokes vm_unmap_ram(20), 20 belongs to the CPU2 zone, thus
 *   it accesses: CPU2/INDEX2 -> vmap_blocks -> xa_lock.
 *
 * This technique almost eliminates lock contention in the locking
 * primitives that protect the insert/remove operations.
 */
Are you fine with it?
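
For completeness, the hash function itself is trivial, something along
these lines (again only a sketch, not the exact hunk; VMAP_BLOCK_SIZE as
the zone width and the round-robin mapping over possible CPUs are
assumptions that match the diagram above):

/* Map an address to the per-cpu entry that "owns" its zone, with
 * zones distributed round-robin over possible CPUs, as shown in
 * the diagram in the comment. */
static struct vmap_block_queue *addr_to_vbq(unsigned long addr)
{
        int index = (addr / VMAP_BLOCK_SIZE) % num_possible_cpus();

        return &per_cpu(vmap_block_queue, index);
}

With a zone width of 10 as in the example, vm_unmap_ram(6) lands on
index 0, vm_unmap_ram(11) on index 1 and vm_unmap_ram(20) on index 2,
no matter which CPU actually makes the call.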

--
Uladzislau Rezki



