From: Andrew Morton <akpm@linux-foundation.org>
To: Nadav Amit <namit@vmware.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Borislav Petkov <bp@suse.de>, Toshi Kani <toshi.kani@hpe.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Bjorn Helgaas <bhelgaas@google.com>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH 3/3] resource: Introduce resource cache
Date: Mon, 17 Jun 2019 21:57:50 -0700
Message-ID: <20190617215750.8e46ae846c09cd5c1f22fdf9@linux-foundation.org>
In-Reply-To: <20190613045903.4922-4-namit@vmware.com>

On Wed, 12 Jun 2019 21:59:03 -0700 Nadav Amit <namit@vmware.com> wrote:

> For efficient search of resources, as needed to determine the memory
> type for dax page-faults, introduce a cache of the most recently used
> top-level resource. Caching the top-level should be safe as ranges in
> that level do not overlap (unlike those of lower levels).
> 
> Keep the cache per-cpu to avoid possible contention. Whenever a resource
> is added, removed or changed, invalidate the cache on all CPUs. The
> invalidation takes place while resource_lock is held for write,
> preventing possible races.
> 
> This patch provides relatively small performance improvements over the
> previous patch (~0.5% on sysbench), but can benefit systems with many
> resources.

> --- a/kernel/resource.c
> +++ b/kernel/resource.c
> @@ -53,6 +53,12 @@ struct resource_constraint {
>  
>  static DEFINE_RWLOCK(resource_lock);
>  
> +/*
> + * Cache of the top-level resource that was most recently used by
> + * find_next_iomem_res().
> + */
> +static DEFINE_PER_CPU(struct resource *, resource_cache);
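
To make the mechanism concrete, here is a minimal sketch of how such a
per-CPU hint could be consulted on the lookup path.  This is
hypothetical, simplified code, not the actual patch: the names are
invented and the real find_next_iomem_res() is structured differently.

static bool lookup_with_cache(resource_size_t addr, struct resource *out)
{
	struct resource *res;
	bool found = false;

	read_lock(&resource_lock);

	/* Try the most recently used top-level resource first. */
	res = this_cpu_read(resource_cache);
	if (!res || addr < res->start || addr > res->end) {
		/* Miss: linear walk of the non-overlapping top level. */
		for (res = iomem_resource.child; res; res = res->sibling)
			if (res->start <= addr && addr <= res->end)
				break;
		if (res)
			this_cpu_write(resource_cache, res);
	}
	if (res) {
		*out = *res;	/* copy out while the lock is held */
		found = true;
	}

	read_unlock(&resource_lock);
	return found;
}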

A per-cpu cache which is accessed under a kernel-wide read_lock looks a
bit odd - the latency of acquiring that rwlock will swamp the benefit of
isolating the CPUs from each other when accessing resource_cache.

On the other hand, if we have multiple CPUs running
find_next_iomem_res() concurrently then yes, I see the benefit.  Has
the benefit of using a per-cpu cache (rather than a kernel-wide one)
been quantified?


> @@ -262,9 +268,20 @@ static void __release_child_resources(struct resource *r)
>  	}
>  }
>  
> +static void invalidate_resource_cache(void)
> +{
> +	int cpu;
> +
> +	lockdep_assert_held_exclusive(&resource_lock);
> +
> +	for_each_possible_cpu(cpu)
> +		per_cpu(resource_cache, cpu) = NULL;
> +}

All the calls to invalidate_resource_cache() are rather a
maintainability issue - easy to miss one as the code evolves.

Can't we just make find_next_iomem_res() smarter?  For example, start
the lookup from the cached point and, if that fails, do a full sweep?
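
Roughly, something like the sketch below - purely illustrative.  Note
it glosses over the lifetime problem that the explicit invalidation
solves: a cached pointer to a removed resource would dangle, so a real
version still needs some guarantee there.

/* Caller is assumed to hold resource_lock for reading. */
static struct resource *find_top_level(resource_size_t addr)
{
	struct resource *res = this_cpu_read(resource_cache);

	/* Fast path: the cached hint already covers the address. */
	if (res && res->start <= addr && addr <= res->end)
		return res;

	/* Slow path: full sweep of the top level, then refresh the hint. */
	for (res = iomem_resource.child; res; res = res->sibling) {
		if (res->start <= addr && addr <= res->end) {
			this_cpu_write(resource_cache, res);
			return res;
		}
	}
	return NULL;
}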

> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();
> +			invalidate_resource_cache();
> +	invalidate_resource_cache();
> +	invalidate_resource_cache();

Ow.  I guess the maintainability situation can be improved by renaming
resource_lock to something else (to avoid mishaps) and then adding
wrapper functions.  But still.  I can't say this is a super-exciting
patch :(



Thread overview: 27+ messages
2019-06-13  4:59 [PATCH 0/3] resource: find_next_iomem_res() improvements Nadav Amit
     [not found] ` <20190613045903.4922-2-namit@vmware.com>
2019-06-15 22:15   ` [PATCH 1/3] resource: Fix locking in find_next_iomem_res() Sasha Levin
2019-06-17 19:14     ` Nadav Amit
2019-06-18  0:55       ` Sasha Levin
2019-06-18  1:32         ` Nadav Amit
2019-06-18  4:26   ` Andrew Morton
     [not found] ` <20190613045903.4922-4-namit@vmware.com>
2019-06-15 22:16   ` [PATCH 3/3] resource: Introduce resource cache Sasha Levin
2019-06-17 17:20     ` Nadav Amit
2019-06-18  4:57   ` Andrew Morton [this message]
2019-06-18  5:33     ` Nadav Amit
2019-06-18  5:40       ` Nadav Amit
2019-06-19 13:00         ` Bjorn Helgaas
2019-06-19 20:35           ` Nadav Amit
2019-06-19 21:53           ` Dan Williams
2019-06-20 21:31             ` Andi Kleen
2019-06-20 23:13               ` Dan Williams
2019-06-18  6:44 ` [PATCH 0/3] resource: find_next_iomem_res() improvements Dan Williams
2019-06-18 17:42   ` Nadav Amit
2019-06-18 18:30     ` Dan Williams
2019-06-18 21:56       ` Nadav Amit
2019-07-16 22:00         ` Andrew Morton
2019-07-16 22:06           ` Nadav Amit
2019-07-16 22:07           ` Dan Williams
2019-07-16 22:13             ` Nadav Amit
2019-07-16 22:20               ` Dan Williams
2019-07-16 22:28                 ` Nadav Amit
2019-07-16 22:45                   ` Dan Williams
