From: Nadav Amit <namit@vmware.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux MM <linux-mm@kvack.org>, Borislav Petkov <bp@suse.de>,
Toshi Kani <toshi.kani@hpe.com>,
Peter Zijlstra <peterz@infradead.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Bjorn Helgaas <bhelgaas@google.com>,
Ingo Molnar <mingo@kernel.org>
Subject: Re: [PATCH 0/3] resource: find_next_iomem_res() improvements
Date: Tue, 18 Jun 2019 21:56:43 +0000
Message-ID: <19C3DCA0-823E-46CB-A758-D5F82C5FA3C8@vmware.com>
In-Reply-To: <CAPcyv4hstt+0teXPtAq2nwFQaNb9TujgetgWPVMOnYH8JwqGeA@mail.gmail.com>
> On Jun 18, 2019, at 11:30 AM, Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Tue, Jun 18, 2019 at 10:42 AM Nadav Amit <namit@vmware.com> wrote:
>>> On Jun 17, 2019, at 11:44 PM, Dan Williams <dan.j.williams@intel.com> wrote:
>>>
>>> On Wed, Jun 12, 2019 at 9:59 PM Nadav Amit <namit@vmware.com> wrote:
>>>> Running some microbenchmarks on dax keeps showing find_next_iomem_res()
>>>> as a place in which a significant amount of time is spent. It appears
>>>> that in order to determine the cacheability that is required for the
>>>> PTE, lookup_memtype() is called, and it traverses the resource list in
>>>> an inefficient manner. This patch-set tries to improve the situation.
>>>
>>> Let's just do this lookup once per device, cache that, and replay it
>>> to modified vmf_insert_* routines that trust the caller to already
>>> know the pgprot values.
>>
>> IIUC, one device can have multiple regions with different characteristics,
>> which require different cacheability.
>
> Not for pmem. It will always be one common cacheability setting for
> the entirety of persistent memory.
>
>> Apparently, that is the reason there
>> is a tree of resources. Could you be more specific about where you want
>> to cache it?
>
> The reason for lookup_memtype() was to try to prevent mixed
> cacheability settings of pages across different processes. The
> mapping type for pmem/dax is established by one of:
>
> drivers/nvdimm/pmem.c:413:  addr = devm_memremap_pages(dev, &pmem->pgmap);
> drivers/nvdimm/pmem.c:425:  addr = devm_memremap_pages(dev, &pmem->pgmap);
> drivers/nvdimm/pmem.c:432:  addr = devm_memremap(dev, pmem->phys_addr,
> drivers/nvdimm/pmem.c-433-                       pmem->size, ARCH_MEMREMAP_PMEM);
>
> ...and is constant for the life of the device and all subsequent mappings.
>
>> Perhaps you want to cache the cacheability mode in vma->vm_page_prot (which
>> I see being done in quite a few cases), but I don’t know the code well
>> enough to be certain that every vma should have a single protection and
>> that it should not change afterwards.
>
> No, I'm thinking this would naturally fit as a property hanging off a
> 'struct dax_device'; we could then create versions of vmf_insert_mixed()
> and vmf_insert_pfn_pmd() that bypass track_pfn_insert() and insert that
> saved value.
Thanks for the detailed explanation. I’ll give it a try (the moment I find
some free time). I still think that patch 2/3 is beneficial, but based on
your feedback, patch 3/3 should be dropped.
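
To make sure we are talking about the same thing, here is a rough,
untested sketch of the direction as I understand it. The "pgprot" field
on struct dax_device, dax_set_pgprot(), and the vmf_insert_mixed_prot()
variant are all made-up names for the proposed additions; nothing below
exists in the tree today:

    /*
     * Hypothetical sketch only. The idea is to resolve the memtype once
     * per device instead of calling lookup_memtype() (via
     * track_pfn_insert()) on every fault.
     */
    #include <linux/mm.h>
    #include <linux/pfn_t.h>

    struct dax_device {
            /* ... existing fields ... */
            pgprot_t pgprot;        /* memtype resolved once at setup */
    };

    /* Device setup: do the memtype lookup once and cache the result. */
    static void dax_set_pgprot(struct dax_device *dax_dev, pgprot_t prot)
    {
            dax_dev->pgprot = prot; /* e.g. write-back for pmem */
    }

    /* Fault path: replay the cached value, skipping track_pfn_insert(). */
    static vm_fault_t dax_insert_pfn(struct vm_fault *vmf, pfn_t pfn,
                                     struct dax_device *dax_dev)
    {
            /*
             * vmf_insert_mixed_prot() would be the new variant that
             * trusts the caller-provided pgprot instead of walking the
             * resource tree on every fault.
             */
            return vmf_insert_mixed_prot(vmf->vma, vmf->address, pfn,
                                         dax_dev->pgprot);
    }

If that matches what you meant, the PMD case would get a
vmf_insert_pfn_pmd_prot() counterpart along the same lines.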