From: huang ying <huang.ying.caritas@gmail.com>
To: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
Cc: "Huang, Ying" <ying.huang@intel.com>,
linux-mm@kvack.org, akpm@linux-foundation.org,
Wei Xu <weixugc@google.com>, Yang Shi <shy828301@gmail.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Tim C Chen <tim.c.chen@intel.com>,
Michal Hocko <mhocko@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Hesham Almatary <hesham.almatary@huawei.com>,
Dave Hansen <dave.hansen@intel.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>,
Alistair Popple <apopple@nvidia.com>,
Dan Williams <dan.j.williams@intel.com>,
Johannes Weiner <hannes@cmpxchg.org>,
jvgediya.oss@gmail.com, Bharata B Rao <bharata@amd.com>
Subject: Re: [PATCH v14 04/10] mm/demotion/dax/kmem: Set node's abstract distance to MEMTIER_DEFAULT_DAX_ADISTANCE
Date: Tue, 16 Aug 2022 15:28:08 +0800
Message-ID: <CAC=cRTMZZ9bqyC7pnxD1zUWqfBiQ9U7im+8EYa_8GVK8iA7HXQ@mail.gmail.com>
In-Reply-To: <cd1c13ee-6fc3-bde8-96f9-8c3c93441275@linux.ibm.com>
On Tue, Aug 16, 2022 at 1:10 PM Aneesh Kumar K V
<aneesh.kumar@linux.ibm.com> wrote:
>
> On 8/15/22 8:09 AM, Huang, Ying wrote:
> > "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> >
[snip]
> >>
> >> +/*
> >> + * Default abstract distance assigned to the NUMA node onlined
> >> + * by DAX/kmem if the low level platform driver didn't initialize
> >> + * one for this NUMA node.
> >> + */
> >> +#define MEMTIER_DEFAULT_DAX_ADISTANCE (MEMTIER_ADISTANCE_DRAM * 2)
> >
> > If my understanding is correct, this is targeting Optane DCPMM for
> > now.  The measured results in the following paper are:
> >
> > https://arxiv.org/pdf/2002.06018.pdf
> >
> > Section: 2.1 Read/Write Latencies
> >
> > "
> > For read access, the latency of DCPMM was 400.1% higher than that of
> > DRAM. For write access, it was 407.1% higher.
> > "
> >
> > Section: 2.2 Read/Write Bandwidths
> >
> > "
> > For read access, the throughput of DCPMM was 37.1% of DRAM. For write
> > access, it was 7.8%
> > "
> >
> > According to the above data, I think the MEMTIER_DEFAULT_DAX_ADISTANCE
> > can be "5 * MEMTIER_ADISTANCE_DRAM".
> >
>
> If we map every 100% increase in latency to a new memory tier, we essentially
> end up with 4 memory tiers here.  Each memory tier covers an abstract distance
> range of 128, which gives a total adistance increase over MEMTIER_ADISTANCE_DRAM
> of 512.  That puts DEFAULT_DAX_DISTANCE at 1024, i.e. MEMTIER_ADISTANCE_DRAM * 2.
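Spelling out that arithmetic (taking MEMTIER_ADISTANCE_DRAM = 512, which is
what the numbers above imply rather than a value I checked in the patch):

  MEMTIER_ADISTANCE_DRAM + 4 * 128 = 512 + 512 = 1024 = 2 * MEMTIER_ADISTANCE_DRAM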
If my understanding is correct, you are suggesting a kind of logarithmic
mapping from latency to abstract distance?  That is,

  abstract_distance = log2(latency)

while I am suggesting a kind of linear mapping from latency to abstract
distance, that is,

  abstract_distance = C * latency

I think the linear mapping is easier to understand.  Are there good
reasons to prefer a logarithmic mapping?
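To make the comparison concrete, here is an illustrative user-space sketch of
the two mappings (not proposed kernel code; the 100 ns DRAM reference latency,
the MEMTIER_ADISTANCE_DRAM value of 512, and the 128-per-doubling step are
assumptions for this example only):

/*
 * Illustrative only: compare a linear and a logarithmic mapping from
 * measured access latency to abstract distance.  The constants below
 * are assumed for the example, not taken from the kernel sources.
 * Build with: gcc -o adist adist.c -lm
 */
#include <math.h>
#include <stdio.h>

#define ADISTANCE_DRAM		512
#define DRAM_LATENCY_NS		100	/* assumed reference latency */

/* linear: abstract distance scales directly with latency vs. DRAM */
static int adistance_linear(int latency_ns)
{
	return ADISTANCE_DRAM * latency_ns / DRAM_LATENCY_NS;
}

/* logarithmic: every doubling of latency adds one 128-wide step */
static int adistance_log(int latency_ns)
{
	return ADISTANCE_DRAM +
	       128 * (int)log2((double)latency_ns / DRAM_LATENCY_NS);
}

int main(void)
{
	int dcpmm_latency = 5 * DRAM_LATENCY_NS;	/* ~400% higher than DRAM */

	printf("linear: %d\n", adistance_linear(dcpmm_latency));	/* 2560 */
	printf("log:    %d\n", adistance_log(dcpmm_latency));		/* 768  */
	return 0;
}

With the linear mapping the DCPMM node in the example lands at
5 * MEMTIER_ADISTANCE_DRAM (2560), while the logarithmic mapping keeps it much
closer to DRAM (768); that gap is why the choice of mapping matters here.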
Best Regards,
Huang, Ying
Thread overview: 21+ messages
2022-08-12 5:56 [PATCH v14 00/10] mm/demotion: Memory tiers and demotion Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 01/10] mm/demotion: Add support for explicit memory tiers Aneesh Kumar K.V
2022-08-16 8:28 ` huang ying
2022-08-12 5:57 ` [PATCH v14 02/10] mm/demotion: Move memory demotion related code Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 03/10] mm/demotion: Add hotplug callbacks to handle new numa node onlined Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 04/10] mm/demotion/dax/kmem: Set node's abstract distance to MEMTIER_DEFAULT_DAX_ADISTANCE Aneesh Kumar K.V
2022-08-15 2:25 ` Huang, Ying
2022-08-15 2:39 ` Huang, Ying
2022-08-16 5:09 ` Aneesh Kumar K V
2022-08-16 7:28 ` huang ying [this message]
2022-08-16 8:12 ` Bharata B Rao
2022-08-16 8:26 ` huang ying
2022-08-16 14:45 ` Bharata B Rao
2022-08-17 1:02 ` Huang, Ying
2022-08-12 5:57 ` [PATCH v14 05/10] mm/demotion: Build demotion targets based on explicit memory tiers Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 06/10] mm/demotion: Add pg_data_t member to track node memory tier details Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 07/10] mm/demotion: Drop memtier from memtype Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 08/10] mm/demotion: Demote pages according to allocation fallback order Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 09/10] mm/demotion: Update node_is_toptier to work with memory tiers Aneesh Kumar K.V
2022-08-12 5:57 ` [PATCH v14 10/10] lib/nodemask: Optimize node_random for nodemask with single NUMA node Aneesh Kumar K.V
2022-08-15 2:49 ` [PATCH v14 00/10] mm/demotion: Memory tiers and demotion Huang, Ying