From: Christopher Lameter <cl@linux.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: lsf-pc@lists.linux-foundation.org, linux-mm@kvack.org
Subject: Re: Memory management facing a 400Gbps network link
Date: Tue, 19 Feb 2019 14:21:50 +0000 [thread overview]
Message-ID: <01000169062262ea-777bfd38-e0f9-4e9c-806f-1c64e507ea2c-000000@email.amazonses.com> (raw)
In-Reply-To: <20190219122609.GN4525@dhcp22.suse.cz>
On Tue, 19 Feb 2019, Michal Hocko wrote:
> On Tue 12-02-19 18:25:50, Christopher Lameter wrote:
> > 400G Infiniband will become available this year. This means that the data
> > ingest speeds can be higher than the bandwidth of the processor
> > interacting with its own memory.
> >
> > For example a single hardware thread is limited to 20Gbyte/sec whereas the
> > network interface provides 50Gbytes/sec. These rates can only be obtained
> > currently with pinned memory.
> >
> > How can we evolve the memory management subsystem to operate at higher
> > speeds with more the comforts of paging and system calls that we are used
> > to?
>
> Realistically, is there anything we _can_ do when the HW is the
> bottleneck?
Well, the hardware is one problem. The fact that a single core cannot
handle the full memory bandwidth can be addressed by spreading the
processing of the data across multiple processors. So could the memory
subsystem be made aware of that? How do we load balance across cores so
that we can handle the full bandwidth?
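Just to make the numbers concrete, here is a rough userspace sketch of
fanning the ingest processing out over multiple threads. Only the
50 GB/s vs. 20 GB/s figures come from the numbers quoted above; the
buffer size, the memset() stand-in for real per-core processing and the
way the thread count is derived are made up for illustration.

/* Rough sketch: spread processing of one ingest buffer over enough
 * threads to cover the link bandwidth. Not from the original mail. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define LINK_GB_PER_SEC  50   /* ~400 Gb/s network link */
#define CORE_GB_PER_SEC  20   /* what one hardware thread can move */
/* At least ceil(50/20) = 3 cores are needed just to keep up. */
#define NTHREADS ((LINK_GB_PER_SEC + CORE_GB_PER_SEC - 1) / CORE_GB_PER_SEC)
#define BUF_SIZE (64UL << 20) /* hypothetical 64 MiB ingest buffer */

struct slice { char *base; size_t len; };

static void *consume(void *arg)
{
	struct slice *s = arg;

	/* Stand-in for the real per-core processing of received data. */
	memset(s->base, 0, s->len);
	return NULL;
}

int main(void)
{
	pthread_t tid[NTHREADS];
	struct slice s[NTHREADS];
	char *buf = malloc(BUF_SIZE);
	unsigned long i;

	if (!buf)
		return 1;

	for (i = 0; i < NTHREADS; i++) {
		s[i].base = buf + i * (BUF_SIZE / NTHREADS);
		s[i].len = BUF_SIZE / NTHREADS;
		pthread_create(&tid[i], NULL, consume, &s[i]);
	}
	for (i = 0; i < NTHREADS; i++)
		pthread_join(tid[i], NULL);

	printf("processed %lu MiB with %d threads\n", BUF_SIZE >> 20, NTHREADS);
	free(buf);
	return 0;
}

In practice the fan-out would presumably be driven by multi-queue/RSS
on the NIC rather than by the application slicing one buffer, but the
arithmetic is the same.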
The other is that the memory needs to be pinned, and all sorts of special
measures and tuning need to be done to make this actually work. Is there
any way to simplify this?
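For reference, the dance an application goes through today looks roughly
like the following libibverbs sketch. The device choice (device 0), the
1 GiB buffer size and the access flags are arbitrary assumptions, and
most error handling is trimmed:

#include <infiniband/verbs.h>
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 1UL << 30;   /* 1 GiB ingest buffer, made up */
	int num;
	struct ibv_device **devs = ibv_get_device_list(&num);

	if (!devs || num == 0)
		return 1;

	struct ibv_context *ctx = ibv_open_device(devs[0]);
	struct ibv_pd *pd = ibv_alloc_pd(ctx);

	/* Often backed by huge pages to cut TLB pressure; plain anon here. */
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	/*
	 * This is the step that pins the pages for the HCA. The memlock
	 * rlimit usually has to be raised for buffers of this size.
	 */
	struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
				       IBV_ACCESS_LOCAL_WRITE |
				       IBV_ACCESS_REMOTE_WRITE);
	if (!mr) {
		perror("ibv_reg_mr");
		return 1;
	}

	printf("registered %zu bytes, lkey=0x%x\n", len, mr->lkey);

	ibv_dereg_mr(mr);
	munmap(buf, len);
	ibv_dealloc_pd(pd);
	ibv_close_device(ctx);
	ibv_free_device_list(devs);
	return 0;
}

It is the ibv_reg_mr() step that actually pins the pages, and on top of
that the application typically has to raise RLIMIT_MEMLOCK and set up
huge pages before any of this works at line rate.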
Also, the need for page pinning becomes a problem in itself, since the
majority of the system's memory would need to be pinned. At that point the
application is effectively doing the memory management itself, is it not?
Thread overview: 14+ messages
2019-02-12 18:25 Christopher Lameter
2019-02-15 16:34 ` Jerome Glisse
2019-02-19 12:26 ` Michal Hocko
2019-02-19 14:21 ` Christopher Lameter [this message]
2019-02-19 17:36 ` Michal Hocko
2019-02-19 18:21 ` Christopher Lameter
2019-02-19 18:42 ` Alexander Duyck
2019-02-19 19:13 ` Michal Hocko
2019-02-19 20:46 ` Christopher Lameter
2019-02-20 8:31 ` Michal Hocko
2019-02-21 18:15 ` Christopher Lameter
2019-02-21 18:24 ` [Lsf-pc] " Rik van Riel
2019-02-21 18:47 ` Christopher Lameter
2019-02-21 20:13 ` Jerome Glisse