ksummit.lists.linux.dev archive mirror
From: Benjamin Herrenschmidt <benh@kernel.crashing.org>
To: ksummit-discuss@lists.linuxfoundation.org
Subject: [Ksummit-discuss] [TECH TOPIC] Semantics of MMIO mapping attributes across archs
Date: Sat, 04 Jul 2015 18:17:17 +1000	[thread overview]
Message-ID: <1435997837.3948.21.camel@kernel.crashing.org> (raw)

Alright, it's that time of year ... So here's my attempt at getting
myself invited :-)

We've been talking about some of this on-list recently, and it may
well be that it gets resolved before we even reach KS, but I thought
it would be worthwhile to gather enough people from the various archs
together to hash things out:

We have a pile of mapping attributes (more showing up recently),
typically used for MMIO mappings (but not necessarily exclusively).
ioremap_cache and ioremap_nocache are the old/common ones, but we have
_wc (write combine), _wt (write through) and possibly more around the
corner.
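
To make the attribute zoo concrete, here is a hedged sketch of a
driver picking mapping types; the device, BAR numbers and layout are
entirely made up for illustration, and which guarantees each mapping
actually delivers per-arch is exactly the open question:

```c
/* Illustrative only: a hypothetical PCI device with a register BAR
 * (wants strongly ordered, uncached) and a framebuffer BAR (wants
 * write-combined).
 */
#include <linux/io.h>
#include <linux/pci.h>

static void __iomem *regs;	/* nocache mapping */
static void __iomem *fb;	/* hopefully write-combined */

static int example_map(struct pci_dev *pdev)
{
	regs = pci_ioremap_bar(pdev, 0);	/* nocache semantics */
	if (!regs)
		return -ENOMEM;

	/* ioremap_wc() may silently fall back to nocache on archs
	 * without a write-combine attribute. */
	fb = ioremap_wc(pci_resource_start(pdev, 1),
			pci_resource_len(pdev, 1));
	if (!fb) {
		iounmap(regs);
		return -ENOMEM;
	}
	return 0;
}
```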

What are their precise semantics across all architectures? This is not
clear (and not documented). For example, we define writel(), readl() and
friends as being fully ordered vs. each other but also vs. DMA etc... but
on which mapping types do they actually have this property?
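
For reference, the ordering contract in question is the one drivers
lean on for DMA-then-doorbell sequences. A sketch (the register offset
is hypothetical):

```c
#include <linux/io.h>

/* Hypothetical doorbell write after preparing a DMA descriptor in
 * coherent memory.  writel() is specified as ordered vs. prior DMA
 * memory writes, so no explicit barrier is needed -- but that has
 * only ever been clearly true on a plain nocache mapping.  Whether
 * it still holds on a _wc or _wt mapping is the question here.
 */
static void ring_doorbell(void __iomem *regs, u32 tail)
{
	writel(tail, regs + 0x40);	/* 0x40: made-up doorbell offset */
}

/* The relaxed variant drops the ordering vs. DMA; the driver must
 * then fence explicitly before the MMIO write: */
static void ring_doorbell_relaxed(void __iomem *regs, u32 tail)
{
	wmb();				/* order descriptor writes first */
	writel_relaxed(tail, regs + 0x40);
}
```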

Will _wc() provide the write-combine ability for writel() on all archs,
or does it require writel_relaxed() on some? Will _wc() bring other
side effects, such as loss of read vs. write ordering (on some archs at
least...)? Etc...
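
To pin the _wc questions down, here is what a driver streaming into a
write-combined buffer typically ends up doing today, with the
portability doubts spelled out as comments (offsets and the "frame
ready" register are invented):

```c
#include <linux/io.h>

/* Stream pixels into a WC framebuffer mapping, then kick the device.
 * Open questions this topic is about:
 *  - does memcpy_toio() actually combine on every arch's _wc
 *    mapping, or does combining require _relaxed accessors there?
 *  - is the trailing writel() to the (nocache) register mapping
 *    guaranteed to be ordered after the WC writes, or is an
 *    explicit wmb() needed to flush the combine buffers first?
 */
static void push_frame(void __iomem *fb, void __iomem *regs,
		       const void *src, size_t len)
{
	memcpy_toio(fb, src, len);
	wmb();			/* conservatively flush WC buffers */
	writel(1, regs + 0x10);	/* 0x10: made-up "frame ready" reg */
}
```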

There is a growing matrix of MMIO accessors and mapping types whose
semantics are poorly (if at all) defined. We cannot define them all
exactly for all architectures, as there are too many differences that
would impact them. But we should be able to guarantee at least *some*,
i.e. whether a given type of ordering is guaranteed or not by a given
accessor on a given mapping type, whether write combining (if supported
at all) will happen with a given accessor or not, etc...

As for who should participate, I would say at least one rep from each
major arch who is familiar with the intricacies of that architecture's
memory model, plus possibly others who have dabbled in this stuff
recently, such as Luis R. Rodriguez <mcgrof@suse.com>, who has lately
been proposing patch series to consolidate the use of _wc.

Cheers,
Ben.


Thread overview: 15+ messages
2015-07-04  8:17 Benjamin Herrenschmidt [this message]
2015-07-04 14:12 ` Dan Williams
2015-07-05  3:02   ` Benjamin Herrenschmidt
2015-07-05 18:55     ` Andy Lutomirski
2015-07-05 19:56       ` Benjamin Herrenschmidt
2015-07-05 20:09         ` Andy Lutomirski
2015-07-06  9:33         ` Will Deacon
2015-07-06 22:02           ` Benjamin Herrenschmidt
2015-07-07  9:56             ` Will Deacon
2015-07-07 10:29               ` Will Deacon
2015-07-06  9:52       ` Catalin Marinas
2015-07-06 17:14         ` Andy Lutomirski
2015-07-06 22:04           ` Benjamin Herrenschmidt
2015-07-06 19:11       ` Luck, Tony
2015-07-07  0:01 ` Luis R. Rodriguez
