From: Pasha Tatashin <pasha.tatashin@soleen.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org,
pasha.tatashin@soleen.com, linux-kernel@vger.kernel.org,
rientjes@google.com, dwmw2@infradead.org,
baolu.lu@linux.intel.com, joro@8bytes.org, will@kernel.org,
robin.murphy@arm.com, iommu@lists.linux.dev
Subject: [RFC 0/3] iommu/intel: Free empty page tables on unmaps
Date: Thu, 21 Dec 2023 03:19:12 +0000
Message-ID: <20231221031915.619337-1-pasha.tatashin@soleen.com>

This series frees empty page tables on unmaps. It is intended to be a
low-overhead feature.
A reader-writer lock is used to synchronize page table operations; most
of the time the lock is held as a reader. It is held as a writer only
for a short period while unmapping a page that is bigger than the
current IOVA request. In all other cases the lock is taken read-only.
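
Roughly, the locking pattern looks like the sketch below. This is
illustrative only: the lock and function names are made up and are not
taken from the patches.

#include <linux/compiler.h>
#include <linux/spinlock.h>
#include <linux/types.h>

/* Hypothetical: one lock protecting a domain's page-table tree. */
static DEFINE_RWLOCK(pgtable_lock);

static void sketch_map(u64 *pte, u64 val)
{
        /* Mapping never frees page-table pages: reader is enough. */
        read_lock(&pgtable_lock);
        WRITE_ONCE(*pte, val);
        read_unlock(&pgtable_lock);
}

static void sketch_unmap_large(void)
{
        /*
         * Unmapping a page bigger than the requested IOVA range may
         * free now-empty page-table pages, so hold the lock as a
         * writer for this short window.
         */
        write_lock(&pgtable_lock);
        /* ... clear entries and free empty tables ... */
        write_unlock(&pgtable_lock);
}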
page->refcount is used to track the number of entries in each page
table page.
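
Conceptually, the counting works like the sketch below (made-up helper
names; it assumes the page keeps its allocation reference, so a table
is recognized as empty when the count drops back to one):

#include <linux/compiler.h>
#include <linux/mm.h>
#include <linux/types.h>

static void sketch_set_entry(struct page *pgtable, u64 *pte, u64 val)
{
        WRITE_ONCE(*pte, val);
        page_ref_inc(pgtable);          /* one more valid entry */
}

/* Returns true when the table became empty and may be freed. */
static bool sketch_clear_entry(struct page *pgtable, u64 *pte)
{
        WRITE_ONCE(*pte, 0);
        /* Only the allocation's own reference remains. */
        return page_ref_dec_return(pgtable) == 1;
}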
Microbenchmark data using iova_stress[1]:
Base:
yqbtg12:/home# ./iova_stress -s 16
dma_size: 4K iova space: 16T iommu: ~ 32783M time: 22.297s

Fix:
yqbtg12:/home# ./iova_stress -s 16
dma_size: 4K iova space: 16T iommu: ~ 0M time: 23.388s
The test maps/unmaps 4K pages and cycles through the IOVA space.
Base uses 32G of memory, and the test completes in 22.3s.
Fix uses 0G of memory, and the test completes in 23.4s.
I believe the proposed fix is a good compromise in terms of complexity
and scalability. A more scalable solution would be to use a separate
reader-writer lock per page table, and use the page->private field to
store the lock itself.
However, since the IOMMU already provides some protection (i.e. no one
else touches the IOVA space of an in-flight map/unmap request), we can
avoid the extra complexity, rely on a single RW lock for the whole page
table, and stay in reader mode most of the time.
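
For reference, the finer-grained alternative mentioned above might look
roughly like the sketch below; this is illustrative only and is not
what this series implements.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Allocate a per-page-table lock and stash it in page->private. */
static int sketch_pgtable_lock_init(struct page *pgtable)
{
        rwlock_t *lock = kmalloc(sizeof(*lock), GFP_KERNEL);

        if (!lock)
                return -ENOMEM;
        rwlock_init(lock);
        set_page_private(pgtable, (unsigned long)lock);
        return 0;
}

static rwlock_t *sketch_pgtable_lock(struct page *pgtable)
{
        return (rwlock_t *)page_private(pgtable);
}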
[1] https://github.com/soleen/iova_stress
Pasha Tatashin (3):
iommu/intel: Use page->refcount to count number of entries in IOMMU
iommu/intel: synchronize page table map and unmap operations
iommu/intel: free empty page tables on unmaps
drivers/iommu/intel/iommu.c | 153 ++++++++++++++++++++++++++++--------
drivers/iommu/intel/iommu.h | 44 +++++++++--
2 files changed, 158 insertions(+), 39 deletions(-)
--
2.43.0.472.g3155946c3a-goog