From: Zi Yan <zi.yan@sent.com>
To: linux-mm@kvack.org
Cc: dnellans@nvidia.com, apopple@au1.ibm.com,
paulmck@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
zi.yan@cs.rutgers.edu
Subject: [RFC PATCH 09/14] mm: migrate: Add exchange_page_mthread() and exchange_page_lists_mthread() to exchange two pages or two page lists.
Date: Fri, 17 Feb 2017 10:05:46 -0500
Message-ID: <20170217150551.117028-10-zi.yan@sent.com>
In-Reply-To: <20170217150551.117028-1-zi.yan@sent.com>
From: Zi Yan <ziy@nvidia.com>
When pages are to be migrated into a memory node that is already full, a
two-step migrate_pages() would first have to migrate pages out of the
destination node to make room, and then migrate the new pages in, allocating
a destination page at each step. Instead, we use exchange_page_mthread() to
exchange the contents of the two pages in place, which avoids the two
unnecessary page allocations.

The current implementation only supports anonymous pages.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
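A minimal caller sketch of the idea above (illustration only, not part of
this patch): it assumes both pages are anonymous and of the same order, and
the helper name exchange_instead_of_allocating() is hypothetical.

/*
 * Illustrative only: when the destination node is full, swap the contents
 * of an already-resident anonymous page with the incoming page, rather
 * than allocating a new page on each node for a two-step migration.
 */
static int exchange_instead_of_allocating(struct page *resident,
					   struct page *incoming)
{
	/* Both pages must cover the same number of base pages. */
	if (hpage_nr_pages(resident) != hpage_nr_pages(incoming))
		return -EINVAL;

	/* Multi-threaded, in-place exchange of the two pages' contents. */
	return exchange_page_mthread(resident, incoming,
				     hpage_nr_pages(incoming));
}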
mm/copy_pages.c | 133 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
mm/internal.h | 5 +++
2 files changed, 138 insertions(+)
diff --git a/mm/copy_pages.c b/mm/copy_pages.c
index 516c0a1a57f3..879e2d944ad0 100644
--- a/mm/copy_pages.c
+++ b/mm/copy_pages.c
@@ -146,3 +146,136 @@ int copy_page_lists_mthread(struct page **to, struct page **from, int nr_pages)
return err;
}
+static void exchange_page_routine(char *to, char *from, unsigned long chunk_size)
+{
+	u64 tmp;
+	int i;
+
+	for (i = 0; i < chunk_size; i += sizeof(tmp)) {
+		tmp = *((u64 *)(from + i));
+		*((u64 *)(from + i)) = *((u64 *)(to + i));
+		*((u64 *)(to + i)) = tmp;
+	}
+}
+
+static void exchange_page_work_queue_thread(struct work_struct *work)
+{
+	struct copy_info *my_work = (struct copy_info *)work;
+
+	exchange_page_routine(my_work->to,
+			      my_work->from,
+			      my_work->chunk_size);
+}
+
+int exchange_page_mthread(struct page *to, struct page *from, int nr_pages)
+{
+	int total_mt_num = nr_copythreads;
+	int to_node = page_to_nid(to);
+	int i;
+	struct copy_info *work_items;
+	char *vto, *vfrom;
+	unsigned long chunk_size;
+	const struct cpumask *per_node_cpumask = cpumask_of_node(to_node);
+	int cpu_id_list[32] = {0};
+	int cpu;
+
+	work_items = kzalloc(sizeof(struct copy_info) * total_mt_num,
+			     GFP_KERNEL);
+	if (!work_items)
+		return -ENOMEM;
+
+	i = 0;
+	for_each_cpu(cpu, per_node_cpumask) {
+		if (i >= total_mt_num)
+			break;
+		cpu_id_list[i] = cpu;
+		++i;
+	}
+
+	vfrom = kmap(from);
+	vto = kmap(to);
+	chunk_size = PAGE_SIZE * nr_pages / total_mt_num;
+
+	for (i = 0; i < total_mt_num; ++i) {
+		INIT_WORK((struct work_struct *)&work_items[i],
+			  exchange_page_work_queue_thread);
+
+		work_items[i].to = vto + i * chunk_size;
+		work_items[i].from = vfrom + i * chunk_size;
+		work_items[i].chunk_size = chunk_size;
+
+		queue_work_on(cpu_id_list[i],
+			      system_highpri_wq,
+			      (struct work_struct *)&work_items[i]);
+	}
+
+	/* Wait until it finishes */
+	for (i = 0; i < total_mt_num; ++i)
+		flush_work((struct work_struct *)&work_items[i]);
+
+	kunmap(to);
+	kunmap(from);
+
+	kfree(work_items);
+
+	return 0;
+}
+
+int exchange_page_lists_mthread(struct page **to, struct page **from,
+				int nr_pages)
+{
+	int err = 0;
+	int total_mt_num = nr_copythreads;
+	int to_node = page_to_nid(*to);
+	int i;
+	struct copy_info *work_items;
+	int nr_pages_per_page = hpage_nr_pages(*from);
+	const struct cpumask *per_node_cpumask = cpumask_of_node(to_node);
+	int cpu_id_list[32] = {0};
+	int cpu;
+
+	work_items = kzalloc(sizeof(struct copy_info) * nr_pages,
+			     GFP_KERNEL);
+	if (!work_items)
+		return -ENOMEM;
+
+	total_mt_num = min_t(int, nr_pages, total_mt_num);
+
+	i = 0;
+	for_each_cpu(cpu, per_node_cpumask) {
+		if (i >= total_mt_num)
+			break;
+		cpu_id_list[i] = cpu;
+		++i;
+	}
+
+	for (i = 0; i < nr_pages; ++i) {
+		int thread_idx = i % total_mt_num;
+
+		INIT_WORK((struct work_struct *)&work_items[i],
+			  exchange_page_work_queue_thread);
+
+		work_items[i].to = kmap(to[i]);
+		work_items[i].from = kmap(from[i]);
+		work_items[i].chunk_size = PAGE_SIZE * hpage_nr_pages(from[i]);
+
+		BUG_ON(nr_pages_per_page != hpage_nr_pages(from[i]));
+		BUG_ON(nr_pages_per_page != hpage_nr_pages(to[i]));
+
+		queue_work_on(cpu_id_list[thread_idx], system_highpri_wq,
+			      (struct work_struct *)&work_items[i]);
+	}
+
+	/* Wait until all queued work items have finished */
+	for (i = 0; i < nr_pages; ++i)
+		flush_work((struct work_struct *)&work_items[i]);
+
+	for (i = 0; i < nr_pages; ++i) {
+		kunmap(to[i]);
+		kunmap(from[i]);
+	}
+
+	kfree(work_items);
+
+	return err;
+}
diff --git a/mm/internal.h b/mm/internal.h
index 175e08ed524a..b99a634b4d09 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -501,4 +501,9 @@ extern const struct trace_print_flags gfpflag_names[];
extern int copy_page_lists_mthread(struct page **to,
struct page **from, int nr_pages);
+extern int exchange_page_mthread(struct page *to, struct page *from,
+				 int nr_pages);
+extern int exchange_page_lists_mthread(struct page **to,
+					struct page **from,
+					int nr_pages);
#endif /* __MM_INTERNAL_H */
--
2.11.0
Thread overview: 25+ messages
2017-02-17 15:05 [RFC PATCH 00/14] Accelerating page migrations Zi Yan
2017-02-17 15:05 ` [RFC PATCH 01/14] mm/migrate: Add new mode parameter to migrate_page_copy() function Zi Yan
2017-02-17 15:05 ` [RFC PATCH 02/14] mm/migrate: Make migrate_mode types non-exclussive Zi Yan
2017-02-17 15:05 ` [RFC PATCH 03/14] mm/migrate: Add copy_pages_mthread function Zi Yan
2017-02-23 6:06 ` Naoya Horiguchi
2017-02-23 7:50 ` Anshuman Khandual
2017-02-23 8:02 ` Naoya Horiguchi
2017-03-09 5:35 ` Anshuman Khandual
2017-02-17 15:05 ` [RFC PATCH 04/14] mm/migrate: Add new migrate mode MIGRATE_MT Zi Yan
2017-02-23 6:54 ` Naoya Horiguchi
2017-02-23 7:54 ` Anshuman Khandual
2017-02-17 15:05 ` [RFC PATCH 05/14] mm/migrate: Add new migration flag MPOL_MF_MOVE_MT for syscalls Zi Yan
2017-02-17 15:05 ` [RFC PATCH 06/14] sysctl: Add global tunable mt_page_copy Zi Yan
2017-02-17 15:05 ` [RFC PATCH 07/14] migrate: Add copy_page_lists_mthread() function Zi Yan
2017-02-23 8:54 ` Naoya Horiguchi
2017-03-09 13:02 ` Anshuman Khandual
2017-02-17 15:05 ` [RFC PATCH 08/14] mm: migrate: Add concurrent page migration into move_pages syscall Zi Yan
2017-02-24 8:25 ` Naoya Horiguchi
2017-02-24 15:05 ` Zi Yan
2017-02-17 15:05 ` Zi Yan [this message]
2017-02-17 15:05 ` [RFC PATCH 10/14] mm: Add exchange_pages and exchange_pages_concur functions to exchange two lists of pages instead of two migrate_pages() Zi Yan
2017-02-17 15:05 ` [RFC PATCH 11/14] mm: migrate: Add exchange_pages syscall to exchange two page lists Zi Yan
2017-02-17 15:05 ` [RFC PATCH 12/14] migrate: Add copy_page_dma to use DMA Engine to copy pages Zi Yan
2017-02-17 15:05 ` [RFC PATCH 13/14] mm: migrate: Add copy_page_dma into migrate_page_copy Zi Yan
2017-02-17 15:05 ` [RFC PATCH 14/14] mm: Add copy_page_lists_dma_always to support copy a list of pages Zi Yan