From: Dave Hansen <dave@sr71.net>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	mgorman@suse.de, tim.c.chen@linux.intel.com,
	Dave Hansen <dave@sr71.net>
Subject: [RFCv2][PATCH 0/5] mm: Batch page reclamation under shrink_page_list
Date: Thu, 16 May 2013 13:34:27 -0700	[thread overview]
Message-ID: <20130516203427.E3386936@viggo.jf.intel.com> (raw)

These are an update of Tim Chen's earlier work:

	http://lkml.kernel.org/r/1347293960.9977.70.camel@schen9-DESK

I broke the patches up a bit more, and tried to incorporate
changes based on feedback from Mel and Andrew.

Changes for v2:
 * use the page_mapping() accessor instead of direct access
   to page->mapping (direct access can crash when running
   into swap cache pages; see the sketch after this list)
 * group the batch function's introduction patch with
   its first use
 * rename a few functions as suggested by Mel
 * Ran some single-threaded tests to look for regressions
   caused by the batching.  If there is overhead, it is only
   in the worst-case scenarios, and then only in hundredths
   of a percent of CPU time.
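
Since the accessor is the part that is easy to get wrong, here is
a minimal sketch of what page_mapping() guards against (modeled on
the kernel helper; page_mapping_sketch() is an illustrative name,
not the real function):

	struct address_space *page_mapping_sketch(struct page *page)
	{
		struct address_space *mapping = page->mapping;

		/* Swap cache: the address_space must be derived
		 * from the swap entry, not read from page->mapping. */
		if (unlikely(PageSwapCache(page))) {
			swp_entry_t entry = { .val = page_private(page) };
			return swap_address_space(entry);
		}
		/* Anonymous page: page->mapping really points at an
		 * anon_vma, flagged by the PAGE_MAPPING_ANON bit. */
		if ((unsigned long)mapping & PAGE_MAPPING_ANON)
			return NULL;
		return mapping;
	}

In both of those cases, blindly dereferencing page->mapping as an
address_space is a bug.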

If you're curious how effective the batching is, I have a quick
and dirty patch to keep some stats:

	https://www.sr71.net/~dave/intel/rmb-stats-only.patch

--

During page reclamation in the shrink_page_list() function, two
locks are taken on a page-by-page basis: the tree lock protecting
the radix tree of the page's mapping, and mapping->i_mmap_mutex
protecting the mapped pages.  This set deals only with
mapping->tree_lock.
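
To make that concrete, here is a minimal sketch of the batching
idea; remove_mapping_batch() is an illustrative name, and the
refcount checks, swap cache handling, and "freepage" callbacks the
real patches deal with are elided:

	/* Pages queued on 'batch' must all belong to 'mapping'. */
	static void remove_mapping_batch(struct address_space *mapping,
					 struct list_head *batch)
	{
		struct page *page, *next;

		/* One lock acquisition covers the whole batch ... */
		spin_lock_irq(&mapping->tree_lock);
		list_for_each_entry_safe(page, next, batch, lru)
			/* ... instead of one per __remove_mapping() */
			__delete_from_page_cache(page);
		spin_unlock_irq(&mapping->tree_lock);
	}

The win comes from amortizing the lock's cacheline bouncing across
many pages, which is why it matters most on large multi-socket
systems.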

Tim measured a 14% throughput improvement with a workload that
puts heavy pressure on the page cache by reading many large
mmap()ed files simultaneously on an 8-socket Westmere server.

I've been testing these by running large parallel kernel compiles
on systems that are under memory pressure.  During development,
I caught quite a few races on smaller setups, and it has been
quite stable on a large (160 logical CPU / 1TB) system.



Thread overview: 9+ messages
2013-05-16 20:34 Dave Hansen [this message]
2013-05-16 20:34 ` [RFCv2][PATCH 1/5] defer clearing of page_private() for swap cache pages Dave Hansen
2013-05-16 20:34 ` [RFCv2][PATCH 2/5] make 'struct page' and swp_entry_t variants of swapcache_free() Dave Hansen
2013-05-17 13:27   ` Mel Gorman
2013-05-16 20:34 ` [RFCv2][PATCH 3/5] break up __remove_mapping() Dave Hansen
2013-05-16 20:34 ` [RFCv2][PATCH 4/5] break out mapping "freepage" code Dave Hansen
2013-05-16 20:34 ` [RFCv2][PATCH 5/5] batch shrink_page_list() locking operations Dave Hansen
2013-05-17 13:35   ` Mel Gorman
2013-05-20 21:55 ` [RFCv2][PATCH 0/5] mm: Batch page reclamation under shrink_page_list Seth Jennings
