From: Michal Hocko <mhocko@suse.cz>
To: Dave Hansen <dave@sr71.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Hugh Dickins <hughd@google.com>,
	Dave Hansen <dave.hansen@intel.com>, Tejun Heo <tj@kernel.org>,
	Linux-MM <linux-mm@kvack.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Vladimir Davydov <vdavydov@parallels.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: regression caused by cgroups optimization in 3.17-rc2
Date: Thu, 4 Sep 2014 16:27:21 +0200
Message-ID: <20140904142721.GB14548@dhcp22.suse.cz>
In-Reply-To: <54062F32.5070504@sr71.net>

[Sorry to reply so late]

On Tue 02-09-14 13:57:22, Dave Hansen wrote:
> I, of course, forgot to include the most important detail.  This appears
> to be pretty run-of-the-mill spinlock contention in the resource counter
> code.  Nearly 80% of the CPU is spent spinning in the charge or uncharge
> paths in the kernel.  It is apparently spinning on res_counter->lock in
> both the charge and uncharge paths.
> 
> It already does _some_ batching here on the free side, but that
> apparently breaks down after ~40 threads.
> 
> It's a no-brainer: the patch in question removed an optimization that
> skipped the charging, and now we're seeing overhead from the charging.
> 
> Here's the first entry from perf top:
> 
>     80.18%    80.18%  [kernel]               [k] _raw_spin_lock
>                   |
>                   --- _raw_spin_lock
>                      |
>                      |--66.59%-- res_counter_uncharge_until
>                      |          res_counter_uncharge
>                      |          uncharge_batch
>                      |          uncharge_list
>                      |          mem_cgroup_uncharge_list
>                      |          release_pages
>                      |          free_pages_and_swap_cache

Ouch. free_pages_and_swap_cache completely kills the uncharge batching
because it reduces it to PAGEVEC_SIZE batches.
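
To put a rough number on that: with the pagevec chunking, a single
free_pages_and_swap_cache() call ends up doing one uncharge_batch(), and
thus at least one res_counter->lock round trip, per PAGEVEC_SIZE pages.
A minimal userspace sketch of the arithmetic (PAGEVEC_SIZE = 14 is an
assumption here, and "one lock acquisition per release_pages() batch" is
a simplification of the uncharge path):

#include <stdio.h>

#define PAGEVEC_SIZE	14	/* assumed value, not taken from the kernel headers */

/* one uncharge_batch()/lock round trip per release_pages() chunk */
static int lock_round_trips(int nr_pages, int batch)
{
	return (nr_pages + batch - 1) / batch;
}

int main(void)
{
	int nr = 10000;		/* the upper bound mentioned below */

	printf("chunked: %d lock acquisitions\n",
	       lock_round_trips(nr, PAGEVEC_SIZE));
	printf("batched: %d lock acquisition\n",
	       lock_round_trips(nr, nr));
	return 0;
}

That is roughly 715 lock acquisitions per call instead of one, before the
charge side even enters the picture.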

I think we really do not need the PAGEVEC_SIZE batching anymore. We are
already batching at the tlb_gather layer. That one is limited, so I think
the change below should be safe, but I have to think about it some more.
There is a risk of prolonged lru_lock wait times, but the number of pages
is limited to 10k and the heavy work is done outside of the lock. If this
really turns out to be a problem, we can split the LRU part and the
actual freeing/uncharging into separate functions in this path; a rough
sketch of what I mean follows.
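
This is not a real patch, and release_pages_lru() and release_pages_free()
do not exist; they are made-up names that only illustrate keeping the
lru_lock sections short while the freeing/uncharging is done in one batch
outside of the lock:

void free_pages_and_swap_cache(struct page **pages, int nr)
{
	int i;

	lru_add_drain();
	/* the swap cache freeing does not need lru_lock at all */
	for (i = 0; i < nr; i++)
		free_swap_cache(pages[i]);
	/* pull pages off the LRU in small chunks to bound lru_lock hold times */
	for (i = 0; i < nr; i += PAGEVEC_SIZE)
		release_pages_lru(pages + i, min(nr - i, PAGEVEC_SIZE));
	/* free and uncharge everything in a single batch, outside of the lock */
	release_pages_free(pages, nr);
}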

Could you test with this half-baked patch, please? Unfortunately I didn't
get to test it myself.
---
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ef1f39139b71..154444918685 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -265,18 +265,12 @@ void free_page_and_swap_cache(struct page *page)
 void free_pages_and_swap_cache(struct page **pages, int nr)
 {
 	struct page **pagep = pages;
+	int i;
 
 	lru_add_drain();
-	while (nr) {
-		int todo = min(nr, PAGEVEC_SIZE);
-		int i;
-
-		for (i = 0; i < todo; i++)
-			free_swap_cache(pagep[i]);
-		release_pages(pagep, todo, false);
-		pagep += todo;
-		nr -= todo;
-	}
+	for (i = 0; i < nr; i++)
+		free_swap_cache(pagep[i]);
+	release_pages(pagep, nr, false);
 }
 
 /*
-- 
Michal Hocko
SUSE Labs
