From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
To: Dave Hansen <dave@sr71.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
dave.jiang@intel.com, akpm@linux-foundation.org,
dhillf@gmail.com, Mel Gorman <mgorman@suse.de>
Subject: Re: [PATCH] mm: call cond_resched() per MAX_ORDER_NR_PAGES pages copy
Date: Mon, 18 Nov 2013 16:56:18 -0500 [thread overview]
Message-ID: <1384811778-7euptzgp-mutt-n-horiguchi@ah.jp.nec.com> (raw)
In-Reply-To: <528A7D36.5020500@sr71.net>
On Mon, Nov 18, 2013 at 12:48:54PM -0800, Dave Hansen wrote:
> On 11/18/2013 12:20 PM, Naoya Horiguchi wrote:
> >> > Really, though, a lot of things seem to have MAX_ORDER set up so that
> >> > it's at 256MB or 512MB. That's an awful lot to do between rescheds.
> > Yes.
> >
> > BTW, I found that we have the same problem for other functions like
> > copy_user_gigantic_page, copy_user_huge_page, and maybe clear_gigantic_page.
> > So we had better handle them too.
>
> Is there a problem you're trying to solve here? The common case of the
> cond_resched() call boils down to a read of a percpu variable which will
> surely be in the L1 cache after the first run around the loop. In other
> words, it's about as cheap of an operation as we're going to get.
Yes, cond_resched() is cheap when should_resched() is false (and it is in the common case).
> Why bother trying to "optimize" it?
My concern was that if we call cond_resched() too often, the copying thread
could take too long on a heavily loaded system, because it would yield the
CPU on every iteration of the copy loop.
But that seems to be an extreme case, so I can't push for it strongly.
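For illustration, here is a rough sketch of the kind of batching I mean
(not the actual patch; the function name is hypothetical, it is modeled
loosely on copy_user_gigantic_page(), and it assumes a contiguous mem_map
for simplicity):

static void copy_huge_page_sketch(struct page *dst, struct page *src,
				  unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		/*
		 * Yield at most once per MAX_ORDER_NR_PAGES subpages
		 * instead of checking on every iteration.
		 */
		if ((i % MAX_ORDER_NR_PAGES) == 0)
			cond_resched();
		copy_highpage(dst + i, src + i);
	}
}

Whether the batching buys anything over a per-subpage cond_resched() is
exactly the question above.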
Thanks,
Naoya
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org
Thread overview: 13+ messages
2013-11-15 22:55 [v3][PATCH 0/2] v3: fix hugetlb vs. anon-thp copy page Dave Hansen
2013-11-15 22:55 ` [v3][PATCH 1/2] mm: hugetlbfs: Add VM_BUG_ON()s to catch non-hugetlbfs pages Dave Hansen
2013-11-15 22:55 ` [v3][PATCH 2/2] mm: thp: give transparent hugepage code a separate copy_page Dave Hansen
2013-11-18 10:32 ` Mel Gorman
2013-11-18 18:51 ` [v3][PATCH 2/2] mm: thp: give transparent hugepage code a separate copy_page Naoya Horiguchi
2013-11-18 18:54 ` [PATCH] mm: call cond_resched() per MAX_ORDER_NR_PAGES pages copy Naoya Horiguchi
2013-11-18 19:02 ` Dave Hansen
2013-11-18 20:20 ` Naoya Horiguchi
2013-11-18 20:48 ` Dave Hansen
2013-11-18 21:56 ` Naoya Horiguchi [this message]
2013-11-18 22:29 ` Dave Hansen
2013-11-19 0:34 ` Naoya Horiguchi
2013-11-18 19:23 ` [v3][PATCH 0/2] v3: fix hugetlb vs. anon-thp copy page Jiang, Dave