linux-mm.kvack.org archive mirror
From: <zhouxianrong@huawei.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
	viro@zeniv.linux.org.uk, mingo@redhat.com, peterz@infradead.org,
	hannes@cmpxchg.org, mgorman@techsingularity.net, vbabka@suse.cz,
	mhocko@suse.com, vdavydov.dev@gmail.com, minchan@kernel.org,
	riel@redhat.com, zhouxianrong@huawei.com, zhouxiyu@huawei.com,
	zhangshiming5@huawei.com, won.ho.park@huawei.com,
	tuxiaobing@huawei.com
Subject: [PATCH] bdi flusher should not be throttled here when it falls into the buddy slow path
Date: Tue, 18 Oct 2016 15:12:45 +0800
Message-ID: <1476774765-21130-1-git-send-email-zhouxianrong@huawei.com>

From: z00281421 <z00281421@notesmail.huawei.com>

A bdi flusher may enter the page allocation slow path via writepage or
kmalloc. In that case the flusher, acting as a direct reclaimer, should
not be throttled here, because it cannot reclaim clean file pages or
anonymous pages at that moment; moreover, the writeback rate of dirty
pages would be slowed down and other direct reclaimers and kswapd would
be affected. The bdi flusher should be I/O-scheduled by get_request()
rather than throttled here.
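
For illustration only, a stand-alone user-space model of the decision this
patch changes (not kernel code; the congestion query is mocked and the
backing_dev_info NULL check is omitted). It mirrors the branch order of the
patched current_may_throttle() in the diff below, showing that a thread
carrying PF_BDI_FLUSHER is never stalled in reclaim:

/*
 * Stand-alone model of the patched decision.  The flag values follow
 * include/linux/sched.h, but bdi_write_congested() is mocked here.
 */
#include <stdbool.h>
#include <stdio.h>

#define PF_LESS_THROTTLE 0x00100000	/* throttle me less: I clean memory */
#define PF_BDI_FLUSHER   0x01000000	/* new flag introduced by this patch */

/* stands in for bdi_write_congested(current->backing_dev_info) */
static bool mock_bdi_write_congested(bool congested)
{
	return congested;
}

static bool model_current_may_throttle(unsigned int flags, bool bdi_congested)
{
	if (!(flags & PF_LESS_THROTTLE))
		return true;	/* ordinary direct reclaimer: may stall */

	if (flags & PF_BDI_FLUSHER)
		return false;	/* flusher: pace it in get_request(), not here */

	return mock_bdi_write_congested(bdi_congested);
}

int main(void)
{
	/* flusher on a congested bdi: stalled before the patch, not after */
	printf("flusher may throttle: %d\n",
	       model_current_may_throttle(PF_LESS_THROTTLE | PF_BDI_FLUSHER, true));
	/* nfsd-style PF_LESS_THROTTLE user on a congested bdi: still stalls */
	printf("nfsd-style may throttle: %d\n",
	       model_current_may_throttle(PF_LESS_THROTTLE, true));
	return 0;
}

Compiled with any C compiler, the first line prints 0 (the flusher proceeds
to the block layer and is paced there) and the second prints 1 (an
nfsd-style PF_LESS_THROTTLE user still stalls on a congested bdi).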

Signed-off-by: z00281421 <z00281421@notesmail.huawei.com>
---
 fs/fs-writeback.c     |    4 ++--
 include/linux/sched.h |    1 +
 mm/vmscan.c           |   15 +++++++++++----
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 05713a5..f6bf067 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1908,7 +1908,7 @@ void wb_workfn(struct work_struct *work)
 	long pages_written;
 
 	set_worker_desc("flush-%s", dev_name(wb->bdi->dev));
-	current->flags |= PF_SWAPWRITE;
+	current->flags |= (PF_SWAPWRITE | PF_BDI_FLUSHER | PF_LESS_THROTTLE);
 
 	if (likely(!current_is_workqueue_rescuer() ||
 		   !test_bit(WB_registered, &wb->state))) {
@@ -1938,7 +1938,7 @@ void wb_workfn(struct work_struct *work)
 	else if (wb_has_dirty_io(wb) && dirty_writeback_interval)
 		wb_wakeup_delayed(wb);
 
-	current->flags &= ~PF_SWAPWRITE;
+	current->flags &= ~(PF_SWAPWRITE | PF_BDI_FLUSHER | PF_LESS_THROTTLE);
 }
 
 /*
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 62c68e5..4bb70f2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2232,6 +2232,7 @@ extern void thread_group_cputime_adjusted(struct task_struct *p, cputime_t *ut,
 #define PF_KTHREAD	0x00200000	/* I am a kernel thread */
 #define PF_RANDOMIZE	0x00400000	/* randomize virtual address space */
 #define PF_SWAPWRITE	0x00800000	/* Allowed to write to swap */
+#define PF_BDI_FLUSHER  0x01000000	/* I am bdi flusher */
 #define PF_NO_SETAFFINITY 0x04000000	/* Userland is not allowed to meddle with cpus_allowed */
 #define PF_MCE_EARLY    0x08000000      /* Early kill for mce process policy */
 #define PF_MUTEX_TESTER	0x20000000	/* Thread belongs to the rt mutex tester */
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0fe8b71..492e9e7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1643,12 +1643,19 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
  * If a kernel thread (such as nfsd for loop-back mounts) services
  * a backing device by writing to the page cache it sets PF_LESS_THROTTLE.
  * In that case we should only throttle if the backing device it is
- * writing to is congested.  In other cases it is safe to throttle.
+ * writing to is congested.  Another case is the bdi flusher, which
+ * should not be throttled here even if its bdi is congested.
+ * In other cases it is safe to throttle.
  */
-static int current_may_throttle(void)
+static bool current_may_throttle(void)
 {
-	return !(current->flags & PF_LESS_THROTTLE) ||
-		current->backing_dev_info == NULL ||
+	if (!(current->flags & PF_LESS_THROTTLE))
+		return true;
+
+	if (current->flags & PF_BDI_FLUSHER)
+		return false;
+
+	return current->backing_dev_info == NULL ||
 		bdi_write_congested(current->backing_dev_info);
 }
 
-- 
1.7.9.5



Thread overview: 8+ messages
2016-10-18  7:12 zhouxianrong [this message]
2016-10-18  9:34 ` Hillf Danton
2016-10-18  9:59 ` Mel Gorman
2016-10-18 11:08   ` zhouxianrong
2016-10-18 11:42     ` Michal Hocko
2016-10-20 12:38 ` zhouxianrong
2016-10-20 13:05   ` Mika Penttilä
2016-10-20 13:28   ` Michal Hocko
