From: Kautuk Consul <consul.kautuk@gmail.com>
To: minchan@kernel.org, riel@redhat.com,
kosaki.motohiro@jp.fujitsu.com, Zheng Liu <wenqing.lz@taobao.com>,
linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Subject: Re: Fwd: Control page reclaim granularity
Date: Tue, 13 Mar 2012 13:13:14 +0530 [thread overview]
Message-ID: <CAFPAmTTPxGzrZrW+FR4B_MYDB372HyzdnioO0=CRwx0zQueRSQ@mail.gmail.com> (raw)
In-Reply-To: <4F5EF563.5000700@openvz.org>
Hi,
I noticed this discussion and decided to pitch in a small idea.
It would be nice to range-lock an inode's pages by recording which
ranges are locked. This could also give the kernel some generally
useful routines for range locking within a single inode.
However, wouldn't this add some overhead to shrink_page_list(), since
that code would have to walk all these ranges while trying to reclaim
a single page?
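To make the overhead concern concrete, here is a minimal userspace
sketch of what such per-mapping range tracking might look like. All of
the names here (pin_range, mapping_pins, page_index_pinned) are
invented for illustration; nothing like this exists in the kernel, and
a real patch would embed the list in struct address_space with proper
locking:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical: one entry per range of page indexes that usermode has
 * asked to keep resident. */
struct pin_range {
	unsigned long start;	/* first page index, inclusive */
	unsigned long end;	/* last page index, inclusive */
	struct pin_range *next;
};

/* Hypothetical stand-in for the per-inode state; in a real patch this
 * would live in struct address_space. */
struct mapping_pins {
	struct pin_range *ranges;
};

/* The check a reclaim path such as shrink_page_list() would have to
 * perform for every candidate page: walk all ranges of the page's
 * mapping.  This is O(number of ranges) per page, which is exactly
 * the overhead worried about above. */
static bool page_index_pinned(const struct mapping_pins *m,
			      unsigned long idx)
{
	const struct pin_range *r;

	for (r = m->ranges; r; r = r->next)
		if (idx >= r->start && idx <= r->end)
			return true;
	return false;
}
```

An interval tree rather than a linear list would reduce the per-page
cost to O(log n), but even that is extra work on a hot reclaim path.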
One small suggestion from my side: why don't we implement something
like "complete page-cache reclaim control from usermode"?
We could set/unset AS_UNEVICTABLE on a file's mapping (as Konstantin
mentioned) from usermode via ioctl or fcntl, or perhaps even go as far
as adding an O_NORECL flag to the open() system call.
After setting AS_UNEVICTABLE, the usermode application could then
choose which pages to keep or drop using fadvise(WILLNEED) and
fadvise(DONTNEED).
(I think the presence of a VMA might not really be required for this
idea.)
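The intended call sequence from the application side might look like
the sketch below. The F_SET_NORECLAIM fcntl command is purely
hypothetical (it does not exist; it just marks where the proposed
interface would sit), while the posix_fadvise() calls are the real,
existing API:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical step: mark the whole inode's mapping AS_UNEVICTABLE.
 * F_SET_NORECLAIM is invented here to illustrate the proposal and is
 * not a real fcntl command, so it stays commented out:
 *
 *	fcntl(fd, F_SET_NORECLAIM, 1);
 */

/* Pull a byte range into the page cache; under the proposal, pages
 * brought in this way would then be exempt from reclaim. */
static int cache_range(int fd, off_t off, off_t len)
{
	return posix_fadvise(fd, off, len, POSIX_FADV_WILLNEED);
}

/* Explicitly give a range back to the kernel when the application no
 * longer needs it resident. */
static int drop_range(int fd, off_t off, off_t len)
{
	return posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);
}
```

So reclaim policy for the file would be driven entirely by the
application, and the kernel's reclaim path would only need the
existing per-mapping AS_UNEVICTABLE check, not a per-page range walk.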
Thanks,
Kautuk.
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .