From: Minchan Kim <minchan.kim@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
Alan Cox <alan@lxorguk.ukuu.org.uk>,
LKML <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mmc@vger.kernel.org
Subject: Re: [PATCH 2/7] mmc: Don't use PF_MEMALLOC
Date: Wed, 18 Nov 2009 20:15:10 +0900
Message-ID: <28c262360911180315p6905c365we97f984c49c8fe18@mail.gmail.com>
In-Reply-To: <1258541696.3918.237.camel@laptop>
On Wed, Nov 18, 2009 at 7:54 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Wed, 2009-11-18 at 19:31 +0900, Minchan Kim wrote:
>> >
>> > Sure, some generic block-level infrastructure might work, _but_ you cannot
>> > take away the responsibility of determining the amount of memory needed,
>> > nor does any of this have any merit if you do not limit yourself to that
>> > amount.
>>
>> Yes. Someone has to take that responsibility.
>>
>> The intention was that we could take the responsibility away from the block
>> driver; instead of the driver, the VM would take it.
>>
>> You mean that although the VM could take the responsibility, it is hard for
>> it to estimate the number of pages needed by block drivers?
>
> Correct, it's near impossible for the VM to accurately guess the amount
> of memory needed by a driver, or to limit the driver's usage.
>
> The driver could be very simple in that it'll just start a DMA on the
> page and get an interrupt when done, not consuming much (if any) memory
> beyond the generic BIO structure, but it could also be some iSCSI
> monstrosity which involves the full network stack and userspace.
Wow, thanks for the good example.
Until now, I didn't know iSCSI was such a memory-hogging driver.
> That is why I generally prefer the user of PF_MEMALLOC to take
> responsibility, because it knows its own consumption and can limit its
> own consumption.
Okay, I understand your point now; thanks for the clear explanation.
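
Just to check my understanding, here is a rough sketch (purely
hypothetical, not taken from this series or any real driver) of the
pattern you describe: the code that raises PF_MEMALLOC also bounds its
own consumption, here by drawing its per-request objects from a mempool
it sized up front instead of allocating freely out of the reserves:

#include <linux/sched.h>
#include <linux/bio.h>
#include <linux/mempool.h>
#include <linux/gfp.h>

struct writeout_cmd {			/* hypothetical per-request object */
	struct bio *bio;
};

static mempool_t *cmd_pool;		/* sized once at driver init time */

static struct writeout_cmd *get_writeout_cmd(struct bio *bio)
{
	unsigned long pflags = current->flags;
	struct writeout_cmd *cmd;

	/* May dip into the emergency reserves for this allocation ... */
	current->flags |= PF_MEMALLOC;
	/* ... but consumption stays bounded by the pre-sized pool. */
	cmd = mempool_alloc(cmd_pool, GFP_NOIO);
	/* Restore the caller's PF_MEMALLOC state. */
	current->flags = (current->flags & ~PF_MEMALLOC) |
			 (pflags & PF_MEMALLOC);

	if (cmd)
		cmd->bio = bio;
	return cmd;
}

Is that roughly the kind of "knows and limits its own consumption" you
have in mind?
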
> Now, I don't think (but I could be wrong here) that you need to bother
> with PF_MEMALLOC unless you're swapping. File-based pages should always
> be able to free some memory due to the dirty-limit, which basically
> guarantees that there are some clean file pages for every dirty file
> page.
>
> My swap-over-nfs series used to have a block-layer hook to expose the
> swap-over-block behaviour:
>
> http://programming.kicks-ass.net/kernel-patches/vm_deadlock/v12.99/blk_queue_swapdev.patch
>
> That gives block devices the power to refuse being swapped over, which
> could be an alternative to using PF_MEMALLOC.
>
Thanks for pointing that out.
I will look at your patches.
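
Before I read them, my guess at how such a hook could look (hypothetical
names and flag value, not the actual blk_queue_swapdev patch): the block
driver declares whether it is safe to swap over, and sys_swapon() refuses
devices that did not opt in:

#include <linux/blkdev.h>
#include <linux/bitops.h>

/* Hypothetical queue flag; the real patch may use a different mechanism. */
#define QUEUE_FLAG_SWAPDEV	20

static inline void blk_queue_allow_swap(struct request_queue *q, int allow)
{
	if (allow)
		set_bit(QUEUE_FLAG_SWAPDEV, &q->queue_flags);
	else
		clear_bit(QUEUE_FLAG_SWAPDEV, &q->queue_flags);
}

/* The swapon path would then check the flag before accepting the device. */
static inline int blk_queue_swap_allowed(struct block_device *bdev)
{
	return test_bit(QUEUE_FLAG_SWAPDEV,
			&bdev->bd_disk->queue->queue_flags);
}

If that is close to what the patch does, it would indeed let a device
like mmc simply refuse to be a swap device instead of playing
PF_MEMALLOC games.
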
Thanks again, Peter.
--
Kind regards,
Minchan Kim
Thread overview: 45+ messages
2009-11-17 7:16 [PATCH 0/7] Kill PF_MEMALLOC abuse KOSAKI Motohiro
2009-11-17 7:17 ` [PATCH 1/7] dm: use __GFP_HIGH instead PF_MEMALLOC KOSAKI Motohiro
2009-11-17 13:15 ` Alasdair G Kergon
2009-11-18 6:17 ` KOSAKI Motohiro
2009-11-17 7:17 ` [PATCH 2/7] mmc: Don't use PF_MEMALLOC KOSAKI Motohiro
2009-11-17 10:29 ` Alan Cox
2009-11-17 10:32 ` Minchan Kim
2009-11-17 10:38 ` Oliver Neukum
2009-11-17 11:58 ` KOSAKI Motohiro
2009-11-17 12:51 ` Minchan Kim
2009-11-17 20:47 ` Peter Zijlstra
2009-11-18 0:01 ` Minchan Kim
2009-11-18 9:56 ` Peter Zijlstra
2009-11-18 10:31 ` Minchan Kim
2009-11-18 10:54 ` Peter Zijlstra
2009-11-18 11:15 ` Minchan Kim [this message]
2009-11-17 7:18 ` [PATCH 3/7] mtd: " KOSAKI Motohiro
2009-11-17 7:19 ` [PATCH 4/7] nandsim: " KOSAKI Motohiro
2009-11-23 15:00 ` Artem Bityutskiy
2009-11-23 20:01 ` Adrian Hunter
2009-11-24 10:46 ` KOSAKI Motohiro
2009-11-24 11:56 ` Adrian Hunter
2009-11-25 0:42 ` KOSAKI Motohiro
2009-11-25 7:13 ` Adrian Hunter
2009-11-25 7:18 ` KOSAKI Motohiro
2009-11-17 7:21 ` [PATCH 5/7] Revert "Intel IOMMU: Avoid memory allocation failures in dma map api calls" KOSAKI Motohiro
2009-11-17 7:22 ` [PATCH 6/7] cifs: Don't use PF_MEMALLOC KOSAKI Motohiro
2009-11-17 7:32 ` [PATCH] Mark cifs mailing list as "moderated as non-subscribers" KOSAKI Motohiro
2009-11-17 12:47 ` [PATCH 6/7] cifs: Don't use PF_MEMALLOC Jeff Layton
2009-11-17 16:40 ` Steve French
2009-11-18 6:31 ` KOSAKI Motohiro
2009-11-17 7:23 ` [PATCH 7/7] xfs: " KOSAKI Motohiro
2009-11-17 22:11 ` Dave Chinner
2009-11-18 8:56 ` KOSAKI Motohiro
2009-11-18 22:16 ` Dave Chinner
2009-11-17 8:07 ` [PATCH 0/7] Kill PF_MEMALLOC abuse David Rientjes
2009-11-17 8:33 ` KOSAKI Motohiro
2009-11-17 8:36 ` David Rientjes
2009-11-17 20:56 ` Peter Zijlstra
2009-11-18 5:55 ` KOSAKI Motohiro
2009-11-17 10:15 ` Christoph Hellwig
2009-11-17 10:24 ` KOSAKI Motohiro
2009-11-17 10:27 ` Christoph Hellwig
2009-11-17 12:24 ` KOSAKI Motohiro
2009-11-17 12:47 ` Christoph Hellwig