* Re: Dirty/Writeback fields in /proc/meminfo affected by 20d74bf29c
From: Andrew Morton @ 2016-08-04 20:55 UTC
To: Tomas Vondra; +Cc: linux-kernel, linux-mm, linux-scsi
On Mon, 1 Aug 2016 04:36:28 +0200 Tomas Vondra <tomas@pgaddict.com> wrote:
> Hi,
>
> While investigating a strange OOM issue on the 3.18.x branch (which
> turned out to be already fixed by 52c84a95), I've noticed a strange
> difference in Dirty/Writeback fields in /proc/meminfo depending on
> kernel version. I'm wondering whether this is expected ...
>
> I've bisected the change to 20d74bf29c, added in 3.18.22 (upstream
> commit 4f258a46):
>
> sd: Fix maximum I/O size for BLOCK_PC requests
>
> With /etc/sysctl.conf containing
>
> vm.dirty_background_bytes = 67108864
> vm.dirty_bytes = 1073741824
>
> a simple "dd" example writing a 10GB file
>
> dd if=/dev/zero of=ssd.test.file bs=1M count=10240
>
> results in roughly this on 3.18.21:
>
> Dirty: 740856 kB
> Writeback: 12400 kB
>
> but on 3.18.22:
>
> Dirty: 49244 kB
> Writeback: 656396 kB
>
> I.e. it seems to invert the relationship. I haven't identified any
> performance impact, and for random writes the behavior apparently did
> not change at all (or at least I haven't managed to reproduce the
> difference).
>
> But it's unclear to me why changing the maximum I/O size should
> affect this, and perhaps it has an impact that I don't see.
So what appears to be happening here is that background writeback is
cutting in earlier - the amount of pending writeback ("Dirty") is
reduced while the amount of active writeback ("Writeback") is
correspondingly increased.
4f258a46 had the effect of permitting larger requests into the request
queue. It's unclear to me why larger requests would cause background
writeback to cut in earlier - the writeback code doesn't even care
about individual request sizes, it only cares about aggregate pagecache
state.
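By "aggregate pagecache state" I mean something morally equivalent to
the userspace approximation below - the flusher compares the global
dirty total against vm.dirty_background_bytes and never looks at
per-request sizes. A rough sketch, not the actual kernel code, and it
assumes dirty_background_bytes (not the ratio knob) is in effect as in
the sysctl.conf above:

  # threshold in kB, 65536 with the settings above
  thresh_kb=$(( $(sysctl -n vm.dirty_background_bytes) / 1024 ))
  # current global dirty total in kB
  dirty_kb=$(awk '/^Dirty:/ {print $2}' /proc/meminfo)
  # the in-kernel decision is roughly:
  if [ "$dirty_kb" -gt "$thresh_kb" ]; then
      echo "background writeback would be running"
  fi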
Less Dirty and more Writeback isn't necessarily a bad thing at all, but
I don't like mysteries. cc linux-mm to see if anyone else can
spot-the-difference.
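For anyone wanting to reproduce: sampling the two counters once a
second while the dd runs makes the shift easy to see. An untested
sketch - the filename and sizes are just the ones from the report
above, adjust to taste:

  # start the write in the background, then poll /proc/meminfo
  dd if=/dev/zero of=ssd.test.file bs=1M count=10240 & pid=$!
  while kill -0 "$pid" 2>/dev/null; do
      grep -E '^(Dirty|Writeback):' /proc/meminfo
      sleep 1
  done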
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>
* Re: Dirty/Writeback fields in /proc/meminfo affected by 20d74bf29c
From: Tomas Vondra @ 2016-08-06 22:15 UTC
To: Andrew Morton; +Cc: linux-kernel, linux-mm, linux-scsi
On 08/04/2016 10:55 PM, Andrew Morton wrote:
> On Mon, 1 Aug 2016 04:36:28 +0200 Tomas Vondra <tomas@pgaddict.com> wrote:
>
>> Hi,
>>
>> While investigating a strange OOM issue on the 3.18.x branch (which
>> turned out to be already fixed by 52c84a95), I've noticed a strange
>> difference in Dirty/Writeback fields in /proc/meminfo depending on
>> kernel version. I'm wondering whether this is expected ...
>>
>> I've bisected the change to 20d74bf29c, added in 3.18.22 (upstream
>> commit 4f258a46):
>>
>> sd: Fix maximum I/O size for BLOCK_PC requests
>>
>> With /etc/sysctl.conf containing
>>
>> vm.dirty_background_bytes = 67108864
>> vm.dirty_bytes = 1073741824
>>
>> a simple "dd" example writing a 10GB file
>>
>> dd if=/dev/zero of=ssd.test.file bs=1M count=10240
>>
>> results in roughly this on 3.18.21:
>>
>> Dirty: 740856 kB
>> Writeback: 12400 kB
>>
>> but on 3.18.22:
>>
>> Dirty: 49244 kB
>> Writeback: 656396 kB
>>
>> I.e. it seems to invert the relationship. I haven't identified any
>> performance impact, and for random writes the behavior apparently did
>> not change at all (or at least I haven't managed to reproduce the
>> difference).
>>
>> But it's unclear to me why changing the maximum I/O size should
>> affect this, and perhaps it has an impact that I don't see.
>
> So what appears to be happening here is that background writeback is
> cutting in earlier - the amount of pending writeback ("Dirty") is
> reduced while the amount of active writeback ("Writeback") is
> correspondingly increased.
>
> 4f258a46 had the effect of permitting larger requests into the
> request queue. It's unclear to me why larger requests would cause
> background writeback to cut in earlier - the writeback code doesn't
> even care about individual request sizes, it only cares about
> aggregate pagecache state.
>
Right. I'm not a kernel expert, but that matches my thinking.
> Less Dirty and more Writeback isn't necessarily a bad thing at all,
> but I don't like mysteries. cc linux-mm to see if anyone else can
> spot-the-difference.
>
I'm not sure whether the change has a positive or negative impact (or
perhaps no real impact at all), but as a database guy (PostgreSQL) I'm
interested in this, because the interaction between database write
activity and the kernel matters a lot to us. So I'm wondering whether
this change might trigger writeback sooner, etc.
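If it's useful, I can try to measure that directly on both kernels,
e.g. by timestamping when Writeback first goes nonzero during the dd
run. A rough sketch (assumes GNU date and sleep for the sub-second
bits):

  dd if=/dev/zero of=ssd.test.file bs=1M count=10240 &
  echo "dd started at $(date +%s.%N)"
  while :; do
      wb=$(awk '/^Writeback:/ {print $2}' /proc/meminfo)
      if [ "$wb" -gt 0 ]; then
          echo "Writeback went nonzero at $(date +%s.%N) ($wb kB)"
          break
      fi
      sleep 0.1
  done
  wait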
regards
Tomas
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>