linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.cz>
To: Mark Hills <mark@xwax.org>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	linux-mm@kvack.org, Mel Gorman <mgorman@suse.de>,
	Johannes Weiner <hannes@cmpxchg.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: Write throughput impaired by touching dirty_ratio
Date: Thu, 25 Jun 2015 14:56:04 +0200	[thread overview]
Message-ID: <20150625125604.GE17237@dhcp22.suse.cz> (raw)
In-Reply-To: <20150625092056.GB17237@dhcp22.suse.cz>

On Thu 25-06-15 11:20:56, Michal Hocko wrote:
[...]
> From your /proc/zoneinfo:
> > Node 0, zone  HighMem
> >   pages free     2536526
> >         min      128
> >         low      37501
> >         high     74874
> >         scanned  0
> >         spanned  3214338
> >         present  3017668
> >         managed  3017668
> 
> You have 11G of highmem, which is a lot compared to the lowmem:
> 
> > Node 0, zone   Normal
> >   pages free     37336
> >         min      4789
> >         low      5986
> >         high     7183
> >         scanned  0
> >         spanned  123902
> >         present  123902
> >         managed  96773
> 
> which is only 378M! So something had to eat a portion of the lowmem.
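
A quick back-of-the-envelope conversion of those page counts, assuming
the usual 4 KiB x86 page size (just an illustrative sketch):

#include <stdio.h>

int main(void)
{
	const double page_kib = 4.0;      /* assuming 4 KiB pages on x86 */
	long highmem_managed = 3017668;   /* HighMem "managed" from zoneinfo */
	long normal_present  = 123902;    /* Normal  "present" from zoneinfo */
	long normal_managed  = 96773;     /* Normal  "managed" from zoneinfo */

	printf("HighMem managed: ~%.1f GiB\n",
	       highmem_managed * page_kib / (1024 * 1024));
	printf("Normal  present: ~%.0f MiB\n", normal_present * page_kib / 1024);
	printf("Normal  managed: ~%.0f MiB\n", normal_managed * page_kib / 1024);
	return 0;
}

which prints roughly 11.5 GiB, 484 MiB and 378 MiB respectively.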

And just to clarify: your lowmem has only 123902 pages (plus the DMA
zone, which is 16M, so it doesn't add much), which is ~480M. Lowmem can
only sit in the low 1G (actually less, because part of that is used by
the kernel for special mappings). You have only half of that because,
presumably, some HW has reserved a portion of that address range. So
your lowmem zone is really tiny. Now, part of that range is used for
kernel data like struct pages, which have to describe the full memory,
and that eats quite a lot for 3 million pages. So you ended up with only
378M really usable for all the kernel allocations which cannot live in
highmem (and there are many of those). This puts a lot of memory
pressure on that zone even though you might have a huge amount of
highmem free. This is the primary reason why PAE kernels are not really
usable for large memory setups in general. Very specific usecases might
work, but even then I would need a very strong reason to stick with a
32b kernel (e.g. a stupid out-of-tree driver which is 32b-specific or
something similar).
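
To get a feeling for the struct page overhead, here is a rough sketch
assuming ~32 bytes per struct page on a 32-bit kernel (the exact size
depends on the configuration):

#include <stdio.h>

int main(void)
{
	/* all zones together, roughly: HighMem + Normal + ~16M of DMA */
	long total_pages = 3017668 + 123902 + 4096;
	long sizeof_struct_page = 32;   /* assumed; config dependent */

	printf("memmap eats ~%ld MiB of lowmem\n",
	       total_pages * sizeof_struct_page / (1024 * 1024));
	return 0;
}

which comes out to roughly 96 MiB taken straight out of the lowmem zone,
in the same ballpark as the present vs. managed difference quoted above.
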
-- 
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

Thread overview: 9+ messages
2015-06-19 15:16 Mark Hills
2015-06-24  8:27 ` Vlastimil Babka
2015-06-24  9:16   ` Michal Hocko
2015-06-24 22:26   ` Mark Hills
2015-06-25  9:20     ` Michal Hocko
2015-06-25 12:56       ` Michal Hocko [this message]
2015-06-25 21:45       ` Mark Hills
2015-07-01 15:40         ` Michal Hocko
2015-06-25  9:30     ` Vlastimil Babka
