linux-mm.kvack.org archive mirror
From: Mikulas Patocka <mpatocka@redhat.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-parisc@vger.kernel.org, Helge Deller <deller@gmx.de>
Subject: The patch "mm, page_alloc: avoid looking up the first zone in a zonelist twice" breaks memory management
Date: Tue, 31 May 2016 17:20:45 -0400 (EDT)	[thread overview]
Message-ID: <alpine.LRH.2.02.1605311706040.16635@file01.intranet.prod.int.rdu2.redhat.com> (raw)

Hi

The patch c33d6c06f60f710f0305ae792773e1c2560e1e51 ("mm, page_alloc: avoid 
looking up the first zone in a zonelist twice") breaks memory management 
on PA-RISC.

The PA-RISC system is not NUMA, but the chipset maps physical memory to 
three distinct ranges, so the kernel sets up three nodes. My machine has 
7GiB RAM and the memory is mapped to these ranges:

 Memory Ranges:
  0) Start 0x0000000000000000 End 0x000000003fffffff Size   1024 MB
  1) Start 0x0000000100000000 End 0x00000001bfdfffff Size   3070 MB
  2) Start 0x0000004040000000 End 0x00000040ffffffff Size   3072 MB
 Total Memory: 7166 MB
 On node 0 totalpages: 262144
 free_area_init_node: node 0, pgdat 405e44d0, node_mem_map 415ed000
   Normal zone: 3584 pages used for memmap
   Normal zone: 0 pages reserved
   Normal zone: 262144 pages, LIFO batch:31
 On node 1 totalpages: 785920
 free_area_init_node: node 1, pgdat 405e5140, node_mem_map 140000000
   Normal zone: 10745 pages used for memmap
   Normal zone: 0 pages reserved
   Normal zone: 785920 pages, LIFO batch:31
 On node 2 totalpages: 786432
 free_area_init_node: node 2, pgdat 405e5db0, node_mem_map 4080000000
   Normal zone: 10752 pages used for memmap
   Normal zone: 0 pages reserved
   Normal zone: 786432 pages, LIFO batch:31
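As a sanity check, the per-range sizes in the table above follow directly from the start/end addresses (the end addresses are inclusive); a few lines of Python reproduce the firmware's figures:

```python
# Recompute the range sizes from the boot log above (end addresses inclusive).
ranges = [
    (0x0000000000000000, 0x000000003fffffff),  # range 0
    (0x0000000100000000, 0x00000001bfdfffff),  # range 1
    (0x0000004040000000, 0x00000040ffffffff),  # range 2
]
sizes_mb = [(end - start + 1) >> 20 for start, end in ranges]
print(sizes_mb, sum(sizes_mb))  # → [1024, 3070, 3072] 7166
```

Note the large gap between ranges 1 and 2, which is why the kernel treats them as separate nodes despite the machine not being NUMA.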

Prior to the patch c33d6c06f60f710f0305ae792773e1c2560e1e51, the kernel
could use all 7GiB of RAM as a file cache. After this patch, the kernel
fills the first 1GiB zone with cache and then starts reclaiming that cache
(or sometimes even swapping) instead of using the remaining two zones as a
file cache.

The bug can be reproduced by reading a 2GiB file and observing that the
amount of cached memory stays near 1GiB.
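The reproduction above can be sketched as a short script (not part of the original report; the file path is a placeholder for any file of about 2GiB). It streams the file to populate the page cache and compares the "Cached:" figure from /proc/meminfo before and after; on an affected kernel the figure plateaus near 1GiB instead of growing toward the file size:

```python
def cached_kib(meminfo_text):
    """Parse the Cached: line (value in KiB) from /proc/meminfo contents."""
    for line in meminfo_text.splitlines():
        if line.startswith("Cached:"):
            return int(line.split()[1])
    raise ValueError("no Cached: line found")

def stream_file(path, chunk=1 << 20):
    """Read the whole file sequentially to populate the page cache."""
    with open(path, "rb") as f:
        while f.read(chunk):
            pass

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        before = cached_kib(f.read())
    stream_file("/path/to/2gib-file")  # placeholder: any ~2GiB file
    with open("/proc/meminfo") as f:
        after = cached_kib(f.read())
    print("cached before: %d KiB, after: %d KiB" % (before, after))
```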

Mikulas


Thread overview: 3+ messages
2016-05-31 21:20 Mikulas Patocka [this message]
2016-05-31 21:47 ` Vlastimil Babka
2016-06-01 12:26   ` Mikulas Patocka
