From: Miles Chen <miles.chen@mediatek.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Joe Perches <joe@perches.com>,
Matthew Wilcox <willy@infradead.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-mediatek@lists.infradead.org, wsd_upstream@mediatek.com
Subject: Re: [PATCH v3] mm/page_owner: use kvmalloc instead of kmalloc
Date: Thu, 1 Nov 2018 18:00:12 +0800
Message-ID: <1541066412.31492.10.camel@mtkswgap22>
In-Reply-To: <20181031114107.GM32673@dhcp22.suse.cz>

On Wed, 2018-10-31 at 12:41 +0100, Michal Hocko wrote:
> On Wed 31-10-18 18:19:42, Miles Chen wrote:
> > On Wed, 2018-10-31 at 11:15 +0100, Michal Hocko wrote:
> > > On Wed 31-10-18 16:47:17, Miles Chen wrote:
> > > > On Tue, 2018-10-30 at 09:15 +0100, Michal Hocko wrote:
> > > > > On Tue 30-10-18 14:55:51, Miles Chen wrote:
> > > > > [...]
> > > > > > It's a real problem when using page_owner.
> > > > > > I found this issue recently: I was not able to read page_owner
> > > > > > information during an overnight test (error: read failed: Out of
> > > > > > memory). I replaced kmalloc() with vmalloc() and it worked well.
> > > > >
> > > > > Is this with trimming the allocation to a single page and doing shorter
> > > > > than requested reads?
> > > >
> > > >
> > > > I printed out the allocation count on my device; the request count
> > > > is <= 4096. So I tested this scenario by trimming the count from
> > > > 4096 to 1024 bytes, and it works fine.
> > > >
> > > > count = count > 1024 ? 1024 : count;
> > > >
> > > > I tested it on both 32-bit and 64-bit kernels.
> > >
> > > Are you saying that you see OOMs for 4k size?
> > >
> > Yes, because kmalloc() only uses normal memory, not highmem + normal
> > memory. I think that's why vmalloc() works.
>
> Can I see an OOM report please? I am especially interested in the fact
> that 1k doesn't cause the problem, because there shouldn't be that much
> of a difference between the two. Larger allocations could be a result
> of memory fragmentation, but 1k vs. 4k making a difference really seems
> unexpected.
>
You're right.
I pulled out the log and found that the allocation failure is for order=4.
If I do the read on the device, the read count is <= 4096; if I do the
read by 'adb pull' from my host PC, the read count becomes 65532. (I'm
working on an Android device.) Since kmalloc() rounds a request this
large up to the next power-of-two number of pages, a 65532-byte read
becomes a 64KB, i.e. order-4, allocation with 4KB pages.
The overnight test used the 'adb pull' command, so the allocation
failure occurred because of the large read count and the arbitrary-size
allocation design of page_owner. That also explains why vmalloc() works.
I did a test today; the only code change is to clamp the read count to
PAGE_SIZE, and it worked well. Maybe we can solve this issue by just
clamping the read count:

count = count > PAGE_SIZE ? PAGE_SIZE : count;
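
For illustration, here is roughly where the clamp would sit, as a sketch
based on print_page_owner() in mm/page_owner.c, which kmalloc()s a buffer
of the user-supplied read size (the signature below follows current
mainline; my 4.4 kernel differs in detail, and the formatting code is
elided):

	static ssize_t print_page_owner(char __user *buf, size_t count,
					unsigned long pfn, struct page *page,
					struct page_owner *page_owner,
					depot_stack_handle_t handle)
	{
		char *kbuf;

		/*
		 * Never allocate more than one page: large reads, such as
		 * the 65532-byte reads issued by 'adb pull', would otherwise
		 * become high-order kmalloc()s that fail once memory is
		 * fragmented.
		 */
		count = min_t(size_t, count, PAGE_SIZE);

		kbuf = kmalloc(count, GFP_KERNEL);
		if (!kbuf)
			return -ENOMEM;

		/* ... format the page_owner record into kbuf as before ... */
	}

The resulting short read is safe for userspace: read(2) may return less
than requested, and the reader simply issues the next read.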
Here is the log:
<4>[ 261.841770] (0)[2880:sync svc 43]sync svc 43: page allocation failure: order:4, mode:0x24040c0
<4>[ 261.841815]-(0)[2880:sync svc 43]CPU: 0 PID: 2880 Comm: sync svc 43 Tainted: G W O 4.4.146+ #16
<4>[ 261.841825]-(0)[2880:sync svc 43]Hardware name: Generic DT based system
<4>[ 261.841834]-(0)[2880:sync svc 43]Backtrace:
<4>[ 261.841844]-(0)[2880:sync svc 43][<c010d57c>] (dump_backtrace) from [<c010d7a4>] (show_stack+0x18/0x1c)
<4>[ 261.841866]-(0)[2880:sync svc 43] r6:60030013 r5:c123d488 r4:00000000 r3:dc8ba692
<4>[ 261.841880]-(0)[2880:sync svc 43][<c010d78c>] (show_stack) from [<c0470b84>] (dump_stack+0x94/0xa8)
<4>[ 261.841892]-(0)[2880:sync svc 43][<c0470af0>] (dump_stack) from [<c0236060>] (warn_alloc_failed+0x108/0x148)
<4>[ 261.841905]-(0)[2880:sync svc 43] r6:00000000 r5:024040c0 r4:c1204948 r3:dc8ba692
<4>[ 261.841919]-(0)[2880:sync svc 43][<c0235f5c>] (warn_alloc_failed) from [<c023a284>] (__alloc_pages_nodemask+0xa08/0xbd8)
<4>[ 261.841929]-(0)[2880:sync svc 43] r3:0000000f r2:00000000
<4>[ 261.841939]-(0)[2880:sync svc 43] r8:0000002f r7:00000004 r6:dbb7a000 r5:024040c0
<4>[ 261.841953]-(0)[2880:sync svc 43][<c023987c>] (__alloc_pages_nodemask) from [<c023a5fc>] (alloc_kmem_pages+0x18/0x20)
<4>[ 261.841963]-(0)[2880:sync svc 43] r10:c0286560 r9:c027b348 r8:0000fff8 r7:00000004
<4>[ 261.841978]-(0)[2880:sync svc 43][<c023a5e4>] (alloc_kmem_pages) from [<c02573c0>] (kmalloc_order_trace+0x2c/0xec)
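
For reference, the arithmetic connecting the 65532-byte read to the
order:4 failure above (assuming 4KB pages; kmalloc() hands requests this
large to the page allocator, rounded up to a power-of-two number of
pages). An order-4 allocation needs 16 physically contiguous pages,
which a fragmented system often cannot provide even when plenty of
single pages are free:

	#include <stdio.h>

	int main(void)
	{
		size_t count = 65532;	/* read size seen with 'adb pull' */
		size_t page_size = 4096;
		int order = 0;

		/* smallest order such that 2^order pages hold the request */
		while ((page_size << order) < count)
			order++;

		/* prints "65532 bytes -> order-4 (65536 bytes)" */
		printf("%zu bytes -> order-%d (%zu bytes)\n",
		       count, order, page_size << order);
		return 0;
	}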