linux-mm.kvack.org archive mirror
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Laura Abbott <lauraa@codeaurora.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@suse.de>, Minchan Kim <minchan@kernel.org>,
	Heesub Shin <heesub.shin@samsung.com>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	Michal Nazarewicz <mina86@mina86.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	lmark@codeaurora.org
Subject: Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used
Date: Mon, 26 May 2014 11:44:17 +0900	[thread overview]
Message-ID: <20140526024417.GA26935@js1304-P5Q-DELUXE> (raw)
In-Reply-To: <537FEE96.8000704@codeaurora.org>

On Fri, May 23, 2014 at 05:57:58PM -0700, Laura Abbott wrote:
> On 5/12/2014 10:04 AM, Laura Abbott wrote:
> > 
> > I'm going to see about running this through tests internally for comparison.
> > Hopefully I'll get useful results in a day or so.
> > 
> > Thanks,
> > Laura
> > 
> 
> We ran some tests internally and found that for our purposes these patches made
> the benchmarks worse vs. the existing implementation of using CMA first for some
> pages. These are mostly androidisms but androidisms that we care about for
> having a device be useful.
> 
> The foreground memory headroom on the device was on average about 40 MB smaller
> when using these patches vs. our existing implementation of something like
> solution #1. By foreground memory headroom we simply mean the amount of memory
> that the foreground application can allocate before it is killed by the Android
> Low Memory Killer.
> 
> We also found that when running a sequence of app launches, these patches caused
> more high-priority app kills by the LMK and more alloc stalls. The test did a
> total of 500 app launches (using 9 separate applications). The CMA
> memory in our system is rarely used by its client and is therefore available
> to the system most of the time.
> 
> Test device
> - 4 CPUs
> - Android 4.4.2
> - 512MB of RAM
> - 68 MB of CMA
> 
> 
> Results:
> 
> Existing solution:
> Foreground headroom: 200MB
> Number of higher priority LMK kills (oom_score_adj < 529): 332
> Number of alloc stalls: 607
> 
> 
> Test patches:
> Foreground headroom: 160MB
> Number of higher priority LMK kills (oom_score_adj < 529): 459
> Number of alloc stalls: 29538
> 
> We believe that the issues seen with these patches are the result of the LMK
> being more aggressive. The LMK will be more aggressive because it will ignore
> free CMA pages for unmovable allocations, and since most calls to the LMK are
> made by kswapd (which uses GFP_KERNEL) the LMK will mostly ignore free CMA
> pages. Because the LMK thresholds are higher than the zone watermarks, there
> will often be a lot of free CMA pages in the system when the LMK is called,
> which the LMK will usually ignore.

Hello,

Thanks a lot for testing!
If possible, please let me know the nr_free_cma value for both these
patches and your in-house implementation before testing.
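
(For reference, nr_free_cma is exported in /proc/vmstat on kernels with
CMA enabled. It counts pages, so convert with the page size; the sketch
below uses a made-up sample value rather than a live device, chosen so
that it works out to the 68 MB CMA region of the test device above:)

```shell
# nr_free_cma in /proc/vmstat counts pages; convert to MiB assuming
# 4 KiB pages. Sample value shown; on a real device, read /proc/vmstat.
echo "nr_free_cma 17408" | awk '{ printf "%d pages = %d MiB\n", $2, $2 * 4 / 1024 }'
```

On a device, the same awk program can be pointed at /proc/vmstat
directly to sample the counter before and after each test run.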

Here is my guess at the scenario behind your test.

On boot-up, CMA memory is mostly used by native processes, because
your implementation uses CMA first for some pages. kswapd is then
woken up later than with my implementation, since more non-CMA memory
is free. And, on reclaim, the LMK, which reclaims memory by killing
app processes, would mostly reclaim movable memory, since the CMA
memory is held by native processes and app processes have only
movable memory.

This is just my guess, but if it is true, this is not a fair test of
this patchset. If possible, could you make nr_free_cma the same for
both implementations before testing?

Moreover, the mainline implementation of the LMK doesn't consider
whether memory is CMA or not. Your overall system may be highly
optimized for your own implementation, so I'm not sure whether this
test is appropriate for this patchset.
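
(To make the watermark behaviour discussed above concrete: when an
allocation cannot use CMA pageblocks, e.g. an unmovable GFP_KERNEL
request, free CMA pages are subtracted before the watermark comparison.
The sketch below is a simplified standalone model of that check, not
the actual kernel code; the function name and signature are invented
for illustration:)

```c
#include <stdbool.h>

/*
 * Simplified model of the mainline zone watermark check: allocations
 * that cannot fall back to CMA pageblocks (e.g. unmovable GFP_KERNEL
 * requests) must not count free CMA pages toward the watermark,
 * because they can never be satisfied from CMA memory.
 */
static bool watermark_ok(long free_pages, long free_cma_pages,
                         long watermark, bool alloc_can_use_cma)
{
	if (!alloc_can_use_cma)
		free_pages -= free_cma_pages;	/* ignore free CMA pages */
	return free_pages > watermark;
}
```

With, say, 100 free pages of which 60 are CMA and a watermark of 50, a
movable allocation passes the check while an unmovable one fails, so
kswapd keeps reclaiming even though plenty of CMA memory is free,
which is exactly why the LMK fires more often under these patches.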

Anyway, I would like to optimize this for Android. :)
Please let me know more about your system.

Thanks.



Thread overview: 46+ messages
2014-05-08  0:32 [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved memory Joonsoo Kim
2014-05-08  0:32 ` [RFC PATCH 1/3] CMA: remove redundant retrying code in __alloc_contig_migrate_range Joonsoo Kim
2014-05-09 15:44   ` Michal Nazarewicz
2014-05-08  0:32 ` [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used Joonsoo Kim
2014-05-09 15:45   ` Michal Nazarewicz
2014-05-12 17:04   ` Laura Abbott
2014-05-13  1:14     ` Joonsoo Kim
2014-05-13  3:05     ` Minchan Kim
2014-05-24  0:57     ` Laura Abbott
2014-05-26  2:44       ` Joonsoo Kim [this message]
2014-05-13  3:00   ` Minchan Kim
2014-05-15  1:53     ` Joonsoo Kim
2014-05-15  2:43       ` Minchan Kim
2014-05-19  2:11         ` Joonsoo Kim
2014-05-19  2:53           ` Minchan Kim
2014-05-19  4:50             ` Joonsoo Kim
2014-05-19 23:18               ` Minchan Kim
2014-05-20  6:33                 ` Joonsoo Kim
2014-05-15  2:45       ` Heesub Shin
2014-05-15  5:06         ` Minchan Kim
2014-05-19 23:22         ` Minchan Kim
2014-05-16  8:02       ` [RFC][PATCH] CMA: drivers/base/Kconfig: restrict CMA size to non-zero value Gioh Kim
2014-05-16 17:45         ` Michal Nazarewicz
2014-05-19  1:47           ` Gioh Kim
2014-05-19  5:55             ` Joonsoo Kim
2014-05-19  9:14               ` Gioh Kim
2014-05-19 19:59               ` Michal Nazarewicz
2014-05-20  0:50                 ` Gioh Kim
2014-05-20  1:28                   ` Michal Nazarewicz
2014-05-20  2:26                     ` Gioh Kim
2014-05-20 18:15                       ` Michal Nazarewicz
2014-05-20 11:38                   ` Marek Szyprowski
2014-05-20 12:23                     ` Gi-Oh Kim
2014-05-21  0:15                     ` Gioh Kim
2014-05-14  8:42   ` [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used Aneesh Kumar K.V
2014-05-15  1:58     ` Joonsoo Kim
2014-05-18 17:36       ` Aneesh Kumar K.V
2014-05-19  2:29         ` Joonsoo Kim
2014-05-08  0:32 ` [RFC PATCH 3/3] CMA: always treat free cma pages as non-free on watermark checking Joonsoo Kim
2014-05-09 15:46   ` Michal Nazarewicz
2014-05-09 12:39 ` [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved memory Marek Szyprowski
2014-05-13  2:26   ` Joonsoo Kim
2014-05-14  9:44     ` Aneesh Kumar K.V
2014-05-15  2:10       ` Joonsoo Kim
2014-05-15  9:47         ` Mel Gorman
2014-05-19  2:12           ` Joonsoo Kim
