From: John Hubbard <jhubbard@nvidia.com>
To: "Huang, Ying" <ying.huang@intel.com>
Cc: David Rientjes <rientjes@google.com>,
Andrew Morton <akpm@linux-foundation.org>,
Andi Kleen <ak@linux.intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Shaohua Li <shli@kernel.org>, Rik van Riel <riel@redhat.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
Mel Gorman <mgorman@techsingularity.net>,
Aaron Lu <aaron.lu@intel.com>,
Gerald Schaefer <gerald.schaefer@de.ibm.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
Hugh Dickins <hughd@google.com>, Ingo Molnar <mingo@kernel.org>,
Vegard Nossum <vegard.nossum@oracle.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -v2 1/2] mm, swap: Use kvzalloc to allocate some swap data structure
Date: Fri, 24 Mar 2017 00:33:25 -0700 [thread overview]
Message-ID: <624b8e59-34e5-3538-0a93-d33d9e4ac555@nvidia.com> (raw)
In-Reply-To: <87d1d7uoti.fsf@yhuang-dev.intel.com>
[...]
>>>> Hi Ying,
>>>>
>>>> I'm a little surprised to see vmalloc calls replaced with
>>>> kmalloc-then-vmalloc calls, because that actually makes fragmentation
>>>> worse (contrary to the above claim). That's because you will consume
>>>> contiguous memory (even though you don't need it to be contiguous),
>>>> whereas before, you would have been able to get by with page-at-a-time
>>>> for vmalloc.
>>>>
>>>> So, things like THP will find fewer contiguous chunks, as a result of patches such as this.
>>>
>>> Hi, John,
>>>
>>> I don't think so. The pages allocated by vmalloc() cannot be moved
>>> during defragmentation. For example, if 512 discontiguous physical
>>> pages are allocated via vmalloc(), then in the worst case each page
>>> comes from a distinct 2MB contiguous physical region. That leaves
>>> 512 * 2MB = 1GB of memory unusable for THP allocation, because these
>>> pages cannot be defragmented until vfree().
>>
>> kmalloc requires a resource that vmalloc does not: contiguous
>> pages. Therefore, given the same mix of pages (some groups of
>> contiguous pages, plus a scattering of isolated single pages or
>> groups too small to satisfy the entire allocation) and the same
>> underlying page allocator, kmalloc *must* consume the more valuable
>> contiguous pages, whereas vmalloc merely *may* consume those same pages.
>>
>> So, if you run kmalloc a bunch of times, with higher-order requests,
>> you *will* run out of contiguous pages (until more are freed up). If
>> you run vmalloc with the same initial conditions and the same
>> requests, you may not necessarily use up those contiguous pages.
>>
>> It's true that there are benefits to doing kmalloc-then-vmalloc, of
>> course: if the contiguous pages are available, it's faster and uses
>> fewer resources. Yes. I just don't think "less fragmentation" should
>> be listed as a benefit, because you can definitely cause *more*
>> fragmentation if you use up contiguous blocks unnecessarily.
>
> Yes, I agree that in some cases kmalloc() will use more contiguous
> blocks, for example when non-movable pages are scattered all over
> memory. But I still think that in common cases, if defragmentation is
> enabled and non-movable page allocations are restricted to certain
> memory areas where possible, kmalloc() is better than vmalloc() with
> respect to fragmentation.
You may be drawing on some additional information to reach that
conclusion, and it is not obvious to me what that is. Any thoughts there?
Both calls use the same underlying page allocator (and, as a result, I
thought both were subject to the same defragmentation constraints), so I
am not seeing any way that kmalloc could be a less-fragmenting call than
vmalloc.
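
Just so we're talking about the same thing, here is a rough sketch of the
kmalloc-then-vmalloc fallback pattern I mean. This is a hypothetical helper
for illustration only; the real kvzalloc()/kvmalloc_node() in mm/util.c is
more careful about size cutoffs and GFP flag handling:

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Illustration only -- not the actual kvzalloc() implementation. */
    static void *kvzalloc_sketch(size_t size, gfp_t flags)
    {
            void *p;

            /*
             * First try for physically contiguous memory, but don't
             * warn or retry hard if contiguous pages are not readily
             * available.
             */
            p = kzalloc(size, flags | __GFP_NOWARN | __GFP_NORETRY);
            if (p)
                    return p;

            /*
             * Fall back to page-at-a-time, virtually contiguous memory.
             * (vzalloc() implicitly uses GFP_KERNEL, so this sketch
             * ignores the caller's flags on the fallback path.)
             */
            return vzalloc(size);
    }

The point of contention is just the first step: on a system with few free
contiguous blocks left, that kzalloc() attempt is exactly where the
remaining blocks get eaten, whereas a plain vzalloc() would have been
satisfied by scattered order-0 pages.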
--
thanks,
john h