From: Ge Yang <yangge1116@126.com>
Date: Wed, 19 Jun 2024 16:35:41 +0800
Message-ID: <9a636c96-7279-4a84-9dbc-5ab4f262ab96@126.com>
Subject: Re: [PATCH] mm/page_alloc: skip THP-sized PCP list when allocating non-CMA THP-sized page
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, baolin.wang@linux.alibaba.com, liuzixing@hygon.cn
References: <1717492460-19457-1-git-send-email-yangge1116@126.com> <2e3a3a3f-737c-ed01-f820-87efee0adc93@126.com> <9b227c9d-f59b-a8b0-b353-7876a56c0bde@126.com> <4482bf69-eb07-0ec9-f777-28ce40f96589@126.com> <69414410-4e2d-c04c-6fc3-9779f9377cf2@126.com>

On 2024/6/19 16:20, Barry Song wrote:
> On Wed, Jun 19, 2024 at 5:35 PM Ge Yang wrote:
>>
>> On 2024/6/18 15:51, yangge1116 wrote:
>>>
>>> On 2024/6/18 2:58 PM, Barry Song wrote:
>>>> On Tue, Jun 18, 2024 at 6:56 PM yangge1116 wrote:
>>>>>
>>>>> On 2024/6/18 12:10 PM, Barry Song wrote:
>>>>>> On Tue, Jun 18, 2024 at 3:32 PM yangge1116 wrote:
>>>>>>>
>>>>>>> On 2024/6/18 9:55 AM, Barry Song wrote:
>>>>>>>> On Tue, Jun 18, 2024 at 9:36 AM yangge1116 wrote:
>>>>>>>>>
>>>>>>>>> On 2024/6/17 8:47 PM, yangge1116 wrote:
>>>>>>>>>>
>>>>>>>>>> On 2024/6/17 6:26 PM, Barry Song wrote:
>>>>>>>>>>> On Tue, Jun 4, 2024 at 9:15 PM wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> From: yangge
>>>>>>>>>>>>
>>>>>>>>>>>> Since commit 5d0a661d808f ("mm/page_alloc: use only one PCP
>>>>>>>>>>>> list for THP-sized allocations") no longer differentiates the
>>>>>>>>>>>> migration type of pages in the THP-sized PCP list, it's possible
>>>>>>>>>>>> to get a CMA page from the list. In some cases this is not
>>>>>>>>>>>> acceptable; for example, allocating a non-CMA page with the
>>>>>>>>>>>> PF_MEMALLOC_PIN flag may return a CMA page.
>>>>>>>>>>>>
>>>>>>>>>>>> This patch forbids allocating a non-CMA THP-sized page from the
>>>>>>>>>>>> THP-sized PCP list to avoid the issue above.
>>>>>>>>>>>
>>>>>>>>>>> Could you please describe the impact on users in the commit log?
>>>>>>>>>>
>>>>>>>>>> If a large amount of CMA memory is configured in the system (for
>>>>>>>>>> example, CMA memory accounts for 50% of system memory), starting a
>>>>>>>>>> virtual machine with device passthrough will get stuck.
>>>>>>>>>>
>>>>>>>>>> While starting the virtual machine, pin_user_pages_remote(...,
>>>>>>>>>> FOLL_LONGTERM, ...) is called to pin memory. If a page is in a CMA
>>>>>>>>>> area, pin_user_pages_remote() will migrate it from the CMA area to
>>>>>>>>>> a non-CMA area because of the FOLL_LONGTERM flag. If non-movable
>>>>>>>>>> allocation requests can return CMA memory,
>>>>>>>>>> pin_user_pages_remote() will enter an endless loop.
>>>>>>>>>>
>>>>>>>>>> backtrace:
>>>>>>>>>> pin_user_pages_remote
>>>>>>>>>> ----__gup_longterm_locked // endless loop in this function
>>>>>>>>>> --------__get_user_pages_locked
>>>>>>>>>> --------check_and_migrate_movable_pages // check always fails and
>>>>>>>>>>                                            migration is retried
>>>>>>>>>> ------------migrate_longterm_unpinnable_pages
>>>>>>>>>> ----------------alloc_migration_target // non-movable allocation
>>>>>>>>>>
>>>>>>>>>>> Is it possible that some CMA memory might be used by non-movable
>>>>>>>>>>> allocation requests?
>>>>>>>>>>
>>>>>>>>>> Yes.
>>>>>>>>>>
>>>>>>>>>>> If so, will CMA somehow become unable to migrate, causing
>>>>>>>>>>> cma_alloc() to fail?
>>>>>>>>>>
>>>>>>>>>> No, it will cause an endless loop in __gup_longterm_locked(). If
>>>>>>>>>> non-movable allocation requests return CMA memory,
>>>>>>>>>> migrate_longterm_unpinnable_pages() will migrate a CMA page to
>>>>>>>>>> another CMA page, which is useless and causes the endless loop in
>>>>>>>>>> __gup_longterm_locked().
>>>>>>>>
>>>>>>>> This is only one perspective. We also need to consider the impact on
>>>>>>>> CMA itself. For example, when CMA is borrowed by THP and we need to
>>>>>>>> reclaim it through cma_alloc() or dma_alloc_coherent(), we must move
>>>>>>>> those pages out to ensure CMA's users can retrieve that contiguous
>>>>>>>> memory.
>>>>>>>>
>>>>>>>> Currently, if CMA's memory is occupied by non-movable pages, we
>>>>>>>> can't relocate them. As a result, cma_alloc() is more likely to
>>>>>>>> fail.
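To make the failure mode above concrete, here is a minimal user-space
sketch of the retry logic (illustrative only, not kernel code; the
helpers loosely model the functions in the backtrace, and the bailout
counter exists only so the demo terminates, the kernel has no such
bailout):

#include <stdio.h>

enum area { NON_CMA, CMA };

/* Buggy case: the migration-target allocation can return a CMA page. */
static enum area alloc_target_buggy(void) { return CMA; }
/* Fixed case: non-movable (migration-target) allocations skip CMA. */
static enum area alloc_target_fixed(void) { return NON_CMA; }

/* Models check_and_migrate_movable_pages(): retry while the pinned
 * page still sits in a CMA area. */
static int longterm_pin(enum area (*alloc_target)(void))
{
	enum area page = CMA;		/* the page to pin starts in CMA */
	int tries = 0;

	while (page == CMA) {
		if (++tries > 10)
			return -1;	/* stuck: would loop forever */
		page = alloc_target();	/* migrate to a new page */
	}
	return tries;
}

int main(void)
{
	printf("buggy: %d (negative means stuck)\n",
	       longterm_pin(alloc_target_buggy));
	printf("fixed: done after %d try\n",
	       longterm_pin(alloc_target_fixed));
	return 0;
}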
>>>>>>>>>>>>
>>>>>>>>>>>> Fixes: 5d0a661d808f ("mm/page_alloc: use only one PCP list for
>>>>>>>>>>>> THP-sized allocations")
>>>>>>>>>>>> Signed-off-by: yangge
>>>>>>>>>>>> ---
>>>>>>>>>>>>  mm/page_alloc.c | 10 ++++++++++
>>>>>>>>>>>>  1 file changed, 10 insertions(+)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>>>>>>>>> index 2e22ce5..0bdf471 100644
>>>>>>>>>>>> --- a/mm/page_alloc.c
>>>>>>>>>>>> +++ b/mm/page_alloc.c
>>>>>>>>>>>> @@ -2987,10 +2987,20 @@ struct page *rmqueue(struct zone *preferred_zone,
>>>>>>>>>>>>  	WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1));
>>>>>>>>>>>>
>>>>>>>>>>>>  	if (likely(pcp_allowed_order(order))) {
>>>>>>>>>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>>>>>>>>>> +		if (!IS_ENABLED(CONFIG_CMA) || alloc_flags & ALLOC_CMA ||
>>>>>>>>>>>> +				order != HPAGE_PMD_ORDER) {
>>>>>>>>>>>> +			page = rmqueue_pcplist(preferred_zone, zone, order,
>>>>>>>>>>>> +					       migratetype, alloc_flags);
>>>>>>>>>>>> +			if (likely(page))
>>>>>>>>>>>> +				goto out;
>>>>>>>>>>>> +		}
>>>>>>>>>>>
>>>>>>>>>>> This seems not ideal, because non-CMA THP gets no chance to use
>>>>>>>>>>> the PCP. But it still seems better than causing CMA allocations
>>>>>>>>>>> to fail.
>>>>>>>>>>>
>>>>>>>>>>> Is there a possible approach to avoid adding CMA THP into the
>>>>>>>>>>> PCP from the very beginning? Otherwise, we might need a separate
>>>>>>>>>>> PCP for CMA.
>>>>>>>>>
>>>>>>>>> The vast majority of THP-sized allocations are GFP_MOVABLE, so not
>>>>>>>>> adding CMA THP into the PCP may incur a slight performance penalty.
>>>>>>>>
>>>>>>>> But the majority of movable pages aren't CMA, right?
>>>>>>>>
>>>>>>>> Do we have an estimate for adding back a CMA THP PCP list? Will
>>>>>>>> per_cpu_pages grow into a new cacheline, which the original
>>>>>>>> intention for THP was to avoid by having only one PCP list [1]?
>>>>>>>>
>>>>>>>> [1] https://patchwork.kernel.org/project/linux-mm/patch/20220624125423.6126-3-mgorman@techsingularity.net/
>>>>>>>
>>>>>>> The size of struct per_cpu_pages is 256 bytes in the current code,
>>>>>>> which contains commit 5d0a661d808f ("mm/page_alloc: use only one PCP
>>>>>>> list for THP-sized allocations"):
>>>>>>>
>>>>>>> crash> struct per_cpu_pages
>>>>>>> struct per_cpu_pages {
>>>>>>>     spinlock_t lock;
>>>>>>>     int count;
>>>>>>>     int high;
>>>>>>>     int high_min;
>>>>>>>     int high_max;
>>>>>>>     int batch;
>>>>>>>     u8 flags;
>>>>>>>     u8 alloc_factor;
>>>>>>>     u8 expire;
>>>>>>>     short free_count;
>>>>>>>     struct list_head lists[13];
>>>>>>> }
>>>>>>> SIZE: 256
>>>>>>>
>>>>>>> After reverting commit 5d0a661d808f ("mm/page_alloc: use only one
>>>>>>> PCP list for THP-sized allocations"), the size of struct
>>>>>>> per_cpu_pages is 272 bytes:
>>>>>>> crash> struct per_cpu_pages
>>>>>>> struct per_cpu_pages {
>>>>>>>     spinlock_t lock;
>>>>>>>     int count;
>>>>>>>     int high;
>>>>>>>     int high_min;
>>>>>>>     int high_max;
>>>>>>>     int batch;
>>>>>>>     u8 flags;
>>>>>>>     u8 alloc_factor;
>>>>>>>     u8 expire;
>>>>>>>     short free_count;
>>>>>>>     struct list_head lists[15];
>>>>>>> }
>>>>>>> SIZE: 272
>>>>>>>
>>>>>>> So commit 5d0a661d808f ("mm/page_alloc: use only one PCP list for
>>>>>>> THP-sized allocations") saves one cacheline.
>>>>>>
>>>>>> The proposal is not to revert the patch but to add one CMA PCP list,
>>>>>> so it would be "struct list_head lists[14]"; in that case, is the
>>>>>> size still 256?
>>>>>
>>>>> Yes, the size is still 256. If we add one PCP list, we will have 2 PCP
>>>>> lists for THP: one used by MIGRATE_UNMOVABLE, the other used by
>>>>> MIGRATE_MOVABLE and MIGRATE_RECLAIMABLE. Is that right?
>>>>
>>>> I am not quite sure about MIGRATE_RECLAIMABLE, as CMA can only be used
>>>> by movable allocations. So it might be: MOVABLE and NON-MOVABLE.
>>>
>>> One PCP list used by UNMOVABLE pages and the other used by MOVABLE
>>> pages seems feasible. The UNMOVABLE PCP list would contain
>>> MIGRATE_UNMOVABLE and MIGRATE_RECLAIMABLE pages, and the MOVABLE PCP
>>> list would contain MIGRATE_MOVABLE pages.
>>
>> Is the following modification feasible?
>>
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -#define NR_PCP_THP 1
>> +#define NR_PCP_THP 2
>> +#define PCP_THP_MOVABLE 0
>> +#define PCP_THP_UNMOVABLE 1
>>  #else
>>  #define NR_PCP_THP 0
>>  #endif
>>
>>  static inline unsigned int order_to_pindex(int migratetype, int order)
>>  {
>> +	int pcp_type = migratetype;
>> +
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>  	if (order > PAGE_ALLOC_COSTLY_ORDER) {
>>  		VM_BUG_ON(order != HPAGE_PMD_ORDER);
>> -		return NR_LOWORDER_PCP_LISTS;
>> +
>> +		if (migratetype != MIGRATE_MOVABLE)
>> +			pcp_type = PCP_THP_UNMOVABLE;
>> +		else
>> +			pcp_type = PCP_THP_MOVABLE;
>> +
>> +		return NR_LOWORDER_PCP_LISTS + pcp_type;
>>  	}
>>  #else
>>  	VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);
>>  #endif
>>
>> -	return (MIGRATE_PCPTYPES * order) + migratetype;
>> +	return (MIGRATE_PCPTYPES * order) + pcp_type;
>>  }
>
> A minimal change might be the following; then you can drop most of the
> new code.
>

Thanks, I will prepare the V2.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 120a317d0938..cfe1e0625e38 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -588,6 +588,7 @@ static void bad_page(struct page *page, const char *reason)
>
>  static inline unsigned int order_to_pindex(int migratetype, int order)
>  {
> +	bool __maybe_unused movable;
>  #ifdef CONFIG_CMA
>  	/*
>  	 * We shouldn't get here for MIGRATE_CMA if those pages don't
> @@ -600,7 +601,8 @@ static inline unsigned int order_to_pindex(int migratetype, int order)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  	if (order > PAGE_ALLOC_COSTLY_ORDER) {
>  		VM_BUG_ON(order != pageblock_order);
> -		return NR_LOWORDER_PCP_LISTS;
> +		movable = migratetype == MIGRATE_MOVABLE;
> +		return NR_LOWORDER_PCP_LISTS + movable;
>  	}
>  #else
>  	VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);

>> @@ -521,7 +529,7 @@ static inline int pindex_to_order(unsigned int pindex)
>>  	int order = pindex / MIGRATE_PCPTYPES;
>>
>>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -	if (pindex == NR_LOWORDER_PCP_LISTS)
>> +	if (order > PAGE_ALLOC_COSTLY_ORDER)
>>  		order = HPAGE_PMD_ORDER;
>>  #else
>>  	VM_BUG_ON(order > PAGE_ALLOC_COSTLY_ORDER);

>>>>>>>>> Commit 1d91df85f399 takes a similar approach to filter, and I
>>>>>>>>> mainly refer to it.
>>>>>>>>>
>>>>>>>>>>>> +#else
>>>>>>>>>>>>  		page = rmqueue_pcplist(preferred_zone, zone, order,
>>>>>>>>>>>>  				       migratetype, alloc_flags);
>>>>>>>>>>>>  		if (likely(page))
>>>>>>>>>>>>  			goto out;
>>>>>>>>>>>> +#endif
>>>>>>>>>>>>  	}
>>>>>>>>>>>>
>>>>>>>>>>>>  	page = rmqueue_buddy(preferred_zone, zone, order, alloc_flags,
>>>>>>>>>>>> --
>>>>>>>>>>>> 2.7.4
>>>>>>>>>>>
>>>>>>>>>>> Thanks
>>>>>>>>>>> Barry
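For reference, a quick user-space model of the pindex mapping after the
minimal change quoted above (a sketch under assumed defaults: the
constants below mirror common x86-64 values, and the function is a
standalone copy for illustration, not the kernel's). Movable THP lands
on its own list next to the non-movable THP list, giving the lists[14]
layout discussed above:

#include <stdbool.h>
#include <stdio.h>

#define MIGRATE_UNMOVABLE	0
#define MIGRATE_MOVABLE		1
#define MIGRATE_RECLAIMABLE	2
#define MIGRATE_PCPTYPES	3	/* PCP migratetypes per order */
#define PAGE_ALLOC_COSTLY_ORDER	3
#define HPAGE_PMD_ORDER		9	/* 2 MiB THP with 4 KiB pages */

/* lists 0..11 cover orders 0..3 times the 3 migratetypes */
#define NR_LOWORDER_PCP_LISTS	(MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1))

static unsigned int order_to_pindex(int migratetype, int order)
{
	if (order > PAGE_ALLOC_COSTLY_ORDER) {
		/* two THP lists: non-movable at +0, movable at +1 */
		bool movable = migratetype == MIGRATE_MOVABLE;
		return NR_LOWORDER_PCP_LISTS + movable;
	}
	return (MIGRATE_PCPTYPES * order) + migratetype;
}

int main(void)
{
	/* THP: movable and non-movable land on different lists */
	printf("THP movable     -> pindex %u\n",
	       order_to_pindex(MIGRATE_MOVABLE, HPAGE_PMD_ORDER));   /* 13 */
	printf("THP unmovable   -> pindex %u\n",
	       order_to_pindex(MIGRATE_UNMOVABLE, HPAGE_PMD_ORDER)); /* 12 */
	printf("order-0 movable -> pindex %u\n",
	       order_to_pindex(MIGRATE_MOVABLE, 0));                 /* 1 */
	return 0;	/* 14 lists total, matching lists[14] above */
}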