From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH V2] mm/hugetlb: try preferred node first when alloc gigantic page from cma
To: Michal Hocko, Li Xinhai
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, Roman Gushchin
References: <20200901144924.678195-1-lixinhai.lxh@gmail.com> <20200901150405.GH16650@dhcp22.suse.cz>
From: Mike Kravetz
Message-ID: <80d359f8-fb77-c560-91f7-89eafc5311ae@oracle.com>
Date: Tue, 1 Sep 2020 11:43:25 -0700
In-Reply-To: <20200901150405.GH16650@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On 9/1/20 8:04 AM, Michal Hocko wrote:
> On Tue 01-09-20 22:49:24, Li Xinhai wrote:
>> Since commit cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic
>> hugepages using cma"), the gigantic page would be allocated from a node
>> which is not the preferred node, although there are pages available from
>> that node. The reason is that the nid parameter has been ignored in
>> alloc_gigantic_page().
>>
>> Besides, __GFP_THISNODE also needs to be checked if the user requires
>> allocation only from the preferred node.
>>
>> After this patch, the preferred node is tried first before other allowed
>> nodes, and no attempt is made to allocate from other nodes if
>> __GFP_THISNODE is specified.
>>
>> Fixes: cf11e85fc08cc6a4 ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
>> Cc: Roman Gushchin
>> Cc: Mike Kravetz
>> Cc: Michal Hocko
>> Signed-off-by: Li Xinhai
>
> LGTM
> Acked-by: Michal Hocko

Thank you both for the updates!

>> ---
>> v1->v2:
>> With review by Mike and Michal, need to check __GFP_THISNODE to avoid
>> allocating from other nodes.
>>
>>  mm/hugetlb.c | 21 +++++++++++++++------
>>  1 file changed, 15 insertions(+), 6 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index a301c2d672bf..d24986145087 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -1256,15 +1256,24 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>>  		struct page *page;
>>  		int node;
>>
>> -		for_each_node_mask(node, *nodemask) {
>> -			if (!hugetlb_cma[node])
>> -				continue;
>> -
>> -			page = cma_alloc(hugetlb_cma[node], nr_pages,
>> -					huge_page_order(h), true);
>> +		if (nid != NUMA_NO_NODE && hugetlb_cma[nid]) {
>> +			page = cma_alloc(hugetlb_cma[nid], nr_pages,
>> +					huge_page_order(h), true);

I missed the NUMA_NO_NODE issue yesterday, but thought about it a bit
today.  As Michal says, we do not call into alloc_gigantic_page with
'nid == NUMA_NO_NODE' today, but we should handle it correctly.

Other places in the hugetlb code, such as alloc_buddy_huge_page and even
the low level interface alloc_pages_node, have code as follows:

	if (nid == NUMA_NO_NODE)
		nid = numa_mem_id();

This attempts the allocation on the current node first if NUMA_NO_NODE
is specified.  Of course, it falls back to other nodes allowed in the
mask.  If we are adding code to interpret NUMA_NO_NODE, I suggest we
make this type of change as well.  It can simply be added at the
beginning of alloc_gigantic_page to handle the non-CMA case too.

Suggestion for an updated patch below.
-- 
Mike Kravetz

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a301c2d672bf..98dc44a602b4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1251,20 +1251,32 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	unsigned long nr_pages = 1UL << huge_page_order(h);
 
+	if (nid == NUMA_NO_NODE)
+		nid = numa_mem_id();
+
 #ifdef CONFIG_CMA
 	{
 		struct page *page;
 		int node;
 
-		for_each_node_mask(node, *nodemask) {
-			if (!hugetlb_cma[node])
-				continue;
-
-			page = cma_alloc(hugetlb_cma[node], nr_pages,
-					huge_page_order(h), true);
+		if (hugetlb_cma[nid]) {
+			page = cma_alloc(hugetlb_cma[nid], nr_pages,
+					huge_page_order(h), true);
 			if (page)
 				return page;
 		}
+
+		if (!(gfp_mask & __GFP_THISNODE)) {
+			for_each_node_mask(node, *nodemask) {
+				if (node == nid || !hugetlb_cma[node])
+					continue;
+
+				page = cma_alloc(hugetlb_cma[node], nr_pages,
+						huge_page_order(h), true);
+				if (page)
+					return page;
+			}
+		}
 	}
 #endif