Subject: Re: [RFC PATCH v4 7/7] mm/demotion: Demote pages according to allocation fallback order
From: Aneesh Kumar K V <aneesh.kumar@linux.ibm.com>
To: Ying Huang, linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Greg Thelen, Yang Shi, Davidlohr Bueso, Tim C Chen, Brice Goglin,
 Michal Hocko, Linux Kernel Mailing List, Hesham Almatary, Dave Hansen,
 Jonathan Cameron, Alistair Popple, Dan Williams, Feng Tang,
 Jagdish Gediya, Baolin Wang, David Rientjes
Date: Fri, 3 Jun 2022 20:39:47 +0530
Message-ID: <046c373a-f30b-091d-47a1-e28bfb7e9394@linux.ibm.com>
References: <20220527122528.129445-1-aneesh.kumar@linux.ibm.com>
 <20220527122528.129445-8-aneesh.kumar@linux.ibm.com>
On 6/2/22 1:05 PM, Ying Huang wrote:
> On Fri, 2022-05-27 at 17:55 +0530, Aneesh Kumar K.V wrote:
>> From: Jagdish Gediya
>>
>> Currently, a higher tier node can only be demoted to selected
>> nodes on the next lower tier as defined by the demotion path,
>> not any other node from any lower tier. This strict, hard-coded
>> demotion order does not work in all use cases (e.g. some use cases
>> may want to allow cross-socket demotion to another node in the same
>> demotion tier as a fallback when the preferred demotion node is out
>> of space). This demotion order is also inconsistent with the page
>> allocation fallback order when all the nodes in a higher tier are
>> out of space: the page allocation can fall back to any node from any
>> lower tier, whereas the demotion order doesn't allow that currently.
>>
>> This patch adds support for getting the mask of all allowed demotion
>> targets for a node. demote_page_list() is also modified to use this
>> allowed node mask by filling it into the migration_target_control
>> structure before passing it to migrate_pages().
> ...
>>   * Take pages on @demote_list and attempt to demote them to
>>   * another node. Pages which are not demoted are left on
>> @@ -1481,6 +1464,19 @@ static unsigned int demote_page_list(struct list_head *demote_pages,
>>  {
>>  	int target_nid = next_demotion_node(pgdat->node_id);
>>  	unsigned int nr_succeeded;
>> +	nodemask_t allowed_mask;
>> +
>> +	struct migration_target_control mtc = {
>> +		/*
>> +		 * Allocate from 'node', or fail quickly and quietly.
>> +		 * When this happens, 'page' will likely just be discarded
>> +		 * instead of migrated.
>> +		 */
>> +		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) | __GFP_NOWARN |
>> +			__GFP_NOMEMALLOC | GFP_NOWAIT,
>> +		.nid = target_nid,
>> +		.nmask = &allowed_mask
>> +	};
>
> IMHO, we should try to allocate from the preferred node first (which
> will kick kswapd on the preferred node if necessary). If that fails,
> we can fall back to all allowed nodes.
>
> As we discussed here,
>
> https://lore.kernel.org/lkml/69f2d063a15f8c4afb4688af7b7890f32af55391.camel@intel.com/
>
> that is, something like below:
>
> static struct page *alloc_demote_page(struct page *page, unsigned long node)
> {
> 	struct page *target;
> 	nodemask_t allowed_mask;
> 	struct migration_target_control mtc = {
> 		/*
> 		 * Allocate from 'node', or fail quickly and quietly.
> 		 * When this happens, 'page' will likely just be discarded
> 		 * instead of migrated.
> 		 */
> 		.gfp_mask = (GFP_HIGHUSER_MOVABLE & ~__GFP_RECLAIM) |
> 			__GFP_THISNODE | __GFP_NOWARN |
> 			__GFP_NOMEMALLOC | GFP_NOWAIT,
> 		.nid = node
> 	};
>
> 	/* First try: only the preferred demotion node (__GFP_THISNODE). */
> 	target = alloc_migration_target(page, (unsigned long)&mtc);
> 	if (target)
> 		return target;
>
> 	/* Fallback: drop __GFP_THISNODE and allow any node in allowed_mask. */
> 	mtc.gfp_mask &= ~__GFP_THISNODE;
> 	mtc.nmask = &allowed_mask;
>
> 	return alloc_migration_target(page, (unsigned long)&mtc);
> }

I skipped doing this in v5 because I was not sure it is really what we
want. I guess we can do this as part of the change that introduces the
use of memory policy for the allocation?

-aneesh