From: Ben Widawsky
To: linux-mm
Subject: [PATCH 13/18] mm: kill __alloc_pages
Date: Fri, 19 Jun 2020 09:24:09 -0700
Message-Id: <20200619162414.1052234-14-ben.widawsky@intel.com>
In-Reply-To: <20200619162414.1052234-1-ben.widawsky@intel.com>
References: <20200619162414.1052234-1-ben.widawsky@intel.com>

IMPORTANT NOTE: It's unclear how safe it is to declare nodemask_t on the
stack, since nodemask_t can be relatively large on huge NUMA systems.
Upcoming patches will try to limit this.

The primary purpose of this patch is to clear up which interfaces should
be used for page allocation.
Beyond the obvious gfp and order, page allocation has several attributes:

1. node mask: the set of nodes to try to allocate from; fail if none is
   available
2. preferred nid: a preferred node to try to allocate from, falling back
   to the node mask if unavailable
3. (soon) preferred mask: like preferred nid, but multiple nodes

Here's a summary of the existing interfaces, and which attributes they
cover:

  *alloc_pages: ()
  *alloc_pages_node: (2)
  __alloc_pages_nodemask: (1,2,3)

I am proposing the following interfaces as a reasonable set. Generally,
node binding isn't used by kernel code; it's only used by mempolicy. On
the other hand, the kernel does have preferred nodes (today only one),
which is why those interfaces exist while an interface to specify
binding does not.

  alloc_pages: ()                  I don't care, give me pages.
  alloc_pages_node: (2)            I want pages from this particular node first.
  alloc_pages_nodes: (3)           I want pages from *these* nodes first.
  __alloc_pages_nodemask: (1,2,3)  I'm picky about my pages.

Cc: Andrew Morton
Cc: Michal Hocko
Signed-off-by: Ben Widawsky
---
 include/linux/gfp.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 67a0774e080b..9ab5c07579bd 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -504,9 +504,10 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 						int preferred_nid, nodemask_t *nodemask);
 
 static inline struct page *
-__alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
+__alloc_pages_nodes(nodemask_t *nodes, gfp_t gfp_mask, unsigned int order)
 {
-	return __alloc_pages_nodemask(gfp_mask, order, preferred_nid, NULL);
+	return __alloc_pages_nodemask(gfp_mask, order, first_node(*nodes),
+				      NULL);
 }
 
 /*
@@ -516,10 +517,12 @@ __alloc_pages(gfp_t gfp_mask, unsigned int order, int preferred_nid)
 static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
+	nodemask_t tmp;
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
 	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
 
-	return __alloc_pages(gfp_mask, order, nid);
+	tmp = nodemask_of_node(nid);
+	return __alloc_pages_nodes(&tmp, gfp_mask, order);
 }
 
 /*
-- 
2.27.0