From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gregory Price
To: linux-kernel@vger.kernel.org
Cc: linux-cxl@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-doc@vger.kernel.org, ying.huang@intel.com,
	akpm@linux-foundation.org, mhocko@kernel.org, tj@kernel.org,
	lizefan.x@bytedance.com, hannes@cmpxchg.org, corbet@lwn.net,
	roman.gushchin@linux.dev, shakeelb@google.com, muchun.song@linux.dev,
	Gregory Price
Subject: [RFC PATCH v4 2/3] mm/mempolicy: implement weighted interleave
Date: Wed, 8 Nov 2023 19:25:16 -0500
Message-Id: <20231109002517.106829-3-gregory.price@memverge.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231109002517.106829-1-gregory.price@memverge.com>
References: <20231109002517.106829-1-gregory.price@memverge.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implement interleave weighting for bandwidth optimization. The
MPOL_INTERLEAVE mempolicy uses the per-node weights to decide how many
pages each node receives before the interleave advances.

There are three integration points:

interleave_nodes:
    Counts allocations as they occur and applies the weight of the
    current node. When the remaining weight reaches 0, switches to the
    next node in the nodemask.

offset_il_node:
    Gets the total weight of the nodemask as well as each individual
    node's weight, then maps the given index n onto a node by walking
    the cumulative weights.

bulk_array_interleave:
    Gets the total weight of the nodemask as well as each individual
    node's weight, then calculates the number of full "interleave
    rounds" and any remainder (a "partial round"), derives the number
    of pages for each node, and allocates them.

    If a node was already scheduled for interleave via
    interleave_nodes, its remaining weight (pol->cur_weight) is
    allocated first, before the bulk calculation for the rest of the
    request. This keeps the calculation simple at the cost of one
    additional allocation call.

The helpers mempolicy_get_il_weight and mempolicy_get_il_weights were
added so that, if mempolicy is later extended to carry its own local
weights, there is a clear integration point.
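As an illustration of the arithmetic (the node IDs and weights here
are hypothetical, not part of the patch): suppose the nodemask
contains nodes 0 and 1 with weights 3 and 1, for a total weight of 4.
offset_il_node then maps an index n via n % 4: remainders 0-2 fall
inside node 0's weight and select node 0, while remainder 3 selects
node 1. A bulk request for 10 pages with no residual cur_weight yields
rounds = 10 / 4 = 2 and delta = 10 % 4 = 2, so node 0 receives
3 * 2 + 2 = 8 pages and node 1 receives 1 * 2 = 2 pages, preserving
the intended 3:1 ratio.
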
Signed-off-by: Gregory Price
---
 include/linux/mempolicy.h |   3 +
 mm/mempolicy.c            | 153 +++++++++++++++++++++++++++++++-------
 2 files changed, 128 insertions(+), 28 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..b1ca63077fc4 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -48,6 +48,9 @@ struct mempolicy {
 	nodemask_t nodes;	/* interleave/bind/perfer */
 	int home_node;	/* Home node to use for MPOL_BIND and MPOL_PREFERRED_MANY */
 
+	unsigned char cur_weight;	/* weight of current il node */
+	unsigned char il_weights[MAX_NUMNODES];	/* used during allocation */
+
 	union {
 		nodemask_t cpuset_mems_allowed;	/* relative to these nodes */
 		nodemask_t user_nodemask;	/* nodemask passed by user */
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 29ebf1e7898c..231b9bbd391a 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -300,6 +300,7 @@ static struct mempolicy *mpol_new(unsigned short mode, unsigned short flags,
 	policy->mode = mode;
 	policy->flags = flags;
 	policy->home_node = NUMA_NO_NODE;
+	policy->cur_weight = 0;
 
 	return policy;
 }
@@ -334,6 +335,7 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes)
 		tmp = *nodes;
 
 	pol->nodes = tmp;
+	pol->cur_weight = 0;
 }
 
 static void mpol_rebind_preferred(struct mempolicy *pol,
@@ -881,8 +883,10 @@ static long do_set_mempolicy(unsigned short mode, unsigned short flags,
 
 	old = current->mempolicy;
 	current->mempolicy = new;
-	if (new && new->mode == MPOL_INTERLEAVE)
+	if (new && new->mode == MPOL_INTERLEAVE) {
 		current->il_prev = MAX_NUMNODES-1;
+		new->cur_weight = 0;
+	}
 	task_unlock(current);
 	mpol_put(old);
 	ret = 0;
@@ -1900,15 +1904,50 @@ static int policy_node(gfp_t gfp, struct mempolicy *policy, int nd)
 	return nd;
 }
 
+static unsigned char mempolicy_get_il_weight(struct mempolicy *policy,
+					     unsigned int nid)
+{
+	int weight = mem_cgroup_get_il_weight(nid);
+
+	return weight ? weight : 1;
+}
+
+static unsigned int mempolicy_get_il_weights(struct mempolicy *policy,
+					     nodemask_t *nodes,
+					     unsigned char *weights)
+{
+	unsigned int total = 0;
+	unsigned int nid;
+
+	total = mem_cgroup_get_il_weights(nodes, weights);
+	if (total)
+		return total;
+
+	for_each_node_mask(nid, *nodes) {
+		weights[nid] = 1;
+		total += 1;
+	}
+	return total;
+}
+
 /* Do dynamic interleaving for a process */
 static unsigned interleave_nodes(struct mempolicy *policy)
 {
 	unsigned next;
+	unsigned char next_weight;
 	struct task_struct *me = current;
 
 	next = next_node_in(me->il_prev, policy->nodes);
-	if (next < MAX_NUMNODES)
+	if (!policy->cur_weight) {
+		/* If the node is set, at least 1 allocation is required */
+		next_weight = mempolicy_get_il_weight(policy, next);
+		policy->cur_weight = next_weight ? next_weight : 1;
+	}
+
+	policy->cur_weight--;
+	if (next < MAX_NUMNODES && !policy->cur_weight)
 		me->il_prev = next;
+
 	return next;
 }
@@ -1967,8 +2006,8 @@ unsigned int mempolicy_slab_node(void)
 static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 {
 	nodemask_t nodemask = pol->nodes;
-	unsigned int target, nnodes;
-	int i;
+	unsigned int target, nnodes, il_weight;
+	unsigned char weight;
 	int nid;
 	/*
 	 * The barrier will stabilize the nodemask in a register or on
@@ -1982,10 +2021,18 @@ static unsigned offset_il_node(struct mempolicy *pol, unsigned long n)
 	nnodes = nodes_weight(nodemask);
 	if (!nnodes)
 		return numa_node_id();
-	target = (unsigned int)n % nnodes;
+
+	il_weight = mempolicy_get_il_weights(pol, &nodemask, pol->il_weights);
+	target = (unsigned int)n % il_weight;
 	nid = first_node(nodemask);
-	for (i = 0; i < target; i++)
-		nid = next_node(nid, nodemask);
+
+	while (target) {
+		weight = pol->il_weights[nid];
+		if (target < weight)
+			break;
+		target -= weight;
+		nid = next_node_in(nid, nodemask);
+	}
 	return nid;
 }
@@ -2319,32 +2366,82 @@ static unsigned long alloc_pages_bulk_array_interleave(gfp_t gfp,
 		struct mempolicy *pol, unsigned long nr_pages,
 		struct page **page_array)
 {
-	int nodes;
-	unsigned long nr_pages_per_node;
-	int delta;
-	int i;
-	unsigned long nr_allocated;
+	struct task_struct *me = current;
 	unsigned long total_allocated = 0;
+	unsigned long nr_allocated;
+	unsigned long rounds;
+	unsigned long node_pages, delta;
+	unsigned char weight;
+	unsigned long il_weight;
+	unsigned long req_pages = nr_pages;
+	int nnodes, node, prev_node;
+	int i;
 
-	nodes = nodes_weight(pol->nodes);
-	nr_pages_per_node = nr_pages / nodes;
-	delta = nr_pages - nodes * nr_pages_per_node;
-
-	for (i = 0; i < nodes; i++) {
-		if (delta) {
-			nr_allocated = __alloc_pages_bulk(gfp,
-					interleave_nodes(pol), NULL,
-					nr_pages_per_node + 1, NULL,
-					page_array);
-			delta--;
-		} else {
-			nr_allocated = __alloc_pages_bulk(gfp,
-					interleave_nodes(pol), NULL,
-					nr_pages_per_node, NULL, page_array);
+	prev_node = me->il_prev;
+	nnodes = nodes_weight(pol->nodes);
+	/* Continue allocating from most recent node */
+	if (pol->cur_weight) {
+		node = next_node_in(prev_node, pol->nodes);
+		node_pages = pol->cur_weight;
+		if (node_pages > nr_pages)
+			node_pages = nr_pages;
+		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
+						  NULL, page_array);
+		page_array += nr_allocated;
+		total_allocated += nr_allocated;
+		/* if that's all the pages, no need to interleave */
+		if (req_pages <= pol->cur_weight) {
+			pol->cur_weight -= req_pages;
+			return total_allocated;
 		}
-
+		/* Otherwise we adjust req_pages down, and continue from there */
+		req_pages -= pol->cur_weight;
+		pol->cur_weight = 0;
+		prev_node = node;
+	}
+
+	il_weight = mempolicy_get_il_weights(pol, &pol->nodes,
+					     pol->il_weights);
+	rounds = req_pages / il_weight;
+	delta = req_pages % il_weight;
+	for (i = 0; i < nnodes; i++) {
+		node = next_node_in(prev_node, pol->nodes);
+		weight = pol->il_weights[node];
+		node_pages = weight * rounds;
+		if (delta > weight) {
+			node_pages += weight;
+			delta -= weight;
+		} else if (delta) {
+			node_pages += delta;
+			delta = 0;
+		}
+		/* The number of requested pages may not hit every node */
+		if (!node_pages)
+			break;
+		/* If an over-allocation would occur, floor it */
+		if (node_pages + total_allocated > nr_pages) {
+			node_pages = nr_pages - total_allocated;
+			delta = 0;
+		}
+		nr_allocated = __alloc_pages_bulk(gfp, node, NULL, node_pages,
+						  NULL, page_array);
 		page_array += nr_allocated;
 		total_allocated += nr_allocated;
+		prev_node = node;
+	}
+
+	/*
+	 * Finally, we need to update me->il_prev and pol->cur_weight
+	 * If the last node allocated on has un-used weight, apply
+	 * the remainder as the cur_weight, otherwise proceed to next node
+	 */
+	if (node_pages) {
+		me->il_prev = prev_node;
+		node_pages %= weight;
+		pol->cur_weight = weight - node_pages;
+	} else {
+		me->il_prev = node;
+		pol->cur_weight = 0;
 	}
 
 	return total_allocated;
-- 
2.39.1
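
For readers who want to poke at the selection arithmetic outside the
kernel, below is a minimal stand-alone user-space sketch of the same
weighted round-robin logic used by offset_il_node and the bulk
rounds/delta split in the patch. It is only a model under assumed
inputs: the two-node weight table is made up, and none of the kernel
APIs are used.

#include <stdio.h>

#define NNODES 2

/*
 * Hypothetical per-node weights; in the patch these would come from
 * mem_cgroup_get_il_weights() or default to 1 per node.
 */
static const unsigned char weights[NNODES] = { 3, 1 };

static unsigned int total_weight(void)
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < NNODES; i++)
		total += weights[i];
	return total;
}

/* Models offset_il_node: walk cumulative weights to map index n. */
static int weighted_offset(unsigned long n)
{
	unsigned int target = n % total_weight();
	int nid = 0;

	while (target >= weights[nid]) {
		target -= weights[nid];
		nid = (nid + 1) % NNODES;
	}
	return nid;
}

/* Models the bulk path: full rounds per node plus a partial round. */
static void bulk_split(unsigned long req_pages)
{
	unsigned long rounds = req_pages / total_weight();
	unsigned long delta = req_pages % total_weight();
	int node;

	for (node = 0; node < NNODES; node++) {
		unsigned long node_pages = weights[node] * rounds;

		if (delta > weights[node]) {
			node_pages += weights[node];
			delta -= weights[node];
		} else if (delta) {
			node_pages += delta;
			delta = 0;
		}
		printf("node %d gets %lu pages\n", node, node_pages);
	}
}

int main(void)
{
	unsigned long n;

	for (n = 0; n < 8; n++)
		printf("page %lu -> node %d\n", n, weighted_offset(n));
	bulk_split(10);
	return 0;
}

Compiled with any C compiler, this places pages 0-2 of each 4-page
cycle on node 0 and page 3 on node 1, and splits a 10-page bulk
request 8/2 across the two nodes, matching the worked example in the
commit message above.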