From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 22 Jun 2020 09:09:57 +0200
From: Michal Hocko
To: Ben Widawsky
Cc: linux-mm, Andi Kleen, Andrew Morton, Christoph Lameter, Dan Williams,
	Dave Hansen, David Hildenbrand, David Rientjes, Jason Gunthorpe,
	Johannes Weiner, Jonathan Corbet, Kuppuswamy Sathyanarayanan,
	Lee Schermerhorn, Li Xinhai, Mel Gorman, Mike Kravetz, Mina Almasry,
	Tejun Heo, Vlastimil Babka, linux-api@vger.kernel.org
Subject: Re: [PATCH 00/18] multiple preferred nodes
Message-ID: <20200622070957.GB31426@dhcp22.suse.cz>
References: <20200619162425.1052382-1-ben.widawsky@intel.com>
In-Reply-To: <20200619162425.1052382-1-ben.widawsky@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii

User visible API changes/additions should be posted to the linux-api
mailing list. Now added.

On Fri 19-06-20 09:24:07, Ben Widawsky wrote:
> This patch series introduces the concept of the MPOL_PREFERRED_MANY
> mempolicy. This mempolicy mode can be used with either the
> set_mempolicy(2) or mbind(2) interfaces. Like the MPOL_PREFERRED
> interface, it allows an application to set a preference for nodes
> which will fulfil memory allocation requests. Like the MPOL_BIND
> interface, it works over a set of nodes.
>
> Summary:
> 1-2: Random fixes I found along the way
> 3-4: Logic to handle many preferred nodes in page allocation
> 5-9: Plumbing to allow multiple preferred nodes in mempolicy
> 10-13: Teach page allocation APIs about nodemasks
> 14: Provide a helper to generate preferred nodemasks
> 15: Have page allocation callers generate preferred nodemasks
> 16-17: Flip the switch to have __alloc_pages_nodemask take a preferred mask
> 18: Expose the new uapi
>
> Along with these patches are patches for libnuma, numactl, numademo,
> and memhog. They still need some polish, but can be found here:
> https://gitlab.com/bwidawsk/numactl/-/tree/prefer-many
> It allows new usage: `numactl -P 0,3,4`
>
> The goal of the new mode is to enable some use-cases of tiered memory
> usage models, which I've lovingly named:
> 1a. The Hare - The interconnect is fast enough to meet bandwidth and
>     latency requirements, allowing preference to be given to all nodes
>     with "fast" memory.
> 1b. The Indiscriminate Hare - An application knows it wants fast memory
>     (or perhaps slow memory), but doesn't care which node it runs on.
>     The application can prefer a set of nodes and then xpu-bind to the
>     local node (cpu, accelerator, etc). This reverses how nodes are
>     chosen today, where the kernel attempts to use memory local to the
>     CPU whenever possible; instead it will attempt to use the
>     accelerator local to the memory.
> 2. The Tortoise - The administrator (or the application itself) is
>    aware it only needs slow memory, and so can prefer that.
>
> Much of this is almost achievable with the bind interface, but the
> bind interface suffers from an inability to fall back to another set
> of nodes if binding to all nodes in the nodemask fails.
>
> Like MPOL_BIND, a nodemask is given. Inherently this removes ordering
> from the preference.
>
> > /* Set first two nodes as preferred in an 8 node system. */
> > const unsigned long nodes = 0x3;
> > set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
>
> > /* Mimic interleave policy, but have fallback. */
> > const unsigned long nodes = 0xaa;
> > set_mempolicy(MPOL_PREFERRED_MANY, &nodes, 8);
>
> Some internal discussion took place around the interface. There are
> two alternatives which we have discussed, plus one I stuck in:
> 1. Ordered list of nodes. Currently it's believed that the added
>    complexity is not needed for expected usecases.
> 2. A flag for bind to allow falling back to other nodes. This confuses
>    the notion of binding and is less flexible than the current
>    solution.
> 3. Create flags or new modes that help with some ordering. This offers
>    both a friendlier API as well as a solution for more customized
>    usage. It's unknown if it's worth the complexity to support this.
>    Here is sample code for how this might work:
>
> > // Default
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_SOCKET, NULL, 0);
> > // which is the same as
> > set_mempolicy(MPOL_DEFAULT, NULL, 0);
> >
> > // The Hare
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, NULL, 0);
> >
> > // The Tortoise
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_REV, NULL, 0);
> >
> > // Prefer the fast memory of the first two sockets
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE, -1, 2);
> >
> > // Prefer specific nodes for something wacky
> > set_mempolicy(MPOL_PREFERRED_MANY | MPOL_F_PREFER_ORDER_TYPE_CUSTOM, 0x17c, 1024);
>
> ---
>
> Cc: Andi Kleen
> Cc: Andrew Morton
> Cc: Christoph Lameter
> Cc: Dan Williams
> Cc: Dave Hansen
> Cc: David Hildenbrand
> Cc: David Rientjes
> Cc: Jason Gunthorpe
> Cc: Johannes Weiner
> Cc: Jonathan Corbet
> Cc: Kuppuswamy Sathyanarayanan
> Cc: Lee Schermerhorn
> Cc: Li Xinhai
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Mike Kravetz
> Cc: Mina Almasry
> Cc: Tejun Heo
> Cc: Vlastimil Babka
>
> Ben Widawsky (14):
>   mm/mempolicy: Add comment for missing LOCAL
>   mm/mempolicy: Use node_mem_id() instead of node_id()
>   mm/page_alloc: start plumbing multi preferred node
>   mm/page_alloc: add preferred pass to page allocation
>   mm: Finish handling MPOL_PREFERRED_MANY
>   mm: clean up alloc_pages_vma (thp)
>   mm: Extract THP hugepage allocation
>   mm/mempolicy: Use __alloc_page_node for interleaved
>   mm: kill __alloc_pages
>   mm/mempolicy: Introduce policy_preferred_nodes()
>   mm: convert callers of __alloc_pages_nodemask to pmask
>   alloc_pages_nodemask: turn preferred nid into a nodemask
>   mm: Use less stack for page allocations
>   mm/mempolicy: Advertise new MPOL_PREFERRED_MANY
>
> Dave Hansen (4):
>   mm/mempolicy: convert single preferred_node to full nodemask
>   mm/mempolicy: Add MPOL_PREFERRED_MANY for multiple preferred nodes
>   mm/mempolicy: allow preferred code to take a nodemask
>   mm/mempolicy: refactor rebind code for PREFERRED_MANY
>
>  .../admin-guide/mm/numa_memory_policy.rst |  22 +-
>  include/linux/gfp.h                       |  19 +-
>  include/linux/mempolicy.h                 |   4 +-
>  include/linux/migrate.h                   |   4 +-
>  include/linux/mmzone.h                    |   3 +
>  include/uapi/linux/mempolicy.h            |   6 +-
>  mm/hugetlb.c                              |  10 +-
>  mm/internal.h                             |   1 +
>  mm/mempolicy.c                            | 271 +++++++++++++-----
>  mm/page_alloc.c                           | 179 +++++++++++-
>  10 files changed, 403 insertions(+), 116 deletions(-)
>
> --
> 2.27.0

-- 
Michal Hocko
SUSE Labs