From: Aslan Bakirov
Date: Fri, 3 Apr 2020 10:51:32 +0100
Subject: Re: [PATCH v3] mm: cma: NUMA node interface
To: Ira Weiny
Cc: Aslan Bakirov, akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel-team@fb.com, riel@surriel.com, Roman Gushchin, mhocko@kernel.org, hannes@cmpxchg.org

On Fri, Apr 3, 2020 at 6:02 AM Ira Weiny wrote:
> On Thu, Apr 02, 2020 at 07:12:56PM -0700, Aslan Bakirov wrote:
> > I've noticed that there is no interface exposed by CMA which would let
> > me declare contiguous memory on a particular NUMA node.
> >
> > This patchset adds the ability to try to allocate contiguous memory on
> > a specific node. It will fall back to other nodes if the specified one
> > doesn't work.
> >
> > Implement a new method for declaring contiguous memory on a particular
> > node and keep cma_declare_contiguous() as a wrapper.
>
> Is there an additional patch which uses this new interface?
>
> Generally the patch seems reasonable, but we should have a user.

Thanks for the comments. Yes, actually, this is version 3 of the first
patch ([PATCH 1/2] mm: cma: NUMA node interface) of the patchset.
The second patch, which uses this interface, is "[PATCH 2/2] mm: hugetlb:
Use node interface of cma".

> Ira
>
> >
> > Signed-off-by: Aslan Bakirov
> > ---
> >  include/linux/cma.h      | 13 +++++++++++--
> >  include/linux/memblock.h |  3 +++
> >  mm/cma.c                 | 16 +++++++++-------
> >  mm/memblock.c            |  2 +-
> >  4 files changed, 24 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/cma.h b/include/linux/cma.h
> > index 190184b5ff32..eae834c2162f 100644
> > --- a/include/linux/cma.h
> > +++ b/include/linux/cma.h
> > @@ -24,10 +24,19 @@ extern phys_addr_t cma_get_base(const struct cma *cma);
> >  extern unsigned long cma_get_size(const struct cma *cma);
> >  extern const char *cma_get_name(const struct cma *cma);
> >
> > -extern int __init cma_declare_contiguous(phys_addr_t base,
> > +extern int __init cma_declare_contiguous_nid(phys_addr_t base,
> >  			phys_addr_t size, phys_addr_t limit,
> >  			phys_addr_t alignment, unsigned int order_per_bit,
> > -			bool fixed, const char *name, struct cma **res_cma);
> > +			bool fixed, const char *name, struct cma **res_cma,
> > +			int nid);
> > +static inline int __init cma_declare_contiguous(phys_addr_t base,
> > +			phys_addr_t size, phys_addr_t limit,
> > +			phys_addr_t alignment, unsigned int order_per_bit,
> > +			bool fixed, const char *name, struct cma **res_cma)
> > +{
> > +	return cma_declare_contiguous_nid(base, size, limit, alignment,
> > +			order_per_bit, fixed, name, res_cma, NUMA_NO_NODE);
> > +}
> >  extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> >  				unsigned int order_per_bit,
> >  				const char *name,
> > diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> > index 079d17d96410..6bc37a731d27 100644
> > --- a/include/linux/memblock.h
> > +++ b/include/linux/memblock.h
> > @@ -348,6 +348,9 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
> >
> >  phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
> >  				      phys_addr_t start, phys_addr_t end);
> > +phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
> > +				      phys_addr_t align, phys_addr_t start,
> > +				      phys_addr_t end, int nid, bool exact_nid);
> >  phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
> >
> >  static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
> > diff --git a/mm/cma.c b/mm/cma.c
> > index be55d1988c67..6405af3dc118 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -220,7 +220,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> >  }
> >
> >  /**
> > - * cma_declare_contiguous() - reserve custom contiguous area
> > + * cma_declare_contiguous_nid() - reserve custom contiguous area
> >   * @base: Base address of the reserved area optional, use 0 for any
> >   * @size: Size of the reserved area (in bytes),
> >   * @limit: End address of the reserved memory (optional, 0 for any).
> > @@ -229,6 +229,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> >   * @fixed: hint about where to place the reserved area
> >   * @name: The name of the area. See function cma_init_reserved_mem()
> >   * @res_cma: Pointer to store the created cma region.
> > + * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
> >   *
> >   * This function reserves memory from early allocator. It should be
> >   * called by arch specific code once the early allocator (memblock or bootmem)
> > @@ -238,10 +239,11 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
> >   * If @fixed is true, reserve contiguous area at exactly @base.  If false,
> >   * reserve in range from @base to @limit.
> >   */
> > -int __init cma_declare_contiguous(phys_addr_t base,
> > +int __init cma_declare_contiguous_nid(phys_addr_t base,
> >  			phys_addr_t size, phys_addr_t limit,
> >  			phys_addr_t alignment, unsigned int order_per_bit,
> > -			bool fixed, const char *name, struct cma **res_cma)
> > +			bool fixed, const char *name, struct cma **res_cma,
> > +			int nid)
> >  {
> >  	phys_addr_t memblock_end = memblock_end_of_DRAM();
> >  	phys_addr_t highmem_start;
> > @@ -336,14 +338,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
> >  		 * memory in case of failure.
> >  		 */
> >  		if (base < highmem_start && limit > highmem_start) {
> > -			addr = memblock_phys_alloc_range(size, alignment,
> > -							 highmem_start, limit);
> > +			addr = memblock_alloc_range_nid(size, alignment,
> > +					highmem_start, limit, nid, false);
> >  			limit = highmem_start;
> >  		}
> >
> >  		if (!addr) {
> > -			addr = memblock_phys_alloc_range(size, alignment, base,
> > -							 limit);
> > +			addr = memblock_alloc_range_nid(size, alignment, base,
> > +					limit, nid, false);
> >  			if (!addr) {
> >  				ret = -ENOMEM;
> >  				goto err;
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index 4d06bbaded0f..c79ba6f9920c 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1349,7 +1349,7 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
> >   * Return:
> >   * Physical address of allocated memory block on success, %0 on failure.
> >   */
> > -static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > +phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> >  					phys_addr_t align, phys_addr_t start,
> >  					phys_addr_t end, int nid,
> >  					bool exact_nid)
> > --
> > 2.24.1