From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 01800EB64DC for ; Tue, 11 Jul 2023 15:21:10 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 628A56B0072; Tue, 11 Jul 2023 11:21:10 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5FF996B0074; Tue, 11 Jul 2023 11:21:10 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4C7B26B0075; Tue, 11 Jul 2023 11:21:10 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 3F4026B0072 for ; Tue, 11 Jul 2023 11:21:10 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id F3C671C8172 for ; Tue, 11 Jul 2023 15:21:09 +0000 (UTC) X-FDA: 80999694300.06.5760BBE Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf19.hostedemail.com (Postfix) with ESMTP id 5D6C31A0022 for ; Tue, 11 Jul 2023 15:21:06 +0000 (UTC) Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=ab2JmDe9; spf=pass (imf19.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1689088867; a=rsa-sha256; cv=none; b=QMydtiTScj3jJmO5OTgBlab7RrkmNVh9h+eBfolpgOZ3gDU3v0c34hJcpJdFujnFWyiD5l Va92q1bADAMmAg9Yt3WiOv2RucJABNENSyS1ouxALXNV+2qAdION3C2TsCbTfKAG4MAg4n 6fHw3+Bg/280UtPXDIHmNQzROqAvMMs= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=ab2JmDe9; spf=pass (imf19.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1689088867; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=hR8o5w/Hx4dMIcWhqSVmnIxH5BFRWJkuE4F/mZRv5R0=; b=ql4YFYUWhzVnCuc6Rfff8HmaI3gnUDKoBzCoK5kn4U5iS5456AXzr09jglR1YEsVCLW4uh qqspeJtM0FqTaVFEDCRbpRfwwhQoQbguX6fbF3FrjzpxGyGAt/0qR75Cl9hFAp9JAh7ynT O04+inkcPNTlJTv7Bw2Ec7xEEkYTmHY= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1689088866; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version:content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=hR8o5w/Hx4dMIcWhqSVmnIxH5BFRWJkuE4F/mZRv5R0=; b=ab2JmDe9tMHNAz5SicgjnlUlRL6VE7Hd+ADjiPw9oPWMud+xpwUyIHRGeomRoMEufDSw2c 6S5wlxtgmPnvccRG0ZBKfrgiwu0WzRWkB8xNwkCoTlYfz7kkm4BnD+lPog6hLEh5oU1pL2 PmGwFR+SXln9XZEgapOw5x87U6WoFbU= Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-153-UbA5VgGzMQKmb7MgbChPoQ-1; Tue, 11 Jul 2023 11:21:05 -0400 X-MC-Unique: UbA5VgGzMQKmb7MgbChPoQ-1 Received: by mail-wm1-f70.google.com with SMTP id 5b1f17b1804b1-3fbfc766a78so29583495e9.3 for ; Tue, 11 Jul 2023 08:21:05 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1689088864; x=1691680864; h=content-transfer-encoding:in-reply-to:subject:organization:from :references:cc:to:content-language:user-agent:mime-version:date :message-id:x-gm-message-state:from:to:cc:subject:date:message-id :reply-to; bh=hR8o5w/Hx4dMIcWhqSVmnIxH5BFRWJkuE4F/mZRv5R0=; b=gtgWDbjPCFC1UffFOv2SPLgyrCA0ot0BlskkUKnZacxlMC3vfrtgG06bz+2McZeCDd iPZ4RZZMlU17NfI4rlhsRmiaPsGDt3pvZ0hdOSqB+r8QfGxnuSdXHLJNa1sbLbeGw+hw TrSqKbhPUeTeifmljMo4cbhZ4eZbGc0DBrABmhljaLykx2Ormnz2J7UbNjUaaDQQ0tPp 7sgJ9xGvClY/ZPOtOT+cRC52jjaKlCZiiEW+SsvYJ8YdBFe1Fws+KQAjwqXglbXtmZmg JDqj56U+LwwcaNS8WqYShTwj2YuaR3CTkiqVYPSl1Utca1EG4GOzfQTk19nilhrkDzKV 3Tmw== X-Gm-Message-State: ABy/qLbrTXoJff/5vHW5tnC9Jdtnv4Ex+y7X9eukTV1dcRuRP9ttKD5u d7/bignqIHqmyACX5zilmFHDX6gu3yRZOkZtCpaUBVZ0kvJA74EuVjz2IANO5x+lBelqWNZkoJY sC5kZ1UwO2QM= X-Received: by 2002:a05:600c:3659:b0:3fb:ffca:b6b8 with SMTP id y25-20020a05600c365900b003fbffcab6b8mr10769700wmq.41.1689088864171; Tue, 11 Jul 2023 08:21:04 -0700 (PDT) X-Google-Smtp-Source: APBJJlFa8FePMOcFtsaR5lyUM9Zkhx4gUcfLsSekgoJaLPVs+fd3+XRakBj3NxEJhRdLG4yh8Eojkw== X-Received: by 2002:a05:600c:3659:b0:3fb:ffca:b6b8 with SMTP id y25-20020a05600c365900b003fbffcab6b8mr10769677wmq.41.1689088863833; Tue, 11 Jul 2023 08:21:03 -0700 (PDT) Received: from ?IPV6:2003:cb:c745:4000:13ad:ed64:37e6:115d? (p200300cbc745400013aded6437e6115d.dip0.t-ipconnect.de. [2003:cb:c745:4000:13ad:ed64:37e6:115d]) by smtp.gmail.com with ESMTPSA id k24-20020a05600c0b5800b003fc01189b0dsm2774139wmr.42.2023.07.11.08.21.02 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Tue, 11 Jul 2023 08:21:03 -0700 (PDT) Message-ID: <1df12885-9ae4-6aef-1a31-91ecd5a18d24@redhat.com> Date: Tue, 11 Jul 2023 17:21:02 +0200 MIME-Version: 1.0 User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.12.0 To: "Aneesh Kumar K.V" , Vishal Verma , "Rafael J. Wysocki" , Len Brown , Andrew Morton , Oscar Salvador , Dan Williams , Dave Jiang Cc: linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, nvdimm@lists.linux.dev, linux-cxl@vger.kernel.org, Huang Ying , Dave Hansen References: <20230613-vv-kmem_memmap-v1-0-f6de9c6af2c6@intel.com> <20230613-vv-kmem_memmap-v1-3-f6de9c6af2c6@intel.com> <87edleplkn.fsf@linux.ibm.com> From: David Hildenbrand Organization: Red Hat Subject: Re: [PATCH 3/3] dax/kmem: Always enroll hotplugged memory for memmap_on_memory In-Reply-To: <87edleplkn.fsf@linux.ibm.com> X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Language: en-US Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: 5D6C31A0022 X-Stat-Signature: xhqb93bf97xkt3kjez7rrb5tg1i5b8ey X-Rspam-User: X-HE-Tag: 1689088866-212146 X-HE-Meta: U2FsdGVkX1+7IsmV686q1JkD2kn9xV+fvhbONXSotIMly+dKsWSTvNJ8lVo1fRI/C3gJUaklDn70Zd3FvlPcpWhdybNC4rrI7E/GEY79BUlGqN3cY5H3dqcx56fT9Y+nErrKys84cVAkuZw26uWm8s6n1Pg2AmVkXUkhoxEYgO7FaGujcHhezM79KxIDmJIXwiS7ChyfN5CmhJsAAvzgz7bW7aQbY6qwno5WTr1dEcp9/QX2nvn4aiqNfQ2bzL63nOdFSH7e46sibt0s97m1KOQ3EEpxfuB115zmcasvY7GeQUC+YrqWY1h9iYknACy4uRLq7A3nLn0UdY6eorQghYJLjoY+oiGLkrFUL891COVwCg3FLTERx/wTS6MGut7cxsEmSX/5to1zX4GHkG8rNmfgU+6ueZX8iGsIQZLyz7bVzXfCQ9OsaZXGDop6+PvBUC3TaEjPFYTUWyXen4F5ku0KLf5L/cZvnG5qYHSMdCkWvwzD+L5qjEz+9oKCrYv91NNMLo0lDhycNC6SF0EIiMFi3FXIK1/QQOiru2b6cYsiiPPgpPxc61fRN94nOU9qhUDjgRIjV2i/uc0Nd4jOHGMa8tdDN3kKIehaVBIa63Bof1ReIFnFvEwRqIkcDF4d8DbeXgHF3I7MvTosjgH9kBlBmkmDR0Tt4fjDOBjxtOUmShQt9f/aSJ9OW1arw4apuloFr4chlD/OdohDsj5ZgQAMmnlWgBWzp6z3Apior9GPZyukdiVskATus/6eZaBpYlQzFPdyEISegPHhCUFVXFp2H4N1atBBp6YNAygaw9lQ584R80jaMlnhfZS35qK0trYdYfsg5cVz+qnie1xDkD8JxbcIMGTePN/bEZ53UeTq35AFfya5XCB3Vyaa9fpSk8IaJqH+EMqRkCU+qx48GSTN6ug+nmHls2eMN+s0+5zTxqJYLPv9oHaGjlB9dE52UyfkvuUGmBkz1TZ6WXG VzczKD1c bUyt181Cbm4jVkUc0ZL4itfW9fyRHK+XNefEYqYA8pC4PSLjmoaMPrRQfBht6E5F7TBBytU6/9hsBJ/tmDTPsem+3hQ7M7Yq+Cc+v6EfNbKyiO08556VTaHcEIslRGtfVwaK2QFb5vshleJXjhCLFL7ISK8aPnvygf/GARbe6gBxAuYbQy+tQmJ6ezZR75xhQzhR6OH31WxzGw/+wD6orAgp/F7CEpcqopLJjYVfTbppixyGKNWQjRFmoaKhFkMKwKM+KQCWuLi3XDCZp3Vdh8doigj78+rtQHStcS/CbfW58EO97K5aeUS87Ozl880fsHZlSdg4vLSO1cigAAM1H1/5Ae0Jb3uPjElbAGWW6gqN/iwZXHLmNMBjkd2Y2bbR/zi+UG8oPB9wGCxxQipNfxxCUuWbonl+yDvsyKHkVA/k2H903gb/a3BHyXc5GI9ueVQXcTTOn9krAyr3VGUHCmGRvOiNVrdU08/rXUmq5R/RU6X2zWGqEocKxzmXRg3IYWIf6sXbK1pZm+cTYl7++9Cn9vvWD6PfdcFK2h+70ESG1LU/iYQgrcWcrZghoVak7ItjNyZsyUQPyL90cO6Rj5/4z9w== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On 11.07.23 16:30, Aneesh Kumar K.V wrote: > David Hildenbrand writes: > >> On 16.06.23 00:00, Vishal Verma wrote: >>> With DAX memory regions originating from CXL memory expanders or >>> NVDIMMs, the kmem driver may be hot-adding huge amounts of system memory >>> on a system without enough 'regular' main memory to support the memmap >>> for it. To avoid this, ensure that all kmem managed hotplugged memory is >>> added with the MHP_MEMMAP_ON_MEMORY flag to place the memmap on the >>> new memory region being hot added. >>> >>> To do this, call add_memory() in chunks of memory_block_size_bytes() as >>> that is a requirement for memmap_on_memory. Additionally, Use the >>> mhp_flag to force the memmap_on_memory checks regardless of the >>> respective module parameter setting. >>> >>> Cc: "Rafael J. Wysocki" >>> Cc: Len Brown >>> Cc: Andrew Morton >>> Cc: David Hildenbrand >>> Cc: Oscar Salvador >>> Cc: Dan Williams >>> Cc: Dave Jiang >>> Cc: Dave Hansen >>> Cc: Huang Ying >>> Signed-off-by: Vishal Verma >>> --- >>> drivers/dax/kmem.c | 49 ++++++++++++++++++++++++++++++++++++------------- >>> 1 file changed, 36 insertions(+), 13 deletions(-) >>> >>> diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c >>> index 7b36db6f1cbd..0751346193ef 100644 >>> --- a/drivers/dax/kmem.c >>> +++ b/drivers/dax/kmem.c >>> @@ -12,6 +12,7 @@ >>> #include >>> #include >>> #include >>> +#include >>> #include "dax-private.h" >>> #include "bus.h" >>> >>> @@ -105,6 +106,7 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) >>> data->mgid = rc; >>> >>> for (i = 0; i < dev_dax->nr_range; i++) { >>> + u64 cur_start, cur_len, remaining; >>> struct resource *res; >>> struct range range; >>> >>> @@ -137,21 +139,42 @@ static int dev_dax_kmem_probe(struct dev_dax *dev_dax) >>> res->flags = IORESOURCE_SYSTEM_RAM; >>> >>> /* >>> - * Ensure that future kexec'd kernels will not treat >>> - * this as RAM automatically. >>> + * Add memory in chunks of memory_block_size_bytes() so that >>> + * it is considered for MHP_MEMMAP_ON_MEMORY >>> + * @range has already been aligned to memory_block_size_bytes(), >>> + * so the following loop will always break it down cleanly. >>> */ >>> - rc = add_memory_driver_managed(data->mgid, range.start, >>> - range_len(&range), kmem_name, MHP_NID_IS_MGID); >>> + cur_start = range.start; >>> + cur_len = memory_block_size_bytes(); >>> + remaining = range_len(&range); >>> + while (remaining) { >>> + mhp_t mhp_flags = MHP_NID_IS_MGID; >>> >>> - if (rc) { >>> - dev_warn(dev, "mapping%d: %#llx-%#llx memory add failed\n", >>> - i, range.start, range.end); >>> - remove_resource(res); >>> - kfree(res); >>> - data->res[i] = NULL; >>> - if (mapped) >>> - continue; >>> - goto err_request_mem; >>> + if (mhp_supports_memmap_on_memory(cur_len, >>> + MHP_MEMMAP_ON_MEMORY)) >>> + mhp_flags |= MHP_MEMMAP_ON_MEMORY; >>> + /* >>> + * Ensure that future kexec'd kernels will not treat >>> + * this as RAM automatically. >>> + */ >>> + rc = add_memory_driver_managed(data->mgid, cur_start, >>> + cur_len, kmem_name, >>> + mhp_flags); >>> + >>> + if (rc) { >>> + dev_warn(dev, >>> + "mapping%d: %#llx-%#llx memory add failed\n", >>> + i, cur_start, cur_start + cur_len - 1); >>> + remove_resource(res); >>> + kfree(res); >>> + data->res[i] = NULL; >>> + if (mapped) >>> + continue; >>> + goto err_request_mem; >>> + } >>> + >>> + cur_start += cur_len; >>> + remaining -= cur_len; >>> } >>> mapped++; >>> } >>> >> >> Maybe the better alternative is teach >> add_memory_resource()/try_remove_memory() to do that internally. >> >> In the add_memory_resource() case, it might be a loop around that >> memmap_on_memory + arch_add_memory code path (well, and the error path >> also needs adjustment): >> >> /* >> * Self hosted memmap array >> */ >> if (mhp_flags & MHP_MEMMAP_ON_MEMORY) { >> if (!mhp_supports_memmap_on_memory(size)) { >> ret = -EINVAL; >> goto error; >> } >> mhp_altmap.free = PHYS_PFN(size); >> mhp_altmap.base_pfn = PHYS_PFN(start); >> params.altmap = &mhp_altmap; >> } >> >> /* call arch's memory hotadd */ >> ret = arch_add_memory(nid, start, size, ¶ms); >> if (ret < 0) >> goto error; >> >> >> Note that we want to handle that on a per memory-block basis, because we >> don't want the vmemmap of memory block #2 to end up on memory block #1. >> It all gets messy with memory onlining/offlining etc otherwise ... >> > > I tried to implement this inside add_memory_driver_managed() and also > within dax/kmem. IMHO doing the error handling inside dax/kmem is > better. Here is how it looks: > > 1. If any blocks got added before (mapped > 0) we loop through all successful request_mem_regions > 2. For each succesful request_mem_regions if any blocks got added, we > keep the resource. If none got added, we will kfree the resource > Doing this unconditional splitting outside of add_memory_driver_managed() is undesirable for at least two reasons 1) You end up always creating individual entries in the resource tree (/proc/iomem) even if MHP_MEMMAP_ON_MEMORY is not effective. 2) As we call arch_add_memory() in memory block granularity (e.g., 128 MiB on x86), we might not make use of large PUDs (e.g., 1 GiB) in the identify mapping -- even if MHP_MEMMAP_ON_MEMORY is not effective. While you could sense for support and do the split based on that, it will be beneficial for other users (especially DIMMs) if we do that internally -- where we already know if MHP_MEMMAP_ON_MEMORY can be effective or not. In general, we avoid placing important kernel data-structures on slow memory. That's one of the reasons why PMEM decided to mostly always use ZONE_MOVABLE such that exactly what this patch does would not happen. So I'm wondering if there would be demand for an additional toggle. Because even with memmap_on_memory enabled in general, you might not want to do that for dax/kmem. IMHO, this patch should be dropped from your ppc64 series, as it's an independent change that might be valuable for other architectures as well. -- Cheers, David / dhildenb