Date: Thu, 16 Jun 2022 18:16:36 +0800
From: Muchun Song
To: David Hildenbrand
Cc: corbet@lwn.net, akpm@linux-foundation.org, paulmck@kernel.org,
 mike.kravetz@oracle.com, osalvador@suse.de, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com,
 smuchun@gmail.com
Subject: Re: [PATCH v2 2/2] mm: memory_hotplug: introduce SECTION_CANNOT_OPTIMIZE_VMEMMAP
References: <20220520025538.21144-1-songmuchun@bytedance.com>
 <20220520025538.21144-3-songmuchun@bytedance.com>
 <53024884-0182-df5f-9ca2-00652c64ce36@redhat.com>

On Thu, Jun 16, 2022 at 09:21:35AM +0200, David Hildenbrand wrote:
> On 16.06.22 04:45, Muchun Song wrote:
> > On Wed, Jun 15, 2022 at 11:51:49AM +0200, David Hildenbrand wrote:
> >> On 20.05.22 04:55, Muchun Song wrote:
> >>> For now, the hugetlb_free_vmemmap feature is not compatible with the
> >>> memory_hotplug.memmap_on_memory feature, and hugetlb_free_vmemmap
> >>> takes precedence over memory_hotplug.memmap_on_memory. However, some
> >>> users want memory_hotplug.memmap_on_memory to take precedence over
> >>> hugetlb_free_vmemmap, since memmap_on_memory makes memory hotplug
> >>> more likely to succeed in close-to-OOM situations. So hard-coding
> >>> hugetlb_free_vmemmap to take precedence is neither wise nor elegant.
> >>> The proper approach is to have hugetlb_vmemmap.c check whether the
> >>> section to which the HugeTLB pages belong can be optimized: if the
> >>> section's vmemmap pages were allocated from the added memory block
> >>> itself, hugetlb_free_vmemmap should refuse to optimize the vmemmap;
> >>> otherwise, it should do the optimization. Then both kernel parameters
> >>> are compatible. This patch therefore introduces
> >>> SECTION_CANNOT_OPTIMIZE_VMEMMAP to indicate whether a section can be
> >>> optimized.
> >>>
> >>
> >> In theory, we have that information stored in the relevant memory block,
> >> but I assume that lookup in the xarray + locking is impractical.
> >>
> >> I wonder if we can derive that information simply from the vmemmap pages
> >> themselves, because *drumroll*
> >>
> >> For one vmemmap page (the first one), the vmemmap corresponds to itself
> >> -- what?!
> >>
> >> [ hotplugged memory               ]
> >> [ memmap ][ usable memory        ]
> >>  |    |    |
> >>  ^--- |    |
> >>   ^------- |
> >>    ^----------------------
> >>
> >> The memmap of the first page of hotplugged memory falls onto itself.
> >> We'd have to derive that condition from the actual "usable memory".
> >>
> >> We currently support memmap_on_memory only within fixed-size memory
> >> blocks. So "hotplugged memory" is guaranteed to be aligned to
> >> memory_block_size_bytes(), and its size is memory_block_size_bytes().
> >>
> >> If we had a page falling into usable memory, we'd simply look up the
> >> first page of the block and test whether its vmemmap maps to itself.
> >>
> >
> > I think this can work. Should we use this approach in the next version?
> >
>
> Either that or, more preferably, flagging the vmemmap pages eventually.
> That might be future-proof.
>

All right. I think we can go with the above approach; we can switch to the
flagging-based approach in the future if needed. (Rough sketches of the
approaches discussed are appended at the end of this mail.)

> >> Of course, once we support variable-sized memory blocks, it would be
> >> different.
> >>
> >> An easier/future-proof approach might simply be flagging the vmemmap
> >> pages as being special. We'd reuse a page flag that doesn't have
> >> semantics for vmemmap pages yet (e.g., PG_reserved indicates a
> >> boot-time allocation via memblock).
> >>
> >
> > I think you mean flagging the vmemmap pages' struct pages as PG_reserved
> > if they can be optimized, right? When the vmemmap pages are allocated in
> > hugetlb_vmemmap_alloc(), is it valid to flag them as PG_reserved (they
> > are allocated from the buddy allocator, not memblock)?
> >
>
> Sorry, I wasn't clear. I'd flag them with some other
> not-yet-used-for-vmemmap-pages flag. Reusing PG_reserved could result in
> trouble.
>

Sorry, I thought you were suggesting reusing PG_reserved. My misreading.

Thanks.
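
For context, a minimal sketch of what the per-section flag named in the
patch subject could look like. The accessor below is illustrative, not
taken from the patch; only the fact that section flags are encoded in the
low bits of mem_section::section_mem_map comes from the existing sparsemem
code:

static inline bool section_cannot_optimize_vmemmap(unsigned long pfn)
{
	struct mem_section *ms = __pfn_to_section(pfn);

	/* Section flags live in the low bits of section_mem_map. */
	return ms->section_mem_map & SECTION_CANNOT_OPTIMIZE_VMEMMAP;
}

hugetlb_vmemmap would call something like this on the HugeTLB page's pfn
and skip the optimization when it returns true.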
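
Next, a sketch of the "vmemmap maps to itself" check discussed above,
assuming fixed-size, aligned memory blocks as David describes. Note that
vmemmap_lookup_pfn() is a hypothetical helper, not an existing kernel API:
it would have to walk the kernel page table to find the pfn of the
physical page backing a vmemmap address:

static bool vmemmap_self_hosted(unsigned long pfn)
{
	unsigned long nr_pages = memory_block_size_bytes() / PAGE_SIZE;
	unsigned long start_pfn = ALIGN_DOWN(pfn, nr_pages);
	struct page *first_page = pfn_to_page(start_pfn);
	unsigned long backing_pfn;

	/*
	 * Find the physical page backing the vmemmap entry of the
	 * block's first pfn (hypothetical page-table walk).
	 */
	backing_pfn = vmemmap_lookup_pfn((unsigned long)first_page);

	/*
	 * If that page sits inside the memory block itself, the memmap
	 * was allocated from the hotplugged range (memmap_on_memory),
	 * so the vmemmap must not be optimized away.
	 */
	return backing_pfn >= start_pfn && backing_pfn < start_pfn + nr_pages;
}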
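
And a sketch of the flagging idea, under the assumption that a page-flag
bit with no existing semantics for vmemmap pages can be repurposed; the
alias of PG_owner_priv_1 and both helper names below are made up purely
for illustration:

#define PG_vmemmap_self_hosted	PG_owner_priv_1	/* placeholder bit */

/* memmap_on_memory would mark the memmap pages it places in the block. */
static void mark_vmemmap_self_hosted(struct page *memmap, unsigned long nr)
{
	unsigned long i;

	for (i = 0; i < nr; i++)
		__set_bit(PG_vmemmap_self_hosted, &memmap[i].flags);
}

/* hugetlb_vmemmap would refuse the optimization when this returns true. */
static bool vmemmap_page_self_hosted(struct page *page)
{
	return test_bit(PG_vmemmap_self_hosted, &page->flags);
}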