Date: Tue, 17 May 2022 16:32:40 +0800
From: Muchun Song <songmuchun@bytedance.com>
To: Oscar Salvador
Cc: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org,
	mcgrof@kernel.org, keescook@chromium.org, yzaikin@google.com,
	david@redhat.com, masahiroy@kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com
Subject: Re: [PATCH v12 3/7] mm: memory_hotplug: enumerate all supported section flags
References: <20220516102211.41557-1-songmuchun@bytedance.com>
 <20220516102211.41557-4-songmuchun@bytedance.com>

On Tue, May 17, 2022 at 09:47:48AM +0200, Oscar Salvador wrote:
> On Mon, May 16, 2022 at 06:22:07PM +0800, Muchun Song wrote:
> > We are almost running out of free slots, only one bit is available in the
> 
> I would be more precise about what are we running out of. Free slots of
> what?
> 
> > worst case (powerpc with 256k pages). However, there are still some free
> > slots on other architectures (e.g. x86_64 has 10 bits available, arm64
> > has 8 bits available with worst case of 64K pages). We have hard coded
> > those numbers in code, it is inconvenient to use those bits on other
> > architectures except powerpc. So transfer those section flags to
> > enumeration to make it easy to add new section flags in the future. Also,
> > move SECTION_TAINT_ZONE_DEVICE into the scope of CONFIG_ZONE_DEVICE
> > to save a bit on non-zone-device case.
> > 
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> ...
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -1418,16 +1418,37 @@ extern size_t mem_section_usage_size(void);
> >   *	(equal SECTION_SIZE_BITS - PAGE_SHIFT), and the
> >   *	worst combination is powerpc with 256k pages,
> >   *	which results in PFN_SECTION_SHIFT equal 6.
> > - * To sum it up, at least 6 bits are available.
> > + * To sum it up, at least 6 bits are available on all architectures.
> > + * However, we can exceed 6 bits on some other architectures except
> > + * powerpc (e.g. 15 bits are available on x86_64, 13 bits are available
> > + * with the worst case of 64K pages on arm64) if we make sure the
> > + * exceeded bit is not applicable to powerpc.
> >   */
> > -#define SECTION_MARKED_PRESENT		(1UL<<0)
> > -#define SECTION_HAS_MEM_MAP		(1UL<<1)
> > -#define SECTION_IS_ONLINE		(1UL<<2)
> > -#define SECTION_IS_EARLY		(1UL<<3)
> > -#define SECTION_TAINT_ZONE_DEVICE	(1UL<<4)
> > -#define SECTION_MAP_LAST_BIT		(1UL<<5)
> > +#define ENUM_SECTION_FLAG(MAPPER)			\
> > +	MAPPER(MARKED_PRESENT)				\
> > +	MAPPER(HAS_MEM_MAP)				\
> > +	MAPPER(IS_ONLINE)				\
> > +	MAPPER(IS_EARLY)				\
> > +	MAPPER(TAINT_ZONE_DEVICE, CONFIG_ZONE_DEVICE)	\
> > +	MAPPER(MAP_LAST_BIT)
> > +
> > +#define __SECTION_SHIFT_FLAG_MAPPER_0(x)
> > +#define __SECTION_SHIFT_FLAG_MAPPER_1(x)	SECTION_##x##_SHIFT,
> > +#define __SECTION_SHIFT_FLAG_MAPPER(x, ...)	\
> > +	__PASTE(__SECTION_SHIFT_FLAG_MAPPER_, IS_ENABLED(__VA_ARGS__))(x)
> > +
> > +#define __SECTION_FLAG_MAPPER_0(x)
> > +#define __SECTION_FLAG_MAPPER_1(x)	SECTION_##x = BIT(SECTION_##x##_SHIFT),
> > +#define __SECTION_FLAG_MAPPER(x, ...)	\
> > +	__PASTE(__SECTION_FLAG_MAPPER_, IS_ENABLED(__VA_ARGS__))(x)
> > +
> > +enum {
> > +	ENUM_SECTION_FLAG(__SECTION_SHIFT_FLAG_MAPPER)
> > +	ENUM_SECTION_FLAG(__SECTION_FLAG_MAPPER)
> > +};
> > +
> >  #define SECTION_MAP_MASK	(~(SECTION_MAP_LAST_BIT-1))
> > -#define SECTION_NID_SHIFT	6
> > +#define SECTION_NID_SHIFT	SECTION_MAP_LAST_BIT_SHIFT
> 
> Is this really worth the extra code? And it might be me that I am not
> familiar with all this magic, but it looks overcomplicated.
> Maybe some comments here and there help clarifying what it is going on
> here.
> 

Yeah, it's a little complicated. All the magic aims to generate two
enumerations from one MAPPER(xxx, config): one is SECTION_xxx_SHIFT, and
the other is SECTION_xxx = BIT(SECTION_xxx_SHIFT), emitted only if the
'config' is enabled. If we want to add a new flag, as a follow-up patch
does, a single line is enough:

MAPPER(CANNOT_OPTIMIZE_VMEMMAP, CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)

Without the magic, we have to add 4 lines like the following to do the
same thing:

#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
	SECTION_CANNOT_OPTIMIZE_VMEMMAP_SHIFT,
#define SECTION_CANNOT_OPTIMIZE_VMEMMAP	BIT(SECTION_CANNOT_OPTIMIZE_VMEMMAP_SHIFT)
#endif

I admit this is clearer, but it is not as concise as the approach above.
Both approaches are fine to me. If we choose the macro approach, I agree
with you that I should add more comments to explain what is going on
here.

Thanks.
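
For anyone trying to follow the macro magic above, here is a minimal
standalone sketch of the pattern under discussion. It is not the kernel
code: all names (FLAG_LIST, PASTE, the FLAG_* flags) are illustrative,
and a plain 0/1 toggle (HAS_ZONE_DEVICE) stands in for the kernel's
IS_ENABLED(CONFIG_...) test. One flag list is expanded twice, once to
emit the *_SHIFT enumerators and once to emit the bit values, and a
flag whose toggle is 0 simply disappears from the enum.

/*
 * Standalone C99 sketch, not the kernel implementation.
 * HAS_ZONE_DEVICE plays the role of IS_ENABLED(CONFIG_ZONE_DEVICE).
 */
#include <stdio.h>

#define HAS_ZONE_DEVICE 1	/* pretend CONFIG_ZONE_DEVICE=y */

/* Single source list: each entry names a flag and a 0/1 "config" toggle. */
#define FLAG_LIST(MAPPER)				\
	MAPPER(MARKED_PRESENT, 1)			\
	MAPPER(HAS_MEM_MAP, 1)				\
	MAPPER(TAINT_ZONE_DEVICE, HAS_ZONE_DEVICE)	\
	MAPPER(MAP_LAST_BIT, 1)

/* Indirect paste so the toggle macro expands to 0 or 1 before ##. */
#define PASTE_(a, b)	a##b
#define PASTE(a, b)	PASTE_(a, b)

/* Pass 1: emit FLAG_<name>_SHIFT only when the toggle is 1. */
#define SHIFT_MAPPER_0(x)
#define SHIFT_MAPPER_1(x)	FLAG_##x##_SHIFT,
#define SHIFT_MAPPER(x, cfg)	PASTE(SHIFT_MAPPER_, cfg)(x)

/* Pass 2: emit FLAG_<name> = 1 << FLAG_<name>_SHIFT, again only when 1. */
#define VALUE_MAPPER_0(x)
#define VALUE_MAPPER_1(x)	FLAG_##x = 1 << FLAG_##x##_SHIFT,
#define VALUE_MAPPER(x, cfg)	PASTE(VALUE_MAPPER_, cfg)(x)

enum {
	FLAG_LIST(SHIFT_MAPPER)		/* shifts: 0, 1, 2, ... */
	FLAG_LIST(VALUE_MAPPER)		/* values: 1, 2, 4, ... */
};

int main(void)
{
	printf("FLAG_TAINT_ZONE_DEVICE = %d (shift %d)\n",
	       FLAG_TAINT_ZONE_DEVICE, FLAG_TAINT_ZONE_DEVICE_SHIFT);
	printf("FLAG_MAP_LAST_BIT      = %d (shift %d)\n",
	       FLAG_MAP_LAST_BIT, FLAG_MAP_LAST_BIT_SHIFT);
	return 0;
}

Flipping HAS_ZONE_DEVICE to 0 removes both FLAG_TAINT_ZONE_DEVICE_SHIFT
and FLAG_TAINT_ZONE_DEVICE from the enum (so the two printf lines that
mention them would have to go as well), and FLAG_MAP_LAST_BIT_SHIFT
drops from 3 to 2. That is the bit-saving behaviour the commit message
describes for the non-zone-device case.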