Date: Mon, 19 Sep 2022 22:16:17 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Naoya Horiguchi,
	Miaohe Lin, "Matthew Wilcox (Oracle)", Minchan Kim, Mel Gorman,
	Andrea Arcangeli, Dan Williams, Hugh Dickins, Muchun Song,
	David Hildenbrand, Andrey Konovalov, Marco Elver
Subject: Re: [PATCH] mm: move PG_slab flag to page_type
In-Reply-To: <20220919125708.276864-1-42.hyeyoo@gmail.com>
References: <20220919125708.276864-1-42.hyeyoo@gmail.com>

On Mon, Sep 19, 2022 at 09:57:08PM +0900, Hyeonggon Yoo wrote:
> For now, only SLAB uses _mapcount field as a number of active objects in
> a slab, and other slab allocators do not
> use it. As 16 bits are enough for that, use remaining 16 bits of
> _mapcount as page_type even when SLAB is used. And then move PG_slab
> flag to page_type!
>
> Note that page_type is always placed in upper 16 bits of _mapcount to
> avoid confusing normal _mapcount as page_type. As underflow (actually
> I mean, yeah, overflow) is not a concern anymore, use more lower bits
> except bit zero.
>
> Add more folio helpers for PAGE_TYPE_OPS() not to break existing
> slab implementations.
>
> Remove PG_slab check from PAGE_FLAGS_CHECK_AT_FREE. buddy will still
> check if _mapcount is properly set at free.
>
> Exclude PG_slab from hwpoison and show_page_flags() for now.
>
> Note that with this patch, page_mapped() and folio_mapped() always return
> false for slab pages.
>
[...]

Hi, a silly mistake:

>  include/linux/mm_types.h       | 22 +++++++--
>  include/linux/page-flags.h     | 83 ++++++++++++++++++++++++++--------
>  include/trace/events/mmflags.h |  1 -
>  mm/memory-failure.c            |  8 ----
>  mm/slab.h                      | 11 ++++-
>  5 files changed, 92 insertions(+), 33 deletions(-)
>
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index cf97f3884fda..4b217c6fbe1f 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -193,12 +193,24 @@ struct page {
>  		atomic_t _mapcount;
>
>  		/*
> -		 * If the page is neither PageSlab nor mappable to userspace,
> -		 * the value stored here may help determine what this page
> -		 * is used for. See page-flags.h for a list of page types
> -		 * which are currently stored here.
> +		 * If the page is not mappable to userspace, the value
> +		 * stored here may help determine what this page is used for.
> +		 * See page-flags.h for a list of page types which are currently
> +		 * stored here.
>  		 */
> -		unsigned int page_type;
> +		struct {
> +			/*
> +			 * Always place page_type in
> +			 * upper 16 bits of _mapcount
> +			 */
> +#ifdef CPU_BIG_ENDIAN

s/CPU_BIG_ENDIAN/CONFIG_CPU_BIG_ENDIAN/g

> +			__u16 page_type;
> +			__u16 active;
> +#else
> +			__u16 active;
> +			__u16 page_type;
> +#endif
> +		};
>  	};
>
>  	/* Usage count. *DO NOT USE DIRECTLY*. See page_ref.h */
[...]
> diff --git a/mm/slab.h b/mm/slab.h
> index 985820b9069b..a5273e189265 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -20,7 +20,16 @@ struct slab {
>  		};
>  		struct rcu_head rcu_head;
>  	};
> -	unsigned int active;
> +	struct {
> +		/* always place page_type in upper 16 bits of _mapcount */
> +#ifdef CPU_BIG_ENDIAN

same here.

> +		__u16 page_type;
> +		__u16 active;
> +#else
> +		__u16 active;
> +		__u16 page_type;
> +#endif
> +	};
>
>  #elif defined(CONFIG_SLUB)
>
> --
> 2.32.0
>

--
Thanks,
Hyeonggon