Date: Fri, 7 Oct 2022 22:36:56 +0900
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Matthew Wilcox
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Naoya Horiguchi, Miaohe Lin, Minchan Kim, Mel Gorman, Andrea Arcangeli, Dan Williams, Hugh Dickins, Muchun Song, David Hildenbrand, Andrey Konovalov, Marco Elver
Subject: Re: [PATCH] mm: move PG_slab flag to page_type
References: <20220919125708.276864-1-42.hyeyoo@gmail.com>
On Sun, Sep 25, 2022 at 12:04:40AM +0100, Matthew Wilcox wrote:
> On Mon, Sep 19, 2022 at 09:57:08PM +0900, Hyeonggon Yoo wrote:
> > For now, only SLAB uses the _mapcount field as the number of active objects in
> > a slab, and
> > other slab allocators do not use it. As 16 bits are enough
> > for that, use the remaining 16 bits of _mapcount as page_type even when
> > SLAB is used. And then move the PG_slab flag to page_type!
> >
> > Note that page_type is always placed in the upper 16 bits of _mapcount to
> > avoid confusing a normal _mapcount with a page_type. As underflow (actually
> > I mean, yeah, overflow) is not a concern anymore, use more of the lower bits,
> > except bit zero.
> >
> > Add more folio helpers for PAGE_TYPE_OPS() so as not to break existing
> > slab implementations.
> >
> > Remove the PG_slab check from PAGE_FLAGS_CHECK_AT_FREE. Buddy will still
> > check that _mapcount is properly set at free.
> >
> > Exclude PG_slab from hwpoison and show_page_flags() for now.
> >
> > Note that with this patch, page_mapped() and folio_mapped() always return
> > false for slab pages.
>
> This is an interesting approach.  It raises some questions.

Hello Matthew, sorry for the late reply; I didn't mean to ignore your
feedback. I realized compound pages and folios are a weak spot of mine
and I needed some time to learn :)

> First, you say that folio_mapped() returns false for slab pages.  That's
> only true for order-0 slab pages.  For larger pages,
>
> 	if (!folio_test_large(folio))
> 		return atomic_read(&folio->_mapcount) >= 0;
> 	if (atomic_read(folio_mapcount_ptr(folio)) >= 0)
> 		return true;
>
> so that's going to depend what folio_mapcount_ptr() aliases with.

IIUC it's true for order > 0 slabs too. As slab pages are not mapped to
userspace at all, neither the compound page as a whole nor its base
pages are mapped to userspace.

AFAIK the following hold for an order > 0 slab:
- (first tail page)->compound_mapcount is -1
- _mapcount of the base pages is -1

So folio_mapped() and page_mapped() (when applied to the head page)
return false for larger pages with this patch.

I wrote a simple test case and confirmed that folio_mapped() and
page_mapped() return false for both order-0 pages and larger pages.
(With SLAB, they returned true before this patch.)

> Second, this patch changes the behaviour of PageSlab() when applied to
> tail pages.

Although it changes the way the flag is checked, it does not change the
behavior for tail pages - PageSlab() on a tail page returns false with
or without this patch.

If PageSlab() needs to return true for tail pages too, we could make it
check the page_type of the head page, but I'm not sure when that
behavior would be needed. Can you please share your insight on this?

> Which raises the further question of what PageBuddy(),
> PageTable(), PageGuard() and PageIsolated() should do for multi-page
> folios, if that is even possible.

For users that use real compound pages, like slab, we can make these
check the page_type of the head page (if needed). But for the cases
David described, there isn't much we can do except make them use real
compound pages.

> Third, can we do this without that awkward __u16 thing?  Perhaps
>
> -#define PG_buddy	0x00000080
> -#define PG_offline	0x00000100
> -#define PG_table	0x00000200
> -#define PG_guard	0x00000400
> +#define PG_buddy	0x00010000
> +#define PG_offline	0x00020000
> +#define PG_table	0x00040000
> +#define PG_guard	0x00080000
> +#define PG_slab	0x00100000
>
> ... and then use wrappers in slab.c to access the bottom 16 bits?

Definitely! I prefer that way and will adjust in RFC v2.

Thank you for the valuable feedback.

-- 
Hyeonggon