Date: Wed, 25 May 2022 17:54:39 +0100
From: Catalin Marinas
To: Barry Song <21cnbao@gmail.com>
Cc: Andrew Morton, Will Deacon, Linux-MM, LAK, LKML, hanchuanhua,
 张诗明 (Simon Zhang), 郭健, Barry Song, "Huang, Ying", Minchan Kim,
 Johannes Weiner, Hugh Dickins, Shaohua Li
 , Rik van Riel, Andrea Arcangeli, Steven Price
Subject: Re: [PATCH] arm64: enable THP_SWAP for arm64
References: <20220524071403.128644-1-21cnbao@gmail.com>

On Wed, May 25, 2022 at 11:10:41PM +1200, Barry Song wrote:
> On Wed, May 25, 2022 at 7:14 AM Catalin Marinas wrote:
> > I think this should work and with your other proposal it would be
> > limited to MTE pages:
> >
> > #define arch_thp_swp_supported(page) (!test_bit(PG_mte_tagged, &page->flags))
> >
> > Are THP pages loaded from swap as a whole or are they split? IIRC the
>
> I can confirm THP is written out as a whole through:
>
> [   90.622863] __swap_writepage+0xe8/0x580
> [   90.622881] swap_writepage+0x44/0xf8
> [   90.622891] pageout+0xe0/0x2a8
> [   90.622906] shrink_page_list+0x9dc/0xde0
> [   90.622917] shrink_inactive_list+0x1ec/0x3c8
> [   90.622928] shrink_lruvec+0x3dc/0x628
> [   90.622939] shrink_node+0x37c/0x6a0
> [   90.622950] balance_pgdat+0x354/0x668
> [   90.622961] kswapd+0x1e0/0x3c0
> [   90.622972] kthread+0x110/0x120
>
> but I have never got a backtrace in which a THP is loaded as a whole,
> though it seems the code has this path:
>
> int swap_readpage(struct page *page, bool synchronous)
> {
> ...
> 	bio = bio_alloc(sis->bdev, 1, REQ_OP_READ, GFP_KERNEL);
> 	bio->bi_iter.bi_sector = swap_page_sector(page);
> 	bio->bi_end_io = end_swap_bio_read;
> 	bio_add_page(bio, page, thp_size(page), 0);
> ...
> 	submit_bio(bio);
> }
>
> > splitting still happens but after the swapping out finishes. Even if
> > they are loaded as 4K pages, we still have the mte_save_tags() that only
> > understands small pages currently, so rejecting THP pages is probably
> > best.
>
> As I don't have MTE hardware to do a valid test and go any further, I
> will disable THP_SWAP entirely on hardware with MTE for the moment in
> patch v2.

It makes sense. If we decide to improve this for MTE, we'll change the
arch check. Thanks.

--
Catalin