From: "Huang, Ying" <ying.huang@intel.com>
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Gao Xiang, Yu Zhao,
 Yang Shi, Michal Hocko, Kefeng Wang, Barry Song <21cnbao@gmail.com>,
 Chris Li, linux-mm@kvack.org
Subject: Re: [PATCH v4 4/6] mm: swap: Allow storage of all mTHP orders
In-Reply-To: (Ryan Roberts's message of "Wed, 20 Mar 2024 12:22:18 +0000")
References: <20240311150058.1122862-1-ryan.roberts@arm.com>
 <20240311150058.1122862-5-ryan.roberts@arm.com>
 <87jzm751n3.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 21 Mar 2024 12:39:36 +0800
Message-ID: <8734skryev.fsf@yhuang6-desk2.ccr.corp.intel.com>

Ryan Roberts writes:

> Hi Huang, Ying,
>
> On 12/03/2024 07:51, Huang, Ying wrote:
>> Ryan Roberts writes:
>>
>>> Multi-size THP enables performance improvements by allocating large,
>>> pte-mapped folios for anonymous memory. However, I've observed that
>>> on an arm64 system running a parallel workload (e.g. kernel
>>> compilation) across many cores, under high memory pressure, the speed
>>> regresses. This is due to bottlenecking on the increased number of
>>> TLBIs caused by all the extra folio splitting when the large folios
>>> are swapped out.
>>>
>>> Therefore, solve this regression by adding support for swapping out
>>> mTHP without needing to split the folio, just as is already done for
>>> PMD-sized THP. This change only applies when CONFIG_THP_SWAP is
>>> enabled, and when the swap backing store is a non-rotating block
>>> device. These are the same constraints as for the existing PMD-sized
>>> THP swap-out support.
>>>
>>> Note that no attempt is made to swap in (m)THP here - this is still
>>> done page-by-page, as for PMD-sized THP. But swapping out mTHP is a
>>> prerequisite for swapping in mTHP.
>>>
>>> The main change here is to improve the swap entry allocator so that
>>> it can allocate any power-of-2 number of contiguous entries between
>>> [1, (1 << PMD_ORDER)]. This is done by allocating a cluster for each
>>> distinct order and allocating sequentially from it until the cluster
>>> is full. This ensures that we don't need to search the map and we get
>>> no fragmentation due to alignment padding for different orders in the
>>> cluster. If there is no current cluster for a given order, we attempt
>>> to allocate a free cluster from the list. If there are no free
>>> clusters, we fail the allocation and the caller can fall back to
>>> splitting the folio and allocating individual entries (as per the
>>> existing PMD-sized THP fallback).
>>>
>>> The per-order current clusters are maintained per-cpu using the
>>> existing infrastructure. This is done to avoid interleaving pages
>>> from different tasks, which would prevent IO from being batched. This
>>> is already done for order-0 allocations, so we follow the same
>>> pattern.
>>>
>>> As is done for order-0 per-cpu clusters, the scanner can now steal
>>> order-0 entries from any per-cpu per-order reserved cluster. This
>>> ensures that when the swap file is getting full, space doesn't get
>>> tied up in the per-cpu reserves.
>>>
>>> This change only modifies swap to be able to accept any order of
>>> mTHP. It doesn't change the callers to elide doing the actual split.
>>> That will be done in separate changes.
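(Recap for other readers following along: the scheme described above
boils down to roughly the sketch below. All names are invented, the
free-cluster helper is hypothetical, and locking and the per-cpu
handling are omitted -- this illustrates the idea, it is not the
patch's code.)

/*
 * Minimal sketch of a per-order cluster allocator. Cluster size and
 * PMD_ORDER == 9 are assumptions for illustration.
 */
#define CLUSTER_SLOTS	512		/* swap slots per cluster */
#define NR_ORDERS	10		/* orders 0..9 */

struct sketch_cluster {
	unsigned long base;		/* first slot, CLUSTER_SLOTS-aligned */
	unsigned int used;		/* slots handed out so far */
};

static struct sketch_cluster *cur[NR_ORDERS];	/* current cluster per order */

static struct sketch_cluster *take_free_cluster(void);	/* hypothetical helper */

/* Return the first of 1 << order contiguous slots, or -1 on failure
 * so that the caller can split the folio and retry at order 0. */
static long sketch_alloc(unsigned int order)
{
	unsigned int nr = 1U << order;
	struct sketch_cluster *c = cur[order];
	unsigned long offset;

	if (!c || c->used + nr > CLUSTER_SLOTS) {
		c = take_free_cluster();	/* grab an empty cluster */
		if (!c)
			return -1;	/* no free clusters: caller falls back */
		cur[order] = c;
	}

	/*
	 * Every allocation in this cluster has the same size nr, and
	 * base is cluster-aligned, so each offset is naturally aligned
	 * to nr: no alignment padding, and no scanning of the map.
	 */
	offset = c->base + c->used;
	c->used += nr;
	return offset;
}

(The key property is that mixed orders never share a cluster, which is
what makes the no-padding, no-search behaviour hold.)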
>
> [...]
>
>>> @@ -905,17 +961,18 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>>  	}
>>>
>>>  	if (si->swap_map[offset]) {
>>> +		VM_WARN_ON(order > 0);
>>>  		unlock_cluster(ci);
>>>  		if (!n_ret)
>>>  			goto scan;
>>>  		else
>>>  			goto done;
>>>  	}
>>> -	WRITE_ONCE(si->swap_map[offset], usage);
>>> -	inc_cluster_info_page(si, si->cluster_info, offset);
>>> +	memset(si->swap_map + offset, usage, nr_pages);
>>> +	add_cluster_info_page(si, si->cluster_info, offset, nr_pages);
>>>  	unlock_cluster(ci);
>>
>> Add a barrier() here corresponding to the original WRITE_ONCE()?
>> unlock_cluster(ci) may be a NOP for some swap devices.
>
> Looking at this a bit more closely, I'm not sure this is needed. Even
> if there is no cluster, the swap_info is still locked, so unlocking
> that will act as a barrier. There are a number of other call sites
> that memset(si->swap_map) without an explicit barrier and with the
> swap_info locked.
>
> Looking at the original commit that added the WRITE_ONCE(), it was
> worried about a race with reading swap_map in _swap_info_get(). But
> that site is now annotated with a data_race(), which will suppress the
> warning. And I don't believe there are any places that read swap_map
> locklessly and depend upon observing ordering between it and other
> state? So I think the si unlock is sufficient?
>
> I'm not planning to add barrier() here. Let me know if you disagree.

swap_map[] may be read locklessly in swap_offset_available_and_locked()
in parallel. IIUC, the WRITE_ONCE() here is to make the write take
effect there as early as possible.
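In other words, something like the sketch below (a simplified userspace
sketch, not the real mm/swapfile.c code; WRITE_ONCE()/READ_ONCE() are
modelled with volatile casts, in the same way the kernel defines them):

#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)		(*(volatile __typeof__(x) *)&(x))

static unsigned char swap_map[1024];	/* usage counts; 0 == slot free */

/* Allocation side, called with the relevant lock held. */
static void mark_in_use(unsigned long offset, unsigned char usage)
{
	/*
	 * A volatile store must be emitted here and now; a plain
	 * assignment or a memset(swap_map + offset, usage, nr) may be
	 * deferred by the compiler until the unlock publishes it.
	 */
	WRITE_ONCE(swap_map[offset], usage);
}

/*
 * Scanning side, peeking with no lock held, cf.
 * swap_offset_available_and_locked(). A stale "free" answer is
 * harmless -- it is re-checked under the lock -- but the earlier the
 * store above becomes visible, the less often the scanner takes the
 * lock for a slot that is already taken.
 */
static int looks_free(unsigned long offset)
{
	return READ_ONCE(swap_map[offset]) == 0;
}

--
Best Regards,
Huang, Ying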