From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
To: will@kernel.org, akpm@linux-foundation.org, anshuman.khandual@arm.com, catalin.marinas@arm.com, linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, steven.price@arm.com
Cc: aarcange@redhat.com, guojian@oppo.com, hanchuanhua@oppo.com, hannes@cmpxchg.org, hughd@google.com, linux-kernel@vger.kernel.org, minchan@kernel.org, shy828301@gmail.com, v-songbaohua@oppo.com, ying.huang@intel.com, zhangshiming@oppo.com
Subject: [PATCH v4] arm64: enable THP_SWAP for arm64
Date: Wed, 20 Jul 2022 21:37:37 +1200
Message-Id: <20220720093737.133375-1-21cnbao@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Barry Song <v-songbaohua@oppo.com>

THP_SWAP has been proven to improve swap throughput significantly on x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay splitting THP after swapped out").

As long as arm64 uses a 4K page size, it is quite similar to x86_64 in having 2MB PMD THPs. THP_SWAP is architecture-independent, so enabling it will benefit arm64 as well.

A corner case is that MTE assumes only base pages can be swapped. We won't enable THP_SWAP for arm64 hardware with MTE support until MTE is reworked to coexist with THP_SWAP.
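For context on the 2MB figure, and on why the Kconfig change below is guarded by ARM64_4K_PAGES: with 4K base pages and 8-byte page-table descriptors, one PMD entry spans 512 base pages, i.e. 2MB, matching the PMD-mapped THP size on x86_64; with 16K or 64K pages a PMD-level THP grows to 32MB or 512MB respectively. The tiny standalone program below is illustrative only and not part of the patch; it just spells out that arithmetic:

#include <stdio.h>

int main(void)
{
	/* base page sizes arm64 can be built with */
	unsigned long page_sizes[] = { 4096, 16384, 65536 };
	int i;

	for (i = 0; i < 3; i++) {
		unsigned long page = page_sizes[i];
		/* each page-table level holds page/8 64-bit descriptors */
		unsigned long entries = page / sizeof(unsigned long long);
		unsigned long pmd_thp = entries * page;

		printf("%3luK pages -> PMD-level THP of %4lu MiB\n",
		       page / 1024, pmd_thp >> 20);
	}
	return 0;
}

Only the 4K configuration yields a THP small enough to be an attractive swap-out unit, hence the ARM64_4K_PAGES guard.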
A micro-benchmark is written to measure the THP swapout throughput, as below:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

unsigned long long tv_to_ms(struct timeval tv)
{
	return tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(void)
{
	struct timeval tv_b, tv_e;
#define SIZE (400 * 1024 * 1024)
	volatile void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("fail to get memory");
		exit(-1);
	}

	madvise((void *)p, SIZE, MADV_HUGEPAGE);
	memset((void *)p, 0x11, SIZE); /* write to get mem */

	gettimeofday(&tv_b, NULL);
	madvise((void *)p, SIZE, MADV_PAGEOUT);
	gettimeofday(&tv_e, NULL);

	printf("swp out bandwidth: %llu bytes/ms\n",
	       SIZE / (tv_to_ms(tv_e) - tv_to_ms(tv_b)));
	return 0;
}

Testing is done on an rk3568 64-bit quad-core Cortex-A55 platform (ROCK 3A).

thp swp throughput w/o patch: 2734 bytes/ms (mean of 10 tests)
thp swp throughput w/ patch:  3331 bytes/ms (mean of 10 tests)

Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Yang Shi <shy828301@gmail.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
-v4: collected Reviewed-by from Anshuman; thanks also to Ying for the comments

 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h |  6 ++++++
 include/linux/huge_mm.h          | 12 ++++++++++++
 mm/swap_slots.c                  |  2 +-
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..e1c540e80eec 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -101,6 +101,7 @@ config ARM64
 	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	select ARCH_WANT_LD_ORPHAN_WARN
 	select ARCH_WANTS_NO_INSTR
+	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b6632f18364..78d6f6014bfb 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -45,6 +45,12 @@
 	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static inline bool arch_thp_swp_supported(void)
+{
+	return !system_supports_mte();
+}
+#define arch_thp_swp_supported arch_thp_swp_supported
+
 /*
  * Outside of a few very special situations (e.g. hibernation), we always
  * use broadcast TLB invalidation instructions, therefore a spurious page
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de29821231c9..4ddaf6ad73ef 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
 	return split_huge_page_to_list(&folio->page, list);
 }
 
+/*
+ * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
+ * limitations in the implementation like arm64 MTE can override this to
+ * false
+ */
+#ifndef arch_thp_swp_supported
+static inline bool arch_thp_swp_supported(void)
+{
+	return true;
+}
+#endif
+
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 2a65a89b5b4d..10b94d64cc25 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 	entry.val = 0;
 
 	if (folio_test_large(folio)) {
-		if (IS_ENABLED(CONFIG_THP_SWAP))
+		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
 			get_swap_pages(1, &entry, folio_nr_pages(folio));
 		goto out;
 	}
-- 
2.25.1
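As a side note for anyone reproducing the numbers above: one way to confirm the region is being swapped out as whole THPs rather than being split first is to watch the thp_swpout and thp_swpout_fallback counters in /proc/vmstat around the MADV_PAGEOUT call. With 2MB THPs covering the 400MB buffer, thp_swpout should grow by roughly 200 once THP_SWAP takes effect, while thp_swpout_fallback counts THPs that had to be split. The sketch below is illustrative only; vmstat_read is a made-up helper name, not a kernel or libc API:

#include <stdio.h>
#include <string.h>

/* Read one counter (e.g. "thp_swpout") from /proc/vmstat; returns 0 if absent. */
static unsigned long long vmstat_read(const char *name)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char key[64];
	unsigned long long val;

	if (!f)
		return 0;
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, name)) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return 0;
}

int main(void)
{
	printf("thp_swpout:          %llu\n", vmstat_read("thp_swpout"));
	printf("thp_swpout_fallback: %llu\n", vmstat_read("thp_swpout_fallback"));
	return 0;
}

Sampling these two counters immediately before and after the madvise(MADV_PAGEOUT) call in the benchmark gives the per-run delta.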