Date: Fri, 13 Dec 2024 14:35:43 +0800
Subject: Re: [PATCH] mm: shmem: skip swapcache for swapin of synchronous swap device
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, wangkefeng.wang@huawei.com,
 kasong@tencent.com, ying.huang@linux.alibaba.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <8c40d045dc1ea46cc0983c0188b566615d9eb490.1733897892.git.baolin.wang@linux.alibaba.com>
In-Reply-To: <8c40d045dc1ea46cc0983c0188b566615d9eb490.1733897892.git.baolin.wang@linux.alibaba.com>

On 2024/12/11 14:26, Baolin Wang wrote:
> With fast swap devices (such as zram), swapin latency is crucial to
> applications. For shmem swapin, similar to anonymous memory swapin, we can
> skip the swapcache operation to improve swapin latency. Testing 1G shmem
> sequential swapin without THP enabled, I observed approximately a 6%
> performance improvement:
> (Note: I repeated each test 5 times and took the mean.)
>
>  w/o patch    w/ patch    changes
>  534.8ms      501ms       +6.3%
>
> In addition, we currently always split the large swap entry stored in the
> shmem mapping during shmem large folio swapin, which is not ideal,
> especially with a fast swap device. If the swap device is synchronous, we
> can instead swap in the whole large folio rather than splitting precious
> large folios, to take advantage of large folios and improve swapin latency,
> similar to anonymous memory mTHP swapin. Testing 1G shmem sequential swapin
> with 64K mTHP and 2M mTHP, I observed an obvious performance improvement:
>
> mTHP=64K
>  w/o patch    w/ patch    changes
>  550.4ms      169.6ms     +69%
>
> mTHP=2M
>  w/o patch    w/ patch    changes
>  542.8ms      126.8ms     +77%
>
> Note that skipping the swapcache requires attention to concurrent swapin
> scenarios.
> Fortunately the swapcache_prepare() and shmem_add_to_page_cache() can help
> identify concurrent swapin and large swap entry split scenarios, and return
> -EEXIST for retry.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/shmem.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 100 insertions(+), 2 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 41d7a181ed89..a110f973dec0 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1966,6 +1966,66 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  	return ERR_PTR(error);
>  }
>  
> +static struct folio *shmem_swap_alloc_folio(struct inode *inode, struct vm_area_struct *vma,
> +		pgoff_t index, swp_entry_t entry, int order, gfp_t gfp)
> +{
> +	struct shmem_inode_info *info = SHMEM_I(inode);
> +	struct folio *new;
> +	void *shadow;
> +	int nr_pages;
> +
> +	/*
> +	 * We have arrived here because our zones are constrained, so don't
> +	 * limit chance of success by further cpuset and node constraints.
> +	 */
> +	gfp &= ~GFP_CONSTRAINT_MASK;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	if (order > 0) {
> +		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
> +
> +		gfp = limit_gfp_mask(huge_gfp, gfp);
> +	}
> +#endif
> +
> +	new = shmem_alloc_folio(gfp, order, info, index);
> +	if (!new)
> +		return ERR_PTR(-ENOMEM);
> +
> +	nr_pages = folio_nr_pages(new);
> +	if (mem_cgroup_swapin_charge_folio(new, vma ? vma->vm_mm : NULL,
> +					   gfp, entry)) {
> +		folio_put(new);
> +		return ERR_PTR(-ENOMEM);
> +	}
> +
> +	/*
> +	 * Prevent parallel swapin from proceeding with the swap cache flag.
> +	 *
> +	 * Of course there is another possible concurrent scenario as well,
> +	 * that is to say, the swap cache flag of a large folio has already
> +	 * been set by swapcache_prepare(), while another thread may have
> +	 * already split the large swap entry stored in the shmem mapping.
> +	 * In this case, shmem_add_to_page_cache() will help identify the
> +	 * concurrent swapin and return -EEXIST.
> +	 */
> +	if (swapcache_prepare(entry, nr_pages)) {
> +		folio_put(new);
> +		return ERR_PTR(-EEXIST);
> +	}
> +
> +	__folio_set_locked(new);
> +	__folio_set_swapbacked(new);
> +	new->swap = entry;
> +
> +	mem_cgroup_swapin_uncharge_swap(entry, nr_pages);
> +	shadow = get_shadow_from_swap_cache(entry);
> +	if (shadow)
> +		workingset_refault(new, shadow);
> +	folio_add_lru(new);
> +	swap_read_folio(new, NULL);
> +	return new;
> +}
> +
>  /*
>   * When a page is moved from swapcache to shmem filecache (either by the
>   * usual swapin of shmem_get_folio_gfp(), or by the less common swapoff of
> @@ -2189,6 +2249,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	struct swap_info_struct *si;
>  	struct folio *folio = NULL;
> +	bool skip_swapcache = false;
>  	swp_entry_t swap;
>  	int error, nr_pages;
>  
> @@ -2210,6 +2271,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	/* Look it up and read it in.. */
>  	folio = swap_cache_get_folio(swap, NULL, 0);
>  	if (!folio) {
> +		int order = xa_get_order(&mapping->i_pages, index);
> +		bool fallback_order0 = false;
>  		int split_order;
>  
>  		/* Or update major stats only when swapin succeeds?? */
> @@ -2219,6 +2282,33 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  			count_memcg_event_mm(fault_mm, PGMAJFAULT);
>  		}
>  
> +		/*
> +		 * If uffd is active for the vma, we need per-page fault
> +		 * fidelity to maintain the uffd semantics, then fallback
> +		 * to swapin order-0 folio, as well as for zswap case.
> +		 */
> +		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
> +				  !zswap_never_enabled()))
> +			fallback_order0 = true;
> +
> +		/* Skip swapcache for synchronous device. */
> +		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> +			folio = shmem_swap_alloc_folio(inode, vma, index, swap, order, gfp);
> +			if (!IS_ERR(folio)) {
> +				skip_swapcache = true;
> +				goto alloced;
> +			}
> +
> +			/*
> +			 * Fallback to swapin order-0 folio unless the swap entry
> +			 * already exists.
> +			 */
> +			error = PTR_ERR(folio);
> +			folio = NULL;
> +			if (error == -EEXIST)
> +				goto failed;
> +		}
> +
>  		/*
>  		 * Now swap device can only swap in order 0 folio, then we
>  		 * should split the large swap entry stored in the pagecache
> @@ -2249,9 +2339,10 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  		}
>  	}
>  
> +alloced:
>  	/* We have to do this with folio locked to prevent races */
>  	folio_lock(folio);
> -	if (!folio_test_swapcache(folio) ||
> +	if ((!skip_swapcache && !folio_test_swapcache(folio)) ||
>  	    folio->swap.val != swap.val ||
>  	    !shmem_confirm_swap(mapping, index, swap)) {
>  		error = -EEXIST;
> @@ -2287,7 +2378,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	if (sgp == SGP_WRITE)
>  		folio_mark_accessed(folio);
>  
> -	delete_from_swap_cache(folio);
> +	if (skip_swapcache) {
> +		folio->swap.val = 0;
> +		swapcache_clear(si, swap, nr_pages);
> +	} else {
> +		delete_from_swap_cache(folio);
> +	}
>  	folio_mark_dirty(folio);
>  	swap_free_nr(swap, nr_pages);
>  	put_swap_device(si);
> @@ -2300,6 +2396,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	if (error == -EIO)
>  		shmem_set_folio_swapin_error(inode, index, folio, swap);

Oops, I missed handling the uncommon swapin_error case when skipping the swapcache. Will fix in V2.

>  unlock:
> +	if (skip_swapcache)
> +		swapcache_clear(si, swap, folio_nr_pages(folio));
>  	if (folio) {
>  		folio_unlock(folio);
>  		folio_put(folio);
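
For reference, the rough direction I have in mind for that fix (an untested
sketch only, not the final v2 code): a folio that skipped the swapcache was
never added to the swap cache, so the swapin-error helper must not call
delete_from_swap_cache() on it and should only drop the folio's swap binding,
leaving the swapcache_prepare() reservation to be released by the
swapcache_clear() at the unlock label above. One possible shape, where the
extra 'skip_swapcache' parameter is purely illustrative:

static void shmem_set_folio_swapin_error(struct inode *inode, pgoff_t index,
					 struct folio *folio, swp_entry_t swap,
					 bool skip_swapcache)	/* illustrative new argument */
{
	/* ... existing checks and poisoned-entry replacement unchanged ... */

	/*
	 * A folio that skipped the swapcache was never added to the swap
	 * cache, so it cannot be deleted from it; just drop its swap
	 * binding instead.
	 */
	if (!skip_swapcache)
		delete_from_swap_cache(folio);
	else
		folio->swap.val = 0;

	/* ... existing inode accounting and swap entry freeing unchanged ... */
}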