From: Kairui Song <ryncsn@gmail.com>
Date: Wed, 27 Mar 2024 14:55:53 +0800
Subject: Re: [RFC PATCH 04/10] mm/swap: remove cache bypass swapin
To: "Huang, Ying"
Cc: linux-mm@kvack.org, Chris Li, Minchan Kim, Barry Song, Ryan Roberts,
	Yu Zhao, SeongJae Park, David Hildenbrand, Yosry Ahmed,
	Johannes Weiner, Matthew Wilcox, Nhat Pham, Chengming Zhou,
	Andrew Morton, linux-kernel@vger.kernel.org
In-Reply-To: <87v858mbjh.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20240326185032.72159-1-ryncsn@gmail.com>
	<20240326185032.72159-5-ryncsn@gmail.com>
	<87v858mbjh.fsf@yhuang6-desk2.ccr.corp.intel.com>

On Wed, Mar 27, 2024 at 2:32 PM Huang, Ying wrote:
>
> Kairui Song writes:
>
> > From: Kairui Song
> >
> > We used to have the cache bypass swapin path for better performance,
> > but removing it allows more optimizations to be applied, giving an
> > even better overall performance with less hackish code.
> >
> > These optimizations are not easily doable, or not doable at all,
> > without this removal.
> >
> > This patch simply removes it. Performance drops heavily for simple
> > swapin; real workloads won't regress this badly, but the slowdown is
> > still observable. Following commits will fix this and achieve better
> > performance.
> >
> > Swapout/in 30G of zero pages from ZRAM (this mostly measures the
> > overhead of the swap path itself, because zero pages are not
> > compressed but simply recorded in ZRAM, and performance drops more
> > as the swap device gets full):
> >
> > Test result of sequential swapin/out:
> >
> >                 Before (us)   After (us)
> > Swapout:        33619409      33624641
> > Swapin:         32393771      41614858  (-28.4%)
> > Swapout (THP):  7817909       7795530
> > Swapin (THP):   32452387      41708471  (-28.4%)
> >
> > Signed-off-by: Kairui Song
> > ---
> >  mm/memory.c     | 18 ++++-------------
> >  mm/swap.h       | 10 +++++-----
> >  mm/swap_state.c | 53 ++++++++++---------------------------------------
> >  mm/swapfile.c   | 13 ------------
> >  4 files changed, 19 insertions(+), 75 deletions(-)
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index dfdb620a9123..357d239ee2f6 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3932,7 +3932,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	struct page *page;
> >  	struct swap_info_struct *si = NULL;
> >  	rmap_t rmap_flags = RMAP_NONE;
> > -	bool need_clear_cache = false;
> >  	bool exclusive = false;
> >  	swp_entry_t entry;
> >  	pte_t pte;
> > @@ -4000,14 +3999,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	if (!folio) {
> >  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> >  		    __swap_count(entry) == 1) {
> > -			/* skip swapcache and readahead */
> >  			folio = swapin_direct(entry, GFP_HIGHUSER_MOVABLE, vmf);
> > -			if (PTR_ERR(folio) == -EBUSY)
> > -				goto out;
> > -			need_clear_cache = true;
> >  		} else {
> >  			folio = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE, vmf);
> > -			swapcache = folio;
> >  		}
> >
> >  		if (!folio) {
> > @@ -4023,6 +4017,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  			goto unlock;
> >  		}
> >
> > +		swapcache = folio;
> >  		page = folio_file_page(folio, swp_offset(entry));
> >
> >  		/* Had to read the page from swap area: Major fault */
> > @@ -4187,7 +4182,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	vmf->orig_pte = pte;
> >
> >  	/* ksm created a completely new copy */
> > -	if (unlikely(folio != swapcache && swapcache)) {
> > +	if (unlikely(folio != swapcache)) {
> >  		folio_add_new_anon_rmap(folio, vma, vmf->address);
> >  		folio_add_lru_vma(folio, vma);
> >  	} else {
> > @@ -4201,7 +4196,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> >
> >  	folio_unlock(folio);
> > -	if (folio != swapcache && swapcache) {
> > +	if (folio != swapcache) {
> >  		/*
> >  		 * Hold the lock to avoid the swap entry to be reused
> >  		 * until we take the PT lock for the pte_same() check
> > @@ -4227,9 +4222,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	if (vmf->pte)
> >  		pte_unmap_unlock(vmf->pte, vmf->ptl);
> > out:
> > -	/* Clear the swap cache pin for direct swapin after PTL unlock */
> > -	if (need_clear_cache)
> > -		swapcache_clear(si, entry);
> >  	if (si)
> >  		put_swap_device(si);
> >  	return ret;
> > @@ -4240,12 +4232,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	folio_unlock(folio);
> > out_release:
> >  	folio_put(folio);
> > -	if (folio != swapcache && swapcache) {
> > +	if (folio != swapcache) {
> >  		folio_unlock(swapcache);
> >  		folio_put(swapcache);
> >  	}
> > -	if (need_clear_cache)
> > -		swapcache_clear(si, entry);
> >  	if (si)
> >  		put_swap_device(si);
> >  	return ret;
> > diff --git a/mm/swap.h b/mm/swap.h
> > index aee134907a70..ac9573b03432 100644
> > --- a/mm/swap.h
> > +++ b/mm/swap.h
> > @@ -41,7 +41,6 @@ void __delete_from_swap_cache(struct folio *folio,
> >  void delete_from_swap_cache(struct folio *folio);
> >  void clear_shadow_from_swap_cache(int type, unsigned long begin,
> >  				  unsigned long end);
> > -void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
> >  struct folio *swap_cache_get_folio(swp_entry_t entry,
> >  		struct vm_area_struct *vma, unsigned long addr);
> >  struct folio *filemap_get_incore_folio(struct address_space *mapping,
> > @@ -100,14 +99,15 @@ static inline struct folio *swapin_readahead(swp_entry_t swp, gfp_t gfp_mask,
> >  {
> >  	return NULL;
> >  }
> > -
> > -static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
> > +static inline struct folio *swapin_direct(swp_entry_t entry, gfp_t flag,
> > +					  struct vm_fault *vmf)
> >  {
> > -	return 0;
> > +	return NULL;
> >  }
> >
> > -static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
> > +static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
> >  {
> > +	return 0;
> >  }
> >
> >  static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
> > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > index 2a9c6bdff5ea..49ef6250f676 100644
> > --- a/mm/swap_state.c
> > +++ b/mm/swap_state.c
> > @@ -880,61 +880,28 @@ static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
> >  }
> >
> >  /**
> > - * swapin_direct - swap in folios skipping swap cache and readahead
> > + * swapin_direct - swap in folios skipping readahead
> >   * @entry: swap entry of this memory
> >   * @gfp_mask: memory allocation flags
> >   * @vmf: fault information
> >   *
> > - * Returns the struct folio for entry and addr after the swap entry is read
> > - * in.
> > + * Returns the folio for entry after it is read in.
> >   */
> >  struct folio *swapin_direct(swp_entry_t entry, gfp_t gfp_mask,
> >  			    struct vm_fault *vmf)
> >  {
> > -	struct vm_area_struct *vma = vmf->vma;
> > +	struct mempolicy *mpol;
> >  	struct folio *folio;
> > -	void *shadow = NULL;
> > -
> > -	/*
> > -	 * Prevent parallel swapin from proceeding with
> > -	 * the cache flag. Otherwise, another thread may
> > -	 * finish swapin first, free the entry, and swapout
> > -	 * reusing the same entry. It's undetectable as
> > -	 * pte_same() returns true due to entry reuse.
> > -	 */
> > -	if (swapcache_prepare(entry)) {
> > -		/* Relax a bit to prevent rapid repeated page faults */
> > -		schedule_timeout_uninterruptible(1);
> > -		return ERR_PTR(-EBUSY);
> > -	}
> > -
> > -	/* skip swapcache */
> > -	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
> > -				vma, vmf->address, false);
> > -	if (folio) {
> > -		__folio_set_locked(folio);
> > -		__folio_set_swapbacked(folio);
> > -
> > -		if (mem_cgroup_swapin_charge_folio(folio,
> > -					vma->vm_mm, GFP_KERNEL,
> > -					entry)) {
> > -			folio_unlock(folio);
> > -			folio_put(folio);
> > -			return NULL;
> > -		}
> > -		mem_cgroup_swapin_uncharge_swap(entry);
> > -
> > -		shadow = get_shadow_from_swap_cache(entry);
> > -		if (shadow)
> > -			workingset_refault(folio, shadow);
> > +	bool page_allocated;
> > +	pgoff_t ilx;
> >
> > -		folio_add_lru(folio);
> > +	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
> > +	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
> > +					&page_allocated, false);
> > +	mpol_cond_put(mpol);
> >
> > -		/* To provide entry to swap_read_folio() */
> > -		folio->swap = entry;
> > +	if (page_allocated)
> >  		swap_read_folio(folio, true, NULL);
> > -		folio->private = NULL;
> > -	}
> >
> >  	return folio;
> >  }
>
> This looks similar to read_swap_cache_async(). Can we merge them?

Yes, that's doable, but I may have to split it out again for later
optimizations.

> And, we should avoid readahead in swapin_readahead() or
> swap_vma_readahead() for SWP_SYNCHRONOUS_IO anyway. So it appears that
> we can change and use swapin_readahead() directly?

Good point. The SWP_SYNCHRONOUS_IO check can be extended further after
this series, but the readahead optimization could be another series
(like the previous one which tried to unify readahead for shmem/anon),
so I thought it's better to keep it untouched for now.
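
For reference, the rough direction I have in mind for a later step is
something like the sketch below (untested; swapin_entry() is just a
placeholder name, and swapin_direct()/swapin_readahead() are used with
the signatures they have after this patch), so that do_swap_page() no
longer carries the SWP_SYNCHRONOUS_IO branch at all:

/*
 * Untested sketch: one entry point that decides internally whether
 * readahead is worth doing. swapin_entry() is a placeholder name,
 * not part of this patch.
 */
static struct folio *swapin_entry(swp_entry_t entry, gfp_t gfp_mask,
				  struct vm_fault *vmf)
{
	struct swap_info_struct *si = swp_swap_info(entry);

	/* Readahead gains nothing on synchronous devices like ZRAM */
	if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
	    __swap_count(entry) == 1)
		return swapin_direct(entry, gfp_mask, vmf);

	return swapin_readahead(entry, gfp_mask, vmf);
}

With that, the branch in do_swap_page() would collapse to a single
folio = swapin_entry(entry, GFP_HIGHUSER_MOVABLE, vmf) call.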