From: Barry Song <21cnbao@gmail.com>
Date: Wed, 13 Mar 2024 20:19:49 +1300
Subject: Re: [PATCH v4 6/6] mm: madvise: Avoid split during MADV_PAGEOUT and MADV_COLD
To: Ryan Roberts
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Huang Ying,
    Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang, Chris Li,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <20240311150058.1122862-7-ryan.roberts@arm.com>
References: <20240311150058.1122862-1-ryan.roberts@arm.com> <20240311150058.1122862-7-ryan.roberts@arm.com>

On Tue, Mar 12, 2024 at 4:01 AM Ryan Roberts wrote:
>
> Rework madvise_cold_or_pageout_pte_range() to avoid splitting any large
> folio that is fully and contiguously mapped in the pageout/cold vm
> range. This change means that large folios will be maintained all the
> way to swap storage. This both improves performance during swap-out, by
> eliding the cost of splitting the folio, and sets us up nicely for
> maintaining the large folio when it is swapped back in (to be covered in
> a separate series).
>
> Folios that are not fully mapped in the target range are still split,
> but note that behavior is changed so that if the split fails for any
> reason (folio locked, shared, etc) we now leave it as is and move to the
> next pte in the range and continue work on the following folios.
> Previously any failure of this sort would cause the entire operation to
> give up and no folios mapped at higher addresses were paged out or made
> cold. Given large folios are becoming more common, this old behavior
> would likely have led to wasted opportunities.
>
> While we are at it, change the code that clears young from the ptes to
> use ptep_test_and_clear_young(), which is more efficient than
> get_and_clear/modify/set, especially for contpte mappings on arm64,
> where the old approach would require unfolding/refolding and the new
> approach can be done in place.
>
> Signed-off-by: Ryan Roberts

This looks so much better than our initial RFC. Thank you for your
excellent work!

> ---
>  mm/madvise.c | 89 ++++++++++++++++++++++++++++++----------------------
>  1 file changed, 51 insertions(+), 38 deletions(-)
>
> diff --git a/mm/madvise.c b/mm/madvise.c
> index 547dcd1f7a39..56c7ba7bd558 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -336,6 +336,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>          LIST_HEAD(folio_list);
>          bool pageout_anon_only_filter;
>          unsigned int batch_count = 0;
> +        int nr;
>
>          if (fatal_signal_pending(current))
>                  return -EINTR;
> @@ -423,7 +424,8 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>                  return 0;
>          flush_tlb_batched_pending(mm);
>          arch_enter_lazy_mmu_mode();
> -        for (; addr < end; pte++, addr += PAGE_SIZE) {
> +        for (; addr < end; pte += nr, addr += nr * PAGE_SIZE) {
> +                nr = 1;
>                  ptent = ptep_get(pte);
>
>                  if (++batch_count == SWAP_CLUSTER_MAX) {
> @@ -447,55 +449,66 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>                          continue;
>
>                  /*
> -                 * Creating a THP page is expensive so split it only if we
> -                 * are sure it's worth. Split it if we are only owner.
> +                 * If we encounter a large folio, only split it if it is not
> +                 * fully mapped within the range we are operating on. Otherwise
> +                 * leave it as is so that it can be swapped out whole. If we
> +                 * fail to split a folio, leave it in place and advance to the
> +                 * next pte in the range.
>                   */
>                  if (folio_test_large(folio)) {
> -                        int err;
> -
> -                        if (folio_estimated_sharers(folio) > 1)
> -                                break;
> -                        if (pageout_anon_only_filter && !folio_test_anon(folio))
> -                                break;
> -                        if (!folio_trylock(folio))
> -                                break;
> -                        folio_get(folio);
> -                        arch_leave_lazy_mmu_mode();
> -                        pte_unmap_unlock(start_pte, ptl);
> -                        start_pte = NULL;
> -                        err = split_folio(folio);
> -                        folio_unlock(folio);
> -                        folio_put(folio);
> -                        if (err)
> -                                break;
> -                        start_pte = pte =
> -                                pte_offset_map_lock(mm, pmd, addr, &ptl);
> -                        if (!start_pte)
> -                                break;
> -                        arch_enter_lazy_mmu_mode();
> -                        pte--;
> -                        addr -= PAGE_SIZE;
> -                        continue;
> +                        const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> +                                                FPB_IGNORE_SOFT_DIRTY;
> +                        int max_nr = (end - addr) / PAGE_SIZE;
> +
> +                        nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> +                                             fpb_flags, NULL);

I wonder if we have a quick way to avoid folio_pte_batch() if users
are doing madvise() on a portion of a large folio.
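Just thinking out loud, here is a rough, untested sketch of the kind of
shortcut I have in mind. It reuses the max_nr/fpb_flags locals from your
patch and only helps when the remaining range is obviously shorter than
the folio, so please treat it as an illustration rather than a request
for this patch:

        /*
         * If the remaining range cannot possibly cover the whole folio,
         * it cannot be fully mapped in it, so skip the batch walk and
         * fall through to the split path one pte at a time.
         */
        if (max_nr < folio_nr_pages(folio))
                nr = 1;
        else
                nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
                                     fpb_flags, NULL);

A range that starts in the middle of a large folio but is long enough
to reach its end would still pay for folio_pte_batch(), so this would
only be a partial answer anyway.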
> +
> +                        if (nr < folio_nr_pages(folio)) {
> +                                int err;
> +
> +                                if (folio_estimated_sharers(folio) > 1)
> +                                        continue;
> +                                if (pageout_anon_only_filter && !folio_test_anon(folio))
> +                                        continue;
> +                                if (!folio_trylock(folio))
> +                                        continue;
> +                                folio_get(folio);
> +                                arch_leave_lazy_mmu_mode();
> +                                pte_unmap_unlock(start_pte, ptl);
> +                                start_pte = NULL;
> +                                err = split_folio(folio);
> +                                folio_unlock(folio);
> +                                folio_put(folio);
> +                                if (err)
> +                                        continue;
> +                                start_pte = pte =
> +                                        pte_offset_map_lock(mm, pmd, addr, &ptl);
> +                                if (!start_pte)
> +                                        break;
> +                                arch_enter_lazy_mmu_mode();
> +                                nr = 0;
> +                                continue;
> +                        }
>                  }
>
>                  /*
>                   * Do not interfere with other mappings of this folio and
> -                 * non-LRU folio.
> +                 * non-LRU folio. If we have a large folio at this point, we
> +                 * know it is fully mapped so if its mapcount is the same as its
> +                 * number of pages, it must be exclusive.
>                   */
> -                if (!folio_test_lru(folio) || folio_mapcount(folio) != 1)
> +                if (!folio_test_lru(folio) ||
> +                    folio_mapcount(folio) != folio_nr_pages(folio))
>                          continue;

This looks perfect and is exactly what I wanted to achieve.

>
>                  if (pageout_anon_only_filter && !folio_test_anon(folio))
>                          continue;
>
> -                VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> -
> -                if (!pageout && pte_young(ptent)) {
> -                        ptent = ptep_get_and_clear_full(mm, addr, pte,
> -                                                        tlb->fullmm);
> -                        ptent = pte_mkold(ptent);
> -                        set_pte_at(mm, addr, pte, ptent);
> -                        tlb_remove_tlb_entry(tlb, pte, addr);
> +                if (!pageout) {
> +                        for (; nr != 0; nr--, pte++, addr += PAGE_SIZE) {
> +                                if (ptep_test_and_clear_young(vma, addr, pte))
> +                                        tlb_remove_tlb_entry(tlb, pte, addr);
> +                        }

This looks really smart. If it is not pageout, pte and addr have
already been advanced here and nr has been decremented to 0, so the
outer loop

        for (; addr < end; pte += nr, addr += nr * PAGE_SIZE)

does not advance them again. Otherwise nr stays non-zero and the outer
loop advances addr and pte by nr.

>                  }
>
>                  /*
> --
> 2.25.1
>

Overall, LGTM,

Reviewed-by: Barry Song