From: Barry Song <21cnbao@gmail.com>
Date: Thu, 7 Mar 2024 21:10:10 +1300
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
To: Lance Yang, david@redhat.com, Vishal Moola
Cc: akpm@linux-foundation.org, zokeefe@google.com, ryan.roberts@arm.com, shy828301@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240307061425.21013-1-ioworker0@gmail.com>
On Thu, Mar 7, 2024 at 9:00 PM Lance Yang wrote:
>
> Hey Barry,
>
> Thanks for taking the time to review!
>
> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Thu, Mar 7, 2024 at 7:15 PM Lance Yang wrote:
> >
> > [...]
> > > +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> > > +					struct folio *folio, pte_t *start_pte)
> > > +{
> > > +	int nr_pages = folio_nr_pages(folio);
> > > +	fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > > +
> > > +	for (int i = 0; i < nr_pages; i++)
> > > +		if (page_mapcount(folio_page(folio, i)) != 1)
> > > +			return false;
> >
> > We have moved to folio_estimated_sharers(); though it is not precise,
> > we avoid this kind of check that loops over, and depends on, every
> > subpage's mapcount.
>
> If we don't check the subpage's mapcount, and there is a cow folio associated
> with this folio and the cow folio has a smaller size than this folio,
> should we still mark this folio as lazyfree?

I agree, this is true. However, we've somehow accepted the fact that
folio_likely_mapped_shared can result in false negatives or false
positives to balance the overhead. So I really don't know :-)

Maybe David and Vishal can give some comments here.

>
> > BTW, do we need to rebase our work against David's changes[1]?
> > [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
>
> Yes, we should rebase our work against David's changes.
> > >
> > > +
> > > +	return nr_pages == folio_pte_batch(folio, addr, start_pte,
> > > +			ptep_get(start_pte), nr_pages, flags, NULL);
> > > +}
> > > +
> > >  static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >  				  unsigned long end, struct mm_walk *walk)
> > >
> > > @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >  	 */
> > >  	if (folio_test_large(folio)) {
> > >  		int err;
> > > +		unsigned long next_addr, align;
> > >
> > > -		if (folio_estimated_sharers(folio) != 1)
> > > -			break;
> > > -		if (!folio_trylock(folio))
> > > -			break;
> > > +		if (folio_estimated_sharers(folio) != 1 ||
> > > +		    !folio_trylock(folio))
> > > +			goto skip_large_folio;
> >
> > I don't think we can skip all the PTEs for nr_pages, as some of them might be
> > pointing to other folios.
> >
> > For example, for a large folio with 16 PTEs, you do MADV_DONTNEED on
> > PTE15-PTE16 and then write the memory of PTE15 and PTE16; you get page
> > faults, thus PTE15 and PTE16 will point to two different small folios.
> > We can only skip when we are sure nr_pages == folio_pte_batch().
>
> Agreed. Thanks for pointing that out.
>
> > > +
> > > +		align = folio_nr_pages(folio) * PAGE_SIZE;
> > > +		next_addr = ALIGN_DOWN(addr + align, align);
> > > +
> > > +		/*
> > > +		 * If we mark only the subpages as lazyfree, or
> > > +		 * cannot mark the entire large folio as lazyfree,
> > > +		 * then just split it.
> > > +		 */
> > > +		if (next_addr > end || next_addr - addr != align ||
> > > +		    !can_mark_large_folio_lazyfree(addr, folio, pte))
> > > +			goto split_large_folio;
> > > +
> > > +		/*
> > > +		 * Avoid unnecessary folio splitting if the large
> > > +		 * folio is entirely within the given range.
> > > +		 */
> > > +		folio_clear_dirty(folio);
> > > +		folio_unlock(folio);
> > > +		for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> > > +			ptent = ptep_get(pte);
> > > +			if (pte_young(ptent) || pte_dirty(ptent)) {
> > > +				ptent = ptep_get_and_clear_full(
> > > +					mm, addr, pte, tlb->fullmm);
> > > +				ptent = pte_mkold(ptent);
> > > +				ptent = pte_mkclean(ptent);
> > > +				set_pte_at(mm, addr, pte, ptent);
> > > +				tlb_remove_tlb_entry(tlb, pte, addr);
> > > +			}
> >
> > Can we do this in batches? For a CONT-PTE mapped large folio, you are
> > unfolding and folding again. It seems quite expensive.
>
> Thanks for your suggestion. I'll do this in batches in v3.
>
> Thanks again for your time!
>
> Best,
> Lance
>
> > > +		}
> > > +		folio_mark_lazyfree(folio);
> > > +		goto next_folio;
> > > +
> > > +split_large_folio:
> > >  		folio_get(folio);
> > >  		arch_leave_lazy_mmu_mode();
> > >  		pte_unmap_unlock(start_pte, ptl);
> > > @@ -688,13 +736,28 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> > >  		err = split_folio(folio);
> > >  		folio_unlock(folio);
> > >  		folio_put(folio);
> > > -		if (err)
> > > -			break;
> > > -		start_pte = pte =
> > > -			pte_offset_map_lock(mm, pmd, addr, &ptl);
> > > -		if (!start_pte)
> > > -			break;
> > > -		arch_enter_lazy_mmu_mode();
> > > +
> > > +		/*
> > > +		 * If the large folio is locked or cannot be split,
> > > +		 * we just skip it.
> > > +		 */
> > > +		if (err) {
> > > +skip_large_folio:
> > > +			if (next_addr >= end)
> > > +				break;
> > > +			pte += (next_addr - addr) / PAGE_SIZE;
> > > +			addr = next_addr;
> > > +		}
> > > +
> > > +		if (!start_pte) {
> > > +			start_pte = pte = pte_offset_map_lock(
> > > +				mm, pmd, addr, &ptl);
> > > +			if (!start_pte)
> > > +				break;
> > > +			arch_enter_lazy_mmu_mode();
> > > +		}
> > > +
> > > +next_folio:
> > >  		pte--;
> > >  		addr -= PAGE_SIZE;
> > >  		continue;
> > > --
> > > 2.33.1

Thanks
Barry