In-Reply-To: <20200328122735.nzius2ikvnyvpf2f@box>
From: Yang Shi
Date: Mon, 30 Mar 2020 11:38:30 -0700
Subject: Re: [PATCH 5/7] khugepaged: Allow to collapse PTE-mapped compound pages
To: "Kirill A. Shutemov"
Cc: Andrew Morton, Andrea Arcangeli, Linux MM, Linux Kernel Mailing List, "Kirill A. Shutemov"

On Sat, Mar 28, 2020 at 5:27 AM Kirill A. Shutemov wrote:
>
> On Fri, Mar 27, 2020 at 06:09:38PM -0700, Yang Shi wrote:
> > On Fri, Mar 27, 2020 at 5:34 PM Kirill A. Shutemov wrote:
> > >
> > > On Fri, Mar 27, 2020 at 11:53:57AM -0700, Yang Shi wrote:
> > > > On Fri, Mar 27, 2020 at 10:06 AM Kirill A. Shutemov wrote:
> > > > >
> > > > > We can collapse PTE-mapped compound pages. We only need to avoid
> > > > > handling them more than once: lock/unlock the page only once if it's
> > > > > present in the PMD range multiple times, as it is handled at the
> > > > > compound level. The same goes for LRU isolation and putback.
> > > > >
> > > > > Signed-off-by: Kirill A. Shutemov
> > > > > ---
> > > > >  mm/khugepaged.c | 41 +++++++++++++++++++++++++++++++----------
> > > > >  1 file changed, 31 insertions(+), 10 deletions(-)
> > > > >
> > > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > > index b47edfe57f7b..c8c2c463095c 100644
> > > > > --- a/mm/khugepaged.c
> > > > > +++ b/mm/khugepaged.c
> > > > > @@ -515,6 +515,17 @@ void __khugepaged_exit(struct mm_struct *mm)
> > > > >
> > > > >  static void release_pte_page(struct page *page)
> > > > >  {
> > > > > +	/*
> > > > > +	 * We need to unlock and put compound page on LRU only once.
> > > > > +	 * The rest of the pages have to be locked and not on LRU here.
> > > > > +	 */
> > > > > +	VM_BUG_ON_PAGE(!PageCompound(page) &&
> > > > > +			(!PageLocked(page) && PageLRU(page)), page);
> > > > > +
> > > > > +	if (!PageLocked(page))
> > > > > +		return;
> > > > > +
> > > > > +	page = compound_head(page);
> > > > >  	dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
> > > >
> > > > We need to count in base-page units. The same counter is modified
> > > > by vmscan in base-page units.
> > >
> > > Is it though? Where?
> >
> > __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken) in
> > vmscan.c. Here nr_taken is compound_nr(page), so if it is a THP the
> > number would be 512.
>
> Could you point to a particular codepath?

shrink_inactive_list ->
        nr_taken = isolate_lru_pages()
        __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, nr_taken);

Then in isolate_lru_pages():

        nr_pages = compound_nr(page);
        ...
        switch (__isolate_lru_page(page, mode)) {
        case 0:
                nr_taken += nr_pages;

> > So in both the inc and dec paths of collapsing a PTE-mapped THP, we
> > should mod by compound_nr(page) too.
>
> I disagree. A compound page is represented by a single entry on the LRU,
> so it has to be counted once.

It was not a problem without THP swap. But with THP swap we saw
pgsteal_{kswapd|direct} may be greater than pgscan_{kswapd|direct} if we
count a THP as one page. Please refer to the below commit:

commit 98879b3b9edc1604f2d1a6686576ef4d08ed3310
Author: Yang Shi
Date:   Thu Jul 11 20:59:30 2019 -0700

    mm: vmscan: correct some vmscan counters for THP swapout

    Since commit bd4c82c22c36 ("mm, THP, swap: delay splitting THP after
    swapped out"), a THP can be swapped out as a whole.  But nr_reclaimed
    and some other vm counters still get inc'ed by one even though a whole
    THP (512 pages) gets swapped out.

    This doesn't make too much sense to memory reclaim.  For example,
    direct reclaim may just need to reclaim SWAP_CLUSTER_MAX pages;
    reclaiming one THP could fulfill it.
    But if nr_reclaimed is not increased correctly, direct reclaim may
    just waste time reclaiming more pages, SWAP_CLUSTER_MAX * 512 pages
    in the worst case.

    And it may cause pgsteal_{kswapd|direct} to be greater than
    pgscan_{kswapd|direct}, like the below:

        pgsteal_kswapd 122933
        pgsteal_direct 26600225
        pgscan_kswapd 174153
        pgscan_direct 14678312

    nr_reclaimed and nr_scanned must be fixed in parallel, otherwise it
    would break some page reclaim logic, e.g.:

    vmpressure: this looks at the scanned/reclaimed ratio, so it won't
    change semantics as long as scanned & reclaimed are fixed in
    parallel.

    compaction/reclaim: compaction wants a certain number of physical
    pages freed up before going back to compacting.

    kswapd priority raising: kswapd raises priority if we scan fewer
    pages than the reclaim target (which itself is obviously expressed
    in order-0 pages).  As a result, kswapd can falsely raise its
    aggressiveness even when it's making great progress.

    Other than nr_scanned and nr_reclaimed, some other counters, e.g.
    pgactivate, nr_skipped, nr_ref_keep and nr_unmap_fail, need to be
    fixed too since they are user visible via cgroup, /proc/vmstat or
    trace points; otherwise they would be underreported.

    When isolating pages from LRUs, nr_taken has been accounted in base
    pages, but nr_scanned and nr_skipped are still accounted in THPs.
    That doesn't make too much sense either, since it may cause trace
    points to underreport the numbers as well.

    So account those counters in base pages instead of counting a THP as
    one page.

    nr_dirty, nr_unqueued_dirty, nr_congested and nr_writeback are used
    by the file cache, so they are not impacted by THP swap.

    This change may result in a lower steal/scan ratio in some cases,
    since a THP may get split during page reclaim and then a part of the
    tail pages get reclaimed instead of the whole 512 pages, while
    nr_scanned is accounted by 512, particularly for direct reclaim.
    But this should not be a significant issue.
So, since we count a THP in base-page units in the vmscan path, the same
counter should be updated in base-page units in other paths as well,
IMHO.

> --
>  Kirill A. Shutemov