Date: Tue, 31 Mar 2020 00:38:48 +0300
From: "Kirill A. Shutemov"
To: Yang Shi
Cc: Andrew Morton, Andrea Arcangeli, Linux MM, Linux Kernel Mailing List, "Kirill A. Shutemov"
Subject: Re: [PATCH 3/7] khugepaged: Drain LRU add pagevec to get rid of extra pins
Message-ID: <20200330213848.xmi3egioh7ygvfsz@box>
References: <20200327170601.18563-1-kirill.shutemov@linux.intel.com>
 <20200327170601.18563-4-kirill.shutemov@linux.intel.com>
 <20200328121829.kzmcmcshbwynjmqc@box>

On Mon, Mar 30, 2020 at 11:30:14AM -0700, Yang Shi wrote:
> On Sat, Mar 28, 2020 at 5:18 AM Kirill A. Shutemov wrote:
> >
> > On Fri, Mar 27, 2020 at 11:10:40AM -0700, Yang Shi wrote:
> > > On Fri, Mar 27, 2020 at 10:06 AM Kirill A. Shutemov wrote:
> > > >
> > > > __collapse_huge_page_isolate() may fail due to an extra pin in the
> > > > LRU add pagevec. It's pretty common for the swapin case: we swap in
> > > > pages just to fail due to the extra pin.
> > > >
> > > > Signed-off-by: Kirill A. Shutemov
> > > > ---
> > > >  mm/khugepaged.c | 8 ++++++++
> > > >  1 file changed, 8 insertions(+)
> > > >
> > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > index 14d7afc90786..39e0994abeb8 100644
> > > > --- a/mm/khugepaged.c
> > > > +++ b/mm/khugepaged.c
> > > > @@ -585,11 +585,19 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > > >  		 * The page must only be referenced by the scanned process
> > > >  		 * and page swap cache.
> > > >  		 */
> > > > +		if (page_count(page) != 1 + PageSwapCache(page)) {
> > > > +			/*
> > > > +			 * Drain pagevec and retry just in case we can get rid
> > > > +			 * of the extra pin, like in swapin case.
> > > > +			 */
> > > > +			lru_add_drain();
> > >
> > > This is definitely correct.
> > >
> > > I'm wondering if we need one more lru_add_drain() before the PageLRU
> > > check in khugepaged_scan_pmd() or not? The page might be in the lru
> > > cache and then get skipped. This would improve the success rate.
> >
> > Could you elaborate on the scenario? I don't follow.
>
> I mean the below change:
>
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1195,6 +1195,9 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
>  			goto out_unmap;
>  		}
>  		khugepaged_node_load[node]++;
> +		if (!PageLRU(page))
> +			/* Flush the page out of lru cache */
> +			lru_add_drain();
>  		if (!PageLRU(page)) {
>  			result = SCAN_PAGE_LRU;
>  			goto out_unmap;
>
> If the page is not on LRU we even can't reach collapse_huge_page(), right?

Yeah, I've achieved the same by doing lru_add_drain_all() once per
khugepaged_do_scan(). It is more effective than lru_add_drain() inside
khugepaged_scan_pmd() and shouldn't have too much overhead.
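Roughly, that placement would look something like this (an illustrative
sketch only, not the actual reworked patch; the scan loop body is elided
and the exact surrounding code is assumed):

static void khugepaged_do_scan(void)
{
	struct page *hpage = NULL;
	unsigned int progress = 0;
	unsigned int pages = READ_ONCE(khugepaged_pages_to_scan);

	/*
	 * Drain the per-CPU LRU add pagevecs on all CPUs once per scan
	 * pass, so pages still sitting in a pagevec don't fail the
	 * PageLRU and refcount checks later in the collapse path.
	 */
	lru_add_drain_all();

	while (progress < pages) {
		cond_resched();
		/*
		 * ... existing mm-slot scanning stays unchanged:
		 * khugepaged_scan_mm_slot() under khugepaged_mm_lock ...
		 */
		progress += khugepaged_scan_mm_slot(pages - progress, &hpage);
	}
}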
The lru_add_drain() from this patch has moved into the swapin routine and is
called only on success.

-- 
 Kirill A. Shutemov