From: Huang, Ying
To: Andrea Righi
Cc: Andrew Morton, Minchan Kim, Anchal Agarwal, linux-mm@kvack.org
Subject: Re: [PATCH v2] mm: swap: use fixed-size readahead during swapoff
Date: Tue, 14 Apr 2020 09:31:24 +0800
Message-ID: <87wo6i6efn.fsf@yhuang-dev.intel.com>
In-Reply-To: <20200413133150.GA810380@xps-13> (Andrea Righi's message of "Mon, 13 Apr 2020 15:31:50 +0200")
References: <20200413111810.GA801367@xps-13> <87a73f7d71.fsf@yhuang-dev.intel.com> <20200413133150.GA810380@xps-13>

Andrea Righi writes:

> On Mon, Apr 13, 2020 at 09:00:34PM +0800, Huang, Ying wrote:
>> Andrea Righi writes:
>>
>> [snip]
>>
>> > diff --git a/mm/swap_state.c b/mm/swap_state.c
>> > index ebed37bbf7a3..c71abc8df304 100644
>> > --- a/mm/swap_state.c
>> > +++ b/mm/swap_state.c
>> > @@ -20,6 +20,7 @@
>> >  #include <linux/migrate.h>
>> >  #include <linux/vmalloc.h>
>> >  #include <linux/swap_slots.h>
>> > +#include <linux/oom.h>
>> >  #include <linux/huge_mm.h>
>> >  
>> >  #include <asm/pgtable.h>
>> > @@ -507,6 +508,14 @@ static unsigned long swapin_nr_pages(unsigned long offset)
>> >  	max_pages = 1 << READ_ONCE(page_cluster);
>> >  	if (max_pages <= 1)
>> >  		return 1;
>> > +	/*
>> > +	 * If the current task is using too much memory or swapoff is running,
>> > +	 * simply use the max readahead size. Since we likely want to load a
>> > +	 * lot of pages back into memory, using a fixed-size max readahead can
>> > +	 * give better performance in this case.
>> > +	 */
>> > +	if (oom_task_origin(current))
>> > +		return max_pages;
>> >  
>> >  	hits = atomic_xchg(&swapin_readahead_hits, 0);
>> >  	pages = __swapin_nr_pages(prev_offset, offset, hits, max_pages,
>>
>> Think about this again. If my understanding is correct, the access
>> pattern during swapoff is sequential, so why doesn't swap readahead
>> work? If it doesn't, can you root-cause that first?
>
> Theoretically, if the pattern is sequential, the current heuristic should
> already select a large readahead size, but apparently it is not doing that.
>
> I'll repeat my tests, tracing the readahead size during swapoff, to see
> exactly what's going on here.

I haven't verified it, but it may be helpful to call lookup_swap_cache()
before swapin_readahead() in unuse_pte_range().  The idea is that
lookup_swap_cache() updates the swap readahead statistics, so the
readahead heuristic can see the swapoff access pattern.

Best Regards,
Huang, Ying
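
To make the suggestion concrete, below is a minimal sketch of how the
swap-cache lookup could sit in front of swapin_readahead() inside the
pte loop of unuse_pte_range() in mm/swapfile.c.  It assumes the v5.6
layout of that function (variables si, swap_map, entry, vmf and the
try_next label), and is only an illustration of the idea, not a tested
patch:

		/* Sketch: inside the pte loop of unuse_pte_range(). */
		pte_unmap(pte);
		swap_map = &si->swap_map[offset];

		/*
		 * Check the swap cache first.  If the page was brought in
		 * by a previous readahead, lookup_swap_cache() clears
		 * PageReadahead and records the hit in the readahead
		 * statistics, so the window computed by swapin_nr_pages()
		 * can grow for the mostly sequential swapoff pattern
		 * instead of starting cold on every fault.
		 */
		page = lookup_swap_cache(entry, vma, addr);
		if (!page) {
			struct vm_fault vmf = {
				.vma = vma,
				.address = addr,
				.pmd = pmd,
			};

			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
						&vmf);
		}
		if (!page) {
			if (*swap_map == 0 || *swap_map == SWAP_MAP_BAD)
				goto try_next;
			return -ENOMEM;
		}

Whether updating the statistics like this is enough on its own, or the
swapoff path still benefits from a fixed-size window, is exactly what
tracing the readahead size should show.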