Date: Fri, 4 Oct 2019 15:12:30 +0200
From: Michal Hocko
To: Konstantin Khlebnikov
Cc: linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: Re: [PATCH v2] mm/swap: piggyback lru_add_drain_all() calls
Message-ID: <20191004131230.GL9578@dhcp22.suse.cz>
In-Reply-To: <157019456205.3142.3369423180908482020.stgit@buzz>

On Fri 04-10-19 16:09:22, Konstantin Khlebnikov wrote:
> This is a very slow operation. There is no reason to do it again if
> somebody else already drained all per-cpu vectors while we waited for
> the lock.
> 
> Piggyback on a drain started and finished while we waited for the lock:
> all pages pended at the time of our entry were drained from the vectors.
> 
> Callers like POSIX_FADV_DONTNEED retry their operations once after
> draining per-cpu vectors when pages have unexpected references.

This describes why we need to wait for preexisting pages on the pvecs,
but the changelog doesn't say anything about the improvements this leads
to. In other words, what kind of workloads benefit from it?
> Signed-off-by: Konstantin Khlebnikov
> ---
>  mm/swap.c |   16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 38c3fa4308e2..5ba948a9d82a 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -708,9 +708,10 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
>   */
>  void lru_add_drain_all(void)
>  {
> +	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
>  	static DEFINE_MUTEX(lock);
>  	static struct cpumask has_work;
> -	int cpu;
> +	int cpu, seq;
> 
>  	/*
>  	 * Make sure nobody triggers this path before mm_percpu_wq is fully
> @@ -719,7 +720,19 @@ void lru_add_drain_all(void)
>  	if (WARN_ON(!mm_percpu_wq))
>  		return;
> 
> +	seq = raw_read_seqcount_latch(&seqcount);
> +
>  	mutex_lock(&lock);
> +
> +	/*
> +	 * Piggyback on drain started and finished while we waited for lock:
> +	 * all pages pended at the time of our enter were drained from vectors.
> +	 */
> +	if (__read_seqcount_retry(&seqcount, seq))
> +		goto done;
> +
> +	raw_write_seqcount_latch(&seqcount);
> +
>  	cpumask_clear(&has_work);
> 
>  	for_each_online_cpu(cpu) {
> @@ -740,6 +753,7 @@ void lru_add_drain_all(void)
>  	for_each_cpu(cpu, &has_work)
>  		flush_work(&per_cpu(lru_add_drain_work, cpu));
> 
> +done:
>  	mutex_unlock(&lock);
>  }
>  #else

-- 
Michal Hocko
SUSE Labs