From: Hao Lee <haolee.swjtu@gmail.com>
Date: Thu, 25 Nov 2021 11:24:02 +0800
Subject: Re: [PATCH] mm: reduce spinlock contention in release_pages()
To: Michal Hocko
Cc: Linux MM, Johannes Weiner, vdavydov.dev@gmail.com, Shakeel Butt, cgroups@vger.kernel.org, LKML
References: <20211124151915.GA6163@haolee.io>
Content-Type: text/plain; charset="UTF-8"
On Thu, Nov 25, 2021 at 12:31 AM Michal Hocko wrote:
>
> On Wed 24-11-21 15:19:15, Hao Lee wrote:
> > When several tasks are terminated simultaneously, lots of pages will be
> > released, which can cause severe spinlock contention. Other tasks which
> > are running on the same core will be seriously affected. We can yield
> > the cpu to fix this problem.
>
> How does this actually address the problem? You are effectively losing
> fairness completely.

Got it. Thanks!

> We do batch currently so no single task should be
> able to monopolize the cpu for too long. Why this is not sufficient?

The uncharge and unref paths do benefit from the batching, but
del_from_lru takes longer to complete. If nr is very large, several
tasks keep contending for the lruvec spinlock inside the loop. We can
see a transient spike in sys% when this happens, and perf also shows
the spinlock slowpath consuming too much time. This scenario is not
rare, especially when many containers are destroyed at the same time,
and other latency-critical tasks running on the same cores can be
affected. So I want to figure out a way to deal with it.

Thanks.

>
> > diff --git a/mm/swap.c b/mm/swap.c
> > index e8c9dc6d0377..91850d51a5a5 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -960,8 +960,14 @@ void release_pages(struct page **pages, int nr)
> >  		if (PageLRU(page)) {
> >  			struct lruvec *prev_lruvec = lruvec;
> >
> > -			lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
> > +retry:
> > +			lruvec = folio_lruvec_tryrelock_irqsave(folio, lruvec,
> >  									&flags);
> > +			if (!lruvec) {
> > +				cond_resched();
> > +				goto retry;
> > +			}
> > +
> >  			if (prev_lruvec != lruvec)
> >  				lock_batch = 0;
> >
> > --
> > 2.31.1
>
> --
> Michal Hocko
> SUSE Labs
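
The diff above relies on a folio_lruvec_tryrelock_irqsave() helper whose
implementation is not included in this message. Below is a minimal sketch of
what such a helper might look like, assuming a trylock built on
spin_trylock_irqsave() and the existing folio_lruvec(),
folio_matches_lruvec() and unlock_page_lruvec_irqrestore() helpers of that
kernel era; it is an illustration of the idea, not the patch author's actual
code.

/*
 * Illustrative sketch only (not part of the posted patch): a trylock
 * counterpart to folio_lruvec_relock_irqsave().  Instead of spinning on a
 * contended lruvec lock, it returns NULL so the caller can cond_resched()
 * and retry, as release_pages() does in the diff above.
 */
static inline struct lruvec *
folio_lruvec_tryrelock_irqsave(struct folio *folio,
			       struct lruvec *locked_lruvec,
			       unsigned long *flags)
{
	struct lruvec *lruvec;

	if (locked_lruvec) {
		/* Fast path: the folio belongs to the lruvec we already hold. */
		if (folio_matches_lruvec(folio, locked_lruvec))
			return locked_lruvec;

		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
	}

	lruvec = folio_lruvec(folio);
	if (!spin_trylock_irqsave(&lruvec->lru_lock, *flags))
		return NULL;	/* contended: let the caller yield and retry */

	return lruvec;
}

Returning NULL on contention is what makes the cond_resched()/goto retry
loop in the diff possible; whether yielding there actually preserves
fairness is exactly the question raised in this thread.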