Date: Tue, 19 Nov 2019 11:04:56 -0500
From: Johannes Weiner <hannes@cmpxchg.org>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, akpm@linux-foundation.org, mgorman@techsingularity.net,
 tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru,
 daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, willy@infradead.org,
 shakeelb@google.com, Michal Hocko, Vladimir Davydov, Roman Gushchin,
 Chris Down, Thomas Gleixner, Vlastimil Babka, Qian Cai, Andrey Ryabinin,
 "Kirill A. Shutemov", Jérôme Glisse, Andrea Arcangeli, David Rientjes,
 "Aneesh Kumar K.V", swkhack, "Potyra, Stefan", Mike Rapoport,
 Stephen Rothwell, Colin Ian King, Jason Gunthorpe, Mauro Carvalho Chehab,
 Peng Fan, Nikolay Borisov, Ira Weiny, Kirill Tkhai, Yafang Shao
Subject: Re: [PATCH v4 3/9] mm/lru: replace pgdat lru_lock with lruvec lock
Message-ID: <20191119160456.GD382712@cmpxchg.org>
References: <1574166203-151975-1-git-send-email-alex.shi@linux.alibaba.com>
 <1574166203-151975-4-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1574166203-151975-4-git-send-email-alex.shi@linux.alibaba.com>
User-Agent: Mutt/1.12.2 (2019-09-21)

On Tue, Nov 19, 2019 at 08:23:17PM +0800, Alex Shi wrote:
> This patchset moves lru_lock into the lruvec, giving each lruvec its
> own lru_lock and thus one lru_lock per memcg per node.
>
> This is the main patch: it replaces the per-node lru_lock with the
> per-memcg lruvec lock.
>
> We introduce the function lock_page_lruvec. When the memory cgroup is
> unset (w/o memcg), it is the same as the vanilla pgdat lock;
> otherwise the function keeps re-pinning the lruvec's lock to guard
> against page->mem_cgroup changes from page migration between memcgs.
> (Thanks to Hugh Dickins and Konstantin Khlebnikov for the reminder on
> this; the core logic is the same as in their previous patches.)
>
> Following Daniel Jordan's suggestion, I ran 64 'dd' tasks in 32
> containers on my 2-socket * 8-core * HT box with the modified case:
> https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice
>
> With this and the later patches, dd throughput is 144MB/s versus
> 123MB/s on the vanilla kernel, a 17% increase.
>
> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Vladimir Davydov
> Cc: Andrew Morton
> Cc: Roman Gushchin
> Cc: Shakeel Butt
> Cc: Chris Down
> Cc: Thomas Gleixner
> Cc: Mel Gorman
> Cc: Vlastimil Babka
> Cc: Qian Cai
> Cc: Andrey Ryabinin
> Cc: "Kirill A. Shutemov"
> Cc: "Jérôme Glisse"
> Cc: Andrea Arcangeli
> Cc: Yang Shi
> Cc: David Rientjes
> Cc: "Aneesh Kumar K.V"
> Cc: swkhack
> Cc: "Potyra, Stefan"
> Cc: Mike Rapoport
> Cc: Stephen Rothwell
> Cc: Colin Ian King
> Cc: Jason Gunthorpe
> Cc: Mauro Carvalho Chehab
> Cc: Matthew Wilcox
> Cc: Peng Fan
> Cc: Nikolay Borisov
> Cc: Ira Weiny
> Cc: Kirill Tkhai
> Cc: Yafang Shao
> Cc: Konstantin Khlebnikov
> Cc: Hugh Dickins
> Cc: Tejun Heo
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> Cc: cgroups@vger.kernel.org
> ---
>  include/linux/memcontrol.h | 24 +++++++++++++++
>  include/linux/mmzone.h     |  2 ++
>  mm/compaction.c            | 67 ++++++++++++++++++++++++++++-------------
>  mm/huge_memory.c           | 15 ++++------
>  mm/memcontrol.c            | 75 +++++++++++++++++++++++++++++-----------
>  mm/mlock.c                 | 31 ++++++++++---------
>  mm/mmzone.c                |  1 +
>  mm/page_idle.c             |  5 ++--
>  mm/swap.c                  | 74 +++++++++++++++-----------------------
>  mm/vmscan.c                | 58 +++++++++++++++++------------------
>  10 files changed, 214 insertions(+), 138 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 5b86287fa069..9538253998a6 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -418,6 +418,10 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
>
>  struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
>
> +struct lruvec *lock_page_lruvec_irq(struct page *, struct pglist_data *);
> +struct lruvec *lock_page_lruvec_irqsave(struct page *, struct pglist_data *,
> +					unsigned long*);
> +
>  struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
>
>  struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> @@ -901,6 +905,26 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
>  	return &pgdat->__lruvec;
>  }
>
> +static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
> +					struct pglist_data *pgdat)
> +{
> +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +
> +	spin_lock_irq(&lruvec->lru_lock);
> +
> +	return lruvec;

While this works in practice, it looks wrong because it doesn't follow
the mem_cgroup_page_lruvec() rules. Please open-code
spin_lock_irq(&pgdat->__lruvec.lru_lock) instead.

> @@ -1246,6 +1245,46 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
>  	return lruvec;
>  }
>
> +struct lruvec *lock_page_lruvec_irq(struct page *page,
> +					struct pglist_data *pgdat)
> +{
> +	struct lruvec *lruvec;
> +
> +again:
> +	rcu_read_lock();
> +	lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +	spin_lock_irq(&lruvec->lru_lock);
> +	rcu_read_unlock();

The spinlock doesn't prevent the lruvec from being freed. You deleted
the rules from the mem_cgroup_page_lruvec() documentation, but they
still apply: if the page is already !PageLRU() by the time you get
here, it can be reclaimed or migrated to another cgroup, and that can
free the memcg/lruvec. Merely holding the lru_lock does not prevent
this.

Either the page needs to be locked, or the page needs to be PageLRU
with the lru_lock held, to prevent somebody else from isolating
it. Otherwise, the lruvec is not safe to use.
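
To make the open-coding suggestion above concrete, this is roughly
what I have in mind for the !CONFIG_MEMCG stub (sketch only, untested,
based on the hunks quoted above):

static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
					struct pglist_data *pgdat)
{
	/*
	 * Without CONFIG_MEMCG there is exactly one lruvec per node,
	 * embedded in pglist_data, so take its lock directly rather
	 * than going through mem_cgroup_page_lruvec() and implying
	 * lookup rules that don't actually apply here.
	 */
	struct lruvec *lruvec = &pgdat->__lruvec;

	spin_lock_irq(&lruvec->lru_lock);

	return lruvec;
}

That makes it obvious at a glance that no memcg lookup, and none of
its lifetime rules, is involved in this configuration.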
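
And to spell out the lifetime rule for the CONFIG_MEMCG version: the
lock means nothing until you have re-verified, under it, that the page
still belongs to the lruvec you locked. A sketch of that pattern,
again untested and only an illustration of the race, not a drop-in
fix:

struct lruvec *lock_page_lruvec_irq(struct page *page,
					struct pglist_data *pgdat)
{
	struct lruvec *lruvec;

again:
	rcu_read_lock();
	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	spin_lock_irq(&lruvec->lru_lock);
	/*
	 * Recheck under the lock: page->mem_cgroup may have changed
	 * between the lookup and the lock acquisition. RCU keeps the
	 * old lruvec's memory valid across this window.
	 */
	if (unlikely(lruvec != mem_cgroup_page_lruvec(page, pgdat))) {
		spin_unlock_irq(&lruvec->lru_lock);
		rcu_read_unlock();
		goto again;
	}
	rcu_read_unlock();

	return lruvec;
}

Even with the recheck, once rcu_read_unlock() runs the lruvec stays
pinned only for as long as the page cannot leave it: the caller must
hold the page lock, or the page must be PageLRU so that holding
lru_lock prevents it from being isolated.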