Date: Fri, 10 Jul 2020 14:28:31 +0300
From: "Kirill A. Shutemov"
To: Alex Shi
Cc: Hugh Dickins, Matthew Wilcox, Johannes Weiner, akpm@linux-foundation.org,
	mgorman@techsingularity.net, tj@kernel.org, khlebnikov@yandex-team.ru,
	daniel.m.jordan@oracle.com, yang.shi@linux.alibaba.com, lkp@intel.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com
Subject: Re: [PATCH v14 07/20] mm/thp: narrow lru locking
Message-ID: <20200710112831.jrv4hzjzjqtxtc7u@box>
References: <1593752873-4493-1-git-send-email-alex.shi@linux.alibaba.com>
 <1593752873-4493-8-git-send-email-alex.shi@linux.alibaba.com>
 <124eeef1-ff2b-609e-3bf6-a118100c3f2a@linux.alibaba.com>
 <20200706113513.GY25523@casper.infradead.org>
 <20200709154816.wieg5thfejyv2h2l@box>

On Fri, Jul 10, 2020 at 04:23:35PM +0800, Alex Shi wrote:
> 
> 
> On 2020/7/9 11:48 PM, Kirill A. Shutemov wrote:
> > On Mon, Jul 06, 2020 at 09:52:34PM -0700, Hugh Dickins wrote:
> >> On Mon, 6 Jul 2020, Matthew Wilcox wrote:
> >>> On Mon, Jul 06, 2020 at 05:15:09PM +0800, Alex Shi wrote:
> >>>> Hi Kirill & Johannes & Matthew,
> >>
> >> Adding Kirill, who was in the patch's Cc list but not the mail's Cc list.
> >>
> >> I asked Alex to direct this one particularly to Kirill and Johannes
> >> and Matthew because (and I regret that the commit message still does
> >> not make this at all clear) this patch changes the lock ordering:
> >> which for years has been lru_lock outside memcg move_lock outside
> >> i_pages lock, but here is inverted to lru_lock inside i_pages lock.
> >>
> >> I don't see a strong reason to have them one way round or the other,
> >> and think Alex is right that they can safely be reversed here: but
> >> he doesn't actually give any reason for doing so (if cleanup, then
> >> I think the cleanup should have been taken further), and no reason
> >> for doing so as part of this series.
> > 
> > I've looked around and changing the order of lru_lock wrt. the i_pages lock
> > seems safe. I don't have much experience with memcg move_lock.
> 
> Hi Kirill,
> 
> Thanks for the response!
> mem_cgroup_move_account(page) could not reach here, since two things block it:
> 1. isolate_lru_page() before it takes the page off the LRU; this competes with
>    the page reclaim path (list is non-null).
> 
> 2. try_lock_page() in it guards against split_huge_page() (!list).
> 
> > 
> > Alex, if you are going ahead with the patch, please document the locking
> > order. We have some locking orders listed at the beginning of filemap.c
> > and rmap.c.
> 
> Thanks for the reminder!
> Hugh Dickins did this in the above 2 files at the end of the patchset; any comments?
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index f0ae9a6308cb..1b42aaae4d3e 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -101,8 +101,8 @@
>   *      ->swap_lock               (try_to_unmap_one)
>   *      ->private_lock            (try_to_unmap_one)
>   *      ->i_pages lock            (try_to_unmap_one)
> - *      ->pgdat->lru_lock         (follow_page->mark_page_accessed)
> - *      ->pgdat->lru_lock         (check_pte_range->isolate_lru_page)
> + *      ->lruvec->lru_lock        (follow_page->mark_page_accessed)
> + *      ->lruvec->lru_lock        (check_pte_range->isolate_lru_page)
>   *      ->private_lock            (page_remove_rmap->set_page_dirty)
>   *      ->i_pages lock            (page_remove_rmap->set_page_dirty)
>   *      bdi.wb->list_lock         (page_remove_rmap->set_page_dirty)
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d5e56be42f21..926d7d95dc1d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3057,7 +3057,7 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  
>  /*
> - * Because tail pages are not marked as "used", set it. We're under
> + * Because tail pages are not marked as "used", set it. Don't need
>   * lruvec->lru_lock and migration entries setup in all page mappings.
>   */
>  void mem_cgroup_split_huge_fixup(struct page *head)
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 5fe2dedce1fc..7fbc382e6f9e 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -28,12 +28,12 @@
>   *           hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
>   *             anon_vma->rwsem
>   *               mm->page_table_lock or pte_lock
> - *                 pgdat->lru_lock (in mark_page_accessed, isolate_lru_page)
>   *                 swap_lock (in swap_duplicate, swap_info_get)
>   *                   mmlist_lock (in mmput, drain_mmlist and others)
>   *                   mapping->private_lock (in __set_page_dirty_buffers)
> - *                     mem_cgroup_{begin,end}_page_stat (memcg->move_lock)
> + *                     lock_page_memcg move_lock (in __set_page_dirty_buffers)
>   *                     i_pages lock (widely used)
> + *                       lock_page_lruvec_irq lruvec->lru_lock

I think it has to be

	lruvec->lru_lock (in lock_page_lruvec_irq)

No?

>   *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
>   *                   bdi.wb->list_lock (in set_page_dirty's __mark_inode_dirty)
>   *                   sb_lock (within inode_lock in fs/fs-writeback.c)
> 
> > 
> > local_irq_disable() also deserves a comment.
> > 
> 
> Yes, I will add a comment for this. Do you mind giving a Reviewed-by for this patch?

Reviewed-by: Kirill A. Shutemov

--
 Kirill A. Shutemov
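
For context, the nesting order under discussion ends up looking roughly like the
sketch below in __split_huge_page() once the series is applied: interrupts off
first, then the i_pages lock, then lruvec->lru_lock taken inside it. This is a
sketch only, not code from the patch; lock_page_lruvec()/unlock_page_lruvec()
are assumed to be the helpers the series introduces, and exact names and details
may differ from the v14 posting.

/*
 * Sketch (not from the patch): illustrates the lock nesting discussed above.
 */
static void lock_nesting_sketch(struct page *head, struct address_space *mapping)
{
	struct lruvec *lruvec;

	/* block interrupt re-entry for both spinlocks below */
	local_irq_disable();

	if (mapping)
		xa_lock(&mapping->i_pages);	/* outer: i_pages lock */

	lruvec = lock_page_lruvec(head);	/* inner: lruvec->lru_lock */

	/* ... split the compound page, move tail pages on the LRU ... */

	unlock_page_lruvec(lruvec);
	if (mapping)
		xa_unlock(&mapping->i_pages);

	local_irq_enable();
}

Taking lruvec->lru_lock inside the i_pages lock is exactly the inversion of the
historical ordering that Hugh describes above.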