From: Alexander Duyck
Date: Tue, 4 Aug 2020 07:29:58 -0700
Subject: Re: [PATCH v17 21/21] mm/lru: revise the comments of lru_lock
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
 "Kirill A. Shutemov", Rong Chen, Andrey Ryabinin, Jann Horn

On Tue, Aug 4, 2020 at 3:04 AM Alex Shi wrote:
>
> On 2020/8/4 6:37 AM, Alexander Duyck wrote:
> >>
> >> shrink_inactive_list() also diverts any unevictable pages that it finds on the
> >> -inactive lists to the appropriate zone's unevictable list.
> >> +inactive lists to the appropriate node's unevictable list.
> >>
> >> shrink_inactive_list() should only see SHM_LOCK'd pages that became SHM_LOCK'd
> >> after shrink_active_list() had moved them to the inactive list, or pages mapped
> > Same here.
>
> The lruvec is actually used per memcg per node, and it falls back to the node
> if memcg is disabled, so the comments are still right.
>
> And most of the changes just fix up the old zone->lru_lock to pgdat->lru_lock
> change.

Actually in my mind one thing that might work better would be to
explain what the lruvec is and where it resides. Then replace zone
with lruvec since that is really where the unevictable list resides.
Then it would be correct for both the memcg and pgdat case.

> >
> >> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >> index 64ede5f150dc..44738cdb5a55 100644
> >> --- a/include/linux/mm_types.h
> >> +++ b/include/linux/mm_types.h
> >> @@ -78,7 +78,7 @@ struct page {
> >>         struct {        /* Page cache and anonymous pages */
> >>                 /**
> >>                  * @lru: Pageout list, eg. active_list protected by
> >> -                * pgdat->lru_lock.  Sometimes used as a generic list
> >> +                * lruvec->lru_lock.  Sometimes used as a generic list
> >>                  * by the page owner.
> >>                  */
> >>                 struct list_head lru;
> >> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> >> index 8af956aa13cf..c92289a4e14d 100644
> >> --- a/include/linux/mmzone.h
> >> +++ b/include/linux/mmzone.h
> >> @@ -115,7 +115,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
> >>  struct pglist_data;
> >>
> >>  /*
> >> - * zone->lock and the zone lru_lock are two of the hottest locks in the kernel.
> >> + * zone->lock and the lru_lock are two of the hottest locks in the kernel.
> >>   * So add a wild amount of padding here to ensure that they fall into separate
> >>   * cachelines.  There are very few zone structures in the machine, so space
> >>   * consumption is not a concern here.
> > So I don't believe you are using ZONE_PADDING in any way to try and
> > protect the LRU lock currently. At least you aren't using it in the
> > lruvec. As such it might make sense to just drop the reference to the
> > lru_lock here. That reminds me that we still need to review the
> > placement of the lru_lock and determine if there might be a better
> > placement and/or padding that might improve performance when under
> > heavy stress.
> >
>
> Right, does the following look better?
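To make the "explain what the lruvec is and where it resides" point concrete, here is a minimal userspace sketch of the lookup being described: a per-memcg, per-node lruvec holding its own lru_lock, falling back to the node's own lruvec when memcg is disabled. The struct layouts and the `nid` parameter are simplifications for illustration, not the real kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct lruvec {
	int lru_lock;			/* stand-in for the kernel's spinlock_t */
};

struct pglist_data {			/* per-node data */
	struct lruvec __lruvec;		/* fallback lruvec when memcg is off */
};

struct mem_cgroup_per_node {
	struct lruvec lruvec;		/* per-memcg, per-node lruvec */
};

struct mem_cgroup {
	struct mem_cgroup_per_node *nodeinfo[2];	/* toy: two nodes */
};

static bool memcg_disabled = true;

/*
 * Resolve which lruvec (and therefore which lru_lock) covers a page:
 * the memcg's per-node lruvec, or the node's own when memcg is disabled.
 */
static struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
					struct pglist_data *pgdat, int nid)
{
	if (memcg_disabled || !memcg)
		return &pgdat->__lruvec;
	return &memcg->nodeinfo[nid]->lruvec;
}
```

Either way the unevictable list lives in some lruvec, which is why describing it as the lruvec's list stays correct for both the memcg and the pgdat case.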
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index ccc76590f823..0ed520954843 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -113,8 +113,7 @@ static inline bool free_area_empty(struct free_area *area, int migratetype)
>  struct pglist_data;
>
>  /*
> - * zone->lock and the lru_lock are two of the hottest locks in the kernel.
> - * So add a wild amount of padding here to ensure that they fall into separate
> + * Add a wild amount of padding here to ensure datas fall into separate
>   * cachelines.  There are very few zone structures in the machine, so space
>   * consumption is not a concern here.
>   */
>
> Thanks!
> Alex

I would maybe tweak it to make sure it is clear that we are using this
to pad out items that are likely to cause cache thrash, such as the
various hot spinlocks.
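For reference, the effect the padding comment is describing can be sketched in plain C11. This is a userspace analogue of what ZONE_PADDING achieves, not the kernel macro itself: each hot field is forced onto its own cacheline so contention on one lock does not false-share with the other. The 64-byte line size is an assumption; the kernel derives the real value from the arch's L1_CACHE_BYTES.

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

#define CACHELINE_BYTES 64	/* assumed line size for this sketch */

/*
 * Two hot locks padded onto separate cachelines, so that a core
 * spinning on one does not bounce the line holding the other.
 */
struct hot_locks {
	alignas(CACHELINE_BYTES) int lock;	/* e.g. zone->lock */
	alignas(CACHELINE_BYTES) int lru_lock;	/* the lock under discussion */
};
```

Checking the offsets confirms the two fields land on distinct cachelines, which is exactly the property the mmzone.h comment is preserving while the lru_lock mention is dropped.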