Date: Wed, 1 Sep 2021 04:25:01 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Shijie Huang
Cc: torvalds@linux-foundation.org, viro@zeniv.linux.org.uk,
    akpm@linux-foundation.org, linux-mm@kvack.org,
    song.bao.hua@hisilicon.com, linux-kernel@vger.kernel.org,
    Frank Wang
Subject: Re: Is it possible to implement the per-node page cache for programs/libraries?

On Wed, Sep 01, 2021 at 11:07:41AM +0800, Shijie Huang wrote:
> On NUMA systems, we only have one page cache for each file.  For
> programs/shared libraries, remote access takes longer than local
> access.
>
> So, is it possible to implement a per-node page cache for
> programs/libraries?

At this point, we have no way to support text replication within a
process.  So what you're suggesting (if implemented) would only work for
processes which limit themselves to a single node.  That is, if you have
a system with CPUs 0-3 on node 0 and CPUs 4-7 on node 1, a process which
runs only on node 0, or only on node 1, will get its text on the
appropriate node.

If a process runs on both node 0 and node 1, there is no support for
per-node PGDs, so it will get a mix of pages from nodes 0 and 1, and
that doesn't necessarily seem like a big win.  I haven't yet dived into
how hard it would be to make mm->pgd a per-node allocation.

I have been thinking about this a bit; one of our internal performance
teams flagged the potential performance win to me a few months ago.  I
don't have a concrete design for text replication yet; there have been
various attempts over the years, but none were particularly compelling.

By the way, the size of the performance win varies between CPUs, but it
is measurable on all the systems we have tested (from three different
vendors).
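
In the meantime, the only way to get node-local text is from userspace:
confine the process to one node before it starts, so the page cache
pages for its text are faulted in from that node (assuming they are not
already resident on another node, since there is still only one page
cache per file).  Here is a minimal sketch of such a wrapper using
libnuma -- node 0 is just a stand-in, and this is roughly what
`numactl --cpunodebind=0 --preferred=0 program' does:

/*
 * Minimal sketch (not from this thread): run a program confined to a
 * single NUMA node so that its text pages are faulted in locally.
 * Node 0 is a hypothetical choice; build with -lnuma.
 */
#include <numa.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int node = 0;	/* hypothetical target node */

	if (argc < 2) {
		fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
		return 1;
	}

	if (numa_available() < 0) {
		fprintf(stderr, "no NUMA support on this system\n");
		return 1;
	}

	/* Restrict this task (and the image it execs) to the CPUs of 'node'. */
	if (numa_run_on_node(node) < 0) {
		perror("numa_run_on_node");
		return 1;
	}

	/* Prefer allocations from 'node'; fall back elsewhere if it fills up. */
	numa_set_preferred(node);

	execvp(argv[1], &argv[1]);
	perror("execvp");
	return 1;
}

numa_set_preferred() rather than a strict membind keeps the process
runnable if node 0 runs out of memory; allocations just spill to the
other node, which is no worse than what happens today.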