Date: Tue, 22 Feb 2022 11:34:15 -0500
From: Johannes Weiner <hannes@cmpxchg.org>
To: Huang Ying <ying.huang@intel.com>
Cc: Peter Zijlstra, Mel Gorman, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Feng Tang, Baolin Wang,
	Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, Oscar Salvador, Shakeel Butt, zhongjiang-ali,
	Randy Dunlap
Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for memory tiering system
References: <20220221084529.1052339-1-ying.huang@intel.com> <20220221084529.1052339-3-ying.huang@intel.com>
In-Reply-To: <20220221084529.1052339-3-ying.huang@intel.com>

On Mon, Feb 21, 2022 at 04:45:28PM +0800, Huang Ying wrote:
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
> memory subsystem of such a machine can be called a memory tiering
> system, because the performance of the different types of memory
> usually differs.
>
> In such a system, because the memory access pattern changes over
> time, some pages in the slow memory may become globally hot. So in
> this patch, the NUMA balancing mechanism is enhanced to dynamically
> optimize page placement among the different memory types according
> to how hot or cold pages are.
>
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node. The CPUs and the fast memory
> are put in one logical node (called the fast memory node), while the
> slow memory is put in another (faked) logical node (called the slow
> memory node). That is, the fast memory is regarded as local while
> the slow memory is regarded as remote. So it's possible for recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
>
> The original NUMA balancing mechanism stops migrating pages when the
> free memory of the target node falls below the high watermark. This
> is a reasonable policy if there's only one memory type, but it makes
> the original NUMA balancing mechanism almost useless for optimizing
> page placement among different memory types. Details are as follows.
>
> It is common for the working-set size of the workload to be larger
> than the size of the fast memory nodes; otherwise there would be no
> need to use the slow memory at all. So the fast memory nodes almost
> never have enough free pages, and the globally hot pages in the slow
> memory node cannot be promoted to the fast memory node. To solve
> this issue, we have two choices:
>
> a. Ignore the free-pages watermark check when promoting hot pages
>    from the slow memory node to the fast memory node. This will
>    create some memory pressure in the fast memory node and thus
>    trigger memory reclaim, so that the cold pages in the fast memory
>    node will be demoted to the slow memory node.
>
> b. Make kswapd of the fast memory node reclaim pages until the free
>    pages are a little above the high watermark (at a new mark named
>    the promo watermark). Then, if the free pages of the fast memory
>    node reach the high watermark while some hot pages need to be
>    promoted, kswapd of the fast memory node will be woken up to
>    demote more cold pages from the fast memory node to the slow
>    memory node. This frees extra space in the fast memory node, so
>    the hot pages in the slow memory node can be promoted to the fast
>    memory node.
>
> Choice "a" may create high memory pressure in the fast memory node.
> If the memory pressure of the workload is already high, the combined
> pressure may become high enough to affect the memory allocation
> latency of the workload, e.g. by triggering direct reclaim.
>
> Choice "b" works much better in this respect. If the memory pressure
> of the workload is high, hot page promotion will stop earlier,
> because its allocation watermark is higher than that of normal
> memory allocations. So in this patch, choice "b" is implemented, and
> a new zone watermark (WMARK_PROMO) is added. It is larger than the
> high watermark and can be controlled via watermark_scale_factor.
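
For anyone skimming the thread, here's a minimal userspace sketch of
the promotion check this describes. The watermark ordering and the
WMARK_PROMO name follow the changelog above; the struct, the helper
names and the kswapd stub are illustrative, not the patch's actual
code:

#include <stdbool.h>
#include <stdio.h>

/*
 * Watermark ordering per the changelog: WMARK_PROMO sits above
 * WMARK_HIGH and scales with watermark_scale_factor.
 */
enum zone_watermarks {
	WMARK_MIN,
	WMARK_LOW,
	WMARK_HIGH,
	WMARK_PROMO,
	NR_WMARK
};

struct fast_node {
	unsigned long free_pages;
	unsigned long wmark[NR_WMARK];
};

/* Stand-in for waking the node's kswapd to demote cold pages. */
static void wake_kswapd(struct fast_node *node)
{
	(void)node;
	printf("kswapd: demoting cold pages to the slow node\n");
}

/*
 * Promotion backs off at the promo watermark, which is higher than
 * the WMARK_HIGH floor that regular allocations get to use, so under
 * pressure promotions stop before they can push the node into direct
 * reclaim.
 */
static bool try_promote(struct fast_node *node, unsigned long nr_pages)
{
	if (node->free_pages >= node->wmark[WMARK_PROMO] + nr_pages)
		return true;	/* enough headroom: promote now */

	wake_kswapd(node);	/* make room for the next attempt */
	return false;
}

The upshot: promotions fail soft. Instead of competing for the same
headroom as regular allocations, they back off at a higher floor and
put kswapd to work demoting cold pages.
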
> In addition to the original page placement optimization among
> sockets, the NUMA balancing mechanism is extended to optimize page
> placement according to hot/cold among different memory types. So the
> sysctl user space interface (numa_balancing) is extended in a
> backward compatible way as follows, so that users can enable/disable
> these functionalities individually.
>
> The sysctl is converted from a Boolean value to a bit field. The
> flags are defined as:
>
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
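
And a small sketch of what the bit field implies for consumers: the
flag values are exactly the ones listed above, while the mode variable
and helper names are illustrative:

#include <stdbool.h>

/* Flag values as listed in the changelog. */
#define NUMA_BALANCING_DISABLED		0x0
#define NUMA_BALANCING_NORMAL		0x1
#define NUMA_BALANCING_MEMORY_TIERING	0x2

/* Illustrative mode word; stands in for whatever the sysctl sets. */
static int numa_balancing_mode = NUMA_BALANCING_NORMAL;

/* Classic cross-socket balancing enabled? */
static bool numa_balancing_normal(void)
{
	return numa_balancing_mode & NUMA_BALANCING_NORMAL;
}

/* Hot-page promotion between memory tiers enabled? */
static bool numa_tiering_promotion(void)
{
	return numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING;
}
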
> We have tested the patch with the pmbench memory accessing
> benchmark, using an 80:20 read/write ratio and a Gaussian access
> address distribution, on a 2-socket Intel server with Optane DC
> Persistent Memory. The test results show that the pmbench score can
> improve by up to 95.9%.
>
> Thanks to Andrew Morton for helping to fix the documentation format
> error.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Baolin Wang
> Reviewed-by: Baolin Wang
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Peter Zijlstra
> Cc: Dave Hansen
> Cc: Yang Shi
> Cc: Zi Yan
> Cc: Wei Xu
> Cc: Oscar Salvador
> Cc: Shakeel Butt
> Cc: zhongjiang-ali
> Cc: Randy Dunlap
> Cc: Johannes Weiner
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org

Looks good to me,
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
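
Btw, one note on the interface: since numa_balancing is now a bit
field, writing 3 (NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING)
should enable both the classic socket balancing and the tiering
promotion at once, while 0 and 1 keep their old Boolean meanings, so
existing setups keep working unchanged.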