From: "Huang, Ying" <ying.huang@intel.com>
To: Miaohe Lin
Cc: Feng Tang, Baolin Wang, Michal Hocko, Rik van Riel, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, Oscar Salvador, Shakeel Butt, zhongjiang-ali, Randy Dunlap, Johannes Weiner, Peter Zijlstra, Mel Gorman, Andrew Morton
Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for memory tiering system
Date: Tue, 01 Mar 2022 14:47:50 +0800
Message-ID: <874k4i2mp5.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <4652446e-2089-a3c4-fbdb-321322887392@huawei.com> (Miaohe Lin's message of "Tue, 1 Mar 2022 14:28:13 +0800")
References: <20220221084529.1052339-1-ying.huang@intel.com> <20220221084529.1052339-3-ying.huang@intel.com> <4652446e-2089-a3c4-fbdb-321322887392@huawei.com>

Miaohe Lin writes:

> On 2022/2/21 16:45, Huang Ying wrote:
>>
>> With the advent of various new memory types, some machines will have
>> multiple types of memory, e.g. DRAM and PMEM (persistent memory). The
>> memory subsystem of such a machine can be called a memory tiering
>> system, because the performance of the different types of memory
>> usually differs.
>>
>> In such a system, because memory access patterns change over time,
>> some pages in the slow memory may become hot globally. So in this
>> patch, the NUMA balancing mechanism is enhanced to optimize page
>> placement among the different memory types dynamically, according to
>> whether pages are hot or cold.
>>
>> In a typical memory tiering system, each physical NUMA node contains
>> CPUs, fast memory, and slow memory. The CPUs and the fast memory are
>> put in one logical node (called the fast memory node), while the slow
>> memory is put in another (fake) logical node (called the slow memory
>> node). That is, the fast memory is regarded as local while the slow
>> memory is regarded as remote. So it's possible for recently accessed
>> pages in the slow memory node to be promoted to the fast memory node
>> via the existing NUMA balancing mechanism.
>>
>> The original NUMA balancing mechanism stops migrating pages once the
>> free memory of the target node falls below the high watermark. This
>> is a reasonable policy when there is only one memory type, but it
>> makes the original NUMA balancing mechanism almost useless for
>> optimizing page placement among different memory types. Details are
>> as follows.
>>
>> It is common for the working-set size of the workload to be larger
>> than the size of the fast memory nodes; otherwise it would be
>> unnecessary to use the slow memory at all. So there are almost never
>> enough free pages in the fast memory nodes, and the globally hot
>> pages in the slow memory node cannot be promoted to the fast memory
>> node. To solve this, we have two choices:
>>
>> a.
Ignore the free pages watermark check when promoting hot pages
>> from the slow memory node to the fast memory node. This will
>> create some memory pressure in the fast memory node and thus
>> trigger memory reclaim, so that the cold pages in the fast memory
>> node will be demoted to the slow memory node.
>>
>> b. Make kswapd of the fast memory node reclaim pages until the free
>> pages are a little above the high watermark (at a new watermark named
>> the promo watermark). Then, if the free pages of the fast memory node
>> reach the high watermark and some hot pages need to be promoted,
>> kswapd of the fast memory node will be woken up to demote more cold
>> pages from the fast memory node to the slow memory node. This frees
>> some extra space in the fast memory node, so the hot pages in the
>> slow memory node can be promoted to the fast memory node.
>>
>> The choice "a" may create high memory pressure in the fast memory
>> node. If the memory pressure of the workload is high, the memory
>> pressure may become so high that the memory allocation latency of
>> the workload is affected, e.g. direct reclaim may be triggered.
>>
>> The choice "b" works much better in this respect. If the memory
>> pressure of the workload is high, the hot page promotion will stop
>> earlier because its allocation watermark is higher than that of the
>
> Many thanks for your patch. The patch looks good to me, but I have a
> question. WMARK_PROMO is only used inside pgdat_balanced() when
> NUMA_BALANCING_MEMORY_TIERING is set. So its allocation watermark
> seems to be the same as that of normal memory allocation. How should
> I understand the above sentence? Am I missing something?

Before allocating pages for promotion, the watermark of the fast
memory node will be checked (please refer to migrate_balanced_pgdat()).
If the free pages would drop below the high watermark, the promotion
will be aborted.

Best Regards,
Huang, Ying
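[For readers following the thread: the promotion check and the kswapd wake-up described above can be modeled with a small userspace C sketch. This is a simplification, not the kernel implementation; the struct fields and the `can_promote()`/`try_promote()` helpers are illustrative assumptions, with only the logic (and the name `migrate_balanced_pgdat()`) taken from the discussion.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of a fast memory node's watermarks (choice "b").
 * In the kernel, watermarks live in struct zone; these fields are
 * illustrative only. */
struct node_state {
    unsigned long nr_free;     /* free pages in the fast memory node */
    unsigned long wmark_high;  /* high watermark */
    unsigned long wmark_promo; /* promo watermark, a little above high */
    bool kswapd_running;
};

/* Models the check in migrate_balanced_pgdat(): promotion is allowed
 * only if the allocation would not push free pages below the high
 * watermark. The nr_free >= nr_pages guard avoids unsigned wraparound. */
static bool can_promote(const struct node_state *n, unsigned long nr_pages)
{
    return n->nr_free >= nr_pages &&
           n->nr_free - nr_pages >= n->wmark_high;
}

/* If promotion is blocked at the high watermark, wake kswapd, which
 * demotes cold pages until free pages climb back to wmark_promo,
 * making room for the hot pages waiting in the slow memory node. */
static void try_promote(struct node_state *n, unsigned long nr_pages)
{
    if (!can_promote(n, nr_pages))
        n->kswapd_running = true;
}
```

With free pages just above the high watermark, a small promotion passes the check; a larger one fails and wakes kswapd instead of stalling the workload in direct reclaim, which is the latency advantage of choice "b" over choice "a".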