From: "Huang, Ying"
To: Oscar Salvador
Cc: Peter Zijlstra, Mel Gorman, Andrew Morton, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Feng Tang, Baolin Wang, Michal Hocko,
	Rik van Riel, Dave Hansen, Yang Shi, Zi Yan, Wei Xu, Shakeel Butt,
	zhongjiang-ali, Randy Dunlap, Johannes Weiner
Subject: Re: [PATCH -V13 2/3] NUMA balancing: optimize page placement for
	memory tiering system
References: <20220221084529.1052339-1-ying.huang@intel.com>
	<20220221084529.1052339-3-ying.huang@intel.com>
Date: Tue, 01 Mar 2022 09:16:18 +0800
In-Reply-To: (Oscar Salvador's message of "Mon, 28 Feb 2022 16:54:35 +0100")
Message-ID: <87czj6321p.fsf@yhuang6-desk2.ccr.corp.intel.com>

Hi, Oscar,

Oscar Salvador writes:

> On Mon, Feb 21, 2022 at 04:45:28PM +0800, Huang Ying wrote:
>> b. Make kswapd of the fast memory node reclaim pages until the free
>> pages are a little more than the high watermark (named the promo
>> watermark). Then, if the free pages of the fast memory node reaches
>> high watermark, and some hot pages need to be promoted, kswapd of the
>> fast memory node will be woken up to demote more cold pages from the
>> fast memory node to the slow memory node. This will free some extra
>> space in the fast memory node, so the hot pages in the slow memory
>> node can be promoted to the fast memory node.
>
> The patch looks good to me, but I think I might be confused by the
> wording here.
>
> IIUC, we define a new wmark (wmark_promo) which is higher than
> wmark_high.  When we cannot migrate a page to another NUMA node
> because it has less than wmark_high free pages, we wake up kswapd, and
> we keep reclaiming until we either have wmark_promo pages free when
> NUMA_BALANCING_MEMORY_TIERING, or wmark_high pages free. Is that
> right?

Yes.  And we only wake up kswapd for promotion when
NUMA_BALANCING_MEMORY_TIERING is enabled.

> Because above you say "Then, if the free pages of the fast memory node
> reaches high watermark, and some hot pages need to be promoted..."

What I wanted to say is that if the free pages of the fast memory node
will drop below the high watermark, and some hot pages need to be
promoted...  That is, "reaches high watermark" here means going from
"free pages above the high watermark" to "free pages at or below the
high watermark".  This wording appears confusing.

> but that should read promo watermark instead? Am I missing something?

Sorry for the confusion.  How about the following?

  b. Make kswapd of the fast memory node reclaim pages until the free
     pages are a little more than the high watermark (named the promo
     watermark).  If we want to promote some hot pages from the slow
     memory node to the fast memory node, but the free pages of the
     fast memory node would become lower than the high watermark after
     the promotion, we will first wake up kswapd of the fast memory
     node to demote more cold pages from the fast memory node to the
     slow memory node.  This will free some extra space in the fast
     memory node, so the hot pages in the slow memory node can be
     promoted to the fast memory node.

Best Regards,
Huang, Ying
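
P.S. Perhaps a small self-contained sketch makes the watermark logic
easier to follow.  This is only an illustration in plain C, not the
kernel code itself; WMARK_PROMO and the helper names can_promote(),
kswapd_target(), and try_promote() are made up for the example.

#include <stdbool.h>
#include <stdio.h>

enum zone_watermarks {
	WMARK_MIN,
	WMARK_LOW,
	WMARK_HIGH,
	WMARK_PROMO,	/* new: a little above WMARK_HIGH */
	NR_WMARK
};

struct zone_sketch {
	unsigned long free_pages;
	unsigned long watermark[NR_WMARK];
};

/* Can nr_pages be promoted now without dropping below the high watermark? */
static bool can_promote(const struct zone_sketch *fast, unsigned long nr_pages)
{
	return fast->free_pages >= fast->watermark[WMARK_HIGH] + nr_pages;
}

/*
 * kswapd's balance target on the fast memory node: with memory tiering
 * enabled it keeps reclaiming (demoting cold pages) until the promo
 * watermark is met, leaving headroom above WMARK_HIGH for promotions.
 */
static unsigned long kswapd_target(const struct zone_sketch *fast, bool tiering)
{
	return fast->watermark[tiering ? WMARK_PROMO : WMARK_HIGH];
}

/* Promotion path: if the fast node is too full, wake kswapd to demote. */
static bool try_promote(struct zone_sketch *fast, unsigned long nr_pages,
			bool tiering)
{
	if (can_promote(fast, nr_pages))
		return true;			/* promote right away */
	if (tiering)
		printf("wake kswapd, reclaim until %lu pages are free\n",
		       kswapd_target(fast, tiering));
	return false;				/* retry after demotion */
}

int main(void)
{
	struct zone_sketch fast = {
		.free_pages = 1000,
		.watermark = { [WMARK_HIGH] = 1024, [WMARK_PROMO] = 1536 },
	};

	/* Only 1000 pages free, below WMARK_HIGH: demote first, retry later. */
	if (!try_promote(&fast, 16, true))
		printf("promotion deferred until kswapd frees space\n");
	return 0;
}

The extra headroom kept free up to the promo watermark is what lets hot
pages from the slow memory node be promoted without waiting for the
fast memory node to be reclaimed on demand.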