From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 18 Nov 2022 14:33:39 -0800
From: Minchan Kim
To: Yu Zhao
Cc: Andrew Morton , linux-mm@kvack.org
Subject: Re: [PATCH 1/2] mm: multi-gen LRU: retry folios written back while isolated
Message-ID:
References: <20221116013808.3995280-1-yuzhao@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
On Fri, Nov 18, 2022 at 02:51:01PM -0700, Yu Zhao wrote:
> On Fri, Nov 18, 2022 at 2:25 PM Minchan Kim wrote:
> >
> > On Thu, Nov 17, 2022 at 06:40:05PM -0700, Yu Zhao wrote:
> > > On Thu, Nov 17, 2022 at 6:26 PM Minchan Kim wrote:
> > > >
> > > > On Thu, Nov 17, 2022 at 03:22:42PM -0700, Yu Zhao wrote:
> > > > > On Thu, Nov 17, 2022 at 12:47 AM Minchan Kim wrote:
> > > > > >
> > > > > > On Tue, Nov 15, 2022 at 06:38:07PM -0700, Yu Zhao wrote:
> > > > > > > The page reclaim isolates a batch of folios from the tail of one of
> > > > > > > the LRU lists and works on those folios one by one. For a suitable
> > > > > > > swap-backed folio, if the swap device is async, it queues that folio
> > > > > > > for writeback. After the page reclaim finishes an entire batch, it
> > > > > > > puts back the folios it queued for writeback to the head of the
> > > > > > > original LRU list.
> > > > > > >
> > > > > > > In the meantime, the page writeback flushes the queued folios also by
> > > > > > > batches. Its batching logic is independent from that of the page
> > > > > > > reclaim. For each of the folios it writes back, the page writeback
> > > > > > > calls folio_rotate_reclaimable() which tries to rotate a folio to the
> > > > > > > tail.
> > > > > > >
> > > > > > > folio_rotate_reclaimable() only works for a folio after the page
> > > > > > > reclaim has put it back. If an async swap device is fast enough, the
> > > > > > > page writeback can finish with that folio while the page reclaim is
> > > > > > > still working on the rest of the batch containing it. In this case,
> > > > > > > that folio will remain at the head and the page reclaim will not retry
> > > > > > > it before reaching there.
> > > > > > >
> > > > > > > This patch adds a retry to evict_folios(). After evict_folios() has
> > > > > > > finished an entire batch and before it puts back folios it cannot free
> > > > > > > immediately, it retries those that may have missed the rotation.
> > > > > >
> > > > > > Can we make something like this?
> > > > >
> > > > > This works for both the active/inactive LRU and MGLRU.
> > > >
> > > > I hope we fix both altogether.
> > > >
> > > > > But it's not my preferred way because of these two subtle differences:
> > > > > 1. Folios eligible for retry take an unnecessary round trip below --
> > > > > they are first added to the LRU list and then removed from there for
> > > > > retry. For high speed swap devices, the LRU lock contention is already
> > > > > quite high (>10% in CPU profile under heavy memory pressure). So I'm
> > > > > hoping we can avoid this round trip.
> > > > > 2. The number of retries of a folio on folio_wb_list is unlimited,
> > > > > whereas this patch limits the retry to one. So in theory, we can spin
> > > > > on a bunch of folios that keep failing.
> > > > >
> > > > > The most ideal solution would be to have the one-off retry logic in
> > > > > shrink_folio_list(). But right now, that function is very cluttered. I
> > > > > plan to refactor it (low priority at the moment), and probably after
> > > > > that, we can add a generic retry for both the active/inactive LRU and
> > > > > MGLRU. I'll raise its priority if you strongly prefer this. Please
> > > > > feel free to let me know.
> > > >
> > > > Well, my preference for the *ideal solution* is that writeback completion
> > > > drops the page immediately without LRU rotating. IIRC, the concern when I
> > > > tried it was softirq latency and the locking involved in that context.
> > >
> > > Are we good for now or are there other ideas we want to try while we are at it?
> >
> > good for now with what solution you are thinking? The retry logic you
> > suggested? I personally don't like a solution that relies on the timing.
> >
> > If you are concerned about the unnecessary round trip, it shouldn't
> > happen frequently since your assumption is the swap device is so fast
> > that the second loop would see their wb done?
>
> No, the round trip that hits the LRU lock in the process.

I see what you meant.

> For folios written and ready to be freed, they'll have to go from
> being isolated to the tail of LRU list and then to getting isolated
> again. This requires an extra hit on the LRU lock, which is highly
> contended for fast swap devices under heavy memory pressure.
>
> > Anyway, I am strongly push my preference. Feel free to go with way

Oh, sorry for the typo: "not strongly push my preference"

> > you want if the solution can fix both LRU schemes.
>
> There is another concern I listed previously:
>
> > > > > 2. The number of retries of a folio on folio_wb_list is unlimited,
> > > > > whereas this patch limits the retry to one. So in theory, we can spin
> > > > > on a bunch of folios that keep failing.
>
> If this can happen, it'd be really hard to track it down. Any thoughts on this?

Could you elaborate why folio_wb_list can keep spinning?

My concern is how we can make sure the timing bet is good for most
workloads on heterogeneous/dvfs frequency core control env.

> I share your desire to fix both. But I don't think we can just dismiss
> the two points I listed above. They are reasonable, aren't they?