From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 25 Jan 2024 01:03:19 -0800
Subject: Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()
To: Chengming Zhou
Cc: Johannes Weiner, Andrew Morton, Nhat Pham, Chris Li, Huang Ying,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <35c3b0e5-a5eb-44b2-aa7d-3167f4603c73@bytedance.com>
References: <20240120024007.2850671-1-yosryahmed@google.com>
 <20240120024007.2850671-3-yosryahmed@google.com>
 <20240122201906.GA1567330@cmpxchg.org>
 <20240123153851.GA1745986@cmpxchg.org>
 <20240123201234.GC1745986@cmpxchg.org>
 <1496dce3-a4bb-4ccf-92d6-701a45b67da3@bytedance.com>
 <35c3b0e5-a5eb-44b2-aa7d-3167f4603c73@bytedance.com>

> >>>> The second difference is the handling of the lru entry, which is easy:
> >>>> we just zswap_lru_del() under the tree lock.
> >>>
> >>> Why do we need zswap_lru_del() at all? We should have already isolated
> >>> the entry at that point IIUC.
> >>
> >> I was thinking about how to handle the zswap_lru_putback() when we do not
> >> write back, in which case we cannot actually use the entry since we have
> >> not taken a reference on it. So we could avoid isolating the entry upfront,
> >> and only zswap_lru_del() when we are actually going to write back.
> >
> > Why not just call zswap_lru_putback() before we unlock the folio?
>
> When we return early because __read_swap_cache_async() returns NULL or
> !folio_was_allocated, we don't have a locked folio yet. The entry may be
> invalidated and freed concurrently.

Oh, that path, right.

If we don't isolate the entry straight away, concurrent reclaimers will see
the same entry, call __read_swap_cache_async(), find the folio already in the
swapcache, and stop shrinking. Usually that means we are racing with swapin
and hitting the warmer part of the zswap LRU, so I am not sure whether it
matters in practice; maybe Nhat knows better.

Perhaps we could rotate the entry in the LRU before calling
__read_swap_cache_async() to minimize the chances of such a race? Or we could
serialize the calls to __read_swap_cache_async(), but that may be overkill.
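For the rotation idea, I am picturing roughly the sketch below. It is
completely untested; zswap_lru_rotate() is a hypothetical helper that does
not exist today (it would just delete the entry from its list_lru and re-add
it at the tail), and the surrounding lines only approximate the current
zswap_writeback_entry() flow:

	/*
	 * Hypothetical, untested sketch: rotate the entry to the tail of
	 * its LRU *before* the swapcache lookup, so a concurrent reclaimer
	 * walking the LRU picks a different entry instead of finding this
	 * folio already in the swapcache and bailing out of shrinking.
	 */
	zswap_lru_rotate(&entry->pool->list_lru, entry);

	mpol = get_task_policy(current);
	folio = __read_swap_cache_async(swpentry, GFP_KERNEL, mpol,
					NO_INTERLEAVE_INDEX,
					&folio_was_allocated, true);
	if (!folio)
		return -ENOMEM;

	if (!folio_was_allocated) {
		/*
		 * Raced with swapin: the entry has already been rotated
		 * above, so no zswap_lru_putback() is needed on this early
		 * return path and no locked folio is required.
		 */
		folio_put(folio);
		return -EEXIST;
	}

	/* ... proceed with decompression and writeback as before ... */

That would keep the early-return paths free of any LRU manipulation that
needs a reference on the entry, which was the problematic part above.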