Subject: Re: [PATCH 11/15] mm/swap: add helper swap_offset_available()
From: Miaohe Lin
To: NeilBrown
Date: Tue, 10 May 2022 10:03:19 +0800
Message-ID: <9319a62b-f43d-8ee9-77b9-a1afee7dbc10@huawei.com>
In-Reply-To: <165214355418.14782.13896859043718755300@noble.neil.brown.name>
References: <20220509131416.17553-1-linmiaohe@huawei.com> <20220509131416.17553-12-linmiaohe@huawei.com> <165214355418.14782.13896859043718755300@noble.neil.brown.name>

On 2022/5/10 8:45, NeilBrown wrote:
> On Mon, 09 May 2022, Miaohe Lin wrote:
>> Add helper swap_offset_available() to remove some duplicated codes.
>> Minor readability improvement.
>
> I don't think that putting the spin_lock() inside the inline helper is
> good for readability.
> If the function was called
> swap_offset_available_and_locked()

Yes, swap_offset_available_and_locked() would be more suitable, since we take the spin_lock() inside the helper. Will do this in the next version. Thanks!

>
> it might be ok. Otherwise I would rather the spin_lock() was called
> when the function returned true.
>
> Thanks,
> NeilBrown
>
>>
>> Signed-off-by: Miaohe Lin
>> ---
>>  mm/swapfile.c | 33 +++++++++++++++++----------------
>>  1 file changed, 17 insertions(+), 16 deletions(-)
>>
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index c90298a0561a..d5d3e2d03d28 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -776,6 +776,21 @@ static void set_cluster_next(struct swap_info_struct *si, unsigned long next)
>>  	this_cpu_write(*si->cluster_next_cpu, next);
>>  }
>>
>> +static inline bool swap_offset_available(struct swap_info_struct *si, unsigned long offset)
>> +{
>> +	if (data_race(!si->swap_map[offset])) {
>> +		spin_lock(&si->lock);
>> +		return true;
>> +	}
>> +
>> +	if (vm_swap_full() && READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
>> +		spin_lock(&si->lock);
>> +		return true;
>> +	}
>> +
>> +	return false;
>> +}
>> +
>>  static int scan_swap_map_slots(struct swap_info_struct *si,
>>  			       unsigned char usage, int nr,
>>  			       swp_entry_t slots[])
>> @@ -953,15 +968,8 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>  scan:
>>  	spin_unlock(&si->lock);
>>  	while (++offset <= READ_ONCE(si->highest_bit)) {
>> -		if (data_race(!si->swap_map[offset])) {
>> -			spin_lock(&si->lock);
>> +		if (swap_offset_available(si, offset))
>>  			goto checks;
>> -		}
>> -		if (vm_swap_full() &&
>> -		    READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
>> -			spin_lock(&si->lock);
>> -			goto checks;
>> -		}
>>  		if (unlikely(--latency_ration < 0)) {
>>  			cond_resched();
>>  			latency_ration = LATENCY_LIMIT;
>> @@ -970,15 +978,8 @@ static int scan_swap_map_slots(struct swap_info_struct *si,
>>  	}
>>  	offset = si->lowest_bit;
>>  	while (offset < scan_base) {
>> -		if (data_race(!si->swap_map[offset])) {
>> -			spin_lock(&si->lock);
>> +		if (swap_offset_available(si, offset))
>>  			goto checks;
>> -		}
>> -		if (vm_swap_full() &&
>> -		    READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
>> -			spin_lock(&si->lock);
>> -			goto checks;
>> -		}
>>  		if (unlikely(--latency_ration < 0)) {
>>  			cond_resched();
>>  			latency_ration = LATENCY_LIMIT;
>> --
>> 2.23.0
>>
>>
> .
>
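
For reference, a minimal sketch of what the rename agreed above might look like. This is only an illustration under the assumption that the helper body stays exactly as in the patch and only the name changes to make the locking side effect visible to callers; it is not the posted next version:

/*
 * Sketch: same logic as swap_offset_available() in the patch above,
 * renamed so callers can see that the helper returns true with
 * si->lock held and false without taking the lock.
 */
static inline bool swap_offset_available_and_locked(struct swap_info_struct *si,
						    unsigned long offset)
{
	if (data_race(!si->swap_map[offset])) {
		spin_lock(&si->lock);
		return true;
	}

	if (vm_swap_full() && READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
		spin_lock(&si->lock);
		return true;
	}

	return false;
}

With only the name changed, the two call sites in scan_swap_map_slots() would presumably read
	if (swap_offset_available_and_locked(si, offset))
		goto checks;
exactly as in the patch, apart from the helper name.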