From: Kairui Song <ryncsn@gmail.com>
Date: Fri, 3 Jan 2025 16:07:07 +0800
Subject: Re: [PATCH v3 06/13] mm, swap: clean up plist removal and adding
To: Baoquan He
Cc: linux-mm@kvack.org, Andrew Morton, Chris Li, Barry Song, Ryan Roberts, Hugh Dickins, Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Kalesh Singh, linux-kernel@vger.kernel.org
On Thu, Jan 2, 2025 at 4:59 PM Baoquan He wrote:
>
> Hi Kairui,
>
> On 12/31/24 at 01:46am, Kairui Song wrote:
> ......snip...
> > diff --git a/mm/swapfile.c b/mm/swapfile.c
> > index 7963a0c646a4..e6e58cfb5178 100644
> > --- a/mm/swapfile.c
> > +++ b/mm/swapfile.c
> > @@ -128,6 +128,26 @@ static inline unsigned char swap_count(unsigned char ent)
> >         return ent & ~SWAP_HAS_CACHE;  /* may include COUNT_CONTINUED flag */
> >  }
>
> I am reading the swap code, and while at it I am going through this
> patchset too. I have some nitpicks; please see the inline comments
> below.

Thanks!

>
> > +/*
> > + * Use the second highest bit of inuse_pages counter as the indicator
> > + * of if one swap device is on the available plist, so the atomic can
>           ~~ redundant?
> > + * still be updated arithmetic while having special data embedded.
>                        ~~~~~~~~~~ typo, arithmetically?
> > + *
> > + * inuse_pages counter is the only thing indicating if a device should
> > + * be on avail_lists or not (except swapon / swapoff). By embedding the
> > + * on-list bit in the atomic counter, updates no longer need any lock
>       ~~~ off-list?

Ah, right, some typos, I will fix these.

> > + * to check the list status.
> > + *
> > + * This bit will be set if the device is not on the plist and not
> > + * usable, will be cleared if the device is on the plist.
> > + */
> > +#define SWAP_USAGE_OFFLIST_BIT (1UL << (BITS_PER_TYPE(atomic_t) - 2))
> > +#define SWAP_USAGE_COUNTER_MASK (~SWAP_USAGE_OFFLIST_BIT)
> > +static long swap_usage_in_pages(struct swap_info_struct *si)
> > +{
> > +       return atomic_long_read(&si->inuse_pages) & SWAP_USAGE_COUNTER_MASK;
> > +}
> > +
> >  /* Reclaim the swap entry anyway if possible */
> >  #define TTRS_ANYWAY            0x1
> >  /*
> > @@ -717,7 +737,7 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
> >         int nr_reclaim;
> >
> >         if (force)
> > -               to_scan = si->inuse_pages / SWAPFILE_CLUSTER;
> > +               to_scan = swap_usage_in_pages(si) / SWAPFILE_CLUSTER;
> >
> >         while (!list_empty(&si->full_clusters)) {
> >                 ci = list_first_entry(&si->full_clusters, struct swap_cluster_info, list);
> > @@ -872,42 +892,128 @@ static unsigned long cluster_alloc_swap_entry(struct swap_info_struct *si, int o
> >                 return found;
> >  }
> >
> > -static void __del_from_avail_list(struct swap_info_struct *si)
> > +/* SWAP_USAGE_OFFLIST_BIT can only be cleared by this helper. */
>
> Seems it just says the opposite. The off-list bit is set in
> this function.

Right, the comments are reversed... I will fix them.
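While I am at it, a quick recap of the scheme for anyone else following
the thread: the second highest bit of inuse_pages doubles as the
off-list flag, so the allocation and free paths can detect the "became
full" and "usable again" transitions without taking swap_avail_lock
first. Below is a minimal userspace model of just the counter
transitions (made-up names, plain C11 atomics; the plist handling and
the re-check against a concurrent refill are omitted), an illustration
only, not the kernel code:

/*
 * Minimal model of the flag-in-counter scheme. All names are made up
 * for illustration; list manipulation itself is left out.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define OFFLIST_BIT   (1UL << 62)       /* a high bit reserved as the flag */
#define COUNTER_MASK  (~OFFLIST_BIT)

static atomic_ulong inuse;                    /* counter + embedded flag */
static const unsigned long pages = 4;         /* device capacity in slots */

/* Set the flag only on the exact "full, flag clear" value, so any
 * concurrent free (or another winner) makes the cmpxchg fail. */
static bool try_go_offlist(void)
{
        unsigned long expected = pages;

        return atomic_compare_exchange_strong(&inuse, &expected,
                                              pages | OFFLIST_BIT);
}

/* Roughly what swap_usage_add() does: a set flag makes val != pages,
 * so only the updater that fills the device sees the transition. */
static bool usage_add(unsigned long nr)
{
        unsigned long val = atomic_fetch_add(&inuse, nr) + nr;

        return val == pages && try_go_offlist();
}

/* Roughly what swap_usage_sub() does: seeing the flag after a free
 * means the device may be usable again, so clear it and go back
 * on-list. (The kernel also re-checks for a concurrent refill.) */
static bool usage_sub(unsigned long nr)
{
        unsigned long val = atomic_fetch_sub(&inuse, nr) - nr;

        if (val & OFFLIST_BIT) {
                atomic_fetch_and(&inuse, COUNTER_MASK);
                return true;    /* caller would plist_add() here */
        }
        return false;
}

int main(void)
{
        printf("filled, went off-list: %d\n", usage_add(pages)); /* 1 */
        printf("freed, back on-list:   %d\n", usage_sub(1));     /* 1 */
        return 0;
}

The key property is that the cmpxchg succeeds only on the exact "full,
flag clear" value, so exactly one updater wins the off-list transition
even with concurrent frees.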
> > +static void del_from_avail_list(struct swap_info_struct *si, bool swapoff)
> >  {
> >         int nid;
> > +       unsigned long pages;
> > +
> > +       spin_lock(&swap_avail_lock);
> > +
> > +       if (swapoff) {
> > +               /*
> > +                * Forcefully remove it. Clear the SWP_WRITEOK flags for
> > +                * swapoff here so it's synchronized by both si->lock and
> > +                * swap_avail_lock, to ensure the result can be seen by
> > +                * add_to_avail_list.
> > +                */
> > +               lockdep_assert_held(&si->lock);
> > +               si->flags &= ~SWP_WRITEOK;
> > +               atomic_long_or(SWAP_USAGE_OFFLIST_BIT, &si->inuse_pages);
> > +       } else {
> > +               /*
> > +                * If not called by swapoff, take it off-list only if it's
> > +                * full and SWAP_USAGE_OFFLIST_BIT is not set (strictly
> > +                * si->inuse_pages == pages), any concurrent slot freeing,
> > +                * or device already removed from plist by someone else
> > +                * will make this return false.
> > +                */
> > +               pages = si->pages;
> > +               if (!atomic_long_try_cmpxchg(&si->inuse_pages, &pages,
> > +                                            pages | SWAP_USAGE_OFFLIST_BIT))
> > +                       goto skip;
> > +       }
> >
> > -       assert_spin_locked(&si->lock);
> >         for_each_node(nid)
> >                 plist_del(&si->avail_lists[nid], &swap_avail_heads[nid]);
> > +
> > +skip:
> > +       spin_unlock(&swap_avail_lock);
> >  }
> >
> > -static void del_from_avail_list(struct swap_info_struct *si)
> > +/* SWAP_USAGE_OFFLIST_BIT can only be set by this helper. */
>
> Ditto.
>
> > +static void add_to_avail_list(struct swap_info_struct *si, bool swapon)
> >  {
> > +       int nid;
> > +       long val;
> > +       unsigned long pages;
> > +
> >         spin_lock(&swap_avail_lock);
> > -       __del_from_avail_list(si);
> > +
> > +       /* Corresponding to SWP_WRITEOK clearing in del_from_avail_list */
> > +       if (swapon) {
> > +               lockdep_assert_held(&si->lock);
> > +               si->flags |= SWP_WRITEOK;
> > +       } else {
> > +               if (!(READ_ONCE(si->flags) & SWP_WRITEOK))
> > +                       goto skip;
> > +       }
> > +
> > +       if (!(atomic_long_read(&si->inuse_pages) & SWAP_USAGE_OFFLIST_BIT))
> > +               goto skip;
> > +
> > +       val = atomic_long_fetch_and_relaxed(~SWAP_USAGE_OFFLIST_BIT, &si->inuse_pages);
> > +
> > +       /*
> > +        * When device is full and device is on the plist, only one updater will
> > +        * see (inuse_pages == si->pages) and will call del_from_avail_list. If
> > +        * that updater happen to be here, just skip adding.
> > +        */
> > +       pages = si->pages;
> > +       if (val == pages) {
> > +               /* Just like the cmpxchg in del_from_avail_list */
> > +               if (atomic_long_try_cmpxchg(&si->inuse_pages, &pages,
> > +                                           pages | SWAP_USAGE_OFFLIST_BIT))
> > +                       goto skip;
> > +       }
> > +
> > +       for_each_node(nid)
> > +               plist_add(&si->avail_lists[nid], &swap_avail_heads[nid]);
> > +
> > +skip:
> >         spin_unlock(&swap_avail_lock);
> >  }
> >
> > -static void swap_range_alloc(struct swap_info_struct *si,
> > -                            unsigned int nr_entries)
> > +/*
> > + * swap_usage_add / swap_usage_sub of each slot are serialized by ci->lock
>
> Not sure if swap_inuse_add()/swap_inuse_sub() or swap_inuse_cnt_add/sub()
> is better, because it mixes with the usage of si->swap_map[offset].
> Anyway, not strong opinion.
>
> > + * within each cluster, so the total contribution to the global counter should
> > + * always be positive and cannot exceed the total number of usable slots.
> > + */
> > +static bool swap_usage_add(struct swap_info_struct *si, unsigned int nr_entries)
> >  {
> > -       WRITE_ONCE(si->inuse_pages, si->inuse_pages + nr_entries);
> > -       if (si->inuse_pages == si->pages) {
> > -               del_from_avail_list(si);
> > +       long val = atomic_long_add_return_relaxed(nr_entries, &si->inuse_pages);
> >
> > -               if (si->cluster_info && vm_swap_full())
> > -                       schedule_work(&si->reclaim_work);
> > +       /*
> > +        * If device is full, and SWAP_USAGE_OFFLIST_BIT is not set,
> > +        * remove it from the plist.
> > +        */
> > +       if (unlikely(val == si->pages)) {
> > +               del_from_avail_list(si, false);
> > +               return true;
> >         }
> > +
> > +       return false;
> >  }
> >
> > -static void add_to_avail_list(struct swap_info_struct *si)
> > +static void swap_usage_sub(struct swap_info_struct *si, unsigned int nr_entries)
> >  {
> > -       int nid;
> > +       long val = atomic_long_sub_return_relaxed(nr_entries, &si->inuse_pages);
> >
> > -       spin_lock(&swap_avail_lock);
> > -       for_each_node(nid)
> > -               plist_add(&si->avail_lists[nid], &swap_avail_heads[nid]);
> > -       spin_unlock(&swap_avail_lock);
> > +       /*
> > +        * If device is not full, and SWAP_USAGE_OFFLIST_BIT is set,
> > +        * remove it from the plist.
> > +        */
> > +       if (unlikely(val & SWAP_USAGE_OFFLIST_BIT))
> > +               add_to_avail_list(si, false);
> > +}
> > +
> > +static void swap_range_alloc(struct swap_info_struct *si,
> > +                            unsigned int nr_entries)
> > +{
> > +       if (swap_usage_add(si, nr_entries)) {
> > +               if (si->cluster_info && vm_swap_full())
>
> We may not need to check si->cluster_info here since it always exists now.

Good catch, it can indeed be dropped as an optimization. A previous
patch in this series was supposed to drop all of these checks; I think
I forgot this one.
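Dropping it would leave something like the below (just a sketch of a
possible follow-up on top of this patch, not a posted change):

static void swap_range_alloc(struct swap_info_struct *si,
                             unsigned int nr_entries)
{
        /* Device just became full: reclaim cached slots if swap is tight. */
        if (swap_usage_add(si, nr_entries)) {
                if (vm_swap_full())
                        schedule_work(&si->reclaim_work);
        }
}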
> > +                       schedule_work(&si->reclaim_work);
> > +       }
> >  }
> >
> >  static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
> > @@ -925,8 +1031,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
> >         for (i = 0; i < nr_entries; i++)
> >                 clear_bit(offset + i, si->zeromap);
> >
> > -       if (si->inuse_pages == si->pages)
> > -               add_to_avail_list(si);
> >         if (si->flags & SWP_BLKDEV)
> >                 swap_slot_free_notify =
> >                         si->bdev->bd_disk->fops->swap_slot_free_notify;
> > @@ -946,7 +1050,7 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
> >          */
> >         smp_wmb();
> >         atomic_long_add(nr_entries, &nr_swap_pages);
> > -       WRITE_ONCE(si->inuse_pages, si->inuse_pages - nr_entries);
> > +       swap_usage_sub(si, nr_entries);
> >  }
> >
> >  static int cluster_alloc_swap(struct swap_info_struct *si,
> > @@ -1036,19 +1140,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
> >                         plist_requeue(&si->avail_lists[node], &swap_avail_heads[node]);
> >                         spin_unlock(&swap_avail_lock);
> >                         spin_lock(&si->lock);
> > -                       if ((si->inuse_pages == si->pages) || !(si->flags & SWP_WRITEOK)) {
> > -                               spin_lock(&swap_avail_lock);
> > -                               if (plist_node_empty(&si->avail_lists[node])) {
> > -                                       spin_unlock(&si->lock);
> > -                                       goto nextsi;
> > -                               }
> > -                               WARN(!(si->flags & SWP_WRITEOK),
> > -                                    "swap_info %d in list but !SWP_WRITEOK\n",
> > -                                    si->type);
> > -                               __del_from_avail_list(si);
> > -                               spin_unlock(&si->lock);
> > -                               goto nextsi;
> > -                       }
> >                         n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
> >                                                     n_goal, swp_entries, order);
> >                         spin_unlock(&si->lock);
> > @@ -1057,7 +1148,6 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_order)
> >                 cond_resched();
> >
> >                 spin_lock(&swap_avail_lock);
> > -nextsi:
> >                 /*
> >                  * if we got here, it's likely that si was almost full before,
> >                  * and since scan_swap_map_slots() can drop the si->lock,
> > @@ -1789,7 +1879,7 @@ unsigned int count_swap_pages(int type, int free)
> >                 if (sis->flags & SWP_WRITEOK) {
> >                         n = sis->pages;
> >                         if (free)
> > -                               n -= sis->inuse_pages;
> > +                               n -= swap_usage_in_pages(sis);
> >                 }
> >                 spin_unlock(&sis->lock);
> >         }
> > @@ -2124,7 +2214,7 @@ static int try_to_unuse(unsigned int type)
> >         swp_entry_t entry;
> >         unsigned int i;
> >
> > -       if (!READ_ONCE(si->inuse_pages))
> > +       if (!swap_usage_in_pages(si))
> >                 goto success;
> >
> > retry:
> > @@ -2137,7 +2227,7 @@
> >
> >         spin_lock(&mmlist_lock);
> >         p = &init_mm.mmlist;
> > -       while (READ_ONCE(si->inuse_pages) &&
> > +       while (swap_usage_in_pages(si) &&
> >                !signal_pending(current) &&
> >                (p = p->next) != &init_mm.mmlist) {
> >
> > @@ -2165,7 +2255,7 @@ static int try_to_unuse(unsigned int type)
> >                 mmput(prev_mm);
> >
> >         i = 0;
> > -       while (READ_ONCE(si->inuse_pages) &&
> > +       while (swap_usage_in_pages(si) &&
> >                !signal_pending(current) &&
> >                (i = find_next_to_unuse(si, i)) != 0) {
> >
> > @@ -2200,7 +2290,7 @@ static int try_to_unuse(unsigned int type)
> >          * folio_alloc_swap(), temporarily hiding that swap.  It's easy
> >          * and robust (though cpu-intensive) just to keep retrying.
> >          */
> > -       if (READ_ONCE(si->inuse_pages)) {
> > +       if (swap_usage_in_pages(si)) {
> >                 if (!signal_pending(current))
> >                         goto retry;
> >                 return -EINTR;
> > @@ -2209,7 +2299,7 @@ static int try_to_unuse(unsigned int type)
> >  success:
> >         /*
> >          * Make sure that further cleanups after try_to_unuse() returns happen
> > -        * after swap_range_free() reduces si->inuse_pages to 0.
> > +        * after swap_range_free() reduces inuse_pages to 0.
>
> Here, I personally think the original si->inuse_pages may be better.

I updated this comment to keep people from misusing the field directly;
anyway, it's a trivial comment, I can keep it unchanged.
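For what it's worth, the misuse risk is that the raw atomic may carry
the off-list bit, so reading or comparing si->inuse_pages directly can
give wrong numbers. Continuing the userspace model from earlier in the
thread (same made-up names):

/* Like swap_usage_in_pages(): strip the flag before reporting. */
static unsigned long usage_in_pages(void)
{
        return atomic_load(&inuse) & COUNTER_MASK;
}

/* e.g. after atomic_store(&inuse, 3UL | OFFLIST_BIT):
 *   atomic_load(&inuse)  -> 0x4000000000000003  (flag visible)
 *   usage_in_pages()     -> 3                   (flag masked away)
 * so "usage_in_pages() == 0" remains a valid emptiness check even
 * while the device is held off-list. */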