From: "Huang, Ying" <ying.huang@intel.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, david@redhat.com, hughd@google.com,
osandov@fb.com, linux-fsdevel@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: swapfile: fix SSD detection with swapfile on btrfs
Date: Tue, 26 Mar 2024 13:47:45 +0800 [thread overview]
Message-ID: <87plvho872.fsf@yhuang6-desk2.ccr.corp.intel.com> (raw)
In-Reply-To: <20240322164956.422815-1-hannes@cmpxchg.org> (Johannes Weiner's message of "Fri, 22 Mar 2024 12:42:21 -0400")
Hi, Johannes,
Johannes Weiner <hannes@cmpxchg.org> writes:
> +static struct swap_cluster_info *setup_clusters(struct swap_info_struct *p,
> + unsigned char *swap_map)
> +{
> + unsigned long nr_clusters = DIV_ROUND_UP(p->max, SWAPFILE_CLUSTER);
> + unsigned long col = p->cluster_next / SWAPFILE_CLUSTER % SWAP_CLUSTER_COLS;
> + struct swap_cluster_info *cluster_info;
> + unsigned long i, j, k, idx;
> + int cpu, err = -ENOMEM;
> +
> + cluster_info = kvcalloc(nr_clusters, sizeof(*cluster_info), GFP_KERNEL);
> if (!cluster_info)
> - return nr_extents;
> + goto err;
> +
> + for (i = 0; i < nr_clusters; i++)
> + spin_lock_init(&cluster_info[i].lock);
>
> + p->cluster_next_cpu = alloc_percpu(unsigned int);
> + if (!p->cluster_next_cpu)
> + goto err_free;
> +
> + /* Random start position to help with wear leveling */
> + for_each_possible_cpu(cpu)
> + per_cpu(*p->cluster_next_cpu, cpu) =
> + get_random_u32_inclusive(1, p->highest_bit);
> +
> + p->percpu_cluster = alloc_percpu(struct percpu_cluster);
> + if (!p->percpu_cluster)
> + goto err_free;
> +
> + for_each_possible_cpu(cpu) {
> + struct percpu_cluster *cluster;
> +
> + cluster = per_cpu_ptr(p->percpu_cluster, cpu);
> + cluster_set_null(&cluster->index);
> + }
> +
> + /*
> + * Mark unusable pages as unavailable. The clusters aren't
> + * marked free yet, so no list operations are involved yet.
> + */
> + for (i = 0; i < round_up(p->max, SWAPFILE_CLUSTER); i++)
> + if (i >= p->max || swap_map[i] == SWAP_MAP_BAD)
> + inc_cluster_info_page(p, cluster_info, i);
If p->max is large, it seems better to use a loop like the one below?
	for (i = 0; i < swap_header->info.nr_badpages; i++) {
		/* check i and inc_cluster_info_page() */
	}
In most cases, swap_header->info.nr_badpages should be much smaller than
p->max.
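For concreteness, a rough sketch (assuming swap_header is also passed into
setup_clusters(), which the patch does not do yet) could be:

	/* Sketch only: assumes swap_header is available here. */

	/* Mark the known-bad pages as unavailable. */
	for (i = 0; i < swap_header->info.nr_badpages; i++) {
		unsigned int page_nr = swap_header->info.badpages[i];

		if (page_nr < p->max)
			inc_cluster_info_page(p, cluster_info, page_nr);
	}

	/* Mark the tail of the last, partial cluster as unavailable. */
	for (i = p->max; i < round_up(p->max, SWAPFILE_CLUSTER); i++)
		inc_cluster_info_page(p, cluster_info, i);

That keeps the work proportional to nr_badpages plus at most
SWAPFILE_CLUSTER - 1 tail pages, instead of scanning all of p->max.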
> +
> + cluster_list_init(&p->free_clusters);
> + cluster_list_init(&p->discard_clusters);
>
> /*
> * Reduce false cache line sharing between cluster_info and
> @@ -2994,7 +3019,13 @@ static int setup_swap_map_and_extents(struct swap_info_struct *p,
> idx);
> }
> }
> - return nr_extents;
> +
> + return cluster_info;
> +
> +err_free:
> + kvfree(cluster_info);
> +err:
> + return ERR_PTR(err);
> }
>
[snip]
--
Best Regards,
Huang, Ying