From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 25 Feb 2025 14:43:21 +0800
From: Baoquan He <bhe@redhat.com>
To: Kairui Song
Cc: linux-mm@kvack.org, Andrew Morton, Chris Li, Barry Song, Hugh Dickins,
	Yosry Ahmed, "Huang, Ying", Nhat Pham, Johannes Weiner, Baolin Wang,
	Kalesh Singh, Matthew Wilcox, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 7/7] mm, swap: simplify folio swap allocation
References: <20250224180212.22802-1-ryncsn@gmail.com> <20250224180212.22802-8-ryncsn@gmail.com>
In-Reply-To: <20250224180212.22802-8-ryncsn@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On 02/25/25 at 02:02am, Kairui Song wrote:
......snip...
> @@ -1265,20 +1249,68 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
> 			goto start_over;
> 	}
> 	spin_unlock(&swap_avail_lock);
> -out_failed:
> +	return false;
> +}
> +
> +/**
> + * folio_alloc_swap - allocate swap space for a folio
> + * @folio: folio we want to move to swap
> + * @gfp: gfp mask for shadow nodes
> + *
> + * Allocate swap space for the folio and add the folio to the
> + * swap cache.
> + *
> + * Context: Caller needs to hold the folio lock.
> + * Return: Whether the folio was added to the swap cache.

If it only reports whether the folio was added to the swap cache or not,
it would be better to return a bool value for now. Anyway, this is
trivial. The whole patch looks good to me.

Reviewed-by: Baoquan He <bhe@redhat.com>

> + */
> +int folio_alloc_swap(struct folio *folio, gfp_t gfp)
> +{
> +	unsigned int order = folio_order(folio);
> +	unsigned int size = 1 << order;
> +	swp_entry_t entry = {};
> +
> +	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> +	VM_BUG_ON_FOLIO(!folio_test_uptodate(folio), folio);
> +
> +	/*
> +	 * Should not even be attempting large allocations when huge
> +	 * page swap is disabled. Warn and fail the allocation.
> +	 */
> +	if (order && (!IS_ENABLED(CONFIG_THP_SWAP) || size > SWAPFILE_CLUSTER)) {
> +		VM_WARN_ON_ONCE(1);
> +		return -EINVAL;
> +	}
> +
> +	local_lock(&percpu_swap_cluster.lock);
> +	if (swap_alloc_fast(&entry, SWAP_HAS_CACHE, order))
> +		goto out_alloced;
> +	if (swap_alloc_slow(&entry, SWAP_HAS_CACHE, order))
> +		goto out_alloced;
> 	local_unlock(&percpu_swap_cluster.lock);
> -	return entry;
> +	return -ENOMEM;
>
> out_alloced:
> 	local_unlock(&percpu_swap_cluster.lock);
> -	if (mem_cgroup_try_charge_swap(folio, entry)) {
> -		put_swap_folio(folio, entry);
> -		entry.val = 0;
> -	} else {
> -		atomic_long_sub(size, &nr_swap_pages);
> -	}
> +	if (mem_cgroup_try_charge_swap(folio, entry))
> +		goto out_free;
>
> -	return entry;
> +	/*
> +	 * XArray node allocations from PF_MEMALLOC contexts could
> +	 * completely exhaust the page allocator. __GFP_NOMEMALLOC
> +	 * stops emergency reserves from being allocated.
> +	 *
> +	 * TODO: this could cause a theoretical memory reclaim
> +	 * deadlock in the swap out path.
> +	 */
> +	if (add_to_swap_cache(folio, entry, gfp | __GFP_NOMEMALLOC, NULL))
> +		goto out_free;
> +
> +	atomic_long_sub(size, &nr_swap_pages);
> +	return 0;
> +
> +out_free:
> +	put_swap_folio(folio, entry);
> +	return -ENOMEM;
> }
>
> static struct swap_info_struct *_swap_info_get(swp_entry_t entry)
....snip....