From: Pankaj Raghav
To: Suren Baghdasaryan, Ryan Roberts, Vlastimil Babka, Baolin Wang,
 Borislav Petkov, Ingo Molnar, "H. Peter Anvin", Zi Yan, Mike Rapoport,
 Dave Hansen, Michal Hocko, David Hildenbrand, Lorenzo Stoakes,
 Andrew Morton, Thomas Gleixner, Nico Pache, Dev Jain,
 "Liam R. Howlett", Jens Axboe
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-block@vger.kernel.org, willy@infradead.org, x86@kernel.org,
 linux-fsdevel@vger.kernel.org, "Darrick J. Wong", mcgrof@kernel.org,
 gost.dev@samsung.com, kernel@pankajraghav.com, hch@lst.de,
 Pankaj Raghav
Subject: [RFC 3/3] block: use mm_huge_zero_folio in __blkdev_issue_zero_pages()
Date: Tue, 27 May 2025 07:04:52 +0200
Message-ID: <20250527050452.817674-4-p.raghav@samsung.com>
In-Reply-To: <20250527050452.817674-1-p.raghav@samsung.com>
References: <20250527050452.817674-1-p.raghav@samsung.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Use mm_huge_zero_folio in __blkdev_issue_zero_pages(). Fall back to
ZERO_PAGE if mm_huge_zero_folio is not available.

On systems that allocate mm_huge_zero_folio, we will end up sending
larger bvecs instead of multiple small ones.

I noticed a 4% increase in performance on a commercial NVMe SSD that
does not support OP_WRITE_ZEROES. The device's MDTS was 128K. The
performance gains might be bigger if the device supports a bigger MDTS.
Signed-off-by: Pankaj Raghav
---
 block/blk-lib.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/block/blk-lib.c b/block/blk-lib.c
index 4c9f20a689f7..0fd55e028170 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -4,6 +4,7 @@
  */
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/huge_mm.h>
 #include <linux/bio.h>
 #include <linux/blkdev.h>
 #include <linux/scatterlist.h>
@@ -196,6 +197,12 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 		sector_t sector, sector_t nr_sects, gfp_t gfp_mask,
 		struct bio **biop, unsigned int flags)
 {
+	struct folio *zero_folio;
+
+	zero_folio = mm_get_huge_zero_folio(NULL);
+	if (!zero_folio)
+		zero_folio = page_folio(ZERO_PAGE(0));
+
 	while (nr_sects) {
 		unsigned int nr_vecs = __blkdev_sectors_to_bio_pages(nr_sects);
 		struct bio *bio;
@@ -208,11 +215,12 @@ static void __blkdev_issue_zero_pages(struct block_device *bdev,
 			break;
 
 		do {
-			unsigned int len, added;
+			unsigned int len, added = 0;
 
-			len = min_t(sector_t,
-				    PAGE_SIZE, nr_sects << SECTOR_SHIFT);
-			added = bio_add_page(bio, ZERO_PAGE(0), len, 0);
+			len = min_t(sector_t, folio_size(zero_folio),
+				    nr_sects << SECTOR_SHIFT);
+			if (bio_add_folio(bio, zero_folio, len, 0))
+				added = len;
 			if (added < len)
 				break;
 			nr_sects -= added >> SECTOR_SHIFT;
-- 
2.47.2