From mboxrd@z Thu Jan  1 00:00:00 1970
From: Matthew Wilcox <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v6 50/51] mm: Add THP readahead
Date: Wed, 10 Jun 2020 13:13:44 -0700
Message-Id: <20200610201345.13273-51-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200610201345.13273-1-willy@infradead.org>
References: <20200610201345.13273-1-willy@infradead.org>
MIME-Version: 1.0

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

If the filesystem supports THPs, allocate larger pages in the readahead
code when it seems worth doing.  The heuristic for choosing larger page
sizes will surely need some tuning, but this aggressive ramp-up seems
good for testing.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/readahead.c | 93 ++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 87 insertions(+), 6 deletions(-)
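To see what the "aggressive ramp-up" does in isolation, here is a minimal
standalone sketch of the order-growth step from page_cache_readahead_order()
in the diff below.  It is not kernel code: it assumes HPAGE_PMD_ORDER == 9
(4kB base pages with 2MB PMDs, as on x86-64), and ramp_up() is an
illustrative helper standing in for the inline logic in the patch:

	/*
	 * Userspace sketch of the readahead order ramp-up; assumes
	 * HPAGE_PMD_ORDER == 9 (4kB pages, 2MB PMDs, x86-64).
	 */
	#include <stdio.h>

	#define HPAGE_PMD_ORDER	9

	/* Grow the order by two each round, cap it at PMD order, and
	 * never let one page overrun the readahead window (ra_size,
	 * measured in base pages). */
	static unsigned int ramp_up(unsigned int order, unsigned long ra_size)
	{
		if (order < HPAGE_PMD_ORDER) {
			order += 2;
			if (order > HPAGE_PMD_ORDER)
				order = HPAGE_PMD_ORDER;
			while ((1UL << order) > ra_size)
				order--;
		}
		return order;
	}

	int main(void)
	{
		unsigned long ra_size = 512;	/* 2MB window of 4kB pages */
		unsigned int order = 0;
		int round;

		for (round = 0; round < 6; round++) {
			printf("round %d: order %u (%lukB)\n",
			       round, order, 4UL << order);
			order = ramp_up(order, ra_size);
		}
		return 0;
	}

With a 512-page (2MB) window the printed order climbs 0, 2, 4, 6, 8, 9;
a smaller ra->size clamps it earlier via the trailing while loop.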
diff --git a/mm/readahead.c b/mm/readahead.c
index 74c7e1eff540..98bbcc986b39 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -149,7 +149,7 @@ static void read_pages(struct readahead_control *rac, struct list_head *pages,
 
 	blk_finish_plug(&plug);
 
-	BUG_ON(!list_empty(pages));
+	BUG_ON(pages && !list_empty(pages));
 	BUG_ON(readahead_count(rac));
 
 out:
@@ -428,13 +428,92 @@ static int try_context_readahead(struct address_space *mapping,
 	return 1;
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline int ra_alloc_page(struct readahead_control *rac, pgoff_t index,
+		pgoff_t mark, unsigned int order, gfp_t gfp)
+{
+	int err;
+	struct page *page = __page_cache_alloc_order(gfp, order);
+
+	if (!page)
+		return -ENOMEM;
+	if (mark - index < (1UL << order))
+		SetPageReadahead(page);
+	err = add_to_page_cache_lru(page, rac->mapping, index, gfp);
+	if (err)
+		put_page(page);
+	else
+		rac->_nr_pages += 1UL << order;
+	return err;
+}
+
+static bool page_cache_readahead_order(struct readahead_control *rac,
+		struct file_ra_state *ra, unsigned int order)
+{
+	struct address_space *mapping = rac->mapping;
+	unsigned int old_order = order;
+	pgoff_t index = readahead_index(rac);
+	pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
+	pgoff_t mark = index + ra->size - ra->async_size;
+	int err = 0;
+	gfp_t gfp = readahead_gfp_mask(mapping);
+
+	if (!mapping_thp_support(mapping))
+		return false;
+
+	limit = min(limit, index + ra->size - 1);
+
+	/* Grow page size up to PMD size */
+	if (order < HPAGE_PMD_ORDER) {
+		order += 2;
+		if (order > HPAGE_PMD_ORDER)
+			order = HPAGE_PMD_ORDER;
+		while ((1 << order) > ra->size)
+			order--;
+	}
+
+	/* If size is somehow misaligned, fill with order-0 pages */
+	while (!err && index & ((1UL << old_order) - 1))
+		err = ra_alloc_page(rac, index++, mark, 0, gfp);
+
+	while (!err && index & ((1UL << order) - 1)) {
+		err = ra_alloc_page(rac, index, mark, old_order, gfp);
+		index += 1UL << old_order;
+	}
+
+	while (!err && index <= limit) {
+		err = ra_alloc_page(rac, index, mark, order, gfp);
+		index += 1UL << order;
+	}
+
+	if (index > limit) {
+		ra->size += index - limit - 1;
+		ra->async_size += index - limit - 1;
+	}
+
+	read_pages(rac, NULL, false);
+
+	/*
+	 * If there were already pages in the page cache, then we may have
+	 * left some gaps.  Let the regular readahead code take care of this
+	 * situation.
+	 */
+	return !err;
+}
+#else
+static bool page_cache_readahead_order(struct readahead_control *rac,
+		struct file_ra_state *ra, unsigned int order)
+{
+	return false;
+}
+#endif
+
 /*
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static void ondemand_readahead(struct address_space *mapping,
 		struct file_ra_state *ra, struct file *file,
-		bool hit_readahead_marker, pgoff_t index,
-		unsigned long req_size)
+		struct page *page, pgoff_t index, unsigned long req_size)
 {
 	DEFINE_READAHEAD(rac, file, mapping, index);
 	struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
@@ -473,7 +552,7 @@ static void ondemand_readahead(struct address_space *mapping,
 	 * Query the pagecache for async_size, which normally equals to
 	 * readahead size. Ramp it up and use it as the new readahead size.
 	 */
-	if (hit_readahead_marker) {
+	if (page) {
 		pgoff_t start;
 
 		rcu_read_lock();
@@ -544,6 +623,8 @@ static void ondemand_readahead(struct address_space *mapping,
 	}
 
 	rac._index = ra->start;
+	if (page && page_cache_readahead_order(&rac, ra, thp_order(page)))
+		return;
 	__do_page_cache_readahead(&rac, ra->size, ra->async_size);
 }
 
@@ -578,7 +659,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
 	}
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, false, index, req_count);
+	ondemand_readahead(mapping, ra, filp, NULL, index, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_readahead);
 
@@ -624,7 +705,7 @@ page_cache_async_readahead(struct address_space *mapping,
 		return;
 
 	/* do read-ahead */
-	ondemand_readahead(mapping, ra, filp, true, index, req_count);
+	ondemand_readahead(mapping, ra, filp, page, index, req_count);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);
 
-- 
2.26.2
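As a postscript, the three fill loops in page_cache_readahead_order() can be
hard to picture, so here is a small userspace walkthrough that prints the
allocation plan instead of touching the page cache.  The index values are
made up for illustration (start index 5, old order 2, ramped order 4,
limit 63), and alloc() is a hypothetical stand-in for ra_alloc_page():

	/*
	 * Userspace walkthrough of the three fill loops; alloc() stands
	 * in for ra_alloc_page() and only prints what would be allocated.
	 */
	#include <stdio.h>

	static void alloc(unsigned long index, unsigned int order)
	{
		printf("alloc order-%u page at index %lu\n", order, index);
	}

	int main(void)
	{
		unsigned long index = 5, limit = 63;
		unsigned int old_order = 2, order = 4;

		/* Order-0 pages until aligned to the old order */
		while (index & ((1UL << old_order) - 1))
			alloc(index++, 0);

		/* Old-order pages until aligned to the new order */
		while (index & ((1UL << order) - 1)) {
			alloc(index, old_order);
			index += 1UL << old_order;
		}

		/* New-order pages out to the end of the window */
		while (index <= limit) {
			alloc(index, order);
			index += 1UL << order;
		}
		return 0;
	}

Running it shows order-0 pages at indices 5-7 to reach an order-2 boundary,
order-2 pages at 8 and 12 to reach an order-4 boundary, then order-4 pages
at 16, 32 and 48.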