From: Liu Shixin <liushixin2@huawei.com>
To: Alexander Viro, Christian Brauner, Jan Kara, Matthew Wilcox, Andrew Morton
Subject: Re: [PATCH 2/2] mm/readahead: limit sync readahead while too many active refault
Date: Tue, 5 Mar 2024 15:07:30 +0800
Message-ID: <09e871aa-bbe6-47a8-4aea-e2a1674366a1@huawei.com>
In-Reply-To: <20240201100835.1626685-3-liushixin2@huawei.com>
References: <20240201100835.1626685-1-liushixin2@huawei.com> <20240201100835.1626685-3-liushixin2@huawei.com>
Hi Jan, all,

Please take another look at this patch. Although this may not be a graceful
approach, I can't think of any other way to fix the problem except using the
workingset information.

Thanks,

On 2024/2/1 18:08, Liu Shixin wrote:
> When the page fault is not for write and the refault distance is close,
> the page will be activated directly.
> If there are too many such pages in a file, it means the pages may be
> reclaimed again soon. In such a situation, read-ahead has no benefit
> since it will only waste IO. So count the number of such pages, and
> when the number grows too large, stop bothering with read-ahead for a
> while until it decreases automatically.
>
> Define 'too large' as 10000 empirically, which solves the problem and
> is not affected by the occasional active refault.
>
> Signed-off-by: Liu Shixin
> ---
>  include/linux/fs.h      |  2 ++
>  include/linux/pagemap.h |  1 +
>  mm/filemap.c            | 16 ++++++++++++++++
>  mm/readahead.c          |  4 ++++
>  4 files changed, 23 insertions(+)
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index ed5966a704951..f2a1825442f5a 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -960,6 +960,7 @@ struct fown_struct {
>   * the first of these pages is accessed.
>   * @ra_pages: Maximum size of a readahead request, copied from the bdi.
>   * @mmap_miss: How many mmap accesses missed in the page cache.
> + * @active_refault: Number of active page refault.
>   * @prev_pos: The last byte in the most recent read request.
>   *
>   * When this structure is passed to ->readahead(), the "most recent"
> @@ -971,6 +972,7 @@ struct file_ra_state {
>  	unsigned int async_size;
>  	unsigned int ra_pages;
>  	unsigned int mmap_miss;
> +	unsigned int active_refault;
>  	loff_t prev_pos;
>  };
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index 2df35e65557d2..da9eaf985dec4 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -1256,6 +1256,7 @@ struct readahead_control {
>  	pgoff_t _index;
>  	unsigned int _nr_pages;
>  	unsigned int _batch_count;
> +	unsigned int _active_refault;
>  	bool _workingset;
>  	unsigned long _pflags;
>  };
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 750e779c23db7..4de80592ab270 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -3037,6 +3037,7 @@ loff_t mapping_seek_hole_data(struct address_space *mapping, loff_t start,
>
>  #ifdef CONFIG_MMU
>  #define MMAP_LOTSAMISS (100)
> +#define ACTIVE_REFAULT_LIMIT (10000)
>  /*
>   * lock_folio_maybe_drop_mmap - lock the page, possibly dropping the mmap_lock
>   * @vmf - the vm_fault for this fault.
> @@ -3142,6 +3143,18 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	if (mmap_miss > MMAP_LOTSAMISS)
>  		return fpin;
>
> +	ractl._active_refault = READ_ONCE(ra->active_refault);
> +	if (ractl._active_refault)
> +		WRITE_ONCE(ra->active_refault, --ractl._active_refault);
> +
> +	/*
> +	 * If there are a lot of refault of active pages in this file,
> +	 * that means the memory reclaim is ongoing. Stop bothering with
> +	 * read-ahead since it will only waste IO.
> +	 */
> +	if (ractl._active_refault >= ACTIVE_REFAULT_LIMIT)
> +		return fpin;
> +
>  	/*
>  	 * mmap read-around
>  	 */
> @@ -3151,6 +3164,9 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
>  	ra->async_size = ra->ra_pages / 4;
>  	ractl._index = ra->start;
>  	page_cache_ra_order(&ractl, ra, 0);
> +
> +	WRITE_ONCE(ra->active_refault, ractl._active_refault);
> +
>  	return fpin;
>  }
>
> diff --git a/mm/readahead.c b/mm/readahead.c
> index cc4abb67eb223..d79bb70a232c4 100644
> --- a/mm/readahead.c
> +++ b/mm/readahead.c
> @@ -263,6 +263,10 @@ void page_cache_ra_unbounded(struct readahead_control *ractl,
>  		folio_set_readahead(folio);
>  		ractl->_workingset |= folio_test_workingset(folio);
>  		ractl->_nr_pages++;
> +		if (unlikely(folio_test_workingset(folio)))
> +			ractl->_active_refault++;
> +		else if (unlikely(ractl->_active_refault))
> +			ractl->_active_refault--;
>  	}
>
>  /*