Date: Mon, 22 Jan 2024 23:05:20 +1100
From: Dave Chinner <david@fromorbit.com>
To: Christoph Hellwig
Cc: "Darrick J. Wong", linux-xfs@vger.kernel.org, willy@infradead.org,
	linux-mm@kvack.org
Subject: Re: [PATCH 2/3] xfs: use folios in the buffer cache
References: <20240118222216.4131379-1-david@fromorbit.com>
	<20240118222216.4131379-3-david@fromorbit.com>
	<20240119012624.GQ674499@frogsfrogsfrogs>

On Sun, Jan 21, 2024 at 10:39:12PM -0800, Christoph Hellwig wrote:
> On Thu, Jan 18, 2024 at 05:26:24PM -0800, Darrick J. Wong wrote:
> > Ugh, pointer casting.  I suppose here is where we might want an
> > alloc_folio_bulk_array that might give us successively smaller
> > large folios until b_page_count is satisfied?  (Maybe that's in
> > the next patch?)
> >
> > I guess you'd also need a large-folio capable vm_map_ram.
>
> We need to just stop using vm_map_ram; there is no reason to do
> that even right now.  It was needed when we used the page cache to
> back pagebuf, but these days just using vmalloc is the right thing
> for !unmapped buffers that can't use large folios.

I haven't looked at what using vmalloc means for packing the buffer
into a bio - we currently use bio_add_page(), so does that mean we
have to use some variant of virt_to_page() to break the vmalloc
region up into its backing pages to feed them to the bio?  Or is
there some helper that I'm unaware of that does it all for us
magically?
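(For illustration only: the kernel does provide vmalloc_to_page() to
translate a vmalloc address into its backing page, so the walk Dave
is asking about would be a per-page loop.  This is a minimal sketch,
not code from the patch series, and the function name is made up:

/* Illustrative sketch only -- not from the patch series. */
static int add_vmalloc_range_to_bio(struct bio *bio, void *vaddr,
		size_t len)
{
	while (len > 0) {
		unsigned int off = offset_in_page(vaddr);
		unsigned int this = min_t(size_t, len, PAGE_SIZE - off);

		/* bio_add_page() returns the number of bytes added. */
		if (bio_add_page(bio, vmalloc_to_page(vaddr), this,
				off) != this)
			return -EIO;
		vaddr += this;
		len -= this;
	}
	return 0;
}

A real conversion would also need flush_kernel_vmap_range() before
writes and invalidate_kernel_vmap_range() after reads, to keep
virtually indexed caches coherent over the vmalloc mapping.)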
> And I'm seriously wondering if we should bother with unmapped
> buffers in the long run if we end up normally using larger folios,
> or just consolidate down to:
>
>  - kmalloc for buffers < PAGE_SIZE
>  - folio for buffers >= PAGE_SIZE
>  - vmalloc if allocating a larger folio is not possible

Yeah, that's kind of where I'm going with this.  Large folios already
turn off unmapped buffers, and I'd really like to get rid of that
page-straddling mess that unmapped buffers require in the buffer item
dirty region tracking.  That means we have to get rid of unmapped
buffers....

-Dave.
-- 
Dave Chinner
david@fromorbit.com
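(Again purely illustrative, with a made-up function name: the
three-tier scheme Christoph lists above would boil down to something
like

/* Illustrative sketch only -- not from the patch series. */
static void *buf_alloc_mem(size_t size)
{
	struct folio *folio;

	/* Small buffers come straight off the slab. */
	if (size < PAGE_SIZE)
		return kmalloc(size, GFP_KERNEL);

	/* Try one physically contiguous large folio first. */
	folio = folio_alloc(GFP_KERNEL | __GFP_NOWARN, get_order(size));
	if (folio)
		return folio_address(folio);

	/* No contiguous memory available; fall back to vmalloc(). */
	return vmalloc(size);
}

where the free path would have to remember which of kfree(),
folio_put() or vfree() to call - the sort of state the buffer flags
would need to carry.)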