Subject: Re: [PATCH 1/2] mm: introduce put_user_page*(), placeholder versions
References: <20181208022445.GA7024@redhat.com> <20181210102846.GC29289@quack2.suse.cz> <20181212150319.GA3432@redhat.com> <20181212214641.GB29416@dastard> <20181212215931.GG5037@redhat.com> <20181213005119.GD29416@dastard> <05a68829-6e6d-b766-11b4-99e1ba4bc87b@nvidia.com> <01cf4e0c-b2d6-225a-3ee9-ef0f7e53684d@nvidia.com> <20181214194843.GG10600@bombadil.infradead.org>
From: Dave Hansen
Date: Fri, 14 Dec 2018 11:53:31 -0800
In-Reply-To: <20181214194843.GG10600@bombadil.infradead.org>
To: Matthew Wilcox, Dan Williams
Cc: John Hubbard, david, Jérôme Glisse, Jan Kara, John Hubbard,
 Andrew Morton, Linux MM, tom@talpey.com, Al Viro, benve@cisco.com,
 Christoph Hellwig, Christopher Lameter, "Dalessandro, Dennis",
 Doug Ledford, Jason Gunthorpe, Michal Hocko, Mike Marciniszyn,
 rcampbell@nvidia.com, Linux Kernel Mailing List, linux-fsdevel

On 12/14/18 11:48 AM, Matthew Wilcox wrote:
> I think we can do better than a proxy object with bit 0 set.  I'd go
> for allocating something like this:
>
> struct dynamic_page {
> 	struct page;
> 	unsigned long vaddr;
> 	unsigned long pfn;
> 	...
> };
>
> and use a bit in struct page to indicate that this is a dynamic page.

That might be fun.  We'd just need a fast/static and slow/dynamic path
in page_to_pfn()/pfn_to_page().  We'd also need some kind of auxiliary
pfn-to-page structure since we could not fit that^ structure in
vmemmap[].
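For illustration only, the fast/static vs. slow/dynamic split could look
something like the sketch below.  This is a standalone mock, not kernel
code: `PG_dynamic`, the stub `struct page`, and the tiny `vmemmap[]`
array are all invented names standing in for the real things, and the
real `page_to_pfn()` lives in asm-generic code.  The point is just the
branch: a flag bit in `struct page` selects either pointer arithmetic
against `vmemmap[]` (fast path) or a lookup in the enclosing
`dynamic_page` object (slow path).

```c
#include <assert.h>
#include <stddef.h>

/* Assumed flag bit marking a page as dynamic (illustrative name). */
#define PG_dynamic	(1UL << 0)

/* Stub of struct page; the real one is much larger. */
struct page {
	unsigned long flags;
};

/* Wilcox's proposed wrapper: extra state lives outside vmemmap[]. */
struct dynamic_page {
	struct page page;	/* must be first so the cast below works */
	unsigned long vaddr;
	unsigned long pfn;
};

/* Mock vmemmap[]: static pages, indexed directly by pfn. */
static struct page vmemmap[4];

static unsigned long page_to_pfn(struct page *page)
{
	if (page->flags & PG_dynamic) {
		/* Slow path: pfn is stored in the enclosing object. */
		struct dynamic_page *dp = (struct dynamic_page *)page;
		return dp->pfn;
	}
	/* Fast path: plain pointer arithmetic against vmemmap[]. */
	return (unsigned long)(page - vmemmap);
}
```

The reverse direction, pfn_to_page(), is where the auxiliary structure
mentioned above comes in: a dynamic page's pfn does not index into
vmemmap[], so some side lookup (e.g. a tree keyed by pfn) would be
needed before falling back to `vmemmap + pfn`.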