From: Arnd Bergmann
Date: Wed, 11 Mar 2020 17:59:53 +0100
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
To: Catalin Marinas
Cc: Russell King - ARM Linux admin, Nishanth Menon, Santosh Shilimkar, Tero Kristo, Linux ARM, Michal Hocko, Rik van Riel, Dave Chinner, Linux Kernel Mailing List, Linux-MM, Yafang Shao, Al Viro, Johannes Weiner, linux-fsdevel, kernel-team@fb.com, Kishon Vijay Abraham I, Linus Torvalds, Andrew Morton, Roman Gushchin
In-Reply-To: <20200311142905.GI3216816@arrakis.emea.arm.com>
On Wed, Mar 11, 2020 at 3:29 PM Catalin Marinas wrote:
> > - Flip TTBR0 on kernel entry/exit, and again during user access.
> >
> > This is probably more work to implement than your idea, but
> > I would hope this has a lower overhead on most microarchitectures
> > as it doesn't require pinning the pages. Depending on the
> > microarchitecture, I'd hope the overhead would be comparable
> > to that of ARM64_SW_TTBR0_PAN.
>
> This still doesn't solve the copy_{from,to}_user() case where both
> address spaces need to be available during copy. So you either pin the
> user pages in memory and access them via the kernel mapping or you
> temporarily map (kmap?) the destination/source kernel address. The
> overhead I'd expect to be significantly greater than ARM64_SW_TTBR0_PAN
> for the uaccess routines. For user entry/exit, your suggestion is
> probably comparable with SW PAN.

Good point, that is indeed a larger overhead.
The simplest implementation I had in mind would use the code from
arch/arm/lib/copy_from_user.S and flip ttbr0 between each ldm and stm
(up to 32 bytes), but I have no idea of the cost of storing to ttbr0,
so this might be even more expensive. Do you have an estimate of how
long writing to TTBR0_64 takes on Cortex-A7 and A15, respectively?

Another way might be to use a temporary buffer that is already mapped,
and add a memcpy() through the L1 cache to reduce the number of ttbr0
changes. The buffer would probably have to be on the stack, which
limits the size, but for large copies get_user_pages()+memcpy() may
end up being faster anyway.

      Arnd
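[Editor's note: the bounce-buffer idea above can be sketched in plain C. The helpers uaccess_enable()/uaccess_disable() below are hypothetical stand-ins for the TTBR0 writes, and the buffer size is a guess; this only illustrates how chunking amortizes the flips, not the real kernel code path.]

```c
#include <stddef.h>
#include <string.h>

#define BOUNCE_BUF_SIZE 256  /* on-stack buffer size; chosen arbitrarily */

/* Hypothetical stand-ins for switching TTBR0 so that user space is
 * mapped; a real implementation would write the register and isb.
 * Here we just count the switches to show the amortization. */
static int ttbr0_switches;
static void uaccess_enable(void)  { ttbr0_switches++; }
static void uaccess_disable(void) { ttbr0_switches++; }

/* Copy n bytes from a "user" src to a kernel dst through an on-stack
 * bounce buffer, so TTBR0 only has to be flipped once per chunk
 * instead of between every ldm/stm pair. */
static size_t bounce_copy_from_user(void *dst, const void *src, size_t n)
{
	unsigned char buf[BOUNCE_BUF_SIZE];
	size_t done = 0;

	while (done < n) {
		size_t chunk = n - done;

		if (chunk > sizeof(buf))
			chunk = sizeof(buf);

		uaccess_enable();	/* user mapping visible */
		memcpy(buf, (const unsigned char *)src + done, chunk);
		uaccess_disable();	/* back to kernel-only */

		/* second copy stays within kernel mappings and should
		 * hit the still-warm L1 lines of the bounce buffer */
		memcpy((unsigned char *)dst + done, buf, chunk);
		done += chunk;
	}
	return done;
}
```

For a 1000-byte copy this performs four chunked passes, i.e. four enable/disable pairs, rather than one pair per 32-byte ldm/stm block.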