From: Thomas Gleixner
To: Matthew Wilcox
Cc: LKML, linux-arch@vger.kernel.org, Linus Torvalds, Peter Zijlstra,
    Paul McKenney, David Airlie, Daniel Vetter, Ard Biesheuvel,
    Herbert Xu, Christoph Hellwig, Sebastian Andrzej Siewior,
    Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
    Steven Rostedt, Ben Segall, Mel Gorman, Daniel Bristot de Oliveira,
    Andrew Morton, linux-mm@kvack.org, x86@kernel.org, Vineet Gupta,
    linux-snps-arc@lists.infradead.org, Russell King, Arnd Bergmann,
    linux-arm-kernel@lists.infradead.org, Guo Ren,
    linux-csky@vger.kernel.org, Michal Simek, Thomas Bogendoerfer,
    linux-mips@vger.kernel.org, Nick Hu, Greentime Hu, Vincent Chen,
    Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller",
    sparclinux@vger.kernel.org, Chris Zankel, Max Filippov,
    linux-xtensa@linux-xtensa.org
Subject: Re: [patch V2 00/18] mm/highmem: Preemptible variant of kmap_atomic & friends
In-Reply-To: <20201030130627.GI27442@casper.infradead.org>
References: <20201029221806.189523375@linutronix.de> <20201030130627.GI27442@casper.infradead.org>
Date: Fri, 30 Oct 2020 20:35:18 +0100
Message-ID: <87k0v7mrrd.fsf@nanos.tec.linutronix.de>

On Fri, Oct 30 2020 at 13:06, Matthew Wilcox wrote:
> On Thu, Oct 29, 2020 at 11:18:06PM +0100, Thomas Gleixner wrote:
>> This series provides kmap_local.* / iomap_local variants which only
>> disable migration to keep the virtual mapping address stable across
>> preemption, but neither disable pagefaults nor preemption. The new
>> functions can be used in any context, but if used in atomic context
>> the caller has to take care of eventually disabling pagefaults.
>
> Could I ask for a CONFIG_KMAP_DEBUG which aliases all the kmap variants
> to vmap()?  I think we currently have a problem in iov_iter on HIGHMEM
> configs:

For kmap() that would work, but for kmap_atomic() not so much when it is
called in non-preemptible context, because vmap() might sleep.
> copy_page_to_iter() calls page_copy_sane() which checks:
>
>         head = compound_head(page);
>         if (likely(n <= v && v <= page_size(head)))
>                 return true;
>
> but then:
>
>         void *kaddr = kmap_atomic(page);
>         size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
>         kunmap_atomic(kaddr);
>
> so if offset to offset+bytes is larger than PAGE_SIZE, this is going to
> work for lowmem pages and fail miserably for highmem pages.  I suggest
> vmap() because vmap has a PAGE_SIZE gap between each allocation.

On 32bit highmem the kmap_atomic() case is easy: double the number of
mapping slots and only use every second one, which gives you a guard
page between the maps.

For 64bit we could do something ugly: enable the highmem kmap_atomic()
crud and enforce an alias mapping (at least on the architectures where
this is reasonable). Then you get the same as for 32bit.

> Alternatively if we could have a kmap_atomic_compound(), that would
> be awesome, but probably not realistic to implement.  I've more
> or less resigned myself to having to map things one page at a time.

That might be horribly awesome on 32bit :)

Thanks,

        tglx