From: Christoph Hellwig <hch@lst.de>
To: Omar Sandoval
Cc: linux-mm@kvack.org, kexec@lists.infradead.org, Andrew Morton,
	Uladzislau Rezki, Christoph Hellwig, Cliff Wickman,
	x86@kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm/vmalloc: fix spinning drain_vmap_work after reading from /proc/vmcore
Date: Wed, 6 Apr 2022 06:42:44 +0200
Message-ID: <20220406044244.GA9959@lst.de>
In-Reply-To: <75014514645de97f2d9e087aa3df0880ea311b77.1649187356.git.osandov@fb.com>

On Tue, Apr 05, 2022 at 12:40:31PM -0700, Omar Sandoval wrote:
> A simple way to "fix" this would be to make set_iounmap_nonlazy() set
> vmap_lazy_nr to lazy_max_pages() instead of lazy_max_pages() + 1. But, I
> think it'd be better to get rid of this hack of clobbering vmap_lazy_nr.
> Instead, this fix makes __copy_oldmem_page() explicitly drain the vmap
> areas itself.

This fixes the bug, and the interface is also better than what we had
before.  But a vmap/iounmap_eager would seem even better.  But hey,
right now it has one caller in always-built-in x86 arch code, so maybe
it isn't worth spending more effort on this.
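[For readers without the patch in front of them, the bug and the approach
under discussion look roughly like the sketch below.  This is illustrative
kernel-style pseudocode reconstructed from the commit message, not the
literal patch; the simplified __copy_oldmem_page() signature and the
vmalloc_purge_lazy_areas() drain helper are assumed names, not real
kernel API.]

```c
/*
 * Sketch only -- not the actual patch.
 *
 * The old hack: set_iounmap_nonlazy() pushed vmap_lazy_nr past the
 * lazy_max_pages() threshold so that the *next* iounmap() would purge
 * the lazily-freed vmap areas immediately:
 */
void set_iounmap_nonlazy(void)
{
	atomic_long_set(&vmap_lazy_nr, lazy_max_pages() + 1);
}

/*
 * Nothing ever subtracted that extra "+ 1" back out, so vmap_lazy_nr
 * could stay above the threshold and drain_vmap_work kept re-queueing
 * itself -- the spinning worker in the subject line.
 *
 * The quoted patch instead drops the hack and has the one caller that
 * needs eager unmapping (reading /proc/vmcore on x86) drain
 * explicitly after iounmap().  Simplified, ignoring the userspace-
 * buffer case the real function also handles:
 */
static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf,
				  size_t csize, unsigned long offset)
{
	void *vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);

	if (!vaddr)
		return -ENOMEM;
	memcpy(buf, vaddr + offset, csize);
	iounmap(vaddr);
	/* Eagerly purge the lazily-freed vmap area here, instead of
	 * clobbering vmap_lazy_nr.  Hypothetical helper name: */
	vmalloc_purge_lazy_areas();
	return csize;
}
```

The "vmap/iounmap_eager" suggestion in the reply would fold that explicit
drain into a dedicated unmap variant, so callers would not need to know
about the lazy-purge machinery at all.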