From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 27 Jun 2023 17:14:20 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	virtualization@lists.linux-foundation.org, Andrew Morton,
	"Michael S. Tsirkin", John Hubbard, Oscar Salvador,
	Jason Wang, Xuan Zhuo
Subject: Re: [PATCH v1 3/5] mm/memory_hotplug: make offline_and_remove_memory() timeout instead of failing on fatal signals
References: <20230627112220.229240-1-david@redhat.com>
 <20230627112220.229240-4-david@redhat.com>
 <74cbbdd3-5a05-25b1-3f81-2fd47e089ac3@redhat.com>
 <0929f4b9-bdad-bcb4-4192-44e88378016b@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0929f4b9-bdad-bcb4-4192-44e88378016b@redhat.com>

On Tue 27-06-23 16:57:53, David Hildenbrand wrote:
> On 27.06.23 16:17, Michal Hocko wrote:
> > On Tue 27-06-23 15:14:11, David Hildenbrand wrote:
> > > On 27.06.23 14:40, Michal Hocko wrote:
> > > > On Tue 27-06-23 13:22:18, David Hildenbrand wrote:
> > > > > John Hubbard writes [1]:
> > > > >
> > > > > Some device drivers add memory to the system via memory hotplug.
> > > > > When the driver is unloaded, that memory is hot-unplugged.
> > > > >
> > > > > However, memory hot unplug can fail. And these days, it fails a
> > > > > little too easily, with respect to the above case.
> > > > > Specifically, if
> > > > > a signal is pending on the process, hot unplug fails.
> > > > >
> > > > > [...]
> > > > >
> > > > > So in this case, other things (unmovable pages, un-splittable huge
> > > > > pages) can also cause the above problem. However, those are
> > > > > demonstrably less common than simply having a pending signal. I've
> > > > > got bug reports from users who can trivially reproduce this by
> > > > > killing their process with a "kill -9", for example.
> > > >
> > > > This looks like a bug of the said driver, no? If the tear down process
> > > > is killed it could very well happen right before offlining so you end
> > > > up in the very same state. Or what am I missing?
> > >
> > > IIUC (John can correct me if I am wrong):
> > >
> > > 1) The process holds the device node open
> > > 2) The process gets killed or quits
> > > 3) As the process gets torn down, it closes the device node
> > > 4) Closing the device node results in the driver removing the device
> > > and calling offline_and_remove_memory()
> > >
> > > So it's not a "tear down process" that triggers that offlining+removal
> > > somehow explicitly, it's just a side-product of it letting go of the
> > > device node as the process gets torn down.
> >
> > Isn't that just fragile? The operation might fail for other reasons. Why
> > cannot there be a hold on the resource to control the tear down
> > explicitly?
>
> I'll let John comment on that. But from what I understood, in most setups
> where ZONE_MOVABLE gets used for hotplugged memory,
> offline_and_remove_memory() succeeds and allows for reusing the device
> later without a reboot.
>
> For the cases where it doesn't work, a reboot is required.

Then the solution should be really robust and have means to handle the
failure - e.g. by retrying or alerting the admin.
> > > > > Especially with ZONE_MOVABLE, offlining is supposed to work in most
> > > > > cases when offlining actually hotplugged (not boot) memory, and only
> > > > > fail in rare corner cases (e.g., some driver holds a reference to a
> > > > > page in ZONE_MOVABLE, turning it unmovable).
> > > > >
> > > > > In these corner cases we really don't want to be stuck forever in
> > > > > offline_and_remove_memory(). But in the general cases, we really want
> > > > > to do our best to make memory offlining succeed -- in a reasonable
> > > > > timeframe.
> > > > >
> > > > > Reliably failing in the described case when there is a fatal signal
> > > > > pending is sub-optimal. The pending signal check is mostly only
> > > > > relevant when user space explicitly triggers offlining of memory
> > > > > using sysfs device attributes ("state" or "online" attribute), but
> > > > > not when coming via offline_and_remove_memory().
> > > > >
> > > > > So let's use a timer instead and ignore fatal signals, because they
> > > > > are not really expressive for offline_and_remove_memory() users.
> > > > > Let's default to 30 seconds if no timeout was specified, and limit
> > > > > the timeout to 120 seconds.
> > > >
> > > > I really hate having timeouts back. They have just proven to be hard
> > > > to get right and it is essentially a policy implemented in the kernel.
> > > > They simply do not belong to the kernel space IMHO.
> > >
> > > As much as I agree with you in terms of offlining triggered from user
> > > space (e.g., write "state" or "online" attribute) where user-space is
> > > actually in charge and can do something reasonable (timeout, retry,
> > > whatever), in the offline_and_remove_memory() case it's the driver that
> > > wants a best-effort memory offlining+removal.
> > >
> > > If it times out, virtio-mem will simply try another block or retry
> > > later. Right now, it could get stuck forever in
> > > offline_and_remove_memory(), which is obviously "not great".
> > > Fortunately, for virtio-mem it's configurable and
> > > we use the alloc_contig_range()-method for now as default.
> >
> > It seems that offline_and_remove_memory is using the wrong operation
> > then, if it wants an opportunistic offlining with some sort of policy.
> > Timeout might be just one policy to use but failure mode or a retry
> > count might be a better fit for some users. So rather than (ab)using
> > offline_pages, would it make more sense to extract basic offlining
> > steps and allow drivers like virtio-mem to reuse them and define their
> > own policy?
>
> virtio-mem, in default operation, does that: use alloc_contig_range() to
> logically unplug ("fake offline") that memory and then just trigger
> offline_and_remove_memory() to make it "officially offline".
>
> In that mode, offline_and_remove_memory() cannot really time out and is
> almost always going to succeed (except memory notifiers and some hugetlb
> dissolving).
>
> Right now we also allow the admin to configure ordinary offlining directly
> (without prior fake offlining) when bigger memory blocks are used:
> offline_pages() is more reliable than alloc_contig_range(), for example,
> because it disables the PCP and the LRU cache, and retries more often
> (well, unfortunately then also forever). It has a higher chance of
> succeeding especially when bigger blocks of memory are offlined+removed.
>
> Maybe we should make the alloc_contig_range()-based mechanism more
> configurable and make it the only mode in virtio-mem, such that we don't
> have to mess with offline_and_remove_memory() endless loops -- at least
> for virtio-mem.

Yes, that sounds better than hooking up into offline_pages the way this
patch is doing.
-- 
Michal Hocko
SUSE Labs
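The two-step virtio-mem default flow described in the thread above -- fake offline via alloc_contig_range(), then make the now-unused range officially offline via offline_and_remove_memory() -- looks roughly like the following kernel-style pseudocode. alloc_contig_range() and offline_and_remove_memory() are real kernel symbols, but the wrapper function, its parameters, and the simplified error handling are invented for illustration; treat this as a sketch, not buildable driver code.

```c
/*
 * Sketch only (hypothetical wrapper, simplified error handling):
 * two-phase unplug as described for virtio-mem's default mode.
 */
static int virtio_mem_style_unplug(unsigned long start_pfn,
				   unsigned long nr_pages)
{
	int rc;

	/* Step 1: logically unplug -- migrate all pages out of the range. */
	rc = alloc_contig_range(start_pfn, start_pfn + nr_pages,
				MIGRATE_MOVABLE, GFP_KERNEL);
	if (rc)
		return rc;	/* caller may try another block or retry later */

	/*
	 * Step 2: the range is now unused, so offlining is nearly guaranteed
	 * to succeed (per the thread, only memory notifiers and hugetlb
	 * dissolving can still fail) -- no timeout or endless loop needed.
	 */
	return offline_and_remove_memory(PFN_PHYS(start_pfn),
					 nr_pages * PAGE_SIZE);
}
```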