Date: Mon, 16 Dec 2019 08:17:48 -0800
From: Davidlohr Bueso
To: Michal Hocko
Cc: Andrew Morton, Mike Kravetz, Waiman Long, Matthew Wilcox, linux-kernel@vger.kernel.org, linux-mm@kvack.org, aneesh.kumar@linux.ibm.com
Subject: Re: [PATCH v2] mm/hugetlb: defer free_huge_page() to a workqueue
Message-ID: <20191216161748.tgi2oictlfqy6azi@linux-p48b>
In-Reply-To: <20191216133711.GH30281@dhcp22.suse.cz>

On Mon, 16 Dec 2019, Michal Hocko wrote:

>I am afraid that work_struct is too large to be stuffed into the struct
>page array (because of the lockdep part).

Yeah, this needs to be done without touching struct page.

Which is why I had done the stack allocated way in this patch, but we
cannot wait for it to complete in irq context, so that's out the window.
Andi had suggested percpu allocated work items, but having played with
the idea over the weekend, I don't see how we can prevent another page
being freed on the same cpu before the previous work on that cpu has
completed: cpu0 wants to free pageA and schedules the work; in the
meantime cpu0 wants to free pageB, but the worker for pageA still hasn't
run, so its work item is still in use.

>I think that it would be just safer to make hugetlb_lock irq safe. Are
>there any other locks that would require the same?

It would be simpler. Any performance issues that arise would probably
only show up in microbenchmarks, assuming we want full irq safety. If we
don't need to worry about hardirq, then even better.

The subpool lock would also need to be irq safe.

Thanks,
Davidlohr
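For reference, a completely untested sketch of the direction such a
conversion would take (hunk context elided with "..."; every other taker
of hugetlb_lock would need the same treatment, as would the subpool lock
in hugepage_subpool_put_pages() and friends):

```diff
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
 static void free_huge_page(struct page *page)
 {
+	unsigned long flags;
 	...
-	spin_lock(&hugetlb_lock);
+	spin_lock_irqsave(&hugetlb_lock, flags);
 	...
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irqrestore(&hugetlb_lock, flags);
 }
```

Callers known to never run in irq context could use the cheaper
spin_lock_irq() variant instead, which is where the hardirq question
above matters.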