Date: Tue, 28 Feb 2023 13:07:41 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: "Huang, Ying"
Cc: Hugh Dickins, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Xu, Pengfei", Christoph Hellwig, Stefan Roesch, Tejun Heo, Xin Hao, Zi Yan,
    Yang Shi, Baolin Wang, Matthew Wilcox, Mike Kravetz
Subject: Re: [PATCH 1/3] migrate_pages: fix deadlock in batched migration
In-Reply-To: <87h6v6b6er.fsf@yhuang6-desk2.ccr.corp.intel.com>
References: <20230224141145.96814-1-ying.huang@intel.com> <20230224141145.96814-2-ying.huang@intel.com> <87h6v6b6er.fsf@yhuang6-desk2.ccr.corp.intel.com>
On Tue, 28 Feb 2023, Huang, Ying wrote:
> Hugh Dickins writes:
> > On Fri, 24 Feb 2023, Huang Ying wrote:
> >> @@ -1247,7 +1236,7 @@ static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page
> >>  	/* Establish migration ptes */
> >>  	VM_BUG_ON_FOLIO(folio_test_anon(src) &&
> >>  		       !folio_test_ksm(src) && !anon_vma, src);
> >> -	try_to_migrate(src, TTU_BATCH_FLUSH);
> >> +	try_to_migrate(src, mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0);
> >
> > Why that change, I wonder?  The TTU_BATCH_FLUSH can still be useful for
> > gathering multiple cross-CPU TLB flushes into one, even when it's only
> > a single page in the batch.
>
> Firstly, I would have thought that we have no opportunity to batch the
> TLB flushing now.  But as you pointed out, it is still possible to batch
> if mapcount > 1.  Secondly, without TTU_BATCH_FLUSH, we may flush the
> TLB for a single page (with the invlpg instruction); otherwise, we will
> flush the TLB for all pages.  The former is faster and will not
> influence other TLB entries of the process.
>
> Or should we use TTU_BATCH_FLUSH only if mapcount > 1?

I had not thought at all of the "invlpg" advantage (which I imagine some
architectures other than x86 share) of not delaying the TLB flush of a
single PTE.

Frankly, I just don't have any feeling for the tradeoff between multiple
remote invlpgs versus one remote batched TLB flush of everything, which
presumably depends on the number of CPUs, the size of the TLBs, etc.

Your "mapcount > 1" idea might be good, but I cannot tell: I'd say for now
that there's no reason to change your
"mode == MIGRATE_ASYNC ? TTU_BATCH_FLUSH : 0" without much more thought,
or a quick insight from someone else.  Some other time maybe.

Hugh
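
For context, here is a simplified sketch of the two TLB-flush paths being
compared above, loosely modelled on try_to_migrate_one() in mm/rmap.c.  The
local variables (mm, vma, address, pvmw, pteval, flags) are those of that
function, and the exact helpers and signatures vary by kernel version, so
treat this as an illustration of the tradeoff rather than the real code:

	/* Illustration only -- simplified, not verbatim kernel code. */
	if (flags & TTU_BATCH_FLUSH) {
		/*
		 * Deferred path: clear the PTE now, but only record that
		 * this mm needs a TLB flush.  The flush is issued later,
		 * once for the whole batch (try_to_unmap_flush()),
		 * typically as a broader flush on each CPU that has the
		 * mm loaded.
		 */
		pteval = ptep_get_and_clear(mm, address, pvmw.pte);
		set_tlb_ubc_flush_pending(mm, pte_dirty(pteval));
	} else {
		/*
		 * Immediate path: clear the PTE and flush right away.
		 * On x86 this is a single-address flush (invlpg), so the
		 * other TLB entries of the process are left alone.
		 */
		pteval = ptep_clear_flush(vma, address, pvmw.pte);
	}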
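
And a minimal, hypothetical sketch of what the "mapcount > 1" variant floated
above might look like at the migrate_folio_unmap() call site quoted in the
hunk; this is not the posted patch, just an illustration of the idea:

	/*
	 * Hypothetical variant, not the posted patch: defer (batch) the
	 * TLB flush only when the folio is mapped more than once, so a
	 * singly-mapped folio keeps the cheap single-address flush.
	 */
	enum ttu_flags ttu = 0;

	if (mode == MIGRATE_ASYNC && folio_mapcount(src) > 1)
		ttu = TTU_BATCH_FLUSH;

	try_to_migrate(src, ttu);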