Subject: Re: [PATCH] mm: memcg: Fix memcg reclaim soft lockup
From: xunlei <xlpang@linux.alibaba.com>
To: Michal Hocko
Cc: Johannes Weiner, Andrew Morton, Vladimir Davydov, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Date: Wed, 26 Aug 2020 21:16:28 +0800
Message-ID: <061e8600-e162-6ac9-2f4f-bf161435a5b2@linux.alibaba.com>
In-Reply-To: <20200826124810.GQ22869@dhcp22.suse.cz>

On 2020/8/26 8:48 PM, Michal Hocko wrote:
> On Wed 26-08-20 20:21:39, xunlei wrote:
>> On 2020/8/26 8:07 PM, Michal Hocko wrote:
>>> On Wed 26-08-20 20:00:47, xunlei wrote:
>>>> On 2020/8/26 7:00 PM, Michal Hocko wrote:
>>>>> On Wed 26-08-20 18:41:18, xunlei wrote:
>>>>>> On 2020/8/26 4:11 PM, Michal Hocko wrote:
>>>>>>> On Wed 26-08-20 15:27:02, Xunlei Pang wrote:
>>>>>>>> We've met a soft lockup with "CONFIG_PREEMPT_NONE=y", when
>>>>>>>> the target memcg doesn't have any reclaimable memory.
>>>>>>>
>>>>>>> Do you have any scenario when this happens or is this some sort of a
>>>>>>> test case?
>>>>>>
>>>>>> It can happen on tiny guest scenarios.
>>>>>
>>>>> OK, you made me more curious. If this is a tiny guest and this is a hard
>>>>> limit reclaim path then we should trigger the oom killer, which should
>>>>> kill the offender and that in turn bail out from the try_charge loop
>>>>> (see should_force_charge). So how come this repeats enough in your setup
>>>>> that it causes soft lockups?
>>>>>
>>>>
>>>> should_force_charge() is false; the current task trapped in the endless
>>>> loop is not the oom victim.
>>>
>>> How is that possible? If the oom killer kills a task and that doesn't
>>> resolve the oom situation then it would go after another one until all
>>> tasks are killed. Or is your task living outside of the memcg it tries
>>> to charge?
>>>
>>
>> All tasks are in memcgs. It looks like the first oom victim has not
>> finished (unable to schedule), so later mem_cgroup_oom()->...->
>> oom_evaluate_task() will set oc->chosen to -1 and abort.
>
> This shouldn't be possible for too long because oom_reaper would
> make it invisible to the oom killer so it should proceed. Also
> mem_cgroup_out_of_memory takes a mutex and that is an implicit
> scheduling point already.
>
> Which kernel version is this?
>

I reproduced it on 5.9.0-rc2. oom_reaper also can't get scheduled because
there is only one CPU, and the mutex uses might_sleep(), which is a no-op
when CONFIG_PREEMPT_VOLUNTARY is not set, as I mentioned in the commit log.

> And just for the clarification. I am not against the additional
> cond_resched. That sounds like a good thing in general because we do
> want to have predictable scheduling during reclaim which is
> independent of reclaimability as much as possible. But I would like to
> drill down to why you are seeing the lockup because those shouldn't
> really happen.
>