From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com [209.85.208.70])
	by kanga.kvack.org (Postfix) with ESMTP id 879916B0273
	for ; Wed, 25 Jul 2018 04:21:03 -0400 (EDT)
Received: by mail-ed1-f70.google.com with SMTP id j14-v6so1790972edr.2
	for ; Wed, 25 Jul 2018 01:21:03 -0700 (PDT)
Received: from mx1.suse.de (mx2.suse.de. [195.135.220.15])
	by mx.google.com with ESMTPS id z12-v6si393517edz.61.2018.07.25.01.21.02
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 25 Jul 2018 01:21:02 -0700 (PDT)
Date: Wed, 25 Jul 2018 10:21:00 +0200
From: Michal Hocko
Subject: Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
Message-ID: <20180725082100.GV28386@dhcp22.suse.cz>
References: <2018072514375722198958@wingtech.com>
	<20180725074009.GU28386@dhcp22.suse.cz>
	<2018072515575576668668@wingtech.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2018072515575576668668@wingtech.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: "zhaowuyun@wingtech.com"
Cc: mgorman, akpm, minchan, vinmenon, hannes, "hillf.zj", linux-mm,
	linux-kernel

[Please do not top post - thank you]

[CC Hugh - the original patch was
http://lkml.kernel.org/r/2018072514375722198958@wingtech.com]

On Wed 25-07-18 15:57:55, zhaowuyun@wingtech.com wrote:
> That is a BUG we found in mm/vmscan.c at KERNEL VERSION 4.9.82

The code is quite similar in the current tree as well.

> Summary: TASK A (normal priority) doing __remove_mapping on a page is
> preempted by TASK B (RT priority) doing __read_swap_cache_async.
> TASK A is preempted before swapcache_free, leaving the SWAP_HAS_CACHE
> flag set in the swap cache.
> TASK B, doing __read_swap_cache_async, will not succeed at
> swapcache_prepare(entry) because the swap cache entry still exists, so it
> will loop forever because it is an RT thread...
> the spin lock is unlocked before swapcache_free, so disable preemption
> until swapcache_free has executed ...

OK, I see your point now. I had missed that the lock is dropped before
swapcache_free. But how can disabling preemption prevent this race from
happening while the code is preempted by an IRQ?
--
Michal Hocko
SUSE Labs
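[Editorial sketch of the interleaving being discussed. This is simplified
pseudocode loosely modeled on the 4.9-era mm/vmscan.c and mm/swap_state.c;
it is illustrative only and does not reproduce the actual kernel source.]

```
/* Task A (normal priority): tail of __remove_mapping(), simplified */
__delete_from_swap_cache(page);       /* SWAP_HAS_CACHE is still set */
spin_unlock_irqrestore(&mapping->tree_lock, flags);
/* <-- Task A can be preempted here, before the flag is cleared ... */
swapcache_free(swap);                 /* ... by this call */

/* Task B (RT priority): loop in __read_swap_cache_async(), simplified */
for (;;) {
        err = swapcache_prepare(entry);
        if (err == -EEXIST) {
                /*
                 * SWAP_HAS_CACHE was left set by Task A, so the
                 * prepare keeps failing.  cond_resched() never yields
                 * to the lower-priority Task A on the same CPU, so an
                 * RT Task B spins here indefinitely.
                 */
                cond_resched();
                continue;
        }
        /* ... proceed with adding the page to the swap cache ... */
}
```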