Date: Wed, 19 Feb 2020 18:04:24 +0000
From: Anchal Agarwal
To: Roger Pau Monné
Subject: Re: [RFC PATCH v3 06/12] xen-blkfront: add callbacks for PM suspend and hibernation
Message-ID: <20200219180424.GA17584@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <890c404c585d7790514527f0c021056a7be6e748.1581721799.git.anchalag@amazon.com> <20200217100509.GE4679@Air-de-Roger> <20200217230553.GA8100@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com> <20200218091611.GN4679@Air-de-Roger>
In-Reply-To: <20200218091611.GN4679@Air-de-Roger>

On Tue, Feb 18, 2020 at 10:16:11AM +0100, Roger Pau Monné wrote:
> On Mon, Feb 17, 2020 at 11:05:53PM +0000, Anchal Agarwal wrote:
> > On Mon, Feb 17, 2020 at 11:05:09AM +0100, Roger Pau Monné wrote:
> > > On Fri, Feb 14, 2020 at 11:25:34PM +0000, Anchal Agarwal wrote:
> > > > From: Munehisa Kamata
> > >
> > > > Add freeze, thaw and restore callbacks for PM suspend and hibernation
> > > > support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> > > > events need to implement these xenbus_driver callbacks.
> > > > The freeze handler stops the block-layer queue and disconnects the
> > > > frontend from the backend while freeing ring_info and associated resources.
> > > > The restore handler re-allocates ring_info and re-connects to the
> > > > backend, so the rest of the kernel can continue to use the block device
> > > > transparently.
> > > > Also, the handlers are used for both PM suspend and
> > > > hibernation so that we can keep the existing suspend/resume callbacks for
> > > > Xen suspend without modification. Before disconnecting from the backend,
> > > > we need to prevent any new IO from being queued and wait for existing
> > > > IO to complete.
> > >
> > > This is different from Xen (xenstore) initiated suspension, as in that
> > > case Linux doesn't flush the rings or disconnect from the backend.
> >
> > Yes, AFAIK in Xen initiated suspension the backend takes care of it.
>
> No, in Xen initiated suspension the backend doesn't take care of flushing
> the rings; the frontend has a shadow copy of the ring contents and it
> re-issues the requests on resume.
>
Yes, I meant suspension in general, where both xenstore and the backend know
the system is going under suspension, not the flushing of rings. That happens
in the frontend when the backend indicates that its state is Closing, and so
on. I may have written it in the wrong context.

> > > > +static int blkfront_freeze(struct xenbus_device *dev)
> > > > +{
> > > > +	unsigned int i;
> > > > +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> > > > +	struct blkfront_ring_info *rinfo;
> > > > +	/* This would be a reasonable timeout as used in xenbus_dev_shutdown() */
> > > > +	unsigned int timeout = 5 * HZ;
> > > > +	int err = 0;
> > > > +
> > > > +	info->connected = BLKIF_STATE_FREEZING;
> > > > +
> > > > +	blk_mq_freeze_queue(info->rq);
> > > > +	blk_mq_quiesce_queue(info->rq);
> > > > +
> > > > +	for (i = 0; i < info->nr_rings; i++) {
> > > > +		rinfo = &info->rinfo[i];
> > > > +
> > > > +		gnttab_cancel_free_callback(&rinfo->callback);
> > > > +		flush_work(&rinfo->work);
> > > > +	}
> > > > +
> > > > +	/* Kick the backend to disconnect */
> > > > +	xenbus_switch_state(dev, XenbusStateClosing);
> > >
> > > Are you sure this is safe?
> > >
> > In my testing, running multiple fio jobs and other test scenarios running
> > a memory loader works fine. I did not come across a scenario that would
> > have failed resume due to blkfront issues; can you suggest some?
>
> AFAICT you don't wait for the in-flight requests to be finished, and
> just rely on blkback to finish processing those. I'm not sure all
> blkback implementations out there can guarantee that.
>
> The approach used by Xen initiated suspension is to re-issue the
> in-flight requests when resuming. I have to admit I don't think this
> is the best approach, but I would like to keep both the Xen and the PM
> initiated suspension using the same logic, and hence I would request
> that you try to re-use the existing resume logic (blkfront_resume).
>
> > > I don't think you wait for all requests pending on the ring to be
> > > finished by the backend, and hence you might lose requests, as the
> > > ones on the ring would not be re-issued by blkfront_restore AFAICT.
> > >
> > AFAIU, blk_mq_freeze_queue/blk_mq_quiesce_queue should ensure no request
> > remains in use on the shared ring. Also, I want to pause the queue and
> > flush all the pending requests in the shared ring before disconnecting
> > from the backend.
>
> Oh, so blk_mq_freeze_queue does wait for in-flight requests to be
> finished. I guess it's fine then.
>
Ok.

> > Quiescing the queue seemed a better option here, as we want to make sure
> > ongoing request dispatches are totally drained.
> > I should admit that some of this notion is borrowed from how nvme
> > freeze/unfreeze is done, although it is not an apples-to-apples comparison.
>
> That's fine, but I would still like to request that you use the same
> logic (as much as possible) for both the Xen and the PM initiated
> suspension.
>
> So you either apply this freeze/unfreeze to the Xen suspension (and
> drop the re-issuing of requests on resume) or adopt the same approach
> as the Xen initiated suspension. Keeping two completely different
> approaches to suspension/resume in blkfront is not suitable long
> term.
>
I agree with you that an overhaul of Xen suspend/resume with respect to
blkfront is a good idea; however, IMO that is work for the future, and this
patch series should not be blocked on it. What do you think?

> Thanks, Roger.
>
Thanks,
Anchal
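
[Editorial note: for readers following the thread, the "shadow copy" re-issue
pattern Roger describes for Xen-initiated suspension can be sketched roughly as
below. This is a simplified, illustrative sketch, not the actual blkfront
source; the function name and the exact shadow-slot layout are placeholders.]

```c
/*
 * Sketch only: on resume after a Xen-initiated suspend, blkfront keeps
 * a shadow copy of each ring slot and re-queues any request that was
 * still in flight when the domain was suspended, instead of draining
 * the ring before suspending. Names here are illustrative placeholders.
 */
static void reissue_inflight_requests(struct blkfront_ring_info *rinfo)
{
	unsigned int i;

	for (i = 0; i < BLK_RING_SIZE(rinfo->dev_info); i++) {
		struct blk_shadow *shadow = &rinfo->shadow[i];

		if (!shadow->request)
			/* Slot had no request in flight at suspend time. */
			continue;

		/*
		 * Hand the saved request back to blk-mq so it is
		 * dispatched again on the freshly reconnected ring.
		 */
		blk_mq_requeue_request(shadow->request, true);
	}
}
```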