From: Barry Song
Date: Tue, 20 Aug 2024 07:00:01 +1200
Subject: Re: [PATCH v4 4/6] mm: Introduce a pageflag for partially mapped folios
To: Usama Arif
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    riel@surriel.com, shakeel.butt@linux.dev, roman.gushchin@linux.dev,
    yuzhao@google.com, david@redhat.com, ryan.roberts@arm.com, rppt@kernel.org,
    willy@infradead.org, cerasuolodomenico@gmail.com, ryncsn@gmail.com,
    corbet@lwn.net, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    kernel-team@meta.com
References: <20240819023145.2415299-1-usamaarif642@gmail.com> <20240819023145.2415299-5-usamaarif642@gmail.com>

On Tue, Aug 20, 2024 at 2:17 AM Usama Arif wrote:
>
>
>
> On 19/08/2024 09:29, Barry Song wrote:
> > Hi Usama,
> >
> > I feel it is much better now! thanks!
> >
> > On Mon, Aug 19, 2024 at 2:31 PM Usama Arif wrote:
> >>
> >> Currently folio->_deferred_list is used to keep track of
> >> partially_mapped folios that are going to be split under memory
> >> pressure. In the next patch, all THPs that are faulted in and collapsed
> >> by khugepaged are also going to be tracked using _deferred_list.
> >>
> >> This patch introduces a pageflag to be able to distinguish between
> >> partially mapped folios and others in the deferred_list at split time in
> >> deferred_split_scan. Its needed as __folio_remove_rmap decrements
> >> _mapcount, _large_mapcount and _entire_mapcount, hence it won't be
> >> possible to distinguish between partially mapped folios and others in
> >> deferred_split_scan.
> >>
> >> Eventhough it introduces an extra flag to track if the folio is
> >> partially mapped, there is no functional change intended with this
> >> patch and the flag is not useful in this patch itself, it will
> >> become useful in the next patch when _deferred_list has non partially
> >> mapped folios.
> >>
> >> Signed-off-by: Usama Arif
> >> ---
> >>  include/linux/huge_mm.h    |  4 ++--
> >>  include/linux/page-flags.h | 11 +++++++++++
> >>  mm/huge_memory.c           | 23 ++++++++++++++++-------
> >>  mm/internal.h              |  4 +++-
> >>  mm/memcontrol.c            |  3 ++-
> >>  mm/migrate.c               |  3 ++-
> >>  mm/page_alloc.c            |  5 +++--
> >>  mm/rmap.c                  |  5 +++--
> >>  mm/vmscan.c                |  3 ++-
> >>  9 files changed, 44 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> >> index 4c32058cacfe..969f11f360d2 100644
> >> --- a/include/linux/huge_mm.h
> >> +++ b/include/linux/huge_mm.h
> >> @@ -321,7 +321,7 @@ static inline int split_huge_page(struct page *page)
> >>  {
> >>         return split_huge_page_to_list_to_order(page, NULL, 0);
> >>  }
> >> -void deferred_split_folio(struct folio *folio);
> >> +void deferred_split_folio(struct folio *folio, bool partially_mapped);
> >>
> >>  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> >>                 unsigned long address, bool freeze, struct folio *folio);
> >> @@ -495,7 +495,7 @@ static inline int split_huge_page(struct page *page)
> >>  {
> >>         return 0;
> >>  }
> >> -static inline void deferred_split_folio(struct folio *folio) {}
> >> +static inline void deferred_split_folio(struct folio *folio, bool partially_mapped) {}
> >>  #define split_huge_pmd(__vma, __pmd, __address) \
> >>         do { } while (0)
> >>
> >> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> >> index a0a29bd092f8..c3bb0e0da581 100644
> >> --- a/include/linux/page-flags.h
> >> +++ b/include/linux/page-flags.h
> >> @@ -182,6 +182,7 @@ enum pageflags {
> >>         /* At least one page in this folio has the hwpoison flag set */
> >>         PG_has_hwpoisoned = PG_active,
> >>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
> >> +       PG_partially_mapped = PG_reclaim, /* was identified to be partially mapped */
> >>  };
> >>
> >>  #define PAGEFLAGS_MASK ((1UL << NR_PAGEFLAGS) - 1)
> >> @@ -861,8 +862,18 @@ static inline void ClearPageCompound(struct page *page)
> >>         ClearPageHead(page);
> >>  }
> >>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> >> +FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >> +/*
> >> + * PG_partially_mapped is protected by deferred_split split_queue_lock,
> >> + * so its safe to use non-atomic set/clear.
> >> + */
> >> +__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >> +__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
> >>  #else
> >>  FOLIO_FLAG_FALSE(large_rmappable)
> >> +FOLIO_TEST_FLAG_FALSE(partially_mapped)
> >> +__FOLIO_SET_FLAG_NOOP(partially_mapped)
> >> +__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
> >>  #endif
> >>
> >>  #define PG_head_mask ((1UL << PG_head))
> >> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >> index 2d77b5d2291e..70ee49dfeaad 100644
> >> --- a/mm/huge_memory.c
> >> +++ b/mm/huge_memory.c
> >> @@ -3398,6 +3398,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
> >>                          * page_deferred_list.
> >>                          */
> >>                         list_del_init(&folio->_deferred_list);
> >> +                       __folio_clear_partially_mapped(folio);
> >>                 }
> >>                 spin_unlock(&ds_queue->split_queue_lock);
> >>                 if (mapping) {
> >> @@ -3454,11 +3455,13 @@ void __folio_undo_large_rmappable(struct folio *folio)
> >>         if (!list_empty(&folio->_deferred_list)) {
> >>                 ds_queue->split_queue_len--;
> >>                 list_del_init(&folio->_deferred_list);
> >> +               __folio_clear_partially_mapped(folio);
> >
> > is it possible to make things clearer by
> >
> >     if (folio_clear_partially_mapped)
> >         __folio_clear_partially_mapped(folio);
> >
> > While writing without conditions isn't necessarily wrong, adding a condition
> > will improve the readability of the code and enhance the clarity of my mTHP
> > counters series. also help decrease smp cache sync if we can avoid
> > unnecessary writing?
> >
>
> Do you mean if(folio_test_partially_mapped(folio))?
>
> I don't like this idea. I think it makes the readability worse? If I was looking at if (test) -> clear for the first time, I would become confused why its being tested if its going to be clear at the end anyways?

In the pmd-order case, the majority of folios are not partially mapped.
An unconditional write dirties the cacheline and triggers cache-coherency
traffic across CPUs (the MESI protocol), which makes it more costly. With
a conditional write, i.e. "if (test) write", we avoid most of those
unnecessary writes, which is much cheaper. Additionally, we only need to
manage nr_split_deferred when the condition is met. So we should carefully
evaluate every place that touches the partially_mapped flag to see whether
the modification is actually necessary.
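
To make it concrete, this is roughly the shape I have in mind for the
clear sites -- an untested sketch against the hunk above, using only the
helpers this patch introduces:

        spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
        if (!list_empty(&folio->_deferred_list)) {
                ds_queue->split_queue_len--;
                list_del_init(&folio->_deferred_list);
                /*
                 * Only write the flag when it was actually set; most
                 * pmd-order folios are never partially mapped, so the
                 * store (and the cacheline invalidation it causes on
                 * other CPUs) is skipped in the common case.
                 */
                if (folio_test_partially_mapped(folio))
                        __folio_clear_partially_mapped(folio);
        }
        spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);

The same test-before-clear guard would apply to the other clear sites,
e.g. in deferred_split_scan() and split_huge_page_to_list_to_order().
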
> >
> >>         }
> >>         spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
> >>  }
> >>
> >> -void deferred_split_folio(struct folio *folio)
> >> +/* partially_mapped=false won't clear PG_partially_mapped folio flag */
> >> +void deferred_split_folio(struct folio *folio, bool partially_mapped)
> >>  {
> >>         struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> >>  #ifdef CONFIG_MEMCG
> >> @@ -3486,14 +3489,19 @@ void deferred_split_folio(struct folio *folio)
> >>         if (folio_test_swapcache(folio))
> >>                 return;
> >>
> >> -       if (!list_empty(&folio->_deferred_list))
> >> -               return;
> >> -
> >>         spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
> >> +       if (partially_mapped) {
> >> +               if (!folio_test_partially_mapped(folio)) {
> >> +                       __folio_set_partially_mapped(folio);
> >> +                       if (folio_test_pmd_mappable(folio))
> >> +                               count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> >> +                       count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> >> +               }
> >> +       } else {
> >> +               /* partially mapped folios cannot become non-partially mapped */
> >> +               VM_WARN_ON_FOLIO(folio_test_partially_mapped(folio), folio);
> >> +       }
> >>         if (list_empty(&folio->_deferred_list)) {
> >> -               if (folio_test_pmd_mappable(folio))
> >> -                       count_vm_event(THP_DEFERRED_SPLIT_PAGE);
> >> -               count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
> >>                 list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
> >>                 ds_queue->split_queue_len++;
> >>  #ifdef CONFIG_MEMCG
> >> @@ -3542,6 +3550,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> >>                 } else {
> >>                         /* We lost race with folio_put() */
> >>                         list_del_init(&folio->_deferred_list);
> >> +                       __folio_clear_partially_mapped(folio);
> >
> > as above? Do we also need if(test) for split_huge_page_to_list_to_order()?
> >
> >>                         ds_queue->split_queue_len--;
> >>                 }
> >>                 if (!--sc->nr_to_scan)
> >> diff --git a/mm/internal.h b/mm/internal.h
> >> index 52f7fc4e8ac3..27cbb5365841 100644
> >> --- a/mm/internal.h
> >> +++ b/mm/internal.h
> >> @@ -662,8 +662,10 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
> >>         atomic_set(&folio->_entire_mapcount, -1);
> >>         atomic_set(&folio->_nr_pages_mapped, 0);
> >>         atomic_set(&folio->_pincount, 0);
> >> -       if (order > 1)
> >> +       if (order > 1) {
> >>                 INIT_LIST_HEAD(&folio->_deferred_list);
> >> +               __folio_clear_partially_mapped(folio);
> >
> > if partially_mapped is true for a new folio, does it mean we already have
> > a bug somewhere?
> >
> > How is it possible for a new folio to be partially mapped?
> >
>
> Its not, I did it because I wanted to make it explicit that the folio is being initialized, similar to how before this INIT_LIST_HEAD(&folio->_deferred_list) is done here.
>
> There is no functional issue in removing it here, because I believe the flag is initialized to false from start.

> >> +       }
> >>  }
> >>
> >>  static inline void prep_compound_tail(struct page *head, int tail_idx)
> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> index e1ffd2950393..0fd95daecf9a 100644
> >> --- a/mm/memcontrol.c
> >> +++ b/mm/memcontrol.c
> >> @@ -4669,7 +4669,8 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
> >>         VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> >>         VM_BUG_ON_FOLIO(folio_order(folio) > 1 &&
> >>                         !folio_test_hugetlb(folio) &&
> >> -                       !list_empty(&folio->_deferred_list), folio);
> >> +                       !list_empty(&folio->_deferred_list) &&
> >> +                       folio_test_partially_mapped(folio), folio);
> >>
> >>         /*
> >>          * Nobody should be changing or seriously looking at
> >> diff --git a/mm/migrate.c b/mm/migrate.c
> >> index 2d2e65d69427..ef4a732f22b1 100644
> >> --- a/mm/migrate.c
> >> +++ b/mm/migrate.c
> >> @@ -1735,7 +1735,8 @@ static int migrate_pages_batch(struct list_head *from,
> >>                          * use _deferred_list.
> >>                          */
> >>                         if (nr_pages > 2 &&
> >> -                           !list_empty(&folio->_deferred_list)) {
> >> +                           !list_empty(&folio->_deferred_list) &&
> >> +                           folio_test_partially_mapped(folio)) {
> >>                                 if (!try_split_folio(folio, split_folios, mode)) {
> >>                                         nr_failed++;
> >>                                         stats->nr_thp_failed += is_thp;
> >> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >> index 408ef3d25cf5..a145c550dd2a 100644
> >> --- a/mm/page_alloc.c
> >> +++ b/mm/page_alloc.c
> >> @@ -957,8 +957,9 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
> >>                 break;
> >>         case 2:
> >>                 /* the second tail page: deferred_list overlaps ->mapping */
> >> -               if (unlikely(!list_empty(&folio->_deferred_list))) {
> >> -                       bad_page(page, "on deferred list");
> >> +               if (unlikely(!list_empty(&folio->_deferred_list) &&
> >> +                            folio_test_partially_mapped(folio))) {
> >> +                       bad_page(page, "partially mapped folio on deferred list");
> >>                         goto out;
> >>                 }
> >>                 break;
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index a6b9cd0b2b18..4c330635aa4e 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -1578,8 +1578,9 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
> >>          * Check partially_mapped first to ensure it is a large folio.
> >>          */
> >>         if (partially_mapped && folio_test_anon(folio) &&
> >> -           list_empty(&folio->_deferred_list))
> >> -               deferred_split_folio(folio);
> >> +           !folio_test_partially_mapped(folio))
> >> +               deferred_split_folio(folio, true);
> >> +
> >>         __folio_mod_stat(folio, -nr, -nr_pmdmapped);
> >>
> >>         /*
> >> diff --git a/mm/vmscan.c b/mm/vmscan.c
> >> index 25e43bb3b574..25f4e8403f41 100644
> >> --- a/mm/vmscan.c
> >> +++ b/mm/vmscan.c
> >> @@ -1233,7 +1233,8 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
> >>                                          * Split partially mapped folios right away.
> >>                                          * We can free the unmapped pages without IO.
> >>                                          */
> >> -                                       if (data_race(!list_empty(&folio->_deferred_list)) &&
> >> +                                       if (data_race(!list_empty(&folio->_deferred_list) &&
> >> +                                           folio_test_partially_mapped(folio)) &&
> >> +                                           split_folio_to_list(folio, folio_list))
> >>                                                 goto activate_locked;
> >>                                 }
> >> --
> >> 2.43.5
> >>
> >
> > Thanks
> > Barry
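
Just to summarise how I read the new contract -- this paraphrases the
hunks above rather than proposing new code:

        /* producer side, __folio_remove_rmap(): mark and queue once */
        if (partially_mapped && folio_test_anon(folio) &&
            !folio_test_partially_mapped(folio))
                deferred_split_folio(folio, true);

        /*
         * consumer side, e.g. shrink_folio_list(): only split folios
         * that are both queued and flagged partially mapped
         */
        if (!list_empty(&folio->_deferred_list) &&
            folio_test_partially_mapped(folio))
                split_folio_to_list(folio, folio_list);

so once the next patch lands, entries on _deferred_list without the flag
will be the ones added at fault/collapse time rather than via partial
unmap.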