Message-ID: <3f06379a-b9d0-40b7-9ecf-6e4c9a5f51dc@huaweicloud.com>
Date: Fri, 20 Dec 2024 15:48:40 +0800
Subject: Re: [PATCH -next v5] mm: vmscan: retry folios written back while isolated for traditional LRU
To: Barry Song <21cnbao@gmail.com>, cuibixuan@vivo.com
Cc: akpm@linux-foundation.org, mhocko@suse.com, hannes@cmpxchg.org,
 yosryahmed@google.com, yuzhao@google.com, david@redhat.com,
 willy@infradead.org, ryan.roberts@arm.com, wangkefeng.wang@huawei.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, chenridong@huawei.com,
 wangweiyang2@huawei.com, xieym_ict@hotmail.com
From: Chen Ridong <chenridong@huaweicloud.com>
References: <20241220010931.3603111-1-chenridong@huaweicloud.com>

On 2024/12/20 11:09, Barry Song wrote:
> On Fri, Dec 20, 2024 at 3:30 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Fri, Dec 20, 2024 at 2:19 PM Chen Ridong wrote:
>>>
>>> From: Chen Ridong
>>>
>>> The page reclaim isolates a batch of folios from the tail of one of the
>>> LRU lists and works on those folios one by one.
>>> For a suitable swap-backed folio, if the swap device is async, it
>>> queues that folio for writeback. After the page reclaim finishes an
>>> entire batch, it puts back the folios it queued for writeback to the
>>> head of the original LRU list.
>>>
>>> In the meantime, the page writeback flushes the queued folios also in
>>> batches. Its batching logic is independent from that of the page
>>> reclaim. For each folio it writes back, the page writeback calls
>>> folio_rotate_reclaimable(), which tries to rotate the folio to the
>>> tail.
>>>
>>> folio_rotate_reclaimable() only works on a folio after the page
>>> reclaim has put it back. If an async swap device is fast enough, the
>>> page writeback can finish with that folio while the page reclaim is
>>> still working on the rest of the batch containing it. In this case,
>>> that folio will remain at the head, and the page reclaim will not
>>> retry it before reaching there.
>>>
>>> Commit 359a5e1416ca ("mm: multi-gen LRU: retry folios written back
>>> while isolated") fixed this issue, but only for MGLRU; it also exists
>>> in the traditional active/inactive LRU. The issue is worse when a THP
>>> is split, because that makes the list longer, so a batch takes longer
>>> to finish.
>>>
>>> Fix the issue in the same way for the traditional LRU: first extract
>>> the common logic into a new helper, find_folios_written_back(), then
>>> reuse it in shrink_inactive_list(), and finally retry reclaiming the
>>> folios that may have missed their rotation.
>>>
>>> Link: https://lore.kernel.org/linux-kernel/20241010081802.290893-1-chenridong@huaweicloud.com/
>>> Link: https://lore.kernel.org/linux-kernel/CAGsJ_4zqL8ZHNRZ44o_CC69kE7DBVXvbZfvmQxMGiFqRxqHQdA@mail.gmail.com/
>>> Signed-off-by: Chen Ridong
>>> ---
>>>  mm/vmscan.c | 108 ++++++++++++++++++++++++++++++++++------------------
>>>  1 file changed, 70 insertions(+), 38 deletions(-)
>>>
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index 39886f435ec5..e67e446540ba 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -283,6 +283,39 @@ static void set_task_reclaim_state(struct task_struct *task,
>>>          task->reclaim_state = rs;
>>>  }
>>>
>>> +/**
>>> + * find_folios_written_back - Find and move the written-back folios to a new list.
>>> + * @list: the list of isolated folios
>>> + * @clean: the list to collect the written-back folios
>>> + * @is_retried: whether the list has already been retried
>>> + */
>>> +static inline void find_folios_written_back(struct list_head *list,
>>> +                struct list_head *clean, bool is_retried)
>>> +{
>>> +        struct folio *folio;
>>> +        struct folio *next;
>>> +
>>> +        list_for_each_entry_safe_reverse(folio, next, list, lru) {
>>> +                if (!folio_evictable(folio)) {
>>> +                        list_del(&folio->lru);
>>> +                        folio_putback_lru(folio);
>>> +                        continue;
>>> +                }
>>> +
>>> +                /* retry folios that may have missed folio_rotate_reclaimable() */
>>> +                if (!is_retried && !folio_test_active(folio) && !folio_mapped(folio) &&
>>> +                    !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
>>> +                        list_move(&folio->lru, clean);
>>> +                        continue;
>>> +                }
>>> +
>>> +                /* don't add rejected folios to the oldest generation */
>>> +                if (lru_gen_enabled() && !lru_gen_distance(folio, false))
>>> +                        set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
>>> +        }
>>> +}
>>> +
>>>  /*
>>>   * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
>>>   * scan_control->nr_reclaimed.
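For anyone who wants to see the helper's triage in isolation, here is a
minimal user-space model of it. This is only a sketch: the "folios" are a
plain array of flag structs, the MGLRU-only branch
(lru_gen_enabled()/lru_gen_distance()) is dropped, and every name in it
is made up rather than kernel API:

#include <stdbool.h>
#include <stdio.h>

struct fake_folio {
        bool evictable, active, mapped, dirty, writeback;
        const char *where;              /* set by the triage pass below */
};

/* Mirrors find_folios_written_back(): walk the batch in reverse and
 * decide where each entry goes. */
static void triage(struct fake_folio *batch, int n, bool is_retried)
{
        for (int i = n - 1; i >= 0; i--) {
                struct fake_folio *f = &batch[i];

                if (!f->evictable) {
                        f->where = "putback";   /* folio_putback_lru() */
                        continue;
                }
                /* clean, inactive, unmapped: probably missed its rotation */
                if (!is_retried && !f->active && !f->mapped &&
                    !f->dirty && !f->writeback) {
                        f->where = "clean list";        /* retried by the caller */
                        continue;
                }
                f->where = "back to LRU";       /* via move_folios_to_lru() */
        }
}

int main(void)
{
        struct fake_folio batch[] = {
                { .evictable = true },                          /* retry candidate */
                { .evictable = true, .writeback = true },       /* still under IO */
                { .evictable = false },                         /* e.g. mlocked */
        };

        triage(batch, 3, false);
        for (int i = 0; i < 3; i++)
                printf("folio %d -> %s\n", i, batch[i].where);
        return 0;
}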
>>> @@ -1959,14 +1992,18 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>                  enum lru_list lru)
>>>  {
>>>          LIST_HEAD(folio_list);
>>> +        LIST_HEAD(clean_list);
>>>          unsigned long nr_scanned;
>>> -        unsigned int nr_reclaimed = 0;
>>> +        unsigned int nr_reclaimed, total_reclaimed = 0;
>>> +        unsigned int nr_pageout = 0;
>>> +        unsigned int nr_unqueued_dirty = 0;
>>>          unsigned long nr_taken;
>>>          struct reclaim_stat stat;
>>>          bool file = is_file_lru(lru);
>>>          enum vm_event_item item;
>>>          struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>>          bool stalled = false;
>>> +        bool is_retried = false;
>
> The name is_retried is a bit confusing. It should be is_retry or
> is_retrying, since we are currently retrying, not stating that we have
> already retried.
>
>>>
>>>          while (unlikely(too_many_isolated(pgdat, file, sc))) {
>>>                  if (stalled)
>>> @@ -2000,22 +2037,47 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>          if (nr_taken == 0)
>>>                  return 0;
>>>
>>> +retry:
>>>          nr_reclaimed = shrink_folio_list(&folio_list, pgdat, sc, &stat, false);
>>>
>>> +        sc->nr.dirty += stat.nr_dirty;
>>> +        sc->nr.congested += stat.nr_congested;
>>> +        sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
>>> +        sc->nr.writeback += stat.nr_writeback;
>>> +        sc->nr.immediate += stat.nr_immediate;
>>> +        total_reclaimed += nr_reclaimed;
>>> +        nr_pageout += stat.nr_pageout;
>>> +        nr_unqueued_dirty += stat.nr_unqueued_dirty;
>>> +
>>> +        trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
>>> +                        nr_scanned, nr_reclaimed, &stat, sc->priority, file);
>>
>> This is a bit odd, as nr_scanned during a retry still uses the
>> previous nr_scanned value. However, I find that mglru shows no
>> difference:
>>
>> retry:
>>         reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false);
>>         sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
>>         sc->nr_reclaimed += reclaimed;
>>         trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
>>                         scanned, reclaimed, &stat, sc->priority,
>>                         type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>>
>> Currently, the active/inactive path aligns with mglru in this trace.
>> It seems the userspace BPF should recognize that nr_scanned during a
>> retry doesn't mean we are isolating that many new folios. Ideally, the
>> is_retry flag would be passed to the trace, allowing userspace to
>> identify a retry and disregard the nr_scanned value.
>>
>> It might be worth addressing this in a separate patch. Adding Bixuan
>> to clarify how userspace depends on this trace and whether "retry"
>> will break his userspace BPF for both the MGLRU and active/inactive
>> cases.
>>
>> Otherwise, the patch looks good to me.
>>
>
> By the way, it is completely clear that the trace was added after
> mglru's retry:
> https://lore.kernel.org/linux-mm/20240105013607.2868-3-cuibixuan@vivo.com/
>
> Therefore, I don't believe the potential confusion about nr_scanned in
> the trace should prevent Ridong's fix for the missed rotation of
> written-back folios from proceeding.
>
> If there is an issue with that, we should open a separate thread to
> address the trace.
>
> Please feel free to add the below in the future version, after you fix
> "is_retried":
>
> Reviewed-by: Barry Song
>

Thank you very much. I will update.
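One more note on the tracepoint discussion above. This is only an
illustration, not a claim about Bixuan's actual BPF program: a consumer
that naively sums nr_scanned across events would double-count on a
retry, while summing nr_reclaimed stays correct. A tiny self-contained
C sketch with made-up numbers:

#include <stdio.h>

/* Two hypothetical mm_vmscan_lru_shrink_inactive events; the second
 * comes from the retry pass and repeats the same nr_scanned. */
struct event { unsigned long nr_scanned, nr_reclaimed; };

int main(void)
{
        struct event ev[] = {
                { .nr_scanned = 32, .nr_reclaimed = 20 },       /* first pass */
                { .nr_scanned = 32, .nr_reclaimed = 8 },        /* retry pass */
        };
        unsigned long scanned = 0, reclaimed = 0;

        for (int i = 0; i < 2; i++) {
                scanned += ev[i].nr_scanned;            /* over-counts: 64, not 32 */
                reclaimed += ev[i].nr_reclaimed;        /* correct: 28 */
        }
        printf("scanned=%lu reclaimed=%lu\n", scanned, reclaimed);
        return 0;
}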
Best regards,
Ridong

>>> +
>>> +        find_folios_written_back(&folio_list, &clean_list, is_retried);
>>> +
>>>          spin_lock_irq(&lruvec->lru_lock);
>>>          move_folios_to_lru(lruvec, &folio_list);
>>>
>>>          __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
>>>                          stat.nr_demoted);
>>> -        __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
>>>          item = PGSTEAL_KSWAPD + reclaimer_offset();
>>>          if (!cgroup_reclaim(sc))
>>>                  __count_vm_events(item, nr_reclaimed);
>>>          __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
>>>          __count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
>>> +
>>> +        if (!list_empty(&clean_list)) {
>>> +                list_splice_init(&clean_list, &folio_list);
>>> +                is_retried = true;
>>> +                spin_unlock_irq(&lruvec->lru_lock);
>>> +                goto retry;
>>> +        }
>>> +        __mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
>>>          spin_unlock_irq(&lruvec->lru_lock);
>>> +        sc->nr.taken += nr_taken;
>>> +        if (file)
>>> +                sc->nr.file_taken += nr_taken;
>>>
>>> -        lru_note_cost(lruvec, file, stat.nr_pageout, nr_scanned - nr_reclaimed);
>>> +        lru_note_cost(lruvec, file, nr_pageout, nr_scanned - total_reclaimed);
>>>
>>>          /*
>>>           * If dirty folios are scanned that are not queued for IO, it
>>> @@ -2028,7 +2090,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>           * the flushers simply cannot keep up with the allocation
>>>           * rate. Nudge the flusher threads in case they are asleep.
>>>           */
>>> -        if (stat.nr_unqueued_dirty == nr_taken) {
>>> +        if (nr_unqueued_dirty == nr_taken) {
>>>                  wakeup_flusher_threads(WB_REASON_VMSCAN);
>>>                  /*
>>>                   * For cgroupv1 dirty throttling is achieved by waking up
>>> @@ -2043,18 +2105,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
>>>                  reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
>>>          }
>>>
>>> -        sc->nr.dirty += stat.nr_dirty;
>>> -        sc->nr.congested += stat.nr_congested;
>>> -        sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
>>> -        sc->nr.writeback += stat.nr_writeback;
>>> -        sc->nr.immediate += stat.nr_immediate;
>>> -        sc->nr.taken += nr_taken;
>>> -        if (file)
>>> -                sc->nr.file_taken += nr_taken;
>>> -
>>> -        trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
>>> -                        nr_scanned, nr_reclaimed, &stat, sc->priority, file);
>>> -        return nr_reclaimed;
>>> +        return total_reclaimed;
>>>  }
>>>
>>>  /*
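Since the retry control flow above is spread across several hunks, here
is a compact, runnable model of just that loop. Plain integers stand in
for the folio lists and the reclaim fractions are invented; none of
these names are kernel API:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
        int folio_list = 8;     /* folios isolated in the one batch */
        int clean_list = 0;     /* folios that missed their rotation */
        int total_reclaimed = 0;
        bool is_retried = false;

retry:
        total_reclaimed += folio_list / 2;      /* pretend half get reclaimed */
        folio_list -= folio_list / 2;

        if (!is_retried) {              /* find_folios_written_back() */
                clean_list = folio_list;        /* pretend the rest are clean */
                folio_list = 0;
        }

        if (clean_list) {               /* list_splice_init() + goto retry */
                folio_list = clean_list;
                clean_list = 0;
                is_retried = true;      /* guarantees at most one retry */
                goto retry;
        }

        printf("total_reclaimed=%d\n", total_reclaimed);
        return 0;
}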
>>> @@ -4585,12 +4636,10 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
>>>          int reclaimed;
>>>          LIST_HEAD(list);
>>>          LIST_HEAD(clean);
>>> -        struct folio *folio;
>>> -        struct folio *next;
>>>          enum vm_event_item item;
>>>          struct reclaim_stat stat;
>>>          struct lru_gen_mm_walk *walk;
>>> -        bool skip_retry = false;
>>> +        bool is_retried = false;
>>>          struct lru_gen_folio *lrugen = &lruvec->lrugen;
>>>          struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>>>          struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>> @@ -4616,24 +4665,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
>>>                          scanned, reclaimed, &stat, sc->priority,
>>>                          type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
>>>
>>> -        list_for_each_entry_safe_reverse(folio, next, &list, lru) {
>>> -                if (!folio_evictable(folio)) {
>>> -                        list_del(&folio->lru);
>>> -                        folio_putback_lru(folio);
>>> -                        continue;
>>> -                }
>>> -
>>> -                /* retry folios that may have missed folio_rotate_reclaimable() */
>>> -                if (!skip_retry && !folio_test_active(folio) && !folio_mapped(folio) &&
>>> -                    !folio_test_dirty(folio) && !folio_test_writeback(folio)) {
>>> -                        list_move(&folio->lru, &clean);
>>> -                        continue;
>>> -                }
>>> -
>>> -                /* don't add rejected folios to the oldest generation */
>>> -                if (!lru_gen_distance(folio, false))
>>> -                        set_mask_bits(&folio->flags, LRU_REFS_FLAGS, BIT(PG_active));
>>> -        }
>>> +        find_folios_written_back(&list, &clean, is_retried);
>>>
>>>          spin_lock_irq(&lruvec->lru_lock);
>>>
>>> @@ -4656,7 +4688,7 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
>>>          list_splice_init(&clean, &list);
>>>
>>>          if (!list_empty(&list)) {
>>> -                skip_retry = true;
>>> +                is_retried = true;
>>>                  goto retry;
>>>          }
>>>
>>> --
>>> 2.34.1
>>>
>>
>
> Thanks
> barry