* Slab corruption mm3 + davem fixes
@ 2003-05-11 3:19 Ed Tomlinson
2003-05-11 16:21 ` Ed Tomlinson
0 siblings, 1 reply; 8+ messages in thread
From: Ed Tomlinson @ 2003-05-11 3:19 UTC (permalink / raw)
To: akpm, davem, linux-mm
Hi,
I looked at my logs and found the following error. My kernel is 69-mm3
with two davem fixes applied.
May 10 22:41:06 oscar kernel: *************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*********************************************************************************************************
May 10 22:41:06 oscar kernel: **********************************************************************A5
May 10 22:41:06 oscar kernel: Call Trace:
May 10 22:41:06 oscar kernel: [__slab_error+30/32] __slab_error+0x1e/0x20
May 10 22:41:06 oscar kernel: [check_poison_obj+376/384] check_poison_obj+0x178/0x180
May 10 22:41:06 oscar kernel: [kmalloc+221/392] kmalloc+0xdd/0x188
May 10 22:41:06 oscar kernel: [alloc_skb+64/240] alloc_skb+0x40/0xf0
May 10 22:41:06 oscar kernel: [alloc_skb+64/240] alloc_skb+0x40/0xf0
May 10 22:41:06 oscar kernel: [skb_copy+45/204] skb_copy+0x2d/0xcc
May 10 22:41:06 oscar kernel: [_end+615445203/1070187180] skb_ip_make_writable+0xcf/0x164 [iptable_nat]
May 10 22:41:06 oscar kernel: [cache_init_objs+71/308] cache_init_objs+0x47/0x134
May 10 22:41:06 oscar kernel: [_end+615444563/1070187180] icmp_reply_translation+0x33/0x1e4 [iptable_nat]
May 10 22:41:06 oscar kernel: [_end+615450270/1070187180] gcc2_compiled.+0xc2/0x1d8 [iptable_nat]
May 10 22:41:06 oscar kernel: [_end+615450641/1070187180] ip_nat_out+0x5d/0x64 [iptable_nat]
May 10 22:41:06 oscar kernel: [ip_finish_output2+0/416] ip_finish_output2+0x0/0x1a0
May 10 22:41:06 oscar kernel: [nf_iterate+63/156] nf_iterate+0x3f/0x9c
May 10 22:41:06 oscar kernel: [ip_finish_output2+0/416] ip_finish_output2+0x0/0x1a0
May 10 22:41:06 oscar kernel: [nf_hook_slow+149/296] nf_hook_slow+0x95/0x128
May 10 22:41:06 oscar kernel: [ip_finish_output2+0/416] ip_finish_output2+0x0/0x1a0
May 10 22:41:06 oscar kernel: [_end+615462636/1070187180] ip_nat_out_ops+0x0/0x1c [iptable_nat]
May 10 22:41:06 oscar kernel: [ip_output+535/544] ip_output+0x217/0x220
May 10 22:41:06 oscar kernel: [ip_finish_output2+0/416] ip_finish_output2+0x0/0x1a0
May 10 22:41:06 oscar kernel: [nf_hook_slow+149/296] nf_hook_slow+0x95/0x128
May 10 22:41:06 oscar kernel: [ip_forward_finish+39/60] ip_forward_finish+0x27/0x3c
May 10 22:41:06 oscar kernel: [nf_hook_slow+208/296] nf_hook_slow+0xd0/0x128
May 10 22:41:06 oscar kernel: [ip_forward+490/564] ip_forward+0x1ea/0x234
May 10 22:41:06 oscar kernel: [ip_forward_finish+0/60] ip_forward_finish+0x0/0x3c
May 10 22:41:06 oscar kernel: [ip_rcv_finish+441/512] ip_rcv_finish+0x1b9/0x200
May 10 22:41:06 oscar kernel: [nf_hook_slow+208/296] nf_hook_slow+0xd0/0x128
May 10 22:41:06 oscar kernel: [ip_rcv+924/984] ip_rcv+0x39c/0x3d8
May 10 22:41:06 oscar kernel: [ip_rcv_finish+0/512] ip_rcv_finish+0x0/0x200
May 10 22:41:06 oscar kernel: [netif_receive_skb+283/332] netif_receive_skb+0x11b/0x14c
May 10 22:41:06 oscar kernel: [process_backlog+113/292] process_backlog+0x71/0x124
May 10 22:41:06 oscar kernel: [net_rx_action+114/328] net_rx_action+0x72/0x148
May 10 22:41:06 oscar kernel: [do_softirq+82/172] do_softirq+0x52/0xac
May 10 22:41:06 oscar kernel: [local_bh_enable+82/108] local_bh_enable+0x52/0x6c
May 10 22:41:06 oscar kernel: [_end+614250407/1070187180] ppp_asynctty_receive+0x4f/0x84 [ppp_async]
May 10 22:41:06 oscar kernel: [pty_write+237/336] pty_write+0xed/0x150
May 10 22:41:06 oscar kernel: [write_chan+424/516] write_chan+0x1a8/0x204
May 10 22:41:06 oscar kernel: [default_wake_function+0/24] default_wake_function+0x0/0x18
May 10 22:41:06 oscar kernel: [default_wake_function+0/24] default_wake_function+0x0/0x18
May 10 22:41:06 oscar kernel: [tty_write+515/708] tty_write+0x203/0x2c4
May 10 22:41:06 oscar kernel: [write_chan+0/516] write_chan+0x0/0x204
May 10 22:41:06 oscar kernel: [vfs_write+162/208] vfs_write+0xa2/0xd0
May 10 22:41:06 oscar kernel: [sys_write+46/76] sys_write+0x2e/0x4c
May 10 22:41:06 oscar kernel: [syscall_call+7/11] syscall_call+0x7/0xb
May 10 22:41:06 oscar kernel:
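(For context: the `check_poison_obj` frames above mean the slab allocator found a freed object whose poison fill was overwritten before it was handed out again, i.e. a write through a stale pointer while the object sat free. A minimal user-space sketch of the idea follows; the 0xa5 byte echoes the trailing "A5" in the dump, but the helper names and sizes here are illustrative, not the kernel's actual slab code.)

```c
#include <stddef.h>
#include <string.h>

#define POISON   0xa5   /* illustrative poison byte written into freed objects */
#define OBJ_SIZE 32

/* On free: fill the dead object with the poison pattern. */
static void poison_obj(unsigned char *obj)
{
	memset(obj, POISON, OBJ_SIZE);
}

/* On the next alloc: verify the pattern survived.  Any mismatch
 * means something wrote through a stale pointer while the object
 * was on the free list, which is what the trace above reports. */
static int check_poison(const unsigned char *obj)
{
	size_t i;

	for (i = 0; i < OBJ_SIZE; i++)
		if (obj[i] != POISON)
			return 0;   /* corruption detected */
	return 1;
}
```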
And with an ipchains based firewall:
May 9 19:55:54 oscar kernel: *************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*******************************************************************************************************************
*********************************************************************************************************
May 9 19:55:54 oscar kernel: **********************************************************************A5
May 9 19:55:54 oscar kernel: Call Trace:
May 9 19:55:55 oscar kernel: [__slab_error+30/32] __slab_error+0x1e/0x20
May 9 19:55:55 oscar kernel: [check_poison_obj+376/384] check_poison_obj+0x178/0x180
May 9 19:55:55 oscar kernel: [kmalloc+221/392] kmalloc+0xdd/0x188
May 9 19:55:55 oscar kernel: [alloc_skb+64/240] alloc_skb+0x40/0xf0
May 9 19:55:55 oscar kernel: [alloc_skb+64/240] alloc_skb+0x40/0xf0
May 9 19:55:55 oscar kernel: [_end+547372157/1070273676] icmp_manip_pkt+0x45/0x64 [ipchains]
May 9 19:55:55 oscar kernel: [skb_copy+45/204] skb_copy+0x2d/0xcc
May 9 19:55:55 oscar kernel: [_end+547367523/1070273676] skb_ip_make_writable+0xcf/0x164 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547367236/1070273676] icmp_reply_translation+0x194/0x1e4 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547366883/1070273676] icmp_reply_translation+0x33/0x1e4 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547362523/1070273676] check_for_demasq+0xbb/0x1bc [ipchains]
May 9 19:55:55 oscar kernel: [_end+547400300/1070273676] ip_conntrack_protocol_icmp+0x0/0x40 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547359450/1070273676] fw_in+0x162/0x2b8 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547400948/1070273676] ipfw_ops+0x0/0x18 [ipchains]
May 9 19:55:55 oscar kernel: [_end+547359672/1070273676] fw_in+0x240/0x2b8 [ipchains]
May 9 19:55:55 oscar kernel: [nf_iterate+63/156] nf_iterate+0x3f/0x9c
May 9 19:55:55 oscar kernel: [ip_rcv_finish+0/512] ip_rcv_finish+0x0/0x200
May 9 19:55:55 oscar kernel: [nf_hook_slow+149/296] nf_hook_slow+0x95/0x128
May 9 19:55:55 oscar kernel: [ip_rcv_finish+0/512] ip_rcv_finish+0x0/0x200
May 9 19:55:55 oscar kernel: [_end+547400364/1070273676] preroute_ops+0x0/0x1c [ipchains]
May 9 19:55:55 oscar kernel: [ip_rcv+924/984] ip_rcv+0x39c/0x3d8
May 9 19:55:55 oscar kernel: [ip_rcv_finish+0/512] ip_rcv_finish+0x0/0x200
May 9 19:55:55 oscar kernel: [netif_receive_skb+283/332] netif_receive_skb+0x11b/0x14c
May 9 19:55:55 oscar kernel: [process_backlog+113/292] process_backlog+0x71/0x124
May 9 19:55:55 oscar kernel: [net_rx_action+114/328] net_rx_action+0x72/0x148
May 9 19:55:55 oscar kernel: [do_softirq+82/172] do_softirq+0x52/0xac
May 9 19:55:55 oscar kernel: [local_bh_enable+82/108] local_bh_enable+0x52/0x6c
May 9 19:55:55 oscar kernel: [_end+547215751/1070273676] ppp_asynctty_receive+0x4f/0x84 [ppp_async]
May 9 19:55:55 oscar kernel: [pty_write+237/336] pty_write+0xed/0x150
May 9 19:55:55 oscar kernel: [write_chan+424/516] write_chan+0x1a8/0x204
May 9 19:55:55 oscar kernel: [default_wake_function+0/24] default_wake_function+0x0/0x18
May 9 19:55:55 oscar kernel: [default_wake_function+0/24] default_wake_function+0x0/0x18
May 9 19:55:55 oscar kernel: [tty_write+515/708] tty_write+0x203/0x2c4
May 9 19:55:55 oscar kernel: [write_chan+0/516] write_chan+0x0/0x204
May 9 19:55:55 oscar kernel: [vfs_write+162/208] vfs_write+0xa2/0xd0
May 9 19:55:55 oscar kernel: [sys_write+46/76] sys_write+0x2e/0x4c
May 9 19:55:55 oscar kernel: [syscall_call+7/11] syscall_call+0x7/0xb
May 9 19:55:55 oscar kernel:
Hope this helps,
Ed Tomlinson
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org
* Re: Slab corruption mm3 + davem fixes
2003-05-11 3:19 Slab corruption mm3 + davem fixes Ed Tomlinson
@ 2003-05-11 16:21 ` Ed Tomlinson
2003-05-11 22:01 ` David S. Miller
0 siblings, 1 reply; 8+ messages in thread
From: Ed Tomlinson @ 2003-05-11 16:21 UTC (permalink / raw)
To: akpm, davem, linux-mm; +Cc: linux-kernel
Hi,
I am also seeing this on 69-bk (as of Sunday morning).
Ed
On May 10, 2003 11:19 pm, Ed Tomlinson wrote:
> Hi,
>
> I looked at my logs and found the following error in it. My kernel is
> 69-mm3 with two davem fixes on it.
...
* Re: Slab corruption mm3 + davem fixes
2003-05-11 22:15 ` Andrew Morton
@ 2003-05-11 21:24 ` David S. Miller
2003-05-11 22:34 ` David S. Miller
1 sibling, 0 replies; 8+ messages in thread
From: David S. Miller @ 2003-05-11 21:24 UTC (permalink / raw)
To: akpm; +Cc: tomlins, linux-mm, linux-kernel, rusty, laforge
> Did you mean to send a one megabyte diff?
Fuck, wrong patch; that one was a 2.4.x backport of IPSEC. Enjoy :-)
* Re: Slab corruption mm3 + davem fixes
2003-05-11 16:21 ` Ed Tomlinson
@ 2003-05-11 22:01 ` David S. Miller
2003-05-11 22:15 ` Andrew Morton
0 siblings, 1 reply; 8+ messages in thread
From: David S. Miller @ 2003-05-11 22:01 UTC (permalink / raw)
To: Ed Tomlinson; +Cc: akpm, linux-mm, linux-kernel, rusty, laforge
[-- Attachment #1: Type: text/plain, Size: 956 bytes --]
On Sun, 2003-05-11 at 09:21, Ed Tomlinson wrote:
> I am also seeing this on 69-bk (as of Sunday morning)
...
> On May 10, 2003 11:19 pm, Ed Tomlinson wrote:
> > I looked at my logs and found the following error in it. My kernel is
> > 69-mm3 with two davem fixes on it.
...
> > May 10 22:41:06 oscar kernel: Call Trace:
> > May 10 22:41:06 oscar kernel: [__slab_error+30/32] __slab_error+0x1e/0x20
> > May 10 22:41:06 oscar kernel: [check_poison_obj+376/384]
> > check_poison_obj+0x178/0x180 May 10 22:41:06 oscar kernel:
> > [kmalloc+221/392] kmalloc+0xdd/0x188 May 10 22:41:06 oscar kernel:
> > [alloc_skb+64/240] alloc_skb+0x40/0xf0 May 10 22:41:06 oscar kernel:
Yeah, more bugs in the NAT netfilter changes. Debugging this one
patch is becoming a full time job :-(
This should fix it. Rusty, you're computing checksums and mangling
src/dst using header pointers potentially pointing into freed skbs.
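(The failure mode described here can be sketched in isolation: a pointer into a packet buffer is saved, the buffer is then reallocated, as skb_ip_make_writable does via skb_copy in the traces above, and the saved pointer is used afterwards. `struct pkt` and `make_writable` below are hypothetical stand-ins for the skb and its helpers, not the actual netfilter code.)

```c
#include <stdlib.h>
#include <string.h>

struct pkt {
	unsigned char *data;   /* stand-in for skb->data */
	size_t len;
};

/* Stand-in for skb_ip_make_writable(): replace the shared buffer
 * with a private copy.  Every pointer taken into the old buffer,
 * e.g. a saved IP header pointer, now dangles. */
static void make_writable(struct pkt *p)
{
	unsigned char *copy = malloc(p->len);

	memcpy(copy, p->data, p->len);
	free(p->data);
	p->data = copy;
}

/* Safe pattern: re-derive the header pointer *after* the copy,
 * then mangle src/dst and recompute checksums through it.  The
 * buggy variant keeps the pre-copy pointer and scribbles on the
 * freed buffer instead, tripping the slab poison check. */
static unsigned char mangle_first_byte(struct pkt *p)
{
	unsigned char *iph;

	make_writable(p);
	iph = p->data;         /* not a pointer saved before the copy */
	iph[0] = 0x45;
	return p->data[0];
}
```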
--
David S. Miller <davem@redhat.com>
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #2: diff --]
[-- Type: text/plain; name=diff; charset=UTF-8, Size: 1088425 bytes --]
# This is a BitKeeper generated patch for the following project:
# Project Name: Linux kernel tree
# This patch format is intended for GNU patch command version 2.5 or higher.
# This patch includes the following deltas:
# ChangeSet 1.1199 -> 1.1411
# drivers/net/3c59x.c 1.19 -> 1.20
# include/net/route.h 1.7 -> 1.15
# net/ipv4/syncookies.c 1.7 -> 1.10
# include/linux/sysctl.h 1.23 -> 1.24
# net/ipv4/ip_forward.c 1.2 -> 1.9
# net/ipv4/raw.c 1.7 -> 1.17
# net/ipv6/tcp_ipv6.c 1.23 -> 1.32
# net/ipv6/af_inet6.c 1.9 -> 1.11
# arch/cris/config.in 1.13 -> 1.14
# include/linux/skbuff.h 1.15 -> 1.18
# include/net/dn_route.h 1.1 -> 1.2
# net/ipv6/ndisc.c 1.20 -> 1.24
# net/ipv6/exthdrs.c 1.4 -> 1.7
# net/ipv4/ipconfig.c 1.18 -> 1.19
# net/ipv6/ip6_output.c 1.11 -> 1.17
# net/ipv4/netfilter/ipt_MASQUERADE.c 1.5 -> 1.6
# net/ipv4/ip_input.c 1.5 -> 1.10
# arch/sparc/config.in 1.14 -> 1.15
# net/ipv6/ipv6_sockglue.c 1.8 -> 1.10
# net/ipv4/Config.in 1.3 -> 1.8
# net/ipv6/datagram.c 1.5 -> 1.6
# arch/s390x/config.in 1.9 -> 1.10
# Makefile 1.208 -> 1.209
# include/linux/inetdevice.h 1.4 -> 1.5
# include/net/tcp.h 1.22 -> 1.24
# include/linux/ipsec.h 1.2 -> 1.4
# net/ipv4/protocol.c 1.3 -> 1.4
# net/ipv4/af_inet.c 1.10 -> 1.15
# net/ipv4/netfilter/ip_nat_core.c 1.17 -> 1.18
# net/ipv4/udp.c 1.9 -> 1.19
# net/ipv6/protocol.c 1.2 -> 1.3
# include/asm-ppc/kmap_types.h 1.8 -> 1.11
# net/ipv4/fib_semantics.c 1.6 -> 1.7
# net/core/skbuff.c 1.9 -> 1.12
# arch/arm/config.in 1.21 -> 1.22
# net/ipv6/ip6_input.c 1.2 -> 1.9
# net/ipv6/route.c 1.14 -> 1.22
# net/ipv4/tcp_input.c 1.23 -> 1.24
# net/ipv4/tcp_minisocks.c 1.13 -> 1.14
# include/asm-sparc/kmap_types.h 1.6 -> 1.9
# include/net/ip6_route.h 1.3 -> 1.5
# arch/alpha/config.in 1.22 -> 1.23
# net/Config.in 1.10 -> 1.12
# include/linux/udp.h 1.1 -> 1.4
# net/ipv4/devinet.c 1.8 -> 1.10
# arch/ppc64/config.in 1.5 -> 1.6
# net/ipv4/Makefile 1.3 -> 1.10
# net/ipv4/netfilter/ip_fw_compat_masq.c 1.4 -> 1.5
# arch/s390/config.in 1.9 -> 1.10
# net/ipv6/ip6_fib.c 1.7 -> 1.8
# include/net/ipv6.h 1.5 -> 1.6
# net/ipv4/tcp.c 1.26 -> 1.28
# net/ipv6/udp.c 1.9 -> 1.16
# net/ipv4/netfilter/ipt_REJECT.c 1.11 -> 1.14
# include/linux/ip.h 1.1 -> 1.4
# net/sched/cls_route.c 1.4 -> 1.5
# net/ipv4/ip_sockglue.c 1.6 -> 1.10
# net/ipv6/icmp.c 1.14 -> 1.18
# arch/sparc64/config.in 1.25 -> 1.26
# include/linux/netlink.h 1.7 -> 1.9
# net/ipv6/reassembly.c 1.6 -> 1.8
# net/ipv4/arp.c 1.10 -> 1.12
# arch/i386/config.in 1.41 -> 1.42
# net/ipv4/tcp_ipv4.c 1.22 -> 1.31
# include/net/flow.h 1.1 -> 1.3
# include/net/ip_fib.h 1.2 -> 1.4
# include/net/raw.h 1.2 -> 1.3
# net/ipv4/icmp.c 1.16 -> 1.21
# net/ipv6/raw.c 1.10 -> 1.13
# include/net/ip6_fw.h 1.1 -> (deleted)
# include/linux/netdevice.h 1.24 -> 1.26
# net/netsyms.c 1.35 -> 1.55
# include/net/protocol.h 1.4 -> 1.9
# net/Makefile 1.6 -> 1.9
# include/net/rawv6.h 1.2 -> 1.3
# net/atm/clip.c 1.4 -> 1.5
# net/ipv4/netfilter/ip_conntrack_standalone.c 1.10 -> 1.11
# net/core/netfilter.c 1.7 -> 1.8
# net/ipv6/ip6_fw.c 1.2 -> (deleted)
# net/ipv4/fib_frontend.c 1.7 -> 1.8
# net/ipv4/ipmr.c 1.8 -> 1.14
# net/ipv4/fib_hash.c 1.2 -> 1.3
# net/ipv6/Config.in 1.3 -> 1.5
# include/linux/in.h 1.1 -> 1.4
# include/net/transp_v6.h 1.1 -> 1.2
# net/netlink/af_netlink.c 1.10 -> 1.11
# net/core/dev.c 1.35 -> 1.36
# net/ipv4/ipip.c 1.13 -> 1.23
# arch/sh/config.in 1.8 -> 1.9
# arch/ia64/config.in 1.17 -> 1.18
# net/core/rtnetlink.c 1.5 -> 1.6
# net/ipv4/netfilter/ipt_TCPMSS.c 1.5 -> 1.7
# arch/mips/config-shared.in 1.2 -> 1.3
# arch/ppc/config.in 1.31 -> 1.32
# net/ipv4/fib_rules.c 1.3 -> 1.4
# MAINTAINERS 1.101 -> 1.102
# net/core/dst.c 1.3 -> 1.6
# include/linux/in6.h 1.4 -> 1.5
# lib/Config.in 1.1 -> 1.2
# net/ipv4/tcp_output.c 1.15 -> 1.17
# include/net/ip.h 1.2 -> 1.6
# arch/x86_64/config.in 1.4 -> 1.5
# include/linux/ipv6.h 1.1 -> 1.3
# include/asm-i386/kmap_types.h 1.6 -> 1.9
# include/net/sock.h 1.15 -> 1.20
# arch/m68k/config.in 1.17 -> 1.18
# net/ipv4/sysctl_net_ipv4.c 1.8 -> 1.9
# include/net/ip6_fib.h 1.2 -> 1.3
# net/ipv4/ip_gre.c 1.8 -> 1.16
# net/ipv6/Makefile 1.2 -> 1.8
# net/ipv4/ip_nat_dumb.c 1.2 -> 1.4
# include/linux/rtnetlink.h 1.6 -> 1.7
# Documentation/Configure.help 1.179 -> 1.195
# net/ipv6/sit.c 1.12 -> 1.17
# include/net/dst.h 1.1 -> 1.12
# net/decnet/dn_route.c 1.6 -> 1.9
# include/net/ipip.h 1.2 -> 1.3
# include/asm-x86_64/kmap_types.h 1.3 -> 1.6
# net/decnet/dn_nsp_out.c 1.2 -> 1.3
# net/ipv4/igmp.c 1.7 -> 1.12
# net/ipv4/ip_output.c 1.12 -> 1.23
# net/ipv4/route.c 1.24 -> 1.34
# arch/parisc/config.in 1.5 -> 1.6
# net/ipv4/netfilter/ipt_MIRROR.c 1.3 -> 1.4
# (new) -> 1.3 net/ipv4/xfrm4_input.c
# (new) -> 1.1 net/xfrm/xfrm_output.c
# (new) -> 1.28 net/key/af_key.c
# (new) -> 1.5 include/linux/pfkeyv2.h
# (new) -> 1.3 net/ipv6/xfrm6_policy.c
# (new) -> 1.1 net/xfrm/Config.in
# (new) -> 1.2 net/ipv4/xfrm4_state.c
# (new) -> 1.16 crypto/internal.h
# (new) -> 1.17 net/xfrm/xfrm_user.c
# (new) -> 1.14 net/ipv6/esp6.c
# (new) -> 1.1 include/asm-sparc64/kmap_types.h
# (new) -> 1.3 crypto/blowfish.c
# (new) -> 1.3 net/xfrm/Makefile
# (new) -> 1.7 crypto/compress.c
# (new) -> 1.1 Documentation/crypto/descore-readme.txt
# (new) -> 1.26 crypto/api.c
# (new) -> 1.12 Documentation/crypto/api-intro.txt
# (new) -> 1.27 include/linux/crypto.h
# (new) -> 1.8 crypto/md5.c
# (new) -> 1.6 net/ipv4/ipcomp.c
# (new) -> 1.8 net/xfrm/xfrm_algo.c
# (new) -> 1.2 crypto/crypto_null.c
# (new) -> 1.2 include/net/esp.h
# (new) -> 1.14 crypto/Makefile
# (new) -> 1.3 crypto/hmac.c
# (new) -> 1.2 crypto/serpent.c
# (new) -> 1.7 include/linux/xfrm.h
# (new) -> 1.13 crypto/tcrypt.h
# (new) -> 1.14 net/ipv6/ah6.c
# (new) -> 1.3 net/ipv4/xfrm4_policy.c
# (new) -> 1.2 net/ipv6/xfrm6_state.c
# (new) -> 1.1 net/ipv6/ipv6_syms.c
# (new) -> 1.3 crypto/proc.c
# (new) -> 1.9 crypto/sha1.c
# (new) -> 1.14 crypto/cipher.c
# (new) -> 1.14 crypto/Config.in
# (new) -> 1.2 crypto/sha256.c
# (new) -> 1.21 crypto/tcrypt.c
# (new) -> 1.33 include/net/xfrm.h
# (new) -> 1.7 net/ipv6/xfrm6_input.c
# (new) -> 1.6 crypto/md4.c
# (new) -> 1.1 net/key/Makefile
# (new) -> 1.15 crypto/digest.c
# (new) -> 1.23 net/xfrm/xfrm_policy.c
# (new) -> 1.5 net/ipv4/xfrm4_tunnel.c
# (new) -> 1.6 crypto/autoload.c
# (new) -> 1.3 crypto/deflate.c
# (new) -> 1.30 net/ipv4/esp.c
# (new) -> 1.23 net/ipv4/ah.c
# (new) -> 1.3 crypto/aes.c
# (new) -> 1.10 net/xfrm/xfrm_input.c
# (new) -> 1.2 crypto/twofish.c
# (new) -> 1.2 include/net/ah.h
# (new) -> 1.23 net/xfrm/xfrm_state.c
# (new) -> 1.1 crypto/sha512.c
# (new) -> 1.11 crypto/des.c
#
# The following is the BitKeeper ChangeSet Log
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1200
# [CRYPTO]: Add initial crypto api subsystem.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1201
# [CRYPTO]: Cleanups based upon feedback from Rusty and jgarzik
# - s/__u/u/
# - s/char/u8/
# - Fixed bug in cipher.c, page remapped was off by one block
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1202
# [CRYPTO]: Cleanups based upon feedback from Rusty and jgarzik
# - s/__u/u/
# - s/char/u8/
# - Fixed bug in cipher.c, page remapped was off by one block
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1203
# [CRYPTO]: Use try_inc_mod_count and semaphore for alg list.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1204
# [CRYPTO]: Use kmod to try to autoload modules.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1205
# [CRYPTO]: Bug fixes and cleanups.
# - try_inc_mod_count() already does what crypto_alg_get() was trying to do.
# (feedback from Andrew Morton.)
# - Moved the BUG_ON() in crypto_unregister_alg() further up, no need to
# bother iterating over the list.
# - Always use kmap_atomic (feedback from Andrew Morton). Implemented two
# atomic kmaps, KM_USER for user context and KM_SOFTIRQ for softirq
# context.
# - Fixup KM_CRYPTO_ placement so Dave does not go crazy.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1206
# [CRYPTO]: More bug fixes and cleanups.
# - added back USAGI copyright for HMAC (lost earlier during some
# refactoring).
# - bugfix: make sure tfm pointer is set to NULL during post allocation
# failure path in crypto_alloc_tfm()
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1207
# [CRYPTO]: Add MD4.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1208
# [CRYPTO]: Algorithm lookup API change plus bug fixes.
# - API change: implemented simplest version of algorithm lookup
# by name (feedback from Rusty Russell and Herbert Valerio Riedel).
# - Now need to add the following line to to /etc/modules.conf for
# dynamic module loading:
# alias des3_ede des
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1209
# [CRYPTO]: Run tcrypt through lindent, plus doc update.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1210
# [CRYPTO]: Assert that interfaces are called on correct cipher type.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1211
# [CRYPTO]: Cleanups and more consistency checks.
# - Removed local_bh_disable() from kmap wrapper, not needed now with
# two atomic kmaps.
# - Nuked atomic flag, use in_softirq() instead.
# - Converted crypto_kmap() and crypto_yield() to check in_softirq().
# - Check CRYPTO_MAX_CIPHER_BLOCK_SIZE during alg init.
# - Try to initialize as much at compile time as possible
# (feedback from Christoph Hellwig).
# - Clean up list handling a bit (feedback from Christoph Hellwig).
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1212
# [CRYPTO]: Update to IV get/set interface.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1213
# [CRYPTO]: kunmap does not return a value.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1214
# [CRYPTO]: Build/warning fixups.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1215
# [CRYPTO]: Add some documentation.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1216
# [CRYPTO]: Clean up header file usage.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1217
# [CRYPTO]: Fix some credits.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1218
# [CRYPTO]: Cleanups based upon suggestions by Jeff Garzik.
# - Changed unsigned to unsigned int in algos.
# - Consistent use of u32 for flags throughout api.
# - Use of unsigned int rather than int for counting things which must
# be positive, also replaced size_ts to keep code simpler and lessen
# bloat on some archs.
# - got rid of some unneeded returns.
# - const correctness.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1219
# [CRYPTO]: Uninline some functions to save some bloat.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1220
# [CRYPTO]: Cleanups based upon feedback from jgarzik.
# - make crypto_cipher_flags() return u32 (this means it will return
# the actual flags reliably, instead of being just a boolean op).
# - simplify error path in crypto_init_flags().
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1221
# [CRYPTO]: Add crypto_alg_available interface.
# --------------------------------------------
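The crypto_alg_available interface added in 1.1221 lets callers probe for an algorithm by name before allocating a tfm. A minimal stand-in modeling that contract (the table and function body here are illustrative assumptions; the kernel walks its registered algorithm list under a lock):

```c
/* Hypothetical stand-in for the crypto_alg_available() contract:
 * return nonzero if an algorithm with the given name is registered.
 * The static table is illustrative only. */
static const char *registered_algs[] = { "md5", "sha1", "des", 0 };

static int crypto_alg_available_sketch(const char *name, unsigned int flags)
{
    (void)flags;                        /* flags unused in this sketch */
    for (const char **p = registered_algs; *p; ++p)
        if (strcmp(*p, name) == 0)
            return 1;
    return 0;
}
```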
# 03/05/07 jmorris@intercode.com.au 1.1222
# [CRYPTO]: Rework HMAC interface.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1223
# [CRYPTO]: Include kernel.h in crypto.h
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1224
# [CRYPTO]: Allocate work buffers instead of using kstack.
# --------------------------------------------
# 03/05/07 torvalds@transmeta.com 1.1225
# The crypto auto-load should be enabled if crypto is enabled.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1226
# [CRYPTO]: Add SHA256 plus bug fixes.
# - Bugfix in sha1 copyright
# - Add support for SHA256, test vectors and HMAC test vectors
# - Remove obsolete atomic messages.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1227
# [CRYPTO]: Add blowfish algorithm.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1228
# [CRYPTO]: Make sha256.c more palatable to GCC's optimizers.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1229
# [CRYPTO]: minor updates
# - Fixed min keysize bug for Blowfish (it is 32, not 64).
# - Documentation updates.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1230
# [CRYPTO] kstack cleanup (v0.28)
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1231
# [CRYPTO] Add maintainers entry.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1232
# [CRYPTO] Minor doc update.
# --------------------------------------------
# 03/05/07 jgarzik@redhat.com 1.1233
# [CRYPTO]: Kill accidental double memset.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1234
# [CRYPTO]: Add null algorithms and minor cleanups.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1235
# [CRYPTO]: Kill stray CRYPTO_ALG_TYPE_COMP.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1236
# [CRYPTO]: Add twofish algorithm.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1237
# [CRYPTO]: Add serpent algorithm.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1238
# [CRYPTO]: Documentation update.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1239
# [CRYPTO]: Don't compile procfs stuff if procfs is not enabled.
# --------------------------------------------
# 03/05/07 adam@yggdrasil.com 1.1240
# [CRYPTO]: Simplify crypto memory allocation.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1241
# [CRYPTO]: internal.h needs init.h
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1242
# [CRYPTO]: Add AES algorithm.
# - Merged AES code from Adam J. Richter <adam@yggdrasil.com>
# - Add kconfig help and test vector code from
# Martin Clausen <martin@ostenfeld.dk>
# - Minor cleanups: removed EXPORT_NO_SYMBOLS (not needed for 2.5),
# removed debugging code etc.
# - Documentation updates.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1243
# [CRYPTO]: Use appropriate defaults if AH/ESP is enabled.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1244
# [CRYPTO]: More credits for AES.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1245
# [CRYPTO]: Add support for SHA-384 and SHA-512
# - Merged SHA-384 and SHA-512 code from Kyle McMartin
# <kyle@gondolin.debian.net>
# - Added test vectors.
# - Documentation and credits updates.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1246
# [CRYPTO] remove superfluous goto from des module init exception path
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1247
# [CRYPTO] Add AES and MD4 to tcrypt crypto_alg_available() test.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1248
# [CRYPTO]: in/out scatterlist support for ciphers.
# - Merge scatterwalk patch from Adam J. Richter <adam@yggdrasil.com>
# API change: cipher methods now take in/out scatterlists and nbytes
# params.
# - Merge gss_krb5_crypto update from Adam J. Richter <adam@yggdrasil.com>
# - Add KM_SOFTIRQn (instead of KM_CRYPTO_IN etc).
# - Add asm/kmap_types.h to crypto/internal.h
# - Update cipher.c credits.
# - Update cipher.c documentation.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1249
# [CRYPTO]: Move km_types out of header.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1250
# [CRYPTO]: Add encrypt_iv() and decrypt_iv() methods.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1251
# [CRYPTO]: Eliminate crypto_tfm.crt_ctx, from Adam Richter.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1252
# [CRYPTO]: Documentation updates.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1253
# [CRYPTO-2.4]: Add dummy kmap_types.h header for sparc64.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1254
# [CRYPTO]: Include linux/errno.h as appropriate.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1255
# [CRYPTO-2.4]: module_name does not exist in 2.4.x
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1256
# [CRYPTO]: Make use of crypto_exit_ops() during crypto_free_tfm().
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1257
# [CRYPTO]: Add Deflate algorithm to crypto API.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1258
# [CRYPTO]: deflate module: workaround zlib bug.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1259
# [CRYPTO-2.4]: const static --> static const.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1260
# [CRYPTO]: deflate.c needs slab.h
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1261
# [CRYPTO-2.4]: Fix condition typos in crypto/Config.in
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1262
# [IPV4/IPV6]: Cleanup inet{,6}_protocol.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1263
# [IPV4]: Use generic struct flowi as routing key.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1264
# [NET]: Ipv4 output path changes by Alexey.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1265
# [IPV4]: Provide full proto/ports in flowi route lookups.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1266
# [IPV4]: Kill ip_send, use dst_output instead.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1267
# [NET]: Kill reroute from DST ops, unused.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1268
# [IPV4]: Missing ip_rt_put in ip_route_newports.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1269
# include/linux/ip.h: Define AH/ESP header layout.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1270
# [NET]: Fix rtnetlink metric type, should be u32.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1271
# [NET]: Cleanup DST metrics and abstract MSS/PMTU further.
# - Changed dst named metrics, to RTAX_MAX metrics array.
# - Add inline shorthands to access them
# - Add update_pmtu and get_mss to DST ops.
# - Add path component to DST, it is DST itself by default.
# --------------------------------------------
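The RTAX conversion in 1.1271 can be sketched in plain C. This is a simplified hypothetical model, not the kernel's actual definitions (the real RTAX_* values live in the rtnetlink headers, and the struct and accessor names below are illustrative): named metric fields become one array indexed by RTAX_* constants, read through an inline shorthand.

```c
/* Sketch of the change: instead of individually named dst metric fields
 * (mtu, window, rtt, ...), keep one array indexed by RTAX_* values.
 * Enum values and names here are illustrative, not the kernel's. */
enum { RTAX_MTU = 1, RTAX_WINDOW, RTAX_RTT, RTAX_MAX };

struct dst_sketch {
    unsigned int metrics[RTAX_MAX];     /* one slot per RTAX_* metric */
};

/* inline shorthand in the spirit of the dst_metric()-style accessors */
static inline unsigned int dst_metric_sketch(const struct dst_sketch *dst,
                                             int metric)
{
    return dst->metrics[metric];
}
```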
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1272
# [NET]: Add DST_NOXFRM and DST_NOPOLICY flags.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1273
# net/ipv4/route.c: Create compare_keys to compare flowi identities.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1274
# [IPV4]: Rework key route lookup interface slightly.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1275
# [IPSEC]: Add transform engine and AH implementation.
# --------------------------------------------
# 03/05/07 viro@math.psu.edu 1.1276
# [NET]: Compile fixes.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1277
# [IPV4]: Define IPPROTO_SCTP.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1278
# [UDP]: Delete buggy assertion.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1279
# [NET]: Some missed cases of dst_pmtu conversion.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1280
# [NET]: IPSEC updates.
# - Add ESP transformer.
# - Add AF_KEY socket layer.
# - Rework xfrm structures for user interfaces
# - Add CONFIG_IP_{AH,ESP}.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1281
# [IPSEC-2.4]: Fix inet_getid invocation for 2.4.x
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1282
# [IPSEC]: Fix xfrm policy locking.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1283
# [AF_KEY]: Convert to/from IPSEC_PROTO_ANY.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1284
# [IPSEC]: XFRM policy bug fixes.
# - Fix dst metric memcpy length.
# - Iterator for walking skb sec_path goes in wrong direction.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1285
# [IPSEC]: Bug fixes and updates.
# - Implement IP_IPSEC_POLICY setsockopt
# - Rework input policy checks to use it
# - dst->child destruction is repaired
# - Fix tunnel mode IP header building.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1286
# [IPSEC]: Export xfrm_policy_list.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1287
# [IPSEC]: Allocate work buffers instead of using kstack.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1288
# [IPSEC]: RAWv4 makes inverted policy check.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1289
# [IPSEC]: Semantic fixes with help from Maxim Giryaev.
# - BSD apps want sin_zero cleared in sys_getname.
# - Fix protocol setting in flow descriptor of RAW sockets
# - Wildcard protocol is represented differently in policy
# than for association.
# - Missing update of key manager sequence in xfrm_state entries.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1290
# [IPSEC]: Few changes to keep racoon ISAKMP daemon happy.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1291
# [IPSEC] More work.
# 1. Expiration of SAs. Some missing updates of counters.
# Question: oddly, the RFC defines use_time as the time of first use
# of an SA, but KAME setkey refers to this as lastuse.
# 2. Bug fixes for tunnel mode and forwarding.
# 3. Fix bugs in per-socket policy: policy entries no longer leak but are
# destroyed when the socket is closed, and are cloned onto children of
# listening sockets.
# 4. Implemented 'use' policy: i.e. use IPsec if an SA is available,
# and skip it if not.
# 5. Added sysctl to disable in/out policy on some devices.
# It is set on loopback by default.
# 6. Remove the resolved reference from the template. It is not used,
# and only pollutes the code.
# 7. Added all the SASTATEs, now they make sense.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1292
# [IPSEC]: Fix lockup in xfrm4_dst_check.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1293
# [IPSEC]: More fixes and corrections.
# - Make connect() policy selection actually happen
# - return len instead of 0 on successful pfkey sendmsg
# - make prefixlen checks in a way more compatible with isakmpd
# - key manager wait queues are totally wrong
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1294
# [IPSEC]: Make netlink user interface header.
# --------------------------------------------
# 03/05/07 viro@math.psu.edu 1.1295
# [ipt_TCPMSS]: Compile fix.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1296
# [UDP]: silly bug, local input policy did not work on udp sockets.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1297
# [IPSEC]: ah/esp, 0 was used as the tunnel's protocol.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1298
# [IPSEC]: authentication signature for MD5/SHA was not truncated to conform to the RFC.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1299
# [AF_KEY]: Fix alloc_skb args.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1300
# [IPSEC]: More pfkey2 semantic fixes.
# - xfrm_state.c: never return mature SAs on getspi.
# - af_key.c: do not forget to delete dummy super-larvals when they are resolved.
# - af_key.c: the gfp argument added to xfrm_alloc_policy() specifically
#   for this case was never actually used.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1301
# [IPSEC]: Netlink configuration interface.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1302
# [XFRM_USER]: Destroy netlink socket on shutdown.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1303
# [XFRM]: Add family member to state/policy structs.
# --------------------------------------------
# 03/05/07 taral@taral.net 1.1304
# [IPSEC]: Fix double unlock in esp/ah.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1305
# [IPSEC]: Policy timeout and pfkey acquire fixes.
# - Implement policy timeouts.
# - Make PF_KEY return proper error from KM acquire.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1306
# [IPSEC]: Make xfrm_user key manager return proper errors.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1307
# [XFRM_USER]: Index xfrma array correctly.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1308
# [PATCH] IPv6: Fix BUG When Received Unknown Protocol.
# --------------------------------------------
# 03/05/07 hch@lst.de 1.1309
# [AF_KEY]: Fix comment typo.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1310
# [NET]: Protect skbuff secpath code with CONFIG_INET.
# --------------------------------------------
# 03/05/07 akpm@digeo.com 1.1311
# [IPSEC]: Uninline _decode_session.
# --------------------------------------------
# 03/05/07 akpm@digeo.com 1.1312
# [IPV4 OUTPUT]: Uninline ip_finish_output and skb_fill_page_desc.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1313
# [IPSEC]: Clean up key manager algorithm handling.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1314
# [IPSEC]: Don't check algorithm availability unless CONFIG_CRYPTO.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1315
# [IPSEC]: Kill warning in xfrm_algo.c.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1316
# [IPSEC]: Clear SKB checksum state when mangling.
# --------------------------------------------
# 03/05/07 thomas@bender.thinknerd.de 1.1317
# [IPSEC]: Fix some buglets in xfrm_user.c
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1318
# [IPSEC]: remove trailer_len from esp and xfrm properties.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1319
# [IPSEC]: Update ah documentation.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1320
# [IPSEC] Convert esp auth to use proper crypto api calls.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1321
# [IPSEC] Generic ICV handling for ESP.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1322
# [IPSEC]: in/out scatterlist support for ciphers.
# --------------------------------------------
# 03/05/07 kunihiro@ipinfusion.com 1.1323
# [XFRM]: Add family member to xfrm_usersa_id.
# --------------------------------------------
# 03/05/07 latten@austin.ibm.com 1.1324
# [IPSEC]: Make AF_KEY allow NULL encryption.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1325
# [IPSEC]: Make sure to clear sin_zero in AF_KEY.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1326
# [IPSEC]: Add missed bit of sin_zero fix.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1327
# [IPSEC] Make sure SADB_X_SPDADD messages have proper spid.
# --------------------------------------------
# 03/05/07 kunihiro@ipinfusion.com 1.1328
# [IPSEC]: Add ipv6 support infrastructure.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1329
# [IPSEC]: Mark pfkey_sadb_addr2xfrm_addr static again.
# --------------------------------------------
# 03/05/07 kunihiro@ipinfusion.com 1.1330
# [IPSEC]: Add ipv6 support to ipsec netlink sockets.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1331
# [AF_KEY]: Add missing credit.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1332
# [NET]: Convert dst->{input,output}() fully to dst_{input,output}().
# --------------------------------------------
# 03/05/07 mk@linux-ipv6.org 1.1333
# [IPSEC]: Add missing credit and include to xfrm_user ipv6 changes.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1334
# [IPSEC]: Move xfrm6 policy code to net/ipv4/xfrm_policy.c
# --------------------------------------------
# 03/05/07 latten@austin.ibm.com 1.1335
# [IPSEC]: Make sure ESP output pads Null Encryption properly.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1336
# [IPSEC]: Add family argument to compile_policy.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1337
# [IPSEC]: Use dst_hold unless assigning result to something.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1338
# [IPSEC]: Add full ipv6 support.
#
# Credits also to Mitsuru Kanda <kanda@karaba.org>,
# YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>,
# and Kunihiro Ishiguro.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1339
# [IPV4]: Fix multicast route lookups.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1340
# [IPSEC]: Fix obvious typo in xfrm_sk_clone_policy.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1341
# [NET]: hard_header reservation.
# 1. Fix bad reservation in xfrm_state_check_space()
# 2. Macroize the formula for the reservation, and use the macro
# everywhere in IP.
# --------------------------------------------
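Item 2 above can be illustrated with a standalone sketch. The macro name and constant below are illustrative assumptions, not the kernel's: the idea is that link-layer header room is rounded up to a fixed alignment boundary so the network header that follows stays aligned, and the macro replaces the open-coded formula at every call site.

```c
/* Illustrative sketch of macroizing the hard-header reservation formula:
 * round the device's hard_header_len up to a 16-byte boundary.
 * Names and the alignment constant are illustrative. */
#define HH_ALIGNTO 16
#define HARD_HEADER_RESERVE(hard_header_len) \
    ((((hard_header_len) + HH_ALIGNTO - 1) / HH_ALIGNTO) * HH_ALIGNTO)
```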
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1342
# [NET]: miscellaneous fixes.
# 1. Fix illegal dereference of potentially freed memory in xfrm_policy.c
# 2. Complete wildcard flow addresses to real ones in xfrm_lookup().
# 3. Respect the optional flag when checking for input policy.
# 4. Delete orphaned comments in ip.h.
# 5. Fix mistakenly freed route in tcp connect.
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1343
# [IPSEC]: fragmentation & tcp mss calculation.
# 1. Add local_df field to struct sk_buff to mark packets which
# are to be fragmented locally despite being IPv6 or having the IP DF flag set.
# 2. Add ext2_header_len to tcp_opt to track the part of the header length
# that depends on the route.
# 3. Add trailer_len to struct dst_entry and xfrm_state to know how
# much of space should be reserved at tail of frame for subsequent
# transformations.
# 4. [BUG] icv_trunc_len must be used in the mss calculation, not
# icv_full_len.
# --------------------------------------------
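Point 4 of the entry above is a concrete arithmetic bug: the HMAC-MD5/SHA-1 authenticator actually carried on the wire is truncated to 96 bits (12 bytes) per RFCs 2403/2404, so the MSS calculation must subtract the truncated ICV length, not the full digest size (16 or 20 bytes). A hedged sketch (function and parameter names are illustrative, not the kernel's fields):

```c
/* Sketch of the MSS adjustment: per-packet authenticator space is the
 * truncated ICV length (12 bytes for HMAC-MD5-96 / HMAC-SHA-1-96), not
 * the full digest size. Using the full length understates usable MSS.
 * All names are illustrative. */
static unsigned int esp_adjusted_mss(unsigned int mtu,
                                     unsigned int ip_tcp_hdr_len,
                                     unsigned int esp_overhead,
                                     unsigned int icv_trunc_len)
{
    return mtu - ip_tcp_hdr_len - esp_overhead - icv_trunc_len;
}
```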
# 03/05/07 thomas@bender.thinknerd.de 1.1344
# [IPSEC]: Fix null authentication/encryption.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1345
# [IPSEC]: fix skb leak in ah and esp.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1346
# [IPSEC]: return error when no dst in ah & esp output.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1347
# [IPSEC]: Add IPV6_{IPSEC,XFRM}_POLICY socket option support.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1348
# [IPSEC]: Export xfrm_user_policy.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1349
# [IPSEC]: net/xfrm.h needs net/sock.h
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1350
# [IPSEC-2.4]: try_inc_mod_count --> __MOD_INC_USE_COUNT.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1351
# [IPSEC-2.4]: Fix ip_select_ident args in ESP/v4.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1352
# [IPSEC-2.4]: Fixup AF_KEY for 2.4.x interface differences.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1353
# [IPSEC]: Fix parsing of 16-bit ipcomp cpi.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1354
# [IPSEC]: IPV6 source address not set correctly in xfrm_state.
# --------------------------------------------
# 03/05/07 jgrimm2@us.ibm.com 1.1355
# [IPSEC]: Fix SKB alloc len in ip6_build_xmit.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1356
# [IPSEC] Add initial compression support for pfkey and xfrm_algo.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1357
# [IPSEC]: Split up XFRM Subsystem.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1358
# [IPSEC]: Fix build when ipsec is disabled.
# --------------------------------------------
# 03/05/07 torvalds@transmeta.com 1.1359
# Avoid warning with modern GCCs in xfrm_policy.c
# --------------------------------------------
# 03/05/07 mk@linux-ipv6.org 1.1360
# [IPV6]: Process all extension headers via ipproto->handler.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1361
# [IPSEC]: Fix bug in xfrm_parse_spi()
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1362
# [IPSEC]: Make get_acqseq() xfrm_state.c:xfrm_get_acqseq().
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1363
# [IPSEC-2.4]: Fix ipv6 xfrm exports.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1364
# [IPSEC]: Fix IPV6 UDP policy checking.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1365
# [IPSEC-2.4]: Fix module get in xfrm_policy.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1366
# [IPSEC]: Kill skb_ah_walk, not needed.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1367
# [IPSEC]: Remove duplicate / obsolete entry in include/linux/dst.h
# --------------------------------------------
# 03/05/07 kuznet@ms2.inr.ac.ru 1.1368
# [IPV4]: Make sure rtcache flush happens after sysctl updates.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1369
# [IPSEC]: Remove unused field owner from selector.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1370
# [IPSEC]: linux/xfrm.h u32 --> __u32.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1371
# [IPSEC]: Missing ipv6 policy checks.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1372
# [IPSEC]: IPV6 AH/ESP fixes.
# --------------------------------------------
# 03/05/07 toml@us.ibm.com 1.1373
# [IPSEC]: Use "sizeof" for header sizes.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1374
# [IPSEC]: Fix xfrm_state refcounts.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1375
# [IPSEC-2.4]: Fix xfrm/Makefile for 2.4.x
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1376
# [NET]: Use current_text_addr instead of label tricks in dst_release.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1377
# [IPSEC]: xfrm_{state,user}.c need asm/uaccess.h
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1378
# [IPSEC-2.4]: Fix net/Makefile so xfrm modules get built.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1379
# [IPSEC]: Use of "sizeof" for header sizes, part II
# --------------------------------------------
# 03/05/07 derek@ihtfp.com 1.1380
# [IPSEC]: Implement UDP Encapsulation framework.
#
# In particular, implement ESPinUDP encapsulation for IPsec
# Nat Traversal.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1381
# [IPSEC]: Store xfrm_encap_tmpl directly in xfrm_state.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1382
# [IPSEC]: Add encap support for xfrm_user.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1383
# [IPSEC]: Clean up decap state, minimize its size.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1384
# [IPSEC]: Move xfrm type destructor out of spinlock.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1385
# [IPSEC-2.4]: Use schedule_task for xfrm_state gc work.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1386
# [IPSEC]: AH/ESP forget to free private structs.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1387
# [IPSEC]: Really move type destructor out of spinlock.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1388
# [IPSEC]: Support for optional policies on input got lost.
# --------------------------------------------
# 03/05/07 rusty@rustcorp.com.au 1.1389
# [IPSEC]: Avoid using SET_MODULE_OWNER.
# --------------------------------------------
# 03/05/07 jef@linuxbe.org 1.1390
# [IPSEC]: Check xfrm state expiration on input after replay check.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1391
# [IPSEC]: Add initial IPCOMP support.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1392
# [IPSEC]: Add ipv4 tunnel transformer.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1393
# [IPSEC]: Fix handling of uncompressable packets in tunnel mode.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1394
# [IPV4]: xfrm4_tunnel and ipip need to privateize some symbols.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1395
# [IPV6]: Fixed multiple mistakes in extension header handling.
# - double free if sending Parameter Problem message in reassembly code.
# - (sometimes) broken checksum
# - HbH not producing an unknown-header error; it is only allowed at the
# beginning of the exthdrs chain.
# - wrong pointer value in Parameter Problem message.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1396
# [NET]: Use fl6_{src,dst} etc.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1397
# [IPSEC-2.4]: Missing UDP_ENCAP_ESPINUDP define.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1398
# [IPSEC-2.4]: More __ip_select_ident args fixes.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1399
# [IPSEC-2.4]: Backport synchronize_net from 2.5
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1400
# [IPSEC-2.4]: Missing bits of UDP encap changes.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1401
# [IPSEC]: nexthdr in xfrm6_input needs to be int.
# --------------------------------------------
# 03/05/07 steve@gw.chygwyn.com 1.1402
# [IP_GRE]: Kill duplicate update_pmtu call.
# --------------------------------------------
# 03/05/07 yoshfuji@linux-ipv6.org 1.1403
# [IPV6]: dst_alloc() clean-up.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1404
# [IPSEC]: allow only tunnel mode in xfrm4_tunnels.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1405
# [IPV4]: Use dst_pmtu not dev->mtu to determine if fragmentation is needed.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1406
# [IPV4]: Fix typos in ipip.c commented out code.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1407
# [IPSEC]: pmtu discovery support at local tunnel gateway.
# --------------------------------------------
# 03/05/07 jmorris@intercode.com.au 1.1408
# [IPSEC]: Consolidate some output code into xfrm_check_output.
# --------------------------------------------
# 03/05/07 davem@nuts.ninka.net 1.1409
# [IPSEC-2.4]: Fix mispatch, need to pass sk->allocation to kmalloc in ip_append_data.
# --------------------------------------------
# 03/05/08 mk@linux-ipv6.org 1.1410
# [IPSEC]: Fix ipcomp header handling in ipv4 IPCOMP.
# --------------------------------------------
# 03/05/08 davem@nuts.ninka.net 1.1411
# [IPSEC-2.4]: Fix mis-patch of ipt_REJECT.c
# --------------------------------------------
#
diff -Nru a/Documentation/Configure.help b/Documentation/Configure.help
--- a/Documentation/Configure.help Thu May 8 10:41:38 2003
+++ b/Documentation/Configure.help Thu May 8 10:41:38 2003
@@ -5349,6 +5349,14 @@
and you should also say Y to "Kernel/User network link driver",
below. If unsure, say N.
+PF_KEY sockets
+CONFIG_NET_KEY
+ PF_KEYv2 socket family, compatible with the KAME ones.
+ They are required if you are going to use IPsec tools ported
+ from KAME.
+
+ Say Y unless you know what you are doing.
+
TCP/IP networking
CONFIG_INET
These are the protocols used on the Internet and on most local
@@ -5614,6 +5622,32 @@
gated-5). This routing protocol is not used widely, so say N unless
you want to play with it.
+IP: AH transformation
+CONFIG_INET_AH
+ Support for IPsec AH.
+
+ If unsure, say Y.
+
+IP: ESP transformation
+CONFIG_INET_ESP
+ Support for IPsec ESP.
+
+ If unsure, say Y.
+
+IP: IPComp transformation
+CONFIG_INET_IPCOMP
+ Support for IP Payload Compression (RFC3173), typically needed
+ for IPsec.
+
+ If unsure, say Y.
+
+IP: IPsec user configuration interface
+CONFIG_XFRM_USER
+ Support for IPsec user configuration interface used
+ by native Linux tools.
+
+ If unsure, say Y.
+
Unix domain sockets
CONFIG_UNIX
If you say Y here, you will include support for Unix domain sockets;
@@ -26524,6 +26558,98 @@
This is experimental code, not yet tested on many boards.
If unsure, say N.
+
+CONFIG_CRYPTO
+ This option provides the core Cryptographic API.
+
+CONFIG_CRYPTO_HMAC
+ HMAC: Keyed-Hashing for Message Authentication (RFC2104).
+ This is required for IPSec.
+
+CONFIG_CRYPTO_NULL
+ These are 'Null' algorithms, used by IPsec, which do nothing.
+
+CONFIG_CRYPTO_MD4
+ MD4 message digest algorithm (RFC1320).
+
+CONFIG_CRYPTO_MD5
+ MD5 message digest algorithm (RFC1321).
+
+CONFIG_CRYPTO_SHA1
+ SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2).
+
+CONFIG_CRYPTO_SHA256
+ SHA256 secure hash standard (DFIPS 180-2).
+
+ This version of SHA implements a 256 bit hash with 128 bits of
+ security against collision attacks.
+
+CONFIG_CRYPTO_SHA512
+ SHA512 secure hash standard (DFIPS 180-2).
+
+ This version of SHA implements a 512 bit hash with 256 bits of
+ security against collision attacks.
+
+ This code also includes SHA-384, a 384 bit hash with 192 bits
+ of security against collision attacks.
+
+CONFIG_CRYPTO_DES
+ DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3).
+
+CONFIG_CRYPTO_BLOWFISH
+ Blowfish cipher algorithm, by Bruce Schneier.
+
+ This is a variable key length cipher which can use keys from 32
+ bits to 448 bits in length. It's fast, simple and specifically
+ designed for use on "large microprocessors".
+
+ See also <http://www.counterpane.com/blowfish.html>.
+
+CONFIG_CRYPTO_TWOFISH
+ Twofish cipher algorithm.
+
+ Twofish was submitted as an AES (Advanced Encryption Standard)
+ candidate cipher by researchers at CounterPane Systems. It is a
+ 16 round block cipher supporting key sizes of 128, 192, and 256
+ bits.
+
+ See also:
+ http://www.counterpane.com/twofish.html
+
+CONFIG_CRYPTO_SERPENT
+ Serpent cipher algorithm, by Anderson, Biham & Knudsen.
+
+ Keys are allowed to be from 0 to 256 bits in length, in steps
+ of 8 bits.
+
+ See also:
+ http://www.cl.cam.ac.uk/~rja14/serpent.html
+
+CONFIG_CRYPTO_AES
+ AES cipher algorithms (FIPS-197). AES uses the Rijndael
+ algorithm.
+
+ Rijndael appears to be consistently a very good performer in
+ both hardware and software across a wide range of computing
+ environments regardless of its use in feedback or non-feedback
+ modes. Its key setup time is excellent, and its key agility is
+ good. Rijndael's very low memory requirements make it very well
+ suited for restricted-space environments, in which it also
+ demonstrates excellent performance. Rijndael's operations are
+ among the easiest to defend against power and timing attacks.
+
+ The AES specification defines three key sizes: 128, 192 and 256 bits.
+
+ See http://csrc.nist.gov/encryption/aes/ for more information.
+
+CONFIG_CRYPTO_DEFLATE
+ This is the Deflate algorithm (RFC1951), specified for use in
+ IPSec with the IPCOMP protocol (RFC3173, RFC2394).
+
+ You will most probably want this if using IPSec.
+
+CONFIG_CRYPTO_TEST
+ Quick & dirty crypto test module.
#
# A couple of things I keep forgetting:
diff -Nru a/Documentation/crypto/api-intro.txt b/Documentation/crypto/api-intro.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/crypto/api-intro.txt Thu May 8 10:41:38 2003
@@ -0,0 +1,221 @@
+
+ Scatterlist Cryptographic API
+
+INTRODUCTION
+
+The Scatterlist Crypto API takes page vectors (scatterlists) as
+arguments, and works directly on pages. In some cases (e.g. ECB
+mode ciphers), this will allow for pages to be encrypted in-place
+with no copying.
+
+One of the initial goals of this design was to readily support IPsec,
+so that processing can be applied to paged skb's without the need
+for linearization.
+
+
+DETAILS
+
+At the lowest level are algorithms, which register dynamically with the
+API.
+
+'Transforms' are user-instantiated objects, which maintain state, handle all
+of the implementation logic (e.g. manipulating page vectors), provide an
+abstraction to the underlying algorithms, and handle common logical
+operations (e.g. cipher modes, HMAC for digests). However, at the user
+level they are very simple.
+
+Conceptually, the API layering looks like this:
+
+ [transform api] (user interface)
+ [transform ops] (per-type logic glue e.g. cipher.c, digest.c)
+ [algorithm api] (for registering algorithms)
+
+The idea is to make the user interface and algorithm registration API
+very simple, while hiding the core logic from both. Many good ideas
+from existing APIs such as Cryptoapi and Nettle have been adapted for this.
+
+The API currently supports three types of transforms: Ciphers, Digests and
+Compressors. The compression algorithms especially seem to be performing
+very well so far.
+
+Support for hardware crypto devices via an asynchronous interface is
+under development.
+
+Here's an example of how to use the API:
+
+ #include <linux/crypto.h>
+
+ struct scatterlist sg[2];
+ char result[128];
+ struct crypto_tfm *tfm;
+
+ tfm = crypto_alloc_tfm("md5", 0);
+ if (tfm == NULL)
+ fail();
+
+ /* ... set up the scatterlists ... */
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 2);
+ crypto_digest_final(tfm, result);
+
+ crypto_free_tfm(tfm);
+
+
+Many real examples are available in the regression test module (tcrypt.c).
+
+
+CONFIGURATION NOTES
+
+As Triple DES is part of the DES module, for those using modular builds,
+add the following line to /etc/modules.conf:
+
+ alias des3_ede des
+
+The Null algorithms reside in the crypto_null module, so these lines
+should also be added:
+
+ alias cipher_null crypto_null
+ alias digest_null crypto_null
+ alias compress_null crypto_null
+
+The SHA384 algorithm shares code with the SHA512 module, so you'll
+also need:
+ alias sha384 sha512
+
+
+DEVELOPER NOTES
+
+Transforms may only be allocated in user context, and cryptographic
+methods may only be called from softirq and user contexts.
+
+When using the API for ciphers, performance will be optimal if each
+scatterlist contains data which is a multiple of the cipher's block
+size (typically 8 bytes). This prevents having to do any copying
+across non-aligned page fragment boundaries.
+
+
+ADDING NEW ALGORITHMS
+
+When submitting a new algorithm for inclusion, a mandatory requirement
+is that at least a few test vectors from known sources (preferably
+standards) be included.
+
+Converting existing well known code is preferred, as it is more likely
+to have been reviewed and widely tested. If submitting code from LGPL
+sources, please consider changing the license to GPL (see section 3 of
+the LGPL).
+
+Algorithms submitted must also be generally patent-free (e.g. IDEA
+will not be included in the mainline until around 2011), and be based
+on a recognized standard and/or have been subjected to appropriate
+peer review.
+
+Also check for any RFCs which may relate to the use of specific algorithms,
+as well as general application notes such as RFC2451 ("The ESP CBC-Mode
+Cipher Algorithms").
+
+It's a good idea to avoid using lots of macros and use inlined functions
+instead, as gcc does a good job with inlining, while excessive use of
+macros can cause compilation problems on some platforms.
+
+Also check the TODO list at the web site listed below to see what people
+might already be working on.
+
+
+BUGS
+
+Send bug reports to:
+James Morris <jmorris@intercode.com.au>
+Cc: David S. Miller <davem@redhat.com>
+
+
+FURTHER INFORMATION
+
+For further patches and various updates, including the current TODO
+list, see:
+http://samba.org/~jamesm/crypto/
+
+
+AUTHORS
+
+James Morris
+David S. Miller
+
+
+CREDITS
+
+The following people provided invaluable feedback during the development
+of the API:
+
+ Alexey Kuznetzov
+ Rusty Russell
+ Herbert Valerio Riedel
+ Jeff Garzik
+ Michael Richardson
+ Andrew Morton
+ Ingo Oeser
+ Christoph Hellwig
+
+Portions of this API were derived from the following projects:
+
+ Kerneli Cryptoapi (http://www.kerneli.org/)
+ Alexander Kjeldaas
+ Herbert Valerio Riedel
+ Kyle McMartin
+ Jean-Luc Cooke
+ David Bryson
+ Clemens Fruhwirth
+ Tobias Ringstrom
+ Harald Welte
+
+and;
+
+ Nettle (http://www.lysator.liu.se/~nisse/nettle/)
+ Niels Möller
+
+Original developers of the crypto algorithms:
+
+ Dana L. How (DES)
+ Andrew Tridgell and Steve French (MD4)
+ Colin Plumb (MD5)
+ Steve Reid (SHA1)
+ Jean-Luc Cooke (SHA256, SHA384, SHA512)
+ Kazunori Miyazawa / USAGI (HMAC)
+ Matthew Skala (Twofish)
+ Dag Arne Osvik (Serpent)
+ Brian Gladman (AES)
+
+
+SHA1 algorithm contributors:
+ Jean-Francois Dive
+
+DES algorithm contributors:
+ Raimar Falke
+ Gisle Sælensminde
+ Niels Möller
+
+Blowfish algorithm contributors:
+ Herbert Valerio Riedel
+ Kyle McMartin
+
+Twofish algorithm contributors:
+ Werner Koch
+ Marc Mutz
+
+SHA256/384/512 algorithm contributors:
+ Andrew McDonald
+ Kyle McMartin
+ Herbert Valerio Riedel
+
+AES algorithm contributors:
+ Alexander Kjeldaas
+ Herbert Valerio Riedel
+ Kyle McMartin
+ Adam J. Richter
+
+Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>
+
+Please send any credits updates or corrections to:
+James Morris <jmorris@intercode.com.au>
+
diff -Nru a/Documentation/crypto/descore-readme.txt b/Documentation/crypto/descore-readme.txt
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/Documentation/crypto/descore-readme.txt Thu May 8 10:41:38 2003
@@ -0,0 +1,352 @@
+Below is the original README file from the descore.shar package.
+------------------------------------------------------------------------------
+
+des - fast & portable DES encryption & decryption.
+Copyright (C) 1992 Dana L. How
+
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU Library General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU Library General Public License for more details.
+
+You should have received a copy of the GNU Library General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+Author's address: how@isl.stanford.edu
+
+$Id: README,v 1.15 1992/05/20 00:25:32 how E $
+
+
+==>> To compile after untarring/unsharring, just `make' <<==
+
+
+This package was designed with the following goals:
+1. Highest possible encryption/decryption PERFORMANCE.
+2. PORTABILITY to any byte-addressable host with a 32bit unsigned C type
+3. Plug-compatible replacement for KERBEROS's low-level routines.
+
+This second release includes a number of performance enhancements for
+register-starved machines. My discussions with Richard Outerbridge,
+71755.204@compuserve.com, sparked a number of these enhancements.
+
+To more rapidly understand the code in this package, inspect desSmallFips.i
+(created by typing `make') BEFORE you tackle desCode.h. The latter is set
+up in a parameterized fashion so it can easily be modified by speed-daemon
+hackers in pursuit of that last microsecond. You will find it more
+illuminating to inspect one specific implementation,
+and then move on to the common abstract skeleton with this one in mind.
+
+
+performance comparison to other available des code which i could
+compile on a SPARCStation 1 (cc -O4, gcc -O2):
+
+this code (byte-order independent):
+ 30us per encryption (options: 64k tables, no IP/FP)
+ 33us per encryption (options: 64k tables, FIPS standard bit ordering)
+ 45us per encryption (options: 2k tables, no IP/FP)
+ 48us per encryption (options: 2k tables, FIPS standard bit ordering)
+ 275us to set a new key (uses 1k of key tables)
+ this has the quickest encryption/decryption routines i've seen.
+ since i was interested in fast des filters rather than crypt(3)
+ and password cracking, i haven't really bothered yet to speed up
+ the key setting routine. also, i have no interest in re-implementing
+ all the other junk in the mit kerberos des library, so i've just
+ provided my routines with little stub interfaces so they can be
+ used as drop-in replacements with mit's code or any of the mit-
+ compatible packages below. (note that the first two timings above
+ are highly variable because of cache effects).
+
+kerberos des replacement from australia (version 1.95):
+ 53us per encryption (uses 2k of tables)
+ 96us to set a new key (uses 2.25k of key tables)
+ so despite the author's inclusion of some of the performance
+ improvements i had suggested to him, this package's
+ encryption/decryption is still slower on the sparc and 68000.
+ more specifically, 19-40% slower on the 68020 and 11-35% slower
+ on the sparc, depending on the compiler;
+ in full gory detail (ALT_ECB is a libdes variant):
+ compiler machine desCore libdes ALT_ECB slower by
+ gcc 2.1 -O2 Sun 3/110 304 uS 369.5uS 461.8uS 22%
+ cc -O1 Sun 3/110 336 uS 436.6uS 399.3uS 19%
+ cc -O2 Sun 3/110 360 uS 532.4uS 505.1uS 40%
+ cc -O4 Sun 3/110 365 uS 532.3uS 505.3uS 38%
+ gcc 2.1 -O2 Sun 4/50 48 uS 53.4uS 57.5uS 11%
+ cc -O2 Sun 4/50 48 uS 64.6uS 64.7uS 35%
+ cc -O4 Sun 4/50 48 uS 64.7uS 64.9uS 35%
+ (my time measurements are not as accurate as his).
+ the comments in my first release of desCore on version 1.92:
+ 68us per encryption (uses 2k of tables)
+ 96us to set a new key (uses 2.25k of key tables)
+ this is a very nice package which implements the most important
+ of the optimizations which i did in my encryption routines.
+ it's a bit weak on common low-level optimizations which is why
+ it's 39%-106% slower. because he was interested in fast crypt(3) and
+ password-cracking applications, he also used the same ideas to
+ speed up the key-setting routines with impressive results.
+ (at some point i may do the same in my package). he also implements
+ the rest of the mit des library.
+ (code from eay@psych.psy.uq.oz.au via comp.sources.misc)
+
+fast crypt(3) package from denmark:
+ the des routine here is buried inside a loop to do the
+ crypt function and i didn't feel like ripping it out and measuring
+ performance. his code takes 26 sparc instructions to compute one
+ des iteration; above, Quick (64k) takes 21 and Small (2k) takes 37.
+ he claims to use 280k of tables but the iteration calculation seems
+ to use only 128k. his tables and code are machine independent.
+ (code from glad@daimi.aau.dk via alt.sources or comp.sources.misc)
+
+swedish reimplementation of Kerberos des library
+ 108us per encryption (uses 34k worth of tables)
+ 134us to set a new key (uses 32k of key tables to get this speed!)
+ the tables used seem to be machine-independent;
+ he seems to have included a lot of special case code
+ so that, e.g., `long' loads can be used instead of 4 `char' loads
+ when the machine's architecture allows it.
+ (code obtained from chalmers.se:pub/des)
+
+crack 3.3c package from england:
+ as in crypt above, the des routine is buried in a loop. it's
+ also very modified for crypt. his iteration code uses 16k
+ of tables and appears to be slow.
+ (code obtained from aem@aber.ac.uk via alt.sources or comp.sources.misc)
+
+``highly optimized'' and tweaked Kerberos/Athena code (byte-order dependent):
+ 165us per encryption (uses 6k worth of tables)
+ 478us to set a new key (uses <1k of key tables)
+ so despite the comments in this code, it was possible to get
+ faster code AND smaller tables, as well as making the tables
+ machine-independent.
+ (code obtained from prep.ai.mit.edu)
+
+UC Berkeley code (depends on machine-endedness):
+ 226us per encryption
+10848us to set a new key
+ table sizes are unclear, but they don't look very small
+ (code obtained from wuarchive.wustl.edu)
+
+
+motivation and history
+
+a while ago i wanted some des routines and the routines documented on sun's
+man pages either didn't exist or dumped core. i had heard of kerberos,
+and knew that it used des, so i figured i'd use its routines. but once
+i got it and looked at the code, it really set off a lot of pet peeves -
+it was too convoluted, the code had been written without taking
+advantage of the regular structure of operations such as IP, E, and FP
+(i.e. the author didn't sit down and think before coding),
+it was excessively slow, the author had attempted to clarify the code
+by adding MORE statements to make the data movement more `consistent'
+instead of simplifying his implementation and cutting down on all data
+movement (in particular, his use of L1, R1, L2, R2), and it was full of
+idiotic `tweaks' for particular machines which failed to deliver significant
+speedups but which did obfuscate everything. so i took the test data
+from his verification program and rewrote everything else.
+
+a while later i ran across the great crypt(3) package mentioned above.
+the fact that this guy was computing 2 sboxes per table lookup rather
+than one (and using a MUCH larger table in the process) emboldened me to
+do the same - it was a trivial change from which i had been scared away
+by the larger table size. in his case he didn't realize you don't need to keep
+the working data in TWO forms, one for easy use of half the sboxes in
+indexing, the other for easy use of the other half; instead you can keep
+it in the form for the first half and use a simple rotate to get the other
+half. this means i have (almost) half the data manipulation and half
+the table size. in fairness though he might be encoding something particular
+to crypt(3) in his tables - i didn't check.
+
+i'm glad that i implemented it the way i did, because this C version is
+portable (the ifdef's are performance enhancements) and it is faster
+than versions hand-written in assembly for the sparc!
+
+
+porting notes
+
+one thing i did not want to do was write an enormous mess
+which depended on endedness and other machine quirks,
+and which necessarily produced different code and different lookup tables
+for different machines. see the kerberos code for an example
+of what i didn't want to do; all their endedness-specific `optimizations'
+obfuscate the code and in the end were slower than a simpler machine
+independent approach. however, there are always some portability
+considerations of some kind, and i have included some options
+for varying numbers of register variables.
+perhaps some will still regard the result as a mess!
+
+1) i assume everything is byte addressable, although i don't actually
+ depend on the byte order, and that bytes are 8 bits.
+ i assume word pointers can be freely cast to and from char pointers.
+ note that 99% of C programs make these assumptions.
+ i always use unsigned char's if the high bit could be set.
+2) the typedef `word' means a 32 bit unsigned integral type.
+ if `unsigned long' is not 32 bits, change the typedef in desCore.h.
+ i assume sizeof(word) == 4 EVERYWHERE.
+
+the (worst-case) cost of my NOT doing endedness-specific optimizations
+in the data loading and storing code surrounding the key iterations
+is less than 12%. also, there is the added benefit that
+the input and output work areas do not need to be word-aligned.
+
+
+OPTIONAL performance optimizations
+
+1) you should define one of `i386,' `vax,' `mc68000,' or `sparc,'
+ whichever one is closest to the capabilities of your machine.
+ see the start of desCode.h to see exactly what this selection implies.
+ note that if you select the wrong one, the des code will still work;
+ these are just performance tweaks.
+2) for those with functional `asm' keywords: you should change the
+ ROR and ROL macros to use machine rotate instructions if you have them.
+ this will save 2 instructions and a temporary per use,
+ or about 32 to 40 instructions per en/decryption.
+ note that gcc is smart enough to translate the ROL/R macros into
+ machine rotates!
+
+these optimizations are all rather persnickety, yet with them you should
+be able to get performance equal to assembly-coding, except that:
+1) with the lack of a bit rotate operator in C, rotates have to be synthesized
+ from shifts. so access to `asm' will speed things up if your machine
+ has rotates, as explained above in (2) (not necessary if you use gcc).
+2) if your machine has less than 12 32-bit registers i doubt your compiler will
+ generate good code.
+ `i386' tries to configure the code for a 386 by only declaring 3 registers
+ (it appears that gcc can use ebx, esi and edi to hold register variables).
+ however, if you like assembly coding, the 386 does have 7 32-bit registers,
+ and if you use ALL of them, use `scaled by 8' address modes with displacement
+ and other tricks, you can get reasonable routines for DesQuickCore... with
+ about 250 instructions apiece. For DesSmall... it will help to rearrange
+ des_keymap, i.e., now the sbox # is the high part of the index and
+ the 6 bits of data is the low part; it helps to exchange these.
+ since i have no way to conveniently test it i have not provided my
+ shoehorned 386 version. note that with this release of desCore, gcc is able
+ to put everything in registers(!), and generate about 370 instructions apiece
+ for the DesQuickCore... routines!
+
+coding notes
+
+the en/decryption routines each use 6 necessary register variables,
+with 4 being actively used at once during the inner iterations.
+if you don't have 4 register variables get a new machine.
+up to 8 more registers are used to hold constants in some configurations.
+
+i assume that the use of a constant is more expensive than using a register:
+a) additionally, i have tried to put the larger constants in registers.
+ registering priority was by the following:
+ anything more than 12 bits (bad for RISC and CISC)
+ greater than 127 in value (can't use movq or byte immediate on CISC)
+ 9-127 (may not be able to use CISC shift immediate or add/sub quick),
+ 1-8 were never registered, being the cheapest constants.
+b) the compiler may be too stupid to realize table and table+256 should
+ be assigned to different constant registers and instead repetitively
+ do the arithmetic, so i assign these to explicit `m' register variables
+ when possible and helpful.
+
+i assume that indexing is cheaper or equivalent to auto increment/decrement,
+where the index is 7 bits unsigned or smaller.
+this assumption is reversed for 68k and vax.
+
+i assume that addresses can be cheaply formed from two registers,
+or from a register and a small constant.
+for the 68000, the `two registers and small offset' form is used sparingly.
+all index scaling is done explicitly - no hidden shifts by log2(sizeof).
+
+the code is written so that even a dumb compiler
+should never need more than one hidden temporary,
+increasing the chance that everything will fit in the registers.
+KEEP THIS MORE SUBTLE POINT IN MIND IF YOU REWRITE ANYTHING.
+(actually, there are some code fragments now which do require two temps,
+but fixing it would either break the structure of the macros or
+require declaring another temporary).
+
+
+special efficient data format
+
+bits are manipulated in this arrangement most of the time (S7 S5 S3 S1):
+ 003130292827xxxx242322212019xxxx161514131211xxxx080706050403xxxx
+(the x bits are still there, i'm just emphasizing where the S boxes are).
+bits are rotated left 4 when computing S6 S4 S2 S0:
+ 282726252423xxxx201918171615xxxx121110090807xxxx040302010031xxxx
+the rightmost two bits are usually cleared so the lower byte can be used
+as an index into an sbox mapping table. the next two x'd bits are set
+to various values to access different parts of the tables.
+
+
+how to use the routines
+
+datatypes:
+ pointer to 8 byte area of type DesData
+ used to hold keys and input/output blocks to des.
+
+ pointer to 128 byte area of type DesKeys
+ used to hold full 768-bit key.
+ must be long-aligned.
+
+DesQuickInit()
+ call this before using any other routine with `Quick' in its name.
+ it generates the special 64k table these routines need.
+DesQuickDone()
+ frees this table
+
+DesMethod(m, k)
+ m points to a 128byte block, k points to an 8 byte des key
+ which must have odd parity (or -1 is returned) and which must
+ not be a (semi-)weak key (or -2 is returned).
+ normally DesMethod() returns 0.
+ m is filled in from k so that when one of the routines below
+ is called with m, the routine will act like standard des
+ en/decryption with the key k. if you use DesMethod,
+ you supply a standard 56bit key; however, if you fill in
+ m yourself, you will get a 768bit key - but then it won't
+ be standard. it's 768bits not 1024 because the least significant
+ two bits of each byte are not used. note that these two bits
+ will be set to magic constants which speed up the encryption/decryption
+ on some machines. and yes, each byte controls
+ a specific sbox during a specific iteration.
+ you really shouldn't use the 768bit format directly; i should
+ provide a routine that converts 128 6-bit bytes (specified in
+ S-box mapping order or something) into the right format for you.
+ this would entail some byte concatenation and rotation.
+
+Des{Small|Quick}{Fips|Core}{Encrypt|Decrypt}(d, m, s)
+ performs des on the 8 bytes at s into the 8 bytes at d. (d,s: char *).
+ uses m as a 768bit key as explained above.
+ the Encrypt|Decrypt choice is obvious.
+ Fips|Core determines whether a completely standard FIPS initial
+ and final permutation is done; if not, then the data is loaded
+ and stored in a nonstandard bit order (FIPS w/o IP/FP).
+ Fips slows down Quick by 10%, Small by 9%.
+ Small|Quick determines whether you use the normal routine
+ or the crazy quick one which gobbles up 64k more of memory.
+ Small is 50% slower than Quick, but Quick needs 32 times as much
+ memory. Quick is included for programs that do nothing but DES,
+ e.g., encryption filters, etc.
+
+
+Getting it to compile on your machine
+
+there are no machine-dependencies in the code (see porting),
+except perhaps the `now()' macro in desTest.c.
+ALL generated tables are machine independent.
+you should edit the Makefile with the appropriate optimization flags
+for your compiler (MAX optimization).
+
+
+Speeding up kerberos (and/or its des library)
+
+note that i have included a kerberos-compatible interface in desUtil.c
+through the functions des_key_sched() and des_ecb_encrypt().
+to use these with kerberos or kerberos-compatible code put desCore.a
+ahead of the kerberos-compatible library on your linker's command line.
+you should not need to #include desCore.h; just include the header
+file provided with the kerberos library.
+
+Other uses
+
+the macros in desCode.h would be very useful for putting inline des
+functions in more complicated encryption routines.
diff -Nru a/MAINTAINERS b/MAINTAINERS
--- a/MAINTAINERS Thu May 8 10:41:37 2003
+++ b/MAINTAINERS Thu May 8 10:41:37 2003
@@ -430,6 +430,15 @@
W: http://developer.axis.com
S: Maintained
+CRYPTO API
+P: James Morris
+M: jmorris@intercode.com.au
+P: David S. Miller
+M: davem@redhat.com
+W: http://samba.org/~jamesm/crypto/
+L: linux-kernel@vger.kernel.org
+S: Maintained
+
CYBERPRO FB DRIVER
P: Russell King
M: rmk@arm.linux.org.uk
diff -Nru a/Makefile b/Makefile
--- a/Makefile Thu May 8 10:41:36 2003
+++ b/Makefile Thu May 8 10:41:36 2003
@@ -125,7 +125,7 @@
NETWORKS =net/network.o
LIBS =$(TOPDIR)/lib/lib.a
-SUBDIRS =kernel drivers mm fs net ipc lib
+SUBDIRS =kernel drivers mm fs net ipc lib crypto
DRIVERS-n :=
DRIVERS-y :=
@@ -190,6 +190,7 @@
DRIVERS-$(CONFIG_BLUEZ) += drivers/bluetooth/bluetooth.o
DRIVERS-$(CONFIG_HOTPLUG_PCI) += drivers/hotplug/vmlinux-obj.o
DRIVERS-$(CONFIG_ISDN_BOOL) += drivers/isdn/vmlinux-obj.o
+DRIVERS-$(CONFIG_CRYPTO) += crypto/crypto.o
DRIVERS := $(DRIVERS-y)
diff -Nru a/arch/alpha/config.in b/arch/alpha/config.in
--- a/arch/alpha/config.in Thu May 8 10:41:37 2003
+++ b/arch/alpha/config.in Thu May 8 10:41:37 2003
@@ -443,4 +443,5 @@
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/arm/config.in b/arch/arm/config.in
--- a/arch/arm/config.in Thu May 8 10:41:36 2003
+++ b/arch/arm/config.in Thu May 8 10:41:36 2003
@@ -656,4 +656,5 @@
dep_bool ' Kernel low-level debugging messages via UART2' CONFIG_DEBUG_CLPS711X_UART2 $CONFIG_DEBUG_LL $CONFIG_ARCH_CLPS711X
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/cris/config.in b/arch/cris/config.in
--- a/arch/cris/config.in Thu May 8 10:41:36 2003
+++ b/arch/cris/config.in Thu May 8 10:41:36 2003
@@ -261,5 +261,6 @@
int ' Profile shift count' CONFIG_PROFILE_SHIFT 2
fi
+source crypto/Config.in
source lib/Config.in
endmenu
diff -Nru a/arch/i386/config.in b/arch/i386/config.in
--- a/arch/i386/config.in Thu May 8 10:41:37 2003
+++ b/arch/i386/config.in Thu May 8 10:41:37 2003
@@ -484,4 +484,5 @@
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/ia64/config.in b/arch/ia64/config.in
--- a/arch/ia64/config.in Thu May 8 10:41:37 2003
+++ b/arch/ia64/config.in Thu May 8 10:41:37 2003
@@ -243,6 +243,7 @@
source drivers/usb/Config.in
source lib/Config.in
+ source crypto/Config.in
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
source net/bluetooth/Config.in
diff -Nru a/arch/m68k/config.in b/arch/m68k/config.in
--- a/arch/m68k/config.in Thu May 8 10:41:37 2003
+++ b/arch/m68k/config.in Thu May 8 10:41:37 2003
@@ -562,4 +562,5 @@
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/mips/config-shared.in b/arch/mips/config-shared.in
--- a/arch/mips/config-shared.in Thu May 8 10:41:37 2003
+++ b/arch/mips/config-shared.in Thu May 8 10:41:37 2003
@@ -800,4 +800,5 @@
fi
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/parisc/config.in b/arch/parisc/config.in
--- a/arch/parisc/config.in Thu May 8 10:41:38 2003
+++ b/arch/parisc/config.in Thu May 8 10:41:38 2003
@@ -196,4 +196,5 @@
bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/ppc/config.in b/arch/ppc/config.in
--- a/arch/ppc/config.in Thu May 8 10:41:37 2003
+++ b/arch/ppc/config.in Thu May 8 10:41:37 2003
@@ -411,6 +411,7 @@
source net/bluetooth/Config.in
+source crypto/Config.in
source lib/Config.in
mainmenu_option next_comment
diff -Nru a/arch/ppc64/config.in b/arch/ppc64/config.in
--- a/arch/ppc64/config.in Thu May 8 10:41:37 2003
+++ b/arch/ppc64/config.in Thu May 8 10:41:37 2003
@@ -231,6 +231,8 @@
source lib/Config.in
+source crypto/Config.in
+
mainmenu_option next_comment
comment 'Kernel hacking'
diff -Nru a/arch/s390/config.in b/arch/s390/config.in
--- a/arch/s390/config.in Thu May 8 10:41:37 2003
+++ b/arch/s390/config.in Thu May 8 10:41:37 2003
@@ -75,4 +75,5 @@
bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/s390x/config.in b/arch/s390x/config.in
--- a/arch/s390x/config.in Thu May 8 10:41:36 2003
+++ b/arch/s390x/config.in Thu May 8 10:41:36 2003
@@ -79,4 +79,5 @@
bool 'Magic SysRq key' CONFIG_MAGIC_SYSRQ
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/sh/config.in b/arch/sh/config.in
--- a/arch/sh/config.in Thu May 8 10:41:37 2003
+++ b/arch/sh/config.in Thu May 8 10:41:37 2003
@@ -387,4 +387,5 @@
fi
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/sparc/config.in b/arch/sparc/config.in
--- a/arch/sparc/config.in Thu May 8 10:41:36 2003
+++ b/arch/sparc/config.in Thu May 8 10:41:36 2003
@@ -275,4 +275,5 @@
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/sparc64/config.in b/arch/sparc64/config.in
--- a/arch/sparc64/config.in Thu May 8 10:41:37 2003
+++ b/arch/sparc64/config.in Thu May 8 10:41:37 2003
@@ -309,4 +309,5 @@
endmenu
+source crypto/Config.in
source lib/Config.in
diff -Nru a/arch/x86_64/config.in b/arch/x86_64/config.in
--- a/arch/x86_64/config.in Thu May 8 10:41:37 2003
+++ b/arch/x86_64/config.in Thu May 8 10:41:37 2003
@@ -232,6 +232,8 @@
source net/bluetooth/Config.in
+source crypto/Config.in
+
mainmenu_option next_comment
comment 'Kernel hacking'
diff -Nru a/crypto/Config.in b/crypto/Config.in
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/Config.in Thu May 8 10:41:38 2003
@@ -0,0 +1,61 @@
+#
+# Cryptographic API Configuration
+#
+mainmenu_option next_comment
+comment 'Cryptographic options'
+
+if [ "$CONFIG_INET_AH" = "y" -o \
+ "$CONFIG_INET_AH" = "m" -o \
+ "$CONFIG_INET_ESP" = "y" -o \
+ "$CONFIG_INET_ESP" = "m" ]; then
+ define_bool CONFIG_CRYPTO y
+else
+ bool 'Cryptographic API' CONFIG_CRYPTO
+fi
+
+if [ "$CONFIG_CRYPTO" = "y" ]; then
+ if [ "$CONFIG_INET_AH" = "y" -o \
+ "$CONFIG_INET_AH" = "m" -o \
+ "$CONFIG_INET_ESP" = "y" -o \
+ "$CONFIG_INET_ESP" = "m" ]; then
+ define_bool CONFIG_CRYPTO_HMAC y
+ else
+ bool ' HMAC support' CONFIG_CRYPTO_HMAC
+ fi
+ tristate ' NULL algorithms' CONFIG_CRYPTO_NULL
+ tristate ' MD4 digest algorithm' CONFIG_CRYPTO_MD4
+ if [ "$CONFIG_INET_AH" = "y" -o \
+ "$CONFIG_INET_AH" = "m" -o \
+ "$CONFIG_INET_ESP" = "y" -o \
+ "$CONFIG_INET_ESP" = "m" ]; then
+ define_bool CONFIG_CRYPTO_MD5 y
+ else
+ tristate ' MD5 digest algorithm' CONFIG_CRYPTO_MD5
+ fi
+ if [ "$CONFIG_INET_AH" = "y" -o \
+ "$CONFIG_INET_AH" = "m" -o \
+ "$CONFIG_INET_ESP" = "y" -o \
+ "$CONFIG_INET_ESP" = "m" ]; then
+ define_bool CONFIG_CRYPTO_SHA1 y
+ else
+ tristate ' SHA1 digest algorithm' CONFIG_CRYPTO_SHA1
+ fi
+ tristate ' SHA256 digest algorithm' CONFIG_CRYPTO_SHA256
+ tristate ' SHA384 and SHA512 digest algorithms' CONFIG_CRYPTO_SHA512
+ if [ "$CONFIG_INET_AH" = "y" -o \
+ "$CONFIG_INET_AH" = "m" -o \
+ "$CONFIG_INET_ESP" = "y" -o \
+ "$CONFIG_INET_ESP" = "m" ]; then
+ define_bool CONFIG_CRYPTO_DES y
+ else
+ tristate ' DES and Triple DES EDE cipher algorithms' CONFIG_CRYPTO_DES
+ fi
+ tristate ' Blowfish cipher algorithm' CONFIG_CRYPTO_BLOWFISH
+ tristate ' Twofish cipher algorithm' CONFIG_CRYPTO_TWOFISH
+ tristate ' Serpent cipher algorithm' CONFIG_CRYPTO_SERPENT
+ tristate ' AES cipher algorithms' CONFIG_CRYPTO_AES
+ tristate ' Deflate compression algorithm' CONFIG_CRYPTO_DEFLATE
+ tristate ' Testing module' CONFIG_CRYPTO_TEST
+fi
+
+endmenu
diff -Nru a/crypto/Makefile b/crypto/Makefile
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/Makefile Thu May 8 10:41:38 2003
@@ -0,0 +1,31 @@
+#
+# Cryptographic API
+#
+
+O_TARGET := crypto.o
+
+export-objs := api.o hmac.o
+
+autoload-crypto-$(CONFIG_KMOD) = autoload.o
+proc-crypto-$(CONFIG_PROC_FS) = proc.o
+
+obj-$(CONFIG_CRYPTO) += api.o cipher.o digest.o compress.o \
+ $(autoload-crypto-y) $(proc-crypto-y)
+
+obj-$(CONFIG_CRYPTO_HMAC) += hmac.o
+obj-$(CONFIG_CRYPTO_NULL) += crypto_null.o
+obj-$(CONFIG_CRYPTO_MD4) += md4.o
+obj-$(CONFIG_CRYPTO_MD5) += md5.o
+obj-$(CONFIG_CRYPTO_SHA1) += sha1.o
+obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
+obj-$(CONFIG_CRYPTO_SHA512) += sha512.o
+obj-$(CONFIG_CRYPTO_DES) += des.o
+obj-$(CONFIG_CRYPTO_BLOWFISH) += blowfish.o
+obj-$(CONFIG_CRYPTO_TWOFISH) += twofish.o
+obj-$(CONFIG_CRYPTO_SERPENT) += serpent.o
+obj-$(CONFIG_CRYPTO_AES) += aes.o
+obj-$(CONFIG_CRYPTO_DEFLATE) += deflate.o
+
+obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
+
+include $(TOPDIR)/Rules.make
diff -Nru a/crypto/aes.c b/crypto/aes.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/aes.c Thu May 8 10:41:38 2003
@@ -0,0 +1,469 @@
+/*
+ * Cryptographic API.
+ *
+ * AES Cipher Algorithm.
+ *
+ * Based on Brian Gladman's code.
+ *
+ * Linux developers:
+ * Alexander Kjeldaas <astor@fast.no>
+ * Herbert Valerio Riedel <hvr@hvrlab.org>
+ * Kyle McMartin <kyle@debian.org>
+ * Adam J. Richter <adam@yggdrasil.com> (conversion to 2.5 API).
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * ---------------------------------------------------------------------------
+ * Copyright (c) 2002, Dr Brian Gladman <brg@gladman.me.uk>, Worcester, UK.
+ * All rights reserved.
+ *
+ * LICENSE TERMS
+ *
+ * The free distribution and use of this software in both source and binary
+ * form is allowed (with or without changes) provided that:
+ *
+ * 1. distributions of this source code include the above copyright
+ * notice, this list of conditions and the following disclaimer;
+ *
+ * 2. distributions in binary form include the above copyright
+ * notice, this list of conditions and the following disclaimer
+ * in the documentation and/or other associated materials;
+ *
+ * 3. the copyright holder's name is not used to endorse products
+ * built using this software without specific written permission.
+ *
+ * ALTERNATIVELY, provided that this notice is retained in full, this product
+ * may be distributed under the terms of the GNU General Public License (GPL),
+ * in which case the provisions of the GPL apply INSTEAD OF those given above.
+ *
+ * DISCLAIMER
+ *
+ * This software is provided 'as is' with no explicit or implied warranties
+ * in respect of its properties, including, but not limited to, correctness
+ * and/or fitness for purpose.
+ * ---------------------------------------------------------------------------
+ */
+
+/* Some changes from the Gladman version:
+ s/RIJNDAEL(e_key)/E_KEY/g
+ s/RIJNDAEL(d_key)/D_KEY/g
+*/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/crypto.h>
+#include <asm/byteorder.h>
+
+#define AES_MIN_KEY_SIZE 16
+#define AES_MAX_KEY_SIZE 32
+
+#define AES_BLOCK_SIZE 16
+
+static inline
+u32 generic_rotr32 (const u32 x, const unsigned bits)
+{
+ const unsigned n = bits % 32;
+ return (x >> n) | (x << (32 - n));
+}
+
+static inline
+u32 generic_rotl32 (const u32 x, const unsigned bits)
+{
+ const unsigned n = bits % 32;
+ return (x << n) | (x >> (32 - n));
+}
+
+#define rotl generic_rotl32
+#define rotr generic_rotr32
+
+/*
+ * #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
+ */
static inline u8
+byte(const u32 x, const unsigned n)
+{
+ return x >> (n << 3);
+}
+
+#define u32_in(x) le32_to_cpu(*(const u32 *)(x))
+#define u32_out(to, from) (*(u32 *)(to) = cpu_to_le32(from))
+
+struct aes_ctx {
+ int key_length;
+ u32 E[60];
+ u32 D[60];
+};
+
+#define E_KEY ctx->E
+#define D_KEY ctx->D
+
+static u8 pow_tab[256];
+static u8 log_tab[256];
+static u8 sbx_tab[256];
+static u8 isb_tab[256];
+static u32 rco_tab[10];
+static u32 ft_tab[4][256];
+static u32 it_tab[4][256];
+
+static u32 fl_tab[4][256];
+static u32 il_tab[4][256];
+
+static inline u8
+f_mult (u8 a, u8 b)
+{
+ u8 aa = log_tab[a], cc = aa + log_tab[b];
+
+ return pow_tab[cc + (cc < aa ? 1 : 0)];
+}
+
+#define ff_mult(a,b) (a && b ? f_mult(a, b) : 0)
+
+#define f_rn(bo, bi, n, k) \
+ bo[n] = ft_tab[0][byte(bi[n],0)] ^ \
+ ft_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+ ft_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ ft_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+
+#define i_rn(bo, bi, n, k) \
+ bo[n] = it_tab[0][byte(bi[n],0)] ^ \
+ it_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+ it_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ it_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+
+#define ls_box(x) \
+ ( fl_tab[0][byte(x, 0)] ^ \
+ fl_tab[1][byte(x, 1)] ^ \
+ fl_tab[2][byte(x, 2)] ^ \
+ fl_tab[3][byte(x, 3)] )
+
+#define f_rl(bo, bi, n, k) \
+ bo[n] = fl_tab[0][byte(bi[n],0)] ^ \
+ fl_tab[1][byte(bi[(n + 1) & 3],1)] ^ \
+ fl_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ fl_tab[3][byte(bi[(n + 3) & 3],3)] ^ *(k + n)
+
+#define i_rl(bo, bi, n, k) \
+ bo[n] = il_tab[0][byte(bi[n],0)] ^ \
+ il_tab[1][byte(bi[(n + 3) & 3],1)] ^ \
+ il_tab[2][byte(bi[(n + 2) & 3],2)] ^ \
+ il_tab[3][byte(bi[(n + 1) & 3],3)] ^ *(k + n)
+
+static void
+gen_tabs (void)
+{
+ u32 i, t;
+ u8 p, q;
+
+ /* log and power tables for GF(2**8) finite field with
   0x011b as modular polynomial - the simplest primitive
+ root is 0x03, used here to generate the tables */
+
+ for (i = 0, p = 1; i < 256; ++i) {
+ pow_tab[i] = (u8) p;
+ log_tab[p] = (u8) i;
+
+ p ^= (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+ }
+
+ log_tab[1] = 0;
+
+ for (i = 0, p = 1; i < 10; ++i) {
+ rco_tab[i] = p;
+
+ p = (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+ }
+
+ for (i = 0; i < 256; ++i) {
+ p = (i ? pow_tab[255 - log_tab[i]] : 0);
+ q = ((p >> 7) | (p << 1)) ^ ((p >> 6) | (p << 2));
+ p ^= 0x63 ^ q ^ ((q >> 6) | (q << 2));
+ sbx_tab[i] = p;
+ isb_tab[p] = (u8) i;
+ }
+
+ for (i = 0; i < 256; ++i) {
+ p = sbx_tab[i];
+
+ t = p;
+ fl_tab[0][i] = t;
+ fl_tab[1][i] = rotl (t, 8);
+ fl_tab[2][i] = rotl (t, 16);
+ fl_tab[3][i] = rotl (t, 24);
+
+ t = ((u32) ff_mult (2, p)) |
+ ((u32) p << 8) |
+ ((u32) p << 16) | ((u32) ff_mult (3, p) << 24);
+
+ ft_tab[0][i] = t;
+ ft_tab[1][i] = rotl (t, 8);
+ ft_tab[2][i] = rotl (t, 16);
+ ft_tab[3][i] = rotl (t, 24);
+
+ p = isb_tab[i];
+
+ t = p;
+ il_tab[0][i] = t;
+ il_tab[1][i] = rotl (t, 8);
+ il_tab[2][i] = rotl (t, 16);
+ il_tab[3][i] = rotl (t, 24);
+
+ t = ((u32) ff_mult (14, p)) |
+ ((u32) ff_mult (9, p) << 8) |
+ ((u32) ff_mult (13, p) << 16) |
+ ((u32) ff_mult (11, p) << 24);
+
+ it_tab[0][i] = t;
+ it_tab[1][i] = rotl (t, 8);
+ it_tab[2][i] = rotl (t, 16);
+ it_tab[3][i] = rotl (t, 24);
+ }
+}
+
+#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+
+#define imix_col(y,x) \
+ u = star_x(x); \
+ v = star_x(u); \
+ w = star_x(v); \
+ t = w ^ (x); \
+ (y) = u ^ v ^ w; \
+ (y) ^= rotr(u ^ t, 8) ^ \
+ rotr(v ^ t, 16) ^ \
+ rotr(t,24)
+
+/* initialise the key schedule from the user supplied key */
+
+#define loop4(i) \
+{ t = rotr(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[4 * i]; E_KEY[4 * i + 4] = t; \
+ t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t; \
+ t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t; \
+ t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t; \
+}
+
+#define loop6(i) \
+{ t = rotr(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[6 * i]; E_KEY[6 * i + 6] = t; \
+ t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t; \
+ t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t; \
+ t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t; \
+ t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t; \
+ t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t; \
+}
+
+#define loop8(i) \
+{ t = rotr(t, 8); t = ls_box(t) ^ rco_tab[i]; \
+ t ^= E_KEY[8 * i]; E_KEY[8 * i + 8] = t; \
+ t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t; \
+ t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t; \
+ t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t; \
+ t = E_KEY[8 * i + 4] ^ ls_box(t); \
+ E_KEY[8 * i + 12] = t; \
+ t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t; \
+ t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t; \
+ t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t; \
+}
+
+static int
+aes_set_key(void *ctx_arg, const u8 *in_key, unsigned int key_len, u32 *flags)
+{
+ struct aes_ctx *ctx = ctx_arg;
+ u32 i, t, u, v, w;
+
+ if (key_len != 16 && key_len != 24 && key_len != 32) {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->key_length = key_len;
+
+ E_KEY[0] = u32_in (in_key);
+ E_KEY[1] = u32_in (in_key + 4);
+ E_KEY[2] = u32_in (in_key + 8);
+ E_KEY[3] = u32_in (in_key + 12);
+
+ switch (key_len) {
+ case 16:
+ t = E_KEY[3];
+ for (i = 0; i < 10; ++i)
+ loop4 (i);
+ break;
+
+ case 24:
+ E_KEY[4] = u32_in (in_key + 16);
+ t = E_KEY[5] = u32_in (in_key + 20);
+ for (i = 0; i < 8; ++i)
+ loop6 (i);
+ break;
+
+ case 32:
+ E_KEY[4] = u32_in (in_key + 16);
+ E_KEY[5] = u32_in (in_key + 20);
+ E_KEY[6] = u32_in (in_key + 24);
+ t = E_KEY[7] = u32_in (in_key + 28);
+ for (i = 0; i < 7; ++i)
+ loop8 (i);
+ break;
+ }
+
+ D_KEY[0] = E_KEY[0];
+ D_KEY[1] = E_KEY[1];
+ D_KEY[2] = E_KEY[2];
+ D_KEY[3] = E_KEY[3];
+
+ for (i = 4; i < key_len + 24; ++i) {
+ imix_col (D_KEY[i], E_KEY[i]);
+ }
+
+ return 0;
+}
+
+/* encrypt a block of text */
+
+#define f_nround(bo, bi, k) \
+ f_rn(bo, bi, 0, k); \
+ f_rn(bo, bi, 1, k); \
+ f_rn(bo, bi, 2, k); \
+ f_rn(bo, bi, 3, k); \
+ k += 4
+
+#define f_lround(bo, bi, k) \
+ f_rl(bo, bi, 0, k); \
+ f_rl(bo, bi, 1, k); \
+ f_rl(bo, bi, 2, k); \
+ f_rl(bo, bi, 3, k)
+
+static void aes_encrypt(void *ctx_arg, u8 *out, const u8 *in)
+{
+ const struct aes_ctx *ctx = ctx_arg;
+ u32 b0[4], b1[4];
+ const u32 *kp = E_KEY + 4;
+
+ b0[0] = u32_in (in) ^ E_KEY[0];
+ b0[1] = u32_in (in + 4) ^ E_KEY[1];
+ b0[2] = u32_in (in + 8) ^ E_KEY[2];
+ b0[3] = u32_in (in + 12) ^ E_KEY[3];
+
+ if (ctx->key_length > 24) {
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ }
+
+ if (ctx->key_length > 16) {
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ }
+
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ f_nround (b1, b0, kp);
+ f_nround (b0, b1, kp);
+ f_nround (b1, b0, kp);
+ f_lround (b0, b1, kp);
+
+ u32_out (out, b0[0]);
+ u32_out (out + 4, b0[1]);
+ u32_out (out + 8, b0[2]);
+ u32_out (out + 12, b0[3]);
+}
+
+/* decrypt a block of text */
+
+#define i_nround(bo, bi, k) \
+ i_rn(bo, bi, 0, k); \
+ i_rn(bo, bi, 1, k); \
+ i_rn(bo, bi, 2, k); \
+ i_rn(bo, bi, 3, k); \
+ k -= 4
+
+#define i_lround(bo, bi, k) \
+ i_rl(bo, bi, 0, k); \
+ i_rl(bo, bi, 1, k); \
+ i_rl(bo, bi, 2, k); \
+ i_rl(bo, bi, 3, k)
+
+static void aes_decrypt(void *ctx_arg, u8 *out, const u8 *in)
+{
+ const struct aes_ctx *ctx = ctx_arg;
+ u32 b0[4], b1[4];
+ const int key_len = ctx->key_length;
+ const u32 *kp = D_KEY + key_len + 20;
+
+ b0[0] = u32_in (in) ^ E_KEY[key_len + 24];
+ b0[1] = u32_in (in + 4) ^ E_KEY[key_len + 25];
+ b0[2] = u32_in (in + 8) ^ E_KEY[key_len + 26];
+ b0[3] = u32_in (in + 12) ^ E_KEY[key_len + 27];
+
+ if (key_len > 24) {
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ }
+
+ if (key_len > 16) {
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ }
+
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ i_nround (b1, b0, kp);
+ i_nround (b0, b1, kp);
+ i_nround (b1, b0, kp);
+ i_lround (b0, b1, kp);
+
+ u32_out (out, b0[0]);
+ u32_out (out + 4, b0[1]);
+ u32_out (out + 8, b0[2]);
+ u32_out (out + 12, b0[3]);
+}
+
+
+static struct crypto_alg aes_alg = {
+ .cra_name = "aes",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = AES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct aes_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(aes_alg.cra_list),
+ .cra_u = {
+ .cipher = {
+ .cia_min_keysize = AES_MIN_KEY_SIZE,
+ .cia_max_keysize = AES_MAX_KEY_SIZE,
+ .cia_ivsize = AES_BLOCK_SIZE,
+ .cia_setkey = aes_set_key,
+ .cia_encrypt = aes_encrypt,
+ .cia_decrypt = aes_decrypt
+ }
+ }
+};
+
+static int __init aes_init(void)
+{
+ gen_tabs();
+ return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_fini(void)
+{
+ crypto_unregister_alg(&aes_alg);
+}
+
+module_init(aes_init);
+module_exit(aes_fini);
+
+MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
+MODULE_LICENSE("Dual BSD/GPL");
+
diff -Nru a/crypto/api.c b/crypto/api.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/api.c Thu May 8 10:41:38 2003
@@ -0,0 +1,227 @@
+/*
+ * Scatterlist Cryptographic API.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Copyright (c) 2002 David S. Miller (davem@redhat.com)
+ *
+ * Portions derived from Cryptoapi, by Alexander Kjeldaas <astor@fast.no>
+ * and Nettle, by Niels Möller.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/crypto.h>
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/slab.h>
+#include "internal.h"
+
+LIST_HEAD(crypto_alg_list);
+DECLARE_RWSEM(crypto_alg_sem);
+
+static inline int crypto_alg_get(struct crypto_alg *alg)
+{
+ return try_inc_mod_count(alg->cra_module);
+}
+
+static inline void crypto_alg_put(struct crypto_alg *alg)
+{
+ if (alg->cra_module)
+ __MOD_DEC_USE_COUNT(alg->cra_module);
+}
+
+struct crypto_alg *crypto_alg_lookup(const char *name)
+{
+ struct crypto_alg *q, *alg = NULL;
+
+ down_read(&crypto_alg_sem);
+
+ list_for_each_entry(q, &crypto_alg_list, cra_list) {
+ if (!(strcmp(q->cra_name, name))) {
+ if (crypto_alg_get(q))
+ alg = q;
+ break;
+ }
+ }
+
+ up_read(&crypto_alg_sem);
+ return alg;
+}
+
+static int crypto_init_flags(struct crypto_tfm *tfm, u32 flags)
+{
+ tfm->crt_flags = 0;
+
+ switch (crypto_tfm_alg_type(tfm)) {
+ case CRYPTO_ALG_TYPE_CIPHER:
+ return crypto_init_cipher_flags(tfm, flags);
+
+ case CRYPTO_ALG_TYPE_DIGEST:
+ return crypto_init_digest_flags(tfm, flags);
+
+ case CRYPTO_ALG_TYPE_COMPRESS:
+ return crypto_init_compress_flags(tfm, flags);
+
+ default:
+ break;
+ }
+
+ BUG();
+ return -EINVAL;
+}
+
+static int crypto_init_ops(struct crypto_tfm *tfm)
+{
+ switch (crypto_tfm_alg_type(tfm)) {
+ case CRYPTO_ALG_TYPE_CIPHER:
+ return crypto_init_cipher_ops(tfm);
+
+ case CRYPTO_ALG_TYPE_DIGEST:
+ return crypto_init_digest_ops(tfm);
+
+ case CRYPTO_ALG_TYPE_COMPRESS:
+ return crypto_init_compress_ops(tfm);
+
+ default:
+ break;
+ }
+
+ BUG();
+ return -EINVAL;
+}
+
+static void crypto_exit_ops(struct crypto_tfm *tfm)
+{
+ switch (crypto_tfm_alg_type(tfm)) {
+ case CRYPTO_ALG_TYPE_CIPHER:
+ crypto_exit_cipher_ops(tfm);
+ break;
+
+ case CRYPTO_ALG_TYPE_DIGEST:
+ crypto_exit_digest_ops(tfm);
+ break;
+
+ case CRYPTO_ALG_TYPE_COMPRESS:
+ crypto_exit_compress_ops(tfm);
+ break;
+
+ default:
+ BUG();
+
+ }
+}
+
+struct crypto_tfm *crypto_alloc_tfm(const char *name, u32 flags)
+{
+ struct crypto_tfm *tfm = NULL;
+ struct crypto_alg *alg;
+
+ alg = crypto_alg_mod_lookup(name);
+ if (alg == NULL)
+ goto out;
+
+ tfm = kmalloc(sizeof(*tfm) + alg->cra_ctxsize, GFP_KERNEL);
+ if (tfm == NULL)
+ goto out_put;
+
+ memset(tfm, 0, sizeof(*tfm) + alg->cra_ctxsize);
+
+ tfm->__crt_alg = alg;
+
+ if (crypto_init_flags(tfm, flags))
+ goto out_free_tfm;
+
+ if (crypto_init_ops(tfm)) {
+ crypto_exit_ops(tfm);
+ goto out_free_tfm;
+ }
+
+ goto out;
+
+out_free_tfm:
+ kfree(tfm);
+ tfm = NULL;
+out_put:
+ crypto_alg_put(alg);
+out:
+ return tfm;
+}
+
+void crypto_free_tfm(struct crypto_tfm *tfm)
+{
+ crypto_exit_ops(tfm);
+ crypto_alg_put(tfm->__crt_alg);
+ kfree(tfm);
+}
+
+int crypto_register_alg(struct crypto_alg *alg)
+{
+ int ret = 0;
+ struct crypto_alg *q;
+
+ down_write(&crypto_alg_sem);
+
+ list_for_each_entry(q, &crypto_alg_list, cra_list) {
+ if (!(strcmp(q->cra_name, alg->cra_name))) {
+ ret = -EEXIST;
+ goto out;
+ }
+ }
+
+ list_add_tail(&alg->cra_list, &crypto_alg_list);
+out:
+ up_write(&crypto_alg_sem);
+ return ret;
+}
+
+int crypto_unregister_alg(struct crypto_alg *alg)
+{
+ int ret = -ENOENT;
+ struct crypto_alg *q;
+
+ BUG_ON(!alg->cra_module);
+
+ down_write(&crypto_alg_sem);
+ list_for_each_entry(q, &crypto_alg_list, cra_list) {
+ if (alg == q) {
+ list_del(&alg->cra_list);
+ ret = 0;
+ goto out;
+ }
+ }
+out:
+ up_write(&crypto_alg_sem);
+ return ret;
+}
+
+int crypto_alg_available(const char *name, u32 flags)
+{
+ int ret = 0;
+ struct crypto_alg *alg = crypto_alg_mod_lookup(name);
+
+ if (alg) {
+ crypto_alg_put(alg);
+ ret = 1;
+ }
+
+ return ret;
+}
+
+static int __init init_crypto(void)
+{
+ printk(KERN_INFO "Initializing Cryptographic API\n");
+ crypto_init_proc();
+ return 0;
+}
+
+__initcall(init_crypto);
+
+EXPORT_SYMBOL_GPL(crypto_register_alg);
+EXPORT_SYMBOL_GPL(crypto_unregister_alg);
+EXPORT_SYMBOL_GPL(crypto_alloc_tfm);
+EXPORT_SYMBOL_GPL(crypto_free_tfm);
+EXPORT_SYMBOL_GPL(crypto_alg_available);
diff -Nru a/crypto/autoload.c b/crypto/autoload.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/autoload.c Thu May 8 10:41:38 2003
@@ -0,0 +1,37 @@
+/*
+ * Cryptographic API.
+ *
+ * Algorithm autoloader.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/crypto.h>
+#include <linux/string.h>
+#include <linux/kmod.h>
+#include "internal.h"
+
+/*
+ * A far more intelligent version of this is planned. For now, just
+ * try an exact match on the name of the algorithm.
+ */
+void crypto_alg_autoload(const char *name)
+{
+ request_module(name);
+}
+
+struct crypto_alg *crypto_alg_mod_lookup(const char *name)
+{
+ struct crypto_alg *alg = crypto_alg_lookup(name);
+ if (alg == NULL) {
+ crypto_alg_autoload(name);
+ alg = crypto_alg_lookup(name);
+ }
+ return alg;
+}
diff -Nru a/crypto/blowfish.c b/crypto/blowfish.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/blowfish.c Thu May 8 10:41:38 2003
@@ -0,0 +1,479 @@
+/*
+ * Cryptographic API.
+ *
+ * Blowfish Cipher Algorithm, by Bruce Schneier.
+ * http://www.counterpane.com/blowfish.html
+ *
+ * Adapted from the Kerneli implementation.
+ *
+ * Copyright (c) Herbert Valerio Riedel <hvr@hvrlab.org>
+ * Copyright (c) Kyle McMartin <kyle@debian.org>
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define BF_BLOCK_SIZE 8
+#define BF_MIN_KEY_SIZE 4
+#define BF_MAX_KEY_SIZE 56
+
+struct bf_ctx {
+ u32 p[18];
+ u32 s[1024];
+};
+
+static const u32 bf_pbox[16 + 2] = {
+ 0x243f6a88, 0x85a308d3, 0x13198a2e, 0x03707344,
+ 0xa4093822, 0x299f31d0, 0x082efa98, 0xec4e6c89,
+ 0x452821e6, 0x38d01377, 0xbe5466cf, 0x34e90c6c,
+ 0xc0ac29b7, 0xc97c50dd, 0x3f84d5b5, 0xb5470917,
+ 0x9216d5d9, 0x8979fb1b,
+};
+
+static const u32 bf_sbox[256 * 4] = {
+ 0xd1310ba6, 0x98dfb5ac, 0x2ffd72db, 0xd01adfb7,
+ 0xb8e1afed, 0x6a267e96, 0xba7c9045, 0xf12c7f99,
+ 0x24a19947, 0xb3916cf7, 0x0801f2e2, 0x858efc16,
+ 0x636920d8, 0x71574e69, 0xa458fea3, 0xf4933d7e,
+ 0x0d95748f, 0x728eb658, 0x718bcd58, 0x82154aee,
+ 0x7b54a41d, 0xc25a59b5, 0x9c30d539, 0x2af26013,
+ 0xc5d1b023, 0x286085f0, 0xca417918, 0xb8db38ef,
+ 0x8e79dcb0, 0x603a180e, 0x6c9e0e8b, 0xb01e8a3e,
+ 0xd71577c1, 0xbd314b27, 0x78af2fda, 0x55605c60,
+ 0xe65525f3, 0xaa55ab94, 0x57489862, 0x63e81440,
+ 0x55ca396a, 0x2aab10b6, 0xb4cc5c34, 0x1141e8ce,
+ 0xa15486af, 0x7c72e993, 0xb3ee1411, 0x636fbc2a,
+ 0x2ba9c55d, 0x741831f6, 0xce5c3e16, 0x9b87931e,
+ 0xafd6ba33, 0x6c24cf5c, 0x7a325381, 0x28958677,
+ 0x3b8f4898, 0x6b4bb9af, 0xc4bfe81b, 0x66282193,
+ 0x61d809cc, 0xfb21a991, 0x487cac60, 0x5dec8032,
+ 0xef845d5d, 0xe98575b1, 0xdc262302, 0xeb651b88,
+ 0x23893e81, 0xd396acc5, 0x0f6d6ff3, 0x83f44239,
+ 0x2e0b4482, 0xa4842004, 0x69c8f04a, 0x9e1f9b5e,
+ 0x21c66842, 0xf6e96c9a, 0x670c9c61, 0xabd388f0,
+ 0x6a51a0d2, 0xd8542f68, 0x960fa728, 0xab5133a3,
+ 0x6eef0b6c, 0x137a3be4, 0xba3bf050, 0x7efb2a98,
+ 0xa1f1651d, 0x39af0176, 0x66ca593e, 0x82430e88,
+ 0x8cee8619, 0x456f9fb4, 0x7d84a5c3, 0x3b8b5ebe,
+ 0xe06f75d8, 0x85c12073, 0x401a449f, 0x56c16aa6,
+ 0x4ed3aa62, 0x363f7706, 0x1bfedf72, 0x429b023d,
+ 0x37d0d724, 0xd00a1248, 0xdb0fead3, 0x49f1c09b,
+ 0x075372c9, 0x80991b7b, 0x25d479d8, 0xf6e8def7,
+ 0xe3fe501a, 0xb6794c3b, 0x976ce0bd, 0x04c006ba,
+ 0xc1a94fb6, 0x409f60c4, 0x5e5c9ec2, 0x196a2463,
+ 0x68fb6faf, 0x3e6c53b5, 0x1339b2eb, 0x3b52ec6f,
+ 0x6dfc511f, 0x9b30952c, 0xcc814544, 0xaf5ebd09,
+ 0xbee3d004, 0xde334afd, 0x660f2807, 0x192e4bb3,
+ 0xc0cba857, 0x45c8740f, 0xd20b5f39, 0xb9d3fbdb,
+ 0x5579c0bd, 0x1a60320a, 0xd6a100c6, 0x402c7279,
+ 0x679f25fe, 0xfb1fa3cc, 0x8ea5e9f8, 0xdb3222f8,
+ 0x3c7516df, 0xfd616b15, 0x2f501ec8, 0xad0552ab,
+ 0x323db5fa, 0xfd238760, 0x53317b48, 0x3e00df82,
+ 0x9e5c57bb, 0xca6f8ca0, 0x1a87562e, 0xdf1769db,
+ 0xd542a8f6, 0x287effc3, 0xac6732c6, 0x8c4f5573,
+ 0x695b27b0, 0xbbca58c8, 0xe1ffa35d, 0xb8f011a0,
+ 0x10fa3d98, 0xfd2183b8, 0x4afcb56c, 0x2dd1d35b,
+ 0x9a53e479, 0xb6f84565, 0xd28e49bc, 0x4bfb9790,
+ 0xe1ddf2da, 0xa4cb7e33, 0x62fb1341, 0xcee4c6e8,
+ 0xef20cada, 0x36774c01, 0xd07e9efe, 0x2bf11fb4,
+ 0x95dbda4d, 0xae909198, 0xeaad8e71, 0x6b93d5a0,
+ 0xd08ed1d0, 0xafc725e0, 0x8e3c5b2f, 0x8e7594b7,
+ 0x8ff6e2fb, 0xf2122b64, 0x8888b812, 0x900df01c,
+ 0x4fad5ea0, 0x688fc31c, 0xd1cff191, 0xb3a8c1ad,
+ 0x2f2f2218, 0xbe0e1777, 0xea752dfe, 0x8b021fa1,
+ 0xe5a0cc0f, 0xb56f74e8, 0x18acf3d6, 0xce89e299,
+ 0xb4a84fe0, 0xfd13e0b7, 0x7cc43b81, 0xd2ada8d9,
+ 0x165fa266, 0x80957705, 0x93cc7314, 0x211a1477,
+ 0xe6ad2065, 0x77b5fa86, 0xc75442f5, 0xfb9d35cf,
+ 0xebcdaf0c, 0x7b3e89a0, 0xd6411bd3, 0xae1e7e49,
+ 0x00250e2d, 0x2071b35e, 0x226800bb, 0x57b8e0af,
+ 0x2464369b, 0xf009b91e, 0x5563911d, 0x59dfa6aa,
+ 0x78c14389, 0xd95a537f, 0x207d5ba2, 0x02e5b9c5,
+ 0x83260376, 0x6295cfa9, 0x11c81968, 0x4e734a41,
+ 0xb3472dca, 0x7b14a94a, 0x1b510052, 0x9a532915,
+ 0xd60f573f, 0xbc9bc6e4, 0x2b60a476, 0x81e67400,
+ 0x08ba6fb5, 0x571be91f, 0xf296ec6b, 0x2a0dd915,
+ 0xb6636521, 0xe7b9f9b6, 0xff34052e, 0xc5855664,
+ 0x53b02d5d, 0xa99f8fa1, 0x08ba4799, 0x6e85076a,
+ 0x4b7a70e9, 0xb5b32944, 0xdb75092e, 0xc4192623,
+ 0xad6ea6b0, 0x49a7df7d, 0x9cee60b8, 0x8fedb266,
+ 0xecaa8c71, 0x699a17ff, 0x5664526c, 0xc2b19ee1,
+ 0x193602a5, 0x75094c29, 0xa0591340, 0xe4183a3e,
+ 0x3f54989a, 0x5b429d65, 0x6b8fe4d6, 0x99f73fd6,
+ 0xa1d29c07, 0xefe830f5, 0x4d2d38e6, 0xf0255dc1,
+ 0x4cdd2086, 0x8470eb26, 0x6382e9c6, 0x021ecc5e,
+ 0x09686b3f, 0x3ebaefc9, 0x3c971814, 0x6b6a70a1,
+ 0x687f3584, 0x52a0e286, 0xb79c5305, 0xaa500737,
+ 0x3e07841c, 0x7fdeae5c, 0x8e7d44ec, 0x5716f2b8,
+ 0xb03ada37, 0xf0500c0d, 0xf01c1f04, 0x0200b3ff,
+ 0xae0cf51a, 0x3cb574b2, 0x25837a58, 0xdc0921bd,
+ 0xd19113f9, 0x7ca92ff6, 0x94324773, 0x22f54701,
+ 0x3ae5e581, 0x37c2dadc, 0xc8b57634, 0x9af3dda7,
+ 0xa9446146, 0x0fd0030e, 0xecc8c73e, 0xa4751e41,
+ 0xe238cd99, 0x3bea0e2f, 0x3280bba1, 0x183eb331,
+ 0x4e548b38, 0x4f6db908, 0x6f420d03, 0xf60a04bf,
+ 0x2cb81290, 0x24977c79, 0x5679b072, 0xbcaf89af,
+ 0xde9a771f, 0xd9930810, 0xb38bae12, 0xdccf3f2e,
+ 0x5512721f, 0x2e6b7124, 0x501adde6, 0x9f84cd87,
+ 0x7a584718, 0x7408da17, 0xbc9f9abc, 0xe94b7d8c,
+ 0xec7aec3a, 0xdb851dfa, 0x63094366, 0xc464c3d2,
+ 0xef1c1847, 0x3215d908, 0xdd433b37, 0x24c2ba16,
+ 0x12a14d43, 0x2a65c451, 0x50940002, 0x133ae4dd,
+ 0x71dff89e, 0x10314e55, 0x81ac77d6, 0x5f11199b,
+ 0x043556f1, 0xd7a3c76b, 0x3c11183b, 0x5924a509,
+ 0xf28fe6ed, 0x97f1fbfa, 0x9ebabf2c, 0x1e153c6e,
+ 0x86e34570, 0xeae96fb1, 0x860e5e0a, 0x5a3e2ab3,
+ 0x771fe71c, 0x4e3d06fa, 0x2965dcb9, 0x99e71d0f,
+ 0x803e89d6, 0x5266c825, 0x2e4cc978, 0x9c10b36a,
+ 0xc6150eba, 0x94e2ea78, 0xa5fc3c53, 0x1e0a2df4,
+ 0xf2f74ea7, 0x361d2b3d, 0x1939260f, 0x19c27960,
+ 0x5223a708, 0xf71312b6, 0xebadfe6e, 0xeac31f66,
+ 0xe3bc4595, 0xa67bc883, 0xb17f37d1, 0x018cff28,
+ 0xc332ddef, 0xbe6c5aa5, 0x65582185, 0x68ab9802,
+ 0xeecea50f, 0xdb2f953b, 0x2aef7dad, 0x5b6e2f84,
+ 0x1521b628, 0x29076170, 0xecdd4775, 0x619f1510,
+ 0x13cca830, 0xeb61bd96, 0x0334fe1e, 0xaa0363cf,
+ 0xb5735c90, 0x4c70a239, 0xd59e9e0b, 0xcbaade14,
+ 0xeecc86bc, 0x60622ca7, 0x9cab5cab, 0xb2f3846e,
+ 0x648b1eaf, 0x19bdf0ca, 0xa02369b9, 0x655abb50,
+ 0x40685a32, 0x3c2ab4b3, 0x319ee9d5, 0xc021b8f7,
+ 0x9b540b19, 0x875fa099, 0x95f7997e, 0x623d7da8,
+ 0xf837889a, 0x97e32d77, 0x11ed935f, 0x16681281,
+ 0x0e358829, 0xc7e61fd6, 0x96dedfa1, 0x7858ba99,
+ 0x57f584a5, 0x1b227263, 0x9b83c3ff, 0x1ac24696,
+ 0xcdb30aeb, 0x532e3054, 0x8fd948e4, 0x6dbc3128,
+ 0x58ebf2ef, 0x34c6ffea, 0xfe28ed61, 0xee7c3c73,
+ 0x5d4a14d9, 0xe864b7e3, 0x42105d14, 0x203e13e0,
+ 0x45eee2b6, 0xa3aaabea, 0xdb6c4f15, 0xfacb4fd0,
+ 0xc742f442, 0xef6abbb5, 0x654f3b1d, 0x41cd2105,
+ 0xd81e799e, 0x86854dc7, 0xe44b476a, 0x3d816250,
+ 0xcf62a1f2, 0x5b8d2646, 0xfc8883a0, 0xc1c7b6a3,
+ 0x7f1524c3, 0x69cb7492, 0x47848a0b, 0x5692b285,
+ 0x095bbf00, 0xad19489d, 0x1462b174, 0x23820e00,
+ 0x58428d2a, 0x0c55f5ea, 0x1dadf43e, 0x233f7061,
+ 0x3372f092, 0x8d937e41, 0xd65fecf1, 0x6c223bdb,
+ 0x7cde3759, 0xcbee7460, 0x4085f2a7, 0xce77326e,
+ 0xa6078084, 0x19f8509e, 0xe8efd855, 0x61d99735,
+ 0xa969a7aa, 0xc50c06c2, 0x5a04abfc, 0x800bcadc,
+ 0x9e447a2e, 0xc3453484, 0xfdd56705, 0x0e1e9ec9,
+ 0xdb73dbd3, 0x105588cd, 0x675fda79, 0xe3674340,
+ 0xc5c43465, 0x713e38d8, 0x3d28f89e, 0xf16dff20,
+ 0x153e21e7, 0x8fb03d4a, 0xe6e39f2b, 0xdb83adf7,
+ 0xe93d5a68, 0x948140f7, 0xf64c261c, 0x94692934,
+ 0x411520f7, 0x7602d4f7, 0xbcf46b2e, 0xd4a20068,
+ 0xd4082471, 0x3320f46a, 0x43b7d4b7, 0x500061af,
+ 0x1e39f62e, 0x97244546, 0x14214f74, 0xbf8b8840,
+ 0x4d95fc1d, 0x96b591af, 0x70f4ddd3, 0x66a02f45,
+ 0xbfbc09ec, 0x03bd9785, 0x7fac6dd0, 0x31cb8504,
+ 0x96eb27b3, 0x55fd3941, 0xda2547e6, 0xabca0a9a,
+ 0x28507825, 0x530429f4, 0x0a2c86da, 0xe9b66dfb,
+ 0x68dc1462, 0xd7486900, 0x680ec0a4, 0x27a18dee,
+ 0x4f3ffea2, 0xe887ad8c, 0xb58ce006, 0x7af4d6b6,
+ 0xaace1e7c, 0xd3375fec, 0xce78a399, 0x406b2a42,
+ 0x20fe9e35, 0xd9f385b9, 0xee39d7ab, 0x3b124e8b,
+ 0x1dc9faf7, 0x4b6d1856, 0x26a36631, 0xeae397b2,
+ 0x3a6efa74, 0xdd5b4332, 0x6841e7f7, 0xca7820fb,
+ 0xfb0af54e, 0xd8feb397, 0x454056ac, 0xba489527,
+ 0x55533a3a, 0x20838d87, 0xfe6ba9b7, 0xd096954b,
+ 0x55a867bc, 0xa1159a58, 0xcca92963, 0x99e1db33,
+ 0xa62a4a56, 0x3f3125f9, 0x5ef47e1c, 0x9029317c,
+ 0xfdf8e802, 0x04272f70, 0x80bb155c, 0x05282ce3,
+ 0x95c11548, 0xe4c66d22, 0x48c1133f, 0xc70f86dc,
+ 0x07f9c9ee, 0x41041f0f, 0x404779a4, 0x5d886e17,
+ 0x325f51eb, 0xd59bc0d1, 0xf2bcc18f, 0x41113564,
+ 0x257b7834, 0x602a9c60, 0xdff8e8a3, 0x1f636c1b,
+ 0x0e12b4c2, 0x02e1329e, 0xaf664fd1, 0xcad18115,
+ 0x6b2395e0, 0x333e92e1, 0x3b240b62, 0xeebeb922,
+ 0x85b2a20e, 0xe6ba0d99, 0xde720c8c, 0x2da2f728,
+ 0xd0127845, 0x95b794fd, 0x647d0862, 0xe7ccf5f0,
+ 0x5449a36f, 0x877d48fa, 0xc39dfd27, 0xf33e8d1e,
+ 0x0a476341, 0x992eff74, 0x3a6f6eab, 0xf4f8fd37,
+ 0xa812dc60, 0xa1ebddf8, 0x991be14c, 0xdb6e6b0d,
+ 0xc67b5510, 0x6d672c37, 0x2765d43b, 0xdcd0e804,
+ 0xf1290dc7, 0xcc00ffa3, 0xb5390f92, 0x690fed0b,
+ 0x667b9ffb, 0xcedb7d9c, 0xa091cf0b, 0xd9155ea3,
+ 0xbb132f88, 0x515bad24, 0x7b9479bf, 0x763bd6eb,
+ 0x37392eb3, 0xcc115979, 0x8026e297, 0xf42e312d,
+ 0x6842ada7, 0xc66a2b3b, 0x12754ccc, 0x782ef11c,
+ 0x6a124237, 0xb79251e7, 0x06a1bbe6, 0x4bfb6350,
+ 0x1a6b1018, 0x11caedfa, 0x3d25bdd8, 0xe2e1c3c9,
+ 0x44421659, 0x0a121386, 0xd90cec6e, 0xd5abea2a,
+ 0x64af674e, 0xda86a85f, 0xbebfe988, 0x64e4c3fe,
+ 0x9dbc8057, 0xf0f7c086, 0x60787bf8, 0x6003604d,
+ 0xd1fd8346, 0xf6381fb0, 0x7745ae04, 0xd736fccc,
+ 0x83426b33, 0xf01eab71, 0xb0804187, 0x3c005e5f,
+ 0x77a057be, 0xbde8ae24, 0x55464299, 0xbf582e61,
+ 0x4e58f48f, 0xf2ddfda2, 0xf474ef38, 0x8789bdc2,
+ 0x5366f9c3, 0xc8b38e74, 0xb475f255, 0x46fcd9b9,
+ 0x7aeb2661, 0x8b1ddf84, 0x846a0e79, 0x915f95e2,
+ 0x466e598e, 0x20b45770, 0x8cd55591, 0xc902de4c,
+ 0xb90bace1, 0xbb8205d0, 0x11a86248, 0x7574a99e,
+ 0xb77f19b6, 0xe0a9dc09, 0x662d09a1, 0xc4324633,
+ 0xe85a1f02, 0x09f0be8c, 0x4a99a025, 0x1d6efe10,
+ 0x1ab93d1d, 0x0ba5a4df, 0xa186f20f, 0x2868f169,
+ 0xdcb7da83, 0x573906fe, 0xa1e2ce9b, 0x4fcd7f52,
+ 0x50115e01, 0xa70683fa, 0xa002b5c4, 0x0de6d027,
+ 0x9af88c27, 0x773f8641, 0xc3604c06, 0x61a806b5,
+ 0xf0177a28, 0xc0f586e0, 0x006058aa, 0x30dc7d62,
+ 0x11e69ed7, 0x2338ea63, 0x53c2dd94, 0xc2c21634,
+ 0xbbcbee56, 0x90bcb6de, 0xebfc7da1, 0xce591d76,
+ 0x6f05e409, 0x4b7c0188, 0x39720a3d, 0x7c927c24,
+ 0x86e3725f, 0x724d9db9, 0x1ac15bb4, 0xd39eb8fc,
+ 0xed545578, 0x08fca5b5, 0xd83d7cd3, 0x4dad0fc4,
+ 0x1e50ef5e, 0xb161e6f8, 0xa28514d9, 0x6c51133c,
+ 0x6fd5c7e7, 0x56e14ec4, 0x362abfce, 0xddc6c837,
+ 0xd79a3234, 0x92638212, 0x670efa8e, 0x406000e0,
+ 0x3a39ce37, 0xd3faf5cf, 0xabc27737, 0x5ac52d1b,
+ 0x5cb0679e, 0x4fa33742, 0xd3822740, 0x99bc9bbe,
+ 0xd5118e9d, 0xbf0f7315, 0xd62d1c7e, 0xc700c47b,
+ 0xb78c1b6b, 0x21a19045, 0xb26eb1be, 0x6a366eb4,
+ 0x5748ab2f, 0xbc946e79, 0xc6a376d2, 0x6549c2c8,
+ 0x530ff8ee, 0x468dde7d, 0xd5730a1d, 0x4cd04dc6,
+ 0x2939bbdb, 0xa9ba4650, 0xac9526e8, 0xbe5ee304,
+ 0xa1fad5f0, 0x6a2d519a, 0x63ef8ce2, 0x9a86ee22,
+ 0xc089c2b8, 0x43242ef6, 0xa51e03aa, 0x9cf2d0a4,
+ 0x83c061ba, 0x9be96a4d, 0x8fe51550, 0xba645bd6,
+ 0x2826a2f9, 0xa73a3ae1, 0x4ba99586, 0xef5562e9,
+ 0xc72fefd3, 0xf752f7da, 0x3f046f69, 0x77fa0a59,
+ 0x80e4a915, 0x87b08601, 0x9b09e6ad, 0x3b3ee593,
+ 0xe990fd5a, 0x9e34d797, 0x2cf0b7d9, 0x022b8b51,
+ 0x96d5ac3a, 0x017da67d, 0xd1cf3ed6, 0x7c7d2d28,
+ 0x1f9f25cf, 0xadf2b89b, 0x5ad6b472, 0x5a88f54c,
+ 0xe029ac71, 0xe019a5e6, 0x47b0acfd, 0xed93fa9b,
+ 0xe8d3c48d, 0x283b57cc, 0xf8d56629, 0x79132e28,
+ 0x785f0191, 0xed756055, 0xf7960e44, 0xe3d35e8c,
+ 0x15056dd4, 0x88f46dba, 0x03a16125, 0x0564f0bd,
+ 0xc3eb9e15, 0x3c9057a2, 0x97271aec, 0xa93a072a,
+ 0x1b3f6d9b, 0x1e6321f5, 0xf59c66fb, 0x26dcf319,
+ 0x7533d928, 0xb155fdf5, 0x03563482, 0x8aba3cbb,
+ 0x28517711, 0xc20ad9f8, 0xabcc5167, 0xccad925f,
+ 0x4de81751, 0x3830dc8e, 0x379d5862, 0x9320f991,
+ 0xea7a90c2, 0xfb3e7bce, 0x5121ce64, 0x774fbe32,
+ 0xa8b6e37e, 0xc3293d46, 0x48de5369, 0x6413e680,
+ 0xa2ae0810, 0xdd6db224, 0x69852dfd, 0x09072166,
+ 0xb39a460a, 0x6445c0dd, 0x586cdecf, 0x1c20c8ae,
+ 0x5bbef7dd, 0x1b588d40, 0xccd2017f, 0x6bb4e3bb,
+ 0xdda26a7e, 0x3a59ff45, 0x3e350a44, 0xbcb4cdd5,
+ 0x72eacea8, 0xfa6484bb, 0x8d6612ae, 0xbf3c6f47,
+ 0xd29be463, 0x542f5d9e, 0xaec2771b, 0xf64e6370,
+ 0x740e0d8d, 0xe75b1357, 0xf8721671, 0xaf537d5d,
+ 0x4040cb08, 0x4eb4e2cc, 0x34d2466a, 0x0115af84,
+ 0xe1b00428, 0x95983a1d, 0x06b89fb4, 0xce6ea048,
+ 0x6f3f3b82, 0x3520ab82, 0x011a1d4b, 0x277227f8,
+ 0x611560b1, 0xe7933fdc, 0xbb3a792b, 0x344525bd,
+ 0xa08839e1, 0x51ce794b, 0x2f32c9b7, 0xa01fbac9,
+ 0xe01cc87e, 0xbcc7d1f6, 0xcf0111c3, 0xa1e8aac7,
+ 0x1a908749, 0xd44fbd9a, 0xd0dadecb, 0xd50ada38,
+ 0x0339c32a, 0xc6913667, 0x8df9317c, 0xe0b12b4f,
+ 0xf79e59b7, 0x43f5bb3a, 0xf2d519ff, 0x27d9459c,
+ 0xbf97222c, 0x15e6fc2a, 0x0f91fc71, 0x9b941525,
+ 0xfae59361, 0xceb69ceb, 0xc2a86459, 0x12baa8d1,
+ 0xb6c1075e, 0xe3056a0c, 0x10d25065, 0xcb03a442,
+ 0xe0ec6e0e, 0x1698db3b, 0x4c98a0be, 0x3278e964,
+ 0x9f1f9532, 0xe0d392df, 0xd3a0342b, 0x8971f21e,
+ 0x1b0a7441, 0x4ba3348c, 0xc5be7120, 0xc37632d8,
+ 0xdf359f8d, 0x9b992f2e, 0xe60b6f47, 0x0fe3f11d,
+ 0xe54cda54, 0x1edad891, 0xce6279cf, 0xcd3e7e6f,
+ 0x1618b166, 0xfd2c1d05, 0x848fd2c5, 0xf6fb2299,
+ 0xf523f357, 0xa6327623, 0x93a83531, 0x56cccd02,
+ 0xacf08162, 0x5a75ebb5, 0x6e163697, 0x88d273cc,
+ 0xde966292, 0x81b949d0, 0x4c50901b, 0x71c65614,
+ 0xe6c6c7bd, 0x327a140a, 0x45e1d006, 0xc3f27b9a,
+ 0xc9aa53fd, 0x62a80f00, 0xbb25bfe2, 0x35bdd2f6,
+ 0x71126905, 0xb2040222, 0xb6cbcf7c, 0xcd769c2b,
+ 0x53113ec0, 0x1640e3d3, 0x38abbd60, 0x2547adf0,
+ 0xba38209c, 0xf746ce76, 0x77afa1c5, 0x20756060,
+ 0x85cbfe4e, 0x8ae88dd8, 0x7aaaf9b0, 0x4cf9aa7e,
+ 0x1948c25c, 0x02fb8a8c, 0x01c36ae4, 0xd6ebe1f9,
+ 0x90d4f869, 0xa65cdea0, 0x3f09252d, 0xc208e69f,
+ 0xb74e6132, 0xce77e25b, 0x578fdfe3, 0x3ac372e6,
+};
+
+/*
+ * Round loop unrolling macros; S is a pointer to an S-box array
+ * organized as rows of 4 unsigned longs.
+ */
+#define GET32_3(x) (((x) & 0xff))
+#define GET32_2(x) (((x) >> (8)) & (0xff))
+#define GET32_1(x) (((x) >> (16)) & (0xff))
+#define GET32_0(x) (((x) >> (24)) & (0xff))
+
+#define bf_F(x) (((S[GET32_0(x)] + S[256 + GET32_1(x)]) ^ \
+ S[512 + GET32_2(x)]) + S[768 + GET32_3(x)])
+
+#define ROUND(a, b, n) do { b ^= P[n]; a ^= bf_F(b); } while (0)
+
+/*
+ * The blowfish encipher; processes 64-bit blocks.
+ * NOTE: This function must not perform any endianness conversion;
+ * the callers handle the byte swapping.
+ */
+static inline void encrypt_block(struct bf_ctx *bctx, u32 *dst, u32 *src)
+{
+ const u32 *P = bctx->p;
+ const u32 *S = bctx->s;
+ u32 yl = src[0];
+ u32 yr = src[1];
+
+ ROUND(yr, yl, 0);
+ ROUND(yl, yr, 1);
+ ROUND(yr, yl, 2);
+ ROUND(yl, yr, 3);
+ ROUND(yr, yl, 4);
+ ROUND(yl, yr, 5);
+ ROUND(yr, yl, 6);
+ ROUND(yl, yr, 7);
+ ROUND(yr, yl, 8);
+ ROUND(yl, yr, 9);
+ ROUND(yr, yl, 10);
+ ROUND(yl, yr, 11);
+ ROUND(yr, yl, 12);
+ ROUND(yl, yr, 13);
+ ROUND(yr, yl, 14);
+ ROUND(yl, yr, 15);
+
+ yl ^= P[16];
+ yr ^= P[17];
+
+ dst[0] = yr;
+ dst[1] = yl;
+}
+
+static void bf_encrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ const u32 *in_blk = (const u32 *)src;
+ u32 *const out_blk = (u32 *)dst;
+ u32 in32[2], out32[2];
+
+ in32[0] = be32_to_cpu(in_blk[0]);
+ in32[1] = be32_to_cpu(in_blk[1]);
+ encrypt_block(ctx, out32, in32);
+ out_blk[0] = cpu_to_be32(out32[0]);
+ out_blk[1] = cpu_to_be32(out32[1]);
+}
+
+static void bf_decrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ const u32 *in_blk = (const u32 *)src;
+ u32 *const out_blk = (u32 *)dst;
+ const u32 *P = ((struct bf_ctx *)ctx)->p;
+ const u32 *S = ((struct bf_ctx *)ctx)->s;
+ u32 yl = be32_to_cpu(in_blk[0]);
+ u32 yr = be32_to_cpu(in_blk[1]);
+
+ ROUND(yr, yl, 17);
+ ROUND(yl, yr, 16);
+ ROUND(yr, yl, 15);
+ ROUND(yl, yr, 14);
+ ROUND(yr, yl, 13);
+ ROUND(yl, yr, 12);
+ ROUND(yr, yl, 11);
+ ROUND(yl, yr, 10);
+ ROUND(yr, yl, 9);
+ ROUND(yl, yr, 8);
+ ROUND(yr, yl, 7);
+ ROUND(yl, yr, 6);
+ ROUND(yr, yl, 5);
+ ROUND(yl, yr, 4);
+ ROUND(yr, yl, 3);
+ ROUND(yl, yr, 2);
+
+ yl ^= P[1];
+ yr ^= P[0];
+
+ out_blk[0] = cpu_to_be32(yr);
+ out_blk[1] = cpu_to_be32(yl);
+}
+
+/*
+ * Calculates the blowfish S and P boxes for encryption and decryption.
+ */
+static int bf_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
+{
+ short i, j, count;
+ u32 data[2], temp;
+ u32 *P = ((struct bf_ctx *)ctx)->p;
+ u32 *S = ((struct bf_ctx *)ctx)->s;
+
+ /* Copy the initialization s-boxes */
+ for (i = 0, count = 0; i < 256; i++)
+ for (j = 0; j < 4; j++, count++)
+ S[count] = bf_sbox[count];
+
+ /* Set the p-boxes */
+ for (i = 0; i < 16 + 2; i++)
+ P[i] = bf_pbox[i];
+
+ /* Actual subkey generation */
+ for (j = 0, i = 0; i < 16 + 2; i++) {
+ temp = (((u32 )key[j] << 24) |
+ ((u32 )key[(j + 1) % keylen] << 16) |
+ ((u32 )key[(j + 2) % keylen] << 8) |
+ ((u32 )key[(j + 3) % keylen]));
+
+ P[i] = P[i] ^ temp;
+ j = (j + 4) % keylen;
+ }
+
+ data[0] = 0x00000000;
+ data[1] = 0x00000000;
+
+ for (i = 0; i < 16 + 2; i += 2) {
+ encrypt_block((struct bf_ctx *)ctx, data, data);
+
+ P[i] = data[0];
+ P[i + 1] = data[1];
+ }
+
+ for (i = 0; i < 4; i++) {
+ for (j = 0, count = i * 256; j < 256; j += 2, count += 2) {
+ encrypt_block((struct bf_ctx *)ctx, data, data);
+
+ S[count] = data[0];
+ S[count + 1] = data[1];
+ }
+ }
+
+ /* Bruce says not to bother with the weak key check. */
+ return 0;
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "blowfish",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = BF_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct bf_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = BF_MIN_KEY_SIZE,
+ .cia_max_keysize = BF_MAX_KEY_SIZE,
+ .cia_ivsize = BF_BLOCK_SIZE,
+ .cia_setkey = bf_setkey,
+ .cia_encrypt = bf_encrypt,
+ .cia_decrypt = bf_decrypt } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Blowfish Cipher Algorithm");
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/cipher.c Thu May 8 10:41:38 2003
@@ -0,0 +1,417 @@
+/*
+ * Cryptographic API.
+ *
+ * Cipher operations.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Generic scatterwalk code by Adam J. Richter <adam@yggdrasil.com>.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/crypto.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include <linux/highmem.h>
+#include <asm/scatterlist.h>
+#include "internal.h"
+
+typedef void (cryptfn_t)(void *, u8 *, const u8 *);
+typedef void (procfn_t)(struct crypto_tfm *, u8 *,
+ u8*, cryptfn_t, int enc, void *);
+
+struct scatter_walk {
+ struct scatterlist *sg;
+ struct page *page;
+ void *data;
+ unsigned int len_this_page;
+ unsigned int len_this_segment;
+ unsigned int offset;
+};
+
+enum km_type crypto_km_types[] = {
+ KM_USER0,
+ KM_USER1,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
+};
+
+static inline void xor_64(u8 *a, const u8 *b)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+}
+
+static inline void xor_128(u8 *a, const u8 *b)
+{
+ ((u32 *)a)[0] ^= ((u32 *)b)[0];
+ ((u32 *)a)[1] ^= ((u32 *)b)[1];
+ ((u32 *)a)[2] ^= ((u32 *)b)[2];
+ ((u32 *)a)[3] ^= ((u32 *)b)[3];
+}
+
+
+/* sg_next is defined as an inline routine for now, in case we want
+   to change scatterlist to a linked list later. */
+static inline struct scatterlist *sg_next(struct scatterlist *sg)
+{
+ return sg + 1;
+}
+
+void *which_buf(struct scatter_walk *walk, unsigned int nbytes, void *scratch)
+{
+ if (nbytes <= walk->len_this_page &&
+ (((unsigned long)walk->data) & (PAGE_CACHE_SIZE - 1)) + nbytes <=
+ PAGE_CACHE_SIZE)
+ return walk->data;
+ else
+ return scratch;
+}
+
+static void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
+{
+ if (out)
+ memcpy(sgdata, buf, nbytes);
+ else
+ memcpy(buf, sgdata, nbytes);
+}
+
+static void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg)
+{
+ unsigned int rest_of_page;
+
+ walk->sg = sg;
+
+ walk->page = sg->page;
+ walk->len_this_segment = sg->length;
+
+ rest_of_page = PAGE_CACHE_SIZE - (sg->offset & (PAGE_CACHE_SIZE - 1));
+ walk->len_this_page = min(sg->length, rest_of_page);
+ walk->offset = sg->offset;
+}
+
+static void scatterwalk_map(struct scatter_walk *walk, int out)
+{
+ walk->data = crypto_kmap(walk->page, out) + walk->offset;
+}
+
+static void scatter_page_done(struct scatter_walk *walk, int out,
+ unsigned int more)
+{
+	/* walk->data may be pointing at the first byte of the next page;
+	   however, we know we transferred at least one byte, so
+	   walk->data - 1 will be a virtual address in the mapped page. */
+
+ if (out)
+ flush_dcache_page(walk->page);
+
+ if (more) {
+ walk->len_this_segment -= walk->len_this_page;
+
+ if (walk->len_this_segment) {
+ walk->page++;
+ walk->len_this_page = min(walk->len_this_segment,
+ (unsigned)PAGE_CACHE_SIZE);
+ walk->offset = 0;
+ }
+ else
+ scatterwalk_start(walk, sg_next(walk->sg));
+ }
+}
+
+static void scatter_done(struct scatter_walk *walk, int out, int more)
+{
+ crypto_kunmap(walk->data, out);
+ if (walk->len_this_page == 0 || !more)
+ scatter_page_done(walk, out, more);
+}
+
+/*
+ * Do not call this unless the total length of all of the fragments
+ * has been verified to be a multiple of the block size.
+ */
+static int copy_chunks(void *buf, struct scatter_walk *walk,
+ size_t nbytes, int out)
+{
+ if (buf != walk->data) {
+ while (nbytes > walk->len_this_page) {
+ memcpy_dir(buf, walk->data, walk->len_this_page, out);
+ buf += walk->len_this_page;
+ nbytes -= walk->len_this_page;
+
+ crypto_kunmap(walk->data, out);
+ scatter_page_done(walk, out, 1);
+ scatterwalk_map(walk, out);
+ }
+
+ memcpy_dir(buf, walk->data, nbytes, out);
+ }
+
+ walk->offset += nbytes;
+ walk->len_this_page -= nbytes;
+ walk->len_this_segment -= nbytes;
+ return 0;
+}
+
+/*
+ * Generic encrypt/decrypt wrapper for ciphers, handles operations across
+ * multiple page boundaries by using temporary blocks. In user context,
+ * the kernel is given a chance to schedule us once per block.
+ */
+static int crypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, cryptfn_t crfn,
+ procfn_t prfn, int enc, void *info)
+{
+ struct scatter_walk walk_in, walk_out;
+ const unsigned int bsize = crypto_tfm_alg_blocksize(tfm);
+ u8 tmp_src[nbytes > src->length ? bsize : 0];
+ u8 tmp_dst[nbytes > dst->length ? bsize : 0];
+
+ if (!nbytes)
+ return 0;
+
+ if (nbytes % bsize) {
+ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_BLOCK_LEN;
+ return -EINVAL;
+ }
+
+ scatterwalk_start(&walk_in, src);
+ scatterwalk_start(&walk_out, dst);
+
+ for(;;) {
+ u8 *src_p, *dst_p;
+
+ scatterwalk_map(&walk_in, 0);
+ scatterwalk_map(&walk_out, 1);
+ src_p = which_buf(&walk_in, bsize, tmp_src);
+ dst_p = which_buf(&walk_out, bsize, tmp_dst);
+
+ nbytes -= bsize;
+
+ copy_chunks(src_p, &walk_in, bsize, 0);
+
+ prfn(tfm, dst_p, src_p, crfn, enc, info);
+
+ scatter_done(&walk_in, 0, nbytes);
+
+ copy_chunks(dst_p, &walk_out, bsize, 1);
+ scatter_done(&walk_out, 1, nbytes);
+
+ if (!nbytes)
+ return 0;
+
+ crypto_yield(tfm);
+ }
+}
+
+static void cbc_process(struct crypto_tfm *tfm,
+ u8 *dst, u8 *src, cryptfn_t fn, int enc, void *info)
+{
+ u8 *iv = info;
+
+ /* Null encryption */
+ if (!iv)
+ return;
+
+ if (enc) {
+ tfm->crt_u.cipher.cit_xor_block(iv, src);
+ fn(crypto_tfm_ctx(tfm), dst, iv);
+ memcpy(iv, dst, crypto_tfm_alg_blocksize(tfm));
+ } else {
+ const int need_stack = (src == dst);
+ u8 stack[need_stack ? crypto_tfm_alg_blocksize(tfm) : 0];
+ u8 *buf = need_stack ? stack : dst;
+
+ fn(crypto_tfm_ctx(tfm), buf, src);
+ tfm->crt_u.cipher.cit_xor_block(buf, iv);
+ memcpy(iv, src, crypto_tfm_alg_blocksize(tfm));
+ if (buf != dst)
+ memcpy(dst, buf, crypto_tfm_alg_blocksize(tfm));
+ }
+}
+
+static void ecb_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
+ cryptfn_t fn, int enc, void *info)
+{
+ fn(crypto_tfm_ctx(tfm), dst, src);
+}
+
+static int setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen)
+{
+ struct cipher_alg *cia = &tfm->__crt_alg->cra_cipher;
+
+ if (keylen < cia->cia_min_keysize || keylen > cia->cia_max_keysize) {
+ tfm->crt_flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ } else
+ return cia->cia_setkey(crypto_tfm_ctx(tfm), key, keylen,
+ &tfm->crt_flags);
+}
+
+static int ecb_encrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src, unsigned int nbytes)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_encrypt,
+ ecb_process, 1, NULL);
+}
+
+static int ecb_decrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_decrypt,
+ ecb_process, 1, NULL);
+}
+
+static int cbc_encrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_encrypt,
+ cbc_process, 1, tfm->crt_cipher.cit_iv);
+}
+
+static int cbc_encrypt_iv(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_encrypt,
+ cbc_process, 1, iv);
+}
+
+static int cbc_decrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_decrypt,
+ cbc_process, 0, tfm->crt_cipher.cit_iv);
+}
+
+static int cbc_decrypt_iv(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv)
+{
+ return crypt(tfm, dst, src, nbytes,
+ tfm->__crt_alg->cra_cipher.cia_decrypt,
+ cbc_process, 0, iv);
+}
+
+static int nocrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ return -ENOSYS;
+}
+
+static int nocrypt_iv(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv)
+{
+ return -ENOSYS;
+}
+
+int crypto_init_cipher_flags(struct crypto_tfm *tfm, u32 flags)
+{
+ u32 mode = flags & CRYPTO_TFM_MODE_MASK;
+
+ tfm->crt_cipher.cit_mode = mode ? mode : CRYPTO_TFM_MODE_ECB;
+ if (flags & CRYPTO_TFM_REQ_WEAK_KEY)
+ tfm->crt_flags = CRYPTO_TFM_REQ_WEAK_KEY;
+
+ return 0;
+}
+
+int crypto_init_cipher_ops(struct crypto_tfm *tfm)
+{
+ int ret = 0;
+ struct crypto_alg *alg = tfm->__crt_alg;
+ struct cipher_tfm *ops = &tfm->crt_cipher;
+
+ ops->cit_setkey = setkey;
+
+ switch (tfm->crt_cipher.cit_mode) {
+ case CRYPTO_TFM_MODE_ECB:
+ ops->cit_encrypt = ecb_encrypt;
+ ops->cit_decrypt = ecb_decrypt;
+ break;
+
+ case CRYPTO_TFM_MODE_CBC:
+ ops->cit_encrypt = cbc_encrypt;
+ ops->cit_decrypt = cbc_decrypt;
+ ops->cit_encrypt_iv = cbc_encrypt_iv;
+ ops->cit_decrypt_iv = cbc_decrypt_iv;
+ break;
+
+ case CRYPTO_TFM_MODE_CFB:
+ ops->cit_encrypt = nocrypt;
+ ops->cit_decrypt = nocrypt;
+ ops->cit_encrypt_iv = nocrypt_iv;
+ ops->cit_decrypt_iv = nocrypt_iv;
+ break;
+
+ case CRYPTO_TFM_MODE_CTR:
+ ops->cit_encrypt = nocrypt;
+ ops->cit_decrypt = nocrypt;
+ ops->cit_encrypt_iv = nocrypt_iv;
+ ops->cit_decrypt_iv = nocrypt_iv;
+ break;
+
+ default:
+ BUG();
+ }
+
+ if (alg->cra_cipher.cia_ivsize &&
+ ops->cit_mode != CRYPTO_TFM_MODE_ECB) {
+
+ switch (crypto_tfm_alg_blocksize(tfm)) {
+ case 8:
+ ops->cit_xor_block = xor_64;
+ break;
+
+ case 16:
+ ops->cit_xor_block = xor_128;
+ break;
+
+ default:
+ printk(KERN_WARNING "%s: block size %u not supported\n",
+ crypto_tfm_alg_name(tfm),
+ crypto_tfm_alg_blocksize(tfm));
+ ret = -EINVAL;
+ goto out;
+ }
+
+ ops->cit_iv = kmalloc(alg->cra_cipher.cia_ivsize, GFP_KERNEL);
+ if (ops->cit_iv == NULL)
+ ret = -ENOMEM;
+ }
+
+out:
+ return ret;
+}
+
+void crypto_exit_cipher_ops(struct crypto_tfm *tfm)
+{
+ if (tfm->crt_cipher.cit_iv)
+ kfree(tfm->crt_cipher.cit_iv);
+}
diff -Nru a/crypto/compress.c b/crypto/compress.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/compress.c Thu May 8 10:41:38 2003
@@ -0,0 +1,63 @@
+/*
+ * Cryptographic API.
+ *
+ * Compression operations.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/types.h>
+#include <linux/crypto.h>
+#include <linux/errno.h>
+#include <asm/scatterlist.h>
+#include <linux/string.h>
+#include "internal.h"
+
+static int crypto_compress(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ return tfm->__crt_alg->cra_compress.coa_compress(crypto_tfm_ctx(tfm),
+ src, slen, dst,
+ dlen);
+}
+
+static int crypto_decompress(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ return tfm->__crt_alg->cra_compress.coa_decompress(crypto_tfm_ctx(tfm),
+ src, slen, dst,
+ dlen);
+}
+
+int crypto_init_compress_flags(struct crypto_tfm *tfm, u32 flags)
+{
+ return flags ? -EINVAL : 0;
+}
+
+int crypto_init_compress_ops(struct crypto_tfm *tfm)
+{
+ int ret = 0;
+ struct compress_tfm *ops = &tfm->crt_compress;
+
+ ret = tfm->__crt_alg->cra_compress.coa_init(crypto_tfm_ctx(tfm));
+ if (ret)
+ goto out;
+
+ ops->cot_compress = crypto_compress;
+ ops->cot_decompress = crypto_decompress;
+
+out:
+ return ret;
+}
+
+void crypto_exit_compress_ops(struct crypto_tfm *tfm)
+{
+ tfm->__crt_alg->cra_compress.coa_exit(crypto_tfm_ctx(tfm));
+}
diff -Nru a/crypto/crypto_null.c b/crypto/crypto_null.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/crypto_null.c Thu May 8 10:41:38 2003
@@ -0,0 +1,134 @@
+/*
+ * Cryptographic API.
+ *
+ * Null algorithms, aka Much Ado About Nothing.
+ *
+ * These are needed for IPsec, and may be useful in general for
+ * testing & debugging.
+ *
+ * The null cipher is compliant with RFC2410.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define NULL_KEY_SIZE 0
+#define NULL_BLOCK_SIZE 1
+#define NULL_DIGEST_SIZE 0
+
+static int null_compress(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{ return 0; }
+
+static int null_decompress(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{ return 0; }
+
+static void null_init(void *ctx)
+{ }
+
+static void null_update(void *ctx, const u8 *data, unsigned int len)
+{ }
+
+static void null_final(void *ctx, u8 *out)
+{ }
+
+static int null_setkey(void *ctx, const u8 *key,
+ unsigned int keylen, u32 *flags)
+{ return 0; }
+
+static void null_encrypt(void *ctx, u8 *dst, const u8 *src)
+{ }
+
+static void null_decrypt(void *ctx, u8 *dst, const u8 *src)
+{ }
+
+static struct crypto_alg compress_null = {
+ .cra_name = "compress_null",
+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
+ .cra_blocksize = NULL_BLOCK_SIZE,
+ .cra_ctxsize = 0,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(compress_null.cra_list),
+ .cra_u = { .compress = {
+ .coa_compress = null_compress,
+ .coa_decompress = null_decompress } }
+};
+
+static struct crypto_alg digest_null = {
+ .cra_name = "digest_null",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = NULL_BLOCK_SIZE,
+ .cra_ctxsize = 0,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(digest_null.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = NULL_DIGEST_SIZE,
+ .dia_init = null_init,
+ .dia_update = null_update,
+ .dia_final = null_final } }
+};
+
+static struct crypto_alg cipher_null = {
+ .cra_name = "cipher_null",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = NULL_BLOCK_SIZE,
+ .cra_ctxsize = 0,
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(cipher_null.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = NULL_KEY_SIZE,
+ .cia_max_keysize = NULL_KEY_SIZE,
+ .cia_ivsize = 0,
+ .cia_setkey = null_setkey,
+ .cia_encrypt = null_encrypt,
+ .cia_decrypt = null_decrypt } }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = crypto_register_alg(&cipher_null);
+ if (ret < 0)
+ goto out;
+
+ ret = crypto_register_alg(&digest_null);
+ if (ret < 0) {
+ crypto_unregister_alg(&cipher_null);
+ goto out;
+ }
+
+ ret = crypto_register_alg(&compress_null);
+ if (ret < 0) {
+ crypto_unregister_alg(&digest_null);
+ crypto_unregister_alg(&cipher_null);
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&compress_null);
+ crypto_unregister_alg(&digest_null);
+ crypto_unregister_alg(&cipher_null);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Null Cryptographic Algorithms");
diff -Nru a/crypto/deflate.c b/crypto/deflate.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/deflate.c Thu May 8 10:41:38 2003
@@ -0,0 +1,236 @@
+/*
+ * Cryptographic API.
+ *
+ * Deflate algorithm (RFC 1951), implemented here primarily for use
+ * by IPCOMP (RFC 3173 & RFC 2394).
+ *
+ * Copyright (c) 2003 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * FIXME: deflate transforms will require up to a total of about 436k of kernel
+ * memory on i386 (390k for compression, the rest for decompression), as the
+ * current zlib kernel code uses a worst case pre-allocation system by default.
+ * This needs to be fixed so that the amount of memory required is properly
+ * related to the winbits and memlevel parameters.
+ *
+ * The default winbits of 11 should suit most packets, and it may be something
+ * to configure on a per-tfm basis in the future.
+ *
+ * Currently, compression history is not maintained between tfm calls, as
+ * it is not needed for IPCOMP and keeps the code simpler. It can be
+ * implemented if someone wants it.
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <linux/zlib.h>
+#include <linux/vmalloc.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/net.h>
+#include <linux/slab.h>
+
+#define DEFLATE_DEF_LEVEL Z_DEFAULT_COMPRESSION
+#define DEFLATE_DEF_WINBITS 11
+#define DEFLATE_DEF_MEMLEVEL MAX_MEM_LEVEL
+
+struct deflate_ctx {
+ int comp_initialized;
+ int decomp_initialized;
+ struct z_stream_s comp_stream;
+ struct z_stream_s decomp_stream;
+};
+
+static inline int deflate_gfp(void)
+{
+ return in_softirq() ? GFP_ATOMIC : GFP_KERNEL;
+}
+
+static int deflate_init(void *ctx)
+{
+ return 0;
+}
+
+static void deflate_exit(void *ctx)
+{
+ struct deflate_ctx *dctx = ctx;
+
+ if (dctx->comp_initialized)
+ vfree(dctx->comp_stream.workspace);
+ if (dctx->decomp_initialized)
+ kfree(dctx->decomp_stream.workspace);
+}
+
+/*
+ * Lazy initialization to keep the interface simple without allocating
+ * unneeded workspaces; these routines may therefore be called in
+ * softirq context.
+ */
+static int deflate_comp_init(struct deflate_ctx *ctx)
+{
+ int ret = 0;
+ struct z_stream_s *stream = &ctx->comp_stream;
+
+ stream->workspace = __vmalloc(zlib_deflate_workspacesize(),
+ deflate_gfp()|__GFP_HIGHMEM,
+ PAGE_KERNEL);
+	if (!stream->workspace) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	memset(stream->workspace, 0, zlib_deflate_workspacesize());
+ ret = zlib_deflateInit2(stream, DEFLATE_DEF_LEVEL, Z_DEFLATED,
+ -DEFLATE_DEF_WINBITS, DEFLATE_DEF_MEMLEVEL,
+ Z_DEFAULT_STRATEGY);
+ if (ret != Z_OK) {
+ ret = -EINVAL;
+ goto out_free;
+ }
+ ctx->comp_initialized = 1;
+out:
+ return ret;
+out_free:
+ vfree(stream->workspace);
+ goto out;
+}
+
+static int deflate_decomp_init(struct deflate_ctx *ctx)
+{
+ int ret = 0;
+ struct z_stream_s *stream = &ctx->decomp_stream;
+
+ stream->workspace = kmalloc(zlib_inflate_workspacesize(),
+ deflate_gfp());
+	if (!stream->workspace) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	memset(stream->workspace, 0, zlib_inflate_workspacesize());
+ ret = zlib_inflateInit2(stream, -DEFLATE_DEF_WINBITS);
+ if (ret != Z_OK) {
+ ret = -EINVAL;
+ goto out_free;
+ }
+ ctx->decomp_initialized = 1;
+out:
+ return ret;
+out_free:
+ kfree(stream->workspace);
+ goto out;
+}
+
+static int deflate_compress(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ int ret = 0;
+ struct deflate_ctx *dctx = ctx;
+ struct z_stream_s *stream = &dctx->comp_stream;
+
+ if (!dctx->comp_initialized) {
+ ret = deflate_comp_init(dctx);
+ if (ret)
+ goto out;
+ }
+
+ ret = zlib_deflateReset(stream);
+ if (ret != Z_OK) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ stream->next_in = (u8 *)src;
+ stream->avail_in = slen;
+ stream->next_out = (u8 *)dst;
+ stream->avail_out = *dlen;
+
+ ret = zlib_deflate(stream, Z_FINISH);
+ if (ret != Z_STREAM_END) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = 0;
+ *dlen = stream->total_out;
+out:
+ return ret;
+}
+
+static int deflate_decompress(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+
+ int ret = 0;
+ struct deflate_ctx *dctx = ctx;
+ struct z_stream_s *stream = &dctx->decomp_stream;
+
+ if (!dctx->decomp_initialized) {
+ ret = deflate_decomp_init(dctx);
+ if (ret)
+ goto out;
+ }
+
+ ret = zlib_inflateReset(stream);
+ if (ret != Z_OK) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ stream->next_in = (u8 *)src;
+ stream->avail_in = slen;
+ stream->next_out = (u8 *)dst;
+ stream->avail_out = *dlen;
+
+ ret = zlib_inflate(stream, Z_SYNC_FLUSH);
+ /*
+ * Work around a bug in zlib, which sometimes wants to taste an extra
+ * byte when being used in the (undocumented) raw deflate mode.
+ * (From USAGI).
+ */
+ if (ret == Z_OK && !stream->avail_in && stream->avail_out) {
+ u8 zerostuff = 0;
+ stream->next_in = &zerostuff;
+ stream->avail_in = 1;
+ ret = zlib_inflate(stream, Z_FINISH);
+ }
+ if (ret != Z_STREAM_END) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = 0;
+ *dlen = stream->total_out;
+out:
+ return ret;
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "deflate",
+ .cra_flags = CRYPTO_ALG_TYPE_COMPRESS,
+ .cra_ctxsize = sizeof(struct deflate_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .compress = {
+ .coa_init = deflate_init,
+ .coa_exit = deflate_exit,
+ .coa_compress = deflate_compress,
+ .coa_decompress = deflate_decompress } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Deflate Compression Algorithm for IPCOMP");
+MODULE_AUTHOR("James Morris <jmorris@intercode.com.au>");
+
diff -Nru a/crypto/des.c b/crypto/des.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/des.c Thu May 8 10:41:38 2003
@@ -0,0 +1,1299 @@
+/*
+ * Cryptographic API.
+ *
+ * DES & Triple DES EDE Cipher Algorithms.
+ *
+ * Originally released as descore by Dana L. How <how@isl.stanford.edu>.
+ * Modified by Raimar Falke <rf13@inf.tu-dresden.de> for the Linux-Kernel.
+ * Derived from Cryptoapi and Nettle implementations, adapted for in-place
+ * scatterlist interface. Changed LGPL to GPL per section 3 of the LGPL.
+ *
+ * Copyright (c) 1992 Dana L. How.
+ * Copyright (c) Raimar Falke <rf13@inf.tu-dresden.de>
+ * Copyright (c) Gisle Sælensminde <gisle@ii.uib.no>
+ * Copyright (C) 2001 Niels Möller.
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define DES_KEY_SIZE 8
+#define DES_EXPKEY_WORDS 32
+#define DES_BLOCK_SIZE 8
+
+#define DES3_EDE_KEY_SIZE (3 * DES_KEY_SIZE)
+#define DES3_EDE_EXPKEY_WORDS (3 * DES_EXPKEY_WORDS)
+#define DES3_EDE_BLOCK_SIZE DES_BLOCK_SIZE
+
+#define ROR(d,c,o) ((d) = (d) >> (c) | (d) << (o))
+
+struct des_ctx {
+ u8 iv[DES_BLOCK_SIZE];
+ u32 expkey[DES_EXPKEY_WORDS];
+};
+
+struct des3_ede_ctx {
+ u8 iv[DES_BLOCK_SIZE];
+ u32 expkey[DES3_EDE_EXPKEY_WORDS];
+};
+
+static const u32 des_keymap[] = {
+ 0x02080008, 0x02082000, 0x00002008, 0x00000000,
+ 0x02002000, 0x00080008, 0x02080000, 0x02082008,
+ 0x00000008, 0x02000000, 0x00082000, 0x00002008,
+ 0x00082008, 0x02002008, 0x02000008, 0x02080000,
+ 0x00002000, 0x00082008, 0x00080008, 0x02002000,
+ 0x02082008, 0x02000008, 0x00000000, 0x00082000,
+ 0x02000000, 0x00080000, 0x02002008, 0x02080008,
+ 0x00080000, 0x00002000, 0x02082000, 0x00000008,
+ 0x00080000, 0x00002000, 0x02000008, 0x02082008,
+ 0x00002008, 0x02000000, 0x00000000, 0x00082000,
+ 0x02080008, 0x02002008, 0x02002000, 0x00080008,
+ 0x02082000, 0x00000008, 0x00080008, 0x02002000,
+ 0x02082008, 0x00080000, 0x02080000, 0x02000008,
+ 0x00082000, 0x00002008, 0x02002008, 0x02080000,
+ 0x00000008, 0x02082000, 0x00082008, 0x00000000,
+ 0x02000000, 0x02080008, 0x00002000, 0x00082008,
+
+ 0x08000004, 0x00020004, 0x00000000, 0x08020200,
+ 0x00020004, 0x00000200, 0x08000204, 0x00020000,
+ 0x00000204, 0x08020204, 0x00020200, 0x08000000,
+ 0x08000200, 0x08000004, 0x08020000, 0x00020204,
+ 0x00020000, 0x08000204, 0x08020004, 0x00000000,
+ 0x00000200, 0x00000004, 0x08020200, 0x08020004,
+ 0x08020204, 0x08020000, 0x08000000, 0x00000204,
+ 0x00000004, 0x00020200, 0x00020204, 0x08000200,
+ 0x00000204, 0x08000000, 0x08000200, 0x00020204,
+ 0x08020200, 0x00020004, 0x00000000, 0x08000200,
+ 0x08000000, 0x00000200, 0x08020004, 0x00020000,
+ 0x00020004, 0x08020204, 0x00020200, 0x00000004,
+ 0x08020204, 0x00020200, 0x00020000, 0x08000204,
+ 0x08000004, 0x08020000, 0x00020204, 0x00000000,
+ 0x00000200, 0x08000004, 0x08000204, 0x08020200,
+ 0x08020000, 0x00000204, 0x00000004, 0x08020004,
+
+ 0x80040100, 0x01000100, 0x80000000, 0x81040100,
+ 0x00000000, 0x01040000, 0x81000100, 0x80040000,
+ 0x01040100, 0x81000000, 0x01000000, 0x80000100,
+ 0x81000000, 0x80040100, 0x00040000, 0x01000000,
+ 0x81040000, 0x00040100, 0x00000100, 0x80000000,
+ 0x00040100, 0x81000100, 0x01040000, 0x00000100,
+ 0x80000100, 0x00000000, 0x80040000, 0x01040100,
+ 0x01000100, 0x81040000, 0x81040100, 0x00040000,
+ 0x81040000, 0x80000100, 0x00040000, 0x81000000,
+ 0x00040100, 0x01000100, 0x80000000, 0x01040000,
+ 0x81000100, 0x00000000, 0x00000100, 0x80040000,
+ 0x00000000, 0x81040000, 0x01040100, 0x00000100,
+ 0x01000000, 0x81040100, 0x80040100, 0x00040000,
+ 0x81040100, 0x80000000, 0x01000100, 0x80040100,
+ 0x80040000, 0x00040100, 0x01040000, 0x81000100,
+ 0x80000100, 0x01000000, 0x81000000, 0x01040100,
+
+ 0x04010801, 0x00000000, 0x00010800, 0x04010000,
+ 0x04000001, 0x00000801, 0x04000800, 0x00010800,
+ 0x00000800, 0x04010001, 0x00000001, 0x04000800,
+ 0x00010001, 0x04010800, 0x04010000, 0x00000001,
+ 0x00010000, 0x04000801, 0x04010001, 0x00000800,
+ 0x00010801, 0x04000000, 0x00000000, 0x00010001,
+ 0x04000801, 0x00010801, 0x04010800, 0x04000001,
+ 0x04000000, 0x00010000, 0x00000801, 0x04010801,
+ 0x00010001, 0x04010800, 0x04000800, 0x00010801,
+ 0x04010801, 0x00010001, 0x04000001, 0x00000000,
+ 0x04000000, 0x00000801, 0x00010000, 0x04010001,
+ 0x00000800, 0x04000000, 0x00010801, 0x04000801,
+ 0x04010800, 0x00000800, 0x00000000, 0x04000001,
+ 0x00000001, 0x04010801, 0x00010800, 0x04010000,
+ 0x04010001, 0x00010000, 0x00000801, 0x04000800,
+ 0x04000801, 0x00000001, 0x04010000, 0x00010800,
+
+ 0x00000400, 0x00000020, 0x00100020, 0x40100000,
+ 0x40100420, 0x40000400, 0x00000420, 0x00000000,
+ 0x00100000, 0x40100020, 0x40000020, 0x00100400,
+ 0x40000000, 0x00100420, 0x00100400, 0x40000020,
+ 0x40100020, 0x00000400, 0x40000400, 0x40100420,
+ 0x00000000, 0x00100020, 0x40100000, 0x00000420,
+ 0x40100400, 0x40000420, 0x00100420, 0x40000000,
+ 0x40000420, 0x40100400, 0x00000020, 0x00100000,
+ 0x40000420, 0x00100400, 0x40100400, 0x40000020,
+ 0x00000400, 0x00000020, 0x00100000, 0x40100400,
+ 0x40100020, 0x40000420, 0x00000420, 0x00000000,
+ 0x00000020, 0x40100000, 0x40000000, 0x00100020,
+ 0x00000000, 0x40100020, 0x00100020, 0x00000420,
+ 0x40000020, 0x00000400, 0x40100420, 0x00100000,
+ 0x00100420, 0x40000000, 0x40000400, 0x40100420,
+ 0x40100000, 0x00100420, 0x00100400, 0x40000400,
+
+ 0x00800000, 0x00001000, 0x00000040, 0x00801042,
+ 0x00801002, 0x00800040, 0x00001042, 0x00801000,
+ 0x00001000, 0x00000002, 0x00800002, 0x00001040,
+ 0x00800042, 0x00801002, 0x00801040, 0x00000000,
+ 0x00001040, 0x00800000, 0x00001002, 0x00000042,
+ 0x00800040, 0x00001042, 0x00000000, 0x00800002,
+ 0x00000002, 0x00800042, 0x00801042, 0x00001002,
+ 0x00801000, 0x00000040, 0x00000042, 0x00801040,
+ 0x00801040, 0x00800042, 0x00001002, 0x00801000,
+ 0x00001000, 0x00000002, 0x00800002, 0x00800040,
+ 0x00800000, 0x00001040, 0x00801042, 0x00000000,
+ 0x00001042, 0x00800000, 0x00000040, 0x00001002,
+ 0x00800042, 0x00000040, 0x00000000, 0x00801042,
+ 0x00801002, 0x00801040, 0x00000042, 0x00001000,
+ 0x00001040, 0x00801002, 0x00800040, 0x00000042,
+ 0x00000002, 0x00001042, 0x00801000, 0x00800002,
+
+ 0x10400000, 0x00404010, 0x00000010, 0x10400010,
+ 0x10004000, 0x00400000, 0x10400010, 0x00004010,
+ 0x00400010, 0x00004000, 0x00404000, 0x10000000,
+ 0x10404010, 0x10000010, 0x10000000, 0x10404000,
+ 0x00000000, 0x10004000, 0x00404010, 0x00000010,
+ 0x10000010, 0x10404010, 0x00004000, 0x10400000,
+ 0x10404000, 0x00400010, 0x10004010, 0x00404000,
+ 0x00004010, 0x00000000, 0x00400000, 0x10004010,
+ 0x00404010, 0x00000010, 0x10000000, 0x00004000,
+ 0x10000010, 0x10004000, 0x00404000, 0x10400010,
+ 0x00000000, 0x00404010, 0x00004010, 0x10404000,
+ 0x10004000, 0x00400000, 0x10404010, 0x10000000,
+ 0x10004010, 0x10400000, 0x00400000, 0x10404010,
+ 0x00004000, 0x00400010, 0x10400010, 0x00004010,
+ 0x00400010, 0x00000000, 0x10404000, 0x10000010,
+ 0x10400000, 0x10004010, 0x00000010, 0x00404000,
+
+ 0x00208080, 0x00008000, 0x20200000, 0x20208080,
+ 0x00200000, 0x20008080, 0x20008000, 0x20200000,
+ 0x20008080, 0x00208080, 0x00208000, 0x20000080,
+ 0x20200080, 0x00200000, 0x00000000, 0x20008000,
+ 0x00008000, 0x20000000, 0x00200080, 0x00008080,
+ 0x20208080, 0x00208000, 0x20000080, 0x00200080,
+ 0x20000000, 0x00000080, 0x00008080, 0x20208000,
+ 0x00000080, 0x20200080, 0x20208000, 0x00000000,
+ 0x00000000, 0x20208080, 0x00200080, 0x20008000,
+ 0x00208080, 0x00008000, 0x20000080, 0x00200080,
+ 0x20208000, 0x00000080, 0x00008080, 0x20200000,
+ 0x20008080, 0x20000000, 0x20200000, 0x00208000,
+ 0x20208080, 0x00008080, 0x00208000, 0x20200080,
+ 0x00200000, 0x20000080, 0x20008000, 0x00000000,
+ 0x00008000, 0x00200000, 0x20200080, 0x00208080,
+ 0x20000000, 0x20208000, 0x00000080, 0x20008080,
+};
+
+static const u8 rotors[] = {
+ 34, 13, 5, 46, 47, 18, 32, 41, 11, 53, 33, 20,
+ 14, 36, 30, 24, 49, 2, 15, 37, 42, 50, 0, 21,
+ 38, 48, 6, 26, 39, 4, 52, 25, 12, 27, 31, 40,
+ 1, 17, 28, 29, 23, 51, 35, 7, 3, 22, 9, 43,
+
+ 41, 20, 12, 53, 54, 25, 39, 48, 18, 31, 40, 27,
+ 21, 43, 37, 0, 1, 9, 22, 44, 49, 2, 7, 28,
+ 45, 55, 13, 33, 46, 11, 6, 32, 19, 34, 38, 47,
+ 8, 24, 35, 36, 30, 3, 42, 14, 10, 29, 16, 50,
+
+ 55, 34, 26, 38, 11, 39, 53, 5, 32, 45, 54, 41,
+ 35, 2, 51, 14, 15, 23, 36, 3, 8, 16, 21, 42,
+ 6, 12, 27, 47, 31, 25, 20, 46, 33, 48, 52, 4,
+ 22, 7, 49, 50, 44, 17, 1, 28, 24, 43, 30, 9,
+
+ 12, 48, 40, 52, 25, 53, 38, 19, 46, 6, 11, 55,
+ 49, 16, 10, 28, 29, 37, 50, 17, 22, 30, 35, 1,
+ 20, 26, 41, 4, 45, 39, 34, 31, 47, 5, 13, 18,
+ 36, 21, 8, 9, 3, 0, 15, 42, 7, 2, 44, 23,
+
+ 26, 5, 54, 13, 39, 38, 52, 33, 31, 20, 25, 12,
+ 8, 30, 24, 42, 43, 51, 9, 0, 36, 44, 49, 15,
+ 34, 40, 55, 18, 6, 53, 48, 45, 4, 19, 27, 32,
+ 50, 35, 22, 23, 17, 14, 29, 1, 21, 16, 3, 37,
+
+ 40, 19, 11, 27, 53, 52, 13, 47, 45, 34, 39, 26,
+ 22, 44, 7, 1, 2, 10, 23, 14, 50, 3, 8, 29,
+ 48, 54, 12, 32, 20, 38, 5, 6, 18, 33, 41, 46,
+ 9, 49, 36, 37, 0, 28, 43, 15, 35, 30, 17, 51,
+
+ 54, 33, 25, 41, 38, 13, 27, 4, 6, 48, 53, 40,
+ 36, 3, 21, 15, 16, 24, 37, 28, 9, 17, 22, 43,
+ 5, 11, 26, 46, 34, 52, 19, 20, 32, 47, 55, 31,
+ 23, 8, 50, 51, 14, 42, 2, 29, 49, 44, 0, 10,
+
+ 11, 47, 39, 55, 52, 27, 41, 18, 20, 5, 38, 54,
+ 50, 17, 35, 29, 30, 7, 51, 42, 23, 0, 36, 2,
+ 19, 25, 40, 31, 48, 13, 33, 34, 46, 4, 12, 45,
+ 37, 22, 9, 10, 28, 1, 16, 43, 8, 3, 14, 24,
+
+ 18, 54, 46, 5, 6, 34, 48, 25, 27, 12, 45, 4,
+ 2, 24, 42, 36, 37, 14, 3, 49, 30, 7, 43, 9,
+ 26, 32, 47, 38, 55, 20, 40, 41, 53, 11, 19, 52,
+ 44, 29, 16, 17, 35, 8, 23, 50, 15, 10, 21, 0,
+
+ 32, 11, 31, 19, 20, 48, 5, 39, 41, 26, 6, 18,
+ 16, 7, 1, 50, 51, 28, 17, 8, 44, 21, 2, 23,
+ 40, 46, 4, 52, 12, 34, 54, 55, 38, 25, 33, 13,
+ 3, 43, 30, 0, 49, 22, 37, 9, 29, 24, 35, 14,
+
+ 46, 25, 45, 33, 34, 5, 19, 53, 55, 40, 20, 32,
+ 30, 21, 15, 9, 10, 42, 0, 22, 3, 35, 16, 37,
+ 54, 31, 18, 13, 26, 48, 11, 12, 52, 39, 47, 27,
+ 17, 2, 44, 14, 8, 36, 51, 23, 43, 7, 49, 28,
+
+ 31, 39, 6, 47, 48, 19, 33, 38, 12, 54, 34, 46,
+ 44, 35, 29, 23, 24, 1, 14, 36, 17, 49, 30, 51,
+ 11, 45, 32, 27, 40, 5, 25, 26, 13, 53, 4, 41,
+ 0, 16, 3, 28, 22, 50, 10, 37, 2, 21, 8, 42,
+
+ 45, 53, 20, 4, 5, 33, 47, 52, 26, 11, 48, 31,
+ 3, 49, 43, 37, 7, 15, 28, 50, 0, 8, 44, 10,
+ 25, 6, 46, 41, 54, 19, 39, 40, 27, 38, 18, 55,
+ 14, 30, 17, 42, 36, 9, 24, 51, 16, 35, 22, 1,
+
+ 6, 38, 34, 18, 19, 47, 4, 13, 40, 25, 5, 45,
+ 17, 8, 2, 51, 21, 29, 42, 9, 14, 22, 3, 24,
+ 39, 20, 31, 55, 11, 33, 53, 54, 41, 52, 32, 12,
+ 28, 44, 0, 1, 50, 23, 7, 10, 30, 49, 36, 15,
+
+ 20, 52, 48, 32, 33, 4, 18, 27, 54, 39, 19, 6,
+ 0, 22, 16, 10, 35, 43, 1, 23, 28, 36, 17, 7,
+ 53, 34, 45, 12, 25, 47, 38, 11, 55, 13, 46, 26,
+ 42, 3, 14, 15, 9, 37, 21, 24, 44, 8, 50, 29,
+
+ 27, 6, 55, 39, 40, 11, 25, 34, 4, 46, 26, 13,
+ 7, 29, 23, 17, 42, 50, 8, 30, 35, 43, 24, 14,
+ 31, 41, 52, 19, 32, 54, 45, 18, 5, 20, 53, 33,
+ 49, 10, 21, 22, 16, 44, 28, 0, 51, 15, 2, 36,
+};
+
+static const u8 parity[] = {
+ 8,1,0,8,0,8,8,0,0,8,8,0,8,0,2,8,0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,3,
+ 0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,
+ 0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,
+ 8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,
+ 0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,
+ 8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,
+ 8,0,0,8,0,8,8,0,0,8,8,0,8,0,0,8,0,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,
+ 4,8,8,0,8,0,0,8,8,0,0,8,0,8,8,0,8,5,0,8,0,8,8,0,0,8,8,0,8,0,6,8,
+};
+
+
+static void des_small_fips_encrypt(u32 *expkey, u8 *dst, const u8 *src)
+{
+ u32 x, y, z;
+
+ x = src[7];
+ x <<= 8;
+ x |= src[6];
+ x <<= 8;
+ x |= src[5];
+ x <<= 8;
+ x |= src[4];
+ y = src[3];
+ y <<= 8;
+ y |= src[2];
+ y <<= 8;
+ y |= src[1];
+ y <<= 8;
+ y |= src[0];
+ z = ((x >> 004) ^ y) & 0x0F0F0F0FL;
+ x ^= z << 004;
+ y ^= z;
+ z = ((y >> 020) ^ x) & 0x0000FFFFL;
+ y ^= z << 020;
+ x ^= z;
+ z = ((x >> 002) ^ y) & 0x33333333L;
+ x ^= z << 002;
+ y ^= z;
+ z = ((y >> 010) ^ x) & 0x00FF00FFL;
+ y ^= z << 010;
+ x ^= z;
+ x = x >> 1 | x << 31;
+ z = (x ^ y) & 0x55555555L;
+ y ^= z;
+ x ^= z;
+ y = y >> 1 | y << 31;
+ z = expkey[0];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[1];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[2];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[3];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[4];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[5];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[6];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[7];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[8];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[9];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[10];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[11];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[12];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[13];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[14];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[15];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[16];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[17];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[18];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[19];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[20];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[21];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[22];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[23];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[24];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[25];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[26];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[27];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[28];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[29];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[30];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[31];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ x = x << 1 | x >> 31;
+ z = (x ^ y) & 0x55555555L;
+ y ^= z;
+ x ^= z;
+ y = y << 1 | y >> 31;
+ z = ((x >> 010) ^ y) & 0x00FF00FFL;
+ x ^= z << 010;
+ y ^= z;
+ z = ((y >> 002) ^ x) & 0x33333333L;
+ y ^= z << 002;
+ x ^= z;
+ z = ((x >> 020) ^ y) & 0x0000FFFFL;
+ x ^= z << 020;
+ y ^= z;
+ z = ((y >> 004) ^ x) & 0x0F0F0F0FL;
+ y ^= z << 004;
+ x ^= z;
+ dst[0] = x;
+ x >>= 8;
+ dst[1] = x;
+ x >>= 8;
+ dst[2] = x;
+ x >>= 8;
+ dst[3] = x;
+ dst[4] = y;
+ y >>= 8;
+ dst[5] = y;
+ y >>= 8;
+ dst[6] = y;
+ y >>= 8;
+ dst[7] = y;
+}
+
+static void des_small_fips_decrypt(u32 *expkey, u8 *dst, const u8 *src)
+{
+ u32 x, y, z;
+
+ x = src[7];
+ x <<= 8;
+ x |= src[6];
+ x <<= 8;
+ x |= src[5];
+ x <<= 8;
+ x |= src[4];
+ y = src[3];
+ y <<= 8;
+ y |= src[2];
+ y <<= 8;
+ y |= src[1];
+ y <<= 8;
+ y |= src[0];
+ z = ((x >> 004) ^ y) & 0x0F0F0F0FL;
+ x ^= z << 004;
+ y ^= z;
+ z = ((y >> 020) ^ x) & 0x0000FFFFL;
+ y ^= z << 020;
+ x ^= z;
+ z = ((x >> 002) ^ y) & 0x33333333L;
+ x ^= z << 002;
+ y ^= z;
+ z = ((y >> 010) ^ x) & 0x00FF00FFL;
+ y ^= z << 010;
+ x ^= z;
+ x = x >> 1 | x << 31;
+ z = (x ^ y) & 0x55555555L;
+ y ^= z;
+ x ^= z;
+ y = y >> 1 | y << 31;
+ z = expkey[31];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[30];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[29];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[28];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[27];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[26];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[25];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[24];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[23];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[22];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[21];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[20];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[19];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[18];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[17];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[16];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[15];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[14];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[13];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[12];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[11];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[10];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[9];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[8];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[7];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[6];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[5];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[4];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[3];
+ z ^= y;
+ z = z << 4 | z >> 28;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[2];
+ z ^= y;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ x ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ z = expkey[1];
+ z ^= x;
+ z = z << 4 | z >> 28;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 448) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 384) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 320) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 256) + (0xFC & z));
+ z = expkey[0];
+ z ^= x;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 192) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 128) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) (des_keymap + 64) + (0xFC & z));
+ z >>= 8;
+ y ^= * (u32 *) ((u8 *) des_keymap + (0xFC & z));
+ x = x << 1 | x >> 31;
+ z = (x ^ y) & 0x55555555L;
+ y ^= z;
+ x ^= z;
+ y = y << 1 | y >> 31;
+ z = ((x >> 010) ^ y) & 0x00FF00FFL;
+ x ^= z << 010;
+ y ^= z;
+ z = ((y >> 002) ^ x) & 0x33333333L;
+ y ^= z << 002;
+ x ^= z;
+ z = ((x >> 020) ^ y) & 0x0000FFFFL;
+ x ^= z << 020;
+ y ^= z;
+ z = ((y >> 004) ^ x) & 0x0F0F0F0FL;
+ y ^= z << 004;
+ x ^= z;
+ dst[0] = x;
+ x >>= 8;
+ dst[1] = x;
+ x >>= 8;
+ dst[2] = x;
+ x >>= 8;
+ dst[3] = x;
+ dst[4] = y;
+ y >>= 8;
+ dst[5] = y;
+ y >>= 8;
+ dst[6] = y;
+ y >>= 8;
+ dst[7] = y;
+}
+
+/*
+ * RFC2451: Weak key checks SHOULD be performed.
+ */
+static int setkey(u32 *expkey, const u8 *key, unsigned int keylen, u32 *flags)
+{
+ const u8 *k;
+ u8 *b0, *b1;
+ u32 n, w;
+ u8 bits0[56], bits1[56];
+
+ n = parity[key[0]]; n <<= 4;
+ n |= parity[key[1]]; n <<= 4;
+ n |= parity[key[2]]; n <<= 4;
+ n |= parity[key[3]]; n <<= 4;
+ n |= parity[key[4]]; n <<= 4;
+ n |= parity[key[5]]; n <<= 4;
+ n |= parity[key[6]]; n <<= 4;
+ n |= parity[key[7]];
+ w = 0x88888888L;
+
+ if ((*flags & CRYPTO_TFM_REQ_WEAK_KEY)
+ && !((n - (w >> 3)) & w)) { /* 1 in 10^10 keys passes this test */
+ if (n < 0x41415151) {
+ if (n < 0x31312121) {
+ if (n < 0x14141515) {
+ /* 01 01 01 01 01 01 01 01 */
+ if (n == 0x11111111) goto weak;
+ /* 01 1F 01 1F 01 0E 01 0E */
+ if (n == 0x13131212) goto weak;
+ } else {
+ /* 01 E0 01 E0 01 F1 01 F1 */
+ if (n == 0x14141515) goto weak;
+ /* 01 FE 01 FE 01 FE 01 FE */
+ if (n == 0x16161616) goto weak;
+ }
+ } else {
+ if (n < 0x34342525) {
+ /* 1F 01 1F 01 0E 01 0E 01 */
+ if (n == 0x31312121) goto weak;
+ /* 1F 1F 1F 1F 0E 0E 0E 0E (?) */
+ if (n == 0x33332222) goto weak;
+ } else {
+ /* 1F E0 1F E0 0E F1 0E F1 */
+ if (n == 0x34342525) goto weak;
+ /* 1F FE 1F FE 0E FE 0E FE */
+ if (n == 0x36362626) goto weak;
+ }
+ }
+ } else {
+ if (n < 0x61616161) {
+ if (n < 0x44445555) {
+ /* E0 01 E0 01 F1 01 F1 01 */
+ if (n == 0x41415151) goto weak;
+ /* E0 1F E0 1F F1 0E F1 0E */
+ if (n == 0x43435252) goto weak;
+ } else {
+ /* E0 E0 E0 E0 F1 F1 F1 F1 (?) */
+ if (n == 0x44445555) goto weak;
+ /* E0 FE E0 FE F1 FE F1 FE */
+ if (n == 0x46465656) goto weak;
+ }
+ } else {
+ if (n < 0x64646565) {
+ /* FE 01 FE 01 FE 01 FE 01 */
+ if (n == 0x61616161) goto weak;
+ /* FE 1F FE 1F FE 0E FE 0E */
+ if (n == 0x63636262) goto weak;
+ } else {
+ /* FE E0 FE E0 FE F1 FE F1 */
+ if (n == 0x64646565) goto weak;
+ /* FE FE FE FE FE FE FE FE */
+ if (n == 0x66666666) goto weak;
+ }
+ }
+ }
+
+ goto not_weak;
+weak:
+ *flags |= CRYPTO_TFM_RES_WEAK_KEY;
+ return -EINVAL;
+ }
+
+not_weak:
+
+ /* explode the bits */
+ n = 56;
+ b0 = bits0;
+ b1 = bits1;
+
+ do {
+ w = (256 | *key++) << 2;
+ do {
+ --n;
+ b1[n] = 8 & w;
+ w >>= 1;
+ b0[n] = 4 & w;
+ } while ( w >= 16 );
+ } while ( n );
+
+ /* put the bits in the correct places */
+ n = 16;
+ k = rotors;
+
+ do {
+ w = (b1[k[ 0 ]] | b0[k[ 1 ]]) << 4;
+ w |= (b1[k[ 2 ]] | b0[k[ 3 ]]) << 2;
+ w |= b1[k[ 4 ]] | b0[k[ 5 ]];
+ w <<= 8;
+ w |= (b1[k[ 6 ]] | b0[k[ 7 ]]) << 4;
+ w |= (b1[k[ 8 ]] | b0[k[ 9 ]]) << 2;
+ w |= b1[k[10 ]] | b0[k[11 ]];
+ w <<= 8;
+ w |= (b1[k[12 ]] | b0[k[13 ]]) << 4;
+ w |= (b1[k[14 ]] | b0[k[15 ]]) << 2;
+ w |= b1[k[16 ]] | b0[k[17 ]];
+ w <<= 8;
+ w |= (b1[k[18 ]] | b0[k[19 ]]) << 4;
+ w |= (b1[k[20 ]] | b0[k[21 ]]) << 2;
+ w |= b1[k[22 ]] | b0[k[23 ]];
+ expkey[0] = w;
+
+ w = (b1[k[ 0+24]] | b0[k[ 1+24]]) << 4;
+ w |= (b1[k[ 2+24]] | b0[k[ 3+24]]) << 2;
+ w |= b1[k[ 4+24]] | b0[k[ 5+24]];
+ w <<= 8;
+ w |= (b1[k[ 6+24]] | b0[k[ 7+24]]) << 4;
+ w |= (b1[k[ 8+24]] | b0[k[ 9+24]]) << 2;
+ w |= b1[k[10+24]] | b0[k[11+24]];
+ w <<= 8;
+ w |= (b1[k[12+24]] | b0[k[13+24]]) << 4;
+ w |= (b1[k[14+24]] | b0[k[15+24]]) << 2;
+ w |= b1[k[16+24]] | b0[k[17+24]];
+ w <<= 8;
+ w |= (b1[k[18+24]] | b0[k[19+24]]) << 4;
+ w |= (b1[k[20+24]] | b0[k[21+24]]) << 2;
+ w |= b1[k[22+24]] | b0[k[23+24]];
+
+ ROR(w, 4, 28); /* could be eliminated */
+ expkey[1] = w;
+
+ k += 48;
+ expkey += 2;
+ } while (--n);
+
+ return 0;
+}
+
+static int des_setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
+{
+ return setkey(((struct des_ctx *)ctx)->expkey, key, keylen, flags);
+}
+
+static void des_encrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ des_small_fips_encrypt(((struct des_ctx *)ctx)->expkey, dst, src);
+}
+
+static void des_decrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ des_small_fips_decrypt(((struct des_ctx *)ctx)->expkey, dst, src);
+}
+
+/*
+ * RFC2451:
+ *
+ * For DES-EDE3, there is no known need to reject weak or
+ * complementation keys. Any weakness is obviated by the use of
+ * multiple keys.
+ *
+ * However, if the first two or last two independent 64-bit keys are
+ * equal (k1 == k2 or k2 == k3), then the DES3 operation is simply the
+ * same as DES. Implementers MUST reject keys that exhibit this
+ * property.
+ *
+ */
+static int des3_ede_setkey(void *ctx, const u8 *key,
+ unsigned int keylen, u32 *flags)
+{
+ unsigned int i, off;
+ struct des3_ede_ctx *dctx = ctx;
+
+ if (!(memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) &&
+ memcmp(&key[DES_KEY_SIZE], &key[DES_KEY_SIZE * 2],
+ DES_KEY_SIZE))) {
+
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_SCHED;
+ return -EINVAL;
+ }
+
+ for (i = 0, off = 0; i < 3; i++, off += DES_EXPKEY_WORDS,
+ key += DES_KEY_SIZE) {
+ int ret = setkey(&dctx->expkey[off], key, DES_KEY_SIZE, flags);
+ if (ret < 0)
+ return ret;
+ }
+ return 0;
+}
+
+static void des3_ede_encrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ struct des3_ede_ctx *dctx = ctx;
+
+ des_small_fips_encrypt(dctx->expkey, dst, src);
+ des_small_fips_decrypt(&dctx->expkey[DES_EXPKEY_WORDS], dst, dst);
+ des_small_fips_encrypt(&dctx->expkey[DES_EXPKEY_WORDS * 2], dst, dst);
+}
+
+static void des3_ede_decrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ struct des3_ede_ctx *dctx = ctx;
+
+ des_small_fips_decrypt(&dctx->expkey[DES_EXPKEY_WORDS * 2], dst, src);
+ des_small_fips_encrypt(&dctx->expkey[DES_EXPKEY_WORDS], dst, dst);
+ des_small_fips_decrypt(dctx->expkey, dst, dst);
+}
+
+static struct crypto_alg des_alg = {
+ .cra_name = "des",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = DES_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct des_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(des_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = DES_KEY_SIZE,
+ .cia_max_keysize = DES_KEY_SIZE,
+ .cia_ivsize = DES_BLOCK_SIZE,
+ .cia_setkey = des_setkey,
+ .cia_encrypt = des_encrypt,
+ .cia_decrypt = des_decrypt } }
+};
+
+static struct crypto_alg des3_ede_alg = {
+ .cra_name = "des3_ede",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct des3_ede_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(des3_ede_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = DES3_EDE_KEY_SIZE,
+ .cia_max_keysize = DES3_EDE_KEY_SIZE,
+ .cia_ivsize = DES3_EDE_BLOCK_SIZE,
+ .cia_setkey = des3_ede_setkey,
+ .cia_encrypt = des3_ede_encrypt,
+ .cia_decrypt = des3_ede_decrypt } }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = crypto_register_alg(&des_alg);
+ if (ret < 0)
+ goto out;
+
+ ret = crypto_register_alg(&des3_ede_alg);
+ if (ret < 0)
+ crypto_unregister_alg(&des_alg);
+out:
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&des3_ede_alg);
+ crypto_unregister_alg(&des_alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("DES & Triple DES EDE Cipher Algorithms");
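Aside (not part of the patch): the RFC2451 comment above explains why des3_ede_setkey rejects keys where k1 == k2 or k2 == k3, since EDE then collapses to single DES. A minimal userspace sketch of that rejection test, mirroring the memcmp logic in the patch (the helper name is illustrative):

```c
#include <string.h>

#define DES_KEY_SIZE 8

/* Returns 1 if the 24-byte 3DES-EDE key degenerates to single DES
 * (k1 == k2 or k2 == k3), as checked by des3_ede_setkey above. */
static int des3_key_is_degenerate(const unsigned char *key)
{
	return memcmp(key, key + DES_KEY_SIZE, DES_KEY_SIZE) == 0 ||
	       memcmp(key + DES_KEY_SIZE, key + 2 * DES_KEY_SIZE,
		      DES_KEY_SIZE) == 0;
}
```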
diff -Nru a/crypto/digest.c b/crypto/digest.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/digest.c Thu May 8 10:41:38 2003
@@ -0,0 +1,82 @@
+/*
+ * Cryptographic API.
+ *
+ * Digest operations.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/crypto.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <linux/highmem.h>
+#include <asm/scatterlist.h>
+#include "internal.h"
+
+static void init(struct crypto_tfm *tfm)
+{
+ tfm->__crt_alg->cra_digest.dia_init(crypto_tfm_ctx(tfm));
+}
+
+static void update(struct crypto_tfm *tfm,
+ struct scatterlist *sg, unsigned int nsg)
+{
+ unsigned int i;
+
+ for (i = 0; i < nsg; i++) {
+ char *p = crypto_kmap(sg[i].page, 0) + sg[i].offset;
+ tfm->__crt_alg->cra_digest.dia_update(crypto_tfm_ctx(tfm),
+ p, sg[i].length);
+ crypto_kunmap(p, 0);
+ crypto_yield(tfm);
+ }
+}
+
+static void final(struct crypto_tfm *tfm, u8 *out)
+{
+ tfm->__crt_alg->cra_digest.dia_final(crypto_tfm_ctx(tfm), out);
+}
+
+static void digest(struct crypto_tfm *tfm,
+ struct scatterlist *sg, unsigned int nsg, u8 *out)
+{
+ unsigned int i;
+
+ tfm->crt_digest.dit_init(tfm);
+
+ for (i = 0; i < nsg; i++) {
+ char *p = crypto_kmap(sg[i].page, 0) + sg[i].offset;
+ tfm->__crt_alg->cra_digest.dia_update(crypto_tfm_ctx(tfm),
+ p, sg[i].length);
+ crypto_kunmap(p, 0);
+ crypto_yield(tfm);
+ }
+ crypto_digest_final(tfm, out);
+}
+
+int crypto_init_digest_flags(struct crypto_tfm *tfm, u32 flags)
+{
+ return flags ? -EINVAL : 0;
+}
+
+int crypto_init_digest_ops(struct crypto_tfm *tfm)
+{
+ struct digest_tfm *ops = &tfm->crt_digest;
+
+ ops->dit_init = init;
+ ops->dit_update = update;
+ ops->dit_final = final;
+ ops->dit_digest = digest;
+
+ return crypto_alloc_hmac_block(tfm);
+}
+
+void crypto_exit_digest_ops(struct crypto_tfm *tfm)
+{
+ crypto_free_hmac_block(tfm);
+}
diff -Nru a/crypto/hmac.c b/crypto/hmac.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/hmac.c Thu May 8 10:41:38 2003
@@ -0,0 +1,134 @@
+/*
+ * Cryptographic API.
+ *
+ * HMAC: Keyed-Hashing for Message Authentication (RFC2104).
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * The HMAC implementation is derived from USAGI.
+ * Copyright (c) 2002 Kazunori Miyazawa <miyazawa@linux-ipv6.org> / USAGI
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/crypto.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <asm/scatterlist.h>
+#include "internal.h"
+
+static void hash_key(struct crypto_tfm *tfm, u8 *key, unsigned int keylen)
+{
+ struct scatterlist tmp;
+
+ tmp.page = virt_to_page(key);
+ tmp.offset = ((long)key & ~PAGE_MASK);
+ tmp.length = keylen;
+ crypto_digest_digest(tfm, &tmp, 1, key);
+
+}
+
+int crypto_alloc_hmac_block(struct crypto_tfm *tfm)
+{
+ int ret = 0;
+
+ BUG_ON(!crypto_tfm_alg_blocksize(tfm));
+
+ tfm->crt_digest.dit_hmac_block = kmalloc(crypto_tfm_alg_blocksize(tfm),
+ GFP_KERNEL);
+ if (tfm->crt_digest.dit_hmac_block == NULL)
+ ret = -ENOMEM;
+
+ return ret;
+
+}
+
+void crypto_free_hmac_block(struct crypto_tfm *tfm)
+{
+ if (tfm->crt_digest.dit_hmac_block)
+ kfree(tfm->crt_digest.dit_hmac_block);
+}
+
+void crypto_hmac_init(struct crypto_tfm *tfm, u8 *key, unsigned int *keylen)
+{
+ unsigned int i;
+ struct scatterlist tmp;
+ char *ipad = tfm->crt_digest.dit_hmac_block;
+
+ if (*keylen > crypto_tfm_alg_blocksize(tfm)) {
+ hash_key(tfm, key, *keylen);
+ *keylen = crypto_tfm_alg_digestsize(tfm);
+ }
+
+ memset(ipad, 0, crypto_tfm_alg_blocksize(tfm));
+ memcpy(ipad, key, *keylen);
+
+ for (i = 0; i < crypto_tfm_alg_blocksize(tfm); i++)
+ ipad[i] ^= 0x36;
+
+ tmp.page = virt_to_page(ipad);
+ tmp.offset = ((long)ipad & ~PAGE_MASK);
+ tmp.length = crypto_tfm_alg_blocksize(tfm);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, &tmp, 1);
+}
+
+void crypto_hmac_update(struct crypto_tfm *tfm,
+ struct scatterlist *sg, unsigned int nsg)
+{
+ crypto_digest_update(tfm, sg, nsg);
+}
+
+void crypto_hmac_final(struct crypto_tfm *tfm, u8 *key,
+ unsigned int *keylen, u8 *out)
+{
+ unsigned int i;
+ struct scatterlist tmp;
+ char *opad = tfm->crt_digest.dit_hmac_block;
+
+ if (*keylen > crypto_tfm_alg_blocksize(tfm)) {
+ hash_key(tfm, key, *keylen);
+ *keylen = crypto_tfm_alg_digestsize(tfm);
+ }
+
+ crypto_digest_final(tfm, out);
+
+ memset(opad, 0, crypto_tfm_alg_blocksize(tfm));
+ memcpy(opad, key, *keylen);
+
+ for (i = 0; i < crypto_tfm_alg_blocksize(tfm); i++)
+ opad[i] ^= 0x5c;
+
+ tmp.page = virt_to_page(opad);
+ tmp.offset = ((long)opad & ~PAGE_MASK);
+ tmp.length = crypto_tfm_alg_blocksize(tfm);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, &tmp, 1);
+
+ tmp.page = virt_to_page(out);
+ tmp.offset = ((long)out & ~PAGE_MASK);
+ tmp.length = crypto_tfm_alg_digestsize(tfm);
+
+ crypto_digest_update(tfm, &tmp, 1);
+ crypto_digest_final(tfm, out);
+}
+
+void crypto_hmac(struct crypto_tfm *tfm, u8 *key, unsigned int *keylen,
+ struct scatterlist *sg, unsigned int nsg, u8 *out)
+{
+ crypto_hmac_init(tfm, key, keylen);
+ crypto_hmac_update(tfm, sg, nsg);
+ crypto_hmac_final(tfm, key, keylen, out);
+}
+
+EXPORT_SYMBOL_GPL(crypto_hmac_init);
+EXPORT_SYMBOL_GPL(crypto_hmac_update);
+EXPORT_SYMBOL_GPL(crypto_hmac_final);
+EXPORT_SYMBOL_GPL(crypto_hmac);
+
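Aside (not part of the patch): crypto_hmac_init and crypto_hmac_final above follow RFC2104: the key is zero-padded out to the hash block size, then XORed with 0x36 (ipad) before the inner hash and 0x5c (opad) before the outer one. A standalone sketch of that key-massaging step (the helper name is illustrative):

```c
#include <string.h>

/* Zero-pad key to blocksize and XOR in the HMAC pad byte
 * (0x36 for the inner hash, 0x5c for the outer), as done in
 * crypto_hmac_init/crypto_hmac_final. Assumes keylen <= blocksize;
 * longer keys are first hashed down, as hash_key() does. */
static void hmac_pad_key(unsigned char *out, unsigned int blocksize,
			 const unsigned char *key, unsigned int keylen,
			 unsigned char pad)
{
	unsigned int i;

	memset(out, 0, blocksize);
	memcpy(out, key, keylen);
	for (i = 0; i < blocksize; i++)
		out[i] ^= pad;
}
```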
diff -Nru a/crypto/internal.h b/crypto/internal.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/internal.h Thu May 8 10:41:38 2003
@@ -0,0 +1,94 @@
+/*
+ * Cryptographic API.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_INTERNAL_H
+#define _CRYPTO_INTERNAL_H
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <asm/hardirq.h>
+#include <asm/softirq.h>
+#include <asm/kmap_types.h>
+
+extern enum km_type crypto_km_types[];
+
+static inline enum km_type crypto_kmap_type(int out)
+{
+ return crypto_km_types[(in_softirq() ? 2 : 0) + out];
+}
+
+static inline void *crypto_kmap(struct page *page, int out)
+{
+ return kmap_atomic(page, crypto_kmap_type(out));
+}
+
+static inline void crypto_kunmap(void *vaddr, int out)
+{
+ kunmap_atomic(vaddr, crypto_kmap_type(out));
+}
+
+static inline void crypto_yield(struct crypto_tfm *tfm)
+{
+ if (!in_softirq())
+ cond_resched();
+}
+
+static inline void *crypto_tfm_ctx(struct crypto_tfm *tfm)
+{
+ return (void *)&tfm[1];
+}
+
+struct crypto_alg *crypto_alg_lookup(const char *name);
+
+#ifdef CONFIG_KMOD
+void crypto_alg_autoload(const char *name);
+struct crypto_alg *crypto_alg_mod_lookup(const char *name);
+#else
+static inline struct crypto_alg *crypto_alg_mod_lookup(const char *name)
+{
+ return crypto_alg_lookup(name);
+}
+#endif
+
+#ifdef CONFIG_CRYPTO_HMAC
+int crypto_alloc_hmac_block(struct crypto_tfm *tfm);
+void crypto_free_hmac_block(struct crypto_tfm *tfm);
+#else
+static inline int crypto_alloc_hmac_block(struct crypto_tfm *tfm)
+{
+ return 0;
+}
+
+static inline void crypto_free_hmac_block(struct crypto_tfm *tfm)
+{ }
+#endif
+
+#ifdef CONFIG_PROC_FS
+void __init crypto_init_proc(void);
+#else
+static inline void crypto_init_proc(void)
+{ }
+#endif
+
+int crypto_init_digest_flags(struct crypto_tfm *tfm, u32 flags);
+int crypto_init_cipher_flags(struct crypto_tfm *tfm, u32 flags);
+int crypto_init_compress_flags(struct crypto_tfm *tfm, u32 flags);
+
+int crypto_init_digest_ops(struct crypto_tfm *tfm);
+int crypto_init_cipher_ops(struct crypto_tfm *tfm);
+int crypto_init_compress_ops(struct crypto_tfm *tfm);
+
+void crypto_exit_digest_ops(struct crypto_tfm *tfm);
+void crypto_exit_cipher_ops(struct crypto_tfm *tfm);
+void crypto_exit_compress_ops(struct crypto_tfm *tfm);
+
+#endif /* _CRYPTO_INTERNAL_H */
+
diff -Nru a/crypto/md4.c b/crypto/md4.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/md4.c Thu May 8 10:41:38 2003
@@ -0,0 +1,250 @@
+/*
+ * Cryptographic API.
+ *
+ * MD4 Message Digest Algorithm (RFC1320).
+ *
+ * Implementation derived from Andrew Tridgell and Steve French's
+ * CIFS MD4 implementation, and the cryptoapi implementation
+ * originally based on the public domain implementation written
+ * by Colin Plumb in 1993.
+ *
+ * Copyright (c) Andrew Tridgell 1997-1998.
+ * Modified by Steve French (sfrench@us.ibm.com) 2002
+ * Copyright (c) Cryptoapi developers.
+ * Copyright (c) 2002 David S. Miller (davem@redhat.com)
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/crypto.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <asm/byteorder.h>
+
+#define MD4_DIGEST_SIZE 16
+#define MD4_HMAC_BLOCK_SIZE 64
+#define MD4_BLOCK_WORDS 16
+#define MD4_HASH_WORDS 4
+
+struct md4_ctx {
+ u32 hash[MD4_HASH_WORDS];
+ u32 block[MD4_BLOCK_WORDS];
+ u64 byte_count;
+};
+
+static inline u32 lshift(u32 x, unsigned int s)
+{
+ x &= 0xFFFFFFFF;
+ return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s));
+}
+
+static inline u32 F(u32 x, u32 y, u32 z)
+{
+ return (x & y) | ((~x) & z);
+}
+
+static inline u32 G(u32 x, u32 y, u32 z)
+{
+ return (x & y) | (x & z) | (y & z);
+}
+
+static inline u32 H(u32 x, u32 y, u32 z)
+{
+ return x ^ y ^ z;
+}
+
+#define ROUND1(a,b,c,d,k,s) (a = lshift(a + F(b,c,d) + k, s))
+#define ROUND2(a,b,c,d,k,s) (a = lshift(a + G(b,c,d) + k + (u32)0x5A827999,s))
+#define ROUND3(a,b,c,d,k,s) (a = lshift(a + H(b,c,d) + k + (u32)0x6ED9EBA1,s))
+
+/* XXX: this stuff can be optimized */
+static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
+{
+ while (words--) {
+ __le32_to_cpus(buf);
+ buf++;
+ }
+}
+
+static inline void cpu_to_le32_array(u32 *buf, unsigned int words)
+{
+ while (words--) {
+ __cpu_to_le32s(buf);
+ buf++;
+ }
+}
+
+static void md4_transform(u32 *hash, u32 const *in)
+{
+ u32 a, b, c, d;
+
+ a = hash[0];
+ b = hash[1];
+ c = hash[2];
+ d = hash[3];
+
+ ROUND1(a, b, c, d, in[0], 3);
+ ROUND1(d, a, b, c, in[1], 7);
+ ROUND1(c, d, a, b, in[2], 11);
+ ROUND1(b, c, d, a, in[3], 19);
+ ROUND1(a, b, c, d, in[4], 3);
+ ROUND1(d, a, b, c, in[5], 7);
+ ROUND1(c, d, a, b, in[6], 11);
+ ROUND1(b, c, d, a, in[7], 19);
+ ROUND1(a, b, c, d, in[8], 3);
+ ROUND1(d, a, b, c, in[9], 7);
+ ROUND1(c, d, a, b, in[10], 11);
+ ROUND1(b, c, d, a, in[11], 19);
+ ROUND1(a, b, c, d, in[12], 3);
+ ROUND1(d, a, b, c, in[13], 7);
+ ROUND1(c, d, a, b, in[14], 11);
+ ROUND1(b, c, d, a, in[15], 19);
+
+	ROUND2(a, b, c, d, in[0], 3);
+ ROUND2(d, a, b, c, in[4], 5);
+ ROUND2(c, d, a, b, in[8], 9);
+ ROUND2(b, c, d, a, in[12], 13);
+ ROUND2(a, b, c, d, in[1], 3);
+ ROUND2(d, a, b, c, in[5], 5);
+ ROUND2(c, d, a, b, in[9], 9);
+ ROUND2(b, c, d, a, in[13], 13);
+ ROUND2(a, b, c, d, in[2], 3);
+ ROUND2(d, a, b, c, in[6], 5);
+ ROUND2(c, d, a, b, in[10], 9);
+ ROUND2(b, c, d, a, in[14], 13);
+ ROUND2(a, b, c, d, in[3], 3);
+ ROUND2(d, a, b, c, in[7], 5);
+ ROUND2(c, d, a, b, in[11], 9);
+ ROUND2(b, c, d, a, in[15], 13);
+
+	ROUND3(a, b, c, d, in[0], 3);
+ ROUND3(d, a, b, c, in[8], 9);
+ ROUND3(c, d, a, b, in[4], 11);
+ ROUND3(b, c, d, a, in[12], 15);
+ ROUND3(a, b, c, d, in[2], 3);
+ ROUND3(d, a, b, c, in[10], 9);
+ ROUND3(c, d, a, b, in[6], 11);
+ ROUND3(b, c, d, a, in[14], 15);
+ ROUND3(a, b, c, d, in[1], 3);
+ ROUND3(d, a, b, c, in[9], 9);
+ ROUND3(c, d, a, b, in[5], 11);
+ ROUND3(b, c, d, a, in[13], 15);
+ ROUND3(a, b, c, d, in[3], 3);
+ ROUND3(d, a, b, c, in[11], 9);
+ ROUND3(c, d, a, b, in[7], 11);
+ ROUND3(b, c, d, a, in[15], 15);
+
+ hash[0] += a;
+ hash[1] += b;
+ hash[2] += c;
+ hash[3] += d;
+}
+
+static inline void md4_transform_helper(struct md4_ctx *ctx)
+{
+ le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(u32));
+ md4_transform(ctx->hash, ctx->block);
+}
+
+static void md4_init(void *ctx)
+{
+ struct md4_ctx *mctx = ctx;
+
+ mctx->hash[0] = 0x67452301;
+ mctx->hash[1] = 0xefcdab89;
+ mctx->hash[2] = 0x98badcfe;
+ mctx->hash[3] = 0x10325476;
+ mctx->byte_count = 0;
+}
+
+static void md4_update(void *ctx, const u8 *data, unsigned int len)
+{
+ struct md4_ctx *mctx = ctx;
+ const u32 avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
+
+ mctx->byte_count += len;
+
+ if (avail > len) {
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, len);
+ return;
+ }
+
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, avail);
+
+ md4_transform_helper(mctx);
+ data += avail;
+ len -= avail;
+
+ while (len >= sizeof(mctx->block)) {
+ memcpy(mctx->block, data, sizeof(mctx->block));
+ md4_transform_helper(mctx);
+ data += sizeof(mctx->block);
+ len -= sizeof(mctx->block);
+ }
+
+ memcpy(mctx->block, data, len);
+}
+
+static void md4_final(void *ctx, u8 *out)
+{
+ struct md4_ctx *mctx = ctx;
+ const unsigned int offset = mctx->byte_count & 0x3f;
+ char *p = (char *)mctx->block + offset;
+ int padding = 56 - (offset + 1);
+
+ *p++ = 0x80;
+ if (padding < 0) {
+ memset(p, 0x00, padding + sizeof (u64));
+ md4_transform_helper(mctx);
+ p = (char *)mctx->block;
+ padding = 56;
+ }
+
+ memset(p, 0, padding);
+ mctx->block[14] = mctx->byte_count << 3;
+ mctx->block[15] = mctx->byte_count >> 29;
+ le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
+ sizeof(u64)) / sizeof(u32));
+ md4_transform(mctx->hash, mctx->block);
+ cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(u32));
+ memcpy(out, mctx->hash, sizeof(mctx->hash));
+	memset(mctx, 0, sizeof(*mctx));	/* wipe the whole context, not just the pointer */
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "md4",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = MD4_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct md4_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = MD4_DIGEST_SIZE,
+ .dia_init = md4_init,
+ .dia_update = md4_update,
+ .dia_final = md4_final } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("MD4 Message Digest Algorithm");
+
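Aside (not part of the patch): md4_final pads with a 0x80 marker, zeros up to byte 56 of the 64-byte block, then the 64-bit bit count; when fewer than 9 bytes remain free, the padding spills into an extra block. The amount of zero padding can be sketched as (the helper name is illustrative):

```c
/* Bytes of zero padding after the 0x80 marker so that message +
 * marker + padding + 8-byte length ends on a 64-byte boundary,
 * matching the padding arithmetic in md4_final/md5_final. */
static unsigned int md_zero_padding(unsigned long long byte_count)
{
	unsigned int offset = byte_count & 0x3f;
	int padding = 56 - (int)(offset + 1);

	if (padding < 0)	/* spills into an extra 64-byte block */
		padding += 64;
	return (unsigned int)padding;
}
```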
diff -Nru a/crypto/md5.c b/crypto/md5.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/md5.c Thu May 8 10:41:38 2003
@@ -0,0 +1,244 @@
+/*
+ * Cryptographic API.
+ *
+ * MD5 Message Digest Algorithm (RFC1321).
+ *
+ * Derived from cryptoapi implementation, originally based on the
+ * public domain implementation written by Colin Plumb in 1993.
+ *
+ * Copyright (c) Cryptoapi developers.
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/crypto.h>
+#include <asm/byteorder.h>
+
+#define MD5_DIGEST_SIZE 16
+#define MD5_HMAC_BLOCK_SIZE 64
+#define MD5_BLOCK_WORDS 16
+#define MD5_HASH_WORDS 4
+
+#define F1(x, y, z) (z ^ (x & (y ^ z)))
+#define F2(x, y, z) F1(z, x, y)
+#define F3(x, y, z) (x ^ y ^ z)
+#define F4(x, y, z) (y ^ (x | ~z))
+
+#define MD5STEP(f, w, x, y, z, in, s) \
+ (w += f(x, y, z) + in, w = (w<<s | w>>(32-s)) + x)
+
+struct md5_ctx {
+ u32 hash[MD5_HASH_WORDS];
+ u32 block[MD5_BLOCK_WORDS];
+ u64 byte_count;
+};
+
+static void md5_transform(u32 *hash, u32 const *in)
+{
+ u32 a, b, c, d;
+
+ a = hash[0];
+ b = hash[1];
+ c = hash[2];
+ d = hash[3];
+
+ MD5STEP(F1, a, b, c, d, in[0] + 0xd76aa478, 7);
+ MD5STEP(F1, d, a, b, c, in[1] + 0xe8c7b756, 12);
+ MD5STEP(F1, c, d, a, b, in[2] + 0x242070db, 17);
+ MD5STEP(F1, b, c, d, a, in[3] + 0xc1bdceee, 22);
+ MD5STEP(F1, a, b, c, d, in[4] + 0xf57c0faf, 7);
+ MD5STEP(F1, d, a, b, c, in[5] + 0x4787c62a, 12);
+ MD5STEP(F1, c, d, a, b, in[6] + 0xa8304613, 17);
+ MD5STEP(F1, b, c, d, a, in[7] + 0xfd469501, 22);
+ MD5STEP(F1, a, b, c, d, in[8] + 0x698098d8, 7);
+ MD5STEP(F1, d, a, b, c, in[9] + 0x8b44f7af, 12);
+ MD5STEP(F1, c, d, a, b, in[10] + 0xffff5bb1, 17);
+ MD5STEP(F1, b, c, d, a, in[11] + 0x895cd7be, 22);
+ MD5STEP(F1, a, b, c, d, in[12] + 0x6b901122, 7);
+ MD5STEP(F1, d, a, b, c, in[13] + 0xfd987193, 12);
+ MD5STEP(F1, c, d, a, b, in[14] + 0xa679438e, 17);
+ MD5STEP(F1, b, c, d, a, in[15] + 0x49b40821, 22);
+
+ MD5STEP(F2, a, b, c, d, in[1] + 0xf61e2562, 5);
+ MD5STEP(F2, d, a, b, c, in[6] + 0xc040b340, 9);
+ MD5STEP(F2, c, d, a, b, in[11] + 0x265e5a51, 14);
+ MD5STEP(F2, b, c, d, a, in[0] + 0xe9b6c7aa, 20);
+ MD5STEP(F2, a, b, c, d, in[5] + 0xd62f105d, 5);
+ MD5STEP(F2, d, a, b, c, in[10] + 0x02441453, 9);
+ MD5STEP(F2, c, d, a, b, in[15] + 0xd8a1e681, 14);
+ MD5STEP(F2, b, c, d, a, in[4] + 0xe7d3fbc8, 20);
+ MD5STEP(F2, a, b, c, d, in[9] + 0x21e1cde6, 5);
+ MD5STEP(F2, d, a, b, c, in[14] + 0xc33707d6, 9);
+ MD5STEP(F2, c, d, a, b, in[3] + 0xf4d50d87, 14);
+ MD5STEP(F2, b, c, d, a, in[8] + 0x455a14ed, 20);
+ MD5STEP(F2, a, b, c, d, in[13] + 0xa9e3e905, 5);
+ MD5STEP(F2, d, a, b, c, in[2] + 0xfcefa3f8, 9);
+ MD5STEP(F2, c, d, a, b, in[7] + 0x676f02d9, 14);
+ MD5STEP(F2, b, c, d, a, in[12] + 0x8d2a4c8a, 20);
+
+ MD5STEP(F3, a, b, c, d, in[5] + 0xfffa3942, 4);
+ MD5STEP(F3, d, a, b, c, in[8] + 0x8771f681, 11);
+ MD5STEP(F3, c, d, a, b, in[11] + 0x6d9d6122, 16);
+ MD5STEP(F3, b, c, d, a, in[14] + 0xfde5380c, 23);
+ MD5STEP(F3, a, b, c, d, in[1] + 0xa4beea44, 4);
+ MD5STEP(F3, d, a, b, c, in[4] + 0x4bdecfa9, 11);
+ MD5STEP(F3, c, d, a, b, in[7] + 0xf6bb4b60, 16);
+ MD5STEP(F3, b, c, d, a, in[10] + 0xbebfbc70, 23);
+ MD5STEP(F3, a, b, c, d, in[13] + 0x289b7ec6, 4);
+ MD5STEP(F3, d, a, b, c, in[0] + 0xeaa127fa, 11);
+ MD5STEP(F3, c, d, a, b, in[3] + 0xd4ef3085, 16);
+ MD5STEP(F3, b, c, d, a, in[6] + 0x04881d05, 23);
+ MD5STEP(F3, a, b, c, d, in[9] + 0xd9d4d039, 4);
+ MD5STEP(F3, d, a, b, c, in[12] + 0xe6db99e5, 11);
+ MD5STEP(F3, c, d, a, b, in[15] + 0x1fa27cf8, 16);
+ MD5STEP(F3, b, c, d, a, in[2] + 0xc4ac5665, 23);
+
+ MD5STEP(F4, a, b, c, d, in[0] + 0xf4292244, 6);
+ MD5STEP(F4, d, a, b, c, in[7] + 0x432aff97, 10);
+ MD5STEP(F4, c, d, a, b, in[14] + 0xab9423a7, 15);
+ MD5STEP(F4, b, c, d, a, in[5] + 0xfc93a039, 21);
+ MD5STEP(F4, a, b, c, d, in[12] + 0x655b59c3, 6);
+ MD5STEP(F4, d, a, b, c, in[3] + 0x8f0ccc92, 10);
+ MD5STEP(F4, c, d, a, b, in[10] + 0xffeff47d, 15);
+ MD5STEP(F4, b, c, d, a, in[1] + 0x85845dd1, 21);
+ MD5STEP(F4, a, b, c, d, in[8] + 0x6fa87e4f, 6);
+ MD5STEP(F4, d, a, b, c, in[15] + 0xfe2ce6e0, 10);
+ MD5STEP(F4, c, d, a, b, in[6] + 0xa3014314, 15);
+ MD5STEP(F4, b, c, d, a, in[13] + 0x4e0811a1, 21);
+ MD5STEP(F4, a, b, c, d, in[4] + 0xf7537e82, 6);
+ MD5STEP(F4, d, a, b, c, in[11] + 0xbd3af235, 10);
+ MD5STEP(F4, c, d, a, b, in[2] + 0x2ad7d2bb, 15);
+ MD5STEP(F4, b, c, d, a, in[9] + 0xeb86d391, 21);
+
+ hash[0] += a;
+ hash[1] += b;
+ hash[2] += c;
+ hash[3] += d;
+}
+
+/* XXX: this stuff can be optimized */
+static inline void le32_to_cpu_array(u32 *buf, unsigned int words)
+{
+ while (words--) {
+ __le32_to_cpus(buf);
+ buf++;
+ }
+}
+
+static inline void cpu_to_le32_array(u32 *buf, unsigned int words)
+{
+ while (words--) {
+ __cpu_to_le32s(buf);
+ buf++;
+ }
+}
+
+static inline void md5_transform_helper(struct md5_ctx *ctx)
+{
+ le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(u32));
+ md5_transform(ctx->hash, ctx->block);
+}
+
+static void md5_init(void *ctx)
+{
+ struct md5_ctx *mctx = ctx;
+
+ mctx->hash[0] = 0x67452301;
+ mctx->hash[1] = 0xefcdab89;
+ mctx->hash[2] = 0x98badcfe;
+ mctx->hash[3] = 0x10325476;
+ mctx->byte_count = 0;
+}
+
+static void md5_update(void *ctx, const u8 *data, unsigned int len)
+{
+ struct md5_ctx *mctx = ctx;
+ const u32 avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
+
+ mctx->byte_count += len;
+
+ if (avail > len) {
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, len);
+ return;
+ }
+
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, avail);
+
+ md5_transform_helper(mctx);
+ data += avail;
+ len -= avail;
+
+ while (len >= sizeof(mctx->block)) {
+ memcpy(mctx->block, data, sizeof(mctx->block));
+ md5_transform_helper(mctx);
+ data += sizeof(mctx->block);
+ len -= sizeof(mctx->block);
+ }
+
+ memcpy(mctx->block, data, len);
+}
+
+static void md5_final(void *ctx, u8 *out)
+{
+ struct md5_ctx *mctx = ctx;
+ const unsigned int offset = mctx->byte_count & 0x3f;
+ char *p = (char *)mctx->block + offset;
+ int padding = 56 - (offset + 1);
+
+ *p++ = 0x80;
+ if (padding < 0) {
+ memset(p, 0x00, padding + sizeof (u64));
+ md5_transform_helper(mctx);
+ p = (char *)mctx->block;
+ padding = 56;
+ }
+
+ memset(p, 0, padding);
+ mctx->block[14] = mctx->byte_count << 3;
+ mctx->block[15] = mctx->byte_count >> 29;
+ le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
+ sizeof(u64)) / sizeof(u32));
+ md5_transform(mctx->hash, mctx->block);
+ cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(u32));
+ memcpy(out, mctx->hash, sizeof(mctx->hash));
+	memset(mctx, 0, sizeof(*mctx));	/* wipe the whole context, not just the pointer */
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "md5",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = MD5_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct md5_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = MD5_DIGEST_SIZE,
+ .dia_init = md5_init,
+ .dia_update = md5_update,
+ .dia_final = md5_final } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("MD5 Message Digest Algorithm");
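Aside (not part of the patch): MD5STEP above relies on a 32-bit rotate-left, `w<<s | w>>(32-s)`; md4's lshift additionally masks with 0xFFFFFFFF so the same expression stays correct when the type is wider than 32 bits. A standalone sketch where a fixed-width type makes the masking implicit (the helper name is illustrative):

```c
#include <stdint.h>

/* 32-bit rotate left as used by MD5STEP and md4's lshift.
 * Valid for 1 <= s <= 31 (s == 0 or 32 would shift by the
 * full word width, which is undefined in C). */
static uint32_t rol32(uint32_t x, unsigned int s)
{
	return (x << s) | (x >> (32 - s));
}
```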
diff -Nru a/crypto/proc.c b/crypto/proc.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/proc.c Thu May 8 10:41:38 2003
@@ -0,0 +1,106 @@
+/*
+ * Scatterlist Cryptographic API.
+ *
+ * Procfs information.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/crypto.h>
+#include <linux/rwsem.h>
+#include <linux/proc_fs.h>
+#include <linux/seq_file.h>
+#include "internal.h"
+
+extern struct list_head crypto_alg_list;
+extern struct rw_semaphore crypto_alg_sem;
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+ struct list_head *v;
+ loff_t n = *pos;
+
+ down_read(&crypto_alg_sem);
+ list_for_each(v, &crypto_alg_list)
+ if (!n--)
+ return list_entry(v, struct crypto_alg, cra_list);
+ return NULL;
+}
+
+static void *c_next(struct seq_file *m, void *p, loff_t *pos)
+{
+ struct list_head *v = p;
+
+ (*pos)++;
+ v = v->next;
+ return (v == &crypto_alg_list) ?
+ NULL : list_entry(v, struct crypto_alg, cra_list);
+}
+
+static void c_stop(struct seq_file *m, void *p)
+{
+ up_read(&crypto_alg_sem);
+}
+
+static int c_show(struct seq_file *m, void *p)
+{
+ struct crypto_alg *alg = (struct crypto_alg *)p;
+
+ seq_printf(m, "name : %s\n", alg->cra_name);
+ seq_printf(m, "module : %s\n", alg->cra_module->name);
+
+ switch (alg->cra_flags & CRYPTO_ALG_TYPE_MASK) {
+ case CRYPTO_ALG_TYPE_CIPHER:
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "min keysize : %u\n",
+ alg->cra_cipher.cia_min_keysize);
+ seq_printf(m, "max keysize : %u\n",
+ alg->cra_cipher.cia_max_keysize);
+ seq_printf(m, "ivsize : %u\n",
+ alg->cra_cipher.cia_ivsize);
+ break;
+
+ case CRYPTO_ALG_TYPE_DIGEST:
+ seq_printf(m, "blocksize : %u\n", alg->cra_blocksize);
+ seq_printf(m, "digestsize : %u\n",
+ alg->cra_digest.dia_digestsize);
+ break;
+ }
+
+ seq_putc(m, '\n');
+ return 0;
+}
+
+static struct seq_operations crypto_seq_ops = {
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = c_show
+};
+
+static int crypto_info_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &crypto_seq_ops);
+}
+
+static struct file_operations proc_crypto_ops = {
+ .open = crypto_info_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release
+};
+
+void __init crypto_init_proc(void)
+{
+ struct proc_dir_entry *proc;
+
+ proc = create_proc_entry("crypto", 0, NULL);
+ if (proc)
+ proc->proc_fops = &proc_crypto_ops;
+}
diff -Nru a/crypto/serpent.c b/crypto/serpent.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/serpent.c Thu May 8 10:41:38 2003
@@ -0,0 +1,507 @@
+/*
+ * Cryptographic API.
+ *
+ * Serpent Cipher Algorithm.
+ *
+ * Copyright (C) 2002 Dag Arne Osvik <osvik@ii.uib.no>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <asm/byteorder.h>
+#include <linux/crypto.h>
+
+/* Key is padded to the maximum of 256 bits before round key generation.
+ * Any key length <= 256 bits (32 bytes) is allowed by the algorithm.
+ */
+
+#define SERPENT_MIN_KEY_SIZE 0
+#define SERPENT_MAX_KEY_SIZE 32
+#define SERPENT_EXPKEY_WORDS 132
+#define SERPENT_BLOCK_SIZE 16
+
+#define PHI 0x9e3779b9UL
+#define ROL(x,r) ((x) = ((x) << (r)) | ((x) >> (32-(r))))
+#define ROR(x,r) ((x) = ((x) >> (r)) | ((x) << (32-(r))))
+
+#define keyiter(a,b,c,d,i,j) \
+ b ^= d; b ^= c; b ^= a; b ^= PHI ^ i; ROL(b,11); k[j] = b;
+
+#define loadkeys(x0,x1,x2,x3,i) \
+ x0=k[i]; x1=k[i+1]; x2=k[i+2]; x3=k[i+3];
+
+#define storekeys(x0,x1,x2,x3,i) \
+ k[i]=x0; k[i+1]=x1; k[i+2]=x2; k[i+3]=x3;
+
+#define K(x0,x1,x2,x3,i) \
+ x3 ^= k[4*(i)+3]; x2 ^= k[4*(i)+2]; \
+ x1 ^= k[4*(i)+1]; x0 ^= k[4*(i)+0];
+
+#define LK(x0,x1,x2,x3,x4,i) \
+ ROL(x0,13); \
+ ROL(x2,3); x1 ^= x0; x4 = x0 << 3; \
+ x3 ^= x2; x1 ^= x2; \
+ ROL(x1,1); x3 ^= x4; \
+ ROL(x3,7); x4 = x1; \
+ x0 ^= x1; x4 <<= 7; x2 ^= x3; \
+ x0 ^= x3; x2 ^= x4; x3 ^= k[4*i+3]; \
+ x1 ^= k[4*i+1]; ROL(x0,5); ROL(x2,22); \
+ x0 ^= k[4*i+0]; x2 ^= k[4*i+2];
+
+#define KL(x0,x1,x2,x3,x4,i) \
+ x0 ^= k[4*i+0]; x1 ^= k[4*i+1]; x2 ^= k[4*i+2]; \
+ x3 ^= k[4*i+3]; ROR(x0,5); ROR(x2,22); \
+ x4 = x1; x2 ^= x3; x0 ^= x3; \
+ x4 <<= 7; x0 ^= x1; ROR(x1,1); \
+ x2 ^= x4; ROR(x3,7); x4 = x0 << 3; \
+ x1 ^= x0; x3 ^= x4; ROR(x0,13); \
+ x1 ^= x2; x3 ^= x2; ROR(x2,3);
+
+#define S0(x0,x1,x2,x3,x4) \
+ x4 = x3; \
+ x3 |= x0; x0 ^= x4; x4 ^= x2; \
+ x4 =~ x4; x3 ^= x1; x1 &= x0; \
+ x1 ^= x4; x2 ^= x0; x0 ^= x3; \
+ x4 |= x0; x0 ^= x2; x2 &= x1; \
+ x3 ^= x2; x1 =~ x1; x2 ^= x4; \
+ x1 ^= x2;
+
+#define S1(x0,x1,x2,x3,x4) \
+ x4 = x1; \
+ x1 ^= x0; x0 ^= x3; x3 =~ x3; \
+ x4 &= x1; x0 |= x1; x3 ^= x2; \
+ x0 ^= x3; x1 ^= x3; x3 ^= x4; \
+ x1 |= x4; x4 ^= x2; x2 &= x0; \
+ x2 ^= x1; x1 |= x0; x0 =~ x0; \
+ x0 ^= x2; x4 ^= x1;
+
+#define S2(x0,x1,x2,x3,x4) \
+ x3 =~ x3; \
+ x1 ^= x0; x4 = x0; x0 &= x2; \
+ x0 ^= x3; x3 |= x4; x2 ^= x1; \
+ x3 ^= x1; x1 &= x0; x0 ^= x2; \
+ x2 &= x3; x3 |= x1; x0 =~ x0; \
+ x3 ^= x0; x4 ^= x0; x0 ^= x2; \
+ x1 |= x2;
+
+#define S3(x0,x1,x2,x3,x4) \
+ x4 = x1; \
+ x1 ^= x3; x3 |= x0; x4 &= x0; \
+ x0 ^= x2; x2 ^= x1; x1 &= x3; \
+ x2 ^= x3; x0 |= x4; x4 ^= x3; \
+ x1 ^= x0; x0 &= x3; x3 &= x4; \
+ x3 ^= x2; x4 |= x1; x2 &= x1; \
+ x4 ^= x3; x0 ^= x3; x3 ^= x2;
+
+#define S4(x0,x1,x2,x3,x4) \
+ x4 = x3; \
+ x3 &= x0; x0 ^= x4; \
+ x3 ^= x2; x2 |= x4; x0 ^= x1; \
+ x4 ^= x3; x2 |= x0; \
+ x2 ^= x1; x1 &= x0; \
+ x1 ^= x4; x4 &= x2; x2 ^= x3; \
+ x4 ^= x0; x3 |= x1; x1 =~ x1; \
+ x3 ^= x0;
+
+#define S5(x0,x1,x2,x3,x4) \
+ x4 = x1; x1 |= x0; \
+ x2 ^= x1; x3 =~ x3; x4 ^= x0; \
+ x0 ^= x2; x1 &= x4; x4 |= x3; \
+ x4 ^= x0; x0 &= x3; x1 ^= x3; \
+ x3 ^= x2; x0 ^= x1; x2 &= x4; \
+ x1 ^= x2; x2 &= x0; \
+ x3 ^= x2;
+
+#define S6(x0,x1,x2,x3,x4) \
+ x4 = x1; \
+ x3 ^= x0; x1 ^= x2; x2 ^= x0; \
+ x0 &= x3; x1 |= x3; x4 =~ x4; \
+ x0 ^= x1; x1 ^= x2; \
+ x3 ^= x4; x4 ^= x0; x2 &= x0; \
+ x4 ^= x1; x2 ^= x3; x3 &= x1; \
+ x3 ^= x0; x1 ^= x2;
+
+#define S7(x0,x1,x2,x3,x4) \
+ x1 =~ x1; \
+ x4 = x1; x0 =~ x0; x1 &= x2; \
+ x1 ^= x3; x3 |= x4; x4 ^= x2; \
+ x2 ^= x3; x3 ^= x0; x0 |= x1; \
+ x2 &= x0; x0 ^= x4; x4 ^= x3; \
+ x3 &= x0; x4 ^= x1; \
+ x2 ^= x4; x3 ^= x1; x4 |= x0; \
+ x4 ^= x1;
+
+#define SI0(x0,x1,x2,x3,x4) \
+ x4 = x3; x1 ^= x0; \
+ x3 |= x1; x4 ^= x1; x0 =~ x0; \
+ x2 ^= x3; x3 ^= x0; x0 &= x1; \
+ x0 ^= x2; x2 &= x3; x3 ^= x4; \
+ x2 ^= x3; x1 ^= x3; x3 &= x0; \
+ x1 ^= x0; x0 ^= x2; x4 ^= x3;
+
+#define SI1(x0,x1,x2,x3,x4) \
+ x1 ^= x3; x4 = x0; \
+ x0 ^= x2; x2 =~ x2; x4 |= x1; \
+ x4 ^= x3; x3 &= x1; x1 ^= x2; \
+ x2 &= x4; x4 ^= x1; x1 |= x3; \
+ x3 ^= x0; x2 ^= x0; x0 |= x4; \
+ x2 ^= x4; x1 ^= x0; \
+ x4 ^= x1;
+
+#define SI2(x0,x1,x2,x3,x4) \
+ x2 ^= x1; x4 = x3; x3 =~ x3; \
+ x3 |= x2; x2 ^= x4; x4 ^= x0; \
+ x3 ^= x1; x1 |= x2; x2 ^= x0; \
+ x1 ^= x4; x4 |= x3; x2 ^= x3; \
+ x4 ^= x2; x2 &= x1; \
+ x2 ^= x3; x3 ^= x4; x4 ^= x0;
+
+#define SI3(x0,x1,x2,x3,x4) \
+ x2 ^= x1; \
+ x4 = x1; x1 &= x2; \
+ x1 ^= x0; x0 |= x4; x4 ^= x3; \
+ x0 ^= x3; x3 |= x1; x1 ^= x2; \
+ x1 ^= x3; x0 ^= x2; x2 ^= x3; \
+ x3 &= x1; x1 ^= x0; x0 &= x2; \
+ x4 ^= x3; x3 ^= x0; x0 ^= x1;
+
+#define SI4(x0,x1,x2,x3,x4) \
+ x2 ^= x3; x4 = x0; x0 &= x1; \
+ x0 ^= x2; x2 |= x3; x4 =~ x4; \
+ x1 ^= x0; x0 ^= x2; x2 &= x4; \
+ x2 ^= x0; x0 |= x4; \
+ x0 ^= x3; x3 &= x2; \
+ x4 ^= x3; x3 ^= x1; x1 &= x0; \
+ x4 ^= x1; x0 ^= x3;
+
+#define SI5(x0,x1,x2,x3,x4) \
+ x4 = x1; x1 |= x2; \
+ x2 ^= x4; x1 ^= x3; x3 &= x4; \
+ x2 ^= x3; x3 |= x0; x0 =~ x0; \
+ x3 ^= x2; x2 |= x0; x4 ^= x1; \
+ x2 ^= x4; x4 &= x0; x0 ^= x1; \
+ x1 ^= x3; x0 &= x2; x2 ^= x3; \
+ x0 ^= x2; x2 ^= x4; x4 ^= x3;
+
+#define SI6(x0,x1,x2,x3,x4) \
+ x0 ^= x2; \
+ x4 = x0; x0 &= x3; x2 ^= x3; \
+ x0 ^= x2; x3 ^= x1; x2 |= x4; \
+ x2 ^= x3; x3 &= x0; x0 =~ x0; \
+ x3 ^= x1; x1 &= x2; x4 ^= x0; \
+ x3 ^= x4; x4 ^= x2; x0 ^= x1; \
+ x2 ^= x0;
+
+#define SI7(x0,x1,x2,x3,x4) \
+ x4 = x3; x3 &= x0; x0 ^= x2; \
+ x2 |= x4; x4 ^= x1; x0 =~ x0; \
+ x1 |= x3; x4 ^= x0; x0 &= x2; \
+ x0 ^= x1; x1 &= x2; x3 ^= x2; \
+ x4 ^= x3; x2 &= x3; x3 |= x0; \
+ x1 ^= x4; x3 ^= x4; x4 &= x0; \
+ x4 ^= x2;
+
+struct serpent_ctx {
+ u8 iv[SERPENT_BLOCK_SIZE];
+ u32 expkey[SERPENT_EXPKEY_WORDS];
+};
+
+static int setkey(void *ctx, const u8 *key, unsigned int keylen, u32 *flags)
+{
+ u32 *k = ((struct serpent_ctx *)ctx)->expkey;
+ u8 *k8 = (u8 *)k;
+ u32 r0,r1,r2,r3,r4;
+ int i;
+
+ if (keylen < SERPENT_MIN_KEY_SIZE ||
+ keylen > SERPENT_MAX_KEY_SIZE) {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ /* Copy key, add padding */
+
+ for (i = 0; i < keylen; ++i)
+ k8[i] = key[i];
+ if (i < SERPENT_MAX_KEY_SIZE)
+ k8[i++] = 1;
+ while (i < SERPENT_MAX_KEY_SIZE)
+ k8[i++] = 0;
+
+ /* Expand key using polynomial */
+
+ r0 = le32_to_cpu(k[3]);
+ r1 = le32_to_cpu(k[4]);
+ r2 = le32_to_cpu(k[5]);
+ r3 = le32_to_cpu(k[6]);
+ r4 = le32_to_cpu(k[7]);
+
+ keyiter(le32_to_cpu(k[0]),r0,r4,r2,0,0);
+ keyiter(le32_to_cpu(k[1]),r1,r0,r3,1,1);
+ keyiter(le32_to_cpu(k[2]),r2,r1,r4,2,2);
+ keyiter(le32_to_cpu(k[3]),r3,r2,r0,3,3);
+ keyiter(le32_to_cpu(k[4]),r4,r3,r1,4,4);
+ keyiter(le32_to_cpu(k[5]),r0,r4,r2,5,5);
+ keyiter(le32_to_cpu(k[6]),r1,r0,r3,6,6);
+ keyiter(le32_to_cpu(k[7]),r2,r1,r4,7,7);
+
+ keyiter(k[ 0],r3,r2,r0, 8, 8); keyiter(k[ 1],r4,r3,r1, 9, 9);
+ keyiter(k[ 2],r0,r4,r2, 10, 10); keyiter(k[ 3],r1,r0,r3, 11, 11);
+ keyiter(k[ 4],r2,r1,r4, 12, 12); keyiter(k[ 5],r3,r2,r0, 13, 13);
+ keyiter(k[ 6],r4,r3,r1, 14, 14); keyiter(k[ 7],r0,r4,r2, 15, 15);
+ keyiter(k[ 8],r1,r0,r3, 16, 16); keyiter(k[ 9],r2,r1,r4, 17, 17);
+ keyiter(k[ 10],r3,r2,r0, 18, 18); keyiter(k[ 11],r4,r3,r1, 19, 19);
+ keyiter(k[ 12],r0,r4,r2, 20, 20); keyiter(k[ 13],r1,r0,r3, 21, 21);
+ keyiter(k[ 14],r2,r1,r4, 22, 22); keyiter(k[ 15],r3,r2,r0, 23, 23);
+ keyiter(k[ 16],r4,r3,r1, 24, 24); keyiter(k[ 17],r0,r4,r2, 25, 25);
+ keyiter(k[ 18],r1,r0,r3, 26, 26); keyiter(k[ 19],r2,r1,r4, 27, 27);
+ keyiter(k[ 20],r3,r2,r0, 28, 28); keyiter(k[ 21],r4,r3,r1, 29, 29);
+ keyiter(k[ 22],r0,r4,r2, 30, 30); keyiter(k[ 23],r1,r0,r3, 31, 31);
+
+ k += 50;
+
+ keyiter(k[-26],r2,r1,r4, 32,-18); keyiter(k[-25],r3,r2,r0, 33,-17);
+ keyiter(k[-24],r4,r3,r1, 34,-16); keyiter(k[-23],r0,r4,r2, 35,-15);
+ keyiter(k[-22],r1,r0,r3, 36,-14); keyiter(k[-21],r2,r1,r4, 37,-13);
+ keyiter(k[-20],r3,r2,r0, 38,-12); keyiter(k[-19],r4,r3,r1, 39,-11);
+ keyiter(k[-18],r0,r4,r2, 40,-10); keyiter(k[-17],r1,r0,r3, 41, -9);
+ keyiter(k[-16],r2,r1,r4, 42, -8); keyiter(k[-15],r3,r2,r0, 43, -7);
+ keyiter(k[-14],r4,r3,r1, 44, -6); keyiter(k[-13],r0,r4,r2, 45, -5);
+ keyiter(k[-12],r1,r0,r3, 46, -4); keyiter(k[-11],r2,r1,r4, 47, -3);
+ keyiter(k[-10],r3,r2,r0, 48, -2); keyiter(k[ -9],r4,r3,r1, 49, -1);
+ keyiter(k[ -8],r0,r4,r2, 50, 0); keyiter(k[ -7],r1,r0,r3, 51, 1);
+ keyiter(k[ -6],r2,r1,r4, 52, 2); keyiter(k[ -5],r3,r2,r0, 53, 3);
+ keyiter(k[ -4],r4,r3,r1, 54, 4); keyiter(k[ -3],r0,r4,r2, 55, 5);
+ keyiter(k[ -2],r1,r0,r3, 56, 6); keyiter(k[ -1],r2,r1,r4, 57, 7);
+ keyiter(k[ 0],r3,r2,r0, 58, 8); keyiter(k[ 1],r4,r3,r1, 59, 9);
+ keyiter(k[ 2],r0,r4,r2, 60, 10); keyiter(k[ 3],r1,r0,r3, 61, 11);
+ keyiter(k[ 4],r2,r1,r4, 62, 12); keyiter(k[ 5],r3,r2,r0, 63, 13);
+ keyiter(k[ 6],r4,r3,r1, 64, 14); keyiter(k[ 7],r0,r4,r2, 65, 15);
+ keyiter(k[ 8],r1,r0,r3, 66, 16); keyiter(k[ 9],r2,r1,r4, 67, 17);
+ keyiter(k[ 10],r3,r2,r0, 68, 18); keyiter(k[ 11],r4,r3,r1, 69, 19);
+ keyiter(k[ 12],r0,r4,r2, 70, 20); keyiter(k[ 13],r1,r0,r3, 71, 21);
+ keyiter(k[ 14],r2,r1,r4, 72, 22); keyiter(k[ 15],r3,r2,r0, 73, 23);
+ keyiter(k[ 16],r4,r3,r1, 74, 24); keyiter(k[ 17],r0,r4,r2, 75, 25);
+ keyiter(k[ 18],r1,r0,r3, 76, 26); keyiter(k[ 19],r2,r1,r4, 77, 27);
+ keyiter(k[ 20],r3,r2,r0, 78, 28); keyiter(k[ 21],r4,r3,r1, 79, 29);
+ keyiter(k[ 22],r0,r4,r2, 80, 30); keyiter(k[ 23],r1,r0,r3, 81, 31);
+
+ k += 50;
+
+ keyiter(k[-26],r2,r1,r4, 82,-18); keyiter(k[-25],r3,r2,r0, 83,-17);
+ keyiter(k[-24],r4,r3,r1, 84,-16); keyiter(k[-23],r0,r4,r2, 85,-15);
+ keyiter(k[-22],r1,r0,r3, 86,-14); keyiter(k[-21],r2,r1,r4, 87,-13);
+ keyiter(k[-20],r3,r2,r0, 88,-12); keyiter(k[-19],r4,r3,r1, 89,-11);
+ keyiter(k[-18],r0,r4,r2, 90,-10); keyiter(k[-17],r1,r0,r3, 91, -9);
+ keyiter(k[-16],r2,r1,r4, 92, -8); keyiter(k[-15],r3,r2,r0, 93, -7);
+ keyiter(k[-14],r4,r3,r1, 94, -6); keyiter(k[-13],r0,r4,r2, 95, -5);
+ keyiter(k[-12],r1,r0,r3, 96, -4); keyiter(k[-11],r2,r1,r4, 97, -3);
+ keyiter(k[-10],r3,r2,r0, 98, -2); keyiter(k[ -9],r4,r3,r1, 99, -1);
+ keyiter(k[ -8],r0,r4,r2,100, 0); keyiter(k[ -7],r1,r0,r3,101, 1);
+ keyiter(k[ -6],r2,r1,r4,102, 2); keyiter(k[ -5],r3,r2,r0,103, 3);
+ keyiter(k[ -4],r4,r3,r1,104, 4); keyiter(k[ -3],r0,r4,r2,105, 5);
+ keyiter(k[ -2],r1,r0,r3,106, 6); keyiter(k[ -1],r2,r1,r4,107, 7);
+ keyiter(k[ 0],r3,r2,r0,108, 8); keyiter(k[ 1],r4,r3,r1,109, 9);
+ keyiter(k[ 2],r0,r4,r2,110, 10); keyiter(k[ 3],r1,r0,r3,111, 11);
+ keyiter(k[ 4],r2,r1,r4,112, 12); keyiter(k[ 5],r3,r2,r0,113, 13);
+ keyiter(k[ 6],r4,r3,r1,114, 14); keyiter(k[ 7],r0,r4,r2,115, 15);
+ keyiter(k[ 8],r1,r0,r3,116, 16); keyiter(k[ 9],r2,r1,r4,117, 17);
+ keyiter(k[ 10],r3,r2,r0,118, 18); keyiter(k[ 11],r4,r3,r1,119, 19);
+ keyiter(k[ 12],r0,r4,r2,120, 20); keyiter(k[ 13],r1,r0,r3,121, 21);
+ keyiter(k[ 14],r2,r1,r4,122, 22); keyiter(k[ 15],r3,r2,r0,123, 23);
+ keyiter(k[ 16],r4,r3,r1,124, 24); keyiter(k[ 17],r0,r4,r2,125, 25);
+ keyiter(k[ 18],r1,r0,r3,126, 26); keyiter(k[ 19],r2,r1,r4,127, 27);
+ keyiter(k[ 20],r3,r2,r0,128, 28); keyiter(k[ 21],r4,r3,r1,129, 29);
+ keyiter(k[ 22],r0,r4,r2,130, 30); keyiter(k[ 23],r1,r0,r3,131, 31);
+
+ /* Apply S-boxes */
+
+ S3(r3,r4,r0,r1,r2); storekeys(r1,r2,r4,r3, 28); loadkeys(r1,r2,r4,r3, 24);
+ S4(r1,r2,r4,r3,r0); storekeys(r2,r4,r3,r0, 24); loadkeys(r2,r4,r3,r0, 20);
+ S5(r2,r4,r3,r0,r1); storekeys(r1,r2,r4,r0, 20); loadkeys(r1,r2,r4,r0, 16);
+ S6(r1,r2,r4,r0,r3); storekeys(r4,r3,r2,r0, 16); loadkeys(r4,r3,r2,r0, 12);
+ S7(r4,r3,r2,r0,r1); storekeys(r1,r2,r0,r4, 12); loadkeys(r1,r2,r0,r4, 8);
+ S0(r1,r2,r0,r4,r3); storekeys(r0,r2,r4,r1, 8); loadkeys(r0,r2,r4,r1, 4);
+ S1(r0,r2,r4,r1,r3); storekeys(r3,r4,r1,r0, 4); loadkeys(r3,r4,r1,r0, 0);
+ S2(r3,r4,r1,r0,r2); storekeys(r2,r4,r3,r0, 0); loadkeys(r2,r4,r3,r0, -4);
+ S3(r2,r4,r3,r0,r1); storekeys(r0,r1,r4,r2, -4); loadkeys(r0,r1,r4,r2, -8);
+ S4(r0,r1,r4,r2,r3); storekeys(r1,r4,r2,r3, -8); loadkeys(r1,r4,r2,r3,-12);
+ S5(r1,r4,r2,r3,r0); storekeys(r0,r1,r4,r3,-12); loadkeys(r0,r1,r4,r3,-16);
+ S6(r0,r1,r4,r3,r2); storekeys(r4,r2,r1,r3,-16); loadkeys(r4,r2,r1,r3,-20);
+ S7(r4,r2,r1,r3,r0); storekeys(r0,r1,r3,r4,-20); loadkeys(r0,r1,r3,r4,-24);
+ S0(r0,r1,r3,r4,r2); storekeys(r3,r1,r4,r0,-24); loadkeys(r3,r1,r4,r0,-28);
+ k -= 50;
+ S1(r3,r1,r4,r0,r2); storekeys(r2,r4,r0,r3, 22); loadkeys(r2,r4,r0,r3, 18);
+ S2(r2,r4,r0,r3,r1); storekeys(r1,r4,r2,r3, 18); loadkeys(r1,r4,r2,r3, 14);
+ S3(r1,r4,r2,r3,r0); storekeys(r3,r0,r4,r1, 14); loadkeys(r3,r0,r4,r1, 10);
+ S4(r3,r0,r4,r1,r2); storekeys(r0,r4,r1,r2, 10); loadkeys(r0,r4,r1,r2, 6);
+ S5(r0,r4,r1,r2,r3); storekeys(r3,r0,r4,r2, 6); loadkeys(r3,r0,r4,r2, 2);
+ S6(r3,r0,r4,r2,r1); storekeys(r4,r1,r0,r2, 2); loadkeys(r4,r1,r0,r2, -2);
+ S7(r4,r1,r0,r2,r3); storekeys(r3,r0,r2,r4, -2); loadkeys(r3,r0,r2,r4, -6);
+ S0(r3,r0,r2,r4,r1); storekeys(r2,r0,r4,r3, -6); loadkeys(r2,r0,r4,r3,-10);
+ S1(r2,r0,r4,r3,r1); storekeys(r1,r4,r3,r2,-10); loadkeys(r1,r4,r3,r2,-14);
+ S2(r1,r4,r3,r2,r0); storekeys(r0,r4,r1,r2,-14); loadkeys(r0,r4,r1,r2,-18);
+ S3(r0,r4,r1,r2,r3); storekeys(r2,r3,r4,r0,-18); loadkeys(r2,r3,r4,r0,-22);
+ k -= 50;
+ S4(r2,r3,r4,r0,r1); storekeys(r3,r4,r0,r1, 28); loadkeys(r3,r4,r0,r1, 24);
+ S5(r3,r4,r0,r1,r2); storekeys(r2,r3,r4,r1, 24); loadkeys(r2,r3,r4,r1, 20);
+ S6(r2,r3,r4,r1,r0); storekeys(r4,r0,r3,r1, 20); loadkeys(r4,r0,r3,r1, 16);
+ S7(r4,r0,r3,r1,r2); storekeys(r2,r3,r1,r4, 16); loadkeys(r2,r3,r1,r4, 12);
+ S0(r2,r3,r1,r4,r0); storekeys(r1,r3,r4,r2, 12); loadkeys(r1,r3,r4,r2, 8);
+ S1(r1,r3,r4,r2,r0); storekeys(r0,r4,r2,r1, 8); loadkeys(r0,r4,r2,r1, 4);
+ S2(r0,r4,r2,r1,r3); storekeys(r3,r4,r0,r1, 4); loadkeys(r3,r4,r0,r1, 0);
+ S3(r3,r4,r0,r1,r2); storekeys(r1,r2,r4,r3, 0);
+
+ return 0;
+}
+
+static void encrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ const u32
+ *k = ((struct serpent_ctx *)ctx)->expkey,
+ *s = (const u32 *)src;
+ u32 *d = (u32 *)dst,
+ r0, r1, r2, r3, r4;
+
+/*
+ * Note: The conversions between u8* and u32* might cause trouble
+ * on architectures with stricter alignment rules than x86
+ */
+
+ r0 = le32_to_cpu(s[0]);
+ r1 = le32_to_cpu(s[1]);
+ r2 = le32_to_cpu(s[2]);
+ r3 = le32_to_cpu(s[3]);
+
+ K(r0,r1,r2,r3,0);
+ S0(r0,r1,r2,r3,r4); LK(r2,r1,r3,r0,r4,1);
+ S1(r2,r1,r3,r0,r4); LK(r4,r3,r0,r2,r1,2);
+ S2(r4,r3,r0,r2,r1); LK(r1,r3,r4,r2,r0,3);
+ S3(r1,r3,r4,r2,r0); LK(r2,r0,r3,r1,r4,4);
+ S4(r2,r0,r3,r1,r4); LK(r0,r3,r1,r4,r2,5);
+ S5(r0,r3,r1,r4,r2); LK(r2,r0,r3,r4,r1,6);
+ S6(r2,r0,r3,r4,r1); LK(r3,r1,r0,r4,r2,7);
+ S7(r3,r1,r0,r4,r2); LK(r2,r0,r4,r3,r1,8);
+ S0(r2,r0,r4,r3,r1); LK(r4,r0,r3,r2,r1,9);
+ S1(r4,r0,r3,r2,r1); LK(r1,r3,r2,r4,r0,10);
+ S2(r1,r3,r2,r4,r0); LK(r0,r3,r1,r4,r2,11);
+ S3(r0,r3,r1,r4,r2); LK(r4,r2,r3,r0,r1,12);
+ S4(r4,r2,r3,r0,r1); LK(r2,r3,r0,r1,r4,13);
+ S5(r2,r3,r0,r1,r4); LK(r4,r2,r3,r1,r0,14);
+ S6(r4,r2,r3,r1,r0); LK(r3,r0,r2,r1,r4,15);
+ S7(r3,r0,r2,r1,r4); LK(r4,r2,r1,r3,r0,16);
+ S0(r4,r2,r1,r3,r0); LK(r1,r2,r3,r4,r0,17);
+ S1(r1,r2,r3,r4,r0); LK(r0,r3,r4,r1,r2,18);
+ S2(r0,r3,r4,r1,r2); LK(r2,r3,r0,r1,r4,19);
+ S3(r2,r3,r0,r1,r4); LK(r1,r4,r3,r2,r0,20);
+ S4(r1,r4,r3,r2,r0); LK(r4,r3,r2,r0,r1,21);
+ S5(r4,r3,r2,r0,r1); LK(r1,r4,r3,r0,r2,22);
+ S6(r1,r4,r3,r0,r2); LK(r3,r2,r4,r0,r1,23);
+ S7(r3,r2,r4,r0,r1); LK(r1,r4,r0,r3,r2,24);
+ S0(r1,r4,r0,r3,r2); LK(r0,r4,r3,r1,r2,25);
+ S1(r0,r4,r3,r1,r2); LK(r2,r3,r1,r0,r4,26);
+ S2(r2,r3,r1,r0,r4); LK(r4,r3,r2,r0,r1,27);
+ S3(r4,r3,r2,r0,r1); LK(r0,r1,r3,r4,r2,28);
+ S4(r0,r1,r3,r4,r2); LK(r1,r3,r4,r2,r0,29);
+ S5(r1,r3,r4,r2,r0); LK(r0,r1,r3,r2,r4,30);
+ S6(r0,r1,r3,r2,r4); LK(r3,r4,r1,r2,r0,31);
+ S7(r3,r4,r1,r2,r0); K(r0,r1,r2,r3,32);
+
+ d[0] = cpu_to_le32(r0);
+ d[1] = cpu_to_le32(r1);
+ d[2] = cpu_to_le32(r2);
+ d[3] = cpu_to_le32(r3);
+}
+
+static void decrypt(void *ctx, u8 *dst, const u8 *src)
+{
+ const u32
+ *k = ((struct serpent_ctx *)ctx)->expkey,
+ *s = (const u32 *)src;
+ u32 *d = (u32 *)dst,
+ r0, r1, r2, r3, r4;
+
+ r0 = le32_to_cpu(s[0]);
+ r1 = le32_to_cpu(s[1]);
+ r2 = le32_to_cpu(s[2]);
+ r3 = le32_to_cpu(s[3]);
+
+ K(r0,r1,r2,r3,32);
+ SI7(r0,r1,r2,r3,r4); KL(r1,r3,r0,r4,r2,31);
+ SI6(r1,r3,r0,r4,r2); KL(r0,r2,r4,r1,r3,30);
+ SI5(r0,r2,r4,r1,r3); KL(r2,r3,r0,r4,r1,29);
+ SI4(r2,r3,r0,r4,r1); KL(r2,r0,r1,r4,r3,28);
+ SI3(r2,r0,r1,r4,r3); KL(r1,r2,r3,r4,r0,27);
+ SI2(r1,r2,r3,r4,r0); KL(r2,r0,r4,r3,r1,26);
+ SI1(r2,r0,r4,r3,r1); KL(r1,r0,r4,r3,r2,25);
+ SI0(r1,r0,r4,r3,r2); KL(r4,r2,r0,r1,r3,24);
+ SI7(r4,r2,r0,r1,r3); KL(r2,r1,r4,r3,r0,23);
+ SI6(r2,r1,r4,r3,r0); KL(r4,r0,r3,r2,r1,22);
+ SI5(r4,r0,r3,r2,r1); KL(r0,r1,r4,r3,r2,21);
+ SI4(r0,r1,r4,r3,r2); KL(r0,r4,r2,r3,r1,20);
+ SI3(r0,r4,r2,r3,r1); KL(r2,r0,r1,r3,r4,19);
+ SI2(r2,r0,r1,r3,r4); KL(r0,r4,r3,r1,r2,18);
+ SI1(r0,r4,r3,r1,r2); KL(r2,r4,r3,r1,r0,17);
+ SI0(r2,r4,r3,r1,r0); KL(r3,r0,r4,r2,r1,16);
+ SI7(r3,r0,r4,r2,r1); KL(r0,r2,r3,r1,r4,15);
+ SI6(r0,r2,r3,r1,r4); KL(r3,r4,r1,r0,r2,14);
+ SI5(r3,r4,r1,r0,r2); KL(r4,r2,r3,r1,r0,13);
+ SI4(r4,r2,r3,r1,r0); KL(r4,r3,r0,r1,r2,12);
+ SI3(r4,r3,r0,r1,r2); KL(r0,r4,r2,r1,r3,11);
+ SI2(r0,r4,r2,r1,r3); KL(r4,r3,r1,r2,r0,10);
+ SI1(r4,r3,r1,r2,r0); KL(r0,r3,r1,r2,r4,9);
+ SI0(r0,r3,r1,r2,r4); KL(r1,r4,r3,r0,r2,8);
+ SI7(r1,r4,r3,r0,r2); KL(r4,r0,r1,r2,r3,7);
+ SI6(r4,r0,r1,r2,r3); KL(r1,r3,r2,r4,r0,6);
+ SI5(r1,r3,r2,r4,r0); KL(r3,r0,r1,r2,r4,5);
+ SI4(r3,r0,r1,r2,r4); KL(r3,r1,r4,r2,r0,4);
+ SI3(r3,r1,r4,r2,r0); KL(r4,r3,r0,r2,r1,3);
+ SI2(r4,r3,r0,r2,r1); KL(r3,r1,r2,r0,r4,2);
+ SI1(r3,r1,r2,r0,r4); KL(r4,r1,r2,r0,r3,1);
+ SI0(r4,r1,r2,r0,r3); K(r2,r3,r1,r4,0);
+
+ d[0] = cpu_to_le32(r2);
+ d[1] = cpu_to_le32(r3);
+ d[2] = cpu_to_le32(r1);
+ d[3] = cpu_to_le32(r4);
+}
+
+static struct crypto_alg serpent_alg = {
+ .cra_name = "serpent",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = SERPENT_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct serpent_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(serpent_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = SERPENT_MIN_KEY_SIZE,
+ .cia_max_keysize = SERPENT_MAX_KEY_SIZE,
+ .cia_ivsize = SERPENT_BLOCK_SIZE,
+ .cia_setkey = setkey,
+ .cia_encrypt = encrypt,
+ .cia_decrypt = decrypt } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&serpent_alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&serpent_alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Serpent Cipher Algorithm");
+MODULE_AUTHOR("Dag Arne Osvik <osvik@ii.uib.no>");
diff -Nru a/crypto/sha1.c b/crypto/sha1.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/sha1.c Thu May 8 10:41:38 2003
@@ -0,0 +1,208 @@
+/*
+ * Cryptographic API.
+ *
+ * SHA1 Secure Hash Algorithm.
+ *
+ * Derived from the cryptoapi implementation, adapted for the in-place
+ * scatterlist interface. Originally based on the public domain
+ * implementation written by Steve Reid.
+ *
+ * Copyright (c) Alan Smithee.
+ * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
+ * Copyright (c) Jean-Francois Dive <jef@linuxbe.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/crypto.h>
+#include <asm/scatterlist.h>
+#include <asm/byteorder.h>
+
+#define SHA1_DIGEST_SIZE 20
+#define SHA1_HMAC_BLOCK_SIZE 64
+
+static inline u32 rol(u32 value, u32 bits)
+{
+ return (((value) << (bits)) | ((value) >> (32 - (bits))));
+}
+
+/* blk0() and blk() perform the initial expand. */
+/* I got the idea of expanding during the round function from SSLeay */
+#define blk0(i) block32[i]
+
+#define blk(i) (block32[i&15] = rol(block32[(i+13)&15]^block32[(i+8)&15] \
+ ^block32[(i+2)&15]^block32[i&15],1))
+
+/* (R0+R1), R2, R3, R4 are the different operations used in SHA1 */
+#define R0(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk0(i)+0x5A827999+rol(v,5); \
+ w=rol(w,30);
+#define R1(v,w,x,y,z,i) z+=((w&(x^y))^y)+blk(i)+0x5A827999+rol(v,5); \
+ w=rol(w,30);
+#define R2(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0x6ED9EBA1+rol(v,5);w=rol(w,30);
+#define R3(v,w,x,y,z,i) z+=(((w|x)&y)|(w&x))+blk(i)+0x8F1BBCDC+rol(v,5); \
+ w=rol(w,30);
+#define R4(v,w,x,y,z,i) z+=(w^x^y)+blk(i)+0xCA62C1D6+rol(v,5);w=rol(w,30);
+
+struct sha1_ctx {
+ u64 count;
+ u32 state[5];
+ u8 buffer[64];
+};
+
+/* Hash a single 512-bit block. This is the core of the algorithm. */
+static void sha1_transform(u32 *state, const u8 *in)
+{
+ u32 a, b, c, d, e;
+ u32 block32[16];
+
+ /* convert/copy data to workspace */
+ for (a = 0; a < sizeof(block32)/sizeof(u32); a++)
+ block32[a] = be32_to_cpu (((const u32 *)in)[a]);
+
+ /* Copy context->state[] to working vars */
+ a = state[0];
+ b = state[1];
+ c = state[2];
+ d = state[3];
+ e = state[4];
+
+ /* 4 rounds of 20 operations each. Loop unrolled. */
+ R0(a,b,c,d,e, 0); R0(e,a,b,c,d, 1); R0(d,e,a,b,c, 2); R0(c,d,e,a,b, 3);
+ R0(b,c,d,e,a, 4); R0(a,b,c,d,e, 5); R0(e,a,b,c,d, 6); R0(d,e,a,b,c, 7);
+ R0(c,d,e,a,b, 8); R0(b,c,d,e,a, 9); R0(a,b,c,d,e,10); R0(e,a,b,c,d,11);
+ R0(d,e,a,b,c,12); R0(c,d,e,a,b,13); R0(b,c,d,e,a,14); R0(a,b,c,d,e,15);
+ R1(e,a,b,c,d,16); R1(d,e,a,b,c,17); R1(c,d,e,a,b,18); R1(b,c,d,e,a,19);
+ R2(a,b,c,d,e,20); R2(e,a,b,c,d,21); R2(d,e,a,b,c,22); R2(c,d,e,a,b,23);
+ R2(b,c,d,e,a,24); R2(a,b,c,d,e,25); R2(e,a,b,c,d,26); R2(d,e,a,b,c,27);
+ R2(c,d,e,a,b,28); R2(b,c,d,e,a,29); R2(a,b,c,d,e,30); R2(e,a,b,c,d,31);
+ R2(d,e,a,b,c,32); R2(c,d,e,a,b,33); R2(b,c,d,e,a,34); R2(a,b,c,d,e,35);
+ R2(e,a,b,c,d,36); R2(d,e,a,b,c,37); R2(c,d,e,a,b,38); R2(b,c,d,e,a,39);
+ R3(a,b,c,d,e,40); R3(e,a,b,c,d,41); R3(d,e,a,b,c,42); R3(c,d,e,a,b,43);
+ R3(b,c,d,e,a,44); R3(a,b,c,d,e,45); R3(e,a,b,c,d,46); R3(d,e,a,b,c,47);
+ R3(c,d,e,a,b,48); R3(b,c,d,e,a,49); R3(a,b,c,d,e,50); R3(e,a,b,c,d,51);
+ R3(d,e,a,b,c,52); R3(c,d,e,a,b,53); R3(b,c,d,e,a,54); R3(a,b,c,d,e,55);
+ R3(e,a,b,c,d,56); R3(d,e,a,b,c,57); R3(c,d,e,a,b,58); R3(b,c,d,e,a,59);
+ R4(a,b,c,d,e,60); R4(e,a,b,c,d,61); R4(d,e,a,b,c,62); R4(c,d,e,a,b,63);
+ R4(b,c,d,e,a,64); R4(a,b,c,d,e,65); R4(e,a,b,c,d,66); R4(d,e,a,b,c,67);
+ R4(c,d,e,a,b,68); R4(b,c,d,e,a,69); R4(a,b,c,d,e,70); R4(e,a,b,c,d,71);
+ R4(d,e,a,b,c,72); R4(c,d,e,a,b,73); R4(b,c,d,e,a,74); R4(a,b,c,d,e,75);
+ R4(e,a,b,c,d,76); R4(d,e,a,b,c,77); R4(c,d,e,a,b,78); R4(b,c,d,e,a,79);
+ /* Add the working vars back into context.state[] */
+ state[0] += a;
+ state[1] += b;
+ state[2] += c;
+ state[3] += d;
+ state[4] += e;
+ /* Wipe variables */
+ a = b = c = d = e = 0;
+ memset (block32, 0x00, sizeof block32);
+}
+
+static void sha1_init(void *ctx)
+{
+ struct sha1_ctx *sctx = ctx;
+ static const struct sha1_ctx initstate = {
+ 0,
+ { 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0 },
+ { 0, }
+ };
+
+ *sctx = initstate;
+}
+
+static void sha1_update(void *ctx, const u8 *data, unsigned int len)
+{
+ struct sha1_ctx *sctx = ctx;
+ unsigned int i, j;
+
+ j = (sctx->count >> 3) & 0x3f;
+ sctx->count += (u64)len << 3;
+
+ if ((j + len) > 63) {
+ memcpy(&sctx->buffer[j], data, (i = 64-j));
+ sha1_transform(sctx->state, sctx->buffer);
+ for ( ; i + 63 < len; i += 64) {
+ sha1_transform(sctx->state, &data[i]);
+ }
+ j = 0;
+ } else
+ i = 0;
+ memcpy(&sctx->buffer[j], &data[i], len - i);
+}
+
+
+/* Add padding and return the message digest. */
+static void sha1_final(void* ctx, u8 *out)
+{
+ struct sha1_ctx *sctx = ctx;
+ u32 i, j, index, padlen;
+ u64 t;
+ u8 bits[8] = { 0, };
+ static const u8 padding[64] = { 0x80, };
+
+ t = sctx->count;
+ bits[7] = 0xff & t; t>>=8;
+ bits[6] = 0xff & t; t>>=8;
+ bits[5] = 0xff & t; t>>=8;
+ bits[4] = 0xff & t; t>>=8;
+ bits[3] = 0xff & t; t>>=8;
+ bits[2] = 0xff & t; t>>=8;
+ bits[1] = 0xff & t; t>>=8;
+ bits[0] = 0xff & t;
+
+ /* Pad out to 56 mod 64 */
+ index = (sctx->count >> 3) & 0x3f;
+ padlen = (index < 56) ? (56 - index) : ((64+56) - index);
+ sha1_update(sctx, padding, padlen);
+
+ /* Append length */
+ sha1_update(sctx, bits, sizeof bits);
+
+ /* Store state in digest */
+ for (i = j = 0; i < 5; i++, j += 4) {
+ u32 t2 = sctx->state[i];
+ out[j+3] = t2 & 0xff; t2>>=8;
+ out[j+2] = t2 & 0xff; t2>>=8;
+ out[j+1] = t2 & 0xff; t2>>=8;
+ out[j ] = t2 & 0xff;
+ }
+
+ /* Wipe context */
+ memset(sctx, 0, sizeof *sctx);
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "sha1",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = SHA1_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sha1_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = SHA1_DIGEST_SIZE,
+ .dia_init = sha1_init,
+ .dia_update = sha1_update,
+ .dia_final = sha1_final } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm");
diff -Nru a/crypto/sha256.c b/crypto/sha256.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/sha256.c Thu May 8 10:41:38 2003
@@ -0,0 +1,362 @@
+/*
+ * Cryptographic API.
+ *
+ * SHA-256, as specified in
+ * http://csrc.nist.gov/cryptval/shs/sha256-384-512.pdf
+ *
+ * SHA-256 code by Jean-Luc Cooke <jlcooke@certainkey.com>.
+ *
+ * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
+ * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/crypto.h>
+#include <asm/scatterlist.h>
+#include <asm/byteorder.h>
+
+#define SHA256_DIGEST_SIZE 32
+#define SHA256_HMAC_BLOCK_SIZE 64
+
+struct sha256_ctx {
+ u32 count[2];
+ u32 state[8];
+ u8 buf[128];
+};
+
+static inline u32 Ch(u32 x, u32 y, u32 z)
+{
+ return ((x & y) ^ (~x & z));
+}
+
+static inline u32 Maj(u32 x, u32 y, u32 z)
+{
+ return ((x & y) ^ (x & z) ^ (y & z));
+}
+
+static inline u32 RORu32(u32 x, u32 y)
+{
+ return (x >> y) | (x << (32 - y));
+}
+
+#define e0(x) (RORu32(x, 2) ^ RORu32(x,13) ^ RORu32(x,22))
+#define e1(x) (RORu32(x, 6) ^ RORu32(x,11) ^ RORu32(x,25))
+#define s0(x) (RORu32(x, 7) ^ RORu32(x,18) ^ (x >> 3))
+#define s1(x) (RORu32(x,17) ^ RORu32(x,19) ^ (x >> 10))
+
+#define H0 0x6a09e667
+#define H1 0xbb67ae85
+#define H2 0x3c6ef372
+#define H3 0xa54ff53a
+#define H4 0x510e527f
+#define H5 0x9b05688c
+#define H6 0x1f83d9ab
+#define H7 0x5be0cd19
+
+static inline void LOAD_OP(int I, u32 *W, const u8 *input)
+{
+ u32 t1 = input[(4 * I)] & 0xff;
+
+ t1 <<= 8;
+ t1 |= input[(4 * I) + 1] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(4 * I) + 2] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(4 * I) + 3] & 0xff;
+ W[I] = t1;
+}
+
+static inline void BLEND_OP(int I, u32 *W)
+{
+ W[I] = s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16];
+}
+
+static void sha256_transform(u32 *state, const u8 *input)
+{
+ u32 a, b, c, d, e, f, g, h, t1, t2;
+ u32 W[64];
+ int i;
+
+ /* load the input */
+ for (i = 0; i < 16; i++)
+ LOAD_OP(i, W, input);
+
+ /* now blend */
+ for (i = 16; i < 64; i++)
+ BLEND_OP(i, W);
+
+ /* load the state into our registers */
+ a=state[0]; b=state[1]; c=state[2]; d=state[3];
+ e=state[4]; f=state[5]; g=state[6]; h=state[7];
+
+ /* now iterate */
+ t1 = h + e1(e) + Ch(e,f,g) + 0x428a2f98 + W[ 0];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0x71374491 + W[ 1];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0xb5c0fbcf + W[ 2];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0xe9b5dba5 + W[ 3];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x3956c25b + W[ 4];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0x59f111f1 + W[ 5];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x923f82a4 + W[ 6];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0xab1c5ed5 + W[ 7];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0xd807aa98 + W[ 8];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0x12835b01 + W[ 9];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0x243185be + W[10];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0x550c7dc3 + W[11];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x72be5d74 + W[12];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0x80deb1fe + W[13];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x9bdc06a7 + W[14];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0xc19bf174 + W[15];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0xe49b69c1 + W[16];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0xefbe4786 + W[17];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0x0fc19dc6 + W[18];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0x240ca1cc + W[19];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x2de92c6f + W[20];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0x4a7484aa + W[21];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x5cb0a9dc + W[22];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0x76f988da + W[23];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0x983e5152 + W[24];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0xa831c66d + W[25];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0xb00327c8 + W[26];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0xbf597fc7 + W[27];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0xc6e00bf3 + W[28];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0xd5a79147 + W[29];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x06ca6351 + W[30];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0x14292967 + W[31];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0x27b70a85 + W[32];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0x2e1b2138 + W[33];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0x4d2c6dfc + W[34];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0x53380d13 + W[35];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x650a7354 + W[36];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0x766a0abb + W[37];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x81c2c92e + W[38];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0x92722c85 + W[39];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0xa2bfe8a1 + W[40];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0xa81a664b + W[41];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0xc24b8b70 + W[42];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0xc76c51a3 + W[43];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0xd192e819 + W[44];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0xd6990624 + W[45];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0xf40e3585 + W[46];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0x106aa070 + W[47];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0x19a4c116 + W[48];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0x1e376c08 + W[49];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0x2748774c + W[50];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0x34b0bcb5 + W[51];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x391c0cb3 + W[52];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0x4ed8aa4a + W[53];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0x5b9cca4f + W[54];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0x682e6ff3 + W[55];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ t1 = h + e1(e) + Ch(e,f,g) + 0x748f82ee + W[56];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + 0x78a5636f + W[57];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + 0x84c87814 + W[58];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + 0x8cc70208 + W[59];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + 0x90befffa + W[60];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + 0xa4506ceb + W[61];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + 0xbef9a3f7 + W[62];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + 0xc67178f2 + W[63];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+
+ state[0] += a; state[1] += b; state[2] += c; state[3] += d;
+ state[4] += e; state[5] += f; state[6] += g; state[7] += h;
+
+ /* clear any sensitive info... */
+ a = b = c = d = e = f = g = h = t1 = t2 = 0;
+ memset(W, 0, 64 * sizeof(u32));
+}
+
+static void sha256_init(void *ctx)
+{
+ struct sha256_ctx *sctx = ctx;
+ sctx->state[0] = H0;
+ sctx->state[1] = H1;
+ sctx->state[2] = H2;
+ sctx->state[3] = H3;
+ sctx->state[4] = H4;
+ sctx->state[5] = H5;
+ sctx->state[6] = H6;
+ sctx->state[7] = H7;
+ sctx->count[0] = sctx->count[1] = 0;
+ memset(sctx->buf, 0, sizeof(sctx->buf));
+}
+
+static void sha256_update(void *ctx, const u8 *data, unsigned int len)
+{
+ struct sha256_ctx *sctx = ctx;
+ unsigned int i, index, part_len;
+
+ /* Compute number of bytes mod 128 */
+ index = (unsigned int)((sctx->count[0] >> 3) & 0x3f);
+
+ /* Update number of bits */
+ if ((sctx->count[0] += (len << 3)) < (len << 3))
+ sctx->count[1]++;
+ sctx->count[1] += (len >> 29);
+
+ part_len = 64 - index;
+
+ /* Transform as many times as possible. */
+ if (len >= part_len) {
+ memcpy(&sctx->buf[index], data, part_len);
+ sha256_transform(sctx->state, sctx->buf);
+
+ for (i = part_len; i + 63 < len; i += 64)
+ sha256_transform(sctx->state, &data[i]);
+ index = 0;
+ } else {
+ i = 0;
+ }
+
+ /* Buffer remaining input */
+ memcpy(&sctx->buf[index], &data[i], len-i);
+}
+
+static void sha256_final(void* ctx, u8 *out)
+{
+ struct sha256_ctx *sctx = ctx;
+ u8 bits[8];
+ unsigned int index, pad_len, t;
+ int i, j;
+ static const u8 padding[64] = { 0x80, };
+
+ /* Save number of bits */
+ t = sctx->count[0];
+ bits[7] = t; t >>= 8;
+ bits[6] = t; t >>= 8;
+ bits[5] = t; t >>= 8;
+ bits[4] = t;
+ t = sctx->count[1];
+ bits[3] = t; t >>= 8;
+ bits[2] = t; t >>= 8;
+ bits[1] = t; t >>= 8;
+ bits[0] = t;
+
+ /* Pad out to 56 mod 64. */
+ index = (sctx->count[0] >> 3) & 0x3f;
+ pad_len = (index < 56) ? (56 - index) : ((64+56) - index);
+ sha256_update(sctx, padding, pad_len);
+
+ /* Append length (before padding) */
+ sha256_update(sctx, bits, 8);
+
+ /* Store state in digest */
+ for (i = j = 0; i < 8; i++, j += 4) {
+ t = sctx->state[i];
+ out[j+3] = t; t >>= 8;
+ out[j+2] = t; t >>= 8;
+ out[j+1] = t; t >>= 8;
+ out[j ] = t;
+ }
+
+ /* Zeroize sensitive information. */
+ memset(sctx, 0, sizeof(*sctx));
+}
+
+
+static struct crypto_alg alg = {
+ .cra_name = "sha256",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = SHA256_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sha256_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = SHA256_DIGEST_SIZE,
+ .dia_init = sha256_init,
+ .dia_update = sha256_update,
+ .dia_final = sha256_final } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm");
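For readers following the unrolled rounds above, here is a minimal pure-Python model of the same FIPS 180-2 SHA-256 transform. It is an illustrative sketch, not kernel code: the round constants are the high 32 bits of the sha512_K table further down, and the result can be checked against Python's hashlib.

```python
import hashlib
import struct

M = 0xffffffff
def ror(x, n): return ((x >> n) | (x << (32 - n))) & M
def Ch(x, y, z): return (x & y) ^ (~x & z)
def Maj(x, y, z): return (x & y) ^ (x & z) ^ (y & z)
def e0(x): return ror(x, 2) ^ ror(x, 13) ^ ror(x, 22)
def e1(x): return ror(x, 6) ^ ror(x, 11) ^ ror(x, 25)
def s0(x): return ror(x, 7) ^ ror(x, 18) ^ (x >> 3)
def s1(x): return ror(x, 17) ^ ror(x, 19) ^ (x >> 10)

K = [0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1,
     0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
     0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786,
     0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
     0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147,
     0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13,
     0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b,
     0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
     0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a,
     0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208,
     0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2]

def sha256(msg: bytes) -> bytes:
    # sha256_final's padding: 0x80, zeros to 56 mod 64, 64-bit bit count
    bitlen = len(msg) * 8
    msg += b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + struct.pack(">Q", bitlen)
    state = [0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
             0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19]
    for off in range(0, len(msg), 64):
        W = list(struct.unpack(">16I", msg[off:off + 64]))   # LOAD_OP
        for i in range(16, 64):                              # BLEND_OP
            W.append((s1(W[i - 2]) + W[i - 7] + s0(W[i - 15]) + W[i - 16]) & M)
        a, b, c, d, e, f, g, h = state
        for i in range(64):  # one iteration per unrolled round above
            t1 = (h + e1(e) + Ch(e, f, g) + K[i] + W[i]) & M
            t2 = (e0(a) + Maj(a, b, c)) & M
            h, g, f, e = g, f, e, (d + t1) & M
            d, c, b, a = c, b, a, (t1 + t2) & M
        state = [(s + v) & M for s, v in zip(state, [a, b, c, d, e, f, g, h])]
    return b"".join(struct.pack(">I", s) for s in state)

assert sha256(b"abc") == hashlib.sha256(b"abc").digest()
```

The C code merely unrolls the 64-iteration round loop by hand; the register rotation a..h is identical.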
diff -Nru a/crypto/sha512.c b/crypto/sha512.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/sha512.c Thu May 8 10:41:38 2003
@@ -0,0 +1,373 @@
+/* SHA-512 code by Jean-Luc Cooke <jlcooke@certainkey.com>
+ *
+ * Copyright (c) Jean-Luc Cooke <jlcooke@certainkey.com>
+ * Copyright (c) Andrew McDonald <andrew@mcdonald.org.uk>
+ * Copyright (c) 2003 Kyle McMartin <kyle@debian.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2, or (at your option) any
+ * later version.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/crypto.h>
+
+#include <asm/scatterlist.h>
+#include <asm/byteorder.h>
+
+#define SHA384_DIGEST_SIZE 48
+#define SHA512_DIGEST_SIZE 64
+#define SHA384_HMAC_BLOCK_SIZE 96
+#define SHA512_HMAC_BLOCK_SIZE 128
+
+struct sha512_ctx {
+ u64 state[8];
+ u32 count[4];
+ u8 buf[128];
+};
+
+static inline u64 Ch(u64 x, u64 y, u64 z)
+{
+ return ((x & y) ^ (~x & z));
+}
+
+static inline u64 Maj(u64 x, u64 y, u64 z)
+{
+ return ((x & y) ^ (x & z) ^ (y & z));
+}
+
+static inline u64 RORu64(u64 x, u64 y)
+{
+ return (x >> y) | (x << (64 - y));
+}
+
+const u64 sha512_K[80] = {
+ 0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL,
+ 0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL,
+ 0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL,
+ 0x12835b0145706fbeULL, 0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL,
+ 0x72be5d74f27b896fULL, 0x80deb1fe3b1696b1ULL, 0x9bdc06a725c71235ULL,
+ 0xc19bf174cf692694ULL, 0xe49b69c19ef14ad2ULL, 0xefbe4786384f25e3ULL,
+ 0x0fc19dc68b8cd5b5ULL, 0x240ca1cc77ac9c65ULL, 0x2de92c6f592b0275ULL,
+ 0x4a7484aa6ea6e483ULL, 0x5cb0a9dcbd41fbd4ULL, 0x76f988da831153b5ULL,
+ 0x983e5152ee66dfabULL, 0xa831c66d2db43210ULL, 0xb00327c898fb213fULL,
+ 0xbf597fc7beef0ee4ULL, 0xc6e00bf33da88fc2ULL, 0xd5a79147930aa725ULL,
+ 0x06ca6351e003826fULL, 0x142929670a0e6e70ULL, 0x27b70a8546d22ffcULL,
+ 0x2e1b21385c26c926ULL, 0x4d2c6dfc5ac42aedULL, 0x53380d139d95b3dfULL,
+ 0x650a73548baf63deULL, 0x766a0abb3c77b2a8ULL, 0x81c2c92e47edaee6ULL,
+ 0x92722c851482353bULL, 0xa2bfe8a14cf10364ULL, 0xa81a664bbc423001ULL,
+ 0xc24b8b70d0f89791ULL, 0xc76c51a30654be30ULL, 0xd192e819d6ef5218ULL,
+ 0xd69906245565a910ULL, 0xf40e35855771202aULL, 0x106aa07032bbd1b8ULL,
+ 0x19a4c116b8d2d0c8ULL, 0x1e376c085141ab53ULL, 0x2748774cdf8eeb99ULL,
+ 0x34b0bcb5e19b48a8ULL, 0x391c0cb3c5c95a63ULL, 0x4ed8aa4ae3418acbULL,
+ 0x5b9cca4f7763e373ULL, 0x682e6ff3d6b2b8a3ULL, 0x748f82ee5defb2fcULL,
+ 0x78a5636f43172f60ULL, 0x84c87814a1f0ab72ULL, 0x8cc702081a6439ecULL,
+ 0x90befffa23631e28ULL, 0xa4506cebde82bde9ULL, 0xbef9a3f7b2c67915ULL,
+ 0xc67178f2e372532bULL, 0xca273eceea26619cULL, 0xd186b8c721c0c207ULL,
+ 0xeada7dd6cde0eb1eULL, 0xf57d4f7fee6ed178ULL, 0x06f067aa72176fbaULL,
+ 0x0a637dc5a2c898a6ULL, 0x113f9804bef90daeULL, 0x1b710b35131c471bULL,
+ 0x28db77f523047d84ULL, 0x32caab7b40c72493ULL, 0x3c9ebe0a15c9bebcULL,
+ 0x431d67c49c100d4cULL, 0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL,
+ 0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL,
+};
+
+#define e0(x) (RORu64(x,28) ^ RORu64(x,34) ^ RORu64(x,39))
+#define e1(x) (RORu64(x,14) ^ RORu64(x,18) ^ RORu64(x,41))
+#define s0(x) (RORu64(x, 1) ^ RORu64(x, 8) ^ (x >> 7))
+#define s1(x) (RORu64(x,19) ^ RORu64(x,61) ^ (x >> 6))
+
+/* H* initial state for SHA-512 */
+#define H0 0x6a09e667f3bcc908ULL
+#define H1 0xbb67ae8584caa73bULL
+#define H2 0x3c6ef372fe94f82bULL
+#define H3 0xa54ff53a5f1d36f1ULL
+#define H4 0x510e527fade682d1ULL
+#define H5 0x9b05688c2b3e6c1fULL
+#define H6 0x1f83d9abfb41bd6bULL
+#define H7 0x5be0cd19137e2179ULL
+
+/* H'* initial state for SHA-384 */
+#define HP0 0xcbbb9d5dc1059ed8ULL
+#define HP1 0x629a292a367cd507ULL
+#define HP2 0x9159015a3070dd17ULL
+#define HP3 0x152fecd8f70e5939ULL
+#define HP4 0x67332667ffc00b31ULL
+#define HP5 0x8eb44a8768581511ULL
+#define HP6 0xdb0c2e0d64f98fa7ULL
+#define HP7 0x47b5481dbefa4fa4ULL
+
+static inline void LOAD_OP(int I, u64 *W, const u8 *input)
+{
+ u64 t1 = input[(8*I) ] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+1] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+2] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+3] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+4] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+5] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+6] & 0xff;
+ t1 <<= 8;
+ t1 |= input[(8*I)+7] & 0xff;
+ W[I] = t1;
+}
+
+static inline void BLEND_OP(int I, u64 *W)
+{
+ W[I] = s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16];
+}
+
+static void
+sha512_transform(u64 *state, const u8 *input)
+{
+ u64 a, b, c, d, e, f, g, h, t1, t2;
+ u64 W[80];
+
+ int i;
+
+ /* load the input */
+ for (i = 0; i < 16; i++)
+ LOAD_OP(i, W, input);
+
+ for (i = 16; i < 80; i++) {
+ BLEND_OP(i, W);
+ }
+
+ /* load the state into our registers */
+ a=state[0]; b=state[1]; c=state[2]; d=state[3];
+ e=state[4]; f=state[5]; g=state[6]; h=state[7];
+
+ /* now iterate */
+ for (i=0; i<80; i+=8) {
+ t1 = h + e1(e) + Ch(e,f,g) + sha512_K[i ] + W[i ];
+ t2 = e0(a) + Maj(a,b,c); d+=t1; h=t1+t2;
+ t1 = g + e1(d) + Ch(d,e,f) + sha512_K[i+1] + W[i+1];
+ t2 = e0(h) + Maj(h,a,b); c+=t1; g=t1+t2;
+ t1 = f + e1(c) + Ch(c,d,e) + sha512_K[i+2] + W[i+2];
+ t2 = e0(g) + Maj(g,h,a); b+=t1; f=t1+t2;
+ t1 = e + e1(b) + Ch(b,c,d) + sha512_K[i+3] + W[i+3];
+ t2 = e0(f) + Maj(f,g,h); a+=t1; e=t1+t2;
+ t1 = d + e1(a) + Ch(a,b,c) + sha512_K[i+4] + W[i+4];
+ t2 = e0(e) + Maj(e,f,g); h+=t1; d=t1+t2;
+ t1 = c + e1(h) + Ch(h,a,b) + sha512_K[i+5] + W[i+5];
+ t2 = e0(d) + Maj(d,e,f); g+=t1; c=t1+t2;
+ t1 = b + e1(g) + Ch(g,h,a) + sha512_K[i+6] + W[i+6];
+ t2 = e0(c) + Maj(c,d,e); f+=t1; b=t1+t2;
+ t1 = a + e1(f) + Ch(f,g,h) + sha512_K[i+7] + W[i+7];
+ t2 = e0(b) + Maj(b,c,d); e+=t1; a=t1+t2;
+ }
+
+ state[0] += a; state[1] += b; state[2] += c; state[3] += d;
+ state[4] += e; state[5] += f; state[6] += g; state[7] += h;
+
+ /* erase our data */
+ a = b = c = d = e = f = g = h = t1 = t2 = 0;
+ memset(W, 0, 80 * sizeof(u64));
+}
+
+static void
+sha512_init(void *ctx)
+{
+ struct sha512_ctx *sctx = ctx;
+ sctx->state[0] = H0;
+ sctx->state[1] = H1;
+ sctx->state[2] = H2;
+ sctx->state[3] = H3;
+ sctx->state[4] = H4;
+ sctx->state[5] = H5;
+ sctx->state[6] = H6;
+ sctx->state[7] = H7;
+ sctx->count[0] = sctx->count[1] = sctx->count[2] = sctx->count[3] = 0;
+ memset(sctx->buf, 0, sizeof(sctx->buf));
+}
+
+static void
+sha384_init(void *ctx)
+{
+ struct sha512_ctx *sctx = ctx;
+ sctx->state[0] = HP0;
+ sctx->state[1] = HP1;
+ sctx->state[2] = HP2;
+ sctx->state[3] = HP3;
+ sctx->state[4] = HP4;
+ sctx->state[5] = HP5;
+ sctx->state[6] = HP6;
+ sctx->state[7] = HP7;
+ sctx->count[0] = sctx->count[1] = sctx->count[2] = sctx->count[3] = 0;
+ memset(sctx->buf, 0, sizeof(sctx->buf));
+}
+
+static void
+sha512_update(void *ctx, const u8 *data, unsigned int len)
+{
+ struct sha512_ctx *sctx = ctx;
+
+ unsigned int i, index, part_len;
+
+ /* Compute number of bytes mod 128 */
+ index = (unsigned int)((sctx->count[0] >> 3) & 0x7F);
+
+ /* Update number of bits */
+ if ((sctx->count[0] += (len << 3)) < (len << 3)) {
+ if ((sctx->count[1] += 1) < 1)
+ if ((sctx->count[2] += 1) < 1)
+ sctx->count[3]++;
+ }
+ sctx->count[1] += (len >> 29);
+
+ part_len = 128 - index;
+
+ /* Transform as many times as possible. */
+ if (len >= part_len) {
+ memcpy(&sctx->buf[index], data, part_len);
+ sha512_transform(sctx->state, sctx->buf);
+
+ for (i = part_len; i + 127 < len; i+=128)
+ sha512_transform(sctx->state, &data[i]);
+
+ index = 0;
+ } else {
+ i = 0;
+ }
+
+ /* Buffer remaining input */
+ memcpy(&sctx->buf[index], &data[i], len - i);
+}
+
+static void
+sha512_final(void *ctx, u8 *hash)
+{
+ struct sha512_ctx *sctx = ctx;
+
+ static const u8 padding[128] = { 0x80, };
+
+ u32 t;
+ u64 t2;
+ u8 bits[16];
+ unsigned int index, pad_len;
+ int i, j;
+
+
+ /* Save number of bits */
+ t = sctx->count[0];
+ bits[15] = t; t>>=8;
+ bits[14] = t; t>>=8;
+ bits[13] = t; t>>=8;
+ bits[12] = t;
+ t = sctx->count[1];
+ bits[11] = t; t>>=8;
+ bits[10] = t; t>>=8;
+ bits[9 ] = t; t>>=8;
+ bits[8 ] = t;
+ t = sctx->count[2];
+ bits[7 ] = t; t>>=8;
+ bits[6 ] = t; t>>=8;
+ bits[5 ] = t; t>>=8;
+ bits[4 ] = t;
+ t = sctx->count[3];
+ bits[3 ] = t; t>>=8;
+ bits[2 ] = t; t>>=8;
+ bits[1 ] = t; t>>=8;
+ bits[0 ] = t;
+
+ /* Pad out to 112 mod 128. */
+ index = (sctx->count[0] >> 3) & 0x7f;
+ pad_len = (index < 112) ? (112 - index) : ((128+112) - index);
+ sha512_update(sctx, padding, pad_len);
+
+ /* Append length (before padding) */
+ sha512_update(sctx, bits, 16);
+
+ /* Store state in digest */
+ for (i = j = 0; i < 8; i++, j += 8) {
+ t2 = sctx->state[i];
+ hash[j+7] = (char)t2 & 0xff; t2>>=8;
+ hash[j+6] = (char)t2 & 0xff; t2>>=8;
+ hash[j+5] = (char)t2 & 0xff; t2>>=8;
+ hash[j+4] = (char)t2 & 0xff; t2>>=8;
+ hash[j+3] = (char)t2 & 0xff; t2>>=8;
+ hash[j+2] = (char)t2 & 0xff; t2>>=8;
+ hash[j+1] = (char)t2 & 0xff; t2>>=8;
+ hash[j ] = (char)t2 & 0xff;
+ }
+
+ /* Zeroize sensitive information. */
+ memset(sctx, 0, sizeof(struct sha512_ctx));
+}
+
+static void sha384_final(void *ctx, u8 *hash)
+{
+ struct sha512_ctx *sctx = ctx;
+ u8 D[64];
+
+ sha512_final(sctx, D);
+
+ memcpy(hash, D, 48);
+ memset(D, 0, 64);
+}
+
+static struct crypto_alg sha512 = {
+ .cra_name = "sha512",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = SHA512_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sha512_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(sha512.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = SHA512_DIGEST_SIZE,
+ .dia_init = sha512_init,
+ .dia_update = sha512_update,
+ .dia_final = sha512_final }
+ }
+};
+
+static struct crypto_alg sha384 = {
+ .cra_name = "sha384",
+ .cra_flags = CRYPTO_ALG_TYPE_DIGEST,
+ .cra_blocksize = SHA384_HMAC_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct sha512_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(sha384.cra_list),
+ .cra_u = { .digest = {
+ .dia_digestsize = SHA384_DIGEST_SIZE,
+ .dia_init = sha384_init,
+ .dia_update = sha512_update,
+ .dia_final = sha384_final }
+ }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ if ((ret = crypto_register_alg(&sha384)) < 0)
+ goto out;
+ if ((ret = crypto_register_alg(&sha512)) < 0)
+ crypto_unregister_alg(&sha384);
+out:
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&sha384);
+ crypto_unregister_alg(&sha512);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms");
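As the sha384 glue above shows, SHA-384 is the SHA-512 pipeline started from a different initial state (HP0..HP7) with the digest truncated to 48 bytes. A quick illustrative check with Python's hashlib (not kernel code) confirms the widths and shows that the two digests are nevertheless unrelated byte-for-byte, which is why sha384_init must load its own constants rather than reuse H0..H7:

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog"
d512 = hashlib.sha512(msg).digest()
d384 = hashlib.sha384(msg).digest()

# SHA-384 output is 48 bytes: the width sha384_final copies out of D[64].
assert len(d512) == 64
assert len(d384) == 48

# Because SHA-384 starts from HP0..HP7 rather than H0..H7, its digest is
# not simply a prefix of the SHA-512 digest of the same message.
assert d384 != d512[:48]
```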
diff -Nru a/crypto/tcrypt.c b/crypto/tcrypt.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/tcrypt.c Thu May 8 10:41:38 2003
@@ -0,0 +1,2418 @@
+/*
+ * Quick & dirty crypto testing module.
+ *
+ * This will only exist until we have a better testing mechanism
+ * (e.g. a char device).
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Copyright (c) 2002 Jean-Francois Dive <jef@linuxbe.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+#include <linux/highmem.h>
+#include "tcrypt.h"
+
+/*
+ * Need to kmalloc() memory for testing kmap().
+ */
+#define TVMEMSIZE 4096
+#define XBUFSIZE 32768
+
+/*
+ * Indexes into the xbuf to simulate cross-page access.
+ */
+#define IDX1 37
+#define IDX2 32400
+#define IDX3 1
+#define IDX4 8193
+#define IDX5 22222
+#define IDX6 17101
+#define IDX7 27333
+#define IDX8 3000
+
+static int mode = 0;
+static char *xbuf;
+static char *tvmem;
+
+static char *check[] = {
+ "des", "md5", "des3_ede", "rot13", "sha1", "sha256", "blowfish",
+ "twofish", "serpent", "sha384", "sha512", "md4", "aes", "deflate",
+ NULL
+};
+
+static void
+hexdump(unsigned char *buf, unsigned int len)
+{
+ while (len--)
+ printk("%02x", *buf++);
+
+ printk("\n");
+}
+
+static void
+test_md5(void)
+{
+ char *p;
+ unsigned int i;
+ struct scatterlist sg[2];
+ char result[128];
+ struct crypto_tfm *tfm;
+ struct md5_testvec *md5_tv;
+ unsigned int tsize;
+
+ printk("\ntesting md5\n");
+
+ tsize = sizeof (md5_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, md5_tv_template, tsize);
+ md5_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("md5", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for md5\n");
+ return;
+ }
+
+ for (i = 0; i < MD5_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = md5_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(md5_tv[i].plaintext);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 1);
+ crypto_digest_final(tfm, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, md5_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ printk("\ntesting md5 across pages\n");
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, XBUFSIZE);
+ memcpy(&xbuf[IDX1], "abcdefghijklm", 13);
+ memcpy(&xbuf[IDX2], "nopqrstuvwxyz", 13);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 13;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 13;
+
+ memset(result, 0, sizeof (result));
+ crypto_digest_digest(tfm, sg, 2, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+
+ printk("%s\n",
+ memcmp(result, md5_tv[4].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+ crypto_free_tfm(tfm);
+}
+
+#ifdef CONFIG_CRYPTO_HMAC
+static void
+test_hmac_md5(void)
+{
+ char *p;
+ unsigned int i, klen;
+ struct scatterlist sg[2];
+ char result[128];
+ struct crypto_tfm *tfm;
+ struct hmac_md5_testvec *hmac_md5_tv;
+ unsigned int tsize;
+
+ tfm = crypto_alloc_tfm("md5", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for md5\n");
+ return;
+ }
+
+ printk("\ntesting hmac_md5\n");
+
+ tsize = sizeof (hmac_md5_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+
+ memcpy(tvmem, hmac_md5_tv_template, tsize);
+ hmac_md5_tv = (void *) tvmem;
+
+ for (i = 0; i < HMAC_MD5_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = hmac_md5_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(hmac_md5_tv[i].plaintext);
+
+ klen = strlen(hmac_md5_tv[i].key);
+ crypto_hmac(tfm, hmac_md5_tv[i].key, &klen, sg, 1, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, hmac_md5_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ printk("\ntesting hmac_md5 across pages\n");
+
+ memset(xbuf, 0, XBUFSIZE);
+
+ memcpy(&xbuf[IDX1], "what do ya want ", 16);
+ memcpy(&xbuf[IDX2], "for nothing?", 12);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 16;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 12;
+
+ memset(result, 0, sizeof (result));
+ klen = strlen(hmac_md5_tv[7].key);
+ crypto_hmac(tfm, hmac_md5_tv[7].key, &klen, sg, 2, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+
+ printk("%s\n",
+ memcmp(result, hmac_md5_tv[7].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+out:
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_hmac_sha1(void)
+{
+ char *p;
+ unsigned int i, klen;
+ struct crypto_tfm *tfm;
+ struct hmac_sha1_testvec *hmac_sha1_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA1_DIGEST_SIZE];
+
+ tfm = crypto_alloc_tfm("sha1", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha1\n");
+ return;
+ }
+
+ printk("\ntesting hmac_sha1\n");
+
+ tsize = sizeof (hmac_sha1_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+
+ memcpy(tvmem, hmac_sha1_tv_template, tsize);
+ hmac_sha1_tv = (void *) tvmem;
+
+ for (i = 0; i < HMAC_SHA1_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = hmac_sha1_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(hmac_sha1_tv[i].plaintext);
+
+ klen = strlen(hmac_sha1_tv[i].key);
+
+ crypto_hmac(tfm, hmac_sha1_tv[i].key, &klen, sg, 1, result);
+
+ hexdump(result, sizeof (result));
+ printk("%s\n",
+ memcmp(result, hmac_sha1_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ printk("\ntesting hmac_sha1 across pages\n");
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, XBUFSIZE);
+
+ memcpy(&xbuf[IDX1], "what do ya want ", 16);
+ memcpy(&xbuf[IDX2], "for nothing?", 12);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 16;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 12;
+
+ memset(result, 0, sizeof (result));
+ klen = strlen(hmac_sha1_tv[7].key);
+ crypto_hmac(tfm, hmac_sha1_tv[7].key, &klen, sg, 2, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+
+ printk("%s\n",
+ memcmp(result, hmac_sha1_tv[7].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+out:
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_hmac_sha256(void)
+{
+ char *p;
+ unsigned int i, klen;
+ struct crypto_tfm *tfm;
+ struct hmac_sha256_testvec *hmac_sha256_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA256_DIGEST_SIZE];
+
+ tfm = crypto_alloc_tfm("sha256", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha256\n");
+ return;
+ }
+
+ printk("\ntesting hmac_sha256\n");
+
+ tsize = sizeof (hmac_sha256_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+
+ memcpy(tvmem, hmac_sha256_tv_template, tsize);
+ hmac_sha256_tv = (void *) tvmem;
+
+ for (i = 0; i < HMAC_SHA256_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = hmac_sha256_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(hmac_sha256_tv[i].plaintext);
+
+ klen = strlen(hmac_sha256_tv[i].key);
+
+ hexdump(hmac_sha256_tv[i].key, strlen(hmac_sha256_tv[i].key));
+ crypto_hmac(tfm, hmac_sha256_tv[i].key, &klen, sg, 1, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, hmac_sha256_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+ }
+
+out:
+ crypto_free_tfm(tfm);
+}
+
+#endif /* CONFIG_CRYPTO_HMAC */
+
+static void
+test_md4(void)
+{
+ char *p;
+ unsigned int i;
+ struct scatterlist sg[1];
+ char result[128];
+ struct crypto_tfm *tfm;
+ struct md4_testvec *md4_tv;
+ unsigned int tsize;
+
+ printk("\ntesting md4\n");
+
+ tsize = sizeof (md4_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, md4_tv_template, tsize);
+ md4_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("md4", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for md4\n");
+ return;
+ }
+
+ for (i = 0; i < MD4_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = md4_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(md4_tv[i].plaintext);
+
+ crypto_digest_digest(tfm, sg, 1, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, md4_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_sha1(void)
+{
+ char *p;
+ unsigned int i;
+ struct crypto_tfm *tfm;
+ struct sha1_testvec *sha1_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA1_DIGEST_SIZE];
+
+ printk("\ntesting sha1\n");
+
+ tsize = sizeof (sha1_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, sha1_tv_template, tsize);
+ sha1_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("sha1", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha1\n");
+ return;
+ }
+
+ for (i = 0; i < SHA1_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = sha1_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(sha1_tv[i].plaintext);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 1);
+ crypto_digest_final(tfm, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha1_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ printk("\ntesting sha1 across pages\n");
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, XBUFSIZE);
+ memcpy(&xbuf[IDX1], "abcdbcdecdefdefgefghfghighij", 28);
+ memcpy(&xbuf[IDX2], "hijkijkljklmklmnlmnomnopnopq", 28);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 28;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 28;
+
+ memset(result, 0, sizeof (result));
+ crypto_digest_digest(tfm, sg, 2, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha1_tv[1].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_sha256(void)
+{
+ char *p;
+ unsigned int i;
+ struct crypto_tfm *tfm;
+ struct sha256_testvec *sha256_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA256_DIGEST_SIZE];
+
+ printk("\ntesting sha256\n");
+
+ tsize = sizeof (sha256_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, sha256_tv_template, tsize);
+ sha256_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("sha256", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha256\n");
+ return;
+ }
+
+ for (i = 0; i < SHA256_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = sha256_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(sha256_tv[i].plaintext);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 1);
+ crypto_digest_final(tfm, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha256_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ printk("\ntesting sha256 across pages\n");
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, XBUFSIZE);
+ memcpy(&xbuf[IDX1], "abcdbcdecdefdefgefghfghighij", 28);
+ memcpy(&xbuf[IDX2], "hijkijkljklmklmnlmnomnopnopq", 28);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 28;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 28;
+
+ memset(result, 0, sizeof (result));
+ crypto_digest_digest(tfm, sg, 2, result);
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha256_tv[1].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" : "pass");
+
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_sha384(void)
+{
+ char *p;
+ unsigned int i;
+ struct crypto_tfm *tfm;
+ struct sha384_testvec *sha384_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA384_DIGEST_SIZE];
+
+ printk("\ntesting sha384\n");
+
+ tsize = sizeof (sha384_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, sha384_tv_template, tsize);
+ sha384_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("sha384", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha384\n");
+ return;
+ }
+
+ for (i = 0; i < SHA384_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = sha384_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(sha384_tv[i].plaintext);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 1);
+ crypto_digest_final(tfm, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha384_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_sha512(void)
+{
+ char *p;
+ unsigned int i;
+ struct crypto_tfm *tfm;
+ struct sha512_testvec *sha512_tv;
+ struct scatterlist sg[2];
+ unsigned int tsize;
+ char result[SHA512_DIGEST_SIZE];
+
+ printk("\ntesting sha512\n");
+
+ tsize = sizeof (sha512_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, sha512_tv_template, tsize);
+ sha512_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("sha512", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for sha512\n");
+ return;
+ }
+
+ for (i = 0; i < SHA512_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ p = sha512_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = strlen(sha512_tv[i].plaintext);
+
+ crypto_digest_init(tfm);
+ crypto_digest_update(tfm, sg, 1);
+ crypto_digest_final(tfm, result);
+
+ hexdump(result, crypto_tfm_alg_digestsize(tfm));
+ printk("%s\n",
+ memcmp(result, sha512_tv[i].digest,
+ crypto_tfm_alg_digestsize(tfm)) ? "fail" :
+ "pass");
+ }
+
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_des(void)
+{
+ unsigned int ret, i, len;
+ unsigned int tsize;
+ char *p, *q;
+ struct crypto_tfm *tfm;
+ char *key;
+ char res[8];
+ struct des_tv *des_tv;
+ struct scatterlist sg[8];
+
+ printk("\ntesting des encryption\n");
+
+ tsize = sizeof (des_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, des_enc_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("des", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for des (default ecb)\n");
+ return;
+ }
+
+ for (i = 0; i < DES_ENC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ key = des_tv[i].key;
+ tfm->crt_flags |= CRYPTO_TFM_REQ_WEAK_KEY;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!des_tv[i].fail)
+ goto out;
+ }
+
+ len = des_tv[i].len;
+
+ p = des_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+ ret = crypto_cipher_encrypt(tfm, sg, sg, len);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+
+ }
+
+ printk("\ntesting des ecb encryption across pages\n");
+
+ i = 5;
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ hexdump(key, 8);
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, XBUFSIZE);
+ memcpy(&xbuf[IDX1], des_tv[i].plaintext, 8);
+ memcpy(&xbuf[IDX2], des_tv[i].plaintext + 8, 8);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 8;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 8;
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 16);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result, 8) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result + 8, 8) ? "fail" : "pass");
+
+ printk("\ntesting des ecb encryption chunking scenario A\n");
+
+ /*
+ * Scenario A:
+ *
+ * F1 F2 F3
+ * [8 + 6] [2 + 8] [8]
+ * ^^^^^^ ^
+ * a b c
+ *
+ * Chunking should begin at a, then end with b, and
+ * continue encrypting at an offset of 2 until c.
+ *
+ */
+ i = 7;
+
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ /* Frag 1: 8 + 6 */
+ memcpy(&xbuf[IDX3], des_tv[i].plaintext, 14);
+
+ /* Frag 2: 2 + 8 */
+ memcpy(&xbuf[IDX4], des_tv[i].plaintext + 14, 10);
+
+ /* Frag 3: 8 */
+ memcpy(&xbuf[IDX5], des_tv[i].plaintext + 24, 8);
+
+ p = &xbuf[IDX3];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 14;
+
+ p = &xbuf[IDX4];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 10;
+
+ p = &xbuf[IDX5];
+ sg[2].page = virt_to_page(p);
+ sg[2].offset = ((long) p & ~PAGE_MASK);
+ sg[2].length = 8;
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 32);
+
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 14);
+ printk("%s\n", memcmp(q, des_tv[i].result, 14) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 10);
+ printk("%s\n", memcmp(q, des_tv[i].result + 14, 10) ? "fail" : "pass");
+
+ printk("page 3\n");
+ q = kmap(sg[2].page) + sg[2].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result + 24, 8) ? "fail" : "pass");
+
+ printk("\ntesting des ecb encryption chunking scenario B\n");
+
+ /*
+ * Scenario B:
+ *
+ * F1 F2 F3 F4
+ * [2] [1] [3] [2 + 8 + 8]
+ */
+ i = 7;
+
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ /* Frag 1: 2 */
+ memcpy(&xbuf[IDX3], des_tv[i].plaintext, 2);
+
+ /* Frag 2: 1 */
+ memcpy(&xbuf[IDX4], des_tv[i].plaintext + 2, 1);
+
+ /* Frag 3: 3 */
+ memcpy(&xbuf[IDX5], des_tv[i].plaintext + 3, 3);
+
+ /* Frag 4: 2 + 8 + 8 */
+ memcpy(&xbuf[IDX6], des_tv[i].plaintext + 6, 18);
+
+ p = &xbuf[IDX3];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 2;
+
+ p = &xbuf[IDX4];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 1;
+
+ p = &xbuf[IDX5];
+ sg[2].page = virt_to_page(p);
+ sg[2].offset = ((long) p & ~PAGE_MASK);
+ sg[2].length = 3;
+
+ p = &xbuf[IDX6];
+ sg[3].page = virt_to_page(p);
+ sg[3].offset = ((long) p & ~PAGE_MASK);
+ sg[3].length = 18;
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 24);
+
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 2);
+ printk("%s\n", memcmp(q, des_tv[i].result, 2) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 1);
+ printk("%s\n", memcmp(q, des_tv[i].result + 2, 1) ? "fail" : "pass");
+
+ printk("page 3\n");
+ q = kmap(sg[2].page) + sg[2].offset;
+ hexdump(q, 3);
+ printk("%s\n", memcmp(q, des_tv[i].result + 3, 3) ? "fail" : "pass");
+
+ printk("page 4\n");
+ q = kmap(sg[3].page) + sg[3].offset;
+ hexdump(q, 18);
+ printk("%s\n", memcmp(q, des_tv[i].result + 6, 18) ? "fail" : "pass");
+
+ printk("\ntesting des ecb encryption chunking scenario C\n");
+
+ /*
+ * Scenario C:
+ *
+ * F1 F2 F3 F4 F5
+ * [2] [2] [2] [2] [8]
+ */
+ i = 7;
+
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ /* Frag 1: 2 */
+ memcpy(&xbuf[IDX3], des_tv[i].plaintext, 2);
+
+ /* Frag 2: 2 */
+ memcpy(&xbuf[IDX4], des_tv[i].plaintext + 2, 2);
+
+ /* Frag 3: 2 */
+ memcpy(&xbuf[IDX5], des_tv[i].plaintext + 4, 2);
+
+ /* Frag 4: 2 */
+ memcpy(&xbuf[IDX6], des_tv[i].plaintext + 6, 2);
+
+ /* Frag 5: 8 */
+ memcpy(&xbuf[IDX7], des_tv[i].plaintext + 8, 8);
+
+ p = &xbuf[IDX3];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 2;
+
+ p = &xbuf[IDX4];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 2;
+
+ p = &xbuf[IDX5];
+ sg[2].page = virt_to_page(p);
+ sg[2].offset = ((long) p & ~PAGE_MASK);
+ sg[2].length = 2;
+
+ p = &xbuf[IDX6];
+ sg[3].page = virt_to_page(p);
+ sg[3].offset = ((long) p & ~PAGE_MASK);
+ sg[3].length = 2;
+
+ p = &xbuf[IDX7];
+ sg[4].page = virt_to_page(p);
+ sg[4].offset = ((long) p & ~PAGE_MASK);
+ sg[4].length = 8;
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 16);
+
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 2);
+ printk("%s\n", memcmp(q, des_tv[i].result, 2) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 2);
+ printk("%s\n", memcmp(q, des_tv[i].result + 2, 2) ? "fail" : "pass");
+
+ printk("page 3\n");
+ q = kmap(sg[2].page) + sg[2].offset;
+ hexdump(q, 2);
+ printk("%s\n", memcmp(q, des_tv[i].result + 4, 2) ? "fail" : "pass");
+
+ printk("page 4\n");
+ q = kmap(sg[3].page) + sg[3].offset;
+ hexdump(q, 2);
+ printk("%s\n", memcmp(q, des_tv[i].result + 6, 2) ? "fail" : "pass");
+
+ printk("page 5\n");
+ q = kmap(sg[4].page) + sg[4].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result + 8, 8) ? "fail" : "pass");
+
+ printk("\ntesting des ecb encryption chunking scenario D\n");
+
+ /*
+ * Scenario D, torture test, one byte per frag.
+ */
+ i = 7;
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ xbuf[IDX1] = des_tv[i].plaintext[0];
+ xbuf[IDX2] = des_tv[i].plaintext[1];
+ xbuf[IDX3] = des_tv[i].plaintext[2];
+ xbuf[IDX4] = des_tv[i].plaintext[3];
+ xbuf[IDX5] = des_tv[i].plaintext[4];
+ xbuf[IDX6] = des_tv[i].plaintext[5];
+ xbuf[IDX7] = des_tv[i].plaintext[6];
+ xbuf[IDX8] = des_tv[i].plaintext[7];
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 1;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 1;
+
+ p = &xbuf[IDX3];
+ sg[2].page = virt_to_page(p);
+ sg[2].offset = ((long) p & ~PAGE_MASK);
+ sg[2].length = 1;
+
+ p = &xbuf[IDX4];
+ sg[3].page = virt_to_page(p);
+ sg[3].offset = ((long) p & ~PAGE_MASK);
+ sg[3].length = 1;
+
+ p = &xbuf[IDX5];
+ sg[4].page = virt_to_page(p);
+ sg[4].offset = ((long) p & ~PAGE_MASK);
+ sg[4].length = 1;
+
+ p = &xbuf[IDX6];
+ sg[5].page = virt_to_page(p);
+ sg[5].offset = ((long) p & ~PAGE_MASK);
+ sg[5].length = 1;
+
+ p = &xbuf[IDX7];
+ sg[6].page = virt_to_page(p);
+ sg[6].offset = ((long) p & ~PAGE_MASK);
+ sg[6].length = 1;
+
+ p = &xbuf[IDX8];
+ sg[7].page = virt_to_page(p);
+ sg[7].offset = ((long) p & ~PAGE_MASK);
+ sg[7].length = 1;
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 8);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ for (i = 0; i < 8; i++)
+ res[i] = *(char *) (kmap(sg[i].page) + sg[i].offset);
+
+ hexdump(res, 8);
+ printk("%s\n", memcmp(res, des_tv[7].result, 8) ? "fail" : "pass");
+
+ printk("\ntesting des decryption\n");
+
+ tsize = sizeof (des_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+ memcpy(tvmem, des_dec_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ for (i = 0; i < DES_DEC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ key = des_tv[i].key;
+
+ tfm->crt_flags = 0;
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ len = des_tv[i].len;
+
+ p = des_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("des_decrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+
+ }
+
+ printk("\ntesting des ecb decryption across pages\n");
+
+ i = 6;
+
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+ memcpy(&xbuf[IDX1], des_tv[i].plaintext, 8);
+ memcpy(&xbuf[IDX2], des_tv[i].plaintext + 8, 8);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 8;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 8;
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, 16);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result, 8) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 8);
+ printk("%s\n", memcmp(q, des_tv[i].result + 8, 8) ? "fail" : "pass");
+
+ /*
+ * Scenario E:
+ *
+ * F1 F2 F3
+ * [3] [5 + 7] [1]
+ *
+ */
+ printk("\ntesting des ecb decryption chunking scenario E\n");
+ i = 2;
+
+ key = des_tv[i].key;
+ tfm->crt_flags = 0;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ memcpy(&xbuf[IDX1], des_tv[i].plaintext, 3);
+ memcpy(&xbuf[IDX2], des_tv[i].plaintext + 3, 12);
+ memcpy(&xbuf[IDX3], des_tv[i].plaintext + 15, 1);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 3;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 12;
+
+ p = &xbuf[IDX3];
+ sg[2].page = virt_to_page(p);
+ sg[2].offset = ((long) p & ~PAGE_MASK);
+ sg[2].length = 1;
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, 16);
+
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 3);
+ printk("%s\n", memcmp(q, des_tv[i].result, 3) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 12);
+ printk("%s\n", memcmp(q, des_tv[i].result + 3, 12) ? "fail" : "pass");
+
+ printk("page 3\n");
+ q = kmap(sg[2].page) + sg[2].offset;
+ hexdump(q, 1);
+ printk("%s\n", memcmp(q, des_tv[i].result + 15, 1) ? "fail" : "pass");
+
+ crypto_free_tfm(tfm);
+
+ tfm = crypto_alloc_tfm("des", CRYPTO_TFM_MODE_CBC);
+ if (tfm == NULL) {
+ printk("failed to load transform for des cbc\n");
+ return;
+ }
+
+ printk("\ntesting des cbc encryption\n");
+
+ tsize = sizeof (des_cbc_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+ memcpy(tvmem, des_cbc_enc_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ crypto_cipher_set_iv(tfm, des_tv[i].iv, crypto_tfm_alg_ivsize(tfm));
+ crypto_cipher_get_iv(tfm, res, crypto_tfm_alg_ivsize(tfm));
+
+ if (memcmp(res, des_tv[i].iv, sizeof(res))) {
+ printk("crypto_cipher_[set|get]_iv() failed\n");
+ goto out;
+ }
+
+ for (i = 0; i < DES_CBC_ENC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ key = des_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ len = des_tv[i].len;
+ p = des_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+
+ crypto_cipher_set_iv(tfm, des_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, len);
+ if (ret) {
+ printk("des_cbc_encrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+ }
+
+ crypto_free_tfm(tfm);
+
+ /*
+ * Scenario F:
+ *
+ * F1 F2
+ * [8 + 5] [3 + 8]
+ *
+ */
+ printk("\ntesting des cbc encryption chunking scenario F\n");
+ i = 4;
+
+ tfm = crypto_alloc_tfm("des", CRYPTO_TFM_MODE_CBC);
+ if (tfm == NULL) {
+ printk("failed to load transform for des cbc\n");
+ return;
+ }
+
+ tfm->crt_flags = 0;
+ key = des_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+
+ memcpy(&xbuf[IDX1], des_tv[i].plaintext, 13);
+ memcpy(&xbuf[IDX2], des_tv[i].plaintext + 13, 11);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 13;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 11;
+
+ crypto_cipher_set_iv(tfm, des_tv[i].iv, crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, 24);
+ if (ret) {
+ printk("des_cbc_encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 13);
+ printk("%s\n", memcmp(q, des_tv[i].result, 13) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 11);
+ printk("%s\n", memcmp(q, des_tv[i].result + 13, 11) ? "fail" : "pass");
+
+ tsize = sizeof (des_cbc_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+ memcpy(tvmem, des_cbc_dec_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ printk("\ntesting des cbc decryption\n");
+
+ for (i = 0; i < DES_CBC_DEC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ tfm->crt_flags = 0;
+ key = des_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ len = des_tv[i].len;
+ p = des_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+
+ crypto_cipher_set_iv(tfm, des_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, len);
+ if (ret) {
+ printk("des_cbc_decrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ hexdump(tfm->crt_cipher.cit_iv, 8);
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+ }
+
+ /*
+ * Scenario G:
+ *
+ * F1 F2
+ * [4] [4]
+ *
+ */
+ printk("\ntesting des cbc decryption chunking scenario G\n");
+ i = 3;
+
+ tfm->crt_flags = 0;
+ key = des_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, 8);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ /* setup the dummy buffer first */
+ memset(xbuf, 0, sizeof (xbuf));
+ memcpy(&xbuf[IDX1], des_tv[i].plaintext, 4);
+ memcpy(&xbuf[IDX2], des_tv[i].plaintext + 4, 4);
+
+ p = &xbuf[IDX1];
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = 4;
+
+ p = &xbuf[IDX2];
+ sg[1].page = virt_to_page(p);
+ sg[1].offset = ((long) p & ~PAGE_MASK);
+ sg[1].length = 4;
+
+ crypto_cipher_set_iv(tfm, des_tv[i].iv, crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, 8);
+ if (ret) {
+ printk("des_cbc_decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ printk("page 1\n");
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, 4);
+ printk("%s\n", memcmp(q, des_tv[i].result, 4) ? "fail" : "pass");
+
+ printk("page 2\n");
+ q = kmap(sg[1].page) + sg[1].offset;
+ hexdump(q, 4);
+ printk("%s\n", memcmp(q, des_tv[i].result + 4, 4) ? "fail" : "pass");
+
+ out:
+ crypto_free_tfm(tfm);
+}
+
+void
+test_des3_ede(void)
+{
+ unsigned int ret, i, len;
+ unsigned int tsize;
+ char *p, *q;
+ struct crypto_tfm *tfm;
+ char *key;
+ /*char res[8]; */
+ struct des_tv *des_tv;
+ struct scatterlist sg[8];
+
+ printk("\ntesting des3 ede encryption\n");
+
+ tsize = sizeof (des3_ede_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, des3_ede_enc_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("des3_ede", CRYPTO_TFM_MODE_ECB);
+ if (tfm == NULL) {
+ printk("failed to load transform for 3des ecb\n");
+ return;
+ }
+
+ for (i = 0; i < DES3_EDE_ENC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ key = des_tv[i].key;
+ ret = crypto_cipher_setkey(tfm, key, 24);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!des_tv[i].fail)
+ goto out;
+ }
+
+ len = des_tv[i].len;
+
+ p = des_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+ ret = crypto_cipher_encrypt(tfm, sg, sg, len);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+ }
+
+ printk("\ntesting des3 ede decryption\n");
+
+ tsize = sizeof (des3_ede_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, des3_ede_dec_tv_template, tsize);
+ des_tv = (void *) tvmem;
+
+ for (i = 0; i < DES3_EDE_DEC_TEST_VECTORS; i++) {
+ printk("test %u:\n", i + 1);
+
+ key = des_tv[i].key;
+ ret = crypto_cipher_setkey(tfm, key, 24);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!des_tv[i].fail)
+ goto out;
+ }
+
+ len = des_tv[i].len;
+
+ p = des_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = len;
+ ret = crypto_cipher_decrypt(tfm, sg, sg, len);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, len);
+
+ printk("%s\n",
+ memcmp(q, des_tv[i].result, len) ? "fail" : "pass");
+ }
+
+ out:
+ crypto_free_tfm(tfm);
+}
+
+void
+test_blowfish(void)
+{
+ unsigned int ret, i;
+ unsigned int tsize;
+ char *p, *q;
+ struct crypto_tfm *tfm;
+ char *key;
+ struct bf_tv *bf_tv;
+ struct scatterlist sg[1];
+
+ printk("\ntesting blowfish encryption\n");
+
+ tsize = sizeof (bf_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, bf_enc_tv_template, tsize);
+ bf_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("blowfish", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for blowfish (default ecb)\n");
+ return;
+ }
+
+ for (i = 0; i < BF_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, bf_tv[i].keylen * 8);
+ key = bf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, bf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!bf_tv[i].fail)
+ goto out;
+ }
+
+ p = bf_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = bf_tv[i].plen;
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, bf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, bf_tv[i].result, bf_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+ printk("\ntesting blowfish decryption\n");
+
+ tsize = sizeof (bf_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, bf_dec_tv_template, tsize);
+ bf_tv = (void *) tvmem;
+
+ for (i = 0; i < BF_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, bf_tv[i].keylen * 8);
+ key = bf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, bf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!bf_tv[i].fail)
+ goto out;
+ }
+
+ p = bf_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = bf_tv[i].plen;
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, bf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, bf_tv[i].result, bf_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+ crypto_free_tfm(tfm);
+
+ tfm = crypto_alloc_tfm("blowfish", CRYPTO_TFM_MODE_CBC);
+ if (tfm == NULL) {
+ printk("failed to load transform for blowfish cbc\n");
+ return;
+ }
+
+ printk("\ntesting blowfish cbc encryption\n");
+
+ tsize = sizeof (bf_cbc_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+ memcpy(tvmem, bf_cbc_enc_tv_template, tsize);
+ bf_tv = (void *) tvmem;
+
+ for (i = 0; i < BF_CBC_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, bf_tv[i].keylen * 8);
+
+ key = bf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, bf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ p = bf_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = bf_tv[i].plen;
+
+ crypto_cipher_set_iv(tfm, bf_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("blowfish_cbc_encrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, bf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, bf_tv[i].result, bf_tv[i].rlen)
+ ? "fail" : "pass");
+ }
+
+ printk("\ntesting blowfish cbc decryption\n");
+
+ tsize = sizeof (bf_cbc_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+ memcpy(tvmem, bf_cbc_dec_tv_template, tsize);
+ bf_tv = (void *) tvmem;
+
+ for (i = 0; i < BF_CBC_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, bf_tv[i].keylen * 8);
+ key = bf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, bf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ p = bf_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = bf_tv[i].plen;
+
+ crypto_cipher_set_iv(tfm, bf_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("blowfish_cbc_decrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, bf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, bf_tv[i].result, bf_tv[i].rlen)
+ ? "fail" : "pass");
+ }
+
+out:
+ crypto_free_tfm(tfm);
+}
+
+
+void
+test_twofish(void)
+{
+ unsigned int ret, i;
+ unsigned int tsize;
+ char *p, *q;
+ struct crypto_tfm *tfm;
+ char *key;
+ struct tf_tv *tf_tv;
+ struct scatterlist sg[1];
+
+ printk("\ntesting twofish encryption\n");
+
+ tsize = sizeof (tf_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, tf_enc_tv_template, tsize);
+ tf_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("twofish", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for twofish (default ecb)\n");
+ return;
+ }
+
+ for (i = 0; i < TF_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, tf_tv[i].keylen * 8);
+ key = tf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, tf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!tf_tv[i].fail)
+ goto out;
+ }
+
+ p = tf_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = tf_tv[i].plen;
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, tf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, tf_tv[i].result, tf_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+ printk("\ntesting twofish decryption\n");
+
+ tsize = sizeof (tf_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, tf_dec_tv_template, tsize);
+ tf_tv = (void *) tvmem;
+
+ for (i = 0; i < TF_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, tf_tv[i].keylen * 8);
+ key = tf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, tf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!tf_tv[i].fail)
+ goto out;
+ }
+
+ p = tf_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = tf_tv[i].plen;
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, tf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, tf_tv[i].result, tf_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+ crypto_free_tfm(tfm);
+
+ tfm = crypto_alloc_tfm("twofish", CRYPTO_TFM_MODE_CBC);
+ if (tfm == NULL) {
+ printk("failed to load transform for twofish cbc\n");
+ return;
+ }
+
+ printk("\ntesting twofish cbc encryption\n");
+
+ tsize = sizeof (tf_cbc_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+ memcpy(tvmem, tf_cbc_enc_tv_template, tsize);
+ tf_tv = (void *) tvmem;
+
+ for (i = 0; i < TF_CBC_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, tf_tv[i].keylen * 8);
+
+ key = tf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, tf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ p = tf_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = tf_tv[i].plen;
+
+ crypto_cipher_set_iv(tfm, tf_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("twofish_cbc_encrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, tf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, tf_tv[i].result, tf_tv[i].rlen)
+ ? "fail" : "pass");
+ }
+
+ printk("\ntesting twofish cbc decryption\n");
+
+ tsize = sizeof (tf_cbc_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+ memcpy(tvmem, tf_cbc_dec_tv_template, tsize);
+ tf_tv = (void *) tvmem;
+
+ for (i = 0; i < TF_CBC_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, tf_tv[i].keylen * 8);
+
+ key = tf_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, tf_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ p = tf_tv[i].plaintext;
+
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = tf_tv[i].plen;
+
+ crypto_cipher_set_iv(tfm, tf_tv[i].iv,
+ crypto_tfm_alg_ivsize(tfm));
+
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("twofish_cbc_decrypt() failed flags=%x\n",
+ tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, tf_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, tf_tv[i].result, tf_tv[i].rlen)
+ ? "fail" : "pass");
+ }
+
+out:
+ crypto_free_tfm(tfm);
+}
+
+void
+test_serpent(void)
+{
+ unsigned int ret, i, tsize;
+ u8 *p, *q, *key;
+ struct crypto_tfm *tfm;
+ struct serpent_tv *serp_tv;
+ struct scatterlist sg[1];
+
+ printk("\ntesting serpent encryption\n");
+
+ tfm = crypto_alloc_tfm("serpent", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for serpent (default ecb)\n");
+ return;
+ }
+
+ tsize = sizeof (serpent_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, serpent_enc_tv_template, tsize);
+ serp_tv = (void *) tvmem;
+ for (i = 0; i < SERPENT_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n", i + 1, serp_tv[i].keylen * 8);
+ key = serp_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, serp_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!serp_tv[i].fail)
+ goto out;
+ }
+
+ p = serp_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = sizeof(serp_tv[i].plaintext);
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, sizeof(serp_tv[i].result));
+
+ printk("%s\n", memcmp(q, serp_tv[i].result,
+ sizeof(serp_tv[i].result)) ? "fail" : "pass");
+ }
+
+ printk("\ntesting serpent decryption\n");
+
+ tsize = sizeof (serpent_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, serpent_dec_tv_template, tsize);
+ serp_tv = (void *) tvmem;
+ for (i = 0; i < SERPENT_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n", i + 1, serp_tv[i].keylen * 8);
+ key = serp_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, serp_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!serp_tv[i].fail)
+ goto out;
+ }
+
+ p = serp_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = sizeof(serp_tv[i].plaintext);
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, sizeof(serp_tv[i].result));
+
+ printk("%s\n", memcmp(q, serp_tv[i].result,
+ sizeof(serp_tv[i].result)) ? "fail" : "pass");
+ }
+
+out:
+ crypto_free_tfm(tfm);
+}
+
+void
+test_aes(void)
+{
+ unsigned int ret, i;
+ unsigned int tsize;
+ char *p, *q;
+ struct crypto_tfm *tfm;
+ char *key;
+ struct aes_tv *aes_tv;
+ struct scatterlist sg[1];
+
+ printk("\ntesting aes encryption\n");
+
+ tsize = sizeof (aes_enc_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, aes_enc_tv_template, tsize);
+ aes_tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("aes", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for aes (default ecb)\n");
+ return;
+ }
+
+ for (i = 0; i < AES_ENC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, aes_tv[i].keylen * 8);
+ key = aes_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, aes_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!aes_tv[i].fail)
+ goto out;
+ }
+
+ p = aes_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = aes_tv[i].plen;
+ ret = crypto_cipher_encrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("encrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, aes_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, aes_tv[i].result, aes_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+ printk("\ntesting aes decryption\n");
+
+ tsize = sizeof (aes_dec_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, aes_dec_tv_template, tsize);
+ aes_tv = (void *) tvmem;
+
+ for (i = 0; i < AES_DEC_TEST_VECTORS; i++) {
+ printk("test %u (%d bit key):\n",
+ i + 1, aes_tv[i].keylen * 8);
+ key = aes_tv[i].key;
+
+ ret = crypto_cipher_setkey(tfm, key, aes_tv[i].keylen);
+ if (ret) {
+ printk("setkey() failed flags=%x\n", tfm->crt_flags);
+
+ if (!aes_tv[i].fail)
+ goto out;
+ }
+
+ p = aes_tv[i].plaintext;
+ sg[0].page = virt_to_page(p);
+ sg[0].offset = ((long) p & ~PAGE_MASK);
+ sg[0].length = aes_tv[i].plen;
+ ret = crypto_cipher_decrypt(tfm, sg, sg, sg[0].length);
+ if (ret) {
+ printk("decrypt() failed flags=%x\n", tfm->crt_flags);
+ goto out;
+ }
+
+ q = kmap(sg[0].page) + sg[0].offset;
+ hexdump(q, aes_tv[i].rlen);
+
+ printk("%s\n", memcmp(q, aes_tv[i].result, aes_tv[i].rlen) ?
+ "fail" : "pass");
+ }
+
+out:
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_deflate(void)
+{
+ unsigned int i;
+ char result[COMP_BUF_SIZE];
+ struct crypto_tfm *tfm;
+ struct comp_testvec *tv;
+ unsigned int tsize;
+
+ printk("\ntesting deflate compression\n");
+
+ tsize = sizeof (deflate_comp_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ return;
+ }
+
+ memcpy(tvmem, deflate_comp_tv_template, tsize);
+ tv = (void *) tvmem;
+
+ tfm = crypto_alloc_tfm("deflate", 0);
+ if (tfm == NULL) {
+ printk("failed to load transform for deflate\n");
+ return;
+ }
+
+ for (i = 0; i < DEFLATE_COMP_TEST_VECTORS; i++) {
+ int ilen, ret, dlen = COMP_BUF_SIZE;
+
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ ilen = tv[i].inlen;
+ ret = crypto_comp_compress(tfm, tv[i].input,
+ ilen, result, &dlen);
+ if (ret) {
+ printk("fail: ret=%d\n", ret);
+ continue;
+ }
+ hexdump(result, dlen);
+ printk("%s (ratio %d:%d)\n",
+ memcmp(result, tv[i].output, dlen) ? "fail" : "pass",
+ ilen, dlen);
+ }
+
+ printk("\ntesting deflate decompression\n");
+
+ tsize = sizeof (deflate_decomp_tv_template);
+ if (tsize > TVMEMSIZE) {
+ printk("template (%u) too big for tvmem (%u)\n", tsize,
+ TVMEMSIZE);
+ goto out;
+ }
+
+ memcpy(tvmem, deflate_decomp_tv_template, tsize);
+ tv = (void *) tvmem;
+
+ for (i = 0; i < DEFLATE_DECOMP_TEST_VECTORS; i++) {
+ int ilen, ret, dlen = COMP_BUF_SIZE;
+
+ printk("test %u:\n", i + 1);
+ memset(result, 0, sizeof (result));
+
+ ilen = tv[i].inlen;
+ ret = crypto_comp_decompress(tfm, tv[i].input,
+ ilen, result, &dlen);
+ if (ret) {
+ printk("fail: ret=%d\n", ret);
+ continue;
+ }
+ hexdump(result, dlen);
+ printk("%s (ratio %d:%d)\n",
+ memcmp(result, tv[i].output, dlen) ? "fail" : "pass",
+ ilen, dlen);
+ }
+out:
+ crypto_free_tfm(tfm);
+}
+
+static void
+test_available(void)
+{
+ char **name = check;
+
+ while (*name) {
+ printk("alg %s ", *name);
+ printk((crypto_alg_available(*name, 0)) ?
+ "found\n" : "not found\n");
+ name++;
+ }
+}
+
+static void
+do_test(void)
+{
+ switch (mode) {
+
+ case 0:
+ test_md5();
+ test_sha1();
+ test_des();
+ test_des3_ede();
+ test_md4();
+ test_sha256();
+ test_blowfish();
+ test_twofish();
+ test_serpent();
+ test_aes();
+ test_sha384();
+ test_sha512();
+ test_deflate();
+#ifdef CONFIG_CRYPTO_HMAC
+ test_hmac_md5();
+ test_hmac_sha1();
+ test_hmac_sha256();
+#endif
+ break;
+
+ case 1:
+ test_md5();
+ break;
+
+ case 2:
+ test_sha1();
+ break;
+
+ case 3:
+ test_des();
+ break;
+
+ case 4:
+ test_des3_ede();
+ break;
+
+ case 5:
+ test_md4();
+ break;
+
+ case 6:
+ test_sha256();
+ break;
+
+ case 7:
+ test_blowfish();
+ break;
+
+ case 8:
+ test_twofish();
+ break;
+
+ case 9:
+ test_serpent();
+ break;
+
+ case 10:
+ test_aes();
+ break;
+
+ case 11:
+ test_sha384();
+ break;
+
+ case 12:
+ test_sha512();
+ break;
+
+ case 13:
+ test_deflate();
+ break;
+
+#ifdef CONFIG_CRYPTO_HMAC
+ case 100:
+ test_hmac_md5();
+ break;
+
+ case 101:
+ test_hmac_sha1();
+ break;
+
+ case 102:
+ test_hmac_sha256();
+ break;
+
+#endif
+
+ case 1000:
+ test_available();
+ break;
+
+ default:
+ /* useful for debugging */
+ printk("not testing anything\n");
+ break;
+ }
+}
+
+static int __init
+init(void)
+{
+ tvmem = kmalloc(TVMEMSIZE, GFP_KERNEL);
+ if (tvmem == NULL)
+ return -ENOMEM;
+
+ xbuf = kmalloc(XBUFSIZE, GFP_KERNEL);
+ if (xbuf == NULL) {
+ kfree(tvmem);
+ return -ENOMEM;
+ }
+
+ do_test();
+
+ kfree(xbuf);
+ kfree(tvmem);
+ return 0;
+}
+
+module_init(init);
+
+MODULE_PARM(mode, "i");
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Quick & dirty crypto testing module");
+MODULE_AUTHOR("James Morris <jmorris@intercode.com.au>");
diff -Nru a/crypto/tcrypt.h b/crypto/tcrypt.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/tcrypt.h Thu May 8 10:41:38 2003
@@ -0,0 +1,1785 @@
+/*
+ * Quick & dirty crypto testing module.
+ *
+ * This will only exist until we have a better testing mechanism
+ * (e.g. a char device).
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Copyright (c) 2002 Jean-Francois Dive <jef@linuxbe.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _CRYPTO_TCRYPT_H
+#define _CRYPTO_TCRYPT_H
+
+#define MD5_DIGEST_SIZE 16
+#define MD4_DIGEST_SIZE 16
+#define SHA1_DIGEST_SIZE 20
+#define SHA256_DIGEST_SIZE 32
+#define SHA384_DIGEST_SIZE 48
+#define SHA512_DIGEST_SIZE 64
+
+/*
+ * MD4 test vectors from RFC1320
+ */
+#define MD4_TEST_VECTORS 7
+
+struct md4_testvec {
+ char plaintext[128];
+ char digest[MD4_DIGEST_SIZE];
+} md4_tv_template[] = {
+ { "",
+ { 0x31, 0xd6, 0xcf, 0xe0, 0xd1, 0x6a, 0xe9, 0x31,
+ 0xb7, 0x3c, 0x59, 0xd7, 0xe0, 0xc0, 0x89, 0xc0 }
+ },
+
+ { "a",
+ { 0xbd, 0xe5, 0x2c, 0xb3, 0x1d, 0xe3, 0x3e, 0x46,
+ 0x24, 0x5e, 0x05, 0xfb, 0xdb, 0xd6, 0xfb, 0x24 }
+ },
+
+ { "abc",
+ { 0xa4, 0x48, 0x01, 0x7a, 0xaf, 0x21, 0xd8, 0x52,
+ 0x5f, 0xc1, 0x0a, 0xe8, 0x7a, 0xa6, 0x72, 0x9d }
+ },
+
+ { "message digest",
+ { 0xd9, 0x13, 0x0a, 0x81, 0x64, 0x54, 0x9f, 0xe8,
+ 0x18, 0x87, 0x48, 0x06, 0xe1, 0xc7, 0x01, 0x4b }
+ },
+
+ { "abcdefghijklmnopqrstuvwxyz",
+ { 0xd7, 0x9e, 0x1c, 0x30, 0x8a, 0xa5, 0xbb, 0xcd,
+ 0xee, 0xa8, 0xed, 0x63, 0xdf, 0x41, 0x2d, 0xa9 }
+ },
+
+ { "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789",
+ { 0x04, 0x3f, 0x85, 0x82, 0xf2, 0x41, 0xdb, 0x35,
+ 0x1c, 0xe6, 0x27, 0xe1, 0x53, 0xe7, 0xf0, 0xe4 }
+ },
+
+ { "123456789012345678901234567890123456789012345678901234567890123"
+ "45678901234567890",
+ { 0xe3, 0x3b, 0x4d, 0xdc, 0x9c, 0x38, 0xf2, 0x19,
+ 0x9c, 0x3e, 0x7b, 0x16, 0x4f, 0xcc, 0x05, 0x36 }
+ },
+};
+
+/*
+ * MD5 test vectors from RFC1321
+ */
+#define MD5_TEST_VECTORS 7
+
+struct md5_testvec {
+ char plaintext[128];
+ char digest[MD5_DIGEST_SIZE];
+} md5_tv_template[] = {
+ { "",
+ { 0xd4, 0x1d, 0x8c, 0xd9, 0x8f, 0x00, 0xb2, 0x04,
+ 0xe9, 0x80, 0x09, 0x98, 0xec, 0xf8, 0x42, 0x7e } },
+
+ { "a",
+ { 0x0c, 0xc1, 0x75, 0xb9, 0xc0, 0xf1, 0xb6, 0xa8,
+ 0x31, 0xc3, 0x99, 0xe2, 0x69, 0x77, 0x26, 0x61 } },
+
+ { "abc",
+ { 0x90, 0x01, 0x50, 0x98, 0x3c, 0xd2, 0x4f, 0xb0,
+ 0xd6, 0x96, 0x3f, 0x7d, 0x28, 0xe1, 0x7f, 0x72 } },
+
+ { "message digest",
+ { 0xf9, 0x6b, 0x69, 0x7d, 0x7c, 0xb7, 0x93, 0x8d,
+ 0x52, 0x5a, 0x2f, 0x31, 0xaa, 0xf1, 0x61, 0xd0 } },
+
+ { "abcdefghijklmnopqrstuvwxyz",
+ { 0xc3, 0xfc, 0xd3, 0xd7, 0x61, 0x92, 0xe4, 0x00,
+ 0x7d, 0xfb, 0x49, 0x6c, 0xca, 0x67, 0xe1, 0x3b } },
+
+ { "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789",
+ { 0xd1, 0x74, 0xab, 0x98, 0xd2, 0x77, 0xd9, 0xf5,
+ 0xa5, 0x61, 0x1c, 0x2c, 0x9f, 0x41, 0x9d, 0x9f } },
+
+ { "12345678901234567890123456789012345678901234567890123456789012"
+ "345678901234567890",
+ { 0x57, 0xed, 0xf4, 0xa2, 0x2b, 0xe3, 0xc9, 0x55,
+ 0xac, 0x49, 0xda, 0x2e, 0x21, 0x07, 0xb6, 0x7a } }
+};
+
+#ifdef CONFIG_CRYPTO_HMAC
+/*
+ * HMAC-MD5 test vectors from RFC2202
+ * (These need to be fixed to not use strlen).
+ */
+#define HMAC_MD5_TEST_VECTORS 7
+
+struct hmac_md5_testvec {
+ char key[128];
+ char plaintext[128];
+ char digest[MD5_DIGEST_SIZE];
+};
+
+struct hmac_md5_testvec hmac_md5_tv_template[] =
+{
+
+ {
+ { 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x00},
+
+ "Hi There",
+
+ { 0x92, 0x94, 0x72, 0x7a, 0x36, 0x38, 0xbb, 0x1c,
+ 0x13, 0xf4, 0x8e, 0xf8, 0x15, 0x8b, 0xfc, 0x9d }
+ },
+
+ {
+ { 'J', 'e', 'f', 'e', 0 },
+
+ "what do ya want for nothing?",
+
+ { 0x75, 0x0c, 0x78, 0x3e, 0x6a, 0xb0, 0xb5, 0x03,
+ 0xea, 0xa8, 0x6e, 0x31, 0x0a, 0x5d, 0xb7, 0x38 }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x00 },
+
+ { 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0x00 },
+
+ { 0x56, 0xbe, 0x34, 0x52, 0x1d, 0x14, 0x4c, 0x88,
+ 0xdb, 0xb8, 0xc7, 0x33, 0xf0, 0xe8, 0xb3, 0xf6 }
+ },
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x00 },
+
+ {
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0x00 },
+
+ { 0x69, 0x7e, 0xaf, 0x0a, 0xca, 0x3a, 0x3a, 0xea,
+ 0x3a, 0x75, 0x16, 0x47, 0x46, 0xff, 0xaa, 0x79 }
+ },
+
+ {
+ { 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x00 },
+
+ "Test With Truncation",
+
+ { 0x56, 0x46, 0x1e, 0xf2, 0x34, 0x2e, 0xdc, 0x00,
+ 0xf9, 0xba, 0xb9, 0x95, 0x69, 0x0e, 0xfd, 0x4c }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0x00 },
+
+ "Test Using Larger Than Block-Size Key - Hash Key First",
+
+ { 0x6b, 0x1a, 0xb7, 0xfe, 0x4b, 0xd7, 0xbf, 0x8f,
+ 0x0b, 0x62, 0xe6, 0xce, 0x61, 0xb9, 0xd0, 0xcd }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0x00 },
+
+ "Test Using Larger Than Block-Size Key and Larger Than One "
+ "Block-Size Data",
+
+ { 0x6f, 0x63, 0x0f, 0xad, 0x67, 0xcd, 0xa0, 0xee,
+ 0x1f, 0xb1, 0xf5, 0x62, 0xdb, 0x3a, 0xa5, 0x3e }
+ },
+
+ /* cross page test, need to retain key */
+
+ {
+ { 'J', 'e', 'f', 'e', 0 },
+
+ "what do ya want for nothing?",
+
+ { 0x75, 0x0c, 0x78, 0x3e, 0x6a, 0xb0, 0xb5, 0x03,
+ 0xea, 0xa8, 0x6e, 0x31, 0x0a, 0x5d, 0xb7, 0x38 }
+ },
+
+};
+
+
+/*
+ * HMAC-SHA1 test vectors from RFC2202
+ */
+
+#define HMAC_SHA1_TEST_VECTORS 7
+
+struct hmac_sha1_testvec {
+ char key[128];
+ char plaintext[128];
+ char digest[SHA1_DIGEST_SIZE];
+} hmac_sha1_tv_template[] = {
+
+ {
+ { 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x00},
+
+ "Hi There",
+
+ { 0xb6, 0x17, 0x31, 0x86, 0x55, 0x05, 0x72, 0x64,
+ 0xe2, 0x8b, 0xc0, 0xb6, 0xfb, 0x37, 0x8c, 0x8e, 0xf1,
+ 0x46, 0xbe, 0x00 }
+ },
+
+ {
+ { 'J', 'e', 'f', 'e', 0 },
+
+ "what do ya want for nothing?",
+
+ { 0xef, 0xfc, 0xdf, 0x6a, 0xe5, 0xeb, 0x2f, 0xa2, 0xd2, 0x74,
+ 0x16, 0xd5, 0xf1, 0x84, 0xdf, 0x9c, 0x25, 0x9a, 0x7c, 0x79 }
+
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0x00},
+
+
+ { 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0x00 },
+
+ { 0x12, 0x5d, 0x73, 0x42, 0xb9, 0xac, 0x11, 0xcd, 0x91, 0xa3,
+ 0x9a, 0xf4, 0x8a, 0xa1, 0x7b, 0x4f, 0x63, 0xf1, 0x75, 0xd3 }
+
+ },
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18, 0x19, 0x00 },
+
+ {
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0x00 },
+
+ { 0x4c, 0x90, 0x07, 0xf4, 0x02, 0x62, 0x50, 0xc6, 0xbc, 0x84,
+ 0x14, 0xf9, 0xbf, 0x50, 0xc8, 0x6c, 0x2d, 0x72, 0x35, 0xda }
+
+ },
+
+ {
+ { 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x00 },
+
+ "Test With Truncation",
+
+ { 0x4c, 0x1a, 0x03, 0x42, 0x4b, 0x55, 0xe0, 0x7f, 0xe7, 0xf2,
+ 0x7b, 0xe1, 0xd5, 0x8b, 0xb9, 0x32, 0x4a, 0x9a, 0x5a, 0x04 }
+
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0x00 },
+
+ "Test Using Larger Than Block-Size Key - Hash Key First",
+
+ { 0xaa, 0x4a, 0xe5, 0xe1, 0x52, 0x72, 0xd0, 0x0e, 0x95, 0x70,
+ 0x56, 0x37, 0xce, 0x8a, 0x3b, 0x55, 0xed, 0x40, 0x21, 0x12 }
+
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0x00 },
+
+ "Test Using Larger Than Block-Size Key and Larger Than One "
+ "Block-Size Data",
+
+ { 0xe8, 0xe9, 0x9d, 0x0f, 0x45, 0x23, 0x7d, 0x78, 0x6d, 0x6b,
+ 0xba, 0xa7, 0x96, 0x5c, 0x78, 0x08, 0xbb, 0xff, 0x1a, 0x91 }
+ },
+
+ /* cross page test */
+ {
+ { 'J', 'e', 'f', 'e', 0 },
+
+ "what do ya want for nothing?",
+
+ { 0xef, 0xfc, 0xdf, 0x6a, 0xe5, 0xeb, 0x2f, 0xa2, 0xd2, 0x74,
+ 0x16, 0xd5, 0xf1, 0x84, 0xdf, 0x9c, 0x25, 0x9a, 0x7c, 0x79 }
+
+ },
+
+};
+
+/*
+ * HMAC-SHA256 test vectors from
+ * draft-ietf-ipsec-ciph-sha-256-01.txt
+ */
+#define HMAC_SHA256_TEST_VECTORS 10
+
+struct hmac_sha256_testvec {
+ char key[128];
+ char plaintext[128];
+ char digest[SHA256_DIGEST_SIZE];
+} hmac_sha256_tv_template[] = {
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+ 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x00 },
+
+
+ { "abc" },
+
+ { 0xa2, 0x1b, 0x1f, 0x5d, 0x4c, 0xf4, 0xf7, 0x3a,
+ 0x4d, 0xd9, 0x39, 0x75, 0x0f, 0x7a, 0x06, 0x6a,
+ 0x7f, 0x98, 0xcc, 0x13, 0x1c, 0xb1, 0x6a, 0x66,
+ 0x92, 0x75, 0x90, 0x21, 0xcf, 0xab, 0x81, 0x81 },
+
+ },
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+ 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x00 },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq" },
+
+ { 0x10, 0x4f, 0xdc, 0x12, 0x57, 0x32, 0x8f, 0x08,
+ 0x18, 0x4b, 0xa7, 0x31, 0x31, 0xc5, 0x3c, 0xae,
+ 0xe6, 0x98, 0xe3, 0x61, 0x19, 0x42, 0x11, 0x49,
+ 0xea, 0x8c, 0x71, 0x24, 0x56, 0x69, 0x7d, 0x30 }
+ },
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+ 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20, 0x00 },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq"
+ "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq" },
+
+ { 0x47, 0x03, 0x05, 0xfc, 0x7e, 0x40, 0xfe, 0x34,
+ 0xd3, 0xee, 0xb3, 0xe7, 0x73, 0xd9, 0x5a, 0xab,
+ 0x73, 0xac, 0xf0, 0xfd, 0x06, 0x04, 0x47, 0xa5,
+ 0xeb, 0x45, 0x95, 0xbf, 0x33, 0xa9, 0xd1, 0xa3 }
+ },
+
+ {
+ { 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b,
+ 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x0b, 0x00 },
+
+ { "Hi There" },
+
+ { 0x19, 0x8a, 0x60, 0x7e, 0xb4, 0x4b, 0xfb, 0xc6,
+ 0x99, 0x03, 0xa0, 0xf1, 0xcf, 0x2b, 0xbd, 0xc5,
+ 0xba, 0x0a, 0xa3, 0xf3, 0xd9, 0xae, 0x3c, 0x1c,
+ 0x7a, 0x3b, 0x16, 0x96, 0xa0, 0xb6, 0x8c, 0xf7 }
+ },
+
+ {
+ { "Jefe" },
+
+ { "what do ya want for nothing?" },
+
+ { 0x5b, 0xdc, 0xc1, 0x46, 0xbf, 0x60, 0x75, 0x4e,
+ 0x6a, 0x04, 0x24, 0x26, 0x08, 0x95, 0x75, 0xc7,
+ 0x5a, 0x00, 0x3f, 0x08, 0x9d, 0x27, 0x39, 0x83,
+ 0x9d, 0xec, 0x58, 0xb9, 0x64, 0xec, 0x38, 0x43 }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x00 },
+
+ { 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd, 0xdd,
+ 0xdd, 0xdd, 0x00 },
+
+ { 0xcd, 0xcb, 0x12, 0x20, 0xd1, 0xec, 0xcc, 0xea,
+ 0x91, 0xe5, 0x3a, 0xba, 0x30, 0x92, 0xf9, 0x62,
+ 0xe5, 0x49, 0xfe, 0x6c, 0xe9, 0xed, 0x7f, 0xdc,
+ 0x43, 0x19, 0x1f, 0xbd, 0xe4, 0x5c, 0x30, 0xb0 }
+ },
+
+ {
+ { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08,
+ 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, 0x10,
+ 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x18,
+ 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, 0x20,
+ 0x21, 0x22, 0x23, 0x24, 0x25, 0x00 },
+
+ { 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd, 0xcd,
+ 0xcd, 0xcd, 0x00 },
+
+ { 0xd4, 0x63, 0x3c, 0x17, 0xf6, 0xfb, 0x8d, 0x74,
+ 0x4c, 0x66, 0xde, 0xe0, 0xf8, 0xf0, 0x74, 0x55,
+ 0x6e, 0xc4, 0xaf, 0x55, 0xef, 0x07, 0x99, 0x85,
+ 0x41, 0x46, 0x8e, 0xb4, 0x9b, 0xd2, 0xe9, 0x17 }
+ },
+
+ {
+ { 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c,
+ 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x0c, 0x00 },
+
+ { "Test With Truncation" },
+
+ { 0x75, 0x46, 0xaf, 0x01, 0x84, 0x1f, 0xc0, 0x9b,
+ 0x1a, 0xb9, 0xc3, 0x74, 0x9a, 0x5f, 0x1c, 0x17,
+ 0xd4, 0xf5, 0x89, 0x66, 0x8a, 0x58, 0x7b, 0x27,
+ 0x00, 0xa9, 0xc9, 0x7c, 0x11, 0x93, 0xcf, 0x42 }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x00 },
+
+ { "Test Using Larger Than Block-Size Key - Hash Key First" },
+
+ { 0x69, 0x53, 0x02, 0x5e, 0xd9, 0x6f, 0x0c, 0x09,
+ 0xf8, 0x0a, 0x96, 0xf7, 0x8e, 0x65, 0x38, 0xdb,
+ 0xe2, 0xe7, 0xb8, 0x20, 0xe3, 0xdd, 0x97, 0x0e,
+ 0x7d, 0xdd, 0x39, 0x09, 0x1b, 0x32, 0x35, 0x2f }
+ },
+
+ {
+ { 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa,
+ 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0x00 },
+
+ { "Test Using Larger Than Block-Size Key and Larger Than "
+ "One Block-Size Data" },
+
+ { 0x63, 0x55, 0xac, 0x22, 0xe8, 0x90, 0xd0, 0xa3,
+ 0xc8, 0x48, 0x1a, 0x5c, 0xa4, 0x82, 0x5b, 0xc8,
+ 0x84, 0xd3, 0xe7, 0xa1, 0xff, 0x98, 0xa2, 0xfc,
+ 0x2a, 0xc7, 0xd8, 0xe0, 0x64, 0xc3, 0xb2, 0xe6 }
+ },
+};
+
+
+#endif /* CONFIG_CRYPTO_HMAC */
+
+/*
+ * SHA1 test vectors from FIPS PUB 180-1
+ */
+#define SHA1_TEST_VECTORS 2
+
+struct sha1_testvec {
+ char plaintext[128];
+ char digest[SHA1_DIGEST_SIZE];
+} sha1_tv_template[] = {
+ { "abc",
+ { 0xA9, 0x99, 0x3E, 0x36, 0x47, 0x06, 0x81, 0x6A, 0xBA, 0x3E,
+ 0x25, 0x71, 0x78, 0x50, 0xC2, 0x6C ,0x9C, 0xD0, 0xD8, 0x9D }
+ },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
+
+ { 0x84, 0x98, 0x3E, 0x44, 0x1C, 0x3B, 0xD2, 0x6E, 0xBA, 0xAE,
+ 0x4A, 0xA1, 0xF9, 0x51, 0x29, 0xE5, 0xE5, 0x46, 0x70, 0xF1 }
+ }
+};
+
+/*
+ * SHA256 test vectors from NIST
+ */
+#define SHA256_TEST_VECTORS 2
+
+struct sha256_testvec {
+ char plaintext[128];
+ char digest[SHA256_DIGEST_SIZE];
+} sha256_tv_template[] = {
+ { "abc",
+ { 0xba, 0x78, 0x16, 0xbf, 0x8f, 0x01, 0xcf, 0xea,
+ 0x41, 0x41, 0x40, 0xde, 0x5d, 0xae, 0x22, 0x23,
+ 0xb0, 0x03, 0x61, 0xa3, 0x96, 0x17, 0x7a, 0x9c,
+ 0xb4, 0x10, 0xff, 0x61, 0xf2, 0x00, 0x15, 0xad }
+ },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
+ { 0x24, 0x8d, 0x6a, 0x61, 0xd2, 0x06, 0x38, 0xb8,
+ 0xe5, 0xc0, 0x26, 0x93, 0x0c, 0x3e, 0x60, 0x39,
+ 0xa3, 0x3c, 0xe4, 0x59, 0x64, 0xff, 0x21, 0x67,
+ 0xf6, 0xec, 0xed, 0xd4, 0x19, 0xdb, 0x06, 0xc1 }
+ },
+};
+
+/*
+ * SHA384 test vectors from NIST and kerneli
+ */
+#define SHA384_TEST_VECTORS 4
+
+struct sha384_testvec {
+ char plaintext[128];
+ char digest[SHA384_DIGEST_SIZE];
+} sha384_tv_template[] = {
+
+ { "abc",
+ { 0xcb, 0x00, 0x75, 0x3f, 0x45, 0xa3, 0x5e, 0x8b,
+ 0xb5, 0xa0, 0x3d, 0x69, 0x9a, 0xc6, 0x50, 0x07,
+ 0x27, 0x2c, 0x32, 0xab, 0x0e, 0xde, 0xd1, 0x63,
+ 0x1a, 0x8b, 0x60, 0x5a, 0x43, 0xff, 0x5b, 0xed,
+ 0x80, 0x86, 0x07, 0x2b, 0xa1, 0xe7, 0xcc, 0x23,
+ 0x58, 0xba, 0xec, 0xa1, 0x34, 0xc8, 0x25, 0xa7 }
+ },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
+ { 0x33, 0x91, 0xfd, 0xdd, 0xfc, 0x8d, 0xc7, 0x39,
+ 0x37, 0x07, 0xa6, 0x5b, 0x1b, 0x47, 0x09, 0x39,
+ 0x7c, 0xf8, 0xb1, 0xd1, 0x62, 0xaf, 0x05, 0xab,
+ 0xfe, 0x8f, 0x45, 0x0d, 0xe5, 0xf3, 0x6b, 0xc6,
+ 0xb0, 0x45, 0x5a, 0x85, 0x20, 0xbc, 0x4e, 0x6f,
+ 0x5f, 0xe9, 0x5b, 0x1f, 0xe3, 0xc8, 0x45, 0x2b }
+ },
+
+ { "abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmn"
+ "hijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu",
+ { 0x09, 0x33, 0x0c, 0x33, 0xf7, 0x11, 0x47, 0xe8,
+ 0x3d, 0x19, 0x2f, 0xc7, 0x82, 0xcd, 0x1b, 0x47,
+ 0x53, 0x11, 0x1b, 0x17, 0x3b, 0x3b, 0x05, 0xd2,
+ 0x2f, 0xa0, 0x80, 0x86, 0xe3, 0xb0, 0xf7, 0x12,
+ 0xfc, 0xc7, 0xc7, 0x1a, 0x55, 0x7e, 0x2d, 0xb9,
+ 0x66, 0xc3, 0xe9, 0xfa, 0x91, 0x74, 0x60, 0x39 }
+ },
+
+ { "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd"
+ "efghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz",
+ { 0x3d, 0x20, 0x89, 0x73, 0xab, 0x35, 0x08, 0xdb,
+ 0xbd, 0x7e, 0x2c, 0x28, 0x62, 0xba, 0x29, 0x0a,
+ 0xd3, 0x01, 0x0e, 0x49, 0x78, 0xc1, 0x98, 0xdc,
+ 0x4d, 0x8f, 0xd0, 0x14, 0xe5, 0x82, 0x82, 0x3a,
+ 0x89, 0xe1, 0x6f, 0x9b, 0x2a, 0x7b, 0xbc, 0x1a,
+ 0xc9, 0x38, 0xe2, 0xd1, 0x99, 0xe8, 0xbe, 0xa4 }
+ },
+};
+
+/*
+ * SHA512 test vectors from NIST and kerneli
+ */
+#define SHA512_TEST_VECTORS 4
+
+struct sha512_testvec {
+ char plaintext[128];
+ char digest[SHA512_DIGEST_SIZE];
+} sha512_tv_template[] = {
+
+ { "abc",
+ { 0xdd, 0xaf, 0x35, 0xa1, 0x93, 0x61, 0x7a, 0xba,
+ 0xcc, 0x41, 0x73, 0x49, 0xae, 0x20, 0x41, 0x31,
+ 0x12, 0xe6, 0xfa, 0x4e, 0x89, 0xa9, 0x7e, 0xa2,
+ 0x0a, 0x9e, 0xee, 0xe6, 0x4b, 0x55, 0xd3, 0x9a,
+ 0x21, 0x92, 0x99, 0x2a, 0x27, 0x4f, 0xc1, 0xa8,
+ 0x36, 0xba, 0x3c, 0x23, 0xa3, 0xfe, 0xeb, 0xbd,
+ 0x45, 0x4d, 0x44, 0x23, 0x64, 0x3c, 0xe8, 0x0e,
+ 0x2a, 0x9a, 0xc9, 0x4f, 0xa5, 0x4c, 0xa4, 0x9f }
+ },
+
+ { "abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq",
+ { 0x20, 0x4a, 0x8f, 0xc6, 0xdd, 0xa8, 0x2f, 0x0a,
+ 0x0c, 0xed, 0x7b, 0xeb, 0x8e, 0x08, 0xa4, 0x16,
+ 0x57, 0xc1, 0x6e, 0xf4, 0x68, 0xb2, 0x28, 0xa8,
+ 0x27, 0x9b, 0xe3, 0x31, 0xa7, 0x03, 0xc3, 0x35,
+ 0x96, 0xfd, 0x15, 0xc1, 0x3b, 0x1b, 0x07, 0xf9,
+ 0xaa, 0x1d, 0x3b, 0xea, 0x57, 0x78, 0x9c, 0xa0,
+ 0x31, 0xad, 0x85, 0xc7, 0xa7, 0x1d, 0xd7, 0x03,
+ 0x54, 0xec, 0x63, 0x12, 0x38, 0xca, 0x34, 0x45 }
+ },
+
+ { "abcdefghbcdefghicdefghijdefghijkefghijklfghijklmghijklmn"
+ "hijklmnoijklmnopjklmnopqklmnopqrlmnopqrsmnopqrstnopqrstu",
+ { 0x8e, 0x95, 0x9b, 0x75, 0xda, 0xe3, 0x13, 0xda,
+ 0x8c, 0xf4, 0xf7, 0x28, 0x14, 0xfc, 0x14, 0x3f,
+ 0x8f, 0x77, 0x79, 0xc6, 0xeb, 0x9f, 0x7f, 0xa1,
+ 0x72, 0x99, 0xae, 0xad, 0xb6, 0x88, 0x90, 0x18,
+ 0x50, 0x1d, 0x28, 0x9e, 0x49, 0x00, 0xf7, 0xe4,
+ 0x33, 0x1b, 0x99, 0xde, 0xc4, 0xb5, 0x43, 0x3a,
+ 0xc7, 0xd3, 0x29, 0xee, 0xb6, 0xdd, 0x26, 0x54,
+ 0x5e, 0x96, 0xe5, 0x5b, 0x87, 0x4b, 0xe9, 0x09 }
+ },
+
+ { "abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcd"
+ "efghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz",
+ { 0x93, 0x0d, 0x0c, 0xef, 0xcb, 0x30, 0xff, 0x11,
+ 0x33, 0xb6, 0x89, 0x81, 0x21, 0xf1, 0xcf, 0x3d,
+ 0x27, 0x57, 0x8a, 0xfc, 0xaf, 0xe8, 0x67, 0x7c,
+ 0x52, 0x57, 0xcf, 0x06, 0x99, 0x11, 0xf7, 0x5d,
+ 0x8f, 0x58, 0x31, 0xb5, 0x6e, 0xbf, 0xda, 0x67,
+ 0xb2, 0x78, 0xe6, 0x6d, 0xff, 0x8b, 0x84, 0xfe,
+ 0x2b, 0x28, 0x70, 0xf7, 0x42, 0xa5, 0x80, 0xd8,
+ 0xed, 0xb4, 0x19, 0x87, 0x23, 0x28, 0x50, 0xc9
+ }
+ },
+};
+
+/*
+ * DES test vectors.
+ */
+#define DES_ENC_TEST_VECTORS 5
+#define DES_DEC_TEST_VECTORS 2
+#define DES_CBC_ENC_TEST_VECTORS 4
+#define DES_CBC_DEC_TEST_VECTORS 3
+#define DES3_EDE_ENC_TEST_VECTORS 3
+#define DES3_EDE_DEC_TEST_VECTORS 3
+
+struct des_tv {
+ unsigned int len;
+ int fail;
+ char key[24];
+ char iv[8];
+ char plaintext[128];
+ char result[128];
+};
+
+struct des_tv des_enc_tv_template[] = {
+
+ /* From Applied Cryptography */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0 },
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7 },
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d }
+ },
+
+ /* Same key, different plaintext block */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0 },
+ { 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99 },
+ { 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b }
+ },
+
+ /* Sbox test from NBS */
+ {
+ 8, 0,
+
+ { 0x7C, 0xA1, 0x10, 0x45, 0x4A, 0x1A, 0x6E, 0x57 },
+ { 0 },
+ { 0x01, 0xA1, 0xD6, 0xD0, 0x39, 0x77, 0x67, 0x42 },
+ { 0x69, 0x0F, 0x5B, 0x0D, 0x9A, 0x26, 0x93, 0x9B }
+ },
+
+ /* Three blocks */
+ {
+ 24, 0,
+
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+
+ { 0 },
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7,
+ 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99,
+ 0xca, 0xfe, 0xba, 0xbe, 0xfe, 0xed, 0xbe, 0xef },
+
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d,
+ 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b,
+ 0xb4, 0x99, 0x26, 0xf7, 0x1f, 0xe1, 0xd4, 0x90 },
+ },
+
+ /* Weak key */
+ {
+ 8, 1,
+
+ { 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01 },
+ { 0 },
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7 },
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d }
+ },
+
+ /* Two blocks -- for testing encryption across pages */
+ {
+ 16, 0,
+
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+
+ { 0 },
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7,
+ 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99 },
+
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d,
+ 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b }
+ },
+
+ /* Two blocks -- for testing decryption across pages */
+ {
+ 16, 0,
+
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+
+ { 0 },
+
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d,
+ 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b },
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7,
+ 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99 },
+ },
+
+ /* Four blocks -- for testing encryption with chunking */
+ {
+ 24, 0,
+
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+
+ { 0 },
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7,
+ 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99,
+ 0xca, 0xfe, 0xba, 0xbe, 0xfe, 0xed, 0xbe, 0xef,
+ 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99 },
+
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d,
+ 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b,
+ 0xb4, 0x99, 0x26, 0xf7, 0x1f, 0xe1, 0xd4, 0x90,
+ 0xf7, 0x9c, 0x89, 0x2a, 0x33, 0x8f, 0x4a, 0x8b },
+ },
+
+};
+
+struct des_tv des_dec_tv_template[] = {
+
+ /* From Applied Cryptography */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0 },
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d },
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7 },
+ },
+
+ /* Sbox test from NBS */
+ {
+ 8, 0,
+
+ { 0x7C, 0xA1, 0x10, 0x45, 0x4A, 0x1A, 0x6E, 0x57 },
+ { 0 },
+ { 0x69, 0x0F, 0x5B, 0x0D, 0x9A, 0x26, 0x93, 0x9B },
+ { 0x01, 0xA1, 0xD6, 0xD0, 0x39, 0x77, 0x67, 0x42 }
+ },
+
+ /* Two blocks, for chunking test */
+ {
+ 16, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0 },
+
+ { 0xc9, 0x57, 0x44, 0x25, 0x6a, 0x5e, 0xd3, 0x1d,
+ 0x69, 0x0F, 0x5B, 0x0D, 0x9A, 0x26, 0x93, 0x9B },
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xe7,
+ 0xa3, 0x99, 0x7b, 0xca, 0xaf, 0x69, 0xa0, 0xf5 }
+ },
+
+};
+
+struct des_tv des_cbc_enc_tv_template[] = {
+ /* From OpenSSL */
+ {
+ 24, 0,
+
+ {0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef},
+ {0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10},
+
+ { 0x37, 0x36, 0x35, 0x34, 0x33, 0x32, 0x31, 0x20,
+ 0x4E, 0x6F, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74,
+ 0x68, 0x65, 0x20, 0x74, 0x69, 0x6D, 0x65, 0x20,
+ 0x66, 0x6F, 0x72, 0x20, 0x00, 0x31, 0x00, 0x00 },
+
+ { 0xcc, 0xd1, 0x73, 0xff, 0xab, 0x20, 0x39, 0xf4,
+ 0xac, 0xd8, 0xae, 0xfd, 0xdf, 0xd8, 0xa1, 0xeb,
+ 0x46, 0x8e, 0x91, 0x15, 0x78, 0x88, 0xba, 0x68,
+ 0x1d, 0x26, 0x93, 0x97, 0xf7, 0xfe, 0x62, 0xb4 }
+ },
+
+ /* FIPS Pub 81 */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0x12, 0x34, 0x56, 0x78, 0x90, 0xab, 0xcd, 0xef },
+ { 0x4e, 0x6f, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74 },
+ { 0xe5, 0xc7, 0xcd, 0xde, 0x87, 0x2b, 0xf2, 0x7c },
+
+ },
+
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0xe5, 0xc7, 0xcd, 0xde, 0x87, 0x2b, 0xf2, 0x7c },
+ { 0x68, 0x65, 0x20, 0x74, 0x69, 0x6d, 0x65, 0x20 },
+ { 0x43, 0xe9, 0x34, 0x00, 0x8c, 0x38, 0x9c, 0x0f },
+ },
+
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0x43, 0xe9, 0x34, 0x00, 0x8c, 0x38, 0x9c, 0x0f },
+ { 0x66, 0x6f, 0x72, 0x20, 0x61, 0x6c, 0x6c, 0x20 },
+ { 0x68, 0x37, 0x88, 0x49, 0x9a, 0x7c, 0x05, 0xf6 },
+ },
+
+ /* Copy of openssl vector for chunk testing */
+
+ /* From OpenSSL */
+ {
+ 24, 0,
+
+ {0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef},
+ {0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10},
+
+ { 0x37, 0x36, 0x35, 0x34, 0x33, 0x32, 0x31, 0x20,
+ 0x4E, 0x6F, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74,
+ 0x68, 0x65, 0x20, 0x74, 0x69, 0x6D, 0x65, 0x20,
+ 0x66, 0x6F, 0x72, 0x20, 0x00, 0x31, 0x00, 0x00 },
+
+ { 0xcc, 0xd1, 0x73, 0xff, 0xab, 0x20, 0x39, 0xf4,
+ 0xac, 0xd8, 0xae, 0xfd, 0xdf, 0xd8, 0xa1, 0xeb,
+ 0x46, 0x8e, 0x91, 0x15, 0x78, 0x88, 0xba, 0x68,
+ 0x1d, 0x26, 0x93, 0x97, 0xf7, 0xfe, 0x62, 0xb4 }
+ },
+
+
+};
+
+struct des_tv des_cbc_dec_tv_template[] = {
+
+ /* FIPS Pub 81 */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0x12, 0x34, 0x56, 0x78, 0x90, 0xab, 0xcd, 0xef },
+ { 0xe5, 0xc7, 0xcd, 0xde, 0x87, 0x2b, 0xf2, 0x7c },
+ { 0x4e, 0x6f, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74 },
+ },
+
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0xe5, 0xc7, 0xcd, 0xde, 0x87, 0x2b, 0xf2, 0x7c },
+ { 0x43, 0xe9, 0x34, 0x00, 0x8c, 0x38, 0x9c, 0x0f },
+ { 0x68, 0x65, 0x20, 0x74, 0x69, 0x6d, 0x65, 0x20 },
+ },
+
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0x43, 0xe9, 0x34, 0x00, 0x8c, 0x38, 0x9c, 0x0f },
+ { 0x68, 0x37, 0x88, 0x49, 0x9a, 0x7c, 0x05, 0xf6 },
+ { 0x66, 0x6f, 0x72, 0x20, 0x61, 0x6c, 0x6c, 0x20 },
+ },
+
+ /* Copy of above, for chunk testing */
+
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef },
+ { 0x43, 0xe9, 0x34, 0x00, 0x8c, 0x38, 0x9c, 0x0f },
+ { 0x68, 0x37, 0x88, 0x49, 0x9a, 0x7c, 0x05, 0xf6 },
+ { 0x66, 0x6f, 0x72, 0x20, 0x61, 0x6c, 0x6c, 0x20 },
+ },
+};
+
+/*
+ * We really need some more test vectors, especially for DES3 CBC.
+ */
+struct des_tv des3_ede_enc_tv_template[] = {
+
+ /* These are from openssl */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10},
+
+ { 0 },
+
+ { 0x73, 0x6F, 0x6D, 0x65, 0x64, 0x61, 0x74, 0x61 },
+
+ { 0x18, 0xd7, 0x48, 0xe5, 0x63, 0x62, 0x05, 0x72 },
+ },
+
+ {
+ 8, 0,
+
+ { 0x03,0x52,0x02,0x07,0x67,0x20,0x82,0x17,
+ 0x86,0x02,0x87,0x66,0x59,0x08,0x21,0x98,
+ 0x64,0x05,0x6A,0xBD,0xFE,0xA9,0x34,0x57 },
+
+ { 0 },
+
+ { 0x73,0x71,0x75,0x69,0x67,0x67,0x6C,0x65 },
+
+ { 0xc0,0x7d,0x2a,0x0f,0xa5,0x66,0xfa,0x30 }
+ },
+
+
+ {
+ 8, 0,
+
+ { 0x10,0x46,0x10,0x34,0x89,0x98,0x80,0x20,
+ 0x91,0x07,0xD0,0x15,0x89,0x19,0x01,0x01,
+ 0x19,0x07,0x92,0x10,0x98,0x1A,0x01,0x01 },
+
+ { 0 },
+
+ { 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00 },
+
+ { 0xe1,0xef,0x62,0xc3,0x32,0xfe,0x82,0x5b }
+ },
+};
+
+struct des_tv des3_ede_dec_tv_template[] = {
+
+ /* These are from openssl */
+ {
+ 8, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10},
+
+ { 0 },
+
+
+ { 0x18, 0xd7, 0x48, 0xe5, 0x63, 0x62, 0x05, 0x72 },
+
+ { 0x73, 0x6F, 0x6D, 0x65, 0x64, 0x61, 0x74, 0x61 },
+ },
+
+ {
+ 8, 0,
+
+ { 0x03,0x52,0x02,0x07,0x67,0x20,0x82,0x17,
+ 0x86,0x02,0x87,0x66,0x59,0x08,0x21,0x98,
+ 0x64,0x05,0x6A,0xBD,0xFE,0xA9,0x34,0x57 },
+
+ { 0 },
+
+ { 0xc0,0x7d,0x2a,0x0f,0xa5,0x66,0xfa,0x30 },
+
+ { 0x73,0x71,0x75,0x69,0x67,0x67,0x6C,0x65 },
+
+ },
+
+
+ {
+ 8, 0,
+
+ { 0x10,0x46,0x10,0x34,0x89,0x98,0x80,0x20,
+ 0x91,0x07,0xD0,0x15,0x89,0x19,0x01,0x01,
+ 0x19,0x07,0x92,0x10,0x98,0x1A,0x01,0x01 },
+
+ { 0 },
+
+ { 0xe1,0xef,0x62,0xc3,0x32,0xfe,0x82,0x5b },
+
+ { 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00 },
+ },
+};
+
+/*
+ * Blowfish test vectors.
+ */
+#define BF_ENC_TEST_VECTORS 6
+#define BF_DEC_TEST_VECTORS 6
+#define BF_CBC_ENC_TEST_VECTORS 1
+#define BF_CBC_DEC_TEST_VECTORS 1
+
+struct bf_tv {
+ unsigned int keylen;
+ unsigned int plen;
+ unsigned int rlen;
+ int fail;
+ char key[56];
+ char iv[8];
+ char plaintext[32];
+ char result[32];
+};
+
+struct bf_tv bf_enc_tv_template[] = {
+
+ /* DES test vectors from OpenSSL */
+ {
+ 8, 8, 8, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, },
+ { 0 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x4E, 0xF9, 0x97, 0x45, 0x61, 0x98, 0xDD, 0x78 },
+ },
+
+ {
+ 8, 8, 8, 0,
+ { 0x1F, 0x1F, 0x1F, 0x1F, 0x0E, 0x0E, 0x0E, 0x0E, },
+ { 0 },
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF },
+ { 0xA7, 0x90, 0x79, 0x51, 0x08, 0xEA, 0x3C, 0xAE },
+ },
+
+ {
+ 8, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87, },
+ { 0 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+ { 0xE8, 0x7A, 0x24, 0x4E, 0x2C, 0xC8, 0x5E, 0x82 }
+ },
+
+ /* Vary the keylength... */
+
+ {
+ 16, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F },
+ { 0 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+ { 0x93, 0x14, 0x28, 0x87, 0xEE, 0x3B, 0xE1, 0x5C }
+ },
+
+ {
+ 21, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F,
+ 0x00, 0x11, 0x22, 0x33, 0x44 },
+ { 0 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+ { 0xE6, 0xF5, 0x1E, 0xD7, 0x9B, 0x9D, 0xB2, 0x1F }
+ },
+
+ /* Generated with bf488 */
+ {
+ 56, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x04, 0x68, 0x91, 0x04, 0xC2, 0xFD, 0x3B, 0x2F,
+ 0x58, 0x40, 0x23, 0x64, 0x1A, 0xBA, 0x61, 0x76,
+ 0x1F, 0x1F, 0x1F, 0x1F, 0x0E, 0x0E, 0x0E, 0x0E,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
+ { 0 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+ { 0xc0, 0x45, 0x04, 0x01, 0x2e, 0x4e, 0x1f, 0x53 }
+ }
+
+};
+
+struct bf_tv bf_dec_tv_template[] = {
+
+ /* DES test vectors from OpenSSL */
+ {
+ 8, 8, 8, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, },
+ { 0 },
+ { 0x4E, 0xF9, 0x97, 0x45, 0x61, 0x98, 0xDD, 0x78 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }
+ },
+
+ {
+ 8, 8, 8, 0,
+ { 0x1F, 0x1F, 0x1F, 0x1F, 0x0E, 0x0E, 0x0E, 0x0E, },
+ { 0 },
+ { 0xA7, 0x90, 0x79, 0x51, 0x08, 0xEA, 0x3C, 0xAE },
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF }
+ },
+
+ {
+ 8, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87, },
+ { 0 },
+ { 0xE8, 0x7A, 0x24, 0x4E, 0x2C, 0xC8, 0x5E, 0x82 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 }
+ },
+
+ /* Vary the keylength... */
+
+ {
+ 16, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F },
+ { 0 },
+ { 0x93, 0x14, 0x28, 0x87, 0xEE, 0x3B, 0xE1, 0x5C },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 }
+ },
+
+ {
+ 21, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F,
+ 0x00, 0x11, 0x22, 0x33, 0x44 },
+ { 0 },
+ { 0xE6, 0xF5, 0x1E, 0xD7, 0x9B, 0x9D, 0xB2, 0x1F },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 }
+ },
+
+ /* Generated with bf488, using OpenSSL, Libgcrypt and Nettle */
+ {
+ 56, 8, 8, 0,
+ { 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87,
+ 0x78, 0x69, 0x5A, 0x4B, 0x3C, 0x2D, 0x1E, 0x0F,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x04, 0x68, 0x91, 0x04, 0xC2, 0xFD, 0x3B, 0x2F,
+ 0x58, 0x40, 0x23, 0x64, 0x1A, 0xBA, 0x61, 0x76,
+ 0x1F, 0x1F, 0x1F, 0x1F, 0x0E, 0x0E, 0x0E, 0x0E,
+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
+ { 0 },
+ { 0xc0, 0x45, 0x04, 0x01, 0x2e, 0x4e, 0x1f, 0x53 },
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 }
+ }
+};
+
+struct bf_tv bf_cbc_enc_tv_template[] = {
+
+ /* From OpenSSL */
+ {
+ 16, 32, 32, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87 },
+
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+
+ { 0x37, 0x36, 0x35, 0x34, 0x33, 0x32, 0x31, 0x20,
+ 0x4E, 0x6F, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74,
+ 0x68, 0x65, 0x20, 0x74, 0x69, 0x6D, 0x65, 0x20,
+ 0x66, 0x6F, 0x72, 0x20, 0x00, 0x00, 0x00, 0x00 },
+
+ { 0x6B, 0x77, 0xB4, 0xD6, 0x30, 0x06, 0xDE, 0xE6,
+ 0x05, 0xB1, 0x56, 0xE2, 0x74, 0x03, 0x97, 0x93,
+ 0x58, 0xDE, 0xB9, 0xE7, 0x15, 0x46, 0x16, 0xD9,
+ 0x59, 0xF1, 0x65, 0x2B, 0xD5, 0xFF, 0x92, 0xCC }
+ },
+};
+
+struct bf_tv bf_cbc_dec_tv_template[] = {
+
+ /* From OpenSSL */
+ {
+ 16, 32, 32, 0,
+
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xF0, 0xE1, 0xD2, 0xC3, 0xB4, 0xA5, 0x96, 0x87 },
+
+ { 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10 },
+
+ { 0x6B, 0x77, 0xB4, 0xD6, 0x30, 0x06, 0xDE, 0xE6,
+ 0x05, 0xB1, 0x56, 0xE2, 0x74, 0x03, 0x97, 0x93,
+ 0x58, 0xDE, 0xB9, 0xE7, 0x15, 0x46, 0x16, 0xD9,
+ 0x59, 0xF1, 0x65, 0x2B, 0xD5, 0xFF, 0x92, 0xCC },
+
+ { 0x37, 0x36, 0x35, 0x34, 0x33, 0x32, 0x31, 0x20,
+ 0x4E, 0x6F, 0x77, 0x20, 0x69, 0x73, 0x20, 0x74,
+ 0x68, 0x65, 0x20, 0x74, 0x69, 0x6D, 0x65, 0x20,
+ 0x66, 0x6F, 0x72, 0x20, 0x00, 0x00, 0x00, 0x00 }
+ },
+};
+
+/*
+ * Twofish test vectors.
+ */
+#define TF_ENC_TEST_VECTORS 3
+#define TF_DEC_TEST_VECTORS 3
+#define TF_CBC_ENC_TEST_VECTORS 4
+#define TF_CBC_DEC_TEST_VECTORS 4
+
+struct tf_tv {
+ unsigned int keylen;
+ unsigned int plen;
+ unsigned int rlen;
+ int fail;
+ char key[32];
+ char iv[16];
+ char plaintext[48];
+ char result[48];
+};
+
+struct tf_tv tf_enc_tv_template[] = {
+ {
+ 16, 16, 16, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9F, 0x58, 0x9F, 0x5C, 0xF6, 0x12, 0x2C, 0x32,
+ 0xB6, 0xBF, 0xEC, 0x2F, 0x2A, 0xE8, 0xC3, 0x5A }
+ },
+ {
+ 24, 16, 16, 0,
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 },
+ { 0 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0xCF, 0xD1, 0xD2, 0xE5, 0xA9, 0xBE, 0x9C, 0xDF,
+ 0x50, 0x1F, 0x13, 0xB8, 0x92, 0xBD, 0x22, 0x48 }
+ },
+ {
+ 32, 16, 16, 0,
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF },
+ { 0 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x37, 0x52, 0x7B, 0xE0, 0x05, 0x23, 0x34, 0xB8,
+ 0x9F, 0x0C, 0xFC, 0xCA, 0xE8, 0x7C, 0xFA, 0x20 }
+ },
+};
+
+struct tf_tv tf_dec_tv_template[] = {
+ {
+ 16, 16, 16, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0 },
+ { 0x9F, 0x58, 0x9F, 0x5C, 0xF6, 0x12, 0x2C, 0x32,
+ 0xB6, 0xBF, 0xEC, 0x2F, 0x2A, 0xE8, 0xC3, 0x5A },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+ {
+ 24, 16, 16, 0,
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 },
+ { 0 },
+ { 0xCF, 0xD1, 0xD2, 0xE5, 0xA9, 0xBE, 0x9C, 0xDF,
+ 0x50, 0x1F, 0x13, 0xB8, 0x92, 0xBD, 0x22, 0x48 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+ {
+ 32, 16, 16, 0,
+ { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF,
+ 0xFE, 0xDC, 0xBA, 0x98, 0x76, 0x54, 0x32, 0x10,
+ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF },
+ { 0 },
+ { 0x37, 0x52, 0x7B, 0xE0, 0x05, 0x23, 0x34, 0xB8,
+ 0x9F, 0x0C, 0xFC, 0xCA, 0xE8, 0x7C, 0xFA, 0x20 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+};
+
+struct tf_tv tf_cbc_enc_tv_template[] = {
+ /* Generated with Nettle */
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a },
+ },
+
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19 },
+ },
+
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x05, 0xef, 0x8c, 0x61, 0xa8, 0x11, 0x58, 0x26,
+ 0x34, 0xba, 0x5c, 0xb7, 0x10, 0x6a, 0xa6, 0x41 },
+ },
+
+ {
+ 16, 48, 48, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a,
+ 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19,
+ 0x05, 0xef, 0x8c, 0x61, 0xa8, 0x11, 0x58, 0x26,
+ 0x34, 0xba, 0x5c, 0xb7, 0x10, 0x6a, 0xa6, 0x41 },
+ },
+};
+
+struct tf_tv tf_cbc_dec_tv_template[] = {
+ /* Reverse of the first four above */
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a },
+ { 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19 },
+ { 0x05, 0xef, 0x8c, 0x61, 0xa8, 0x11, 0x58, 0x26,
+ 0x34, 0xba, 0x5c, 0xb7, 0x10, 0x6a, 0xa6, 0x41 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+
+ {
+ 16, 48, 48, 0,
+
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0x9f, 0x58, 0x9f, 0x5c, 0xf6, 0x12, 0x2c, 0x32,
+ 0xb6, 0xbf, 0xec, 0x2f, 0x2a, 0xe8, 0xc3, 0x5a,
+ 0xd4, 0x91, 0xdb, 0x16, 0xe7, 0xb1, 0xc3, 0x9e,
+ 0x86, 0xcb, 0x08, 0x6b, 0x78, 0x9f, 0x54, 0x19,
+ 0x05, 0xef, 0x8c, 0x61, 0xa8, 0x11, 0x58, 0x26,
+ 0x34, 0xba, 0x5c, 0xb7, 0x10, 0x6a, 0xa6, 0x41 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ },
+};
+
+/*
+ * Serpent test vectors. These are backwards because Serpent writes
+ * octet sequences in right-to-left mode.
+ */
+#define SERPENT_ENC_TEST_VECTORS 4
+#define SERPENT_DEC_TEST_VECTORS 4
+
+struct serpent_tv {
+ unsigned int keylen, fail;
+ u8 key[32], plaintext[16], result[16];
+};
+
+struct serpent_tv serpent_enc_tv_template[] =
+{
+ {
+ 0, 0,
+ { 0 },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0x12, 0x07, 0xfc, 0xce, 0x9b, 0xd0, 0xd6, 0x47,
+ 0x6a, 0xe9, 0x8f, 0xbe, 0xd1, 0x43, 0xa0, 0xe2 }
+ },
+ {
+ 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0x4c, 0x7d, 0x8a, 0x32, 0x80, 0x72, 0xa2, 0x2c,
+ 0x82, 0x3e, 0x4a, 0x1f, 0x3a, 0xcd, 0xa1, 0x6d }
+ },
+ {
+ 32, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0xde, 0x26, 0x9f, 0xf8, 0x33, 0xe4, 0x32, 0xb8,
+ 0x5b, 0x2e, 0x88, 0xd2, 0x70, 0x1c, 0xe7, 0x5c }
+ },
+ {
+ 16, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 },
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ { 0xdd, 0xd2, 0x6b, 0x98, 0xa5, 0xff, 0xd8, 0x2c,
+ 0x05, 0x34, 0x5a, 0x9d, 0xad, 0xbf, 0xaf, 0x49}
+ }
+};
+
+struct serpent_tv serpent_dec_tv_template[] =
+{
+ {
+ 0, 0,
+ { 0 },
+ { 0x12, 0x07, 0xfc, 0xce, 0x9b, 0xd0, 0xd6, 0x47,
+ 0x6a, 0xe9, 0x8f, 0xbe, 0xd1, 0x43, 0xa0, 0xe2 },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+
+ },
+ {
+ 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0x4c, 0x7d, 0x8a, 0x32, 0x80, 0x72, 0xa2, 0x2c,
+ 0x82, 0x3e, 0x4a, 0x1f, 0x3a, 0xcd, 0xa1, 0x6d },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ },
+ {
+ 32, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+
+ { 0xde, 0x26, 0x9f, 0xf8, 0x33, 0xe4, 0x32, 0xb8,
+ 0x5b, 0x2e, 0x88, 0xd2, 0x70, 0x1c, 0xe7, 0x5c },
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ },
+ {
+ 16, 0,
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80 },
+ { 0xdd, 0xd2, 0x6b, 0x98, 0xa5, 0xff, 0xd8, 0x2c,
+ 0x05, 0x34, 0x5a, 0x9d, 0xad, 0xbf, 0xaf, 0x49},
+ { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 },
+ }
+};
+
+/*
+ * AES test vectors.
+ */
+#define AES_ENC_TEST_VECTORS 3
+#define AES_DEC_TEST_VECTORS 3
+
+struct aes_tv {
+ unsigned int keylen;
+ unsigned int plen;
+ unsigned int rlen;
+ int fail;
+ char key[32];
+ char iv[8];
+ char plaintext[16];
+ char result[16];
+};
+
+struct aes_tv aes_enc_tv_template[] = {
+ /* From FIPS-197 */
+ {
+ 16, 16, 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0 },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ { 0x69, 0xc4, 0xe0, 0xd8, 0x6a, 0x7b, 0x04, 0x30,
+ 0xd8, 0xcd, 0xb7, 0x80, 0x70, 0xb4, 0xc5, 0x5a },
+ },
+ {
+ 24, 16, 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17 },
+ { 0 },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ { 0xdd, 0xa9, 0x7c, 0xa4, 0x86, 0x4c, 0xdf, 0xe0,
+ 0x6e, 0xaf, 0x70, 0xa0, 0xec, 0x0d, 0x71, 0x91 },
+ },
+ {
+ 32, 16, 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ { 0 },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ { 0x8e, 0xa2, 0xb7, 0xca, 0x51, 0x67, 0x45, 0xbf,
+ 0xea, 0xfc, 0x49, 0x90, 0x4b, 0x49, 0x60, 0x89 },
+ },
+};
+
+struct aes_tv aes_dec_tv_template[] = {
+ /* From FIPS-197 */
+ {
+ 16, 16, 16, 0,
+
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f },
+ { 0 },
+ { 0x69, 0xc4, 0xe0, 0xd8, 0x6a, 0x7b, 0x04, 0x30,
+ 0xd8, 0xcd, 0xb7, 0x80, 0x70, 0xb4, 0xc5, 0x5a },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ },
+
+ {
+ 24, 16, 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17 },
+ { 0 },
+ { 0xdd, 0xa9, 0x7c, 0xa4, 0x86, 0x4c, 0xdf, 0xe0,
+ 0x6e, 0xaf, 0x70, 0xa0, 0xec, 0x0d, 0x71, 0x91 },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ },
+ {
+ 32, 16, 16, 0,
+ { 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f,
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f },
+ { 0 },
+ { 0x8e, 0xa2, 0xb7, 0xca, 0x51, 0x67, 0x45, 0xbf,
+ 0xea, 0xfc, 0x49, 0x90, 0x4b, 0x49, 0x60, 0x89 },
+ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77,
+ 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff },
+ },
+};
+
+/*
+ * Compression stuff.
+ */
+#define COMP_BUF_SIZE 512
+
+struct comp_testvec {
+ int inlen, outlen;
+ char input[COMP_BUF_SIZE];
+ char output[COMP_BUF_SIZE];
+};
+
+/*
+ * Deflate test vectors (null-terminated strings).
+ * Params: winbits=11, Z_DEFAULT_COMPRESSION, MAX_MEM_LEVEL.
+ */
+#define DEFLATE_COMP_TEST_VECTORS 2
+#define DEFLATE_DECOMP_TEST_VECTORS 2
+
+struct comp_testvec deflate_comp_tv_template[] = {
+ {
+ 70, 38,
+
+ "Join us now and share the software "
+ "Join us now and share the software ",
+
+ { 0xf3, 0xca, 0xcf, 0xcc, 0x53, 0x28, 0x2d, 0x56,
+ 0xc8, 0xcb, 0x2f, 0x57, 0x48, 0xcc, 0x4b, 0x51,
+ 0x28, 0xce, 0x48, 0x2c, 0x4a, 0x55, 0x28, 0xc9,
+ 0x48, 0x55, 0x28, 0xce, 0x4f, 0x2b, 0x29, 0x07,
+ 0x71, 0xbc, 0x08, 0x2b, 0x01, 0x00
+ },
+ },
+
+ {
+ 191, 122,
+
+ "This document describes a compression method based on the DEFLATE"
+ "compression algorithm. This document defines the application of "
+ "the DEFLATE algorithm to the IP Payload Compression Protocol.",
+
+ { 0x5d, 0x8d, 0x31, 0x0e, 0xc2, 0x30, 0x10, 0x04,
+ 0xbf, 0xb2, 0x2f, 0xc8, 0x1f, 0x10, 0x04, 0x09,
+ 0x89, 0xc2, 0x85, 0x3f, 0x70, 0xb1, 0x2f, 0xf8,
+ 0x24, 0xdb, 0x67, 0xd9, 0x47, 0xc1, 0xef, 0x49,
+ 0x68, 0x12, 0x51, 0xae, 0x76, 0x67, 0xd6, 0x27,
+ 0x19, 0x88, 0x1a, 0xde, 0x85, 0xab, 0x21, 0xf2,
+ 0x08, 0x5d, 0x16, 0x1e, 0x20, 0x04, 0x2d, 0xad,
+ 0xf3, 0x18, 0xa2, 0x15, 0x85, 0x2d, 0x69, 0xc4,
+ 0x42, 0x83, 0x23, 0xb6, 0x6c, 0x89, 0x71, 0x9b,
+ 0xef, 0xcf, 0x8b, 0x9f, 0xcf, 0x33, 0xca, 0x2f,
+ 0xed, 0x62, 0xa9, 0x4c, 0x80, 0xff, 0x13, 0xaf,
+ 0x52, 0x37, 0xed, 0x0e, 0x52, 0x6b, 0x59, 0x02,
+ 0xd9, 0x4e, 0xe8, 0x7a, 0x76, 0x1d, 0x02, 0x98,
+ 0xfe, 0x8a, 0x87, 0x83, 0xa3, 0x4f, 0x56, 0x8a,
+ 0xb8, 0x9e, 0x8e, 0x5c, 0x57, 0xd3, 0xa0, 0x79,
+ 0xfa, 0x02 },
+ },
+};
+
+struct comp_testvec deflate_decomp_tv_template[] = {
+ {
+ 122, 191,
+
+ { 0x5d, 0x8d, 0x31, 0x0e, 0xc2, 0x30, 0x10, 0x04,
+ 0xbf, 0xb2, 0x2f, 0xc8, 0x1f, 0x10, 0x04, 0x09,
+ 0x89, 0xc2, 0x85, 0x3f, 0x70, 0xb1, 0x2f, 0xf8,
+ 0x24, 0xdb, 0x67, 0xd9, 0x47, 0xc1, 0xef, 0x49,
+ 0x68, 0x12, 0x51, 0xae, 0x76, 0x67, 0xd6, 0x27,
+ 0x19, 0x88, 0x1a, 0xde, 0x85, 0xab, 0x21, 0xf2,
+ 0x08, 0x5d, 0x16, 0x1e, 0x20, 0x04, 0x2d, 0xad,
+ 0xf3, 0x18, 0xa2, 0x15, 0x85, 0x2d, 0x69, 0xc4,
+ 0x42, 0x83, 0x23, 0xb6, 0x6c, 0x89, 0x71, 0x9b,
+ 0xef, 0xcf, 0x8b, 0x9f, 0xcf, 0x33, 0xca, 0x2f,
+ 0xed, 0x62, 0xa9, 0x4c, 0x80, 0xff, 0x13, 0xaf,
+ 0x52, 0x37, 0xed, 0x0e, 0x52, 0x6b, 0x59, 0x02,
+ 0xd9, 0x4e, 0xe8, 0x7a, 0x76, 0x1d, 0x02, 0x98,
+ 0xfe, 0x8a, 0x87, 0x83, 0xa3, 0x4f, 0x56, 0x8a,
+ 0xb8, 0x9e, 0x8e, 0x5c, 0x57, 0xd3, 0xa0, 0x79,
+ 0xfa, 0x02 },
+
+ "This document describes a compression method based on the DEFLATE"
+ "compression algorithm. This document defines the application of "
+ "the DEFLATE algorithm to the IP Payload Compression Protocol.",
+ },
+
+ {
+ 38, 70,
+
+ { 0xf3, 0xca, 0xcf, 0xcc, 0x53, 0x28, 0x2d, 0x56,
+ 0xc8, 0xcb, 0x2f, 0x57, 0x48, 0xcc, 0x4b, 0x51,
+ 0x28, 0xce, 0x48, 0x2c, 0x4a, 0x55, 0x28, 0xc9,
+ 0x48, 0x55, 0x28, 0xce, 0x4f, 0x2b, 0x29, 0x07,
+ 0x71, 0xbc, 0x08, 0x2b, 0x01, 0x00
+ },
+
+ "Join us now and share the software "
+ "Join us now and share the software ",
+ },
+};
+
+#endif /* _CRYPTO_TCRYPT_H */
diff -Nru a/crypto/twofish.c b/crypto/twofish.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/crypto/twofish.c Thu May 8 10:41:38 2003
@@ -0,0 +1,900 @@
+/*
+ * Twofish for CryptoAPI
+ *
+ * Originally Twofish for GPG
+ * By Matthew Skala <mskala@ansuz.sooke.bc.ca>, July 26, 1998
+ * 256-bit key length added March 20, 1999
+ * Some modifications to reduce the text size by Werner Koch, April, 1998
+ * Ported to the kerneli patch by Marc Mutz <Marc@Mutz.com>
+ * Ported to CryptoAPI by Colin Slater <hoho@tacomeat.net>
+ *
+ * The original author has disclaimed all copyright interest in this
+ * code and thus put it in the public domain. The subsequent authors
+ * have put this under the GNU General Public License.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+ * USA
+ *
+ * This code is a "clean room" implementation, written from the paper
+ * _Twofish: A 128-Bit Block Cipher_ by Bruce Schneier, John Kelsey,
+ * Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson, available
+ * through http://www.counterpane.com/twofish.html
+ *
+ * For background information on multiplication in finite fields, used for
+ * the matrix operations in the key schedule, see the book _Contemporary
+ * Abstract Algebra_ by Joseph A. Gallian, especially chapter 22 in the
+ * Third Edition.
+ */
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/crypto.h>
+
+
+/* The large precomputed tables for the Twofish cipher (twofish.c)
+ * Taken from the same source as twofish.c
+ * Marc Mutz <Marc@Mutz.com>
+ */
+
+/* These two tables are the q0 and q1 permutations, exactly as described in
+ * the Twofish paper. */
+
+static const u8 q0[256] = {
+ 0xA9, 0x67, 0xB3, 0xE8, 0x04, 0xFD, 0xA3, 0x76, 0x9A, 0x92, 0x80, 0x78,
+ 0xE4, 0xDD, 0xD1, 0x38, 0x0D, 0xC6, 0x35, 0x98, 0x18, 0xF7, 0xEC, 0x6C,
+ 0x43, 0x75, 0x37, 0x26, 0xFA, 0x13, 0x94, 0x48, 0xF2, 0xD0, 0x8B, 0x30,
+ 0x84, 0x54, 0xDF, 0x23, 0x19, 0x5B, 0x3D, 0x59, 0xF3, 0xAE, 0xA2, 0x82,
+ 0x63, 0x01, 0x83, 0x2E, 0xD9, 0x51, 0x9B, 0x7C, 0xA6, 0xEB, 0xA5, 0xBE,
+ 0x16, 0x0C, 0xE3, 0x61, 0xC0, 0x8C, 0x3A, 0xF5, 0x73, 0x2C, 0x25, 0x0B,
+ 0xBB, 0x4E, 0x89, 0x6B, 0x53, 0x6A, 0xB4, 0xF1, 0xE1, 0xE6, 0xBD, 0x45,
+ 0xE2, 0xF4, 0xB6, 0x66, 0xCC, 0x95, 0x03, 0x56, 0xD4, 0x1C, 0x1E, 0xD7,
+ 0xFB, 0xC3, 0x8E, 0xB5, 0xE9, 0xCF, 0xBF, 0xBA, 0xEA, 0x77, 0x39, 0xAF,
+ 0x33, 0xC9, 0x62, 0x71, 0x81, 0x79, 0x09, 0xAD, 0x24, 0xCD, 0xF9, 0xD8,
+ 0xE5, 0xC5, 0xB9, 0x4D, 0x44, 0x08, 0x86, 0xE7, 0xA1, 0x1D, 0xAA, 0xED,
+ 0x06, 0x70, 0xB2, 0xD2, 0x41, 0x7B, 0xA0, 0x11, 0x31, 0xC2, 0x27, 0x90,
+ 0x20, 0xF6, 0x60, 0xFF, 0x96, 0x5C, 0xB1, 0xAB, 0x9E, 0x9C, 0x52, 0x1B,
+ 0x5F, 0x93, 0x0A, 0xEF, 0x91, 0x85, 0x49, 0xEE, 0x2D, 0x4F, 0x8F, 0x3B,
+ 0x47, 0x87, 0x6D, 0x46, 0xD6, 0x3E, 0x69, 0x64, 0x2A, 0xCE, 0xCB, 0x2F,
+ 0xFC, 0x97, 0x05, 0x7A, 0xAC, 0x7F, 0xD5, 0x1A, 0x4B, 0x0E, 0xA7, 0x5A,
+ 0x28, 0x14, 0x3F, 0x29, 0x88, 0x3C, 0x4C, 0x02, 0xB8, 0xDA, 0xB0, 0x17,
+ 0x55, 0x1F, 0x8A, 0x7D, 0x57, 0xC7, 0x8D, 0x74, 0xB7, 0xC4, 0x9F, 0x72,
+ 0x7E, 0x15, 0x22, 0x12, 0x58, 0x07, 0x99, 0x34, 0x6E, 0x50, 0xDE, 0x68,
+ 0x65, 0xBC, 0xDB, 0xF8, 0xC8, 0xA8, 0x2B, 0x40, 0xDC, 0xFE, 0x32, 0xA4,
+ 0xCA, 0x10, 0x21, 0xF0, 0xD3, 0x5D, 0x0F, 0x00, 0x6F, 0x9D, 0x36, 0x42,
+ 0x4A, 0x5E, 0xC1, 0xE0
+};
+
+static const u8 q1[256] = {
+ 0x75, 0xF3, 0xC6, 0xF4, 0xDB, 0x7B, 0xFB, 0xC8, 0x4A, 0xD3, 0xE6, 0x6B,
+ 0x45, 0x7D, 0xE8, 0x4B, 0xD6, 0x32, 0xD8, 0xFD, 0x37, 0x71, 0xF1, 0xE1,
+ 0x30, 0x0F, 0xF8, 0x1B, 0x87, 0xFA, 0x06, 0x3F, 0x5E, 0xBA, 0xAE, 0x5B,
+ 0x8A, 0x00, 0xBC, 0x9D, 0x6D, 0xC1, 0xB1, 0x0E, 0x80, 0x5D, 0xD2, 0xD5,
+ 0xA0, 0x84, 0x07, 0x14, 0xB5, 0x90, 0x2C, 0xA3, 0xB2, 0x73, 0x4C, 0x54,
+ 0x92, 0x74, 0x36, 0x51, 0x38, 0xB0, 0xBD, 0x5A, 0xFC, 0x60, 0x62, 0x96,
+ 0x6C, 0x42, 0xF7, 0x10, 0x7C, 0x28, 0x27, 0x8C, 0x13, 0x95, 0x9C, 0xC7,
+ 0x24, 0x46, 0x3B, 0x70, 0xCA, 0xE3, 0x85, 0xCB, 0x11, 0xD0, 0x93, 0xB8,
+ 0xA6, 0x83, 0x20, 0xFF, 0x9F, 0x77, 0xC3, 0xCC, 0x03, 0x6F, 0x08, 0xBF,
+ 0x40, 0xE7, 0x2B, 0xE2, 0x79, 0x0C, 0xAA, 0x82, 0x41, 0x3A, 0xEA, 0xB9,
+ 0xE4, 0x9A, 0xA4, 0x97, 0x7E, 0xDA, 0x7A, 0x17, 0x66, 0x94, 0xA1, 0x1D,
+ 0x3D, 0xF0, 0xDE, 0xB3, 0x0B, 0x72, 0xA7, 0x1C, 0xEF, 0xD1, 0x53, 0x3E,
+ 0x8F, 0x33, 0x26, 0x5F, 0xEC, 0x76, 0x2A, 0x49, 0x81, 0x88, 0xEE, 0x21,
+ 0xC4, 0x1A, 0xEB, 0xD9, 0xC5, 0x39, 0x99, 0xCD, 0xAD, 0x31, 0x8B, 0x01,
+ 0x18, 0x23, 0xDD, 0x1F, 0x4E, 0x2D, 0xF9, 0x48, 0x4F, 0xF2, 0x65, 0x8E,
+ 0x78, 0x5C, 0x58, 0x19, 0x8D, 0xE5, 0x98, 0x57, 0x67, 0x7F, 0x05, 0x64,
+ 0xAF, 0x63, 0xB6, 0xFE, 0xF5, 0xB7, 0x3C, 0xA5, 0xCE, 0xE9, 0x68, 0x44,
+ 0xE0, 0x4D, 0x43, 0x69, 0x29, 0x2E, 0xAC, 0x15, 0x59, 0xA8, 0x0A, 0x9E,
+ 0x6E, 0x47, 0xDF, 0x34, 0x35, 0x6A, 0xCF, 0xDC, 0x22, 0xC9, 0xC0, 0x9B,
+ 0x89, 0xD4, 0xED, 0xAB, 0x12, 0xA2, 0x0D, 0x52, 0xBB, 0x02, 0x2F, 0xA9,
+ 0xD7, 0x61, 0x1E, 0xB4, 0x50, 0x04, 0xF6, 0xC2, 0x16, 0x25, 0x86, 0x56,
+ 0x55, 0x09, 0xBE, 0x91
+};
+
+/* These MDS tables are actually tables of MDS composed with q0 and q1,
+ * because it is only ever used that way and we can save some time by
+ * precomputing. Of course the main saving comes from precomputing the
+ * GF(2^8) multiplication involved in the MDS matrix multiply; by looking
+ * things up in these tables we reduce the matrix multiply to four lookups
+ * and three XORs. Semi-formally, the definition of these tables is:
+ * mds[0][i] = MDS (q1[i] 0 0 0)^T mds[1][i] = MDS (0 q0[i] 0 0)^T
+ * mds[2][i] = MDS (0 0 q1[i] 0)^T mds[3][i] = MDS (0 0 0 q0[i])^T
+ * where ^T means "transpose", the matrix multiply is performed in GF(2^8)
+ * represented as GF(2)[x]/v(x) where v(x)=x^8+x^6+x^5+x^3+1 as described
+ * by Schneier et al, and I'm casually glossing over the byte/word
+ * conversion issues. */
+
+static const u32 mds[4][256] = {
+ {0xBCBC3275, 0xECEC21F3, 0x202043C6, 0xB3B3C9F4, 0xDADA03DB, 0x02028B7B,
+ 0xE2E22BFB, 0x9E9EFAC8, 0xC9C9EC4A, 0xD4D409D3, 0x18186BE6, 0x1E1E9F6B,
+ 0x98980E45, 0xB2B2387D, 0xA6A6D2E8, 0x2626B74B, 0x3C3C57D6, 0x93938A32,
+ 0x8282EED8, 0x525298FD, 0x7B7BD437, 0xBBBB3771, 0x5B5B97F1, 0x474783E1,
+ 0x24243C30, 0x5151E20F, 0xBABAC6F8, 0x4A4AF31B, 0xBFBF4887, 0x0D0D70FA,
+ 0xB0B0B306, 0x7575DE3F, 0xD2D2FD5E, 0x7D7D20BA, 0x666631AE, 0x3A3AA35B,
+ 0x59591C8A, 0x00000000, 0xCDCD93BC, 0x1A1AE09D, 0xAEAE2C6D, 0x7F7FABC1,
+ 0x2B2BC7B1, 0xBEBEB90E, 0xE0E0A080, 0x8A8A105D, 0x3B3B52D2, 0x6464BAD5,
+ 0xD8D888A0, 0xE7E7A584, 0x5F5FE807, 0x1B1B1114, 0x2C2CC2B5, 0xFCFCB490,
+ 0x3131272C, 0x808065A3, 0x73732AB2, 0x0C0C8173, 0x79795F4C, 0x6B6B4154,
+ 0x4B4B0292, 0x53536974, 0x94948F36, 0x83831F51, 0x2A2A3638, 0xC4C49CB0,
+ 0x2222C8BD, 0xD5D5F85A, 0xBDBDC3FC, 0x48487860, 0xFFFFCE62, 0x4C4C0796,
+ 0x4141776C, 0xC7C7E642, 0xEBEB24F7, 0x1C1C1410, 0x5D5D637C, 0x36362228,
+ 0x6767C027, 0xE9E9AF8C, 0x4444F913, 0x1414EA95, 0xF5F5BB9C, 0xCFCF18C7,
+ 0x3F3F2D24, 0xC0C0E346, 0x7272DB3B, 0x54546C70, 0x29294CCA, 0xF0F035E3,
+ 0x0808FE85, 0xC6C617CB, 0xF3F34F11, 0x8C8CE4D0, 0xA4A45993, 0xCACA96B8,
+ 0x68683BA6, 0xB8B84D83, 0x38382820, 0xE5E52EFF, 0xADAD569F, 0x0B0B8477,
+ 0xC8C81DC3, 0x9999FFCC, 0x5858ED03, 0x19199A6F, 0x0E0E0A08, 0x95957EBF,
+ 0x70705040, 0xF7F730E7, 0x6E6ECF2B, 0x1F1F6EE2, 0xB5B53D79, 0x09090F0C,
+ 0x616134AA, 0x57571682, 0x9F9F0B41, 0x9D9D803A, 0x111164EA, 0x2525CDB9,
+ 0xAFAFDDE4, 0x4545089A, 0xDFDF8DA4, 0xA3A35C97, 0xEAEAD57E, 0x353558DA,
+ 0xEDEDD07A, 0x4343FC17, 0xF8F8CB66, 0xFBFBB194, 0x3737D3A1, 0xFAFA401D,
+ 0xC2C2683D, 0xB4B4CCF0, 0x32325DDE, 0x9C9C71B3, 0x5656E70B, 0xE3E3DA72,
+ 0x878760A7, 0x15151B1C, 0xF9F93AEF, 0x6363BFD1, 0x3434A953, 0x9A9A853E,
+ 0xB1B1428F, 0x7C7CD133, 0x88889B26, 0x3D3DA65F, 0xA1A1D7EC, 0xE4E4DF76,
+ 0x8181942A, 0x91910149, 0x0F0FFB81, 0xEEEEAA88, 0x161661EE, 0xD7D77321,
+ 0x9797F5C4, 0xA5A5A81A, 0xFEFE3FEB, 0x6D6DB5D9, 0x7878AEC5, 0xC5C56D39,
+ 0x1D1DE599, 0x7676A4CD, 0x3E3EDCAD, 0xCBCB6731, 0xB6B6478B, 0xEFEF5B01,
+ 0x12121E18, 0x6060C523, 0x6A6AB0DD, 0x4D4DF61F, 0xCECEE94E, 0xDEDE7C2D,
+ 0x55559DF9, 0x7E7E5A48, 0x2121B24F, 0x03037AF2, 0xA0A02665, 0x5E5E198E,
+ 0x5A5A6678, 0x65654B5C, 0x62624E58, 0xFDFD4519, 0x0606F48D, 0x404086E5,
+ 0xF2F2BE98, 0x3333AC57, 0x17179067, 0x05058E7F, 0xE8E85E05, 0x4F4F7D64,
+ 0x89896AAF, 0x10109563, 0x74742FB6, 0x0A0A75FE, 0x5C5C92F5, 0x9B9B74B7,
+ 0x2D2D333C, 0x3030D6A5, 0x2E2E49CE, 0x494989E9, 0x46467268, 0x77775544,
+ 0xA8A8D8E0, 0x9696044D, 0x2828BD43, 0xA9A92969, 0xD9D97929, 0x8686912E,
+ 0xD1D187AC, 0xF4F44A15, 0x8D8D1559, 0xD6D682A8, 0xB9B9BC0A, 0x42420D9E,
+ 0xF6F6C16E, 0x2F2FB847, 0xDDDD06DF, 0x23233934, 0xCCCC6235, 0xF1F1C46A,
+ 0xC1C112CF, 0x8585EBDC, 0x8F8F9E22, 0x7171A1C9, 0x9090F0C0, 0xAAAA539B,
+ 0x0101F189, 0x8B8BE1D4, 0x4E4E8CED, 0x8E8E6FAB, 0xABABA212, 0x6F6F3EA2,
+ 0xE6E6540D, 0xDBDBF252, 0x92927BBB, 0xB7B7B602, 0x6969CA2F, 0x3939D9A9,
+ 0xD3D30CD7, 0xA7A72361, 0xA2A2AD1E, 0xC3C399B4, 0x6C6C4450, 0x07070504,
+ 0x04047FF6, 0x272746C2, 0xACACA716, 0xD0D07625, 0x50501386, 0xDCDCF756,
+ 0x84841A55, 0xE1E15109, 0x7A7A25BE, 0x1313EF91},
+
+ {0xA9D93939, 0x67901717, 0xB3719C9C, 0xE8D2A6A6, 0x04050707, 0xFD985252,
+ 0xA3658080, 0x76DFE4E4, 0x9A084545, 0x92024B4B, 0x80A0E0E0, 0x78665A5A,
+ 0xE4DDAFAF, 0xDDB06A6A, 0xD1BF6363, 0x38362A2A, 0x0D54E6E6, 0xC6432020,
+ 0x3562CCCC, 0x98BEF2F2, 0x181E1212, 0xF724EBEB, 0xECD7A1A1, 0x6C774141,
+ 0x43BD2828, 0x7532BCBC, 0x37D47B7B, 0x269B8888, 0xFA700D0D, 0x13F94444,
+ 0x94B1FBFB, 0x485A7E7E, 0xF27A0303, 0xD0E48C8C, 0x8B47B6B6, 0x303C2424,
+ 0x84A5E7E7, 0x54416B6B, 0xDF06DDDD, 0x23C56060, 0x1945FDFD, 0x5BA33A3A,
+ 0x3D68C2C2, 0x59158D8D, 0xF321ECEC, 0xAE316666, 0xA23E6F6F, 0x82165757,
+ 0x63951010, 0x015BEFEF, 0x834DB8B8, 0x2E918686, 0xD9B56D6D, 0x511F8383,
+ 0x9B53AAAA, 0x7C635D5D, 0xA63B6868, 0xEB3FFEFE, 0xA5D63030, 0xBE257A7A,
+ 0x16A7ACAC, 0x0C0F0909, 0xE335F0F0, 0x6123A7A7, 0xC0F09090, 0x8CAFE9E9,
+ 0x3A809D9D, 0xF5925C5C, 0x73810C0C, 0x2C273131, 0x2576D0D0, 0x0BE75656,
+ 0xBB7B9292, 0x4EE9CECE, 0x89F10101, 0x6B9F1E1E, 0x53A93434, 0x6AC4F1F1,
+ 0xB499C3C3, 0xF1975B5B, 0xE1834747, 0xE66B1818, 0xBDC82222, 0x450E9898,
+ 0xE26E1F1F, 0xF4C9B3B3, 0xB62F7474, 0x66CBF8F8, 0xCCFF9999, 0x95EA1414,
+ 0x03ED5858, 0x56F7DCDC, 0xD4E18B8B, 0x1C1B1515, 0x1EADA2A2, 0xD70CD3D3,
+ 0xFB2BE2E2, 0xC31DC8C8, 0x8E195E5E, 0xB5C22C2C, 0xE9894949, 0xCF12C1C1,
+ 0xBF7E9595, 0xBA207D7D, 0xEA641111, 0x77840B0B, 0x396DC5C5, 0xAF6A8989,
+ 0x33D17C7C, 0xC9A17171, 0x62CEFFFF, 0x7137BBBB, 0x81FB0F0F, 0x793DB5B5,
+ 0x0951E1E1, 0xADDC3E3E, 0x242D3F3F, 0xCDA47676, 0xF99D5555, 0xD8EE8282,
+ 0xE5864040, 0xC5AE7878, 0xB9CD2525, 0x4D049696, 0x44557777, 0x080A0E0E,
+ 0x86135050, 0xE730F7F7, 0xA1D33737, 0x1D40FAFA, 0xAA346161, 0xED8C4E4E,
+ 0x06B3B0B0, 0x706C5454, 0xB22A7373, 0xD2523B3B, 0x410B9F9F, 0x7B8B0202,
+ 0xA088D8D8, 0x114FF3F3, 0x3167CBCB, 0xC2462727, 0x27C06767, 0x90B4FCFC,
+ 0x20283838, 0xF67F0404, 0x60784848, 0xFF2EE5E5, 0x96074C4C, 0x5C4B6565,
+ 0xB1C72B2B, 0xAB6F8E8E, 0x9E0D4242, 0x9CBBF5F5, 0x52F2DBDB, 0x1BF34A4A,
+ 0x5FA63D3D, 0x9359A4A4, 0x0ABCB9B9, 0xEF3AF9F9, 0x91EF1313, 0x85FE0808,
+ 0x49019191, 0xEE611616, 0x2D7CDEDE, 0x4FB22121, 0x8F42B1B1, 0x3BDB7272,
+ 0x47B82F2F, 0x8748BFBF, 0x6D2CAEAE, 0x46E3C0C0, 0xD6573C3C, 0x3E859A9A,
+ 0x6929A9A9, 0x647D4F4F, 0x2A948181, 0xCE492E2E, 0xCB17C6C6, 0x2FCA6969,
+ 0xFCC3BDBD, 0x975CA3A3, 0x055EE8E8, 0x7AD0EDED, 0xAC87D1D1, 0x7F8E0505,
+ 0xD5BA6464, 0x1AA8A5A5, 0x4BB72626, 0x0EB9BEBE, 0xA7608787, 0x5AF8D5D5,
+ 0x28223636, 0x14111B1B, 0x3FDE7575, 0x2979D9D9, 0x88AAEEEE, 0x3C332D2D,
+ 0x4C5F7979, 0x02B6B7B7, 0xB896CACA, 0xDA583535, 0xB09CC4C4, 0x17FC4343,
+ 0x551A8484, 0x1FF64D4D, 0x8A1C5959, 0x7D38B2B2, 0x57AC3333, 0xC718CFCF,
+ 0x8DF40606, 0x74695353, 0xB7749B9B, 0xC4F59797, 0x9F56ADAD, 0x72DAE3E3,
+ 0x7ED5EAEA, 0x154AF4F4, 0x229E8F8F, 0x12A2ABAB, 0x584E6262, 0x07E85F5F,
+ 0x99E51D1D, 0x34392323, 0x6EC1F6F6, 0x50446C6C, 0xDE5D3232, 0x68724646,
+ 0x6526A0A0, 0xBC93CDCD, 0xDB03DADA, 0xF8C6BABA, 0xC8FA9E9E, 0xA882D6D6,
+ 0x2BCF6E6E, 0x40507070, 0xDCEB8585, 0xFE750A0A, 0x328A9393, 0xA48DDFDF,
+ 0xCA4C2929, 0x10141C1C, 0x2173D7D7, 0xF0CCB4B4, 0xD309D4D4, 0x5D108A8A,
+ 0x0FE25151, 0x00000000, 0x6F9A1919, 0x9DE01A1A, 0x368F9494, 0x42E6C7C7,
+ 0x4AECC9C9, 0x5EFDD2D2, 0xC1AB7F7F, 0xE0D8A8A8},
+
+ {0xBC75BC32, 0xECF3EC21, 0x20C62043, 0xB3F4B3C9, 0xDADBDA03, 0x027B028B,
+ 0xE2FBE22B, 0x9EC89EFA, 0xC94AC9EC, 0xD4D3D409, 0x18E6186B, 0x1E6B1E9F,
+ 0x9845980E, 0xB27DB238, 0xA6E8A6D2, 0x264B26B7, 0x3CD63C57, 0x9332938A,
+ 0x82D882EE, 0x52FD5298, 0x7B377BD4, 0xBB71BB37, 0x5BF15B97, 0x47E14783,
+ 0x2430243C, 0x510F51E2, 0xBAF8BAC6, 0x4A1B4AF3, 0xBF87BF48, 0x0DFA0D70,
+ 0xB006B0B3, 0x753F75DE, 0xD25ED2FD, 0x7DBA7D20, 0x66AE6631, 0x3A5B3AA3,
+ 0x598A591C, 0x00000000, 0xCDBCCD93, 0x1A9D1AE0, 0xAE6DAE2C, 0x7FC17FAB,
+ 0x2BB12BC7, 0xBE0EBEB9, 0xE080E0A0, 0x8A5D8A10, 0x3BD23B52, 0x64D564BA,
+ 0xD8A0D888, 0xE784E7A5, 0x5F075FE8, 0x1B141B11, 0x2CB52CC2, 0xFC90FCB4,
+ 0x312C3127, 0x80A38065, 0x73B2732A, 0x0C730C81, 0x794C795F, 0x6B546B41,
+ 0x4B924B02, 0x53745369, 0x9436948F, 0x8351831F, 0x2A382A36, 0xC4B0C49C,
+ 0x22BD22C8, 0xD55AD5F8, 0xBDFCBDC3, 0x48604878, 0xFF62FFCE, 0x4C964C07,
+ 0x416C4177, 0xC742C7E6, 0xEBF7EB24, 0x1C101C14, 0x5D7C5D63, 0x36283622,
+ 0x672767C0, 0xE98CE9AF, 0x441344F9, 0x149514EA, 0xF59CF5BB, 0xCFC7CF18,
+ 0x3F243F2D, 0xC046C0E3, 0x723B72DB, 0x5470546C, 0x29CA294C, 0xF0E3F035,
+ 0x088508FE, 0xC6CBC617, 0xF311F34F, 0x8CD08CE4, 0xA493A459, 0xCAB8CA96,
+ 0x68A6683B, 0xB883B84D, 0x38203828, 0xE5FFE52E, 0xAD9FAD56, 0x0B770B84,
+ 0xC8C3C81D, 0x99CC99FF, 0x580358ED, 0x196F199A, 0x0E080E0A, 0x95BF957E,
+ 0x70407050, 0xF7E7F730, 0x6E2B6ECF, 0x1FE21F6E, 0xB579B53D, 0x090C090F,
+ 0x61AA6134, 0x57825716, 0x9F419F0B, 0x9D3A9D80, 0x11EA1164, 0x25B925CD,
+ 0xAFE4AFDD, 0x459A4508, 0xDFA4DF8D, 0xA397A35C, 0xEA7EEAD5, 0x35DA3558,
+ 0xED7AEDD0, 0x431743FC, 0xF866F8CB, 0xFB94FBB1, 0x37A137D3, 0xFA1DFA40,
+ 0xC23DC268, 0xB4F0B4CC, 0x32DE325D, 0x9CB39C71, 0x560B56E7, 0xE372E3DA,
+ 0x87A78760, 0x151C151B, 0xF9EFF93A, 0x63D163BF, 0x345334A9, 0x9A3E9A85,
+ 0xB18FB142, 0x7C337CD1, 0x8826889B, 0x3D5F3DA6, 0xA1ECA1D7, 0xE476E4DF,
+ 0x812A8194, 0x91499101, 0x0F810FFB, 0xEE88EEAA, 0x16EE1661, 0xD721D773,
+ 0x97C497F5, 0xA51AA5A8, 0xFEEBFE3F, 0x6DD96DB5, 0x78C578AE, 0xC539C56D,
+ 0x1D991DE5, 0x76CD76A4, 0x3EAD3EDC, 0xCB31CB67, 0xB68BB647, 0xEF01EF5B,
+ 0x1218121E, 0x602360C5, 0x6ADD6AB0, 0x4D1F4DF6, 0xCE4ECEE9, 0xDE2DDE7C,
+ 0x55F9559D, 0x7E487E5A, 0x214F21B2, 0x03F2037A, 0xA065A026, 0x5E8E5E19,
+ 0x5A785A66, 0x655C654B, 0x6258624E, 0xFD19FD45, 0x068D06F4, 0x40E54086,
+ 0xF298F2BE, 0x335733AC, 0x17671790, 0x057F058E, 0xE805E85E, 0x4F644F7D,
+ 0x89AF896A, 0x10631095, 0x74B6742F, 0x0AFE0A75, 0x5CF55C92, 0x9BB79B74,
+ 0x2D3C2D33, 0x30A530D6, 0x2ECE2E49, 0x49E94989, 0x46684672, 0x77447755,
+ 0xA8E0A8D8, 0x964D9604, 0x284328BD, 0xA969A929, 0xD929D979, 0x862E8691,
+ 0xD1ACD187, 0xF415F44A, 0x8D598D15, 0xD6A8D682, 0xB90AB9BC, 0x429E420D,
+ 0xF66EF6C1, 0x2F472FB8, 0xDDDFDD06, 0x23342339, 0xCC35CC62, 0xF16AF1C4,
+ 0xC1CFC112, 0x85DC85EB, 0x8F228F9E, 0x71C971A1, 0x90C090F0, 0xAA9BAA53,
+ 0x018901F1, 0x8BD48BE1, 0x4EED4E8C, 0x8EAB8E6F, 0xAB12ABA2, 0x6FA26F3E,
+ 0xE60DE654, 0xDB52DBF2, 0x92BB927B, 0xB702B7B6, 0x692F69CA, 0x39A939D9,
+ 0xD3D7D30C, 0xA761A723, 0xA21EA2AD, 0xC3B4C399, 0x6C506C44, 0x07040705,
+ 0x04F6047F, 0x27C22746, 0xAC16ACA7, 0xD025D076, 0x50865013, 0xDC56DCF7,
+ 0x8455841A, 0xE109E151, 0x7ABE7A25, 0x139113EF},
+
+ {0xD939A9D9, 0x90176790, 0x719CB371, 0xD2A6E8D2, 0x05070405, 0x9852FD98,
+ 0x6580A365, 0xDFE476DF, 0x08459A08, 0x024B9202, 0xA0E080A0, 0x665A7866,
+ 0xDDAFE4DD, 0xB06ADDB0, 0xBF63D1BF, 0x362A3836, 0x54E60D54, 0x4320C643,
+ 0x62CC3562, 0xBEF298BE, 0x1E12181E, 0x24EBF724, 0xD7A1ECD7, 0x77416C77,
+ 0xBD2843BD, 0x32BC7532, 0xD47B37D4, 0x9B88269B, 0x700DFA70, 0xF94413F9,
+ 0xB1FB94B1, 0x5A7E485A, 0x7A03F27A, 0xE48CD0E4, 0x47B68B47, 0x3C24303C,
+ 0xA5E784A5, 0x416B5441, 0x06DDDF06, 0xC56023C5, 0x45FD1945, 0xA33A5BA3,
+ 0x68C23D68, 0x158D5915, 0x21ECF321, 0x3166AE31, 0x3E6FA23E, 0x16578216,
+ 0x95106395, 0x5BEF015B, 0x4DB8834D, 0x91862E91, 0xB56DD9B5, 0x1F83511F,
+ 0x53AA9B53, 0x635D7C63, 0x3B68A63B, 0x3FFEEB3F, 0xD630A5D6, 0x257ABE25,
+ 0xA7AC16A7, 0x0F090C0F, 0x35F0E335, 0x23A76123, 0xF090C0F0, 0xAFE98CAF,
+ 0x809D3A80, 0x925CF592, 0x810C7381, 0x27312C27, 0x76D02576, 0xE7560BE7,
+ 0x7B92BB7B, 0xE9CE4EE9, 0xF10189F1, 0x9F1E6B9F, 0xA93453A9, 0xC4F16AC4,
+ 0x99C3B499, 0x975BF197, 0x8347E183, 0x6B18E66B, 0xC822BDC8, 0x0E98450E,
+ 0x6E1FE26E, 0xC9B3F4C9, 0x2F74B62F, 0xCBF866CB, 0xFF99CCFF, 0xEA1495EA,
+ 0xED5803ED, 0xF7DC56F7, 0xE18BD4E1, 0x1B151C1B, 0xADA21EAD, 0x0CD3D70C,
+ 0x2BE2FB2B, 0x1DC8C31D, 0x195E8E19, 0xC22CB5C2, 0x8949E989, 0x12C1CF12,
+ 0x7E95BF7E, 0x207DBA20, 0x6411EA64, 0x840B7784, 0x6DC5396D, 0x6A89AF6A,
+ 0xD17C33D1, 0xA171C9A1, 0xCEFF62CE, 0x37BB7137, 0xFB0F81FB, 0x3DB5793D,
+ 0x51E10951, 0xDC3EADDC, 0x2D3F242D, 0xA476CDA4, 0x9D55F99D, 0xEE82D8EE,
+ 0x8640E586, 0xAE78C5AE, 0xCD25B9CD, 0x04964D04, 0x55774455, 0x0A0E080A,
+ 0x13508613, 0x30F7E730, 0xD337A1D3, 0x40FA1D40, 0x3461AA34, 0x8C4EED8C,
+ 0xB3B006B3, 0x6C54706C, 0x2A73B22A, 0x523BD252, 0x0B9F410B, 0x8B027B8B,
+ 0x88D8A088, 0x4FF3114F, 0x67CB3167, 0x4627C246, 0xC06727C0, 0xB4FC90B4,
+ 0x28382028, 0x7F04F67F, 0x78486078, 0x2EE5FF2E, 0x074C9607, 0x4B655C4B,
+ 0xC72BB1C7, 0x6F8EAB6F, 0x0D429E0D, 0xBBF59CBB, 0xF2DB52F2, 0xF34A1BF3,
+ 0xA63D5FA6, 0x59A49359, 0xBCB90ABC, 0x3AF9EF3A, 0xEF1391EF, 0xFE0885FE,
+ 0x01914901, 0x6116EE61, 0x7CDE2D7C, 0xB2214FB2, 0x42B18F42, 0xDB723BDB,
+ 0xB82F47B8, 0x48BF8748, 0x2CAE6D2C, 0xE3C046E3, 0x573CD657, 0x859A3E85,
+ 0x29A96929, 0x7D4F647D, 0x94812A94, 0x492ECE49, 0x17C6CB17, 0xCA692FCA,
+ 0xC3BDFCC3, 0x5CA3975C, 0x5EE8055E, 0xD0ED7AD0, 0x87D1AC87, 0x8E057F8E,
+ 0xBA64D5BA, 0xA8A51AA8, 0xB7264BB7, 0xB9BE0EB9, 0x6087A760, 0xF8D55AF8,
+ 0x22362822, 0x111B1411, 0xDE753FDE, 0x79D92979, 0xAAEE88AA, 0x332D3C33,
+ 0x5F794C5F, 0xB6B702B6, 0x96CAB896, 0x5835DA58, 0x9CC4B09C, 0xFC4317FC,
+ 0x1A84551A, 0xF64D1FF6, 0x1C598A1C, 0x38B27D38, 0xAC3357AC, 0x18CFC718,
+ 0xF4068DF4, 0x69537469, 0x749BB774, 0xF597C4F5, 0x56AD9F56, 0xDAE372DA,
+ 0xD5EA7ED5, 0x4AF4154A, 0x9E8F229E, 0xA2AB12A2, 0x4E62584E, 0xE85F07E8,
+ 0xE51D99E5, 0x39233439, 0xC1F66EC1, 0x446C5044, 0x5D32DE5D, 0x72466872,
+ 0x26A06526, 0x93CDBC93, 0x03DADB03, 0xC6BAF8C6, 0xFA9EC8FA, 0x82D6A882,
+ 0xCF6E2BCF, 0x50704050, 0xEB85DCEB, 0x750AFE75, 0x8A93328A, 0x8DDFA48D,
+ 0x4C29CA4C, 0x141C1014, 0x73D72173, 0xCCB4F0CC, 0x09D4D309, 0x108A5D10,
+ 0xE2510FE2, 0x00000000, 0x9A196F9A, 0xE01A9DE0, 0x8F94368F, 0xE6C742E6,
+ 0xECC94AEC, 0xFDD25EFD, 0xAB7FC1AB, 0xD8A8E0D8}
+};
+
+/* The exp_to_poly and poly_to_exp tables are used to perform efficient
+ * operations in GF(2^8) represented as GF(2)[x]/w(x) where
+ * w(x)=x^8+x^6+x^3+x^2+1. We care about doing that because it's part of the
+ * definition of the RS matrix in the key schedule. Elements of that field
+ * are polynomials of degree not greater than 7 and all coefficients 0 or 1,
+ * which can be represented naturally by bytes (just substitute x=2). In that
+ * form, GF(2^8) addition is the same as bitwise XOR, but GF(2^8)
+ * multiplication is inefficient without hardware support. To multiply
+ * faster, I make use of the fact x is a generator for the nonzero elements,
+ * so that every element p of GF(2)[x]/w(x) is either 0 or equal to (x)^n for
+ * some n in 0..254. Note that that caret is exponentiation in GF(2^8),
+ * *not* polynomial notation. So if I want to compute pq where p and q are
+ * in GF(2^8), I can just say:
+ * 1. if p=0 or q=0 then pq=0
+ * 2. otherwise, find m and n such that p=x^m and q=x^n
+ * 3. pq=(x^m)(x^n)=x^(m+n), so add m and n and find pq
+ * The translations in steps 2 and 3 are looked up in the tables
+ * poly_to_exp (for step 2) and exp_to_poly (for step 3). To see this
+ * in action, look at the CALC_S macro. As additional wrinkles, note that
+ * one of my operands is always a constant, so the poly_to_exp lookup on it
+ * is done in advance; I included the original values in the comments so
+ * readers can have some chance of recognizing that this *is* the RS matrix
+ * from the Twofish paper. I've only included the table entries I actually
+ * need; I never do a lookup on a variable input of zero and the biggest
+ * exponents I'll ever see are 254 (variable) and 237 (constant), so they'll
+ * never sum to more than 491. I'm repeating part of the exp_to_poly table
+ * so that I don't have to do mod-255 reduction in the exponent arithmetic.
+ * Since I know my constant operands are never zero, I only have to worry
+ * about zero values in the variable operand, and I do it with a simple
+ * conditional branch. I know conditionals are expensive, but I couldn't
+ * see a non-horrible way of avoiding them, and I did manage to group the
+ * statements so that each if covers four group multiplications. */
+
+static const u8 poly_to_exp[255] = {
+ 0x00, 0x01, 0x17, 0x02, 0x2E, 0x18, 0x53, 0x03, 0x6A, 0x2F, 0x93, 0x19,
+ 0x34, 0x54, 0x45, 0x04, 0x5C, 0x6B, 0xB6, 0x30, 0xA6, 0x94, 0x4B, 0x1A,
+ 0x8C, 0x35, 0x81, 0x55, 0xAA, 0x46, 0x0D, 0x05, 0x24, 0x5D, 0x87, 0x6C,
+ 0x9B, 0xB7, 0xC1, 0x31, 0x2B, 0xA7, 0xA3, 0x95, 0x98, 0x4C, 0xCA, 0x1B,
+ 0xE6, 0x8D, 0x73, 0x36, 0xCD, 0x82, 0x12, 0x56, 0x62, 0xAB, 0xF0, 0x47,
+ 0x4F, 0x0E, 0xBD, 0x06, 0xD4, 0x25, 0xD2, 0x5E, 0x27, 0x88, 0x66, 0x6D,
+ 0xD6, 0x9C, 0x79, 0xB8, 0x08, 0xC2, 0xDF, 0x32, 0x68, 0x2C, 0xFD, 0xA8,
+ 0x8A, 0xA4, 0x5A, 0x96, 0x29, 0x99, 0x22, 0x4D, 0x60, 0xCB, 0xE4, 0x1C,
+ 0x7B, 0xE7, 0x3B, 0x8E, 0x9E, 0x74, 0xF4, 0x37, 0xD8, 0xCE, 0xF9, 0x83,
+ 0x6F, 0x13, 0xB2, 0x57, 0xE1, 0x63, 0xDC, 0xAC, 0xC4, 0xF1, 0xAF, 0x48,
+ 0x0A, 0x50, 0x42, 0x0F, 0xBA, 0xBE, 0xC7, 0x07, 0xDE, 0xD5, 0x78, 0x26,
+ 0x65, 0xD3, 0xD1, 0x5F, 0xE3, 0x28, 0x21, 0x89, 0x59, 0x67, 0xFC, 0x6E,
+ 0xB1, 0xD7, 0xF8, 0x9D, 0xF3, 0x7A, 0x3A, 0xB9, 0xC6, 0x09, 0x41, 0xC3,
+ 0xAE, 0xE0, 0xDB, 0x33, 0x44, 0x69, 0x92, 0x2D, 0x52, 0xFE, 0x16, 0xA9,
+ 0x0C, 0x8B, 0x80, 0xA5, 0x4A, 0x5B, 0xB5, 0x97, 0xC9, 0x2A, 0xA2, 0x9A,
+ 0xC0, 0x23, 0x86, 0x4E, 0xBC, 0x61, 0xEF, 0xCC, 0x11, 0xE5, 0x72, 0x1D,
+ 0x3D, 0x7C, 0xEB, 0xE8, 0xE9, 0x3C, 0xEA, 0x8F, 0x7D, 0x9F, 0xEC, 0x75,
+ 0x1E, 0xF5, 0x3E, 0x38, 0xF6, 0xD9, 0x3F, 0xCF, 0x76, 0xFA, 0x1F, 0x84,
+ 0xA0, 0x70, 0xED, 0x14, 0x90, 0xB3, 0x7E, 0x58, 0xFB, 0xE2, 0x20, 0x64,
+ 0xD0, 0xDD, 0x77, 0xAD, 0xDA, 0xC5, 0x40, 0xF2, 0x39, 0xB0, 0xF7, 0x49,
+ 0xB4, 0x0B, 0x7F, 0x51, 0x15, 0x43, 0x91, 0x10, 0x71, 0xBB, 0xEE, 0xBF,
+ 0x85, 0xC8, 0xA1
+};
+
+static const u8 exp_to_poly[492] = {
+ 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x4D, 0x9A, 0x79, 0xF2,
+ 0xA9, 0x1F, 0x3E, 0x7C, 0xF8, 0xBD, 0x37, 0x6E, 0xDC, 0xF5, 0xA7, 0x03,
+ 0x06, 0x0C, 0x18, 0x30, 0x60, 0xC0, 0xCD, 0xD7, 0xE3, 0x8B, 0x5B, 0xB6,
+ 0x21, 0x42, 0x84, 0x45, 0x8A, 0x59, 0xB2, 0x29, 0x52, 0xA4, 0x05, 0x0A,
+ 0x14, 0x28, 0x50, 0xA0, 0x0D, 0x1A, 0x34, 0x68, 0xD0, 0xED, 0x97, 0x63,
+ 0xC6, 0xC1, 0xCF, 0xD3, 0xEB, 0x9B, 0x7B, 0xF6, 0xA1, 0x0F, 0x1E, 0x3C,
+ 0x78, 0xF0, 0xAD, 0x17, 0x2E, 0x5C, 0xB8, 0x3D, 0x7A, 0xF4, 0xA5, 0x07,
+ 0x0E, 0x1C, 0x38, 0x70, 0xE0, 0x8D, 0x57, 0xAE, 0x11, 0x22, 0x44, 0x88,
+ 0x5D, 0xBA, 0x39, 0x72, 0xE4, 0x85, 0x47, 0x8E, 0x51, 0xA2, 0x09, 0x12,
+ 0x24, 0x48, 0x90, 0x6D, 0xDA, 0xF9, 0xBF, 0x33, 0x66, 0xCC, 0xD5, 0xE7,
+ 0x83, 0x4B, 0x96, 0x61, 0xC2, 0xC9, 0xDF, 0xF3, 0xAB, 0x1B, 0x36, 0x6C,
+ 0xD8, 0xFD, 0xB7, 0x23, 0x46, 0x8C, 0x55, 0xAA, 0x19, 0x32, 0x64, 0xC8,
+ 0xDD, 0xF7, 0xA3, 0x0B, 0x16, 0x2C, 0x58, 0xB0, 0x2D, 0x5A, 0xB4, 0x25,
+ 0x4A, 0x94, 0x65, 0xCA, 0xD9, 0xFF, 0xB3, 0x2B, 0x56, 0xAC, 0x15, 0x2A,
+ 0x54, 0xA8, 0x1D, 0x3A, 0x74, 0xE8, 0x9D, 0x77, 0xEE, 0x91, 0x6F, 0xDE,
+ 0xF1, 0xAF, 0x13, 0x26, 0x4C, 0x98, 0x7D, 0xFA, 0xB9, 0x3F, 0x7E, 0xFC,
+ 0xB5, 0x27, 0x4E, 0x9C, 0x75, 0xEA, 0x99, 0x7F, 0xFE, 0xB1, 0x2F, 0x5E,
+ 0xBC, 0x35, 0x6A, 0xD4, 0xE5, 0x87, 0x43, 0x86, 0x41, 0x82, 0x49, 0x92,
+ 0x69, 0xD2, 0xE9, 0x9F, 0x73, 0xE6, 0x81, 0x4F, 0x9E, 0x71, 0xE2, 0x89,
+ 0x5F, 0xBE, 0x31, 0x62, 0xC4, 0xC5, 0xC7, 0xC3, 0xCB, 0xDB, 0xFB, 0xBB,
+ 0x3B, 0x76, 0xEC, 0x95, 0x67, 0xCE, 0xD1, 0xEF, 0x93, 0x6B, 0xD6, 0xE1,
+ 0x8F, 0x53, 0xA6, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x4D,
+ 0x9A, 0x79, 0xF2, 0xA9, 0x1F, 0x3E, 0x7C, 0xF8, 0xBD, 0x37, 0x6E, 0xDC,
+ 0xF5, 0xA7, 0x03, 0x06, 0x0C, 0x18, 0x30, 0x60, 0xC0, 0xCD, 0xD7, 0xE3,
+ 0x8B, 0x5B, 0xB6, 0x21, 0x42, 0x84, 0x45, 0x8A, 0x59, 0xB2, 0x29, 0x52,
+ 0xA4, 0x05, 0x0A, 0x14, 0x28, 0x50, 0xA0, 0x0D, 0x1A, 0x34, 0x68, 0xD0,
+ 0xED, 0x97, 0x63, 0xC6, 0xC1, 0xCF, 0xD3, 0xEB, 0x9B, 0x7B, 0xF6, 0xA1,
+ 0x0F, 0x1E, 0x3C, 0x78, 0xF0, 0xAD, 0x17, 0x2E, 0x5C, 0xB8, 0x3D, 0x7A,
+ 0xF4, 0xA5, 0x07, 0x0E, 0x1C, 0x38, 0x70, 0xE0, 0x8D, 0x57, 0xAE, 0x11,
+ 0x22, 0x44, 0x88, 0x5D, 0xBA, 0x39, 0x72, 0xE4, 0x85, 0x47, 0x8E, 0x51,
+ 0xA2, 0x09, 0x12, 0x24, 0x48, 0x90, 0x6D, 0xDA, 0xF9, 0xBF, 0x33, 0x66,
+ 0xCC, 0xD5, 0xE7, 0x83, 0x4B, 0x96, 0x61, 0xC2, 0xC9, 0xDF, 0xF3, 0xAB,
+ 0x1B, 0x36, 0x6C, 0xD8, 0xFD, 0xB7, 0x23, 0x46, 0x8C, 0x55, 0xAA, 0x19,
+ 0x32, 0x64, 0xC8, 0xDD, 0xF7, 0xA3, 0x0B, 0x16, 0x2C, 0x58, 0xB0, 0x2D,
+ 0x5A, 0xB4, 0x25, 0x4A, 0x94, 0x65, 0xCA, 0xD9, 0xFF, 0xB3, 0x2B, 0x56,
+ 0xAC, 0x15, 0x2A, 0x54, 0xA8, 0x1D, 0x3A, 0x74, 0xE8, 0x9D, 0x77, 0xEE,
+ 0x91, 0x6F, 0xDE, 0xF1, 0xAF, 0x13, 0x26, 0x4C, 0x98, 0x7D, 0xFA, 0xB9,
+ 0x3F, 0x7E, 0xFC, 0xB5, 0x27, 0x4E, 0x9C, 0x75, 0xEA, 0x99, 0x7F, 0xFE,
+ 0xB1, 0x2F, 0x5E, 0xBC, 0x35, 0x6A, 0xD4, 0xE5, 0x87, 0x43, 0x86, 0x41,
+ 0x82, 0x49, 0x92, 0x69, 0xD2, 0xE9, 0x9F, 0x73, 0xE6, 0x81, 0x4F, 0x9E,
+ 0x71, 0xE2, 0x89, 0x5F, 0xBE, 0x31, 0x62, 0xC4, 0xC5, 0xC7, 0xC3, 0xCB
+};
+
+
+/* The table constants are indices of
+ * S-box entries, preprocessed through q0 and q1. */
+static const u8 calc_sb_tbl[512] = {
+ 0xA9, 0x75, 0x67, 0xF3, 0xB3, 0xC6, 0xE8, 0xF4,
+ 0x04, 0xDB, 0xFD, 0x7B, 0xA3, 0xFB, 0x76, 0xC8,
+ 0x9A, 0x4A, 0x92, 0xD3, 0x80, 0xE6, 0x78, 0x6B,
+ 0xE4, 0x45, 0xDD, 0x7D, 0xD1, 0xE8, 0x38, 0x4B,
+ 0x0D, 0xD6, 0xC6, 0x32, 0x35, 0xD8, 0x98, 0xFD,
+ 0x18, 0x37, 0xF7, 0x71, 0xEC, 0xF1, 0x6C, 0xE1,
+ 0x43, 0x30, 0x75, 0x0F, 0x37, 0xF8, 0x26, 0x1B,
+ 0xFA, 0x87, 0x13, 0xFA, 0x94, 0x06, 0x48, 0x3F,
+ 0xF2, 0x5E, 0xD0, 0xBA, 0x8B, 0xAE, 0x30, 0x5B,
+ 0x84, 0x8A, 0x54, 0x00, 0xDF, 0xBC, 0x23, 0x9D,
+ 0x19, 0x6D, 0x5B, 0xC1, 0x3D, 0xB1, 0x59, 0x0E,
+ 0xF3, 0x80, 0xAE, 0x5D, 0xA2, 0xD2, 0x82, 0xD5,
+ 0x63, 0xA0, 0x01, 0x84, 0x83, 0x07, 0x2E, 0x14,
+ 0xD9, 0xB5, 0x51, 0x90, 0x9B, 0x2C, 0x7C, 0xA3,
+ 0xA6, 0xB2, 0xEB, 0x73, 0xA5, 0x4C, 0xBE, 0x54,
+ 0x16, 0x92, 0x0C, 0x74, 0xE3, 0x36, 0x61, 0x51,
+ 0xC0, 0x38, 0x8C, 0xB0, 0x3A, 0xBD, 0xF5, 0x5A,
+ 0x73, 0xFC, 0x2C, 0x60, 0x25, 0x62, 0x0B, 0x96,
+ 0xBB, 0x6C, 0x4E, 0x42, 0x89, 0xF7, 0x6B, 0x10,
+ 0x53, 0x7C, 0x6A, 0x28, 0xB4, 0x27, 0xF1, 0x8C,
+ 0xE1, 0x13, 0xE6, 0x95, 0xBD, 0x9C, 0x45, 0xC7,
+ 0xE2, 0x24, 0xF4, 0x46, 0xB6, 0x3B, 0x66, 0x70,
+ 0xCC, 0xCA, 0x95, 0xE3, 0x03, 0x85, 0x56, 0xCB,
+ 0xD4, 0x11, 0x1C, 0xD0, 0x1E, 0x93, 0xD7, 0xB8,
+ 0xFB, 0xA6, 0xC3, 0x83, 0x8E, 0x20, 0xB5, 0xFF,
+ 0xE9, 0x9F, 0xCF, 0x77, 0xBF, 0xC3, 0xBA, 0xCC,
+ 0xEA, 0x03, 0x77, 0x6F, 0x39, 0x08, 0xAF, 0xBF,
+ 0x33, 0x40, 0xC9, 0xE7, 0x62, 0x2B, 0x71, 0xE2,
+ 0x81, 0x79, 0x79, 0x0C, 0x09, 0xAA, 0xAD, 0x82,
+ 0x24, 0x41, 0xCD, 0x3A, 0xF9, 0xEA, 0xD8, 0xB9,
+ 0xE5, 0xE4, 0xC5, 0x9A, 0xB9, 0xA4, 0x4D, 0x97,
+ 0x44, 0x7E, 0x08, 0xDA, 0x86, 0x7A, 0xE7, 0x17,
+ 0xA1, 0x66, 0x1D, 0x94, 0xAA, 0xA1, 0xED, 0x1D,
+ 0x06, 0x3D, 0x70, 0xF0, 0xB2, 0xDE, 0xD2, 0xB3,
+ 0x41, 0x0B, 0x7B, 0x72, 0xA0, 0xA7, 0x11, 0x1C,
+ 0x31, 0xEF, 0xC2, 0xD1, 0x27, 0x53, 0x90, 0x3E,
+ 0x20, 0x8F, 0xF6, 0x33, 0x60, 0x26, 0xFF, 0x5F,
+ 0x96, 0xEC, 0x5C, 0x76, 0xB1, 0x2A, 0xAB, 0x49,
+ 0x9E, 0x81, 0x9C, 0x88, 0x52, 0xEE, 0x1B, 0x21,
+ 0x5F, 0xC4, 0x93, 0x1A, 0x0A, 0xEB, 0xEF, 0xD9,
+ 0x91, 0xC5, 0x85, 0x39, 0x49, 0x99, 0xEE, 0xCD,
+ 0x2D, 0xAD, 0x4F, 0x31, 0x8F, 0x8B, 0x3B, 0x01,
+ 0x47, 0x18, 0x87, 0x23, 0x6D, 0xDD, 0x46, 0x1F,
+ 0xD6, 0x4E, 0x3E, 0x2D, 0x69, 0xF9, 0x64, 0x48,
+ 0x2A, 0x4F, 0xCE, 0xF2, 0xCB, 0x65, 0x2F, 0x8E,
+ 0xFC, 0x78, 0x97, 0x5C, 0x05, 0x58, 0x7A, 0x19,
+ 0xAC, 0x8D, 0x7F, 0xE5, 0xD5, 0x98, 0x1A, 0x57,
+ 0x4B, 0x67, 0x0E, 0x7F, 0xA7, 0x05, 0x5A, 0x64,
+ 0x28, 0xAF, 0x14, 0x63, 0x3F, 0xB6, 0x29, 0xFE,
+ 0x88, 0xF5, 0x3C, 0xB7, 0x4C, 0x3C, 0x02, 0xA5,
+ 0xB8, 0xCE, 0xDA, 0xE9, 0xB0, 0x68, 0x17, 0x44,
+ 0x55, 0xE0, 0x1F, 0x4D, 0x8A, 0x43, 0x7D, 0x69,
+ 0x57, 0x29, 0xC7, 0x2E, 0x8D, 0xAC, 0x74, 0x15,
+ 0xB7, 0x59, 0xC4, 0xA8, 0x9F, 0x0A, 0x72, 0x9E,
+ 0x7E, 0x6E, 0x15, 0x47, 0x22, 0xDF, 0x12, 0x34,
+ 0x58, 0x35, 0x07, 0x6A, 0x99, 0xCF, 0x34, 0xDC,
+ 0x6E, 0x22, 0x50, 0xC9, 0xDE, 0xC0, 0x68, 0x9B,
+ 0x65, 0x89, 0xBC, 0xD4, 0xDB, 0xED, 0xF8, 0xAB,
+ 0xC8, 0x12, 0xA8, 0xA2, 0x2B, 0x0D, 0x40, 0x52,
+ 0xDC, 0xBB, 0xFE, 0x02, 0x32, 0x2F, 0xA4, 0xA9,
+ 0xCA, 0xD7, 0x10, 0x61, 0x21, 0x1E, 0xF0, 0xB4,
+ 0xD3, 0x50, 0x5D, 0x04, 0x0F, 0xF6, 0x00, 0xC2,
+ 0x6F, 0x16, 0x9D, 0x25, 0x36, 0x86, 0x42, 0x56,
+ 0x4A, 0x55, 0x5E, 0x09, 0xC1, 0xBE, 0xE0, 0x91
+};
+
+/* Macro to perform one column of the RS matrix multiplication. The
+ * parameters a, b, c, and d are the four bytes of output; i is the index
+ * of the key bytes, and w, x, y, and z, are the column of constants from
+ * the RS matrix, preprocessed through the poly_to_exp table. */
+
+#define CALC_S(a, b, c, d, i, w, x, y, z) \
+ if (key[i]) { \
+ tmp = poly_to_exp[key[i] - 1]; \
+ (a) ^= exp_to_poly[tmp + (w)]; \
+ (b) ^= exp_to_poly[tmp + (x)]; \
+ (c) ^= exp_to_poly[tmp + (y)]; \
+ (d) ^= exp_to_poly[tmp + (z)]; \
+ }
+
+/* Macros to calculate the key-dependent S-boxes for a 128-bit key using
+ * the S vector from CALC_S. CALC_SB_2 computes a single entry in all
+ * four S-boxes, where i is the index of the entry to compute, and a and b
+ * are the index numbers preprocessed through the q0 and q1 tables
+ * respectively. */
+
+#define CALC_SB_2(i, a, b) \
+ ctx->s[0][i] = mds[0][q0[(a) ^ sa] ^ se]; \
+ ctx->s[1][i] = mds[1][q0[(b) ^ sb] ^ sf]; \
+ ctx->s[2][i] = mds[2][q1[(a) ^ sc] ^ sg]; \
+ ctx->s[3][i] = mds[3][q1[(b) ^ sd] ^ sh]
+
+/* Macro exactly like CALC_SB_2, but for 192-bit keys. */
+
+#define CALC_SB192_2(i, a, b) \
+ ctx->s[0][i] = mds[0][q0[q0[(b) ^ sa] ^ se] ^ si]; \
+ ctx->s[1][i] = mds[1][q0[q1[(b) ^ sb] ^ sf] ^ sj]; \
+ ctx->s[2][i] = mds[2][q1[q0[(a) ^ sc] ^ sg] ^ sk]; \
+ ctx->s[3][i] = mds[3][q1[q1[(a) ^ sd] ^ sh] ^ sl];
+
+/* Macro exactly like CALC_SB_2, but for 256-bit keys. */
+
+#define CALC_SB256_2(i, a, b) \
+ ctx->s[0][i] = mds[0][q0[q0[q1[(b) ^ sa] ^ se] ^ si] ^ sm]; \
+ ctx->s[1][i] = mds[1][q0[q1[q1[(a) ^ sb] ^ sf] ^ sj] ^ sn]; \
+ ctx->s[2][i] = mds[2][q1[q0[q0[(a) ^ sc] ^ sg] ^ sk] ^ so]; \
+ ctx->s[3][i] = mds[3][q1[q1[q0[(b) ^ sd] ^ sh] ^ sl] ^ sp];
+
+/* Macros to calculate the whitening and round subkeys. CALC_K_2 computes the
+ * last two stages of the h() function for a given index (either 2i or 2i+1).
+ * a, b, c, and d are the four bytes going into the last two stages. For
+ * 128-bit keys, this is the entire h() function and a and c are the index
+ * preprocessed through q0 and q1 respectively; for longer keys they are the
+ * output of previous stages. j is the index of the first key byte to use.
+ * CALC_K computes a pair of subkeys for 128-bit Twofish, by calling CALC_K_2
+ * twice, doing the Pseudo-Hadamard Transform, and doing the necessary
+ * rotations. Its parameters are: a, the array to write the results into,
+ * j, the index of the first output entry, k and l, the preprocessed indices
+ * for index 2i, and m and n, the preprocessed indices for index 2i+1.
+ * CALC_K192_2 expands CALC_K_2 to handle 192-bit keys, by doing an
+ * additional lookup-and-XOR stage. The parameters a, b, c and d are the
+ * four bytes going into the last three stages. For 192-bit keys, c = d
+ * are the index preprocessed through q0, and a = b are the index
+ * preprocessed through q1; j is the index of the first key byte to use.
+ * CALC_K192 is identical to CALC_K but for using the CALC_K192_2 macro
+ * instead of CALC_K_2.
+ * CALC_K256_2 expands CALC_K192_2 to handle 256-bit keys, by doing an
+ * additional lookup-and-XOR stage. The parameters a and b are the index
+ * preprocessed through q0 and q1 respectively; j is the index of the first
+ * key byte to use. CALC_K256 is identical to CALC_K but for using the
+ * CALC_K256_2 macro instead of CALC_K_2. */
+
+#define CALC_K_2(a, b, c, d, j) \
+ mds[0][q0[a ^ key[(j) + 8]] ^ key[j]] \
+ ^ mds[1][q0[b ^ key[(j) + 9]] ^ key[(j) + 1]] \
+ ^ mds[2][q1[c ^ key[(j) + 10]] ^ key[(j) + 2]] \
+ ^ mds[3][q1[d ^ key[(j) + 11]] ^ key[(j) + 3]]
+
+#define CALC_K(a, j, k, l, m, n) \
+ x = CALC_K_2 (k, l, k, l, 0); \
+ y = CALC_K_2 (m, n, m, n, 4); \
+ y = (y << 8) + (y >> 24); \
+ x += y; y += x; ctx->a[j] = x; \
+ ctx->a[(j) + 1] = (y << 9) + (y >> 23)
+
+#define CALC_K192_2(a, b, c, d, j) \
+ CALC_K_2 (q0[a ^ key[(j) + 16]], \
+ q1[b ^ key[(j) + 17]], \
+ q0[c ^ key[(j) + 18]], \
+ q1[d ^ key[(j) + 19]], j)
+
+#define CALC_K192(a, j, k, l, m, n) \
+ x = CALC_K192_2 (l, l, k, k, 0); \
+ y = CALC_K192_2 (n, n, m, m, 4); \
+ y = (y << 8) + (y >> 24); \
+ x += y; y += x; ctx->a[j] = x; \
+ ctx->a[(j) + 1] = (y << 9) + (y >> 23)
+
+#define CALC_K256_2(a, b, j) \
+ CALC_K192_2 (q1[b ^ key[(j) + 24]], \
+ q1[a ^ key[(j) + 25]], \
+ q0[a ^ key[(j) + 26]], \
+ q0[b ^ key[(j) + 27]], j)
+
+#define CALC_K256(a, j, k, l, m, n) \
+ x = CALC_K256_2 (k, l, 0); \
+ y = CALC_K256_2 (m, n, 4); \
+ y = (y << 8) + (y >> 24); \
+ x += y; y += x; ctx->a[j] = x; \
+ ctx->a[(j) + 1] = (y << 9) + (y >> 23)
+
+
+/* Macros to compute the g() function in the encryption and decryption
+ * rounds. G1 is the straight g() function; G2 includes the 8-bit
+ * rotation for the high 32-bit word. */
+
+#define G1(a) \
+ (ctx->s[0][(a) & 0xFF]) ^ (ctx->s[1][((a) >> 8) & 0xFF]) \
+ ^ (ctx->s[2][((a) >> 16) & 0xFF]) ^ (ctx->s[3][(a) >> 24])
+
+#define G2(b) \
+ (ctx->s[1][(b) & 0xFF]) ^ (ctx->s[2][((b) >> 8) & 0xFF]) \
+ ^ (ctx->s[3][((b) >> 16) & 0xFF]) ^ (ctx->s[0][(b) >> 24])
+
+/* Encryption and decryption Feistel rounds. Each one calls the two g()
+ * macros, does the PHT, and performs the XOR and the appropriate bit
+ * rotations. The parameters are the round number (used to select subkeys),
+ * and the four 32-bit chunks of the text. */
+
+#define ENCROUND(n, a, b, c, d) \
+ x = G1 (a); y = G2 (b); \
+ x += y; y += x + ctx->k[2 * (n) + 1]; \
+ (c) ^= x + ctx->k[2 * (n)]; \
+ (c) = ((c) >> 1) + ((c) << 31); \
+ (d) = (((d) << 1)+((d) >> 31)) ^ y
+
+#define DECROUND(n, a, b, c, d) \
+ x = G1 (a); y = G2 (b); \
+ x += y; y += x; \
+ (d) ^= y + ctx->k[2 * (n) + 1]; \
+ (d) = ((d) >> 1) + ((d) << 31); \
+ (c) = (((c) << 1)+((c) >> 31)); \
+ (c) ^= (x + ctx->k[2 * (n)])
+
+/* Encryption and decryption cycles; each one is simply two Feistel rounds
+ * with the 32-bit chunks re-ordered to simulate the "swap" */
+
+#define ENCCYCLE(n) \
+ ENCROUND (2 * (n), a, b, c, d); \
+ ENCROUND (2 * (n) + 1, c, d, a, b)
+
+#define DECCYCLE(n) \
+ DECROUND (2 * (n) + 1, c, d, a, b); \
+ DECROUND (2 * (n), a, b, c, d)
+
+/* Macros to convert the input and output bytes into 32-bit words,
+ * and simultaneously perform the whitening step. INPACK packs word
+ * number n into the variable named by x, using whitening subkey number m.
+ * OUTUNPACK unpacks word number n from the variable named by x, using
+ * whitening subkey number m. */
+
+#define INPACK(n, x, m) \
+ x = in[4 * (n)] ^ (in[4 * (n) + 1] << 8) \
+ ^ (in[4 * (n) + 2] << 16) ^ (in[4 * (n) + 3] << 24) ^ ctx->w[m]
+
+#define OUTUNPACK(n, x, m) \
+ x ^= ctx->w[m]; \
+ out[4 * (n)] = x; out[4 * (n) + 1] = x >> 8; \
+ out[4 * (n) + 2] = x >> 16; out[4 * (n) + 3] = x >> 24
+
+#define TF_MIN_KEY_SIZE 16
+#define TF_MAX_KEY_SIZE 32
+#define TF_BLOCK_SIZE 16
+
+/* Structure for an expanded Twofish key. s contains the key-dependent
+ * S-boxes composed with the MDS matrix; w contains the eight "whitening"
+ * subkeys, K[0] through K[7]. k holds the remaining, "round" subkeys. Note
+ * that k[i] corresponds to what the Twofish paper calls K[i+8]. */
+struct twofish_ctx {
+ u32 s[4][256], w[8], k[32];
+};
+
+/* Perform the key setup. */
+static int twofish_setkey(void *cx, const u8 *key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct twofish_ctx *ctx = cx;
+
+ int i, j, k;
+
+ /* Temporaries for CALC_K. */
+ u32 x, y;
+
+ /* The S vector used to key the S-boxes, split up into individual bytes.
+ * 128-bit keys use only sa through sh; 256-bit use all of them. */
+ u8 sa = 0, sb = 0, sc = 0, sd = 0, se = 0, sf = 0, sg = 0, sh = 0;
+ u8 si = 0, sj = 0, sk = 0, sl = 0, sm = 0, sn = 0, so = 0, sp = 0;
+
+ /* Temporary for CALC_S. */
+ u8 tmp;
+
+ /* Check key length. */
+ if (key_len != 16 && key_len != 24 && key_len != 32)
+ return -EINVAL; /* unsupported key length */
+
+ /* Compute the first two words of the S vector. The magic numbers are
+ * the entries of the RS matrix, preprocessed through poly_to_exp. The
+ * numbers in the comments are the original (polynomial form) matrix
+ * entries. */
+ CALC_S (sa, sb, sc, sd, 0, 0x00, 0x2D, 0x01, 0x2D); /* 01 A4 02 A4 */
+ CALC_S (sa, sb, sc, sd, 1, 0x2D, 0xA4, 0x44, 0x8A); /* A4 56 A1 55 */
+ CALC_S (sa, sb, sc, sd, 2, 0x8A, 0xD5, 0xBF, 0xD1); /* 55 82 FC 87 */
+ CALC_S (sa, sb, sc, sd, 3, 0xD1, 0x7F, 0x3D, 0x99); /* 87 F3 C1 5A */
+ CALC_S (sa, sb, sc, sd, 4, 0x99, 0x46, 0x66, 0x96); /* 5A 1E 47 58 */
+ CALC_S (sa, sb, sc, sd, 5, 0x96, 0x3C, 0x5B, 0xED); /* 58 C6 AE DB */
+ CALC_S (sa, sb, sc, sd, 6, 0xED, 0x37, 0x4F, 0xE0); /* DB 68 3D 9E */
+ CALC_S (sa, sb, sc, sd, 7, 0xE0, 0xD0, 0x8C, 0x17); /* 9E E5 19 03 */
+ CALC_S (se, sf, sg, sh, 8, 0x00, 0x2D, 0x01, 0x2D); /* 01 A4 02 A4 */
+ CALC_S (se, sf, sg, sh, 9, 0x2D, 0xA4, 0x44, 0x8A); /* A4 56 A1 55 */
+ CALC_S (se, sf, sg, sh, 10, 0x8A, 0xD5, 0xBF, 0xD1); /* 55 82 FC 87 */
+ CALC_S (se, sf, sg, sh, 11, 0xD1, 0x7F, 0x3D, 0x99); /* 87 F3 C1 5A */
+ CALC_S (se, sf, sg, sh, 12, 0x99, 0x46, 0x66, 0x96); /* 5A 1E 47 58 */
+ CALC_S (se, sf, sg, sh, 13, 0x96, 0x3C, 0x5B, 0xED); /* 58 C6 AE DB */
+ CALC_S (se, sf, sg, sh, 14, 0xED, 0x37, 0x4F, 0xE0); /* DB 68 3D 9E */
+ CALC_S (se, sf, sg, sh, 15, 0xE0, 0xD0, 0x8C, 0x17); /* 9E E5 19 03 */
+
+ if (key_len == 24 || key_len == 32) { /* 192- or 256-bit key */
+ /* Calculate the third word of the S vector */
+ CALC_S (si, sj, sk, sl, 16, 0x00, 0x2D, 0x01, 0x2D); /* 01 A4 02 A4 */
+ CALC_S (si, sj, sk, sl, 17, 0x2D, 0xA4, 0x44, 0x8A); /* A4 56 A1 55 */
+ CALC_S (si, sj, sk, sl, 18, 0x8A, 0xD5, 0xBF, 0xD1); /* 55 82 FC 87 */
+ CALC_S (si, sj, sk, sl, 19, 0xD1, 0x7F, 0x3D, 0x99); /* 87 F3 C1 5A */
+ CALC_S (si, sj, sk, sl, 20, 0x99, 0x46, 0x66, 0x96); /* 5A 1E 47 58 */
+ CALC_S (si, sj, sk, sl, 21, 0x96, 0x3C, 0x5B, 0xED); /* 58 C6 AE DB */
+ CALC_S (si, sj, sk, sl, 22, 0xED, 0x37, 0x4F, 0xE0); /* DB 68 3D 9E */
+ CALC_S (si, sj, sk, sl, 23, 0xE0, 0xD0, 0x8C, 0x17); /* 9E E5 19 03 */
+ }
+
+ if (key_len == 32) { /* 256-bit key */
+ /* Calculate the fourth word of the S vector */
+ CALC_S (sm, sn, so, sp, 24, 0x00, 0x2D, 0x01, 0x2D); /* 01 A4 02 A4 */
+ CALC_S (sm, sn, so, sp, 25, 0x2D, 0xA4, 0x44, 0x8A); /* A4 56 A1 55 */
+ CALC_S (sm, sn, so, sp, 26, 0x8A, 0xD5, 0xBF, 0xD1); /* 55 82 FC 87 */
+ CALC_S (sm, sn, so, sp, 27, 0xD1, 0x7F, 0x3D, 0x99); /* 87 F3 C1 5A */
+ CALC_S (sm, sn, so, sp, 28, 0x99, 0x46, 0x66, 0x96); /* 5A 1E 47 58 */
+ CALC_S (sm, sn, so, sp, 29, 0x96, 0x3C, 0x5B, 0xED); /* 58 C6 AE DB */
+ CALC_S (sm, sn, so, sp, 30, 0xED, 0x37, 0x4F, 0xE0); /* DB 68 3D 9E */
+ CALC_S (sm, sn, so, sp, 31, 0xE0, 0xD0, 0x8C, 0x17); /* 9E E5 19 03 */
+
+ /* Compute the S-boxes. */
+ for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
+ CALC_SB256_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+ /* Calculate whitening and round subkeys. The constants are
+ * indices of subkeys, preprocessed through q0 and q1. */
+ CALC_K256 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+ CALC_K256 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+ CALC_K256 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+ CALC_K256 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+ CALC_K256 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+ CALC_K256 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+ CALC_K256 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+ CALC_K256 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+ CALC_K256 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+ CALC_K256 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+ CALC_K256 (k, 12, 0x18, 0x37, 0xF7, 0x71);
+ CALC_K256 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+ CALC_K256 (k, 16, 0x43, 0x30, 0x75, 0x0F);
+ CALC_K256 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+ CALC_K256 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+ CALC_K256 (k, 22, 0x94, 0x06, 0x48, 0x3F);
+ CALC_K256 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+ CALC_K256 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+ CALC_K256 (k, 28, 0x84, 0x8A, 0x54, 0x00);
+ CALC_K256 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
+ } else if (key_len == 24) { /* 192-bit key */
+ /* Compute the S-boxes. */
+ for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
+ CALC_SB192_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+ /* Calculate whitening and round subkeys. The constants are
+ * indices of subkeys, preprocessed through q0 and q1. */
+ CALC_K192 (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+ CALC_K192 (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+ CALC_K192 (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+ CALC_K192 (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+ CALC_K192 (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+ CALC_K192 (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+ CALC_K192 (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+ CALC_K192 (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+ CALC_K192 (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+ CALC_K192 (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+ CALC_K192 (k, 12, 0x18, 0x37, 0xF7, 0x71);
+ CALC_K192 (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+ CALC_K192 (k, 16, 0x43, 0x30, 0x75, 0x0F);
+ CALC_K192 (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+ CALC_K192 (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+ CALC_K192 (k, 22, 0x94, 0x06, 0x48, 0x3F);
+ CALC_K192 (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+ CALC_K192 (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+ CALC_K192 (k, 28, 0x84, 0x8A, 0x54, 0x00);
+ CALC_K192 (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
+ } else { /* 128-bit key */
+ /* Compute the S-boxes. */
+ for ( i = j = 0, k = 1; i < 256; i++, j += 2, k += 2 ) {
+ CALC_SB_2( i, calc_sb_tbl[j], calc_sb_tbl[k] );
+ }
+
+ /* Calculate whitening and round subkeys. The constants are
+ * indices of subkeys, preprocessed through q0 and q1. */
+ CALC_K (w, 0, 0xA9, 0x75, 0x67, 0xF3);
+ CALC_K (w, 2, 0xB3, 0xC6, 0xE8, 0xF4);
+ CALC_K (w, 4, 0x04, 0xDB, 0xFD, 0x7B);
+ CALC_K (w, 6, 0xA3, 0xFB, 0x76, 0xC8);
+ CALC_K (k, 0, 0x9A, 0x4A, 0x92, 0xD3);
+ CALC_K (k, 2, 0x80, 0xE6, 0x78, 0x6B);
+ CALC_K (k, 4, 0xE4, 0x45, 0xDD, 0x7D);
+ CALC_K (k, 6, 0xD1, 0xE8, 0x38, 0x4B);
+ CALC_K (k, 8, 0x0D, 0xD6, 0xC6, 0x32);
+ CALC_K (k, 10, 0x35, 0xD8, 0x98, 0xFD);
+ CALC_K (k, 12, 0x18, 0x37, 0xF7, 0x71);
+ CALC_K (k, 14, 0xEC, 0xF1, 0x6C, 0xE1);
+ CALC_K (k, 16, 0x43, 0x30, 0x75, 0x0F);
+ CALC_K (k, 18, 0x37, 0xF8, 0x26, 0x1B);
+ CALC_K (k, 20, 0xFA, 0x87, 0x13, 0xFA);
+ CALC_K (k, 22, 0x94, 0x06, 0x48, 0x3F);
+ CALC_K (k, 24, 0xF2, 0x5E, 0xD0, 0xBA);
+ CALC_K (k, 26, 0x8B, 0xAE, 0x30, 0x5B);
+ CALC_K (k, 28, 0x84, 0x8A, 0x54, 0x00);
+ CALC_K (k, 30, 0xDF, 0xBC, 0x23, 0x9D);
+ }
+
+ return 0;
+}
+
+/* Encrypt one block. in and out may be the same. */
+static void twofish_encrypt(void *cx, u8 *out, const u8 *in)
+{
+ struct twofish_ctx *ctx = cx;
+
+ /* The four 32-bit chunks of the text. */
+ u32 a, b, c, d;
+
+ /* Temporaries used by the round function. */
+ u32 x, y;
+
+ /* Input whitening and packing. */
+ INPACK (0, a, 0);
+ INPACK (1, b, 1);
+ INPACK (2, c, 2);
+ INPACK (3, d, 3);
+
+ /* Encryption Feistel cycles. */
+ ENCCYCLE (0);
+ ENCCYCLE (1);
+ ENCCYCLE (2);
+ ENCCYCLE (3);
+ ENCCYCLE (4);
+ ENCCYCLE (5);
+ ENCCYCLE (6);
+ ENCCYCLE (7);
+
+ /* Output whitening and unpacking. */
+ OUTUNPACK (0, c, 4);
+ OUTUNPACK (1, d, 5);
+ OUTUNPACK (2, a, 6);
+ OUTUNPACK (3, b, 7);
+
+}
+
+/* Decrypt one block. in and out may be the same. */
+static void twofish_decrypt(void *cx, u8 *out, const u8 *in)
+{
+ struct twofish_ctx *ctx = cx;
+
+ /* The four 32-bit chunks of the text. */
+ u32 a, b, c, d;
+
+ /* Temporaries used by the round function. */
+ u32 x, y;
+
+ /* Input whitening and packing. */
+ INPACK (0, c, 4);
+ INPACK (1, d, 5);
+ INPACK (2, a, 6);
+ INPACK (3, b, 7);
+
+ /* Decryption Feistel cycles. */
+ DECCYCLE (7);
+ DECCYCLE (6);
+ DECCYCLE (5);
+ DECCYCLE (4);
+ DECCYCLE (3);
+ DECCYCLE (2);
+ DECCYCLE (1);
+ DECCYCLE (0);
+
+ /* Output whitening and unpacking. */
+ OUTUNPACK (0, a, 0);
+ OUTUNPACK (1, b, 1);
+ OUTUNPACK (2, c, 2);
+ OUTUNPACK (3, d, 3);
+
+}
+
+static struct crypto_alg alg = {
+ .cra_name = "twofish",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = TF_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct twofish_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = TF_MIN_KEY_SIZE,
+ .cia_max_keysize = TF_MAX_KEY_SIZE,
+ .cia_ivsize = TF_BLOCK_SIZE,
+ .cia_setkey = twofish_setkey,
+ .cia_encrypt = twofish_encrypt,
+ .cia_decrypt = twofish_decrypt } }
+};
+
+static int __init init(void)
+{
+ return crypto_register_alg(&alg);
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&alg);
+}
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION ("Twofish Cipher Algorithm");
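The INPACK/OUTUNPACK macros used in twofish_encrypt()/twofish_decrypt() above pack the 16-byte block into four 32-bit words and XOR in the whitening subkeys w[0..7]. A minimal userspace sketch of that round-trip, assuming Twofish's little-endian byte order; the helper names and the subkey value are illustrative, not the kernel macros themselves:

```c
#include <stdint.h>

/* Sketch of the INPACK idea: load a little-endian 32-bit word from the
 * block and XOR in a whitening subkey. outunpack() is the inverse. */
static uint32_t inpack(const uint8_t *in, uint32_t w)
{
	return ((uint32_t)in[0] | (uint32_t)in[1] << 8 |
		(uint32_t)in[2] << 16 | (uint32_t)in[3] << 24) ^ w;
}

static void outunpack(uint32_t x, uint32_t w, uint8_t *out)
{
	x ^= w;
	out[0] = (uint8_t)x;
	out[1] = (uint8_t)(x >> 8);
	out[2] = (uint8_t)(x >> 16);
	out[3] = (uint8_t)(x >> 24);
}
```

Applying outunpack() with the same subkey undoes inpack(), which is why the whitening XORs cancel across an encrypt/decrypt pair.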
diff -Nru a/drivers/net/3c59x.c b/drivers/net/3c59x.c
--- a/drivers/net/3c59x.c Thu May 8 10:41:36 2003
+++ b/drivers/net/3c59x.c Thu May 8 10:41:36 2003
@@ -1997,7 +1997,7 @@
if (skb->ip_summed != CHECKSUM_HW)
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
else
- vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum);
+ vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);
if (!skb_shinfo(skb)->nr_frags) {
vp->tx_ring[entry].frag[0].addr = cpu_to_le32(pci_map_single(vp->pdev, skb->data,
diff -Nru a/include/asm-i386/kmap_types.h b/include/asm-i386/kmap_types.h
--- a/include/asm-i386/kmap_types.h Thu May 8 10:41:37 2003
+++ b/include/asm-i386/kmap_types.h Thu May 8 10:41:37 2003
@@ -8,6 +8,8 @@
KM_USER0,
KM_USER1,
KM_BH_IRQ,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
KM_TYPE_NR
};
diff -Nru a/include/asm-ppc/kmap_types.h b/include/asm-ppc/kmap_types.h
--- a/include/asm-ppc/kmap_types.h Thu May 8 10:41:36 2003
+++ b/include/asm-ppc/kmap_types.h Thu May 8 10:41:36 2003
@@ -9,6 +9,8 @@
KM_USER0,
KM_USER1,
KM_BH_IRQ,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
KM_TYPE_NR
};
diff -Nru a/include/asm-sparc/kmap_types.h b/include/asm-sparc/kmap_types.h
--- a/include/asm-sparc/kmap_types.h Thu May 8 10:41:37 2003
+++ b/include/asm-sparc/kmap_types.h Thu May 8 10:41:37 2003
@@ -8,6 +8,8 @@
KM_USER0,
KM_USER1,
KM_BH_IRQ,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
KM_TYPE_NR
};
diff -Nru a/include/asm-sparc64/kmap_types.h b/include/asm-sparc64/kmap_types.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/asm-sparc64/kmap_types.h Thu May 8 10:41:38 2003
@@ -0,0 +1,20 @@
+#ifndef _ASM_KMAP_TYPES_H
+#define _ASM_KMAP_TYPES_H
+
+/* Dummy header just to define km_type. None of this
+ * is actually used on sparc64. -DaveM
+ */
+
+enum km_type {
+ KM_BOUNCE_READ,
+ KM_SKB_SUNRPC_DATA,
+ KM_SKB_DATA_SOFTIRQ,
+ KM_USER0,
+ KM_USER1,
+ KM_BH_IRQ,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
+ KM_TYPE_NR
+};
+
+#endif
diff -Nru a/include/asm-x86_64/kmap_types.h b/include/asm-x86_64/kmap_types.h
--- a/include/asm-x86_64/kmap_types.h Thu May 8 10:41:38 2003
+++ b/include/asm-x86_64/kmap_types.h Thu May 8 10:41:38 2003
@@ -7,6 +7,8 @@
KM_SKB_DATA_SOFTIRQ,
KM_USER0,
KM_USER1,
+ KM_SOFTIRQ0,
+ KM_SOFTIRQ1,
KM_TYPE_NR
};
diff -Nru a/include/linux/crypto.h b/include/linux/crypto.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/linux/crypto.h Thu May 8 10:41:38 2003
@@ -0,0 +1,379 @@
+/*
+ * Scatterlist Cryptographic API.
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ * Copyright (c) 2002 David S. Miller (davem@redhat.com)
+ *
+ * Portions derived from Cryptoapi, by Alexander Kjeldaas <astor@fast.no>
+ * and Nettle, by Niels Möller.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ */
+#ifndef _LINUX_CRYPTO_H
+#define _LINUX_CRYPTO_H
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/list.h>
+#include <linux/string.h>
+#include <asm/page.h>
+
+/*
+ * Algorithm masks and types.
+ */
+#define CRYPTO_ALG_TYPE_MASK 0x000000ff
+#define CRYPTO_ALG_TYPE_CIPHER 0x00000001
+#define CRYPTO_ALG_TYPE_DIGEST 0x00000002
+#define CRYPTO_ALG_TYPE_COMPRESS 0x00000004
+
+/*
+ * Transform masks and values (for crt_flags).
+ */
+#define CRYPTO_TFM_MODE_MASK 0x000000ff
+#define CRYPTO_TFM_REQ_MASK 0x000fff00
+#define CRYPTO_TFM_RES_MASK 0xfff00000
+
+#define CRYPTO_TFM_MODE_ECB 0x00000001
+#define CRYPTO_TFM_MODE_CBC 0x00000002
+#define CRYPTO_TFM_MODE_CFB 0x00000004
+#define CRYPTO_TFM_MODE_CTR 0x00000008
+
+#define CRYPTO_TFM_REQ_WEAK_KEY 0x00000100
+#define CRYPTO_TFM_RES_WEAK_KEY 0x00100000
+#define CRYPTO_TFM_RES_BAD_KEY_LEN 0x00200000
+#define CRYPTO_TFM_RES_BAD_KEY_SCHED 0x00400000
+#define CRYPTO_TFM_RES_BAD_BLOCK_LEN 0x00800000
+#define CRYPTO_TFM_RES_BAD_FLAGS 0x01000000
+
+/*
+ * Miscellaneous stuff.
+ */
+#define CRYPTO_UNSPEC 0
+#define CRYPTO_MAX_ALG_NAME 64
+
+struct scatterlist;
+
+/*
+ * Algorithms: modular crypto algorithm implementations, managed
+ * via crypto_register_alg() and crypto_unregister_alg().
+ */
+struct cipher_alg {
+ unsigned int cia_min_keysize;
+ unsigned int cia_max_keysize;
+ unsigned int cia_ivsize;
+ int (*cia_setkey)(void *ctx, const u8 *key,
+ unsigned int keylen, u32 *flags);
+ void (*cia_encrypt)(void *ctx, u8 *dst, const u8 *src);
+ void (*cia_decrypt)(void *ctx, u8 *dst, const u8 *src);
+};
+
+struct digest_alg {
+ unsigned int dia_digestsize;
+ void (*dia_init)(void *ctx);
+ void (*dia_update)(void *ctx, const u8 *data, unsigned int len);
+ void (*dia_final)(void *ctx, u8 *out);
+};
+
+struct compress_alg {
+ int (*coa_init)(void *ctx);
+ void (*coa_exit)(void *ctx);
+ int (*coa_compress)(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen);
+ int (*coa_decompress)(void *ctx, const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen);
+};
+
+#define cra_cipher cra_u.cipher
+#define cra_digest cra_u.digest
+#define cra_compress cra_u.compress
+
+struct crypto_alg {
+ struct list_head cra_list;
+ u32 cra_flags;
+ unsigned int cra_blocksize;
+ unsigned int cra_ctxsize;
+ const char cra_name[CRYPTO_MAX_ALG_NAME];
+
+ union {
+ struct cipher_alg cipher;
+ struct digest_alg digest;
+ struct compress_alg compress;
+ } cra_u;
+
+ struct module *cra_module;
+};
+
+/*
+ * Algorithm registration interface.
+ */
+int crypto_register_alg(struct crypto_alg *alg);
+int crypto_unregister_alg(struct crypto_alg *alg);
+
+/*
+ * Algorithm query interface.
+ */
+int crypto_alg_available(const char *name, u32 flags);
+
+/*
+ * Transforms: user-instantiated objects which encapsulate algorithms
+ * and core processing logic. Managed via crypto_alloc_tfm() and
+ * crypto_free_tfm(), as well as the various helpers below.
+ */
+struct crypto_tfm;
+
+struct cipher_tfm {
+ void *cit_iv;
+ u32 cit_mode;
+ int (*cit_setkey)(struct crypto_tfm *tfm,
+ const u8 *key, unsigned int keylen);
+ int (*cit_encrypt)(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes);
+ int (*cit_encrypt_iv)(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv);
+ int (*cit_decrypt)(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes);
+ int (*cit_decrypt_iv)(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv);
+ void (*cit_xor_block)(u8 *dst, const u8 *src);
+};
+
+struct digest_tfm {
+ void (*dit_init)(struct crypto_tfm *tfm);
+ void (*dit_update)(struct crypto_tfm *tfm,
+ struct scatterlist *sg, unsigned int nsg);
+ void (*dit_final)(struct crypto_tfm *tfm, u8 *out);
+ void (*dit_digest)(struct crypto_tfm *tfm, struct scatterlist *sg,
+ unsigned int nsg, u8 *out);
+#ifdef CONFIG_CRYPTO_HMAC
+ void *dit_hmac_block;
+#endif
+};
+
+struct compress_tfm {
+ int (*cot_compress)(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen);
+ int (*cot_decompress)(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen);
+};
+
+#define crt_cipher crt_u.cipher
+#define crt_digest crt_u.digest
+#define crt_compress crt_u.compress
+
+struct crypto_tfm {
+
+ u32 crt_flags;
+
+ union {
+ struct cipher_tfm cipher;
+ struct digest_tfm digest;
+ struct compress_tfm compress;
+ } crt_u;
+
+ struct crypto_alg *__crt_alg;
+};
+
+/*
+ * Transform user interface.
+ */
+
+/*
+ * crypto_alloc_tfm() will first attempt to locate an already loaded algorithm.
+ * If that fails and the kernel supports dynamically loadable modules, it
+ * will then attempt to load a module of the same name or alias. A refcount
+ * is grabbed on the algorithm which is then associated with the new transform.
+ *
+ * crypto_free_tfm() frees up the transform and any associated resources,
+ * then drops the refcount on the associated algorithm.
+ */
+struct crypto_tfm *crypto_alloc_tfm(const char *alg_name, u32 tfm_flags);
+void crypto_free_tfm(struct crypto_tfm *tfm);
+
+/*
+ * Transform helpers which query the underlying algorithm.
+ */
+static inline const char *crypto_tfm_alg_name(struct crypto_tfm *tfm)
+{
+ return tfm->__crt_alg->cra_name;
+}
+
+static inline const char *crypto_tfm_alg_modname(struct crypto_tfm *tfm)
+{
+ struct crypto_alg *alg = tfm->__crt_alg;
+
+ if (alg->cra_module)
+ return alg->cra_module->name;
+ else
+ return NULL;
+}
+
+static inline u32 crypto_tfm_alg_type(struct crypto_tfm *tfm)
+{
+ return tfm->__crt_alg->cra_flags & CRYPTO_ALG_TYPE_MASK;
+}
+
+static inline unsigned int crypto_tfm_alg_min_keysize(struct crypto_tfm *tfm)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->__crt_alg->cra_cipher.cia_min_keysize;
+}
+
+static inline unsigned int crypto_tfm_alg_max_keysize(struct crypto_tfm *tfm)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->__crt_alg->cra_cipher.cia_max_keysize;
+}
+
+static inline unsigned int crypto_tfm_alg_ivsize(struct crypto_tfm *tfm)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->__crt_alg->cra_cipher.cia_ivsize;
+}
+
+static inline unsigned int crypto_tfm_alg_blocksize(struct crypto_tfm *tfm)
+{
+ return tfm->__crt_alg->cra_blocksize;
+}
+
+static inline unsigned int crypto_tfm_alg_digestsize(struct crypto_tfm *tfm)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_DIGEST);
+ return tfm->__crt_alg->cra_digest.dia_digestsize;
+}
+
+/*
+ * API wrappers.
+ */
+static inline void crypto_digest_init(struct crypto_tfm *tfm)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_DIGEST);
+ tfm->crt_digest.dit_init(tfm);
+}
+
+static inline void crypto_digest_update(struct crypto_tfm *tfm,
+ struct scatterlist *sg,
+ unsigned int nsg)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_DIGEST);
+ tfm->crt_digest.dit_update(tfm, sg, nsg);
+}
+
+static inline void crypto_digest_final(struct crypto_tfm *tfm, u8 *out)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_DIGEST);
+ tfm->crt_digest.dit_final(tfm, out);
+}
+
+static inline void crypto_digest_digest(struct crypto_tfm *tfm,
+ struct scatterlist *sg,
+ unsigned int nsg, u8 *out)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_DIGEST);
+ tfm->crt_digest.dit_digest(tfm, sg, nsg, out);
+}
+
+static inline int crypto_cipher_setkey(struct crypto_tfm *tfm,
+ const u8 *key, unsigned int keylen)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->crt_cipher.cit_setkey(tfm, key, keylen);
+}
+
+static inline int crypto_cipher_encrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->crt_cipher.cit_encrypt(tfm, dst, src, nbytes);
+}
+
+static inline int crypto_cipher_encrypt_iv(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ BUG_ON(tfm->crt_cipher.cit_mode == CRYPTO_TFM_MODE_ECB);
+ return tfm->crt_cipher.cit_encrypt_iv(tfm, dst, src, nbytes, iv);
+}
+
+static inline int crypto_cipher_decrypt(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ return tfm->crt_cipher.cit_decrypt(tfm, dst, src, nbytes);
+}
+
+static inline int crypto_cipher_decrypt_iv(struct crypto_tfm *tfm,
+ struct scatterlist *dst,
+ struct scatterlist *src,
+ unsigned int nbytes, u8 *iv)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ BUG_ON(tfm->crt_cipher.cit_mode == CRYPTO_TFM_MODE_ECB);
+ return tfm->crt_cipher.cit_decrypt_iv(tfm, dst, src, nbytes, iv);
+}
+
+static inline void crypto_cipher_set_iv(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int len)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ memcpy(tfm->crt_cipher.cit_iv, src, len);
+}
+
+static inline void crypto_cipher_get_iv(struct crypto_tfm *tfm,
+ u8 *dst, unsigned int len)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_CIPHER);
+ memcpy(dst, tfm->crt_cipher.cit_iv, len);
+}
+
+static inline int crypto_comp_compress(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_COMPRESS);
+ return tfm->crt_compress.cot_compress(tfm, src, slen, dst, dlen);
+}
+
+static inline int crypto_comp_decompress(struct crypto_tfm *tfm,
+ const u8 *src, unsigned int slen,
+ u8 *dst, unsigned int *dlen)
+{
+ BUG_ON(crypto_tfm_alg_type(tfm) != CRYPTO_ALG_TYPE_COMPRESS);
+ return tfm->crt_compress.cot_decompress(tfm, src, slen, dst, dlen);
+}
+
+/*
+ * HMAC support.
+ */
+#ifdef CONFIG_CRYPTO_HMAC
+void crypto_hmac_init(struct crypto_tfm *tfm, u8 *key, unsigned int *keylen);
+void crypto_hmac_update(struct crypto_tfm *tfm,
+ struct scatterlist *sg, unsigned int nsg);
+void crypto_hmac_final(struct crypto_tfm *tfm, u8 *key,
+ unsigned int *keylen, u8 *out);
+void crypto_hmac(struct crypto_tfm *tfm, u8 *key, unsigned int *keylen,
+ struct scatterlist *sg, unsigned int nsg, u8 *out);
+#endif /* CONFIG_CRYPTO_HMAC */
+
+#endif /* _LINUX_CRYPTO_H */
+
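The crypto_tfm wrappers above all follow one pattern: crt_flags/cra_flags carry a type tag, a union (crt_u) holds the per-type operation table, and each wrapper BUG_ON()s if the tag does not match before dispatching. A userspace sketch of that tagged-union dispatch, with invented names (not the kernel's API) and assert() standing in for BUG_ON():

```c
#include <assert.h>
#include <stdint.h>

#define ALG_TYPE_CIPHER 1
#define ALG_TYPE_DIGEST 2

/* Toy transform: a type tag selects which union member is valid. */
struct toy_tfm {
	uint32_t type;
	union {
		struct { uint32_t (*encrypt)(uint32_t); } cipher;
		struct { uint32_t (*hash)(uint32_t); } digest;
	} u;
};

/* Wrapper checks the tag before dispatching, mirroring BUG_ON(). */
static uint32_t toy_encrypt(struct toy_tfm *tfm, uint32_t x)
{
	assert(tfm->type == ALG_TYPE_CIPHER);
	return tfm->u.cipher.encrypt(x);
}

static uint32_t xor_cipher(uint32_t x)
{
	return x ^ 0xA5A5A5A5;
}
```

The union keeps the transform small: only one operation table exists per instance, and the tag check makes a cipher call on a digest transform fail loudly rather than dispatch through garbage.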
diff -Nru a/include/linux/in.h b/include/linux/in.h
--- a/include/linux/in.h Thu May 8 10:41:37 2003
+++ b/include/linux/in.h Thu May 8 10:41:37 2003
@@ -41,6 +41,7 @@
IPPROTO_ESP = 50, /* Encapsulation Security Payload protocol */
IPPROTO_AH = 51, /* Authentication Header protocol */
IPPROTO_COMP = 108, /* Compression Header protocol */
+ IPPROTO_SCTP = 132, /* Stream Control Transmission Protocol */
IPPROTO_RAW = 255, /* Raw IP packets */
IPPROTO_MAX
@@ -67,6 +68,8 @@
#define IP_RECVTOS 13
#define IP_MTU 14
#define IP_FREEBIND 15
+#define IP_IPSEC_POLICY 16
+#define IP_XFRM_POLICY 17
/* BSD compatibility */
#define IP_RECVRETOPTS IP_RETOPTS
diff -Nru a/include/linux/in6.h b/include/linux/in6.h
--- a/include/linux/in6.h Thu May 8 10:41:37 2003
+++ b/include/linux/in6.h Thu May 8 10:41:37 2003
@@ -180,5 +180,8 @@
#define IPV6_FLOWLABEL_MGR 32
#define IPV6_FLOWINFO_SEND 33
+#define IPV6_IPSEC_POLICY 34
+#define IPV6_XFRM_POLICY 35
+
#endif
diff -Nru a/include/linux/inetdevice.h b/include/linux/inetdevice.h
--- a/include/linux/inetdevice.h Thu May 8 10:41:36 2003
+++ b/include/linux/inetdevice.h Thu May 8 10:41:36 2003
@@ -19,6 +19,8 @@
int tag;
int arp_filter;
int medium_id;
+ int no_xfrm;
+ int no_policy;
void *sysctl;
};
diff -Nru a/include/linux/ip.h b/include/linux/ip.h
--- a/include/linux/ip.h Thu May 8 10:41:37 2003
+++ b/include/linux/ip.h Thu May 8 10:41:37 2003
@@ -18,8 +18,6 @@
#define _LINUX_IP_H
#include <asm/byteorder.h>
-/* SOL_IP socket options */
-
#define IPTOS_TOS_MASK 0x1E
#define IPTOS_TOS(tos) ((tos)&IPTOS_TOS_MASK)
#define IPTOS_LOWDELAY 0x10
@@ -67,14 +65,6 @@
#define MAXTTL 255
#define IPDEFTTL 64
-/* struct timestamp, struct route and MAX_ROUTES are removed.
-
- REASONS: it is clear that nobody used them because:
- - MAX_ROUTES value was wrong.
- - "struct route" was wrong.
- - "struct timestamp" had fatally misaligned bitfields and was completely unusable.
- */
-
#define IPOPT_OPTVAL 0
#define IPOPT_OLEN 1
#define IPOPT_OFFSET 2
@@ -133,6 +123,21 @@
__u32 saddr;
__u32 daddr;
/*The options start here. */
+};
+
+struct ip_auth_hdr {
+ __u8 nexthdr;
+ __u8 hdrlen; /* This one is measured in 32 bit units! */
+ __u16 reserved;
+ __u32 spi;
+ __u32 seq_no; /* Sequence number */
+ __u8 auth_data[0]; /* Variable len but >=4. Mind the 64 bit alignment! */
+};
+
+struct ip_esp_hdr {
+ __u32 spi;
+ __u32 seq_no; /* Sequence number */
+ __u8 enc_data[0]; /* Variable len but >=8. Mind the 64 bit alignment! */
};
#endif /* _LINUX_IP_H */
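The ip_auth_hdr added above notes that hdrlen is measured in 32-bit units; per RFC 2402 it is the length of the AH in 32-bit words minus 2, so input paths convert it to bytes as below. The helper name is ours, a sketch rather than kernel code:

```c
#include <stdint.h>

/* Total AH size in bytes from the on-the-wire hdrlen field
 * (32-bit words minus 2, per RFC 2402). */
static unsigned int ah_hdr_bytes(uint8_t hdrlen)
{
	return ((unsigned int)hdrlen + 2) * 4;
}
```

For example, AH with a 96-bit ICV is 12 fixed bytes plus 12 ICV bytes = 24 bytes, carried as hdrlen = 4.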
diff -Nru a/include/linux/ipsec.h b/include/linux/ipsec.h
--- a/include/linux/ipsec.h Thu May 8 10:41:36 2003
+++ b/include/linux/ipsec.h Thu May 8 10:41:36 2003
@@ -1,69 +1,46 @@
-/*
- * Definitions for the SECurity layer
- *
- * Author:
- * Robert Muchsel <muchsel@acm.org>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
#ifndef _LINUX_IPSEC_H
#define _LINUX_IPSEC_H
-#include <linux/config.h>
-#include <linux/socket.h>
-#include <net/sock.h>
-#include <linux/skbuff.h>
-
-/* Values for the set/getsockopt calls */
-
-/* These defines are compatible with NRL IPv6, however their semantics
- is different */
-
-#define IPSEC_LEVEL_NONE -1 /* send plaintext, accept any */
-#define IPSEC_LEVEL_DEFAULT 0 /* encrypt/authenticate if possible */
- /* the default MUST be 0, because a */
- /* socket is initialized with 0's */
-#define IPSEC_LEVEL_USE 1 /* use outbound, don't require inbound */
-#define IPSEC_LEVEL_REQUIRE 2 /* require both directions */
-#define IPSEC_LEVEL_UNIQUE 2 /* for compatibility only */
-
-#ifdef __KERNEL__
-
-/* skb bit flags set on packet input processing */
-
-#define RCV_SEC 0x0f /* options on receive */
-#define RCV_AUTH 0x01 /* was authenticated */
-#define RCV_CRYPT 0x02 /* was encrypted */
-#define RCV_TUNNEL 0x04 /* was tunneled */
-#define SND_SEC 0xf0 /* options on send, these are */
-#define SND_AUTH 0x10 /* currently unused */
-#define SND_CRYPT 0x20
-#define SND_TUNNEL 0x40
-
-/*
- * FIXME: ignores network encryption for now..
- */
-
-#ifdef CONFIG_NET_SECURITY
-static __inline__ int ipsec_sk_policy(struct sock *sk, struct sk_buff *skb)
-{
- return ((sk->authentication < IPSEC_LEVEL_REQUIRE) ||
- (skb->security & RCV_AUTH)) &&
- ((sk->encryption < IPSEC_LEVEL_REQUIRE) ||
- (skb->security & RCV_CRYPT));
-}
-
-#else
-
-static __inline__ int ipsec_sk_policy(struct sock *sk, struct sk_buff *skb)
-{
- return 1;
-}
-#endif /* CONFIG */
+/* The definitions, required to talk to KAME racoon IKE. */
+
+#include <linux/pfkeyv2.h>
+
+#define IPSEC_PORT_ANY 0
+#define IPSEC_ULPROTO_ANY 255
+#define IPSEC_PROTO_ANY 255
+
+enum {
+ IPSEC_MODE_ANY = 0, /* We do not support this for SA */
+ IPSEC_MODE_TRANSPORT = 1,
+ IPSEC_MODE_TUNNEL = 2
+};
+
+enum {
+ IPSEC_DIR_ANY = 0,
+ IPSEC_DIR_INBOUND = 1,
+ IPSEC_DIR_OUTBOUND = 2,
+ IPSEC_DIR_FWD = 3, /* It is our own */
+ IPSEC_DIR_MAX = 4,
+ IPSEC_DIR_INVALID = 5
+};
+
+enum {
+ IPSEC_POLICY_DISCARD = 0,
+ IPSEC_POLICY_NONE = 1,
+ IPSEC_POLICY_IPSEC = 2,
+ IPSEC_POLICY_ENTRUST = 3,
+ IPSEC_POLICY_BYPASS = 4
+};
+
+enum {
+ IPSEC_LEVEL_DEFAULT = 0,
+ IPSEC_LEVEL_USE = 1,
+ IPSEC_LEVEL_REQUIRE = 2,
+ IPSEC_LEVEL_UNIQUE = 3
+};
+
+#define IPSEC_MANUAL_REQID_MAX 0x3fff
+
+#define IPSEC_REPLAYWSIZE 32
-#endif /* __KERNEL__ */
#endif /* _LINUX_IPSEC_H */
diff -Nru a/include/linux/ipv6.h b/include/linux/ipv6.h
--- a/include/linux/ipv6.h Thu May 8 10:41:37 2003
+++ b/include/linux/ipv6.h Thu May 8 10:41:37 2003
@@ -73,6 +73,21 @@
#define rt0_type rt_hdr.type;
};
+struct ipv6_auth_hdr {
+ __u8 nexthdr;
+ __u8 hdrlen; /* This one is measured in 32 bit units! */
+ __u16 reserved;
+ __u32 spi;
+ __u32 seq_no; /* Sequence number */
+ __u8 auth_data[0]; /* Length variable but >=4. Mind the 64 bit alignment! */
+};
+
+struct ipv6_esp_hdr {
+ __u32 spi;
+ __u32 seq_no; /* Sequence number */
+ __u8 enc_data[0]; /* Length variable but >=8. Mind the 64 bit alignment! */
+};
+
/*
* IPv6 fixed header
*
diff -Nru a/include/linux/netdevice.h b/include/linux/netdevice.h
--- a/include/linux/netdevice.h Thu May 8 10:41:37 2003
+++ b/include/linux/netdevice.h Thu May 8 10:41:37 2003
@@ -89,6 +89,11 @@
#define MAX_HEADER (LL_MAX_HEADER + 48)
#endif
+/* Reserve a 16-byte aligned hard_header_len, but at least 16 bytes.
+ * Alternative is: dev->hard_header_len ? (dev->hard_header_len + 15)&~15 : 0
+ */
+#define LL_RESERVED_SPACE(dev) (((dev)->hard_header_len&~15) + 16)
+
/*
* Network device statistics. Akin to the 2.0 ether stats but
* with byte counters.
@@ -478,6 +483,7 @@
extern int dev_queue_xmit(struct sk_buff *skb);
extern int register_netdevice(struct net_device *dev);
extern int unregister_netdevice(struct net_device *dev);
+extern void synchronize_net(void);
extern int register_netdevice_notifier(struct notifier_block *nb);
extern int unregister_netdevice_notifier(struct notifier_block *nb);
extern int dev_new_index(void);
diff -Nru a/include/linux/netlink.h b/include/linux/netlink.h
--- a/include/linux/netlink.h Thu May 8 10:41:37 2003
+++ b/include/linux/netlink.h Thu May 8 10:41:37 2003
@@ -7,6 +7,7 @@
#define NETLINK_FIREWALL 3 /* Firewalling hook */
#define NETLINK_TCPDIAG 4 /* TCP socket monitoring */
#define NETLINK_NFLOG 5 /* netfilter/iptables ULOG */
+#define NETLINK_XFRM 6 /* ipsec */
#define NETLINK_ARPD 8
#define NETLINK_ROUTE6 11 /* af_inet6 route comm channel */
#define NETLINK_IP6_FW 13
@@ -86,6 +87,8 @@
#ifdef __KERNEL__
+#include <linux/capability.h>
+
struct netlink_skb_parms
{
struct ucred creds; /* Skb credentials */
@@ -107,8 +110,8 @@
extern struct sock *netlink_kernel_create(int unit, void (*input)(struct sock *sk, int len));
extern void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err);
extern int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 pid, int nonblock);
-extern void netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 pid,
- __u32 group, int allocation);
+extern int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 pid,
+ __u32 group, int allocation);
extern void netlink_set_err(struct sock *ssk, __u32 pid, __u32 group, int code);
extern int netlink_register_notifier(struct notifier_block *nb);
extern int netlink_unregister_notifier(struct notifier_block *nb);
diff -Nru a/include/linux/pfkeyv2.h b/include/linux/pfkeyv2.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/linux/pfkeyv2.h Thu May 8 10:41:38 2003
@@ -0,0 +1,329 @@
+/* PF_KEY user interface, this is defined by rfc2367 so
+ * do not make arbitrary modifications or else this header
+ * file will not be compliant.
+ */
+
+#ifndef _LINUX_PFKEY2_H
+#define _LINUX_PFKEY2_H
+
+#include <linux/types.h>
+
+#define PF_KEY_V2 2
+#define PFKEYV2_REVISION 199806L
+
+struct sadb_msg {
+ uint8_t sadb_msg_version;
+ uint8_t sadb_msg_type;
+ uint8_t sadb_msg_errno;
+ uint8_t sadb_msg_satype;
+ uint16_t sadb_msg_len;
+ uint16_t sadb_msg_reserved;
+ uint32_t sadb_msg_seq;
+ uint32_t sadb_msg_pid;
+} __attribute__((packed));
+/* sizeof(struct sadb_msg) == 16 */
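The "sizeof == 16" comment holds because the fields pack with no padding: four bytes, two 16-bit fields, and two 32-bit fields make exactly 16 bytes, i.e. two of the 64-bit words that all PF_KEY lengths are counted in. A userspace mirror of the layout (names shortened, illustrative only):

```c
#include <stdint.h>

/* Mirror of sadb_msg: 4 x u8 + 2 x u16 + 2 x u32 = 16 bytes. */
struct toy_sadb_msg {
	uint8_t  version;
	uint8_t  type;
	uint8_t  error;
	uint8_t  satype;
	uint16_t len;		/* in 64-bit words, incl. extensions */
	uint16_t reserved;
	uint32_t seq;
	uint32_t pid;
} __attribute__((packed));
```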
+
+struct sadb_ext {
+ uint16_t sadb_ext_len;
+ uint16_t sadb_ext_type;
+} __attribute__((packed));
+/* sizeof(struct sadb_ext) == 4 */
+
+struct sadb_sa {
+ uint16_t sadb_sa_len;
+ uint16_t sadb_sa_exttype;
+ uint32_t sadb_sa_spi;
+ uint8_t sadb_sa_replay;
+ uint8_t sadb_sa_state;
+ uint8_t sadb_sa_auth;
+ uint8_t sadb_sa_encrypt;
+ uint32_t sadb_sa_flags;
+} __attribute__((packed));
+/* sizeof(struct sadb_sa) == 16 */
+
+struct sadb_lifetime {
+ uint16_t sadb_lifetime_len;
+ uint16_t sadb_lifetime_exttype;
+ uint32_t sadb_lifetime_allocations;
+ uint64_t sadb_lifetime_bytes;
+ uint64_t sadb_lifetime_addtime;
+ uint64_t sadb_lifetime_usetime;
+} __attribute__((packed));
+/* sizeof(struct sadb_lifetime) == 32 */
+
+struct sadb_address {
+ uint16_t sadb_address_len;
+ uint16_t sadb_address_exttype;
+ uint8_t sadb_address_proto;
+ uint8_t sadb_address_prefixlen;
+ uint16_t sadb_address_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_address) == 8 */
+
+struct sadb_key {
+ uint16_t sadb_key_len;
+ uint16_t sadb_key_exttype;
+ uint16_t sadb_key_bits;
+ uint16_t sadb_key_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_key) == 8 */
+
+struct sadb_ident {
+ uint16_t sadb_ident_len;
+ uint16_t sadb_ident_exttype;
+ uint16_t sadb_ident_type;
+ uint16_t sadb_ident_reserved;
+ uint64_t sadb_ident_id;
+} __attribute__((packed));
+/* sizeof(struct sadb_ident) == 16 */
+
+struct sadb_sens {
+ uint16_t sadb_sens_len;
+ uint16_t sadb_sens_exttype;
+ uint32_t sadb_sens_dpd;
+ uint8_t sadb_sens_sens_level;
+ uint8_t sadb_sens_sens_len;
+ uint8_t sadb_sens_integ_level;
+ uint8_t sadb_sens_integ_len;
+ uint32_t sadb_sens_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_sens) == 16 */
+
+/* followed by:
+ uint64_t sadb_sens_bitmap[sens_len];
+ uint64_t sadb_integ_bitmap[integ_len]; */
+
+struct sadb_prop {
+ uint16_t sadb_prop_len;
+ uint16_t sadb_prop_exttype;
+ uint8_t sadb_prop_replay;
+ uint8_t sadb_prop_reserved[3];
+} __attribute__((packed));
+/* sizeof(struct sadb_prop) == 8 */
+
+/* followed by:
+ struct sadb_comb sadb_combs[(sadb_prop_len +
+ sizeof(uint64_t) - sizeof(struct sadb_prop)) /
+ sizeof(struct sadb_comb)]; */
+
+struct sadb_comb {
+ uint8_t sadb_comb_auth;
+ uint8_t sadb_comb_encrypt;
+ uint16_t sadb_comb_flags;
+ uint16_t sadb_comb_auth_minbits;
+ uint16_t sadb_comb_auth_maxbits;
+ uint16_t sadb_comb_encrypt_minbits;
+ uint16_t sadb_comb_encrypt_maxbits;
+ uint32_t sadb_comb_reserved;
+ uint32_t sadb_comb_soft_allocations;
+ uint32_t sadb_comb_hard_allocations;
+ uint64_t sadb_comb_soft_bytes;
+ uint64_t sadb_comb_hard_bytes;
+ uint64_t sadb_comb_soft_addtime;
+ uint64_t sadb_comb_hard_addtime;
+ uint64_t sadb_comb_soft_usetime;
+ uint64_t sadb_comb_hard_usetime;
+} __attribute__((packed));
+/* sizeof(struct sadb_comb) == 72 */
+
+struct sadb_supported {
+ uint16_t sadb_supported_len;
+ uint16_t sadb_supported_exttype;
+ uint32_t sadb_supported_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_supported) == 8 */
+
+/* followed by:
+ struct sadb_alg sadb_algs[(sadb_supported_len +
+ sizeof(uint64_t) - sizeof(struct sadb_supported)) /
+ sizeof(struct sadb_alg)]; */
+
+struct sadb_alg {
+ uint8_t sadb_alg_id;
+ uint8_t sadb_alg_ivlen;
+ uint16_t sadb_alg_minbits;
+ uint16_t sadb_alg_maxbits;
+ uint16_t sadb_alg_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_alg) == 8 */
+
+struct sadb_spirange {
+ uint16_t sadb_spirange_len;
+ uint16_t sadb_spirange_exttype;
+ uint32_t sadb_spirange_min;
+ uint32_t sadb_spirange_max;
+ uint32_t sadb_spirange_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_spirange) == 16 */
+
+struct sadb_x_kmprivate {
+ uint16_t sadb_x_kmprivate_len;
+ uint16_t sadb_x_kmprivate_exttype;
+ uint32_t sadb_x_kmprivate_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_x_kmprivate) == 8 */
+
+struct sadb_x_sa2 {
+ uint16_t sadb_x_sa2_len;
+ uint16_t sadb_x_sa2_exttype;
+ uint8_t sadb_x_sa2_mode;
+ uint8_t sadb_x_sa2_reserved1;
+ uint16_t sadb_x_sa2_reserved2;
+ uint32_t sadb_x_sa2_sequence;
+ uint32_t sadb_x_sa2_reqid;
+} __attribute__((packed));
+/* sizeof(struct sadb_x_sa2) == 16 */
+
+struct sadb_x_policy {
+ uint16_t sadb_x_policy_len;
+ uint16_t sadb_x_policy_exttype;
+ uint16_t sadb_x_policy_type;
+ uint8_t sadb_x_policy_dir;
+ uint8_t sadb_x_policy_reserved;
+ uint32_t sadb_x_policy_id;
+ uint32_t sadb_x_policy_reserved2;
+} __attribute__((packed));
+/* sizeof(struct sadb_x_policy) == 16 */
+
+struct sadb_x_ipsecrequest {
+ uint16_t sadb_x_ipsecrequest_len;
+ uint16_t sadb_x_ipsecrequest_proto;
+ uint8_t sadb_x_ipsecrequest_mode;
+ uint8_t sadb_x_ipsecrequest_level;
+ uint16_t sadb_x_ipsecrequest_reqid;
+} __attribute__((packed));
+/* sizeof(struct sadb_x_ipsecrequest) == 8 */
+
+/* This defines the TYPE of NAT Traversal in use. Currently only one
+ * type of NAT-T is supported: draft-ietf-ipsec-udp-encaps-06.
+ */
+struct sadb_x_nat_t_type {
+ uint16_t sadb_x_nat_t_type_len;
+ uint16_t sadb_x_nat_t_type_exttype;
+ uint8_t sadb_x_nat_t_type_type;
+ uint8_t sadb_x_nat_t_type_reserved[3];
+} __attribute__((packed));
+/* sizeof(struct sadb_x_nat_t_type) == 8 */
+
+/* Pass a NAT Traversal port (Source or Dest port) */
+struct sadb_x_nat_t_port {
+ uint16_t sadb_x_nat_t_port_len;
+ uint16_t sadb_x_nat_t_port_exttype;
+ uint16_t sadb_x_nat_t_port_port;
+ uint16_t sadb_x_nat_t_port_reserved;
+} __attribute__((packed));
+/* sizeof(struct sadb_x_nat_t_port) == 8 */
+
+/* Message types */
+#define SADB_RESERVED 0
+#define SADB_GETSPI 1
+#define SADB_UPDATE 2
+#define SADB_ADD 3
+#define SADB_DELETE 4
+#define SADB_GET 5
+#define SADB_ACQUIRE 6
+#define SADB_REGISTER 7
+#define SADB_EXPIRE 8
+#define SADB_FLUSH 9
+#define SADB_DUMP 10
+#define SADB_X_PROMISC 11
+#define SADB_X_PCHANGE 12
+#define SADB_X_SPDUPDATE 13
+#define SADB_X_SPDADD 14
+#define SADB_X_SPDDELETE 15
+#define SADB_X_SPDGET 16
+#define SADB_X_SPDACQUIRE 17
+#define SADB_X_SPDDUMP 18
+#define SADB_X_SPDFLUSH 19
+#define SADB_X_SPDSETIDX 20
+#define SADB_X_SPDEXPIRE 21
+#define SADB_X_SPDDELETE2 22
+#define SADB_X_NAT_T_NEW_MAPPING 23
+#define SADB_MAX 23
+
+/* Security Association flags */
+#define SADB_SAFLAGS_PFS 1
+
+/* Security Association states */
+#define SADB_SASTATE_LARVAL 0
+#define SADB_SASTATE_MATURE 1
+#define SADB_SASTATE_DYING 2
+#define SADB_SASTATE_DEAD 3
+#define SADB_SASTATE_MAX 3
+
+/* Security Association types */
+#define SADB_SATYPE_UNSPEC 0
+#define SADB_SATYPE_AH 2
+#define SADB_SATYPE_ESP 3
+#define SADB_SATYPE_RSVP 5
+#define SADB_SATYPE_OSPFV2 6
+#define SADB_SATYPE_RIPV2 7
+#define SADB_SATYPE_MIP 8
+#define SADB_X_SATYPE_IPCOMP 9
+#define SADB_SATYPE_MAX 9
+
+/* Authentication algorithms */
+#define SADB_AALG_NONE 0
+#define SADB_AALG_MD5HMAC 2
+#define SADB_AALG_SHA1HMAC 3
+#define SADB_X_AALG_SHA2_256HMAC 5
+#define SADB_X_AALG_SHA2_384HMAC 6
+#define SADB_X_AALG_SHA2_512HMAC 7
+#define SADB_X_AALG_RIPEMD160HMAC 8
+#define SADB_X_AALG_NULL 251 /* kame */
+#define SADB_AALG_MAX 251
+
+/* Encryption algorithms */
+#define SADB_EALG_NONE 0
+#define SADB_EALG_DESCBC 1
+#define SADB_EALG_3DESCBC 2
+#define SADB_X_EALG_CASTCBC 6
+#define SADB_X_EALG_BLOWFISHCBC 7
+#define SADB_EALG_NULL 11
+#define SADB_X_EALG_AESCBC 12
+#define SADB_EALG_MAX 12
+
+/* Compression algorithms */
+#define SADB_X_CALG_NONE 0
+#define SADB_X_CALG_OUI 1
+#define SADB_X_CALG_DEFLATE 2
+#define SADB_X_CALG_LZS 3
+#define SADB_X_CALG_LZJH 4
+#define SADB_X_CALG_MAX 4
+
+/* Extension Header values */
+#define SADB_EXT_RESERVED 0
+#define SADB_EXT_SA 1
+#define SADB_EXT_LIFETIME_CURRENT 2
+#define SADB_EXT_LIFETIME_HARD 3
+#define SADB_EXT_LIFETIME_SOFT 4
+#define SADB_EXT_ADDRESS_SRC 5
+#define SADB_EXT_ADDRESS_DST 6
+#define SADB_EXT_ADDRESS_PROXY 7
+#define SADB_EXT_KEY_AUTH 8
+#define SADB_EXT_KEY_ENCRYPT 9
+#define SADB_EXT_IDENTITY_SRC 10
+#define SADB_EXT_IDENTITY_DST 11
+#define SADB_EXT_SENSITIVITY 12
+#define SADB_EXT_PROPOSAL 13
+#define SADB_EXT_SUPPORTED_AUTH 14
+#define SADB_EXT_SUPPORTED_ENCRYPT 15
+#define SADB_EXT_SPIRANGE 16
+#define SADB_X_EXT_KMPRIVATE 17
+#define SADB_X_EXT_POLICY 18
+#define SADB_X_EXT_SA2 19
+/* The next four entries are for setting up NAT Traversal */
+#define SADB_X_EXT_NAT_T_TYPE 20
+#define SADB_X_EXT_NAT_T_SPORT 21
+#define SADB_X_EXT_NAT_T_DPORT 22
+#define SADB_X_EXT_NAT_T_OA 23
+#define SADB_EXT_MAX 23
+
+/* Identity Extension values */
+#define SADB_IDENTTYPE_RESERVED 0
+#define SADB_IDENTTYPE_PREFIX 1
+#define SADB_IDENTTYPE_FQDN 2
+#define SADB_IDENTTYPE_USERFQDN 3
+#define SADB_IDENTTYPE_MAX 3
+
+#endif /* !(_LINUX_PFKEY2_H) */
diff -Nru a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
--- a/include/linux/rtnetlink.h Thu May 8 10:41:37 2003
+++ b/include/linux/rtnetlink.h Thu May 8 10:41:37 2003
@@ -198,10 +198,11 @@
RTA_MULTIPATH,
RTA_PROTOINFO,
RTA_FLOW,
- RTA_CACHEINFO
+ RTA_CACHEINFO,
+ RTA_SESSION,
};
-#define RTA_MAX RTA_CACHEINFO
+#define RTA_MAX RTA_SESSION
#define RTM_RTA(r) ((struct rtattr*)(((char*)(r)) + NLMSG_ALIGN(sizeof(struct rtmsg))))
#define RTM_PAYLOAD(n) NLMSG_PAYLOAD(n,sizeof(struct rtmsg))
@@ -284,6 +285,25 @@
#define RTAX_MAX RTAX_REORDERING
+struct rta_session
+{
+ __u8 proto;
+
+ union {
+ struct {
+ __u16 sport;
+ __u16 dport;
+ } ports;
+
+ struct {
+ __u8 type;
+ __u8 code;
+ __u16 ident;
+ } icmpt;
+
+ __u32 spi;
+ } u;
+};
/*********************************************************
@@ -559,7 +579,7 @@
extern struct rtnetlink_link * rtnetlink_links[NPROTO];
extern int rtnetlink_dump_ifinfo(struct sk_buff *skb, struct netlink_callback *cb);
extern int rtnetlink_send(struct sk_buff *skb, u32 pid, u32 group, int echo);
-extern int rtnetlink_put_metrics(struct sk_buff *skb, unsigned *metrics);
+extern int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics);
extern void __rta_fill(struct sk_buff *skb, int attrtype, int attrlen, const void *data);
diff -Nru a/include/linux/skbuff.h b/include/linux/skbuff.h
--- a/include/linux/skbuff.h Thu May 8 10:41:36 2003
+++ b/include/linux/skbuff.h Thu May 8 10:41:36 2003
@@ -165,7 +165,8 @@
unsigned char *raw;
} mac;
- struct dst_entry *dst;
+ struct dst_entry *dst;
+ struct sec_path *sp;
/*
* This is the control buffer. It is free to use for every
@@ -178,7 +179,7 @@
unsigned int len; /* Length of actual data */
unsigned int data_len;
unsigned int csum; /* Checksum */
- unsigned char __unused, /* Dead field, may be reused */
+ unsigned char local_df,
cloned, /* head may be cloned (check refcnt to be sure). */
pkt_type, /* Packet class */
ip_summed; /* Driver fed us an IP checksum */
@@ -755,6 +756,15 @@
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
return skb->len - skb->data_len;
+}
+
+static inline int skb_pagelen(const struct sk_buff *skb)
+{
+ int i, len = 0;
+
+ for (i = (int)skb_shinfo(skb)->nr_frags - 1; i >= 0; i--)
+ len += skb_shinfo(skb)->frags[i].size;
+ return len + skb_headlen(skb);
}
#define SKB_PAGE_ASSERT(skb) do { if (skb_shinfo(skb)->nr_frags) out_of_line_bug(); } while (0)
diff -Nru a/include/linux/sysctl.h b/include/linux/sysctl.h
--- a/include/linux/sysctl.h Thu May 8 10:41:36 2003
+++ b/include/linux/sysctl.h Thu May 8 10:41:36 2003
@@ -343,6 +343,8 @@
NET_IPV4_CONF_TAG=12,
NET_IPV4_CONF_ARPFILTER=13,
NET_IPV4_CONF_MEDIUM_ID=14,
+ NET_IPV4_CONF_NOXFRM=15,
+ NET_IPV4_CONF_NOPOLICY=16,
};
/* /proc/sys/net/ipv6 */
diff -Nru a/include/linux/udp.h b/include/linux/udp.h
--- a/include/linux/udp.h Thu May 8 10:41:37 2003
+++ b/include/linux/udp.h Thu May 8 10:41:37 2003
@@ -17,6 +17,9 @@
#ifndef _LINUX_UDP_H
#define _LINUX_UDP_H
+#include <asm/byteorder.h>
+#include <net/sock.h>
+#include <linux/ip.h>
struct udphdr {
__u16 source;
@@ -25,5 +28,11 @@
__u16 check;
};
+/* UDP socket options */
+#define UDP_CORK 1 /* Never send partially complete segments */
+#define UDP_ENCAP 100 /* Set the socket to accept encapsulated packets */
+
+/* UDP encapsulation types */
+#define UDP_ENCAP_ESPINUDP 2 /* draft-ietf-ipsec-udp-encaps-06 */
#endif /* _LINUX_UDP_H */
diff -Nru a/include/linux/xfrm.h b/include/linux/xfrm.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/linux/xfrm.h Thu May 8 10:41:38 2003
@@ -0,0 +1,217 @@
+#ifndef _LINUX_XFRM_H
+#define _LINUX_XFRM_H
+
+#include <linux/types.h>
+
+/* None of the structures in this file may change size, as they are
+ * passed into the kernel from userspace via netlink sockets.
+ */
+
+/* Structure to encapsulate addresses. I do not want to use
+ * "standard" structure. My apologies.
+ */
+typedef union
+{
+ __u32 a4;
+ __u32 a6[4];
+} xfrm_address_t;
+
+/* Ident of a specific xfrm_state. It is used on input to lookup
+ * the state by (spi,daddr,ah/esp) or to store information about
+ * spi, protocol and tunnel address on output.
+ */
+struct xfrm_id
+{
+ xfrm_address_t daddr;
+ __u32 spi;
+ __u8 proto;
+};
+
+/* Selector, used as selector both on policy rules (SPD) and SAs. */
+
+struct xfrm_selector
+{
+ xfrm_address_t daddr;
+ xfrm_address_t saddr;
+ __u16 dport;
+ __u16 dport_mask;
+ __u16 sport;
+ __u16 sport_mask;
+ __u8 prefixlen_d;
+ __u8 prefixlen_s;
+ __u8 proto;
+ int ifindex;
+ uid_t user;
+};
+
+#define XFRM_INF (~(u64)0)
+
+struct xfrm_lifetime_cfg
+{
+ __u64 soft_byte_limit;
+ __u64 hard_byte_limit;
+ __u64 soft_packet_limit;
+ __u64 hard_packet_limit;
+ __u64 soft_add_expires_seconds;
+ __u64 hard_add_expires_seconds;
+ __u64 soft_use_expires_seconds;
+ __u64 hard_use_expires_seconds;
+};
+
+struct xfrm_lifetime_cur
+{
+ __u64 bytes;
+ __u64 packets;
+ __u64 add_time;
+ __u64 use_time;
+};
+
+struct xfrm_replay_state
+{
+ __u32 oseq;
+ __u32 seq;
+ __u32 bitmap;
+};
+
+struct xfrm_algo {
+ char alg_name[64];
+ int alg_key_len; /* in bits */
+ char alg_key[0];
+};
+
+struct xfrm_stats {
+ __u32 replay_window;
+ __u32 replay;
+ __u32 integrity_failed;
+};
+
+enum
+{
+ XFRM_POLICY_IN = 0,
+ XFRM_POLICY_OUT = 1,
+ XFRM_POLICY_FWD = 2,
+ XFRM_POLICY_MAX = 3
+};
+
+enum
+{
+ XFRM_SHARE_ANY, /* No limitations */
+ XFRM_SHARE_SESSION, /* For this session only */
+ XFRM_SHARE_USER, /* For this user only */
+ XFRM_SHARE_UNIQUE /* Use once */
+};
+
+/* Netlink configuration messages. */
+#define XFRM_MSG_BASE 0x10
+
+#define XFRM_MSG_NEWSA (RTM_BASE + 0)
+#define XFRM_MSG_DELSA (RTM_BASE + 1)
+#define XFRM_MSG_GETSA (RTM_BASE + 2)
+
+#define XFRM_MSG_NEWPOLICY (RTM_BASE + 3)
+#define XFRM_MSG_DELPOLICY (RTM_BASE + 4)
+#define XFRM_MSG_GETPOLICY (RTM_BASE + 5)
+
+#define XFRM_MSG_ALLOCSPI (RTM_BASE + 6)
+#define XFRM_MSG_ACQUIRE (RTM_BASE + 7)
+#define XFRM_MSG_EXPIRE (RTM_BASE + 8)
+
+#define XFRM_MSG_MAX (XFRM_MSG_EXPIRE+1)
+
+struct xfrm_user_tmpl {
+ struct xfrm_id id;
+ xfrm_address_t saddr;
+ __u16 reqid;
+ __u8 mode;
+ __u8 share;
+ __u8 optional;
+ __u32 aalgos;
+ __u32 ealgos;
+ __u32 calgos;
+};
+
+struct xfrm_encap_tmpl {
+ __u16 encap_type;
+ __u16 encap_sport;
+ __u16 encap_dport;
+};
+
+/* Netlink message attributes. */
+enum xfrm_attr_type_t {
+ XFRMA_UNSPEC,
+ XFRMA_ALG_AUTH, /* struct xfrm_algo */
+ XFRMA_ALG_CRYPT, /* struct xfrm_algo */
+ XFRMA_ALG_COMP, /* struct xfrm_algo */
+ XFRMA_ENCAP, /* struct xfrm_algo + struct xfrm_encap_tmpl */
+ XFRMA_TMPL, /* 1 or more struct xfrm_user_tmpl */
+
+#define XFRMA_MAX XFRMA_TMPL
+};
+
+struct xfrm_usersa_info {
+ struct xfrm_selector sel;
+ struct xfrm_id id;
+ struct xfrm_lifetime_cfg lft;
+ struct xfrm_lifetime_cur curlft;
+ struct xfrm_stats stats;
+ __u32 seq;
+ __u16 family;
+ __u16 reqid;
+ __u8 mode; /* 0=transport,1=tunnel */
+ __u8 replay_window;
+};
+
+struct xfrm_usersa_id {
+ xfrm_address_t saddr;
+ __u32 spi;
+ __u16 family;
+ __u8 proto;
+};
+
+struct xfrm_userspi_info {
+ struct xfrm_usersa_info info;
+ __u32 min;
+ __u32 max;
+};
+
+struct xfrm_userpolicy_info {
+ struct xfrm_selector sel;
+ struct xfrm_lifetime_cfg lft;
+ struct xfrm_lifetime_cur curlft;
+ __u32 priority;
+ __u32 index;
+ __u16 family;
+ __u8 dir;
+ __u8 action;
+#define XFRM_POLICY_ALLOW 0
+#define XFRM_POLICY_BLOCK 1
+ __u8 flags;
+#define XFRM_POLICY_LOCALOK 1 /* Allow user to override global policy */
+ __u8 share;
+};
+
+struct xfrm_userpolicy_id {
+ struct xfrm_selector sel;
+ __u32 index;
+ __u8 dir;
+};
+
+struct xfrm_user_acquire {
+ struct xfrm_id id;
+ xfrm_address_t saddr;
+ struct xfrm_userpolicy_info policy;
+ __u32 aalgos;
+ __u32 ealgos;
+ __u32 calgos;
+ __u32 seq;
+};
+
+struct xfrm_user_expire {
+ struct xfrm_usersa_info state;
+ __u8 hard;
+};
+
+#define XFRMGRP_ACQUIRE 1
+#define XFRMGRP_EXPIRE 2
+
+#endif /* _LINUX_XFRM_H */
diff -Nru a/include/net/ah.h b/include/net/ah.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/net/ah.h Thu May 8 10:41:38 2003
@@ -0,0 +1,32 @@
+#ifndef _NET_AH_H
+#define _NET_AH_H
+
+#include <net/xfrm.h>
+
+struct ah_data
+{
+ u8 *key;
+ int key_len;
+ u8 *work_icv;
+ int icv_full_len;
+ int icv_trunc_len;
+
+ void (*icv)(struct ah_data*,
+ struct sk_buff *skb, u8 *icv);
+
+ struct crypto_tfm *tfm;
+};
+
+static inline void
+ah_hmac_digest(struct ah_data *ahp, struct sk_buff *skb, u8 *auth_data)
+{
+ struct crypto_tfm *tfm = ahp->tfm;
+
+ memset(auth_data, 0, ahp->icv_trunc_len);
+ crypto_hmac_init(tfm, ahp->key, &ahp->key_len);
+ skb_icv_walk(skb, tfm, 0, skb->len, crypto_hmac_update);
+ crypto_hmac_final(tfm, ahp->key, &ahp->key_len, ahp->work_icv);
+ memcpy(auth_data, ahp->work_icv, ahp->icv_trunc_len);
+}
+
+#endif
diff -Nru a/include/net/dn_route.h b/include/net/dn_route.h
--- a/include/net/dn_route.h Thu May 8 10:41:36 2003
+++ b/include/net/dn_route.h Thu May 8 10:41:36 2003
@@ -122,7 +122,7 @@
if ((dst = sk->dst_cache) && !dst->obsolete) {
try_again:
skb->dst = dst_clone(dst);
- dst->output(skb);
+ dst_output(skb);
return;
}
diff -Nru a/include/net/dst.h b/include/net/dst.h
--- a/include/net/dst.h Thu May 8 10:41:38 2003
+++ b/include/net/dst.h Thu May 8 10:41:38 2003
@@ -9,6 +9,7 @@
#define _NET_DST_H
#include <linux/config.h>
+#include <linux/rtnetlink.h>
#include <net/neighbour.h>
/*
@@ -22,6 +23,13 @@
#define DST_GC_INC (5*HZ)
#define DST_GC_MAX (120*HZ)
+/* Each dst_entry has a reference count and sits in some parent list(s).
+ * When it is removed from parent list, it is "freed" (dst_free).
+ * After this it enters dead state (dst->obsolete > 0) and if its refcnt
+ * is zero, it can be destroyed immediately, otherwise it is added
+ * to gc list and garbage collector periodically checks the refcnt.
+ */
+
struct sk_buff;
struct dst_entry
@@ -29,22 +37,22 @@
struct dst_entry *next;
atomic_t __refcnt; /* client references */
int __use;
+ struct dst_entry *child;
struct net_device *dev;
int obsolete;
int flags;
#define DST_HOST 1
+#define DST_NOXFRM 2
+#define DST_NOPOLICY 4
+#define DST_NOHASH 8
unsigned long lastuse;
unsigned long expires;
- unsigned mxlock;
- unsigned pmtu;
- unsigned window;
- unsigned rtt;
- unsigned rttvar;
- unsigned ssthresh;
- unsigned cwnd;
- unsigned advmss;
- unsigned reordering;
+ unsigned short header_len; /* more space at head required */
+ unsigned short trailer_len; /* space to reserve at tail */
+
+ u32 metrics[RTAX_MAX];
+ struct dst_entry *path;
unsigned long rate_last; /* rate limiting for ICMP */
unsigned long rate_tokens;
@@ -53,6 +61,7 @@
struct neighbour *neighbour;
struct hh_cache *hh;
+ struct xfrm_state *xfrm;
int (*input)(struct sk_buff*);
int (*output)(struct sk_buff*);
@@ -75,11 +84,11 @@
int (*gc)(void);
struct dst_entry * (*check)(struct dst_entry *, __u32 cookie);
- struct dst_entry * (*reroute)(struct dst_entry *,
- struct sk_buff *);
void (*destroy)(struct dst_entry *);
struct dst_entry * (*negative_advice)(struct dst_entry *);
void (*link_failure)(struct sk_buff *);
+ void (*update_pmtu)(struct dst_entry *dst, u32 mtu);
+ int (*get_mss)(struct dst_entry *dst, u32 mtu);
int entry_size;
atomic_t entries;
@@ -88,6 +97,33 @@
#ifdef __KERNEL__
+static inline u32
+dst_metric(struct dst_entry *dst, int metric)
+{
+ return dst->metrics[metric-1];
+}
+
+static inline u32
+dst_path_metric(struct dst_entry *dst, int metric)
+{
+ return dst->path->metrics[metric-1];
+}
+
+static inline u32
+dst_pmtu(struct dst_entry *dst)
+{
+ u32 mtu = dst_path_metric(dst, RTAX_MTU);
+ /* Yes, _exactly_. This is paranoia. */
+ barrier();
+ return mtu;
+}
+
+static inline int
+dst_metric_locked(struct dst_entry *dst, int metric)
+{
+ return dst_metric(dst, RTAX_LOCK) & (1<<metric);
+}
+
static inline void dst_hold(struct dst_entry * dst)
{
atomic_inc(&dst->__refcnt);
@@ -104,22 +140,40 @@
static inline
void dst_release(struct dst_entry * dst)
{
- if (dst)
+ if (dst) {
+ if (atomic_read(&dst->__refcnt) < 1) {
+ printk("BUG: dst underflow %d: %p\n",
+ atomic_read(&dst->__refcnt),
+ current_text_addr());
+ }
atomic_dec(&dst->__refcnt);
+ }
+}
+
+/* Children define the path of the packet through the
+ * Linux networking stack. Thus, destinations are stackable.
+ */
+
+static inline struct dst_entry *dst_pop(struct dst_entry *dst)
+{
+ struct dst_entry *child = dst_clone(dst->child);
+
+ dst_release(dst);
+ return child;
}
extern void * dst_alloc(struct dst_ops * ops);
extern void __dst_free(struct dst_entry * dst);
-extern void dst_destroy(struct dst_entry * dst);
+extern struct dst_entry *dst_destroy(struct dst_entry * dst);
-static inline
-void dst_free(struct dst_entry * dst)
+static inline void dst_free(struct dst_entry * dst)
{
if (dst->obsolete > 1)
return;
if (!atomic_read(&dst->__refcnt)) {
- dst_destroy(dst);
- return;
+ dst = dst_destroy(dst);
+ if (!dst)
+ return;
}
__dst_free(dst);
}
@@ -155,8 +209,42 @@
dst->expires = expires;
}
+/* Output packet to network from transport. */
+static inline int dst_output(struct sk_buff *skb)
+{
+ int err;
+
+ for (;;) {
+ err = skb->dst->output(skb);
+
+ if (likely(err == 0))
+ return err;
+ if (unlikely(err != NET_XMIT_BYPASS))
+ return err;
+ }
+}
+
+/* Input packet from network to transport. */
+static inline int dst_input(struct sk_buff *skb)
+{
+ int err;
+
+ for (;;) {
+ err = skb->dst->input(skb);
+
+ if (likely(err == 0))
+ return err;
+ /* Oh, Jamal... Seems, I will not forgive you this mess. :-) */
+ if (unlikely(err != NET_XMIT_BYPASS))
+ return err;
+ }
+}
+
extern void dst_init(void);
+struct flowi;
+extern int xfrm_lookup(struct dst_entry **dst_p, struct flowi *fl,
+ struct sock *sk, int flags);
#endif
#endif /* _NET_DST_H */
diff -Nru a/include/net/esp.h b/include/net/esp.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/net/esp.h Thu May 8 10:41:38 2003
@@ -0,0 +1,54 @@
+#ifndef _NET_ESP_H
+#define _NET_ESP_H
+
+#include <net/xfrm.h>
+
+struct esp_data
+{
+ /* Confidentiality */
+ struct {
+ u8 *key; /* Key */
+ int key_len; /* Key length */
+ u8 *ivec; /* ivec buffer */
+ /* ivlen is the offset from enc_data at which the encrypted data starts.
+ * It is logically distinct from crypto_tfm_alg_ivsize(tfm).
+ * We assume that it is either zero (no ivec), or
+ * >= crypto_tfm_alg_ivsize(tfm). */
+ int ivlen;
+ int padlen; /* 0..255 */
+ struct crypto_tfm *tfm; /* crypto handle */
+ } conf;
+
+ /* Integrity. It is active when icv_full_len != 0 */
+ struct {
+ u8 *key; /* Key */
+ int key_len; /* Length of the key */
+ u8 *work_icv;
+ int icv_full_len;
+ int icv_trunc_len;
+ void (*icv)(struct esp_data*,
+ struct sk_buff *skb,
+ int offset, int len, u8 *icv);
+ struct crypto_tfm *tfm;
+ } auth;
+};
+
+extern int skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len);
+extern int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer);
+extern void *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len);
+
+static inline void
+esp_hmac_digest(struct esp_data *esp, struct sk_buff *skb, int offset,
+ int len, u8 *auth_data)
+{
+ struct crypto_tfm *tfm = esp->auth.tfm;
+ char *icv = esp->auth.work_icv;
+
+ memset(auth_data, 0, esp->auth.icv_trunc_len);
+ crypto_hmac_init(tfm, esp->auth.key, &esp->auth.key_len);
+ skb_icv_walk(skb, tfm, offset, len, crypto_hmac_update);
+ crypto_hmac_final(tfm, esp->auth.key, &esp->auth.key_len, icv);
+ memcpy(auth_data, icv, esp->auth.icv_trunc_len);
+}
+
+#endif
diff -Nru a/include/net/flow.h b/include/net/flow.h
--- a/include/net/flow.h Thu May 8 10:41:37 2003
+++ b/include/net/flow.h Thu May 8 10:41:37 2003
@@ -1,6 +1,6 @@
/*
*
- * Flow based forwarding rules (usage: firewalling, etc)
+ * Generic internet FLOW.
*
*/
@@ -8,12 +8,16 @@
#define _NET_FLOW_H
struct flowi {
- int proto; /* {TCP, UDP, ICMP} */
+ int oif;
+ int iif;
union {
struct {
__u32 daddr;
__u32 saddr;
+ __u32 fwmark;
+ __u8 tos;
+ __u8 scope;
} ip4_u;
struct {
@@ -27,9 +31,12 @@
#define fl6_flowlabel nl_u.ip6_u.flowlabel
#define fl4_dst nl_u.ip4_u.daddr
#define fl4_src nl_u.ip4_u.saddr
+#define fl4_fwmark nl_u.ip4_u.fwmark
+#define fl4_tos nl_u.ip4_u.tos
+#define fl4_scope nl_u.ip4_u.scope
- int oif;
-
+ __u8 proto;
+ __u8 flags;
union {
struct {
__u16 sport;
@@ -41,61 +48,12 @@
__u8 code;
} icmpt;
- unsigned long data;
+ __u32 spi;
} uli_u;
+#define fl_ip_sport uli_u.ports.sport
+#define fl_ip_dport uli_u.ports.dport
+#define fl_icmp_type uli_u.icmpt.type
+#define fl_icmp_code uli_u.icmpt.code
+#define fl_ipsec_spi uli_u.spi
};
-
-#define FLOWR_NODECISION 0 /* rule not appliable to flow */
-#define FLOWR_SELECT 1 /* flow must follow this rule */
-#define FLOWR_CLEAR 2 /* priority level clears flow */
-#define FLOWR_ERROR 3
-
-struct fl_acc_args {
- int type;
-
-
-#define FL_ARG_FORWARD 1
-#define FL_ARG_ORIGIN 2
-
- union {
- struct sk_buff *skb;
- struct {
- struct sock *sk;
- struct flowi *flow;
- } fl_o;
- } fl_u;
-};
-
-
-struct pkt_filter {
- atomic_t refcnt;
- unsigned int offset;
- __u32 value;
- __u32 mask;
- struct pkt_filter *next;
-};
-
-#define FLR_INPUT 1
-#define FLR_OUTPUT 2
-
-struct flow_filter {
- int type;
- union {
- struct pkt_filter *filter;
- struct sock *sk;
- } u;
-};
-
-struct flow_rule {
- struct flow_rule_ops *ops;
- unsigned char private[0];
-};
-
-struct flow_rule_ops {
- int (*accept)(struct rt6_info *rt,
- struct rt6_info *rule,
- struct fl_acc_args *args,
- struct rt6_info **nrt);
-};
-
#endif
diff -Nru a/include/net/ip.h b/include/net/ip.h
--- a/include/net/ip.h Thu May 8 10:41:37 2003
+++ b/include/net/ip.h Thu May 8 10:41:37 2003
@@ -46,6 +46,7 @@
#define IPSKB_MASQUERADED 1
#define IPSKB_TRANSLATED 2
#define IPSKB_FORWARDED 4
+#define IPSKB_XFRM_TUNNEL_SIZE 8
};
struct ipcm_cookie
@@ -97,16 +98,19 @@
extern void ip_send_check(struct iphdr *ip);
extern int ip_queue_xmit(struct sk_buff *skb);
extern void ip_init(void);
-extern int ip_build_xmit(struct sock *sk,
- int getfrag (const void *,
- char *,
- unsigned int,
- unsigned int),
- const void *frag,
- unsigned length,
- struct ipcm_cookie *ipc,
- struct rtable *rt,
- int flags);
+extern int ip_append_data(struct sock *sk,
+ int getfrag(void *from, char *to, int offset, int len,
+ int odd, struct sk_buff *skb),
+ void *from, int len, int protolen,
+ struct ipcm_cookie *ipc,
+ struct rtable *rt,
+ unsigned int flags);
+extern int ip_generic_getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb);
+extern ssize_t ip_append_page(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
+extern int ip_push_pending_frames(struct sock *sk);
+extern void ip_flush_pending_frames(struct sock *sk);
+
/*
* Map a multicast IP onto multicast MAC for type Token Ring.
@@ -126,8 +130,7 @@
}
struct ip_reply_arg {
- struct iovec iov[2];
- int n_iov; /* redundant */
+ struct iovec iov[1];
u32 csum;
int csumoffset; /* u16 offset of csum in iov[0].iov_base */
/* -1 if not needed */
@@ -159,14 +162,6 @@
extern int sysctl_ip_default_ttl;
#ifdef CONFIG_INET
-static inline int ip_send(struct sk_buff *skb)
-{
- if (skb->len > skb->dst->pmtu)
- return ip_fragment(skb, ip_finish_output);
- else
- return ip_finish_output(skb);
-}
-
/* The function in 2.2 was invalid, producing wrong result for
* check=0xFEFF. It was noticed by Arthur Skawina _year_ ago. --ANK(000625) */
static inline
@@ -183,7 +178,7 @@
{
return (sk->protinfo.af_inet.pmtudisc == IP_PMTUDISC_DO ||
(sk->protinfo.af_inet.pmtudisc == IP_PMTUDISC_WANT &&
- !(dst->mxlock&(1<<RTAX_MTU))));
+ !(dst_metric(dst, RTAX_LOCK)&(1<<RTAX_MTU))));
}
extern void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst);
diff -Nru a/include/net/ip6_fib.h b/include/net/ip6_fib.h
--- a/include/net/ip6_fib.h Thu May 8 10:41:37 2003
+++ b/include/net/ip6_fib.h Thu May 8 10:41:37 2003
@@ -70,14 +70,6 @@
u8 rt6i_hoplimit;
atomic_t rt6i_ref;
- union {
- struct flow_rule *rt6iu_flowr;
- struct flow_filter *rt6iu_filter;
- } flow_u;
-
-#define rt6i_flowr flow_u.rt6iu_flowr
-#define rt6i_filter flow_u.rt6iu_filter
-
struct rt6key rt6i_dst;
struct rt6key rt6i_src;
diff -Nru a/include/net/ip6_fw.h b/include/net/ip6_fw.h
--- a/include/net/ip6_fw.h Thu May 8 10:41:37 2003
+++ /dev/null Wed Dec 31 16:00:00 1969
@@ -1,54 +0,0 @@
-#ifndef __NET_IP6_FW_H
-#define __NET_IP6_FW_H
-
-#define IP6_FW_LISTHEAD 0x1000
-#define IP6_FW_ACCEPT 0x0001
-#define IP6_FW_REJECT 0x0002
-
-#define IP6_FW_DEBUG 2
-
-#define IP6_FW_MSG_ADD 1
-#define IP6_FW_MSG_DEL 2
-#define IP6_FW_MSG_REPORT 3
-
-
-/*
- * Fast "hack" user interface
- */
-struct ip6_fw_msg {
- struct in6_addr dst;
- struct in6_addr src;
- int dst_len;
- int src_len;
- int action;
- int policy;
- int proto;
- union {
- struct {
- __u16 sport;
- __u16 dport;
- } transp;
-
- unsigned long data;
-
- int icmp_type;
- } u;
-
- int msg_len;
-};
-
-#ifdef __KERNEL__
-
-#include <net/flow.h>
-
-struct ip6_fw_rule {
- struct flow_rule flowr;
- struct ip6_fw_rule *next;
- struct ip6_fw_rule *prev;
- struct flowi info;
- unsigned long policy;
-};
-
-#endif
-
-#endif
diff -Nru a/include/net/ip6_route.h b/include/net/ip6_route.h
--- a/include/net/ip6_route.h Thu May 8 10:41:37 2003
+++ b/include/net/ip6_route.h Thu May 8 10:41:37 2003
@@ -60,6 +60,8 @@
struct in6_addr *saddr,
int oif, int flags);
+extern struct rt6_info *ip6_dst_alloc(void);
+
/*
* support functions for ND
*
diff -Nru a/include/net/ip_fib.h b/include/net/ip_fib.h
--- a/include/net/ip_fib.h Thu May 8 10:41:37 2003
+++ b/include/net/ip_fib.h Thu May 8 10:41:37 2003
@@ -17,6 +17,7 @@
#define _NET_IP_FIB_H
#include <linux/config.h>
+#include <net/flow.h>
struct kern_rta
{
@@ -65,7 +66,7 @@
int fib_protocol;
u32 fib_prefsrc;
u32 fib_priority;
- unsigned fib_metrics[RTAX_MAX];
+ u32 fib_metrics[RTAX_MAX];
#define fib_mtu fib_metrics[RTAX_MTU-1]
#define fib_window fib_metrics[RTAX_WINDOW-1]
#define fib_rtt fib_metrics[RTAX_RTT-1]
@@ -117,7 +118,7 @@
{
unsigned char tb_id;
unsigned tb_stamp;
- int (*tb_lookup)(struct fib_table *tb, const struct rt_key *key, struct fib_result *res);
+ int (*tb_lookup)(struct fib_table *tb, const struct flowi *flp, struct fib_result *res);
int (*tb_insert)(struct fib_table *table, struct rtmsg *r,
struct kern_rta *rta, struct nlmsghdr *n,
struct netlink_skb_parms *req);
@@ -130,7 +131,7 @@
int (*tb_get_info)(struct fib_table *table, char *buf,
int first, int count);
void (*tb_select_default)(struct fib_table *table,
- const struct rt_key *key, struct fib_result *res);
+ const struct flowi *flp, struct fib_result *res);
unsigned char tb_data[0];
};
@@ -152,18 +153,18 @@
return fib_get_table(id);
}
-static inline int fib_lookup(const struct rt_key *key, struct fib_result *res)
+static inline int fib_lookup(const struct flowi *flp, struct fib_result *res)
{
- if (local_table->tb_lookup(local_table, key, res) &&
- main_table->tb_lookup(main_table, key, res))
+ if (local_table->tb_lookup(local_table, flp, res) &&
+ main_table->tb_lookup(main_table, flp, res))
return -ENETUNREACH;
return 0;
}
-static inline void fib_select_default(const struct rt_key *key, struct fib_result *res)
+static inline void fib_select_default(const struct flowi *flp, struct fib_result *res)
{
if (FIB_RES_GW(*res) && FIB_RES_NH(*res).nh_scope == RT_SCOPE_LINK)
- main_table->tb_select_default(main_table, key, res);
+ main_table->tb_select_default(main_table, flp, res);
}
#else /* CONFIG_IP_MULTIPLE_TABLES */
@@ -171,7 +172,7 @@
#define main_table (fib_tables[RT_TABLE_MAIN])
extern struct fib_table * fib_tables[RT_TABLE_MAX+1];
-extern int fib_lookup(const struct rt_key *key, struct fib_result *res);
+extern int fib_lookup(const struct flowi *flp, struct fib_result *res);
extern struct fib_table *__fib_new_table(int id);
extern void fib_rule_put(struct fib_rule *r);
@@ -191,7 +192,7 @@
return fib_tables[id] ? : __fib_new_table(id);
}
-extern void fib_select_default(const struct rt_key *key, struct fib_result *res);
+extern void fib_select_default(const struct flowi *flp, struct fib_result *res);
#endif /* CONFIG_IP_MULTIPLE_TABLES */
@@ -204,13 +205,13 @@
extern int inet_dump_fib(struct sk_buff *skb, struct netlink_callback *cb);
extern int fib_validate_source(u32 src, u32 dst, u8 tos, int oif,
struct net_device *dev, u32 *spec_dst, u32 *itag);
-extern void fib_select_multipath(const struct rt_key *key, struct fib_result *res);
+extern void fib_select_multipath(const struct flowi *flp, struct fib_result *res);
/* Exported by fib_semantics.c */
extern int ip_fib_check_default(u32 gw, struct net_device *dev);
extern void fib_release_info(struct fib_info *);
extern int fib_semantic_match(int type, struct fib_info *,
- const struct rt_key *, struct fib_result*);
+ const struct flowi *, struct fib_result*);
extern struct fib_info *fib_create_info(const struct rtmsg *r, struct kern_rta *rta,
const struct nlmsghdr *, int *err);
extern int fib_nh_match(struct rtmsg *r, struct nlmsghdr *, struct kern_rta *rta, struct fib_info *fi);
diff -Nru a/include/net/ipip.h b/include/net/ipip.h
--- a/include/net/ipip.h Thu May 8 10:41:38 2003
+++ b/include/net/ipip.h Thu May 8 10:41:38 2003
@@ -34,7 +34,7 @@
ip_select_ident(iph, &rt->u.dst, NULL); \
ip_send_check(iph); \
\
- err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev, do_ip_send); \
+ err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev, dst_output);\
if (err == NET_XMIT_SUCCESS || err == NET_XMIT_CN) { \
stats->tx_bytes += pkt_len; \
stats->tx_packets++; \
diff -Nru a/include/net/ipv6.h b/include/net/ipv6.h
--- a/include/net/ipv6.h Thu May 8 10:41:37 2003
+++ b/include/net/ipv6.h Thu May 8 10:41:37 2003
@@ -198,11 +198,7 @@
extern int ip6_call_ra_chain(struct sk_buff *skb, int sel);
-extern int ipv6_reassembly(struct sk_buff **skb, int);
-
extern int ipv6_parse_hopopts(struct sk_buff *skb, int);
-
-extern int ipv6_parse_exthdrs(struct sk_buff **skb, int);
extern struct ipv6_txoptions * ipv6_dup_options(struct sock *sk, struct ipv6_txoptions *opt);
diff -Nru a/include/net/protocol.h b/include/net/protocol.h
--- a/include/net/protocol.h Thu May 8 10:41:37 2003
+++ b/include/net/protocol.h Thu May 8 10:41:37 2003
@@ -30,7 +30,7 @@
#include <linux/ipv6.h>
#endif
-#define MAX_INET_PROTOS 32 /* Must be a power of 2 */
+#define MAX_INET_PROTOS 256 /* Must be a power of 2 */
/* This is used to register protocols. */
@@ -38,29 +38,23 @@
{
int (*handler)(struct sk_buff *skb);
void (*err_handler)(struct sk_buff *skb, u32 info);
- struct inet_protocol *next;
- unsigned char protocol;
- unsigned char copy:1;
- void *data;
- const char *name;
+ int no_policy;
};
#if defined(CONFIG_IPV6) || defined (CONFIG_IPV6_MODULE)
struct inet6_protocol
{
- int (*handler)(struct sk_buff *skb);
+ int (*handler)(struct sk_buff **skb, unsigned int *nhoffp);
void (*err_handler)(struct sk_buff *skb,
struct inet6_skb_parm *opt,
int type, int code, int offset,
__u32 info);
- struct inet6_protocol *next;
- unsigned char protocol;
- unsigned char copy:1;
- void *data;
- const char *name;
+ unsigned int flags; /* INET6_PROTO_xxx */
};
+#define INET6_PROTO_NOPOLICY 0x1
+#define INET6_PROTO_FINAL 0x2
#endif
/* This is used to register socket interfaces for IP protocols. */
@@ -93,14 +87,14 @@
extern struct list_head inetsw6[SOCK_MAX];
#endif
-extern void inet_add_protocol(struct inet_protocol *prot);
-extern int inet_del_protocol(struct inet_protocol *prot);
+extern int inet_add_protocol(struct inet_protocol *prot, unsigned char num);
+extern int inet_del_protocol(struct inet_protocol *prot, unsigned char num);
extern void inet_register_protosw(struct inet_protosw *p);
extern void inet_unregister_protosw(struct inet_protosw *p);
#if defined(CONFIG_IPV6) || defined (CONFIG_IPV6_MODULE)
-extern void inet6_add_protocol(struct inet6_protocol *prot);
-extern int inet6_del_protocol(struct inet6_protocol *prot);
+extern int inet6_add_protocol(struct inet6_protocol *prot, unsigned char num);
+extern int inet6_del_protocol(struct inet6_protocol *prot, unsigned char num);
extern void inet6_register_protosw(struct inet_protosw *p);
extern void inet6_unregister_protosw(struct inet_protosw *p);
#endif
diff -Nru a/include/net/raw.h b/include/net/raw.h
--- a/include/net/raw.h Thu May 8 10:41:37 2003
+++ b/include/net/raw.h Thu May 8 10:41:37 2003
@@ -37,6 +37,6 @@
unsigned long raddr, unsigned long laddr,
int dif);
-extern struct sock *raw_v4_input(struct sk_buff *skb, struct iphdr *iph, int hash);
+extern void raw_v4_input(struct sk_buff *skb, struct iphdr *iph, int hash);
#endif /* _RAW_H */
diff -Nru a/include/net/rawv6.h b/include/net/rawv6.h
--- a/include/net/rawv6.h Thu May 8 10:41:37 2003
+++ b/include/net/rawv6.h Thu May 8 10:41:37 2003
@@ -7,9 +7,7 @@
extern struct sock *raw_v6_htable[RAWV6_HTABLE_SIZE];
extern rwlock_t raw_v6_lock;
-extern struct sock * ipv6_raw_deliver(struct sk_buff *skb,
- int nexthdr);
-
+extern void ipv6_raw_deliver(struct sk_buff *skb, int nexthdr);
extern struct sock *__raw_v6_lookup(struct sock *sk, unsigned short num,
struct in6_addr *loc_addr, struct in6_addr *rmt_addr);
diff -Nru a/include/net/route.h b/include/net/route.h
--- a/include/net/route.h Thu May 8 10:41:36 2003
+++ b/include/net/route.h Thu May 8 10:41:36 2003
@@ -27,6 +27,7 @@
#include <linux/config.h>
#include <net/dst.h>
#include <net/inetpeer.h>
+#include <net/flow.h>
#include <linux/in_route.h>
#include <linux/rtnetlink.h>
#include <linux/route.h>
@@ -45,19 +46,6 @@
#define RT_CONN_FLAGS(sk) (RT_TOS(sk->protinfo.af_inet.tos) | sk->localroute)
-struct rt_key
-{
- __u32 dst;
- __u32 src;
- int iif;
- int oif;
-#ifdef CONFIG_IP_ROUTE_FWMARK
- __u32 fwmark;
-#endif
- __u8 tos;
- __u8 scope;
-};
-
struct inet_peer;
struct rtable
{
@@ -78,7 +66,7 @@
__u32 rt_gateway;
/* Cache lookup keys */
- struct rt_key key;
+ struct flowi fl;
/* Miscellaneous cached information */
__u32 rt_spec_dst; /* RFC1122 specific destination */
@@ -124,10 +112,11 @@
u32 src, u8 tos, struct net_device *dev);
extern void ip_rt_advice(struct rtable **rp, int advice);
extern void rt_cache_flush(int how);
-extern int ip_route_output_key(struct rtable **, const struct rt_key *key);
+extern int __ip_route_output_key(struct rtable **, const struct flowi *flp);
+extern int ip_route_output_key(struct rtable **, struct flowi *flp);
+extern int ip_route_output_flow(struct rtable **rp, struct flowi *flp, struct sock *sk, int flags);
extern int ip_route_input(struct sk_buff*, u32 dst, u32 src, u8 tos, struct net_device *devin);
extern unsigned short ip_rt_frag_needed(struct iphdr *iph, unsigned short new_mtu);
-extern void ip_rt_update_pmtu(struct dst_entry *dst, unsigned mtu);
extern void ip_rt_send_redirect(struct sk_buff *skb);
extern unsigned inet_addr_type(u32 addr);
@@ -136,16 +125,6 @@
extern void ip_rt_get_source(u8 *src, struct rtable *rt);
extern int ip_rt_dump(struct sk_buff *skb, struct netlink_callback *cb);
-/* Deprecated: use ip_route_output_key directly */
-static inline int ip_route_output(struct rtable **rp,
- u32 daddr, u32 saddr, u32 tos, int oif)
-{
- struct rt_key key = { dst:daddr, src:saddr, oif:oif, tos:tos };
-
- return ip_route_output_key(rp, &key);
-}
-
-
static inline void ip_rt_put(struct rtable * rt)
{
if (rt)
@@ -161,17 +140,47 @@
return ip_tos2prio[IPTOS_TOS(tos)>>1];
}
-static inline int ip_route_connect(struct rtable **rp, u32 dst, u32 src, u32 tos, int oif)
-{
+static inline int ip_route_connect(struct rtable **rp, u32 dst,
+ u32 src, u32 tos, int oif, u8 protocol,
+ u16 sport, u16 dport, struct sock *sk)
+{
+ struct flowi fl = { .oif = oif,
+ .nl_u = { .ip4_u = { .daddr = dst,
+ .saddr = src,
+ .tos = tos } },
+ .proto = protocol,
+ .uli_u = { .ports =
+ { .sport = sport,
+ .dport = dport } } };
+
int err;
- err = ip_route_output(rp, dst, src, tos, oif);
- if (err || (dst && src))
- return err;
- dst = (*rp)->rt_dst;
- src = (*rp)->rt_src;
- ip_rt_put(*rp);
- *rp = NULL;
- return ip_route_output(rp, dst, src, tos, oif);
+ if (!dst || !src) {
+ err = __ip_route_output_key(rp, &fl);
+ if (err)
+ return err;
+ fl.fl4_dst = (*rp)->rt_dst;
+ fl.fl4_src = (*rp)->rt_src;
+ ip_rt_put(*rp);
+ *rp = NULL;
+ }
+ return ip_route_output_flow(rp, &fl, sk, 0);
+}
+
+static inline int ip_route_newports(struct rtable **rp, u16 sport, u16 dport,
+ struct sock *sk)
+{
+ if (sport != (*rp)->fl.fl_ip_sport ||
+ dport != (*rp)->fl.fl_ip_dport) {
+ struct flowi fl;
+
+ memcpy(&fl, &(*rp)->fl, sizeof(fl));
+ fl.fl_ip_sport = sport;
+ fl.fl_ip_dport = dport;
+ ip_rt_put(*rp);
+ *rp = NULL;
+ return ip_route_output_flow(rp, &fl, sk, 0);
+ }
+ return 0;
}
extern void rt_bind_peer(struct rtable *rt, int create);
diff -Nru a/include/net/sock.h b/include/net/sock.h
--- a/include/net/sock.h Thu May 8 10:41:37 2003
+++ b/include/net/sock.h Thu May 8 10:41:37 2003
@@ -221,7 +221,24 @@
int mc_index; /* Multicast device index */
__u32 mc_addr;
struct ip_mc_socklist *mc_list; /* Group array */
+ struct page *sndmsg_page; /* Cached page for sendmsg */
+ u32 sndmsg_off; /* Cached offset for sendmsg */
+ /*
+ * The following members are used to retain the information needed to
+ * build an IP header for each IP fragment while the socket is corked.
+ */
+ struct {
+ unsigned int flags;
+ unsigned int fragsize;
+ struct ip_options *opt;
+ struct rtable *rt;
+ int length; /* Total length of all frames */
+ u32 addr;
+ } cork;
};
+
+#define IPCORK_OPT 1 /* ip-options has been held in ipcork.opt */
+
#endif
#if defined(CONFIG_PPPOE) || defined (CONFIG_PPPOE_MODULE)
@@ -247,6 +264,14 @@
#define pppoe_relay proto.pppoe.relay
#endif
+#if defined(CONFIG_NET_KEY) || defined(CONFIG_NET_KEY_MODULE)
+struct pfkey_opt {
+ int registered;
+ int promisc;
+};
+#define pfkey_sk(__sk) ((__sk)->protinfo.pf_key)
+#endif
+
/* This defines a selective acknowledgement block. */
struct tcp_sack_block {
__u32 start_seq;
@@ -304,6 +329,7 @@
__u16 mss_cache; /* Cached effective mss, not including SACKS */
__u16 mss_clamp; /* Maximal mss, negotiated at connection setup */
__u16 ext_header_len; /* Network protocol overhead (IP/IPv6 options) */
+ __u16 ext2_header_len;/* Options depending on route */
__u8 ca_state; /* State of fast-retransmit machine */
__u8 retransmits; /* Number of unrecovered RTO timeouts. */
@@ -344,8 +370,6 @@
struct tcp_func *af_specific; /* Operations which are AF_INET{4,6} specific */
struct sk_buff *send_head; /* Front of stuff to transmit */
- struct page *sndmsg_page; /* Cached page for sendmsg */
- u32 sndmsg_off; /* Cached offset for sendmsg */
__u32 rcv_wnd; /* Current receiver window */
__u32 rcv_wup; /* rcv_nxt on last window update sent */
@@ -431,6 +455,20 @@
unsigned long last_synq_overflow;
};
+struct udp_opt {
+ int pending; /* Any pending frames ? */
+ unsigned int corkflag; /* Cork is required */
+ __u16 encap_type; /* Is this an Encapsulation socket? */
+ /*
+ * The following members retain the information needed to create a UDP
+ * header when the socket is uncorked.
+ */
+ u32 saddr; /* source address */
+ u32 daddr; /* destination address */
+ __u16 sport; /* source port */
+ __u16 dport; /* destination port */
+ __u16 len; /* total length of pending frames */
+};
/*
* This structure really needs to be cleaned up.
@@ -526,6 +564,7 @@
wait_queue_head_t *sleep; /* Sock wait queue */
struct dst_entry *dst_cache; /* Destination cache */
rwlock_t dst_lock;
+ struct xfrm_policy *policy[2];
atomic_t rmem_alloc; /* Receive queue bytes committed */
struct sk_buff_head receive_queue; /* Incoming packets */
atomic_t wmem_alloc; /* Transmit queue bytes committed */
@@ -586,6 +625,7 @@
union {
struct tcp_opt af_tcp;
+ struct udp_opt af_udp;
#if defined(CONFIG_INET) || defined (CONFIG_INET_MODULE)
struct raw_opt tp_raw4;
#endif
@@ -597,6 +637,8 @@
#endif /* CONFIG_SPX */
} tp_pinfo;
+#define tcp_sk(sk) (&(sk)->tp_pinfo.af_tcp)
+#define udp_sk(sk) (&(sk)->tp_pinfo.af_udp)
int err, err_soft; /* Soft holds errors that don't
cause failure but are the cause
@@ -667,8 +709,11 @@
#if defined(CONFIG_WAN_ROUTER) || defined(CONFIG_WAN_ROUTER_MODULE)
struct wanpipe_opt *af_wanpipe;
#endif
+#if defined(CONFIG_NET_KEY) || defined(CONFIG_NET_KEY_MODULE)
+ struct pfkey_opt *pf_key;
+#endif
} protinfo;
-
+#define inet_sk(sk) (&(sk)->protinfo.af_inet)
/* This part is used for the timeout functions. */
struct timer_list timer; /* This is the sock cleanup timer. */
@@ -732,6 +777,8 @@
int (*recvmsg)(struct sock *sk, struct msghdr *msg,
int len, int noblock, int flags,
int *addr_len);
+ int (*sendpage)(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags);
int (*bind)(struct sock *sk,
struct sockaddr *uaddr, int addr_len);
diff -Nru a/include/net/tcp.h b/include/net/tcp.h
--- a/include/net/tcp.h Thu May 8 10:41:36 2003
+++ b/include/net/tcp.h Thu May 8 10:41:36 2003
@@ -907,9 +907,12 @@
struct dst_entry *dst = __sk_dst_get(sk);
int mss_now = tp->mss_cache;
- if (dst && dst->pmtu != tp->pmtu_cookie)
- mss_now = tcp_sync_mss(sk, dst->pmtu);
-
+ if (dst) {
+ u32 mtu = dst_pmtu(dst);
+ if (mtu != tp->pmtu_cookie ||
+ tp->ext2_header_len != dst->header_len)
+ mss_now = tcp_sync_mss(sk, mtu);
+ }
if (tp->eff_sacks)
mss_now -= (TCPOLEN_SACK_BASE_ALIGNED +
(tp->eff_sacks * TCPOLEN_SACK_PERBLOCK));
diff -Nru a/include/net/transp_v6.h b/include/net/transp_v6.h
--- a/include/net/transp_v6.h Thu May 8 10:41:37 2003
+++ b/include/net/transp_v6.h Thu May 8 10:41:37 2003
@@ -15,6 +15,14 @@
struct flowi;
+/* extension headers */
+extern void ipv6_hopopts_init(void);
+extern void ipv6_rthdr_init(void);
+extern void ipv6_frag_init(void);
+extern void ipv6_nodata_init(void);
+extern void ipv6_destopt_init(void);
+
+/* transport protocols */
extern void rawv6_init(void);
extern void udpv6_init(void);
extern void tcpv6_init(void);
diff -Nru a/include/net/xfrm.h b/include/net/xfrm.h
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/include/net/xfrm.h Thu May 8 10:41:38 2003
@@ -0,0 +1,824 @@
+#ifndef _NET_XFRM_H
+#define _NET_XFRM_H
+
+#include <linux/xfrm.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <linux/in6.h>
+
+#include <net/sock.h>
+#include <net/dst.h>
+#include <net/route.h>
+#include <net/ipv6.h>
+#include <net/ip6_fib.h>
+
+#define XFRM_ALIGN8(len) (((len) + 7) & ~7)
+
+extern struct semaphore xfrm_cfg_sem;
+
+/* Organization of SPD aka "XFRM rules"
+ ------------------------------------
+
+ Basic objects:
+ - policy rule, struct xfrm_policy (=SPD entry)
+ - bundle of transformations, struct dst_entry == struct xfrm_dst (=SA bundle)
+ - instance of a transformer, struct xfrm_state (=SA)
+ - template to clone xfrm_state, struct xfrm_tmpl
+
+ SPD is a plain linear list of xfrm_policy rules, ordered by priority.
+ (To be compatible with existing pfkeyv2 implementations,
+ many rules with a priority of 0x7fffffff are allowed to exist and
+ such rules are ordered in an unpredictable way, thanks to bsd folks.)
+
+ Lookup is a plain linear search until the first match with a selector.
+
+ If "action" is "block", then we prohibit the flow, otherwise:
+ if "xfrms_nr" is zero, the flow passes untransformed. Otherwise, the
+ policy entry has a list of up to XFRM_MAX_DEPTH transformations,
+ described by templates xfrm_tmpl. Each template is resolved
+ to a complete xfrm_state (see below) and we pack the bundle of
+ transformations into a dst_entry returned to the requestor.
+
+ dst -. xfrm .-> xfrm_state #1
+ |---. child .-> dst -. xfrm .-> xfrm_state #2
+ |---. child .-> dst -. xfrm .-> xfrm_state #3
+ |---. child .-> NULL
+
+ Bundles are cached in the xfrm_policy struct (field ->bundles).
+
+
+ Resolution of xfrm_tmpl
+ -----------------------
+ Template contains:
+ 1. ->mode Mode: transport or tunnel
+ 2. ->id.proto Protocol: AH/ESP/IPCOMP
+ 3. ->id.daddr Remote tunnel endpoint, ignored for transport mode.
+ Q: allow to resolve security gateway?
+ 4. ->id.spi If not zero, static SPI.
+ 5. ->saddr Local tunnel endpoint, ignored for transport mode.
+ 6. ->algos List of allowed algos. Plain bitmask now.
+ Q: ealgos, aalgos, calgos. What a mess...
+ 7. ->share Sharing mode.
+ Q: how to implement private sharing mode? To add struct sock* to
+ flow id?
+
+ Given this template we search the SAD for entries
+ with an appropriate mode/proto/algo, permitted by the selector.
+ If no appropriate entry is found, one is requested from the key manager.
+
+ PROBLEMS:
+ Q: How to find all the bundles referring to a physical path for
+ PMTU discovery? It seems dst should contain a list of all parents...
+ and we would enter an infinite locking hierarchy disaster.
+ No! It is easier: we will not search for them, let them find us.
+ We add a genid to each dst plus a pointer to the genid of the raw IP
+ route; pmtu disc will update the pmtu on the raw IP route and increase
+ its genid. dst_check() will see this at the top level and trigger
+ resyncing of metrics. Plus, it will be done via sk->dst_cache. Solved.
+ */
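As an editorial aside: the "linear list, first selector match wins" lookup described in the comment above can be modeled in a few lines of stand-alone C. This is a hypothetical user-space sketch, not part of the patch; `struct policy`, `struct flow` and `selector_match` are crude stand-ins for `xfrm_policy`, `flowi` and `xfrm_selector_match`.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the SPD walk: policies form a linear list ordered by
 * priority, and lookup returns the first entry whose selector matches
 * the flow. A zero selector field acts as a wildcard. */
struct flow { unsigned daddr, dport; };

struct policy {
	struct policy *next;
	unsigned priority;
	unsigned sel_daddr, sel_dport;	/* 0 = wildcard, else exact match */
	int action;			/* 0 = allow, 1 = block */
};

static int selector_match(const struct policy *p, const struct flow *fl)
{
	return (!p->sel_daddr || p->sel_daddr == fl->daddr) &&
	       (!p->sel_dport || p->sel_dport == fl->dport);
}

static struct policy *spd_lookup(struct policy *head, const struct flow *fl)
{
	struct policy *p;

	for (p = head; p; p = p->next)
		if (selector_match(p, fl))
			return p;	/* first match wins */
	return NULL;
}
```

A wildcard catch-all at the tail of the list plays the role of the default policy; anything more specific must sort before it by priority.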
+
+/* Full description of state of transformer. */
+struct xfrm_state
+{
+ /* Note: bydst is re-used during gc */
+ struct list_head bydst;
+ struct list_head byspi;
+
+ atomic_t refcnt;
+ spinlock_t lock;
+
+ struct xfrm_id id;
+ struct xfrm_selector sel;
+
+ /* Key manager bits */
+ struct {
+ u8 state;
+ u8 dying;
+ u32 seq;
+ } km;
+
+ /* Parameters of this state. */
+ struct {
+ u8 mode;
+ u8 replay_window;
+ u8 aalgo, ealgo, calgo;
+ u16 reqid;
+ u16 family;
+ xfrm_address_t saddr;
+ int header_len;
+ int trailer_len;
+ } props;
+
+ struct xfrm_lifetime_cfg lft;
+
+ /* Data for transformer */
+ struct xfrm_algo *aalg;
+ struct xfrm_algo *ealg;
+ struct xfrm_algo *calg;
+
+ /* Data for encapsulator */
+ struct xfrm_encap_tmpl *encap;
+
+ /* State for replay detection */
+ struct xfrm_replay_state replay;
+
+ /* Statistics */
+ struct xfrm_stats stats;
+
+ struct xfrm_lifetime_cur curlft;
+ struct timer_list timer;
+
+ /* Reference to data common to all the instances of this
+ * transformer. */
+ struct xfrm_type *type;
+
+ /* Private data of this transformer, format is opaque,
+ * interpreted by xfrm_type methods. */
+ void *data;
+};
+
+enum {
+ XFRM_STATE_VOID,
+ XFRM_STATE_ACQ,
+ XFRM_STATE_VALID,
+ XFRM_STATE_ERROR,
+ XFRM_STATE_EXPIRED,
+ XFRM_STATE_DEAD
+};
+
+struct xfrm_type;
+struct xfrm_dst;
+struct xfrm_policy_afinfo {
+ unsigned short family;
+ rwlock_t lock;
+ struct xfrm_type_map *type_map;
+ struct dst_ops *dst_ops;
+ void (*garbage_collect)(void);
+ int (*dst_lookup)(struct xfrm_dst **dst, struct flowi *fl);
+ struct dst_entry *(*find_bundle)(struct flowi *fl, struct rtable *rt, struct xfrm_policy *policy);
+ int (*bundle_create)(struct xfrm_policy *policy,
+ struct xfrm_state **xfrm,
+ int nx,
+ struct flowi *fl,
+ struct dst_entry **dst_p);
+ void (*decode_session)(struct sk_buff *skb,
+ struct flowi *fl);
+};
+
+extern int xfrm_policy_register_afinfo(struct xfrm_policy_afinfo *afinfo);
+extern int xfrm_policy_unregister_afinfo(struct xfrm_policy_afinfo *afinfo);
+extern struct xfrm_policy_afinfo *xfrm_policy_get_afinfo(unsigned short family);
+extern void xfrm_policy_put_afinfo(struct xfrm_policy_afinfo *afinfo);
+
+#define XFRM_ACQ_EXPIRES 30
+
+struct xfrm_tmpl;
+struct xfrm_state_afinfo {
+ unsigned short family;
+ rwlock_t lock;
+ struct list_head *state_bydst;
+ struct list_head *state_byspi;
+ void (*init_tempsel)(struct xfrm_state *x, struct flowi *fl,
+ struct xfrm_tmpl *tmpl,
+ xfrm_address_t *daddr, xfrm_address_t *saddr);
+ struct xfrm_state *(*state_lookup)(xfrm_address_t *daddr, u32 spi, u8 proto);
+ struct xfrm_state *(*find_acq)(u8 mode, u16 reqid, u8 proto,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ int create);
+};
+
+extern int xfrm_state_register_afinfo(struct xfrm_state_afinfo *afinfo);
+extern int xfrm_state_unregister_afinfo(struct xfrm_state_afinfo *afinfo);
+extern struct xfrm_state_afinfo *xfrm_state_get_afinfo(unsigned short family);
+extern void xfrm_state_put_afinfo(struct xfrm_state_afinfo *afinfo);
+
+struct xfrm_decap_state;
+struct xfrm_type
+{
+ char *description;
+ struct module *owner;
+ __u8 proto;
+
+ int (*init_state)(struct xfrm_state *x, void *args);
+ void (*destructor)(struct xfrm_state *);
+ int (*input)(struct xfrm_state *, struct xfrm_decap_state *, struct sk_buff *skb);
+ int (*post_input)(struct xfrm_state *, struct xfrm_decap_state *, struct sk_buff *skb);
+ int (*output)(struct sk_buff *skb);
+ /* Estimate maximal size of result of transformation of a dgram */
+ u32 (*get_max_size)(struct xfrm_state *, int size);
+};
+
+struct xfrm_type_map {
+ rwlock_t lock;
+ struct xfrm_type *map[256];
+};
+
+extern int xfrm_register_type(struct xfrm_type *type, unsigned short family);
+extern int xfrm_unregister_type(struct xfrm_type *type, unsigned short family);
+extern struct xfrm_type *xfrm_get_type(u8 proto, unsigned short family);
+extern void xfrm_put_type(struct xfrm_type *type);
+
+struct xfrm_tmpl
+{
+/* id in template is interpreted as:
+ * daddr - destination of tunnel, may be zero for transport mode.
+ * spi - zero to acquire spi. Not zero if spi is static, then
+ * daddr must be fixed too.
+ * proto - AH/ESP/IPCOMP
+ */
+ struct xfrm_id id;
+
+/* Source address of tunnel. Ignored if it is not a tunnel. */
+ xfrm_address_t saddr;
+
+ __u16 reqid;
+
+/* Mode: transport/tunnel */
+ __u8 mode;
+
+/* Sharing mode: unique, this session only, this user only etc. */
+ __u8 share;
+
+/* May skip this transformation if no SA is found */
+ __u8 optional;
+
+/* Bit mask of algos allowed for acquisition */
+ __u32 aalgos;
+ __u32 ealgos;
+ __u32 calgos;
+};
+
+#define XFRM_MAX_DEPTH 4
+
+struct xfrm_policy
+{
+ struct xfrm_policy *next;
+
+ /* This lock only affects elements except for entry. */
+ rwlock_t lock;
+ atomic_t refcnt;
+ struct timer_list timer;
+
+ u32 priority;
+ u32 index;
+ struct xfrm_selector selector;
+ struct xfrm_lifetime_cfg lft;
+ struct xfrm_lifetime_cur curlft;
+ struct dst_entry *bundles;
+ __u16 family;
+ __u8 action;
+ __u8 flags;
+ __u8 dead;
+ __u8 xfrm_nr;
+ struct xfrm_tmpl xfrm_vec[XFRM_MAX_DEPTH];
+};
+
+struct xfrm_mgr
+{
+ struct list_head list;
+ char *id;
+ int (*notify)(struct xfrm_state *x, int event);
+ int (*acquire)(struct xfrm_state *x, struct xfrm_tmpl *, struct xfrm_policy *xp, int dir);
+ struct xfrm_policy *(*compile_policy)(u16 family, int opt, u8 *data, int len, int *dir);
+ int (*new_mapping)(struct xfrm_state *x, xfrm_address_t *ipaddr, u16 sport);
+};
+
+extern int xfrm_register_km(struct xfrm_mgr *km);
+extern int xfrm_unregister_km(struct xfrm_mgr *km);
+
+
+#define XFRM_FLOWCACHE_HASH_SIZE 1024
+
+static inline u32 __flow_hash4(struct flowi *fl)
+{
+ u32 hash = fl->fl4_src ^ fl->fl_ip_sport;
+
+ hash = ((hash & 0xF0F0F0F0) >> 4) | ((hash & 0x0F0F0F0F) << 4);
+
+ hash ^= fl->fl4_dst ^ fl->fl_ip_dport;
+ hash ^= (hash >> 10);
+ hash ^= (hash >> 20);
+ return hash & (XFRM_FLOWCACHE_HASH_SIZE-1);
+}
+
+static inline u32 __flow_hash6(struct flowi *fl)
+{
+ u32 hash = fl->fl6_src->s6_addr32[2] ^
+ fl->fl6_src->s6_addr32[3] ^
+ fl->fl_ip_sport;
+
+ hash = ((hash & 0xF0F0F0F0) >> 4) | ((hash & 0x0F0F0F0F) << 4);
+
+ hash ^= fl->fl6_dst->s6_addr32[2] ^
+ fl->fl6_dst->s6_addr32[3] ^
+ fl->fl_ip_dport;
+ hash ^= (hash >> 10);
+ hash ^= (hash >> 20);
+ return hash & (XFRM_FLOWCACHE_HASH_SIZE-1);
+}
+
+static inline u32 flow_hash(struct flowi *fl, unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __flow_hash4(fl);
+ case AF_INET6:
+ return __flow_hash6(fl);
+ }
+ return 0; /*XXX*/
+}
+
+extern struct xfrm_policy *xfrm_policy_list[XFRM_POLICY_MAX*2];
+
+static inline void xfrm_pol_hold(struct xfrm_policy *policy)
+{
+ if (policy)
+ atomic_inc(&policy->refcnt);
+}
+
+extern void __xfrm_policy_destroy(struct xfrm_policy *policy);
+
+static inline void xfrm_pol_put(struct xfrm_policy *policy)
+{
+ if (atomic_dec_and_test(&policy->refcnt))
+ __xfrm_policy_destroy(policy);
+}
+
+#define XFRM_DST_HSIZE 1024
+
+static __inline__
+unsigned __xfrm4_dst_hash(xfrm_address_t *addr)
+{
+ unsigned h;
+ h = ntohl(addr->a4);
+ h = (h ^ (h>>16)) % XFRM_DST_HSIZE;
+ return h;
+}
+
+static __inline__
+unsigned __xfrm6_dst_hash(xfrm_address_t *addr)
+{
+ unsigned h;
+ h = ntohl(addr->a6[2]^addr->a6[3]);
+ h = (h ^ (h>>16)) % XFRM_DST_HSIZE;
+ return h;
+}
+
+static __inline__
+unsigned xfrm_dst_hash(xfrm_address_t *addr, unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __xfrm4_dst_hash(addr);
+ case AF_INET6:
+ return __xfrm6_dst_hash(addr);
+ }
+ return 0;
+}
+
+static __inline__
+unsigned __xfrm4_spi_hash(xfrm_address_t *addr, u32 spi, u8 proto)
+{
+ unsigned h;
+ h = ntohl(addr->a4^spi^proto);
+ h = (h ^ (h>>10) ^ (h>>20)) % XFRM_DST_HSIZE;
+ return h;
+}
+
+static __inline__
+unsigned __xfrm6_spi_hash(xfrm_address_t *addr, u32 spi, u8 proto)
+{
+ unsigned h;
+ h = ntohl(addr->a6[2]^addr->a6[3]^spi^proto);
+ h = (h ^ (h>>10) ^ (h>>20)) % XFRM_DST_HSIZE;
+ return h;
+}
+
+static __inline__
+unsigned xfrm_spi_hash(xfrm_address_t *addr, u32 spi, u8 proto, unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __xfrm4_spi_hash(addr, spi, proto);
+ case AF_INET6:
+ return __xfrm6_spi_hash(addr, spi, proto);
+ }
+ return 0; /*XXX*/
+}
+
+extern void __xfrm_state_destroy(struct xfrm_state *);
+
+static inline void xfrm_state_put(struct xfrm_state *x)
+{
+ if (atomic_dec_and_test(&x->refcnt))
+ __xfrm_state_destroy(x);
+}
+
+static inline void xfrm_state_hold(struct xfrm_state *x)
+{
+ atomic_inc(&x->refcnt);
+}
+
+static __inline__ int addr_match(void *token1, void *token2, int prefixlen)
+{
+ __u32 *a1 = token1;
+ __u32 *a2 = token2;
+ int pdw;
+ int pbi;
+
+ pdw = prefixlen >> 5; /* num of whole __u32 in prefix */
+ pbi = prefixlen & 0x1f; /* num of bits in incomplete u32 in prefix */
+
+ if (pdw)
+ if (memcmp(a1, a2, pdw << 2))
+ return 0;
+
+ if (pbi) {
+ __u32 mask;
+
+ mask = htonl((0xffffffff) << (32 - pbi));
+
+ if ((a1[pdw] ^ a2[pdw]) & mask)
+ return 0;
+ }
+
+ return 1;
+}
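The prefix arithmetic in addr_match() above (whole 32-bit words compared first, then the leftover high bits of the partial word masked in network byte order) is easy to verify in isolation. Below is a stand-alone user-space copy with a worked example; it is an editorial illustration, not part of the patch.

```c
#include <arpa/inet.h>
#include <string.h>

/* Stand-alone copy of the addr_match() prefix test: compare the whole
 * 32-bit words covered by the prefix, then mask the remaining high
 * bits of the first partial word. Addresses are in network byte order. */
static int addr_match(const void *token1, const void *token2, int prefixlen)
{
	const unsigned *a1 = token1;
	const unsigned *a2 = token2;
	int pdw = prefixlen >> 5;	/* whole 32-bit words in prefix */
	int pbi = prefixlen & 0x1f;	/* bits in the partial word */

	if (pdw)
		if (memcmp(a1, a2, pdw << 2))
			return 0;

	if (pbi) {
		unsigned mask = htonl(0xffffffffu << (32 - pbi));

		if ((a1[pdw] ^ a2[pdw]) & mask)
			return 0;
	}
	return 1;
}
```

For example, 10.1.2.3 and 10.1.2.200 match under a /24 prefix (their first three octets agree) but not under /25, since the top bit of the last octet differs (0x03 vs 0xC8).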
+
+static inline int
+__xfrm4_selector_match(struct xfrm_selector *sel, struct flowi *fl)
+{
+ return addr_match(&fl->fl4_dst, &sel->daddr, sel->prefixlen_d) &&
+ addr_match(&fl->fl4_src, &sel->saddr, sel->prefixlen_s) &&
+ !((fl->fl_ip_dport^sel->dport)&sel->dport_mask) &&
+ !((fl->fl_ip_sport^sel->sport)&sel->sport_mask) &&
+ (fl->proto == sel->proto || !sel->proto) &&
+ (fl->oif == sel->ifindex || !sel->ifindex);
+}
+
+static inline int
+__xfrm6_selector_match(struct xfrm_selector *sel, struct flowi *fl)
+{
+ return addr_match(fl->fl6_dst, &sel->daddr, sel->prefixlen_d) &&
+ addr_match(fl->fl6_src, &sel->saddr, sel->prefixlen_s) &&
+ !((fl->fl_ip_dport^sel->dport)&sel->dport_mask) &&
+ !((fl->fl_ip_sport^sel->sport)&sel->sport_mask) &&
+ (fl->proto == sel->proto || !sel->proto) &&
+ (fl->oif == sel->ifindex || !sel->ifindex);
+}
+
+static inline int
+xfrm_selector_match(struct xfrm_selector *sel, struct flowi *fl,
+ unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __xfrm4_selector_match(sel, fl);
+ case AF_INET6:
+ return __xfrm6_selector_match(sel, fl);
+ }
+ return 0;
+}
+
+/* placeholder until xfrm6_tunnel.c is written */
+static inline int xfrm6_tunnel_check_size(struct sk_buff *skb)
+{ return 0; }
+
+/* A struct encoding a bundle of transformations to apply to some set
+ * of flows.
+ *
+ * dst->child points to the next element of the bundle.
+ * dst->xfrm points to an instance of a transformer.
+ *
+ * Due to unfortunate limitations of the current routing cache, which we
+ * have no time to fix, it mirrors struct rtable and is bound to the same
+ * routing key, including saddr and daddr. However, we can have many
+ * bundles differing by session id. All the bundles grow from a parent
+ * policy rule.
+ */
+struct xfrm_dst
+{
+ union {
+ struct xfrm_dst *next;
+ struct dst_entry dst;
+ struct rtable rt;
+ struct rt6_info rt6;
+ } u;
+};
+
+/* Decapsulation state, used by the input path to store data during
+ * the decapsulation procedure, to be used later (during the policy
+ * check).
+ */
+struct xfrm_decap_state {
+ char decap_data[20];
+ __u16 decap_type;
+};
+
+struct sec_decap_state {
+ struct xfrm_state *xvec;
+ struct xfrm_decap_state decap;
+};
+
+struct sec_path
+{
+ kmem_cache_t *pool;
+ atomic_t refcnt;
+ int len;
+ struct sec_decap_state x[XFRM_MAX_DEPTH];
+};
+
+static inline struct sec_path *
+secpath_get(struct sec_path *sp)
+{
+ if (sp)
+ atomic_inc(&sp->refcnt);
+ return sp;
+}
+
+extern void __secpath_destroy(struct sec_path *sp);
+
+static inline void
+secpath_put(struct sec_path *sp)
+{
+ if (sp && atomic_dec_and_test(&sp->refcnt))
+ __secpath_destroy(sp);
+}
+
+static inline int
+__xfrm4_state_addr_cmp(struct xfrm_tmpl *tmpl, struct xfrm_state *x)
+{
+ return (tmpl->saddr.a4 &&
+ tmpl->saddr.a4 != x->props.saddr.a4);
+}
+
+static inline int
+__xfrm6_state_addr_cmp(struct xfrm_tmpl *tmpl, struct xfrm_state *x)
+{
+ return (!ipv6_addr_any((struct in6_addr*)&tmpl->saddr) &&
+ ipv6_addr_cmp((struct in6_addr *)&tmpl->saddr, (struct in6_addr*)&x->props.saddr));
+}
+
+static inline int
+xfrm_state_addr_cmp(struct xfrm_tmpl *tmpl, struct xfrm_state *x, unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __xfrm4_state_addr_cmp(tmpl, x);
+ case AF_INET6:
+ return __xfrm6_state_addr_cmp(tmpl, x);
+ }
+ return !0;
+}
+
+extern int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb, unsigned short family);
+
+static inline int xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb, unsigned short family)
+{
+ if (sk && sk->policy[XFRM_POLICY_IN])
+ return __xfrm_policy_check(sk, dir, skb, family);
+
+ return !xfrm_policy_list[dir] ||
+ (skb->dst->flags & DST_NOPOLICY) ||
+ __xfrm_policy_check(sk, dir, skb, family);
+}
+
+static inline int xfrm4_policy_check(struct sock *sk, int dir, struct sk_buff *skb)
+{
+ return xfrm_policy_check(sk, dir, skb, AF_INET);
+}
+
+static inline int xfrm6_policy_check(struct sock *sk, int dir, struct sk_buff *skb)
+{
+ return xfrm_policy_check(sk, dir, skb, AF_INET6);
+}
+
+
+extern int __xfrm_route_forward(struct sk_buff *skb, unsigned short family);
+
+static inline int xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+{
+ return !xfrm_policy_list[XFRM_POLICY_OUT] ||
+ (skb->dst->flags & DST_NOXFRM) ||
+ __xfrm_route_forward(skb, family);
+}
+
+static inline int xfrm4_route_forward(struct sk_buff *skb)
+{
+ return xfrm_route_forward(skb, AF_INET);
+}
+
+static inline int xfrm6_route_forward(struct sk_buff *skb)
+{
+ return xfrm_route_forward(skb, AF_INET6);
+}
+
+extern int __xfrm_sk_clone_policy(struct sock *sk);
+
+static inline int xfrm_sk_clone_policy(struct sock *sk)
+{
+ if (unlikely(sk->policy[0] || sk->policy[1]))
+ return __xfrm_sk_clone_policy(sk);
+ return 0;
+}
+
+extern void __xfrm_sk_free_policy(struct xfrm_policy *, int dir);
+
+static inline void xfrm_sk_free_policy(struct sock *sk)
+{
+ if (unlikely(sk->policy[0] != NULL)) {
+ __xfrm_sk_free_policy(sk->policy[0], 0);
+ sk->policy[0] = NULL;
+ }
+ if (unlikely(sk->policy[1] != NULL)) {
+ __xfrm_sk_free_policy(sk->policy[1], 1);
+ sk->policy[1] = NULL;
+ }
+}
+
+static __inline__
+xfrm_address_t *xfrm_flowi_daddr(struct flowi *fl, unsigned short family)
+{
+ switch (family){
+ case AF_INET:
+ return (xfrm_address_t *)&fl->fl4_dst;
+ case AF_INET6:
+ return (xfrm_address_t *)fl->fl6_dst;
+ }
+ return NULL;
+}
+
+static __inline__
+xfrm_address_t *xfrm_flowi_saddr(struct flowi *fl, unsigned short family)
+{
+ switch (family){
+ case AF_INET:
+ return (xfrm_address_t *)&fl->fl4_src;
+ case AF_INET6:
+ return (xfrm_address_t *)fl->fl6_src;
+ }
+ return NULL;
+}
+
+static __inline__ int
+__xfrm4_state_addr_check(struct xfrm_state *x,
+ xfrm_address_t *daddr, xfrm_address_t *saddr)
+{
+ if (daddr->a4 == x->id.daddr.a4 &&
+ (saddr->a4 == x->props.saddr.a4 || !saddr->a4 || !x->props.saddr.a4))
+ return 1;
+ return 0;
+}
+
+static __inline__ int
+__xfrm6_state_addr_check(struct xfrm_state *x,
+ xfrm_address_t *daddr, xfrm_address_t *saddr)
+{
+ if (!ipv6_addr_cmp((struct in6_addr *)daddr, (struct in6_addr *)&x->id.daddr) &&
+ (!ipv6_addr_cmp((struct in6_addr *)saddr, (struct in6_addr *)&x->props.saddr)||
+ ipv6_addr_any((struct in6_addr *)saddr) ||
+ ipv6_addr_any((struct in6_addr *)&x->props.saddr)))
+ return 1;
+ return 0;
+}
+
+static __inline__ int
+xfrm_state_addr_check(struct xfrm_state *x,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ unsigned short family)
+{
+ switch (family) {
+ case AF_INET:
+ return __xfrm4_state_addr_check(x, daddr, saddr);
+ case AF_INET6:
+ return __xfrm6_state_addr_check(x, daddr, saddr);
+ }
+ return 0;
+}
+
+/*
+ * xfrm algorithm information
+ */
+struct xfrm_algo_auth_info {
+ u16 icv_truncbits;
+ u16 icv_fullbits;
+};
+
+struct xfrm_algo_encr_info {
+ u16 blockbits;
+ u16 defkeybits;
+};
+
+struct xfrm_algo_comp_info {
+ u16 threshold;
+};
+
+struct xfrm_algo_desc {
+ char *name;
+ u8 available:1;
+ union {
+ struct xfrm_algo_auth_info auth;
+ struct xfrm_algo_encr_info encr;
+ struct xfrm_algo_comp_info comp;
+ } uinfo;
+ struct sadb_alg desc;
+};
+
+/* XFRM tunnel handlers. */
+struct xfrm_tunnel {
+ int (*handler)(struct sk_buff *skb);
+ void (*err_handler)(struct sk_buff *skb, void *info);
+};
+
+extern void xfrm_init(void);
+extern void xfrm4_init(void);
+extern void xfrm4_fini(void);
+extern void xfrm6_init(void);
+extern void xfrm6_fini(void);
+extern void xfrm_state_init(void);
+extern void xfrm4_state_init(void);
+extern void xfrm4_state_fini(void);
+extern void xfrm6_state_init(void);
+extern void xfrm6_state_fini(void);
+
+extern int xfrm_state_walk(u8 proto, int (*func)(struct xfrm_state *, int, void*), void *);
+extern struct xfrm_state *xfrm_state_alloc(void);
+extern struct xfrm_state *xfrm_state_find(xfrm_address_t *daddr, xfrm_address_t *saddr,
+ struct flowi *fl, struct xfrm_tmpl *tmpl,
+ struct xfrm_policy *pol, int *err,
+ unsigned short family);
+extern int xfrm_state_check_expire(struct xfrm_state *x);
+extern void xfrm_state_insert(struct xfrm_state *x);
+extern int xfrm_state_check_space(struct xfrm_state *x, struct sk_buff *skb);
+extern struct xfrm_state *xfrm_state_lookup(xfrm_address_t *daddr, u32 spi, u8 proto, unsigned short family);
+extern struct xfrm_state *xfrm_find_acq_byseq(u32 seq);
+extern void xfrm_state_delete(struct xfrm_state *x);
+extern void xfrm_state_flush(u8 proto);
+extern int xfrm_replay_check(struct xfrm_state *x, u32 seq);
+extern void xfrm_replay_advance(struct xfrm_state *x, u32 seq);
+extern int xfrm_check_selectors(struct xfrm_state **x, int n, struct flowi *fl);
+extern int xfrm_check_output(struct xfrm_state *x, struct sk_buff *skb, unsigned short family);
+extern int xfrm4_rcv(struct sk_buff *skb);
+extern int xfrm4_rcv_encap(struct sk_buff *skb, __u16 encap_type);
+extern int xfrm4_tunnel_register(struct xfrm_tunnel *handler);
+extern int xfrm4_tunnel_deregister(struct xfrm_tunnel *handler);
+extern int xfrm4_tunnel_check_size(struct sk_buff *skb);
+extern int xfrm6_rcv(struct sk_buff **pskb, unsigned int *nhoffp);
+extern int xfrm6_clear_mutable_options(struct sk_buff *skb, u16 *nh_offset, int dir);
+extern int xfrm_user_policy(struct sock *sk, int optname, u8 *optval, int optlen);
+
+void xfrm_policy_init(void);
+void xfrm4_policy_init(void);
+void xfrm6_policy_init(void);
+struct xfrm_policy *xfrm_policy_alloc(int gfp);
+extern int xfrm_policy_walk(int (*func)(struct xfrm_policy *, int, int, void*), void *);
+struct xfrm_policy *xfrm_policy_lookup(int dir, struct flowi *fl, unsigned short family);
+int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl);
+struct xfrm_policy *xfrm_policy_delete(int dir, struct xfrm_selector *sel);
+struct xfrm_policy *xfrm_policy_byid(int dir, u32 id, int delete);
+void xfrm_policy_flush(void);
+u32 xfrm_get_acqseq(void);
+void xfrm_alloc_spi(struct xfrm_state *x, u32 minspi, u32 maxspi);
+struct xfrm_state * xfrm_find_acq(u8 mode, u16 reqid, u8 proto,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ int create, unsigned short family);
+extern void xfrm_policy_flush(void);
+extern void xfrm_policy_kill(struct xfrm_policy *);
+extern int xfrm_sk_policy_insert(struct sock *sk, int dir, struct xfrm_policy *pol);
+extern struct xfrm_policy *xfrm_sk_policy_lookup(struct sock *sk, int dir, struct flowi *fl);
+extern int xfrm_flush_bundles(struct xfrm_state *x);
+extern int xfrm_dst_lookup(struct xfrm_dst **dst, struct flowi *fl, unsigned short family);
+
+extern wait_queue_head_t km_waitq;
+extern void km_warn_expired(struct xfrm_state *x);
+extern void km_expired(struct xfrm_state *x);
+extern int km_query(struct xfrm_state *x, struct xfrm_tmpl *, struct xfrm_policy *pol);
+extern int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, u16 sport);
+
+extern void xfrm4_input_init(void);
+extern void xfrm6_input_init(void);
+extern int xfrm_parse_spi(struct sk_buff *skb, u8 nexthdr, u32 *spi, u32 *seq);
+
+extern void xfrm_probe_algs(void);
+extern int xfrm_count_auth_supported(void);
+extern int xfrm_count_enc_supported(void);
+extern struct xfrm_algo_desc *xfrm_aalg_get_byidx(unsigned int idx);
+extern struct xfrm_algo_desc *xfrm_ealg_get_byidx(unsigned int idx);
+extern struct xfrm_algo_desc *xfrm_calg_get_byidx(unsigned int idx);
+extern struct xfrm_algo_desc *xfrm_aalg_get_byid(int alg_id);
+extern struct xfrm_algo_desc *xfrm_ealg_get_byid(int alg_id);
+extern struct xfrm_algo_desc *xfrm_calg_get_byid(int alg_id);
+extern struct xfrm_algo_desc *xfrm_aalg_get_byname(char *name);
+extern struct xfrm_algo_desc *xfrm_ealg_get_byname(char *name);
+extern struct xfrm_algo_desc *xfrm_calg_get_byname(char *name);
+
+struct crypto_tfm;
+typedef void (icv_update_fn_t)(struct crypto_tfm *, struct scatterlist *, unsigned int);
+
+extern void skb_icv_walk(const struct sk_buff *skb, struct crypto_tfm *tfm,
+ int offset, int len, icv_update_fn_t icv_update);
+
+#endif /* _NET_XFRM_H */
diff -Nru a/lib/Config.in b/lib/Config.in
--- a/lib/Config.in Thu May 8 10:41:37 2003
+++ b/lib/Config.in Thu May 8 10:41:37 2003
@@ -9,12 +9,14 @@
#
if [ "$CONFIG_CRAMFS" = "y" -o \
"$CONFIG_PPP_DEFLATE" = "y" -o \
+ "$CONFIG_CRYPTO_DEFLATE" = "y" -o \
"$CONFIG_JFFS2_FS" = "y" -o \
"$CONFIG_ZISOFS_FS" = "y" ]; then
define_tristate CONFIG_ZLIB_INFLATE y
else
if [ "$CONFIG_CRAMFS" = "m" -o \
"$CONFIG_PPP_DEFLATE" = "m" -o \
+ "$CONFIG_CRYPTO_DEFLATE" = "m" -o \
"$CONFIG_JFFS2_FS" = "m" -o \
"$CONFIG_ZISOFS_FS" = "m" ]; then
define_tristate CONFIG_ZLIB_INFLATE m
@@ -24,10 +26,12 @@
fi
if [ "$CONFIG_PPP_DEFLATE" = "y" -o \
+ "$CONFIG_CRYPTO_DEFLATE" = "y" -o \
"$CONFIG_JFFS2_FS" = "y" ]; then
define_tristate CONFIG_ZLIB_DEFLATE y
else
if [ "$CONFIG_PPP_DEFLATE" = "m" -o \
+ "$CONFIG_CRYPTO_DEFLATE" = "m" -o \
"$CONFIG_JFFS2_FS" = "m" ]; then
define_tristate CONFIG_ZLIB_DEFLATE m
else
diff -Nru a/net/Config.in b/net/Config.in
--- a/net/Config.in Thu May 8 10:41:37 2003
+++ b/net/Config.in Thu May 8 10:41:37 2003
@@ -16,9 +16,11 @@
fi
bool 'Socket Filtering' CONFIG_FILTER
tristate 'Unix domain sockets' CONFIG_UNIX
+tristate 'PF_KEY sockets' CONFIG_NET_KEY
bool 'TCP/IP networking' CONFIG_INET
if [ "$CONFIG_INET" = "y" ]; then
source net/ipv4/Config.in
+ source net/xfrm/Config.in
if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
# IPv6 as module will cause a CRASH if you try to unload it
tristate ' The IPv6 protocol (EXPERIMENTAL)' CONFIG_IPV6
diff -Nru a/net/Makefile b/net/Makefile
--- a/net/Makefile Thu May 8 10:41:37 2003
+++ b/net/Makefile Thu May 8 10:41:37 2003
@@ -7,15 +7,15 @@
O_TARGET := network.o
-mod-subdirs := ipv4/netfilter ipv6/netfilter ipx irda bluetooth atm netlink sched core
+mod-subdirs := ipv4/netfilter ipv6/netfilter ipx irda bluetooth atm netlink sched core xfrm
export-objs := netsyms.o
subdir-y := core ethernet
-subdir-m := ipv4 # hum?
+subdir-m := ipv4 xfrm # hum?
subdir-$(CONFIG_NET) += 802 sched netlink
-subdir-$(CONFIG_INET) += ipv4
+subdir-$(CONFIG_INET) += ipv4 xfrm
subdir-$(CONFIG_NETFILTER) += ipv4/netfilter
subdir-$(CONFIG_UNIX) += unix
subdir-$(CONFIG_IPV6) += ipv6
@@ -28,6 +28,7 @@
subdir-$(CONFIG_KHTTPD) += khttpd
subdir-$(CONFIG_PACKET) += packet
+subdir-$(CONFIG_NET_KEY) += key
subdir-$(CONFIG_NET_SCHED) += sched
subdir-$(CONFIG_BRIDGE) += bridge
subdir-$(CONFIG_IPX) += ipx
diff -Nru a/net/atm/clip.c b/net/atm/clip.c
--- a/net/atm/clip.c Thu May 8 10:41:37 2003
+++ b/net/atm/clip.c Thu May 8 10:41:37 2003
@@ -509,6 +509,7 @@
struct atmarp_entry *entry;
int error;
struct clip_vcc *clip_vcc;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = ip, .tos = 1 } } };
struct rtable *rt;
if (vcc->push != clip_push) {
@@ -525,7 +526,7 @@
unlink_clip_vcc(clip_vcc);
return 0;
}
- error = ip_route_output(&rt,ip,0,1,0);
+ error = ip_route_output_key(&rt,&fl);
if (error) return error;
neigh = __neigh_lookup(&clip_tbl,&ip,rt->u.dst.dev,1);
ip_rt_put(rt);
diff -Nru a/net/core/dev.c b/net/core/dev.c
--- a/net/core/dev.c Thu May 8 10:41:37 2003
+++ b/net/core/dev.c Thu May 8 10:41:37 2003
@@ -912,6 +912,13 @@
return notifier_chain_register(&netdev_chain, nb);
}
+/* Synchronize with packet receive processing. */
+void synchronize_net(void)
+{
+ br_write_lock_bh(BR_NETPROTO_LOCK);
+ br_write_unlock_bh(BR_NETPROTO_LOCK);
+}
+
/**
* unregister_netdevice_notifier - unregister a network notifier block
* @nb: notifier
diff -Nru a/net/core/dst.c b/net/core/dst.c
--- a/net/core/dst.c Thu May 8 10:41:37 2003
+++ b/net/core/dst.c Thu May 8 10:41:37 2003
@@ -36,11 +36,11 @@
static unsigned long dst_gc_timer_expires;
static unsigned long dst_gc_timer_inc = DST_GC_MAX;
static void dst_run_gc(unsigned long);
+static void ___dst_free(struct dst_entry * dst);
static struct timer_list dst_gc_timer =
{ data: DST_GC_MIN, function: dst_run_gc };
-
static void dst_run_gc(unsigned long dummy)
{
int delayed = 0;
@@ -61,7 +61,25 @@
continue;
}
*dstp = dst->next;
- dst_destroy(dst);
+
+ dst = dst_destroy(dst);
+ if (dst) {
+ /* NOHASH and still referenced. Unless it is already
+ * on gc list, invalidate it and add to gc list.
+ *
+ * Note: this is temporary. Actually, NOHASH dst's
+ * must be obsoleted when parent is obsoleted.
+ * But we do not have state "obsoleted, but
+ * referenced by parent", so it is right.
+ */
+ if (dst->obsolete > 1)
+ continue;
+
+ ___dst_free(dst);
+ dst->next = *dstp;
+ *dstp = dst;
+ dstp = &dst->next;
+ }
}
if (!dst_garbage_list) {
dst_gc_timer_inc = DST_GC_MAX;
@@ -107,6 +125,7 @@
memset(dst, 0, ops->entry_size);
dst->ops = ops;
dst->lastuse = jiffies;
+ dst->path = dst;
dst->input = dst_discard;
dst->output = dst_blackhole;
#if RT_CACHE_DEBUG >= 2
@@ -116,10 +135,8 @@
return dst;
}
-void __dst_free(struct dst_entry * dst)
+static void ___dst_free(struct dst_entry * dst)
{
- spin_lock_bh(&dst_lock);
-
/* The first case (dev==NULL) is required, when
protocol module is unloaded.
*/
@@ -128,6 +145,12 @@
dst->output = dst_blackhole;
}
dst->obsolete = 2;
+}
+
+void __dst_free(struct dst_entry * dst)
+{
+ spin_lock_bh(&dst_lock);
+ ___dst_free(dst);
dst->next = dst_garbage_list;
dst_garbage_list = dst;
if (dst_gc_timer_inc > DST_GC_INC) {
@@ -137,14 +160,19 @@
dst_gc_timer.expires = jiffies + dst_gc_timer_expires;
add_timer(&dst_gc_timer);
}
-
spin_unlock_bh(&dst_lock);
}
-void dst_destroy(struct dst_entry * dst)
+struct dst_entry *dst_destroy(struct dst_entry * dst)
{
- struct neighbour *neigh = dst->neighbour;
- struct hh_cache *hh = dst->hh;
+ struct dst_entry *child;
+ struct neighbour *neigh;
+ struct hh_cache *hh;
+
+again:
+ neigh = dst->neighbour;
+ hh = dst->hh;
+ child = dst->child;
dst->hh = NULL;
if (hh && atomic_dec_and_test(&hh->hh_refcnt))
@@ -165,6 +193,21 @@
atomic_dec(&dst_total);
#endif
kmem_cache_free(dst->ops->kmem_cachep, dst);
+
+ dst = child;
+ if (dst) {
+ if (atomic_dec_and_test(&dst->__refcnt)) {
+ /* We were real parent of this dst, so kill child. */
+ if (dst->flags&DST_NOHASH)
+ goto again;
+ } else {
+ /* Child is still referenced, return it for freeing. */
+ if (dst->flags&DST_NOHASH)
+ return dst;
+ /* Child is still in his hash table */
+ }
+ }
+ return NULL;
}
static int dst_dev_event(struct notifier_block *this, unsigned long event, void *ptr)
diff -Nru a/net/core/netfilter.c b/net/core/netfilter.c
--- a/net/core/netfilter.c Thu May 8 10:41:37 2003
+++ b/net/core/netfilter.c Thu May 8 10:41:37 2003
@@ -563,13 +563,15 @@
{
struct iphdr *iph = (*pskb)->nh.iph;
struct rtable *rt;
- struct rt_key key = { dst:iph->daddr,
- src:iph->saddr,
- oif:(*pskb)->sk ? (*pskb)->sk->bound_dev_if : 0,
- tos:RT_TOS(iph->tos)|RTO_CONN,
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = iph->daddr,
+ .saddr = iph->saddr,
+ .tos = RT_TOS(iph->tos)|RTO_CONN,
#ifdef CONFIG_IP_ROUTE_FWMARK
- fwmark:(*pskb)->nfmark
+ .fwmark = (*pskb)->nfmark
#endif
+ } },
+ .oif = (*pskb)->sk ? (*pskb)->sk->bound_dev_if : 0,
};
struct net_device *dev_src = NULL;
int err;
@@ -578,10 +580,10 @@
0 or a local address; however some non-standard hacks like
ipt_REJECT.c:send_reset() can cause packets with foreign
saddr to appear on the NF_IP_LOCAL_OUT hook -MB */
- if(key.src && !(dev_src = ip_dev_find(key.src)))
- key.src = 0;
+ if(fl.fl4_src && !(dev_src = ip_dev_find(fl.fl4_src)))
+ fl.fl4_src = 0;
- if ((err=ip_route_output_key(&rt, &key)) != 0) {
+ if ((err=ip_route_output_key(&rt, &fl)) != 0) {
printk("route_me_harder: ip_route_output_key(dst=%u.%u.%u.%u, src=%u.%u.%u.%u, oif=%d, tos=0x%x, fwmark=0x%lx) error %d\n",
NIPQUAD(iph->daddr), NIPQUAD(iph->saddr),
(*pskb)->sk ? (*pskb)->sk->bound_dev_if : 0,
diff -Nru a/net/core/rtnetlink.c b/net/core/rtnetlink.c
--- a/net/core/rtnetlink.c Thu May 8 10:41:37 2003
+++ b/net/core/rtnetlink.c Thu May 8 10:41:37 2003
@@ -128,7 +128,7 @@
return err;
}
-int rtnetlink_put_metrics(struct sk_buff *skb, unsigned *metrics)
+int rtnetlink_put_metrics(struct sk_buff *skb, u32 *metrics)
{
struct rtattr *mx = (struct rtattr*)skb->tail;
int i;
@@ -136,7 +136,7 @@
RTA_PUT(skb, RTA_METRICS, 0, NULL);
for (i=0; i<RTAX_MAX; i++) {
if (metrics[i])
- RTA_PUT(skb, i+1, sizeof(unsigned), metrics+i);
+ RTA_PUT(skb, i+1, sizeof(u32), metrics+i);
}
mx->rta_len = skb->tail - (u8*)mx;
if (mx->rta_len == RTA_LENGTH(0))
diff -Nru a/net/core/skbuff.c b/net/core/skbuff.c
--- a/net/core/skbuff.c Thu May 8 10:41:36 2003
+++ b/net/core/skbuff.c Thu May 8 10:41:36 2003
@@ -57,6 +57,7 @@
#include <net/dst.h>
#include <net/sock.h>
#include <net/checksum.h>
+#include <net/xfrm.h>
#include <asm/uaccess.h>
#include <asm/system.h>
@@ -201,6 +202,7 @@
/* Set up other state */
skb->len = 0;
+ skb->local_df = 0;
skb->cloned = 0;
skb->data_len = 0;
@@ -232,6 +234,7 @@
skb->stamp.tv_sec=0; /* No idea about time */
skb->dev = NULL;
skb->dst = NULL;
+ skb->sp = NULL;
memset(skb->cb, 0, sizeof(skb->cb));
skb->pkt_type = PACKET_HOST; /* Default type */
skb->ip_summed = 0;
@@ -316,6 +319,9 @@
}
dst_release(skb->dst);
+#ifdef CONFIG_INET
+ secpath_put(skb->sp);
+#endif
if(skb->destructor) {
if (in_irq()) {
printk(KERN_WARNING "Warning: kfree_skb on hard IRQ %p\n",
@@ -367,10 +373,15 @@
C(mac);
C(dst);
dst_clone(n->dst);
+ C(sp);
+#ifdef CONFIG_INET
+ secpath_get(n->sp);
+#endif
memcpy(n->cb, skb->cb, sizeof(skb->cb));
C(len);
C(data_len);
C(csum);
+ C(local_df);
n->cloned = 1;
C(pkt_type);
C(ip_summed);
@@ -420,11 +431,15 @@
new->priority=old->priority;
new->protocol=old->protocol;
new->dst=dst_clone(old->dst);
+#ifdef CONFIG_INET
+ new->sp=secpath_get(old->sp);
+#endif
new->h.raw=old->h.raw+offset;
new->nh.raw=old->nh.raw+offset;
new->mac.raw=old->mac.raw+offset;
memcpy(new->cb, old->cb, sizeof(old->cb));
atomic_set(&new->users, 1);
+ new->local_df=old->local_df;
new->pkt_type=old->pkt_type;
new->stamp=old->stamp;
new->destructor = NULL;
diff -Nru a/net/decnet/dn_nsp_out.c b/net/decnet/dn_nsp_out.c
--- a/net/decnet/dn_nsp_out.c Thu May 8 10:41:38 2003
+++ b/net/decnet/dn_nsp_out.c Thu May 8 10:41:38 2003
@@ -593,7 +593,7 @@
* associations.
*/
skb->dst = dst_clone(dst);
- skb->dst->output(skb);
+ dst_output(skb);
}
diff -Nru a/net/decnet/dn_route.c b/net/decnet/dn_route.c
--- a/net/decnet/dn_route.c Thu May 8 10:41:38 2003
+++ b/net/decnet/dn_route.c Thu May 8 10:41:38 2003
@@ -100,7 +100,6 @@
static int dn_dst_gc(void);
static struct dst_entry *dn_dst_check(struct dst_entry *, __u32);
-static struct dst_entry *dn_dst_reroute(struct dst_entry *, struct sk_buff *skb);
static struct dst_entry *dn_dst_negative_advice(struct dst_entry *);
static void dn_dst_link_failure(struct sk_buff *);
static int dn_route_input(struct sk_buff *);
@@ -119,7 +118,6 @@
gc_thresh: 128,
gc: dn_dst_gc,
check: dn_dst_check,
- reroute: dn_dst_reroute,
negative_advice: dn_dst_negative_advice,
link_failure: dn_dst_link_failure,
entry_size: sizeof(struct dn_route),
@@ -202,12 +200,6 @@
return NULL;
}
-static struct dst_entry *dn_dst_reroute(struct dst_entry *dst,
- struct sk_buff *skb)
-{
- return NULL;
-}
-
/*
* This is called through sendmsg() when you specify MSG_TRYHARD
* and there is already a route in cache.
@@ -396,7 +388,7 @@
int err;
if ((err = dn_route_input(skb)) == 0)
- return skb->dst->input(skb);
+ return dst_input(skb);
if (decnet_debug_level & 4) {
char *devname = skb->dev ? skb->dev->name : "???";
@@ -1049,10 +1041,12 @@
RTA_PUT(skb, RTA_SRC, 2, &rt->rt_saddr);
if (rt->u.dst.dev)
RTA_PUT(skb, RTA_OIF, sizeof(int), &rt->u.dst.dev->ifindex);
- if (rt->u.dst.window)
- RTA_PUT(skb, RTAX_WINDOW, sizeof(unsigned), &rt->u.dst.window);
- if (rt->u.dst.rtt)
- RTA_PUT(skb, RTAX_RTT, sizeof(unsigned), &rt->u.dst.rtt);
+ if (dst_metric(&rt->u.dst, RTAX_WINDOW))
+ RTA_PUT(skb, RTAX_WINDOW, sizeof(unsigned),
+ &rt->u.dst.metrics[RTAX_WINDOW - 1]);
+ if (dst_metric(&rt->u.dst, RTAX_RTT))
+ RTA_PUT(skb, RTAX_RTT, sizeof(unsigned),
+ &rt->u.dst.metrics[RTAX_RTT - 1]);
nlh->nlmsg_len = skb->tail - b;
return skb->len;
@@ -1208,7 +1202,7 @@
dn_addr2asc(dn_ntohs(rt->rt_saddr), buf2),
atomic_read(&rt->u.dst.__refcnt),
rt->u.dst.__use,
- (int)rt->u.dst.rtt
+ (int) dst_metric(&rt->u.dst, RTAX_RTT)
);
diff -Nru a/net/ipv4/Config.in b/net/ipv4/Config.in
--- a/net/ipv4/Config.in Thu May 8 10:41:36 2003
+++ b/net/ipv4/Config.in Thu May 8 10:41:36 2003
@@ -41,6 +41,9 @@
fi
bool ' IP: TCP Explicit Congestion Notification support' CONFIG_INET_ECN
bool ' IP: TCP syncookie support (disabled per default)' CONFIG_SYN_COOKIES
+tristate ' IP: AH transformation' CONFIG_INET_AH
+tristate ' IP: ESP transformation' CONFIG_INET_ESP
+tristate ' IP: IPComp transformation' CONFIG_INET_IPCOMP
if [ "$CONFIG_NETFILTER" != "n" ]; then
source net/ipv4/netfilter/Config.in
fi
diff -Nru a/net/ipv4/Makefile b/net/ipv4/Makefile
--- a/net/ipv4/Makefile Thu May 8 10:41:37 2003
+++ b/net/ipv4/Makefile Thu May 8 10:41:37 2003
@@ -24,6 +24,11 @@
obj-$(CONFIG_NET_IPIP) += ipip.o
obj-$(CONFIG_NET_IPGRE) += ip_gre.o
obj-$(CONFIG_SYN_COOKIES) += syncookies.o
+obj-$(CONFIG_INET_AH) += ah.o
+obj-$(CONFIG_INET_ESP) += esp.o
+obj-$(CONFIG_INET_IPCOMP) += ipcomp.o
obj-$(CONFIG_IP_PNP) += ipconfig.o
+
+obj-y += xfrm4_policy.o xfrm4_state.o xfrm4_input.o xfrm4_tunnel.o
include $(TOPDIR)/Rules.make
diff -Nru a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
--- a/net/ipv4/af_inet.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/af_inet.c Thu May 8 10:41:36 2003
@@ -89,6 +89,7 @@
#include <linux/smp_lock.h>
#include <linux/inet.h>
+#include <linux/igmp.h>
#include <linux/netdevice.h>
#include <linux/brlock.h>
#include <net/ip.h>
@@ -103,6 +104,7 @@
#include <net/icmp.h>
#include <net/ipip.h>
#include <net/inet_common.h>
+#include <net/xfrm.h>
#ifdef CONFIG_IP_MROUTE
#include <linux/mroute.h>
#endif
@@ -213,6 +215,8 @@
sock_orphan(sk);
+ xfrm_sk_free_policy(sk);
+
#ifdef INET_REFCNT_DEBUG
if (atomic_read(&sk->refcnt) != 1) {
printk(KERN_DEBUG "Destruction inet %p delayed, c=%d\n", sk, atomic_read(&sk->refcnt));
@@ -724,6 +728,7 @@
sin->sin_port = sk->sport;
sin->sin_addr.s_addr = addr;
}
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
*uaddr_len = sizeof(*sin);
return(0);
}
@@ -757,6 +762,21 @@
return sk->prot->sendmsg(sk, msg, size);
}
+
+ssize_t inet_sendpage(struct socket *sock, struct page *page, int offset, size_t size, int flags)
+{
+ struct sock *sk = sock->sk;
+
+ /* We may need to bind the socket. */
+ if (!sk->num && inet_autobind(sk))
+ return -EAGAIN;
+
+ if (sk->prot->sendpage)
+ return sk->prot->sendpage(sk, page, offset, size, flags);
+ return sock_no_sendpage(sock, page, offset, size, flags);
+}
+
+
int inet_shutdown(struct socket *sock, int how)
{
struct sock *sk = sock->sk;
@@ -981,7 +1001,7 @@
sendmsg: inet_sendmsg,
recvmsg: inet_recvmsg,
mmap: sock_no_mmap,
- sendpage: sock_no_sendpage,
+ sendpage: inet_sendpage,
};
struct net_proto_family inet_family_ops = {
@@ -1100,6 +1120,27 @@
}
}
+#ifdef CONFIG_IP_MULTICAST
+static struct inet_protocol igmp_protocol = {
+ .handler = igmp_rcv,
+};
+#endif
+
+static struct inet_protocol tcp_protocol = {
+ .handler = tcp_v4_rcv,
+ .err_handler = tcp_v4_err,
+ .no_policy = 1,
+};
+
+static struct inet_protocol udp_protocol = {
+ .handler = udp_rcv,
+ .err_handler = udp_err,
+ .no_policy = 1,
+};
+
+static struct inet_protocol icmp_protocol = {
+ .handler = icmp_rcv,
+};
/*
* Called by socket.c on kernel startup.
@@ -1108,7 +1149,6 @@
static int __init inet_init(void)
{
struct sk_buff *dummy_skb;
- struct inet_protocol *p;
struct inet_protosw *q;
struct list_head *r;
@@ -1126,16 +1166,19 @@
(void) sock_register(&inet_family_ops);
/*
- * Add all the protocols.
+ * Add all the base protocols.
*/
- printk(KERN_INFO "IP Protocols: ");
- for (p = inet_protocol_base; p != NULL;) {
- struct inet_protocol *tmp = (struct inet_protocol *) p->next;
- inet_add_protocol(p);
- printk("%s%s",p->name,tmp?", ":"\n");
- p = tmp;
- }
+ if (inet_add_protocol(&icmp_protocol, IPPROTO_ICMP) < 0)
+ printk(KERN_CRIT "inet_init: Cannot add ICMP protocol\n");
+ if (inet_add_protocol(&udp_protocol, IPPROTO_UDP) < 0)
+ printk(KERN_CRIT "inet_init: Cannot add UDP protocol\n");
+ if (inet_add_protocol(&tcp_protocol, IPPROTO_TCP) < 0)
+ printk(KERN_CRIT "inet_init: Cannot add TCP protocol\n");
+#ifdef CONFIG_IP_MULTICAST
+ if (inet_add_protocol(&igmp_protocol, IPPROTO_IGMP) < 0)
+ printk(KERN_CRIT "inet_init: Cannot add IGMP protocol\n");
+#endif
/* Register the socket-side information for inet_create. */
for(r = &inetsw[0]; r < &inetsw[SOCK_MAX]; ++r)
diff -Nru a/net/ipv4/ah.c b/net/ipv4/ah.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/ah.c Thu May 8 10:41:38 2003
@@ -0,0 +1,366 @@
+#include <linux/config.h>
+#include <linux/module.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/ah.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <net/icmp.h>
+#include <asm/scatterlist.h>
+
+
+/* Clear mutable options and find final destination to substitute
+ * into IP header for icv calculation. Options are already checked
+ * for validity, so paranoia is not required. */
+
+static int ip_clear_mutable_options(struct iphdr *iph, u32 *daddr)
+{
+ unsigned char * optptr = (unsigned char*)(iph+1);
+ int l = iph->ihl*4 - sizeof(struct iphdr);
+ int optlen;
+
+ while (l > 0) {
+ switch (*optptr) {
+ case IPOPT_END:
+ return 0;
+ case IPOPT_NOOP:
+ l--;
+ optptr++;
+ continue;
+ }
+ optlen = optptr[1];
+ if (optlen<2 || optlen>l)
+ return -EINVAL;
+ switch (*optptr) {
+ case IPOPT_SEC:
+ case 0x85: /* Some "Extended Security" crap. */
+ case 0x86: /* Another "Commercial Security" crap. */
+ case IPOPT_RA:
+ case 0x80|21: /* RFC1770 */
+ break;
+ case IPOPT_LSRR:
+ case IPOPT_SSRR:
+ if (optlen < 6)
+ return -EINVAL;
+ memcpy(daddr, optptr+optlen-4, 4);
+ /* Fall through */
+ default:
+ memset(optptr+2, 0, optlen-2);
+ }
+ l -= optlen;
+ optptr += optlen;
+ }
+ return 0;
+}
+
+static int ah_output(struct sk_buff *skb)
+{
+ int err;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+ struct ip_auth_hdr *ah;
+ struct ah_data *ahp;
+ union {
+ struct iphdr iph;
+ char buf[60];
+ } tmp_iph;
+
+ if (skb->ip_summed == CHECKSUM_HW && skb_checksum_help(skb) == NULL) {
+ err = -EINVAL;
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_check_output(x, skb, AF_INET);
+ if (err)
+ goto error;
+
+ iph = skb->nh.iph;
+ if (x->props.mode) {
+ top_iph = (struct iphdr*)skb_push(skb, x->props.header_len);
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+ top_iph->tos = 0;
+ top_iph->tot_len = htons(skb->len);
+ if (!(iph->frag_off&htons(IP_DF))) {
+#ifdef NETIF_F_TSO
+ __ip_select_ident(top_iph, dst, 0);
+#else
+ __ip_select_ident(top_iph, dst);
+#endif
+ }
+ top_iph->frag_off = 0;
+ top_iph->ttl = 0;
+ top_iph->protocol = IPPROTO_AH;
+ top_iph->check = 0;
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ ah = (struct ip_auth_hdr*)(top_iph+1);
+ ah->nexthdr = IPPROTO_IPIP;
+ } else {
+ memcpy(&tmp_iph, skb->data, iph->ihl*4);
+ top_iph = (struct iphdr*)skb_push(skb, x->props.header_len);
+ memcpy(top_iph, &tmp_iph, iph->ihl*4);
+ iph = &tmp_iph.iph;
+ top_iph->tos = 0;
+ top_iph->tot_len = htons(skb->len);
+ top_iph->frag_off = 0;
+ top_iph->ttl = 0;
+ top_iph->protocol = IPPROTO_AH;
+ top_iph->check = 0;
+ if (top_iph->ihl != 5) {
+ err = ip_clear_mutable_options(top_iph, &top_iph->daddr);
+ if (err)
+ goto error;
+ }
+ ah = (struct ip_auth_hdr*)((char*)top_iph+iph->ihl*4);
+ ah->nexthdr = iph->protocol;
+ }
+ ahp = x->data;
+ ah->hdrlen = (XFRM_ALIGN8(sizeof(struct ip_auth_hdr) +
+ ahp->icv_trunc_len) >> 2) - 2;
+
+ ah->reserved = 0;
+ ah->spi = x->id.spi;
+ ah->seq_no = htonl(++x->replay.oseq);
+ ahp->icv(ahp, skb, ah->auth_data);
+ top_iph->tos = iph->tos;
+ top_iph->ttl = iph->ttl;
+ if (x->props.mode) {
+ top_iph->frag_off = iph->frag_off&~htons(IP_MF|IP_OFFSET);
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ } else {
+ top_iph->frag_off = iph->frag_off;
+ top_iph->daddr = iph->daddr;
+ if (iph->ihl != 5)
+ memcpy(top_iph+1, iph+1, iph->ihl*4 - sizeof(struct iphdr));
+ }
+ ip_send_check(top_iph);
+
+ skb->nh.raw = skb->data;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock_bh(&x->lock);
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ return NET_XMIT_BYPASS;
+
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ return err;
+}
+
+int ah_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ int ah_hlen;
+ struct iphdr *iph;
+ struct ip_auth_hdr *ah;
+ struct ah_data *ahp;
+ char work_buf[60];
+
+ if (!pskb_may_pull(skb, sizeof(struct ip_auth_hdr)))
+ goto out;
+
+ ah = (struct ip_auth_hdr*)skb->data;
+ ahp = x->data;
+ ah_hlen = (ah->hdrlen + 2) << 2;
+
+ if (ah_hlen != XFRM_ALIGN8(sizeof(struct ip_auth_hdr) + ahp->icv_full_len) &&
+ ah_hlen != XFRM_ALIGN8(sizeof(struct ip_auth_hdr) + ahp->icv_trunc_len))
+ goto out;
+
+ if (!pskb_may_pull(skb, ah_hlen))
+ goto out;
+
+ /* We are going to _remove_ AH header to keep sockets happy,
+ * so... Later this can change. */
+ if (skb_cloned(skb) &&
+ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ goto out;
+
+ skb->ip_summed = CHECKSUM_NONE;
+
+ ah = (struct ip_auth_hdr*)skb->data;
+ iph = skb->nh.iph;
+
+ memcpy(work_buf, iph, iph->ihl*4);
+
+ iph->ttl = 0;
+ iph->tos = 0;
+ iph->frag_off = 0;
+ iph->check = 0;
+ if (iph->ihl != 5) {
+ u32 dummy;
+ if (ip_clear_mutable_options(iph, &dummy))
+ goto out;
+ }
+ {
+ u8 auth_data[ahp->icv_trunc_len];
+
+ memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
+ skb_push(skb, skb->data - skb->nh.raw);
+ ahp->icv(ahp, skb, ah->auth_data);
+ if (memcmp(ah->auth_data, auth_data, ahp->icv_trunc_len)) {
+ x->stats.integrity_failed++;
+ goto out;
+ }
+ }
+ ((struct iphdr*)work_buf)->protocol = ah->nexthdr;
+ skb->nh.raw = skb_pull(skb, ah_hlen);
+ memcpy(skb->nh.raw, work_buf, iph->ihl*4);
+ skb->nh.iph->tot_len = htons(skb->len);
+ skb_pull(skb, skb->nh.iph->ihl*4);
+ skb->h.raw = skb->data;
+
+ return 0;
+
+out:
+ return -EINVAL;
+}
+
+void ah4_err(struct sk_buff *skb, u32 info)
+{
+ struct iphdr *iph = (struct iphdr*)skb->data;
+ struct ip_auth_hdr *ah = (struct ip_auth_hdr*)(skb->data+(iph->ihl<<2));
+ struct xfrm_state *x;
+
+ if (skb->h.icmph->type != ICMP_DEST_UNREACH ||
+ skb->h.icmph->code != ICMP_FRAG_NEEDED)
+ return;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, ah->spi, IPPROTO_AH, AF_INET);
+ if (!x)
+ return;
+ printk(KERN_DEBUG "pmtu discovery on SA AH/%08x/%08x\n",
+ ntohl(ah->spi), ntohl(iph->daddr));
+ xfrm_state_put(x);
+}
+
+static int ah_init_state(struct xfrm_state *x, void *args)
+{
+ struct ah_data *ahp = NULL;
+ struct xfrm_algo_desc *aalg_desc;
+
+ /* null auth can use a zero length key */
+ if (x->aalg->alg_key_len > 512)
+ goto error;
+
+ ahp = kmalloc(sizeof(*ahp), GFP_KERNEL);
+ if (ahp == NULL)
+ return -ENOMEM;
+
+ memset(ahp, 0, sizeof(*ahp));
+
+ ahp->key = x->aalg->alg_key;
+ ahp->key_len = (x->aalg->alg_key_len+7)/8;
+ ahp->tfm = crypto_alloc_tfm(x->aalg->alg_name, 0);
+ if (!ahp->tfm)
+ goto error;
+ ahp->icv = ah_hmac_digest;
+
+ /*
+ * Lookup the algorithm description maintained by xfrm_algo,
+ * verify crypto transform properties, and store information
+ * we need for AH processing. This lookup cannot fail here
+ * after a successful crypto_alloc_tfm().
+ */
+ aalg_desc = xfrm_aalg_get_byname(x->aalg->alg_name);
+ BUG_ON(!aalg_desc);
+
+ if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
+ crypto_tfm_alg_digestsize(ahp->tfm)) {
+ printk(KERN_INFO "AH: %s digestsize %u != %hu\n",
+ x->aalg->alg_name, crypto_tfm_alg_digestsize(ahp->tfm),
+ aalg_desc->uinfo.auth.icv_fullbits/8);
+ goto error;
+ }
+
+ ahp->icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
+ ahp->icv_trunc_len = aalg_desc->uinfo.auth.icv_truncbits/8;
+
+ ahp->work_icv = kmalloc(ahp->icv_full_len, GFP_KERNEL);
+ if (!ahp->work_icv)
+ goto error;
+
+ x->props.header_len = XFRM_ALIGN8(sizeof(struct ip_auth_hdr) + ahp->icv_trunc_len);
+ if (x->props.mode)
+ x->props.header_len += sizeof(struct iphdr);
+ x->data = ahp;
+
+ return 0;
+
+error:
+ if (ahp) {
+ if (ahp->work_icv)
+ kfree(ahp->work_icv);
+ if (ahp->tfm)
+ crypto_free_tfm(ahp->tfm);
+ kfree(ahp);
+ }
+ return -EINVAL;
+}
+
+static void ah_destroy(struct xfrm_state *x)
+{
+ struct ah_data *ahp = x->data;
+
+ if (ahp->work_icv) {
+ kfree(ahp->work_icv);
+ ahp->work_icv = NULL;
+ }
+ if (ahp->tfm) {
+ crypto_free_tfm(ahp->tfm);
+ ahp->tfm = NULL;
+ }
+ kfree(ahp);
+}
+
+
+static struct xfrm_type ah_type =
+{
+ .description = "AH4",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_AH,
+ .init_state = ah_init_state,
+ .destructor = ah_destroy,
+ .input = ah_input,
+ .output = ah_output
+};
+
+static struct inet_protocol ah4_protocol = {
+ .handler = xfrm4_rcv,
+ .err_handler = ah4_err,
+ .no_policy = 1,
+};
+
+static int __init ah4_init(void)
+{
+ if (xfrm_register_type(&ah_type, AF_INET) < 0) {
+ printk(KERN_INFO "ip ah init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet_add_protocol(&ah4_protocol, IPPROTO_AH) < 0) {
+ printk(KERN_INFO "ip ah init: can't add protocol\n");
+ xfrm_unregister_type(&ah_type, AF_INET);
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static void __exit ah4_fini(void)
+{
+ if (inet_del_protocol(&ah4_protocol, IPPROTO_AH) < 0)
+ printk(KERN_INFO "ip ah close: can't remove protocol\n");
+ if (xfrm_unregister_type(&ah_type, AF_INET) < 0)
+ printk(KERN_INFO "ip ah close: can't remove xfrm type\n");
+}
+
+module_init(ah4_init);
+module_exit(ah4_fini);
+MODULE_LICENSE("GPL");
diff -Nru a/net/ipv4/arp.c b/net/ipv4/arp.c
--- a/net/ipv4/arp.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/arp.c Thu May 8 10:41:37 2003
@@ -347,11 +347,13 @@
static int arp_filter(__u32 sip, __u32 tip, struct net_device *dev)
{
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = sip,
+ .saddr = tip } } };
struct rtable *rt;
int flag = 0;
/*unsigned long now; */
- if (ip_route_output(&rt, sip, tip, 0, 0) < 0)
+ if (ip_route_output_key(&rt, &fl) < 0)
return 1;
if (rt->u.dst.dev != dev) {
NET_INC_STATS_BH(ArpFilter);
@@ -505,11 +507,11 @@
*/
skb = alloc_skb(sizeof(struct arphdr)+ 2*(dev->addr_len+4)
- + dev->hard_header_len + 15, GFP_ATOMIC);
+ + LL_RESERVED_SPACE(dev), GFP_ATOMIC);
if (skb == NULL)
return;
- skb_reserve(skb, (dev->hard_header_len+15)&~15);
+ skb_reserve(skb, LL_RESERVED_SPACE(dev));
skb->nh.raw = skb->data;
arp = (struct arphdr *) skb_put(skb,sizeof(struct arphdr) + 2*(dev->addr_len+4));
skb->dev = dev;
@@ -918,8 +920,10 @@
if (r->arp_flags & ATF_PERM)
r->arp_flags |= ATF_COM;
if (dev == NULL) {
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = ip,
+ .tos = RTO_ONLINK } } };
struct rtable * rt;
- if ((err = ip_route_output(&rt, ip, 0, RTO_ONLINK, 0)) != 0)
+ if ((err = ip_route_output_key(&rt, &fl)) != 0)
return err;
dev = rt->u.dst.dev;
ip_rt_put(rt);
@@ -1001,8 +1005,10 @@
}
if (dev == NULL) {
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = ip,
+ .tos = RTO_ONLINK } } };
struct rtable * rt;
- if ((err = ip_route_output(&rt, ip, 0, RTO_ONLINK, 0)) != 0)
+ if ((err = ip_route_output_key(&rt, &fl)) != 0)
return err;
dev = rt->u.dst.dev;
ip_rt_put(rt);
diff -Nru a/net/ipv4/devinet.c b/net/ipv4/devinet.c
--- a/net/ipv4/devinet.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/devinet.c Thu May 8 10:41:37 2003
@@ -847,6 +847,8 @@
memcpy(ifa->ifa_label, dev->name, IFNAMSIZ);
inet_insert_ifa(ifa);
}
+ in_dev->cnf.no_xfrm = 1;
+ in_dev->cnf.no_policy = 1;
}
ip_mc_up(in_dev);
break;
@@ -1053,10 +1055,66 @@
return ret;
}
+static int ipv4_doint_and_flush(ctl_table *ctl, int write,
+ struct file* filp, void *buffer,
+ size_t *lenp)
+{
+ int *valp = ctl->data;
+ int val = *valp;
+ int ret = proc_dointvec(ctl, write, filp, buffer, lenp);
+
+ if (write && *valp != val)
+ rt_cache_flush(0);
+
+ return ret;
+}
+
+static int ipv4_doint_and_flush_strategy(ctl_table *table, int *name, int nlen,
+ void *oldval, size_t *oldlenp,
+ void *newval, size_t newlen,
+ void **context)
+{
+ int *valp = table->data;
+ int new;
+
+ if (!newval || !newlen)
+ return 0;
+
+ if (newlen != sizeof(int))
+ return -EINVAL;
+
+ if (get_user(new, (int *)newval))
+ return -EFAULT;
+
+ if (new == *valp)
+ return 0;
+
+ if (oldval && oldlenp) {
+ size_t len;
+
+ if (get_user(len, oldlenp))
+ return -EFAULT;
+
+ if (len) {
+ if (len > table->maxlen)
+ len = table->maxlen;
+ if (copy_to_user(oldval, valp, len))
+ return -EFAULT;
+ if (put_user(len, oldlenp))
+ return -EFAULT;
+ }
+ }
+
+ *valp = new;
+ rt_cache_flush(0);
+ return 1;
+}
+
+
static struct devinet_sysctl_table
{
struct ctl_table_header *sysctl_header;
- ctl_table devinet_vars[15];
+ ctl_table devinet_vars[17];
ctl_table devinet_dev[2];
ctl_table devinet_conf_dir[2];
ctl_table devinet_proto_dir[2];
@@ -1105,6 +1163,12 @@
{NET_IPV4_CONF_ARPFILTER, "arp_filter",
&ipv4_devconf.arp_filter, sizeof(int), 0644, NULL,
&proc_dointvec},
+ {NET_IPV4_CONF_NOXFRM, "disable_xfrm",
+ &ipv4_devconf.no_xfrm, sizeof(int), 0644, NULL,
+ &ipv4_doint_and_flush, &ipv4_doint_and_flush_strategy,},
+ {NET_IPV4_CONF_NOPOLICY, "disable_policy",
+ &ipv4_devconf.no_policy, sizeof(int), 0644, NULL,
+ &ipv4_doint_and_flush, &ipv4_doint_and_flush_strategy},
{0}},
{{NET_PROTO_CONF_ALL, "all", NULL, 0, 0555, devinet_sysctl.devinet_vars},{0}},
diff -Nru a/net/ipv4/esp.c b/net/ipv4/esp.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/esp.c Thu May 8 10:41:38 2003
@@ -0,0 +1,604 @@
+#include <linux/config.h>
+#include <linux/module.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/esp.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <linux/random.h>
+#include <net/icmp.h>
+#include <net/udp.h>
+
+#define MAX_SG_ONSTACK 4
+
+/* decapsulation data for use when post-processing */
+struct esp_decap_data {
+ xfrm_address_t saddr;
+ __u16 sport;
+ __u8 proto;
+};
+
+int esp_output(struct sk_buff *skb)
+{
+ int err;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+ struct ip_esp_hdr *esph;
+ struct crypto_tfm *tfm;
+ struct esp_data *esp;
+ struct sk_buff *trailer;
+ struct udphdr *uh = NULL;
+ struct xfrm_encap_tmpl *encap = NULL;
+ int blksize;
+ int clen;
+ int alen;
+ int nfrags;
+ union {
+ struct iphdr iph;
+ char buf[60];
+ } tmp_iph;
+
+ /* First, if the skb is not checksummed, complete checksum. */
+ if (skb->ip_summed == CHECKSUM_HW && skb_checksum_help(skb) == NULL) {
+ err = -EINVAL;
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_check_output(x, skb, AF_INET);
+ if (err)
+ goto error;
+ err = -ENOMEM;
+
+ /* Strip IP header in transport mode. Save it. */
+ if (!x->props.mode) {
+ iph = skb->nh.iph;
+ memcpy(&tmp_iph, iph, iph->ihl*4);
+ __skb_pull(skb, iph->ihl*4);
+ }
+ /* Now skb is pure payload to encrypt */
+
+ /* Round to block size */
+ clen = skb->len;
+
+ esp = x->data;
+ alen = esp->auth.icv_trunc_len;
+ tfm = esp->conf.tfm;
+ blksize = (crypto_tfm_alg_blocksize(tfm) + 3) & ~3;
+ clen = (clen + 2 + blksize-1)&~(blksize-1);
+ if (esp->conf.padlen)
+ clen = (clen + esp->conf.padlen-1)&~(esp->conf.padlen-1);
+
+ if ((nfrags = skb_cow_data(skb, clen-skb->len+alen, &trailer)) < 0)
+ goto error;
+
+ /* Fill padding... */
+ do {
+ int i;
+ for (i=0; i<clen-skb->len - 2; i++)
+ *(u8*)(trailer->tail + i) = i+1;
+ } while (0);
+ *(u8*)(trailer->tail + clen-skb->len - 2) = (clen - skb->len)-2;
+ pskb_put(skb, trailer, clen - skb->len);
+
+ encap = x->encap;
+
+ iph = skb->nh.iph;
+ if (x->props.mode) {
+ top_iph = (struct iphdr*)skb_push(skb, x->props.header_len);
+ esph = (struct ip_esp_hdr*)(top_iph+1);
+ if (encap && encap->encap_type) {
+ switch (encap->encap_type) {
+ case UDP_ENCAP_ESPINUDP:
+ uh = (struct udphdr*) esph;
+ esph = (struct ip_esp_hdr*)(uh+1);
+ top_iph->protocol = IPPROTO_UDP;
+ break;
+ default:
+ printk(KERN_INFO
+ "esp_output(): Unhandled encap: %u\n",
+ encap->encap_type);
+ top_iph->protocol = IPPROTO_ESP;
+ break;
+ }
+ } else
+ top_iph->protocol = IPPROTO_ESP;
+ *(u8*)(trailer->tail - 1) = IPPROTO_IPIP;
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+ top_iph->tos = iph->tos; /* DS disclosed */
+ top_iph->tot_len = htons(skb->len + alen);
+ top_iph->frag_off = iph->frag_off&htons(IP_DF);
+ if (!(top_iph->frag_off))
+ ip_select_ident(top_iph, dst, 0);
+ top_iph->ttl = iph->ttl; /* TTL disclosed */
+ top_iph->check = 0;
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ } else {
+ esph = (struct ip_esp_hdr*)skb_push(skb, x->props.header_len);
+ top_iph = (struct iphdr*)skb_push(skb, iph->ihl*4);
+ memcpy(top_iph, &tmp_iph, iph->ihl*4);
+ if (encap && encap->encap_type) {
+ switch (encap->encap_type) {
+ case UDP_ENCAP_ESPINUDP:
+ uh = (struct udphdr*) esph;
+ esph = (struct ip_esp_hdr*)(uh+1);
+ top_iph->protocol = IPPROTO_UDP;
+ break;
+ default:
+ printk(KERN_INFO
+ "esp_output(): Unhandled encap: %u\n",
+ encap->encap_type);
+ top_iph->protocol = IPPROTO_ESP;
+ break;
+ }
+ } else
+ top_iph->protocol = IPPROTO_ESP;
+ iph = &tmp_iph.iph;
+ top_iph->tot_len = htons(skb->len + alen);
+ top_iph->check = 0;
+ top_iph->frag_off = iph->frag_off;
+ *(u8*)(trailer->tail - 1) = iph->protocol;
+ }
+
+ /* uh is non-NULL only with UDP encapsulation */
+ if (encap && uh) {
+ uh->source = encap->encap_sport;
+ uh->dest = encap->encap_dport;
+ uh->len = htons(skb->len + alen - sizeof(struct iphdr));
+ uh->check = 0;
+ }
+
+ esph->spi = x->id.spi;
+ esph->seq_no = htonl(++x->replay.oseq);
+
+ if (esp->conf.ivlen)
+ crypto_cipher_set_iv(tfm, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+
+ do {
+ struct scatterlist sgbuf[nfrags>MAX_SG_ONSTACK ? 0 : nfrags];
+ struct scatterlist *sg = sgbuf;
+
+ if (unlikely(nfrags > MAX_SG_ONSTACK)) {
+ sg = kmalloc(sizeof(struct scatterlist)*nfrags, GFP_ATOMIC);
+ if (!sg)
+ goto error;
+ }
+ skb_to_sgvec(skb, sg, esph->enc_data+esp->conf.ivlen-skb->data, clen);
+ crypto_cipher_encrypt(tfm, sg, sg, clen);
+ if (unlikely(sg != sgbuf))
+ kfree(sg);
+ } while (0);
+
+ if (esp->conf.ivlen) {
+ memcpy(esph->enc_data, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+ crypto_cipher_get_iv(tfm, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+ }
+
+ if (esp->auth.icv_full_len) {
+ esp->auth.icv(esp, skb, (u8*)esph-skb->data,
+ sizeof(struct ip_esp_hdr) + esp->conf.ivlen+clen, trailer->tail);
+ pskb_put(skb, trailer, alen);
+ }
+
+ ip_send_check(top_iph);
+
+ skb->nh.raw = skb->data;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock_bh(&x->lock);
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ return NET_XMIT_BYPASS;
+
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ return err;
+}
+
+/*
+ * Note: detecting truncated vs. non-truncated authentication data is very
+ * expensive, so we only support truncated data, which is the recommended
+ * and common case.
+ */
+int esp_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ struct iphdr *iph;
+ struct ip_esp_hdr *esph;
+ struct esp_data *esp = x->data;
+ struct sk_buff *trailer;
+ int blksize = crypto_tfm_alg_blocksize(esp->conf.tfm);
+ int alen = esp->auth.icv_trunc_len;
+ int elen = skb->len - sizeof(struct ip_esp_hdr) - esp->conf.ivlen - alen;
+ int nfrags;
+ int encap_len = 0;
+
+ if (!pskb_may_pull(skb, sizeof(struct ip_esp_hdr)))
+ goto out;
+
+ if (elen <= 0 || (elen & (blksize-1)))
+ goto out;
+
+ /* If integrity check is required, do this. */
+ if (esp->auth.icv_full_len) {
+ u8 sum[esp->auth.icv_full_len];
+ u8 sum1[alen];
+
+ esp->auth.icv(esp, skb, 0, skb->len-alen, sum);
+
+ if (skb_copy_bits(skb, skb->len-alen, sum1, alen))
+ BUG();
+
+ if (unlikely(memcmp(sum, sum1, alen))) {
+ x->stats.integrity_failed++;
+ goto out;
+ }
+ }
+
+ if ((nfrags = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+
+ skb->ip_summed = CHECKSUM_NONE;
+
+ esph = (struct ip_esp_hdr*)skb->data;
+ iph = skb->nh.iph;
+
+ /* Get ivec. This can be wrong; check against other implementations. */
+ if (esp->conf.ivlen)
+ crypto_cipher_set_iv(esp->conf.tfm, esph->enc_data, crypto_tfm_alg_ivsize(esp->conf.tfm));
+
+ {
+ u8 nexthdr[2];
+ struct scatterlist sgbuf[nfrags>MAX_SG_ONSTACK ? 0 : nfrags];
+ struct scatterlist *sg = sgbuf;
+ u8 workbuf[60];
+ int padlen;
+
+ if (unlikely(nfrags > MAX_SG_ONSTACK)) {
+ sg = kmalloc(sizeof(struct scatterlist)*nfrags, GFP_ATOMIC);
+ if (!sg)
+ goto out;
+ }
+ skb_to_sgvec(skb, sg, sizeof(struct ip_esp_hdr) + esp->conf.ivlen, elen);
+ crypto_cipher_decrypt(esp->conf.tfm, sg, sg, elen);
+ if (unlikely(sg != sgbuf))
+ kfree(sg);
+
+ if (skb_copy_bits(skb, skb->len-alen-2, nexthdr, 2))
+ BUG();
+
+ padlen = nexthdr[0];
+ if (padlen+2 >= elen)
+ goto out;
+
+ /* ... check padding bits here. Silly. :-) */
+
+ if (x->encap && decap && decap->decap_type) {
+ struct esp_decap_data *encap_data;
+ struct udphdr *uh = (struct udphdr *) (iph+1);
+
+ encap_data = (struct esp_decap_data *) (decap->decap_data);
+ encap_data->proto = 0;
+
+ switch (decap->decap_type) {
+ case UDP_ENCAP_ESPINUDP:
+
+ if ((void*)uh == (void*)esph) {
+ printk(KERN_DEBUG
+ "esp_input(): Got ESP; expecting ESPinUDP\n");
+ break;
+ }
+
+ encap_data->proto = AF_INET;
+ encap_data->saddr.a4 = iph->saddr;
+ encap_data->sport = uh->source;
+ encap_len = (void*)esph - (void*)uh;
+ if (encap_len != sizeof(*uh))
+ printk(KERN_DEBUG
+ "esp_input(): UDP -> ESP: too much room: %d\n",
+ encap_len);
+ break;
+
+ default:
+ printk(KERN_INFO
+ "esp_input(): processing unknown encap type: %u\n",
+ decap->decap_type);
+ break;
+ }
+ }
+
+ iph->protocol = nexthdr[1];
+ pskb_trim(skb, skb->len - alen - padlen - 2);
+ memcpy(workbuf, skb->nh.raw, iph->ihl*4);
+ skb->h.raw = skb_pull(skb, sizeof(struct ip_esp_hdr) + esp->conf.ivlen);
+ skb->nh.raw += encap_len + sizeof(struct ip_esp_hdr) + esp->conf.ivlen;
+ memcpy(skb->nh.raw, workbuf, iph->ihl*4);
+ skb->nh.iph->tot_len = htons(skb->len);
+ }
+
+ return 0;
+
+out:
+ return -EINVAL;
+}
+
+int esp_post_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+
+ if (x->encap) {
+ struct xfrm_encap_tmpl *encap;
+ struct esp_decap_data *decap_data;
+
+ encap = x->encap;
+ decap_data = (struct esp_decap_data *)(decap->decap_data);
+
+ /* first, make sure that the decap type == the encap type */
+ if (encap->encap_type != decap->decap_type)
+ return -EINVAL;
+
+ /* Next, if we don't have an encap type, then ignore it */
+ if (!encap->encap_type)
+ return 0;
+
+ switch (encap->encap_type) {
+ case UDP_ENCAP_ESPINUDP:
+ /*
+ * 1) if the NAT-T peer's IP or port changed then
+ * advertise the change to the keying daemon.
+ * This is an inbound SA, so just compare
+ * SRC ports.
+ */
+ if (decap_data->proto == AF_INET &&
+ (decap_data->saddr.a4 != x->props.saddr.a4 ||
+ decap_data->sport != encap->encap_sport)) {
+ xfrm_address_t ipaddr;
+
+ ipaddr.a4 = decap_data->saddr.a4;
+ km_new_mapping(x, &ipaddr, decap_data->sport);
+
+ /* XXX: perhaps add an extra
+ * policy check here, to see
+ * if we should allow or
+ * reject a packet from a
+ * different source
+ * address/port.
+ */
+ }
+
+ /*
+ * 2) ignore UDP/TCP checksums in case
+ * of NAT-T in Transport Mode, or
+ * perform other post-processing fixes
+ * as per draft-ietf-ipsec-udp-encaps-06,
+ * section 3.1.2
+ */
+ if (!x->props.mode)
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ break;
+ default:
+ printk(KERN_INFO
+ "esp_post_input(): Unhandled encap type: %u\n",
+ encap->encap_type);
+ break;
+ }
+ }
+ return 0;
+}
+
+static u32 esp4_get_max_size(struct xfrm_state *x, int mtu)
+{
+ struct esp_data *esp = x->data;
+ u32 blksize = crypto_tfm_alg_blocksize(esp->conf.tfm);
+
+ if (x->props.mode) {
+ mtu = (mtu + 2 + blksize-1)&~(blksize-1);
+ } else {
+ /* The worst case. */
+ mtu += 2 + blksize;
+ }
+ if (esp->conf.padlen)
+ mtu = (mtu + esp->conf.padlen-1)&~(esp->conf.padlen-1);
+
+ return mtu + x->props.header_len + esp->auth.icv_trunc_len;
+}
+
+void esp4_err(struct sk_buff *skb, u32 info)
+{
+ struct iphdr *iph = (struct iphdr*)skb->data;
+ struct ip_esp_hdr *esph = (struct ip_esp_hdr*)(skb->data+(iph->ihl<<2));
+ struct xfrm_state *x;
+
+ if (skb->h.icmph->type != ICMP_DEST_UNREACH ||
+ skb->h.icmph->code != ICMP_FRAG_NEEDED)
+ return;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, esph->spi, IPPROTO_ESP, AF_INET);
+ if (!x)
+ return;
+ printk(KERN_DEBUG "pmtu discovery on SA ESP/%08x/%08x\n",
+ ntohl(esph->spi), ntohl(iph->daddr));
+ xfrm_state_put(x);
+}
+
+void esp_destroy(struct xfrm_state *x)
+{
+ struct esp_data *esp = x->data;
+
+ if (esp->conf.tfm) {
+ crypto_free_tfm(esp->conf.tfm);
+ esp->conf.tfm = NULL;
+ }
+ if (esp->conf.ivec) {
+ kfree(esp->conf.ivec);
+ esp->conf.ivec = NULL;
+ }
+ if (esp->auth.tfm) {
+ crypto_free_tfm(esp->auth.tfm);
+ esp->auth.tfm = NULL;
+ }
+ if (esp->auth.work_icv) {
+ kfree(esp->auth.work_icv);
+ esp->auth.work_icv = NULL;
+ }
+ kfree(esp);
+}
+
+int esp_init_state(struct xfrm_state *x, void *args)
+{
+ struct esp_data *esp = NULL;
+
+ /* null auth and encryption can have zero length keys */
+ if (x->aalg) {
+ if (x->aalg->alg_key_len > 512)
+ goto error;
+ }
+ if (x->ealg == NULL)
+ goto error;
+
+ esp = kmalloc(sizeof(*esp), GFP_KERNEL);
+ if (esp == NULL)
+ return -ENOMEM;
+
+ memset(esp, 0, sizeof(*esp));
+
+ if (x->aalg) {
+ struct xfrm_algo_desc *aalg_desc;
+
+ esp->auth.key = x->aalg->alg_key;
+ esp->auth.key_len = (x->aalg->alg_key_len+7)/8;
+ esp->auth.tfm = crypto_alloc_tfm(x->aalg->alg_name, 0);
+ if (esp->auth.tfm == NULL)
+ goto error;
+ esp->auth.icv = esp_hmac_digest;
+
+ aalg_desc = xfrm_aalg_get_byname(x->aalg->alg_name);
+ BUG_ON(!aalg_desc);
+
+ if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
+ crypto_tfm_alg_digestsize(esp->auth.tfm)) {
+ printk(KERN_INFO "ESP: %s digestsize %u != %hu\n",
+ x->aalg->alg_name,
+ crypto_tfm_alg_digestsize(esp->auth.tfm),
+ aalg_desc->uinfo.auth.icv_fullbits/8);
+ goto error;
+ }
+
+ esp->auth.icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
+ esp->auth.icv_trunc_len = aalg_desc->uinfo.auth.icv_truncbits/8;
+
+ esp->auth.work_icv = kmalloc(esp->auth.icv_full_len, GFP_KERNEL);
+ if (!esp->auth.work_icv)
+ goto error;
+ }
+ esp->conf.key = x->ealg->alg_key;
+ esp->conf.key_len = (x->ealg->alg_key_len+7)/8;
+ esp->conf.tfm = crypto_alloc_tfm(x->ealg->alg_name, CRYPTO_TFM_MODE_CBC);
+ if (esp->conf.tfm == NULL)
+ goto error;
+ esp->conf.ivlen = crypto_tfm_alg_ivsize(esp->conf.tfm);
+ esp->conf.padlen = 0;
+ if (esp->conf.ivlen) {
+ esp->conf.ivec = kmalloc(esp->conf.ivlen, GFP_KERNEL);
+ get_random_bytes(esp->conf.ivec, esp->conf.ivlen);
+ }
+ crypto_cipher_setkey(esp->conf.tfm, esp->conf.key, esp->conf.key_len);
+ x->props.header_len = sizeof(struct ip_esp_hdr) + esp->conf.ivlen;
+ if (x->props.mode)
+ x->props.header_len += sizeof(struct iphdr);
+ if (x->encap) {
+ struct xfrm_encap_tmpl *encap = x->encap;
+
+ if (encap->encap_type) {
+ switch (encap->encap_type) {
+ case UDP_ENCAP_ESPINUDP:
+ x->props.header_len += sizeof(struct udphdr);
+ break;
+ default:
+ printk(KERN_INFO
+ "esp_init_state(): Unhandled encap type: %u\n",
+ encap->encap_type);
+ break;
+ }
+ }
+ }
+ x->data = esp;
+ x->props.trailer_len = esp4_get_max_size(x, 0) - x->props.header_len;
+ return 0;
+
+error:
+ if (esp) {
+ if (esp->auth.tfm)
+ crypto_free_tfm(esp->auth.tfm);
+ if (esp->auth.work_icv)
+ kfree(esp->auth.work_icv);
+ if (esp->conf.tfm)
+ crypto_free_tfm(esp->conf.tfm);
+ kfree(esp);
+ }
+ return -EINVAL;
+}
+
+static struct xfrm_type esp_type =
+{
+ .description = "ESP4",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_ESP,
+ .init_state = esp_init_state,
+ .destructor = esp_destroy,
+ .get_max_size = esp4_get_max_size,
+ .input = esp_input,
+ .post_input = esp_post_input,
+ .output = esp_output
+};
+
+static struct inet_protocol esp4_protocol = {
+ .handler = xfrm4_rcv,
+ .err_handler = esp4_err,
+ .no_policy = 1,
+};
+
+int __init esp4_init(void)
+{
+ struct xfrm_decap_state decap;
+
+ if (sizeof(struct esp_decap_data) <
+ sizeof(decap.decap_data)) {
+ extern void decap_data_too_small(void);
+
+ decap_data_too_small();
+ }
+
+ SET_MODULE_OWNER(&esp_type);
+ if (xfrm_register_type(&esp_type, AF_INET) < 0) {
+ printk(KERN_INFO "ip esp init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet_add_protocol(&esp4_protocol, IPPROTO_ESP) < 0) {
+ printk(KERN_INFO "ip esp init: can't add protocol\n");
+ xfrm_unregister_type(&esp_type, AF_INET);
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static void __exit esp4_fini(void)
+{
+ if (inet_del_protocol(&esp4_protocol, IPPROTO_ESP) < 0)
+ printk(KERN_INFO "ip esp close: can't remove protocol\n");
+ if (xfrm_unregister_type(&esp_type, AF_INET) < 0)
+ printk(KERN_INFO "ip esp close: can't remove xfrm type\n");
+}
+
+module_init(esp4_init);
+module_exit(esp4_fini);
+MODULE_LICENSE("GPL");
diff -Nru a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
--- a/net/ipv4/fib_frontend.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/fib_frontend.c Thu May 8 10:41:37 2003
@@ -144,17 +144,15 @@
struct net_device * ip_dev_find(u32 addr)
{
- struct rt_key key;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = addr } } };
struct fib_result res;
struct net_device *dev = NULL;
- memset(&key, 0, sizeof(key));
- key.dst = addr;
#ifdef CONFIG_IP_MULTIPLE_TABLES
res.r = NULL;
#endif
- if (!local_table || local_table->tb_lookup(local_table, &key, &res)) {
+ if (!local_table || local_table->tb_lookup(local_table, &fl, &res)) {
return NULL;
}
if (res.type != RTN_LOCAL)
@@ -170,7 +168,7 @@
unsigned inet_addr_type(u32 addr)
{
- struct rt_key key;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = addr } } };
struct fib_result res;
unsigned ret = RTN_BROADCAST;
@@ -179,15 +177,13 @@
if (MULTICAST(addr))
return RTN_MULTICAST;
- memset(&key, 0, sizeof(key));
- key.dst = addr;
#ifdef CONFIG_IP_MULTIPLE_TABLES
res.r = NULL;
#endif
if (local_table) {
ret = RTN_UNICAST;
- if (local_table->tb_lookup(local_table, &key, &res) == 0) {
+ if (local_table->tb_lookup(local_table, &fl, &res) == 0) {
ret = res.type;
fib_res_put(&res);
}
@@ -207,18 +203,15 @@
struct net_device *dev, u32 *spec_dst, u32 *itag)
{
struct in_device *in_dev;
- struct rt_key key;
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = src,
+ .saddr = dst,
+ .tos = tos } },
+ .iif = oif };
struct fib_result res;
int no_addr, rpf;
int ret;
- key.dst = src;
- key.src = dst;
- key.tos = tos;
- key.oif = 0;
- key.iif = oif;
- key.scope = RT_SCOPE_UNIVERSE;
-
no_addr = rpf = 0;
read_lock(&inetdev_lock);
in_dev = __in_dev_get(dev);
@@ -231,7 +224,7 @@
if (in_dev == NULL)
goto e_inval;
- if (fib_lookup(&key, &res))
+ if (fib_lookup(&fl, &res))
goto last_resort;
if (res.type != RTN_UNICAST)
goto e_inval_res;
@@ -252,10 +245,10 @@
goto last_resort;
if (rpf)
goto e_inval;
- key.oif = dev->ifindex;
+ fl.oif = dev->ifindex;
ret = 0;
- if (fib_lookup(&key, &res) == 0) {
+ if (fib_lookup(&fl, &res) == 0) {
if (res.type == RTN_UNICAST) {
*spec_dst = FIB_RES_PREFSRC(res);
ret = FIB_RES_NH(res).nh_scope >= RT_SCOPE_HOST;
diff -Nru a/net/ipv4/fib_hash.c b/net/ipv4/fib_hash.c
--- a/net/ipv4/fib_hash.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/fib_hash.c Thu May 8 10:41:37 2003
@@ -266,7 +266,7 @@
}
static int
-fn_hash_lookup(struct fib_table *tb, const struct rt_key *key, struct fib_result *res)
+fn_hash_lookup(struct fib_table *tb, const struct flowi *flp, struct fib_result *res)
{
int err;
struct fn_zone *fz;
@@ -275,7 +275,7 @@
read_lock(&fib_hash_lock);
for (fz = t->fn_zone_list; fz; fz = fz->fz_next) {
struct fib_node *f;
- fn_key_t k = fz_key(key->dst, fz);
+ fn_key_t k = fz_key(flp->fl4_dst, fz);
for (f = fz_chain(k, fz); f; f = f->fn_next) {
if (!fn_key_eq(k, f->fn_key)) {
@@ -285,17 +285,17 @@
continue;
}
#ifdef CONFIG_IP_ROUTE_TOS
- if (f->fn_tos && f->fn_tos != key->tos)
+ if (f->fn_tos && f->fn_tos != flp->fl4_tos)
continue;
#endif
f->fn_state |= FN_S_ACCESSED;
if (f->fn_state&FN_S_ZOMBIE)
continue;
- if (f->fn_scope < key->scope)
+ if (f->fn_scope < flp->fl4_scope)
continue;
- err = fib_semantic_match(f->fn_type, FIB_INFO(f), key, res);
+ err = fib_semantic_match(f->fn_type, FIB_INFO(f), flp, res);
if (err == 0) {
res->type = f->fn_type;
res->scope = f->fn_scope;
@@ -338,7 +338,7 @@
}
static void
-fn_hash_select_default(struct fib_table *tb, const struct rt_key *key, struct fib_result *res)
+fn_hash_select_default(struct fib_table *tb, const struct flowi *flp, struct fib_result *res)
{
int order, last_idx;
struct fib_node *f;
diff -Nru a/net/ipv4/fib_rules.c b/net/ipv4/fib_rules.c
--- a/net/ipv4/fib_rules.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/fib_rules.c Thu May 8 10:41:37 2003
@@ -307,28 +307,28 @@
}
}
-int fib_lookup(const struct rt_key *key, struct fib_result *res)
+int fib_lookup(const struct flowi *flp, struct fib_result *res)
{
int err;
struct fib_rule *r, *policy;
struct fib_table *tb;
- u32 daddr = key->dst;
- u32 saddr = key->src;
+ u32 daddr = flp->fl4_dst;
+ u32 saddr = flp->fl4_src;
FRprintk("Lookup: %u.%u.%u.%u <- %u.%u.%u.%u ",
- NIPQUAD(key->dst), NIPQUAD(key->src));
+ NIPQUAD(flp->fl4_dst), NIPQUAD(flp->fl4_src));
read_lock(&fib_rules_lock);
for (r = fib_rules; r; r=r->r_next) {
if (((saddr^r->r_src) & r->r_srcmask) ||
((daddr^r->r_dst) & r->r_dstmask) ||
#ifdef CONFIG_IP_ROUTE_TOS
- (r->r_tos && r->r_tos != key->tos) ||
+ (r->r_tos && r->r_tos != flp->fl4_tos) ||
#endif
#ifdef CONFIG_IP_ROUTE_FWMARK
- (r->r_fwmark && r->r_fwmark != key->fwmark) ||
+ (r->r_fwmark && r->r_fwmark != flp->fl4_fwmark) ||
#endif
- (r->r_ifindex && r->r_ifindex != key->iif))
+ (r->r_ifindex && r->r_ifindex != flp->iif))
continue;
FRprintk("tb %d r %d ", r->r_table, r->r_action);
@@ -351,7 +351,7 @@
if ((tb = fib_get_table(r->r_table)) == NULL)
continue;
- err = tb->tb_lookup(tb, key, res);
+ err = tb->tb_lookup(tb, flp, res);
if (err == 0) {
res->r = policy;
if (policy)
@@ -369,13 +369,13 @@
return -ENETUNREACH;
}
-void fib_select_default(const struct rt_key *key, struct fib_result *res)
+void fib_select_default(const struct flowi *flp, struct fib_result *res)
{
if (res->r && res->r->r_action == RTN_UNICAST &&
FIB_RES_GW(*res) && FIB_RES_NH(*res).nh_scope == RT_SCOPE_LINK) {
struct fib_table *tb;
if ((tb = fib_get_table(res->r->r_table)) != NULL)
- tb->tb_select_default(tb, key, res);
+ tb->tb_select_default(tb, flp, res);
}
}
diff -Nru a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
--- a/net/ipv4/fib_semantics.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/fib_semantics.c Thu May 8 10:41:36 2003
@@ -349,7 +349,6 @@
int err;
if (nh->nh_gw) {
- struct rt_key key;
struct fib_result res;
#ifdef CONFIG_IP_ROUTE_PERVASIVE
@@ -372,16 +371,18 @@
nh->nh_scope = RT_SCOPE_LINK;
return 0;
}
- memset(&key, 0, sizeof(key));
- key.dst = nh->nh_gw;
- key.oif = nh->nh_oif;
- key.scope = r->rtm_scope + 1;
-
- /* It is not necessary, but requires a bit of thinking */
- if (key.scope < RT_SCOPE_LINK)
- key.scope = RT_SCOPE_LINK;
- if ((err = fib_lookup(&key, &res)) != 0)
- return err;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = nh->nh_gw,
+ .scope = r->rtm_scope + 1 } },
+ .oif = nh->nh_oif };
+
+ /* It is not necessary, but requires a bit of thinking */
+ if (fl.fl4_scope < RT_SCOPE_LINK)
+ fl.fl4_scope = RT_SCOPE_LINK;
+ if ((err = fib_lookup(&fl, &res)) != 0)
+ return err;
+ }
err = -EINVAL;
if (res.type != RTN_UNICAST && res.type != RTN_LOCAL)
goto out;
@@ -578,7 +579,7 @@
}
int
-fib_semantic_match(int type, struct fib_info *fi, const struct rt_key *key, struct fib_result *res)
+fib_semantic_match(int type, struct fib_info *fi, const struct flowi *flp, struct fib_result *res)
{
int err = fib_props[type].error;
@@ -603,7 +604,7 @@
for_nexthops(fi) {
if (nh->nh_flags&RTNH_F_DEAD)
continue;
- if (!key->oif || key->oif == nh->nh_oif)
+ if (!flp->oif || flp->oif == nh->nh_oif)
break;
}
#ifdef CONFIG_IP_ROUTE_MULTIPATH
@@ -949,7 +950,7 @@
fair weighted route distribution.
*/
-void fib_select_multipath(const struct rt_key *key, struct fib_result *res)
+void fib_select_multipath(const struct flowi *flp, struct fib_result *res)
{
struct fib_info *fi = res->fi;
int w;
diff -Nru a/net/ipv4/icmp.c b/net/ipv4/icmp.c
--- a/net/ipv4/icmp.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/icmp.c Thu May 8 10:41:37 2003
@@ -101,7 +101,6 @@
int offset;
int data_len;
- unsigned int csum;
struct {
struct icmphdr icmph;
__u32 times[3];
@@ -275,37 +274,47 @@
* Checksum each fragment, and on the first include the headers and final checksum.
*/
-static int icmp_glue_bits(const void *p, char *to, unsigned int offset, unsigned int fraglen)
+int
+icmp_glue_bits(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb)
{
- struct icmp_bxm *icmp_param = (struct icmp_bxm *)p;
- struct icmphdr *icmph;
+ struct icmp_bxm *icmp_param = (struct icmp_bxm *)from;
unsigned int csum;
- if (offset) {
- icmp_param->csum=skb_copy_and_csum_bits(icmp_param->skb,
- icmp_param->offset+(offset-icmp_param->head_len),
- to, fraglen,icmp_param->csum);
- return 0;
- }
+ csum = skb_copy_and_csum_bits(icmp_param->skb,
+ icmp_param->offset + offset,
+ to, len, 0);
- /*
- * First fragment includes header. Note that we've done
- * the other fragments first, so that we get the checksum
- * for the whole packet here.
- */
- csum = csum_partial_copy_nocheck((void *)&icmp_param->data,
- to, icmp_param->head_len,
- icmp_param->csum);
- csum=skb_copy_and_csum_bits(icmp_param->skb,
- icmp_param->offset,
- to+icmp_param->head_len,
- fraglen-icmp_param->head_len,
- csum);
- icmph=(struct icmphdr *)to;
- icmph->checksum = csum_fold(csum);
+ skb->csum = csum_block_add(skb->csum, csum, odd);
return 0;
}
+static void
+icmp_push_reply(struct icmp_bxm *icmp_param, struct ipcm_cookie *ipc, struct rtable *rt)
+{
+ struct sk_buff *skb;
+
+ ip_append_data(icmp_socket->sk, icmp_glue_bits, icmp_param,
+ icmp_param->data_len+icmp_param->head_len,
+ icmp_param->head_len,
+ ipc, rt, MSG_DONTWAIT);
+
+ if ((skb = skb_peek(&icmp_socket->sk->write_queue)) != NULL) {
+ struct icmphdr *icmph = skb->h.icmph;
+ unsigned int csum = 0;
+ struct sk_buff *skb1;
+
+ skb_queue_walk(&icmp_socket->sk->write_queue, skb1) {
+ csum = csum_add(csum, skb1->csum);
+ }
+ csum = csum_partial_copy_nocheck((void *)&icmp_param->data,
+ (char*)icmph, icmp_param->head_len,
+ csum);
+ icmph->checksum = csum_fold(csum);
+ skb->ip_summed = CHECKSUM_NONE;
+ ip_push_pending_frames(icmp_socket->sk);
+ }
+}
+
/*
* Driving logic for building and sending ICMP messages.
*/
@@ -323,7 +332,6 @@
icmp_xmit_lock();
icmp_param->data.icmph.checksum=0;
- icmp_param->csum=0;
icmp_out_count(icmp_param->data.icmph.type);
sk->protinfo.af_inet.tos = skb->nh.iph->tos;
@@ -335,14 +343,18 @@
if (ipc.opt->srr)
daddr = icmp_param->replyopts.faddr;
}
- if (ip_route_output(&rt, daddr, rt->rt_spec_dst, RT_TOS(skb->nh.iph->tos), 0))
- goto out;
- if (icmpv4_xrlim_allow(rt, icmp_param->data.icmph.type,
- icmp_param->data.icmph.code)) {
- ip_build_xmit(sk, icmp_glue_bits, icmp_param,
- icmp_param->data_len+icmp_param->head_len,
- &ipc, rt, MSG_DONTWAIT);
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = rt->rt_spec_dst,
+ .tos = RT_TOS(skb->nh.iph->tos) } },
+ .proto = IPPROTO_ICMP };
+ if (ip_route_output_key(&rt, &fl))
+ goto out;
}
+ if (icmpv4_xrlim_allow(rt, icmp_param->data.icmph.type,
+ icmp_param->data.icmph.code))
+ icmp_push_reply(icmp_param, &ipc, rt);
ip_rt_put(rt);
out:
icmp_xmit_unlock();
@@ -438,8 +450,8 @@
* Restore original addresses if packet has been translated.
*/
if (rt->rt_flags&RTCF_NAT && IPCB(skb_in)->flags&IPSKB_TRANSLATED) {
- iph->daddr = rt->key.dst;
- iph->saddr = rt->key.src;
+ iph->daddr = rt->fl.fl4_dst;
+ iph->saddr = rt->fl.fl4_src;
}
#endif
@@ -451,9 +463,14 @@
((iph->tos & IPTOS_TOS_MASK) | IPTOS_PREC_INTERNETCONTROL) :
iph->tos;
- if (ip_route_output(&rt, iph->saddr, saddr, RT_TOS(tos), 0))
- goto out;
-
+ {
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = iph->saddr,
+ .saddr = saddr,
+ .tos = RT_TOS(tos) } },
+ .proto = IPPROTO_ICMP };
+ if (ip_route_output_key(&rt, &fl))
+ goto out;
+ }
if (ip_options_echo(&icmp_param.replyopts, skb_in))
goto ende;
@@ -466,7 +483,6 @@
icmp_param.data.icmph.code=code;
icmp_param.data.icmph.un.gateway = info;
icmp_param.data.icmph.checksum=0;
- icmp_param.csum=0;
icmp_param.skb=skb_in;
icmp_param.offset=skb_in->nh.raw - skb_in->data;
icmp_out_count(icmp_param.data.icmph.type);
@@ -475,8 +491,13 @@
ipc.addr = iph->saddr;
ipc.opt = &icmp_param.replyopts;
if (icmp_param.replyopts.srr) {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = icmp_param.replyopts.faddr,
+ .saddr = saddr,
+ .tos = RT_TOS(tos) } },
+ .proto = IPPROTO_ICMP };
ip_rt_put(rt);
- if (ip_route_output(&rt, icmp_param.replyopts.faddr, saddr, RT_TOS(tos), 0))
+ if (ip_route_output_key(&rt, &fl))
goto out;
}
@@ -485,7 +506,7 @@
/* RFC says return as much as we can without exceeding 576 bytes. */
- room = rt->u.dst.pmtu;
+ room = dst_pmtu(&rt->u.dst);
if (room > 576)
room = 576;
room -= sizeof(struct iphdr) + icmp_param.replyopts.optlen;
@@ -496,9 +517,7 @@
icmp_param.data_len = room;
icmp_param.head_len = sizeof(struct icmphdr);
- ip_build_xmit(icmp_socket->sk, icmp_glue_bits, &icmp_param,
- icmp_param.data_len+sizeof(struct icmphdr),
- &ipc, rt, MSG_DONTWAIT);
+ icmp_push_reply(&icmp_param, &ipc, rt);
ende:
ip_rt_put(rt);
@@ -634,24 +653,10 @@
* we are OK.
*/
- ipprot = (struct inet_protocol *) inet_protos[hash];
- while (ipprot) {
- struct inet_protocol *nextip;
-
- nextip = (struct inet_protocol *) ipprot->next;
-
- /*
- * Pass it off to everyone who wants it.
- */
-
- /* RFC1122: OK. Passes appropriate ICMP errors to the */
- /* appropriate protocol layer (MUST), as per 3.2.2. */
-
- if (protocol == ipprot->protocol && ipprot->err_handler)
- ipprot->err_handler(skb, info);
+ ipprot = inet_protos[hash];
+ if (ipprot && ipprot->err_handler)
+ ipprot->err_handler(skb, info);
- ipprot = nextip;
- }
out:;
}
diff -Nru a/net/ipv4/igmp.c b/net/ipv4/igmp.c
--- a/net/ipv4/igmp.c Thu May 8 10:41:38 2003
+++ b/net/ipv4/igmp.c Thu May 8 10:41:38 2003
@@ -184,14 +184,6 @@
#define IGMP_SIZE (sizeof(struct igmphdr)+sizeof(struct iphdr)+4)
-/* Don't just hand NF_HOOK skb->dst->output, in case netfilter hook
- changes route */
-static inline int
-output_maybe_reroute(struct sk_buff *skb)
-{
- return skb->dst->output(skb);
-}
-
static int igmp_send_report(struct net_device *dev, u32 group, int type)
{
struct sk_buff *skb;
@@ -207,14 +199,19 @@
if (type == IGMP_HOST_LEAVE_MESSAGE)
dst = IGMP_ALL_ROUTER;
- if (ip_route_output(&rt, dst, 0, 0, dev->ifindex))
- return -1;
+ {
+ struct flowi fl = { .oif = dev->ifindex,
+ .nl_u = { .ip4_u = { .daddr = dst } },
+ .proto = IPPROTO_IGMP };
+ if (ip_route_output_key(&rt, &fl))
+ return -1;
+ }
if (rt->rt_src == 0) {
ip_rt_put(rt);
return -1;
}
- skb=alloc_skb(IGMP_SIZE+dev->hard_header_len+15, GFP_ATOMIC);
+ skb=alloc_skb(IGMP_SIZE+LL_RESERVED_SPACE(dev), GFP_ATOMIC);
if (skb == NULL) {
ip_rt_put(rt);
return -1;
@@ -222,7 +219,7 @@
skb->dst = &rt->u.dst;
- skb_reserve(skb, (dev->hard_header_len+15)&~15);
+ skb_reserve(skb, LL_RESERVED_SPACE(dev));
skb->nh.iph = iph = (struct iphdr *)skb_put(skb, sizeof(struct iphdr)+4);
@@ -250,7 +247,7 @@
ih->csum=ip_compute_csum((void *)ih, sizeof(struct igmphdr));
return NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev,
- output_maybe_reroute);
+ dst_output);
}
@@ -366,7 +363,7 @@
case IGMP_HOST_MEMBERSHIP_REPORT:
case IGMP_HOST_NEW_MEMBERSHIP_REPORT:
/* Is it our report looped back? */
- if (((struct rtable*)skb->dst)->key.iif == 0)
+ if (((struct rtable*)skb->dst)->fl.iif == 0)
break;
igmp_heard_report(in_dev, ih->group);
break;
@@ -600,6 +597,8 @@
static struct in_device * ip_mc_find_dev(struct ip_mreqn *imr)
{
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = imr->imr_multiaddr.s_addr } } };
struct rtable *rt;
struct net_device *dev = NULL;
struct in_device *idev = NULL;
@@ -611,7 +610,7 @@
__dev_put(dev);
}
- if (!dev && !ip_route_output(&rt, imr->imr_multiaddr.s_addr, 0, 0, 0)) {
+ if (!dev && !ip_route_output_key(&rt, &fl)) {
dev = rt->u.dst.dev;
ip_rt_put(rt);
}
diff -Nru a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
--- a/net/ipv4/ip_forward.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/ip_forward.c Thu May 8 10:41:36 2003
@@ -40,6 +40,7 @@
#include <net/checksum.h>
#include <linux/route.h>
#include <net/route.h>
+#include <net/xfrm.h>
static inline int ip_forward_finish(struct sk_buff *skb)
{
@@ -47,36 +48,20 @@
IP_INC_STATS_BH(IpForwDatagrams);
- if (opt->optlen == 0) {
-#ifdef CONFIG_NET_FASTROUTE
- struct rtable *rt = (struct rtable*)skb->dst;
-
- if (rt->rt_flags&RTCF_FAST && !netdev_fastroute_obstacles) {
- struct dst_entry *old_dst;
- unsigned h = ((*(u8*)&rt->key.dst)^(*(u8*)&rt->key.src))&NETDEV_FASTROUTE_HMASK;
-
- write_lock_irq(&skb->dev->fastpath_lock);
- old_dst = skb->dev->fastpath[h];
- skb->dev->fastpath[h] = dst_clone(&rt->u.dst);
- write_unlock_irq(&skb->dev->fastpath_lock);
-
- dst_release(old_dst);
- }
-#endif
- return (ip_send(skb));
- }
+ if (unlikely(opt->optlen))
+ ip_forward_options(skb);
- ip_forward_options(skb);
- return (ip_send(skb));
+ return dst_output(skb);
}
int ip_forward(struct sk_buff *skb)
{
- struct net_device *dev2; /* Output device */
struct iphdr *iph; /* Our header */
struct rtable *rt; /* Route we use */
struct ip_options * opt = &(IPCB(skb)->opt);
- unsigned short mtu;
+
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_FWD, skb))
+ goto drop;
if (IPCB(skb)->opt.router_alert && ip_call_ra_chain(skb))
return NET_RX_SUCCESS;
@@ -93,32 +78,21 @@
*/
iph = skb->nh.iph;
- rt = (struct rtable*)skb->dst;
if (iph->ttl <= 1)
goto too_many_hops;
- if (opt->is_strictroute && rt->rt_dst != rt->rt_gateway)
- goto sr_failed;
-
- /*
- * Having picked a route we can now send the frame out
- * after asking the firewall permission to do so.
- */
+ if (!xfrm4_route_forward(skb))
+ goto drop;
- skb->priority = rt_tos2priority(iph->tos);
- dev2 = rt->u.dst.dev;
- mtu = rt->u.dst.pmtu;
+ iph = skb->nh.iph;
+ rt = (struct rtable*)skb->dst;
- /*
- * We now generate an ICMP HOST REDIRECT giving the route
- * we calculated.
- */
- if (rt->rt_flags&RTCF_DOREDIRECT && !opt->srr)
- ip_rt_send_redirect(skb);
+ if (opt->is_strictroute && rt->rt_dst != rt->rt_gateway)
+ goto sr_failed;
/* We are about to mangle packet. Copy it! */
- if (skb_cow(skb, dev2->hard_header_len))
+ if (skb_cow(skb, LL_RESERVED_SPACE(rt->u.dst.dev)+rt->u.dst.header_len))
goto drop;
iph = skb->nh.iph;
@@ -126,29 +100,16 @@
ip_decrease_ttl(iph);
/*
- * We now may allocate a new buffer, and copy the datagram into it.
- * If the indicated interface is up and running, kick it.
+ * We now generate an ICMP HOST REDIRECT giving the route
+ * we calculated.
*/
+ if (rt->rt_flags&RTCF_DOREDIRECT && !opt->srr)
+ ip_rt_send_redirect(skb);
- if (skb->len > mtu && (ntohs(iph->frag_off) & IP_DF))
- goto frag_needed;
-
-#ifdef CONFIG_IP_ROUTE_NAT
- if (rt->rt_flags & RTCF_NAT) {
- if (ip_do_nat(skb)) {
- kfree_skb(skb);
- return NET_RX_BAD;
- }
- }
-#endif
+ skb->priority = rt_tos2priority(iph->tos);
- return NF_HOOK(PF_INET, NF_IP_FORWARD, skb, skb->dev, dev2,
+ return NF_HOOK(PF_INET, NF_IP_FORWARD, skb, skb->dev, rt->u.dst.dev,
ip_forward_finish);
-
-frag_needed:
- IP_INC_STATS_BH(IpFragFails);
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
- goto drop;
sr_failed:
/*
diff -Nru a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
--- a/net/ipv4/ip_gre.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/ip_gre.c Thu May 8 10:41:37 2003
@@ -410,6 +410,7 @@
u16 flags;
int grehlen = (iph->ihl<<2) + 4;
struct sk_buff *skb2;
+ struct flowi fl;
struct rtable *rt;
if (p[1] != htons(ETH_P_IP))
@@ -486,7 +487,11 @@
skb2->nh.raw = skb2->data;
/* Try to guess incoming interface */
- if (ip_route_output(&rt, eiph->saddr, 0, RT_TOS(eiph->tos), 0)) {
+ memset(&fl, 0, sizeof(fl));
+ fl.fl4_dst = eiph->saddr;
+ fl.fl4_tos = RT_TOS(eiph->tos);
+ fl.proto = IPPROTO_GRE;
+ if (ip_route_output_key(&rt, &fl)) {
kfree_skb(skb2);
return;
}
@@ -496,7 +501,10 @@
if (rt->rt_flags&RTCF_LOCAL) {
ip_rt_put(rt);
rt = NULL;
- if (ip_route_output(&rt, eiph->daddr, eiph->saddr, eiph->tos, 0) ||
+ fl.fl4_dst = eiph->daddr;
+ fl.fl4_src = eiph->saddr;
+ fl.fl4_tos = eiph->tos;
+ if (ip_route_output_key(&rt, &fl) ||
rt->u.dst.dev->type != ARPHRD_IPGRE) {
ip_rt_put(rt);
kfree_skb(skb2);
@@ -513,11 +521,11 @@
/* change mtu on this route */
if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
- if (rel_info > skb2->dst->pmtu) {
+ if (rel_info > dst_pmtu(skb2->dst)) {
kfree_skb(skb2);
return;
}
- skb2->dst->pmtu = rel_info;
+ skb2->dst->ops->update_pmtu(skb2->dst, rel_info);
rel_info = htonl(rel_info);
} else if (type == ICMP_TIME_EXCEEDED) {
struct ip_tunnel *t = (struct ip_tunnel*)skb2->dev->priv;
@@ -617,7 +625,7 @@
#ifdef CONFIG_NET_IPGRE_BROADCAST
if (MULTICAST(iph->daddr)) {
/* Looped back packet, drop it! */
- if (((struct rtable*)skb->dst)->key.iif == 0)
+ if (((struct rtable*)skb->dst)->fl.iif == 0)
goto drop;
tunnel->stat.multicast++;
skb->pkt_type = PACKET_BROADCAST;
@@ -665,12 +673,6 @@
return(0);
}
-/* Need this wrapper because NF_HOOK takes the function address */
-static inline int do_ip_send(struct sk_buff *skb)
-{
- return ip_send(skb);
-}
-
static int ipgre_tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct ip_tunnel *tunnel = (struct ip_tunnel*)dev->priv;
@@ -747,9 +749,17 @@
tos &= ~1;
}
- if (ip_route_output(&rt, dst, tiph->saddr, RT_TOS(tos), tunnel->parms.link)) {
- tunnel->stat.tx_carrier_errors++;
- goto tx_error;
+ {
+ struct flowi fl = { .oif = tunnel->parms.link,
+ .nl_u = { .ip4_u =
+ { .daddr = dst,
+ .saddr = tiph->saddr,
+ .tos = RT_TOS(tos) } },
+ .proto = IPPROTO_GRE };
+ if (ip_route_output_key(&rt, &fl)) {
+ tunnel->stat.tx_carrier_errors++;
+ goto tx_error;
+ }
}
tdev = rt->u.dst.dev;
@@ -761,14 +771,14 @@
df = tiph->frag_off;
if (df)
- mtu = rt->u.dst.pmtu - tunnel->hlen;
+ mtu = dst_pmtu(&rt->u.dst) - tunnel->hlen;
else
- mtu = skb->dst ? skb->dst->pmtu : dev->mtu;
+ mtu = skb->dst ? dst_pmtu(skb->dst) : dev->mtu;
- if (skb->protocol == htons(ETH_P_IP)) {
- if (skb->dst && mtu < skb->dst->pmtu && mtu >= 68)
- skb->dst->pmtu = mtu;
+ if (skb->dst)
+ skb->dst->ops->update_pmtu(skb->dst, mtu);
+ if (skb->protocol == htons(ETH_P_IP)) {
df |= (old_iph->frag_off&htons(IP_DF));
if ((old_iph->frag_off&htons(IP_DF)) &&
@@ -782,11 +792,11 @@
else if (skb->protocol == htons(ETH_P_IPV6)) {
struct rt6_info *rt6 = (struct rt6_info*)skb->dst;
- if (rt6 && mtu < rt6->u.dst.pmtu && mtu >= IPV6_MIN_MTU) {
+ if (rt6 && mtu < dst_pmtu(skb->dst) && mtu >= IPV6_MIN_MTU) {
if ((tunnel->parms.iph.daddr && !MULTICAST(tunnel->parms.iph.daddr)) ||
rt6->rt6i_dst.plen == 128) {
rt6->rt6i_flags |= RTF_MODIFIED;
- skb->dst->pmtu = mtu;
+ skb->dst->metrics[RTAX_MTU-1] = mtu;
}
}
@@ -809,7 +819,7 @@
skb->h.raw = skb->nh.raw;
- max_headroom = ((tdev->hard_header_len+15)&~15)+ gre_hlen;
+ max_headroom = LL_RESERVED_SPACE(tdev) + gre_hlen;
if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
@@ -1102,10 +1112,14 @@
MOD_INC_USE_COUNT;
if (MULTICAST(t->parms.iph.daddr)) {
+ struct flowi fl = { .oif = t->parms.link,
+ .nl_u = { .ip4_u =
+ { .daddr = t->parms.iph.daddr,
+ .saddr = t->parms.iph.saddr,
+ .tos = RT_TOS(t->parms.iph.tos) } },
+ .proto = IPPROTO_GRE };
struct rtable *rt;
- if (ip_route_output(&rt, t->parms.iph.daddr,
- t->parms.iph.saddr, RT_TOS(t->parms.iph.tos),
- t->parms.link)) {
+ if (ip_route_output_key(&rt, &fl)) {
MOD_DEC_USE_COUNT;
return -EADDRNOTAVAIL;
}
@@ -1175,8 +1189,14 @@
/* Guess output device to choose reasonable mtu and hard_header_len */
if (iph->daddr) {
+ struct flowi fl = { .oif = tunnel->parms.link,
+ .nl_u = { .ip4_u =
+ { .daddr = iph->daddr,
+ .saddr = iph->saddr,
+ .tos = RT_TOS(iph->tos) } },
+ .proto = IPPROTO_GRE };
struct rtable *rt;
- if (!ip_route_output(&rt, iph->daddr, iph->saddr, RT_TOS(iph->tos), tunnel->parms.link)) {
+ if (!ip_route_output_key(&rt, &fl)) {
tdev = rt->u.dst.dev;
ip_rt_put(rt);
}
@@ -1257,13 +1277,8 @@
static struct inet_protocol ipgre_protocol = {
- ipgre_rcv, /* GRE handler */
- ipgre_err, /* TUNNEL error control */
- 0, /* next */
- IPPROTO_GRE, /* protocol ID */
- 0, /* copy */
- NULL, /* data */
- "GRE" /* name */
+ .handler = ipgre_rcv,
+ .err_handler = ipgre_err,
};
@@ -1279,9 +1294,13 @@
{
printk(KERN_INFO "GRE over IPv4 tunneling driver\n");
+ if (inet_add_protocol(&ipgre_protocol, IPPROTO_GRE) < 0) {
+ printk(KERN_INFO "ipgre init: can't add protocol\n");
+ return -EAGAIN;
+ }
+
ipgre_fb_tunnel_dev.priv = (void*)&ipgre_fb_tunnel;
register_netdev(&ipgre_fb_tunnel_dev);
- inet_add_protocol(&ipgre_protocol);
return 0;
}
@@ -1289,7 +1308,7 @@
void cleanup_module(void)
{
- if ( inet_del_protocol(&ipgre_protocol) < 0 )
+ if (inet_del_protocol(&ipgre_protocol, IPPROTO_GRE) < 0)
printk(KERN_INFO "ipgre close: can't remove protocol\n");
unregister_netdev(&ipgre_fb_tunnel_dev);
diff -Nru a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
--- a/net/ipv4/ip_input.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/ip_input.c Thu May 8 10:41:36 2003
@@ -141,6 +141,7 @@
#include <net/raw.h>
#include <net/checksum.h>
#include <linux/netfilter_ipv4.h>
+#include <net/xfrm.h>
#include <linux/mroute.h>
#include <linux/netlink.h>
@@ -194,28 +195,6 @@
return 0;
}
-/* Handle this out of line, it is rare. */
-static int ip_run_ipprot(struct sk_buff *skb, struct iphdr *iph,
- struct inet_protocol *ipprot, int force_copy)
-{
- int ret = 0;
-
- do {
- if (ipprot->protocol == iph->protocol) {
- struct sk_buff *skb2 = skb;
- if (ipprot->copy || force_copy)
- skb2 = skb_clone(skb, GFP_ATOMIC);
- if(skb2 != NULL) {
- ret = 1;
- ipprot->handler(skb2);
- }
- }
- ipprot = (struct inet_protocol *) ipprot->next;
- } while(ipprot != NULL);
-
- return ret;
-}
-
static inline int ip_local_deliver_finish(struct sk_buff *skb)
{
int ihl = skb->nh.iph->ihl*4;
@@ -239,44 +218,40 @@
{
/* Note: See raw.c and net/raw.h, RAWV4_HTABLE_SIZE==MAX_INET_PROTOS */
int protocol = skb->nh.iph->protocol;
- int hash = protocol & (MAX_INET_PROTOS - 1);
- struct sock *raw_sk = raw_v4_htable[hash];
+ int hash;
+ struct sock *raw_sk;
struct inet_protocol *ipprot;
- int flag;
+
+ resubmit:
+ hash = protocol & (MAX_INET_PROTOS - 1);
+ raw_sk = raw_v4_htable[hash];
/* If there maybe a raw socket we must check - if not we
* don't care less
*/
- if(raw_sk != NULL)
- raw_sk = raw_v4_input(skb, skb->nh.iph, hash);
+ if (raw_sk)
+ raw_v4_input(skb, skb->nh.iph, hash);
- ipprot = (struct inet_protocol *) inet_protos[hash];
- flag = 0;
- if(ipprot != NULL) {
- if(raw_sk == NULL &&
- ipprot->next == NULL &&
- ipprot->protocol == protocol) {
- int ret;
-
- /* Fast path... */
- ret = ipprot->handler(skb);
-
- return ret;
- } else {
- flag = ip_run_ipprot(skb, skb->nh.iph, ipprot, (raw_sk != NULL));
- }
- }
+ if ((ipprot = inet_protos[hash]) != NULL) {
+ int ret;
- /* All protocols checked.
- * If this packet was a broadcast, we may *not* reply to it, since that
- * causes (proven, grin) ARP storms and a leakage of memory (i.e. all
- * ICMP reply messages get queued up for transmission...)
- */
- if(raw_sk != NULL) { /* Shift to last raw user */
- raw_rcv(raw_sk, skb);
- sock_put(raw_sk);
- } else if (!flag) { /* Free and report errors */
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PROT_UNREACH, 0);
+ if (!ipprot->no_policy &&
+ !xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return 0;
+ }
+ ret = ipprot->handler(skb);
+ if (ret < 0) {
+ protocol = -ret;
+ goto resubmit;
+ }
+ } else {
+ if (!raw_sk) {
+ if (xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ icmp_send(skb, ICMP_DEST_UNREACH,
+ ICMP_PROT_UNREACH, 0);
+ }
+ }
kfree_skb(skb);
}
}
@@ -364,7 +339,7 @@
}
}
- return skb->dst->input(skb);
+ return dst_input(skb);
inhdr_error:
IP_INC_STATS_BH(IpInHdrErrors);
diff -Nru a/net/ipv4/ip_nat_dumb.c b/net/ipv4/ip_nat_dumb.c
--- a/net/ipv4/ip_nat_dumb.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/ip_nat_dumb.c Thu May 8 10:41:37 2003
@@ -117,23 +117,23 @@
if (rt->rt_flags&RTCF_SNAT) {
if (ciph->daddr != osaddr) {
struct fib_result res;
- struct rt_key key;
unsigned flags = 0;
-
- key.src = ciph->daddr;
- key.dst = ciph->saddr;
- key.iif = skb->dev->ifindex;
- key.oif = 0;
+ struct flowi fl = {
+ .iif = skb->dev->ifindex,
+ .nl_u =
+ { .ip4_u =
+ { .daddr = ciph->saddr,
+ .saddr = ciph->daddr,
#ifdef CONFIG_IP_ROUTE_TOS
- key.tos = RT_TOS(ciph->tos);
-#endif
-#ifdef CONFIG_IP_ROUTE_FWMARK
- key.fwmark = 0;
+ .tos = RT_TOS(ciph->tos)
#endif
+ } },
+ .proto = ciph->protocol };
+
/* Use fib_lookup() until we get our own
* hash table of NATed hosts -- Rani
*/
- if (fib_lookup(&key, &res) == 0) {
+ if (fib_lookup(&fl, &res) == 0) {
if (res.r) {
ciph->daddr = fib_rules_policy(ciph->daddr, &res, &flags);
if (ciph->daddr != idaddr)
diff -Nru a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
--- a/net/ipv4/ip_output.c Thu May 8 10:41:38 2003
+++ b/net/ipv4/ip_output.c Thu May 8 10:41:38 2003
@@ -15,6 +15,7 @@
* Stefan Becker, <stefanb@yello.ping.de>
* Jorge Cwik, <jorge@laser.satlink.net>
* Arnt Gulbrandsen, <agulbra@nvg.unit.no>
+ * Hirokazu Takahashi, <taka@valinux.co.jp>
*
* See ip_input.c for original log
*
@@ -38,6 +39,9 @@
* Marc Boucher : When call_out_firewall returns FW_QUEUE,
* silently drop skb instead of failing with -EPERM.
* Detlev Wengorz : Copy protocol for fragments.
+ * Hirokazu Takahashi: HW checksumming for outgoing UDP
+ * datagrams.
+ * Hirokazu Takahashi: sendfile() on UDP works now.
*/
#include <asm/uaccess.h>
@@ -108,16 +112,9 @@
return 0;
}
-/* Don't just hand NF_HOOK skb->dst->output, in case netfilter hook
- changes route */
-static inline int
-output_maybe_reroute(struct sk_buff *skb)
-{
- return skb->dst->output(skb);
-}
-
/*
* Add an ip header to a skbuff and send it out.
+ *
*/
int ip_build_and_send_pkt(struct sk_buff *skb, struct sock *sk,
u32 saddr, u32 daddr, struct ip_options *opt)
@@ -152,15 +149,34 @@
}
ip_send_check(iph);
+ skb->priority = sk->priority;
+
/* Send it out. */
return NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev,
- output_maybe_reroute);
+ dst_output);
}
static inline int ip_finish_output2(struct sk_buff *skb)
{
struct dst_entry *dst = skb->dst;
struct hh_cache *hh = dst->hh;
+ struct net_device *dev = dst->dev;
+ int hh_len = LL_RESERVED_SPACE(dev);
+
+ /* Be paranoid, rather than too clever. */
+ if (unlikely(skb_headroom(skb) < hh_len && dev->hard_header)) {
+ struct sk_buff *skb2;
+
+ skb2 = skb_realloc_headroom(skb, LL_RESERVED_SPACE(dev));
+ if (skb2 == NULL) {
+ kfree_skb(skb);
+ return -ENOMEM;
+ }
+ if (skb->sk)
+ skb_set_owner_w(skb2, skb->sk);
+ kfree_skb(skb);
+ skb = skb2;
+ }
#ifdef CONFIG_NETFILTER_DEBUG
nf_debug_ip_finish_output2(skb);
@@ -181,7 +197,7 @@
return -EINVAL;
}
-__inline__ int ip_finish_output(struct sk_buff *skb)
+int ip_finish_output(struct sk_buff *skb)
{
struct net_device *dev = skb->dst->dev;
@@ -202,10 +218,6 @@
* If the indicated interface is up and running, send the packet.
*/
IP_INC_STATS(IpOutRequests);
-#ifdef CONFIG_IP_ROUTE_NAT
- if (rt->rt_flags & RTCF_NAT)
- ip_do_nat(skb);
-#endif
skb->dev = dev;
skb->protocol = htons(ETH_P_IP);
@@ -250,90 +262,26 @@
newskb->dev, ip_dev_loopback_xmit);
}
- return ip_finish_output(skb);
+ if (skb->len > dst_pmtu(&rt->u.dst) || skb_shinfo(skb)->frag_list)
+ return ip_fragment(skb, ip_finish_output);
+ else
+ return ip_finish_output(skb);
}
int ip_output(struct sk_buff *skb)
{
-#ifdef CONFIG_IP_ROUTE_NAT
- struct rtable *rt = (struct rtable*)skb->dst;
-#endif
-
IP_INC_STATS(IpOutRequests);
-#ifdef CONFIG_IP_ROUTE_NAT
- if (rt->rt_flags&RTCF_NAT)
- ip_do_nat(skb);
+ if ((skb->len > dst_pmtu(skb->dst) || skb_shinfo(skb)->frag_list) &&
+#ifdef NETIF_F_TSO
+ !skb_shinfo(skb)->tso_size
+#else
+ 1
#endif
-
- return ip_finish_output(skb);
-}
-
-/* Queues a packet to be sent, and starts the transmitter if necessary.
- * This routine also needs to put in the total length and compute the
- * checksum. We use to do this in two stages, ip_build_header() then
- * this, but that scheme created a mess when routes disappeared etc.
- * So we do it all here, and the TCP send engine has been changed to
- * match. (No more unroutable FIN disasters, etc. wheee...) This will
- * most likely make other reliable transport layers above IP easier
- * to implement under Linux.
- */
-static inline int ip_queue_xmit2(struct sk_buff *skb)
-{
- struct sock *sk = skb->sk;
- struct rtable *rt = (struct rtable *)skb->dst;
- struct net_device *dev;
- struct iphdr *iph = skb->nh.iph;
-
- dev = rt->u.dst.dev;
-
- /* This can happen when the transport layer has segments queued
- * with a cached route, and by the time we get here things are
- * re-routed to a device with a different MTU than the original
- * device. Sick, but we must cover it.
- */
- if (skb_headroom(skb) < dev->hard_header_len && dev->hard_header) {
- struct sk_buff *skb2;
-
- skb2 = skb_realloc_headroom(skb, (dev->hard_header_len + 15) & ~15);
- kfree_skb(skb);
- if (skb2 == NULL)
- return -ENOMEM;
- if (sk)
- skb_set_owner_w(skb2, sk);
- skb = skb2;
- iph = skb->nh.iph;
- }
-
- if (skb->len > rt->u.dst.pmtu)
- goto fragment;
-
- ip_select_ident(iph, &rt->u.dst, sk);
-
- /* Add an IP checksum. */
- ip_send_check(iph);
-
- skb->priority = sk->priority;
- return skb->dst->output(skb);
-
-fragment:
- if (ip_dont_fragment(sk, &rt->u.dst)) {
- /* Reject packet ONLY if TCP might fragment
- * it itself, if were careful enough.
- */
- NETDEBUG(printk(KERN_DEBUG "sending pkt_too_big (len[%u] pmtu[%u]) to self\n",
- skb->len, rt->u.dst.pmtu));
-
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
- htonl(rt->u.dst.pmtu));
- kfree_skb(skb);
- return -EMSGSIZE;
- }
- ip_select_ident(iph, &rt->u.dst, sk);
- if (skb->ip_summed == CHECKSUM_HW &&
- (skb = skb_checksum_help(skb)) == NULL)
- return -ENOMEM;
- return ip_fragment(skb, skb->dst->output);
+ )
+ return ip_fragment(skb, ip_finish_output);
+ else
+ return ip_finish_output(skb);
}
int ip_queue_xmit(struct sk_buff *skb)
@@ -342,6 +290,9 @@
struct ip_options *opt = sk->protinfo.af_inet.opt;
struct rtable *rt;
struct iphdr *iph;
+#ifdef NETIF_F_TSO
+ u32 mtu;
+#endif
/* Skip all of this if the packet is already routed,
* f.e. by something like SCTP.
@@ -360,14 +311,24 @@
if(opt && opt->srr)
daddr = opt->faddr;
- /* If this fails, retransmit mechanism of transport layer will
- * keep trying until route appears or the connection times itself
- * out.
- */
- if (ip_route_output(&rt, daddr, sk->saddr,
- RT_CONN_FLAGS(sk),
- sk->bound_dev_if))
- goto no_route;
+ {
+ struct flowi fl = { .oif = sk->bound_dev_if,
+ .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = sk->saddr,
+ .tos = RT_CONN_FLAGS(sk) } },
+ .proto = sk->protocol,
+ .uli_u = { .ports =
+ { .sport = sk->sport,
+ .dport = sk->dport } } };
+
+ /* If this fails, retransmit mechanism of transport layer will
+ * keep trying until route appears or the connection times
+ * itself out.
+ */
+ if (ip_route_output_flow(&rt, &fl, sk, 0))
+ goto no_route;
+ }
__sk_dst_set(sk, &rt->u.dst);
sk->route_caps = rt->u.dst.dev->features;
}
@@ -397,8 +358,30 @@
ip_options_build(skb, opt, sk->daddr, rt, 0);
}
+#ifdef NETIF_F_TSO
+ mtu = dst_pmtu(&rt->u.dst);
+ if (skb->len > mtu && (sk->route_caps&NETIF_F_TSO)) {
+ unsigned int hlen;
+
+ /* Hack zone: all this must be done by TCP. */
+ hlen = ((skb->h.raw - skb->data) + (skb->h.th->doff << 2));
+ skb_shinfo(skb)->tso_size = mtu - hlen;
+ skb_shinfo(skb)->tso_segs =
+ (skb->len - hlen + skb_shinfo(skb)->tso_size - 1)/
+ skb_shinfo(skb)->tso_size - 1;
+ }
+ ip_select_ident_more(iph, &rt->u.dst, sk, skb_shinfo(skb)->tso_segs);
+#else
+ ip_select_ident(iph, &rt->u.dst, sk);
+#endif
+
+ /* Add an IP checksum. */
+ ip_send_check(iph);
+
+ skb->priority = sk->priority;
+
return NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev,
- ip_queue_xmit2);
+ dst_output);
no_route:
IP_INC_STATS(IpOutNoRoutes);
@@ -406,336 +389,30 @@
return -EHOSTUNREACH;
}
-/*
- * Build and send a packet, with as little as one copy
- *
- * Doesn't care much about ip options... option length can be
- * different for fragment at 0 and other fragments.
- *
- * Note that the fragment at the highest offset is sent first,
- * so the getfrag routine can fill in the TCP/UDP checksum header
- * field in the last fragment it sends... actually it also helps
- * the reassemblers, they can put most packets in at the head of
- * the fragment queue, and they know the total size in advance. This
- * last feature will measurably improve the Linux fragment handler one
- * day.
- *
- * The callback has five args, an arbitrary pointer (copy of frag),
- * the source IP address (may depend on the routing table), the
- * destination address (char *), the offset to copy from, and the
- * length to be copied.
- */
-
-static int ip_build_xmit_slow(struct sock *sk,
- int getfrag (const void *,
- char *,
- unsigned int,
- unsigned int),
- const void *frag,
- unsigned length,
- struct ipcm_cookie *ipc,
- struct rtable *rt,
- int flags)
+static void ip_copy_metadata(struct sk_buff *to, struct sk_buff *from)
{
- unsigned int fraglen, maxfraglen, fragheaderlen;
- int err;
- int offset, mf;
- int mtu;
- u16 id;
-
- int hh_len = (rt->u.dst.dev->hard_header_len + 15)&~15;
- int nfrags=0;
- struct ip_options *opt = ipc->opt;
- int df = 0;
-
- mtu = rt->u.dst.pmtu;
- if (ip_dont_fragment(sk, &rt->u.dst))
- df = htons(IP_DF);
-
- length -= sizeof(struct iphdr);
+ to->pkt_type = from->pkt_type;
+ to->priority = from->priority;
+ to->protocol = from->protocol;
+ to->security = from->security;
+ to->dst = dst_clone(from->dst);
+ to->dev = from->dev;
- if (opt) {
- fragheaderlen = sizeof(struct iphdr) + opt->optlen;
- maxfraglen = ((mtu-sizeof(struct iphdr)-opt->optlen) & ~7) + fragheaderlen;
- } else {
- fragheaderlen = sizeof(struct iphdr);
-
- /*
- * Fragheaderlen is the size of 'overhead' on each buffer. Now work
- * out the size of the frames to send.
- */
-
- maxfraglen = ((mtu-sizeof(struct iphdr)) & ~7) + fragheaderlen;
- }
-
- if (length + fragheaderlen > 0xFFFF) {
- ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu);
- return -EMSGSIZE;
- }
-
- /*
- * Start at the end of the frame by handling the remainder.
- */
-
- offset = length - (length % (maxfraglen - fragheaderlen));
-
- /*
- * Amount of memory to allocate for final fragment.
- */
-
- fraglen = length - offset + fragheaderlen;
-
- if (length-offset==0) {
- fraglen = maxfraglen;
- offset -= maxfraglen-fragheaderlen;
- }
-
- /*
- * The last fragment will not have MF (more fragments) set.
- */
-
- mf = 0;
-
- /*
- * Don't fragment packets for path mtu discovery.
- */
+ /* Copy the flags to each fragment. */
+ IPCB(to)->flags = IPCB(from)->flags;
- if (offset > 0 && sk->protinfo.af_inet.pmtudisc==IP_PMTUDISC_DO) {
- ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu);
- return -EMSGSIZE;
- }
- if (flags&MSG_PROBE)
- goto out;
-
- /*
- * Begin outputting the bytes.
- */
-
- id = sk->protinfo.af_inet.id++;
-
- do {
- char *data;
- struct sk_buff * skb;
-
- /*
- * Get the memory we require with some space left for alignment.
- */
- if (!(flags & MSG_DONTWAIT) || nfrags == 0) {
- skb = sock_alloc_send_skb(sk, fraglen + hh_len + 15,
- (flags & MSG_DONTWAIT), &err);
- } else {
- /* On a non-blocking write, we check for send buffer
- * usage on the first fragment only.
- */
- skb = sock_wmalloc(sk, fraglen + hh_len + 15, 1,
- sk->allocation);
- if (!skb)
- err = -ENOBUFS;
- }
- if (skb == NULL)
- goto error;
-
- /*
- * Fill in the control structures
- */
-
- skb->priority = sk->priority;
- skb->dst = dst_clone(&rt->u.dst);
- skb_reserve(skb, hh_len);
-
- /*
- * Find where to start putting bytes.
- */
-
- data = skb_put(skb, fraglen);
- skb->nh.iph = (struct iphdr *)data;
-
- /*
- * Only write IP header onto non-raw packets
- */
-
- {
- struct iphdr *iph = (struct iphdr *)data;
-
- iph->version = 4;
- iph->ihl = 5;
- if (opt) {
- iph->ihl += opt->optlen>>2;
- ip_options_build(skb, opt,
- ipc->addr, rt, offset);
- }
- iph->tos = sk->protinfo.af_inet.tos;
- iph->tot_len = htons(fraglen - fragheaderlen + iph->ihl*4);
- iph->frag_off = htons(offset>>3)|mf|df;
- iph->id = id;
- if (!mf) {
- if (offset || !df) {
- /* Select an unpredictable ident only
- * for packets without DF or having
- * been fragmented.
- */
- __ip_select_ident(iph, &rt->u.dst);
- id = iph->id;
- }
-
- /*
- * Any further fragments will have MF set.
- */
- mf = htons(IP_MF);
- }
- if (rt->rt_type == RTN_MULTICAST)
- iph->ttl = sk->protinfo.af_inet.mc_ttl;
- else
- iph->ttl = sk->protinfo.af_inet.ttl;
- iph->protocol = sk->protocol;
- iph->check = 0;
- iph->saddr = rt->rt_src;
- iph->daddr = rt->rt_dst;
- iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
- data += iph->ihl*4;
- }
-
- /*
- * User data callback
- */
-
- if (getfrag(frag, data, offset, fraglen-fragheaderlen)) {
- err = -EFAULT;
- kfree_skb(skb);
- goto error;
- }
-
- offset -= (maxfraglen-fragheaderlen);
- fraglen = maxfraglen;
-
- nfrags++;
-
- err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL,
- skb->dst->dev, output_maybe_reroute);
- if (err) {
- if (err > 0)
- err = sk->protinfo.af_inet.recverr ? net_xmit_errno(err) : 0;
- if (err)
- goto error;
- }
- } while (offset >= 0);
-
- if (nfrags>1)
- ip_statistics[smp_processor_id()*2 + !in_softirq()].IpFragCreates += nfrags;
-out:
- return 0;
-
-error:
- IP_INC_STATS(IpOutDiscards);
- if (nfrags>1)
- ip_statistics[smp_processor_id()*2 + !in_softirq()].IpFragCreates += nfrags;
- return err;
-}
-
-/*
- * Fast path for unfragmented packets.
- */
-int ip_build_xmit(struct sock *sk,
- int getfrag (const void *,
- char *,
- unsigned int,
- unsigned int),
- const void *frag,
- unsigned length,
- struct ipcm_cookie *ipc,
- struct rtable *rt,
- int flags)
-{
- int err;
- struct sk_buff *skb;
- int df;
- struct iphdr *iph;
-
- /*
- * Try the simple case first. This leaves fragmented frames, and by
- * choice RAW frames within 20 bytes of maximum size(rare) to the long path
- */
-
- if (!sk->protinfo.af_inet.hdrincl) {
- length += sizeof(struct iphdr);
-
- /*
- * Check for slow path.
- */
- if (length > rt->u.dst.pmtu || ipc->opt != NULL)
- return ip_build_xmit_slow(sk,getfrag,frag,length,ipc,rt,flags);
- } else {
- if (length > rt->u.dst.dev->mtu) {
- ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, rt->u.dst.dev->mtu);
- return -EMSGSIZE;
- }
- }
- if (flags&MSG_PROBE)
- goto out;
-
- /*
- * Do path mtu discovery if needed.
- */
- df = 0;
- if (ip_dont_fragment(sk, &rt->u.dst))
- df = htons(IP_DF);
-
- /*
- * Fast path for unfragmented frames without options.
- */
- {
- int hh_len = (rt->u.dst.dev->hard_header_len + 15)&~15;
-
- skb = sock_alloc_send_skb(sk, length+hh_len+15,
- flags&MSG_DONTWAIT, &err);
- if(skb==NULL)
- goto error;
- skb_reserve(skb, hh_len);
- }
-
- skb->priority = sk->priority;
- skb->dst = dst_clone(&rt->u.dst);
-
- skb->nh.iph = iph = (struct iphdr *)skb_put(skb, length);
-
- if(!sk->protinfo.af_inet.hdrincl) {
- iph->version=4;
- iph->ihl=5;
- iph->tos=sk->protinfo.af_inet.tos;
- iph->tot_len = htons(length);
- iph->frag_off = df;
- iph->ttl=sk->protinfo.af_inet.mc_ttl;
- ip_select_ident(iph, &rt->u.dst, sk);
- if (rt->rt_type != RTN_MULTICAST)
- iph->ttl=sk->protinfo.af_inet.ttl;
- iph->protocol=sk->protocol;
- iph->saddr=rt->rt_src;
- iph->daddr=rt->rt_dst;
- iph->check=0;
- iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
- err = getfrag(frag, ((char *)iph)+iph->ihl*4,0, length-iph->ihl*4);
- }
- else
- err = getfrag(frag, (void *)iph, 0, length);
-
- if (err)
- goto error_fault;
-
- err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev,
- output_maybe_reroute);
- if (err > 0)
- err = sk->protinfo.af_inet.recverr ? net_xmit_errno(err) : 0;
- if (err)
- goto error;
-out:
- return 0;
-
-error_fault:
- err = -EFAULT;
- kfree_skb(skb);
-error:
- IP_INC_STATS(IpOutDiscards);
- return err;
+#ifdef CONFIG_NET_SCHED
+ to->tc_index = from->tc_index;
+#endif
+#ifdef CONFIG_NETFILTER
+ to->nfmark = from->nfmark;
+ /* Connection association is same as pre-frag packet */
+ to->nfct = from->nfct;
+ nf_conntrack_get(to->nfct);
+#ifdef CONFIG_NETFILTER_DEBUG
+ to->nf_debug = from->nf_debug;
+#endif
+#endif
}
/*
@@ -743,8 +420,6 @@
* smaller pieces (each of size equal to IP header plus
* a block of the data of the original IP data part) that will yet fit in a
* single device frame, and queue such a frame for sending.
- *
- * Yes this is inefficient, feel free to submit a quicker one.
*/
int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))
@@ -768,13 +443,111 @@
iph = skb->nh.iph;
+ if (unlikely((iph->frag_off & htons(IP_DF)) && !skb->local_df)) {
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ htonl(dst_pmtu(&rt->u.dst)));
+ kfree_skb(skb);
+ return -EMSGSIZE;
+ }
+
/*
* Setup starting values.
*/
hlen = iph->ihl * 4;
+ mtu = dst_pmtu(&rt->u.dst) - hlen; /* Size of data space */
+
+ /* When frag_list is given, use it. First, check its validity:
+ * some transformers could create wrong frag_list or break existing
+ * one, it is not prohibited. In this case fall back to copying.
+ *
+ * LATER: this step can be merged to real generation of fragments,
+ * we can switch to copy when see the first bad fragment.
+ */
+ if (skb_shinfo(skb)->frag_list) {
+ struct sk_buff *frag;
+ int first_len = skb_pagelen(skb);
+
+ if (first_len - hlen > mtu ||
+ ((first_len - hlen) & 7) ||
+ (iph->frag_off & htons(IP_MF|IP_OFFSET)) ||
+ skb_cloned(skb))
+ goto slow_path;
+
+ for (frag = skb_shinfo(skb)->frag_list; frag; frag = frag->next) {
+ /* Correct geometry. */
+ if (frag->len > mtu ||
+ ((frag->len & 7) && frag->next) ||
+ skb_headroom(frag) < hlen)
+ goto slow_path;
+
+ /* Correct socket ownership. */
+ if (frag->sk == NULL)
+ goto slow_path;
+
+ /* Partially cloned skb? */
+ if (skb_shared(frag))
+ goto slow_path;
+ }
+
+ /* Everything is OK. Generate! */
+
+ err = 0;
+ offset = 0;
+ frag = skb_shinfo(skb)->frag_list;
+ skb_shinfo(skb)->frag_list = 0;
+ skb->data_len = first_len - skb_headlen(skb);
+ skb->len = first_len;
+ iph->tot_len = htons(first_len);
+ iph->frag_off |= htons(IP_MF);
+ ip_send_check(iph);
+
+ for (;;) {
+ /* Prepare header of the next frame,
+ * before previous one went down. */
+ if (frag) {
+ frag->h.raw = frag->data;
+ frag->nh.raw = __skb_push(frag, hlen);
+ memcpy(frag->nh.raw, iph, hlen);
+ iph = frag->nh.iph;
+ iph->tot_len = htons(frag->len);
+ ip_copy_metadata(frag, skb);
+ if (offset == 0)
+ ip_options_fragment(frag);
+ offset += skb->len - hlen;
+ iph->frag_off = htons(offset>>3);
+ if (frag->next != NULL)
+ iph->frag_off |= htons(IP_MF);
+ /* Ready, complete checksum */
+ ip_send_check(iph);
+ }
+
+ err = output(skb);
+
+ if (err || !frag)
+ break;
+
+ skb = frag;
+ frag = skb->next;
+ skb->next = NULL;
+ }
+
+ if (err == 0) {
+ IP_INC_STATS(IpFragOKs);
+ return 0;
+ }
+
+ while (frag) {
+ skb = frag->next;
+ kfree_skb(frag);
+ frag = skb;
+ }
+ IP_INC_STATS(IpFragFails);
+ return err;
+ }
+
+slow_path:
left = skb->len - hlen; /* Space per frame */
- mtu = rt->u.dst.pmtu - hlen; /* Size of data space */
ptr = raw + hlen; /* Where to start from */
/*
@@ -802,7 +575,7 @@
* Allocate buffer.
*/
- if ((skb2 = alloc_skb(len+hlen+dev->hard_header_len+15,GFP_ATOMIC)) == NULL) {
+ if ((skb2 = alloc_skb(len+hlen+LL_RESERVED_SPACE(rt->u.dst.dev), GFP_ATOMIC)) == NULL) {
NETDEBUG(printk(KERN_INFO "IP: frag: no memory for new fragment!\n"));
err = -ENOMEM;
goto fail;
@@ -812,14 +585,11 @@
* Set up data on packet
*/
- skb2->pkt_type = skb->pkt_type;
- skb2->priority = skb->priority;
- skb_reserve(skb2, (dev->hard_header_len+15)&~15);
+ ip_copy_metadata(skb2, skb);
+ skb_reserve(skb2, LL_RESERVED_SPACE(rt->u.dst.dev));
skb_put(skb2, len + hlen);
skb2->nh.raw = skb2->data;
skb2->h.raw = skb2->data + hlen;
- skb2->protocol = skb->protocol;
- skb2->security = skb->security;
/*
* Charge the memory for the fragment to any owner
@@ -828,8 +598,6 @@
if (skb->sk)
skb_set_owner_w(skb2, skb->sk);
- skb2->dst = dst_clone(skb->dst);
- skb2->dev = skb->dev;
/*
* Copy the packet header into the new buffer.
@@ -859,9 +627,6 @@
if (offset == 0)
ip_options_fragment(skb);
- /* Copy the flags to each fragment. */
- IPCB(skb2)->flags = IPCB(skb)->flags;
-
/*
* Added AC : If we are fragmenting a fragment that's not the
* last fragment then keep MF on each bit
@@ -871,19 +636,6 @@
ptr += len;
offset += len;
-#ifdef CONFIG_NET_SCHED
- skb2->tc_index = skb->tc_index;
-#endif
-#ifdef CONFIG_NETFILTER
- skb2->nfmark = skb->nfmark;
- /* Connection association is same as pre-frag packet */
- skb2->nfct = skb->nfct;
- nf_conntrack_get(skb2->nfct);
-#ifdef CONFIG_NETFILTER_DEBUG
- skb2->nf_debug = skb->nf_debug;
-#endif
-#endif
-
/*
* Put this fragment into the sending queue.
*/
@@ -908,40 +660,562 @@
return err;
}
+int
+ip_generic_getfrag(void *from, char *to, int offset, int len, int odd, struct sk_buff *skb)
+{
+ struct iovec *iov = from;
+
+ if (skb->ip_summed == CHECKSUM_HW) {
+ if (memcpy_fromiovecend(to, iov, offset, len) < 0)
+ return -EFAULT;
+ } else {
+ unsigned int csum = 0;
+ if (csum_partial_copy_fromiovecend(to, iov, offset, len, &csum) < 0)
+ return -EFAULT;
+ skb->csum = csum_block_add(skb->csum, csum, odd);
+ }
+ return 0;
+}
+
+static inline int
+skb_can_coalesce(struct sk_buff *skb, int i, struct page *page, int off)
+{
+ if (i) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i-1];
+ return page == frag->page &&
+ off == frag->page_offset+frag->size;
+ }
+ return 0;
+}
+
+static void
+skb_fill_page_desc(struct sk_buff *skb, int i, struct page *page, int off, int size)
+{
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ frag->page = page;
+ frag->page_offset = off;
+ frag->size = size;
+ skb_shinfo(skb)->nr_frags = i+1;
+}
+
+static inline unsigned int
+csum_page(struct page *page, int offset, int copy)
+{
+ char *kaddr;
+ unsigned int csum;
+ kaddr = kmap(page);
+ csum = csum_partial(kaddr + offset, copy, 0);
+ kunmap(page);
+ return csum;
+}
+
/*
- * Fetch data from kernel space and fill in checksum if needed.
+ * ip_append_data() and ip_append_page() can make one large IP datagram
+ * from many pieces of data. Each piece will be held on the socket
+ * until ip_push_pending_frames() is called. Each piece can be a page
+ * or non-page data.
+ *
+ * Not only UDP, but other transport protocols - e.g. raw sockets -
+ * can potentially use this interface.
+ *
+ * LATER: length must be adjusted for tail padding, when required.
*/
-static int ip_reply_glue_bits(const void *dptr, char *to, unsigned int offset,
- unsigned int fraglen)
+int ip_append_data(struct sock *sk,
+ int getfrag(void *from, char *to, int offset, int len,
+ int odd, struct sk_buff *skb),
+ void *from, int length, int transhdrlen,
+ struct ipcm_cookie *ipc, struct rtable *rt,
+ unsigned int flags)
{
- struct ip_reply_arg *dp = (struct ip_reply_arg*)dptr;
- u16 *pktp = (u16 *)to;
- struct iovec *iov;
- int len;
- int hdrflag = 1;
-
- iov = &dp->iov[0];
- if (offset >= iov->iov_len) {
- offset -= iov->iov_len;
- iov++;
- hdrflag = 0;
- }
- len = iov->iov_len - offset;
- if (fraglen > len) { /* overlapping. */
- dp->csum = csum_partial_copy_nocheck(iov->iov_base+offset, to, len,
- dp->csum);
- offset = 0;
- fraglen -= len;
- to += len;
- iov++;
+ struct inet_opt *inet = inet_sk(sk);
+ struct sk_buff *skb;
+
+ struct ip_options *opt = NULL;
+ int hh_len;
+ int exthdrlen;
+ int mtu;
+ int copy;
+ int err;
+ int offset = 0;
+ unsigned int maxfraglen, fragheaderlen;
+ int csummode = CHECKSUM_NONE;
+
+ if (flags&MSG_PROBE)
+ return 0;
+
+ if (skb_queue_empty(&sk->write_queue)) {
+ /*
+ * setup for corking.
+ */
+ opt = ipc->opt;
+ if (opt) {
+ if (inet->cork.opt == NULL)
+ inet->cork.opt = kmalloc(sizeof(struct ip_options)+40, sk->allocation);
+ memcpy(inet->cork.opt, opt, sizeof(struct ip_options)+opt->optlen);
+ inet->cork.flags |= IPCORK_OPT;
+ inet->cork.addr = ipc->addr;
+ }
+ dst_hold(&rt->u.dst);
+ inet->cork.fragsize = mtu = dst_pmtu(&rt->u.dst);
+ inet->cork.rt = rt;
+ inet->cork.length = 0;
+ inet->sndmsg_page = NULL;
+ inet->sndmsg_off = 0;
+ if ((exthdrlen = rt->u.dst.header_len) != 0) {
+ length += exthdrlen;
+ transhdrlen += exthdrlen;
+ }
+ } else {
+ rt = inet->cork.rt;
+ if (inet->cork.flags & IPCORK_OPT)
+ opt = inet->cork.opt;
+
+ transhdrlen = 0;
+ exthdrlen = 0;
+ mtu = inet->cork.fragsize;
+ }
+ hh_len = LL_RESERVED_SPACE(rt->u.dst.dev);
+
+ fragheaderlen = sizeof(struct iphdr) + (opt ? opt->optlen : 0);
+ maxfraglen = ((mtu-fragheaderlen) & ~7) + fragheaderlen;
+
+ if (inet->cork.length + length > 0xFFFF - fragheaderlen) {
+ ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu-exthdrlen);
+ return -EMSGSIZE;
+ }
+
+ /*
+ * transhdrlen > 0 means that this is the first fragment and we wish
+ * it not to be fragmented later.
+ */
+ if (transhdrlen &&
+ length + fragheaderlen <= maxfraglen &&
+ rt->u.dst.dev->features&(NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM) &&
+ !exthdrlen)
+ csummode = CHECKSUM_HW;
+
+ inet->cork.length += length;
+
+ /* So, what's going on in the loop below?
+ *
+ * We use the calculated fragment length to generate a chained skb;
+ * each segment is an IP fragment ready for sending to the network
+ * after adding the appropriate IP header.
+ *
+ * The known flaw is:
+ *
+ * If mtu-fragheaderlen is not 0 modulo 8, we generate an additional
+ * small fragment of length (mtu-fragheaderlen)%8, even though
+ * it is not necessary. Not a big bug, but needs a fix.
+ */
+
+ if ((skb = skb_peek_tail(&sk->write_queue)) == NULL)
+ goto alloc_new_skb;
+
+ while (length > 0) {
+ if ((copy = maxfraglen - skb->len) <= 0) {
+ char *data;
+ unsigned int datalen;
+ unsigned int fraglen;
+ unsigned int alloclen;
+ BUG_TRAP(copy == 0);
+
+alloc_new_skb:
+ datalen = maxfraglen - fragheaderlen;
+ if (datalen > length)
+ datalen = length;
+
+ fraglen = datalen + fragheaderlen;
+ if ((flags & MSG_MORE) &&
+ !(rt->u.dst.dev->features&NETIF_F_SG))
+ alloclen = maxfraglen;
+ else
+ alloclen = datalen + fragheaderlen;
+
+ /* The last fragment gets additional space at tail.
+ * Note, with MSG_MORE we overallocate on fragments,
+ * because we have no idea what fragment will be
+ * the last.
+ */
+ if (datalen == length)
+ alloclen += rt->u.dst.trailer_len;
+
+ if (transhdrlen) {
+ skb = sock_alloc_send_skb(sk,
+ alloclen + hh_len + 15,
+ (flags & MSG_DONTWAIT), &err);
+ } else {
+ skb = NULL;
+ if (atomic_read(&sk->wmem_alloc) <= 2*sk->sndbuf)
+ skb = sock_wmalloc(sk,
+ alloclen + hh_len + 15, 1,
+ sk->allocation);
+ if (unlikely(skb == NULL))
+ err = -ENOBUFS;
+ }
+ if (skb == NULL)
+ goto error;
+
+ /*
+ * Fill in the control structures
+ */
+ skb->ip_summed = csummode;
+ skb->csum = 0;
+ skb_reserve(skb, hh_len);
+
+ /*
+ * Find where to start putting bytes.
+ */
+ data = skb_put(skb, fraglen);
+ skb->nh.raw = data + exthdrlen;
+ data += fragheaderlen;
+ skb->h.raw = data + exthdrlen;
+
+ copy = datalen - transhdrlen;
+ if (copy > 0 && getfrag(from, data + transhdrlen, offset, copy, 0, skb) < 0) {
+ err = -EFAULT;
+ kfree_skb(skb);
+ goto error;
+ }
+
+ offset += copy;
+ length -= datalen;
+ transhdrlen = 0;
+ exthdrlen = 0;
+ csummode = CHECKSUM_NONE;
+
+ /*
+ * Put the packet on the pending queue.
+ */
+ __skb_queue_tail(&sk->write_queue, skb);
+ continue;
+ }
+
+ if (copy > length)
+ copy = length;
+
+ if (!(rt->u.dst.dev->features&NETIF_F_SG)) {
+ unsigned int off;
+
+ off = skb->len;
+ if (getfrag(from, skb_put(skb, copy),
+ offset, copy, off, skb) < 0) {
+ __skb_trim(skb, off);
+ err = -EFAULT;
+ goto error;
+ }
+ } else {
+ int i = skb_shinfo(skb)->nr_frags;
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i-1];
+ struct page *page = inet->sndmsg_page;
+ int off = inet->sndmsg_off;
+ unsigned int left;
+
+ if (page && (left = PAGE_SIZE - off) > 0) {
+ if (copy >= left)
+ copy = left;
+ if (page != frag->page) {
+ if (i == MAX_SKB_FRAGS) {
+ err = -EMSGSIZE;
+ goto error;
+ }
+ get_page(page);
+ skb_fill_page_desc(skb, i, page, inet->sndmsg_off, 0);
+ frag = &skb_shinfo(skb)->frags[i];
+ }
+ } else if (i < MAX_SKB_FRAGS) {
+ if (copy > PAGE_SIZE)
+ copy = PAGE_SIZE;
+ page = alloc_pages(sk->allocation, 0);
+ if (page == NULL) {
+ err = -ENOMEM;
+ goto error;
+ }
+ inet->sndmsg_page = page;
+ inet->sndmsg_off = 0;
+
+ skb_fill_page_desc(skb, i, page, 0, 0);
+ frag = &skb_shinfo(skb)->frags[i];
+ skb->truesize += PAGE_SIZE;
+ atomic_add(PAGE_SIZE, &sk->wmem_alloc);
+ } else {
+ err = -EMSGSIZE;
+ goto error;
+ }
+ if (getfrag(from, page_address(frag->page)+frag->page_offset+frag->size, offset, copy, skb->len, skb) < 0) {
+ err = -EFAULT;
+ goto error;
+ }
+ inet->sndmsg_off += copy;
+ frag->size += copy;
+ skb->len += copy;
+ skb->data_len += copy;
+ }
+ offset += copy;
+ length -= copy;
}
- dp->csum = csum_partial_copy_nocheck(iov->iov_base+offset, to, fraglen,
- dp->csum);
+ return 0;
- if (hdrflag && dp->csumoffset)
- *(pktp + dp->csumoffset) = csum_fold(dp->csum); /* fill in checksum */
- return 0;
+error:
+ inet->cork.length -= length;
+ IP_INC_STATS(IpOutDiscards);
+ return err;
+}
+
+ssize_t ip_append_page(struct sock *sk, struct page *page,
+ int offset, size_t size, int flags)
+{
+ struct inet_opt *inet = inet_sk(sk);
+ struct sk_buff *skb;
+ struct rtable *rt;
+ struct ip_options *opt = NULL;
+ int hh_len;
+ int mtu;
+ int len;
+ int err;
+ unsigned int maxfraglen, fragheaderlen;
+
+ if (inet->hdrincl)
+ return -EPERM;
+
+ if (flags&MSG_PROBE)
+ return 0;
+
+ if (skb_queue_empty(&sk->write_queue))
+ return -EINVAL;
+
+ rt = inet->cork.rt;
+ if (inet->cork.flags & IPCORK_OPT)
+ opt = inet->cork.opt;
+
+ if (!(rt->u.dst.dev->features&NETIF_F_SG))
+ return -EOPNOTSUPP;
+
+ hh_len = LL_RESERVED_SPACE(rt->u.dst.dev);
+ mtu = inet->cork.fragsize;
+
+ fragheaderlen = sizeof(struct iphdr) + (opt ? opt->optlen : 0);
+ maxfraglen = ((mtu-fragheaderlen) & ~7) + fragheaderlen;
+
+ if (inet->cork.length + size > 0xFFFF - fragheaderlen) {
+ ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport, mtu);
+ return -EMSGSIZE;
+ }
+
+ if ((skb = skb_peek_tail(&sk->write_queue)) == NULL)
+ return -EINVAL;
+
+ inet->cork.length += size;
+
+ while (size > 0) {
+ int i;
+ if ((len = maxfraglen - skb->len) <= 0) {
+ char *data;
+ struct iphdr *iph;
+ BUG_TRAP(len == 0);
+
+ skb = sock_wmalloc(sk, fragheaderlen + hh_len + 15, 1,
+ sk->allocation);
+ if (unlikely(!skb)) {
+ err = -ENOBUFS;
+ goto error;
+ }
+
+ /*
+ * Fill in the control structures
+ */
+ skb->ip_summed = CHECKSUM_NONE;
+ skb->csum = 0;
+ skb_reserve(skb, hh_len);
+
+ /*
+ * Find where to start putting bytes.
+ */
+ data = skb_put(skb, fragheaderlen);
+ skb->nh.iph = iph = (struct iphdr *)data;
+ data += fragheaderlen;
+ skb->h.raw = data;
+
+ /*
+ * Put the packet on the pending queue.
+ */
+ __skb_queue_tail(&sk->write_queue, skb);
+ continue;
+ }
+
+ i = skb_shinfo(skb)->nr_frags;
+ if (len > size)
+ len = size;
+ if (skb_can_coalesce(skb, i, page, offset)) {
+ skb_shinfo(skb)->frags[i-1].size += len;
+ } else if (i < MAX_SKB_FRAGS) {
+ get_page(page);
+ skb_fill_page_desc(skb, i, page, offset, len);
+ } else {
+ err = -EMSGSIZE;
+ goto error;
+ }
+
+ if (skb->ip_summed == CHECKSUM_NONE) {
+ unsigned int csum;
+ csum = csum_page(page, offset, len);
+ skb->csum = csum_block_add(skb->csum, csum, skb->len);
+ }
+
+ skb->len += len;
+ skb->data_len += len;
+ offset += len;
+ size -= len;
+ }
+ return 0;
+
+error:
+ inet->cork.length -= size;
+ IP_INC_STATS(IpOutDiscards);
+ return err;
+}
+
+/*
+ * Combine all pending IP fragments on the socket into one IP
+ * datagram and push it out.
+ */
+int ip_push_pending_frames(struct sock *sk)
+{
+ struct sk_buff *skb, *tmp_skb;
+ struct sk_buff **tail_skb;
+ struct inet_opt *inet = inet_sk(sk);
+ struct ip_options *opt = NULL;
+ struct rtable *rt = inet->cork.rt;
+ struct iphdr *iph;
+ int df = 0;
+ __u8 ttl;
+ int err = 0;
+
+ if ((skb = __skb_dequeue(&sk->write_queue)) == NULL)
+ goto out;
+ tail_skb = &(skb_shinfo(skb)->frag_list);
+
+ /* move skb->data to ip header from ext header */
+ if (skb->data < skb->nh.raw)
+ __skb_pull(skb, skb->nh.raw - skb->data);
+ while ((tmp_skb = __skb_dequeue(&sk->write_queue)) != NULL) {
+ __skb_pull(tmp_skb, skb->h.raw - skb->nh.raw);
+ *tail_skb = tmp_skb;
+ tail_skb = &(tmp_skb->next);
+ skb->len += tmp_skb->len;
+ skb->data_len += tmp_skb->len;
+#if 0 /* Logically correct, but useless work, ip_fragment() will have to undo */
+ skb->truesize += tmp_skb->truesize;
+ __sock_put(tmp_skb->sk);
+ tmp_skb->destructor = NULL;
+ tmp_skb->sk = NULL;
+#endif
+ }
+
+ /* Unless the user demanded real pmtu discovery (IP_PMTUDISC_DO),
+ * we allow fragmenting the frame generated here. No matter how
+ * transforms change the size of the packet, it will come out.
+ */
+ if (inet->pmtudisc != IP_PMTUDISC_DO)
+ skb->local_df = 1;
+
+ /* The DF bit is set when we want to see DF on outgoing frames.
+ * If local_df is set too, we still allow this frame to be
+ * fragmented locally. */
+ if (inet->pmtudisc == IP_PMTUDISC_DO ||
+ (!skb_shinfo(skb)->frag_list && ip_dont_fragment(sk, &rt->u.dst)))
+ df = htons(IP_DF);
+
+ if (inet->cork.flags & IPCORK_OPT)
+ opt = inet->cork.opt;
+
+ if (rt->rt_type == RTN_MULTICAST)
+ ttl = inet->mc_ttl;
+ else
+ ttl = inet->ttl;
+
+ iph = (struct iphdr *)skb->data;
+ iph->version = 4;
+ iph->ihl = 5;
+ if (opt) {
+ iph->ihl += opt->optlen>>2;
+ ip_options_build(skb, opt, inet->cork.addr, rt, 0);
+ }
+ iph->tos = inet->tos;
+ iph->tot_len = htons(skb->len);
+ iph->frag_off = df;
+ if (!df) {
+ __ip_select_ident(iph, &rt->u.dst);
+ } else {
+ iph->id = htons(inet->id++);
+ }
+ iph->ttl = ttl;
+ iph->protocol = sk->protocol;
+ iph->saddr = rt->rt_src;
+ iph->daddr = rt->rt_dst;
+ ip_send_check(iph);
+
+ skb->priority = sk->priority;
+ skb->dst = dst_clone(&rt->u.dst);
+
+ /* Netfilter gets the whole, not yet fragmented skb. */
+ err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL,
+ skb->dst->dev, dst_output);
+ if (err) {
+ if (err > 0)
+ err = inet->recverr ? net_xmit_errno(err) : 0;
+ if (err)
+ goto error;
+ }
+
+out:
+ inet->cork.flags &= ~IPCORK_OPT;
+ if (inet->cork.rt) {
+ ip_rt_put(inet->cork.rt);
+ inet->cork.rt = NULL;
+ }
+ return err;
+
+error:
+ IP_INC_STATS(IpOutDiscards);
+ goto out;
+}
+
+/*
+ * Throw away all pending data on the socket.
+ */
+void ip_flush_pending_frames(struct sock *sk)
+{
+ struct inet_opt *inet = inet_sk(sk);
+ struct sk_buff *skb;
+
+ while ((skb = __skb_dequeue_tail(&sk->write_queue)) != NULL)
+ kfree_skb(skb);
+
+ inet->cork.flags &= ~IPCORK_OPT;
+ if (inet->cork.opt) {
+ kfree(inet->cork.opt);
+ inet->cork.opt = NULL;
+ }
+ if (inet->cork.rt) {
+ ip_rt_put(inet->cork.rt);
+ inet->cork.rt = NULL;
+ }
+}
+
+
+/*
+ * Fetch data from kernel space and fill in checksum if needed.
+ */
+static int ip_reply_glue_bits(void *dptr, char *to, int offset,
+ int len, int odd, struct sk_buff *skb)
+{
+ unsigned int csum;
+
+ csum = csum_partial_copy_nocheck(dptr+offset, to, len, 0);
+ skb->csum = csum_block_add(skb->csum, csum, odd);
+ return 0;
}
/*
@@ -950,6 +1224,8 @@
*
* Should run single threaded per socket because it uses the sock
* structure to pass arguments.
+ *
+ * LATER: switch from ip_build_xmit to ip_append_*
*/
void ip_send_reply(struct sock *sk, struct sk_buff *skb, struct ip_reply_arg *arg,
unsigned int len)
@@ -975,8 +1251,19 @@
daddr = replyopts.opt.faddr;
}
- if (ip_route_output(&rt, daddr, rt->rt_spec_dst, RT_TOS(skb->nh.iph->tos), 0))
- return;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = rt->rt_spec_dst,
+ .tos = RT_TOS(skb->nh.iph->tos) } },
+ /* Not quite clean, but right. */
+ .uli_u = { .ports =
+ { .sport = skb->h.th->dest,
+ .dport = skb->h.th->source } },
+ .proto = sk->protocol };
+ if (ip_route_output_key(&rt, &fl))
+ return;
+ }
/* And let IP do all the hard work.
@@ -988,7 +1275,15 @@
sk->protinfo.af_inet.tos = skb->nh.iph->tos;
sk->priority = skb->priority;
sk->protocol = skb->nh.iph->protocol;
- ip_build_xmit(sk, ip_reply_glue_bits, arg, len, &ipc, rt, MSG_DONTWAIT);
+ ip_append_data(sk, ip_reply_glue_bits, arg->iov->iov_base, len, 0,
+ &ipc, rt, MSG_DONTWAIT);
+ if ((skb = skb_peek(&sk->write_queue)) != NULL) {
+ if (arg->csumoffset >= 0)
+ *((u16 *)skb->h.raw + arg->csumoffset) = csum_fold(csum_add(skb->csum, arg->csum));
+ skb->ip_summed = CHECKSUM_NONE;
+ ip_push_pending_frames(sk);
+ }
+
bh_unlock_sock(sk);
ip_rt_put(rt);
diff -Nru a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
--- a/net/ipv4/ip_sockglue.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/ip_sockglue.c Thu May 8 10:41:37 2003
@@ -36,6 +36,7 @@
#include <linux/route.h>
#include <linux/mroute.h>
#include <net/route.h>
+#include <net/xfrm.h>
#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
#include <net/transp_v6.h>
#endif
@@ -380,6 +381,7 @@
int ip_setsockopt(struct sock *sk, int level, int optname, char *optval, int optlen)
{
+ struct inet_opt *inet = inet_sk(sk);
int val=0,err;
if (level != SOL_IP)
@@ -431,8 +433,10 @@
(!((1<<sk->state)&(TCPF_LISTEN|TCPF_CLOSE))
&& sk->daddr != LOOPBACK4_IPV6)) {
#endif
+ if (inet->opt)
+ tp->ext_header_len -= inet->opt->optlen;
if (opt)
- tp->ext_header_len = opt->optlen;
+ tp->ext_header_len += opt->optlen;
tcp_sync_mss(sk, tp->pmtu_cookie);
#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
}
@@ -616,6 +620,11 @@
sk->protinfo.af_inet.freebind = !!val;
break;
+ case IP_IPSEC_POLICY:
+ case IP_XFRM_POLICY:
+ err = xfrm_user_policy(sk, optname, optval, optlen);
+ break;
+
default:
#ifdef CONFIG_NETFILTER
err = nf_setsockopt(sk, PF_INET, optname, optval,
@@ -717,7 +726,7 @@
val = 0;
dst = sk_dst_get(sk);
if (dst) {
- val = dst->pmtu;
+ val = dst_pmtu(dst) - dst->header_len;
dst_release(dst);
}
if (!val) {
diff -Nru a/net/ipv4/ipcomp.c b/net/ipv4/ipcomp.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/ipcomp.c Thu May 8 10:41:38 2003
@@ -0,0 +1,376 @@
+/*
+ * IP Payload Compression Protocol (IPComp) - RFC3173.
+ *
+ * Copyright (c) 2003 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * Todo:
+ * - Tunable compression parameters.
+ * - Compression stats.
+ * - Adaptive compression.
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/icmp.h>
+#include <net/esp.h>
+
+#define IPCOMP_SCRATCH_SIZE 65400
+
+struct ipcomp_hdr {
+ u8 nexthdr;
+ u8 flags;
+ u16 cpi;
+};
+
+struct ipcomp_data {
+ u16 threshold;
+ u8 *scratch;
+ struct crypto_tfm *tfm;
+};
+
+static int ipcomp_decompress(struct xfrm_state *x, struct sk_buff *skb)
+{
+ int err, plen, dlen;
+ struct iphdr *iph;
+ struct ipcomp_data *ipcd = x->data;
+ u8 *start, *scratch = ipcd->scratch;
+
+ plen = skb->len;
+ dlen = IPCOMP_SCRATCH_SIZE;
+ start = skb->data;
+
+ err = crypto_comp_decompress(ipcd->tfm, start, plen, scratch, &dlen);
+ if (err)
+ goto out;
+
+ if (dlen < (plen + sizeof(struct ipcomp_hdr))) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ err = pskb_expand_head(skb, 0, dlen - plen, GFP_ATOMIC);
+ if (err)
+ goto out;
+
+ skb_put(skb, dlen - plen);
+ memcpy(skb->data, scratch, dlen);
+ iph = skb->nh.iph;
+ iph->tot_len = htons(dlen + iph->ihl * 4);
+out:
+ return err;
+}
+
+static int ipcomp_input(struct xfrm_state *x,
+ struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ u8 nexthdr;
+ int err = 0;
+ struct iphdr *iph;
+ union {
+ struct iphdr iph;
+ char buf[60];
+ } tmp_iph;
+
+
+ if ((skb_is_nonlinear(skb) || skb_cloned(skb)) &&
+ skb_linearize(skb, GFP_ATOMIC) != 0) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ skb->ip_summed = CHECKSUM_NONE;
+
+ /* Remove ipcomp header and decompress original payload */
+ iph = skb->nh.iph;
+ memcpy(&tmp_iph, iph, iph->ihl * 4);
+ nexthdr = *(u8 *)skb->data;
+ skb_pull(skb, sizeof(struct ipcomp_hdr));
+ skb->nh.raw += sizeof(struct ipcomp_hdr);
+ memcpy(skb->nh.raw, &tmp_iph, tmp_iph.iph.ihl * 4);
+ iph = skb->nh.iph;
+ iph->tot_len = htons(ntohs(iph->tot_len) - sizeof(struct ipcomp_hdr));
+ iph->protocol = nexthdr;
+ skb->h.raw = skb->data;
+ err = ipcomp_decompress(x, skb);
+
+out:
+ return err;
+}
+
+static int ipcomp_compress(struct xfrm_state *x, struct sk_buff *skb)
+{
+ int err, plen, dlen, ihlen;
+ struct iphdr *iph = skb->nh.iph;
+ struct ipcomp_data *ipcd = x->data;
+ u8 *start, *scratch = ipcd->scratch;
+
+ ihlen = iph->ihl * 4;
+ plen = skb->len - ihlen;
+ dlen = IPCOMP_SCRATCH_SIZE;
+ start = skb->data + ihlen;
+
+ err = crypto_comp_compress(ipcd->tfm, start, plen, scratch, &dlen);
+ if (err)
+ goto out;
+
+ if ((dlen + sizeof(struct ipcomp_hdr)) >= plen) {
+ err = -EMSGSIZE;
+ goto out;
+ }
+
+ memcpy(start, scratch, dlen);
+ pskb_trim(skb, ihlen + dlen);
+
+out:
+ return err;
+}
+
+static void ipcomp_tunnel_encap(struct xfrm_state *x, struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct iphdr *iph, *top_iph;
+
+ iph = skb->nh.iph;
+ top_iph = (struct iphdr *)skb_push(skb, sizeof(struct iphdr));
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+ top_iph->tos = iph->tos;
+ top_iph->tot_len = htons(skb->len);
+ if (!(iph->frag_off&htons(IP_DF))) {
+#ifdef NETIF_F_TSO
+ __ip_select_ident(top_iph, dst, 0);
+#else
+ __ip_select_ident(top_iph, dst);
+#endif
+ }
+ top_iph->ttl = iph->ttl;
+ top_iph->check = 0;
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ top_iph->frag_off = iph->frag_off&~htons(IP_MF|IP_OFFSET);
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ skb->nh.raw = skb->data;
+}
+
+static int ipcomp_output(struct sk_buff *skb)
+{
+ int err;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+ struct ipcomp_hdr *ipch;
+ struct ipcomp_data *ipcd = x->data;
+ union {
+ struct iphdr iph;
+ char buf[60];
+ } tmp_iph;
+
+ if (skb->ip_summed == CHECKSUM_HW && skb_checksum_help(skb) == NULL) {
+ err = -EINVAL;
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_check_output(x, skb, AF_INET);
+ if (err)
+ goto error;
+
+ /* Don't bother compressing */
+ if (skb->len < ipcd->threshold) {
+ if (x->props.mode) {
+ ipcomp_tunnel_encap(x, skb);
+ iph = skb->nh.iph;
+ iph->protocol = IPPROTO_IPIP;
+ ip_send_check(iph);
+ }
+ goto out_ok;
+ }
+
+ if (x->props.mode)
+ ipcomp_tunnel_encap(x, skb);
+
+ if ((skb_is_nonlinear(skb) || skb_cloned(skb)) &&
+ skb_linearize(skb, GFP_ATOMIC) != 0) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ err = ipcomp_compress(x, skb);
+ if (err) {
+ if (err == -EMSGSIZE) {
+ if (x->props.mode) {
+ iph = skb->nh.iph;
+ iph->protocol = IPPROTO_IPIP;
+ ip_send_check(iph);
+ }
+ goto out_ok;
+ }
+ goto error;
+ }
+
+ /* Install ipcomp header, convert into ipcomp datagram. */
+ iph = skb->nh.iph;
+ memcpy(&tmp_iph, iph, iph->ihl * 4);
+ top_iph = (struct iphdr *)skb_push(skb, sizeof(struct ipcomp_hdr));
+ memcpy(top_iph, &tmp_iph, iph->ihl * 4);
+ iph = top_iph;
+ iph->tot_len = htons(skb->len);
+ iph->protocol = IPPROTO_COMP;
+ iph->check = 0;
+ ipch = (struct ipcomp_hdr *)((char *)iph + iph->ihl * 4);
+ ipch->nexthdr = x->props.mode ? IPPROTO_IPIP : tmp_iph.iph.protocol;
+ ipch->flags = 0;
+ ipch->cpi = htons((u16 )ntohl(x->id.spi));
+ ip_send_check(iph);
+ skb->nh.raw = skb->data;
+
+out_ok:
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock_bh(&x->lock);
+
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ err = NET_XMIT_BYPASS;
+
+out_exit:
+ return err;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ goto out_exit;
+}
+
+static void ipcomp4_err(struct sk_buff *skb, u32 info)
+{
+ u32 spi;
+ struct iphdr *iph = (struct iphdr *)skb->data;
+ struct ipcomp_hdr *ipch = (struct ipcomp_hdr *)(skb->data+(iph->ihl<<2));
+ struct xfrm_state *x;
+
+ if (skb->h.icmph->type != ICMP_DEST_UNREACH ||
+ skb->h.icmph->code != ICMP_FRAG_NEEDED)
+ return;
+
+ spi = ntohl(ntohs(ipch->cpi));
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr,
+ spi, IPPROTO_COMP, AF_INET);
+ if (!x)
+ return;
+ printk(KERN_DEBUG "pmtu discovery on SA IPCOMP/%08x/%u.%u.%u.%u\n",
+ spi, NIPQUAD(iph->daddr));
+ xfrm_state_put(x);
+}
+
+static void ipcomp_free_data(struct ipcomp_data *ipcd)
+{
+ if (ipcd->tfm)
+ crypto_free_tfm(ipcd->tfm);
+ if (ipcd->scratch)
+ kfree(ipcd->scratch);
+}
+
+static void ipcomp_destroy(struct xfrm_state *x)
+{
+ struct ipcomp_data *ipcd = x->data;
+ ipcomp_free_data(ipcd);
+ kfree(ipcd);
+}
+
+static int ipcomp_init_state(struct xfrm_state *x, void *args)
+{
+ int err = -ENOMEM;
+ struct ipcomp_data *ipcd;
+ struct xfrm_algo_desc *calg_desc;
+
+ ipcd = kmalloc(sizeof(*ipcd), GFP_KERNEL);
+ if (!ipcd)
+ goto error;
+
+ memset(ipcd, 0, sizeof(*ipcd));
+ x->props.header_len = sizeof(struct ipcomp_hdr);
+ if (x->props.mode)
+ x->props.header_len += sizeof(struct iphdr);
+ x->data = ipcd;
+
+ ipcd->scratch = kmalloc(IPCOMP_SCRATCH_SIZE, GFP_KERNEL);
+ if (!ipcd->scratch)
+ goto error;
+
+ ipcd->tfm = crypto_alloc_tfm(x->calg->alg_name, 0);
+ if (!ipcd->tfm)
+ goto error;
+
+ calg_desc = xfrm_calg_get_byname(x->calg->alg_name);
+ BUG_ON(!calg_desc);
+ ipcd->threshold = calg_desc->uinfo.comp.threshold;
+ err = 0;
+out:
+ return err;
+
+error:
+ if (ipcd) {
+ ipcomp_free_data(ipcd);
+ kfree(ipcd);
+ }
+ goto out;
+}
+
+static struct xfrm_type ipcomp_type =
+{
+ .description = "IPCOMP4",
+ .proto = IPPROTO_COMP,
+ .init_state = ipcomp_init_state,
+ .destructor = ipcomp_destroy,
+ .input = ipcomp_input,
+ .output = ipcomp_output
+};
+
+static struct inet_protocol ipcomp4_protocol = {
+ .handler = xfrm4_rcv,
+ .err_handler = ipcomp4_err,
+ .no_policy = 1,
+};
+
+static int __init ipcomp4_init(void)
+{
+ SET_MODULE_OWNER(&ipcomp_type);
+ if (xfrm_register_type(&ipcomp_type, AF_INET) < 0) {
+ printk(KERN_INFO "ipcomp init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet_add_protocol(&ipcomp4_protocol, IPPROTO_COMP) < 0) {
+ printk(KERN_INFO "ipcomp init: can't add protocol\n");
+ xfrm_unregister_type(&ipcomp_type, AF_INET);
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static void __exit ipcomp4_fini(void)
+{
+ if (inet_del_protocol(&ipcomp4_protocol, IPPROTO_COMP) < 0)
+ printk(KERN_INFO "ip ipcomp close: can't remove protocol\n");
+ if (xfrm_unregister_type(&ipcomp_type, AF_INET) < 0)
+ printk(KERN_INFO "ip ipcomp close: can't remove xfrm type\n");
+}
+
+module_init(ipcomp4_init);
+module_exit(ipcomp4_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("IP Payload Compression Protocol (IPComp) - RFC3173");
+MODULE_AUTHOR("James Morris <jmorris@intercode.com.au>");
+
diff -Nru a/net/ipv4/ipconfig.c b/net/ipv4/ipconfig.c
--- a/net/ipv4/ipconfig.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/ipconfig.c Thu May 8 10:41:36 2003
@@ -655,7 +655,7 @@
struct net_device *dev = d->dev;
struct sk_buff *skb;
struct bootp_pkt *b;
- int hh_len = (dev->hard_header_len + 15) & ~15;
+ int hh_len = LL_RESERVED_SPACE(dev);
struct iphdr *h;
/* Allocate packet */
diff -Nru a/net/ipv4/ipip.c b/net/ipv4/ipip.c
--- a/net/ipv4/ipip.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/ipip.c Thu May 8 10:41:37 2003
@@ -115,6 +115,7 @@
#include <net/protocol.h>
#include <net/ipip.h>
#include <net/inet_ecn.h>
+#include <net/xfrm.h>
#define HASH_SIZE 16
#define HASH(addr) ((addr^(addr>>4))&0xF)
@@ -207,7 +208,7 @@
write_unlock_bh(&ipip_lock);
}
-struct ip_tunnel * ipip_tunnel_locate(struct ip_tunnel_parm *parms, int create)
+static struct ip_tunnel * ipip_tunnel_locate(struct ip_tunnel_parm *parms, int create)
{
u32 remote = parms->iph.daddr;
u32 local = parms->iph.saddr;
@@ -289,7 +290,7 @@
dev_put(dev);
}
-void ipip_err(struct sk_buff *skb, u32 info)
+static void ipip_err(struct sk_buff *skb, void *__unused)
{
#ifndef I_WISH_WORLD_WERE_PERFECT
@@ -355,6 +356,7 @@
int rel_code = 0;
int rel_info = 0;
struct sk_buff *skb2;
+ struct flowi fl;
struct rtable *rt;
if (len < hlen + sizeof(struct iphdr))
@@ -417,7 +419,11 @@
skb2->nh.raw = skb2->data;
/* Try to guess incoming interface */
- if (ip_route_output(&rt, eiph->saddr, 0, RT_TOS(eiph->tos), 0)) {
+ memset(&fl, 0, sizeof(fl));
+ fl.fl4_daddr = eiph->saddr;
+ fl.fl4_tos = RT_TOS(eiph->tos);
+ fl.proto = IPPROTO_IPIP;
+ if (ip_route_output_key(&rt, &fl)) {
kfree_skb(skb2);
return;
}
@@ -427,8 +433,11 @@
if (rt->rt_flags&RTCF_LOCAL) {
ip_rt_put(rt);
rt = NULL;
- if (ip_route_output(&rt, eiph->daddr, eiph->saddr, eiph->tos, 0) ||
- rt->u.dst.dev->type != ARPHRD_IPGRE) {
+ fl.fl4_daddr = eiph->daddr;
+ fl.fl4_src = eiph->saddr;
+ fl.fl4_tos = eiph->tos;
+ if (ip_route_output_key(&rt, &fl) ||
+ rt->u.dst.dev->type != ARPHRD_TUNNEL) {
ip_rt_put(rt);
kfree_skb(skb2);
return;
@@ -436,7 +445,7 @@
} else {
ip_rt_put(rt);
if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos, skb2->dev) ||
- skb2->dst->dev->type != ARPHRD_IPGRE) {
+ skb2->dst->dev->type != ARPHRD_TUNNEL) {
kfree_skb(skb2);
return;
}
@@ -444,11 +453,11 @@
/* change mtu on this route */
if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED) {
- if (rel_info > skb2->dst->pmtu) {
+ if (rel_info > dst_pmtu(skb2->dst)) {
kfree_skb(skb2);
return;
}
- skb2->dst->pmtu = rel_info;
+ skb2->dst->ops->update_pmtu(skb2->dst, rel_info);
rel_info = htonl(rel_info);
} else if (type == ICMP_TIME_EXCEEDED) {
struct ip_tunnel *t = (struct ip_tunnel*)skb2->dev->priv;
@@ -473,7 +482,7 @@
IP_ECN_set_ce(inner_iph);
}
-int ipip_rcv(struct sk_buff *skb)
+static int ipip_rcv(struct sk_buff *skb)
{
struct iphdr *iph;
struct ip_tunnel *tunnel;
@@ -509,16 +518,8 @@
}
read_unlock(&ipip_lock);
- icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PROT_UNREACH, 0);
out:
- kfree_skb(skb);
- return 0;
-}
-
-/* Need this wrapper because NF_HOOK takes the function address */
-static inline int do_ip_send(struct sk_buff *skb)
-{
- return ip_send(skb);
+ return -1;
}
/*
@@ -562,9 +563,17 @@
goto tx_error_icmp;
}
- if (ip_route_output(&rt, dst, tiph->saddr, RT_TOS(tos), tunnel->parms.link)) {
- tunnel->stat.tx_carrier_errors++;
- goto tx_error_icmp;
+ {
+ struct flowi fl = { .oif = tunnel->parms.link,
+ .nl_u = { .ip4_u =
+ { .daddr = dst,
+ .saddr = tiph->saddr,
+ .tos = RT_TOS(tos) } },
+ .proto = IPPROTO_IPIP };
+ if (ip_route_output_key(&rt, &fl)) {
+ tunnel->stat.tx_carrier_errors++;
+ goto tx_error_icmp;
+ }
}
tdev = rt->u.dst.dev;
@@ -575,17 +584,17 @@
}
if (tiph->frag_off)
- mtu = rt->u.dst.pmtu - sizeof(struct iphdr);
+ mtu = dst_pmtu(&rt->u.dst) - sizeof(struct iphdr);
else
- mtu = skb->dst ? skb->dst->pmtu : dev->mtu;
+ mtu = skb->dst ? dst_pmtu(skb->dst) : dev->mtu;
if (mtu < 68) {
tunnel->stat.collisions++;
ip_rt_put(rt);
goto tx_error;
}
- if (skb->dst && mtu < skb->dst->pmtu)
- skb->dst->pmtu = mtu;
+ if (skb->dst)
+ skb->dst->ops->update_pmtu(skb->dst, mtu);
df |= (old_iph->frag_off&htons(IP_DF));
@@ -608,7 +617,7 @@
/*
* Okay, now see if we can stuff it in the buffer as-is.
*/
- max_headroom = (((tdev->hard_header_len+15)&~15)+sizeof(struct iphdr));
+ max_headroom = (LL_RESERVED_SPACE(tdev)+sizeof(struct iphdr));
if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
@@ -824,8 +833,14 @@
ipip_tunnel_init_gen(dev);
if (iph->daddr) {
+ struct flowi fl = { .oif = tunnel->parms.link,
+ .nl_u = { .ip4_u =
+ { .daddr = iph->daddr,
+ .saddr = iph->saddr,
+ .tos = RT_TOS(iph->tos) } },
+ .proto = IPPROTO_IPIP };
struct rtable *rt;
- if (!ip_route_output(&rt, iph->daddr, iph->saddr, RT_TOS(iph->tos), tunnel->parms.link)) {
+ if (!ip_route_output_key(&rt, &fl)) {
tdev = rt->u.dst.dev;
ip_rt_put(rt);
}
@@ -858,7 +873,7 @@
}
#endif
-int __init ipip_fb_tunnel_init(struct net_device *dev)
+static int __init ipip_fb_tunnel_init(struct net_device *dev)
{
struct iphdr *iph;
@@ -878,11 +893,9 @@
return 0;
}
-static struct inet_protocol ipip_protocol = {
- handler: ipip_rcv,
- err_handler: ipip_err,
- protocol: IPPROTO_IPIP,
- name: "IPIP"
+static struct xfrm_tunnel ipip_handler = {
+ .handler = ipip_rcv,
+ .err_handler = ipip_err,
};
static char banner[] __initdata =
@@ -892,16 +905,20 @@
{
printk(banner);
+ if (xfrm4_tunnel_register(&ipip_handler) < 0) {
+ printk(KERN_INFO "ipip init: can't register tunnel\n");
+ return -EAGAIN;
+ }
+
ipip_fb_tunnel_dev.priv = (void*)&ipip_fb_tunnel;
register_netdev(&ipip_fb_tunnel_dev);
- inet_add_protocol(&ipip_protocol);
return 0;
}
static void __exit ipip_fini(void)
{
- if ( inet_del_protocol(&ipip_protocol) < 0 )
- printk(KERN_INFO "ipip close: can't remove protocol\n");
+ if (xfrm4_tunnel_deregister(&ipip_handler) < 0)
+ printk(KERN_INFO "ipip close: can't deregister tunnel\n");
unregister_netdev(&ipip_fb_tunnel_dev);
}
diff -Nru a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
--- a/net/ipv4/ipmr.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/ipmr.c Thu May 8 10:41:37 2003
@@ -108,7 +108,7 @@
static int ipmr_cache_report(struct sk_buff *pkt, vifi_t vifi, int assert);
static int ipmr_fill_mroute(struct sk_buff *skb, struct mfc_cache *c, struct rtmsg *rtm);
-extern struct inet_protocol pim_protocol;
+static struct inet_protocol pim_protocol;
static struct timer_list ipmr_expire_timer;
@@ -928,23 +928,28 @@
#ifdef CONFIG_IP_PIMSM
case MRT_PIM:
{
- int v;
+ int v, ret;
if(get_user(v,(int *)optval))
return -EFAULT;
v = (v)?1:0;
rtnl_lock();
+ ret = 0;
if (v != mroute_do_pim) {
mroute_do_pim = v;
mroute_do_assert = v;
#ifdef CONFIG_IP_PIMSM_V2
if (mroute_do_pim)
- inet_add_protocol(&pim_protocol);
+ ret = inet_add_protocol(&pim_protocol,
+ IPPROTO_PIM);
else
- inet_del_protocol(&pim_protocol);
+ ret = inet_del_protocol(&pim_protocol,
+ IPPROTO_PIM);
+ if (ret < 0)
+ ret = -EAGAIN;
#endif
}
rtnl_unlock();
- return 0;
+ return ret;
}
#endif
/*
@@ -1106,10 +1111,10 @@
{
struct dst_entry *dst = skb->dst;
- if (skb->len <= dst->pmtu)
- return dst->output(skb);
+ if (skb->len <= dst_pmtu(dst))
+ return dst_output(skb);
else
- return ip_fragment(skb, dst->output);
+ return ip_fragment(skb, dst_output);
}
/*
@@ -1141,17 +1146,28 @@
#endif
if (vif->flags&VIFF_TUNNEL) {
- if (ip_route_output(&rt, vif->remote, vif->local, RT_TOS(iph->tos), vif->link))
+ struct flowi fl = { .oif = vif->link,
+ .nl_u = { .ip4_u =
+ { .daddr = vif->remote,
+ .saddr = vif->local,
+ .tos = RT_TOS(iph->tos) } },
+ .proto = IPPROTO_IPIP };
+ if (ip_route_output_key(&rt, &fl))
return;
encap = sizeof(struct iphdr);
} else {
- if (ip_route_output(&rt, iph->daddr, 0, RT_TOS(iph->tos), vif->link))
+ struct flowi fl = { .oif = vif->link,
+ .nl_u = { .ip4_u =
+ { .daddr = iph->daddr,
+ .tos = RT_TOS(iph->tos) } },
+ .proto = IPPROTO_IPIP };
+ if (ip_route_output_key(&rt, &fl))
return;
}
dev = rt->u.dst.dev;
- if (skb->len+encap > rt->u.dst.pmtu && (ntohs(iph->frag_off) & IP_DF)) {
+ if (skb->len+encap > dst_pmtu(&rt->u.dst) && (ntohs(iph->frag_off) & IP_DF)) {
/* Do not fragment multicasts. Alas, IPv4 does not
allow to send ICMP, so that packets will disappear
to blackhole.
@@ -1162,7 +1178,7 @@
return;
}
- encap += dev->hard_header_len;
+ encap += LL_RESERVED_SPACE(dev);
if (skb_headroom(skb) < encap || skb_cloned(skb) || !last)
skb2 = skb_realloc_headroom(skb, (encap + 15)&~15);
@@ -1239,7 +1255,7 @@
if (vif_table[vif].dev != skb->dev) {
int true_vifi;
- if (((struct rtable*)skb->dst)->key.iif == 0) {
+ if (((struct rtable*)skb->dst)->fl.iif == 0) {
/* It is our own packet, looped back.
Very complicated situation...
@@ -1727,15 +1743,8 @@
#endif
#ifdef CONFIG_IP_PIMSM_V2
-struct inet_protocol pim_protocol =
-{
- pim_rcv, /* PIM handler */
- NULL, /* PIM error control */
- NULL, /* next */
- IPPROTO_PIM, /* protocol ID */
- 0, /* copy */
- NULL, /* data */
- "PIM" /* name */
+static struct inet_protocol pim_protocol = {
+ .handler = pim_rcv,
};
#endif
diff -Nru a/net/ipv4/netfilter/ip_conntrack_standalone.c b/net/ipv4/netfilter/ip_conntrack_standalone.c
--- a/net/ipv4/netfilter/ip_conntrack_standalone.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/netfilter/ip_conntrack_standalone.c Thu May 8 10:41:37 2003
@@ -201,7 +201,7 @@
/* Local packets are never produced too large for their
interface. We defragment them at LOCAL_OUT, however,
so we have to refragment them here. */
- if ((*pskb)->len > rt->u.dst.pmtu) {
+ if ((*pskb)->len > dst_pmtu(&rt->u.dst)) {
/* No hook can be after us, so this should be OK. */
ip_fragment(*pskb, okfn);
return NF_STOLEN;
diff -Nru a/net/ipv4/netfilter/ip_fw_compat_masq.c b/net/ipv4/netfilter/ip_fw_compat_masq.c
--- a/net/ipv4/netfilter/ip_fw_compat_masq.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/netfilter/ip_fw_compat_masq.c Thu May 8 10:41:37 2003
@@ -68,12 +68,13 @@
/* Setup the masquerade, if not already */
if (!info->initialized) {
u_int32_t newsrc;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = iph->daddr } } };
struct rtable *rt;
struct ip_nat_multi_range range;
/* Pass 0 instead of saddr, since it's going to be changed
anyway. */
- if (ip_route_output(&rt, iph->daddr, 0, 0, 0) != 0) {
+ if (ip_route_output_key(&rt, &fl) != 0) {
DEBUGP("ipnat_rule_masquerade: Can't reroute.\n");
return NF_DROP;
}
diff -Nru a/net/ipv4/netfilter/ip_nat_core.c b/net/ipv4/netfilter/ip_nat_core.c
--- a/net/ipv4/netfilter/ip_nat_core.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/netfilter/ip_nat_core.c Thu May 8 10:41:36 2003
@@ -206,10 +206,11 @@
static int
do_extra_mangle(u_int32_t var_ip, u_int32_t *other_ipp)
{
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = var_ip } } };
struct rtable *rt;
/* FIXME: IPTOS_TOS(iph->tos) --RR */
- if (ip_route_output(&rt, var_ip, 0, 0, 0) != 0) {
+ if (ip_route_output_key(&rt, &fl) != 0) {
DEBUGP("do_extra_mangle: Can't get route to %u.%u.%u.%u\n",
NIPQUAD(var_ip));
return 0;
diff -Nru a/net/ipv4/netfilter/ipt_MASQUERADE.c b/net/ipv4/netfilter/ipt_MASQUERADE.c
--- a/net/ipv4/netfilter/ipt_MASQUERADE.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/netfilter/ipt_MASQUERADE.c Thu May 8 10:41:36 2003
@@ -69,7 +69,6 @@
struct ip_nat_multi_range newrange;
u_int32_t newsrc;
struct rtable *rt;
- struct rt_key key;
IP_NF_ASSERT(hooknum == NF_IP_POST_ROUTING);
@@ -84,17 +83,21 @@
mr = targinfo;
- key.dst = (*pskb)->nh.iph->daddr;
- key.src = 0; /* Unknown: that's what we're trying to establish */
- key.tos = RT_TOS((*pskb)->nh.iph->tos)|RTO_CONN;
- key.oif = out->ifindex;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = (*pskb)->nh.iph->daddr,
+ .tos = (RT_TOS((*pskb)->nh.iph->tos) |
+ RTO_CONN),
#ifdef CONFIG_IP_ROUTE_FWMARK
- key.fwmark = (*pskb)->nfmark;
+ .fwmark = (*pskb)->nfmark
#endif
- if (ip_route_output_key(&rt, &key) != 0) {
- /* Shouldn't happen */
- printk("MASQUERADE: No route: Rusty's brain broke!\n");
- return NF_DROP;
+ } },
+ .oif = out->ifindex };
+ if (ip_route_output_key(&rt, &fl) != 0) {
+ /* Shouldn't happen */
+ printk("MASQUERADE: No route: Rusty's brain broke!\n");
+ return NF_DROP;
+ }
}
newsrc = rt->rt_src;
diff -Nru a/net/ipv4/netfilter/ipt_MIRROR.c b/net/ipv4/netfilter/ipt_MIRROR.c
--- a/net/ipv4/netfilter/ipt_MIRROR.c Thu May 8 10:41:38 2003
+++ b/net/ipv4/netfilter/ipt_MIRROR.c Thu May 8 10:41:38 2003
@@ -44,12 +44,13 @@
static int route_mirror(struct sk_buff *skb)
{
struct iphdr *iph = skb->nh.iph;
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = iph->saddr,
+ .saddr = iph->daddr,
+ .tos = RT_TOS(iph->tos) | RTO_CONN } } };
struct rtable *rt;
/* Backwards */
- if (ip_route_output(&rt, iph->saddr, iph->daddr,
- RT_TOS(iph->tos) | RTO_CONN,
- 0)) {
+ if (ip_route_output_key(&rt, &fl)) {
return 0;
}
diff -Nru a/net/ipv4/netfilter/ipt_REJECT.c b/net/ipv4/netfilter/ipt_REJECT.c
--- a/net/ipv4/netfilter/ipt_REJECT.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/netfilter/ipt_REJECT.c Thu May 8 10:41:37 2003
@@ -63,11 +63,18 @@
csum_partial((char *)otcph, otcplen, 0)) != 0)
return;
- /* Routing: if not headed for us, route won't like source */
- if (ip_route_output(&rt, oldskb->nh.iph->saddr,
- local ? oldskb->nh.iph->daddr : 0,
- RT_TOS(oldskb->nh.iph->tos), 0) != 0)
- return;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = oldskb->nh.iph->daddr,
+ .saddr = (local ?
+ oldskb->nh.iph->saddr :
+ 0),
+ .tos = RT_TOS(oldskb->nh.iph->tos) } } };
+
+ /* Routing: if not headed for us, route won't like source */
+ if (ip_route_output_key(&rt, &fl))
+ return;
+ }
hh_len = (rt->u.dst.dev->hard_header_len + 15)&~15;
@@ -149,7 +156,7 @@
nskb->nh.iph->ihl);
/* "Never happens" */
- if (nskb->len > nskb->dst->pmtu)
+ if (nskb->len > dst_pmtu(nskb->dst))
goto free_nskb;
connection_attach(nskb, oldskb->nfct);
@@ -229,14 +236,19 @@
tos = (iph->tos & IPTOS_TOS_MASK) | IPTOS_PREC_INTERNETCONTROL;
- if (ip_route_output(&rt, iph->saddr, saddr, RT_TOS(tos), 0))
- return;
-
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = iph->saddr,
+ .saddr = saddr,
+ .tos = RT_TOS(tos) } } };
+ if (ip_route_output_key(&rt, &fl))
+ return;
+ }
/* RFC says return as much as we can without exceeding 576 bytes. */
length = skb_in->len + sizeof(struct iphdr) + sizeof(struct icmphdr);
- if (length > rt->u.dst.pmtu)
- length = rt->u.dst.pmtu;
+ if (length > dst_pmtu(&rt->u.dst))
+ length = dst_pmtu(&rt->u.dst);
if (length > 576)
length = 576;
diff -Nru a/net/ipv4/netfilter/ipt_TCPMSS.c b/net/ipv4/netfilter/ipt_TCPMSS.c
--- a/net/ipv4/netfilter/ipt_TCPMSS.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/netfilter/ipt_TCPMSS.c Thu May 8 10:41:37 2003
@@ -85,14 +85,14 @@
return NF_DROP; /* or IPT_CONTINUE ?? */
}
- if((*pskb)->dst->pmtu <= (sizeof(struct iphdr) + sizeof(struct tcphdr))) {
+ if(dst_pmtu((*pskb)->dst) <= (sizeof(struct iphdr) + sizeof(struct tcphdr))) {
if (net_ratelimit())
printk(KERN_ERR
- "ipt_tcpmss_target: unknown or invalid path-MTU (%d)\n", (*pskb)->dst->pmtu);
+ "ipt_tcpmss_target: unknown or invalid path-MTU (%d)\n", dst_pmtu((*pskb)->dst));
return NF_DROP; /* or IPT_CONTINUE ?? */
}
- newmss = (*pskb)->dst->pmtu - sizeof(struct iphdr) - sizeof(struct tcphdr);
+ newmss = dst_pmtu((*pskb)->dst) - sizeof(struct iphdr) - sizeof(struct tcphdr);
} else
newmss = tcpmssinfo->mss;
diff -Nru a/net/ipv4/protocol.c b/net/ipv4/protocol.c
--- a/net/ipv4/protocol.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/protocol.c Thu May 8 10:41:36 2003
@@ -48,134 +48,52 @@
#include <net/ipip.h>
#include <linux/igmp.h>
-#define IPPROTO_PREVIOUS NULL
-
-#ifdef CONFIG_IP_MULTICAST
-
-static struct inet_protocol igmp_protocol = {
- handler: igmp_rcv,
- next: IPPROTO_PREVIOUS,
- protocol: IPPROTO_IGMP,
- name: "IGMP"
-};
-
-#undef IPPROTO_PREVIOUS
-#define IPPROTO_PREVIOUS &igmp_protocol
-
-#endif
-
-static struct inet_protocol tcp_protocol = {
- handler: tcp_v4_rcv,
- err_handler: tcp_v4_err,
- next: IPPROTO_PREVIOUS,
- protocol: IPPROTO_TCP,
- name: "TCP"
-};
-
-#undef IPPROTO_PREVIOUS
-#define IPPROTO_PREVIOUS &tcp_protocol
-
-static struct inet_protocol udp_protocol = {
- handler: udp_rcv,
- err_handler: udp_err,
- next: IPPROTO_PREVIOUS,
- protocol: IPPROTO_UDP,
- name: "UDP"
-};
-
-#undef IPPROTO_PREVIOUS
-#define IPPROTO_PREVIOUS &udp_protocol
-
-static struct inet_protocol icmp_protocol = {
- handler: icmp_rcv,
- next: IPPROTO_PREVIOUS,
- protocol: IPPROTO_ICMP,
- name: "ICMP"
-};
-
-#undef IPPROTO_PREVIOUS
-#define IPPROTO_PREVIOUS &icmp_protocol
-
-
-struct inet_protocol *inet_protocol_base = IPPROTO_PREVIOUS;
-
struct inet_protocol *inet_protos[MAX_INET_PROTOS];
/*
* Add a protocol handler to the hash tables
*/
-void inet_add_protocol(struct inet_protocol *prot)
+int inet_add_protocol(struct inet_protocol *prot, unsigned char protocol)
{
- unsigned char hash;
- struct inet_protocol *p2;
+ int hash, ret;
+
+ hash = protocol & (MAX_INET_PROTOS - 1);
- hash = prot->protocol & (MAX_INET_PROTOS - 1);
br_write_lock_bh(BR_NETPROTO_LOCK);
- prot ->next = inet_protos[hash];
- inet_protos[hash] = prot;
- prot->copy = 0;
-
- /*
- * Set the copy bit if we need to.
- */
-
- p2 = (struct inet_protocol *) prot->next;
- while (p2) {
- if (p2->protocol == prot->protocol) {
- prot->copy = 1;
- break;
- }
- p2 = (struct inet_protocol *) p2->next;
+
+ if (inet_protos[hash]) {
+ ret = -1;
+ } else {
+ inet_protos[hash] = prot;
+ ret = 0;
}
+
br_write_unlock_bh(BR_NETPROTO_LOCK);
+
+ return ret;
}
/*
* Remove a protocol from the hash tables.
*/
-int inet_del_protocol(struct inet_protocol *prot)
+int inet_del_protocol(struct inet_protocol *prot, unsigned char protocol)
{
- struct inet_protocol *p;
- struct inet_protocol *lp = NULL;
- unsigned char hash;
-
- hash = prot->protocol & (MAX_INET_PROTOS - 1);
- br_write_lock_bh(BR_NETPROTO_LOCK);
- if (prot == inet_protos[hash]) {
- inet_protos[hash] = (struct inet_protocol *) inet_protos[hash]->next;
- br_write_unlock_bh(BR_NETPROTO_LOCK);
- return 0;
- }
+ int hash, ret;
- p = (struct inet_protocol *) inet_protos[hash];
+ hash = protocol & (MAX_INET_PROTOS - 1);
- if (p != NULL && p->protocol == prot->protocol)
- lp = p;
-
- while (p) {
- /*
- * We have to worry if the protocol being deleted is
- * the last one on the list, then we may need to reset
- * someone's copied bit.
- */
- if (p->next && p->next == prot) {
- /*
- * if we are the last one with this protocol and
- * there is a previous one, reset its copy bit.
- */
- if (prot->copy == 0 && lp != NULL)
- lp->copy = 0;
- p->next = prot->next;
- br_write_unlock_bh(BR_NETPROTO_LOCK);
- return 0;
- }
- if (p->next != NULL && p->next->protocol == prot->protocol)
- lp = p->next;
+ br_write_lock_bh(BR_NETPROTO_LOCK);
- p = (struct inet_protocol *) p->next;
+ if (inet_protos[hash] == prot) {
+ inet_protos[hash] = NULL;
+ ret = 0;
+ } else {
+ ret = -1;
}
+
br_write_unlock_bh(BR_NETPROTO_LOCK);
- return -1;
+
+ return ret;
}
diff -Nru a/net/ipv4/raw.c b/net/ipv4/raw.c
--- a/net/ipv4/raw.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/raw.c Thu May 8 10:41:36 2003
@@ -64,6 +64,8 @@
#include <net/raw.h>
#include <net/inet_common.h>
#include <net/checksum.h>
+#include <net/xfrm.h>
+#include <linux/netfilter_ipv4.h>
struct sock *raw_v4_htable[RAWV4_HTABLE_SIZE];
rwlock_t raw_v4_lock = RW_LOCK_UNLOCKED;
@@ -132,13 +134,12 @@
}
/* IP input processing comes here for RAW socket delivery.
- * This is fun as to avoid copies we want to make no surplus
- * copies.
+ * Caller owns SKB, so we must make clones.
*
* RFC 1122: SHOULD pass TOS value up to the transport layer.
* -> It does. And not only TOS, but all IP header.
*/
-struct sock *raw_v4_input(struct sk_buff *skb, struct iphdr *iph, int hash)
+void raw_v4_input(struct sk_buff *skb, struct iphdr *iph, int hash)
{
struct sock *sk;
@@ -150,28 +151,19 @@
skb->dev->ifindex);
while (sk) {
- struct sock *sknext = __raw_v4_lookup(sk->next, iph->protocol,
- iph->saddr, iph->daddr,
- skb->dev->ifindex);
- if (iph->protocol != IPPROTO_ICMP ||
- !icmp_filter(sk, skb)) {
- struct sk_buff *clone;
-
- if (!sknext)
- break;
- clone = skb_clone(skb, GFP_ATOMIC);
+ if (iph->protocol != IPPROTO_ICMP || !icmp_filter(sk, skb)) {
+ struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
+
/* Not releasing hash table! */
if (clone)
raw_rcv(sk, clone);
}
- sk = sknext;
+ sk = __raw_v4_lookup(sk->next, iph->protocol,
+ iph->saddr, iph->daddr,
+ skb->dev->ifindex);
}
out:
- if (sk)
- sock_hold(sk);
read_unlock(&raw_v4_lock);
-
- return sk;
}
void raw_err (struct sock *sk, struct sk_buff *skb, u32 info)
@@ -244,71 +236,92 @@
int raw_rcv(struct sock *sk, struct sk_buff *skb)
{
+ if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return NET_RX_DROP;
+ }
+
skb_push(skb, skb->data - skb->nh.raw);
raw_rcv_skb(sk, skb);
return 0;
}
-struct rawfakehdr
-{
- struct iovec *iov;
- u32 saddr;
- struct dst_entry *dst;
-};
+static int raw_send_hdrinc(struct sock *sk, void *from, int length,
+ struct rtable *rt,
+ unsigned int flags)
+{
+ struct inet_opt *inet = inet_sk(sk);
+ int hh_len;
+ struct iphdr *iph;
+ struct sk_buff *skb;
+ int err;
-/*
- * Send a RAW IP packet.
- */
+ if (length > rt->u.dst.dev->mtu) {
+ ip_local_error(sk, EMSGSIZE, rt->rt_dst, sk->dport,
+ rt->u.dst.dev->mtu);
+ return -EMSGSIZE;
+ }
+ if (flags&MSG_PROBE)
+ goto out;
-/*
- * Callback support is trivial for SOCK_RAW
- */
-
-static int raw_getfrag(const void *p, char *to, unsigned int offset,
- unsigned int fraglen)
-{
- struct rawfakehdr *rfh = (struct rawfakehdr *) p;
- return memcpy_fromiovecend(to, rfh->iov, offset, fraglen);
-}
+ hh_len = LL_RESERVED_SPACE(rt->u.dst.dev);
-/*
- * IPPROTO_RAW needs extra work.
- */
-
-static int raw_getrawfrag(const void *p, char *to, unsigned int offset,
- unsigned int fraglen)
-{
- struct rawfakehdr *rfh = (struct rawfakehdr *) p;
+ skb = sock_alloc_send_skb(sk, length+hh_len+15,
+ flags&MSG_DONTWAIT, &err);
+ if (skb == NULL)
+ goto error;
+ skb_reserve(skb, hh_len);
+
+ skb->priority = sk->priority;
+ skb->dst = dst_clone(&rt->u.dst);
+
+ skb->nh.iph = iph = (struct iphdr *)skb_put(skb, length);
- if (memcpy_fromiovecend(to, rfh->iov, offset, fraglen))
- return -EFAULT;
+ skb->ip_summed = CHECKSUM_NONE;
+
+ skb->h.raw = skb->nh.raw;
+ err = memcpy_fromiovecend((void *)iph, from, 0, length);
+ if (err)
+ goto error_fault;
- if (!offset) {
- struct iphdr *iph = (struct iphdr *)to;
+ /* We don't modify invalid header */
+ if (length >= sizeof(*iph) && iph->ihl * 4 <= length) {
if (!iph->saddr)
- iph->saddr = rfh->saddr;
+ iph->saddr = rt->rt_src;
iph->check = 0;
- iph->tot_len = htons(fraglen); /* This is right as you can't
- frag RAW packets */
- /*
- * Deliberate breach of modularity to keep
- * ip_build_xmit clean (well less messy).
- */
+ iph->tot_len = htons(length);
if (!iph->id)
- ip_select_ident(iph, rfh->dst, NULL);
+ ip_select_ident(iph, &rt->u.dst, NULL);
+
iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
}
+
+ err = NF_HOOK(PF_INET, NF_IP_LOCAL_OUT, skb, NULL, rt->u.dst.dev,
+ dst_output);
+ if (err > 0)
+ err = inet->recverr ? net_xmit_errno(err) : 0;
+ if (err)
+ goto error;
+out:
return 0;
+
+error_fault:
+ err = -EFAULT;
+ kfree_skb(skb);
+error:
+ IP_INC_STATS(IpOutDiscards);
+ return err;
}
static int raw_sendmsg(struct sock *sk, struct msghdr *msg, int len)
{
+ struct inet_opt *inet = inet_sk(sk);
struct ipcm_cookie ipc;
- struct rawfakehdr rfh;
struct rtable *rt = NULL;
int free = 0;
u32 daddr;
+ u32 saddr;
u8 tos;
int err;
@@ -378,7 +391,7 @@
free = 1;
}
- rfh.saddr = ipc.addr;
+ saddr = ipc.addr;
ipc.addr = daddr;
if (!ipc.opt)
@@ -404,12 +417,19 @@
if (MULTICAST(daddr)) {
if (!ipc.oif)
ipc.oif = sk->protinfo.af_inet.mc_index;
- if (!rfh.saddr)
- rfh.saddr = sk->protinfo.af_inet.mc_addr;
+ if (!saddr)
+ saddr = sk->protinfo.af_inet.mc_addr;
}
- err = ip_route_output(&rt, daddr, rfh.saddr, tos, ipc.oif);
-
+ {
+ struct flowi fl = { .oif = ipc.oif,
+ .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = saddr,
+ .tos = tos } },
+ .proto = inet->hdrincl ? IPPROTO_RAW : sk->protocol };
+ err = ip_route_output_flow(&rt, &fl, sk, !(msg->msg_flags&MSG_DONTWAIT));
+ }
if (err)
goto done;
@@ -421,14 +441,22 @@
goto do_confirm;
back_from_confirm:
- rfh.iov = msg->msg_iov;
- rfh.saddr = rt->rt_src;
- rfh.dst = &rt->u.dst;
- if (!ipc.addr)
- ipc.addr = rt->rt_dst;
- err = ip_build_xmit(sk, sk->protinfo.af_inet.hdrincl ? raw_getrawfrag :
- raw_getfrag, &rfh, len, &ipc, rt, msg->msg_flags);
-
+ if (inet->hdrincl)
+ err = raw_send_hdrinc(sk, msg->msg_iov, len,
+ rt, msg->msg_flags);
+
+ else {
+ if (!ipc.addr)
+ ipc.addr = rt->rt_dst;
+ lock_sock(sk);
+ err = ip_append_data(sk, ip_generic_getfrag, msg->msg_iov, len, 0,
+ &ipc, rt, msg->msg_flags);
+ if (err)
+ ip_flush_pending_frames(sk);
+ else if (!(msg->msg_flags & MSG_MORE))
+ err = ip_push_pending_frames(sk);
+ release_sock(sk);
+ }
done:
if (free)
kfree(ipc.opt);
diff -Nru a/net/ipv4/route.c b/net/ipv4/route.c
--- a/net/ipv4/route.c Thu May 8 10:41:38 2003
+++ b/net/ipv4/route.c Thu May 8 10:41:38 2003
@@ -95,6 +95,7 @@
#include <net/arp.h>
#include <net/tcp.h>
#include <net/icmp.h>
+#include <net/xfrm.h>
#ifdef CONFIG_SYSCTL
#include <linux/sysctl.h>
#endif
@@ -132,11 +133,10 @@
*/
static struct dst_entry *ipv4_dst_check(struct dst_entry *dst, u32 cookie);
-static struct dst_entry *ipv4_dst_reroute(struct dst_entry *dst,
- struct sk_buff *skb);
static void ipv4_dst_destroy(struct dst_entry *dst);
static struct dst_entry *ipv4_negative_advice(struct dst_entry *dst);
static void ipv4_link_failure(struct sk_buff *skb);
+static void ip_rt_update_pmtu(struct dst_entry *dst, u32 mtu);
static int rt_garbage_collect(void);
@@ -145,10 +145,10 @@
protocol: __constant_htons(ETH_P_IP),
gc: rt_garbage_collect,
check: ipv4_dst_check,
- reroute: ipv4_dst_reroute,
destroy: ipv4_dst_destroy,
negative_advice: ipv4_negative_advice,
link_failure: ipv4_link_failure,
+ update_pmtu: ip_rt_update_pmtu,
entry_size: sizeof(struct rtable),
};
@@ -248,11 +248,12 @@
r->u.dst.__use,
0,
(unsigned long)r->rt_src,
- (r->u.dst.advmss ?
- (int) r->u.dst.advmss + 40 : 0),
- r->u.dst.window,
- (int)((r->u.dst.rtt >> 3) + r->u.dst.rttvar),
- r->key.tos,
+ (dst_metric(&r->u.dst, RTAX_ADVMSS) ?
+ (int) dst_metric(&r->u.dst, RTAX_ADVMSS) + 40 : 0),
+ dst_metric(&r->u.dst, RTAX_WINDOW),
+ (int)((dst_metric(&r->u.dst, RTAX_RTT) >> 3)
+ + dst_metric(&r->u.dst, RTAX_RTTVAR)),
+ r->fl.fl4_tos,
r->u.dst.hh ?
atomic_read(&r->u.dst.hh->hh_refcnt) :
-1,
@@ -335,7 +336,7 @@
/* Kill broadcast/multicast entries very aggressively, if they
collide in hash table with more useful entries */
return (rth->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) &&
- rth->key.iif && rth->u.rt_next;
+ rth->fl.iif && rth->u.rt_next;
}
static __inline__ int rt_valuable(struct rtable *rth)
@@ -623,6 +624,13 @@
out: return 0;
}
+static inline int compare_keys(struct flowi *fl1, struct flowi *fl2)
+{
+ return memcmp(&fl1->nl_u.ip4_u, &fl2->nl_u.ip4_u, sizeof(fl1->nl_u.ip4_u)) == 0 &&
+ fl1->oif == fl2->oif &&
+ fl1->iif == fl2->iif;
+}
+
static int rt_intern_hash(unsigned hash, struct rtable *rt, struct rtable **rp)
{
struct rtable *rth, **rthp;
@@ -634,7 +642,7 @@
write_lock_bh(&rt_hash_table[hash].lock);
while ((rth = *rthp) != NULL) {
- if (memcmp(&rth->key, &rt->key, sizeof(rt->key)) == 0) {
+ if (compare_keys(&rth->fl, &rt->fl)) {
/* Put it first */
*rthp = rth->u.rt_next;
rth->u.rt_next = rt_hash_table[hash].chain;
@@ -656,7 +664,7 @@
/* Try to bind route to arp only if it is output
route or unicast forwarding path.
*/
- if (rt->rt_type == RTN_UNICAST || rt->key.iif == 0) {
+ if (rt->rt_type == RTN_UNICAST || rt->fl.iif == 0) {
int err = arp_bind_neighbour(&rt->u.dst);
if (err) {
write_unlock_bh(&rt_hash_table[hash].lock);
@@ -819,11 +827,11 @@
while ((rth = *rthp) != NULL) {
struct rtable *rt;
- if (rth->key.dst != daddr ||
- rth->key.src != skeys[i] ||
- rth->key.tos != tos ||
- rth->key.oif != ikeys[k] ||
- rth->key.iif != 0) {
+ if (rth->fl.fl4_dst != daddr ||
+ rth->fl.fl4_src != skeys[i] ||
+ rth->fl.fl4_tos != tos ||
+ rth->fl.oif != ikeys[k] ||
+ rth->fl.iif != 0) {
rthp = &rth->u.rt_next;
continue;
}
@@ -914,14 +922,14 @@
ret = NULL;
} else if ((rt->rt_flags & RTCF_REDIRECTED) ||
rt->u.dst.expires) {
- unsigned hash = rt_hash_code(rt->key.dst,
- rt->key.src ^
- (rt->key.oif << 5),
- rt->key.tos);
+ unsigned hash = rt_hash_code(rt->fl.fl4_dst,
+ rt->fl.fl4_src ^
+ (rt->fl.oif << 5),
+ rt->fl.fl4_tos);
#if RT_CACHE_DEBUG >= 1
printk(KERN_DEBUG "ip_rt_advice: redirect to "
"%u.%u.%u.%u/%02x dropped\n",
- NIPQUAD(rt->rt_dst), rt->key.tos);
+ NIPQUAD(rt->rt_dst), rt->fl.fl4_tos);
#endif
rt_del(hash, rt);
ret = NULL;
@@ -1065,34 +1073,34 @@
read_lock(&rt_hash_table[hash].lock);
for (rth = rt_hash_table[hash].chain; rth;
rth = rth->u.rt_next) {
- if (rth->key.dst == daddr &&
- rth->key.src == skeys[i] &&
+ if (rth->fl.fl4_dst == daddr &&
+ rth->fl.fl4_src == skeys[i] &&
rth->rt_dst == daddr &&
rth->rt_src == iph->saddr &&
- rth->key.tos == tos &&
- rth->key.iif == 0 &&
- !(rth->u.dst.mxlock & (1 << RTAX_MTU))) {
+ rth->fl.fl4_tos == tos &&
+ rth->fl.iif == 0 &&
+ !(dst_metric_locked(&rth->u.dst, RTAX_MTU))) {
unsigned short mtu = new_mtu;
if (new_mtu < 68 || new_mtu >= old_mtu) {
/* BSD 4.2 compatibility hack :-( */
if (mtu == 0 &&
- old_mtu >= rth->u.dst.pmtu &&
+ old_mtu >= rth->u.dst.metrics[RTAX_MTU-1] &&
old_mtu >= 68 + (iph->ihl << 2))
old_mtu -= iph->ihl << 2;
mtu = guess_mtu(old_mtu);
}
- if (mtu <= rth->u.dst.pmtu) {
- if (mtu < rth->u.dst.pmtu) {
+ if (mtu <= rth->u.dst.metrics[RTAX_MTU-1]) {
+ if (mtu < rth->u.dst.metrics[RTAX_MTU-1]) {
dst_confirm(&rth->u.dst);
if (mtu < ip_rt_min_pmtu) {
mtu = ip_rt_min_pmtu;
- rth->u.dst.mxlock |=
+ rth->u.dst.metrics[RTAX_LOCK-1] |=
(1 << RTAX_MTU);
}
- rth->u.dst.pmtu = mtu;
+ rth->u.dst.metrics[RTAX_MTU-1] = mtu;
dst_set_expires(&rth->u.dst,
ip_rt_mtu_expires);
}
@@ -1105,15 +1113,15 @@
return est_mtu ? : new_mtu;
}
-void ip_rt_update_pmtu(struct dst_entry *dst, unsigned mtu)
+static void ip_rt_update_pmtu(struct dst_entry *dst, u32 mtu)
{
- if (dst->pmtu > mtu && mtu >= 68 &&
- !(dst->mxlock & (1 << RTAX_MTU))) {
+ if (dst->metrics[RTAX_MTU-1] > mtu && mtu >= 68 &&
+ !(dst_metric_locked(dst, RTAX_MTU))) {
if (mtu < ip_rt_min_pmtu) {
mtu = ip_rt_min_pmtu;
- dst->mxlock |= (1 << RTAX_MTU);
+ dst->metrics[RTAX_LOCK-1] |= (1 << RTAX_MTU);
}
- dst->pmtu = mtu;
+ dst->metrics[RTAX_MTU-1] = mtu;
dst_set_expires(dst, ip_rt_mtu_expires);
}
}
@@ -1124,12 +1132,6 @@
return NULL;
}
-static struct dst_entry *ipv4_dst_reroute(struct dst_entry *dst,
- struct sk_buff *skb)
-{
- return NULL;
-}
-
static void ipv4_dst_destroy(struct dst_entry *dst)
{
struct rtable *rt = (struct rtable *) dst;
@@ -1175,9 +1177,9 @@
u32 src;
struct fib_result res;
- if (rt->key.iif == 0)
+ if (rt->fl.iif == 0)
src = rt->rt_src;
- else if (fib_lookup(&rt->key, &res) == 0) {
+ else if (fib_lookup(&rt->fl, &res) == 0) {
#ifdef CONFIG_IP_ROUTE_NAT
if (res.type == RTN_NAT)
src = inet_select_addr(rt->u.dst.dev, rt->rt_gateway,
@@ -1210,28 +1212,28 @@
if (FIB_RES_GW(*res) &&
FIB_RES_NH(*res).nh_scope == RT_SCOPE_LINK)
rt->rt_gateway = FIB_RES_GW(*res);
- memcpy(&rt->u.dst.mxlock, fi->fib_metrics,
- sizeof(fi->fib_metrics));
+ memcpy(rt->u.dst.metrics, fi->fib_metrics,
+ sizeof(rt->u.dst.metrics));
if (fi->fib_mtu == 0) {
- rt->u.dst.pmtu = rt->u.dst.dev->mtu;
- if (rt->u.dst.mxlock & (1 << RTAX_MTU) &&
+ rt->u.dst.metrics[RTAX_MTU-1] = rt->u.dst.dev->mtu;
+ if (rt->u.dst.metrics[RTAX_LOCK-1] & (1 << RTAX_MTU) &&
rt->rt_gateway != rt->rt_dst &&
- rt->u.dst.pmtu > 576)
- rt->u.dst.pmtu = 576;
+ rt->u.dst.dev->mtu > 576)
+ rt->u.dst.metrics[RTAX_MTU-1] = 576;
}
#ifdef CONFIG_NET_CLS_ROUTE
rt->u.dst.tclassid = FIB_RES_NH(*res).nh_tclassid;
#endif
} else
- rt->u.dst.pmtu = rt->u.dst.dev->mtu;
+ rt->u.dst.metrics[RTAX_MTU-1]= rt->u.dst.dev->mtu;
- if (rt->u.dst.pmtu > IP_MAX_MTU)
- rt->u.dst.pmtu = IP_MAX_MTU;
- if (rt->u.dst.advmss == 0)
- rt->u.dst.advmss = max_t(unsigned int, rt->u.dst.dev->mtu - 40,
+ if (rt->u.dst.metrics[RTAX_MTU-1] > IP_MAX_MTU)
+ rt->u.dst.metrics[RTAX_MTU-1] = IP_MAX_MTU;
+ if (rt->u.dst.metrics[RTAX_ADVMSS-1] == 0)
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = max_t(unsigned int, rt->u.dst.dev->mtu - 40,
ip_rt_min_advmss);
- if (rt->u.dst.advmss > 65535 - 40)
- rt->u.dst.advmss = 65535 - 40;
+ if (rt->u.dst.metrics[RTAX_ADVMSS-1] > 65535 - 40)
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = 65535 - 40;
#ifdef CONFIG_NET_CLS_ROUTE
#ifdef CONFIG_IP_MULTIPLE_TABLES
@@ -1276,13 +1278,15 @@
atomic_set(&rth->u.dst.__refcnt, 1);
rth->u.dst.flags= DST_HOST;
- rth->key.dst = daddr;
+ if (in_dev->cnf.no_policy)
+ rth->u.dst.flags |= DST_NOPOLICY;
+ rth->fl.fl4_dst = daddr;
rth->rt_dst = daddr;
- rth->key.tos = tos;
+ rth->fl.fl4_tos = tos;
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark = skb->nfmark;
+ rth->fl.fl4_fwmark= skb->nfmark;
#endif
- rth->key.src = saddr;
+ rth->fl.fl4_src = saddr;
rth->rt_src = saddr;
#ifdef CONFIG_IP_ROUTE_NAT
rth->rt_dst_map = daddr;
@@ -1292,10 +1296,10 @@
rth->u.dst.tclassid = itag;
#endif
rth->rt_iif =
- rth->key.iif = dev->ifindex;
+ rth->fl.iif = dev->ifindex;
rth->u.dst.dev = &loopback_dev;
dev_hold(rth->u.dst.dev);
- rth->key.oif = 0;
+ rth->fl.oif = 0;
rth->rt_gateway = daddr;
rth->rt_spec_dst= spec_dst;
rth->rt_type = RTN_MULTICAST;
@@ -1337,10 +1341,19 @@
int ip_route_input_slow(struct sk_buff *skb, u32 daddr, u32 saddr,
u8 tos, struct net_device *dev)
{
- struct rt_key key;
struct fib_result res;
struct in_device *in_dev = in_dev_get(dev);
struct in_device *out_dev = NULL;
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = saddr,
+ .tos = tos,
+ .scope = RT_SCOPE_UNIVERSE,
+#ifdef CONFIG_IP_ROUTE_FWMARK
+ .fwmark = skb->nfmark
+#endif
+ } },
+ .iif = dev->ifindex };
unsigned flags = 0;
u32 itag = 0;
struct rtable * rth;
@@ -1354,17 +1367,7 @@
if (!in_dev)
goto out;
- key.dst = daddr;
- key.src = saddr;
- key.tos = tos;
-#ifdef CONFIG_IP_ROUTE_FWMARK
- key.fwmark = skb->nfmark;
-#endif
- key.iif = dev->ifindex;
- key.oif = 0;
- key.scope = RT_SCOPE_UNIVERSE;
-
- hash = rt_hash_code(daddr, saddr ^ (key.iif << 5), tos);
+ hash = rt_hash_code(daddr, saddr ^ (fl.iif << 5), tos);
/* Check for the most weird martians, which can be not detected
by fib_lookup.
@@ -1388,7 +1391,7 @@
/*
* Now we are ready to route packet.
*/
- if ((err = fib_lookup(&key, &res)) != 0) {
+ if ((err = fib_lookup(&fl, &res)) != 0) {
if (!IN_DEV_FORWARD(in_dev))
goto e_inval;
goto no_route;
@@ -1408,17 +1411,17 @@
src_map = fib_rules_policy(saddr, &res, &flags);
if (res.type == RTN_NAT) {
- key.dst = fib_rules_map_destination(daddr, &res);
+ fl.fl4_dst = fib_rules_map_destination(daddr, &res);
fib_res_put(&res);
free_res = 0;
- if (fib_lookup(&key, &res))
+ if (fib_lookup(&fl, &res))
goto e_inval;
free_res = 1;
if (res.type != RTN_UNICAST)
goto e_inval;
flags |= RTCF_DNAT;
}
- key.src = src_map;
+ fl.fl4_src = src_map;
}
#endif
@@ -1444,8 +1447,8 @@
goto martian_destination;
#ifdef CONFIG_IP_ROUTE_MULTIPATH
- if (res.fi->fib_nhs > 1 && key.oif == 0)
- fib_select_multipath(&key, &res);
+ if (res.fi->fib_nhs > 1 && fl.oif == 0)
+ fib_select_multipath(&fl, &res);
#endif
out_dev = in_dev_get(FIB_RES_DEV(res));
if (out_dev == NULL) {
@@ -1482,26 +1485,30 @@
atomic_set(&rth->u.dst.__refcnt, 1);
rth->u.dst.flags= DST_HOST;
- rth->key.dst = daddr;
+ if (in_dev->cnf.no_policy)
+ rth->u.dst.flags |= DST_NOPOLICY;
+ if (in_dev->cnf.no_xfrm)
+ rth->u.dst.flags |= DST_NOXFRM;
+ rth->fl.fl4_dst = daddr;
rth->rt_dst = daddr;
- rth->key.tos = tos;
+ rth->fl.fl4_tos = tos;
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark = skb->nfmark;
+ rth->fl.fl4_fwmark= skb->nfmark;
#endif
- rth->key.src = saddr;
+ rth->fl.fl4_src = saddr;
rth->rt_src = saddr;
rth->rt_gateway = daddr;
#ifdef CONFIG_IP_ROUTE_NAT
- rth->rt_src_map = key.src;
- rth->rt_dst_map = key.dst;
+ rth->rt_src_map = fl.fl4_src;
+ rth->rt_dst_map = fl.fl4_dst;
if (flags&RTCF_DNAT)
- rth->rt_gateway = key.dst;
+ rth->rt_gateway = fl.fl4_dst;
#endif
rth->rt_iif =
- rth->key.iif = dev->ifindex;
+ rth->fl.iif = dev->ifindex;
rth->u.dst.dev = out_dev->dev;
dev_hold(rth->u.dst.dev);
- rth->key.oif = 0;
+ rth->fl.oif = 0;
rth->rt_spec_dst= spec_dst;
rth->u.dst.input = ip_forward;
@@ -1559,26 +1566,27 @@
atomic_set(&rth->u.dst.__refcnt, 1);
rth->u.dst.flags= DST_HOST;
- rth->key.dst = daddr;
+ if (in_dev->cnf.no_policy)
+ rth->u.dst.flags |= DST_NOPOLICY;
+ rth->fl.fl4_dst = daddr;
rth->rt_dst = daddr;
- rth->key.tos = tos;
+ rth->fl.fl4_tos = tos;
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark = skb->nfmark;
+ rth->fl.fl4_fwmark= skb->nfmark;
#endif
- rth->key.src = saddr;
+ rth->fl.fl4_src = saddr;
rth->rt_src = saddr;
#ifdef CONFIG_IP_ROUTE_NAT
- rth->rt_dst_map = key.dst;
- rth->rt_src_map = key.src;
+ rth->rt_dst_map = fl.fl4_dst;
+ rth->rt_src_map = fl.fl4_src;
#endif
#ifdef CONFIG_NET_CLS_ROUTE
rth->u.dst.tclassid = itag;
#endif
rth->rt_iif =
- rth->key.iif = dev->ifindex;
+ rth->fl.iif = dev->ifindex;
rth->u.dst.dev = &loopback_dev;
dev_hold(rth->u.dst.dev);
- rth->key.oif = 0;
rth->rt_gateway = daddr;
rth->rt_spec_dst= spec_dst;
rth->u.dst.input= ip_local_deliver;
@@ -1656,14 +1664,14 @@
read_lock(&rt_hash_table[hash].lock);
for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next) {
- if (rth->key.dst == daddr &&
- rth->key.src == saddr &&
- rth->key.iif == iif &&
- rth->key.oif == 0 &&
+ if (rth->fl.fl4_dst == daddr &&
+ rth->fl.fl4_src == saddr &&
+ rth->fl.iif == iif &&
+ rth->fl.oif == 0 &&
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark == skb->nfmark &&
+ rth->fl.fl4_fwmark == skb->nfmark &&
#endif
- rth->key.tos == tos) {
+ rth->fl.fl4_tos == tos) {
rth->u.dst.lastuse = jiffies;
dst_hold(&rth->u.dst);
rth->u.dst.__use++;
@@ -1712,43 +1720,45 @@
* Major route resolver routine.
*/
-int ip_route_output_slow(struct rtable **rp, const struct rt_key *oldkey)
+int ip_route_output_slow(struct rtable **rp, const struct flowi *oldflp)
{
- struct rt_key key;
+ u32 tos = oldflp->fl4_tos & (IPTOS_RT_MASK | RTO_ONLINK);
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = oldflp->fl4_dst,
+ .saddr = oldflp->fl4_src,
+ .tos = tos & IPTOS_RT_MASK,
+ .scope = ((tos & RTO_ONLINK) ?
+ RT_SCOPE_LINK :
+ RT_SCOPE_UNIVERSE),
+#ifdef CONFIG_IP_ROUTE_FWMARK
+ .fwmark = oldflp->fl4_fwmark
+#endif
+ } },
+ .iif = loopback_dev.ifindex,
+ .oif = oldflp->oif };
struct fib_result res;
unsigned flags = 0;
struct rtable *rth;
struct net_device *dev_out = NULL;
+ struct in_device *in_dev = NULL;
unsigned hash;
int free_res = 0;
int err;
- u32 tos;
- tos = oldkey->tos & (IPTOS_RT_MASK | RTO_ONLINK);
- key.dst = oldkey->dst;
- key.src = oldkey->src;
- key.tos = tos & IPTOS_RT_MASK;
- key.iif = loopback_dev.ifindex;
- key.oif = oldkey->oif;
-#ifdef CONFIG_IP_ROUTE_FWMARK
- key.fwmark = oldkey->fwmark;
-#endif
- key.scope = (tos & RTO_ONLINK) ? RT_SCOPE_LINK :
- RT_SCOPE_UNIVERSE;
res.fi = NULL;
#ifdef CONFIG_IP_MULTIPLE_TABLES
res.r = NULL;
#endif
- if (oldkey->src) {
+ if (oldflp->fl4_src) {
err = -EINVAL;
- if (MULTICAST(oldkey->src) ||
- BADCLASS(oldkey->src) ||
- ZERONET(oldkey->src))
+ if (MULTICAST(oldflp->fl4_src) ||
+ BADCLASS(oldflp->fl4_src) ||
+ ZERONET(oldflp->fl4_src))
goto out;
/* It is equivalent to inet_addr_type(saddr) == RTN_LOCAL */
- dev_out = ip_dev_find(oldkey->src);
+ dev_out = ip_dev_find(oldflp->fl4_src);
if (dev_out == NULL)
goto out;
@@ -1760,8 +1770,8 @@
of another iface. --ANK
*/
- if (oldkey->oif == 0
- && (MULTICAST(oldkey->dst) || oldkey->dst == 0xFFFFFFFF)) {
+ if (oldflp->oif == 0
+ && (MULTICAST(oldflp->fl4_dst) || oldflp->fl4_dst == 0xFFFFFFFF)) {
/* Special hack: user can direct multicasts
and limited broadcast via necessary interface
without fiddling with IP_MULTICAST_IF or IP_PKTINFO.
@@ -1777,15 +1787,15 @@
Luckily, this hack is good workaround.
*/
- key.oif = dev_out->ifindex;
+ fl.oif = dev_out->ifindex;
goto make_route;
}
if (dev_out)
dev_put(dev_out);
dev_out = NULL;
}
- if (oldkey->oif) {
- dev_out = dev_get_by_index(oldkey->oif);
+ if (oldflp->oif) {
+ dev_out = dev_get_by_index(oldflp->oif);
err = -ENODEV;
if (dev_out == NULL)
goto out;
@@ -1794,39 +1804,39 @@
goto out; /* Wrong error code */
}
- if (LOCAL_MCAST(oldkey->dst) || oldkey->dst == 0xFFFFFFFF) {
- if (!key.src)
- key.src = inet_select_addr(dev_out, 0,
- RT_SCOPE_LINK);
+ if (LOCAL_MCAST(oldflp->fl4_dst) || oldflp->fl4_dst == 0xFFFFFFFF) {
+ if (!fl.fl4_src)
+ fl.fl4_src = inet_select_addr(dev_out, 0,
+ RT_SCOPE_LINK);
goto make_route;
}
- if (!key.src) {
- if (MULTICAST(oldkey->dst))
- key.src = inet_select_addr(dev_out, 0,
- key.scope);
- else if (!oldkey->dst)
- key.src = inet_select_addr(dev_out, 0,
- RT_SCOPE_HOST);
+ if (!fl.fl4_src) {
+ if (MULTICAST(oldflp->fl4_dst))
+ fl.fl4_src = inet_select_addr(dev_out, 0,
+ fl.fl4_scope);
+ else if (!oldflp->fl4_dst)
+ fl.fl4_src = inet_select_addr(dev_out, 0,
+ RT_SCOPE_HOST);
}
}
- if (!key.dst) {
- key.dst = key.src;
- if (!key.dst)
- key.dst = key.src = htonl(INADDR_LOOPBACK);
+ if (!fl.fl4_dst) {
+ fl.fl4_dst = fl.fl4_src;
+ if (!fl.fl4_dst)
+ fl.fl4_dst = fl.fl4_src = htonl(INADDR_LOOPBACK);
if (dev_out)
dev_put(dev_out);
dev_out = &loopback_dev;
dev_hold(dev_out);
- key.oif = loopback_dev.ifindex;
+ fl.oif = loopback_dev.ifindex;
res.type = RTN_LOCAL;
flags |= RTCF_LOCAL;
goto make_route;
}
- if (fib_lookup(&key, &res)) {
+ if (fib_lookup(&fl, &res)) {
res.fi = NULL;
- if (oldkey->oif) {
+ if (oldflp->oif) {
/* Apparently, routing tables are wrong. Assume,
that the destination is on link.
@@ -1845,9 +1855,9 @@
likely IPv6, but we do not.
*/
- if (key.src == 0)
- key.src = inet_select_addr(dev_out, 0,
- RT_SCOPE_LINK);
+ if (fl.fl4_src == 0)
+ fl.fl4_src = inet_select_addr(dev_out, 0,
+ RT_SCOPE_LINK);
res.type = RTN_UNICAST;
goto make_route;
}
@@ -1862,13 +1872,13 @@
goto e_inval;
if (res.type == RTN_LOCAL) {
- if (!key.src)
- key.src = key.dst;
+ if (!fl.fl4_src)
+ fl.fl4_src = fl.fl4_dst;
if (dev_out)
dev_put(dev_out);
dev_out = &loopback_dev;
dev_hold(dev_out);
- key.oif = dev_out->ifindex;
+ fl.oif = dev_out->ifindex;
if (res.fi)
fib_info_put(res.fi);
res.fi = NULL;
@@ -1877,36 +1887,40 @@
}
#ifdef CONFIG_IP_ROUTE_MULTIPATH
- if (res.fi->fib_nhs > 1 && key.oif == 0)
- fib_select_multipath(&key, &res);
+ if (res.fi->fib_nhs > 1 && fl.oif == 0)
+ fib_select_multipath(&fl, &res);
else
#endif
- if (!res.prefixlen && res.type == RTN_UNICAST && !key.oif)
- fib_select_default(&key, &res);
+ if (!res.prefixlen && res.type == RTN_UNICAST && !fl.oif)
+ fib_select_default(&fl, &res);
- if (!key.src)
- key.src = FIB_RES_PREFSRC(res);
+ if (!fl.fl4_src)
+ fl.fl4_src = FIB_RES_PREFSRC(res);
if (dev_out)
dev_put(dev_out);
dev_out = FIB_RES_DEV(res);
dev_hold(dev_out);
- key.oif = dev_out->ifindex;
+ fl.oif = dev_out->ifindex;
make_route:
- if (LOOPBACK(key.src) && !(dev_out->flags&IFF_LOOPBACK))
+ if (LOOPBACK(fl.fl4_src) && !(dev_out->flags&IFF_LOOPBACK))
goto e_inval;
- if (key.dst == 0xFFFFFFFF)
+ if (fl.fl4_dst == 0xFFFFFFFF)
res.type = RTN_BROADCAST;
- else if (MULTICAST(key.dst))
+ else if (MULTICAST(fl.fl4_dst))
res.type = RTN_MULTICAST;
- else if (BADCLASS(key.dst) || ZERONET(key.dst))
+ else if (BADCLASS(fl.fl4_dst) || ZERONET(fl.fl4_dst))
goto e_inval;
if (dev_out->flags & IFF_LOOPBACK)
flags |= RTCF_LOCAL;
+ in_dev = in_dev_get(dev_out);
+ if (!in_dev)
+ goto e_inval;
+
if (res.type == RTN_BROADCAST) {
flags |= RTCF_BROADCAST | RTCF_LOCAL;
if (res.fi) {
@@ -1915,11 +1929,8 @@
}
} else if (res.type == RTN_MULTICAST) {
flags |= RTCF_MULTICAST|RTCF_LOCAL;
- read_lock(&inetdev_lock);
- if (!__in_dev_get(dev_out) ||
- !ip_check_mc(__in_dev_get(dev_out), oldkey->dst))
+ if (!ip_check_mc(in_dev, oldflp->fl4_dst))
flags &= ~RTCF_LOCAL;
- read_unlock(&inetdev_lock);
/* If multicast route do not exist use
default one, but do not gateway in this case.
Yes, it is hack.
@@ -1936,25 +1947,28 @@
atomic_set(&rth->u.dst.__refcnt, 1);
rth->u.dst.flags= DST_HOST;
- rth->key.dst = oldkey->dst;
- rth->key.tos = tos;
- rth->key.src = oldkey->src;
- rth->key.iif = 0;
- rth->key.oif = oldkey->oif;
+ if (in_dev->cnf.no_xfrm)
+ rth->u.dst.flags |= DST_NOXFRM;
+ if (in_dev->cnf.no_policy)
+ rth->u.dst.flags |= DST_NOPOLICY;
+ rth->fl.fl4_dst = oldflp->fl4_dst;
+ rth->fl.fl4_tos = tos;
+ rth->fl.fl4_src = oldflp->fl4_src;
+ rth->fl.oif = oldflp->oif;
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark = oldkey->fwmark;
+ rth->fl.fl4_fwmark= oldflp->fl4_fwmark;
#endif
- rth->rt_dst = key.dst;
- rth->rt_src = key.src;
+ rth->rt_dst = fl.fl4_dst;
+ rth->rt_src = fl.fl4_src;
#ifdef CONFIG_IP_ROUTE_NAT
- rth->rt_dst_map = key.dst;
- rth->rt_src_map = key.src;
+ rth->rt_dst_map = fl.fl4_dst;
+ rth->rt_src_map = fl.fl4_src;
#endif
- rth->rt_iif = oldkey->oif ? : dev_out->ifindex;
+ rth->rt_iif = oldflp->oif ? : dev_out->ifindex;
rth->u.dst.dev = dev_out;
dev_hold(dev_out);
- rth->rt_gateway = key.dst;
- rth->rt_spec_dst= key.src;
+ rth->rt_gateway = fl.fl4_dst;
+ rth->rt_spec_dst= fl.fl4_src;
rth->u.dst.output=ip_output;
@@ -1962,40 +1976,39 @@
if (flags & RTCF_LOCAL) {
rth->u.dst.input = ip_local_deliver;
- rth->rt_spec_dst = key.dst;
+ rth->rt_spec_dst = fl.fl4_dst;
}
if (flags & (RTCF_BROADCAST | RTCF_MULTICAST)) {
- rth->rt_spec_dst = key.src;
+ rth->rt_spec_dst = fl.fl4_src;
if (flags & RTCF_LOCAL && !(dev_out->flags & IFF_LOOPBACK)) {
rth->u.dst.output = ip_mc_output;
rt_cache_stat[smp_processor_id()].out_slow_mc++;
}
#ifdef CONFIG_IP_MROUTE
if (res.type == RTN_MULTICAST) {
- struct in_device *in_dev = in_dev_get(dev_out);
- if (in_dev) {
- if (IN_DEV_MFORWARD(in_dev) &&
- !LOCAL_MCAST(oldkey->dst)) {
- rth->u.dst.input = ip_mr_input;
- rth->u.dst.output = ip_mc_output;
- }
- in_dev_put(in_dev);
+ if (IN_DEV_MFORWARD(in_dev) &&
+ !LOCAL_MCAST(oldflp->fl4_dst)) {
+ rth->u.dst.input = ip_mr_input;
+ rth->u.dst.output = ip_mc_output;
}
}
#endif
}
rt_set_nexthop(rth, &res, 0);
+
rth->rt_flags = flags;
- hash = rt_hash_code(oldkey->dst, oldkey->src ^ (oldkey->oif << 5), tos);
+ hash = rt_hash_code(oldflp->fl4_dst, oldflp->fl4_src ^ (oldflp->oif << 5), tos);
err = rt_intern_hash(hash, rth, rp);
done:
if (free_res)
fib_res_put(&res);
if (dev_out)
dev_put(dev_out);
+ if (in_dev)
+ in_dev_put(in_dev);
out: return err;
e_inval:
@@ -2006,23 +2019,23 @@
goto done;
}
-int ip_route_output_key(struct rtable **rp, const struct rt_key *key)
+int __ip_route_output_key(struct rtable **rp, const struct flowi *flp)
{
unsigned hash;
struct rtable *rth;
- hash = rt_hash_code(key->dst, key->src ^ (key->oif << 5), key->tos);
+ hash = rt_hash_code(flp->fl4_dst, flp->fl4_src ^ (flp->oif << 5), flp->fl4_tos);
read_lock_bh(&rt_hash_table[hash].lock);
for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next) {
- if (rth->key.dst == key->dst &&
- rth->key.src == key->src &&
- rth->key.iif == 0 &&
- rth->key.oif == key->oif &&
+ if (rth->fl.fl4_dst == flp->fl4_dst &&
+ rth->fl.fl4_src == flp->fl4_src &&
+ rth->fl.iif == 0 &&
+ rth->fl.oif == flp->oif &&
#ifdef CONFIG_IP_ROUTE_FWMARK
- rth->key.fwmark == key->fwmark &&
+ rth->fl.fl4_fwmark == flp->fl4_fwmark &&
#endif
- !((rth->key.tos ^ key->tos) &
+ !((rth->fl.fl4_tos ^ flp->fl4_tos) &
(IPTOS_RT_MASK | RTO_ONLINK))) {
rth->u.dst.lastuse = jiffies;
dst_hold(&rth->u.dst);
@@ -2035,8 +2048,26 @@
}
read_unlock_bh(&rt_hash_table[hash].lock);
- return ip_route_output_slow(rp, key);
-}
+ return ip_route_output_slow(rp, flp);
+}
+
+int ip_route_output_key(struct rtable **rp, struct flowi *flp)
+{
+ int err;
+
+ if ((err = __ip_route_output_key(rp, flp)) != 0)
+ return err;
+ return flp->proto ? xfrm_lookup((struct dst_entry**)rp, flp, NULL, 0) : 0;
+}
+
+int ip_route_output_flow(struct rtable **rp, struct flowi *flp, struct sock *sk, int flags)
+{
+ int err;
+
+ if ((err = __ip_route_output_key(rp, flp)) != 0)
+ return err;
+ return flp->proto ? xfrm_lookup((struct dst_entry**)rp, flp, sk, flags) : 0;
+}
static int rt_fill_info(struct sk_buff *skb, u32 pid, u32 seq, int event,
int nowait)
@@ -2055,7 +2086,7 @@
r->rtm_family = AF_INET;
r->rtm_dst_len = 32;
r->rtm_src_len = 0;
- r->rtm_tos = rt->key.tos;
+ r->rtm_tos = rt->fl.fl4_tos;
r->rtm_table = RT_TABLE_MAIN;
r->rtm_type = rt->rt_type;
r->rtm_scope = RT_SCOPE_UNIVERSE;
@@ -2064,9 +2095,9 @@
if (rt->rt_flags & RTCF_NOTIFY)
r->rtm_flags |= RTM_F_NOTIFY;
RTA_PUT(skb, RTA_DST, 4, &rt->rt_dst);
- if (rt->key.src) {
+ if (rt->fl.fl4_src) {
r->rtm_src_len = 32;
- RTA_PUT(skb, RTA_SRC, 4, &rt->key.src);
+ RTA_PUT(skb, RTA_SRC, 4, &rt->fl.fl4_src);
}
if (rt->u.dst.dev)
RTA_PUT(skb, RTA_OIF, sizeof(int), &rt->u.dst.dev->ifindex);
@@ -2074,13 +2105,13 @@
if (rt->u.dst.tclassid)
RTA_PUT(skb, RTA_FLOW, 4, &rt->u.dst.tclassid);
#endif
- if (rt->key.iif)
+ if (rt->fl.iif)
RTA_PUT(skb, RTA_PREFSRC, 4, &rt->rt_spec_dst);
- else if (rt->rt_src != rt->key.src)
+ else if (rt->rt_src != rt->fl.fl4_src)
RTA_PUT(skb, RTA_PREFSRC, 4, &rt->rt_src);
if (rt->rt_dst != rt->rt_gateway)
RTA_PUT(skb, RTA_GATEWAY, 4, &rt->rt_gateway);
- if (rtnetlink_put_metrics(skb, &rt->u.dst.mxlock) < 0)
+ if (rtnetlink_put_metrics(skb, rt->u.dst.metrics) < 0)
goto rtattr_failure;
ci.rta_lastuse = jiffies - rt->u.dst.lastuse;
ci.rta_used = rt->u.dst.__use;
@@ -2102,7 +2133,7 @@
eptr = (struct rtattr*)skb->tail;
#endif
RTA_PUT(skb, RTA_CACHEINFO, sizeof(ci), &ci);
- if (rt->key.iif) {
+ if (rt->fl.iif) {
#ifdef CONFIG_IP_MROUTE
u32 dst = rt->rt_dst;
@@ -2122,7 +2153,7 @@
}
} else
#endif
- RTA_PUT(skb, RTA_IIF, sizeof(int), &rt->key.iif);
+ RTA_PUT(skb, RTA_IIF, sizeof(int), &rt->fl.iif);
}
nlh->nlmsg_len = skb->tail - b;
@@ -2176,10 +2207,14 @@
if (!err && rt->u.dst.error)
err = -rt->u.dst.error;
} else {
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = dst,
+ .saddr = src,
+ .tos = rtm->rtm_tos } } };
int oif = 0;
if (rta[RTA_OIF - 1])
memcpy(&oif, RTA_DATA(rta[RTA_OIF - 1]), sizeof(int));
- err = ip_route_output(&rt, dst, src, rtm->rtm_tos, oif);
+ fl.oif = oif;
+ err = ip_route_output_key(&rt, &fl);
}
if (err)
goto out_free;
@@ -2568,4 +2603,6 @@
#ifdef CONFIG_NET_CLS_ROUTE
create_proc_read_entry("net/rt_acct", 0, 0, ip_rt_acct_read, NULL);
#endif
+ xfrm_init();
+ xfrm4_init();
}
diff -Nru a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
--- a/net/ipv4/syncookies.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/syncookies.c Thu May 8 10:41:36 2003
@@ -169,18 +169,25 @@
* hasn't changed since we received the original syn, but I see
* no easy way to do this.
*/
- if (ip_route_output(&rt,
- opt &&
- opt->srr ? opt->faddr : req->af.v4_req.rmt_addr,
- req->af.v4_req.loc_addr,
- RT_CONN_FLAGS(sk),
- 0)) {
- tcp_openreq_free(req);
- goto out;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = ((opt && opt->srr) ?
+ opt->faddr :
+ req->af.v4_req.rmt_addr),
+ .saddr = req->af.v4_req.loc_addr,
+ .tos = RT_CONN_FLAGS(sk) } },
+ .proto = IPPROTO_TCP,
+ .uli_u = { .ports =
+ { .sport = skb->h.th->dest,
+ .dport = skb->h.th->source } } };
+ if (ip_route_output_key(&rt, &fl)) {
+ tcp_openreq_free(req);
+ goto out;
+ }
}
/* Try to redo what tcp_v4_send_synack did. */
- req->window_clamp = rt->u.dst.window;
+ req->window_clamp = dst_metric(&rt->u.dst, RTAX_WINDOW);
tcp_select_initial_window(tcp_full_space(sk), req->mss,
&req->rcv_wnd, &req->window_clamp,
0, &rcv_wscale);
diff -Nru a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
--- a/net/ipv4/sysctl_net_ipv4.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/sysctl_net_ipv4.c Thu May 8 10:41:37 2003
@@ -77,14 +77,39 @@
void *newval, size_t newlen,
void **context)
{
+ int *valp = table->data;
int new;
+
+ if (!newval || !newlen)
+ return 0;
+
if (newlen != sizeof(int))
return -EINVAL;
- if (get_user(new,(int *)newval))
- return -EFAULT;
- if (new != ipv4_devconf.forwarding)
- inet_forward_change();
- return 0; /* caller does change again and handles oldval */
+
+ if (get_user(new, (int *)newval))
+ return -EFAULT;
+
+ if (new == *valp)
+ return 0;
+
+ if (oldval && oldlenp) {
+ size_t len;
+
+ if (get_user(len, oldlenp))
+ return -EFAULT;
+
+ if (len) {
+ if (len > table->maxlen)
+ len = table->maxlen;
+ if (copy_to_user(oldval, valp, len))
+ return -EFAULT;
+ if (put_user(len, oldlenp))
+ return -EFAULT;
+ }
+ }
+
+ inet_forward_change();
+ return 1;
}
ctl_table ipv4_table[] = {
diff -Nru a/net/ipv4/tcp.c b/net/ipv4/tcp.c
--- a/net/ipv4/tcp.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/tcp.c Thu May 8 10:41:37 2003
@@ -204,6 +204,8 @@
* Andi Kleen : Make poll agree with SIGIO
* Salvatore Sanfilippo : Support SO_LINGER with linger == 1 and
* lingertime == 0 (RFC 793 ABORT Call)
+ * Hirokazu Takahashi : Use copy_from_user() instead of
+ * csum_and_copy_from_user() if possible.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
@@ -256,6 +258,7 @@
#include <net/icmp.h>
#include <net/tcp.h>
+#include <net/xfrm.h>
#include <asm/uaccess.h>
#include <asm/ioctls.h>
@@ -953,8 +956,8 @@
return res;
}
-#define TCP_PAGE(sk) (sk->tp_pinfo.af_tcp.sndmsg_page)
-#define TCP_OFF(sk) (sk->tp_pinfo.af_tcp.sndmsg_off)
+#define TCP_PAGE(sk) (inet_sk(sk)->sndmsg_page)
+#define TCP_OFF(sk) (inet_sk(sk)->sndmsg_off)
static inline int
tcp_copy_to_page(struct sock *sk, char *from, struct sk_buff *skb,
@@ -963,18 +966,22 @@
int err = 0;
unsigned int csum;
- csum = csum_and_copy_from_user(from, page_address(page)+off,
+ if (skb->ip_summed == CHECKSUM_NONE) {
+ csum = csum_and_copy_from_user(from, page_address(page) + off,
copy, 0, &err);
- if (!err) {
- if (skb->ip_summed == CHECKSUM_NONE)
- skb->csum = csum_block_add(skb->csum, csum, skb->len);
- skb->len += copy;
- skb->data_len += copy;
- skb->truesize += copy;
- sk->wmem_queued += copy;
- sk->forward_alloc -= copy;
+ if (err) return err;
+ skb->csum = csum_block_add(skb->csum, csum, skb->len);
+ } else {
+ if (copy_from_user(page_address(page) + off, from, copy))
+ return -EFAULT;
}
- return err;
+
+ skb->len += copy;
+ skb->data_len += copy;
+ skb->truesize += copy;
+ sk->wmem_queued += copy;
+ sk->forward_alloc -= copy;
+ return 0;
}
static inline int
@@ -984,11 +991,16 @@
unsigned int csum;
int off = skb->len;
- csum = csum_and_copy_from_user(from, skb_put(skb, copy),
+ if (skb->ip_summed == CHECKSUM_NONE) {
+ csum = csum_and_copy_from_user(from, skb_put(skb, copy),
copy, 0, &err);
- if (!err) {
- skb->csum = csum_block_add(skb->csum, csum, off);
- return 0;
+ if (!err) {
+ skb->csum = csum_block_add(skb->csum, csum, off);
+ return 0;
+ }
+ } else {
+ if (!copy_from_user(skb_put(skb, copy), from, copy))
+ return 0;
}
__skb_trim(skb, off);
@@ -1070,6 +1082,12 @@
if (skb == NULL)
goto wait_for_memory;
+ /*
+ * Check whether we can use HW checksum.
+ */
+ if (sk->route_caps & (NETIF_F_IP_CSUM|NETIF_F_NO_CSUM|NETIF_F_HW_CSUM))
+ skb->ip_summed = CHECKSUM_HW;
+
skb_entail(sk, tp, skb);
copy = mss_now;
}
@@ -1888,6 +1906,8 @@
sk->prot->destroy(sk);
tcp_kill_sk_queues(sk);
+
+ xfrm_sk_free_policy(sk);
#ifdef INET_REFCNT_DEBUG
if (atomic_read(&sk->refcnt) != 1) {
diff -Nru a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
--- a/net/ipv4/tcp_input.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/tcp_input.c Thu May 8 10:41:37 2003
@@ -524,25 +524,25 @@
* Probably, no packets returned in time.
* Reset our results.
*/
- if (!(dst->mxlock&(1<<RTAX_RTT)))
- dst->rtt = 0;
+ if (!(dst_metric_locked(dst, RTAX_RTT)))
+ dst->metrics[RTAX_RTT-1] = 0;
return;
}
- m = dst->rtt - tp->srtt;
+ m = dst_metric(dst, RTAX_RTT) - tp->srtt;
/* If newly calculated rtt larger than stored one,
* store new one. Otherwise, use EWMA. Remember,
* rtt overestimation is always better than underestimation.
*/
- if (!(dst->mxlock&(1<<RTAX_RTT))) {
+ if (!(dst_metric_locked(dst, RTAX_RTT))) {
if (m <= 0)
- dst->rtt = tp->srtt;
+ dst->metrics[RTAX_RTT-1] = tp->srtt;
else
- dst->rtt -= (m>>3);
+ dst->metrics[RTAX_RTT-1] -= (m>>3);
}
- if (!(dst->mxlock&(1<<RTAX_RTTVAR))) {
+ if (!(dst_metric_locked(dst, RTAX_RTTVAR))) {
if (m < 0)
m = -m;
@@ -551,44 +551,46 @@
if (m < tp->mdev)
m = tp->mdev;
- if (m >= dst->rttvar)
- dst->rttvar = m;
+ if (m >= dst_metric(dst, RTAX_RTTVAR))
+ dst->metrics[RTAX_RTTVAR-1] = m;
else
- dst->rttvar -= (dst->rttvar - m)>>2;
+ dst->metrics[RTAX_RTTVAR-1] -=
+ (dst->metrics[RTAX_RTTVAR-1] - m)>>2;
}
if (tp->snd_ssthresh >= 0xFFFF) {
/* Slow start still did not finish. */
- if (dst->ssthresh &&
- !(dst->mxlock&(1<<RTAX_SSTHRESH)) &&
- (tp->snd_cwnd>>1) > dst->ssthresh)
- dst->ssthresh = (tp->snd_cwnd>>1);
- if (!(dst->mxlock&(1<<RTAX_CWND)) &&
- tp->snd_cwnd > dst->cwnd)
- dst->cwnd = tp->snd_cwnd;
+ if (dst_metric(dst, RTAX_SSTHRESH) &&
+ !dst_metric_locked(dst, RTAX_SSTHRESH) &&
+ (tp->snd_cwnd >> 1) > dst_metric(dst, RTAX_SSTHRESH))
+ dst->metrics[RTAX_SSTHRESH-1] = tp->snd_cwnd >> 1;
+ if (!dst_metric_locked(dst, RTAX_CWND) &&
+ tp->snd_cwnd > dst_metric(dst, RTAX_CWND))
+ dst->metrics[RTAX_CWND-1] = tp->snd_cwnd;
} else if (tp->snd_cwnd > tp->snd_ssthresh &&
tp->ca_state == TCP_CA_Open) {
/* Cong. avoidance phase, cwnd is reliable. */
- if (!(dst->mxlock&(1<<RTAX_SSTHRESH)))
- dst->ssthresh = max(tp->snd_cwnd>>1, tp->snd_ssthresh);
- if (!(dst->mxlock&(1<<RTAX_CWND)))
- dst->cwnd = (dst->cwnd + tp->snd_cwnd)>>1;
+ if (!dst_metric_locked(dst, RTAX_SSTHRESH))
+ dst->metrics[RTAX_SSTHRESH-1] =
+ max(tp->snd_cwnd >> 1, tp->snd_ssthresh);
+ if (!dst_metric_locked(dst, RTAX_CWND))
+ dst->metrics[RTAX_CWND-1] = (dst->metrics[RTAX_CWND-1] + tp->snd_cwnd) >> 1;
} else {
/* Else slow start did not finish, cwnd is non-sense,
ssthresh may be also invalid.
*/
- if (!(dst->mxlock&(1<<RTAX_CWND)))
- dst->cwnd = (dst->cwnd + tp->snd_ssthresh)>>1;
- if (dst->ssthresh &&
- !(dst->mxlock&(1<<RTAX_SSTHRESH)) &&
- tp->snd_ssthresh > dst->ssthresh)
- dst->ssthresh = tp->snd_ssthresh;
+ if (!dst_metric_locked(dst, RTAX_CWND))
+ dst->metrics[RTAX_CWND-1] = (dst->metrics[RTAX_CWND-1] + tp->snd_ssthresh) >> 1;
+ if (dst->metrics[RTAX_SSTHRESH-1] &&
+ !dst_metric_locked(dst, RTAX_SSTHRESH) &&
+ tp->snd_ssthresh > dst->metrics[RTAX_SSTHRESH-1])
+ dst->metrics[RTAX_SSTHRESH-1] = tp->snd_ssthresh;
}
- if (!(dst->mxlock&(1<<RTAX_REORDERING))) {
- if (dst->reordering < tp->reordering &&
+ if (!dst_metric_locked(dst, RTAX_REORDERING)) {
+ if (dst->metrics[RTAX_REORDERING-1] < tp->reordering &&
tp->reordering != sysctl_tcp_reordering)
- dst->reordering = tp->reordering;
+ dst->metrics[RTAX_REORDERING-1] = tp->reordering;
}
}
}
@@ -627,22 +629,23 @@
dst_confirm(dst);
- if (dst->mxlock&(1<<RTAX_CWND))
- tp->snd_cwnd_clamp = dst->cwnd;
- if (dst->ssthresh) {
- tp->snd_ssthresh = dst->ssthresh;
+ if (dst_metric_locked(dst, RTAX_CWND))
+ tp->snd_cwnd_clamp = dst_metric(dst, RTAX_CWND);
+ if (dst_metric(dst, RTAX_SSTHRESH)) {
+ tp->snd_ssthresh = dst_metric(dst, RTAX_SSTHRESH);
if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
tp->snd_ssthresh = tp->snd_cwnd_clamp;
}
- if (dst->reordering && tp->reordering != dst->reordering) {
+ if (dst_metric(dst, RTAX_REORDERING) &&
+ tp->reordering != dst_metric(dst, RTAX_REORDERING)) {
tp->sack_ok &= ~2;
- tp->reordering = dst->reordering;
+ tp->reordering = dst_metric(dst, RTAX_REORDERING);
}
- if (dst->rtt == 0)
+ if (dst_metric(dst, RTAX_RTT) == 0)
goto reset;
- if (!tp->srtt && dst->rtt < (TCP_TIMEOUT_INIT<<3))
+ if (!tp->srtt && dst_metric(dst, RTAX_RTT) < (TCP_TIMEOUT_INIT << 3))
goto reset;
/* Initial rtt is determined from SYN,SYN-ACK.
@@ -659,10 +662,10 @@
* to low value, and then abruptly stops to do it and starts to delay
* ACKs, wait for troubles.
*/
- if (dst->rtt > tp->srtt)
- tp->srtt = dst->rtt;
- if (dst->rttvar > tp->mdev) {
- tp->mdev = dst->rttvar;
+ if (dst_metric(dst, RTAX_RTT) > tp->srtt)
+ tp->srtt = dst_metric(dst, RTAX_RTT);
+ if (dst_metric(dst, RTAX_RTTVAR) > tp->mdev) {
+ tp->mdev = dst_metric(dst, RTAX_RTTVAR);
tp->mdev_max = tp->rttvar = max(tp->mdev, TCP_RTO_MIN);
}
tcp_set_rto(tp);
diff -Nru a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
--- a/net/ipv4/tcp_ipv4.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/tcp_ipv4.c Thu May 8 10:41:37 2003
@@ -63,10 +63,10 @@
#include <net/tcp.h>
#include <net/ipv6.h>
#include <net/inet_common.h>
+#include <net/xfrm.h>
#include <linux/inet.h>
#include <linux/stddef.h>
-#include <linux/ipsec.h>
extern int sysctl_ip_dynaddr;
extern int sysctl_ip_default_ttl;
@@ -783,7 +783,9 @@
}
tmp = ip_route_connect(&rt, nexthop, sk->saddr,
- RT_CONN_FLAGS(sk), sk->bound_dev_if);
+ RT_CONN_FLAGS(sk), sk->bound_dev_if,
+ IPPROTO_TCP,
+ sk->sport, usin->sin_port, sk);
if (tmp < 0)
return tmp;
@@ -792,9 +794,6 @@
return -ENETUNREACH;
}
- __sk_dst_set(sk, &rt->u.dst);
- sk->route_caps = rt->u.dst.dev->features;
-
if (!sk->protinfo.af_inet.opt || !sk->protinfo.af_inet.opt->srr)
daddr = rt->rt_dst;
@@ -844,6 +843,15 @@
if (err)
goto failure;
+ err = ip_route_newports(&rt, sk->sport, sk->dport, sk);
+ if (err)
+ goto failure;
+
+ /* OK, now commit destination to socket. */
+ __sk_dst_set(sk, &rt->u.dst);
+ sk->route_caps = rt->u.dst.dev->features;
+ tp->ext2_header_len = rt->u.dst.header_len;
+
if (!tp->write_seq)
tp->write_seq = secure_tcp_sequence_number(sk->saddr, sk->daddr,
sk->sport, usin->sin_port);
@@ -851,14 +859,16 @@
sk->protinfo.af_inet.id = tp->write_seq^jiffies;
err = tcp_connect(sk);
+ rt = NULL;
if (err)
goto failure;
return 0;
failure:
+ /* This unhashes the socket and releases the local port, if necessary. */
tcp_set_state(sk, TCP_CLOSE);
- __sk_dst_reset(sk);
+ ip_rt_put(rt);
sk->route_caps = 0;
sk->dport = 0;
return err;
@@ -920,7 +930,7 @@
/*
* This routine does path mtu discovery as defined in RFC1191.
*/
-static inline void do_pmtu_discovery(struct sock *sk, struct iphdr *ip, unsigned mtu)
+static inline void do_pmtu_discovery(struct sock *sk, struct iphdr *ip, u32 mtu)
{
struct dst_entry *dst;
struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
@@ -941,17 +951,19 @@
if ((dst = __sk_dst_check(sk, 0)) == NULL)
return;
- ip_rt_update_pmtu(dst, mtu);
+ dst->ops->update_pmtu(dst, mtu);
/* Something is about to be wrong... Remember soft error
* for the case, if this connection will not able to recover.
*/
- if (mtu < dst->pmtu && ip_dont_fragment(sk, dst))
+ if (mtu < dst_pmtu(dst) && ip_dont_fragment(sk, dst))
sk->err_soft = EMSGSIZE;
+ mtu = dst_pmtu(dst);
+
if (sk->protinfo.af_inet.pmtudisc != IP_PMTUDISC_DONT &&
- tp->pmtu_cookie > dst->pmtu) {
- tcp_sync_mss(sk, dst->pmtu);
+ tp->pmtu_cookie > mtu) {
+ tcp_sync_mss(sk, mtu);
/* Resend the TCP packet because it's
* clear that the old packet has been
@@ -1189,7 +1201,6 @@
sizeof(struct tcphdr),
IPPROTO_TCP,
0);
- arg.n_iov = 1;
arg.csumoffset = offsetof(struct tcphdr, check) / 2;
tcp_socket->sk->protinfo.af_inet.ttl = sysctl_ip_default_ttl;
@@ -1217,7 +1228,6 @@
arg.iov[0].iov_base = (unsigned char *)&rep;
arg.iov[0].iov_len = sizeof(rep.th);
- arg.n_iov = 1;
if (ts) {
rep.tsopt[0] = htonl((TCPOPT_NOP << 24) |
(TCPOPT_NOP << 16) |
@@ -1268,14 +1278,20 @@
static struct dst_entry* tcp_v4_route_req(struct sock *sk, struct open_request *req)
{
struct rtable *rt;
- struct ip_options *opt;
+ struct ip_options *opt = req->af.v4_req.opt;
+ struct flowi fl = { .oif = sk->bound_dev_if,
+ .nl_u = { .ip4_u =
+ { .daddr = ((opt && opt->srr) ?
+ opt->faddr :
+ req->af.v4_req.rmt_addr),
+ .saddr = req->af.v4_req.loc_addr,
+ .tos = RT_CONN_FLAGS(sk) } },
+ .proto = IPPROTO_TCP,
+ .uli_u = { .ports =
+ { .sport = sk->sport,
+ .dport = req->rmt_port } } };
- opt = req->af.v4_req.opt;
- if(ip_route_output(&rt, ((opt && opt->srr) ?
- opt->faddr :
- req->af.v4_req.rmt_addr),
- req->af.v4_req.loc_addr,
- RT_CONN_FLAGS(sk), sk->bound_dev_if)) {
+ if (ip_route_output_flow(&rt, &fl, sk, 0)) {
IP_INC_STATS_BH(IpOutNoRoutes);
return NULL;
}
@@ -1498,7 +1514,7 @@
(sysctl_max_syn_backlog - tcp_synq_len(sk)
< (sysctl_max_syn_backlog>>2)) &&
(!peer || !peer->tcp_ts_stamp) &&
- (!dst || !dst->rtt)) {
+ (!dst || !dst_metric(dst, RTAX_RTT))) {
/* Without syncookies last quarter of
* backlog is filled with destinations, proven to be alive.
* It means that we continue to communicate
@@ -1570,10 +1586,11 @@
newtp->ext_header_len = 0;
if (newsk->protinfo.af_inet.opt)
newtp->ext_header_len = newsk->protinfo.af_inet.opt->optlen;
+ newtp->ext2_header_len = dst->header_len;
newsk->protinfo.af_inet.id = newtp->write_seq^jiffies;
- tcp_sync_mss(newsk, dst->pmtu);
- newtp->advmss = dst->advmss;
+ tcp_sync_mss(newsk, dst_pmtu(dst));
+ newtp->advmss = dst_metric(dst, RTAX_ADVMSS);
tcp_initialize_rcv_mss(newsk);
__tcp_v4_hash(newsk, 0);
@@ -1758,12 +1775,12 @@
goto no_tcp_socket;
process:
- if(!ipsec_sk_policy(sk,skb))
- goto discard_and_relse;
-
if (sk->state == TCP_TIME_WAIT)
goto do_time_wait;
+ if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb))
+ goto discard_and_relse;
+
if (sk_filter(sk, skb, 0))
goto discard_and_relse;
@@ -1783,6 +1800,9 @@
return ret;
no_tcp_socket:
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ goto discard_it;
+
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
bad_packet:
TCP_INC_STATS_BH(TcpInErrs);
@@ -1800,6 +1820,9 @@
goto discard_it;
do_time_wait:
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ goto discard_and_relse;
+
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
TCP_INC_STATS_BH(TcpInErrs);
goto discard_and_relse;
@@ -1853,7 +1876,9 @@
/* Query new route. */
err = ip_route_connect(&rt, daddr, 0,
RT_TOS(sk->protinfo.af_inet.tos)|sk->localroute,
- sk->bound_dev_if);
+ sk->bound_dev_if,
+ IPPROTO_TCP,
+ sk->sport, sk->dport, sk);
if (err)
return err;
@@ -1901,8 +1926,19 @@
if(sk->protinfo.af_inet.opt && sk->protinfo.af_inet.opt->srr)
daddr = sk->protinfo.af_inet.opt->faddr;
- err = ip_route_output(&rt, daddr, sk->saddr,
- RT_CONN_FLAGS(sk), sk->bound_dev_if);
+ {
+ struct flowi fl = { .oif = sk->bound_dev_if,
+ .nl_u = { .ip4_u =
+ { .daddr = daddr,
+ .saddr = sk->saddr,
+ .tos = RT_CONN_FLAGS(sk) } },
+ .proto = IPPROTO_TCP,
+ .uli_u = { .ports =
+ { .sport = sk->sport,
+ .dport = sk->dport } } };
+
+ err = ip_route_output_flow(&rt, &fl, sk, 0);
+ }
if (!err) {
__sk_dst_set(sk, &rt->u.dst);
sk->route_caps = rt->u.dst.dev->features;
@@ -2067,8 +2103,8 @@
tcp_put_port(sk);
/* If sendmsg cached page exists, toss it. */
- if (tp->sndmsg_page != NULL)
- __free_page(tp->sndmsg_page);
+ if (inet_sk(sk)->sndmsg_page)
+ __free_page(inet_sk(sk)->sndmsg_page);
atomic_dec(&tcp_sockets_allocated);
diff -Nru a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
--- a/net/ipv4/tcp_minisocks.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/tcp_minisocks.c Thu May 8 10:41:37 2003
@@ -25,6 +25,7 @@
#include <linux/sysctl.h>
#include <net/tcp.h>
#include <net/inet_common.h>
+#include <net/xfrm.h>
#ifdef CONFIG_SYSCTL
#define SYNC_INIT 0 /* let the user enable it */
@@ -681,6 +682,13 @@
if ((filter = newsk->filter) != NULL)
sk_filter_charge(newsk, filter);
#endif
+ if (unlikely(xfrm_sk_clone_policy(newsk))) {
+ /* It is still raw copy of parent, so invalidate
+ * destructor and make plain sk_free() */
+ newsk->destruct = NULL;
+ sk_free(newsk);
+ return NULL;
+ }
/* Now setup tcp_opt */
newtp = &(newsk->tp_pinfo.af_tcp);
diff -Nru a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
--- a/net/ipv4/tcp_output.c Thu May 8 10:41:37 2003
+++ b/net/ipv4/tcp_output.c Thu May 8 10:41:37 2003
@@ -89,8 +89,8 @@
struct dst_entry *dst = __sk_dst_get(sk);
int mss = tp->advmss;
- if (dst && dst->advmss < mss) {
- mss = dst->advmss;
+ if (dst && dst_metric(dst, RTAX_ADVMSS) < mss) {
+ mss = dst_metric(dst, RTAX_ADVMSS);
tp->advmss = mss;
}
@@ -502,13 +502,16 @@
int tcp_sync_mss(struct sock *sk, u32 pmtu)
{
- struct tcp_opt *tp = &sk->tp_pinfo.af_tcp;
+ struct tcp_opt *tp = tcp_sk(sk);
+ struct dst_entry *dst = __sk_dst_get(sk);
int mss_now;
+ if (dst && dst->ops->get_mss)
+ pmtu = dst->ops->get_mss(dst, pmtu);
+
/* Calculate base mss without TCP options:
It is MMS_S - sizeof(tcphdr) of rfc1122
*/
-
mss_now = pmtu - tp->af_specific->net_header_len - sizeof(struct tcphdr);
/* Clamp it (mss_clamp does not include tcp options) */
@@ -516,7 +519,7 @@
mss_now = tp->mss_clamp;
/* Now subtract optional transport overhead */
- mss_now -= tp->ext_header_len;
+ mss_now -= tp->ext_header_len + tp->ext2_header_len;
/* Then reserve room for full set of TCP options and 8 bytes of data */
if (mss_now < 48)
@@ -1131,10 +1134,10 @@
if (req->rcv_wnd == 0) { /* ignored for retransmitted syns */
__u8 rcv_wscale;
/* Set this up on the first call only */
- req->window_clamp = tp->window_clamp ? : dst->window;
+ req->window_clamp = tp->window_clamp ? : dst_metric(dst, RTAX_WINDOW);
/* tcp_full_space because it is guaranteed to be the first packet */
tcp_select_initial_window(tcp_full_space(sk),
- dst->advmss - (req->tstamp_ok ? TCPOLEN_TSTAMP_ALIGNED : 0),
+ dst_metric(dst, RTAX_ADVMSS) - (req->tstamp_ok ? TCPOLEN_TSTAMP_ALIGNED : 0),
&req->rcv_wnd,
&req->window_clamp,
req->wscale_ok,
@@ -1146,7 +1149,7 @@
th->window = htons(req->rcv_wnd);
TCP_SKB_CB(skb)->when = tcp_time_stamp;
- tcp_syn_build_options((__u32 *)(th + 1), dst->advmss, req->tstamp_ok,
+ tcp_syn_build_options((__u32 *)(th + 1), dst_metric(dst, RTAX_ADVMSS), req->tstamp_ok,
req->sack_ok, req->wscale_ok, req->rcv_wscale,
TCP_SKB_CB(skb)->when,
req->ts_recent);
@@ -1175,11 +1178,11 @@
if (tp->user_mss)
tp->mss_clamp = tp->user_mss;
tp->max_window = 0;
- tcp_sync_mss(sk, dst->pmtu);
+ tcp_sync_mss(sk, dst_pmtu(dst));
if (!tp->window_clamp)
- tp->window_clamp = dst->window;
- tp->advmss = dst->advmss;
+ tp->window_clamp = dst_metric(dst, RTAX_WINDOW);
+ tp->advmss = dst_metric(dst, RTAX_ADVMSS);
tcp_initialize_rcv_mss(sk);
tcp_select_initial_window(tcp_full_space(sk),
diff -Nru a/net/ipv4/udp.c b/net/ipv4/udp.c
--- a/net/ipv4/udp.c Thu May 8 10:41:36 2003
+++ b/net/ipv4/udp.c Thu May 8 10:41:36 2003
@@ -11,6 +11,7 @@
* Fred N. van Kempen, <waltje@uWalt.NL.Mugnet.ORG>
* Arnt Gulbrandsen, <agulbra@nvg.unit.no>
* Alan Cox, <Alan.Cox@linux.org>
+ * Hirokazu Takahashi, <taka@valinux.co.jp>
*
* Fixes:
* Alan Cox : verify_area() calls
@@ -64,6 +65,10 @@
* YOSHIFUJI Hideaki @USAGI and: Support IPV6_V6ONLY socket option, which
* Alexey Kuznetsov: allow both IPv4 and IPv6 sockets to bind
* a single port at the same time.
+ * Hirokazu Takahashi : HW checksumming for outgoing UDP
+ * datagrams.
+ * Hirokazu Takahashi : sendfile() on UDP works now.
+ * Derek Atkins <derek@ihtfp.com>: Add Encapsulation Support
*
*
* This program is free software; you can redistribute it and/or
@@ -97,6 +102,7 @@
#include <net/route.h>
#include <net/inet_common.h>
#include <net/checksum.h>
+#include <net/xfrm.h>
/*
* Snmp MIB for the UDP layer
@@ -365,80 +371,118 @@
sock_put(sk);
}
-
-static unsigned short udp_check(struct udphdr *uh, int len, unsigned long saddr, unsigned long daddr, unsigned long base)
-{
- return(csum_tcpudp_magic(saddr, daddr, len, IPPROTO_UDP, base));
-}
-
-struct udpfakehdr
-{
- struct udphdr uh;
- u32 saddr;
- u32 daddr;
- struct iovec *iov;
- u32 wcheck;
-};
-
/*
- * Copy and checksum a UDP packet from user space into a buffer.
+ * Throw away all pending data and cancel the corking. Socket is locked.
*/
-
-static int udp_getfrag(const void *p, char * to, unsigned int offset, unsigned int fraglen)
+static void udp_flush_pending_frames(struct sock *sk)
{
- struct udpfakehdr *ufh = (struct udpfakehdr *)p;
- if (offset==0) {
- if (csum_partial_copy_fromiovecend(to+sizeof(struct udphdr), ufh->iov, offset,
- fraglen-sizeof(struct udphdr), &ufh->wcheck))
- return -EFAULT;
- ufh->wcheck = csum_partial((char *)ufh, sizeof(struct udphdr),
- ufh->wcheck);
- ufh->uh.check = csum_tcpudp_magic(ufh->saddr, ufh->daddr,
- ntohs(ufh->uh.len),
- IPPROTO_UDP, ufh->wcheck);
- if (ufh->uh.check == 0)
- ufh->uh.check = -1;
- memcpy(to, ufh, sizeof(struct udphdr));
- return 0;
+ struct udp_opt *up = udp_sk(sk);
+
+ if (up->pending) {
+ up->pending = 0;
+ ip_flush_pending_frames(sk);
}
- if (csum_partial_copy_fromiovecend(to, ufh->iov, offset-sizeof(struct udphdr),
- fraglen, &ufh->wcheck))
- return -EFAULT;
- return 0;
}
/*
- * Copy a UDP packet from user space into a buffer without checksumming.
+ * Push out all pending data as one UDP datagram. Socket is locked.
*/
-
-static int udp_getfrag_nosum(const void *p, char * to, unsigned int offset, unsigned int fraglen)
+static int udp_push_pending_frames(struct sock *sk, struct udp_opt *up)
{
- struct udpfakehdr *ufh = (struct udpfakehdr *)p;
+ struct sk_buff *skb;
+ struct udphdr *uh;
+ int err = 0;
- if (offset==0) {
- memcpy(to, ufh, sizeof(struct udphdr));
- return memcpy_fromiovecend(to+sizeof(struct udphdr), ufh->iov, offset,
- fraglen-sizeof(struct udphdr));
+ /* Grab the skbuff where UDP header space exists. */
+ if ((skb = skb_peek(&sk->write_queue)) == NULL)
+ goto out;
+
+ /*
+ * Create a UDP header
+ */
+ uh = skb->h.uh;
+ uh->source = up->sport;
+ uh->dest = up->dport;
+ uh->len = htons(up->len);
+ uh->check = 0;
+
+ if (sk->no_check == UDP_CSUM_NOXMIT) {
+ skb->ip_summed = CHECKSUM_NONE;
+ goto send;
+ }
+
+ if (skb_queue_len(&sk->write_queue) == 1) {
+ /*
+ * Only one fragment on the socket.
+ */
+ if (skb->ip_summed == CHECKSUM_HW) {
+ skb->csum = offsetof(struct udphdr, check);
+ uh->check = ~csum_tcpudp_magic(up->saddr, up->daddr,
+ up->len, IPPROTO_UDP, 0);
+ } else {
+ skb->csum = csum_partial((char *)uh,
+ sizeof(struct udphdr), skb->csum);
+ uh->check = csum_tcpudp_magic(up->saddr, up->daddr,
+ up->len, IPPROTO_UDP, skb->csum);
+ if (uh->check == 0)
+ uh->check = -1;
+ }
+ } else {
+ unsigned int csum = 0;
+ /*
+ * HW-checksum won't work as there are two or more
+ * fragments on the socket so that all csums of sk_buffs
+ * should be together.
+ */
+ if (skb->ip_summed == CHECKSUM_HW) {
+ int offset = (unsigned char *)uh - skb->data;
+ skb->csum = skb_checksum(skb, offset, skb->len - offset, 0);
+
+ skb->ip_summed = CHECKSUM_NONE;
+ } else {
+ skb->csum = csum_partial((char *)uh,
+ sizeof(struct udphdr), skb->csum);
+ }
+
+ skb_queue_walk(&sk->write_queue, skb) {
+ csum = csum_add(csum, skb->csum);
+ }
+ uh->check = csum_tcpudp_magic(up->saddr, up->daddr,
+ up->len, IPPROTO_UDP, csum);
+ if (uh->check == 0)
+ uh->check = -1;
}
- return memcpy_fromiovecend(to, ufh->iov, offset-sizeof(struct udphdr),
- fraglen);
+send:
+ err = ip_push_pending_frames(sk);
+out:
+ up->len = 0;
+ up->pending = 0;
+ return err;
+}
+
+
+static unsigned short udp_check(struct udphdr *uh, int len, unsigned long saddr, unsigned long daddr, unsigned long base)
+{
+ return(csum_tcpudp_magic(saddr, daddr, len, IPPROTO_UDP, base));
}
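
The multi-fragment branch of udp_push_pending_frames() above combines per-skb checksums with csum_add() and substitutes -1 (0xFFFF) when the final checksum comes out zero, because RFC 768 reserves a transmitted zero to mean "no checksum". The one's-complement arithmetic behind that can be sketched in plain userspace C (ocsum_* are invented names, not the kernel's optimized asm helpers):

```c
#include <stddef.h>
#include <stdint.h>

/* One's-complement add with end-around carry, as csum_add() does
 * when combining the per-fragment checksums of the write queue. */
static uint32_t ocsum_add(uint32_t a, uint32_t b)
{
	uint32_t s = a + b;
	return s + (s < a);	/* wrap the carry back in */
}

/* Sum a buffer as big-endian 16-bit words (csum_partial analogue). */
static uint32_t ocsum_partial(const uint8_t *buf, size_t len, uint32_t sum)
{
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum = ocsum_add(sum, (uint32_t)((buf[i] << 8) | buf[i + 1]));
	if (len & 1)
		sum = ocsum_add(sum, (uint32_t)(buf[len - 1] << 8));
	return sum;
}

/* Fold the 32-bit accumulator to 16 bits and complement (csum_fold). */
static uint16_t ocsum_fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

Because the sum is associative, checksumming each fragment separately and combining with ocsum_add() equals one pass over the whole datagram, which is what lets the code above simply skb_queue_walk() the write queue and add up skb->csum values.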
int udp_sendmsg(struct sock *sk, struct msghdr *msg, int len)
{
- int ulen = len + sizeof(struct udphdr);
+ struct udp_opt *up = udp_sk(sk);
+ int ulen = len;
struct ipcm_cookie ipc;
- struct udpfakehdr ufh;
struct rtable *rt = NULL;
int free = 0;
int connected = 0;
- u32 daddr;
+ u32 daddr, faddr, saddr;
+ u16 dport;
u8 tos;
int err;
+ int corkreq = up->corkflag || msg->msg_flags&MSG_MORE;
/* This check is ONLY to check for arithmetic overflow
on integer(!) len. Not more! Real check will be made
- in ip_build_xmit --ANK
+ in ip_append_* --ANK
BTW socket.c -> af_*.c -> ... make multiple
invalid conversions size_t -> int. We MUST repair it f.e.
@@ -457,10 +501,23 @@
if (msg->msg_flags&MSG_OOB) /* Mirror BSD error message compatibility */
return -EOPNOTSUPP;
+ ipc.opt = NULL;
+
+ if (up->pending) {
+ /*
+ * There are pending frames.
+ * The socket lock must be held while it's corked.
+ */
+ lock_sock(sk);
+ if (likely(up->pending))
+ goto do_append_data;
+ release_sock(sk);
+ }
+ ulen += sizeof(struct udphdr);
+
/*
* Get and verify the address.
*/
-
if (msg->msg_name) {
struct sockaddr_in * usin = (struct sockaddr_in*)msg->msg_name;
if (msg->msg_namelen < sizeof(*usin))
@@ -470,24 +527,22 @@
return -EINVAL;
}
- ufh.daddr = usin->sin_addr.s_addr;
- ufh.uh.dest = usin->sin_port;
- if (ufh.uh.dest == 0)
+ daddr = usin->sin_addr.s_addr;
+ dport = usin->sin_port;
+ if (dport == 0)
return -EINVAL;
} else {
if (sk->state != TCP_ESTABLISHED)
return -ENOTCONN;
- ufh.daddr = sk->daddr;
- ufh.uh.dest = sk->dport;
+ daddr = sk->daddr;
+ dport = sk->dport;
/* Open fast path for connected socket.
Route will not be used, if at least one option is set.
*/
connected = 1;
}
ipc.addr = sk->saddr;
- ufh.uh.source = sk->sport;
- ipc.opt = NULL;
ipc.oif = sk->bound_dev_if;
if (msg->msg_controllen) {
err = ip_cmsg_send(msg, &ipc);
@@ -500,13 +555,13 @@
if (!ipc.opt)
ipc.opt = sk->protinfo.af_inet.opt;
- ufh.saddr = ipc.addr;
- ipc.addr = daddr = ufh.daddr;
+ saddr = ipc.addr;
+ ipc.addr = faddr = daddr;
if (ipc.opt && ipc.opt->srr) {
if (!daddr)
return -EINVAL;
- daddr = ipc.opt->faddr;
+ faddr = ipc.opt->faddr;
connected = 0;
}
tos = RT_TOS(sk->protinfo.af_inet.tos);
@@ -519,8 +574,8 @@
if (MULTICAST(daddr)) {
if (!ipc.oif)
ipc.oif = sk->protinfo.af_inet.mc_index;
- if (!ufh.saddr)
- ufh.saddr = sk->protinfo.af_inet.mc_addr;
+ if (!saddr)
+ saddr = sk->protinfo.af_inet.mc_addr;
connected = 0;
}
@@ -528,7 +583,16 @@
rt = (struct rtable*)sk_dst_check(sk, 0);
if (rt == NULL) {
- err = ip_route_output(&rt, daddr, ufh.saddr, tos, ipc.oif);
+ struct flowi fl = { .oif = ipc.oif,
+ .nl_u = { .ip4_u =
+ { .daddr = faddr,
+ .saddr = saddr,
+ .tos = tos } },
+ .proto = IPPROTO_UDP,
+ .uli_u = { .ports =
+ { .sport = sk->sport,
+ .dport = dport } } };
+ err = ip_route_output_flow(&rt, &fl, sk, !(msg->msg_flags&MSG_DONTWAIT));
if (err)
goto out;
@@ -543,23 +607,39 @@
goto do_confirm;
back_from_confirm:
- ufh.saddr = rt->rt_src;
+ saddr = rt->rt_src;
if (!ipc.addr)
- ufh.daddr = ipc.addr = rt->rt_dst;
- ufh.uh.len = htons(ulen);
- ufh.uh.check = 0;
- ufh.iov = msg->msg_iov;
- ufh.wcheck = 0;
-
- /* RFC1122: OK. Provides the checksumming facility (MUST) as per */
- /* 4.1.3.4. It's configurable by the application via setsockopt() */
- /* (MAY) and it defaults to on (MUST). */
-
- err = ip_build_xmit(sk,
- (sk->no_check == UDP_CSUM_NOXMIT ?
- udp_getfrag_nosum :
- udp_getfrag),
- &ufh, ulen, &ipc, rt, msg->msg_flags);
+ daddr = ipc.addr = rt->rt_dst;
+
+ lock_sock(sk);
+ if (unlikely(up->pending)) {
+ /* The socket is already corked while preparing it. */
+ /* ... which is an evident application bug. --ANK */
+ release_sock(sk);
+
+ NETDEBUG(if (net_ratelimit()) printk(KERN_DEBUG "udp cork app bug 2\n"));
+ err = -EINVAL;
+ goto out;
+ }
+ /*
+ * Now cork the socket to pend data.
+ */
+ up->daddr = daddr;
+ up->dport = dport;
+ up->saddr = saddr;
+ up->sport = sk->sport;
+ up->pending = 1;
+
+do_append_data:
+ up->len += ulen;
+ err = ip_append_data(sk, ip_generic_getfrag, msg->msg_iov, ulen,
+ sizeof(struct udphdr), &ipc, rt,
+ corkreq ? msg->msg_flags|MSG_MORE : msg->msg_flags);
+ if (err)
+ udp_flush_pending_frames(sk);
+ else if (!corkreq)
+ err = udp_push_pending_frames(sk, up);
+ release_sock(sk);
out:
ip_rt_put(rt);
@@ -579,6 +659,52 @@
goto out;
}
+int udp_sendpage(struct sock *sk, struct page *page, int offset, size_t size, int flags)
+{
+ struct udp_opt *up = udp_sk(sk);
+ int ret;
+
+ if (!up->pending) {
+ struct msghdr msg = { .msg_flags = flags|MSG_MORE };
+
+ /* Call udp_sendmsg to specify destination address which
+ * sendpage interface can't pass.
+ * This will succeed only when the socket is connected.
+ */
+ ret = udp_sendmsg(sk, &msg, 0);
+ if (ret < 0)
+ return ret;
+ }
+
+ lock_sock(sk);
+
+ if (unlikely(!up->pending)) {
+ release_sock(sk);
+
+ NETDEBUG(if (net_ratelimit()) printk(KERN_DEBUG "udp cork app bug 3\n"));
+ return -EINVAL;
+ }
+
+ ret = ip_append_page(sk, page, offset, size, flags);
+ if (ret == -EOPNOTSUPP) {
+ release_sock(sk);
+ return sock_no_sendpage(sk->socket, page, offset, size, flags);
+ }
+ if (ret < 0) {
+ udp_flush_pending_frames(sk);
+ goto out;
+ }
+
+ up->len += size;
+ if (!(up->corkflag || (flags&MSG_MORE)))
+ ret = udp_push_pending_frames(sk, up);
+ if (!ret)
+ ret = size;
+out:
+ release_sock(sk);
+ return ret;
+}
+
/*
* IOCTL requests applicable to the UDP protocol
*/
@@ -745,7 +871,9 @@
saddr = sk->protinfo.af_inet.mc_addr;
}
err = ip_route_connect(&rt, usin->sin_addr.s_addr, saddr,
- RT_CONN_FLAGS(sk), oif);
+ RT_CONN_FLAGS(sk), oif,
+ IPPROTO_UDP,
+ sk->sport, usin->sin_port, sk);
if (err)
return err;
if ((rt->rt_flags&RTCF_BROADCAST) && !sk->broadcast) {
@@ -796,11 +924,124 @@
inet_sock_release(sk);
}
+/* return:
+ * 1 if the UDP system should process it
+ * 0 if we should drop this packet
+ * -1 if it should get processed by xfrm4_rcv_encap
+ */
+static int udp_encap_rcv(struct sock * sk, struct sk_buff *skb)
+{
+ struct udp_opt *up = udp_sk(sk);
+ struct udphdr *uh = skb->h.uh;
+ struct iphdr *iph;
+ int iphlen, len;
+
+ __u8 *udpdata = (__u8 *)uh + sizeof(struct udphdr);
+ __u32 *udpdata32 = (__u32 *)udpdata;
+ __u16 encap_type = up->encap_type;
+
+ /* if we're overly short, let UDP handle it */
+ if (udpdata > skb->tail)
+ return 1;
+
+ /* if this is not an encapsulation socket, then just return now */
+ if (!encap_type)
+ return 1;
+
+ len = skb->tail - udpdata;
+
+ switch (encap_type) {
+ case UDP_ENCAP_ESPINUDP:
+ /* Check if this is a keepalive packet. If so, eat it. */
+ if (len == 1 && udpdata[0] == 0xff) {
+ return 0;
+ } else if (len > sizeof(struct ip_esp_hdr) && udpdata32[0] != 0 ) {
+ /* ESP Packet without Non-ESP header */
+ len = sizeof(struct udphdr);
+ } else
+ /* Must be an IKE packet.. pass it through */
+ return 1;
+
+ /* At this point we are sure that this is an ESPinUDP packet,
+ * so we need to remove 'len' bytes from the packet (the UDP
+ * header and optional ESP marker bytes) and then modify the
+ * protocol to ESP, and then call into the transform receiver.
+ */
+
+ /* Now we can update and verify the packet length... */
+ iph = skb->nh.iph;
+ iphlen = iph->ihl << 2;
+ iph->tot_len = htons(ntohs(iph->tot_len) - len);
+ if (skb->len < iphlen + len) {
+ /* packet is too small!?! */
+ return 0;
+ }
+
+ /* pull the data buffer up to the ESP header and set the
+ * transport header to point to ESP. Keep UDP on the stack
+ * for later.
+ */
+ skb->h.raw = skb_pull(skb, len);
+
+ /* modify the protocol (it's ESP!) */
+ iph->protocol = IPPROTO_ESP;
+
+ /* and let the caller know to send this into the ESP processor... */
+ return -1;
+
+ default:
+ printk(KERN_INFO "udp_encap_rcv(): Unhandled UDP encap type: %u\n",
+ encap_type);
+ return 1;
+ }
+}
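
The three-way decision in udp_encap_rcv() above (pass to UDP, eat, or divert to ESP) is compact enough to mock up as a standalone classifier. This userspace sketch assumes the 8-byte fixed part of struct ip_esp_hdr (SPI plus sequence number); classify_espinudp and its constants are invented names for illustration only:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

enum { PASS_TO_UDP = 1, EAT_PACKET = 0, PASS_TO_ESP = -1 };

#define ESP_HDR_LEN 8	/* SPI (4 bytes) + sequence number (4 bytes) */

/* Classify the payload of a UDP_ENCAP_ESPINUDP socket the way
 * udp_encap_rcv() does: NAT-T keepalives are eaten, raw ESP is
 * diverted, IKE and runts fall through to normal UDP delivery. */
static int classify_espinudp(const uint8_t *data, size_t len)
{
	uint32_t marker;

	if (len == 0)
		return PASS_TO_UDP;		/* overly short */
	if (len == 1 && data[0] == 0xff)
		return EAT_PACKET;		/* keepalive */
	if (len > ESP_HDR_LEN) {
		memcpy(&marker, data, sizeof(marker));
		if (marker != 0)
			return PASS_TO_ESP;	/* SPI != 0: raw ESP */
	}
	return PASS_TO_UDP;			/* non-ESP marker: IKE */
}
```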
+
+/* returns:
+ * -1: error
+ * 0: success
+ * >0: "udp encap" protocol resubmission
+ *
+ * Note that in the success and error cases, the skb is assumed to
+ * have either been requeued or freed.
+ */
static int udp_queue_rcv_skb(struct sock * sk, struct sk_buff *skb)
{
+ struct udp_opt *up = udp_sk(sk);
+
/*
* Charge it to the socket, dropping if the queue is full.
*/
+ if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return -1;
+ }
+
+ if (up->encap_type) {
+ /*
+ * This is an encapsulation socket, so let's see if this is
+ * an encapsulated packet.
+ * If it's a keepalive packet, then just eat it.
+ * If it's an encapsulated packet, then pass it to the
+ * IPsec xfrm input and return the response
+ * appropriately. Otherwise, just fall through and
+ * pass this up the UDP socket.
+ */
+ int ret;
+
+ ret = udp_encap_rcv(sk, skb);
+ if (ret == 0) {
+ /* Eat the packet .. */
+ kfree_skb(skb);
+ return 0;
+ }
+ if (ret < 0) {
+ /* process the ESP packet */
+ ret = xfrm4_rcv_encap(skb, up->encap_type);
+ UDP_INC_STATS_BH(UdpInDatagrams);
+ return -ret;
+ }
+ /* FALLTHROUGH -- it's a UDP Packet */
+ }
#if defined(CONFIG_FILTER)
if (sk->filter && skb->ip_summed != CHECKSUM_UNNECESSARY) {
@@ -853,8 +1094,13 @@
if(sknext)
skb1 = skb_clone(skb, GFP_ATOMIC);
- if(skb1)
- udp_queue_rcv_skb(sk, skb1);
+ if(skb1) {
+ int ret = udp_queue_rcv_skb(sk, skb1);
+ if (ret > 0)
+ /* we should probably re-process instead
+ * of dropping packets here. */
+ kfree_skb(skb1);
+ }
sk = sknext;
} while(sknext);
} else
@@ -929,11 +1175,20 @@
sk = udp_v4_lookup(saddr, uh->source, daddr, uh->dest, skb->dev->ifindex);
if (sk != NULL) {
- udp_queue_rcv_skb(sk, skb);
+ int ret = udp_queue_rcv_skb(sk, skb);
sock_put(sk);
+
+ /* a return value > 0 means to resubmit the input, but
+ * it wants the return to be -protocol, or 0
+ */
+ if (ret > 0)
+ return -ret;
return 0;
}
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+ goto drop;
+
/* No socket. Drop packet silently, if checksum is wrong */
if (udp_checksum_complete(skb))
goto csum_error;
@@ -974,6 +1229,7 @@
NIPQUAD(daddr),
ntohs(uh->dest),
ulen));
+drop:
UDP_INC_STATS_BH(UdpInErrors);
kfree_skb(skb);
return(0);
@@ -1038,16 +1294,107 @@
return len;
}
+static int udp_destroy_sock(struct sock *sk)
+{
+ lock_sock(sk);
+ udp_flush_pending_frames(sk);
+ release_sock(sk);
+ return 0;
+}
+
+/*
+ * Socket option code for UDP
+ */
+static int udp_setsockopt(struct sock *sk, int level, int optname,
+ char *optval, int optlen)
+{
+ struct udp_opt *up = udp_sk(sk);
+ int val;
+ int err = 0;
+
+ if (level != SOL_UDP)
+ return ip_setsockopt(sk, level, optname, optval, optlen);
+
+ if(optlen<sizeof(int))
+ return -EINVAL;
+
+ if (get_user(val, (int *)optval))
+ return -EFAULT;
+
+ switch(optname) {
+ case UDP_CORK:
+ if (val != 0) {
+ up->corkflag = 1;
+ } else {
+ up->corkflag = 0;
+ lock_sock(sk);
+ udp_push_pending_frames(sk, up);
+ release_sock(sk);
+ }
+ break;
+
+ case UDP_ENCAP:
+ up->encap_type = val;
+ break;
+
+ default:
+ err = -ENOPROTOOPT;
+ break;
+ };
+
+ return err;
+}
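
With this setsockopt path in place, UDP_CORK is driven from userspace much like TCP_CORK. A minimal sketch, assuming the discard port on loopback as an arbitrary destination (no listener is needed for the sends to succeed); the fallback #defines cover libcs that predate these constants:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/udp.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SOL_UDP
#define SOL_UDP 17
#endif
#ifndef UDP_CORK
#define UDP_CORK 1
#endif

/* Cork a UDP socket, queue two sends, then uncork so the kernel
 * pushes them out as a single datagram. Returns the total payload
 * bytes queued, or -1 on any failure. */
static int udp_cork_demo(void)
{
	struct sockaddr_in dst;
	int on = 1, off = 0;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;
	memset(&dst, 0, sizeof(dst));
	dst.sin_family = AF_INET;
	dst.sin_port = htons(9);			/* discard */
	dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0 ||
	    setsockopt(fd, SOL_UDP, UDP_CORK, &on, sizeof(on)) < 0)
		goto fail;
	/* Both sends land on the socket's write queue (up->pending). */
	if (send(fd, "hello ", 6, 0) != 6 || send(fd, "world", 5, 0) != 5)
		goto fail;
	/* Uncorking runs udp_push_pending_frames(): one 11-byte datagram. */
	if (setsockopt(fd, SOL_UDP, UDP_CORK, &off, sizeof(off)) < 0)
		goto fail;
	close(fd);
	return 11;
fail:
	close(fd);
	return -1;
}
```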
+
+static int udp_getsockopt(struct sock *sk, int level, int optname,
+ char *optval, int *optlen)
+{
+ struct udp_opt *up = udp_sk(sk);
+ int val, len;
+
+ if (level != SOL_UDP)
+ return ip_getsockopt(sk, level, optname, optval, optlen);
+
+ if(get_user(len,optlen))
+ return -EFAULT;
+
+ len = min_t(unsigned int, len, sizeof(int));
+
+ if(len < 0)
+ return -EINVAL;
+
+ switch(optname) {
+ case UDP_CORK:
+ val = up->corkflag;
+ break;
+
+ case UDP_ENCAP:
+ val = up->encap_type;
+ break;
+
+ default:
+ return -ENOPROTOOPT;
+ };
+
+ if(put_user(len, optlen))
+ return -EFAULT;
+ if(copy_to_user(optval, &val,len))
+ return -EFAULT;
+ return 0;
+}
+
+
struct proto udp_prot = {
name: "UDP",
close: udp_close,
connect: udp_connect,
disconnect: udp_disconnect,
ioctl: udp_ioctl,
- setsockopt: ip_setsockopt,
- getsockopt: ip_getsockopt,
+ destroy: udp_destroy_sock,
+ setsockopt: udp_setsockopt,
+ getsockopt: udp_getsockopt,
sendmsg: udp_sendmsg,
recvmsg: udp_recvmsg,
+ sendpage: udp_sendpage,
backlog_rcv: udp_queue_rcv_skb,
hash: udp_v4_hash,
unhash: udp_v4_unhash,
diff -Nru a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/xfrm4_input.c Thu May 8 10:41:38 2003
@@ -0,0 +1,144 @@
+/*
+ * xfrm4_input.c
+ *
+ * Changes:
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific portion
+ * Derek Atkins <derek@ihtfp.com>
+ * Add Encapsulation support
+ *
+ */
+
+#include <net/ip.h>
+#include <net/xfrm.h>
+
+static kmem_cache_t *secpath_cachep;
+
+int xfrm4_rcv(struct sk_buff *skb)
+{
+ return xfrm4_rcv_encap(skb, 0);
+}
+
+int xfrm4_rcv_encap(struct sk_buff *skb, __u16 encap_type)
+{
+ int err;
+ u32 spi, seq;
+ struct sec_decap_state xfrm_vec[XFRM_MAX_DEPTH];
+ struct xfrm_state *x;
+ int xfrm_nr = 0;
+ int decaps = 0;
+
+ if ((err = xfrm_parse_spi(skb, skb->nh.iph->protocol, &spi, &seq)) != 0)
+ goto drop;
+
+ do {
+ struct iphdr *iph = skb->nh.iph;
+
+ if (xfrm_nr == XFRM_MAX_DEPTH)
+ goto drop;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, spi, iph->protocol, AF_INET);
+ if (x == NULL)
+ goto drop;
+
+ spin_lock(&x->lock);
+ if (unlikely(x->km.state != XFRM_STATE_VALID))
+ goto drop_unlock;
+
+ if (x->props.replay_window && xfrm_replay_check(x, seq))
+ goto drop_unlock;
+
+ if (xfrm_state_check_expire(x))
+ goto drop_unlock;
+
+ xfrm_vec[xfrm_nr].decap.decap_type = encap_type;
+ if (x->type->input(x, &(xfrm_vec[xfrm_nr].decap), skb))
+ goto drop_unlock;
+
+ /* only the first xfrm gets the encap type */
+ encap_type = 0;
+
+ if (x->props.replay_window)
+ xfrm_replay_advance(x, seq);
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock(&x->lock);
+
+ xfrm_vec[xfrm_nr++].xvec = x;
+
+ iph = skb->nh.iph;
+
+ if (x->props.mode) {
+ if (iph->protocol != IPPROTO_IPIP)
+ goto drop;
+ skb->nh.raw = skb->data;
+ iph = skb->nh.iph;
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ decaps = 1;
+ break;
+ }
+
+ if ((err = xfrm_parse_spi(skb, skb->nh.iph->protocol, &spi, &seq)) < 0)
+ goto drop;
+ } while (!err);
+
+ /* Allocate new secpath or COW existing one. */
+
+ if (!skb->sp || atomic_read(&skb->sp->refcnt) != 1) {
+ kmem_cache_t *pool = skb->sp ? skb->sp->pool : secpath_cachep;
+ struct sec_path *sp;
+ sp = kmem_cache_alloc(pool, SLAB_ATOMIC);
+ if (!sp)
+ goto drop;
+ if (skb->sp) {
+ memcpy(sp, skb->sp, sizeof(struct sec_path));
+ secpath_put(skb->sp);
+ } else {
+ sp->pool = pool;
+ sp->len = 0;
+ }
+ atomic_set(&sp->refcnt, 1);
+ skb->sp = sp;
+ }
+ if (xfrm_nr + skb->sp->len > XFRM_MAX_DEPTH)
+ goto drop;
+
+ memcpy(skb->sp->x+skb->sp->len, xfrm_vec, xfrm_nr*sizeof(struct sec_decap_state));
+ skb->sp->len += xfrm_nr;
+
+ if (decaps) {
+ if (!(skb->dev->flags&IFF_LOOPBACK)) {
+ dst_release(skb->dst);
+ skb->dst = NULL;
+ }
+ netif_rx(skb);
+ return 0;
+ } else {
+ return -skb->nh.iph->protocol;
+ }
+
+drop_unlock:
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+drop:
+ while (--xfrm_nr >= 0)
+ xfrm_state_put(xfrm_vec[xfrm_nr].xvec);
+
+ kfree_skb(skb);
+ return 0;
+}
+
+
+void __init xfrm4_input_init(void)
+{
+ secpath_cachep = kmem_cache_create("secpath4_cache",
+ sizeof(struct sec_path),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+
+ if (!secpath_cachep)
+ panic("IP: failed to allocate secpath4_cache\n");
+}
+
diff -Nru a/net/ipv4/xfrm4_policy.c b/net/ipv4/xfrm4_policy.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/xfrm4_policy.c Thu May 8 10:41:38 2003
@@ -0,0 +1,284 @@
+/*
+ * xfrm4_policy.c
+ *
+ * Changes:
+ * Kazunori MIYAZAWA @USAGI
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific portion
+ *
+ */
+
+#include <linux/config.h>
+#include <net/xfrm.h>
+#include <net/ip.h>
+
+extern struct dst_ops xfrm4_dst_ops;
+extern struct xfrm_policy_afinfo xfrm4_policy_afinfo;
+
+static struct xfrm_type_map xfrm4_type_map = { .lock = RW_LOCK_UNLOCKED };
+
+static int xfrm4_dst_lookup(struct xfrm_dst **dst, struct flowi *fl)
+{
+ return __ip_route_output_key((struct rtable**)dst, fl);
+}
+
+/* Check that the bundle accepts the flow and its components are
+ * still valid.
+ */
+
+static int __xfrm4_bundle_ok(struct xfrm_dst *xdst, struct flowi *fl)
+{
+ do {
+ if (xdst->u.dst.ops != &xfrm4_dst_ops)
+ return 1;
+
+ if (!xfrm_selector_match(&xdst->u.dst.xfrm->sel, fl, AF_INET))
+ return 0;
+ if (xdst->u.dst.xfrm->km.state != XFRM_STATE_VALID ||
+ xdst->u.dst.path->obsolete > 0)
+ return 0;
+ xdst = (struct xfrm_dst*)xdst->u.dst.child;
+ } while (xdst);
+ return 0;
+}
+
+static struct dst_entry *
+__xfrm4_find_bundle(struct flowi *fl, struct rtable *rt, struct xfrm_policy *policy)
+{
+ struct dst_entry *dst;
+
+ if (!fl->fl4_src)
+ fl->fl4_src = rt->rt_src;
+ if (!fl->fl4_dst)
+ fl->fl4_dst = rt->rt_dst;
+ read_lock_bh(&policy->lock);
+ for (dst = policy->bundles; dst; dst = dst->next) {
+ struct xfrm_dst *xdst = (struct xfrm_dst*)dst;
+ if (xdst->u.rt.fl.oif == fl->oif && /*XXX*/
+ xdst->u.rt.fl.fl4_dst == fl->fl4_dst &&
+ xdst->u.rt.fl.fl4_src == fl->fl4_src &&
+ __xfrm4_bundle_ok(xdst, fl)) {
+ dst_clone(dst);
+ break;
+ }
+ }
+ read_unlock_bh(&policy->lock);
+ return dst;
+}
+
+/* Allocate chain of dst_entry's, attach known xfrm's, calculate
+ * all the metrics... Shortly, bundle a bundle.
+ */
+
+static int
+__xfrm4_bundle_create(struct xfrm_policy *policy, struct xfrm_state **xfrm, int nx,
+ struct flowi *fl, struct dst_entry **dst_p)
+{
+ struct dst_entry *dst, *dst_prev;
+ struct rtable *rt0 = (struct rtable*)(*dst_p);
+ struct rtable *rt = rt0;
+ u32 remote = fl->fl4_dst;
+ u32 local = fl->fl4_src;
+ int i;
+ int err;
+ int header_len = 0;
+ int trailer_len = 0;
+
+ dst = dst_prev = NULL;
+
+ for (i = 0; i < nx; i++) {
+ struct dst_entry *dst1 = dst_alloc(&xfrm4_dst_ops);
+
+ if (unlikely(dst1 == NULL)) {
+ err = -ENOBUFS;
+ goto error;
+ }
+
+ dst1->xfrm = xfrm[i];
+ if (!dst)
+ dst = dst1;
+ else {
+ dst_prev->child = dst1;
+ dst1->flags |= DST_NOHASH;
+ dst_clone(dst1);
+ }
+ dst_prev = dst1;
+ if (xfrm[i]->props.mode) {
+ remote = xfrm[i]->id.daddr.a4;
+ local = xfrm[i]->props.saddr.a4;
+ }
+ header_len += xfrm[i]->props.header_len;
+ trailer_len += xfrm[i]->props.trailer_len;
+ }
+
+ if (remote != fl->fl4_dst) {
+ struct flowi fl_tunnel = { .nl_u = { .ip4_u =
+ { .daddr = remote,
+ .saddr = local }
+ }
+ };
+ err = xfrm_dst_lookup((struct xfrm_dst**)&rt, &fl_tunnel, AF_INET);
+ if (err)
+ goto error;
+ } else {
+ dst_hold(&rt->u.dst);
+ }
+ dst_prev->child = &rt->u.dst;
+ for (dst_prev = dst; dst_prev != &rt->u.dst; dst_prev = dst_prev->child) {
+ struct xfrm_dst *x = (struct xfrm_dst*)dst_prev;
+ x->u.rt.fl = *fl;
+
+ dst_prev->dev = rt->u.dst.dev;
+ if (rt->u.dst.dev)
+ dev_hold(rt->u.dst.dev);
+ dst_prev->obsolete = -1;
+ dst_prev->flags |= DST_HOST;
+ dst_prev->lastuse = jiffies;
+ dst_prev->header_len = header_len;
+ dst_prev->trailer_len = trailer_len;
+ memcpy(&dst_prev->metrics, &rt->u.dst.metrics, sizeof(dst_prev->metrics));
+ dst_prev->path = &rt->u.dst;
+
+ /* Copy neighbour for reachability confirmation */
+ dst_prev->neighbour = neigh_clone(rt->u.dst.neighbour);
+ dst_prev->input = rt->u.dst.input;
+ dst_prev->output = dst_prev->xfrm->type->output;
+ if (rt->peer)
+ atomic_inc(&rt->peer->refcnt);
+ x->u.rt.peer = rt->peer;
+ /* Sheit... I remember I did this right. Apparently,
+ * it was magically lost, so this code needs audit */
+ x->u.rt.rt_flags = rt0->rt_flags&(RTCF_BROADCAST|RTCF_MULTICAST|RTCF_LOCAL);
+ x->u.rt.rt_type = rt->rt_type;
+ x->u.rt.rt_src = rt0->rt_src;
+ x->u.rt.rt_dst = rt0->rt_dst;
+ x->u.rt.rt_gateway = rt->rt_gateway;
+ x->u.rt.rt_spec_dst = rt0->rt_spec_dst;
+ header_len -= x->u.dst.xfrm->props.header_len;
+ trailer_len -= x->u.dst.xfrm->props.trailer_len;
+ }
+ *dst_p = dst;
+ return 0;
+
+error:
+ if (dst)
+ dst_free(dst);
+ return err;
+}
+
+static void
+_decode_session4(struct sk_buff *skb, struct flowi *fl)
+{
+ struct iphdr *iph = skb->nh.iph;
+ u8 *xprth = skb->nh.raw + iph->ihl*4;
+
+ if (!(iph->frag_off & htons(IP_MF | IP_OFFSET))) {
+ switch (iph->protocol) {
+ case IPPROTO_UDP:
+ case IPPROTO_TCP:
+ case IPPROTO_SCTP:
+ if (pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ u16 *ports = (u16 *)xprth;
+
+ fl->fl_ip_sport = ports[0];
+ fl->fl_ip_dport = ports[1];
+ }
+ break;
+
+ case IPPROTO_ESP:
+ if (pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ u32 *ehdr = (u32 *)xprth;
+
+ fl->fl_ipsec_spi = ehdr[0];
+ }
+ break;
+
+ case IPPROTO_AH:
+ if (pskb_may_pull(skb, xprth + 8 - skb->data)) {
+ u32 *ah_hdr = (u32*)xprth;
+
+ fl->fl_ipsec_spi = ah_hdr[1];
+ }
+ break;
+
+ case IPPROTO_COMP:
+ if (pskb_may_pull(skb, xprth + 4 - skb->data)) {
+ u16 *ipcomp_hdr = (u16 *)xprth;
+
+ fl->fl_ipsec_spi = ntohl(ntohs(ipcomp_hdr[1]));
+ }
+ break;
+ default:
+ fl->fl_ipsec_spi = 0;
+ break;
+ };
+ } else {
+ memset(fl, 0, sizeof(struct flowi));
+ }
+ fl->proto = iph->protocol;
+ fl->fl4_dst = iph->daddr;
+ fl->fl4_src = iph->saddr;
+}
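
_decode_session4() above pulls the flow key straight out of the packet, taking ports only when the packet is not a fragment (neither IP_MF set nor a nonzero fragment offset, hence the 0x3fff mask below). A hedged userspace equivalent over a raw IPv4 buffer; struct flow4 and decode_flow4 are invented names, and addresses/ports stay in network byte order as in the kernel code:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct flow4 {
	uint8_t  proto;
	uint32_t saddr, daddr;	/* network byte order */
	uint16_t sport, dport;	/* network byte order */
};

/* Extract the flow key from a raw IPv4 packet the way
 * _decode_session4() does: ports only from the first fragment. */
static int decode_flow4(const uint8_t *pkt, size_t len, struct flow4 *fl)
{
	uint16_t frag;
	size_t ihl;

	if (len < 20)
		return -1;
	ihl = (size_t)(pkt[0] & 0x0f) * 4;	/* header length in bytes */
	if (ihl < 20 || len < ihl + 4)
		return -1;

	memset(fl, 0, sizeof(*fl));
	fl->proto = pkt[9];
	memcpy(&fl->saddr, pkt + 12, 4);
	memcpy(&fl->daddr, pkt + 16, 4);

	frag = (uint16_t)((pkt[6] << 8) | pkt[7]);
	if ((frag & 0x3fff) == 0) {		/* !(IP_MF | IP_OFFSET) */
		memcpy(&fl->sport, pkt + ihl, 2);
		memcpy(&fl->dport, pkt + ihl + 2, 2);
	}
	return 0;
}
```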
+
+static inline int xfrm4_garbage_collect(void)
+{
+ read_lock(&xfrm4_policy_afinfo.lock);
+ xfrm4_policy_afinfo.garbage_collect();
+ read_unlock(&xfrm4_policy_afinfo.lock);
+ return (atomic_read(&xfrm4_dst_ops.entries) > xfrm4_dst_ops.gc_thresh*2);
+}
+
+static void xfrm4_update_pmtu(struct dst_entry *dst, u32 mtu)
+{
+ struct dst_entry *path = dst->path;
+
+ if (mtu < 68 + dst->header_len)
+ return;
+
+ path->ops->update_pmtu(path, mtu);
+}
+
+struct dst_ops xfrm4_dst_ops = {
+ .family = AF_INET,
+ .protocol = __constant_htons(ETH_P_IP),
+ .gc = xfrm4_garbage_collect,
+ .update_pmtu = xfrm4_update_pmtu,
+ .gc_thresh = 1024,
+ .entry_size = sizeof(struct xfrm_dst),
+};
+
+struct xfrm_policy_afinfo xfrm4_policy_afinfo = {
+ .family = AF_INET,
+ .lock = RW_LOCK_UNLOCKED,
+ .type_map = &xfrm4_type_map,
+ .dst_ops = &xfrm4_dst_ops,
+ .dst_lookup = xfrm4_dst_lookup,
+ .find_bundle = __xfrm4_find_bundle,
+ .bundle_create = __xfrm4_bundle_create,
+ .decode_session = _decode_session4,
+};
+
+void __init xfrm4_policy_init(void)
+{
+ xfrm_policy_register_afinfo(&xfrm4_policy_afinfo);
+}
+
+void __exit xfrm4_policy_fini(void)
+{
+ xfrm_policy_unregister_afinfo(&xfrm4_policy_afinfo);
+}
+
+void __init xfrm4_init(void)
+{
+ xfrm4_state_init();
+ xfrm4_policy_init();
+ xfrm4_input_init();
+}
+
+void __exit xfrm4_fini(void)
+{
+ //xfrm4_input_fini();
+ xfrm4_policy_fini();
+ xfrm4_state_fini();
+}
+
diff -Nru a/net/ipv4/xfrm4_state.c b/net/ipv4/xfrm4_state.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/xfrm4_state.c Thu May 8 10:41:38 2003
@@ -0,0 +1,128 @@
+/*
+ * xfrm4_state.c
+ *
+ * Changes:
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific portion
+ *
+ */
+
+#include <net/xfrm.h>
+#include <linux/pfkeyv2.h>
+#include <linux/ipsec.h>
+
+extern struct xfrm_state_afinfo xfrm4_state_afinfo;
+
+static void
+__xfrm4_init_tempsel(struct xfrm_state *x, struct flowi *fl,
+ struct xfrm_tmpl *tmpl,
+ xfrm_address_t *daddr, xfrm_address_t *saddr)
+{
+ x->sel.daddr.a4 = fl->fl4_dst;
+ x->sel.saddr.a4 = fl->fl4_src;
+ x->sel.dport = fl->fl_ip_dport;
+ x->sel.dport_mask = ~0;
+ x->sel.sport = fl->fl_ip_sport;
+ x->sel.sport_mask = ~0;
+ x->sel.prefixlen_d = 32;
+ x->sel.prefixlen_s = 32;
+ x->sel.proto = fl->proto;
+ x->sel.ifindex = fl->oif;
+ x->id = tmpl->id;
+ if (x->id.daddr.a4 == 0)
+ x->id.daddr.a4 = daddr->a4;
+ x->props.saddr = tmpl->saddr;
+ if (x->props.saddr.a4 == 0)
+ x->props.saddr.a4 = saddr->a4;
+ x->props.mode = tmpl->mode;
+ x->props.reqid = tmpl->reqid;
+ x->props.family = AF_INET;
+}
+
+static struct xfrm_state *
+__xfrm4_state_lookup(xfrm_address_t *daddr, u32 spi, u8 proto)
+{
+ unsigned h = __xfrm4_spi_hash(daddr, spi, proto);
+ struct xfrm_state *x;
+
+ list_for_each_entry(x, xfrm4_state_afinfo.state_byspi+h, byspi) {
+ if (x->props.family == AF_INET &&
+ spi == x->id.spi &&
+ daddr->a4 == x->id.daddr.a4 &&
+ proto == x->id.proto) {
+ atomic_inc(&x->refcnt);
+ return x;
+ }
+ }
+ return NULL;
+}
+
+static struct xfrm_state *
+__xfrm4_find_acq(u8 mode, u16 reqid, u8 proto,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ int create)
+{
+ struct xfrm_state *x, *x0;
+ unsigned h = __xfrm4_dst_hash(daddr);
+
+ x0 = NULL;
+
+ list_for_each_entry(x, xfrm4_state_afinfo.state_bydst+h, bydst) {
+ if (x->props.family == AF_INET &&
+ daddr->a4 == x->id.daddr.a4 &&
+ mode == x->props.mode &&
+ proto == x->id.proto &&
+ saddr->a4 == x->props.saddr.a4 &&
+ reqid == x->props.reqid &&
+ x->km.state == XFRM_STATE_ACQ) {
+ if (!x0)
+ x0 = x;
+ if (x->id.spi)
+ continue;
+ x0 = x;
+ break;
+ }
+ }
+ if (x0) {
+ atomic_inc(&x0->refcnt);
+ } else if (create && (x0 = xfrm_state_alloc()) != NULL) {
+ x0->sel.daddr.a4 = daddr->a4;
+ x0->sel.saddr.a4 = saddr->a4;
+ x0->sel.prefixlen_d = 32;
+ x0->sel.prefixlen_s = 32;
+ x0->props.saddr.a4 = saddr->a4;
+ x0->km.state = XFRM_STATE_ACQ;
+ x0->id.daddr.a4 = daddr->a4;
+ x0->id.proto = proto;
+ x0->props.family = AF_INET;
+ x0->props.mode = mode;
+ x0->props.reqid = reqid;
+ x0->props.family = AF_INET;
+ x0->lft.hard_add_expires_seconds = XFRM_ACQ_EXPIRES;
+ atomic_inc(&x0->refcnt);
+ mod_timer(&x0->timer, jiffies + XFRM_ACQ_EXPIRES*HZ);
+ atomic_inc(&x0->refcnt);
+ list_add_tail(&x0->bydst, xfrm4_state_afinfo.state_bydst+h);
+ wake_up(&km_waitq);
+ }
+ return x0;
+}
+
+static struct xfrm_state_afinfo xfrm4_state_afinfo = {
+ .family = AF_INET,
+ .lock = RW_LOCK_UNLOCKED,
+ .init_tempsel = __xfrm4_init_tempsel,
+ .state_lookup = __xfrm4_state_lookup,
+ .find_acq = __xfrm4_find_acq,
+};
+
+void __init xfrm4_state_init(void)
+{
+ xfrm_state_register_afinfo(&xfrm4_state_afinfo);
+}
+
+void __exit xfrm4_state_fini(void)
+{
+ xfrm_state_unregister_afinfo(&xfrm4_state_afinfo);
+}
+
diff -Nru a/net/ipv4/xfrm4_tunnel.c b/net/ipv4/xfrm4_tunnel.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv4/xfrm4_tunnel.c Thu May 8 10:41:38 2003
@@ -0,0 +1,264 @@
+/* xfrm4_tunnel.c: Generic IP tunnel transformer.
+ *
+ * Copyright (C) 2003 David S. Miller (davem@redhat.com)
+ */
+
+#include <linux/skbuff.h>
+#include <net/xfrm.h>
+#include <net/ip.h>
+#include <net/icmp.h>
+#include <net/inet_ecn.h>
+
+int xfrm4_tunnel_check_size(struct sk_buff *skb)
+{
+ int mtu, ret = 0;
+ struct dst_entry *dst;
+ struct iphdr *iph = skb->nh.iph;
+
+ if (IPCB(skb)->flags & IPSKB_XFRM_TUNNEL_SIZE)
+ goto out;
+
+ IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE;
+
+ if (!(iph->frag_off & htons(IP_DF)))
+ goto out;
+
+ dst = skb->dst;
+ mtu = dst_pmtu(dst) - dst->header_len - dst->trailer_len;
+ if (skb->len > mtu) {
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+ ret = -EMSGSIZE;
+ }
+out:
+ return ret;
+}
+
+static int ipip_output(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+ int tos, err;
+
+ if ((err = xfrm4_tunnel_check_size(skb)) != 0)
+ goto error_nolock;
+
+ iph = skb->nh.iph;
+
+ spin_lock_bh(&x->lock);
+
+ tos = iph->tos;
+
+ top_iph = (struct iphdr *) skb_push(skb, x->props.header_len);
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+ top_iph->tos = INET_ECN_encapsulate(tos, iph->tos);
+ top_iph->tot_len = htons(skb->len);
+ top_iph->frag_off = iph->frag_off & ~htons(IP_MF|IP_OFFSET);
+ if (!(iph->frag_off & htons(IP_DF))) {
+#ifdef NETIF_F_TSO
+ __ip_select_ident(top_iph, dst, 0);
+#else
+ __ip_select_ident(top_iph, dst);
+#endif
+ }
+ top_iph->ttl = iph->ttl;
+ top_iph->protocol = IPPROTO_IPIP;
+ top_iph->check = 0;
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ ip_send_check(top_iph);
+
+ skb->nh.raw = skb->data;
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock_bh(&x->lock);
+
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ kfree_skb(skb);
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ return NET_XMIT_BYPASS;
+
+error_nolock:
+ kfree_skb(skb);
+ return err;
+}
+
+static inline void ipip_ecn_decapsulate(struct iphdr *outer_iph, struct sk_buff *skb)
+{
+ struct iphdr *inner_iph = skb->nh.iph;
+
+ if (INET_ECN_is_ce(outer_iph->tos) &&
+ INET_ECN_is_not_ce(inner_iph->tos))
+ IP_ECN_set_ce(inner_iph);
+}
+
+static int ipip_xfrm_rcv(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ struct iphdr *outer_iph = skb->nh.iph;
+
+ if (!pskb_may_pull(skb, sizeof(struct iphdr)))
+ return -EINVAL;
+ skb->mac.raw = skb->nh.raw;
+ skb->nh.raw = skb->data;
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+ dst_release(skb->dst);
+ skb->dst = NULL;
+ skb->protocol = htons(ETH_P_IP);
+ skb->pkt_type = PACKET_HOST;
+ ipip_ecn_decapsulate(outer_iph, skb);
+ netif_rx(skb);
+
+ return 0;
+}
+
+static struct xfrm_tunnel *ipip_handler;
+static DECLARE_MUTEX(xfrm4_tunnel_sem);
+
+int xfrm4_tunnel_register(struct xfrm_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm4_tunnel_sem);
+ ret = 0;
+ if (ipip_handler != NULL)
+ ret = -EINVAL;
+ if (!ret)
+ ipip_handler = handler;
+ up(&xfrm4_tunnel_sem);
+
+ return ret;
+}
+
+int xfrm4_tunnel_deregister(struct xfrm_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm4_tunnel_sem);
+ ret = 0;
+ if (ipip_handler != handler)
+ ret = -EINVAL;
+ if (!ret)
+ ipip_handler = NULL;
+ up(&xfrm4_tunnel_sem);
+
+ synchronize_net();
+
+ return ret;
+}
+
+static int ipip_rcv(struct sk_buff *skb)
+{
+ struct xfrm_tunnel *handler = ipip_handler;
+ struct xfrm_state *x = NULL;
+ int err;
+
+ /* Tunnel devices take precedence. */
+ if (handler) {
+ err = handler->handler(skb);
+ if (!err)
+ goto out;
+ }
+
+ x = xfrm_state_lookup((xfrm_address_t *)&skb->nh.iph->daddr,
+ skb->nh.iph->saddr,
+ IPPROTO_IPIP, AF_INET);
+
+ if (x) {
+ spin_lock(&x->lock);
+
+ if (unlikely(x->km.state != XFRM_STATE_VALID))
+ goto drop_unlock;
+ }
+
+ err = ipip_xfrm_rcv(x, NULL, skb);
+ if (err)
+ goto drop_unlock;
+
+ if (x) {
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock(&x->lock);
+
+ xfrm_state_put(x);
+ }
+
+ return 0;
+
+drop_unlock:
+ if (x) {
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+ }
+ kfree_skb(skb);
+out:
+ return 0;
+}
+
+static void ipip_err(struct sk_buff *skb, u32 info)
+{
+ struct xfrm_tunnel *handler = ipip_handler;
+ u32 arg = info;
+
+ if (handler)
+ handler->err_handler(skb, &arg);
+}
+
+static int ipip_init_state(struct xfrm_state *x, void *args)
+{
+ if (!x->props.mode)
+ return -EINVAL;
+ x->props.header_len = sizeof(struct iphdr);
+
+ return 0;
+}
+
+static void ipip_destroy(struct xfrm_state *x)
+{
+}
+
+static struct xfrm_type ipip_type = {
+ .description = "IPIP",
+ .proto = IPPROTO_IPIP,
+ .init_state = ipip_init_state,
+ .destructor = ipip_destroy,
+ .input = ipip_xfrm_rcv,
+ .output = ipip_output
+};
+
+static struct inet_protocol ipip_protocol = {
+ .handler = ipip_rcv,
+ .err_handler = ipip_err,
+};
+
+static int __init ipip_init(void)
+{
+ SET_MODULE_OWNER(&ipip_type);
+ if (xfrm_register_type(&ipip_type, AF_INET) < 0) {
+ printk(KERN_INFO "ipip init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet_add_protocol(&ipip_protocol, IPPROTO_IPIP) < 0) {
+ printk(KERN_INFO "ipip init: can't add protocol\n");
+ xfrm_unregister_type(&ipip_type, AF_INET);
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static void __exit ipip_fini(void)
+{
+ if (inet_del_protocol(&ipip_protocol, IPPROTO_IPIP) < 0)
+ printk(KERN_INFO "ipip close: can't remove protocol\n");
+ if (xfrm_unregister_type(&ipip_type, AF_INET) < 0)
+ printk(KERN_INFO "ipip close: can't remove xfrm type\n");
+}
+
+module_init(ipip_init);
+module_exit(ipip_fini);
+MODULE_LICENSE("GPL");
diff -Nru a/net/ipv6/Config.in b/net/ipv6/Config.in
--- a/net/ipv6/Config.in Thu May 8 10:41:37 2003
+++ b/net/ipv6/Config.in Thu May 8 10:41:37 2003
@@ -2,9 +2,9 @@
# IPv6 configuration
#
-#bool ' IPv6: flow policy support' CONFIG_RT6_POLICY
-#bool ' IPv6: firewall support' CONFIG_IPV6_FIREWALL
-
if [ "$CONFIG_NETFILTER" != "n" ]; then
source net/ipv6/netfilter/Config.in
fi
+
+tristate 'IPv6: AH transformation' CONFIG_INET6_AH
+tristate 'IPv6: ESP transformation' CONFIG_INET6_ESP
diff -Nru a/net/ipv6/Makefile b/net/ipv6/Makefile
--- a/net/ipv6/Makefile Thu May 8 10:41:37 2003
+++ b/net/ipv6/Makefile Thu May 8 10:41:37 2003
@@ -9,14 +9,18 @@
O_TARGET := ipv6.o
+export-objs := ipv6_syms.o
+
obj-y := af_inet6.o anycast.o ip6_output.o ip6_input.o addrconf.o sit.o \
route.o ip6_fib.o ipv6_sockglue.o ndisc.o udp.o raw.o \
protocol.o icmp.o mcast.o reassembly.o tcp_ipv6.o \
exthdrs.o sysctl_net_ipv6.o datagram.o proc.o \
- ip6_flowlabel.o
+ ip6_flowlabel.o xfrm6_policy.o xfrm6_state.o xfrm6_input.o \
+ ipv6_syms.o
-obj-m := $(O_TARGET)
+obj-$(CONFIG_INET6_AH) += ah6.o
+obj-$(CONFIG_INET6_ESP) += esp6.o
-#obj-$(CONFIG_IPV6_FIREWALL) += ip6_fw.o
+obj-m := $(O_TARGET)
include $(TOPDIR)/Rules.make
diff -Nru a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
--- a/net/ipv6/af_inet6.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/af_inet6.c Thu May 8 10:41:36 2003
@@ -672,6 +672,12 @@
addrconf_init();
sit_init();
+ /* Init v6 extension headers. */
+ ipv6_rthdr_init();
+ ipv6_frag_init();
+ ipv6_nodata_init();
+ ipv6_destopt_init();
+
/* Init v6 transport protocols. */
udpv6_init();
tcpv6_init();
diff -Nru a/net/ipv6/ah6.c b/net/ipv6/ah6.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/ah6.c Thu May 8 10:41:38 2003
@@ -0,0 +1,364 @@
+/*
+ * Copyright (C)2002 USAGI/WIDE Project
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Authors
+ *
+ * Mitsuru KANDA @USAGI : IPv6 Support
+ * Kazunori MIYAZAWA @USAGI :
+ * Kunihiro Ishiguro :
+ *
+ * This file is derived from net/ipv4/ah.c.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/ah.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <net/icmp.h>
+#include <net/ipv6.h>
+#include <net/xfrm.h>
+#include <asm/scatterlist.h>
+
+/* XXX no ipv6 ah specific */
+#define NIP6(addr) \
+ ntohs((addr).s6_addr16[0]),\
+ ntohs((addr).s6_addr16[1]),\
+ ntohs((addr).s6_addr16[2]),\
+ ntohs((addr).s6_addr16[3]),\
+ ntohs((addr).s6_addr16[4]),\
+ ntohs((addr).s6_addr16[5]),\
+ ntohs((addr).s6_addr16[6]),\
+ ntohs((addr).s6_addr16[7])
+
+int ah6_output(struct sk_buff *skb)
+{
+ int err;
+ int hdr_len = sizeof(struct ipv6hdr);
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct ipv6hdr *iph = NULL;
+ struct ip_auth_hdr *ah;
+ struct ah_data *ahp;
+ u16 nh_offset = 0;
+ u8 nexthdr;
+
+ if (skb->ip_summed == CHECKSUM_HW && skb_checksum_help(skb) == NULL) {
+ err = -EINVAL;
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_check_output(x, skb, AF_INET6);
+ if (err)
+ goto error;
+
+ if (x->props.mode) {
+ iph = skb->nh.ipv6h;
+ skb->nh.ipv6h = (struct ipv6hdr*)skb_push(skb, x->props.header_len);
+ skb->nh.ipv6h->version = 6;
+ skb->nh.ipv6h->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ skb->nh.ipv6h->nexthdr = IPPROTO_AH;
+ memcpy(&skb->nh.ipv6h->saddr, &x->props.saddr, sizeof(struct in6_addr));
+ memcpy(&skb->nh.ipv6h->daddr, &x->id.daddr, sizeof(struct in6_addr));
+ ah = (struct ip_auth_hdr*)(skb->nh.ipv6h+1);
+ ah->nexthdr = IPPROTO_IPV6;
+ } else {
+ hdr_len = skb->h.raw - skb->nh.raw;
+ iph = kmalloc(hdr_len, GFP_ATOMIC);
+ if (!iph) {
+ err = -ENOMEM;
+ goto error;
+ }
+ memcpy(iph, skb->data, hdr_len);
+ skb->nh.ipv6h = (struct ipv6hdr*)skb_push(skb, x->props.header_len);
+ memcpy(skb->nh.ipv6h, iph, hdr_len);
+ nexthdr = xfrm6_clear_mutable_options(skb, &nh_offset, XFRM_POLICY_OUT);
+ if (nexthdr == 0)
+ goto error;
+
+ skb->nh.raw[nh_offset] = IPPROTO_AH;
+ skb->nh.ipv6h->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ ah = (struct ip_auth_hdr*)(skb->nh.raw+hdr_len);
+ skb->h.raw = (unsigned char*) ah;
+ ah->nexthdr = nexthdr;
+ }
+
+ skb->nh.ipv6h->priority = 0;
+ skb->nh.ipv6h->flow_lbl[0] = 0;
+ skb->nh.ipv6h->flow_lbl[1] = 0;
+ skb->nh.ipv6h->flow_lbl[2] = 0;
+ skb->nh.ipv6h->hop_limit = 0;
+
+ ahp = x->data;
+ ah->hdrlen = (XFRM_ALIGN8(sizeof(struct ipv6_auth_hdr) +
+ ahp->icv_trunc_len) >> 2) - 2;
+
+ ah->reserved = 0;
+ ah->spi = x->id.spi;
+ ah->seq_no = htonl(++x->replay.oseq);
+ ahp->icv(ahp, skb, ah->auth_data);
+
+ if (x->props.mode) {
+ skb->nh.ipv6h->hop_limit = iph->hop_limit;
+ skb->nh.ipv6h->priority = iph->priority;
+ skb->nh.ipv6h->flow_lbl[0] = iph->flow_lbl[0];
+ skb->nh.ipv6h->flow_lbl[1] = iph->flow_lbl[1];
+ skb->nh.ipv6h->flow_lbl[2] = iph->flow_lbl[2];
+ } else {
+ memcpy(skb->nh.ipv6h, iph, hdr_len);
+ skb->nh.raw[nh_offset] = IPPROTO_AH;
+ skb->nh.ipv6h->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ kfree (iph);
+ }
+
+ skb->nh.raw = skb->data;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock_bh(&x->lock);
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ return NET_XMIT_BYPASS;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ return err;
+}
+
+int ah6_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ int ah_hlen;
+ struct ipv6hdr *iph;
+ struct ipv6_auth_hdr *ah;
+ struct ah_data *ahp;
+ unsigned char *tmp_hdr = NULL;
+ int hdr_len = skb->h.raw - skb->nh.raw;
+ u8 nexthdr = 0;
+
+ if (!pskb_may_pull(skb, sizeof(struct ip_auth_hdr)))
+ goto out;
+
+ ah = (struct ipv6_auth_hdr*)skb->data;
+ ahp = x->data;
+ ah_hlen = (ah->hdrlen + 2) << 2;
+
+ if (ah_hlen != XFRM_ALIGN8(sizeof(struct ipv6_auth_hdr) + ahp->icv_full_len) &&
+ ah_hlen != XFRM_ALIGN8(sizeof(struct ipv6_auth_hdr) + ahp->icv_trunc_len))
+ goto out;
+
+ if (!pskb_may_pull(skb, ah_hlen))
+ goto out;
+
+ /* We are going to _remove_ AH header to keep sockets happy,
+ * so... Later this can change. */
+ if (skb_cloned(skb) &&
+ pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ goto out;
+
+ tmp_hdr = kmalloc(hdr_len, GFP_ATOMIC);
+ if (!tmp_hdr)
+ goto out;
+ memcpy(tmp_hdr, skb->nh.raw, hdr_len);
+ ah = (struct ipv6_auth_hdr*)skb->data;
+ iph = skb->nh.ipv6h;
+
+ {
+ u8 auth_data[ahp->icv_trunc_len];
+
+ memcpy(auth_data, ah->auth_data, ahp->icv_trunc_len);
+ skb_push(skb, skb->data - skb->nh.raw);
+ ahp->icv(ahp, skb, ah->auth_data);
+ if (memcmp(ah->auth_data, auth_data, ahp->icv_trunc_len)) {
+ if (net_ratelimit())
+ printk(KERN_WARNING "ipsec ah authentication error\n");
+ x->stats.integrity_failed++;
+ goto free_out;
+ }
+ }
+
+ nexthdr = ((struct ipv6hdr*)tmp_hdr)->nexthdr = ah->nexthdr;
+ skb->nh.raw = skb_pull(skb, (ah->hdrlen+2)<<2);
+ memcpy(skb->nh.raw, tmp_hdr, hdr_len);
+ skb->nh.ipv6h->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ skb_pull(skb, hdr_len);
+ skb->h.raw = skb->data;
+
+
+ kfree(tmp_hdr);
+
+ return nexthdr;
+
+free_out:
+ kfree(tmp_hdr);
+out:
+ return -EINVAL;
+}
+
+void ah6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ int type, int code, int offset, __u32 info)
+{
+ struct ipv6hdr *iph = (struct ipv6hdr*)skb->data;
+ struct ip_auth_hdr *ah = (struct ip_auth_hdr*)(skb->data+offset);
+ struct xfrm_state *x;
+
+ if (type != ICMPV6_DEST_UNREACH &&
+ type != ICMPV6_PKT_TOOBIG)
+ return;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, ah->spi, IPPROTO_AH, AF_INET6);
+ if (!x)
+ return;
+
+ printk(KERN_DEBUG "pmtu discovery on SA AH/%08x/"
+ "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ ntohl(ah->spi), NIP6(iph->daddr));
+
+ xfrm_state_put(x);
+}
+
+static int ah6_init_state(struct xfrm_state *x, void *args)
+{
+ struct ah_data *ahp = NULL;
+ struct xfrm_algo_desc *aalg_desc;
+
+ /* null auth can use a zero length key */
+ if (x->aalg->alg_key_len > 512)
+ goto error;
+
+ ahp = kmalloc(sizeof(*ahp), GFP_KERNEL);
+ if (ahp == NULL)
+ return -ENOMEM;
+
+ memset(ahp, 0, sizeof(*ahp));
+
+ ahp->key = x->aalg->alg_key;
+ ahp->key_len = (x->aalg->alg_key_len+7)/8;
+ ahp->tfm = crypto_alloc_tfm(x->aalg->alg_name, 0);
+ if (!ahp->tfm)
+ goto error;
+ ahp->icv = ah_hmac_digest;
+
+ /*
+ * Lookup the algorithm description maintained by xfrm_algo,
+ * verify crypto transform properties, and store information
+ * we need for AH processing. This lookup cannot fail here
+ * after a successful crypto_alloc_tfm().
+ */
+ aalg_desc = xfrm_aalg_get_byname(x->aalg->alg_name);
+ BUG_ON(!aalg_desc);
+
+ if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
+ crypto_tfm_alg_digestsize(ahp->tfm)) {
+ printk(KERN_INFO "AH: %s digestsize %u != %hu\n",
+ x->aalg->alg_name, crypto_tfm_alg_digestsize(ahp->tfm),
+ aalg_desc->uinfo.auth.icv_fullbits/8);
+ goto error;
+ }
+
+ ahp->icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
+ ahp->icv_trunc_len = aalg_desc->uinfo.auth.icv_truncbits/8;
+
+ ahp->work_icv = kmalloc(ahp->icv_full_len, GFP_KERNEL);
+ if (!ahp->work_icv)
+ goto error;
+
+ x->props.header_len = XFRM_ALIGN8(sizeof(struct ipv6_auth_hdr) + ahp->icv_trunc_len);
+ if (x->props.mode)
+ x->props.header_len += sizeof(struct ipv6hdr);
+ x->data = ahp;
+
+ return 0;
+
+error:
+ if (ahp) {
+ if (ahp->work_icv)
+ kfree(ahp->work_icv);
+ if (ahp->tfm)
+ crypto_free_tfm(ahp->tfm);
+ kfree(ahp);
+ }
+ return -EINVAL;
+}
+
+static void ah6_destroy(struct xfrm_state *x)
+{
+ struct ah_data *ahp = x->data;
+
+ if (ahp->work_icv) {
+ kfree(ahp->work_icv);
+ ahp->work_icv = NULL;
+ }
+ if (ahp->tfm) {
+ crypto_free_tfm(ahp->tfm);
+ ahp->tfm = NULL;
+ }
+ kfree(ahp);
+}
+
+static struct xfrm_type ah6_type =
+{
+ .description = "AH6",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_AH,
+ .init_state = ah6_init_state,
+ .destructor = ah6_destroy,
+ .input = ah6_input,
+ .output = ah6_output
+};
+
+static struct inet6_protocol ah6_protocol = {
+ .handler = xfrm6_rcv,
+ .err_handler = ah6_err,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+int __init ah6_init(void)
+{
+ if (xfrm_register_type(&ah6_type, AF_INET6) < 0) {
+ printk(KERN_INFO "ipv6 ah init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+
+ if (inet6_add_protocol(&ah6_protocol, IPPROTO_AH) < 0) {
+ printk(KERN_INFO "ipv6 ah init: can't add protocol\n");
+ xfrm_unregister_type(&ah6_type, AF_INET6);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void __exit ah6_fini(void)
+{
+ if (inet6_del_protocol(&ah6_protocol, IPPROTO_AH) < 0)
+ printk(KERN_INFO "ipv6 ah close: can't remove protocol\n");
+
+ if (xfrm_unregister_type(&ah6_type, AF_INET6) < 0)
+ printk(KERN_INFO "ipv6 ah close: can't remove xfrm type\n");
+
+}
+
+module_init(ah6_init);
+module_exit(ah6_fini);
+
+MODULE_LICENSE("GPL");
diff -Nru a/net/ipv6/datagram.c b/net/ipv6/datagram.c
--- a/net/ipv6/datagram.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/datagram.c Thu May 8 10:41:36 2003
@@ -89,7 +89,7 @@
serr->ee.ee_info = info;
serr->ee.ee_data = 0;
serr->addr_offset = (u8*)&iph->daddr - skb->nh.raw;
- serr->port = fl->uli_u.ports.dport;
+ serr->port = fl->fl_ip_dport;
skb->h.raw = skb->tail;
__skb_pull(skb, skb->tail - skb->data);
diff -Nru a/net/ipv6/esp6.c b/net/ipv6/esp6.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/esp6.c Thu May 8 10:41:38 2003
@@ -0,0 +1,531 @@
+/*
+ * Copyright (C)2002 USAGI/WIDE Project
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Authors
+ *
+ * Mitsuru KANDA @USAGI : IPv6 Support
+ * Kazunori MIYAZAWA @USAGI :
+ * Kunihiro Ishiguro :
+ *
+ * This file is derived from net/ipv4/esp.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/esp.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+#include <linux/pfkeyv2.h>
+#include <linux/random.h>
+#include <net/icmp.h>
+#include <net/ipv6.h>
+#include <linux/icmpv6.h>
+
+#define MAX_SG_ONSTACK 4
+
+/* BUGS:
+ * - we assume replay seqno is always present.
+ */
+
+/* Move to common area: it is shared with AH. */
+/* Common with AH after some work on arguments. */
+
+/* XXX no ipv6 esp specific */
+#define NIP6(addr) \
+ ntohs((addr).s6_addr16[0]),\
+ ntohs((addr).s6_addr16[1]),\
+ ntohs((addr).s6_addr16[2]),\
+ ntohs((addr).s6_addr16[3]),\
+ ntohs((addr).s6_addr16[4]),\
+ ntohs((addr).s6_addr16[5]),\
+ ntohs((addr).s6_addr16[6]),\
+ ntohs((addr).s6_addr16[7])
+
+static int get_offset(u8 *packet, u32 packet_len, u8 *nexthdr, struct ipv6_opt_hdr **prevhdr)
+{
+ u16 offset = sizeof(struct ipv6hdr);
+ struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(packet + offset);
+ u8 nextnexthdr;
+
+ *nexthdr = ((struct ipv6hdr*)packet)->nexthdr;
+
+ while (offset + 1 < packet_len) {
+
+ switch (*nexthdr) {
+
+ case NEXTHDR_HOP:
+ case NEXTHDR_ROUTING:
+ offset += ipv6_optlen(exthdr);
+ *nexthdr = exthdr->nexthdr;
+ *prevhdr = exthdr;
+ exthdr = (struct ipv6_opt_hdr*)(packet + offset);
+ break;
+
+ case NEXTHDR_DEST:
+ nextnexthdr =
+ ((struct ipv6_opt_hdr*)(packet + offset + ipv6_optlen(exthdr)))->nexthdr;
+ /* XXX We know the option is inner dest opt
+ with next next header check. */
+ if (nextnexthdr != NEXTHDR_HOP &&
+ nextnexthdr != NEXTHDR_ROUTING &&
+ nextnexthdr != NEXTHDR_DEST) {
+ return offset;
+ }
+ offset += ipv6_optlen(exthdr);
+ *nexthdr = exthdr->nexthdr;
+ *prevhdr = exthdr;
+ exthdr = (struct ipv6_opt_hdr*)(packet + offset);
+ break;
+
+ default :
+ return offset;
+ }
+ }
+
+ return offset;
+}
+
+int esp6_output(struct sk_buff *skb)
+{
+ int err;
+ int hdr_len = 0;
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct ipv6hdr *iph = NULL, *top_iph;
+ struct ipv6_esp_hdr *esph;
+ struct crypto_tfm *tfm;
+ struct esp_data *esp;
+ struct sk_buff *trailer;
+ struct ipv6_opt_hdr *prevhdr = NULL;
+ int blksize;
+ int clen;
+ int alen;
+ int nfrags;
+ u8 nexthdr;
+
+ /* First, if the skb is not checksummed, complete checksum. */
+ if (skb->ip_summed == CHECKSUM_HW && skb_checksum_help(skb) == NULL) {
+ err = -EINVAL;
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_check_output(x, skb, AF_INET6);
+ if (err)
+ goto error;
+ err = -ENOMEM;
+
+ /* Strip IP header in transport mode. Save it. */
+
+ if (!x->props.mode) {
+ hdr_len = get_offset(skb->nh.raw, skb->len, &nexthdr, &prevhdr);
+ iph = kmalloc(hdr_len, GFP_ATOMIC);
+ if (!iph) {
+ err = -ENOMEM;
+ goto error;
+ }
+ memcpy(iph, skb->nh.raw, hdr_len);
+ __skb_pull(skb, hdr_len);
+ }
+
+ /* Now skb is pure payload to encrypt */
+
+ /* Round to block size */
+ clen = skb->len;
+
+ esp = x->data;
+ alen = esp->auth.icv_trunc_len;
+ tfm = esp->conf.tfm;
+ blksize = (crypto_tfm_alg_blocksize(tfm) + 3) & ~3;
+ clen = (clen + 2 + blksize-1)&~(blksize-1);
+ if (esp->conf.padlen)
+ clen = (clen + esp->conf.padlen-1)&~(esp->conf.padlen-1);
+
+ if ((nfrags = skb_cow_data(skb, clen-skb->len+alen, &trailer)) < 0) {
+ if (!x->props.mode && iph) kfree(iph);
+ goto error;
+ }
+
+ /* Fill padding... */
+ do {
+ int i;
+ for (i=0; i<clen-skb->len - 2; i++)
+ *(u8*)(trailer->tail + i) = i+1;
+ } while (0);
+ *(u8*)(trailer->tail + clen-skb->len - 2) = (clen - skb->len)-2;
+ pskb_put(skb, trailer, clen - skb->len);
+
+ if (x->props.mode) {
+ iph = skb->nh.ipv6h;
+ top_iph = (struct ipv6hdr*)skb_push(skb, x->props.header_len);
+ esph = (struct ipv6_esp_hdr*)(top_iph+1);
+ *(u8*)(trailer->tail - 1) = IPPROTO_IPV6;
+ top_iph->version = 6;
+ top_iph->priority = iph->priority;
+ top_iph->flow_lbl[0] = iph->flow_lbl[0];
+ top_iph->flow_lbl[1] = iph->flow_lbl[1];
+ top_iph->flow_lbl[2] = iph->flow_lbl[2];
+ top_iph->nexthdr = IPPROTO_ESP;
+ top_iph->payload_len = htons(skb->len + alen - sizeof(struct ipv6hdr));
+ top_iph->hop_limit = iph->hop_limit;
+ memcpy(&top_iph->saddr, (struct in6_addr *)&x->props.saddr, sizeof(struct in6_addr));
+ memcpy(&top_iph->daddr, (struct in6_addr *)&x->id.daddr, sizeof(struct in6_addr));
+ } else {
+ /* XXX exthdr */
+ esph = (struct ipv6_esp_hdr*)skb_push(skb, x->props.header_len);
+ skb->h.raw = (unsigned char*)esph;
+ top_iph = (struct ipv6hdr*)skb_push(skb, hdr_len);
+ memcpy(top_iph, iph, hdr_len);
+ kfree(iph);
+ top_iph->payload_len = htons(skb->len + alen - sizeof(struct ipv6hdr));
+ if (prevhdr) {
+ prevhdr->nexthdr = IPPROTO_ESP;
+ } else {
+ top_iph->nexthdr = IPPROTO_ESP;
+ }
+ *(u8*)(trailer->tail - 1) = nexthdr;
+ }
+
+ esph->spi = x->id.spi;
+ esph->seq_no = htonl(++x->replay.oseq);
+
+ if (esp->conf.ivlen)
+ crypto_cipher_set_iv(tfm, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+
+ do {
+ struct scatterlist sgbuf[nfrags>MAX_SG_ONSTACK ? 0 : nfrags];
+ struct scatterlist *sg = sgbuf;
+
+ if (unlikely(nfrags > MAX_SG_ONSTACK)) {
+ sg = kmalloc(sizeof(struct scatterlist)*nfrags, GFP_ATOMIC);
+ if (!sg)
+ goto error;
+ }
+ skb_to_sgvec(skb, sg, esph->enc_data+esp->conf.ivlen-skb->data, clen);
+ crypto_cipher_encrypt(tfm, sg, sg, clen);
+ if (unlikely(sg != sgbuf))
+ kfree(sg);
+ } while (0);
+
+ if (esp->conf.ivlen) {
+ memcpy(esph->enc_data, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+ crypto_cipher_get_iv(tfm, esp->conf.ivec, crypto_tfm_alg_ivsize(tfm));
+ }
+
+ if (esp->auth.icv_full_len) {
+ esp->auth.icv(esp, skb, (u8*)esph-skb->data,
+ sizeof(struct ipv6_esp_hdr) + esp->conf.ivlen+clen, trailer->tail);
+ pskb_put(skb, trailer, alen);
+ }
+
+ skb->nh.raw = skb->data;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+ spin_unlock_bh(&x->lock);
+ if ((skb->dst = dst_pop(dst)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ return NET_XMIT_BYPASS;
+
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ return err;
+}
+
+int esp6_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ struct ipv6hdr *iph;
+ struct ipv6_esp_hdr *esph;
+ struct esp_data *esp = x->data;
+ struct sk_buff *trailer;
+ int blksize = crypto_tfm_alg_blocksize(esp->conf.tfm);
+ int alen = esp->auth.icv_trunc_len;
+ int elen = skb->len - sizeof(struct ipv6_esp_hdr) - esp->conf.ivlen - alen;
+
+ int hdr_len = skb->h.raw - skb->nh.raw;
+ int nfrags;
+ u8 ret_nexthdr = 0;
+ unsigned char *tmp_hdr = NULL;
+
+ if (!pskb_may_pull(skb, sizeof(struct ipv6_esp_hdr)))
+ goto out;
+
+ if (elen <= 0 || (elen & (blksize-1)))
+ goto out;
+
+ tmp_hdr = kmalloc(hdr_len, GFP_ATOMIC);
+ if (!tmp_hdr)
+ goto out;
+ memcpy(tmp_hdr, skb->nh.raw, hdr_len);
+
+ /* If integrity check is required, do this. */
+ if (esp->auth.icv_full_len) {
+ u8 sum[esp->auth.icv_full_len];
+ u8 sum1[alen];
+
+ esp->auth.icv(esp, skb, 0, skb->len-alen, sum);
+
+ if (skb_copy_bits(skb, skb->len-alen, sum1, alen))
+ BUG();
+
+ if (unlikely(memcmp(sum, sum1, alen))) {
+ x->stats.integrity_failed++;
+ goto out;
+ }
+ }
+
+ if ((nfrags = skb_cow_data(skb, 0, &trailer)) < 0)
+ goto out;
+
+ skb->ip_summed = CHECKSUM_NONE;
+
+ esph = (struct ipv6_esp_hdr*)skb->data;
+ iph = skb->nh.ipv6h;
+
+ /* Get ivec. This can be wrong; check against other implementations. */
+ if (esp->conf.ivlen)
+ crypto_cipher_set_iv(esp->conf.tfm, esph->enc_data, crypto_tfm_alg_ivsize(esp->conf.tfm));
+
+ {
+ u8 nexthdr[2];
+ struct scatterlist sgbuf[nfrags>MAX_SG_ONSTACK ? 0 : nfrags];
+ struct scatterlist *sg = sgbuf;
+ u8 padlen;
+
+ if (unlikely(nfrags > MAX_SG_ONSTACK)) {
+ sg = kmalloc(sizeof(struct scatterlist)*nfrags, GFP_ATOMIC);
+ if (!sg)
+ goto out;
+ }
+ skb_to_sgvec(skb, sg, sizeof(struct ipv6_esp_hdr) + esp->conf.ivlen, elen);
+ crypto_cipher_decrypt(esp->conf.tfm, sg, sg, elen);
+ if (unlikely(sg != sgbuf))
+ kfree(sg);
+
+ if (skb_copy_bits(skb, skb->len-alen-2, nexthdr, 2))
+ BUG();
+
+ padlen = nexthdr[0];
+ if (padlen+2 >= elen) {
+ if (net_ratelimit()) {
+ printk(KERN_WARNING "ipsec esp packet is garbage padlen=%d, elen=%d\n", padlen+2, elen);
+ }
+ goto out;
+ }
+ /* ... check padding bits here. Silly. :-) */
+
+ ret_nexthdr = ((struct ipv6hdr*)tmp_hdr)->nexthdr = nexthdr[1];
+ pskb_trim(skb, skb->len - alen - padlen - 2);
+ skb->h.raw = skb_pull(skb, sizeof(struct ipv6_esp_hdr) + esp->conf.ivlen);
+ skb->nh.raw += sizeof(struct ipv6_esp_hdr) + esp->conf.ivlen;
+ memcpy(skb->nh.raw, tmp_hdr, hdr_len);
+ }
+ kfree(tmp_hdr);
+ return ret_nexthdr;
+
+out:
+ return -EINVAL;
+}
+
+static u32 esp6_get_max_size(struct xfrm_state *x, int mtu)
+{
+ struct esp_data *esp = x->data;
+ u32 blksize = crypto_tfm_alg_blocksize(esp->conf.tfm);
+
+ if (x->props.mode) {
+ mtu = (mtu + 2 + blksize-1)&~(blksize-1);
+ } else {
+ /* The worst case. */
+ mtu += 2 + blksize;
+ }
+ if (esp->conf.padlen)
+ mtu = (mtu + esp->conf.padlen-1)&~(esp->conf.padlen-1);
+
+ return mtu + x->props.header_len + esp->auth.icv_full_len;
+}
+
+void esp6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ int type, int code, int offset, __u32 info)
+{
+ struct ipv6hdr *iph = (struct ipv6hdr*)skb->data;
+ struct ipv6_esp_hdr *esph = (struct ipv6_esp_hdr*)(skb->data+offset);
+ struct xfrm_state *x;
+
+ if (type != ICMPV6_DEST_UNREACH &&
+ type != ICMPV6_PKT_TOOBIG)
+ return;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, esph->spi, IPPROTO_ESP, AF_INET6);
+ if (!x)
+ return;
+ printk(KERN_DEBUG "pmtu discovery on SA ESP/%08x/"
+ "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ ntohl(esph->spi), NIP6(iph->daddr));
+ xfrm_state_put(x);
+}
+
+void esp6_destroy(struct xfrm_state *x)
+{
+ struct esp_data *esp = x->data;
+
+ if (esp->conf.tfm) {
+ crypto_free_tfm(esp->conf.tfm);
+ esp->conf.tfm = NULL;
+ }
+ if (esp->conf.ivec) {
+ kfree(esp->conf.ivec);
+ esp->conf.ivec = NULL;
+ }
+ if (esp->auth.tfm) {
+ crypto_free_tfm(esp->auth.tfm);
+ esp->auth.tfm = NULL;
+ }
+ if (esp->auth.work_icv) {
+ kfree(esp->auth.work_icv);
+ esp->auth.work_icv = NULL;
+ }
+ kfree(esp);
+}
+
+int esp6_init_state(struct xfrm_state *x, void *args)
+{
+ struct esp_data *esp = NULL;
+
+ if (x->aalg) {
+ if (x->aalg->alg_key_len == 0 || x->aalg->alg_key_len > 512)
+ goto error;
+ }
+ if (x->ealg == NULL)
+ goto error;
+
+ esp = kmalloc(sizeof(*esp), GFP_KERNEL);
+ if (esp == NULL)
+ return -ENOMEM;
+
+ memset(esp, 0, sizeof(*esp));
+
+ if (x->aalg) {
+ struct xfrm_algo_desc *aalg_desc;
+
+ esp->auth.key = x->aalg->alg_key;
+ esp->auth.key_len = (x->aalg->alg_key_len+7)/8;
+ esp->auth.tfm = crypto_alloc_tfm(x->aalg->alg_name, 0);
+ if (esp->auth.tfm == NULL)
+ goto error;
+ esp->auth.icv = esp_hmac_digest;
+
+ aalg_desc = xfrm_aalg_get_byname(x->aalg->alg_name);
+ BUG_ON(!aalg_desc);
+
+ if (aalg_desc->uinfo.auth.icv_fullbits/8 !=
+ crypto_tfm_alg_digestsize(esp->auth.tfm)) {
+ printk(KERN_INFO "ESP: %s digestsize %u != %hu\n",
+ x->aalg->alg_name,
+ crypto_tfm_alg_digestsize(esp->auth.tfm),
+ aalg_desc->uinfo.auth.icv_fullbits/8);
+ goto error;
+ }
+
+ esp->auth.icv_full_len = aalg_desc->uinfo.auth.icv_fullbits/8;
+ esp->auth.icv_trunc_len = aalg_desc->uinfo.auth.icv_truncbits/8;
+
+ esp->auth.work_icv = kmalloc(esp->auth.icv_full_len, GFP_KERNEL);
+ if (!esp->auth.work_icv)
+ goto error;
+ }
+ esp->conf.key = x->ealg->alg_key;
+ esp->conf.key_len = (x->ealg->alg_key_len+7)/8;
+ esp->conf.tfm = crypto_alloc_tfm(x->ealg->alg_name, CRYPTO_TFM_MODE_CBC);
+ if (esp->conf.tfm == NULL)
+ goto error;
+ esp->conf.ivlen = crypto_tfm_alg_ivsize(esp->conf.tfm);
+ esp->conf.padlen = 0;
+ if (esp->conf.ivlen) {
+ esp->conf.ivec = kmalloc(esp->conf.ivlen, GFP_KERNEL);
+ get_random_bytes(esp->conf.ivec, esp->conf.ivlen);
+ }
+ crypto_cipher_setkey(esp->conf.tfm, esp->conf.key, esp->conf.key_len);
+ x->props.header_len = sizeof(struct ipv6_esp_hdr) + esp->conf.ivlen;
+ if (x->props.mode)
+ x->props.header_len += sizeof(struct ipv6hdr);
+ x->data = esp;
+ return 0;
+
+error:
+ if (esp) {
+ if (esp->auth.tfm)
+ crypto_free_tfm(esp->auth.tfm);
+ if (esp->auth.work_icv)
+ kfree(esp->auth.work_icv);
+ if (esp->conf.tfm)
+ crypto_free_tfm(esp->conf.tfm);
+ kfree(esp);
+ }
+ return -EINVAL;
+}
+
+static struct xfrm_type esp6_type =
+{
+ .description = "ESP6",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_ESP,
+ .init_state = esp6_init_state,
+ .destructor = esp6_destroy,
+ .get_max_size = esp6_get_max_size,
+ .input = esp6_input,
+ .output = esp6_output
+};
+
+static struct inet6_protocol esp6_protocol = {
+ .handler = xfrm6_rcv,
+ .err_handler = esp6_err,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+int __init esp6_init(void)
+{
+ if (xfrm_register_type(&esp6_type, AF_INET6) < 0) {
+ printk(KERN_INFO "ipv6 esp init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet6_add_protocol(&esp6_protocol, IPPROTO_ESP) < 0) {
+ printk(KERN_INFO "ipv6 esp init: can't add protocol\n");
+ xfrm_unregister_type(&esp6_type, AF_INET6);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static void __exit esp6_fini(void)
+{
+ if (inet6_del_protocol(&esp6_protocol, IPPROTO_ESP) < 0)
+ printk(KERN_INFO "ipv6 esp close: can't remove protocol\n");
+ if (xfrm_unregister_type(&esp6_type, AF_INET6) < 0)
+ printk(KERN_INFO "ipv6 esp close: can't remove xfrm type\n");
+}
+
+module_init(esp6_init);
+module_exit(esp6_fini);
+
+MODULE_LICENSE("GPL");
diff -Nru a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
--- a/net/ipv6/exthdrs.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/exthdrs.c Thu May 8 10:41:36 2003
@@ -18,6 +18,9 @@
/* Changes:
* yoshfuji : ensure not to overrun while parsing
* tlv options.
+ * Mitsuru KANDA @USAGI and: Remove ipv6_parse_exthdrs().
+ * YOSHIFUJI Hideaki @USAGI Register inbound extension header
+ * handlers as inet6_protocol{}.
*/
#include <linux/errno.h>
@@ -44,20 +47,6 @@
#include <asm/uaccess.h>
/*
- * Parsing inbound headers.
- *
- * Parsing function "func" returns offset wrt skb->nh of the place,
- * where next nexthdr value is stored or NULL, if parsing
- * failed. It should also update skb->h tp point at the next header.
- */
-
-struct hdrtype_proc
-{
- int type;
- int (*func) (struct sk_buff **, int offset);
-};
-
-/*
* Parsing tlv encoded headers.
*
* Parsing function "func" returns 1, if parsing succeed
@@ -164,9 +153,9 @@
{-1, NULL}
};
-static int ipv6_dest_opt(struct sk_buff **skb_ptr, int nhoff)
+static int ipv6_destopt_rcv(struct sk_buff **skbp, unsigned int *nhoffp)
{
- struct sk_buff *skb=*skb_ptr;
+ struct sk_buff *skb = *skbp;
struct inet6_skb_parm *opt = (struct inet6_skb_parm *)skb->cb;
if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+8) ||
@@ -179,29 +168,56 @@
if (ip6_parse_tlv(tlvprocdestopt_lst, skb)) {
skb->h.raw += ((skb->h.raw[1]+1)<<3);
- return opt->dst1;
+ *nhoffp = opt->dst1;
+ return 1;
}
return -1;
}
+static struct inet6_protocol destopt_protocol =
+{
+ .handler = ipv6_destopt_rcv,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+void __init ipv6_destopt_init(void)
+{
+ if (inet6_add_protocol(&destopt_protocol, IPPROTO_DSTOPTS) < 0)
+ printk(KERN_ERR "ipv6_destopt_init: Could not register protocol\n");
+}
+
/********************************
NONE header. No data in packet.
********************************/
-static int ipv6_nodata(struct sk_buff **skb_ptr, int nhoff)
+static int ipv6_nodata_rcv(struct sk_buff **skbp, unsigned int *nhoffp)
{
- kfree_skb(*skb_ptr);
- return -1;
+ struct sk_buff *skb = *skbp;
+
+ kfree_skb(skb);
+ return 0;
+}
+
+static struct inet6_protocol nodata_protocol =
+{
+ .handler = ipv6_nodata_rcv,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+void __init ipv6_nodata_init(void)
+{
+ if (inet6_add_protocol(&nodata_protocol, IPPROTO_NONE) < 0)
+ printk(KERN_ERR "ipv6_nodata_init: Could not register protocol\n");
}
/********************************
Routing header.
********************************/
-static int ipv6_routing_header(struct sk_buff **skb_ptr, int nhoff)
+static int ipv6_rthdr_rcv(struct sk_buff **skbp, unsigned int *nhoffp)
{
- struct sk_buff *skb = *skb_ptr;
+ struct sk_buff *skb = *skbp;
struct inet6_skb_parm *opt = (struct inet6_skb_parm *)skb->cb;
struct in6_addr *addr;
struct in6_addr daddr;
@@ -232,7 +248,8 @@
skb->h.raw += (hdr->hdrlen + 1) << 3;
opt->dst0 = opt->dst1;
opt->dst1 = 0;
- return (&hdr->nexthdr) - skb->nh.raw;
+ *nhoffp = (&hdr->nexthdr) - skb->nh.raw;
+ return 1;
}
if (hdr->type != IPV6_SRCRT_TYPE_0 || (hdr->hdrlen & 0x01)) {
@@ -242,7 +259,7 @@
/*
* This is the routing header forwarding algorithm from
- * RFC 1883, page 17.
+ * RFC 2460, page 16.
*/
n = hdr->hdrlen >> 1;
@@ -260,7 +277,7 @@
kfree_skb(skb);
if (skb2 == NULL)
return -1;
- *skb_ptr = skb = skb2;
+ *skbp = skb = skb2;
opt = (struct inet6_skb_parm *)skb2->cb;
hdr = (struct ipv6_rt_hdr *) skb2->h.raw;
}
@@ -288,7 +305,7 @@
dst_release(xchg(&skb->dst, NULL));
ip6_route_input(skb);
if (skb->dst->error) {
- skb->dst->input(skb);
+ dst_input(skb);
return -1;
}
if (skb->dst->dev->flags&IFF_LOOPBACK) {
@@ -302,10 +319,22 @@
goto looped_back;
}
- skb->dst->input(skb);
+ dst_input(skb);
return -1;
}
+static struct inet6_protocol rthdr_protocol =
+{
+ .handler = ipv6_rthdr_rcv,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+void __init ipv6_rthdr_init(void)
+{
+ if (inet6_add_protocol(&rthdr_protocol, IPPROTO_ROUTING) < 0)
+ printk(KERN_ERR "ipv6_rthdr_init: Could not register protocol\n");
+}
+
/*
This function inverts received rthdr.
NOTE: specs allow to make it automatically only if
@@ -370,97 +399,6 @@
memcpy(irthdr->addr+i, rthdr->addr+(n-1-i), 16);
return opt;
}
-
-/********************************
- AUTH header.
- ********************************/
-
-/*
- rfc1826 said, that if a host does not implement AUTH header
- it MAY ignore it. We use this hole 8)
-
- Actually, now we can implement OSPFv6 without kernel IPsec.
- Authentication for poors may be done in user space with the same success.
-
- Yes, it means, that we allow application to send/receive
- raw authentication header. Apparently, we suppose, that it knows
- what it does and calculates authentication data correctly.
- Certainly, it is possible only for udp and raw sockets, but not for tcp.
-
- AUTH header has 4byte granular length, which kills all the idea
- behind AUTOMATIC 64bit alignment of IPv6. Now we will lose
- cpu ticks, checking that sender did not something stupid
- and opt->hdrlen is even. Shit! --ANK (980730)
- */
-
-static int ipv6_auth_hdr(struct sk_buff **skb_ptr, int nhoff)
-{
- struct sk_buff *skb=*skb_ptr;
- struct inet6_skb_parm *opt = (struct inet6_skb_parm *)skb->cb;
- int len;
-
- if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+8))
- goto fail;
-
- /*
- * RFC2402 2.2 Payload Length
- * The 8-bit field specifies the length of AH in 32-bit words
- * (4-byte units), minus "2".
- * -- Noriaki Takamiya @USAGI Project
- */
- len = (skb->h.raw[1]+2)<<2;
-
- if (len&7)
- goto fail;
-
- if (!pskb_may_pull(skb, (skb->h.raw-skb->data)+len))
- goto fail;
-
- opt->auth = skb->h.raw - skb->nh.raw;
- skb->h.raw += len;
- return opt->auth;
-
-fail:
- kfree_skb(skb);
- return -1;
-}
-
-/* This list MUST NOT contain entry for NEXTHDR_HOP.
- It is parsed immediately after packet received
- and if it occurs somewhere in another place we must
- generate error.
- */
-
-struct hdrtype_proc hdrproc_lst[] = {
- {NEXTHDR_FRAGMENT, ipv6_reassembly},
- {NEXTHDR_ROUTING, ipv6_routing_header},
- {NEXTHDR_DEST, ipv6_dest_opt},
- {NEXTHDR_NONE, ipv6_nodata},
- {NEXTHDR_AUTH, ipv6_auth_hdr},
- /*
- {NEXTHDR_ESP, ipv6_esp_hdr},
- */
- {-1, NULL}
-};
-
-int ipv6_parse_exthdrs(struct sk_buff **skb_in, int nhoff)
-{
- struct hdrtype_proc *hdrt;
- u8 nexthdr = (*skb_in)->nh.raw[nhoff];
-
-restart:
- for (hdrt=hdrproc_lst; hdrt->type >= 0; hdrt++) {
- if (hdrt->type == nexthdr) {
- if ((nhoff = hdrt->func(skb_in, nhoff)) >= 0) {
- nexthdr = (*skb_in)->nh.raw[nhoff];
- goto restart;
- }
- return -1;
- }
- }
- return nhoff;
-}
-
/**********************************
Hop-by-hop options.
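The exthdrs.c hunks above replace the table-driven ipv6_parse_exthdrs() with per-header inet6_protocol handlers, and change the handler contract: instead of encoding the next-header offset in the return value (negative meaning "stop"), a handler now writes the offset through an `unsigned int *` out-parameter and returns a positive value to mean "keep parsing". A minimal sketch of the two conventions, with invented names that are not the kernel's:

```c
#include <assert.h>

/* Old convention (sketch): the return value doubles as the next-header
 * offset; any negative value means "stop, packet was consumed". */
static int old_style_handler(int offset)
{
	return offset + 8;	/* next header sits 8 bytes further on */
}

/* New convention (sketch): status and offset are separated.  Return 1
 * to continue parsing, 0 for "delivered", negative on error; the
 * next-header offset comes back through *nhoffp. */
static int new_style_handler(int offset, unsigned int *nhoffp)
{
	*nhoffp = (unsigned int)(offset + 8);
	return 1;
}
```

The separation lets a handler report "resubmit me" distinctly from "offset happens to be zero", which the old signed-return scheme could not express cleanly.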
diff -Nru a/net/ipv6/icmp.c b/net/ipv6/icmp.c
--- a/net/ipv6/icmp.c Thu May 8 10:41:37 2003
+++ b/net/ipv6/icmp.c Thu May 8 10:41:37 2003
@@ -74,17 +74,11 @@
#define icmpv6_socket __icmpv6_socket[smp_processor_id()]
#define icmpv6_socket_cpu(X) __icmpv6_socket[(X)]
-int icmpv6_rcv(struct sk_buff *skb);
+static int icmpv6_rcv(struct sk_buff **pskb, unsigned int *nhoffp);
-static struct inet6_protocol icmpv6_protocol =
-{
- icmpv6_rcv, /* handler */
- NULL, /* error control */
- NULL, /* next */
- IPPROTO_ICMPV6, /* protocol ID */
- 0, /* copy */
- NULL, /* data */
- "ICMPv6" /* name */
+static struct inet6_protocol icmpv6_protocol = {
+ .handler = icmpv6_rcv,
+ .flags = INET6_PROTO_FINAL,
};
struct icmpv6_msg {
@@ -318,12 +312,12 @@
}
fl.proto = IPPROTO_ICMPV6;
- fl.nl_u.ip6_u.daddr = &hdr->saddr;
- fl.nl_u.ip6_u.saddr = saddr;
+ fl.fl6_dst = &hdr->saddr;
+ fl.fl6_src = saddr;
fl.oif = iif;
fl.fl6_flowlabel = 0;
- fl.uli_u.icmpt.type = type;
- fl.uli_u.icmpt.code = code;
+ fl.fl_icmp_type = type;
+ fl.fl_icmp_code = code;
icmpv6_xmit_lock();
@@ -392,12 +386,12 @@
msg.daddr = &skb->nh.ipv6h->saddr;
fl.proto = IPPROTO_ICMPV6;
- fl.nl_u.ip6_u.daddr = msg.daddr;
- fl.nl_u.ip6_u.saddr = saddr;
+ fl.fl6_dst = msg.daddr;
+ fl.fl6_src = saddr;
fl.oif = skb->dev->ifindex;
fl.fl6_flowlabel = 0;
- fl.uli_u.icmpt.type = ICMPV6_ECHO_REPLY;
- fl.uli_u.icmpt.code = 0;
+ fl.fl_icmp_type = ICMPV6_ECHO_REPLY;
+ fl.fl_icmp_code = 0;
icmpv6_xmit_lock();
@@ -447,15 +441,9 @@
hash = nexthdr & (MAX_INET_PROTOS - 1);
- for (ipprot = (struct inet6_protocol *) inet6_protos[hash];
- ipprot != NULL;
- ipprot=(struct inet6_protocol *)ipprot->next) {
- if (ipprot->protocol != nexthdr)
- continue;
-
- if (ipprot->err_handler)
- ipprot->err_handler(skb, NULL, type, code, inner_offset, info);
- }
+ ipprot = inet6_protos[hash];
+ if (ipprot && ipprot->err_handler)
+ ipprot->err_handler(skb, NULL, type, code, inner_offset, info);
read_lock(&raw_v6_lock);
if ((sk = raw_v6_htable[hash]) != NULL) {
@@ -471,8 +459,9 @@
* Handle icmp messages
*/
-int icmpv6_rcv(struct sk_buff *skb)
+static int icmpv6_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
{
+ struct sk_buff *skb = *pskb;
struct net_device *dev = skb->dev;
struct in6_addr *saddr, *daddr;
struct ipv6hdr *orig_hdr;
@@ -643,7 +632,12 @@
sk->prot->unhash(sk);
}
- inet6_add_protocol(&icmpv6_protocol);
+ if (inet6_add_protocol(&icmpv6_protocol, IPPROTO_ICMPV6) < 0) {
+ printk(KERN_ERR "Failed to register ICMP6 protocol\n");
+ sock_release(icmpv6_socket);
+ icmpv6_socket = NULL;
+ return -EAGAIN;
+ }
return 0;
fail:
@@ -662,7 +656,7 @@
sock_release(icmpv6_socket_cpu(i));
icmpv6_socket_cpu(i) = NULL;
}
- inet6_del_protocol(&icmpv6_protocol);
+ inet6_del_protocol(&icmpv6_protocol, IPPROTO_ICMPV6);
}
static struct icmp6_err {
diff -Nru a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
--- a/net/ipv6/ip6_fib.c Thu May 8 10:41:37 2003
+++ b/net/ipv6/ip6_fib.c Thu May 8 10:41:37 2003
@@ -453,7 +453,6 @@
*/
if ((iter->rt6i_dev == rt->rt6i_dev) &&
- (iter->rt6i_flowr == rt->rt6i_flowr) &&
(ipv6_addr_cmp(&iter->rt6i_gateway,
&rt->rt6i_gateway) == 0)) {
if (!(iter->rt6i_flags&RTF_EXPIRES))
diff -Nru a/net/ipv6/ip6_fw.c b/net/ipv6/ip6_fw.c
--- a/net/ipv6/ip6_fw.c Thu May 8 10:41:37 2003
+++ /dev/null Wed Dec 31 16:00:00 1969
@@ -1,390 +0,0 @@
-/*
- * IPv6 Firewall
- * Linux INET6 implementation
- *
- * Authors:
- * Pedro Roque <roque@di.fc.ul.pt>
- *
- * $Id: ip6_fw.c,v 1.16 2001/10/31 08:17:58 davem Exp $
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
-#include <linux/config.h>
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/string.h>
-#include <linux/socket.h>
-#include <linux/sockios.h>
-#include <linux/net.h>
-#include <linux/route.h>
-#include <linux/netdevice.h>
-#include <linux/in6.h>
-#include <linux/udp.h>
-#include <linux/init.h>
-
-#include <net/ipv6.h>
-#include <net/ip6_route.h>
-#include <net/ip6_fw.h>
-#include <net/netlink.h>
-
-static unsigned long ip6_fw_rule_cnt;
-static struct ip6_fw_rule ip6_fw_rule_list = {
- {0},
- NULL, NULL,
- {0},
- IP6_FW_REJECT
-};
-
-static int ip6_fw_accept(struct dst_entry *dst, struct fl_acc_args *args);
-
-struct flow_rule_ops ip6_fw_ops = {
- ip6_fw_accept
-};
-
-
-static struct rt6_info ip6_fw_null_entry = {
- {{NULL, 0, 0, NULL,
- 0, 0, 0, 0, 0, 0, 0, 0, -ENETUNREACH, NULL, NULL,
- ip6_pkt_discard, ip6_pkt_discard, NULL}},
- NULL, {{{0}}}, 256, RTF_REJECT|RTF_NONEXTHOP, ~0UL,
- 0, &ip6_fw_rule_list, {{{{0}}}, 128}, {{{{0}}}, 128}
-};
-
-static struct fib6_node ip6_fw_fib = {
- NULL, NULL, NULL, NULL,
- &ip6_fw_null_entry,
- 0, RTN_ROOT|RTN_TL_ROOT, 0
-};
-
-rwlock_t ip6_fw_lock = RW_LOCK_UNLOCKED;
-
-
-static void ip6_rule_add(struct ip6_fw_rule *rl)
-{
- struct ip6_fw_rule *next;
-
- write_lock_bh(&ip6_fw_lock);
- ip6_fw_rule_cnt++;
- next = &ip6_fw_rule_list;
- rl->next = next;
- rl->prev = next->prev;
- rl->prev->next = rl;
- next->prev = rl;
- write_unlock_bh(&ip6_fw_lock);
-}
-
-static void ip6_rule_del(struct ip6_fw_rule *rl)
-{
- struct ip6_fw_rule *next, *prev;
-
- write_lock_bh(&ip6_fw_lock);
- ip6_fw_rule_cnt--;
- next = rl->next;
- prev = rl->prev;
- next->prev = prev;
- prev->next = next;
- write_unlock_bh(&ip6_fw_lock);
-}
-
-static __inline__ struct ip6_fw_rule * ip6_fwrule_alloc(void)
-{
- struct ip6_fw_rule *rl;
-
- rl = kmalloc(sizeof(struct ip6_fw_rule), GFP_ATOMIC);
- if (rl)
- {
- memset(rl, 0, sizeof(struct ip6_fw_rule));
- rl->flowr.ops = &ip6_fw_ops;
- }
- return rl;
-}
-
-static __inline__ void ip6_fwrule_free(struct ip6_fw_rule * rl)
-{
- kfree(rl);
-}
-
-static __inline__ int port_match(int rl_port, int fl_port)
-{
- int res = 0;
- if (rl_port == 0 || (rl_port == fl_port))
- res = 1;
- return res;
-}
-
-static int ip6_fw_accept_trans(struct ip6_fw_rule *rl,
- struct fl_acc_args *args)
-{
- int res = FLOWR_NODECISION;
- int proto = 0;
- int sport = 0;
- int dport = 0;
-
- switch (args->type) {
- case FL_ARG_FORWARD:
- {
- struct sk_buff *skb = args->fl_u.skb;
- struct ipv6hdr *hdr = skb->nh.ipv6h;
- int len;
-
- len = skb->len - sizeof(struct ipv6hdr);
-
- proto = hdr->nexthdr;
-
- switch (proto) {
- case IPPROTO_TCP:
- {
- struct tcphdr *th;
-
- if (len < sizeof(struct tcphdr)) {
- res = FLOWR_ERROR;
- goto out;
- }
- th = (struct tcphdr *)(hdr + 1);
- sport = th->source;
- dport = th->dest;
- break;
- }
- case IPPROTO_UDP:
- {
- struct udphdr *uh;
-
- if (len < sizeof(struct udphdr)) {
- res = FLOWR_ERROR;
- goto out;
- }
- uh = (struct udphdr *)(hdr + 1);
- sport = uh->source;
- dport = uh->dest;
- break;
- }
- default:
- goto out;
- };
- break;
- }
-
- case FL_ARG_ORIGIN:
- {
- proto = args->fl_u.fl_o.flow->proto;
-
- if (proto == IPPROTO_ICMPV6) {
- goto out;
- } else {
- sport = args->fl_u.fl_o.flow->uli_u.ports.sport;
- dport = args->fl_u.fl_o.flow->uli_u.ports.dport;
- }
- break;
- }
-
- if (proto == rl->info.proto &&
- port_match(args->fl_u.fl_o.flow->uli_u.ports.sport, sport) &&
- port_match(args->fl_u.fl_o.flow->uli_u.ports.dport, dport)) {
- if (rl->policy & IP6_FW_REJECT)
- res = FLOWR_SELECT;
- else
- res = FLOWR_CLEAR;
- }
-
- default:
-#if IP6_FW_DEBUG >= 1
- printk(KERN_DEBUG "ip6_fw_accept: unknown arg type\n");
-#endif
- goto out;
- };
-
-out:
- return res;
-}
-
-static int ip6_fw_accept(struct dst_entry *dst, struct fl_acc_args *args)
-{
- struct rt6_info *rt;
- struct ip6_fw_rule *rl;
- int proto;
- int res = FLOWR_NODECISION;
-
- rt = (struct rt6_info *) dst;
- rl = (struct ip6_fw_rule *) rt->rt6i_flowr;
-
- proto = rl->info.proto;
-
- switch (proto) {
- case 0:
- if (rl->policy & IP6_FW_REJECT)
- res = FLOWR_SELECT;
- else
- res = FLOWR_CLEAR;
- break;
- case IPPROTO_TCP:
- case IPPROTO_UDP:
- res = ip6_fw_accept_trans(rl, args);
- break;
- case IPPROTO_ICMPV6:
- };
-
- return res;
-}
-
-static struct dst_entry * ip6_fw_dup(struct dst_entry *frule,
- struct dst_entry *rt,
- struct fl_acc_args *args)
-{
- struct ip6_fw_rule *rl;
- struct rt6_info *nrt;
- struct rt6_info *frt;
-
- frt = (struct rt6_info *) frule;
-
- rl = (struct ip6_fw_rule *) frt->rt6i_flowr;
-
- nrt = ip6_rt_copy((struct rt6_info *) rt);
-
- if (nrt) {
- nrt->u.dst.input = frule->input;
- nrt->u.dst.output = frule->output;
-
- nrt->rt6i_flowr = flow_clone(frt->rt6i_flowr);
-
- nrt->rt6i_flags |= RTF_CACHE;
- nrt->rt6i_tstamp = jiffies;
- }
-
- return (struct dst_entry *) nrt;
-}
-
-int ip6_fw_reject(struct sk_buff *skb)
-{
-#if IP6_FW_DEBUG >= 1
- printk(KERN_DEBUG "packet rejected: \n");
-#endif
-
- icmpv6_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADM_PROHIBITED, 0,
- skb->dev);
- /*
- * send it via netlink, as (rule, skb)
- */
-
- kfree_skb(skb);
- return 0;
-}
-
-int ip6_fw_discard(struct sk_buff *skb)
-{
- printk(KERN_DEBUG "ip6_fw: BUG fw_reject called\n");
- kfree_skb(skb);
- return 0;
-}
-
-int ip6_fw_msg_add(struct ip6_fw_msg *msg)
-{
- struct in6_rtmsg rtmsg;
- struct ip6_fw_rule *rl;
- struct rt6_info *rt;
- int err;
-
- ipv6_addr_copy(&rtmsg.rtmsg_dst, &msg->dst);
- ipv6_addr_copy(&rtmsg.rtmsg_src, &msg->src);
- rtmsg.rtmsg_dst_len = msg->dst_len;
- rtmsg.rtmsg_src_len = msg->src_len;
- rtmsg.rtmsg_metric = IP6_RT_PRIO_FW;
-
- rl = ip6_fwrule_alloc();
-
- if (rl == NULL)
- return -ENOMEM;
-
- rl->policy = msg->policy;
- rl->info.proto = msg->proto;
- rl->info.uli_u.data = msg->u.data;
-
- rtmsg.rtmsg_flags = RTF_NONEXTHOP|RTF_POLICY;
- err = ip6_route_add(&rtmsg);
-
- if (err) {
- ip6_fwrule_free(rl);
- return err;
- }
-
- /* The rest will not work for now. --ABK (989725) */
-
-#ifndef notdef
- ip6_fwrule_free(rl);
- return -EPERM;
-#else
- rt->u.dst.error = -EPERM;
-
- if (msg->policy == IP6_FW_ACCEPT) {
- /*
- * Accept rules are never selected
- * (i.e. packets use normal forwarding)
- */
- rt->u.dst.input = ip6_fw_discard;
- rt->u.dst.output = ip6_fw_discard;
- } else {
- rt->u.dst.input = ip6_fw_reject;
- rt->u.dst.output = ip6_fw_reject;
- }
-
- ip6_rule_add(rl);
-
- rt->rt6i_flowr = flow_clone((struct flow_rule *)rl);
-
- return 0;
-#endif
-}
-
-static int ip6_fw_msgrcv(int unit, struct sk_buff *skb)
-{
- int count = 0;
-
- while (skb->len) {
- struct ip6_fw_msg *msg;
-
- if (skb->len < sizeof(struct ip6_fw_msg)) {
- count = -EINVAL;
- break;
- }
-
- msg = (struct ip6_fw_msg *) skb->data;
- skb_pull(skb, sizeof(struct ip6_fw_msg));
- count += sizeof(struct ip6_fw_msg);
-
- switch (msg->action) {
- case IP6_FW_MSG_ADD:
- ip6_fw_msg_add(msg);
- break;
- case IP6_FW_MSG_DEL:
- break;
- default:
- return -EINVAL;
- };
- }
-
- return count;
-}
-
-static void ip6_fw_destroy(struct flow_rule *rl)
-{
- ip6_fwrule_free((struct ip6_fw_rule *)rl);
-}
-
-#ifdef MODULE
-#define ip6_fw_init module_init
-#endif
-
-void __init ip6_fw_init(void)
-{
- netlink_attach(NETLINK_IP6_FW, ip6_fw_msgrcv);
-}
-
-#ifdef MODULE
-void cleanup_module(void)
-{
- netlink_detach(NETLINK_IP6_FW);
-}
-#endif
diff -Nru a/net/ipv6/ip6_input.c b/net/ipv6/ip6_input.c
--- a/net/ipv6/ip6_input.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/ip6_input.c Thu May 8 10:41:36 2003
@@ -15,6 +15,11 @@
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
+/* Changes
+ *
+ * Mitsuru KANDA @USAGI and
+ * YOSHIFUJI Hideaki @USAGI: Remove ipv6_parse_exthdrs().
+ */
#include <linux/errno.h>
#include <linux/types.h>
@@ -39,6 +44,7 @@
#include <net/ndisc.h>
#include <net/ip6_route.h>
#include <net/addrconf.h>
+#include <net/xfrm.h>
@@ -47,7 +53,7 @@
if (skb->dst == NULL)
ip6_route_input(skb);
- return skb->dst->input(skb);
+ return dst_input(skb);
}
int ipv6_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt)
@@ -121,13 +127,12 @@
static inline int ip6_input_finish(struct sk_buff *skb)
{
- struct ipv6hdr *hdr = skb->nh.ipv6h;
struct inet6_protocol *ipprot;
struct sock *raw_sk;
- int nhoff;
+ unsigned int nhoff;
int nexthdr;
- int found = 0;
u8 hash;
+ int cksum_sub = 0;
skb->h.raw = skb->nh.raw + sizeof(struct ipv6hdr);
@@ -135,7 +140,7 @@
* Parse extension headers
*/
- nexthdr = hdr->nexthdr;
+ nexthdr = skb->nh.ipv6h->nexthdr;
nhoff = offsetof(struct ipv6hdr, nexthdr);
/* Skip hop-by-hop options, they are already parsed. */
@@ -145,58 +150,46 @@
skb->h.raw += (skb->h.raw[1]+1)<<3;
}
- /* This check is sort of optimization.
- It would be stupid to detect for optional headers,
- which are missing with probability of 200%
- */
- if (nexthdr != IPPROTO_TCP && nexthdr != IPPROTO_UDP) {
- nhoff = ipv6_parse_exthdrs(&skb, nhoff);
- if (nhoff < 0)
- return 0;
- nexthdr = skb->nh.raw[nhoff];
- hdr = skb->nh.ipv6h;
- }
-
+resubmit:
if (!pskb_pull(skb, skb->h.raw - skb->data))
goto discard;
+ nexthdr = skb->nh.raw[nhoff];
- if (skb->ip_summed == CHECKSUM_HW)
- skb->csum = csum_sub(skb->csum,
- csum_partial(skb->nh.raw, skb->h.raw-skb->nh.raw, 0));
-
- raw_sk = raw_v6_htable[nexthdr&(MAX_INET_PROTOS-1)];
+ raw_sk = raw_v6_htable[nexthdr & (MAX_INET_PROTOS - 1)];
if (raw_sk)
- raw_sk = ipv6_raw_deliver(skb, nexthdr);
+ ipv6_raw_deliver(skb, nexthdr);
hash = nexthdr & (MAX_INET_PROTOS - 1);
- for (ipprot = (struct inet6_protocol *) inet6_protos[hash];
- ipprot != NULL;
- ipprot = (struct inet6_protocol *) ipprot->next) {
- struct sk_buff *buff = skb;
-
- if (ipprot->protocol != nexthdr)
- continue;
-
- if (ipprot->copy || raw_sk)
- buff = skb_clone(skb, GFP_ATOMIC);
-
- if (buff)
- ipprot->handler(buff);
- found = 1;
- }
-
- if (raw_sk) {
- rawv6_rcv(raw_sk, skb);
- sock_put(raw_sk);
- found = 1;
- }
-
- /*
- * not found: send ICMP parameter problem back
- */
- if (!found) {
- IP6_INC_STATS_BH(Ip6InUnknownProtos);
- icmpv6_param_prob(skb, ICMPV6_UNK_NEXTHDR, nhoff);
+ if ((ipprot = inet6_protos[hash]) != NULL) {
+ int ret;
+
+ if (ipprot->flags & INET6_PROTO_FINAL) {
+ if (!cksum_sub && skb->ip_summed == CHECKSUM_HW) {
+ skb->csum = csum_sub(skb->csum,
+ csum_partial(skb->nh.raw, skb->h.raw-skb->nh.raw, 0));
+ cksum_sub++;
+ }
+ }
+ if (!(ipprot->flags & INET6_PROTO_NOPOLICY) &&
+ !xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return 0;
+ }
+
+ ret = ipprot->handler(&skb, &nhoff);
+ if (ret > 0)
+ goto resubmit;
+ else if (ret == 0)
+ IP6_INC_STATS_BH(Ip6InDelivers);
+ } else {
+ if (!raw_sk) {
+ if (xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ IP6_INC_STATS_BH(Ip6InUnknownProtos);
+ icmpv6_param_prob(skb, ICMPV6_UNK_NEXTHDR, nhoff);
+ }
+ } else {
+ kfree_skb(skb);
+ }
}
return 0;
@@ -246,7 +239,7 @@
skb2 = skb;
}
- dst->output(skb2);
+ dst_output(skb2);
}
}
#endif
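The rewritten ip6_input_finish() above drives extension-header processing with a resubmit loop instead of the old for-loop over a handler chain: a handler returning a positive value has updated nhoff and asks to be re-dispatched on the next header; 0 means delivered; negative means the packet was dropped. A simplified, self-contained version of that control flow (the packet type and single stand-in handler are invented here):

```c
#include <assert.h>

/* Toy packet: a chain of next-header bytes ending in a "final" proto. */
struct pkt {
	const unsigned char *hdrs;
	int delivered;
};

#define PROTO_FINAL 58	/* stand-in for a terminal protocol */

/* Handler contract from the patch: >0 resubmit, 0 delivered, <0 drop. */
static int step_handler(struct pkt *p, unsigned int *nhoffp)
{
	unsigned char nexthdr = p->hdrs[*nhoffp];

	if (nexthdr == PROTO_FINAL) {
		p->delivered = 1;
		return 0;
	}
	*nhoffp += 1;	/* pretend each extension header is one byte */
	return 1;
}

/* The resubmit loop of ip6_input_finish(), stripped to its shape. */
static int input_finish(struct pkt *p)
{
	unsigned int nhoff = 0;
	int ret;

resubmit:
	ret = step_handler(p, &nhoff);
	if (ret > 0)
		goto resubmit;
	return ret;
}
```

Note how the checksum subtraction in the real code is guarded by `cksum_sub` so it runs at most once even though the loop may pass through several headers before reaching a handler flagged INET6_PROTO_FINAL.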
diff -Nru a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
--- a/net/ipv6/ip6_output.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/ip6_output.c Thu May 8 10:41:36 2003
@@ -49,6 +49,7 @@
#include <net/addrconf.h>
#include <net/rawv6.h>
#include <net/icmp.h>
+#include <net/xfrm.h>
static __inline__ void ipv6_select_ident(struct sk_buff *skb, struct frag_hdr *fhdr)
{
@@ -143,8 +144,8 @@
fl.fl6_src = &iph->saddr;
fl.oif = skb->sk ? skb->sk->bound_dev_if : 0;
fl.fl6_flowlabel = 0;
- fl.uli_u.ports.dport = 0;
- fl.uli_u.ports.sport = 0;
+ fl.fl_ip_dport = 0;
+ fl.fl_ip_sport = 0;
dst = ip6_route_output(skb->sk, &fl);
@@ -173,7 +174,7 @@
}
}
#endif /* CONFIG_NETFILTER */
- return skb->dst->output(skb);
+ return dst_output(skb);
}
/*
@@ -184,12 +185,18 @@
struct ipv6_txoptions *opt)
{
struct ipv6_pinfo * np = sk ? &sk->net_pinfo.af_inet6 : NULL;
- struct in6_addr *first_hop = fl->nl_u.ip6_u.daddr;
+ struct in6_addr *first_hop = fl->fl6_dst;
struct dst_entry *dst = skb->dst;
struct ipv6hdr *hdr;
u8 proto = fl->proto;
int seg_len = skb->len;
int hlimit;
+ u32 mtu;
+ int err = 0;
+
+ if ((err = xfrm_lookup(&skb->dst, fl, sk, 0)) < 0) {
+ return err;
+ }
if (opt) {
int head_room;
@@ -233,17 +240,18 @@
hdr->nexthdr = proto;
hdr->hop_limit = hlimit;
- ipv6_addr_copy(&hdr->saddr, fl->nl_u.ip6_u.saddr);
+ ipv6_addr_copy(&hdr->saddr, fl->fl6_src);
ipv6_addr_copy(&hdr->daddr, first_hop);
- if (skb->len <= dst->pmtu) {
+ mtu = dst_pmtu(dst);
+ if (skb->len <= mtu) {
IP6_INC_STATS(Ip6OutRequests);
return NF_HOOK(PF_INET6, NF_IP6_LOCAL_OUT, skb, NULL, dst->dev, ip6_maybe_reroute);
}
if (net_ratelimit())
printk(KERN_DEBUG "IPv6: sending pkt_too_big to self\n");
- icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, dst->pmtu, skb->dev);
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
kfree_skb(skb);
return -EMSGSIZE;
}
@@ -297,8 +305,8 @@
hdr->hop_limit = hlimit;
hdr->nexthdr = fl->proto;
- ipv6_addr_copy(&hdr->saddr, fl->nl_u.ip6_u.saddr);
- ipv6_addr_copy(&hdr->daddr, fl->nl_u.ip6_u.daddr);
+ ipv6_addr_copy(&hdr->saddr, fl->fl6_src);
+ ipv6_addr_copy(&hdr->daddr, fl->fl6_dst);
return hdr;
}
@@ -514,7 +522,7 @@
fl->fl6_dst = rt0->addr;
}
- if (!fl->oif && ipv6_addr_is_multicast(fl->nl_u.ip6_u.daddr))
+ if (!fl->oif && ipv6_addr_is_multicast(fl->fl6_dst))
fl->oif = np->mcast_oif;
dst = __sk_dst_check(sk, np->dst_cookie);
@@ -572,6 +580,13 @@
}
pktlength = length;
+ if (dst) {
+ if ((err = xfrm_lookup(&dst, fl, sk, 0)) < 0) {
+ dst_release(dst);
+ return -ENETUNREACH;
+ }
+ }
+
if (hlimit < 0) {
if (ipv6_addr_is_multicast(fl->fl6_dst))
hlimit = np->mcast_hops;
@@ -598,7 +613,7 @@
}
}
- mtu = dst->pmtu;
+ mtu = dst_pmtu(dst);
if (np->frag_size < mtu) {
if (np->frag_size)
mtu = np->frag_size;
@@ -626,9 +641,8 @@
err = 0;
if (flags&MSG_PROBE)
goto out;
-
- skb = sock_alloc_send_skb(sk, pktlength + 15 +
- dev->hard_header_len,
+ /* alloc skb with mtu as we do in the IPv4 stack for IPsec */
+ skb = sock_alloc_send_skb(sk, mtu + LL_RESERVED_SPACE(dev),
flags & MSG_DONTWAIT, &err);
if (skb == NULL) {
@@ -659,6 +673,8 @@
err = getfrag(data, &hdr->saddr,
((char *) hdr) + (pktlength - length),
0, length);
+ if (!opt || !opt->dst1opt)
+ skb->h.raw = ((char *) hdr) + (pktlength - length);
if (!err) {
IP6_INC_STATS(Ip6OutRequests);
@@ -683,7 +699,7 @@
* cleanup
*/
out:
- ip6_dst_store(sk, dst, fl->nl_u.ip6_u.daddr == &np->daddr ? &np->daddr : NULL);
+ ip6_dst_store(sk, dst, fl->fl6_dst == &np->daddr ? &np->daddr : NULL);
if (err > 0)
err = np->recverr ? net_xmit_errno(err) : 0;
return err;
@@ -718,7 +734,7 @@
static inline int ip6_forward_finish(struct sk_buff *skb)
{
- return skb->dst->output(skb);
+ return dst_output(skb);
}
int ip6_forward(struct sk_buff *skb)
@@ -730,6 +746,9 @@
if (ipv6_devconf.forwarding == 0)
goto error;
+ if (!xfrm6_policy_check(NULL, XFRM_POLICY_FWD, skb))
+ goto drop;
+
skb->ip_summed = CHECKSUM_NONE;
/*
@@ -764,6 +783,9 @@
return -ETIMEDOUT;
}
+ if (!xfrm6_route_forward(skb))
+ goto drop;
+
/* IPv6 specs say nothing about it, but it is clear that we cannot
send redirects to source routed frames.
*/
@@ -794,10 +816,10 @@
goto error;
}
- if (skb->len > dst->pmtu) {
+ if (skb->len > dst_pmtu(dst)) {
/* Again, force OUTPUT device used as source address */
skb->dev = dst->dev;
- icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, dst->pmtu, skb->dev);
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, dst_pmtu(dst), skb->dev);
IP6_INC_STATS_BH(Ip6InTooBigErrors);
kfree_skb(skb);
return -EMSGSIZE;
diff -Nru a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
--- a/net/ipv6/ipv6_sockglue.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/ipv6_sockglue.c Thu May 8 10:41:36 2003
@@ -47,6 +47,7 @@
#include <net/inet_common.h>
#include <net/tcp.h>
#include <net/udp.h>
+#include <net/xfrm.h>
#include <asm/uaccess.h>
@@ -404,6 +405,10 @@
case IPV6_FLOWLABEL_MGR:
retv = ipv6_flowlabel_opt(sk, optval, optlen);
break;
+ case IPV6_IPSEC_POLICY:
+ case IPV6_XFRM_POLICY:
+ retv = xfrm_user_policy(sk, optname, optval, optlen);
+ break;
#ifdef CONFIG_NETFILTER
default:
@@ -482,7 +487,7 @@
lock_sock(sk);
dst = sk_dst_get(sk);
if (dst) {
- val = dst->pmtu;
+ val = dst_pmtu(dst) - dst->header_len;
dst_release(dst);
}
release_sock(sk);
diff -Nru a/net/ipv6/ipv6_syms.c b/net/ipv6/ipv6_syms.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/ipv6_syms.c Thu May 8 10:41:38 2003
@@ -0,0 +1,5 @@
+#include <linux/module.h>
+#include <net/xfrm.h>
+
+EXPORT_SYMBOL(xfrm6_rcv);
+EXPORT_SYMBOL(xfrm6_clear_mutable_options);
diff -Nru a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
--- a/net/ipv6/ndisc.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/ndisc.c Thu May 8 10:41:36 2003
@@ -71,6 +71,7 @@
#include <net/addrconf.h>
#include <net/icmp.h>
+#include <net/flow.h>
#include <net/checksum.h>
#include <linux/proc_fs.h>
@@ -330,8 +331,6 @@
unsigned char ha[MAX_ADDR_LEN];
unsigned char *h_dest = NULL;
- skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
-
if (dev->hard_header) {
if (ipv6_addr_type(daddr) & IPV6_ADDR_MULTICAST) {
ndisc_mc_map(daddr, ha, dev, 1);
@@ -368,10 +367,50 @@
* Send a Neighbour Advertisement
*/
+int ndisc_output(struct sk_buff *skb)
+{
+ if (skb) {
+ struct neighbour *neigh = (skb->dst ? skb->dst->neighbour : NULL);
+ if (ndisc_build_ll_hdr(skb, skb->dev, &skb->nh.ipv6h->daddr, neigh, skb->len) == 0) {
+ kfree_skb(skb);
+ return -EINVAL;
+ }
+ dev_queue_xmit(skb);
+ return 0;
+ }
+ return -EINVAL;
+}
+
+static inline void ndisc_rt_init(struct rt6_info *rt, struct net_device *dev,
+ struct neighbour *neigh)
+{
+ rt->rt6i_dev = dev;
+ rt->rt6i_nexthop = neigh;
+ rt->rt6i_expires = 0;
+ rt->rt6i_flags = RTF_LOCAL;
+ rt->rt6i_metric = 0;
+ rt->rt6i_hoplimit = 255;
+ rt->u.dst.output = ndisc_output;
+}
+
+static inline void ndisc_flow_init(struct flowi *fl, u8 type,
+ struct in6_addr *saddr, struct in6_addr *daddr)
+{
+ memset(fl, 0, sizeof(*fl));
+ fl->fl6_src = saddr;
+ fl->fl6_dst = daddr;
+ fl->proto = IPPROTO_ICMPV6;
+ fl->fl_icmp_type = type;
+ fl->fl_icmp_code = 0;
+}
+
void ndisc_send_na(struct net_device *dev, struct neighbour *neigh,
struct in6_addr *daddr, struct in6_addr *solicited_addr,
- int router, int solicited, int override, int inc_opt)
+ int router, int solicited, int override, int inc_opt)
{
+ struct flowi fl;
+ struct rt6_info *rt = NULL;
+ struct dst_entry* dst;
static struct in6_addr tmpaddr;
struct inet6_ifaddr *ifp;
struct sock *sk = ndisc_socket->sk;
@@ -383,6 +422,22 @@
len = sizeof(struct icmp6hdr) + sizeof(struct in6_addr);
+ rt = ip6_dst_alloc();
+ if (!rt)
+ return;
+
+ ndisc_flow_init(&fl, NDISC_NEIGHBOUR_ADVERTISEMENT, solicited_addr, daddr);
+ ndisc_rt_init(rt, dev, neigh);
+
+ dst = (struct dst_entry*)rt;
+ dst_clone(dst);
+
+ err = xfrm_lookup(&dst, &fl, NULL, 0);
+ if (err < 0) {
+ dst_release(dst);
+ return;
+ }
+
if (inc_opt) {
if (dev->addr_len)
len += NDISC_OPT_SPACE(dev->addr_len);
@@ -408,14 +463,10 @@
src_addr = &tmpaddr;
}
- if (ndisc_build_ll_hdr(skb, dev, daddr, neigh, len) == 0) {
- kfree_skb(skb);
- return;
- }
-
+ skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
ip6_nd_hdr(sk, skb, dev, src_addr, daddr, IPPROTO_ICMPV6, len);
- msg = (struct nd_msg *) skb_put(skb, len);
+ skb->h.raw = (unsigned char*) msg = (struct nd_msg *) skb_put(skb, len);
msg->icmph.icmp6_type = NDISC_NEIGHBOUR_ADVERTISEMENT;
msg->icmph.icmp6_code = 0;
@@ -438,7 +489,9 @@
csum_partial((__u8 *) msg,
len, 0));
- dev_queue_xmit(skb);
+ dst_clone(dst);
+ skb->dst = dst;
+ dst_output(skb);
ICMP6_INC_STATS(Icmp6OutNeighborAdvertisements);
ICMP6_INC_STATS(Icmp6OutMsgs);
@@ -448,6 +501,9 @@
struct in6_addr *solicit,
struct in6_addr *daddr, struct in6_addr *saddr)
{
+ struct flowi fl;
+ struct rt6_info *rt = NULL;
+ struct dst_entry* dst;
struct sock *sk = ndisc_socket->sk;
struct sk_buff *skb;
struct nd_msg *msg;
@@ -462,6 +518,22 @@
saddr = &addr_buf;
}
+ rt = ip6_dst_alloc();
+ if (!rt)
+ return;
+
+ ndisc_flow_init(&fl, NDISC_NEIGHBOUR_SOLICITATION, saddr, daddr);
+ ndisc_rt_init(rt, dev, neigh);
+
+ dst = (struct dst_entry*)rt;
+ dst_clone(dst);
+
+ err = xfrm_lookup(&dst, &fl, NULL, 0);
+ if (err < 0) {
+ dst_release(dst);
+ return;
+ }
+
len = sizeof(struct icmp6hdr) + sizeof(struct in6_addr);
send_llinfo = dev->addr_len && ipv6_addr_type(saddr) != IPV6_ADDR_ANY;
if (send_llinfo)
@@ -474,14 +546,10 @@
return;
}
- if (ndisc_build_ll_hdr(skb, dev, daddr, neigh, len) == 0) {
- kfree_skb(skb);
- return;
- }
-
+ skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
ip6_nd_hdr(sk, skb, dev, saddr, daddr, IPPROTO_ICMPV6, len);
- msg = (struct nd_msg *)skb_put(skb, len);
+ skb->h.raw = (unsigned char*) msg = (struct nd_msg *)skb_put(skb, len);
msg->icmph.icmp6_type = NDISC_NEIGHBOUR_SOLICITATION;
msg->icmph.icmp6_code = 0;
msg->icmph.icmp6_cksum = 0;
@@ -500,7 +568,9 @@
csum_partial((__u8 *) msg,
len, 0));
/* send it! */
- dev_queue_xmit(skb);
+ dst_clone(dst);
+ skb->dst = dst;
+ dst_output(skb);
ICMP6_INC_STATS(Icmp6OutNeighborSolicits);
ICMP6_INC_STATS(Icmp6OutMsgs);
@@ -509,6 +579,9 @@
void ndisc_send_rs(struct net_device *dev, struct in6_addr *saddr,
struct in6_addr *daddr)
{
+ struct flowi fl;
+ struct rt6_info *rt = NULL;
+ struct dst_entry* dst;
struct sock *sk = ndisc_socket->sk;
struct sk_buff *skb;
struct icmp6hdr *hdr;
@@ -516,6 +589,22 @@
int len;
int err;
+ rt = ip6_dst_alloc();
+ if (!rt)
+ return;
+
+ ndisc_flow_init(&fl, NDISC_ROUTER_SOLICITATION, saddr, daddr);
+ ndisc_rt_init(rt, dev, NULL);
+
+ dst = (struct dst_entry*)rt;
+ dst_clone(dst);
+
+ err = xfrm_lookup(&dst, &fl, NULL, 0);
+ if (err < 0) {
+ dst_release(dst);
+ return;
+ }
+
len = sizeof(struct icmp6hdr);
if (dev->addr_len)
len += NDISC_OPT_SPACE(dev->addr_len);
@@ -527,14 +616,10 @@
return;
}
- if (ndisc_build_ll_hdr(skb, dev, daddr, NULL, len) == 0) {
- kfree_skb(skb);
- return;
- }
-
+ skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
ip6_nd_hdr(sk, skb, dev, saddr, daddr, IPPROTO_ICMPV6, len);
- hdr = (struct icmp6hdr *) skb_put(skb, len);
+ skb->h.raw = (unsigned char*) hdr = (struct icmp6hdr *) skb_put(skb, len);
hdr->icmp6_type = NDISC_ROUTER_SOLICITATION;
hdr->icmp6_code = 0;
hdr->icmp6_cksum = 0;
@@ -551,7 +636,9 @@
csum_partial((__u8 *) hdr, len, 0));
/* send it! */
- dev_queue_xmit(skb);
+ dst_clone(dst);
+ skb->dst = dst;
+ dst_output(skb);
ICMP6_INC_STATS(Icmp6OutRouterSolicits);
ICMP6_INC_STATS(Icmp6OutMsgs);
@@ -1058,7 +1145,7 @@
in6_dev->cnf.mtu6 = mtu;
if (rt)
- rt->u.dst.pmtu = mtu;
+ rt->u.dst.metrics[RTAX_MTU-1] = mtu;
rt6_mtu_change(skb->dev, mtu);
}
@@ -1181,6 +1268,8 @@
struct in6_addr *addrp;
struct net_device *dev;
struct rt6_info *rt;
+ struct dst_entry *dst;
+ struct flowi fl;
u8 *opt;
int rd_len;
int err;
@@ -1192,6 +1281,22 @@
if (rt == NULL)
return;
+ dst = (struct dst_entry*)rt;
+
+ if (ipv6_get_lladdr(dev, &saddr_buf)) {
+ ND_PRINTK1("redirect: no link_local addr for dev\n");
+ return;
+ }
+
+ ndisc_flow_init(&fl, NDISC_REDIRECT, &saddr_buf, &skb->nh.ipv6h->saddr);
+
+ dst_clone(dst);
+ err = xfrm_lookup(&dst, &fl, NULL, 0);
+ if (err) {
+ dst_release(dst);
+ return;
+ }
+
if (rt->rt6i_flags & RTF_GATEWAY) {
ND_PRINTK1("ndisc_send_redirect: not a neighbour\n");
dst_release(&rt->u.dst);
@@ -1220,11 +1325,6 @@
rd_len &= ~0x7;
len += rd_len;
- if (ipv6_get_lladdr(dev, &saddr_buf)) {
- ND_PRINTK1("redirect: no link_local addr for dev\n");
- return;
- }
-
buff = sock_alloc_send_skb(sk, MAX_HEADER + len + dev->hard_header_len + 15,
1, &err);
if (buff == NULL) {
@@ -1234,15 +1334,11 @@
hlen = 0;
- if (ndisc_build_ll_hdr(buff, dev, &skb->nh.ipv6h->saddr, NULL, len) == 0) {
- kfree_skb(buff);
- return;
- }
-
+ skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
ip6_nd_hdr(sk, buff, dev, &saddr_buf, &skb->nh.ipv6h->saddr,
IPPROTO_ICMPV6, len);
- icmph = (struct icmp6hdr *) skb_put(buff, len);
+ skb->h.raw = (unsigned char*) icmph = (struct icmp6hdr *) skb_put(buff, len);
memset(icmph, 0, sizeof(struct icmp6hdr));
icmph->icmp6_type = NDISC_REDIRECT;
@@ -1280,7 +1376,8 @@
len, IPPROTO_ICMPV6,
csum_partial((u8 *) icmph, len, 0));
- dev_queue_xmit(buff);
+ skb->dst = dst;
+ dst_output(skb);
ICMP6_INC_STATS(Icmp6OutRedirects);
ICMP6_INC_STATS(Icmp6OutMsgs);
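The redirect path above takes an extra reference with dst_clone() before handing the entry to xfrm_lookup(), and gives it back with dst_release() on the error path. A minimal user-space sketch of that refcount discipline, with simplified stand-ins rather than the kernel's dst API:

```c
/* Miniature of the reference-count pattern in ndisc_send_redirect():
 * grab a reference for the lookup that may consume the entry, drop it
 * on the error path, leaving the caller's reference intact.
 * Hypothetical types, not the kernel's. */
struct dst { int refcnt; };

static struct dst *dst_clone(struct dst *d)
{
        if (d)
                d->refcnt++;
        return d;
}

static void dst_release(struct dst *d)
{
        if (d)
                d->refcnt--;
}

/* Returns the final refcount after a failed lookup. */
static int failed_lookup_refcnt(void)
{
        struct dst d = { .refcnt = 1 };

        dst_clone(&d);          /* reference for the lookup */
        /* ... lookup fails ... */
        dst_release(&d);        /* error path returns the reference */
        return d.refcnt;
}
```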
diff -Nru a/net/ipv6/protocol.c b/net/ipv6/protocol.c
--- a/net/ipv6/protocol.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/protocol.c Thu May 8 10:41:36 2003
@@ -42,77 +42,42 @@
struct inet6_protocol *inet6_protos[MAX_INET_PROTOS];
-void inet6_add_protocol(struct inet6_protocol *prot)
+int inet6_add_protocol(struct inet6_protocol *prot, unsigned char protocol)
{
- unsigned char hash;
- struct inet6_protocol *p2;
+ int ret, hash = protocol & (MAX_INET_PROTOS - 1);
- hash = prot->protocol & (MAX_INET_PROTOS - 1);
br_write_lock_bh(BR_NETPROTO_LOCK);
- prot->next = inet6_protos[hash];
- inet6_protos[hash] = prot;
- prot->copy = 0;
-
- /*
- * Set the copy bit if we need to.
- */
-
- p2 = (struct inet6_protocol *) prot->next;
- while(p2 != NULL) {
- if (p2->protocol == prot->protocol) {
- prot->copy = 1;
- break;
- }
- p2 = (struct inet6_protocol *) p2->next;
+
+ if (inet6_protos[hash]) {
+ ret = -1;
+ } else {
+ inet6_protos[hash] = prot;
+ ret = 0;
}
+
br_write_unlock_bh(BR_NETPROTO_LOCK);
+
+ return ret;
}
/*
* Remove a protocol from the hash tables.
*/
-int inet6_del_protocol(struct inet6_protocol *prot)
+int inet6_del_protocol(struct inet6_protocol *prot, unsigned char protocol)
{
- struct inet6_protocol *p;
- struct inet6_protocol *lp = NULL;
- unsigned char hash;
+ int ret, hash = protocol & (MAX_INET_PROTOS - 1);
- hash = prot->protocol & (MAX_INET_PROTOS - 1);
br_write_lock_bh(BR_NETPROTO_LOCK);
- if (prot == inet6_protos[hash]) {
- inet6_protos[hash] = (struct inet6_protocol *) inet6_protos[hash]->next;
- br_write_unlock_bh(BR_NETPROTO_LOCK);
- return(0);
- }
-
- p = (struct inet6_protocol *) inet6_protos[hash];
- if (p != NULL && p->protocol == prot->protocol)
- lp = p;
-
- while(p != NULL) {
- /*
- * We have to worry if the protocol being deleted is
- * the last one on the list, then we may need to reset
- * someone's copied bit.
- */
- if (p->next != NULL && p->next == prot) {
- /*
- * if we are the last one with this protocol and
- * there is a previous one, reset its copy bit.
- */
- if (prot->copy == 0 && lp != NULL)
- lp->copy = 0;
- p->next = prot->next;
- br_write_unlock_bh(BR_NETPROTO_LOCK);
- return(0);
- }
- if (p->next != NULL && p->next->protocol == prot->protocol)
- lp = p->next;
-
- p = (struct inet6_protocol *) p->next;
+ if (inet6_protos[hash] != prot) {
+ ret = -1;
+ } else {
+ inet6_protos[hash] = NULL;
+ ret = 0;
}
+
br_write_unlock_bh(BR_NETPROTO_LOCK);
- return(-1);
+
+ return ret;
}
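The protocol.c rewrite above drops the linked chains (and the `copy` bit bookkeeping) in favor of one handler slot per protocol number; registration now fails if the slot is taken. A standalone sketch of that scheme, with locking omitted and illustrative names:

```c
#include <stddef.h>

#define MAX_PROTOS 256

struct proto_ops { int id; };

static struct proto_ops *protos[MAX_PROTOS];

/* Register fails if another handler already owns the slot. */
static int add_protocol(struct proto_ops *p, unsigned char num)
{
        int hash = num & (MAX_PROTOS - 1);

        if (protos[hash])
                return -1;      /* slot already claimed */
        protos[hash] = p;
        return 0;
}

/* Unregister only succeeds for the handler actually installed. */
static int del_protocol(struct proto_ops *p, unsigned char num)
{
        int hash = num & (MAX_PROTOS - 1);

        if (protos[hash] != p)
                return -1;      /* not the registered handler */
        protos[hash] = NULL;
        return 0;
}

static struct proto_ops first_proto = { .id = 1 };
static struct proto_ops second_proto = { .id = 2 };
```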
diff -Nru a/net/ipv6/raw.c b/net/ipv6/raw.c
--- a/net/ipv6/raw.c Thu May 8 10:41:37 2003
+++ b/net/ipv6/raw.c Thu May 8 10:41:37 2003
@@ -45,6 +45,7 @@
#include <net/inet_common.h>
#include <net/rawv6.h>
+#include <net/xfrm.h>
struct sock *raw_v6_htable[RAWV6_HTABLE_SIZE];
rwlock_t raw_v6_lock = RW_LOCK_UNLOCKED;
@@ -133,12 +134,14 @@
* demultiplex raw sockets.
* (should consider queueing the skb in the sock receive_queue
* without calling rawv6.c)
+ *
+ * Caller owns SKB so we must make clones.
*/
-struct sock * ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
+void ipv6_raw_deliver(struct sk_buff *skb, int nexthdr)
{
struct in6_addr *saddr;
struct in6_addr *daddr;
- struct sock *sk, *sk2;
+ struct sock *sk;
__u8 hash;
saddr = &skb->nh.ipv6h->saddr;
@@ -159,30 +162,18 @@
sk = __raw_v6_lookup(sk, nexthdr, daddr, saddr);
- if (sk) {
- sk2 = sk;
-
- while ((sk2 = __raw_v6_lookup(sk2->next, nexthdr, daddr, saddr))) {
- struct sk_buff *buff;
-
- if (nexthdr == IPPROTO_ICMPV6 &&
- icmpv6_filter(sk2, skb))
- continue;
-
- buff = skb_clone(skb, GFP_ATOMIC);
- if (buff)
- rawv6_rcv(sk2, buff);
+ while (sk) {
+ if (nexthdr != IPPROTO_ICMPV6 || !icmpv6_filter(sk, skb)) {
+ struct sk_buff *clone = skb_clone(skb, GFP_ATOMIC);
+
+ /* Not releasing hash table! */
+ if (clone)
+ rawv6_rcv(sk, clone);
}
+ sk = __raw_v6_lookup(sk->next, nexthdr, daddr, saddr);
}
-
- if (sk && nexthdr == IPPROTO_ICMPV6 && icmpv6_filter(sk, skb))
- sk = NULL;
-
out:
- if (sk)
- sock_hold(sk);
read_unlock(&raw_v6_lock);
- return sk;
}
/* This cleans up af_inet6 a bit. -DaveM */
@@ -309,6 +300,11 @@
*/
int rawv6_rcv(struct sock *sk, struct sk_buff *skb)
{
+ if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return NET_RX_DROP;
+ }
+
if (!sk->tp_pinfo.tp_raw.checksum)
skb->ip_summed = CHECKSUM_UNNECESSARY;
@@ -620,8 +616,8 @@
fl.fl6_dst = daddr;
if (fl.fl6_src == NULL && !ipv6_addr_any(&np->saddr))
fl.fl6_src = &np->saddr;
- fl.uli_u.icmpt.type = 0;
- fl.uli_u.icmpt.code = 0;
+ fl.fl_icmp_type = 0;
+ fl.fl_icmp_code = 0;
if (raw_opt->checksum) {
struct rawv6_fakehdr hdr;
diff -Nru a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
--- a/net/ipv6/reassembly.c Thu May 8 10:41:37 2003
+++ b/net/ipv6/reassembly.c Thu May 8 10:41:37 2003
@@ -23,6 +23,7 @@
* Horst von Brand Add missing #include <linux/string.h>
* Alexey Kuznetsov SMP races, threading, cleanup.
* Patrick McHardy LRU queue of frag heads for evictor.
+ * Mitsuru KANDA @USAGI Register inet6_protocol{}.
*/
#include <linux/config.h>
#include <linux/errno.h>
@@ -519,12 +520,13 @@
* the last and the first frames arrived and all the bits are here.
*/
static int ip6_frag_reasm(struct frag_queue *fq, struct sk_buff **skb_in,
+ unsigned int *nhoffp,
struct net_device *dev)
{
struct sk_buff *fp, *head = fq->fragments;
int remove_fraghdr = 0;
int payload_len;
- int nhoff;
+ unsigned int nhoff;
fq_kill(fq);
@@ -611,7 +613,8 @@
IP6_INC_STATS_BH(Ip6ReasmOKs);
fq->fragments = NULL;
- return nhoff;
+ *nhoffp = nhoff;
+ return 1;
out_oversize:
if (net_ratelimit())
@@ -625,7 +628,7 @@
return -1;
}
-int ipv6_reassembly(struct sk_buff **skbp, int nhoff)
+static int ipv6_frag_rcv(struct sk_buff **skbp, unsigned int *nhoffp)
{
struct sk_buff *skb = *skbp;
struct net_device *dev = skb->dev;
@@ -655,7 +658,8 @@
skb->h.raw += sizeof(struct frag_hdr);
IP6_INC_STATS_BH(Ip6ReasmOKs);
- return (u8*)fhdr - skb->nh.raw;
+ *nhoffp = (u8*)fhdr - skb->nh.raw;
+ return 1;
}
if (atomic_read(&ip6_frag_mem) > sysctl_ip6frag_high_thresh)
@@ -666,11 +670,11 @@
spin_lock(&fq->lock);
- ip6_frag_queue(fq, skb, fhdr, nhoff);
+ ip6_frag_queue(fq, skb, fhdr, *nhoffp);
if (fq->last_in == (FIRST_IN|LAST_IN) &&
fq->meat == fq->len)
- ret = ip6_frag_reasm(fq, skbp, dev);
+ ret = ip6_frag_reasm(fq, skbp, nhoffp, dev);
spin_unlock(&fq->lock);
fq_put(fq);
@@ -680,4 +684,16 @@
IP6_INC_STATS_BH(Ip6ReasmFails);
kfree_skb(skb);
return -1;
+}
+
+static struct inet6_protocol frag_protocol =
+{
+ .handler = ipv6_frag_rcv,
+ .flags = INET6_PROTO_NOPOLICY,
+};
+
+void __init ipv6_frag_init(void)
+{
+ if (inet6_add_protocol(&frag_protocol, IPPROTO_FRAGMENT) < 0)
+ printk(KERN_ERR "ipv6_frag_init: Could not register protocol\n");
}
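The ipv6_frag_rcv() conversion above changes the handler convention: instead of overloading the return value with a next-header offset, the handler returns a status (1 for "resubmit") and writes the offset through a pointer. A standalone stand-in for that calling convention, not the kernel code:

```c
static const unsigned char sample_pkt[16];
static int demo_status;
static unsigned int scratch;

/* Returns 1 (resubmit) and fills *nhoffp, or a negative status when
 * the packet was consumed/dropped. Offset values are illustrative. */
static int frag_handler(const unsigned char *pkt, unsigned int len,
                        unsigned int *nhoffp)
{
        (void)pkt;
        if (len < 8)
                return -1;      /* truncated: drop, offset untouched */
        *nhoffp = 8;            /* next header starts past the frag hdr */
        return 1;               /* resubmit to the next protocol */
}

static unsigned int demo_offset(void)
{
        unsigned int off = 0;

        demo_status = frag_handler(sample_pkt, sizeof(sample_pkt), &off);
        return off;
}
```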
diff -Nru a/net/ipv6/route.c b/net/ipv6/route.c
--- a/net/ipv6/route.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/route.c Thu May 8 10:41:36 2003
@@ -38,6 +38,8 @@
#include <net/addrconf.h>
#include <net/tcp.h>
#include <linux/rtnetlink.h>
+#include <net/dst.h>
+#include <net/xfrm.h>
#include <asm/uaccess.h>
@@ -45,8 +47,6 @@
#include <linux/sysctl.h>
#endif
-#undef CONFIG_RT6_POLICY
-
/* Set to 3 to get tracing. */
#define RT6_DEBUG 2
@@ -69,39 +69,43 @@
static struct rt6_info * ip6_rt_copy(struct rt6_info *ort);
static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie);
-static struct dst_entry *ip6_dst_reroute(struct dst_entry *dst,
- struct sk_buff *skb);
static struct dst_entry *ip6_negative_advice(struct dst_entry *);
static int ip6_dst_gc(void);
static int ip6_pkt_discard(struct sk_buff *skb);
static void ip6_link_failure(struct sk_buff *skb);
+static void ip6_rt_update_pmtu(struct dst_entry *dst, u32 mtu);
struct dst_ops ip6_dst_ops = {
- AF_INET6,
- __constant_htons(ETH_P_IPV6),
- 1024,
-
- ip6_dst_gc,
- ip6_dst_check,
- ip6_dst_reroute,
- NULL,
- ip6_negative_advice,
- ip6_link_failure,
- sizeof(struct rt6_info),
+ .family = AF_INET6,
+ .protocol = __constant_htons(ETH_P_IPV6),
+ .gc = ip6_dst_gc,
+ .gc_thresh = 1024,
+ .check = ip6_dst_check,
+ .negative_advice = ip6_negative_advice,
+ .link_failure = ip6_link_failure,
+ .update_pmtu = ip6_rt_update_pmtu,
+ .entry_size = sizeof(struct rt6_info),
};
struct rt6_info ip6_null_entry = {
- {{NULL, ATOMIC_INIT(1), 1, &loopback_dev,
- -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- -ENETUNREACH, NULL, NULL,
- ip6_pkt_discard, ip6_pkt_discard,
-#ifdef CONFIG_NET_CLS_ROUTE
- 0,
-#endif
- &ip6_dst_ops}},
- NULL, {{{0}}}, RTF_REJECT|RTF_NONEXTHOP, ~0U,
- 255, ATOMIC_INIT(1), {NULL}, {{{{0}}}, 0}, {{{{0}}}, 0}
+ .u = {
+ .dst = {
+ .__refcnt = ATOMIC_INIT(1),
+ .__use = 1,
+ .dev = &loopback_dev,
+ .obsolete = -1,
+ .error = -ENETUNREACH,
+ .input = ip6_pkt_discard,
+ .output = ip6_pkt_discard,
+ .ops = &ip6_dst_ops,
+ .path = (struct dst_entry*)&ip6_null_entry,
+ }
+ },
+ .rt6i_flags = (RTF_REJECT | RTF_NONEXTHOP),
+ .rt6i_metric = ~(u32) 0,
+ .rt6i_hoplimit = 255,
+ .rt6i_ref = ATOMIC_INIT(1),
};
struct fib6_node ip6_routing_table = {
@@ -110,29 +114,22 @@
0, RTN_ROOT|RTN_TL_ROOT|RTN_RTINFO, 0
};
-#ifdef CONFIG_RT6_POLICY
-int ip6_rt_policy = 0;
-
-struct pol_chain *rt6_pol_list = NULL;
-
-
-static int rt6_flow_match_in(struct rt6_info *rt, struct sk_buff *skb);
-static int rt6_flow_match_out(struct rt6_info *rt, struct sock *sk);
-
-static struct rt6_info *rt6_flow_lookup(struct rt6_info *rt,
- struct in6_addr *daddr,
- struct in6_addr *saddr,
- struct fl_acc_args *args);
-
-#else
-#define ip6_rt_policy (0)
-#endif
-
/* Protects all the ip6 fib */
rwlock_t rt6_lock = RW_LOCK_UNLOCKED;
+/* allocate dst with ip6_dst_ops */
+static __inline__ struct rt6_info *__ip6_dst_alloc(void)
+{
+ return (struct rt6_info *)dst_alloc(&ip6_dst_ops);
+}
+
+struct rt6_info *ip6_dst_alloc(void)
+{
+ return __ip6_dst_alloc();
+}
+
/*
* Route lookup. Any rt6_lock is implied.
*/
@@ -321,38 +318,6 @@
return &ip6_null_entry;
}
-#ifdef CONFIG_RT6_POLICY
-static __inline__ struct rt6_info *rt6_flow_lookup_in(struct rt6_info *rt,
- struct sk_buff *skb)
-{
- struct in6_addr *daddr, *saddr;
- struct fl_acc_args arg;
-
- arg.type = FL_ARG_FORWARD;
- arg.fl_u.skb = skb;
-
- saddr = &skb->nh.ipv6h->saddr;
- daddr = &skb->nh.ipv6h->daddr;
-
- return rt6_flow_lookup(rt, daddr, saddr, &arg);
-}
-
-static __inline__ struct rt6_info *rt6_flow_lookup_out(struct rt6_info *rt,
- struct sock *sk,
- struct flowi *fl)
-{
- struct fl_acc_args arg;
-
- arg.type = FL_ARG_ORIGIN;
- arg.fl_u.fl_o.sk = sk;
- arg.fl_u.fl_o.flow = fl;
-
- return rt6_flow_lookup(rt, fl->nl_u.ip6_u.daddr, fl->nl_u.ip6_u.saddr,
- &arg);
-}
-
-#endif
-
#define BACKTRACK() \
if (rt == &ip6_null_entry && strict) { \
while ((fn = fn->parent) != NULL) { \
@@ -385,53 +350,29 @@
rt = fn->leaf;
if ((rt->rt6i_flags & RTF_CACHE)) {
- if (ip6_rt_policy == 0) {
- rt = rt6_device_match(rt, skb->dev->ifindex, strict);
- BACKTRACK();
- dst_hold(&rt->u.dst);
- goto out;
- }
-
-#ifdef CONFIG_RT6_POLICY
- if ((rt->rt6i_flags & RTF_FLOW)) {
- struct rt6_info *sprt;
-
- for (sprt = rt; sprt; sprt = sprt->u.next) {
- if (rt6_flow_match_in(sprt, skb)) {
- rt = sprt;
- dst_hold(&rt->u.dst);
- goto out;
- }
- }
- }
-#endif
+ rt = rt6_device_match(rt, skb->dev->ifindex, strict);
+ BACKTRACK();
+ dst_hold(&rt->u.dst);
+ goto out;
}
rt = rt6_device_match(rt, skb->dev->ifindex, 0);
BACKTRACK();
- if (ip6_rt_policy == 0) {
- if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) {
- read_unlock_bh(&rt6_lock);
+ if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) {
+ read_unlock_bh(&rt6_lock);
- rt = rt6_cow(rt, &skb->nh.ipv6h->daddr,
- &skb->nh.ipv6h->saddr);
+ rt = rt6_cow(rt, &skb->nh.ipv6h->daddr,
+ &skb->nh.ipv6h->saddr);
- if (rt->u.dst.error != -EEXIST || --attempts <= 0)
- goto out2;
- /* Race condition! In the gap, when rt6_lock was
- released someone could insert this route. Relookup.
- */
- goto relookup;
- }
- dst_hold(&rt->u.dst);
- } else {
-#ifdef CONFIG_RT6_POLICY
- rt = rt6_flow_lookup_in(rt, skb);
-#else
- /* NEVER REACHED */
-#endif
+ if (rt->u.dst.error != -EEXIST || --attempts <= 0)
+ goto out2;
+ /* Race condition! In the gap, when rt6_lock was
+ released someone could insert this route. Relookup.
+ */
+ goto relookup;
}
+ dst_hold(&rt->u.dst);
out:
read_unlock_bh(&rt6_lock);
@@ -448,38 +389,21 @@
int strict;
int attempts = 3;
- strict = ipv6_addr_type(fl->nl_u.ip6_u.daddr) & (IPV6_ADDR_MULTICAST|IPV6_ADDR_LINKLOCAL);
+ strict = ipv6_addr_type(fl->fl6_dst) & (IPV6_ADDR_MULTICAST|IPV6_ADDR_LINKLOCAL);
relookup:
read_lock_bh(&rt6_lock);
- fn = fib6_lookup(&ip6_routing_table, fl->nl_u.ip6_u.daddr,
- fl->nl_u.ip6_u.saddr);
+ fn = fib6_lookup(&ip6_routing_table, fl->fl6_dst, fl->fl6_src);
restart:
rt = fn->leaf;
if ((rt->rt6i_flags & RTF_CACHE)) {
- if (ip6_rt_policy == 0) {
- rt = rt6_device_match(rt, fl->oif, strict);
- BACKTRACK();
- dst_hold(&rt->u.dst);
- goto out;
- }
-
-#ifdef CONFIG_RT6_POLICY
- if ((rt->rt6i_flags & RTF_FLOW)) {
- struct rt6_info *sprt;
-
- for (sprt = rt; sprt; sprt = sprt->u.next) {
- if (rt6_flow_match_out(sprt, sk)) {
- rt = sprt;
- dst_hold(&rt->u.dst);
- goto out;
- }
- }
- }
-#endif
+ rt = rt6_device_match(rt, fl->oif, strict);
+ BACKTRACK();
+ dst_hold(&rt->u.dst);
+ goto out;
}
if (rt->rt6i_flags & RTF_DEFAULT) {
if (rt->rt6i_metric >= IP6_RT_PRIO_ADDRCONF)
@@ -489,29 +413,20 @@
BACKTRACK();
}
- if (ip6_rt_policy == 0) {
- if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) {
- read_unlock_bh(&rt6_lock);
+ if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) {
+ read_unlock_bh(&rt6_lock);
- rt = rt6_cow(rt, fl->nl_u.ip6_u.daddr,
- fl->nl_u.ip6_u.saddr);
-
- if (rt->u.dst.error != -EEXIST || --attempts <= 0)
- goto out2;
+ rt = rt6_cow(rt, fl->fl6_dst, fl->fl6_src);
- /* Race condition! In the gap, when rt6_lock was
- released someone could insert this route. Relookup.
- */
- goto relookup;
- }
- dst_hold(&rt->u.dst);
- } else {
-#ifdef CONFIG_RT6_POLICY
- rt = rt6_flow_lookup_out(rt, sk, fl);
-#else
- /* NEVER REACHED */
-#endif
+ if (rt->u.dst.error != -EEXIST || --attempts <= 0)
+ goto out2;
+
+ /* Race condition! In the gap, when rt6_lock was
+ released someone could insert this route. Relookup.
+ */
+ goto relookup;
}
+ dst_hold(&rt->u.dst);
out:
read_unlock_bh(&rt6_lock);
@@ -539,16 +454,6 @@
return NULL;
}
-static struct dst_entry *ip6_dst_reroute(struct dst_entry *dst, struct sk_buff *skb)
-{
- /*
- * FIXME
- */
- RDBG(("ip6_dst_reroute(%p,%p)[%p] (AIEEE)\n", dst, skb,
- __builtin_return_address(0)));
- return NULL;
-}
-
static struct dst_entry *ip6_negative_advice(struct dst_entry *dst)
{
struct rt6_info *rt = (struct rt6_info *) dst;
@@ -578,6 +483,16 @@
}
}
+static void ip6_rt_update_pmtu(struct dst_entry *dst, u32 mtu)
+{
+ struct rt6_info *rt6 = (struct rt6_info*)dst;
+
+ if (mtu < dst_pmtu(dst) && rt6->rt6i_dst.plen == 128) {
+ rt6->rt6i_flags |= RTF_MODIFIED;
+ dst->metrics[RTAX_MTU-1] = mtu;
+ }
+}
+
static int ip6_dst_gc()
{
static unsigned expire = 30*HZ;
@@ -665,7 +580,7 @@
if (rtmsg->rtmsg_metric == 0)
rtmsg->rtmsg_metric = IP6_RT_PRIO_USER;
- rt = dst_alloc(&ip6_dst_ops);
+ rt = __ip6_dst_alloc();
if (rt == NULL)
return -ENOMEM;
@@ -792,14 +707,14 @@
rt->rt6i_flags = rtmsg->rtmsg_flags;
install_route:
- rt->u.dst.pmtu = ipv6_get_mtu(dev);
- rt->u.dst.advmss = max_t(unsigned int, rt->u.dst.pmtu - 60, ip6_rt_min_advmss);
+ rt->u.dst.metrics[RTAX_MTU-1] = ipv6_get_mtu(dev);
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = max_t(unsigned int, dst_pmtu(&rt->u.dst) - 60, ip6_rt_min_advmss);
/* Maximal non-jumbo IPv6 payload is 65535 and corresponding
MSS is 65535 - tcp_header_size. 65535 is also valid and
means: "any MSS, rely only on pmtu discovery"
*/
- if (rt->u.dst.advmss > 65535-20)
- rt->u.dst.advmss = 65535;
+ if (dst_metric(&rt->u.dst, RTAX_ADVMSS) > 65535-20)
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = 65535;
rt->u.dst.dev = dev;
return rt6_ins(rt, nlh);
@@ -951,10 +866,10 @@
ipv6_addr_copy(&nrt->rt6i_gateway, (struct in6_addr*)neigh->primary_key);
nrt->rt6i_nexthop = neigh_clone(neigh);
/* Reset pmtu, it may be better */
- nrt->u.dst.pmtu = ipv6_get_mtu(neigh->dev);
- nrt->u.dst.advmss = max_t(unsigned int, nrt->u.dst.pmtu - 60, ip6_rt_min_advmss);
- if (rt->u.dst.advmss > 65535-20)
- rt->u.dst.advmss = 65535;
+ nrt->u.dst.metrics[RTAX_MTU-1] = ipv6_get_mtu(neigh->dev);
+ nrt->u.dst.metrics[RTAX_ADVMSS-1] = max_t(unsigned int, dst_pmtu(&nrt->u.dst) - 60, ip6_rt_min_advmss);
+ if (nrt->u.dst.metrics[RTAX_ADVMSS-1] > 65535-20)
+ nrt->u.dst.metrics[RTAX_ADVMSS-1] = 65535;
nrt->rt6i_hoplimit = ipv6_get_hoplimit(neigh->dev);
if (rt6_ins(nrt, NULL))
@@ -996,7 +911,7 @@
if (rt == NULL)
return;
- if (pmtu >= rt->u.dst.pmtu)
+ if (pmtu >= dst_pmtu(&rt->u.dst))
goto out;
/* New mtu received -> path was valid.
@@ -1011,7 +926,7 @@
would return automatically.
*/
if (rt->rt6i_flags & RTF_CACHE) {
- rt->u.dst.pmtu = pmtu;
+ rt->u.dst.metrics[RTAX_MTU-1] = pmtu;
dst_set_expires(&rt->u.dst, ip6_rt_mtu_expires);
rt->rt6i_flags |= RTF_MODIFIED|RTF_EXPIRES;
goto out;
@@ -1025,7 +940,7 @@
if (!rt->rt6i_nexthop && !(rt->rt6i_flags & RTF_NONEXTHOP)) {
nrt = rt6_cow(rt, daddr, saddr);
if (!nrt->u.dst.error) {
- nrt->u.dst.pmtu = pmtu;
+ nrt->u.dst.metrics[RTAX_MTU-1] = pmtu;
/* According to RFC 1981, detecting PMTU increase shouldn't be
happened within 5 mins, the recommended timer is 10 mins.
Here this route expiration time is set to ip6_rt_mtu_expires
@@ -1046,7 +961,7 @@
nrt->rt6i_nexthop = neigh_clone(rt->rt6i_nexthop);
dst_set_expires(&nrt->u.dst, ip6_rt_mtu_expires);
nrt->rt6i_flags |= RTF_DYNAMIC|RTF_CACHE|RTF_EXPIRES;
- nrt->u.dst.pmtu = pmtu;
+ nrt->u.dst.metrics[RTAX_MTU-1] = pmtu;
rt6_ins(nrt, NULL);
}
@@ -1060,15 +975,13 @@
static struct rt6_info * ip6_rt_copy(struct rt6_info *ort)
{
- struct rt6_info *rt;
-
- rt = dst_alloc(&ip6_dst_ops);
+ struct rt6_info *rt = __ip6_dst_alloc();
if (rt) {
rt->u.dst.input = ort->u.dst.input;
rt->u.dst.output = ort->u.dst.output;
- memcpy(&rt->u.dst.mxlock, &ort->u.dst.mxlock, RTAX_MAX*sizeof(unsigned));
+ memcpy(rt->u.dst.metrics, ort->u.dst.metrics, RTAX_MAX*sizeof(u32));
rt->u.dst.dev = ort->u.dst.dev;
if (rt->u.dst.dev)
dev_hold(rt->u.dst.dev);
@@ -1206,9 +1119,8 @@
int ip6_rt_addr_add(struct in6_addr *addr, struct net_device *dev)
{
- struct rt6_info *rt;
+ struct rt6_info *rt = __ip6_dst_alloc();
- rt = dst_alloc(&ip6_dst_ops);
if (rt == NULL)
return -ENOMEM;
@@ -1216,10 +1128,10 @@
rt->u.dst.input = ip6_input;
rt->u.dst.output = ip6_output;
rt->rt6i_dev = dev_get_by_name("lo");
- rt->u.dst.pmtu = ipv6_get_mtu(rt->rt6i_dev);
- rt->u.dst.advmss = max_t(unsigned int, rt->u.dst.pmtu - 60, ip6_rt_min_advmss);
- if (rt->u.dst.advmss > 65535-20)
- rt->u.dst.advmss = 65535;
+ rt->u.dst.metrics[RTAX_MTU-1] = ipv6_get_mtu(rt->rt6i_dev);
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = max_t(unsigned int, dst_pmtu(&rt->u.dst) - 60, ip6_rt_min_advmss);
+ if (rt->u.dst.metrics[RTAX_ADVMSS-1] > 65535-20)
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = 65535;
rt->rt6i_hoplimit = ipv6_get_hoplimit(rt->rt6i_dev);
rt->u.dst.obsolete = -1;
@@ -1256,122 +1168,6 @@
return err;
}
-
-#ifdef CONFIG_RT6_POLICY
-
-static int rt6_flow_match_in(struct rt6_info *rt, struct sk_buff *skb)
-{
- struct flow_filter *frule;
- struct pkt_filter *filter;
- int res = 1;
-
- if ((frule = rt->rt6i_filter) == NULL)
- goto out;
-
- if (frule->type != FLR_INPUT) {
- res = 0;
- goto out;
- }
-
- for (filter = frule->u.filter; filter; filter = filter->next) {
- __u32 *word;
-
- word = (__u32 *) skb->h.raw;
- word += filter->offset;
-
- if ((*word ^ filter->value) & filter->mask) {
- res = 0;
- break;
- }
- }
-
-out:
- return res;
-}
-
-static int rt6_flow_match_out(struct rt6_info *rt, struct sock *sk)
-{
- struct flow_filter *frule;
- int res = 1;
-
- if ((frule = rt->rt6i_filter) == NULL)
- goto out;
-
- if (frule->type != FLR_INPUT) {
- res = 0;
- goto out;
- }
-
- if (frule->u.sk != sk)
- res = 0;
-out:
- return res;
-}
-
-static struct rt6_info *rt6_flow_lookup(struct rt6_info *rt,
- struct in6_addr *daddr,
- struct in6_addr *saddr,
- struct fl_acc_args *args)
-{
- struct flow_rule *frule;
- struct rt6_info *nrt = NULL;
- struct pol_chain *pol;
-
- for (pol = rt6_pol_list; pol; pol = pol->next) {
- struct fib6_node *fn;
- struct rt6_info *sprt;
-
- fn = fib6_lookup(pol->rules, daddr, saddr);
-
- do {
- for (sprt = fn->leaf; sprt; sprt=sprt->u.next) {
- int res;
-
- frule = sprt->rt6i_flowr;
-#if RT6_DEBUG >= 2
- if (frule == NULL) {
- printk(KERN_DEBUG "NULL flowr\n");
- goto error;
- }
-#endif
- res = frule->ops->accept(rt, sprt, args, &nrt);
-
- switch (res) {
- case FLOWR_SELECT:
- goto found;
- case FLOWR_CLEAR:
- goto next_policy;
- case FLOWR_NODECISION:
- break;
- default:
- goto error;
- };
- }
-
- fn = fn->parent;
-
- } while ((fn->fn_flags & RTN_TL_ROOT) == 0);
-
- next_policy:
- }
-
-error:
- dst_hold(&ip6_null_entry.u.dst);
- return &ip6_null_entry;
-
-found:
- if (nrt == NULL)
- goto error;
-
- nrt->rt6i_flags |= RTF_CACHE;
- dst_hold(&nrt->u.dst);
- err = rt6_ins(nrt, NULL);
- if (err)
- nrt->u.dst.error = err;
- return nrt;
-}
-#endif
-
static int fib6_ifdown(struct rt6_info *rt, void *arg)
{
if (((void*)rt->rt6i_dev == arg || arg == NULL) &&
@@ -1423,14 +1219,14 @@
PMTU discouvery.
*/
if (rt->rt6i_dev == arg->dev &&
- !(rt->u.dst.mxlock&(1<<RTAX_MTU)) &&
- (rt->u.dst.pmtu > arg->mtu ||
- (rt->u.dst.pmtu < arg->mtu &&
- rt->u.dst.pmtu == idev->cnf.mtu6)))
- rt->u.dst.pmtu = arg->mtu;
- rt->u.dst.advmss = max_t(unsigned int, arg->mtu - 60, ip6_rt_min_advmss);
- if (rt->u.dst.advmss > 65535-20)
- rt->u.dst.advmss = 65535;
+ !dst_metric_locked(&rt->u.dst, RTAX_MTU) &&
+ (rt->u.dst.metrics[RTAX_MTU-1] > arg->mtu ||
+ (rt->u.dst.metrics[RTAX_MTU-1] < arg->mtu &&
+ rt->u.dst.metrics[RTAX_MTU-1] == idev->cnf.mtu6)))
+ rt->u.dst.metrics[RTAX_MTU-1] = arg->mtu;
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = max_t(unsigned int, arg->mtu - 60, ip6_rt_min_advmss);
+ if (rt->u.dst.metrics[RTAX_ADVMSS-1] > 65535-20)
+ rt->u.dst.metrics[RTAX_ADVMSS-1] = 65535;
return 0;
}
@@ -1572,7 +1368,7 @@
if (ipv6_get_saddr(&rt->u.dst, dst, &saddr_buf) == 0)
RTA_PUT(skb, RTA_PREFSRC, 16, &saddr_buf);
}
- if (rtnetlink_put_metrics(skb, &rt->u.dst.mxlock) < 0)
+ if (rtnetlink_put_metrics(skb, rt->u.dst.metrics) < 0)
goto rtattr_failure;
if (rt->u.dst.neighbour)
RTA_PUT(skb, RTA_GATEWAY, 16, &rt->u.dst.neighbour->primary_key);
@@ -1721,15 +1517,11 @@
skb->mac.raw = skb->data;
skb_reserve(skb, MAX_HEADER + sizeof(struct ipv6hdr));
- fl.proto = 0;
- fl.nl_u.ip6_u.daddr = NULL;
- fl.nl_u.ip6_u.saddr = NULL;
- fl.uli_u.icmpt.type = 0;
- fl.uli_u.icmpt.code = 0;
+ memset(&fl, 0, sizeof(fl));
if (rta[RTA_SRC-1])
- fl.nl_u.ip6_u.saddr = (struct in6_addr*)RTA_DATA(rta[RTA_SRC-1]);
+ fl.fl6_src = (struct in6_addr*)RTA_DATA(rta[RTA_SRC-1]);
if (rta[RTA_DST-1])
- fl.nl_u.ip6_u.daddr = (struct in6_addr*)RTA_DATA(rta[RTA_DST-1]);
+ fl.fl6_dst = (struct in6_addr*)RTA_DATA(rta[RTA_DST-1]);
if (rta[RTA_IIF-1])
memcpy(&iif, RTA_DATA(rta[RTA_IIF-1]), sizeof(int));
@@ -1753,8 +1545,7 @@
NETLINK_CB(skb).dst_pid = NETLINK_CB(in_skb).pid;
err = rt6_fill_node(skb, rt,
- fl.nl_u.ip6_u.daddr,
- fl.nl_u.ip6_u.saddr,
+ fl.fl6_dst, fl.fl6_src,
iif,
RTM_NEWROUTE, NETLINK_CB(in_skb).pid,
nlh->nlmsg_seq, nlh);
@@ -1966,7 +1757,6 @@
#endif
-
void __init ip6_route_init(void)
{
ip6_dst_ops.kmem_cachep = kmem_cache_create("ip6_dst_cache",
@@ -1978,6 +1768,7 @@
proc_net_create("ipv6_route", 0, rt6_proc_info);
proc_net_create("rt6_stats", 0, rt6_proc_stats);
#endif
+ xfrm6_init();
}
#ifdef MODULE
@@ -1987,7 +1778,7 @@
proc_net_remove("ipv6_route");
proc_net_remove("rt6_stats");
#endif
-
+ xfrm6_fini();
rt6_ifdown(NULL);
fib6_gc_cleanup();
}
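The change running through route.c above retires the dedicated `dst->pmtu` field in favor of one slot of a `metrics[]` array indexed by 1-based RTAX_* constants, read through the dst_pmtu() accessor. A sketch of that layout and of the monotonic update in ip6_rt_update_pmtu(); the enum values here are illustrative, not the kernel's:

```c
enum { RTAX_MTU = 2, RTAX_ADVMSS = 8, RTAX_MAX = 14 };  /* illustrative */

struct dst { unsigned int metrics[RTAX_MAX]; };

static unsigned int dst_pmtu(const struct dst *d)
{
        return d->metrics[RTAX_MTU - 1];        /* note the 1-based index */
}

/* Mirrors ip6_rt_update_pmtu(): only ever lowers the stored MTU. */
static void update_pmtu(struct dst *d, unsigned int mtu)
{
        if (mtu < dst_pmtu(d))
                d->metrics[RTAX_MTU - 1] = mtu;
}

static unsigned int demo_update(unsigned int start, unsigned int newmtu)
{
        struct dst d = { .metrics = {0} };

        d.metrics[RTAX_MTU - 1] = start;
        update_pmtu(&d, newmtu);
        return dst_pmtu(&d);
}
```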
diff -Nru a/net/ipv6/sit.c b/net/ipv6/sit.c
--- a/net/ipv6/sit.c Thu May 8 10:41:38 2003
+++ b/net/ipv6/sit.c Thu May 8 10:41:38 2003
@@ -422,13 +422,6 @@
return 0;
}
-/* Need this wrapper because NF_HOOK takes the function address */
-static inline int do_ip_send(struct sk_buff *skb)
-{
- return ip_send(skb);
-}
-
-
/* Returns the embedded IPv4 address if the IPv6 address
comes from 6to4 (draft-ietf-ngtrans-6to4-04) addr space */
@@ -501,9 +494,16 @@
dst = addr6->s6_addr32[3];
}
- if (ip_route_output(&rt, dst, tiph->saddr, RT_TOS(tos), tunnel->parms.link)) {
- tunnel->stat.tx_carrier_errors++;
- goto tx_error_icmp;
+ {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = dst,
+ .saddr = tiph->saddr,
+ .tos = RT_TOS(tos) } },
+ .oif = tunnel->parms.link };
+ if (ip_route_output_key(&rt, &fl)) {
+ tunnel->stat.tx_carrier_errors++;
+ goto tx_error_icmp;
+ }
}
if (rt->rt_type != RTN_UNICAST) {
tunnel->stat.tx_carrier_errors++;
@@ -518,9 +518,9 @@
}
if (tiph->frag_off)
- mtu = rt->u.dst.pmtu - sizeof(struct iphdr);
+ mtu = dst_pmtu(&rt->u.dst) - sizeof(struct iphdr);
else
- mtu = skb->dst ? skb->dst->pmtu : dev->mtu;
+ mtu = skb->dst ? dst_pmtu(skb->dst) : dev->mtu;
if (mtu < 68) {
tunnel->stat.collisions++;
@@ -529,15 +529,9 @@
}
if (mtu < IPV6_MIN_MTU)
mtu = IPV6_MIN_MTU;
- if (skb->dst && mtu < skb->dst->pmtu) {
- struct rt6_info *rt6 = (struct rt6_info*)skb->dst;
- if (mtu < rt6->u.dst.pmtu) {
- if (tunnel->parms.iph.daddr || rt6->rt6i_dst.plen == 128) {
- rt6->rt6i_flags |= RTF_MODIFIED;
- rt6->u.dst.pmtu = mtu;
- }
- }
- }
+ if (tunnel->parms.iph.daddr && skb->dst)
+ skb->dst->ops->update_pmtu(skb->dst, mtu);
+
if (skb->len > mtu) {
icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, dev);
ip_rt_put(rt);
@@ -557,7 +551,7 @@
/*
* Okay, now see if we can stuff it in the buffer as-is.
*/
- max_headroom = (((tdev->hard_header_len+15)&~15)+sizeof(struct iphdr));
+ max_headroom = LL_RESERVED_SPACE(tdev)+sizeof(struct iphdr);
if (skb_headroom(skb) < max_headroom || skb_cloned(skb) || skb_shared(skb)) {
struct sk_buff *new_skb = skb_realloc_headroom(skb, max_headroom);
@@ -776,8 +770,13 @@
ipip6_tunnel_init_gen(dev);
if (iph->daddr) {
+ struct flowi fl = { .nl_u = { .ip4_u =
+ { .daddr = iph->daddr,
+ .saddr = iph->saddr,
+ .tos = RT_TOS(iph->tos) } },
+ .oif = tunnel->parms.link };
struct rtable *rt;
- if (!ip_route_output(&rt, iph->daddr, iph->saddr, RT_TOS(iph->tos), tunnel->parms.link)) {
+ if (!ip_route_output_key(&rt, &fl)) {
tdev = rt->u.dst.dev;
ip_rt_put(rt);
}
@@ -834,19 +833,14 @@
}
static struct inet_protocol sit_protocol = {
- ipip6_rcv,
- ipip6_err,
- 0,
- IPPROTO_IPV6,
- 0,
- NULL,
- "IPv6"
+ .handler = ipip6_rcv,
+ .err_handler = ipip6_err,
};
#ifdef MODULE
void sit_cleanup(void)
{
- inet_del_protocol(&sit_protocol);
+ inet_del_protocol(&sit_protocol, IPPROTO_IPV6);
unregister_netdev(&ipip6_fb_tunnel_dev);
}
#endif
@@ -855,9 +849,13 @@
{
printk(KERN_INFO "IPv6 over IPv4 tunneling driver\n");
+ if (inet_add_protocol(&sit_protocol, IPPROTO_IPV6) < 0) {
+ printk(KERN_INFO "sit init: Can't add protocol\n");
+ return -EAGAIN;
+ }
+
ipip6_fb_tunnel_dev.priv = (void*)&ipip6_fb_tunnel;
strcpy(ipip6_fb_tunnel_dev.name, ipip6_fb_tunnel.parms.name);
register_netdev(&ipip6_fb_tunnel_dev);
- inet_add_protocol(&sit_protocol);
return 0;
}
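The ip6_dst_ops, frag_protocol and sit_protocol conversions in this patch replace positional struct initializers with C99 designated initializers. A minimal stand-in type shows the property they buy: values bind to field names, unlisted fields are zeroed, and the initializer survives field reordering:

```c
#include <stddef.h>

/* Hypothetical ops structure, loosely modeled on the ones above. */
struct demo_ops {
        int family;
        int (*handler)(void);
        int (*err_handler)(void);
        size_t entry_size;
};

static int demo_rcv(void) { return 0; }

/* Fields may appear in any order; .family and .err_handler are
 * implicitly zero/NULL because they are not listed. */
static struct demo_ops demo = {
        .handler    = demo_rcv,
        .entry_size = 64,
};
```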
diff -Nru a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
--- a/net/ipv6/tcp_ipv6.c Thu May 8 10:41:36 2003
+++ b/net/ipv6/tcp_ipv6.c Thu May 8 10:41:36 2003
@@ -38,6 +38,7 @@
#include <linux/init.h>
#include <linux/jhash.h>
#include <linux/ipsec.h>
+#include <net/xfrm.h>
#include <linux/ipv6.h>
#include <linux/icmpv6.h>
@@ -647,14 +648,17 @@
fl.fl6_dst = &np->daddr;
fl.fl6_src = saddr;
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.dport = usin->sin6_port;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = usin->sin6_port;
+ fl.fl_ip_sport = sk->sport;
if (np->opt && np->opt->srcrt) {
struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt;
- fl.nl_u.ip6_u.daddr = rt0->addr;
+ fl.fl6_dst = rt0->addr;
}
+ if (!fl.fl6_src)
+ fl.fl6_src = &np->saddr;
+
dst = ip6_route_output(sk, &fl);
if ((err = dst->error) != 0) {
@@ -767,11 +771,11 @@
for now.
*/
fl.proto = IPPROTO_TCP;
- fl.nl_u.ip6_u.daddr = &np->daddr;
- fl.nl_u.ip6_u.saddr = &np->saddr;
+ fl.fl6_dst = &np->daddr;
+ fl.fl6_src = &np->saddr;
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.dport = sk->dport;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = sk->dport;
+ fl.fl_ip_sport = sk->sport;
dst = ip6_route_output(sk, &fl);
} else
@@ -779,8 +783,8 @@
if (dst->error) {
sk->err_soft = -dst->error;
- } else if (tp->pmtu_cookie > dst->pmtu) {
- tcp_sync_mss(sk, dst->pmtu);
+ } else if (tp->pmtu_cookie > dst_pmtu(dst)) {
+ tcp_sync_mss(sk, dst_pmtu(dst));
tcp_simple_retransmit(sk);
} /* else let the usual retransmit timer handle it */
dst_release(dst);
@@ -851,12 +855,12 @@
int err = -1;
fl.proto = IPPROTO_TCP;
- fl.nl_u.ip6_u.daddr = &req->af.v6_req.rmt_addr;
- fl.nl_u.ip6_u.saddr = &req->af.v6_req.loc_addr;
+ fl.fl6_dst = &req->af.v6_req.rmt_addr;
+ fl.fl6_src = &req->af.v6_req.loc_addr;
fl.fl6_flowlabel = 0;
fl.oif = req->af.v6_req.iif;
- fl.uli_u.ports.dport = req->rmt_port;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = req->rmt_port;
+ fl.fl_ip_sport = sk->sport;
if (dst == NULL) {
opt = sk->net_pinfo.af_inet6.opt;
@@ -871,7 +875,7 @@
if (opt && opt->srcrt) {
struct rt0_hdr *rt0 = (struct rt0_hdr *) opt->srcrt;
- fl.nl_u.ip6_u.daddr = rt0->addr;
+ fl.fl6_dst = rt0->addr;
}
dst = ip6_route_output(sk, &fl);
@@ -887,7 +891,7 @@
&req->af.v6_req.loc_addr, &req->af.v6_req.rmt_addr,
csum_partial((char *)th, skb->len, skb->csum));
- fl.nl_u.ip6_u.daddr = &req->af.v6_req.rmt_addr;
+ fl.fl6_dst = &req->af.v6_req.rmt_addr;
err = ip6_xmit(sk, skb, &fl, opt);
if (err == NET_XMIT_CN)
err = 0;
@@ -988,19 +992,18 @@
buff->csum = csum_partial((char *)t1, sizeof(*t1), 0);
- fl.nl_u.ip6_u.daddr = &skb->nh.ipv6h->saddr;
- fl.nl_u.ip6_u.saddr = &skb->nh.ipv6h->daddr;
+ fl.fl6_dst = &skb->nh.ipv6h->saddr;
+ fl.fl6_src = &skb->nh.ipv6h->daddr;
fl.fl6_flowlabel = 0;
- t1->check = csum_ipv6_magic(fl.nl_u.ip6_u.saddr,
- fl.nl_u.ip6_u.daddr,
+ t1->check = csum_ipv6_magic(fl.fl6_src, fl.fl6_dst,
sizeof(*t1), IPPROTO_TCP,
buff->csum);
fl.proto = IPPROTO_TCP;
fl.oif = tcp_v6_iif(skb);
- fl.uli_u.ports.dport = t1->dest;
- fl.uli_u.ports.sport = t1->source;
+ fl.fl_ip_dport = t1->dest;
+ fl.fl_ip_sport = t1->source;
/* sk = NULL, but it is safe for now. RST socket required. */
buff->dst = ip6_route_output(NULL, &fl);
@@ -1055,19 +1058,18 @@
buff->csum = csum_partial((char *)t1, tot_len, 0);
- fl.nl_u.ip6_u.daddr = &skb->nh.ipv6h->saddr;
- fl.nl_u.ip6_u.saddr = &skb->nh.ipv6h->daddr;
+ fl.fl6_dst = &skb->nh.ipv6h->saddr;
+ fl.fl6_src = &skb->nh.ipv6h->daddr;
fl.fl6_flowlabel = 0;
- t1->check = csum_ipv6_magic(fl.nl_u.ip6_u.saddr,
- fl.nl_u.ip6_u.daddr,
+ t1->check = csum_ipv6_magic(fl.fl6_src, fl.fl6_dst,
tot_len, IPPROTO_TCP,
buff->csum);
fl.proto = IPPROTO_TCP;
fl.oif = tcp_v6_iif(skb);
- fl.uli_u.ports.dport = t1->dest;
- fl.uli_u.ports.sport = t1->source;
+ fl.fl_ip_dport = t1->dest;
+ fl.fl_ip_sport = t1->source;
buff->dst = ip6_route_output(NULL, &fl);
@@ -1296,16 +1298,16 @@
if (dst == NULL) {
fl.proto = IPPROTO_TCP;
- fl.nl_u.ip6_u.daddr = &req->af.v6_req.rmt_addr;
+ fl.fl6_dst = &req->af.v6_req.rmt_addr;
if (opt && opt->srcrt) {
struct rt0_hdr *rt0 = (struct rt0_hdr *) opt->srcrt;
- fl.nl_u.ip6_u.daddr = rt0->addr;
+ fl.fl6_dst = rt0->addr;
}
- fl.nl_u.ip6_u.saddr = &req->af.v6_req.loc_addr;
+ fl.fl6_src = &req->af.v6_req.loc_addr;
fl.fl6_flowlabel = 0;
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.dport = req->rmt_port;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = req->rmt_port;
+ fl.fl_ip_sport = sk->sport;
dst = ip6_route_output(sk, &fl);
}
@@ -1372,8 +1374,8 @@
if (np->opt)
newtp->ext_header_len = np->opt->opt_nflen + np->opt->opt_flen;
- tcp_sync_mss(newsk, dst->pmtu);
- newtp->advmss = dst->advmss;
+ tcp_sync_mss(newsk, dst_pmtu(dst));
+ newtp->advmss = dst_metric(dst, RTAX_ADVMSS);
tcp_initialize_rcv_mss(newsk);
newsk->daddr = LOOPBACK4_IPV6;
@@ -1542,8 +1544,9 @@
return 0;
}
-int tcp_v6_rcv(struct sk_buff *skb)
+static int tcp_v6_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
{
+ struct sk_buff *skb = *pskb;
struct tcphdr *th;
struct sock *sk;
int ret;
@@ -1586,11 +1589,12 @@
goto no_tcp_socket;
process:
- if(!ipsec_sk_policy(sk,skb))
- goto discard_and_relse;
if(sk->state == TCP_TIME_WAIT)
goto do_time_wait;
+ if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
+ goto discard_and_relse;
+
if (sk_filter(sk, skb, 0))
goto discard_and_relse;
@@ -1606,9 +1610,12 @@
bh_unlock_sock(sk);
sock_put(sk);
- return ret;
+ return ret ? -1 : 0;
no_tcp_socket:
+ if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ goto discard_and_relse;
+
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
bad_packet:
TCP_INC_STATS_BH(TcpInErrs);
@@ -1630,6 +1637,10 @@
goto discard_it;
do_time_wait:
+ if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+ sock_put(sk);
+ goto discard_it;
+ }
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
TCP_INC_STATS_BH(TcpInErrs);
sock_put(sk);
@@ -1674,16 +1685,16 @@
struct flowi fl;
fl.proto = IPPROTO_TCP;
- fl.nl_u.ip6_u.daddr = &np->daddr;
- fl.nl_u.ip6_u.saddr = &np->saddr;
+ fl.fl6_dst = &np->daddr;
+ fl.fl6_src = &np->saddr;
fl.fl6_flowlabel = np->flow_label;
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.dport = sk->dport;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = sk->dport;
+ fl.fl_ip_sport = sk->sport;
if (np->opt && np->opt->srcrt) {
struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt;
- fl.nl_u.ip6_u.daddr = rt0->addr;
+ fl.fl6_dst = rt0->addr;
}
dst = ip6_route_output(sk, &fl);
@@ -1715,12 +1726,12 @@
fl.fl6_flowlabel = np->flow_label;
IP6_ECN_flow_xmit(sk, fl.fl6_flowlabel);
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.sport = sk->sport;
- fl.uli_u.ports.dport = sk->dport;
+ fl.fl_ip_sport = sk->sport;
+ fl.fl_ip_dport = sk->dport;
if (np->opt && np->opt->srcrt) {
struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt;
- fl.nl_u.ip6_u.daddr = rt0->addr;
+ fl.fl6_dst = rt0->addr;
}
dst = __sk_dst_check(sk, np->dst_cookie);
@@ -1740,7 +1751,7 @@
skb->dst = dst_clone(dst);
/* Restore final destination back after routing done */
- fl.nl_u.ip6_u.daddr = &np->daddr;
+ fl.fl6_dst = &np->daddr;
return ip6_xmit(sk, skb, &fl, np->opt);
}
@@ -1850,6 +1861,7 @@
static int tcp_v6_destroy_sock(struct sock *sk)
{
struct tcp_opt *tp = &(sk->tp_pinfo.af_tcp);
+ struct inet_opt *inet = inet_sk(sk);
tcp_clear_xmit_timers(sk);
@@ -1867,8 +1879,8 @@
tcp_put_port(sk);
/* If sendmsg cached page exists, toss it. */
- if (tp->sndmsg_page != NULL)
- __free_page(tp->sndmsg_page);
+ if (inet->sndmsg_page != NULL)
+ __free_page(inet->sndmsg_page);
atomic_dec(&tcp_sockets_allocated);
@@ -2128,15 +2140,10 @@
get_port: tcp_v6_get_port,
};
-static struct inet6_protocol tcpv6_protocol =
-{
- tcp_v6_rcv, /* TCP handler */
- tcp_v6_err, /* TCP error control */
- NULL, /* next */
- IPPROTO_TCP, /* protocol ID */
- 0, /* copy */
- NULL, /* data */
- "TCPv6" /* name */
+static struct inet6_protocol tcpv6_protocol = {
+ .handler = tcp_v6_rcv,
+ .err_handler = tcp_v6_err,
+ .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
};
extern struct proto_ops inet6_stream_ops;
@@ -2154,6 +2161,7 @@
void __init tcpv6_init(void)
{
/* register inet6 protocol */
- inet6_add_protocol(&tcpv6_protocol);
+ if (inet6_add_protocol(&tcpv6_protocol, IPPROTO_TCP) < 0)
+ printk(KERN_ERR "tcpv6_init: Could not register protocol\n");
inet6_register_protosw(&tcpv6_protosw);
}
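A note on the `flowi` rename this patch leans on throughout: the new short names (`fl.fl6_dst`, `fl.fl_ip_dport`, ...) are, in the kernel of this era, shorthand macros over the old nested unions (`nl_u.ip6_u.daddr`, `uli_u.ports.dport`) rather than new struct fields, so old and new spellings address the same storage. A minimal stand-alone model of that convention (hypothetical `*_model` names, not the kernel's actual `struct flowi`):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct in6_addr. */
struct in6_addr_model { unsigned int s6_addr32[4]; };

/* Model of the nested unions inside the 2.5-era struct flowi. */
struct flowi_model {
	union {
		struct {
			struct in6_addr_model *daddr;
			struct in6_addr_model *saddr;
		} ip6_u;
	} nl_u;
	union {
		struct {
			unsigned short sport;
			unsigned short dport;
		} ports;
	} uli_u;
};

/* Hypothetical macro layer mirroring the net/flow.h shorthand. */
#define fl6_dst     nl_u.ip6_u.daddr
#define fl6_src     nl_u.ip6_u.saddr
#define fl_ip_sport uli_u.ports.sport
#define fl_ip_dport uli_u.ports.dport

/* Writing through the shorthand is visible through the long path:
 * returns 1 when both spellings alias the same members. */
static int shorthand_aliases(void)
{
	static struct in6_addr_model a;
	struct flowi_model fl;

	fl.fl6_dst = &a;
	fl.fl_ip_dport = 80;
	return fl.nl_u.ip6_u.daddr == &a && fl.uli_u.ports.dport == 80;
}
```

This is why the conversion above is purely mechanical: no behavior changes, only spelling.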
diff -Nru a/net/ipv6/udp.c b/net/ipv6/udp.c
--- a/net/ipv6/udp.c Thu May 8 10:41:37 2003
+++ b/net/ipv6/udp.c Thu May 8 10:41:37 2003
@@ -50,6 +50,7 @@
#include <net/inet_common.h>
#include <net/checksum.h>
+#include <net/xfrm.h>
struct udp_mib udp_stats_in6[NR_CPUS*2];
@@ -331,8 +332,8 @@
fl.fl6_dst = &np->daddr;
fl.fl6_src = &saddr;
fl.oif = sk->bound_dev_if;
- fl.uli_u.ports.dport = sk->dport;
- fl.uli_u.ports.sport = sk->sport;
+ fl.fl_ip_dport = sk->dport;
+ fl.fl_ip_sport = sk->sport;
if (!fl.oif && (addr_type&IPV6_ADDR_MULTICAST))
fl.oif = np->mcast_oif;
@@ -517,6 +518,11 @@
static inline int udpv6_queue_rcv_skb(struct sock * sk, struct sk_buff *skb)
{
+ if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) {
+ kfree_skb(skb);
+ return -1;
+ }
+
#if defined(CONFIG_FILTER)
if (sk->filter && skb->ip_summed != CHECKSUM_UNNECESSARY) {
if ((unsigned short)csum_fold(skb_checksum(skb, 0, skb->len, skb->csum))) {
@@ -612,8 +618,9 @@
read_unlock(&udp_hash_lock);
}
-int udpv6_rcv(struct sk_buff *skb)
+static int udpv6_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
{
+ struct sk_buff *skb = *pskb;
struct sock *sk;
struct udphdr *uh;
struct net_device *dev = skb->dev;
@@ -680,6 +687,9 @@
sk = udp_v6_lookup(saddr, uh->source, daddr, uh->dest, dev->ifindex);
if (sk == NULL) {
+ if (!xfrm6_policy_check(NULL, XFRM_POLICY_IN, skb))
+ goto discard;
+
if (skb->ip_summed != CHECKSUM_UNNECESSARY &&
(unsigned short)csum_fold(skb_checksum(skb, 0, skb->len, skb->csum)))
goto discard;
@@ -903,8 +913,8 @@
fl.fl6_dst = daddr;
if (fl.fl6_src == NULL && !ipv6_addr_any(&np->saddr))
fl.fl6_src = &np->saddr;
- fl.uli_u.ports.dport = udh.uh.dest;
- fl.uli_u.ports.sport = udh.uh.source;
+ fl.fl_ip_dport = udh.uh.dest;
+ fl.fl_ip_sport = udh.uh.source;
err = ip6_build_xmit(sk, udpv6_getfrag, &udh, &fl, len, opt, hlimit,
msg->msg_flags);
@@ -918,15 +928,10 @@
return ulen;
}
-static struct inet6_protocol udpv6_protocol =
-{
- udpv6_rcv, /* UDP handler */
- udpv6_err, /* UDP error control */
- NULL, /* next */
- IPPROTO_UDP, /* protocol ID */
- 0, /* copy */
- NULL, /* data */
- "UDPv6" /* name */
+static struct inet6_protocol udpv6_protocol = {
+ .handler = udpv6_rcv,
+ .err_handler = udpv6_err,
+ .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
};
#define LINE_LEN 190
@@ -1034,6 +1039,7 @@
void __init udpv6_init(void)
{
- inet6_add_protocol(&udpv6_protocol);
+ if (inet6_add_protocol(&udpv6_protocol, IPPROTO_UDP) < 0)
+ printk(KERN_ERR "udpv6_init: Could not register protocol\n");
inet6_register_protosw(&udpv6_protosw);
}
diff -Nru a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/xfrm6_input.c Thu May 8 10:41:38 2003
@@ -0,0 +1,271 @@
+/*
+ * xfrm6_input.c: based on net/ipv4/xfrm4_input.c
+ *
+ * Authors:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * YOSHIFUJI Hideaki @USAGI
+ * IPv6 support
+ */
+
+#include <net/ip.h>
+#include <net/ipv6.h>
+#include <net/xfrm.h>
+
+static kmem_cache_t *secpath_cachep;
+
+static int zero_out_mutable_opts(struct ipv6_opt_hdr *opthdr)
+{
+ u8 *opt = (u8 *)opthdr;
+ int len = ipv6_optlen(opthdr);
+ int off = 0;
+ int optlen = 0;
+
+ off += 2;
+ len -= 2;
+
+ while (len > 0) {
+
+ switch (opt[off]) {
+
+ case IPV6_TLV_PAD0:
+ optlen = 1;
+ break;
+ default:
+ if (len < 2)
+ goto bad;
+ optlen = opt[off+1]+2;
+ if (len < optlen)
+ goto bad;
+ if (opt[off] & 0x20)
+ memset(&opt[off+2], 0, opt[off+1]);
+ break;
+ }
+
+ off += optlen;
+ len -= optlen;
+ }
+ if (len == 0)
+ return 1;
+
+bad:
+ return 0;
+}
+
+int xfrm6_clear_mutable_options(struct sk_buff *skb, u16 *nh_offset, int dir)
+{
+ u16 offset = sizeof(struct ipv6hdr);
+ struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ unsigned int packet_len = skb->tail - skb->nh.raw;
+ u8 nexthdr = skb->nh.ipv6h->nexthdr;
+ u8 nextnexthdr = 0;
+
+ *nh_offset = ((unsigned char *)&skb->nh.ipv6h->nexthdr) - skb->nh.raw;
+
+ while (offset + 1 <= packet_len) {
+
+ switch (nexthdr) {
+
+ case NEXTHDR_HOP:
+ *nh_offset = offset;
+ offset += ipv6_optlen(exthdr);
+ if (!zero_out_mutable_opts(exthdr)) {
+ if (net_ratelimit())
+ printk(KERN_WARNING "overrun hopopts\n");
+ return 0;
+ }
+ nexthdr = exthdr->nexthdr;
+ exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ break;
+
+ case NEXTHDR_ROUTING:
+ *nh_offset = offset;
+ offset += ipv6_optlen(exthdr);
+ ((struct ipv6_rt_hdr*)exthdr)->segments_left = 0;
+ nexthdr = exthdr->nexthdr;
+ exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ break;
+
+ case NEXTHDR_DEST:
+ *nh_offset = offset;
+ offset += ipv6_optlen(exthdr);
+ if (!zero_out_mutable_opts(exthdr)) {
+ if (net_ratelimit())
+ printk(KERN_WARNING "overrun destopt\n");
+ return 0;
+ }
+ nexthdr = exthdr->nexthdr;
+ exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ break;
+
+ case NEXTHDR_AUTH:
+ if (dir == XFRM_POLICY_OUT) {
+ memset(((struct ipv6_auth_hdr*)exthdr)->auth_data, 0,
+ (((struct ipv6_auth_hdr*)exthdr)->hdrlen - 1) << 2);
+ }
+ if (exthdr->nexthdr == NEXTHDR_DEST) {
+ offset += (((struct ipv6_auth_hdr*)exthdr)->hdrlen + 2) << 2;
+ exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ nextnexthdr = exthdr->nexthdr;
+ if (!zero_out_mutable_opts(exthdr)) {
+ if (net_ratelimit())
+ printk(KERN_WARNING "overrun destopt\n");
+ return 0;
+ }
+ }
+ return nexthdr;
+ default :
+ return nexthdr;
+ }
+ }
+
+ return nexthdr;
+}
+
+int xfrm6_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
+{
+ struct sk_buff *skb = *pskb;
+ int err;
+ u32 spi, seq;
+ struct sec_decap_state xfrm_vec[XFRM_MAX_DEPTH];
+ struct xfrm_state *x;
+ int xfrm_nr = 0;
+ int decaps = 0;
+ struct ipv6hdr *hdr = skb->nh.ipv6h;
+ unsigned char *tmp_hdr = NULL;
+ int hdr_len = 0;
+ u16 nh_offset = 0;
+ int nexthdr = 0;
+
+ nh_offset = ((unsigned char*)&skb->nh.ipv6h->nexthdr) - skb->nh.raw;
+ hdr_len = sizeof(struct ipv6hdr);
+
+ tmp_hdr = kmalloc(hdr_len, GFP_ATOMIC);
+ if (!tmp_hdr)
+ goto drop;
+ memcpy(tmp_hdr, skb->nh.raw, hdr_len);
+
+ nexthdr = xfrm6_clear_mutable_options(skb, &nh_offset, XFRM_POLICY_IN);
+ hdr->priority = 0;
+ hdr->flow_lbl[0] = 0;
+ hdr->flow_lbl[1] = 0;
+ hdr->flow_lbl[2] = 0;
+ hdr->hop_limit = 0;
+
+ if ((err = xfrm_parse_spi(skb, nexthdr, &spi, &seq)) != 0)
+ goto drop;
+
+ do {
+ struct ipv6hdr *iph = skb->nh.ipv6h;
+
+ if (xfrm_nr == XFRM_MAX_DEPTH)
+ goto drop;
+
+ x = xfrm_state_lookup((xfrm_address_t *)&iph->daddr, spi, nexthdr, AF_INET6);
+ if (x == NULL)
+ goto drop;
+ spin_lock(&x->lock);
+ if (unlikely(x->km.state != XFRM_STATE_VALID))
+ goto drop_unlock;
+
+ if (x->props.replay_window && xfrm_replay_check(x, seq))
+ goto drop_unlock;
+
+ if (xfrm_state_check_expire(x))
+ goto drop_unlock;
+
+ nexthdr = x->type->input(x, &(xfrm_vec[xfrm_nr].decap), skb);
+ if (nexthdr <= 0)
+ goto drop_unlock;
+
+ if (x->props.replay_window)
+ xfrm_replay_advance(x, seq);
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock(&x->lock);
+
+ xfrm_vec[xfrm_nr++].xvec = x;
+
+ iph = skb->nh.ipv6h;
+
+ if (x->props.mode) { /* XXX */
+ if (iph->nexthdr != IPPROTO_IPV6)
+ goto drop;
+ skb->nh.raw = skb->data;
+ iph = skb->nh.ipv6h;
+ decaps = 1;
+ break;
+ }
+
+ if ((err = xfrm_parse_spi(skb, nexthdr, &spi, &seq)) < 0)
+ goto drop;
+ } while (!err);
+
+ if (!decaps) {
+ memcpy(skb->nh.raw, tmp_hdr, hdr_len);
+ skb->nh.raw[nh_offset] = nexthdr;
+ skb->nh.ipv6h->payload_len = htons(hdr_len + skb->len - sizeof(struct ipv6hdr));
+ }
+
+ /* Allocate new secpath or COW existing one. */
+ if (!skb->sp || atomic_read(&skb->sp->refcnt) != 1) {
+ kmem_cache_t *pool = skb->sp ? skb->sp->pool : secpath_cachep;
+ struct sec_path *sp;
+ sp = kmem_cache_alloc(pool, SLAB_ATOMIC);
+ if (!sp)
+ goto drop;
+ if (skb->sp) {
+ memcpy(sp, skb->sp, sizeof(struct sec_path));
+ secpath_put(skb->sp);
+ } else {
+ sp->pool = pool;
+ sp->len = 0;
+ }
+ atomic_set(&sp->refcnt, 1);
+ skb->sp = sp;
+ }
+
+ if (xfrm_nr + skb->sp->len > XFRM_MAX_DEPTH)
+ goto drop;
+
+ memcpy(skb->sp->x+skb->sp->len, xfrm_vec, xfrm_nr*sizeof(struct sec_decap_state));
+ skb->sp->len += xfrm_nr;
+ skb->ip_summed = CHECKSUM_NONE;
+
+ if (decaps) {
+ if (!(skb->dev->flags&IFF_LOOPBACK)) {
+ dst_release(skb->dst);
+ skb->dst = NULL;
+ }
+ netif_rx(skb);
+ return -1;
+ } else {
+ *nhoffp = nh_offset;
+ return 1;
+ }
+
+drop_unlock:
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+drop:
+ if (tmp_hdr) kfree(tmp_hdr);
+ while (--xfrm_nr >= 0)
+ xfrm_state_put(xfrm_vec[xfrm_nr].xvec);
+ kfree_skb(skb);
+ return -1;
+}
+
+void __init xfrm6_input_init(void)
+{
+ secpath_cachep = kmem_cache_create("secpath6_cache",
+ sizeof(struct sec_path),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+
+ if (!secpath_cachep)
+ panic("IPv6: failed to allocate secpath6_cache\n");
+}
+
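The TLV walk in `zero_out_mutable_opts()` above is compact enough to model outside the kernel. This stand-alone sketch (simplified: a plain buffer instead of an skb, hypothetical function name) mirrors its logic: skip the two leading header octets, treat a zero type octet as Pad1, and zero the data of any option whose type has the mutable bit (0x20) set, as AH processing requires:

```c
#include <string.h>

/* Model of zero_out_mutable_opts(): walk the TLV options inside a
 * Hop-by-Hop/Destination options header of 'len' octets and zero the
 * data of any option whose "mutable" bit (0x20 in the type octet) is
 * set. Returns 1 on a well-formed header, 0 on overrun. */
static int zero_mutable_tlvs(unsigned char *opt, int len)
{
	int off = 2;	/* skip the nexthdr/hdrlen octets */

	len -= 2;
	while (len > 0) {
		int optlen;

		if (opt[off] == 0) {		/* Pad1: one zero octet */
			optlen = 1;
		} else {
			if (len < 2)
				return 0;
			optlen = opt[off + 1] + 2;
			if (len < optlen)
				return 0;
			if (opt[off] & 0x20)	/* mutable: zero the data */
				memset(&opt[off + 2], 0, opt[off + 1]);
		}
		off += optlen;
		len -= optlen;
	}
	return len == 0;
}

/* One mutable option (type 0x26, 2 data octets) followed by two Pad1
 * octets; returns 1 when the walk succeeds and the data was zeroed. */
static int zero_mutable_demo(void)
{
	unsigned char buf[8] = { 0x3B, 0, 0x26, 2, 0xAA, 0xBB, 0, 0 };

	return zero_mutable_tlvs(buf, 8) == 1 && buf[4] == 0 && buf[5] == 0;
}
```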
diff -Nru a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/xfrm6_policy.c Thu May 8 10:41:38 2003
@@ -0,0 +1,274 @@
+/*
+ * xfrm6_policy.c: based on xfrm4_policy.c
+ *
+ * Authors:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * IPv6 support
+ * YOSHIFUJI Hideaki
+ * Split up af-specific portion
+ *
+ */
+
+#include <linux/config.h>
+#include <net/xfrm.h>
+#include <net/ip.h>
+#include <net/ipv6.h>
+#include <net/ip6_route.h>
+
+extern struct dst_ops xfrm6_dst_ops;
+extern struct xfrm_policy_afinfo xfrm6_policy_afinfo;
+
+static struct xfrm_type_map xfrm6_type_map = { .lock = RW_LOCK_UNLOCKED };
+
+int xfrm6_dst_lookup(struct xfrm_dst **dst, struct flowi *fl)
+{
+ int err = 0;
+ *dst = (struct xfrm_dst*)ip6_route_output(NULL, fl);
+ if (!*dst)
+ err = -ENETUNREACH;
+ return err;
+}
+
+/* Check that the bundle accepts the flow and its components are
+ * still valid.
+ */
+
+static int __xfrm6_bundle_ok(struct xfrm_dst *xdst, struct flowi *fl)
+{
+ do {
+ if (xdst->u.dst.ops != &xfrm6_dst_ops)
+ return 1;
+
+ if (!xfrm_selector_match(&xdst->u.dst.xfrm->sel, fl, AF_INET6))
+ return 0;
+ if (xdst->u.dst.xfrm->km.state != XFRM_STATE_VALID ||
+ xdst->u.dst.path->obsolete > 0)
+ return 0;
+ xdst = (struct xfrm_dst*)xdst->u.dst.child;
+ } while (xdst);
+ return 0;
+}
+
+static struct dst_entry *
+__xfrm6_find_bundle(struct flowi *fl, struct rtable *rt, struct xfrm_policy *policy)
+{
+ struct dst_entry *dst;
+
+ /* Still not clear if we should set fl->fl6_{src,dst}... */
+ read_lock_bh(&policy->lock);
+ for (dst = policy->bundles; dst; dst = dst->next) {
+ struct xfrm_dst *xdst = (struct xfrm_dst*)dst;
+ if (!ipv6_addr_cmp(&xdst->u.rt6.rt6i_dst.addr, fl->fl6_dst) &&
+ !ipv6_addr_cmp(&xdst->u.rt6.rt6i_src.addr, fl->fl6_src) &&
+ __xfrm6_bundle_ok(xdst, fl)) {
+ dst_clone(dst);
+ break;
+ }
+ }
+ read_unlock_bh(&policy->lock);
+ return dst;
+}
+
+/* Allocate chain of dst_entry's, attach known xfrm's, calculate
+ * all the metrics... Shortly, bundle a bundle.
+ */
+
+static int
+__xfrm6_bundle_create(struct xfrm_policy *policy, struct xfrm_state **xfrm, int nx,
+ struct flowi *fl, struct dst_entry **dst_p)
+{
+ struct dst_entry *dst, *dst_prev;
+ struct rt6_info *rt0 = (struct rt6_info*)(*dst_p);
+ struct rt6_info *rt = rt0;
+ struct in6_addr *remote = fl->fl6_dst;
+ struct in6_addr *local = fl->fl6_src;
+ int i;
+ int err = 0;
+ int header_len = 0;
+ int trailer_len = 0;
+
+ dst = dst_prev = NULL;
+
+ for (i = 0; i < nx; i++) {
+ struct dst_entry *dst1 = dst_alloc(&xfrm6_dst_ops);
+
+ if (unlikely(dst1 == NULL)) {
+ err = -ENOBUFS;
+ goto error;
+ }
+
+ dst1->xfrm = xfrm[i];
+ if (!dst)
+ dst = dst1;
+ else {
+ dst_prev->child = dst1;
+ dst1->flags |= DST_NOHASH;
+ dst_clone(dst1);
+ }
+ dst_prev = dst1;
+ if (xfrm[i]->props.mode) {
+ remote = (struct in6_addr*)&xfrm[i]->id.daddr;
+ local = (struct in6_addr*)&xfrm[i]->props.saddr;
+ }
+ header_len += xfrm[i]->props.header_len;
+ trailer_len += xfrm[i]->props.trailer_len;
+ }
+
+ if (ipv6_addr_cmp(remote, fl->fl6_dst)) {
+ struct flowi fl_tunnel = { .nl_u = { .ip6_u =
+ { .daddr = remote,
+ .saddr = local }
+ }
+ };
+ err = xfrm_dst_lookup((struct xfrm_dst**)&rt, &fl_tunnel, AF_INET6);
+ if (err)
+ goto error;
+ } else {
+ dst_hold(&rt->u.dst);
+ }
+ dst_prev->child = &rt->u.dst;
+ for (dst_prev = dst; dst_prev != &rt->u.dst; dst_prev = dst_prev->child) {
+ struct xfrm_dst *x = (struct xfrm_dst*)dst_prev;
+ x->u.rt.fl = *fl;
+
+ dst_prev->dev = rt->u.dst.dev;
+ if (rt->u.dst.dev)
+ dev_hold(rt->u.dst.dev);
+ dst_prev->obsolete = -1;
+ dst_prev->flags |= DST_HOST;
+ dst_prev->lastuse = jiffies;
+ dst_prev->header_len = header_len;
+ dst_prev->trailer_len = trailer_len;
+ memcpy(&dst_prev->metrics, &rt->u.dst.metrics, sizeof(dst_prev->metrics));
+ dst_prev->path = &rt->u.dst;
+
+ /* Copy neighbour for reachability confirmation */
+ dst_prev->neighbour = neigh_clone(rt->u.dst.neighbour);
+ dst_prev->input = rt->u.dst.input;
+ dst_prev->output = dst_prev->xfrm->type->output;
+ /* Sheit... I remember I did this right. Apparently,
+ * it was magically lost, so this code needs audit */
+ x->u.rt6.rt6i_flags = rt0->rt6i_flags&(RTCF_BROADCAST|RTCF_MULTICAST|RTCF_LOCAL);
+ x->u.rt6.rt6i_metric = rt0->rt6i_metric;
+ x->u.rt6.rt6i_node = rt0->rt6i_node;
+ x->u.rt6.rt6i_hoplimit = rt0->rt6i_hoplimit;
+ x->u.rt6.rt6i_gateway = rt0->rt6i_gateway;
+ memcpy(&x->u.rt6.rt6i_gateway, &rt0->rt6i_gateway, sizeof(x->u.rt6.rt6i_gateway));
+ header_len -= x->u.dst.xfrm->props.header_len;
+ trailer_len -= x->u.dst.xfrm->props.trailer_len;
+ }
+ *dst_p = dst;
+ return 0;
+
+error:
+ if (dst)
+ dst_free(dst);
+ return err;
+}
+
+static inline void
+_decode_session6(struct sk_buff *skb, struct flowi *fl)
+{
+ u16 offset = sizeof(struct ipv6hdr);
+ struct ipv6hdr *hdr = skb->nh.ipv6h;
+ struct ipv6_opt_hdr *exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ u8 nexthdr = skb->nh.ipv6h->nexthdr;
+
+ fl->fl6_dst = &hdr->daddr;
+ fl->fl6_src = &hdr->saddr;
+
+ while (pskb_may_pull(skb, skb->nh.raw + offset + 1 - skb->data)) {
+ switch (nexthdr) {
+ case NEXTHDR_ROUTING:
+ case NEXTHDR_HOP:
+ case NEXTHDR_DEST:
+ offset += ipv6_optlen(exthdr);
+ nexthdr = exthdr->nexthdr;
+ exthdr = (struct ipv6_opt_hdr*)(skb->nh.raw + offset);
+ break;
+
+ case IPPROTO_UDP:
+ case IPPROTO_TCP:
+ case IPPROTO_SCTP:
+ if (pskb_may_pull(skb, skb->nh.raw + offset + 4 - skb->data)) {
+ u16 *ports = (u16 *)exthdr;
+
+ fl->fl_ip_sport = ports[0];
+ fl->fl_ip_dport = ports[1];
+ }
+ return;
+
+ /* XXX Why are there these headers? */
+ case IPPROTO_AH:
+ case IPPROTO_ESP:
+ case IPPROTO_COMP:
+ default:
+ fl->fl_ipsec_spi = 0;
+ return;
+ };
+ }
+}
+
+static inline int xfrm6_garbage_collect(void)
+{
+ read_lock(&xfrm6_policy_afinfo.lock);
+ xfrm6_policy_afinfo.garbage_collect();
+ read_unlock(&xfrm6_policy_afinfo.lock);
+ return (atomic_read(&xfrm6_dst_ops.entries) > xfrm6_dst_ops.gc_thresh*2);
+}
+
+static void xfrm6_update_pmtu(struct dst_entry *dst, u32 mtu)
+{
+ struct dst_entry *path = dst->path;
+
+ if (mtu >= 1280 && mtu < dst_pmtu(dst))
+ return;
+
+ path->ops->update_pmtu(path, mtu);
+}
+
+struct dst_ops xfrm6_dst_ops = {
+ .family = AF_INET6,
+ .protocol = __constant_htons(ETH_P_IPV6),
+ .gc = xfrm6_garbage_collect,
+ .update_pmtu = xfrm6_update_pmtu,
+ .gc_thresh = 1024,
+ .entry_size = sizeof(struct xfrm_dst),
+};
+
+struct xfrm_policy_afinfo xfrm6_policy_afinfo = {
+ .family = AF_INET6,
+ .lock = RW_LOCK_UNLOCKED,
+ .type_map = &xfrm6_type_map,
+ .dst_ops = &xfrm6_dst_ops,
+ .dst_lookup = xfrm6_dst_lookup,
+ .find_bundle = __xfrm6_find_bundle,
+ .bundle_create = __xfrm6_bundle_create,
+ .decode_session = _decode_session6,
+};
+
+void __init xfrm6_policy_init(void)
+{
+ xfrm_policy_register_afinfo(&xfrm6_policy_afinfo);
+}
+
+void __exit xfrm6_policy_fini(void)
+{
+ xfrm_policy_unregister_afinfo(&xfrm6_policy_afinfo);
+}
+
+void __init xfrm6_init(void)
+{
+ xfrm6_policy_init();
+ xfrm6_state_init();
+ xfrm6_input_init();
+}
+
+void __exit xfrm6_fini(void)
+{
+ //xfrm6_input_fini();
+ xfrm6_policy_fini();
+ xfrm6_state_fini();
+}
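`_decode_session6()` above relies on the standard IPv6 extension-header walk: Hop-by-Hop (0), Routing (43) and Destination (60) headers each carry their successor in their first octet and their length (in 8-octet units, excluding the first 8 octets) in their second, and the walk stops at the first transport header to pick up the ports. A stand-alone sketch of that walk (buffer-based, hypothetical helper name; the real function reads from an skb and also handles AH/ESP/COMP):

```c
/* Walk the extension-header chain in 'pkt' (the bytes following the
 * fixed IPv6 header), skipping Hop-by-Hop (0), Routing (43) and
 * Destination (60) headers. On success returns the final nexthdr and
 * stores its offset in *offset_out; returns -1 on overrun. */
static int walk_exthdrs(const unsigned char *pkt, int len,
			int first_nexthdr, int *offset_out)
{
	int nexthdr = first_nexthdr;
	int off = 0;

	while (off + 2 <= len) {
		if (nexthdr == 0 || nexthdr == 43 || nexthdr == 60) {
			nexthdr = pkt[off];
			off += (pkt[off + 1] + 1) * 8;	/* (hdrlen+1)*8 */
		} else {
			*offset_out = off;
			return nexthdr;
		}
	}
	return -1;
}

/* An 8-octet Destination Options header (nexthdr=6/TCP, hdrlen=0)
 * followed by the first four octets of a TCP header (sport 80).
 * Returns 1 when the walk lands on the TCP header at offset 8. */
static int walk_exthdrs_demo(void)
{
	unsigned char pkt[12] = { 6, 0, 0, 0, 0, 0, 0, 0, 0, 80, 1, 187 };
	int off = -1;
	int nh = walk_exthdrs(pkt, 12, 60, &off);

	return nh == 6 && off == 8 && ((pkt[off] << 8) | pkt[off + 1]) == 80;
}
```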
diff -Nru a/net/ipv6/xfrm6_state.c b/net/ipv6/xfrm6_state.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/ipv6/xfrm6_state.c Thu May 8 10:41:38 2003
@@ -0,0 +1,134 @@
+/*
+ * xfrm6_state.c: based on xfrm4_state.c
+ *
+ * Authors:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * IPv6 support
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific portion
+ *
+ */
+
+#include <net/xfrm.h>
+#include <linux/pfkeyv2.h>
+#include <linux/ipsec.h>
+#include <net/ipv6.h>
+
+extern struct xfrm_state_afinfo xfrm6_state_afinfo;
+
+static void
+__xfrm6_init_tempsel(struct xfrm_state *x, struct flowi *fl,
+ struct xfrm_tmpl *tmpl,
+ xfrm_address_t *daddr, xfrm_address_t *saddr)
+{
+ /* Initialize temporary selector matching only
+ * to current session. */
+ memcpy(&x->sel.daddr, fl->fl6_dst, sizeof(struct in6_addr));
+ memcpy(&x->sel.saddr, fl->fl6_src, sizeof(struct in6_addr));
+ x->sel.dport = fl->fl_ip_dport;
+ x->sel.dport_mask = ~0;
+ x->sel.sport = fl->fl_ip_sport;
+ x->sel.sport_mask = ~0;
+ x->sel.prefixlen_d = 128;
+ x->sel.prefixlen_s = 128;
+ x->sel.proto = fl->proto;
+ x->sel.ifindex = fl->oif;
+ x->id = tmpl->id;
+ if (ipv6_addr_any((struct in6_addr*)&x->id.daddr))
+ memcpy(&x->id.daddr, daddr, sizeof(x->sel.daddr));
+ memcpy(&x->props.saddr, &tmpl->saddr, sizeof(x->props.saddr));
+ if (ipv6_addr_any((struct in6_addr*)&x->props.saddr))
+ memcpy(&x->props.saddr, saddr, sizeof(x->props.saddr));
+ x->props.mode = tmpl->mode;
+ x->props.reqid = tmpl->reqid;
+ x->props.family = AF_INET6;
+}
+
+static struct xfrm_state *
+__xfrm6_state_lookup(xfrm_address_t *daddr, u32 spi, u8 proto)
+{
+ unsigned h = __xfrm6_spi_hash(daddr, spi, proto);
+ struct xfrm_state *x;
+
+ list_for_each_entry(x, xfrm6_state_afinfo.state_byspi+h, byspi) {
+ if (x->props.family == AF_INET6 &&
+ spi == x->id.spi &&
+ !ipv6_addr_cmp((struct in6_addr *)daddr, (struct in6_addr *)x->id.daddr.a6) &&
+ proto == x->id.proto) {
+ atomic_inc(&x->refcnt);
+ return x;
+ }
+ }
+ return NULL;
+}
+
+static struct xfrm_state *
+__xfrm6_find_acq(u8 mode, u16 reqid, u8 proto,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ int create)
+{
+ struct xfrm_state *x, *x0;
+ unsigned h = __xfrm6_dst_hash(daddr);
+
+ x0 = NULL;
+
+ list_for_each_entry(x, xfrm6_state_afinfo.state_bydst+h, bydst) {
+ if (x->props.family == AF_INET6 &&
+ !ipv6_addr_cmp((struct in6_addr *)daddr, (struct in6_addr *)x->id.daddr.a6) &&
+ mode == x->props.mode &&
+ proto == x->id.proto &&
+ !ipv6_addr_cmp((struct in6_addr *)saddr, (struct in6_addr *)x->props.saddr.a6) &&
+ reqid == x->props.reqid &&
+ x->km.state == XFRM_STATE_ACQ) {
+ if (!x0)
+ x0 = x;
+ if (x->id.spi)
+ continue;
+ x0 = x;
+ break;
+ }
+ }
+ if (x0) {
+ atomic_inc(&x0->refcnt);
+ } else if (create && (x0 = xfrm_state_alloc()) != NULL) {
+ memcpy(x0->sel.daddr.a6, daddr, sizeof(struct in6_addr));
+ memcpy(x0->sel.saddr.a6, saddr, sizeof(struct in6_addr));
+ x0->sel.prefixlen_d = 128;
+ x0->sel.prefixlen_s = 128;
+ memcpy(x0->props.saddr.a6, saddr, sizeof(struct in6_addr));
+ x0->km.state = XFRM_STATE_ACQ;
+ memcpy(x0->id.daddr.a6, daddr, sizeof(struct in6_addr));
+ x0->id.proto = proto;
+ x0->props.family = AF_INET6;
+ x0->props.mode = mode;
+ x0->props.reqid = reqid;
+ x0->lft.hard_add_expires_seconds = XFRM_ACQ_EXPIRES;
+ atomic_inc(&x0->refcnt);
+ mod_timer(&x0->timer, jiffies + XFRM_ACQ_EXPIRES*HZ);
+ atomic_inc(&x0->refcnt);
+ list_add_tail(&x0->bydst, xfrm6_state_afinfo.state_bydst+h);
+ wake_up(&km_waitq);
+ }
+ return x0;
+}
+
+static struct xfrm_state_afinfo xfrm6_state_afinfo = {
+ .family = AF_INET6,
+ .lock = RW_LOCK_UNLOCKED,
+ .init_tempsel = __xfrm6_init_tempsel,
+ .state_lookup = __xfrm6_state_lookup,
+ .find_acq = __xfrm6_find_acq,
+};
+
+void __init xfrm6_state_init(void)
+{
+ xfrm_state_register_afinfo(&xfrm6_state_afinfo);
+}
+
+void __exit xfrm6_state_fini(void)
+{
+ xfrm_state_unregister_afinfo(&xfrm6_state_afinfo);
+}
+
diff -Nru a/net/key/Makefile b/net/key/Makefile
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/key/Makefile Thu May 8 10:41:38 2003
@@ -0,0 +1,9 @@
+#
+# Makefile for the key AF.
+#
+
+O_TARGET := key.o
+
+obj-$(CONFIG_NET_KEY) += af_key.o
+
+include $(TOPDIR)/Rules.make
diff -Nru a/net/key/af_key.c b/net/key/af_key.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/key/af_key.c Thu May 8 10:41:38 2003
@@ -0,0 +1,2868 @@
+/*
+ * net/key/af_key.c An implementation of PF_KEYv2 sockets.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Authors: Maxim Giryaev <gem@asplinux.ru>
+ * David S. Miller <davem@redhat.com>
+ * Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
+ * Kunihiro Ishiguro <kunihiro@ipinfusion.com>
+ * Kazunori MIYAZAWA / USAGI Project <miyazawa@linux-ipv6.org>
+ * Derek Atkins <derek@ihtfp.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/socket.h>
+#include <linux/pfkeyv2.h>
+#include <linux/ipsec.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/in.h>
+#include <linux/in6.h>
+#include <linux/proc_fs.h>
+#include <linux/init.h>
+#include <net/xfrm.h>
+
+#include <net/sock.h>
+
+#define _X2KEY(x) ((x) == XFRM_INF ? 0 : (x))
+#define _KEY2X(x) ((x) == 0 ? XFRM_INF : (x))
+
+
+/* List of all pfkey sockets. */
+static struct sock * pfkey_table;
+static DECLARE_WAIT_QUEUE_HEAD(pfkey_table_wait);
+static rwlock_t pfkey_table_lock = RW_LOCK_UNLOCKED;
+static atomic_t pfkey_table_users = ATOMIC_INIT(0);
+
+static atomic_t pfkey_socks_nr = ATOMIC_INIT(0);
+
+static void pfkey_sock_destruct(struct sock *sk)
+{
+ skb_queue_purge(&sk->receive_queue);
+
+ if (!sk->dead) {
+ printk("Attempt to release alive pfkey socket: %p\n", sk);
+ return;
+ }
+
+ BUG_TRAP(atomic_read(&sk->rmem_alloc)==0);
+ BUG_TRAP(atomic_read(&sk->wmem_alloc)==0);
+
+ kfree(pfkey_sk(sk));
+
+ atomic_dec(&pfkey_socks_nr);
+
+ MOD_DEC_USE_COUNT;
+}
+
+static void pfkey_table_grab(void)
+{
+ write_lock_bh(&pfkey_table_lock);
+
+ if (atomic_read(&pfkey_table_users)) {
+ DECLARE_WAITQUEUE(wait, current);
+
+ add_wait_queue_exclusive(&pfkey_table_wait, &wait);
+ for(;;) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ if (atomic_read(&pfkey_table_users) == 0)
+ break;
+ write_unlock_bh(&pfkey_table_lock);
+ schedule();
+ write_lock_bh(&pfkey_table_lock);
+ }
+
+ __set_current_state(TASK_RUNNING);
+ remove_wait_queue(&pfkey_table_wait, &wait);
+ }
+}
+
+static __inline__ void pfkey_table_ungrab(void)
+{
+ write_unlock_bh(&pfkey_table_lock);
+ wake_up(&pfkey_table_wait);
+}
+
+static __inline__ void pfkey_lock_table(void)
+{
+ /* read_lock() synchronizes us to pfkey_table_grab */
+
+ read_lock(&pfkey_table_lock);
+ atomic_inc(&pfkey_table_users);
+ read_unlock(&pfkey_table_lock);
+}
+
+static __inline__ void pfkey_unlock_table(void)
+{
+ if (atomic_dec_and_test(&pfkey_table_users))
+ wake_up(&pfkey_table_wait);
+}
+
+
+static struct proto_ops pfkey_ops;
+
+static void pfkey_insert(struct sock *sk)
+{
+ pfkey_table_grab();
+ sk->next = pfkey_table;
+ pfkey_table = sk;
+ sock_hold(sk);
+ pfkey_table_ungrab();
+}
+
+static void pfkey_remove(struct sock *sk)
+{
+ struct sock **skp;
+
+ pfkey_table_grab();
+ for (skp = &pfkey_table; *skp; skp = &((*skp)->next)) {
+ if (*skp == sk) {
+ *skp = sk->next;
+ __sock_put(sk);
+ break;
+ }
+ }
+ pfkey_table_ungrab();
+}
+
+static int pfkey_create(struct socket *sock, int protocol)
+{
+ struct sock *sk;
+ struct pfkey_opt *pfk;
+ int err;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (sock->type != SOCK_RAW)
+ return -ESOCKTNOSUPPORT;
+ if (protocol != PF_KEY_V2)
+ return -EPROTONOSUPPORT;
+
+ MOD_INC_USE_COUNT;
+
+ err = -ENOMEM;
+ sk = sk_alloc(PF_KEY, GFP_KERNEL, 1);
+ if (sk == NULL)
+ goto out;
+
+ sock->ops = &pfkey_ops;
+ sock_init_data(sock, sk);
+
+ err = -ENOMEM;
+ pfk = pfkey_sk(sk) = kmalloc(sizeof(*pfk), GFP_KERNEL);
+ if (!pfk) {
+ sk_free(sk);
+ goto out;
+ }
+ memset(pfk, 0, sizeof(*pfk));
+
+ sk->family = PF_KEY;
+ sk->destruct = pfkey_sock_destruct;
+
+ atomic_inc(&pfkey_socks_nr);
+
+ pfkey_insert(sk);
+
+ return 0;
+
+out:
+ MOD_DEC_USE_COUNT;
+ return err;
+}
+
+static int pfkey_release(struct socket *sock)
+{
+ struct sock *sk = sock->sk;
+
+ if (!sk)
+ return 0;
+
+ pfkey_remove(sk);
+
+ sock_orphan(sk);
+ sock->sk = NULL;
+ skb_queue_purge(&sk->write_queue);
+ sock_put(sk);
+
+ return 0;
+}
+
+static int pfkey_broadcast_one(struct sk_buff *skb, struct sk_buff **skb2,
+ int allocation, struct sock *sk)
+{
+ int err = -ENOBUFS;
+
+ sock_hold(sk);
+ if (*skb2 == NULL) {
+ if (atomic_read(&skb->users) != 1) {
+ *skb2 = skb_clone(skb, allocation);
+ } else {
+ *skb2 = skb;
+ atomic_inc(&skb->users);
+ }
+ }
+ if (*skb2 != NULL) {
+ if (atomic_read(&sk->rmem_alloc) <= sk->rcvbuf) {
+ skb_orphan(*skb2);
+ skb_set_owner_r(*skb2, sk);
+ skb_queue_tail(&sk->receive_queue, *skb2);
+ sk->data_ready(sk, (*skb2)->len);
+ *skb2 = NULL;
+ err = 0;
+ }
+ }
+ sock_put(sk);
+ return err;
+}
+
+/* Send SKB to all pfkey sockets matching selected criteria. */
+#define BROADCAST_ALL 0
+#define BROADCAST_ONE 1
+#define BROADCAST_REGISTERED 2
+#define BROADCAST_PROMISC_ONLY 4
+static int pfkey_broadcast(struct sk_buff *skb, int allocation,
+ int broadcast_flags, struct sock *one_sk)
+{
+ struct sock *sk;
+ struct sk_buff *skb2 = NULL;
+ int err = -ESRCH;
+
+ /* XXX Do we need something like netlink_overrun? I think
+ * XXX PF_KEY socket apps will not mind current behavior.
+ */
+ if (!skb)
+ return -ENOMEM;
+
+ pfkey_lock_table();
+ for (sk = pfkey_table; sk; sk = sk->next) {
+ struct pfkey_opt *pfk = pfkey_sk(sk);
+ int err2;
+
+ /* Yes, it means that if you are meant to receive this
+ * pfkey message you receive it twice as promiscuous
+ * socket.
+ */
+ if (pfk->promisc)
+ pfkey_broadcast_one(skb, &skb2, allocation, sk);
+
+ /* the exact target will be processed later */
+ if (sk == one_sk)
+ continue;
+ if (broadcast_flags != BROADCAST_ALL) {
+ if (broadcast_flags & BROADCAST_PROMISC_ONLY)
+ continue;
+ if ((broadcast_flags & BROADCAST_REGISTERED) &&
+ !pfk->registered)
+ continue;
+ if (broadcast_flags & BROADCAST_ONE)
+ continue;
+ }
+
+ err2 = pfkey_broadcast_one(skb, &skb2, allocation, sk);
+
+ /* Error is cleared after successful sending to at least one
+ * registered KM */
+ if ((broadcast_flags & BROADCAST_REGISTERED) && err)
+ err = err2;
+ }
+ pfkey_unlock_table();
+
+ if (one_sk != NULL)
+ err = pfkey_broadcast_one(skb, &skb2, allocation, one_sk);
+
+ if (skb2)
+ kfree_skb(skb2);
+ kfree_skb(skb);
+ return err;
+}
+
+static inline void pfkey_hdr_dup(struct sadb_msg *new, struct sadb_msg *orig)
+{
+ *new = *orig;
+}
+
+static int pfkey_error(struct sadb_msg *orig, int err, struct sock *sk)
+{
+ struct sk_buff *skb = alloc_skb(sizeof(struct sadb_msg) + 16, GFP_KERNEL);
+ struct sadb_msg *hdr;
+
+ if (!skb)
+ return -ENOBUFS;
+
+ /* Woe be to the platform trying to support PFKEY yet
+ * having normal errnos outside the 1-255 range, inclusive.
+ */
+ err = -err;
+ if (err == ERESTARTSYS ||
+ err == ERESTARTNOHAND ||
+ err == ERESTARTNOINTR)
+ err = EINTR;
+ if (err >= 512)
+ err = EINVAL;
+ if (err <= 0 || err >= 256)
+ BUG();
+
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
+ pfkey_hdr_dup(hdr, orig);
+ hdr->sadb_msg_errno = (uint8_t) err;
+ hdr->sadb_msg_len = (sizeof(struct sadb_msg) /
+ sizeof(uint64_t));
+
+ pfkey_broadcast(skb, GFP_KERNEL, BROADCAST_ONE, sk);
+
+ return 0;
+}
+
+static u8 sadb_ext_min_len[] = {
+ [SADB_EXT_RESERVED] = (u8) 0,
+ [SADB_EXT_SA] = (u8) sizeof(struct sadb_sa),
+ [SADB_EXT_LIFETIME_CURRENT] = (u8) sizeof(struct sadb_lifetime),
+ [SADB_EXT_LIFETIME_HARD] = (u8) sizeof(struct sadb_lifetime),
+ [SADB_EXT_LIFETIME_SOFT] = (u8) sizeof(struct sadb_lifetime),
+ [SADB_EXT_ADDRESS_SRC] = (u8) sizeof(struct sadb_address),
+ [SADB_EXT_ADDRESS_DST] = (u8) sizeof(struct sadb_address),
+ [SADB_EXT_ADDRESS_PROXY] = (u8) sizeof(struct sadb_address),
+ [SADB_EXT_KEY_AUTH] = (u8) sizeof(struct sadb_key),
+ [SADB_EXT_KEY_ENCRYPT] = (u8) sizeof(struct sadb_key),
+ [SADB_EXT_IDENTITY_SRC] = (u8) sizeof(struct sadb_ident),
+ [SADB_EXT_IDENTITY_DST] = (u8) sizeof(struct sadb_ident),
+ [SADB_EXT_SENSITIVITY] = (u8) sizeof(struct sadb_sens),
+ [SADB_EXT_PROPOSAL] = (u8) sizeof(struct sadb_prop),
+ [SADB_EXT_SUPPORTED_AUTH] = (u8) sizeof(struct sadb_supported),
+ [SADB_EXT_SUPPORTED_ENCRYPT] = (u8) sizeof(struct sadb_supported),
+ [SADB_EXT_SPIRANGE] = (u8) sizeof(struct sadb_spirange),
+ [SADB_X_EXT_KMPRIVATE] = (u8) sizeof(struct sadb_x_kmprivate),
+ [SADB_X_EXT_POLICY] = (u8) sizeof(struct sadb_x_policy),
+ [SADB_X_EXT_SA2] = (u8) sizeof(struct sadb_x_sa2),
+ [SADB_X_EXT_NAT_T_TYPE] = (u8) sizeof(struct sadb_x_nat_t_type),
+ [SADB_X_EXT_NAT_T_SPORT] = (u8) sizeof(struct sadb_x_nat_t_port),
+ [SADB_X_EXT_NAT_T_DPORT] = (u8) sizeof(struct sadb_x_nat_t_port),
+ [SADB_X_EXT_NAT_T_OA] = (u8) sizeof(struct sadb_address),
+};
+
+/* Verify sadb_address_{len,prefixlen} against sa_family. */
+static int verify_address_len(void *p)
+{
+ struct sadb_address *sp = p;
+ struct sockaddr *addr = (struct sockaddr *)(sp + 1);
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+ int len;
+
+ switch (addr->sa_family) {
+ case AF_INET:
+ len = sizeof(*sp) + sizeof(*sin) + (sizeof(uint64_t) - 1);
+ len /= sizeof(uint64_t);
+ if (sp->sadb_address_len != len ||
+ sp->sadb_address_prefixlen > 32)
+ return -EINVAL;
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ len = sizeof(*sp) + sizeof(*sin6) + (sizeof(uint64_t) - 1);
+ len /= sizeof(uint64_t);
+ if (sp->sadb_address_len != len ||
+ sp->sadb_address_prefixlen > 128)
+ return -EINVAL;
+ break;
+#endif
+ default:
+ /* The user is using the kernel to keep track of security
+ * associations for another protocol, such as
+ * OSPF/RSVP/RIPV2/MIP. It is the user's job to verify
+ * the lengths.
+ *
+ * XXX Actually, the association/policy database is not yet
+ * XXX able to cope with arbitrary sockaddr families.
+ * XXX When it can, remove this -EINVAL. -DaveM
+ */
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int present_and_same_family(struct sadb_address *src,
+ struct sadb_address *dst)
+{
+ struct sockaddr *s_addr, *d_addr;
+
+ if (!src || !dst)
+ return 0;
+
+ s_addr = (struct sockaddr *)(src + 1);
+ d_addr = (struct sockaddr *)(dst + 1);
+ if (s_addr->sa_family != d_addr->sa_family)
+ return 0;
+ if (s_addr->sa_family != AF_INET
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ && s_addr->sa_family != AF_INET6
+#endif
+ )
+ return 0;
+
+ return 1;
+}
+
+static int parse_exthdrs(struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ char *p = (char *) hdr;
+ int len = skb->len;
+
+ len -= sizeof(*hdr);
+ p += sizeof(*hdr);
+ while (len > 0) {
+ struct sadb_ext *ehdr = (struct sadb_ext *) p;
+ uint16_t ext_type;
+ int ext_len;
+
+ ext_len = ehdr->sadb_ext_len;
+ ext_len *= sizeof(uint64_t);
+ ext_type = ehdr->sadb_ext_type;
+ if (ext_len < sizeof(uint64_t) ||
+ ext_len > len ||
+ ext_type == SADB_EXT_RESERVED)
+ return -EINVAL;
+
+ if (ext_type <= SADB_EXT_MAX) {
+ int min = (int) sadb_ext_min_len[ext_type];
+ if (ext_len < min)
+ return -EINVAL;
+ if (ext_hdrs[ext_type-1] != NULL)
+ return -EINVAL;
+ if (ext_type == SADB_EXT_ADDRESS_SRC ||
+ ext_type == SADB_EXT_ADDRESS_DST ||
+ ext_type == SADB_EXT_ADDRESS_PROXY ||
+ ext_type == SADB_X_EXT_NAT_T_OA) {
+ if (verify_address_len(p))
+ return -EINVAL;
+ }
+ ext_hdrs[ext_type-1] = p;
+ }
+ p += ext_len;
+ len -= ext_len;
+ }
+
+ return 0;
+}
+
+static uint16_t
+pfkey_satype2proto(uint8_t satype)
+{
+ switch (satype) {
+ case SADB_SATYPE_UNSPEC:
+ return IPSEC_PROTO_ANY;
+ case SADB_SATYPE_AH:
+ return IPPROTO_AH;
+ case SADB_SATYPE_ESP:
+ return IPPROTO_ESP;
+ case SADB_X_SATYPE_IPCOMP:
+ return IPPROTO_COMP;
+ default:
+ return 0;
+ }
+ /* NOTREACHED */
+}
+
+static uint8_t
+pfkey_proto2satype(uint16_t proto)
+{
+ switch (proto) {
+ case IPPROTO_AH:
+ return SADB_SATYPE_AH;
+ case IPPROTO_ESP:
+ return SADB_SATYPE_ESP;
+ case IPPROTO_COMP:
+ return SADB_X_SATYPE_IPCOMP;
+ default:
+ return 0;
+ }
+ /* NOTREACHED */
+}
+
+/* BTW, this scheme means that there is no way with PFKEY2 sockets to
+ * say specifically 'just raw sockets' as we encode them as 255.
+ */
+
+static uint8_t pfkey_proto_to_xfrm(uint8_t proto)
+{
+ return (proto == IPSEC_PROTO_ANY ? 0 : proto);
+}
+
+static uint8_t pfkey_proto_from_xfrm(uint8_t proto)
+{
+ return (proto ? proto : IPSEC_PROTO_ANY);
+}
+
+static int pfkey_sadb_addr2xfrm_addr(struct sadb_address *addr,
+ xfrm_address_t *xaddr)
+{
+ switch (((struct sockaddr*)(addr + 1))->sa_family) {
+ case AF_INET:
+ xaddr->a4 =
+ ((struct sockaddr_in *)(addr + 1))->sin_addr.s_addr;
+ return AF_INET;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ memcpy(xaddr->a6,
+ &((struct sockaddr_in6 *)(addr + 1))->sin6_addr,
+ sizeof(struct in6_addr));
+ return AF_INET6;
+#endif
+ default:
+ return 0;
+ }
+ /* NOTREACHED */
+}
+
+static struct xfrm_state *pfkey_xfrm_state_lookup(struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct sadb_sa *sa;
+ struct sadb_address *addr;
+ uint16_t proto;
+ unsigned short family;
+ xfrm_address_t *xaddr;
+
+ sa = (struct sadb_sa *) ext_hdrs[SADB_EXT_SA-1];
+ if (sa == NULL)
+ return NULL;
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+ if (proto == 0)
+ return NULL;
+
+ /* sadb_address_len should be checked by caller */
+ addr = (struct sadb_address *) ext_hdrs[SADB_EXT_ADDRESS_DST-1];
+ if (addr == NULL)
+ return NULL;
+
+ family = ((struct sockaddr *)(addr + 1))->sa_family;
+ switch (family) {
+ case AF_INET:
+ xaddr = (xfrm_address_t *)&((struct sockaddr_in *)(addr + 1))->sin_addr;
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ xaddr = (xfrm_address_t *)&((struct sockaddr_in6 *)(addr + 1))->sin6_addr;
+ break;
+#endif
+ default:
+ xaddr = NULL;
+ }
+
+ if (!xaddr)
+ return NULL;
+
+ return xfrm_state_lookup(xaddr, sa->sadb_sa_spi, proto, family);
+}
+
+#define PFKEY_ALIGN8(a) (1 + (((a) - 1) | (8 - 1)))
+static int
+pfkey_sockaddr_size(sa_family_t family)
+{
+ switch (family) {
+ case AF_INET:
+ return PFKEY_ALIGN8(sizeof(struct sockaddr_in));
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ return PFKEY_ALIGN8(sizeof(struct sockaddr_in6));
+#endif
+ default:
+ return 0;
+ }
+ /* NOTREACHED */
+}
+
+static struct sk_buff * pfkey_xfrm_state2msg(struct xfrm_state *x, int add_keys, int hsc)
+{
+ struct sk_buff *skb;
+ struct sadb_msg *hdr;
+ struct sadb_sa *sa;
+ struct sadb_lifetime *lifetime;
+ struct sadb_address *addr;
+ struct sadb_key *key;
+ struct sadb_x_sa2 *sa2;
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+ int size;
+ int auth_key_size = 0;
+ int encrypt_key_size = 0;
+ int sockaddr_size;
+ struct xfrm_encap_tmpl *natt = NULL;
+
+ /* address family check */
+ sockaddr_size = pfkey_sockaddr_size(x->props.family);
+ if (!sockaddr_size)
+ return ERR_PTR(-EINVAL);
+
+ /* base, SA, (lifetime (HSC),) address(SD), (address(P),)
+ key(AE), (identity(SD),) (sensitivity) */
+ size = sizeof(struct sadb_msg) + sizeof(struct sadb_sa) +
+ sizeof(struct sadb_lifetime) +
+ ((hsc & 1) ? sizeof(struct sadb_lifetime) : 0) +
+ ((hsc & 2) ? sizeof(struct sadb_lifetime) : 0) +
+ sizeof(struct sadb_address)*2 +
+ sockaddr_size*2 +
+ sizeof(struct sadb_x_sa2);
+ /* identity & sensitivity */
+
+ if ((x->props.family == AF_INET &&
+ x->sel.saddr.a4 != x->props.saddr.a4)
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ || (x->props.family == AF_INET6 &&
+ memcmp (x->sel.saddr.a6, x->props.saddr.a6, sizeof (struct in6_addr)))
+#endif
+ )
+ size += sizeof(struct sadb_address) + sockaddr_size;
+
+ if (add_keys) {
+ if (x->aalg && x->aalg->alg_key_len) {
+ auth_key_size =
+ PFKEY_ALIGN8((x->aalg->alg_key_len + 7) / 8);
+ size += sizeof(struct sadb_key) + auth_key_size;
+ }
+ if (x->ealg && x->ealg->alg_key_len) {
+ encrypt_key_size =
+ PFKEY_ALIGN8((x->ealg->alg_key_len+7) / 8);
+ size += sizeof(struct sadb_key) + encrypt_key_size;
+ }
+ }
+ if (x->encap)
+ natt = x->encap;
+
+ if (natt && natt->encap_type) {
+ size += sizeof(struct sadb_x_nat_t_type);
+ size += sizeof(struct sadb_x_nat_t_port);
+ size += sizeof(struct sadb_x_nat_t_port);
+ }
+
+ skb = alloc_skb(size + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return ERR_PTR(-ENOBUFS);
+
+ /* caller should fill header later */
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
+ memset(hdr, 0, size); /* XXX do we need this ? */
+ hdr->sadb_msg_len = size / sizeof(uint64_t);
+
+ /* sa */
+ sa = (struct sadb_sa *) skb_put(skb, sizeof(struct sadb_sa));
+ sa->sadb_sa_len = sizeof(struct sadb_sa)/sizeof(uint64_t);
+ sa->sadb_sa_exttype = SADB_EXT_SA;
+ sa->sadb_sa_spi = x->id.spi;
+ sa->sadb_sa_replay = x->props.replay_window;
+ sa->sadb_sa_state = SADB_SASTATE_DYING;
+ if (x->km.state == XFRM_STATE_VALID && !x->km.dying)
+ sa->sadb_sa_state = SADB_SASTATE_MATURE;
+ else if (x->km.state == XFRM_STATE_ACQ)
+ sa->sadb_sa_state = SADB_SASTATE_LARVAL;
+ else if (x->km.state == XFRM_STATE_EXPIRED)
+ sa->sadb_sa_state = SADB_SASTATE_DEAD;
+ sa->sadb_sa_auth = 0;
+ if (x->aalg) {
+ struct xfrm_algo_desc *a = xfrm_aalg_get_byname(x->aalg->alg_name);
+ sa->sadb_sa_auth = a ? a->desc.sadb_alg_id : 0;
+ }
+ sa->sadb_sa_encrypt = 0;
+ BUG_ON(x->ealg && x->calg);
+ if (x->ealg) {
+ struct xfrm_algo_desc *a = xfrm_ealg_get_byname(x->ealg->alg_name);
+ sa->sadb_sa_encrypt = a ? a->desc.sadb_alg_id : 0;
+ }
+ /* KAME compatible: sadb_sa_encrypt is overloaded with calg id */
+ if (x->calg) {
+ struct xfrm_algo_desc *a = xfrm_calg_get_byname(x->calg->alg_name);
+ sa->sadb_sa_encrypt = a ? a->desc.sadb_alg_id : 0;
+ }
+
+ sa->sadb_sa_flags = 0;
+
+ /* hard time */
+ if (hsc & 2) {
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_HARD;
+ lifetime->sadb_lifetime_allocations = _X2KEY(x->lft.hard_packet_limit);
+ lifetime->sadb_lifetime_bytes = _X2KEY(x->lft.hard_byte_limit);
+ lifetime->sadb_lifetime_addtime = x->lft.hard_add_expires_seconds;
+ lifetime->sadb_lifetime_usetime = x->lft.hard_use_expires_seconds;
+ }
+ /* soft time */
+ if (hsc & 1) {
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_SOFT;
+ lifetime->sadb_lifetime_allocations = _X2KEY(x->lft.soft_packet_limit);
+ lifetime->sadb_lifetime_bytes = _X2KEY(x->lft.soft_byte_limit);
+ lifetime->sadb_lifetime_addtime = x->lft.soft_add_expires_seconds;
+ lifetime->sadb_lifetime_usetime = x->lft.soft_use_expires_seconds;
+ }
+ /* current time */
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_CURRENT;
+ lifetime->sadb_lifetime_allocations = x->curlft.packets;
+ lifetime->sadb_lifetime_bytes = x->curlft.bytes;
+ lifetime->sadb_lifetime_addtime = x->curlft.add_time;
+ lifetime->sadb_lifetime_usetime = x->curlft.use_time;
+ /* src address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_SRC;
+ /* "if the ports are non-zero, then the sadb_address_proto field,
+ normally zero, MUST be filled in with the transport
+ protocol's number." - RFC2367 */
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ addr->sadb_address_prefixlen = 32;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->props.saddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, x->props.saddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* dst address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_DST;
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_prefixlen = 32; /* XXX */
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->id.daddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+
+ if (x->sel.saddr.a4 != x->props.saddr.a4) {
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_PROXY;
+ addr->sadb_address_proto =
+ pfkey_proto_from_xfrm(x->sel.proto);
+ addr->sadb_address_prefixlen = x->sel.prefixlen_s;
+ addr->sadb_address_reserved = 0;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->sel.saddr.a4;
+ sin->sin_port = x->sel.sport;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, x->id.daddr.a6, sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+
+ if (memcmp (x->sel.saddr.a6, x->props.saddr.a6,
+ sizeof(struct in6_addr))) {
+ addr = (struct sadb_address *) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_PROXY;
+ addr->sadb_address_proto =
+ pfkey_proto_from_xfrm(x->sel.proto);
+ addr->sadb_address_prefixlen = x->sel.prefixlen_s;
+ addr->sadb_address_reserved = 0;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = x->sel.sport;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, x->sel.saddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+ }
+#endif
+ else
+ BUG();
+
+ /* auth key */
+ if (add_keys && auth_key_size) {
+ key = (struct sadb_key *) skb_put(skb,
+ sizeof(struct sadb_key)+auth_key_size);
+ key->sadb_key_len = (sizeof(struct sadb_key) + auth_key_size) /
+ sizeof(uint64_t);
+ key->sadb_key_exttype = SADB_EXT_KEY_AUTH;
+ key->sadb_key_bits = x->aalg->alg_key_len;
+ key->sadb_key_reserved = 0;
+ memcpy(key + 1, x->aalg->alg_key, (x->aalg->alg_key_len+7)/8);
+ }
+ /* encrypt key */
+ if (add_keys && encrypt_key_size) {
+ key = (struct sadb_key *) skb_put(skb,
+ sizeof(struct sadb_key)+encrypt_key_size);
+ key->sadb_key_len = (sizeof(struct sadb_key) +
+ encrypt_key_size) / sizeof(uint64_t);
+ key->sadb_key_exttype = SADB_EXT_KEY_ENCRYPT;
+ key->sadb_key_bits = x->ealg->alg_key_len;
+ key->sadb_key_reserved = 0;
+ memcpy(key + 1, x->ealg->alg_key,
+ (x->ealg->alg_key_len+7)/8);
+ }
+
+ /* sa */
+ sa2 = (struct sadb_x_sa2 *) skb_put(skb, sizeof(struct sadb_x_sa2));
+ sa2->sadb_x_sa2_len = sizeof(struct sadb_x_sa2)/sizeof(uint64_t);
+ sa2->sadb_x_sa2_exttype = SADB_X_EXT_SA2;
+ sa2->sadb_x_sa2_mode = x->props.mode + 1;
+ sa2->sadb_x_sa2_reserved1 = 0;
+ sa2->sadb_x_sa2_reserved2 = 0;
+ sa2->sadb_x_sa2_sequence = 0;
+ sa2->sadb_x_sa2_reqid = x->props.reqid;
+
+ if (natt && natt->encap_type) {
+ struct sadb_x_nat_t_type *n_type;
+ struct sadb_x_nat_t_port *n_port;
+
+ /* type */
+ n_type = (struct sadb_x_nat_t_type*) skb_put(skb, sizeof(*n_type));
+ n_type->sadb_x_nat_t_type_len = sizeof(*n_type)/sizeof(uint64_t);
+ n_type->sadb_x_nat_t_type_exttype = SADB_X_EXT_NAT_T_TYPE;
+ n_type->sadb_x_nat_t_type_type = natt->encap_type;
+ n_type->sadb_x_nat_t_type_reserved[0] = 0;
+ n_type->sadb_x_nat_t_type_reserved[1] = 0;
+ n_type->sadb_x_nat_t_type_reserved[2] = 0;
+
+ /* source port */
+ n_port = (struct sadb_x_nat_t_port*) skb_put(skb, sizeof (*n_port));
+ n_port->sadb_x_nat_t_port_len = sizeof(*n_port)/sizeof(uint64_t);
+ n_port->sadb_x_nat_t_port_exttype = SADB_X_EXT_NAT_T_SPORT;
+ n_port->sadb_x_nat_t_port_port = natt->encap_sport;
+ n_port->sadb_x_nat_t_port_reserved = 0;
+
+ /* dest port */
+ n_port = (struct sadb_x_nat_t_port*) skb_put(skb, sizeof (*n_port));
+ n_port->sadb_x_nat_t_port_len = sizeof(*n_port)/sizeof(uint64_t);
+ n_port->sadb_x_nat_t_port_exttype = SADB_X_EXT_NAT_T_DPORT;
+ n_port->sadb_x_nat_t_port_port = natt->encap_dport;
+ n_port->sadb_x_nat_t_port_reserved = 0;
+ }
+
+ return skb;
+}
+
+static struct xfrm_state * pfkey_msg2xfrm_state(struct sadb_msg *hdr,
+ void **ext_hdrs)
+{
+ struct xfrm_state *x;
+ struct sadb_lifetime *lifetime;
+ struct sadb_sa *sa;
+ struct sadb_key *key;
+ uint16_t proto;
+
+
+ sa = (struct sadb_sa *) ext_hdrs[SADB_EXT_SA-1];
+ if (!sa ||
+ !present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]))
+ return ERR_PTR(-EINVAL);
+ if (hdr->sadb_msg_satype == SADB_SATYPE_ESP &&
+ !ext_hdrs[SADB_EXT_KEY_ENCRYPT-1])
+ return ERR_PTR(-EINVAL);
+ if (hdr->sadb_msg_satype == SADB_SATYPE_AH &&
+ !ext_hdrs[SADB_EXT_KEY_AUTH-1])
+ return ERR_PTR(-EINVAL);
+ if (!!ext_hdrs[SADB_EXT_LIFETIME_HARD-1] !=
+ !!ext_hdrs[SADB_EXT_LIFETIME_SOFT-1])
+ return ERR_PTR(-EINVAL);
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+ if (proto == 0)
+ return ERR_PTR(-EINVAL);
+
+ /* RFC2367:
+
+ Only SADB_SASTATE_MATURE SAs may be submitted in an SADB_ADD message.
+ SADB_SASTATE_LARVAL SAs are created by SADB_GETSPI and it is not
+ sensible to add a new SA in the DYING or SADB_SASTATE_DEAD state.
+ Therefore, the sadb_sa_state field of all submitted SAs MUST be
+ SADB_SASTATE_MATURE and the kernel MUST return an error if this is
+ not true.
+
+ However, KAME setkey always uses SADB_SASTATE_LARVAL.
+ Hence, we have to _ignore_ sadb_sa_state, which is also reasonable.
+ */
+ if (sa->sadb_sa_auth > SADB_AALG_MAX ||
+ (hdr->sadb_msg_satype == SADB_X_SATYPE_IPCOMP &&
+ sa->sadb_sa_encrypt > SADB_X_CALG_MAX) ||
+ sa->sadb_sa_encrypt > SADB_EALG_MAX)
+ return ERR_PTR(-EINVAL);
+ key = (struct sadb_key*) ext_hdrs[SADB_EXT_KEY_AUTH-1];
+ if (key != NULL &&
+ sa->sadb_sa_auth != SADB_X_AALG_NULL &&
+ ((key->sadb_key_bits+7) / 8 == 0 ||
+ (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t)))
+ return ERR_PTR(-EINVAL);
+ key = ext_hdrs[SADB_EXT_KEY_ENCRYPT-1];
+ if (key != NULL &&
+ sa->sadb_sa_encrypt != SADB_EALG_NULL &&
+ ((key->sadb_key_bits+7) / 8 == 0 ||
+ (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t)))
+ return ERR_PTR(-EINVAL);
+
+ x = xfrm_state_alloc();
+ if (x == NULL)
+ return ERR_PTR(-ENOBUFS);
+
+ x->id.proto = proto;
+ x->id.spi = sa->sadb_sa_spi;
+ x->props.replay_window = sa->sadb_sa_replay;
+
+ lifetime = (struct sadb_lifetime*) ext_hdrs[SADB_EXT_LIFETIME_HARD-1];
+ if (lifetime != NULL) {
+ x->lft.hard_packet_limit = _KEY2X(lifetime->sadb_lifetime_allocations);
+ x->lft.hard_byte_limit = _KEY2X(lifetime->sadb_lifetime_bytes);
+ x->lft.hard_add_expires_seconds = lifetime->sadb_lifetime_addtime;
+ x->lft.hard_use_expires_seconds = lifetime->sadb_lifetime_usetime;
+ }
+ lifetime = (struct sadb_lifetime*) ext_hdrs[SADB_EXT_LIFETIME_SOFT-1];
+ if (lifetime != NULL) {
+ x->lft.soft_packet_limit = _KEY2X(lifetime->sadb_lifetime_allocations);
+ x->lft.soft_byte_limit = _KEY2X(lifetime->sadb_lifetime_bytes);
+ x->lft.soft_add_expires_seconds = lifetime->sadb_lifetime_addtime;
+ x->lft.soft_use_expires_seconds = lifetime->sadb_lifetime_usetime;
+ }
+ key = (struct sadb_key*) ext_hdrs[SADB_EXT_KEY_AUTH-1];
+ if (sa->sadb_sa_auth) {
+ int keysize = 0;
+ struct xfrm_algo_desc *a = xfrm_aalg_get_byid(sa->sadb_sa_auth);
+ if (!a)
+ goto out;
+ if (key)
+ keysize = (key->sadb_key_bits + 7) / 8;
+ x->aalg = kmalloc(sizeof(*x->aalg) + keysize, GFP_KERNEL);
+ if (!x->aalg)
+ goto out;
+ strcpy(x->aalg->alg_name, a->name);
+ x->aalg->alg_key_len = 0;
+ if (key) {
+ x->aalg->alg_key_len = key->sadb_key_bits;
+ memcpy(x->aalg->alg_key, key+1, keysize);
+ }
+ x->props.aalgo = sa->sadb_sa_auth;
+ /* x->algo.flags = sa->sadb_sa_flags; */
+ }
+ if (sa->sadb_sa_encrypt) {
+ if (hdr->sadb_msg_satype == SADB_X_SATYPE_IPCOMP) {
+ struct xfrm_algo_desc *a = xfrm_calg_get_byid(sa->sadb_sa_encrypt);
+ if (!a)
+ goto out;
+ x->calg = kmalloc(sizeof(*x->calg), GFP_KERNEL);
+ if (!x->calg)
+ goto out;
+ strcpy(x->calg->alg_name, a->name);
+ x->props.calgo = sa->sadb_sa_encrypt;
+ } else {
+ int keysize = 0;
+ struct xfrm_algo_desc *a = xfrm_ealg_get_byid(sa->sadb_sa_encrypt);
+ if (!a)
+ goto out;
+ key = (struct sadb_key*) ext_hdrs[SADB_EXT_KEY_ENCRYPT-1];
+ if (key)
+ keysize = (key->sadb_key_bits + 7) / 8;
+ x->ealg = kmalloc(sizeof(*x->ealg) + keysize, GFP_KERNEL);
+ if (!x->ealg)
+ goto out;
+ strcpy(x->ealg->alg_name, a->name);
+ x->ealg->alg_key_len = 0;
+ if (key) {
+ x->ealg->alg_key_len = key->sadb_key_bits;
+ memcpy(x->ealg->alg_key, key+1, keysize);
+ }
+ x->props.ealgo = sa->sadb_sa_encrypt;
+ }
+ }
+ /* x->algo.flags = sa->sadb_sa_flags; */
+
+ x->props.family = pfkey_sadb_addr2xfrm_addr((struct sadb_address *) ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ &x->props.saddr);
+ if (!x->props.family)
+ goto out;
+ pfkey_sadb_addr2xfrm_addr((struct sadb_address *) ext_hdrs[SADB_EXT_ADDRESS_DST-1],
+ &x->id.daddr);
+
+ if (ext_hdrs[SADB_X_EXT_SA2-1]) {
+ struct sadb_x_sa2 *sa2 = (void*)ext_hdrs[SADB_X_EXT_SA2-1];
+ x->props.mode = sa2->sadb_x_sa2_mode;
+ if (x->props.mode)
+ x->props.mode--;
+ x->props.reqid = sa2->sadb_x_sa2_reqid;
+ }
+
+ if (ext_hdrs[SADB_EXT_ADDRESS_PROXY-1]) {
+ struct sadb_address *addr = ext_hdrs[SADB_EXT_ADDRESS_PROXY-1];
+
+ /* Nobody uses this, but we try. */
+ pfkey_sadb_addr2xfrm_addr(addr, &x->sel.saddr);
+ x->sel.prefixlen_s = addr->sadb_address_prefixlen;
+ }
+
+ if (ext_hdrs[SADB_X_EXT_NAT_T_TYPE-1]) {
+ struct sadb_x_nat_t_type* n_type;
+ struct xfrm_encap_tmpl *natt;
+
+ x->encap = kmalloc(sizeof(*x->encap), GFP_KERNEL);
+ if (!x->encap)
+ goto out;
+
+ natt = x->encap;
+ n_type = ext_hdrs[SADB_X_EXT_NAT_T_TYPE-1];
+ natt->encap_type = n_type->sadb_x_nat_t_type_type;
+
+ if (ext_hdrs[SADB_X_EXT_NAT_T_SPORT-1]) {
+ struct sadb_x_nat_t_port* n_port =
+ ext_hdrs[SADB_X_EXT_NAT_T_SPORT-1];
+ natt->encap_sport = n_port->sadb_x_nat_t_port_port;
+ }
+ if (ext_hdrs[SADB_X_EXT_NAT_T_DPORT-1]) {
+ struct sadb_x_nat_t_port* n_port =
+ ext_hdrs[SADB_X_EXT_NAT_T_DPORT-1];
+ natt->encap_dport = n_port->sadb_x_nat_t_port_port;
+ }
+ }
+
+ x->type = xfrm_get_type(proto, x->props.family);
+ if (x->type == NULL)
+ goto out;
+ if (x->type->init_state(x, NULL))
+ goto out;
+ x->km.seq = hdr->sadb_msg_seq;
+ x->km.state = XFRM_STATE_VALID;
+ return x;
+
+out:
+ if (x->aalg)
+ kfree(x->aalg);
+ if (x->ealg)
+ kfree(x->ealg);
+ if (x->calg)
+ kfree(x->calg);
+ if (x->encap)
+ kfree(x->encap);
+ kfree(x);
+ return ERR_PTR(-ENOBUFS);
+}
+
+static int pfkey_reserved(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ return -EOPNOTSUPP;
+}
+
+static int pfkey_getspi(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct sk_buff *resp_skb;
+ struct sadb_x_sa2 *sa2;
+ struct sadb_address *saddr, *daddr;
+ struct sadb_msg *out_hdr;
+ struct xfrm_state *x = NULL;
+ u8 mode;
+ u16 reqid;
+ u8 proto;
+ unsigned short family;
+ xfrm_address_t *xsaddr = NULL, *xdaddr = NULL;
+
+ if (!present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]))
+ return -EINVAL;
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+ if (proto == 0)
+ return -EINVAL;
+
+ if ((sa2 = ext_hdrs[SADB_X_EXT_SA2-1]) != NULL) {
+ mode = sa2->sadb_x_sa2_mode - 1;
+ reqid = sa2->sadb_x_sa2_reqid;
+ } else {
+ mode = 0;
+ reqid = 0;
+ }
+
+ saddr = ext_hdrs[SADB_EXT_ADDRESS_SRC-1];
+ daddr = ext_hdrs[SADB_EXT_ADDRESS_DST-1];
+
+ family = ((struct sockaddr *)(saddr + 1))->sa_family;
+ switch (family) {
+ case AF_INET:
+ xdaddr = (xfrm_address_t *)&((struct sockaddr_in *)(daddr + 1))->sin_addr.s_addr;
+ xsaddr = (xfrm_address_t *)&((struct sockaddr_in *)(saddr + 1))->sin_addr.s_addr;
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ xdaddr = (xfrm_address_t *)&((struct sockaddr_in6 *)(daddr + 1))->sin6_addr;
+ xsaddr = (xfrm_address_t *)&((struct sockaddr_in6 *)(saddr + 1))->sin6_addr;
+ break;
+#endif
+ }
+ if (xdaddr)
+ x = xfrm_find_acq(mode, reqid, proto, xdaddr, xsaddr, 1, family);
+
+ if (x == NULL)
+ return -ENOENT;
+
+ resp_skb = ERR_PTR(-ENOENT);
+
+ spin_lock_bh(&x->lock);
+ if (x->km.state != XFRM_STATE_DEAD) {
+ struct sadb_spirange *range = ext_hdrs[SADB_EXT_SPIRANGE-1];
+ u32 min_spi, max_spi;
+
+ if (range != NULL) {
+ min_spi = range->sadb_spirange_min;
+ max_spi = range->sadb_spirange_max;
+ } else {
+ min_spi = htonl(0x100);
+ max_spi = htonl(0x0fffffff);
+ }
+ xfrm_alloc_spi(x, min_spi, max_spi);
+ if (x->id.spi)
+ resp_skb = pfkey_xfrm_state2msg(x, 0, 3);
+ }
+ spin_unlock_bh(&x->lock);
+
+ if (IS_ERR(resp_skb)) {
+ xfrm_state_put(x);
+ return PTR_ERR(resp_skb);
+ }
+
+ out_hdr = (struct sadb_msg *) resp_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = SADB_GETSPI;
+ out_hdr->sadb_msg_satype = pfkey_proto2satype(proto);
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_reserved = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+
+ xfrm_state_put(x);
+
+ pfkey_broadcast(resp_skb, GFP_KERNEL, BROADCAST_ONE, sk);
+
+ return 0;
+}
+
+static int pfkey_acquire(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct xfrm_state *x;
+
+ if (hdr->sadb_msg_len != sizeof(struct sadb_msg)/8)
+ return -EOPNOTSUPP;
+
+ if (hdr->sadb_msg_seq == 0 || hdr->sadb_msg_errno == 0)
+ return 0;
+
+ x = xfrm_find_acq_byseq(hdr->sadb_msg_seq);
+ if (x == NULL)
+ return 0;
+
+ spin_lock_bh(&x->lock);
+ if (x->km.state == XFRM_STATE_ACQ) {
+ x->km.state = XFRM_STATE_ERROR;
+ wake_up(&km_waitq);
+ }
+ spin_unlock_bh(&x->lock);
+ xfrm_state_put(x);
+ return 0;
+}
+
+
+static int pfkey_add(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+ struct xfrm_state *x;
+ struct xfrm_state *x1;
+
+ xfrm_probe_algs();
+
+ x = pfkey_msg2xfrm_state(hdr, ext_hdrs);
+ if (IS_ERR(x))
+ return PTR_ERR(x);
+
+ /* XXX there is a race condition */
+ x1 = pfkey_xfrm_state_lookup(hdr, ext_hdrs);
+ if (!x1) {
+ x1 = xfrm_find_acq(x->props.mode, x->props.reqid, x->id.proto,
+ &x->id.daddr,
+ &x->props.saddr, 0, x->props.family);
+ if (x1 && x1->id.spi != x->id.spi && x1->id.spi) {
+ xfrm_state_put(x1);
+ x1 = NULL;
+ }
+ }
+
+ if (x1 && x1->id.spi && hdr->sadb_msg_type == SADB_ADD) {
+ x->km.state = XFRM_STATE_DEAD;
+ xfrm_state_put(x);
+ xfrm_state_put(x1);
+ return -EEXIST;
+ }
+
+ xfrm_state_insert(x);
+
+ if (x1) {
+ xfrm_state_delete(x1);
+ xfrm_state_put(x1);
+ }
+
+ out_skb = pfkey_xfrm_state2msg(x, 0, 3);
+ if (IS_ERR(out_skb))
+ return PTR_ERR(out_skb); /* XXX Should we return 0 here ? */
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = hdr->sadb_msg_type;
+ out_hdr->sadb_msg_satype = pfkey_proto2satype(x->id.proto);
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_reserved = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, sk);
+
+ return 0;
+}
+
+static int pfkey_delete(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct xfrm_state *x;
+
+ if (!ext_hdrs[SADB_EXT_SA-1] ||
+ !present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]))
+ return -EINVAL;
+
+ x = pfkey_xfrm_state_lookup(hdr, ext_hdrs);
+ if (x == NULL)
+ return -ESRCH;
+
+ xfrm_state_delete(x);
+ xfrm_state_put(x);
+
+ pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+ BROADCAST_ALL, sk);
+
+ return 0;
+}
+
+static int pfkey_get(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+ struct xfrm_state *x;
+
+ if (!ext_hdrs[SADB_EXT_SA-1] ||
+ !present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]))
+ return -EINVAL;
+
+ x = pfkey_xfrm_state_lookup(hdr, ext_hdrs);
+ if (x == NULL)
+ return -ESRCH;
+
+ out_skb = pfkey_xfrm_state2msg(x, 1, 3);
+ xfrm_state_put(x);
+ if (IS_ERR(out_skb))
+ return PTR_ERR(out_skb);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = SADB_DUMP;
+ out_hdr->sadb_msg_satype = pfkey_proto2satype(x->id.proto);
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_reserved = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ONE, sk);
+
+ return 0;
+}
+
+static struct sk_buff *compose_sadb_supported(struct sadb_msg *orig, int allocation)
+{
+ struct sk_buff *skb;
+ struct sadb_msg *hdr;
+ int len, auth_len, enc_len, i;
+
+ auth_len = xfrm_count_auth_supported();
+ if (auth_len) {
+ auth_len *= sizeof(struct sadb_alg);
+ auth_len += sizeof(struct sadb_supported);
+ }
+
+ enc_len = xfrm_count_enc_supported();
+ if (enc_len) {
+ enc_len *= sizeof(struct sadb_alg);
+ enc_len += sizeof(struct sadb_supported);
+ }
+
+ len = enc_len + auth_len + sizeof(struct sadb_msg);
+
+ skb = alloc_skb(len + 16, allocation);
+ if (!skb)
+ goto out_put_algs;
+
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(*hdr));
+ pfkey_hdr_dup(hdr, orig);
+ hdr->sadb_msg_errno = 0;
+ hdr->sadb_msg_len = len / sizeof(uint64_t);
+
+ if (auth_len) {
+ struct sadb_supported *sp;
+ struct sadb_alg *ap;
+
+ sp = (struct sadb_supported *) skb_put(skb, auth_len);
+ ap = (struct sadb_alg *) (sp + 1);
+
+ sp->sadb_supported_len = auth_len / sizeof(uint64_t);
+ sp->sadb_supported_exttype = SADB_EXT_SUPPORTED_AUTH;
+
+ for (i = 0; ; i++) {
+ struct xfrm_algo_desc *aalg = xfrm_aalg_get_byidx(i);
+ if (!aalg)
+ break;
+ if (aalg->available)
+ *ap++ = aalg->desc;
+ }
+ }
+
+ if (enc_len) {
+ struct sadb_supported *sp;
+ struct sadb_alg *ap;
+
+ sp = (struct sadb_supported *) skb_put(skb, enc_len);
+ ap = (struct sadb_alg *) (sp + 1);
+
+ sp->sadb_supported_len = enc_len / sizeof(uint64_t);
+ sp->sadb_supported_exttype = SADB_EXT_SUPPORTED_ENCRYPT;
+
+ for (i = 0; ; i++) {
+ struct xfrm_algo_desc *ealg = xfrm_ealg_get_byidx(i);
+ if (!ealg)
+ break;
+ if (ealg->available)
+ *ap++ = ealg->desc;
+ }
+ }
+
+out_put_algs:
+ return skb;
+}
+
+static int pfkey_register(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct pfkey_opt *pfk = pfkey_sk(sk);
+ struct sk_buff *supp_skb;
+
+ if (hdr->sadb_msg_satype > SADB_SATYPE_MAX)
+ return -EINVAL;
+
+ if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC) {
+ if (pfk->registered&(1<<hdr->sadb_msg_satype))
+ return -EEXIST;
+ pfk->registered |= (1<<hdr->sadb_msg_satype);
+ }
+
+ xfrm_probe_algs();
+
+ supp_skb = compose_sadb_supported(hdr, GFP_KERNEL);
+ if (!supp_skb) {
+ if (hdr->sadb_msg_satype != SADB_SATYPE_UNSPEC)
+ pfk->registered &= ~(1<<hdr->sadb_msg_satype);
+
+ return -ENOBUFS;
+ }
+
+ pfkey_broadcast(supp_skb, GFP_KERNEL, BROADCAST_REGISTERED, sk);
+
+ return 0;
+}
+
+static int pfkey_flush(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ unsigned proto;
+ struct sk_buff *skb_out;
+ struct sadb_msg *hdr_out;
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+ if (proto == 0)
+ return -EINVAL;
+
+ skb_out = alloc_skb(sizeof(struct sadb_msg) + 16, GFP_KERNEL);
+ if (!skb_out)
+ return -ENOBUFS;
+
+ xfrm_state_flush(proto);
+
+ hdr_out = (struct sadb_msg *) skb_put(skb_out, sizeof(struct sadb_msg));
+ pfkey_hdr_dup(hdr_out, hdr);
+ hdr_out->sadb_msg_errno = (uint8_t) 0;
+ hdr_out->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
+
+ pfkey_broadcast(skb_out, GFP_KERNEL, BROADCAST_ALL, NULL);
+
+ return 0;
+}
+
+struct pfkey_dump_data {
+ struct sk_buff *skb;
+ struct sadb_msg *hdr;
+ struct sock *sk;
+};
+
+static int dump_sa(struct xfrm_state *x, int count, void *ptr)
+{
+ struct pfkey_dump_data *data = ptr;
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+
+ out_skb = pfkey_xfrm_state2msg(x, 1, 3);
+ if (IS_ERR(out_skb))
+ return PTR_ERR(out_skb);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = data->hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = SADB_DUMP;
+ out_hdr->sadb_msg_satype = pfkey_proto2satype(x->id.proto);
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_reserved = 0;
+ out_hdr->sadb_msg_seq = count;
+ out_hdr->sadb_msg_pid = data->hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ONE, data->sk);
+ return 0;
+}
+
+static int pfkey_dump(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ u8 proto;
+ struct pfkey_dump_data data = { .skb = skb, .hdr = hdr, .sk = sk };
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+ if (proto == 0)
+ return -EINVAL;
+
+ return xfrm_state_walk(proto, dump_sa, &data);
+}
+
+static int pfkey_promisc(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct pfkey_opt *pfk = pfkey_sk(sk);
+ int satype = hdr->sadb_msg_satype;
+
+ if (hdr->sadb_msg_len == (sizeof(*hdr) / sizeof(uint64_t))) {
+ /* XXX we mangle packet... */
+ hdr->sadb_msg_errno = 0;
+ if (satype != 0 && satype != 1)
+ return -EINVAL;
+ pfk->promisc = satype;
+ }
+ pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL, BROADCAST_ALL, NULL);
+ return 0;
+}
+
+static int check_reqid(struct xfrm_policy *xp, int dir, int count, void *ptr)
+{
+ int i;
+ u16 reqid = *(u16*)ptr;
+
+ for (i=0; i<xp->xfrm_nr; i++) {
+ if (xp->xfrm_vec[i].reqid == reqid)
+ return -EEXIST;
+ }
+ return 0;
+}
+
+static u16 gen_reqid(void)
+{
+ u16 start;
+ static u16 reqid = IPSEC_MANUAL_REQID_MAX;
+
+ start = reqid;
+ do {
+ ++reqid;
+ if (reqid == 0)
+ reqid = IPSEC_MANUAL_REQID_MAX+1;
+ if (xfrm_policy_walk(check_reqid, (void*)&reqid) != -EEXIST)
+ return reqid;
+ } while (reqid != start);
+ return 0;
+}
+
+static int
+parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
+{
+ struct xfrm_tmpl *t = xp->xfrm_vec + xp->xfrm_nr;
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+
+ if (xp->xfrm_nr >= XFRM_MAX_DEPTH)
+ return -ELOOP;
+
+ if (rq->sadb_x_ipsecrequest_mode == 0)
+ return -EINVAL;
+
+ t->id.proto = rq->sadb_x_ipsecrequest_proto; /* XXX check proto */
+ t->mode = rq->sadb_x_ipsecrequest_mode-1;
+ if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_USE)
+ t->optional = 1;
+ else if (rq->sadb_x_ipsecrequest_level == IPSEC_LEVEL_UNIQUE) {
+ t->reqid = rq->sadb_x_ipsecrequest_reqid;
+ if (t->reqid > IPSEC_MANUAL_REQID_MAX)
+ t->reqid = 0;
+ if (!t->reqid && !(t->reqid = gen_reqid()))
+ return -ENOBUFS;
+ }
+
+ /* addresses present only in tunnel mode */
+ if (t->mode) {
+ switch (xp->family) {
+ case AF_INET:
+ sin = (void*)(rq+1);
+ if (sin->sin_family != AF_INET)
+ return -EINVAL;
+ t->saddr.a4 = sin->sin_addr.s_addr;
+ sin++;
+ if (sin->sin_family != AF_INET)
+ return -EINVAL;
+ t->id.daddr.a4 = sin->sin_addr.s_addr;
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ sin6 = (void *)(rq+1);
+ if (sin6->sin6_family != AF_INET6)
+ return -EINVAL;
+ memcpy(t->saddr.a6, &sin6->sin6_addr, sizeof(struct in6_addr));
+ sin6++;
+ if (sin6->sin6_family != AF_INET6)
+ return -EINVAL;
+ memcpy(t->id.daddr.a6, &sin6->sin6_addr, sizeof(struct in6_addr));
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+ }
+ /* No way to set this via kame pfkey */
+ t->aalgos = t->ealgos = t->calgos = ~0;
+ xp->xfrm_nr++;
+ return 0;
+}
+
+static int
+parse_ipsecrequests(struct xfrm_policy *xp, struct sadb_x_policy *pol)
+{
+ int err;
+ int len = pol->sadb_x_policy_len*8 - sizeof(struct sadb_x_policy);
+ struct sadb_x_ipsecrequest *rq = (void*)(pol+1);
+
+ while (len >= sizeof(struct sadb_x_ipsecrequest)) {
+ if ((err = parse_ipsecrequest(xp, rq)) < 0)
+ return err;
+ len -= rq->sadb_x_ipsecrequest_len;
+ rq = (void*)((u8*)rq + rq->sadb_x_ipsecrequest_len);
+ }
+ return 0;
+}
+
+static int pfkey_xfrm_policy2msg_size(struct xfrm_policy *xp)
+{
+ int sockaddr_size = pfkey_sockaddr_size(xp->family);
+ int socklen = (xp->family == AF_INET ?
+ sizeof(struct sockaddr_in) :
+ sizeof(struct sockaddr_in6));
+
+ return sizeof(struct sadb_msg) +
+ (sizeof(struct sadb_lifetime) * 3) +
+ (sizeof(struct sadb_address) * 2) +
+ (sockaddr_size * 2) +
+ sizeof(struct sadb_x_policy) +
+ (xp->xfrm_nr * (sizeof(struct sadb_x_ipsecrequest) +
+ (socklen * 2)));
+}
+
+static struct sk_buff * pfkey_xfrm_policy2msg_prep(struct xfrm_policy *xp)
+{
+ struct sk_buff *skb;
+ int size;
+
+ size = pfkey_xfrm_policy2msg_size(xp);
+
+ skb = alloc_skb(size + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return ERR_PTR(-ENOBUFS);
+
+ return skb;
+}
+
+static void pfkey_xfrm_policy2msg(struct sk_buff *skb, struct xfrm_policy *xp, int dir)
+{
+ struct sadb_msg *hdr;
+ struct sadb_address *addr;
+ struct sadb_lifetime *lifetime;
+ struct sadb_x_policy *pol;
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+ int i;
+ int size;
+ int sockaddr_size = pfkey_sockaddr_size(xp->family);
+ int socklen = (xp->family == AF_INET ?
+ sizeof(struct sockaddr_in) :
+ sizeof(struct sockaddr_in6));
+
+ size = pfkey_xfrm_policy2msg_size(xp);
+
+ /* caller should fill header later */
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
+ memset(hdr, 0, size); /* XXX do we need this ? */
+
+ /* src address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_SRC;
+ addr->sadb_address_proto = pfkey_proto_from_xfrm(xp->selector.proto);
+ addr->sadb_address_prefixlen = xp->selector.prefixlen_s;
+ addr->sadb_address_reserved = 0;
+ /* src address */
+ if (xp->family == AF_INET) {
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = xp->selector.saddr.a4;
+ sin->sin_port = xp->selector.sport;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (xp->family == AF_INET6) {
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = xp->selector.sport;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, xp->selector.saddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* dst address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_DST;
+ addr->sadb_address_proto = pfkey_proto_from_xfrm(xp->selector.proto);
+ addr->sadb_address_prefixlen = xp->selector.prefixlen_d;
+ addr->sadb_address_reserved = 0;
+ if (xp->family == AF_INET) {
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = xp->selector.daddr.a4;
+ sin->sin_port = xp->selector.dport;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (xp->family == AF_INET6) {
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = xp->selector.dport;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, xp->selector.daddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* hard time */
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_HARD;
+ lifetime->sadb_lifetime_allocations = _X2KEY(xp->lft.hard_packet_limit);
+ lifetime->sadb_lifetime_bytes = _X2KEY(xp->lft.hard_byte_limit);
+ lifetime->sadb_lifetime_addtime = xp->lft.hard_add_expires_seconds;
+ lifetime->sadb_lifetime_usetime = xp->lft.hard_use_expires_seconds;
+ /* soft time */
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_SOFT;
+ lifetime->sadb_lifetime_allocations = _X2KEY(xp->lft.soft_packet_limit);
+ lifetime->sadb_lifetime_bytes = _X2KEY(xp->lft.soft_byte_limit);
+ lifetime->sadb_lifetime_addtime = xp->lft.soft_add_expires_seconds;
+ lifetime->sadb_lifetime_usetime = xp->lft.soft_use_expires_seconds;
+ /* current time */
+ lifetime = (struct sadb_lifetime *) skb_put(skb,
+ sizeof(struct sadb_lifetime));
+ lifetime->sadb_lifetime_len =
+ sizeof(struct sadb_lifetime)/sizeof(uint64_t);
+ lifetime->sadb_lifetime_exttype = SADB_EXT_LIFETIME_CURRENT;
+ lifetime->sadb_lifetime_allocations = xp->curlft.packets;
+ lifetime->sadb_lifetime_bytes = xp->curlft.bytes;
+ lifetime->sadb_lifetime_addtime = xp->curlft.add_time;
+ lifetime->sadb_lifetime_usetime = xp->curlft.use_time;
+
+ pol = (struct sadb_x_policy *) skb_put(skb, sizeof(struct sadb_x_policy));
+ pol->sadb_x_policy_len = sizeof(struct sadb_x_policy)/sizeof(uint64_t);
+ pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
+ pol->sadb_x_policy_type = IPSEC_POLICY_DISCARD;
+ if (xp->action == XFRM_POLICY_ALLOW) {
+ if (xp->xfrm_nr)
+ pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
+ else
+ pol->sadb_x_policy_type = IPSEC_POLICY_NONE;
+ }
+ pol->sadb_x_policy_dir = dir+1;
+ pol->sadb_x_policy_id = xp->index;
+
+ for (i=0; i<xp->xfrm_nr; i++) {
+ struct sadb_x_ipsecrequest *rq;
+ struct xfrm_tmpl *t = xp->xfrm_vec + i;
+ int req_size;
+
+ req_size = sizeof(struct sadb_x_ipsecrequest);
+ if (t->mode)
+ req_size += 2*socklen;
+ else
+ size -= 2*socklen;
+ rq = (void*)skb_put(skb, req_size);
+ pol->sadb_x_policy_len += req_size/8;
+ rq->sadb_x_ipsecrequest_len = req_size;
+ rq->sadb_x_ipsecrequest_proto = t->id.proto;
+ rq->sadb_x_ipsecrequest_mode = t->mode+1;
+ rq->sadb_x_ipsecrequest_level = IPSEC_LEVEL_REQUIRE;
+ if (t->reqid)
+ rq->sadb_x_ipsecrequest_level = IPSEC_LEVEL_UNIQUE;
+ if (t->optional)
+ rq->sadb_x_ipsecrequest_level = IPSEC_LEVEL_USE;
+ rq->sadb_x_ipsecrequest_reqid = t->reqid;
+ if (t->mode) {
+ switch (xp->family) {
+ case AF_INET:
+ sin = (void*)(rq+1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = t->saddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ sin++;
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = t->id.daddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ sin6 = (void*)(rq+1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, t->saddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+
+ sin6++;
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, t->id.daddr.a6,
+ sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ break;
+#endif
+ default:
+ break;
+ }
+ }
+ }
+ hdr->sadb_msg_len = size / sizeof(uint64_t);
+ hdr->sadb_msg_reserved = atomic_read(&xp->refcnt);
+}
+
+static int pfkey_spdadd(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ int err;
+ struct sadb_lifetime *lifetime;
+ struct sadb_address *sa;
+ struct sadb_x_policy *pol;
+ struct xfrm_policy *xp;
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+
+ if (!present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]) ||
+ !ext_hdrs[SADB_X_EXT_POLICY-1])
+ return -EINVAL;
+
+ pol = ext_hdrs[SADB_X_EXT_POLICY-1];
+ if (pol->sadb_x_policy_type > IPSEC_POLICY_IPSEC)
+ return -EINVAL;
+ if (!pol->sadb_x_policy_dir || pol->sadb_x_policy_dir >= IPSEC_DIR_MAX)
+ return -EINVAL;
+
+ xp = xfrm_policy_alloc(GFP_KERNEL);
+ if (xp == NULL)
+ return -ENOBUFS;
+
+ xp->action = (pol->sadb_x_policy_type == IPSEC_POLICY_DISCARD ?
+ XFRM_POLICY_BLOCK : XFRM_POLICY_ALLOW);
+
+ sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ xp->family = pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.saddr);
+ if (!xp->family) {
+ err = -EINVAL;
+ goto out;
+ }
+ xp->selector.prefixlen_s = sa->sadb_address_prefixlen;
+ xp->selector.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+ xp->selector.sport = ((struct sockaddr_in *)(sa+1))->sin_port;
+ if (xp->selector.sport)
+ xp->selector.sport_mask = ~0;
+
+ sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1],
+ pfkey_sadb_addr2xfrm_addr(sa, &xp->selector.daddr);
+ xp->selector.prefixlen_d = sa->sadb_address_prefixlen;
+
+ /* Amusing, we set this twice. KAME apps appear to set same value
+ * in both addresses.
+ */
+ xp->selector.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+
+ xp->selector.dport = ((struct sockaddr_in *)(sa+1))->sin_port;
+ if (xp->selector.dport)
+ xp->selector.dport_mask = ~0;
+
+ xp->lft.soft_byte_limit = XFRM_INF;
+ xp->lft.hard_byte_limit = XFRM_INF;
+ xp->lft.soft_packet_limit = XFRM_INF;
+ xp->lft.hard_packet_limit = XFRM_INF;
+ if ((lifetime = ext_hdrs[SADB_EXT_LIFETIME_HARD-1]) != NULL) {
+ xp->lft.hard_packet_limit = _KEY2X(lifetime->sadb_lifetime_allocations);
+ xp->lft.hard_byte_limit = _KEY2X(lifetime->sadb_lifetime_bytes);
+ xp->lft.hard_add_expires_seconds = lifetime->sadb_lifetime_addtime;
+ xp->lft.hard_use_expires_seconds = lifetime->sadb_lifetime_usetime;
+ }
+ if ((lifetime = ext_hdrs[SADB_EXT_LIFETIME_SOFT-1]) != NULL) {
+ xp->lft.soft_packet_limit = _KEY2X(lifetime->sadb_lifetime_allocations);
+ xp->lft.soft_byte_limit = _KEY2X(lifetime->sadb_lifetime_bytes);
+ xp->lft.soft_add_expires_seconds = lifetime->sadb_lifetime_addtime;
+ xp->lft.soft_use_expires_seconds = lifetime->sadb_lifetime_usetime;
+ }
+ xp->xfrm_nr = 0;
+ if (pol->sadb_x_policy_type == IPSEC_POLICY_IPSEC &&
+ (err = parse_ipsecrequests(xp, pol)) < 0)
+ goto out;
+
+ out_skb = pfkey_xfrm_policy2msg_prep(xp);
+ if (IS_ERR(out_skb)) {
+ err = PTR_ERR(out_skb);
+ goto out;
+ }
+
+ err = xfrm_policy_insert(pol->sadb_x_policy_dir-1, xp,
+ hdr->sadb_msg_type != SADB_X_SPDUPDATE);
+ if (err) {
+ kfree_skb(out_skb);
+ goto out;
+ }
+
+ pfkey_xfrm_policy2msg(out_skb, xp, pol->sadb_x_policy_dir-1);
+
+ xfrm_pol_put(xp);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = hdr->sadb_msg_type;
+ out_hdr->sadb_msg_satype = 0;
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, sk);
+ return 0;
+
+out:
+ kfree(xp);
+ return err;
+}
+
+static int pfkey_spddelete(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ int err;
+ struct sadb_address *sa;
+ struct sadb_x_policy *pol;
+ struct xfrm_policy *xp;
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+ struct xfrm_selector sel;
+
+ if (!present_and_same_family(ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ ext_hdrs[SADB_EXT_ADDRESS_DST-1]) ||
+ !ext_hdrs[SADB_X_EXT_POLICY-1])
+ return -EINVAL;
+
+ pol = ext_hdrs[SADB_X_EXT_POLICY-1];
+ if (!pol->sadb_x_policy_dir || pol->sadb_x_policy_dir >= IPSEC_DIR_MAX)
+ return -EINVAL;
+
+ memset(&sel, 0, sizeof(sel));
+
+ sa = ext_hdrs[SADB_EXT_ADDRESS_SRC-1],
+ pfkey_sadb_addr2xfrm_addr(sa, &sel.saddr);
+ sel.prefixlen_s = sa->sadb_address_prefixlen;
+ sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+ sel.sport = ((struct sockaddr_in *)(sa+1))->sin_port;
+ if (sel.sport)
+ sel.sport_mask = ~0;
+
+ sa = ext_hdrs[SADB_EXT_ADDRESS_DST-1],
+ pfkey_sadb_addr2xfrm_addr(sa, &sel.daddr);
+ sel.prefixlen_d = sa->sadb_address_prefixlen;
+ sel.proto = pfkey_proto_to_xfrm(sa->sadb_address_proto);
+ sel.dport = ((struct sockaddr_in *)(sa+1))->sin_port;
+ if (sel.dport)
+ sel.dport_mask = ~0;
+
+ xp = xfrm_policy_delete(pol->sadb_x_policy_dir-1, &sel);
+ if (xp == NULL)
+ return -ENOENT;
+
+ err = 0;
+
+ out_skb = pfkey_xfrm_policy2msg_prep(xp);
+ if (IS_ERR(out_skb)) {
+ err = PTR_ERR(out_skb);
+ goto out;
+ }
+ pfkey_xfrm_policy2msg(out_skb, xp, pol->sadb_x_policy_dir-1);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = SADB_X_SPDDELETE;
+ out_hdr->sadb_msg_satype = 0;
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, sk);
+ err = 0;
+
+out:
+ if (xp) {
+ xfrm_policy_kill(xp);
+ xfrm_pol_put(xp);
+ }
+ return err;
+}
+
+static int pfkey_spdget(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ int err;
+ struct sadb_x_policy *pol;
+ struct xfrm_policy *xp;
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+
+ if ((pol = ext_hdrs[SADB_X_EXT_POLICY-1]) == NULL)
+ return -EINVAL;
+
+ xp = xfrm_policy_byid(0, pol->sadb_x_policy_id,
+ hdr->sadb_msg_type == SADB_X_SPDDELETE2);
+ if (xp == NULL)
+ return -ENOENT;
+
+ err = 0;
+
+ out_skb = pfkey_xfrm_policy2msg_prep(xp);
+ if (IS_ERR(out_skb)) {
+ err = PTR_ERR(out_skb);
+ goto out;
+ }
+ pfkey_xfrm_policy2msg(out_skb, xp, pol->sadb_x_policy_dir-1);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = hdr->sadb_msg_type;
+ out_hdr->sadb_msg_satype = 0;
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_seq = hdr->sadb_msg_seq;
+ out_hdr->sadb_msg_pid = hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ALL, sk);
+ err = 0;
+
+out:
+ if (xp) {
+ if (hdr->sadb_msg_type == SADB_X_SPDDELETE2)
+ xfrm_policy_kill(xp);
+ xfrm_pol_put(xp);
+ }
+ return err;
+}
+
+static int dump_sp(struct xfrm_policy *xp, int dir, int count, void *ptr)
+{
+ struct pfkey_dump_data *data = ptr;
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+
+ out_skb = pfkey_xfrm_policy2msg_prep(xp);
+ if (IS_ERR(out_skb))
+ return PTR_ERR(out_skb);
+
+ pfkey_xfrm_policy2msg(out_skb, xp, dir);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = data->hdr->sadb_msg_version;
+ out_hdr->sadb_msg_type = SADB_X_SPDDUMP;
+ out_hdr->sadb_msg_satype = SADB_SATYPE_UNSPEC;
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_seq = count;
+ out_hdr->sadb_msg_pid = data->hdr->sadb_msg_pid;
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_ONE, data->sk);
+ return 0;
+}
+
+static int pfkey_spddump(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct pfkey_dump_data data = { .skb = skb, .hdr = hdr, .sk = sk };
+
+ return xfrm_policy_walk(dump_sp, &data);
+}
+
+static int pfkey_spdflush(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr, void **ext_hdrs)
+{
+ struct sk_buff *skb_out;
+ struct sadb_msg *hdr_out;
+
+ skb_out = alloc_skb(sizeof(struct sadb_msg) + 16, GFP_KERNEL);
+ if (!skb_out)
+ return -ENOBUFS;
+
+ xfrm_policy_flush();
+
+ hdr_out = (struct sadb_msg *) skb_put(skb_out, sizeof(struct sadb_msg));
+ pfkey_hdr_dup(hdr_out, hdr);
+ hdr_out->sadb_msg_errno = (uint8_t) 0;
+ hdr_out->sadb_msg_len = (sizeof(struct sadb_msg) / sizeof(uint64_t));
+ pfkey_broadcast(skb_out, GFP_KERNEL, BROADCAST_ALL, NULL);
+
+ return 0;
+}
+
+typedef int (*pfkey_handler)(struct sock *sk, struct sk_buff *skb,
+ struct sadb_msg *hdr, void **ext_hdrs);
+static pfkey_handler pfkey_funcs[SADB_MAX + 1] = {
+ [SADB_RESERVED] = pfkey_reserved,
+ [SADB_GETSPI] = pfkey_getspi,
+ [SADB_UPDATE] = pfkey_add,
+ [SADB_ADD] = pfkey_add,
+ [SADB_DELETE] = pfkey_delete,
+ [SADB_GET] = pfkey_get,
+ [SADB_ACQUIRE] = pfkey_acquire,
+ [SADB_REGISTER] = pfkey_register,
+ [SADB_EXPIRE] = NULL,
+ [SADB_FLUSH] = pfkey_flush,
+ [SADB_DUMP] = pfkey_dump,
+ [SADB_X_PROMISC] = pfkey_promisc,
+ [SADB_X_PCHANGE] = NULL,
+ [SADB_X_SPDUPDATE] = pfkey_spdadd,
+ [SADB_X_SPDADD] = pfkey_spdadd,
+ [SADB_X_SPDDELETE] = pfkey_spddelete,
+ [SADB_X_SPDGET] = pfkey_spdget,
+ [SADB_X_SPDACQUIRE] = NULL,
+ [SADB_X_SPDDUMP] = pfkey_spddump,
+ [SADB_X_SPDFLUSH] = pfkey_spdflush,
+ [SADB_X_SPDSETIDX] = pfkey_spdadd,
+ [SADB_X_SPDDELETE2] = pfkey_spdget,
+};
+
+static int pfkey_process(struct sock *sk, struct sk_buff *skb, struct sadb_msg *hdr)
+{
+ void *ext_hdrs[SADB_EXT_MAX];
+ int err;
+
+ pfkey_broadcast(skb_clone(skb, GFP_KERNEL), GFP_KERNEL,
+ BROADCAST_PROMISC_ONLY, NULL);
+
+ memset(ext_hdrs, 0, sizeof(ext_hdrs));
+ err = parse_exthdrs(skb, hdr, ext_hdrs);
+ if (!err) {
+ err = -EOPNOTSUPP;
+ if (pfkey_funcs[hdr->sadb_msg_type])
+ err = pfkey_funcs[hdr->sadb_msg_type](sk, skb, hdr, ext_hdrs);
+ }
+ return err;
+}
+
+static struct sadb_msg *pfkey_get_base_msg(struct sk_buff *skb, int *errp)
+{
+ struct sadb_msg *hdr = NULL;
+
+ if (skb->len < sizeof(*hdr)) {
+ *errp = -EMSGSIZE;
+ } else {
+ hdr = (struct sadb_msg *) skb->data;
+ if (hdr->sadb_msg_version != PF_KEY_V2 ||
+ hdr->sadb_msg_reserved != 0 ||
+ (hdr->sadb_msg_type <= SADB_RESERVED ||
+ hdr->sadb_msg_type > SADB_MAX)) {
+ hdr = NULL;
+ *errp = -EINVAL;
+ } else if (hdr->sadb_msg_len != (skb->len /
+ sizeof(uint64_t)) ||
+ hdr->sadb_msg_len < (sizeof(struct sadb_msg) /
+ sizeof(uint64_t))) {
+ hdr = NULL;
+ *errp = -EMSGSIZE;
+ } else {
+ *errp = 0;
+ }
+ }
+ return hdr;
+}
+
+static inline int aalg_tmpl_set(struct xfrm_tmpl *t, struct xfrm_algo_desc *d)
+{
+ return t->aalgos & (1 << d->desc.sadb_alg_id);
+}
+
+static inline int ealg_tmpl_set(struct xfrm_tmpl *t, struct xfrm_algo_desc *d)
+{
+ return t->ealgos & (1 << d->desc.sadb_alg_id);
+}
+
+static int count_ah_combs(struct xfrm_tmpl *t)
+{
+ int i, sz = 0;
+
+ for (i = 0; ; i++) {
+ struct xfrm_algo_desc *aalg = xfrm_aalg_get_byidx(i);
+ if (!aalg)
+ break;
+ if (aalg_tmpl_set(t, aalg) && aalg->available)
+ sz += sizeof(struct sadb_comb);
+ }
+ return sz + sizeof(struct sadb_prop);
+}
+
+static int count_esp_combs(struct xfrm_tmpl *t)
+{
+ int i, k, sz = 0;
+
+ for (i = 0; ; i++) {
+ struct xfrm_algo_desc *ealg = xfrm_ealg_get_byidx(i);
+ if (!ealg)
+ break;
+
+ if (!(ealg_tmpl_set(t, ealg) && ealg->available))
+ continue;
+
+ for (k = 1; ; k++) {
+ struct xfrm_algo_desc *aalg = xfrm_aalg_get_byidx(k);
+ if (!aalg)
+ break;
+
+ if (aalg_tmpl_set(t, aalg) && aalg->available)
+ sz += sizeof(struct sadb_comb);
+ }
+ }
+ return sz + sizeof(struct sadb_prop);
+}
+
+static void dump_ah_combs(struct sk_buff *skb, struct xfrm_tmpl *t)
+{
+ struct sadb_prop *p;
+ int i;
+
+ p = (struct sadb_prop*)skb_put(skb, sizeof(struct sadb_prop));
+ p->sadb_prop_len = sizeof(struct sadb_prop)/8;
+ p->sadb_prop_exttype = SADB_EXT_PROPOSAL;
+ p->sadb_prop_replay = 32;
+
+ for (i = 0; ; i++) {
+ struct xfrm_algo_desc *aalg = xfrm_aalg_get_byidx(i);
+ if (!aalg)
+ break;
+
+ if (aalg_tmpl_set(t, aalg) && aalg->available) {
+ struct sadb_comb *c;
+ c = (struct sadb_comb*)skb_put(skb, sizeof(struct sadb_comb));
+ memset(c, 0, sizeof(*c));
+ p->sadb_prop_len += sizeof(struct sadb_comb)/8;
+ c->sadb_comb_auth = aalg->desc.sadb_alg_id;
+ c->sadb_comb_auth_minbits = aalg->desc.sadb_alg_minbits;
+ c->sadb_comb_auth_maxbits = aalg->desc.sadb_alg_maxbits;
+ c->sadb_comb_hard_addtime = 24*60*60;
+ c->sadb_comb_soft_addtime = 20*60*60;
+ c->sadb_comb_hard_usetime = 8*60*60;
+ c->sadb_comb_soft_usetime = 7*60*60;
+ }
+ }
+}
+
+static void dump_esp_combs(struct sk_buff *skb, struct xfrm_tmpl *t)
+{
+ struct sadb_prop *p;
+ int i, k;
+
+ p = (struct sadb_prop*)skb_put(skb, sizeof(struct sadb_prop));
+ p->sadb_prop_len = sizeof(struct sadb_prop)/8;
+ p->sadb_prop_exttype = SADB_EXT_PROPOSAL;
+ p->sadb_prop_replay = 32;
+
+ for (i=0; ; i++) {
+ struct xfrm_algo_desc *ealg = xfrm_ealg_get_byidx(i);
+ if (!ealg)
+ break;
+
+ if (!(ealg_tmpl_set(t, ealg) && ealg->available))
+ continue;
+
+ for (k = 1; ; k++) {
+ struct sadb_comb *c;
+ struct xfrm_algo_desc *aalg = xfrm_aalg_get_byidx(k);
+ if (!aalg)
+ break;
+ if (!(aalg_tmpl_set(t, aalg) && aalg->available))
+ continue;
+ c = (struct sadb_comb*)skb_put(skb, sizeof(struct sadb_comb));
+ memset(c, 0, sizeof(*c));
+ p->sadb_prop_len += sizeof(struct sadb_comb)/8;
+ c->sadb_comb_auth = aalg->desc.sadb_alg_id;
+ c->sadb_comb_auth_minbits = aalg->desc.sadb_alg_minbits;
+ c->sadb_comb_auth_maxbits = aalg->desc.sadb_alg_maxbits;
+ c->sadb_comb_encrypt = ealg->desc.sadb_alg_id;
+ c->sadb_comb_encrypt_minbits = ealg->desc.sadb_alg_minbits;
+ c->sadb_comb_encrypt_maxbits = ealg->desc.sadb_alg_maxbits;
+ c->sadb_comb_hard_addtime = 24*60*60;
+ c->sadb_comb_soft_addtime = 20*60*60;
+ c->sadb_comb_hard_usetime = 8*60*60;
+ c->sadb_comb_soft_usetime = 7*60*60;
+ }
+ }
+}
+
+static int pfkey_send_notify(struct xfrm_state *x, int hard)
+{
+ struct sk_buff *out_skb;
+ struct sadb_msg *out_hdr;
+ int hsc = (hard ? 2 : 1);
+
+ out_skb = pfkey_xfrm_state2msg(x, 0, hsc);
+ if (IS_ERR(out_skb))
+ return PTR_ERR(out_skb);
+
+ out_hdr = (struct sadb_msg *) out_skb->data;
+ out_hdr->sadb_msg_version = PF_KEY_V2;
+ out_hdr->sadb_msg_type = SADB_EXPIRE;
+ out_hdr->sadb_msg_satype = pfkey_proto2satype(x->id.proto);
+ out_hdr->sadb_msg_errno = 0;
+ out_hdr->sadb_msg_reserved = 0;
+ out_hdr->sadb_msg_seq = 0;
+ out_hdr->sadb_msg_pid = 0;
+
+ pfkey_broadcast(out_skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL);
+ return 0;
+}
+
+static u32 get_acqseq(void)
+{
+ u32 res;
+ static u32 acqseq;
+ static spinlock_t acqseq_lock = SPIN_LOCK_UNLOCKED;
+
+ spin_lock_bh(&acqseq_lock);
+ res = (++acqseq ? : ++acqseq);
+ spin_unlock_bh(&acqseq_lock);
+ return res;
+}
+
+static int pfkey_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *t, struct xfrm_policy *xp, int dir)
+{
+ struct sk_buff *skb;
+ struct sadb_msg *hdr;
+ struct sadb_address *addr;
+ struct sadb_x_policy *pol;
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+ int sockaddr_size;
+ int size;
+
+ sockaddr_size = pfkey_sockaddr_size(x->props.family);
+ if (!sockaddr_size)
+ return -EINVAL;
+
+ size = sizeof(struct sadb_msg) +
+ (sizeof(struct sadb_address) * 2) +
+ (sockaddr_size * 2) +
+ sizeof(struct sadb_x_policy);
+
+ if (x->id.proto == IPPROTO_AH)
+ size += count_ah_combs(t);
+ else if (x->id.proto == IPPROTO_ESP)
+ size += count_esp_combs(t);
+
+ skb = alloc_skb(size + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return -ENOMEM;
+
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
+ hdr->sadb_msg_version = PF_KEY_V2;
+ hdr->sadb_msg_type = SADB_ACQUIRE;
+ hdr->sadb_msg_satype = pfkey_proto2satype(x->id.proto);
+ hdr->sadb_msg_len = size / sizeof(uint64_t);
+ hdr->sadb_msg_errno = 0;
+ hdr->sadb_msg_reserved = 0;
+ hdr->sadb_msg_seq = x->km.seq = get_acqseq();
+ hdr->sadb_msg_pid = 0;
+
+ /* src address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_SRC;
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ addr->sadb_address_prefixlen = 32;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->props.saddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr,
+ x->props.saddr.a6, sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* dst address */
+ addr = (struct sadb_address*) skb_put(skb,
+ sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_DST;
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ addr->sadb_address_prefixlen = 32;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->id.daddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr,
+ x->id.daddr.a6, sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ pol = (struct sadb_x_policy *) skb_put(skb, sizeof(struct sadb_x_policy));
+ pol->sadb_x_policy_len = sizeof(struct sadb_x_policy)/sizeof(uint64_t);
+ pol->sadb_x_policy_exttype = SADB_X_EXT_POLICY;
+ pol->sadb_x_policy_type = IPSEC_POLICY_IPSEC;
+ pol->sadb_x_policy_dir = dir+1;
+ pol->sadb_x_policy_id = xp->index;
+
+ /* Set sadb_comb's. */
+ if (x->id.proto == IPPROTO_AH)
+ dump_ah_combs(skb, t);
+ else if (x->id.proto == IPPROTO_ESP)
+ dump_esp_combs(skb, t);
+
+ return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL);
+}
+
+static struct xfrm_policy *pfkey_compile_policy(u16 family, int opt,
+ u8 *data, int len, int *dir)
+{
+ struct xfrm_policy *xp;
+ struct sadb_x_policy *pol = (struct sadb_x_policy*)data;
+
+ switch (family) {
+ case AF_INET:
+ if (opt != IP_IPSEC_POLICY) {
+ *dir = -EOPNOTSUPP;
+ return NULL;
+ }
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ if (opt != IPV6_IPSEC_POLICY) {
+ *dir = -EOPNOTSUPP;
+ return NULL;
+ }
+ break;
+#endif
+ default:
+ *dir = -EINVAL;
+ return NULL;
+ }
+
+ *dir = -EINVAL;
+
+ if (len < sizeof(struct sadb_x_policy) ||
+ pol->sadb_x_policy_len*8 > len ||
+ pol->sadb_x_policy_type > IPSEC_POLICY_BYPASS ||
+ (!pol->sadb_x_policy_dir || pol->sadb_x_policy_dir > IPSEC_DIR_OUTBOUND))
+ return NULL;
+
+ xp = xfrm_policy_alloc(GFP_ATOMIC);
+ if (xp == NULL) {
+ *dir = -ENOBUFS;
+ return NULL;
+ }
+
+ xp->action = (pol->sadb_x_policy_type == IPSEC_POLICY_DISCARD ?
+ XFRM_POLICY_BLOCK : XFRM_POLICY_ALLOW);
+
+ xp->lft.soft_byte_limit = XFRM_INF;
+ xp->lft.hard_byte_limit = XFRM_INF;
+ xp->lft.soft_packet_limit = XFRM_INF;
+ xp->lft.hard_packet_limit = XFRM_INF;
+ xp->family = family;
+
+ xp->xfrm_nr = 0;
+ if (pol->sadb_x_policy_type == IPSEC_POLICY_IPSEC &&
+ (*dir = parse_ipsecrequests(xp, pol)) < 0)
+ goto out;
+
+ *dir = pol->sadb_x_policy_dir-1;
+ return xp;
+
+out:
+ kfree(xp);
+ return NULL;
+}
+
+static int pfkey_send_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, u16 sport)
+{
+ struct sk_buff *skb;
+ struct sadb_msg *hdr;
+ struct sadb_sa *sa;
+ struct sadb_address *addr;
+ struct sadb_x_nat_t_port *n_port;
+ struct sockaddr_in *sin;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ struct sockaddr_in6 *sin6;
+#endif
+ int sockaddr_size;
+ int size;
+ __u8 satype = (x->id.proto == IPPROTO_ESP ? SADB_SATYPE_ESP : 0);
+ struct xfrm_encap_tmpl *natt = NULL;
+
+ sockaddr_size = pfkey_sockaddr_size(x->props.family);
+ if (!sockaddr_size)
+ return -EINVAL;
+
+ if (!satype)
+ return -EINVAL;
+
+ if (!x->encap)
+ return -EINVAL;
+
+ natt = x->encap;
+
+ /* Build an SADB_X_NAT_T_NEW_MAPPING message:
+ *
+ * HDR | SA | ADDRESS_SRC (old addr) | NAT_T_SPORT (old port) |
+ * ADDRESS_DST (new addr) | NAT_T_DPORT (new port)
+ */
+
+ size = sizeof(struct sadb_msg) +
+ sizeof(struct sadb_sa) +
+ (sizeof(struct sadb_address) * 2) +
+ (sockaddr_size * 2) +
+ (sizeof(struct sadb_x_nat_t_port) * 2);
+
+ skb = alloc_skb(size + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return -ENOMEM;
+
+ hdr = (struct sadb_msg *) skb_put(skb, sizeof(struct sadb_msg));
+ hdr->sadb_msg_version = PF_KEY_V2;
+ hdr->sadb_msg_type = SADB_X_NAT_T_NEW_MAPPING;
+ hdr->sadb_msg_satype = satype;
+ hdr->sadb_msg_len = size / sizeof(uint64_t);
+ hdr->sadb_msg_errno = 0;
+ hdr->sadb_msg_reserved = 0;
+ hdr->sadb_msg_seq = x->km.seq = get_acqseq();
+ hdr->sadb_msg_pid = 0;
+
+ /* SA */
+ sa = (struct sadb_sa *) skb_put(skb, sizeof(struct sadb_sa));
+ sa->sadb_sa_len = sizeof(struct sadb_sa)/sizeof(uint64_t);
+ sa->sadb_sa_exttype = SADB_EXT_SA;
+ sa->sadb_sa_spi = x->id.spi;
+ sa->sadb_sa_replay = 0;
+ sa->sadb_sa_state = 0;
+ sa->sadb_sa_auth = 0;
+ sa->sadb_sa_encrypt = 0;
+ sa->sadb_sa_flags = 0;
+
+ /* ADDRESS_SRC (old addr) */
+ addr = (struct sadb_address*)
+ skb_put(skb, sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_SRC;
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ addr->sadb_address_prefixlen = 32;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = x->props.saddr.a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr,
+ x->props.saddr.a6, sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* NAT_T_SPORT (old port) */
+ n_port = (struct sadb_x_nat_t_port*) skb_put(skb, sizeof (*n_port));
+ n_port->sadb_x_nat_t_port_len = sizeof(*n_port)/sizeof(uint64_t);
+ n_port->sadb_x_nat_t_port_exttype = SADB_X_EXT_NAT_T_SPORT;
+ n_port->sadb_x_nat_t_port_port = natt->encap_sport;
+ n_port->sadb_x_nat_t_port_reserved = 0;
+
+ /* ADDRESS_DST (new addr) */
+ addr = (struct sadb_address*)
+ skb_put(skb, sizeof(struct sadb_address)+sockaddr_size);
+ addr->sadb_address_len =
+ (sizeof(struct sadb_address)+sockaddr_size)/
+ sizeof(uint64_t);
+ addr->sadb_address_exttype = SADB_EXT_ADDRESS_DST;
+ addr->sadb_address_proto = 0;
+ addr->sadb_address_reserved = 0;
+ if (x->props.family == AF_INET) {
+ addr->sadb_address_prefixlen = 32;
+
+ sin = (struct sockaddr_in *) (addr + 1);
+ sin->sin_family = AF_INET;
+ sin->sin_addr.s_addr = ipaddr->a4;
+ sin->sin_port = 0;
+ memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
+ }
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ else if (x->props.family == AF_INET6) {
+ addr->sadb_address_prefixlen = 128;
+
+ sin6 = (struct sockaddr_in6 *) (addr + 1);
+ sin6->sin6_family = AF_INET6;
+ sin6->sin6_port = 0;
+ sin6->sin6_flowinfo = 0;
+ memcpy(&sin6->sin6_addr, &ipaddr->a6, sizeof(struct in6_addr));
+ sin6->sin6_scope_id = 0;
+ }
+#endif
+ else
+ BUG();
+
+ /* NAT_T_DPORT (new port) */
+ n_port = (struct sadb_x_nat_t_port*) skb_put(skb, sizeof (*n_port));
+ n_port->sadb_x_nat_t_port_len = sizeof(*n_port)/sizeof(uint64_t);
+ n_port->sadb_x_nat_t_port_exttype = SADB_X_EXT_NAT_T_DPORT;
+ n_port->sadb_x_nat_t_port_port = sport;
+ n_port->sadb_x_nat_t_port_reserved = 0;
+
+ return pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_REGISTERED, NULL);
+}
+
+static int pfkey_sendmsg(struct socket *sock, struct msghdr *msg, int len,
+ struct scm_cookie *scm)
+{
+ struct sock *sk = sock->sk;
+ struct sk_buff *skb = NULL;
+ struct sadb_msg *hdr = NULL;
+ int err;
+
+ err = -EOPNOTSUPP;
+ if (msg->msg_flags & MSG_OOB)
+ goto out;
+
+ err = -EMSGSIZE;
+ if ((unsigned)len > sk->sndbuf-32)
+ goto out;
+
+ err = -ENOBUFS;
+ skb = alloc_skb(len, GFP_KERNEL);
+ if (skb == NULL)
+ goto out;
+
+ err = -EFAULT;
+ if (memcpy_fromiovec(skb_put(skb,len), msg->msg_iov, len))
+ goto out;
+
+ hdr = pfkey_get_base_msg(skb, &err);
+ if (!hdr)
+ goto out;
+
+ down(&xfrm_cfg_sem);
+ err = pfkey_process(sk, skb, hdr);
+ up(&xfrm_cfg_sem);
+
+out:
+ if (err && hdr && pfkey_error(hdr, err, sk) == 0)
+ err = 0;
+ if (skb)
+ kfree_skb(skb);
+
+ return err ? : len;
+}
+
+static int pfkey_recvmsg(struct socket *sock, struct msghdr *msg, int len,
+ int flags, struct scm_cookie *scm)
+{
+ struct sock *sk = sock->sk;
+ struct sk_buff *skb;
+ int copied, err;
+
+ err = -EINVAL;
+ if (flags & ~(MSG_PEEK|MSG_DONTWAIT|MSG_TRUNC))
+ goto out;
+
+ msg->msg_namelen = 0;
+ skb = skb_recv_datagram(sk, flags, flags & MSG_DONTWAIT, &err);
+ if (skb == NULL)
+ goto out;
+
+ copied = skb->len;
+ if (copied > len) {
+ msg->msg_flags |= MSG_TRUNC;
+ copied = len;
+ }
+
+ skb->h.raw = skb->data;
+ err = skb_copy_datagram_iovec(skb, 0, msg->msg_iov, copied);
+ if (err)
+ goto out_free;
+
+ sock_recv_timestamp(msg, sk, skb);
+
+ err = (flags & MSG_TRUNC) ? skb->len : copied;
+
+out_free:
+ skb_free_datagram(sk, skb);
+out:
+ return err;
+}
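The truncation handling in pfkey_recvmsg() follows the usual datagram rule: copy at most the caller's buffer and flag MSG_TRUNC when the packet was larger. A standalone sketch of just that rule (the function name is ours, not a kernel API):

```c
#include <assert.h>

/* Datagram truncation rule as used by pfkey_recvmsg(): copy at most
 * buf_len bytes and report whether the packet was cut short. */
static int recv_copy_len(int pkt_len, int buf_len, int *truncated)
{
	*truncated = pkt_len > buf_len;
	return *truncated ? buf_len : pkt_len;
}
```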
+
+static struct proto_ops pfkey_ops = {
+ .family = PF_KEY,
+
+ /* Operations that make no sense on pfkey sockets. */
+ .bind = sock_no_bind,
+ .connect = sock_no_connect,
+ .socketpair = sock_no_socketpair,
+ .accept = sock_no_accept,
+ .getname = sock_no_getname,
+ .ioctl = sock_no_ioctl,
+ .listen = sock_no_listen,
+ .shutdown = sock_no_shutdown,
+ .setsockopt = sock_no_setsockopt,
+ .getsockopt = sock_no_getsockopt,
+ .mmap = sock_no_mmap,
+ .sendpage = sock_no_sendpage,
+
+ /* Now the operations that really occur. */
+ .release = pfkey_release,
+ .poll = datagram_poll,
+ .sendmsg = pfkey_sendmsg,
+ .recvmsg = pfkey_recvmsg,
+};
+
+static struct net_proto_family pfkey_family_ops = {
+ .family = PF_KEY,
+ .create = pfkey_create,
+};
+
+#ifdef CONFIG_PROC_FS
+static int pfkey_read_proc(char *buffer, char **start, off_t offset,
+ int length, int *eof, void *data)
+{
+ off_t pos = 0;
+ off_t begin = 0;
+ int len = 0;
+ struct sock *s;
+
+ len += sprintf(buffer,"sk RefCnt Rmem Wmem User Inode\n");
+
+ read_lock(&pfkey_table_lock);
+
+ for (s = pfkey_table; s; s = s->next) {
+ len += sprintf(buffer+len,"%p %-6d %-6u %-6u %-6u %-6lu",
+ s,
+ atomic_read(&s->refcnt),
+ atomic_read(&s->rmem_alloc),
+ atomic_read(&s->wmem_alloc),
+ sock_i_uid(s),
+ sock_i_ino(s)
+ );
+
+ buffer[len++] = '\n';
+
+ pos = begin + len;
+ if (pos < offset) {
+ len = 0;
+ begin = pos;
+ }
+ if(pos > offset + length)
+ goto done;
+ }
+ *eof = 1;
+
+done:
+ read_unlock(&pfkey_table_lock);
+
+ *start = buffer + (offset - begin);
+ len -= (offset - begin);
+
+ if (len > length)
+ len = length;
+ if (len < 0)
+ len = 0;
+
+ return len;
+}
+#endif
+
+static struct xfrm_mgr pfkeyv2_mgr =
+{
+ .id = "pfkeyv2",
+ .notify = pfkey_send_notify,
+ .acquire = pfkey_send_acquire,
+ .compile_policy = pfkey_compile_policy,
+ .new_mapping = pfkey_send_new_mapping,
+};
+
+static void __exit ipsec_pfkey_exit(void)
+{
+ xfrm_unregister_km(&pfkeyv2_mgr);
+ remove_proc_entry("net/pfkey", 0);
+ sock_unregister(PF_KEY);
+}
+
+static int __init ipsec_pfkey_init(void)
+{
+ sock_register(&pfkey_family_ops);
+#ifdef CONFIG_PROC_FS
+ create_proc_read_entry("net/pfkey", 0, 0, pfkey_read_proc, NULL);
+#endif
+ xfrm_register_km(&pfkeyv2_mgr);
+ return 0;
+}
+
+module_init(ipsec_pfkey_init);
+module_exit(ipsec_pfkey_exit);
+MODULE_LICENSE("GPL");
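PF_KEY v2 expresses every length field in 64-bit words, which is why pfkey_send_new_mapping() divides the byte size by sizeof(uint64_t). A minimal userspace sketch of that size accounting (the struct sizes below are simplified stand-ins following RFC 2367's 8-byte alignment, not the kernel definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the sadb structures. */
struct msg_hdr  { uint64_t w[2]; };   /* 16 bytes, like struct sadb_msg */
struct sa_ext   { uint64_t w[2]; };   /* 16 bytes, like struct sadb_sa */
struct addr_ext { uint64_t w[1]; };   /*  8 bytes, like struct sadb_address */
struct port_ext { uint64_t w[1]; };   /*  8 bytes, like sadb_x_nat_t_port */

/* PF_KEY length fields count 64-bit words, not bytes. */
static unsigned pfkey_words(unsigned bytes)
{
	return bytes / sizeof(uint64_t);
}

/* Mirrors the message built above:
 * HDR | SA | ADDRESS_SRC | NAT_T_SPORT | ADDRESS_DST | NAT_T_DPORT */
static unsigned natt_msg_bytes(unsigned sockaddr_size)
{
	return sizeof(struct msg_hdr) +
	       sizeof(struct sa_ext) +
	       2 * (sizeof(struct addr_ext) + sockaddr_size) +
	       2 * sizeof(struct port_ext);
}
```

With a 16-byte sockaddr_in the message comes to 96 bytes, i.e. a sadb_msg_len of 12 words.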
diff -Nru a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
--- a/net/netlink/af_netlink.c Thu May 8 10:41:37 2003
+++ b/net/netlink/af_netlink.c Thu May 8 10:41:37 2003
@@ -490,13 +490,13 @@
return -1;
}
-void netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 pid,
- u32 group, int allocation)
+int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 pid,
+ u32 group, int allocation)
{
struct sock *sk;
struct sk_buff *skb2 = NULL;
int protocol = ssk->protocol;
- int failure = 0;
+ int failure = 0, delivered = 0;
/* While we sleep in clone, do not allow to change socket list */
@@ -530,8 +530,10 @@
failure = 1;
} else if (netlink_broadcast_deliver(sk, skb2)) {
netlink_overrun(sk);
- } else
+ } else {
+ delivered = 1;
skb2 = NULL;
+ }
sock_put(sk);
}
@@ -540,6 +542,12 @@
if (skb2)
kfree_skb(skb2);
kfree_skb(skb);
+
+ if (delivered)
+ return 0;
+ if (failure)
+ return -ENOBUFS;
+ return -ESRCH;
}
void netlink_set_err(struct sock *ssk, u32 pid, u32 group, int code)
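The netlink_broadcast() change above boils down to the return-code mapping it introduces: delivery to at least one listener wins, and an allocation failure beats "nobody was listening". The mapping in isolation (function name is ours):

```c
#include <assert.h>
#include <errno.h>

/* Return-code mapping introduced by the patch: 0 if at least one
 * listener got the skb, -ENOBUFS if a clone failed along the way,
 * -ESRCH if no socket was subscribed to the group. */
static int broadcast_status(int delivered, int failure)
{
	if (delivered)
		return 0;
	if (failure)
		return -ENOBUFS;
	return -ESRCH;
}
```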
diff -Nru a/net/netsyms.c b/net/netsyms.c
--- a/net/netsyms.c Thu May 8 10:41:37 2003
+++ b/net/netsyms.c Thu May 8 10:41:37 2003
@@ -53,6 +53,13 @@
#include <linux/inet.h>
#include <linux/mroute.h>
#include <linux/igmp.h>
+#include <net/xfrm.h>
+#if defined(CONFIG_INET_AH) || defined(CONFIG_INET_AH_MODULE) || defined(CONFIG_INET6_AH) || defined(CONFIG_INET6_AH_MODULE)
+#include <net/ah.h>
+#endif
+#if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE)
+#include <net/esp.h>
+#endif
extern struct net_proto_family inet_family_ops;
@@ -286,6 +293,81 @@
EXPORT_SYMBOL(dlci_ioctl_hook);
#endif
+EXPORT_SYMBOL(xfrm_user_policy);
+EXPORT_SYMBOL(km_waitq);
+EXPORT_SYMBOL(km_new_mapping);
+EXPORT_SYMBOL(xfrm_cfg_sem);
+EXPORT_SYMBOL(xfrm_policy_alloc);
+EXPORT_SYMBOL(__xfrm_policy_destroy);
+EXPORT_SYMBOL(xfrm_policy_lookup);
+EXPORT_SYMBOL(xfrm_lookup);
+EXPORT_SYMBOL(__xfrm_policy_check);
+EXPORT_SYMBOL(__xfrm_route_forward);
+EXPORT_SYMBOL(xfrm_state_alloc);
+EXPORT_SYMBOL(__xfrm_state_destroy);
+EXPORT_SYMBOL(xfrm_state_find);
+EXPORT_SYMBOL(xfrm_state_insert);
+EXPORT_SYMBOL(xfrm_state_check_expire);
+EXPORT_SYMBOL(xfrm_state_check_space);
+EXPORT_SYMBOL(xfrm_state_lookup);
+EXPORT_SYMBOL(xfrm_state_register_afinfo);
+EXPORT_SYMBOL(xfrm_state_unregister_afinfo);
+EXPORT_SYMBOL(xfrm_state_get_afinfo);
+EXPORT_SYMBOL(xfrm_state_put_afinfo);
+EXPORT_SYMBOL(xfrm_replay_check);
+EXPORT_SYMBOL(xfrm_replay_advance);
+EXPORT_SYMBOL(xfrm_check_selectors);
+EXPORT_SYMBOL(xfrm_check_output);
+EXPORT_SYMBOL(__secpath_destroy);
+EXPORT_SYMBOL(xfrm_get_acqseq);
+EXPORT_SYMBOL(xfrm_parse_spi);
+EXPORT_SYMBOL(xfrm4_rcv);
+EXPORT_SYMBOL(xfrm4_tunnel_register);
+EXPORT_SYMBOL(xfrm4_tunnel_deregister);
+EXPORT_SYMBOL(xfrm4_tunnel_check_size);
+EXPORT_SYMBOL(xfrm_register_type);
+EXPORT_SYMBOL(xfrm_unregister_type);
+EXPORT_SYMBOL(xfrm_get_type);
+EXPORT_SYMBOL(inet_peer_idlock);
+EXPORT_SYMBOL(xfrm_register_km);
+EXPORT_SYMBOL(xfrm_unregister_km);
+EXPORT_SYMBOL(xfrm_state_delete);
+EXPORT_SYMBOL(xfrm_state_walk);
+EXPORT_SYMBOL(xfrm_find_acq_byseq);
+EXPORT_SYMBOL(xfrm_find_acq);
+EXPORT_SYMBOL(xfrm_alloc_spi);
+EXPORT_SYMBOL(xfrm_state_flush);
+EXPORT_SYMBOL(xfrm_policy_kill);
+EXPORT_SYMBOL(xfrm_policy_delete);
+EXPORT_SYMBOL(xfrm_policy_insert);
+EXPORT_SYMBOL(xfrm_policy_walk);
+EXPORT_SYMBOL(xfrm_policy_flush);
+EXPORT_SYMBOL(xfrm_policy_byid);
+EXPORT_SYMBOL(xfrm_policy_list);
+EXPORT_SYMBOL(xfrm_dst_lookup);
+EXPORT_SYMBOL(xfrm_policy_register_afinfo);
+EXPORT_SYMBOL(xfrm_policy_unregister_afinfo);
+EXPORT_SYMBOL(xfrm_policy_get_afinfo);
+EXPORT_SYMBOL(xfrm_policy_put_afinfo);
+
+EXPORT_SYMBOL_GPL(xfrm_probe_algs);
+EXPORT_SYMBOL_GPL(xfrm_count_auth_supported);
+EXPORT_SYMBOL_GPL(xfrm_count_enc_supported);
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byidx);
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byidx);
+EXPORT_SYMBOL_GPL(xfrm_calg_get_byidx);
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byid);
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byid);
+EXPORT_SYMBOL_GPL(xfrm_calg_get_byid);
+EXPORT_SYMBOL_GPL(xfrm_aalg_get_byname);
+EXPORT_SYMBOL_GPL(xfrm_ealg_get_byname);
+EXPORT_SYMBOL_GPL(xfrm_calg_get_byname);
+EXPORT_SYMBOL_GPL(skb_icv_walk);
+#if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE)
+EXPORT_SYMBOL_GPL(skb_cow_data);
+EXPORT_SYMBOL_GPL(pskb_put);
+EXPORT_SYMBOL_GPL(skb_to_sgvec);
+#endif
#ifdef CONFIG_IPV6
EXPORT_SYMBOL(ipv6_addr_type);
@@ -478,6 +560,7 @@
EXPORT_SYMBOL(loopback_dev);
EXPORT_SYMBOL(register_netdevice);
EXPORT_SYMBOL(unregister_netdevice);
+EXPORT_SYMBOL(synchronize_net);
EXPORT_SYMBOL(netdev_state_change);
EXPORT_SYMBOL(dev_new_index);
EXPORT_SYMBOL(dev_get_by_flags);
diff -Nru a/net/sched/cls_route.c b/net/sched/cls_route.c
--- a/net/sched/cls_route.c Thu May 8 10:41:37 2003
+++ b/net/sched/cls_route.c Thu May 8 10:41:37 2003
@@ -154,7 +154,7 @@
if (head == NULL)
goto old_method;
- iif = ((struct rtable*)dst)->key.iif;
+ iif = ((struct rtable*)dst)->fl.iif;
h = route4_fastmap_hash(id, iif);
if (id == head->fastmap[h].id &&
diff -Nru a/net/xfrm/Config.in b/net/xfrm/Config.in
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/Config.in Thu May 8 10:41:38 2003
@@ -0,0 +1,4 @@
+#
+# XFRM configuration
+#
+tristate ' IP: IPsec user configuration interface' CONFIG_XFRM_USER
diff -Nru a/net/xfrm/Makefile b/net/xfrm/Makefile
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/Makefile Thu May 8 10:41:38 2003
@@ -0,0 +1,10 @@
+#
+# Makefile for the XFRM subsystem.
+#
+
+O_TARGET := xfrm.o
+
+obj-y := xfrm_policy.o xfrm_state.o xfrm_input.o xfrm_algo.o xfrm_output.o
+obj-$(CONFIG_XFRM_USER) += xfrm_user.o
+
+include $(TOPDIR)/Rules.make
diff -Nru a/net/xfrm/xfrm_algo.c b/net/xfrm/xfrm_algo.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_algo.c Thu May 8 10:41:38 2003
@@ -0,0 +1,695 @@
+/*
+ * xfrm algorithm interface
+ *
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/pfkeyv2.h>
+#include <net/xfrm.h>
+#if defined(CONFIG_INET_AH) || defined(CONFIG_INET_AH_MODULE) || defined(CONFIG_INET6_AH) || defined(CONFIG_INET6_AH_MODULE)
+#include <net/ah.h>
+#endif
+#if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE)
+#include <net/esp.h>
+#endif
+#include <asm/scatterlist.h>
+
+/*
+ * Algorithms supported by IPsec. These entries contain properties which
+ * are used in key negotiation and xfrm processing, and are used to verify
+ * that instantiated crypto transforms have correct parameters for IPsec
+ * purposes.
+ */
+static struct xfrm_algo_desc aalg_list[] = {
+{
+ .name = "digest_null",
+
+ .uinfo = {
+ .auth = {
+ .icv_truncbits = 0,
+ .icv_fullbits = 0,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_AALG_NULL,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 0,
+ .sadb_alg_maxbits = 0
+ }
+},
+{
+ .name = "md5",
+
+ .uinfo = {
+ .auth = {
+ .icv_truncbits = 96,
+ .icv_fullbits = 128,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_AALG_MD5HMAC,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 128,
+ .sadb_alg_maxbits = 128
+ }
+},
+{
+ .name = "sha1",
+
+ .uinfo = {
+ .auth = {
+ .icv_truncbits = 96,
+ .icv_fullbits = 160,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_AALG_SHA1HMAC,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 160,
+ .sadb_alg_maxbits = 160
+ }
+},
+{
+ .name = "sha256",
+
+ .uinfo = {
+ .auth = {
+ .icv_truncbits = 128,
+ .icv_fullbits = 256,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_AALG_SHA2_256HMAC,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 256,
+ .sadb_alg_maxbits = 256
+ }
+},
+{
+ .name = "ripemd160",
+
+ .uinfo = {
+ .auth = {
+ .icv_truncbits = 96,
+ .icv_fullbits = 160,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_AALG_RIPEMD160HMAC,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 160,
+ .sadb_alg_maxbits = 160
+ }
+},
+};
+
+static struct xfrm_algo_desc ealg_list[] = {
+{
+ .name = "cipher_null",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 8,
+ .defkeybits = 0,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_EALG_NULL,
+ .sadb_alg_ivlen = 0,
+ .sadb_alg_minbits = 0,
+ .sadb_alg_maxbits = 0
+ }
+},
+{
+ .name = "des",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 64,
+ .defkeybits = 64,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_EALG_DESCBC,
+ .sadb_alg_ivlen = 8,
+ .sadb_alg_minbits = 64,
+ .sadb_alg_maxbits = 64
+ }
+},
+{
+ .name = "des3_ede",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 64,
+ .defkeybits = 192,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_EALG_3DESCBC,
+ .sadb_alg_ivlen = 8,
+ .sadb_alg_minbits = 192,
+ .sadb_alg_maxbits = 192
+ }
+},
+{
+ .name = "cast128",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 64,
+ .defkeybits = 128,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_EALG_CASTCBC,
+ .sadb_alg_ivlen = 8,
+ .sadb_alg_minbits = 40,
+ .sadb_alg_maxbits = 128
+ }
+},
+{
+ .name = "blowfish",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 64,
+ .defkeybits = 128,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_EALG_BLOWFISHCBC,
+ .sadb_alg_ivlen = 8,
+ .sadb_alg_minbits = 40,
+ .sadb_alg_maxbits = 448
+ }
+},
+{
+ .name = "aes",
+
+ .uinfo = {
+ .encr = {
+ .blockbits = 128,
+ .defkeybits = 128,
+ }
+ },
+
+ .desc = {
+ .sadb_alg_id = SADB_X_EALG_AESCBC,
+ .sadb_alg_ivlen = 8,
+ .sadb_alg_minbits = 128,
+ .sadb_alg_maxbits = 256
+ }
+},
+};
+
+static struct xfrm_algo_desc calg_list[] = {
+{
+ .name = "deflate",
+ .uinfo = {
+ .comp = {
+ .threshold = 90,
+ }
+ },
+ .desc = { .sadb_alg_id = SADB_X_CALG_DEFLATE }
+},
+{
+ .name = "lzs",
+ .uinfo = {
+ .comp = {
+ .threshold = 90,
+ }
+ },
+ .desc = { .sadb_alg_id = SADB_X_CALG_LZS }
+},
+{
+ .name = "lzjh",
+ .uinfo = {
+ .comp = {
+ .threshold = 50,
+ }
+ },
+ .desc = { .sadb_alg_id = SADB_X_CALG_LZJH }
+},
+};
+
+static inline int aalg_entries(void)
+{
+ return sizeof(aalg_list) / sizeof(aalg_list[0]);
+}
+
+static inline int ealg_entries(void)
+{
+ return sizeof(ealg_list) / sizeof(ealg_list[0]);
+}
+
+static inline int calg_entries(void)
+{
+ return sizeof(calg_list) / sizeof(calg_list[0]);
+}
+
+/* Todo: generic iterators */
+struct xfrm_algo_desc *xfrm_aalg_get_byid(int alg_id)
+{
+ int i;
+
+ for (i = 0; i < aalg_entries(); i++) {
+ if (aalg_list[i].desc.sadb_alg_id == alg_id) {
+ if (aalg_list[i].available)
+ return &aalg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_ealg_get_byid(int alg_id)
+{
+ int i;
+
+ for (i = 0; i < ealg_entries(); i++) {
+ if (ealg_list[i].desc.sadb_alg_id == alg_id) {
+ if (ealg_list[i].available)
+ return &ealg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_calg_get_byid(int alg_id)
+{
+ int i;
+
+ for (i = 0; i < calg_entries(); i++) {
+ if (calg_list[i].desc.sadb_alg_id == alg_id) {
+ if (calg_list[i].available)
+ return &calg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_aalg_get_byname(char *name)
+{
+ int i;
+
+ if (!name)
+ return NULL;
+
+ for (i=0; i < aalg_entries(); i++) {
+ if (strcmp(name, aalg_list[i].name) == 0) {
+ if (aalg_list[i].available)
+ return &aalg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_ealg_get_byname(char *name)
+{
+ int i;
+
+ if (!name)
+ return NULL;
+
+ for (i=0; i < ealg_entries(); i++) {
+ if (strcmp(name, ealg_list[i].name) == 0) {
+ if (ealg_list[i].available)
+ return &ealg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_calg_get_byname(char *name)
+{
+ int i;
+
+ if (!name)
+ return NULL;
+
+ for (i=0; i < calg_entries(); i++) {
+ if (strcmp(name, calg_list[i].name) == 0) {
+ if (calg_list[i].available)
+ return &calg_list[i];
+ else
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct xfrm_algo_desc *xfrm_aalg_get_byidx(unsigned int idx)
+{
+ if (idx >= aalg_entries())
+ return NULL;
+
+ return &aalg_list[idx];
+}
+
+struct xfrm_algo_desc *xfrm_ealg_get_byidx(unsigned int idx)
+{
+ if (idx >= ealg_entries())
+ return NULL;
+
+ return &ealg_list[idx];
+}
+
+struct xfrm_algo_desc *xfrm_calg_get_byidx(unsigned int idx)
+{
+ if (idx >= calg_entries())
+ return NULL;
+
+ return &calg_list[idx];
+}
+
+/*
+ * Probe for the availability of crypto algorithms, and set the available
+ * flag for any algorithms found on the system. This is typically called by
+ * pfkey during userspace SA add, update or register.
+ */
+void xfrm_probe_algs(void)
+{
+#ifdef CONFIG_CRYPTO
+ int i, status;
+
+ BUG_ON(in_softirq());
+
+ for (i = 0; i < aalg_entries(); i++) {
+ status = crypto_alg_available(aalg_list[i].name, 0);
+ if (aalg_list[i].available != status)
+ aalg_list[i].available = status;
+ }
+
+ for (i = 0; i < ealg_entries(); i++) {
+ status = crypto_alg_available(ealg_list[i].name, 0);
+ if (ealg_list[i].available != status)
+ ealg_list[i].available = status;
+ }
+
+ for (i = 0; i < calg_entries(); i++) {
+ status = crypto_alg_available(calg_list[i].name, 0);
+ if (calg_list[i].available != status)
+ calg_list[i].available = status;
+ }
+#endif
+}
+
+int xfrm_count_auth_supported(void)
+{
+ int i, n;
+
+ for (i = 0, n = 0; i < aalg_entries(); i++)
+ if (aalg_list[i].available)
+ n++;
+ return n;
+}
+
+int xfrm_count_enc_supported(void)
+{
+ int i, n;
+
+ for (i = 0, n = 0; i < ealg_entries(); i++)
+ if (ealg_list[i].available)
+ n++;
+ return n;
+}
+
+/* Move to common area: it is shared with AH. */
+
+void skb_icv_walk(const struct sk_buff *skb, struct crypto_tfm *tfm,
+ int offset, int len, icv_update_fn_t icv_update)
+{
+ int start = skb->len - skb->data_len;
+ int i, copy = start - offset;
+ struct scatterlist sg;
+
+ /* Checksum header. */
+ if (copy > 0) {
+ if (copy > len)
+ copy = len;
+
+ sg.page = virt_to_page(skb->data + offset);
+ sg.offset = (unsigned long)(skb->data + offset) % PAGE_SIZE;
+ sg.length = copy;
+
+ icv_update(tfm, &sg, 1);
+
+ if ((len -= copy) == 0)
+ return;
+ offset += copy;
+ }
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + skb_shinfo(skb)->frags[i].size;
+ if ((copy = end - offset) > 0) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ if (copy > len)
+ copy = len;
+
+ sg.page = frag->page;
+ sg.offset = frag->page_offset + offset-start;
+ sg.length = copy;
+
+ icv_update(tfm, &sg, 1);
+
+ if (!(len -= copy))
+ return;
+ offset += copy;
+ }
+ start = end;
+ }
+
+ if (skb_shinfo(skb)->frag_list) {
+ struct sk_buff *list = skb_shinfo(skb)->frag_list;
+
+ for (; list; list = list->next) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + list->len;
+ if ((copy = end - offset) > 0) {
+ if (copy > len)
+ copy = len;
+ skb_icv_walk(list, tfm, offset-start, copy, icv_update);
+ if ((len -= copy) == 0)
+ return;
+ offset += copy;
+ }
+ start = end;
+ }
+ }
+ if (len)
+ BUG();
+}
+
+#if defined(CONFIG_INET_ESP) || defined(CONFIG_INET_ESP_MODULE) || defined(CONFIG_INET6_ESP) || defined(CONFIG_INET6_ESP_MODULE)
+
+/* Although it looks generic, it is not used in other places. */
+
+int
+skb_to_sgvec(struct sk_buff *skb, struct scatterlist *sg, int offset, int len)
+{
+ int start = skb->len - skb->data_len;
+ int i, copy = start - offset;
+ int elt = 0;
+
+ if (copy > 0) {
+ if (copy > len)
+ copy = len;
+ sg[elt].page = virt_to_page(skb->data + offset);
+ sg[elt].offset = (unsigned long)(skb->data + offset) % PAGE_SIZE;
+ sg[elt].length = copy;
+ elt++;
+ if ((len -= copy) == 0)
+ return elt;
+ offset += copy;
+ }
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + skb_shinfo(skb)->frags[i].size;
+ if ((copy = end - offset) > 0) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ if (copy > len)
+ copy = len;
+ sg[elt].page = frag->page;
+ sg[elt].offset = frag->page_offset+offset-start;
+ sg[elt].length = copy;
+ elt++;
+ if (!(len -= copy))
+ return elt;
+ offset += copy;
+ }
+ start = end;
+ }
+
+ if (skb_shinfo(skb)->frag_list) {
+ struct sk_buff *list = skb_shinfo(skb)->frag_list;
+
+ for (; list; list = list->next) {
+ int end;
+
+ BUG_TRAP(start <= offset + len);
+
+ end = start + list->len;
+ if ((copy = end - offset) > 0) {
+ if (copy > len)
+ copy = len;
+ elt += skb_to_sgvec(list, sg+elt, offset - start, copy);
+ if ((len -= copy) == 0)
+ return elt;
+ offset += copy;
+ }
+ start = end;
+ }
+ }
+ if (len)
+ BUG();
+ return elt;
+}
+
+/* Check that the skb data is writable. If it is not, copy the data
+ * to a newly created private area. If "tailbits" is given, make sure
+ * that tailbits bytes beyond the current end of the skb are writable.
+ *
+ * Returns the number of scatterlist elements to load for subsequent
+ * transformations, and a pointer to the writable trailer skb.
+ */
+
+int skb_cow_data(struct sk_buff *skb, int tailbits, struct sk_buff **trailer)
+{
+ int copyflag;
+ int elt;
+ struct sk_buff *skb1, **skb_p;
+
+ /* If skb is cloned or its head is paged, reallocate
+ * head pulling out all the pages (pages are considered not writable
+ * at the moment even if they are anonymous).
+ */
+ if ((skb_cloned(skb) || skb_shinfo(skb)->nr_frags) &&
+ __pskb_pull_tail(skb, skb_pagelen(skb)-skb_headlen(skb)) == NULL)
+ return -ENOMEM;
+
+ /* Easy case. Most of packets will go this way. */
+ if (!skb_shinfo(skb)->frag_list) {
+ /* A bit of trouble: not enough space for the trailer.
+ * This should not happen when the stack is tuned to generate
+ * good frames. On a miss we reallocate and reserve even more
+ * space; 128 bytes is fair. */
+
+ if (skb_tailroom(skb) < tailbits &&
+ pskb_expand_head(skb, 0, tailbits-skb_tailroom(skb)+128, GFP_ATOMIC))
+ return -ENOMEM;
+
+ /* Voila! */
+ *trailer = skb;
+ return 1;
+ }
+
+ /* Misery. We are in trouble; time to mince the fragments... */
+
+ elt = 1;
+ skb_p = &skb_shinfo(skb)->frag_list;
+ copyflag = 0;
+
+ while ((skb1 = *skb_p) != NULL) {
+ int ntail = 0;
+
+ /* The fragment is partially pulled by someone,
+ * this can happen on input. Copy it and everything
+ * after it. */
+
+ if (skb_shared(skb1))
+ copyflag = 1;
+
+ /* If the skb is the last, worry about trailer. */
+
+ if (skb1->next == NULL && tailbits) {
+ if (skb_shinfo(skb1)->nr_frags ||
+ skb_shinfo(skb1)->frag_list ||
+ skb_tailroom(skb1) < tailbits)
+ ntail = tailbits + 128;
+ }
+
+ if (copyflag ||
+ skb_cloned(skb1) ||
+ ntail ||
+ skb_shinfo(skb1)->nr_frags ||
+ skb_shinfo(skb1)->frag_list) {
+ struct sk_buff *skb2;
+
+ /* Worst case: we have to copy the fragment. */
+ if (ntail == 0)
+ skb2 = skb_copy(skb1, GFP_ATOMIC);
+ else
+ skb2 = skb_copy_expand(skb1,
+ skb_headroom(skb1),
+ ntail,
+ GFP_ATOMIC);
+ if (unlikely(skb2 == NULL))
+ return -ENOMEM;
+
+ if (skb1->sk)
+ skb_set_owner_w(skb2, skb1->sk);
+
+ /* Looking around. Are we still alive?
+ * OK, link new skb, drop old one */
+
+ skb2->next = skb1->next;
+ *skb_p = skb2;
+ kfree_skb(skb1);
+ skb1 = skb2;
+ }
+ elt++;
+ *trailer = skb1;
+ skb_p = &skb1->next;
+ }
+
+ return elt;
+}
+
+void *pskb_put(struct sk_buff *skb, struct sk_buff *tail, int len)
+{
+ if (tail != skb) {
+ skb->data_len += len;
+ skb->len += len;
+ }
+ return skb_put(tail, len);
+}
+#endif
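skb_icv_walk() and skb_to_sgvec() share one traversal pattern: walk a byte range across the linear area, the page fragments, and the frag_list, handing each contiguous piece to a callback. A userspace sketch of that pattern over a chunked buffer (the chunk layout and names are our simplification, not kernel structures):

```c
#include <assert.h>
#include <string.h>

/* A "packet" stored as several contiguous chunks. */
struct chunk { const char *data; int len; };

/* Walk the byte range [offset, offset+len) across the chunks, calling
 * update() once per contiguous piece, mirroring skb_icv_walk(). */
static int walk(const struct chunk *c, int nchunks, int offset, int len,
		void (*update)(const char *p, int n, void *ctx), void *ctx)
{
	int start = 0, i;

	for (i = 0; i < nchunks && len > 0; i++) {
		int end = start + c[i].len;
		int copy = end - offset;

		if (copy > 0) {
			if (copy > len)
				copy = len;
			update(c[i].data + (offset - start), copy, ctx);
			len -= copy;
			offset += copy;
		}
		start = end;
	}
	return len ? -1 : 0;	/* -1: range ran past the packet */
}

/* Example callback: accumulate the walked bytes into a string. */
static void collect(const char *p, int n, void *ctx)
{
	strncat((char *)ctx, p, n);
}
```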
diff -Nru a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_input.c Thu May 8 10:41:38 2003
@@ -0,0 +1,52 @@
+/*
+ * xfrm_input.c
+ *
+ * Changes:
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific portion
+ *
+ */
+
+#include <net/ip.h>
+#include <net/xfrm.h>
+
+void __secpath_destroy(struct sec_path *sp)
+{
+ int i;
+ for (i = 0; i < sp->len; i++)
+ xfrm_state_put(sp->x[i].xvec);
+ kmem_cache_free(sp->pool, sp);
+}
+
+/* Fetch the SPI and sequence number from the IPsec header. */
+
+int xfrm_parse_spi(struct sk_buff *skb, u8 nexthdr, u32 *spi, u32 *seq)
+{
+ int offset, offset_seq;
+
+ switch (nexthdr) {
+ case IPPROTO_AH:
+ offset = offsetof(struct ip_auth_hdr, spi);
+ offset_seq = offsetof(struct ip_auth_hdr, seq_no);
+ break;
+ case IPPROTO_ESP:
+ offset = offsetof(struct ip_esp_hdr, spi);
+ offset_seq = offsetof(struct ip_esp_hdr, seq_no);
+ break;
+ case IPPROTO_COMP:
+ if (!pskb_may_pull(skb, 4))
+ return -EINVAL;
+ *spi = ntohl(ntohs(*(u16*)(skb->h.raw + 2)));
+ *seq = 0;
+ return 0;
+ default:
+ return 1;
+ }
+
+ if (!pskb_may_pull(skb, 16))
+ return -EINVAL;
+
+ *spi = *(u32*)(skb->h.raw + offset);
+ *seq = *(u32*)(skb->h.raw + offset_seq);
+ return 0;
+}
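xfrm_parse_spi() relies on the SPI and sequence number sitting at fixed offsets in the AH and ESP headers. Those RFC wire layouts can be checked with simplified struct definitions (these are our stand-ins, not the kernel's ip_auth_hdr/ip_esp_hdr):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified AH header layout: SPI at offset 4, sequence at offset 8. */
struct ah_hdr {
	uint8_t  nexthdr;
	uint8_t  hdrlen;
	uint16_t reserved;
	uint32_t spi;
	uint32_t seq_no;
};

/* Simplified ESP header layout: SPI at offset 0, sequence at offset 4. */
struct esp_hdr {
	uint32_t spi;
	uint32_t seq_no;
};
```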
diff -Nru a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_output.c Thu May 8 10:41:38 2003
@@ -0,0 +1,46 @@
+/*
+ * generic xfrm output routines
+ *
+ * Copyright (c) 2003 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <net/xfrm.h>
+
+int xfrm_check_output(struct xfrm_state *x,
+ struct sk_buff *skb, unsigned short family)
+{
+ int err;
+
+ err = xfrm_state_check_expire(x);
+ if (err)
+ goto out;
+
+ if (x->props.mode) {
+ switch (family) {
+ case AF_INET:
+ err = xfrm4_tunnel_check_size(skb);
+ break;
+
+ case AF_INET6:
+ err = xfrm6_tunnel_check_size(skb);
+ break;
+
+ default:
+ err = -EINVAL;
+ }
+
+ if (err)
+ goto out;
+ }
+
+ err = xfrm_state_check_space(x, skb);
+out:
+ return err;
+}
diff -Nru a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_policy.c Thu May 8 10:41:38 2003
@@ -0,0 +1,1244 @@
+/*
+ * xfrm_policy.c
+ *
+ * Changes:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * IPv6 support
+ * Kazunori MIYAZAWA @USAGI
+ * YOSHIFUJI Hideaki
+ * Split up af-specific portion
+ * Derek Atkins <derek@ihtfp.com> Add the post_input processor
+ *
+ */
+
+#include <linux/config.h>
+#include <net/xfrm.h>
+#include <net/ip.h>
+
+DECLARE_MUTEX(xfrm_cfg_sem);
+
+static u32 xfrm_policy_genid;
+static rwlock_t xfrm_policy_lock = RW_LOCK_UNLOCKED;
+
+struct xfrm_policy *xfrm_policy_list[XFRM_POLICY_MAX*2];
+
+static rwlock_t xfrm_policy_afinfo_lock = RW_LOCK_UNLOCKED;
+static struct xfrm_policy_afinfo *xfrm_policy_afinfo[NPROTO];
+
+kmem_cache_t *xfrm_dst_cache;
+
+/* Limited flow cache. Its function now is to accelerate search for
+ * policy rules.
+ *
+ * The flow cache is private to each cpu; at the moment this matters
+ * mostly for flows which do not match any rule, so that flow lookups
+ * are absolutely cpu-local. When a rule does match, we update the rule
+ * (refcnt, stats), which breaks that locality. Later this
+ * can be repaired.
+ */
+
+struct flow_entry
+{
+ struct flow_entry *next;
+ struct flowi fl;
+ u8 dir;
+ u32 genid;
+ struct xfrm_policy *pol;
+};
+
+static kmem_cache_t *flow_cachep;
+
+struct flow_entry **flow_table;
+
+static int flow_lwm = 2*XFRM_FLOWCACHE_HASH_SIZE;
+static int flow_hwm = 4*XFRM_FLOWCACHE_HASH_SIZE;
+
+static int flow_number[NR_CPUS] __cacheline_aligned;
+
+#define flow_count(cpu) (flow_number[cpu])
+
+static void flow_cache_shrink(int cpu)
+{
+ int i;
+ struct flow_entry *fle, **flp;
+ int shrink_to = flow_lwm/XFRM_FLOWCACHE_HASH_SIZE;
+
+ for (i=0; i<XFRM_FLOWCACHE_HASH_SIZE; i++) {
+ int k = 0;
+ flp = &flow_table[cpu*XFRM_FLOWCACHE_HASH_SIZE+i];
+ while ((fle=*flp) != NULL && k<shrink_to) {
+ k++;
+ flp = &fle->next;
+ }
+ while ((fle=*flp) != NULL) {
+ *flp = fle->next;
+ if (fle->pol)
+ xfrm_pol_put(fle->pol);
+ kmem_cache_free(flow_cachep, fle);
+ }
+ }
+}
+
+struct xfrm_policy *flow_lookup(int dir, struct flowi *fl,
+ unsigned short family)
+{
+ struct xfrm_policy *pol = NULL;
+ struct flow_entry *fle;
+ u32 hash;
+ int cpu;
+
+ hash = flow_hash(fl, family);
+
+ local_bh_disable();
+ cpu = smp_processor_id();
+
+ for (fle = flow_table[cpu*XFRM_FLOWCACHE_HASH_SIZE+hash];
+ fle; fle = fle->next) {
+ if (memcmp(fl, &fle->fl, sizeof(fle->fl)) == 0 &&
+ fle->dir == dir) {
+ if (fle->genid == xfrm_policy_genid) {
+ if ((pol = fle->pol) != NULL)
+ atomic_inc(&pol->refcnt);
+ local_bh_enable();
+ return pol;
+ }
+ break;
+ }
+ }
+
+ pol = xfrm_policy_lookup(dir, fl, family);
+
+ if (fle) {
+ /* Stale flow entry found. Update it. */
+ fle->genid = xfrm_policy_genid;
+
+ if (fle->pol)
+ xfrm_pol_put(fle->pol);
+ fle->pol = pol;
+ if (pol)
+ atomic_inc(&pol->refcnt);
+ } else {
+ if (flow_count(cpu) > flow_hwm)
+ flow_cache_shrink(cpu);
+
+ fle = kmem_cache_alloc(flow_cachep, SLAB_ATOMIC);
+ if (fle) {
+ flow_count(cpu)++;
+ fle->fl = *fl;
+ fle->genid = xfrm_policy_genid;
+ fle->dir = dir;
+ fle->pol = pol;
+ if (pol)
+ atomic_inc(&pol->refcnt);
+ fle->next = flow_table[cpu*XFRM_FLOWCACHE_HASH_SIZE+hash];
+ flow_table[cpu*XFRM_FLOWCACHE_HASH_SIZE+hash] = fle;
+ }
+ }
+ local_bh_enable();
+ return pol;
+}
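The flow cache above keeps every CPU's buckets in one flat array: CPU n's chain for hash bucket h lives at index `n*XFRM_FLOWCACHE_HASH_SIZE + h`, so each CPU only ever touches its own contiguous region and a lookup needs nothing stronger than `local_bh_disable()`. A minimal standalone sketch of that indexing (HASH_SIZE is an illustrative stand-in, not the kernel's value):

```c
#include <assert.h>

#define HASH_SIZE 32   /* stand-in for XFRM_FLOWCACHE_HASH_SIZE */

/* CPU n owns the contiguous slot range [n*HASH_SIZE, (n+1)*HASH_SIZE),
 * so two CPUs can never collide on a bucket. */
static unsigned flow_slot(unsigned cpu, unsigned hash)
{
    return cpu * HASH_SIZE + (hash % HASH_SIZE);
}
```

Because the regions are disjoint per CPU, no cross-CPU locking is needed for the chains themselves.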
+
+void __init flow_cache_init(void)
+{
+ int order;
+
+ flow_cachep = kmem_cache_create("flow_cache",
+ sizeof(struct flow_entry),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+
+ if (!flow_cachep)
+ panic("NET: failed to allocate flow cache slab\n");
+
+ for (order = 0;
+ (PAGE_SIZE<<order) < (NR_CPUS*sizeof(struct flow_entry *)*XFRM_FLOWCACHE_HASH_SIZE);
+ order++)
+ /* NOTHING */;
+
+ flow_table = (struct flow_entry **)__get_free_pages(GFP_ATOMIC, order);
+
+ if (!flow_table)
+ panic("Failed to allocate flow cache hash table\n");
+
+ memset(flow_table, 0, PAGE_SIZE<<order);
+}
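The `for (order = 0; ...)` loop in flow_cache_init() finds the smallest page order whose span covers the table. The same computation as a standalone sketch (PAGE_SIZE here is an illustrative constant, not the running kernel's):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL   /* illustrative page size */

/* Smallest order such that (PAGE_SIZE << order) >= bytes -- the same
 * loop flow_cache_init() uses to size the hash table allocation. */
static int size_to_order(unsigned long bytes)
{
    int order = 0;

    while ((PAGE_SIZE << order) < bytes)
        order++;
    return order;
}
```

Note the allocation is rounded up to a power-of-two number of pages, which is why the memset afterwards clears `PAGE_SIZE<<order` bytes rather than the exact table size.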
+
+int xfrm_register_type(struct xfrm_type *type, unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ struct xfrm_type_map *typemap;
+ int err = 0;
+
+ if (unlikely(afinfo == NULL))
+ return -EAFNOSUPPORT;
+ typemap = afinfo->type_map;
+
+ write_lock(&typemap->lock);
+ if (likely(typemap->map[type->proto] == NULL))
+ typemap->map[type->proto] = type;
+ else
+ err = -EEXIST;
+ write_unlock(&typemap->lock);
+ xfrm_policy_put_afinfo(afinfo);
+ return err;
+}
+
+int xfrm_unregister_type(struct xfrm_type *type, unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ struct xfrm_type_map *typemap;
+ int err = 0;
+
+ if (unlikely(afinfo == NULL))
+ return -EAFNOSUPPORT;
+ typemap = afinfo->type_map;
+
+ write_lock(&typemap->lock);
+ if (unlikely(typemap->map[type->proto] != type))
+ err = -ENOENT;
+ else
+ typemap->map[type->proto] = NULL;
+ write_unlock(&typemap->lock);
+ xfrm_policy_put_afinfo(afinfo);
+ return err;
+}
+
+struct xfrm_type *xfrm_get_type(u8 proto, unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ struct xfrm_type_map *typemap;
+ struct xfrm_type *type;
+
+ if (unlikely(afinfo == NULL))
+ return NULL;
+ typemap = afinfo->type_map;
+
+ read_lock(&typemap->lock);
+ type = typemap->map[proto];
+ if (type && type->owner)
+ __MOD_INC_USE_COUNT(type->owner);
+ read_unlock(&typemap->lock);
+ xfrm_policy_put_afinfo(afinfo);
+ return type;
+}
+
+int xfrm_dst_lookup(struct xfrm_dst **dst, struct flowi *fl,
+ unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ int err = 0;
+
+ if (unlikely(afinfo == NULL))
+ return -EAFNOSUPPORT;
+
+ if (likely(afinfo->dst_lookup != NULL))
+ err = afinfo->dst_lookup(dst, fl);
+ else
+ err = -EINVAL;
+ xfrm_policy_put_afinfo(afinfo);
+ return err;
+}
+
+void xfrm_put_type(struct xfrm_type *type)
+{
+ if (type->owner)
+ __MOD_DEC_USE_COUNT(type->owner);
+}
+
+static inline unsigned long make_jiffies(long secs)
+{
+ if (secs >= (MAX_SCHEDULE_TIMEOUT-1)/HZ)
+ return MAX_SCHEDULE_TIMEOUT-1;
+ else
+ return secs*HZ;
+}
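make_jiffies() clamps a seconds value before multiplying by HZ so the result can neither overflow nor exceed the scheduler's maximum timeout. A standalone sketch of the same clamping (the HZ and MAX_SCHEDULE_TIMEOUT values are illustrative assumptions, not a particular kernel configuration):

```c
#include <assert.h>
#include <limits.h>

#define HZ 100                          /* illustrative tick rate */
#define MAX_SCHEDULE_TIMEOUT LONG_MAX   /* as defined by the scheduler */

/* Convert seconds to jiffies, saturating just below the scheduler limit
 * so that secs*HZ can never overflow a long. */
static unsigned long secs_to_jiffies_clamped(long secs)
{
    if (secs >= (MAX_SCHEDULE_TIMEOUT - 1) / HZ)
        return MAX_SCHEDULE_TIMEOUT - 1;
    return secs * HZ;
}
```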
+
+static void xfrm_policy_timer(unsigned long data)
+{
+ struct xfrm_policy *xp = (struct xfrm_policy*)data;
+ unsigned long now = (unsigned long)xtime.tv_sec;
+ long next = LONG_MAX;
+ u32 index;
+
+ if (xp->dead)
+ goto out;
+
+ if (xp->lft.hard_add_expires_seconds) {
+ long tmo = xp->lft.hard_add_expires_seconds +
+ xp->curlft.add_time - now;
+ if (tmo <= 0)
+ goto expired;
+ if (tmo < next)
+ next = tmo;
+ }
+ if (next != LONG_MAX &&
+ !mod_timer(&xp->timer, jiffies + make_jiffies(next)))
+ atomic_inc(&xp->refcnt);
+
+out:
+ xfrm_pol_put(xp);
+ return;
+
+expired:
+ index = xp->index;
+ xfrm_pol_put(xp);
+
+ /* Not 100% correct. id can be recycled in theory */
+ xp = xfrm_policy_byid(0, index, 1);
+ if (xp) {
+ xfrm_policy_kill(xp);
+ xfrm_pol_put(xp);
+ }
+}
+
+
+/* Allocate xfrm_policy. Not used here; it is intended for use by
+ * pfkeyv2 SPD calls.
+ */
+
+struct xfrm_policy *xfrm_policy_alloc(int gfp)
+{
+ struct xfrm_policy *policy;
+
+ policy = kmalloc(sizeof(struct xfrm_policy), gfp);
+
+ if (policy) {
+ memset(policy, 0, sizeof(struct xfrm_policy));
+ atomic_set(&policy->refcnt, 1);
+ policy->lock = RW_LOCK_UNLOCKED;
+ init_timer(&policy->timer);
+ policy->timer.data = (unsigned long)policy;
+ policy->timer.function = xfrm_policy_timer;
+ }
+ return policy;
+}
+
+/* Destroy xfrm_policy: descendant resources must be released by this point. */
+
+void __xfrm_policy_destroy(struct xfrm_policy *policy)
+{
+ if (!policy->dead)
+ BUG();
+
+ if (policy->bundles)
+ BUG();
+
+ if (del_timer(&policy->timer))
+ BUG();
+
+ kfree(policy);
+}
+
+/* Rule must be locked. Release descendant resources, announce
+ * the entry dead. The rule must already be unlinked from all lists.
+ */
+
+void xfrm_policy_kill(struct xfrm_policy *policy)
+{
+ struct dst_entry *dst;
+
+ write_lock_bh(&policy->lock);
+ if (policy->dead)
+ goto out;
+
+ policy->dead = 1;
+
+ while ((dst = policy->bundles) != NULL) {
+ policy->bundles = dst->next;
+ dst_free(dst);
+ }
+
+ if (del_timer(&policy->timer))
+ atomic_dec(&policy->refcnt);
+
+out:
+ write_unlock_bh(&policy->lock);
+}
+
+/* Generate a new index... KAME seems to generate them ordered by cost
+ * at the price of absolute unpredictability of rule ordering. That will not pass. */
+static u32 xfrm_gen_index(int dir)
+{
+ u32 idx;
+ struct xfrm_policy *p;
+ static u32 idx_generator;
+
+ for (;;) {
+ idx = (idx_generator | dir);
+ idx_generator += 8;
+ if (idx == 0)
+ idx = 8;
+ for (p = xfrm_policy_list[dir]; p; p = p->next) {
+ if (p->index == idx)
+ break;
+ }
+ if (!p)
+ return idx;
+ }
+}
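xfrm_gen_index() packs the direction into the low three bits of the index (which is why xfrm_policy_byid() can recover the list with `id & 7`) and steps the generator by 8 so two indices for the same direction never share a value; 0 is reserved as "no index". A minimal sketch of the encoding, without the collision scan:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the xfrm index scheme: the low 3 bits carry the direction,
 * the generator advances in steps of 8, and 0 is never handed out. */
static uint32_t gen_index(uint32_t *generator, int dir)
{
    uint32_t idx = *generator | (uint32_t)dir;

    *generator += 8;
    if (idx == 0)        /* 0 means "no index"; skip it on wrap */
        idx = 8;
    return idx;
}
```

In the real function the candidate is additionally checked against every policy already on the direction's list and regenerated on collision.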
+
+int xfrm_policy_insert(int dir, struct xfrm_policy *policy, int excl)
+{
+ struct xfrm_policy *pol, **p;
+
+ write_lock_bh(&xfrm_policy_lock);
+ for (p = &xfrm_policy_list[dir]; (pol=*p)!=NULL; p = &pol->next) {
+ if (memcmp(&policy->selector, &pol->selector, sizeof(pol->selector)) == 0) {
+ if (excl) {
+ write_unlock_bh(&xfrm_policy_lock);
+ return -EEXIST;
+ }
+ break;
+ }
+ }
+ atomic_inc(&policy->refcnt);
+ policy->next = pol ? pol->next : NULL;
+ *p = policy;
+ xfrm_policy_genid++;
+ policy->index = pol ? pol->index : xfrm_gen_index(dir);
+ policy->curlft.add_time = (unsigned long)xtime.tv_sec;
+ policy->curlft.use_time = 0;
+ if (policy->lft.hard_add_expires_seconds &&
+ !mod_timer(&policy->timer, jiffies + HZ))
+ atomic_inc(&policy->refcnt);
+ write_unlock_bh(&xfrm_policy_lock);
+
+ if (pol) {
+ atomic_dec(&pol->refcnt);
+ xfrm_policy_kill(pol);
+ xfrm_pol_put(pol);
+ }
+ return 0;
+}
+
+struct xfrm_policy *xfrm_policy_delete(int dir, struct xfrm_selector *sel)
+{
+ struct xfrm_policy *pol, **p;
+
+ write_lock_bh(&xfrm_policy_lock);
+ for (p = &xfrm_policy_list[dir]; (pol=*p)!=NULL; p = &pol->next) {
+ if (memcmp(sel, &pol->selector, sizeof(*sel)) == 0) {
+ *p = pol->next;
+ break;
+ }
+ }
+ if (pol)
+ xfrm_policy_genid++;
+ write_unlock_bh(&xfrm_policy_lock);
+ return pol;
+}
+
+struct xfrm_policy *xfrm_policy_byid(int dir, u32 id, int delete)
+{
+ struct xfrm_policy *pol, **p;
+
+ write_lock_bh(&xfrm_policy_lock);
+ for (p = &xfrm_policy_list[id & 7]; (pol=*p)!=NULL; p = &pol->next) {
+ if (pol->index == id) {
+ if (delete)
+ *p = pol->next;
+ break;
+ }
+ }
+ if (pol) {
+ if (delete)
+ xfrm_policy_genid++;
+ else
+ atomic_inc(&pol->refcnt);
+ }
+ write_unlock_bh(&xfrm_policy_lock);
+ return pol;
+}
+
+void xfrm_policy_flush(void)
+{
+ struct xfrm_policy *xp;
+ int dir;
+
+ write_lock_bh(&xfrm_policy_lock);
+ for (dir = 0; dir < XFRM_POLICY_MAX; dir++) {
+ while ((xp = xfrm_policy_list[dir]) != NULL) {
+ xfrm_policy_list[dir] = xp->next;
+ write_unlock_bh(&xfrm_policy_lock);
+
+ xfrm_policy_kill(xp);
+ xfrm_pol_put(xp);
+
+ write_lock_bh(&xfrm_policy_lock);
+ }
+ }
+ xfrm_policy_genid++;
+ write_unlock_bh(&xfrm_policy_lock);
+}
+
+int xfrm_policy_walk(int (*func)(struct xfrm_policy *, int, int, void*),
+ void *data)
+{
+ struct xfrm_policy *xp;
+ int dir;
+ int count = 0;
+ int error = 0;
+
+ read_lock_bh(&xfrm_policy_lock);
+ for (dir = 0; dir < 2*XFRM_POLICY_MAX; dir++) {
+ for (xp = xfrm_policy_list[dir]; xp; xp = xp->next)
+ count++;
+ }
+
+ if (count == 0) {
+ error = -ENOENT;
+ goto out;
+ }
+
+ for (dir = 0; dir < 2*XFRM_POLICY_MAX; dir++) {
+ for (xp = xfrm_policy_list[dir]; xp; xp = xp->next) {
+ error = func(xp, dir%XFRM_POLICY_MAX, --count, data);
+ if (error)
+ goto out;
+ }
+ }
+
+out:
+ read_unlock_bh(&xfrm_policy_lock);
+ return error;
+}
+
+
+/* Find policy to apply to this flow. */
+
+struct xfrm_policy *xfrm_policy_lookup(int dir, struct flowi *fl,
+ unsigned short family)
+{
+ struct xfrm_policy *pol;
+
+ read_lock_bh(&xfrm_policy_lock);
+ for (pol = xfrm_policy_list[dir]; pol; pol = pol->next) {
+ struct xfrm_selector *sel = &pol->selector;
+ int match;
+
+ if (pol->family != family)
+ continue;
+
+ match = xfrm_selector_match(sel, fl, family);
+ if (match) {
+ atomic_inc(&pol->refcnt);
+ break;
+ }
+ }
+ read_unlock_bh(&xfrm_policy_lock);
+ return pol;
+}
+
+struct xfrm_policy *xfrm_sk_policy_lookup(struct sock *sk, int dir, struct flowi *fl)
+{
+ struct xfrm_policy *pol;
+
+ read_lock_bh(&xfrm_policy_lock);
+ if ((pol = sk->policy[dir]) != NULL) {
+ int match;
+
+ match = xfrm_selector_match(&pol->selector, fl, sk->family);
+ if (match)
+ atomic_inc(&pol->refcnt);
+ else
+ pol = NULL;
+ }
+ read_unlock_bh(&xfrm_policy_lock);
+ return pol;
+}
+
+void xfrm_sk_policy_link(struct xfrm_policy *pol, int dir)
+{
+ pol->next = xfrm_policy_list[XFRM_POLICY_MAX+dir];
+ xfrm_policy_list[XFRM_POLICY_MAX+dir] = pol;
+ atomic_inc(&pol->refcnt);
+}
+
+void xfrm_sk_policy_unlink(struct xfrm_policy *pol, int dir)
+{
+ struct xfrm_policy **polp;
+
+ for (polp = &xfrm_policy_list[XFRM_POLICY_MAX+dir];
+ *polp != NULL; polp = &(*polp)->next) {
+ if (*polp == pol) {
+ *polp = pol->next;
+ atomic_dec(&pol->refcnt);
+ return;
+ }
+ }
+}
+
+int xfrm_sk_policy_insert(struct sock *sk, int dir, struct xfrm_policy *pol)
+{
+ struct xfrm_policy *old_pol;
+
+ write_lock_bh(&xfrm_policy_lock);
+ old_pol = sk->policy[dir];
+ sk->policy[dir] = pol;
+ if (pol) {
+ pol->curlft.add_time = (unsigned long)xtime.tv_sec;
+ pol->index = xfrm_gen_index(XFRM_POLICY_MAX+dir);
+ xfrm_sk_policy_link(pol, dir);
+ }
+ if (old_pol)
+ xfrm_sk_policy_unlink(old_pol, dir);
+ write_unlock_bh(&xfrm_policy_lock);
+
+ if (old_pol) {
+ xfrm_policy_kill(old_pol);
+ xfrm_pol_put(old_pol);
+ }
+ return 0;
+}
+
+static struct xfrm_policy *clone_policy(struct xfrm_policy *old, int dir)
+{
+ struct xfrm_policy *newp = xfrm_policy_alloc(GFP_ATOMIC);
+
+ if (newp) {
+ newp->selector = old->selector;
+ newp->lft = old->lft;
+ newp->curlft = old->curlft;
+ newp->action = old->action;
+ newp->flags = old->flags;
+ newp->xfrm_nr = old->xfrm_nr;
+ newp->index = old->index;
+ memcpy(newp->xfrm_vec, old->xfrm_vec,
+ newp->xfrm_nr*sizeof(struct xfrm_tmpl));
+ write_lock_bh(&xfrm_policy_lock);
+ xfrm_sk_policy_link(newp, dir);
+ write_unlock_bh(&xfrm_policy_lock);
+ }
+ return newp;
+}
+
+int __xfrm_sk_clone_policy(struct sock *sk)
+{
+ struct xfrm_policy *p0, *p1;
+ p0 = sk->policy[0];
+ p1 = sk->policy[1];
+ sk->policy[0] = NULL;
+ sk->policy[1] = NULL;
+ if (p0 && (sk->policy[0] = clone_policy(p0, 0)) == NULL)
+ return -ENOMEM;
+ if (p1 && (sk->policy[1] = clone_policy(p1, 1)) == NULL)
+ return -ENOMEM;
+ return 0;
+}
+
+void __xfrm_sk_free_policy(struct xfrm_policy *pol, int dir)
+{
+ write_lock_bh(&xfrm_policy_lock);
+ xfrm_sk_policy_unlink(pol, dir);
+ write_unlock_bh(&xfrm_policy_lock);
+
+ xfrm_policy_kill(pol);
+ xfrm_pol_put(pol);
+}
+
+/* Resolve list of templates for the flow, given policy. */
+
+static int
+xfrm_tmpl_resolve(struct xfrm_policy *policy, struct flowi *fl,
+ struct xfrm_state **xfrm,
+ unsigned short family)
+{
+ int nx;
+ int i, error;
+ xfrm_address_t *daddr = xfrm_flowi_daddr(fl, family);
+ xfrm_address_t *saddr = xfrm_flowi_saddr(fl, family);
+
+ for (nx=0, i = 0; i < policy->xfrm_nr; i++) {
+ struct xfrm_state *x;
+ xfrm_address_t *remote = daddr;
+ xfrm_address_t *local = saddr;
+ struct xfrm_tmpl *tmpl = &policy->xfrm_vec[i];
+
+ if (tmpl->mode) {
+ remote = &tmpl->id.daddr;
+ local = &tmpl->saddr;
+ }
+
+ x = xfrm_state_find(remote, local, fl, tmpl, policy, &error, family);
+
+ if (x && x->km.state == XFRM_STATE_VALID) {
+ xfrm[nx++] = x;
+ daddr = remote;
+ saddr = local;
+ continue;
+ }
+ if (x) {
+ error = (x->km.state == XFRM_STATE_ERROR ?
+ -EINVAL : -EAGAIN);
+ xfrm_state_put(x);
+ }
+
+ if (!tmpl->optional)
+ goto fail;
+ }
+ return nx;
+
+fail:
+ for (nx--; nx>=0; nx--)
+ xfrm_state_put(xfrm[nx]);
+ return error;
+}
+
+/* Check that the bundle accepts the flow and its components are
+ * still valid.
+ */
+
+static struct dst_entry *
+xfrm_find_bundle(struct flowi *fl, struct rtable *rt, struct xfrm_policy *policy, unsigned short family)
+{
+ struct dst_entry *x;
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ if (unlikely(afinfo == NULL))
+ return ERR_PTR(-EINVAL);
+ x = afinfo->find_bundle(fl, rt, policy);
+ xfrm_policy_put_afinfo(afinfo);
+ return x;
+}
+
+/* Allocate a chain of dst_entry's, attach the known xfrm's, calculate
+ * all the metrics... In short, bundle a bundle.
+ */
+
+static int
+xfrm_bundle_create(struct xfrm_policy *policy, struct xfrm_state **xfrm, int nx,
+ struct flowi *fl, struct dst_entry **dst_p,
+ unsigned short family)
+{
+ int err;
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+ err = afinfo->bundle_create(policy, xfrm, nx, fl, dst_p);
+ xfrm_policy_put_afinfo(afinfo);
+ return err;
+}
+
+/* Main function: finds/creates a bundle for given flow.
+ *
+ * At the moment we eat a raw IP route. Mostly to speed up lookups
+ * on interfaces with disabled IPsec.
+ */
+int xfrm_lookup(struct dst_entry **dst_p, struct flowi *fl,
+ struct sock *sk, int flags)
+{
+ struct xfrm_policy *policy;
+ struct xfrm_state *xfrm[XFRM_MAX_DEPTH];
+ struct rtable *rt = (struct rtable*)*dst_p;
+ struct dst_entry *dst;
+ int nx = 0;
+ int err;
+ u32 genid;
+ u16 family = (*dst_p)->ops->family;
+
+ switch (family) {
+ case AF_INET:
+ if (!fl->fl4_src)
+ fl->fl4_src = rt->rt_src;
+ if (!fl->fl4_dst)
+ fl->fl4_dst = rt->rt_dst;
+ case AF_INET6:
+ /* Still not clear... */
+ default:
+ /* nothing */;
+ }
+
+restart:
+ genid = xfrm_policy_genid;
+ policy = NULL;
+ if (sk && sk->policy[1])
+ policy = xfrm_sk_policy_lookup(sk, XFRM_POLICY_OUT, fl);
+
+ if (!policy) {
+ /* To accelerate a bit... */
+ if ((rt->u.dst.flags & DST_NOXFRM) || !xfrm_policy_list[XFRM_POLICY_OUT])
+ return 0;
+
+ policy = flow_lookup(XFRM_POLICY_OUT, fl, family);
+ }
+
+ if (!policy)
+ return 0;
+
+ policy->curlft.use_time = (unsigned long)xtime.tv_sec;
+
+ switch (policy->action) {
+ case XFRM_POLICY_BLOCK:
+ /* Prohibit the flow */
+ xfrm_pol_put(policy);
+ return -EPERM;
+
+ case XFRM_POLICY_ALLOW:
+ if (policy->xfrm_nr == 0) {
+ /* Flow passes not transformed. */
+ xfrm_pol_put(policy);
+ return 0;
+ }
+
+ /* Try to find matching bundle.
+ *
+ * LATER: help from flow cache. It is optional, this
+ * is required only for output policy.
+ */
+ dst = xfrm_find_bundle(fl, rt, policy, family);
+ if (IS_ERR(dst)) {
+ xfrm_pol_put(policy);
+ return PTR_ERR(dst);
+ }
+
+ if (dst)
+ break;
+
+ nx = xfrm_tmpl_resolve(policy, fl, xfrm, family);
+
+ if (unlikely(nx<0)) {
+ err = nx;
+ if (err == -EAGAIN) {
+ struct task_struct *tsk = current;
+ DECLARE_WAITQUEUE(wait, tsk);
+ if (!flags)
+ goto error;
+
+ __set_task_state(tsk, TASK_INTERRUPTIBLE);
+ add_wait_queue(&km_waitq, &wait);
+ err = xfrm_tmpl_resolve(policy, fl, xfrm, family);
+ if (err == -EAGAIN)
+ schedule();
+ __set_task_state(tsk, TASK_RUNNING);
+ remove_wait_queue(&km_waitq, &wait);
+
+ if (err == -EAGAIN && signal_pending(current)) {
+ err = -ERESTART;
+ goto error;
+ }
+ if (err == -EAGAIN ||
+ genid != xfrm_policy_genid)
+ goto restart;
+ }
+ if (err)
+ goto error;
+ } else if (nx == 0) {
+ /* Flow passes not transformed. */
+ xfrm_pol_put(policy);
+ return 0;
+ }
+
+ dst = &rt->u.dst;
+ err = xfrm_bundle_create(policy, xfrm, nx, fl, &dst, family);
+
+ if (unlikely(err)) {
+ int i;
+ for (i=0; i<nx; i++)
+ xfrm_state_put(xfrm[i]);
+ goto error;
+ }
+
+ write_lock_bh(&policy->lock);
+ if (unlikely(policy->dead)) {
+ /* Wow! While we worked on resolving, this
+ * policy went away. Retry. It is not paranoia;
+ * we just cannot enlist a new bundle onto a dead object.
+ */
+ write_unlock_bh(&policy->lock);
+
+ xfrm_pol_put(policy);
+ if (dst)
+ dst_free(dst);
+ goto restart;
+ }
+ dst->next = policy->bundles;
+ policy->bundles = dst;
+ dst_hold(dst);
+ write_unlock_bh(&policy->lock);
+ }
+ *dst_p = dst;
+ ip_rt_put(rt);
+ xfrm_pol_put(policy);
+ return 0;
+
+error:
+ ip_rt_put(rt);
+ xfrm_pol_put(policy);
+ *dst_p = NULL;
+ return err;
+}
+
+/* When skb is transformed back to its "native" form, we have to
+ * check policy restrictions. At the moment we do this in a maximally
+ * stupid way. Shame on me. :-) Of course, connected sockets must
+ * have the policy cached on them.
+ */
+
+static inline int
+xfrm_state_ok(struct xfrm_tmpl *tmpl, struct xfrm_state *x,
+ unsigned short family)
+{
+ return x->id.proto == tmpl->id.proto &&
+ (x->id.spi == tmpl->id.spi || !tmpl->id.spi) &&
+ x->props.mode == tmpl->mode &&
+ (tmpl->aalgos & (1<<x->props.aalgo)) &&
+ !(x->props.mode && xfrm_state_addr_cmp(tmpl, x, family));
+}
+
+static inline int
+xfrm_policy_ok(struct xfrm_tmpl *tmpl, struct sec_path *sp, int idx,
+ unsigned short family)
+{
+ for (; idx < sp->len; idx++) {
+ if (xfrm_state_ok(tmpl, sp->x[idx].xvec, family))
+ return ++idx;
+ }
+ return -1;
+}
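xfrm_policy_ok() above enforces ordering: each non-optional template must match some sec_path entry at or after the position where the previous template matched, i.e. the templates must appear as an ordered subsequence of the transformations actually applied. The same check reduced to integers, as a sketch:

```c
#include <assert.h>

/* Ordered subsequence check: each required value must appear in path[]
 * at or after the position where the previous one matched, mirroring
 * the xfrm_policy_ok() template walk. Returns 1 if all match in order. */
static int ordered_match(const int *tmpl, int ntmpl,
                         const int *path, int npath)
{
    int i, k = 0;

    for (i = 0; i < ntmpl; i++) {
        while (k < npath && path[k] != tmpl[i])
            k++;
        if (k == npath)
            return 0;   /* template missing or out of order */
        k++;
    }
    return 1;
}
```

This is what the comment in __xfrm_policy_check means by barriers being "implied between each two transformations".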
+
+static int
+_decode_session(struct sk_buff *skb, struct flowi *fl, unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo = xfrm_policy_get_afinfo(family);
+
+ if (unlikely(afinfo == NULL))
+ return -EAFNOSUPPORT;
+
+ afinfo->decode_session(skb, fl);
+ xfrm_policy_put_afinfo(afinfo);
+ return 0;
+}
+
+int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
+ unsigned short family)
+{
+ struct xfrm_policy *pol;
+ struct flowi fl;
+
+ if (_decode_session(skb, &fl, family) < 0)
+ return 0;
+
+ /* First, check used SA against their selectors. */
+ if (skb->sp) {
+ int i;
+
+ for (i=skb->sp->len-1; i>=0; i--) {
+ struct sec_decap_state *xvec = &(skb->sp->x[i]);
+ if (!xfrm_selector_match(&xvec->xvec->sel, &fl, family))
+ return 0;
+
+ /* If there is a post_input processor, try running it */
+ if (xvec->xvec->type->post_input &&
+ (xvec->xvec->type->post_input)(xvec->xvec,
+ &(xvec->decap),
+ skb) != 0)
+ return 0;
+ }
+ }
+
+ pol = NULL;
+ if (sk && sk->policy[dir])
+ pol = xfrm_sk_policy_lookup(sk, dir, &fl);
+
+ if (!pol)
+ pol = flow_lookup(dir, &fl, family);
+
+ if (!pol)
+ return 1;
+
+ pol->curlft.use_time = (unsigned long)xtime.tv_sec;
+
+ if (pol->action == XFRM_POLICY_ALLOW) {
+ if (pol->xfrm_nr != 0) {
+ struct sec_path *sp;
+ static struct sec_path dummy;
+ int i, k;
+
+ if ((sp = skb->sp) == NULL)
+ sp = &dummy;
+
+ /* For each tmpl search corresponding xfrm.
+ * Order is _important_. Later we will implement
+ * some barriers, but at the moment barriers
+ * are implied between each two transformations.
+ */
+ for (i = pol->xfrm_nr-1, k = 0; i >= 0; i--) {
+ if (pol->xfrm_vec[i].optional)
+ continue;
+ k = xfrm_policy_ok(pol->xfrm_vec+i, sp, k, family);
+ if (k < 0)
+ goto reject;
+ }
+ }
+ xfrm_pol_put(pol);
+ return 1;
+ }
+
+reject:
+ xfrm_pol_put(pol);
+ return 0;
+}
+
+int __xfrm_route_forward(struct sk_buff *skb, unsigned short family)
+{
+ struct flowi fl;
+
+ if (_decode_session(skb, &fl, family) < 0)
+ return 0;
+
+ return xfrm_lookup(&skb->dst, &fl, NULL, 0) == 0;
+}
+
+/* Optimize later using cookies and generation ids. */
+
+static struct dst_entry *xfrm_dst_check(struct dst_entry *dst, u32 cookie)
+{
+ struct dst_entry *child = dst;
+
+ while (child) {
+ if (child->obsolete > 0 ||
+ (child->xfrm && child->xfrm->km.state != XFRM_STATE_VALID)) {
+ dst_release(dst);
+ return NULL;
+ }
+ child = child->child;
+ }
+
+ return dst;
+}
+
+static void xfrm_dst_destroy(struct dst_entry *dst)
+{
+ xfrm_state_put(dst->xfrm);
+ dst->xfrm = NULL;
+}
+
+static void xfrm_link_failure(struct sk_buff *skb)
+{
+ /* Impossible. Such a dst must be popped before it reaches the point of failure. */
+ return;
+}
+
+static struct dst_entry *xfrm_negative_advice(struct dst_entry *dst)
+{
+ if (dst) {
+ if (dst->obsolete) {
+ dst_release(dst);
+ dst = NULL;
+ }
+ }
+ return dst;
+}
+
+static void __xfrm_garbage_collect(void)
+{
+ int i;
+ struct xfrm_policy *pol;
+ struct dst_entry *dst, **dstp, *gc_list = NULL;
+
+ read_lock_bh(&xfrm_policy_lock);
+ for (i=0; i<2*XFRM_POLICY_MAX; i++) {
+ for (pol = xfrm_policy_list[i]; pol; pol = pol->next) {
+ write_lock(&pol->lock);
+ dstp = &pol->bundles;
+ while ((dst=*dstp) != NULL) {
+ if (atomic_read(&dst->__refcnt) == 0) {
+ *dstp = dst->next;
+ dst->next = gc_list;
+ gc_list = dst;
+ } else {
+ dstp = &dst->next;
+ }
+ }
+ write_unlock(&pol->lock);
+ }
+ }
+ read_unlock_bh(&xfrm_policy_lock);
+
+ while (gc_list) {
+ dst = gc_list;
+ gc_list = dst->next;
+ dst_free(dst);
+ }
+}
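__xfrm_garbage_collect() uses a common kernel pattern: under the locks it only *unlinks* zero-refcount bundles onto a private list, and frees them after every lock is dropped, since dst_free() must not run with the policy lock held. A lock-free sketch of the unlink-then-free split (struct node and refcnt are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

struct node { struct node *next; int refcnt; };

/* Detach all zero-refcount nodes from *head onto a private gc list,
 * mirroring how __xfrm_garbage_collect defers dst_free() until the
 * policy locks are dropped. Returns the gc list. */
static struct node *collect_unreferenced(struct node **head)
{
    struct node *n, **np = head, *gc = NULL;

    while ((n = *np) != NULL) {
        if (n->refcnt == 0) {
            *np = n->next;   /* unlink from the live list */
            n->next = gc;    /* push onto the gc list */
            gc = n;
        } else {
            np = &n->next;   /* keep; advance the link pointer */
        }
    }
    return gc;
}
```

xfrm_flush_bundles() below follows exactly the same shape, with `bundle_depends_on(dst, x)` as the predicate instead of a zero refcount.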
+
+static int bundle_depends_on(struct dst_entry *dst, struct xfrm_state *x)
+{
+ do {
+ if (dst->xfrm == x)
+ return 1;
+ } while ((dst = dst->child) != NULL);
+ return 0;
+}
+
+int xfrm_flush_bundles(struct xfrm_state *x)
+{
+ int i;
+ struct xfrm_policy *pol;
+ struct dst_entry *dst, **dstp, *gc_list = NULL;
+
+ read_lock_bh(&xfrm_policy_lock);
+ for (i=0; i<2*XFRM_POLICY_MAX; i++) {
+ for (pol = xfrm_policy_list[i]; pol; pol = pol->next) {
+ write_lock(&pol->lock);
+ dstp = &pol->bundles;
+ while ((dst=*dstp) != NULL) {
+ if (bundle_depends_on(dst, x)) {
+ *dstp = dst->next;
+ dst->next = gc_list;
+ gc_list = dst;
+ } else {
+ dstp = &dst->next;
+ }
+ }
+ write_unlock(&pol->lock);
+ }
+ }
+ read_unlock_bh(&xfrm_policy_lock);
+
+ while (gc_list) {
+ dst = gc_list;
+ gc_list = dst->next;
+ dst_free(dst);
+ }
+
+ return 0;
+}
+
+/* Well... that's _TASK_. We need to scan through the transformation
+ * list and figure out what MSS tcp should generate so that the
+ * final datagram fits the mtu. Mama mia... :-)
+ *
+ * Apparently, some easy way exists, but we used to choose the most
+ * bizarre ones. :-) So, raising Kalashnikov... tra-ta-ta.
+ *
+ * Consider this function as something like dark humour. :-)
+ */
+static int xfrm_get_mss(struct dst_entry *dst, u32 mtu)
+{
+ int res = mtu - dst->header_len;
+
+ for (;;) {
+ struct dst_entry *d = dst;
+ int m = res;
+
+ do {
+ struct xfrm_state *x = d->xfrm;
+ if (x) {
+ spin_lock_bh(&x->lock);
+ if (x->km.state == XFRM_STATE_VALID &&
+ x->type && x->type->get_max_size)
+ m = x->type->get_max_size(d->xfrm, m);
+ else
+ m += x->props.header_len;
+ spin_unlock_bh(&x->lock);
+ }
+ } while ((d = d->child) != NULL);
+
+ if (m <= mtu)
+ break;
+ res -= (m - mtu);
+ if (res < 88)
+ return mtu;
+ }
+
+ return res + dst->header_len;
+}
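The loop in xfrm_get_mss() is a fixed-point iteration: start from the MTU, expand the candidate payload by every transform's overhead, and shrink the candidate by the overshoot until the expanded size fits (bailing out below 88 bytes). With constant per-transform overheads the same loop converges immediately, as this standalone sketch shows:

```c
#include <assert.h>

/* Iteratively shrink the payload estimate until payload plus all
 * per-transform overheads fits the MTU -- the same fixed-point loop
 * as xfrm_get_mss(), with constant header costs for illustration. */
static int fit_mss(int mtu, const int *overhead, int n)
{
    int res = mtu;

    for (;;) {
        int m = res, i;

        for (i = 0; i < n; i++)   /* expand by each transform */
            m += overhead[i];
        if (m <= mtu)
            break;                /* expanded size fits: done */
        res -= m - mtu;           /* shrink by the overshoot */
        if (res < 88)             /* give up, as the kernel does */
            return mtu;
    }
    return res;
}
```

The iteration exists because real transforms (via get_max_size) can round the size up, so the overhead is not a simple constant subtraction.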
+
+int xfrm_policy_register_afinfo(struct xfrm_policy_afinfo *afinfo)
+{
+ int err = 0;
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+ if (unlikely(afinfo->family >= NPROTO))
+ return -EAFNOSUPPORT;
+ write_lock(&xfrm_policy_afinfo_lock);
+ if (unlikely(xfrm_policy_afinfo[afinfo->family] != NULL))
+ err = -ENOBUFS;
+ else {
+ struct dst_ops *dst_ops = afinfo->dst_ops;
+ if (likely(dst_ops->kmem_cachep == NULL))
+ dst_ops->kmem_cachep = xfrm_dst_cache;
+ if (likely(dst_ops->check == NULL))
+ dst_ops->check = xfrm_dst_check;
+ if (likely(dst_ops->destroy == NULL))
+ dst_ops->destroy = xfrm_dst_destroy;
+ if (likely(dst_ops->negative_advice == NULL))
+ dst_ops->negative_advice = xfrm_negative_advice;
+ if (likely(dst_ops->link_failure == NULL))
+ dst_ops->link_failure = xfrm_link_failure;
+ if (likely(dst_ops->get_mss == NULL))
+ dst_ops->get_mss = xfrm_get_mss;
+ if (likely(afinfo->garbage_collect == NULL))
+ afinfo->garbage_collect = __xfrm_garbage_collect;
+ xfrm_policy_afinfo[afinfo->family] = afinfo;
+ }
+ write_unlock(&xfrm_policy_afinfo_lock);
+ return err;
+}
+
+int xfrm_policy_unregister_afinfo(struct xfrm_policy_afinfo *afinfo)
+{
+ int err = 0;
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+ if (unlikely(afinfo->family >= NPROTO))
+ return -EAFNOSUPPORT;
+ write_lock(&xfrm_policy_afinfo_lock);
+ if (likely(xfrm_policy_afinfo[afinfo->family] != NULL)) {
+ if (unlikely(xfrm_policy_afinfo[afinfo->family] != afinfo))
+ err = -EINVAL;
+ else {
+ struct dst_ops *dst_ops = afinfo->dst_ops;
+ xfrm_policy_afinfo[afinfo->family] = NULL;
+ dst_ops->kmem_cachep = NULL;
+ dst_ops->check = NULL;
+ dst_ops->destroy = NULL;
+ dst_ops->negative_advice = NULL;
+ dst_ops->link_failure = NULL;
+ dst_ops->get_mss = NULL;
+ afinfo->garbage_collect = NULL;
+ }
+ }
+ write_unlock(&xfrm_policy_afinfo_lock);
+ return err;
+}
+
+struct xfrm_policy_afinfo *xfrm_policy_get_afinfo(unsigned short family)
+{
+ struct xfrm_policy_afinfo *afinfo;
+ if (unlikely(family >= NPROTO))
+ return NULL;
+ read_lock(&xfrm_policy_afinfo_lock);
+ afinfo = xfrm_policy_afinfo[family];
+ if (likely(afinfo != NULL))
+ read_lock(&afinfo->lock);
+ read_unlock(&xfrm_policy_afinfo_lock);
+ return afinfo;
+}
+
+void xfrm_policy_put_afinfo(struct xfrm_policy_afinfo *afinfo)
+{
+ if (unlikely(afinfo == NULL))
+ return;
+ read_unlock(&afinfo->lock);
+}
+
+void __init xfrm_policy_init(void)
+{
+ xfrm_dst_cache = kmem_cache_create("xfrm_dst_cache",
+ sizeof(struct xfrm_dst),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+ if (!xfrm_dst_cache)
+ panic("XFRM: failed to allocate xfrm_dst_cache\n");
+}
+
+void __init xfrm_init(void)
+{
+ xfrm_state_init();
+ flow_cache_init();
+ xfrm_policy_init();
+}
+
diff -Nru a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_state.c Thu May 8 10:41:38 2003
@@ -0,0 +1,803 @@
+/*
+ * xfrm_state.c
+ *
+ * Changes:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * IPv6 support
+ * YOSHIFUJI Hideaki @USAGI
+ * Split up af-specific functions
+ * Derek Atkins <derek@ihtfp.com>
+ * Add UDP Encapsulation
+ *
+ */
+
+#include <net/xfrm.h>
+#include <linux/pfkeyv2.h>
+#include <linux/ipsec.h>
+#include <asm/uaccess.h>
+#include <linux/tqueue.h>
+
+/* Each xfrm_state may be linked to two tables:
+
+ 1. Hash table by (spi,daddr,ah/esp) to find SA by SPI. (input,ctl)
+ 2. Hash table by daddr to find what SAs exist for given
+ destination/tunnel endpoint. (output)
+ */
+
+static spinlock_t xfrm_state_lock = SPIN_LOCK_UNLOCKED;
+
+/* Hash table to find the appropriate SA towards a given target (endpoint
+ * of a tunnel or destination of transport mode) allowed by the selector.
+ *
+ * Main use is finding an SA after the policy has selected a tunnel or
+ * transport mode. It can also be used by the ah/esp icmp error handler
+ * to find the offending SA.
+ */
+static struct list_head xfrm_state_bydst[XFRM_DST_HSIZE];
+static struct list_head xfrm_state_byspi[XFRM_DST_HSIZE];
+
+DECLARE_WAIT_QUEUE_HEAD(km_waitq);
+
+static rwlock_t xfrm_state_afinfo_lock = RW_LOCK_UNLOCKED;
+static struct xfrm_state_afinfo *xfrm_state_afinfo[NPROTO];
+
+static struct tq_struct xfrm_state_gc_work;
+static struct list_head xfrm_state_gc_list = LIST_HEAD_INIT(xfrm_state_gc_list);
+static spinlock_t xfrm_state_gc_lock = SPIN_LOCK_UNLOCKED;
+
+static void __xfrm_state_delete(struct xfrm_state *x);
+
+static void xfrm_state_gc_destroy(struct xfrm_state *x)
+{
+ if (del_timer(&x->timer))
+ BUG();
+ if (x->aalg)
+ kfree(x->aalg);
+ if (x->ealg)
+ kfree(x->ealg);
+ if (x->calg)
+ kfree(x->calg);
+ if (x->encap)
+ kfree(x->encap);
+ if (x->type) {
+ x->type->destructor(x);
+ xfrm_put_type(x->type);
+ }
+ kfree(x);
+ wake_up(&km_waitq);
+}
+
+static void xfrm_state_gc_task(void *data)
+{
+ struct xfrm_state *x;
+ struct list_head *entry, *tmp;
+ struct list_head gc_list = LIST_HEAD_INIT(gc_list);
+
+ spin_lock_bh(&xfrm_state_gc_lock);
+ list_splice_init(&xfrm_state_gc_list, &gc_list);
+ spin_unlock_bh(&xfrm_state_gc_lock);
+
+ list_for_each_safe(entry, tmp, &gc_list) {
+ x = list_entry(entry, struct xfrm_state, bydst);
+ xfrm_state_gc_destroy(x);
+ }
+}
+
+static inline unsigned long make_jiffies(long secs)
+{
+ if (secs >= (MAX_SCHEDULE_TIMEOUT-1)/HZ)
+ return MAX_SCHEDULE_TIMEOUT-1;
+ else
+ return secs*HZ;
+}
+
+static void xfrm_timer_handler(unsigned long data)
+{
+ struct xfrm_state *x = (struct xfrm_state*)data;
+ unsigned long now = (unsigned long)xtime.tv_sec;
+ long next = LONG_MAX;
+ int warn = 0;
+
+ spin_lock(&x->lock);
+ if (x->km.state == XFRM_STATE_DEAD)
+ goto out;
+ if (x->km.state == XFRM_STATE_EXPIRED)
+ goto expired;
+ if (x->lft.hard_add_expires_seconds) {
+ long tmo = x->lft.hard_add_expires_seconds +
+ x->curlft.add_time - now;
+ if (tmo <= 0)
+ goto expired;
+ if (tmo < next)
+ next = tmo;
+ }
+ if (x->lft.hard_use_expires_seconds && x->curlft.use_time) {
+ long tmo = x->lft.hard_use_expires_seconds +
+ x->curlft.use_time - now;
+ if (tmo <= 0)
+ goto expired;
+ if (tmo < next)
+ next = tmo;
+ }
+ if (x->km.dying)
+ goto resched;
+ if (x->lft.soft_add_expires_seconds) {
+ long tmo = x->lft.soft_add_expires_seconds +
+ x->curlft.add_time - now;
+ if (tmo <= 0)
+ warn = 1;
+ else if (tmo < next)
+ next = tmo;
+ }
+ if (x->lft.soft_use_expires_seconds && x->curlft.use_time) {
+ long tmo = x->lft.soft_use_expires_seconds +
+ x->curlft.use_time - now;
+ if (tmo <= 0)
+ warn = 1;
+ else if (tmo < next)
+ next = tmo;
+ }
+
+ if (warn)
+ km_warn_expired(x);
+resched:
+ if (next != LONG_MAX &&
+ !mod_timer(&x->timer, jiffies + make_jiffies(next)))
+ atomic_inc(&x->refcnt);
+ goto out;
+
+expired:
+ if (x->km.state == XFRM_STATE_ACQ && x->id.spi == 0) {
+ x->km.state = XFRM_STATE_EXPIRED;
+ wake_up(&km_waitq);
+ next = 2;
+ goto resched;
+ }
+ if (x->id.spi != 0)
+ km_expired(x);
+ __xfrm_state_delete(x);
+
+out:
+ spin_unlock(&x->lock);
+ xfrm_state_put(x);
+}
+
+struct xfrm_state *xfrm_state_alloc(void)
+{
+ struct xfrm_state *x;
+
+ x = kmalloc(sizeof(struct xfrm_state), GFP_ATOMIC);
+
+ if (x) {
+ memset(x, 0, sizeof(struct xfrm_state));
+ atomic_set(&x->refcnt, 1);
+ INIT_LIST_HEAD(&x->bydst);
+ INIT_LIST_HEAD(&x->byspi);
+ init_timer(&x->timer);
+ x->timer.function = xfrm_timer_handler;
+ x->timer.data = (unsigned long)x;
+ x->curlft.add_time = (unsigned long)xtime.tv_sec;
+ x->lft.soft_byte_limit = XFRM_INF;
+ x->lft.soft_packet_limit = XFRM_INF;
+ x->lft.hard_byte_limit = XFRM_INF;
+ x->lft.hard_packet_limit = XFRM_INF;
+ x->lock = SPIN_LOCK_UNLOCKED;
+ }
+ return x;
+}
+
+void __xfrm_state_destroy(struct xfrm_state *x)
+{
+ BUG_TRAP(x->km.state == XFRM_STATE_DEAD);
+
+ spin_lock_bh(&xfrm_state_gc_lock);
+ list_add(&x->bydst, &xfrm_state_gc_list);
+ spin_unlock_bh(&xfrm_state_gc_lock);
+ schedule_task(&xfrm_state_gc_work);
+}
+
+static void __xfrm_state_delete(struct xfrm_state *x)
+{
+ if (x->km.state != XFRM_STATE_DEAD) {
+ x->km.state = XFRM_STATE_DEAD;
+ spin_lock(&xfrm_state_lock);
+ list_del(&x->bydst);
+ atomic_dec(&x->refcnt);
+ if (x->id.spi) {
+ list_del(&x->byspi);
+ atomic_dec(&x->refcnt);
+ }
+ spin_unlock(&xfrm_state_lock);
+ if (del_timer(&x->timer))
+ atomic_dec(&x->refcnt);
+
+ /* The number two in this test is the reference
+ * mentioned in the comment below plus the reference
+ * our caller holds. A larger value means that
+ * there are DSTs attached to this xfrm_state.
+ */
+ if (atomic_read(&x->refcnt) > 2)
+ xfrm_flush_bundles(x);
+
+ /* All xfrm_state objects are created by one of two possible
+ * paths:
+ *
+ * 1) xfrm_state_alloc --> xfrm_state_insert
+ * 2) xfrm_state_lookup --> xfrm_state_insert
+ *
+ * The xfrm_state_lookup or xfrm_state_alloc call gives a
+ * reference, and that is what we are dropping here.
+ */
+ atomic_dec(&x->refcnt);
+ }
+}
+
+void xfrm_state_delete(struct xfrm_state *x)
+{
+ spin_lock_bh(&x->lock);
+ __xfrm_state_delete(x);
+ spin_unlock_bh(&x->lock);
+}
+
+void xfrm_state_flush(u8 proto)
+{
+ int i;
+ struct xfrm_state *x;
+
+ spin_lock_bh(&xfrm_state_lock);
+ for (i = 0; i < XFRM_DST_HSIZE; i++) {
+restart:
+ list_for_each_entry(x, xfrm_state_bydst+i, bydst) {
+ if (proto == IPSEC_PROTO_ANY || x->id.proto == proto) {
+ atomic_inc(&x->refcnt);
+ spin_unlock_bh(&xfrm_state_lock);
+
+ xfrm_state_delete(x);
+ xfrm_state_put(x);
+
+ spin_lock_bh(&xfrm_state_lock);
+ goto restart;
+ }
+ }
+ }
+ spin_unlock_bh(&xfrm_state_lock);
+ wake_up(&km_waitq);
+}
+
+static int
+xfrm_init_tempsel(struct xfrm_state *x, struct flowi *fl,
+ struct xfrm_tmpl *tmpl,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ unsigned short family)
+{
+ struct xfrm_state_afinfo *afinfo = xfrm_state_get_afinfo(family);
+ if (!afinfo)
+ return -1;
+ afinfo->init_tempsel(x, fl, tmpl, daddr, saddr);
+ xfrm_state_put_afinfo(afinfo);
+ return 0;
+}
+
+struct xfrm_state *
+xfrm_state_find(xfrm_address_t *daddr, xfrm_address_t *saddr,
+ struct flowi *fl, struct xfrm_tmpl *tmpl,
+ struct xfrm_policy *pol, int *err,
+ unsigned short family)
+{
+ unsigned h = xfrm_dst_hash(daddr, family);
+ struct xfrm_state *x;
+ int acquire_in_progress = 0;
+ int error = 0;
+ struct xfrm_state *best = NULL;
+
+ spin_lock_bh(&xfrm_state_lock);
+ list_for_each_entry(x, xfrm_state_bydst+h, bydst) {
+ if (x->props.family == family &&
+ x->props.reqid == tmpl->reqid &&
+ xfrm_state_addr_check(x, daddr, saddr, family) &&
+ tmpl->mode == x->props.mode &&
+ tmpl->id.proto == x->id.proto) {
+ /* Resolution logic:
+ 1. There is a valid state with a matching selector.
+ Done.
+ 2. There is a valid state with an inappropriate
+ selector. Skip it.
+
+ Entering the area of "sysdeps".
+
+ 3. If the state is not valid, its selector is
+ temporary and matches only the session which
+ triggered the previous resolution. The key
+ manager will do something to install a state
+ with the proper selector.
+ */
+ if (x->km.state == XFRM_STATE_VALID) {
+ if (!xfrm_selector_match(&x->sel, fl, family))
+ continue;
+ if (!best ||
+ best->km.dying > x->km.dying ||
+ (best->km.dying == x->km.dying &&
+ best->curlft.add_time < x->curlft.add_time))
+ best = x;
+ } else if (x->km.state == XFRM_STATE_ACQ) {
+ acquire_in_progress = 1;
+ } else if (x->km.state == XFRM_STATE_ERROR ||
+ x->km.state == XFRM_STATE_EXPIRED) {
+ if (xfrm_selector_match(&x->sel, fl, family))
+ error = 1;
+ }
+ }
+ }
+
+ if (best) {
+ atomic_inc(&best->refcnt);
+ spin_unlock_bh(&xfrm_state_lock);
+ return best;
+ }
+
+ x = NULL;
+ if (!error && !acquire_in_progress &&
+ ((x = xfrm_state_alloc()) != NULL)) {
+ /* Initialize temporary selector matching only
+ * to current session. */
+ xfrm_init_tempsel(x, fl, tmpl, daddr, saddr, family);
+
+ if (km_query(x, tmpl, pol) == 0) {
+ x->km.state = XFRM_STATE_ACQ;
+ list_add_tail(&x->bydst, xfrm_state_bydst+h);
+ atomic_inc(&x->refcnt);
+ if (x->id.spi) {
+ h = xfrm_spi_hash(&x->id.daddr, x->id.spi, x->id.proto, family);
+ list_add(&x->byspi, xfrm_state_byspi+h);
+ atomic_inc(&x->refcnt);
+ }
+ x->lft.hard_add_expires_seconds = XFRM_ACQ_EXPIRES;
+ atomic_inc(&x->refcnt);
+ mod_timer(&x->timer, jiffies + XFRM_ACQ_EXPIRES*HZ);
+ } else {
+ x->km.state = XFRM_STATE_DEAD;
+ xfrm_state_put(x);
+ x = NULL;
+ error = 1;
+ }
+ }
+ spin_unlock_bh(&xfrm_state_lock);
+ if (!x)
+ *err = acquire_in_progress ? -EAGAIN :
+ (error ? -ESRCH : -ENOMEM);
+ return x;
+}
+
+void xfrm_state_insert(struct xfrm_state *x)
+{
+ unsigned h = xfrm_dst_hash(&x->id.daddr, x->props.family);
+
+ spin_lock_bh(&xfrm_state_lock);
+ list_add(&x->bydst, xfrm_state_bydst+h);
+ atomic_inc(&x->refcnt);
+
+ h = xfrm_spi_hash(&x->id.daddr, x->id.spi, x->id.proto, x->props.family);
+
+ list_add(&x->byspi, xfrm_state_byspi+h);
+ atomic_inc(&x->refcnt);
+
+ if (!mod_timer(&x->timer, jiffies + HZ))
+ atomic_inc(&x->refcnt);
+
+ spin_unlock_bh(&xfrm_state_lock);
+ wake_up(&km_waitq);
+}
+
+int xfrm_state_check_expire(struct xfrm_state *x)
+{
+ if (!x->curlft.use_time)
+ x->curlft.use_time = (unsigned long)xtime.tv_sec;
+
+ if (x->km.state != XFRM_STATE_VALID)
+ return -EINVAL;
+
+ if (x->curlft.bytes >= x->lft.hard_byte_limit ||
+ x->curlft.packets >= x->lft.hard_packet_limit) {
+ km_expired(x);
+ if (!mod_timer(&x->timer, jiffies + XFRM_ACQ_EXPIRES*HZ))
+ atomic_inc(&x->refcnt);
+ return -EINVAL;
+ }
+
+ if (!x->km.dying &&
+ (x->curlft.bytes >= x->lft.soft_byte_limit ||
+ x->curlft.packets >= x->lft.soft_packet_limit))
+ km_warn_expired(x);
+ return 0;
+}
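As a side note, the hard/soft lifetime test in xfrm_state_check_expire() above can be sketched in isolation: exceeding a hard byte or packet limit invalidates the state, while exceeding only a soft limit merely warns the key manager. This is a minimal userspace sketch; `check_limits`, `struct lifetime` and `LIFETIME_INF` are hypothetical names standing in for the kernel's lifetime fields and XFRM_INF.

```c
#include <assert.h>
#include <stdint.h>

#define LIFETIME_INF UINT64_MAX   /* counterpart of XFRM_INF: "no limit" */

enum lft_verdict { LFT_OK = 0, LFT_SOFT = 1, LFT_HARD = 2 };

struct lifetime {
    uint64_t soft_bytes, hard_bytes;
    uint64_t soft_packets, hard_packets;
};

/* Hard limits dominate soft limits, mirroring the order of the
 * tests in xfrm_state_check_expire(). */
static enum lft_verdict check_limits(const struct lifetime *l,
                                     uint64_t bytes, uint64_t packets)
{
    if (bytes >= l->hard_bytes || packets >= l->hard_packets)
        return LFT_HARD;     /* state must expire now */
    if (bytes >= l->soft_bytes || packets >= l->soft_packets)
        return LFT_SOFT;     /* warn the key manager, keep going */
    return LFT_OK;
}
```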
+
+int xfrm_state_check_space(struct xfrm_state *x, struct sk_buff *skb)
+{
+ int nhead = x->props.header_len + LL_RESERVED_SPACE(skb->dst->dev)
+ - skb_headroom(skb);
+
+ if (nhead > 0)
+ return pskb_expand_head(skb, nhead, 0, GFP_ATOMIC);
+
+ /* Check tail too... */
+ return 0;
+}
+
+struct xfrm_state *
+xfrm_state_lookup(xfrm_address_t *daddr, u32 spi, u8 proto,
+ unsigned short family)
+{
+ struct xfrm_state *x;
+ struct xfrm_state_afinfo *afinfo = xfrm_state_get_afinfo(family);
+ if (!afinfo)
+ return NULL;
+
+ spin_lock_bh(&xfrm_state_lock);
+ x = afinfo->state_lookup(daddr, spi, proto);
+ spin_unlock_bh(&xfrm_state_lock);
+ xfrm_state_put_afinfo(afinfo);
+ return x;
+}
+
+struct xfrm_state *
+xfrm_find_acq(u8 mode, u16 reqid, u8 proto,
+ xfrm_address_t *daddr, xfrm_address_t *saddr,
+ int create, unsigned short family)
+{
+ struct xfrm_state *x;
+ struct xfrm_state_afinfo *afinfo = xfrm_state_get_afinfo(family);
+ if (!afinfo)
+ return NULL;
+
+ spin_lock_bh(&xfrm_state_lock);
+ x = afinfo->find_acq(mode, reqid, proto, daddr, saddr, create);
+ spin_unlock_bh(&xfrm_state_lock);
+ xfrm_state_put_afinfo(afinfo);
+ return x;
+}
+
+/* Silly enough, but I'm too lazy to build a resolution list. */
+
+struct xfrm_state * xfrm_find_acq_byseq(u32 seq)
+{
+ int i;
+ struct xfrm_state *x;
+
+ spin_lock_bh(&xfrm_state_lock);
+ for (i = 0; i < XFRM_DST_HSIZE; i++) {
+ list_for_each_entry(x, xfrm_state_bydst+i, bydst) {
+ if (x->km.seq == seq) {
+ atomic_inc(&x->refcnt);
+ spin_unlock_bh(&xfrm_state_lock);
+ return x;
+ }
+ }
+ }
+ spin_unlock_bh(&xfrm_state_lock);
+ return NULL;
+}
+
+u32 xfrm_get_acqseq(void)
+{
+ u32 res;
+ static u32 acqseq;
+ static spinlock_t acqseq_lock = SPIN_LOCK_UNLOCKED;
+
+ spin_lock_bh(&acqseq_lock);
+ res = (++acqseq ? : ++acqseq);
+ spin_unlock_bh(&acqseq_lock);
+ return res;
+}
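The GNU `(++acqseq ? : ++acqseq)` expression above hands out a nonzero 32-bit sequence number, incrementing once more when the counter wraps to zero. A portable userspace sketch of the same idiom (`next_acqseq` is a hypothetical name; the kernel version also holds a spinlock around the update):

```c
#include <assert.h>
#include <stdint.h>

/* Increment the counter; if the result wrapped to zero, increment
 * again so that 0 is never handed out as an acquire sequence number. */
static uint32_t next_acqseq(uint32_t *counter)
{
    uint32_t res = ++*counter;
    if (res == 0)              /* wrapped: 0 is reserved, skip it */
        res = ++*counter;
    return res;
}
```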
+
+void
+xfrm_alloc_spi(struct xfrm_state *x, u32 minspi, u32 maxspi)
+{
+ u32 h;
+ struct xfrm_state *x0;
+
+ if (x->id.spi)
+ return;
+
+ if (minspi == maxspi) {
+ x0 = xfrm_state_lookup(&x->id.daddr, minspi, x->id.proto, x->props.family);
+ if (x0) {
+ xfrm_state_put(x0);
+ return;
+ }
+ x->id.spi = minspi;
+ } else {
+ u32 spi = 0;
+ minspi = ntohl(minspi);
+ maxspi = ntohl(maxspi);
+ for (h=0; h<maxspi-minspi+1; h++) {
+ spi = minspi + net_random()%(maxspi-minspi+1);
+ x0 = xfrm_state_lookup(&x->id.daddr, htonl(spi), x->id.proto, x->props.family);
+ if (x0 == NULL)
+ break;
+ xfrm_state_put(x0);
+ }
+ x->id.spi = htonl(spi);
+ }
+ if (x->id.spi) {
+ spin_lock_bh(&xfrm_state_lock);
+ h = xfrm_spi_hash(&x->id.daddr, x->id.spi, x->id.proto, x->props.family);
+ list_add(&x->byspi, xfrm_state_byspi+h);
+ atomic_inc(&x->refcnt);
+ spin_unlock_bh(&xfrm_state_lock);
+ wake_up(&km_waitq);
+ }
+}
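The SPI-selection loop in xfrm_alloc_spi() above probes up to (maxspi - minspi + 1) random values in the range and keeps the first one not already in use. A simplified userspace sketch, assuming in-use SPIs are given as a plain array instead of the byspi hash lookup (`pick_spi` and `spi_in_use` are hypothetical names, and this version returns 0 on exhaustion rather than keeping the last probe):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Linear scan standing in for xfrm_state_lookup() on the byspi hash. */
static int spi_in_use(const uint32_t *used, size_t n, uint32_t spi)
{
    for (size_t i = 0; i < n; i++)
        if (used[i] == spi)
            return 1;
    return 0;
}

/* Random probing over [minspi, maxspi]; bounded retries mean the
 * search may miss a free value when probes collide, as in the kernel. */
static uint32_t pick_spi(uint32_t minspi, uint32_t maxspi,
                         const uint32_t *used, size_t n)
{
    uint32_t range = maxspi - minspi + 1;

    if (minspi == maxspi)
        return spi_in_use(used, n, minspi) ? 0 : minspi;
    for (uint32_t h = 0; h < range; h++) {
        uint32_t spi = minspi + (uint32_t)rand() % range;
        if (!spi_in_use(used, n, spi))
            return spi;
    }
    return 0;  /* all probes collided; caller treats 0 as failure */
}
```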
+
+int xfrm_state_walk(u8 proto, int (*func)(struct xfrm_state *, int, void*),
+ void *data)
+{
+ int i;
+ struct xfrm_state *x;
+ int count = 0;
+ int err = 0;
+
+ spin_lock_bh(&xfrm_state_lock);
+ for (i = 0; i < XFRM_DST_HSIZE; i++) {
+ list_for_each_entry(x, xfrm_state_bydst+i, bydst) {
+ if (proto == IPSEC_PROTO_ANY || x->id.proto == proto)
+ count++;
+ }
+ }
+ if (count == 0) {
+ err = -ENOENT;
+ goto out;
+ }
+
+ for (i = 0; i < XFRM_DST_HSIZE; i++) {
+ list_for_each_entry(x, xfrm_state_bydst+i, bydst) {
+ if (proto != IPSEC_PROTO_ANY && x->id.proto != proto)
+ continue;
+ err = func(x, --count, data);
+ if (err)
+ goto out;
+ }
+ }
+out:
+ spin_unlock_bh(&xfrm_state_lock);
+ return err;
+}
+
+
+int xfrm_replay_check(struct xfrm_state *x, u32 seq)
+{
+ u32 diff;
+
+ seq = ntohl(seq);
+
+ if (unlikely(seq == 0))
+ return -EINVAL;
+
+ if (likely(seq > x->replay.seq))
+ return 0;
+
+ diff = x->replay.seq - seq;
+ if (diff >= x->props.replay_window) {
+ x->stats.replay_window++;
+ return -EINVAL;
+ }
+
+ if (x->replay.bitmap & (1U << diff)) {
+ x->stats.replay++;
+ return -EINVAL;
+ }
+ return 0;
+}
+
+void xfrm_replay_advance(struct xfrm_state *x, u32 seq)
+{
+ u32 diff;
+
+ seq = ntohl(seq);
+
+ if (seq > x->replay.seq) {
+ diff = seq - x->replay.seq;
+ if (diff < x->props.replay_window)
+ x->replay.bitmap = ((x->replay.bitmap) << diff) | 1;
+ else
+ x->replay.bitmap = 1;
+ x->replay.seq = seq;
+ } else {
+ diff = x->replay.seq - seq;
+ x->replay.bitmap |= (1U << diff);
+ }
+}
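The pair xfrm_replay_check()/xfrm_replay_advance() above implements a classic 32-bit sliding-window anti-replay scheme. A self-contained userspace sketch with host-order sequence numbers (the kernel converts from network order first; `struct replay_state`, `replay_check` and `replay_advance` are hypothetical names):

```c
#include <assert.h>
#include <stdint.h>

struct replay_state {
    uint32_t seq;      /* highest sequence number seen so far */
    uint32_t bitmap;   /* bit i set => (seq - i) already seen */
    uint32_t window;   /* window size, at most 32 */
};

/* Returns 0 if seq is acceptable, -1 if it is invalid or a replay. */
static int replay_check(const struct replay_state *r, uint32_t seq)
{
    uint32_t diff;

    if (seq == 0)
        return -1;            /* 0 is never a valid sequence number */
    if (seq > r->seq)
        return 0;             /* ahead of the window: always fresh */
    diff = r->seq - seq;
    if (diff >= r->window)
        return -1;            /* fell off the back of the window */
    if (r->bitmap & (1U << diff))
        return -1;            /* bit already set: replay */
    return 0;
}

/* Record seq as seen, sliding the window forward if needed. */
static void replay_advance(struct replay_state *r, uint32_t seq)
{
    if (seq > r->seq) {
        uint32_t diff = seq - r->seq;
        r->bitmap = (diff < r->window) ? (r->bitmap << diff) | 1 : 1;
        r->seq = seq;
    } else {
        r->bitmap |= 1U << (r->seq - seq);  /* mark inside the window */
    }
}
```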
+
+int xfrm_check_selectors(struct xfrm_state **x, int n, struct flowi *fl)
+{
+ int i;
+
+ for (i=0; i<n; i++) {
+ int match;
+ match = xfrm_selector_match(&x[i]->sel, fl, x[i]->props.family);
+ if (!match)
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static struct list_head xfrm_km_list = LIST_HEAD_INIT(xfrm_km_list);
+static rwlock_t xfrm_km_lock = RW_LOCK_UNLOCKED;
+
+void km_warn_expired(struct xfrm_state *x)
+{
+ struct xfrm_mgr *km;
+
+ x->km.dying = 1;
+ read_lock(&xfrm_km_lock);
+ list_for_each_entry(km, &xfrm_km_list, list)
+ km->notify(x, 0);
+ read_unlock(&xfrm_km_lock);
+}
+
+void km_expired(struct xfrm_state *x)
+{
+ struct xfrm_mgr *km;
+
+ x->km.state = XFRM_STATE_EXPIRED;
+
+ read_lock(&xfrm_km_lock);
+ list_for_each_entry(km, &xfrm_km_list, list)
+ km->notify(x, 1);
+ read_unlock(&xfrm_km_lock);
+ wake_up(&km_waitq);
+}
+
+int km_query(struct xfrm_state *x, struct xfrm_tmpl *t, struct xfrm_policy *pol)
+{
+ int err = -EINVAL;
+ struct xfrm_mgr *km;
+
+ read_lock(&xfrm_km_lock);
+ list_for_each_entry(km, &xfrm_km_list, list) {
+ err = km->acquire(x, t, pol, XFRM_POLICY_OUT);
+ if (!err)
+ break;
+ }
+ read_unlock(&xfrm_km_lock);
+ return err;
+}
+
+int km_new_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr, u16 sport)
+{
+ int err = -EINVAL;
+ struct xfrm_mgr *km;
+
+ read_lock(&xfrm_km_lock);
+ list_for_each_entry(km, &xfrm_km_list, list) {
+ if (km->new_mapping)
+ err = km->new_mapping(x, ipaddr, sport);
+ if (!err)
+ break;
+ }
+ read_unlock(&xfrm_km_lock);
+ return err;
+}
+
+int xfrm_user_policy(struct sock *sk, int optname, u8 *optval, int optlen)
+{
+ int err;
+ u8 *data;
+ struct xfrm_mgr *km;
+ struct xfrm_policy *pol = NULL;
+
+ if (optlen <= 0 || optlen > PAGE_SIZE)
+ return -EMSGSIZE;
+
+ data = kmalloc(optlen, GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ err = -EFAULT;
+ if (copy_from_user(data, optval, optlen))
+ goto out;
+
+ err = -EINVAL;
+ read_lock(&xfrm_km_lock);
+ list_for_each_entry(km, &xfrm_km_list, list) {
+ pol = km->compile_policy(sk->family, optname, data, optlen, &err);
+ if (err >= 0)
+ break;
+ }
+ read_unlock(&xfrm_km_lock);
+
+ if (err >= 0) {
+ xfrm_sk_policy_insert(sk, err, pol);
+ err = 0;
+ }
+
+out:
+ kfree(data);
+ return err;
+}
+
+int xfrm_register_km(struct xfrm_mgr *km)
+{
+ write_lock_bh(&xfrm_km_lock);
+ list_add_tail(&km->list, &xfrm_km_list);
+ write_unlock_bh(&xfrm_km_lock);
+ return 0;
+}
+
+int xfrm_unregister_km(struct xfrm_mgr *km)
+{
+ write_lock_bh(&xfrm_km_lock);
+ list_del(&km->list);
+ write_unlock_bh(&xfrm_km_lock);
+ return 0;
+}
+
+int xfrm_state_register_afinfo(struct xfrm_state_afinfo *afinfo)
+{
+ int err = 0;
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+ if (unlikely(afinfo->family >= NPROTO))
+ return -EAFNOSUPPORT;
+ write_lock(&xfrm_state_afinfo_lock);
+ if (unlikely(xfrm_state_afinfo[afinfo->family] != NULL))
+ err = -ENOBUFS;
+ else {
+ afinfo->state_bydst = xfrm_state_bydst;
+ afinfo->state_byspi = xfrm_state_byspi;
+ xfrm_state_afinfo[afinfo->family] = afinfo;
+ }
+ write_unlock(&xfrm_state_afinfo_lock);
+ return err;
+}
+
+int xfrm_state_unregister_afinfo(struct xfrm_state_afinfo *afinfo)
+{
+ int err = 0;
+ if (unlikely(afinfo == NULL))
+ return -EINVAL;
+ if (unlikely(afinfo->family >= NPROTO))
+ return -EAFNOSUPPORT;
+ write_lock(&xfrm_state_afinfo_lock);
+ if (likely(xfrm_state_afinfo[afinfo->family] != NULL)) {
+ if (unlikely(xfrm_state_afinfo[afinfo->family] != afinfo))
+ err = -EINVAL;
+ else {
+ xfrm_state_afinfo[afinfo->family] = NULL;
+ afinfo->state_byspi = NULL;
+ afinfo->state_bydst = NULL;
+ }
+ }
+ write_unlock(&xfrm_state_afinfo_lock);
+ return err;
+}
+
+struct xfrm_state_afinfo *xfrm_state_get_afinfo(unsigned short family)
+{
+ struct xfrm_state_afinfo *afinfo;
+ if (unlikely(family >= NPROTO))
+ return NULL;
+ read_lock(&xfrm_state_afinfo_lock);
+ afinfo = xfrm_state_afinfo[family];
+ if (likely(afinfo != NULL))
+ read_lock(&afinfo->lock);
+ read_unlock(&xfrm_state_afinfo_lock);
+ return afinfo;
+}
+
+void xfrm_state_put_afinfo(struct xfrm_state_afinfo *afinfo)
+{
+ if (unlikely(afinfo == NULL))
+ return;
+ read_unlock(&afinfo->lock);
+}
+
+void __init xfrm_state_init(void)
+{
+ int i;
+
+ for (i=0; i<XFRM_DST_HSIZE; i++) {
+ INIT_LIST_HEAD(&xfrm_state_bydst[i]);
+ INIT_LIST_HEAD(&xfrm_state_byspi[i]);
+ }
+ INIT_TQUEUE(&xfrm_state_gc_work, xfrm_state_gc_task, NULL);
+}
+
diff -Nru a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
--- /dev/null Wed Dec 31 16:00:00 1969
+++ b/net/xfrm/xfrm_user.c Thu May 8 10:41:38 2003
@@ -0,0 +1,1121 @@
+/* xfrm_user.c: User interface to configure xfrm engine.
+ *
+ * Copyright (C) 2002 David S. Miller (davem@redhat.com)
+ *
+ * Changes:
+ * Mitsuru KANDA @USAGI
+ * Kazunori MIYAZAWA @USAGI
+ * Kunihiro Ishiguro
+ * IPv6 support
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/socket.h>
+#include <linux/string.h>
+#include <linux/net.h>
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+#include <linux/pfkeyv2.h>
+#include <linux/ipsec.h>
+#include <linux/init.h>
+#include <net/sock.h>
+#include <net/xfrm.h>
+#include <asm/uaccess.h>
+
+static struct sock *xfrm_nl;
+
+static int verify_one_alg(struct rtattr **xfrma, enum xfrm_attr_type_t type)
+{
+ struct rtattr *rt = xfrma[type - 1];
+ struct xfrm_algo *algp;
+
+ if (!rt)
+ return 0;
+
+ if ((rt->rta_len - sizeof(*rt)) < sizeof(*algp))
+ return -EINVAL;
+
+ algp = RTA_DATA(rt);
+ switch (type) {
+ case XFRMA_ALG_AUTH:
+ if (!algp->alg_key_len &&
+ strcmp(algp->alg_name, "digest_null") != 0)
+ return -EINVAL;
+ break;
+
+ case XFRMA_ALG_CRYPT:
+ if (!algp->alg_key_len &&
+ strcmp(algp->alg_name, "cipher_null") != 0)
+ return -EINVAL;
+ break;
+
+ case XFRMA_ALG_COMP:
+ /* Zero length keys are legal. */
+ break;
+
+ default:
+ return -EINVAL;
+ };
+
+ algp->alg_name[CRYPTO_MAX_ALG_NAME - 1] = '\0';
+ return 0;
+}
+
+static int verify_encap_tmpl(struct rtattr **xfrma)
+{
+ struct rtattr *rt = xfrma[XFRMA_ENCAP - 1];
+ struct xfrm_encap_tmpl *encap;
+
+ if (!rt)
+ return 0;
+
+ if ((rt->rta_len - sizeof(*rt)) < sizeof(*encap))
+ return -EINVAL;
+
+ return 0;
+}
+
+static int verify_newsa_info(struct xfrm_usersa_info *p,
+ struct rtattr **xfrma)
+{
+ int err;
+
+ err = -EINVAL;
+ switch (p->family) {
+ case AF_INET:
+ break;
+
+ case AF_INET6:
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ break;
+#else
+ err = -EAFNOSUPPORT;
+ goto out;
+#endif
+
+ default:
+ goto out;
+ };
+
+ err = -EINVAL;
+ switch (p->id.proto) {
+ case IPPROTO_AH:
+ if (!xfrma[XFRMA_ALG_AUTH-1] ||
+ xfrma[XFRMA_ALG_CRYPT-1] ||
+ xfrma[XFRMA_ALG_COMP-1])
+ goto out;
+ break;
+
+ case IPPROTO_ESP:
+ if ((!xfrma[XFRMA_ALG_AUTH-1] &&
+ !xfrma[XFRMA_ALG_CRYPT-1]) ||
+ xfrma[XFRMA_ALG_COMP-1])
+ goto out;
+ break;
+
+ case IPPROTO_COMP:
+ if (!xfrma[XFRMA_ALG_COMP-1] ||
+ xfrma[XFRMA_ALG_AUTH-1] ||
+ xfrma[XFRMA_ALG_CRYPT-1])
+ goto out;
+ break;
+
+ default:
+ goto out;
+ };
+
+ if ((err = verify_one_alg(xfrma, XFRMA_ALG_AUTH)))
+ goto out;
+ if ((err = verify_one_alg(xfrma, XFRMA_ALG_CRYPT)))
+ goto out;
+ if ((err = verify_one_alg(xfrma, XFRMA_ALG_COMP)))
+ goto out;
+ if ((err = verify_encap_tmpl(xfrma)))
+ goto out;
+
+ err = -EINVAL;
+ switch (p->mode) {
+ case 0:
+ case 1:
+ break;
+
+ default:
+ goto out;
+ };
+
+ err = 0;
+
+out:
+ return err;
+}
+
+static int attach_one_algo(struct xfrm_algo **algpp, struct rtattr *u_arg)
+{
+ struct rtattr *rta = u_arg;
+ struct xfrm_algo *p, *ualg;
+
+ if (!rta)
+ return 0;
+
+ ualg = RTA_DATA(rta);
+ p = kmalloc(sizeof(*ualg) + ualg->alg_key_len, GFP_KERNEL);
+ if (!p)
+ return -ENOMEM;
+
+ memcpy(p, ualg, sizeof(*ualg) + ualg->alg_key_len);
+ *algpp = p;
+ return 0;
+}
+
+static int attach_encap_tmpl(struct xfrm_encap_tmpl **encapp, struct rtattr *u_arg)
+{
+ struct rtattr *rta = u_arg;
+ struct xfrm_encap_tmpl *p, *uencap;
+
+ if (!rta)
+ return 0;
+
+ uencap = RTA_DATA(rta);
+ p = kmalloc(sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return -ENOMEM;
+
+ memcpy(p, uencap, sizeof(*p));
+ *encapp = p;
+ return 0;
+}
+
+static void copy_from_user_state(struct xfrm_state *x, struct xfrm_usersa_info *p)
+{
+ memcpy(&x->id, &p->id, sizeof(x->id));
+ memcpy(&x->sel, &p->sel, sizeof(x->sel));
+ memcpy(&x->lft, &p->lft, sizeof(x->lft));
+ x->props.mode = p->mode;
+ x->props.replay_window = p->replay_window;
+ x->props.reqid = p->reqid;
+ x->props.family = p->family;
+ x->props.saddr = x->sel.saddr;
+}
+
+static struct xfrm_state *xfrm_state_construct(struct xfrm_usersa_info *p,
+ struct rtattr **xfrma,
+ int *errp)
+{
+ struct xfrm_state *x = xfrm_state_alloc();
+ int err = -ENOMEM;
+
+ if (!x)
+ goto error_no_put;
+
+ copy_from_user_state(x, p);
+
+ if ((err = attach_one_algo(&x->aalg, xfrma[XFRMA_ALG_AUTH-1])))
+ goto error;
+ if ((err = attach_one_algo(&x->ealg, xfrma[XFRMA_ALG_CRYPT-1])))
+ goto error;
+ if ((err = attach_one_algo(&x->calg, xfrma[XFRMA_ALG_COMP-1])))
+ goto error;
+ if ((err = attach_encap_tmpl(&x->encap, xfrma[XFRMA_ENCAP-1])))
+ goto error;
+
+ err = -ENOENT;
+ x->type = xfrm_get_type(x->id.proto, x->props.family);
+ if (x->type == NULL)
+ goto error;
+
+ err = x->type->init_state(x, NULL);
+ if (err)
+ goto error;
+
+ x->curlft.add_time = (unsigned long) xtime.tv_sec;
+ x->km.state = XFRM_STATE_VALID;
+ x->km.seq = p->seq;
+
+ return x;
+
+error:
+ xfrm_state_put(x);
+error_no_put:
+ *errp = err;
+ return NULL;
+}
+
+static int xfrm_add_sa(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_usersa_info *p = NLMSG_DATA(nlh);
+ struct xfrm_state *x, *x1;
+ int err;
+
+ err = verify_newsa_info(p, (struct rtattr **) xfrma);
+ if (err)
+ return err;
+
+ x = xfrm_state_construct(p, (struct rtattr **) xfrma, &err);
+ if (!x)
+ return err;
+
+ x1 = xfrm_state_lookup(&x->props.saddr, x->id.spi, x->id.proto, x->props.family);
+ if (x1) {
+ xfrm_state_put(x);
+ xfrm_state_put(x1);
+ return -EEXIST;
+ }
+
+ xfrm_state_insert(x);
+
+ return 0;
+}
+
+static int xfrm_del_sa(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_state *x;
+ struct xfrm_usersa_id *p = NLMSG_DATA(nlh);
+
+ x = xfrm_state_lookup(&p->saddr, p->spi, p->proto, p->family);
+ if (x == NULL)
+ return -ESRCH;
+
+ xfrm_state_delete(x);
+ xfrm_state_put(x);
+
+ return 0;
+}
+
+static void copy_to_user_state(struct xfrm_state *x, struct xfrm_usersa_info *p)
+{
+ memcpy(&p->id, &x->id, sizeof(p->id));
+ memcpy(&p->sel, &x->sel, sizeof(p->sel));
+ memcpy(&p->lft, &x->lft, sizeof(p->lft));
+ memcpy(&p->curlft, &x->curlft, sizeof(p->curlft));
+ memcpy(&p->stats, &x->stats, sizeof(p->stats));
+ p->mode = x->props.mode;
+ p->replay_window = x->props.replay_window;
+ p->reqid = x->props.reqid;
+ p->family = x->props.family;
+ p->seq = x->km.seq;
+}
+
+struct xfrm_dump_info {
+ struct sk_buff *in_skb;
+ struct sk_buff *out_skb;
+ u32 nlmsg_seq;
+ int start_idx;
+ int this_idx;
+};
+
+static int dump_one_state(struct xfrm_state *x, int count, void *ptr)
+{
+ struct xfrm_dump_info *sp = ptr;
+ struct sk_buff *in_skb = sp->in_skb;
+ struct sk_buff *skb = sp->out_skb;
+ struct xfrm_usersa_info *p;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+
+ if (sp->this_idx < sp->start_idx)
+ goto out;
+
+ nlh = NLMSG_PUT(skb, NETLINK_CB(in_skb).pid,
+ sp->nlmsg_seq,
+ XFRM_MSG_NEWSA, sizeof(*p));
+ nlh->nlmsg_flags = 0;
+
+ p = NLMSG_DATA(nlh);
+ copy_to_user_state(x, p);
+
+ if (x->aalg)
+ RTA_PUT(skb, XFRMA_ALG_AUTH,
+ sizeof(*(x->aalg))+(x->aalg->alg_key_len+7)/8, x->aalg);
+ if (x->ealg)
+ RTA_PUT(skb, XFRMA_ALG_CRYPT,
+ sizeof(*(x->ealg))+(x->ealg->alg_key_len+7)/8, x->ealg);
+ if (x->calg)
+ RTA_PUT(skb, XFRMA_ALG_COMP, sizeof(*(x->calg)), x->calg);
+
+ if (x->encap)
+ RTA_PUT(skb, XFRMA_ENCAP, sizeof(*x->encap), x->encap);
+
+ nlh->nlmsg_len = skb->tail - b;
+out:
+ sp->this_idx++;
+ return 0;
+
+nlmsg_failure:
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int xfrm_dump_sa(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct xfrm_dump_info info;
+
+ info.in_skb = cb->skb;
+ info.out_skb = skb;
+ info.nlmsg_seq = cb->nlh->nlmsg_seq;
+ info.this_idx = 0;
+ info.start_idx = cb->args[0];
+ (void) xfrm_state_walk(IPSEC_PROTO_ANY, dump_one_state, &info);
+ cb->args[0] = info.this_idx;
+
+ return skb->len;
+}
+
+static struct sk_buff *xfrm_state_netlink(struct sk_buff *in_skb,
+ struct xfrm_state *x, u32 seq)
+{
+ struct xfrm_dump_info info;
+ struct sk_buff *skb;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+ NETLINK_CB(skb).dst_pid = NETLINK_CB(in_skb).pid;
+ info.in_skb = in_skb;
+ info.out_skb = skb;
+ info.nlmsg_seq = seq;
+ info.this_idx = info.start_idx = 0;
+
+ if (dump_one_state(x, 0, &info)) {
+ kfree_skb(skb);
+ return NULL;
+ }
+
+ return skb;
+}
+
+static int xfrm_get_sa(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_usersa_id *p = NLMSG_DATA(nlh);
+ struct xfrm_state *x;
+ struct sk_buff *resp_skb;
+ int err;
+
+ x = xfrm_state_lookup(&p->saddr, p->spi, p->proto, p->family);
+ err = -ESRCH;
+ if (x == NULL)
+ goto out_noput;
+
+ resp_skb = xfrm_state_netlink(skb, x, nlh->nlmsg_seq);
+ if (IS_ERR(resp_skb)) {
+ err = PTR_ERR(resp_skb);
+ } else {
+ err = netlink_unicast(xfrm_nl, resp_skb,
+ NETLINK_CB(skb).pid, MSG_DONTWAIT);
+ }
+ xfrm_state_put(x);
+out_noput:
+ return err;
+}
+
+static int verify_userspi_info(struct xfrm_userspi_info *p)
+{
+ switch (p->info.id.proto) {
+ case IPPROTO_AH:
+ case IPPROTO_ESP:
+ break;
+
+ case IPPROTO_COMP:
+ /* IPCOMP spi is 16-bits. */
+ if (p->min >= 0x10000 ||
+ p->max >= 0x10000)
+ return -EINVAL;
+ break;
+
+ default:
+ return -EINVAL;
+ };
+
+ if (p->min > p->max)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_state *x;
+ struct xfrm_userspi_info *p;
+ struct sk_buff *resp_skb;
+ int err;
+
+ p = NLMSG_DATA(nlh);
+ err = verify_userspi_info(p);
+ if (err)
+ goto out_noput;
+ x = xfrm_find_acq(p->info.mode, p->info.reqid, p->info.id.proto,
+ &p->info.sel.daddr,
+ &p->info.sel.saddr, 1,
+ p->info.family);
+ err = -ENOENT;
+ if (x == NULL)
+ goto out_noput;
+
+ resp_skb = ERR_PTR(-ENOENT);
+
+ spin_lock_bh(&x->lock);
+ if (x->km.state != XFRM_STATE_DEAD) {
+ xfrm_alloc_spi(x, p->min, p->max);
+ if (x->id.spi)
+ resp_skb = xfrm_state_netlink(skb, x, nlh->nlmsg_seq);
+ }
+ spin_unlock_bh(&x->lock);
+
+ if (IS_ERR(resp_skb)) {
+ err = PTR_ERR(resp_skb);
+ goto out;
+ }
+
+ err = netlink_unicast(xfrm_nl, resp_skb,
+ NETLINK_CB(skb).pid, MSG_DONTWAIT);
+
+out:
+ xfrm_state_put(x);
+out_noput:
+ return err;
+}
+
+static int verify_policy_dir(__u8 dir)
+{
+ switch (dir) {
+ case XFRM_POLICY_IN:
+ case XFRM_POLICY_OUT:
+ case XFRM_POLICY_FWD:
+ break;
+
+ default:
+ return -EINVAL;
+ };
+
+ return 0;
+}
+
+static int verify_newpolicy_info(struct xfrm_userpolicy_info *p)
+{
+ switch (p->share) {
+ case XFRM_SHARE_ANY:
+ case XFRM_SHARE_SESSION:
+ case XFRM_SHARE_USER:
+ case XFRM_SHARE_UNIQUE:
+ break;
+
+ default:
+ return -EINVAL;
+ };
+
+ switch (p->action) {
+ case XFRM_POLICY_ALLOW:
+ case XFRM_POLICY_BLOCK:
+ break;
+
+ default:
+ return -EINVAL;
+ };
+
+ switch (p->family) {
+ case AF_INET:
+ break;
+
+ case AF_INET6:
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ break;
+#else
+ return -EAFNOSUPPORT;
+#endif
+
+ default:
+ return -EINVAL;
+ };
+
+ return verify_policy_dir(p->dir);
+}
+
+static void copy_templates(struct xfrm_policy *xp, struct xfrm_user_tmpl *ut,
+ int nr)
+{
+ int i;
+
+ xp->xfrm_nr = nr;
+ for (i = 0; i < nr; i++, ut++) {
+ struct xfrm_tmpl *t = &xp->xfrm_vec[i];
+
+ memcpy(&t->id, &ut->id, sizeof(struct xfrm_id));
+ memcpy(&t->saddr, &ut->saddr,
+ sizeof(xfrm_address_t));
+ t->reqid = ut->reqid;
+ t->mode = ut->mode;
+ t->share = ut->share;
+ t->optional = ut->optional;
+ t->aalgos = ut->aalgos;
+ t->ealgos = ut->ealgos;
+ t->calgos = ut->calgos;
+ }
+}
+
+static int copy_user_tmpl(struct xfrm_policy *pol, struct rtattr **xfrma)
+{
+ struct rtattr *rt = xfrma[XFRMA_TMPL-1];
+ struct xfrm_user_tmpl *utmpl;
+ int nr;
+
+ if (!rt) {
+ pol->xfrm_nr = 0;
+ } else {
+ nr = (rt->rta_len - sizeof(*rt)) / sizeof(*utmpl);
+
+ if (nr > XFRM_MAX_DEPTH)
+ return -EINVAL;
+
+ copy_templates(pol, RTA_DATA(rt), nr);
+ }
+ return 0;
+}
+
+static void copy_from_user_policy(struct xfrm_policy *xp, struct xfrm_userpolicy_info *p)
+{
+ xp->priority = p->priority;
+ xp->index = p->index;
+ memcpy(&xp->selector, &p->sel, sizeof(xp->selector));
+ memcpy(&xp->lft, &p->lft, sizeof(xp->lft));
+ xp->action = p->action;
+ xp->flags = p->flags;
+ xp->family = p->family;
+ /* XXX xp->share = p->share; */
+}
+
+static void copy_to_user_policy(struct xfrm_policy *xp, struct xfrm_userpolicy_info *p, int dir)
+{
+ memcpy(&p->sel, &xp->selector, sizeof(p->sel));
+ memcpy(&p->lft, &xp->lft, sizeof(p->lft));
+ memcpy(&p->curlft, &xp->curlft, sizeof(p->curlft));
+ p->priority = xp->priority;
+ p->index = xp->index;
+ p->family = xp->family;
+ p->dir = dir;
+ p->action = xp->action;
+ p->flags = xp->flags;
+ p->share = XFRM_SHARE_ANY; /* XXX xp->share */
+}
+
+static struct xfrm_policy *xfrm_policy_construct(struct xfrm_userpolicy_info *p, struct rtattr **xfrma, int *errp)
+{
+ struct xfrm_policy *xp = xfrm_policy_alloc(GFP_KERNEL);
+ int err;
+
+ if (!xp) {
+ *errp = -ENOMEM;
+ return NULL;
+ }
+
+ copy_from_user_policy(xp, p);
+ err = copy_user_tmpl(xp, xfrma);
+ if (err) {
+ *errp = err;
+ kfree(xp);
+ xp = NULL;
+ }
+
+ return xp;
+}
+
+static int xfrm_add_policy(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_userpolicy_info *p = NLMSG_DATA(nlh);
+ struct xfrm_policy *xp;
+ int err;
+
+ err = verify_newpolicy_info(p);
+ if (err)
+ return err;
+
+ xp = xfrm_policy_construct(p, (struct rtattr **) xfrma, &err);
+ if (!xp)
+ return err;
+
+ err = xfrm_policy_insert(p->dir, xp, 1);
+ if (err) {
+ kfree(xp);
+ return err;
+ }
+
+ xfrm_pol_put(xp);
+
+ return 0;
+}
+
+static int xfrm_del_policy(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_policy *xp;
+ struct xfrm_userpolicy_id *p;
+ int err;
+
+ p = NLMSG_DATA(nlh);
+
+ err = verify_policy_dir(p->dir);
+ if (err)
+ return err;
+
+ xp = xfrm_policy_delete(p->dir, &p->sel);
+ if (xp == NULL)
+ return -ENOENT;
+ xfrm_policy_kill(xp);
+ xfrm_pol_put(xp);
+ return 0;
+}
+
+static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr)
+{
+ struct xfrm_dump_info *sp = ptr;
+ struct xfrm_userpolicy_info *p;
+ struct sk_buff *in_skb = sp->in_skb;
+ struct sk_buff *skb = sp->out_skb;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+
+ if (sp->this_idx < sp->start_idx)
+ goto out;
+
+ nlh = NLMSG_PUT(skb, NETLINK_CB(in_skb).pid,
+ sp->nlmsg_seq,
+ XFRM_MSG_NEWPOLICY, sizeof(*p));
+ p = NLMSG_DATA(nlh);
+ nlh->nlmsg_flags = 0;
+
+ copy_to_user_policy(xp, p, dir);
+
+ if (xp->xfrm_nr) {
+ struct xfrm_user_tmpl vec[XFRM_MAX_DEPTH];
+ int i;
+
+ for (i = 0; i < xp->xfrm_nr; i++) {
+ struct xfrm_user_tmpl *up = &vec[i];
+ struct xfrm_tmpl *kp = &xp->xfrm_vec[i];
+
+ memcpy(&up->id, &kp->id, sizeof(up->id));
+ memcpy(&up->saddr, &kp->saddr, sizeof(up->saddr));
+ up->reqid = kp->reqid;
+ up->mode = kp->mode;
+ up->share = kp->share;
+ up->optional = kp->optional;
+ up->aalgos = kp->aalgos;
+ up->ealgos = kp->ealgos;
+ up->calgos = kp->calgos;
+ }
+ RTA_PUT(skb, XFRMA_TMPL,
+ (sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr),
+ vec);
+ }
+
+ nlh->nlmsg_len = skb->tail - b;
+out:
+ sp->this_idx++;
+ return 0;
+
+nlmsg_failure:
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int xfrm_dump_policy(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct xfrm_dump_info info;
+
+ info.in_skb = cb->skb;
+ info.out_skb = skb;
+ info.nlmsg_seq = cb->nlh->nlmsg_seq;
+ info.this_idx = 0;
+ info.start_idx = cb->args[0];
+ (void) xfrm_policy_walk(dump_one_policy, &info);
+ cb->args[0] = info.this_idx;
+
+ return skb->len;
+}
+
+static struct sk_buff *xfrm_policy_netlink(struct sk_buff *in_skb,
+ struct xfrm_policy *xp,
+ int dir, u32 seq)
+{
+ struct xfrm_dump_info info;
+ struct sk_buff *skb;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+ NETLINK_CB(skb).dst_pid = NETLINK_CB(in_skb).pid;
+ info.in_skb = in_skb;
+ info.out_skb = skb;
+ info.nlmsg_seq = seq;
+ info.this_idx = info.start_idx = 0;
+
+ if (dump_one_policy(xp, dir, 0, &info) < 0) {
+ kfree_skb(skb);
+ return NULL;
+ }
+
+ return skb;
+}
+
+static int xfrm_get_policy(struct sk_buff *skb, struct nlmsghdr *nlh, void **xfrma)
+{
+ struct xfrm_policy *xp;
+ struct xfrm_userpolicy_id *p;
+ struct sk_buff *resp_skb;
+ int err;
+
+ p = NLMSG_DATA(nlh);
+ xp = xfrm_policy_byid(p->dir, p->index, 0);
+ if (xp == NULL)
+ return -ENOENT;
+
+ resp_skb = xfrm_policy_netlink(skb, xp, p->dir, nlh->nlmsg_seq);
+ if (IS_ERR(resp_skb)) {
+ err = PTR_ERR(resp_skb);
+ } else {
+ err = netlink_unicast(xfrm_nl, resp_skb,
+ NETLINK_CB(skb).pid, MSG_DONTWAIT);
+ }
+
+ xfrm_pol_put(xp);
+
+ return err;
+}
+
+static const int xfrm_msg_min[(XFRM_MSG_MAX + 1 - XFRM_MSG_BASE)] = {
+ NLMSG_LENGTH(sizeof(struct xfrm_usersa_info)), /* NEW SA */
+ NLMSG_LENGTH(sizeof(struct xfrm_usersa_id)), /* DEL SA */
+ NLMSG_LENGTH(sizeof(struct xfrm_usersa_id)), /* GET SA */
+ NLMSG_LENGTH(sizeof(struct xfrm_userpolicy_info)),/* NEW POLICY */
+ NLMSG_LENGTH(sizeof(struct xfrm_userpolicy_id)), /* DEL POLICY */
+ NLMSG_LENGTH(sizeof(struct xfrm_userpolicy_id)), /* GET POLICY */
+ NLMSG_LENGTH(sizeof(struct xfrm_userspi_info)), /* ALLOC SPI */
+ NLMSG_LENGTH(sizeof(struct xfrm_user_acquire)), /* ACQUIRE */
+ NLMSG_LENGTH(sizeof(struct xfrm_user_expire)), /* EXPIRE */
+};
+
+static struct xfrm_link {
+ int (*doit)(struct sk_buff *, struct nlmsghdr *, void **);
+ int (*dump)(struct sk_buff *, struct netlink_callback *);
+} xfrm_dispatch[] = {
+ { .doit = xfrm_add_sa, },
+ { .doit = xfrm_del_sa, },
+ {
+ .doit = xfrm_get_sa,
+ .dump = xfrm_dump_sa,
+ },
+ { .doit = xfrm_add_policy },
+ { .doit = xfrm_del_policy },
+ {
+ .doit = xfrm_get_policy,
+ .dump = xfrm_dump_policy,
+ },
+ { .doit = xfrm_alloc_userspi },
+};
+
+static int xfrm_done(struct netlink_callback *cb)
+{
+ return 0;
+}
+
+static int xfrm_user_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh, int *errp)
+{
+ struct rtattr *xfrma[XFRMA_MAX];
+ struct xfrm_link *link;
+ int type, min_len;
+
+ if (!(nlh->nlmsg_flags & NLM_F_REQUEST))
+ return 0;
+
+ type = nlh->nlmsg_type;
+
+ /* A control message: ignore it */
+ if (type < XFRM_MSG_BASE)
+ return 0;
+
+ /* Unknown message: reply with EINVAL */
+ if (type > XFRM_MSG_MAX)
+ goto err_einval;
+
+ type -= XFRM_MSG_BASE;
+ link = &xfrm_dispatch[type];
+
+ /* All operations require privileges, even GET */
+ if (!cap_raised(NETLINK_CB(skb).eff_cap, CAP_NET_ADMIN)) {
+ *errp = -EPERM;
+ return -1;
+ }
+
+ if ((type == 2 || type == 5) && (nlh->nlmsg_flags & NLM_F_DUMP)) {
+ u32 rlen;
+
+ if (link->dump == NULL)
+ goto err_einval;
+
+ if ((*errp = netlink_dump_start(xfrm_nl, skb, nlh,
+ link->dump,
+ xfrm_done)) != 0) {
+ return -1;
+ }
+ rlen = NLMSG_ALIGN(nlh->nlmsg_len);
+ if (rlen > skb->len)
+ rlen = skb->len;
+ skb_pull(skb, rlen);
+ return -1;
+ }
+
+ memset(xfrma, 0, sizeof(xfrma));
+
+ if (nlh->nlmsg_len < (min_len = xfrm_msg_min[type]))
+ goto err_einval;
+
+ if (nlh->nlmsg_len > min_len) {
+ int attrlen = nlh->nlmsg_len - NLMSG_ALIGN(min_len);
+ struct rtattr *attr = (void *) nlh + NLMSG_ALIGN(min_len);
+
+ while (RTA_OK(attr, attrlen)) {
+ unsigned short flavor = attr->rta_type;
+ if (flavor) {
+ if (flavor > XFRMA_MAX)
+ goto err_einval;
+ xfrma[flavor - 1] = attr;
+ }
+ attr = RTA_NEXT(attr, attrlen);
+ }
+ }
+
+ if (link->doit == NULL)
+ goto err_einval;
+ *errp = link->doit(skb, nlh, (void **) &xfrma);
+
+ return *errp;
+
+err_einval:
+ *errp = -EINVAL;
+ return -1;
+}
+
+static int xfrm_user_rcv_skb(struct sk_buff *skb)
+{
+ int err;
+ struct nlmsghdr *nlh;
+
+ while (skb->len >= NLMSG_SPACE(0)) {
+ u32 rlen;
+
+ nlh = (struct nlmsghdr *) skb->data;
+ if (nlh->nlmsg_len < sizeof(*nlh) ||
+ skb->len < nlh->nlmsg_len)
+ return 0;
+ rlen = NLMSG_ALIGN(nlh->nlmsg_len);
+ if (rlen > skb->len)
+ rlen = skb->len;
+ if (xfrm_user_rcv_msg(skb, nlh, &err)) {
+ if (err == 0)
+ return -1;
+ netlink_ack(skb, nlh, err);
+ } else if (nlh->nlmsg_flags & NLM_F_ACK)
+ netlink_ack(skb, nlh, 0);
+ skb_pull(skb, rlen);
+ }
+
+ return 0;
+}
+
+static void xfrm_netlink_rcv(struct sock *sk, int len)
+{
+ do {
+ struct sk_buff *skb;
+
+ down(&xfrm_cfg_sem);
+
+ while ((skb = skb_dequeue(&sk->receive_queue)) != NULL) {
+ if (xfrm_user_rcv_skb(skb)) {
+ if (skb->len)
+ skb_queue_head(&sk->receive_queue, skb);
+ else
+ kfree_skb(skb);
+ break;
+ }
+ kfree_skb(skb);
+ }
+
+ up(&xfrm_cfg_sem);
+
+ } while (xfrm_nl && xfrm_nl->receive_queue.qlen);
+}
+
+static int build_expire(struct sk_buff *skb, struct xfrm_state *x, int hard)
+{
+ struct xfrm_user_expire *ue;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+
+ nlh = NLMSG_PUT(skb, 0, 0, XFRM_MSG_EXPIRE,
+ sizeof(*ue));
+ ue = NLMSG_DATA(nlh);
+ nlh->nlmsg_flags = 0;
+
+ copy_to_user_state(x, &ue->state);
+ ue->hard = (hard != 0) ? 1 : 0;
+
+ nlh->nlmsg_len = skb->tail - b;
+ return skb->len;
+
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int xfrm_send_notify(struct xfrm_state *x, int hard)
+{
+ struct sk_buff *skb;
+
+ skb = alloc_skb(sizeof(struct xfrm_user_expire) + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return -ENOMEM;
+
+ if (build_expire(skb, x, hard) < 0)
+ BUG();
+
+ NETLINK_CB(skb).dst_groups = XFRMGRP_EXPIRE;
+
+ return netlink_broadcast(xfrm_nl, skb, 0, XFRMGRP_EXPIRE, GFP_ATOMIC);
+}
+
+static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
+ struct xfrm_tmpl *xt, struct xfrm_policy *xp,
+ int dir)
+{
+ struct xfrm_user_acquire *ua;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+ __u32 seq = xfrm_get_acqseq();
+
+ nlh = NLMSG_PUT(skb, 0, 0, XFRM_MSG_ACQUIRE,
+ sizeof(*ua));
+ ua = NLMSG_DATA(nlh);
+ nlh->nlmsg_flags = 0;
+
+ memcpy(&ua->id, &x->id, sizeof(ua->id));
+ memcpy(&ua->saddr, &x->props.saddr, sizeof(ua->saddr));
+ copy_to_user_policy(xp, &ua->policy, dir);
+ ua->aalgos = xt->aalgos;
+ ua->ealgos = xt->ealgos;
+ ua->calgos = xt->calgos;
+ ua->seq = x->km.seq = seq;
+
+ nlh->nlmsg_len = skb->tail - b;
+ return skb->len;
+
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int xfrm_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *xt,
+ struct xfrm_policy *xp, int dir)
+{
+ struct sk_buff *skb;
+
+ skb = alloc_skb(sizeof(struct xfrm_user_acquire) + 16, GFP_ATOMIC);
+ if (skb == NULL)
+ return -ENOMEM;
+
+ if (build_acquire(skb, x, xt, xp, dir) < 0)
+ BUG();
+
+ NETLINK_CB(skb).dst_groups = XFRMGRP_ACQUIRE;
+
+ return netlink_broadcast(xfrm_nl, skb, 0, XFRMGRP_ACQUIRE, GFP_ATOMIC);
+}
+
+/* User gives us xfrm_user_policy_info followed by an array of 0
+ * or more templates.
+ */
+struct xfrm_policy *xfrm_compile_policy(u16 family, int opt,
+ u8 *data, int len, int *dir)
+{
+ struct xfrm_userpolicy_info *p = (struct xfrm_userpolicy_info *)data;
+ struct xfrm_user_tmpl *ut = (struct xfrm_user_tmpl *) (p + 1);
+ struct xfrm_policy *xp;
+ int nr;
+
+ switch (family) {
+ case AF_INET:
+ if (opt != IP_XFRM_POLICY) {
+ *dir = -EOPNOTSUPP;
+ return NULL;
+ }
+ break;
+#if defined(CONFIG_IPV6) || defined(CONFIG_IPV6_MODULE)
+ case AF_INET6:
+ if (opt != IPV6_XFRM_POLICY) {
+ *dir = -EOPNOTSUPP;
+ return NULL;
+ }
+ break;
+#endif
+ default:
+ *dir = -EINVAL;
+ return NULL;
+ }
+
+ *dir = -EINVAL;
+
+ if (len < sizeof(*p) ||
+ verify_newpolicy_info(p))
+ return NULL;
+
+ nr = ((len - sizeof(*p)) / sizeof(*ut));
+ if (nr > XFRM_MAX_DEPTH)
+ return NULL;
+
+ xp = xfrm_policy_alloc(GFP_KERNEL);
+ if (xp == NULL) {
+ *dir = -ENOBUFS;
+ return NULL;
+ }
+
+ copy_from_user_policy(xp, p);
+ copy_templates(xp, ut, nr);
+
+ *dir = p->dir;
+
+ return xp;
+}
+
+static struct xfrm_mgr netlink_mgr = {
+ .id = "netlink",
+ .notify = xfrm_send_notify,
+ .acquire = xfrm_send_acquire,
+ .compile_policy = xfrm_compile_policy,
+};
+
+static int __init xfrm_user_init(void)
+{
+ printk(KERN_INFO "Initializing IPsec netlink socket\n");
+
+ xfrm_nl = netlink_kernel_create(NETLINK_XFRM, xfrm_netlink_rcv);
+ if (xfrm_nl == NULL)
+ panic("xfrm_user_init: cannot initialize xfrm_nl\n");
+
+
+ xfrm_register_km(&netlink_mgr);
+
+ return 0;
+}
+
+static void __exit xfrm_user_exit(void)
+{
+ xfrm_unregister_km(&netlink_mgr);
+ sock_release(xfrm_nl->socket);
+}
+
+module_init(xfrm_user_init);
+module_exit(xfrm_user_exit);
* Re: Slab corruption mm3 + davem fixes
2003-05-11 22:01 ` David S. Miller
@ 2003-05-11 22:15 ` Andrew Morton
2003-05-11 21:24 ` David S. Miller
2003-05-11 22:34 ` David S. Miller
0 siblings, 2 replies; 8+ messages in thread
From: Andrew Morton @ 2003-05-11 22:15 UTC (permalink / raw)
To: David S. Miller; +Cc: tomlins, linux-mm, linux-kernel, rusty, laforge
"David S. Miller" <davem@redhat.com> wrote:
>
> On Sun, 2003-05-11 at 09:21, Ed Tomlinson wrote:
> > I am also seeing this on 69-bk (as of Sunday morning)
> ...
> > On May 10, 2003 11:19 pm, Ed Tomlinson wrote:
> > > I looked at my logs and found the following error in it. My kernel is
> > > 69-mm3 with two davem fixes on it.
> ...
> > > May 10 22:41:06 oscar kernel: Call Trace:
> > > May 10 22:41:06 oscar kernel: [__slab_error+30/32] __slab_error+0x1e/0x20
> > > May 10 22:41:06 oscar kernel: [check_poison_obj+376/384]
> > > check_poison_obj+0x178/0x180 May 10 22:41:06 oscar kernel:
> > > [kmalloc+221/392] kmalloc+0xdd/0x188 May 10 22:41:06 oscar kernel:
> > > [alloc_skb+64/240] alloc_skb+0x40/0xf0 May 10 22:41:06 oscar kernel:
>
> Yeah, more bugs in the NAT netfilter changes. Debugging this one
> patch is becoming a full-time job :-(
>
> This should fix it. Rusty, you're computing checksums and mangling
> src/dst using header pointers potentially pointing to freed skbs.
>
Did you mean to send a one megabyte diff?
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: aart@kvack.org
* Re: Slab corruption mm3 + davem fixes
2003-05-11 22:15 ` Andrew Morton
2003-05-11 21:24 ` David S. Miller
@ 2003-05-11 22:34 ` David S. Miller
2003-05-12 7:44 ` Ed Tomlinson
1 sibling, 1 reply; 8+ messages in thread
From: David S. Miller @ 2003-05-11 22:34 UTC (permalink / raw)
To: Andrew Morton; +Cc: tomlins, linux-mm, linux-kernel, rusty, laforge
[-- Attachment #1: Type: text/plain, Size: 1237 bytes --]
On Sun, 2003-05-11 at 15:15, Andrew Morton wrote:
> "David S. Miller" <davem@redhat.com> wrote:
> >
> > On Sun, 2003-05-11 at 09:21, Ed Tomlinson wrote:
> > > I am also seeing this on 69-bk (as of Sunday morning)
> > ...
> > > On May 10, 2003 11:19 pm, Ed Tomlinson wrote:
> > > > I looked at my logs and found the following error in it. My kernel is
> > > > 69-mm3 with two davem fixes on it.
> > ...
> > > > May 10 22:41:06 oscar kernel: Call Trace:
> > > > May 10 22:41:06 oscar kernel: [__slab_error+30/32] __slab_error+0x1e/0x20
> > > > May 10 22:41:06 oscar kernel: [check_poison_obj+376/384]
> > > > check_poison_obj+0x178/0x180 May 10 22:41:06 oscar kernel:
> > > > [kmalloc+221/392] kmalloc+0xdd/0x188 May 10 22:41:06 oscar kernel:
> > > > [alloc_skb+64/240] alloc_skb+0x40/0xf0 May 10 22:41:06 oscar kernel:
> >
> > Yeah, more bugs in the NAT netfilter changes. Debugging this one
> > patch is becoming a full-time job :-(
> >
> > This should fix it. Rusty, you're computing checksums and mangling
> > src/dst using header pointers potentially pointing to freed skbs.
> >
>
> Did you mean to send a one megabyte diff?
Let's try this again, here is the correct patch :-)
--
David S. Miller <davem@redhat.com>
[-- Attachment #2: diff --]
[-- Type: text/plain, Size: 4379 bytes --]
# This is a BitKeeper generated patch for the following project:
# Project Name: Linux kernel tree
# This patch format is intended for GNU patch command version 2.5 or higher.
# This patch includes the following deltas:
# ChangeSet 1.1105 -> 1.1106
# net/ipv4/netfilter/ip_nat_core.c 1.25 -> 1.26
# net/ipv4/netfilter/ip_fw_compat_masq.c 1.8 -> 1.9
#
# The following is the BitKeeper ChangeSet Log
# --------------------------------------------
# 03/05/11 davem@nuts.ninka.net 1.1106
# [NETFILTER]: Fix stale skb data pointer usage in ipv4 NAT.
# --------------------------------------------
#
diff -Nru a/net/ipv4/netfilter/ip_fw_compat_masq.c b/net/ipv4/netfilter/ip_fw_compat_masq.c
--- a/net/ipv4/netfilter/ip_fw_compat_masq.c Sun May 11 15:31:41 2003
+++ b/net/ipv4/netfilter/ip_fw_compat_masq.c Sun May 11 15:31:41 2003
@@ -35,16 +35,15 @@
unsigned int
do_masquerade(struct sk_buff **pskb, const struct net_device *dev)
{
- struct iphdr *iph = (*pskb)->nh.iph;
struct ip_nat_info *info;
enum ip_conntrack_info ctinfo;
struct ip_conntrack *ct;
unsigned int ret;
/* Sorry, only ICMP, TCP and UDP. */
- if (iph->protocol != IPPROTO_ICMP
- && iph->protocol != IPPROTO_TCP
- && iph->protocol != IPPROTO_UDP)
+ if ((*pskb)->nh.iph->protocol != IPPROTO_ICMP
+ && (*pskb)->nh.iph->protocol != IPPROTO_TCP
+ && (*pskb)->nh.iph->protocol != IPPROTO_UDP)
return NF_DROP;
/* Feed it to connection tracking; in fact we're in NF_IP_FORWARD,
@@ -68,7 +67,7 @@
/* Setup the masquerade, if not already */
if (!info->initialized) {
u_int32_t newsrc;
- struct flowi fl = { .nl_u = { .ip4_u = { .daddr = iph->daddr } } };
+ struct flowi fl = { .nl_u = { .ip4_u = { .daddr = (*pskb)->nh.iph->daddr } } };
struct rtable *rt;
struct ip_nat_multi_range range;
@@ -124,19 +123,18 @@
check_for_demasq(struct sk_buff **pskb)
{
struct ip_conntrack_tuple tuple;
- struct iphdr *iph = (*pskb)->nh.iph;
struct ip_conntrack_protocol *protocol;
struct ip_conntrack_tuple_hash *h;
enum ip_conntrack_info ctinfo;
struct ip_conntrack *ct;
int ret;
- protocol = ip_ct_find_proto(iph->protocol);
+ protocol = ip_ct_find_proto((*pskb)->nh.iph->protocol);
/* We don't feed packets to conntrack system unless we know
they're part of an connection already established by an
explicit masq command. */
- switch (iph->protocol) {
+ switch ((*pskb)->nh.iph->protocol) {
case IPPROTO_ICMP:
/* ICMP errors. */
ct = icmp_error_track(*pskb, &ctinfo, NF_IP_PRE_ROUTING);
@@ -146,12 +144,6 @@
server here (== DNAT). Do SNAT icmp manips
in POST_ROUTING handling. */
if (CTINFO2DIR(ctinfo) == IP_CT_DIR_REPLY) {
- /* FIXME: Remove once NAT handled non-linear.
- */
- if (skb_is_nonlinear(*pskb)
- && skb_linearize(*pskb, GFP_ATOMIC) != 0)
- return NF_DROP;
-
icmp_reply_translation(pskb, ct,
NF_IP_PRE_ROUTING,
CTINFO2DIR(ctinfo));
@@ -166,7 +158,7 @@
case IPPROTO_UDP:
IP_NF_ASSERT(((*pskb)->nh.iph->frag_off & htons(IP_OFFSET)) == 0);
- if (!get_tuple(iph, *pskb, iph->ihl*4, &tuple, protocol)) {
+ if (!get_tuple((*pskb)->nh.iph, *pskb, (*pskb)->nh.iph->ihl*4, &tuple, protocol)) {
if (net_ratelimit())
printk("ip_fw_compat_masq: Can't get tuple\n");
return NF_ACCEPT;
diff -Nru a/net/ipv4/netfilter/ip_nat_core.c b/net/ipv4/netfilter/ip_nat_core.c
--- a/net/ipv4/netfilter/ip_nat_core.c Sun May 11 15:31:41 2003
+++ b/net/ipv4/netfilter/ip_nat_core.c Sun May 11 15:31:41 2003
@@ -717,10 +717,13 @@
iph = (void *)(*pskb)->data + iphdroff;
/* Manipulate protcol part. */
- if (!find_nat_proto(proto)->manip_pkt(pskb, iphdroff + iph->ihl*4,
+ if (!find_nat_proto(proto)->manip_pkt(pskb,
+ iphdroff + iph->ihl*4,
manip, maniptype))
return 0;
+ iph = (void *)(*pskb)->data + iphdroff;
+
if (maniptype == IP_NAT_MANIP_SRC) {
iph->check = ip_nat_cheat_check(~iph->saddr, manip->ip,
iph->check);
@@ -952,6 +955,8 @@
READ_UNLOCK(&ip_nat_lock);
hdrlen = (*pskb)->nh.iph->ihl * 4;
+
+ inside = (void *)(*pskb)->data + (*pskb)->nh.iph->ihl*4;
inside->icmp.checksum = 0;
inside->icmp.checksum = csum_fold(skb_checksum(*pskb, hdrlen,
* Re: Slab corruption mm3 + davem fixes
2003-05-12 7:44 ` Ed Tomlinson
@ 2003-05-12 6:42 ` David S. Miller
0 siblings, 0 replies; 8+ messages in thread
From: David S. Miller @ 2003-05-12 6:42 UTC (permalink / raw)
To: tomlins; +Cc: akpm, linux-mm, linux-kernel, rusty, laforge
> On May 11, 2003 06:34 pm, David S. Miller wrote:
> > > > Yeah, more bugs in the NAT netfilter changes. Debugging this one
> > > > patch is becoming a full-time job :-(
> But you do it well... Looks like this fixes the slab problems here with
> 69-bk from Sunday am.
Thank you for testing.
* Re: Slab corruption mm3 + davem fixes
2003-05-11 22:34 ` David S. Miller
@ 2003-05-12 7:44 ` Ed Tomlinson
2003-05-12 6:42 ` David S. Miller
0 siblings, 1 reply; 8+ messages in thread
From: Ed Tomlinson @ 2003-05-12 7:44 UTC (permalink / raw)
To: David S. Miller, Andrew Morton; +Cc: linux-mm, linux-kernel, rusty, laforge
On May 11, 2003 06:34 pm, David S. Miller wrote:
> > > Yeah, more bugs in the NAT netfilter changes. Debugging this one
> > > patch is becoming a full-time job :-(
But you do it well... Looks like this fixes the slab problems here with
69-bk from Sunday am.
> > > This should fix it. Rusty, you're computing checksums and mangling
> > > src/dst using header pointers potentially pointing to freed skbs.
> >
> > Did you mean to send a one megabyte diff?
>
> Let's try this again, here is the correct patch :-)
Thanks
Ed Tomlinson
Thread overview: 8+ messages
2003-05-11 3:19 Slab corruption mm3 + davem fixes Ed Tomlinson
2003-05-11 16:21 ` Ed Tomlinson
2003-05-11 22:01 ` David S. Miller
2003-05-11 22:15 ` Andrew Morton
2003-05-11 21:24 ` David S. Miller
2003-05-11 22:34 ` David S. Miller
2003-05-12 7:44 ` Ed Tomlinson
2003-05-12 6:42 ` David S. Miller