
FS#5743 — FS#9661 — vac3-1-n7

Attached to Project: Anti-DDoS
Type: Incident
Location: Paris DC1
Status: CLOSED
Progress: 100%
Some contexts are not functioning correctly
on the VAC3 Nexus 7009. This is probably due to
the 10G port remapping we did for
some contexts when we upgraded the N7
and mixed F2 and M2 ports.
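For context, which module types a VDC may mix is controlled explicitly on NX-OS. A minimal sketch (the exact module-type keywords such as f2e and m2xl vary by NX-OS release, so treat this as illustrative, not the production config):

! From the default VDC, allow F2e and M2 modules in the VAC3 VDC
vdc VAC3
  limit-resource module-type f2e m2xl

Changing the allowed module types removes interfaces of excluded modules from the VDC, which is one way a remap plus reboot can invalidate configuration that was previously accepted.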

2013 Nov 8 18:50:08.758959 port-profile: - found data in FU_PSEL_Q_CAT_MTS queue, fd(7), usr_q_info(1)
2013 Nov 8 18:50:08.759043 port-profile: fu_priority_select_select_queue: round credit(0)
2013 Nov 8 18:50:08.759077 port-profile: curr_q - FU_PSEL_Q_CAT_CQ, usr_q_info(4), priority(7), credit(0), empty
2013 Nov 8 18:50:08.759100 port-profile: Starting a new round
2013 Nov 8 18:50:08.759121 port-profile: fu_priority_select: returning FU_PSEL_Q_CAT_MTS queue, fd(7), usr_q_info(1)
2013 Nov 8 18:50:08.759278 port-profile: fu_sdb_publisher_invoke_app_callback:OPC(185/MTS_OPC_SYSLOG_FACILITY_OPR) is NOT all-drop;Bail-out.
2013 Nov 8 18:50:08.759310 port-profile: fsrv_sdb_process_msg(1444): vdc-id[7] mts_opc[185][MTS_OPC_SYSLOG_FACILITY_OPR] 0x8e9cf88 0xf2560c90 132
2013 Nov 8 18:50:08.759332 port-profile: fsrv_sdb_process_msg(1453): Sending it to SDB-Dispatch
2013 Nov 8 18:50:08.759355 port-profile: fsrv_sdb_process_msg(1465): Sdb-dispatch did not process: rcode[0xffffffff]
2013 Nov 8 18:50:08.759380 port-profile: fsrv_sdb_process_msg(1493): No msg handler in FSRV for mts_opc[185][MTS_OPC_SYSLOG_FACILITY_OPR]
2013 Nov 8 18:50:08.759402 port-profile: fu_fsm_engine: fsrv_sdb_process_msg ret 0x0
2013 Nov 8 18:50:08.759424 port-profile: fu_fsm_engine: fsrv_sdb_process_msg continue ret 0x0
2013 Nov 8 18:50:08.759446 port-profile: fu_sync_pss_to_standby_apply:Set of checks failed
2013 Nov 8 18:50:08.759468 port-profile: fu_sdb_handle_update: validation fail, fu_is_state_active = 1, fu_is_sync_pss_to_standby_enabled = 1, mts_sync_event_get(mts_msg) = 0
2013 Nov 8 18:50:08.759528 port-profile: fu_fsm_execute_all: match_msg_id(0), log_already_open(0)
2013 Nov 8 18:50:08.759553 port-profile: fu_fsm_execute_all: null fsm_event_list
2013 Nov 8 18:50:08.759582 port-profile: fu_fsm_engine_post_event_processing
2013 Nov 8 18:50:08.759609 port-profile: fu_mts_drop ref 0x8e9cf88 opc 185
2013 Nov 8 18:50:08.759715 port-profile: fu_fsm_engine_post_e

We will reboot the entire chassis.
Date:  Sunday, 10 November 2013, 00:24
Reason for closing:  Done
Comment by OVH - Friday, 08 November 2013, 15:48

vac3-admin# reload
!!!WARNING! there is unsaved configuration in VDC!!!
This command will reboot the system. (y/n)? [n] y

[6639708.362130] writing reset reason 9,

>>>
>>>


Comment by OVH - Friday, 08 November 2013, 19:27

Still doesn't work

2013 Nov 8 23:17:06 vac3-1-n7 %$ VDC-2 %$ proxy
%MCM-2-MCM_REPLICATION_DISABLED: Proxy layer-3 modules are not
available for replication. Proxy layer-3 multicast replication is
disabled.
2013 Nov 8 23:17:06 vac3-1-n7 %$ VDC-2 %$ proxy
%MCM-2-MCM_ROUTING_DISABLED: Proxy layer-3 modules are not available
for routing. Proxy layer-3 forwarding is disabled.

So we cancel everything and start again...


Comment by OVH - Sunday, 10 November 2013, 00:09AM

The origin of the issue is that we mixed M2 and F2e cards in the same context (VDC).

Historically, we started from a VDC configured with the F2e cards only. The IP setup on a 10G F2e port worked, so we could configure:

inter e6/32
ip add xx.xx.xx.xx/yy

Then we upgraded the router in service and added ports of the M2 card into the same context, mixing them with the F2e ports.

It worked without any problem.

But after the chassis reboot, the router would no longer accept an IP configuration on a 10G F2e port. We could only apply this configuration:

inter e6/32
switchport
switchport mode trunk
switchport trunk allowed vlan zzz

inter vlan zzz
ip add xx.xx.xx.xx/yy

In short, we will change the configuration, but this behavior is really not clean. We will also optimize the setup of VAC1 and VAC2.


Comment by OVH - Sunday, 10 November 2013, 00:24AM

The configuration has been changed. It works again.

Explanation:
The "inter vlan zzz" SVI is handled by the M2 cards and supports
256K routes. If we configure L3 directly on the F2e port with "inter e6/32", the number of routes is limited to 20K.
So this is a trick to support 256K routes with F2e cards.
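The two approaches can be summarized side by side. This is a sketch with the same placeholder addresses and VLAN as above; the route-scale figures in the comments are the ones stated in this report, not exact hardware limits:

! Before: routed port directly on the F2e module.
! L3 lookup is done by the F2e forwarding engine,
! limiting the table to roughly 20K routes.
interface e6/32
  no switchport
  ip address xx.xx.xx.xx/yy

! After: L2 trunk on the F2e port, L3 on an SVI.
! The SVI's L3 lookup is handled by the M2 modules,
! which support roughly 256K routes.
interface e6/32
  switchport
  switchport mode trunk
  switchport trunk allowed vlan zzz

interface vlan zzz
  ip address xx.xx.xx.xx/yy

The trade-off is an extra hop through the VLAN for traffic entering on the F2e port, in exchange for the larger routing table of the M2 forwarding engines.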