Saturday, August 6, 2011

[z] Cisco IOS Naming Convention (Compiled)

Reposted from: bbs.net130.com

A quick introduction to the Cisco IOS software naming convention:
AAAAA-BBBB-CC-DDDD.EE
1. AAAAA identifies the hardware platform the image runs on,
2. BBBB identifies the feature set contained in the IOS image,
3. CC identifies the IOS file format,
4. DDDD identifies the IOS software version,
5. EE is the file extension.
I. "AAAAA": the hardware platform field
For example (just a few representative entries rather than an exhaustive list):
c2600 2600 series routers
c2800 2800 series routers
c5rsm Catalyst 5000 RSM/VIP
ics7700 ICS 7700
mc3810 MC3810 multiservice access concentrator
regen 15104 optical networking system
rpm MGX 8850 RPM
rsp 7500 series routers
ubr7200 uBR7200 universal broadband router
vg200 VG200 voice gateway, and so on.
II. "BBBB": the feature-set field
A few common ones you will run into:
a Advanced Peer-to-Peer Networking (APPN) feature
boot boot image
j enterprise
i IP
ipbase IP Base
i3 reduced IP, without BGP, EGP, NHRP
i5 IP with VoFR
k8 IPSec 56-bit
k9 IPSec 3DES
o IOS Firewall
o3 firewall with intrusion detection (IDS) and SSH
p plus
s plus (NAT, IBM, VPDN, VoIP)
v VIP
v5 VoIP
x3 voice
56 56-bit encryption
III. "CC": the file format field
The first "C" indicates which type of router memory the image executes from:
f flash
m RAM
r ROM
l relocatable at run time

If you are about to pull the flash card out of a router, check this character first. If it is f, the software executes directly from flash, so flash must remain installed for the IOS software to run. If it is m, the router has already read the IOS software from flash, decompressed it, and is running it from RAM; once the router has booted up normally, the flash card can be removed safely.
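
A practical note: the file name of the running image appears in the output of 'show version', so you can check these letters on a live router before touching the flash card.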

The second "C" indicates how the image is compressed:
z zip compression
x mzip compression
w stac compression
IV. "DDDD": the version field
Indicates the IOS software version number.
V. ".EE": the file extension
For example: .bin or .tar

Example: "rsp-jo3sv-mz.122-1.bin":
rsp is the hardware platform (Cisco 7500 series).
jo3sv means enterprise (j), firewall with IDS (o3), the "plus" feature set with NAT/VoIP (s), and the Versatile Interface Processor VIP (v).
mz means it runs from the router's RAM and is zip-compressed.
122-1 means Cisco IOS software release 12.2(1), i.e., the first maintenance release of major release 12.2.
.bin is the file extension.
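
As a second, hypothetical decode under the same rules, take "c2600-ik9o3s-mz.122-15.bin": c2600 is the platform (a 2600 series router); the feature string is IP (i) plus IPSec 3DES (k9) plus firewall with IDS/SSH (o3) plus the "plus" feature set (s); mz means it executes from RAM and is zip-compressed; 122-15 is release 12.2(15); and .bin is the extension.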

Thursday, June 2, 2011

some insight about NAT on C6509

At 08:23 PM 3/1/2007 +0100, Peter Salanki opined:
>If NAT is done in hardware, no CPU increase would be noticeable.

That's not entirely true. The bottleneck for h/w NAT on Sup720/Sup32
is in the *session setup* - the first packet(s) in every new
*session* is punted to the CPU to do one or both of the following:
* Create the NAT xlation
* Push down the appropriate netflow entry to the hardware to NAT that flow

The latter is done for *every* session, not just ones needing an
xlation entry (ie, we *always* have to push down a new NF entry for a
new flow even if the xlation in IOS exists). Note that for a TCP
session, the entire 3-way handshake is punted before you'll get full
h/w fwding of that NAT. Once you have full bidir h/w NF entries set
up, then the fwding rate is very high (20Mpps), for packets in that flow.

So bottom line - control plane scalability may be inadequate if you
have massive numbers of flows. Additionally, NF table scalability can
come into the picture as well (many factors apply, e.g. life of
flows, PFC version). If the NF entries can't be installed (no room),
we punt for everything that didn't fit.

HTH,
Tim




Tim Stevenson, tstevens [at] cisco
Routing & Switching CCIE #5561
Technical Marketing Engineer, Catalyst 6500
Cisco Systems, http://www.cisco.com
IP Phone: 408-526-6759
********************************************************
The contents of this message may be *Cisco Confidential*
and are intended for the specified recipients only.

------------------------------------------------------------------------------

Re: SUP720-3B and NAT performance

> Is there any way to determine whether a hardware NF entry has
> been installed or not?

'sh mls netf ip sw' will show you software-installed NetFlow
entries.

> Funny also that the CPU load on the router should grow with
> traffic inside that one session (aka flow)...

That suggests to me that flows are not being set up correctly
for that one session.

But wait...

Were you trying to NAT ESP? The Sup720 will only NAT UDP and
TCP in 'hardware'.

-A

_______________________________________________
cisco-nsp mailing list cisco-nsp [at] puck
https://puck.nether.net/mailman/listinfo/cisco-nsp
archive at http://puck.nether.net/pipermail/cisco-nsp/
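
For my own notes, when replaying this on a Sup720 the abbreviated command above expands roughly as follows (a sketch; exact keywords can vary by release):

show mls netflow ip               ! NetFlow entries programmed into the PFC hardware
show mls netflow ip sw-installed  ! software-installed entries, per the reply above
show ip nat translations          ! the NAT xlation table held in IOS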


Friday, April 1, 2011

Been short-tempered lately.. turns out my self-cultivation is still lacking

Jing Si Aphorisms
◎ Finding others displeasing to the eye means your own cultivation is lacking.
◎ Appreciating others is what dignifies yourself.
◎ Look at people with a Buddha's heart and Buddhas are everywhere; look at people with a ghost's heart and ferocious demons are everywhere.
◎ When the heart is beautiful, everything looks pleasing.
◎ Love is not about making demands of the other person; it comes from what you yourself give.
◎ Morality is a lamp for elevating yourself, not a whip for lashing others.
◎ A person's happiness comes not from having much, but from fussing over little.
◎ Where vegetables are planted, weeds do not easily grow; where goodness fills the heart, evil does not easily arise.
◎ To get angry is to punish yourself with someone else's mistakes.
◎ Nothing harms you more than your own flaring temper.
◎ Our greatest enemy is not other people; it may well be ourselves.

Thursday, March 10, 2011

configuring L2TP TOS byte

stumbled into this command:

pseudowire-class t59
encapsulation l2tpv3
ip local interface Loopback0
ip tos value 160

The intention was to manually set the IP precedence bits in the TOS byte of the IP header carrying the L2TPv3 packet. Well, sniffing tells otherwise:
[screenshot: Wireshark capture of the tunneled traffic]
The traffic from 9.9.0.9 to 9.9.0.7 is carrying ICMP traffic from R1 to R5, although I did not get Wireshark to decode the payload properly. But check the IP header: the TOS is still 0.

Checked the command reference and noticed the following:
[screenshot: command reference excerpt for "ip tos"]
Apparently there is another option under "ip tos":

pseudowire-class t59
encapsulation l2tpv3
ip local interface Loopback0
ip tos reflect

And this "reflect" option actually copies the payload's TOS into the outer IP header. Understandably, if the payload is NOT IP, this copy cannot happen.

Hmm.. I am not using reflect, but my payload is NOT IP (it is ICMP).. could it be...

OK, trying again with telnet from R1 to R5... ah ha, there it is..
[screenshot: capture of the telnet session]
We can see the IP precedence is now 5, which matches the configured value: 160 decimal is 0xA0, binary 10100000, and the top three bits (101) give precedence 5.

So it seems that even when we manually set the TOS byte under the pseudowire class, if the payload is NOT IP the TOS byte is not set properly.
A bug? IOS version: C7200-K91P-M, 12.2(25)S15.
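
For completeness, a pseudowire class like this gets bound to the attachment circuit roughly as sketched below (the peer address is the one from the captures above; the interface and VC ID are made up):

interface Serial1/0
 xconnect 9.9.0.7 100 pw-class t59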

Monday, February 28, 2011

multicast.. forever pain in the ass

Now this is getting a bit irritating.. stuck for a few days.. today I managed to zoom in on the culprit router. Below is the diagram:
[network diagram]
R7, R8 and R9 are running an MPLS VPN backbone, with a multicast backbone as well. Multicast VPN has been configured on top.. the control plane works fine.. R5 joins address 225.5.5.5, but pinging 225.5.5.5 from R4 is not happening.

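For context, the MDT side of the MVPN configuration behind this setup would look roughly like this sketch (the RD and the threshold are made up; the group addresses are the ones visible in the outputs below):

ip vrf ABC2
 rd 100:2
 mdt default 239.0.0.1
 mdt data 239.0.8.0 0.0.0.255 threshold 1
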
Sniffing shows R7 sending traffic to 225.5.5.5 inside the multicast tunnel address 239.0.0.1, and this is captured by R9. On R9, the multicast route table looks like this:

R9#
00:14:19: %SYS-5-CONFIG_I: Configured from console by console
R9# sh ip mroute vrf ABC2
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 225.5.5.5), 00:23:21/00:02:48, RP 172.16.0.7, flags: S
Incoming interface: Tunnel0, RPF nbr 12.12.0.7
Outgoing interface list:
FastEthernet0/0.59, Forward/Sparse, 00:23:21/00:02:48

(172.16.48.4, 225.5.5.5), 00:01:57/00:01:03, flags: TY
Incoming interface: Tunnel0, RPF nbr 12.12.0.8, MDT:239.0.8.0/00:02:03
Outgoing interface list:
FastEthernet0/0.59, Forward/Sparse, 00:01:57/00:02:48

(*, 224.0.1.40), 00:23:27/00:02:56, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Loopback1, Forward/Sparse, 00:23:27/00:02:56

OK, so it has already switched to the MDT data group.. because R7 is happily sending the traffic.. fine. But R9 is not sending anything out:

R9# sh ip mroute vrf ABC2 cou
IP Multicast Statistics
3 routes using 1720 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 225.5.5.5, Source count: 1, Packets forwarded: 2, Packets received: 2
RP-tree: Forwarding: 2/0/100/0, Other: 2/0/0
Source: 172.16.48.4/32, Forwarding: 0/0/0/0, Other: 0/0/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0

Debug ip mpacket detail tells why:

R9# deb ip mpa de
IP multicast packets debugging is on (detailed)
R9#
00:33:41: IP(0): MAC sa=*Tunnel* (Tunnel0)
00:33:41: IP(0): IP tos=0x0, len=100, id=1280, ttl=252, prot=1
00:33:41: IP(0): s=172.16.48.4 (Tunnel0) d=225.5.5.5 (FastEthernet0/0.69) id=1280, ttl=252, prot=1, len=100(100), not RPF interface
R9#
00:33:43: IP(0): MAC sa=ca08.1a44.0008 (FastEthernet0/0.79), IP last-hop=12.12.79.7
00:33:43: IP(0): IP tos=0x0, len=124, id=14896, ttl=254, prot=47
00:33:43: IP(0): MAC sa=ca08.1a44.0008 (FastEthernet0/0.79), IP last-hop=12.12.79.7
00:33:43: IP(0): IP tos=0x0, len=124, id=14897, ttl=254, prot=47
00:33:43: IP(0): MAC sa=*Tunnel* (Tunnel0)
00:33:43: IP(0): IP tos=0x0, len=100, id=1280, ttl=253, prot=1
00:33:43: IP(0): s=172.16.47.4 (Tunnel0) d=225.5.5.5 (FastEthernet0/0.69) id=1280, ttl=253, prot=1, len=100(100), not RPF interface
00:33:43: IP(0): MAC sa=*Tunnel* (Tunnel0)
00:33:43: IP(0): IP tos=0x0, len=100, id=1280, ttl=252, prot=1
00:33:43: IP(0): s=172.16.0.4 (Tunnel0) d=225.5.5.5 (FastEthernet0/0.69) id=1280, ttl=252, prot=1, len=100(100), not RPF interface
00:33:43: IP(0): MAC sa=ca08.1a44.0008 (FastEthernet0/0.79), IP last-hop=12.12.79.7
00:33:43: IP(0): IP tos=0x0, len=124, id=14898, ttl=254, prot=47
00:33:43: IP(0): MAC sa=*Tunnel* (Tunnel0)

This is where things stop making sense. The OIL lists fa0/0.59 as the egress interface for group 225.5.5.5, yet the debug shows R9 trying to forward it out fa0/0.69 and throwing a "not RPF interface" error?!

-----------------------------------------------------------
Action:
-- added 'bgp next-hop loopback', since R9 is peering eBGP over fa0/0.69: no good.
-- shut down loopback 2: no good.
-- shut down fa0/0.69: everything works. Well, that is not a solution.
------------------------------------------------------------------
thought:
-- R9 has two loopback addresses in the global table.. I seem to remember that can cause some sort of problem when the backbone multicast is not using the MDT address family (see the sketch below), even though in my case I use PIM BSR instead of PIM SSM to build the backbone. Will research this more tomorrow.. anyway my code does not support the MDT address family. XD
-- there is an L2TP tunnel forming through fa0/0.69; related?
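
For the record, on code that does support it, the MDT address family goes under BGP roughly like this (the AS number and the neighbor address are made up):

router bgp 100
 address-family ipv4 mdt
  neighbor 172.16.0.7 activate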




Wednesday, February 16, 2011

No studying today

Ah, I wasted the whole day..

Saturday, January 29, 2011

Strong heart

Lost the battle in Sydney; honestly, quite disappointing. Just terrible luck, no one to blame. CCIE SP 2.0 looked out of reach, but this morning, idly browsing the Cisco website, I actually found a March 18 slot for SP 2.0, in SAN JOSE.

Hard to understand why there was still a DATE available, even though deep down I had been hoping for one. SAN JOSE is really not easy to get to: a 22-hour flight. The air ticket is over 2,600, and with the exam fee and lodging it comes to another five grand. So even with this miraculous date right in front of me, I could not get my left mouse button to click.

Better call the old witch for advice; whenever a purchase is too big for me to pull the trigger on, I go to her.

"You... you... are crazy!"
"I... I... am serious."

Apparently the old witch had strong opinions about this proposal. Ha, sure enough: the call had barely ended when a long and eloquent SMS arrived. From the tone alone, you could tell it had come to read me the riot act.

“I really need strong heart to take your decision. Anyway if you are ready just go."

Hmm.. strong heart is the key word. I need a strong heart even more than the old witch does to make this decision: do I really want to go through all this again?

-- 8 hours of labs every day
-- 4 hours of reading every day
-- a 22-hour flight
-- very likely two days with no sleep at all
-- getting through the 8-hour lab as a walking corpse, hoping that with a bit of luck I pass
-- and with bad luck, the result may well be the same as Sydney
×× this time it is not full-time study; there is no leave left.

-----------------------------------------------------------------------------------

Fine, I am a glutton for punishment: once the wound heals, I forget how much it hurt. But the most important thing is probably that streak of, call it persistence or call it obsession. How can I let a chance slip by when it is right in front of me? Besides.. I genuinely like the SP technology topics, and the process of preparing for the exam is enjoyable.

45 days to the exam; make every one of them count.




Saturday, January 8, 2011

IEWB SP VOL1 LAB 3 debrief

This lab is rated grade 8; well well.. not so impressive, but tricky. Stuck at task 5.4, back-to-back VRF, for a few days.. The configuration was easy.. and I can't find anything different from the solution guide..

From R1 VRF 100, I am able to see R6's connected interface in VRF 100..

Rack1R1#sh ip route v 100 | i 54.
54.0.0.0/24 is subnetted, 1 subnets
B 54.1.1.0 [200/1] via 150.1.3.3, 02:15:45

Ping is not working; debug on R6 shows the echo request received and an echo reply sent. I traced the echo reply all the way back to R1. 'debug mpls packet' on R1 shows it receiving the packet with the LC-ATM label as well as the VPN label..

Rack1R1#ping vrf 100 54.1.1.6 rep 1000 size 1000

Type escape sequence to abort.
Sending 1000, 1000-byte ICMP Echos to 54.1.1.6, timeout is 2 seconds:

1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data.
1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data.
1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data.
1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data
1d03h: MPLS turbo: AT3/0.1: rx: Len 67 Stack {0 6 253} - ipv4 data.
1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data.
1d03h: MPLS turbo: AT3/0.1: rx: Len 1012 Stack {0 0 251} {27 0 253} - ipv4 data..

but it seems to have trouble popping the LC-ATM label and then the VPN label. I don't have the same issue with VPNs terminated on R4, where R4 runs frame-mode MPLS; in fact, when R4 receives a labeled packet only the VPN label is left, thanks to penultimate hop popping.. well, LC-ATM does not do penultimate hop popping; not sure whether that is the reason...

So after much time I concluded this to be a dynamips bug; will try it out again on a real rack.

Other than this, the multicast was kinda fun in this lab. This is the first time I have seen people use "igmp static group" (sketched below) to achieve inter-AS multicast connectivity; happy that I worked it out myself. :) So R3 emulates a multicast receiver on AS100 to pull the multicast feed. Then, once the feed reaches AS100, it is forwarded in dense mode so everyone has a chance to get it. Of course, this only allows feeds from AS12349 into AS100.
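
The command in question goes on an interface of R3, which then behaves as if a receiver for the group were attached there; roughly as below (a sketch; the interface name is made up, the group is the one from the lab):

interface FastEthernet0/0
 ip igmp static-group 225.5.5.5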