Date created: Wednesday, November 18, 2015 11:26:57 AM. Last modified: Tuesday, January 17, 2017 4:51:42 PM
LNS Shaping & LLQ (ASR1000 series)
References:
http://www.cisco.com/c/en/us/td/docs/ios/12_0s/feature/guide/fsllq26.html
http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-policing/23706-pppoe-qos-dsl.html
http://www.cisco.com/c/en/us/td/docs/ios/12_2/qos/configuration/guide/fqos_c/qcfpolsh.html#wp1012025
http://www.cisco.com/c/en/us/support/docs/routers/7200-series-routers/110850-queue-limit-output-drops-ios.html
http://www.cisco.com/c/en/us/support/docs/quality-of-service-qos/qos-packet-marking/10100-priorityvsbw.html
http://docs.tpu.ru/docs/cisco/ios120/120newft/120t/120t7/pqcbwfq.pdf
http://www.cisco.com/c/en/us/td/docs/ios/12_2sb/feature/guide/mpq.html
http://www.cisco.com/c/en/us/td/docs/ios/qos/configuration/guide/child_svc_policy.pdf
https://tools.cisco.com/bugsearch/bug/CSCsz20271/?referring_site=bugquickviewredir
https://tools.cisco.com/bugsearch/bug/CSCub64068/?referring_site=bugquickviewredir
http://www.cisco.com/c/en/us/td/docs/routers/10000/10008/configuration/guides/qos/qoscf/10qovrvu.html#wp1134404
http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/multilink_ppp.html#pgfId-1096305
http://www.cisco.com/c/en/us/products/collateral/routers/asr-1000-series-aggregation-services-routers/q-and-a-c67-731655.html
This testing is with an ASR1002-X LNS running first IOS-XE 03.10.04.S (IOS 15.3(3)S4) and then IOS-XE 03.13.04.S (15.4(3)S4). The test CPE was a 1941 running IOS 15.3(3)M5.
Contents
LNS QoS using LLQ/CBWFQ
LNS Model F Shaping
CPE Hierarchical QoS Framework
LNS QoS using LLQ/CBWFQ
When using shapers with LLQ/CBWFQ on the virtual-access interfaces, various commands are not supported.
! These errors are logged when the test ADSL/PPP session first connects (LNS using 03.10.04.S at this point):

lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#
10:55:25.459 UTC: %CPPOSLIB-3-ERROR_NOTIFY: SIP0: cpp_cp: cpp_cp encountered an error -Traceback= 1#54e74bbead750509eb73bfab6e933a68 errmsg:7FBA19529000+121D cpp_common_os:7FBA1C547000+DA15 cpp_common_os:7FBA1C547000+D914 cpp_common_os:7FBA1C547000+19BEE cpp_bqs_mgr_lib:7FBA2DD83000+188BF cpp_qos_ea_lib:7FBA2F16A000+1FE51 cpp_qos_ea_lib:7FBA2F16A000+1EF02 cpp_qos_ea_lib:7FBA2F16A000+1B07C cpp_qos_ea_lib:7FBA2F16A000+DC29 cpp_qos_smc_lib:7FBA2F3EF000+19B6 cpp_qos_ea_lib:7FBA2F16A000+4FB84 binos:7FBA1A8B20

! This is the policy being applied by RADIUS:

policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 95
  service-policy PE-DSL-QOS-CPE-OUT

policy-map PE-DSL-QOS-CPE-OUT
 class NC
  priority percent 2
 class REALTIME
  priority percent 10
 class APP-1
  bandwidth percent 22
 class APP-2
  bandwidth percent 24
 class APP-3
  bandwidth percent 12
 class APP-4
  bandwidth percent 5
 class class-default
  bandwidth percent 25
"priority percent" and "bandwidth percent" seem to cause issues, striping back the "bandwidth percent" but keeping "priority percent" reservations was still causing issues
policy-map PE-DSL-QOS-CPE-OUT
 class NC
  priority percent 2
 class REALTIME
  priority percent 10
 class APP-1
 class APP-2
 class APP-3
 class APP-4
 class class-default

! When the test PPP user connects, the session is flapping up and down with the LNS logging the following:

lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first
lns1#please remove queuing feature from child policy first

! CPE logs

11:44:14.218 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to down
11:44:15.898 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to up
11:44:17.898 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to down
11:44:19.354 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to up
11:44:21.354 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to down
11:44:22.842 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to up
11:44:24.842 UTC: %LINEPROTO-5-UPDOWN: Line protocol on Interface Virtual-Access2, changed state to down
One presumes this is because the "percent" keyword on either "priority" or "bandwidth" creates a packet queue in the child policy, which is not supported (as per the LNS error message above). If the child policy is stripped down to the following, using a policer on each priority class rather than "priority percent", the test ADSL line can connect:
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 95
  service-policy PE-DSL-QOS-CPE-OUT

policy-map PE-DSL-QOS-CPE-OUT
 class NC
  police 20000 conform-action transmit exceed-action transmit violate-action transmit
  priority level 2
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1
 class APP-1
 class APP-2
 class APP-3
 class APP-4
 class class-default
However, no bandwidth reservations can be made against the other classes whilst users are connected (now the ASR has been switched to IOS-XE 03.13.04.S to see if the commands are supported):
lns1(config)#policy-map PE-DSL-QOS-CPE-OUT
lns1(config-pmap)# class class-default
lns1(config-pmap-c)#bandwidth ?
             Kilo Bits per second
  percent    % of total Bandwidth
  remaining  percent/ratio of the remaining bandwidth
lns1(config-pmap-c)#bandwidth percent 24
user-defined classes with queueing features are not allowed in a service-policy at sub-interface/pvc in conjunction with user-defined classes with queueing features in a service-policy at sub-interface/pvc/pvp

14:42:59.721 UTC: %IOSXE-3-PLATFORM: SIP0: cpp_cp: QFP:0.0 Thread:137 TS:00000003394541849117 %QOS-3-INVALID_STAT_QID: Stat Queuing error for interface EVSI30, qid 3084 vqid 0 -Traceback=1#51dd690e1b991bd20da3f9977f85d148 4078813b 4080c9f6 40053ad5 40054958 40054db1 40737b89 4077310c 4077435a
After disconnecting the test user the changes can be made and the user can reconnect; however, it doesn't seem to be working (there is no bandwidth reservation shown below for class-default, probably because "percent" is used again). Also note that the two priority classes don't appear to be using the priority levels specified, although the policers do seem to be in place:
! Side note: the router had to be rebooted before this would work. This happened once before when changing the
! policy on the port-channel, and now, having split the interface out and made another QoS policy change, a
! reboot was required again.

lns1#show run | s policy-map PE-DSL-QOS-PARENT-OUT
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 95
  service-policy PE-DSL-QOS-CPE-OUT

lns1#show run | s policy-map PE-DSL-QOS-CPE-OUT
policy-map PE-DSL-QOS-CPE-OUT
 class NC
  police 20000 conform-action transmit exceed-action transmit violate-action transmit
  priority level 2
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1
 class APP-1
 class APP-2
 class APP-3
 class APP-4
 class class-default
  bandwidth percent 25

lns1#show policy-map interface vi2.2 output
 Virtual-Access2.2

  SSS session identifier 24 -

  Service-policy output: PE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      2 packets, 168 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 665 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 950000000, bc 9500000, be 9500000
      target shape rate 950000000

      Service-policy : PE-DSL-QOS-CPE-OUT

        Class-map: NC (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 6
          Match: mpls experimental topmost 7
          Match: dscp cs6 (48)
          Match: dscp cs7 (56)
          Match: cos 6
          Match: cos 7
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 0 packets, 0 bytes; actions:
              transmit
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps

        Class-map: REALTIME (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 5
          Match: dscp ef (46)
          Match: dscp cs5 (40)
          Match: cos 5
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps

        Class-map: APP-1 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 4
          Match: dscp af41 (34)
          Match: dscp af42 (36)
          Match: dscp af43 (38)
          Match: dscp cs4 (32)
          Match: cos 4

        Class-map: APP-2 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 3
          Match: dscp af31 (26)
          Match: dscp af32 (28)
          Match: dscp af33 (30)
          Match: dscp cs3 (24)
          Match: cos 3

        Class-map: APP-3 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 2
          Match: dscp af21 (18)
          Match: dscp af22 (20)
          Match: dscp af23 (22)
          Match: dscp cs2 (16)
          Match: cos 2

        Class-map: APP-4 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 1
          Match: dscp af11 (10)
          Match: dscp af12 (12)
          Match: dscp af13 (14)
          Match: dscp cs1 (8)
          Match: cos 1

        Class-map: class-default (match-any)
          2 packets, 168 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: any
LNS Model F Shaping
Below, an attempt is made to configure a Model F QoS policy, as something has finally turned up in the Cisco docs showing it to be the only supported QoS deployment model on the ASR1000 series when providing subscriber services. Model F means: a class-default-only queuing policy-map on the Ethernet sub-interface, and a two-level hierarchical queuing policy-map on the session (applied through the virtual-template or RADIUS configuration). Model D.2 and Model F are the only supported models when using LACP links with broadband subscribers, and Model F is the only supported model when providing QoS for MLP subscribers. However, that page also states specifically that Model F is not supported on the ASR1002-X, the current test box!
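To visualise the hierarchy being built below (a sketch using the policy names from this page, not a diagram taken from the Cisco documentation):

! Model F (sketch):
!   Gi0/0/0.201 sub-interface -> flat class-default-only shaper   MODEL-F-SUB-INT-OUT   ("grandparent")
!     per-session Virtual-Access (cloned from Virtual-Template1 / pushed by RADIUS):
!       parent -> class-default shaper                            PE-DSL-QOS-PARENT-OUT
!         child -> LLQ/CBWFQ classes                              PE-DSL-QOS-CPE-OUT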
policy-map MODEL-F-SUB-INT-OUT
 class class-default
  shape average percent 100
 exit
exit
int Gi0/0/0.201
 service-policy output MODEL-F-SUB-INT-OUT

! Produces the error...
Only class-default shaper in flat policy-map on main interface GigabitEthernet0/0/0 can co-exist with QoS on sub targets

! There was a core-facing policy on the port-channel that was moved down to the individual link now the
! port-channel has been broken up. It seems this can't be removed:

int gi0/0/0
 no service-policy output PE-QOS-CORE-OUT
Remove session policy before removing policy from main interface (GigabitEthernet0/0/0)

! There aren't any subscribers online. Shutting the interface down doesn't solve this.
! The router had to be rebooted before the policy could be removed.
! This is the config now in place:

policy-map MODEL-F-SUB-INT-OUT
 class class-default
  shape average percent 100

policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90
  service-policy PE-DSL-QOS-CPE-OUT

policy-map PE-DSL-QOS-CPE-OUT
 class NC
  police 20000 conform-action transmit exceed-action transmit violate-action transmit
  priority level 2
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1
 class APP-1
 class APP-2
 class APP-3
 class APP-4
 class class-default

interface GigabitEthernet0/0/0.201
 service-policy output MODEL-F-SUB-INT-OUT

interface Virtual-Template1
 service-policy output PE-DSL-QOS-PARENT-OUT

! With one test sub online everything has worked regarding the priority queues:
! there are two priority queues at two different levels and policers are in place on each priority queue,
! but the subscriber is still shaped to 90% of the physical interface speed (so 900Mbps).
! The LNS is though, as it always has been, receiving the actual subscriber speed from the LAC in the L2TP incoming call.

lns1#show users | i test
Vi2.1    test@ist.net    PPPoVPDN    -    11.22.33.44

lns1#show interfaces vi2.1
Virtual-Access2.1 is up, line protocol is up
  Hardware is Virtual Access interface
  Interface is unnumbered. Using address of Loopback30 (100.66.0.13)
  MTU 1500 bytes, BW 2265 Kbit/sec, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Closed
  Open: IPCP
  PPPoVPDN vaccess, cloned from AAA, Virtual-Template1
  Vaccess status 0x0
  Protocol l2tp, tunnel id 20094, session id 57197
  Keepalive set (20 sec)
     848 packets input, 40704 bytes
     893 packets output, 41634 bytes
  Last clearing of "show interface" counters never

lns1#show policy-map interface vi2.1
 Virtual-Access2.1

  SSS session identifier 1 -

  Service-policy output: PE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      30 packets, 2600 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 3628 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 2/284
      shape (average) cir 900000000, bc 9000000, be 9000000
      target shape rate 900000000

      Service-policy : PE-DSL-QOS-CPE-OUT

        queue stats for all priority classes:
          Queueing
          priority level 2
          queue limit 512 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

        queue stats for all priority classes:
          Queueing
          priority level 1
          queue limit 512 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

        Class-map: NC (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 6
          Match: mpls experimental topmost 7
          Match: dscp cs6 (48)
          Match: dscp cs7 (56)
          Match: cos 6
          Match: cos 7
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 0 packets, 0 bytes; actions:
              transmit
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps
          Priority: Strict, b/w exceed drops: 0
          Priority Level: 2

        Class-map: REALTIME (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 5
          Match: dscp ef (46)
          Match: dscp cs5 (40)
          Match: cos 5
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps
          Priority: Strict, b/w exceed drops: 0
          Priority Level: 1

        Class-map: APP-1 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 4
          Match: dscp af41 (34)
          Match: dscp af42 (36)
          Match: dscp af43 (38)
          Match: dscp cs4 (32)
          Match: cos 4

        Class-map: APP-2 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 3
          Match: dscp af31 (26)
          Match: dscp af32 (28)
          Match: dscp af33 (30)
          Match: dscp cs3 (24)
          Match: cos 3

        Class-map: APP-3 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 2
          Match: dscp af21 (18)
          Match: dscp af22 (20)
          Match: dscp af23 (22)
          Match: dscp cs2 (16)
          Match: cos 2

        Class-map: APP-4 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 1
          Match: dscp af11 (10)
          Match: dscp af12 (12)
          Match: dscp af13 (14)
          Match: dscp cs1 (8)
          Match: cos 1

        Class-map: class-default (match-any)
          30 packets, 2600 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: any
          queue limit 3628 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 2/284
After that failed attempt, in which the subscriber is always shaped to a percentage of the physical interface on which their L2TP session ingresses, TAC advised the following (after much digging); basically, RADIUS CoA is the way forward:
The parent policy of a hierarchical policy-map that is applied to the session will always base its rate on the physical interface. That is the expected behavior.
This behaviour is expected to be the same on a single GigE or on a port-channel configuration. One way around it is to override the parent shaper rate with a RADIUS CoA message that updates the parent policy-map shaper rate, or to configure ANCP and have ANCP update the Model F grandparent shaper rate associated with the flat sub-interface policy-map. Here are some examples of ANCP:
http://www.cisco.com/c/en/us/td/docs/ios/ios_xe/ancp/configuration/guide/2_xe/ancp_xe_book/ancp_xe.html
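As a sketch of what the CoA route involves (none of this was tested here; the server IP, key and AVPair syntax are assumptions for illustration only):

! LNS side: accept dynamic-authorisation (CoA) requests from the RADIUS server.
aaa server radius dynamic-author
 client 192.0.2.10 server-key ExampleKey
 port 3799
 auth-type any

! RADIUS side (illustrative only): a CoA request targeting the subscriber session,
! e.g. keyed on Acct-Session-Id, carrying a vendor AVPair that re-applies or
! re-parameterises the session output policy, along the lines of:
!   Acct-Session-Id = "<session id>"
!   Cisco-AVPair    = "ip:sub-qos-policy-out=PE-DSL-QOS-PARENT-OUT"
! The exact AVPair needed to override just the parent shaper rate is release and
! feature dependent, so check the per-session QoS / ISG CoA documentation.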
CPE Hierarchical QoS Framework
On the CPE things seem to be working slightly better, and the priority queues appear to work. The hold-queue needed increasing on the dialer interface (it can't be increased on the ATM/VDSL interface; the command is there but it didn't seem to work). Perhaps this is in fact the tx-ring? But that command is missing?!
1941-3(config-if)#int atm0/0/0
1941-3(config-if)#hold-queue 15000 out
Cannot change hold-queue out for non FIFO queue
1941-3(config-if)#tx?
% Unrecognized command
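For reference, and hedged as an assumption since it was not verified on this EHWIC: on many ATM router interfaces the transmit ring is tuned per PVC rather than with an interface-level command, so if the knob exists on this hardware at all it would look something like this (the VPI/VCI and ring size are placeholders):

interface ATM0/0/0
 pvc 0/38
  ! tx-ring-limit is documented for ATM PVCs on other platforms; it may simply
  ! not be available on this VDSL/ADSL EHWIC.
  tx-ring-limit 3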
The hold queue is the interface buffer, taken from RAM. The aggregate total of all the traffic class queues in a policy must not exceed the hold-queue size. The hold-queue receives traffic after it has been prioritised by a CBWFQ policy, with packets from each class being queued into the hold-queue based on their priority. The hold-queue then feeds the tx-ring buffer, which is FIFO, for physical transmission. If proper queueing is in place the tx-ring buffer should stay small, because it can add latency: it only queues traffic after it has already been prioritised.
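Both sizes can be checked with standard show commands (the interface name is the CPE's Dialer2 from below; the filters just trim the output):

! Aggregate interface output hold-queue (size/max):
1941-3#show interfaces Dialer2 | include Output queue
! Per-class queue limits and depths inside the attached policy:
1941-3#show policy-map interface Dialer2 | include queue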
Note the hold-queue size on the Dialer2 interface below is set to 12289. There are three queues, each 64 packets long: one 64-packet queue for each of the two priority levels and a 64-packet queue for class-default. These are assumed to be 64-byte packets, so 64 packets * 64 bytes = 4096 bytes, * 3 = 12288 bytes. For some reason this doesn't work; setting the interface hold-queue to 12288 and applying the policy gives the following error:
%QOS-4-QLIMIT_HQUEUE_VALUE_SYNC_ISSUE: The sum of all queue-limit value is greater than the hold-queue value.
But 12288 is the exact amount of memory required. It seems the hold-queue has to be set one packet larger than this value for IOS to accept the policy (an off-by-one in the check, perhaps?). When testing with two 64-packet queues, 8192 bytes would be needed; IOS wouldn't accept a policy with two 64-packet queues without setting the hold-queue to 8193.
1941-3#show run | s policy-map CPE-DSL-QOS-PARENT-OUT
policy-map CPE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90
  service-policy CPE-DSL-QOS-PE-OUT

1941-3#show run | s policy-map CPE-DSL-QOS-PE-OUT
policy-map CPE-DSL-QOS-PE-OUT
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1
 class NC
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 2
 class APP-1
 class APP-2
 class APP-3
 class APP-4
 class class-default

! The policy map is applied under dialer2 before the PPP session was established

int dialer 2
 service-policy output CPE-DSL-QOS-PARENT-OUT
 hold-queue 12289 out

1941-3(config-pmap)#do show policy-map int di2
 Dialer2

  Service-policy output: CPE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      2907 packets, 4319592 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/8/0
      (pkts output/bytes output) 1336/1956624
      shape (average) cir 50400, bc 202, be 201
      target shape rate 50400

      Service-policy : CPE-DSL-QOS-PE-OUT

        queue stats for all priority classes:
          Queueing
          priority level 2
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 8/528

        queue stats for all priority classes:
          Queueing
          priority level 1
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 12/792

        Class-map: REALTIME (match-any)
          720 packets, 1065724 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 5
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp ef (46)
            720 packets, 1065724 bytes
            30 second rate 0 bps
          Match: dscp cs5 (40)
            0 packets, 0 bytes
            30 second rate 0 bps
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 12 packets, 792 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 708 packets, 1064932 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps
          Priority: Strict, b/w exceed drops: 0
          Priority Level: 1

        Class-map: NC (match-any)
          863 packets, 1286516 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: mpls experimental topmost 6
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: mpls experimental topmost 7
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp cs6 (48)
            863 packets, 1286516 bytes
            30 second rate 0 bps
          Match: dscp cs7 (56)
            0 packets, 0 bytes
            30 second rate 0 bps
          police:
              cir 20000 bps, bc 1500 bytes, be 1500 bytes
            conformed 8 packets, 528 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              transmit
            violated 855 packets, 1285988 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps, violated 0000 bps
          Priority: Strict, b/w exceed drops: 0
          Priority Level: 2

        Class-map: APP-1 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 4
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af41 (34)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af42 (36)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af43 (38)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp cs4 (32)
            0 packets, 0 bytes
            30 second rate 0 bps

        Class-map: APP-2 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 3
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af31 (26)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af32 (28)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af33 (30)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp cs3 (24)
            0 packets, 0 bytes
            30 second rate 0 bps

        Class-map: APP-3 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 2
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af21 (18)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af22 (20)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af23 (22)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp cs2 (16)
            0 packets, 0 bytes
            30 second rate 0 bps

        Class-map: APP-4 (match-any)
          0 packets, 0 bytes
          30 second offered rate 0000 bps
          Match: mpls experimental topmost 1
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af11 (10)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af12 (12)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp af13 (14)
            0 packets, 0 bytes
            30 second rate 0 bps
          Match: dscp cs1 (8)
            0 packets, 0 bytes
            30 second rate 0 bps

        Class-map: class-default (match-any)
          1324 packets, 1967352 bytes
          30 second offered rate 0000 bps, drop rate 0000 bps
          Match: any
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/8/0
          (pkts output/bytes output) 1316/1955304

! Default configuration

1941-3#show int di2
Dialer2 is up, line protocol is up (spoofing)
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)

! After hold-queue is increased

Dialer2 is up, line protocol is up (spoofing)
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1563
  Queueing strategy: fifo
  Output queue: 0/12289 (size/max)
Using "fair-queue" under class class-default can cause some unwanted behaviour. From Cisco docs
"In HQF images, flow-based fair-queues, configurable in both user-defined classes and class default with fair-queue, are scheduled equally (instead of by Weight)...In HQF, Class Default defaults to a FIFO queue and is allocated a pseudo bandwidth reservation based on the leftover allocations from User Defined Classes...At all times, regardless of configuration, class class-default in HQF images will always have an implicit bandwidth reservation equal to the unused interface bandwidth not consumed by user-defined classes. By default, the class-default class receives a minimum of 1% of the interface or parent shape bandwidth. It is also possible to explicitly configure the bandwidth CLI in class default."
Note: The default per-class queue-limit is 64 packets.
Note: A physical interface defaults to 1000 aggregate packet buffers, tunable with the interface CLI "hold-queue <size> out".
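A minimal sketch of tuning both knobs (the policy and interface names and the values are arbitrary examples, not recommendations):

! Raise the per-class queue-limit inside the policy...
policy-map EXAMPLE-OUT
 class class-default
  queue-limit 128 packets
! ...and the aggregate output queue on the physical interface it hangs off.
interface GigabitEthernet0/0
 hold-queue 2000 out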
If fair-queue is configured in class Class-Default, the behavior matches the HQF “bandwidth” + “fair-queue” behavior
If fair-queue and random-detect are configured together in Class-Default, the behavior matches the HQF “bandwidth” + “random-detect” + “fair-queue” behavior
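As a sketch of those two cases (the policy name is a placeholder; this was not part of the testing above): enabling flow-based fair-queueing in class-default of a child policy looks like the following, and the combined fair-queue + WRED case simply adds random-detect to the same class.

policy-map EXAMPLE-CHILD-OUT
 class class-default
  fair-queue
  ! add "random-detect" here for the fair-queue + random-detect behaviour described above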
When bandwidth and fair-queue are applied together to an HQF user-defined class, each flow-based queue is allocated a queue-limit equal to 0.25 * the class queue-limit. Because the default queue-limit is 64 packets, each flow-based queue in a fair-queue will be allocated 16 packets. If four flows were traversing this class, by default each flow-queue would hold 16 packets, so you would never expect to see more than 64 packets enqueued in total (4 * 16). All tail drops from an individual flow-queue are recorded as flowdrops. If the number of flow-queues is significantly high, as is the queue-limit, there is another opportunity for no-buffer drops. For example, assuming the policy attach-point is a physical interface, where 1000 aggregate buffers are allocated:
policy-map TEST
 class 1
  bandwidth 32
  fair-queue 1024
  queue-limit 128
In this configuration, appreciable traffic in all flow queues can starve aggregate interface buffers and result in no-buffer drops in other user-defined classes (see Cisco bug ID CSCsw98427). This is because 1024 flow queues, each with a 32-packet queue-limit, can easily oversubscribe the 1000 aggregate interface Class Based Queuing buffer allocation.
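The worked numbers behind that statement, derived from the rules quoted above:

! per-flow queue-limit = 0.25 * 128 (class queue-limit)       = 32 packets
! worst case enqueued  = 1024 flow queues * 32 packets        = 32768 packets
! aggregate buffers    = ~1000 packets at a physical attach-point
! => the flow queues can starve other classes of buffers (no-buffer drops)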
HQF “bandwidth” + “random-detect” + “fair-queue” behavior:
Example:

policy-map TEST
 class 1
  bandwidth 32
  fair-queue 1024
  queue-limit 128
  random-detect
Same as bandwidth and fair-queue in the previous section, except that the WRED average queue size is calculated every time a packet arrives, to decide whether the packet should be random-dropped or tail-dropped. As with pre-HQF, all flow-queues share one instance of the WRED thresholds, meaning packets enqueued to all flow-queues are used to calculate the WRED average queue depth, and the drop decision then applies the WRED minimum and maximum thresholds against the aggregate packets in all flow queues. However, in another departure from the bandwidth and fair-queue case, because one instance of the WRED thresholds applies to all flow-based queues, the individual flow-queues' queue-limit (0.25 * "queue-limit") is ignored and the class's aggregate queue-limit is honoured instead for the current queue-limit check.
! All tail drops from an individual flow-queue are recorded as flowdrops

1941-3#show policy-map interface di2

    Class-map: class-default (match-any)
      606 packets, 907684 bytes
      30 second offered rate 58000 bps, drop rate 7000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops/flowdrops) 17/85/0/85
      (pkts output/bytes output) 521/779674
      Fair-queue: per-flow queue limit 16 packets