ASR1000 LNS Config
All the testing has been carried out on IOS-XE 03.13.04.S (15.4(3)S4) on an ASR1002-X.
References:
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/bbdsl/configuration/xe-3s/asr1000/bba-xe-3s-asr1000-book.pdf
http://thenetworksbroken.blogspot.co.uk/2012/09/cisco-asr-1001-queuing-on-pppoe.html
http://www.gossamer-threads.com/lists/cisco/nsp/165742
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/lanswitch/configuration/xe-3s/lanswitch-xe-3s-book/lnsw-ether-flw-redun.html
http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/qos-mqc-xe-3s-book/qos-eth-int.html
http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/multilink_ppp.html
http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/scaling.html
http://www.cisco.com/c/en/us/td/docs/routers/10000/10008/configuration/guides/qos/qoscf/10qovrvu.html#wp1134404
Feature Support
The ASR1000 series supports PPPoA, PPPoE, PPPoEoA and Multilink PPP over LNS (MLPoLNS). It can operate as an L2TP Access Concentrator (LAC), L2TP Network Server (LNS), or PPP Termination and Aggregation (PTA) device.
Info on session and tunnel scaling limits: http://www.cisco.com/c/en/us/td/docs/routers/asr1000/configuration/guide/chassis/asrswcfg/scaling.html#pgfId-1125595
The physical interface MTU must be increased to accommodate MLP headers, PPP headers, L2TP headers, PPPoE Ethernet headers etc., as well as MPLS.
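For instance, from the working config at the end of this page, the core-facing physical interface runs with a raised MTU (the exact value needed depends on how many headers are stacked on top of the subscriber payload):

interface GigabitEthernet0/0/0
 description Core Uplink
 mtu 8900    ! leaves headroom for MPLS, L2TP, PPP/MLP and PPPoE headers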
VAIs
The ASR1000 series routers no longer support full Virtual-Access interfaces for PPP subscribers; instead they create a sub-interface per subscriber:
lns1(config-if)#do show users
    Line       User           Host(s)              Idle       Location
*   2 vty 0    james.bens     idle                 00:00:00   10.0.0.1

  Interface      User               Mode         Idle     Peer Address
  Vi2.2          test2@isp.net      PPPoVPDN     -        11.22.33.44
The global config command "aaa policy interface-config allow-subinterface" is required before any users can connect, to allow the creation of a sub-interface in the first place.
When the LNS clones the virtual-template interface and adds in any attributes received from RADIUS to terminate a PPP session on a new VAI sub-interface, the following message is logged if any config is present that requires a full VAI to be created (i.e. config that isn't supported on a sub-interface):
%FMANRP_ESS-4-FULLVAI: Session creation failed due to Full Virtual-Access Interfaces not being supported. Check that all applied Virtual-Template and RADIUS features support Virtual-Access sub-interfaces. swidb= 0x7F11844221E8, ifnum= 40
After configuring a virtual-template, the router can test whether it will allow the creation of sub-interfaces:
lns1#test virtual-template 1 subinterface
Subinterfaces cannot be created using Virtual-Template1
Interface specific commands:
 ppp timeout multilink lost-fragment 0 500
The following is a list of config, found through trial and error, that IS NOT supported but was on the traditional 7200/7300 series LNS routers (these commands cause the VAI to be a full VAI rather than a sub-interface, so the user session establishment cannot be completed):
interface Virtual-Template 1
 no snmp trap link-status              ! Not supported on VAI sub-interface, but the global config
                                       ! command "no virtual-template snmp" is supported
 ntp disable                           ! Not supported on VAI sub-interface
 ppp timeout multilink lost-fragment   ! Not supported on VAI sub-interface
 qos pre-classify                      ! Not supported on VAI sub-interface

RADIUS:
Framed-Compression = Van-Jacobson-TCP-IP   ! Defunct since dial-up, not required for xDSL services
Cisco-AVPair = "lcp:interface-config=ip unnumbered Loopback100"   ! Older "lcp" style Cisco VSAs are not supported
Cisco-AVPair = "ip:ip-unnumbered=Loopback1610"                    ! The newer "ip" style VSAs should be used instead
! However, for some features such as uRPF there is no new "ip" form; the "lcp" form is still supported:
Cisco-AVPair = "lcp:interface-config=ip verify unicast reverse-path"
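Putting those together, a hypothetical RADIUS reply for a subscriber might look like the following (the username and Loopback numbers are placeholders, not from a live config):

test@isp.net
    Service-Type = Framed-User,
    Framed-Protocol = PPP,
    Cisco-AVPair = "ip:ip-unnumbered=Loopback1610",                      ! new "ip" style VSA
    Cisco-AVPair = "lcp:interface-config=ip verify unicast reverse-path" ! "lcp" form, no "ip" equivalent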
QoS in General
The IP Type of Service (ToS) Reflect feature (available from Cisco IOS XE Release 3.7(0)S) allows the ToS value from the inner IP header to be reflected in the ToS of the outer L2TP IP header. IP ToS Reflect is enabled under the VPDN group with "ip tos reflect".
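A minimal sketch of enabling it, reusing the On-Net-DSL VPDN group from the full config at the end of this page:

vpdn-group On-Net-DSL
 accept-dialin
  protocol l2tp
  virtual-template 1
 ip tos reflect    ! copy the inner IP ToS into the outer L2TP IP header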
QoS and Port-Channels
To get QoS marking statistics in "show policy-map int x/x" output, the global command "platform qos marker-statistics" must be enabled, which first requires that no policy-maps are applied to any interface (http://www.cisco.com/c/en/us/td/docs/ios-xml/ios/qos_mqc/configuration/xe-3s/qos-mqc-xe-3s-book/qos-mrkg.html). Check with "show platform hardware qfp active feature qos config global".
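For example (assuming no policy-maps are currently attached to any interface):

lns1(config)#platform qos marker-statistics
lns1(config)#end
lns1#show platform hardware qfp active feature qos config global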
QoS is also not supported on port-channels/etherchannels/LACP bundles in the way one would expect (it's very limited). During initial testing it did not seem possible to apply per-subscriber shapers. This is an example LNS bouncing a test subscriber to activate an outbound policy on that subscriber session; the LNS has two member interfaces in a L3 port-channel (with no QoS policy on the port-channel interface or member interfaces):
lns1#show run int virtual-temp 1
Building configuration...

Current configuration : 328 bytes
!
interface Virtual-Template1
 description Test VT
 no ip address
 no ip redirects
 no ip proxy-arp
 no logging event link-status
 no peer default ip address
 keepalive 20 3
 ppp authentication chap callin
 ppp ipcp ignore-map
 ppp multilink
 ppp timeout authentication 100
 service-policy output PE-DSL-QOS-PARENT-OUT
end

lns1#show users
    Line       User           Host(s)              Idle       Location
*   2 vty 0    james.bens     idle                 00:00:00   10.0.0.1

  Interface      User               Mode         Idle     Peer Address
  Vi2.1          test@isp.net       PPPoVPDN     -        11.22.33.44

lns1#clear interface vi2.1
15:50:18.693 UTC: Port-channel1 has more than one active member link
lns1#
15:50:18.693 UTC: %QOS-6-POLICY_INST_FAILED: Service policy installation failed
Etherchannel with LACP and Load Balancing
Supported in Cisco IOS XE Release 2.5 and subsequent releases:
Egress MQC Queuing Configuration on Port-Channel Member Link - Etherchannel Load Balancing
There is no support for ingress QoS features in any release.
To use more than one interface in an LACP port-channel with load-balancing at layer 3 (such as source & destination IP hashing) and QoS (facing the SP core on the port-channel, for example), one must configure egress policies on the member links of the port-channel. They cannot be applied to the port-channel itself:
lns1(config)#int po1
lns1(config-if)#service-policy output PE-QOS-CORE-OUT
service-policy output PE-QOS-CORE-OUT not supported on this target
One option is to step down the port-channel capacity but keep resiliency by using the member links in active/standby mode:
Etherchannel Active/Standby with LACP (No Etherchannel Load Balancing)
Supported in Cisco IOS XE Release 2.4 and subsequent releases:
Egress MQC Queuing on Port-Channel Member Link - No Etherchannel Load Balancing
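A minimal sketch of the active/standby arrangement, assuming two member links; "lacp max-bundle 1" caps the bundle at one active link, leaving the other in hot-standby (matching the Gi0/0/1(hot-sby) state in the output further down):

interface Port-channel1
 lacp max-bundle 1                 ! assumption: limit to one active member
!
interface GigabitEthernet0/0/0
 channel-group 1 mode active
!
interface GigabitEthernet0/0/1
 channel-group 1 mode active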
Supposedly one can still apply egress policies to the member links, but only one member can be active at a time. During the testing below it is now possible to apply subscriber shaper policies, but they show a shaper rate that is a percentage of the one active physical link the L2TP traffic is coming in on, not of the ADSL subscriber session:
lns1#show run | s policy-map PE-DSL-QOS-PARENT-OUT
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90

lns1#show policy-map interface vi2.1
 Virtual-Access2.1

  SSS session identifier 20 -

  Service-policy output: PE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      23 packets, 1932 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 631 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 900000000, bc 9000000, be 9000000
      target shape rate 900000000

! This policy has shaped to 90% of 1Gbps, not 90% of the subscriber session speed,
! which has been correctly reported by the LAC to the LNS in the L2TP incoming call request,
! and shows under the interface output below; this example ADSL1 line has synced at 864kbps.

lns1#show int vi2.1
Virtual-Access2.1 is up, line protocol is up
  Hardware is Virtual Access interface
  Description: Test VT
  Interface is unnumbered. Using address of Loopback30 (100.66.0.13)
  MTU 1500 bytes, BW 861 Kbit/sec, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Closed
  Open: IPCP
  PPPoVPDN vaccess, cloned from AAA, Virtual-Template1
  Vaccess status 0x0
  Protocol l2tp, tunnel id 17005, session id 21157
  Keepalive set (20 sec)
     73 packets input, 3504 bytes
     98 packets output, 3866 bytes
  Last clearing of "show interface" counters never

1941-CPE#show controllers vDSL 0/0/0
                       DS Channel1      DS Channel0
Speed (kbps):                    0              864
It doesn't seem possible to apply an egress policy to a member interface now that this subscriber is online:
lns1(config-if)#do show etherchannel summ
...
Group  Port-channel  Protocol    Ports
------+-------------+-----------+-----------------------------------------------
1      Po1(RU)       LACP        Gi0/0/0(bndl)   Gi0/0/1(hot-sby)

lns1(config)#int gi0/0/0
lns1(config-if)#service-policy output PE-QOS-CORE-OUT
Service_policy with queueing features on this interface is not allowed
if session based queuing policy is already installed.
lns1(config-if)#
17:26:29.816 UTC: %QOS-6-POLICY_INST_FAILED: Service policy installation failed
Disconnecting the test subscriber allows the configuration of policies on the member links (even though the subscriber policy still shapes to 90% of the physical link and not the subscriber session). This means that once an LNS is deployed/live, the policies on member links can't be removed and reapplied to change them without disconnecting all the subscribers:
lns1#show users
    Line       User           Host(s)              Idle       Location
*   3 vty 1    james.bens     idle                 00:00:00   10.0.0.1

  Interface      User               Mode         Idle     Peer Address
  Vi2.1          test@isp.net       PPPoVPDN     -        11.22.33.44

lns1#clear interface vi2.1
lns1#show users
    Line       User           Host(s)              Idle       Location
*   3 vty 1    james.bens     idle                 00:00:00   10.0.0.1

  Interface      User               Mode         Idle     Peer Address

lns1#conf t
lns1(config)#int gi0/0/0
lns1(config-if)#service-policy output PE-QOS-CORE-OUT
lns1(config-if)#service-policy input PE-QOS-CPE-IN
lns1(config-if)#int gi0/0/1
lns1(config-if)#service-policy output PE-QOS-CORE-OUT
lns1(config-if)#service-policy input PE-QOS-CPE-IN

lns1#show users
    Line       User           Host(s)              Idle       Location
*   3 vty 1    james.bens     idle                 00:00:00   10.0.0.1

  Interface      User               Mode         Idle     Peer Address
  Vi2.1          test@isp.net       PPPoVPDN     -        11.22.33.44

lns1#show policy-map interface vi2.1
 Virtual-Access2.1

  SSS session identifier 21 -

  Service-policy output: PE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 630 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 900000000, bc 9000000, be 9000000
      target shape rate 900000000

lns1#show policy-map interface gi0/0/0
 GigabitEthernet0/0/0

  Service-policy input: PE-QOS-CPE-IN

    Class-map: NC (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: mpls experimental topmost 6
      Match: mpls experimental topmost 7
      Match: dscp cs6 (48)
      Match: dscp cs7 (56)
      Match: cos 6
      Match: cos 7
      QoS Set
        qos-group 6
          Packets marked 0

lns1#conf t
lns1(config)#int gi0/0/0
lns1(config-if)#no service-policy output PE-QOS-CORE-OUT
Remove session policy before removing policy from main interface (GigabitEthernet0/0/0)
Now the two physical LNS links have been removed from the port-channel. They are stand-alone 1Gbps IPoMPLSoE links with no ECMP, so there is one standard MPLS preferred path to the LAC. However, the LNS is still shaping the user session to 90% of the physical link speed. It was possible to apply the nested policy as above, with two priority queues, after testing the single policy below.
lns1#show run int virtual-template 1
interface Virtual-Template1
 no ip address
 no ip redirects
 no ip proxy-arp
 no logging event link-status
 no peer default ip address
 keepalive 20 3
 ppp authentication chap callin
 ppp ipcp ignore-map
 ppp link reorders
 ppp multilink
 ppp multilink interleave
 ppp timeout authentication 100
 service-policy output PE-DSL-QOS-PARENT-OUT

lns1#show run | s PE-DSL-QOS-PARENT-OUT
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90

lns1#show users
  Interface      User               Mode         Idle     Peer Address
  Vi2.20         test@isp.net       PPPoVPDN     -        11.22.33.44

lns1#show derived-config interface vi2.20
interface Virtual-Access2.20
 ip unnumbered Loopback30
 no ip redirects
 ip verify unicast reverse-path
 no ip route-cache same-interface
 no peer default ip address
 keepalive 20 3
 ppp authentication chap callin
 ppp ipcp ignore-map
 ppp link reorders
 ppp timeout authentication 100
end

lns1#show policy-map interface vi2.20
 Virtual-Access2.20

  SSS session identifier 211 -

  Service-policy output: PE-DSL-QOS-PARENT-OUT

    Class-map: class-default (match-any)
      14 packets, 1176 bytes
      30 second offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 630 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 0/0
      shape (average) cir 900000000, bc 9000000, be 9000000
      target shape rate 900000000
Load-Balancing
MLPPP is supported; however, CEF per-packet load balancing is not (the interface command "ip load-sharing per-packet" has been removed), so ADSL/SDSL/VDSL PPP services that bond multiple circuits need to use MLPPP.
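A minimal sketch of the MLPPP-relevant lines, lifted from the virtual-template and global config used later on this page:

multilink bundle-name authenticated   ! name bundles by authenticated username
!
interface Virtual-Template1
 ppp multilink                        ! allow members to join an MLP bundle
 ppp multilink interleave             ! interleave priority traffic between fragments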
MLPPP
Even though MLPPP is supported, it's a bit flakey. Initial test MLPPP sessions worked, but a "show ppp multilink" caused a traceback (although this might be because the LNS was receiving L2TP tunnels over a port-channel from the LAC, which as above isn't well supported):
lns1-isp.core#show ppp multilink

Virtual-Access4
  Bundle name: ld5-test3@realm.net
  Remote Username: ld5-test3@realm.net
  Remote Endpoint Discriminator: [1] ld5-test3@realm
  Local Username: lns1-isp.core
  Local Endpoint Discriminator: [1] lns1-isp.core
  Bundle up for 00:04:31, total bandwidth 1130, load 1/255
  Receive buffer limit 24384 bytes, frag timeout 1000 ms
  Bundle is Distributed
  Using relaxed lost fragment detection algorithm.
    0/0 fragments/bytes in reassembly list
    0 lost fragments, 0 reordered
    0/0 discarded fragments/bytes, 0 lost received
    0x0 received sequence, 0x0 sent sequence
  Platform Specific Multilink PPP info
    NOTE: internal keyword not applicable on this platform
    Interleaving: Disabled, Fragmentation: Disabled
  Member links: 2 (max 16, min not set)
    bt-wbmc-1:Vi3  (LAC.IP.ADDR.HERE), since 00:04:36, unsequenced
    bt-wbmc-1:Vi5  (LAC.IP.ADDR.HERE), since 00:04:01, unsequenced
No inactive multilink interfaces
lns1-isp.core#
Dec  7 2015 13:13:49.464 UTC: %IOSXE_MLP-2-STATS_TIMED_OUT: Timed out for getting MLP bundle stats.
-Traceback= 1#a3fe01abba2bac2871f0e4442db8a494 :7FA544DDB000+DC19997 :7FA544DDB000+8AC861C :7FA544DDB000+8AC7C7C :7FA544DDB000+71754E4 :7FA544DDB000+717458D :7FA544DDB000+717287A :7FA544DDB000+B97D079 :7FA544DDB000+7190190 :7FA544DDB000+A89AACD
As of IOS-XE 3.7.1S the ASR1000 series supports 8 member links per bundle, a maximum of 4000 member links, and a maximum of 4000 bundles when running MLPoLNS (PPP sessions over L2TP).
The Cisco ASR 1000 Series Aggregation Services Routers do not support the following MLP features:
- In-Service Software Upgrade (ISSU) and Stateful Switchover (SSO) for MLP bundles
- The broadband L4 Redirect feature and the Intelligent Services Gateway feature
- Per-user firewall
- Lawful intercept
- MLP with MPLS-TE FRR
- Change of Authorization (CoA)
- Layer 2 input QoS classification
- The Multiclass Multilink Protocol (MCMP) RFC 2686 extension to LFI
- Per-user Access Control Lists (ACLs) applied through the RADIUS server are not supported. However, ACLs applied through the virtual template definition for the bundle are supported.
- Only the MLP long-sequence number format is supported for the packet header format option.
Important restrictions of MLPPP on ASR1000s to note are:
- Layer 2 Tunnel Protocol (L2TP) over IPsec is not supported (so IL3 support might be out of the question?).
- QoS (other than downstream Model-F shaping) on interfaces and tunnels towards the customer premises equipment (CPE) is not supported. QoS Model F requires three levels of shaping, queuing and scheduling: a shaper at the sub-interface, then a per-session shaper, then individual class queues.
- Although per-packet load balancing is not supported, the configuration is not blocked and the functionality is operational (but not tested). Per-packet load balancing cannot be used with MLPoLNS because MLPoLNS requires a single path per destination IP address.
QoS with MLPPP over L2TP (MLPoLNS)
To rate limit a broadband MLP bundle session, use a hierarchical QoS (HQoS) policy with a parent shaper in the class-default class. The Cisco ASR 1000 Series Aggregation Services Routers support HQoS queuing only in the egress (output) direction, and not in the ingress direction:
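The relevant extract from the full config below: a child policy with a priority class, nested under a parent class-default shaper:

policy-map PE-DSL-QOS-CPE-OUT
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1                  ! strict priority queue for real-time traffic
 class class-default
!
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90          ! parent shaper, 90% of the session rate
  service-policy PE-DSL-QOS-CPE-OUT ! nested child queuing policy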
A full example of a working config, taken from an ASR1002-X, but it should work on any ASR1000 series router running IOS XE:
vrf definition On-Net-DSL
 !
 address-family ipv4
 exit-address-family
 !
 address-family ipv6
 exit-address-family
!
vrf definition 3rd-Party-DSL
 !
 address-family ipv4
 exit-address-family
 !
 address-family ipv6
 exit-address-family
!
vrf definition RADIUS
 rd 10.0.0.1:2222
 !
 address-family ipv4
  route-target export 1111:2222
  route-target import 1111:2222
 exit-address-family
!
aaa new-model
!
aaa group server radius RAD-VIP
 server name RADIUS
 ip vrf forwarding RADIUS
 ip radius source-interface Loopback9
exit
!
aaa authentication ppp default group RAD-VIP
aaa authorization network default group RAD-VIP
aaa accounting network default start-stop group RAD-VIP
aaa session-id common
aaa policy interface-config allow-subinterface
!
subscriber templating
!
multilink bundle-name authenticated
vpdn enable
!
vpdn-group On-Net-DSL
 description On-Net-DSL
 accept-dialin
  protocol l2tp
  virtual-template 1
 vpn vrf On-Net-DSL
 source-ip 10.0.0.243
 local name lns-hostname-here
 lcp renegotiation always
 l2tp tunnel password 7 lalalalala
 ip pmtu
!
vpdn-group 3rd-Party-DSL
 description 3rd-Party-DSL
 accept-dialin
  protocol l2tp
  virtual-template 1
 vpn vrf 3rd-Party-DSL
 source-ip 1.1.1.1
 local name lns-hostname-here
 lcp renegotiation always
 l2tp tunnel password 7 lallalalalal
 ip pmtu
!
no virtual-template snmp
!
policy-map PE-DSL-QOS-CPE-OUT
 class REALTIME
  police 20000 conform-action transmit exceed-action transmit violate-action drop
  priority level 1
 class class-default
policy-map PE-DSL-QOS-PARENT-OUT
 class class-default
  shape average percent 90
  service-policy PE-DSL-QOS-CPE-OUT
!
interface Loopback9
 description Loopback for RADIUS
 vrf forwarding RADIUS
 ip address 11.22.33.55 255.255.255.255
!
interface GigabitEthernet0/0/0
 description Core Uplink
 mtu 8900
 no ip address
 negotiation auto
 cdp enable
 lacp rate fast
 service-policy output PE-QOS-CORE-OUT
 channel-group 1 mode active
 hold-queue 4096 in
!
interface GigabitEthernet0/0/0.100
 description MPLS Uplink
 encapsulation dot1Q 100
 ip address 10.10.0.34 255.255.255.252
 no ip redirects
 no ip proxy-arp
 ip ospf network point-to-point
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet0/0/0.201
 description DSL Uplink 1
 encapsulation dot1Q 201
 vrf forwarding 3rd-Party-DSL
 ip address 1.1.1.1 255.255.255.254
 no ip redirects
 no ip proxy-arp
!
interface GigabitEthernet0/0/0.202
 description DSL Uplink 2
 encapsulation dot1Q 202
 vrf forwarding On-Net-DSL
 ip address 10.0.0.243 255.255.255.254
 no ip redirects
 no ip proxy-arp
!
interface Virtual-Template1
 description On-Net LACs
 no ip address
 no ip redirects
 no ip proxy-arp
 no logging event link-status
 no peer default ip address
 keepalive 20 3
 ppp authentication chap callin
 ppp ipcp ignore-map
 ppp multilink
 ppp multilink interleave
 ppp timeout authentication 100
!
interface Virtual-Template8
 description 3rd Party DSL Provider LACs
 no ip address
 no ip redirects
 no ip proxy-arp
 no logging event link-status
 no peer default ip address
 keepalive 20 3
 ppp authentication chap callin
 ppp ipcp ignore-map
 ppp multilink
 ppp timeout authentication 100
!
ip route vrf On-Net-DSL 0.0.0.0 0.0.0.0 10.0.0.242
ip route vrf 3rd-Party-DSL 0.0.0.0 0.0.0.0 1.1.1.2
!
ip radius source-interface Loopback9 vrf RADIUS
!
radius server RADIUS
 address ipv4 11.22.33.44 auth-port 1812 acct-port 1813
 key 7 lalalalalala