Date created: Friday, April 17, 2015 9:52:59 AM. Last modified: Wednesday, March 28, 2018 5:08:11 PM

MLPPP over ADSL

References:
https://tools.ietf.org/html/rfc1990

On the LNS, "lcp renegotiation always" must be configured under the VPDN group.
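A minimal LNS-side sketch of where this goes (the group name and LAC hostname below are hypothetical placeholders, not from this deployment):

vpdn-group WHOLESALE-IN
 accept-dialin
  protocol l2tp
  virtual-template 1
 terminate-from hostname lac.wholesaler.example
 lcp renegotiation always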

Cisco 1941 with dual EHWIC-VA-DSL-M:

interface ATM0/0/0
 description PSTN 67890
 mtu 1500
 no ip address
 no atm ilmi-keepalive
 pvc 0/38
  encapsulation aal5mux ppp Virtual-Template1

interface ATM0/1/0
 description PSTN 12345
 mtu 1500
 no ip address
 no atm ilmi-keepalive
 pvc 0/38
  encapsulation aal5mux ppp Virtual-Template1

interface Virtual-Template1
 no ip address
 ppp multilink
 ppp multilink load-threshold 1 either
 ! Set the load-threshold to 1; this means any link that is available will be
 ! added to the bundle if possible
 ppp multilink group 1
! Sometimes when a member link goes down the entire MLPPP bundle has issues
! until that member link times out and is removed. This can be mitigated with
! the following optional settings:
! ppp timeout idle 10
! keepalive 5

interface Multilink1
 ip address negotiated
 no ip redirects
 no ip unreachables
 no ip proxy-arp
 no ip route-cache
 ntp disable
 no cdp enable
 ip virtual-reassembly in
 ip tcp adjust-mss 1430
 ppp chap hostname user@realm.net
 ppp chap password 7 1234567890
 ppp link reorders
 ! Allow out-of-order packets
 ppp timeout multilink lost-fragment 0 500
 ! Default is 1 second
 ppp multilink
 ppp multilink interleave
 ppp multilink group 1
 ppp multilink fragment delay 20
 ! no ppp multilink fragmentation

! Enabling MLP fragmentation reduces packet delay by reducing serialisation delay;
! perhaps preferred for bundles carrying VoIP.
!
! When larger packets are fragmented, MLP tries to send each fragment in order
! across the member links. Smaller packets waiting to transmit can be held up.
! Fragmentation allows the smaller packets to be sent in-between larger packet
! fragments, at the expense of a little extra CPU for the reassembly. Disabling
! fragmentation gives a little extra throughput and a little CPU reduction,
! perhaps good for non-VoIP-carrying bundles.

One can check the config on a member link with:
show derived-config interface Vi2

The member links need to be as close in speed as possible, as the multilink bundle will run each member link only as fast as the slowest member link. In this example both lines are ADSL1 circuits.

ROUTER#show controllers vdsl 0/0/0 | i Attentuation|Margin|Rate|Power|Speed
Noise Margin:                   11.0 dB          19.0 dB
Attainable Rate:                10980 kbits/s    1380 kbits/s
Actual Power:                   13.8 dBm         12.3 dBm
Speed (kbps):                   0    8128        0    832

ROUTER#show controllers vdsl 0/1/0 | i Attentuation|Margin|Rate|Power|Speed
Noise Margin:                   11.2 dB          16.0 dB
Attainable Rate:                10792 kbits/s    1272 kbits/s
Actual Power:                   15.8 dBm         12.3 dBm
Speed (kbps):                   0    8128        0    832

ROUTER#show ppp mul Multilink1
Multilink1
  Bundle name: lns1-router-name
  Remote Username: lns1-router-name
  Remote Endpoint Discriminator: [1] lns1-router-name
  Local Username: multilink-username@isp.net
  Local Endpoint Discriminator: [1] multilink-username@isp.net
  Bundle up for 00:01:01, total bandwidth 1664, load 1/255
  Receive buffer limit 24000 bytes, frag timeout 1000 ms
  Interleaving enabled
  0/0 fragments/bytes in reassembly list
  0 lost fragments, 24 reordered
  0/0 discarded fragments/bytes, 0 lost received
  0x63 received sequence, 0x3E sent sequence
  Member links: 2 active, 1 inactive (max 255, min not set)
    Vi4, since 00:01:01, 2080 weight, 1496 frag size
      PPPoATM link, ATM PVC 0/38 on ATM0/0/0
      Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 10178
    Vi3, since 00:01:01, 2080 weight, 1496 frag size
      PPPoATM link, ATM PVC 0/38 on ATM0/1/0
      Packets in ATM PVC Holdq: 0, Particles in ATM PVC Tx Ring: 10178
    Vt1 (inactive)
  No inactive multilink interfaces

ROUTER#show int mul1
Multilink1 is up, line protocol is up
  Hardware is multilink group interface
  Internet address is 10.49.0.80/32
  MTU 1500 bytes, BW 1664 Kbit/sec, DLY 100000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation PPP, LCP Open, multilink Open
  Open: IPCP, loopback not set
  Keepalive set (10 sec)
  DTR is pulsed for 2 seconds on reset
  Last input 00:00:00, output never, output hang never
  Last clearing of "show interface" counters 29w5d
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 2311
  Queueing strategy: Class-based queueing
  Output queue: 0/1000/1964 (size/max total/drops)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     3725888 packets input, 361508455 bytes, 279 no buffer
     Received 0 broadcasts (0 IP multicasts)
     0 runts, 0 giants, 0 throttles
     417 input errors, 0 CRC, 308 frame, 0 overrun, 109 ignored, 0 abort
     3672490 packets output, 494500387 bytes, 0 underruns
     279 output errors, 0 collisions, 8 interface resets
     16 unknown protocol drops
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions

Sometimes when using wholesale providers the hostname of the ISP LNS is not reported back to the MLPPP CPE during LCP; instead the wholesaler's LAC/BRAS node name is reported. This often happens when the first MLPPP member PPP session comes up and the LAC has no existing L2TP tunnel to the LNS (subsequent PPP sessions will usually see the LNS hostname on the CPE, because by then the L2TP tunnel is already established). In the example below, a CPE with two ADSL2+ lines is connected via the same telephone exchange and the same wholesale provider back to the same ISP; however, one PPP session thinks it is terminated on a device with the ISP LNS hostname, while the other thinks it is terminated on a device with the wholesaler LAC/BRAS hostname:

c1941#show users
    Line       User       Host(s)              Idle       Location
*132 vty 0     admin      idle                 00:00:00 192.168.0.2

  Interface    User                Mode         Idle     Peer Address
  Vi2          lns1-isp.core       PPPoATM      00:00:00 100.66.0.36
  Vi3          acc-aln1.wholesaler PPPoATM      00:00:01 100.66.0.36

The Endpoint Discriminator Option (section 5.1.3 of RFC 1990) allows the CPE to use the peer IP address to determine that both PPP sessions are connected to the same endpoint and may be bundled together:

ppp multilink endpoint ip 1.2.3.4
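A sketch of where this would be applied on the CPE, assuming the Virtual-Template1 setup above (1.2.3.4 stands in for the LNS peer address seen by both sessions):

interface Virtual-Template1
 ppp multilink
 ppp multilink group 1
 ppp multilink endpoint ip 1.2.3.4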

Another method with the same result (though less reliable) is to bundle the PVCs through a dialer interface:

interface ATM0/0/0
 bandwidth 6000
 no ip address
 no atm ilmi-keepalive
 pvc 0/38
  encapsulation aal5mux ppp dialer
  dialer pool-member 1
 !
!
interface ATM0/1/0
 bandwidth 6000
 no ip address
 no atm ilmi-keepalive
 pvc 0/38
  encapsulation aal5mux ppp dialer
  dialer pool-member 1
 !
!
interface Dialer0
 bandwidth 12000
 ip address negotiated
 ip virtual-reassembly in
 encapsulation ppp
 dialer pool 1
 ppp chap hostname user@realm.net
 ppp chap password 7 1234567890
 ppp ipcp dns request accept
 ppp ipcp route default
 ppp ipcp address accept
 ppp multilink
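With both PVCs in dialer pool 1, the bundle state can be verified with the same exec commands used earlier (these are show commands, not configuration):

show ppp multilink
show interfaces Dialer0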