
Buffer / Flow Control / Speed Issue

Posted: Tue Jan 20, 2015 10:17 pm
by keefe007
So what exactly is the buffer / flow control / speed issue so many people have been having with ToughSwitches?

We see weird things happen but I can't ever put a finger on it.

Re: Buffer / Flow Control / Speed Issue

Posted: Wed Jan 21, 2015 12:10 am
by sirhc
I have a post in this thread titled ToughSwitch™ PRO vs WISP Switch™ Packet Buffers that is a good read for specs.

Basically the ToughSwitch only has 192K of shared buffers, whereas the WISP Switch has 4Mb, which can help.

"One" thing that happens with the ToughSwitch, or any switch, when you have a 1G port feeding a 100Mb port like an airMAX radio is that packets being sent to the radio stack up on the radio's Ethernet port, overwhelming its buffers, and then the switch port buffers fill up. Having more buffers can help, but it is only one part of the solution. Ideally everything would use 1G interface devices, but that is not an option yet. The problem is compounded by the fact that the 100M port is feeding a wireless connection, which is a half-duplex link with varying capacity. DOUBLE WHAMMY
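To put rough numbers on how fast those buffers fill during a sustained 1G-to-100M burst, here is a back-of-envelope sketch. The 192K figure is from this thread; the "4Mb" figure is treated as 4 MB below, which may be optimistic if it actually means megabits.

```python
# Back-of-envelope sketch of the 1G -> 100M buffer problem described above.
# The 192 KB shared buffer is the ToughSwitch figure from this thread; the
# WISP Switch's "4Mb" is read here as 4 MB (divide by 8 if it means megabits).

def fill_time_s(buffer_bytes, ingress_bps, egress_bps):
    """Seconds until a buffer overflows when ingress exceeds egress."""
    surplus_bps = ingress_bps - egress_bps   # bits/s piling up in the buffer
    if surplus_bps <= 0:
        return float("inf")                  # queue drains, never fills
    return (buffer_bytes * 8) / surplus_bps

# 1 Gbps port bursting into a 100 Mbps airMAX port:
toughswitch = fill_time_s(192 * 1024, 1e9, 100e6)
wisp_switch = fill_time_s(4 * 1024 * 1024, 1e9, 100e6)

print(f"192 KB buffer fills in ~{toughswitch * 1000:.2f} ms")
print(f"4 MB buffer fills in ~{wisp_switch * 1000:.2f} ms")
```

Either way the buffer only absorbs a burst measured in milliseconds; a sustained overload still needs flow control or drops.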

Flow Control is a band-aid, but you should use it. As I said in a previous post, I mistakenly had Flow Control on our WISP Switch turned off by default, assuming people would enable it if they needed it.

If you have a mixture of 1G and 100M devices, you should enable Flow Control on any 1G port that will send significant amounts of data to any 100M ports on the switch, and on all the 100M ports that receive data from the 1G ports.

On the WISP Switch, Flow Control is enabled on a per-port basis by clicking the enable box on the Ports tab under the column labeled FC (Flow Control).

Flow Control is not my favorite method, but you do what you have to, right?

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 10:04 am
by wtm
Having been one of the ones that did iperf testing on the TS to show this issue, one thing that I felt would help the situation was to allow programming of the "Flow Control Timeout". This is allowed in several of the high-end routers (Lucent, Cisco)!

The way flow control works is that the switch/router sends a signal upstream to stop sending data when things start backing up, while it continues processing its backlog (which is where the larger buffer is needed), and a "Flow Control Timeout" timer starts. Either the switch/router sends a signal upstream to restart the data flow when it gets caught up, or the timeout timer sends the signal, whichever happens first.

Allowing the selection of different time periods for this timeout timer might help the switch/router process traffic better where the dreaded 1 gig to 100 Mbps transition occurs! Most routers/switches have fixed settings on this timer.
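The pause/resume-with-timeout cycle described above can be sketched as a toy discrete-time model. All numbers here (rates, watermarks, timeout) are made up for illustration and are not modeled on any real switch:

```python
# Toy model of pause/resume with a timeout: upstream outruns the egress port,
# the switch pauses it at a high watermark, and it resumes on a low watermark
# or when the timeout fires, whichever comes first. Illustrative numbers only.

def simulate(timeout_ticks, ticks=2000):
    """Return (units delivered, units dropped, PAUSE frames sent)."""
    CAP, HIGH, LOW = 100, 80, 20     # queue capacity and pause watermarks
    q, paused, pause_age = 0, False, 0
    delivered = drops = pauses = 0
    for _ in range(ticks):
        if not paused:
            q += 3                   # upstream sends 3 units/tick
            if q > CAP:              # buffer full: tail-drop the excess
                drops += q - CAP
                q = CAP
        else:
            pause_age += 1
        sent = min(q, 2)             # the slow egress drains 2 units/tick
        q -= sent
        delivered += sent
        if not paused and q >= HIGH:
            paused, pause_age = True, 0
            pauses += 1              # a PAUSE frame goes upstream
        elif paused and (q <= LOW or pause_age >= timeout_ticks):
            paused = False           # caught up, or the timeout fired

    return delivered, drops, pauses

# A short timeout resumes the sender before the queue drains, so PAUSE
# frames fire far more often; a long one lets the queue reach the low
# watermark first.
print(simulate(timeout_ticks=5))
print(simulate(timeout_ticks=50))
```

In this toy model neither setting drops anything (the watermarks leave headroom), but the short timeout puts many more pause frames on the wire, which is one reason a tunable timer could matter.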

The use of longer timeouts might help switches/routers that have smaller buffers process this backup. I don't feel it will hurt anything to lengthen the timeout, as once the data gets caught up, the router/switch sends a restart signal anyway!

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 10:51 am
by sirhc
I can look into this; interesting suggestion, though I am not sure if it is possible.

I believe the standard timeout is 50 microseconds, not to be confused with 50 milliseconds.
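For reference, 802.3x PAUSE frames express the pause time in quanta of 512 bit times, so the wall-clock length of a pause depends on link speed. A quick calculation (standard 802.3x values, nothing switch-specific):

```python
# An 802.3x PAUSE frame carries a 16-bit pause_time field measured in quanta
# of 512 bit times, so the same quantum is a different real-time duration at
# different link speeds.

def pause_quantum_us(link_bps):
    """Duration of one pause quantum (512 bit times) in microseconds."""
    return 512 / link_bps * 1e6

for speed, bps in [("100M", 100e6), ("1G", 1e9)]:
    q = pause_quantum_us(bps)
    print(f"{speed}: 1 quantum = {q:.3f} us, "
          f"max pause (65535 quanta) = {q * 65535 / 1000:.1f} ms")
```

So at 100M a single quantum is 5.12 microseconds and the maximum single pause is around a third of a second, while at 1G everything is ten times shorter.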

There is one Flow Control option we are considering that we know we can do. There are 2 sides to flow control, and we can turn them on and off independently with the switch core: "generate" and "obey" pause frames.

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 4:30 pm
by LRL
The buffers here are pretty small, but you don't want to buffer too much or you start seeing bufferbloat issues. My preference is to simply drop the packets that don't fit through the pipe to the overloaded radio. If the upstream interface is paused, it affects the entire tower instead of just the single AP that is having trouble keeping up. This is called head-of-line blocking.

Chris, how does the WS handle it when the switch runs out of buffer space? Can you also give us a rundown on the proper use of FC in a standard configuration where the WS is used as a midpoint PoE injector between all APs, backhaul(s), and the tower router?

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 6:42 pm
by sirhc
The only problem is that when packets are dropped, the sender resends them, further clogging up the interface.

Also, when an interface buffer is full, it drops everyone's packets on that interface, not just those of the CPE radio causing the congestion.

I have tried it both ways, with and without Flow Control, and even though I hate the way Flow Control handles it, I get better results with it on.

Since the airMAX Rocket AP is in bridge mode talking to many client radios, when wireless retries or errors start happening, things fall apart and the normal TCP mechanisms used to deal with congestion break down.

If I turn Flow Control off on the AP that goes to my house, I see something like 8 to 20Mb down and 70+ up.

If I turn on Flow Control for the 100M interfaces feeding the APs and the interface(s) that go to the tower router, I get 70-90Mb down and up. Keep in mind that I recently changed my AP to a 30 MHz wide channel.

See this thread for speed tests from my house, but note the screenshots in this post were with a 20 MHz wide channel; 30 MHz kicks ass!

I will do another speed test with the new channel size and post below, but it is PRIME TIME so I may not see 90 Mbps.

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 8:10 pm
by LRL
Do you just turn FC on for the AP side? Or do you turn it on for the router, too? How about your backhaul(s)?

Re: Buffer / Flow Control / Speed Issue

Posted: Sat Jan 24, 2015 8:33 pm
by sirhc
LRL wrote:Do you just turn FC on for the AP side? Or do you turn it on for the router, too? How about your backhaul(s)?

I turn Flow Control on all 100M AP ports and the 1G port(s) going to the Router at each tower.

Since I have a router at each tower (not a flat network) I do not use FC on my back hauls.

However we have two tower configurations we are playing with.

Configuration 1:
-We have a 2 or 3 port Static LAG between the WISP Switch and the Cisco Router 2951.
-Then each port is a VLAN for each AP as shown in this Need for Speed Post

Note: We are exploring creating 2 LAGs to the Cisco, 1 for backhauls and 1 for the APs, but we are having issues with the dual LAG, Port-Channels, and VLANs between the Cisco and our switch. A WORK IN PROGRESS CONCEPT

Configuration 2:
-The switch has mid-spans configured for the airFIBERs; that way we can power the AF or other high-power backhauls with the switch, able to bounce the ports if needed and monitor the traffic at a glance with our COOL graphs (5 min/30 min/1 hour)
-Also 2 or 3 port Static LAG for the AP ports
-Similar to what is shown in this Post
-In the example shown in the link above VLANs 60,60,71, 99, and 100 are mid-spans to PTP links

We like Configuration 1 best, as it is the simplest and uses the fewest ports. But because we are forced to enable Flow Control on the LAG ports (24, 25, 26), those pause frames affect the airFIBER backhaul Ethernet ports, since ports 24, 25, and 26 are all shared in the LAG. That hits even traffic passing through that tower to the next tower in the ring, which is NOT good, so we are going to standardize on a config more like Configuration 2 shown above.

Even when we finally upgrade all APs to the newer Rocket AC units with 1G ports, we will have to see if Flow Control is still warranted because of the unknown capacity of a wireless link. To my knowledge, that screws with the normal TCP congestion mechanisms: the final destination device would normally tell its sender to slow down if it were unable to process the packets fast enough, but the AP, being the monkey in the middle rather than the destination, is unable to tell the sender there is congestion. Sure, 1G ports do have drastically larger buffers, which is why people see an improvement with the Titanium Rockets when they are not failing.

Re: Buffer / Flow Control / Speed Issue

Posted: Sun Jan 25, 2015 1:13 am
by LRL
I've been playing with RED queues applied to the subnets allocated to each AP. So far I seem to be getting pretty decent performance with this setup. Unfortunately I lose a little bit of throughput to ensure the links don't reach the congestion point. I need to play more with FC on the WS; the ToughSwitches made me wary.
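For anyone unfamiliar with RED, the idea is to start dropping a small fraction of packets early, before the queue is full, so TCP senders back off gradually instead of all at once. A minimal sketch of the classic drop-probability curve (the threshold values here are illustrative, not the actual queue settings in use):

```python
# Classic RED (Random Early Detection) drop-probability curve: no drops
# below min_th, a linear ramp up to max_p between the thresholds, and
# forced drops above max_th. Thresholds here are illustrative only.

def red_drop_prob(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    """Probability of an early drop as the average queue length grows."""
    if avg_qlen < min_th:
        return 0.0                   # queue is short: never drop
    if avg_qlen >= max_th:
        return 1.0                   # queue too long: drop everything
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_qlen - min_th) / (max_th - min_th)

print(red_drop_prob(4))     # below min_th
print(red_drop_prob(10))    # halfway up the ramp
print(red_drop_prob(20))    # past max_th
```

The small early drops are exactly the "little bit of throughput" given up to keep the link from ever hitting the hard congestion point.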

Re: Buffer / Flow Control / Speed Issue

Posted: Tue Jan 27, 2015 5:09 am
by LRL

I had some time to play with FC on the WS again. I remember testing this when I got my first WS, and it was perfect compared to the ToughSwitches. I can unmistakably reproduce the extremely slow throughput with FC off on the ToughSwitches, but I simply can't reproduce that issue on the WS. Kudos to the WS!

It may simply be that the added buffer gives enough room for the stream to back off by itself.

I've disabled all the RED queues I had on one tower, and I'll see how that affects performance.

On a side note, I can reliably cause a ToughSwitch to lock up and force a physical power cycle to clear the errored condition when FC is on. It's interesting, but would be very hard to exploit in the wild.