Wispswitch and Mikrotik POE in
Posted: Tue Oct 05, 2021 8:39 pm
Thought I'd post here before I get abuse on the Mikrotik forums.
I've had a strange issue on a couple of sites (and their downstream micro sites) where we just couldn't get decent speedtest results (Ookla and fast.com) at customer premises on the UPLOAD side. We also noticed sluggishness, or having to refresh frequently, when using the HTTPS interface of the customers' rooftop UBNT AirMax AC radios from the WAN side. TCP btests to/from customer routers to anywhere (other countries etc.) were fine, as were maximum-MTU tests, and customer access to other HTTPS sites was fine. So it looks like some sort of TCP windowing issue when large packets get lost. I'm thinking a UDP test with a 1500-byte MTU, run at about 80% of the maximum TCP capacity, might show it?
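Something like the Python sketch below is what I have in mind, just to illustrate the idea: it assumes iperf3 is installed at both ends with a server listening somewhere beyond the suspect link, and the hostname/port are placeholders, not anything from my network.

#!/usr/bin/env python3
"""Sketch: measure TCP upload capacity, then push UDP at ~80% of that rate
with full-size datagrams (1472-byte payload + 8 UDP + 20 IP = 1500-byte
packets) and report the loss percentage."""

import json
import subprocess

SERVER = "iperf.example.net"   # placeholder - an iperf3 server past the suspect link
PORT = 5201
DURATION = 20                  # seconds per test

def run_iperf(extra_args):
    cmd = ["iperf3", "-c", SERVER, "-p", str(PORT), "-t", str(DURATION),
           "--json"] + extra_args
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

# 1. Plain TCP upload to establish the ceiling.
tcp = run_iperf([])
tcp_bps = tcp["end"]["sum_received"]["bits_per_second"]
print(f"TCP upload: {tcp_bps / 1e6:.1f} Mbit/s")

# 2. UDP upload at ~80% of that rate with 1472-byte payloads.
udp_rate = int(tcp_bps * 0.8)
udp = run_iperf(["-u", "-b", str(udp_rate), "-l", "1472"])
udp_sum = udp["end"]["sum"]
print(f"UDP @ {udp_rate / 1e6:.1f} Mbit/s: "
      f"loss {udp_sum['lost_percent']:.2f}%, jitter {udp_sum['jitter_ms']:.2f} ms")

If the path is silently dropping or mangling large frames, the UDP loss figure should show it even though the TCP tests look clean.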
One site had a Mikrotik hEX and the other a hEX S, both fed from a WS-12-250-DC. Flow control and STP were disabled on the Netonix ports and the backhaul radios. Both sites had their router powered via PoE in on port 1 from the Netonix, both had the untagged management VLAN on that port, and both had tagged VLANs for the downstream radios on the same port.
The fix was simply moving these VLANs to another port (with the obvious corresponding change on the Mikrotik).
What concerns me is that there were no FCS errors, collisions, drops or pauses showing on the Mikrotik, and no errors showing on the Netonix port (11 RX drops and 702,837 RX filtered out of 24,228,393,043 RX packets). There's also no before/after difference on Smokeping to the APs or the connected clients.
I've got over 96 Netonixes on our network, probably three quarters of them routed, so I'm in no immediate rush to reconfigure all of them. I haven't experienced these symptoms on the others, but perhaps it is happening on some of them. For all I know, our Cambium ePMP and Mikrotik radio interfaces mask what's actually happening.
What I'm after is a method of picking up this fault with a suitable NMS. Any suggestions?
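One idea I've been toying with (purely a sketch, reusing the iperf3 probe above and the standard Nagios plugin exit codes, so it could be scheduled from Icinga/Nagios or as a custom service check in LibreNMS; the target, probe rate and thresholds are made-up placeholders):

#!/usr/bin/env python3
"""Sketch of a Nagios/Icinga-style active check: push UDP at a fixed rate
with full-size datagrams through the suspect path and alert on loss.
Exit codes follow the plugin convention (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN)."""

import json
import subprocess
import sys

TARGET = "cpe-test.example.net"   # placeholder - iperf3 server behind the suspect port
RATE = "50M"                      # fixed probe rate, well below link capacity
WARN_LOSS = 1.0                   # percent
CRIT_LOSS = 5.0                   # percent

try:
    out = subprocess.run(
        ["iperf3", "-c", TARGET, "-u", "-b", RATE, "-l", "1472", "-t", "10", "--json"],
        capture_output=True, text=True, timeout=60, check=True)
    loss = json.loads(out.stdout)["end"]["sum"]["lost_percent"]
except Exception as exc:
    print(f"UNKNOWN: probe failed ({exc})")
    sys.exit(3)

# Performance data after the pipe so the NMS can graph loss over time.
msg = f"UDP loss {loss:.2f}% at {RATE} | loss={loss:.2f}%;{WARN_LOSS};{CRIT_LOSS}"
if loss >= CRIT_LOSS:
    print("CRITICAL: " + msg)
    sys.exit(2)
if loss >= WARN_LOSS:
    print("WARNING: " + msg)
    sys.exit(1)
print("OK: " + msg)
sys.exit(0)

The attraction is that it tests the thing customers actually feel (full-size upstream packets making it through cleanly) rather than relying on interface counters, which in this case showed nothing.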