[MVMA] Issue: Channel contention causes large latency. (revisit)

Chuck Gelm nc8q-mesh at gelm.net
Sun Jan 20 10:09:35 EST 2019


Issue: Channel contention causes large latency.

Assumption: Latency is a good metric to compare real-world link quality.

Test latency from NC8Q to N8JJ-Services with and without tunnel:

With n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute n8jj-services
traceroute to n8jj-services (10.70.82.134), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  1.000 ms  1.298 ms  1.272 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  2.877 ms  2.698 ms  2.679 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  19.859 ms  27.270 ms 34.916 ms
  4  N8JJ-Services.local.mesh (10.70.82.134)  107.420 ms  107.313 ms  107.162 ms
gelmce@gelmce-HP-Notebook ~ $

Route: DtD-tunnel-tunnel-

Without n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute n8jj-services
traceroute to n8jj-services (10.70.82.134), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  1.356 ms  1.273 ms  1.271 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  3.409 ms  3.260 ms  3.234 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  21.393 ms  27.408 ms 27.377 ms
  4  dtdlink.NC8Q-M3-NS-Kettering.local.mesh (10.133.151.120)  32.623 ms  32.972 ms  32.876 ms
  5  NC8Q-M3-NS-83.local.mesh (10.132.150.83)  32.901 ms  33.750 ms *
  6  * dtdlink.KE8MVM-MVHS-AGM5.local.mesh (10.89.245.3)  41.119 ms  44.214 ms
  7  KE8MVM-DARA-AGM5.local.mesh (10.174.15.186)  68.704 ms  66.568 ms  66.547 ms
  8  dtdlink.W8BI-DARA-Services.local.mesh (10.55.125.45)  96.581 ms  94.544 ms  111.484 ms
  9  N8JJ-Services.local.mesh (10.70.82.134)  116.178 ms  105.374 ms  107.828 ms
gelmce@gelmce-HP-Notebook ~ $

Route: DtD-tunnel-DtD-RF-DtD-RF-tunnel

Note1: The additional 2x DtD and 2x RF hops increased latency by less 
than 1 millisecond.
Note2: The RF paths were 'clear' PtP links without channel contention.
This is the expected logical path.
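
These spot checks are easy to script for repeatability; a minimal 
sketch, assuming a Linux host on the mesh (target name taken from the 
test above, sample count arbitrary):

#!/bin/sh
# Gather RTT samples to a mesh host so the with-tunnel and
# without-tunnel cases can be compared on equal terms.
HOST=n8jj-services   # target from the tests above
COUNT=20             # arbitrary sample size

ping -c "$COUNT" "$HOST" | tail -2   # loss and min/avg/max/mdev summary
traceroute "$HOST"                   # which path OLSR actually chose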
-----

Test latency from NC8Q to WA8APB-Services with and without tunnel:

With n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute wa8apb-services
traceroute to wa8apb-services (10.104.249.244), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  1.851 ms  1.764 ms  1.968 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  5.104 ms  5.111 ms  5.090 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  21.482 ms  26.489 ms 35.121 ms
  4  mid2.N8JJ-Services (172.31.34.197)  101.675 ms  109.865 ms 111.520 ms
  5  WA8APB-Services.local.mesh (10.104.249.244)  187.772 ms  193.902 ms  198.608 ms
gelmce@gelmce-HP-Notebook ~ $

Without n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute wa8apb-services
traceroute to wa8apb-services (10.104.249.244), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  2.092 ms  2.011 ms  2.033 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  4.181 ms  4.114 ms  4.294 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  22.825 ms  25.700 ms 32.067 ms
  4  dtdlink.NC8Q-M3-NS-Kettering.local.mesh (10.133.151.120)  32.262 ms  32.333 ms  32.529 ms
  5  NC8Q-M3-NS-83.local.mesh (10.132.150.83)  32.569 ms  38.463 ms  38.421 ms
  6  dtdlink.KE8MVM-MVHS-Omni.local.mesh (10.79.205.71)  38.643 ms  40.209 ms  40.061 ms
  7  WA8APB-M2-Omni-Bvk.local.mesh (10.12.121.141)  1310.144 ms  1637.409 ms  1840.261 ms
  8  WA8APB-Services.local.mesh (10.104.249.244)  2296.363 ms  2314.419 ms  2782.547 ms
gelmce@gelmce-HP-Notebook ~ $

Notice the large latency (1.8 seconds) due to the poor 2397 MHz link 
being chosen.
WA8APB-M2-Omni-Bvk.local.mesh    92%    82%    2.2    (ETX=1.33)

The logical preferred route would have been MVHS-5GHz -> W-Xenia-5GHz -> 
WA8APB-5GHz:
KE8MVM-W-Xenia-NSM5.local.mesh    100%    100%    13.0    then
WA8APB-NS-Ch182.local.mesh         100%    100%    25.8    (ETX=2.0)
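
OLSR's ETX metric, which drives these route choices, follows from the 
two quality columns (reading them as LQ and NLQ, which matches the 
quoted ETX values): ETX = 1/(LQ x NLQ) per link, summed over hops. A 
quick check with awk:

# ETX = 1 / (LQ * NLQ); link qualities from the neighbor lines above
awk 'BEGIN {
  printf "WA8APB-M2-Omni-Bvk:    ETX = %.2f\n", 1/(0.92*0.82)
  printf "via W-Xenia, two hops: ETX = %.2f\n", 1/(1.0*1.0) + 1/(1.0*1.0)
}'

With ETX=1.33 versus ETX=2.0, OLSR prefers the single poor 2397 MHz hop 
over the two clean 5 GHz hops, which would explain the route selection 
despite the latency.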
-----

Test latency from NC8Q to W8BI-DARA-Services with and without tunnel:

With n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute w8bi-dara-services
traceroute to w8bi-dara-services (10.54.125.45), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  1.183 ms  1.803 ms  1.718 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  4.080 ms  4.032 ms  3.950 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  25.051 ms  25.012 ms 29.739 ms
  4  mid2.N8JJ-Services (172.31.34.197)  101.291 ms  101.171 ms 101.107 ms
  5  W8BI-DARA-Services.local.mesh (10.54.125.45)  156.160 ms  161.073 ms  161.036 ms
gelmce@gelmce-HP-Notebook ~ $

Without n8jj tunnel:
gelmce@gelmce-HP-Notebook ~ $ traceroute w8bi-dara-services
traceroute to w8bi-dara-services (10.54.125.45), 30 hops max, 60 byte packets
  1  localnode.local.mesh (10.215.85.57)  0.940 ms  2.105 ms  2.196 ms
  2  dtdlink.NC8Q-uTik-hAP.local.mesh (10.218.93.193)  2.524 ms  2.581 ms  2.347 ms
  3  mid2.NC8Q-M2loco-34 (172.31.34.202)  24.227 ms  26.381 ms 26.267 ms
  4  dtdlink.NC8Q-M3-NS-Kettering.local.mesh (10.133.151.120)  31.952 ms  32.039 ms  31.802 ms
  5  NC8Q-M3-NS-83.local.mesh (10.132.150.83)  31.921 ms  35.921 ms  35.952 ms
  6  dtdlink.KE8MVM-MVHS-Omni.local.mesh (10.79.205.71)  40.927 ms  43.497 ms  43.430 ms
  7  WA8APB-M2-Omni-Bvk.local.mesh (10.12.121.141)  908.861 ms  904.302 ms  1583.624 ms
  8  dtdlink.WA8APB-Services.local.mesh (10.105.249.244)  1622.777 ms  1821.666 ms  2089.228 ms
  9  W8BI-DARA-Services.local.mesh (10.54.125.45)  2215.266 ms  2444.466 ms  2735.735 ms
gelmce@gelmce-HP-Notebook ~ $

The planned preferred route would have been ... KE8MVM-MVHS-AGM5 -> 
KE8MVM-DARA-AGM5 ...
KE8MVM-DARA-AGM5.local.mesh        63%    100%    5.9    (ETX=1.6)
but this link needs a few more dB of SNR.
Perhaps higher antenna elevation or dual-stream devices would help.
Adding 5' of elevation at MVHS is possible; adding 5'+ of elevation at 
DARA is very practical.
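
As a rough sanity check on why elevation helps: obstructions within 
about 60% of the first Fresnel zone cost several dB of signal. The 
midpoint radius is r = 17.32 * sqrt(d / (4 * f)) meters for path 
length d in km and frequency f in GHz. A sketch with hypothetical 
numbers (the 5 km path and 5.8 GHz are illustrative, not the measured 
MVHS-DARA figures):

# First-Fresnel-zone radius at the path midpoint.
# ASSUMPTION: d and f below are placeholders for illustration.
awk 'BEGIN {
  d = 5.0   # path length, km (hypothetical)
  f = 5.8   # frequency, GHz (hypothetical)
  r = 17.32 * sqrt(d / (4 * f))
  printf "midpoint Fresnel radius: %.1f m\n", r        # ~8 m
  printf "60%% clearance needed:   %.1f m\n", 0.6 * r  # ~4.8 m
}'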
-----

Issue: Channel contention causes large latency.

Proof of concept: Links without contention have dramatically lower latency.

Assumption: MVMA has installed 5 GHz nodes in an attempt to compensate 
for the failure to obtain quality end-to-end links over the underlying 
2397 MHz 'mesh'.

Moe has changed the SSIDs of many 2397 MHz nodes in order to force the 
nodes to use only their quality neighbors. This made huge improvements 
in providing end-to-end links using the 2397 MHz nodes, but it does not 
reduce the underlying contention issue.

Solution: Reduce or eliminate channel contention through the use of 
multiple channels and/or bands.
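
One way to put numbers on the contention before re-planning channels: 
AREDN firmware is OpenWrt-based, so the radio's per-channel airtime 
counters can be read with iw from a shell on the node. A sketch, 
assuming shell access and that the mesh radio is wlan0 (interface 
names vary by device):

# Busy vs. active airtime on the channel currently in use.
iw dev wlan0 survey dump | awk '
  /in use/                       { inuse = 1 }
  inuse && /channel active time/ { active = $4 }
  inuse && /channel busy time/   { busy = $4; exit }
  END { if (active) printf "airtime busy: %.0f%%\n", 100 * busy / active }'

A high busy fraction on an otherwise idle node indicates airtime lost 
to contention.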

Suggestion: That MVMA move forward with a priority on promoting quality 
networking, rather than on patching the limitations of the existing 
2397 MHz mesh.

Chuck

