v1.4.6 Bug reports and Comments

DOWNLOAD THE LATEST FIRMWARE HERE
sirhc
Employee
 
Posts: 7416
Joined: Tue Apr 08, 2014 3:48 pm
Location: Lancaster, PA
Has thanked: 1608 times
Been thanked: 1325 times

Re: v1.4.6 Bug reports and Comments

Wed Dec 21, 2016 6:02 pm

Digitexwireless wrote:Is anybody having issues with a complete reboot of the Netonix after upgrading to 1.4.6? I have a site which has run flawlessly for months, then I did the upgrade from one of the RCs and now it is rebooting. When it reboots, it takes all links down. In the log I see a reference to a cold boot. Should I downgrade?

Tommy


No, there is no bug in firmware v1.4.6 that would cause this.

This is possibly a defective power supply.

To verify whether the power supply is defective, do this:
1) Remove the switch from service
2) Factory default it
3) Let it run on your bench doing nothing for 24 hours - not powering anything, just sitting there in a default state.

If, after 24 hours, the switch does not show an uptime over 24 hours, then you have a defective power supply and will need to RMA the unit for a new one.
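If you want to automate the burn-in check instead of eyeballing the uptime page, a simple approach is to poll the switch's uptime (for example via SNMP sysUpTime) and look for any sample that is lower than the previous one. This is my own sketch, not a Netonix tool; the polling mechanism is up to you, the detection logic is just:

```python
# Hypothetical burn-in helper: record uptime readings (seconds) periodically;
# any reading lower than the one before it means the unit cold-booted.

def detect_reboots(uptime_samples):
    """Return the indices at which uptime dropped, i.e. the unit rebooted.

    uptime_samples: list of uptime readings in seconds, oldest first.
    """
    reboots = []
    for i in range(1, len(uptime_samples)):
        if uptime_samples[i] < uptime_samples[i - 1]:
            reboots.append(i)
    return reboots

# Example: readings every ~6 hours; the drop back to 900 s marks a reboot.
samples = [21600, 43200, 64800, 900, 22500]
print(detect_reboots(samples))  # -> [3]
```

An empty result after 24 hours of samples means the supply passed the burn-in.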
Support is handled on the Forums not in Emails and PMs.
Before you ask a question use the Search function to see if it has been answered before.
To do an Advanced Search click the magnifying glass in the Search Box.
To upload pictures click the Upload attachment link below the BLUE SUBMIT BUTTON.

Digitexwireless
Member
 
Posts: 24
Joined: Mon Aug 22, 2016 11:20 pm
Location: Cleburne, TX
Has thanked: 0 time
Been thanked: 2 times

Re: v1.4.6 Bug reports and Comments

Wed Dec 21, 2016 6:09 pm

I have an extra one sitting on the shelf; I will swap it out. Before doing so, I will downgrade the firmware to the RC I previously ran. Since my house is connected through this tower, I will know almost as soon as the monitoring server does. It seems to happen at 9:30 AM; it has done so twice in the last couple of days. There is nothing scheduled, so I figure defaulting may be the answer. We will run it through the paces as suggested if the downgrade does not fix it.

Thank you,

T
---------------------------------------------------------------------------
Tommy A.
Network Administrator
Digitex.com

sirhc
Employee
 
Posts: 7416
Joined: Tue Apr 08, 2014 3:48 pm
Location: Lancaster, PA
Has thanked: 1608 times
Been thanked: 1325 times

Re: v1.4.6 Bug reports and Comments

Wed Dec 21, 2016 6:25 pm

Digitexwireless wrote:I have an extra one sitting on the shelf; I will swap it out. Before doing so, I will downgrade the firmware to the RC I previously ran. Since my house is connected through this tower, I will know almost as soon as the monitoring server does. It seems to happen at 9:30 AM; it has done so twice in the last couple of days. There is nothing scheduled, so I figure defaulting may be the answer. We will run it through the paces as suggested if the downgrade does not fix it.

Thank you,

T


I would also bench test / burn-in test the unit you're going to put in service first.

Let it run 24 hours on the bench to verify the power supply does not start rebooting the switch.

"If" you have a defective power supply it will start rebooting in less than 24 hours "if there is less than a 50 watt load", sometimes within a couple of hours, and eventually it ends up in an endless reboot loop.

A defective power supply works perfectly at first, for up to 1 to 18 hours with no load. After that, as long as there is a total load of 40-50 watts it runs fine, but when the load on the power supply drops below 40-50 watts the issue appears.

We discovered this issue last week and have begun doing 24 hour burn-in tests to catch it, and we are working with the manufacturer to make sure it does not happen again. It only affects a small percentage of the power supplies in our last shipment, and the manufacturer believes it is due to a defective reel of capacitors on the power supply controller board.

It only affects the 250 watt AC power supply from our last shipment, received in September, that goes in our WS-10-250-AC and WS-12-250-AC. We have fully tested all power supplies in our warehouse, including those that were already built, but we are sure there are some out in the wild and will just have to deal with them as we find them.

This is not to say this is what your issue is, but it is an easy test.

Digitexwireless
Member
 
Posts: 24
Joined: Mon Aug 22, 2016 11:20 pm
Location: Cleburne, TX
Has thanked: 0 time
Been thanked: 2 times

Re: v1.4.6 Bug reports and Comments

Wed Dec 21, 2016 6:40 pm

Well, I guess fortunately for me I have the WS-12-250-DC. But I like your way of thinking. I will crank it up now to test overnight.
---------------------------------------------------------------------------
Tommy A.
Network Administrator
Digitex.com

sirhc
Employee
 
Posts: 7416
Joined: Tue Apr 08, 2014 3:48 pm
Location: Lancaster, PA
Has thanked: 1608 times
Been thanked: 1325 times

Re: v1.4.6 Bug reports and Comments

Wed Dec 21, 2016 6:53 pm

Digitexwireless wrote:Well, I guess fortunately for me I have the WS-12-250-DC. But I like your way of thinking. I will crank it up now to test overnight.


OH, there are no known issues with the DC switches, or with the 150 watt or 400 watt supplies. And mind you, only a "small" number of the 250 watt units manufactured since September of this year are affected.

The firmware upgrade of the DC switch "might" cause a reboot if it also updates the power supply firmware; it depends on how old your original firmware was. Some firmware releases include a power supply firmware upgrade.

If the switch in question is a WS-12-250-DC I would NOT downgrade it, as that will downgrade the power supply firmware.

You should have all your switches running v1.4.6.

david.sovereen@mercury.net
Member
 
Posts: 33
Joined: Tue Sep 29, 2015 6:17 pm
Location: Midland, MI
Has thanked: 0 time
Been thanked: 2 times

Re: v1.4.6 Bug reports and Comments

Mon Dec 26, 2016 1:00 pm

I don't think this is 1.4.6-specific, but we just uncovered what appears to be a very odd bug.

I'm going to give more details than may be necessary to recreate this.

Our Management VLAN is 4 (this may not be important to recreating it).

We had a Netonix configured as 10.0.6.42, with a 255.255.255.0 subnet mask and 10.0.6.1 as the gateway. By mistake, Primary DNS was set to 10.0.6.1 and Secondary DNS was set to 8.8.8.8.

Over time, pings and management access to the switch would deteriorate, with longer and longer periods of pure packet loss. For example, in the beginning, maybe only 1 or 2 pings out of every 20 would drop. But over time, the packet loss would appear as longer and longer moments of no connectivity. I have some of it in my terminal history, shown here:


ping sw-4.mntwwi-2.mercury.net
PING sw-4.mntwwi-2.mercury.net (10.0.6.42): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
Request timeout for icmp_seq 8
64 bytes from 10.0.6.42: icmp_seq=9 ttl=59 time=34.316 ms
64 bytes from 10.0.6.42: icmp_seq=10 ttl=59 time=35.893 ms
64 bytes from 10.0.6.42: icmp_seq=11 ttl=59 time=33.268 ms
64 bytes from 10.0.6.42: icmp_seq=12 ttl=59 time=37.533 ms
Request timeout for icmp_seq 13
Request timeout for icmp_seq 14
Request timeout for icmp_seq 15
Request timeout for icmp_seq 16
Request timeout for icmp_seq 17
Request timeout for icmp_seq 18
Request timeout for icmp_seq 19
Request timeout for icmp_seq 20
Request timeout for icmp_seq 21
Request timeout for icmp_seq 22
Request timeout for icmp_seq 23
64 bytes from 10.0.6.42: icmp_seq=13 ttl=59 time=11705.652 ms
64 bytes from 10.0.6.42: icmp_seq=14 ttl=59 time=10703.757 ms
64 bytes from 10.0.6.42: icmp_seq=25 ttl=59 time=32.643 ms
64 bytes from 10.0.6.42: icmp_seq=26 ttl=59 time=31.854 ms
64 bytes from 10.0.6.42: icmp_seq=27 ttl=59 time=29.243 ms
64 bytes from 10.0.6.42: icmp_seq=28 ttl=59 time=28.302 ms
Request timeout for icmp_seq 30
Request timeout for icmp_seq 31
Request timeout for icmp_seq 32
Request timeout for icmp_seq 33
Request timeout for icmp_seq 34
Request timeout for icmp_seq 35
Request timeout for icmp_seq 36

Here, we are losing management access to the switch for 10 seconds or so. It looks like an RSTP re-route, which we spent a very long time trying to diagnose. However, connectivity to devices behind the switch was perfect, which suggested this was not an RSTP re-route event.
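To quantify the "longer and longer periods of pure packet loss" from captures like the one above, I find it handy to count the consecutive-timeout runs. A small parser sketch of my own (not from the switch or the thread):

```python
# Turn ping output lines into the lengths of consecutive-timeout runs, so the
# growing outage windows are easy to see at a glance.

def timeout_runs(ping_lines):
    """Return the length of each consecutive run of 'Request timeout' lines."""
    runs, current = [], 0
    for line in ping_lines:
        if line.startswith("Request timeout"):
            current += 1
        elif line.startswith("64 bytes"):
            if current:
                runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

output = [
    "Request timeout for icmp_seq 0",
    "Request timeout for icmp_seq 1",
    "64 bytes from 10.0.6.42: icmp_seq=2 ttl=59 time=34.316 ms",
    "Request timeout for icmp_seq 3",
]
print(timeout_runs(output))  # -> [2, 1]
```

On the full capture above this yields runs of 9, 11, and 7 lost pings, i.e. roughly 10-second outage windows.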

Eventually I changed the Primary DNS from the default gateway address of 10.0.6.1 to a valid DNS server and, wham, the packet loss disappeared.

Dave

sirhc
Employee
 
Posts: 7416
Joined: Tue Apr 08, 2014 3:48 pm
Location: Lancaster, PA
Has thanked: 1608 times
Been thanked: 1325 times

Re: v1.4.6 Bug reports and Comments

Mon Dec 26, 2016 1:25 pm

david.sovereen@mercury.net wrote:
We had a Netonix configured as 10.0.6.42, 255.255.255.0 subnet, 10.0.6.1 as gateway. By mistake, Primary DNS was set to 10.0.6.1 and Secondary DNS was set to 8.8.8.8.

Here, we are losing management access to the switch for 10 seconds or so. It looks like an RSTP re-route, which we spent a very long time trying to diagnose. However, connectivity to devices behind the switch was perfect, which suggested this was not an RSTP re-route event.

Eventually I changed the Primary DNS from the default gateway address of 10.0.6.1 to a valid DNS server and, wham, the packet loss disappeared.

Dave


OK, so the switch is located at a private, non-routable IP address (10.0.6.42), yet you have the secondary DNS server set to 8.8.8.8? Is the 10.0.6.0/24 network NATed to provide access to the internet?

If the address is NATed, is your switch accessible from the internet via port mappings? If so, this could be bad: your switch could be discovered, bots would attempt logins to gain access, and repeated login attempts will drive the CPU usage to 100% and cause the switch to stop responding.

If your switch is accessible from the internet, which I would NOT do, I would at least use an access control list on the router doing the NAT to prevent people from attempting to hack into it.

The intermittent nature can be explained by the Tar Pit: after X failed attempts it blocks access from the source address, which, because of the NAT, is also your address, so your own access gets blocked as well.
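The switch's actual tar-pit implementation is not published, but the mechanism described above (and why NAT makes it bite the operator) can be sketched generically. Everything here is illustrative: the thresholds and function names are my own.

```python
# Generic sketch of a login "tar pit": after N failed logins from one source
# address, block that address for a cooldown period. Behind NAT, a bot and
# the operator share one public source address, so the bot's failures lock
# out the operator too.
import time

MAX_FAILURES = 5      # hypothetical threshold
BLOCK_SECONDS = 600   # hypothetical cooldown

failures = {}   # source IP -> consecutive failed attempts
blocked = {}    # source IP -> time (epoch seconds) the block expires

def allow_login_attempt(src_ip, now=None):
    """True unless src_ip is currently inside a block window."""
    now = time.time() if now is None else now
    return blocked.get(src_ip, 0) <= now

def record_failure(src_ip, now=None):
    """Count a failed login; start a block once the threshold is reached."""
    now = time.time() if now is None else now
    failures[src_ip] = failures.get(src_ip, 0) + 1
    if failures[src_ip] >= MAX_FAILURES:
        blocked[src_ip] = now + BLOCK_SECONDS
        failures[src_ip] = 0
```

With this model, five bot failures from the NAT's public address block that address, and every host behind the NAT, for the next ten minutes, which matches the intermittent loss of access described above.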

MBBNET
Member
 
Posts: 8
Joined: Sat May 23, 2015 1:27 am
Has thanked: 0 time
Been thanked: 0 time

Re: v1.4.6 Bug reports and Comments

Tue Dec 27, 2016 5:24 pm

Is anyone having issues getting into Mini 6 switches running 1.4.5.RC8? I can't seem to get into several with the user and pass that were set. Just wondering...

sirhc
Employee
 
Posts: 7416
Joined: Tue Apr 08, 2014 3:48 pm
Location: Lancaster, PA
Has thanked: 1608 times
Been thanked: 1325 times

Re: v1.4.6 Bug reports and Comments

Tue Dec 27, 2016 8:08 pm

No, but did you upgrade to v1.4.5rc8 from a version older than v1.4.4? v1.4.4 fixed a password issue when using special characters.

v1.4.5 fixed an issue where we were only checking the first 8 characters of the password. If your password is longer than 8 characters, try it with just the first 8 characters and then with the whole password.

If you still cannot get in, then I have no clue.

Maybe provide more information on what you're seeing, your password length, and whether it has special characters.
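The 8-character behavior described above is the classic truncation bug (traditional Unix DES crypt compared only the first 8 characters). A minimal illustration of what the pre-v1.4.5 check would have amounted to; these function names are mine, not the firmware's:

```python
# Illustration of an 8-character password truncation bug: the old check only
# compared the first 8 characters, so any password sharing those characters
# would authenticate.

def old_check(stored, supplied):
    return stored[:8] == supplied[:8]   # buggy: ignores characters past 8

def new_check(stored, supplied):
    return stored == supplied           # full comparison, as fixed in v1.4.5

print(old_check("longpassword", "longpass"))  # -> True (the bug)
print(new_check("longpassword", "longpass"))  # -> False
```

This is also why, after the fix, trying both the first 8 characters and the whole password tells you which behavior your stored credential was created under.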
