I am glad you are up and running, Les, and once again I am so sorry I released that firmware!
Thank you all for being so understanding about such a BLATANT mistake!
Not accessible after upgrade to v1.1.0rc12
-
sirhc - Employee
- Posts: 7415
- Joined: Tue Apr 08, 2014 3:48 pm
- Location: Lancaster, PA
- Has thanked: 1608 times
- Been thanked: 1325 times
Re: Not accessible after upgrade to v1.1.0rc12
Support is handled on the Forums, not in Emails and PMs.
Before you ask a question, use the Search function to see if it has been answered before.
To do an Advanced Search, click the magnifying glass in the Search Box.
To upload pictures, click the Upload attachment link below the BLUE SUBMIT BUTTON.
-
lligetfa - Associate
- Posts: 1191
- Joined: Sun Aug 03, 2014 12:12 pm
- Location: Fort Frances Ont. Canada
- Has thanked: 307 times
- Been thanked: 381 times
Re: Not accessible after upgrade to v1.1.0rc12
Well... it was a good learning experience. Now if this switch had been put into a real production environment, I would have had the cable on hand and the procedure all documented.
Failing to plan is planning to fail. At work I had a grab bag with all manner of cables, gender benders, cheat sheets, etc. I had hardware spares with configs saved to my HP Tablet PC. I had a spool of fiber, splice trays, and a fusion splicer all charged and ready to rock and roll.
When I mounted my switches, I always left a space to mount a replacement, made sure I had a spare power outlet, and ran all my jumpers long enough off to the sides so I could swap them one-by-one to the replacement switch.
-
sirhc - Employee
- Posts: 7415
- Joined: Tue Apr 08, 2014 3:48 pm
- Location: Lancaster, PA
- Has thanked: 1608 times
- Been thanked: 1325 times
Re: Not accessible after upgrade to v1.1.0rc12
I know it is a matter of time, but so far, with over 500 switches out there in service, I have not heard of a failure "YET". I know there will have to be one sooner or later, or at least a DOA, as the average failure rate in this industry is up to 1%.
We built these like tanks and did not try to pinch pennies, and so far so good. *Fingers crossed*
Support is handled on the Forums, not in Emails and PMs.
Before you ask a question, use the Search function to see if it has been answered before.
To do an Advanced Search, click the magnifying glass in the Search Box.
To upload pictures, click the Upload attachment link below the BLUE SUBMIT BUTTON.
-
lligetfa - Associate
- Posts: 1191
- Joined: Sun Aug 03, 2014 12:12 pm
- Location: Fort Frances Ont. Canada
- Has thanked: 307 times
- Been thanked: 381 times
Re: Not accessible after upgrade to v1.1.0rc12
LOL... stealing my soapbox again, eh?
I didn't mean plan for (expect) a Netonix switch to fail... but always expect the unexpected and try to have a plan for how to recover from it.
On my last job, I was 100% responsible for the business side of the network and only had a dotted line to the process control side. I would get called in to assist if'n when they had issues. I did all the work on my network, but the process control network was a union shop, so the ENI techs did all the installs and cable layout. There was one ENI tech who was obsessive about the jumpers and used short jumpers from the switches to the patch panels. Some of the other techs would tangle up the jumpers in front of the switches so badly that it was impossible to remove a failed switch from the front without removing jumpers going to other switches.
Sometimes I got lucky and could pull the switch forward just enough to get at the screws to remove the ears so that I could then pull the switch out the back instead. Sometimes not... the back was often such a rat's nest of power cords, premise wire, etc., that removing it via the rear was not an option either. If I had supervisory authority over the ENI techs, I would never have let them make such a friggin' mess in the first place. There was little chance of getting them to clean it up on a scheduled shutdown cuz the techs were busy doing other planned shit.
I had the same problem with their servers... jammed in willy-nilly... power cords too short to slide the server out. They cheaped out and didn't buy the optional redundant power supply. The Dell server had the C14 connector for the redundant PS, and sometimes the techs had a power cord plugged into the outlet as well, but if you pulled the other plug to try to rearrange the cords one at a time, the server went down hard!
All my servers had redundant power supplies, with the cords long enough and properly routed so the server could be slid out. All my network switches had long enough jumpers routed out to the side of the rack and back, so a switch or a module could easily be pulled. At one time I had to change out 25 modules when I discovered a production run flaw in them. They were hot-plug modules, so I could effect the swap with minimal (rolling) network disruption.
Anyway... ramblings of an old man. Like I said, to fail to plan is to plan to fail.
-
sirhc - Employee
- Posts: 7415
- Joined: Tue Apr 08, 2014 3:48 pm
- Location: Lancaster, PA
- Has thanked: 1608 times
- Been thanked: 1325 times
Re: Not accessible after upgrade to v1.1.0rc12
lligetfa wrote:All my servers had redundant power supplies, with the cords long enough and properly routed so the server could be slid out. All my network switches had long enough jumpers routed out to the side of the rack and back, so a switch or a module could easily be pulled. At one time I had to change out 25 modules when I discovered a production run flaw in them. They were hot-plug modules, so I could effect the swap with minimal (rolling) network disruption.
Properly..... run cables?
Correct..... lengths on cables?
What strange language is this that you speak, Les? I'm definitely not finding these phrases in the WISP techno-babble handbook.
Support is handled on the Forums, not in Emails and PMs.
Before you ask a question, use the Search function to see if it has been answered before.
To do an Advanced Search, click the magnifying glass in the Search Box.
To upload pictures, click the Upload attachment link below the BLUE SUBMIT BUTTON.
-
lligetfa - Associate
- Posts: 1191
- Joined: Sun Aug 03, 2014 12:12 pm
- Location: Fort Frances Ont. Canada
- Has thanked: 307 times
- Been thanked: 381 times
Re: Not accessible after upgrade to v1.1.0rc12
LOL ja, well... I'm not a WISP, never was in the true sense. I did set up and operate a hotspot in and around the mill for contractors to use, but no money was charged for it. I set up wireless links for the mill, both fixed and mobile.
I did liaise with two regional WISPs fairly often. I was also a client for both. I brokered a deal with one to get space on one of our towers so that they could serve many of my users. I also ran Radio Mobile path analysis for many of them, and I installed relays for some hard to reach spots. I worked with a WISP to get internet out to a couple of our hydro dams.
-
lligetfa - Associate
- Posts: 1191
- Joined: Sun Aug 03, 2014 12:12 pm
- Location: Fort Frances Ont. Canada
- Has thanked: 307 times
- Been thanked: 381 times
Re: Not accessible after upgrade to v1.1.0rc12
lligetfa wrote:I had a hard time to track one down... first stop was a waste of time... nothing at all with DB9 male or female, no gender changers, no null modems, not even the ends to solder up my own. Second stop was not looking too good... I walk in the store and find out they laid off all their computer techs...
My bad for spreading rumors... the first shop I went to told me the other guys were getting out of the PC/Network repair business and that they laid off their techs. When I saw there was nobody in their service dept, I assumed it to be true.
Anyway, I emailed them to confirm, and they denied the rumor.