Frequently used Norton Commander.
I've used Midnight Commander on *nix systems since not long after it was released. It's such a versatile tool that it's like a shell unto itself.
I'm not thrilled with your characterization of so many sysadmins as lazy.
I understand your reply, but I didn't say there were "so many", just "many".
It was just my perception of things; no offense intended, of course, and no universal judgement.
Nowadays things are different, maybe because the whole world of work has become fiercely competitive, hiring is far more careful than it was decades ago, and system administrators are much better prepared.
However, sometimes the way something is started also shapes the rest of its story.
IMO
Point taken.
Yes, a way of swallowing up small companies to which we should never have had to "get used".
True. But, to an extent, the monopolies themselves reflect complexity. There's a lot of time and effort that goes into making globally distributed, real time updating, multi-platform software.
I used to work for a relatively small company (around 50 people). We struggled to get business because many of our competitors were much larger, and our customers ran 24x7 businesses and wanted to be assured that we'd be around to support our stuff in London, NY, Tokyo, etc.
Eventually we were purchased by a competitor 20 times our size that could offer that level of support and distribution.
What has been lost, almost for good, is the "bio-diversity" of companies.
The parallel with life in the oceans (but also elsewhere) perhaps helps me say what I mean: across companies, we lose the ability to think (and decide) differently.
An ocean cannot be populated by sharks only.
It would be the beginning of the end of everything.
IMO
The risk of push updates was clearly demonstrated. Will they learn from it? No.
This has nothing to do with the risks of that.
This is a process issue.
ALL updates are normally fully tested, then retested, then rolled out to a sandbox, then retested and tested again, rolled out to a limited set of users, and at the end of the whole line, released to the rest of the world. This outfit is top of the line.
Somebody jumped the line. Human error.
Jan
So today was the day I had planned to get my car inspected (like the MOT in the UK) here in Massachusetts. Fortunately, I called the garage where I usually get this done, because the entire statewide network is down. I don't know whether this is CrowdStrike-related, since MA inspection system outages aren't that uncommon, but I have my suspicions.
Wouldn't argue that point. Microsoft has put a lot of what I would consider to be application level code into the OS. It's been a slow creep over the last 30 years.
But the real problem with this is the access requirements that all anti-virus software has. It needs to be allowed deep into the OS, into areas that sit below the driver layers and that are not subject to the same management and constraints that the OS applies to higher layers. A failure that low in the OS will probably lead to a BSOD...
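To make that concrete, here's a minimal sketch (plain C, all names invented, not CrowdStrike's actual code) of why kernel-side parsing of an untrusted content file has to validate everything: in user space a bad dereference kills one process, but at this level it takes the whole machine down.
```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical content-file record: an offset/length pair into a blob. */
struct record {
    uint32_t offset;
    uint32_t length;
};

/* Defensive parse of an untrusted blob. Every field is validated
 * before it is dereferenced, because a wild read down here means
 * a BSOD, not just a crashed process. */
static const uint8_t *get_rule(const uint8_t *blob, size_t blob_len,
                               const struct record *rec)
{
    if (blob == NULL || rec == NULL)
        return NULL;                           /* never trust the caller */
    if (rec->offset > blob_len)
        return NULL;                           /* offset outside the blob */
    if (rec->length > blob_len - rec->offset)
        return NULL;                           /* length overruns the blob */
    return blob + rec->offset;                 /* safe: rec->length bytes */
}
```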
I guess we may learn exactly what CrowdStrike got wrong, but I think it's mostly a testing and patch-release issue.
The driver "layer" is below the OS. Indeed, asynchronous drivers using interrupts run "below" the kernel.
The anti-virus layer runs IN the OS. Likely affecting the task manager and perhaps "washing" network stacks.
This has nothing to do with the risks of that.
This is a process issue.
ALL updates are normally fully tested, then retested, then rolled out to a sandbox, then retested and tested again, rolled out to a limited set of users, and at the end of the whole line, released to the rest of the world. This outfit is top of the line.
Somebody jumped the line. Human error.
Jan
Jan, CrowdStrike admitted on Friday that they hadn't tested their latest update under real-world conditions.
Someone tried to be cheap to get a bigger bonus.
Assuming CrowdStrike's process is as you say, then how did they bypass testing to push the code into the release branch? In a modern GitLab Agile process you have to submit test results and get a positive peer-review count before the system accepts the code... into ANY shared branch.
There is a way to bypass it... I've done it... your typical "I gotta do this and it's Friday and they want it tomorrow"... but only for submittals into a test branch.... NEVER into a release branch.
It's really a crazy thing that happened.
+++ BTW, the "risk of push updates" is real from a user's point of view. When a machine becomes stable and is doing its job correctly, the last thing we want, as users, is a change. Microsoft drives me nuts... but my Android Samsung phone is by far the worst... it keeps telling me I've got to update...
Worse, though, is Android Auto.
Related subject: how many of you have multiple, daily-updated backups for when your PC catches fire?
Jan
This has nothing to do with the risks of that.
This is a process issue.
ALL updates are normally fully tested, then retested, then rolled out to a sandbox, then retested and tested again, rolled out to a limited set of users, and at the end of the whole line, released to the rest of the world. This outfit is top of the line.
Somebody jumped the line. Human error.
Jan
Everything is a human error, Jan: software is written by humans. Even if hardware fails, it's a human error, since making the system fault tolerant was a human task. Having said that, CrowdStrike aims, among other threats, at zero-day exploits; responding fast is their unique selling point. Their customers are worried about encryption attacks. The problem with the push updates is that they are sent out by block, if not to all nodes at once. As a sysadmin, you have no response time to block the update.
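For what it's worth, here's a minimal sketch (in C, every name hypothetical) of the kind of ring gating a staged rollout could use, so an update reaches a small bucket of hosts before it reaches everyone: hash each host ID into a stable bucket and only apply the update once the rollout percentage covers that bucket.
```c
#include <stdint.h>
#include <stdio.h>

/* FNV-1a hash: stable across runs, so a host always lands in the same bucket. */
static uint32_t fnv1a(const char *s)
{
    uint32_t h = 2166136261u;
    while (*s) {
        h ^= (uint8_t)*s++;
        h *= 16777619u;
    }
    return h;
}

/* Hypothetical gate: apply the update only when the rollout percentage
 * (0..100) has grown to cover this host's bucket. */
static int update_allowed(const char *host_id, unsigned rollout_percent)
{
    return (fnv1a(host_id) % 100u) < rollout_percent;
}

int main(void)
{
    const char *hosts[] = { "host-a", "host-b", "host-c", "host-d" };
    for (int i = 0; i < 4; i++)
        printf("%s: %s at 25%% rollout\n", hosts[i],
               update_allowed(hosts[i], 25) ? "update now" : "wait");
    return 0;
}
```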
My data is stored on network drives (RAID5), RAID5 USB drives, or RAID0 USB drives.
The machines thus have very little data, since even the configuration files are external.
I do monthly, or sometimes bimonthly, full backups to the network, and I also build a backup drive with a SATAwire.
The network itself is also backed up... each RAID array is inherently redundant, but I use one of the NAS as an online backup to the other. Terrible space utilization, awesome security.
The machines proper have their own backup partitions... that has worked well for me for the NVMe "drives".
I even have my own backup PC... if one fails, I just flick the KVM switch.
Naturally, I've got UPSs, and the NAS have two power supplies with separate sources of power... one on a UPS, the other (the primary) right into the wall.
Should I say that my "data closet" has a dedicated 15A homerun?
The machines thus have very little data, since even the configuration files are external.
I do monthly, or sometimes bimonthly, full backups to the network, and I also build a backup drive with a SATAwire.
The network itself is also backed up... each RAID array is inherently redundant, but I use one of the NAS as an online backup to the other. Terrible space utilization, awesome security.
The machines proper have their own backup partitions... that has worked well for me for the NVMe "drives".
I even have my own backup PC... if one fails, I just flick the KVM switch.
Naturally, I've got UPSs, and the NAS have two power supplies with separate sources of power... one on a UPS, the other (the primary) right into the wall.
Should I say that my "data closet" has a dedicated 15A homerun?
I have a device that rsyncs my music collection weekly. Really, I can afford to lose a week's worth of changes on the PC.
At my previous job I sometimes loaded and unloaded enough tapes from a library that it would count as a normal person's daily exercise requirement. I also got to do some highly important restores.
Most companies are skimping on backups, and sometimes they really pay the price. It's bad enough that you lost client data; it's worse if you paid the bad guys $22 million to get it back (a healthcare company recently did exactly that).
I'm not an expert, but in my opinion issues start with corporate networks managed by incompetent system administrators and employees (please note that I'm not saying everyone is, just many) who don't give a damn if the world falls apart.
So you have an internal threat that opens the door to external threats.
And external threats that attempt to access corporate networks.
So someone had to think of a remedy that would let anyone in a company open any type of file without worrying about the consequences.
The slowdown of corporate systems is also due to the fact that any executable file (including scripts and batch files) could be malicious.
And to incompetent system administrators.
There is no way to know whether code is hostile before it executes, so huge databases are created and accessed simultaneously by thousands and thousands of requests; files are inhibited first, then analyzed, and then maybe executed.
And all this takes time.
Someone thought of reassuring lazy and incompetent system administrators by offering them a fabulous piece of software that in a short time was installed on millions of machines.
In a short time, that company reached a turnover of 4 billion dollars.
This.
That's the reason why we in R&D have had a long-standing feud with the IT people.... we think they are .......
So we put our systems behind our own firewalls.... engineers always get the better machines, our own routers, our own VPNs....
I can tell you horror stories about IT. Short story is there have been times when they needed, badly, to be taken out to the back corner of the parking lot and taught their manners.
I understand your reply, but I didn't say there were "so many", just "many".
It was just my perception of things; no offense intended, of course, and no universal judgement.
Nowadays things are different, maybe because the whole world of work has become fiercely competitive, hiring is far more careful than it was decades ago, and system administrators are much better prepared.
However, sometimes the way something is started also shapes the rest of its story.
IMO
The issue, IMHO, is that the term "sys admin" has morphed.
It used to be we had "network admins" and "sys admins", and that was it. You might have some hired hands in the non-R&D world to help out the users, but nothing more.
Then IT got a hair up their a$$, took some Microsoft certificates, perhaps Novell, perhaps a CCNA, and they sold management a bill of goods.
IT became a PITA, a source of pain for the users and a sinkhole for the budget.
They want to control everything...
Once upon a time, when I informed our IT (because they don't call themselves "sys admins" anymore) that we needed 128 static IP addresses, the kid told me that we were using the wrong RTOS for our avionics. Needless to say, he stated this in a meeting I had called, in front of our bosses.
I didn't strangle him, I just sat quietly... the IT fool telling the R&D engineer how to do the job.... needless to say, his boss understood our need, so in two minutes we had 256 static IP addresses and the budget for us to buy, install and maintain two new Cisco routers for our lab.
Or, same company... IT shut down ALL ftp servers on a Friday... He even tunneled through our lab routers and shut down our R&D ftp servers... the kid was NOT supposed to do that.... We had a sell-off in front of QA on Monday. The kid went home after lunch.... so our VP got hold of him (this was NG, so the VP was a BIG SHOT) and told the kid to get his *** back into the office and fix what he had done.
I could go on and on.... about such things.
No love between R&D and 99% of the IT guys.
Curiously, in my current contract, the IT guy is very cool and helps a lot.... nice, smart guy too.
but my Android Samsung phone is by far the worst... it keeps telling me I've got to update...
Worse, though, is Android Auto.
Time for an iPhone?
dave
I thought it was cute. I had a FORTH ROM for my BBC Micro in the early '80s; I dabbled, but I really didn't have a project I needed it for, and RPN hurt my head. All the action was in assembler or BASIC.
Thought Modula-2 had promise...
Mostly now I program in Java or C/C++. Keep thinking I should try Rust, dunno if there's a compiler for Arduino or not.
To be honest, C is still the programming language in my field.... firmware. We use C++ here and there, but then you see lots of singletons and tons of friend declarations, which defeats the purpose.
With C, I can do almost everything that C++ does except inheritance. And, given a common API, we write a lot of stuff as relocatable, so when the kernel wakes up and reads the configuration it knows which files to install. No need for linking, nothing... the addresses are all specific and identically mapped.
Simple...
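To give an idea of how C covers most of what C++'s virtual functions buy you, here's a minimal sketch (names all invented) of the classic trick: a struct of function pointers acting as a hand-rolled vtable.
```c
#include <stdio.h>

/* A "vtable" in plain C: a struct of function pointers. */
struct driver_ops {
    int  (*init)(void *ctx);
    void (*tick)(void *ctx);
};

/* A "derived class": its first member points at the ops table. */
struct uart_drv {
    const struct driver_ops *ops;
    int baud;
};

static int uart_init(void *ctx)
{
    struct uart_drv *u = ctx;
    printf("uart up at %d baud\n", u->baud);
    return 0;
}

static void uart_tick(void *ctx)
{
    (void)ctx;   /* poll the FIFO here */
}

static const struct driver_ops uart_ops = { uart_init, uart_tick };

int main(void)
{
    struct uart_drv u = { &uart_ops, 115200 };
    /* "Virtual" dispatch: callers only ever touch the ops table. */
    u.ops->init(&u);
    u.ops->tick(&u);
    return 0;
}
```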
I see RTist used now, which is fine, but if you rely on its code generation then it becomes a PITA. I can use Eclipse to develop code faster than RTist or Rhapsody can generate it.
Oh, I'm surprised y'all didn't mention SQL.
You know, SQL creates intermediate tables while it executes... used creatively, that feature lets you keep what amount to local variables in cache, and that truly speeds up the query. I got some queries to run in 200 msec instead of 5 sec.... but the SQL guys weren't impressed by efficiency... they wore ties, I wore jeans... pfft...
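A self-contained illustration of that temp-table trick, using SQLite from C so it runs anywhere (the production case was a big corporate server, and every table and column name here is invented): materialize the expensive intermediate result once, then query the small cached table instead of rescanning.
```c
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;

    if (sqlite3_open(":memory:", &db) != SQLITE_OK)
        return 1;

    const char *sql =
        "CREATE TABLE orders(cust INT, amount REAL);"
        "INSERT INTO orders VALUES (1, 10.0), (1, 20.0), (2, 5.0);"
        /* the 'local variable': an intermediate table, built once */
        "CREATE TEMP TABLE totals AS "
        "  SELECT cust, SUM(amount) AS total FROM orders GROUP BY cust;"
        /* later queries hit the small cached table, not a fresh scan */
        "SELECT cust, total FROM totals WHERE total > 12;";

    /* NULL callback: results are discarded; this sketch only shows the shape. */
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```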
Time for an iPhone?
dave
Nope.... no Apple stuff for me... sorry.
Apple is to computers what Bose is to audio.
I've done a bit of assembler (Z80, 8085, and some stuff on an 8-bit ST micro back in 1990-91) and then, after a long hiatus, went across to C and C++ on the Mbed platform. That's closing down in July 2025, so I am just going to migrate across to the ST dev environment for general-purpose ARM MCUs. ST have a really good selection of MCUs with loads of I/O options and a very good peripheral configurator.
Re the CrowdStrike thing - I can't imagine how difficult it must be to manage large organisations with multiple sites spread across the globe. I worked for a large corporation and the IT guys had their hands full.
ARM is very good.... fun to use.... but now there are hybrid core/FPGA devices: a chip with some hard cores and a big FPGA, so you can program your own "silicon". Sometimes you have a couple of R5 cores and then you write an A7 into the silicon....
I used to love PPC too... too bad they killed it off.
Big Endian is where it's at.... even if you have to sweat when you're working on PCI dumps.
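The PCI-dump sweat in one tiny sketch: the same 32-bit constant laid out byte by byte in memory. On a big-endian machine the dump reads in natural order; on a little-endian one you do the mental byte swap.
```c
#include <stdint.h>
#include <stdio.h>

/* Print the in-memory byte order of a known constant. Big-endian
 * hardware prints 12 34 56 78; little-endian prints 78 56 34 12. */
int main(void)
{
    uint32_t v = 0x12345678u;
    const uint8_t *p = (const uint8_t *)&v;
    for (int i = 0; i < 4; i++)
        printf("%02X ", p[i]);
    putchar('\n');
    return 0;
}
```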
+++
Remember the HP1000F? 0x17777
Nope.... no Apple stuff for me... sorry.
Apple is to computers what Bose is to audio.
I respectfully disagree.
dave
Old joke
If airlines were like:
Unix, the passengers and crew would meet on the tarmac and build their own plane.
Microsoft, everything would be shiny and spanking new and the planes would blow up at 40,000 feet.
Apple, everything would be shiny and new and if you asked them how it works they'd kick you out.
I want to know how things work.... hence, I don't like Apple. The reason why it's "reliable" is because it's purposely dumbed down and limited. It's great at what it does, but it doesn't do that much.