Probably. Maybe. It definitely could be.
This post is going to be a little long, so I’ll cut out as much of the crap as I can. That being said, read on.
I was reading an article the other day on the internet. I’ll just list it here verbatim:
I make a living as a sysadmin. What does that mean, to be a sysadmin? Well, where I come from it means knowing a lot. It means knowing how to config routers and networking equipment, it means advanced firewalling, DNAT, SNAT, it means knowing how to do traffic sniffing and deciphering packet-level information, it means knowing how to build and configure common services like SMTP/IMAP/POP/mail via a dozen different pieces of software on three different families of operating systems, it means knowing how to build clusters for high availability and high performance, it means knowing when to use CIFS, NFS, SMB, GFS and when not to and what the difference is between them all, it means knowing how to configure iSCSI, fibre channel, SANs, direct and non-direct storage, it means knowing SQL and getting information out of databases, it means knowing how to program in a dozen different languages and how to script and automate events in any OS to make life easier, it means understanding authentication and security settings, how to configure any directory service from LDAP to AD to NIS, it means understanding DNS is more than just an optional add-on to look up system names occasionally, it means understanding encryption, knowing what terms like Diffie-Hellman, AES, SHA1 and others mean, and what parts of the encryption process they apply to, it means being able to make everything you do completely redundant and fault tolerant, right down to your own job, and it means so much more.
Why is it that professional IT services today consist of service reps who tell you the things you are doing are untested, dangerous, unsupported, different, not usual, or a host of other words meaning they are scared shitless and unwilling to learn something new? Why is it that I spend my time building things people tell me for 6 months during build and test “will never work”, only to have them go into production and work ten times faster for one tenth the cost of the old system? Why is it that IT professionals today choose brand labels over intelligence, and post-justify it by hiding behind “board confidence” when providing a solid, working, profitable system is the best thing to boost confidence from the board?
And every time I leave, I hear the same things. Some new guy comes in to replace me. Within days/weeks he’s broken something necessary for production, lost terabytes of data, destroyed the backup/DR/recovery systems, spent hundreds of thousands replacing something that met the business’s every need with some proprietary/generic piece of rubbish that performs half as well when there were dozens of other things that could have been improved instead. And all because they didn’t take the time to understand the business, its needs, and the solutions currently in place.
The hardware is provided by a tier 1, namebrand hardware provider (number 2 worldwide in server sales, I hear). The support guys who come on site are paid absolute buckets of cash and are supposedly the best of the best. These guys come out and utterly bollocks up installs. They constantly tell you things are impossible to achieve, only to stare slack-jawed in amazement three weeks later when they are achieved and working faster than their setups were supposed to provide. They rant and spit when I build things for zero-dollar licensing cost that their multi-hundreds-of-thousands-of-dollar hardware is supposed to be the only stuff that can do the job (my latest GFS/CLVM cluster outperforms their SAN snapshotting, and is free of charge compared to their pay-a-license-per-snapshot “solution”). And of course, their golden trump card is to say “well that’s fine, but we don’t support it” when you offend them. Watch the CIOs scramble when their hardware vendors threaten to not offer support! Yet ask them when they last called on the “professional” support (other than simple break/fix/replace stuff), and most can’t answer.
So when did this happen? When did “the IT guy” turn from the person who was cross trained with the breadth and depth of knowledge across a wide variety of systems and procedures turn into a drivelling half-wit who sees more value in a commercial certification than actually learning and building things, and who decides to be “the Microsoft guy” or “the UNIX guy” or “the Cisco guy” and learns nothing but one brand-name item to the ignorance of all others, and often poorly because they can’t separate concepts and ideas from brand names and marketing acronyms?
I’ve had a gutful. Something must come of this. The industry as a whole is in for a rude shock if it keeps going the way it does. We keep packing IT departments full of more people who know less. Things break constantly because unqualified people manage them, and departments stop communicating because the connecting technologies are always “somebody else’s problem”. The industry gets flooded with cowboys who have no concept of system and data integrity, who don’t take care with the systems they are put in charge of, who don’t bother securing things in a proper fashion so that data doesn’t leak everywhere. It’s almost a daily event to hear of some horrendously scary security breach that affects millions of innocent people who put their trust in these idiots.
Please note that these aren’t my words, but they do echo my thoughts. If you’re interested, and have an OCAU account, you can read the full thread here, otherwise check here for the full post.
Now, I’m not perfect when it comes to IT; my knowledge is about as far from complete as it can possibly be.
Don’t get me wrong, I know, and have met, people who are exactly like those described above – those guys that say they can do “all that”, but in reality can do “none of the above”. On the other hand, there are people I know who aren’t like that. Chris is one of those people. Sure, he can be the slackest person ever when it comes to paying people back, or writing blog posts, but like any good Linux user, he lives and dies by his man pages. If there’s something he doesn’t know about, he’ll probably “wiki” it, or use the Google machine. Mark my words, he’ll become one of those people who know absolutely everything about absolutely anything – and I wish him the best of luck. Better him than me…
There was a situation at work where a UNIX jockey (or who I assume to be a UNIX jockey) came in and asked about getting a Mac. He was relieved to know about the support for X11, the BSD subsystem, the Terminal and all that, but then he asked whether he could install a GNOME or KDE environment on it in place of Aqua. I was a little shocked that he would want to do that, but recovered a little by saying that I’m sure you could (or at least hack it so that it worked), but I’m not sure why you would. That was all fine and good, and being the Linux user that I once was, I was pretty confident I could handle the rest of his questions. One for one. Not bad.
His next question was comparatively easy: can I compile my own apps using the GNU C Compiler? Well, yeah, Apple include GCC as part of Xcode, and I’ve even compiled wget (not included by default on OSX) from scratch and installed it on my system. However, there are restrictions: you can’t install whatever version of GCC you like; Apple dictate what version you can and can’t install officially. I also added that there would be nothing stopping you from installing the version of GCC provided by Apple, and then compiling your own version of GCC from scratch – however this would probably cause untold mayhem and mess. Two for two. Still going strong…
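For anyone wondering what “compiled wget from scratch” actually involves, it’s the classic GNU autotools dance, using the GCC that Xcode provides. This is a sketch only – the version number and mirror URL below are illustrative, not the exact ones I used:

```shell
# Assumes Xcode (and therefore Apple's GCC) is already installed.
# The wget version and mirror URL here are illustrative.
curl -LO https://ftp.gnu.org/gnu/wget/wget-1.11.tar.gz
tar xzf wget-1.11.tar.gz
cd wget-1.11

./configure --prefix=/usr/local  # probe the system and find Apple's toolchain
make                             # compile with the Apple-supplied GCC
sudo make install                # install under /usr/local, clear of system paths
```

Installing under `/usr/local` keeps your hand-built binaries separate from whatever Apple ships, so an OS update won’t clobber them.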
Then he threw me a curveball – he asked me which libraries X11 was built against, and which libraries the BSD subsystem of OSX shipped with. Of course, I had no idea, and responded by saying that Apple generally don’t release that kind of documentation (although I’m not too sure about that) as they’re running a closed source scheme. This is where I tripped up a little – sure, the info he was asking for was a little technical, and not out of my reach, but surely I wasn’t expected to rattle off each and every single library that Apple ships with their OS? Surely not. However, I definitely could have (and was capable of) finding out this information beforehand. Why didn’t I? Primarily because I don’t want to memorise crap for the sake of memorising crap, but really – if you’re that dependent on some special library, install GCC and compile it yourself!
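In hindsight, the answer was a command away: on OSX, `otool -L` lists the shared libraries a binary or dylib was linked against, so you can interrogate X11 directly rather than hunt for documentation. The paths below assume Apple’s X11 install and are illustrative:

```shell
# List the shared libraries libX11 itself links against
# (path assumes Apple's X11 package; adjust if yours differs).
otool -L /usr/X11R6/lib/libX11.dylib

# Same check for an X11 client binary.
otool -L /usr/X11R6/bin/xterm
```

It’s the Mac equivalent of running `ldd` on a Linux box, which the UNIX jockey would presumably have recognised.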
This is how I’ve become that “drivelling half-wit who sees more value in a commercial certification than actually learning and building things, and who decides to be” … the Mac guy … “and learns nothing but one brand-name item to the ignorance of all others.” That’s me!
As a closing thought just to make myself feel better, there was another scenario at work where I had stuffed up. Yeah, it happens. Anyways, that affected my confidence for a bit. After a few weeks of under-performance and general moping, I decided to talk to someone at work who knew his stuff. I approached him with my concerns, and he basically said that I do alright for how old I am, and it didn’t matter that I stuffed up ‘cos it was a problem that was easily fixed. After that, I felt a little better.
There’s this other guy at work who “expects brilliance, all the time” from Will and me. He’s a fantastic guy – making it clear what he expects, and what he doesn’t expect. When I don’t know how to solve something, he isn’t disappointed – he knows what I’m capable of. He’s a good guy.
The point is, if you’re thinking of going into IT, don’t be like “that UNIX guy” who knows everything about UNIX and nothing about anything else, or “that Mac guy” who knows everything about Mac and nothing about anything else. Read your man pages. Study hard. Sure, worry about your final CCNA exam, but at the end of the day, it’s just a qualification that looks damned good on your resume.
Not that that’s important or anything 🙄
Comments below. Apologies for the long post, hope it was worth your time.