PowerShell – The Trouble With All That Power
Lately I’ve seen a lot more conversations on the forums about how much error checking you should do, how careful you should be, and whether you should include aliases in your modules. Silly questions, you may say.
This comes down to two problems.
Think about this: PowerShell is the only language with an interactive environment that lets you deal with objects. OK, there might be some others out there, but they aren’t as popular (and I don’t use them!). What’s the big deal, you ask? If you’re at all like me, you leave your console (ISE for me) open all the time, and by all the time I mean it loads when the system boots and stays open (my system is only rebooted for patches). I don’t have a good reason other than I like having PowerShell at my fingertips. Still with me? Chances are, if you do this, you also tend to work in PowerShell and store data here and there in variables to quickly test or work with something. Let’s say it’s a server you’re working with. Do you do this too?
$s = "MyServer"
gwmi win32_Computersystem -co $s
gwmi win32_share -co $s
What’s wrong with this? I stored my server name in $s, which is certainly easy to type as I work on the fly. The problem is, a few hours later, when I start to do something different, I reuse that $s for another server, or perhaps for something else entirely. It’s just a server name, no big deal, right? What about $dt to store a DateTime object? I do that a lot. Or $dt to store a DataTable? Perhaps you see where this is going. Because this shell stays open for days at a time, I might overwrite a variable, and then when I want to look at it again I have a problem (that DataTable with all the SQL data now contains a date, oops). This is obviously user error, and I’ve trained myself to open new tabs to deal with problems like this (I really don’t want to type long variable names).
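A minimal sketch of how this bites you (the variable name $dt and the column values are just illustrative):

```powershell
# Monday: stash a DataTable of SQL results in a short variable
$dt = New-Object System.Data.DataTable
[void]$dt.Columns.Add('ServerName')
[void]$dt.Rows.Add('Server01')

# Wednesday, same session: reuse $dt for a date without thinking
$dt = Get-Date

# The table is gone - $dt is now a DateTime
$dt.GetType().Name   # DateTime, not DataTable
```

One cheap defense is Remove-Variable dt when you’re done with a throwaway variable, or a quick Get-Variable dt to see what’s in a name before you reuse it.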
The second problem is that more and more people are developing some really neat modules. I personally try to stay away from them because I tend to take what I do on the fly and turn it into a script that will run from a server, and I’m not going to install modules there. Even so, I find that my module collection is growing.
Where am I going with this? As you install more modules, create your own, and start working more and more in PowerShell, you’re likely to run into the problem of variable/alias confusion. Even though it’s a ‘Best Practice’ to label your functions with meaningful names (good job Quest: Get-QADUser), you don’t always see that in practice. On the plus side, MS is cutting this off with the new IntelliSense in PowerShell 3, which will ask which one you want to use. But still, as the years go by and more and more cool stuff comes out, this will become a rather large problem. This isn’t something we needed to deal with back in the day; once you ran the script, all of those variables were gone upon completion. If you worked in *nix, you likely didn’t use the shell like that (plus it’s all text and not objects, no fun).
On top of this, more modules are creating variables. Most of them are fairly verbose with their names, but not all. I often find myself trying to use $host to store a hostname. As the PowerShell environment grows we’ll need to be more and more careful about what variables we use.
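$host is a good cautionary example: it’s one of PowerShell’s automatic variables (it describes the host application, not a hostname), and it’s read-only, so trying to store a server name in it fails. A quick sketch:

```powershell
# $host is an automatic, read-only variable describing the host application
$host.Name    # e.g. 'ConsoleHost' or 'Windows PowerShell ISE Host'

# Assigning to it throws: "Cannot overwrite variable Host because it is
# read-only or constant."
# $host = 'Server01'

# Before adopting a short name, check whether it's already taken:
Get-Variable -Name host -ErrorAction SilentlyContinue |
    Select-Object -Property Name, Options
```

A Get-Variable check like the last line is a cheap habit before you commit a short name to muscle memory.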
Advice? First read on.
The developer problem isn’t a new one, but it’s probably new to a lot of you PowerShell guys who are admins, want to use PowerShell to automate things, and like the interactive console idea. My background is a mixed bag of admin and development. My admin experience is much stronger, but I know enough in both areas to understand this problem.
As you write code, your first goal is to make it do what you want it to do, which is the fun part. Once you get the code working the way you want, you should ask yourself: what could break this, and how can I stop/trap it?
If you spend any amount of time on this, you are likely to become overwhelmed by just how many things can break your code. I once heard an 80/20 rule (there are many versions) for programming:
20% logic (the work) and 80% error checking. If you think that’s a lot, consider this: let’s say you want to modify a file on a remote system. You’ll first want to verify that the system is online with Test-Connection, because it’s quick. Then you’ll want to make sure you can access the file location (the timeout is generally slower for this), then check whether the file is there, and if it is, make sure you can edit it.
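That sequence of checks could look something like this. The function name and paths are mine, not a standard; it’s just a sketch of the test-first sequence described above:

```powershell
# A sketch of the test-first approach; Test-RemoteFileEditable is a made-up name
function Test-RemoteFileEditable {
    param([string]$ComputerName, [string]$Path)

    # 1. Quick check: is the box even online?
    if (-not (Test-Connection -ComputerName $ComputerName -Count 1 -Quiet)) {
        Write-Warning "$ComputerName is offline"; return $false
    }
    # 2. Slower check: can we reach the file's folder?
    if (-not (Test-Path -Path (Split-Path -Path $Path))) {
        Write-Warning "Cannot reach the folder"; return $false
    }
    # 3. Is the file there?
    if (-not (Test-Path -Path $Path)) {
        Write-Warning "File not found"; return $false
    }
    # 4. Can we actually open it for writing?
    try {
        $fs = [System.IO.File]::Open($Path, 'Open', 'ReadWrite')
        $fs.Close()
        return $true
    }
    catch {
        Write-Warning "File is locked or access denied: $_"
        return $false
    }
}
```

Notice the ordering: each check is roughly slower than the one before it, so you fail fast on the cheap tests.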
There are two schools of thought: test first, or trap errors. I won’t say one method is better than the other; I use both depending on what’s faster/safer.
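The trap-errors style, for contrast, just attempts the operation and catches the failure. A sketch (the path is illustrative):

```powershell
# Trap-errors style: attempt the operation, catch what goes wrong.
# -ErrorAction Stop turns the cmdlet's non-terminating error into a
# terminating one so the catch blocks actually fire.
$path = "\\Server01\c$\config\app.ini"
try {
    $content = Get-Content -Path $path -ErrorAction Stop
    "Read $($content.Count) lines"
}
catch [System.Management.Automation.ItemNotFoundException] {
    Write-Warning "File does not exist: $path"
}
catch {
    Write-Warning "Could not read the file: $_"
}
```

This style is often faster to write and safer against race conditions (the file can vanish between your Test-Path and your Get-Content), at the cost of slower failures when the remote system is offline.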
Where am I going with this?
To the Point – The Knowledge:
The problem I see is that people do one of two things: they overthink it and get overwhelmed by how many things can go wrong, or they just say “it works, I’m done!”
There are some great documents out there on this, but this isn’t a programming class; it’s an adaptation for the admin who wants to write code. It comes down to two main things:
Know The Audience – I think this is a big one. Who will use this code? What is their ability level? Is it a helpdesk team that won’t be able to figure out what the PowerShell error means, or is it a technical group? Is it to be deployed as a server job? Is it being sold or published to the community?
Know The Risk – What are the repercussions of this script failing? If it’s a server job, it could be really bad; you might lose data over it. If it’s for a client, you run the risk of making them unhappy and increasing support costs. If it’s for the community, then the project could fail. If it’s for you, well, you probably don’t care.
My point here is that there is a middle ground on how much error checking needs to be done. Most people aren’t doing enough, and some are overthinking it. So the next time you write a script and think you are done with it, think about your audience and what will happen if the script fails.