Friday, September 25, 2015

More on Editing Multiple Documents in vim

Using vim Windows

We’ve talked about using buffers to edit multiple documents in vim. Each file lives in its own buffer, rather like a tab in a web browser. But sometimes I want to view one file while editing another, and it would be handy to have both files on the screen at once. To support that, vim gives us windows.
To split the screen into multiple windows in vim, switch to command mode and run :split, or press control-w followed by s. This splits the screen into two windows, each showing the file that was in the original window. Running :split again creates a third window, and so on.
If you want to open the new window with a different file instead, run :split myfile. This opens a new window with that file loaded. To open a window with a new, blank file, run :new or press control-w n.
The active window can be resized with the command :resize #, or by typing z# followed by Enter, where the # sign is the number of lines high that the window should be.
To move up and down among the visible windows use control-w k and control-w j respectively. To close the selected window, run :close or press control-w c.
There are actually many commands to manipulate buffers and windows in vim. These are the basics and will get you through most common tasks. To learn more, go to vim’s command mode and run :help window.
Be seeing you.

Friday, September 11, 2015

I'm an Alias, I'm a Legal Alias

Useful Command Aliases

When working from the command line I like to cut down on the amount of typing that I do because a) I get in a hurry, and b) I fat-finger the things I’m trying to type, so c) I end up doing things over to get them right. I use aliases to take some common tasks and shorten them up so that I can save time and face.
If you look at my *nix systems you’ll find several aliases for various incarnations of ls:
alias l='ls -F'
alias la='ls -a'
alias ll='ls -lh'
The first of these, -F, appends a character to the end of the file name to identify the file type: * for executable files; @ for symbolic links; and / for directories. Regular files don’t have any special character following their names. While most Linuxes and BSDs use a colorized ls command these days, this is useful if you need to work from a terminal or remote session that doesn’t support color.
The second alias lists all files, including hidden ones, and the third displays a detailed listing with file sizes in “human readable” format: gigabytes or megabytes instead of bytes.
Aliases can also be more sophisticated, standing in for entire pipelines. When I use the df command to check free disk space, usually I just want to see mounted logical volumes and not pseudo-filesystems and LVM disks. I pipe the output of df to grep to get what I want:
alias dfh='df -h | grep -v "^[[:alpha:]]"'
And to see a sorted list of details about my own processes:
alias myps='ps aux | grep ^$(whoami) | sort -k 2b'
An alias can also contain several commands—a mini-script, if you will:
alias sched='echo ""; cal; echo ""; calendar -w 3; echo ""'
Perhaps I have inspired you to experiment with aliases and discover some of the cool things you can do with them.
Be seeing you.

Tuesday, September 8, 2015

Editing Multiple Documents in vim

Using vim Buffers

When editing a file in vim it is stored and displayed in a buffer, something like a tab in a web browser window. You can open multiple files at once and easily move back and forth among the buffers to work with each file.

Opening Multiple Files in vim

To open multiple files in vim when you start your session, simply list them on the command line:
$> vim myfile1 myfile2 myfile3 myfile4
This will start vim with four buffers pre-loaded with your four files. You can use wildcards:
$> vim documents/myfile*
If you already have vim running and want to open another file, go to command mode and type:
:e documents/myfile5
You’ll be happy to know that you can use tab completion with the :e command. If you want to create a new file within vim, from command mode run :enew, which opens an empty buffer. Just remember to save the file with a name before you exit:
:sav documents/myfile6
To see the buffers open in your current session, from command mode run :buffers. Note that each buffer lists the file that it contains, and each has a unique number. To move to a different buffer, run :b# where the # sign represents the number you want to move to. For example, to begin editing the file in buffer number 3, run:
:b3
Keep in mind that before you can switch to a different buffer you must save or discard any changes made to the current one (unless you have enabled vim’s 'hidden' option). Use the :w command to save your file before changing buffers.
You can cut, copy and paste among buffers just as you would within a single file. Just remember that if you make a cut you will need to save the changes in the current file before switching to the target buffer.
When you no longer want to work on a file you can close its buffer with the “buffer delete” command, :bd. This command closes the file in the current buffer and deletes the buffer; the file is not removed. By adding a number you can delete a specific buffer. For example:
:bd3
This will close the file in buffer 3 and delete the buffer. And quitting vim will close all files and delete all buffers.
Be seeing you.

Thursday, September 3, 2015

PowerShell and Chocolatey Goodness

A movement has long been afoot to bring a useful package management system to Windows. If you’ve used apt or yum or yast or some similar tool in a Linux distribution, then you know how easy it can be to install new software to your computer with the right utility. Enter chocolatey, the PowerShell package manager for Windows.
Before starting a new class I make sure I load up my instructor computer with my favorite text editor and my chosen file manager:
choco install -y vim doublecmd
If I’m going to play videos for my class, I may also add a nice player:
choco install vlc
Of course, if I see that the machine I’m using has an older version of PowerShell I may want to update that:
choco install -y powershell4
What if I don’t know the name of the package I want? If I want to install the Angry IP network scan tool but know not what the package is called, I’ll ask chocolatey:
choco search angry
This is such a cool tool that rumor has it that even Microsoft is interested in using it. We’ll see. Perhaps managing software in Windows will soon be as easy as it is in Linux.
Be seeing you.

Monday, August 31, 2015

Brushing Up Your vim

vim is a quirky, spartan, powerful text editor derived from the standard UNIX tool vi. If you’ve decided to learn to use it, or want to refresh your memory as to how it works, here are a couple of suggestions.

Use It

Make vim your go-to editor for all your text processing needs. Use it in Linux. Run it on Windows. Make yourself use it regularly and it will become familiar over time.
P.S.: I write all my PowerShell scripts in the Windows version of vim. More on this later.

Play It

This lovely website provides you with hours of entertainment playing a cool game while you learn and internalize the basics of working in vim. As you master the skills needed to defeat the game you’re also becoming adept at handling this most interesting of text editors.
I plan to speak more of vim in the coming weeks, so stay tuned. Be seeing you.

Tuesday, August 25, 2015

xargs: Praise the Lord and Pass the Argument!

One interesting little UNIX utility is xargs. It takes something from standard input and passes it as an argument to another command. For example, if a command expects a user name, you can get that name from somewhere and feed it to the command through xargs:
whoami | xargs passwd
The silly example above is also superfluous, since running passwd with no arguments changes the current user’s password anyway, but it is a simple way to illustrate what xargs does. Some commands don’t look for a required argument in the standard input stream, so they won’t read it from a pipe. xargs takes whatever is in the pipe and sends it along as an argument to the command that follows; in this case, passwd.
I personally use xargs in my .xinitrc file to set a random background wallpaper on my desktop when I start X:
find ~/pictures/wallpapers -type f | sort -R | tail -1 | xargs feh --bg-fill
The find command locates all the files in my wallpapers directory. The sort command puts the file names in random order and the tail command grabs only the last one in the list. xargs then passes that to feh, an image display utility, which sets the randomly selected file as my wallpaper. And so X greets me with a surprise every time I go GUI.
Be seeing you.

Friday, August 21, 2015

PowerShell: Set Multiple Variables at Once

A list of variable names can be an l-value, that is, an expression that appears on the left side of an assignment operator. Did you know that you could do this…
PS C:\> $FirstName, $Initial, $LastName = 'John', 'Q', 'Public'
This statement will assign each variable the corresponding value on the right of the assignment operator. What if the number of variable names and values doesn’t match? Extra values will be stored as an array in the last variable, so in
PS C:\> $FirstName, $Initial, $LastName = 'John', 'Q', 'Public', 'Sr', 'Esq'
that $LastName variable will be an array containing Public, Sr, and Esq. If there aren’t enough values, the leftover variables are set to $null.
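For example, here is a quick sketch of the too-few-values case:
PS C:\> $FirstName, $Initial, $LastName = 'John', 'Q'
Here $FirstName gets 'John', $Initial gets 'Q', and $LastName is left holding $null.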
Oh, and if you want to set several variable names to the same value, use this:
PS C:\> $a = $b = $c = $d = 90
Have fun with your variables. Be seeing you.

Monday, August 17, 2015

PowerShell - Microsoft's Trident: Part 3, Scripting

PowerShell and Programming

I’ve been examining the three facets of PowerShell that make it such a powerful and useful tool in the hands of the Windows system admin. In the first post I talked about the shell and its ability to plumb the depths of .NET. In the second post I discussed what makes cmdlets so cool and why every kid should want one. In this post we round out our look with…

The Language of PowerShell

Yes, even though it is a shell environment and it has a passel of useful cmdlets, it is also a rich programming (well, scripting) language. If you’ve ever written code in Assembler, C, FORTRAN, BASIC, Java or the like, you can appreciate all the goodness that PowerShell as a language has to offer.

1. Functions

Code reuse is a term that gets tossed about in programming circles, but all that it means is to write and debug your code one time and use it over and over again. Not only does this prevent you from “re-inventing the wheel” but it makes your work easier to maintain. Once you know something works you just keep using it.
To that end PowerShell supports functions, blocks of PowerShell code that can be referred to by name. They look something like this:
function Say-Hello ([String] $Name = 'Anonymous') {
    Write-Host -NoNewLine 'Hello, '
    Write-Host -ForegroundColor Cyan -NoNewLine $Name
    Write-Host '.'
}
This creates a silly little function named Say-Hello that does just that, but in color. Once we have tweaked this function to work just the way we want it, all we have to do is call it:
Say-Hello Dave
Now we can drop this function call in anywhere we want to, as long as the function has been defined. Of course, if we want to use it in multiple scripts we really don’t want to have to copy and paste the function into all of them, so that brings us to…

2. Libraries

Libraries are files, PowerShell scripts, that generally contain nothing but definitions: variables and aliases sometimes, but mostly functions. When your script uses a library it can inherit all the names that were defined in that library. Libraries make it easy to write portable functions that may be used anywhere, and that’s what code reuse is all about.
PowerShell supports two different mechanisms for using libraries. You can load any PowerShell script into another using the dot operator to “source” the file, just like you can in UNIX shells:
. $HOME/myScriptsDir/myLibrary.ps1
Note the space after the dot; the dot itself is the command.
If the library has been saved as a module, that is, with a .psm1 extension, it can be managed through PowerShell’s module system and loaded with a cmdlet:
Import-Module $HOME/myScriptsDir/myLibrary.psm1
Modules and dot-sourced scripts each have a place, but both allow you to create libraries of useful functions and easily use them over and over again.
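As a sketch of what such a library might hold, a hypothetical myLibrary.psm1 could contain nothing but definitions along these lines (the Get-Greeting function and greet alias are invented here just for illustration):
# myLibrary.psm1 (hypothetical) - nothing but definitions
function Get-Greeting ([String] $Name = 'Anonymous') {
    "Hello, $Name."
}

Set-Alias -Name greet -Value Get-Greeting

# Export only what callers should see
Export-ModuleMember -Function Get-Greeting -Alias greet
Once imported, any script can call Get-Greeting (or greet) as though it had defined the function itself.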

3. Flow Control

Any programming or scripting language is arguably only as strong as its support for flow control, the ability to suddenly change direction at run time when the situation demands it. PowerShell has several tools for creating scripts that can respond to changing circumstances.
  • Comparison Operators
    PowerShell contains a fairly rich set of comparison operators and can perform all kinds of tests to see if certain conditions have been met.
  • Branching Constructs
    Sometimes you need a script to execute different instructions depending on some condition. PowerShell supports both if and switch keywords for branching to different parts of the script based on useful comparison results.
  • Iteration (a.k.a., Looping)
    PowerShell has five different looping constructs built in to the language, not to mention an iterative cmdlet. There are many ways to repeat tasks in PowerShell; a small sketch tying these three ideas together follows this list.
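Here is a minimal, contrived sketch of comparisons, branching, and iteration working together (the values themselves are arbitrary):
foreach ($i in 1..5) {                    # iteration over a range
    if ($i -lt 3) {                       # comparison operator -lt
        Write-Host "$i is small"
    } else {
        switch ($i) {                     # branching on specific values
            3       { Write-Host 'three exactly' }
            default { Write-Host "$i is big" }
        }
    }
}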

4. Exception Handling

One of the beauties of PowerShell is that it has not one but two mechanisms for exception handling. The trap keyword creates a script-wide error handler that can define a default behavior should any part of your script throw an exception. The try/catch construct deals with errors at very specific points in your script, giving you the flexibility to handle the same error in different ways depending on where in the execution it occurs.
These can each be used alone or combined together in the same script. Each has its strengths and addresses slightly different challenges for dealing with errors, but both give you the tools to ensure that exceptions are caught and dealt with in your PowerShell scripts.
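A minimal sketch of the two mechanisms side by side (the file path is made up; any terminating error would do):
trap {                                                # script-wide safety net
    Write-Warning "Unhandled error: $_"
    continue                                          # resume with the next statement
}

try {
    Get-Item 'C:\NoSuchFile.txt' -ErrorAction Stop    # force a terminating error
} catch {
    Write-Host "Handled right here: $($_.Exception.Message)"
}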

PowerShell is a feature-rich language with first-class tools that let you write genuinely useful administrative scripts. Of course, what would you expect from the company that brought BASIC to the Altair 8800?
Be seeing you.

Friday, August 7, 2015

Those Were (Not) the Days...

In my Linux and UNIX classes I introduce the students to a nifty little CLI tool called cal. Run from the command line, it displays a calendar of the current month with today highlighted. If you have a *nix terminal handy, fire it up and let’s take this little baby for a test drive.
Run cal and you get the calendar for the current month. To see the calendar for the entire year, run cal 2015.
You can also ask cal for a specific month and year. To find the date for Thanksgiving in the year 2020, run cal 11 2020. Pass cal the month and year of your birth and you’ll see what day of the week was graced by that momentous occasion.
But do you want to see something cool? Run cal 9 1752. You might notice something wrong here—September of 1752 is missing eleven days! Is cal broken? Hardly. In September of 1752 the British Empire (finally) adopted the Gregorian calendar to correct an error that was, by that time, eleven days long. What’s cool is that a little UNIX program already knew all that, so you don’t have to worry about accidentally scheduling a Tardis trip for London, September 8, 1752. That is, assuming the Tardis runs a version of UNIX…
So, there’s some trivia for the local pub. Perhaps you can use it to win a few drinks. And watch out for blue police call boxes.
Be seeing you.

Monday, August 3, 2015

PowerShell - Microsoft's Trident: Part 2, Cmdlets

The Second Point on the PowerShell Trident

In a previous post I began to look at the three aspects of PowerShell that make it so powerful. In that first article I examined the shell itself, and how it can be used to do some really cool things. Now for the second point…

Cmdlets (/kəˈmandlits/)

Cmdlets are wonderful things. As I stated in that last article, you can run just about any command in the PowerShell shell. But why would you? Between .NET (see that previous article) and the three-thousand-or-so cmdlets in PowerShell 3 and later, anything you need to do from the command line is taken care of.
Yes, initially there will be a bit of a learning curve, but it’s not nearly so bad as you might think, and the advantages far outweigh the inconvenience of learning something new. Besides, as a mentor of mine once said, “The day you stop learning is the day you start dying.” So let’s learn what makes cmdlets so cool.

1. They’re consistent.

If you’ve used any command line environment in the past (CP/M, UNIX, DOS), one source of endless frustration is the lack of consistency among all those commands. “Can you use a space here? Are parameters prefixed with a slash, a dash, two dashes, or nothing at all? Are they case sensitive?” The very fact that one must even ask such questions makes learning to use the command line difficult and contributes to its bad reputation.
But really, it’s foolish to expect anything else. Each of those commands was written by different people at different times to solve different problems. Their authors each had his own experiences and way of thinking that affected how he would address his particular need. It’s only natural that those commands bear little resemblance to one another.
But then came the GUI. When Apple introduced the Lisa in the early eighties it also published a style guide so that developers could write programs that conformed to the system’s look and feel. Microsoft did the same when Windows was released some years later. Building on that experience, Microsoft has provided recommendations for developers of new cmdlets so that users don’t have to face the arduous task of learning a new set of commands from scratch. In theory, once you learn how to use one cmdlet you can apply that knowledge to any cmdlet. In practice, it works really well.
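That consistency is easy to see from the shell itself: every cmdlet follows the Verb-Noun pattern, and both the approved verbs and the commands that use them are discoverable. A few quick probes:
Get-Verb                       # the list of approved verbs
Get-Command -Verb Get          # every command that retrieves something
Get-Command -Noun Service      # every command that works with services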

2. They’re well-documented.

Referring to those operating systems previously mentioned: help in CP/M was non-existent; in DOS it was terse and sometimes cryptic; and while the UNIX man pages are often useful and informative, they are as diverse and inconsistent as the commands themselves. When a user has to toil by the sweat of his brow to unearth a little information, it’s little wonder that he would run the other way rather than learn how a command works.
PowerShell, though, is actually helpful. The Get-Help cmdlet provides different levels of verbosity, from printing out basic syntax to highlighting every detail of each argument that a cmdlet will accept. It can even display working examples of how a cmdlet is used, which fits my learning style perfectly. You can also search through the help using the standard shell wildcard characters * and ?.
And it’s thorough. Not only can you find information on cmdlets, but concepts are documented as well. Try Get-Help about_arithmetic_operators to find out how PowerShell does math, or Get-Help about_if to review how PowerShell makes decisions. If you need to know something about PowerShell, like Prego spaghetti sauce, “It’s in there.”
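A few examples of what that looks like in practice:
Get-Help Get-Process                 # basic syntax and a description
Get-Help Get-Process -Examples       # just the working examples
Get-Help Get-Process -Full           # every detail of every parameter
Get-Help *service*                   # wildcard search across help topics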

3. They return objects.

One of the things that makes it difficult to use command line tools is that they go about their business, perhaps print something on the screen, and leave. This is fine if you happen to be hanging about to read the output, but makes things tough if you want to automate some task in a script. The problem is one of parsing.
Let’s say that I want to fetch a computer’s IP address from DNS and use that address in some other process. I can use:
nslookup www.centriq.com
Since I can see the result on screen, I can parse the output, find the relevant data, and do what I want to with it. Humans are good at this sort of thing; computers, not so much. Let’s try to fetch just the IP address in cmd.exe with some fancier command-fu:
set host=www.centriq.com
for /f "usebackq tokens=2" %%i in (`nslookup %host%`) do set ipaddr=%%i
Now I have a variable called ipaddr that contains the IPv4 address of host, right? Well, maybe. The problem is that nslookup changes its output depending on the information in the DNS record it queries. This command expects the data of interest to be on the last line of output, which will work much of the time, but not always.
Compare that to the following:
$name = 'www.centriq.com'    # $host is a reserved automatic variable, so use a different name
$ipaddr = resolve-dnsname $name | ? ip4address | select -expand ip4address
If an A record for the host exists, the $ipaddr variable will always contain an IPv4 address. No doubt about it, and no parsing necessary. The object that PowerShell creates from the Resolve-DnsName cmdlet has an IP4Address property that we can query directly and save in a variable.
In fact, all of those objects have useful properties and methods that we can use to customize the output of a cmdlet or even change the state of a computer. These objects can start services, kill processes, change passwords, and so on. They are infinitely more useful than a few lines of text output upon a computer screen.

If you’ve ever found the command line infuriating don’t let that stop you from learning how to get what you want from PowerShell. Its cmdlets were designed to address the very frustrations you’ve had with other shells. While learning this new system will require an investment of time and effort, the dividends are definitely worth it.
Stay tuned for part three of this series. Be seeing you.

Thursday, July 30, 2015

PowerShell(s) of Two

While writing out some IP addressing exercises I needed a quick reminder of some of the powers of 2 that I use a little less frequently. I just opened up a PowerShell console and made a little table like this:
foreach ($i in 0..32) {"{0,8} {1,-11}" -f $i, ([System.Math]::Pow(2, $i))}
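If you would rather have objects you can sort or export, a variant along these lines works too (using [bigint] so the same approach holds up for much larger exponents):
0..32 | ForEach-Object { [pscustomobject]@{ Power = $_; Value = [bigint]::Pow(2, $_) } }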
Hope you find this useful. Be seeing you.

Tuesday, July 28, 2015

PowerShell - Microsoft's Trident: Part 1, the Shell

The Three Tines of PowerShell

Like the powerful trident in the hands of Poseidon, PowerShell can be wielded for good or ill. It can create vast wellsprings of new possibilities for the savvy administrator and unleash destructive floods of complexity on the uninitiated. To some it solves many problems, to others it is the source of many headaches.
What I want to examine now, though, is the triune nature of this beast. Like Poseidon’s legendary weapon, PowerShell takes a three-pronged poke at the issues that commonly beset us in IT.

The Shell

PowerShell is a shell. That is, it’s an environment in which you can type commands to get your computer to do something. But it’s a really cool shell.

1. It gives you direct access to .NET.

I can’t stress this enough. .NET. From the command line. At your fingertips at all times, ready to do your bidding. If there is anything you must know about PowerShell, it is this. Almost anything that can be done in .NET can be done from the command line in PowerShell. Do you want to get data from a web server?
$request = [System.Net.WebRequest]::Create('http://www.mysite.com/index.html')
$request.GetResponse()
Do you want to place the cursor somewhere particular on the screen?
$Host.UI.RawUI.CursorPosition = @{x = 20; y = 47}
I’m talking about some cool stuff. For example, Microsoft failed to give us an equivalent to the DOS ‘pause’ command in the initial release of PowerShell, and their eventual addition of a comparable function is lame. That’s because PowerShell can’t respond the moment you “Press any key to continue…” It doesn’t know you’ve pressed a key until you hit Enter and send it the contents of the keyboard buffer. But .NET can access the keyboard at a lower level, bypassing the buffer:
function Wait-Key {
<#
.SYNOPSIS
Prompts the user to press a key and waits until the user does so.
.DESCRIPTION
The Wait-Key function simulates the DOS program pause. It gives a script an
opportunity to wait for the user to press a key before proceeding.
.PARAMETER Message
The prompt message to display when the script pauses.
#>
    [CmdletBinding()] param (
        [string] $Message = "Press any key to continue..."
    )

    $Message
    $Host.UI.RawUI.ReadKey("NoEcho, IncludeKeyDown") | Out-Null
}   # end function Wait-Key
With .NET and the MSDN website at your beck and call, your command-fu is limited only by your imagination.

2. It allows you to run any command.*

From PowerShell you can launch any graphical program. Open up a text file in notepad or an html report in your default web browser.
notepad .\seating_chart.txt
Invoke-Expression ./disk_report.html
You can run any command-line utility…
ipconfig /displaydns
… even old CLI tools whose arcane syntax can cause PowerShell to balk, as long as you use the --% (stop-parsing) operator between the command and its arguments.
ICACLS.EXE --% C:\TEST /GRANT USERS:(F)
* Okay, so not really ANY command. You can’t run commands that are built in to the old CMD.EXE or COMMAND.COM command shells. This makes sense, as those commands are not stand-alone tools but are BUILT-IN. That’s why there’s point number 3.

3. PowerShell has cool aliases for useful commands.

As stated in the caveat above, CMD.EXE had several commands integrated into the shell itself. There are no executables for DIR or CD or DEL, so you can’t get at those from PowerShell. However, Microsoft has borrowed a page from other shells and given us aliases. While not quite as useful as the UNIX implementation of the concept, they do allow us to create familiar or shortened names for common commands. So I can type DIR (or ls, for that matter) and get a directory listing. Granted, what I get is not the output of DIR but of the PowerShell Get-ChildItem cmdlet, so things are a little different, but not completely alien. I can easily create aliases of my own, if I like:
Set-Alias -Name unlink -Value Remove-Item
Set-Alias -Name goto -Value Set-Location
With aliases I can customize the shell to suit the way I prefer to work. To me, that makes PowerShell the best Windows shell around.
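And if you ever forget what an alias points to, the shell will tell you; a couple of quick examples:
Get-Alias dir                          # what does dir actually run?
Get-Alias -Definition Get-ChildItem    # every alias that points at Get-ChildItem
Aliases created at the prompt last only for the current session, so drop your Set-Alias lines into your $PROFILE script if you want them around every time.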

So stop using that old command interpreter. PowerShell is a much better shell than anything Microsoft has given us before, and with it we can do so much more than CMD will allow. And stay tuned for the other two prongs in the PowerShell trident.

Be seeing you.