Handy Bash/ZSH Aliases & Functions
The more you work in the terminal, the faster you start to build a repetitive workflow. While basic tasks and actions are fairly simple and take no time at all, others might require multiple steps, and that does tend to get rather monotonous after a while. Alternatively, you might want to string together a list of trivial commands into one giant timesaver! I’ve slowly compiled a handy little list of aliases and functions to help reduce some of that monotony! 🔨
So, without further ado, here’s a collection of simple aliases and functions that I have slowly begun to use over time. If you would like to use any of these yourself, simply append them to the end of your ~/.bashrc file (or ~/.kshrc for ksh users, or ~/.zshrc for zsh users).
- Make a directory and change to it
- Concise oneline git log graph
- Set git remote as branch upstream and push branch
- Reload .zshrc configuration
- Clone a git repo and cd into it in one command
- Use wget to recursively download a website for offline/mirroring use
Make a directory and change to it⌗
Does exactly what it says; saves some typing on a commonly repeated task.
mkcd () {
    mkdir -p "$1" && cd "$1"
}
Note: a number of defects in the above function were pointed out thanks to @Gilles. A more foolproof, albeit messier, version can be found below:
mkcd () {
    case "$1" in
        */..|*/../) cd -- "$1";;
        /*/../*) (cd "${1%/../*}/.." && mkdir -p "./${1##*/../}") && cd -- "$1";;
        /*) mkdir -p "$1" && cd "$1";;
        */../*) (cd "./${1%/../*}/.." && mkdir -p "./${1##*/../}") && cd "./$1";;
        ../*) (cd .. && mkdir -p "${1#.}") && cd "$1";;
        *) mkdir -p "./$1" && cd "./$1";;
    esac
}
Also, if you are a zsh user and use Oh-My-Zsh, you can simply use the built-in command take, which does essentially the same as the function above.
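A quick usage sketch (the path below is just an example):

```shell
# The simple mkcd from above
mkcd () {
    mkdir -p "$1" && cd "$1"
}

# mkdir -p creates any missing intermediate directories, so one
# call both builds a nested path and lands you inside it.
mkcd /tmp/mkcd-demo/nested/dir
pwd    # /tmp/mkcd-demo/nested/dir
```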
Concise oneline git log graph⌗
Conserves the amount of screen space taken up by git log. This shows the timeline/graph of commits on the current branch; each commit takes up one line and SHAs are abbreviated.
alias glgo="git log --graph --oneline"
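If you also want to see every branch and its labels in the graph, a variant using git’s `--decorate` and `--all` flags works nicely (the name glgol here is just a suggestion):

```shell
# Like glgo, but shows all refs and annotates commits with branch/tag names
alias glgol="git log --graph --oneline --decorate --all"
```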
Set git remote as branch upstream and push branch⌗
In the past, a frequent part of my usual workflow involved creating a new feature branch, committing some changes and then pushing those changes upstream.
But being the forgetful sort, I often forget that I haven’t created an upstream branch before initiating a push, and consequently get greeted with the infamous git message:
fatal: The current branch feat/my-branch has no upstream branch.
To push the current branch and set the remote as upstream, use
git push --set-upstream origin feat/my-branch
No big deal, right? After all, it’s only a few seconds lost copying the command and running it again. Regardless, I still found it rather irritating, and that same irritation spawned the tiny shell function below!
In short, this first checks whether an argument has been passed in as a remote. If not, it defaults the remote to origin, then gets the current branch and pushes it to the remote branch of the same name.
function gupush {
    if [[ -z "$1" ]]; then
        echo "No remote branch specified, defaulting to origin..."
        remote_name="origin"
    else
        remote_name="$1"
    fi
    branch_name=$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)
    git push --set-upstream "$remote_name" "$branch_name"
}
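To see it in action without touching a real remote, you can point origin at a local bare repository. Everything below — the paths, user details, and the branch name — is made up for the demo:

```shell
# gupush, with a -z check on the argument so a missing remote falls back to origin
gupush () {
    if [[ -z "$1" ]]; then
        echo "No remote branch specified, defaulting to origin..."
        remote_name="origin"
    else
        remote_name="$1"
    fi
    branch_name=$(git rev-parse --symbolic-full-name --abbrev-ref HEAD)
    git push --set-upstream "$remote_name" "$branch_name"
}

# A throwaway bare "remote" and a working repo that points at it
git init -q --bare /tmp/demo-remote.git
git init -q /tmp/demo-work
cd /tmp/demo-work
git remote add origin /tmp/demo-remote.git
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"
git checkout -q -b feat/my-branch

# Pushes feat/my-branch and sets origin/feat/my-branch as its upstream
gupush
```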
Reload .zshrc configuration⌗
A simple alias for sourcing your .zshrc. Particularly useful after making changes to it, when you wish to apply them to the current shell straight away.
alias reload="source ~/.zshrc"
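One caveat: sourcing re-runs the file on top of your current shell state, so anything you’ve removed from .zshrc will stick around until you open a new shell. An alternative approach is to replace the running shell with a fresh one:

```shell
# exec replaces the current shell process with a brand-new zsh,
# so the config is read from a clean slate
alias reload="exec zsh"
```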
Clone a git repo and cd into it in one command⌗
Changing into the directory of a git repository you’ve just cloned is a frequent occurrence; this helps save a few keystrokes.
gclone() {
    git clone "$1" && cd "$(basename "$1" .git)"
}
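Since git clone also accepts local paths, the function is easy to sanity-check without a network (the repo path below is invented for the demo):

```shell
# gclone as above
gclone() {
    git clone "$1" && cd "$(basename "$1" .git)"
}

# Create a local bare repo to clone from
git init -q --bare /tmp/demo-src.git
cd /tmp

# basename strips a trailing ".git", so this clones and cd's into ./demo-src
gclone /tmp/demo-src.git
```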
Use wget to recursively download a website for offline/mirroring use⌗
If you ever need to download an entire website, perhaps for offline viewing, wget can do the job.
alias downloadsite='wget -r -nH --no-parent -R "index.html*"'
Usage
downloadsite http://example.com/
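For reference, here’s what each flag in the alias does:

```shell
#   -r                recurse through linked pages
#   -nH               don't create a hostname-named top-level directory
#   --no-parent       never ascend above the starting directory
#   -R "index.html*"  reject files matching the pattern (wget's
#                     auto-generated directory listing pages)
alias downloadsite='wget -r -nH --no-parent -R "index.html*"'
```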
robots.txt⌗
The alias above may fail if the site you are downloading forbids crawling in its robots.txt file.
Wget allows you to ignore robots exclusions and thereby crawl material otherwise blocked by a robots.txt file. However, crawling a site whose robots file explicitly forbids it is considered, at the very least, highly impolite, and at worst might even get you into legal trouble.
If you understand and still wish to go ahead regardless, then adding -e robots=off
to the alias above should do the trick.
Consider adding some throttling when ignoring robot exclusions. The alias below both limits the download rate and waits 10 seconds between requests:
alias mustdownloadsite='wget -r -nH --no-parent -e robots=off -R "index.html*" --wait=10 --limit-rate=20K'