If you've followed my previous article (http://www.linux.com/component/content/article/130-distributions/430530) you know what I'm trying to do. This project aims to create a home server; the requirements have already been described, and I have a few mods in mind (X10 automation, an SMS gateway, and a few more), but basically this is what I'm trying to do. As I've told you before, I've already decided to use an Atom motherboard, but there were other architectures involved in the decision process. Here are a few personal considerations, with pros and cons:
SX (STM) and derived architectures.
Pros: really, really cheap; really reduced power consumption
Cons: limited I/O; not enough I/O ports available, and few "high level" devices can be linked to them
Atmel ATMega architecture (and Arduino clones as well)
Pros: quite cheap; easy customization; reliable hardware; a good programming fit for my skills; close to the real world (digital I/O, analog I/O, measurements); RT OSes available for the "bigger" CPUs; really reduced power consumption
Cons: nice parts for controlling small appliances, but even with really good CPU processing power they're not suitable for a full-blown home server with a USB controller. They're capable of connecting to USB drives/peripherals, but they're too limited for some of the cool stuff, or if I'd like to have a strong OS on them (read: Linux)
Note: I'll probably use some custom-made PCBs with Atmel devices on them for controlling little things like lights, sensors, LCD panels and so on
MIPS devices
Pros: reduced power consumption; "high level" interfacing with buses and peripherals (compared with ATMega/SX/PIC chips), such as the mini-PCI bus and USB. RT OSes (real-time operating systems) are available, as well as compact Linux systems (generally with uClibc and BusyBox; also check OpenWRT). You can find these kinds of CPUs in a lot of small hardware devices (access points, DSL routers, a few switches, …), and at low prices ($20-50) inside ready-made appliances
Cons: USB is available and you may connect external drives (if you find the proper device), but heavy disk operations may hit this kind of CPU hard (if you don't have a DSP and a southbridge-like controller). Heavy CPU loads (P2P software) may be a problem for them as well. And building a custom project around these devices may cost you a lot of bucks
ARM devices
Pros: the same benefits as MIPS devices, and then some: ARM CPUs perform really well. I can't compare them to a Pentium-class device, but they have enough power for a lot of heavy disk operations. There's a lot of effort in the Linux ARM community, and there are plenty of patches and features for an autonomous device (uClibc + BusyBox + Linux is a must). There are even some ARM distros with good performance. Ready-made devices can be found in high-end routers, switches, access points and so on; if you like them, remember: OpenWRT (http://www.openwrt.org) could be your best friend. Power consumption is limited
Cons: it's not easy to find a ready-made device with a USB controller at an affordable price, and multiple USB connections may affect machine performance (without a DSP or a southbridge). I need a few USB devices connected to my machine (USB disks, webcams, an audio card, …), and handling all of them may be challenging. Heavy disk operations (file transfers or P2P, for example) may slow the machine a bit; ARM has enough CPU power to cope, but even if I could build or buy a custom-made motherboard, cost is a problem: they can run a lot of money, and it's not easy to come in under $100. ARM chips are reliable, have a lot of CPU power, and don't draw much (more than MIPS, by the way), but they're not so cheap on the industrial market. You may find something on the consumer market, but those motherboards are limited (one Ethernet port, one USB port and little more)
Note: this is my second choice; unfortunately these devices are too expensive for me
Geode/Intel Atom/Via C3 devices
As I've already told you in my last article, this is my current choice. Here's why:
Pros: they're not as cheap as an MCU (microcontroller units like Atmel ATMegas, Microchip PICs or STM SXs), but Intel and Via (especially Intel) are cutting prices a lot; in some cases they're even less expensive than MIPS- or ARM-based boards. I can find a CPU + motherboard for $50-70 and use it as a common PC (with limited CPU power compared to a Pentium-class device). You generally get a complete micro motherboard around the CPU, so pains like adding external devices are already solved; you have: an audio card, a bunch of USB ports (4-8), Ethernet (wireless or wired), RS232, a parallel port, … There's a southbridge on the motherboard, so no more trouble with heavy USB I/O operations (for a really small server, that is; not suitable for a medium-sized company). If you want an efficient system you can create an optimized OS, use a common Linux distro (if you don't have the skills or time), or install a full-blown MS Windows (if you want to live dangerously). Passive cooling is available on low-end systems
Cons: CPU-intensive tasks and I/O may raise your power draw a lot (even 40W or more), but if you tune the system a little and turn off what you don't use, you can save plenty (I'm now drawing 15W at idle and 20-25W when streaming video). This is my main gripe with these devices: I'd like ARM-card power draw with this class of CPU. I've found a lot of dual-core Atoms, but they have too much CPU power for my needs and draw a lot; in that case a CPU fan is required. I'd rather stay with a low-end system with passive cooling (just a heatsink)
“Pentium Class” devices (Pentium/P. Mobile/Core 1,2,3,4,.../Core i3,5,7,1000)
Pros: they're marvelous devices with huge CPU power; you can do everything you want with them... but I don't care. I just need a bare-bones machine and nothing more. Expansion boards are available: PCI, USB, Ethernet, lots of ports everywhere
Cons: active cooling is needed (at least one CPU fan). They draw a huge amount of power, too much for a 24/7 machine (at least as long as I'm the one paying the electricity bill), and even undervolted you can't cut many watts from them (desktop or laptop, it doesn't matter). Their cost is adequate to their processing power, but decidedly too classy compared with an Atom board. And you have cheap hardware with moving parts (CPU fan: $10); if it breaks you lose your entire system, so you can't leave it unattended for a long period.
Grandma's computer, or something cheap from your attic
Pros: it's cheap because you already own it (but please read the cons as well); you have a ready-made machine for an unbeatable price (free)
Cons: is it reliable? How old is it? What about spare parts? And again, if this device is a Pentium-class machine, you may need to read the cons for Pentium-based devices above. If you look at your electricity bill, you may find you could pay for a brand-new Atom board with the same amount of money spent on the extra watts these platforms burn
Consoles, consumer products (TiVo, …), DVD players
Pros: you may find something good in this category, but please consider these two constraints: power draw and price; you have to stay under 50W and $100. If you find something good please let me know, I'm interested in it as well... :-)
Cons: you may spend time hacking one, but how does that cost compare with a new, fully loaded Atom device? But again: if you've got a machine at 50W (or below) and a $100 price, please let me know
So after these considerations you know why I'm now using Atom devices. I've used hacked MIPS/ARM machines (before this replacement) but I've switched away from them due to CPU requirements; if you have a viable solution with Atom-class CPU power and ARM-class power draw, I'm really interested in it
You've read about my basic requirements and we've just finished the hardware selection; now it's time to take a look at the software selection: a small intro, a lot of testing, distro selection and customization will follow in the next article
Creating a custom live distro for a target appliance
Create a custom distro, choice of the right base
Create a custom distro: building the build machine
Create your first target image
Building a minimal image
In our last lesson here, I scratched the surface of the basics needed to start scripting in BASH. Today, we'll learn a few more things.
Here are some fundamentals that we need to cover real quick. It's nothing too complicated.
chmod - You've seen this command in action before, I'm sure. It's important when it comes to writing shell scripts because it allows the script to become executable. This is, of course, a necessity if we want the script to actually run and do something.
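For instance (myscript.sh here is just a made-up name), granting the execute permission to a script's owner looks like this:
$ chmod u+x myscript.sh
The u+x tells chmod to add (+) the execute bit (x) for the file's user/owner (u); a plain chmod +x would add it for everyone.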
#! - These two symbols, used in this particular order at the beginning of a script, specify which shell we want to use to run the script. We will primarily be talking about BASH here in these lessons, but let's say we wanted to write a script that used the tcsh shell instead. The first line of the script would be:
#!/bin/tcsh
The operating system would check that first line initially to determine what shell we intended when we wrote the script. It would then call up that shell and run the script. You'll see that a lot when programming in BASH. Now you know what it does.
# - This symbol signifies a comment. You've probably seen this many times already in configuration files like GRUB's menu.lst or your distribution's repository configuration file. It's not at all uncommon to find this symbol used in many different programming languages to tell the shell (and the user) that the text on the same line following the # is an informational comment of some sort. Comments are GOOD things when programming. Make lots of comments. They will help you, and maybe others who use your program or script down the road. Here's an example of a commented snippet of script:
#!/bin/bash
# ZIP-100 mknod script - 07222006 - Bruno
mknod /dev/hdb4 b 3 64
You can see the initial #! telling the OS what shell to use. In the next line, you can see the # symbol followed by some informational data. This was an actual script that I used to need to run at boot to get my ZIP-100 drive to be recognized by the OS. Newer kernels later on solved the issue by using udev, but that's a whole 'nother subject. I just wanted you to see what a comment looks like in a BASH script.
; and <newline> - These two devices, when used in the shell, denote a transition from one command to another. For example, if I had three commands called c1, c2, and c3, I could run them on the same line using the ; (semi-colon) to separate them. Or I could just put each of them on a new line. Here's what I mean...
$ c1;c2;c3 <ENTER to execute>
$ c1 <ENTER>
$ c2 <ENTER>
$ c3 <ENTER>
Pretty simple so far, huh? OK. Let's continue...
\ - The backslash is used to continue a command from one line to the next. Like this:
$ echo "She sells seashells
> by the seashore." <ENTER>
She sells seashells by the seashore.
Without the \, the line would be broken when displayed by the echo command's output. Like this:
$ echo "She sells seashells
> by the seashore." <ENTER>
She sells seashells
by the seashore.
See the difference? Cool! OK, then... just a few more for today. We don't want to get a headache by stuffing too much in there at one time.
| and & - These two are very cool, and you'll see them a lot. The first one is called a "pipe", and that's just what it does. It pipes output of one command into the input of another. The ampersand (&) symbol tells the shell to run a command in the background. I should briefly diverge here for a moment and mention foreground and background operations so you'll understand what they are.
A quick sidebar lesson...
You can run a command in the foreground (actively running before your eyes) or in the background (hidden from view but still going). The way to do this is to use the bg and fg command modifiers at the command line. Let's look at a couple of simple examples. Let's say Mary wants to run some big admin job in the background while she does other stuff in the foreground. It's pretty easy to do. First she starts big_job01 from the command line.
mary@workplace:~$ big_job01
With big_job01 running actively in the foreground, Mary doesn't have a cursor blinking at the command line. She can't do any other work because she has to watch big_job01's output. To change this, Mary will "background" big_job01 to bring back her command line cursor so she can do other things. She presses Ctrl+Z. What this combination of keystrokes will do for Mary is stop the process big_job01, giving this notification:
[1]+ Stopped big_job01
Now Mary will get her cursor back, and she can send big_job01 to the background and get it running again.
mary@workplace:~$ bg 1
She can check to see that it's in the background and running by listing the jobs.
mary@workplace:~$ jobs -l
[1]+ 4107 Running big_job01 &
Note the following & in the above line. That's telling Mary that big_job01 is running in the background. Cool, huh? Now Mary can go on about her other chores in the foreground as she normally would. When big_job01 finishes, the command line will give Mary a notification like this:
[1]+ Done big_job01
OK, there you have it... a bit about backgrounding and foregrounding operations within the shell. Now let's go back to what we were talking about before.
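So, back to the pipe. Here's a quick sketch of it in action (ps and grep should be available on just about any Linux system):
$ ps aux | grep bash
The ps command's output gets piped into grep, which filters it so you only see the lines mentioning bash.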
( and ) - The parentheses are used in shell scripting to group commands. The shell will actually create a copy of itself, called a sub-shell, for each group. Groups are seen by the shell as jobs, and a process ID is created for each group. Sub-shells have their own environment, meaning they can have their own variables with values different from other sub-shells or the main shell. A bit confusing, huh? Well, let's see a quick example and then wrap up this lesson so we can all go watch C.S.I. on TV.
$ (c1 ; c2) & c3 &
In the above example, the grouped commands c1 and c2 would execute sequentially in the background, while command c3 also executes in the background. Remember that we can background commands by using the ampersand (&) symbol and separate commands by using the semi-colon (;). The prompt would return immediately after hitting ENTER on the line above because both jobs would be running in the background. The shell would run the grouped commands c1 and c2 as one job and c3 as another job. Clear as chocolate pudding, huh? Don't worry. It'll all come together... I hope.
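If you want to see that "own environment" business with your own eyes, here's a tiny experiment (assuming a variable named X isn't already set in your main shell):
$ (X=5; echo $X)
5
$ echo $X

The sub-shell created by the parentheses sets and prints X, but the main shell prints nothing: the sub-shell's variable never leaked back out.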
Until next time....
*This article originally appeared on my Nocturnal Slacker v1.0 site at WordPress.com.
After reading the Opera browser's End User License Agreement (EULA), I came across a most unusual entry. It is somewhat disconcerting to read Opera's EULA where it states in §3.1, under "LICENSE RESTRICTIONS AND THIRD-PARTY SOFTWARE," that the user "shall not and shall not allow any third party to... reverse engineer, decompile, disassemble, or otherwise attempt to derive the source code for the Software." This does not sound at all like a FOSS browser, or one which should be embraced by the FOSS community.
According to the Open Source Initiative, the Opera browser does not qualify as an Open Source project, as it fails to meet their second criterion, namely, providing the source code for use by others. This same source-code access acts as a precondition for the Free Software Definition's freedom 1. In the words of the Free Software Foundation, freedom 1 encompasses "The freedom to study how the program works, and change it to make it do what you wish." Whether one adheres to a Free Software, Open Source, or Free Open Source Software philosophy, one can agree that Opera is not conducive to the spread of an open Internet.
In a marketplace of ideas, there is no reason to create a monopoly on information. In my opinion, Opera seeks to do this by way of their EULA. In terms of Free Open Source Software web browsers, I recommend Mozilla Firefox, GNU IceCat, Epiphany, as well as Uzbl (I will experiment with this FOSS browser and report on it soon).
I have found (I think) a way to migrate one EXISTING, bilingual WordPress install with lots of content to two independent copies of WordPress with the same top URL, that is, without subdomains like it.example.com or en.example.com... and without changing the URL of any already existing page. It seems to work, but please do let me know if I missed something. Here is the detailed description of this method. Feedback is very, very welcome; no, needed!
After a long period of absence from the site, I'm back with a lot of work done on custom embedded appliances and Linux custom live CD creation. This is the first article of a hopefully good series on topics related to custom CD/USB creation, but before getting into too much detail I'd like to answer a common question about it: "why go there?"
Here is the short version:
I'd like to have a minimal system without too many software dependencies
It has to be really small and run on tailored devices with memory and CPU constraints
It's not a desktop system; software and general requirements don't change, and no additional software installation is needed after the initial startup
Startup configuration and installation have to be quick (no user interaction)
I'm a geek and I tend to tweak/control everything I have around me
Because it's fun and it's nice to learn something
Here's the long version:
I'd like to learn more about live CD/USB creation
I'd like to have a small footprint installation
this appliance has to run without external user interaction, with no administration needed or available for it; it has to be reliable and survive power losses, HD malfunctions, general hardware failures, bad usage, tough environmental temperatures, biblical plagues, …
the appliance has to be replaceable quickly, something like a swap with an identical device. The Linux OS "boot media" has to be exchangeable and easily upgradeable by a common user without IT skills
It has to run completely from RAM (RAM disk and configuration); if something goes wrong I'd like to "fix" my problems with just a reboot (restore factory defaults)
No video/console available; this device has just a power cable and an Ethernet cable
low-spec CPU power is all that's needed; I just have to run a few apps on it (Samba and a few more), I don't need 1000 cores for basic things
Because it's fun and it's nice to learn something
So you may be wondering what kind of task I'd like to achieve. It's pretty simple: a home server/NAS with just a few custom constraints. Here they are:
It has to run from RAM (disk + memory) as previously reported; I don't expect to reconfigure it or change my needs often (after the first installation)
It has to be cheap, at least until I save up enough money for a custom-made ARM device; for the entire project I've spent $80 / €60
I pay my own electricity bill, so it has to draw as few watts as possible (a Pentium-class device is not my choice for a 24/7 machine)
It has to have enough CPU power to sustain a few CPU-intensive jobs (Asterisk, P2P programs, online UPnP video/audio decoding and streaming, Samba). I've used MIPS/ARM devices for this particular task, but even if they're more appealing when you talk about power usage, you run into trouble with CPU-intensive tasks (UPnP, video/audio streaming or torrents, for example)
I need USB ports
Hardware may fail, and heat and dust are big problems, so I'd like a machine without moving parts: I don't want to use a hard disk for the Linux installation, I don't want a fan over the CPU or in the power supply, and I'd like to rely on removable media so that if something goes wrong I can move my installation to a similar machine
The Linux root filesystem resides on a USB stick; it has to be read-only and load the system at boot. USB sticks have an "infinite life" (sort of) if you use them as read-only devices, and they're cheap and easily replaceable by an average user without IT skills
If you have a hardware failure you can recover everything with just these spare parts: a standard ATX power supply, a similar mobo+CPU, and a cheap USB stick
The Linux installation has to fit in just 100 MB, or maybe a bit more; I have a lot of 128 MB USB sticks to burn, so I've cut my distro down to around 100 MB
I'd like to attach an external USB drive with my own media (photos, films, music, …); this media is mostly just read by "home clients" (read: media stations connected to TVs), and sometimes someone (me) uploads content to it or backs up the whole thing
It has to be easily expandable: adding a webcam, a sound card, or Internet audio/video streaming with Icecast should be possible without too much trouble
It has to be close to the x86 architecture to be cheap. Believe me: I'm an embedded specialist and I'd like to stay with ARM/MIPS devices when they're available, and I like dealing with toolchains and cross-compiled installations, but my device has to be really cheap; it's hard to beat the price I've mentioned
I don't wanna waste too much time on it; it has to be ready as soon as possible
It's not just a NAS, it's a "home server": I have web apps, Internet audio/video streams, files, UPnP (and more...) on it. I've used a lot of NAS-like distros, but I'd like something more
For this particular project I've chosen to use an Atom CPU: single board, single core, desktop class, no particular peripherals (I only need the USB bus and the Ethernet card), no frills and damn cheap. There are plenty of them available everywhere; I've just chosen the cheapest one with USB ports and one Ethernet port. Overall cost: $80 (you may find something cheaper, around $50-60)
The Plan (Plan 9 from outer space)
This is just an intro for a series of articles describing my initial project; I've since expanded it into something else, but at least I'd like to describe why I'm doing this and what kind of results I'd like to get. When I started, I wasted a lot of time on research (hardware and software), but now I have a clearer path to my own personal appliances. Here's what I did:
Building a minimal image
Trying to use a RAM disk
Running the entire distro from a RAM disk (so the media can be read-only)
Trimming it to minimal (user) memory consumption; Xorg is not needed
Packaging and automating it (some scripting)
Building the installation media (my USB stick)
Follow all these chapters to read the entire post
I'd like to interact with you and hear your opinions as well; feel free to collaborate and discuss my choices, especially if your opinions are different from mine
Sometimes, we all have to look back down the path we've taken to more fully understand the path that lies ahead.
I started my Linux Adventure a bit late in life. I've always had an interest in all things technical. My career for the majority of my working life was as an electronics technician (component level repair). I had aspirations at one time of gaining an engineering degree in electronics; other paths were taken, though.
My first experience with computers and programming and such was in 1979-80, when I was attending tech school. We were trained on kit-made 8080A computers that were so primitive that they were programmed directly via octal machine code. The outputs of the machines were LED displays. This was ridiculously simplistic by today's computing standards.
I didn't choose computers as my field of endeavor, though. I was much more interested in RF (radio frequency) and audio electronics. That decision probably made for a much different life than others would have. I regret lacking the vision that others had regarding the future of the computer. Hindsight shows me that I would have enjoyed a career in that field very much.
I did not have much interaction with computers from the early 80s till about 2000, with the exception of some fun times with a Commodore SX64 and some intriguing games from a company called Infocom. Oh, I had some experience with PC-type systems at the workplace in the early 90s, but I never really developed a passion for them.
I remember in the late 90s sitting at my kitchen table reading computer store ads and dreaming about getting a system for my home. I never could justify the money for it, unfortunately. I had other priorities. In 2000, my brother bought a new system for himself and passed his old system on to me. It was a little Pentium I 90MHz machine. I set that guy up and signed on to a bunch of free dial-up ISPs and off I went...
My current passion with computers and operating systems came about directly from my initial experiences with the Internet in 2000. Yeah, I was a little late coming to the party. I've been trying to catch up ever since. I have come a long way, though. I've been building my own systems since 2001 or so. I crossed over from MS Windows to GNU/Linux in 2006. I'm currently reading and learning all I can about the GNU/Linux operating system.
I made some resolutions in the new year to learn more about specific Linux subjects; one in particular was shell scripting. I'm currently reading and experimenting with that now. I'm also publishing some basic lessons regarding this stuff as I go along. I learned a long time ago that a great method for learning is to learn by teaching. I have to research and learn something before I can write an article about it here.
Whatever I learn, I like to pass on to others. That is the beating heart of the GNU/Linux Open Source community. I have learned so much from the selfless acts of others in this community that I am driven to give something back. It is a mission of mine to educate, to assist, to entertain, and to ease the transition of new Linux Adventurers into this wonderful community. I am no guru when it comes to Linux, but I have gained enough knowledge to get around without getting lost too often. I have much to learn yet, but when I do learn it, I'll be here or somewhere helping others to learn it too.
A man called Bruno inspired me.
I would one day love to earn a living writing technical articles or books regarding GNU/Linux. I would like to be employed in some fashion that would allow me to use my knowledge in a GNU/Linux business environment, as a systems administrator or a technical writer for some company or other. Sadly, my late arrival to the party and the fact that I'm no young spring chicken anymore have hindered my ability to secure any positions like these. I'm totally self-taught and hold no industry certifications. I would love to attend school again to learn more in this field, but again... it doesn't always work out that way.
I'm not at all sure what my future path is going to be like. It's a day-to-day thing right now. However, I will always be learning; and with any luck, I'll always be here passing it along to you.
Thanks for reading/commenting.
*This article was originally published on my Nocturnal Slacker v1.0 site at WordPress.com
Before we get ahead of ourselves, it would probably be a good idea to go over some BASH basics.
Let's start at the beginning, OK? The Bourne Shell was born at AT&T's Bell Laboratories a while back. It was birthed by Steve Bourne, hence the name. The Bourne Again Shell (bash) is a direct descendent of the Bourne Shell (sh). It's very similar to the critter that Mr. Bourne brought to life back at Bell Labs.
There are different types of shells for different purposes. You can have a login shell, which is interactive; a non-login shell, which is also interactive; and a non-interactive shell, like the one used to execute a shell script.
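A quick, hedged aside: one way to peek at which kind of interactive shell you're sitting in is to echo the special parameter $0. A leading dash usually marks a login shell:
$ echo $0
-bash
A non-login interactive shell will typically just report bash (or the path to it).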
The characteristics of these shells are usually determined by a file in the user's /home directory called .bashrc. It's sort of like a configuration file, in that items placed within it will be faithfully followed by the shell when it is initialized. We've seen this already when we were pimping our BASH prompt in a previous article here. I'm over-simplifying, of course. There are other files, be they root- or user-oriented, that also affect BASH's behavior. However, we don't need to go into that at the moment. For now we just want to get a bit more familiar with BASH.
Symbols are an important part of BASH scripting. Some commonly used ones are (, ), [, ], and $. You can see them in action in this snippet of a script:
Cards=(2 3 4 5 6 7 8 9 10 J Q K A)
# Alternate method of initializing an array.
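Here's a small sketch of those symbols at work on that Cards array; the ${ } and [ ] pull individual elements back out:
$ Cards=(2 3 4 5 6 7 8 9 10 J Q K A)
$ echo ${Cards[0]}
2
$ echo ${Cards[12]}
A
$ echo ${#Cards[@]}
13
The first echo grabs element 0 (BASH arrays start counting at zero), the second grabs the last element, and the third counts how many elements the array holds.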
Standard Input, Standard Output, Standard Error... You may run across these terms while experimenting with and learning BASH. The first is usually provided by your keyboard; typing, in other words. The second is just the printed reply that BASH gives you in the command line. The third is the standard error notice that BASH will give you when it can't find something or follow a command you've entered. Here's an example of an error:
$ cat jimy_grades.txt
cat: jimy_grades.txt: No such file or directory
You list the contents of your working directory and find that you misspelled that file's name. It's actually jimmy_grades.txt. This is why the cat command could not find it, and BASH provided that standard error output for you. You can also redirect inputs and outputs in BASH using the |, <, and > symbols. We'll see redirection in action a bit more later on when we write a few simple scripts to do stuff.
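Just for a small taste of what's coming, here's redirection in two lines (reusing our hypothetical grades file):
$ cat jimmy_grades.txt > grades_copy.txt
$ cat jimy_grades.txt 2> errors.log
The first line sends cat's standard output into a file instead of onto your screen; the second sends the standard error (that No such file or directory complaint) into errors.log.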
What is a shell script? Well, it's a file that tells the shell (BASH, in our case) to do something. How does it do this? We “code” or write step-by-step commands and subcommands within the script, using variables and flow control, to make the shell follow these logical steps to achieve our goal.
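To make that concrete, here's about the smallest shell script imaginable (the file name hello.sh is made up, of course):
#!/bin/bash
# hello.sh - a tiny demonstration script
echo "Hello from my first script!"
Three lines: which shell to use, a comment, and a single command for the shell to carry out.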
You can write scripts to do just about anything on your systems. For example, you can write scripts that automate backup or updating chores, etc. You can write scripts to randomly change your desktop wallpaper. You can write scripts that do multiple chores for you; some become quite multitasking in nature, with more than just a single result. Scripts can be very useful tools.
Writing code, or scripting, is like learning any other human language. It takes time and effort to learn... and lots of practice. You have to practice what you learn or you'll lose it the same way folks lose their second languages if they don't speak or read/write them regularly.
We made a simple script in yesterday's lesson. We showed how Mary was able to write a small script at her workplace that simplified a chore she had to perform often during the day. We'll move ahead in the coming days to more complicated scripting, but nothing too complicated. The goal here is just to familiarize you with shell scripting, not to make you an expert. Only you can make you an expert.
Note: As always, please remember to click on the embedded links within my articles for definitions to unusual words, historical background, or links to supplemental informative sites, etc.
*Note: this article originally published on my Nocturnal Slacker v1.0 at Wordpress.com site.
OK, let's continue on with our lessons about the Linux Shell and the command line.
Today, we're going to learn how to write a script that you can run from the command line that will actually do stuff. It's a very simple script, so don't expect anything spectacular. I was wondering what I could use as an example script for this lesson when I remembered a question someone at LinuxQuestions.org asked the other day about how to tell which users are currently logged into a Linux system.
Let's make a little script to give us that information. Let's say that Mary wants to see who is logged in on the Linux system that she maintains for her company. She could easily do this with one simple command, who, from the command line:
mary@workplace:~$ who
bill pts/4 April 10 09:12
james pts/5 April 10 09:30
marjory pts/11 April 10 10:02
ken pts/16 April 10 10:31
That was pretty easy, right? What if Mary wanted to check this periodically during the day? Well, she could write a small shell script with the proper permissions that she could execute any time she wanted from the command line or the terminal within her graphical user interface. Let's find out how she might do that.
Mary would first open the vim editor with the file name that she plans to use for this script:
mary@workplace:~$ vim onsys
Vim would faithfully carry out the command and open up, ready to edit the file onsys. At which point, Mary would enter these lines to create her script:
# custom who script - mary - 10 Apr 11
date

echo "Users currently logged into the system:"
who
# end of script
Here's what Mary actually did... she made a comment in the first line, beginning with the character #, so the shell knows to ignore that line. In the next line, she calls the command "date" so the script will output the date along with whatever else she asks it to do. In line 4, she uses the built-in echo command to tell the shell to display whatever follows the echo command. In this case, Mary wants the script to display Users currently logged into the system: when she runs it. The next command that Mary enters into this little script is the who command. And lastly comes her notation that the script has ended.
Now, to make this script work, Mary needs to make it executable. In other words, she has to change the file's permissions to allow herself (and others, if she wants) to execute (run) the script onsys. She will do that this way:
mary@workplace:~$ chmod +x onsys
If she now listed the permissions for this file, they would look like this:
mary@workplace:~$ ls -l onsys
-rwxr-xr-x 1 mary users 94 Apr 10 15:21 onsys
What this means is that everyone on the system can read and execute this script, but only mary can write to (change) it. OK, so Mary wants to test her script now, so she just types the script's name at the command line (assuming the script is in her working directory):
mary@workplace:~$ ./onsys
Sun Apr 10 15:48:09 EDT 2011
Users currently logged into the system:
bill pts/4 April 10 09:12
james pts/5 April 10 09:30
marjory pts/11 April 10 10:02
ken pts/16 April 10 10:31
And so, there you have it. Mary wrote a simple script using a few simple commands to perform a function that she repeats every day. That's what scripts do. They perform tasks for us at our bidding. Of course, scripts can get much more complicated; so complicated, in fact, as to be considered applications in their own right. A script is just a series of coded commands and variables designed to do something, which is basically what an application is, too.
Start yourself a directory in your /home directory where you can create and play around with scripting and scripts. It can really be fun. You can "automate" a lot of everyday stuff that you do on your computer using custom-made scripts. Yes, you can do it. It ain't rocket science. It is similar to the technology used to program rockets, though.
*This article appeared originally on my Nocturnal Slacker v1.0 site at WordPress.com
In the next couple lessons here at Nocturnal Slacker v1.0 we'll be discussing the Bourne Again Shell (BASH) in a bit more detail.
With that in mind, though, how 'bout we play around with a neat trick that will customize your BASH prompt whenever you go to the command line? Sound like fun? OK, here we go...
There's a file in your /home/<your_username> directory called .bashrc. You'll need to use the -a (all) option when listing (ls) that directory because the preceding . means it's a hidden file. So, let's see how joe would do this...
joe@mysystem:~$ ls -a .bashrc
There we go. So Joe now knows that he does have a .bashrc file in his /home directory. Now, let's say that Joe wants a cool BASH prompt like mine, one that shows his username, his operating system, and his working directory. To do this, Joe is going to have to add a little data to his .bashrc file. He'll edit the file using the VIM command line editor. Remember that from a previous lesson? Here goes Joe...
joe@mysystem:~$ vim .bashrc
And poof! Up comes VIM with the .bashrc file ready to be edited. It's almost like magic, huh?
# custom prompt
PS1="u | archlinuxw:$ "
Joe is going to edit the .bashrc file to add the little bit that you see above. What that will do is tell BASH to start up the command line with a prompt that shows Joe's username (\u) and his working directory (\w), with his distribution (archlinux) in between, separated by the | (pipe) character. After saving the file, this is what Joe's command line prompt will look like once he starts his next BASH session:
joe | archlinux~:$
Pretty neato, huh? Remember the ~ denotes the home directory. If Joe was to change directories (cd) to say /boot/grub, then his prompt would now change too to show the new working directory.
joe | archlinux~:$ cd /boot/grub
joe | archlinux/boot/grub:$
Again, pretty neato, huh? This way, Joe will always know what directory he's in at any given time. That will help prevent Joe from making any command line boo-boos.
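If Joe wants to dress things up even further, BASH understands a few more escape sequences, such as \h for the hostname and \t for the current time. A quick sketch:
PS1="\u@\h \t:\w\$ "
That would produce a prompt along the lines of joe@mysystem 15:42:10:~$ (handy if you like a timestamp on every command).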
That's it, guys and gals. Pretty simple stuff. See? You're not nearly as scared of that mean ol' command line as you used to be, huh? Use your imagination when customizing your prompt. I've seen some pretty cool ones out there. Here's the one my friend securitybreach, a fellow Admin from Scot's Newsletter Forums - Bruno's All Things Linux, uses:
╔═ comhack@Cerberus 02:27 PM
How's that for spiffy? For further reading see HERE and HERE.
Have a great weekend wherever you may be!
*This article appeared previously at my Nocturnal Slacker v1.0 site at WordPress.com
Let's continue now with our series of articles introducing new Linux Adventurers to the Linux shell, the command line, and command line editing.
Sometimes, there is a need to stuff a bunch of files into an archive and compress it. This is often the case when one wants to send a bunch of large files to someone else via email or host them on a server. This practice came about in the old days of computing, when disk space was at a premium. Even though disk space is now huge and cheap, bandwidth can still be tight in some cases, so archiving and compressing files is still an efficient way to store/transport them on your system or move them across the Internet.
Let's say our old pal Joe has three files that he wants to shove into an archive for storage on his own system. His files are text1.txt, joemama.txt, and umeume.txt. Here's how Joe would archive these three files using the tar command. Tar stands for tape archive. It's been around since Noah ran Slackware on the Ark.
joe@mysystem:~$ tar -cvf joesstuff.tar text1.txt joemama.txt umeume.txt
Here's what the above command does... it takes the three files listed at the end of the command and stuffs them (-c = create, -v = verbose, -f = use the named archive file) into an archive file called joesstuff.tar.
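By the way, if Joe ever wants to double-check what went into an archive without unpacking it, the -t (list) function will show him:
$ tar -tf joesstuff.tar
text1.txt
joemama.txt
umeume.txt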
OK, so far? Alright then... what if Joe's three files were pretty big and he wants to send them to his buddies Bill, Mary, and Tom via his online server? Well, in that case Joe would want to compress his new archive using a compression application. Remember the old WinZip program in Windows? Well, Linux has some similar applications to squish files into smaller packages. Two that we'll talk about here are bzip2 and gzip.
So, Joe wants to compress his new archive file with bzip2 for starters. Here's how he does it:
joe@mysystem:~$ bzip2 -v joesstuff.tar
Now when Joe lists (ls) the contents of the directory he's working in, he will find that his original joesstuff.tar is now called joesstuff.tar.bz2. Cool, huh? The size will be smaller, too, because the archive has been compressed. Joe could now upload the file to his online server to share with his friends.
Another compression method that Joe could use is gzip. Here's how Joe would do that:
joe@mysystem:~$ gzip -v joesstuff.tar
If Joe used gzip to compress his archive, it would now be called joesstuff.tar.gz.
Now let's say that Mary downloaded joesstuff.tar.bz2 to her computer and wanted to actually see/read the files it contains. She would have to decompress the archive first. She would do this from her command line with this command:
mary&larry@home:~$ bunzip2 joesstuff.tar.bz2
This command would "unzip" (decompress) the package leaving Mary with Joe's original archive --> joesstuff.tar. If Joe had put the archive on his server as a gzip compressed archive, Mary would use this command instead to unzip it:
mary&larry@home:~$ gunzip joesstuff.tar.gz
Either way, she'd still end up with Joe's original archive --> joesstuff.tar. Now though, she will need to unpack the archive in order to read the individual files. She can unpack the tar archive this way:
mary&larry@home:~$ tar -xvf joesstuff.tar
Now, when Mary lists (ls) the contents of her working directory, she would see the original joesstuff.tar and the three files that she just extracted (-x) from the archive.
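One last aside: GNU tar can also handle the compression and decompression itself in a single step, using -j for bzip2 or -z for gzip. So Joe could have created his compressed archive in one go:
$ tar -cjvf joesstuff.tar.bz2 text1.txt joemama.txt umeume.txt
...and Mary could have unpacked and decompressed it all at once with:
$ tar -xjvf joesstuff.tar.bz2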
That's it, folks. Those are the basics of archiving and compressing/decompressing files in Linux using the command line tools tar, bzip2, and gzip. Be sure to refer to the man pages for these commands to get some more good information. Remember how to get to the man pages? You can either read them online at places like linuxmanpages.com or you can access them right there on your own computer using the command line by entering:
you@your_computer:~$ man tar
TAR(1)                    BSD General Commands Manual                   TAR(1)

NAME
     tar — The GNU version of the tar archiving utility

SYNOPSIS
     tar [-] A --catenate --concatenate | c --create | d --diff --compare |
         --delete | r --append | t --list | --test-label | u --update | x
         --extract --get [options] [pathname ...]

DESCRIPTION
     Tar stores and extracts files from a tape or disk archive.

     The first argument to tar should be a function; either one of the letters
     Acdrtux, or one of the long function names. A function letter need not
     be prefixed with ``-'', and may be combined with other single-letter
     options. A long function name must be prefixed with --...
I hope you've learned something from this lesson.
*This article originally appeared on my Nocturnal Slacker v1.0 site at WordPress.com