I have found (I think) a way to migrate an EXISTING bilingual WordPress install with lots of content to two independent copies of WordPress under the same top-level URL, that is, without subdomains like it.example.com and en.example.com, and without changing the URL of any already existing page. It seems to work, but please do let me know if I missed something. Here is the detailed description of this method. Feedback is very, very welcome; no, needed!
After a long period of absence from the site, I'm back with a lot of work done on custom embedded appliances and custom Linux live CD creation. This is the first article in a hopefully good series of topics related to custom CD/USB creation, but before going into too much detail on it, I'd like to answer a common question about this: “why do you go there?”
Here is the short version:
I'd like to have a minimal system without too many software dependencies
It has to be really small and run on tailored devices with memory and CPU constraints
It's not a desktop system; software and general requirements don't change, and no additional software installation is needed after initial startup
Startup configuration and installation have to be quick (no user interaction)
I'm a geek and I tend to tinker with and control everything I have around me
Because it's fun and it's nice to learn something
Here's the long version:
I'd like to learn more from live CD/USB creation
I'd like to have a small footprint installation
this appliance has to run without external user interaction, with no administration needed or available for it; it has to be reliable and survive power losses, HD malfunctions, general HW failures, bad usage, tough environmental temperatures, biblical plagues, …
the appliance has to be replaceable quickly, with something like a swap to an identical device. The Linux OS “boot media” has to be exchangeable and easily upgradeable by a common user without IT skills
It has to run completely from RAM (RAM disk and configuration); if something goes wrong I'd like to “fix” my problems with just a reboot (restore factory defaults)
No video/console available; this device has only a power cable and an ethernet cable
Low CPU power needed; I just need to run a few apps on it (Samba and a few more), and I don't need 1000 cores for basic things
Because it's fun and it's nice to learn something
So you may be wondering what kind of task I'd like to achieve. It's pretty simple: a home server NAS with just a few custom constraints. Here they are:
It has to run from RAM (disk+memory) as previously mentioned; I don't expect to reconfigure or modify it often (after the first installation)
It has to be cheap, at least until I have enough money for a custom-made ARM device; for the entire project I've spent $80 / €60
I pay my electricity bill, so it has to draw as few watts as possible (a Pentium-class device is not my choice for a 24/7 machine)
It has to have enough CPU power to sustain a few CPU-intensive jobs (Asterisk, P2P programs, online UPnP video/audio decoding and streaming, Samba). I've used MIPS/ARM devices for this particular task, but even if they're more appealing when you talk about power usage, you have trouble with CPU-intensive tasks (UPnP, video/audio streaming, or torrents, for example)
I need USB ports
Hardware may fail, and heat and dust are big problems. I'd like to have a machine without moving parts: I don't want to use an HD for the Linux installation, and I don't want a fan over the CPU or the power supply. I'd like to rely on removable media, so if something goes wrong I can move my installation to a similar machine
The Linux root system resides on a USB stick; it has to be read-only and load the system at boot. USB sticks have an “infinite life” (sort of) if you use them as read-only devices, and they're cheap and easily replaceable by an average user without IT skills
If you have a HW failure, you can recover everything with just these swap parts: a standard ATX power supply, a similar mobo+CPU, and a cheap USB stick
The Linux installation fits in just 100MB, or maybe a bit more; I have a lot of 128MB USB sticks to use up, so I've trimmed my distro down to around 100MB
I'd like to attach an external USB drive with my own media (photos, films, music, …). This media is mostly read by “home clients” (read: media stations connected to TVs); sometimes someone (me) uploads content to it or does full backups of it
It has to be easily expandable: adding a webcam, a sound card, or Internet audio/video streaming with Icecast should be possible without too much trouble
It has to be close to the x86 architecture to be cheap. Believe me: I'm an embedded specialist and I'd like to stay with ARM/MIPS devices when they're available, and I like dealing with toolchains and cross-compiled installations, but my device has to be really cheap, and it's hard to beat the price I've mentioned
I don't want to waste too much time on it; it has to be ready as soon as possible
It's not just a NAS, it's a “home server”: I have web apps, Internet audio/video streams, files, UPnP (and more...) on it. I've used a lot of NAS-like distros, but I'd like something more
For this particular project I've chosen an Atom CPU: single board, single core, desktop class, no particular peripherals (I only need the USB bus and the ethernet card), no frills, and damn cheap. There are plenty of them available everywhere; I just chose the cheapest one with USB ports and 1 Ethernet port. Overall cost: $80 (you may find something cheaper in the $50-60 range)
The Plan (Plan 9 from outer space)
This is just an intro to a series of articles describing my initial project; I've since expanded it into something else, but at least I'd like to describe why I'm doing this and what kind of results I'd like to have. When I started, I wasted a lot of time on research (hw/sw), but now I have a clearer path to my own personal appliances. Here's what I did:
Building a minimal image
Trying out a RAM disk
Running the entire distro from a RAM disk (so the boot media stays read-only)
Trimming it down to minimal memory (user memory) consumption; XOrg is not needed
Packaging and automating it (some scripting)
Building the installation media (my USB stick)
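The steps above can be sketched roughly in shell. This is only my illustration of the packaging idea, not the actual build scripts (the paths and the "appliance-0.1" name are invented); a real build would use mksquashfs to produce the read-only image, with plain tar+gzip standing in for it here:

```shell
#!/bin/sh
# Sketch of the packaging step: stage a minimal root tree, then pack it
# into one compressed image that an initramfs could unpack into a tmpfs
# (RAM disk) at boot. tar+gzip stands in for mksquashfs in this sketch.
set -e
ROOT=$(mktemp -d)                      # staging area for the trimmed distro
mkdir -p "$ROOT/bin" "$ROOT/etc"
echo "appliance-0.1" > "$ROOT/etc/version"
# ... copy the trimmed binaries and configs into $ROOT here ...
tar -C "$ROOT" -czf /tmp/rootfs.tar.gz .
ls -lh /tmp/rootfs.tar.gz              # the single image the USB stick carries
```

At boot, an init script would do the reverse: mount a tmpfs, unpack the image into it, and pivot to it as the root filesystem.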
Follow all these chapters to read the entire post
I'd like to interact with you and hear your opinions as well; feel free to collaborate and discuss my views, especially if yours differ from mine.
Sometimes, we all have to look back down the path we've taken to more fully understand the path that lies ahead.
I started my Linux Adventure a bit late in life. I've always had an interest in all things technical. My career for the majority of my working life was as an electronics technician (component level repair). I had aspirations at one time of gaining an engineering degree in electronics; other paths were taken, though.
My first experience with computers and programming and such was in 1979-80, when I was attending tech school. We were trained on kit-built 8080A computers that were so primitive that they were programmed directly in octal machine code. The outputs of the machines were LED displays. This was ridiculously simplistic by today's computing standards.
I didn't choose computers as my field of endeavor, though. I was much more interested in RF (radio frequency) and audio electronics. That decision probably made for a much different life than others would have. I regret lacking the vision that others had regarding the future of the computer. Hindsight shows me that I would have enjoyed a career in that field very much.
I did not have much interaction with computers from the early 80s till about 2000, with the exception of some fun times with a Commodore SX64 and some intriguing games from a company called Infocom. Oh, I had some experience with PC-type systems at the workplace in the early 90s, but I never really developed a passion for them.
I remember in the late 90s sitting at my kitchen table reading computer store ads and dreaming about getting a system for my home. I never could justify the money for it, unfortunately. I had other priorities. In 2000, my brother bought a new system for himself and passed his old system on to me. It was a little Pentium I 90MHz machine. I set that guy up, signed on to a bunch of free dial-up ISPs, and off I went...
My current passion with computers and operating systems came about directly from my initial experiences with the Internet in 2000. Yeah, I was a little late coming to the party. I've been trying to catch up ever since. I have come a long way, though. I've been building my own systems since 2001 or so. I crossed over from MS Windows to GNU/Linux in 2006. I'm currently reading and learning all I can about the GNU/Linux operating system.
I made some resolutions in the new year to learn more about specific Linux subjects; one in particular was shell scripting. I'm currently reading and experimenting with that now. I'm also publishing some basic lessons regarding this stuff as I go along. I learned a long time ago that a great method for learning is to learn by teaching. I have to research and learn something before I can write an article about it here.
Whatever I learn, I like to pass on to others. That is the beating heart of the GNU/Linux Open Source community. I have learned so much from the selfless acts of others in this community that I am driven to give something back. It is a mission of mine to educate, to assist, to entertain, and to ease the transition of new Linux Adventurers into this wonderful community. I am no guru when it comes to Linux, but I have gained enough knowledge to get around without getting lost too often. I have much to learn yet, but when I do learn it, I'll be here or somewhere helping others to learn it too.
A man called Bruno inspired me.
I would one day love to earn a living writing technical articles or books regarding GNU/Linux. I would like to be employed in some fashion that would allow me to use my knowledge in a GNU/Linux business environment, as a systems administrator or a technical writer for some company or other. Sadly, my late arrival to the party, and the fact that I'm no young spring chicken anymore, have hindered my ability to secure any positions like these. I'm totally self-taught and hold no industry certifications. I would love to attend school again to learn more in this field, but again... it doesn't always work out that way.
I'm not at all sure what my future path is going to be like. It's a day-to-day thing right now. However, I will always be learning; and with any luck, I'll always be here passing it along to you.
Thanks for reading/commenting.
*This article was originally published on my Nocturnal Slacker v1.0 site at WordPress.com
Before we get ahead of ourselves, it would probably be a good idea to go over some BASH basics.
Let's start at the beginning, OK? The Bourne Shell was born at AT&T's Bell Laboratories a while back. It was birthed by Steve Bourne, hence the name. The Bourne Again Shell (bash) is a direct descendant of the Bourne Shell (sh). It's very similar to the critter that Mr. Bourne brought to life back at Bell Labs.
There are different types of shells for different purposes. You can have a login shell, which is interactive; a non-login shell, which is also interactive; and a non-interactive shell, like the one used to execute a shell script.
The characteristics of these shells are usually determined by a file in the user's /home directory called .bashrc. It's sort of like a configuration file, in that items placed within it will be faithfully followed by the shell when it is initialized. We've seen this already when we were pimping our BASH prompt in a previous article here. I'm over-simplifying, of course. There are other files, be they root or user oriented, that also affect BASH's behavior. However, we don't need to go into that at the moment. For now, we just want to get a bit more familiar with BASH.
Symbols are an important part of BASH scripting. Some commonly used ones are (, ), [, ], and $. You can see them in action in this snippet of a script:
Cards=(2 3 4 5 6 7 8 9 10 J Q K A)
# Alternate method of initializing an array.
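Here's a tiny runnable illustration of my own (not from any particular script) showing those symbols doing real work together:

```shell
#!/bin/bash
Cards=(2 3 4 5 6 7 8 9 10 J Q K A)    # ( ) build an array in one go
echo "Deck holds ${#Cards[@]} ranks"  # $ and { } expand variables
if [ "${Cards[0]}" = "2" ]; then      # [ ] is shorthand for the test command
    echo "First rank is ${Cards[0]}"
fi
```

Run it and you'll see the array's length and its first element printed; we'll meet all of these symbols again when we start writing real scripts.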
Standard Input, Standard Output, Standard Error... You may run across these terms while experimenting with and learning BASH. The first is usually provided by your keyboard; typing, in other words. The second is just the printed reply that BASH gives you in the command line. The third is the standard error notice that BASH will give you when it can't find something or follow a command you've entered. Here's an example of an error:
$ cat jimy_grades.txt
cat: jimy_grades.txt: No such file or directory
You list the contents of your working directory and find that you misspelled that file name. It's actually jimmy_grades.txt. This is why the cat command could not find it, and BASH provided that standard error output for you. You can also redirect inputs and outputs in BASH using the |, <, and > symbols. We'll see redirection in action a bit more later on, when we write a few simple scripts to do stuff.
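As a quick hands-on taste (reusing the file names from the example above), here are those streams being redirected; this is my own sketch, not part of any lesson script:

```shell
#!/bin/bash
# > redirects stdout to a file, 2> captures stderr, | pipes output onward.
echo "jimmy scored 95" > jimmy_grades.txt   # stdout into a new file
cat jimy_grades.txt 2> errors.log || true   # the misspelled name; error saved
cat jimmy_grades.txt | tr 'a-z' 'A-Z'       # pipe the file's contents into tr
```

The last line prints the file in capitals, while errors.log ends up holding cat's complaint about the missing file.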
What is a shell script? Well, it's a file that tells the shell (BASH, in our case) to do something. How does it do this? We “code” or write step-by-step commands and subcommands within the script, using variables and flow control, to make the shell follow these logical steps to achieve our goal.
You can write scripts to do just about anything on your systems. For example, you can write scripts that automate backup or updating chores, etc. You can write scripts to randomly change your desktop wallpaper. You can write scripts that do multiple chores for you; some become quite multitasking in nature, with more than just a single result. Scripts can be very useful tools.
Writing code, or scripting, is like learning any other human language. It takes time and effort to learn... and lots of practice. You have to practice what you learn or you'll lose it the same way folks lose their second languages if they don't speak or read/write them regularly.
We made a simple script in yesterday's lesson. We showed how Mary was able to write a small script at her workplace that simplified a chore she had to perform often during the day. We'll move ahead in the coming days to more complicated scripting, but nothing too complicated. The goal here is just to familiarize you with shell scripting, not to make you an expert. Only you can make you an expert.
Note: As always, please remember to click on the embedded links within my articles for definitions to unusual words, historical background, or links to supplemental informative sites, etc.
*Note: this article originally published on my Nocturnal Slacker v1.0 at Wordpress.com site.
OK, let's continue on with our lessons about the Linux Shell and the command line.
Today, we're going to learn how to write a script that you can run from the command line that will actually do stuff. It's a very simple script, so don't expect anything spectacular. I was wondering what I could use as an example script for this lesson when I remembered a question someone at LinuxQuestions.org asked the other day about how to tell which users are currently logged into a Linux system.
Let's make a little script to give us that information. Let's say that Mary wants to see who is logged in on the Linux system that she maintains for her company. She could easily do this with one simple command, who, from the command line:
mary@workplace:~$ who
bill pts/4 April 10 09:12
james pts/5 April 10 09:30
marjory pts/11 April 10 10:02
ken pts/16 April 10 10:31
That was pretty easy, right? What if Mary wanted to check this periodically during the day? Well, she could write a small shell script with the proper permissions that she could execute any time she wanted from the command line or the terminal within her graphical user interface. Let's find out how she might do that.
Mary would first open the vim editor with the file name that she plans to use for this script:
mary@workplace:~$ vim onsys
Vim would faithfully carry out the command and open up, ready to edit the file onsys. At which point, Mary would enter these lines to create her script:
# custom who script - mary - 10 Apr 11
date

echo "Users currently logged into the system:"
who
# end of script
Here's what Mary actually did... she made a comment in the first line, beginning with the # character, so that the shell knows to ignore that line. In the next line, she enters the command "date", so the script will output the date along with whatever else she asks it to do. In line 4, she uses the built-in echo command to tell the shell to display whatever follows the echo command. In this case, Mary wants the script to display Users currently logged into the system: when she runs it. The next command that Mary enters into this little script is the who command. And lastly comes her notation that the script has ended.
Now, to make this script work, Mary needs to make it executable. In other words, she has to change the file's permissions to allow herself (and others, if she wants) to execute (run) the script onsys. She will do that this way:
mary@workplace:~$ chmod +x onsys
If she now listed the permissions for this file, they would look like this:
mary@workplace:~$ ls -l onsys
-rwxr-xr-x 1 mary users 94 Apr 10 15:21 onsys
What this means is that everyone on the system can read and execute this script, but only mary can write to (change) it. OK, so Mary wants to test her script now, so she just types the script name, prefixed with ./ since the current directory usually isn't in the PATH, at the command line (assuming the script is in her working directory):
mary@workplace:~$ ./onsys
Sun Apr 10 15:48:09 EDT 2011
Users currently logged into the system:
bill pts/4 April 10 09:12
james pts/5 April 10 09:30
marjory pts/11 April 10 10:02
ken pts/16 April 10 10:31
And so, there you have it. Mary wrote a simple script using standard shell commands to perform a function that she repeats every day. That's what scripts do: they perform tasks for us at our bidding. Of course, scripts can get much more complicated; so complicated, in fact, as to be considered applications in their own right. A script is just a series of coded commands and variables designed to do something, which is basically what an application is, too.
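To hint at that "more complicated" direction, here is a small hypothetical next step for Mary's idea (my own invention, not part of her original onsys script): store a command's output in a variable and make a decision based on it.

```shell
#!/bin/bash
# countusers: $( ) command substitution captures who's output,
# wc -l counts the lines, and the if statement reacts to the number.
count=$(who | wc -l)
echo "Users currently logged in: $count"
if [ "$count" -eq 0 ]; then
    echo "The system is quiet right now."
fi
```

Variables, command substitution, and if tests like these are the building blocks that turn a one-liner into a real program.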
Start yourself a directory in your /home directory where you can create and play around with scripting and scripts. It can really be fun. You can "automate" a lot of everyday stuff that you do on your computer using custom-made scripts. Yes, you can do it. It ain't rocket science. It is similar to the technology used to program rockets, though.
*This article appeared originally on my Nocturnal Slacker v1.0 site at WordPress.com
In the next couple lessons here at Nocturnal Slacker v1.0 we'll be discussing the Bourne Again Shell (BASH) in a bit more detail.
With that in mind, though, how 'bout we play around with a neat trick that will customize your BASH prompt whenever you go to the command line? Sound like fun? OK, here we go...
There's a file in your /home/<your_username> directory called .bashrc. You'll need to use the -a (all) option when listing (ls) that directory because the preceding . means it's a hidden file. So, let's see how joe would do this...
joe@mysystem:~$ ls -a .bashrc
There we go. So Joe now knows that he does have a .bashrc file in his /home directory. Now, let's say that Joe wants a cool BASH prompt like mine:
... that shows his username, his operating system, and his working directory. To do this, Joe is going to have to add a little data to his .bashrc file. He'll edit the file using the VIM command line editor. Remember that from a previous lesson? Here goes Joe...
joe@mysystem:~$ vim .bashrc
And poof! Up comes VIM with the .bashrc file ready to be edited. It's almost like magic, huh?
# custom prompt
PS1="\u | archlinux\w:$ "
Joe is going to edit the .bashrc file to add the little bit that you see above. What that will do is tell BASH to start up the command line with a prompt that shows Joe's username (\u) and his distribution (ArchLinux), separated by the | (pipe) character, followed by his working directory (\w). After saving the file, this is what Joe's command line prompt will look like once he starts his next BASH session:
joe | archlinux~:$
Pretty neato, huh? Remember, the ~ denotes the home directory. If Joe were to change directories (cd) to, say, /boot/grub, then his prompt would change too, to show the new working directory.
joe | archlinux~:$ cd /boot/grub
joe | archlinux/boot/grub:$
Again, pretty neato, huh? This way, Joe will always know what directory he's in at any given time. That will help prevent Joe from making any command line boo-boos.
That's it, guys and gals. Pretty simple stuff. See? You're not nearly as scared of that mean ol' command line as you used to be, huh? Use your imagination when customizing your prompt. I've seen some pretty cool ones out there. Here's the one my friend securitybreach, fellow Admin from Scot's Newsletter Forums - Bruno's All Things Linux, uses:
╔═ comhack@Cerberus 02:27 PM
How's that for spiffy? For further reading see HERE and HERE.
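One last tip for the tinkerers: if your bash is version 4.4 or newer, you can preview a prompt string without opening a fresh terminal, using the ${VAR@P} parameter transformation. A little sketch of mine (the layout is just my own taste, not anyone's actual prompt):

```shell
#!/bin/bash
# \u = username, \h = hostname, \t = time, \w = working directory.
PS1='\u@\h \t \w > '
rendered="${PS1@P}"    # @P prompt-expands the escapes without a new shell
echo "$rendered"
```

Echoing the rendered string lets you see exactly how the escapes expand before you commit the line to your .bashrc.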
Have a great weekend wherever you may be!
*This article appeared previously at my Nocturnal Slacker v1.0 site at WordPress.com
Let's continue now with our series of articles introducing new Linux Adventurers to the Linux shell, the command line, and command line editing.
Sometimes, there is a need to stuff a bunch of files into an archive and compress it. This is often the case when one wants to send a bunch of large files to someone else via email or host them on a server. This practice came about in the old days of computing, when disk space was at a premium. Even though disk space is now huge and cheap, bandwidth can still be tight in some cases, so archiving and compressing files is still an efficient way to store/transport them on your system or move them across the Internet.
Let's say our old pal Joe has three files that he wants to shove into an archive for storage on his own system. His files are text1.txt, joemama.txt, and umeume.txt. Here's how Joe would archive these three files using the tar command. Tar stands for tape archive. It's been around since Noah ran Slackware on the Ark.
joe@mysystem:~$ tar -cvf joesstuff.tar text1.txt joemama.txt umeume.txt
Here's what the above command does... it takes the three files listed at the end of the command and stuffs them (-c = create, -v = verbose, -f = write to the named archive file) into an archive file called joesstuff.tar.
OK, so far? Alright then... what if Joe's three files were pretty big and he wants to send them to his buddies Bill, Mary, and Tom via his online server? Well, in that case Joe would want to compress his new archive using a compression application. Remember the old WinZip program in Windows? Well, Linux has some similar applications to squish files into smaller packages. Two that we'll talk about here are bzip2 and gzip.
So, Joe wants to compress his new archive file with bzip2 for starters. Here's how he does it:
joe@mysystem:~$ bzip2 -v joesstuff.tar
Now, when Joe lists (ls) the contents of the directory he's working in, he will find that his original joesstuff.tar is now called joesstuff.tar.bz2. Cool, huh? The size will be smaller, too, because the archive has been compressed. Joe could now upload the file to his online server to share with his friends.
Another compression method that Joe could use is gzip. Here's how Joe would do that:
joe@mysystem:~$ gzip -v joesstuff.tar
If Joe used gzip to compress his archive, it would now be called joesstuff.tar.gz.
Now let's say that Mary downloaded the joesstuff.tar.bz2 to her computer and wanted to actually see/read the files it contained. She would have to decompress the archive first. She would do this from her command line with this command:
mary&larry@home:~$ bunzip2 joesstuff.tar.bz2
This command would "unzip" (decompress) the package leaving Mary with Joe's original archive --> joesstuff.tar. If Joe had put the archive on his server as a gzip compressed archive, Mary would use this command instead to unzip it:
mary&larry@home:~$ gunzip joesstuff.tar.gz
Either way, she'd still end up with Joe's original archive --> joesstuff.tar. Now though, she will need to unpack the archive in order to read the individual files. She can unpack the tar archive this way:
mary&larry@home:~$ tar -xvf joesstuff.tar
Now, when Mary lists (ls) the contents of her working directory, she would see the original joesstuff.tar and the three files that she just extracted (-x) from the archive.
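One more trick worth knowing: GNU tar can create-and-compress (or list and unpack) in a single step with the -z (gzip) or -j (bzip2) options, so Joe could have skipped the separate gzip call entirely. A quick sketch with throwaway files standing in for Joe's:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"          # scratch directory so we don't clutter anything
echo one > text1.txt
echo two > joemama.txt
echo three > umeume.txt
# -c create, -z gzip, -v verbose, -f named file: archive + compress at once
tar -czvf joesstuff.tar.gz text1.txt joemama.txt umeume.txt
# -t lists the archive's contents without unpacking it
tar -tzf joesstuff.tar.gz
```

On Mary's end, tar -xzf joesstuff.tar.gz would likewise decompress and unpack in one go.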
That's it, folks. Those are the basics of archiving and compressing/decompressing files in Linux using the command line tools tar, bzip2, and gzip. Be sure to refer to the man pages for these commands to get some more good information. Remember how to get to the man pages? You can either read them online at places like linuxmanpages.com or you can access them right there on your own computer using the command line by entering:
you@your_computer:~$ man tar
TAR(1) BSD General Commands Manual TAR(1)
tar — The GNU version of the tar archiving utility
tar [-] A --catenate --concatenate | c --create | d --diff --compare |
--delete | r --append | t --list | --test-label | u --update | x
--extract --get [options] [pathname ...]
Tar stores and extracts files from a tape or disk archive.
The first argument to tar should be a function; either one of the letters
Acdrtux, or one of the long function names. A function letter need not
be prefixed with ``-'', and may be combined with other single-letter
options. A long function name must be prefixed with --...
I hope you've learned something from this lesson.
*This article originally appeared on my Nocturnal Slacker v1.0 site at WordPress.com
Mozilla posted a list of the worst offender addons for its Firefox browser. These are the addons that really slow down Firefox's start-up.
Mozilla actually labels them as "slow performing addons". You can view the entire list by clicking HERE. See if any of your favorite extensions are on the list; a few of mine were, but they were down toward the bottom (minimal footprint) of the list.
You can't get something for nothing. That's just a universal truth. If you want your browser to do a boat load of extra tasks or jump through hoops like a circus animal, then you're going to have to feed it. Your browser eats RAM and CPU cycles. That's just the nature of the beast. Some are picky eaters; others are voracious monsters with bottomless pits for stomachs.
If you want your FF to be a lean mean browsing machine, you have to trim the fat a bit. Break the candy-coated addon habit. If you don't really need it or use it, why have it installed? There are some extensions that, were they not available, would probably deter me from using FF altogether. These are my must haves.
However, I also have some fluff in there. I have a smiley extension that's pretty cool. I also have one that adds "Go to top" and "Go to bottom" to my R-click context menu. Could I live without those? Sure, but I don't wanna', so I keep them. They both have very minimal footprints and seem to use next to nothing in resources within FF, so what the hell?
The beauty of FF for me is its potential for customization. You can truly make FF your own, should you care to put the effort into it. I could never do that with IE back in my Windows daze. I had to use addons for the Trident engine such as Crazy Browser and Avant to get IE to be what I wanted it to be. And even with those tools, there were limitations.
Have I mentioned lately that I LOVE Mozilla! If you have some spare change lying around, they could always use a buck or two to help defray the expenses of running that project. Mozilla creates so much for so many with so little. Help if you can.
Notes: Don't forget to click on links within my articles, folks. They often lead to informational sites to help you in some way; be they definitions of an uncommon word, or Wikipedia articles about certain items.
Disclaimer: I was at one time involved with the Avant Browser Support Team. I'm now retired from that excellent group. If you decide to give Avant Browser a try, tell 'em Eric sent ya'. ;)
*Originally published on my Nocturnal Slacker v1.0 blog at WordPress.com
Using Linux and old computers to do cool stuff. Vol. 3
In my last tutorial, I went over how to use an old computer as a Linux media center for home entertainment. Now I am gonna explain how to use an old computer to store and serve your multimedia files to any network-enabled device on your home network, including that media center you just set up.

I am sure there are many of you facing the same problems as we speak. You have recently become aware that you can download not only music but movies and TV episodes as well, only to find out that you can only watch them on the computer you downloaded them on. Or, even worse, you have to copy them to a USB device and plug that into whatever you want to view them on. That works, but what happens when that device fills up? How long does it take to copy the media from one device to the next? As you can see, this could all get frustrating and time-consuming and might even drive you to do something stupid, like resorting to those crappy low-def DVDs you get from Red Box.

Just think: instead of that, you could take an old computer that you probably have lying around (or hell, come grab one of the many lying around my house) and use it to solve all those issues in one fell swoop. Now please, before anybody leaves comments saying 'a Buffalo NAS will do all that out of the box' or 'Windows Home Server does all that seamlessly', remember that I am trying to give people a way to solve their problems for little or no money. Besides, what fun would it be to just throw money at it and not use our cunning Linux skills to solve things on the cheap?
Alright, first things first: I don't want you wasting a bunch of time trying to set this up on a computer that is too old and slow to do the job we need it to do. That being said, I would suggest at least a multi-core processor with cores that are at least 2GHz. The reason is that you need at least two cores at 1.8GHz or one core at 3.2GHz to process HD video. We don't want our playback choppy! Cram as much compatible RAM as you can find in there and let's get crackin'!

The operating system we are going to use on this old computer is FreeNAS. It is one of the greatest operating systems ever created. I could go on for hours about how awesome it is, but let's just put it this way: if you have never used it, you will fall in love with it for sure. Go on over HERE and get the latest 'stable' build in .iso format; 32-bit should be fine since the machine is old anyway. If you are a Linux user, you probably know how to burn an ISO to a disc. If you are a Windows user, HERE are some dumbed-down videos that might help.
Now that we have our FreeNAS disc and our old computer, we are ready to install. Here is the one thing I would suggest you spend some money on: if you are planning on putting movies on this thing like me, then the hard drive that is probably in that old box will not cut it. Plus, the way FreeNAS works, it's much easier to install the OS to a different disk than the one where your shared data lives. It's actually best to put the OS on a cheap flash drive if you have one (or if the drive that was in the old computer has died), but you are still gonna need a big drive for your data. Fortunately, hard drives can be had on the cheap nowadays, so I suggest you get at least a 1TB drive that is 7200 RPM or better. HERE is some info that might help with getting that thing installed.

Now, plug in a monitor and keyboard for the moment and put the FreeNAS CD in the optical drive. You might have to get into the BIOS to set the computer to boot from CD before the hard disk. HERE is some help on that. It is gonna boot up and give you some numbered options. I think number 9 should be 'install/upgrade', so choose that one. Next, it's gonna give you some options on what type of install you want. We need to choose a 'FULL' install with 'DATA' and 'SWAP'. The reason we don't want 'embedded' is that we need a larger-than-default data partition, because we will be loading additional packages that would fill up the partition if we left it the default size. One of the key things we will be installing is the PS3 Media Server software; it does a great job of streaming movies to the PS3 or Xbox 360. So, during the next couple of steps, it will ask you what size you want your data partition to be. The default is 380, and we need at least 500, but we are talking about megabytes here, not gigabytes, so I usually give it over 1000 just in case I decide to add more software later.
Alright, once that finishes up it will want to reboot; make sure you remove the installation media. Once you get it booted up, one of the options there is to set the IP address of your LAN adapter. Just give it something that isn’t being used on your local network. Or better yet, you could leave it set to automatic DHCP and make a reservation for it in the DHCP scope on your router. That way, if you ever have to replace your router with one that uses a different internal subnet, you can still find your NAS without too much effort by scanning your subnet for it. If that sounds confusing, don’t waste time on it; as long as you can access the IP address it has, we are good. Once you have that done you shouldn’t need the monitor or keyboard any more. We can do the rest of what we need via the web interface or SSH. So head over to your normal computer or laptop and put the IP address of that thing in your browser. The default username is ‘admin’ and the default password is ‘freenas’. Once you are in, you will need to configure that big huge hard drive you bought and set up some shares. Some good tutorials on that can be found HERE and HERE. Once you get that done, you just need to load the PS3 Media Server packages; a good how-to on that can be found HERE. That’s really all you need. FreeNAS has the Transmission BitTorrent client already installed. All you need to do is go into Services and enable the BitTorrent service; then you can find torrents on the internet and let your NAS download them for you and automatically put them in the folder that your PS3 Media Server software makes available to the network. Now you can have all your media on one box and have it available to every device in your house. Enjoy!
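If the “scanning your subnet” bit sounded hand-wavy, here is a minimal plain-shell sketch that pings each address in turn; live hosts (your NAS included) answer, dead addresses time out quietly. The defaults below sweep loopback only so the script is safe to dry-run anywhere; for a real LAN you would set `PREFIX` to something like `192.168.1` and `LAST` to 254. The `-W` timeout flag assumes Linux’s iputils ping:

```shell
#!/bin/sh
# Crude ping sweep. Override like:  PREFIX=192.168.1 LAST=254 sh sweep.sh
PREFIX=${PREFIX:-127.0.0}   # subnet prefix to sweep
LAST=${LAST:-3}             # highest host number to try

for i in $(seq 1 "$LAST"); do
    # one ping, one-second timeout, discard the chatter
    if ping -c 1 -W 1 "$PREFIX.$i" >/dev/null 2>&1; then
        echo "alive: $PREFIX.$i"
    fi
done
```

Once the NAS shows up in the list, point your browser at `http://<that-ip>/`. If you have nmap installed, `nmap -sn 192.168.1.0/24` does the same job faster.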
Steven Rosenberg just wrote at insidesocal:
Eben Moglen of the Software Freedom Law Center talks about how there should be alternatives to ceding our rights and freedoms for "free" services -- and how free, open-source software can attack this problem with cheap hardware in the new Freedom Box project. I'm still absorbing all of this and turning it over in my head, and I'll have more to say when that process is further along...
In a related post about free as in freedom webmail, Rosenberg also wrote:
"[using self hosted or ISP-provided webmail] I can tap into the relative ubiquity of web-based e-mail services such as Gmail, Yahoo Mail, AOL Mail, etc., that you can get from any web browser yet not be subjected to the advertising, spying and general lack of control inherent in those "free" (but not free) services... Can you tell I'm falling under the influence of Eben Moglen?"
Since I went through the same process last year and have already thought a lot about it, since it is (I hope) useful for everybody, and since I want as much feedback and general discussion around this as possible, I am reposting here the comment I wrote on Rosenberg's first post:
Reading in this and your other post (the one about Roundcube as a webmail interface) that you are "falling under the influence of E. Moglen", that is, thinking about free-as-in-freedom email services, I think you may be interested in what I wrote about "Virtual Personal Email Servers". I look forward to hearing your opinion about this.
Oh, and I too heard Moglen speak about the Freedom Box at the Open World Forum last year. Here is my summary of what he said about it on that occasion. Again, feedback is welcome.