Linux.com

Community Blogs



Pure Magic ...


Well, it seems like magic ... rebuilt my system from scratch with PCLinuxOS 2009.1 in just over 3 hours!!

For some reason I was getting 'dead.letter' mail advising of possible intruders ... and since I do not have the expertise to investigate the situation, I just bit the bullet and rebuilt the system.

What makes it easy and quick for me now [it has not always been this way] is that I keep a 'master log', or timeline, of what I do each time. So it is now just a matter of reinstalling from the live CD and working my way through the historical timeline to reinstall and configure the software I use.

So it is NO 'biggie' these days.

One thing that makes it all straightforward is that I DO NOT store any dynamic data on my localhost ... unless it is 'throw away' stuff. All serious data is stored on another PC [Win XP] and is backed up automatically from there to an external Maxtor 640 GB HD.

So because data is never an issue, I can reformat my localhost at the blink of an eye, knowing I am only dealing with system stuff.

Remember, I am NOT proficient with shell commands ... so all this is done via the GUI.

HOWEVER, I can do it even faster. After I rebuild from scratch, I use a piece of software named Clonezilla, whose ISO I had previously downloaded and burned to CD. This is a bare-bones program that lets you take a 'snapshot' of a partition and save it to disk. THEN, when I want to rebuild localhost, it is simply a matter of restoring that partition image via Clonezilla ... it takes no more than 10 minutes.

I find this an EXTREMELY useful method of restoring sanity to my system should it go haywire ... it lets me test untried software without worrying whether it will 'mess up' my system ... because if it does, I simply restore the master image via Clonezilla ... and bingo ... I am back in known territory again.

If the piece of software I try out does the trick and I like it, I simply add it to the timeline or 'master install log' I keep. When things get to the point where Synaptic is doing an upgrade, I simply restore the last Clonezilla image, work my way through the timeline, upgrade through Synaptic, and make a NEW master image of the partition to fall back on should I need it.

For the first time I actually feel in control with Linux ... and NO WAY could I be considered a GURU, a NERD, or even on that path!!

So I'm *smiling*.

 

My Favorite Linux Distribution

Since I have been running Linux for over a decade, I've had the opportunity to use a variety of different distributions.  From roll-it-yourself Slackware installations to 
 

Joined Linux.com

This is a nice place.
 

Understanding su and sudo

First of all, a correction to a common myth: su stands for "substitute user" (often read as "switch user"), not "super user"; with no argument it switches you to the superuser, root. Distros like Ubuntu ship with the root account locked: no root password is set, so you cannot log in as root directly. So what do you do if you want to perform administrative tasks? Simple: run commands through sudo, or use it to open a root shell. Here are the steps for getting a root shell on distros where the root account is locked.

If you are in GUI mode, open a terminal (for example, Applications > Accessories > Terminal), then type "sudo su -" and enter your own password; that's it, you'll become root. ("sudo -i" does the same thing.) And then, if you want, you can type passwd to set a password for the root account, so that the next time you want a root shell all you have to do is...

$ su
and type the password you set earlier.
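A quick sanity check that goes with the steps above: a script (or you) can tell whether the current shell already has root privileges by checking the user ID, since UID 0 is always root, whether or not the distro has set a root password. A minimal sketch using the standard `id` utility:

```shell
# UID 0 is always root, regardless of whether the distro
# locks the root account.
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
    echo "You already have root privileges."
else
    echo "Not root: prefix administrative commands with sudo."
fi
```

This is handy at the top of an install script, where you want to fail early with a clear message instead of halfway through.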

 

How a poor boy from a developing country came to know Linux!


It all started with a simple failure: I needed to use a computer, but it did not have a hard disk. Flash drives were too expensive for me, so I remembered an old conversation with one of my friends:

"I discovered that there is an operating system that has no need for a hard disk: Kurumin." It was Kurumin Linux.

So I went to my friend's house, got one of those old Kurumin CDs, and ran it as a live CD. WOW, what was that? Running just from RAM, on an old computer, it was faster than <the other> OS. Time passed and I saved enough money to buy a HD, so I decided to install Kurumin in dual boot with <the other> OS.

After that I became sad because of some comments about the discontinuation of the Kurumin project. I quit Linux for a month, until another friend told me about Debian, so I downloaded it (yes, by this time I had become a university student and had Internet access) and installed it.

WOW again! Another world, another level!

I started with Debian Etch in January this year, then was tempted to try Debian Lenny. Yes, I went for it! Some time passed, I got a faster computer, and I continued using Debian Lenny, now with 1 GB of RAM! Testing is great!

Time passed and Lenny became stable! Wow, it was fantastic! But I could not wait two days before moving on to Squeeze, which is where I am now.

Life with GNU/Linux is SO EXCITING! Last week I tried to compile a new Linux kernel. It failed, but this week I will try harder!

Thank you, Linux and GNU developers, you have made me a free man! Now I can use a computer and change whatever I want on my PC! Thank you, I can use a computer now!

 Ty

 

 

Using Perl to securely execute a command on and copy a file from a server.

This blog will discuss how to both securely execute a command on a remote server and securely copy a file from that server.

Here is a Perl script that can securely execute commands on, as well as securely copy files from, a server.

#!/usr/bin/perl -w
##################################################
# This script makes a secure connection via ssh
# to server1 and executes the command ls.
# It then makes a second secure connection to
# server1 and copies the file test.txt via scp.
##################################################

#import required modules
use strict;
use Net::SCP qw(scp iscp);
use Net::SSH qw(ssh);
use Log::Dispatch::Syslog;

#declare local variables
my $scp;
my $host = "server1.domain.com";
my $user = "user1";
my $remotedir = "/home/user1/";
my $file = "test.txt";
my $cmd = "/bin/ls";

####################Log::Dispatch::Syslog#######################################
# Use our own process ID in the log message ident
my $pid = $$;
# Define our logfile object
my $logfile = Log::Dispatch::Syslog->new( name => 'logfile',
                                          min_level => 'info',
                                          ident => "running_list_cmd[$pid]" );
####################Log::Dispatch::Syslog#######################################

######first connect to $host via Net::SSH and run /bin/ls###########
$logfile->log( level => 'info', message => "Connecting to $host as $user and running /bin/ls ..." );
ssh("$user\@$host", $cmd);
$logfile->log( level => 'info', message => "ls completed successfully!" );
######then connect to $host via Net::SCP and copy file $file###########

#initialize Net::SCP object and send credentials
$scp = Net::SCP->new($host);

#notify user we're logging into $host
print "Logging into $host ...\n";

#log that the connection to $host was made
$logfile->log( level => 'info', message => "Connected to $host successfully." );

#log into $host as $user
$scp->login($user) or die $scp->{errstr};

#log the successful login to $host
$logfile->log( level => 'info', message => "Logged into $host successfully." );

#notify user of changing working directory to $remotedir
print "Changing working directory to $remotedir\n";

#change working directory to $remotedir
$scp->cwd($remotedir) or die $scp->{errstr};

#Write Changed working directory (CWD) to $remotedir
$logfile->log( level => 'info', message => "CWD to $remotedir successfully." );

#check that $file exists on the remote host by querying its size
$scp->size($file) or die $scp->{errstr};

#notify user scp of $file has started
print "SCPing $remotedir$file from $host ...\n";

#scp $file from $host
$scp->get($file) or die $scp->{errstr};

#notify user scp of $file from $host was successful
print "$remotedir$file copied from $host successfully!\n";
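For comparison only, here is a rough sketch of the same two operations using the stock OpenSSH `ssh` and `scp` command-line tools instead of the Perl modules. The user, host, and file names are the same placeholders as in the script above; the commands are built into variables first so they can be inspected before anything touches the network:

```shell
user="user1"
host="server1.domain.com"
remotedir="/home/user1/"
file="test.txt"

# Build the two commands as strings so we can inspect them first.
ssh_cmd="ssh $user@$host /bin/ls"
scp_cmd="scp $user@$host:$remotedir$file ."

echo "$ssh_cmd"
echo "$scp_cmd"

# To actually run them (requires key-based authentication to the server):
#   $ssh_cmd
#   $scp_cmd
```

The Perl modules above ultimately drive the same ssh and scp binaries, so the two approaches behave the same on the wire; the module version just gives you per-step error handling and logging.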

Disclaimer:  This blog entry comes with NO expressed warranty, guarantee, support, or maintenance of any kind!  Use at your own risk!   

Good luck and hope you find this useful.

 

Nice console regex helper

Last week I stumbled on a nice console regex helper. It's not that I'm bad at regex, but switching between the sed, vim, Python, and Perl regex flavors trips me up a bit sometimes.

Full Post
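A tiny illustration of why those flavors clash: sed's default basic regular expressions (BRE) require escaped group delimiters \( \) and have no \d digit class, while extended and Perl/Python-style flavors use bare parentheses. A sketch running sed both ways on the same made-up input:

```shell
text="order 42 shipped"

# sed (BRE): escaped group delimiters, [0-9] instead of \d
bre_result=$(printf '%s\n' "$text" | sed 's/order \([0-9]*\)/ticket \1/')

# sed -E (ERE): bare parentheses; Perl and Python regexes
# would additionally accept \d+ here, which sed does not
ere_result=$(printf '%s\n' "$text" | sed -E 's/order ([0-9]+)/ticket \1/')

echo "$bre_result"
echo "$ere_result"
```

Both commands produce the same substitution, which is exactly the trap: the syntax that works in one tool silently fails to match in another.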

 

Good for the (open) soul

Wow, Linux.com has really been buzzing with activity today. Good to see the community active and striving towards better things. Good to see the community being listened to, as well. Good to see the community not abusing being listened to, too ;P

The forums could be a bit busier, though, but they should catch on. But no, let's not spoil the post with negativity; it's a good day, on a good site, using a good OS :D

 

How to fix problems with Launchpad repository keys

 Launchpad

Has it ever happened to you that when you run "sudo apt-get update" you get errors from your repositories, asking you to update again or telling you that some key is bad?

Thanks to a script created by a user on the Ubuntu Forums, we can forget about problems with Launchpad repositories and key changes.

1. To use it, first download the following file - script
2. Then open a terminal and navigate to where the script is
3. Type the following command in the terminal:

./launchpad-update

4. Wait for it to finish ("Be patient")

Done, that's all! All the problems you had with your Launchpad repository keys will be fixed. Cheers!!!
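For the curious, what such a script typically does under the hood is scrape the missing-key fingerprints (the NO_PUBKEY IDs) out of apt's error output and re-import each one from the Ubuntu keyserver. A hedged sketch of that idea; the error line and key ID below are invented for illustration:

```shell
# A sample apt-get update error line (illustrative only)
apt_error="W: GPG error: http://ppa.launchpad.net jaunty Release: NO_PUBKEY 2836CB0A8AC93F7A"

# Extract the missing key ID from the error text
key=$(printf '%s\n' "$apt_error" | sed -n 's/.*NO_PUBKEY \([0-9A-F]*\).*/\1/p')
echo "Missing key: $key"

# Each extracted key could then be re-imported (needs root; not run here):
#   sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys "$key"
```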

 

Brian Masinick on desktop Linux systems

I have been a follower of free software since the eighties.  I started using commercial UNIX software in 1982, and not long after, I sought to find free utilities that would meet needs not cleanly met with standard tools.

When the GNU project started, I found a number of utilities that I liked, so over time, I used many of them.

I did not actually download my first Linux distribution until late 1995, when I finally purchased my first home PC for that very purpose.  By then, I was using the majority of tools that I was interested in that were in the Slackware Linux distribution.  I bought a book that Patrick Volkerding co-authored because I did not yet have broadband network access from home.  (It was not until 1999 that I got home broadband, and that is when my home Linux usage REALLY took off).

From 1999 until 2001 I was attending online graduate classes at the University of Phoenix. I wrote about and promoted Linux at every opportunity, and at that time I felt that emerging embedded systems and small-form-factor systems, coupled with free-falling hardware prices, would create a huge market for Linux systems across servers, desktops, and small devices. There has been a nice market established, but nowhere near the size I had been expecting, though a decade later there are signs that good things are happening at a modest pace.

I enjoy testing and reviewing desktop Linux systems and I particularly enjoy desktop distributions that have been derived from Debian roots.

 

 

Linux.com is a model of the community

Today is a very good day for Linux, particularly because we have Linux.com working again.  I say this not exclusively because it was de facto offline for what felt like forever: the old site was ugly.  Don't get me wrong, I loved the articles.  However, if I wanted to point a friend to Linux online, I would send them to ubuntu.com, not linux.com.  That's not the case anymore.

It's not just that Linux.com looks amazing, though it does.  It's the whole idea of a social community.  Users of Mac and Windows live in a vertical environment: they send money up, and down comes the software.  We live in a horizontal environment: you have something I want, so I take it, modify it, and pass the modifications back to you.  Project X needs to implement feature Y?  No problem, Project Z has that code and is glad to let you have it.  I think it's really telling that Linux.com is a social networking site, unlike the sites for Mac and Windows.  The Guru Points idea is a great one, an idea that embodies Linux's meritocracy, and I can see this easily becoming the go-to place for answers in the future of Linux.  We now have a central point where all users of Linux, regardless of distro, can come together and enjoy the commonality we have gained by using ethically sound software.

Great job Linux Foundation.  Keep up the good work.

 


