After working with Ubuntu Linux since version 7.04 I have had a lot of experiences, not all of them good, but overall the operating system was stable up until the latest version, 11.04.
Yes, I understand that Canonical wants to offer a new experience with a new look and feel, but in my opinion, jeopardizing stability by turning an operating system inside out just does not make sense.
I feel that at one time Ubuntu provided a great Linux experience for anyone, even people migrating from Windows, but now I have one thing to say - I love Kubuntu 11.04. I call it "the new way to be - KDE"...
How to save YouTube videos without any external software!
First, make sure that the video you want to download is open in the browser (I have tried this with Firefox and Chrome).
Doing this in Firefox is rather simple: the video can be copied from the browser cache. It lives under /home/user_name/.mozilla/firefox/(random_text).default/cache/ - use find to search for .flv files and you have the flash video!
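The cache search can be sketched like this; since the real profile directory name is random, the demo below stands up a fake profile tree in a temp directory, but the find invocation is the same one you would run against ~/.mozilla/firefox:

```shell
# Simulate the Firefox profile layout with a temp directory, then use
# find to locate .flv files, exactly as you would on the real cache dir.
CACHE=$(mktemp -d)
mkdir -p "$CACHE/abc123.default/Cache"
echo "video data" > "$CACHE/abc123.default/Cache/video.flv"
find "$CACHE" -type f -name '*.flv'
```

On a real system the equivalent would be `find ~/.mozilla/firefox -type f -name '*.flv'`.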
But in Chrome it's more complicated. In earlier versions of the Flash player the videos used to be stored in the /tmp folder, but not anymore since the release of version 10.2 of Adobe Flash!
Anyway, it can still be achieved.
First open good ol' terminal and type either "lsof | grep Flash" or "ps x | grep flash".
The first command will give an output similar to the following.
Here you can see the process is "chrome", the process ID (PID) is 4299 and the fd number is 33.
Now navigate to the process's fd directory by typing
"cd /proc/<PID>/fd" (just fd, not the fd number) - here the PID is 4299, so the command is "cd /proc/4299/fd".
Now copy the file with fd 33 onto the desktop or the required destination with the command
"cp 33 /home/username/Desktop"
That's it - you now have the file! It is better to do this after the whole video has buffered, so you get the whole video.
Now coming to the second command.
Open the terminal again and type
"ps x | grep flash"
The output will be similar to this.
Here you can see the results - the first one uses libgcflashplayer.so, hence that is the process we shall be using now.
Here you see only the PID, in this case 4299,
so we need to navigate to that process's fd folder:
"cd /proc/<PID>/fd" - here the command would be "cd /proc/4299/fd".
Now list the directory and grep for the flash file:
"ls -l | grep Flash"
The output will be similar to the one above. Here you can see that fd 33 points to the deleted flash file in the /tmp folder. This is the required file - copy it to the required destination!
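The whole trick works because Linux keeps a deleted file's data around for as long as some process still holds it open. Here is a self-contained sketch of the recovery (Linux only); a background shell stands in for the browser, and fd 3 is just the descriptor we open in the demo:

```shell
# Create a file, have a background process open it on fd 3, then delete it.
TMP=$(mktemp -d)
echo "flash video data" > "$TMP/video.flv"
( exec 3< "$TMP/video.flv"; sleep 5 ) &
PID=$!
sleep 1
rm "$TMP/video.flv"
# The deleted file is still reachable through the process's fd table:
ls -l "/proc/$PID/fd/3"        # the symlink target shows "... (deleted)"
cp "/proc/$PID/fd/3" "$TMP/recovered.flv"
cat "$TMP/recovered.flv"       # the original contents, recovered
```

Once the holding process exits (the browser tab is closed), the data is gone for good, which is why you copy the file out while the video is still open.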
To begin with, a bit of background on the environment may be helpful…
The need to virtualize my HP C7000 blade environment came from a requirement to consolidate our comms room estate, retire legacy hardware and achieve as good an occupancy on the remaining hardware as possible. The eventual plan for the left-over kit could be anything from a test rig running Eucalyptus to just a VMware ESXi environment running many virtual machines - for now we are keeping it simple with a basic ESXi environment.
Most of my existing hardware is running on G1 or G2 blade kit, and I wanted to be able to just lift out the existing servers and place them in their new environment with as little disruption (or developer time spent rewriting legacy code) as possible, whilst I gave some thought to how I would rearrange my estate for maximum efficiency once all the services running in it had been virtualized and made effectively hardware independent (within reason).
Here are the steps I went through. I've also listed a couple of gotchas that I wasted a bit of time on, but which I'm glad I've made a note of, so I won't be wasting that time again!
I wanted to virtualize a system that was running on an HP BL460c (using its local storage, not SAN or storage blades) and make it run under ESXi. I thought that this would be a simple case of connecting the ESXi cold clone CD to the blade and doing a few mouse clicks.
This was how I proceeded, but I couldn't figure out initially why the blade was unable to see my ESXi server, even though all the correct routing between networks existed. Then I remembered that I was running with 2 x Gb2EC network switches in the back of that c7000 chassis, and that I had had to use VLAN tagging on all of the ports. This worked fine when the original blade OS was 'up', but without knowledge of the VLAN tags in the cold clone CD, it failed to work.
(If someone has done a cold clone in an environment where they have needed to tag the packets that are being sent from the cold clone mini-OS then I would love to have some feedback on how you did it.)
In the end I moved the blade from its original chassis and placed it in a c7000 enclosure with the VLAN-tagging disabled, and this worked great.
So I used the blade ‘SUV’ cable and connected a CD drive and keyboard and VGA screen to the blade and booted from the VMware ESXi cold clone CD, and went through the steps of identifying the ESXi system that I wanted to receive the image that the cold clone CD produced from the blade.
I had a bit of an issue with the fact that parts of the configuration process for the cold clone environment seemed to require a mouse to click 'Next', as the tab key only worked intermittently (this could be a hardware/keyboard issue on my side), but just for reference, it's fine to disconnect the keyboard from the SUV cable and connect a mouse (and vice versa) as many times as necessary throughout the installation. Another approach which is probably possible is to connect the cold clone media using HP Virtual Media, but again I went for what was the most straightforward approach at the time.
Once the cloning process was complete I had the virtual version of the blade available on my ESXi host, but at this point it would still not boot successfully, as it expects to see the Smart Array adapter in the blade, and so it tries to look for boot and root on /dev/cciss/c0d0pXX.
So from this point forward the files that I needed to edit on the Virtual machine image were the /etc/fstab, the /boot/grub/device.map and /boot/grub/menu.lst.
You need to go through this and replace any reference to /dev/cciss/c0d0 with /dev/sdaX and so on. As an example here are some of my changes, which I applied by booting a liveCD and mounting each partition:
In /boot/grub/device.map:
(hd0) /dev/cciss/c0d0  ->  (hd0) /dev/sda  (note that there is no partition number specified)
In /boot/grub/menu.lst, the line
kernel /vmlinuz-version root=/dev/cciss/c0d0p3 resume=/dev/cciss/c0d0p2
changed to:
kernel /vmlinuz-version root=/dev/sda3 resume=/dev/sda2
And in /etc/fstab:
/dev/cciss/c0d0p3 /     ->  /dev/sda3 /
/dev/cciss/c0d0p1 /boot ->  /dev/sda1 /boot
/dev/cciss/c0d0p2 swap  ->  /dev/sda2 swap
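From the liveCD these substitutions can also be scripted with sed. Here is a sketch against a throwaway copy of fstab (a real run would target the mounted image's /etc/fstab, device.map and menu.lst instead):

```shell
# Build a sample fstab with the cciss device names, then rewrite
# c0d0pN -> sdaN and any remaining bare c0d0 -> sda.
TMP=$(mktemp -d)
cat > "$TMP/fstab" <<'EOF'
/dev/cciss/c0d0p3  /      ext3  defaults  1 1
/dev/cciss/c0d0p1  /boot  ext3  defaults  1 2
/dev/cciss/c0d0p2  swap   swap  defaults  0 0
EOF
sed -i -e 's|/dev/cciss/c0d0p|/dev/sda|g' \
       -e 's|/dev/cciss/c0d0|/dev/sda|g' "$TMP/fstab"
cat "$TMP/fstab"
```

The partitioned substitution runs first so that c0d0p3 becomes sda3 before the bare-device rule can touch it.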
Next, I grabbed the SLES install CD/DVD and booted as if I were going to do an installation. I proceeded through the normal install steps up to where you are asked whether you are doing a new install, an update or 'other options'. From other options you can run the System Repair Tool, and this analyses the installed system and advises you of any missing kernel modules, or ones that are now defunct (amongst other things). My CD advised me to disable debugfs and usbfs. I did not select verify packages, but only 'check partitions', 'fstab entries' and the final step of rewriting the boot loader if needed.
Once the newly imaged server had booted I needed to delete the old network interfaces: I deleted all entries in /etc/udev/rules.d/30-persistent-net-names.rule and rebooted, which automatically entered the new MAC address details for the new VMware ethernet adapter, then re-added the network adapter in YaST.
After that I did a final reboot, ejected the install CD, installed VMware Tools on the guest, and I had my newly virtualized system operational again!
Matt Palmer 30-Aug-2011
When you try to connect to a server using SSH (secure shell), your IP is logged on the server - for example, in the log shown below.
Having our IP logged on the target server is dangerous, so let's try a simple trick to disguise our IP in the server log when using SSH: we use torify. Let's try (the host below is a placeholder):
# torify ssh user@target-server
Our IP address will now appear anonymous. Let's check the log:
Aug 23 16:46:12 namaserver sshd: Invalid user admin from 192.168.1.8
Aug 23 16:46:12 namaserver sshd: Failed none for invalid user admin from 192.168.1.8 port 44194 ssh2
Aug 23 16:47:19 namaserver sshd: pam_unix(sshd:auth): check pass; user unknown
Aug 23 16:47:21 namaserver sshd: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.7
Aug 23 10:32:05 namaserver sshd: Failed password for invalid user admin from 192.168.1.8 port 44194 ssh2
And we have succeeded in disguising our IP when logging into a server over SSH - but note that an IP is still written to the log: the address shown in red above is the one the server records. The administrator of the server still knows that someone logged into his server. Remember, use at your own risk - this is for educational purposes only :D
Hey all, Worldlabel is sponsoring a Tux Paint drawing contest.
Grab Tux Paint, make a cool drawing, and win one of 3 OLPC laptops, one of 10 Sugar-on-a-Stick prizes, or other awesome prizes!
If you're not familiar with Worldlabel, the Worldlabel blog runs a lot of great Linux howtos.
The 2011 Tux Paint Summer Drawing Contest is sponsored by Worldlabel.com and is open to all children aged 3 to 12 who live anywhere in the World!
Here’s a chance to show off your talent using a great drawing program made especially for kids. Tux Paint is an award-winning drawing program you can download to your computer. Tux Paint was recently awarded SourceForge.net Project of the Month. It will run on all versions of Windows (including Tablet PC), Mac OS X 10.4 and up, Linux, FreeBSD and NetBSD. And it’s FREE!
PRIZES: Worldlabel.com will give out prizes to 10 winners! 1st prize wins an OLPC laptop, Sugar-on-a-Stick loaded with Tux Paint, a Tux Paint T-shirt and a button. 2nd and 3rd prizes each win an OLPC laptop, Sugar-on-a-Stick and a T-shirt. 7 more winners will be chosen and will receive a Sugar-on-a-Stick and a Tux Paint T-shirt.
HOW TO ENTER:
- Download Tux Paint
- Make your drawing in Tux Paint and save it in png format
- Send your finished drawing in png format to
and include “Tux Paint” in the email subject line
- In the email submission include: 1) the artist's name 2) the artist's age 3) the title of the drawing 4) the country where the artist lives
- All artwork must be the contestant’s original work created on Tux Paint.
- Only one entry per child
Entries will be judged on the quality and originality of the artwork. Extra points will be given to drawings that tell a story.
Entries must be submitted by midnight US Eastern Standard Time on 12 September 2011. Winners will be announced no later than 22 September 2011.
All entries will be licensed: Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported and will be exhibited on this page.
Visit Tux Paint Kids Summer Drawing Contest for complete details!
In our house we have one PC running Fedora 15 and one laptop also running Fedora 15. Up until now the laptop has been my machine only - no kids allowed - but I am really sick of frying my nuts off on the sofa and want to get some serious stuff done, which would be better suited to the PC. The problem with the PC is that the kids always want on it. The solution: create accounts for them on the laptop. Job done? No!
Even with accounts for the kids I was still getting nagged, because none of their stuff (Minecraft saves) was on the laptop. No problem - copied them over, easy as - but the next issue was synchronizing those Minecraft saves. I was considering flushing them through to a little HP Microserver running Ubuntu Server 10.10 but did not have the time. Enter Dropbox. I saw this somewhere else (I forget where), but quite simply, via the CLI in the Dropbox directory:
ln -s /home/annoying_kid_name/.minecraft/saves mcsaves
So now it does not matter if the kids log into the PC or the laptop - Dropbox keeps the Minecraft saves current on both machines :-)
This works great for other application settings too. I do the same thing for Getting Things Gnome, and am busy thinking up other stuff to sync just for the sake of it.
It is hardly single sign-on, but it works. I do miss the offline files feature of Windows Server!
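The symlink trick can be sketched end to end. The demo below uses throwaway paths under a temp directory; substitute your real ~/Dropbox and ~/.minecraft directories on each machine:

```shell
# Simulate a home directory, link the saves folder into "Dropbox",
# and show that a file written on one side appears on the other.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/Dropbox" "$DEMO/.minecraft/saves"
ln -s "$DEMO/.minecraft/saves" "$DEMO/Dropbox/mcsaves"
touch "$DEMO/Dropbox/mcsaves/world1"
ls "$DEMO/.minecraft/saves"    # world1 shows up in the real saves dir
```

Because the Dropbox client syncs the contents behind the symlink, creating the same link on each machine keeps the real saves directories in step.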
I decided to write the following post after having some problems getting iSCSI connectivity to work correctly with a Dell EQL PS6500 Array with RedHat 6.
It seems that Dell have done a great job in getting the Host Integration Tools to work with RH5.5 and ESXi, and also the MEM for MPIO, and I love the product, but I encountered some problems when trying to install a RH6 server and connect it all up following the docs on the Dell site. The main issue I had straight away was that I was predominantly using SuSE SLES as my preferred enterprise distro, but the RPMs provided by Dell for the HIT tools and MPIO had a load of dependencies that needed to be resolved before it was even possible to start using the extended functionality these tools offer on SuSE.
So I downloaded a version of RH6.1 to test the new tools on, only to find that this didn't work as intended. After a lengthy support call, I was told that although the documentation said it supported RH6, what this actually meant was 6.0. So I downloaded 6.0 and had exactly the same issues…
So I did some digging around and debugging of the installation procedure and docs. The first thing the docs tell you to do is make sure that you are at the latest supported release of 6 and perform a yum update. This in itself is a problem, as it effectively takes the 6.0 installation and makes it 6.1. I tested this once I had a working 6.0 installation by doing the yum update, and it completely broke the ability to address the iSCSI devices properly using the Dell tools. I found this a bit worrying: if you are sharing your admin roles with someone else, they could inadvertently break the entire configuration.
The instructions for actually formatting and adding the device to the system could use some more detail.
The documentation(for RedHat MPIO on EQL) states as a final step that you need to do the following(this is entirely absent from the HIT tools docs):
mke2fs -j -v /dev/mapper/rhel-test
This doesn't work at all using the HIT tools: it tells you that the process could not read block 0, and gives you all sorts of wonderful messages about not being happy with the superblocks on the device. This doesn't happen if you skip the tools and just use Linux native MPIO.
The only approach that worked was found by trial and error on my part (which was very time consuming). Here's my checklist:
Configure iSCSI on the local onboard NICs of a dedicated Dell R710 and set 4 paths (edit the /etc/equallogic/equallogic.conf file and change the [MPIO] section to contain the maximum number of paths (NICs) you intend to use). If you don't do this you could have problems in the future, as for each NIC path you are given a corresponding /dev/sdX device which sits behind the mapper tool.
Some issues with the Dell HIT tool for EQL: it doesn't work well on 6.1.
- Yum update from 6.0 breaks a valid config, as it effectively takes the server to 6.1!
- Ignore the doc: don't do a localinstall of DKMS, EQL and iscsi-initiators. Also don't grab the latest version of DKMS, as it is broken even though it's available on the Dell site. At the time of posting I used dkms-126.96.36.199-1.noarch.rpm, but had problems with dkms-188.8.131.52-1.noarch.rpm.
Grab kernel-devel and gcc, gcc-headers using yum
Configure iscsi.conf and CHAP - enter here the CHAP password you set on your EqualLogic.
Enable iscsi logins in /etc/rc.local.
Run eqltune -v, run through its checklist and make the necessary alterations to the kernel config, etc. This tool is your friend!
Run ehcmcli -d to show valid paths and sessions (you should see 4 paths here if you set your MPIO settings in /etc/equallogic/equallogic.conf to 4).
Run rswcli -E -network <ip> -mask 255.255.255.0 to exclude the public NICS from accepting broadcast traffic from the EQL tool.
Use iscsiadm to discover the new LUNS and to log in the EQL LUN.
Next look in /dev/mapper for the eql-xxxxxxxx-volname LUN id. Make a note of its true dm-X number.
To format the disk, do not use the MPIO device: it's necessary to format the sdX devices that are available on each session. For this reason you also need to set the config ahead of time to its maximum number of paths - it won't create more sessions as they are needed (and changing it later will break the LUN!!!). Do this in /etc/equallogic/eql.conf.
Run ehcmcli to find out which sdX devices are underneath the MPIO layer.
kpartx -a /dev/dm-X (where dm-X is the dm id associated with the eql LUN and vol name)
sfdisk /dev/sdXX (do this for each sdX)
kpartx -a /dev/dm-X
Next: mkfs.ext3 /dev/mapper/eql-lun-vol-part
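To illustrate that final mkfs step without the array to hand, here is a sketch that formats a small file-backed image instead of the real /dev/mapper/eql-... partition (the eql device name on a live system will of course differ):

```shell
# mkfs tools usually live in /sbin; make sure they are on PATH.
PATH=$PATH:/sbin:/usr/sbin
IMG=$(mktemp)
# 16 MB of zeros standing in for the mapped EQL partition.
dd if=/dev/zero of="$IMG" bs=1M count=16 status=none
# -q: quiet, -F: allow operating on a regular file instead of a block device.
mkfs.ext3 -q -F "$IMG" && echo "filesystem created"
```

On the real system you would point mkfs.ext3 at the /dev/mapper/eql-lun-vol-part device from the previous step, with no -F needed.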
I’ve also included my working multipath.conf file from my attempts to use native MPIO on RH6.1 rather than the Dell tools. You will note that the way the SCSI tools (on RH6.1) interrogate the device has changed slightly, and it now expects to see SEQLOGIC not EQLLOGIC, which could explain why the yum update breaks the tool.
There is also a suggestion to add the following to the lvm.conf file, but this renders your install unbootable if you are using LVM on your main filesystem.
filter = [ r|/dev/mapper/eql-[-0-9a-fA-F]*_[a-z]$| ]
path_selector "round-robin 0"
getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
# devnode "^dasd[c-d]+[0-9]*"
# wwid "IBM.75000000092461.4d00.34"
getuid_callout "/sbin/scsi_id -p 0x83 -gus /block/%n"
features "1 queue_if_no_path"
path_selector "round-robin 0"
I’m sure there are other ways to sort this out, but the above checklist is what worked for me.
I'd like to share an experience that I think most of us can hopefully benefit from. My main personal computer is decently fast; even though I have owned it for almost 3 years now, its life span seems like it should run a lot longer than my previous systems'. It has never had any major issues, and it has been the center of multiple hobbies and projects, and the source of my relaxation (StarCraft 2).
However, I started noticing something odd: during boot it was taking longer and longer to find its USB controllers, which slowed down the entire boot process. Other issues were appearing as well; Windows and Linux were both becoming extremely unstable, even with fresh copies of each installed.
After driving myself crazy trying to find the source of the problem, I stopped doing something that I think we all have been guilty of with our own systems at one time or another. I decided I could not fix this if I did not start with the obvious first: I began by unplugging my tower, opening it up and finally cleaning every little speck of dust I could find.
Of course it was a lot more gruesome than that. I have owned this system since before I quit smoking, and I own three small dogs who contribute dog fur to odd places around the house, etc. Although it was obvious how badly it needed to be cleaned, once I got down to taking fans apart, cleaning all my heat sinks and everything else I could find, it was obvious how bad it really was.
I had a huge plate of dirt, dust, fur, and probably some mutated germs of whatever cold viruses I have had in the past 3 years. It was disgusting, but I felt extremely accomplished once it was done. It was good to know that my computer was spotless inside and out, and that it will last as long as possible with the proper hardware maintenance it has been needing.
I booted up my system and, to my surprise, my USB controllers initialized immediately; Windows (which had previously been crashing to a blue screen) worked perfectly, as did my Linux OS. My system was like new - the software was being affected as much as the hardware was, simply by the machine not being cleaned.
As technical professionals, hobbyists, enthusiasts, geeks - whatever you want to call us - we have all been guilty of letting our own systems go in one way or another. I hope this can be a reminder to all of us of the importance of taking our own advice when it comes to our own machines.
AMD Phenom II (Quad Core) 2.6 GHz
8 GB DDR3 RAM
1 TB Hard Drive
ATI Radeon HD 4850 with 1 GB Video RAM
We are pleased to announce the openSUSE Weekly News Issue 185.
In this Issue:
openSUSE 12.1 Milestone 3 released
Google Summer of Code Reports
Javier Llorente: New namespace for KDE apps maintained by upstream
openSUSE medical Meeting
You can download it there:
We hope you enjoy the reading :-)
If you want to help us collect interesting articles for the openSUSE Weekly News, you can put all your material into our new ietherpad: http://os-news.ietherpad.com/2.
Found bugs? Please place them in our bugtracker: http://developer.berlios.de/bugs/?group_id=12095
Features, ideas and improvements can be placed in our featuretracker: http://developer.berlios.de/feature/?group_id=12095
Determined to use my powerful little netbook for as many things as possible that it was not intended for, I wiped Windows 7 Starter and installed Ubuntu 11.04 32-bit, which went extremely smoothly; Ubuntu was able to detect and install everything on its own. The only work I had to do on top of installing Ubuntu 11.04 was clicking on "Additional Drivers" and enabling the proprietary drivers for my Nvidia Ion card (that's right, my little 10-inch netbook has an Nvidia Ion graphics card with 512 MB of video RAM). My wireless card, Bluetooth, everything built into my netbook - Ubuntu 11.04 handled it all smoothly and easily. However, I felt this was all too easy and simple, so I wanted to take it a step further.
With a real operating system installed (the "Starter" edition of Windows 7 lacks many basic features for my netbook, such as switching between the on-board video card and my Nvidia card for battery-saving reasons), I decided to attempt to install a virtual copy of Windows 2003 Server on my netbook for no other reason than to prove to myself and the world that it can be done. The first thing I did was restart the computer and check my BIOS settings to see if my netbook supports virtualization. It was a false hope - it turns out they don't put that option in the BIOS of 10-inch netbooks... I thought this would be the case. This led me to believe that I was at the beginning of a big waste of time, because not being able to enable virtualization in the BIOS usually, if not always, means that the BIOS/mainboard does not support hardware virtualization. But hey, why let this stop you?
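Regardless of what the BIOS menus offer, on Linux you can check whether the CPU itself advertises hardware virtualization support (Intel VT-x shows up as the vmx flag, AMD-V as svm). A quick sketch:

```shell
# Look for the vmx (Intel) or svm (AMD) CPU flags in /proc/cpuinfo.
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
  echo "hardware virtualization flags present"
else
  echo "no vmx/svm flags - guests will use software virtualization"
fi
```

A missing flag (or one the BIOS keeps disabled) is not fatal here: VirtualBox of this era could still run 32-bit guests using software virtualization, which is why the install described next works at all.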
I continued on my quest to run a functional virtual copy of Windows 2003 Server on my netbook (why Windows? Because hey, if it can run a virtual copy of Windows, it can run a virtual copy of anything!). Next I installed VirtualBox OSE straight from the Ubuntu Software Center, then created a virtual machine with a 20 GB disk and 256 MB of RAM for my soon-to-be OS. Once that completed, I proceeded to install Windows 2003 Server in VirtualBox on my new netbook.
While waiting for Windows to finally install, I had a tough time staying determined to make this work, but still I kept at it while watching the Windows installer's 40-minute countdown slowly pass. Once done, VirtualBox OSE rebooted to start Windows 2003 Server, then I rebooted it a few more times to make sure it wasn't a fluke... Cross your fingers... this is the moment of truth.
Victory!!! Man +1 Machine 0
I have now installed Ubuntu 11.04 Desktop, VirtualBox OSE, and Windows 2003 Server onto my net-book.
ASUS Eee PC Seashell 1015PN
CPU speed: 1.66 GHz
Processor: Intel Atom 570 (Dual-Core)
Memory: 1 GB 1333 MHz 204-pin RAM
Hard Drive: 250 GB