
C++ Programming With Emacs

I am starting work on a Master's degree in computer science next month. To get a head start, I have begun brushing up on my C++ programming skills. There are excellent C++ programming tutorials on YouTube, but most of them use Dev-C++ as their programming environment. When I was in college, I dismissed the university's proprietary programming solution, Microsoft Visual C++. I found Emacs instead, and I fell in love with it as a development environment. I run Fedora 11 on my laptop, which ships the latest version of Emacs. I also downloaded and tried Eclipse, but I found it not to be as intuitive as Emacs. For now, I will continue to learn on Emacs.


The only frustrating thing is the lack of Emacs C++ programming resources on the Internet. That is a problem I intend to help correct. As I get more proficient, I will start to post Emacs C++ programming tutorials on YouTube. I would also like to write a book entitled "C++ Programming on Emacs". I found it easy to write basic code in Emacs. Error messages made it easy to find problems in my code and to fix them rapidly. Compiling the code was a breeze. In my opinion, Emacs is a simple, productive development environment.


Linux just Works

Dual-booting should never be this hard. But for some reason, Microsoft's ode to crappy names and crappy OS design seems to love making things harder than they have to be. I've had serious issues with Vista ever since I bought it under the assumption that it would be nearly as friendly and refreshing as XP. If it weren't for Vista, I might never have tried Ubuntu, or any other Linux distribution out there. So tonight I'd like to outline a few ways in which crappy operating systems like Vista make me delighted to be a penguin.

I began my installation of Vista as I normally do with dual-booting: apprehensive. I set up my second drive as NTFS and slipped the Vista disc in. Installation went smoothly, and setting up GRUB as my primary bootloader went smoother than it ever had in the past, thanks to some great help I found online.

But this was short-lived. After the installation I was left with the usual myriad of issues that needed immediate fixing: getting my Audigy 2 ZS to work under Vista (mostly a Creative issue), downloading and installing drivers for a 9500 GT and a Linksys wireless adapter, and downloading updates. Only then could I get to what I installed the OS for in the first place, in this case playing PC games.

Things went well until I downloaded Fallout 3 and attempted to extract it. Vista gave me errors claiming it could not extract a 5.5 GB archive into roughly 50 GB of free hard drive space. Explorer was having a problem with this, and I couldn't get a straight answer anywhere online. It was around this time that the updates finished downloading, and I was prompted to restart. Strangely, some updates did not download. No issue, I hoped.

Upon restart the updates were configured, for nearly an hour. Vista then restarted itself and gave me an error, so I had to put the install disc back into the drive to run a repair. This appeared to fix the issue until, halfway through downloading WinZip in an attempt to rectify the extraction problem, I was greeted by the great spirit of Windows: the Blue Screen of Death. Wonderful, I thought to myself; not even five hours into Vista's lifespan and it is already having issues with itself. A forced restart later, another attempt at running the OS proved that the BSOD didn't plan on going away. So here I am, reinstalling a version of Windows not five hours old.

As you can tell, just getting Vista to run successfully is a huge trial of patience. When Ubuntu performs every needed task faster, more stably, and more securely, it boggles my mind that Microsoft has gotten away with the crap they put on store shelves. And if those of you who are interested in Windows 7 think that things may change, think again: I received recurring BSODs in that OS as well. My Windows 7 install disc now sits happily at the bottom of a landfill somewhere in New Mexico.

It is a huge burden to set up Vista when every Linux distribution I've used has done things better out of the box, so to speak. Just staring at the installation screen makes me sigh, not knowing what issue it will throw at me next. Realizing this, I'll probably just end up formatting the drive again and booting Ubuntu to get things done. And slowly I remember why I switched to Linux in the first place, and take a quiet resignation and joy in the fact that Linux just works.



enKryptik Observations - 8 CLI Tools that are Under-utilized, Sometimes Unknown, or Unappreciated and Yet Pack a Powerful Punch

I watch a lot of the Linux boards and various distribution sites these days and notice the trend of wanting everything to be bigger, better, and flashier. With great projects like Beryl (I love distracting Windows administrators with it; it's like watching a bug fly into one of those electric blue lights), and with developers in the open source community constantly improving and designing ever more functional GUI interfaces (think of tools such as YaST2 or Synaptic), it is easy to let your CLI skills get rusty and forget about your little but mighty friends on the console. Unfortunately, with the sheer amount of sensitivity in the world these days, I need to be certain I preface this by saying that I use GUIs and am by no means belittling, demeaning, pointing fingers and laughing, nor looking to taunt you a second time-ah. However, I do believe that maintaining CLI skills and commands is imperative to good Linux administration. There is nothing like being woken at two in the morning by an urgent alarm that your server sporting Tux has gone down and will not boot back to a display manager (yes, yes, I know this doesn't happen... but let's think about this for a moment, strictly as pilots do in a simulator).

Below are a few command line tools that I have found to be very helpful and informative when it comes to administering Linux. They may not always be readily visible or well known, but they pack a very powerful punch. Now to the nitty-gritty...

WATCH — watch is a neat and versatile tool. Ever sat there frustrated, running a particular command over and over just to see what is changing? watch automates the command you wish to run and repeats it every x seconds. The format is watch -n <seconds> <command>; the default interval is 2 seconds if you leave out the -n flag. Let's say you want to watch connections on port 80. By issuing "watch lsof -i TCP:80", watch will run the command "lsof -i TCP:80" every 2 seconds and display the results in the terminal. In the output below you can see that I started out just consulting the oracle (Google), but then figured I'd go see how my boys were doing up at Ft. Sill and cruise the news pages over on Yahoo. watch happily provided the results.

# watch lsof -i TCP:80
Every 2.0s: lsof -i TCP:80 Fri Jul 20 07:58:31 2007


iceweasel 10030 user 36u IPv4 29054 TCP> (ESTABLISHED)

iceweasel 10030 user 37u IPv4 28547 TCP> (ESTABLISHED)
Every 2.0s: lsof -i TCP:80                                       Fri Jul 20 07:58:38 2007   <---- notice the time


iceweasel 10030 user 34u IPv4 31794 TCP> (ESTABLISHED)

iceweasel 10030 user 42u IPv4 32130 TCP> (ESTABLISHED)

iceweasel 10030 user 43u IPv4 33178 TCP> (ESTABLISHED)

iceweasel 10030 user 44u IPv4 32178 TCP> (ESTABLISHED)

iceweasel 10030 user 45u IPv4 32587 TCP> (ESTABLISHED)

iceweasel 10030 user 46u IPv4 32190 TCP> (ESTABLISHED)

iceweasel 10030 user 47u IPv4 33177 TCP> (ESTABLISHED)

iceweasel 10030 user 48u IPv4 32907 TCP> (ESTABLISHED)

iceweasel 10030 user 52u IPv4 33181 TCP> (CLOSE_WAIT)

Maybe you want to watch a particular user and what processes they are running: watch -n 1 'ps aux | grep <username>'. If you toss in the -d flag, each time the console refreshes it will highlight the differences from the last execution. You can get creative with watch; just remember it is actually executing the command.
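Under the hood, watch is essentially a clear-and-rerun loop. A minimal shell sketch of the idea, bounded to three iterations here (the real watch redraws in place and loops until Ctrl-C):

```shell
#!/bin/sh
# What watch does, in essence: clear, run, sleep, repeat.
for i in 1 2 3; do
  clear 2>/dev/null || true          # watch redraws the terminal each cycle
  echo "Every 2.0s: date    $(date)" # watch's header line
  date                               # the watched command itself
  sleep 2
done
```

Knowing this also explains the quoting rule above: pipelines must be wrapped in quotes so the whole string, not just the first command, is what gets re-executed each cycle.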

JUMPGATE - jumpgate became my friend a while back when I administered some wireless access points that did not have a connection to the backhaul, due to specialized protocols and security. In effect I needed a TCP forwarder to send data back and forth between boxes. By issuing jumpgate you can take a machine and have it handle the forwarding and receiving for your client. Format: jumpgate -b <localhost or IP address> -l <local port> -a <remote address> -r <remote port>

For example, in my situation I needed my local client to authenticate to a certificate server to be issued a certificate to allow it to join a secure network. The problem was my client could not reach the internal network where this certificate server resided. However, it could reach the gateway linux server and that server could reach the certificate box. I ran jumpgate on the gateway to forward my request.

# jumpgate -b <gateway address> -l 80 -a <certificate server address> -r 8080

Then, when I opened my browser and was prompted for the address of the certificate server by my local Linux box, I gave it the gateway's address instead. Once the packet hits the gateway, jumpgate is listening on that particular port and forwards it to the remote destination, where the certificate server was actually listening for requests on port 8080. jumpgate includes options for logging or being interactive, such as: jumpgate -i -l 32000 -f jumpgaterequest.log. This tells jumpgate to bind and listen for connections on port 32000, then interactively ask the user where they want the traffic forwarded to when a connection is made, and log the session to the file jumpgaterequest.log.

LSOF - lsof is a personal favorite of mine; in fact, I submitted a tip on lsof before. Its versatility is great, especially when you are troubleshooting an issue and need more information about process or connection details. The command elegantly stands for "list open files". Linux treats most everything as a file: sockets, devices, directories, and so on can all be viewed as files, and when a process or application interacts with them it has to "open" them, if you will. Using this command you can delve in and see what your system is up to.

For instance, to show all the open TCP files (this returns which service is running, who is running it, the process ID, and the connections on all TCP ports):
# lsof -i TCP
Show open TCP files on port 80 -
# lsof -i TCP:80
returns --> httpd2-wo 7010 wwwrun 3u IPv6 14787 TCP *:http (LISTEN)

Show open LDAP connections on TCP -
# lsof -i TCP:636

Want to know what files are open by a particular command? Substitute your command after the -c, and yes, you can abbreviate it; lsof matches the closest command:
# lsof -c mysq
mysqld 991 admin cwd DIR 8,3 240 148743 /home/admin/novell/idm/mysql/data
mysqld 991 admin rtd DIR 8,3 536 2 /
mysqld 991 admin txt REG 8,3 5464060 148691 /home/admin/novell/idm/mysql/bin/mysqld
mysqld 991 admin 0r CHR 1,3 41715 /dev/null
mysqld 991 admin 1w REG 8,3 1250 149954 /home/admin/novell/idm/mysql/mysql.log
mysqld 991 admin 2w REG 8,3 1250 149954 /home/admin/novell/idm/mysql/mysql.log
mysqld 991 admin 3u IPv4 86990 TCP *:63306 (LISTEN)...
Want to know what files are open on a particular device?

# lsof /dev/cdrom
bash 30904 admin cwd DIR 3,0 2048 63692 /media/cdrecorder/linux/user_application_provisioning

You can change TCP to UDP and narrow down your requests to very specific items you want to target (i.e., is there an established connection from a given host?):
# lsof -i TCP@<server address>:636 (lists LDAP connections to my server)

returns --> java 890 root 18u IPv6 8365030 TCP> (ESTABLISHED)

ndsd 6520 root 262u IPv4 8390927 TCP> (ESTABLISHED)
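When dozens of sockets are open, piping lsof through awk gives a quick summary. A small sketch of my own (the helper name is made up; it assumes lsof's standard column layout, where the NAME field reads "local->remote" followed by the state in parentheses):

```shell
#!/bin/sh
# Count ESTABLISHED TCP connections per remote endpoint.
# In lsof's output the state "(ESTABLISHED)" is the last field, so the
# "local->remote" pair is the second-to-last; split it on "->".
count_established() {
  awk '/ESTABLISHED/ { split($(NF-1), a, "->"); print a[2] }' \
    | sort | uniq -c | sort -rn
}

# Usage (run as root to see other users' sockets):
#   lsof -n -P -i TCP | count_established
```

The -n and -P flags keep lsof from resolving hostnames and port names, which makes the output both faster and easier to parse.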

ATOP - As an administrator you are used to typing 'top' to see real-time tasks and system information. atop takes it a step further and injects some steroids (in a safe and humane fashion) to let you flex some muscle and watch your processes and system information in much greater detail. I like the fact that I can watch reads and writes from the disk, as well as the quick snapshot it offers of my network device and memory usage (VGROW and RGROW). Another neat feature is that it takes samples and logs them in /var/log/ for later review. This can be very useful for system analysis (i.e., you keep hearing complaints that the server is sluggish at certain times of the day, so you review the samples, watching for high utilization or memory leaks). When you fire up atop you can press h to bring up the help screen; there are many options for tweaking this command.


ATOP - MYLAPTOP 2007/07/25 08:25:29 10 Seconds elapsed
PRC | sys 0.04s | user 0.26s | #thr 150 | #zombie 0 | #exit 0 |
CPU | sys 1% | user 3% | irq 2% | idle 195% | wait 0% |
cpu | sys 0% | user 2% | irq 2% | idle 96% | cpu000 w 0% |
cpu | sys 0% | user 1% | irq 0% | idle 99% | cpu001 w 0% |
MEM | tot 2.0G | free 982.6M | cache 491.5M | buff 123.1M | slab 60.6M |
SWP | tot 3.0G | free 3.0G | | vmcom 869.0M | vmlim 4.0G |
DSK | sda | busy 0% | read 0 | write 4 | avio 2 ms |
NET | transport | tcpi 1 | tcpo 1 | udpi 0 | udpo 0 |
NET | network | ipi 5 | ipo 1 | ipfrw 0 | deliv 1 |
NET | dev eth0 | pcki 6 | pcko 1 | in 0 Kbps | out 0 Kbps |

PID SYSCPU USRCPU VGROW RGROW USERNAME THR ST EXC S CPU CMD 1/2
5463 0.02s 0.13s 0K 0K root 1 -- - R 2% Xorg
6152 0.00s 0.07s 0K 0K user 1 -- - S 1% metacity
10953 0.01s 0.03s 0K 0K user 8 -- - S 0% firefox-bin
11100 0.01s 0.01s 0K 0K root 1 -- - R 0% atop
6143 0.00s 0.01s 0K 0K user 1 -- - S 0% gnome-panel
6701 0.00s 0.01s 0K 0K cupsys 1 -- - S 0% cupsd
6148 0.00s 0.00s 0K 0K user 1 -- - S 0% nautilus
6270 0.00s 0.00s 0K 0K user 2 -- - S 0% gnome-terminal
7269 0.00s 0.00s 0K 0K user 1 -- - S 0% notification-d
6167 0.00s 0.00s 0K 0K user 4 -- - S 0% gnome-cups-ico
6244 0.00s 0.00s 0K 0K user 1 -- - S 0% gnome-screensa

IFTOP - Curious about what network traffic is flowing around you? Pop this command into your console and get instant feedback. Of interest (although it is not shown in my example below) is that bars will appear as bandwidth is used. Since I'm on a test network while I write this, there is not enough traffic for iftop to scale up. I like it because you can toggle DNS resolution, show source or destination or both, and sort by column, source, or destination; a sweet feature is freezing the order. If you have something you specifically want to watch, you can type o (the letter, not the number) and it locks onto just those source/destination connections. Again, typing h brings up the help and options screen. To be honest, I usually leave iftop up and running during the day to keep an eye on what is going on with my network.

1.91Mb 3.81Mb 5.72Mb 7.63Mb 9.54Mb
+----------------------------------------------- => 0b 0b 0b
<= 256b 179b 166b => 0b 0b 0b
<= 0b 169b 133b => 0b 0b 0b
<= 0b 0b 55b => 0b 0b 0b
<= 0b 0b 55b => 0b 0b 0b
<= 0b 0b 55b => ALL-SYSTEMS.MCAST.NET 0b 0b 6b
<= 0b 0b 0b

TX: cumm: 654KB peak: 0b rates: 0b 0b 0b
RX: 5.59MB 2.02Kb 256b 348b 470b
TOTAL: 6.23MB 2.02Kb 256b 348b 470b

PSTREE - Run 'ps aux' and it spews out its results, leaving you stuck combing through the list to check which process is related to which. Typing 'ps aux' is ingrained in an administrator, but pstree simplifies the display by taking the process status and building it out as a tree. The results are clean and allow you to rapidly check parent and child relationships (good parenting skills are always a bonus in life... remember to teach your children Linux early on). The key word there is rapidly. If you need or desire to see the PID inside the tree, insert the -p flag.


     |-6*[{amarokapp}]
     |-gnome-terminal-+-bash---iftop---3*[{iftop}] <--- check it out, it shows iftop and atop running
     |                |-bash---atop
     |                |-bash---su---bash
     |                |-bash---su---bash---pstree
     |                |-gnome-pty-helpe
     |                `-{gnome-terminal}
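If you only care about one process's lineage rather than the whole forest, you can walk the PPID chain with plain ps. This helper is my own little sketch, not part of pstree:

```shell
#!/bin/sh
# Print a process and each of its ancestors, up to init.
ancestry() {
  pid=$1
  while [ -n "$pid" ] && [ "$pid" -gt 0 ] 2>/dev/null; do
    ps -o pid=,comm= -p "$pid"              # show this link in the chain
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ') # climb to the parent
  done
}

ancestry $$   # starts at the current shell and climbs to PID 1
```

It is the same parent/child information pstree draws, just filtered to a single branch.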

NETCAT (NC) - netcat, or nc, is aptly nicknamed the Swiss Army knife of networking utilities. This is another personal favorite. Although this tool is still popular, oftentimes folks forget about it (except those who wear black, white, or grey hats). The official-sounding terminology from the man page: it is "a simple unix utility which reads and writes data across network connections, using TCP or UDP protocol. It is designed to be a reliable "back-end" tool that can be used directly or easily driven by other programs and scripts".

Using an earlier scenario of machines that were separated, and now adding the fact that I needed to flash images, I could combine jumpgate and dd with netcat and have my remote system flash itself (no cops or court dates come with this type of flashing).

Quick imaging: On the box I wish to flash I issue:
# nc -l -p 23000 | dd of=/dev/hda <--- tells netcat to listen on port 23000 and pipe whatever it receives to dd

On the client-side (box I am sending the image from) I issue:
# dd if=/dev/hda | nc <server address> 23000 <--- takes the output from dd and pipes it over to the listening server

Don't have nmap loaded? No worries, netcat will scan ports for you:
# netcat -vv -z <host> 8000-9200 <--- -vv tells netcat to be very verbose and -z tells it to scan the given range (8000-9200); substitute your own host and port range values
(UNKNOWN) [] 9104 (?) : Connection refused
(UNKNOWN) [] 9103 (bacula-sd) : Connection refused <--- interesting it has a network backup daemon
(UNKNOWN) [] 9102 (bacula-fd) : Connection refused
(UNKNOWN) [] 9101 (bacula-dir) : Connection refused
(UNKNOWN) [] 9100 (?) open <--- Hey, I discovered I have a device that is running print service using HP JetDirect
(UNKNOWN) [] 9099 (?) : Connection refused
(UNKNOWN) [] 9098 (xinetd) : Connection refused
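And if even netcat is missing, bash itself can probe TCP ports through its /dev/tcp pseudo-device. A sketch of my own (note this is a bash feature, not POSIX sh; the host and port list are placeholders to adapt):

```shell
#!/bin/bash
# Probe a few TCP ports by attempting a connect via bash's /dev/tcp.
host=
for port in 22 80 9100; do
  # The subshell opens fd 3 to host:port; success means something is listening.
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port closed"
  fi
done
```

It is far slower and noisier than nmap or netcat -z, but handy on a stripped-down box where bash is all you have.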

If you do not care about or need a secure transmission you could skip using scp and modify nc to transfer files, on the server side:
# nc -l -vv -p 9000 > myfilename.txt <--- sets up netcat to listen on port 9000, output whatever it receives to said filename, and be verbose about it so I can watch (if I have a console open on the server)

And on the client side I issue:
# nc -vv <server address> 9000 < myfilename.txt <--- feed my file using nc to the server at port 9000 and tell me about it while doing it
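One caveat I'd add: a raw nc stream has no integrity check of its own, so after a transfer like the one above it is worth comparing checksums on both ends (sha256sum here is just the coreutils tool; any hash computed identically on both sides will do):

```shell
#!/bin/sh
# Run on BOTH the sender and the receiver after the transfer;
# the transfer is good only if the two hashes match.
sha256sum myfilename.txt

# Scripted comparison, with the two sums gathered however you like:
# [ "$LOCAL_SUM" = "$REMOTE_SUM" ] && echo "transfer OK" || echo "transfer CORRUPT"
```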

STRACE — strace is a wicked little debugger. This command is akin to Toto in The Wizard of Oz: it will peel back the curtain and show you what levers and wheels the great Oz is working when you execute a command. Ever wonder why, or specifically where, your compile and makefile were vomiting? Pondering why your application hangs and just appears to be caught in a time/space vortex? strace lets you watch, step by step, the system calls made when you request an action. The warning for you is that it reveals A LOT of information, which will require a little patience on your part when combing through the data. However, that little bit of patience will pay off when you are able to find where you need to modify your code, add libraries, or fix whatever the problem is when your command errors out. Below is an example output (truncated/edited for space) from syncing the system clock to the hardware clock:

# strace hwclock --hctosys

execve("/sbin/hwclock", ["hwclock", "--hctosys"], [/* 32 vars */]) = 0

brk(0) = 0x608000

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b7148539000

uname({sys="Linux", node="L01395", ...}) = 0

access("/etc/", F_OK) = 0

mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b714853a000

access("/etc/", R_OK) = -1 ENOENT (No such file or directory)

open("/etc/", O_RDONLY) = 3

fstat(3, {st_mode=S_IFREG|0644, st_size=78580, ...}) = 0

mmap(NULL, 78580, PROT_READ, MAP_PRIVATE, 3, 0) = 0x2b714853c000

close(3) = 0

access("/etc/", F_OK) = 0

open("/lib/", O_RDONLY) = 3

read(3, "\177ELF\2\1\1..."..., 832) = 832

fstat(3, {st_mode=S_IFREG|0755, st_size=1367432, ...}) = 0

mmap(NULL, 3473592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x2b714873a000

mprotect(0x2b7148881000, 2097152, PROT_NONE) = 0

mmap(0x2b7148a81000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP...
brk(0) = 0x608000

brk(0x629000) = 0x629000

open("/usr/lib/locale/locale-archive", O_RDONLY) = -1 ENOENT (No such file or directory)

open("/usr/share/locale/locale.alias", O_RDONLY) = 3

fstat(3, {st_mode=S_IFREG|0644, st_size=2586, ...}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b714853c000

read(3, "# Locale name alias data base.
#"..., 4096) = 2586

stat("/etc/adjtime", {st_mode=S_IFREG|0644, st_size=46, ...}) = 0

open("/etc/adjtime", O_RDONLY) = 3

fstat(3, {st_mode=S_IFREG|0644, st_size=46, ...}) = 0

mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b714865f000

read(3, "0.293746 1185223896 0.000000
118"..., 4096) = 46

close(3) = 0

munmap(0x2b714865f000, 4096) = 0

open("/dev/rtc", O_RDONLY) = 3

ioctl(3, RTC_RD_TIME, {tm_sec=26, tm_min=51, tm_hour=7, tm_mday=24, tm_mon=6, tm_year=107, ...}) = 0

ioctl(3, RTC_RD_TIME, {tm_sec=26, tm_min=51, tm_hour=7, tm_mday=24, tm_mon=6, tm_year=107, ...}) = 0

close(3) = 0

munmap(0x2b714865f000, 4096) = 0

stat("/etc/localtime", {st_mode=S_IFREG|0644, st_size=861, ...}) = 0

settimeofday({1185281487, 0}, {300, 103079215111}) = 0

exit_group(0) = ?

Process 9729 detached

These commands and suggestions are strictly my two cents (and, when adjusted for inflation, usually worth less than that). However, central to an understanding of Linux is an understanding of how to interact with and interpret the data the kernel will serve up. These CLI tools will run on any Linux distribution, though not all of them are included by default. And no, I'm not writing this tip sheet in vi... but I do believe that learning the CLI and using the Linux console is akin to learning to drive a standard transmission: once you understand the clutch and gas routine you can drive anything. The CLI is the standard that never lets you down. Happy computing!


enKryptik Observations - MiniBytes of Linux

I'm enjoying the new site. In all actuality, I hope that it continues to build into the central location for all things Linux... and not just because of its name. With that in mind, I am going to do my part to help and increase my contributions; after all, unless individuals are active in the community, there is no community.

The thing I have always thoroughly enjoyed about Linux is how it lets me learn. I can push my limits and comfort zones with this particular operating system. Oh yeah... and I can do it for free. I also like the fact that there is more than one way to do something and reach the same goal. Because of that, the challenge Linux provides is directly proportional to how far the end user is willing to peel back the OS and go exploring. One of the best examples is the Ubuntu distribution. Ubuntu is crisp, clean, and has the reputation of just working; it makes the transition from Windows exceptionally easy for a new user. However, pull back the pomp and circumstance, and Ubuntu is still Linux. Any user can choose, at any time, to begin to delve deeper into our favored operating system.

With all that in mind, I've decided to try to become more active in the community. I've been working in Linux since 2001 and have designed, engineered, and supported Linux in corporate IT infrastructure. Like any fan, it is the only OS I run in my household, and I actively encourage friends and family to adopt at least one Linux-based system in their abodes.

I want to specifically support this site. I've written some articles for Novell's CoolSolutions community, and I'm going to reproduce several of those articles here to encourage and help others. I make no claim of being some super guru, dungeon master, or captain awesomeness. Rather, I'm just the type who lives for challenges, likes to learn new tricks and methods, pushes to explore, and loves Linux and all that this tool gives me the capability to do.

I'm interested in the tips and thoughts that others have. Feel free to contribute with me if you'd like!

Cheers - Kryptikos


Halfway status report. Display - success

My computer is becoming wearable, though it's taking some time.

Half of the allocated time is gone already, and I feel like I should have accomplished a lot more. I also feel like I'll be finished in a week, which is not true either. I now have: a working computer, which doesn't yet seem to understand my 3G modem; a working head-mounted display, in monochrome, as it's a lot more readable that way and I'm going to spend my time on the command line anyway; and a barebones keyboard, which has not yet been programmed because of a missing ATmega8 programmer thingy.

My achievement this week has definitely been the display. I have bought a lot of cables and adapters. The BeagleBoard gives S-Video and the Myvu Crystal wants S-Video or composite, but it accepts a 4-conductor 3.5 mm plug. Currently I have a modified S-Video to RCA -> RCA to 3.5 mm plug, but a while ago I had something like four or five adapters and cables chained together. That thing didn't "just work", and everything was a lot better after I had replaced the wiring. I did try to mess with the OS settings for overlays, displays, and framebuffers without enough knowledge, but the problem was with my cables. There was also the issue that at first I didn't get any signal at all, before Gregor Richards helped me a bit. There's just so much conflicting and old information, and not enough new information, about S-Video on the BeagleBoard with Ångström. I'll also have to thank a fellow student who has been helping me with information and other things.

I still need to modify the cables, though. The Myvu Crystal is a weird thing: the USB charging cable works as a ground, so I shouldn't have another ground going to the BeagleBoard, because that messes up the signal. I already cut the ground pins there, but that wasn't enough. Currently I get the best signal if I leave those RCA cables hanging so that their grounds don't touch each other, use the white RCA of the A/V jack, and just touch the first metal contact on the Myvu pendant with the tip of the 3.5 mm plug. If I use the red one and stick the 3.5 mm plug all the way in, the signal gets worse, probably because there are also audio signals going there. I need to modify the plug so that only the very tip can connect to anything.

It's been irritating to look at a micro-sized, flashing NTSC screen, but now it is perfectly readable and the signal is just perfect. After I've fixed the cables, I'll just need to take the Myvu apart and put one of the displays inside some sunglasses I'll be buying soon. I'll also need that half-silvered/see-through mirror, which will sit in front of my eye and allow me to see through the display. Then the display is done.

It could have been a lot easier and better, and still might be, if I'd just buy the correct driver chip from Kopin. It's "just" $50, and the correct one is the version where I could bypass composite and feed it a pure digital signal (I've been told that the Myvu pendant has the driver board version with a chip whose pins aren't exposed, so no wiring there). That way I could have used the DVI-D signal of the BeagleBoard and everything would be lollipops and rainbows. They still are, but they would be the colored kind that way. There are some nice specifications on the Kopin webpage. For your information, the Myvu Crystal uses Kopin 640x480 microdisplays. The driver board might or might not be Kopin, but it has the same chips that handle the signals and probably the same decoder for composite/S-Video signals.

I'll be getting the chip for the keyboard (a Spiffchorder) soon, but I'll need to rewire the keys and craft a handle for it. I'll try making one from finfoam. Then the keyboard is also done, if it can type Scandinavian letters like ö and such. If not, I might need to touch their code too.

My computer needs wvdial or something similar so it can use my 3G connection. It also needs a box, maybe from finfoam; then again, maybe I should make the box from some clear plastic. I have ordered a USB Ethernet card just because I'll probably need it sometimes. I'll need to decipher why Narcissus, the awesome Ångström distribution image builder, gives me Enlightenment and gnome-games and such when I clearly order a command-line-only setup. I don't know what I should kill to get to the command line now. Ctrl+Alt+F1 doesn't work, though it clearly has worked sometimes for some unfathomable reason. If I just kill enlightenment, xorg, and/or gpe-login, they just restart. I should study my chosen distribution a bit more.
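For the 3G side, wvdial usually just needs a small config once the modem shows up as a serial device. The following is purely a sketch: the device path, baud rate, APN ("internet"), and dial string are assumptions that vary by operator and modem, so treat every value as a placeholder:

```
[Dialer Defaults]
; Device path is an assumption; check dmesg after plugging the modem in.
Modem = /dev/ttyUSB0
Baud = 460800
Init1 = ATZ
; The APN "internet" is operator-specific; substitute your carrier's APN.
Init2 = AT+CGDCONT=1,"IP","internet"
Phone = *99#
Username = ""
Password = ""
Stupid Mode = 1
```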

And finally, I'll need a battery. I'm thinking about rechargeable batteries; I've seen some high-mAh ones that should do with some kind of 5V regulator.

I'm trying to do most of the keyboard this weekend; then I'll check the software I'll be using, and the cables. I need some new, short cables. It's been the joke of the day that I could use the wires as my only clothing; it seems that being mobile doesn't mean not having lots of wires around. After my keyboard is finished, I'll build the computer case and the sunglasses display, and afterwards I'll make the battery and start wearing the thing, once I manage to find a way to wear it all. Maybe I could just hang the computer from my belt. I definitely don't want a fanny pack or a backpack. Maybe something like the Casebelt from Urban Tools. Do you know a nice way of wearing the wearable? Comments welcome.


Linux Kernel Virtual Machine improves build performance

KVM acts as the host for the guest operating systems that build the target software for the user. By switching from VMware build guests to Linux KVM guests, build times for each guest were reduced by as much as 50 percent. Learn how to set up the build server and create guests, customize build requests, and organize and access build results.

Samba public users directory (quick howto)

This quick post shows you how to create a Samba share for a network where every user is forced to a specific username and each file belongs to that username. This is useful when dealing with public folders used as a sort of exchange area between users on a network. (In smb.conf, a share definition like the one below lives under its own bracketed section name.)

It gives read/write access to everyone for directories and files; this is a typical configuration for an exchange area.

Check it out:

comment = Public folder for my network
available = yes
browseable = yes
path = /home/public
guest ok = yes
public = yes
writable = yes
write list = *

force group = commongroup
force user = commonuser
create mask = 0644
directory mask = 0755

printable = no
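The create mask and directory mask lines map straight to Unix permission bits. This small sketch of mine shows what 0644 means on disk (using a scratch directory rather than the real share, which would also need the commonuser/commongroup accounts from the config to exist):

```shell
#!/bin/sh
# 'create mask = 0644' means new files end up rw-r--r--.
demo=/tmp/smb-mask-demo
mkdir -p "$demo"
touch "$demo/file"
chmod 0644 "$demo/file"
ls -l "$demo/file"          # shows -rw-r--r--
stat -c '%a' "$demo/file"   # prints 644
```

After editing smb.conf, running testparm is a quick way to catch syntax errors before restarting the Samba daemons.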


Comments welcomed




Improving consumer market share for GNU/Linux

Trying to prioritize three steps towards a higher Linux-based desktop market share, I came up with these three speed bumps on the road leading to "freedom of choice".


#1 - Stability

Following the debate in papers and blogs these days, there seems to be resistance to replacing XP with Windows 7. Now, my experience tells me that the IT industry has always resisted upgrades. It's a tough job to find any users out there who would argue the old system to be worse than the latest. Remember how good OS/2 was? Or DOS, for that matter? It seems like, over the years, it's just getting worse and worse ;)

Of course not! We are improving all the time. I guess nobody would ask to have a new laptop pre-installed with any OS older than the latest release. XP's been around for nearly a decade; people know it and are comfortable using it. It might not be perfect, but it works, and it has a variety of apps that can be installed out of the box. Download, install. So stability, not in the sense of avoiding crashes and the like, but as in "it really works", is important to the end user.

Now, what about GNU/Linux?

Most distributions have a 6-12 month release cycle, and only two versions of backward support (security patching).

Go ahead, do the math yourself... why upgrade every other year if it's avoidable?

I'd like to see the impact of a GNU/Linux distribution that remains stable for 10-20 years. I'd be likely to choose it. And I guess CIOs in charge of an install base of tens of thousands of clients would be easier to convince to migrate.

Would it be possible? With the architecture of Windows I doubt it - on linux, there might be a chance? 


#2 - Data and security

The world is turning fast these days. Remember how much (or little) data you had stored 10-15 years ago? 50 MB? Might have been a bit more. Today everybody has shitloads of data sitting on their laptops. Prom snapshots, wedding photos, scanned images of the world that was, not to mention music that easily takes up 30+ GB, and your home video and DivX library at 100+ GB. That's a lot of data, and that amount is not going to decrease in the years to come. So: data management is important!

That being said: Dolphin has a looong way to go.

The traditional directory structure for filing data is starting to lose relevance. Hell, if I am not super structured in building up my directory tree, it gets really hard to find that budget file I spent hours preparing last year. Not to mention my 13-year-old son or my 65-year-old mother, who will simultaneously go "huh?" if I ask them to name their default directory path. They don't know! All they know is that if they put a CD in the drive, the music will appear in the player, and later, when they plug in the iPod, the music will magically appear on it minutes later. Great! :)

Some data is private, while most data is meant to be shared amongst users. Does GNU imply some standard for placing (mounting) shared drives and document structure that is implemented in the applications? Admittedly, it does take away a bit of freedom, but in return it also takes out the annoying "path to:" text boxes in many GNU applications. And as a bonus you gain access to the users (users!) who don't care about HDD architecture.
But make it a GNU standard; don't have it depend on the distribution flavour. Make sure to communicate the standard to the open source community developers.

I'd like to see a distribution that comes with out-of-the-box policies and directory-structure that allows end user low-brain-activity access to data. :p


#3 - Consumer applications

Hopefully, the presence of a stable distribution that doesn't require regular reinstalls will grow the number of end-user applications.

The security model of GNU/linux shows the path for managing those. But I don't really see anybody walk the talk yet.

Today, root access is required to install or uninstall anything on your Linux-based PC.

I see application installation happen in three ways (A, B, C):

A. User install - concerning private data management. GNU needs more emphasis on user installs. I am not really sure exactly which applications fall into this category, but I guess mostly smaller (thin client/offline) games and social networking stuff like Picasa (though a Linux version is not available - but you get the point).
Let users install apps in their own account with no risk to the main system. Application data is private.

And remember, end users 'double click' - they don't fiddle with 'gcc', 'make' & 'install'. Which again means distribution of binaries rather than source code. Don't let the open source ideals block the growth of Linux. People are willing to pay for almost any application (even if they can have it for free - MS is a stunning example). There are two elements of security here:

1. Open source is secure, because if you are concerned about security issues, you can fix them yourself and distribute your own source 'fork'. Proactive security.

2. Paid software comes with T&C's. If the T&C are broken, you can sue the vendor. Passive security.

Anyway, the latter model is very popular with Average Joe and amongst business leaders, because it doesn't require programming skills.
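A user install along the lines of A needs no root at all: the application lives under the user's home directory and is reached through the user's own PATH. A minimal sketch — 'someapp' and the layout are made up for illustration, and a scratch directory stands in for the usual ~/.local prefix:

```shell
# Per-user install: everything stays inside a prefix the user owns.
prefix=$(mktemp -d)        # stand-in for ~/.local
mkdir -p "$prefix/bin" "$prefix/opt/someapp"

# Pretend this executable came out of a downloaded binary tarball.
printf '#!/bin/sh\necho hello from someapp\n' > "$prefix/opt/someapp/someapp"
chmod +x "$prefix/opt/someapp/someapp"

# Expose it on PATH without touching anything system-wide.
ln -sf "$prefix/opt/someapp/someapp" "$prefix/bin/someapp"
PATH="$prefix/bin:$PATH"   # add to ~/.profile to make this persist

someapp                    # → hello from someapp
```

Nothing here requires a password prompt, and removing the app is just deleting the prefix — which is exactly the property a user install should have.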

B. Group install - concerning shared data management. To avoid having the same installation in every user account, larger applications (office suites, media players, WoW-like games) could be installed and accessed across a group of users, but still without root permission. Application data can be shared by the group or kept private.

Admittedly, I have security concerns regarding group-wide installs, and you might argue that some kind of admin authorization should be required to perform this kind of installation.
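The group-install idea in B maps fairly naturally onto a setgid directory: members of the group can write into it, and anything created inside inherits the group automatically. A sketch with a hypothetical 'appusers' group and a scratch path standing in for something like /opt:

```shell
# Shared application directory for one group; no root-owned files inside.
appdir=$(mktemp -d)/shared-apps   # in practice, e.g. /opt/shared-apps
mkdir -p "$appdir"

# rwx for owner and group, r-x for others, plus the setgid bit (the
# leading 2), so new files and subdirectories inherit the directory's group.
chmod 2775 "$appdir"
# chgrp appusers "$appdir"        # needs the 'appusers' group to exist

stat -c '%a' "$appdir"            # → 2775
```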

C. Root install - concerning system-wide administration, security patching and general data management. In the ideal world, the end user need not care about root access. In the real world, the user might occasionally have to type in the root password when prompted by the security update client. Application data is 'invisible' to regular users.


The question is what happens when your PC boots up into nothing but a browser? I'll think about that over the next couple of days and get back to you with my view.

Thanks for reading all the way to the end. Post a comment, I'd appreciate it!


Super scary new computer game trilogy ported to Linux

Halloween has come early for Linux-loving gamers in the form of the scary Penumbra game trilogy, which has just recently been ported natively to GNU-Linux by the manufacturer, Frictional Games.  The Penumbra games, named Overture, Black Plague, and Requiem, respectively, are first person survival horror and physics puzzle games which challenge the player to survive in a mine in Greenland which has been taken over by a monstrous infection/demon/cthulhu-esque thing.  The graphics, sounds, and plot are all admirable in a scary sort of way, given that the protagonist is an ordinary human with no particular powers at all, who fumbles around in the dark mine fighting zombified dogs or fleeing from infected humans.

The Penumbra games are remarkable for their physics engine -- rather than just bump and acquire, the player must use the mouse to turn knobs and open doors, and can grab and throw pretty much anything in the environment. The physics engine drives objects to fly and fall exactly as one would expect. The porting of a game with such a deft physics engine natively to Linux might be one of the most noteworthy events for GNU-Linux gamers since the World of Goo Linux port.

It is always a big deal when a cool new game like this is created for GNU-Linux. It creates new buzz for Linux. People use operating systems not for the operating system itself, but for the things they can do with it.

Also, what makes Microsoft powerful is not their operating system itself, but all of the hundreds of thousands of applications that are written for Microsoft Windows.  Again, people get Linux or the Mac or Microsoft Windows for what they can do with it. 

And it's good that a company is actually charging for something that runs on top of GNU-Linux. People value stuff that they pay for. If someone pays for the Penumbra games, they will tend to hold GNU-Linux in higher esteem, because they have paid for a game that runs on top of it.

For this weekend only, from July 17 through July 20, you can buy a copy of the game for only $5.00 by going to the Frictional Games home page.



Best practices for using the Java Native Interface

JNI is an important tool to leverage existing code assets. This article identifies the top 10 JNI programming pitfalls, provides best practices for avoiding them, and introduces the tools available for implementing these practices.

enKryptik Observations - Microsoft v.

This was just too good not to pass along. I love hearing about how "supportive" and "tolerant" Microsoft is of Linux these days. Heck, Novell believed them enough to form a partnership with them. Of course, Novell has been hemorrhaging cash and people for a while now, so to them it made sense. They needed the stability *cough cough* that Microsoft would bring to the table by saying they won't sue them for copyright infringement over code they won't talk about.

Anyways, that's not the point of this little ditty. I just found it humorous that this happened today as I was working on my profile and making some quick updates. I happen to use Firefox as my primary browser. Generally, when I am logged into a site and want to see something from a visitor's perspective, I fire up IE or Opera etc. to look at the environment, since cookies are active and so forth. So after my adjustments, I fired up the ole IE and clicked on the page. Sure enough, true to its reputation as Internet Exploder, she exploded immediately. Here's what she vomited back to me:

Ha. IE doesn't play nicely in the sandbox. Shame on them. All I can say is no browser should fear the penguin! Anyone else have similar experiences?

Cheers - Kryptikos

"This is Linux country...on a quiet night you can hear Windows reboot."

