
open-slx Weekly News 19 published

We are pleased to announce open-slx Weekly News 19, available in PDF and EPUB formats.

In this week's issue you will find (abstract):

  • Updates from the Newsteam
  • Maliit & Plasma Active
  • New Font-Repository available
  • Linux Colormanagement Hackfest
  • and more...

open-slx Weekly News 19 can be downloaded here as PDF [275.50 kB] and here as EPUB [19.10 kB].

Because writers need coffee, please consider donating.


Download: PDF format [275.50 kB], EPUB format [19.10 kB]




open-slx Weekly News 18 published

We are pleased to announce open-slx Weekly News 18, available in PDF and EPUB formats.

In this week's issue you will find (abstract):

  • open-slx Screencast: Updating Plasma Active
  • Vivaldi Tablet with 8GB
  • Tizen runs Android Apps too
  • Installing Java 7
  • and more...

open-slx Weekly News 18 can be downloaded here as PDF [881.31 kB] and here as EPUB [11.94 kB].

Because writers need coffee, please consider donating.


Download: PDF format [881.31 kB], EPUB format [11.94 kB]


This weekly review was published under the Creative Commons Attribution-ShareAlike license.


IPv6 take-up on Linux outpacing Windows in top sites

During a recent analysis of the Alexa top 1 million websites, I pulled some statistics on IPv6-enabled sites; using these as a basis, I created an IPv6 infographic showing the current breakdown of technologies in use by the IPv6-enabled sites in the top 1 million.

Linux is way ahead of any other operating system, with a significant share of the sites hosted on Linux platforms running Apache or Nginx as the web server. Microsoft IIS has about 4.5% of the IPv6-enabled sites.

Let's dig into that last statistic a little more. In the top 1 million sites, 11,237 were found to have a corresponding AAAA record, which is about 1.1% of the total.

Of those 11,237 sites, 4.5% were running Microsoft IIS, while 72.3% were running Apache and 18.1% Nginx, putting these open source web servers way out in front. Maybe the system administrators and web teams that operate open source servers are more inclined to play with new technologies, or perhaps there is another reason for the significant difference.
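As a quick sanity check, the percentages above can be reproduced from the raw counts. This throwaway snippet is my own illustration of the arithmetic, not part of the original analysis:

```shell
# counts reported in the survey above
total=1000000     # Alexa top sites sampled
v6=11237          # sites found with an AAAA record

# overall IPv6 penetration (~1.1%)
awk -v t="$total" -v s="$v6" 'BEGIN { printf "IPv6-enabled: %.1f%%\n", 100 * s / t }'

# approximate absolute counts behind the 72.3% / 18.1% / 4.5% shares
awk -v s="$v6" 'BEGIN { printf "Apache: ~%d  Nginx: ~%d  IIS: ~%d\n", s*0.723, s*0.181, s*0.045 }'
```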

Netcraft puts Microsoft IIS's share of web servers in the top 1 million at about 13%. I would speculate that the primary reason for this slow take-up is IIS's predominance in business: since businesses usually only make a change when it is either mandated or there is a financial gain to be made, current IPv6 rollouts and deployments are on hold.

The United States also has a smaller take-up of IPv6-enabled sites, much lower than in many other parts of the world such as Europe and parts of Asia.

For further findings, take a look at the infographic.

Any thoughts or comments on the variation in the deployment of IPv6 are most welcome.



Peter is the owner of LLC, a provider of open source security testing services available online. Try the free port scanners and other tools for testing the security of internet-connected systems.


Electronic Arts talks at Ubuntu Developer Summit

In these phenomenal times for Linux gaming, there are even more great things coming to Linux soon. As some of you know, the Ubuntu Developer Summit is taking place in California on 7–11 May this year.



Ubuntu 12.04 released

It's that time again: a new version of Ubuntu, with Long Term Support for the next five years, has been released. What does the new version offer Linux gamers?



Leveraging Open Source and Avoiding Risks in Small Tech Companies

Today’s software development is geared more towards building upon previous work and less about reinventing content from scratch. Resourceful software development organizations and developers use a combination of previously created code, commercial software and open source software (OSS), and their own creative content to produce the desired software product or functionality. Outsourced code can also be used, which in itself can contain any of the above combination of software.

There are many good reasons for using off-the-shelf and especially open source software, the greatest being its ability to speed up development and drive down costs without sacrificing quality. Almost all software groups knowingly, and in many cases unknowingly, use open source software to their advantage. Code reuse is possibly the biggest accelerator of innovation, as long as OSS is adopted and managed in a controlled fashion.

In today’s world of open-sourced, out-sourced, easily-searched and easily-copied software it is difficult for companies to know what is in their code. Anytime a product containing software changes hands there is a need to understand its composition, its pedigree, its ownership, and any open source licenses or obligations that restrict the rules around its use by new owners.

Given developers’ focus on the technical aspects of their work and emphasis on innovation, obligations associated with use of third party components can be easily compromised. Ideally companies track open source and third party code throughout the development lifecycle. If that is not the case then, at the very least, they should know what is in their code before engaging in a transaction that includes a software component.
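As a first, very rough pass at the tracking described above, even a plain text search for common license markers can surface candidate third-party files. This is a minimal sketch of my own, assuming the code base lives under the directory in SRC; it is no substitute for a real audit tool, which also detects snippets and matches code against package databases:

```shell
# scan a source tree (default: current directory) for common OSS license markers
SRC="${SRC:-.}"
grep -rilE "gnu (general|lesser) public license|apache license|mit license|mozilla public license" "$SRC" \
    | sort > third_party_candidates.txt

echo "files with license markers: $(wc -l < third_party_candidates.txt)"
```

The resulting list is only a starting point for the manual or expert review the article goes on to describe.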

Examples of transactions involving software are: a launch of a product into the market, mergers & acquisitions (M&A) of companies with software development operations, and technology transfer between organizations whether they are commercial, academic or public. Any company that produces software as part of a software supply chain must be aware of what is in their code base.


Impact of Code Uncertainties

Any uncertainty around software ownership or license compliance can deter downstream users, reduce ability to create partnerships, and create litigation risk to the company and their customers. For smaller companies, intellectual property (IP) uncertainties can also delay or otherwise threaten closures in funding deals, affect product and company value, and negatively impact M&A activities.

IP uncertainties can affect the competitiveness of small technology companies due to indemnity demands from their clients. Therefore technology companies need to understand the obligations associated with the software that they are acquiring. Any uncertainties around third party content in code can also stretch sales cycles. Lack of internal resources allocated to identification, tracking and maintaining open source and other third party code in a project impacts smaller companies even more.

Along with licensing issues and IP uncertainties, organizations that use open source also need to be aware of security vulnerabilities. A number of public databases, such as the US National Vulnerability Database (NVD) or Carnegie Mellon University's Computer Emergency Response Team (CERT) database, list known vulnerabilities associated with a large number of software packages. Without accurate knowledge of what exists in the code base it is not possible to consult these databases. Aspects such as known deficiencies, vulnerabilities, security risks, and code pedigree all assume the existence of a software Bill of Materials (BOM). In a number of jurisdictions, another important aspect to consider before a software transaction takes place is whether the code includes encryption content or other content subject to export control; this is important to companies that do business internationally.
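Once a Bill of Materials exists, even a crude offline cross-check becomes possible. The sketch below is hypothetical: it assumes a bom.txt with one package-version per line and a locally cached vulnerable.txt extracted from a feed such as the NVD (the toy data here is invented for illustration):

```shell
# toy input files; in practice bom.txt comes from your audit and
# vulnerable.txt from a vulnerability feed such as the NVD
printf "openssl-0.9.8\nzlib-1.2.5\nlibpng-1.5.4\n" > bom.txt
printf "openssl-0.9.8\nlibxml2-2.7.8\n"            > vulnerable.txt

# lines common to both sorted lists = BOM entries with known vulnerabilities
comm -12 <(sort bom.txt) <(sort vulnerable.txt)
# prints: openssl-0.9.8
```

Real tooling matches on CPE identifiers and version ranges rather than exact strings, but the principle is the same: no BOM, no check.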


The benefits of OSS usage can be realized and the risks can be managed at the same time. Ideally, a company using OSS should have a process in place to ensure that OSS is properly adopted and managed throughout the development cycle. Having such a process allows organizations to detect any licensing or IP uncertainties at the earliest possible stage during development, which reduces the time, effort, and cost associated with correcting the problem later down the road.

If a managed OSS adoption process spanning all stages of a development life cycle is not in place, there are other options available to smaller companies. Organizations are encouraged to audit their code base, or software in specific projects, regularly. Some may decide to examine third party contents and the associated obligations just before a product is launched, or in anticipation of an M&A.


Internal Audits

The key here is having an accurate view of all third-party content, including OSS, within the company. One option is to carry out an internal audit of the company code base for the presence of outside content and its licensing and other obligations. Unfortunately, manually auditing a typical project of 1,000-5,000 files is a resource- and time-consuming process. Automated tools can speed up the discovery stage considerably. For organizations that do not have the time, resources or expertise to carry out an assessment on their own, an external audit would be the fastest, most accurate and most cost-effective option.


External Audits

External audit groups ideally deploy experts in open source and software licensing who use automated tools, resulting in accurate assessments and fast turnaround. A large audit project requires significant interaction between the audit agency and company personnel, typically representatives of the R&D group, the resident legal or licensing office, and product managers. It also requires an understanding of the company's outsourcing and open source adoption history, knowledge of the code portfolio in order to break it down into meaningful smaller sub-projects, test runs, and consistent interaction between the audit team and the company representatives.

Smaller audit projects, however, can be streamlined, and a number of overhead activities can be eliminated, resulting in a time- and cost-efficient solution without compromising detail or accuracy. An example would be a streamlined, machine-assisted software assessment service. The automated scanning operation, using automated open source management tools, can provide a first-level report in hours. Expert review and verification of the machine-generated reports and final consolidation of the results into an executive report can take another few days, depending on the size of the project.

The executive report delivered by an external audit agency is a high-level view of all third party content, including OSS, and the attributes associated with it. The audit report describes the software code audit environment, the process used, and the major findings, drawing attention to specific software packages, or even individual software files and their associated copyrights and licenses. The audit report will also highlight third party code snippets that were "cut & pasted" into proprietary files and how that could affect the distribution or the commercial model. This is important for certain licenses, such as those in the GPL (GNU General Public License) family of OSS licenses, depending on how the code or code snippet is utilized.

The report significantly reduces the discovery and analysis effort required from the company being audited, allowing them to focus on making relevant decisions based on the knowledge of their code base.


Third party code, including open source and commercially available software packages, can accelerate development, reduce time to market and decrease development costs. These advantages can be obtained without compromising quality, security or IP ownership. Especially for small companies, any uncertainty around code content and the obligations associated with third party code can impact the ability of an organization to attract customers. Ambiguity around third party code within a product stretches sales cycles, reduces the value of products, and impacts company valuations. For small organizations, an external audit of the code base can quickly, accurately and economically establish the composition of the software and its associated obligations.


Left 4 Dead for Linux is imminent

Michael Larabel has been invited to visit the Valve headquarters by its boss Gabe Newell. Yesterday he was able to take a look at the development progress of Steam for Linux. He not only talked to Gabe Newell about Valve's plans regarding Linux, but even tested the native Steam client on Ubuntu.



Phenomenal times for Linux Gaming

Although Linux gamers are experiencing a phenomenal time nowadays, even better times await them. While Steam for Linux is being developed by Valve, more Linux games than ever before are being published.

The latest Humble Indie Bundle introduced another great new game for Linux a few days ago: Botanicula.



PostgreSQL C++ tutorial

Installation and configuration

This tutorial was done on Linux Mint 12 and will also work on Ubuntu 11.10. I did the same on CentOS 6.2 and will write about it later; installing PostgreSQL 9 and the corresponding libpqxx is rather complicated there.

Using the terminal, we find what is available:

apt-cache search postgresql

These are the results we are interested in:

libpqxx-3.0 - C++ library to connect to PostgreSQL

libpqxx-3.0-dbg - C++ library to connect to PostgreSQL (debugging symbols)

libpqxx3-dev - C++ library to connect to PostgreSQL (development files)

libpqxx3-doc - C++ library to connect to PostgreSQL (documentation)

postgresql-9.1 - object-relational SQL database, version 9.1 server

postgresql-client - front-end programs for PostgreSQL (supported version)

postgresql-client-9.1 - front-end programs for PostgreSQL 9.1

postgresql-client-common - manager for multiple PostgreSQL client versions

postgresql-common - PostgreSQL database-cluster manager

postgresql-contrib - additional facilities for PostgreSQL (supported version)

postgresql-contrib-9.1 - additional facilities for PostgreSQL

It will return many more packages, but we do not need them all. Now in the terminal run:

sudo apt-get install postgresql-9.1 postgresql-client postgresql-client-9.1 postgresql-client-common postgresql-common postgresql-contrib postgresql-contrib-9.1

or, if you prefer a GUI, Software Manager or Synaptic will also do. Do not forget the contrib packages; you will need them for pgAdmin III.

Again in terminal do:

sudo su postgres

After entering your password you are the postgres user. As postgres:

psql template1

psql (9.1.3)

Type "help" for help.

template1=# create role testuser login password 'testpass' superuser valid until 'infinity';



Typing \q (escaped q) quits psql, and after one exit you are back at your login shell. If you like, install pgAdmin III now, or use psql to create the database and table you are going to practice on.

To allow remote connections do:

sudo gedit /etc/postgresql/9.1/main/postgresql.conf

and modify listen_addresses to include the addresses PostgreSQL should listen on, something like this (192.0.2.10 here is a placeholder for your server's address):

listen_addresses = 'localhost,192.0.2.10'

Also in pg_hba.conf we need to enable remote users:

sudo gedit /etc/postgresql/9.1/main/pg_hba.conf

It should look something like this towards the bottom of the file (192.0.2.0/24 is a placeholder for the network you want to allow):

# IPv4 local connections:

host all all 127.0.0.1/32 md5

host template1 testuser 192.0.2.0/24 md5

host testpgdb testuser 192.0.2.0/24 md5

After saving changes restart PostgreSQL server:

sudo /etc/init.d/postgresql restart

Please create a database testpgdb with sufficient rights for testuser, or rename the database in the C++ example.

Now it is time to install libpqxx. From terminal execute:

sudo apt-get install libpqxx-3.0 libpqxx-3.0-dbg libpqxx3-dev libpqxx3-doc

and installation is done.

C++ example

The code is a slightly adjusted test 005, which comes with libpqxx3-doc; to see where everything is installed, use:

dpkg -L libpqxx3-doc

It connects to the local instance of PostgreSQL; if you want a remote one, edit the connection string. If the connection succeeds, it creates a table, inserts data, and at the end performs one non-transactional select.

#include <iostream>
#include <pqxx/pqxx>

using namespace std;
using namespace pqxx;

int main(int argc, char** argv) {
    // Connects to the local PostgreSQL instance; edit the connection string for a remote server.
    connection C("dbname=testpgdb user=testuser password=testpass hostaddr=127.0.0.1 port=5432");
    string tableName("tabletwo");

    if (C.is_open()) {
        cout << "We are connected to " << C.dbname() << endl;
    } else {
        cout << "We are not connected!" << endl;
        return 0;
    }

    // Drop the table if it already exists; ignore the error when it does not.
    try {
        work Q(C);
        Q.exec("DROP TABLE " + tableName);
        Q.commit();
    } catch (const sql_error &) {
    }

    work T(C);
    T.exec("CREATE TABLE " + tableName + " (id integer NOT NULL, name character varying(32) NOT NULL, salary integer DEFAULT 0);");
    T.exec("ALTER TABLE " + tableName + " ADD CONSTRAINT \"PK_IDT\" PRIMARY KEY (id);");

    // Sample rows, matching the output shown below.
    tablewriter W(T, tableName);
    string load[][3] = {
        {"1", "John",   "0"},
        {"2", "Jane",   "1"},
        {"3", "Rosa",   "2"},
        {"4", "Danica", "3"}
    };
    for (int i = 0; i < 4; ++i)
        W.insert(&load[i][0], &load[i][3]);
    W.complete();
    T.commit();

    // Read everything back with a non-transactional select.
    nontransaction N(C);
    result R(N.exec("select * from " + tableName));
    if (!R.empty()) {
        for (result::const_iterator c = R.begin(); c != R.end(); ++c) {
            cout << '\t' << c[0].as(string()) << '\t' << c[1].as(string()) << '\t' << c[2].as(string()) << endl;
        }
    }
    return 0;
}
In order to compile the code you will need to tell g++ where the libpqxx headers are (they are not on the default include path) and tell the linker which libraries to link against. Something like this:

g++ hello.cxx -o hello -I/usr/local/include/ -lpqxx -lpq

If your libpqxx or libpq are in an unusual place you will need -L[path to where they are]; there is a lot of that on CentOS or Fedora ;-)

After executing hello (./hello) you should be rewarded with the following output:

We are connected to testpgdb

NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "PK_IDT" for table "tabletwo"

1 John 0

2 Jane 1

3 Rosa 2

4 Danica 3

I will write more on the subject and explain the Red Hat, CentOS, and Fedora installation. After that we will look at coding with Eclipse and NetBeans, again with PostgreSQL and libpqxx.


Sync data between systems using a pen drive

It is easy to sync two connected systems, but what if you need to sync two or more systems that are not connected to each other? You need to use an external storage device, but copying data to an external device every time is not always easy, and there is a chance you may forget to copy some files. So here is a small bash shell script: just set the paths once in the script and it will take care of the rest.

Save the following as ~/bin/ and mark it executable by running 'chmod +x ~/bin/'.
To sync data from the local system to the external device, run the script with 'upload' as the argument (enter 'satellite' when asked for the passphrase).
To sync data from the external device to the local system, run it with 'download' as the argument and 'base' as the passphrase.

In this case the external storage is a USB pen drive labeled SATELLITE.


#!/bin/bash

if [ $# -ne 1 ];then
    echo "Usage: $0 upload|download"
    exit 0
fi
arg=$1

# ---- BLOCK-START ----
# add paths to both arrays; the basename of each path must be a folder
# (the paths and script locations below are examples -- replace with your own)
path_satellite=( "/media/SATELLITE/docs" "/media/SATELLITE/projects" )
path_base=( "$HOME/docs" "$HOME/projects" )
f_remote="/media/SATELLITE/bin/$(basename $0)"   # copy of this script on the pen drive
f_local="$HOME/bin/$(basename $0)"               # local copy of this script
# ---- BLOCK-END ----

# check that both arrays contain the same number of elements
if [ ${#path_satellite[@]} -ne ${#path_base[@]} ];then
    echo -e "mismatch detected in ${0}\nkindly verify both arrays, exiting"
    exit 1
fi

total=${#path_base[@]}      # both arrays have the same number of elements
echo "total: $total"

# check that the order of elements in both arrays is the same
counter=0
while [ $counter -lt $total ]
do
    temp_sate=$(basename ${path_satellite[$counter]})
    temp_base=$(basename ${path_base[$counter]})

    if [ "${temp_sate}" != "${temp_base}" ];then
        echo "satellite: ${temp_sate}"
        echo "base     : ${temp_base}"
        echo "above paths do not match in ${0}, kindly check and rerun"
        exit 1
    fi
    counter=$(expr $counter + 1)
done

# check if the local copy of the script is the latest
script_remote=$(stat ${f_remote} | grep "Modify")
script_remote=$(echo ${script_remote:8:19} | sed 's/[- :]//g')

script_local=$(stat ${f_local} | grep "Modify")
script_local=$(echo ${script_local:8:19} | sed 's/[- :]//g')

echo "remote: ${script_remote}"
echo "local : ${script_local}"

if [ "${script_remote}" -gt "${script_local}" ];then
    cp -f ${f_remote} ${f_local}
    echo "local copy was outdated, hence updated"
    echo "re-run: ${0}"
    exit 1
fi

if [ "$arg" = "upload" ];then
    echo -ne "phrase to overwrite SATELLITE \033[35G: "
    read text

    if [ "$text" != "satellite" ];then
        echo "wrong phrase"
        exit 2
    fi

    for (( x=0; x<$total; x++ ))
    do
        [ ! -d ${path_satellite[x]} ] && mkdir -p ${path_satellite[x]}
        rsync -av --delete ${path_base[x]}/ ${path_satellite[x]}
    done

elif [ "$arg" = "download" ];then
    echo -ne "phrase to overwrite BASE \033[35G: "
    read text

    if [ "$text" != "base" ];then
        echo "wrong phrase"
        exit 2
    fi

    for (( x=0; x<$total; x++ ))
    do
        [ ! -d ${path_base[x]} ] && mkdir -p ${path_base[x]}
        rsync -av --delete ${path_satellite[x]}/ ${path_base[x]}

        # enable executable bit of bash-shell & python scripts
        find ${path_base[x]} -name "*.sh" -exec chmod +x {} \;
        find ${path_base[x]} -name "*.py" -exec chmod +x {} \;

        # enable executable bit of binaries
        # if an executable path/file name contains a single quote ('), it is skipped -- still needs fixing!
        var=$(find ${path_base[x]} -type f -print | grep -v "'" | xargs file | grep ELF | cut -d':' -f1)
        if [ -n "${var}" ];then
            echo ${var} | xargs chmod +x
        fi
    done
fi

echo -e "synching \033[35G DONE"
#------------ END ------------


auto indent in vim

$ vim ~/.exrc

set shiftwidth=4
set softtabstop=4
set nu

if has("autocmd")
     filetype plugin indent on
endif

Vim parses ~/.exrc on start.

shiftwidth and softtabstop make sure that when the Tab key is pressed, four columns of indent are inserted, and when Backspace is pressed, four columns are deleted.


'nu' shows line numbers.


'filetype plugin indent on' loads the auto-indent plugin for the file type (by extension) if available.
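One caveat about the settings above: with softtabstop alone, Vim may still write literal tab characters once eight columns of indent accumulate. If you want spaces only, two lines commonly added to the same file (my addition, not part of the original ~/.exrc) are:

```vim
set expandtab    " insert spaces instead of literal tab characters
set autoindent   " new lines inherit the indent of the previous line
```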


