
Community Blogs



Ubuntu 12.04 released

It's time again: a new version of Ubuntu, with Long Term Support for the next five years, has been released. What does the new version offer Linux gamers?

Read more at www.SteamForLinux.com

 

Leveraging Open Source and Avoiding Risks in Small Tech Companies

Today's software development is geared more towards building upon previous work and less towards reinventing content from scratch. Resourceful software development organizations and developers use a combination of previously created code, commercial software, open source software (OSS), and their own creative content to produce the desired software product or functionality. Outsourced code can also be used, and it in turn can contain any combination of the above.

There are many good reasons for using off-the-shelf and especially open source software, the greatest being its ability to speed up development and drive down costs without sacrificing quality. Almost all software groups knowingly, and in many cases unknowingly, use open source software to their advantage. Code reuse is possibly the biggest accelerator of innovation, as long as OSS is adopted and managed in a controlled fashion.

In today's world of open-sourced, outsourced, easily-searched and easily-copied software it is difficult for companies to know what is in their code. Any time a product containing software changes hands there is a need to understand its composition, its pedigree, its ownership, and any open source licenses or obligations that restrict its use by new owners.

Given developers' focus on the technical aspects of their work and their emphasis on innovation, obligations associated with the use of third party components can easily be compromised. Ideally, companies track open source and third party code throughout the development lifecycle. If that is not the case, then at the very least they should know what is in their code before engaging in a transaction that includes a software component.

Examples of transactions involving software are: a launch of a product into the market, mergers & acquisitions (M&A) of companies with software development operations, and technology transfer between organizations whether they are commercial, academic or public. Any company that produces software as part of a software supply chain must be aware of what is in their code base.

 

Impact of Code Uncertainties

Any uncertainty around software ownership or license compliance can deter downstream users, reduce the ability to create partnerships, and create litigation risk for the company and its customers. For smaller companies, intellectual property (IP) uncertainties can also delay or otherwise threaten the closure of funding deals, affect product and company value, and negatively impact M&A activities.

IP uncertainties can affect the competitiveness of small technology companies through indemnity demands from their clients, so technology companies need to understand the obligations associated with the software they are acquiring. Any uncertainty around third party content in code can also stretch sales cycles. A lack of internal resources for identifying, tracking, and maintaining open source and other third party code in a project impacts smaller companies even more.

Along with licensing issues and IP uncertainties, organizations that use open source also need to be aware of security vulnerabilities. A number of public databases, such as the US National Vulnerability Database (NVD) or Carnegie Mellon University's Computer Emergency Response Team (CERT) database, list known vulnerabilities associated with a large number of software packages. Without accurate knowledge of what exists in the code base it is not possible to consult these databases. Aspects such as known deficiencies, vulnerabilities, known security risks, and code pedigree all assume the existence of a software Bill of Materials (BOM). In a number of jurisdictions, another important aspect to consider before a software transaction takes place is whether the code includes encryption content or other content subject to export control; this is important to companies that do business internationally.

Solutions

The benefits of OSS usage can be realized and the risks managed at the same time. Ideally, a company using OSS should have a process in place to ensure that OSS is properly adopted and managed throughout the development cycle. Such a process allows organizations to detect any licensing or IP uncertainties at the earliest possible stage of development, which reduces the time, effort, and cost associated with correcting the problem later.

If a managed OSS adoption process spanning all stages of a development life cycle is not in place, there are other options available to smaller companies. Organizations are encouraged to audit their code base, or software in specific projects, regularly. Some may decide to examine third party contents and the associated obligations just before a product is launched, or in anticipation of an M&A.

 

Internal Audits

The key here is having an accurate view of all third-party content, including OSS, within the company. One option is to carry out an internal audit of the company code base for the presence of outside content and its licensing and other obligations. Unfortunately, manually auditing a typical project of 1,000-5,000 files is a resource- and time-consuming process. Automated tools can speed up the discovery stage considerably. For organizations that do not have the time, resources or expertise to carry out an assessment on their own, an external audit is the fastest, most accurate and most cost effective option.

 

External Audits

External audit groups ideally deploy experts on open source and software licensing who use automated tools, resulting in accurate assessment and fast turnaround. A large audit project requires significant interaction between the audit agency and company personnel, typically representatives of the R&D group, the resident legal or licensing office, and product managers. It also requires an understanding of the company's outsourcing and open source adoption history, knowledge of the code portfolio (in order to break it down into meaningful smaller subprojects), test runs, and consistent interaction between the audit team and the company representatives.

Smaller audit projects, however, can be streamlined, and a number of overhead activities can be eliminated, resulting in a time and cost efficient solution without compromising detail or accuracy. An example would be a streamlined, machine-assisted software assessment service. The automated scanning operation, using automated open source management tools, can provide a first-level report in hours. Expert review and verification of the machine-generated reports, and final consolidation of the results into an executive report, can take another few days depending on the size of the project.

The executive report delivered by an external audit agency is a high level view of all third party content, including OSS, and the attributes associated with it. The audit report describes the software code audit environment, the process used, and the major findings, drawing attention to specific software packages, or even individual software files and their associated copyrights and licenses. The audit report will also highlight third party code snippets that were "cut & pasted" into proprietary files and how that could affect the distribution or the commercial model. This is important for certain licenses, such as those in the GPL (GNU General Public License) family of OSS licenses, depending on how the code or code snippet is utilized.

The report significantly reduces the discovery and analysis effort required from the company being audited, allowing them to focus on making relevant decisions based on the knowledge of their code base.

Conclusion

Third party code, including open source and commercially available software packages, can accelerate development, reduce time to market and decrease development costs. These advantages can be obtained without compromising quality, security or IP ownership. Especially for small companies, any uncertainty around code content and the obligations associated with third party code can impact the ability of an organization to attract customers. Ambiguity around third party code within a product stretches sales cycles, reduces the value of products, and impacts company valuations. For small organizations, an external audit of the code base can quickly, accurately and economically establish the composition of the software and its associated obligations.

 

Left 4 Dead for Linux is imminent

Michael Larabel, the founder of phoronix.com, was invited to visit Valve headquarters by its boss, Gabe Newell. Yesterday he got to take a look at the development progress of Steam for Linux. He not only talked to Gabe Newell about Valve's plans regarding Linux, but even tested the native Steam client on Ubuntu.

Read more at www.SteamForLinux.com

 

Phenomenal times for Linux Gaming

Although Linux gamers are experiencing a phenomenal time nowadays, even better times await them. While Valve is developing Steam for Linux, more Linux games than ever before are being published.

The latest Humble Indie Bundle once again introduced a great new game for Linux a few days ago - Botanicula:


Read more at www.SteamForLinux.com

 

PostgreSQL C++ tutorial

Installation and configuration

This tutorial was done on Linux Mint 12 and will also work on Ubuntu 11.10. I did the same on CentOS 6.2 and will write about it later; installing PostgreSQL 9 and the corresponding libpqxx there is rather complicated.

Using a terminal, we find what is available:

apt-cache search postgresql

These are the results we are interested in:

libpqxx-3.0 - C++ library to connect to PostgreSQL

libpqxx-3.0-dbg - C++ library to connect to PostgreSQL (debugging symbols)

libpqxx3-dev - C++ library to connect to PostgreSQL (development files)

libpqxx3-doc - C++ library to connect to PostgreSQL (documentation)

postgresql-9.1 - object-relational SQL database, version 9.1 server

postgresql-client - front-end programs for PostgreSQL (supported version)

postgresql-client-9.1 - front-end programs for PostgreSQL 9.1

postgresql-client-common - manager for multiple PostgreSQL client versions

postgresql-common - PostgreSQL database-cluster manager

postgresql-contrib - additional facilities for PostgreSQL (supported version)

postgresql-contrib-9.1 - additional facilities for PostgreSQL

The search returns many more packages, but we do not need them all. Now, in the terminal, do:

sudo apt-get install postgresql-9.1 postgresql-client postgresql-client-9.1 postgresql-client-common postgresql-common postgresql-contrib postgresql-contrib-9.1

or, if you prefer a GUI, Software Manager or Synaptic will also do. Do not forget the contrib packages; you will need them for pgAdmin III.

Again in the terminal, do:

sudo su postgres

after entering your password you are the postgres user. As postgres:

psql template1

psql (9.1.3)

Type "help" for help.

template1=# create role testuser login password 'testpass' superuser valid until 'infinity';

CREATE ROLE

template1=# \q

The \q quits psql, and after one 'exit' you are back at your own login. If you like, install pgAdmin III now, or use psql to create the database and table you are going to practice on.

To allow remote connections do:

sudo gedit /etc/postgresql/9.1/main/postgresql.conf

and modify listen_addresses, something like this:

listen_addresses = 'localhost, 192.168.0.42, 192.168.0.111'

Also in pg_hba.conf we need to enable remote users:

sudo gedit /etc/postgresql/9.1/main/pg_hba.conf

It should look something like this, near the bottom of the file:

# IPv4 local connections:

host all all 127.0.0.1/32 md5

host template1 testuser 192.168.0.0/24 md5

host testpgdb testuser 192.168.0.0/24 md5

After saving changes restart PostgreSQL server:

sudo /etc/init.d/postgresql restart

Create a database testpgdb with sufficient rights for testuser, or rename the database in the C++ example.
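For example, a minimal way to do that while still logged in as the postgres user (createdb ships with the client packages installed above; -O makes testuser the owner of the database, which is sufficient rights for the example):

createdb -O testuser testpgdb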

Now it is time to install libpqxx. From terminal execute:

sudo apt-get install libpqxx-3.0 libpqxx-3.0-dbg libpqxx3-dev libpqxx3-doc

and installation is done.

C++ example

The code is a slightly adjusted test 005, which comes with libpqxx3-doc. To see where everything is installed, use:

dpkg -L libpqxx3-doc

It connects to a local instance of PostgreSQL; if you want a remote one, edit the connection string. If the connection succeeds, it creates a table, inserts data, and at the end performs one non-transactional select.



#include <iostream>
#include <pqxx/pqxx>

using namespace std;
using namespace pqxx;

int main(int argc, char** argv) {
    connection C("dbname=testpgdb user=testuser password=testpass hostaddr=127.0.0.1 port=5432");
    string tableName("tabletwo");

    if (C.is_open()) {
        cout << "We are connected to " << C.dbname() << endl;
    } else {
        cout << "We are not connected!" << endl;
        return 0;
    }

    // drop the table if it is left over from a previous run
    work Q(C);
    try {
        Q.exec("DROP TABLE " + tableName);
        Q.commit();
    } catch (const sql_error &) {
    }

    // create the table and bulk-load four rows with tablewriter
    work T(C);
    T.exec("CREATE TABLE " + tableName + " (id integer NOT NULL, name character varying(32) NOT NULL, salary integer DEFAULT 0);");
    tablewriter W(T, tableName);
    string load[][3] = {
        {"1","John","0"},
        {"2","Jane","1"},
        {"3","Rosa","2"},
        {"4","Danica","3"}
    };
    for (int i = 0; i < 4; ++i)
        W.insert(&load[i][0], &load[i][3]);
    W.complete();
    T.exec("ALTER TABLE ONLY " + tableName + " ADD CONSTRAINT \"PK_IDT\" PRIMARY KEY (id);");
    T.commit();

    // read the rows back outside a transaction
    nontransaction N(C);
    result R(N.exec("select * from " + tableName));
    if (!R.empty()) {
        for (result::const_iterator c = R.begin(); c != R.end(); ++c) {
            cout << '\t' << c[0].as(string()) << '\t' << c[1].as(string()) << '\t' << c[2].as(string()) << endl;
        }
    }
    return 0;
}

To compile the code you will need to tell g++ where the libpqxx headers are (they are not on the default path) and tell the linker which libraries to link against. Something like this:

g++ hello.cxx -o hello -I/usr/local/include/ -lpqxx -lpq

If your libpqxx or libpq is in an unusual place you will need -L[path to where they are]; there is a lot of that on CentOS or Fedora ;-)
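For instance, a hedged variant for libraries installed under /usr/local (the -L path is an assumption for illustration):

g++ hello.cxx -o hello -I/usr/local/include/ -L/usr/local/lib -lpqxx -lpq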

After executing hello (./hello) you should be rewarded with the following output:

We are connected to testpgdb

NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "PK_IDT" for table "tabletwo"

1 John 0

2 Jane 1

3 Rosa 2

4 Danica 3

I will write more on the subject and explain installation on Red Hat, CentOS, and Fedora. After that we will look at coding with Eclipse and NetBeans, again using PostgreSQL and libpqxx.

 

satellite.sh - sync data between systems using a pen drive

It is easy to sync two connected systems, but what if you need to sync two or more systems that are not connected to each other? You need an external storage device, but copying data to it by hand every time is tedious, and there is a chance you will forget some files. Here is a small bash shell script: set the paths once in the script and it will take care of the rest.

Save the following as ~/bin/satellite.sh and mark it executable with 'chmod +x ~/bin/satellite.sh'.
To sync data from the local system to the external device, run:
$ satellite.sh upload   (enter 'satellite' when asked for the passphrase)
To sync data from the external device to the local system, run:
$ satellite.sh download   (enter 'base' when asked for the passphrase)

In this case the external storage is a USB pen drive labeled SATELLITE.

#!/bin/bash

if [ $# -ne 1 ];then
echo "Usage: $0 upload|download"
exit 0
fi

arg=$1

# ---- BLOCK-START ----
# add paths to both arrays;
# the basename of each path must be a directory
#
SATPATH=/media/SATELLITE/.A-BOX
#
path_satellite[1]="$SATPATH/bin"
path_satellite[2]="$SATPATH/learning"
path_satellite[3]="$SATPATH/raw_c"
path_satellite[4]="$SATPATH/raw_python"
path_satellite[5]="$SATPATH/scripting"
path_satellite[6]="$SATPATH/Wallpapers"
#
path_base[1]="$HOME/bin"
path_base[2]="$HOME/Documents/learning"
path_base[3]="$HOME/Documents/turbo/raw_c"
path_base[4]="$HOME/Documents/turbo/raw_python"
path_base[5]="$HOME/Documents/turbo/scripting"
path_base[6]="$HOME/Pictures/Wallpapers"
#
# ---- BLOCK-END ----

#
# check that both arrays contain the same number of elements
#
if [ ${#path_satellite[@]} -ne ${#path_base[@]} ];then
echo -e "mismatch detected in ${0}\nkindly verify both arrays, exiting"
exit 1
fi

total=${#path_base[@]}      # both arrays have the same number of elements
echo "total: $total"

#
# check that the order of elements in both arrays is the same
#
counter=1
while [ $counter -le $total ]
do
temp_sate=$(basename ${path_satellite[$counter]})
temp_base=$(basename ${path_base[$counter]})

if [ "${temp_sate}" != "${temp_base}" ];then
echo "satellite: ${temp_sate}"
echo "base     : ${temp_base}"
echo "above paths does not match in ${0}, kindly check and rerun"
exit 1
fi
counter=$(expr $counter + 1)
done

#
# check whether the local copy of satellite.sh is up to date
#
f_remote="$SATPATH/bin/satellite.sh"
f_local="$HOME/bin/satellite.sh"

script_remote=$(stat ${f_remote} | grep "Modify")
script_remote=$(echo ${script_remote:8:19} | sed 's/[- :]//g')

script_local=$(stat ${f_local} | grep "Modify")
script_local=$(echo ${script_local:8:19} | sed 's/[- :]//g')

echo "remote: ${script_remote}"
echo "local : ${script_local}"

if [ "${script_remote}" -gt "${script_local}" ];then
cp -f ${f_remote} ${f_local}
echo "local copy was outdated, hence updated"
echo "re-run: ${0}"
exit 1
fi


if [ "$arg" = "upload" ];then
echo -ne "phrase to overwrite SATELLITE \033[35G: "
read text

if [ "$text" != "satellite" ];then
echo "wrong phrase"
exit 2
fi

for (( x=1; x<=$total; x++))
do
[ ! -d ${path_satellite[x]} ] && mkdir -p ${path_satellite[x]}
rsync -av --delete ${path_base[x]}/ ${path_satellite[x]}
done

elif [ "$arg" = "download" ];then
echo -ne "phrase to overwrite BASE \033[35G: "
read text

if [ "$text" != "base" ];then
echo "wrong phrase"
exit 2
fi

for (( x=1; x<=$total; x++))
do
[ ! -d ${path_base[x]} ] && mkdir -p ${path_base[x]}
rsync -av --delete ${path_satellite[x]}/ ${path_base[x]}

#
# enable executable bit of bash-shell & python scripts
#
find ${path_base[x]} -name "*.sh" -exec chmod +x {} \;
find ${path_base[x]} -name "*.py" -exec chmod +x {} \;

#
# enable executable bit of binaries
# if executable path/file name contains single quote ('), skip it -- need to fix it!
#
var=$(find ${path_base[x]} -name "*" -type f -print| grep -v "'"| xargs file | grep ELF | cut -d':' -f1)
if [ -n "${var}" ];then
echo ${var} | xargs chmod +x
fi
done

fi
echo -e "synching \033[35G DONE"
#------------ END ------------

 

auto indent in vim

$ vim ~/.exrc

set shiftwidth=4
set softtabstop=4
set nu

if has ("autocmd")
     filetype plugin indent on
endif

:wq

vim parses ~/.exrc on start.

shiftwidth & softtabstop makes sure that when 'tab' key pressed, it insert 4 spaces and when 'backspace' key pressed, delete 4 spaces.

 

'nu' turns on line numbers.

 

'filetype plugin indent on' loads the auto-indent plugin for the file type (based on its extension) if available.


 

A Linux ready for Enterprise

Before I start...

Hi guys, it's been a long time since I left Linux.com. I contributed a lot, was one of the private beta testers for this site, and ran regular blog entries on topics including writing Xlib-based window managers and general programming for Linux. I felt I had to leave because my Kudos points were stuck at zero no matter how much I contributed. I know this seems a silly reason for leaving, but I felt that fixing my Kudos points was the least the admins could have done for my contributions (especially as people were winning prizes in the early days of the site with the most points); despite asking several times, my Kudos points remained at zero, and I requested that my account be deleted.

I feel that I am now prepared to forgive and forget and am going to start contributing again; I hope you enjoy this article!

Custom Distributions & I

A while back I started my own Linux distribution (based on Arch Linux) called LDR, which saw a LOT of traffic and interest hit my server (the most traffic the poor machine had ever seen!). The project itself was a massive flop: I could not hook in any support from other developers, nor could I keep up with the questions from casual users whom, as a distro owner, I really should have made my top priority to please. Arch's base installation changed significantly, things broke, and the project fell further and further behind whilst my main programming job (which paid the bills) grew busier and busier, until I had to stop working on the project completely.

Strangely, I have been left with fond memories of the project. It was a great learning curve for someone who up until then would have considered himself a noobie to Linux. I really enjoyed being part of the hype and excitement around a new distro, and contributing GPL code that maybe, some day, someone might download and use (though by that time the project will probably be out of date and won't compile?!).

I continue to hope I will be involved in authoring another distribution from the ground up someday.

Working with Microsoft Technologies

I left my job with a Microsoft-technology-based enterprise telco software provider in 2009 because I wanted to concentrate my working hours on developing Linux-based solutions for mission- and time-critical projects.

I have since worked on some amazing projects, including two successful contracts for the British MoD.

I had been working with Linux exclusively until recently, when I surprisingly moved back to a Microsoft technology provider...

Don't get me wrong, I haven't given up on Linux; it is still very much a burning passion, which I use to bore my pro-Microsoft/Mac colleagues. My problem is that I find Linux lacks the integration and support that I need on a daily basis to produce software to a level that I can be really proud of. It's probably 80% opinion rather than fact, but I find that Windows, as a platform, offers me the right tools, in the right packaging, to get my job done better.

This article is a brain-dump of the parts of Windows and Microsoft technologies that I would marry together with Linux to produce a distro that would be suitable for..... well.... me! (but hopefully other Enterprise Linux developers and companies too!)

I realize that praising Microsoft on Linux.com comes at the risk of receiving quite a bit of trolling/flaming from said community; but after you've had your fun, I hope you can understand that Microsoft HAVE created some components extremely well, sometimes better than other companies/communities have. Anyone who has tried to stress/load test a PHP web application, MySQL database and Apache web server at the same time will hopefully respect the fact that, because of the disjointed nature of these components on Linux, aggregating data (like when trying to determine the bottleneck in this application stack) is extremely difficult. Someone needs to take all these components and put them together with loads of "Grade A" middleware glue, so that they work with the developer.

NOTE: I don't have the pleasure of working with enterprise distributions of Linux such as Red Hat, and I'm sure these distros include some or all of the concepts I will share with you below, but the truth is, as previously mentioned, I want to be involved in a project from the very start, and that's why I'm inventing/dreaming up the concept anew here. Whether I am offering anything new to the table is up to you as a reader to decide (and discuss!).

Things that I think Microsoft did well (checklist)

I'll start by listing all the concepts, components, tools, etc. that I think Microsoft have done a fantastic job in providing and that would need to be included in my concept Linux distribution...

Read more...
 

Now Russian government agencies can use Astra Linux for top-secret information processing

The Astra Linux operating system, developed by JSC RPA RusBITech, can now be used in Russian government agencies that deal with top-secret information.

Certification of Astra Linux for compliance with Russian government information-security requirements has been completed, confirming that the operating system may be used in information systems that handle top-secret information.

Government agencies in Russia thus now have an open-source-based software platform with a high level of information security. The ongoing replacement of previous operating systems and software in Russian government agencies with Linux and open-source software is to be completed by 2015.

Astra Linux was created, and continues to be developed, by RPA RusBITech on the basis of open-source software. It runs on computers with x86-64 and ARM processors, as well as on IBM System z mainframes, and includes software that ensures the highest level of information security.

RPA RusBITech is a member of the Linux Foundation.

 

Configuring ESXi VDR FLR on SuSE Linux SLES 11 x86_64

I've written the following post as it took me a while to figure out how to get SLES file-level restore (FLR) working from within a VMDK on ESXi.

 

By default there is support for Debian and Red Hat guests. There is also a helpful article on the VMware forums that details an implementation on 32-bit OpenSuSE.

 

This is where my first problem arose, as the VDR FLR programs require 32-bit libraries in order to run. The way I approached this was to use a 32-bit guest VM as a donor for the 32-bit linker programs, which don't seem to get included in the same way when installing the 32-bit runtime environment on SLES 11 x86_64. All of the documentation on the OpenSuSE sites seems to point to declaring runtime variable settings for the linker and compiler by using "-m32" as an argument. Whilst this 'works', it fails to actually build the source objects that you require.

 

So I created a 32-bit guest and, after a bit of debugging, archived the /usr/i586-suse-linux directory, copied it over to the 64-bit guest that I wanted VDR FLR running on, and unpacked it there. This provides a 32-bit version of the linker program 'ld'.
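A minimal sketch of that donor-copy step, assuming SSH access between the guests (the hostname sles64guest is a placeholder):

# on the 32-bit donor guest
tar czf /tmp/i586-suse-linux.tar.gz -C /usr i586-suse-linux
scp /tmp/i586-suse-linux.tar.gz root@sles64guest:/tmp/

# then on the 64-bit guest
tar xzf /tmp/i586-suse-linux.tar.gz -C /usr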

 

I also found that running on any kernel earlier than 2.6.27.21 failed to create the FUSE directories and files correctly under /tmp. So I ran a kernel update by grabbing these files from Novell's SLES site:

 

For the following to run successfully you will need to update module-init-tools first:

 

 

 

# files used to update module-init-tools:
module-init-tools-3.12-29.1.x86_64.rpm
module-init-tools-debuginfo-3.4-70.6.1.x86_64.rpm
module-init-tools-debugsource-3.4-70.6.1.x86_64.rpm

rpm -Fvh module-init*.rpm

# next, do the kernel update
mkdir /usr/local/src/kernelmods

# move the following files into /usr/local/src/kernelmods:
ext4dev-kmp-default-0_2.6.27.21_0.1-7.1.2.x86_64.rpm
ext4dev-kmp-xen-0_2.6.27.21_0.1-7.1.2.x86_64.rpm
kernel-default-2.6.27.21-0.1.2.x86_64.rpm
kernel-default-base-2.6.27.21-0.1.2.x86_64.rpm
kernel-source-2.6.27.21-0.1.1.x86_64.rpm
kernel-syms-2.6.27.21-0.1.2.x86_64.rpm
kernel-xen-2.6.27.21-0.1.2.x86_64.rpm
kernel-xen-base-2.6.27.21-0.1.2.x86_64.rpm

cd /usr/local/src/kernelmods

# run the update
rpm -Fvh *.rpm

 

 

Use YaST to make sure that you have installed the 32-bit runtime environment. Note that some of the steps after this are to get around a problem I found with the 64-bit linker not seeming to accept "-m32".

 

Once this has finished, it's best to reboot, just to make sure you are running everything that you should be.

 

Download VMware-vix-disklib from the VMware site. I used this version: VMware-vix-disklib-1.2.0-230216.i386.tar. Copy it to /usr/local/src, unpack it, and install by executing ./vmware-install.pl

 

Next, follow the VDR instructions to get hold of the FLR program: VMwareRestoreClient.tgz. Copy this file to /usr/local/src on the 64-bit guest and unpack it.
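A minimal sketch of those two unpack-and-install steps (the vmware-vix-disklib-distrib directory name is my assumption about where the disklib tarball unpacks):

cd /usr/local/src
tar xf VMware-vix-disklib-1.2.0-230216.i386.tar
cd vmware-vix-disklib-distrib   # assumed unpack directory
./vmware-install.pl
cd /usr/local/src
tar xzf VMwareRestoreClient.tgz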

 

Next, grab a source copy of FUSE from the FUSE site (I used 2.7.3). Here are the build instructions that worked for me:

 

./configure '--prefix=/usr/local/mattsfuse'  '--build=i386' 'CC=gcc -m32' 'LD=/usr/i586-suse-linux/bin/ld' 'AS=gcc -c -m32' 'LDFLAGS=-L/usr/local/mattsfuse/lib' '--enable-threads=posix' '--infodir=/usr/share/info' '--mandir=/usr/share/man' '--libdir=/usr/local/mattsfuse/lib' '--libexecdir=/usr/local/mattsfuse/lib' '--enable-languages=c,c++,objc,fortran,obj-c++,java,ada' '--enable-checking=release' '--with-gxx-include-dir=/usr/include/c++/4.1.2' '--enable-ssp' '--disable-libssp' '--disable-libgcj' '--with-slibdir=/usr/local/mattsfuse/lib' '--with-system-zlib' '--enable-__cxa_atexit' '--enable-libstdcxx-allocator=new' '--program-suffix=' '--enable-version-specific-runtime-libs' '--without-system-libunwind' '--with-cpu=generic' '--host=i586-suse-linux' 'build_alias=i386' 'host_alias=i586-suse-linux' --cache-file=/dev/null --srcdir=.

 

As you can see, in the configure script I specified an absolute path to the 32-bit linker (ld) with 'LD=/usr/i586-suse-linux/bin/ld', used --build=i386, and manually set some other 32-bit flags to instruct the compiler on what to do.

 

Once configure has run, issue a 'make' and, if no problems are shown in the 'make', a 'make install'.

 

You now have a 32-bit source build of FUSE running on 64-bit SLES!

 

Almost there. All we need to do now is use 'ldd' on the VDR programs we need to run and see which libraries it thinks are missing.

 

cd /usr/local/src/VMwareRestoreClient

 

ldd libvixMntapi.so

 

You should see something like this:

linux-gate.so.1 =>  (0xffffe000)

libdl.so.2 => /lib/libdl.so.2 (0xb7d72000)

libpthread.so.0 => /lib/libpthread.so.0 (0xb7d5c000)

libcrypt.so.1 => /lib/libcrypt.so.1 (0xb7d29000)

libz.so.1 => /lib/libz.so.1 (0xb7d17000)

libvixDiskLib.so.1 => not found

libfuse.so.2 => not found

libc.so.6 => /lib/libc.so.6 (0xb7bea000)

/lib/ld-linux.so.2 (0x80000000)

 

The items showing 'not found' are the ones we need to move around.

 

cp -a /usr/local/mattsfuse/lib/libfuse.* /usr/lib

find / -name libvixDiskLib.so.1 -print

cp -a libvixDiskLib.so* /usr/lib   # run this from the directory that find located

ldconfig

ldd libvixMntapi.so

 

This should now show you the locations of the previously missing libraries:

 

linux-gate.so.1 =>  (0xffffe000)

libdl.so.2 => /lib/libdl.so.2 (0xb7e4a000)

libpthread.so.0 => /lib/libpthread.so.0 (0xb7e34000)

libcrypt.so.1 => /lib/libcrypt.so.1 (0xb7e01000)

libz.so.1 => /usr/lib/libz.so.1 (0xb7def000)

libvixDiskLib.so.1 => /usr/lib/libvixDiskLib.so.1 (0xb7c98000)

libfuse.so.2 => /usr/local/lib/libfuse.so.2 (0xb7c7f000)

libc.so.6 => /lib/libc.so.6 (0xb7b53000)

/lib/ld-linux.so.2 (0x80000000)

librt.so.1 => /lib/librt.so.1 (0xb7b4a000)

 

Next you should be able to run "VdrFileRestore -a " as per VMware's instructions on the 64-bit guest.

 

Follow the on-screen instructions to select which backup day you want to mount the filesystem for. You will then need to SSH onto the 64-bit guest. If you run 'df' you will see a /tmp/xxxxxx file mounted in the list. Do not try to use this as a file path to grab files from; instead use the suggested /root/HOSTNAME-DAY mount point.

 

For a test I moved /etc/hosts to /etc/hosts.myold, then copied /root/HOSTNAME-DAY/etc/hosts to /etc/hosts and checked that I could read it OK.

 

I hope someone finds this useful. VDR is an amazing backup tool that is free with the Enterprise licence. You can either do a complete host restore, or use FLR as described above to restore single files from inside the machine image.

 

 

(c)   Matt Palmer 29 Jan 2012 - www.metallic-badger.com

 

Password guessing with Medusa 2.0

Medusa is my password forcer of choice, mainly because of its speed. If you're hoping to try it on a Windows box, sorry, you're out of luck: as far as I know there is no Windows port, in which case your next best alternative is Hydra (see last week's post found here). Medusa was created by the fine folks at foofus.net; in fact, the much-awaited Medusa 2.0 update was released in February of 2010. For a complete change log please visit http://www.foofus.net/jmk/medusa/changelog

Medusa is a command line tool; as far as I know there is no GUI front end. But don't let that scare you, it's super simple to operate. The foo magic of compiling from source is the hardest part, although if you're running Ubuntu, Medusa is in the repository; starting with Ubuntu 10.10 the Medusa packages were updated to the latest 2.0 release. If you're a Fedora fanboy, good news: a Medusa RPM is available. With Fedora 16, Medusa was updated to release 2.0; anything prior will use Medusa 1.5. Other distros may have to compile from source.

Compiling Medusa from source:

1. Download the Medusa 2.0 source from foofus.net
2. Decompress the tarball: tar -xvf medusa-2.0.tar.gz
3. Perform the usual compile foo magic: ./configure, make, make install

One word of caution: during the ./configure process a module check is performed. If dependencies have not been met, Medusa will not support those modules. You'll have to ensure all dependencies are satisfied before running make and make install. Have a look here if you run into trouble: http://foofus.net/~jmk/medusa/medusa.html

Installing Medusa from the Ubuntu repository:

1. apt-get update
2. apt-get install medusa

Basic password guessing with Medusa: if you'd like to see all Medusa options, execute medusa with no switches. If you'd like to see all supported modules, execute medusa -d. In its most basic form Medusa requires the following information:

1. Target host
2. User name or text file with user names
3. Password or text file with passwords
4. Module name

For example, if I want to try a single password guess of abc123 against the Administrator account on a Windows box with an IP address of 192.168.100.1:

medusa -h 192.168.100.1 -u Administrator -p abc123 -M smbnt

In a Windows environment the Administrator account is special in that it is the only account which cannot be locked out, although watch out: some environments remove this feature. Before you brute force accounts, ensure you know the lockout policy. But let's pretend in this example the Administrator account does not lock out. This means I can attempt as many password guesses as I'd like. In this case I'd download a pre-compiled password list, then let Medusa loose and wait:

medusa -h 192.168.100.1 -u Administrator -P passwordlist.txt -M smbnt

Depending on the latency between you and the target host, limiting concurrent attempts may be a good idea; this can be accomplished with -t. If you'd like Medusa to stop after the first successful username and password combination, use -f.

Medusa is simple, fast and effective. I especially love the number of modules it supports, including web forms. How many times have you wanted to password guess a web site login? With Medusa it is possible: simply provide the proper URL. Medusa even supports SSL, and if your target is using security through obscurity on a non-standard port, Medusa supports that too; specify non-standard ports with -n.

Administrators should be auditing passwords regularly. Weak passwords are your number one concern: if you allow users to generate a weak password, they will. Your best bet is to implement a good password policy and enforce it. For more information please visit our blog at: www.digitalboundary.net/wp
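To make those switches concrete, here is a hedged example (the host, account, and port are made up for illustration) that limits Medusa to 5 concurrent attempts, stops at the first valid pair, and targets an SSH daemon listening on a non-standard port:

# 5 parallel threads (-t), stop on first success (-f), SSH on port 2222 (-n)
medusa -h 192.168.100.1 -u root -P passwordlist.txt -M ssh -n 2222 -t 5 -f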
 