Linux.com

On the use of low-thread, high-speed “gaming computers” to solve engineering simulations

   Many of us in the Linux community work only with software that is FOSS (free and open-source software). This is software that is not only open source but also available without licensing fees. There are many outstanding FOSS products on the market today, from the Firefox browser to most distributions of Linux to the OpenOffice productivity suite. However, there are times when FOSS is not an option; a good example is in my line of work supporting engineering software, especially CAD tools and simulators. This software is not only costly but very restrictive: each aspect of the software is charged for. For example, many of the simulators can run multithreaded, with one piece of software running on up to 16 threads for a single simulation. More threads require more tokens, and we pay per token. This puts us in a situation where we want to maximize what we accomplish with as few threads as possible.

   If, for example, an engineer needs a simulation to finish in order to prove a design concept, and it will take 6 hours to simulate on 1 thread, he or she will want another token in order to use more threads. Using one token may buy a reduction of 3 hours in simulation time, but the cost is that the tokens used for that simulation cannot be used by another engineer. The simple solution would be to keep buying more tokens until every engineer has enough to run on the maximum number of threads at all times. If there are 5 engineers who each run simulations on 16 threads at a cost of 5 tokens, then we need 25 tokens. Of course, the simple solution rarely works: the cost of 25 tokens is so high that it could easily bankrupt a medium-sized company.

   Another solution would be to use fewer tokens but implement advanced queuing software. This has the advantage that engineers can submit tasks and the servers running the simulations will run at all times (we hope), using the tokens we do have to the utmost. This strategy works well when deadlines are far away, but as they approach, the fight for slots grows.

  Since the limiting resource here is the number of threads, we tried a different approach. Because we pay per thread, we should run each thread as fast as possible, prioritizing per-thread performance rather than throughput. To justify this reasoning we built benchmarks for our tools and compared the time it took to run a simulation against the number of threads we employed for it.

  The conclusion was: independent of the software and the type of simulation we ran, performance increased rapidly up to about 4.5 threads and then leveled off. A surprising result, given that the tools we used came out in different years and were produced by different vendors.
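One way to rationalize this flattening (our interpretation, not something the vendors document) is Amdahl's law: if a fraction p of a simulation parallelizes, the speedup on n threads is 1/((1-p) + p/n). A quick sketch, with p chosen purely for illustration:

```shell
# Amdahl's law sketch: speedup(n) = 1 / ((1-p) + p/n)
# p=0.85 is an illustrative guess, not a measured value
for n in 1 2 4 8 16; do
    awk -v n="$n" 'BEGIN { p = 0.85; printf "%2d threads: %.2fx\n", n, 1/((1-p) + p/n) }'
done
```

With p around 0.85, most of the achievable speedup is already realized by 4 to 5 threads, which is consistent with the leveling-off we measured.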

   Given this information, we concluded that if we ran 4 threads 25% faster on machine A (by overclocking), we could achieve better results than on machine B running at stock speed, despite the identical architecture. This meant that for the near-trivial price (compared to a server's cost or additional tokens) of a modified desktop computer, we could outperform a server with the maximum number of tokens we could purchase.

Our new system specifications:

Newegg #           Price     Item name                      Quantity
N82E16819115095    349.99    Intel Core i7 (socket 1155)    1
N82E16813131837    139.99    Asus motherboard               1
N82E16817171048    149.99    Cooler Master power supply     1
N82E16820231611    139.99    G.Skill DDR3 RAM               2
N82E16835103181     84.99    All-in-one liquid CPU cooler   1
N82E16811119213    164.99    Cooler Master PC case          1
N82E16833106126    131.99    Ethernet server adapter        1
N82E16820167115    204.99    180 GB SSD                     1
Amazon order       349.00    Intel Core i7                  1

   The total cost was approximately $1,200 per unit after rebates. Assembly took about 3 hours. Overclocking was stable at 4.7 GHz with a maximum recorded temperature of 70 °C. The operating system is CentOS with the full desktop installed. The NICs have two connections link-aggregated to our servers.
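For reference, link aggregation on CentOS is typically set up through a bonding interface. A minimal sketch follows; the device names, address, and bonding mode are illustrative assumptions, not taken from our actual configuration (802.3ad mode also requires switch support):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative values)
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.50
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
```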

  To test the overclock we wrote a simple infinite-loop floating-point operation in Perl and launched 8 instances of it while monitoring the results with a FOSS program called i7z. The hard drive exists only to provide a boot drive; all other functions are performed via ssh and NFS exports. The units sit headless in our server room. We estimate that we have reduced overall simulation time across the company by 50% with only two units.
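The stress loop itself is trivial. A sketch of the idea is below; the exact Perl body is a reconstruction, not our original code, and the `spawn_load` helper name is hypothetical (we bound the loops by time here so the sketch terminates, where the original ran forever until killed):

```shell
# launch N floating-point busy loops for D seconds each, then wait;
# watch per-core clocks and temperatures with i7z in another terminal
spawn_load() {
    n=$1; d=$2
    for i in $(seq 1 "$n"); do
        perl -e "my \$end = time + $d; \$x = rand() * rand() while time < \$end;" &
    done
    wait
}
spawn_load 8 2   # 8 instances, 2 seconds each
```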

  The analogy we give is one of transportation. Our servers function like buses: they can move a great many people at a time, which is great, but buses are slow. We constructed high-speed sports cars: they can only move a few people at a time, but they move them much faster.


Isiah Schwartz

Teledyne Lecroy

 

 

Meet HD Camera Cape for BBB - $49.99 low-cost camera cape

  • Adding a camera cape to the latest BeagleBone Black, RadiumBoards has increased its “Cape”-ability to provide a portable camera solution. Thousands of designers, makers, hobbyists and engineers are adopting BeagleBone Black and becoming huge fans because of its unique functionality as a pocket-sized expandable Linux computer that can connect to the Internet. To support them in their work we have designed this HD Camera Cape with the following features and benefits:

    • 720p HD video at 30fps
    • Superior low-light performance
    • Ultra-low-power
    • Progressive scan with Electronic Rolling Shutter
    • No software effort required for OEM
    • Proven, off-the-shelf driver available for quick and easy software integration
    • Minimize development time for system designers
    • Easily Customized
    • Simple Design with Low Cost
  • Priced at just $49.99, this game-changing cape for the latest credit-card-sized BBB can help developers differentiate their product and get to market faster.
  • To learn more about this new and exciting cape, check out www.radiumboards.com

    Dick MacInnis, Creator of DreamStudio, Launches Celeum Embedded Linux Devices

    Celeum offers four unique embedded devices based on Linux:

    1. The CeleumPC, which dual boots Android and DreamStudio
    2. The CeleumTV, which runs Android with a custom XBMC setup
    3. The Celeum Cloud Server, which runs Ubuntu Server with ownCloud for personal cloud storage, and
    4. The Celeum Domain Server, a drop-in replacement for Windows Domain Controllers, powered by Ubuntu Server and a custom fork of Zentyal Small Business Server.

    The CeleumTV is currently available only in the Saskatoon, Canada area, while the other three devices are in the crowdfunding phase and can be preordered by making a donation to the Celeum Indiegogo campaign.

     

    Leveraging Open Source and Avoiding Risks in Small Tech Companies

    Today’s software development is geared more towards building upon previous work and less towards reinventing content from scratch. Resourceful software development organizations and developers use a combination of previously created code, commercial software and open source software (OSS), and their own creative content to produce the desired software product or functionality. Outsourced code can also be used, which in itself can contain any combination of the above types of software.

    There are many good reasons for using off-the-shelf and especially open source software, the greatest being its ability to speed up development and drive down costs without sacrificing quality. Almost all software groups knowingly, and in many cases unknowingly, use open source software to their advantage. Code reuse is possibly the biggest accelerator of innovation, as long as OSS is adopted and managed in a controlled fashion.

    In today’s world of open-sourced, out-sourced, easily-searched and easily-copied software it is difficult for companies to know what is in their code. Anytime a product containing software changes hands there is a need to understand its composition, its pedigree, its ownership, and any open source licenses or obligations that restrict the rules around its use by new owners.

    Given developers’ focus on the technical aspects of their work and emphasis on innovation, obligations associated with use of third party components can be easily compromised. Ideally companies track open source and third party code throughout the development lifecycle. If that is not the case then, at the very least, they should know what is in their code before engaging in a transaction that includes a software component.

    Examples of transactions involving software are: a launch of a product into the market, mergers & acquisitions (M&A) of companies with software development operations, and technology transfer between organizations whether they are commercial, academic or public. Any company that produces software as part of a software supply chain must be aware of what is in their code base.

     

    Impact of Code Uncertainties

    Any uncertainty around software ownership or license compliance can deter downstream users, reduce ability to create partnerships, and create litigation risk to the company and their customers. For smaller companies, intellectual property (IP) uncertainties can also delay or otherwise threaten closures in funding deals, affect product and company value, and negatively impact M&A activities.

    IP uncertainties can affect the competitiveness of small technology companies due to indemnity demands from their clients, so technology companies need to understand the obligations associated with the software they are acquiring. Any uncertainty around third party content in code can also stretch sales cycles. A lack of internal resources for identifying, tracking and maintaining open source and other third party code in a project impacts smaller companies even more.

    Along with licensing issues and IP uncertainties, organizations that use open source also need to be aware of security vulnerabilities. A number of public databases, such as the US National Vulnerability Database (NVD) or Carnegie Mellon University's Computer Emergency Response Team (CERT) database, list known vulnerabilities in a large number of software packages. Without accurate knowledge of what exists in the code base it is not possible to consult these databases. Aspects such as known deficiencies, vulnerabilities, security risks, and code pedigree all assume the existence of a software Bill of Materials (BOM). In a number of jurisdictions, another important aspect to consider before a software transaction takes place is whether the code includes encryption or other content subject to export control; this is important to companies that do business internationally.

    Solutions

    The benefits of OSS usage can be realized and the risks can be managed at the same time. Ideally, a company using OSS should have a process in place to ensure that OSS is properly adopted and managed throughout the development cycle. Having such a process in place allows organizations to detect any licensing or IP uncertainties at the earliest possible stage during development, which reduces the time, effort, and cost associated with correcting the problem down the road.

    If a managed OSS adoption process spanning all stages of a development life cycle is not in place, there are other options available to smaller companies. Organizations are encouraged to audit their code base, or software in specific projects, regularly. Some may decide to examine third party contents and the associated obligations just before a product is launched, or in anticipation of an M&A.

     

    Internal Audits

    The key here is having an accurate view of all third-party content, including OSS, within the company. One option is to carry out an internal audit of the company code base for the presence of outside content and its licensing and other obligations. Unfortunately, manually auditing a typical project of 1000-5000 files is a resource- and time-consuming process. Automated tools can speed up the discovery stage considerably. For organizations that do not have the time, resources or expertise to carry out an assessment on their own, an external audit would be the fastest, most accurate and most cost-effective option.

     

    External Audits

    External audit groups ideally deploy experts on open source and software licensing who use automated tools, resulting in accurate assessments and fast turnaround. A large audit project requires significant interaction between the audit agency and company personnel, typically representatives of the R&D group, the resident legal or licensing office, and product managers. It also requires an understanding of the company’s outsourcing and open source adoption history, enough knowledge of the code portfolio to break it down into meaningful smaller subprojects, test runs, and consistent interaction between the audit team and the company representatives.

    Smaller audit projects however can be streamlined and a number of overhead activities can be eliminated, resulting in a time and cost efficient solution without compromising details or accuracy. An example would be streamlined machine-assisted software assessment service. The automated scanning operation, through use of automated open source management tools, can provide a first-level report in hours. Expert review and verification of the machine-generated reports and final consolidation of the results into an executive report can take another few days depending on the size of the project.

    The executive report delivered by an external audit agency is a high-level view of all third party content, including OSS, and the attributes associated with it. The audit report describes the software code audit environment, the process used, and the major findings, drawing attention to specific software packages, or even individual software files and their associated copyrights and licenses. The audit report will highlight third party code snippets that were “cut & pasted” into proprietary files and explain how that could affect the distribution or the commercial model. This is important for certain licenses, such as those in the GPL (GNU General Public License) family of OSS licenses, depending on how the code or code snippet is utilized.

    The report significantly reduces the discovery and analysis effort required from the company being audited, allowing them to focus on making relevant decisions based on the knowledge of their code base.

    Conclusion

    Third party code, including open source and commercially available software packages, can accelerate development, reduce time to market and decrease development costs. These advantages can be obtained without compromising quality, security or IP ownership. Especially for small companies, any uncertainty around code content and the obligations associated with third party code can impact the ability of an organization to attract customers. Ambiguity around third party code within a product stretches sales cycles, reduces the value of products and impacts company valuations. For small organizations, an external audit of the code base can quickly, accurately and economically establish the composition of the software and its associated obligations.

     

    Corks? Or Screw Tops? Why the Experience Matters

    I've noticed a disturbing trend amongst a few of the high quality wineries in my state. They have abandoned the cork to close their high-end wine bottles and turned to screw caps. This is good news to people who struggle with how to get a cork out of a wine bottle. 

     

    Chatting with Peter Tait of Lucid Imagination

    Back in January I had the opportunity to test drive LucidWorks Enterprise, a search engine for internal networks. The cross-platform search engine was flexible, stable, easy to install and came backed by a friendly support staff. In short, it was a good experience which demonstrated how useful (and straightforward) running one's own search engine can be.

     

    HOWTO - Using rsync to move a mountain of data

    In this installment of my blog, I want to document the proper use of rsync for folks who are tasked with moving a large amount of data.  I'll even show you a few things you can do from the command line interface to extend the built-in capability of rsync using a little bash-scripting trickery.

    I use rsync to migrate Oracle databases between servers at least a few times per year.  In a snap, it's one of the easiest ways to clone a database from a Production server to a Pre-Production/Development server or even a Virtual Machine.  Thanks to rsync, you don't need a fancy Fibre-Channel or iSCSI storage array attached to both servers in order to clone a data LUN.

    I hope you enjoy this in-depth article.  Please feel free to comment if you need clarification, find it useful, or think something I wrote is just plain wrong.

     

    Clone a Virtual Machine from the shell (The Script)

    After a few comments on my previous blog about how to manually clone a Virtual Machine from the shell, I've decided to write a simple script to do everything automatically. This may be useful for newbies, but it basically reproduces all the information from my latest blog.

    There's no rocket science here and I've tried to keep the script simple and hackable for anyone. It took me some time (less than an hour) due to my poor sed knowledge; I've taken it as an exercise to improve my sed skills.

    As with anything open source, feel free to improve or modify it as you wish, and send me an updated copy so I can publish your best version as well. Error checking is quite simple for now; you may input absolute or relative paths, but there are a few limitations.

     

    Basic Usage:

    VMCopy <oldname> <newname>

    <oldname> is the name of the directory with the original VMWare files
    <newname> is the name of the directory with the newly created VMWare files

    Simple, isn't it?

     

    Here's the script:

    #!/bin/bash
    #
    # @name VMCopy - Copy/Clone a VMWARE Virtual machine with a new name
    #
    # @author Andrea Benini (Ben)
    # @since 2011-02
    # @website http://www.linux.com
    # @email andrea benini (at domain name) gmail [DoT] com
    # @package Use it to get a physical copy of an existing machine, no snapshots or
    # VMWare tools involved in this operation, it's a plain text bash script
    # @require This tool should be portable to many UNIX platforms, it just requires:
    # sed, dirname, basename, md5sum, $RANDOM (shell variable) and few more
    # shell builtins commands
    #
    # @license GPL v2 AND The Beer-ware License
    # See GPL details from http://www.gnu.org/licenses/gpl-2.0.html
    # "THE BEER-WARE LICENSE" (Revision 43)
    # Andrea Benini wrote this file. As long as you retain this notice you
    # can do whatever you want with this stuff. If you make modification on
    # the file please leave author notes on it, if you improve/alter/modify
    # it please send me an updated copy by email. If we meet some day, and
    # you think this stuff is worth it, you can buy me a beer in return.
    # Andrea Benini
    #
    SOURCEPATH=$(dirname "$1")
    TARGETPATH=$(dirname "$2")
    SOURCEMACHINE=$(basename "$1")
    TARGETMACHINE=$(basename "$2")

    if [[ $# -ne 2 ]]; then
        echo -e "$0 <oldname> <newname>"
        echo    "    Copies a VMWare virtual machine"
        echo    "    <oldname> and <newname> are the names of the machine"
        echo    "    you'd like to copy and the new destination name"
        echo    ""
        exit 1
    fi

    exec 2> /dev/null
    echo "VMCopy - VMWare Virtual Machines cloner"
    echo " - Copying source machine '$SOURCEMACHINE' with the new name '$TARGETMACHINE'..."
    rm -rf "$TARGETPATH/$TARGETMACHINE"
    cp -R "$SOURCEPATH/$SOURCEMACHINE" "$TARGETPATH/$TARGETMACHINE"

    echo " - Removing unnecessary files for '$TARGETMACHINE'"
    rm -f "$TARGETPATH/$TARGETMACHINE"/*.log

    echo " - Renaming files for '$TARGETMACHINE'"
    for OLDNAME in "$TARGETPATH/$TARGETMACHINE"/*; do
        NEWNAME=${OLDNAME/$SOURCEMACHINE/$TARGETMACHINE}
        mv -f "$OLDNAME" "$NEWNAME"
    done

    echo " - Remapping Hard Disks for the new machine"
    ls "$TARGETPATH/$TARGETMACHINE"/*.vmdk | grep -v -e "-s....vmdk" | while read DISKNAME; do
        sed -i "s/$SOURCEMACHINE/$TARGETMACHINE/g" "${DISKNAME}"
    done

    echo " - Changing resource files (if any)"
    if [[ -f "$TARGETPATH/$TARGETMACHINE/$TARGETMACHINE.vmxf" ]]; then
        sed -i "s/$SOURCEMACHINE/$TARGETMACHINE/g" "$TARGETPATH/$TARGETMACHINE/$TARGETMACHINE.vmxf"
    fi

    echo " - Changing $TARGETMACHINE.vmx file"
    # Massive character substitutions
    sed -i "s/$SOURCEMACHINE/$TARGETMACHINE/g" "$TARGETPATH/$TARGETMACHINE/$TARGETMACHINE.vmx"
    # Change ethernet mac addresses
    MACADDRESSES=$(grep "generatedAddress =" "$TARGETPATH/$TARGETMACHINE/$TARGETMACHINE.vmx" | sed -e 's/.*= *"//' -e 's/"//')
    for OLDMAC in $MACADDRESSES; do
        NEWMAC=$(echo $RANDOM$RANDOM | md5sum | sed -r 's/(..)/\1:/g; s/^(.{17}).*$/\1/')
        sed -i "s/$OLDMAC/$NEWMAC/" "$TARGETPATH/$TARGETMACHINE/$TARGETMACHINE.vmx"
    done

    echo -e " - Operation Complete, '$TARGETMACHINE' cloned successfully"

     

     

    Share your ideas

    If you find errors or you'd like to change some parts, let me know; share your ideas to improve the script and I'll always post the improved version here.

     

     

    Manually clone a VMWare Virtual machine from the shell

    Introduction

    Sometimes you have VMWare appliances and need a physical copy instantly, but you don't have the VMWare tools at hand, or you're doing everything from the command line (on a remote console). Sometimes you don't even have VMWare (ESX/GSX/VSphere/Player) installed, or you have just the Player (no cloning from there), but you still need a clone of a working machine. I usually create my own appliances with my own utilities, packages and tools installed; I store them as .tar.gz archives and use them as a base for new machines. Here's what I do to get an exact copy of a machine. It's not a geek trick, just a plain basic task, and it always works, no matter which OS is inside your VM (Win/Linux/BSD/Plan9/BeOS/...).

     

    First of all, stop your source machine (in my example “Debian 6”) and locate its directory, then copy the whole source directory to a new path (in my example “new.machine”)

    $ cp -R "Debian 6" new.machine
    $ ls -la new.machine/
    total 534520
    drwxr-xr-x 2 ben ben 4096 2011-02-09 09:53 .
    drwxr-xr-x 12 ben ben 4096 2011-02-09 09:53 ..
    -rw------- 1 ben ben 8684 2011-02-09 09:53 Debian 6.nvram
    -rw------- 1 ben ben 211550208 2011-02-09 09:53 Debian 6-s001.vmdk
    -rw------- 1 ben ben 234356736 2011-02-09 09:53 Debian 6-s002.vmdk
    -rw------- 1 ben ben 107347968 2011-02-09 09:53 Debian 6-s003.vmdk
    -rw------- 1 ben ben 2621440 2011-02-09 09:53 Debian 6-s004.vmdk
    -rw------- 1 ben ben 65536 2011-02-09 09:53 Debian 6-s005.vmdk
    -rw------- 1 ben ben 639 2011-02-09 09:53 Debian 6.vmdk
    -rw-r--r-- 1 ben ben 0 2011-02-09 09:53 Debian 6.vmsd
    -rwxr-xr-x 1 ben ben 1652 2011-02-09 09:53 Debian 6.vmx
    -rw-r--r-- 1 ben ben 263 2011-02-09 09:53 Debian 6.vmxf
    -rw-r--r-- 1 ben ben 88558 2011-02-09 09:53 vmware-0.log
    -rw-r--r-- 1 ben ben 49667 2011-02-09 09:53 vmware-1.log
    -rw-r--r-- 1 ben ben 64331 2011-02-09 09:53 vmware-2.log
    -rw-r--r-- 1 ben ben 63492 2011-02-09 09:53 vmware.log

    Now delete unnecessary files like the logs

    $ rm *.log

    Do a massive rename: the source virtual machine was named “Debian 6”, and you need to replace that with “new.machine” (our new name)

    $ mv "Debian 6.nvram" new.machine.nvram
    $ mv "Debian 6-s001.vmdk" new.machine-s001.vmdk
    $ mv "Debian 6-s002.vmdk" new.machine-s002.vmdk
    $ mv "Debian 6-s003.vmdk" new.machine-s003.vmdk
    $ mv "Debian 6-s004.vmdk" new.machine-s004.vmdk
    $ mv "Debian 6-s005.vmdk" new.machine-s005.vmdk
    $ mv "Debian 6.vmdk" new.machine.vmdk
    $ mv "Debian 6.vmsd" new.machine.vmsd
    $ mv "Debian 6.vmx" new.machine.vmx
    $ mv "Debian 6.vmxf" new.machine.vmxf
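If you'd rather not type ten mv commands, the same rename can be done with a shell loop using bash parameter substitution (a sketch; the demo below works in a scratch directory, but in practice you'd run the loop inside the copied directory):

```shell
# demo in a scratch directory: create the old files, then rename them all
cd "$(mktemp -d)"
touch "Debian 6.vmx" "Debian 6.vmdk" "Debian 6-s001.vmdk"
for f in "Debian 6"*; do
    mv -f "$f" "${f/Debian 6/new.machine}"   # bash substring replacement
done
# the directory now contains new.machine.vmx, new.machine.vmdk, ...
```

Note that `${f/old/new}` is a bash feature, not POSIX sh.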

    NOTE: the .vmxf file is present in newer releases of VMWare appliances; if you don't have it, just ignore it

    Now it's time to change the information inside your virtual machine. You just need your favorite text editor to change a few things. Keep these files as they are:

    new.machine-s*
    new.machine.nvram
    new.machine.vmsd

    The NVRAM file is your BIOS/nvram; it's a binary file and you don't need to change it. The *.vmdk files are your disks; you only need to change the information header of the disk (new.machine.vmdk) and leave the other VMDK files as they are (new.machine-s*.vmdk). The VMSD file is usually empty; you don't need to change it.

     

    Modify your hard disks

    If you have more than one hard disk you have more than one master .VMDK file, and you need to apply a few mods to each. Here's the content of the original file (it was “Debian 6.vmdk”, now “new.machine.vmdk”):

    # Disk DescriptorFile
    version=1
    encoding="UTF-8"
    CID=71ad0a67
    parentCID=ffffffff
    isNativeSnapshot="no"
    createType="twoGbMaxExtentSparse"

    # Extent description
    RW 4192256 SPARSE "Debian 6-s001.vmdk"
    RW 4192256 SPARSE "Debian 6-s002.vmdk"
    RW 4192256 SPARSE "Debian 6-s003.vmdk"
    RW 4192256 SPARSE "Debian 6-s004.vmdk"
    RW 8192 SPARSE "Debian 6-s005.vmdk"

    # The Disk Data Base
    #DDB
    ddb.virtualHWVersion = "7"
    ddb.longContentID ="86aa7ebbb50ab88b973ea60271ad0a67"
    ddb.uuid = "60 00 C2 9f 9a e3 43 6a-ea 70 c7 fa 35 72 7c 04"
    ddb.geometry.cylinders = "1044"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.adapterType = "lsilogic"

    Row order and content may vary; VMWare configuration files don't have a fixed order, and you may change row order, add comments and some other material inside the file. Here's what you need to change:

    RW 4192256 SPARSE "new.machine-s001.vmdk"
    RW 4192256 SPARSE "new.machine-s002.vmdk"
    RW 4192256 SPARSE "new.machine-s003.vmdk"
    RW 4192256 SPARSE "new.machine-s004.vmdk"
    RW 8192 SPARSE "new.machine-s005.vmdk"

    So all you need to do is change the references to the physical hard disk files, nothing more: just change the lines above in your new.machine.vmdk file and nothing else.
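The same edit can be scripted with sed instead of a text editor. A sketch on a scratch copy (the file contents mirror the article's example; note that `sed -i` as shown is GNU sed, BSD sed wants `sed -i ''`):

```shell
# rewrite the extent references in a descriptor file with sed
printf 'RW 4192256 SPARSE "Debian 6-s001.vmdk"\nRW 8192 SPARSE "Debian 6-s005.vmdk"\n' > /tmp/demo.vmdk
sed -i 's/Debian 6/new.machine/g' /tmp/demo.vmdk
cat /tmp/demo.vmdk
```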

     

    Other Descriptors

    It's time to change the VMXF file (extra configs for VMWare); if you don't have it, just skip this step. Your new.machine.vmxf file could look something like this:

    52 62 73 9d 7f 10 1b 58-8e 3c 8e 15 8e ef f4 a3 Debian 6.vmx

    It's an XML file (the listing above shows only its text content); the content and VM IDs may change a little bit, but it doesn't matter. All you need to do here is replace this string:

    Debian 6.vmx

    with this one

    new.machine.vmx

    and nothing more, here's the result:

    52 62 73 9d 7f 10 1b 58-8e 3c 8e 15 8e ef f4 a3 new.machine.vmx

    VMX Main configuration file

    new.machine.vmx is the machine's main configuration file. Inside it you'll find the hardware description and file references; it may vary a lot according to the virtual hardware, player version and hardware machine version. Here's a copy of my new.machine.vmx (originally copied from Debian 6.vmx):

    .encoding = "UTF-8"
    config.version = "8"
    virtualHW.version = "7"
    scsi0.present = "TRUE"
    scsi0.virtualDev = "lsilogic"
    memsize = "256"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "Debian 6.vmdk"
    ethernet0.present = "TRUE"
    ethernet0.connectionType = "bridged"
    ethernet0.wakeOnPcktRcv = "FALSE"
    ethernet0.addressType = "generated"
    pciBridge0.present = "TRUE"
    pciBridge4.present = "TRUE"
    pciBridge4.virtualDev = "pcieRootPort"
    pciBridge4.functions = "8"
    pciBridge5.present = "TRUE"
    pciBridge5.virtualDev = "pcieRootPort"
    pciBridge5.functions = "8"
    pciBridge6.present = "TRUE"
    pciBridge6.virtualDev = "pcieRootPort"
    pciBridge6.functions = "8"
    pciBridge7.present = "TRUE"
    pciBridge7.virtualDev = "pcieRootPort"
    pciBridge7.functions = "8"
    vmci0.present = "TRUE"
    roamingVM.exitBehavior = "go"
    displayName = "Debian 6"
    guestOS = "other26xlinux"
    nvram = "Debian 6.nvram"
    virtualHW.productCompatibility = "hosted"
    gui.exitOnCLIHLT = "FALSE"
    extendedConfigFile = "Debian 6.vmxf"
    ethernet0.generatedAddress = "00:0c:29:b1:8b:e6"
    uuid.location = "56 4d 05 92 24 e8 b0 b3-f7 37 1f d9 51 b1 8b e6"
    uuid.bios = "56 4d 05 92 24 e8 b0 b3-f7 37 1f d9 51 b1 8b e6"
    cleanShutdown = "TRUE"
    replay.supported = "FALSE"
    replay.filename = ""
    scsi0:0.redo = ""
    pciBridge0.pciSlotNumber = "17"
    pciBridge4.pciSlotNumber = "21"
    pciBridge5.pciSlotNumber = "22"
    pciBridge6.pciSlotNumber = "23"
    pciBridge7.pciSlotNumber = "24"
    scsi0.pciSlotNumber = "16"
    ethernet0.pciSlotNumber = "32"
    vmci0.pciSlotNumber = "34"
    vmotion.checkpointFBSize = "16777216"
    ethernet0.generatedAddressOffset = "0"
    vmci0.id = "1370590182"
    vmi.present = "FALSE"
    ide1:0.present = "FALSE"
    floppy0.present = "FALSE"

    Now let's focus on the changes. They're basically straightforward, but a few deserve mention.

    Change the previous VMDK references to the newly created hard disk: you need to replace “Debian 6.vmdk” with “new.machine.vmdk” everywhere in the file (just one occurrence here)

    scsi0:0.fileName = "new.machine.vmdk"

    Now it's time to change the label for your new machine, as shown in the Server (ESX, GSX, VSphere) or Player, to your favorite name (“My new Machine Name” in my case):

    displayName = "My new Machine Name"

    NVRam file with the new name:

    nvram = "new.machine.nvram"

    Extended configuration file (only if this is present) with the new one:

    extendedConfigFile = "new.machine.vmxf"

    Now change the Ethernet MAC address to a new one, or your machines cannot be on the same network together (just as with real hardware). Respect the MAC address notation and change something random in it:

    ethernet0.generatedAddress = "00:0c:29:b1:ab:ab"
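If you want a random but valid-looking address, one sketch is to keep the 00:0c:29 VMware OUI prefix and randomize the last three octets (keeping that prefix is our suggestion, not a requirement; any properly formatted address works):

```shell
# build a random MAC with the VMware OUI prefix 00:0c:29
SUFFIX=$(od -An -N3 -tx1 /dev/urandom | tr -d ' \n' | sed 's/../&:/g; s/:$//')
NEWMAC="00:0c:29:$SUFFIX"
echo "$NEWMAC"
```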

    You may change the UUIDs inside the file, but you don't need to bother with them. Save everything and import your newly created/cloned machine into your favorite player/server.

     

    Ready, Set, Go!

    Locate your new .VMX file and open it with your Server/Player, you'll see your new machine inside the remote/local repository and you're ready to start it.

    We didn't change the machine UUID because it's not necessary; VMWare will do it for us. When you run the machine for the first time you'll see a dialog asking whether you moved or copied it.

    Just select the “I copied it” button and VMWare will generate a new UUID for the machine. The machine now runs an exact copy of your previous one, with the same operating system and configuration inside. Please read these hints to solve possible problems:

    • If you're using a static IP address you need to change it in your new machine to avoid conflicts with the previous one (obviously)

    • If you're using a MS Windows OS you need to change the machine name or you'll have a “name duplicated” error when you start the machine, just change the name and make a new reboot

    • If you're serving clients with a basic service (DHCP, DNS, MS Domain Controller) you need to stop it or you'll have few network troubles (as with real servers) due to two services running in the same network (two DHCP servers in the same net are a bad thing...)

    • udev troubles with Linux networking: please read below if your cloned Linux machine runs perfectly but has no networking capabilities
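    As an aside: if you clone often and want to skip the moved/copied dialog entirely, VMware honors a VMX option that pre-answers it with “I copied it”. Add this line to the cloned .vmx before the first boot:

```
uuid.action = "create"
```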

     

    No networking? Please read

    Everything is fine with your new virtual machine but... the network card isn't configured properly? Keep reading.

    If you're using udevd (http://linuxmanpages.com/man8/udevd.8.php) you may hit a problem; as you'll see, it's just a minor one.

    udevd records plugged network cards in a configuration file, generally located in /etc/udev/rules.d. The file is called “z25_persistent-net.rules” or “70-persistent-net.rules” (Debian and Gentoo respectively) or something like that; generically it's named *persistent-net.rules, so it's not hard to find (or let me know and I'll add your distro's name here). Let's look at it to understand how it works:

    ~$ cat /etc/udev/rules.d/70-persistent-net.rules
    # This file was automatically generated by the /lib64/udev/write_net_rules
    # program run by the persistent-net-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single line.


    # PCI device 0x10ec:0x8168 (r8169)
    SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:21:85:c1:79:37", KERNEL=="eth*", NAME="eth0"

     

    Here's the “old” network card (the one you cloned): it's called “eth0” and it's not available any more (you've just changed the MAC address). You may either:

    • Delete the line referring to the “eth0” device

    • Edit the line with the proper MAC address (the one you set in the VMX file)

    I usually prefer to delete the line, so a new one will be created for me on the next reboot.
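    The delete approach is a one-liner. The sketch below works on a local sample copy of the rules file; on a real machine, point RULES at /etc/udev/rules.d/70-persistent-net.rules (or the z25_ variant) and run as root:

```shell
# Sample copy of the rules file; on a real system this would be
# /etc/udev/rules.d/70-persistent-net.rules (edit as root).
RULES=persistent-net.rules
cat > "$RULES" <<'EOF'
# PCI device 0x10ec:0x8168 (r8169)
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:21:85:c1:79:37", KERNEL=="eth*", NAME="eth0"
EOF

# Drop the rule pinning the old MAC to eth0; udev recreates it on next boot
sed -i '/NAME="eth0"/d' "$RULES"
```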

     

    NOTE: If you have a line with an “eth1” device and you don't have two network cards, it means udevd has created a line for what it considers your new network card and left the previous one in place and configured. You may remove the eth0 line and rename eth1 to eth0, OR delete both lines; udevd will recreate what it needs on the next reboot. Don't change your network configuration (/etc/network and so on): just leave udevd with the proper card and you'll see it running fine from the next boot.

    NOTE ON THE SAMPLE: If you're following my own example with a Debian 6 installation you don't need to worry about udev; previous versions (etch, lenny, …) are affected by this.


    IMPROVED SCRIPT: After a few comments reported on this blog I've decided to write a new post with an automated script; the script does everything reported here by itself, check it out here.


    I hope this small guide assists you if you decide to clone your VMware machines on your own; the file formats have stayed basically the same for a long time, and this is what I do for basic sysadmin work when I don't have the hypervisor or the proper tools with me.

    Share your comments

    Hope it helps

     

    Andrea (Ben) Benini

     

     

    WIKIBEN: Virtualization Articles Collection

    Here's a list of my articles related to virtualization. I've collected them all on this page to summarize my results, and I'll keep the page updated so you can periodically check my activities.

    I've sorted the subjects by technology, product and topic; feel free to ask me for new topics or arguments if you need more information about something. Hope this helps.


    Generic
    VMWare Server 2.x thoughts



    VMWare Troubleshooting and Tricks
    Manually clone a VMWare Virtual machine from the shell
    Clone a virtual machine from the shell (THE SCRIPT)
    Mouse/Keyboard not responding on VMWare Player with Linux
    Accessing VMWare Server 2 with vSphere Client (the unsupported way)
    Access VMWare Server 2 remote virtual machine without web interface
    HOWTO: VMWare Server 2, Disable Web Server Interface



    Linux Distro Related Topics

    Install latest VMWare Player on Gentoo (without portage)
    Install VMWare Player on Gentoo (amd64), the easy way
    HOWTO: Install VMWare Server 2 on Debian Lenny, AMD64 (64bit)

     

     

     

     

    Ben

     

     

    VPN-O-Rama : IPCop to IPCop with IPSec

    After a short introduction (http://www.linux.com/community/blogs/vpn-o-rama-vpns-intro-practical-howtos-screenshots.html) it's time to face the facts and see something practical.

    As previously mentioned I'd like to focus on ready-made Linux distros, so you can create a VPN connection on the fly in just a few easy steps. In this first episode I'll approach IPCop (www.ipcop.org) and create a VPN connection between two IPCop machines. Screenshots are nice to see, but our first step is to plan the example.

    This example is built on a private network with virtual appliances; you can obviously modify it to fit your needs. I'll use fake names and networks: translate them to your own network if needed.


     

    Network topology:

    Office                     Network     Subnet
    Headquarter (Coruscant)    10.0.2.0    255.255.255.0
    Subsidiary 1 (Alderaan)    10.0.3.0    255.255.255.0

     

     

    Firewalls:

    Location/Name    Linux Distro     Private IP (LAN)    Public IP (WAN)
    Coruscant        IPCop v1.4.21    10.0.2.94           10.0.0.94
    Alderaan         IPCop v1.4.21    10.0.3.95           10.0.0.95

     

    For simplicity I have two private and separate networks (representing two offices) connected to a private net (10.0.0.0/24) representing the Internet. It's an easy example that ports to almost any setup. I also have static IP addresses (LAN and WAN) and no NAT traversal troubles around (at least in this example; I'll come back to NAT traversal and dynamic IP addresses later...).
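    To make the routing logic concrete, here's a tiny illustrative shell helper (mine, not part of IPCop) that answers the question each firewall effectively asks for every packet: does the destination fall inside the remote subnet, i.e. should it go through the tunnel?

```shell
# Illustrative helper (not part of IPCop): test whether an IP belongs to a
# network/netmask -- the decision that sends traffic into the IPSec tunnel.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

in_subnet() {  # usage: in_subnet IP NETWORK NETMASK  (exit 0 if inside)
  ip=$(ip_to_int "$1"); net=$(ip_to_int "$2"); mask=$(ip_to_int "$3")
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# From Coruscant's point of view: 10.0.3.x is the remote Alderaan LAN
in_subnet 10.0.3.42 10.0.3.0 255.255.255.0 && echo "via tunnel"
in_subnet 10.0.0.94 10.0.3.0 255.255.255.0 || echo "direct"
```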

    IPCop installation is pretty straightforward; I'll assume you're familiar with it or can install it without serious issues (or let me know and I'll write something for you if needed). A basic installation without additional modules or plugins gives you everything you need to set up an IPSec connection between your machines.

    I'll use IPCop's built-in IPSec capabilities to set everything up. First connect to the machine in your headquarters (Coruscant): go to https://10.0.2.94:445, then select the VPNs menu and choose the VPNs option, or go directly to https://10.0.2.94:445/cgi-bin/vpnmain.cgi if you prefer. Here's what you see with a clean installation:

     


    If you've done some tests or have some previous configurations you may press “Remove all CA and certs” to wipe everything. To use IPSec on this host you need to check the “Enabled” flag (top left) and enter a fully qualified domain name (FQDN) or public IP address for this machine (in our case 10.0.0.94), then hit the “Save” button to start IPSec on IPCop.

    Now, as your first step, you'll create root/host certificates. Press “Generate Root/Host Certificates” to create an X509 cert on the Coruscant firewall (10.0.2.94); in the next screen you need to fill in some data related to your host and office. Here are mine:

    You've just created an X509 certificate inside your firewall (Coruscant), with a root and a host certificate for your machine. Here's what you'll see afterwards:

    Now save your root and host certificates by hitting the two little disk icons on the bottom right (download root certificate / download host certificate) and name them as:

    • cacert.coruscant.pem
    • hostcert.coruscant.pem

    Now we need to do the same in the other IPCop machine (Alderaan), here are screenshots taken from https://10.0.3.95:445/cgi-bin/vpnmain.cgi :

    Hit the “Generate Root/Host Certificates” button, and here's the result:

    Now save your root and host certificates by hitting the two little disk icons on the bottom right (download root certificate / download host certificate) and name them as:

    • cacert.alderaan.pem
    • hostcert.alderaan.pem

     

    Importing Certificates on both sides

    Now on the Coruscant firewall (10.0.2.94) you need to import the Alderaan root certificate: on the VPN page type “Alderaan” as the CA Name and select cacert.alderaan.pem via the “Browse” button, see the image for details:

    Hit “Upload CA Certificate” to continue; here's the result:

    This machine now knows the Alderaan certification authority and you're ready to create a VPN tunnel. Let's do the same on the Alderaan firewall (10.0.3.95), see the screenshots:


     

    Ready, Set, Go!

    Where's my VPN tunnel? Relax, we're creating it now; we've done the tough part with certificates and authorities, now let's establish the tunnel.

    On the Coruscant firewall hit the “Add” button in the middle of the “Connection status and control” tab so you can choose the type of VPN connection; we're connecting two networks, so choose “Net-to-Net Virtual Private Network” in the following screen, then press Add to continue (screenshot):


    Now fill in the remote data (Alderaan) with the proper values; as you can see they match the remote Alderaan network (10.0.3.0/255.255.255.0) and Alderaan's public/static IP address.

    In the authentication section select “Upload a certificate” and use the hostcert.alderaan.pem certificate downloaded before, then hit the “Save” button at the bottom of the page to continue;

    you'll now see a new, closed VPN connection on the Coruscant firewall.


    Now do the same on the Alderaan firewall to establish the connection: go to https://10.0.3.95:445/cgi-bin/vpnmain.cgi (Alderaan) and press the “Add” button to create a new VPN connection, selecting Net-to-Net as before.

     

    Now fill in the remote data (Coruscant) with the proper values; you're on the other side, so reverse everything: Coruscant network (10.0.2.0/255.255.255.0) and Coruscant public/static IP address (10.0.0.94). In the authentication section select “Upload a certificate” and hit “Browse” to select hostcert.coruscant.pem, see the screenshot.

    Then press save on the bottom of the page to continue


     

    Yeah! We're up and running

    These two networks are now fully connected and working. I hope you'll benefit from this article and find it useful for your work; let me know if you want further details or additional information.

    Next episodes will cover different Linux and BSD distros, more configurations, and NAT and dynamic IP addresses as well.

     

     

    Previous:
    VPN-O-Rama: VPNs intro, practical HOWTOs

    Next:
    IPCop to PFSense with IPSec

     

    Andrea Benini

     
