This tutorial shows how to install an Ubuntu 15.10 (Wily Werewolf) server (with Apache2, BIND, Dovecot) for the installation of ISPConfig 3, and how to install ISPConfig 3. ISPConfig 3 is a webhosting control panel that allows you to configure the following services through a web browser: Apache or nginx web server, Postfix mail server, Courier or Dovecot IMAP/POP3 server, MySQL, BIND or MyDNS nameserver, PureFTPd, SpamAssassin, ClamAV, and many more. This setup covers the installation of Apache (instead of nginx), BIND (instead of MyDNS), and Dovecot (instead of Courier).
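As a rough sketch of where the tutorial is headed, the core services can be pulled in with apt on Ubuntu 15.10 along the following lines (these are the standard Ubuntu package names; the full tutorial's exact package list and configuration steps may differ):

```shell
# Become root first (sudo -s), then refresh the package index
apt-get update

# Web server and PHP (Apache instead of nginx)
apt-get -y install apache2 php5 libapache2-mod-php5

# Mail: Postfix MTA plus Dovecot for IMAP/POP3 (instead of Courier)
apt-get -y install postfix dovecot-imapd dovecot-pop3d

# DNS: BIND (instead of MyDNS)
apt-get -y install bind9 dnsutils

# Database, FTP server, and mail filtering
apt-get -y install mysql-server pure-ftpd-common pure-ftpd-mysql
apt-get -y install spamassassin clamav clamav-daemon
```

ISPConfig 3 itself is installed afterwards and wires these services together through its web interface.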
Linux Foundation Scholarship Recipient: Enrique Sevillano
The Linux Foundation regularly awards scholarships as part of its Linux Training Scholarship Program. In the five years that the Linux Foundation has hosted this program, it has awarded a total of 34 scholarships totalling more than $100,000 in free training. In this continuing series, we are sharing the stories of recent scholarship recipients in the hope that they will inspire others.
Enrique Sevillano (age 42) is a recipient in the Sys Admin Superstar category and works as an IT manager at an energy utility company in the United States. He recently decided to move the company’s architecture to Linux. By doing so, he says, they have optimized services on old servers that otherwise would have been cost prohibitive. Enrique says Linux and open source have allowed him to deploy a high-availability virtualization infrastructure as well as affordable storage and cloud solutions.
How did you become interested in Linux and open source?
I have been an evangelist of the core concepts of Linux and open source software from the beginning of my career; however, it was in my current professional position that I experienced Linux's full-blown benefits. It was an eye-opener to the current reality for many of us: How do you manage and deploy a very demanding, hardened, and complex computing system for the data center — such as the one used by energy utilities — without enough budget to deploy a full HA (high availability) virtualization infrastructure using VMware or any other commercial solution, or to deploy expensive storage solutions?
Thanks to Linux and open source, I was able to overcome that challenge. I was able to fully deploy an HA virtualization infrastructure for our data center, as well as deploy affordable storage and private cloud solutions based on Linux.
Another reason I focused on open source and Linux was that standard commercial solutions provide very limited trial periods. Those timeframes never synced with the time I needed to test an IT solution at my own pace in my own lab. Linux and open source are still the smartest way for me to do it.
Last but not least was standardization — being able to deploy IT services one way, the standard way, saved me a ton of time compared with commercial applications, where I have to learn their ways, which are usually unlike anybody else's, even when you are trying to deploy the same IT service. It's been nice to be able to deploy, secure, and integrate most IT services on one computing platform; that is the power of Linux.
What Linux Foundation course do you plan to take?
I am taking LFS230, Advanced Linux System Administration and Networking.
How do you expect to use the knowledge you gain from the course?
I am planning to use all this knowledge widely across the company's computing architecture, mainly on the networking side, as I am also planning to standardize the L2 and L3 layers across the company using Linux on white boxes that are compatible with Linux open networking systems.
What are your career goals? How do you see a Linux Foundation course helping you achieve those goals?
With all my background and professional experience, I am building my career as an expert in deploying Linux for the enterprise, as well as in transitioning to Linux from other computing platforms. Linux security is a big portion of it, and as a result, I see Linux Foundation courses as an essential part of my career. Short-term goals I have in mind are the Linux Foundation Certified Engineer certification and the Linux Security training course.
What other hobbies or projects are you involved in? Do you participate in any open source projects at this time?
My wife was just granted the Technology Teacher Statement of Eligibility from the Colorado Department of Education, and I am helping her start up a STEM (Science Technology Engineering and Math) program in our school district. This is a difficult path to take in the current situation of our school district, but we like the challenge. We are also trying to start up a robotics program using Raspberry Pi and/or Arduino hardware. We will present the program to the recreational center in our district, and for both programs, we are relying heavily on Linux and open source.
The Story of PhillipOS, Deri, and Timen: My life as a programmer (PhillipOS Download Link Included)
PhillipOS is a Linux distro I developed that is based on openSUSE 13.2. I first started developing operating systems in 2005, when I began work on Ingos Linux build one, based on Ubuntu 4.10. When I finished it, I sent it to a few of my friends via email, and they all liked it.
In early 2006, I was still interested in Ingos Linux development, and I started work on build two. It was based on Ubuntu 5.04 and included many of the same features, but the live CD image was much larger, due to the many graphical improvements in Ingos Linux. I finished the build in two months and sent it to my friends in June 2006. They said that they preferred the first build, because the extra hard disk space it took up made it crash more often. I started fixing the bugs for a third build, but soon lost interest in Ingos Linux, and stopped development in September 2006.
Two years later, in 2008, I regained interest and decided to develop Ingos Linux again. I started on a new third build. When I finished it in November 2008, I sent it to my friends, and they recommended that I post it on a Linux web forum. So on 14 November 2008, I posted it to a Linux web forum called linuxlike.net. It got good feedback from members.
A few days later, on 19 November 2008, a computer programmer named Kobie asked me how I developed the operating system. We soon became friends, and we founded a software company called Deri Computer Software Development in January 2009.
I worked at the company for a few months before I left in May 2009 to develop “ShareTos”, a GUI for Arch Linux. The project was finished in February 2010, and posted on linuxlike.net.
Around this time, Kobie emailed me a download link for Deri's new operating system, ACA Server 2011, based on the hybrid kernel. I downloaded and tested it, and liked it. I rejoined the company in May 2010 and started development on Sony Linux build one, based on Ubuntu 9.10. When I finished it in December 2010, I tried to post it to linuxlike.net, but the forum had been taken down, so I just sent it to my friends and all the staff of Deri. They all gave positive feedback, so I started to develop an RTM version.
However, in May 2011, I suffered a major computer error, and all the source code was lost.
In August 2011, I started development on Deri-HDOS, which stood for Deri Hard Disk operating system. This was based on the hybrid kernel. However, I soon gave up development in December 2011.
In January 2012, Kobie and I started on “9OS”, a linux distro based on Ubuntu 11.04. I focused on the programs and source code, and Kobie focused on the visual of the OS. However, Kobie soon lost interest, but I continued on with the project until May 2013. Part of the source code for PhillipOS is from this project.
In October 2013, a few members of the Deri staff and I started development on “10OS”, to keep the naming similar. 10OS was based on Ubuntu 13.10. Development ceased in March 2014.
PhillipOS development started in July 2014 as an official Deri project. It was based on Linux Mint 16. However, when Deri became defunct in December 2014, I carried on development after converting the source code base from Linux Mint 16 to openSUSE 13.2. In May 2015, Kobie and I started working together again, and we founded Timen Computeronics. We continued developing PhillipOS together. The latest build, 1.11, was finalized on November 2, 2015.
Going IaaS: What You Need to Know
In a previous article, we looked at the evolution of IT infrastructure and how Infrastructure as a Service (IaaS) is helping enterprises focus on their core competency instead of worrying about the underlying infrastructure.
In this article, we will address some of the questions we asked in the previous story: Is IaaS or OpenStack right for every enterprise? Are there cases where you don’t need IaaS? How does it affect the cost? What things should you consider before moving to IaaS? What are the tools available?
Is IaaS for Everyone?
Whether or not you need IaaS is a business decision. You need to ask yourself what you are trying to do with your infrastructure: Do you want to reduce cost? Do you want to be more agile? Do you need dynamic infrastructure? Are you going to scale very rapidly in the next six months or so?
Then, you have to look at your pain points with the current infrastructure: Do you experience slow provisioning times? Are your customers demanding faster response times while your IT holds you back? Do you experience peaks and valleys of demand that your current infrastructure can’t handle?
Another equally important question is: Do you want to keep your infrastructure on premise or off site? There are decisions to make regarding going public or keeping it private. There are also considerations around compliance, security, privacy, data sensitivity, and so on. All of these things factor into the business decision.
You also have to consider the technical aspects. The decision boils down to the skill set of your existing teams. If you are moving from a traditional IT setup where you have IT admins, you are now looking at DevOps. You will either have to train your IT folks to code or hire new people.
Other decisions to consider include: Do you want to manage it in-house or do you want to outsource it? If you are a small company and don’t want to deal with the details, then you can go to a public cloud or get a managed service cloud where someone will handle it for you. All you do is consume it.
There are a lot of options to think about before using IaaS, but the bottom line is that these are business decisions and you need to weigh all these options in light of your own needs.
Things to Know Before Going IaaS
In a traditional model, every component of the stack is a separate entity: You have separate compute, networking, storage and so on. The advantage there is that you have specific teams focused on these areas with in-house expertise. However, this approach also means added people cost and the headache of maintaining and managing the hardware.
When it comes to IaaS, everything is software defined: storage, networking, compute. Now you need fewer people, but you need people who work closely with each other, because components like the hypervisor, the virtualization layer, and networking capabilities are intricately tied together. Although you have succeeded in breaking down the silos of such teams found in the traditional model, you will have to build a cohesive team that can work effectively together.
Additionally, when you think of IaaS, you must remember that you will have different applications from different business units from within your company using that shared infrastructure. So, you need to invest in tools and technologies to do metering and chargeback and showback — depending on how your organization is structured. OpenStack provides all the data of instrumentation, which tells you what was used. But, you still have to invest in tools to convert all of that and put things together.
If you are embracing agile, rapid development, you need to make sure your developer tools allow you to be flexible. For example, monitoring is an integral part of any infrastructure. Companies with traditional IT infrastructure do this. Big companies have thousands of servers. They monitor the stack, update it, patch it…but they do it at the individual server level; they need to know every detail, which can become very, very expensive. They can afford it; not everyone can.
Consider the alternative scenario with IaaS: monitoring is built in. There is monitoring for your base infrastructure, and there are tools like Ceilometer, which basically provides monitoring as a service; it monitors the entire infrastructure from the server up. But monitoring alone is not enough; you also need analytic capabilities so that you can debug problems quickly. Tools like Zabbix and Grafana can help you do that.
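As an illustration of plugging programmatic analytics into such tools, Zabbix exposes a JSON-RPC 2.0 API over HTTP. Below is a minimal sketch of building request bodies for it; the `user.login` and `host.get` method names and the `auth` token field come from the Zabbix API, but the helper function itself and the credentials shown are hypothetical:

```python
import json

def zabbix_request(method, params, auth=None, req_id=1):
    """Build a Zabbix JSON-RPC 2.0 request body as a JSON string."""
    body = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        # Session token returned by a prior user.login call
        body["auth"] = auth
    return json.dumps(body)

# Payload for logging in; it would typically be POSTed as
# application/json to http://<server>/zabbix/api_jsonrpc.php
login = zabbix_request("user.login", {"user": "Admin", "password": "zabbix"})

# Authenticated payload listing hosts, using the token from the login reply
hosts = zabbix_request("host.get", {"output": ["host"]}, auth="<token>")
```

The point is not the specific tool but the workflow: one monitoring source feeding scripted queries, rather than per-server manual inspection.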
“On IaaS side, there is one monitoring system that monitors the entire shared infrastructure and analytics are built on top of it. So it reduces cost; you don’t need to be expert on all these things; you just need to know one thing and you don’t need that many people. It’s more cost effective across the board, and it’s more agile too,” said Kamesh Pemmaraju, vice president of product marketing at Mirantis.
Avoiding Vendor Lock-in
Although OpenStack is open source, you still have to consider the risks of vendor lock-in. Different players offer their own OpenStack implementations, but how do you know they are compatible with each other? How do you ensure that you can easily switch vendors without getting locked in?
To ensure interoperability, the OpenStack Foundation has something called DefCore. It sets base requirements by defining 1) capabilities, 2) code, and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products that are labeled OpenStack.
“It’s effectively about APIs, making sure that APIs are compatible, the implementation has certain SLAs and contract associated with it,” said Pemmaraju.
There are other factors to keep in mind while choosing an OpenStack implementation. The reputation of the vendor plays a big role. There are pure-play companies like Mirantis, which focus entirely on OpenStack and let the customer pick the rest of the components that make up the cloud. There are companies like Red Hat that are reputed to work only upstream. But there are other vendors, too.
Many vendors try to push their own products along with OpenStack — such as the operating system, servers, middleware components, management systems, orchestrations systems, plugins, etc. If any or all of these components are proprietary, then obviously you are getting locked into the stack from that vendor. Avoid such vendors. You don’t want to be locked inside your own cloud.
Read the next article in this series: How to Test-Drive OpenStack
Read the previous article: Infrastructure Should Enable Not Block Business
Get it all at once. Download a free 26-page ebook, “IaaS and OpenStack – How to Get Started” from The Linux Foundation Training. The ebook gives you an overview of the concepts and technologies involved in IaaS, as well as a practical guide for trying OpenStack yourself.
Open Source R Code to Get New Hub with Help from the Linux Foundation
The R Consortium and the Linux Foundation will fund a new open source developer hub for the R programming language called R-Hub, which complements CRAN and R-Forge.
The R Consortium and the Linux Foundation are investing in a new code-hosting platform that will help streamline the development and distribution of software packages for R, the popular statistical programming language. Titled R-Hub, the platform will offer development, building, testing and validation services for R packages. R developers proposed the creation of R-Hub in July 2015 to serve as “the everything-builder the R community needs.”
Read more at The VAR Guy
Keeping Linux Clean
One of the great things about Linux is how stable it is over time. The biggest challenge with Linux is getting it installed and finding and configuring the software you need to get stuff done. Once you accomplish that, it pretty much just runs. There’s not much in the way of system maintenance to worry about. Windows, on the other hand, is what I call a “dirty” system, in that it generates lots and lots of extra data that it leaves on the hard drive as it runs. It’s notorious for slowing down over time as this data piles up, and Windows users either have to install software to clean all of this trash out or reload the system periodically to keep that freshly booted feeling. There’s actually a whole industry devoted to selling “cleaners” for Windows. Some of these programs are really just malware in disguise, but many are quite useful. Of course, the problem is figuring out which is which. (Read the rest at Freedom Penguin)
Kaspersky Announces Death of CoinVault, Bitcryptor Ransomware, Releases All Keys
Over 14,000 keys used to unlock files encrypted by CoinVault and Bitcryptor have been released, signaling the death of the ransomware variants. Kaspersky has released all the known keys required to unlock files encrypted by the CoinVault and Bitcryptor ransomware, giving victims the chance to get their files back without paying up.
The Moscow-based cybersecurity firm says both variants of the ransomware are now dead, as all the decryption keys required to unlock infected systems are now in the public domain.
Read more at ZDNet News
Linux Kernel 4.3 Officially Released, It’s Now the Most Advanced Stable Version
Linus Torvalds has announced that Linux kernel 4.3 has been released and is now available for download. This marks the end of the 4.3 development cycle and the beginning of the next one, 4.4.
The new Linux kernel 4.3 has finally arrived, and it looks like no major problems have troubled the developers. As usual, the new Linux kernel packs an assortment of changes and improvements, and we’ll likely see it integrated very soon in a host of operating systems. This is not a long term release, so there won’t be a lot of updates down the line, but any kernel upgrade is usually a good one.
Distribution Release: OpenELEC 6.0
Stephan Raue has announced the release of OpenELEC 6.0, a major new version of the specialist Linux distribution designed for media centres and featuring the brand-new Kodi 15.2 media centre software: “The OpenELEC team is proud to announce OpenELEC 6.0 (6.0.0) The most visible change is Kodi 15.2 (Isengard). Beginning with Kodi 15.0, most audio encoder, audio decoder, PVR and visualisation add-ons are no longer pre-bundled into OpenELEC but can be downloaded from the Kodi add-on repository if required. PVR backends, such as VDR and TVHeadend, will install needed dependencies automatically…”
Storage Node (LVMiSCSI) deployment for RDO Liberty on CentOS 7.1
Below are brief instructions for a four-node deployment test (Controller, Network, Compute, and Storage) on RDO Liberty (CentOS 7.1), performed on a Fedora 21 host with the KVM/Libvirt hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Four VMs (4 GB RAM, 4 VCPUs each) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP, and external subnets), the Compute Node VM with two VNICs (management and VTEP subnets), and the Storage Node VM with one VNIC (management).
Setup:-
192.169.142.127 – Controller Node
192.169.142.147 – Network Node
192.169.142.137 – Compute Node
192.169.142.157 – Storage Node (LVMiSCSI)
The complete text may be seen here
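For context, an LVMiSCSI backend on the storage node is typically wired up in /etc/cinder/cinder.conf with a section along these lines. This is a sketch using common RDO defaults: the volume group name cinder-volumes and the lioadm helper are assumptions, not taken from the linked text, and the IP is the storage node address from the setup above:

```ini
[DEFAULT]
enabled_backends = lvm

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
iscsi_ip_address = 192.169.142.157
```

With a section like this in place, cinder-volume on the storage node carves LVM logical volumes out of the volume group and exports them to compute nodes over iSCSI.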