Community Blogs



Tips for Cloud Computing Security

    Shared software and shared resources are integral elements of cloud computing, which has become hugely popular today. It is being used by companies looking for flexibility and scalability. But there are risks involved that you have to keep in mind when using public, private or hybrid cloud computing options. The security of your vital data is of great importance, and it should be protected while you get the best out of cloud computing services. To make effective use of cloud computing and to ensure that data is transferred online securely, there are certain tips you should follow.
 

Know how your data is protected

    Data encryption and firewall security are crucial, and you should learn more about these aspects when you sign up with a service provider. Public cloud SaaS solutions make strong encryption practically mandatory. Cloud computing resources should also sit behind an inbound firewall. You might be on a cloud, but you still need strategies in place in case there is an external attack or hostile activity on the system. Service providers that talk openly about their security measures should be given precedence, because that lets you get your own strategies in order.
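
As a rough illustration of the inbound firewall point (assuming an Ubuntu instance with ufw available; the open ports below are only examples), locking down incoming traffic can be as simple as:

sudo ufw default deny incoming    # block all inbound traffic by default
sudo ufw default allow outgoing
sudo ufw allow 22/tcp             # keep SSH reachable (example)
sudo ufw allow 443/tcp            # expose HTTPS only (example)
sudo ufw enable
sudo ufw status verbose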
 
Read more at YourOwnLinux
Author: Ramya Raju
 

Broadcast Quality For Online Video Interviews

 

A simple idea for getting independent, high-quality video for online interviews is to cheat a little. This only works if you're not required to publish the video in real time.

 

The idea is to use dual recordings on each device. For example, if the person being interviewed has a high-quality webcam but a poor internet connection, the software could store a high-quality recording of their part of the session locally. In real time the interview would only be the quality the connection can handle, but because the stored recording can be re-transmitted after the interview is done, it can then be combined with the other videos to restore full quality.
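
One hedged way to capture the local high-quality copy on Linux while the live call uses whatever quality the connection allows is ffmpeg; the device names, resolution and codecs below are only examples:

# record the webcam and microphone locally at high quality during the call
# (/dev/video0 and hw:0 are example device names; adjust for your system)
ffmpeg -f v4l2 -video_size 1280x720 -framerate 30 -i /dev/video0 \
       -f alsa -i hw:0 \
       -c:v libx264 -preset veryfast -crf 18 -c:a aac -b:a 192k \
       local-high-quality.mkv
# after the interview, this file is transferred and combined with the other
# participants' local recordings to rebuild a full-quality version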

 

Low Resolution Preview Idea for Open Source Video Editing

 

An idea to speed up the process and use fewer resources when editing HD video is to use a low-resolution version of the video as a guide. This is akin to 3D software, where the animation is first rendered at a low resolution.

With this approach you first create a low-resolution version of the video, do all the cuts and effects on it, and then preview the result. If the result is good, you render the video at high resolution, replaying the editing actions as a script.
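
One hedged way to generate such a low-resolution proxy is with ffmpeg; the file names and settings are only examples, and the final render would apply the same edit decisions to the original file:

# create a small, fast-to-decode proxy of the original HD clip
ffmpeg -i original-1080p.mp4 -vf scale=640:-2 \
       -c:v libx264 -preset ultrafast -crf 28 -c:a copy proxy-640.mp4
# edit against proxy-640.mp4, then replay the recorded edit actions (the
# "script") against original-1080p.mp4 for the full-resolution render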

 

How To : Install/Upgrade to Linux Kernel 3.14.4 in Ubuntu/Linux Mint Systems

     "The Linux Kernel 3.14.4 is now available for the users and all the users of 3.14 kernel series must upgrade", announced Greg Kroah-Hartman. This kernel release comes with plenty of fixes and improvements. This article will guide you through installing or upgrading to Linux Kernel 3.14.4 on your Ubuntu or Linux Mint system; a short command-line sketch follows the list of fixes below.

Fixes

  • ahci: Ensure "MSI Revert to Single Message" mode is not enforced
  • ahci: Do not receive interrupts sent by dummy ports 
  • pinctrl: as3722: fix handling of GPIO invert bit 
  • KVM: PPC: Book3S HV: Fix KVM hang with CONFIG_KVM_XICS=n 
  • powerpc/compat: 32-bit little endian machine name is ppcle, not ppc 
  • aio: v4 ensure access to ctx->ring_pages is correctly serialised for migration 
  • cpufreq: unicore32: fix typo issue for 'clk' 
  • rtlwifi: rtl8188ee: initialize packet_beacon 
  • mtd: nuc900_nand: NULL dereference in nuc900_nand_enable() 
  • mtd: sm_ftl: heap corruption in sm_create_sysfs_attributes() 
  • libata/ahci: accommodate tag ordered controllers 
  • ahci: do not request irq for dummy port 
  • iwlwifi: dvm: take mutex when sending SYNC BT config command 
  • iwlwifi: mvm: disable uAPSD due to bugs in the firmware 
  • virtio-scsi: Skip setting affinity on uninitialized vq 
  • mac80211: exclude AP_VLAN interfaces from tx power calculation 
  • ath9k: fix ready time of the multicast buffer queue 
  • x86-64, build: Fix stack protector Makefile breakage with 32-bit userland 
  • drm: cirrus: add power management support
  • drm: bochs: add power management support 
  • Skip intel_crt_init for Dell XPS 8700 and many more...
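
As mentioned above, here is a hedged sketch of the manual route on a 64-bit Ubuntu or Linux Mint system. The .deb file names on the Ubuntu mainline kernel archive carry build-specific suffixes, so the names below are placeholders; check the v3.14.4 directory listing for the exact ones.

# Download the 3.14.4 packages from the Ubuntu mainline kernel archive:
#   http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.14.4-trusty/
# (grab linux-headers-3.14.4-*_all.deb plus the -generic headers and image
#  .deb files for your architecture; exact file names vary per build)
cd /tmp

# install whatever 3.14.4 .deb files you downloaded, then reboot
sudo dpkg -i linux-headers-3.14.4-*.deb linux-image-3.14.4-*.deb
sudo reboot

uname -r   # should report 3.14.4 after the reboot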

Read more at YourOwnLinux

 

How To : Secure Shell (SSH) Password-less Login using SSH-Keygen

    Secure Shell (SSH), as the name suggests, is an open source and highly secure, and hence widely used, protocol for executing commands remotely on a Linux host or transferring files from one Linux host to another within a network using Secure Copy (SCP). Find more details about Secure Shell in our article, Secure Shell in Linux.

    In this article, we will see how to set up password-less login between two Linux systems so that you can transfer files between them with the same level of security and trust.
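
In outline, and with the user and host names as placeholders, the typical sequence looks like this:

# 1. generate a key pair on the local machine (press Enter to accept the defaults)
ssh-keygen -t rsa

# 2. copy the public key to the remote host (you enter the password one last time here)
ssh-copy-id user@remote-host

# 3. from now on, log in or copy files without a password prompt
ssh user@remote-host
scp /path/to/file user@remote-host:/destination/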


Read more at YourOwnLinux

 

Schedule Your Jobs in Linux With CRON

Most Linux users are aware of how commands are run, processes are manipulated and scripts are executed in the terminal. But if you are a Linux system administrator, you might want them to start and run automatically in the background. For example, you might want to run a backup job every day at a specific time, automatically. Or consider collecting inventory data from the systems deployed across your network by running a script automatically on a monthly basis. But how do you schedule these jobs and execute them automatically in Linux?

    There is a utility in Linux known as CRON with which you can start your jobs automatically at a desired time and schedule them to be executed periodically.

    The cron utility consists of two parts: the cron daemon and the cron configuration files. The cron daemon is like any other service that is started automatically whenever your system boots. The cron configuration files hold the information about what to do and when to do it. The main job of the cron daemon is to inspect the configuration regularly (every minute, to be precise) and check whether there is any job to be run.

    In the /etc directory, you will find some sub-directories named cron.hourly, cron.daily, cron.weekly and cron.monthly. You can put your scripts in these directories and, as their names suggest, they will be executed automatically at the corresponding interval. For example, if you wish to run a job or service every week, simply put the script in the /etc/cron.weekly directory.
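
For schedules that do not fit those directories, you can define your own with a personal crontab; a minimal illustration, with made-up paths and times:

crontab -e    # opens your personal cron table in an editor; then add lines such as:

# minute hour day-of-month month day-of-week  command
30 2 * * *  /home/user/scripts/backup.sh      # daily backup at 02:30
0  6 1 * *  /home/user/scripts/inventory.sh   # inventory on the 1st of every month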

 

Read more at YourOwnLinux.

 

 

5 Best Free Erlang Books

The focus of this article is to select the finest Erlang books which are available to read for free. Some of the books featured here are released under an open source license. All of the texts have a lot to offer for a budding Erlang programmer.

Read more at LinuxLinks: http://www.linuxlinks.com/article/20140510054337787/FreeErlangBooks.html

 

19 cool things to do after installing Kubuntu 14.04 Trusty Tahr

Ubuntu 14.04 LTS was recently released and Kubuntu 14.04 followed swiftly. Kubuntu has been my primary distro for many years now. It brings together the wonderful KDE desktop and the app-laden Ubuntu base. So if you have just done a fresh install of Kubuntu, you can tweak a few things and install some apps to make sure everything from multimedia to office apps and browser functionality works in the best possible...
Read more...
 

Does Tor Browser Just Open a Text Editor? Here's a Simple Fix

If you use Ubuntu, then you're probably familiar with the nuisance of running Tor Browser. Yes, I know, when you click "Run-Tor-Browser", it just opens a gedit text window. Let's change that with one simple step.

 

Open your terminal and type:

gsettings set org.gnome.nautilus.preferences executable-text-activation ask

 

 

How To Build Cloud (Cluster) Hosting Without Investing a Lot of Money

Three years ago, I had an interesting problem. I needed to assemble a platform that would combine multiple racks of servers into a single entity, with dynamic allocation of resources between sites written for the LAMP stack.

However, the budget was very tight, so expensive solutions such as a Cisco Content Switch or fibre-attached disk shelves were not affordable.

And, of course, my main concern was that if one server went down, it should not affect the operation of the platform.

Back in my school days I read somewhere that "necessity is the mother of invention", which is fairly true.

First of all, the platform has to be broken down into subtasks. Something has to be done about data synchronization, since no shared storage is available. In addition, it is necessary to balance the traffic and gather some statistics on it. Finally, automating the provisioning of the necessary resources is also quite a serious problem.

Let’s start from the beginning…

I had to choose what to build the platform on: OpenVZ or Xen? Each has its pluses and minuses. OpenVZ has lower overhead and works with files rather than block devices, but it cannot run anything other than Linux distributions. Xen lets you run Windows as well, but is more complicated to work with. I went with OpenVZ, as it was better suited to the task, but you can choose whichever you like; there is no restriction on the choice.

Then I divided the server space into VDS instances, one per core. The servers differed, so I ended up with anywhere from 2 to 16 virtual machines on each server. On average, that came to about 150 virtual machines in total.

How to synchronize the data?

The next item was rapid creation of VDS instances on demand, plus protection against the failure of any single server. The solution turned out to be simple and elegant.

Each VDS starts from an initial image created as a file on an LVM partition. This image is "spread" across all servers in the platform. As a result, we have a backup of every project on each server (the paranoid cry with joy), and creating a new VDS on demand is reduced to snapshotting the image and starting the VDS, which takes literally a few seconds.
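
Very roughly, and under assumptions (an OpenVZ host, a template image on LVM, and a container configuration already in place for the new ID; all names and sizes below are illustrative), creating a VDS from the image comes down to a snapshot, a mount and a start:

# take a copy-on-write snapshot of the template logical volume
lvcreate --snapshot --name vds101 --size 10G /dev/vg0/vds-template

# mount the snapshot as the container's private area and start it
# (assumes an OpenVZ config for CTID 101 already exists)
mount /dev/vg0/vds101 /vz/private/101
vzctl start 101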

Database and API

While keeping the files consistent was simple, synchronizing the database was another matter. At first I tried the classic approach, master-slave replication, and ran into the classic problem: the slave lags behind the master.

The next step was MySQL Proxy. As a sysadmin I found it easy to set up and forget, although the configuration has to be updated whenever a VDS is added or removed. The developers, however, had their own opinion: it was easier for them to write a PHP class to synchronize INSERT/UPDATE/DELETE queries than to learn Lua, without which MySQL Proxy is useless.

Their work produced a so-called API that could discover its neighbors via broadcast, bring its own copy up to date, and inform the neighbors of any changes to the database.

It would still be worth learning Lua, though, and building a native mode in which all queries are synchronized with the neighbors.

FreeBSD

The balancer is arguably the key component of the platform: if the balancing server goes down, all the rest of the work is meaningless.

That is why I used CARP to create a fault-tolerant balancer, choosing FreeBSD as the OS and Nginx as the balancer.

Yes, NLB was replaced by two modest machines running FreeBSD (marketers are in a rage).
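
A hedged sketch of the CARP side on the two FreeBSD balancers; this uses the older cloned carp0 interface style, and the interface, VHID, password and shared address are placeholders (newer FreeBSD releases attach CARP directly to the physical interface with a slightly different syntax):

# /etc/rc.conf on each balancer (requires CARP support in the kernel or as a module)
cloned_interfaces="carp0"
ifconfig_carp0="vhid 1 pass sharedsecret advskew 0 192.0.2.10/24"
# on the backup balancer, use a higher advskew, e.g. advskew 100

# Nginx on both machines listens on the shared address 192.0.2.10;
# whichever node is CARP master at the moment serves the traffic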

And most importantly – how it works

When the platform starts, a single copy of each site runs, and a monitor on the balancer watches to make sure the primary copy is always up.

In addition, AWStats was installed on the balancer to analyze statistics and present all the logs in a convenient format, and, most importantly, there was a script that polled each VDS via SNMP for its load.

As you may recall, I devoted one core to each VDS, so a load average of 1 is a normal load for a VDS. If the load average reached 2 or above, the script created a copy of that VDS on a random server and added it to the Nginx upstream. And when the load on the extra VDS dropped below 1, it was removed again.
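
That logic can be sketched as a small shell loop. This is purely illustrative: the OID is the standard UCD-SNMP 1-minute load average, while the VDS list file and the clone/remove helpers are hypothetical stand-ins for the platform's own scripts.

# poll every VDS for its 1-minute load average over SNMP
for host in $(cat /etc/platform/vds-list); do          # hypothetical list of VDS addresses
    la=$(snmpget -v2c -c public -Oqv "$host" .1.3.6.1.4.1.2021.10.1.3.1)
    if [ "$(echo "$la >= 2" | bc)" -eq 1 ]; then
        clone_vds_and_add_upstream "$host"              # hypothetical helper: copy VDS, add to nginx upstream
    elif [ "$(echo "$la < 1" | bc)" -eq 1 ]; then
        remove_extra_vds "$host"                        # hypothetical helper: drop the extra copy
    fi
done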

Summary

If you take a rack of servers and switches that support the CARP protocol, then to create an ESDS cloud hosting platform you will need to:

  • Learn Lua and set up transparent synchronization through MySQL Proxy
  • Hook billing up to account for additional VDS copies and traffic
  • Write a web interface for managing the VDS instances
  • Fill the racks with hardware costing a sum with four zeros. Compared with brand-name solutions, where the price of a single rack runs to a sum with six zeros, it is well worth it.
 

An Insight On Dedicated Server Terminology

At times technical jargon can become overcomplicated, and it ends up complicating even simple matters. While investigating web hosting servers, you will often come across titles like 'The Beginner's Guide' or 'Simple Steps' that are filled with words like hypertext, applet and many other unexplained acronyms.

Compared to other web hosting platforms, dedicated hosting is a comparatively complicated hosting solution and it takes some time to get accustomed to it. It is important to understand the benefits of a dedicated server and the reasons why you should opt for this hosting solution. Once you have this basic understanding, you can move on to some of the crucial words and phrases that will help you gain an insight into how a dedicated server functions.

The aim of this article is not to burden users with explanations but simply to educate them on the basic dedicated server terminology they should be aware of. Here is a brief explanation of the important terms:

First, let's look at the concept of a dedicated server itself. Basically, this is your personal server: there is no need for you to share resources, as the server is completely dedicated to you. It is a flexible hosting solution that lets you decide on factors like the operating system, hardware and the other resources on the server. Dedicated servers usually provide higher security and better performance than other hosting platforms. Although a dedicated server has a higher price tag, it is complete value for money for resource-intensive websites.

DNSBL (DNS blacklist): this is something you don't want to see. You might come across this term when your server is blacklisted. It refers to lists of networks that distribute spam or other harmful services. Usually it is a list of blacklisted IP addresses that you would rather not be associated with, for one reason or another.

While many web hosting companies' offerings comprise unmanaged dedicated servers, another term you might come across is managed servers. As dedicated servers are more personal and private, there is no interference from others, which means you can configure the server to your preference. By selecting a managed dedicated server, you will be provided with round-the-clock support by the web hosting company, which will maintain the server and ensure it keeps running smoothly.

The name server is the server that maps your human-readable domain name to your server's IP address. This is done so that users can access and view your website through the domain name, which means there is no need for them to enter a series of numbers into the browser in order to reach your website.

RAID (Redundant Array of Independent Disks) is a phrase you might not frequently come across, but it is good to know about. RAID is a structure that protects your data by storing it across a series of redundant hard disks, so the data can still be used if one of your operating hard drives malfunctions for any reason.

The method that enables you to encapsulate data from one network protocol inside another is known as tunneling. A common type of tunneling uses Secure Shell (SSH). SSH enables you to tunnel a wide range of protocols in order to provide efficient and secure file transfers and connections.
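
As a small illustration of SSH tunneling (the host name, user and ports are placeholders), a local forward carries traffic for one port over the encrypted SSH connection:

# forward local port 8080 to port 80 on the dedicated server
ssh -L 8080:localhost:80 user@dedicated-server.example.com
# while the tunnel is open, browsing http://localhost:8080 reaches the
# server's web service through the encrypted SSH connection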

With a proper understanding of the terminology mentioned above, you will be in a better position to use a dedicated server hosting solution to the best of its capacity.

 
