
The Open Source SDN Distro That Keeps Microsoft’s WiFi Secure

In case you didn't know, "Microsoft IT is big," according to Brent Hermanson, who leads the Network Infrastructure Services group for Microsoft IT. In a keynote presentation at OpenDaylight Summit in September, Hermanson noted that Microsoft IT has users and locations all over the globe. Until recently, the corporate network followed a legacy approach, but now the team wants to modernize it. The need for corporate networks in buildings has evaporated, and 70 percent of wired network ports are now unused.

Microsoft has adopted a "Wireless First" approach along with an "Internet First" approach to their IT investments. The Wireless First approach centers on WiFi, with a driving concern for ensuring that users are more secure in a Microsoft building than at the local coffee shop, given that their workloads are still in the cloud. The Internet First approach leads to the key question of how to maintain QoS and ensure security. With more and more workloads in the cloud, the new default is that everything goes to the Internet. The corporate intranet is reserved for applications, such as Skype, that require QoS and security. This approach, Hermanson noted, has produced an estimated cost savings of 50 percent.

Yet, Hermanson continued, it's a "huge cultural change" for how they've built up processes that secure their data, manage their identity, and control data loss prevention. The way they view it, the corporate network becomes the IT data center and office locations are just on the Internet. As they aggressively move workloads to Azure, they need to aggressively move users to an Internet-optimized path.

Gert Vanderstraeten, Network Architect at Microsoft, did note that "not all traffic gets dumped on the Internet." He said that Skype for Business requires QoS and that High Business Impact (HBI) information requires security, neither of which you will get on the Internet. Thus, the default is the Internet first, with these noted exceptions going to the corporate WAN, where they have a better chance of QoS and security.

There are many ways to mark traffic, Vanderstraeten said. A method they tried was marking based on known UDP port numbers. This worked great until employees figured out how to spoof the port number, making their traffic always a high priority. Next, they added DPI (Deep Packet Inspection). This worked even better, about 75 percent of the time, but the move to encrypting everything dampened this approach.

Dr. Bithika Khargharia, a principal solutions architect at Extreme Networks and director of product and community management at the Open Networking Foundation (ONF), then elaborated on the new approach by discussing a project called Atrium Enterprise. Atrium Enterprise is an open source SDN distribution that is ODL-based and has an integrated unified communications and collaboration application. It runs on Atrium partner hardware, according to Khargharia.

"In phase 2, what they are essentially providing is a VNF, a virtual network function," she said. The Skype 5-tuple information is communicated to the ODL SDN controller, which then tells this VNF that this is Skype traffic and what to do with it. This function will sit behind the building's router at what Vanderstraeten also refers to as the "Decision Point."

Khargharia noted they are looking at the use case of Unified Communications (Skype) in the cloud serving enterprises, with one or more service providers (SPs) providing connectivity between them. They are interested in an end-to-end solution where Skype, for example, communicates its requirements to both the enterprise cloud and the SP's cloud. In her example, the enterprise could be ODL-based and the SP could be ONOS-based. The requisite APIs would be SDN controller-independent to allow this end-to-end signaling.

Cloud Native Computing Foundation Adds OpenTracing Project

The Cloud Native Computing Foundation (CNCF) today officially announced that the open-source OpenTracing project has been accepted as a hosted project.

CNCF got started in July 2015 as a Linux Foundation Collaborative Project. The inaugural project behind the CNCF is Google's Kubernetes, which was recently updated to version 1.4.

In May 2016, CNCF welcomed its second project, the Prometheus monitoring project. Now with OpenTracing there is another key tool being added to the CNCF portfolio.

Read more at Internet News

IRC 3: The Original Online Chat Program Gets Updated

Long before there was WhatsApp, Slack, or Snapchat, IRC was the program for online chatting. And, it's not dead yet.

Internet Relay Chat (IRC) was born in 1988 to help people message each other over the pre-web Internet. While many other programs have become more popular since then, such as WhatsApp, Google Allo, and Slack, IRC lives on primarily in developer communities. Now, IRC developers are updating the venerable protocol to revitalize it for the 21st century.

Read more at ZDNet

Red Hat Announces Open Source Release of Ansible Galaxy with Aim of Advancing Automation

Red Hat on Tuesday launched its Ansible Galaxy project with the full availability of Ansible Galaxy's open-sourced code repository. Ansible Galaxy is Ansible's official community hub for sharing Ansible Roles. By open-sourcing Ansible Galaxy, Red Hat further demonstrates its commitment to community-powered innovation and to advancing the best in open source automation technology.

Ansible Tower by Red Hat offers a visual dashboard, role-based access control, job scheduling, and graphical inventory management, along with real-time job status updates.

Read more at Computer Technology Review

Fear Makes The Wolf Look Bigger

The biggest impediment to the DevOps "Revolution" may be the language used to describe it. Many proponents focus on the automation aspect of DevOps. At its core, automation implies giving up control, and that's a scary prospect. This tech-centric focus does a disservice to what DevOps is really about.

Automation is just one aspect of DevOps, and at the risk of committing heresy, it is the least interesting. Please, before you make a run on pitchforks, hear me out.

DevOps is based on three key pillars: People, Process, and Automation. I believe their importance to a business should be considered in that order.

Read more at Chris Scharff’s Blog

 

Blockchain Adoption Faster Than Expected

A study released last week by IBM indicates that blockchain adoption by financial institutions is on the rise and beating expectations. This is good news for IBM, which is betting big on the database technology that was brought to prominence by Bitcoin. Yesterday, Big Blue announced that it has made its Watson-powered blockchain service available to enterprise customers.

For its study, IBM’s Institute for Business Value teamed with the Economist Intelligence Unit to survey 200 banks spread through 16 countries about “their experience and expectations with blockchains.” The study found that 15 percent of the banks surveyed plan to implement commercial blockchain solutions in 2017.

Read more at IT Pro

Keynote: Apache Milagro (incubating) – Brian Spector, CEO & Co-Founder, MIRACL

https://www.youtube.com/watch?v=bIaA7-Eady0&list=PLGeM09tlguZTvqV5g7KwFhxDlWi4njK6n

In this keynote, Brian Spector provides an introduction to Apache Milagro, which enables a post-PKI Internet that provides stronger IoT and Mobile security while offering independence from monolithic third-party trust authorities. 

 

Fujitsu Open Source Project Aims to Be Front End for Cloud Foundry Service-Broker API

Last year, Fujitsu launched its first open source project, Open Service Catalog Manager (OSCM), for service providers, IT departments and end users to manage and track the cost of provisioning cloud-native applications.

It is, essentially, a platform to manage cloud services and build marketplaces where all of the major cloud service providers from VMware to AWS to Google Compute can list and manage their offerings. IT managers can then shop for and provision cloud services, as well as track and monitor their organizations’ cloud use and purchasing.  

A CIO, for example, can see an overview of the services a company consumes, who authorized them, do SLA comparisons, and create other reports.

It also offers a registry of technical services for service providers such as AWS, VMware, Salesforce, and Office 365, that provides everything a service needs to provision an instance.

The goal is to make subscribing to cloud resources – regardless of the vendor or service type – “as easy as buying a videocam from an Internet shop,” said Uwe Specht, senior manager at Fujitsu, in his talk at LinuxCon Europe last week.

Companies are desperate for such a software solution as they increasingly turn to public and hybrid cloud services to deliver products and services.

Fujitsu believes that by open sourcing its software, it can lay the foundation for an industry-wide, vendor-neutral platform for provisioning cloud services.

“As far as we know there is no other open source project that is really for a multi-service catalog,” Wolfgang Ries, CMO of Fujitsu EST, said in the session. “This was built as a single-pane-of-glass, self-service catalog for any type of IT service in your organization.”

OSCM is similar in function to Murano, OpenStack’s open source application catalog, but with a broader scope because it integrates with all service offerings and isn’t limited to IaaS, Ries said.

Ries also raised the possibility of using OSCM as a front end for Cloud Foundry’s Service Broker API. Fujitsu, a platinum sponsor of CNCF and a silver sponsor of Cloud Foundry, has been involved in the Cloud Foundry Service Broker API working group and aims to allow OSCM to leverage the Cloud Foundry Service Broker API.

OSCM has been seven years in the making at Fujitsu, and has seen several iterations before the company released it as open source in 2015. The version they released represents the solution that best meets its customers’ needs for a provisioning catalog system, Specht said. Fujitsu can bring this history and knowledge to the open source community to create a foundational technology for provisioning across cloud providers.

But first, they must build the project’s community. In the past year OSCM has gained 20 contributors on GitHub – largely comprising Fujitsu developers – has processed 1,400 pulls, and has amassed 200 users in the Docker registry where users can download a container with a basic OSCM installation. They are actively recruiting more contributors to help grow the project.

For more information on the project and to contribute visit: http://openservicecatalogmanager.org/ui/
 

Apache on Ubuntu Linux For Beginners

The Apache HTTP server is a mighty beast that powers the majority of websites. It has evolved into a complex server that slices, dices, dances, and sings. It powers vast hosting centers, and it is also splendid for running small personal sites.

The trick with Apache is knowing which configurations you need as it has plenty to choose from. We’ll start with setting up a single site, and then in part 2 set up SSL, which is vitally important, and we’ll learn a bit about the .htaccess file. Yes, .htaccess, that wonderful file that allows us to make per-directory configurations, but which uses such a contorted syntax it reduces the most stoic server admin to tears and whimpers. The good news is you never need to use .htaccess if you have access to your Apache configuration files. But when you don’t, for example on shared hosting, you need .htaccess.
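
For a taste of what that per-directory syntax looks like, here is a tiny example .htaccess that turns off directory listings and sets a custom 404 page. The error page path is made up for illustration, and which directives are honored depends on the AllowOverride setting in your main configuration:

# disallow directory listings for this directory
Options -Indexes
# serve a custom page for 404 errors (example path)
ErrorDocument 404 /errors/not-found.html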

In this series we’ll tackle Debian/Ubuntu/Mint and Fedora/CentOS/Red Hat separately, because the various Linux families are all special flowers that feel they must organize the Apache configuration files in their own unique ways.

On Debian/etc. install Apache with this command:

$ sudo apt-get install apache2

This installs Apache, utilities, configuration files, and other items you can see with apt-cache show apache2. Debian/Ubuntu start Apache automatically after installation. Point your web browser to http://localhost to verify that it’s running; you should see the default index page (Figure 1).

Figure 1: Apache2 Ubuntu default page.

When you see this, you're ready to put it to work.
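
If you prefer to check from the command line, something like this works on systemd-based releases (assuming curl is installed):

$ systemctl status apache2
$ curl -I http://localhost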

Where is Everything

Configuring Apache appears complex at first, but when you study the configuration files you see a nice modular scheme that makes it easy to manage your configurations. The configuration files are in /etc/apache2. Spend some time studying your configuration file structure and reading the default configuration files. You should have these files and directories:


apache2.conf                
conf-available              
conf-enabled                
envvars
magic
mods-available
mods-enabled
ports.conf
sites-available
sites-enabled

The main server configuration file is /etc/apache2/apache2.conf. You won’t spend much time editing this file as it ships with a complete configuration, and it calls other configuration files for the bits you need to customize. Look in here to see the location of server configuration files, your HTTP user and group, included files, log file locations, and other global server configurations.
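
For reference, the include section of a stock Debian/Ubuntu apache2.conf looks roughly like this (comments trimmed; exact wording varies by release):

IncludeOptional mods-enabled/*.load
IncludeOptional mods-enabled/*.conf
Include ports.conf
IncludeOptional conf-enabled/*.conf
IncludeOptional sites-enabled/*.conf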

conf-available contains additional server configurations. Keep any additional global configurations in this directory, rather than adding them to apache2.conf.

mods-available contains configurations for all installed modules.

sites-available contains configurations for your virtual hosts. Even if you’re running only a single site you will set it up in a virtual host configuration.

None of the configurations in these directories are active until they are symlinked to their respective *-enabled directories. You must use Apache commands to set up new sites correctly, which we will get to in the next section.
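
The commands follow a consistent pattern: a2enmod/a2dismod manage module symlinks, a2enconf/a2disconf manage the extra configurations, and a2ensite/a2dissite manage virtual hosts. A couple of examples (the configuration name is made up for illustration):

$ sudo a2enmod rewrite
$ sudo a2enconf my-global-tweaks
$ sudo a2dissite 000-default

Changes take effect after you reload Apache.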

envvars contains Apache’s environment variables.

magic has file MIME types, so that Apache can quickly determine the file type from its first few bytes.

ports.conf configures the TCP ports that Apache listens on.
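
The stock ports.conf is short; it typically looks something like this, with the SSL port wrapped in module checks:

Listen 80

<IfModule ssl_module>
    Listen 443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 443
</IfModule>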

Single Site

The installation comes with a default site that is configured in sites-available/000-default.conf. You could modify and use this, but it's better to leave it alone because it's a useful testing tool. Let's be like real nerds and make our new site from scratch. First create its document root (which in this example is test.com), enter the new directory, then create and test your sample index page:


$ sudo mkdir -p /var/www/test.com
$ cd /var/www/test.com
$ sudo nano index.html

Feel free to copy this for your new test page. It must be named index.html and go in your document root:


<!DOCTYPE html>
<html>
<head>
<title>Test.com index page</title>
</head>
<body>
<h1>Hello, welcome to test.com! It works!</h1>
<h2>That is all I have to say. If you don't see this then it doesn't work.</h2>
</body>
</html>

Test your new index page by opening it in a web browser (Figure 2), which in this example is file:///var/www/test.com/index.html.

Figure 2: Test page.

So far so good! Now create the virtual host file so that Apache can serve it up. All virtual host files must have the .conf extension, because this is how they are defined in apache2.conf.


$ cd /etc/apache2/sites-available/
$ sudo nano test.com.conf

Copy this example configuration, substituting your own email address:


<VirtualHost *:80>
    ServerAdmin carla@localhost
    DocumentRoot /var/www/test.com
    ServerName test.com
    ServerAlias www.test.com
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Apache calls this name-based virtual hosting, which allows you to serve multiple virtual hosts on a single IP address. IP-based hosting requires that each virtual host have its own IP address. Now you must activate the new configuration with a2ensite:


$ sudo a2ensite test.com.conf
Enabling site test.com.
To activate the new configuration, you need to run:
  service apache2 reload
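
On current Ubuntu releases the systemd equivalent works too, and checking the configuration syntax first is a good habit:

$ sudo apache2ctl configtest
$ sudo systemctl reload apache2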

Restart Apache, then point your web browser to http://test.com (Figure 3).

Figure 3: Test page live.

Hurrah, it works! At this point you can go super-nerd and set up DNS for your test domain. Or you could do it the easy way with entries in your /etc/hosts file:


127.0.1.2       test.com
127.0.1.3       www.test.com

These are localhost addresses, so your sites are not accessible outside of your PC. For more LAN testing you could use the /etc/hosts file on multiple computers in the same subnet, and enter the LAN IP address of your Apache server instead of a localhost address. Another option is to use Dnsmasq; creating a LAN server is easy with Dnsmasq, so check out Dnsmasq For Easy LAN Name Services to learn how.
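
For example, if your Apache server's LAN address were 192.168.1.50 (a made-up address for illustration), the /etc/hosts entries on the other machines would be:

192.168.1.50       test.com
192.168.1.50       www.test.com

With Dnsmasq, a single address=/test.com/192.168.1.50 line in dnsmasq.conf covers the domain and its subdomains for the whole LAN.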

If you want a publicly-accessible web server over the big bad Internets, then you have a whole lot more work to do with registering domain names, setting up proper DNS, and building a good firewall. Do please study this intensively, and be careful!

Setting up More Sites

Want to set up more sites? Just repeat these steps, creating different document roots and domains.
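
For instance, a second site could get its own document root and virtual host file, say /var/www/example.org and sites-available/example.org.conf (the names here are just illustrative):

<VirtualHost *:80>
    ServerAdmin carla@localhost
    DocumentRoot /var/www/example.org
    ServerName example.org
    ServerAlias www.example.org
    ErrorLog ${APACHE_LOG_DIR}/example.org-error.log
    CustomLog ${APACHE_LOG_DIR}/example.org-access.log combined
</VirtualHost>

Enable it with sudo a2ensite example.org.conf, reload Apache, and it runs alongside test.com on the same IP address.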

The fine Apache documentation is exhaustively thorough, and it makes more sense when you have a live server running, and have some idea of how things work. Come back for part 2 to learn about the vitally-important SSL configuration, and how to bend .htaccess to your will.

Read Part 2 in this series for more on how to enable SSL on Apache. 

Advance your career in Linux System Administration. Check out the Essentials of System Administration course from The Linux Foundation.

A Monitoring Solution for Docker Hosts, Containers and Containerized Services

I've been looking for an open source, self-hosted monitoring solution that can provide metrics storage, visualization, and alerting for physical servers, virtual machines, containers, and services running inside containers. After trying out Elastic Beats, Graphite, and Prometheus, I settled on Prometheus. The main reason for choosing Prometheus was the support for multi-dimensional metrics and a query language that's easy to grasp. The fact that you can use the same language for graphing and alerting makes the monitoring task a whole lot easier. Prometheus handles blackbox probing as well as whitebox metrics, meaning you can probe your infrastructure and also monitor the internal state of your applications.
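
As a rough sketch of the whitebox side, a minimal prometheus.yml that scrapes a node_exporter on the Docker host might look like this (the target address is an assumption):

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'docker-host'
    static_configs:
      - targets: ['localhost:9100']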

Read more at Stefan Prodan’s Blog