
Cloud Foundry Launches New Docker-Compatible Container Management System

Cloud Foundry, the Pivotal- and VMware-incubated open source platform-as-a-service project, is going all in on its new Diego container management system. For years, the project used what it called Droplet Execution Agents (DEAs) to manage application containers. After running the two systems in parallel for a while, the team has now committed fully to the Diego architecture, which Cloud Foundry says can scale to running up to 250,000 containers in a single cluster.

Few enterprises — if any — are currently using Cloud Foundry (or containers in general) at this scale. As anybody who has talked to enterprise developers recently can tell you, though, enterprise adoption of containers is growing quickly (and maybe faster than most people realize). Cloud Foundry’s own research shows that many enterprises are now evaluating containers, even as the number of broad container deployments has remained steady (and low) over the last few months.

Read more at TechCrunch

Dig into DNS: Part 3

In the first and second articles in this series, I introduced the powerful dig utility and its uses in performing DNS lookups, along with a few time-saving examples of how to put it into practice. In this part, I’ll look at my favorite dig feature: the “trace” option. For those unfamiliar with this process, let’s run through it slowly.

Without a Trace

When a DNS query is made, the “chain” of events (for want of a better description) starts with the name server performing the lookup asking one of the world’s “root” name servers. The root server points to the name servers that act as the authority for the relevant top-level domain, and those in turn know which remote name server is responsible for answering queries about that particular domain name (if the domain name exists).
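The trace option lets you watch that chain unfold from your own terminal. Here’s a minimal sketch (I’m assuming Google’s public resolver, 8.8.4.4, which also appears later in this article) that produces output along the lines of Figure 1:

# dig +trace @8.8.4.4 sample.org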

Figure 1: A DNS lookup for “sample.org.”

Figure 1 shows the delegation from the root servers down to the key servers responsible for the delegation of the Top Level Domain (TLD) .org. Underneath those authoritative “afilias-nst” servers, we see (at the end of the output) the two name servers responsible for answering queries for the domain name sample.org: in this case, ns1.mailbank.com and ns2.mailbank.com. The last salient link in the chain is:

sample.org.        300    IN    A    64.99.64.45

This is the “A record” result for the bare, non-“www” “sample.org” name (in other words, if you typed sample.org into a web browser without a www, this is the IP address you would visit). This is ultimately the answer we’ve been waiting for, and you might be surprised how many servers we had to ask in order to receive it.
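If all you wanted was that final answer, you can ask a resolver for it directly rather than walking the whole chain; here’s a quick sketch using the same public resolver, with its one-line output:

# dig +short @8.8.4.4 sample.org A
64.99.64.45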

Now that we’ve come to grips with DNS delegation and traversed the lengthy chain required for a “.org” namespace lookup, let’s compare that output with a “.co.uk” DNS lookup. Figure 2 shouldn’t cause too many headaches to decipher.

Figure 2: Shows the “sample.co.uk” delegation during a DNS lookup.

The 10 “.nic.uk” name servers (geographically dispersed for resilience, which you could confirm categorically with a “WHOIS” lookup on each of their IP addresses) show that the .uk namespace is well protected from issues. For reference, Network Information Centre (or NIC) is common parlance throughout the name server world, and if you’re ever unsure which authoritative body is responsible for a country’s domain name space, trying something like http://www.nic.uk, in the case of the United Kingdom, might save the day.
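As a quick sketch of that WHOIS check (I’m assuming nsa.nic.uk is one of the ten servers listed in Figure 2), first resolve a server’s address and then ask WHOIS who operates it:

# dig +short nsa.nic.uk A
# whois $(dig +short nsa.nic.uk A)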

Now, let’s compare that with a much more concise output using the +short option. In Figure 3, you can see that the NIC servers are no longer displayed; instead, you see just the salient links in the chain. The command used was:

# dig +short +trace @8.8.4.4 sample.co.uk

Figure 3: Shows root server delegation in short.

Tick Tock

The timing of DNS lookups is clearly of paramount importance. Your boss may have invested in the latest dynamic infrastructure for your killer application, but if you are delaying each connection to your application by two seconds because of poorly configured DNS, then your users will be far from pleased.

If you see very slow response times anywhere in the chain (you can see the lookup completion times at the foot of each section in Figures 1 and 2), my advice is to turn to one of the many online tools that assist in measuring performance. They can help isolate and identify the bottlenecks and, additionally, suggest improvements.
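Before reaching for external tools, note that dig itself reports how long each query took at the foot of its output; a quick sketch follows (the 23 msec figure is purely illustrative):

# dig @8.8.4.4 sample.org A | grep 'Query time'
;; Query time: 23 msec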

Hold on for a second, though. What if you needed to use the dig utility in a script and you foresaw connectivity issues or queries without valid answers? You can deviate from the default timeout of five seconds using the +time= command switch:


# dig +time=2 chrisbinnie.tld mx

You can now be assured of running through multiple lookups, with or without lookup failures, knowing that each query will give up after two seconds.
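Putting that to work in a script might look something like this minimal sketch; the host names are just examples, and the +tries=1 switch (an addition of mine, limiting dig to a single attempt) stops retries from multiplying the wait:

#!/bin/bash
# Check each host in turn, allowing at most two seconds
# and a single attempt per lookup before moving on.
for host in sample.org sample.co.uk chrisbinnie.tld; do
  if ip=$(dig +short +time=2 +tries=1 "$host" A) && [ -n "$ip" ]; then
    echo "$host -> $ip"
  else
    echo "$host -> lookup failed or timed out"
  fi
done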

Start of Authority

Those familiar with zone files will understand this next section. Imagine (and this almost certainly stems from the Internet of old) that you need to figure out whom to contact about a domain name issue, or to discover the parameters that control how a domain name synchronises data between its name servers. You can discover much of this (not always entirely accurately, because other factors can be at play) by querying the start of authority (SOA) record of a domain name.

Here is a “short” example, which I’ll explain in a moment, with its output following:


# dig +short @8.8.4.4 chrisbinnie.tld soa

toma550561.mars.orderbox-dns.com. transfer.active-domain.com. 2014102717 7200 7200 172800 38400

Let’s compare that slightly unusual numerical output to a default display of an SOA query using the dig utility in Figure 4.

To achieve a reasonable amount of detail, presented in a sensible way, the command line I used was:


# dig +multiline @8.8.8.8 chrisbinnie.tld SOA

The +multiline option tries its best to offer a formatted layout coupled with helpful comments. In the case of the SOA record, I’m sure you’ll agree it makes sense of an otherwise difficult-to-read output under the AUTHORITY SECTION.

Figure 4: Shows the use of the “multiline” parameter in the dig utility.

I, for one, think that’s much less cryptic. The “serial” number in an SOA record is a simple measure of when a change to the domain name was last made. The Request For Comments documents (RFCs), the technical specifications of the Internet, recommend that the serial take the form of a backward date, as seen in our example: 2014102717 reads as 27 October 2014, revision 17.

RFC 1035 holds court for the SOA record’s definition; it can be found at https://www.ietf.org/rfc/rfc1035.txt if you would like some enlightening bedtime reading.

Other name servers simply increment a long number, such as 1000003, each time there’s a change. This admittedly doesn’t tell you when an update was made, but at least it lets you know which version of a zone file you are looking at. This matters because, when sharing updates among many name servers, it’s imperative that they all answer with the most up-to-date version of the zone and do not respond with stale, potentially damaging information.

In case you’re wondering, we’re seeing the “root-servers.net” and “verisign-grs.com” domain names being mentioned because the domain name “chrisbinnie.tld” doesn’t exist (I’m just using it as an example). So Figure 4 is showing the SOA for “a.root-servers.net” instead, which is the first link in the chain for our non-existent DNS lookup.

The “refresh” field tells secondary name servers how soon to come back for new information (secondaries and tertiaries are name servers that serve the same information as the primary, for resilience). Usually this value sits at 24 hours, but it’s shorter for a root server. The “retry” field is how soon to reattempt a “refresh” that failed. On a name server that isn’t a root server, you would expect this to be 7200 seconds (two hours).

The “expire” field lets secondary name servers know when the answers they have been serving should be considered stale (or, in other words, how long the retrieved information remains valid).

Finally, the “minimum” field tells secondaries (also called “slaves”) how long the received information should be cached before checking again. This has been called the most important setting of all, because if you make frequent changes to your DNS, you need to keep the secondaries frequently updated.

The tricky balance, however, is that if you don’t make frequent DNS updates, keeping this entry set to a matter of days is best to keep your DNS server load down. The rule of thumb is that to speed up a visitor’s connection time, you really don’t want to enforce a DNS lookup unless you’re making frequent changes. On today’s Internet, this is less critical because name server traffic is relatively cheap thanks to its tiny packet size and the efficiency of modern name servers. It’s still a design variable to consider, however.
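To tie those fields together, here is a hypothetical zone-file rendering of the SOA values returned by the “short” query earlier; the layout and comments are mine, but the values come straight from that output:

chrisbinnie.tld. IN SOA toma550561.mars.orderbox-dns.com. transfer.active-domain.com. (
        2014102717 ; serial: backward date format, 27 Oct 2014, revision 17
        7200       ; refresh: secondaries check back every two hours
        7200       ; retry: reattempt a failed refresh after two hours
        172800     ; expire: consider the data stale after two days
        38400 )    ; minimum: cache for just over ten hours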

When You’re Ready

And, at the risk of sounding melodramatic, when it comes to DNS, every millisecond really does count.

Another quick note about Figure 4: “a.root-servers.net” is the authoritative name of the name server for that domain name’s zone. And, separately, the “nstld.verisign-grs.com” entry is the contact method of the domain name administrator, which I mentioned earlier. Although it’s a strange format if you haven’t seen it before (to avoid the use of the @ symbol, I assume), this field translates to the functioning email address “nstld@verisign-grs.com”.

Finally, the “1799” at the start of that line is the Time To Live (TTL), which is how many seconds (30 minutes in this case, minus a second) connecting client software should hold onto the retrieved data before asking for it again, in case it has changed in the meantime.
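You can watch the TTL at work against a caching resolver: run the same query twice in quick succession, and the second field of the answer counts down between responses. A quick sketch:

# dig +noall +answer @8.8.8.8 sample.org A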

In the fourth and final part of this series, I’ll take a quick look at some security options and wrap up with more examples that I have found very useful.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Learn more about network and system management in the Essentials of System Administration training course from The Linux Foundation. 

How Blockchain Will Grow Beyond Bitcoin

Since its advent in 2009, bitcoin’s decentralized, broker-less and secure mechanism to send money across the world has steadily risen in popularity and adoption. Of equal — if not greater — importance is the blockchain, the distributed-ledger technology underpinning the cryptocurrency, which enables trustless, peer-to-peer exchange of data.

Every day, new companies and organizations, including big names such as Microsoft and Tesla, take strides toward or show interest in using cryptocurrency and blockchain to support their business.

Read more at TechCrunch

 

Nextcloud Box: A Cloud for your Office or Living Room

Nextcloud, in partnership with Canonical and WDLabs, has released a Raspberry Pi and Ubuntu Linux powered cloud server for your home or office.

The Nextcloud Box is a secure, private, self-hosted cloud and Internet of Things (IoT) platform. It makes hosting a personal cloud simple and cost effective whilst maintaining a secure private environment that can be expanded with additional features via apps.

The Nextcloud Box consists of a 1 TB USB 3.0 hard drive from WDLabs powered by a Raspberry Pi 2 computer. It uses Snappy Ubuntu Core as its operating system on a microSD card. The mini server comes ready to run with the Apache web server, MySQL, and the latest Nextcloud 10.

Read more at ZDNet

Transforming Rigid Networks into Dynamic and Agile Infrastructure

For service providers whose network resources are rigid and physically assigned across metro and wide-area (WAN) networks, complex networking structures aren’t easily manipulated into making dynamic changes. Another challenge is managing their existing network connections while, at the same time, building new agile platforms with on-demand, multi-tenant services.

The time has come for network operators to also enjoy the benefits that virtualization and programmatic control over network infrastructure can deliver. So today, IXPs and ISPs are ready to forge ahead and deploy software-defined networking (SDN) with at-scale network virtualization and spin up virtual network infrastructure to more flexibly, and cost-efficiently, meet their customers’ demands.

Read more at SDx Central

Cars Will Make Up 98% of Machine-to-Machine Traffic by 2021

Internet radio and information services will generate approximately 6,000 petabytes of data a year. In just five years, ever more sophisticated in-vehicle infotainment (IVI) systems that stream music and offer real-time navigation will generate up to 98% of machine-to-machine (M2M) data traffic, according to a new report.

The Juniper Research report indicates that ever-more popular applications such as Apple CarPlay and Android Auto that mirror your smartphone interface to your car’s IVI will generate massive amounts of cellular M2M data traffic.

Read more at ComputerWorld

Moving Toward a Services Based Architecture with Mesos

Over the past few years at Strava, server-side development has transitioned from our monolithic Ruby on Rails app (The Monorail) to a service-oriented architecture with services written in Scala, Ruby, and golang.

[Two charts] Top: Commits to our monolithic Rails app. Bottom: Total private repositories at Strava as a proxy for the number of services.

Initially, services at Strava were reserved for things that were simply not possible to implement in Rails (for example, see my previous blog post on Routemaster). It was difficult to bring up a new service, and it required some combination of prior knowledge and trailblazing.

At some point, it became clear that services had become the preferred way of development at Strava. Service-oriented architecture (SOA) had triumphed over the Monorail; however, all was not yet well. If done hastily, introducing dozens of services can cause havoc. We had figured out that we liked building services, but we didn’t really have a plan for scaling the number of them.

This post covers the infrastructure that has supported smooth adoption of services at Strava. Today we run over a hundred services deployed by over a dozen engineers. Engineers are able to fully implement and deploy new services quickly and with minimal guidance, having almost no prior infrastructure experience. Services are deployed extremely quickly (< 30 seconds) in a consistent and reliable way. They are monitored, logged, and always kept running.

Read more at Strava’s blog.

Parity Check: Beware the Public Cloud Bandwagon

Unlike some other publications, we did not interpret McKinsey’s recently released ITaaS Cloud Survey findings as a ringing endorsement of the public cloud. Nay, the data doesn’t show that at all. Instead, the data shows that large enterprises have been playing catch-up.

To determine how far along a company is in their cloud migration, McKinsey asked over 800 CIOs and senior IT executives if at least one corporate workload was primarily run on a particular cloud tier. For large enterprises, only 24 percent were using a virtual private cloud in 2015, but that skyrockets to 71 percent in 2018. Ditto public cloud, with large enterprise use going from 10 percent in 2015 to 51 percent in 2018.

Read more at The New Stack.

Microsoft PerfView is Now Open Source On GitHub

I am happy to announce that the PerfView source code is now open source as a GitHub repository. It is available at

https://github.com/Microsoft/perfview

The readme associated with the GitHub repository has getting-started information (how to fetch the repository, and how to build, test, and deploy the code). We use Visual Studio 2015. You can download a free copy of Visual Studio 2015 Community Edition that has everything you need to clone, build, test, and deploy PerfView. Thus you can get going with PerfView RIGHT NOW. The instructions on the PerfView repository tell you how to get started even if you know nothing about Git (although knowing something about Git and Visual Studio certainly helps).
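For reference, fetching the code is a one-liner at the command line; a minimal sketch using the repository URL above:

git clone https://github.com/Microsoft/perfview.git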

The readme also talks about how you can log issues as well as contribute to PerfView. You should definitely read the documentation about contributing if you intend to do that (although it’s probably best to start by learning to build, run, and debug, and to get familiar with the code).

We could definitely use help in both documentation and testing, so this is a great place to contribute, especially to start with.

Read more at Microsoft Developer blog.

DevOps: How to Persuade Your Boss

So there you are, you and your ace tech team, all excited about DevOps. You know that DevOps is the methodology that will move you past “yak shaving” and into building an IT infrastructure that will streamline and move your company forward. But how do you sell this to your bosses, and especially your non-technical bosses? Victoria Blessing, Operations Engineer for the College of Architecture at Texas A&M University, described the basics in her LinuxCon North America 2016 presentation.

To start, Blessing explained the meaning of “yak shaving,” which was coined by Carlin Vieri at MIT. It refers to a series of tasks that must be completed in order for you to be able to do what you were trying to do in the first place. “While it can really be applied to any aspect of life, it’s something that we, in IT, constantly fall victim to: getting caught up in the little details it takes to get things done, and then we’re constantly fighting fires. It’s a part of the culture problem that we have.”

The speed of business is very fast these days, and keeps getting faster, and we can’t afford inefficiencies. But implementing DevOps usually means making a giant cultural shift in your company, and the price of change is often steep. Blessing said, “I’m not going to lie to you. There will be initial technical debt. That often ends up being a stumbling block or a barrier to entry for many organizations: ‘Oh, we don’t have the time to implement that, or the right tools.’ But you have to be willing to take the leap in order to reap the rewards. Yes, there’s initial technical debt, but the rewards are exponential.”

You know the benefits of breaking down the traditional silos: the barriers between departments where everyone is doing their own thing, nobody knows what anyone else is doing, and somehow you’re all supposed to magically coordinate your efforts to develop and release products on time without anyone seeing the big picture. But how do you talk to your boss about this? You know that if you start going on about tools like Chef and Puppet, Nagios and Zabbix, Git and Subversion, microservices, orchestration, and all of those good things, you risk putting your boss to sleep.

Avoid this trap and follow Blessing’s advice: “The best way to sell something to someone is to show them why it matters to them. Interest is often directly proportional to personal investment. For management, decision drivers are often monetary ones. You’re going to have to show that the benefits outweigh the costs. Ask yourself, ‘In what ways does this save us money, and in what ways does this make us money?’… Listen to your audience and use their questions to help you adapt your content to make sure that you’re getting your point across.”

Watch Blessing’s presentation below to learn how to persuade your bosses and co-workers to buy in to your DevOps vision.

https://www.youtube.com/watch?v=2i-daDtvU0s&list=PLbzoR-pLrL6qBYLdrGWFHbsolIdJIjLnN


You won’t want to miss the stellar lineup of keynotes, 185+ sessions and plenty of extracurricular events for networking at LinuxCon + ContainerCon Europe, Oct. 4-6 in Berlin. Secure your spot before it’s too late! Register now.