
Open Core, Open Perimeter, And the Future of Enterprise Software

This is an inversion of the traditional open core model behind many commercial open source strategies for enterprise application layer products. In open core, the product’s core is open source, and in the enterprise edition, vendors provide and support proprietary enhancements. Using the API approach, the product’s core is often not visible in the cloud, and the only way in and out of the product is through the API.

Because of APIs, we are seeing the differentiation, enhancement, and value in enterprise editions migrating to the perimeter via tools, widgets, and components. These can be closed source and/or open source, but we should see more open source in the perimeter, because many vendors can make money by supporting their core and charging for API calls or transactions. The two best examples of this are Twilio and Stripe.

Read more at OpenSource.com

Review: System76’s Galago Pro Solves ‘Just Works’ Linux’s Goldilocks Problem

Still, finding the perfect Linux laptop has always been and remains something of a Goldilocks problem: this one is too big, this one is too underpowered, this one has too little RAM, this one lacks a big SSD, and so on. Generally speaking, if you want power and storage you’re going to end up with something too big to comfortably throw in a bag and carry all day. The Dell Precision 7520 and the System76 Oryx Pro are good examples of this.

Alternately, you could go for the more portable Dell XPS 13 or System76 Lemur, which both offer a more svelte, lightweight machine that’s easier on your shoulders but lacking in RAM and drive space.

What Linux users like myself have long wanted is a laptop with roughly the form factor and weight of a MacBook Pro, but with the option to get 32GB of RAM or 3TB of storage. This is the mythical unicorn of pre-built Linux machines: a laptop that is both reasonably lightweight and powerful.

And that, my fellow Linux users, is refreshingly what System76 has managed to deliver with its new Galago Pro laptop.

Read more at Ars Technica

Kubernetes at GitHub

Over the last year, GitHub has gradually evolved the infrastructure that runs the Ruby on Rails application responsible for github.com and api.github.com. We reached a big milestone recently: all web and API requests are served by containers running in Kubernetes clusters deployed on our metal cloud. Moving a critical application to Kubernetes was a fun challenge, and we’re excited to share some of what we’ve learned with you today.

Why change?

Before this move, our main Ruby on Rails application (we call it github/github) was configured a lot like it was eight years ago: Unicorn processes managed by a Ruby process manager called God running on Puppet-managed servers. Similarly, our chatops deployment worked a lot like it did when it was first introduced: Capistrano established SSH connections to each frontend server, then updated the code in place and restarted application processes. When peak request load exceeded available frontend CPU capacity, GitHub Site Reliability Engineers would provision additional capacity and add it to the pool of active frontend servers.

Read more at GitHub

NASA Launches Supercomputer Servers into Space

Using off-the-shelf servers, NASA and HPE are devising a way to allow a Mars-bound spacecraft to house an on-board supercomputer.

To test the concept, NASA has launched the SpaceX CRS-12 rocket containing HPE’s “Spaceborne Computer” as its payload. According to the company, the servers that make up the system are of the same type that power Pleiades, NASA’s flagship 7-petaflop supercomputer housed at the Ames Research Center in Mountain View, California. Pleiades is currently the 15th most powerful system in the world, according to the latest TOP500 rankings.

The Spaceborne Computer will be deposited at the International Space Station (ISS), where it will be part of a year-long experiment to find out how regular commodity servers can operate in the harsh conditions of outer space.

Read more at Top500

4 Container Adoption Patterns: What You Need to Know

Containers, DevOps, and microservices all fit together to help CIOs achieve that goal of agility. In short, containers corral applications in a neat package, isolated from the host system on which they run. Developers can easily move them around during experimentation, which is a fundamental part of DevOps. Containers also prove helpful as you move quickly from development to production environments. (For more, see this background guide on containers.)  Of course, technology alone doesn’t solve the problem. CIOs must also manage the cultural challenges that arise when you start working in cross-functional DevOps groups and rethinking boundaries and process. 

But CIOs can learn plenty from their peers’ work on both the cultural and technological fronts. On the technology side, when working in the trenches with companies adopting containers, you see many of the same goals and hurdles. Let’s examine the four typical ways companies adopt containers – and what you should know about each pattern.

How companies tap into containers

First, understand that there’s not one perfect container adoption path for your company. You may begin using containers on one path, then hop across to another later. Also, different groups inside a company often use containers in different ways – so it’s common to see multiple usage patterns at once. 

Read more at The Enterprisers Project

Software Defined Networking (SDN) Explained for Beginners

Over the past few years, Software Defined Networking (SDN) has been a key buzzword in the computer networking/IT industry. Today, more and more companies are discussing SDN to leverage it for their business and future growth plans. The reason is that SDN reduces both the CAPEX (capital expenses of network equipment) and OPEX (operational and maintenance expenses) of a network, and that’s what every business in the networking industry wants at the end of the day.

That brings us to the question: what is so special about SDN that existing or legacy networking is not able to deliver?

Basically, traditional networks can’t keep up with current networking requirements such as dynamic scalability, central control and management, on-the-fly changes or experiments, less error-prone manual configuration on each networking node, handling of network traffic (which has increased massively due to the boom in mobile data), and server virtualization traffic in data centres.

What’s more, traditional networks are tightly coupled with highly expensive network elements that don’t offer any kind of openness or ability to customize internals. To deal with such issues, open source communities came together to define a networking approach for the future. And that’s how the concept of SDN came to life.

Read more at HowtoForge

DevOps Fundamentals (LFS261) Chapter 1 – Continuous Integration

DevOps Fundamentals course preview from John Willis

DevOps Fundamentals (LFS261) Chapter 1 – Understanding the Value Stream

DevOps Fundamentals course preview from John Willis


Manipulate IPv6 Addresses with ipv6calc

Last week, you may recall, we looked at calculating network addresses with ipcalc. Now, dear friends, it is my pleasure to introduce you to ipv6calc, the excellent IPv6 address manipulator and query tool by Dr. Peter Bieringer. ipv6calc is a little thing; on Ubuntu /usr/bin/ipv6calc is about 2MB, yet it packs in a ton of functionality. 

Here are some of ipv6calc’s features:

  • IPv4 assignment databases (ARIN, IANA, APNIC, etc.)
  • IPv6 assignment databases
  • Address and logfile anonymization
  • Compression and expansion of addresses
  • Query addresses for geolocation, registrar, address type
  • Multiple input and output formats

It includes multiple commands. We’re looking at the ipv6calc command in this article. It also includes ipv6calcweb and mod_ipv6calc for websites, ipv6logconv log converter, and ipv6logstats log statistics generator.

If your Linux distribution’s package wasn’t compiled with all options, it’s easy to build it yourself by following the instructions on The ipv6calc Homepage.

One useful feature it does not include is a subnet calculator. We’ll cover this in a future article.

Run ipv6calc -vv to see a complete features listing. Refer to man ipv6calc and The ipv6calc Homepage to learn all the command options.

Compression and Decompression

Remember how we can compress those long IPv6 addresses by condensing the zeroes? ipv6calc does this for you:

$ ipv6calc --addr2compaddr 2001:0db8:0000:0000:0000:0000:0000:0001
2001:db8::1

You might recall from Practical Networking for Linux Admins: Real IPv6 that the 2001:0DB8::/32 block is reserved for documentation and testing. You can uncompress IPv6 addresses:

$ ipv6calc --addr2uncompaddr 2001:db8::1
2001:db8:0:0:0:0:0:1

Uncompress it completely with the --addr2fulluncompaddr option:

$ ipv6calc --addr2fulluncompaddr 2001:db8::1
2001:0db8:0000:0000:0000:0000:0000:0001
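If you’re scripting rather than working at the shell, Python’s standard ipaddress module performs the same compression and expansion (a minimal sketch, independent of ipv6calc):

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")

# Compressed (zero runs collapsed to ::), like --addr2compaddr
print(addr.compressed)   # 2001:db8::1

# Fully expanded, like --addr2fulluncompaddr
print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001
```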

Anonymizing Addresses

Anonymize any address this way:

$ ipv6calc --action anonymize 2001:db8::1
No input type specified, try autodetection...found type: ipv6addr
No output type specified, try autodetection...found type: ipv6addr
2001:db8::9:a929:4291:c02d:5d15

If you get tired of “no input type” messages, you can specify the input and output types:

$ ipv6calc --in ipv6addr --out ipv6addr  --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Or use the “quiet” option to suppress the messages:

$ ipv6calc -q --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Getting Information

What with all the different address classes and the sheer size of IPv6 addresses, it’s nice to have ipv6calc tell you all about a particular address:

$ ipv6calc -qi 2001:db8::1
Address type: unicast, global-unicast, productive, iid, iid-local
Registry for address: reserved(RFC3849#4)
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:0001
Interface identifier is probably manual set

$ ipv6calc -qi fe80::b07:5c7e:2e69:9d41
Address type: unicast, link-local, iid, iid-global, iid-eui64
Registry for address: reserved(RFC4291#2.5.6)
Interface identifier: 0b07:5c7e:2e69:9d41
EUI-64 identifier: 09:07:5c:7e:2e:69:9d:41
EUI-64 identifier is a global unique one

One of these days, I must write up a glossary of all of these crazy terms, like EUI-64 identifier. It stands for Extended Unique Identifier, defined in RFC 2373. That still doesn’t tell us much, does it? EUI-64-derived interface identifiers appear in link-local IPv6 addresses and in stateless address auto-configuration. Note how ipv6calc helpfully provides the relevant RFCs.
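As a rough illustration (plain Python, not part of ipv6calc), a Modified EUI-64 interface identifier is built from a 48-bit MAC address by inserting ff:fe between the two halves and flipping the universal/local bit; the MAC address below is a made-up example:

```python
def mac_to_eui64_iid(mac: str) -> str:
    """Derive a Modified EUI-64 interface identifier from a MAC address:
    split the MAC in two, insert ff:fe in the middle, and flip the
    universal/local (U/L) bit of the first octet."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02  # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Pack the 8 bytes into four 16-bit hex groups
    return ":".join("%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2))

print(mac_to_eui64_iid("52:54:00:12:34:56"))  # 5054:00ff:fe12:3456
```

Prefixed with fe80::, that identifier is exactly the kind of link-local address ipv6calc flagged as iid-eui64 above.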

This example queries Google’s public DNS IPv6 address, showing information from the ARIN registry:

$ ipv6calc -qi 2001:4860:4860::8844
Address type: unicast, global-unicast, productive, iid, iid-local
Country Code: US
Registry for address: ARIN
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:8844
Interface identifier is probably manual set
GeoIP country name and code: United States (US)
GeoIP database: GEO-106FREE 20160408 Bu
Built-In database: IPv6-REG:AFRINIC/20150904 APNIC/20150904 ARIN/20150904 
IANA/20150810 LACNIC/20150904 RIPENCC/20150904

You can filter these queries in various ways:

$ ipv6calc -qi --mrmt GEOIP 2001:4860:4860::8844
GEOIP_COUNTRY_SHORT=US
GEOIP_COUNTRY_LONG=United States
GEOIP_DATABASE_INFO=GEO-106FREE 20160408 Bu

$ ipv6calc -qi --mrmt  IPV6_COUNTRYCODE 2001:4860:4860::8844
IPV6_COUNTRYCODE=US

Run ipv6calc -vh to see a list of feature tokens and which ones are installed.

DNS PTR Records

Now we’ll use Red Hat in our examples. To find the IPv6 address of a site, you can use good old dig to query the AAAA records:

$ dig AAAA www.redhat.com
[...]
;; ANSWER SECTION:

e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:3a2::d44
e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:397::d44

And now you can run a reverse lookup:

$ dig -x 2600:1409:a:3a2::d44 +short
g2600-1409-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.deploy.static.akamaitechnologies.com.
g2600-1409-000a-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.deploy.static.akamaitechnologies.com.

As you can see, DNS is quite complex these days thanks to cloud technologies, load balancing, and all those newfangled tools that datacenters use.

There are many ways to create those crazy long PTR strings for your own DNS records. ipv6calc will do it for you:

$ ipv6calc -q --out revnibbles.arpa 2600:1409:a:3a2::d44
4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.9.0.4.1.0.0.6.2.ip6.arpa.
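So will Python’s standard ipaddress module, if you want to generate the string from a script; its reverse_pointer property produces the same reverse-nibble name (minus ipv6calc’s trailing dot):

```python
import ipaddress

addr = ipaddress.IPv6Address("2600:1409:a:3a2::d44")
# Reverse-nibble PTR name for ip6.arpa, built from the exploded address
print(addr.reverse_pointer)
# 4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.9.0.4.1.0.0.6.2.ip6.arpa
```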

If you want to dig deeper into IPv6, try reading the RFCs. Yes, they can be dry, but they are authoritative. I recommend starting with RFC 8200, Internet Protocol, Version 6 (IPv6) Specification.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Samsung Hosts ONOS Build 2017 and Fuels SDN Innovation

The ONOS (Open Network Operating System) community is growing fast in different geographies around the world and it’s time to bring everyone together. In collaboration with the Open Networking Foundation (ONF), Samsung is hosting ONOS Build 2017 at its R&D Campus in Seoul, Korea, on September 20-22.

The 2nd annual event is poised to unite more than 200 developers and contributors to share, learn, align, plan and hack together. There will be keynote and panel presentations by ONOS influencers, Community Showcase previews where people can present information about their work, an SDN Science Fair for demo presentations and a hackathon.

We sat down with Samsung, an ONF partner, to learn more about why the company invests in ONOS and why ONOS Build is an important event.

Why did Samsung offer to host ONOS Build?

Samsung understands that innovation will be accelerated by open source communities and ONOS is the core organization paving the way. As a leading network solution provider, Samsung is excited to help connect developers who are fueling innovation and bringing SDN technologies into telecommunication networks around the world.  

Why is Samsung invested in ONOS?

ONF’s ONOS project targets carrier-grade SDN, with service guarantees, reliability, scalability, and high performance. Samsung believes that ONOS is on the front lines of turning legacy networks into flexible and scalable systems that will enable operators to run their networks more efficiently and be ready for the upcoming 5G.

Samsung has been actively contributing to and accelerating open source SDN and network virtualization based on ONOS to shape next-generation services. As a leader in telecommunications, we’re confident Samsung’s contributions will serve to fulfill carriers’ requirements for compelling 5G service cases.

How long has Samsung been a part of the ONOS project?

Samsung leveraged ONOS in 2014 to develop a commercial-ready SDN product and joined the ONOS project in 2016 as a board member. Since joining the project, Samsung has been actively contributing to the development of each release and is working closely with other operators to develop commercial-level SDN solutions. With a large pool of developers who have extensive experience working on SDN and insight into the overall architecture of the telecommunication network, Samsung is playing a key role as a major contributor of specifications in the community, helping to elevate the technology available in the market.

What three things is Samsung hoping to get out of the event?

ONOS Build is an annual conference that will be held in different areas of the world to connect global developers from various backgrounds and industries. This year, we’ve invited innovators to Asia to reinvigorate a solid academic and business ecosystem throughout the Asia-Pacific region.

Also, the event will be a platform for developers to promote and share their progress in SDN technology and its use cases. By sharing yearly updates, attendees can contribute to the history of SDN development and participate in open discussions that will help advance SDN to the level of commercially available solutions.

Lastly, many global operators are expected to attend ONOS Build. By connecting the operators, we hope that we can share the vision and technical advancements of SDN with developers. This will dramatically shift the industry and help us step towards bigger network possibilities.

What will this event have that the last one didn’t?

ONOS Build 2016 was the first event to establish a strong foundation for SDN technology. Within a year, more and more mobile operators have become willing to incorporate the technology into their networks. ONOS Build 2017 will be held within the context of this rapidly changing perception among telco-industry participants and will be the catalyst for the commercialization of carrier-grade SDN in the global telecommunication markets. In terms of content, this year’s event will span 3 days to offer an extra day of sessions for attendees to dive deeper into the technology and to showcase their work. There will also be a new CORD track on the last day, which aims to introduce attendees to CORD as a use case of ONOS.

If you’re interested in learning more about ONOS Build 2017, please use these links:

To register: http://onosbuild.org/register/

To participate in the Community Showcase, SDN science fair or Hackathon:

http://onosbuild.org/cfp/

To learn more about Community Travel Sponsorships:

http://onosproject.org/2017/07/28/update-6-community-travel-sponsorships-onos-build-2017/