
DevOps Fundamentals (LFS261) Chapter 1 – Continuous Integration

DevOps Fundamentals course preview from John Willis

DevOps Fundamentals (LFS261) Chapter 1 – Understanding the Value Stream

DevOps Fundamentals course preview from John Willis


Manipulate IPv6 Addresses with ipv6calc

Last week, you may recall, we looked at calculating network addresses with ipcalc. Now, dear friends, it is my pleasure to introduce you to ipv6calc, the excellent IPv6 address manipulator and query tool by Dr. Peter Bieringer. ipv6calc is a little thing; on Ubuntu /usr/bin/ipv6calc is about 2MB, yet it packs in a ton of functionality. 

Here are some of ipv6calc’s features:

  • IPv4 assignment databases (ARIN, IANA, APNIC, etc.)
  • IPv6 assignment databases
  • Address and logfile anonymization
  • Compression and expansion of addresses
  • Query addresses for geolocation, registrar, address type
  • Multiple input and output formats

The package provides multiple commands. We’re looking at the ipv6calc command in this article; the package also includes ipv6calcweb and mod_ipv6calc for websites, the ipv6logconv log converter, and the ipv6logstats log statistics generator.

If your Linux distribution’s package isn’t compiled with all options, it’s easy to build ipv6calc yourself by following the instructions on The ipv6calc Homepage.

One useful feature it does not include is a subnet calculator; we’ll cover subnet calculation in a future article.

Run ipv6calc -vv to see a complete features listing. Refer to man ipv6calc and The ipv6calc Homepage to learn all the command options.

Compression and Decompression

Remember how we can compress those long IPv6 addresses by condensing the zeroes? ipv6calc does this for you:

$ ipv6calc --addr2compaddr 2001:0db8:0000:0000:0000:0000:0000:0001
2001:db8::1

You might recall from Practical Networking for Linux Admins: Real IPv6 that the 2001:0DB8::/32 block is reserved for documentation and testing. You can uncompress IPv6 addresses:

$ ipv6calc --addr2uncompaddr 2001:db8::1
2001:db8:0:0:0:0:0:1

Uncompress it completely with the --addr2fulluncompaddr option:

$ ipv6calc --addr2fulluncompaddr 2001:db8::1
2001:0db8:0000:0000:0000:0000:0000:0001

Anonymizing Addresses

Anonymize any address this way:

$ ipv6calc --action anonymize 2001:db8::1
No input type specified, try autodetection...found type: ipv6addr
No output type specified, try autodetection...found type: ipv6addr
2001:db8::9:a929:4291:c02d:5d15

If you get tired of “no input type” messages, you can specify the input and output types:

$ ipv6calc --in ipv6addr --out ipv6addr  --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Or use the “quiet” option to suppress the messages:

$ ipv6calc -q --action anonymize 2001:db8::1
2001:db8::9:a929:4291:c02d:5d15

Getting Information

With all the different address types and the sheer size of IPv6 addresses, it’s nice to have ipv6calc tell you all about a particular address:

$ ipv6calc -qi 2001:db8::1
Address type: unicast, global-unicast, productive, iid, iid-local
Registry for address: reserved(RFC3849#4)
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:0001
Interface identifier is probably manual set

$ ipv6calc -qi fe80::b07:5c7e:2e69:9d41
Address type: unicast, link-local, iid, iid-global, iid-eui64
Registry for address: reserved(RFC4291#2.5.6)
Interface identifier: 0b07:5c7e:2e69:9d41
EUI-64 identifier: 09:07:5c:7e:2e:69:9d:41
EUI-64 identifier is a global unique one

One of these days, I must write up a glossary of all of these crazy terms, like EUI-64 identifier. EUI-64 means Extended Unique Identifier, and the modified EUI-64 format for IPv6 is defined in RFC 2373. That still doesn’t tell us much, does it? In plain language, EUI-64 is a recipe for building the 64-bit interface identifier of an IPv6 address from an interface’s 48-bit MAC address, and it is used in stateless address autoconfiguration, for example to generate link-local addresses like the one above. Note how ipv6calc helpfully provides the relevant RFCs.
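
The ipv6calc documentation also describes a geneui64 action that performs the classic conversion itself: split the 48-bit MAC address in half, insert ff:fe in the middle, and flip the universal/local bit. Here is a sketch with a made-up MAC address; the action and type names come from the project documentation rather than the examples above, so run ipv6calc -vh to confirm your build supports them:

$ ipv6calc -q --action geneui64 --in mac --out eui64 00:50:56:01:02:03
0250:56ff:fe01:0203

Note how the leading 00 became 02; that’s the universal/local bit being flipped.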

This example queries Google’s public DNS IPv6 address, showing information from the ARIN registry:

$ ipv6calc -qi 2001:4860:4860::8844
Address type: unicast, global-unicast, productive, iid, iid-local
Country Code: US
Registry for address: ARIN
Address type has SLA: 0000
Interface identifier: 0000:0000:0000:8844
Interface identifier is probably manual set
GeoIP country name and code: United States (US)
GeoIP database: GEO-106FREE 20160408 Bu
Built-In database: IPv6-REG:AFRINIC/20150904 APNIC/20150904 ARIN/20150904 IANA/20150810 LACNIC/20150904 RIPENCC/20150904

You can filter these queries in various ways:

$ ipv6calc -qi --mrmt GEOIP 2001:4860:4860::8844
GEOIP_COUNTRY_SHORT=US
GEOIP_COUNTRY_LONG=United States
GEOIP_DATABASE_INFO=GEO-106FREE 20160408 Bu

$ ipv6calc -qi --mrmt  IPV6_COUNTRYCODE 2001:4860:4860::8844
IPV6_COUNTRYCODE=US
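
Because the --mrmt output is machine readable, it drops neatly into scripts. For instance, this one-liner (just the options shown above plus the standard cut utility) stashes the country code in a shell variable:

$ cc=$(ipv6calc -qi --mrmt IPV6_COUNTRYCODE 2001:4860:4860::8844 | cut -d= -f2)
$ echo $cc
US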

Run ipv6calc -vh to see a list of feature tokens and which ones are installed.

DNS PTR Records

Now we’ll use Red Hat in our examples. To find the IPv6 address of a site, you can use good old dig to query the AAAA records:

$ dig AAAA www.redhat.com
[...]
;; ANSWER SECTION:

e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:3a2::d44
e3396.dscx.akamaiedge.net. 20   IN      AAAA    2600:1409:a:397::d44

And now you can run a reverse lookup:

$ dig -x 2600:1409:a:3a2::d44 +short
g2600-1409-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.deploy.static.akamaitechnologies.com.
g2600-1409-000a-r-4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.deploy.static.akamaitechnologies.com.

As you can see, DNS is quite complex these days thanks to cloud technologies, load balancing, and all those newfangled tools that datacenters use.

There are many ways to create those crazy long PTR strings for your own DNS records. ipv6calc will do it for you:

$ ipv6calc -q --out revnibbles.arpa 2600:1409:a:3a2::d44
4.4.d.0.0.0.0.0.0.0.0.0.0.0.0.0.2.a.3.0.a.0.0.0.9.0.4.1.0.0.6.2.ip6.arpa.
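
Because ipv6calc emits a complete name ending in ip6.arpa., you can feed the result straight back to dig with ordinary shell command substitution; it should return the same PTR records we saw from dig -x earlier:

$ dig PTR $(ipv6calc -q --out revnibbles.arpa 2600:1409:a:3a2::d44) +short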

If you want to dig deeper into IPv6, try reading the RFCs. Yes, they can be dry, but they are authoritative. I recommend starting with RFC 8200, Internet Protocol, Version 6 (IPv6) Specification.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Samsung Hosts ONOS Build 2017 and Fuels SDN Innovation

The ONOS (Open Network Operating System) community is growing fast in different geographies around the world and it’s time to bring everyone together. In collaboration with the Open Networking Foundation (ONF), Samsung is hosting ONOS Build 2017 at its R&D Campus in Seoul, Korea, on September 20-22.

The 2nd annual event is poised to unite more than 200 developers and contributors to share, learn, align, plan and hack together. There will be keynote and panel presentations by ONOS influencers, Community Showcase previews where people can present information about their work, an SDN Science Fair for demo presentations and a hackathon.

We sat down with Samsung, an ONF partner, to learn more about why the company invests in ONOS and why ONOS Build is an important event.

Why did Samsung offer to host ONOS Build?

Samsung understands that innovation will be accelerated by open source communities and ONOS is the core organization paving the way. As a leading network solution provider, Samsung is excited to help connect developers who are fueling innovation and bringing SDN technologies into telecommunication networks around the world.  

Why is Samsung invested in ONOS?

ONF’s ONOS project targets carrier-grade SDN, with service guarantees, reliability, scalability, and high performance. Samsung believes that ONOS is on the front lines of turning legacy networks into flexible and scalable systems that will enable operators to run their networks more efficiently and be ready for the upcoming 5G era.

Samsung has been actively contributing to and accelerating open source SDN and network virtualization based on ONOS to shape next-generation services. As a leader in telecommunications, we’re confident Samsung’s contributions will help fulfill carriers’ requirements for compelling 5G service cases.

How long has Samsung been a part of the ONOS project?

Samsung first leveraged ONOS in 2014 to develop a commercial-ready SDN product and joined the ONOS project in 2016 as a board member. Since joining the project, Samsung has been actively contributing to the development of each release and is working closely with operators to develop commercial-grade SDN solutions. With a large pool of developers who have extensive experience working on SDN and the insight to oversee the whole architecture of a telecommunications network, Samsung plays a key role as a major contributor of specifications in the community, helping to advance the technology available in the market.

What three things is Samsung hoping to get out of the event?

ONOS Build is an annual conference held in different parts of the world to connect global developers from various backgrounds and industries. This year, we’ve invited innovators to Asia to reinvigorate a solid academic and business ecosystem throughout the Asia-Pacific region.

The event will also be a platform for developers to promote and share their progress in SDN technology and its use cases. By sharing yearly updates, attendees can contribute to the history of SDN development and participate in open discussions that will push SDN toward commercially available solutions.

Lastly, many global operators are expected to attend ONOS Build. By connecting the operators, we hope that we can share the vision and technical advancements of SDN with developers. This will dramatically shift the industry and help us step towards bigger network possibilities.

What will this event have that the last one didn’t?

ONOS Build 2016 was the first event, establishing a strong foundation for SDN technology. Within a year, more and more mobile operators have become willing to incorporate the technology into their networks. ONOS Build 2017 takes place amid rapidly changing perceptions among telco-industry participants and will be a catalyst for bringing commercial, carrier-grade SDN to global telecommunication markets. In terms of content, this year’s event will span three days, offering an extra day of sessions for attendees to dive deeper into the technology and showcase their work. There will also be a new CORD track on the last day, which aims to introduce attendees to CORD as a use case of ONOS.

If you’re interested in learning more about ONOS Build 2017, please use these links:

To register: http://onosbuild.org/register/

To participate in the Community Showcase, SDN science fair or Hackathon:

http://onosbuild.org/cfp/

To learn more about Community Travel Sponsorships:

http://onosproject.org/2017/07/28/update-6-community-travel-sponsorships-onos-build-2017/

Operationalizing Cybersecurity

Operationalizing, or implementing, cybersecurity is an ongoing effort that continually evolves and grows. Just as organizations can never fully achieve safety, they can never fully achieve cybersecurity. That is why a well-defined organizational cybersecurity strategy is essential for keeping security goals in sight. Board members are becoming increasingly aware of the need to implement cybersecurity strategies, and of the perils faced by organizations that continue to treat cybersecurity as purely an information technology (IT) problem. These motivations are prompting board members to take a more active role in defining the organization’s cybersecurity strategy.

Defining a cybersecurity strategy 

An organizational cybersecurity strategy is the organization’s plan for mitigating security risks to an acceptable level. Understanding the business purpose and mission goals of the organization is the first step in defining a cybersecurity strategy.  

Read more at RSA Conference

The Problem With Heroes In Software Development

It’s 2 AM, but your business is global. You have users in every time zone. They’re angry. They’re unable to purchase things on your website or are canceling their subscriptions. Money is being lost every minute your web application is down.

Suddenly, one of your developers is on the case! This developer shows a little anxiety and curses a lot (it is 2 AM after all), but eventually the problems are resolved. The application is running again and money is flowing into your business. Despite this kind of situation happening from time to time, you are comforted knowing that you always have this developer to save the day. This developer is your hero.

No one in this situation should feel comfort, though. It is an extremely risky position for a company to be in.

Read more at Dev.to

How Materials Science Will Determine the Future of Human Civilization

One of the extraordinary features of the microelectronics revolution is its ability to scale, a feature captured by Moore’s Law. That has led to a rapid and massive increase in computing capacity: today’s top-of-the-range smartphones have computing power equivalent to the world’s most powerful supercomputers from the early 1990s. Tomorrow’s smartphones will be even more powerful.

But there is a problem in the offing. As powerful computers become more widespread, the amount of power they consume will increase. If Moore’s exponential law continues, electronic devices will consume more than half the planet’s energy budget within a couple of decades.

That’s clearly unsustainable. So what to do?

Read more at Technology Review

Understanding the Hows and Whys of Open Source Audits

Since I’ve been working at Black Duck, I’ve learned a great deal about open source — and how and why an audit of the code base is important. I’ve also heard stories from customers scrambling to create a plan that addresses concerns about open source software risk during mergers and acquisitions (M&A) — before it jeopardizes the deal. This scramble makes me wonder how well the companies involved understand how their solutions are built. 

Why Bother with an Open Source Audit?

It’s important to consider why you’re doing an audit — why you need to examine your dev teams’ projects, open source components, and license requirements. 

For many, impending M&A activity drives an audit. After all, when buying, you want to acquire high-quality assets free of legal or security issues and, when selling, you want to be a high-quality asset. Buyers want to have a good handle on the risks they are taking on so they can value and structure the deal appropriately. Those buyers want to know that their target does not bring with it unaccounted for baggage. They’d like to know the company is using open source components within the bounds of their licenses, is resistant to cyberattacks, can ensure consistent uptime, and that their data — and their customers’ — will be secure.

Read more at Black Duck 

Why GitHub Can’t Host the Linux Kernel Community

A while back at the awesome maintainerati I chatted with a few great fellow maintainers about how to scale really big open source projects, and how github forces projects into a certain way of scaling. The linux kernel has an entirely different model, which maintainers hosting their projects on github don’t understand, and I think it’s worth explaining why and how it works, and how it’s different.

Another motivation to finally get around to typing this all up is the HN discussion on my “Maintainers Don’t Scale” talk, where the top comment boils down to “… why don’t these dinosaurs use modern dev tooling?”. A few top kernel maintainers vigorously defend mailing lists and patch submissions over something like github pull requests, but at least some folks from the graphics subsystem would love more modern tooling which would be much easier to script. The problem is that github doesn’t support the way the linux kernel scales out to a huge number of contributors, and therefore we can’t simply move, not even just a few subsystems. And this isn’t about just hosting the git data, that part obviously works, but how pull requests, issues and forks work on github.

Read more at blog.ffwll.ch

For Fun and Profit: A New Book on the History of Linux and Open Source

Twenty-six years ago this month, a geeky student in Finland released the Linux kernel to the world. Today, hundreds of millions of people are using Linux. Why? That’s a question I try to answer in my new book For Fun and Profit: A History of the Free and Open Source Software Revolution.

Sure, you can explain Linux’s popularity today in terms of factors that exist in the present: its technical features, the dynamism of the open source community, the corporate backing that Linux enjoys, and so on.

But to understand what really launched Linux into the position it enjoys today, you need to know the history of Linux, as well as the history of the larger free and open source software universe.

You have to look at some big questions about Linux’s past, such as:

  • Why did Linux beat out much bigger and better-funded kernels, like GNU’s Hurd and the BSDs, to become probably the most important open source software project in history?

  • Why did Linus Torvalds, the student who created Linux, decide to give his code away for free?

  • Why did Linux programmers succeed in producing a feature-rich kernel so quickly, while so many other free software projects in the early 1990s struggled to get a working kernel up and running?

For Fun and Profit

I explore these questions and more in the book, which was published this month by MIT Press. This book isn’t just about Linux, though. It’s about the history of free and open source software writ large. However, explaining the what, why, and how of the Linux kernel’s history is a major focus of the book. The book tells the story of how Linux came to be what it is today. It not only explains the major events and personalities that shaped the kernel, but also considers why Linux followed the specific historical path that led to today, a path that no one could have foreseen back when Torvalds announced Linux on the Minix Usenet group in August 1991.

Other key topics covered in the book include the origins of Unix and Unix’s role in laying the foundation for the free software movement, the birth and evolution of Richard Stallman’s GNU project and the creation of open source Web platforms like Apache.

The book also critically reevaluates some of the traditional ways of thinking about the history of free and open source software. I argue, for example, that Stallman and GNU have been a lot more pragmatic, historically speaking at least, than they receive credit for. Stallman may be a polarizing figure, but measured from a historical perspective, neither he nor GNU is as dogmatic as they are sometimes portrayed. I also note that Torvalds doggedly opposed charging any money for Linux when he created the kernel, a fact that is easy to forget today, when Linux helps to sustain billion-dollar companies.

I explore, too, the complicated and controversial questions of whether projects like Ubuntu and Android have remained true to the original goals of the free software movement that helped create them or whether these platforms engender more problems and distractions for free software hackers than they are worth. Through discussion of issues like these, the book brings the history of free and open source software up to the present day.

Why I Wrote this Book

It’s easy to find summaries online (and sometimes even in man pages) of the history of various free and open source software projects. But no one has told their story comprehensively or tried to explain why free and open source software was created, how the philosophies and practices associated with it have evolved over time, and why some projects flourished while others fizzled; in short, why we live today in a world dominated by free and open source software, which few people would have predicted even just a decade ago.

The book is based on extensive research with original sources: things like Usenet archives, old mailing lists, and lots and lots of historical Slashdot threads. It is also informed by discussions with Torvalds, Stallman, and other important figures in the history of the free and open source software community.

Sample Chapter and Further Reading

If any of the above sounds interesting, you can read a sample chapter from the book or learn more about the book in general.