
Huawei: Openness Key to Building an All-Cloud Network

This article is paid for by Huawei, a Platinum-level sponsor of Open Networking Summit, to be held April 3-6, and was written by Linux.com.

While the networking community is getting ready for Open Networking Summit 2017, we spoke to Bill Ren, Vice President of Huawei Network, Industry & Ecosystem Development, to discuss the role of openness in what Huawei calls the “All-Cloud Network.”

Bill Ren, Vice President of Huawei Network, Industry & Ecosystem Development
As companies grapple with digital transformation across all industries, Huawei advocates full cloudification to build efficient networks and agile competitiveness. The “All-Cloud” strategy involves a full reconstruction of network infrastructure, including equipment, network, services, and operations, and it will be built on openness.

“The future cloud network presents huge market potential. Full-gear and close collaboration from all industry parties is needed to build an industry ecosystem with positive impact and eventually, enable a successful rollout of the cloud network,” said Ren.

Indeed, in this big data and cloud age, collaboration and integration are the keystones of success.

That’s no secret to Huawei, which is already heavily invested in shaping cloud network-related standards and contributing research results. The company is a founding Platinum member and major contributor to the OPNFV community, a Platinum member of The Linux Foundation, a Gold member of OpenStack and Cloud Foundry, and a founding member of ONOS.

“We fully participate in setting up SDN technical standards, as well as lead in setting up northbound interface / security / optical transmission and other standards.”

Here is what Ren had to say on how all that comes together to shape the future of the industry.

Linux.com: Why is the cloud-network ecosystem so important to becoming and remaining competitive?

Ren: Currently, the whole industry is at a critical time of digital transformation. New technologies keep emerging, a flawless experience has become a requirement for end users, and an open and all-connected world has arrived. One can expect that the future will be a society that has thorough cross-industry integration and collaboration, which will need comprehensive network capacity.

Linux.com: Everyone agrees that innovation is crucial in the industry, but few are certain what that might look like. How does the future network differ from the networks that exist now?

Ren: In the past 20+ years, the world has built trillions of dollars’ worth of network assets. In order to maximize the value of these networks, face the multiple challenges of an intelligent society, and equip the network to open up the next trillion-dollar business territory, the future network will have several features.

Linux.com: And what are those features? Or, at least some of them?

Ren: There are four main features: agility, intelligence, efficiency, and openness. By agility I mean the network must have the capability for quick integration of new services and new features, delivering a ROADS experience and drastically reducing time-to-market (TTM).

By intelligence, I mean flexible and intelligent resource management, including fully automated planning, provisioning, resource allocation, and O&M to support thousands or even millions of service instances.

By efficiency, we mean better resource pooling. The industry must move away from closed silos, or the “chimney network model,” so that there is more sharing of resources and more ease in integration.

The fourth, of course, is openness. There must be more cooperation and participation by more industry partners to rapidly mature these technologies and to build larger pools of resources.

Open Networking Summit April 3-6 in Santa Clara, CA features over 75 sessions, workshops, and free training! Get in-depth training on up-and-coming technologies including AR/VR/IoT, orchestration, containers, and more.

Linux.com readers can register now with code LINUXRD5 for 5% off the attendee registration. Register now!

This article was sponsored by Huawei, a leading global information and communications technology (ICT) solutions provider. Our aim is to enrich life and improve efficiency through a better connected world, acting as a responsible corporate citizen, innovative enabler for the information society, and collaborative contributor to the industry. Driven by customer-centric innovation and open partnerships, Huawei has established an end-to-end ICT solutions portfolio that gives customers competitive advantages in telecom and enterprise networks, devices, and cloud computing. For more information, follow Huawei on Twitter at http://www.twitter.com/Huawei.

Kubernetes Federation in a Post-Configuration Management Universe

When containerization was young, one of its early principles was the ideal of immutable infrastructure: the ability to build a support structure for a container that was flexible enough to meet the container’s needs during its lifespan (which might be short), yet remained a fixed asset throughout that duration.

It spoke to the possible, complete outmoding of one of IT’s most critical functions, configuration management, a skill upon which enterprises dearly depend today. While the leading vendors in the space began discussing evolutionary adaptations, the practitioners rallied together, forming a kind of global support group to keep hope alive through the evolutionary maelstrom.

Read more at The New Stack

A Brief History of Random Numbers

“As an instrument for selecting at random, I have found nothing superior to dice,” wrote statistician Francis Galton in an 1890 issue of Nature. “When they are shaken and tossed in a basket, they hurtle so variously against one another and against the ribs of the basket-work that they tumble wildly about, and their positions at the outset afford no perceptible clue to what they will be even after a single good shake and toss.”

How can we generate a uniform sequence of random numbers? The randomness so beautifully and abundantly generated by nature has not always been easy to extract and quantify. The oldest known dice (4-sided) were discovered in a 24th century B.C. tomb in the Middle East.

Read more at freeCodeCamp

Continuous Integration: CircleCI vs Travis CI vs Jenkins

CI definition and its main goal

Continuous Integration (CI) is a software development practice based on frequent integration of code into a shared repository. Each check-in is then verified by an automated build.

The main goal of continuous integration is to identify problems that occur during the development process earlier and more easily. If you integrate regularly, there is much less to check when looking for errors, which means less time spent debugging and more time for adding features. You can also set up checks of code style, cyclomatic complexity (low complexity makes testing simpler), and more. That helps minimize the effort of the person responsible for code review, saves time, and improves the quality of the code.

How it works


  • Developers check the code locally on their computers
  • When complete, they commit their changes to the repository
  • The repository sends a request (webhook) to the CI system
  • The CI server runs a job (tests, coverage, syntax checks, and others)
  • The CI server releases saved artifacts for testing
  • If the build or tests fail, the CI server alerts the team
  • The team fixes the issue
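Most hosted CI systems drive this flow from a small config file committed to the repository. As a sketch only, a hypothetical YAML config covering the steps above might look like this (the file layout, commands, and script names are illustrative, not from any specific platform):

```yaml
# Hypothetical config: a push fires the repository webhook, and the
# CI server runs these steps in a clean environment.
language: python
install:
  - pip install -r requirements.txt   # set up dependencies
script:
  - flake8 .            # style / syntax checks
  - ./runtests.py       # run the test suite; a non-zero exit fails the build
notifications:
  email:
    on_failure: always  # alert the team when a build breaks
```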

CircleCI vs Travis CI vs Jenkins

Now that the process of continuous integration is (I hope) clear, we can move on to comparing some of today’s most popular CI platforms. Each has its pros and cons. Let’s start with CircleCI.

CircleCI


Features:
  • CircleCI is a cloud-based system: no dedicated server is required, and you do not need to administer it. It also offers an on-prem solution that lets you run it in your private cloud or data center.
  • It has a free plan, even for a business account
  • REST API: you have access to projects, builds, and artifacts. The result of a build is an artifact or a group of artifacts; an artifact can be a compiled application or executable file (e.g., an Android APK) or metadata (e.g., information about a test’s success)
  • CircleCI caches requirements installation: it checks third-party dependencies instead of constantly reinstalling the needed environments
  • You can trigger SSH mode to access the container and investigate for yourself in case any problems appear
  • It’s a complete out-of-the-box solution that needs minimal configuration adjustments
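For illustration, a hypothetical circle.yml for a 2017-era (1.0) project might look like the following; the Python version and test command are assumptions, and CircleCI infers most steps automatically, so the overrides are optional:

```yaml
# Hypothetical circle.yml: only deviations from CircleCI's inferred
# defaults need to be spelled out.
machine:
  python:
    version: 3.5.0
dependencies:
  override:
    - pip install -r requirements.txt   # cached between builds
test:
  override:
    - ./runtests.py
```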
CircleCI is compatible with:
  • Python, Node.js, Ruby, Java, Go, etc
  • Ubuntu (12.04, 14.04), Mac OS X (paid accounts)
  • GitHub, Bitbucket
  • AWS, Azure, Heroku, Docker, dedicated server
  • Jira, HipChat, Slack
CircleCI Pros:
  • Fast and easy to start
  • Free plan, including for enterprise projects
  • Lightweight, easily readable YAML config
  • No dedicated server needed to run CircleCI
CircleCI Cons:
  • CircleCI supports only two versions of Ubuntu for free (12.04 and 14.04), and macOS only on paid plans

  • Although CircleCI does work with and run on all languages, it supports only the following programming languages “out of the box”:

Go (Golang), Haskell, Java, PHP, Python, Ruby/Rails, Scala

  • Problems may appear if you want to make customizations: you may need third-party software to make those adjustments

  • Also, while being a cloud-based system is a plus on one side, the vendor can stop supporting any software at any time, and you won’t be able to prevent that

Travis CI


Travis CI and CircleCI are very similar.

Both of them:

  • Use a YAML file as config
  • Are cloud-based
  • Support Docker for running tests
What does Travis CI offer that CircleCI doesn’t?

  • The option to run tests on Linux and Mac OS X at the same time

  • Supports more languages out of the box:

Android, C, C#, C++, Clojure, Crystal, D, Dart, Erlang, Elixir, F#, Go, Groovy, Haskell, Haxe, Java, JavaScript (with Node.js), Julia, Objective-C, Perl, Perl6, PHP, Python, R, Ruby, Rust, Scala, Smalltalk, Visual Basic

  • Support for build matrices

Build matrix

language: python  
python:  
  - "2.7"
  - "3.4"
  - "3.5"
env:  
  - DJANGO='django>=1.8,<1.9'
  - DJANGO='django>=1.9,<1.10'
  - DJANGO='django>=1.10,<1.11'
  - DJANGO='https://github.com/django/django/archive/master.tar.gz'
matrix:  
  allow_failures:
    - env: DJANGO='https://github.com/django/django/archive/master.tar.gz'

A build matrix is a tool that lets you run tests against different versions of languages and packages. You can customize it in various ways; for example, failures in some environments can trigger notifications without failing the whole build (helpful for development versions of packages).

Tox

If you prefer another CI platform, there is always the option to create a build matrix using Tox.

[tox]
envlist = py{27,34,35}-django{18,19,110,master}

[testenv]
deps =  
    py{27,34,35}: -rrequirements/test.txt
    django18: Django>=1.8,<1.9
    django19: Django>=1.9,<1.10
    django110: Django>=1.10,<1.11
    djangomaster: https://github.com/django/django/archive/master.tar.gz
commands = ./runtests.py

[testenv:py27-djangomaster]
ignore_outcome = True

Tox is a generic virtualenv management and test command-line tool. You can install it with pip install tox or easy_install tox.

Travis CI Pros:
  • Build matrix out of the box
  • Fast start
  • Lightweight YAML config
  • Free plan for open-sourced projects
  • No dedicated server required
Travis CI Cons:
  • Price is higher compared to CircleCI, no free enterprise plan

  • Customization (for some things you’ll need third-party tools)

Jenkins


Features:

  • Jenkins is a self-contained Java-based program, ready to run out of the box, with packages for Windows, Mac OS X, and other Unix-like operating systems

  • With hundreds of plugins in the Update Center, Jenkins integrates with practically every tool in the continuous integration and continuous delivery toolchain

  • Jenkins can be extended via its plugin architecture, providing nearly infinite possibilities for what Jenkins can do

  • Various job modes: Freestyle project, Pipeline, External Job, Multi-configuration project, Folder, GitHub Organization, Multibranch Pipeline

  • Jenkins Pipeline. That’s a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins. Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines “as code” via the Pipeline DSL

  • Allows you to launch builds with various conditions.

  • You can run Jenkins with Libvirt, Kubernetes, Docker, and others.

  • REST API: gives access to controlling the amount of data you fetch, fetching/updating config.xml, deleting a job, retrieving all builds, fetching/updating a job description, performing a build, and disabling/enabling a job
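As a sketch of the Pipeline DSL mentioned above, a minimal declarative Jenkinsfile might look like the following; the stage names, shell commands, and email address are illustrative assumptions, not part of any particular project:

```groovy
// Hypothetical declarative Jenkinsfile: the delivery pipeline is modeled
// "as code" and checked into the repository next to the application.
pipeline {
    agent any                          // run on any available node
    stages {
        stage('Build') {
            steps { sh 'make build' }  // illustrative build command
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
    post {
        failure {                      // alert the team when the build breaks
            mail to: 'team@example.com',
                 subject: "Build failed: ${env.JOB_NAME}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```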

Jenkins Pros:
  • Price (it’s free)
  • Customization
  • Plugins system
  • Full control of the system
Jenkins Cons:
  • A dedicated server (or several servers) is required, which results in additional expenses for the server itself, DevOps staff, etc.
  • Time needed for configuration / customization

Conclusion

Which CI system should you choose? That depends on your needs and how you plan to use it.

CircleCI is recommended for small projects, where the main goal is to start the integration as fast as possible.

Travis CI is recommended when you are working on open-source projects that should be tested in different environments.

Jenkins is recommended for big projects that need a lot of customization, which can be done through various plugins. You can change almost everything here, but this process may take a while. If you are planning on the quickest start with a CI system, Jenkins might not be your choice.


Originally published here


Docker at Four: The State of the Docker Ecosystem from 2013 to Today

Docker containers turned four years old this month. If you were paying attention to Docker in its early days, you know that the Docker ecosystem today looks nothing like it did then. Here’s how the Docker world has evolved since Docker’s launch in 2013.

Docker’s debut generated a splash among developers — you can hear them oohing and aahing in this demonstration of Docker at the PyCon event in 2013 — but its commercial importance at the time of its launch was unclear. Back then, Docker was just an implementation of LXC, a Linux containerization technology that had already been around for years. LXC was an interesting platform, but few people in 2013 thought of it as the building block for deployment infrastructure that could replace virtual machines.

Read more at The VAR Guy

Real-World Performance and the Future of JavaScript Benchmarking

Web workloads are changing. Performance metrics and tooling need to adapt. Limiting the amount of JS proportionally to what’s visible on the screen is a good strategy.

In the last 10 years, an incredible amount of resources went into speeding up peak performance of JavaScript engines. This was mostly driven by peak performance benchmarks like SunSpider and Octane and shifted a lot of focus toward the sophisticated optimizing compilers found in modern JavaScript engines like Crankshaft in Chrome.

This drove JavaScript peak performance to incredible heights in the last two years, but at the same time, we neglected other aspects of performance like page load time, and we noticed that it became ever more difficult for developers to stay on the fine line of great performance. In addition to that, despite all of these resources dedicated to performance, the user experience on the web seemed to get worse over time — especially page load time on low-end devices.

Read more at DZone

Open-Source Developers Targeted in Sophisticated Malware Attack

Attackers have targeted developers present on GitHub since January with an information-stealing program called Dimnie.

The attacks started in January and consisted of malicious emails specifically crafted to attract the attention of developers, such as requests for help with development projects and offers of payment for custom programming jobs. The emails had .gz attachments that contained Word documents with malicious macro code attached. If allowed to execute, the macro code executed a PowerShell script that reached out to a remote server and downloaded a malware program known as Dimnie.

Read more at InfoWorld

Global Enterprises Join The Linux Foundation to Accelerate Open Source Development Across Diverse Industries

Open source is now mainstream. More and more developers, organizations, and enterprises are understanding the benefits of an open source strategy and getting involved. In fact, The Linux Foundation is on track to reach 1,000 participating organizations in 2017 and aims to bring even more voices into open source technology projects ranging from embedded and automotive to blockchain and cloud.

Just this week, AT&T joined The Linux Foundation as a Platinum Member, and 16 other organizations joined as Silver Members. Together, these organizations combine to help support development of the greatest shared technology resources in history, while accelerating innovation across industry verticals.

AT&T’s commitment to open source follows news of the company’s contribution of several million lines of ECOMP code to The Linux Foundation. Additionally, Chris Rice, senior vice president of AT&T Labs, joined The Linux Foundation Board of Directors and was also recently selected as the ONAP chairman.

The Linux Foundation is excited about the recent merger of open source ECOMP and OPEN-O, which formed the Open Network Automation Platform (ONAP) project initiated by China Mobile. The newly formed ONAP will allow end users to automate, design, orchestrate, and manage services and virtual functions. Through this amalgamation of projects, ONAP creates a harmonized framework for real-time, policy-driven software automation of virtual network functions and is poised to deliver a unified architecture and implementation faster than any one project could on its own.

AT&T, along with other members, service providers, developers, and industry leaders, will be at Open Networking Summit next week, April 3-6, in Santa Clara, CA to discuss networking topics, share insights, and shape the future of the industry. The event will feature an enterprise track, more than 75 sessions, and keynotes from networking visionaries.

The new Silver members include: Amihan Global Strategies, BayLibre, Bell Canada, China Merchants Bank, Comcast, Ericsson, Innovium, Kinvolk, Kontena, Kubique, Metaswitch Networks, Monax, Pinterest, SAP SE, SELTECH, and Tech Mahindra.

In addition to joining the Foundation, many of these new members have joined Linux Foundation projects across a wide range of technologies, such as Automotive Grade Linux, Cloud Native Computing Foundation, Hyperledger, Open Container Initiative, Open Mainframe Project, Open Network Automation Platform (ONAP), OpenSwitch, and Yocto Project.

The Linux Foundation is also excited about a new initiative in the IoT space. If you’re working in the edge networking/IoT space and want to learn more, please contact Mike Woster.

Security Tips for Installing Linux on Your SysAdmin Workstation

Once you’ve chosen a Linux distro that meets all the security guidelines set out in our last article, you’ll need to install the distro on your workstation.

Linux installation security best practices vary, depending on the distribution. But, in general, there are some essential steps to take:

  • Use full disk encryption (LUKS) with a robust passphrase

  • Make sure swap is also encrypted

  • Require a password to edit bootloader (can be same as LUKS)

  • Set up a robust root password (can be same as LUKS)

  • Use an unprivileged account that is part of the administrators group

  • Set up a robust user-account password, different from root

These guidelines are intended for systems administrators who are remote workers. But they apply equally well if you work either from a portable laptop in a work environment, or set up a home system to access work infrastructure for after-hours/emergency support.

When combined with the other recommendations in this series, they will help reduce the risk that SysAdmins will become attack vectors against the rest of your IT infrastructure.

Full disk encryption

Unless you are using self-encrypting hard drives, it is important to configure your installer to fully encrypt all the disks that will be used for storing your data and your system files. It is not sufficient to simply encrypt the user directory via auto-mounting cryptfs loop files (I’m looking at you, older versions of Ubuntu), as this offers no protection for system binaries or swap, which is likely to contain a slew of sensitive data. The recommended encryption strategy is to encrypt the LVM device, so only one passphrase is required during the boot process.
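For illustration only (this sketch is not from the original checklist), the LVM-on-LUKS layout described above is roughly what an installer sets up under the hood. Here /dev/sdX2 is a placeholder for your system partition, and these commands destroy any existing data on it:

```shell
# Illustrative sketch of LVM-on-LUKS: one passphrase protects everything below.
cryptsetup luksFormat /dev/sdX2          # create the encrypted container
cryptsetup open /dev/sdX2 cryptlvm       # unlock it as /dev/mapper/cryptlvm
pvcreate /dev/mapper/cryptlvm            # LVM lives inside the container
vgcreate vg0 /dev/mapper/cryptlvm
lvcreate -L 8G -n swap vg0               # swap is encrypted along with the rest
lvcreate -l 100%FREE -n root vg0
```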

The /boot partition will usually remain unencrypted, as the bootloader needs to be able to boot the kernel itself before invoking LUKS/dm-crypt. Some distributions support encrypting the /boot partition as well (e.g. Arch), and it is possible to do the same on other distros, but likely at the cost of complicating system updates. It is not critical to encrypt /boot if your distro of choice does not natively support it, as the kernel image itself leaks no private data and will be protected against tampering with a cryptographic signature checked by SecureBoot.

Choosing good passphrases

Modern Linux systems have no limitation of password/passphrase length, so the only real limitation is your level of paranoia and your stubbornness. If you boot your system a lot, you will probably have to type at least two different passwords: one to unlock LUKS, and another one to log in, so having long passphrases will probably get old really fast. Pick passphrases that are two to three words long, easy to type, and preferably from rich/mixed vocabularies.

Examples of good passphrases (yes, you can use spaces):

• nature abhors roombas

• 12 in-flight Jebediahs

• perdon, tengo flatulence

Weak passphrases are combinations of words you’re likely to see in published works or anywhere else in real life, and you should avoid using them, as attackers are starting to include such simple passphrases into their brute-force strategies.

Examples of passphrases to avoid:

• Mary had a little lamb

• you’re a wizard, Harry

• to infinity and beyond

You can also stick with non-vocabulary passwords that are at least 10-12 characters long, if you prefer that to typing passphrases.
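As a rough sanity check on these recommendations, you can compare the search spaces of the two approaches. This small calculation (my own illustration, not from the original article) estimates entropy as length times log2 of the pool size; it only holds if the words or characters are chosen randomly:

```python
import math

def passphrase_bits(words: int, vocabulary: int) -> float:
    """Entropy in bits of `words` picked at random from a `vocabulary`."""
    return words * math.log2(vocabulary)

def password_bits(chars: int, alphabet: int) -> float:
    """Entropy in bits of `chars` picked at random from an `alphabet`."""
    return chars * math.log2(alphabet)

# Three random words from a 10,000-word vocabulary ...
print(round(passphrase_bits(3, 10_000)))   # 40 (bits)
# ... versus 12 random characters from the 94 printable ASCII symbols.
print(round(password_bits(12, 94)))        # 79 (bits)
```

Either is far beyond casual guessing, which is why the practical advice above focuses on avoiding phrases that appear in published works rather than on raw length.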

Unless you have concerns about physical security, it is fine to write down your passphrases and keep them in a safe place away from your work desk.

Root, user passwords and the admin group

We recommend that you use the same passphrase for your root password as you use for your LUKS encryption (unless you share your laptop with other trusted people who should be able to unlock the drives, but shouldn’t be able to become root). If you are the sole user of the laptop, then having your root password differ from your LUKS password has no meaningful security advantage. Generally, you can use the same passphrase for your UEFI administration, disk encryption, and root account; knowing any of these will give an attacker full control of your system anyway, so there is little security benefit to having them be different on a single-user workstation.

You should have a different, but equally strong, password for the regular user account that you will be using for day-to-day tasks. This user should be a member of the admin group (e.g., wheel or similar, depending on the distribution), allowing you to use sudo to elevate privileges.
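As an illustrative sketch (the username is a placeholder, and the group name varies by distribution), adding your day-to-day account to the admin group looks like this:

```shell
# Add your day-to-day account to the admin group.
# "wheel" is common on Fedora/Arch; Debian and Ubuntu use "sudo" instead.
usermod -aG wheel yourusername   # run as root; log out and back in afterwards
sudo -v                          # then verify sudo accepts your user password
```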

In other words, if you are the sole user on your workstation, you should have two distinct, robust, equally strong passphrases you will need to remember:

Admin-level, used in the following locations:

• UEFI administration

• Bootloader (GRUB)

• Disk encryption (LUKS)

• Workstation admin (root user)

User-level, used for the following:

• User account and sudo

• Master password for the password manager

All of them, obviously, can be different if there is a compelling reason.

Next time we’ll talk about post-installation security hardening. This will depend greatly on your distribution of choice, so we’ll provide an overview of the steps you should take rather than provide detailed instructions.

Workstation Security

Read more:

How to Choose the Best Linux Distro for SysAdmin Workstation Security

4 Security Steps to Take Before You Install Linux

A Journey through Upstream Atomic KMS to Achieve DP Compliance – Manasi Navare, Intel

Intel’s Manasi Navare describes her journey of creating a patch to fix DisplayPort issues and offers some general tips for the Linux kernel upstreaming process.