
Wal-Mart Proves Open Source Is Big Business

Open source, in the form of both software and hardware, is big business—really big business. This past week, Wal-Mart Stores, one of the largest retailers in the world, proved it once again when it announced that it would make its application lifecycle management tool OneOps available as an open source project. Why give away something you created that gives your company an advantage? Because companies like Wal-Mart and General Electric (with its Open Innovation initiative) are finding that proprietary development is being beaten by collaboration and openness.

Turning a company’s proprietary work into something shared in the market can reap benefits as others also help innovate and return new features and functionality to your work.

Read more at Forbes

Intel Puts Numbers on the Security Talent Shortage

The cybersecurity shortfall in the workforce remains a critical vulnerability for companies and nations, according to an Intel Security report being issued today.

Eighty-two percent of surveyed respondents reported a shortage of security skills, and respondents in every country said that cybersecurity education is deficient. Twenty-five percent of organizations have lost proprietary data to cyberattacks, and 71 percent of respondents report that the lack of security skills does measurable damage to their organizations.

Read more at SDx Central

Agile 2016: How to Measure your DevOps Initiatives

You’ve heard the benefits of DevOps, and you’ve decided to move your teams to this way of working. But how do you know if you are doing it right? Metrics are key indicators for businesses to figure out whether or not they are making the right decisions, but often they aren’t choosing the right metrics to look at, according to Anders Wallgren, CTO of Electric Cloud.

He spoke at the Agile 2016 conference in Atlanta about the key DevOps metrics businesses should look into to determine failures or successes.

Read more at SDTimes

Mirantis to Fuse Kubernetes, CI/CD with Commercial OpenStack

In a move with serious implications for the lowest software layers of data center infrastructure, commercial OpenStack producer Mirantis this morning announced it is partnering with the two most important players in the infrastructure space — Google and Intel — to produce a new version of the OpenStack platform designed to run inside Linux containers (such as Docker), for deployment through Google’s Kubernetes orchestrator platform.

“We are containerizing all of the OpenStack services,” explained Boris Renski, Mirantis’ co-founder and CMO, in an interview with Datacenter Knowledge, “and making it possible to natively run OpenStack on top of Kubernetes — to make it be orchestrated by Kubernetes.” … What Renski is telling us is that Mirantis’ commercial OpenStack will itself be deployed within containers, whose coordination with one another will be maintained using Kubernetes.

Read more at Data Center Knowledge

 

OpenVZ 7.0 Runs Linux VMs Like Containers

One of the original major proponents of container technology on Linux, OpenVZ — or Virtuozzo in its commercial edition — is releasing a new version of its container solution, packaged as a full-fledged Linux distribution.

The commercial edition of OpenVZ, called Virtuozzo (also the name of the company marketing the product), incorporates OpenVZ but adds enterprise-grade features not found in the open source release. Virtuozzo has a new release of its own alongside OpenVZ 7, named — appropriately enough — Virtuozzo 7.

Most of the big changes announced in OpenVZ 7.0 involve the packaging and deployment of the product. It’s now an entire standalone Linux distribution, with both the commercial Virtuozzo product and the free OpenVZ distribution based on the same kernel.

Read more at InfoWorld

Navigating the Data Center Networking Landscape

System administrators today are tasked with creating a much smarter networking layer, one that is capable of keeping up with some of the most advanced business and IT demands. In a recent Worldwide Enterprise Networking Report, IDC pointed out that virtualization continues to have a sizable impact on the enterprise network. IDC expects that these factors will place unprecedented demands on the scalability, programmability, agility, analytics capabilities, and management capabilities of enterprise networks. They predict that in 2016, overall enterprise network revenue will grow 3.5 percent to reach $41.1 billion.

In one of my recent articles here on DCK, we defined the overall SDN landscape. We examined technologies like NSX, ACI, and even open SDN systems. Today, we take a step back and will look at four data center networking components impacting the modern business:

  • Traditional data center networking.
  • Software-defined networking.
  • White/brite box networking.
  • Cloud networking.

Read more at Data Center Knowledge

Reasons Organizations Opt Not to Use Open Source Software

Black Duck’s latest open source survey shows that a majority of companies are now using open source. So what’s stopping the rest? Here’s a look at the reasons why businesses might choose not to use open source, or avoid partnering with companies that do.

The fact that open source doesn’t work for everyone — or that some people think it won’t work for them, so they don’t even give it a try — does not mean open source is inherently flawed. It’s certainly a highly effective way to build and acquire software in many situations. Still, to understand open source fully, it’s worth taking a look at its drawbacks, both perceived and actual. They include…

Read more at The VAR Guy

NodeSource Offers ‘One-Click’ Deployment of Node.js for Kubernetes Clusters

NodeSource has configured a version of its N|Solid commercial Node.js implementation to run on Kubernetes clusters, potentially speeding both the deployments of containerized Node.js applications and N|Solid deployments themselves.

“We are seeing a ton of traction behind containers being orchestrated by Kubernetes,” said Joe McCann, CEO and co-founder of NodeSource. “For us, it makes sense to specialize an offering with N|Solid inside a Docker container orchestrated by Kubernetes.” 

Using Node.js in conjunction with containers can offer speedy deployment times, elaborated Ross Kukulinski, NodeSource technical product manager. About 45 percent of developers using Node.js are already using it with container technologies such as Docker.

Read more at The New Stack

Find Top 15 Processes by Memory Usage with ‘top’ in Batch Mode

Similar to the previous tip about finding the top processes by RAM and CPU usage, you can also use the top command to view the same information. Perhaps there’s an extra advantage to this approach when compared to the previous one: the “header” of top provides extra information about the current status and usage of the system: the uptime, load average, and total number of processes, to name a few examples.
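For instance, with the procps-ng version of top (assumed here; the `-o` sort flag requires procps-ng 3.3 or later), batch mode plus memory sorting yields the top 15 processes. The summary header and column header occupy the first seven lines, so keeping 22 lines total leaves 15 process rows:

```shell
# -b: batch mode (plain-text output, suitable for pipes and scripts)
# -n 1: run a single iteration instead of refreshing forever
# -o +%MEM: sort descending by memory usage (procps-ng 3.3+)
top -b -n 1 -o +%MEM | head -n 22
```

Drop the `head` filter to see every process, or change `-n 22` to show more or fewer entries.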


DevOps: A Pillar of Modern IT Infrastructure

A massive transformation is underway in the way we manage IT infrastructure. More companies are looking for improved agility and flexibility. They are moving from traditional server stacks to cloud infrastructure to support a new array of applications and services that must be delivered at breakneck pace in order to remain competitive.

This transition is as much about people as it is about technology. In traditional data centers there are different IT departments specialized in different pieces — networking, storage, database, and so on. These silos operate independently. Provisioning new systems is a challenging and time-consuming task.

The adoption of cloud-based, software-defined technologies requires a much tighter collaboration between these silos. When a company needs to spin up 100 new systems in under a minute, they can’t afford 10 different departments talking to each other to provision it. They need it now. To achieve this scale and speed, they need to break the silos. And that’s exactly what has happened, giving rise to a new culture and set of IT practices called DevOps that blur the traditional line between developers and operations.

Who are DevOps pros?

DevOps is the emerging process and mindset for managing modern, cloud-based infrastructure that is gaining mainstream adoption. It’s the blending of development and operational needs to create maximum business agility, says Amit Nayar, Vice President of Engineering at Media Temple, a web hosting and cloud hosting provider.

DevOps professionals are developers who also have the expertise of sysadmins, and vice versa; they know all the components needed for operations — from networking, databases, and storage to almost everything else. It’s the breaking down of the silos seen in traditional IT infrastructure to foster tighter collaboration between “development” and “operations,” which gives rise to the term DevOps. DevOps professionals are jacks of all trades and masters of some.

Some of the core skillsets expected of DevOps professionals include a complete understanding of building, deploying, monitoring, and managing production and development environments.

To get a picture of the skillsets that big companies are looking for, I went through dozens of DevOps job openings at big companies like Boeing, Capital One, Booz Allen, Red Hat, Geico, and Apple. I found certain “desired” skills and knowledge common to all those job openings:

  • Strong experience with provisioning and deployment toolchains such as Chef, Puppet, Ansible, Salt, Docker, Heroku Buildpacks, etc.

  • Containers and container orchestration (Kubernetes, Docker, Docker Swarm)

  • Continuous integration and test automation tools (Travis CI, Jenkins, etc.)

  • Cloud and virtualization technologies: AWS, Azure, VMware, KVM, zones/containers, Vagrant, Docker

  • System monitoring experience with tools such as Nagios, Sensu, etc.

  • Proficiency in scripting languages such as Perl, Python, and Ruby

  • Experience building dev, test, and production environments in public cloud

  • Experience in configuration management and source code control tools and setting up continuous build environments

  • Experience with the Hashicorp tools (Vagrant, Vault, Packer, etc.)

  • Experience with networking services and SDN

  • Good understanding of network theory, such as different protocols (TCP/IP, UDP, ICMP, etc.), MAC addresses, IP packets, DNS, OSI layers, and load balancing

  • Strong working knowledge of Linux operating systems, their underlying components, system statistics, performance tuning, filesystems, and I/O.

  • Problem-solving ability and a do-whatever-it-takes attitude.

In a nutshell, DevOps pros are doing almost everything around key areas — automation, building, testing, deploying, monitoring, and managing the production and development environment.

“With more of the industry supporting cloud-like infrastructures — including running virtualized platforms on in-house data centers — it is critical for engineers to understand how things connect and the constraints involved in running these services and applications,” said Mike Fiedler, Director of Technical Operations at DataDog, provider of a SaaS-based monitoring and analytics platform.

The DevOps movement is not just about the technical skillset, however. “DevOps is as much a mindset as it is a skillset,” Nayar said. The collapse of silos has certainly brought a shift in skillset — the line between operations and developers is blurry now.

But, as Nayar said, it’s not just about skillset. Those job openings show that organizations are looking at their infrastructure as a whole, breaking down the traditional silos and barriers that separated developers and operations.

Rise of DevOps

Modern infrastructures are predominantly being designed to be defined by code to ensure repeatable, programmable deployments, and this trend is an impetus for requiring the hybrid DevOps mindset and skillset, according to Nayar.

DevOps and cloud native applications are turning infrastructure into code that can be described in a YAML file. This has interesting side effects. It’s liberating for sysadmins, who no longer have to worry about provisioning and managing individual pieces of infrastructure, such as virtual machines, firewalls, etc. All of that is moved to the developer side of the house. It’s also liberating for developers, who no longer have to deal with the operations side of things.
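As a hypothetical illustration of what such a file can look like, a few lines of Kubernetes-style YAML can declare an entire fleet of application instances; the names and image here are invented for the example, not taken from any real deployment:

```yaml
# Hypothetical Kubernetes Deployment: declares the desired state
# (100 identical app instances); the orchestrator converges to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # invented name for illustration
spec:
  replicas: 100                # "spin up 100 new systems" declaratively
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        image: example/web:1.0  # hypothetical container image
        ports:
        - containerPort: 8080
```

No one provisions those 100 systems by hand; the file states the desired end state, and the platform's APIs do the rest.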

“They define the need through the file, abstracting out infrastructure — describing it all in a file and requests through APIs,” said Amar Kapadia, Sr. Director Product Marketing at Mirantis.

The result is that structural changes cannot take place independently of operational changes. If you want a modern infrastructure, you must adopt DevOps practices. DevOps professionals are the ones who enable a company to deliver applications and services more efficiently and cost effectively.

“Modern infrastructure is synonymous with DevOps,” said Thomas Hatch, creator of the popular Salt configuration management tool and CTO of SaltStack.

New game, new challenges

Deploying modern infrastructure is challenging. As more companies embrace modern IT infrastructure, adopting a DevOps culture becomes an advantage; they must transform or replace existing independent silos with a more coherent group of IT professionals. Part of adopting DevOps, Nayar says, is moving your culture to a DevOps mindset first and finding common agreement between development and operations.

The biggest challenge for these companies is breaking old habits and challenging the status quo. To make this shift, companies must hire people with experience in the DevOps culture and toolset, but they also must train their existing staff on DevOps concepts, Nayar said. This can be an easy task for a smaller company, but the bigger the company, the harder it becomes. In either case, the increasing demand for DevOps professionals can strain employees and HR departments alike.

Different companies use different approaches

Companies are very diverse in how they run their operations. They have different business and digital requirements, and they are using different approaches to adapt to the growing need for cross-team collaboration. “Every organization is adopting the DevOps mindset in a very different way,” said Hatch.

One approach is to identify developers and sysadmins who are willing to learn more about tasks outside of their current job profile. It’s not just the companies looking for new skillsets; systems and software engineers are going through the same phase. They are aware of the changing dynamics, and they are aware of the risks of not adapting to new opportunities. It’s not really hard to find developers interested in learning more about the systems that run the applications they developed. Companies can identify such talent and encourage them to level up and acquire the knowledge needed to support their services. “The same applies for sysadmins who are looking to learn more about development,” said Fiedler.

Some companies take a different approach and cross-pollinate devs and operations. You can embed a sysadmin with the dev team so the team gets the domain-specific expertise. The sysadmin is then more involved in the dev cycle, aware of design decisions that can have impact later on. Additionally, “developers benefit from having someone readily accessible to help them understand some of the system-level constraints and greater architecture,” said Fiedler.

Either way, the transformation requires a strong leader to focus on creating a culture shift by training and reorganizing new teams. “One place where companies making the transition to DevOps are hiring most is leadership to drive the transformation (VP level),” said Kapadia. “They are hired to own the transformation.”

Conclusion

The bottom line is that organizations can’t afford to ignore this cultural change.

“The transition from traditional dev and ops practices to coordinated, agile DevOps has been slow, but each organization has to move at an appropriate pace,” said Hatch. “Fear of missing out and competitive pressure has helped to accelerate adoption, as the benefits of DevOps can be obvious and markets move so quickly these days.”

The benefits of DevOps are being demonstrated by web-scale companies like Amazon, Facebook, and Google. Everyone seems to be on the hunt for GIFEE (Google-style Infrastructure For Everyone Else). But change is hard, and for some, impossible.

In response, DevOps pros are fond of quoting noted statistician and champion of the lean manufacturing movement, W. Edwards Deming, who famously said, “It is not necessary to change. Survival is not mandatory.”

 
