How to Clone or Backup Linux Disk Using Clonezilla

Clonezilla is one of the best open source backup tools for Linux. Its graphical user interface, combined with a simple, fast, and intuitive guided command-line wizard running on top of a live Linux kernel, makes it an ideal backup tool for any sysadmin out there.

With Clonezilla, not only can you perform a full backup of a device's data blocks directly to another drive (also known as disk cloning), but you can also back up entire disks or individual partitions to images, either remotely (over SSH, Samba, or NFS shares) or locally. The images can be encrypted and stored on central backup storage, typically a NAS, or even on external hard disks or other USB devices.

Read full article

How to Monitor the Progress of dd on Linux

The dd command lets you duplicate one hard disk to another or erase a drive completely, and it is also useful for backups and recovery. However, once a dd command starts, there is nothing to tell you how far along it is; it just sits at the cursor until it finally finishes. There are various ways to display the progress of dd on Linux. This tutorial explains how to use the latest version of dd from GNU coreutils to display its status on a Linux-based system.
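
As a quick taste of what the tutorial covers (the device names below are placeholders, so adjust them for your system), recent GNU dd accepts a status=progress operand, and an already-running GNU dd will print its current statistics if you send it the USR1 signal:

    # GNU coreutils dd (roughly 8.24 and newer): print a live status line while copying
    dd if=/dev/sdX of=/dev/sdY bs=4M status=progress

    # For a GNU dd that is already running, send USR1 to make it print
    # its current I/O statistics to stderr without interrupting the copy
    kill -USR1 "$(pgrep -x dd)"

Another common approach, not shown here, is to pipe the transfer through pv to get a progress bar.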

Read full article

How to Create Docker Images with a Dockerfile

In this tutorial, I will show you how to create your own Docker image with a Dockerfile. A Dockerfile is a script containing the commands and instructions that are executed in sequence, in the Docker environment, to build a new Docker image. As an example, we will create an Nginx web server with PHP-FPM.
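
To give a feel for the format, here is a minimal, hypothetical Dockerfile along those lines (the base image, package names, and file names are assumptions rather than the article's exact recipe, and a complete image would also wire Nginx up to PHP-FPM, typically via a small start script):

    # Hypothetical sketch: Ubuntu base image with Nginx and PHP-FPM installed
    FROM ubuntu:16.04
    RUN apt-get update && \
        apt-get install -y nginx php-fpm && \
        rm -rf /var/lib/apt/lists/*
    # Copy a site configuration from the build context (assumed filename)
    COPY default.conf /etc/nginx/conf.d/default.conf
    EXPOSE 80
    # Keep Nginx in the foreground so the container stays alive
    CMD ["nginx", "-g", "daemon off;"]

Building and running such an image is then a matter of docker build -t nginx-phpfpm . followed by docker run -d -p 80:80 nginx-phpfpm.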

Read full article

Attend the HackerNest Tech Job Fair Before LinuxCon on Saturday

On Saturday, Aug. 20, HackerNest will host its Tech Job Fair at MaRS Discovery District in Toronto. From 1-6 p.m., talented job seekers will speak with representatives from hiring companies. HackerNest is known for blockbuster events like DementiaHack, CourtHack, and monthly Tech Socials. The HackerNest Tech Job Fair is an extension of the fantastic programming those in the Toronto tech community have come to expect.

HackerNest Toronto has grown to over 9,800 members this year, and as its mandate and mission are to encourage more people to get into tech – and to be successful in tech – a Tech Job Fair provides an opportunity for recruiters and hiring managers to scope out the depth of talent in the city. Make no mistake, the HackerNest Tech Job Fair is not just for HackerNest community members, but for anyone looking for employment opportunities in the greater Toronto area.

HackerNest has locked in over 30 exhibitors for the Tech Job Fair, and the depth of opportunity in Toronto is showcased in sponsorship from organizations such as Tangerine Bank, the Konrad Group, Capco, and Amazon Canada, among others.

“On one hand, we’ve assembled a very diverse set of sponsors and exhibitors that exemplify the breadth of innovation across a number of exciting sectors in Toronto and Canada,” says JJ Beh, HackerNest’s COO. “On the other hand, we’ve managed to attract talent to the job fair that comprises passionate developers and technologists that are looking for that next career move.”

Organizations exhibiting at the Tech Job Fair are interested not only in hiring people, but also increasing engagement within the Toronto tech community. So no matter your age, skill level, coding language, or discipline, be sure to come by and check out the Tech Job Fair, this Saturday, Aug. 20 from 1pm – 6pm at MaRS Discovery District on College St.

RSVP for your free ticket here: http://hckrn.st/2aZgLbe

The Next Evolution of DevOps in the Enterprise: “Hardening” DevOps

In today’s digital age, much of business innovation is driven by software. To win, serve, and retain their customers, enterprises are being tasked with releasing application updates at an ever-faster pace. A great idea, killer functionality, and robust technology are all as important as ever – but they do not mean much if you can’t get your code to your end users quickly, predictably, and with high quality.

Your “Pathway to Production” is the path that your code takes from developer check-in all the way to a successful Release. It spans the entire organization – comprising all the different stakeholders, teams, processes, tools and environments involved in your software delivery. This is, essentially, how your organization delivers value to the market.

Increasingly, we see that organizations that become better at streamlining and accelerating their Pathway to Production are better equipped to compete and win in today’s economy. The maturity, speed and quality of your software release processes have become a key differentiator and a competitive advantage for businesses today.

DevOps and ARA: Paving a Better Pathway to Production

DevOps and Application Release Automation (ARA) have emerged to help organizations become better at delivering software – allowing for greater speed and agility while mitigating the risk of software releases.

DevOps has huge business benefits: statistics show that organizations practicing DevOps outperform the S&P 500 over a three-year period, that high-performing IT organizations have 50% higher market cap growth, and so on.

In order to remain competitive and meet consumer demands, enterprises across the board are adopting DevOps to optimize their Pathway to Production. Just as you would invest in designing the right functionality for your product, or defining a winning go-to-market plan, organizations now invest in optimizing and (re)designing their Pathway to Production to enable innovation.

The implementation of DevOps in large organizations comes with a unique set of challenges. Enterprises often need to support large volumes of distributed teams and multiple applications/product releases. In addition, regulatory and governance requirements, supporting legacy systems, tool variety, infrastructure management, and complex internal processes further compound these challenges.

I’d like to discuss the evolution of DevOps adoption in the enterprise, and what I see as the next phase of the DevOps revolution.

DevOps in the Enterprise: Starting Small, Dev Is Leading

Agile methodologies, adopted by many software organizations, have been largely focused on development, QA and product management functions, and less on the Pathway to Production once the software has been authored.

As a continuation to Agile, DevOps also started as a very Dev-driven movement (despite the ‘Ops’ in the name). Dev teams were quicker to adopt these practices, as they were eager to find a way to get their code into Production faster. Ops were traditionally more hesitant to adopt DevOps, seeing the increased velocity and speed as possible risks.

The majority of DevOps implementations today still start as grass-roots initiatives in small teams. And that’s OK; it’s a good way to show early success and then scale. Increasingly, alongside these bottom-up efforts, we’re seeing a shift toward DevOps being a company-wide initiative, championed at the executive level as well as at the team level.

The Next Phase: Scaling DevOps, Ops Takes Center Stage

One of the biggest challenges for large enterprises is the “silo-ing” of people, processes and toolsets. Oftentimes, one or more of these silos may be quite adept at understanding and automating their piece of the puzzle, but there is no end-to-end visibility or automation of the entire Pathway. This leads to fragmented processes, manual handoffs, delays, errors, lack of governance, etc.

Since the Pathway to Production spans the entire organization, enterprises are realizing that optimizing it is not a disparate set of problems, but requires a system-level approach. The evolution of DevOps is towards scaling adoption across the entire enterprise to cover the end-to-end Pathway to Production. This removes friction by automating all aspects of your delivery pipeline, in the pursuit of creating predictable, repeatable processes that can be run frequently with less and less human intervention. By achieving consistency of processes and deployments (into QA, Staging, Prod.) throughout the entire lifecycle, you’re in fact always ‘practicing’ for game-day, and hardening your DevOps practices as you optimize them.

As part of this process, as DevOps matures and becomes mainstream in enterprises (and as it becomes more critical to their operations), DevOps practices are ‘hardened’ to take into account more ‘Ops’ requirements for releases: mainly manageability, governance, security, and compliance.

Talking about “enterprise-control” is no longer a bad thing or something that may be viewed as hindering DevOps adoption. DevOps is about enabling speed while ensuring stability. Similar to children maturing, now that we’ve grown and learned to walk (faster), it’s time to learn to be more responsible.

As with the software your organization is developing, it’s time to “harden” your DevOps practices to scale adoption throughout your end-to-end process across the organization. ‘Hardening’ doesn’t mean sacrificing speed or experimentation; it means your DevOps is getting ready for Prime Time!

“Hardening” Your DevOps Implementation:

You want to design your underlying tools and processes along your Pathway to Production in a way that can scale across the enterprise. This requires balancing team ownership and collaboration with the organization’s need for checks and balances, standardization, and system-level visibility and control.

While you would likely still start ‘local’ and gradually roll out across different groups as you optimize, be sure to always think ‘global’. As you analyze and (re)design your Pathway to Production, you need to take a system-wide approach and always ask: how do I scale this? – across all teams, applications, releases, environments, and so on.

First, take some time to map your end-to-end Pathway to Production. In my experience, organizations often are not even aware of the entire path their code takes from check-in, through build, testing, and deployment across environments. Be sure to interview all the different teams and stakeholders until you have painstakingly detailed documentation of your cross-functional pipeline(s) – including all the tools, technologies, infrastructure, and processes involved.

Then, take a look at the bottlenecks – where do your pipelines choke? For example: waiting on VMs, waiting on builds, configuration drifts between environments, failed or flaky tests, bugs making it to Production, failed releases, errors or lags due to manual handoffs between teams or tools, etc.

As you redesign your pipelines to eliminate friction points, here are some things to consider on your journey to ‘harden’ your DevOps practices to support stability and scaling across the organization:

  1. How do I ensure security access controls and approval gates at critical points along the pipeline?
  2. How do I guarantee visibility and auditability – so we have real-time reporting of the state of each task along the pipeline, and a record of exactly who did what, where, and when?
  3. What security and compliance tests (or other tests) must all processes adhere to in order to move through the pipeline and into Production?
  4. How do I standardize as much as possible on toolchain, technology and processes to normalize my pipeline to allow reusability across teams/applications and save on cost?
  5. How do I still enable extensibility and flexibility to support different needs from various teams or variants of the application?
  6. Can my chosen DevOps solution orchestrate and automate the entire end-to-end pipeline?
  7. Can my implementation support Bi-modal IT – enabling traditional release practices and support for legacy apps, as well as more modern container/microservices architectures and CD pipelines?
  8. Can I support both simpler, linear, release pipelines, as well as complex releases that require coordination of many inter-dependent applications and components into many environments?
  9. Is my solution ‘future-ready’ and flexible enough to plug in any new technology stack, tool, or process as needs arise?
  10. As I scale, can my implementation support the velocity and throughput I’m expecting across the organization – which can include thousands of developers, thousands of releases, and millions of builds and test cases?
  11. Setting up one pipeline for one team/release is easy enough, but how do I onboard thousands of applications?

While optimizing your tools and technology to scale DevOps adoption is important, it is only half the battle. Above all, DevOps is a mindset, and cultural shifts take time. Remember that change doesn’t happen in a day, and that you’re in it for the long haul.

As a community, we started by asking why we should even bother with DevOps. After establishing momentum and proving the ROI of DevOps, the discussion is gradually evolving toward how to get DevOps right in large enterprises: what are some of the patterns for success, and how can we effectively scale so that the entire organization reaps the benefits?

The Linux Foundation Awards 14 Training and Certification Scholarships

Students and recent graduates, Linux beginners, longtime sysadmins, aspiring kernel developers, and passionate Linux users are all counted among the winners announced today who will receive a 2016 Linux Foundation Training (LiFT) scholarship.

The LiFT Scholarship Program gives free training courses to individuals who may not otherwise have access to these opportunities.  The recipients will also receive a Linux Foundation Certified System Administrator (LFCS) or Linux Foundation Certified Engineer (LFCE) exam.

This year, 14 LiFT scholarship recipients were chosen from more than 1,000 applicants, ranging in age from 13 to 66 and hailing from six continents.

The training provides recipients with the tools they need to advance their career or get started in one of the most lucrative jobs in IT. According to the 2016 Open Source Jobs Report, 65 percent of hiring managers say open source hiring will increase more than any other part of their business over the next six months, and 79 percent of hiring managers have increased incentives to hold on to their current open source professionals.

“I am currently seeking a full-time position as a Linux kernel developer, preferably in open source,” said Ksenija Stanojevic, 29, an engineer and former Outreachy intern from Serbia who is a LiFT scholarship recipient in the Kernel Guru category. “This scholarship will directly help me achieve my goals. Apart from giving more job opportunities it will allow me to work in a field that I love and am passionate about.”

Over the past six years, The Linux Foundation has awarded 48 scholarships worth more than $130,000 to current and aspiring IT professionals.

“Providing scholarships for advanced training helps those individuals who directly benefit from it to then contribute to existing open source projects and even start new ones, as well as pass their knowledge along to their communities,” said Linux Foundation Executive Director Jim Zemlin.  “We hope these scholarships serve as a catalyst for helping open source continue to grow and thrive.”

This year’s winners across seven categories include:

Academic Aces

Ahmed Alkabary, 23, Canada. A recent graduate of the University of Regina, where he earned degrees in computer science and mathematics.

Tetevi Placide Ekon, 24, Burkina Faso. A graduate student studying civil engineering at the 2iE Institute for Water and Environmental Engineering.

Developer Do Gooder

Luis Camacho Caballero, 42, Peru. A Linux user since 1998 who started a project to preserve endangered South American languages using Linux.

Kurt Kremitzki, 28, United States. Studying biological and agricultural engineering at Texas A&M and working with a university in Mexico to design irrigation systems for a Mayan community in the Yucatan.

Linux Kernel Guru

Alexander Popov, 28, Russia. A Linux kernel developer who has had 14 patches accepted into the mainline kernel to date.

Ksenija Stanojevic, 29, Serbia. An Outreachy intern who has worked on splitting the existing IIO driver into MFD with ADC and touchscreen parts and has contributed to the Year 2038 project.

Linux Newbies

Yasin Sekabira, 27, Uganda. A graduate of the computer science program at Makerere University.

Lorien Smyer, 52, United States. A former bookkeeper who decided she wanted to start a new career in computer science.

SysAdmin Super Star

Jacob Neyer, 20, United States. Deployed with the United States Air Force, where he administers Linux servers.

Sumilang Plucena, 33, Philippines. A systems analyst at the largest hospital in the Philippines, which runs Linux on all its servers.

Teens-in-Training

Sarah Burney, 13, United States. An eighth grader at her middle school in Maryland, who has already completed a data science course at Johns Hopkins, as well as several coding programs.

Florian Vamosi, 15, Hungary. A grammar school student who has been using Linux since age 10, who is working on a color recognition system to categorize stars in astronomical research.

Women in Linux

Shivani Bhardwaj, 22, India. A recent computer science graduate and Outreachy intern who has already had more than 75 patches accepted to the staging driver of the Linux kernel.

Farlonn Mutasa, 21, South Africa. Passed the CompTIA Linux+ certification exam, which opened the door to a sysadmin internship.

The Linux Foundation aims to increase diversity in technology and the open source community and support career development opportunities for the next generation, especially those who have traditionally been underrepresented in open source and technology.

Get more information on The Linux Foundation Community Giving Programs.

How Twitter Avoids the Microservice Version of “Works on My Machine”

Apache Mesos and Apache Aurora initially helped Twitter engineers to implement more sophisticated DevOps processes and streamline tooling, says software engineer David McLaughlin. But over time a whole new class of bespoke tooling emerged to manage deployment across multiple availability zones as the number of microservices grew.

“As the number of microservices grows and the dependency graph between them grows, the confidence level you achieve from unit tests and mocks alone rapidly decreases,” McLaughlin says in the interview below. “You end up in the microservice version of ‘works on my machine.’”

David McLaughlin, software engineer at Twitter, will speak at MesosCon Europe in Amsterdam, Aug. 31 – Sept. 1, 2016.

McLaughlin will talk this month at MesosCon Europe about these challenges, as well as the system Twitter built to support their CI/CD pipeline and close the gaps in deploy tooling.

Here, he describes application testing and deployment in a microservices architecture; how Twitter approaches it; and what he’s learned about DevOps in the process.

Linux.com: What is the challenge with orchestration in a microservices architecture?

David McLaughlin: One of the biggest challenges for service owners is trying to build the confidence that code changes are going to work in production. As the number of microservices grows and the dependency graph between them grows, the confidence level you achieve from unit tests and mocks alone rapidly decreases. You end up in the microservice version of “works on my machine.”

One way we’ve built up confidence is to build pipelines where services are deployed and tested end to end against real upstream and downstream services before going to production. At a given size, you also have the issue of having multiple availability zones and finding yourself having to repeat all these testing steps for each zone. If this process involves humans in any way, that becomes a lot of time and money being spent just deploying code. This is obviously not a good position to be in when you fully embrace microservices and start to have hundreds or even thousands of services being managed.

Linux.com: How did your team initially try to address the challenge?

David: Mesos and Aurora make it really easy for engineers at Twitter to deploy their service to multiple environments and clusters. Aurora comes with the ability to schedule a service to only run when capacity is available, and be evicted in favor of a production service during peak loads. This allows engineers to use more resources in pre-production environments without worrying about the cost to the company – they are simply taking advantage of the extra capacity that is required for things like disaster recovery or peak events.

However, orchestrating the deploy pipeline across each step was still left to users. This was done through complex CI job configurations, with bespoke deploy tooling, or even worse – completely manually.

Linux.com: How are you handling orchestration and tooling now?

David: We’ve built a tool to handle the release of a code change from development to testing to production across multiple clusters. It is built to support (if not encourage!) automation via an API, but also supports manual orchestration via a UI or even a hybrid approach where everything except the final production push can be automated. This allows users to adopt the tooling even if their current testing practices don’t provide the confidence to fully automate the whole process.

Linux.com: What did you learn about DevOps in the process?

David: I think the biggest lesson I’ve taken away from working in this space is that when it comes to DevOps, often the best user experience is having no experience at all. The vast majority of developers just want to build stuff and ship it. So the focus should be on enabling that, instead of putting more and more tools in between them and the satisfaction of having their work shipped.

Linux.com: Can you give one example of a more sophisticated DevOps process that resulted from your experimentation with tooling?

David: Performing a rollback used to mean retracing steps to find some known-good package version or gitsha, manually injecting it into your configuration, and then applying it to all your different production clusters. Now it’s as simple as clicking a button (or making an API call) on a previously successful deploy. It greatly reduces the stress of backing out from problems in production.

Join the Apache Mesos community at MesosCon Europe on Aug. 31 – Sept. 1, 2016! Look forward to 40+ talks from users, developers, and maintainers deploying Apache Mesos, including Netflix, Apple, Twitter, and others. Register now.


Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.

Why are Containers so Disruptive to the Data Centre?

Enterprise architecture is usually a mixture of technologies, platforms, stacks, licenses and maturities, all owned by different teams and hosted in different data centres and clouds.

This diversity is simply a sign of a long life (enterprises generally didn’t appear yesterday), but it can result in high operational costs and inefficient use of resources.

Utopian Visions are so 20th Century

Is this technical mix just debt to be fixed with a Utopian clean slate?

Or should we find a way to embrace the diversity?

What if we had a common, computer-regulated way to manage heterogeneous services and environments? 

Intel Unveils its Joule Chip Module for the Internet of Things

Joule is the latest product in Intel’s family of all-in-one chip modules for the Internet of Things.

Intel CEO Brian Krzanich showed off the new Joule module during a keynote speech at the Intel Developer Forum in San Francisco. The module is a follow-up to Edison, the prior IoT module introduced in 2014.

The Joule technology is a tiny computer on a small module. It can fit in a wide variety of products, from robotics to augmented reality glasses.

Read more at VentureBeat

Ansible as a Gateway to DevOps in the Cloud

I have a confession to make—although the word “cloud” is in my job title, there was a time when I used to think it was all buzzwords, hype, and vapor, with no substance. Eventually, Ansible became my gateway to the cloud. In this article, I’ll provide an introduction to DevOps with Ansible.

Before Ansible came along, I was a sysadmin, happily deploying bare-metal servers and virtual machines, with each new project requiring its own bespoke infrastructure. Sure, the deployment of the initial operating system was automated with Kickstart, but then came the slew of manual steps to get the servers ready for the application owners. It was a slow process, but I knew when I was done with it, I was handing over a finely tuned system that would run like a champ for years.
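
To make the contrast concrete, here is a hedged sketch of how Ansible replaces that slew of manual steps (the inventory file, group name, and playbook filename are assumptions, not taken from the article):

    # Ad-hoc command: ensure nginx is installed on every host in the "webservers" group
    ansible webservers -i inventory --become -m package -a "name=nginx state=present"

    # The same idea scales by describing the desired state in a playbook
    # (site.yml is an assumed filename) and running it repeatably
    ansible-playbook -i inventory site.yml

Because most Ansible modules are idempotent, rerunning the same command or playbook leaves an already-configured server untouched.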

Read more at OpenSource.com