
Real-World Build Tips for Yocto

Despite its widespread and growing adoption, the Yocto Project is one of the more misunderstood Linux technologies. It’s not a distribution but rather a collection of open source templates, tools, and methods for creating custom embedded Linux-based systems. Yocto Project contributor and Intel Embedded Software Engineer Stephano Cetola explained more about Yocto in his talk at the recent Embedded Linux Conference in Portland.

Although embedded hardware vendors often list “Yocto” along with Ubuntu, Fedora, and the like, one Yocto Project build is often markedly different from another. Embedded developers who once constructed their own DIY Linux stacks from scratch to provide stripped-down stacks optimized for power savings and specific technologies now typically use Yocto. In the process, they save countless hours of debugging and testing.

“Yocto is a line drawn in the sand — we’re picking certain versions of software, drawing a line, and testing them,” Cetola said. “Every day we test four different architectures and build on six different distros, checking for compatibility and performance. We can provide you with BeagleBone SDKs and images, and there’s a bug tracker too.”

Most of this was old news to the attendees at Cetola’s talk, “Real-World Yocto: Getting the Most out of Your Build System.” Yet, there’s a lot about Yocto that even experienced developers don’t know, said Cetola. His talk covered a variety of Yocto tips, with a focus on using the BitBake build engine.

Cetola highlighted some of his favorite best practices, utilities, scripts, and commands, including wic (OpenEmbedded Image Creator), the Shared State cache, and package feeds. (You can watch the entire presentation below.)

Layering up

The Yocto Project is built around the concept of layers. The concept is often difficult for developers to grasp, so many simply ignore it. “People tend to lump everything into one giant layer because it’s a relatively quick way to get your build started,” said Cetola. “But if you put all your distro information, hardware requirements, and software into a single layer, you’ll be kicking yourself later.”

The biggest drawback is the difficulty in updating hardware and software. “When your boss comes in and drops a web kiosk project on your desk and asks if this will work on your layer scheme, the only way you can answer yes is if you have separated out the layers.”

Having a separate distro layer, for example, makes it easier to support both a frame buffer and an X11 layer, said Cetola. “If your hardware uses different architectures, or you don’t want your different hardware mingling together, you can separate them into layers to help you distribute these internally,” he added. “In software, you may have Python living with C, and if they have nothing to do with each other, separating them means you can ship the manufacturer or QA team only what they need.”
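The separation Cetola describes shows up in the build directory’s layer list. A minimal sketch of a bblayers.conf along those lines might look like this (the paths and the meta-acme-* layer names are hypothetical, chosen only to illustrate the split):

```
# conf/bblayers.conf -- hypothetical example of separated concerns.
# meta-acme-distro: distro policy (e.g., X11 vs. frame buffer images)
# meta-acme-bsp:    hardware/BSP configuration per board
# meta-acme-apps:   in-house application recipes
BBLAYERS ?= " \
  /home/dev/poky/meta \
  /home/dev/poky/meta-poky \
  /home/dev/layers/meta-acme-distro \
  /home/dev/layers/meta-acme-bsp \
  /home/dev/layers/meta-acme-apps \
  "
```

With this split, handing the QA team only meta-acme-apps, or swapping meta-acme-bsp for a different board, is a one-line change rather than surgery on a monolithic layer.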

Wic’ing your way to bootable image formats

Yocto Project developers often struggle when integrating a vendor layer into a bootable image, said Cetola. “In the past it’s been hard to add multiple partitions or try to do a layered architecture where you’re layering read-only and read-write on top of each other.”

A new tool called wic (OpenEmbedded Image Creator) can help out. “Wic reads from a kickstart WKS file that allows you to generate custom partitions and media that you can burn to,” explained Cetola. “For example, your manufacturer may expect to get an SD card, but you may also want to boot from NAND or NOR. Wic lets you cleanly separate these concepts, and then reuse them.”
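A kickstart file is a short declarative description of the partition layout. A minimal sketch for an SD-card image with a FAT boot partition and an ext4 rootfs might look like the following (the filename, labels, sizes, and disk name are illustrative):

```
# sdcard.wks -- hypothetical wic kickstart file for an SD-card layout
part /boot --source bootimg-partition --ondisk mmcblk0 --fstype=vfat --label boot --align 4 --size 64M
part /     --source rootfs            --ondisk mmcblk0 --fstype=ext4 --label root --align 4
bootloader --timeout=0 --append="rootwait rw"
```

Because the layout lives in its own file, the same rootfs can be paired with a different .wks file for NAND or NOR boot without touching the image recipes.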

Wic lets you copy files into or remove files from a Wic image, as well as ls a directory inside an image for greater introspection. Wic also supports bmap-tools, “which is an order of magnitude faster than using dd,” said Cetola. “Bmap-tools realizes that you’re going to copy useless data, so it skips that by doing a sparse copy. Once you use bmap you’ll never use dd again.”
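The introspection and copy operations described above look roughly like this (the image and file names are hypothetical, and the commands assume a Yocto build environment with wic and bmap-tools available):

```shell
# List the contents of the first partition of a wic image
wic ls my-image.wic:1

# Copy a file into the image, or remove one from it
wic cp local.conf my-image.wic:1/etc/
wic rm my-image.wic:1/etc/unwanted.conf

# Write the image to an SD card with a sparse copy (far faster than dd)
bmaptool copy my-image.wic.bz2 /dev/sdX
```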

SSTATE is your friend

One of the blessings (in flexibility and code purity) and curses (drudgery) of Yocto development is that everything is built from scratch. That means it must be rebuilt from scratch over and over again. Fortunately, the platform offers a workaround in the form of the shared state cache (SSTATE), essentially a cache of the packaged output of unaltered Yocto recipes. BitBake checks SSTATE to see whether a recipe actually needs to be rebuilt, reusing the cached output when it doesn’t and thereby saving time.

“One of the biggest complaints about Yocto is that it takes a long time to build, and it does — Buildroot wins on speed every time,” said Cetola. “SSTATE cache, which is meant to be used circumstantially, can speed the process up, but a lot of people don’t take advantage of it.” Cetola said there were numerous examples in which building from scratch “isn’t ideal,” for example when a team is doing a build on underpowered laptops.

Cetola recommended using site.conf, which he described as “a configuration file that BitBake looks for when it starts up inside the conf directory.” He continued: “I have a script that starts my build directory and copies in my site.conf, which sets the download and SSTATE directories.”

Cetola also suggested that developers make greater use of the related SSTATE mirror technology, which he said is “handy for sharing with other machines.” These can also be managed by site.conf. “People think they won’t benefit from SSTATE mirrors, but it’s extremely useful.”
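Putting the two suggestions together, a site.conf along the following lines sets shared download and SSTATE directories and points BitBake at a team SSTATE mirror (the paths and the mirror URL are hypothetical):

```
# conf/site.conf -- hypothetical local settings copied into each new build dir
DL_DIR ?= "/srv/yocto/downloads"
SSTATE_DIR ?= "/srv/yocto/sstate-cache"

# Try prebuilt shared state from a team server before building from scratch
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/sstate/PATH;downloadfilename=PATH"
```

Because site.conf is picked up from the conf directory at startup, a small script that creates a build directory and copies this file in is enough to make every fresh build share the same downloads and cache.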

Package feeds

Another way to accelerate the build process is to use package feeds, which “can not only save an immense amount of time during your personal development, but also for anybody who needs to quickly install the software,” said Cetola. He described a scenario in which “you use dd to burn something to a card, and you load it on a board and it doesn’t work. So you change your software, boot it again, burn it, load it, and it’s missing a library.”

The missing library is “probably sitting in an rpm folder,” said Cetola. “Yocto can run a BitBake package index, which indexes the folder so rpm can look for it, so you essentially have a repo. By creating a package feed, and sharing that folder on a webserver, and running BitBake package index, you’ve saved yourself the trouble of pulling the SD card and copying something onto it. Instead you just say ‘rpm install’.”
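The workflow Cetola describes reduces to a few commands (hostnames, ports, and package names below are hypothetical, and the sketch assumes an RPM-based Yocto build):

```shell
# Build host: (re)generate the repo metadata for the deployed rpm folder
bitbake package-index

# Share tmp/deploy/rpm over HTTP (a throwaway server is enough during development)
cd tmp/deploy/rpm && python3 -m http.server 8000 &

# Target board: add a repo file pointing at the feed, then install normally
cat > /etc/yum.repos.d/oe-feed.repo <<'EOF'
[oe-feed]
name=OE package feed
baseurl=http://buildhost.example.com:8000/core2-64
gpgcheck=0
EOF
dnf install missing-library
```

The payoff is exactly the scenario from the talk: instead of pulling the SD card to copy a missing library onto it, you install the package over the network in seconds.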

Yocto grows introspective

Sometimes the problem is not a missing file, but rather an unexpected one. “Customers always say to me: ‘I just built my file system and booted my board, and there’s a file there — where did it come from and why is it there?’”

To figure out what the hell is going on — a process known as introspection — Cetola starts with oe-pkgdata-util to find the path. “It should output the name of the recipe that caused that file to populate on the board,” he explained. “You can point it at any file and it will do its best to introspect the file and figure it out.” If that’s not enough, he turns to git grep, as well as DNF, “which gives you a lot of introspection onto what’s on your board and why.”
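A typical introspection session along those lines might look like this (the library path is an example; the first two commands run in a build directory with the environment sourced, the last on the target board):

```shell
# Which recipe put this file on the image?
oe-pkgdata-util find-path /usr/lib/libexample.so.1

# Search recipe metadata for a string across your layers
git grep "libexample" -- "*.bb" "*.bbappend"

# On the running target, ask the package manager the same question
dnf provides /usr/lib/libexample.so.1
```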

Weird files show up from time to time because “when we do a build and run a file system a lot of stuff is done dynamically,” explained Cetola. In this case, the above tools aren’t likely to help. For example, “inside the guts of Yocto there are rootfs post process commands that can slip something on to the board. For that I use IRC. One of the things I love about the Yocto Project is that it’s very friendly. The IRC channel is a great place to ask questions. People respond.”

Other tools include the “recipetool” and its appendfile sub-command, which “will generate the recipe for you” if you need to change a file, said Cetola. Developers can also use a dependency tree, which can be generated within BitBake using the -g option. “Yocto 2.5 will have an oe-depends-dot tool, which will save you from having to look at that gigantic dependency tree by letting you introspect specific parts.”

Cetola is surprised that developers don’t make greater use of BitBake options. “Whenever I go out to lunch and am running a substantial build, I use the -k option, which keeps the build from stopping when it encounters an error.” Other options include the -e option, which outputs the BitBake environment, and -C command for invalidating a stamp (specific clean).
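In command form, the options mentioned above look like this (the recipe names are examples):

```shell
# Keep going past errors and report them all at the end
bitbake -k core-image-minimal

# Dump the BitBake environment, globally or for one recipe
bitbake -e core-image-minimal | less

# Invalidate the stamp for one specific task so just that task reruns
bitbake -C compile mypackage
```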

BitBake scripts

Cetola also gave a shout out to some of his favorite BitBake scripts, starting with devtool, which the Yocto Project wiki describes as a way “to ‘mix’ customization into” a Yocto image. “If you’re not a full-time kernel developer but you need to do some edits on the kernel, devtool can be a lifesaver,” said Cetola. “Using `devtool modify` and `devtool build` you can modify and build the kernel without building an SDK or rolling your own cross compilation environment. It’s also a handy tool for generating recipes. Once you’ve finished making a small edit to the kernel, `devtool finish` can make the patch for you.”
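The kernel-edit workflow he sketches comes down to three commands (virtual/kernel is the standard Yocto target for the kernel recipe; the destination layer path is an example):

```shell
# Check the kernel source out into a devtool workspace for editing
devtool modify virtual/kernel

# ...edit the sources under workspace/sources/ and commit the change, then:
devtool build virtual/kernel

# Turn the committed change into a patch in your own layer and clean up
devtool finish virtual/kernel ../meta-mylayer
```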

Another useful script is bitbake-layers, “which is great when you’re building layers or searching for them.” Cetola also recommended bitbake-dumpsig/diffsigs. “Say you changed one thing in your recipe and BitBake recompiled 25 things,” he said. “What happened was that the change caused different SSTATE hashes to invalidate. To find out why, you can use bitbake-dumpsig/diffsigs. Dumpsig will dump all the information (stored in the ‘stamps’ directory) into a format where you can see the things it is basing its hash on, and then use the diff tool to compare them to work out whether there was a dependency change.”
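Illustrative invocations of those scripts (the recipe names are examples, and the sigdata path is a placeholder for a file under the stamps directory):

```shell
# Show configured layers and search for recipes across them
bitbake-layers show-layers
bitbake-layers show-recipes "gstreamer*"

# Compare the last two signatures of a task to see why it re-ran
bitbake-diffsigs -t mypackage do_compile

# Dump everything a single signature file was hashed from
bitbake-dumpsig <path-to-sigdata-file>
```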

Cetola concluded with a call for community involvement. “If you’re brave enough to look at Bugzilla please do, but if that’s a bit much, just find a part of the system you’re interested in working on and send us an email — we’re always willing to take contributions and willing to help.”

Watch the entire presentation below:

Changing Healthcare with Blockchain Technology

Blockchain technology is widely heralded as a broadly disruptive force for the coming years. According to a Forbes story, blockchain is already revolutionizing contracts, payment processing, asset protection, and supply chain management. However, partly due to the industry’s emphasis on records, authentication, and people-centric processes, healthcare is predicted to be one of the fields that blockchain will truly transform.

That was the key message at an Open Source Leadership Summit keynote address titled “Blockchain Technology at Change Healthcare” by Aaron Symanski, CTO at Change Healthcare. In his talk, Symanski said that blockchain is already impacting the healthcare system.

Read more at The Linux Foundation

DevOps, Machine Learning Dominate Technology Opportunities This Year

Latest Stack Overflow survey of 100,000-plus developers finds the highest salaries and interest levels in DevOps methodologies and artificial intelligence.

This is a key takeaway from the latest survey of more than 100,000 developers worldwide, conducted by Stack Overflow. The survey finds DevOps specialists and engineering managers have the highest salaries in the field, averaging between $70,000 and $90,000 a year worldwide. (Within the United States, salaries for these two range between $110,000 annually for DevOps specialists and $137,000 for engineering managers.)

The survey also shows that DevOps specialists and developers who code for desktop and enterprise applications have the most experience, averaging eight years of professional coding experience.

Read more at ZDNet

Make your First Contribution to an Open Source Project

In this article, I’ll provide a checklist of beginner-friendly features and some tips to make your first open source contribution easy.

Understand the product

Before contributing to a project, you should understand how it works. To understand it, you need to try it for yourself. If you find the product interesting and useful, it is worth contributing to.

Too often, beginners try to contribute to a project without first using the software. They then get frustrated and give up. If you don’t use the software, you can’t understand how it works. If you don’t know how it works, how can you fix a bug or write a new feature?

Read more at OpenSource.com

How to Install Rancher Docker Container Manager on Ubuntu

If you’re looking to take your container management to the next level, the Rancher Docker Container Manager might be just what you need. Jack Wallen shows you how to get this up and running.

You’ve been working with containers for some time now—maybe you’re using docker commands to manage and deploy those containers. You’re not ready to migrate to Kubernetes (and Docker has been treating you well), but you’d like to make use of a handy web-based management tool to make your container life a bit easier. Where do you turn?

There are a number of options available, one of which is the Rancher Docker Container Manager. This particular tool should be of interest, especially considering it supports Kubernetes and can deploy and manage full stacks, so when you’re ready to make the jump, your tools are also ready.

But how do you get the Rancher Docker Container Manager (RDCM) up and running? The easiest way is (with a nod to irony) via Docker itself. I’m going to show you how to deploy a container for the RDCM quickly and easily. Once deployed, you can then log into the system, via web browser, and manage your containers.

Read more at TechRepublic

Anaconda, CPython, PyPy, and more: Know Your Python Distributions

When you choose Python for software development, you choose a large language ecosystem with a wealth of packages covering all manner of programming needs. But in addition to libraries for everything from GUI development to machine learning, you can also choose from a number of Python runtimes—and some of these runtimes may be better suited to the use case you have at hand than others.

Here is a brief tour of the most commonly used Python distributions, from the standard implementation (CPython) to versions optimized for speed (PyPy), for special use cases (Anaconda, ActivePython), or for runtimes originally designed for entirely different languages (Jython, IronPython).

Read more at InfoWorld

Microservices Explained

Microservices is not a new term. Like containers, the concept has been around for a while, but it’s become a buzzword recently as many companies embark on their cloud native journey. But what exactly does the term microservices mean? Who should care about it? In this article, we’ll take a deep dive into the microservices architecture.

Evolution of microservices

Patrick Chanezon, Chief Developer Advocate for Docker, provided a brief history lesson during our conversation: In the late 1990s, developers started to structure their applications into monoliths, where massive apps had all features and functionalities baked into them. Monoliths were easy to write and manage. Companies could have a team of developers who built their applications based on customer feedback through sales and marketing teams. The entire developer team would work together to build tightly glued pieces into a single app that ran on their own app servers. It was a popular way of writing and delivering web applications.

There is a flip side to the monolithic coin. Monoliths slow everything and everyone down. It’s not easy to update one service or feature of the application. The entire app needs to be updated and a new version released. It takes time. There is a direct impact on businesses. Organizations could not respond quickly to keep up with new trends and changing market dynamics. Additionally, scalability was challenging.

Around 2011, SOA (Service Oriented Architecture) became popular, in which developers could package multi-tier web applications as software services inside a VM (virtual machine). It did allow them to add or update services independently of each other. However, scalability still remained a problem.

“The scale out strategy then was to deploy multiple copies of the virtual machine behind a load balancer. The problems with this model are several. Your services cannot scale or be upgraded independently, as the VM is your lowest granularity for scale. VMs are bulky as they carry the extra weight of an operating system, so you need to be careful about simply deploying multiple copies of VMs for scaling,” said Madhura Maskasky, co-founder and VP of Product at Platform9.

Some five years ago when Docker hit the scene and containers became popular, SOA faded out in favor of “microservices” architecture.  “Containers and microservices fix a lot of these problems. Containers enable deployment of microservices that are focused and independent, as containers are lightweight. The Microservices paradigm, combined with a powerful framework with native support for the paradigm, enables easy deployment of independent services as one or more containers as well as easy scale out and upgrade of these,” said Maskasky.

What are microservices?

Basically, a microservice architecture is a way of structuring applications. With the rise of containers, people have started to break monoliths into microservices. “The idea is that you are building your application as a set of loosely coupled services that can be updated and scaled separately under the container infrastructure,” said Chanezon.

“Microservices seem to have evolved from the more strictly defined service-oriented architecture (SOA), which in turn can be seen as an expression of object-oriented programming concepts for networked applications. Some would call it just a rebranding of SOA, but the term ‘microservices’ often implies the use of even smaller functional components than SOA, RESTful APIs exchanging JSON, lighter-weight servers (often containerized), and modern web technologies and protocols,” said Troy Topnik, SUSE Senior Product Manager, Cloud Application Platform.

Microservices provide a way to scale the development and delivery of large, complex applications by breaking them down into individual components that can evolve independently of each other.

“Microservices architecture brings more flexibility through the independence of services, enabling organizations to become more agile in how they deliver new business capabilities or respond to changing market conditions. Microservices allows for using the ‘right tool for the right task’, meaning that apps can be developed and delivered by the technology that will be best for the task, rather than being locked into a single technology, runtime or framework,” said Christian Posta, senior principal application platform specialist, Red Hat.

Who consumes microservices?

“The main consumers of microservices architecture patterns are developers and application architects,” said Topnik. As far as admins and DevOps engineers are concerned, their role is to build and maintain the infrastructure and processes that support microservices.

“Developers have been building their applications traditionally using various design patterns for efficient scale out, high availability and lifecycle management of their applications. Microservices done along with the right orchestration framework help simplify their lives by providing a lot of these features out of the box. A well-designed application built using microservices will showcase its benefits to the customers by being easy to scale, upgrade, debug, but without exposing the end customer to complex details of the microservices architecture,” said Maskasky.

Who needs microservices?

Everyone. Microservices is the modern approach to writing and deploying applications more efficiently. If an organization cares about being able to write and deploy its services at a faster rate, it should care about microservices. If you want to stay ahead of your competitors, microservices is the fastest route. Security is another major benefit of the microservices architecture, as this approach allows developers to keep up with security and bug fixes without having to worry about downtime.

“Application developers have always known that they should build their applications in a modular and flexible way, but now that enough of them are actually doing this, those that don’t risk being left behind by their competitors,” said Topnik.

If you are building a new application, you should design it as microservices. You never have to hold up a release if one team is late. New functionalities are available when they’re ready, and the overall system never breaks.

“We see customers using this as an opportunity to also fix other problems around their application deployment — such as end-to-end security, better observability, deployment and upgrade issues,” said Maskasky.

Failing to do so means being stuck with a traditional stack, where microservices won’t be able to add any value. If you are building new applications, microservices is the way to go.

Learn more about cloud-native at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

DevOps Success: Why Continuous Is a Key Word

When implementing DevOps initiatives, the word “continuous” is the key to success. Most Agile schemes today incorporate concepts and strategies that can – and should – be implemented at all times throughout the SDLC. The most important to recognize throughout your team’s development cycle are Continuous Integration (CI), Continuous Testing (CT) and Continuous Delivery (CD). 

Often, I hear of dev teams wondering which “continuous” deployment model should be used – if at all – and when. Typically, familiar with CI and CD, they’ll pair those two off, while completely separating them from CT. While each serves different purposes, and addresses different aspects throughout the SDLC, all three can – and should – integrate to assure quality, while also maintaining velocity. 

Read more at Enterprisers Project

Making Cloud-Native Computing Universal and Sustainable

The original seed project for CNCF was Kubernetes, as orchestration is a critical piece of moving toward a cloud-native infrastructure. As many people know, Kubernetes is one of the highest-velocity open source projects of all time and is sometimes affectionately referred to as “Linux of the Cloud.” Kubernetes has become the de facto orchestration system, with more than 50 certified Kubernetes solutions and supported by the top cloud providers in the world. Furthermore, CNCF is the first open source foundation to count the top 10 cloud providers in the world as members.

However, CNCF is intended to be more than just a home for Kubernetes, as the cloud-native and open infrastructure movement encompasses more than just orchestration.

A community of open infrastructure projects

CNCF has a community of independently governed projects; as of today, there are 18 covering all parts of cloud native. For example, Prometheus integrates beautifully with Kubernetes but also brings modern monitoring practices to environments outside of cloud-native land.

Read more at OpenSource.com

This Week in Numbers: Chinese Adoption of Kubernetes

Chinese developers are, in general, less far along in their production deployment of containers and Kubernetes, according to our reading of data from a Mandarin-translated version of a Cloud Native Computing Foundation survey.

For example, 44 percent of the Mandarin respondents were using Kubernetes to manage containers while the figure jumped to 77 percent amongst the English sample. They are also much more likely to deploy containers to Alibaba Cloud and OpenStack cloud providers, compared to the English survey respondents. The Mandarin respondents were also twice as likely to cite reliability as a challenge. A full write-up of these findings can be found in the post “China vs. the World: A Kubernetes and Container Perspective.”

It is noteworthy that 46 percent of Mandarin-speaking respondents are challenged in choosing an orchestration solution, which is 20 percentage points more than the rest of the study. 

Read more at The New Stack