
6 DevOps Trends to Watch in 2018

Communicating with key subject matter experts in the DevOps space plays an important role in helping us understand where the industry is headed. To gain insight into trends for 2018, we caught up with six DevOps experts and asked them:

What’s the number-one trend you see for log analysis and monitoring in 2018?

Here’s what our panel of influencers had to say.


1. Joe Beda

“By far the biggest trend that I see is the application itself getting more involved in exporting structured metrics and logs. These logs are built so that they can be filtered and aggregated. Being able to say ‘show me all the logs impacting customer X’ is a huge, powerful step forward. Doing this in a way that crosses multiple microservices is even more powerful.”
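In practice, that usually means emitting each log line as a JSON object so that fields can be filtered mechanically. Here’s a toy illustration using jq; the field names are invented:

echo '{"level":"info","customer":"X","msg":"checkout ok"}' >> app.log

echo '{"level":"error","customer":"Y","msg":"payment failed"}' >> app.log

jq -c 'select(.customer == "X")' app.log   # "show me all the logs impacting customer X"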

Read more at Loggly

Viperr Linux Keeps Crunchbang Alive with a Fedora Flair

Do you remember Crunchbang Linux? Crunchbang (often referred to as #!) was a fan-favorite, Debian-based distribution that focused on using a bare minimum of resources. This was accomplished by discarding the standard desktop environment and using a modified version of the Openbox Window Manager. For some, Crunchbang was a lightweight Linux dream come true. It was lightning fast, easy to use, and hearkened back to the Linux of old.

However, back in 2015, Philip Newborough made this announcement:

For anyone who has been involved with Linux for the past ten years or so, I’m sure they’ll agree that things have moved on. Whilst some things have stayed exactly the same, others have changed beyond all recognition. It’s called progress, and for the most part, progress is a good thing. That said, when progress happens, some things get left behind, and for me, CrunchBang is something that I need to leave behind. I’m leaving it behind because I honestly believe that it no longer holds any value, and whilst I could hold on to it for sentimental reasons, I don’t believe that would be in the best interest of its users, who would benefit from using vanilla Debian.

Almost immediately, developers began their own efforts to keep Crunchbang alive. One such effort is Viperr. Viperr is a Fedora respin that follows in the footsteps of its inspiration by using the Openbox window manager. By merging some of the qualities that made Crunchbang popular with the Fedora distribution, Viperr creates a unique Linux distribution that feels very much old school, with a bit of new-school technology under the hood.

The one thing to keep in mind is that Viperr development is incredibly slow. At the moment, the most recent stable release is Viperr 9, based on Fedora 24. I read in the forums that, as of 2017, work had started on Viperr 10, but it’s still in alpha. So using Viperr can be a bit of a mixed bag. After installing, I ran an update to find the running kernel at 4.7.5. That’s a pretty old kernel (relatively speaking). Even so, Viperr is a worthwhile distribution that might appeal to users looking for a lightweight Linux akin to Crunchbang.

Let’s install Viperr and see what gives this distribution its bite.

Installation

We’ve reached the point in Linux where walking through an installation is almost pointless—the installs are that easy. That being said, if you’ve installed Fedora or CentOS, you’ve installed Viperr. The Anaconda Installer makes installing any distribution incredibly simple. It’s all point and click, with a minimum of user interaction and steps. The only difference with Viperr is the post-Anaconda installation. Once you’ve completed the installation and rebooted the system, you’ll be greeted with a terminal window, in which a post-install script is run (Figure 1).

Figure 1: The post-install script in action.

That script will first prompt you for your user password (created during the installation). Once you’ve authenticated, it will ask you a number of questions regarding software to be installed. During the run of the script, you can have LibreOffice installed (Figure 2), as well as other applications.

Figure 2: Installing LibreOffice by way of the post-install script.

You will also be asked if you want to include the free and non-free RPM Fusion repos. These repositories are filled with software that Fedora or Red Hat doesn’t want to ship (such as Audacity, MPlayer, Streamripper, MythTV, GStreamer, Bombono-DVD, Xtables, Pianobar, LiVES, Telegram-Desktop, Ndiswrapper, VLC, some games, and more). It’s not a huge number of titles, but there are some items many Linux users consider must-haves.
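If you decline during the script, you should be able to enable the repos later with RPM Fusion’s standard setup command (this is the generic Fedora procedure; I haven’t verified it on Viperr specifically):

sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm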

Once the script completes its run, you can close out the terminal and start using Viperr.

Usage

As you probably expect, using Viperr is incredibly simple. The combination of the Openbox window manager and Conky, which gives a real-time read-out of system resources (Figure 3), is certainly a throwback to old-school Linux that many users will appreciate.

Figure 3: The default Viperr desktop.

Click on the Viperr start button to gain access to all of the installed applications. Open an application and use it. That start menu, however, isn’t the only route to starting applications. If you right-click anywhere on the desktop, you gain access to the same menu (Figure 4).

Figure 4: The Viperr right-click desktop menu.

I’ve always been a big fan of this type of menu system, as it makes interacting with that main menu incredibly efficient.

If you want to bring Viperr even further into the new world order, you can open up a terminal window and install Flatpak with the command sudo yum install flatpak (or sudo dnf install flatpak). Once you’ve installed Flatpak, you’ll find even more software can be installed, via Flathub.
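If Flathub isn’t already configured as a remote, the standard Flathub setup does the trick (the gedit application below is just an example):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

flatpak install flathub org.gnome.gedit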

Updates needed

Obviously, the one glaring problem is that Viperr is way out of date. However, you could go through the process of doing a distribution upgrade, via the dnf command. To do this, you would first have to install the DNF plugin with the command:

sudo dnf install dnf-plugin-system-upgrade

Once that command completes, you can upgrade from a base of Fedora 24 to 25 with the command:

sudo dnf system-upgrade download --releasever=25

When that command completes, reboot with the command:

sudo dnf system-upgrade reboot

The above command does take some time to complete (I had 2339 packages to upgrade), but it will eventually land you back on your Viperr desktop. I successfully completed that upgrade (which upgraded the kernel to 4.13), but I didn’t continue with the process to upgrade from 25 to 26 and then 26 to 27. Theoretically, it could work.
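In theory, continuing is just a matter of repeating those same steps with the next release number (untested on Viperr, so consider this a sketch):

sudo dnf system-upgrade download --releasever=26

sudo dnf system-upgrade reboot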

Outside of that, you’d be hard-pressed to find anything really wrong with a lightweight distribution like Viperr. It’s a fast, reliable throwback to a distribution so many users quickly fell in love with. With Crunchbang long gone, for those longing to return to the days of a more basic version of the operating system, Viperr fits that bill to a tee.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

This Week in Open Source News: Hyperledger Bug Bounty Program, The Linux Foundation’s Networking Harmonization Initiative & More

This week in Linux and open source news, Arpit Joshipura sheds light on the networking harmonization initiative, Hyperledger opens the doors of its bug bounty program to the public & more! Read on to stay abreast of the latest open source news. 

1) Arpit Joshipura, General Manager of Networking at The Linux Foundation, speaks about the Harmonization 1.0 initiative

Linux Foundation Seeks to Harmonize Open Source and Standards Development – TelecomTV

2) “The open-source blockchain project is now asking the public to help in the quest to squash bugs impacting the platform.”

Hyperledger Bug Bounty Program Goes Public – ZDNet

3) Nextcloud “has announced it will be supplying the German federal government with a private, on-premises cloud platform as part of a three-year contract.”

German Government Goes Open Source With Cloud Firm Nextcloud – TechRadar Pro

4) “For the first time, Microsoft has released its own Linux kernel in a new Linux-based product: Azure Sphere.”

Microsoft Releases Its First Linux Product – ZDNet

5) “Open source is being heavily adopted in China and many companies are now trying to figure out how to best contribute to these kind of projects. Joining a foundation is an obvious first step.”

Cloud Foundry Foundation Looks East as Alibaba Joins As a Gold Member – TechCrunch

CRI: The Second Boom of Container Runtimes

Harry (Lei) Zhang, together with the CTO of HyperHQ, Xu Wang, will present “CRI: The Second Boom of Container Runtimes” at KubeCon + CloudNativeCon EU 2018, May 2-4 in Copenhagen, Denmark. The presentation will clarify more about CRI, container runtimes, KataContainers, and where they are going. Please join them if you are interested in learning more.

When was the first “boom” of container runtimes in your mind?

Harry (Lei) Zhang: At the end of 2013, one of my former colleagues at an extremely successful cloud computing company introduced me to a small project on Github, and claimed: “this thing’s gonna kill us.”

We all know the rest of the story. Docker started a revolution in containers, which soon swept the whole world with its slogan of “Build, Ship, Run.” The brilliant idea of the Docker image reconstructed the base of cloud computing by changing the way software was delivered and how developers consumed the cloud. The following years of 2014-2015 were dominated by the word “container”: it was hot and fancy, and it swept through all the companies in the industry, just like AI today.

What happened to container runtimes after that?

Zhang: After this period of prosperity, container runtimes, of course including Docker, gradually became a secondary issue. Instead, people were more eager to quarrel over container scheduling and orchestration, which eventually brought Kubernetes to the center of the container world. This is not surprising: although the container itself is highly creative, once it is separated from the upper-level orchestration framework, the significance of this innovation is greatly reduced. This also explains why CNCF, which is led by platform players like Google and Red Hat, eventually became the winner of the dispute: container runtimes are “boring.”

So you are claiming a new prosperity comes for container runtimes now?

Zhang: Yes. In 2017, the Kubernetes community started pushing container runtimes back to the forefront with projects like cri-o, containerd, Kata, and frakti.

In fact, this second boom is no longer driven by technological competition, but by the birth of a generic Container Runtime Interface (CRI), which gave developers the freedom to build container runtimes for Kubernetes however they wanted, using whatever technology. This is further evidence that Kubernetes is winning the infrastructure software industry.

Could you please explain more about CRI?

Zhang: The creation of CRI can be dated back to a pretty old issue in the Kubernetes repo which, thanks to Brendan Burns and Dawn Chen, brought out the idea of a “client-server mode container runtime.” The reason we began to discuss this approach in sig-node was mainly because, although Docker was the most successful container runtime at that time (and even today), we could see it gradually evolving into a much more complicated platform project, which brought uncertainty to Kubernetes itself. At the same time, new candidates like CoreOS rkt and Hyper runV (a hypervisor-based container runtime) had been introduced into Kubernetes as PoC runtimes, bringing extra maintenance effort to sig-node. We needed to find a way to balance user options and feasibility in Kubernetes container runtimes, and to save users from any potential vendor lock-in at this layer.

What does CRI look like from a developer’s view?

Zhang: The core idea of CRI is simple: can we summarize a limited group of APIs which Kubernetes can rely on to talk to containers and images, regardless of what container runtime it is using?

Sig-node eventually defined around 20 APIs in protobuf format based on the existing operations in kubelet (the component that talks to the container runtime in Kubernetes). If you are familiar with Docker, you can see that we indeed extracted the most frequently used APIs out of its CLI, and we also defined the concept of a “sandbox” to match the Pod in Kubernetes, which is a group of tightly coupled user containers. But the key is that once you have this interface, you now have the freedom to choose how to implement this “sandbox,” either with namespaces (Docker) or with a hypervisor (Kata). Soon after the CRI spec was ready, we worked together to deliver the first CRI implementation, named “dockershim,” for Docker, and then “frakti” for hypervisor runtimes (at that time, runV).
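If you want to poke at these APIs yourself, the community’s crictl tool (from the cri-tools project) speaks CRI to whatever shim you point it at via --runtime-endpoint; the JSON config files named here are placeholders you would write yourself:

crictl runp pod-config.json   # create the Pod sandbox

crictl create <pod-sandbox-id> container-config.json pod-config.json   # create a container inside it

crictl start <container-id>

crictl ps   # list running containers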

How does CRI work with Kubernetes?


Zhang: The core implementation of CRI in kubelet is a GenericRuntime, which hides CRI from the rest of kubelet, so from the Kubernetes side it does not need to know about CRI or any specific container runtime. All the container and image operations are called through GenericRuntime, just as they are with Docker or rkt.

The container functionalities, like networking and storage, are decoupled from the container runtime by standard interfaces like CNI (Container Network Interface) and CSI (Container Storage Interface). Thus, the implementation of CRI (e.g., dockershim) is able to call standard CNI interfaces to allocate the network for the target container without knowing any details of the underlying network plugins. The allocation result is returned to kubelet after the CRI call.

What’s more, the latest design of hardware accelerators in Kubernetes, like GPU and FPGA, also relies on CRI to allocate devices to the corresponding containers. The core idea of this design is included in another extension system named Device Plugin. It is responsible for generating device information and dependency directories per device allocation request and then returning them to kubelet. Again, kubelet will inject this information into the CRI create-container call. That’s also why the latest Kubernetes implementations do not rely on Docker, or any other specific container runtime, to manage GPUs and the like.

What are CRI shims? I’ve been hearing this a lot recently.

Zhang: Actually, CRI implementations are called CRI shims within the Kubernetes community. Developers are free to decide how to implement those CRI interfaces, and this has triggered another innovation storm at the container runtime level, which had been almost forgotten by the community in recent years.

The most straightforward idea is: can I just implement a shim for runC, which is the building block of the existing Docker project? Of course, yes. Not long after CRI was released, maintainers of the runC project proposed a design that lets Kubernetes users run workloads on runC without installing Docker at all. That project is cri-o.

Unlike Docker, cri-o is much simpler and only focuses on the container and image management, as well as serving CRI requests. Specifically, it “implements CRI using OCI conformant runtimes” (runC for example), so the scope of cri-o is always tied to the scope of the CRI. Besides a very basic CLI, cri-o will not expect users to use it the same way as Docker.

Besides these Linux operating system level containers (mostly based on cgroups and namespaces), we can also implement CRI by using hardware virtualization to achieve higher level security and isolation. These efforts rely on the newly created project KataContainers and corresponding CRI shim frakti.

The CRI has been so successful that many other container projects and legacy container runtimes are beginning to provide implementations for it. Alibaba’s internal container, Pouch, for example, is a container runtime that has been used inside Alibaba for years, battle-tested at unbelievable scale, such as serving 1.48 billion transactions in 24 hours during the Single’s Day (11/11) sale.

What’s the next direction of CRI and container runtimes in Kubernetes?

Zhang: I believe the second boom of container runtimes is still continuing, but this time, it is being led by the Kubernetes community.

At the end of 2017, Intel Clear Containers and Hyper runV announced they would merge into one new project, KataContainers, under the governance of the OpenStack Foundation and OCI. Those two teams are well known for leading the effort to push hypervisor-based container runtimes to Kubernetes since 2015, and they finally joined forces with the help of CRI.

This will not be the only story in this area. The maintainers of container runtimes have noticed that expressing their capabilities through Kubernetes, rather than competing at the container runtime level, is an effective way to promote the success of these projects. This has already been proven by cri-o, which is now a core component in Red Hat’s container portfolio and is known for its simplicity and better performance. Not to mention Windows Container support via CRI, which has already been promoted to a high-priority task in sig-node.

But there’s also a concern. As container runtime innovation continues, users now have to face the same problem again: which container runtime should I choose?

Luckily, this problem has already been partially solved by Kubernetes itself, as it’s the Kubernetes API that users need to care about, not the container runtimes. But even for people who do not rely on Kubernetes, there is still a choice to be made. “No default” should be the “official” attitude of the Kubernetes maintainers. One good thing is that we can see the maintainers of runtimes trying their best to eliminate unnecessary burden for users: runV has merged with Clear Containers, and frakti will eventually be refactored to be containerd-based. After all, as a user of Kubernetes, I expect a future with only a few container runtimes, with essential differences, to choose from; if that’s the case, my life will be much easier.

Harry (Lei) Zhang

Harry (Lei) Zhang, Engineer at HyperHQ and Microsoft MVP in cloud computing, is a feature maintainer of the Kubernetes project. He mainly works on scheduling, CRI, and hypervisor-based container runtimes, i.e., KataContainers, focusing on kube-scheduler, kubelet, and secure container runtimes in Kubernetes upstream as well as the Hypernetes project. An active community advocate, he has spoken at LinuxCon, KubeCon, OpenStack Summit, and other events, and published the book “Docker and Kubernetes Under The Hood,” a best seller in the container cloud area in China.

Learn more at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

Learn to use GitHub with GitHub Learning Lab

Want to join the 27 million open-source programmers who develop on GitHub? Here’s how to get your start.

The most popular open-source development site in the world is GitHub. It’s used by tens of millions of developers to work on over 80 million projects.

It’s not just a site where people use Linus Torvalds’ Git open-source distributed version control system. It’s also an online home for collaboration, a sandbox for testing, a launchpad for deployment, and a platform for learning new skills. The GitHub Training Team has now released an app, GitHub Learning Lab, so you can join the programming party.

GitHub Learning Lab is not a tutorial or webcast. It’s an app that gives you a hands-on learning experience within GitHub. 

Read more at ZDNet

Managing OPA

OPA is a general-purpose policy engine that lets you offload decisions from your service. To do so, OPA needs to have access to policies and data that it can use to make decisions.

Prior to v0.8, OPA only exposed low-level HTTP APIs that let you push policy and data into the engine. With v0.8, we’re excited to provide new management features in OPA which make it easier to distribute policies (and data) as well as monitor the health of your agents.

Bundle API

To simplify distribution of policy and data, you can now configure OPA to download “bundles” from remote HTTP endpoints. Bundles are simply gzipped tarballs containing Rego and JSON files. When you configure the Bundle feature, OPA will periodically call out to the remote HTTP endpoint and GET the named bundle.
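Here’s a rough end-to-end sketch; the service URL and file names are made up, and the configuration keys follow OPA’s documented YAML format (which may vary by version):

tar czf bundle.tar.gz example.rego data.json   # a bundle is just a gzipped tarball of Rego and JSON

cat > config.yaml <<'EOF'
services:
  - name: example
    url: https://example.com/bundles
bundle:
  name: bundle.tar.gz
  service: example
EOF

opa run --server --config-file config.yaml   # OPA periodically GETs the bundle from the service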

Read more at Medium

Meet Gloo, the ‘Function Gateway’ That Unifies Legacy APIs, Microservices, and Serverless

Most enterprises still have monolithic applications, but many are exploring the use of microservices. The monoliths are accessible via APIs and monitored by the traditional application performance management (APM) tools, with deep dives provided by Splunk and other log investigation tools. With microservices — usually run on platforms such as Kubernetes or Cloud Foundry — monitoring is usually done through tools such as Prometheus (scalable monitoring) and OpenTracing (distributed tracing). Typically, the microservices monitoring tools and the traditional ones do not play well together, necessitating two sets of tools that must be maintained for monitoring.

Adding to this architectural complexity is that many organizations are also exploring the realm of serverless, which is mostly cloud-driven at this point through services like AWS Lambda or Google Cloud Functions. These, too, have their own sets of monitoring tools, such as AWS X-Ray.

Solo’s approach is to treat any of these distinct styles of computing as a single entity, that of the function, so they then can be monitored and managed with a single system. To realize this vision, the company built a function gateway, which is like an API gateway but for functions.

Read more at The New Stack

Azure Sphere Makes Microsoft an Arm Linux Player for IoT

The punchline: Microsoft just unveiled a mostly open source, embedded Arm SoC design with a custom Linux kernel.

The correct response?

1. Ha! Ha! Ha! Ha! You’re killing me!

2. Good one, dude, but April 1st was weeks ago.

3. Hallelujah! Linux and open source have finally beaten the evil empire. Can Apple be next?

4. We’re doomed! After Redmond gets its greedy hands on it, Linux will never be the same.

5. Smart strategic move — let’s see if they can manage not to screw it up like they did with Windows RT.

Microsoft’s Azure Sphere announcement was surprising on many levels. This crossover Cortex-A/Cortex-M SoC architecture for IoT offers silicon-level security, as well as an Azure Sphere OS based on a secure custom Linux kernel. There’s also a turnkey cloud service for secure device-to-device and device-to-cloud communication.

Azure Sphere is notable for being Microsoft’s first major Arm-based hardware since its failed Windows RT-based Surface tablets. It’s also one of its biggest hardware plays since the Xbox, which contributed some of its silicon security technology to Azure Sphere.

Azure Sphere is not only Microsoft’s first Linux-based product, but also one of its most open source. Precise details await the release of the first Azure Sphere products later this year, but Microsoft stated it is offering “royalty-free” licensing of its “silicon security technologies” to silicon partners. These include MediaTek, NXP, Nordic, Qualcomm, Silicon Labs, ST Micro, Toshiba, and Arm, which collaborated with Microsoft on the technology. Microsoft is not likely to build its own SoCs, but it has set itself up as an IP intermediary between Arm and the SoC vendors.

Considering how tightly the Azure Sphere architecture is intertwined with the silicon and OS security, the media has interpreted Microsoft’s licensing verbiage as indicating an essentially open source design. Because the technology is based on Arm IP, it’s not as open source as RISC-V technology, but it would likely be more open than most processors.

“Microsoft is putting Azure Sphere up against Amazon FreeRTOS, so I assume it will be pretty permissive open source licensing,” said Roy Murdock, an analyst at VDC Research Group’s IoT & Embedded Technology unit. “Microsoft has finally realized it doesn’t make sense to alienate potential embedded engineers. It realizes it can get more from licensing Azure cloud services than from OS revenues. It’s a smart move.”

Under Satya Nadella’s leadership, Microsoft has further experimented with open source technologies while offering a friendlier face toward the Linux community, especially in regard to Azure. Microsoft is a regular contributor to the Linux kernel and a member of the Linux Foundation. The bad old days of Steve Ballmer deriding Linux while warning about its threat to the tech industry seem long gone. Still, these have all been baby steps compared to Azure Sphere.

Azure Sphere is not an MCU

Despite all the surprises, Azure Sphere is not quite as revolutionary as Microsoft suggests. It’s billed as a major new crossover microcontroller platform, but it’s really more like an application processor than an MCU.

“It’s not accurate to call it an MCU just because it has Cortex-M cores,” noted VDC’s Murdock. “It’s more like an SoC. But if you’re competing with Amazon FreeRTOS, it’s smart marketing.”

Based on the specs listed for the first Azure Sphere SoC — the MediaTek MT3620 — which is due to ship in products by the end of the year, this is a relatively normal Cortex-A7 based SoC with dual Cortex-M4 MCUs, backed up by exceptional end-to-end security. NXP has been making similar hybrid Cortex-A/Cortex-M SoCs for years, including its Cortex-A7 based i.MX7 and -A53-based i.MX8. Others such as Renesas and Marvell have also paired the low-power, Linux-oriented Cortex-A7 with Cortex-M MCUs on various SoCs.

Microsoft hints that other SoC vendors may choose different combinations of Cortex-A and -M chips. One interesting choice for IoT is a single-core implementation of Cortex-A53, such as used by NXP’s LS1012A SoC. Other possibilities may be found in the low-power Cortex-A35.

Security blanket

What makes Azure Sphere potentially attractive to chipmakers beyond the royalty-free licensing and Microsoft’s robust market presence is the multi-layered security, which is desperately needed at the vulnerable IoT edge. In addition to providing a 500MHz Cortex-A7 core and dual Cortex-M4F MCUs for real-time processing, the flagship MT3620 SoC has a third Cortex-M4F core that handles secure boot and system operation within an isolated subsystem. There’s also a separate Andes N9 RISC core that supports an isolated WiFi subsystem.

The Linux-based Azure Sphere OS features a Microsoft Pluton Security Subsystem that works closely with the hardware security subsystem. It “creates a hardware root of trust, stores private keys, and executes complex cryptographic operations,” says Microsoft. Underlying the kernel is a security monitor layer, and at the top is a container layer for application-level security.

The third major security component lies in the cloud. The Azure Sphere Security Service is a cloud-based turnkey platform that brokers trust for device-to-device and device-to-cloud links via certificate-based authentication. The service detects “emerging security threats across the entire Azure Sphere ecosystem through online failure reporting, and renewing security through software updates,” says Microsoft.

Microsoft would love you to connect Azure Sphere Security Service with Azure Cloud and its Azure IoT Suite. To its credit, however, it is also supporting other major cloud services like Amazon AWS and Google Cloud. In this way, it may be more open than Amazon’s AWS IoT ecosystem with the related, Linux-oriented AWS Greengrass platform for edge devices, which also offers end-to-end security. Amazon FreeRTOS, which was announced in December along with a major investment in the open source FreeRTOS project, expands upon FreeRTOS with libraries that add AWS and AWS Greengrass support for secure cloud-based or local processing and connectivity.

VDC’s Murdock speculates that most Azure Sphere customers will stick with Azure. “We can definitely expect tight integration between Azure Sphere and Azure IoT Suite,” he said. “Microsoft will offer developers a one-click option to turn their data telemetry over to Azure and get security updates. You will be able to connect to other cloud platforms, but it will be complicated. Microsoft is relying on security as a hook, which is smart.”

Microsoft’s goal is not only to push more customers to Azure, but also to harvest the vast amount of information available from millions of edge devices. “Azure Sphere will let Microsoft look at more interesting data and do predictive maintenance,” said Murdock.

Unlike AWS and most other IoT ecosystems that trumpet end-to-end security, Azure Sphere has the benefit of embedding the security at the chip level in addition to the OS and cloud. Of course, this is also a limitation, because you need a compliant chip to benefit from the security umbrella. This may be one reason Samsung’s Artik platform, which in October was expanded with more security-enhanced Secure Artik models, has yet to set the world on fire.

Indeed, Artik may be the closest analogue to Azure Sphere in that security is baked into a variety of Artik modules and their dedicated Arm chips, and the same security framework also extends to the Artik Cloud. Samsung doesn’t use hybrid SoCs, but it offers a variety of Linux-ready Cortex-A modules and Cortex-M based MCU modules that are intended to work together.

Why not Windows Embedded or IoT Core?

Shortly before the Azure Sphere announcement, VDC Research released an insightful brief called A Call to Revisit Windows Embedded. The report recommended reinvigorating, opening up, and perhaps fully open sourcing the neglected but still widely used Windows Embedded platform. In this way, it could both establish a foothold in IoT and compete with Amazon FreeRTOS, which VDC sees as a potentially huge play in the MCU world.

Microsoft has instead focused on Windows 10 IoT Core, which competes with Linux on higher powered Arm SoCs and Intel Atom processors. Yet even this minimalist Windows variant is not able to squeeze onto low-end IoT node devices with limited memory and power where Linux and Windows Embedded are still viable.

Presumably, Microsoft decided it would take too much time and effort to update Windows Embedded, especially when IoT developers would prefer to work with Linux anyway. Microsoft can still make money by selling Windows Embedded to legacy customers while advancing into the future with Linux.

Another approach would have been to mimic Amazon and fully embrace the RTOS and MCU world below that level. Like FreeRTOS, a new breed of open source RTOSes, such as Arm Mbed and the Intel-backed Zephyr, is offering more Linux-like features for wirelessly connected Cortex-M and -R SoCs. Yet perhaps Microsoft envisioned that as endpoint IoT devices offer more Internet connectivity, multimedia, and AI processing, low-end Cortex-A cores will be increasingly essential. That road leads to Linux.

Despite Microsoft’s embrace of Linux, Microsoft Chief Legal Officer Brad Smith couldn’t resist a backhanded compliment during the announcement. He chose to use the example of a toy from among the many potential targets for Azure Sphere, ranging from industrial gear to consumer appliances to smart city infrastructure.

“Of course, we are a Windows company, but what we’ve recognized is the best solution for a computer of this size in a toy is not a full-blown version of Windows,” said Smith at the Azure Sphere announcement, as quoted by Redmond. “It is what we are creating here. It is a custom Linux kernel, complemented by the kinds of advances that we have created in Windows itself.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Docker Enterprise Edition 2.0 Launches With Secured Kubernetes

After months of development effort, Kubernetes is now fully supported in the stable release of the Docker Enterprise Edition.

Docker Inc. officially announced Docker EE 2.0 on April 17, adding features that have been in development in the Docker Community Edition (CE) as well as enhanced enterprise grade capabilities. Docker first announced its intention to support Kubernetes in October 2017. With Docker EE 2.0, Docker is providing a secured configuration of Kubernetes for container orchestration.

“Docker EE 2.0 brings the promise of choice,” Docker Chief Operating Officer Scott Johnston told eWEEK. “We have been investing heavily in security in the last few years, and you’ll see that in our Kubernetes integration as well.”

Docker EE 2 provides support for Docker’s own Swarm container orchestration system as well. Among the key security features in Swarm is the concept of mutually authenticated TLS (Transport Layer Security),…

Read more at eWeek

A Quick Look at the Git Object Store

Let’s talk about some of the internals of git and how it stores and tracks objects within the .git directory.

If you’re unaware of what the .git directory is, it’s simply a space that git uses to store your repository’s data; the directory is created when you run git init. Information such as binary objects, plain text files for commits and commit data, remote server information, and information about branch locations is stored within.

The key concept throughout this entire article is very simple: pretty much every operation you do in git creates objects with a bunch of metadata, which point to some more objects with a bunch of metadata, and so on and so forth. That’s pretty much it.
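You can watch this happen with git’s plumbing commands (the blob content here is arbitrary):

git init demo && cd demo

sha=$(echo 'hello' | git hash-object -w --stdin)   # write a blob object and capture its SHA-1

find .git/objects -type f   # the new blob now lives under .git/objects

git cat-file -t "$sha"   # prints the object type: blob

git cat-file -p "$sha"   # prints the content: hello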

Read more at Dev.to