The Cloud Native Computing Foundation (CNCF), chiefly responsible for Kubernetes, and the recently established Linux Foundation Networking (LF Networking) group are collaborating on a new class of software tools called Cloud-native Network Functions (CNFs).
CNFs are the next generation of Virtual Network Functions (VNFs), designed specifically for private, public, and hybrid cloud environments and packaged inside Kubernetes-based application containers.
VNFs are primarily used by telecommunications providers; CNFs are aimed at telecommunications providers that have shifted to cloud architectures, and will be especially useful in the deployment of 5G networks.
In just over two months, the global Hyperledger community will gather in Basel, Switzerland, for the inaugural Hyperledger Global Forum.
With business and technical tracks filled with a diverse range of speakers – from Kiva to the Royal Bank of Canada and from Oracle to the Sovrin Foundation – there’s plenty of educational and engaging content for anyone looking to deepen their knowledge of enterprise blockchain. However, there are many other great reasons to make sure Hyperledger Global Forum is on your calendar for December 12-15, 2018. Here are five things that make this a must-attend event:
Applications
The fast-growing Hyperledger community is putting blockchain to work with PoCs and production deployments around the world. Hyperledger Global Forum is your chance to see live demos and roadmaps showing how the biggest names in financial services, healthcare, supply chain and more are integrating Hyperledger technologies for commercial, production deployments.
Docker has radically changed the way admins roll out their software in many companies. However, regular updates are still needed. How does this work most effectively with containers?
From an admin’s point of view, Docker containers have much to recommend them: They can be operated with reduced permissions and thus provide a barrier between a service and the underlying operating system. They are easy to handle and easy to replace. They save work when it comes to maintaining the system on which they run: Software vendors provide finished containers that are ready to run immediately. If you want to roll out new software, you download the corresponding container and start it – all done. Long installation and configuration steps with RPM, dpkg, Ansible, and other tools are no longer necessary. All in all, containers offer noticeable added value from the sys admin’s perspective.
…No matter what the reasons for updating applications running in Docker containers, as with normal systems, you need a working strategy for the update. Because running software in Docker differs fundamentally from operations without a container layer, the way updates are handled also differs.
Several options are available with containers, which I present in this article, but first, you must understand the structure of Docker containers, so you can make the right decisions when updating.
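The most common of these strategies is to replace the container rather than patch software inside it: pull a fresh image, discard the old container, and start a new one. The sketch below illustrates that workflow with a hypothetical `web` container running `nginx`; the image name, container name, and port are placeholders, and the `docker` invocations are shown as a dry run via `echo` so nothing is actually touched.

```shell
#!/bin/sh
# Hypothetical example: replace a running "web" container with a freshly
# pulled image instead of patching software inside it.
IMAGE="nginx:latest"   # assumed image name
NAME="web"             # assumed container name

# Shown as a dry run with 'echo'; remove 'echo' to actually execute.
echo docker pull "$IMAGE"                            # fetch the updated image
echo docker stop "$NAME"                             # stop the old container
echo docker rm "$NAME"                               # discard it (persistent data lives in volumes)
echo docker run -d --name "$NAME" -p 80:80 "$IMAGE"  # start a new container from the new image
```

The key assumption this strategy rests on is the one discussed below: containers are disposable, so anything worth keeping must live in a volume, not in the container's writable layer.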
Shareware software had a simple premise: You could try the application and if you liked it, you paid for it.
In those halcyon days, the PC software market was still gaining traction. Most programs were expensive—a single application often retailed for $495, in 1980s dollars. Often, they were complex and difficult to use. Then, Jim “Button” Knopf, creator of PC-File, a simple flat database, and Andrew Fluegelman, inventor of the program PC-Talk, a modem interface program, came up with the same idea: share their programs with other users for a voluntary, nominal payment. Knopf and Fluegelman supported each other’s efforts, and a new software marketing and sales model was born.
Joel Diamond, vice president of the Association of Software Professionals (originally called Association of Shareware Professionals), believes shareware should be recognized as the first version of e-commerce. You can also point to shareware as the ancestor of mobile app development, collaborative development (with philosophies of trust that led to open source), and community support.
As Linux adoption expands, it’s increasingly important for the kernel community to improve the security of the world’s most widely used technology. Security is vital not only for enterprise customers but also for consumers, as 80 percent of mobile devices are powered by Linux. In this article, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities.
There will be bugs
Greg Kroah-Hartman
As Linus Torvalds once said, most security holes are bugs, and bugs are part of the software development process. As long as the software is being written, there will be bugs.
“A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole,” said Kroah-Hartman.
There is not much the kernel community can do to eliminate bugs, but it can do more testing to find them. The kernel community now has its own security team that’s made up of kernel developers who know the core of the kernel.
“When we get a report, we involve the domain owner to fix the issue. In some cases it’s the same people, so we made them part of the security team to speed things up,” Kroah-Hartman said. But he also stressed that all parts of the kernel have to be aware of these security issues because the kernel is a trusted environment and they have to protect it.
“Once we fix things, we can put them in our stack analysis rules so that they are never reintroduced,” he said.
Besides fixing bugs, the community also continues to add hardening to the kernel. “We have realized that we need to have mitigations. We need hardening,” said Kroah-Hartman.
Huge efforts have been made by Kees Cook and others to take the hardening features that have traditionally lived outside the kernel and merge or adapt them for the kernel. With every kernel released, Cook provides a summary of all the new hardening features. But hardening the kernel is not enough; vendors have to enable the new features and take advantage of them. That’s not happening.
Kroah-Hartman releases a stable kernel every week, and companies pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman has observed that, aside from the Google Pixel, most Android phones don’t include the additional hardening features, meaning all those phones are vulnerable. “People need to enable this stuff,” he said.
“I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel,” he said. “I’m working through the whole supply chain trying to solve that problem because it’s a tough problem. There are many different groups involved — the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.”
The good news is that unlike with consumer electronics, the big vendors like Red Hat and SUSE keep the kernel updated even in the enterprise environment. Modern systems with containers, pods, and virtualization make this even easier. It’s effortless to update and reboot with no downtime. It is, in fact, easier to keep things secure than it used to be.
Meltdown and Spectre
No security discussion is complete without the mention of Meltdown and Spectre. The kernel community is still working on fixes as new flaws are discovered. However, Intel has changed its approach in light of these events.
“They are reworking on how they approach security bugs and how they work with the community because they know they did it wrong,” Kroah-Hartman said. “The kernel has fixes for almost all of the big Spectre issues, but there is going to be a long tail of minor things.”
The good news is that these Intel vulnerabilities proved that things are getting better for the kernel community. “We are doing more testing. With the latest round of security patches, we worked on our own for four months before releasing them to the world because we were embargoed. But once they hit the real world, it made us realize how much we rely on the infrastructure we have built over the years to do this kind of testing, which ensures that we don’t have bugs before they hit other people,” he said. “So things are certainly getting better.”
The increasing focus on security is also creating more job opportunities for talented people. Since security is an area that attracts attention, it’s a good place for those who want to build a career in kernel development to get started.
“If there are people who want a job to do this type of work, we have plenty of companies who would love to hire them. I know some people who have started off fixing bugs and then got hired,” Kroah-Hartman said.
You can hear more in the video below:
Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates.
LDAP is the Lightweight Directory Access Protocol, which allows for the querying and modification of an X.500-based directory service. LDAP is used over an IP network to manage and access a distributed directory service. The primary purpose of LDAP is to provide a set of records in a hierarchical structure. If you’re curious as to how LDAP fits in with Active Directory, think of it this way: Active Directory is a directory service database, and LDAP is one of the protocols used to communicate with it. LDAP can be used for user validation, as well as the adding, updating, and removing of objects within a directory.
I want to show you how to install OpenLDAP on the latest iteration of Ubuntu, and then how to populate an LDAP database with a first entry. All you will need for this is a running instance of Ubuntu 18.04 and a user account with sudo privileges.
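As a preview of what that first entry looks like, here is a minimal LDIF sketch for a hypothetical `dc=example,dc=com` base. The DN components, user attributes, and filename are all placeholders, and the final `ldapadd` step (left commented out) assumes `slapd` has already been installed and configured on the machine.

```shell
# Minimal LDIF for a first entry, assuming a hypothetical dc=example,dc=com base.
cat > base.ldif <<'EOF'
dn: ou=people,dc=example,dc=com
objectClass: organizationalUnit
ou: people

dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
uid: jdoe
cn: Jane Doe
sn: Doe
EOF

# With slapd running, load the entries as the directory admin (not run here):
# ldapadd -x -D cn=admin,dc=example,dc=com -W -f base.ldif
```

Note the hierarchy encoded in the DNs: the user `jdoe` sits under the `people` organizational unit, which in turn sits under the directory base — exactly the hierarchical structure described above.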
In the previous post about Containers and Cloud Security, I noted that most of the tenants of a Cloud Service Provider (CSP) could safely not worry about the Horizontal Attack Profile (HAP) and leave the CSP to manage the risk. However, there is a small category of jobs (mostly in the financial and allied industries) where the damage done by a Horizontal Breach of the container cannot be adequately compensated by contractual remedies. For these cases, a team at IBM Research has been looking at ways of reducing the HAP with a view to making containers more secure than hypervisors. For the impatient, the full open source release of the Nabla Containers technology is here and here, but for the more patient, let me explain what we did and why. We’ll have a follow-on post about the measurement methodology for the HAP and how we proved better containment than even hypervisor solutions.
The essence of the quest is a sandbox that emulates the interface between the runtime and the kernel (usually dubbed the syscall interface) with as little code as possible and a very narrow interface into the kernel itself.
The Basics: Looking for Better Containment
The HAP attack worry with standard containers is shown on the left: that a malicious application can breach the containment wall and attack an innocent application.
The Kubernetes project has been hurtling at breakneck speed towards the boring. As the popular open source container orchestration platform has matured, it’s been the boring features that have come front and center, many of which focus on stability and reliability. For the Kubernetes 1.12 release on Thursday, those working on the project and on the various special interest groups (SIGs) initially laid out over 60 proposed features. A little over half of those made it into the final release, with the rest pushed back or delayed, as usual.
Amongst the changes that made it into this release are such additions as the general availability of TLS bootstrapping, the ability to use the Kubernetes API to restore a volume from a volume snapshot data source, a newly promoted beta version of the KubeletPluginsWatcher, and some groundwork being put in place to solve scheduling challenges that confront large clusters.
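To make the snapshot-restore feature concrete, here is a sketch of what such a request looks like: a PersistentVolumeClaim that names a VolumeSnapshot as its `dataSource`. The object names and storage class are hypothetical placeholders, and the alpha API group shown (`snapshot.storage.k8s.io`) reflects the early state of this feature and may change in later releases.

```shell
# Write a hypothetical PVC manifest that restores a volume from a snapshot.
# Names ("restored-pvc", "my-snapshot", "standard") are placeholders.
cat > pvc-restore.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: standard
  dataSource:
    name: my-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# To submit it to a cluster (not run here):
# kubectl apply -f pvc-restore.yaml
```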
Stephen Augustus, specialist solution architect on the OpenShift Tiger Team at Red Hat and Kubernetes product management chair, said that the name of the game for Kubernetes these days is being boring and avoiding breaking changes.
To find files quickly in the deeply nested subdirectories of his home directory, Mike whips up a Go program to index file metadata in an SQLite database.
…the GitHub Codesearch [1] project, with its indexer built in Go, at least lets you browse locally available repositories, index them, and then search for code snippets in a flash. Its author, Russ Cox, then an intern at Google, explained later how the search works [2].
How about using a similar method to create an index of files below a start directory to perform quick queries such as: “Which files have recently been modified?” “Which are the biggest wasters of space?” Or “Which file names match the following pattern?”
Unix filesystems store metadata in inodes, which reside in flattened structures on disk that cause database-style queries to run at a snail’s pace. To take a look at a file’s metadata, run the stat command on it and take a look at the file size and timestamps, such as the time of the last modification (Figure 2).
Figure 2: Inode metadata of a file, here determined by stat, can be used to build an index.
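On the command line, the same inode metadata is easy to inspect. The sketch below creates a throwaway file under `/tmp` (a placeholder path) and reads its size and modification time with `stat`, using the GNU coreutils `--format` syntax:

```shell
# Create a throwaway file and read its inode metadata with stat.
printf 'hello' > /tmp/statdemo.txt

# %s = size in bytes, %Y = last modification as a Unix timestamp
# (GNU coreutils syntax; BSD stat uses -f with different specifiers)
stat --format 'size=%s mtime=%Y' /tmp/statdemo.txt
```

These are exactly the fields — size and timestamps — that the indexer stores per file, so that later queries never have to touch the inodes again.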
Newer filesystems like ZFS or Btrfs take a more database-like approach in the way they organize the files they contain but do not go far enough to be able to support meaningful queries from userspace.
Fast Forward Instead of Pause
For example, if you want to find all files over 100MB on the disk, you can do this with a find call like:
find / -type f -size +100M
If you are running the search on a traditional hard disk, take a coffee break. Even on a fast SSD, you need to prepare yourself for long search times in the minute range. The reason for this is that the data is scattered in a query-unfriendly way across the sectors of the disk.
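By contrast, once the metadata sits in an index, the same question becomes a cheap table scan over a few rows per file. Here is a minimal sketch using the `sqlite3` shell, assuming a hypothetical `files(path, size, mtime)` table of the kind the Go program populates (the paths and values are made up for illustration):

```shell
# Build a tiny hypothetical index and query it; in practice the Go
# program would populate the table by walking the directory tree.
DB=/tmp/fileindex.db
rm -f "$DB"

sqlite3 "$DB" "CREATE TABLE files (path TEXT, size INTEGER, mtime INTEGER);"
sqlite3 "$DB" "INSERT INTO files VALUES
  ('/home/mike/iso/big.iso', 700000000, 1538000000),
  ('/home/mike/notes.txt',   1024,      1538500000);"

# 'Which files are over 100MB?' -- answered from the index, not the disk
sqlite3 "$DB" "SELECT path FROM files WHERE size > 100*1024*1024;"
```

The price, of course, is that the index must be kept up to date — which is the problem the rest of the article sets out to solve.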
The Linux Foundation published results of an industry survey that revealed strong confidence among communication services providers (CSPs) in open source networking, and software-defined networking (SDN) in particular.
The foundation announced the results during the Open Networking Summit Europe in Amsterdam, reporting on a survey conducted by Heavy Reading that polled 150 CSP representatives from 98 companies around the globe, with help from sponsors Affirmed/Intel, Amdocs, CloudOps, Ericsson, Netgate and Red Hat.
“Top takeaways from the survey indicate an increasing maturity of open source technology use from operators, ongoing innovation in areas such as DevOps and CI/CD, and a glimpse into emerging technologies in areas such as cloud native and more,” The Linux Foundation said in a statement.