
How Architecture Evolves into Strategy

Technology systems are difficult to wrangle. Our systems grow in accidental complexity and complication over time. Sometimes we succumb to thinking that other people really hold the cards, that they pull puppet strings we don’t.

This is exacerbated by the fact that our field is young and growing and changing, and we’re still finding the roles we need to have to be successful. To do so, we borrow metaphors from roles in other industries. The term “data scientist” was first used in the late 1990s. In 2008 or so, when data scientist emerged as a job title, it was widely ridiculed as a nonjob: the thought that people who just worked with data could be scientists, or employ the rigors of their time-honored methods, was literally laughable in many circles. …

Likewise, the term “architect” didn’t enter popular usage to describe a role in the software field until the late 1990s. It, too, was ridiculed as an overblown, fancy-pants misappropriation from a “real” field. Part of the vulnerability here is that it hasn’t always been clear what the architect’s deliverables are. We often say “blueprints,” but that’s another metaphor borrowed from the original field, and of course we don’t make actual blueprints.

So, we will define the role of the architect in order to proceed from common ground. This is my tailored view of it; others will have different definitions. Before we do that, though, let’s cover some historical context that informs how we think of the role.

Read more at O’Reilly

New Security Flaw Impacts Most Linux and BSD Distros

The issue is only a privilege escalation flaw, but it impacts a large number of systems.

Linux and BSD variants that employ the popular X.Org Server package (almost all do) are affected by a new vulnerability disclosed on Thursday.

The vulnerability allows an attacker with limited access to a system, whether via a terminal or an SSH session, to elevate privileges and gain root access. It can’t be used to break into secure computers, but it is still useful to attackers because it can quickly turn a simple intrusion into a serious compromise.

Read more at ZDNet

Modernizing Applications for Kubernetes

Modern stateless applications are built and designed to run in software containers like Docker and to be managed by container clusters like Kubernetes. They are developed using Cloud Native and Twelve-Factor principles and patterns, to minimize manual intervention and maximize portability and redundancy. Migrating virtual machine- or bare metal-based applications into containers (known as “containerizing”) and deploying them inside of clusters often involves significant shifts in how these apps are built, packaged, and delivered.

Building on Architecting Applications for Kubernetes, in this conceptual guide, we’ll discuss high-level steps for modernizing your applications, with the end goal of running and managing them in a Kubernetes cluster. Although you can run stateful applications like databases on Kubernetes, this guide focuses on migrating and modernizing stateless applications, with persistent data offloaded to an external data store. 
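
As a minimal taste of the end state (a sketch only; the deployment name myapp, the image myrepo/myapp:v1, and the container port 8080 are placeholder assumptions, not from the guide), a containerized stateless app can be run, scaled, and exposed with kubectl:

# create a Deployment running the containerized app
kubectl create deployment myapp --image=myrepo/myapp:v1

# scale it out for redundancy
kubectl scale deployment myapp --replicas=3

# expose it inside the cluster as a Service listening on port 80
kubectl expose deployment myapp --port=80 --target-port=8080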

Read more at Dev.to

Linux tcpdump Command Tutorial for Beginners (8 Examples)

Every time you open a webpage on your computer, data packets are sent and received on your network interface. Sometimes, analyzing these packets becomes important for many reasons. Thankfully, Linux offers a command line utility that dumps information about these packets to its output.

In this article, we will discuss the basics of the tool in question – tcpdump. But before we do that, it’s worth mentioning that all examples here have been tested on an Ubuntu 18.04 LTS machine.

Linux tcpdump command

The tcpdump command in Linux lets you dump traffic on a network. Following is its syntax in short:

tcpdump [OPTIONS]

Here’s the detailed syntax:

tcpdump [ -AbdDefhHIJKlLnNOpqStuUvxX# ] [ -B buffer_size ]
               [ -c count ]
               [ -C file_size ] [ -G rotate_seconds ] [ -F file ]
               [ -i interface ] [ -j tstamp_type ] [ -m module ] [ -M secret ]
               [ --number ] [ -Q in|out|inout ]
               [ -r file ] [ -V file ] [ -s snaplen ] [ -T type ] [ -w file ]
               [ -W filecount ]
               [ -E spi@ipaddr algo:secret,...  ]
               [ -y datalinktype ] [ -z postrotate-command ] [ -Z user ]
               [ --time-stamp-precision=tstamp_precision ]
               [ --immediate-mode ] [ --version ]
               [ expression ]
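
As a quick first example (a sketch, assuming an interface named eth0; run tcpdump -D to see the interfaces available on your own machine):

# list the interfaces tcpdump can capture on
sudo tcpdump -D

# capture 5 packets on eth0, keeping addresses numeric (-n)
sudo tcpdump -i eth0 -c 5 -n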

Read more at HowtoForge

Understanding Linux Links: Part 2

In the first part of this series, we looked at hard links and soft links and discussed some of the various ways that linking can be useful. Linking may seem straightforward, but there are some non-obvious quirks you have to be aware of. That’s what we’ll be looking at here. Consider, for example, the way we created the link to libblah in the previous article. Notice how we linked from within the destination folder:

cd /usr/local/lib

ln -s /usr/lib/libblah

That will work. But this:


cd /usr/lib

ln -s libblah /usr/local/lib

That is, linking from within the original folder to the destination folder, will not work.

The reason is that all the link gets as a target is the bare name of the file (libblah), not the path to the file. The new link in /usr/local/lib therefore points to a file called libblah in its own directory, that is, to itself. The end result is a very broken link.
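
You can see this for yourself with readlink, which prints the raw target stored inside a symbolic link (a quick check reusing the libblah example; not from the original article):

cd /usr/local/lib

# prints just "libblah": the link stores a bare name, with no path
readlink libblah

# -e resolves the link and prints nothing, because the target does not exist
readlink -e libblah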

However, this:


cd /usr/lib

ln -s /usr/lib/libblah /usr/local/lib

will work. Then again, it would work regardless of where in the filesystem you executed the instruction. Using absolute paths, that is, spelling out the whole path from root (/) down to the file or directory itself, is simply best practice.

Another thing to note is that, as long as both /usr/lib and /usr/local/lib are on the same partition, making a hard link like this:


cd /usr/lib

ln libblah /usr/local/lib

will also work, because hard links do not store a path at all: they point directly to the file’s data on the partition.

Where hard links will not work is if you want to link across partitions. Say you have fileA on partition A and the partition is mounted at /path/to/partitionA/directory. If you want to link fileA to /path/to/partitionB/directory that is on partition B, this will not work:


ln /path/to/partitionA/directory/fileA /path/to/partitionB/directory

As we saw previously, hard links are entries in a partition’s file table that point to data on the *same partition*. You can’t have an entry in the table of one partition pointing to data on another partition. Your only choice here would be to use a soft link:


ln -s /path/to/partitionA/directory/fileA /path/to/partitionB/directory

Another thing that soft links can do and hard links cannot is link to whole directories:


ln -s /path/to/some/directory /path/to/some/other/directory

will create a link to /path/to/some/directory within /path/to/some/other/directory without a hitch.

Trying to do the same by hard linking will get you an error saying that you are not allowed to do that. The reason is unending recursion: if you have directory B inside directory A, and you then link A inside B, you have a problem, because A contains B, which contains A, which contains B, and so on ad infinitum.

You can create that kind of recursion using soft links, but why would you do that to yourself?

Should I use a hard or a soft link?

In general, you can use soft links everywhere and for everything. In fact, there are situations in which you can only use soft links. That said, hard links are slightly more efficient: they take up less space on disk and are faster to access. On most machines you will not notice the difference, though: the difference in space and speed will be negligible given today’s massive and speedy hard disks. However, if you are using Linux on an embedded system with limited storage and a low-powered processor, hard links may be worth some consideration.

Another reason to use hard links is that a hard link is much more difficult to break. If you have a soft link and you accidentally move or delete the file it is pointing to, your soft link will be broken and point to… nothing. There is no danger of this happening with a hard link, since the hard link points directly to the data on the disk. Indeed, the space on the disk will not be flagged as free until the last hard link pointing to it is erased from the file system.
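
Here is a quick demonstration of that resilience (a throwaway sketch using files in /tmp; not from the article):

cd /tmp
echo "some data" > original

# create a hard link: both names now point to the same data on disk
ln original hardcopy

# prints 2: two directory entries now point to the data
stat -c %h original

# remove the original name; the data survives under the other name
rm original
cat hardcopy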

Soft links, on the other hand, can do more than hard links: they can point to anything, be it file or directory, and they can point to items on different partitions. These two things alone often make them the only choice.

Next Time

Now that we have covered files and directories and the basic tools for manipulating them, you are ready to move on to the tools that let you explore the directory hierarchy, find data within files, and examine their contents. That’s what we’ll be dealing with in the next installment. See you then!

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

The State of Hyperledger with Brian Behlendorf

Brian Behlendorf has been heading the Hyperledger project from the early days. We sat down with him at Open Source Summit to get an update on the Hyperledger project.

Hyperledger has grown in a way that mirrors the growth of the blockchain industry. “When we started, all the excitement was around bitcoin,” said Brian Behlendorf, Executive Director of Hyperledger. Initially, it was more about moving money around. But the industry started to go beyond that, to see whether blockchain “could be used as a way to reestablish how trust works on the Internet and try to decentralize a lot of things that today have ended up being centralized.”

As the industry evolved around blockchain, so did Hyperledger. “We realized pretty early that we needed to be a home for a lot of different ways to build a blockchain. It wasn’t going to be like the Linux kernel project with one singular architecture,” said Behlendorf.

Read more at The Linux Foundation

Internationalizing the Kernel

At a time when many companies are rushing to internationalize their products and services to appeal to the broadest possible market, the Linux kernel is actively resisting that trend, although it already has taken over the broadest possible market: the infrastructure of the entire world.

David Howells recently created some sample code for a new kernel library, with some complex English-language error messages that were generated from several sources within the code. Pavel Machek objected that it would be difficult to automate any sort of translations for those messages, and that it would be preferable simply to output an error code and let something in userspace interpret the error at its leisure and translate it if needed.

Ordinarily, I might expect Pavel to be on the winning side of this debate, with Linus Torvalds or some other top developer insisting that support for internationalization was necessary in order to give the best and most useful possible experience to all users. However, Linus had a very different take on the situation:

Read more at Linux Journal

Cloud Native Computing Grows by 200 Percent

Over the last few years, the ways to move applications from your data center to the cloud were lift-and-shift, refactoring, or migrating to containers. The latter has gotten a kick in the pants as cloud-native techniques such as serverless computing and microservices have joined forces with containers.

Still unsure what I’m talking about? Chris Aniszczyk, executive director of the Open Container Initiative (OCI) and the Cloud Native Computing Foundation (CNCF), explained: “Cloud-native computing uses an open-source software stack to deploy applications as microservices, [each part packaged] into its own container, and dynamically orchestrate those containers to optimize resource utilization.”  

You can find proof this methodology is taking off in the latest CNCF survey. This survey of primarily enterprise or DevOps professionals found that “production usage of CNCF projects has grown more than 200 percent on average since December 2017, and evaluation has jumped 372 percent.”

Read more at DXC 

Secure Apache with Let’s Encrypt on Debian 9

Let’s Encrypt is a certificate authority created by the Internet Security Research Group (ISRG). It provides free SSL certificates via a fully automated process designed to eliminate manual certificate creation, validation, installation, and renewal.

Certificates issued by Let’s Encrypt are valid for 90 days from the issue date and are trusted by all major browsers today.

This tutorial will guide you through the process of obtaining a free Let’s Encrypt SSL certificate using the certbot tool on Debian 9. We’ll also show how to configure Apache to use the new SSL certificate and enable HTTP/2.

Ensure that you have met the following prerequisites before continuing with this tutorial:
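
Once those prerequisites are in place, the heart of the process boils down to installing certbot and letting it configure Apache for you. A rough sketch (example.com is a placeholder domain, and the full tutorial walks through the exact Debian 9 steps, which may differ):

# install certbot together with its Apache plugin
sudo apt install python-certbot-apache

# obtain a certificate for your domain and let certbot reconfigure Apache
sudo certbot --apache -d example.com -d www.example.com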

Read more at Linuxize

Linux: Find Out Which Port Number a Process is Listening on

As Linux users, we sometimes need to know which port number a particular process is listening on. All ports are associated with a process ID or service in an OS. So how do we find that port? This article presents three different methods for finding out which port number a process is listening on.

We have run the commands and procedures described in this article on an Ubuntu 18.04 LTS system.

Method 1: Using the netstat command

Netstat, the network statistics utility, is used to view information related to network connections, including interface statistics, routing tables, and much more. The utility is available on most Linux systems, so let us make use of it to see which ports certain processes are using on the system.

To use the netstat command, you need to install the net-tools package, if it is not already installed on your system, with the following command:

$ sudo apt install net-tools
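
Once net-tools is installed, a typical invocation lists all listening TCP and UDP sockets along with the PID and name of the owning process (sshd below is just an example name to filter on):

# -t TCP, -u UDP, -l listening sockets only, -p owning process, -n numeric addresses
$ sudo netstat -tulpn

# narrow the output down to one process, e.g. sshd
$ sudo netstat -tulpn | grep sshd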

Read more at Vitux