
Why the Fuss about Serverless?

To explain this, I’m going to have to recap on some old work with a particular focus on co-evolution.

Co-evolution

Let us take a hike back through time to the 80s/90s. Back in those days, computers were very much a product, and the applications we built used architectural practices that were based upon the characteristics of a product, in particular mean time to recovery (MTTR).

When a computer failed, we had to replace or fix it and this would take time. The MTTR was high and architectural practices had emerged to cope with this. 

Read more at HackerNoon

How To Enable Shell Script Debugging Mode in Linux

A script is simply a list of commands stored in a file. Instead of running a sequence of commands by typing them one by one on the terminal every time, a system user can store all of them in a file and repeatedly invoke the file to re-execute the commands.

While learning scripting or during the early stages of writing scripts, we normally start by writing small or short scripts with a few lines of commands. And we usually debug such scripts by doing nothing more than looking at their output and ensuring that they work as we intended.
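As a quick taste of what the full Tecmint article covers, here is a minimal sketch of enabling debugging (execution tracing); the script name debug_test.sh is just a placeholder.

```
# Run an existing script with tracing enabled, printing each command as it runs
bash -x debug_test.sh

# Or switch tracing on and off from within the script itself
set -x    # tracing on
echo "Hello, $USER"
set +x    # tracing off again
```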

Read complete article at Tecmint

How to Install an SSL Certificate on Linux Server

With security being the topmost priority in the e-commerce world, the importance of SSL Certificates has skyrocketed. Installing an SSL Certificate on an online portal has become a basic foundation of a company’s business structure.

But the question is ‘How to install an SSL Certificate on a server?’

Not everyone involved in e-commerce has a technical background. E-commerce is all about business, the owners are mostly businesspeople, and the core team of an e-commerce company is not always fully technical. In such a situation, it becomes very difficult for people with minimal technical knowledge to grasp even concepts as basic as SSL Certificates or their installation.

This article gives a sneak peek into the process of installing an SSL Certificate on a Linux server in layman’s terms, so that non-technical people can also get a grasp of what it is all about. Of course, every e-commerce company has a core technical team that can easily take over from here, but it is always good to have an overview of the process.

The installation of SSL Certificates on a Linux server is very easy. It can be done using the Plesk control panel or without it.

What is Plesk?

Plesk is a web hosting platform with a very simple configuration, which helps web hosting providers manage many virtual hosts easily on a single server. Since its inception, Plesk has become a preferred choice for many web hosting companies.

How to install an SSL certificate on a Linux Server that has Plesk?

1. First, log into the Plesk control panel.

2. Then, select ‘Domain’.

3. Choose the domain to be updated.

4. Click on the ‘Add New Certificate’ icon.

5. Enter the certificate name in the ‘Certificate Name’ box.

One would have the certificate and key files saved on the local computer. These certificate and key files are provided by the certificate authority and are important for the installation.

6. Find these files and open them in Notepad or a similar text editor from which one can copy the text.

7. Copy the entire text of the files.

8. Paste the text into the correct boxes. Reading through the content and the box names in Plesk will give one an idea of where each part goes.

9. Next, click on the ‘Send Text’ button.

10. Go to the ‘Hosting’ section on the domain screen.

11. Click ‘Set-up’ in this section. A drop-down list will appear.

12. Select the new certificate from the drop-down list.

13. Click ‘Ok’ to finish.

How to install an SSL certificate on a Linux server that does not have Plesk?

1. The first step is to upload the certificate and key files to the server. One can do this using S/FTP.

2. Log in to the server via SSH, so that the user can become the root user.

3. Enter the root password.

4. Move the certificate file to /etc/httpd/conf/ssl.crt.

5. Next, move the key file there as well.

It is important to ensure the security of the files that have been moved. One can keep them secure by restricting permissions; using ‘chmod 0400’ on the key file restricts access to it appropriately.

6. Next, go to /etc/httpd/conf.d/ssl.conf. Here the user will find the virtual host configuration set up for the domain.

7. Edit the virtual host configuration (see the sketch after this list).

8. Restart Apache.
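As a rough illustration of steps 4 through 8, the commands might look like the following. This is only a sketch: the file names and paths are examples, and the exact SSL directives depend on your Apache configuration.

```
# Move the certificate and key into place (file names are examples)
mv example_com.crt /etc/httpd/conf/ssl.crt/
mv example_com.key /etc/httpd/conf/ssl.crt/

# Restrict permissions on the private key
chmod 0400 /etc/httpd/conf/ssl.crt/example_com.key

# In /etc/httpd/conf.d/ssl.conf, point the virtual host at the files, for example:
#   SSLEngine on
#   SSLCertificateFile    /etc/httpd/conf/ssl.crt/example_com.crt
#   SSLCertificateKeyFile /etc/httpd/conf/ssl.crt/example_com.key

# Restart Apache so the changes take effect
apachectl restart
```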

The technicality of installing an SSL certificate may baffle many non-technical people, but once one gets the hang of it, it becomes easy.

Helm: The Kubernetes Package Manager

Back on October 15, 2016, Helm celebrated its first birthday. It was first demonstrated ahead of the inaugural KubeCon conference in San Francisco in 2015. What is Helm? Helm aims to be the default package manager for Kubernetes.

In Kubernetes, distributed applications are made of various resources: Deployments, Services, Ingress, Volumes, and so on (as discussed in parts one and two of this series). You can create all those resources in your Kubernetes cluster using the kubectl client, but there is a need for a way to package them as a single entity. Creating a Package allows for simple sharing between users, tuning using a templating scheme, as well as provenance tracking, among other things. All in all, Helm tries to simplify complex application deployment on Kubernetes coupled with sharing of applications’ manifests.

Helm was created by the folks at Deis and was donated to the Cloud Native Computing Foundation. Recently, Helm released version 2.0.0.

Helm is made of two components: A server called Tiller, which runs inside your Kubernetes cluster and a client called helm that runs on your local machine. A package is called a chart to keep with the maritime theme. Read the birthday retrospective from Matt Butcher to get the historical context of the naming.

With the Helm client, you can browse package repositories (containing published Charts) and deploy those Charts on your Kubernetes cluster. Helm will pull the Chart and, talking to Tiller, create a release (an instance of a Chart). The release will be made of various resources running in the Kubernetes cluster.

Structure of a Chart

A Chart is easy to demystify; it is an archive of a set of Kubernetes resource manifests that make up a distributed application. Check the GitHub repository, where the Kubernetes community is curating Charts. As an example, let’s have a closer look at the MariaDB chart. The structure is as follows:

```
.
├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── pvc.yaml
│   ├── secrets.yaml
│   └── svc.yaml
└── values.yaml
```

The Chart.yaml contains some metadata about the Chart, such as its name, version, keywords, and so on. The values.yaml file contains keys and values that are used to generate the release in your Cluster. These values are replaced in the resource manifests using Go templating syntax. And, finally, the templates directory contains the resource manifests that make up this MariaDB application.

If we dig a bit deeper into the manifests, we can see how the Go templating syntax is used. For example, the database passwords are stored in a Kubernetes secret, and the database configuration is stored in a Kubernetes configMap.

We see that a set of labels are defined in the Secret metadata using the Chart name, release name etc. The actual values of the passwords are read from the values.yaml file.

```
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
type: Opaque
data:
  mariadb-root-password: {{ default "" .Values.mariadbRootPassword | b64enc | quote }}
  mariadb-password: {{ default "" .Values.mariadbPassword | b64enc | quote }}
```

Similarly, the configMap manifest contains metadata that is computed on the fly when Tiller expands the templates and creates the release. In addition, you can see below that the database configuration can be set in the values.yaml file and, if present, is placed inside the configMap.

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "fullname" . }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  my.cnf: |-
{{- if .Values.config }}
{{ .Values.config | indent 4 }}
{{- end -}}
```

Bottom line: a Chart is an archive of a set of resource manifests that make up an application. The manifests can be templatized using the Go templating syntax. An instantiated Chart is called a release; it reads values from the values.yaml file and substitutes them into the template manifests.
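Once the helm client is installed (covered in the next section), you can preview how these templates expand without actually creating the release’s resources. A minimal sketch, assuming the MariaDB chart is checked out locally and you have a custom values file named myvalues.yaml:

```
# Simulate the install and print the rendered manifests
$ helm install --dry-run --debug ./mariadb -f myvalues.yaml
```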

Using Helm

As always you can build Helm from source, or grab a release from the GitHub page. I expect to see Linux packages for the stable release. OSX users will also be able to get it quickly using Brew.

```
$ brew cask install helm
```

With helm installed, you can deploy the server-side tiller in your cluster. Note that this will create a deployment in the kube-system namespace.

```
$ helm init
Now, Tiller (the helm server-side component) has been installed into your Kubernetes cluster.

$ kubectl get deployments --namespace=kube-system
NAMESPACE     NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   tiller-deploy   1         1         1            1           1m
```

The client will be able to communicate with the tiller Pod using port forwarding. Hence you will not see any service exposing tiller.

To deploy a Chart, you can add a repository and search for a keyword. Once Helm is officially released, the format of the repository index will be fixed, and the default repository will be fully tested and usable.

```
$ helm repo add testing http://storage.googleapis.com/kubernetes-charts-testing
$ helm repo list
NAME       URL
stable     http://storage.googleapis.com/kubernetes-charts
local      http://localhost:8879/charts
testing    http://storage.googleapis.com/kubernetes-charts...

$ helm search redis
WARNING: Deprecated index file format. Try 'helm repo update'
NAME                        VERSION    DESCRIPTION
testing/redis-cluster       0.0.5      Highly available Redis cluster with multiple se...
testing/redis-standalone    0.0.1      Standalone Redis Master
testing/example-todo        0.0.6      Example Todo application backed by Redis
```

To deploy a Chart, just use the install command:

```
$ helm install testing/redis-standalone
Fetched testing/redis-standalone to redis-standalone-0.0.1.tgz
amber-eel
Last Deployed: Fri Oct 21 12:24:01 2016
Namespace: default
Status: DEPLOYED

Resources:
==> v1/ReplicationController
NAME               DESIRED   CURRENT   READY     AGE
redis-standalone   1         1         0         1s

==> v1/Service
NAME      CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis     10.0.81.67   <none>        6379/TCP   0s
```

You will be able to list the release, delete it, and even upgrade it and roll it back (a sketch of those commands follows the listing below).

```
$ helm list
NAME         REVISION    UPDATED                     STATUS      CHART
amber-eel    1           Fri Oct 21 12:24:01 2016    DEPLOYED    redis-standalone-0.0.1
```
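As a hedged sketch of the remaining lifecycle commands, reusing the release name from the example above (the chart reference and revision number are purely illustrative):

```
# Upgrade the release to a newer version of the chart
$ helm upgrade amber-eel testing/redis-standalone

# Roll back to a previous revision
$ helm rollback amber-eel 1

# Delete the release
$ helm delete amber-eel
```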

Underneath, of course, Kubernetes will have created its regular resources. In this particular case, a replication controller, svc, and a pod were created:

```
$ kubectl get pods,rc,svc
NAME                               READY     STATUS    RESTARTS   AGE
po/redis-standalone-41eoj          1/1       Running   0          6m

NAME                  DESIRED   CURRENT   READY     AGE
rc/redis-standalone   1         1         1         6m

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
svc/redis        10.0.81.67   <none>        6379/TCP   6m
```

And that’s it for a quick walkthrough of Helm. Expect stable, curated Charts to be available once Helm is released. This will give you quick access to packaged distributed applications with simple deployment, upgrade, and rollback capability.

Read the other articles in this series:

Getting Started With Kubernetes Is Easy With Minikube

Rolling Updates and Rollbacks using Kubernetes Deployments

Federating Your Kubernetes Clusters — The New Road to Hybrid Clouds

Enjoy Kubernetes with Python

Want to learn more about Kubernetes? Check out the new, online, self-paced Kubernetes Fundamentals course from The Linux Foundation. Sign Up Now!

Sebastien Goasguen (@sebgoa) is a long time open source contributor. Member of the Apache Software Foundation, member of the Kubernetes organization, he is also the author of the O’Reilly Docker cookbook. He recently founded skippbox, which offers solutions, services and training for Kubernetes.

Math in V8 Is Broken; How Do We Fix It?

JavaScript has become increasingly popular, especially with the introduction of Node.js, which has enabled full-stack JavaScript development. As this 20-year-old language continues to rise in popularity, a group of individuals began to notice something: Math in V8 (a JavaScript engine) is broken.

In advance of Node.js Interactive, to be held Nov. 29 through Dec. 2 in Austin, we talked with Athan Reines, software engineer at Fourier, about the importance that JavaScript Math library has to the overall community; how they discovered underlying implementations were not accurate; and why a group of individuals are working to fix this.

Linux.com: What is the general importance of the built-in JavaScript Math library?

Athan Reines: The built-in JavaScript Math library is often a developer’s first introduction to math in JavaScript. Developers use it for generating unique ids, computing exponential back-off times, manipulating colors, animating, detecting collisions, rendering 3D graphics, simulating physics, calculating performance metrics, and much more. Almost all applications, server and browser alike, depend on the built-in Math library.

Linux.com: How did you figure out the underlying implementations were not accurate, performant, and/or correct? How does this affect users when they work with this library?

Athan Reines: We (co-authors and I) are currently developing libraries for numeric computing in JavaScript. As part of this work, we wrote libraries for higher order special functions, like gamma, beta, erf, and others. When testing our implementations against other environments (R, Python, Julia), we noticed that our results often deviated significantly from reference implementations. To our surprise, we discovered that much of the deviation was due to precision issues in the built-in JavaScript Math library. Compounding problems, we noticed that precision varied across JavaScript engines.

For many common application tasks, minor deviations in precision are unlikely to have a significant effect. However, for numeric computing and applications requiring high precision, poor precision does have a significant effect due to accumulated error. Especially since the built-in Math functions are frequently used by higher-order functions, small deviations compounded many times can lead to significant drift from the “true” value.

Additionally, the ECMAScript specification for many Math functions does not require either a specific algorithm or a minimum precision. Algorithms are at the discretion of those implementing the specification, which means that browser vendors frequently implement different algorithms. Some vendors choose speed over precision, while others choose precision over speed. Because of cross-browser variability, applications cannot guarantee consistent numerical results, thus affecting portability.

Linux.com: What are some efforts taking place to fix this issue and/or what improvements are being made?

Athan Reines: Over the past several years, various individuals on the ECMAScript technical committee (TC39) have raised the issue that the built-in Math library is underspecified. However, an updated specification does not seem likely anytime soon. While vendors seem open to a minimum precision requirement, they want to retain the ability to choose the underlying implementation.

Over the past two years, both the Firefox and Chrome teams have made efforts to standardize their Math implementations. Both teams chose FDLIBM for reference implementations. While these efforts are a step in the right direction, unless all vendors implement the same algorithm, individuals developing portable numeric compute applications must develop for the lowest common denominator; i.e., the vendor with the least precise and/or slowest algorithm.

Currently, community-driven solutions are spearheading most innovation with regard to Math in JavaScript. I am part of one such effort, stdlib. Our work provides robust, rigorous, and performant implementations for standard math functions. Each implementation is tested against multiple environments and measured against built-in equivalents. For many implementations, we have found that we can match and sometimes exceed the performance of built-in methods. Because our implementations are vendor independent, we avoid portability issues, ensuring consistent results across environments.

Linux.com: Given that JavaScript is more than 20 years old, why now? What has changed to fuel user demands for better Math libraries?

Athan Reines: The big change is Node.js. Node.js gave JavaScript a legitimacy beyond being a toy language people used in the browser to build web applications. As companies continue to realize the benefit of full-stack JavaScript development, demand continues to grow for JavaScript numeric computing libraries in order to obviate the need for polyglot architectures.

Furthermore, browser applications have moved beyond mostly static applications and games. Applications now are increasingly sophisticated and increasingly leverage a user’s own compute capacity. Part of the allure of numeric computing in JavaScript is to outsource what have traditionally been viewed as server-side computation tasks to client-side applications to decrease server load and increase application responsiveness.

Linux.com: Is there any way for people to get more involved in these efforts or to stay in the loop on what’s being done here?

Athan Reines: The easiest way to stay in the loop is to follow development of stdlib, as we continue to develop facilities for numeric computation. To contribute, feel free to reach out to either me or Philipp Burckhardt, and we will be happy to get you started. To keep up-to-date with TC39, follow TC39 meeting agendas and Rick Waldron’s corresponding notes.

Lastly, follow Chrome and Firefox issue threads for additional built-in JavaScript Math library development.

View the full schedule to learn more about this marquee event for Node.js developers, companies that rely on Node.js, and vendors. Or register now for Node.js Interactive.

Monitoring Network Load With nload: Part 2

In the previous article, I provided an introduction to nload, covering some background, installation, and basic usage. Here, I’ll build on that information and show some specific examples of using nload with various options.

Runtime Options

You can launch nload with a number of options, so let’s explore some of those now.

Previously, I discussed the effect of making changes to the “refresh interval” and “traffic averages” settings. The setting which you probably shouldn’t drop much lower than the default, for fear of losing accuracy, is the -t option, which affects the “refresh interval.” If you decide to do so, then you can move from the default 500 milliseconds to a quarter of a second (250 milliseconds) by launching nload with this option:

# nload -t 250 

When it comes to “traffic averages,” we can adjust the period used in the calculations by using the -a option as follows. Note that this value is in seconds, not milliseconds, and it defaults to five minutes (300s).

# nload -a 60

Consider another option now. Picture the scene: your Internet connection is via a gigabit network link but your ISP only allows you to use 100Mbit of that connection. Any network tool querying your network link will see a gigabit link speed as being available. Clearly, however, this is not of any use to you. The clever nload tool lets you configure the throughput ceiling, which you will monitor. As you continue to use the tool, do bear in mind that you’ve altered this setting just in case you see unusual spikes above that ceiling. Otherwise, it’s as simple as altering the setting like this:

# nload -i 100000

The “100000” value above is in kilobits-per-second (as scaling settings in nload generally are) and represents 100Mbit if my calculator is working properly. Note that the -i option is only for inbound (ingress) traffic, and the -o option is for outbound (egress) traffic.
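To cap both directions at once, using the same 100Mbit figure, you might combine the two options like this (the values are in kilobits-per-second, as above):

# nload -i 100000 -o 100000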

On that note, should you wish to alter the default unit of measurement for traffic (as I almost always do when moving between different network capacities), then you can launch nload in a variety of different ways. Here is an example of making nload use kilobits-per-second (kbps) units by using the -u option:

# nload -u k

In the case of nload and the above example, there’s actually no need to run that command, because that’s the default setting (“kbps” is a very good choice on all but very fast networks in my opinion).

Looking at Table 1, we can see the other available options for unit measurements.

Bits    Bytes    Throughput units of measurement
h       H        Human readable format (otherwise known as auto mode)
b       B        Bits per second or Bytes per second
k       K        Kilobits per second or Kilobytes per second (the default is “k” or “kbps”)
m       M        Megabits per second or Megabytes per second
g       G        Gigabits per second or Gigabytes per second

Table 1: Unit measurement traffic throughput options.

Let’s continue looking at another group of runtime options available to nload. In the same vein as our unit measurements for network throughput, we can also change how the amount of data transferred is presented to you.

In Table 2, we can see the possible upper- and lowercase options. Note that this time we use the uppercase -U option to set the data transfer measurement, and that the Bytes and Bits columns are in a different order because the default setting is megabytes (or “M”). This is almost the same as Table 1, but there’s no per-second measurement, as it essentially relates to file sizes.

Bytes    Bits    Data transfer units of measurement
H        h       Human readable format (auto mode)
B        b       Bytes of data or bits of data
K        k       Kilobytes of data transferred or kilobits
M        m       Megabytes of data or megabits of data
G        g       Gigabytes of data transferred or gigabits

Table 2: The available nload data transfer unit measurement options.

For clarity, here’s a quick example of altering the data transfer measurement:

# nload -U K

This option moves off the default of megabytes and displays the data transfer values in kilobytes instead.
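To tie the runtime options together, here is an example that combines several of them; the device names eth0 and eth1 are placeholders for your own interfaces:

# nload -t 250 -a 60 -u m -U G eth0 eth1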

Live Options

There are also a few commands which you can use while nload is running.

I mentioned having more than one device displayed on the console at once but you additionally have the ability to quickly move between devices. You can do this by simply pressing the Left and Right arrow keys (the cursor keys) on your keyboard. You won’t get lost because the number of windows available to you are paginated. How many pages can be accessed and which page you are currently on is dutifully displayed at the top of the window. Alternatively, you can achieve the same functionality by hitting the Enter key or the Tab key to cycle through the network interfaces visible to your machine.

As shown previously, the available options are displayed in a box at the top of the console. To toggle this Options Window on and off, you can simply hit the F2 key. To move around the Options Window, thankfully, there’s not much to learn; it’s very intuitive. Simply use the cursor keys on your keyboard to move around the box. Once you’re over the setting you want to adjust, simply use the plus and minus keys on your keyboard to increment and decrement the setting. Once you’re happy with your selections, just hit the F2 key again to hide the Options Window.

If you make a mistake and your display isn’t as you would like, then you can load up any saved settings (we’ll look at this in a moment) by using the F6 key. If you’ve hit the sweet spot with your config settings and want to overwrite your saved config file, then it’s as simple as hitting the F5 key.

I have to admit that I got the F5 and F6 keys the wrong way around at first (probably because I associate the F5 key with reloading a browser’s page). Just create a backup of your saved config to an unrelated filename if you’re worried that you’ll do this and lose configs.

If you want to quit nload, then you can either reach for the ever-present Ctrl+C key combination or simply hit the lowercase “q” key.

Saved Config

There are two main files that nload uses for saving its config. The system-wide configuration file is called /etc/nload.conf. We can affect all users by editing options within this file, as opposed to an individual user’s settings. To change options for an individual, it’s as simple as creating and editing a file in your home directory such as:

# pico -w /home/chrisbinnie/.nload

Follow the options that we’ve discussed and any in the system-wide config file to populate this file.

Troubleshooting

Fear not if you get stuck. In addition to running this command below, there are of course other routes to receiving assistance:

# nload --help

There’s a useful mailing list available where you can ask questions, and archives of previous mailing list discussions are also available. The usual netiquette applies. Be courteous and respectful, and don’t expect every list member to immediately jump to your rescue if you haven’t made any efforts yourself.

Summary

Watching the mighty nload in action can be a bit mesmerizing at times. And, running alongside other console windows, nload is a real lifesaver. During periods of heightened stress, it launches almost instantly, and you can usually discern the required information easily.

I hope it goes without saying that I recommend trying nload a few times before an outage or other stressful interlude. Having your boss lean over your shoulder and witness your lack of understanding of your tool during a problem is not ideal, I’m sure you’ll agree.

When you are awakened at 4am and can glean the pertinent information from nload with no effort at all, you will be glad you tried it out beforehand.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Fujitsu’s Open Source Journey — From Consumer to Contributor

Open source is a journey for many companies as they move through a process that seems a bit like magic when releasing an open source project. Recently at LinuxCon Europe, Wolfgang Ries, CMO at Fujitsu Enabling Software Technology GmbH, talked about the company’s open source journey and how they still think of themselves as apprentice contributors in the process of learning how to improve their open source contributions.

Ries began by talking about how open source is a bit like magic, referring back to Arthur C. Clarke’s quote, “Any sufficiently advanced technology is indistinguishable from magic.” He says, “If you look at all the social problems that open source can solve, it becomes even more obvious that open source is really magic.”

One example is the National Ebola Response System in Sierra Leone, which was built using open source software; in Sierra Leone, open source software is credited with restoring the “faith in the ability of government to be able to run its systems honestly.” He mentioned another example from DefCon at the DARPA Cyber Grand Challenge, which was designed to show computers fixing other computers. An open source project won the DARPA prize, and he suggested that “you could think of it as open source actually being able to fight Skynet in the future.” If all of this is a bit like magic, he suggests that “you have to be careful with how you deal with it” to avoid things going horribly wrong, like when the apprentice tries to do magic without the master being around in The Sorcerer’s Apprentice by Johann Wolfgang von Goethe.

Open Source Catalog Manager

About a year ago, Fujitsu created the Open Service Catalog Manager (OSCM), their first full software project contribution to the open source space. Ries describes this as a “winding road” where they moved through several different steps to ultimately release OSCM as an open source project. They started with “Consensus Ridge” to decide whether Fujitsu should even do this as an open source project, which was easily answered because so many of their customers and the industry are demanding open source solutions.

The next hurdle was the “Confidence River” where it took a bit of time to evaluate patent risk, security, and other architectural considerations to decide whether they had what it takes to create this open source project. The next phase on their journey was through the “Mechanics Maze” to figure out exactly what online materials, documentation and other resources were needed to make OSCM a success. “Governance Desert” was the next section of the winding path to select a license, contributor processes, and other governance considerations. The final stop on their journey is at “Community House” where Ries admits that they are at LinuxCon to “entice people to join” and build a community around OSCM.

To get more details and learn about contributing to OSCM, watch the entire keynote below.

LinuxCon Europe videos

Keynote: Fujitsu’s Open Source Journey – From Consumer to Apprentice Contributor

About a year ago, Fujitsu created the Open Service Catalog Manager (OSCM), their first full software project contribution to the open source space. Wolfgang Ries describes the journey to this milestone.

How Unikernels Can Better Defend against DDoS Attacks

In this episode of The New Stack Makers podcast, Idit Levine, chief technology officer of the cloud management division and office of the CTO at Dell EMC, discussed how unikernels are poised to offer all of the developer flexibility afforded to containers, while striving for better security and integration with many of today’s top container platforms.

At KubeCon earlier this month, Levine and the rest of the team behind Unik, the open source unikernel compilation and management platform, announced new features for Unik designed to bolster both unikernel adoption and community involvement with the project moving forward. These changes included Kubernetes integration, giving users the ability to run Unik side by side with Kubernetes, and support for the Google Cloud Platform after continued requests from the community.

Read more at The New Stack

Resolving Conflict

In a perfect world, we would all get along with our coworkers and bosses all the time. Unfortunately, we don’t live in a perfect world.

While most of us make our best efforts to avoid conflict at work, occasionally it is unavoidable. Here are some of my best tips on how to make all of your conflicts in the workplace healthy and (hopefully) productive, so you can move on and get back to what really matters.

1. Give up on the idea of “winning”

The best way to win an argument is to let go of the idea that you actually have something to “win.”

Winning, in this case, doesn’t mean getting your way or showing the opposition how they are wrong. Instead, it means being the person who helps everyone get on the same page so everyone can move forward.

Read more at ACM Queue