
Probe Your Linux Sockets With ss

We all know and love netstat (network statistics), because it is a wonderful tool for viewing detailed network connection information. An interesting alternative is ss, socket statistics. ss is part of the iproute2 suite of network tools.

ss displays statistics about your network sockets, which includes TCP, UDP, RAW, and UNIX domain sockets. Let us briefly review what these are.

Transmission Control Protocol (TCP) is a fundamental networking protocol. It is part of the Internet protocol suite and operates in the transport layer. All networking transmissions are broken up into packets. TCP guarantees that all packets arrive, in order, and without errors. This requires a lot of back-and-forth communication, as this joke illustrates:

“Hi, I’d like to hear a TCP joke.”
“Hello, would you like to hear a TCP joke?”
“Yes, I’d like to hear a TCP joke.”
“OK, I’ll tell you a TCP joke.”
“Ok, I will hear a TCP joke.”
“Are you ready to hear a TCP joke?”
“Yes, I am ready to hear a TCP joke.”
“Ok, I am about to send the TCP joke. It will last 10 seconds, it has two characters, it does not have a setting, it ends with a punchline.”
“Ok, I am ready to get your TCP joke that will last 10 seconds, has two characters, does not have an explicit setting, and ends with a punchline.”
“I’m sorry, your connection has timed out. Hello, would you like to hear a TCP joke?”
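You can watch this kind of chatter from the ss side. While a page is loading, run something like the following (a quick sketch; watch and ss are both standard tools) and you'll see connections cycle through the TCP states, SYN-SENT, ESTAB, FIN-WAIT, and friends:

$ watch -n1 ss -t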

User Datagram Protocol (UDP) is simpler and has less overhead. It is a connectionless protocol with no error checking or correction mechanisms, and it does not guarantee delivery. There are UDP jokes, too:

I would tell you a UDP joke but you might not get it.

A UDP packet walks into a bar.
A UDP packet walks into a bar.

RAW sockets are naked. TCP and UDP encapsulate their payloads, and the kernel manages all the packets. RAW sockets transport packets without encapsulating them in any particular protocol, so we can write applications that craft and inspect network packets themselves. Some applications that take advantage of RAW sockets are tcpdump and nmap.
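In fact, tcpdump needs root privileges precisely because it opens raw sockets (strictly speaking, on Linux it uses packet sockets, a close cousin) to capture traffic. For example, this captures five ICMP packets on any interface:

$ sudo tcpdump -c 5 -i any icmp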

UNIX sockets, also called inter-process communication (IPC) sockets, are internal sockets that processes use to communicate with each other on your Linux computer.

Dumping Sockets

Now we get to the fun part, dumping sockets! This is not quite as much fun as dumping a load from a backhoe, but it has its charms. These commands print the current state of TCP, UDP, RAW, and UNIX sockets respectively:

$ ss -ta
$ ss -ua
$ ss -wa
$ ss -xa

Notice how verbose and numerous your UNIX sockets are. If your Linux distribution uses systemd, you'll see it all over the place. This little incantation counts all the systemd lines:

$ ss -xa | grep systemd | wc -l
53

ss -a dumps everything. Let’s take a look at what the columns mean.

$ ss -a | less
Netid State    Recv-Q Send-Q Local Address:Port           Peer Address:Port                
u_seq ESTAB    0      0      @0002b 25461                 * 25462                
u_str ESTAB    0      0      @/tmp/dbus-C3OhS7lOOc 28053             * 22283   
udp   ESTAB    0      0      127.0.0.1:45509              127.0.1.1:domain               
tcp   ESTAB    0      0      192.168.0.135:40778          151.101.52.249:http 
tcp   LAST-ACK 1      1      192.168.0.135:60078          192.229.173.136:http
tcp   LISTEN   0      80     127.0.0.1:mysql                 *:*
tcp   LISTEN   0      128    :::ssh                         :::*

Netid displays the socket type and transport protocol.

State is the socket state; for TCP sockets, these are the standard TCP states. You'll see ESTAB and LISTEN the most.

Recv-Q and Send-Q display the amount of data queued for receiving and sending, in bytes. For listening sockets the meaning is different: Recv-Q is the current length of the accept queue, and Send-Q is its configured backlog limit, which is where the 128 next to ssh comes from.

Local Address:Port is the open socket on your computer, and Peer is the address of the remote connection, if there is one.
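You can also filter on those address columns directly. For example, this shows only the TCP sockets whose peer is on the local subnet (a sketch; adjust the prefix to match your own network):

$ ss -t dst 192.168.0.0/24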

Cool Examples

It’s always good to check for open ports. This shows all listening sockets:

$ ss -l

Seeing all the UNIX sockets isn't necessary when you're checking for anything that might be open to the outside world, so this displays only listening TCP, UDP, and RAW sockets:

$ ss -tuwl
Netid  State      Recv-Q Send-Q  Local Address:Port    Peer Address:Port                
raw    UNCONN     0      0              :::ipv6-icmp   :::*                                                                                             
udp    UNCONN     0      0               *:bootpc      *:* 
tcp    LISTEN     0      80      127.0.0.1:mysql       *:*                                      
tcp    LISTEN     0      128             *:ssh         *:*                                       
tcp    LISTEN     0      128            :::http        :::*                    

UNCONN, unconnected, is the datagram equivalent of LISTEN. This example shows that pings are not blocked, bootpc is listening for DHCP assignments, MySQL is listening for local connections only, and SSH and HTTP are open to all requests, including external ones. *:* means all IPv4 addresses, and :::* means all IPv6 addresses.
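ss also has a little filter language of its own; man ss documents the full syntax. As a quick sketch, this lists only established TCP connections to or from the HTTPS port:

$ ss -t state established '( dport = :https or sport = :https )'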

You can see which processes are using sockets, which can be quite enlightening. This example shows the activity generated by a bit of Web surfing:

$ ss -tp
State      Recv-Q Send-Q         Local Address:Port       Peer Address:Port                
ESTAB      0      918            192.168.0.135:49882      31.13.76.68:https 
users:(("chromium-browse",pid=2933,fd=77))
ESTAB      0      0              192.168.0.135:60274      108.177.98.189:https 
users:(("chromium-browse",pid=2933,fd=114))
FIN-WAIT-1 0      619            192.168.0.135:57666      208.85.40.50:https                
ESTAB      0      0              192.168.0.135:52086      31.13.76.102:https                 
users:(("chromium-browse",pid=2933,fd=108))
SYN-SENT   0      1              192.168.0.135:46660      52.84.50.246:http                  
users:(("firefox",pid=3663,fd=55))
SYN-SENT   0      1              192.168.0.135:46662      52.84.50.246:http                  
users:(("firefox",pid=3663,fd=66))

Want to see the domain names? Add -r, for “resolve”:

$ ss -tpr
State      Recv-Q Send-Q    Local Address:Port     Peer Address:Port                
ESTAB      0      0         studio:48720           ec2-50-18-192-250.us-west-1.compute.amazonaws.com:https    users:(("firefox",pid=3663,fd=71))
ESTAB      0      0         studio:57706           www.pandora.com:https    users:(("firefox",pid=3663,fd=69))
ESTAB      0      0         studio:49992           edge-star-mini-shv-01-sea1.facebook.com:https    users:(("chromium-browse",pid=2933,fd=77))

Use the -D [filename] option to dump your results into a text file, or use tee so you can see the output in your terminal and also store it in a file:

$ ss -tpr | tee ssoutput.txt
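For a quick census rather than a full dump, ss -s prints summary totals of your sockets, counted by type and, for TCP, by state:

$ ss -s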

The more you know about TCP/IP, the more effectively tools like ss will work for you. The fine man page (man ss) contains a lot of useful examples, and if you install the iproute2-doc package you'll find more help.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How to Manage the Computer Security Threat

It is tempting to believe that the security problem can be solved with yet more technical wizardry and a call for heightened vigilance. And it is certainly true that many firms still fail to take security seriously enough. That requires a kind of cultivated paranoia which does not come naturally to non-tech firms. Companies of all stripes should embrace initiatives like “bug bounty” programmes, whereby firms reward ethical hackers for discovering flaws so that they can be fixed before they are taken advantage of.

But there is no way to make computers completely safe. Software is hugely complex. Across its products, Google must manage around 2bn lines of source code—errors are inevitable. The average program has 14 separate vulnerabilities, each of them a potential point of illicit entry. Such weaknesses are compounded by the history of the internet, in which security was an afterthought.

Read more at The Economist

Watson Service Chaining With OpenWhisk (Part 1 of 3)

OpenWhisk can merge the power of Watson with the simple beauty of serverless computing. As we delve into Watson services, we’ll cover the building blocks here.

This three-part series will help you understand the in-depth features of serverless computing via OpenWhisk. OpenWhisk offers an easy way to chain services, where the output of the first action acts as the input to the second action, and so on in a sequence.

OpenWhisk achieves this chaining of services via sequences on Bluemix. By the end of Part 3, you will be chaining Watson services and exposing the sequence as a REST API via the OpenWhisk API Gateway.
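With the OpenWhisk wsk CLI, a sequence is just another action whose component actions run in order, each feeding its result to the next. A minimal sketch (the action names here are hypothetical placeholders, not part of this lab):

$ wsk action create translateAndAnalyze --sequence watsonTranslate,watsonToneAnalyzer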

This post describes the resource requirements for performing this lab. The two sub-sections are:

  • Bluemix account
  • Locally installed software

Read more at DZone

How Docker Is Growing Its Container Business

Docker is a name that has become synonymous with the application container revolution in recent years. At the helm of Docker Inc. is CEO Ben Golub who is tasked with leading the company forward and making sure that containers aren’t just a good idea for developers, but are also a good idea for paying customers.

In a video interview with eWEEK at the DockerCon 17 conference, Golub details how Docker Inc. has developed its business model and what lies ahead.

“We have built up a subscription business model,” Golub said. “What we have seen over the course of the past year, since we have introduced our more serious commercial products is a great ramp in terms of the number of customers.”

Read more at eWeek

Mirantis Launches its New OpenStack and Kubernetes Cloud Platform

Mirantis, one of the earliest players in the OpenStack ecosystem, today announced that it will end-of-life Mirantis OpenStack support in September 2019. The Mirantis Cloud Platform, which combines OpenStack with the Kubernetes container platform (or which could even be used to run Kubernetes separately), is going to take its place.

While Mirantis is obviously not getting out of the OpenStack game, this move clearly shows that there is a growing interest in the Kubernetes container platform and that Mirantis’ customers are now starting to look at this as a way to modernize their software deployment strategies without going to OpenStack. The new platform allows users to deploy multiple Kubernetes clusters side-by-side with OpenStack — or separately.

The company is also changing how it delivers its new platform. 

Read more at TechCrunch

Software Heritage Backed By UNESCO

Last Monday, UNESCO and INRIA signed an agreement to contribute to the preservation of the technological and scientific knowledge contained in software. This includes promoting universal access to software source code.

The agreement, signed by UNESCO’s Director-General, Irina Bokova, and INRIA’s Chief Executive Officer, Antoine Petit, in the presence of the President of the French Republic, François Hollande, focuses especially on Software Heritage, an INRIA project that strives to collect, preserve, and make accessible the source code of all available software. The Software Heritage project aims to build a universal and perennial archive of software, accessible for future generations.

“We must be able to control, to transmit, and to put these technologies, this information, these elements that have become part of our heritage at the service of humanity,” said M. Hollande during the event. The FSFE was involved with Software Heritage’s early success, offering support and helping publicise its creation and activities. Matthias Kirschner, President of the FSFE, says: “It is important to preserve our collective knowledge about how software has influenced humankind. Collecting source code makes Software Heritage a valuable resource to understand how our society worked at certain times, and to build upon knowledge from humankind’s past.”

Software Heritage is the brainchild of Roberto Di Cosmo (founder and CEO) and Stefano Zacchiroli (founder and CTO), two long-time Free Software advocates and activists.

See more at UNESCO

Update on the Exascale Computing Project (ECP)

In this video from the HPC User Forum, Paul Messina from Argonne presents: Update on the Exascale Computing Project.

“The Exascale Computing Project (ECP) was established with the goals of maximizing the benefits of high-performance computing (HPC) for the United States and accelerating the development of a capable exascale computing ecosystem. Exascale refers to computing systems at least 50 times faster than the nation’s most powerful supercomputers in use today. The ECP is a collaborative effort of two U.S. Department of Energy organizations – the Office of Science (DOE-SC) and the National Nuclear Security Administration (NNSA).”

Read more at insideHPC

How to Look at Mission-Critical Safety in the Internet of Cars

Alex Agizim is CTO of Automotive and Embedded Systems at EPAM.

The autonomous car will redefine how we travel, ship inventory, and design infrastructure. As physical objects become more deeply integrated into the Internet of Things, the connected car will soon become an essential component of the IoT ecosystem.

An important element, as we look toward actually implementing the autonomous car, is understanding how mission-critical safety software and the Internet of Cars will operate within the car ecosystem. This blog tries to explain what is happening currently; the importance of creating a security-first approach with open source software; and how we at EPAM are approaching and solving some of the common problems.

If you are interested in learning more about this, Alex will be at the Automotive Linux Summit happening in Tokyo from May 31 – June 2. His talk will be all about the cloud-connected vehicle based on open source software. Linux.com readers save 5% on the “attendee” pass to Automotive Linux Summit. Register now with code LINUXRD5.

What is the current problem?

The Internet of Cars ecosystem and shared economy model require the vehicle to become part of the Cloud. Service vendors should own the end-to-end service software stack including the part of the software executed in the vehicle. The deployment, upgrades and development of the in-vehicle part of the service should be completely independent from the Car OEM development lifecycle.

Currently, service vendors don’t have the ability to update or deploy the in-vehicle part of the software. Only the Car OEM, which owns the complete software stack running on the onboard computers, can do that.

Protecting Your Vehicle on the Cloud

No matter which solution is used for cloud integration, it still opens the system to potential intrusions through the exploitation of connection vulnerabilities. Thus, some level of isolation from the rest of the safety-critical software is needed. We envision a Xen hypervisor-based solution for isolating the different subsystems (soft ADAS and cluster, HMI, cloud apps) from one another.

This infrastructure, which uses containers and Docker to deploy service software to the EPAM Fusion domain with the same approach as regular cloud-based services, allows service vendors to develop and deploy services without any special knowledge of embedded/automotive software. The domain provided by the Car OEM would ensure full control of the APIs and policies that a service might use. The domain would not have access to the hardware, because of hardware virtualization isolation.
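As a rough illustration of that isolation (every name and value below is a hypothetical placeholder, not part of our actual product), each subsystem would run in its own Xen guest domain, defined by a small config file and started with the standard xl toolstack:

# cloud-apps.cfg: a guest domain confining the cloud-apps subsystem
name   = "cloud-apps"
memory = 1024
vcpus  = 2
kernel = "/boot/vmlinuz-guest"
disk   = [ 'phy:/dev/vg0/cloud-apps,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]

$ xl create cloud-apps.cfg
$ xl list

A compromise of the cloud-apps domain then stays inside that domain; the hypervisor, not the guest kernel, enforces the boundary.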

The autonomous car is slowly upon us, but there are many challenges that lie ahead, especially when it comes to critical software functions on the cloud. The way the technology industry approaches this will be imperative to innovation.

If you are curious to learn more about EPAM Fusion, see our demo video below:

https://www.youtube.com/watch?v=jMmz1odBZb8

Secure Web Apps with JavaEE and Apache Fortress

ApacheCon is just a couple weeks away — coming up May 16-18 in Miami. We asked Shawn McKinney, Software Architect at Symas Corporation, to share some details about his talk at ApacheCon. His presentation, “The Anatomy of a Secure Web Application Using Java EE, Spring Security, and Apache Fortress” will focus on an end-to-end application security architecture for an Apache Wicket Web app running in Tomcat. McKinney explains more in this interview.

Linux.com: Tell us about your inspiration for this talk.

Shawn McKinney: The idea for this talk started several years back, when I first began working full-time with Symas. I was working on a project that spanned multiple companies with my friend and colleague, John Field, who’s a security architect at EMC, now Pivotal.

At the time, we were working on a process to migrate legacy COBOL apps from running on their native IBM z Series mainframe platform to run on top of open systems architectures, i.e., Linux.

These were massive programs with millions of lines of code, built over decades. Their conversion processes required mimicking the mainframe’s legendary security controls onto Linux platforms, using what was available to us via native and non-native security controls.

This meant dealing with a multitude of security concerns across every tier of the system and into many of its sub-layers as well. Mandatory access controls were enforced on every node in the system.

Linux systems had to be hardened to the nth degree and at the same time, multiple grades of authorization were required within the platform layers. Fortunately, everything we needed to do all this was already readily available and usable, and easily found within the public domain.

Only open, established, and time-worn practices were targeted. That is, only technologies released under permissive licenses, like the Apache Software License, were allowed into the final design. Our problem wasn’t with how to design the security system, per se, nor how to build it. Strangely, those were the easy parts.

The hard part for us was: how do we convey such a complex design to others in a way that is understandable? Because many of us are not, shall we say, security afflicted, so despite recommending only best practices, the concepts remain arcane, complicated, and generally not known to the masses.

To break through this complexity barrier, John and I borrowed an idea remembered from our youth, and that is those science textbooks that depict the human anatomy. You remember the ones that use translucent pages, each with a particular organ, all overlaying together comprising the comprehensive image of the human body, complete with all of its sub-systems?

We thought this a good way to communicate our complicated security system design to others, so we adapted the idea for our end-to-end security design layout. Each image corresponds to an individual security component contained within a typical web app, from its outermost to innermost layers.

What’s unique about this particular talk is that it started with those initial visual images of a typical web security system architecture.

Next, a test application was created to go along with those anatomy images. The test app mimicked a typical web system, complete with test pages, links, buttons, database tables, et cetera, all of which are under tight security controls of various types.

The goal of the test app was to create a comprehensive tutorial demonstrating all of the pertinent security controls that were contained within the anatomy diagram. Finally, we added instructions to install, deploy, and run the test app and published it all to GitHub.

The project is called the Apache Fortress Demo; we used it in our live demos, and anyone who wants to try it out at home can use it, too. During our live demos, we would simultaneously dissect and discuss the web system’s security functionality, switching between the PowerPoint slides that visually depict the images and the concrete demo, to show how it all worked in a live system.

Linux.com: Who should attend this talk?

McKinney: It is a Java security demonstration, so anyone who’s working on the Java platform and in security would be particularly interested. The demo is going to cover security protocol interplays, including TLS in its various forms and flavors, such as LDAPS and HTTPS, for example. So, I’m going to say anyone who’s interested in security-related topics should get something from it.

Linux.com: What technical background will you assume the audience has?

McKinney: I’m going to assume a basic conceptual understanding of security concepts like authentication, authorization, and encryption of data. So, understand those abstract concepts, and then we will make them concrete for this specific platform in the demo.

Learn first-hand from the largest collection of global Apache communities at ApacheCon 2017 May 16-18 in Miami, Florida. ApacheCon features 120+ sessions including five sub-conferences: Apache: IoT, Apache Traffic Server Control Summit, CloudStack Collaboration Conference, FlexJS Summit and TomcatCon. Secure your spot now! Linux.com readers get $30 off their pass to ApacheCon. Select “attendee” and enter code LINUXRD5. Register now >>

Introducing the Open Source Entrepreneur Network

I’m happy to announce that Linux.com will syndicate content from the Open Source Entrepreneur Network. Wait. What? Who? Where? Read on for more…

I’ve been an open source guy for many years now – since 1998. Over the years I’ve been a proud open source user, sometime developer, and overall advocate. Seeing the success of open source has been a real joy, but I’ve also been mystified by the myths that permeate the industry when it comes to business models and product development and where they intersect with open source software. Now that open source has “won,” the focus shifts to optimization. As in, how do you optimize your processes to fully participate in and get maximum benefits from all the things happening right now in open source ecosystems?

Frankly, I’m amazed – and not in a good way – at how much bad advice and “thought leadership” exists out there pertaining to open source business things. I followed the open source way of scratching an itch, and I decided to finally do something about it: I created the Open Source Entrepreneur Network or OSEN. The OSEN is where you learn how to make, market and sell products and services based on open source software. In our brave new open source world, this is a skillset needed by startup founders (and their investors), product managers, IT managers, CIOs/CTOs, devops pros and more. The fact that so much of modern software supply chains originates with upstream communities adds several layers of complexity to product development, which was already complex to begin with.

So join us on this journey and let me know what you think!