
Syslog, A Tale Of Specifications

The advantages of a unikernel work cycle are many-fold. You get performance benefits from not having a memory management unit or kernel/user boundary, and the attack surface is greatly minimized because all system dependencies are compiled in with your application logic. Don’t use a file system in your application? Leave it out. The philosophy here is to keep it simple and use only what you need. These unikernels are also the secret sauce behind how the Docker beta runs natively on Windows and Mac OS X.

I’m specifically focusing on hacking on the Mirage implementation of Syslog, which was started by Jochen Bartl (verbosemode, lobo on IRC), an all-around awesome guy to work with. This is Jochen’s first big OCaml project (mine too), and he has already proven how capable and passionate he is by leading the charge.

Read more at Gina Codes

Network Security: The Unknown Unknowns

Using the Assimilation Project to Perform Service Discovery and Inventory of Systems

I recently thought of the apocryphal story about the solid reliability of IBM AS/400 systems. I’ve heard several variations, but in the most common version, an IBM service engineer shows up at a customer site one day to service an AS/400. The hapless employees have no idea what the service engineer is talking about. Eventually, the system is found in a closet, or even sealed behind a wall, where it had been reliably running the business for years, completely forgotten and untouched. From a reliability perspective, this is a great story. From a security perspective, it is a nightmare. It brings to mind Donald Rumsfeld’s infamous “unknown unknowns” remark about the lack of evidence linking the government of Iraq to the supply of weapons of mass destruction to terrorist groups.

Alan Robertson, an open source developer and high availability expert, likes to ask people how long it would take them to figure out which of their services are not being monitored. Typical answers range from three days to three months.

Read more at Security Week

Scientific Audio Processing, Part I – How to read and write Audio files with Octave 4.0.0 on Ubuntu

Octave, the Linux equivalent of Matlab, has a number of functions and commands for acquiring, recording, playing back, and digitally processing audio signals for entertainment, research, medical, and other scientific applications. In this tutorial, we will use Octave 4.0.0 on Ubuntu, starting with reading audio files and working up to writing and playing signals that emulate sounds used in a wide range of activities.

Best Linux Command-Line Tools For Network Engineers

Trends like open networking and adoption of the Linux operating system by network equipment vendors require network administrators and engineers to have a basic knowledge of Linux-based command-line utilities.

When I worked full-time as a network engineer, my Linux skills helped me with the tasks of design, implementation, and support of enterprise networks. I was able to efficiently collect information needed to do network design, verify routing and availability during configuration changes, and grab troubleshooting data necessary to quickly fix outages that were impacting users and business operations. Here is a list of some of the command-line utilities I recommend to network engineers.

Read more at Network Computing

6 Amazing Linux Distributions For Kids

Linux and open source are the future, and there is no doubt about that. To see this become a reality, a strong foundation has to be laid, starting from the lowest…


True Network Hardware Virtualization

If your network is fine, read no further.

TROUBLE IN PARADISE

But it’s likely you’re still reading. Because things are not exactly fine. Because you are probably like the 99.999% of us who are experiencing crazy changes in our networks. Traffic in metro networks is exploding, and those networks are characterized by many nodes, varying traffic flows, and a wide mix of services and bit rates.

To cope with all this traffic growth and these changing usage patterns, WAN and metro networks require more flexibility: network resources that can be dynamically and easily grouped into logical zones, and the ability to create new services out of pools of network resources.

Virtualization has already achieved this in data centers, where compute resources have long been virtualized using virtual machines (VMs) and the NICs providing network connectivity to those VMs have been virtualized as well. But in metro and WAN networks, network resources are still rigidly and physically assigned.

You, the devoted architects and tireless operators of these high-capacity networks, are confronted with complex networking structures that don’t lend themselves to any form of dynamic change. A further dilemma is that you need to both manage existing connections and build new platforms that excel at delivering on-demand services and subscriber-level networking.

VAULTING TO THE FRONT

The IXPs and ISPs who architect networks with dynamic, programmatic control will achieve a winning service velocity.

Enter true network hardware virtualization, which creates virtual forwarding contexts (VFCs) at WAN scale. For ISPs, Internet exchanges (IXs), and large campus networks, WAN-scale multi-context virtualization offers dynamic creation of logical forwarding contexts within a physical switch, making programmable network resource allocation possible.

VIRTUALIZATION IN NETWORK HARDWARE FOR WAN AND METRO NETWORKS

Corsa’s SDN data planes allow hardware resources to be exposed as independent logical SDN Virtual Forwarding Contexts (VFCs) running at 10G and 100G physical network speed. Under SDN application control, VFCs are created in the logical overlay. Three context types are fully optimized for production network applications: L2 Bridge, L3 IP Routing, L2 Circuit Switching. A Generic OpenFlow Switch context type is provided for advanced networking applications where the user wants to use OpenFlow to define any forwarding logic.

Each packet entering the hardware is processed with full awareness of which VFC it belongs to. Each VFC is assigned its own dedicated hardware resources, which are independent of other VFCs and cannot be affected by other VFCs scavenging resources. Each VFC can be controlled by its own, separate SDN application.

The physical ports of the underlay are abstracted from the logical interfaces of the overlay. The logical interfaces defined for each VFC correspond to a physical port or an encapsulated tunnel, such as VLAN, MPLS pseudo wire, GRE tunnel, or VXLAN tunnel, in the underlay. Logical interfaces of any VFC can be shaped to their own required bandwidth.

USE SDN VIRTUALIZATION TO AUGMENT YOUR NETWORK WHERE IT’S NEEDED

This level of hardware virtualization, coupled with advanced traffic engineering and management, allows traditional Layer 2 and Layer 3 services to be built with innovative new SDN-enabled capabilities. For example, an SDN-enabled Layer 2 VPLS service can provide features such as bandwidth on demand or application-controlled forwarding rules, while at the same time using existing network infrastructure in the underlay (physical) network to provide connectivity for the new service. To further differentiate, service providers may even allow customers to bring their own SDN controllers to control their services, while retaining full control over the underlay network.

STOP READING AND START DOING!

With Corsa true network hardware virtualization, virtual switching and routing can be achieved at scale to enable programmable, on-demand services for operators and their customers.

WEBINAR

Join us for the live webinar at 10 a.m. PDT on June 22 to see how networks can be built using true network hardware virtualization and to learn about the specific use cases that benefit. The webinar will outline the specific attributes of open, programmable SDN switching and routing platforms that are needed, especially at scale, and in the process dispel the notion of ‘the controller’ by discussing how open SDN applications can be used to control the virtualized instances.

Register Now!

 

A Shared History & Mission with The Linux Foundation: Todd Moore, IBM

IBM is no stranger to open source software. In fact, the global corporation has been involved with The Linux Foundation since the beginning. Founded over a century ago, IBM has made a perennial commitment to innovation and emerging technology; that’s why they chose to participate in Linux Foundation Collaborative Projects.


It’s impressive that IBM was founded more than a century ago and has decades of research, technologies, and products behind it. But even more impressive is that the company continues to evolve and embrace emerging technologies. It has done so, in part, through its continued involvement with Linux and open source via The Linux Foundation.

“IBM has a long history with The Linux Foundation,” says Todd Moore, VP of Open Technology at IBM. “We’ve been one of the bedrock members of The Linux Foundation since its inception.” And, more generally, says Moore, “We have a long history of doing open source projects throughout many communities.”

Today IBM participates in many Linux Foundation projects, including the Open Mainframe Project. The project’s goal is to bring government, academic, and corporate members together “to boost adoption of Linux on mainframes.”

IBM was one of the founding Platinum members of the Open Mainframe Project, along with ADP, CA Technologies, and SUSE. IBM’s participation included making “the largest single contribution of mainframe code from IBM to the open source community,” Moore says.

“We choose to work with The Linux Foundation and participate in projects like the Open Mainframe Project because of the people, the communities who come together, and the great things that get done,” says Moore.

3 Reasons IBM Participates in Linux Foundation Projects

Moore cites three main reasons IBM participates in Linux Foundation projects:

  • Tailored structure: “There’s quite a bit of customization that can happen within a Linux Foundation project. Many communities impose structure in how they want to operate. When we work with the Linux Foundation to create a community, the community can be very much tailored to just that set of individuals.”

  • Open governance: “Working with the Linux Foundation brings credibility to the actual open governance structure that we like to see in communities. This partnership brings the credibility that this is a project that will be truly governed out in the open.”

  • Encouraging collaboration and participation: “We set up organizations and work effectively to create an atmosphere where people will come and collaborate, and they’ll be ‘sticky’ and they’ll want to go and work on those projects.”

Other Linux Foundation projects that IBM is involved in include Node.js, ODPi, the Cloud Native Computing Foundation, and The Hyperledger Project.

“If we were just to take a project and open source it ourselves and expect people to come to that project, it’s a very difficult path,” says Moore. “When you do it in partnership with someone like The Linux Foundation, that path very much gets smoothed. We have great contacts, great recruitment into these projects, and the staff that we can really go and help and deliver on that.”


Read more stories about Linux Foundation Collaborative Projects:

PLUMgrid: Open Source Collaboration Speeds IO and Networking Development

Telecom Companies Collaborate Through OPNFV to Address Unique Business Challenges

 

ON.Lab Releases Latest ONOS SDN Platform

The Open Network Lab’s Open Network Operating System (ONOS) project unveiled “Goldeneye,” the seventh release of its software-defined networking operating system.

ONOS said the Goldeneye release includes advances such as improved adaptive flow monitoring and selective DPI from ETRI, claimed to provide lower-overhead flow monitoring; Yang tool chain support from Huawei; integration of the northbound intent subsystem with the flow objective subsystem; a six-times improvement in core performance to support consistent distributed operations; and southbound improvements to Cisco IOS NETCONF and the Yang tool chain.

Read more at RCR Wireless

Make Peace With Your Processes: Part 1

A fundamental design feature of Unix-like operating systems is that many of a system’s resources are accessible via the filesystem, as a file. For example, the “procfs” pseudo-filesystem offers us access to all kinds of valuable treasures. In this series of articles, I’ll provide an overview of your system processes, explain how to use the “ps” command, and much more.

By querying files present in “/proc”, you can quickly find out the intricacies of your network connections or the server’s CPU vendor, and you can look up a mountain of other useful information, such as the command-line parameters that were passed to your latest-and-greatest application when it fired up. This is because many of the functions of a server, such as a network connection, are really just another stream of data and in most cases can be presented as a file on your filesystem.
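
For instance, a few illustrative one-liners (the exact files and fields can vary between kernel versions and distributions) pull those details straight out of “/proc”:

# grep -m1 vendor_id /proc/cpuinfo   # the CPU vendor string reported by the kernel
# cat /proc/version                  # kernel version and build details
# cat /proc/net/tcp                  # current TCP sockets (addresses shown in hex)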

Let’s jump ahead for a moment in case you’re not too familiar with the “/proc” filesystem. If you knew that your Process ID was 16651, for example, then you could run this command to find out what was sent to the Puppet process to start it up:

# xargs -0 < /proc/16651/cmdline

The output from that command is:

/usr/bin/ruby /usr/bin/puppet agent

As you can see, Puppet’s agent is using the Ruby programming language in this case and the binary “/usr/bin/puppet” is passed the parameter “agent” to run it as an “agent” and not a “master”.
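
The arguments in “cmdline” are separated by NUL bytes rather than spaces, which is why “xargs -0” was used above. A rough equivalent (just an alternative sketch, using the same example PID) is to translate those NUL bytes to spaces with “tr”:

# tr '\0' ' ' < /proc/16651/cmdline; echo   # the echo simply adds a trailing newline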

The “Everything Is A File” philosophy makes total sense if you think about it. The power harnessed within Unix’s standard tools (usually used for manipulating data held in the more common text files) such as “grep”, “awk”, and “sed” is a major strength of the Operating System. But most importantly, a system’s components can integrate very efficiently and easily if many things are simply files.
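
To illustrate the point, those same text-processing tools work directly on procfs files. A quick sketch (the field names are standard on Linux, although your kernel may expose more or fewer entries):

# grep -c ^processor /proc/cpuinfo                              # count the CPU cores the kernel can see
# awk '/^MemTotal|^MemFree/ {print $1, $2, $3}' /proc/meminfo   # total and free memory, in kB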

If you have ever tried to look into a process running on a Unix-like machine, then you’ll know that the abundance of information tends to add confusion, rather than assist, if you don’t know where to look. There are all sorts of things to consider when you are eagerly trying to track down a rogue process on a production machine.
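
As a small taste of what’s to come (assuming the GNU procps version of “ps” shipped with most Linux distributions, which understands the “--sort” option), a hunt for a runaway process will often start by sorting the Process Table by resource usage:

# ps aux --sort=-%cpu | head -n 5   # the five hungriest processes by CPU
# ps aux --sort=-%mem | head -n 5   # the five hungriest processes by memory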

In this series, I will attempt to offer a broad insight into how the Process Table can be accessed with the ps command and how, in combination with “/proc” and “/dev”, it can help you manipulate your systems.

Legacy

There are a few legacy stumbling blocks when it comes to looking up a process on different types of Unix boxes, but thankfully we can rely on the trusty “ps” command to mitigate some of these headaches automatically.

For example, Unix-style “ps” groups its parameters together and prepends a hyphen. BSD, on the other hand, also enjoyed grouping switches together but, for one reason or another, fell out with the hyphen entirely.

Throwing another spanner in the works, however, was good old GNU’s preference, in which its long options used two dashes. Now that you’ve fully committed those confusing differences to memory, let’s assume that the ps command does as much as it can by mixing and matching the aforementioned options in an attempt to keep everyone happy.
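
To make those differences concrete, here is one example in each of the three dialects (the first two list every process on the system; the third, using GNU long options, draws your current terminal’s processes as a tree):

# ps -ef                          # UNIX syntax: grouped options preceded by a hyphen
# ps aux                          # BSD syntax: grouped options, no hyphen
# ps --forest --format pid,args   # GNU syntax: long options with two dashes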

Be warned that oddities can occasionally occur, so keep an eye out for them just in case. I’ll try to offer alternative commands as we go along to act as a reminder that not everything is to be taken exactly as read. For example, a very common use of the ps command is:

# ps -aux

Note, however, that this is indeed different from:

# ps aux

You might suspect, and would be forgiven for thinking as much, that this is purely to keep everyone on their toes. According to the ps command’s manual, however, this is apparently because POSIX and UNIX insist that “ps -aux” should cater to processes owned by a user called “x”. If I’m reading the information correctly, then if the user “x” does not exist, “ps aux” is run instead. I love the last sentence of the manual’s definition and draw your attention to it as a gentle warning: “It is fragile, subject to change, and thus should not be relied upon.”
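
If what you actually want is the processes that belong to a particular user, it is far less ambiguous to say so explicitly. For example (one sketch per dialect; “root” is just a stand-in for any user name):

# ps -u root -f    # UNIX syntax: full-format listing of root's processes
# ps U root        # BSD syntax: the rough equivalent, without the hyphen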

Process Tree

Enough eye strain for a moment; let’s begin by looking at the ps command and what it can help with in relation to querying the Process Table.

For starters (and I won’t be betting the ranch on this statement), it’s relatively safe to assume that upper- and lowercase mean the same thing.

If you’ve never seen the output of a Process Tree, then it might help with understanding “child” threads, which live under a “parent” process. In this case, the command is simply:

# pstree

The not-so-impossible-to-decipher output from that command is shown in Listing 1 (taken from a server running not “systemd” but good, old “init”, which, as an aside, is always Process ID (PID) number one):

init-+-auditd---{auditd}
     |-certmonger
     |-crond
     |-dbus-daemon---{dbus-daemon}
     |-hald---hald-runner-+-hald-addon-acpi
     |                    `-hald-addon-inpu
     |-apache2---8*[apache2]
     |-master-+-pickup
     |        `-qmgr
     |-6*[mingetty]
     |-oddjobd
     |-rpcbind
     |-rsyslogd---3*[{rsyslogd}]
     |-sshd---sshd---sshd---bash---pstree
     |-udevd---2*[udevd]

Listing 1: Output from the “pstree” command showing parent processes and their children.

You can make your screen output much messier by adding the “-a” switch. Doing so will add command-line arguments (pulled from the /proc filesystem in the same way that our example did earlier). This is very useful, but you might want to do something like “grep” a specific process name from the output, as follows:

# pstree -ap | grep ssh

 |-sshd,29076
 |   `-sshd,32365
 |       `-sshd,32370
 |               |-grep,8401 ssh
 |   |-sssd_ssh,1143 --debug-to-files

Listing 2: The command “pstree” showing only SSH processes with command line arguments and PIDs

As you can see from Listing 2, the command we are querying with also shows up in the output (the line starting with “grep”), so try not to let that trip you up. I’ve added the “-p” switch to display the PIDs, too.

One final look at this example is shown in Listing 3. Here, the all-pervasive “-Z” switch shows us any SELinux context associated with the parent and child detail displayed in our process table tree. That command, for reference, was:


# pstree -aZp | grep ssh

 |-sshd,29076,`unconfined_u:system_r:sshd_t:s0-s0:c0.c1023'
 |   `-sshd,32365,`unconfined_u:system_r:sshd_t:s0-s0:c0.c1023'
 |       `-sshd,32370,`unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023'
 |               |-grep,8406,`unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023' ssh
 |   |-sssd_ssh,1143,`system_u:system_r:sssd_t:s0' --debug-to-files

Listing 3: The output now includes SELinux detail, command line arguments and PIDs

In this article, I provided a very brief introduction to the “ps” command. In the next several articles, I’ll show further options, examples, and details for using this powerful tool. Stay tuned.

Read Part 2 here.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.