
Get a Preview of Apache IoT Projects at Upcoming ApacheCon

The countdown to ApacheCon North America has begun. The blockbuster event will be in Miami this year and runs May 16-18. The Apache community is made up of many niche communities, and ApacheCon offers something for all of them.

Here, Roman Shaposhnik, Director of Open Source at Pivotal Inc., who is heading the Apache IoT track at ApacheCon, gives us a sneak peek of what the Apache Internet of Things community can look forward to at the event.

Linux.com: Please give us an overview of the projects the Apache IoT community is working on.

Roman Shaposhnik: There are many projects underway, including Apache Mynewt and several others that are directly relevant to the embedded space, which is a precursor to the IoT. Those are on the edge, and then there are projects on the data center side too. So, we’ve got projects on Hadoop, and some on NiFi, an enterprise integration and dataflow automation tool that spans the edge and the data center. Apache gives you an end-to-end approach to an IoT architecture. We don’t just stop at the edge, and we don’t just stop at the data center. It’s end to end, and that’s what’s exciting to me about it.

Linux.com: When many of us think about the IoT, we think about smart devices like the Amazon Echo, but we don’t think about what is behind that. What’s happening on that end?

Shaposhnik: Today the bulk of the projects are on the back end, but with things like Mynewt joining the Apache family, we’re moving towards what is commonly referred to as “Fog” computing, where your edge becomes increasingly intelligent and increasingly independent from the data center.

Linux.com: Which talks are you most excited about in your track at ApacheCon?

Shaposhnik: We had to turn talks down because we didn’t have enough space in our track, but the ones that made the cut are simply outstanding. That makes it difficult to choose a few to highlight, so I’ll just mention some that I either took a direct role in organizing or that I, myself, would really love to attend.

I’ll start with two of the keynotes, both from an investment or VC community perspective. Basically, they are VCs explaining to developers what areas they feel will attract investments. That information is super important to an emerging technology like IoT. IoT is a hot market today, but figuring out what part of the market to address is a huge challenge.

One of those talks is a keynote by a friend of mine from Lightspeed Venture Partners. Sudip Chakrabarti is his name, and he will be explaining his experiences. The other one is a keynote organized as a panel of VC partners and investors in the Valley. These two are really super exciting to me.

Besides those talks, there are several Mynewt talks that I’m super excited about. Mynewt is exciting to me because I come from a background in operating system design, and I really love their approach of making everything pluggable and modular. I highly recommend any talk on Mynewt, but everything else is just amazing as well.

There is also Justin McLean, who is driving the Apache IoT community in Australia. He is doing his own presentations and introducing people to different ways of doing IoT on small devices.

Linux.com: The Apache Software Foundation has a long tradition of being the place where innovation happens in a variety of communities. For example, we started out in the web and web services and then we had a boom of Java standards development kinds of projects. Right now, we’re in the midst of a big data surge. Do you see IoT as the next place that Apache is going to go and go big?

Shaposhnik: That was one reason why I was so happy to bring my VC friends to the conference, because right now I see exactly that happening. I think big data has graduated to, I wouldn’t say full enterprise adoption, because there are still kinks that companies are working to iron out, but I think the investment interest has shifted more towards IoT. Now, that is not to say that big data is somehow unimportant. It is still a very important piece of the overall puzzle, but from an investment, hyper-growth perspective, I think IoT is the next big thing.

Learn first-hand from the largest collection of global Apache communities at ApacheCon 2017 May 16-18 in Miami, Florida. ApacheCon features 120+ sessions including five sub-conferences: Apache: IoT, Apache Traffic Server Control Summit, CloudStack Collaboration Conference, FlexJS Summit and TomcatCon. Secure your spot now! Linux.com readers get $30 off their pass to ApacheCon. Select “attendee” and enter code LINUXRD5. Register now >>  

Open Source Groups Provide New Licensing Resources

Newcomers to free and open source software (FOSS) might be bewildered by the variety of licenses that dictate how users can use community offerings.

For example, the Open Source Initiative lists nine “popular licenses” and Wikipedia lists dozens more coming in a variety of flavors for different purposes. Those purposes include linking, distribution, modification, patent grant, private use, sublicensing and trademark grant.

To help newbies get a handle on FOSS licenses, The Linux Foundation and Free Software Foundation Europe (FSFE) today announced new resources to help with identification and compliance.

Read more at ADT Mag

New Strain of Linux Malware Could Get Serious

A new strain of malware targeting Linux systems, dubbed “Linux/Shishiga,” could morph into a dangerous security threat.

Eset on Tuesday disclosed the threat, which represents a new Lua family unrelated to previously seen LuaBot malware.

Linux/Shishiga uses four different protocols — SSH, Telnet, HTTP and BitTorrent — and Lua scripts for modularity, wrote Detection Engineer Michal Malik and the Eset research team in an online post. 

Linux/Shishiga targets GNU/Linux systems through a common infection vector: brute-forcing weak credentials from a built-in password list. The malware works through the list, trying a variety of different passwords in an effort to gain access. This approach is similar to the one used by Linux/Moose, with the added capability of brute-forcing SSH credentials.

Read more at LinuxInsider

Compose your Infrastructure, Don’t Micromanage It

TLDR: Leverage Kubernetes annotations across your cluster to declaratively configure management of monitoring and logging.

Two of the largest surfaces where applications make contact with infrastructure are monitoring and logging, and this post talks about how to approach both of these needs in a scalable, composable, and simplified way. If you are familiar with Kubernetes, Prometheus, Fluentd, and the ELK stack, feel free to skip the background. …

The real power of Kubernetes comes from the API types that describe what pods should look like and manage the scheduling (and querying) of pods. Kubernetes does this with the use of key-value pairs on API objects called Labels. As an example, when a Deployment API-object is created, the Kubernetes Scheduler ensures that a specified number of pods are running across the available servers.
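
As a rough illustration of how these label queries work in practice, here is a minimal sketch using the official Python client (the `default` namespace, the `app=web` selector, and the `prometheus.io/scrape` annotation convention are assumptions for the example, not anything Kubernetes mandates):

```python
# pip install kubernetes  (the official Python client)
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Labels let you query API objects declaratively: this fetches only the
# pods whose labels match the selector, the same mechanism a Deployment
# uses to find "its" pods.
pods = v1.list_namespaced_pod("default", label_selector="app=web")

for pod in pods.items:
    annotations = pod.metadata.annotations or {}
    # prometheus.io/scrape is a widely used convention (not a built-in
    # Kubernetes field) for opting a pod into Prometheus scraping.
    if annotations.get("prometheus.io/scrape") == "true":
        print(f"{pod.metadata.name}: opted into scraping")
```

The same pattern, labels for selection and annotations for per-pod configuration, is what lets monitoring and logging be managed declaratively across the cluster.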

Read more at Medium

What’s a Service Mesh? And Why Do I Need One?

tl;dr: A service mesh is a dedicated infrastructure layer for making service-to-service communication safe, fast, and reliable. If you’re building a cloud native application, you need a service mesh!

Over the past year, the service mesh has emerged as a critical component of the cloud native stack. High-traffic companies like PayPal, Lyft, Ticketmaster, and Credit Karma have all added a service mesh to their production applications, and this January, Linkerd, the open source service mesh for cloud native applications, became an official project of the Cloud Native Computing Foundation. But what is a service mesh, exactly? And why is it suddenly relevant?

In this article, I’ll define the service mesh and trace its lineage through shifts in application architecture over the past decade. I’ll distinguish the service mesh from the related, but distinct, concepts of API gateways, edge proxies, and the enterprise service bus.

Read more at Buoyant

How to Get Started Learning to Program

There’s a lot of buzz lately about learning to program. Not only is there a shortage of people to fill open and pending positions in software development, but programming is also a career that offers some of the highest salaries and job satisfaction rates. No wonder so many people are looking to break into the industry!

But how, exactly, do you do that? “How can I learn to program?” is a common question. Although I don’t have all the answers, hopefully this article will provide guidance to help you find the approach that best suits your needs and situation.

Read more at Opensource.com

How to Safely and Securely Back Up Your Linux Workstation

Even seasoned system administrators can overlook Linux workstation backups or do them in a haphazard, unsafe manner. At a minimum, you should set up encrypted workstation backups to external storage. But it’s also nice to use zero-knowledge backup tools for off-site/cloud backups for more peace of mind.

Let’s explore each of these methods in more depth. You can also download the entire set of recommendations as a handy guide and checklist.

Full encrypted backups to external storage

It is handy to have an external hard drive where you can dump full backups without having to worry about things like bandwidth and upstream speeds (in this day and age, most providers still offer dramatically asymmetric upload/download speeds). Needless to say, this hard drive needs to be itself encrypted (again, via LUKS), or you should use a backup tool that creates encrypted backups, such as duplicity or its GUI companion, deja-dup. I recommend the latter with a good randomly generated passphrase, stored in a safe offline place. If you travel with your laptop, leave this drive at home so you have something to come back to in case your laptop is lost or stolen.

In addition to your home directory, you should also back up /etc and /var/log for various forensic purposes. Above all, avoid copying your home directory onto any unencrypted storage, even as a quick way to move your files around between systems, as you will most certainly forget to erase it once you’re done, exposing potentially private or otherwise security-sensitive data to snooping hands, especially if you keep that storage media in the same bag with your laptop or in your office desk drawer.
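
As a rough sketch of how such a backup routine might be scripted, here is a minimal Python wrapper around the duplicity CLI mentioned above (the source paths, the mount point, and the inline passphrase are placeholders for illustration; backing up /etc and /var/log will generally require root, and the passphrase should come from your safe offline store, not the script itself):

```python
import os
import subprocess

# Placeholder paths: adjust to your own user and backup drive.
SOURCES = ["/home/user", "/etc", "/var/log"]
TARGET = "file:///mnt/backup-drive"  # the (LUKS-encrypted) external drive

env = os.environ.copy()
# duplicity reads the encryption passphrase from this variable;
# never hard-code it like this outside of an example.
env["PASSPHRASE"] = "example-passphrase"

for src in SOURCES:
    dest = TARGET + src  # e.g. /etc -> file:///mnt/backup-drive/etc
    subprocess.run(["duplicity", "full", src, dest], env=env, check=True)
```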

Selective zero-knowledge backups off-site

Off-site backups are also extremely important and can be done either to your employer, if they offer space for them, or to a cloud provider. You can set up a separate duplicity/deja-dup profile that includes only your most important files, to avoid transferring huge amounts of data that you don’t really care to back up off-site (internet cache, music, downloads, etc.).
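
A selective profile can be approximated directly with duplicity’s --include/--exclude options; the sketch below is illustrative only, with hypothetical paths and a hypothetical remote host:

```python
import os
import subprocess

env = os.environ.copy()
env["PASSPHRASE"] = "example-passphrase"  # again, fetch this from a secure store

subprocess.run([
    "duplicity",
    "--include", "/home/user/Documents",  # only what you truly need off-site
    "--include", "/home/user/projects",
    "--exclude", "**",                    # skip everything else (cache, music, downloads)
    "/home/user",
    "sftp://backup@backup.example.com//srv/backups/workstation",
], env=env, check=True)
```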

Alternatively, you can use a zero-knowledge backup tool, such as SpiderOak, which offers an excellent Linux GUI tool and has additional useful features such as synchronizing content between multiple systems and platforms.

The first part of this series walked through distro installation and some pre- and post-installation security guidelines. In the next article, we’ll dive into some more general best practices around web browser security, SSH and private keys, and more.

Workstation Security

Read more:

Part 5: 9 Ways to Harden Your Linux Workstation After Distro Installation

Part 1: 3 Security Features to Consider When Choosing a Linux Workstation

Release Update: Prometheus 1.6.1 and Sneak Peek at 2.0

After 1.5.0 earlier in the year, Prometheus 1.6.1 is now out. There’s a plethora of changes, so let’s dive in.

The biggest change is to how memory is managed. The -storage.local.memory-chunks and -storage.local.max-chunks-to-persist flags have been replaced by -storage.local.target-heap-size. Prometheus will attempt to keep the heap at the given size in bytes. For various technical reasons, actual memory usage will be higher, so leave a buffer on top of this. Setting this flag to 2/3 of how much RAM you’d like to use should be safe.

The GOGC environment variable now defaults to 40, rather than Go’s usual default of 100. This will reduce memory usage, at the cost of some additional CPU.
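
As a back-of-the-envelope illustration of the 2/3 rule, this small Python snippet computes the flag value for a machine where you want Prometheus to use about 12 GiB of RAM (the figure itself is just an example):

```python
def target_heap_size(ram_bytes: int) -> int:
    """Roughly 2/3 of the RAM you are willing to give Prometheus."""
    return ram_bytes * 2 // 3

ram = 12 * 1024**3            # e.g. 12 GiB earmarked for Prometheus
heap = target_heap_size(ram)  # 8589934592 bytes, i.e. 8 GiB

# GOGC=40 is the new default mentioned above; the flag name matches
# the Prometheus 1.6 local storage engine.
print(f"GOGC=40 prometheus -storage.local.target-heap-size={heap}")
```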

A feature of major note is that experimental remote read support has been added, allowing data to be read back from long-term storage and other systems.

Read more at CNCF

Best Linux Distros for Gaming in 2017

Linux can be used for everything, including gaming, and when it comes to gaming distros you have many options to choose from. We hand-picked and tested the available distros and included only the best ones, each with a detailed overview, minimum requirements, and screenshots.

Gaming on Linux has evolved a lot in the past few years, and there are now dozens of distros pre-optimized for gaming and gamers. A few other articles and lists of this type exist, but they don’t really go into detail and are pretty outdated. This is an up-to-date list with all the info you’d need.

Continue reading

Martin Casado at ONS: Making SDN Real

Software Defined Networking (SDN) has evolved significantly since the concept began to be considered in the 1990s, and Martin Casado, General Partner, Andreessen Horowitz, used his keynote at the Open Networking Summit to talk about how he’s seen SDN change over the past 10 years. 

As one of the co-founders of Nicira in 2007, Casado was on the leading edge of some of this SDN evolution. At Nicira, they were focused on addressing two main issues in networking: first, that specific, even customer-level, functionality is tied down to the hardware, and second, that operations are tied down to a box. As computer scientists, they assumed that these problems could be solved by creating high-level abstractions and using a modern development environment to reduce the complexity of implementing network systems; however, they quickly learned that networking is not computing. Networking, Casado says, is less about computation and more about distributed state management. This led Nicira down the path of creating a distributed SDN operating system, based on the idea that a general platform would simplify the distribution model for application developers. They came to realize that it wasn’t quite this easy.

First, networking isn’t a single problem; different parts of the network have different problems. Second, it can be hard to reduce complexity in the platform when applications need to be able to manage this complexity in order to scale. The biggest change happening in the industry around this time in late 2008 and early 2009 was the idea of using a vSwitch as an access layer to the network for implementation of network functionality, and this proved to be a successful idea.

They also had the idea of creating a domain-specific language that would help reduce some of the complexity; however, the downside was that it was never quite clear whether they could achieve full coverage of the existing model, and by changing the abstractions, they were creating a massive learning curve and breaking existing toolchains. The turning point was when they decided that the abstractions themselves could be networks, with logical networks sitting on top of a network virtualization layer / hypervisor that acts as the interface to the physical network.

All of this work led Casado to four key lessons:

  • Put product before platform.
  • Virtualize before changing abstractions.
  • Software over hardware.
  • Sales and go-to-market are as important as technology.

Watch the video of Casado’s entire talk to learn more about what he’s learned about the evolution of SDN over the past 10 years.

https://www.youtube.com/watch?v=oNXwxl2Q1tQ&list=PLbzoR-pLrL6p01ZHHvEeSozpGeVFkFBQZ

Interested in open source SDN? The “Software Defined Networking Fundamentals” training course from The Linux Foundation provides system and network administrators and engineers with the skills to maintain an SDN deployment in a virtual networking environment. Download the sample chapter today!

Check back with Open Networking Summit for upcoming news on ONS 2018.