
Compliance is Not Synonymous With Security

Along with the clear benefits of upholding standards such as GDPR, PCI DSS, and HIPAA often comes a shift toward a more compliance-centric security approach. But regardless of industry or regulatory regime, achieving and maintaining compliance should never be the end goal of any security program. Here’s why:

Compliance does not guarantee security

It’s critical to remember that many—if not most—breaches disclosed in recent years occurred at compliant businesses. This means that PCI compliance, for example, has been unable to prevent numerous retailers, financial services institutions, and web hosting providers from being breached, just as the record-breaking number of healthcare data breaches in 2016 occurred at HIPAA-compliant organizations.

Compliance standards are not comprehensive

In fact, this trend reinforces how compliance standards should be operationalized and perceived: as thoughtful standards for security that can help inform the foundations of a security program but are by no means sufficient. The most effective security programs view compliance as a relatively small component of a comprehensive security strategy.

Read more at SecurityWeek

Dialing Up Security for Docker Containers

Container systems like Docker are a powerful tool for system administrators, but Docker poses some security issues you won’t face with a conventional virtual machine (VM) environment. For example, containers have direct access to directories such as /proc, /dev, or /sys, which increases the risk of intrusion. This article offers some tips on how you can enhance the security of your Docker environment.

Docker Daemon

Under the hood, containers are fundamentally different from VMs. Instead of a hypervisor, Linux containers rely on the various namespace functions that are part of the Linux kernel itself.

Starting a container is nothing more than rolling out an image to the host’s filesystem and creating multiple namespaces. The Docker daemon dockerd is responsible for this process. It is only logical that dockerd is an attack vector in many threat scenarios.
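Those kernel namespaces are directly visible on any Linux host: every process has a set of namespace handles under /proc/&lt;pid&gt;/ns, and a containerized process simply points at different namespaces than the host. A minimal sketch (the helper name is ours; the /proc layout is standard Linux):

```python
import os

def list_namespaces(pid="self"):
    """Return the namespace types (pid, net, mnt, ...) a process belongs to."""
    path = f"/proc/{pid}/ns"
    if not os.path.isdir(path):   # non-Linux hosts have no /proc/<pid>/ns
        return []
    return sorted(os.listdir(path))

# On a Linux host this prints something like ['cgroup', 'ipc', 'mnt', 'net', 'pid', ...]
print(list_namespaces())
```

Comparing the output for your shell and for a process inside a container shows different inode-backed entries, which is exactly the isolation dockerd sets up.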

The Docker daemon has several security issues in its default configuration. For example, the daemon communicates with the Docker command-line tool using a Unix socket (Figure 1). If necessary, you can activate an HTTP socket for access via the network.
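As a quick sanity check of that default, you can verify that the control socket really is a local Unix socket with restrictive permissions rather than something network-facing. A minimal sketch (the helper name is ours; /var/run/docker.sock is the standard default path):

```python
import os
import stat

def describe_docker_socket(path="/var/run/docker.sock"):
    """Report whether the Docker control socket exists and how it is protected."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return f"no docker socket at {path}"
    if not stat.S_ISSOCK(st.st_mode):
        return f"{path} exists but is not a socket"
    # Expect root:docker ownership and mode 660 -- anyone who can write
    # to this socket effectively has root on the host.
    return f"socket mode {oct(st.st_mode & 0o777)}"

print(describe_docker_socket())
```

If you do enable the network (HTTP) socket, the same caution applies with higher stakes: an unauthenticated TCP listener hands out root-equivalent access, so it should only ever be exposed with TLS client authentication.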

Read more at ADMIN Magazine

Parrot 4.0 Ethical Hacking Linux Distro Released

Popular hacking Linux distro Parrot Security has been upgraded to version 4.0, which comes with updated packages and bug fixes along with many new changes.

“This release includes all the updated packages and bug fixes released since the last version (3.11), and it marks the end of the development and testing process of many new features experimented in the previous releases since Parrot 3.9,” reads the company’s announcement.

Parrot Security OS 4.0 ships with netinstall images to enable those interested to create their own system with only the bare core and the software components they need. Besides this, the company has also released Parrot on Docker templates that allow users to quickly download a Parrot template and instantly spawn unlimited, completely isolated Parrot instances on top of any host OS that supports Docker.

Read more at TechWorm

Learn more in this Parrot Security distro review from Jack Wallen.

How CERN Is Using Linux and Open Source

CERN really needs no introduction. Among other things, the European Organization for Nuclear Research created the World Wide Web and the Large Hadron Collider (LHC), the world’s largest particle accelerator, which was used in the discovery of the Higgs boson. Tim Bell, who is responsible for the organization’s IT Operating Systems and Infrastructure group, says the goal of his team is “to provide the compute facility for 13,000 physicists around the world to analyze those collisions, understand what the universe is made of and how it works.”

CERN is conducting hardcore science, especially with the LHC, which generates massive amounts of data when it’s operational. “CERN currently stores about 200 petabytes of data, with over 10 petabytes of data coming in each month when the accelerator is running. This certainly produces extreme challenges for the computing infrastructure, regarding storing this large amount of data, as well as having the capability to process it in a reasonable timeframe. It puts pressure on the networking and storage technologies and the ability to deliver an efficient compute framework,” Bell said.

Tim Bell, CERN

The scale at which LHC operates and the amount of data it generates pose some serious challenges. But CERN is not new to such problems. Founded in 1954, CERN has been around for about 60 years. “We’ve always been facing computing challenges that are difficult problems to solve, but we have been working with open source communities to solve them,” Bell said. “Even in the 90s, when we invented the World Wide Web, we were looking to share this with the rest of humanity in order to be able to benefit from the research done at CERN and open source was the right vehicle to do that.”

Using OpenStack and CentOS

Today, CERN is a heavy user of OpenStack, and Bell is one of the board members of the OpenStack Foundation. But CERN’s use of open source predates OpenStack. For many years, the organization has been using various open source technologies to deliver services through Linux servers.

“Over the past 10 years, we’ve found that rather than tackling our problems ourselves, we find upstream open source communities with which we can work, who are facing similar challenges, and then we contribute to those projects rather than inventing everything ourselves and then having to maintain it as well,” said Bell.

A good example is Linux itself. CERN used to be a Red Hat Enterprise Linux customer. But, back in 2004, it worked with Fermilab to build its own Linux distribution called Scientific Linux. Eventually, they realized that, because they were not modifying the kernel, there was no point in spending time spinning their own distribution, so they migrated to CentOS. Because CentOS is a fully open source and community-driven project, CERN could collaborate with the project and contribute to how CentOS is built and distributed.

CERN helps CentOS with infrastructure, and it also organizes CentOS Dojo events at CERN, where engineers can get together to improve CentOS packaging.

In addition to OpenStack and CentOS, CERN is a heavy user of other open source projects, including Puppet for configuration management and Grafana and InfluxDB for monitoring, and it is involved in many more.

“We collaborate with around 170 labs around the world. So every time that we find an improvement in an open source project, other labs can easily take that and use it,” said Bell. “At the same time, we also learn from others. When large-scale installations like eBay and Rackspace make changes to improve the scalability of solutions, it benefits us and allows us to scale.”

Solving realistic problems

Around 2012, CERN was looking at ways to scale computing for the LHC, but the challenge was people rather than technology. The number of staff that CERN employs is fixed. “We had to find ways in which we can scale the compute without requiring a large number of additional people in order to administer that,” Bell said. “OpenStack provided us with an automated API-driven, software-defined infrastructure.” OpenStack also allowed CERN to look at problems related to the delivery of services and then automate those, without having to scale the staff.

“We’re currently running about 280,000 cores and 7,000 servers across two data centers in Geneva and in Budapest. We are using software-defined infrastructure to automate everything, which allows us to continue to add additional servers while remaining within the same envelope of staff,” said Bell.

As time progresses, CERN will be dealing with even bigger challenges. The Large Hadron Collider has a roadmap out to 2035, including a number of significant upgrades. “We run the accelerator for three to four years and then have a period of 18 months or two years when we upgrade the infrastructure. This maintenance period allows us to also do some computing planning,” said Bell. CERN is also planning the High Luminosity Large Hadron Collider upgrade, which will allow for beams with higher luminosity. The upgrade would mean about 60 times more compute capacity than CERN has today.

“With Moore’s Law, we will maybe get one quarter of the way there, so we have to find ways under which we can be scaling the compute and the storage infrastructure correspondingly  and finding automation and solutions such as OpenStack will help that,” said Bell.
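Bell’s “one quarter of the way there” is easy to sanity-check. A back-of-the-envelope sketch, assuming (our assumptions, not CERN’s stated figures) an HL-LHC start around 2026 and compute performance doubling roughly every two years:

```python
# Hypothetical sanity check of the "one quarter of the way there" estimate.
years = 2026 - 2018                # assumed time until the HL-LHC upgrade
moore_growth = 2 ** (years / 2)    # doubling every ~2 years -> 16x
needed = 60                        # ~60x more compute required (per Bell)
print(moore_growth / needed)       # ~0.27, i.e. about a quarter
```

Hardware improvements alone cover roughly 16x of the needed 60x, which is why the remainder has to come from smarter software and automation.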

“When we started off the Large Hadron Collider and looked at how we would deliver the computing, it was clear that we couldn’t put everything into the data center at CERN, so we devised a distributed grid structure, with tier zero at CERN and then a cascading structure around that,” said Bell. “There are around 12 large tier one centers and then 150 small universities and labs around the world. They receive samples of the data from the LHC in order to assist the physicists to understand and analyze it.”

That structure means CERN is collaborating internationally, with hundreds of countries contributing toward the analysis of that data. It boils down to the fundamental principle that open source is not just about sharing code, it’s about collaboration among people to share knowledge and achieve what no single individual, organization, or company can achieve alone. That’s the Higgs boson of the open source world.

How to Measure the Impact of your Open Source Project

Conventional metrics of open source projects lack the power to predict their impact. The bad news is, there is no significant correlation between open source activity metrics and project impact. The good news? There are paths forward.

Let’s start with some questions: How do you measure the impact of your open source project? What value does your project provide to other projects? How is your project important within an open source ecosystem? Can you predict your project’s impact using open source metrics that you can follow day to day?

If these questions resonate, chances are you care about measuring the impact of your open source project. On Opensource.com, we have already learned about measuring the project’s health, the community manager’s performance, the tools available for measuring, and the right metrics to use—and we understand that not all metrics are to be trusted.

While all these factors are critical in building a comprehensive picture of open source project health, there is more to the story. Indeed, many metrics fail to provide the information we need in a timely fashion. We want to use predictive metrics on a daily basis—metrics that are correlated with, and that act as predictors of, the outcomes and impact metrics that we care about.

Read more at OpenSource.com

Learn more in the OS Guides for the Enterprise from The TODO Group.

ICANN Makes Last Minute WHOIS Changes to Address GDPR Requirements

The Board of Directors of the Internet Corporation for Assigned Names and Numbers (ICANN) struggled and sweated and with days left came up with a way to make the Domain Name System (DNS) and WHOIS, the master database of who owns what website name, compliant with the European Union (EU)’s General Data Protection Regulation (GDPR).

We’ll see.

It doesn’t appear to me that ICANN’s “Temporary Specification for gTLD Registration Data” will pass muster with the GDPR Article 29 working party, the GDPR enforcement group.

ICANN had wanted a year of grace to address WHOIS’s data privacy problems. They didn’t get it.

ICANN argued, “Unless there is a moratorium, we may no longer be able to … maintain WHOIS. Without resolution of these issues, the WHOIS system will become fragmented … A fragmented WHOIS would no longer employ a common framework for generic top-level domain (gTLD) registration directory services.”

Read more at ZDNet

Speak at Open Source Summit Europe – Submit by July 1

Share your expertise and speak at Open Source Summit Europe in Edinburgh, October 22 – 24, 2018. We are accepting proposals through Sunday, July 1, 2018.

Open Source Summit Europe is the leading technical conference for professional open source. Join developers, sysadmins, DevOps professionals, architects, and community members to collaborate and learn about the latest open source technologies, and to gain a competitive advantage by using innovative open solutions.

As open source continues to evolve, so does the content covered at Open Source Summit. We’re excited to announce all-new tracks and content that make our conference more inclusive and feature a broader range of technologies driving open source innovation today.

Read more at The Linux Foundation

Comprehensive Beginner’s Guide to Jupyter Notebooks for Data Science & Machine Learning

Jupyter Notebooks allow data scientists to create and share documents containing everything from code to full-blown reports. They help data scientists streamline their work and enable greater productivity and easier collaboration. For these and several other reasons you will see below, Jupyter Notebooks are one of the most popular tools among data scientists.

In this article, we will introduce you to Jupyter Notebooks and dive deep into their features and advantages.

By the time you reach the end of the article, you will have a good idea as to why you should leverage it for your machine learning projects and why Jupyter Notebooks are considered better than other standard tools in this domain!

What is a Jupyter Notebook?

Jupyter Notebook is an open-source web application that allows us to create and share code and documents.

It provides an environment where you can document your code, run it, look at the outcome, visualize data, and see the results without leaving the environment. This makes it a handy tool for performing end-to-end data science workflows: data cleaning, statistical modeling, building and training machine learning models, visualizing data, and many, many other uses.

Jupyter Notebooks really shine when you are still in the prototyping phase. This is because your code is written in independent cells, which are executed individually.
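The cell-by-cell workflow is easy to picture even outside the browser UI. In a notebook, each of the commented blocks below would be its own cell, re-runnable on its own after an edit (the data here is made up purely for illustration):

```python
# --- Cell 1: load / clean data (re-run only when the source changes) ---
raw = [3, None, 7, 5, None, 8]
clean = [x for x in raw if x is not None]

# --- Cell 2: explore (tweak and re-run without touching Cell 1) ---
mean = sum(clean) / len(clean)

# --- Cell 3: report ---
print(f"{len(clean)} valid samples, mean = {mean:.2f}")  # -> 4 valid samples, mean = 5.75
```

Because each cell keeps its results in the shared kernel state, changing Cell 2 and re-running it reuses the cleaned data from Cell 1 instantly, which is exactly what makes prototyping fast.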

Read more at Analytics Vidhya

Kubernetes and the Challenge of Adding Persistent Storage

Kubernetes adoption is exploding, but hype aside, Kubernetes remains very new — and has a long way to go before it ever might become an integral part of most IT infrastructures.

In the meantime, many, if not most, enterprises and IT shops are just looking to get their feet wet as they enter this brave new world of Kubernetes. And before developers can begin their work on the platform, the admins, operations teams, and/or DevOps engineers must lay the groundwork by adding traditional yet vital data management components to the mix. Persistent storage is a good example: a necessary component in a Kubernetes deployment, but not always easy to implement.

Adding Persistence to Stateless

A key requirement for enterprises is that developers be able to store data in Kubernetes clusters without having to worry about how persistent storage works under the hood.
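In Kubernetes terms, that abstraction is the PersistentVolumeClaim: the developer asks for storage by size and access mode, and the cluster’s storage layer satisfies the claim under the hood. A minimal sketch (the claim name and storage class are placeholders and depend on the cluster’s provisioner):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # placeholder name
spec:
  accessModes:
    - ReadWriteOnce            # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # depends on the cluster's provisioner
```

A pod then references the claim by name in its volumes section; the cluster binds the claim to a matching PersistentVolume, or dynamically provisions one, without the developer ever touching the storage backend.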

Read more at The New Stack

AsteroidOS and OpenWatch Aim to Open Up Smartwatch Market

The AsteroidOS project has released version 1.0 of its open source, Linux-based smartwatch distribution. Designed for after-market installation on “Wear OS by Google” (formerly Android Wear) watches, AsteroidOS can now be dual booted on seven different models. The release follows the late March announcement of an OpenWatch Project for building Android based open source custom ROMs on Wear OS watches.

Despite the widespread view that wearables have been a disappointment, both smartwatches and other wearables are growing in popularity, with better days projected ahead (see farther below).

In truth, it’s not so much wearables that have been a disappointment as it is Wear OS. According to a recent Strategy Analytics report, Wear OS devices have cumulatively slipped from second to third place this year, slightly behind Samsung’s Tizen-based Gear watches. Both trail the Apple Watch, which owns about half the smartwatch market.

One problem is that unlike Android in general, Wear OS is, like Android Things, primarily a proprietary platform. Tizen is more open source, but unlike Google, Samsung has yet to encourage third party hardware vendors, and some software developers chafe at a platform controlled by one giant corporation.

Open source may not be a cure-all for technology market success, but wearables could benefit greatly from more developer participation. AsteroidOS and OpenWatch may offer a way forward.

AsteroidOS 1.0

AsteroidOS was developed by Florent Revest as an open source, privacy-oriented smartwatch platform. The distribution appeared last summer on the Connect Watch, but the product failed to reach its crowdfunding goal and quickly disappeared. An AsteroidOS blog post at the end of the year suggested that the company — essentially one individual — turned out to be unreliable and uncommunicative. (For example, as we noted in our story, Connect Watch never revealed that the prototype was actually an Android Wear watch from KingWear.)

The campaign was good for AsteroidOS, however. Revest says that the project has drawn on contributions from about 100 developers. The new stable release is ready to replace or dual-boot with Wear OS stacks on the Sony Smartwatch 3, the LG G Watch, G Watch Urbane, and G Watch R, as well as the Asus Zenwatch 1, 2, and 3.

Under the hood, AsteroidOS is built on open source Linux components such as OpenEmbedded, opkg, Wayland, Qt5, systemd, BlueZ, and PulseAudio. The platform uses libhybris for porting to Android Wear.

Version 1.0 provides phone notifications, agenda, alarm clock, calculator, music remote control, settings customizations, stopwatch, timer, and a weather forecast app. There is support for 20 languages and numerous watchface designs, and an open source SDK enables developers to add more of each.

Future plans may include an always-on display, grouped notifications, calendar synchronization, and sync apps for more platforms. Farther out, there’s the potential for a personal assistant.

OpenWatch Project

The OpenWatch Project emerged from the Phonebloks-inspired Blocks project, which in 2014 announced a modular Blocks watch that was to run Tizen on an Intel Edison module. The company quickly switched to Android Wear but found it just as limiting and finally settled on its own platform based on Android 5.0.

The following year, Blocks ran a successful, $1.6 million Kickstarter campaign for a new Blocks watch that ran Android 5.0. Like the original, the design is unusual in that it houses modular components in the watchband links. Yet unlike the original, it’s proprietary rather than open source.

After significant delays, the Blocks watch was improved to keep up with the times. It switched to a higher resolution, 400 x 400 screen and swapped out the dual-core Snapdragon 400 for a quad-core MediaTek MTK6580M. Earlier this year, Blocks said it was finally shipping to backers, and it opened new pre-orders starting at $259. Yet, recent angry comments on the Kickstarter page suggest that many are still waiting.

In March, Blocks launched Project OpenWatch. The project is not open sourcing the full Blocks stack, but only key components including a Linux kernel and an Android Oreo based BSP. The idea is that others can build their own custom ROMs.

Early partners include LineageOS — the main fork of the discontinued CyanogenMod — as well as CarbonROM, a newer project that similarly produces smartphone aftermarket firmware based on Android Open Source Project (AOSP). Both projects are building their own custom ROMs based on OpenWatch.

Initially, OpenWatch will run only on watches built on the MediaTek MTK6580M SoC. These include low-end Android Wear devices like the Zeblaze Thor, Lemfo LES1, and KingWear KW88, KW98, and KW99.

Future Smartwatch

Despite the slow growth of wearables, the category appears to be gaining speed. The market is projected to grow 15.1 percent in 2018, totaling 132.9 million units, according to an IDC report released in March. The research firm projects that the segment will see compound growth of 13.4 percent through 2022, when it projects 219.4 million unit shipments. Explanations for the growth include lower prices and a larger choice of sensors.
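The two IDC figures are internally consistent: compounding the 2018 base at the projected growth rate for four years lands almost exactly on the 2022 number. A quick check:

```python
# Compound IDC's 2018 base at the projected 13.4% annual growth rate through 2022.
base_2018 = 132.9                               # million units
cagr = 0.134
projected_2022 = base_2018 * (1 + cagr) ** 4
print(round(projected_2022, 1))                 # ~219.8, matching IDC's 219.4 million
```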

IDC’s wearables definition includes fitness bands, smart earbuds, and smart clothing in addition to smartwatches, which IDC projects will represent “almost two out of every five wearable devices shipped in 2022.” Low-end health tracking wristbands, which led the way in the early years, will drop from a 36 percent share in 2018 to 22 percent in 2022. The fastest growing categories will be earbuds with voice assistant technology and sensor-laden clothing aimed at athletes, says IDC.

The Apple Watch represented over half of all smartwatches that shipped in 2017, says IDC. A Forrester report from last November pegged it as slightly less than half, with Samsung and Garmin coming in second and third.

Like Garmin, Fitbit has introduced a proprietary stack for its first full-fledged smartwatch, the fitness-oriented Fitbit Ionic. Fitbit acquired Pebble and now aims to build up its app library with the help of Pebble developers.

A lack of apps could be the reason why Samsung is rumored to be switching from Tizen to Wear OS for its next watch. Years ago, Samsung dipped a toe into Android Wear with the Gear Live, but the watch flopped, and the company never looked back.

If the rumors are true, Samsung could draw on a much wider selection of apps available for Wear OS. Yet, in a May 22 9to5Google post about the unconfirmed rumor, Ben Schoon suggests that Samsung might consider sticking with Tizen. He acknowledges Google’s huge app advantage and says that Wear OS offers easier setup and better security. Yet he argues that the Gear watches are superior in both hardware and software and have better battery life.

The story also notes that Google is rumored to be releasing its own Pixel-branded smartwatch in the fall based on Wear OS. Considering that Google never sufficiently opened up Wear OS to enable true innovation, perhaps it should have gone that route in the first place.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.