
Encrypt Your Cloud Files With Cryptomator

Cryptomator is a free and open source client-side encryption solution for your cloud files, available for Linux, Windows and Mac OS X, as well as iOS. An Android app is currently under development. 

Cryptomator is advertised as being developed specifically to encrypt your cloud files from services such as Dropbox, Google Drive, and Mega, as well as other cloud storage services that synchronize with a local directory. 
 
Because the encryption is done on the client side, no unencrypted data is ever shared with any online service.
 
Furthermore, you can use Cryptomator to create as many vaults as you want, each with its own password.

Read more at Web UPD8

How to Install Cygwin, a Linux-like Commandline Environment for Windows

During the last Microsoft Build Developer Conference, held from March 30th to April 1st, Microsoft made an announcement and gave a presentation that surprised the industry: beginning with Windows 10 build 14316, it would…


Facebook promises release of own ‘modular routing platform’

Facebook has promised to open source a “modular routing platform” it says powers many of its own networks.

“Open/R” was developed to power Facebook’s Terragraph WiFi networks. Now The Social Network says the more it played with the code, the more it became apparent it was fit for general purpose networking.

The platform looks to offer a different take on software-defined networking and on the development of interoperable networks.

“To create an interoperable standard, the industry’s process is often lengthy due to code being built independently by multiple vendors and then slowly deployed to their customer networks,” writes Facebook’s Petr Lapukhov. “Furthermore, every vendor has to accommodate for the demands of numerous customers — complicating the development process and requiring features that are not always useful universally.”

Read more at The Register.

Networks need automation — just ask the U.S. military

IT professionals are looking to software-defined networking to automate what are still complex and vulnerable systems controlled by human engineers. Major General Sarah Zabel knows where they’re coming from: the general summed up what many enterprises are saying about software-defined networking.

Zabel is the vice director of the Defense Information Systems Agency (DISA), which provides IT support for all U.S. combat operations. Soldiers, officers, drones, and the president all rely on DISA to stay connected. Its network is the epitome of a system that’s both a headache to manage and a prime hacking target.

DISA is a case in point. With 4.5 million users and 11 core data centers, its infrastructure generates about 10 million alarms per day, Zabel said. Approximately 2,000 of those become trouble tickets. These aren’t just for users who can’t get into Outlook: A lost circuit could cause a battlefield surveillance drone to abort its mission and return to base, or could cut off commanders in the field from their superiors.

Read more at CIO.com.

ODPi Won’t Fork Hadoop, Pledges Support for Apache Software Foundation with New Gold Sponsorship

The folks at the Open Data Platform Initiative (ODPi) have heard the concerns and the criticisms of the Hadoop community, and today John Mertic, the standards organization’s Director of Program Management, took the stage at Apache: Big Data in Vancouver to clear the air.

Contrary to the Hadoop community’s concerns, ODPi does not want to take over the development of Hadoop, nor does it want to fork it, Mertic said.

The ODPi wants the big data projects based at the Apache Software Foundation, including Hadoop, to continue to innovate, try new things and splash around in the code base’s pool – digitally speaking, of course.

What ODPi intends to do is keep the companies looking to use Hadoop in production downstream from getting wet while the ASF is making waves.

ASF for innovation, ODPi for standards

ODPi was formed last year by dozens of leading tech companies, including Hortonworks, IBM, and Pivotal, as a collaborative project at The Linux Foundation to develop a common reference platform called ODPi Core. The nonprofit has been developing standards around a few Apache projects; its Runtime Spec and test suite cover Hadoop, including HDFS, YARN, and MapReduce. The Operations Spec, due out this summer, will focus on Ambari.

To show the organization’s commitment to the ASF’s mission, Mertic announced that ODPi is now a Gold Sponsor of the open source foundation.

“We want the ASF to be the home of innovation,” Mertic said. “What we want to bring to the table is those use cases from industry; [to show] how are people using Hadoop.”

Mertic said that end users and independent software vendors (ISVs) have been frustrated by inconsistencies across the various Hadoop distributions, which is having a dampening effect on investment. He said that while there are 15 generally accepted common components in a Hadoop distribution, the versions of those components often differ from distro to distro.

“Both our organizations are laser-focused – we want a stronger Hadoop ecosystem,” Mertic said of the ASF and ODPi. “But let’s be honest, it hasn’t been easy.

“The Apache project really focuses on those raw components, the peanuts and the chocolate, if you will. But if you’re an end user, you’re looking for that Reese’s Peanut Butter Cup.”

By developing a layer of standards for Hadoop distributions, ODPi aims to give end users and ISVs a consistent level of performance and progression, spurring adoption and investment.

“From the ODPi perspective, we find this to be the place where we need to provide a baseline group of standards so people know what to expect,” Mertic said.

Mertic gave the following quote by Robert W. Lane, CEO of Deere & Company, as the perfect encapsulation of the ODPi’s mission:

“A sustainable environment requires increased productivity; productivity comes about by innovation; innovation is the result of investment; and investment is only possible when a reasonable return is expected. The efficient use of money is more assured when there are known standards in which to operate.”

https://www.youtube.com/watch?v=XUl7vlVwNaI&list=PLGeM09tlguZQ3ouijqG4r1YIIZYxCKsLp


Watch Live Keynote From Hadoop Creator Doug Cutting at Apache Big Data Today

Apache: Big Data and ApacheCon take place this week, May 9 – 13, in Vancouver, B.C., and The Linux Foundation is streaming the keynotes live for those who can’t attend.

Apache: Big Data gathers the Apache projects, people and technologies working in Big Data and is the only event that brings together the full suite of Big Data open source projects. The livestream today, May 11, begins at 9 a.m. Pacific. Keynote speakers today include:

  • Seshu Adunuthula, head of analytics infrastructure at eBay

  • Doug Cutting, co-creator of Hadoop, chief architect at Cloudera

  • Ashish Thusoo, co-founder & CEO of Qubole

See the full agenda of keynotes from Apache: Big Data


ApacheCon is the annual conference of The Apache Software Foundation. The Apache and open source community will gather May 11-13 to learn about and collaborate on the technologies and projects driving the future of open source, web technologies and cloud computing. Keynote speakers at ApacheCon include:

  • Ross Gardler, president of The Apache Software Foundation

  • Stephen O’Grady, co-founder of RedMonk

  • Sam Ramji, chief executive officer of Cloud Foundry Foundation

See the full agenda of keynotes from ApacheCon and sign up for free live streaming of ApacheCon keynotes now.

Can’t catch the live stream? Don’t worry: you can still register now to receive the recordings of keynotes after the conferences end.

 

Spark 2.0 Will Be Faster, Easier for App Development, and Tackle Streaming Data

It only makes sense that, as the community of Spark contributors got bigger, the project would get even more ambitious. So when Spark 2.0 comes out in a matter of weeks, it’s going to have at least three robust new features, according to Ion Stoica, co-founder of Databricks and keynote speaker at Apache: Big Data in Vancouver on Tuesday afternoon.

“Spark 2.0 is about taking what has worked and what we have learned from the users and making it even better,” Stoica said.

Queries will be more performant – the goal is 10x faster – thanks to Project Tungsten, an ongoing effort to improve memory and CPU efficiency for Spark applications. It has succeeded in three ways: cache-aware computation that uses algorithms and data structures to exploit the memory hierarchy; code generation that exploits modern compilers and CPUs; and the use of application semantics to avoid the overhead of garbage collection and the JVM object model.

“The more semantics you know the better you can optimize the applications,” Stoica said.

Spark 2.0 will ship with even more components from Project Tungsten, which has been rolling out in pieces, across multiple releases, since Spark 1.4 about a year ago. 

Spark 2.0 will also feature improved APIs that make it “even easier” to write applications for Spark, a nod to the influx of data scientists now using Spark who aren’t necessarily full-blown developers or database admins. Part of this work is the introduction of the Dataset API. Datasets are statically typed extensions that support Resilient Distributed Dataset (RDD)-like operations; combined with Spark’s DataFrames, they create a best-of-both-worlds approach.

Stoica said each library in Spark – things like graphing libraries, machine learning libraries, and so on – will be rewritten to include datasets.
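To make the Dataset idea concrete, here is a minimal, self-contained Scala sketch of the typed-plus-relational hybrid described above; the Person case class and sample data are invented for illustration, not taken from the talk:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical record type used only for this example.
case class Person(name: String, age: Int)

object DatasetSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("DatasetSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // A statically typed Dataset: field access is checked at compile
    // time, and RDD-like operations such as filter and map are available.
    val people = Seq(Person("Ada", 36), Person("Linus", 46)).toDS()
    val adults = people.filter(_.age >= 18)

    // The same data viewed as an untyped DataFrame for relational queries.
    adults.toDF().groupBy("age").count().show()

    spark.stop()
  }
}
```

The typed filter catches mistakes at compile time, while the DataFrame view keeps Spark SQL’s relational optimizations, which is the best-of-both-worlds combination described above.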

The third major improvement is increased support for streaming data through what Stoica called “infinite dataframes,” tables to which new entries are constantly appended.

“We’re going to integrate support for interactive and batch queries,” Stoica said. “It’s not just streaming, it’s what we’re going to call continuous applications.”
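As a rough sketch of what an “infinite dataframe” looks like in code, the following Scala example uses Spark 2.0’s streaming DataFrame API; the socket source, host, and port are illustrative assumptions rather than details from the keynote:

```scala
import org.apache.spark.sql.SparkSession

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("InfiniteDataFrameSketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // The stream is exposed as an unbounded DataFrame: new rows are
    // appended as data arrives, yet it is queried like a static table.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // The same batch-style query runs continuously over the stream.
    val counts = lines.as[String]
      .flatMap(_.split(" "))
      .groupBy("value")
      .count()

    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
```

Because the query reads like a batch query over a static table, interactive and batch workloads can share the same code path, which is the point of the “continuous applications” Stoica described.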

Stoica ran a demo of the streaming capabilities on his company Databricks’ Spark distribution, showing Twitter clusters computed on a map in real time. But the implications are much bigger for running analytics on streaming data, given the many systems and languages that connect to Spark as standard. This will be huge, he said, for applications like fraud detection or updating machine learning algorithms in real time.

“Everyone knows that streaming is more and more important,” Stoica said. “You want to operate and do analytics on fresh data. You want the ability to do queries on the data that was just streamed.”
 

https://www.youtube.com/watch?v=9xSz0ppBtFg&list=PLGeM09tlguZQ3ouijqG4r1YIIZYxCKsLp


4 Container Networking Tools to Know

With so many new cloud computing technologies, tools, and techniques to keep track of, it can be hard to know where to start learning new skills. This series on next-gen cloud technologies aims to help you get up to speed on the important projects and products in emerging and rapidly changing areas such as software-defined networking (SDN), containers, and the space where they coincide: container networking.

The relationship between containers and networks remains challenging for enterprise container deployment. Containers need networking functionality to connect distributed applications. Part of the challenge, according to a recent Enterprise Networking Planet article, is “to deploy containers in a way that provides the isolation they need to function as their own self-contained data environments while still maintaining effective connectivity.”

Docker, the popular container platform, uses software-defined virtual networks to connect containers with the local network. Additionally, it uses Linux bridging features and virtual extensible LAN (VXLAN) technology so containers can communicate with each other in the same Swarm, or cluster. Docker’s plug-in architecture also allows other network management tools, such as those listed below, to control containers.
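As a small illustration of that model, the following Scala snippet simply shells out to the docker CLI to create a VXLAN-backed overlay network and attach a container to it; the network and container names are invented, and the overlay driver assumes a Swarm (or an external key-value store) is already configured:

```scala
import scala.sys.process._

object DockerNetworkSketch {
  def main(args: Array[String]): Unit = {
    // Create an overlay network; Docker backs it with VXLAN so that
    // containers on different Swarm hosts share one virtual network.
    Seq("docker", "network", "create", "--driver", "overlay", "demo-net").!

    // Containers attached to the same overlay network can reach each
    // other by name, even across hosts.
    Seq("docker", "run", "-d", "--name", "web",
      "--network", "demo-net", "nginx").!
  }
}
```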

Innovation in container networking has enabled containers to connect with other containers across hosts. This lets developers start an application in a container on a host in a development environment and transition it through testing and into production, enabling continuous integration, agility, and rapid deployment.

Container networking tools help accomplish container networking scalability, mainly by:

1) enabling complex, multi-host systems to be distributed across multiple container hosts.

2) enabling orchestration for container systems spanning a tremendous number of hosts across multiple public and private cloud platforms.

John Willis speaking at Open Networking Summit 2016.
For more information, check out the Docker Networking Tutorial video, which was presented by Brent Salisbury and John Willis at the recent Open Networking Summit (ONS). This and many other ONS keynotes and presentations can be found here.

Container networking tools and projects you should know about include:

Calico — The Calico project (from Metaswitch) leverages Border Gateway Protocol (BGP) and integrates with cloud orchestration systems for secure IP communication between virtual machines and containers.

Flannel — Flannel (previously called rudder) from CoreOS provides an overlay network that can be used as an alternative to existing SDN solutions.

Weaveworks — The Weaveworks projects for managing containers include Weave Net, Weave Scope, and Weave Flux. Weave Net is a tool for building and deploying Docker container networks.

Canal — Just this week, CoreOS and Tigera announced the formation of a new open source project called Canal. According to the announcement, the Canal project aims to combine aspects of Calico and Flannel, “weaving security policy into both the network fabric and the cloud orchestrator.”

You can learn more about container management, software-defined networking, and other next-gen cloud technologies through The Linux Foundation’s free “Cloud Infrastructure Technologies” course, a massive open online course offered through edX. Registration for this course is open now, and course content will be available in June.

How Linux Kernel Development Impacts Security

At CoreOS Fest on May 9, Greg Kroah-Hartman, maintainer of the stable Linux kernel, declared that almost all bugs can be security issues. The Linux kernel is a fast-moving project, he said, and it’s important for both users and developers to update to new releases quickly to remain up to date and secure.

Kroah-Hartman is a luminary in the Linux community and is employed by the Linux Foundation, publishing on average a new Linux stable kernel update every week. In recent years, he has also taken upon himself the task of helping to author the “Who Writes Linux” report that details the latest statistics on kernel development. He noted that, from April 2015 to March 2016, there were 10,800 new lines of code added, 5,300 lines removed and 1,875 lines modified in Linux every day.

Read more at eWeek

HPE’s IoT Platform Supports oneM2M, LoRa, SigFox

HPE looks to make it easier for enterprises and service providers to connect and manage Internet of Things (IoT) devices with the debut of its IoT Platform 1.2. The platform is aligned with the oneM2M ETSI industry standard and will also support long-range, low-power networks such as LoRa and SigFox.

HPE said its platform, which has been announced in conjunction with the IoT World 2016 conference in Santa Clara, Calif., this week, will support a variety of protocols including cellular, radio, WiFi, and Bluetooth.

Read more at SDx Central