
The Beauty of Links on Unix Servers

Symbolic and hard links provide a way to avoid duplicating data on Unix/Linux systems, but the uses and restrictions vary depending on which kind of link you choose to use. Let’s look at how links can be most useful, how you can find and identify them, and what you need to consider when setting them up.

Hard vs soft links

Don’t let the names fool you. It’s not an issue of malleability, but a very big difference in how each type of link is implemented in the file system. A soft or “symbolic” link is simply a file that points to another file. If you look at a symbolic link using the ls command, you can easily tell that it’s a symbolic link.
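You can see the difference from the shell with a quick sketch using standard coreutils (the file names here are just examples):

```shell
# Create a file, then both kinds of link to it
echo "hello" > original.txt
ln original.txt hard.txt        # hard link: another name for the same inode
ln -s original.txt soft.txt     # symbolic link: a small file pointing at a name

# A symbolic link is easy to spot: file type "l" and an "->" arrow
ls -l soft.txt

# Hard links are subtler: compare inode numbers and the link count
ls -li original.txt hard.txt    # same inode number, link count of 2

# find can locate symbolic links, or every name sharing an inode
find . -maxdepth 1 -type l
find . -maxdepth 1 -samefile original.txt
```

The restrictions follow from the implementation: hard links cannot span file systems or (normally) point to directories, while symbolic links can do both but break if the target is removed.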

Read more at ComputerWorld

The Next Challenge for Open Source: Federated Rich Collaboration

When Dropbox started the file sync and share movement over a decade ago, later joined by Google Drive, it became popular very fast. Having your data available, synced or via the web interface, with no chance of forgetting to bring that important document and no need for USB sticks, was a huge step forward. But more than having your own data at hand, it enabled sharing and collaboration. No longer emailing documents, no longer being unsure whether you got feedback on the latest version of your draft or fixing errors that had already been fixed before. Usage grew, not only among home users but also business users, who often used the public cloud without the IT department’s approval.

Problems crept up quickly, too. Some high-profile data leaks showed what a big target the public clouds were. Having your data co-mingled with that of thousands of other home and business users means little control over it and exacerbates risks. The strong European privacy protection rules increased the cost of breaches and thus created awareness in Europe, while businesses in Asian countries, especially in the tech sector, disliked the risks with regard to trade secrets. Although there are stronger intellectual property protections, and less emphasis on privacy, in the United States, control over data is becoming a concern there as well.

Open source, self-hosted solutions providing file sync and share began to be used by home, business and government users as a way to achieve this higher degree of privacy, security and control. I was at the center of these developments, having started the most popular open source file sync and share project, a vision I continue to push forward together with the early core contributors and the wider community at Nextcloud.

Open Source and Self Hosting
Hosting their own, open source solution gives businesses the typical benefits of open source:
* Customer driven development
* Long-term viability

Customer driven development
Open source brings in contributions from a wide range of sources, advancing the interests of customers while accelerating innovation. The transparent development and its strict peer review process also ensure security and accountability, which are crucial for a component on which companies rely to protect their proprietary knowledge, critical customer data and more. The stewardship of the Nextcloud business, collaborating with a variety of partners and independent contributors, gives customers the peace of mind that they have access to a solid, enterprise-ready product.

Long-term viability
Where choosing proprietary solutions means betting on a single horse, open source allows customers to benefit from the race regardless of the outcome. Nextcloud features a large and quickly growing, healthy ecosystem with well over 300 contributors in the last 9 months and many dozens of third-party apps providing additional functionality. One can find hundreds of videos and blogs on the web talking about how to implement and optimize Nextcloud installations on various infrastructure setups and there are well over 6K people on our forums asking and answering questions. Besides us and our partners, many independent consultants and over 40 hosting providers all offer support and maintenance of Nextcloud systems. There is a healthy range of choices!

Forward looking
Having your data in a secure place, in alignment with IT policy, is thus possible and thousands of businesses already use our technology to stay in control over their data. Now the question becomes: What comes next?

The Internet and the world wide web were originally designed as distributed and federated networks. The centralized networks have lately enabled users to work together, to collaborate and share more easily. The disconnected, private networks you’d create with self-hosted technologies seem to not be able to match that. This is where Nextcloud’s Federated Cloud Sharing technology comes in. Developed by Bjoern Schliessle and myself some years ago, it enables users on one cloud server to transparently share data with users on another. To share a file to a user on another server, one can simply type in the ‘Federated Cloud ID’, a unique ID similar to an email address. The recipient will be notified and the two servers (if configured to do so) will even exchange address books to, in the future, auto-complete user names for their respective users. In our latest release, we improved integration to the point where users are even notified of any changes and access done by users on the other server, completing the seamless integration experience.

Next level of collaboration
This last feature is what efficient collaboration requires: context! People don’t only want files from other people popping up on their computer — or to have them changed in the background by other users.

Why do I have access to this file or folder? Who shared it with me, and what are the recent changes? Maybe you want a way to directly chat with the person who changed the file? Maybe leave a comment, or maybe directly call the person? And if you are discussing possible changes to a document, why not edit it together collaboratively? Maybe you’d like integration with your calendar to arrange a time to work on the document? Or maybe integration with your email to access the latest version you received there? Maybe having a video call while working on that presentation deck together? Having a shared todo list with someone who isn’t even working in the same organization as you?

Our latest release, Nextcloud 12, introduces a wide range of collaboration features and capabilities, functioning in a federated, decentralized way. Users can call each other through a secure, peer-to-peer audio/video conferencing technology; they can comment, edit documents in real time, and get push notifications when anything of note happens.

At the same time, their respective IT teams continue to be able to ensure company policies around security and privacy are fully enforced.

The open source community is in a unique position to take the lead in this space because collaboration is in our DNA. Open source IS built in a collaborative way: using the internet, using chat, version control, video calling, document sharing and so on. Basically all big open source communities are distributed over different continents while working together in a very efficient way, creating great results. The open source movement is a child of the Internet, using it as a collaboration tool. My own open source company, Nextcloud GmbH, has almost all its employees work from home or co-working places.

So we can and do build privacy-aware and secure software for rich collaboration: alternatives to the proprietary competitors. And successfully so!

If you want to join me, get involved at Nextcloud.

SNAS.io, Formerly OpenBMP Project, Joins The Linux Foundation’s Open Source Networking Umbrella

By Arpit Joshipura, General Manager, Networking and Orchestration, The Linux Foundation

We are excited to announce that SNAS.io, a project that provides network routing topologies for software-defined applications, is joining The Linux Foundation’s Networking and Orchestration umbrella. SNAS.io tackles the challenging problem of tracking and analyzing network routing topology data in real time for those who use BGP as a control protocol: internet service providers, large enterprises, and enterprise data center networks using EVPN.

The topology data collected stems from both layer 3 and layer 2 of the network, and includes IP information, quality of service requests, and physical and device specifics. The collection and analysis of this data in real time allows DevOps, NetOps, and network application developers who design and run networks to work efficiently with large volumes of topology data and to better automate the management of their infrastructure.

Contributors to the project include Cisco, Internet Initiative of Japan (IIJ), Liberty Global, pmacct, RouteViews, and the University of California, San Diego.

Originally called OpenBMP, the project focused on providing a BGP monitoring protocol collector. Since it launched two years ago, it has expanded to include other software components to make real-time streaming of millions of routing objects a viable solution. The name change helps reflect the project’s growing scope.

The SNAS.io collector not only streams topology data; it also parses it, separating the networking protocol headers and then organizing the data based on these headers. Parsed data is then sent to the high-performance message bus Kafka in a well-documented and customizable topic structure.

SNAS.io comes with an application that stores the data in a MySQL database. Other SNAS.io users can access the data either at the message bus layer using Kafka APIs or through the project’s RESTful database API service.

The SNAS.io Project is complementary to several Linux Foundation projects, including PNDA and FD.io, and is a part of the next phase of networking growth: the automation of networking infrastructure made possible through open source collaboration.

Industry Support for the SNAS.io Project and Its Use Cases

Cisco

“SNAS.io addresses the network operational problem of real-time analytics of the routing topology and load on the network. Any NetDev or Operator working to understand the dynamics of the topology in any IP network can benefit from SNAS.io’s capability to access real-time routing topology and streaming analytics,” said David Ward, SVP, CTO of Engineering and Chief Architect, Cisco. “There is a lot of potential linking SNAS.io and other Linux Foundation projects such as PNDA, FD.io, Cloud Foundry, OPNFV, ODL and ONAP that we are integrating to evolve open networking. We look forward to working with The Linux Foundation and the NetDev community to deploy and extend SNAS.io.”

Internet Initiative Japan (IIJ)

“If successful, the SNAS.io Project will provide a great tool for both operators and researchers,” said Randy Bush, Research Fellow, Internet Initiative Japan. “It is starting with usable visualization tools, which should accelerate adoption and make more of the Internet’s hidden data accessible.”

Liberty Global

“The SNAS.io Project’s technology provides our huge organization with an accurate network topology,” said Nikos Skalis, Network Automation Engineer, Liberty Global. “Together with its BGP forensics and analytics, it is well suited to our toolchain.”

pmacct

“The BGP protocol is one of the very few protocols running on the Internet that has a standardized, clean and separate monitoring plane, BMP,” said Paolo Lucente, Founder and Author of the pmacct project. “The SNAS.io Project is key in providing the community a much needed full-stack solution for collecting, storing, distributing and visualizing BMP data, and more.”

RouteViews

“The SNAS.io Project greatly enhances the set of tools that are available for monitoring Internet routing,” said John Kemp, Network Engineer, RouteViews. “SNAS.io supports the use of the IETF BGP Monitoring Protocol on Internet routers. Using these tools, Internet Service Providers and university researchers can monitor routing updates in near real-time. This is a monitoring capability that is long overdue, and should see wide adoption throughout these communities.”

University of California, San Diego

“The Border Gateway Protocol (BGP) is the backbone of the Internet. A protocol for efficient and flexible monitoring of BGP sessions has been long awaited and was finally standardized by the IETF last year as the BGP Monitoring Protocol (BMP). The SNAS.io Project makes it possible to leverage this new capability, already implemented in routers from many vendors, by providing efficient and easy ways to collect BGP messages, monitor topology changes, track convergence times, etc,” said Alberto Dainotti, Research Scientist, Center for Applied Internet Data Analysis, University of California, San Diego. “SNAS.io will not only have a large impact in network management and engineering, but by multiplying opportunities to observe BGP phenomena and collecting empirical data, it has already demonstrated its utility to science and education.”

You can learn more about the project and how to get involved at https://www.SNAS.io.

Red Hat OpenStack Platform 11 Released

Red Hat’s love affair with OpenStack continues with enhanced support for upgrades with composable roles, new networking capabilities, and improved integration with Red Hat CloudForms for cloud management. This latest OpenStack distribution delivers a reliable cloud platform built on the proven backbone of Red Hat Enterprise Linux (RHEL).

OpenStack lets its users assemble their own private cloud service mix. This makes it easier for enterprises to customize OpenStack deployments to fit specific needs, but it also makes those deployments challenging to upgrade. To address this need, Ocata, and thus OpenStack Platform 11, now makes in-place upgrades much easier.

Read more at ZDNet

Enterprise Open Source Programs: From Concept to Reality

How pervasive is open source in today’s businesses? According to the 2016 Future of Open Source Survey from Black Duck and North Bridge, a mere three percent of respondents say they don’t use any open source tools or platforms.

Leveraging open source has also become a key avenue for fostering new ideas and technologies. Gartner’s Hype Cycle for Open Source Software (2016) notes that organizations are using open source today not just for cost savings, but increasingly for innovation. With this in mind, major companies and industries are quickly building out their open source programs, and the open source community is responding.

Open Source as a Business Priority

The Linux Foundation, the OpenStack Foundation, Cloud Foundry, InnerSource Commons, and the Free Software Foundation are just some of the organizations that businesses can turn to when advancing their open source strategies. Meanwhile, countless enterprises have rolled out professional, in-house programs focused on advancing open source and encouraging its adoption.

In a previous article, I covered some of these businesses that may not immediately come to mind. For example, Walmart’s Walmart Labs division has released a slew of open source projects, including a notable one called Electrode, which is a product of Walmart’s migration to a React/Node.js platform. General Electric also might not be the first company that you think of when it comes to moving the open source needle, but GE is a powerful player in open source. GE Software has an “Industrial Dojo” – run in collaboration with the Cloud Foundry Foundation – to strengthen its efforts to solve the world’s biggest industrial challenges.

Over the past couple of years, we have also seen increased focus on open source within vertical industries not historically known for embracing it. As this post notes, LMAX Exchange, Bloomberg, and CME Group are just three of the companies in the financial industry that are innovating with open source tools and components and moving past merely consuming open source software to becoming contributors. For example, you can find projects that Bloomberg has contributed to the open source community on GitHub. Capital One and Goldman Sachs are advancing their open source implementations and programs as well.

The telecom industry, previously known for its proprietary ways, is also embracing open source in a big way. Ericsson, for example, regularly contributes projects and is a champion of several key open source initiatives. You can browse through the company’s open source hub here. Ericsson is also one of the most active telecom-focused participants in the effort to advance open NFV and other open technologies that can eliminate historically proprietary components in telecom technology stacks. Ericsson works directly with The Linux Foundation on these efforts, and engineers and developers are encouraged to interface with the open source community.

Other telecom players who are deeply involved with NFV and open source projects include AT&T, Bloomberg LP, China Mobile, Deutsche Telekom, NTT Group, SK Telekom, and Verizon.

Red Hat maintains a dedicated blog on telecoms transforming their infrastructure and technology stacks with open source tools and components. Likewise, you can find Red Hat’s ongoing coverage of how open source is transforming the oil and gas industry here.

Resources for Leveraging Open Source

Organizations of any size can take advantage of resources, ranging from training to technology, to help them build out their own open source programs and initiatives. On the training front, The Linux Foundation offers courses on everything from Essentials of OpenStack Administration to Software Defined Networking. The Foundation also offers a free ebook, Open Source Compliance in the Enterprise, which can help organizations understand issues related to the licensing, development and reuse of open source software, with practical guidelines on how best to use open source code in products and services.

Organizations can also partner with businesses already leveraging open source, so that they can share resources. The Linux Foundation’s TODO is a partnered group of companies that collaborates on practices and policies for running powerful open source projects and programs. According to TODO:

“Open source is part of the fabric of each of our companies. Between us, our open source programs enable us to use, contribute to, and maintain, thousands of projects – both large and small. These programs face many challenges, such as ensuring high-quality and frequent releases, engaging with developer communities, and contributing back to other projects effectively. The members of this group are committed to working together in order to overcome these challenges. We will be sharing experiences, developing best practices, and working on common tooling. But we can’t do this alone. If you are a company using or sharing open source, we welcome you to join us and help make this happen.”

InnerSource Commons, founded by PayPal, also specializes in helping companies pursue open source methods and practices as well as exchange ideas. You can find the group’s GitHub repositories here, and a free O’Reilly ebook called Getting Started with InnerSource offers more details. The ebook reviews the principles that make open source development successful and includes case study material on how InnerSource has worked at PayPal.

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now.

MapD Open Sources GPU-Powered Database

Since starting work on MapD more than five years ago while taking a database course at MIT, I had always dreamed of making the project open source. It is thus with great pleasure that I announce today that our company is open sourcing the MapD Core database and associated visualization libraries, effective immediately.

The code is available on GitHub under an Apache 2.0 license. It has everything you need to build a fully functional installation of the MapD Core database, enabling sub-second querying across many billions of records on a multi-GPU server. All of our core tech, including our tiered caching system and our LLVM query compilation engine, is contained in today’s open source release.

In conjunction with the open source announcement, we are also excited to announce the founding of the GPU Open Analytics Initiative (GOAI) with Continuum Analytics and H2O.ai. Together we are unveiling our first project, the GPU Data Frame (GDF).

Read more at MapD

Network Microsegmentation Possible with NFV and SDN Combined

Network microsegmentation is foundational to zero-trust architectures. In a microsegmented model, the network knows which systems are allowed to talk to which other systems, in which ways and under what circumstances. Network microsegmentation allows sanctioned traffic to pass, allows each network node to see only what it needs to talk to or listen to and hides the rest. …

In an SDN environment, some of this processing can be done using data plane devices as distributed policy enforcement points. Network functions virtualization (NFV) offers further help to the service provider implementing zero-trust models by making it easier to put security processing in virtual network function (VNF) packages and download it as needed to compute nodes immediately preceding or following (proximate) to the traffic being processed.

Read more at TechTarget

Much Ado About Communication

One of the first challenges an open source project faces is how to communicate among contributors. There are a plethora of options: forums, chat channels, issues, mailing lists, pull requests, and more. How do we choose which is the right medium to use and how do we do it right?

Sadly and all too often, projects shy away from making a disciplined decision and instead opt for “all of the above.” This results in a fragmented community: Some people sit in Slack/Mattermost/IRC, some use the forum, some use mailing lists, some live in issues, and few read all of them.

This is a common issue I see in organizations I’m working with to build their internal and external communities. Which of these channels do we choose and for which purposes? Also, when is it OK to say no to one of them?

Read more at OpenSource.com

SUSE Unveils OpenStack Cloud Monitoring & Supports TrilioVault

Today at the OpenStack Summit 2017 in Boston, MA, SUSE, aside from celebrating its 25th anniversary, announced SUSE OpenStack Cloud Monitoring, a new open source software solution that makes it simple to monitor and manage the health and performance of enterprise OpenStack cloud environments and workloads. In other SUSE-related news, Trilio Data announced that its TrilioVault is Ready Certified for SUSE OpenStack Cloud.

SUSE OpenStack Cloud Monitoring is based on the OpenStack Monasca project. Its main goal is to make it easy for operators and users to monitor and analyze the health and performance of complex private clouds, delivering reliability, performance and high service levels for OpenStack clouds. Through automation and preconfiguration, SUSE OpenStack Cloud Monitoring also aims to reduce costs.

Read more at StorageReview

3 Ways to Run a Remote Desktop on Raspberry Pi


In this post, we will tell you about 3 ways to run Remote Desktop on your Raspberry Pi.

The first one is by using TeamViewer. Using TeamViewer is as easy as pie. You just install TeamViewer on the Raspberry Pi, find the provided login and password, and enter them on your PC. That’s it! No need for a static IP address from your provider, no tricks with setting up port forwarding on your router.

The second way to run Remote Desktop on the RPi is by using VNC. VNC is a graphical desktop protocol that allows you to access the full Raspberry Pi desktop from another PC. So, you can see the start menu and run programs from desktop shortcuts. VNC is simple if your PC and Raspberry Pi are on the same local network. But if you want to connect from the office to your home RPi, you’ll have to do some pretty tricky configuration to set up port forwarding on your home router.

The third way of running Remote Desktop is via ssh + X11 forwarding. It is pretty simple and requires little configuration, but it can only show the windows of individual programs. However, if you are on the same local network as your RPi and are only going to access it from time to time, it is a good option.

Using TeamViewer for Remote Desktop on Raspberry Pi

Raspberry Pi setup

There is no version of TeamViewer available for ARM-based devices such as the Raspberry Pi. Fortunately, there is a way to run TeamViewer on the Raspberry Pi using ExaGear Desktop, which allows running x86 apps on the Raspberry Pi.

1. Obtain your ExaGear Desktop. Unpack the downloaded archive and install ExaGear by running the install-exagear.sh script in the directory with the deb packages and your license key:

$ tar -xvzpf exagear-desktop-rpi2.tar.gz
$ sudo ./install-exagear.sh

2. Enter the guest x86 system using the following command:

$ exagear
Starting the shell in the guest image /opt/exagear/images/debian-8-wine2g

3. Download and install TeamViewer:

$ sudo apt-get update
$ sudo apt-get install wget
$ wget http://download.teamviewer.com/download/teamviewer_i386.deb
$ sudo dpkg -i teamviewer_i386.deb
$ sudo apt-get install -f
$ wget http://s3.amazonaws.com/wine1.6-2g-2g/wine1.6-2g-2g.tar.gz
$ tar -xzvf wine1.6-2g-2g.tar.gz
$ sudo ./teamviewer-fix-2g.sh

4. Now you can run TeamViewer from Raspberry Pi start menu:

Using TeamViewer for Remote Desktop on Raspberry Pi

5. Set up a static password for remote connection in the TeamViewer GUI:

Setup unattended access of TeamViewer on Raspberry Pi

Define password on TeamViewer on Raspberry Pi

Remote Desktop on Raspberry Pi using TeamViewer personal ID

Remember the personal ID and password for remote access to RPi using TeamViewer.

Windows PC setup

1. Download and install TeamViewer for Windows from www.teamviewer.com.

2. Run TeamViewer from the start menu, enter your personal ID in the “Partner ID” field and press “Connect to partner” button:

Remote Desktop on Raspberry Pi using TeamViewer. Enter ID.

Enter your personal password in the new pop-up window and log on:

Remote Desktop on Raspberry Pi using TeamViewer. Enter password.

That’s it! You connected to your Raspberry Pi:

Using TeamViewer for Remote Desktop on Raspberry Pi

Using VNC for Remote Desktop on Raspberry Pi

Raspberry Pi setup

1. Install VNC server on Raspberry:

$ sudo apt-get install tightvncserver

2. Start VNC server:

$ vncserver

On the first run, you’ll be asked to enter a password which will be used to access RPi remotely.

3. Check and note your Raspberry Pi’s IP address:

$ sudo ifconfig

and find the string like

inet addr: 192.168.0.109

The exact address depends on your network; home routers typically hand out addresses in the 192.168.x.x private range, but yours may differ. So, this is your Raspberry Pi’s IP address.

That’s it for RPi setup.

Windows PC setup

1. You will need to download and install a VNC client program. For example, you can use TightVNC (tightvnc.com).

2. Run the downloaded file to install TightVNC client and follow the installation instruction:

Using VNC for Remote Desktop on Raspberry Pi. VNC client installation step 1.

Choose “Custom” setup type:

Using VNC for Remote Desktop on Raspberry Pi. VNC client installation step 2.

Using VNC for Remote Desktop on Raspberry Pi. VNC client installation step 3.

Using VNC for Remote Desktop on Raspberry Pi. VNC client installation step 4.

Using VNC for Remote Desktop on Raspberry Pi. VNC client installation step 5.

Now VNC client is installed.

3. Run the TightVNC client from the start menu. In the Remote Host field, enter the IP address of the Raspberry, a colon, and the display number 1 (in my case it was 192.168.0.109:1 ), and press Connect. Display :1 corresponds to TCP port 5901, i.e. 5900 plus the display number:

Remote access to Raspberry Pi using VNC

That’s it! You connected to your Raspberry Pi:

Remote Desktop on Raspberry Pi using TightVNC

Unfortunately, this method works out of the box only when your PC and Raspberry are located on the same local network. It’s possible to set up a VNC connection when the PC and RPi are on different networks, but it requires tricky configuration of port forwarding on your router.
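One common workaround, sketched here under the assumption that the Pi is reachable over ssh from your location, is to tunnel the VNC session through ssh instead of opening ports on the router:

```shell
# Forward local port 5901 to the VNC display :1 on the Pi (port 5901).
# 192.168.0.109 is the example address used above; substitute your own
# IP or a hostname that is reachable from where you are connecting.
ssh -L 5901:localhost:5901 pi@192.168.0.109

# While this session stays open, point the VNC client at localhost:1;
# the traffic then travels through the encrypted ssh tunnel.
```

This also has the side benefit that the VNC password is no longer the only thing protecting the session, since everything rides on ssh’s authentication and encryption.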

Using ssh + X11 forwarding for Remote Desktop on Raspberry Pi

This case doesn’t require any additional package installation on your Raspberry Pi.

On a Windows PC, do the following:

1. Install Xming X Server for Windows

2. Run Xming Server

3. Run Putty, enter your RPi IP address, select X11 in the options menu, and check the box labeled “Enable X11 forwarding”:

Enable X11 forwarding on Putty for Remote Desktop on Raspberry Pi

4. Log in to the Raspberry Pi and run a program with a GUI:
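If your other machine runs Linux or macOS rather than Windows, you don’t need Putty or Xming at all; a minimal sketch, reusing the example IP address from the VNC section (on macOS this assumes the XQuartz X server is installed and running):

```shell
# -X enables X11 forwarding directly from the command line
ssh -X pi@192.168.0.109

# Once logged in, launching a graphical program opens its window
# on your local display, e.g. the default Raspbian file manager:
pcmanfm
```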

Using ssh + X11 forwarding for Remote Desktop on Raspberry Pi

In case you need the ExaGear Desktop software used in this post, get it here
