
Android Apps on Linux PCs: Now Anbox Tool Runs Smartphone Software Natively

A new open-source project, Anbox, from a Canonical engineer lets you run Android apps natively on Ubuntu and other Linux-powered desktops.

It differs from several existing projects that allow Android apps to run on PCs. Instead of using emulators, Anbox employs Linux namespaces to run Android in a container on the same kernel as the host operating system, allowing Android software to run like native apps on the host.

Simon Fels explains in a blog post that he began the project in 2015 with “the idea of putting Android into a simple container based on LXC and bridging relevant parts over to the host operating system while not allowing any access to real hardware or user data”.

Read more at ZDNet

A 1986 Bulletin Board System Has Brought the Old Web Back to Life in 2017

Today, many can be forgiven for thinking that the digital communications revolution kicked off during the mid-1990s, when there was simply an explosion of media and consumer interest in the World Wide Web. Just a decade earlier, however, the future was now for the hundreds of thousands of people already using home computers to communicate with others over the telephone network. The online culture of the 1980s was defined by the pervasiveness of bulletin board systems (BBS), expensive telephone bills, and the dulcet tones of a 1200 baud connection (or 2400, if you were very lucky). While many Ars readers certainly recall bulletin board systems with pixelated reverence, just as many are likely left scratching their heads in confusion (“what exactly is a BBS, anyway?”).

It’s a good thing, then, that a dedicated number of vintage computing hobbyists are resurrecting these digital communities that were once thought lost to time. With some bulletin board systems being rebooted from long-forgotten floppy disks and with some still running on original 8-bit hardware, the current efforts of these seasoned sysops (that is, system administrators) provide a very literal glimpse into the state of online affairs from more than three decades ago.

Read more at Ars Technica

Singularity Containers for HPC, Reproducibility, and Mobility

Containers are an extremely mobile, safe, and reproducible computing infrastructure that is now ready for production HPC. In particular, the freely available Singularity container framework has been designed specifically for HPC workloads. The barrier to entry is low, and the software is free.

At the recent Intel HPC Developer Conference, Gregory Kurtzer (Singularity project lead and LBNL staff member) and Krishna Muriki (Computer Systems Engineer at LBNL) provided a beginning and advanced tutorial on Singularity. One of Kurtzer’s key takeaways: “setting up workflows in under a day is commonplace with Singularity”.

Singularity was designed so that applications running in a container have the same “distance” to the host kernel and hardware as natively running applications.
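
For a flavor of how this looks in practice, here is a minimal sketch (assuming Singularity is installed and a container image named container.img has already been built):

$ singularity exec container.img cat /etc/os-release

This prints the container’s own OS details while the process runs directly on the host kernel, with no hypervisor layer in between.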

Read more at The Next Platform

Micro – A Modern Terminal Based Text Editor with Syntax Highlighting

Micro is a modern, easy-to-use and intuitive cross-platform terminal-based text editor that works on Linux, Windows and macOS. It is written in the Go programming language and designed to utilize the full capabilities of modern Linux terminals.

It is intended to replace the well-known nano editor by being easy to install and use on the go. It also aims to be pleasant to use around the clock (because you either prefer to work in the terminal, or you need to operate a remote machine over SSH).

Read more at Tecmint

Trivial Transfers with TFTP, Part 3: Usage

In previous articles, we introduced TFTP and discussed why you might want to use it, and we looked at various configuration options. Now let’s try and move some files around. You can do this with some comfort now that you know how to secure your server a little better.

Testing 1, 2, 3

To test your server, you obviously need a client to connect with. Thankfully, on Debian and Ubuntu systems we can install one very easily:

# apt-get install tftp

Red Hat derivatives should manage a client install like so:

# yum install tftp

If you look up your server’s IP address using a command like the one below, then it’s possible to connect to your TFTP server from anywhere (assuming that your TCP Wrappers configuration or iptables rules let you, of course).

# ip a

Once you know the IP address, it’s very simple to get going. You can connect like this to the server from the client:

# tftp 10.10.10.10

Now you should be connected. (If it didn’t work, check your firewall rules for UDP port 69 on your TFTP server’s IP address.) Next, you can run a “status” command as follows:

tftp> status
Connected to 192.168.0.9.
Mode: netascii Verbose: off Tracing: off
Rexmt-interval: 5 seconds, Max-timeout: 25 seconds

At this point, I prefer to use verbose output by simply typing this command:

tftp> verbose

You can opt to download binaries or plain text files by typing this for plain text:

tftp> ascii

Or, you can force binaries to download correctly with:

tftp> binary

To give us some content to download, I created a simple text file like this:

# echo hello > HERE_I_AM

This means that the file HERE_I_AM contains the word “hello”. I then moved that file into our default TFTP directory, which we saw in use previously in the main config file, /etc/inetd.conf. That directory — from which our faithful daemon is serving — is called /srv/tftp, as we can see in Listing 3.
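
That move is nothing more exotic than the following command (assuming the /srv/tftp directory from the earlier configuration; adjust the path if your setup differs):

# mv HERE_I_AM /srv/tftp/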

Because this is just a plain text file, there’s little need to enable binary mode, and we’ve already typed “verbose”, so now it’s just a case of transferring our file.

If you’re at all familiar with FTP on the command line, then you’ll have no difficulty picking up the parlance. It is simply “get” to receive and “put” to place. My sample file HERE_I_AM can be retrieved as follows.

tftp> get HERE_I_AM
getting from 192.168.0.9:HERE_I_AM to HERE_I_AM [netascii]
Received 7 bytes in 0.1 seconds [560 bits/sec]

The above example shows the verbose output produced when that mode is enabled. You can glean useful information, such as the fact that we’re using “netascii” mode rather than binary mode. Additionally, you can see how many bytes were transferred and how quickly. In this case, the data was seven bytes in size and took a tenth of a second to complete, at half a kilobit (or so) per second.

Compare and contrast that to the non-verbose mode output, and I’m sure you’ll agree it’s worth using:

tftp> get HERE_I_AM
Received 7 bytes in 0.0 seconds
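
Uploading works the same way in reverse with “put”, provided your server is configured to allow writes (many TFTP daemons refuse to create or overwrite files unless explicitly told otherwise):

tftp> put HERE_I_AM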

If you feel the need to obfuscate your TFTP server’s port number, then after editing the /etc/services file you need to connect with your client software like so:

# tftp 10.10.10.10 11111
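
For reference, and assuming your distribution ships the standard entry, the edit to /etc/services amounts to turning this line:

tftp            69/udp

into something like this (11111 simply being our example port):

tftp            11111/udp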

Additionally, don’t be too frightened of requesting multiple files on one line. Achieve just that by using this syntax:

tftp> get one.txt two.txt three.txt four.txt five.txt

If you run into problems, there are a couple of troubleshooting options to explore. You might, for example, have a saturated network link due to a broadcast storm or a misbehaving device causing network oddities. You will be pleased to learn, however, that you can adjust timeouts. First, from the TFTP client prompt, we can set timeouts on a per-packet basis like so:

tftp> rexmt 10

This sets the retransmission timeout, applied per packet, to 10 seconds.

To set the total-transfer timeout for the entire transaction, adjust the following setting like this:

tftp> timeout 30

Another useful tool for debugging is the “trace” functionality. It can be enabled as follows:

tftp> trace
Packet tracing on.

Now, each transfer will look very noisy, as below, which should help with your troubleshooting:

tftp> get HERE_I_AM
sent RRQ <file=HERE_I_AM, mode=netascii>
received DATA <block=1, 512 bytes>
sent ACK <block=1>
received DATA <block=2, 512 bytes>
sent ACK <block=2>
received DATA <block=3, 512 bytes>
[..snip..]

From the above information, you should be able to tell at which point a transfer fails and perhaps discern a pattern of behavior.

Incidentally, if you want to quit out of the TFTP client prompt, then hitting the “q” key should suffice.

In this article, we have seen how to connect to a TFTP server and transfer files. As mentioned, it’s a good idea to firewall UDP port 69 from the outside world if you experiment with this software, and especially if you deploy it in production.

I recommend that you lock down your server so that it can only be accessed by a few machines, using either iptables or TCP Wrappers. Although it’s an older technology, TFTP is undoubtedly still a useful tool to have in your toolbox, and it shouldn’t be dismissed as only being useful for passing configuration to workstations or servers as they boot up.
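
As a minimal sketch of the iptables approach (the 192.168.0.0/24 network below is just an example; substitute your own trusted range), this pair of rules accepts TFTP traffic from the local network and drops everything else:

# iptables -A INPUT -p udp --dport 69 -s 192.168.0.0/24 -j ACCEPT

# iptables -A INPUT -p udp --dport 69 -j DROP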

Systemd

Incidentally, to start and stop the TFTP server (or restart it after making a config change) on modern systemd systems you can run this command, substituting “stop” with “restart” or “start” where needed.

# systemctl stop openbsd-inetd

On older systems, one of these commands will in most cases suffice:

# /etc/init.d/openbsd-inetd start

# service openbsd-inetd restart

Simply change openbsd-inetd to the name of the appropriate inetd or xinetd script for your distribution. Remember that you can find out service names by running a command like this:

# ls /etc/init.d

This even works on modern systemd versions, but you need to look closely at the resulting file listing because, let’s face it, conduits to “inetd” might be considered a throwback in some circles and can have strange filenames.
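
If you’d rather ask systemd directly, a command such as this one should list the service units it knows about, strange filenames and all:

# systemctl list-units --type=service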

EOF

If you decide to take your life into your own hands and serve TFTP across the Internet, let me offer one word of warning; well, one acronym, actually: NAT. If Network Address Translation is involved in remote connections, then you may struggle with TFTP transfers because they use UDP. You need the NAT router to act as a slightly more advanced proxy to make this work. You might look at the renowned security software pfSense, which can apparently assist.

We have looked at a number of TFTP’s features. Clearly, there are specific circumstances in which the excellent TFTP is a useful tool that can be used quickly and effectively with little configuration.

At other times, admittedly, TFTP won’t quite cut the mustard. In such cases, SFTP and standard FTP might be more appropriate. Before searching for those packages, however, have a quick look to see whether the features you need are present within TFTP’s toolkit. You might be pleasantly surprised at what you find. After all, many of the tools we use today hail from the same era as TFTP.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in Linux System Administration! Check out the Essentials of System Administration course from The Linux Foundation.

Read previous articles:

Trivial Transfers with TFTP, Part 1: Why Use It?

Trivial Transfers with TFTP, Part 2: Configuration

Open Source Project Directors in Cloud, Blockchain, IoT, SDN to Speak at Open Source Summit in Japan

Executive directors from top open source projects in cloud computing, blockchain, Internet of Things, and software-defined networking will keynote next month at Open Source Summit Japan, The Linux Foundation has announced. The full agenda, now available on the event website, also features a panel of Linux kernel developers and The Linux Foundation Executive Director Jim Zemlin.

LinuxCon, ContainerCon and CloudOpen have combined under one umbrella name in 2017 – Open Source Summit. More than 600 open source professionals, developers and operators will gather May 31-June 2 in Tokyo to collaborate, share information, and learn about the latest in open technologies, including Linux, containers, cloud computing and more.

Confirmed keynote speakers at this year’s event include:

  • Brian Behlendorf, Executive Director, Hyperledger

  • Philip DesAutels, Sr. Director of IoT, The Linux Foundation

  • Arpit Joshipura, General Manager, Networking, The Linux Foundation

  • Abby Kearns, Executive Director, Cloud Foundry Foundation

  • Jim Zemlin, Executive Director, The Linux Foundation

  • Linux Kernel Panel with Alice Ferrazzi, Gentoo Kernel Project Leader; Greg Kroah-Hartman, Linux Foundation Fellow; Steven Rostedt, Red Hat; and Dan Williams, Intel

Session highlights include:

  • TensorFlow in the Wild: From Cucumber Farmer to Global Insurance Firm – Kazunori Sato, Google

  • Fast releasing and Testing of Gentoo Kernel Packages and Future Plans of the Gentoo Kernel Project – Alice Ferrazzi, Gentoo Kernel Project Leader

  • Testing at Scale – Andrea Frittoli, IBM

  • Device-DAX: Towards Software Defined Memory – Dan Williams, Intel

  • Kubernetes Ground Up – Vishnu Kannan, Google

View the full agenda of sessions.

Linux.com readers get 5% off the “attendee” registration with code LINUXRD5. Register now and save $150 through April 16.

Automotive Linux Summit (ALS) is co-located with Open Source Summit Japan. Attendees may add ALS registration at no additional charge.

Applications for diversity and needs-based scholarships are also being accepted.

Top 5 Programming Languages for DevOps

I’ve been focused on infrastructure for the majority of my career, and the specific technical skills required have shifted over time. In this article, I’ll lay out five of the top programming languages for DevOps, and the resources that have been most helpful for me as I’ve been adding those development skills to my infrastructure toolset.

Knowing how to rack and stack servers isn’t an in-demand skill at this stage. Most businesses aren’t building physical datacenters. Rather, we’re designing and building service capabilities that are hosted in public cloud environments. The infrastructure is configured, deployed, and managed through code. This is the heart of the DevOps movement—when an organization can define their infrastructure in lines of code, automating most (if not all) tasks in the datacenter becomes possible.

Read more at OpenSource.com

Tracking the Explosive Growth of Open-Source Software

Many of today’s hottest new enterprise technologies are centered around free, “open-source” technology. As a result, many big companies — from financial giants to retailers to services firms — are building their businesses around new, community-based technology that represents a sea change from the IT practices of the past.

But how can corporate customers — and investors — evaluate all these new open-source offerings? How can they tell which projects (often strangely named: Ansible, Vagrant, Gradle) are generating the most customer traction? Which ones have the biggest followings among software developers, and the most potential to capture market share?

Read more at TechCrunch

Things I Learned Managing Site Reliability for Some of the World’s Busiest Gambling Sites

For several years I managed the 3rd line site reliability operation for many of the world’s busiest gambling sites, working for a little-known company that built and ran the core backend online software for several businesses that each at peak could take tens of millions of pounds in revenue per hour. I left a couple of years ago, so it’s a good time to reflect on what I learned in the process.

In many ways, what we did was similar to what’s now called an SRE function (I’m going to call us SREs, but the acronym didn’t exist at the time). We were on call, had to respond to incidents, made recommendations for re-engineering, provided robust feedback to developers and customer teams, managed escalations and emergency situations, ran monitoring systems, and so on. 

I’m going to focus here on process and documentation, since I don’t think they’re talked about usefully enough where I do read about them.

Read more at Zwischenzugs

Google Brings SDN to the Public Internet

Google unveiled to the outside world its peering edge architecture — Espresso. At the Open Networking Summit (ONS), Google Fellow Amin Vahdat said Espresso is the fourth pillar of Google’s software-defined networking (SDN) strategy. Its purpose is to bring SDN to the public Internet.

Espresso, which has been in production for over two years, already routes 20 percent of Google’s total traffic to the Internet. Previously, Google ran protocols on high-end routers and peered with its partners. But Vahdat said those routing protocols had a very local view with the goal of simply finding a path between source and destination. The goal was not to find the best path or to dynamically shift paths.

Read more at SDxCentral