
Only 2 More Weeks to Submit Your Talk to APIStrat 2017

The API Strategy & Practice Conference, which will be held Oct. 31 – Nov. 2 in Portland, OR, provides a vendor-neutral event for discussion of the latest API topics and will bring together developers, IT teams, business users, executives, and others to discuss opportunities and challenges in the API space. The deadline for APIStrat proposals is June 16 at 11:59 p.m. Pacific.

Submit a proposal now!

APIStrat — which is now a Linux Foundation event jointly produced with the Open API Initiative (OAI) — invites you to share your creative ideas, enlightening case studies, best practices, or technical knowledge. We want to make APIStrat the best place to get to know the API community and to share your ideas and work. If you haven’t presented at APIStrat or other conferences before, we’d especially like to hear from you!

With your participation, the event aims to encourage the open collaboration, discussions, and debates necessary to make APIStrat successful. APIStrat is a working conference intended for professional networking and collaboration in the API community, and we work closely with our attendees, sponsors, and speakers to help keep the event professional, welcoming, and friendly.

Join the discussion by submitting your proposal now.

This Week in Open Source: New Toyota Camry Powered by AGL, Sudo Vulnerability Patches & More

This week in Linux and OSS news, the 2018 Toyota Camry’s infotainment system is powered by Automotive Grade Linux, a high-severity Sudo vulnerability gets patched, and more! Read on to stay in the open source know.

1) The 2018 Toyota Camry will come loaded with Automotive Grade Linux (AGL) and Entune 3.0.

Toyota’s Latest Infotainment System is Powered By Linux – Engadget

2) The Linux and UNIX utility Sudo was recently found to contain a “high severity” vulnerability; Linux distros such as Red Hat and Debian have already pushed out patches.

Patches Available For Linux Sudo Vulnerability – Threatpost

3) “[Containers and hypervisors] are beginning to merge,” writes Liam Proven. Here’s what that means for the enterprise.

The Linux Cloud Swap That Spells Trouble For Microsoft and VMware

4) Red Hat’s acquisition of Codenvy “will add additional cloud tools to enable developers to enhance their container-based and cloud applications.”

Red Hat Buying Cloud Development Tools Vendor Codenvy

5) Online-only supermarket Ocado recently announced Kubermesh, an open source package aimed at simplifying data center architectures for “smart factories.”

Open Source Solution For Smarter Warehouses – Huffington Post

AryaLinux Focuses More on Source Than Simplicity

I remember well the days of Linux when the desktop was a challenge to get up and running, keep running, and configure to meet your daily needs. It was a tinkerer’s dream come true but also sent many users back to a path more travelled. For those of us who stuck it out, the reward was considerable: Stability, security, and the bragging rights that we’d achieved something others had not.

That was then. Today? People just want to get their work done. Bragging rights ring hollow in the modern world, so the idea of working with a desktop platform that requires unnecessary effort is hard to justify.

However, there are still those who prefer a Linux distribution that focuses more on source than simplicity. That’s where the likes of AryaLinux come in. Arya is a source-based Linux distribution, hailing from India, that has been pieced together using the guidelines from Linux From Scratch (LFS). That, in and of itself, is a bit of information that would send many users packing. However, the developers of Arya have done something wise; they’ve created a live distribution that lets you easily install the latest version of Arya, with either the Xfce or Mate desktop to make things simple. There is, of course, one caveat. The installation of AryaLinux isn’t exactly as straightforward as you might expect for a live distribution.

Let’s take a look at what it takes to get AryaLinux installed and then get a glimpse at why this distribution might be the one for you.

Installation

Spinning up AryaLinux from the downloadable ISO image is exactly as you might expect. Once you burn the image onto a DVD (the image is 1.8GB, so it won’t fit on a CD) or create a bootable USB drive, you will find yourself on either the Mate or the Xfce desktop (I chose the Xfce spin). On the desktop, you’ll find an icon marked AryaLinux Installer. Don’t click that yet. Why? The Arya installer doesn’t include a partitioner; so, if you attempt to install without having first created the necessary partition(s), the installation will fail.
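If you want to create that bootable USB drive from the command line, something like the following generic approach should work (a sketch, not Arya-specific instructions; aryalinux.iso and /dev/sdX are placeholders, and dd will overwrite everything on the target device, so verify it first):

# Confirm which device is the USB stick
lsblk

# Write the ISO to the stick (destructive; replace the placeholders)
sudo dd if=aryalinux.iso of=/dev/sdX bs=4M status=progress
sudo sync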

The good news on the partitioning front is that the developers have included the GParted tool in the live distribution, so you can create the necessary partitions. Before you launch the installer, click the desktop menu (three vertical lines in the upper left corner of the desktop) and then click System > GParted. When GParted fires up, you will need to create (at least) a root partition (Figure 1).

Figure 1: Using GParted on Arya.

Do note that you need a minimum of 20GB of space to install Arya; anything less will cause the installer to fail. After you’ve created the root partition with GParted, you can close the tool and then fire up the Arya Installer.

For the most part, the Arya installer is very straightforward. The only hiccup that might trip you up is the partition selector. The first thing you must do is select the partition you just created as the root partition. Once you have that done, you can click Next (Figure 2)—unless you want your /home partition to exist outside of the root partition (in which case, you must have already created that partition with GParted as well).

Figure 2: The Arya partition selector.

Beyond that, you should have no problems with the remainder of the Arya installation. When the installer completes, reboot the system, and log into your shiny new (source-based) Linux distribution.

What’s to like?

I should preface this by saying that neither Xfce nor Mate has ever been my desktop of choice (I’ve always veered toward more modern takes on the desktop, such as Elementary OS or GNOME). That said, the Xfce instance I worked with was quite nice. There’s nothing surprising here; it all works without a single issue. Out of the box, you’ll find the usual suite of desktop software.

Of the default software, the only piece that isn’t out of date is Xfce. The good news, however, is that Arya does include a package manager, called Alps. With this command-line-only tool, you can upgrade software with relative ease. First open a terminal window and issue the command su. When prompted, type your administrator password (which is set during installation). At the new Bash prompt, you can upgrade software like so:

alps upgrade PACKAGENAME

Say, for example, you want to upgrade the Thunderbird email client. To do this, you will issue the command:

alps upgrade thunderbird

Unlike an upgrade on, say, Ubuntu, the process on Arya can take some time, because Alps downloads the newest version of the source and then builds and installs from there. Upgrading Thunderbird from the default version to the latest took nearly two hours (considerably longer than it took to install the operating system), which is a lot of time for upgrading an email client. In the end, however, I could rest assured knowing I had the latest iteration of Thunderbird, built from source. Unfortunately, there is no mechanism within Arya to run a full upgrade; you must upgrade your desktop one application at a time.

As far as performance is concerned, Arya runs like a champ. This may be due to the lightweight desktop; even so, there was a smoothness and stability to the platform that you don’t always find in a distribution this young (less than two years old) developed by so few (two developers).

What’s not to like?

Outside the time it took to upgrade a single piece of software, the only problem I ran into was an inability to get the VirtualBox Guest Additions installed. If you’re installing on standard hardware, this won’t be an issue. If, however, you’re thinking of testing Arya as a virtual machine, you’ll have to miss out on the guest additions. That’s a small price to pay, especially if you plan on eventually installing Arya on a standard desktop or laptop machine. Beyond that, Arya did not disappoint.

I should make one disclaimer: I did not test Arya on a laptop, so I cannot comment on how well it performs with mobile hardware (e.g., wireless).

Is AryaLinux right for you?

Ask yourself this one question: Would you rather have a Linux distribution that is all about convenience, or does having a complete desktop built from scratch better suit your needs? If you’re looking for simplicity and convenience, look elsewhere. If the idea of having a from-source desktop gives you a case of the smiles, then AryaLinux might well be the very distribution you’ve been looking for.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

How to Protect Samba from the SambaCry Exploit

If you make use of a Linux server to share out directories and files, you’ll want to make sure you do everything you can to prevent the likes of SambaCry. Here are a few tips.

You’ve already heard of WannaCry, a ransomware attack that can lock down data on Windows machines. That particular exploit comes by way of an SMB vulnerability. Naturally, if you use Linux, you know about Samba; but did you also know that, according to CVE-2017-7494:

All versions of Samba from 3.5.0 onwards are vulnerable to a remote code execution vulnerability, allowing a malicious client to upload a shared library to a writable share, and then cause the server to load and execute it.
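The linked article covers the tips in detail, but one widely circulated stopgap from the Samba project’s own advisory is worth knowing if you can’t patch immediately: disable named-pipe endpoints in smb.conf (note that this can break some expected functionality for Windows clients) and restart the service. A minimal sketch:

# In /etc/samba/smb.conf, add to the [global] section:
[global]
    nt pipe support = no

# Then restart Samba (systemd example):
sudo systemctl restart smbd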

Read more at TechRepublic

OPNFV Summit Preview: Q&A with ZTE

ZTE’s Zhang Fan, Chief Architect of Packet Core, will deliver a keynote address at the upcoming OPNFV Summit focused on “NFV Practice for vEPC Commercial Network.” Read below for a preview of what you can expect from ZTE onsite at OPNFV Summit this year. OPNFV Summit is taking place June 12-15 in Beijing.

Where do you see your role — and that of OPNFV — in terms of the broader end-to-end open networking stack?  
OPNFV plays a key role in the integration of IT technologies and standards organizations by centralizing the ecosystem around a dedicated reference platform, which speeds up NFV development and maturity. Since the OPNFV community comprises CT vendors, operators, and IT vendors, among others, OPNFV reflects common requirements needed across the NFV ecosystem. For example, OPNFV’s strong collaboration with other upstream communities, including OpenStack, OpenDaylight, DPDK, and FD.io, illustrates the project’s ability to serve as a connection across the end-to-end open networking stack.

OPNFV is the only open source community targeting NFV solutions, including infrastructure, VIM, and MANO. ZTE is pleased to be joining the OPNFV community along the journey to accelerate open source NFV.

Read more at OPNFV

Installing Node.js 8 on Linux via Package Manager

At NodeSource, we maintain the consistently updated Node.js repositories for Linux distributions. These aren’t repositories as most Node.js developers think of them – git repositories – but rather repositories in the sense of installing Node.js via the given Linux OS’s built-in package manager – like apt and yum.

With the release of Node.js 8 yesterday, we’ve gone ahead and built the Node.js 8 binaries and made them readily available if you’re using Node.js on a Debian- or Enterprise Linux-based distro.

Want to get the latest and greatest with Node.js 8 on Linux? Let’s get you up and running:
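The full post walks through the exact steps; as a sketch, NodeSource’s documented pattern at the time looked like the following (check the linked article for the current, authoritative commands):

# Debian/Ubuntu: add the Node.js 8.x repo, then install
curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
sudo apt-get install -y nodejs

# RHEL/CentOS (Enterprise Linux) equivalent
curl -sL https://rpm.nodesource.com/setup_8.x | bash -
sudo yum install -y nodejs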

Read more at NodeSource

Just Starting Out with Git and GitHub? It Gets Easier, Honest!

No doubt you have heard of Git or GitHub for source control, but what is source control?

This is a basic overview of source code control and the advantages of using it, whether within a team environment or on your own. If you do a Google search for source code control, you’ll find that Wikipedia defines it as:

“Revision control (also known as version control, source control or (source) code management (SCM)) is the management of multiple revisions of the same unit of information.”
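As a taste of how simple the basics are, here is a minimal first session with Git (assuming Git is already installed): initialize a repository, stage your files, and record a revision.

# Turn the current directory into a repository
git init

# Stage every file for the first snapshot
git add .

# Record the revision with a descriptive message
git commit -m "Initial commit"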

Read more at Dev.to

Choose Your Favorite Linux Hacker SBCs and Enter to Win a Free Board

It’s once again time for our annual reader survey of open-spec, Linux- or Android-ready single board computers priced under $200. In collaboration with LinuxGizmos, we have published freshly updated summaries of 98 SBCs, up from 81 boards in our 2016 survey and 53 boards in our 2015 survey.

You can now take our brief, 3-minute SurveyMonkey survey, where you can rank up to three of your favorite boards from our list.

 Take the survey!

We also ask a handful of questions on key buying criteria and intended applications. Completing the survey earns you a chance to be among 27 randomly selected winners who will receive a free board from one of six SBC families. These include the Arduino Uno WiFi, BeagleBone and variants, DragonBoard 410C, MinnowBoard Turbot Quad-core, Gumstix Pepper DVI-D, and several UP boards including the new UP Squared.

Community-backed SBCs running Linux and Android are seeing increasing demand from makers, educators, researchers, and a growing number of embedded equipment manufacturers. Hacker boards are used as desktop replacements, media centers, and Internet of Things (IoT) devices, such as home or industrial automation gateways and gadgets. Other applications include robots, drones, smart city equipment, signage, and kiosks.

We’ve added dozens of new products since last year’s survey and trimmed a few others from our list. Newcomers include several new BeagleBone variants, such as the robotics-focused BeagleBone Blue, and the new Raspberry Pi Zero W board, which adds wireless to the minimalist, IoT-oriented Zero. There’s also the RPi-like, Rockchip-based Tinker Board from Asus, the first hacker board from a major PC manufacturer. Many of the new boards add to the low-cost, Raspberry Pi-imitating product lines from Shenzhen Xunlong (Orange Pi) and FriendlyElec (NanoPi). Other newcomers include the high-end, media-focused Khadas Vim and the Wandboard-like SavageBoard.

Recent trends include increased adoption of 64-bit ARMv8 SoCs, leading to a total of 20 such boards in our roundup. There are also several more tiny, stripped-down IoT boards like the Raspberry Pi Zero W, Orange Pi Zero, and NanoPi Neo. On the high end, we’ve seen new x86 boards like the UP board and UP Squared, as well as the Udoo x86 and MinnowBoard Turbot Quad-core. High-end ARM boards include the Firefly-RK3399 and the MediaTek X20 Development Board.

To be included in our survey, the SBCs must be supported with open Linux and/or Android stacks and be priced under $200 (not counting shipping), with promised shipment in July. They must also meet our relatively flexible selection criteria for open source compliance. The vast majority of the boards are offered with full schematics and extensive specs, and most include open source licensing. However, we also admit some less open source boards like the Raspberry Pi, especially if considerable attention has been given to providing suitable open source Linux and Android images, as well as community features such as forums, tutorials, and tech support.

For more details on selection criteria, as well as summaries and a comparison spreadsheet for the 98 boards, see the 2017 hacker board catalog posted at LinuxGizmos.com. To read more about product trends and fill out the survey to earn a chance to win prizes, see the 2017 hacker board survey page. The results of the survey and the lists of winners will be posted here in mid-June. Prizes should arrive in July.

What Is NoSQL?

NoSQL databases are one of those fun topics where people get all excited because they’re cool and new. But they’re really not new; they’re different from SQL databases, and they have different use cases. NoSQL is not going to make world peace or give you your own private holodeck, but aside from those deficiencies, it is pretty cool.

What is NoSQL?

NoSQL is a fine multi-purpose term with multiple meanings, thanks to the incurable nerd habit of modifying everything and never being finished. NoSQL means non-SQL, non-relational, distributed, scalable, and not-only-SQL, because many support SQL-type queries. NoSQL-type databases have been around since the olden Unix days, although naturally some people act like they’re a brand-new awesome sauce Web 2.0 thingy. I don’t even know what Web 2.0 is, although I like it for not having blink tags. At any rate, NoSQL and SQL databases have some things in common, and some differences. NoSQL is not a replacement for SQL, and it is not better or worse, but is made for different tasks. The distinction between the two is blurring as they both evolve, and perhaps someday will not be very meaningful.

Traditional RDBMS

Some examples of traditional relational database management systems are MySQL, MariaDB, and PostgreSQL. We know and love these old-time RDBMS because they’re solid and reliable, are proven for ensuring data integrity, and we’re used to them.

But our beloved old-timers don’t fit well into our modern world of fabulously complex and high-speed datacenters. They have formal schemas defining tables and field types, and may also have indexes, primary keys, triggers, and stored procedures. You have to design your schema before you can start using your DB, and because of this rigid structure, adding a new data type is a significant operation. They don’t scale, cluster, or replicate very easily; you can press them into service in your lovely cloud or distributed datacenter, but many NoSQL DBs are designed from the start for that fast-paced, anarchic world.
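To make that rigidity concrete, here is the sort of up-front schema work an RDBMS expects (generic SQL as a sketch; the books table is purely illustrative):

-- The schema must be designed before the first row is stored
CREATE TABLE books (
    id     INT PRIMARY KEY,
    title  VARCHAR(255) NOT NULL,
    author VARCHAR(255),
    pages  INT
);

-- Adding a new kind of data later means altering the table itself
ALTER TABLE books ADD COLUMN language VARCHAR(32);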

NoSQL DBs also scale down nicely. When you don’t want all the complexity and overhead of MariaDB or PostgreSQL, or want an embedded DB, try CouchbaseLite, TinyDB, or good old Berkeley DB.

NoSQL Types

NoSQL DBs have flexible schema, which is both a lovely feature and a pitfall. It is easier than wrangling inflexible tables, although it doesn’t mean you don’t have to care about schema, because you do. Sensible design is always better than chaos.

There are over 200 NoSQL-type databases, and these fall into some general categories according to their data models.

Key:value stores use a hash table of key/value pairs.

Document-based store uses actual text documents.

Column-based store organizes your data in columns, and each storage block contains data from only one column.

Graph-based DBs are very fast at querying and retrieving diverse data.

Document store DBs include CouchDB, MongoDB, and SequoiaDB. Instead of tables, these use JSON-like field-value pair documents, actual text documents that you type and look at, and documents can be organized into collections. These examples show a little bit about how MongoDB organizes data.

{
   title: "MongoDB: The Definitive Guide",
   author: [ "Kristina Chodorow", "Mike Dirolf" ],
   published_date: ISODate("2010-09-24"),
   pages: 216,
   language: "English",
   publisher: {
              name: "O'Reilly Media",
              founded: 1980,
              location: "CA"
            }
}

There is no rigid number of fields; you could omit any of the fields in the example, or add more. The publisher information is an embedded sub-document. If the publisher information is going to be re-used a lot, use a reference instead to make it available to multiple entries:

{
   _id: "oreilly",
   name: "O'Reilly Media",
   founded: 1980,
   location: "CA"
}

{
   _id: 123456789,
   title: "MongoDB: The Definitive Guide",
   author: [ "Kristina Chodorow", "Mike Dirolf" ],
   published_date: ISODate("2010-09-24"),
   pages: 216,
   language: "English",
   publisher_id: "oreilly"
}

Key:value NoSQLs include Redis, Riak, and our old friend Berkeley DB, which has been around since forever. Berkeley DB is the default back end for the Cyrus IMAP server, Exim, Evolution mail client, OpenLDAP, and Bogofilter.

Redis also represents another type of NoSQL database: in-memory. It is very fast because it defaults to running in memory only, with a configurable write-to-disk option (which, of course, is much slower). Redis is replacing Memcached as the distributed in-memory object caching system on dynamic web sites. If running in memory sounds scary, consider the use case: dynamic web sites delivering all kinds of transient data. If a Redis hiccup makes the user click an extra time, no biggie. Start it by running redis-server; this is how it looks after installation:

$ redis-server
5786:C 30 May 07:34:06.939 # Warning: no config file specified, using 
the default config. In order to specify a config file use redis-server 
/path/to/redis.conf
5786:M 30 May 07:34:06.940 * Increased maximum number of open files 
to 10032 (it was originally set to 1024).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 3.0.6 (00000000/0) 64 bit
  .-`` .-```.  ```/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 5786
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'            

Open a second terminal and run redis-cli. The commands you type follow the prompts.

$ redis-cli
127.0.0.1:6379> SET my:name "carla schroder"
OK
127.0.0.1:6379> GET my:name
"carla schroder"
127.0.0.1:6379>

Type exit to quit. This is more like using a traditional RDBMS, because everything is done with commands and you don’t have nice text documents to look at.

Column-store databases include Cassandra and Apache HBase. These store your data in a column format. Why does this matter, you ask? It makes queries and data aggregation easier. With document store and in-memory DBs, you have to rely on good program logic to run queries and manipulate your data. Column-store DBs simplify your program logic.

When you see those “You may also be interested in…” links on a web site, they are most likely delivered from a graph-based DB. Graph-based DBs do not use SQL because it is too rigid for such fluid queries. There is not yet a standard query language, so each one is its own special flower. Some examples are InfiniteGraph, OrientDB, and Neo4J.

So, there you have it, another plethora of choices in an already-overloaded tech world. In a nutshell, NoSQL DBs are great for fast prototyping of new schema, raw speed, and scalability. If you want the most data integrity, such as for financial transactions, stick with RDBMS. Though, again, as the two evolve we’re going to see less-clear boundaries and perhaps someday One DB to Rule Them All.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Ops Engineer Explains Let’s Encrypt’s Automated TLS/SSL Certificate Issuance

Let’s Encrypt is a free, automated, and open Certificate Authority issuing digital certificates for website encryption globally. Let’s Encrypt is a service provided by the Internet Security Research Group (ISRG), a public benefit organization with a mission to reduce financial, technological, and educational barriers to secure communication over the Internet. Let’s Encrypt secures communication for more than 40 million websites.

Jillian Karner, operations engineer at Let’s Encrypt.
In an upcoming presentation at Open Source Summit Japan, Let’s Encrypt technical staff will provide a brief history of the organization’s development and how it has made considered decisions to enable a growing portion of the Web to benefit from the security provided by HTTPS. In this interview, Jillian Karner, operations engineer at Let’s Encrypt, expands upon a key differentiator: its emphasis on automation.

Jillian is a full-time crypto enthusiast with a passion for a free, open, and secure Web. She has worked for start-ups in the security field since her early college years at Arizona State University, maintaining secure infrastructures and developing encrypted endpoint-to-endpoint applications. She is currently working with Let’s Encrypt and looks forward to a 100% encrypted Web.

Linux.com: Can you give our readers some background on Let’s Encrypt? Why was it developed?

Jillian Karner: Let’s Encrypt is a free, automated, and open source certificate authority. It provides websites and endpoints on the Web with a TLS/SSL certificate, allowing users to communicate with those sites through an encrypted Web session. By having a web server configured with a TLS/SSL certificate, users can reach a site over the HTTPS protocol and know the endpoint has been authenticated and that the communication is encrypted. Let’s Encrypt also worked to write the ACME spec, currently a work-in-progress draft at the Internet Engineering Task Force (IETF), which defines an automated method for issuing certificates. The spec will allow other certificate authorities to create their own ACME-based CA systems and allows the community to write clients that use this issuance system.

Let’s Encrypt and the related ACME spec were developed with the goal to help encrypt the entire Web. To achieve that, the project was started with the foundation that it needs to be free to reduce the complexity and make it accessible to everyone. And since certificate authorities rely on being trusted, Let’s Encrypt worked to be as transparent as possible from the get-go. The certificate authority software, Boulder, is open-sourced and all the certificates issued are logged to Certificate Transparency and are auditable.

Linux.com: How does automation play into Let’s Encrypt’s approach?

Karner: Automation is a significant part of the Let’s Encrypt ecosystem both in terms of the certificate issuance protocol and the infrastructure that keeps it running. If you’ve ever attempted to get a certificate before Let’s Encrypt entered the game, you know that every few years or so you had to recall the special commands and steps to issue/renew and deploy a certificate. Even the most proficient System Administrators would not look forward to renewing certificates. But the ACME protocol that the Let’s Encrypt certificate authority is developed on enables automatic issuance and renewal of certificates. It was designed to remove the human element of the process and make getting a certificate more accessible for anyone who needs one.

Our team of system administrators has automated most of the processes for maintaining and running Boulder and the related infrastructure. We’ve worked hard to make sure that the environment is available for users with high uptime by using automated checks, repeatable processes, and configuration management tools.

Linux.com: Where do you see automation making a big difference for your users?

Karner: In the case of Let’s Encrypt, it makes all the difference. Acquiring certificates is understood to be a tedious task, but with the help of Let’s Encrypt, the intermediary steps are automated, and it only requires setting up one of the many available clients to start the process. Once a cert is issued, most clients don’t require any manual work for certificate renewal. Because issuing and renewing are automated, Let’s Encrypt can also offer certificates that are valid for only 90 days. That improves security in the certificate ecosystem by preventing a compromised certificate from lasting very long, which is much more effective than techniques like certificate revocation. The automation also trickles down to end users on the Web and improves their security, because there will be no lapse in a valid certificate.
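For readers who want to see what that hands-off workflow looks like in practice, here is a minimal sketch using certbot, one of the many available ACME clients (illustrative only, not a client recommendation from the interview; example.com is a placeholder):

# Obtain and install a certificate for an nginx-served domain
sudo certbot --nginx -d example.com

# Renewal is designed to be automatic; most packages install a cron job
# or systemd timer, but you can test the process manually:
sudo certbot renew --dry-run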

Linux.com: What are the greatest challenges you’ve faced in building and maintaining Let’s Encrypt?

Karner: The greatest challenge has been dealing with rapid growth. There are nearly 34 million active certificates issued by Let’s Encrypt, and we’re not even at two years of operations. We’re constantly working to improve our services and operations to keep downtime to a minimum. With so many users, including large integrators that rely on the service, we have to frequently evaluate our infrastructure usage and needs to make sure we stay ahead of the growth that we see.

Linux.com: What has been the most interesting or fulfilling aspect of working on Let’s Encrypt?

Karner: Let’s Encrypt has a great mission in wanting to encrypt the entire Web and enable better security for users. It’s incredible to be a part of that mission and watch the change happen. When the project started, Firefox Telemetry data showed that only 39% of all Web sessions were encrypted. Now, that number has surpassed 55% and is continuing to increase. It’s fulfilling that people like my parents, who aren’t very technical, can browse the Web securely because websites have an easy, free option to get a cert and provide that for them.

View the full agenda of sessions happening this week at Open Source Summit Japan.