Originally created as an internal solution by YouTube to handle scaling for massive amounts of storage, Vitess is a database orchestration system for horizontal scaling of MySQL through generalized sharding. By encapsulating shard-routing logic, Vitess allows application code and database queries to remain agnostic to the distribution of data onto multiple shards. With Vitess, organizations can even split and merge shards as needs grow, with an atomic cutover step that takes only a few seconds. Companies like BetterCloud, Flipkart, Quiz of Kings, Slack, Square Cash, Stitch Labs and YouTube are using Vitess across various stages of production and deployment. Organizations including Booking.com, GitHub, HubSpot, Slack, and Square are also active contributors to the project.
If you want to buy a router specifically to be modded, you might be best served by working backward. Start by looking at the available offerings, picking one of them based on the feature set, and selecting a suitable device from the hardware compatibility list for that offering.
In this article, I’ve rounded up six of the most common varieties of third-party network operating systems, with an emphasis on what they give you and who they’re best suited for. Some of them are designed for embedded hardware or only for specific router models, some are more hardware-agnostic solutions, and some serve as the backbone for x86-based appliances.
As the leader of an open source foundation, I have a unique perspective on the way open source technologies are catalyzing the digital transformation of enterprises around the world. More than half of the Fortune 100 is using Cloud Foundry. If you’re wondering why, there are two main reasons: one is the allure of open source, and the other is the strength of the platform itself.
Open and free
Open source is based on freedom. That freedom includes access to the source code, freedom to collaborate and, ultimately, the freedom to innovate. In open source, no one person or company owns a project. Open source is a philosophy and a movement, and what makes open source thrive is the community that grows up around it.
All participants in an open source ecosystem have the opportunity to shape and improve the software. Users can identify features they need and contribute code upstream. Everyone has a chance to make a difference.
In the past, many embedded projects used off-the-shelf distributions and stripped them down to bare essentials for a number of reasons. First, removing unused packages reduced storage requirements. Embedded systems typically have little storage to spare at boot time, and the storage that is available, in non-volatile memory, can require copying large parts of the OS into memory to run. Second, removing unused packages reduced possible attack vectors. There is no sense hanging on to potentially vulnerable packages if you don’t need them. Finally, removing unused packages reduced distribution management overhead. Dependencies between packages mean keeping them in sync whenever any one package requires an update from the upstream distribution. That can be a validation nightmare.
Yet, starting with an existing distribution and removing packages isn’t as easy as it sounds. Removing one package might break dependencies held by a variety of other packages, and dependencies can change in the upstream distribution management.
Security is tantamount to peace of mind. After all, security is a big reason why so many users migrated to Linux in the first place. But why stop at merely adopting the platform when you can also employ several techniques and technologies to help secure your desktop or server systems?
One such technology involves keys—in the form of PGP and SSH. PGP keys allow you to encrypt and decrypt emails and files, and SSH keys allow you to log into servers with an added layer of security.
Sure, you can manage these keys via the command-line interface (CLI), but what if you’re working on a desktop with a resplendent GUI? Experienced Linux users may cringe at the idea of shrugging off the command line, but not all users have the same skill set and comfort level there. Thus, the GUI!
In this article, I will walk you through the process of managing both PGP and SSH keys through the Seahorse GUI tool. Seahorse has a pretty impressive feature set; it can:
Encrypt/decrypt/sign files and text.
Manage your keys and keyring.
Synchronize your keys and your keyring with remote key servers.
Sign and publish keys.
Cache your passphrase.
Back up both keys and keyring.
Add an image in any GDK-supported format as an OpenPGP photo ID.
Create, configure, and cache SSH keys.
For those who don’t know, Seahorse is a GNOME application for managing both encryption keys and passwords within the GNOME keyring. But fear not: Seahorse is available for installation on numerous desktops. And because Seahorse is found in the standard repositories, you can open your desktop’s app store (such as Ubuntu Software or the elementary OS AppCenter), locate Seahorse, and click to install. Once you have Seahorse installed, you’re ready to start making use of a very handy tool.
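If you prefer the terminal, Seahorse can also be installed from the command line. Here’s a minimal sketch, assuming a Debian- or Ubuntu-based distribution where the package is named seahorse (as it is in the standard repositories):

# Refresh the package lists and install Seahorse from the standard repositories
sudo apt update
sudo apt install seahorse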
Let’s do just that.
PGP Keys
The first thing we’re going to do is create a new PGP key. As I said earlier, PGP keys can be used to encrypt email (with tools like Thunderbird’s Enigmail or the built-in encryption function in Evolution). A PGP key also allows you to encrypt files: anyone with your public key can encrypt emails or files to you, and only you, holding the matching private key, can decrypt them. Without a PGP key, no can do.
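To give a concrete sense of what that looks like outside of a GUI, here’s a rough GnuPG command-line sketch of encrypting and decrypting a file (the recipient address and file names are placeholders):

# Encrypt a file to a recipient's public key
gpg2 --encrypt --recipient olivia@example.com notes.txt
# Decrypt the resulting notes.txt.gpg (requires the matching private key)
gpg2 --decrypt notes.txt.gpg > notes-decrypted.txt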
Creating a new PGP key pair is incredibly simple with Seahorse. Here’s what you do:
Open the Seahorse app
Click the + button in the upper left corner of the main pane
Select PGP Key (Figure 1)
Click Continue
When prompted, type a full name and email address
Click Create
Figure 1: Creating a PGP key with Seahorse.
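If you’re curious what those steps look like under the hood, GnuPG can do the same job from the command line; a minimal sketch:

# Generate a new PGP key pair interactively (GnuPG prompts for name, email, and passphrase)
gpg2 --gen-key
# Confirm the new key appears in your keyring
gpg2 --list-keys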
While creating your PGP key, you can click to expand the Advanced key options section, where you can configure a comment for the key, encryption type, key strength, and expiration date (Figure 2).
Figure 2: PGP key advanced options.
The comment section is very handy for helping you remember a key’s purpose (or other informative bits). With your PGP key created, double-click it in the key listing. In the resulting window, click the Names and Signatures tab. In this window, you can sign your key (to indicate that you trust it). Click the Sign button and then (in the resulting window) indicate how carefully you’ve checked this key and how others will see the signature (Figure 3).
Figure 3: Signing a key to indicate trust level.
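For reference, the rough command-line counterpart of this signing step (the address below is just a placeholder) is:

# Sign a key in your keyring to record that you trust it
gpg2 --sign-key olivia@example.com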
Signing keys is very important when you’re dealing with other people’s keys, as a signed key assures your system (and you) that you’ve done the work of verifying it and can fully trust an imported key.
Speaking of imported keys, Seahorse allows you to easily import someone’s public key file (the file will end in .asc). Having someone’s public key on your system means you can encrypt emails and files to that person and verify the signatures they send you. However, Seahorse has suffered from a known bug for quite some time: it imports using gpg version one but displays with gpg version two. This means that, until this long-standing bug is fixed, importing public keys through the GUI will always fail. If you want to import a public PGP key into Seahorse, you’re going to have to use the command line. So, if someone has sent you the file olivia.asc and you want to import it so it can be used with Seahorse, you would issue the command gpg2 --import olivia.asc. That key would then appear in the GnuPG Keys listing. You can open the key, click the I trust signatures button, and then click the Sign this key button to indicate how carefully you’ve checked the key in question.
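In other words, the workaround boils down to these two commands (using the olivia.asc example from above):

# Import the public key file with gpg version two
gpg2 --import olivia.asc
# Verify that the imported key now shows up in your keyring
gpg2 --list-keys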
SSH Keys
Now we get to what I consider to be the most important aspect of Seahorse—SSH keys. Not only does Seahorse make it easy to generate an SSH key, it makes it easy to send that key to a server, so you can take advantage of SSH key authentication. Here’s how you generate a new key and then export it to a remote server.
Open up Seahorse
Click the + button
Select Secure Shell Key
Click Continue
Give the key a description
Click Create and Set Up
Type and verify a passphrase for the key
Click OK
Type the address of the remote server and a remote login name found on the server (Figure 4)
Type the password for the remote user
Click OK
Figure 4: Uploading an SSH key to a remote server.
The new key will be uploaded to the remote server and is ready to use. If your server is set up for SSH key authentication, you’re good to go.
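If you ever need to do the same thing without Seahorse, the traditional command-line route (sketched here with a placeholder user and host) is:

# Generate a new SSH key pair (you'll be prompted for a passphrase)
ssh-keygen -t rsa -b 4096
# Copy the public key to the remote server to enable key authentication
ssh-copy-id olivia@server.example.com
# Test the connection; you should be asked for the key's passphrase, not the account password
ssh olivia@server.example.com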
Do note, during the creation of an SSH key, you can click to expand the Advanced key options and configure Encryption Type and Key Strength (Figure 5).
Figure 5: Advanced SSH key options.
A must-use for new Linux users
Any new-to-Linux user should get familiar with Seahorse. Even with its flaws, Seahorse is still an incredibly handy tool to have at the ready. At some point, you will likely want (or need) to encrypt or decrypt an email/file, or manage secure shell keys for SSH key authentication. If you want to do this, while avoiding the command line, Seahorse is the tool to use.
This week in open source and Linux news, developer Eugenia Kuyda’s fascinating open source-built chatbot is emotionally intelligent, The Linux Foundation forms new networking umbrella, & more!
Software developer Eugenia Kuyda is releasing the code to her chatbot, which can bring emotion into its exchanges. The bot is built on open source.
Almost every time Linus Torvalds releases a new mainline Linux kernel, there’s inevitable confusion about which kernel is the “stable” one now. Is it the brand new X.Y one, or the previous X.Y-1.Z one? Is the brand new kernel too new? Should you stick to the previous release?
The kernel.org page doesn’t really help clear up this confusion. Currently, right at the top of the page, we see that 4.15 is the latest stable kernel — but then in the table below, 4.14.16 is listed as “stable,” and 4.15 as “mainline.” Frustrating, eh?
Unfortunately, there are no easy answers. We use the word “stable” for two different things here: as the name of the Git tree where the release originated, and as an indicator of whether the kernel should be considered “stable” as in “production-ready.”
Due to the distributed nature of Git, Linux development happens in a number of forked repositories. All bug fixes and new features are first collected and prepared by subsystem maintainers and then submitted to Linus Torvalds for inclusion into his own Linux tree, which is considered the “master” Git repository. We call this the “mainline” Linux tree.
Release Candidates
Before each new kernel version is released, it goes through several “release candidate” cycles, which are used by developers to test and polish all the cool new features. Based on the feedback he receives during this cycle, Linus decides whether the final version is ready to go yet or not. Usually, there are 7 weekly pre-releases, but that number routinely goes up to -rc8, and sometimes even up to -rc9 and above. When Linus is convinced that the new kernel is ready to go, he makes the final release, and we call this release “stable” to indicate that it’s not a “release candidate.”
Bug Fixes
Like any complex software written by imperfect human beings, each new version of the Linux kernel contains bugs, and those bugs require fixing. The rule for bug fixes in the Linux kernel is very straightforward: all fixes must first go into Linus’s tree. Once a bug is fixed in the mainline repository, it may then be applied to previously released kernels that are still maintained by the kernel development community. All fixes backported to stable releases must meet a set of important criteria before they are considered — and one of them is that they “must already exist in Linus’s tree.” There is a separate Git repository used for the purpose of maintaining backported bug fixes, and it is called the “stable” tree — because it is used to track previously released stable kernels. It is maintained and curated by Greg Kroah-Hartman.
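If you want to explore these trees yourself, both are publicly hosted on git.kernel.org. A quick sketch (the repository URLs below are the publicly advertised ones; double-check them on kernel.org if they have moved):

# Clone Linus Torvalds's mainline tree
git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
# Clone the stable tree, where backported fixes are collected
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git linux-stable
# List the point releases in a given stable series, for example 4.14.x
cd linux-stable && git tag -l 'v4.14.*'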
Latest Stable Kernel
So, whenever you visit kernel.org looking for the latest stable kernel, you should use the version that is in the Big Yellow Button that says “Latest Stable Kernel.”
Ah, but now you may wonder — if both 4.15 and 4.14.16 are stable, then which one is more stable? Some people avoid using “.0” releases of kernel because they think a particular version is not stable enough until there is at least a “.1”. It’s hard to either prove or disprove this, and there are pro and con arguments for both, so it’s pretty much up to you to decide which you prefer.
On the one hand, anything that goes into a stable tree release must first be accepted into the mainline kernel and then backported. This means that mainline kernels will always have fresher bug fixes than what is released in the stable tree, and therefore you should always use mainline “.0” releases if you want the fewest “known bugs.”
On the other hand, mainline is where all the cool new features are added — and new features bring with them an unknown quantity of “new bugs” that are not in the older stable releases. Whether new, unknown bugs are more worrisome than older, known, but yet unfixed bugs — well, that is entirely your call. However, it is worth pointing out that many bug fixes are only thoroughly tested against mainline kernels. When patches are backported into older kernels, chances are they will work just fine, but there are fewer integration tests performed against older stable releases. More often than not, it is assumed that “previous stable” is close enough to current mainline that things will likely “just work.” And they usually do, of course, but this yet again shows how hard it is to say “which kernel is actually more stable.”
So, basically, there is no quantitative or qualitative metric we can use to definitively say which kernel is more stable — 4.15 or 4.14.16. The most we can do is to unhelpfully state that they are “differently stable.”
As organizations strive to innovate quickly and be more agile, development teams are driven to deliver code faster and with more stability. Enter DevOps, which Gartner characterizes as the rapid and agile iteration from development into operations, with continuous monitoring and analytics at the core. DevOps has quickly taken hold and, according to the RightScale 2017 State of the Cloud Report, overall adoption has reached 78 percent, rising to 84 percent among enterprises. …
In a DevOps model, developers use automation to test, configure, and deploy their own code quickly. Organizations are beginning to layer in security automation to add controls that help address legal and regulatory compliance requirements and manage risk.
Heptio holds a special place in the Kubernetes startup ecosystem. Its co-founders, Craig McLuckie and Joe Beda, are, after all, also two of the co-founders of the Kubernetes project (together with Brendan Burns), which launched inside of Google. Heptio also raised $8.5 million when it launched in 2016 (and another $25 million last year), but it was never quite clear what the company’s actual business plan looked like beyond offering training and professional services. That’s becoming quite a bit clearer now, though, as the company today announced the launch of the Heptio Kubernetes Subscription.
I always assumed that Heptio would launch some kind of Kubernetes distribution in the near future — and that’s kind of what this is, but the company is also putting a different spin on this.
Agile has proven to be a polarizing force for every type of business, big or small, ever since a group of software developers proposed it in 2001 in the form of the Agile Manifesto as a reaction to traditional “Waterfall” development, which they found dysfunctional and slow to deliver results. Agile is flexible and encourages rapid response to changing business needs and user requirements, in contrast to the Waterfall methodology.
Whether we’re talking about IT or other business communities, all businesses are justifiably eager to achieve the desired results quickly, and some of them take Agile training to get the most out of advanced software development approaches.
Not everyone is able to implement Agile properly, for a variety of reasons. One of them is that the disruptive changes Agile requires in order to establish its processes can place additional burdens on users. The reality is somewhere in the middle.