It is important for an IT graduate to keep up with the latest developments and advances in technology in order to find opportunities to excel at the highest level. You will certainly need strong certifications, along with the right skills and expertise, to succeed in a demanding corporate environment. Many certifications can add value to your career and profile, and it can be hard to choose the one that best fits your work profile.
MCSE Certification
MCSE (Microsoft Certified Systems Engineer) is a top-level certification from Microsoft that validates your expertise in Microsoft products and servers worldwide. The certification requires you to first pass the MCSA (Microsoft Certified Solutions Associate) certification. Depending on the area of expertise and the field you want to pursue, there are eight different tracks to choose from.
The certification exam tests advanced skills ranging from installing software to managing and troubleshooting networks in Microsoft systems and server environments. You have to answer about 50 questions in roughly 90 minutes and score 750 out of 1000 to pass. The first-attempt pass rate is 70-80 percent, which is not bad compared with other high-level certification exams.
CCNA
The CCNA (Cisco Certified Network Associate) Certification is one of the most sought-after certifications Cisco offers for individuals seeking expertise in Cisco networking. The certification requires extensive skills in troubleshooting and configuring networks, along with knowledge of installing and configuring LANs and WANs, routers, and switches. It gives you a wide range of expertise in the Cisco ecosystem to shape your career, and it opens up plenty of opportunities at the corporate level.
The CCNA exam is one of the toughest certification exams to crack in a single attempt. It tests your ability to troubleshoot, manage, secure, and configure a medium-sized network in a realistic Cisco environment, along with your knowledge of router and switch management and configuration. You have to answer about 45 multiple-choice questions in roughly 90 minutes. The exam has a first-attempt success rate of just 40 percent and can come as a shock or a wake-up call if you are underprepared.
Which is the best?
Both certifications are highly valued in the IT industry, and each has its own set of pros and cons. Which one is best for you depends on what you are seeking and the field in which you want to excel. While MCSE gives you more scope and opportunities, it requires you to pass seven different examinations to earn the certification. The CCNA certification, on the other hand, gives you more credibility and expertise if you are planning a career in networking and security, and you only have to pass a single examination to earn it. While CCNA gives you more authority as a network administrator, MCSE can consolidate your position as a system administrator.
CCNA professionals earn somewhat higher salaries than MCSE professionals, although the margin is not large. It has also been observed that CCNA-certified candidates land more jobs than their MCSE-certified counterparts.
Conclusion
Both certifications require you to recertify every three to four years. The MCSE is the highest-level Microsoft certification, while in the Cisco track you can go on to more advanced certifications after CCNA, such as CCNP (Cisco Certified Network Professional) and CCIE (Cisco Certified Internetwork Expert). The Microsoft certification is more popular, and the fact that you have to pass eight different exams to attain the credential means you will invest more time, money, and resources to get through, but it also means you will have a wider range of skills to showcase to employers. The CCNA certification gives you more leverage in the WAN and LAN field, along with an edge in networking and security, and requires you to pass just one exam. Which is best for you depends entirely on your present job profile and where you want to go in the future.
The Call For Papers (CFP) for MesosCon Europe is closing soon! Submit your proposal by July 28 for consideration.
MesosCon is an annual conference that brings together users and developers to share and learn about the project and its growing ecosystem. The conference will feature two days of sessions to learn more about the Apache Mesos core and related technologies. The program will include workshops to get started with Apache Mesos, keynote addresses from industry leaders, and sessions led by adopters and contributors.
Here are a few examples of topics we would like to see:
Best practices and lessons on deploying and running Mesos at scale
Deep dives and tutorials into Mesos
Interesting extensions to Mesos (e.g., new communication models, support for new containerizers, new resource types and allocation models, etc.)
Improvements/additions to the Mesos ecosystem (packaging systems, monitoring, log aggregation, load balancing, service discovery, etc.)
New frameworks
Microservice design
Continuous delivery / DevOps (automating into production)
If you’re unsure about your proposal, or want some feedback or general advice, please don’t hesitate to reach out to us. We’ll be happy to help!
Our events are working conferences intended for professional networking and collaboration in the Linux community and we work closely with our attendees, sponsors, and speakers to help keep The Linux Foundation events professional, welcoming, and friendly.
Not interested in speaking but want to attend? Linux.com readers receive 5% off the “attendee” registration with code LINUXRD5.
If a JavaScript developer was frozen in 2005 and miraculously thawed in our present world of 2017, the thing that would likely amaze them is the massive proliferation of JavaScript packages. The video below gives us a fascinating visual representation of the package explosion over time.
The JavaScript ecosystem today consists of packages for nearly every need, from large framework libraries, to small functional packages that perform niche tasks. These bundles of componentized code have been instrumental in the evolution of JavaScript as a powerful and popular programming language. With the growth in packages, developers have also seen increased need for performant, reliable package managers to install and manage the multitude of dependencies.
The first post in our series discussed the JavaScript revolution and outlined three of the core components of a modern front-end development stack: package management, application bundling, and language specification. In this post, we’ll talk about where we started with package managers, how they’ve evolved, and why Kenzan recommends Yarn as a best bet for scaled applications in the continuing evolution of package management.
How It All Began
As package managers started to take root, and people moved away from downloading packages and including all their files manually, npm (version 2) and Bower both emerged as frontrunners.
Back then, npm exclusively handled node packages. It was uncommon to house packages with front-end assets like HTML and CSS in the early versions of npm. Bower, on the other hand, was built specifically to handle client-side packages with HTML, CSS, and JS assets. It was an invaluable tool for early front-end projects. Bower had its own registry of front-end packages and delivered a flattened dependency tree, making the user decide which version of a package they wanted on conflicts. It set a precedent for the package management function and started the JavaScript world down the road of package proliferation.
As our applications grew and the number of libraries increased, this foundation began to crack. Version mismatches became harder to handle. Build processes with Bower required wiredep and quite a bit of configuration. And, most importantly, CommonJS and other module formats didn’t play nice. Module loaders like webpack™ had a difficult time handling the concatenated and bundled format of bower packages, and occasionally couldn’t modularize them at all. This became a prominent issue as a more modular JavaScript took root.
Amidst these problems, npm launched version 3. It offered a flattened dependency tree with module nesting on conflicts, CommonJS module support, and a single ecosystem for both front-end and node packages. This was overall a big success for npm, and it was largely adopted in the JavaScript community. However, this system soon began to show flaws in projects. First, npm version 3 and 4 were non-deterministic, meaning that modules were not always installed in the same order or nesting pattern. This caused notorious “Works on my machine” bugs for developers as node modules began to drift. A second issue was caching. The npm cache was unreliable, and corrupted items could be removed and not replaced. This meant that developers could not rely on previously cached items being available for future installs, and offline installs were out of the question.
We also saw new package managers like JSPM begin to appear. JSPM (short for JavaScript Package Manager) came out as a tool alongside SystemJS. The two packages attempted to handle JS module loading with a clean integration into package management. JSPM worked with the npm registry, GitHub, and private registries to install dependencies, and then update the SystemJS configuration to map to the new module so it could be imported across files. Although package management and module loading shared some common concerns, they didn’t seem so in sync that they required explicit pairing. Additionally, for two tools built to work together, the configuration was very challenging. As webpack gained popularity, JSPM began to lose support. The final nail in the coffin seemed to come when SystemJS was replaced by webpack in the Angular CLI.
And Then Came Yarn
Yarn was introduced in late 2016 by Facebook to address some of the common complaints about npm mentioned above. Yarn builds on npm, leveraging the huge npm registry and the package.json file, to make scaling an application more repeatable and pain-free. Yarn uses a custom lockfile and install algorithm to ensure a deterministic install across all users running the same version of Yarn. This ensures that all developers have identical node modules, no matter how they choose to install them. No more package drift, and no more hidden bugs! It also means consistency between developers and CI environments.
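To make this concrete, here is a minimal sketch of the lockfile workflow, assuming a Yarn 1.x install; the package name is just an example:

    # Add a dependency; Yarn records the exact resolved version in yarn.lock
    yarn add lodash
    # Commit both files so every machine resolves the same dependency tree
    git add package.json yarn.lock
    # In CI, refuse to install if yarn.lock no longer matches package.json
    yarn install --frozen-lockfile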
Further, Yarn introduces a caching mechanism for node modules. With a warm cache, Kenzan has seen a huge decrease in install times, with speeds up to 4x faster. Faster installs translate to faster builds and faster development. The cache also allows for sandboxed installs, that is, installs without Internet access. This feature is increasingly important for enterprise applications because it prevents any malicious content from being injected during the install: the packages are cached and inspected, and all further installs coming from the cache are safe.
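As a rough sketch of what an offline-capable setup can look like (the mirror directory name is arbitrary, and the flags assume Yarn 1.x):

    # Keep a tarball of every installed package in a local mirror directory
    yarn config set yarn-offline-mirror ./npm-packages-offline-cache
    # The first install populates the mirror over the network
    yarn install
    # Later installs can resolve entirely from the mirror, with no Internet
    yarn install --offline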
At Kenzan we have largely adopted Yarn over npm to take advantage of the build consistency, decreased install times, and sandbox installs for our clients. With the change we have seen fewer issues with our CI/CD processes and faster install times overall. For our clients, this means less money spent and fewer bugs sneaking into downstream environments where they don’t belong.
This is not to say that Yarn is without flaws. It has shown a couple of issues, including limited support for private npm packages, and some issues with pulling packages from GitHub. If these become important issues for one’s project, Yarn may not be the right choice just yet. However, the package sustains a high level of development support, and it appears that bug fixes are pushed through the PR process faster than with npm. We anticipate Yarn will be able to handle all the situations that npm can handle soon enough.
And Then npm Again!
This spring npm released version 5, which brings the package manager onto competitive ground with Yarn. This raises an interesting question for all the developers who have hopped on board with Yarn: do we change back or stay the course?
To make an educated choice, we felt we needed to run some tests and do our own research. Is npm v5 really as fast as they say? How does it compare to Yarn when it comes to the cache, integration, and scalability?
First, let’s start with speed. We conducted an install speed test using the create-react-app project. We compared npm v4, Yarn, npm v5, and then the last two options with a warm cache. For each item, we ran 10 install tests to normalize for varying network conditions. Here is what we found.
npm v5 was fast. Almost 4x faster than npm v4! It was even faster than Yarn on a cold install, but only by a smidgen. The real standout in terms of speed was Yarn with a warm cache, which was roughly 2x faster than the next quickest option. The conclusion of this test was that, yes, npm v5 exhibited strong performance benefits over previous versions, but was still slightly lacking on cache install speed when compared to Yarn.
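For readers who want to reproduce a similar comparison, the sketch below shows the general shape of such a test; the repository URL and output file name are illustrative, and your absolute numbers will vary with network and hardware:

    # Clone the project whose dependencies we want to install repeatedly
    git clone https://github.com/facebook/create-react-app.git
    cd create-react-app
    # Ten installs per tool to smooth out network variance
    for i in $(seq 1 10); do
      rm -rf node_modules
      { time npm install; } 2>> npm-times.txt    # repeat with `yarn install`
    done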
Next we looked at caching mechanisms. A big pain point of previous npm versions was an inconsistent cache. On this front, they seem to have caught up. The cache is now self-healing, so when corrupted data is removed it is automatically reinstalled fresh, and installs are retried on failures. This means that cached items should be available when you need them, similar to Yarn.
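If you want to poke at the cache yourself, npm 5 exposes a verification command and a flag to favor cached data (a quick sanity check, not something you need to run routinely):

    # Verify the integrity of cached packages and garbage-collect corrupted entries
    npm cache verify
    # Prefer cached packages over the network when they are available
    npm install --prefer-offline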
On usability and integration, npm has a small leg up. The new npm lockfile contains a deterministic list of all packages, with root-level packages raised to the top of the tree, so a full install can be completed with just that file. For a deterministic install, Yarn requires both yarn.lock and package.json. npm also integrates seamlessly with private registries and package publishing, since they are created by a single force, the npm team.
Finally, we looked into scaling. At this moment, the two package managers have similar capabilities when it comes to scaling for full-size enterprise applications. However, Yarn was built and continues to progress as a package manager designed for large scale applications. It prioritizes issues of scale and security over advanced registry features (although those are still addressed). It is likely that Yarn will lead when it comes to improvements for large-scale applications, although this is only a hedge.
Where We’ve Landed
In the introduction to this blog series, we stated that all projects should only need one package manager. In our experience, we’ve found that consistency in tooling across machines helps to create reproducible builds and fewer bugs. But which one you choose should take into account the kinds of projects your organization takes on.
With the new release of version 5, npm may be a good choice. It provides similar speeds for smaller projects, and it is fully deterministic for consistency across machines. It has fewer bugs with private npm packages and GitHub repositories, and maintains an adequate level of development for bug fixes and new features. In our opinion, it is suited to smaller applications with less room for scale.
However, at Kenzan our focus is on enterprise applications with a maximum ability to scale. Within these constraints Yarn has proven to be a pragmatic choice. It provides us with speed for quick builds, determinism for package consistency, and an active development network for new and improved features. No matter how many developers and environments exist in an application, we can be sure it will scale with Yarn. For these reasons, we have chosen Yarn for our single package manager and invite you to give it a try as well.
Stay tuned for our next blog post, where we take our front-end stack one step further by exploring tools for bundling modules.
Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.
In April, The Linux Foundation launched the open source EdgeX Foundry project to develop a standardized interoperability framework for Internet of Things (IoT) edge computing. Recently, EdgeX Foundry announced eight new members, bringing the total membership to 58.
The new members are Absolute, IoT Impact LABS, inwinSTACK, Parallel Machines, Queen’s University Belfast, RIOT, Toshiba Digital Solutions Corporation, and Tulip Interfaces. They join a roster that includes AMD, Analog Devices, Canonical/Ubuntu, Cloud Foundry, Dell, Linaro, Mocana, NetFoundry, Opto 22, RFMicron, and VMWare, among others.
EdgeX Foundry is built around Dell’s early stage, Apache 2.0 licensed FUSE IoT middleware framework, which offers more than a dozen microservices comprising over 125,000 lines of code. The Linux Foundation worked with Dell to launch the EdgeX Foundry after the FUSE project merged with a similar AllJoyn-compliant IoTX project led by current EdgeX members Two Bulls and Beechwood.
EdgeX Foundry will create and certify an ecosystem of interoperable, plug-and-play components. The open source EdgeX stack will mediate between a variety of sensor network messaging protocols and multiple cloud and analytics platforms. The framework is designed to help facilitate interoperability code that spans edge analytics, security, system management, and services.
The key benefit for members and their customers is the potential to more easily integrate pre-certified software for IoT gateways and smart edge devices. “EdgeX Foundry reduces the challenges that we face in deploying multi-vendor solutions in the real world,” said Dan Mahoney, Lead Engineer for IoT Impact LABS, in an interview with Linux.com.
Why would The Linux Foundation launch another IoT standardization group while it’s still consolidating its AllSeen Alliance project’s AllJoyn spec into its IoTivity standard? For one thing, EdgeX Foundry differs from IoTivity in that for now it’s focused exclusively on industrial rather than both consumer and industrial IoT. Even more specifically, it targets middleware for gateways and smart endpoints. The projects also differ in that IoTivity is more about interoperability of existing products while EdgeX hopes to shape new products with pre-certified building blocks.
“IoTivity provides a device protocol enabling seamless device-to-device connectivity, while EdgeX Foundry provides a framework for edge computing,” said Philip DesAutels, PhD Senior Director of IoT at The Linux Foundation. “With EdgeX Foundry, any protocol — IoTivity, BacNet, EtherCat, etc. — can be integrated to enable multi-protocol communications between devices implementing a variety of protocols and a common edge framework. The goal is to create an ecosystem of interoperable components to reduce uncertainty, accelerate time to market, and facilitate scale.”
Last month, the IoTivity project, which is backed by the Open Connectivity Foundation (OCF), as well as The Linux Foundation, released IoTivity 1.3, which adds bridges to the once rival AllJoyn spec backed by the Allseen Alliance, and also adds hooks to the OCF’s UPnP device discovery standard. The IoTivity and AllJoyn standards should achieve even greater integration in IoTivity 2.0.
IoTivity and EdgeX are “highly complementary,” DesAutels told Linux.com. “Since there are several members of EdgeX Foundry that are also involved in either IoTivity or OCF, the project anticipates strong partnerships between IoTivity and EdgeX.”
Although both EdgeX and IoTivity are billed as being cross-platform in both CPU architecture and OS, IoTivity is still primarily a Linux driven effort — spanning Ubuntu, Tizen, and Android — that is now expanding to Windows and iOS. By comparison, EdgeX Foundry is designed from the start to be fully cross-platform, regardless of CPU architecture or OS, including Linux, Windows, and Mac OS, and potentially real-time operating systems (RTOSes).
One of the new EdgeX members is the RIOT project, which offers an open source, IoT-oriented RIOT RTOS. “RIOT starts where Linux doesn’t fit so it is natural for the RIOT community to participate and support complementary open-source initiatives like EdgeX Foundry for edge computing,” stated RIOT’s Thomas Eichinger in a testimonial quote.
Easing sensor integration
IoT Impact LABS (aka Impact LABS or just plain LABS) is another new EdgeX member. The company has a novel business model of helping small- to medium-sized businesses run live pilots of IoT solutions. Most of its clients, which include several EdgeX Foundry members, are working on projects for enabling smart cities, resilient infrastructure, and improved food security, as well as solutions designed for communities facing natural resource challenges.
“At LABS we spend a lot of time troubleshooting new solutions for our pilot hosts,” said Dan Mahoney. “EdgeX Foundry will let us deploy faster with high-quality solutions by keeping the edge software development efforts to a minimum.”
The framework will be especially helpful in projects that involve many types of sensors from multiple vendors. “EdgeX Foundry gives us the ability to rapidly build gateway software to handle all the sensors being deployed,” added Mahoney. Sensor manufacturers will be able to use the EdgeX SDK to write a single application-level device driver for a given protocol that can then be used by multiple vendors and solutions.
Bringing analytics to the edge
When asked how his company would like to see the EdgeX framework evolve, Mahoney said: “A goal we would like to encourage is to have multiple industrial protocols available as device services — and a clear path for implementing edge analytics.”
Edge computing analytics is a growing trend in both industrial and consumer IoT. In the latter, we’ve already seen several smart home hubs integrating analytics technology such as Alexa voice activated AI support or video analytics. This typically requires offloading processing to cloud services, which poses challenges in security and privacy, potential service loss due to provider outages, and latency issues.
With industrial IoT gateways, latency is the most important issue. As a result, there’s growing interest in adding more cloud-like intelligence to IoT gateways. One solution is to securely bring cloud-like applications to embedded devices via containers, as with ResinOS and Ubuntu Core’s snap mechanisms. Another approach is to develop IoT ecosystems that shift more cloud intelligence to the edge. Last month, Amazon released its AWS Lambda based AWS Greengrass IoT stack for Linux based gateways. The software enables AWS compute, messaging, data caching, and sync capabilities to run on connected devices such as IoT gateways.
Analytics is a key element of the EdgeX Foundry roadmap. One founding member is Cloud Foundry, which is aiming to integrate its industry leading cloud application platform with edge devices. Another new member — Parallel Machines — plans to leverage EdgeX to help it bring AI to the edge.
It’s still early days at EdgeX Foundry. The software is still in alpha stage, and the project held its first big meeting only last month. The project has initiated a series of “Tech Talks” training sessions for new developers. More information may be found here.
Do you want to sharpen your system administration or Linux skills? Perhaps you have some stuff running on your local LAN and you want to make your life easier—where do you begin? In this article, I’ll explain how to set up tooling to simplify administering multiple machines.
When it comes to remote administration tools, SaltStack, Puppet, Chef, and Ansible are a few popular options. Throughout this article, I’ll focus on Ansible and explain how it can be helpful whether you have 5 virtual machines or 1,000.
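As a taste of what that looks like in practice, here is a minimal sketch of Ansible ad-hoc commands; the inventory file, group name, and host names below are made up for illustration:

    # A tiny static inventory: two machines on the local LAN
    cat > hosts.ini <<'EOF'
    [lan]
    webserver.local
    nas.local
    EOF

    # Check that Ansible can reach every host over SSH
    ansible -i hosts.ini lan -m ping

    # Update and upgrade packages on all hosts in the group (Debian/Ubuntu)
    ansible -i hosts.ini lan -m apt -a "update_cache=yes upgrade=dist" --become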
Eight months after three critical vulnerabilities were fixed in the memcached open source caching software, there are over 70,000 caching servers directly exposed on the internet that have yet to be patched. Hackers could execute malicious code on them or steal potentially sensitive data from their caches, security researchers warn.
Memcached is a software package that implements a high performance caching server for storing chunks of data obtained from database and API calls in RAM. This helps speed up dynamic web applications, making it well suited for large websites and big-data projects. While memcached is not a database replacement, the data it stores in RAM can include user sessions and other sensitive information from database queries.
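One simple way to see whether a memcached instance is reachable is to send it the plain-text stats command on the default port; the hostname below is a placeholder, and this should of course only be run against servers you administer:

    # Query a memcached server's stats over TCP port 11211 (2-second timeout)
    echo stats | nc -w 2 cache-host.example.com 11211 | head
    # An exposed, unpatched server will happily answer; a firewalled one will not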
While the default IEEE Spectrum ranking in the Top Programming Languages interactive gives a good aggregate signal of language popularity, here we are taking a deep dive into the metrics related to job demand. Two of our data sources, Dice and CareerBuilder, measure job openings for the languages included in the interactive, and consequently we have a preset for “Jobs” that weighs the rankings heavily toward those metrics. So, if you want to build up your tech chops before looking for a programming job, what languages should you focus on?
Although Python has moved to the top of the default Spectrum ranking, if we instead go purely by the volume of openings that mention a language, we find that C beats Python by a ratio of 3.5 to 1, or about 19,300 job openings versus 5,400 across Dice and CareerBuilder combined.
It is easy to dismiss bash — the typical Linux shell program — as just a command prompt that allows scripting. Bash, however, is a full-blown programming language. I wouldn’t presume to tell you that it is as fast as a compiled C program, but that’s not why it exists. While a lot of people use shell scripts as an analog to a batch file in MSDOS, it can do so much more than that. Contrary to what you might think after a casual glance, it is entirely possible to write scripts that are reliable and robust enough to use in many embedded systems on a Raspberry Pi or similar computer.
I say that because sometimes bash gets a bad reputation. For one thing, it emphasizes ease-of-use. So while it has features that can promote making a robust script, you have to know to turn those features on. Another issue is that a lot of the functionality you’ll use in writing a bash script doesn’t come from bash, it comes from Linux commands (or whatever environment you are using; I’m going to assume some Linux distribution). If those programs do bad things, that isn’t a problem specific to bash.
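As one example of turning those features on, here is a small sketch of the “strict mode” options many robust scripts start with; the backup path and argument are purely illustrative:

    #!/usr/bin/env bash
    # Exit on command failure, on use of unset variables, and on failed pipeline stages
    set -euo pipefail
    # Report where the script died instead of failing silently
    trap 'echo "error on line $LINENO" >&2' ERR

    # Fail loudly if the caller forgot the argument
    backup_dir="${1:?usage: $0 <backup-dir>}"
    cp -r /etc "$backup_dir"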
This week in Linux and open source headlines, ONAP leads the way in the automation trend, Mozilla launches new, open source speech recognition project, and more! Get up to speed with the handy Linux.com weekly digest!
1) With automation being one of the top virtualization trends of 2017, The Linux Foundation’s ONAP is credited with moving the industry forward
2) Mozilla has launched a new open source speech recognition project that relies on online volunteers to submit voice samples and validate them.
3) In addition to membership growth, EdgeX Foundry has launched a series of technical training sessions to help developers get up to speed on the project.
It would have been impossible to avoid hearing that Canonical has decided to shift their flagship product away from their in-house Unity desktop back to an old friend: GNOME. You may remember that desktop — the one that so many abandoned after the shift from 2.x to 3.x.
A few years later, GNOME 3 is now one of the most rock-solid desktops to be found, and one of the most user-friendly Linux desktop distributions is heading back to that particular future. As much as I enjoyed Unity, this was the right move for Canonical. GNOME is a mature desktop interface that is as reliable as it is user-friendly.
I won’t spend too much time speculating on why this happened (there are already plenty of pieces on this topic). There has also been plenty of speculation as to whether or not Canonical will deliver a GNOME-based Ubuntu that offers some of the features found in Unity. To that, Ken VanDine said to OMGUbuntu that the Ubuntu team “…may consider a few tweaks here and there to ease our users into the new experience.”
That’s not much. It also means features like the HUD will be nowhere to be found. Unfortunately, there aren’t any (current) GNOME extensions to replicate that feature. For some (like myself), losing the HUD is a big deal (but not unforgivable). Why? I’d always found that particular menu interface to be one of the single most efficient on the market.
My guess is that GNOME, as shipped with Ubuntu 17.10, will be a fairly vanilla take on the desktop (with a bit of Ubuntu theme-branding in the mix). If the daily builds of 17.10 are any indication, that will be exactly the case (Figure 1).
Figure 1: The default Ubuntu 17.10 look.
Extensions will be your friend
For those that consider GNOME to be a bit less efficient than Unity, know that extensions will be your friend. Again, you’re not going to find an extension to bring about every feature found in Unity, but you can at least gain some added functionality, to make the GNOME desktop a bit more familiar to those who’ve been working with Unity for the last few years.
The first two extensions I would suggest you examine are Dash to Dock and Dash to Panel.
Which of the above will better suit your needs will depend on three things:
Where you like your panel
If you prefer a bit of transparency
If you prefer a separate top panel with your dock
With Dash to Dock, your GNOME Favorites (found within the Dash) are added to the desktop (Figure 2) to function in similar fashion to the Unity Launcher.
Figure 2: Dash to Dock in action.
What I like about the Dash to Dock extension is that it not only allows you to add a bit of transparency to the dock, it can be placed on the top, bottom, left, or right edge of the display and does not do away with the top panel.
With Dash to Panel (Figure 3), your Dash Favorites are placed in a panel that spans the screen and incorporates the top panel.
Figure 3: Dash to Panel in action.
For those that might miss the look and feel of what Unity offered, Dash to Dock will be your preferred extension. For those that might like a traditional panel (such as that found in Windows 7 or KDE), Dash to Panel will be your go to.
If you do use Dash to Dock, you might want to enable the feature to move the applications button to the beginning of the dock (Figure 4).
Figure 4: Moving the applications button.
For anyone who has been using Unity long enough, having that applications button at the bottom of the dock can be a real point of frustration. You can also shift Dash to Dock to panel mode, to even better emulate Unity (Figure 5).
Figure 5: Now we’re starting to look more like Unity.
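If you prefer the command line, these same tweaks can usually be scripted with gsettings once the extension is installed; the schema and key names below come from current Dash to Dock releases and may differ in other versions:

    # Move the dock to the left edge, Unity-style
    gsettings set org.gnome.shell.extensions.dash-to-dock dock-position 'LEFT'
    # Panel mode: extend the dock along the full edge of the screen
    gsettings set org.gnome.shell.extensions.dash-to-dock extend-height true
    # Put the applications button at the start of the dock
    gsettings set org.gnome.shell.extensions.dash-to-dock show-apps-at-top true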
GNOME Tweak
One thing you must know is that, to gain access to the options for these extensions (and to even enable/disable them), you will need to install the GNOME Tweak Tool. To do this, open the GNOME Software tool, search for GNOME Tweak and click Install. Once installed, you can click the Launch button (Figure 6) and you’re ready to tweak your extensions (and other aspects of GNOME).
Figure 6: GNOME Tweak installed from Software.
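If you would rather install it from a terminal, the same tool should be available through apt; the package name below is the one Ubuntu has used, so verify it with apt search if in doubt:

    # Install GNOME Tweak Tool from the Ubuntu repositories
    sudo apt update
    sudo apt install gnome-tweak-tool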
Trust me when I say, GNOME Tweak will make your transition from Unity to GNOME slightly smoother.
The end result
As much as Unity lovers might hate to hear this, the switch to GNOME will wind up being quite welcome on all fronts. The primary reason is that GNOME is simply more mature than Unity. This translates to (at least in my experience thus far) a much smoother and snappier desktop. And, with the addition of a few extensions, the only thing Unity fans will miss is the HUD. But for those who cannot let go of Unity, know that there has been a fork of Unity 7, now named Artemis. At the moment, there is not even an alpha to test, but this looks to be a very promising project that might offer either a “pure” Unity-like desktop or a Plasma-like Unity desktop. Either way, for anyone hoping that Unity 7 will continue on… fear not.
Try it out now
If you can’t wait until October 2017, you can download the latest daily build and install your very own Ubuntu 17.10. I’ve been working with the daily build and have found it to be remarkably stable. If you go this route, just make sure to update regularly. If you’re not one to test pre-release software, the final release is but a few months away.
Once again, the future looks incredibly bright for the Ubuntu Linux desktop.
Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.