So you want to take advantage of Docker. And why not? It’s one of the most game-changing pieces of software to be developed in recent years. With Docker you can enable your business to expand its technological offerings with the help of containers (encapsulated applications that are easily rolled out and updated). Thing is, you work with Fedora and not Ubuntu. Is it possible to install this amazing platform on a non-Ubuntu operating system? Of course it is. In fact, installing Docker is just as simple on Fedora as it is on the more user-friendly Ubuntu.
Let me walk you through the process of installing Docker on the latest iteration of Fedora.
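Before we get into the details, here is a rough sketch of the shape of the process on a recent Fedora release. The repository URL and docker-ce package name below assume Docker’s own Fedora repository (Fedora’s stock docker package is an alternative route), so treat this as a preview rather than the walkthrough itself:

    # Add Docker's official Fedora repository
    sudo dnf -y install dnf-plugins-core
    sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo

    # Install the engine, then start it and enable it at boot
    sudo dnf install docker-ce
    sudo systemctl start docker
    sudo systemctl enable docker

    # Verify the installation with a throwaway container
    sudo docker run --rm hello-world

From there, adding your user to the docker group lets you drop sudo for day-to-day use, though be aware that doing so effectively grants that user root-level access.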
This week in open source news, the massive WikiLeaks document release prompts industry leaders to comment, Steven J. Vaughan-Nichols reviews the latest updates and features for Skype for Linux, and more! Keep reading to stay on top of this busy OSS news week!
1) CIA “Vault7” leaks involving weaponized exploits used against operating systems including Linux were likely the work of a “dissatisfied insider.”
Hart is a medical software technology company that improves the ways in which people inside and outside of the industry access and engage with health data.
Founded in 2012, the startup develops HartOS, an API platform that allows healthcare providers and their vendors and partners to use health data from multiple computer systems, in a range of digital formats and in a HIPAA-compliant manner. These source systems may include medical records, hospital information, radiology information, laboratory information, picture archiving, and emergency department systems, among others.
Last month, Hart became a Gold member of The Linux Foundation. Here, Hart Founder Mo Alkady tells us more about his company; how open source is contributing to changes in the healthcare industry; and how they participate in the open source community.
Linux.com: How and why do you use Linux and open source?
Mo Alkady: Utilizing the Linux kernel is crucial to our servers running CentOS. It’s a no-brainer; the open source community has helped tip the scales as giants continue to open up licensing under Apache in order to help maintain their own ecosystems.
Linux.com: Why did you increase your commitment to The Linux Foundation?
Mo Alkady: In an ever-changing world, contributions to and support of the open source community are more crucial than ever to help expedite the developments we make as a technological society.
Linux.com: What interesting or innovative trends in the healthcare industry are you witnessing and what role do Linux and open source play in them?
Mo Alkady: The electronic medical record is crucial to the advancement of healthcare around the world, and we are starting to see the healthcare industry look to the technology sector for solutions and adopt newer standards from vendors. If we can work to build an open source standard, it will further lower the barriers to entry for those who want to help advance the medical space.
Linux.com: How is your company participating in that innovation?
Mo Alkady: We joined as a contributing body to the OpenAPI Initiative. We believe building the proper frameworks that enable others to share data is the key to pushing forward vast technical improvements in healthcare.
Linux.com: How has participating in the Linux and open source communities changed your company?
Mo Alkady: The open source community is key to our culture; our values involve ingenuity, craftsmanship and change.
Switching from one technology to another is always going to be hard. Despite the popularity of Node.js, it comes with its own set of complexities, and the advantages are not always apparent to management, says Trevor Livingston, principal architect at HomeAway, speaking at Node.js Interactive.
Livingston’s previous work at PayPal gives him a unique insight into how to introduce Node into companies. PayPal started out using C++ and later Java before introducing Node. Livingston was recruited to help introduce Node and served as Node Platform Lead at PayPal while also being part of the KrakenJS team. Toward the end of his tenure at PayPal, the company employed 800 Node developers and maintained 100 applications and 1,500 internal modules. The Node platform served over 400 million requests per day. All in all, quite a successful move.
But how do you get there? Livingston admits there is no magic recipe. At PayPal, the team learned as they went and counted on a lot of help, not only from a receptive management but also from the Node community. Livingston recommends leveraging the community, noting that problems you encounter have probably been encountered by others before you.
Apart from support, you are going to need a plan. The first thing you have to take into account is that re-platforming comes at a cost no matter what. Figuring out if the benefits outweigh that cost — that is, finding the true reason you are shifting to Node — is crucial to get started on the right foot. Livingston says it is fundamental to understand what kind of problems you are setting out to solve. It could be you want to increase productivity, allowing your developers to iterate faster; or you may be looking to scale more cheaply. Understanding the problem and aligning your goals to those of the business not only helps you plan better but also makes it easier to explain the aim of the migration to those financing the project.
Once you have a plan, Livingston recommends demonstrating success. Even small successes, such as a single application that improves the overall user experience, can help pave the way for the rest of the platform shift. Using what Livingston describes as a “Build. Measure. Learn.” approach, when you deploy a small application you need to monitor how it works, involve users in the process, and draw conclusions to improve the application in the next iteration.
Livingston also warns against several anti-patterns — the first of which is entering the “Migration” mindset. Thinking in terms of “how to translate a Java class into a JavaScript class,” for example, is the wrong way to go about things, because you end up with Java, or C, or what have you, but just written with a JavaScript syntax. Instead, Livingston suggests “creating isolation and building new,” by breaking down tasks into small pieces and implementing each piece from the ground up using the new platform.
The next challenge is moving your project from one team to all the teams. Being consistent, says Livingston, will help with that. Despite being a fan of “wild west” development, Livingston says constraints are necessary to create reproducible success. Consistency comes from the design choices you make, such as making your project more configuration-based and using pre-existing frameworks. Another important factor is education: you need to ensure that engineers are trained from the outset and then mentor them. Engagement is the third element that brings teams together. To encourage engagement, Livingston recommends developing in the open, sharing code, milestones, and successes with other teams.
Beware Turn-Key Solutions
The anti-pattern to the above is turn-key solutions. There will be a temptation to wrap everything up in a box and tie everybody to it. Doing so, however, traps you in an ecosystem that stifles innovation. Being consistent is about capabilities, not rails, says Livingston. Capabilities allow teams to move to new technologies as they become available; not so if you are married to a rigid framework.
Moving from development to deployment, things start to get really interesting once the code is in production and people start using what has been built. How do you account for performance problems or Node crashing? The first thing to look at is security. Security is the number one concern for a business, so it should be the number one concern for the developers. Because npm is fairly open about what modules get uploaded to the registry, you should keep track of security advisories. There are several tools for this, such as nsp and Snyk.
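As a hedged illustration, checking a project against known advisories is nearly a one-liner with either tool; the commands below assume globally installed CLIs and a hypothetical my-app project:

    # Audit dependencies against the Node Security Project advisories
    npm install -g nsp
    cd my-app && nsp check

    # Snyk can also keep watching your dependency tree over time
    npm install -g snyk
    snyk auth       # one-time authentication with your Snyk account
    snyk test       # scan installed dependencies for known vulnerabilities
    snyk monitor    # snapshot the tree and get alerted on new advisories

Running one of these checks in continuous integration turns security advisories from something you read about into something that fails the build.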
Performance is another area of concern. To monitor performance of your application in the real world, Livingston recommends becoming familiar with APM (application performance management) tools, such as New Relic and AppDynamics, and incorporating performance monitoring as a matter of course when testing.
The final “sticky problem” Livingston mentions is availability. Node, for all intents and purposes, is single process, and when you have an uncaught exception, the process crashes. When the process crashes while handling a request, the request hangs for the user and the state of the program becomes unknown. This requires you to become familiar with how frameworks handle errors. However, in Livingston’s experience, 99 percent of all crashes are caused by developers ignoring errors in callbacks.
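A minimal sketch of that failure mode, with a hypothetical config file path:

    const fs = require('fs');

    // Anti-pattern: the error argument is ignored. If the read fails,
    // data is undefined, JSON.parse() throws inside the callback, and
    // the uncaught exception takes down the whole Node process.
    fs.readFile('/etc/app/config.json', (err, data) => {
      const config = JSON.parse(data);
      console.log(config);
    });

    // Better: handle the error explicitly and keep the process alive.
    fs.readFile('/etc/app/config.json', (err, data) => {
      if (err) {
        console.error('could not load config:', err.message);
        return; // fall back to defaults, return a 500, and so on
      }
      console.log(JSON.parse(data));
    });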
The anti-pattern here is being single-minded about your approach. Unfortunately, Node is not a silver bullet that will solve all your development and deployment problems, and Livingston suggests a holistic approach to problem solving.
Down the line, the decisions you make today regarding design choices are going to affect you tomorrow. Livingston warns against relying on peer dependencies, for example, because they make migration much harder in the future. You must also be wary of globals. He also advises against making assumptions about your upstream: if you have something like an Express middleware and it has expectations about its upstream, you no longer have decoupled software. As for continuation-local storage, for Livingston this is definitely a no-no. He describes it as “magic” of the blackest kind.
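To make the upstream point concrete, here is a hedged sketch in Express-middleware style; the function and property names are hypothetical, not taken from Livingston’s talk:

    // Anti-pattern: silently assumes an upstream auth middleware has
    // already populated req.user. If it has not, this line throws.
    function requireAdmin(req, res, next) {
      if (req.user.role === 'admin') return next();
      res.status(403).end();
    }

    // Better: validate your own inputs instead of trusting the upstream.
    function requireAdminSafe(req, res, next) {
      if (!req.user) return res.status(401).end(); // no auth ran upstream
      if (req.user.role !== 'admin') return res.status(403).end();
      next();
    }

The second version works no matter what ran (or failed to run) before it, which is what keeps the middleware decoupled.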
Moving on from design choices, Livingston recommends considering ownership carefully. You must think about who is going to maintain each piece of code in the future. The same goes for support: you should focus on self-sufficiency. That said, the anti-pattern here is hand-holding, because people learn through failing. It is a good idea to establish early on that it is okay to fail and that failure is a valuable educational tool.
One last thing that can help you succeed in moving your platform over to Node is being transparent with management, especially with regard to inner source. Inner source is the concept of sharing code across teams and contributing code that other teams can use.
Management may not understand why a developer is not working on their assigned task 100 percent of the time. However, a developer may be improving the productivity of another team in a substantial way, and other teams may in turn help the developer’s original team with its own issues. The idea is that a certain degree of inner source can help improve productivity across the board. This concept, which is obvious in open source circles, must often be explained to management.
For more details, watch the complete presentation below:
If you’re interested in speaking at or attending Node.js Interactive North America 2017 – happening October 4-6 in Vancouver, Canada – please subscribe to the Node.js community newsletter to keep abreast of dates and deadlines.
There is an adage, not quite yet old, suggesting that compute is free but storage is not. Perhaps a more accurate and, as far as public clouds are concerned, apt adaptation of this saying might be that computing and storage are free, and so is inbound networking within a region, but moving data across zones in a public cloud is brutally expensive, and it is even more costly spanning regions.
So much so that, at a certain scale, it makes sense to build your own datacenter and create your own infrastructure hardware and software stack that mimics the salient characteristics of one of the big public clouds. What that tipping point in scale is really depends on the business and the sophistication of the IT organization that supports it; Intel has suggested it is somewhere around 1,200 to 1,500 nodes. But clearly, just because a public cloud has economies of scale does not mean that it passes all of those benefits on to customers. One need only look as far as the operating profits of Amazon Web Services to see this. No one is suggesting that AWS does not provide value for its services. But in its last quarter, it brought nearly $1 billion to its middle line out of just under $3.5 billion in sales – and that is software-class margins for a business that is very heavily into building datacenter infrastructure.
Some companies, say the folks that run the OpenStack project, are ricocheting back from the public cloud to build their own private cloud analogues, and for economic reasons.
The Linux kernel is the core of all Android devices, and nearly a third of all Internet traffic rides on the openly developed software stack of just one company, Netflix. (Read the excellent article in Time magazine about this.) How does the choice of using open source software as part of a project plan affect the amount and type of risk to a project within an organization?
Risk is both a perception and a reality. Tools help us move from perception toward reality the same way good thermometers helped us move from the very generalized terms hot and cold to specific, quantifiable temperatures. Over time we’ve adopted different standards and techniques for discussing specific temperatures, depending on the audience and each standard’s limitations. Kelvin, Celsius, Fahrenheit, and even RealFeel are now established standards for measuring temperature.
Aside from 5G and the Internet of Things (IoT), the third acronym on everyone’s lips at Mobile World Congress 2017 was MEC, which stands for mobile edge computing. But where exactly is the edge? The answers are all over the map, but they paint a picture of network architectures that are becoming more generic.
Speaking before the show, Nurit Sprecher, a principal architect at Nokia who heads up the ETSI ISG MEC group, said, “MEC is about providing cloud computing at the edge of the network, characterized by low latency and high bandwidth. We’re talking about distributed cloud.”
Blockchain is presently at the peak of Gartner’s Hype Cycle, which means the next stop is the Trough of Disillusionment. In supply chain circles the technology is suddenly drawing serious interest, in part because of IBM’s recent push to go public with pilots including one with Maersk and another with Walmart. It has also begun to feature regularly in conversations I’m having around disruptive technology with C-level supply chain leaders, especially in CPG and retail. It feels a bit like RFID déjà vu.
Reminiscent of RFID, blockchain could one day provide certainty on the exact source of every ingredient in every jar, in every case, on every shelf and at all times. Was your palm oil sustainably sourced? Are the cherries in your ice cream organic? Are the avocados in your salad imported from Mexico? Also reminiscent of RFID, however, is a decent amount of uncertainty about the timing of the business case.
There are several reasons to restrict an SSH user session to a particular directory, especially on web servers, but the obvious one is system security. To lock SSH users into a certain directory, we can use the chroot mechanism.
Change root (chroot) in Unix-like systems such as Linux is a means of separating specific user operations from the rest of the system: it changes the apparent root directory for the current running user process and its child processes to a new root directory, commonly called a chroot jail.
In this tutorial, we’ll show you how to restrict an SSH user’s access to a given directory in Linux. Note that we’ll run all the commands as root; use the sudo command if you are logged into the server as a normal user.
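As a preview of where we are headed, OpenSSH’s built-in ChrootDirectory directive covers the common SFTP-only case; the username alice and the /srv/jail path below are stand-in examples, not values from this tutorial:

    # 1) Create a root-owned jail; sshd insists that the chroot directory
    #    itself is owned by root and not writable by group or others
    mkdir -p /srv/jail/home/alice
    chown root:root /srv/jail
    chmod 755 /srv/jail
    chown alice:alice /srv/jail/home/alice   # the user still needs a writable home

    # 2) Add a Match block to /etc/ssh/sshd_config, then restart sshd:
    #      Match User alice
    #          ChrootDirectory /srv/jail
    #          ForceCommand internal-sftp
    systemctl restart sshd

Dropping the ForceCommand line gives the user a shell jail instead, but that additionally requires copying the shell binary and its shared libraries (see ldd) into the jail.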
Since 1999, The Apache Software Foundation (ASF) has been recognized as a leading source for Open Source software and tools that meet the demand for interoperable, adaptable, and sustainable solutions.
The all-volunteer ASF develops, stewards, and incubates dozens of enterprise-grade Open Source projects that power mission-critical applications in financial services, aerospace, publishing, government, healthcare, research, infrastructure, and more. From Abdera to ZooKeeper, the ASF’s reliable, community-driven software continues to grow dramatically across many categories, including Cloud, IoT and Edge Computing, Artificial Intelligence and Deep Learning, Mobile, and Big Data, where the Apache Hadoop ecosystem dominates the marketplace.
Today, many of the ASF’s 300+ projects serve as the backbone for some of the world’s most visible and widely used applications in Big Data (Cassandra, Hadoop, Spark); Cloud (CouchDB, CloudStack, Mesos); Search and CMS (Derby, Jackrabbit, Lucene/Solr); DevOps and Build Management (Ant, Buildr, Maven); Web Frameworks (Flex, OFBiz, Struts); Servers (HTTP Web Server, Tomcat, Traffic Server); among others.
Come to ApacheCon to learn about tomorrow’s software, today. Find out what’s coming next out of the Apache Incubator that will change the world again. Meet the people that make it happen, and get in on the ground floor of the next wave of innovation.
ApacheCon North America and ApacheCon Big Data will be held at the Miami Intercontinental, May 16th through 18th, 2017.
And on Monday, May 15, we’ll be holding the BarCampApache event, a full-day, unconference-style event where many of the ideas behind Apache projects have been hatched in the past. Details are at http://events.linuxfoundation.org/events/apachecon-north-america/extend-the-experience/barcamp
For the latest information about the event, follow us on Twitter, @apachecon. For interviews and past conference talks, see http://feathercast.apache.org/ and follow @feathercast. For news and announcements, subscribe to the apachecon-discussion mailing list by sending a blank message to apachecon-discuss-subscribe@apache.org, or subscribe to the lower-volume ApacheCon Announce list by sending mail to announce-subscribe@apachecon.com