WebAssembly is a new type of code that can be run in modern web browsers — it is a low-level assembly-like language with a compact binary format that runs with near-native performance and provides languages such as C/C++ with a compilation target so that they can run on the web. It is also designed to run alongside JavaScript, allowing both to work together.
In a nutshell
WebAssembly has huge implications for the web platform: it provides a way to run code written in multiple languages on the web at near-native speed, enabling client apps that previously couldn't have run on the web to do so.
You’ll find negative stereotypes for most jobs. “Ambulance-chasing lawyer.” “Greedy banker.” “Corrupt politician.” One that hits close to home for me? “Ivory tower architect.” You know the type. Focused on pristine diagrams and IT standards instead of real-life concerns. When I was an IT architect, I tried to buck that stereotype. But unfortunately, it’s a reality within many companies, and it is slowing down their required evolution. That doesn’t have to be the case. How can enterprise architects champion the org changes needed to compete with the cloud-natives? Let’s talk about it.
Look at what we ask for from enterprise architects today. I pulled these from public job postings:
With DevOps taking center stage in the software industry, the job role of “DevOps Engineer” is generating plenty of buzz, and today I have some thoughts that can guide you toward becoming a great DevOps engineer.
What is DevOps?
The term DevOps was coined as a combination of DEVelopers and OPerationS. According to Wikipedia:
DevOps is a term used to refer to a set of practices that emphasize the collaboration and communication of both software developers and information technology (IT) professionals while automating the process of software delivery and infrastructure changes. It aims at establishing a culture and environment where building, testing, and releasing software can happen rapidly, frequently, and more reliably.
Digital transformation is happening in every sector today, and if you don’t adapt to changing technology, your business is likely to die in the coming years. With automation as the key, every company nowadays wants to get rid of repetitive tasks and automate as much as possible to increase productivity. This is where DevOps comes into the picture; although it is derived from agile and lean practices, it is still new to the software industry.
With the increasing use of tools and platforms like Docker, AWS, Puppet, and GitHub, companies can easily leverage automation and succeed with it.
What is a DevOps Engineer?
A major part of adopting DevOps is to create a better working relationship between development and operations teams. Some suggestions for doing this include seating the teams together, involving them in each other’s processes and workflows, and even creating one cross-functional team that does everything. In all these methods, Dev is still Dev and Ops is still Ops.
The term DevOps Engineer tries to blur this divide between Dev and Ops altogether and suggests that the best approach is to hire engineers who can be excellent coders as well as handle all the Ops functions. In short, a DevOps engineer can be a developer who can think with an Operations mindset and has the following skillset:
Familiarity and experience with a variety of Ops and Automation tools
Great at writing scripts
Comfortable with dealing with frequent testing and incremental releases
Understanding of Ops challenges and how they can be addressed during design and development
Soft skills for better collaboration across the team
How can you be a great DevOps Engineer?
The key to being a great DevOps Engineer is to focus on the following:
Know the basic concepts of DevOps and get into the mindset of automating almost everything
There are many common Ops pitfalls that developers need to consider while designing software. Reminding developers of these during design and development will go a long way in avoiding these altogether rather than running into issues and then fixing them.
Try to standardize this process by creating a checklist that is part of a template for design reviews.
End to end collaboration and helping others solve the issues
You should be a scripting guru: Bash, PowerShell, Perl, Ruby, JavaScript, Python – you name it. You must be able to write code to automate repeatable processes.
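As a concrete illustration of the automation scripting described above, here is a minimal sketch in Python of a typical repeatable chore worth scripting: cleaning up old log files. The directory layout, file names, and age threshold are hypothetical.

```python
import time
from pathlib import Path


def prune_old_files(directory, max_age_days):
    """Delete files older than max_age_days in directory.

    Returns the sorted names of the files removed, so the caller
    (e.g. a cron job) can log exactly what was cleaned up.
    """
    cutoff = time.time() - max_age_days * 86400  # seconds per day
    removed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)


# Hypothetical usage: keep only the last week of logs.
# prune_old_files("/var/log/myapp", max_age_days=7)
```

Wrapping the chore in a function, rather than a one-off shell pipeline, makes it testable and reusable across projects.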
Factors to measure DevOps success
Deployment frequency
Lead time for code changes
Rollback rate
Usage of automation tools for CI/CD
Test automation
Meeting business goals
Faster time to market
Customer satisfaction %
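Two of the measures listed above, deployment frequency and lead time for code changes, are straightforward to compute once you record a timestamp for each commit and each deploy. Below is a minimal sketch in Python; the deployment records are hypothetical, not taken from any real pipeline.

```python
from datetime import datetime

# Hypothetical records: (commit time, deploy time) per release.
deployments = [
    ("2017-03-01T09:00", "2017-03-01T15:00"),
    ("2017-03-02T10:00", "2017-03-03T10:00"),
    ("2017-03-03T08:00", "2017-03-03T20:00"),
]

FMT = "%Y-%m-%dT%H:%M"


def lead_times_hours(records):
    """Hours from commit to deploy for each release."""
    return [
        (datetime.strptime(d, FMT) - datetime.strptime(c, FMT)).total_seconds() / 3600
        for c, d in records
    ]


def deployment_frequency(records, window_days):
    """Deployments per day over the measurement window."""
    return len(records) / window_days


lead = lead_times_hours(deployments)
print(f"Average lead time: {sum(lead) / len(lead):.1f} h")          # 14.0 h
print(f"Deploys per day: {deployment_frequency(deployments, 3):.1f}")  # 1.0
```

Tracking these numbers over time, rather than as one-off snapshots, is what makes them useful indicators of DevOps success.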
That’s an overview of DevOps and how to be great at it.
As anyone involved with managing an OpenStack deployment quickly learns, cost savings and elimination of time-consuming tasks are among the biggest benefits that the cloud platform provides. However, leaders at many OpenStack-focused organizations, including Canonical, believe that the business technology arena is under such tremendous pressure to keep up as Software-as-a-Service, containers, and cloud platforms proliferate, that the true economics of OpenStack are misunderstood. Simply put, a lot of people involved with OpenStack don’t fully understand what they can get out of the platform and the ecosystem of tools surrounding it.
Mark Baker, OpenStack Product Manager at Canonical, provides useful background on all of this in a recent essay:
“Over the past decade, IT Directors turned to public cloud providers like AWS (Amazon Web Services), Microsoft Azure, and GPC (Google Public Cloud) as a way to offset much of the CAPEX (capital expenses) of deploying hardware and software by moving it to the cloud. They wanted to consume applications as services and offset most of the costs to OPEX (Operating Expenses). Initially, public cloud delivered on the CAPEX to OPEX promise. Moor Insights & Strategy analysts [point to] upwards of 45 percent in capital reductions in some cases, but organizations needing to deploy solutions at scale found themselves locked into a single cloud provider, fluctuating pricing models, and a rigid scale-up model that inhibits the organization’s ability to get the most out of their legacy hardware and software investments. Forward thinking IT directors realized they must disaggregate (put into units) their current data center environments to support scale-out. Consequently, OpenStack was introduced as a public cloud alternative for enterprises wishing to manage their IT operations as a cost-effective private or hybrid cloud environment.”
As Baker notes, it can be very challenging with OpenStack to determine where the exact year-over-year operating costs and benefits of managing the platform reach parity, not just with public cloud alternatives, but with their software licensing and critical infrastructure investments. “In a typical multi-year OpenStack deployment, labor makes up 25% of the overall costs, hardware maintenance and software license fees combined are around 20%, while hardware depreciation, networking, storage, and engineering combine to make-up the remainder,” Baker notes.
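Baker’s percentages make it easy to sketch a rough cost split for a given budget. The total figure below is purely illustrative, not from the article:

```python
# Hypothetical three-year OpenStack TCO, split using the shares Baker
# cites: labor 25%, hardware maintenance + software licenses 20%, and
# the remainder for hardware depreciation, networking, storage, and
# engineering.
total_cost = 1_000_000  # illustrative budget, not from the article

shares = {
    "labor": 0.25,
    "hw_maintenance_and_licenses": 0.20,
    "depreciation_network_storage_engineering": 0.55,
}

breakdown = {item: total_cost * share for item, share in shares.items()}
for item, cost in breakdown.items():
    print(f"{item}: ${cost:,.0f}")
```

Even this back-of-the-envelope split makes Baker’s point visible: the single largest line item is ongoing labor, not up-front hardware or licenses.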
Indeed, the economic tradeoffs between public cloud solutions and platforms like OpenStack are complicated enough that many enterprises are running both types of solutions. As Forrester Research reports in a detailed brief on OpenStack economics, 82 percent of OpenStack deployments exist in parallel to other cloud platforms.
So what are some best practices for organizations that want to better understand the economics of an OpenStack deployment? Here are key thoughts from Forrester and Canonical on managing OpenStack efficiently and understanding its benefits:
Canonical: The only way to fully benefit from OpenStack is by adopting a new model for deploying and managing IT Operations… Building a private cloud infrastructure on OpenStack is an example of the big software challenge. Significant complexity exists in the design, configuration, and deployment of all production-ready OpenStack private cloud projects. While the upfront costs are negligible, the true costs are in the ongoing operations; upgrading and patching the deployment can be expensive.
Forrester: Identify a self-contained project or area of the business. Begin with a pilot project, and use this to build familiarity with OpenStack and any partners with whom you might be working. OpenStack users have stated that working with OpenStack requires significant training in the first six months, especially if you don’t have seasoned OpenStack veterans. Ensure that the pilot — and OpenStack — have a senior champion within the business, such as the chief technology officer. How are other aspects of organizational IT currently managed, and does the rationale behind those historic deployment decisions also apply here? Figure out the right level of vendor support, given your team and your organization’s strategies.
Canonical also offers a free ebook that breaks down the economics of OpenStack in easily understood ways.
“It is important to keep in mind that OpenStack is not a destination, but rather a part of the scale-out journey to becoming cloud native,” said Canonical’s Baker. “CIOs know they must have cloud as part of their overall strategy. From a long-term perspective OpenStack will remain the key driver and enabler for hybrid cloud adoption. However, IT organizations will continue to struggle with service and applications integration while working to keep their operational costs from rising too much.”
In addition to the ebook from Canonical on OpenStack economics, The OpenStack Foundation has a free online webinar on the topic.
Watch Stormy Peters describe how the interactions between companies and open source software communities influence our culture, in this keynote from LinuxCon Europe 2016.
With all of the discussion about source code contributions in open source, sometimes we don’t spend enough time talking about the culture. In her keynote at LinuxCon Europe, Stormy Peters points out that when we say the word “culture,” we sometimes think only about diversity or hiring more women, but culture means more than that. Culture is about how we work, how we think, and how we interact with each other.
It used to be that companies were confused by open source software, and the communities were often skeptical about companies. These days, most of the Internet, most of the web, and most of the world runs on open source software with open source communities and companies working together, Peters points out. She compares the early days of open source to street art. Many people don’t understand why an artist would create a work of art for free, which is something we heard quite often in the early days of open source. Derivative work and building on the work of others is also common in street art along with social norms that a street artist should only make the work better and never deface the work of better artists. This is similar to how we have norms and unspoken ways of working in open source software.
One of the main open source contributions from companies comes in the form of providing careers in open source software. Peters once worried about whether people who are paid by companies to work on open source software would stop doing it when the company stopped paying them. Instead, she found that they might stop working on a project if it doesn’t seem as important anymore, but they’ll probably stay in open source software.
Peters talked about how companies have helped open source software grow and helped many more people get involved, which has also brought us more diversity. While the technology industry as a whole is not very diverse, companies tend to have a more diverse population than open source software, and when they pay people to work, they bring that entire cohort with them. Anecdotally, she knows quite a few women that have started their careers with a paid job and become involved in open source software that way, which mirrors her experience starting in open source as part of her job at HP. These companies influence open source software, but it is a two-way cultural exchange, and we also influence these companies.
The way that we interact with the companies and the individuals around us shapes our culture, and every time we make a decision about whether to interact over the phone, on a mailing list, or have conversations over drinks, we are shaping the culture and the society that we’re creating. “We are the backbone of society right now, so I think it’s really important that we create a culture that is open, as open as our projects,” Peters says.
Watch the complete video below for more about how the interactions between companies and open source software communities influence our culture.
Interested in speaking at Open Source Summit North America on September 11-13? Submit your proposal by May 6, 2017. Submit now>>
Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!
As more people access the Internet from their mobile devices, mobile operators must adapt their networks to accommodate skyrocketing data use and new traffic patterns. To do so, they’re turning to the same principles of software-defined networking (SDN) already finding success in the data center.
Next-generation 5G networks will be built with SDN technologies, revolutionizing telco infrastructure, says Dr. Alex Jinsung Choi, CTO, Executive Vice President and Head of the Corporate R&D Center at SK Telecom. Open source projects such as ONOS and CORD are leading the revolution and provide a good starting point for telco companies in SDN, he says.
“SDN technology has evolved dramatically,” with many commercial and open source solutions now available, Dr. Choi says in the Q&A, below. “Now it is high time to transform our telco infrastructure using the solutions.”
Dr. Choi will give a keynote on “The Road to 5G with Open Source” at Open Networking Summit 2017, to be held April 3-6 in Santa Clara. Here, he discusses how SK Telecom is involved in open source networking, some of the successes the industry has had, and the challenges it faces in 2017.
Linux.com: How is SK Telecom using SDN today?
Dr. Alex Choi: We are using SDN to control our data center network and transport network. In our data center, SDN is used to construct and control leaf-spine fabric networking using a commercial SDN solution. Additionally, we are planning to use SDN to manage the virtual network of our cloud. At this time, we are considering an open source controller and our own open source solution.
We have also built T-SDN, using SDN technology to control the transport network; T-SDN controls Layer 0 and Layer 1 of our transport network. We will try to expand T-SDN to cover bandwidth and VPN control of the transport network.
Linux.com: Which open source networking projects does your organization use and contribute to? Why do you participate? How are you contributing?
Choi: We are working with the ONOS and CORD projects. We joined ONOS in 2015 and are contributing our virtual network solution for data centers and OpenStack, called SONA (Simplified Overlay Networking Architecture). Additionally, we have been contributing to the Open-CORD project. We proposed the M-CORD project and have been leading it with ON.Lab, and we have also contributed the VTN (virtual tunnel networking) module for the CORD infrastructure.
We believe that cooperation with the global community is very important for improving code quality, and open source projects are the best way to achieve that cooperation.
Linux.com: What have been the biggest successes in SDN in the past year, and what do you expect the industry to accomplish in 2017?
Choi: The showcase of the M-CORD project with the 5G use cases is the biggest milestone in the SDN world. In the early stage of SDN, the concept was used by Google for traffic engineering in large-scale L3 networks. Since then, however, SDN has been applied mainly to data center networking, not telco infrastructure.
Recently (two years ago), as TCO reduction became inevitable in the telco industry, the CORD project was born to transform telcos’ central offices into data centers, and the M-CORD project was additionally launched to transform mobile network functions such as IMS and EPC. We expect the M-CORD project to become the real reference architecture for 5G infrastructure.
Linux.com: What will be the biggest challenges for SDN in 2017?
Choi: Six years have passed since SDN was born. During these years, SDN technology has evolved dramatically: many new protocols and stacks have been developed such as FBOSS, SAI, and P4, in addition to OpenFlow, and many commercial solutions have been out in the market, and tons of SDN related open source projects have been launched.
Now it is high time to transform our telco infrastructure using those solutions. The market might not wait any longer. Even though some SDN solutions from major vendors have been successful, the success has come in very limited areas such as data center networking, and the impact has not been significant. The biggest challenge will be to show how SDN can revolutionize the telco infrastructure this year with real use cases.
Linux.com: What’s your advice to individuals and companies getting started in SDN?
Choi: The basic concept of SDN is separating the control plane and data plane of network devices, but we should not conclude that SDN is just using the OpenFlow protocol or deploying white-box switches. These days there are many ways of adopting SDN technologies, including commercial solutions from traditional legacy device vendors and open source solutions from many startups.
We strongly recommend studying reference applications such as ONOS use cases and reference architecture like CORD as the starting point. They have the right use cases and light architecture, and thus it is easy to understand the SDN technologies. Also, they have large communities and it is easy to get help on any troubleshooting.
Learn more about the future of SDN at Open Networking Summit 2017. Linux.com readers can register now with code LINUXRD5 for 5% off the attendee registration. Register now!
GlusterFS is a free and open source network distributed storage file system. Network distributed storage file systems are very popular among high-traffic websites, cloud computing servers, streaming media services, CDNs (content delivery networks), and more. This tutorial shows you how to install GlusterFS on an Ubuntu Linux 16.04 LTS server, configure two-node high-availability storage for your web server, and enable TLS/SSL for WAN-based connections between two data centers.
A single cloud or bare-metal server is a single point of failure. For example, /var/www/html/ can be a single point of failure. Say you deployed two Apache web servers: how do you make sure /var/www/html/ stays in sync on both of them? You do not want to serve different images or data to clients. To keep /var/www/html/ in sync, you need clustered storage: even if one node goes down, the other keeps working. Moreover, when the failed node comes back online, it should sync the missing files in /var/www/html/ from the other server.
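The replicated setup described above can be sketched with GlusterFS’s standard volume commands. This is a minimal sketch, assuming two hypothetical hosts named gluster1 and gluster2, each with a brick directory at /gluster/brick1; the names are placeholders, so adapt them to your environment.

```shell
# On gluster1: add the second node to the trusted storage pool.
gluster peer probe gluster2

# Create a replica-2 volume spanning one brick on each node, then start it.
# Every file written to the volume is stored on both bricks.
gluster volume create gvol0 replica 2 \
    gluster1:/gluster/brick1 gluster2:/gluster/brick1
gluster volume start gvol0

# On each web server: mount the volume over the document root so both
# Apache nodes see the same files; a node that was down resyncs (heals)
# automatically when it rejoins.
mount -t glusterfs gluster1:/gvol0 /var/www/html
```

With replica 2, writes go to both bricks, which is what keeps the document root identical on both web servers.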
In 1998 Red Hat was continuing to gather together names of new allies and prospective supporters for its enterprise Linux. Several more of the usual suspects had joined the party: Netscape, Informix, Borland’s Interbase, Computer Associates (now CA), Software AG. These were the challengers in the Windows software market, the names to which the VARs attached extra discounts.
One Monday in July of that year, Oracle added its name to Red Hat’s list.
“That was a seminal moment,” recalled Dirk Hohndel, now VMware’s Chief Open Source Officer. He happened to be visiting the home of his good friend and colleague, Linus Torvalds — the man for whom Linux was named. A colleague of theirs, Jon “Maddog” Hall, burst in to deliver the news: Their project was no longer a weekend hobby.
Containers make the world go round, whether it’s shipping goods from China or making cat videos on YouTube work properly on your smartphone. Yesterday, Google announced that its Cloud Container Builder is finally available for general use after a year of powering Google App Engine behind “gcloud app deploy”. Now you can build your Docker containers right in the Google Cloud Platform!
Google describes the Cloud Container Builder as “a stand-alone tool for building container images regardless of deployment environment.” Calling it faster and more reliable, Google hopes that users will find it more flexible with its command-line interface, automated build triggers, and build steps.
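As a sketch of how a build might be submitted from that command-line interface (the project ID and image name below are placeholders, and the command reflects the `gcloud container builds` command group from the time of the announcement):

```shell
# Send the Dockerfile and sources in the current directory to
# Container Builder, which builds the image remotely and pushes it
# to Google Container Registry. PROJECT_ID and my-image are placeholders.
gcloud container builds submit --tag gcr.io/PROJECT_ID/my-image .
```

Because the build runs in Google’s infrastructure, no local Docker daemon is needed on the machine that submits it.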