
A Modern Day Front-End Development Stack

Application development methodologies have seen a lot of change in recent years. With the rise and adoption of microservice architectures, cloud computing, single-page applications, and responsive design to name a few, developers have many decisions to make, all while still keeping project timelines, user experience, and performance in mind. Nowhere is this more true than in front-end development and JavaScript.

To help catch everyone up, we’ll take a brief look at the revolution in JavaScript development over the last few years. Next, we’ll look at some of the challenges and opportunities facing the front-end development community. To wrap things up, and to help lead into the next parts of this series, we’ll preview the components of a fully modern front-end stack.

The JavaScript Renaissance

When NodeJS came out in 2009, it was more than just JavaScript on the command line or a web server running in JavaScript. NodeJS concentrated software development around something the community desperately needed: a mature and stable ecosystem focused on the front-end developer. Thanks to Node and its default package manager, npm, JavaScript saw a renaissance in how applications could be architected (e.g., Angular leveraging Observables, or the functional paradigms of React) as well as how they were developed. The ecosystem thrived, but because it was young, it also churned constantly.

Happily, the past few years have allowed certain patterns and conventions to rise to the top. In 2015, the JavaScript community saw the release of a new spec, ES2015, along with an even greater explosion in the ecosystem. The illustration below shows just some of the most popular JavaScript ecosystem elements.

Figure: State of the JavaScript ecosystem in 2017

At Kenzan, we’ve been developing JavaScript applications for more than 10 years on a variety of platforms, from browsers to set-top boxes. We’ve watched the front-end ecosystem grow and evolve, embracing all the great work done by the community along the way. From Grunt to Gulp, from jQuery® to AngularJS, from copying scripts to using Bower for managing our front-end dependencies, we’ve lived it.

As JavaScript matured, so did our approach to our development processes. Building off our passion for developing well-designed, maintainable, and mature software applications for our clients, we realized that success always starts with a strong local development workflow and stack. The desire for dependability, maturity, and efficiency in the development process led us to the conclusion that the development environment could be more than just a set of tools working together. Rather, it could contribute to the success of the end product itself.  

Challenges and Opportunities

With so many choices, and such a robust and blossoming ecosystem at present, where does that leave the community? While having choices is a good thing, it can be difficult for organizations to know where to start, what they need to be successful, and why they need it. As user expectations grow for how an application should perform and behave (load faster, run more smoothly, be responsive, feel native, and so on), it gets ever more challenging to find the right balance between the productivity needs of the development team and the project’s ability to launch and succeed in its intended market. There is even a term for this: analysis paralysis, the difficulty of arriving at a decision due to overthinking and needlessly complicating a problem.

Chasing the latest tools and technologies can inhibit velocity and the achievement of significant milestones in a project’s development cycle, risking time to market and customer retention. At a certain point an organization needs to define its problems and needs, and then make a decision from the available options, understanding the pros and cons so that it can better anticipate the long-term viability and maintainability of the product.

At Kenzan, our experience has led us to define and coalesce around some key concepts and philosophies that ensure our decisions will help solve the challenges we’ve come to expect from developing software for the front end:

  • Leverage the latest features available in the JavaScript language to support more elegant, consistent, and maintainable source code, such as import/export (modules), class, and async/await (see the short sketch following this list).

  • Provide a stable and mature local development environment with low-to-no maintenance (that is, no global development dependencies for developers to install or maintain, and intuitive workflows/tasks).

  • Adopt a single package manager to manage front-end and build dependencies.

  • Deploy optimized, feature-based bundles (packaged HTML, CSS, and JS) for smarter, faster distribution and downloads for users. Combined with HTTP/2, this approach can yield large gains for little investment, greatly improving user experience and performance.
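To make the first point concrete, here is a small sketch of a module written with those language features. It is illustrative only; the './user-api' module and its fetchUser function are hypothetical names invented for this example.

    // profile.js - illustrative sketch only; './user-api' and fetchUser are
    // hypothetical names, not part of any real project
    import { fetchUser } from './user-api';

    export class ProfileLoader {
      userId;

      constructor(userId) {
        this.userId = userId;
      }

      // async/await keeps asynchronous code linear and readable
      async load() {
        const user = await fetchUser(this.userId);
        return `${user.firstName} ${user.lastName}`;
      }
    }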

A New Stack

In this series, our focus is on three core components of a front-end development stack. For each component, we’ll look at the tool that we think brings the best balance of dependability, productivity, and maintainability to modern JavaScript application development, and that is best aligned with our desired principles.

Package Management: Yarn

Managing and installing external vendor or internal packages in a dependable and consistently reproducible way is critical to a developer’s workflow. It’s also critical for maintaining a CI/CD (continuous integration/continuous delivery) pipeline. But which package manager do you choose, given all the great options available to evaluate? npm? jspm? Bower? A CDN? Or do you just copy and paste from the web and commit to version control?

Our first article will look at Yarn and how it focuses on being fast and providing stable builds. Yarn accomplishes this by ensuring that the version of a vendor dependency installed today will be the exact same version installed by a developer next week. It is imperative that this process be frictionless and reliable, even when distributed and at scale, because any downtime prevents developers from being able to code or deploy their applications. Yarn aims to address these concerns by providing a fast, reliable alternative to the npm CLI for managing dependencies, while continuing to leverage the npm registry as the host for public Node packages. Plus, it’s backed by Facebook, an organization that has scale in mind when developing its tooling.

Application Bundling: webpack

Building a front-end application, which typically comprises a mix of HTML, CSS, and JS as well as binary formats like images and fonts, can be tricky to maintain and even more challenging to orchestrate. So how does one turn a code base into an optimized, deployable artifact? Gulp? Grunt? Browserify? Rollup? SystemJS? All of these are great options with their own strengths and weaknesses, but we need to make sure the choice reflects the principles we discussed above.

webpack is a build tool specifically designed to package and deploy web applications composed of any kind of asset (HTML, CSS, JS, images, fonts, and so on) into an optimized payload to deliver to users. We want to take advantage of the latest language features like import/export and class to make our code future-facing and clean, while letting the tooling orchestrate the bundling of our code so that it is optimized for both the browser and the user. webpack can do just that, and more!
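As a rough sketch of what that orchestration looks like in practice, here is a minimal webpack configuration. The entry path, output location, and choice of ts-loader are assumptions made for illustration, not a recommended setup.

    // webpack.config.js - a minimal, illustrative configuration
    const path = require('path');

    module.exports = {
      // single entry point that webpack walks to build its dependency graph
      entry: './src/index.ts',
      output: {
        // emit the optimized bundle into ./dist
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js'
      },
      resolve: {
        // let imports omit these extensions
        extensions: ['.ts', '.js']
      },
      module: {
        rules: [
          // transpile TypeScript (and newer JavaScript) before bundling
          { test: /\.ts$/, use: 'ts-loader', exclude: /node_modules/ }
        ]
      }
    };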

Language Specification: TypeScript

Writing clean code in and of itself is always a challenge. JavaScript, a dynamic and loosely typed language, has afforded developers a medium to implement a wide range of design patterns and conventions. Now, with the latest JavaScript specification, we see more solid patterns from the programming community making their way into the language. Support for features like import/export and class have brought a fundamental paradigm shift to how a JavaScript application can be developed, and can help ensure that code is easier to write, read, and maintain. However, there is still a gap in the language that generally begins to impact applications as they grow: maintainability and integrity of the source code, and predictability of the system (the application state at runtime).

TypeScript is a superset of JavaScript that adds type safety, access modifiers (private and public), and newer features from the next JavaScript specification. The safety of a more strictly typed language can help promote and then enforce architectural design patterns by using a transpiler to validate code before it even gets to the browser, which helps reduce developer cycle time while also making the code self-documenting. This is particularly advantageous because, as applications grow and change happens within the codebase, TypeScript can help keep regressions in check while adding clarity and confidence to the code base. IDE integration is a huge win here as well.
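As a small, hypothetical illustration of what that safety buys you, the snippet below uses an interface, typed method signatures, and a private field. The Account shape and AccountService are invented for this sketch, not taken from any real codebase.

    // accounts.ts - illustrative sketch; Account and AccountService are invented
    interface Account {
      id: string;
      balance: number;
    }

    class AccountService {
      // 'private' is enforced by the transpiler, documenting intent in the code
      private accounts = new Map<string, Account>();

      deposit(id: string, amount: number): Account {
        const account = this.accounts.get(id);
        if (!account) {
          throw new Error(`Unknown account: ${id}`);
        }
        account.balance += amount;
        return account;
      }
    }

    // new AccountService().deposit('abc', '10');
    // ^ rejected by the transpiler before it ever reaches the browser,
    //   because the string '10' is not a number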

What About Front-End Frameworks?

As you may have noticed, so far we’ve intentionally avoided recommending a front-end framework or library like Angular or React, so let’s address that now.

Different applications call for different approaches to their development based on many factors like team experience, scope and size, organizational preference, and familiarity with concepts like reactive or functional programming. At Kenzan, we believe evaluating and choosing any ES2015/TypeScript compatible library or framework, be it Angular 2 or React, should be based on characteristics specific to the given situation.  

If we revisit our illustration from earlier, we can see a new stack take form that provides flexibility in choosing front-end frameworks.

Figure: A modern stack that offers flexibility in front-end frameworks

Below this upper “view” layer is a common ground that can be built upon by leveraging tools that embrace our key principles. At Kenzan, we feel that this stack converges on a space that captures the needs of both user and developer experience. This yields results that can benefit any team or application, large or small. It is important to remember that the tools presented here are intended for a specific type of project development (front-end UI application), and that this is not intended to be a one-size-fits-all endorsement. Discretion, judgement, and the needs of the team should be the prominent decision-making factors.

What’s Next

So far, we’ve looked back at how the JavaScript renaissance of the last few years has led to a rapidly-maturing JavaScript ecosystem. We laid out the core philosophies that have helped us to meet the challenges and opportunities of developing software for the front end. And we outlined three main components of a modern front-end development stack. Throughout the rest of this series, we’ll dive deeper into each of these components. Our hope is that, by the end, you’ll be in a better position to evaluate the infrastructure you need for your front-end applications.

We also hope that you’ll recognize the value of the tools we present as being guided by a set of core principles, paradigms, and philosophies. Writing this series has certainly caused us to put our own experience and process under the microscope, and to solidify our rationale when it comes to front-end tooling. Hopefully, you’ll enjoy what we’ve discovered, and we welcome any thoughts, questions, or feedback you may have.

Next up in our blog series, we’ll take a closer look at the first core component of our front-end stack—package management with Yarn.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt, jQuery, and webpack are trademarks of the JS Foundation.

DevOps Fundamentals, Part 2: The Value Stream

We’re continuing our preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. The online, self-paced course is presented through short videos and provides basic knowledge of the process, patterns, and tools used in building and managing a Continuous Integration/Continuous Delivery (CI/CD) pipeline. In the first article last week, we talked about high-performing organizations and the type of Continuous Delivery that involves deployment automation and high throughput and stability.

But, we can’t really talk about Continuous Delivery without understanding the value stream. So, I will spin through that to make sure we are on the same page. The value stream is “the sequence of activities an organization undertakes to deliver upon a customer request.” That’s pretty obvious. If we are going to build a Continuous Delivery pipeline or flow, we really need to understand some data points, and particularly the difference between Lead Time and Cycle Time.

Different authors differ on what Lead Time means, but here we’ll define it as “what it takes to get a piece of work all the way through the system.” Cycle Time, by contrast, is “how often a part or product is completed by a process, as timed by observation.” Its clock starts when the work begins and stops when the item is ready for delivery, which makes Cycle Time the more mechanical measure of the process’s capability.

Deployment Lead Time is where we really want to focus on the tool chain: the things we know we can improve, such as automation, testing, and repeatable processes. Process times should be reasonably predictable. So, you really need to figure out your particular Lead Time or Deployment Lead Time, and how you are going to track it.
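As a rough illustration of the kind of tracking being described, the sketch below computes a deployment lead time from two timestamps. The WorkItem shape and its field names are assumptions made for this example, not something defined in the course.

    // leadTime.ts - illustrative sketch; WorkItem and its fields are assumptions
    interface WorkItem {
      committedAt: Date;  // when the change entered the pipeline
      deployedAt: Date;   // when the change reached production
    }

    // deployment lead time: elapsed hours from commit to production deploy
    function leadTimeHours(item: WorkItem): number {
      const elapsedMs = item.deployedAt.getTime() - item.committedAt.getTime();
      return elapsedMs / (1000 * 60 * 60);
    }

    // example: committed Monday 09:00, deployed Tuesday 15:00 -> 30 hours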

In Effective DevOps (which is a really good book), Jennifer Davis and Katherine Daniels say: “Continuous integration is the process of integrating new code written by developers with a mainline or ‘master’ branch frequently throughout the day. This is in contrast to having developers work on independent feature branches for weeks or months at a time, only merging their code back to the master branch when it is completely finished.”

And, there are tools to allow people to be much more effective, to be doing parallel work, creating branches and feature branches. The key points here are:

  • Integration

  • Testing

  • Automation

  • Fast feedback

  • Multiple developers

You cannot really talk about continuous anything, and certainly not Continuous Integration, without quoting Martin Fowler, who is one of the original “Agile Manifesto” signers. Fowler says:

“Continuous integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily — leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.”

In the next article, we’ll take this one step further and look at the difference between Continuous Delivery and Continuous Deployment.

Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Cluster Schedulers

This post aims to understand:

1. the purpose of schedulers the way they were originally envisaged and developed at Google
2. how well (or not) they translate to solve the problems of the rest of us
3. why they come in handy even when not running “at scale”
4. the challenges of retrofitting schedulers into existing infrastructures
5. running hybrid deployment artifacts with schedulers
6. why, where I work, we chose Nomad over Kubernetes
7. the problems yet to be solved
8. the new problems these new tools introduce
9. what the future holds for us

This is an embarrassingly long post and Medium doesn’t allow me to create links to subsections, so searching for titles that interest you is probably your best bet to get through this.

Read more at Medium

IBM’s Plan to Encrypt Unthinkable Amounts of Sensitive Data

Data breaches and exposures all invite the same lament: if only the compromised data had been encrypted. Bad guys can only do so much with exfiltrated data, after all, if they can’t read any of it. Now, IBM says it has a way to encrypt every level of a network, from applications to local databases and cloud services, thanks to a new mainframe that can power 12 billion encrypted transactions per day.

The processing burden that comes with all that constant encrypting and decrypting has prevented that sort of comprehensive data encryption at scale in the past. Thanks to advances in both hardware and software encryption processing, though, IBM says that its IBM Z mainframe can pull off the previously impossible. If that holds up in practice, it will offer a system that is both accessible for users and far more secure than what is currently possible.

Read more at WIRED

Linux 4.13 RC1 Arrives: ‘Get Testing’ Says ​Linus Torvalds

Linus Torvalds took the wraps off the first Linux 4.13 kernel release candidate on Saturday, a day ahead of its expected release.

The new release candidate (RC) comes a fortnight after the stable release of Linux 4.12, which was one of the biggest updates in the kernel’s 25-year history. That kernel also got its first update, to 4.12.1, last week.

“This looks like a fairly regular release, and as always, rc1 is much too large to post even the shortlog for,” wrote Torvalds.

“Once again, the diffstat is absolutely dominated by some AMD gpu header files, but if you ignore that, things look pretty regular, with about two thirds drivers and one third “rest” (architecture, core kernel, core networking, tooling).”

Read more at ZDNet

Quantum Computing in the Enterprise: Not So Wild a Dream

We discussed these trends with David Schatsky, of the Deloitte University think tank, who has recently written on the state of quantum, and pressed him to predict quantum computing’s next important milestone toward commercial viability. Such is the elusive nature of the technology, and so well known is how difficult progress has been over its 30 years of existence, that Schatsky swathed his response in caveats.

“I’ll only give you a guess if you include that nobody really has an idea, especially me,” he said good-naturedly. “But I think what we’re likely to see is answers to questions arrived at through the application of quantum computing in a laboratory setting first. It could be some kind of research question that a quantum computer has been especially designed to answer, in an R&D kind of setting. I wouldn’t be shocked if we see things like that in a couple of years.”

Actual commercial viability for quantum computing is probably in the 15-year time frame, he said, adding that while quantum computing is expected to be used for somewhat tightly focused analytical problems, “if quantum computing becomes a really commercially accessible platform, these things have a way of creating a virtuous cycle where the capability to solve problems can draw new problem types and new uses for them. So I think we may be able to use them in ways we can’t imagine today.”

More immediate impact from quantum could come in the form of hybrid strategies that merge HPC systems with quantum computing techniques, Schatsky said, attacking HPC-class problems with the infusion of “quantum thinking.”

Read more at EnterpriseTech

Why You Should Become a SysAdmin

Chances are good that you are already an administrator for some systems you own, and you do it for free because that’s just how it goes these days. But there are employers willing and eager to pay good money for someone to help administer their systems. We’re currently near zero unemployment in system and network administration, and the Bureau of Labor Statistics projects continued 9% growth in the field through 2024.

What about automation, you ask. Perhaps you’ve heard sysadmins say how they intend to automate away their entire job, or how they automated their predecessor’s job in a single shell script. How often have you heard of that actually succeeding? When the job is automation, there is always more to automate.

If you attend or watch videos of sysadmin conferences, you’ll see a field that needs new blood. Not only is there a distinct lack of younger people, but also fairly extreme gender and racial imbalances. While those are topics for a different article, diversity is well proven to improve resilience, problem-solving, innovation, and decision-making—things of great interest to sysadmins.

Read more at OpenSource.com

Open Source Artificial Intelligence Projects For GNU/Linux

Artificial intelligence is becoming more ingrained in the consumer market. Microsoft has Cortana, Apple has Siri, and Amazon has Alexa as self-learning artificial intelligence projects. Self-driving cars are now becoming a reality thanks to AI driving technology. Even the marketing industry is taking advantage of self-learning AI, as shown by Andy Fox of Element 7 Digital.

The disappointing reality of mainstream artificial intelligence is that it is being dominated by proprietary software. Industries may not give up their secrets so easily, which is why the open source community needs to support free AI projects that currently exist.

Why Use Linux?

Linux is not a household name for the majority of end users, but it is widely appreciated by web hosts, researchers, and programmers. The security of Linux is much greater than that of Windows or OSX, and it holds no nasty surprises since the source code is open for anyone to inspect. It is also the most portable operating system, since the kernel can be compiled for just about any architecture.

Considering the openness and security of Linux, wouldn’t you prefer that your self-driving car use a more secure operating system? Even Google has been bitten by the Linux bug and runs its own Ubuntu variant, Goobuntu.

Some Of The AI Projects For GNU/Linux

Lovers of FOSS and Linux will be pleased to know that there is a plethora of AI projects available for Linux. Most of these projects are machine learning libraries that are also cross-platform, running on Windows, OSX, or BSD variants.

Mycroft AI

Mycroft is the first project that aims to be an open source competitor to assistants like Siri or Cortana. Dubbed the “AI For Everyone,” it is designed to run on any platform, including automobiles or a Raspberry Pi. The framework is designed to learn from voice commands and will share the information with the project to help develop a better AI. The source code can be run on any device that has a Python interpreter.

OpenNN

The Open Neural Networks Library (OpenNN) is an open source C++ library used specifically for deep machine learning. Its architecture uses several layers of processing units for analytical learning. It supports acceleration with OpenMP and NVIDIA’s CUDA.

OpenCyc

OpenCyc is one of the older AI projects and has been in production since 2001. It is a general knowledge AI that is particularly useful for trivia games, understanding text, and learning knowledge within specific domains.

NuPIC

NuPIC is an AI learning framework that can be used from Python, C++, Java, Clojure, Go, or JavaScript. It gathers analytics from live data streams to recognize time-based patterns, making it ideal for detecting anomalies within live data. Its HTM (Hierarchical Temporal Memory) design is inspired by neuroscience.

Apache SystemML

Apache’s SystemML is an artificial intelligence framework that is available for R and Python. It is designed for big-data systems using high-level mathematical equations. It is currently being used by large industries like automotive or airport traffic control.

Deeplearning4j

Deeplearning4j (Deep Learning for Java) is one of the leading open source AI libraries for Java and Scala. It is suitable for business applications and may be accelerated by CPUs or GPUs.

Caffe

Caffe boasts being one of the fastest deep learning frameworks. It is ideal for research projects needing quick processing of data and hardware acceleration. Its modular design allows it to be easily forked or extended, and it has already been deployed in thousands of other projects.

H2O

H2O is designed for advanced decision making in large industries. It supports AI methods like gradient boosting, random forests, and generalized linear models.

MLlib

MLlib is designed to run on Hadoop clusters and other distributed computing platforms. It comes with a variety of advanced algorithms and is compatible with Python, Java, Scala, and R.

Since AI is becoming such a hot trend these days, it is inevitable that more open source projects will keep appearing. As more large corporations realize the benefits of using Linux, we should expect to see more corporate funding for these open source projects as well. And given Linux’s portability, it may become the operating system of choice for AI solutions in embedded IoT devices in the near future.

This Week in Open Source: OSS as New Normal in Data, New Linux Foundation Kubernetes MOOC

This week in open source and Linux news, Hortonworks’ CTO considers why open source is the new normal in analytics, a new Linux Foundation edX MOOC is called a “no-brainer,” and more! Read on for the top headlines of the week.

1) Hortonworks CTO unpacks how open source data architectures are “now considered mainstream in the IT environments and are widely deployed in live production in several industries.”

Open Source Is The New Normal In Data and Analytics – Forbes

2) Steven J. Vaughan-Nichols calls the new Linux Foundation Kubernetes MOOC a “no-brainer.”

Linux Foundation Offers Free Introduction to Kubernetes Class – ZDNet

3) “Lyft’s move is part of a greater trend among tech companies to open-source their internal tools for performing machine learning work.”

Lyft to Open-Source some of its AI Algorithm Testing Tools – VentureBeat

4) The Linux Foundation has become a catalyst for the shift toward network functions virtualization (NFV) and software-defined networking (SDN).

How is The Linux Foundation Shaping Telecom? – RCRWireless News

5) You can now download a flavor of the popular Linux distribution to run inside Windows 10.

Ubuntu Linux is Available in the Windows Store – Engadget

How Open Source Took Over the World

Going way back, pretty much all software was effectively open source. That’s because it was the preserve of a small number of scientists and engineers who shared and adapted each other’s code (or punch cards) to suit their particular area of research. Later, when computing left the lab for the business, commercial powerhouses such as IBM, DEC and Hewlett-Packard sought to lock in their IP by making software proprietary and charging a hefty license fee for its use.

The precedent was set and up until five years ago, generally speaking, that was the way things went. Proprietary software ruled the roost and even in the enlightened environs of the INQUIRER office mention of open source was invariably accompanied by jibes about sandals and stripy tanktops, basement-dwelling geeks and hairy hippies. But now the hippies are wearing suits, open source is the default choice of business and even the arch nemesis Microsoft has declared its undying love for collaborative coding.

But how did we get to here from there? Join INQ as we take a trip along the open source timeline, stopping off at points of interest on the way, and consulting a few folks whose lives or careers were changed by open source software.

Read more at The Inquirer