
The Risks of DNS Hijacking Are Serious and You Should Take Countermeasures

Editor’s Note: In a separate post, Lucian Constantin explains how a researcher hijacked nameservers for the .io top-level domain and what exposures that incident surfaced about registries for country-code top-level domains.

Over the years hackers have hijacked many domain names by manipulating their DNS records to redirect visitors to malicious servers. While there’s no perfect solution to prevent such security breaches, there are actions that domain owners can take to limit the impact of these attacks on their Web services and users.

Just last Friday, attackers managed to change the DNS records for 751 domain names that had been registered and managed through Gandi.net, a large domain registrar. Visitors to the affected domains were redirected to an attacker-controlled server that launched browser-based exploits to infect computers with malware.

Read more at The New Stack

To the Moon? Blockchain’s Hiring Crunch Could Last Years

In today’s blockchain market, raising money is the easy part.

As the headlines already attest, startups that have sold cryptographic tokens as part of a new wave of fundraisings are struggling to find qualified developers, but it’s a pain also shared by projects building public and private blockchains.

Even the enterprise consortia and corporates looking to cut costs and gain efficiencies through these platforms are not immune.

Now, that may not be a surprise given that it’s such a nascent industry. After all, there are only so many people who really understand the intricacies of blockchain, and they are hard to hire.

But that doesn’t mean companies aren’t finding strategies to attract and retain talent.

Read more at CoinDesk

Facets: An Open Source Visualization Tool for Machine Learning Training Data

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more.



Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data, which, in machine learning, is as important as debugging your model. They can easily be used inside Jupyter notebooks or embedded into webpages. In addition to the open source code, we’ve also created a Facets demo website. The demo allows anyone to visualize their own datasets directly in the browser, without any software installation or setup and without the data ever leaving their computer.

Read more at Google Research Blog

Mageia 6 GNU/Linux Distribution Launches Officially with KDE Plasma 5, GRUB2

After a long wait, the final release of the Mageia 6 GNU/Linux operating system is finally here, and it looks like it comes with a lot of exciting new features and performance improvements.

According to Mageia contributor Rémi Verschelde, development of the major Mageia 6 release took longer than anticipated because the team wanted to transform it into their greatest release yet. Mageia 6 comes more than two years after the Mageia 5 series, and seven and a half months after Mageia 5.1.

“Though Mageia 6’s development was much longer than anticipated, we took the time to polish it and ensure that it will be our greatest release so far,” reads today’s announcement. “We thank our community for their patience, and also our packagers and QA team who provided an extended support for Mageia 5 far beyond the initial schedule.”

Read more at Softpedia

A Modern Day Front-End Development Stack

Application development methodologies have seen a lot of change in recent years. With the rise and adoption of microservice architectures, cloud computing, single-page applications, and responsive design to name a few, developers have many decisions to make, all while still keeping project timelines, user experience, and performance in mind. Nowhere is this more true than in front-end development and JavaScript.

To help catch everyone up, we’ll take a brief look at the revolution in JavaScript development over the last few years. Next, we’ll look at some of the challenges and opportunities facing the front-end development community. To wrap things up, and to help lead into the next parts of this series, we’ll preview the components of a fully modern front-end stack.

The JavaScript Renaissance

When NodeJS came out in 2009, it was more than just JavaScript on the command line or a web server running in JavaScript. NodeJS concentrated software development around something that was desperately needed: a mature and stable ecosystem focused on the front-end developer. Thanks to Node and its default package manager, npm, JavaScript saw a renaissance in how applications could be architected (e.g., Angular leveraging Observables or the functional paradigms of React) as well as how they were developed. The ecosystem thrived, but because it was young it also churned constantly.

Happily, the past few years have allowed certain patterns and conventions to rise to the top. In 2015, the JavaScript community saw the release of a new spec, ES2015, along with an even greater explosion in the ecosystem. The illustration below shows just some of the most popular JavaScript ecosystem elements.

[Figure: State of the JavaScript ecosystem in 2017]

At Kenzan, we’ve been developing JavaScript applications for more than 10 years on a variety of platforms, from browsers to set-top boxes. We’ve watched the front-end ecosystem grow and evolve, embracing all the great work done by the community along the way. From Grunt to Gulp, from jQuery® to AngularJS, from copying scripts to using Bower for managing our front-end dependencies, we’ve lived it.

As JavaScript matured, so did our approach to our development processes. Building off our passion for developing well-designed, maintainable, and mature software applications for our clients, we realized that success always starts with a strong local development workflow and stack. The desire for dependability, maturity, and efficiency in the development process led us to the conclusion that the development environment could be more than just a set of tools working together. Rather, it could contribute to the success of the end product itself.  

Challenges and Opportunities

With so many choices, and such a robust and blossoming ecosystem at present, where does that leave the community? While having choices is a good thing, it can be difficult for organizations to know where to start, what they need to be successful, and why they need it. As user expectations grow for how an application should perform and behave (load faster, run more smoothly, be responsive, feel native, and so on), it gets ever more challenging to find the right balance between the productivity needs of the development team and the project’s ability to launch and succeed in its intended market. There is even a term for this: analysis paralysis, the inability to reach a decision because of overthinking and needlessly complicating the problem.

Chasing the latest tools and technologies can inhibit velocity and the achievement of significant milestones in a project’s development cycle, risking time to market and customer retention. At a certain point an organization needs to define its problems and needs, and then make a decision from the available options, understanding the pros and cons so that it can better anticipate the long-term viability and maintainability of the product.

At Kenzan, our experience has led us to define and coalesce around some key concepts and philosophies that ensure our decisions will help solve the challenges we’ve come to expect from developing software for the front end:

  • Leverage the latest features available in the JavaScript language, such as import/export (ES modules), class, and async/await, to support more elegant, consistent, and maintainable source code (see the sketch after this list).

  • Provide a stable and mature local development environment with low-to-no maintenance (that is, no global development dependencies for developers to install or maintain, and intuitive workflows/tasks).

  • Adopt a single package manager to manage front-end and build dependencies.

  • Deploy optimized, feature-based bundles (packaged HTML, CSS, and JS) for smarter, faster distribution and downloads for users. Combined with HTTP/2, large gains can be made here for little investment to greatly improve user experience and performance.
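
To ground the first principle, here is a minimal sketch, in modern JavaScript, of import/export, class, and async/await working together. The module path './http-client' and the httpGet helper are hypothetical stand-ins for illustration only.

// user-service.js: a hypothetical module illustrating import/export, class, and async/await
import { httpGet } from './http-client'; // './http-client' is an assumed helper module for this sketch

export class UserService {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }

  // async/await keeps asynchronous flows linear and readable
  async fetchUser(id) {
    const response = await httpGet(`${this.baseUrl}/users/${id}`);
    return response.body;
  }
}

A consuming module would then simply write import { UserService } from './user-service'; and let the build tooling resolve the module graph.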

A New Stack

In this series, our focus is on three core components of a front-end development stack. For each component, we’ll look at the tool that we think brings the best balance of dependability, productivity, and maintainability to modern JavaScript application development, and that best aligns with our desired principles.

Package Management: Yarn

The challenge of how to manage and install external vendor or internal packages in a dependable and consistently reproducible way is critical to the workflow of a developer. It’s also critical for maintaining a CI/CD (continuous integration/continuous delivery) pipeline. But which package manager do you choose, given all the great options available? npm? jspm? Bower? A CDN? Or do you just copy and paste from the web and commit to version control?

Our first article will look at Yarn and how it focuses on being fast and providing stable builds. Yarn accomplishes this by ensuring that the version of a vendor dependency installed today will be the exact same version installed by a developer next week. It is imperative that this process be frictionless and reliable, distributed and at scale, because any downtime prevents developers from being able to code or deploy their applications. Yarn aims to address these concerns by providing a fast, reliable alternative to the npm CLI for managing dependencies, while continuing to leverage the npm registry as the host for public Node packages. Plus, it’s backed by Facebook, an organization that has scale in mind when developing its tooling.

Application Bundling: webpack

Building a front-end application, which typically comprises a mix of HTML, CSS, and JS, as well as binary formats like images and fonts, can be tricky to maintain and even more challenging to orchestrate. So how does one turn a code base into an optimized, deployable artifact? Gulp? Grunt? Browserify? Rollup? SystemJS? All of these are great options with their own strengths and weaknesses, but we need to make sure the choice reflects the principles we discussed above.

webpack is a build tool specifically designed to package and deploy web applications composed of any kind of asset (HTML, CSS, JS, images, fonts, and so on) into an optimized payload to deliver to users. We want to take advantage of the latest language features like import/export and class to make our code future-facing and clean, while letting the tooling orchestrate the bundling of our code such that it is optimized for both the browser and the user. webpack can do just that, and more!
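
As a rough illustration of what that orchestration looks like, here is a minimal webpack configuration sketch. The entry point, output path, and loader choices (babel-loader, style-loader, css-loader, file-loader) are assumptions made for this example, not a prescribed setup.

// webpack.config.js: a minimal sketch; file names and loaders are illustrative assumptions
const path = require('path');

module.exports = {
  entry: './src/index.js',                 // root of the application's module graph
  output: {
    path: path.resolve(__dirname, 'dist'), // where the optimized bundle is written
    filename: 'bundle.[chunkhash].js',     // hashed file name for long-term caching
  },
  module: {
    rules: [
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' }, // transpile ES2015+
      { test: /\.css$/, use: ['style-loader', 'css-loader'] },         // bundle styles with the JS
      { test: /\.(png|jpg|woff2?)$/, use: 'file-loader' },             // emit binary assets
    ],
  },
};

Running webpack against a configuration like this emits the deployable artifact into dist/, ready to be served (and to benefit from HTTP/2 delivery, as noted above).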

Language Specification: TypeScript

Writing clean code in and of itself is always a challenge. JavaScript, which is a dynamic language and loosely typed, has afforded developers a medium to implement a wide range of design patterns and conventions. Now, with the latest JavaScript specification, we see more solid patterns from the programming community making their way into the language. Support for features like the use of import/export and class have brought a fundamental paradigm shift to how a JavaScript application can be developed, and can help ensure that code is easier to write, read, and maintain. However, there is still a gap in the language that generally begins to impact applications as they grow: maintainability and integrity of the source code, and predictability of the system (the application state at runtime).

TypeScript is a superset of JavaScript that adds type safety, access modifiers (private and public), and newer features from the next JavaScript specification. The safety of a more strictly typed language can help promote and then enforce architectural design patterns by using a transpiler to validate code before it even gets to the browser, which helps reduce developer cycle time while also being self-documenting. This is particularly advantageous because, as applications grow and change happens within the codebase, TypeScript can help keep regressions in check while adding clarity and confidence to the code base. IDE integration is a huge win here as well.
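
As a brief, hypothetical sketch of what type safety and access modifiers buy you (the Cart and LineItem names are invented for illustration):

// cart.ts: a hypothetical example of TypeScript's type safety and access modifiers
interface LineItem {
  sku: string;
  price: number;
  quantity: number;
}

export class Cart {
  private items: LineItem[] = []; // 'private' hides internal state from consumers

  public add(item: LineItem): void {
    this.items.push(item);
  }

  public total(): number {
    return this.items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  }
}

const cart = new Cart();
cart.add({ sku: 'A-100', price: 9.99, quantity: 2 }); // OK: matches the LineItem shape
// cart.add({ sku: 'A-101' });  // compile-time error: 'price' and 'quantity' are missing
// cart.items.length;           // compile-time error: 'items' is private

The transpiler catches the two commented-out mistakes before the code ever reaches a browser, which is exactly the kind of regression-checking described above.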

What About Front-End Frameworks?

As you may have noticed, so far we’ve intentionally avoided recommending a front-end framework or library like Angular or React, so let’s address that now.

Different applications call for different approaches to their development based on many factors like team experience, scope and size, organizational preference, and familiarity with concepts like reactive or functional programming. At Kenzan, we believe evaluating and choosing any ES2015/TypeScript compatible library or framework, be it Angular 2 or React, should be based on characteristics specific to the given situation.  

If we revisit our illustration from earlier, we can see a new stack take form that provides flexibility in choosing front-end frameworks.

[Figure: A modern stack that offers flexibility in front-end frameworks]

Below this upper “view” layer is a common ground that can be built upon by leveraging tools that embrace our key principles. At Kenzan, we feel that this stack converges on a space that captures the needs of both user and developer experience. This yields results that can benefit any team or application, large or small. It is important to remember that the tools presented here are intended for a specific type of project development (front-end UI application), and that this is not intended to be a one-size-fits-all endorsement. Discretion, judgement, and the needs of the team should be the prominent decision-making factors.

What’s Next

So far, we’ve looked back at how the JavaScript renaissance of the last few years has led to a rapidly-maturing JavaScript ecosystem. We laid out the core philosophies that have helped us to meet the challenges and opportunities of developing software for the front end. And we outlined three main components of a modern front-end development stack. Throughout the rest of this series, we’ll dive deeper into each of these components. Our hope is that, by the end, you’ll be in a better position to evaluate the infrastructure you need for your front-end applications.

We also hope that you’ll recognize the value of the tools we present as being guided by a set of core principles, paradigms, and philosophies. Writing this series has certainly caused us to put our own experience and process under the microscope, and to solidify our rationale when it comes to front-end tooling. Hopefully, you’ll enjoy what we’ve discovered, and we welcome any thoughts, questions, or feedback you may have.

Next up in our blog series, we’ll take a closer look at the first core component of our front-end stack—package management with Yarn.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt, jQuery, and webpack are trademarks of the JS Foundation.

DevOps Fundamentals, Part 2: The Value Stream

We’re continuing our preview of the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. The online, self-paced course is presented through short videos and provides basic knowledge of the process, patterns, and tools used in building and managing a Continuous Integration/Continuous Delivery (CI/CD) pipeline. In the first article last week, we talked about high-performing organizations and the type of Continuous Delivery that involves deployment automation and high throughput and stability.

But, we can’t really talk about Continuous Delivery without understanding the value stream. So, I will spin through that to make sure we are on the same page. The value stream is “the sequence of activities an organization undertakes to deliver upon a customer request.” That’s pretty obvious. If we are going to build a Continuous Delivery pipeline or flow, we really need to understand some data points, and particularly the difference between Lead Time and Cycle Time.

Different authors differ on what Lead Time means, but here we’ll define it as “what it takes to get a piece of work all the way through the system.” Cycle Time, by contrast, is “how often a part or product is completed by a process, as timed by observation.” The clock starts when the work begins and stops when the item is ready for delivery. Cycle Time is the more mechanical measure of process capability. For example, if a feature is requested on Monday, work on it starts Wednesday, and it is ready to ship Friday, the Lead Time is five days while the Cycle Time is three.

Deployment Lead Time is where we really want to focus on the tool chain — the things we know we can improve, such as automation, testing, repeatable functionality, and repeatable processes. Process times should be reasonably predictable. So, you really need to figure out your particular Lead Time or Deployment Lead Time, and how you are going to track it.

In Effective DevOps — which is a really good book — Jennifer Davis and Katherine Daniels say “Continuous integration is the process of integrating new code written by developers with a mainline or “master” branch frequently throughout the day. This is in contrast to having developers work on independent feature branches for weeks or months at a time, only merging their code back to the master branch when it is completely finished.”

And, there are tools to allow people to be much more effective, to be doing parallel work, creating branches and feature branches. The key points here are:

  • Integration

  • Testing

  • Automation

  • Fast feedback

  • Multiple developers

You cannot really talk about continuous anything — and certainly not Continuous Integration — without quoting Martin Fowler, one of the original “Agile Manifesto” signers. Fowler says:

“Continuous integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily — leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible.”

In the next article, we’ll take this one step further and look at the difference between Continuous Delivery and Continuous Deployment.

Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Cluster Schedulers

This post aims to understand:

1. the purpose of schedulers the way they were originally envisaged and developed at Google
2. how well (or not) they translate to solve the problems of the rest of us
3. why they come in handy even when not running “at scale”
4. the challenges of retrofitting schedulers into existing infrastructures
5. running hybrid deployment artifacts with schedulers
6. why, where I work, we chose Nomad over Kubernetes
7. the problems yet to be solved
8. the new problems these new tools introduce
9. what the future holds for us

This is an embarrassingly long post and Medium doesn’t allow me to create links to subsections, so searching for titles that interest you is probably your best bet to get through this.

Read more at Medium

IBM’s Plan to Encrypt Unthinkable Amounts of Sensitive Data

Data breaches and exposures all invite the same lament: if only the compromised data had been encrypted. Bad guys can only do so much with exfiltrated data, after all, if they can’t read any of it. Now, IBM says it has a way to encrypt every level of a network, from applications to local databases and cloud services, thanks to a new mainframe that can power 12 billion encrypted transactions per day.

The processing burden that comes with all that constant encrypting and decrypting has prevented that sort of comprehensive data encryption at scale in the past. Thanks to advances in both hardware and software encryption processing, though, IBM says that its IBM Z mainframe can pull off the previously impossible. If that holds up in practice, it will offer a system that is both accessible for users and far more secure than what is currently possible.

Read more at WIRED

Linux 4.13 RC1 Arrives: ‘Get Testing’ Says Linus Torvalds

Linus Torvalds took the wraps off the first Linux 4.13 kernel release candidate on Saturday, a day ahead of its expected release.

The new release candidate (RC) comes a fortnight after the stable release of Linux 4.12, which was one of the biggest updates in the kernel’s 25-year history. That kernel also got its first update, to 4.12.1, last week.

“This looks like a fairly regular release, and as always, rc1 is much too large to post even the shortlog for,” wrote Torvalds.

“Once again, the diffstat is absolutely dominated by some AMD gpu header files, but if you ignore that, things look pretty regular, with about two thirds drivers and one third “rest” (architecture, core kernel, core networking, tooling).”

Read more at ZDNet

Quantum Computing in the Enterprise: Not So Wild a Dream

We discussed these trends with David Schatsky of the Deloitte University think tank, who has recently written on the state of quantum, and pressed him to predict quantum computing’s next important milestone toward commercial viability. Such is the elusive nature of the technology, and such is the difficulty of progress over its 30 years of existence, that Schatsky swathed his response in caveats.

“I’ll only give you a guess if you include that nobody really has an idea, especially me,” he said good-naturedly. “But I think what we’re likely to see is answers to questions arrived at through the application of quantum computing in a laboratory setting first. It could be some kind of research question that a quantum computer has been especially designed to answer, in an R&D kind of setting. I wouldn’t be shocked if we see things like that in a couple of years.”

Actual commercial viability for quantum computing is probably in the 15-year time frame, he said, adding that while quantum computing is expected to be used for somewhat tightly focused analytical problems, “if quantum computing becomes a really commercially accessible platform, these things have a way of creating a virtuous cycle where the capability to solve problems can draw new problem types and new uses for them. So I think we may be able to use them in ways we can’t imagine today.”

More immediate impact from quantum could come in the form of hybrid strategies that merge HPC systems with quantum computing techniques, Schatsky said, attacking HPC-class problems with the infusion of “quantum thinking.”

Read more at EnterpriseTech