
Future Proof Your SysAdmin Career: Locking Down Security

For today’s system administrators, gaining competencies that move them up the technology stack and broaden their skillsets is increasingly important. However, core skills like networking remain just as crucial. Previously in this series, we’ve provided an overview of essentials and looked at evolving network skills. In this part, we focus on another core skill: security.

With ever more impactful security threats emerging, the demand for fluency with network security tools and practices is increasing for sysadmins. That means understanding everything from the Open Systems Interconnection (OSI) model to the devices and protocols that facilitate communication across a network.


Locking down systems also means understanding the infrastructure of a network, which may or may not be Linux-based. In fact, many of today’s sysadmins serve heterogeneous technology environments where multiple operating systems are running. Securing a network requires competency with routers, firewalls, VPNs, end-user systems, server security, and virtual machines.

Securing systems and networks calls for varying skillsets depending on platform infrastructure, as is clear if you spend just a few minutes perusing, say, a Fedora security guide or the Securing Debian Manual. However, there are good resources that sysadmins can leverage to learn fundamental security skills.

For example, The Linux Foundation has published a Linux workstation security checklist that covers a lot of good ground. It’s aimed at sysadmins and includes discussion of tools that can thwart attacks, including Secure Boot and the Trusted Platform Module (TPM). For Linux sysadmins, the checklist is comprehensive.

The widespread use of cloud platforms such as OpenStack is also introducing new requirements for sysadmins. According to The Linux Foundation’s Guide to the Open Cloud: “Security is still a top concern among companies considering moving workloads to the public cloud, according to Gartner, despite a strong track record of security and increased transparency from cloud providers. Rather, security is still an issue largely due to companies’ inexperience and improper use of cloud services.” A sysadmin with deeply entrenched cloud skills can therefore be a valuable asset.

Most operating systems and widely used Linux distributions feature timely and trusted security updates, and part of a good sysadmin’s job is to keep up with these. Many organizations and administrators shun spin-off and “community rebuilt” platform infrastructure tools because they don’t have the same level of trusted updating.

Network challenges

Networks, of course, present their own security challenges. The smallest holes in implementation of routers, firewalls, VPNs, and virtual machines can leave room for big security problems. Most organizations are strategic about combating malware, viruses, denial-of-service attacks, and other types of hacks, and good sysadmins should study the tools deployed.

Freely available security and monitoring tools can also go a long way toward avoiding problems. Here are a few good tools for sysadmins to know about:

  • Wireshark, a packet analyzer for sysadmins

  • KeePass Password Safe, a free open source password manager

  • Malwarebytes, a free anti-malware and antivirus tool

  • Nmap, a powerful security scanner

  • Nikto, an open source web server scanner

  • Ansible, a tool for automating secure IT provisioning

  • Metasploit, a tool for understanding attack vectors and doing penetration testing

For a lot of these tools, sysadmins can pick up skills by leveraging free online tutorials. For example, there is a whole tutorial series for Metasploit, and there are video tutorials for Wireshark.
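To get a feel for a tool like Nmap, a couple of starter commands go a long way (the addresses here are illustrative):

$ nmap -sV 192.168.1.10              # detect services and versions on open ports
$ nmap -p 22,80,443 192.168.1.0/24   # check specific ports across a subnet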

Also on the topic of free resources, we’ve previously covered a free ebook from the editors at The New Stack called Networking, Security & Storage with Docker & Containers. It covers the latest approaches to secure container networking, as well as native efforts by Docker to create efficient and secure networking practices. The ebook is loaded with best practices for locking down security at scale.

Training and certification, of course, can make a huge difference for sysadmins as we discussed in “7 Steps to Start Your Linux Sysadmin Career.”

For Linux-focused sysadmins, The Linux Foundation’s Linux Security Fundamentals (LFS216) is a great online course for gaining well-rounded skills. The class starts with an overview of security and covers how security affects everyone in the chain of development, implementation, and administration, as well as end users. The self-paced course covers a wide range of Linux distributions, so you can apply the concepts across distributions. The Foundation offers other training and certification options, several of which cover security topics. For example, LFS201 Essentials of Linux System Administration includes security training.

Also note that CompTIA Linux+ incorporates security into training options, as does the Linux Professional Institute. Technology vendors offer some good choices as well; for example, Red Hat offers sysadmin training options that incorporate security fundamentals. Meanwhile, Mirantis offers three-day “bootcamp” training options that can help sysadmins keep an OpenStack deployment secure and optimized.

In the 2016 Linux Foundation/Dice Open Source Jobs Report, 48 percent of respondents reported that they are actively looking for sysadmins. Job postings abound on online recruitment sites, and online forums remain a good way for sysadmins to learn from each other and discover job prospects. So the market remains healthy, but the key for sysadmins is to gain differentiated types of skillsets. Mastering hardened security is surely a differentiator, and so is moving up the technology stack — which we will cover in upcoming articles.

Learn more about essential sysadmin skills: Download the Future Proof Your SysAdmin Career ebook now.


Read more:

Future Proof Your SysAdmin Career: An Introduction to Essential Skills 

Future Proof Your SysAdmin Career: New Networking Essentials

Future Proof Your SysAdmin Career: Locking Down Security

Future Proof Your SysAdmin Career: Looking to the Cloud

Future Proof Your SysAdmin Career: Configuration and Automation

Future Proof Your SysAdmin Career: Embracing DevOps

Future Proof Your SysAdmin Career: Getting Certified

Future Proof Your SysAdmin Career: Communication and Collaboration

Future Proof Your SysAdmin Career: Advancing with Open Source

The Rise of Test Impact Analysis

Test Impact Analysis (TIA) is a modern way of speeding up the test automation phase of a build. It works by analyzing the call-graph of the source code to work out which tests should be run after a change to production code. Microsoft has done some extensive work on this approach, but it’s also possible for development teams to implement something useful quite cheaply.
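As a rough sketch of the idea (not Microsoft’s implementation; the file names and map contents are illustrative), a team could maintain a map from production files to the tests known to exercise them, built from coverage data gathered on an earlier full run:

// Map each production file to the tests that exercise it,
// derived from coverage data collected on a previous full test run.
const testMap: Record<string, string[]> = {
    "src/cart.ts": ["test/cart.test.ts", "test/checkout.test.ts"],
    "src/user.ts": ["test/user.test.ts"],
};

// Given the files changed in a commit, return only the impacted tests.
function impactedTests(changedFiles: string[]): string[] {
    const tests = new Set<string>();
    for (const file of changedFiles) {
        for (const test of testMap[file] ?? []) {
            tests.add(test);
        }
    }
    return Array.from(tests);
}

console.log(impactedTests(["src/cart.ts"]));
// ["test/cart.test.ts", "test/checkout.test.ts"]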

One curse of modern software development is having “too many” tests to run them all prior to check-in. When that becomes true, developers adopt a costly coping strategy: they run no tests on their local workstation and instead rely on tests running later on an integration server. Quite often even those fall into disrepair, which is inevitable when “shift right” becomes normal for a dev team.

Of course, everything that you test pre-integration should immediately be tested post-integration in the Continuous Integration (CI) infrastructure. Even the highest-functioning development teams can experience breakages born of timing alone, as commits land in real time.

Read more at Martin Fowler

Everything Is an HTTPS Interface

In the Linux world everything is a file; in the Serverless world everything is an HTTPS interface.

Serverless applications by their nature are heavily decomposed into a variety of services, such as autonomous functions, object storage, authentication services, document databases, and pub/sub message queues. The interfaces between these services are typically HTTPS. When you’re using the AWS SDK to call an AWS service, the interface it’s calling under the hood is an HTTPS interface. This is true for the majority of cloud platforms, with alternative protocols (WebSockets, MQTT) occasionally used for specific use cases.

In the same way that in Linux you can access all the resources of the underlying machine through the file system, in a serverless world you can access all the resources of the underlying cloud platform through an HTTPS interface.
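To make the parallel concrete, here is a hedged sketch of what an SDK call reduces to under the hood; the endpoint, target header, and JSON shape follow DynamoDB’s documented low-level HTTP API, while the table name and key are illustrative:

// A DynamoDB GetItem call reduced to its essence: a signed HTTPS POST.
// (AWS Signature Version 4 auth headers omitted for brevity.)
async function getUser(id: string) {
    const response = await fetch("https://dynamodb.us-east-1.amazonaws.com/", {
        method: "POST",
        headers: {
            "Content-Type": "application/x-amz-json-1.0",
            "X-Amz-Target": "DynamoDB_20120810.GetItem",
            // ...plus Signature Version 4 auth headers
        },
        body: JSON.stringify({
            TableName: "users",
            Key: { id: { S: id } },
        }),
    });
    return response.json();
}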

Read more at Serverless.Zone

Dumping Windows and Installing Linux Mint, in Just 10 Minutes

One of my older netbook computers, an Acer Aspire V5, is still being used by my partner. It still runs Windows 7, but it has been acting up very badly recently, and I finally decided that rather than spend a few hours trying to get it to limp along a while longer again, I would just trash everything on it and install Linux Mint for her.

Besides the obvious step of dumping Windows, there is another big step for me in this: I am not going to make my usual multi-boot Linux configuration on this netbook. I am only going to install Linux Mint and let it use the entire disk as it sees fit.

The first step is to download the latest Linux Mint installation image from the Download Linux Mint page.

Read more at ZDNet

Containers to Eclipse VMs in Application Platform Space, SDxCentral Survey Says

Enterprises looking to garner more efficiency from their cloud operations are increasingly turning to containers.

SDxCentral recently conducted a survey as part of our 2017 Container and Cloud Orchestration report, and found a spike in container usage. In fact, it appears that containers could surpass virtual machines (VMs) as the application development platform of choice.

One of the more striking takeaways from the survey was the increased use of containers, which surged from just 8 percent in 2016 to 45 percent this year. Of the 55 percent of respondents not currently using containers, 45 percent said they expect to make the move within the next year.

Read more at SDxCentral

Unix: How Random Is Random?

On Unix systems, random numbers are generated in a number of ways and random data can serve many purposes. From simple commands to fairly complex processes, the question “How random is random?” is worth asking.

EZ random numbers

If all you need is a casual list of random numbers, the RANDOM variable is an easy choice. Type “echo $RANDOM” and you’ll get a number between 0 and 32,767 (the largest value a signed 16-bit integer can hold).

$ echo $RANDOM
29366
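
If you need a number in a smaller range, shell arithmetic can trim the value down (your output will vary):

$ echo $(( RANDOM % 100 ))
42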

Of course, this process is actually providing a “pseudo-random” number. 

Read more at NetworkWorld

Linux cksum Command Explained for Beginners (with Examples)

There are times when we download a file (say an ISO image) hosted somewhere on the Internet only to find that it’s not working as expected (or at all). There could be multiple reasons behind this, one among them being file corruption (the file got corrupted during the download process, or the original, hosted file itself was corrupt). But how can you confirm that such corruption has occurred?

Well, there does exist a solution to this problem. In most cases, when a file is originally created, a checksum is computed that is unique to that file. Even a slight change to the file changes the checksum when it is computed again.

So most vendors offer a checksum (or a checksum-like code) corresponding to the file(s) being downloaded. If the file doesn’t behave in the expected way, users can recompute the file’s checksum and compare it with the original provided by the vendor to see whether the file is intact or corrupted. In Linux, there’s a command line tool you can use to create and verify checksums: it’s dubbed cksum.
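
For instance, cksum prints a CRC checksum, the file size in bytes, and the file name (the values shown here are illustrative):

$ cksum linuxmint.iso
1190798766 1485881344 linuxmint.iso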

Read more at HowtoForge

pdd – Tool to Find Date and Time Difference in Linux Command Line

There are occasions when you want to check how many years older someone is than you, how old you are (in days, months, or years), or the countdown to an event or the next flash sale. pdd is a Python-based command line application that calculates exactly these kinds of date and time differences, so there’s no need to go hunting online for date and time calculator websites. In this article, we’ll give you more insight into the pdd tool and teach you how to use it.

Installing pdd

To install pdd in Ubuntu/Debian, we first have to install the dependencies – pdd requires Python 3.5 or newer and the dateutil module.
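
On a Debian-based system, the installation might look like this (assuming pdd is published on PyPI):

$ sudo apt-get install python3-pip
$ sudo pip3 install python-dateutil pdd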

Read more at LinOxide

TypeScript: Our Type of JavaScript

Dynamic typing is a great feature of JavaScript. Variables are capable of handling any type of object, and types are determined on the fly, with types cast implicitly when necessary. However, as an application begins to grow, the dynamic nature of JavaScript increasingly becomes harder to manage. Some of the challenges that large, loosely-typed JavaScript applications present include:

  • Ensuring the proper, strict type comparison across components

  • Capturing the right data type received from the API

  • Having confidence in refactoring

  • Ensuring integrity throughout the application (avoiding “could not call X of undefined” at runtime)

  • Reducing development cycle time by catching errors in the compiler

Every front-end developer has had the frustrating experience of delving backwards through a code base for a bug fix to determine what, exactly, a mysterious var is defined as. Ensuring types between components cuts off these time-consuming issues before they occur. It helps reduce the margin for error and improves readability, allowing the opportunity to create elegant JavaScript with minimal runtime errors. Which brings us to TypeScript—a superset of JavaScript that lets you add in strongly-typed classes to your front-end application.

In our previous articles, we looked at two of the core components of our modern front-end stack: Yarn for package management and webpack for bundling modules. In this post we’ll look at the third and final component: TypeScript. We will walk through what TypeScript is, what some of its contemporaries provide, and why TypeScript might be a good candidate for your own stack. We’ll also take a look at some of the fundamental principles of TypeScript and how they can improve the workflow and readability over traditional ES5 and ES2015 syntaxes.

What is TypeScript?

Developed by Microsoft, TypeScript is an open-source language and compiler that runs both in the browser (through SystemJS with transpiling on the fly) and on NodeJS. Its intention is to address JavaScript’s shortcomings for large-scale application development.

TypeScript was designed with JavaScript in mind, and as a superset of JavaScript it accepts regular JavaScript syntax. It’s not an entirely new language, but a strictly-typed superset of JavaScript that compiles down to plain vanilla JavaScript. TypeScript introduces features such as static typing, data encapsulation through classes (in ES2015) as well as through interfaces, decorators, and private, public, and protected variables. It ultimately attempts to bring the advantages of strong typing commonly found in traditional object-oriented languages like Java to JavaScript.

TypeScript follows the standards provided by ECMAScript’s governing body, TC39. As such, it receives features introduced to TC39 before major releases of ECMAScript. For example, import/export in TypeScript came before ES2015, and interfaces in TypeScript are currently at Stage 2 in the TC39 adoption process for ECMAScript.

TypeScript Contemporaries

As TypeScript’s popularity has grown, it’s important to note that it’s not the only technology that has aimed at improving JavaScript’s readability. There are several contemporaries of the language that share the same goal.

Babel

Out of all of TypeScript’s contemporaries, Babel may be most in line with TypeScript’s offerings. Rather than introduce new syntax (flow types or static typing), Babel lets developers write next-generation JavaScript that compiles down to a browser-consumable form of JavaScript. Babel’s power comes from its plugin ecosystem: instead of imposing a defined set of rules for writing code, Babel allows you to pick your preferred syntax through plugins and compile to an appropriate JavaScript syntax.

Dart

Dart is open-source software developed by Google and later approved as an ECMA standard. It is an object-oriented, C#-like language that compiles to plain JavaScript using the dart2js compiler. It also supports interfaces, mixins, and optional typing.

While Dart has similarities to TypeScript, its community is unfortunately not as large as TypeScript’s. Dart has a much smaller ecosystem of libraries, packages, and documentation compared to TypeScript or JavaScript. It follows a single object paradigm in the form of classes. A telling testament to the prominence of TypeScript in the community is that, even though Angular is also developed by Google, it uses TypeScript rather than Dart.

Flow

Flow was developed by Facebook to address TypeScript 1.x shortcomings and is built with bug finding in mind. Flow is not a compiler but a type checker: it provides static types, but it doesn’t support data encapsulation the way TypeScript does, offering no classes, interfaces, or constructors of its own.

CoffeeScript

CoffeeScript is a JavaScript alternative language that was inspired by Ruby and Haskell. Its goal was to enhance brevity. Several of JavaScript’s updates were inspired by functionality introduced by CoffeeScript, including arrow functions. The language has a syntax very similar to JavaScript’s, but it reduces verbosity by providing arrow functions (->), reducing parentheses, and placing an emphasis on significant whitespace. The compiler for CoffeeScript has been written in CoffeeScript since version 0.5 and can be run in the Node environment.

Why TypeScript

TypeScript plays an important role as an application begins to grow. The complexity of large applications typically leads to less readability and greater confusion when walking through code. Types then become essential for understanding object composition. Data encapsulation through interfaces and classes—as well as code modularity—in TypeScript are unmatched in the JavaScript ecosystem.

Since TypeScript is a typed superset of JavaScript, there’s no concern about using multiple languages, as it will accept normal JavaScript syntax (unlike languages such as Dart or CoffeeScript). Due to the static typing found in TypeScript, refactoring old code bases is significantly easier with TypeScript as compared to JavaScript. And finding errors at compile time rather than run time will ultimately improve the lifecycle efficiency of any front-end application.

Due to the object-oriented, strongly-typed nature of TypeScript, it lends itself to easier adoption from developers with more backend experience (Java, C#, PHP). The TypeScript syntax gives backend developers more insight and a better grasp of the language, and helps them take on front-end projects with a lower learning curve.

Because TypeScript is a superset of the JavaScript language, and because of its strict adherence to the standards released by TC39, JavaScript features usually make an appearance in TypeScript prior to their official release. This creates a nice release flow, as it allows the developer community the opportunity to use features yet to be released in JavaScript.

A Closer Look at TypeScript

Let’s take a closer look at some of the features of TypeScript that give it a strong backbone for developing typed JavaScript.

From ES5 to TypeScript

To see how TypeScript has evolved JavaScript, it’s helpful to look at how you would create a basic class-like inheritance in ES5 and in ES2015, and then finally in TypeScript.

Since ES5 does not have typical class-like structures, we have to implement that functionality ourselves. Let’s examine how we would create a basic User “class” in ES5 using a named function expression:
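The following is a minimal sketch of such a “class” (the name and email fields are illustrative):

var User = function User(name, email) {
    this.name = name;
    this.email = email;
};

User.prototype.getName = function () {
    return this.name;
};

var user = new User('Rey', 'rey@example.com');
console.log(user.getName()); // 'Rey'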

Now contrast the above code that uses the ubiquitous var with how we would implement the same object with the class syntax available in ES2015:
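Here is a sketch of the same object written with ES2015 class syntax (again, the fields are illustrative):

class User {
    constructor(name, email) {
        this.name = name;
        this.email = email;
    }

    getName() {
        return this.name;
    }
}

const user = new User('Rey', 'rey@example.com');
console.log(user.getName()); // 'Rey'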

ES2015 introduces the concept of the class. In the code above, we define a class object, and we use the constructor to define its initial structure. This helps us move toward the order and typing of a more object-oriented language.

Finally, let’s see how TypeScript adds value. The following code shows TypeScript’s static typing through an interface that we can use to configure our base User class.
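One possible sketch, with an illustrative private field included:

interface IUser {
    name: string;
    email: string;
    getName(): string;
}

class User implements IUser {
    private created: Date;

    constructor(public name: string, public email: string) {
        this.created = new Date();
    }

    getName(): string {
        return this.name;
    }
}

const user: IUser = new User('Rey', 'rey@example.com');
console.log(user.getName()); // 'Rey'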

As you can see in the TypeScript example, setting types to both parameters and return values lets us know what to expect when calling methods from the User class. The interface creates a generic structure for our User class, which we can share with other classes that may inherit or share the same structure. While ES2015 provides basic object typing, TypeScript goes further by advocating true object-oriented design and enforcing strict typing to ensure we’re managing the right data type at all times.

It’s important to note that some of the features used in our TypeScript example do not compile down to vanilla JavaScript. The types used by TypeScript are checked by its own compiler, and any errors are caught at compile time rather than run time, providing a great advantage in catching discrepancies. That said, our resulting compiled JavaScript will not declare private variables or interfaces, since they are currently not supported. Whether TypeScript or ES2015 is used, it will all be translated down to JavaScript within the target’s current capabilities.

Configuring Your Project With tsconfig.json

As your project grows, consistency in settings becomes paramount. While it is possible to set flags on the command line when invoking the compiler, keeping a centralized configuration lets a large group of developers code and compile consistently.

The tsconfig.json file provides base configuration settings for the project. It includes things like targeting a specific version of ECMAScript to compile, which module pattern to use, which folders in your project need to be compiled (using glob-like file pattern matching), setting implicit types, and strict checking for the null object. It’s kept at the root of the project folder, and the compiler evaluates the JSON to determine the necessary settings for compiling your TypeScript code to plain JavaScript. For details, see TypeScript’s article on tsconfig.json.
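
A minimal tsconfig.json along those lines might look like this (the specific targets and paths are illustrative):

{
    "compilerOptions": {
        "target": "es5",
        "module": "commonjs",
        "noImplicitAny": true,
        "strictNullChecks": true
    },
    "include": [
        "src/**/*"
    ]
}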

External Libraries

As a modern developer, working with external libraries inside your project is almost mandatory. Since most JavaScript libraries aren’t written in TypeScript, this presents an interesting problem for the compiler. TypeScript solves this by allowing you to add in type definitions from other libraries so the compiler understands them. To avoid compile-time errors, most prominent libraries provide their own type definitions which can be installed with your package manager of choice using the following command:
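
(Using lodash as an illustrative example; substitute the library you need.)

$ npm install --save-dev @types/lodash

# or, using Yarn:
$ yarn add --dev @types/lodash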

Linting

It’s important that large-scale applications follow coding standards to avoid programmatic and stylistic errors. TypeScript is no different. Fortunately, linting is available through tslint.
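
A minimal setup might install tslint as a development dependency and point it at a tslint.json rule file (paths illustrative):

$ npm install --save-dev tslint
$ ./node_modules/.bin/tslint -c tslint.json 'src/**/*.ts'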

TypeScript Playground

The easiest way to get started with TypeScript is through the TypeScript playground found on the TypeScript website (http://www.typescriptlang.org/play/). In the playground, TypeScript is compiled to JavaScript as you type. It also allows inspection of various features available in TypeScript. The interface is helpful to developers familiar with backend technologies as well as new developers in general.


Figure 1: TypeScript Playground allows a developer to live edit TypeScript, inspect various TypeScript features, and see the transpiled JavaScript in real time.

The Future of TypeScript

Since TypeScript follows the guidelines set forth by the ECMAScript governing body TC39, future releases of TypeScript will include exciting new features that are currently in the proposal state. These features include function decorators, variadic types, the ES Decorator Proposal, and decorators for function expressions and arrow functions.

TypeScript 2.4 has already introduced dynamic import expressions, safer callback parameter checking, weak types, and string enums. The tight coupling of TypeScript with official JavaScript releases ensures that developers are writing code that will be forward compatible with future releases of JavaScript.

The Right Fit

So is TypeScript right for your project? It depends. There are a few points to consider when determining whether TypeScript fits into your project.

Is your project new, or a refactor of an existing codebase? While you may see the benefits of TypeScript in a large code base, the work involved with refactoring ES* to TypeScript may not be worth the time. Fortunately, since TypeScript does compile to plain JavaScript, updates to a large codebase could be handled incrementally.

If it’s a small project, or you’re the sole developer, TypeScript might seem like overkill. The benefits of strict typing may not be immediately apparent on a smaller scale, and a single developer may not need to ensure data types across a smaller application.

If your app needs to scale to support the integrity of multiple components that work together like building blocks, including many “service” classes that just communicate with backend APIs, then TypeScript is a good choice. These types of complex projects lend themselves to strong typing and an object-oriented approach. At Kenzan we’ve worked on a number of projects like this, and in our experience TypeScript has proven to be a powerful tool.

On the plus side, TypeScript is gaining popularity in the developer community. Task runners and bundlers like Grunt, Gulp and webpack all provide support for TypeScript, and several popular packages in the npm registry provide their own typings for TypeScript support.

That gain in popularity does not come without a cost. There are many libraries and tools in the JavaScript ecosystem that are not written in TypeScript but are essential to many developers’ toolchains. To use these dependencies in your TypeScript-based project, you have to hope that there are up-to-date type definitions in those project repositories (or in an external repository like DefinitelyTyped) that can tell your project tooling how to handle the untyped code in that dependency. Otherwise, you’ll be left to do that additional work yourself, or be unable to use that dependency. This issue is currently being worked on: a group representing a number of open source projects, including the TypeScript team, has been formed at the JS Foundation to tackle the problem. To join those efforts, reach out to projects@js.foundation.

Whether or not you decide to use TypeScript, at Kenzan we recommend at least using the latest ECMAScript release (ES2015) for your next project. Most of the major browsers support ES2015 (with IE11 being the major exception), and the improvements between ES5 and ES2015 are too great not to use it in your current projects. Simply employing a few classes in any code base will improve the level of consistency and readability as a foundation for your front end.

Up Next: A Living Example

So far in our blog series, we’ve looked at building out our stack using Yarn for package management, webpack for bundling, and now TypeScript to bring “strict” order to our code.

What’s next in the series is a living example: we will put a Hello World application through its paces with all of the components of our modern front-end stack, showing off some of their features in the development lifecycle.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt and webpack are trademarks of the JS Foundation.


DevOps Fundamentals, Part 5: Consistency in the Pipeline

So far in our series previewing the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation, we have already covered the earlier installments.

In this article, we’ll do a quick review of some of the tools and then discuss how to achieve consistency in the pipeline.

To start, we have source control, and Git is one of the more popular tools to use. But in the Microsoft world, you have Team Foundation Server, and then there’s Perforce and SVN, which is a bit older. Then, there is the really old CVS. There are also some SaaS-based source control systems like GitHub, Bitbucket, and GitLab.

In terms of what we call the build console or the Continuous Integration server, we have: Jenkins, Bamboo, and TeamCity, as well as Travis CI, CircleCI, and Shippable.

For repository managers, we are going to use Nexus and Artifactory in this course, but we also have: Docker Trusted Registry, Docker Hub, and Google Container Registry.

In terms of operations consoles, there’s Rundeck and Marathon. Also, Asgard is interesting; it is part of Netflix’s open source tooling and particularly works with Amazon. There’s also Spinnaker and WeaveScope.

For automation, we have: CFEngine, Chef, Puppet, Ansible, Docker Compose, CloudFormation, and Terraform. This is in no way the entire list; there are new products every day. These are just some of the most commonly used.

Back to consistency in the pipeline. Here is the thing: when you get into containers, consistency is even more important, because you are running hundreds, maybe thousands, of containers in a cluster.

So, the goal is to create consistency. In the early days, it was all checklist-based or somebody’s shell script, and that was somewhat inconsistent. Then, Chef and Puppet came in and created a high level of consistency, where all the environments would get built at every level through some domain specific language (DSL).

At the end of the day, all elements of the pipeline should be disposable and reproducible. All environments should look like production, from laptop, to integration, to any type of testing. You want to decrease variability between elements in the pipeline. Repeatability increases speed in rebuilding environments. Improved consistency also results in reduced errors and increased security.

You also want to version control everything. Version control keeps a history of all your changes, so you can check the differences, and you can restore and rebuild elements. All changes are visible and auditable by everybody, and changes can be automated. The magic really starts when everything is in version control and you can track where everything came from. Basically, everything should be in version control.


Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, who has worked in the IT management industry for more than 35 years.