
Get an OpenStack Instance Up and Running in 40 Minutes or Less

Once you have followed the previous tutorial and have OpenStack installed using the distribution of your choice, it’s time to get some instances running.

First, you’ll want to choose how you’d like to work with OpenStack:

  • Using the Horizon Browser User Interface (BUI), which provides easy authentication and access to all components.

  • Using the openstack command from the command line interface (CLI), in which case you’ll need to set up a user credentials file before you can get started.

I like to work from the CLI, because the openstack command gives access to all of the available options, whereas when working from the BUI you’ll notice that some of the advanced options are not available.

Create a Credentials File

Before you can start working with instances, you’ll need to create a Project or Tenant. A project (previously referred to as a tenant) is the environment that is created for a customer in OpenStack. This needs to be done as the OpenStack admin user, and to keep it easy on yourself, I’d recommend doing it from the Horizon web interface. Make sure you’re logged in as admin; under Identity you’ll be able to add a project, add a user to that project, and assign the user as a member of the project.


For working with OpenStack, it’s important to realize which set of credentials you should use. In OpenStack, admin credentials are typically used to create infrastructure, while tenant user credentials are typically used to create instances. So to spin up an instance, you’ll need to make sure that you have user credentials.

Before you can do anything with the CLI, you’ll need to create a credentials file that sets Linux shell variables, and then source that file so that the variables become available in your current shell environment. Such a credentials file can have the following contents, assuming a project named project1 in which a user named user1 with the password “password” does their work:

unset SERVICE_TOKEN SERVICE_ENDPOINT

export OS_USERNAME=user1

export OS_TENANT_NAME=project1

export OS_PASSWORD=password

export OS_AUTH_URL=http://server1.example.com:35357/v2.0/

export PS1='[\u@\h \W(keystone_user1)]$ '

Save this file as ~/keystonerc_user1, then source it to load the variables into your current shell:

source ~/keystonerc_user1

Steps to Creating an OpenStack Instance

Now we’re ready to create an instance. An instance is based on an image that is joined with a flavor and a volume, and connected to a private network. Creating an instance involves the following steps:

  • Get an image (Glance)

  • Assign a flavor (a hardware template)

  • Find out which internal network you can use to connect the instance to

  • Assign a Security Group

  • Add an SSH Key

  • Add a Floating IP address

  • Boot the instance

Here are the OpenStack commands to carry out the steps above:

  1. source /root/keystonerc_user1: This command will give you the required credentials to work as user1 in OpenStack.

  2. openstack keypair create key1 > /root/key1: This command creates an SSH key pair, stores the public key in OpenStack, and writes the private key to /root/key1, so that you can later log in to the instance with it.

  3. openstack security group create mysecgroup: Use this command to create a security group, which is basically a firewall.

  4. nova secgroup-add-rule mysecgroup tcp 22 22 0.0.0.0/0: This command adds a rule to the security group that allows SSH traffic (TCP port 22) from any address.

  5. wget https://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img: This downloads a bootable Cirros image to your local machine.

  6. glance image-create --name cirros1 --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img : Use this to import the image file you’ve just downloaded into Glance so that you can use it to spin up your instance.

  7. nova flavor-list : A flavor is a hardware profile. Use this command to display a list of flavors and select the flavor you want to use. For a small test environment, I’d recommend the m1.tiny flavor as it has the minimal settings that are required to boot an instance.

  8. neutron net-list : Notice the ID of the private network to which you are going to connect the instance.

  9. nova boot --flavor m1.tiny --image cirros1 --key-name key1 --security-group mysecgroup --nic net-id=<NET-ID> myvm1 : This command will boot the instance, using the components that were discussed earlier in this procedure.

  10. nova list : This command verifies that the instance has indeed booted successfully. Notice that it may take a few seconds before the instance shows as up and running.
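The numbered commands above don’t cover the floating IP address mentioned in the step list. Here is a sketch of how to add one with the openstack client, assuming your external network is named public (run openstack network list to find the actual name in your deployment):

# Allocate a floating IP address from the external network, then
# attach it to the instance; replace <FLOATING-IP> with the address
# that the first command returns.
openstack floating ip create public
openstack server add floating ip myvm1 <FLOATING-IP>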

Conclusion

Now that you’ve installed OpenStack and started some instances, let’s talk about how to enable Docker containers in OpenStack. Containers are ready-to-run applications, including the entire stack that’s required to run them. Learning how to run and manage containers is key to making the most of the OpenStack platform for scale-out applications — a topic that we’ll explore in part 3 of this series.


vkmark: More Than a Vulkan Benchmark

Say hello to vkmark, a Vulkan benchmarking tool providing an extensible suite of targeted, configurable benchmarking scenes.

Written by Alexandros Frantzis, Senior Software Engineer at Collabora.

Ever since Vulkan was announced a few years ago, the idea of creating a Vulkan benchmarking tool in the spirit of glmark2 had been floating in my mind. Recently, thanks to my employer, Collabora, this idea has materialized! The result is the vkmark Vulkan benchmark, hosted on github:

https://github.com/vkmark/vkmark

Like its glmark2 sibling project, vkmark’s goals are different from the goals of big, monolithic and usually proprietary benchmarks. Instead of providing a single, complex benchmark, vkmark aims to provide an extensible suite of targeted, configurable benchmarking scenes. Most scenes exercise specific Vulkan features or usage patterns (e.g., desktop 2.5D scenarios), although we are also happy to have more complex, visually intriguing scenes.

Benchmarking scenes can be configured with options that affect various aspects of their rendering. We hope that the ease with which developers can use different options will make it painless to perform targeted tests and eventually provide best practices advice.

A few years ago we were pleasantly surprised to learn that developers were using glmark2 as a testing tool for driver development, especially in free (as in freedom) software projects. This is a goal that we want to actively pursue for vkmark, too. The flexible benchmarking approach is a natural fit for this kind of development; the developer can start with getting the simple scenes working and then, as the driver matures, move to scenes that use more advanced features. vkmark has already proved useful in this regard, being a valuable testing aid for my own experiments in the Mesa Vulkan WSI implementation.

With vkmark we also want to be on the cutting edge of software development practices and tools. vkmark is a modern, C++14 codebase, using the vulkan-hpp bindings, the Meson build system and the Catch test framework. To ensure a high quality codebase, the core of vkmark is developed using test-driven development.

It is still early days, but vkmark already has support for X11, Wayland and DRM/KMS, and provides two simple scenes: a “clear” scene, and a “cube” scene that renders a simple colored cube based on the vkcube example (which is itself based on kmscube). The future looks bright!
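If you want to try it out, vkmark builds with Meson, as mentioned above. A rough sketch of the steps follows; the path of the resulting binary is an assumption, so check the project README and vkmark --help for the actual layout and options:

# Fetch the sources and build with Meson and Ninja.
git clone https://github.com/vkmark/vkmark.git
cd vkmark
meson setup build
ninja -C build
# Run the benchmark (binary location may differ in your build).
./build/src/vkmark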

We are looking forward to getting more feedback on vkmark and, of course, contributions are always welcome!

Google’s OSS-Fuzz Tool Helps Secure Open Source Projects

At the end of last year, Google announced OSS-Fuzz, an open source threat detection tool focused on making open source applications and platforms more secure and stable. The tool itself is open and available on GitHub, and there are now solid numbers showing that this security tool has made a remarkable difference for some well-known open source projects.

By the Numbers

According to Google developers, OSS-Fuzz has found more than 1,000 bugs (264 of which are potential security vulnerabilities) in widely used open source projects, some of them major. The bugs have been uncovered in projects ranging from LibreOffice to Wireshark, and Google notes the following:

“We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process. To this end, we’d like to encourage more projects to participate and adopt the ideal integration guidelines that we’ve established.”

Once an open source project is integrated with OSS-Fuzz, it is scanned continuously and automatically, so problems can be revealed only hours after a change lands in the upstream repository, before any users are affected.
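Integration centers on small fuzz targets plus a build script that compiles them inside the OSS-Fuzz environment. Below is a minimal, hypothetical build.sh sketch; the project name and file layout are invented, and the $SRC, $OUT, $CXX, $CXXFLAGS, and $LIB_FUZZING_ENGINE variables follow OSS-Fuzz’s documented conventions, which you should verify against the current integration guide:

#!/bin/bash -eu
# Hypothetical OSS-Fuzz build script: build the project, then compile
# one libFuzzer-style fuzz target and link it against the fuzzing
# engine that OSS-Fuzz provides.
cd $SRC/myproject
./configure && make -j$(nproc)
$CXX $CXXFLAGS fuzz/parse_fuzzer.cc -o $OUT/parse_fuzzer \
    $LIB_FUZZING_ENGINE ./libmyproject.a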

Google reports: “OSS-Fuzz has found numerous security vulnerabilities in several critical open source projects: 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, 8 in SQLite 3, 10 in GnuTLS, 25 in PCRE2, 9 in gRPC, and 7 in Wireshark, etc. We’ve also had at least one bug collision with another independent security researcher (CVE-2017-2801).”

OSS-Fuzz’s utility is not limited to security, either. It has reported over 300 timeout and out-of-memory failures (75% of which got fixed, according to Google). While not every project treats these as bugs, fixing them improves performance and stability.

A Rewards Program

Google also announced that it is expanding its existing Patch Rewards program to include rewards for the integration of fuzz targets into OSS-Fuzz. To qualify for these rewards, a project needs to have a large user base and/or be critical to global IT infrastructure. Eligible projects will receive $1,000 for initial integration, and up to $20,000 for ideal integration (the final amount is at Google’s discretion). Project leaders have the option of donating these rewards to charity instead, and Google will double the amount.

To qualify for the ideal integration reward, projects must show that:

  • Fuzz targets are checked into their upstream repository and integrated in the build system with sanitizer support (up to $5,000).

  • Fuzz targets are efficient and provide good code coverage (>80%) (up to $5,000).

  • Fuzz targets are part of the official upstream development and regression-testing process, i.e., they are maintained and run against old known crashers and periodically updated corpora (up to $5,000).

  • The last $5,000 is a bonus that Google may award at its discretion for projects that the company feels have gone the extra mile or done something really awesome.

Google is doing some outreach to project leaders to encourage participation in the rewards program, but you may also reach out to participate. Meanwhile, leaders of open source projects may want to look into adopting OSS-Fuzz to harden their projects’ security.


Free Webinar: Join Jono Bacon for Open Source Community Tips and Tricks

Community manager and author Jono Bacon will provide tips for building and managing open source communities in a free webinar on Monday, July 24 at 9:30am Pacific.

In this webinar, Bacon will answer questions about community strategy and share an in-depth look at this exciting new conference held in conjunction with this year’s Open Source Summit North America, happening Sept. 11-14 in Los Angeles.

The Open Community Conference provides presentations, panels, and Birds-of-a-Feather sessions with practical guidance for building and engaging productive communities and is an ideal place to learn how to evolve your community strategy. The webinar will provide event details as well as highlights from the conference schedule, which includes such talks as:

  • Building Open Source Project Infrastructures – Elizabeth K. Joseph, Mesosphere

  • Scaling Open Source – Lessons Learned at the Apache Software Foundation – Phil Steitz, Apache Software Foundation

  • Why I Forked My Own Project and My Own Company – Frank Karlitschek, ownCloud

  • So You Have a Code of Conduct… Now What? – Sarah Sharp, Otter Tech

  • Fora, Q&A, Mailing Lists, Chat…Oh My! – Jeremy Garcia, LinuxQuestions.org / Datadog

Also, if you post questions on Twitter with the #AskJono hashtag about community strategy, leadership, open source, or the conference, you’ll get a chance to win a free ticket to the event (including all the sessions, networking events, and more).

Join us July 24, 2017 at 9:30am Pacific to learn more about community strategy from Jono Bacon. Sign Up Now »

Building Docker Images Without Docker

Building a Docker image is actually all about building a root filesystem that a process will use. So there should be a relatively simple way to build a Docker image without having to rely on the Docker daemon! Shouldn’t there be?

There are approaches like source-to-image, but recently I have looked at Bazel and its Docker rules.

Bazel, Basel or Basil

Bazel is a build system open sourced in 2015 by Google. It is the open source version of their internal Blaze system, with just a letter permutation in the name. I have no clue how to pronounce it properly; maybe it is Basel like the Swiss town, or maybe it is Basil like the culinary plant.

Bazel is used in Kubernetes and TensorFlow, and we are seeing it pop up in more and more projects. So no more ./configure, make, make install, people; get with Bazel, it is 2017. Plus you want the speed, the cross-language support, the reproducibility, and the scale.

Read more at Bitnami

Facets: An Open Source Visualization Tool for Machine Learning Training Data

Getting the best results out of a machine learning (ML) model requires that you truly understand your data. However, ML datasets can contain hundreds of millions of data points, each consisting of hundreds (or even thousands) of features, making it nearly impossible to understand an entire dataset in an intuitive fashion. Visualization can help unlock nuances and insights in large datasets. A picture may be worth a thousand words, but an interactive visualization can be worth even more.



Working with the PAIR initiative, we’ve released Facets, an open source visualization tool to aid in understanding and analyzing ML datasets. Facets consists of two visualizations that allow users to see a holistic picture of their data at different granularities. Get a sense of the shape of each feature of the data using Facets Overview, or explore a set of individual observations using Facets Dive. These visualizations allow you to debug your data, which, in machine learning, is as important as debugging your model. They can easily be used inside of Jupyter notebooks or embedded into webpages. In addition to the open source code, we’ve also created a Facets demo website. This website allows anyone to visualize their own datasets directly in the browser without any software installation or setup, and without the data ever leaving your computer.

Read more at Google Research Blog

To the Moon? Blockchain’s Hiring Crunch Could Last Years

In today’s blockchain market, raising money is the easy part.

As the headlines already attest, startups that have sold cryptographic tokens as part of a new wave of fundraisings are struggling to find qualified developers, but it’s a pain also shared by projects building public and private blockchains.

Even the enterprise consortia and corporates looking to cut costs and gain efficiencies through these platforms are not immune.

Now, that may not be a surprise given that it’s such a nascent industry. After all, there are only so many people who really understand the intricacies of blockchain, and they are hard to hire.

But that doesn’t mean companies aren’t finding strategies to attract and retain talent.

Read more at CoinDesk

The Risks of DNS Hijacking Are Serious and You Should Take Countermeasures

Editor’s Note: In a separate post, Lucian Constantin explains how a researcher hijacked the nameservers of the .io top-level domain and what the incident revealed about registries for country-code top-level domains.

Over the years hackers have hijacked many domain names by manipulating their DNS records to redirect visitors to malicious servers. While there’s no perfect solution to prevent such security breaches, there are actions that domain owners can take to limit the impact of these attacks on their Web services and users.

Just last Friday, attackers managed to change the DNS records for 751 domain names that had been registered and managed through Gandi.net, a large domain registrar. Visitors to the affected domains were redirected to an attacker-controlled server that launched browser-based exploits to infect computers with malware.
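The full set of countermeasures is in the linked article, but even a simple periodic check of your domain’s records against known-good values can catch a hijack early. A minimal sketch using the standard dig utility (the domain is a placeholder):

# Query the authoritative nameservers and A record for your domain
# and compare the output against the values you expect.
dig +short NS example.com
dig +short A example.com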

Read more at The New Stack

Mageia 6 GNU/Linux Distribution Launches Officially with KDE Plasma 5, GRUB2

After a long wait, the final release of the Mageia 6 GNU/Linux operating system is finally here, and it looks like it comes with a lot of exciting new features and performance improvements.

According to Mageia contributor Rémi Verschelde, development of the major Mageia 6 release took longer than anticipated because the team wanted to transform it into their greatest release yet. Mageia 6 comes more than two years after the Mageia 5 series, and seven and a half months after Mageia 5.1.

“Though Mageia 6’s development was much longer than anticipated, we took the time to polish it and ensure that it will be our greatest release so far,” reads today’s announcement. “We thank our community for their patience, and also our packagers and QA team who provided an extended support for Mageia 5 far beyond the initial schedule.”

Read more at Softpedia

A Modern Day Front-End Development Stack

Application development methodologies have seen a lot of change in recent years. With the rise and adoption of microservice architectures, cloud computing, single-page applications, and responsive design, to name a few, developers have many decisions to make, all while still keeping project timelines, user experience, and performance in mind. Nowhere is this more true than in front-end development and JavaScript.

To help catch everyone up, we’ll take a brief look at the revolution in JavaScript development over the last few years. Next, we’ll look at some of the challenges and opportunities facing the front-end development community. To wrap things up, and to help lead into the next parts of this series, we’ll preview the components of a fully modern front-end stack.

The JavaScript Renaissance

When NodeJS came out in 2009, it was more than just JavaScript on the command line or a web server running in JavaScript. NodeJS concentrated software development around something that was desperately needed: a mature and stable ecosystem focused on the front-end developer. Thanks to Node and its default package manager, npm, JavaScript saw a renaissance in how applications could be architected (e.g., Angular leveraging Observables or the functional paradigms of React) as well as how they were developed. The ecosystem thrived, but because it was young it also constantly churned.

Happily, the past few years have allowed certain patterns and conventions to rise to the top. In 2015, the JavaScript community saw the release of a new spec, ES2015, along with an even greater explosion in the ecosystem. The illustration below shows just some of the most popular JavaScript ecosystem elements.

State of the JavaScript ecosystem in 2017

At Kenzan, we’ve been developing JavaScript applications for more than 10 years on a variety of platforms, from browsers to set-top boxes. We’ve watched the front-end ecosystem grow and evolve, embracing all the great work done by the community along the way. From Grunt to Gulp, from jQuery® to AngularJS, from copying scripts to using Bower for managing our front-end dependencies, we’ve lived it.

As JavaScript matured, so did our approach to our development processes. Building off our passion for developing well-designed, maintainable, and mature software applications for our clients, we realized that success always starts with a strong local development workflow and stack. The desire for dependability, maturity, and efficiency in the development process led us to the conclusion that the development environment could be more than just a set of tools working together. Rather, it could contribute to the success of the end product itself.  

Challenges and Opportunities

With so many choices, and such a robust and blossoming ecosystem at present, where does that leave the community? While having choices is a good thing, it can be difficult for organizations to know where to start, what they need to be successful, and why they need it. As user expectations grow for how an application should perform and behave (load faster, run more smoothly, be responsive, feel native, and so on), it gets ever more challenging to find the right balance between the productivity needs of the development team and the project’s ability to launch and succeed in its intended market. There is even a term for this: analysis paralysis, difficulty in arriving at a decision due to overthinking and needlessly complicating a problem.

Chasing the latest tools and technologies can inhibit velocity and the achievement of significant milestones in a project’s development cycle, risking time to market and customer retention. At a certain point an organization needs to define its problems and needs, and then make a decision from the available options, understanding the pros and cons so that it can better anticipate the long-term viability and maintainability of the product.

At Kenzan, our experience has led us to define and coalesce around some key concepts and philosophies that ensure our decisions will help solve the challenges we’ve come to expect from developing software for the front end:

  • Leverage the latest features available in the JavaScript language to support more elegant, consistent, and maintainable source code (like import / export (modules), class, and async/await).

  • Provide a stable and mature local development environment with low-to-no maintenance (that is, no global development dependencies for developers to install or maintain, and intuitive workflows/tasks).

  • Adopt a single package manager to manage front-end and build dependencies.

  • Deploy optimized, feature-based bundles (packaged HTML, CSS, and JS) for smarter, faster distribution and downloads for users. Combined with HTTP/2, large gains can be made here for little investment to greatly improve user experience and performance.

A New Stack

In this series, our focus is on three core components of a front-end development stack. For each component, we’ll look at the tool that we think brings the best balance of dependability, productivity, and maintainability to modern JavaScript application development, and that is best aligned with our desired principles.

Package Management: Yarn

The challenge of how to manage and install external vendor or internal packages in a dependable and consistently reproducible way is critical to the workflow of a developer. It’s also critical for maintaining a CI/CD (continuous integration/continuous delivery) pipeline. But which package manager do you choose, given all the great options available to evaluate? npm? jspm? Bower? CDN? Or do you just copy and paste from the web and commit to version control?

Our first article will look at Yarn and how it focuses on being fast and providing stable builds. Yarn accomplishes this by ensuring that the version of a vendor dependency installed today will be the exact same version installed by a developer next week. It is imperative that this process be frictionless and reliable, even when distributed and at scale, because any downtime prevents developers from being able to code or deploy their applications. Yarn aims to address these concerns by providing a fast, reliable alternative to the npm CLI for managing dependencies, while continuing to leverage the npm registry as the host for public Node packages. Plus, it’s backed by Facebook, an organization that has scale in mind when developing its tooling.
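To give a concrete taste of that workflow, here is roughly what day-to-day Yarn usage looks like; the package names are illustrative:

# Initialize a project; Yarn records exact versions in yarn.lock so
# every subsequent install reproduces the same dependency tree.
yarn init -y
yarn add lodash            # add a runtime dependency
yarn add --dev webpack     # add a build-time dependency
# On another machine (or in CI), reproduce the locked tree exactly:
yarn install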

Application Bundling: webpack

Building a front-end application, which is typically composed of a mix of HTML, CSS, and JS, as well as binary formats like images and fonts, can be tricky to maintain and even more challenging to orchestrate. So how does one turn a code base into an optimized, deployable artifact? Gulp? Grunt? Browserify? Rollup? SystemJS? All of these are great options with their own strengths and weaknesses, but we need to make sure the choice reflects the principles we discussed above.

webpack is a build tool specifically designed to package and deploy web applications composed of any kind of potential assets (HTML, CSS, JS, images, fonts, and so on) into an optimized payload to deliver to users. We want to take advantage of the latest language features like import/export and class to make our code future-facing and clean, while letting the tooling orchestrate the bundling of our code such that it is optimized for both the browser and the user. webpack can do just that, and more!
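As a small illustration, a minimal webpack invocation from that era might look like this (entry and output paths are examples; larger projects would typically drive this through a webpack.config.js instead):

# Install webpack locally, then bundle src/index.js and everything it
# imports into a single optimized file.
yarn add --dev webpack
./node_modules/.bin/webpack src/index.js dist/bundle.js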

Language Specification: TypeScript

Writing clean code in and of itself is always a challenge. JavaScript, a dynamic and loosely typed language, has afforded developers a medium to implement a wide range of design patterns and conventions. Now, with the latest JavaScript specification, we see more solid patterns from the programming community making their way into the language. Support for features like import/export and class has brought a fundamental paradigm shift to how a JavaScript application can be developed, and can help ensure that code is easier to write, read, and maintain. However, there is still a gap in the language that generally begins to impact applications as they grow: maintainability and integrity of the source code, and predictability of the system (the application state at runtime).

TypeScript is a superset of JavaScript that adds type safety, access modifiers (private and public), and newer features from the next JavaScript specification. The safety of a more strictly typed language can help promote and then enforce architectural design patterns, because a transpiler validates code before it ever reaches the browser; this reduces developer cycle time, and typed code is largely self-documenting. This is particularly advantageous because, as applications grow and change happens within the codebase, TypeScript can help keep regressions in check while adding clarity and confidence to the code base. IDE integration is a huge win here as well.
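Bootstrapping TypeScript into a project is lightweight. A sketch of the typical setup (the compiler options in the generated tsconfig.json are up to your project):

# Add the TypeScript compiler and generate a default configuration.
yarn add --dev typescript
./node_modules/.bin/tsc --init
# Type-check and transpile the project's source files.
./node_modules/.bin/tsc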

What About Front-End Frameworks?

As you may have noticed, so far we’ve intentionally avoided recommending a front-end framework or library like Angular or React, so let’s address that now.

Different applications call for different approaches to their development based on many factors like team experience, scope and size, organizational preference, and familiarity with concepts like reactive or functional programming. At Kenzan, we believe evaluating and choosing any ES2015/TypeScript compatible library or framework, be it Angular 2 or React, should be based on characteristics specific to the given situation.  

If we revisit our illustration from earlier, we can see a new stack take form that provides flexibility in choosing front-end frameworks.

A modern stack that offers flexibility in front-end frameworks

Below this upper “view” layer is a common ground that can be built upon by leveraging tools that embrace our key principles. At Kenzan, we feel that this stack converges on a space that captures the needs of both user and developer experience. This yields results that can benefit any team or application, large or small. It is important to remember that the tools presented here are intended for a specific type of project development (front-end UI application), and that this is not intended to be a one-size-fits-all endorsement. Discretion, judgement, and the needs of the team should be the prominent decision-making factors.

What’s Next

So far, we’ve looked back at how the JavaScript renaissance of the last few years has led to a rapidly-maturing JavaScript ecosystem. We laid out the core philosophies that have helped us to meet the challenges and opportunities of developing software for the front end. And we outlined three main components of a modern front-end development stack. Throughout the rest of this series, we’ll dive deeper into each of these components. Our hope is that, by the end, you’ll be in a better position to evaluate the infrastructure you need for your front-end applications.

We also hope that you’ll recognize the value of the tools we present as being guided by a set of core principles, paradigms, and philosophies. Writing this series has certainly caused us to put our own experience and process under the microscope, and to solidify our rationale when it comes to front-end tooling. Hopefully, you’ll enjoy what we’ve discovered, and we welcome any thoughts, questions, or feedback you may have.

Next up in our blog series, we’ll take a closer look at the first core component of our front-end stack—package management with Yarn.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

Grunt, jQuery, and webpack are trademarks of the JS Foundation.