
How to Get the Next Generation Coding Early

You’ve probably heard the claim that coding, or computer programming, is as crucial a skill in the 21st century as reading and math were in the previous century. I’ll go one step further: Coding could be the single most life-changing skill you can give a young person. And it’s not just a career-enhancer. Coding is about problem-solving, it’s about creativity, and, most importantly, it’s about empowerment.

Empowerment over computers, the devices that maintain our schedules, enable our communications, run our utilities, and improve our daily lives.

But learning to code is also personally empowering. The very first time a child writes a program and makes a computer do something, there’s an immediate sense of “I can do this!” And it transforms more than just a student’s attitude toward computers. Being able to solve a problem by planning, executing, testing, and improving a computer program carries over to other areas of life, as well. What parts of our lives wouldn’t be made better with thoughtful planning, doing, evaluating, and adjusting?

Read more at OpenSource.com

Database Updates Across Two Databases (Part 1)

Every time a product owner says “We should pull in XYZ data to the mobile app” or “We need a new app to address this healthcare fail,” an engineer needs to make two crucial decisions:

  1. Where and how should the new code acquire data the company currently has?
  2. Where and how should the new code record the data that will be newly created?

Sadly, the most expedient answer to both questions is “just use one of our existing databases”. The temptation to do so is high when an engineer need only add a database migration or write a query for a familiar database. The alternative might involve working with the organization’s infrastructure team to plan out changes to the operational footprint, then potentially making updates to the developer laptop setup.

Decisions that are expedient today aren’t necessarily the best decisions for Rally’s long-term delivery velocity. We recognized that database reuse and sharing was fairly common at Rally, so we tried to stop the practice in Spring 2017. We were concerned the company’s development speed and agility would eventually grind to a halt.
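To make the trade-off concrete, here is a minimal sketch of the two choices; the connection strings, service names, and API endpoint are hypothetical, not Rally’s actual systems:

```python
import requests

# Expedient choice: point the new app at an existing shared database.
# One familiar connection string, one quick migration -- but now two
# apps are coupled to the same schema and cannot evolve independently.
SHARED_DSN = "postgresql://db.internal/monolith"

# Deliberate choice: the new app owns its own datastore, and existing
# data is acquired through the owning service's API instead of by
# querying that service's database directly.
NEW_APP_DSN = "postgresql://db.internal/mobile_activity"
MEMBER_API = "https://api.internal/members/v1"

def member_profile(member_id: str) -> dict:
    """Fetch existing member data via the owning service's API."""
    resp = requests.get(f"{MEMBER_API}/{member_id}", timeout=5)
    resp.raise_for_status()
    return resp.json()
```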

Read more at Rally Engineering

Open Source AI Solutions Evolve through Community Development

Tech titans ranging from Google to Facebook have been steadily open sourcing powerful artificial intelligence and deep learning tools, and now Microsoft is out with version 2.0 of the Microsoft Cognitive Toolkit. It’s an open source software framework previously dubbed CNTK, and it competes with tools such as TensorFlow (created by Google) and Caffe (created at the University of California, Berkeley). Cognitive Toolkit works with both Windows and Linux on 64-bit platforms. It was originally launched into beta in October 2016 and has been evolving ever since.

“Cognitive Toolkit enables enterprise-ready, production-grade AI by allowing users to create, train, and evaluate their own neural networks that can then scale efficiently across multiple GPUs and multiple machines on massive data sets,” reports the Cognitive Toolkit Team. The team has also compiled a set of reasons why data scientists and developers who are using other frameworks now should try Cognitive Toolkit.
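To give a sense of what “create, train, and evaluate their own neural networks” looks like in code, here is a minimal sketch against the Cognitive Toolkit 2.0 Python API; the network shape and sample input are arbitrary choices for illustration:

```python
import numpy as np
import cntk as C  # Microsoft Cognitive Toolkit 2.0

# Define a tiny one-layer network: 2 inputs -> 1 sigmoid output.
features = C.input_variable(2)
model = C.layers.Dense(1, activation=C.sigmoid)(features)

# Evaluate the (untrained) network on a single sample.
sample = np.array([[0.5, 0.25]], dtype=np.float32)
print(model.eval({features: sample}))
```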

For example, Microsoft has tuned its software framework for peak performance, as detailed here. “Hundreds of new features, performance improvements and fixes have been added since beta was introduced,” the Cognitive Toolkit team notes. “The performance of Cognitive Toolkit was recently independently measured, and on a single GPU it performed best amongst other similar platforms.”

The other open source platforms in this space are making surprising advancements as well. H2O.ai, formerly known as 0xdata, has carved out a unique niche in the machine learning and artificial intelligence arena because its primary tools are free and open source. You can get the main H2O platform and Sparkling Water — a package that works with Apache Spark — just by downloading them. You can also find many tutorials for H2O.ai’s AI and machine learning tools here. As an example of how the H2O platform is working in the field, Cisco uses it to analyze its huge data sets that track when customers have bought particular products — such as routers — and when they might logically be due for an upgrade or checkup.
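A sketch of what an upgrade-propensity analysis along those lines might look like with H2O’s Python API follows; the file name and column names are hypothetical, not Cisco’s actual data:

```python
import h2o
from h2o.estimators.gbm import H2OGradientBoostingEstimator

h2o.init()  # start or attach to a local H2O cluster

# Hypothetical purchase-history data with an upgrade label.
purchases = h2o.import_file("purchases.csv")
purchases["due_for_upgrade"] = purchases["due_for_upgrade"].asfactor()

model = H2OGradientBoostingEstimator()
model.train(x=["product", "months_since_purchase"],
            y="due_for_upgrade",
            training_frame=purchases)

print(model.predict(purchases).head())
```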

Google has open sourced a program called TensorFlow that it has spent years developing to support its AI software and other predictive and analytics programs. You can find out more about TensorFlow at its site; it is the engine behind several Google tools you may already use, including Google Photos and the speech recognition found in the Google app. According to MIT Technology Review: “[TensorFlow] underpins many future ambitions of Google and its parent company, Alphabet…Once you’ve built something with TensorFlow, you can run it anywhere but it’s especially easy to transfer it to Google’s cloud platform. The software’s popularity is helping Google fight for a bigger share of the roughly $40 billion (and growing) cloud infrastructure market, where the company lies a distant third behind Amazon and Microsoft.”
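As a taste of what “building something with TensorFlow” involves, here is a minimal sketch in the 1.x graph-and-session API that was current at the time:

```python
import tensorflow as tf  # TensorFlow 1.x API

# Build a small computation graph: y = W * x + b
x = tf.placeholder(tf.float32, shape=[None])
W = tf.Variable(2.0)
b = tf.Variable(0.5)
y = W * x + b

# Run the graph in a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))  # [2.5 4.5 6.5]
```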

Indeed, both Google and Microsoft are drawing benefits from open sourcing their artificial intelligence tools, as community development makes the tools stronger. “Our goal is to democratize AI to empower every person and every organization to achieve more,” Microsoft CEO Satya Nadella has said.

Yahoo! has also released its key artificial intelligence (AI) software under an open source license. Its CaffeOnSpark tool is based on deep learning, a branch of artificial intelligence particularly useful in helping machines recognize human speech, or the contents of a photo or video.

If you are interested in experimenting with Microsoft Cognitive Toolkit, you can learn more here, and assorted code samples and tutorials are found here.

To learn more about the promise of machine learning and artificial intelligence, watch a video featuring David Meyer, Chairman of the Board at OpenDaylight.

Connect with the open source development community at Open Source Summit NA, Sept. 11-14 in Los Angeles. Linux.com readers save on registration with discount code LINUXRD5. Register now!

Let Us Know How You Are Using R and Data Science Tools Today

The R Consortium exists to promote the R language, environment, and community. The R community has seen significant growth — with more than 2 million users worldwide — and a broad range of organizations have adopted the R language as a data science platform.

Now, to help us understand the changing needs of our community, we have put together a short survey.

Take the R Consortium Survey Now

We want to hear: How do you use R? What do you think about the way R is developing? What issues should we be addressing? What does the big picture look like?

We want to know how you use R, and we would like to hear from the entire R community. We don’t have any particular hypothesis or point of view but would like to reach everyone who is interested in participating.

Please take a few minutes to respond to the survey and help us understand your perspective. The survey will adapt depending on your answers and will take about 10 minutes to complete.

Take the R Consortium survey now and please share with others who might be interested.

DevOps Fundamentals, Part 4: Patterns and Practices

We are back with more information in our series previewing the DevOps Fundamentals: Implementing Continuous Delivery (LFS261) course from The Linux Foundation. So far, we’ve looked at high-performing organizations, the value stream, and Continuous Delivery and Deployment.

In this article, we will cover patterns and practices. We will go over the deployment pipeline, consistency in the pipeline, automated testing at a high level, and deployment strategies.

To start, we will look at the concept of The Three Ways of DevOps, which is covered extensively in The DevOps Handbook.

The First Way is really Continuous Delivery, but it is about flow: systems thinking applied to a left-to-right, automated software delivery supply chain, from the commit through the whole Continuous Delivery process.

The Second Way is really about monitoring and feedback.

And, the Third Way is Continuous Learning.

Here, we will focus on The First Way.

I cannot see any viable alternative to implementing a Continuous Delivery pattern, nor any good reason not to. Get more details in the video below:

Again, everything is going to be about the First Way. We are talking about the pipeline.

The pipeline requires you to think about visibility: all stages of the pipeline should be visible to everyone responsible for the delivery. That is a key point.

That is where we need to get to: all the switch configs in Git, Cucumber testing of the network stack on the back end, emulated network software environments such as SDN environments; a lot of that is already out there. The point is that everybody is responsible, and everybody should see the pipeline.

We have talked about the feedback loops. These should be designed as gates that catch and eliminate defects before they flow downstream. When you find things downstream, you move those checks earlier. You have test-driven development, behavior-driven development, and smoke tests. At the end of the day, you start building these incredibly robust chains of delivery.

And you are continually deploying. Now you have got it: the pipeline is such that any patch, update, or new feature can be automatically built and deployed for release.
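To make that left-to-right flow concrete, here is a toy pipeline runner; the stage names and make targets are invented for this sketch and are not taken from the LFS261 course:

```python
import subprocess
import sys

# A toy Continuous Delivery pipeline: every stage is visible, and each
# failed stage is a gate that stops defects from moving downstream.
STAGES = [
    ("build", ["make", "build"]),       # compile and package the commit
    ("unit-test", ["make", "test"]),    # fast feedback close to the change
    ("smoke-test", ["make", "smoke"]),  # quick end-to-end sanity check
    ("deploy", ["make", "release"]),    # automated, repeatable release
]

for name, cmd in STAGES:
    print(f"[pipeline] stage: {name}")  # visible to everyone on the team
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"[pipeline] gate failed at stage: {name}")

print("[pipeline] release candidate deployed")
```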

That means that whether you are Dev, Ops, Sec, or network, you are basically putting everything in source control.

Want to learn more? Access all the free sample chapter videos now!

This course is written and presented by John Willis, Director of Ecosystem Development at Docker. John has worked in the IT management industry for more than 35 years.

Read more:

DevOps Fundamentals: High-Performing Organizations

DevOps Fundamentals, Part 2: The Value Stream

DevOps Fundamentals, Part 3: Continuous Delivery and Deployment

Red Hat’s Boltron Snaps Together a Modular Linux Server

Red Hat’s ongoing experiments with making its Linux distributions more modular and flexible have yielded a new sub-distribution of Fedora.

Dubbed Fedora Boltron Server, the new prototype server project uses the various modularity technologies that Red Hat has been building into Fedora. Its goal is a Linux distribution in which multiple versions of the same system components can live and work side-by-side, non-destructively.

Read more at InfoWorld

Enterprise Network Monitoring Needs Could Hamper the Adoption of TLS 1.3

The upcoming version of the Transport Layer Security (TLS) protocol promises to be a game changer for web encryption. It will deliver increased performance, better security and less complexity. Yet many website operators could shun it for years to come.

TLS version 1.3 is in the final stages of development and is expected to become a standard soon. Some browsers, including Google Chrome and Mozilla Firefox, already support this new version of the protocol on an opt-in basis and Cloudflare enables it by default for all websites that use its content delivery network.

TLS 1.3 is a major overhaul of the technology that underpins web and email encryption. One of the biggest improvements in the new version is a simplified handshake between clients and servers.
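If you want to see which protocol version a client and server actually agree on today, a short check like the following sketch, using Python’s standard ssl module, will report it:

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Report the TLS version negotiated with a server."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # e.g. "TLSv1.2", or "TLSv1.3" once both endpoints support it
            return tls.version()

print(negotiated_tls_version("example.com"))
```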

Read more at The New Stack

Support Driven Development: Listen Now So You Don’t Hear It Later

Here at Scalyr, we’re big fans of Complaint-Driven Development, which I’ll summarize as “focus engineering effort on fixing the things users actually complain about.” We especially focus on issues that generate support requests, with such success that, as CEO, I’m still able to personally handle the majority of frontline support – even as we head toward eight-digit annual revenue.

An important consideration is that support requests cost money even if they aren’t your (product’s) fault. In this post, I’ll explore five common sources of support requests relating to the first piece of Scalyr software most users touch – our log collection agent – and how we’ve sometimes had to think outside the box to address them. None of these were bugs, exactly. (We’ve had those as well, but you don’t need to read a blog post to know it’s a good idea to fix bugs.)

Read more at Scalyr blog

Ops: It’s Everyone’s Job Now

Twenty years ago, ops engineers were called “sysadmins,” and we spent our time tenderly caring for a few precious servers. And then DevOps came along. DevOps means lots of things to lots of people, but one thing it unquestionably meant to lots and lots of people was this: “Dear Ops: learn to write code.”

It was a hard transition for many, but it was an unequivocally good thing. We needed those skills! Complexity was skyrocketing. We could no longer do our jobs without automation, so we needed to learn to write code. It was non-optional. 

Now

It’s been 10-15 years since the dawn of the automation age, and we’re already well into the early years of its replacement: the era of distributed systems.

Consider the prevailing trends in infrastructure: containers, schedulers, orchestrators. Microservices. Distributed data stores, polyglot persistence. Infrastructure is becoming ever more ephemeral and composable, loosely coupled over lossy networks.  Components are shrinking in size while multiplying in count, by orders of magnitude in both directions.  

Read more at OpenSource.com

Supercomputing by API: Connecting Modern Web Apps to HPC

In this video from OpenStack Australia, David Perry from the University of Melbourne presents: Supercomputing by API – Connecting Modern Web Apps to HPC.

OpenStack is a free and open-source set of software tools for building and managing cloud computing platforms for public and private clouds. OpenStack Australia Day is the region’s largest, and Australia’s best, conference focusing on Open Source cloud technology. Gathering users, vendors and solution providers, OpenStack Australia Day is an industry event to showcase the latest technologies and share real-world experiences of the next wave of IT virtualization.
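The general pattern the talk describes, a web app handing work to a cluster over HTTP, might look something like this sketch; the gateway URL and job payload are entirely hypothetical:

```python
import requests

# Hypothetical REST gateway in front of an HPC scheduler; the URL and
# payload fields are illustrative only.
API = "https://hpc.example.edu/api/v1"

job = {
    "name": "demo",
    "script": "#!/bin/bash\nsrun hostname",
    "nodes": 2,
}

resp = requests.post(f"{API}/jobs", json=job, timeout=10)
resp.raise_for_status()
print("submitted job:", resp.json()["id"])
```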

Watch the video at insideHPC