
Machine Learning Lends a Hand for Automated Software Testing

Automated testing is increasingly important in development, especially for finding security issues, but fuzz testing requires a high level of expertise — and the sheer volume of code developers are working with, from third-party components to open source frameworks and projects, makes it hard to test every line of code. Now, a set of artificial intelligence-powered options like Microsoft’s Security Risk Detection service and Diffblue’s security scanner and test generation tools aim to make these techniques easier, faster and accessible to more developers.

“If you ask developers what the most hated aspect of their job is, it’s testing and debugging,” Diffblue CEO and University of Oxford Professor of Computer Science Daniel Kroening told The New Stack.

The Diffblue tools use genetic algorithms to generate candidate tests, and reinforcement learning combined with a solver search to ensure that the generated code is the shortest possible program, which forces the machine learning system to generalize rather than stick to just the examples in its training set.
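Diffblue’s actual system is proprietary, but the evolutionary-search idea can be sketched in a few lines. The toy JavaScript below evolves candidate test inputs toward higher branch coverage of a sample function; the function under test, the fitness measure, and all parameters are invented for illustration, and the reinforcement-learning and solver components are omitted entirely.

```javascript
// Function under test: the search tries to exercise each of its branches.
function classify(x) {
  if (x < 0) return "negative";
  if (x > 1000) return "large";
  return "small";
}

// Fitness: how many distinct branches a candidate set of test inputs covers.
function coverage(inputs) {
  return new Set(inputs.map(classify)).size;
}

function randomInput() {
  return Math.floor(Math.random() * 10001) - 5000; // -5000..5000
}

// A minimal evolutionary loop: keep the fitter half, mutate copies of it.
function evolve({ popSize = 30, generations = 40, genes = 3 } = {}) {
  let pop = Array.from({ length: popSize }, () =>
    Array.from({ length: genes }, randomInput)
  );
  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => coverage(b) - coverage(a)); // fittest first
    const survivors = pop.slice(0, Math.floor(popSize / 2));
    const children = survivors.map((parent) => {
      const child = parent.slice();
      child[Math.floor(Math.random() * genes)] = randomInput(); // mutate one gene
      return child;
    });
    pop = survivors.concat(children);
  }
  return pop.reduce((a, b) => (coverage(b) > coverage(a) ? b : a));
}

const best = evolve();
console.log("best test inputs:", best, "branches covered:", coverage(best));
```

Using branch coverage as the fitness function is what pushes the search past the easy cases: inputs that only hit one branch score poorly, so the population drifts toward small sets of inputs that exercise the code’s distinct behaviors.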

Read more at The New Stack

Video: Linus Torvalds On Fun, the Linux Kernel, and the Future

Linus Torvalds, creator of the Linux kernel, took to the stage at Open Source Summit in Los Angeles. In this keynote presentation, Torvalds joined The Linux Foundation Executive Director Jim Zemlin in conversation about Linux kernel development and how to get young open source developers involved. Here are some highlights of their talk.

On the importance of the Linux kernel and being listed by Time magazine as #17 on its list of the Most Important People of the Century:

I am happy about the fact that I do something meaningful. Everyone wants to do something that matters, that has an impact. I feel like the work is meaningful. At the same time, I work in my home office, in my bathrobe.

On his book Just for Fun:

The premise of the book was that you kind of move on to fun. You have to start with survival. … Once you’re guaranteed survival, and once you’re guaranteed that you have a social connection to the world around you, then you want to get to the point where the most motivating thing in your life is fun.

For me, that fun is a technical challenge. That’s not fun for everybody, but hopefully it is fun for most people in this audience.

On open source adoption in the industry:

It’s very important to have companies involved in open source. … You should not hate those companies that can actually help make your project better. They can bring you all those users, because users to any project are what really matter.

In the kernel community, we’ve come to the realization that it’s not about the small guy against the companies; it’s about collaboration.

On laying the groundwork for participation:

We’re having an easier time working with companies who are not necessarily part of the community. It used to be a huge problem with a lot of tech companies where we had educated technical people who really wanted to collaborate with us, but their companies wouldn’t allow them to work on open source projects.

Companies were worried about their employees being associated with a project that was not their project. And I think over the last couple of decades, The Linux Foundation and others have been teaching companies that it’s OK to participate in the process.

On the time it takes:

People think Linux development is very fast, but I notice over and over that we take forever to do one particular thing. We take years and years of effort. … Quite often, you only see the end result.

On improving security:

The concept of absolute security does not exist.

As a technical person, I’m always very impressed by the people who are attacking our code. … I wish they were on our side. They are so smart, and they could help us. I want to get those people before they turn to the dark side.

On getting the next generation of developers interested in development:

In order to get into the kernel, you have to be interested in the kind of low-level programming that most people are not interested in. I don’t think the kernel will ever be something that you would want to teach in a high school class. It’s fairly esoteric, and you need a certain type of dedication to really even bother to care. … But we get a large percentage of people who are interested in these kinds of low-level problems.

We have thousands of new people every single release. A lot of people will only do something small. But from a health perspective, the kernel has more developers than just about any other project out there. So, I’m not worried about that.

You can watch the complete conversation here:

Are Women in Tech Facing Extinction?

We hear a lot about how few women work in tech. The numbers range from 3 percent in open source to 25 percent industry-wide. But frankly, those aren’t the numbers that scare me most. The numbers that scare the hell out of me are the ones that underscore how many women are choosing to leave tech.

The latest NCWIT data shows that women leave tech at twice the rate of men, and that number has been increasing since 1991. A Harvard Business Review study found that as many as 50 percent of women working in science, engineering and technology will, over time, leave because of hostile work environments.

As a young, very talented female programmer recently told me: “I don’t want to leave tech, but a year into my first job, I’m considering it.”

Read more at Medium

Migrating GitHub’s Web and API to Kubernetes Running on Bare Metal

Over the last year, GitHub has evolved its internal infrastructure so that the Ruby on Rails application responsible for github.com and api.github.com runs on Kubernetes. The migration began with web and API applications running on Unicorn processes that were deployed onto Puppet-managed bare metal (“metal cloud”) servers, and ended with all web and API requests being served by containers running in Kubernetes clusters deployed onto the metal cloud.

According to the GitHub engineering blog, the basic approach to deploying and running GitHub did not significantly change over the initial eight years of operation. However, GitHub itself changed dramatically, with new features, larger software communities, more GitHubbers on staff, and many more requests per second. As the organisation grew, the existing operational approach began to exhibit new problems: many teams wanted to extract the functionality into smaller services that could run and be deployed independently; and as the number of services increased, the SRE team found they were increasingly performing maintenance, which meant there was little time for enhancing the underlying platform.

Read more at InfoQ

Uber and Lyft Bring Open-Source Cloud Projects to CNCF

In the market for ride-sharing services, Uber and Lyft are fierce competitors; the world of open source, however, is another story. At the Open Source Summit on Sept. 13, the Cloud Native Computing Foundation (CNCF) announced that it had accepted two new projects, Envoy from Lyft and Jaeger from Uber.

Envoy is an edge and service proxy that aims to make the network transparent to applications. Jaeger, in contrast, is a distributed tracing system that can be used to help find application performance bottlenecks.

“Lyft developed a fancy service mesh/reverse proxy to handle all their traffic to help scale micro-services within Lyft,” Chris Aniszczyk, COO of Cloud Native Computing Foundation, told eWEEK in a video interview. 

Read more at eWeek

The Basics of Going Serverless with Node.js

Developers are continuing to look for more efficient and effective ways to build applications, and one of the newer approaches is serverless applications, which represent the future of lightweight, scalable, and performant application development.

The “serverless” space is still fairly new, and many developers and companies want to go “serverless” but don’t know how to navigate decisions such as choosing the right cloud provider and avoiding vendor lock-in. And if you do change your mind about the cloud platform, does that mean you have to rewrite your application code?

Linda Nichols, cloud enablement leader at Cloudreach, will be talking about this subject extensively at Node.js Interactive happening Oct. 4-6, 2017 in Vancouver, BC, Canada. In preparation for her session, we asked her a few questions about serverless and why it works so well with Node.js.

Interested? Read below and be sure to check out her full session “Break-Up with Your Server, But Don’t Commit to a Cloud Platform” and many other serverless-based topics by registering for Node.js Interactive.

Linux.com: How do you define serverless?

Linda Nichols: My definition of “serverless” has been evolving and changing since I gave my first talk on it a year ago. The ecosystem is moving forward so fast! This is what I’m going with currently:

“Serverless Architecture is an event-driven architecture that uses a back-end system, such as FaaS (Functions-as-a-Service), that is fully managed by a cloud provider.”  
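As a concrete illustration of what a FaaS back end looks like to the developer, here is a minimal function written against the AWS Lambda Node.js calling convention, used as one example of a fully managed FaaS offering. Other providers differ in the details, and the event shape and greeting logic below are invented for illustration.

```javascript
// A minimal FaaS-style function. The provider invokes it per event, so
// there is no server process for the developer to manage.
const handler = async (event) => {
  // `event` carries the trigger payload, e.g. an HTTP request routed
  // through an API gateway. This example echoes a greeting back.
  const name = (event && event.name) || "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
};

// Lambda would load this as the module's exported handler; locally we can
// simply call it to simulate an incoming event.
handler({ name: "serverless" }).then((res) => console.log(res.body));
```

The event-driven shape is the point: the function holds no state between invocations and only runs when triggered, which is what lets the cloud provider manage scaling and billing per execution.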

Linux.com: Is there a certain environment or type of company that would benefit from serverless architecture?

Nichols: I think serverless architecture is really perfect for companies that need inexpensive tools and prototypes. It’s been really popular in the startup and non-profit communities because serverless applications are faster and easier to develop and nearly free to host — even for extended periods of time.

That said, I think nearly any environment that has access to a cloud provider can benefit from leveraging serverless architecture. It’s not an all-or-nothing architecture; sometimes the best way is a complete re-write of backend services and other times a hybrid system is a great fit.

Linux.com: What are some of the obstacles that folks need to overcome if they want to go “serverless”?

Nichols: If an application is hosted entirely on-premises, then an obstacle can be that initial organizational cloud adoption.

Another obstacle is for applications described as “monoliths” where all of the services are tightly coupled inside of a system. In this case, there needs to be a separate effort to break off some smaller micro or nano services and migrate those to serverless functions. That process can be gradual, so that doesn’t mean a prerequisite is a complete system rewrite.

Linux.com: Why is Node.js a good choice when you are looking to go serverless?

Nichols: My answer here is the same as when someone asks me why I like Node.js in any environment: I think it makes projects more flexible and accessible. Most developers already know at least a little JavaScript because they’ve written web applications, so that gives me a larger pool of people that can work on all parts of my project. If I have a “front-end” development team, then they have the option to work on “back-end” serverless functions. Same for my “back-end” developers that might want to help support a React.js development team.  

Linux.com: Your talk for Node.js Interactive is about breaking up with your server without committing to a cloud platform. How can developers go about doing this?

Nichols: Without giving away too much of what’s in my talk, I will say that it largely involves leveraging some of the great tools that have been built to support Serverless architecture.

Linux.com: What are three key takeaways you’d say a developer must know if they are thinking of going “serverless”? Any must-have tools they would need in their toolbelt?

Nichols: I think it might be easier for me to say what tools a developer doesn’t have to have to go “serverless.”

FaaS and API Gateway tools create an ecosystem that allows developers to eliminate several of the typical tools and frameworks necessary when creating an application.

It’s also very unlikely that they’ll need to learn a new programming language since all of the major FaaS offerings support Node.js and a list of several other popular languages.

Finally, they don’t need to know how to do container management or other typical “ops” tasks. The cloud platforms take care of that for you.

Learn more about Node.js Interactive and register now.

Linux Gains Ascendance in Cloud Infrastructures: Report

Linux is now the dominant operating system on Amazon’s AWS cloud service and is growing rapidly on Microsoft’s Azure platform this year, according to a report on public cloud adoption trends Sumo Logic released on Tuesday.

The company’s second annual State of Modern Apps report reveals usage trends on AWS, Azure and Google clouds, and how they impact the use of modern apps in the enterprise.

Based on data from the experiences of 1,500 Sumo Logic customers, the report gives other organizations a set of frameworks, best practices and hard stats to guide their migration to the cloud. It shows how developers build modern applications across each tier of the application architecture.

“Today’s enterprises are striving to deliver high-performance, highly scalable and always-on digital services. These services are built on modern architectures — an application stack with new tiers, technologies and microservices — typically running on cloud platforms like AWS, Azure and Google Cloud Platform,” said Kalyan Ramanathan, vice president of product marketing for Sumo Logic.

Read more at Linux Insider

4 Tips for Leaders Helping Others Evolve their Careers

In open organizations, we like to say that you own your career. Each one of us is encouraged to find a gap and fill it.

In settings like these—and when there’s more work to be done than there are hands to do it—it’s important to understand your strengths so you can identify where you can be most effective in the organization and which problems you’re passionate about solving. That means everyone—associates, managers, and executives alike—shares responsibility for proactively nurturing an open dialogue about ways they can engage with challenging, meaningful, and interesting work.

Not long ago, my colleague Sam Knuth began making this point in his advice to people who feel underutilized at work:

Read more at OpenSource.com

How to Use Maybe to Test Linux Commands

There are times when you know a command must be run, but you’d really like to test the action before execution. This could be on a production server, where running a command could have results that might negatively impact the server’s ability to perform. When any systems administrator comes across such an instance, the impulse would be to turn to a test server, set up to mirror the production server.

But what if you don’t have the luxury of such a test server? What do you do? You could turn to the likes of the maybe command. maybe is a piece of software (one that should be considered very much in the alpha stages—so tread carefully) that allows an administrator (or user, for that matter) to run a command and see what that command would do to the file system on the machine. When you issue a command with maybe, it will output the results of what running the actual command would do. Once you’ve looked at the possible outcome, you can then decide if you want to execute the command or not.

Let me walk you through the process of installing and using maybe. 
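maybe itself hooks the real command at the system-call level, so it can rehearse arbitrary programs. Purely as a toy sketch of the underlying dry-run idea (not of maybe’s implementation), the JavaScript below records the destructive filesystem operations a task would perform instead of executing them; all names here are invented for illustration.

```javascript
// Records what a task *would* do to the filesystem, without doing it.
class DryRunFs {
  constructor() {
    this.operations = [];
  }
  remove(path) {
    this.operations.push(`delete ${path}`);
  }
  rename(src, dst) {
    this.operations.push(`rename ${src} -> ${dst}`);
  }
}

// The task to rehearse: parameterized over `fs`, so the same logic could
// run for real (backed by actual filesystem calls) or as a rehearsal.
function cleanup(fs) {
  fs.remove("/tmp/report.old");
  fs.rename("/tmp/report.new", "/tmp/report");
}

const rehearsal = new DryRunFs();
cleanup(rehearsal);
rehearsal.operations.forEach((op) => console.log("would:", op));
```

The appeal of maybe is that it gives you this rehearsal step for commands you didn’t write yourself: you see the proposed changes first, then decide whether to run the command for real.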

Read more at TechRepublic

New Initiatives to Create Sustainable Open Source Projects at The Linux Foundation

Open source software isn’t only growing. It’s actually accelerating exponentially in terms of its influence on technology and in society.

The sheer number of projects and developers in open source today is just amazing. There are:

  • 23 million open source developers worldwide
  • 22 million accounts and 64 million repositories on GitHub
  • 41 million lines of code
  • 1,100 new open source projects every day
  • 10,000 new versions of open source projects every day

Even within individual projects, the pace of development, not just the number of projects, is accelerating. Linux is the best example of this. Today we have 4,300 developers contributing to the Linux kernel, adding 10,000 lines of code daily. Think about that: a codebase that changes 8.5 times an hour.

It’s self-evident at this point that no single organization could ever keep up with a development pace that fast and robust.

Open source is just the way modern application development works. And open source isn’t really slowing down anytime soon. The prediction is that we’ll have hundreds of millions of open source libraries available to build the technologies of the future.

We have an abundance of code — but with that abundance comes a bit of anxiety as well. Developers have a hard time knowing whether they’re choosing the right framework or package. Is it secure or not? Which projects are safe to bet my future, or my company’s infrastructure, on?

Read more at The Linux Foundation