Linux Foundation Backs Reproducible Builds Effort for Secure Software

Building software securely requires a verifiable method of reproduction and that is why the Linux Foundation’s Core Infrastructure Initiative is supporting the Reproducible Builds Project.

In an effort to help open-source software developers build more secure software, the Linux Foundation is doubling down on its support for the Reproducible Builds project. Among the most basic and often most difficult aspects of software development is making sure that the software end users get is the same software that developers actually built.

“Reproducible builds are a set of software development practices that create a verifiable path from human readable source code to the binary code used by computers,” the Reproducible Builds project explains.
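In practice, verifying that a build is reproducible comes down to rebuilding the same source independently and checking that the result is bit-for-bit identical to the published binary, for example by comparing cryptographic checksums. Here is a minimal, hypothetical sketch of that comparison step in Node.js (the file paths are made up for illustration):

    // Hypothetical sketch: compare two independently built artifacts bit for bit.
    // If the checksums match, the rebuild reproduced the published binary exactly.
    const crypto = require('crypto');
    const fs = require('fs');

    function sha256(path) {
      // Hash the full file contents.
      return crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex');
    }

    const published = sha256('./dist/package_1.0.tar.gz');   // binary published upstream
    const rebuilt   = sha256('./rebuild/package_1.0.tar.gz'); // binary rebuilt from source

    console.log(published === rebuilt
      ? 'Checksums match: the build is reproducible.'
      : 'Checksums differ: the build is not reproducible.');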

Read more at eWeek

Beginner’s Guide to Vim

Vim is the most popular and widely used editor for Linux. It has thousands of useful features, and once you learn Vim, you will use it on a daily basis. One of our previous posts had useful Vim tips and tricks. Because of popular demand, we wrote a guide to Vim for beginners. We recommend that you start using Vim as soon as possible, so you can find out all the things that Vim is capable of doing.

Read more at Rose Hosting

Don’t Leave Software Testers Out of DevOps

The term DevOps implies that software delivery is all about integrating the development and IT operations teams. But the software testing team is an important, if oft-overlooked, part of the picture. Here’s why.

Could you deliver software continuously without involving the QA team in the continuous delivery pipeline? You could try, but it would probably not go well. Software testers play a vital role in continuous delivery, for the following reasons:

They are the glue connecting developers to IT Ops. If there is no one to test the code that developers write, IT Ops cannot deliver a stable product.

Read more at DevOps.com

Nithya Ruff’s Appointment to Linux Foundation Aids Diversity Efforts

It’s been a couple of weeks since the nonprofit Linux Foundation announced the addition of Nithya Ruff, along with Erica Brescia and Jeff Garzik, to its board. Ruff comes to the table with a long list of accomplishments. For the past two years, she’s been at Western Digital, where she’s currently the director of the open source office and is also the founding president of the company’s Women’s Innovation Network. During this same period, she’s been a member of the OpenStack Diversity Working Group and Women of OpenStack.

Her career in tech began with a ten-year stint at Kodak, where, among other things, she was an IT Analyst. Since then, she’s spent time at some well-known tech firms, including SGI, Avaya, and Intel. With her current employer, her duties include being an evangelist, meaning she can often be found speaking at various tech conferences, including the Linux Foundation-hosted LinuxCon. Her appointment to the Linux Foundation board is as an at-large director, which puts her in a rather advantageous position, since she’s not directly beholden to any of the Linux Foundation’s corporate sponsors.

Read more at DevPro Connections

Online Behavioral Experiments Happening With nodeGame and Node.js

The goal of social science research is to discover fundamental features of human behavior. There are many different approaches to this, but one of the best is through games. How do researchers implement these types of games, and what technologies are most appropriate to help them gather the data they need to discover why humans behave the way we do?

In advance of Node.js Interactive, to be held November 28 – December 2 in Austin, we talked with Stefano Balietti, postdoc at Northeastern Network Science Institute and fellow at Harvard Institute of Quantitative Social Science, about nodeGame, a framework in Node.js to create and manage real-time and discrete-time synchronous behavioral experiments. In this interview, he discusses why he chose Node.js for his game-based research, other research projects using Node.js, and what impact it could have on science in the future.


Linux.com: Can you give us a little background on how you came into working in social science and took an interest in scientific behavioral experiments?

Stefano Balietti: Well, this is a good question. I am not quite sure… Maybe my interest in behavioral experiments came about because I played too many video games as a kid, or because I believe that playing is the best way to learn. There is a famous quote — one that we also printed on our Network Science lab t-shirts — that says, “Play is the highest form of research.”

I think it’s true.

Behavioral experiments are a wonderful way of inferring how people actually think and act. In other words, if you need to know what the real preferences and beliefs of a group of people are, most of the time the best way is not to ask them (because they might not actually know, or might be reluctant to say), but to put them in a situation that replicates the problem under study.

When we compare their behavior with behavior in an analogous situation, where only one situational variable (out of many) has been changed, we can determine the role of that variable, that is, the causal mechanism through which it shapes behavior.

Linux.com: What is nodeGame? Why did you decide to use Node.js to help you with your research?

Stefano Balietti: nodeGame is a platform with an API for creating and running behavioral experiments in the browser, online or in a university lab. It uses Express.js as the HTTP server and Socket.io as a real-time transport mechanism for all game messages. It helps developers run automated requirement checks and waiting rooms, define the game sequence, handle players’ disconnections and reconnections, and more! If you want more details, check out this site.
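For a rough sense of the plumbing underneath, here is a minimal, hypothetical sketch of the Express.js plus Socket.io pattern that nodeGame builds on: an HTTP server that serves the in-browser game client and relays game messages among connected players in real time. This is not nodeGame’s actual API, only an illustration of the underlying mechanism.

    // Hypothetical sketch of the Express.js + Socket.io pattern nodeGame builds on.
    // Not nodeGame's actual API; it only shows how game messages move in real time.
    const express = require('express');
    const http = require('http');

    const app = express();
    const server = http.createServer(app);
    const io = require('socket.io')(server);

    // Serve the in-browser game client from the public/ directory.
    app.use(express.static('public'));

    io.on('connection', (socket) => {
      console.log('player connected:', socket.id);

      // Relay each game message (e.g., a player's decision) to every other player.
      socket.on('gameMessage', (msg) => {
        socket.broadcast.emit('gameMessage', msg);
      });

      socket.on('disconnect', () => {
        console.log('player disconnected:', socket.id);
      });
    });

    server.listen(3000, () => console.log('experiment server listening on port 3000'));

In nodeGame itself, concerns such as the waiting room, the game sequence, and reconnection handling sit on top of this kind of messaging layer.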

I came from a background of PHP/MySQL applications and also did a couple of Drupal modules. However, when working with these technologies, I always needed better performance.

One day (a few years ago), I read an article in The Register talking about this new technology called Node.js. It emphasized that it was particularly good for dealing with many async requests.

I didn’t try Node.js straightaway, but the article somehow stuck in my mind. A few months later, my research group decided to run a new experiment about creativity and how competition affects it. The design we had in mind was more complex than traditional behavioral experiments in the social sciences, and it would have been difficult to run with the standard experimental software that was available at the time.

I decided to implement it in Node.js. At the time, I had only a basic knowledge of JavaScript, so that was quite a risky move. Eventually, it turned out to be a very good choice, especially given that Node.js is fairly easy to learn when you know JavaScript, and there were so many software packages available that the first installation and setup of a web server with WebSockets was fairly easy. I was able to create a prototype in a relatively short time, run the experiment in the lab, create a follow-up to run online, and finally the article was published in a highly reputed scientific journal.

From there, I expanded the platform, made it more general, started writing documentation, wrote an interface to connect to Amazon Mechanical Turk, etc. Now we have nodeGame version 3.0, and other researchers have used it for their own studies as well.

Linux.com: Do you see Node.js helping other people who are in behavioral research?

Stefano Balietti: Node.js could really have an enormous impact on behavioral research, mainly because funding in academia is generally much more limited than in industry, and given that Node.js is open source, it helps with the cost. Not to mention, it can achieve very high performance with relatively low use of resources, if people know how to program it correctly.

Linux.com: Why is Node.js a good platform to use in the world of science?

Stefano Balietti: Node.js is currently not yet used to its full potential in the world of science, in my opinion. Python at the moment has a larger user base, mainly due to the many scientific packages that are available out there for data mining and machine learning, but also for matrix calculus. Furthermore, in academic curricula, there are more Python courses than Node.js/JavaScript courses, so researchers (like myself) would have to learn a new language. This creates a small barrier.

However, Node.js is catching up as more scientific packages become available, and it has an edge in all web-based applications, especially when you need to create a virtual space where many individuals interact, as in crowd-sourced research such as prediction markets or Citizen Science platforms, which are both promising and trending right now. Node.js is very reliable and can scale pretty easily, with limited costs. I see a large potential for expansion here.

View the full schedule to learn more about this marquee event for Node.js developers, companies that rely on Node.js, and vendors. Or register now for Node.js Interactive.

Speak at Embedded Linux Conference and OpenIoT Summit 2017 in Portland

The Linux Foundation is seeking developers and systems architects interested in sharing their knowledge, expertise and ideas at the 2017 Embedded Linux Conference and Open Internet of Things (IoT) Summit North America.

The co-located conferences, to be held Feb. 21-23 in Portland, Oregon, bring together embedded and application developers, product vendors, kernel and systems developers, as well as systems architects and firmware developers to learn, share, and advance the technical work required for embedded Linux and IoT.

Now in its 12th year, Embedded Linux Conference is the premier vendor-neutral technical conference for companies and developers using Linux in embedded products, while OpenIoT Summit is the first and only IoT event focused on the development of IoT solutions.

The deadline to submit proposals is Dec. 10, 2016. Submit a proposal today!

Submit an ELC Proposal  

Submit an OpenIoT Summit Proposal

You can see potential speaker topics for ELC below, and watch speakers in 155+ recorded sessions from ELC 2016:

  • Audio, Video, Streaming Media and Graphics

  • Security

  • System Size, Boot Speed

  • Real-Time Linux – Performance, Tuning and Mainlining

  • SDKs for Embedded Products

  • Flash Memory Devices and Filesystems

  • Build Systems, Embedded Distributions and Development Tools

  • Linux in Devices such as Mobile Phones, DVRs, TV, Cameras, etc.

  • Practical Experiences and War Stories

  • And more.

View the full list of suggested Embedded Linux Conference topics here >>

Potential speaker topics for OpenIoT Summit include:

  • Frameworks and OSes

  • Low-Power Communication

  • Connected Car

  • Drones

  • Smart Home

  • Device and Firmware Management

  • Provisioning (Device, Service, User)

  • Cloud Integration / Connectivity

  • App Development and UX

  • Security

  • Scaling

  • And more.

View the full list of suggested OpenIoT Summit topics here >>

Tilling the Brownfield: Bumps on the Road to the Container Dream

Greenfield buildouts are wonderful and we love them. We get to start from scratch and don’t have to worry about compatibility with existing servers and applications, and don’t have to struggle with preserving and migrating data. Greenfields are nice and clean and not messy. Greenfields are fun.

Sadly, back here in the real world, greenfield buildouts are rare, and we must till the brownfields of legacy systems. This is becoming an acute issue in our fun new era of clouds, containers, and microservices, which all look like wonderful technologies, but implementing them is not quite as easy as talking about them. In his presentation from LinuxCon North America, Richard Marshall of IAC Publishing Labs describes Ask.com’s adventures in navigating two decades of legacy infrastructure, and the many speed bumps and roadblocks along the way, on the road to living the container-native dream. It’s a great, realistic guide to what to expect when your turn comes and how to deal with the inevitable difficulties.

Marshall tells us how the beginnings were innocuous enough: “About three years ago in the early end of 2014, the first glimmers of interest of the container concept started to emerge within the Ask development organizations…We spun up a pilot environment, tested things. It went very well, actually. It was stable. It did everything it said it was going to do.”

So far, so good. But the first speed bumps came early: “However, because this was one of those initiatives that was driven more on the interest in the technology and less of an actual business driver, we ended up at a bit of an impasse where the developers wouldn’t buy into the process of working towards putting real applications on this until operations gave them a timeline for when we would be able to go to production. Ops reciprocally wouldn’t do that until dev bought into it…That kind of catch-22 lingered for a while, and eventually we just let the pilot environment rot in place. It’s still there,” says Marshall.

Brave New Container World

Time passed and there it sat. But the buzz amplified, and tech news was all about Docker, Kubernetes, Mesos, orchestration, continuous integration, virtual machines, all the promises of the brave new container world. Marshall’s teams launched some new pilot projects using Docker, Kubernetes, and VMs, which succeeded to the point that most of the dev teams were using them. Marshall says, “The further we got with that pilot, the more it became apparent that to have any reasonable timelines for getting to production, we would need some sort of on-ramp that didn’t actually include all of the complexities at once.”

Despite the success in deploying Kubernetes, Marshall’s team realized they would have to take a step or two back and replace it with the Kubernetes-based OpenShift Origin. “That decision did kind of upend a lot of what we were doing, and required some rethinking of how we were going to make that happen…So far we’ve only run into a few problems with the differences between the Kubernetes exposed by OpenShift and the bare Kubernetes that we were running before. Last week, we launched our first front-end production service on Docker, serving about 10 million requests per day. We will finish deploying the rest of that service, and hopefully that will jump that figure up to about 40 million requests per day,” he says.

Some of the difficulties were caused by fascination with the technologies rather than by real business reasons to deploy them, which diverted resources from other projects. Some were delays caused by security testing. Marshall says one of the biggest speed bumps was the learning curve: “The learning curve was probably the most challenging thing that we had to overcome in the months and year leading up to our first production deployment.”

Watch the full video (below) to learn in detail the challenges Marshall’s team faced and how they overcame them.

LinuxCon videos

Tilling the Brownfield: A Container Story by Richard Marshall, IAC Publishing Labs

Richard Marshall of IAC Publishing Labs describes Ask.com’s adventures in navigating two decades of legacy infrastructure on the way to living the container native dream, in his presentation from LinuxCon North America.

Radio Free HPC Reviews the New TOP500

The 48th edition of the TOP500 list saw China and the United States pacing each other for supercomputing supremacy. Both nations now claim 171 systems apiece in the latest rankings, accounting for two-thirds of the list. However, China has maintained its dominance at the top of the list with the same number 1 and 2 systems from six months ago: Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops. This latest edition of the TOP500 was announced Monday, November 14, at the SC16 conference in Salt Lake City, Utah.

After the US and China, Germany claims the most systems with 32, followed by Japan with 27, France with 20, and the UK with 17.

Read more at insideHPC

Linus Torvalds Announces Linux Kernel 4.9 RC5, Things Look Fairly Normal Now

Today, November 13, 2016, Linus Torvalds announced the release and general availability of the fifth RC (Release Candidate) version of the upcoming and highly anticipated Linux 4.9 kernel series.

Linux kernel 4.9 could be the next LTS (Long Term Support) branch, and it promises to be the greatest kernel release ever, bringing support for some older AMD Radeon GPUs in the AMDGPU driver, and lots of other improvements. Right now, early adopters can get their hands on Linux kernel 4.9 RC5, which looks like it’s much smaller than RC4.

“Things have definitely gotten smaller, so a normal release schedule (with rc7 being the last one) is still looking possible despite the large size of 4.9,” said Linus Torvalds in today’s announcement. “But let’s see how things work out over the next couple of weeks. In the meantime, there’s a lot of normal fixes in here, and we just need more testing…”

Read more at Softpedia