When it comes to scientific computing, few names are better known than Stephen Wolfram. He created Mathematica, a program that researchers have used for decades to aid in their computations. Wolfram later expanded Mathematica into a full multi-paradigm programming language, the Wolfram Language. His company, Wolfram Research, also packaged many of Mathematica's functions, along with a wealth of outside data, into a cloud-based service and API. So at this year's SXSW Interactive, we spoke with Wolfram about how to use this new cloud service to add computational intelligence to your own programs.
These days, digital grabs plenty of headlines trumpeting how radically it is changing customer behavior. In practice, this means IT departments must deliver new features faster while meeting ever more demanding requirements for availability (24/7) and security.
DevOps promises exactly that, by fostering a high degree of collaboration across the full IT value chain, from the business through development and operations to IT infrastructure. But there's a problem.
While many software-development and operations teams have made steps toward DevOps methods, most enterprise IT-infrastructure organizations still work much as they did in the first decade of this century: They use a “plan-build-run” operating model organized by siloed infrastructure components, such as network, storage, and computing.
A recent federal district court decision denied a motion to dismiss a complaint brought by Artifex Software Inc. (“Artifex”) for breach of contract and copyright infringement claims against Defendant Hancom, Inc. based on breach of an open source software license. The software, referred to as Ghostscript, was dual-licensed under the GPL license and a commercial license.
This case highlights the need to understand and comply with the terms of open source licenses. … It also confirms the validity of certain dual-licensing open source models and underscores the need to understand which of the license options applies to your usage. If your company does not have an open source policy or has questions on these issues, it should seek advice.
Cedric Bail, a long-time contributor to the Enlightenment project who works on EFL integration with Tizen at Samsung Open Source Group, discussed some of the lessons learned in optimizing wearable apps for low battery, memory, and CPU usage.
At the recent Embedded Linux Conference, Walt Miner provided an AGL update and summarized AGL’s Yocto Project based Unified Code Base (UCB) for automotive infotainment.
Patrick Ohly, a software engineer at Intel, discussed integrity protection schemes and system update mechanisms at the recent Embedded Linux Conference.
Watch the video of this Open Networking Summit keynote for more details about AT&T's approach to using software and hardware to evolve its network.
This series provides a preview of the new, self-paced Containers Fundamentals course from The Linux Foundation, which is designed for those who are new to container technologies. The course covers container building blocks, container runtimes, container storage and networking, Dockerfiles, Docker APIs, and more. In the first excerpt, we defined what containers are, and in this installment, we’ll explain a bit further. You can also sign up to access all the free sample chapter videos now.
Note that containers are not lightweight VMs. Both technologies provide isolation and run applications, but the underlying mechanisms are completely different, as are the processes for managing them.
VMs are created on top of a hypervisor, which is installed on the host operating system. Containers run directly on the host operating system, without any guest OS of their own; the host operating system provides isolation and allocates resources to the individual containers.
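To make the contrast concrete, here is a minimal Dockerfile sketch of the sort the course covers; the base image and the script name are illustrative assumptions, not taken from the course material:

```dockerfile
# Minimal container image. Note that no guest OS kernel is included:
# at runtime the container shares the host's kernel, and the host OS
# handles isolation and resource allocation.
FROM alpine:3.6

# Add a single script to the image (hypothetical file for illustration).
COPY hello.sh /hello.sh

# The process the container runs; it appears as an ordinary,
# isolated process on the host.
CMD ["/bin/sh", "/hello.sh"]
```

Because the image carries only a small userland and the application itself, it is typically megabytes rather than the gigabytes a full VM disk image requires.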
Once you become familiar with containers and would like to deploy them in production, you might ask, "Where should I deploy my containers: on VMs, on bare metal, or in the cloud?" From the container's perspective, it does not matter, as it can run anywhere. In practice, though, many variables affect the decision, such as cost, performance, security, your current skill set, and so on.
Find out more in these sample course videos below, taught by Neependra Khare (@neependra), Founder and Principal Consultant at CloudYuga, former Red Hat engineer, Docker Captain, and author of the Docker Cookbook:
By Sebastien Goasguen – @sebgoa, Kubernetes lead at Bitnami, Founder of Skippbox, O’Reilly Author
OpenShift is Red Hat's container application platform. It is based on Kubernetes, and to keep things short we are going to call it a PaaS. The new OpenShift v3 represents a big bet by Red Hat to rewrite the software entirely in Go and leverage Kubernetes. Indeed, when you use OpenShift, you get a Red Hat distribution of Kubernetes plus the OpenShift functionality around code deployment, automated builds, and so on that you are used to from a typical PaaS.
What stands out with OpenShift, and what Red Hat touts quite often, is its focus on security.