
Innovating With Open Source: Microsoft’s Story

This article was sponsored by Microsoft and written by Linux.com.

After much anticipation, LinuxCon, ContainerCon and Cloud Open China are finally here. Some of the world’s top technologists and open source leaders are gathering at the China National Convention Center in Beijing to discover and discuss Linux, containers, cloud technologies, networking, microservices, and more. Attendees will also exchange insights and tips on how to navigate and lead in the open source community, and what better way to do that than to meet in person at the conference?

To preview how some leading companies are using open source and participating in the open source community, Linux.com interviewed several companies attending LinuxCon China. Here, Microsoft discusses how and why it adopted open source, how that strategy helps its customers and the open source community, and how it also helps Microsoft innovate and change the way it does business.

We spoke with Gebi Liang, Partner Director of Microsoft Cloud and Enterprise China Cloud Incubation Center to learn more.

Linux.com: What is Microsoft’s open source strategy today?

Gebi Liang: Our company mission is to enable companies to do more. An important step is enabling organizations to work on the tools and platforms they know, love, and have already invested in. Thus, our strategy centers on providing an open and flexible platform that works the way you want and need it to. The platform integrates with leading ecosystems to deliver consistent offerings. But Microsoft has gone even further, releasing technology to support a strong ecosystem through its portfolio of investments and contributing technology to the open source community as well.

Shaping and deploying this strategy has been a multi-year journey, but each step along the way was significant, including investing in open source contributions across the company and joining key foundations to deepen our partnerships with the community. We also made Linux and OSS run smoothly on Azure, and now one in three VMs on Azure is Linux. Microsoft teams forged key open source partnerships to bring more choice in solutions to Azure, such as Canonical, Red Hat, Pivotal, Docker, Chef, and many more. Plus, we are also bringing many of our technologies into the open, or making them available on Linux.

Linux.com: What are some of Microsoft’s contributions in open source and as a platform?

Gebi Liang: We are making great progress in enabling and integrating open source, but also in contributing and releasing aspects.

First, while integrating open source solutions into our platforms, we collaborate with the community and contribute code back to it. Projects we have contributed to include, but are not limited to: Linux and FreeBSD on Hyper-V, Hadoop, Windows containers, Mesos and Kubernetes, Cloud Foundry and OpenShift, and various cloud deployment and management tools such as Chef, Puppet, and the HashiCorp tools. Of course, there are many other projects too.

While developing VS Code, our powerful and lightweight code editor, we have also made many contributions to the Electron codebase. As Microsoft has become a member of many prominent open source foundations, such as the Linux Foundation, we will be even more involved in these communities and will continue to contribute.

Microsoft has also been releasing more and more of our platforms, services, and products to the open source community. The best-known ones include .NET, PowerShell, TypeScript, Xamarin, CNTK for machine learning, all the Azure SDKs and CLIs, and VS Code.

After the acquisition of Deis, we continue to invest in the set of popular Kubernetes tools they developed, and we recently released Draft, a tool for creating apps for Kubernetes, on GitHub. Even for products that are not fully open sourced, many components, especially newly developed ones, have become open source, such as many of the IoT tools and adapters and the OMS agent for Linux. You can find the full list at https://opensource.microsoft.com/.

Even in the hardware space, we’re contributing our data center designs to the Open Compute Project.

Linux.com: How exactly does Microsoft empower companies that are using or looking to use open source?

Gebi Liang: We fully recognize that customers want more choices, including the use of open source, so we have been enabling popular open source stacks on our platform with unprecedented speed. I am very proud to share a list of such projects covering just about every aspect of what customers need. For OS images, we enabled all the major Linux distros, plus FreeBSD and OpenBSD as the latest additions. For dev tools, developers who are used to the Mac environment can now use Visual Studio for Mac, VS Code on Linux and Mac, or Eclipse and IntelliJ. And for databases and big data, a Linux developer can use SQL Server on Linux as well as the fully managed MySQL and PostgreSQL services on Azure.

In terms of management and monitoring, one can use not only OMS and PowerShell, but also Chef, Puppet, Ansible, Terraform, Zabbix, and more. And for the popular microservices space, we provide fully diversified microservice platform support on Azure, including Docker Swarm, Mesos DC/OS, and Kubernetes (K8s), in addition to Microsoft’s own microservices platform, Service Fabric, which supports both Windows and Linux. As a result, today more than 30% of IaaS VMs on Azure run Linux, and in China that number has reached 60%!

Linux.com: How is open source important to innovation at Microsoft?

Gebi Liang: Open source allows us to build on what the community has contributed, which gives us much greater speed to market. Also, when we contribute and release software back to the community, we can leverage the community for better feedback and build better applications inspired by new and creative ideas. This helps us innovate faster and develop best practices beyond what any single company could achieve on its own. And that’s the power of the crowd’s wisdom.

Linux.com: It’s interesting to hear how Microsoft’s embrace of open source helps its customers, but also how open source helps Microsoft innovate internally. What else is Microsoft doing to build or empower an open source culture?

Gebi Liang: We are committed to building a sustainable open source culture at Microsoft. A cultural shift requires deep internal alignment with rewards and compensation. Microsoft has refined its performance review system to better accommodate a culture of sharing and contributing. All employees are asked at every performance review to describe how they are empowering others and how they are building on the work of others. Open source is an officially recognized and documented core aspect of the developer skill set. And we can see that internal culture change paying off, with over 16,000 employees on GitHub, some of them making critical contributions to projects like Docker and Hadoop.

I hope to see everyone at LinuxCon China. I’m happy to share more information about Microsoft and open source and perhaps collaborate on new projects too. See you there!

U.S. Slips in New Top500 Supercomputer Ranking

In June, we can look forward to two things: the Belmont Stakes and the first of the twice-yearly TOP500 rankings of supercomputers. This month, a well-known gray and black colt named Tapwrit came in first at Belmont, and a well-known gray and black supercomputer named Sunway TaihuLight came in first on June’s TOP500 list, released today in conjunction with the opening session of the ISC High Performance conference in Frankfurt. Neither was a great surprise.

…Sunway TaihuLight was the clear pick for the number-one position on the TOP500 list, having enjoyed that first-place ranking since June 2016, when it beat out another Chinese supercomputer, Tianhe-2. The TaihuLight, capable of some 93 petaflops in this year’s benchmark tests, was designed by the National Research Center of Parallel Computer Engineering & Technology (NRCPC) and is located at the National Supercomputing Center in Wuxi, China.

Read more in IEEE Spectrum

Commercial Dependencies and Sustainable Open Source Ecosystems

Open source is the new norm for software development. It’s also rapidly becoming the new economic norm. New businesses, industries, segments of computing and life have been enabled and created due to shared innovation. Open source has been at the heart of a lot of those shared innovations. The numbers speak for themselves*:

  • 3.8 million+ open source contributors world wide
  • 31 billion lines committed to open source repositories
  • 110+ open technology startups that raised funding
  • 10 open technology companies valued above $1 billion

             *all statistics courtesy of the Linux Foundation

Every day, we see developer communities work on and contribute code to open source projects. They’re out there working to build the best solutions possible for their particular objectives – and they’re doing it collaboratively. 

Read more at DevExchange

Hacker Board Survey Results: More Raspberry Pi, Please

The results are in for the 2017 Hacker Board survey. A total of 1,705 Linux.com and LinuxGizmos readers voted for their favorite Linux-driven, community-backed SBCs under $200 out of a catalog of 98. As with last year’s jointly sponsored survey, as well as the 2015 and 2014 polls, a Raspberry Pi single board computer came out on top.

What was remarkable this time around was the huge 4-to-1 gap between the Raspberry Pi 3 and the nearest competitor, which for the first time was also a Raspberry Pi: the Raspberry Pi Zero W. Third place went to the revamped, Cortex-A53 based Raspberry Pi 2.

Our official totals are based on Borda Count scoring, in which we tripled the number of first choices for an SBC, then doubled the number of second place selections, and added the two results to the unadjusted third-choice amount. When looking only at first-choice selections, the Raspberry Pi 3 outshined the second-place UDOO X86 by a factor of 7-to-1.
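
To make that weighting concrete, here is a minimal sketch in Python of the scoring scheme described above; the vote counts are invented placeholders rather than actual survey numbers.

```python
# Minimal sketch of the weighted (Borda-style) scoring described above:
# first-choice votes count triple, second-choice double, third-choice once.
# The vote counts below are hypothetical placeholders, not survey data.

votes = {
    # board: (first_choice, second_choice, third_choice)
    "Raspberry Pi 3":      (900, 200, 100),
    "Raspberry Pi Zero W": (150, 300, 200),
    "Odroid-XU4":          (80, 150, 180),
}

def borda_score(first, second, third):
    """Weighted total: 3*first + 2*second + 1*third."""
    return 3 * first + 2 * second + third

ranking = sorted(
    ((board, borda_score(*counts)) for board, counts in votes.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for board, score in ranking:
    print(f"{board}: {score}")
```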

Before the quad-core Raspberry Pi 2 arrived in 2015, it was easy to consider the possibility that some Odroid, UDOO, Banana Pi, BeagleBone, or other contender might come along to give the Pi a run for its money. More powerful, and often more affordable, competitors arrived, and most, such as the BeagleBone, offered far better open source hardware support. Yet, the Pi 2 continued to thrive, and the 64-bit RPi 3 took it to the next level.

Even the very competitive Raspberry Pi 3 has been overshadowed by some faster, cheaper, and more feature-rich boards, some of which provide the same 40-pin expansion connector. But none offer the guaranteed compatibility of expansion boards, especially the newer HAT add-ons, nor can they match the project’s software support. Perhaps because the Raspberry Pi Foundation is at heart an education-focused effort, the community is also deeper, broader, and more grassroots than that of many of the more vendor-driven projects.

X86 boards gain traction

I’ll take a further look at the Pi phenomenon below, but first let’s examine some other trends reflected in the results. First is the rising popularity of x86 entries, which this year totaled eight boards out of 98 mostly ARM-based designs.

Earlier x86 entries have had middling scores in our surveys, but newer models like Seco’s UDOO X86, Intel’s MinnowBoard Turbot Quad, and Aaeon’s UP Squared and UP Board have drawn considerable interest. Prices are still higher than with ARM boards, but they offer powerful quad-core Intel Atom SoCs, as well as features you don’t usually find on ARM SBCs such as SATA and USB 3.0.

The Intel Edison Kit for Arduino advanced to 18th from last year’s 35th, and the Quark-based Intel Galileo Gen 2 ranked #43. However, both products, including the Intel Edison module itself, are being discontinued, according to a June 19 Hackaday story. Although the discontinuation of these older boards is unsurprising, the story also said that the newer, Atom-based Intel Joule module is also being discontinued. 

Another surprise this year was the rebounding popularity of official Arduino boards that also run Linux. Most of these, such as the number 10 ranked Arduino Industrial 101, were in last year’s survey, but didn’t score as well. This also represents a win for the beleaguered MIPS architecture, which forms the foundation of the companion chips on the MCU-based Arduino boards that run OpenWrt or Linino Linux.

Despite the gains for x86 and MIPS boards, ARM boards still dominated the contest. The first non-Raspberry-flavored SBC on the list is the fourth-ranked Odroid-XU4, which is based on an octa-core Samsung Exynos SoC. Other top 10 winners include the industrial-oriented BeagleBone Black, the 96Boards-compatible DragonBoard 410c, the pseudo-Pi-compatible Odroid-C2, and the ninth-ranked Raspberry Pi Zero, which has been eclipsed by the almost identical, but wireless-enabled, RPi Zero W.

The 11th ranked BeagleBone Black Wireless is one of several BeagleBone clones that did well, including the new robotics-targeted BeagleBone Blue. Other Top 20 models include the Pine A64, the Asus Tinker Board, and Banana Pi BPI-M64, all of which have RPi 40-pin expansion. Also in the 10-20 range are the Arduino Yun and Arduino Tian, the UP Squared and UP Board, and the old Intel Edison Kit for Arduino. Number 19 goes to the Chip Pro Dev Kit — Next Thing’s sandwich style alternative to the Chip SBC, which is currently out of stock.

The 1,705-person sample, as well as the 90 different countries of origin, suggests the survey is a fairly accurate indicator of consumer SBC popularity. However, SBCs that are more commonly available and popular in East Asia, such as the many Orange Pi and NanoPi boards in our catalog, may have been undercounted. It turns out that SurveyMonkey is blocked in China, which resulted in only eight Chinese respondents, three of whom were from Hong Kong. Yet content firewalls probably don’t explain why there were similarly minuscule totals cast from Japan, Korea, and Russia. Readership totals suggest that the numbers should be considerably higher. We’re looking into it.

The Raspberry Pi vs. the IBM PC

Even if there were more voters from China, Japan, Korea, and Russia, it’s hard to imagine that the Raspberry Pi would not have dominated. The many Raspberry Pi pseudo clones with 40-pin connectors, including the Orange Pi, NanoPi, Banana Pi, Odroids, and others did well overall but did not score quite as high as in last year’s survey. This would suggest that the compatibles competition has been neutralized at least for now.

It is difficult to find an exact parallel in computing history to the Raspberry Pi and the consumer SBC market – in part due to the novel open source, hobbyist, and educational nature of the genre. The closest we can think of is the IBM PC market back in the 1980s, which was also marked by a comparatively open platform. Yet while the Raspberry Pi dominates the consumer SBC market, IBM has long been a relatively minor PC vendor, and sold off its ThinkPad line to Lenovo in 2005.

Despite establishing the PC standard, IBM was quickly besieged by competition from cheaper PC compatibles from the likes of Compaq, and it was ultimately eclipsed by those vendors. There were many reasons for this, including the IBM PC’s high price, IBM’s underestimation of the PC market it had created, and the very openness of the platform, which was also the prime factor behind the PC standard’s success. That openness was not entirely voluntary, as full PC clones were not possible until Compaq reverse-engineered the closed IBM PC BIOS, as fictionally dramatized in the first season of Halt and Catch Fire.

None of these were issues with the Raspberry Pi, which wasn’t even the first community-backed Linux hacker board, having followed the BeagleBoard and others. Although Pi pseudo clones mimic the Raspberry Pi expansion interface and some key features, they aren’t true clones since the Pi’s Broadcom SoC is unavailable to other vendors. In this respect, at least, the IBM PC was more open than the Raspberry Pi, which makes it difficult for software written for the Pi to run without modification on boards with other processors.

Thanks to Linux and other common attributes among today’s hacker boards, however, software usually can be easily ported between different ARM Linux SBCs. Also, software compatibility is not as important in the diverse, often IoT-focused world of purpose-specific hacker board projects as it is with multi-purpose PCs.

The one common theme in the success of both the PC and the Pi is that a generally open hardware platform wedded to a common OS – Windows then instead of Linux/Android now – led to domination over more closed platforms. IBM benefited greatly from this approach for decades even if it had to share the loot with dozens of other companies.

For more analysis, charts, and other details about the 2017 hacker board survey, please see the additional coverage at LinuxGizmos.

Tips on Scaling Open Source in the Cloud

This article was sponsored by Alibaba and written by Linux.com.

After much anticipation, LinuxCon, ContainerCon and Cloud Open China will soon be officially underway. Some of the world’s top technologists and open source leaders are gathering at the China National Convention Center in Beijing. The excitement is building around the discoveries and discussions on Linux, containers, cloud technologies, networking, microservices, and more. Attendees will also exchange insights and tips on how to navigate and lead in the open source community, and what better way than to network in person at LinuxCon China?

To preview how some leading companies are using open source and participating in the open source community, Linux.com interviewed several companies attending the conference. In this segment, Alibaba discusses how to successfully manage scaling open source in the cloud.

We spoke with Hong Tang, chief architect of Alibaba Cloud.  Here are the interesting insights he had to share.

Linux.com: What are some of the advantages of using open source in the cloud?

Hong: I can summarize that in three points for application developers: a shorter learning curve, better security with less hassle, and more resources with increased agility.

First is the shortened learning curve. Developers just want to develop applications when they use open source. They want to focus on their particular application logic and they want to decide what features to develop. They do not want to spend time and effort on managing the physical infrastructure, an aggravation cloud computing eliminates.

Further, developers are aware that many open source products are not easy to set up and configure properly, particularly those running on a distributed set of machines, which means it is much more than a single library you can just link to your application. Managing open source on the cloud lowers the learning curve on those issues for developers.

Also, having so many choices, with different kinds of open source offerings on the cloud, means developers can try several options and quickly figure out which will work for them. And they don’t waste time learning how to set up, configure, and use a piece of software, only to discover it doesn’t deliver what they need. So that’s the first big advantage of using open source in the cloud.

The second thing I think is very important is security. Given the open nature of open source software, everyone can see the source code, so it’s much easier to figure out the security vulnerabilities of the software. But not all developers are highly focused on security, so sometimes they may fall behind on things like applying patches or upgrading to the latest version of the software. Particularly when the newer version might not be compatible, an upgrade can mean they have to reconfigure everything. The cloud is very helpful with that, since patches and upgrades are automatic.

Also, we have dedicated teams watching the vulnerabilities of all those open source options, and of commercial software as well. We can manage them and protect them from the periphery, because things can be done outside their virtual machines, or their cloud instances.

Third, running open source on the cloud combines the advantages of both open source and the cloud. Not everything the developer seeks may be available in open source, or maybe best of breed is offered in something that is not open sourced. By using both cloud and open source, developers don’t have to restrict themselves to what is within the open source software. They can leverage the best of open source with some cloud services that open source does not provide yet. We have plenty of those, by the way.

These are three reasons why I think running open source on the cloud matters.

Linux.com: What are some of the problems you see in scaling open source on the cloud?

Hong: It’s not that there is a direct problem with scaling the adoption of open source on the cloud. We see people using open source and creating applications comfortably on the cloud. We see pretty good growth of open source options on the cloud. But certainly, I think there are a lot of things we can do to help developers to better leverage open source on the cloud. So, I wouldn’t call it a problem but I would say there are things that we can do to unlock the advantages of open source on the cloud.

The first thing is to make open source more manageable. A lot of the things we talked about previously require integrations between open source and the cloud to deliver that increased manageability. Essentially, we want developers to use open source as managed services on the cloud.

Why is that? Well, if they just repeat what they are already doing and simply put their software, including the open source parts, on the cloud, they’ll probably discover there’s not much difference in running their applications in an on-premises environment or on the cloud. A lot of people doing this kind of application migration essentially mirror the on-premises environment in a cloud environment, but that basically means they didn’t really leverage the advantages of the cloud.

We want to educate developers on how to properly architect applications on the cloud so that they can capture all the benefits.

Linux.com: How does embracing DevOps make a positive difference in scaling properly?

Hong: The key difference between on-premises and cloud environments is that in an on-premises environment, the developer has a fixed set of iron boxes and servers and wants to put the application pieces into those boxes. Of course, private cloud solutions like VMware or Docker make things a little bit easier, but developers still have a fixed physical infrastructure. Basically, what developers do is follow a fixed deployment model.

Developers have to think: OK, this application requires, let’s see, how many QPS? How many servers do I need to provision? Further, they think the deployment through and decide the type of servers they want to run this application on, with customizations for memory size, faster disks, or faster CPUs. That’s the way they do it, and they buy a set of boxes for one application and another set of boxes for other applications, and so on.

On the cloud, it’s different because there are “unlimited resources” underneath, which means you can get any combination of server specs. If you want high performance, high memory, or high-performing disks, you can get that. And you get only the things you want, with an API call, so there’s no depreciating physical infrastructure sitting idle between what’s provisioned and what’s actually running on top of it. And we provide the pieces to do this. For example, there’s a feature called an elastic scaler that can monitor the load on the backend, decide when you need to acquire another server instance for the application, and put a load balancer in front to hide those little details.
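
As a rough illustration of the control loop such an elastic scaler runs, here is a minimal Python sketch; the `cloud` client, its method names, and the QPS thresholds are hypothetical stand-ins, not Alibaba Cloud’s actual SDK or defaults.

```python
import time

# Illustrative autoscaling loop: watch backend load, add or remove instances,
# and let a load balancer hide the changes from the application.
# All cloud.* calls below are hypothetical placeholders.

SCALE_UP_QPS = 800     # add an instance when average QPS per server exceeds this
SCALE_DOWN_QPS = 200   # remove one when it drops below this
MIN_INSTANCES = 2

def autoscale(cloud, group):
    while True:
        instances = cloud.list_instances(group)
        qps_per_server = cloud.total_qps(group) / max(len(instances), 1)

        if qps_per_server > SCALE_UP_QPS:
            new = cloud.create_instance(group)         # an API call, not a hardware purchase
            cloud.load_balancer(group).register(new)   # traffic shifts automatically
        elif qps_per_server < SCALE_DOWN_QPS and len(instances) > MIN_INSTANCES:
            victim = instances[-1]
            cloud.load_balancer(group).deregister(victim)
            cloud.delete_instance(victim)

        time.sleep(60)  # re-evaluate once a minute
```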

We now have what’s called serverless computing in the industry. With that, you don’t have to put this process in that box; you don’t have to care where all the processing and storage happens. That’s why it’s called serverless. Open source also provides some of this, with projects like HBase and Cassandra, so you don’t really know, and don’t really care, where a piece of data is stored or where the application’s processing is happening. So you can see that by leveraging both open source and cloud services, a developer’s work becomes much easier and faster with this multitude of options.

Also, on the cloud we have resource orchestration. You can choose resources, label them, and with that spin up a testing version of your services directly. This is also sometimes called agility. So you can test more easily at full scale, rather than with mocks.

All of these capabilities and options bring a different mentality when you write applications targeted at the cloud versus when you write applications for an on-premises environment. Developers who take advantage of them can save a lot of hassle in reasoning about the scalability of their components or deciding how many resources they need, because they don’t have to worry about it.

The application can simply scale along with the workload.

Linux.com: Any final thoughts?

Hong: I hope to see many of the people reading this at LinuxCon China. We are working hard every day to engage developers, provide them with new tools, and build services they tell us they want and features that we discover by listening to attendees at conferences like this one. See you there!

This article was sponsored by Alibaba Cloud. Alibaba Cloud, Alibaba Group’s cloud computing arm, develops highly scalable platforms for cloud computing and data management. It provides a comprehensive suite of cloud computing services to support participants in Alibaba Group’s online and mobile commerce ecosystem, including sellers and other third-party customers and businesses.

As Open Source and Cloud Converge, Red Hat Expands Partnerships and Training

As open source and cloud computing converge, Red Hat is ramping up the scope of its cloud and DevOps initiatives, including building out its training offerings. If you still think of the company as primarily focused on enterprise Linux, think again. Through partnerships, such as its work with IBM, and acquisitions, such as its intent to purchase Codenvy, the cloud represents a particularly promising frontier for Red Hat. Meanwhile, the company is calling out skills gaps in the DevOps arena.

Betting on the Cloud and Container Future

IBM and Red Hat have been deepening their partnership, helping enterprises integrate Red Hat OpenStack and Ceph with IBM Private Cloud. At IBM’s recent InterConnect conference in Las Vegas, IBM executives said the partnership means that Red Hat customers will be able to extend their Red Hat-based environments into IBM’s public cloud. That, in turn, enables many of them to run the same management and software tools they have on premises while taking advantage of Red Hat’s open source platforms.

It’s worth noting that Red Hat has integrated its open tools with most of the major public cloud platforms now. Its tools are already offered for AWS, Microsoft Azure and Google’s cloud.

Meanwhile, Red Hat has announced its intent to acquire San Francisco-based startup Codenvy, which will give developers options for building out cloud-based integrated development environments. Codenvy is built on the open source project Eclipse Che, which offers a cloud-based integrated development environment (IDE). The openshift.io cloud-based container development service from Red Hat already integrates Codenvy’s Eclipse Che implementation.

In essence, Codenvy has DevOps software that can streamline coding and collaboration environments. According to Red Hat: “[Codenvy’s] workspace approach makes working with containers easier for developers. It removes the need to set up local VMs and Docker instances, enabling developers to create multi-container development environments without ever typing Docker commands or editing Kubernetes files. This is one of the biggest pain points we hear from customers and we think that this has huge potential for simplifying the developer experience.”

The Bottom Line for the IT and DevOps Community

Recently, several executives from Red Hat participated in a panel discussion focused on skills gaps found in the IT industry. They emphasized that skills gaps are particularly acute in the areas of Big Data, DevOps, containers, microservices, and cloud computing.

With that in mind, Red Hat is expanding its training offerings. The company has partnered with universities to focus on open source-centric training, including Boston University, Rensselaer Polytechnic Institute, Duke University, and the University of Colorado at Boulder. Students at these institutions get the opportunity to work with open source tools and platforms.

In addition, Red Hat offers a number of training and certification options. The company continues to be very focused on OpenStack and has certification options that are worth considering. The company has announced a cloud management certification for Red Hat Enterprise Linux OpenStack Platform as part of the Red Hat OpenStack Cloud Infrastructure Partner Network. (The Linux Foundation also offers an OpenStack Administration Fundamentals course.)

Red Hat also offers educational options for microservices, working with middleware and more. It has announced five new training and certification offerings focused on improving open source and DevOps skills, as follows:

  • Developing Containerized Applications (course and exam);

  • OpenShift Enterprise Administration (course and exam);

  • Cloud Automation with Ansible (course and exam);

  • Managing Docker Containers with RHEL Atomic Host (course and exam); and

  • Configuration Management with Puppet (course and exam).

Ken Goetz, vice president of training at Red Hat, said: “DevOps isn’t a product but rather a culture and process. There are certain technologies and skills someone working in a DevOps environment should have. Our goal with this new RHCA concentration is to offer a way for employers to validate these critical open source skills, and in the process, further enable enterprises to accelerate application delivery.”

“Today, it is almost impossible to name a major player in IT that has not embraced open source,” Red Hat CEO Jim Whitehurst noted in a LinkedIn post. “Open source was initially adopted for low cost and lack of vendor lock-in, but customers have found that it also results in better innovation and more flexibility. Now it is pervasive, and it is challenging proprietary incumbents across technology categories.”

Are you interested in how organizations are bootstrapping their own open source programs internally? You can learn more in the Fundamentals of Professional Open Source Management training course from The Linux Foundation. Download a sample chapter now!

The Evolution of Scalable Microservices

In this article, we will look at microservices not as a tool to scale the organization, development, and release process (even though that is one of the main reasons for adopting microservices), but from an architecture and design perspective, and put them in their true context: distributed systems. In particular, we will discuss how to leverage Events-first Domain Driven Design and Reactive principles to build scalable microservices, working our way through the evolution of a scalable microservices-based system.

Don’t build microliths

Let’s say that an organization wants to move away from the monolith and adopt a microservices-based architecture. Unfortunately, what many companies end up with is an architecture of so-called microliths, as illustrated in the full article.

Read more at O’Reilly

Serious Privilege Escalation Bug in Unix OSes Imperils Servers Everywhere

“Stack Clash” poses threat to Linux, FreeBSD, OpenBSD, and other OSes.

A raft of Unix-based operating systems—including Linux, OpenBSD, and FreeBSD—contain flaws that let attackers elevate low-level access on a vulnerable computer to unfettered root. Security experts are advising administrators to install patches or take other protective actions as soon as possible.

Stack Clash, as the vulnerability is being called, is most likely to be chained with other vulnerabilities to allow them to execute malicious code more effectively, researchers from Qualys, the security firm that discovered the bugs, said in a blog post published Monday. Such local privilege escalation vulnerabilities can also pose a serious threat to server hosting providers, because one customer can exploit the flaw to gain control over other customers’ processes running on the same server. Qualys said it’s also possible that Stack Clash could be exploited in a way that allows remote code execution directly.

Read more at ArsTechnica

What Is IT Culture? Today’s Leaders Need to Know

“Culture” is a pretty ambiguous word. Sure, reams of social science research explore exactly what “culture” is, but to the average Joe and Josephine the word really means something different than it does to academics. In most scenarios, “culture” seems to map more closely to something like “the set of social norms and expectations in a group of people.” By extension, then, an “IT culture” is simply “the set of social norms and expectations pertinent to a group of people working in an IT organization.”

I suspect most people see themselves as somewhat passive contributors to this thing called “culture.” Sure, we know we can all contribute to cultural change, but I don’t think most people actually feel particularly empowered to make this kind of meaningful change. On top of that, we can also observe significant changes in cultural norms that depend on variables like time and geography. 

Read more at OpenSource.com

Hello Whale: Getting Started with Docker & Flask

When it comes to learning, I tend to retain info best by doing it myself (and failing many times in the process), and then writing a blog about it. So, surprise: I decided to create a blog explaining how you can get a Flask app up and running with Docker! Doing this on my own helped connect the dots when it came to Docker, so I hope it helps you as well. 

You can follow along with my repo here:

https://github.com/ChloeCodesThings/chloe_flask_docker_demo

First, I created a simple Flask application. I started by making a parent directory and naming it chloes_flask_demo.
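
For reference, a minimal Flask application of the kind the post describes might look like the sketch below; the file name, route, and message are illustrative assumptions rather than the repo’s actual contents.

```python
# app.py -- a minimal Flask app (illustrative; see the linked repo for the author's code).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_whale():
    return "Hello Whale!"

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from outside a Docker container.
    app.run(host="0.0.0.0", port=5000)
```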

Read more at Codefresh.io