I run many operating systems every day, from macOS, to Windows 7 and 10, to more Linux desktop distributions than you can shake a stick at. And, once more, as a power-user’s power user, I’ve found the latest version of Linux Mint to be the best of the best.
If you’ve never installed Mint before, you can download its ISO files from the Mint Downloads page. There are still both 64-bit and 32-bit versions for the Cinnamon desktop, but unless you’re running a really old system, just download the 64-bit version. Then burn the ISO image to a DVD using a tool such as ImgBurn. Or, you can put it on a bootable USB stick with a program like Rufus.
Then, boot your computer using the DVD or stick and make sure Mint works with your computer. If it does — and I’ve never met a PC it wouldn’t work on — you can then install it.
Today’s open source communities include people from all around the world. What challenges can you expect when establishing an online community, and how can you help overcome them?
People contributing to an open source community share a commitment to the software they’re helping to develop. In the past, people communicated by meeting in person at a set place and time, or through letters or phone calls. Today, technology has fostered the growth of online communities—people can simply pop into a chat room or messaging channel and start working together. You might work with someone in Morocco in the morning, for example, and with someone in Hawaii that evening.
Global communities: 3 common challenges
Anyone who’s ever worked in a group knows that differences of opinion can be difficult to overcome. In online communities, language barriers, different time zones, and cultural differences can also create challenges.
Kubeflow brings composable, easier-to-use stacks, with more control and portability, to Kubernetes deployments for all ML workloads, not just TensorFlow.
Introducing Kubeflow, the new project to make machine learning on Kubernetes easy, portable, and scalable. Kubeflow should be able to run in any environment where Kubernetes runs. Rather than recreating services that already exist, Kubeflow distinguishes itself by making it easy to spin up best-of-breed solutions for Kubernetes users.
Why switch to Kubeflow?
Kubeflow is intended to make ML easier for Kubernetes users. How? By letting the system take care of the details (within reason) and by supporting the kind of tooling ML practitioners want and need.
Bhumika Goyal, age 22, of India, has had more than 340 patches accepted into the Linux kernel – an accomplishment that contributed in no small part to her receiving one of two Linux Kernel Guru scholarships from The Linux Foundation.
Goyal served as an Outreachy intern earlier this year, focused on the Linux kernel, where she worked on reducing the kernel’s attack surface by making kernel structures read-only. Since her internship, Goyal has continued this work with the support of The Linux Foundation’s Core Infrastructure Initiative.
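To make the technique concrete, here is a minimal, hypothetical sketch of this class of hardening patch (illustrative only, not one of Goyal’s actual changes): adding const to a function-pointer structure moves it into the kernel’s read-only data section, so an attacker who gains a kernel write primitive can no longer redirect its pointers.

/* Hypothetical example: const-ifying a kernel function-pointer table.
 * Marking the structure 'const' places it in the read-only .rodata
 * section, so its pointers cannot be overwritten at runtime. */
#include <linux/fs.h>
#include <linux/module.h>

static ssize_t demo_read(struct file *file, char __user *buf,
                         size_t count, loff_t *ppos)
{
        return 0; /* nothing to read in this illustration */
}

/* 'const' is the one-word change at the heart of these patches. */
static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
};

In real patches, the harder part is verifying that the kernel never legitimately writes to the structure after initialization; only then is the const annotation safe to apply.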
“Having contributed to the kernel for a year now, I have developed a keen interest in learning the internals of the kernel,’’ Goyal explained in her scholarship application. “This training will definitely help me to develop my skills so that I can contribute to the kernel community more effectively.” Her goal is to become a full-time kernel engineer after completing this current project.
Mohammed Al-Samman
Mohammed Al-Samman, 25, of Egypt, is the other recipient of a Linux Kernel Guru scholarship from The Linux Foundation. Al-Samman has spent the past year working on the Linux kernel, doing analysis, debugging, and compiling. He also built an open source Linux firewall, as well as a kernel module that monitors the power supply’s status (AC or battery) through the Linux kernel’s notifier mechanism.
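As a rough sketch of how such a module can work (the details of Al-Samman’s module aren’t described here, so the names and specifics below are assumptions for illustration), a small kernel module can register with the power-supply notifier chain and log whenever a supply changes state, such as switching between AC and battery:

/* Hypothetical sketch: log power-supply property changes via the
 * kernel's notifier chain. */
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/power_supply.h>

static int psy_event(struct notifier_block *nb, unsigned long event,
                     void *data)
{
        struct power_supply *psy = data;

        /* Fired when a supply's properties change, e.g. AC plugged in. */
        if (event == PSY_EVENT_PROP_CHANGED)
                pr_info("power supply '%s' changed state\n",
                        psy->desc->name);
        return NOTIFY_OK;
}

static struct notifier_block psy_nb = {
        .notifier_call = psy_event,
};

static int __init psy_mon_init(void)
{
        /* Subscribe to power-supply events. */
        return power_supply_reg_notifier(&psy_nb);
}

static void __exit psy_mon_exit(void)
{
        power_supply_unreg_notifier(&psy_nb);
}

module_init(psy_mon_init);
module_exit(psy_mon_exit);
MODULE_LICENSE("GPL");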
Al-Samman is studying the Linux kernel network subsystem. “I hope to start a Linux kernel community in my country and secure a job as Linux kernel developer and contribute to the community,’’ he said.
SysAdmin Super Star
Omar Aaziz, 39, of the United States, is one of two recipients of the Foundation’s SysAdmin Super Star scholarship. Aaziz, who is originally from Iraq, now administers the computer science clusters at New Mexico State University. He manages the Linux firewall to prevent and detect cyberattacks and to transfer data safely. He also administers a 180TB supercomputer storage system and performs backups.
Aaziz has five years of experience with high-performance computing (HPC) and is also pursuing a Ph.D. “As a research assistant, I learned how to build a complete HPC cluster from scratch, install CentOS operating systems, versions 6.5 and 7,’’ he wrote in his scholarship application. “I have many duties, such as helping the users to install different open-source software, creating job scripts, and more. I managed the clusters [behind a] Linux firewall to prevent and detect cyberattacks and transfer data safely.”
Aaziz also had two internships at Sandia National Laboratories, during which time he served as administrator of three supercomputers. His extensive admin work required him to use Linux and open source software heavily. “I used CentOS, Ubuntu, and I built my own Linux version with customized security modules to prevent security breaches.”
His goal is to become a high-performance computing engineer, helping develop the next generation of supercomputers.
Leonardo Goncalves da Silva
Leonardo Goncalves da Silva, 41, Brazil, who also received a SysAdmin Super Star scholarship, has worked with Linux for 20 years and recently shifted his career toward cloud development based on a Linux and Kubernetes framework. He currently contributes to several open source projects, and is planning to start contributing to Hyperledger, The Linux Foundation’s open source project focused on blockchains and other tools.
“The various systems I’ve designed were developed using open source tools and frameworks with great results for my employers in terms of cost reduction and productivity,’’ he wrote in his scholarship application. His career shift, he explained, “is helping my customers to create great solutions with security and agility.” Da Silva plans to use the scholarship to take the Kubernetes Fundamentals course to provide better service to his clients.
The annual Linux Foundation Training (LiFT) Scholarships provide advanced open source training to existing and aspiring IT professionals from all over the world. Twenty-seven recipients received scholarships this year – the highest number ever awarded by the Foundation. Scholarship recipients receive a Linux Foundation training course and certification exam at no cost.
Learn how Orange leverages open source software via OPNFV to solve several important issues along the way.
Over the past few years, the entire networking industry has begun to transform as network demands rapidly increase. This is true both for the technology itself and for the way in which carriers — like my employer Orange, as well as vendors and other service providers — adapt and evolve their approach to meeting these demands. As a result, we’re becoming more and more agile and adept at virtualizing our evolving network to keep up with growing demands and a shifting ecosystem.
At Orange, we are laser-focused on investing in future technologies and plan to spend over $16 billion between 2015 and 2018 on new networks (including 4G, 4G+, and fixed fiber). A key component of these investments — along with access network investments — is the advancement of software-defined networking (SDN) and network functions virtualization (NFV) technologies as a way to create new revenue streams, improve agility, and reduce costs via a program we call On-Demand Networks.
Just because we’re using containers doesn’t mean that we “do DevOps.” Docker is not some kind of fairy dust that you can sprinkle around your code and applications to deploy faster. It is only a tool, albeit a very powerful one. And like every tool, it can be misused. Guess what happens when we misuse a power tool? Power fuck-ups. Let’s talk about it.
I’m writing this because I have seen a few people expressing very deep frustrations about Docker, and I would like to extend a hand to show them that instead of being a giant pain in the neck, Docker can help them to work better, and (if that’s their goal) be an advantage rather than a burden in their journey (or their “digital transformation” if we want to speak fancy.)
Docker: hurting or helping the DevOps cause?
I recently attended a talk where the speaker tried to make the point that Docker was anti-DevOps, for a number of reasons (which I will list below). However, each of these reasons was (in my opinion) not exactly a problem with Docker, but rather with the way it was used (or sometimes, abused). Furthermore, all these reasons were, in fact, not specific to Docker, but generic to cloud deployment, immutable infrastructure, and other things that are generally touted as good things in the DevOps movement, along with cultural choices like cross-team collaboration. The speaker confirmed this when I asked at the end of the talk, “Did you identify any issue that was specific to Docker and containers and not to cloud in general?” There was none.
What are these “Docker problems”? Let’s look at a few of them.
HTML, Cascading Style Sheets (CSS), and JavaScript have experienced massive growth and evolution over the past two decades, which should come as no surprise given the ever-expanding role of the internet in our lives. JavaScript development has come a long way since the mid-1990s and IBM’s famous commercial depicting business’s early recognition of the internet’s significance. That commercial forever changed the role of the web developer. Before the business invasion, web developers were more artistic, but the influence of business and industry changed all of that.
More than two decades have passed since the first web pages produced with JavaScript were developed, and things have improved immensely. Today, IDEs are well structured to validate your code, and self-contained environments help with testing and debugging web frontend logic. Now, learning JavaScript goes well beyond simply studying the language’s syntax.
Managing an open source project isn’t as easy as it sounds. A successful open source project is more than just making the source code available. In this article, Wayne Beaton and Gunnar Wagenknecht explain how you can make your open source project a runaway success.
Running an open source project is easy. All you have to do is make your source code available and you’re open source, right? Well, maybe. Ultimately, whether or not an open source project is successful depends on your definition of success. Regardless of your definition, creating an open source project can be a lot of work. If you have goals regarding adoption, for example, then you need to be prepared to invest. While open source software is “free as in beer,” it’s not really free: time and energy are valuable resources, and they need to be invested in the project.
With its open source Fn project, Oracle is looking to make a splash in serverless computing. The functions-based, open source serverless computing platform requires Docker and initially supports Java.
Fn is a container-native serverless platform that can be run on-premises or in the cloud; it requires the use of Docker containers. Fn developers will be able to write functions in Java initially, with Go, Ruby, Python, PHP, and Node.js support planned for later. Applications can be built and run in the cloud without users having to provision, scale, or manage servers.
This week in open source news, CNCF made several announcements at CloudNativeCon+KubeCon including 1.0 releases from the containerd, Jaeger, CoreDNS and Fluentd projects.
1) CNCF releases a number of exciting updates at CloudNativeCon+KubeCon
2) OpenStack Foundation launches Kata Containers, which aims to combine the speed and flexibility of containers with the security of virtual machines. The project is independent, but will be managed by OpenStack much like the way The Linux Foundation hosts its projects.
3) “Linus Torvalds last week rushed a patch into the Linux kernel, after researchers discovered the patch for 2016’s Dirty COW bug had a bug of its own.”