
Suspicious Event Hijacks Amazon Traffic for 2 Hours, Steals Cryptocurrency

Amazon lost control of a small number of its cloud services IP addresses for two hours on Tuesday morning when hackers exploited a known Internet-protocol weakness that let them redirect traffic to rogue destinations. By subverting Amazon’s domain-resolution service, the attackers masqueraded as the cryptocurrency website MyEtherWallet.com and stole about $150,000 in digital coins from unwitting end users. They may have targeted other Amazon customers as well.

The incident, which started around 6 AM California time, hijacked roughly 1,300 IP addresses, Oracle-owned Internet Intelligence said on Twitter. … The 1,300 addresses belonged to Route 53, Amazon’s domain name system service.

The highly suspicious event is the latest to involve Border Gateway Protocol, the technical specification that network operators use to exchange large chunks of Internet traffic. Despite its crucial function in directing wholesale amounts of data, BGP still largely relies on the Internet equivalent of word of mouth from participants who are presumed to be trustworthy. Organizations such as Amazon whose traffic is hijacked currently have no effective technical means to prevent such attacks.

Read more at Ars Technica

What Do High School Students Know or Understand about Open Source Software?

Only 20 years after the label “Open Source” was coined, the entire tech ecosystem has embraced its values of sharing, collaboration, and freedom. But although Open Source Software is pervasive in our everyday lives, does everyone, especially the younger generation, know how to leverage it?

Last summer, over the course of three weeks, high school students with no prior experience in Computer Science (CS) joined Holberton School’s first Immersion Coding Camp to learn how to code and build their own websites.

“Best things that the tech industry could give us”  – Julia (senior at Bishop O’Dowd High School)

Students who had never heard of open source before the camp got excited about the concept and ended up completely hooked on Git, Atom, and Bootstrap.

When asked to talk about their experience and their understanding of OSS, students had a lot to say! Here is a summary of their feedback.

“Over the course of 3 weeks, not only have the students experienced the school approach — project-based and peer-learning — but they also have had the opportunity to be exposed to the tech industry. They visited tech companies such as Scality, Salesforce, Twitch and the VC firm Trinity Ventures. They made connections with engineers, designers, investors and learned how to use open-source software that professionals use on a daily basis.”

“A great way to learn is by looking at other people’s work and study what they did” – Mickael

All the students realized that open source is a great way to explore or learn more about technology on their own, especially when there are no computer science classes at school. As Mickael (sophomore at Redwood High School) says, “A great way to learn is by looking at other people’s work and study what they did”. GitHub is a considerable source of information and knowledge; “there’s always a ‘Read me’ file” to help you understand projects.

“It’s really cool that your work can help so many other people because you might have created what people had been looking for” shares Tyler (junior at KIPP King Collegiate).

Students embraced the way OSS enables collaboration and mutual support: people can share valuable information, and anyone can contribute to a project or a product, which “can even become better than what the person made originally”. When coding their navigation bar, for example, students used the code already available in Bootstrap instead of writing it from scratch. This allowed them to express their creativity without being limited by their coding expertise.

“It can even become better than what the person made originally” – Tyler

Another reason students adopted OSS so easily is that they realized the potential and opportunity it conveys. When using OSS, students feel they are gaining valuable skills and being given an equal opportunity to succeed: “Anybody can get into the tech industry,” says Jonathan (senior at Castro Valley High School). Learning to use the same software that big tech companies use best equips you for the real world once you are looking for a job, regardless of your social background!

“Anybody can make their own website and create animations for free and nobody’s held back” – Jonathan

As they experienced the benefits of sharing and collaboration (which are not always encouraged in the classroom), students really embraced OSS and its values. They realized how much ground they had covered over the course of the summer camp, and they are looking forward to discovering more software to help them learn about all aspects of coding and programming.

With no prior knowledge of CS, by the end of camp the high school students were able to build their own websites from scratch, implement sophisticated layouts, and create animations. They learned a lot, expressed their creativity, and increased their confidence in their technical abilities. This coding camp would not have been so rewarding for them without OSS.


Holberton School Summer Coding Camp 2018 is starting on July 9th, more info on our website!

How to Synchronize Time with NTP in Linux

The Network Time Protocol (NTP) is a protocol used to synchronize a computer’s system clock automatically over a network. The machine can then keep its system clock on Coordinated Universal Time (UTC) rather than local time.

The most common way to sync system time over a network on Linux desktops or servers is to run the ntpdate command, which can set your system time from an NTP time server. In this case, the ntpd daemon must be stopped on the machine where the ntpdate command is issued.

In most Linux systems, the ntpdate command is not installed by default. To install it, execute the appropriate command below:

$ sudo apt-get install ntpdate    [On Debian/Ubuntu]
$ sudo yum  install ntpdate       [On CentOS/RHEL]
$ sudo dnf install ntpdate        [On Fedora 22+]

Read more at Tecmint

Take the Open Source Job Survey from Dice and The Linux Foundation

Interest in hiring open source professionals is on the rise, with more companies than ever looking for full-time hires with open source skills and experience. To gather more information about the changing landscape and opportunities for developers, administrators, managers, and other open source professionals, Dice and The Linux Foundation have partnered to produce two open source jobs surveys — designed specifically for hiring managers and industry professionals.

Take the Open Source Professionals Survey

Take the Hiring Managers Survey

Please take a few minutes to complete the short survey and share it with your friends and colleagues. Your participation can help the industry better understand the state of open source jobs and the nature of recruiting and retaining open source talent. This is your chance to let companies, HR and hiring managers and industry organizations know what motivates you as an open source professional.

The survey results will be compiled into the 2018 Open Source Jobs Report. This annual report from Dice and The Linux Foundation presents the current state of the job market for open source professionals and examines what hiring managers are looking for when recruiting open source talent. You can download the 2017 Open Source Jobs Report for free.   

As a token of our appreciation, $2000 in Amazon gift cards will be awarded to survey respondents selected at random after the closing date. Complete the survey for a chance to win one of four $500 gift cards.

There’s a Server in Every Serverless Platform

Serverless computing or Function as a Service (FaaS) is a new buzzword created by an industry that loves to coin new terms as market dynamics change and technologies evolve. But what exactly does it mean? What is serverless computing?

Before getting into the definition, let’s take a brief history lesson from Sirish Raghuram, CEO and co-founder of Platform9,  to understand the evolution of serverless computing.

“In the 90s, we used to build applications and run them on hardware. Then came virtual machines that allowed users to run multiple applications on the same hardware. But you were still running the full-fledged OS for each application. The arrival of containers got rid of OS duplication and process level isolation which made it lightweight and agile,” said Raghuram.

Serverless, specifically Function as a Service, takes it to the next level, as users are now able to code functions and run them without dealing with how they are built, shipped, and run. There is no complexity of underlying machinery needed to run those functions. No need to worry about spinning up containers using Kubernetes. Everything is hidden behind the scenes.

“That’s what is driving a lot of interest in function as a service,” said Raghuram.

What exactly is serverless?

There is no single definition of the term, but to build some consensus around the idea, the Cloud Native Computing Foundation (CNCF) Serverless Working Group wrote a white paper to define serverless computing.

According to the white paper, “Serverless computing refers to the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled, and billed in response to the exact demand needed at the moment.”

Ken Owens, a member of the Technical Oversight Committee at CNCF, said that the primary goal of serverless computing is to help users build and run their applications without having to worry about the cost and complexity of servers in terms of provisioning, management, and scaling.

“Serverless is a natural evolution of cloud-native computing. The CNCF is advancing serverless adoption through collaboration and community-driven initiatives that will enable interoperability,” said Chris Aniszczyk, COO, CNCF.

It’s not without servers

First things first, don’t get fooled by the term “serverless.” There are still servers in serverless computing. Remember what Raghuram said: all the machinery is hidden; it’s not gone.

The clear benefit here is that developers need not concern themselves with tasks that don’t add any value to their deliverables. Instead of worrying about managing the function, they can dedicate their time to adding features and building apps that add business value. Time is money, and every minute saved on management goes toward innovation. Developers don’t have to worry about scaling for peaks and valleys; it’s automated. And because cloud providers charge only for the duration that functions run, developers cut costs by not having to pay for blinking lights.

But… someone still has to do the work behind the scenes. There are still servers offering FaaS platforms.

In the case of public cloud offerings like Google Cloud Platform, AWS, and Microsoft Azure, these companies manage the servers and charge customers for running those functions. In the case of private cloud or datacenters, where developers don’t have to worry about provisioning or interacting with such servers, there are other teams who do.

The CNCF white paper identifies two groups of professionals that are involved in the serverless movement: developers and providers. We have already talked about developers. But, there are also providers that offer serverless platforms; they deal with all the work involved in keeping that server running.

That’s why many companies, like SUSE, refrain from using the term “serverless” and prefer the term function as a service, because they offer products that run those “serverless” servers. But what kind of functions are these? Is it the ultimate future of app delivery?

Event-driven computing

Many see serverless computing as an umbrella that offers FaaS among many other potential services. According to CNCF, FaaS provides event-driven computing where functions are triggered by events or HTTP requests. “Developers run and manage application code with functions that are triggered by events or HTTP requests. Developers deploy small units of code to the FaaS, which are executed as needed as discrete actions, scaling without the need to manage servers or any other underlying infrastructure,” said the white paper.

Does that mean FaaS is the silver bullet that solves all problems for developing and deploying applications? Not really. At least not at the moment. FaaS does solve problems in several use cases and its scope is expanding. A good use case of FaaS could be the functions that an application needs to run when an event takes place.

Let’s take an example: a user takes a picture on a phone and uploads it to the cloud. Many things happen when the picture is uploaded: it’s scanned (EXIF data is read), a thumbnail is created, the content of the image is analyzed using deep learning/machine learning, and the image’s information is stored in the database. That one event of uploading the picture triggers all those functions, and the functions die once the event is over. That’s what FaaS does: it runs code quickly to perform all those tasks and then disappears.
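To make that fan-out concrete, here is a minimal shell sketch, not a real FaaS runtime: the handler names and the file photo.jpg are invented for illustration, and each handler only prints what a real function would do when the upload event fires.

```shell
# Stand-ins for the short-lived functions a FaaS platform would run
# in response to the upload event.
extract_exif()   { echo "exif($1)"; }
make_thumbnail() { echo "thumb($1)"; }
classify_image() { echo "labels($1)"; }

# One "upload" event fans out to every registered handler; a real
# platform would invoke (and tear down) each function independently.
on_upload() {
  for fn in extract_exif make_thumbnail classify_image; do
    "$fn" "$1"
  done
}

on_upload photo.jpg
```

Each handler runs only because the event fired, mirroring the trigger-then-disappear lifecycle described above.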

That’s just one example. Another could be an IoT device where a motion sensor triggers an event that instructs the camera to start recording and sends the clip to the designated contact, or a thermostat that triggers the fan when the sensor detects a change in temperature. These are some of the many use cases where function as a service makes more sense than the traditional approach. This also means that not all applications can be run as functions as a service, at least at the moment, though that will change as more organizations embrace the serverless platform.

According to CNCF, serverless computing should be considered if you have these kinds of workloads:

  • Asynchronous, concurrent, easy to parallelize into independent units of work

  • Infrequent or has sporadic demand, with large, unpredictable variance in scaling requirements

  • Stateless, ephemeral, without a major need for instantaneous cold start time

  • Highly dynamic in terms of changing business requirements that drive a need for accelerated developer velocity

Why should you care?

Serverless is a very new technology and paradigm. Just as VMs and containers transformed the app development and delivery models, FaaS may also bring dramatic changes. We are still in the early days of serverless computing; as the market evolves, consensus forms, and new technologies emerge, FaaS may grow beyond the workloads and use cases mentioned here.

What is becoming quite clear is that companies that are embarking on their cloud-native journey must have serverless computing as part of their strategy. The only way to stay ahead of competitors is by keeping up with the latest technologies and trends.

It’s about time to put serverless into servers.

For more information, check out the CNCF Working Group’s serverless whitepaper here. And, you can learn more at KubeCon + CloudNativeCon Europe, coming up May 2-4 in Copenhagen, Denmark.

OpenTracing: Distributed Tracing’s Emerging Industry Standard

What was traditionally known as just Monitoring has clearly been going through a renaissance over the last few years. The industry as a whole is finally moving away from having Monitoring and Logging silos – something we’ve been doing and “preaching” for years – and the term Observability has emerged as the new moniker for everything that encompasses any form of infrastructure and application monitoring. Microservices have been around for over a decade under one name or another. Now that they are often deployed in separate containers, it became obvious that we need a way to trace transactions through the various microservice layers, from the client all the way down to queues, storage, calls to external services, etc. This created new interest in Transaction Tracing that, although not new, has now re-emerged as the third pillar of observability….

In a distributed system, a trace encapsulates the transaction’s state as it propagates through the system. During the journey of the transaction, it can create one or more spans. A span represents a single unit of work inside a transaction, for example, an RPC client/server call, sending a query to the database server, or publishing a message to the message bus. Speaking in terms of the OpenTracing data model, a trace can also be seen as a collection of spans structured around a directed acyclic graph (DAG).
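To make the span/parent structure concrete, here is a toy shell sketch; the operation names are invented, and a real OpenTracing-compatible tracer would also record timing and context. Each line names a span and its parent, so the records together describe the DAG rooted at the client request.

```shell
# Print one span record: its operation name and its parent span.
emit_span() { echo "span=$1 parent=$2"; }

# A single transaction emitting four spans; parent links form the DAG.
trace_request() {
  emit_span client_request none
  emit_span rpc_call       client_request   # RPC client/server call
  emit_span db_query       rpc_call         # query issued while serving the RPC
  emit_span publish_msg    client_request   # message-bus publish, a sibling span
}

trace_request
```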

Read more at Sematext

How to Upgrade to Ubuntu Linux 18.04

Soon, Ubuntu 18.04, aka the Bionic Beaver, Canonical‘s next long-term support version of its popular Linux distribution, will be out. That means it’s about time to consider how to upgrade to the latest and greatest Ubuntu Linux.

First, keep in mind that this Ubuntu will not look or feel like the last few versions. That’s because Ubuntu is moving back to GNOME for its default desktop from Unity. The difference isn’t that big, but if you’re already comfortable with what you’re running, you may want to wait a while before switching over.

Read more at ZDNet

Building A Custom Brigade Gateway in 5 Minutes

Brigade gateways trigger new events in the Brigade system. While the included GitHub and container registry hooks are useful, the Brigade system is designed to make it easy for you to build your own. In this post, I show the quickest way to create a Brigade gateway using Node.js. How quick? We should be able to have it running in about five minutes.

Prerequisites

You’ll need Brigade installed and configured, and you will also need Draft installed and configured. Make sure Draft is pointed to the same cluster where Brigade is installed.

If you are planning to build a more complex gateway, you might also want Node.js installed locally.

Getting Started

Draft provides a way of bootstrapping a new application with a starter pack. Starter packs can contain things like Helm charts and Dockerfiles. But they can also include code. 

Read more at TechnoSophos

An Introduction to the GNU Core Utilities

Two sets of utilities—the GNU Core Utilities and util-linux—comprise many of the Linux system administrator’s most basic and regularly used tools. Their basic functions allow sysadmins to perform many of the tasks required to administer a Linux computer, including management and manipulation of text files, directories, data streams, storage media, process controls, filesystems, and much more….

These tools are indispensable because, without them, it is impossible to accomplish any useful work on a Unix or Linux computer. Given their importance, let’s examine them…

You can learn about all the individual programs that comprise the GNU Utilities by entering the command info coreutils at a terminal command line. The following list of the core utilities is part of that info page. The utilities are grouped by function to make specific ones easier to find; in the terminal, highlight the group you want more information on and press the Enter key.
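As a small taste of the text-stream manipulation those utilities handle, here is a sketch composing a few of them; sort, uniq, and printf are all part of the coreutils, and the fruit lines are just sample input.

```shell
# Sort the input lines, collapse duplicates with a count, then rank by count.
printf 'pear\napple\npear\n' | sort | uniq -c | sort -rn
```

Pipelines like this one are exactly the kind of stream processing the info page documents utility by utility.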

Read more at OpenSource.com

Manipulating Binary Data with Bash

Bash is known for admin utilities and text manipulation tools, but the venerable command shell included with most Linux systems also has some powerful commands for manipulating binary data.

One of the most versatile scripting environments available on Linux is the Bash shell. The core functionality of Bash includes many mechanisms for tasks such as string processing, mathematical computation, data I/O, and process management. When you couple Bash with the countless command-line utilities available for everything from image processing to virtual machine (VM) management, you have a very powerful scripting platform.

One thing that Bash is not generally known for is its ability to process data at the bit level; however, the Bash shell contains several powerful commands that allow you to manipulate and edit binary data. This article describes some of these binary commands and shows them at work in some practical situations.
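As a brief sketch of the idea (using printf, od, and shell arithmetic, all commonly available; the byte values are arbitrary examples):

```shell
# printf's octal escapes emit raw bytes; od dumps them back as hex.
printf '\110\111' > /tmp/bytes.bin   # writes bytes 0x48 0x49, i.e. "HI"
od -An -tx1 /tmp/bytes.bin

# Shell arithmetic handles bit-level work too: mask the high nibble, then shift.
echo $(( (0xAB & 0xF0) >> 4 ))       # prints 10 (0xA)
```

Octal escapes are used rather than \x because POSIX printf only guarantees the former.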

Read more at Linux Pro