
10 Lessons from 10 Years of Amazon

Amazon launched its Simple Storage Service (S3) about 10 years ago, followed shortly by Elastic Compute Cloud (EC2). In the past 10 years, Amazon has learned a few things about running these services. In his keynote at LinuxCon Europe, Chris Schlaeger, Director of Kernel and Operating Systems at the Amazon Development Center in Germany, shared 10 lessons from Amazon.
 
1. Build evolvable systems

The cloud is all about scale: getting compute power when you need it and getting rid of it when you don’t need it anymore. Schlaeger says that “the lesson that we learned isn’t to design for a certain scale, you always get it wrong. What you want to do instead is design your system so you can evolve it … over time without the customers or users knowing it.”

2. Expect the unexpected

Hardware has a finite lifespan, so things will fail; you can, however, design your systems to detect failures, contain them, and react to them. “Control the blast radius and raise failure as a natural occurrence of your software and hardware, all the time,” Schlaeger suggests.
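The idea of containing a blast radius can be sketched as a circuit breaker that isolates a failing dependency so its errors don’t cascade. This is a minimal illustration in Python; the class and its thresholds are hypothetical, not Amazon’s implementation:

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after repeated failures, calls to a
    dependency are cut off for a cool-down period, limiting the
    blast radius of an outage."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: dependency isolated")
            # Cool-down elapsed: allow a trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Callers that hit an open circuit fail fast instead of piling load onto an already struggling backend, which is one concrete way to treat failure as a normal occurrence.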

3. Primitives, not frameworks

Amazon doesn’t know what every customer wants to do, and it doesn’t want to try to tell customers how to do their work. However, Amazon does want to evolve quickly to follow the needs of its customers, and that agility is much easier to achieve with primitives than with frameworks.

4. Automation is key

Schlaeger points out that “if you want to scale up, you need to have some form of automation in place.” If someone can log into your servers and make changes on the fly, then you can’t track what changes have been made over time.

5. APIs are forever

APIs can be tricky because if you want to keep your customers happy, you can’t keep changing your APIs. “You need to be very, very cautious and conscious about the APIs you have and make sure you don’t change them,” Schlaeger says.

6. Know your resource usage

When Amazon first launched S3, they charged for storage space and transactions, so people quickly learned that storing and retrieving tiny thumbnail images for items on eBay was quite cheap. However, the large numbers of API calls generated a big enough load on Amazon’s servers that they had to start including call rates in the pricing model. Understanding all of your costs and building them into your prices is important.
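As a back-of-the-envelope illustration, with made-up prices (not Amazon’s actual rates), the per-request charges for tiny, heavily fetched objects can dwarf the storage charges:

```python
# Hypothetical prices, for illustration only -- not real AWS rates.
STORAGE_PRICE_PER_GB_MONTH = 0.03        # dollars per GB-month stored
REQUEST_PRICE_PER_MILLION = 0.40         # dollars per million GET requests

thumbnails = 1_000_000                   # one million 10 KB thumbnails
size_gb = thumbnails * 10_000 / 1e9      # -> 10 GB of data in total
gets_per_month = 500_000_000             # each thumbnail fetched often

storage_cost = size_gb * STORAGE_PRICE_PER_GB_MONTH               # $0.30/month
request_cost = gets_per_month / 1e6 * REQUEST_PRICE_PER_MILLION   # $200.00/month

print(f"storage: ${storage_cost:.2f}/month, requests: ${request_cost:.2f}/month")
```

Under these toy numbers, a pricing model that counted only storage would recover 30 cents while serving 200 dollars’ worth of request load, which is why call rates ended up in the pricing model.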

7. Build security in from the ground up

It is important to get security involved in the design of a system, not just its implementation. You should also do regular check-ins as your service evolves over time to make sure that it stays secure.

8. Encryption is a first class citizen

Schlaeger points out that “the best way you can prove to your customers that the data is safe from access from other parties … is to have them encrypted.” Within AWS, customers can encrypt all of their data and only the customer has access to the keys used to encrypt and decrypt the data. 
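The point that only the key holder can read the data can be illustrated with a toy one-time pad in Python. This is deliberately simplified; real services use vetted authenticated encryption, typically managed through a key-management service:

```python
import os

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR the message with a random key of equal length.
    Secure only if the key is truly random and never reused."""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"customer data"
key = os.urandom(len(message))       # only the customer holds this key
ciphertext = otp_xor(message, key)   # all the storage service ever sees

# XOR is its own inverse, so the key holder recovers the data.
assert otp_xor(ciphertext, key) == message
```

Without the key, the ciphertext reveals nothing about the message, which is exactly the guarantee a customer wants when handing data to a third party.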

9. Importance of the network

This is probably the hardest part to get right, because the network is a shared resource for everybody across all use cases. Various customers have unique and often contradictory requirements for using the network.

10. No gatekeepers

“The more open you are with your platform, … the more success you will have,” Schlaeger says. Amazon doesn’t try to limit what their customers can do beyond what they need to protect the instances or services of other customers.

For more details about each of these 10 lessons, watch the full video below.

Interested in speaking at Open Source Summit North America on September 11 – 13? Submit your proposal by May 6, 2017. Submit now>>

Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the all-access attendee registration price. Register now to save over $300!

 

Darktrace Automates Network Security Through Machine Learning

 

Darktrace co-founder Poppy Gustafsson recently predicted, at TechCrunch Disrupt London, that malicious actors will increasingly use artificial intelligence to create more sophisticated spearphishing attacks.

Criminals are just as capable of using artificial intelligence as those trying to thwart them, according to security vendor ESET’s 2017 trends report. Meanwhile, “next-gen” security marketers are throwing around buzzwords such as “machine learning” and “behavioral analysis,” making it harder for potential customers to sift through the hype.

The report also predicts the rise of “jackware,” or Internet-of-Things ransomware, such as locking a car’s software until a ransom is paid.

Read more at The New Stack

Troubleshooting Tips for the 5 Most Common Linux Issues

Although Linux installs and operates as expected for most users, inevitably some users will run into problems. For my final article in The Queue column for the year, I thought it would be interesting to summarize the most common technical Linux issues people ran into in 2016. I posted the question to LinuxQuestions.org and on social media, and I analyzed LQ posting patterns. Here are the results.

1. Wifi drivers (especially Broadcom chips)

Generally speaking, wifi drivers—and Broadcom cards in particular—continue to be one of the most problematic technical issues facing Linux. There were hundreds of posts about this topic on LQ alone in 2016, and myriad more elsewhere.

Read more at OpenSource.com

Open Source Server Simplifies HTTPS, Security Certificates

Forget expired TLS certificates; the lightweight Caddy web server handles Let’s Encrypt certificates and redirects HTTP traffic by default. For administrators seeking an easier method to turn on HTTPS for their websites, there is Caddy, an open source web server that automatically sets up security certificates and serves sites over HTTPS by default.

Built on Go 1.7.4, Caddy is a lightweight web server that supports HTTP/2 out of the box and automatically integrates with any ACME-enabled certificate authority such as Let’s Encrypt. HTTP/2 is enabled by default when the site is served over HTTPS, and administrators using Caddy will never have to deal with expired TLS certificates for their websites, as Caddy handles the process of obtaining and deploying certificates.
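A minimal Caddyfile (using the 1.x-era syntax of the Caddy release described here; the domain, path, and email address are placeholders) is all it takes to get automatic HTTPS:

```
example.com {
    root /var/www/example
    gzip
    tls admin@example.com
}
```

Running `caddy` in the directory containing this file serves the site over HTTPS, obtains and renews a Let’s Encrypt certificate registered to the given email address, and redirects plain HTTP to HTTPS.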

Read more at InfoWorld

Linux Kernel 4.8 Reaches End of Life, Users Urged to Move to Linux 4.9 Series

A few days after announcing the Linux 4.8.16 kernel update, Greg Kroah-Hartman today announced another maintenance update, which appears to be the last in this stable series.

It was bound to happen sooner or later, especially now that the Linux 4.9 kernel series has been officially declared stable and ready for deployment in production environments, so we’re sad to report that there won’t be any further updates to the Linux 4.8 kernel branch. The last point release is Linux kernel 4.8.17.

Read more at Softpedia

How to Secure MongoDB on an Ubuntu, Debian, or CentOS Linux Production Server

MongoDB ransomware has attacked more than 28,000 database servers in the last two days, and these ransom attacks are in the wild. MongoDB is a free and open-source NoSQL document database server, used by web applications to store data, often on a public-facing server. Securing MongoDB is critical: attackers are accessing insecure MongoDB instances to steal and delete data from unpatched or badly configured databases. This guide explains how to protect and secure a MongoDB NoSQL server running on Linux.
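For example, two of the most basic hardening steps, binding mongod to a private interface and requiring authentication, live in /etc/mongod.conf (the YAML config format of MongoDB 2.6 and later; paths and values here are illustrative):

```yaml
# /etc/mongod.conf -- excerpt
net:
  port: 27017
  bindIp: 127.0.0.1        # never expose an unauthenticated mongod publicly
security:
  authorization: enabled   # clients must authenticate as a created user
```

Create an administrative user before enabling authorization, then restart mongod for the settings to take effect.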

Read more at: nixCraft

VMware Joins Open-O to Pursue its Telco NFV Strategy

VMware joined the Open-O project today as a premier member. The open source project hosted by the Linux Foundation works to enable end-to-end service orchestration via network functions virtualization (NFV) over both software-defined networks (SDN) and legacy networks. 

As a premier member, VMware will participate in the governing board, as well as the technical steering and marketing committees. In 2016, VMware indicated it would put more focus on telco NFV, and it hired Gabriele Di Piazza as vice president of telco NFV products.

Read more at SDxCentral

Sweden’s Blockchain Land Registry to Begin Testing in March

A public-private effort in Sweden to record land titles on a blockchain is set to begin public testing this March.

Spearheaded by the Swedish National Land Survey and blockchain startup ChromaWay, the project was revealed in June to have support from consulting firm Kairos Future and telephone service provider Telia. Now, the project is moving ahead with the addition of two banks that specialize in mortgages, Landshypotek and SBAB, CoinDesk has learned.

ChromaWay CEO Henrik Hjelte said that the sandbox release would seek to test the platform from a business, legal and security perspective, while allowing the public to test the interface and back-end.

Read more at CoinDesk

Communities Over Code: How to Build a Successful Project by Joe Brockmeier, Red Hat

Joe Brockmeier has tips for building community and attracting more users, which in turn will attract more developers, make life easier, and help ensure a long life for your project.

 

Essentials of OpenStack Administration Part 5: OpenStack Releases and Use Cases

Start exploring Essentials of OpenStack Administration by downloading the free sample chapter today. DOWNLOAD NOW

OpenStack has come a long way since 2010, when NASA approached Rackspace about a project. With 1,600 individual contributors to OpenStack and a six-month release cycle, there is a great deal of change and progress, and that pace is not without its drawbacks: roughly 10,000 bugs were reported against the Juno release, and about 13,000 against the next release, Kilo. But as OpenStack is deployed in more environments, and more people are interested in it, the community grows in both users and developers.

In part 5 of our series from the Essentials of OpenStack Administration course sample chapter, we discuss the OpenStack project in more detail: its community of contributors, release cycle, and use cases. Download the full sample chapter now.

History of OpenStack

In 2010, engineers at NASA approached some friends at Rackspace to build an open cloud for NASA and, hopefully, other government organizations as part of an Open Government initiative. At that time, only proprietary and expensive offerings were available. Project Nebula was born. Rackspace was interested in moving its software toward open source and saw Nebula as a good place to begin.

Together they started working on a project called Nova, now known as OpenStack Compute. At the time, Nova did everything: storage, networking, and virtual machines. New projects have since taken over some of those duties.

Since then, the number of projects has grown tremendously. If you go to the OpenStack.org website and look at the projects page, you’ll notice there are more than 35 different projects, each made up of one or more services to the cloud and each developed separately.

Although NASA has stopped major work on OpenStack, a large and growing group of supporters still remains. Each component of OpenStack has a dedicated project. Each project has an official name, as well as a more well-known code-name. The project list has been growing with each release. Some projects are considered core, others are newer and in incubation stages. See a list of the current projects.

There are several distributions of OpenStack available as well, from large IT companies and start-ups alike. DevStack is a deployment of OpenStack available from the www.openstack.org website. It allows for easy testing of new features, but is not considered production-safe. Red Hat, Canonical, Mirantis and several other companies also provide their own deployment of OpenStack, similar to the many options to install Linux.

OpenStack Release Pattern

The first release of the project was code-named Austin, in October of 2010. Since then, a major release has been deployed every six months. There are code features and proposals that are evaluated every two months or so, as well as code sprints planned on a regular basis.

The quick release schedule and the large number of developers working on the code do not always make for smooth transitions. The Kilo release was the first to address an upgrade path, with its success yet to be seen. In fact, roughly 30 percent more bugs were reported against the Kilo release than against Juno.

OpenStack Use Cases

The ability to deploy and redeploy various instances allows for software development at the speed of the developer, without downtime waiting for IT to handle a ticket.

Testing can easily be done in parallel across various flavors (predefined resource configurations) and operating systems. These choices are also within the reach of the end user, lessening interaction with the IT team.

Using either a browser user interface (BUI) or a command line, many common IT requests can be delegated to the users themselves. The IT staff can then focus on higher-level functions and problems instead of routine requests.

The flexibility of OpenStack’s various software-defined layers allows for more options rather than fewer, in contrast to the narrowing that came with server consolidation.

The next, and final, article in this series is a tutorial on installing DevStack, a simple way for developers to test-drive OpenStack.

The Essentials of OpenStack Administration course teaches you everything you need to know to create and manage private and public clouds with OpenStack. Download a sample chapter today!

Read the other articles in the series:

Essentials of OpenStack Administration Part 1: Cloud Fundamentals

Essentials of OpenStack Administration Part 2: The Problem With Conventional Data Centers

Essentials of OpenStack Administration Part 3: Existing Cloud Solutions

Essentials of OpenStack Administration Part 4: Cloud Design, Software-Defined Networking and Storage