
4 Steps To Secure Serverless Applications

Serverless applications remove a lot of the operational burdens from your team. No more managing operating systems or running low-level infrastructure.

This lets you and your team focus on building…and that’s a wonderful thing.

But don’t let the lack of day-to-day operational tasks fool you into thinking that there’s nothing to do but write code. With a serverless design, you still have operational tasks. These tasks are different and tend to be more directly tied to delivering business value but they still exist.

Read more at Marknca Blog

SDN-Powered Platform Helps Providers Quickly Monetize New Services

Communication service providers face stagnating revenues, increasing expenditures, and tough competition from cloud and over-the-top (OTT) service providers. Network functions virtualization (NFV) and software-defined networking (SDN) are opening the way to increased efficiency and reduced costs, but providers are encountering new challenges as they evolve NFV from trials to network-wide deployments.

As OTTs offer more and more services, providers are fighting to keep their networks from becoming commodities by adding new services they can monetize such as Internet of things (IoT), video, and data analytics offerings. To support these services, providers need carrier-class infrastructure capable of connecting millions of data flows to thousands of virtual network functions (VNFs).

Read more at SDxCentral.

What a Virtual Network Looks Like: Services

In the past, services have been the direct product of cooperative behavior of systems of network devices. The devices communicate with each other to learn network topology, status, and the location of endpoints in a process called “adaptive discovery.” This collection of adaptive processes ties networks tightly to protocols and ties devices to services, which means that service changes can demand protocol and sometimes even device changes. If virtual networks can break this tie, it could be… well … huge.

Services divide into three basic categories — connection services, on-network hosting services, and endpoint services. Connection services are the things that move traffic among endpoints; hosting services are services offered on the network (websites and the cloud); and endpoint services are things like firewall, network address translation, dynamic host configuration, and so forth that are offered per (and usually at) an endpoint and appear to be a logical part of a connection service. A big, perhaps even the biggest, question for virtual networks is how they’d relate to these three classes.

If virtual networks let you compose virtual networks from a collection of users and applications, virtual connection-point services could let you compose features. With network functions virtualization (NFV), you could deploy security, monitoring and management, addressing, and even load-balancing tools as needed, move them around to follow users or applications, and change or modernize the features when better stuff is available — all without changing out any devices.
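The idea of composing features in software, rather than in fixed devices, can be sketched in a few lines of code. The following is a hypothetical illustration (all class and function names are invented, not any real NFV platform's API) of a per-connection service chain whose features can be added, reordered, or swapped at runtime:

```python
# Hypothetical sketch: endpoint features composed as a mutable chain of
# virtualized network functions. Adding or modernizing a feature changes
# only software state -- no device swap. Names/behaviors are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Packet = Dict[str, object]  # toy packet: {"src": ..., "dst": ..., "allowed": ...}


@dataclass
class ServiceChain:
    """An ordered, runtime-editable list of per-connection functions."""
    functions: List[Callable[[Packet], Packet]] = field(default_factory=list)

    def add(self, fn: Callable[[Packet], Packet]) -> None:
        self.functions.append(fn)

    def process(self, pkt: Packet) -> Packet:
        for fn in self.functions:
            pkt = fn(pkt)
        return pkt


def firewall(pkt: Packet) -> Packet:
    # Mark traffic to a blocked destination as disallowed.
    pkt["allowed"] = pkt["dst"] != "10.0.0.99"
    return pkt


def nat(pkt: Packet) -> Packet:
    # Rewrite the source address to a public-facing one.
    pkt["src"] = "203.0.113.1"
    return pkt


chain = ServiceChain()
chain.add(firewall)
chain.add(nat)  # features composed in software, per connection

result = chain.process({"src": "192.168.1.5", "dst": "10.0.0.7"})
print(result)
```

A real NFV deployment would orchestrate actual network software rather than Python callables, but the composition model — features as interchangeable units attached to a connection — is the same.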

Read more at No Jitter.

Architecting Stable Systems and Solid Code

Solid code does what the developers intended it to do and can gracefully handle anything you throw at it. Learn more about how to make your code solid.

Sometimes the code is solid, but the intention was wrong. That should be addressed at a different level of requirements gathering. But I don’t recommend a waterfall approach here. Often requirements are fuzzy to begin with, and during development and incremental deployments, they become more refined and clear.

Quantifying Benefits of Network Virtualization in the Data Center

Network virtualization (NV) in the data center promises to improve service agility, simplify network operations, and reduce capital expenditures. One of the biggest challenges for IT professionals is to quantify the return-on-investment required to justify the costs of network virtualization and the changes it requires in their data center network operations.

Defining NV in the Data Center
NV provides the ability to create logical, virtual networks that are decoupled from the underlying network hardware. NV creates a logical, software-based view of the hardware and software networking resources (switches, routers, etc.). The physical networking gear (the underlay) is responsible for the forwarding of packets, while the virtual network (software) provides an intelligent abstraction that makes it easy to deploy and manage Layers 4-7 network services, including network security and application delivery control.
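The decoupling described above can be made concrete with a small, hypothetical sketch (the class and names below are illustrative, not a real NV product's API): the virtual network is just a software mapping from logical endpoints to wherever they currently sit on the physical underlay, so moving a workload updates the mapping without touching the forwarding gear.

```python
# Illustrative sketch of the overlay/underlay split: the logical view
# (software) changes freely; the physical underlay just keeps forwarding.
class VirtualNetwork:
    def __init__(self, name: str):
        self.name = name
        # logical endpoint -> current physical location (switch, port)
        self.endpoints = {}

    def attach(self, endpoint: str, switch: str, port: int) -> None:
        self.endpoints[endpoint] = (switch, port)

    def migrate(self, endpoint: str, new_switch: str, new_port: int) -> None:
        # Decoupling in action: only the software mapping changes.
        self.endpoints[endpoint] = (new_switch, new_port)

    def locate(self, endpoint: str):
        return self.endpoints[endpoint]


vnet = VirtualNetwork("tenant-a")
vnet.attach("web-1", switch="rack1-sw3", port=12)
vnet.migrate("web-1", new_switch="rack4-sw1", new_port=7)
print(vnet.locate("web-1"))
```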

The Benefits of NV in the Data Center
Modern data centers have increased significantly in scale and complexity as compute and storage resources become highly virtualized. The rise of the DevOps style of application deployment means that data center resources must be agile and respond rapidly to changing workload requirements. Data center network technologies have been challenged to keep up with these rapidly evolving application requirements.

Read more at SDxCentral.

5 Cloud, Big Data, and Networking Platforms to Kickstart Your Open Source Career

A decade ago, Red Hat CEO Jim Whitehurst predicted that open source tools and platforms would become pervasive in IT. Fast-forward to today, and that prediction has come true, with profound implications for the employment market.

“Today, it is almost impossible to name a major player in IT that has not embraced open source,” Whitehurst noted in a LinkedIn post. “Open source was initially adopted for low cost and lack of vendor lock-in, but customers have found that it also results in better innovation and more flexibility. Now it is pervasive, and it is challenging proprietary incumbents across technology categories.”

In particular, open source cloud, Big Data and networking platforms have flourished and the job market now places a premium on workers skilled in these areas. According to The Linux Foundation’s 2016 Open Source Jobs Report, 51 percent of surveyed hiring managers say knowledge of OpenStack and CloudStack has a big impact on open source hiring decisions, followed by networking (21 percent), security experience (14 percent), and container skills (8 percent).

The good news is that when it comes to five of the most in-demand open source platforms and tools — Hadoop, OpenStack, Apache Spark, OpenDaylight and Docker — there are fast tracks available for becoming skilled with them. In so doing, you can kickstart your open source career.

Here is a sampling of avenues you can take to gain skills with these tools:

Hadoop

When talk turns to job market opportunities these days, hardly any technology trend is drawing more attention than Big Data. In fact, Cloudera reports that 65 percent of the current Fortune 100 is using big data to drive their business. Hadoop is one of the most storied platforms in the Big Data space, allowing organizations to draw insights from huge datasets.

So how can you get up to speed with Hadoop? When it comes to this platform, the good news is that free training options are flourishing. With its sights set on making Hadoop training for developers and administrators easy to complete, Hadoop distribution provider MapR Technologies has unveiled several free on-demand training offerings. You can visit here to see the full list of courses and certifications and to sign up for free classes. MapR also offers free training for Apache Drill and other tools that exist within the Hadoop ecosystem. More details are available here.

You can also look into Hadoop training options from Cloudera University. It offers a certification option that can make you a valuable commodity in the Big Data job market.

Apache Spark 

Folks are becoming increasingly interested in Apache Spark, an open source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley. Spark is not only central to how many organizations are working with their data stores, but it is poised to play a big role in processing the data generated by the Internet of Things (IoT). Meanwhile, IBM has announced a major commitment to Apache Spark, billing it as “potentially the most important new open source project in a decade that is being defined by data.”
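To get a feel for the programming model before taking a course, consider the transform-then-aggregate pipeline style that Spark generalizes across a cluster. The pure-Python sketch below is not the Spark API — Spark would distribute these steps over many machines and keep the data in memory — but the pipeline shape (map, filter, reduce) is the same idea:

```python
# Pure-Python illustration of the map/filter/reduce pipeline style that
# Spark applies at cluster scale. Data and pipeline are toy examples.
from functools import reduce

log_lines = [
    "INFO boot ok",
    "ERROR disk full",
    "INFO request served",
    "ERROR timeout",
]

# map: parse each line into (level, message)
records = [tuple(line.split(" ", 1)) for line in log_lines]

# filter: keep only error messages
errors = [msg for level, msg in records if level == "ERROR"]

# reduce: aggregate a summary (here, a simple count)
error_count = reduce(lambda acc, _: acc + 1, errors, 0)

print(error_count)
```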

In addition to the Hadoop training options that they offer, MapR and Cloudera are great places to start if you want to pick up Spark skills. MapR offers some free training options, and you can scan options here. Cloudera also has an expanded Apache Spark training curriculum. For more information about the courses on Spark and to register for a class, visit university.cloudera.com.

“The Big Data market continues to be one that allows people to command the highest average salaries,” MapR Vice President Dave Jespersen said.

OpenStack

Are you looking to pick up valuable OpenStack certification? If so, you have several good options, and costs are minimal. At the recent OpenStack Summit in Austin, TX, The OpenStack Foundation announced the availability of a Certified OpenStack Administrator (COA) exam. Developed in partnership with The Linux Foundation, the exam is performance-based and available anytime, anywhere. The Linux Foundation offers an OpenStack Administration Fundamentals course, which serves as preparation for the certification. The course is available bundled with the COA exam, enabling students to learn the skills they need to work as an OpenStack administrator and get the certification to prove it.

Red Hat continues to be very focused on OpenStack and has a certification option that is also worth considering. The company has announced a new cloud management certification for Red Hat Enterprise Linux OpenStack Platform as part of the Red Hat OpenStack Cloud Infrastructure Partner Network.

Mirantis has built a name for keeping its certification training vendor-agnostic, and the company teaches OpenStack across the most popular distributions, hypervisors, storage back ends, and network topologies. Earlier this year, Mirantis also launched Virtual Training, a synchronized, instructor-led online OpenStack professional training option. You can find more of Mirantis’ OpenStack courses here.

OpenDaylight 

Networking has also emerged as a technology category rich with open source opportunities, and the OpenDaylight Platform is definitely worth a look here. It’s a collaborative, open source project hosted by The Linux Foundation, focused on Software Defined Networking (SDN).

The expansion of data centers and rise of cloud computing, coupled with changing demands on service provider networks, are driving companies to look to software-defined solutions to help improve network performance and management, lower costs, and increase efficiencies. OpenDaylight has reached broad industry acceptance, and you can get involved with the project itself and get access to training materials, or check out the Software Defined Networking with OpenDaylight course offered by The Linux Foundation.

Docker

We all know that container technology is hot, and Docker is the hottest star in this galaxy. Docker has issued a report (called the “Evolution of the Modern Software Supply Chain”), based on a survey of more than 500 people currently using and deploying container technology in various stages.

Among the findings, the report noted that Docker is central to many hybrid cloud/multi-cloud strategies. In fact, 80 percent of respondents using Docker describe it as part of their cloud strategy for a variety of reasons, including migration, hybrid cloud portability, and avoiding lock-in to a single cloud vendor. Docker is also changing how organizations think about delivering and maintaining applications.

Docker is a tool that you can quickly come up to speed with, and Docker Inc. offers both instructor-led training options and self-paced options. There are introductory and advanced courses available. As just one example of how fast you can become skilled with Docker, you can complete the comprehensive Docker Administration and Operations course in four consecutive days.
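Part of why Docker is quick to pick up is that images are defined declaratively. A minimal, hypothetical Dockerfile (the file names here are invented for illustration) is enough to package a script and its runtime into a portable image:

```dockerfile
# Hypothetical minimal Dockerfile: bundles a small Python script and its
# runtime into an image that runs the same on any host with Docker.
FROM python:3-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Building and running it is a matter of `docker build` and `docker run`, which is exactly the workflow the introductory courses walk through.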

The Linux Foundation has also announced that a massive open online course (MOOC) is available for registration, and it includes coverage of Docker. The course is an Introduction to Cloud Infrastructure Technologies and is offered through edX. You can register now for this free cloud training course, and it begins in June — coming right up.

 


At MesosCon: Chris Pinkham Details Twitter’s Platform Infrastructure

As a preview to MesosCon, we spoke with Chris Pinkham, VP of Engineering at Twitter, about some of the issues involved with running “one of the largest single Mesos clusters known” and why open source technology is critical to Twitter’s success.

In his keynote presentation, “Platform Infrastructure at Twitter: The Past, Present and Future” on Thursday, June 2, Pinkham will provide an overview of the company’s current platform infrastructure, explain some of the challenges of operating at scale, and describe his team’s vision for hybrid cloud services.

Chris Pinkham, VP of Engineering, Twitter

Linux.com: Please tell us briefly about Twitter’s platform infrastructure. What are some of the problems you are working to solve?

Chris Pinkham: The platform infrastructure consists of the large-scale services behind the scenes that power Twitter’s websites and apps. Major components include the compute clusters, key-value, graph and object storage systems, search infrastructure, data platform, traffic management and load distribution services as well as the actual tweet services, user services, and the social graph services linking them all together.

At a high level, the biggest problems we’re working on are how to operate — efficiently and reliably — a huge messaging infrastructure connecting hundreds of millions of users globally, and at the same time allow Twitter’s engineers the ability to make continuous improvements to our products without negatively impacting our users’ experience.

Can you give us an example of a major problem you had to solve when massively scaling services? How did you approach that?

We regularly have to invent our way out of large scale distributed system problems — there are many examples, some of which we have made available in open source (such as Mesos and Aurora in our compute infrastructure) and some we haven’t (Manhattan, our internal key-value service, and Omnisearch, a new information retrieval system).

In all cases, we are concerned with efficiency of operations in the sense that, when dealing with many thousands of machines, we are continually faced with defects. Building operational intelligence deeply into the systems allows the relatively small development teams to operate reliable services with minimal human intervention, but this only works if you assume that component failure is ongoing.

What is Twitter working on in regard to Mesos specifically?

Our immediate focus area is scalability and reliability of the compute platform. We run one of the largest single Mesos clusters known and are often the first to encounter a variety of performance issues given the scale. We are also focusing on making the compute platform more efficient.

We built a chargeback system that also leverages Mesos container stats to provide accurate utilization and cost reporting per job, project, and team. This not only drove a more than 30 percent improvement in overall resource utilization across our clusters but also provided a framework to evaluate ROI when compared to other alternatives (including public cloud services). We are now in the process of enabling resource oversubscription (revocable offers) that will open up a “spot” market where customers will have the flexibility to return resources and avoid being charged for them.

Finally, we are working to expose other compute resources such as network bandwidth and GPU for low latency, high throughput, and machine learning use-cases.

What role does open source play in your platform infrastructure strategy?

Open source is critical to Twitter’s success and our ability to scale the business effectively. Starting about 15-20 years ago, the largest web companies realized that traditional vendor relationships weren’t going to help with their most important problems — the traditional vendors were too slow, opaque and expensive, and didn’t scale to support their needs well. So, we collectively set about solving our most important problems ourselves and then shared the results so that we didn’t all have to be working on everything.

Many important contributions have come from other sources, of course, but a disproportionate number of the popular scalable compute, storage, and data systems in the open source world have come from companies such as Twitter and other large web companies. We rely on open source to make progress, and we feed the system by helping out with our own contributions whenever possible, especially when we have a particularly compelling point of view.

Can you give us a quick preview of your upcoming talk at MesosCon?

I will be giving an overview of Twitter’s current platform infrastructure, pointing to some of the major open source contributions we have made and are continuously making, and discussing how we are moving towards a compute infrastructure that combines private and public services in a seamless self-service platform that engineers can use to operate their own services at developer speed, unlimited by the infrastructure.

Anything else that you’d like to share?

Check out Twitter’s engineering blog — https://blog.twitter.com/engineering — to keep up to date on some of the most interesting happenings behind the scenes at Twitter.

In case you won’t be able to attend MesosCon in person, The Linux Foundation will be offering free live video streaming of all keynote sessions. You can see the full agenda of keynotes here, and sign up for the livestream now.

 


Introducing Blue Ocean: A New User Experience for Jenkins

In recent years developers have become rapidly attracted to tools that are not only functional but are designed to fit into their workflow seamlessly and are a joy to use. This shift represents a higher standard of design and user experience that Jenkins needs to rise to meet.

We are excited to share and invite the community to join us on a project we’ve been thinking about over the last few months called Blue Ocean.

Blue Ocean is a project that rethinks the user experience of Jenkins, modelling and presenting the process of software delivery by surfacing information that’s important to development teams with as few clicks as possible, while still staying true to the extensibility that is core to Jenkins. 

Read more at Jenkins.io Blog

Univa Brings Supercomputer Scheduling to Kubernetes

As the market for container scheduling heats up, Univa, a workload management specialist from the high-performance computing (HPC) market, is among those entering the ring. Earlier this month, the company demonstrated Navops Command, a pluggable, automated workload placement and policy management solution for Kubernetes, at the OSCON conference in Austin, Texas.

“Navops Command’s big differentiator is its rich policy management, which allows a Kubernetes admin to have a lot more control over what runs where and when,” said Robert Lalonde, vice president and general manager of Navops. Navops Command offers Kubernetes-enhancing features such as access control, fair-share resource placement, workload prioritization, pre-emption, and a variety of placement policies that provide enterprise-grade scheduling.

 

Read more at The New Stack

Watch Apache Mesos Keynotes Live Today at MesosCon

Can’t make it to MesosCon North America this week? The Linux Foundation is pleased to offer free live video streaming of all keynote sessions on June 1-2, 2016.

The Apache Mesos conference going on in Denver draws a veritable who’s who from across the industry of those using Mesos as a framework to develop cloud native applications. MesosCon is a great place to learn how to design application clusters running on Apache Mesos from engineers who have done it.

Tune in at 9 a.m. Mountain Time today, June 1, to watch Benjamin Hindman (@benh), the co-creator of Apache Mesos, give the welcome address. And at 10:15 a.m. MT, Craig Neth (@cneth), distinguished member of the technical staff at Verizon, will walk attendees through how Verizon got a 600-node Mesos cluster powered up and running tasks in 14 days.

On June 2, the event features a special keynote from Matei Zaharia, VP of Apache Spark, and keynotes from Twitter, Mesosphere, and EMC. See the full agenda of keynotes here, and sign up for the livestream. While you watch, we encourage you to join the conversation on Twitter using #mesoscon.

By signing up, you’ll also be the first to get notified when recordings of the keynotes and more than 50 sessions become available.

Once you sign up, you’ll be able to view the livestream on this page. If you sign up prior to the livestream day/time, simply return to this page and you’ll be able to view it.
