
Building IPv6 Firewalls: IPv6 Security Myths

We’ve been trundling along nicely in IPv6, and now it is time to keep my promise to teach some iptables rules for IPv6. In this two-part series, we’ll start by examining some common IPv6 security myths. Every time I teach firewalls I have to start with debunking myths because there are a lot of persistent weird ideas about the so-called built-in IPv6 security. In part 2 next week, you will have a nice pile of example rules to use.

Security yeah, no

You might recall the optimistic claims back in the early IPv6 days of all manner of built-in security that would cure the flaws in IPv4, and we would all live happily ever after. As usual, ’tisn’t exactly so. Let’s take a look at a few of these.

IPsec is built into IPv6, rather than added on as in IPv4. This is true, but it’s not particularly significant. IPsec, IP Security, is a set of network protocols for encrypting and authenticating network traffic. IPsec operates at the Network layer. Other encryption protocols that we use every day, such as TLS/SSL and SSH, operate higher up in the Transport Layer, and are application-specific.

IPsec operates similarly to TLS/SSL and SSH with encryption key exchanges, authentication headers, payload encryption, and complete packet encryption in encrypted tunnels. It works pretty much the same in IPv6 and IPv4 networks; patching code isn’t like sewing patches on clothing, with visible lumps and seams. IPv6 is approaching 20 years old, so whether certain features are built-in or bolted-on isn’t relevant anyway.

The promise of IPsec is automatic end-to-end security protecting all traffic over an IP network. However, implementing and managing it is so challenging we’re still relying on our old favorites like OpenVPN, which uses TLS/SSL, and SSH to create encrypted tunnels.
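To make the contrast concrete, here is the kind of ad hoc encrypted tunnel that, in practice, stands in where end-to-end IPsec was supposed to be. This is only a sketch; the hostnames are placeholders:

# Forward local port 8080 through an encrypted SSH tunnel, via a gateway,
# to port 80 on an internal host (hostnames are hypothetical)
$ ssh -L 8080:intranet.example.com:80 user@gateway.example.com

Traffic to localhost:8080 is encrypted between your machine and the gateway; only the final hop to the internal host travels in the clear.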

IPsec in IPv6 is mandatory. No. The original specification required that all IPv6 devices support IPsec. This was changed in 2011 by RFC 6434, Section 11, from a MUST to a SHOULD. In any case, having it available is not the same as using it.

IPsec in IPv6 is better than in IPv4. Nah. Pretty much the same.

NAT = Security. No no no no no no, and NO. NAT is not and never has been about security. It is an ingenious hack that has extended the lifespan of IPv4 many years beyond its expiration date. The little bit of obfuscation provided by address masquerading doesn’t provide any meaningful protection, and it adds considerable complexity by requiring applications and protocols to be NAT-aware. It requires a stateful firewall which must inspect all traffic, keep track of which packets go to your internal hosts, and rewrite multiple private internal addresses to a single external address. It gets in the way of IPsec, geolocation, DNSSEC, and many other security applications. It creates a single point of failure at your external gateway and provides an easy target for a Denial of Service (DoS) attack. NAT has its merits, but security is not one of them.
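The protection people attribute to NAT actually comes from the stateful firewall sitting in front of their hosts, and IPv6 gives you that without any address rewriting. As a hedged preview of next week's rules, a minimal default-deny stateful filter looks roughly like this:

# Drop inbound traffic by default, allow replies to connections we initiated,
# and let ICMPv6 through (IPv6 depends on it to function)
$ ip6tables -P INPUT DROP
$ ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ ip6tables -A INPUT -p icmpv6 -j ACCEPT

A production ruleset needs more nuance than this (ICMPv6 in particular deserves finer-grained handling), which is what part 2 is for.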

Source routing is built-in. This is true; whether it is desirable is debatable. Source routing allows the sender to control forwarding, instead of leaving it up to whatever routers the packets travel through, which is usually Open Shortest Path First (OSPF). Source routing is sometimes useful for load balancing, and managing virtual private networks (VPNs); again, whether it is an original feature or added later isn’t meaningful.

Source routing presents a number of security problems. You can use it to probe networks, gather information, and bypass security devices. Routing Header Type 0 (RH0) is an IPv6 extension header for enabling source routing. It has been deprecated because it enables a clever DoS attack called amplification: bouncing packets between two routers until they are overloaded and their bandwidth is exhausted.
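Current kernels drop RH0 packets by default, but if you want an explicit rule, a minimal ip6tables sketch looks like this:

# Discard any packet carrying a Type 0 Routing Header
$ ip6tables -A INPUT -m rt --rt-type 0 -j DROP
$ ip6tables -A FORWARD -m rt --rt-type 0 -j DROP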

IPv6 networks are protected by their huge size. Some people have the idea that because the IPv6 address space is so large this provides a defense against network scanning. Sorry but noooo. Hardware is cheap and powerful, and even when we have literally quintillions of potential addresses to use (an IPv6 /64 network segment is 18.4 quintillion addresses) we tend to organize our networks in predictable clumps.

The difficulties of foiling malicious network scanning are compounded by the fact that certain communications are required for computer networks to operate. The problem of controlling access is beyond the abilities of any protocol to manage for us. Read Network Reconnaissance in IPv6 Networks for a lot of interesting information on scanning IPv6 networks, which attacks require local access and which don’t, and some ways to mitigate hostile scans.
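As a quick illustration of why the address space doesn't save you: every IPv6 host joins the all-nodes multicast group, so an attacker with local access never needs to scan 18.4 quintillion addresses. One ping to the link does the job (assuming the interface is eth0; hosts configured to ignore multicast echo requests won't answer):

# Ask every host on the local link to identify itself
$ ping6 -c2 -I eth0 ff02::1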

Multitudes of Attack Vectors

Attacks on our networks come from all manner of sources: social engineering, carelessness, spam, phishing, operating system vulnerabilities, application vulnerabilities, ad networks, tracking and data collection, snooping by service providers… going all tunnel vision on an innocent networking protocol misses almost everything.

Come back next week for some nice example IPv6 firewall rules.

You might want to review the previous installments in our meandering IPv6 series.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Why Kubernetes, OpenStack and OPNFV Are a Match Made in Heaven

Chris Price, open source strategist for SDN, cloud and NFV at Ericsson, says there’s plenty of love in the air between Kubernetes, OpenStack and OPNFV.

“Kubernetes provides us with a very simple way of very quickly onboarding workloads and that’s something that we want in the network, that’s something we want in our platform,” Price said, speaking about what he called “a match made in heaven” between Kubernetes, OpenStack and NFV at the recent OPNFV Summit.

Price believes the Euphrates release is the right time to integrate more tightly with Kubernetes and OpenStack, finding ways of making the capabilities from each available to the NFV environment and community.

Read more at SuperUser

Using Prototypes to Explore Project Ideas, Part 1

Imagine that you work for an agency that helps clients navigate the early stages of product design and project planning.

No matter what problem space you are working in, the first step is always to get ideas out of a client’s head and into the world as quickly as you possibly can. Conversations and wireframes can be useful for finding a starting point, but exploratory programming soon follows because words and pictures alone can only take you so far.

By getting working software into the mix early in the process, product design becomes an interactive collaboration. Fast feedback loops allow stumbling blocks to be quickly identified and dealt with before they can burn up too much time and energy in the later (and more expensive) stages of development.

Read more at Practicing Developer

The Biggest Shift in Supercomputing Since GPU Acceleration

If you followed what was underway at the International Supercomputing Conference (ISC) this week, you will already know this shift is deep learning. Just two years ago, we were fitting this into the broader HPC picture from separate hardware and algorithmic points of view. Today, we are convinced it will cause a fundamental rethink of how the largest supercomputers are built and how the simulations they host are executed. After all, the pressures on efficiency, performance, scalability, and programmability are mounting—and relatively little in the way of new thinking has been able to penetrate those challenges.

The early applications of deep learning as an approximation approach to HPC—taking experimental or supercomputer simulation data and using it to train a neural network, then turning that network around in inference mode to replace or augment a traditional simulation—are incredibly promising. This work using traditional HPC simulations as the basis for training is happening quickly and broadly, which means a major shift is coming to HPC applications and hardware far sooner than some centers may be ready for. What is potentially at stake, at least for some application areas, is far-reaching. Overall compute resource usage goes down compared to traditional simulations, which drives efficiency, and in some cases accuracy is improved. Ultimately, by allowing the simulation to become the training set, exascale-capable resources can be used to scale a more informed simulation, or simply serve as the hardware base for a massively scalable neural network.

Read more at The Next Platform

New GitHub Features Focus on Open Source Community

GitHub is adding new features and improvements to help build and grow open source communities. According to the organization, open source thrives on teamwork, and members need to be able to easily contribute and give back. The new features are centered around contributing, open source licensing, blocking, and privacy.

To make open source licensing easier, the organization has introduced a new license picker that provides an overview of the license, the full text, and the ability to customize fields.

Read more at SDTimes

Azure Container Instances: No Kubernetes Required

Microsoft has introduced a new container service, Azure Container Instances (ACI), that is intended to provide a more lightweight and granular way to run containerized applications than its Azure Container Service (ACS).

ACI runs individual containers that you can configure with specific amounts of virtual CPU and memory, and that are billed by the second. Containers can be pulled from various sources – Docker Hub, the Azure Container Registry, or a private repository – and deployed from the CLI or by way of an Azure template.
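For a sense of the per-container model, a single billed-by-the-second container can be deployed from the Azure CLI roughly like this (the resource group, name, and image here are placeholders):

# Launch one container with 1 vCPU and 1.5 GB of memory
$ az container create --resource-group myResourceGroup \
    --name mycontainer --image nginx --cpu 1 --memory 1.5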

Read more at InfoWorld

Open Source Mentoring: Your Path to Immortality

Rich Bowen is omnipresent at any Open Source conference. He wears many hats. He has been doing Open Source for 20+ years, and has worked on dozens of different projects during that time. He’s a board member of the Apache Software Foundation, and is active on the Apache HTTPd project. He works at Red Hat, where he’s a community manager on the OpenStack and CentOS projects.

At Open Source Summit North America, Bowen will deliver a talk titled “Mentoring: Your Path to Immortality.” We talked with Bowen to learn more about the secret of immortality and successful open source projects.

Linux.com: What was the inspiration behind your talk?

Rich Bowen: My involvement in open source is 100 percent the result of people who mentored me, encouraged me to participate, and cheered me on as I worked. In recent years, as I have lost steam on some of these projects, I’ve turned my attention to encouraging younger people to step in and fill my space. This has been every bit as rewarding as participating myself, and I wanted to share some of this joy.

Linux.com: Have you seen projects that died because their creators left?

Bowen: Oh, sure. Dozens of them. And many of them were projects that had a passionate user community, but no active developers. I tend to think of these projects as not really open source. It’s not enough to have your code public, or even under an open source license. You have to actually have a collaborative community in order for your project to be truly open and sustainable.

Linux.com: When we talk about immortality of a project and changing leadership, there can be many factors — documentation, adapting processes, sustainability. What do you think are some of the factors that ensure immortality?

Bowen: Come to my talk and find out! Seriously, the most important thing — the thing that I want people to take away from my talk — is that you be willing to step out of your comfort zone and ask someone to help out. Be willing to relinquish control, and let someone else do something that you could probably do better. Or, maybe you couldn’t. There’s only one way to find out.

Linux.com: Can you give an example of some of the projects that followed the model and have never faced issues with changing guard?

Bowen: I would have to point to the Apache Web server. The project is 23 years old, and there’s only one person involved now who was involved at the beginning. The rest of the people working on it come and go, based on their interests and availability. The culture of handing out commit rights to all interested parties has been sustained over the years, and all the people on the project are treated as equals.

Other interesting examples include projects like Linux, Perl, or Python, which have very strong project leaders who, while they remain the public face of the project, in reality, delegate a lot of their power to the community. These projects all have strong cultures of mentors reaching out to new contributors and helping them with their first contributions.

Linux.com: How important are people and processes in the open source world or is it all about technology?

Bowen: We have a saying at Apache: Community > Code. 

Obviously, our communities are based around code, but it’s the community, not the code, that the Apache board looks at when it evaluates whether a project is running in a sustainable way.

I would assert that open source is all about people — people who happen to like technology. The open source mindset, and everything that I talk about in my presentation, are equally applicable to any discipline where people create in a collaborative way — academia is one obvious example, but there are lots of other places like government, business coalitions, music, and so on.

Check out the full schedule for Open Source Summit here and save $150 on registration through July 30.  Linux.com readers save an additional $47 with discount code LINUXRD5. Register now!

Activities for All at OS Summit in Los Angeles: Mentoring, Yoga, Puppies, and More!

Open Source Summit North America is less than two months away! Join 2,000+ open source professionals Sept. 11-14 in Los Angeles, CA, for exciting keynotes and technical talks covering all things Linux, cloud, containers, networking, emerging open source technologies, and more.

Register now!

With your registration, you also get access to many special events throughout the four-day conference. Special events include:

  • New Speed Networking Workshop: Looking to grow your technical skills, get more involved in an open source community, or make a job change? This networking and mentoring session taking place Monday, Sept. 11 is for you!

  • New Recruiting Program: Considering a career move or a job change? This year we are making it easier than ever for attendees to connect with companies looking for new candidates.

  • Evening Events: Join fellow attendees for conversation, collaboration and fun at numerous evening events including the attendee reception at Paramount Studios featuring studio tours, live music, and dinner from LA favorites In-N-Out, Coolhaus, Pink’s and more!

  • Women in Open Source Lunch: All women attendees are invited to connect at this networking lunch, sponsored by Intel, on Monday, Sept. 11.

  • Dan Lyons Book Signing: Attendees will have the opportunity to meet author Dan Lyons on Tuesday, Sept. 12. The first 100 attendees will receive a free signed copy of his book Disrupted: My Misadventure in the Start-Up Bubble.

  • Thursday Summits & Tutorials: Plan to stay on September 14, to attend the Diversity Empowerment Summit (Hosted by HPE & Intel), Networking Mini-Summit or deep-dive tutorials – all included in your OSS NA registration!

  • New Executive Track: Full details coming soon on this special event, hosted by IBM, taking place Tuesday, Sept. 12.

  • Morning Activities for Attendees: Morning meditation, a 5K fun run, and a downtown Los Angeles sightseeing bus tour.

Check back for updates on even more activities, including our Attendee Partner Program, Kids Day (an opportunity for kids to learn Scratch programming, in partnership with LA Makerspace), and Puppy Pawlooza (enjoy playtime with shelter dogs thanks to our partnership with LA Animal Rescue).

Linux.com readers receive an additional $47 off with code LINUXRD5. Register now »

How to Integrate Containers in OpenStack

One of the key features of the OpenStack platform is the ability to run applications, and quickly scale them, using containers.  

Containers are ready-to-run applications because they come packed with the entire stack of services required to run them.

OpenStack is an ideal platform for containers because it provides all of the resources and services for containers to run in a distributed, massively scalable cloud infrastructure. You can easily run containers on top of Nova because it includes everything that is needed to run instances in a cloud. A further development is offered by project Zun.

In more complex environments, container orchestration is often required. Using container orchestration makes managing many containers in data center environments easier. Kubernetes has become the preferred solution for container orchestration. Container orchestration in OpenStack is implemented using project Magnum.
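For comparison with the Nova approach shown below, here is a hedged sketch of the Magnum workflow; the names are illustrative and the client flags vary between OpenStack releases. You define a cluster template that selects Kubernetes as the container orchestration engine (COE), then build a cluster from it:

# Define a Kubernetes cluster template, then spin up a two-node cluster
$ magnum cluster-template-create --name k8s-template --coe kubernetes \
    --image fedora-atomic-latest --keypair mykey --external-network public
$ magnum cluster-create --name k8s-cluster \
    --cluster-template k8s-template --node-count 2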

In the current OpenStack release, there are no fewer than three solutions for running containers:

  • Directly on top of Nova

  • Using container orchestration in project Magnum

  • Using project Zun

In this tutorial, I’ll show you how to run containers in OpenStack using the Nova driver with Docker.

What is Docker?

Multiple solutions are available for running containers on cloud infrastructure. Currently, Docker is the most used solution for containers. It offers all that is needed to run containers in a corporate environment, and is backed by Docker Inc. for support.

Docker has many advantages. Its containers are portable as images and can be assembled from an application’s source code, and file-system-level changes can easily be managed. Docker can also collect the STDIN and STDOUT of processes running in a container, which allows for interactive management of containers.

The Nova driver embeds an HTTP client which talks with the Docker internal REST API through a UNIX socket. The HTTP API is used to control containers and fetch information about them.

The driver fetches images from OpenStack’s Glance service and loads them into the Docker file system. From Docker, container images may be placed in Glance to make them available to OpenStack.

Enabling Docker in OpenStack

Now that you have a general sense of how containers work in OpenStack, let’s talk about how you can enable containers using the Nova driver for Docker. The OpenStack Wiki has a detailed explanation of how to configure any OpenStack installation to enable Docker. You can also use your distribution’s deployment mechanism to deploy Docker.

When you do this, the Docker driver is added to the nova.conf file, and the Docker container format is added to glance.conf.
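Per the nova-docker project's documentation, those settings look roughly like this; paths and option names may differ between releases and distributions:

# /etc/nova/nova.conf -- point Nova at the Docker virt driver
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

# /etc/glance/glance-api.conf -- accept docker as a container format
[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,ova,docker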

Once it’s enabled, Docker images can be added to the Glance repository using the docker save CLI command. The commands below first pull a Docker image, then pipe the output of docker save into glance image-create, using the docker container format.

$ docker pull samalba/hipache
$ docker save samalba/hipache | glance image-create --is-public=True \
    --container-format=docker --disk-format=raw --name samalba/hipache

Booting from a Docker image

Finally, once Docker is enabled in Nova, you can boot an OpenStack instance from a Docker image. Just add the image to the Glance repository, and then you’ll be able to boot from it. This works like booting any other instance in a Nova environment.

$ nova boot --image "samalba/hipache" --flavor m1.tiny test

After booting, you’ll see the Docker instance in the OpenStack environment using either nova list or docker ps.

Conclusion

In this short tutorial series on OpenStack, we’ve covered how to install a distribution, get an instance up and running, and enable containers in just a few hours.

Read the other articles in the series:

How to Install OpenStack in Less Than an Hour

Get an OpenStack Instance Up and Running in 40 Minutes or Less

Interested in learning more OpenStack fundamentals?  Check out the self-paced, online Essentials of OpenStack Administration course from The Linux Foundation Training. The course is excellent preparation for the Certified OpenStack Administrator exam.


Node.js Emerging as the Universal Development Framework for a Diversity of Applications

Last year and at the beginning of this year, we asked you, Node.js users, to help us understand where, how and why you are using Node.js. We wanted to see what technologies you are using alongside Node.js, how you are learning Node.js, what Node.js versions you are using, and how these answers differ across the globe.

Thank you to all who participated.

1,405 people from around the world (85+ countries) completed the survey, which was available in English and Mandarin. 67% of respondents were developers, 28% held managerial titles, and 5% listed their position as “other.” Geographic representation of the survey covered: 35% United States and Canada, 41% EMEA, 19% APAC, and 6% Latin and South America.

There was a lot of incredible data collected revealing:

  • Users span a broad mix of development focus, ways of using Node.js, and deployment locations.
  • There is a large mix of tools and technologies used with Node.js.
  • Experience with Node.js is also varied — although many have been using Node.js less than 2 years.
  • Node.js is a key part of survey users’ toolkits, being used for at least half of their development time.

The report also painted a detailed picture of the types of technologies being used with Node.js, language preferences alongside Node.js, and preferred production and development environments for the technology.

“Given developers’ important role in influencing the direction and pace of technology adoption, surveys of large developer communities are always interesting,” said Rachel Stephens, RedMonk Analyst. “This is particularly true when the community surveyed is strategically important like Node.js.”

In September, we will be releasing the interactive infographic of the results, which will allow you to dive deeper into your areas of interest. For the time being, check out our blog on the report below and download the executive summary here.

The Benefits of Node.js Grow with Time No Matter the Environment

Node.js is emerging as a universal development framework for digital transformation with a broad diversity of applications.

With more than 8 million Node.js instances online, three in four users are planning to increase their use of Node.js in the next 12 months. Many are learning Node.js in a foreign language; China has the second-largest population of Node.js users after the United States.

Those who continue to use Node.js over time were more likely to note the increased business impact of the application platform. Key data includes:


Most Node.js users found that the application platform helped improve developer satisfaction and productivity, and benefited from cost savings and increased application performance.


The growth of Node.js within companies is a testament to the platform’s versatility. It is moving beyond being simply an application platform and is beginning to be used for rapid experimentation with corporate data, application modernization, and IoT solutions. It is often the primary focus for developers, with the majority spending their Node.js time on the back end, full stack, and front end, although use is also rising in Ops/DevOps and mobile.


Node.js Used Less Than 2 Years, but Growing Rapidly In Businesses

Experience with Node.js varied — although many have been using Node.js less than 2 years. Given the rapid pace of Node.js adoption, with a growth rate of about 100% year-over-year, this isn’t surprising.


Companies that were surveyed noted that they were planning to expand their use of Node.js for web applications, enterprise, IoT, embedded systems and big data analytics. Conversely, they are looking to decrease the use of Java, PHP and Ruby.


Large Mix of Technology and Tools Used Alongside Node.js for Digital Transformation

Modernizing systems and processes are a top priority across businesses and verticals. Node.js’ light footprint and componentized nature make it a perfect fit for microservices (both container and serverless based) for lean software development without the need to gut out legacy infrastructure.

The survey revealed that 47% of respondents are using Node.js for container and serverless-based architectures across development areas:

  • 50% of respondents are using containers for back-end development.
  • 52% of respondents are using containers for full-stack development.
  • 39% of respondents are using containers for front-end development.
  • 48% of respondents are using containers for another area of development with Node.js.

The use of Node.js expands well beyond containers and cloud-native apps to touch development with databases, front-end framework/libraries, load balancing, message systems and more.


And for developers of all focus areas, Node.js has versatile usage.


Amazon Leads Serverless Technology and Node.js

68% of survey respondents who are using Node.js for serverless are using Amazon Web Services for production. 47% of survey respondents using Node.js and serverless are using Amazon Web Services for development.

Developers who use Node.js and serverless used it across several development areas with the most popular being: back-end, full stack, front-end and DevOps.


The Big Data Benefit

Revenues for big data and business analytics are set to grow to more than $203 billion in 2020. Vendors in this market require distributed systems within their products and rely on Node.js for data analysis.

The survey revealed that big data/business analytics developers and managers are more likely to see major business impacts after introducing Node.js into their infrastructure, with key benefits being productivity, satisfaction, cost containment, and increased application performance.

Node.js Grows in the Enterprise

With the creation of the long-term support plan in 2015, there has been an increase of enterprise development work with Node.js. The Long-Term support plan provides a stable (LTS) release line for enterprises that prioritize security and stability, and a current release line for developers who are looking for the latest updates and experimental features. The survey revealed:

  • 39% of respondents with the developer title are using Node.js for enterprise.
  • 59% of respondents with a manager title are using Node.js for enterprise.
  • 69% of enterprise users plan to increase their use of Node.js over the next 12 months.
  • 47% of enterprise users have been using Node.js for 3+ years and 58% of developers using Node.js have 10+ years in total development experience.

The Long-Term Support versions of Node.js tend to be the most highly sought after for development.


*Node.js 4 and 6 are the LTS versions, best suited for enterprises and others that favor stability and security over new features.

If you want to continue to learn more about Node.js, sign up for our monthly community newsletter, which will continue to pepper you with data for months to come, and also provide you with information on cool new projects using Node.js. The Node.js Foundation will also hold its annual conference from October 4–6 in Vancouver, Canada. Come join us at Node.js Interactive.

*Mark Hinkle, the Node.js Foundation’s Executive Director, will talk about this data and provide an update on the Foundation during his keynote at NodeSummit.

This article originally appeared on HackerNoon.