
Usenet, Authentication, and Engineering (or: Early Design Decisions for Usenet)

A Twitter thread on trolls brought up the subject of trolls on Usenet. The reason they were so hard to deal with, even then, holds some lessons for today; besides, the history is interesting. (Aside: this is, I think, the first longish thing I’ve ever written about any of the early design decisions for Usenet. I should note that this is entirely my writing, and memory can play many tricks across nearly 40 years.)

A complete tutorial on Usenet would take far too long; let it suffice for now to say that in the beginning, it was a peer-to-peer network of multiuser time-sharing systems, primarily interconnected by dial-up 300 bps and 1200 bps modems. (Yes, I really meant THREE HUNDRED BITS PER SECOND. And some day, I’ll have the energy to describe our home-built autodialers—I think that the statute of limitations has expired…) Messages were distributed via a flooding algorithm. Because these time-sharing systems were relatively big and expensive and because there were essentially no consumer-oriented dial-up services then (even modems and dumb terminals were very expensive), if you were on Usenet it was via your school or employer. If there was abuse, pressure could be applied that way—but it wasn’t always easy to tell where a message had originated—and that’s where this blog post really begins: why didn’t Usenet authenticate requests?

Read more at Columbia CS

Supercomputing under a New Lens: A Sandia-Developed Benchmark Re-ranks Top Computers

A Sandia National Laboratories software program now installed as an additional test for the widely observed TOP500 supercomputer challenge has become increasingly prominent. The program’s full name — High Performance Conjugate Gradients, or HPCG — doesn’t come trippingly to the tongue, but word is seeping out that this relatively new benchmarking program is becoming as valuable as its venerable partner — the High Performance LINPACK program — which some say has become less than satisfactory in measuring many of today’s computational challenges.

“The LINPACK program used to represent a broad spectrum of the core computations that needed to be performed, but things have changed,” said Sandia researcher Mike Heroux, who created and developed the HPCG program. “The LINPACK program performs compute-rich algorithms on dense data structures to identify the theoretical maximum speed of a supercomputer. Today’s applications often use sparse data structures, and computations are leaner.”
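
To make that dense-versus-sparse distinction concrete, here is a minimal Python sketch (an illustration using NumPy and SciPy, not HPCG itself): a dense matrix-vector product does arithmetic on every entry, while a sparse one touches only the nonzeros, so irregular memory access rather than raw floating-point work becomes the bottleneck.

import numpy as np
from scipy.sparse import random as sparse_random

n = 2000
x = np.random.rand(n)

# Dense, LINPACK-style: n*n multiply-adds per product; compute-rich.
A_dense = np.random.rand(n, n)
y_dense = A_dense @ x

# Sparse, HPCG-style: ~0.1% of entries are nonzero, so the product is
# dominated by memory access patterns rather than floating-point throughput.
A_sparse = sparse_random(n, n, density=0.001, format="csr")
y_sparse = A_sparse @ x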

Read more at Sandia Labs

CNCF Sponsors New Free “Kubernetes Deployment and Security Patterns” eBook From The New Stack

CNCF is proud to sponsor a new FREE ebook from The New Stack titled Kubernetes Deployment and Security Patterns. Download the ebook today.

Moving beyond the shiny-new-technology stage, the report posits that Kubernetes is now in adolescence. That means all eyes are tracking its growing maturity, how well it works in production, and what else is needed for Kubernetes to ascend further within enterprises of any size and across all industries, in all corners of the world. CNCF is also partnering with The New Stack and Huawei on a webinar that will explore international container growth.

Register today for “Global Container Adoption: A Closer Look at the Container Ecosystem in China,” to be held at 10 a.m. PT on March 20, 2018. Join Huawei CTO Dr. Ying Xiong, CNCF VP of Marketing Dee Kumar, and The New Stack Editorial Director Libby Clark for the latest research, analysis, and perspectives on how the container ecosystem is evolving in China.

eBook Highlights New Global Survey Data

Developers and Ops teams transitioning from VMs to containers will appreciate the detailed explanation and analysis the report includes on container orchestration and security patterns. The book also outlines how companies are deploying and securing Kubernetes, sharing insights from the most experienced users and advocates of the technology. Other highlights include:

  • The results of recent surveys (one from the CNCF and the other from The New Stack) detailing how current Kubernetes operators are using the software.

  • A recommendation of deployment patterns designed to help cluster operators deploy Kubernetes to manage containerized workloads.

  • A comparison of varying levels of control, costs and features to expect from different deployment patterns such as self-hosted/custom, managed Kubernetes, CaaS and PaaS platforms.

  • Analysis of emerging scenarios utilizing Kubernetes such as machine learning, serverless, edge computing and streaming analytics.  

  • A detailed list of security considerations for a Kubernetes deployment, including threat models, along with some best practices for operators to follow.

Chapter 1 dives into the latest research on Kubernetes adoption. It analyzes New Stack data, our cloud native survey results (see the previous blog post for survey highlights), and new findings from the same CNCF survey recently completed in China to offer a broader, more global look at Kubernetes deployment patterns, adoption challenges, and trends. A close inspection of this data helps tell a more complete story about Kubernetes acceptance and dominance in the market.

In total, 764 respondents completed CNCF’s survey, with 187 responses from a questionnaire that was translated into Mandarin. Almost all (97 percent) respondents were using containers in some way, while 61 percent were using containers in production. Overall, 69 percent of respondents said they were using Kubernetes to manage containers. P.S. A more detailed analysis on China’s adoption of Kubernetes and key takeaways from the “Global Container Adoption: A Closer Look at the Container Ecosystem in China” webinar will be covered in a future CNCF blog.

By looking at many variables, such as company size, public, private, or multi-cloud environments, workload types, and cluster size, Chapter 1 offers in-depth analysis of the tools and infrastructure surrounding Kubernetes in areas like storage, networking, security, monitoring, and logging.

After carefully reviewing numerous data sets, TNS Writer Lawrence Hecht concludes: “At a high level, Kubernetes won the first battle of the container orchestration wars. Companies with competitive offerings, such as Docker and Mesosphere, now promote how their products interoperate with Kubernetes. The major cloud providers have followed suit, with Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform, Huawei Cloud and Microsoft Azure offering services to manage Kubernetes environments. Today, Kubernetes is the leading choice for managing containers at scale.”

Be sure to download the full book today and join us for the webinar.

Postage-Stamp Linux

We’ve come a long way from the early days of big iron, and few things demonstrate that better than Microchip’s new SAMA5D27. What’s a SAMA5D27, you ask? It’s a postage stamp that runs Linux. Well, not literally a postage stamp, but a fully realized microcontroller that measures about 1½ inches (40mm) on a side. It’s not much more expensive than a first-class stamp, either, at about $39 in small quantities.

For that, you get an Arm Cortex-A5 processor running at 500 MHz, a floating-point unit, 128 MB of DRAM, Ethernet with PHY, flash memory, camera and LCD interfaces, USB, CAN, a pile of everyday peripherals – and Linux. Yup, we’ve reduced the hulking mainframes of our parents’ age to the size of a postage stamp. If it were delivered by jetpack, we’d be in the future.

Read more at EE Journal

The RedMonk Programming Language Rankings: January 2018

Given that we’re into March, it seems like a reasonable time to publish our Q1 Programming Language Rankings.

The data source used for these queries is the GitHub Archive. We query languages by pull request in a manner similar to the one GitHub used to assemble the 2016 State of the Octoverse. Our query is designed to be as comparable as possible to the previous process.

  • Language is based on the base repository language. While this continues to have the caveats outlined below, it does have the benefit of cohesion with our previous methodology.
  • We exclude forked repos.
  • We use the aggregated history to determine ranking (though, given the table structure changes, this can no longer be accomplished via a single query).

The primary change is that the GitHub portion of the language ranking is now based on pull requests rather than repos. 
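
RedMonk runs its queries against the GitHub Archive dataset; as a rough, hypothetical illustration of the idea (not RedMonk’s actual query, which also excludes forks and aggregates history), this Python sketch counts pull requests by base-repository language in a single GH Archive hourly dump:

import gzip
import json
from collections import Counter

counts = Counter()
# GH Archive hourly dumps are gzipped JSON-lines files; the file name
# here is an example, and the JSON paths follow GitHub's event format.
with gzip.open("2018-01-01-12.json.gz", "rt") as f:
    for line in f:
        event = json.loads(line)
        if event.get("type") != "PullRequestEvent":
            continue
        pr = event.get("payload", {}).get("pull_request") or {}
        lang = pr.get("base", {}).get("repo", {}).get("language")
        if lang:
            counts[lang] += 1

print(counts.most_common(10))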

Read more at RedMonk

Dynamic Linux Routing with Quagga

So far in this series, we have learned the intricacies of IPv4 addressing in Linux LAN Routing for Beginners: Part 1 and how to create static routes manually in Linux LAN Routing for Beginners: Part 2.

Now we’re going to use Quagga to manage dynamic routing for us: just set it and forget it. Quagga is a suite of routing protocols: OSPFv2, OSPFv3, RIP v1 and v2, RIPng, and BGP-4, all managed by the zebra daemon.

OSPF stands for Open Shortest Path First. OSPF is an interior gateway protocol (IGP); it is for LANs and LANs connected over the Internet. Every OSPF router in your network contains the topology for the whole network and calculates the best paths through it. OSPF automatically multicasts any network changes that it detects. You can divide up your network into areas to keep routing tables manageable; the routers in each area only need to know the next hop out of their areas rather than the entire routing table for your network.

RIP, the Routing Information Protocol, is an older protocol. RIP routers periodically multicast their entire routing tables to the network, rather than just the changes as OSPF does. RIP measures routes by hops, and sees any destination over 15 hops as unreachable. RIP is simple to set up, but OSPF is a better choice for speed, efficiency, and scalability.

BGP-4 is the Border Gateway Protocol version 4. This is an exterior gateway protocol (EGP) for routing Internet traffic. You won’t use BGP unless you are an Internet service provider.

Preparing for OSPF

In our little KVM test lab, there are two virtual machines representing two different networks, and one VM acting as the router. Create two networks: net1 is 192.168.110.0/24 and net2 is 192.168.120.0/24. It’s all right to enable DHCP, because you are going to go into your three virtual machines and give each of them static addresses anyway (see the sketch after the list below). Host 1 is on net1, Host 2 is on net2, and Router is on both networks. Give Host 1 a gateway of 192.168.110.126, and Host 2 a gateway of 192.168.120.136.

  • Host 1: 192.168.110.125
  • Host 2: 192.168.120.135
  • Router: 192.168.110.126 and 192.168.120.136
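
For example, on Host 1 you could assign the address and gateway with iproute2, as a quick non-persistent sketch (the interface name eth0 is an assumption; check yours with ip link):

# ip addr add 192.168.110.125/24 dev eth0
# ip route add default via 192.168.110.126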

Install Quagga on your router; on most Linuxes it is the quagga package. On Debian there is a separate documentation package, quagga-doc. Uncomment this line in /etc/sysctl.conf to enable packet forwarding:

net.ipv4.ip_forward=1

Then run the sysctl -p command to load the change.

Configuring Quagga

Look in your Quagga package for example configuration files, such as /usr/share/doc/quagga/examples/ospfd.conf.sample. Configuration files should be in /etc/quagga, unless your particular Linux flavor does something creative with them. Most Linuxes ship with just two files in this directory, vtysh.conf and zebra.conf. These provide minimal defaults to enable the daemons to run. zebra always has to run first, and again, unless your distro has done something strange, it should start automatically when you start ospfd. Debian/Ubuntu is a special case, which we will get to in a moment.

Each router daemon gets its own configuration file, so we must create /etc/quagga/ospfd.conf, and populate it with these lines:

!/etc/quagga/ospfd.conf
hostname router1
log file /var/log/quagga/ospfd.log
router ospf
 ospf router-id 192.168.110.15
 network 192.168.110.0/24 area 0.0.0.0
 network 192.168.120.0/24 area 0.0.0.0
access-list localhost permit 127.0.0.1/32
access-list localhost deny any
line vty
  access-class localhost

You may use either the exclamation point or hash marks to comment out lines. Let’s take a quick walk through these options.

  • hostname is whatever you want. This isn’t a normal Linux hostname, but the name you see when you log in with vtysh or telnet.
  • log file is whatever file you want to use for the logs.
  • router specifies the routing protocol.
  • ospf router-id is any 32-bit number. An IP address of the router is good enough.
  • network defines the networks your router advertises.
  • The access-list entries restrict vtysh, the Quagga command shell, to the local machine, and deny remote administration.

Debian/Ubuntu

Debian, Ubuntu, and possibly other Debian derivatives require one more step before you can launch the daemon. Edit /etc/quagga/daemons so that all lines say no except zebra=yes and ospfd=yes.
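
On a typical Debian system the finished /etc/quagga/daemons looks something like this (the exact list of daemons varies by Quagga version):

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no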

Then, to launch ospfd on Debian, start the quagga service:

# systemctl start quagga

On most other Linuxes, including Fedora and openSUSE, start ospfd:

# systemctl start ospfd

Now Host 1 and Host 2 should ping each other, and the router.
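
For a quick check, ping across the router from Host 1, then use vtysh on the router to see what zebra and ospfd know (with only a single router there are no OSPF neighbors yet, but your interfaces and routes should appear):

# ping -c 3 192.168.120.135
# vtysh -c 'show ip ospf interface'
# vtysh -c 'show ip route'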

That was a lot of words to describe a fairly simple setup. In real life the router will connect to two switches and provide a gateway for all the computers attached to those switches. You could add more network interfaces to your router to provide routing for more networks, or connect directly to another router, or to a LAN backbone that connects to other routers.

You probably don’t want to hassle with configuring network interfaces manually. The easy way is to advertise your router with your DHCP server. If you use Dnsmasq then you get DHCP and DNS all in one.
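
With Dnsmasq, for example, a couple of lines in /etc/dnsmasq.conf hand out leases on net1 and advertise the router as the default gateway (the range here is arbitrary; the gateway matches our test lab):

dhcp-range=192.168.110.50,192.168.110.100,12h
dhcp-option=option:router,192.168.110.126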

There are many more configuration options, such as encrypted password protection. See the official documentation at Quagga Routing Suite.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Raising More than Capital: Successful Women in Technology

One of my employees chooses a word at the beginning of each year to guide her personal and professional development efforts. Last year the word she selected was “Rise.” She told me it inspired her to elevate not only her skills, but the quality of her relationships, her attitude toward life and her self-confidence. As a female entrepreneur and the CEO of a growing global software company, our conversation led me to reflect on how successful women in technology rise above our challenges.

Raising Awareness

Research highlights the plethora of internal and external hurdles female technology entrepreneurs face, including limited access to funding, lack of advisors and mentors, sexism and harassment, social expectations, balancing personal and professional responsibilities, downplaying our worth and, of course, fear of failure. With such a gender gap to overcome, it’s no surprise that in 2017 only 17% of startups had a female founder, a number that has failed to increase in the last five years.

Read more at The Linux Foundation

Optimizing Data Queries for Time Series Applications

Now that we understand what time series data is and why we want to store it in a time series database, we’ve reached a new challenge. As with any application, we want to ensure our database queries are smart and performant, so let’s talk about how we can avoid some common pitfalls.

Indexing

Indexing, the oft-recommended and rarely understood solution to all attempts at optimization, is applicable to most databases. Whether the time series database you’re using is built on Cassandra or MySQL or its own unique architecture, indexing affects your queries. Essentially, an index is a data structure that stores the values from a specific column, meaning that when we search by an indexed field, we have a handy shortcut to the values. When we search by unindexed fields, we have to discover the full path to the value, no shortcuts or magic tricks. Searching unindexed fields is like having to watch Frodo walk through Middle Earth unedited — it takes a long time.
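
As a toy sketch in plain Python (no particular database engine implied): an index is a precomputed map from column value to row location, so a lookup that would otherwise scan every row becomes a direct jump.

# 100,000 fake rows; "host" is the column we will index.
rows = [{"host": f"server-{i % 50}", "cpu": i % 100} for i in range(100_000)]

# Unindexed: examine every row to find matches, a full scan.
matches = [r for r in rows if r["host"] == "server-7"]

# Indexed: build the value-to-position map once; afterward each lookup
# jumps straight to the matching rows instead of scanning.
index = {}
for pos, row in enumerate(rows):
    index.setdefault(row["host"], []).append(pos)
matches = [rows[p] for p in index["server-7"]]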

Read more at The New Stack

Infrastructure 2.0: Whatever We’re Calling it Now, It’s Here

The cloud has taught us about economies of scale, and now containers are threatening to redefine them once again. It’s the collection of devices known as the network — or the data path — that supports the scale of applications and services. In that data path lies a number of network and application services that provide for the scale, security, and speed of the applications they deliver. Each one needs to be provisioned, configured, and managed. Every. Single. One.

That’s where Infrastructure 2.0 — DevNetOps, NetOps 2.0, Super-NetOps — comes in: its purpose is to embrace DevOps principles and apply their methodologies to the network.

This notion comprises three core concepts: programmable (API-enabled) infrastructure, infrastructure as code, and the inclusion of integration.

Read more at SDxCentral

Improving Teamwork by Engineering Trust

Even in highly mature open organizations, where we’re doing our best to be collaborative, inclusive, and transparent, we can fail to reach alignment or common understanding. Disagreements and miscommunication between leaders and their teams, between members of the same team, between different teams in a department, or between colleagues in different departments remain common even in the most high-performing organizations. Responses to their intensity and impact run the gamut, from “Why did someone take our whiteboard?” to “Why are we doing this big project?”

Vagueness and confusion are often at the heart of these moments. And intentional relationship design is one tool to help us address them.

Read more at OpenSource.com