
Securing the Cloud With SDN

It’s becoming clear that rising network security threats will drive increasing integration between network virtualization (NV) and security, as we’ve long predicted here. This means that software-defined networking (SDN) will become a key technology for securing the cloud.

SDN can significantly improve cloud network security using virtualization techniques. The opportunities for improvement come from:

  • Centralizing network security service policy and configuration management
  • Automating network security remediation
  • Blocking malicious traffic from endpoints
  • Simultaneously allowing for expected normal traffic
  • Network policy auditing and detection and resolution of conflicts

Read more at SDx Central

How to Use Postfix Postscreen to Test Email for Spam: Part 2

In the previous article, I looked at some pre-greeting tests that Postfix performs to help identify spam; now I’ll move on through the chain and explore the available post-salutation tests. Remember that limiting how many machines make it to this stage significantly reduces the load on a mail server’s resources. Here, our trusty Postfix performs a series of “deep protocol” tests that are disabled by default for a variety of reasons. One such reason is that these tests are more brutal than those you might be used to seeing with RBLs. They also come with some limitations, which should first be understood.

One key limitation is that a sender machine has to connect to your mail server all over again after passing the “deep protocol” tests before it can send its email. Expiration times can be upped to allow the machine to return again much later, but obviously this isn’t ideal. Bear in mind, however, the popularity of “greylisting,” which defers email deliveries to detect if a sender is in a frenzied rush and willing to return again in a few minutes or not. The deferral after the “deep protocol tests” is far from alien to mail servers and works along these lines. Remember that once an IP address has been whitelisted, when the sender machine returns, they will be let straight through to an SMTP process. This unfettered access will be allowed for a relatively lengthy period of time (we’ll look at that shortly), so this deferral only affects the initial connection.

Another limitation is a lack of compatibility, which sadly means that, for the time being, you should disable the “deep protocol” tests if you need the AUTH, XCLIENT, and XFORWARD commands available on TCP port 25. Additionally, you should not enable RBLs that don’t play nicely with senders on dial-up or residential networks, rejecting those IP address ranges outright.

Pipelining

Let’s look at three post-greeting tests now, starting with the “pipelining” test.

If you’re familiar with networking, you will know that half duplex means traffic flows in only one direction at a time, whereas full duplex means traffic flows in both directions simultaneously. Clearly, the difference in effective bandwidth between the two is significant. That difference is compounded if you factor in the delays for response/receive times along with the data throughput.

The term “pipelining” also relates to concurrency of sorts. A well-used example is where a manufacturing plant’s assembly line allows greater efficiency thanks to the output of certain processes being the input of another process, which might be next in the line on the conveyor belt. Apparently, even if there are some dependencies — and therefore delays — time-savings can usually be achieved.
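The time-savings are easy to sketch with a toy calculation; the round-trip time and command count below are illustrative numbers, not measurements:

```python
# Toy model: total latency of sending N commands over a link with a
# fixed round-trip time, one-at-a-time versus pipelined in a batch.

def sequential_latency_ms(num_commands, rtt_ms):
    """Each command waits for its response before the next is sent."""
    return num_commands * rtt_ms

def pipelined_latency_ms(num_commands, rtt_ms):
    """Commands are sent back-to-back; one round trip covers the batch."""
    return rtt_ms

rtt = 100     # milliseconds per round trip (illustrative)
commands = 3  # e.g. MAIL, RCPT, DATA

print(sequential_latency_ms(commands, rtt))  # 300
print(pipelined_latency_ms(commands, rtt))   # 100
```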

One of the challenges that Postfix faces is that SMTP is a half-duplex protocol by design. Although Postfix itself advertises support for pipelining (where senders don’t necessarily have to wait for a response before continuing with a conversation), the excellent Postscreen does not. The SMTP commands covered by this functionality include RSET, MAIL, RCPT, and an encoded message. Pipelining was introduced by RFC 1854 in 1995 and then refreshed by RFC 2197 in 1997. Although it’s an old design, what’s clever about adding this capability to mail servers is that the server is allowed to defer responses as long as the sender is still submitting new requests. The documentation provided by the bulletproof qmail server explains it this way:

“The server must never wait for client input unless it has first “flushed” all pending responses; and it must send responses in the correct order. It is the client’s responsibility to avoid deadlock.”

Despite the benefits it brings, as I said, pipelining is disabled by default for Postscreen; thus, senders are not allowed to send multiple commands. However, if you switch on the postscreen_pipelining_enable option, then Postscreen will vigilantly watch for any zombie machines that send multiple commands.
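As a sketch, enabling the test in main.cf might look like this (postscreen_pipelining_action is shown at its documented default; check the postconf man page for your Postfix version):

```
# /etc/postfix/main.cf
# Enable Postscreen's "pipelining" deep-protocol test.
postscreen_pipelining_enable = yes
# What to do when a client fails the test (enforce is the documented default).
postscreen_pipelining_action = enforce
```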

This option can add another test and also improve your logging by including the fact that pipelining was attempted. The manual shows the logging syntax that would be written to your log files as so:

COMMAND PIPELINING from [address]:port after command: text

Such a log entry tells us that the sender machine sent multiple commands, rather than just one, without waiting for the MTA to respond.

Invalid SMTP

Some nefarious spambots will attack your mail server via an open proxy. A telltale sign of a proxy being used is that non-SMTP commands bleed into the conversation between the mail server and the sender, such as the CONNECT command. We can explicitly log and reject these invalid commands using the postscreen_forbidden_commands option. Apparently, this function will additionally look out for commands that look like a message’s header, sent in the wrong part of the conversation. This error condition can be common if the sending machine keeps on transmitting data having ignored Postscreen’s rejections. The Postfix docs offer this as the logging syntax, which you would expect to discover in your logs after such an event has occurred:

NON-SMTP COMMAND from [address]:port after command: text
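To switch this behavior on, a minimal main.cf sketch might read as follows (the forbidden-command list shown is the documented default, which follows smtpd_forbidden_commands):

```
# /etc/postfix/main.cf
# Log and reject clients that send non-SMTP commands.
postscreen_non_smtp_command_enable = yes
# Which commands count as forbidden (default: CONNECT, GET, POST).
postscreen_forbidden_commands = $smtpd_forbidden_commands
```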

You Say LF, I Say CR

Another post-SMTP-greeting test is referred to as the “bare newline” test. The structure of SMTP commands is certainly simple, and usually very short; however, it must be adhered to in order to make sense. A long-standing pain for sys admins involved the differences between carriage returns and line feeds, known as <CR> and <LF>, respectively, in SMTP. These otherwise invisible characters (which are supposed to be seen by software but not by humans) have caused great consternation in the past, thanks to varying support from different operating systems. For example, Unix-type machines generally use line feeds, classic Macs use carriage returns, and, just to keep everyone on their toes, Windows uses <CR><LF>, with the carriage return always coming first.
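A small, illustrative Python sketch shows the three conventions as raw bytes, along with the kind of check a “bare newline” test effectively performs:

```python
# The three historical line-ending conventions, as raw bytes.
unix_line = b"EHLO example.com\n"         # Unix: bare line feed <LF>
classic_mac_line = b"EHLO example.com\r"  # classic Mac OS: bare <CR>
smtp_line = b"EHLO example.com\r\n"       # Windows and SMTP: <CR><LF>

def has_bare_newline(data):
    """True if the data ends in <LF> without the preceding <CR>."""
    return data.endswith(b"\n") and not data.endswith(b"\r\n")

print(has_bare_newline(unix_line))  # True: this line would fail the test
print(has_bare_newline(smtp_line))  # False: protocol-compliant
```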

For one reason or another, the SMTP protocol terminates its lines with <CR><LF>, Windows style, and if a spambot deviates from that rule, then it fails this test. The test is disabled by default and needs to be enabled before you can use it. Here’s how such an occurrence appears in Postfix’s logs:

BARE NEWLINE from [address]:port after command

If you want to catch sender machines that aren’t playing nicely, then you simply add this line to your config file that enables it:

postscreen_bare_newline_enable = yes

Failure to Comply

Let’s look at what happens when a sender machine fails the post-greeting tests. Similar to pre-greeting tests, we can see a familiar set of actions in Table 1.

  • ignore: Ignoring the failure of this particular test is the default for the post-greeting “bare newline” test.

  • enforce: By default, pipelining enforces its actions if a sender machine fails this test. It will then reject connections with a 550 SMTP response. This test is run all over again if the machine returns later on.

  • drop: If the mighty Postfix picks up any non-SMTP commands, then a 521 SMTP error is promptly sent to the connecting machine. This test is repeated upon each connection. You can adjust settings away from the defaults (CONNECT, GET, and POST) by altering the smtpd_forbidden_commands option.

Table 1: What actions Postfix undertakes if post-greeting failures occur.
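The three actions in Table 1 map onto main.cf options. A sketch showing the documented defaults:

```
# /etc/postfix/main.cf
postscreen_bare_newline_action = ignore     # log the failure, allow the client
postscreen_pipelining_action = enforce      # reject with a 550, retest on return
postscreen_non_smtp_command_action = drop   # disconnect with a 521
```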

Other SMTP Scenarios

Clearly, a number of other errors are generated by MTAs, which occur due to varying scenarios. Table 2 shows the log entries that you might expect to see when these errors are generated from differing scenarios.

  • HANGUP after time from [address]:port in test name: This will show up in your logs if the connecting machine dropped its connection for some reason. The “time” tells you how many seconds after inception it occurred. You might be surprised to hear that no penalties apply if a machine is caught hanging up; Postfix continues to allow that machine to progress with other tests afterwards.

  • COMMAND TIME LIMIT from [address]:port after command: You can specify how long each command should be allowed to run, before the connection is dropped, using the postscreen_command_time_limit option.

  • COMMAND COUNT LIMIT from [address]:port after command: With the postscreen_command_count_limit option, you can avoid a barrage of SMTP commands by specifying how many are allowed within a particular session.

  • COMMAND LENGTH LIMIT from [address]:port after command: Set a strict per-command length limit using the line_length_limit option.

  • NOQUEUE: reject: CONNECT from [address]:port: too many connections: If an SMTP client requests too many resources from our server in too short a period of time, then we can reject the connection with an SMTP 421 error. This error relates to too many messages or connections (concurrency).

  • NOQUEUE: reject: CONNECT from [address]:port: all server ports busy: This is very similar to the above error, also dealing with concurrency issues.

Table 2: Other Postfix SMTP errors and how they are logged to our log files.
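The limits behind the log entries in Table 2 are all tunable in main.cf. The values below are the usual documented defaults, shown as a sketch (postscreen_command_time_limit is reduced automatically under overload):

```
# /etc/postfix/main.cf
postscreen_command_time_limit = 300s   # time allowed per command
postscreen_command_count_limit = 20    # commands allowed per session
line_length_limit = 2048               # maximum length of a single line, in bytes
```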

What Success Looks Like

Rather than perpetually focusing on the negative, let’s see what logs look like when an inbound email passes all of the tests you throw at it. This doesn’t include machines that have been specifically whitelisted, but rather machines that have successfully passed your SMTP tests.

PASS NEW [address]:port

When such a happy event occurs, our trusty MTA writes an entry inside its temporary whitelist, and our mail server remains accessible to that IP address according to the “time to live” (TTL) options that I’ll look at now. Some relate to the actions I just examined, as you will see.

The postscreen_bare_newline_ttl usually defaults to 30 days, and Postscreen will remember the results of such a test for that period. This can be adjusted to your preference with relative impunity.

One of the key concepts behind RBLs is that the information they contain is current and therefore useful. You may trust some more than others for validity, however. You can change the default setting — one hour — to some other time measurement, such as a number of seconds, minutes, days, or weeks with postscreen_dnsbl_max_ttl and postscreen_dnsbl_ttl. In case it causes confusion, the latter option was only available in versions 2.8 to 3.0 and is replaced by the former in version 3.1.

There may also be circumstances when a response from an RBL offers a very high or low TTL. We can affect the minimum TTL with postscreen_dnsbl_min_ttl, which usually defaults to 60 seconds to keep the number of requests down. Note that if a sizeable TTL is sent back, then it will override the postscreen_dnsbl_max_ttl option, which I just covered.

To keep our Postfix server’s load down, we can cache the results of successfully passing our pre-greeting tests. Usually that is set to a day and can be changed with the postscreen_greet_ttl option. Such a change could be very useful, especially if there aren’t many offenders changing their behavior too frequently.

If you want to change the length of time that we remember that a machine wasn’t found bombarding our mail server with non-SMTP commands, then you can alter the postscreen_non_smtp_command_ttl option, which defaults to 30 days. If you see this error infrequently, increasing this value prevents unnecessary lookups.

Finally, if you’re not expecting your initial findings to change with respect to your pipelining tests, then you can increase the default of 30 days with postscreen_pipelining_ttl. Potentially, this can also lessen unnecessary lookups.
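Gathered together, the TTL options just discussed might look like this in main.cf, shown here at their usual defaults:

```
# /etc/postfix/main.cf
postscreen_bare_newline_ttl = 30d       # remember "bare newline" results
postscreen_dnsbl_min_ttl = 60s          # floor on cached DNSBL results
postscreen_dnsbl_max_ttl = 1h           # ceiling (replaces postscreen_dnsbl_ttl in 3.1)
postscreen_greet_ttl = 1d               # cache pre-greeting test passes
postscreen_non_smtp_command_ttl = 30d   # remember non-SMTP-command results
postscreen_pipelining_ttl = 30d         # remember pipelining results
```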

Danger, Will Robinson

The docs make an important point about the use of Postscreen. This point relates to mail clients, and by that I mean software such as Thunderbird or Evolution, also known as MUAs (Mail User Agents), which let you pick up inbound email and send outbound email. With Postscreen in use, however, your mail clients need to avoid TCP port 25, because they will definitely encounter issues. Essentially, that SMTP port is for inbound email only when Postscreen is running.

The outside world uses your MX (mail exchanger) records, declared in your DNS, to find your mail server in the first place, and then starts its conversation on TCP port 25. For outbound email, however, your users’ email clients should instead use the Submission service (which listens on TCP port 587) to authenticate first (usually) and then send email through. You may also have seen TCP port 465 in use (known as the SMTPS port, allowing secure, SSL-based SMTP transactions), which was used more in the past. TCP port 587 is known as SMTP-MSA and exists specifically to allow end users to send outbound email. There are a number of creative workarounds to this scenario; however, setting up your daemon ports differently is a discussion for another day.
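In practice, this split lives in master.cf: Postscreen answers on port 25 and hands clean clients to smtpd, while the Submission service listens on port 587 for your users. The sketch below makes assumptions about your TLS and SASL setup, so treat the -o overrides as illustrative rather than definitive:

```
# /etc/postfix/master.cf
# Port 25: Postscreen vets inbound connections before smtpd sees them.
smtp       inet  n  -  n  -  1  postscreen
smtpd      pass  -  -  n  -  -  smtpd
# Port 587: the Submission service, for authenticated end users (MUAs).
submission inet  n  -  n  -  -  smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
```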

EOF

We covered a good deal of ground while looking at the venerable Postscreen. Its raison d’être is to reduce volumes of spam at every level of the SMTP transaction and dutifully remember senders that have successfully passed its tricky tests en route, so that it can forward their emails more quickly next time.

Effective, efficient, and robust — there’s little doubt that even for small volumes of email I would tune Postscreen to suit my users’ email needs. Although I couldn’t fully cover this massive subject area here, I hope that you are now equipped with a practical overview of Postscreen, so you can also take advantage of its many features and choose ham over spam.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

 

Distributed Tracing for Polyglot Microservices

Distributed tracing is a critical tool for debugging and understanding microservices. But setting up tracing libraries across all services can be costly—especially in systems composed of services written in disparate languages and frameworks. In this post, we’ll show you how you can easily add distributed tracing to your polyglot system by combining linkerd, our open source RPC proxy, with Zipkin, a popular open source distributed tracing framework.

Why distributed tracing? As companies move from monolithic to multi-service architectures, existing techniques for debugging and profiling begin to break down. Previously, troubleshooting could be accomplished by isolating a single instance of the monolith and reproducing the problem. With microservices, this approach is no longer feasible, because no single service provides a complete picture of the performance or correctness of the application as a whole. We need new tools to help us manage the real complexity of operating distributed systems at scale.

Read more at Buoyant.io

How the Internet Works: Submarine Fibre, Brains in Jars, and Coaxial Cables

Ah, there you are. That didn’t take too long, surely? Just a click or a tap and, if you’ve some 21st century connectivity, you landed on this page in a trice.

But how does it work? Have you ever thought about how that cat picture actually gets from a server in Oregon to your PC in London? We’re not simply talking about the wonders of TCP/IP, or pervasive Wi-Fi hotspots, though those are vitally important as well. No, we’re talking about the big infrastructure: the huge submarine cables, the vast landing sites and data centres with their massively redundant power systems, and the elephantine, labyrinthine last-mile networks that actually hook billions of us to the Internet.

And perhaps even more importantly, as our reliance on omnipresent connectivity continues to blossom, the number of our connected devices swells, and our thirst for bandwidth knows no bounds, how do we keep the Internet running? How do Verizon or Virgin reliably get 100 million bytes of data to your house every second, all day every day?

Well, we’re going to tell you over the next 7,000 words…

Read more at Ars Technica

Year 2038 Fixes Still Being Worked On for The Linux Kernel

The Linux kernel has been working on many Year 2038 fixes for a while now but the work is not over. Another pull request was sent in for the Linux 4.7 kernel in trying to prepare the VFS layer with Y2038 fixes. Arnd Bergmann has submitted a batch of Y2038 fixes for the Linux 4.7 merge window for the VFS code. He commented, “This is a preparation series for changing the VFS infrastructure to use time64_t in inode timestamps…

Read more at Phoronix

Five-Minute Screencasts to Learn Kubernetes

Containers are disrupting the way we develop applications and the way we manage them in our datacenter. Docker provides a great user experience for developers, who can develop, package, ship, and run applications easily. However, creating a truly distributed application made of several dozen, hundreds, or more containers is a challenge. While Docker Swarm allows you to create a cluster of Docker hosts and start distributed applications, Kubernetes is an alternative solution worth a look.

Learning all these new technologies takes a lot of time. To try to ease your pain, I created a set of short screencasts to introduce Kubernetes. Watching a five-minute video should save you a lot of time.

Read more at DZone

Creating a Neural Network in Python

Neural Networks are very complicated programs accessible only to elite academics and geniuses, not something which an average developer would be able to work with, and definitely not anything I could hope to comprehend. Right?

Well, no. After an enlightening talk by Louis Monier and Greg Renard at Holberton, I realized that neural networks were simple enough for just about any developer to understand and implement. Of course, the most complicated networks are huge projects, and their designs are elegant and intricate, but the core concepts underlying them are more or less straightforward. Writing any network from scratch would be challenging, but fortunately there are some excellent libraries that can handle the grunt work for you.

[Image: diagram of a single neuron]

A neuron in this context is quite simple. It takes several inputs, and if their sum passes a threshold, it fires. Each input is multiplied by a weight. The learning process is simply the process of adjusting the weights to produce a desired output. The networks we’re interested in right now are called “feed forward” networks, which means the neurons are arranged in layers, with input coming from the previous layer and output going to the next.
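That description translates almost directly into code. Here is a minimal, illustrative perceptron; the weights and threshold are hand-picked for demonstration, not learned:

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs passes the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else -1

# With these weights, the neuron computes AND on inputs encoded as -1/1:
# only (1, 1) gives a weighted sum (2.0) above the threshold.
weights = [1.0, 1.0]
threshold = 1.0

print(perceptron([1, 1], weights, threshold))   # 1: fires
print(perceptron([1, -1], weights, threshold))  # -1: does not fire
```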

[Image: a feed-forward network arranged in layers]

There are other kinds of networks, like recurrent neural networks, which are organized differently, but that’s a subject for another day.

The type of neuron described above, called a perceptron, was the original model for artificial neurons but is rarely used now. The problem with perceptrons is that a small change in the input can lead to a dramatic change in the output, due to their stepwise activation function. A negligible reduction in its input value can cause that value to no longer exceed the threshold and prevent that neuron from firing, leading to even bigger changes down the line. Fortunately this is a relatively easy problem to resolve with a smooth activation function, which most modern networks use.

[Images: the perceptron’s stepwise activation function vs. the smooth sigmoid activation function]
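A tiny, illustrative comparison makes the point: near the threshold, a stepwise activation flips completely while a smooth one barely moves.

```python
import math

def step(x):
    """Perceptron-style activation: an all-or-nothing jump at zero."""
    return 1 if x > 0 else 0

def sigmoid(x):
    """A smooth activation: small input changes give small output changes."""
    return 1.0 / (1.0 + math.exp(-x))

# A negligible change in input flips the step function from 1 to 0...
print(step(0.01))    # 1
print(step(-0.01))   # 0

# ...while the sigmoid's output moves only slightly around 0.5.
print(sigmoid(0.01))   # just above 0.5
print(sigmoid(-0.01))  # just below 0.5
```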

However, our network will be simple enough that perceptrons will do. We’re going to create a network that can process an AND operation. That means it requires two input neurons and one output neuron, with relatively few neurons in the middle “hidden” layer. The image below shows its design, which should be familiar.

[Image: design of the AND network, with two input neurons, a hidden layer, and one output neuron]

Monier and Renard used convnet.js to create browser demos for their talk. Convnet.js creates neural networks directly in your browser, allowing you to easily run and manipulate them on almost any platform. Of course, there are drawbacks to using a JavaScript solution, not the least of which is speed. So, for this article, we’ll use FANN (Fast Artificial Neural Networks). There is a Python module, pyfann, which contains bindings for FANN. Go ahead and install that now.

Import FANN like so:

>>> from pyfann import libfann

And we can get started! The first thing we need to do is create an empty network.

>>> neural_net = libfann.neural_net()

Now, neural_net has no neurons in it, so let’s go ahead and add some. The function we’re going to use is libfann.create_standard_array(). create_standard_array() creates a network where every neuron is connected to all the neurons in its neighboring layers, which we call a “fully connected” network. As a parameter, create_standard_array() takes an array of the number of neurons in each layer. In our case, this array will be [2, 4, 1].

>>> neural_net.create_standard_array((2, 4, 1))

Then, we set the learning rate. This reflects how much the network will change its weights on a single iteration. We’ll set a pretty high learning rate, 0.7, since we’re giving it a simple problem.

>>> neural_net.set_learning_rate(0.7)

Then set the activation function, as discussed above. We’re using SIGMOID_SYMMETRIC_STEPWISE, which is a stepwise approximation of the tanh function. It’s less precise and faster than the tanh function, which is okay for this problem.

>>> neural_net.set_activation_function_output(libfann.SIGMOID_SYMMETRIC_STEPWISE)

Finally, run the training algorithm and save the network to a file. The training command takes four arguments: the file containing the data that will be trained on, the maximum number of times the training algorithm will run, the number of times the network should train before it reports its status, and the desired error rate.

>>> neural_net.train_on_file("and.data", 10000, 1000, 0.00001)
>>> neural_net.save("and.net")

The file “and.data” should look as follows:

4 2 1
-1 -1
-1
-1 1
-1
1 -1
-1
1 1
1

The first line contains three values: the number of examples contained in the file, the number of inputs, and the number of outputs. The rest of the file consists of examples, each line alternating between input and output.

Once your network has trained successfully, you’re going to want to try it out, right? Well, first, let’s load it from the file we stored it to.

>>> neural_net = libfann.neural_net()
>>> neural_net.create_from_file("and.net")

Next, we can simply run it like so:

>>> print neural_net.run([1, -1])

which should output [-1.0] or a similar value, depending on what happened during training.

Congratulations! You just taught a computer to do basic logic!

How to Write a Job Posting for an Open Source Office Lead

By Benjamin VanEvery

I ran into several folks this past week at OSCON who expressed a keen interest in creating a dedicated role for Open Source at their respective companies. So what was stopping them? One simple thing: every single one of them was struggling to define exactly what that role means. Instinctively, we all have a feeling of what an employee dedicated to Open Source might do, but when it comes time to write it down or try to convince payroll, it can be challenging. Below, I have included a starting point for a job description of what a dedicated Open Source manager might do. If you are in this boat, I’d highly recommend that you also check out the slides from our talk at OSCON this year, as well as the many blog posts we’ve published about why our respective companies run Open Source.

Also, on top of reusing what is below, we are collecting open source office job descriptions on GitHub from the industry that you can learn from.

The Job Posting Template

Side note: if you use this template, try running it through analysis on https://textio.com/talent/ first.

The Mission

Our open source effort is currently led by a multi-functional group of engineers, and we are looking for a motivated, visionary individual to lead this effort and take Company Open Source to the next level.

In this role, you’ll work with our Engineering (Dev & Ops), Legal, Security, Business Ops, and Public Relations teams to help define what Open Source at Company means and build our open source community. Your day to day responsibilities will alternate between programming and several forms of program management. This is an exciting opportunity to work with all levels of the organization and leave a lasting impact here and on the engineering community at large.

A good match might have…

  • 8 years experience coding in or leading software engineering environments
  • Experience working on at least one successful and widely recognized open source project
  • Excellent communication and organizational skills
  • Familiarity with GitHub and open source CI tooling (Travis CI, Coveralls, etc)
  • Understanding of open source licenses
  • Experience and familiarity with multiple programming languages
  • Real passion for quality and continuous improvement

Some things you might find yourself doing

  • You will lead and streamline all aspects of the outgoing open source process. This encompasses everything from people processes to tooling automation.
  • You will own and handle our open source presence and reputation on GitHub and beyond
  • You will steer involvement and recognition of the open source program internally
  • You will work alongside product and business leadership to integrate Open Source goals with company goals. Overall, working to build Open Source mentality into our DNA.
  • You will build awareness of Company Open Source externally and increase overall involvement in the open source community.
  • You will establish Company as an actively contributing member of industry-leading Open Source initiatives. This involves taking active parts in TODO Group initiatives.
  • You will run our process for evaluating incoming open source code for use in our product.

This article originally appeared at TODO 

Netronome Integrates P4 and C Programming on Production Server NICs

Netronome introduced a P4 and C compliant Integrated Development Environment (IDE) for dynamically programming new networking capabilities on its Agilio CX and LX family of intelligent server adapters (ISAs).

The news is significant because bringing SDN capabilities into a server NIC could help identify and resolve tenant application performance bottlenecks rapidly, enabling cloud service providers to maintain high levels of user experience.

For telco NFV deployments, Netronome said its solution enables a significantly higher degree of dynamic data center traffic observability, helping telco operators to pinpoint issues related to call drops or poor call and video quality in 4G and 5G networks. AT&T and Netronome are presenting and demonstrating this use case at the P4 Language Consortium workshop this week at Stanford University.

“Server-based networking has evolved as the most widely deployed form of SDN; its fundamental tenets are feature velocity and control – important requirements for data center operators,” said Sujal Das, senior vice president and general manager, marketing and corporate strategy. “As a pioneer in hardware-accelerated, server-based networking solutions, we take great pride in being the first in the industry with shipping products that can truly help customers realize the value of integrated P4 and C programming for their data center applications.”

Read more at Converge Digest.

How Newegg is Winning the Battle Against Patent Trolls [Video]

As Lee Cheng explained in his keynote speech at the Collaboration Summit held March 29-31 in Lake Tahoe, California, fighting patent trolls as part of his job at Newegg Inc. is a natural progression from his earliest legal involvement in civil rights advocacy.

In 1994, a non-profit organization that Cheng helped form “filed a lawsuit against the San Francisco Unified School District in Federal Court,” recalled Cheng. “This was the first legal windmill that I tilted against, and I’ve been looking for others ever since.”

Cheng, who is now Chief Legal Officer, SVP of Corporate Development and Corporate Secretary at Newegg, said, “When I joined Newegg in 2005, I was very surprised to get our first patent lawsuit, because we don’t actually make anything.”

Nonetheless, Cheng has spent much of his time at Newegg fighting pernicious patent trolls — more than 30 claims over the past decade — about various aspects of e-commerce and websites, which were often basic functions like an Internet shopping cart, drop-down menus, and search boxes.

“Each troll only wanted a very small percentage of all of our online sales or profits despite the fact that they contributed absolutely nothing to our success,” said Cheng. “They all very generously offered to settle for initially high six figures or low seven figures, and they explained to us that litigation costs and risks were so high that we should just cut the check.”

Not knowing any better when the claims first started hitting Newegg in 2006, “I started to ask questions. I soon concluded that patent trolling was just a total scam. Settling with trolls might save a little bit of money on the front end but would encourage more lawsuits,” said Cheng. “Settling frivolous lawsuits and paying plaintiffs off simply fed the beast of patent trolling while stifling innovation and entrepreneurship.”

Cheng’s approach included finding ways to reduce Newegg’s legal costs, such as using small boutique law firms, doing as much legal work in-house as possible, and using alternative fee arrangements.

Newegg’s victories included a 2010 shopping cart case against Soverain, which had already won multi-million dollar decisions from leading e-retailers like Amazon, CDW, Gap, and Zappos. “We actually haven’t lost a case of any kind after appeal since 2006,” said Cheng.

According to Cheng: “Patent trolls don’t sue Newegg anymore, except ones that are literally too stupid to realize that they sued Newegg. Believe it or not, that actually happens. We had two patents cases filed against Newegg since 2013. In one case, a patent troll sued us in mid-2013, and they dropped the case shortly after one of the younger associates at the plaintiff’s firm explained to their senior partners what Newegg does to patent trolls.”

Currently, Newegg has shifted its focus. “We now mostly focus on pursuing fee motions against patent trolls. We are trying to get the fees that we spent back under Section 285 of the Patent Act, and we try to support other companies, especially small and medium sized businesses who are still being plagued by abusive patent asserters — people who really try to extend specious intellectual property rights to achieve commercial advantage. … We are willing to engage in what I guess can be called corporate pro bono work, because Newegg doesn’t just appreciate technological innovation, we actually need it.”

Whether it’s fighting patent trolls or pursuing civil rights, “Don’t assume that you can’t make a difference,” said Cheng, “or that there’s an insurmountable reason for tolerating something that doesn’t look or feel right to you. Don’t be afraid to lead… Your presence here today means that you are part of one of the greatest collaborative efforts in history, one that crosses borders and languages and cultures.”

Watch Lee Cheng’s full keynote below:

https://www.youtube.com/watch?v=X-yUZW-v0io?list=PLGeM09tlguZQ17kXq679jthIhf12Tkat9
