In the first article in our series on the Cloud Foundry for Developers training course, we explained what Cloud Foundry is and how it’s used. We continue our journey here with a look at some basic terms. Understanding the terminology is the key to not being in a constant state of bewilderment, so here are the most important terms and concepts to know for Cloud Foundry.
Command Line Interface (CLI)
The Cloud Foundry command line interface is a locally installed program that simplifies interaction with a Cloud Foundry instance. The CLI exposes functions (like pushing an app) via the command line, and executes REST calls against a Cloud Foundry target. For more information, see cloudfoundry/cli on GitHub.
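For a sense of how this works in practice, a minimal session might look like the following; the app name here is a placeholder:

$ cf push my-app        # deploy the app in the current directory
$ cf apps               # list the apps in your current space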
Target
A target is a Cloud Foundry installation or endpoint you want to interact with, e.g., by logging in to get information, configuring something, or deploying your application. This endpoint is a standard REST API, and the core API is consistent across all Cloud Foundry distributions. It takes the form of a standard URL and requires login credentials, as well as your organization and space.
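As a quick, hedged example, the commands below point the CLI at a target and log in; the endpoint, org, and space names are placeholders:

$ cf api https://api.example.com
$ cf login -u dev@example.com -o my-org -s my-space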
User
That is you! Human users have their own user accounts in a Cloud Foundry instance and must have organization roles and space roles, with the organization role assigned first. There are also application users, and both human and application users authenticate through the User Account and Authentication (UAA) server. For more information on users, visit Cloud Foundry Documentation: User Accounts.
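If you have administrator rights on an installation, you can create a human user account directly from the CLI; the username and password below are made up for illustration:

$ cf create-user jdoe s3cret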
Organization
A Cloud Foundry organization logically segregates tenants in a Cloud Foundry instance. The separation is purely logical, and there is no physical segregation. While the use of organizations is required, the method of segregation is arbitrary and left to the end user. Common use cases are for different business units, projects, or even companies (this is common in a hosted CF public cloud). To learn more, visit Cloud Foundry Documentation: Orgs.
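For example, an administrator might create an organization and then target it; the org name is a placeholder:

$ cf create-org my-org
$ cf target -o my-org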
Space
An organization is divided into spaces. Applications and service instances are always scoped to a space, and every organization must contain at least one space. Spaces have roles, and these roles apply only to their spaces. Like organizations, the method of separation is left to the end user. You can use spaces for different applications, projects, or lifecycle steps, such as development, testing, and production. To learn more, visit Cloud Foundry Documentation: Spaces.
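As a sketch, here is how you might create a space for development work inside an organization and then target it; the names are illustrative:

$ cf create-space development -o my-org
$ cf target -o my-org -s development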
Quota Plan
Quota plans are logical resource limits for organizations and spaces. A quota plan is a named set of memory, service, and instance usage quotas: for example, a plan named “quota1” that includes 4 GB of memory, 20 services, and 20 routes. Quota plans are not assigned per user but per organization, so everyone in the organization shares the same quota plan. You may create any number of quota plans, but only one can be assigned to an organization at a time. Spaces may also have quotas, but this is not required. To learn more, visit Cloud Foundry Documentation: Creating and Modifying Quota Plans.
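As a hedged sketch using the v6 CLI command names (later CLI versions rename these to create-org-quota and set-org-quota), the “quota1” example above could be created and assigned like this:

$ cf create-quota quota1 -m 4G -r 20 -s 20
$ cf set-quota my-org quota1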
Role
Users are assigned to roles in organizations and spaces. Roles grant granular capabilities to a user. Role names are logical and attempt to convey scope, as well as the capabilities they provide: for example, Admin, Org Manager, Space Developer, and so on.
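For example, assigning roles from the CLI looks like this; the user, org, and space names are placeholders:

$ cf set-org-role jdoe my-org OrgManager
$ cf set-space-role jdoe my-org development SpaceDeveloper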
There are two questions you have to ask when considering shipping software:
Can we ship?
Should we ship?
“Should we ship?” is ultimately a business decision. Is it valuable to the business to put the latest features in the hands of the users right now? The product manager (PM) represents the business interests on the team and must own this decision.
However, the question “Can we ship?” is fundamentally an engineering question. Is the software in a working state? Are we confident it won’t fail in production? The goal of the XP engineers is to always — ALWAYS — have a “yes” answer to this question. A team that can’t ship, can’t learn. And the longer you’re not learning, the greater the risk that you’re wasting time and money building the wrong thing.
In this article, we talk with Andrew Jenkins, Lead Architect at Aspen Mesh, about moving from monolithic apps to microservices and cut through some of the hype around service mesh for managing microservice architectures. For more on service mesh, consider attending KubeCon + CloudNativeCon EU, May 2-4, 2018 in Copenhagen, Denmark.
1. Microservices are solving many of the problems companies face with monolithic architectures. Where do you see the greatest value?
Andrew Jenkins: To me, it’s about minimizing time-to-user-impact. The shift to virtualization and then cloud was all about reducing the complexity associated with all the infrastructure for supporting an app, so that you can flexibly allocate servers and storage and so on. But that shift didn’t necessarily change the apps we build. Now that we’ve got flexible infrastructure, we should build flexible apps to take full advantage of it.
Microservices are those flexible apps: build small, single-purpose blocks, and build them rapidly so you can get them into end users’ hands quickly. Organizations can use this to test against real user requirements and build iteratively.
2. As enterprises make the move from monolithic apps to microservices, the benefits are clear, but what are some of the challenges companies are running into as they make the move?
Jenkins: Shifting to microservices doesn’t by itself eliminate complexity. The complexity in any one microservice is small but there is complexity across the entire system. Fundamentally, companies want to know which service is talking to which, about what, on behalf of whom, and then be able to control that communication with policy.
3. How are organizations attempting to address these challenges?
Jenkins: Some companies add this visibility and policy piece into every application that they build, from day one. This is especially common when a company invests in custom tooling, workflows, deployment managers, and CD pipelines. Also, we find these are usually companies that orient themselves around a few languages and write nearly everything they run themselves.
If your app stack is polyglot and a combination of new development and migrating existing applications, it’s harder to justify adding these pieces to every app individually. Apps from different teams and externally-developed apps raise this bar more. One approach is to treat those non-conforming apps separately – putting them behind a policy-enforcing proxy or treating them as more of a black box from a visibility perspective. But, if you don’t have to make this separation, if there’s instead an easy way to get that native-style policy and visibility for any app in any language, then you can see the advantage there. A service mesh is one approach for this.
4. There is a lot of hype around service mesh as the ultimate solution to manage microservice architectures. Your thoughts?
Jenkins: Yeah, it is definitely climbing the hype cycle curve. It’s not going to be perfect for every situation. If you already have microservices and you feel like you’ve got really good control and visibility, you’ve got a good developer workflow dialed in, then you don’t need to rip everything out and cram in a service mesh tomorrow. I’d suggest you might still want to understand what’s inside since it may be helpful when your team tackles new languages or environments.
I think we should understand how a service mesh consolidates functionality into a consistent layer. We all love to keep our code DRY (don’t repeat yourself). We know that two look-alike implementations are never quite the same. If you can leverage a service mesh to get one implementation of, say, retry logic that works across your entire infrastructure, that really simplifies things for developers, operators, and everyone else who works with that system. I bet no one on your team wants to write yet another copy of the retry loop, and especially no one wants to debug the subtle differences between the one written in Go and the one written in Python.
5. As the number of services to monitor increases, each of these is highly likely to:
– Use different technologies / languages
– Live on a different machine / container
– Have its own version control
How does a service mesh address these disparities?
Jenkins: Service mesh’s first promise is to do the same thing (that visibility and control piece) for microservices written in any language, for any application stack. Next, when you think about different containers talking to each other, there’s a lot that could be relevant at that layer that a service mesh could help with. For instance, do you believe in securing each individual running container rather than perimeter (firewall) security? Then use a service mesh to provide mTLS from container to container.
I’m also seeing that version control differences are the manifestation of deeper application lifecycle differences. So one team uses such-and-such version control, an extensive qualification phase, and a careful upgrade strategy because they’re providing one of the most core services that everyone relies on. Another team working on a brand new prototype service has a different policy, but you certainly want to ensure they’re not, say, writing to the production database. Fitting their “square peg workflow” into your “round hole process” isn’t the right thing.
You can use a service mesh to graft these different apps and services into the system in a way that’s appropriate for them. Now obviously you want to use some judgement and not make bespoke pegs for every single little microservice but we’re hearing a lot of interest in service mesh to help smooth out the differences between these lifecycles and expectations. Again, it’s all about providing that rapid iterability but without giving up the visibility and control.
6. Control plane vs data plane: where does service mesh provide value for each?
Jenkins: It’s remarkable how easy it is to start making a web service today. You can fit the code in a tweet. This isn’t a real web service, though. To make it resilient and scalable you need to add some stuff to the data plane of the app. It needs to do TLS, and it needs to retry failures, and it needs to only accept requests from this service but not that one, and it needs to check the user’s authentication, and so on. A service mesh can help you get that data plane functionality without having to add code to the app.
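To make that concrete, a web service really can fit in a tweet; this one-liner using Python’s built-in server works, but it has none of the data plane functionality described above (no TLS, no retries, no authentication):

$ python3 -m http.server 8080    # serves the current directory over plain HTTP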
Also, since that’s now in the data plane layer, there’s an ability to upgrade and enhance that layer without modifying the application.
A service mesh brings consistency to the control plane for your microservices. Container orchestration systems like Kubernetes provide a common way of describing what containers you want running. It’s not that you can’t run containers without them, it’s that once you’re beyond running a handful of containers, you want a consistent way to run them all. Service mesh is like that, for the communication between containers.
7. The buzzword around service mesh is “observability”. Can you share a bit on the real world benefits observability provides?
Jenkins: We’ve talked to one team that told us about a time they spent hours on the phone trying to solve some issue that spanned lots of services and components. They had collected lots of data from each service, and they knew the answer was in that sea of data somewhere. But they spent so much time translating between each snapshot of the information. They didn’t have confidence that each step in that translation was correct – after all, if they understood what was going on, they would have engineered out the problem in the first place. On top of this, it isn’t always clear where they should start looking.
What they asked for was one view – all the information across services collected in one place, and the most important information for their issue right at the top. Again, service mesh is not a panacea, and I won’t promise that you’ll never have to look at a log file again. But my goal would be that once this team has a service mesh, they are always confident that they’ve got good observations on what went into and out of every microservice, and the service mesh has already pointed them in the right direction.
To me, observability is about more than just collecting a lot of data points. It’s about getting the smartest brains applied to the true fault in the system as quickly as possible.
8. What do you see for the future of service mesh?
Jenkins: I think that the various implementations are providing a compelling toolbox of policies and components. I’m glad that we’re leveraging lessons learned from pioneers of microservices in building this common service mesh layer.
The next step is going to be choosing how to use that toolbox to solve problems. Organizations are going to want some consistency in what policies get deployed: The challenge will be to combine the interests of app devs, InfoSec, and platform teams so that all their policy comes together in the service mesh.
On a bit of a technical nuance, we’ve seen service meshes that leverage what’s called a sidecar model for integration and service meshes that do not. A sidecar feels natural for an app enhancement layer, but we’re not used to that for layers that we consider infrastructure.
Once we write our apps from day one to rely on this service mesh, we’ll have the opportunity for fine-grained but high-level control over applications. Every app will have advanced retry logic, security, visibility, etc. built in from day one. First, that’s going to change the way we develop and test applications. I think it’s also going to open doors for cross-application policies we haven’t thought of yet.
I had grand ambitions this week. I’d come across a smattering of articles delving into the history of programming languages, practices, and other Internet-based tidbits. I’d pondered a pithy title like “if !mistake(history) do repeat” and dug through my source materials for evidence, but came up a bit empty-handed. In the end, the line that really summed up this week’s theme was found at the close of an interesting article asking why “=” means assignment:
“I don’t know if this adds anything to the conversation. I just like software history.”
So with that in mind, that’s where we’ll start, with an interesting look at the origins of “=” as a tool of variable value assignment rather than (or in addition to, rather) evaluation.
After finally reaching the tipping point with off-the-shelf solutions that couldn’t match the increasing speeds available, we recently took the plunge. Building a homebrew router turned out to be a better proposition than we could’ve ever imagined. By nearly any speed metric we analyzed, our little DIY kit outpaced off-the-shelf routers, whether of the $90 or $250 variety.
Naturally, many readers asked the obvious follow-up—”How exactly can we put that together?” Today it’s time to finally pull back the curtain and offer that walkthrough. By taking a closer look at the actual build itself (hardware and software), the testing processes we used, and why we used them, hopefully any Ars readers of average technical abilities will be able to put together their own DIY speed machine. And the good news? Everything is as open source as it gets—the equipment, the processes, and the setup. If you want the DIY router we used, you can absolutely have it. This will be the guide to lead you, step-by-step.
What is a router, anyway?
At its most basic, a router is just a device that accepts packets on one interface and forwards them on to another interface that gets those packets closer to their eventual destination.
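As a minimal sketch of that idea on a Linux box, assuming eth0 is the upstream (WAN) interface, the following enables packet forwarding and masquerades outbound traffic; interface names will differ on your hardware:

$ sudo sysctl -w net.ipv4.ip_forward=1                       # forward packets between interfaces
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE  # NAT traffic headed upstream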
It isn’t hard to find what you’re looking for on a Linux system — a file or a command — but there are a lot of ways to go looking.
7 commands to find Linux files
find
The most obvious is undoubtedly the find command, and find has become easier to use than it was years ago. It used to require a starting location for your search, but these days, you can also use find with just a file name or regular expression if you’re willing to confine your search to the current directory.
$ find e*
empty
examples.desktop
In this way, it works much like the ls command and isn’t doing much of a search.
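For a real search, you would typically supply a starting location and one or more tests; for example, to look for .desktop files anywhere under your home directory:

$ find ~ -name "*.desktop"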
Facebook repeats the pattern of USENET, this time as farce. As a no-holds-barred Wild West sort of social network, USENET was filled with everything we rightly complain about today. It was easy to troll and be abusive; all too many participants did it for fun. Most groups were eventually flooded by spam, long before spam became a problem for email. Much of that spam distributed pornography or pirated software (“warez”). You could certainly find newsgroups in which to express your inner neo-Nazi or white supremacist self. Fake news? We had that; we had malicious answers to technical questions that would get new users to trash their systems. And yes, there were bots; that technology isn’t as new as we’d like to think.
But there was a big divide on USENET between moderated and unmoderated newsgroups. Posts to moderated newsgroups had to be approved by a human moderator before they were pushed to the rest of the network. Moderated groups were much less prone to abuse. They weren’t immune, certainly, but moderated groups remained virtual places where discussion was mostly civilized, and where you could get questions answered. Unmoderated newsgroups were always spam-filled and frequently abusive, and the alt.* newsgroups, which could be created by anyone, for any reason, matched anything we have now for bad behavior.
So, the first thing we should learn from USENET is the importance of moderation. Fully human moderation at Facebook scale is impossible. With seven billion pieces of content shared per day, even a million moderators would each have to scan seven thousand posts: roughly 4 seconds per post over an eight-hour shift. But we don’t need to rely on human moderation. After USENET’s decline, research showed that it was possible to classify users as newbies, helpers, leaders, trolls, or flamers, purely by their communications patterns, with only minimal help from the content.
FOSSology turns ten this year. Far from winding down, the open source license compliance project is still going strong. The interest in the project among its thriving community has not dampened in the least, and regular contributions and cross-project contributors are steering it toward productive and meaningful iterations.
An example is the recent 3.2 release, offering significant improvements over previous versions, such as the import of SPDX files and word processor document output summarizing analysis information. Even so, the overall project goal remains the same: to make it easier to understand and comply with the licenses used in open source software.
There are thousands of licenses used in open source software these days, with some differing by only a few words and others pertaining to entirely different uses. Together, they present a bewildering quagmire of requirements that must be adhered to, but only as set out in the appropriate license(s); misunderstanding or overlooking a license can revert rights to a reserved status and bring about a complete halt to distribution.
The Xen Project is composed of a diverse set of member companies and contributors committed to the growth and success of the Xen Project Hypervisor. The Xen Project Hypervisor is a staple technology for server and cloud vendors, and it is gaining traction in the embedded, security, and automotive spaces. This blog series highlights the companies contributing to the changes and growth of the Xen Project, and how Xen Project technology bolsters their business.
When did you start contributing to the Xen Project?
I started contributing to the Xen Project in 2008. At that time, I was working for Citrix on the XenServer product team. I have been contributing every year since then, which makes it 10 years now!
What advice would you give someone considering contributing to the Xen Project?
Learning the intricate details of the Xen Project hypervisor can be daunting at first, but it is fun, and the community is great.
Most people know Capital One as one of the largest credit card companies in the U.S. Some also know that we’re one of the nation’s largest banks — number 8 in the U.S. by assets. But Capital One is also a technology-focused digital bank that is proud to be disrupting the financial services industry through our commitment to cutting-edge technologies and innovative digital products. Like all U.S. banks, Capital One operates in a highly regulated environment that prioritizes the protection of our consumers and their financial data. This sets us apart from many companies that don’t operate under the same level of oversight and responsibility.
Our goal to reimagine banking is attracting amazing engineers who want to be part of the movement to reinvent the financial technology industry. During interviews, they are often surprised to find we want them to use open source projects and contribute back to the open source community. Even more are blown away that we sponsor open source projects built by our engineers.
People expect that kind of behavior at a start-up, not a top bank. There is nothing traditional about Capital One and our approach to technology.