If you accidentally delete data or format a disk, good advice can be expensive. Or maybe not: You can undo many data losses with SystemRescueCd.
The price for mass storage devices of all types has been falling steadily in recent years, with a simultaneous increase in capacity. As a result, users are storing more and more data on local storage media – often without worrying about backing it up. Once the milk has been spilled, the anxious search begins for important photos, videos, correspondence, and spreadsheets. SystemRescueCd can help in these cases by providing a comprehensive toolbox for every computer, with the possibility of restoring lost items.
The Gentoo derivative is a hybrid image for 32- and 64-bit computers that comes in just under 470MB [1]. The entire distro fits on a CD, so it is also suitable for use on older systems. To boot the operating system from a USB stick, use the commands:
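The commands the article refers to are not reproduced here, but because this is a hybrid ISO, one common approach (a sketch only, assuming the stick shows up as /dev/sdX and the downloaded file is named systemrescuecd.iso; adjust both to match your system) is to write the image directly to the stick with dd:

# WARNING: this overwrites everything on /dev/sdX
$ sudo dd if=systemrescuecd.iso of=/dev/sdX bs=4M status=progress
$ sync

Because the image is hybrid, the stick then boots just like the CD would.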
In this tutorial series, we’re providing practical guidelines for using PGP. Previously, we provided an introduction to basic tools and concepts, and we showed how to generate and protect your master PGP key. In this third article, we’ll explain how to generate PGP subkeys, which are used in daily work.
Checklist
Generate a 2048-bit Encryption subkey (ESSENTIAL)
Generate a 2048-bit Signing subkey (ESSENTIAL)
Generate a 2048-bit Authentication subkey (NICE)
Upload your public keys to a PGP keyserver (ESSENTIAL)
Set up a refresh cronjob (ESSENTIAL)
Considerations
Now that we’ve created the master key, let’s create the keys you’ll actually be using for day-to-day work. We create 2048-bit keys partly because a lot of specialized hardware (we’ll discuss this more later) does not handle larger keys, and partly for pragmatic reasons: if we ever find ourselves in a world where 2048-bit RSA keys are not considered good enough, it will be because of fundamental breakthroughs in computing or mathematics, and at that point longer 4096-bit keys will not make much difference.
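As a concrete sketch of the checklist items above (assuming [fpr] stands for the full fingerprint of your master key, and that your GnuPG version is recent enough to support the --quick-add-key shortcut), the subkeys can be created like this:

# Encryption subkey (ESSENTIAL)
$ gpg --quick-add-key [fpr] rsa2048 encr
# Signing subkey (ESSENTIAL)
$ gpg --quick-add-key [fpr] rsa2048 sign
# Authentication subkey (NICE)
$ gpg --quick-add-key [fpr] rsa2048 auth

Each command will prompt for the passphrase protecting your master key. On older GnuPG versions you can achieve the same result interactively with gpg --edit-key [fpr] and the addkey command.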
Your key creation is complete, so now you need to make it easier for others to find it by uploading it to one of the public keyservers. (Skip this step if you’re not planning to actually use the key you’ve created, as it just litters keyservers with useless data.)
$ gpg --send-key [fpr]
If this command does not succeed, you can try specifying the keyserver on a port that is most likely to work:
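For example (the keyserver pool named here is only a common choice, not a requirement; substitute whichever server you prefer), forcing the hkp protocol over port 80 will often get through restrictive firewalls:

$ gpg --keyserver hkp://pool.sks-keyservers.net:80 --send-key [fpr]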
Most keyservers communicate with each other, so your key information will eventually synchronize to all the others.
Note on privacy: Keyservers are completely public and therefore, by design, leak potentially sensitive information about you, such as your full name, nicknames, and personal or work email addresses. If you sign other people’s keys or someone signs yours, keyservers will additionally become leakers of your social connections. Once such personal information makes it to the keyservers, it becomes impossible to edit or delete. Even if you revoke a signature or identity, that does not delete them from your key record, just marks them as revoked — making them stand out even more.
That said, if you participate in software development on a public project, all of the above information is already public record, so making it additionally available via keyservers does not result in a net loss in privacy.
Upload your public key to GitHub
If you use GitHub in your development (and who doesn’t?), you should upload your key following the instructions they have provided:
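GitHub expects the public key as ASCII-armored text pasted into your account settings; a minimal way to produce that output (again assuming [fpr] is your key’s fingerprint) is:

$ gpg --export --armor [fpr]

Copy the resulting block, beginning with -----BEGIN PGP PUBLIC KEY BLOCK-----, into the GPG keys section of your GitHub settings.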
In its latest news, Purism announced that it has successfully integrated Trammell Hudson’s Heads security firmware into its Trusted Platform Module (TPM)-equipped Librem laptops. Heads is an open-source computer firmware and configuration tool that aims to provide better physical security and data protection.
Heads combines physical hardening of hardware platforms and flash security features with custom coreboot firmware and a Linux boot loader in ROM. While still not a complete replacement for proprietary AMD or Intel firmware blobs, Heads controls the system from the first instruction the CPU executes through to full boot-up, which lets you track each step of the boot firmware and configuration.
Prometheus is an open source project and one of the more popular CNCF projects, written mainly in Go (a few components are written in Ruby). That means you get single-binary executables: you simply download Prometheus and its components and run them. Prometheus is also fully Docker compatible; Prometheus itself and a number of its components are available as images on Docker Hub.
You will see how to spin up a minimal Prometheus server, together with a Node Exporter and Grafana, in Docker containers to monitor a stand-alone Ubuntu 16.04 server. First, though, let’s look at the main mandatory components of Prometheus from the ground up.
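As a rough sketch of that setup (the image names are the official ones published on Docker Hub; the default configuration files, single-host networking, and lack of persistent volumes are simplifying assumptions for this illustration), the three containers can be started like this:

# Prometheus server, web UI and API on port 9090
$ docker run -d --name prometheus -p 9090:9090 prom/prometheus

# Node Exporter, exposing host metrics on port 9100
$ docker run -d --name node-exporter -p 9100:9100 prom/node-exporter

# Grafana, dashboards on port 3000
$ docker run -d --name grafana -p 3000:3000 grafana/grafana

In a real deployment you would also mount a prometheus.yml that tells Prometheus to scrape the Node Exporter’s address.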
Amazon engineers are working with Nuance Communications Inc. and Voicebox Technologies Corp. to write code that makes in-vehicle apps compatible with several speech-recognition technologies, eliminating the need for developers to make multiple versions.
The catch is that the cars must use Automotive Grade Linux, an open-source platform being developed by Toyota Motor Corp. and other auto manufacturers and suppliers to underpin all software running in the vehicle. The only cars currently on the system are Toyota’s new Camry and Sienna and the Japanese version of the plug-in Prius, though the carmaker plans to expand that list. AGL has been growing too, reaching 114 members currently, up from around 90 a year earlier. Amazon signed on last month.
DevSecOps is emerging as a superior way to integrate security throughout the DevOps cycles, using better intelligence, situational awareness, and enhanced collaboration. It entails a solid approach to change management, or standardizing specific processes that can help prevent problems downstream. Poor (or no) change management is the biggest culprit in preventing organizations from pinpointing the root cause of critical issues, thereby slowing down the entire business.
Security Information and Event Management (SIEM)
The key to optimizing your business for DevSecOps is to build the necessary infrastructure to interact with your SIEM system, and enable rapid data collection, data analysis, and incident response. Your SIEM platform should act as the hub, around which you can customize the full workflow for managing incidents.
Hardware and software are certainly different beasts. Software is really just information, and the storing, modification, duplication, and transmission of information is essentially free. Hardware is expensive, or so we think, because it’s made out of physical stuff which is costly to ship or copy. So when we talk about open-source software (OSS) or open-source hardware (OSHW), we’re talking about different things — OSS is itself the end product, while OSHW is just the information to fabricate the end product, or have it fabricated.
The fabrication step makes OSHW essentially different from OSS, at least for now, but I think there’s something even more fundamentally different between the current state of OSHW and OSS: the pull request and the community. The success or failure of an OSS project depends on the community of people developing it, and for smaller projects that can hinge on the ease of a motivated individual digging in and contributing. This is the main virtue of OSS in my opinion: open-source software is most interesting when people are reading and writing that source.
With pure information, it’s essentially free to copy, modify, and push your changes upstream so that others can benefit.
Helm can make deploying and maintaining Kubernetes-based applications easier, said Amy Chen in her talk at KubeCon + CloudNativeCon. Chen, a Systems Software Engineer at Heptio, began by dissecting the structure of a typical Kubernetes setup, explaining how she often described the basic Docker containers as “baby computers,” in that containers are easy to move around, but they still need the “mommy” computer. However, containers do carry with them all the environmental dependencies for a given application.
Basic Units
On the next level up, there is the pod, the basic unit of Kubernetes. Pods group related containers together, along with other components (e.g., databases). They can only be reached from inside the cluster and are assigned IPs from the cluster’s internal network. A pod’s IP is dynamic and can change when, for example, the pod is terminated and spun up again. This makes pod addresses unreliable, an issue that Kubernetes’ service abstraction solves (see below).
A deployment groups replicated Pods together. Here is where the Kubernetes concepts of actual state and desired state come into play. The deployment controller is in charge of reaching the desired state from the actual state. If, for example, you need three pods and one crashes, the deployment controller will spin up another to achieve the desired state.
A service is a group of pods or deployments. While a service has nothing to do with the states described above, it does provide a way of locating deployments. Because pods can be terminated and die, and may come back with different IPs when they are spun up again, you cannot rely on a pod’s IP as a way to communicate with it. Services define a dependable endpoint that can be used to communicate with the pods behind it. Finally, an Ingress routes traffic from the outside world to the internal services.
Although all of these pieces can be managed to some extent with Kubernetes’ own kubectl tool, that approach has several drawbacks: kubectl forces you to execute multiple complicated command lines that must be run in a certain order, for example. Also, kubectl makes no provisions for version-controlling your setup. So, if you change something and then want to go back to your initial setup, with kubectl you have to tear everything down and build it all back up, entering your original settings all over again.
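To illustrate the point, the raw kubectl workflow means applying each manifest yourself, in the right order, every time (the file names here match the ones discussed in the next section but are otherwise illustrative):

$ kubectl apply -f deployment.yaml
$ kubectl apply -f service.yaml
$ kubectl apply -f ingress.yaml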
Chart your course
That’s where Helm can help, says Chen. Helm uses charts, text configuration files that define a group of manifest files. Helm charts reference a series of templates, yaml documents that define the deployment, services, and ingress. With those in place, using helm install will bring up your service.
In the demo phase of her talk, Chen showed how Helm makes it easier to get all the moving parts working. The YAML files used by Helm are easy for humans to parse: just by looking at each section, it is easy to understand what it does.
A line that says replicas: 3 in the deployment.yaml file, for example, will bring up three replicas of your pods in the deployment phase; the containerPort: 80 line tells Helm to expose port 80 on the pod; and so on. The service.yaml and ingress.yaml files are equally simple to understand.
After configuring your setup, you can run helm install and the application will start, while returning data on the deployment so you can check that everything went correctly.
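A minimal sketch of that step (the chart directory name is made up, and the exact syntax depends on your Helm version; Helm 2, current at the time of the talk, generates a release name automatically):

$ helm install ./mychart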
Helm also lets you use configuration variables. You can create a values.yaml file that contains the values that will be used in the deployment.yaml, service.yaml, and ingress.yaml files. This avoids having to edit the configuration files by hand every time you want to change something, making modifications easier and less prone to errors.
Helm also allows you to “upgrade” (which, in this context, means modify) a running setup live with the upgrade command. If, for any reason, the upgrade does not do what you want, you can use helm list to see the releases you have already deployed, and then helm rollback to go back to the version that worked best.
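In command form, that workflow looks roughly like this (the release name myapp and the revision number are placeholders):

# apply a modified chart or values to the running release
$ helm upgrade myapp ./mychart

# list releases and inspect their revision history
$ helm list
$ helm history myapp

# roll back to a known-good revision
$ helm rollback myapp 1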
In conclusion, Helm is not only useful for beginners thanks to its simplified usage, says Chen, but it also provides some extra features that make running and maintaining Kubernetes-based applications more efficient than using just kubectl.
The 2018 conference season — which, let’s admit, lasts from January 1 to December 31 these days — is already in full swing. And, the various conferences and summits held around the world provide meeting grounds for people who create, maintain, govern, and promote open source.
However, as the conference scene has become much larger and more varied, one thing that is often missing is simple educational instruction in a specific technology. Conferences are wonderful places if you already know what’s going on and want to find out the latest, but learning at the apprentice level about esoteric or deep subjects like embedded Linux, for example, is often a much less glamorous affair, and training has been limited to classrooms or professional corporate training, or DIY with a good book.
E-ALE
E-ALE, which is short for Embedded Apprentice Linux Engineer, is a new initiative that aims to challenge this state of affairs. This undertaking is the brainchild of a group of embedded Linux professionals who met, logically enough, at a conference — SCaLE in 2017 — and discussed this lack of apprenticeship level training. Afterwards, Behan Webster and Tom King, both Linux consultants and professional trainers with The Linux Foundation, and Jeff Osier-Mixon, open source community strategist at Intel, took the reins to organize a portable educational track consisting of nine 2-hour courses over 3 days.
In this track, which debuts at SCaLE 16x in Pasadena on March 8 and will also be presented at the Embedded Linux Conference in Portland starting March 12, professional trainers each contribute one course and their time in exchange for exposure and the pleasure of mentoring new users in the not-so-dark arts of embedded Linux. The collection of courses is available to up to 50 apprentices. Each can choose to attend only the courses that interest them, so that they can also attend presentations at the rest of the conference. Each individual class hosts approximately 30 students.
The only cost — beyond that of the conference itself — is a small hardware kit, consisting of a PocketBeagle (an ARM Cortex-based development board) along with a BaconBits add-on board provided by QWERTY Embedded Design, GHI, and OSHPark. The kit costs $75 and is required to attend any of the hands-on courses. A laptop is also required (see the E-ALE page for other requirements).
Apprentice level instruction
Note that while these are apprentice-level courses, they are not “beginner” courses in how to use Linux. Students are expected to understand the basics of the Linux operating system, to be familiar with command-line interfaces and the C programming language, and to have some facility with electronics. Classes run for about 2 hours and typically consist of 45 to 60 minutes of instruction, providing a solid high-level grounding in a subject, followed by an hour or more of hands-on time with the hardware, exploring the subject matter. Students can then continue their practice at home, stay in touch with each other and ask further questions through an alumni mailing list, and participate on the E-ALE wiki. See the course descriptions for details on the scope and depth of each course.
And the best part? All of the training materials for the courses are available as Creative Commons documents, free to download after each conference, along with recordings where possible.
With 18 hours of corporate-level training for the cost of a bit of hardware, students win big with the E-ALE track. Trainers also win, with two hours of high-profile exposure to students who can then take business cards back to their companies and provide personal recommendations for corporate training and consulting. The conferences themselves win by providing venues for high-quality instruction, making them places of learning. And with documentation and real training now provided at events around the world, the entire Linux community wins. How often do you run across a win-win-win-win scenario?
Learn more
If you would like to attend, support, or sponsor E-ALE, visit the website for details and upcoming conferences. The E-ALE track is currently planned in 2018 for SCaLE 16x, Embedded Linux Conference + OpenIoT Summit North America, and Embedded Linux Conference Europe, October 22 in Edinburgh, UK. E-ALE is also exploring the possibility of providing courses on other Linux-based technologies, so stay tuned for more.
Jeff “Jefro” Osier-Mixon has been a fixture in the open source landscape since long before the term “open source” was invented. He is currently a strategist and community manager in Intel’s Open Source Technology Center and a community manager and advisor for a number of Linux Foundation projects.
The cost savings Serverless offers have greatly accelerated its rate of adoption, and many companies are starting to use it in production, coping with less mature dev and monitoring practices to get the monthly bill down. Such a trade-off makes sense when you balance effort vs. reward, but one aspect of it is especially scary – security.
Key Takeaways
FaaS takes on the responsibility for “patching” the underlying servers, freeing you from OS patching
Denial of Service (DoS) attacks are naturally thwarted by the (presumed) infinite capacity Serverless offers.
With serverless, we deploy many small functions that can have their own permissions. However, managing granular permissions for hundreds or thousands of functions is very hard to do.
Since the OS is unreachable, attackers will shift their attention to the areas that remain exposed – and first amongst those would be the application itself.
Known vulnerabilities in application libraries are just as risky as those in the server dependencies, and the responsibility for addressing vulnerable app libraries falls to you – the function developer.