
CHIPS Alliance to Create Open Chip Design Tools for RISC-V and Beyond

The Linux Foundation and several major RISC-V development firms have launched an LF-hosted CHIPS Alliance with a mission “to host and curate high-quality open source code relevant to the design of silicon devices.” The founding members — Esperanto Technologies, Google, SiFive, and Western Digital — are all involved in RISC-V projects.  

On the same day that the CHIPS Alliance was announced, Intel and other companies, including Google, launched a Compute Express Link (CXL) consortium that will open source and develop Intel’s CXL interconnect. CXL shares many traits and goals with the OmniXtend protocol that Western Digital is contributing to CHIPS (see further below).

The CHIPS Alliance aims to “foster a collaborative environment that will enable accelerated creation and deployment of more efficient and flexible chip designs for use in mobile, computing, consumer electronics, and Internet of Things (IoT) applications.” This “independent entity” will enable “companies and individuals to collaborate and contribute resources to make open source CPU chip and system-on-a-chip (SoC) design more accessible to the market,” says the project.

This announcement follows a collaboration between the RISC-V Foundation and the Linux Foundation formed last November to accelerate development of the open source RISC-V ISA, starting with RISC-V starter guides for Linux and Zephyr. The CHIPS Alliance is more focused on developing open source VLSI chip design building blocks for semiconductor vendors.

The CHIPS Alliance will follow Linux Foundation-style governance practices and include the usual Board of Directors, Technical Steering Committee, and community contributors “who will work collectively to manage the project.” Initial plans call for establishing a curation process “aimed at providing the chip community with access to high-quality, enterprise-grade hardware.”

A testimonial quote by Zvonimir Bandic, senior director of next-generation platforms architecture at Western Digital, offers a few clues about the project’s plans: “The CHIPS Alliance will provide access to an open source silicon solution that can democratize key memory and storage interfaces and enable revolutionary new data-centric architectures. It paves the way for a new generation of compute devices and intelligent accelerators that are close to the memory and can transform how data is moved, shared, and consumed across a wide range of applications.”

Both the AI-focused Esperanto and SiFive, which has led the charge on Linux-driven RISC-V devices with its Freedom U540 SoC and upcoming U74 and U74-MC designs, are exclusively focused on RISC-V. Western Digital, which is contributing its RISC-V-based SweRV core to the project, has pledged to produce 1 billion of SiFive’s RISC-V cores. All but Esperanto have committed to contribute specific technology to the project (see further below).

Notably missing from the CHIPS founders list is Microchip, whose Microsemi unit announced a Linux-friendly PolarFire SoC, based in part on SiFive’s U54-MC cores. The PolarFire SoC is billed as the world’s first RISC-V FPGA SoC.

Although not included as a founding member, the RISC-V Foundation appears to be behind the CHIPS Alliance, as is evident from this quote from Martin Fink, interim CEO of the RISC-V Foundation and VP and CTO of Western Digital: “With the creation of the CHIPS Alliance, we are expecting to fast-track silicon innovation through the open source community.”

With the exploding popularity of RISC-V, the RISC-V Foundation may have decided it has too much on its plate right now to tackle the projects the CHIPS Alliance is planning. For example, the Foundation is attempting to crack down on the growing fragmentation of RISC-V designs. A recent article in Semiconductor Engineering reports on the topic and on the RISC-V Compliance Task Group.

Although the official CHIPS Alliance mission statements do not mention RISC-V, the initiative appears to be an extension of the RISC-V ecosystem. So far, there have been few open-ISA alternatives to RISC-V. In December, however, Wave Computing announced plans to follow in RISC-V’s footsteps by offering its MIPS ISA as open source code without royalties or proprietary licensing. As noted in a Bit-Tech.net report on the CHIPS Alliance, there are also various open source chip projects that cover somewhat similar ground, including the FOSSi (Free and Open Source Silicon) Foundation, LibreCores, and OpenCores.

Contributions from Google, SiFive, and Western Digital

Google plans to contribute to the CHIPS Alliance a Universal Verification Methodology (UVM) based instruction stream generator environment for RISC-V cores. The configurable UVM environment will provide “highly stressful instruction sequences that can verify architectural and micro-architectural corner-cases of designs,” says the CHIPS Alliance.

SiFive will contribute and continue to improve its Rocket Chip (RocketChip) SoC generator, including the initial version of the TileLink coherent interconnect fabric. SiFive will also continue to contribute to the Scala-based Chisel open source hardware construction language and to the FIRRTL “intermediate representation specification and transformation toolkit” for writing circuit-level transformations, and it will continue to contribute to and maintain the Diplomacy SoC parameter negotiation framework.

As noted, Western Digital will contribute its 9-stage, dual-issue, 32-bit SweRV Core, which recently appeared on GitHub. It will also contribute a SweRV test bench and a SweRV instruction set simulator. Additional contributions will include the specification and early implementations of the OmniXtend cache coherence protocol.

Intel launches CXL interconnect consortium

Western Digital’s OmniXtend is similar to the high-speed Compute Express Link (CXL) CPU interconnect that Intel is open sourcing. On Monday, Intel, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, and Microsoft announced a CXL consortium to help develop the PCIe Gen 5-based CXL into an industry standard. Intel intends to incorporate CXL into its processors starting in 2021 to link the CPU with memory and various accelerator chips.

The CXL group competes with a Cache Coherent Interconnect for Accelerators (CCIX) consortium founded in 2016 by AMD, Arm, IBM, and Xilinx. It similarly adds cache coherency atop a PCIe foundation to improve interconnect performance. By contrast, OmniXtend is based on Ethernet PHY technology. While the CXL and CCIX groups are focused only on interconnects, the CHIPS Alliance has a far more ambitious agenda, according to an interesting EETimes story on the CHIPS Alliance, CXL, and CCIX.

Tuxedo InfinityCube v9 Linux PC Review: Small Size, Big Possibilities

The InfinityCube v9 has a small footprint (22 x 28 x 26 cm, not quite a cube!), making it ideal for several use cases. It has the makings of an awesome living room PC (just add Steam Big Picture and Kodi), a developer or professional video workstation, or a fantastic 1440p gaming rig. Or, in the case of many users, all of the above.

Despite its size, Tuxedo crams in some powerful components. … Read more at Forbes

Announcing The Linux Kernel Mentorship Project on CommunityBridge, a New Linux Foundation Platform

Since joining the Linux Foundation, I have been working to build out a new mentoring initiative. Today I am excited to announce our new Linux Kernel Mentorship Program on CommunityBridge, a platform that will bring opportunities for new developers to join and learn from our community and improve it at the same time.

CommunityBridge is a place where kernel mentors can sign up to share their expertise and be paired with mentees who have the basic skills to work with and learn from our community. CommunityBridge will give individuals the opportunity to get paid $5,500 plus a $500 travel stipend for a 12-week program to learn from us and solve problems such as finding and fixing bugs that will make the kernel more stable and secure. At the end of the program, mentees will also be paired with CommunityBridge employers for opportunities to interview with some of the top names in tech.

What’s more, in order to improve diversity in our community, the Linux Foundation will provide full financial sponsorship for the first five mentees from diverse backgrounds in the upcoming summer session starting this April. The Linux Foundation will also match, dollar for dollar, donations to support the first 100 diverse mentees across all projects hosted on the CommunityBridge platform.

Read more at the Linux Foundation

The Linux Foundation Launches Continuous Delivery Foundation

The Linux Foundation announced it will provide the home base for a vendor-neutral Continuous Delivery Foundation (CDF) committed to making it easier to build and reuse DevOps pipelines across multiple continuous integration/continuous delivery (CI/CD) platforms.

The first projects to be hosted under the auspices of the CDF, which was launched at the Open Source Leadership Summit conference, include Jenkins, the open source CI/CD system, and Jenkins X, an open source CI/CD solution on Kubernetes. Both were developed by CloudBees. Netflix and Google, meanwhile, are contributing Spinnaker, an open source multi-cloud CD solution, and Google is also adding Tekton, an open source project and specification for creating CI/CD components.

Read more at DevOps.com

New Red Team Project Aims to Help Secure Open Source Software

The Linux Foundation has launched the Red Team Project, which incubates open source cybersecurity tools to support cyber range automation, containerized pentesting utilities, binary risk quantification, and standards validation and advancement.

The Red Team Project’s main goal is to make open source software safer to use. Project members use the same tools, techniques, and procedures as malicious actors, but in a constructive way, to provide feedback and help make open source projects more secure.

We talked with Jason Callaway, Customer Engineer at Google, to learn more about the Red Team project.

Linux Foundation: Can you briefly describe the Red Team project and its history with the Fedora Red Team SIG?

Jason Callaway: I founded the Fedora Red Team SIG with some fellow Red Hatters at Def Con 25. We had some exploit mapping tools that we wanted to build, and I was inspired by Mudge and Sarah Zatko’s Cyber-ITL project; I wanted to make an open source implementation of their methodologies. The Fedora Project graciously hosted us and were tremendous advocates. Now that I’m at Google, I’m fortunate to get to work on the Red Team as my 20% Project, where I hope to broaden its impact and build a more vendor neutral community. Fedora is collaborating with LF, supports our forking the projects, and will have a representative on our technical steering committee.

LF: What are some of the short- and long-term goals of the project?

Jason: Our most immediate goal is to get back up and running. That means migrating GitHub repos, setting up our web and social media presence, and most importantly, getting back to coding. We’re forming a technical steering committee that I think will be a real force multiplier in helping us to stay focused and impactful. We’re also starting a meetup in Washington DC that will alternate between featured speakers and hands-on exploit curation hackathons on a two-week cadence.

LF: Why is open source important to the project?

Jason: Open source is important to us in many ways, but primarily because it’s the right thing to do. Cybersecurity is a global problem that impacts individuals, businesses, governments, everybody. So we have to make open source software safer.

There are lots of folks working on that, and in classic open source fashion, we’re standing on the shoulders of giants. But the Red Team Project hopes to offer some distinctly offensive value to open source software security.

LF: How can the community learn more and get involved?

Jason: I used to have a manager who liked to say, “80% of the job is just showing up.” It was tongue-in-cheek for sure, but it definitely applies to open source projects. To learn more, you can attend our meetups either in person or via Google Hangout, subscribe to our mailing list, and check out our projects on GitHub or our website.

This article originally appeared at The Linux Foundation

BackBox Linux for Penetration Testing

Any given task can succeed or fail depending upon the tools at hand. For security engineers in particular, building just the right toolkit can make life exponentially easier. Luckily, with open source, you have a wide range of applications and environments at your disposal, ranging from simple commands to complicated and integrated tools.

The problem with the piecemeal approach, however, is that you might wind up missing out on something that can make or break a job… or you waste a lot of time hunting down the right tools for the job. To that end, it’s always good to consider an operating system geared specifically for penetration testing (aka pentesting).

Within the world of open source, the most popular pentesting distribution is Kali Linux. It is, however, not the only tool in the shop. In fact, there’s another flavor of Linux, aimed specifically at pentesting, called BackBox. BackBox is based on Ubuntu Linux, which also means you have easy access to a host of other outstanding applications besides those that are included out of the box.

What Makes BackBox Special?

BackBox includes a suite of ethical hacking tools geared specifically toward pentesting. These tools cover areas such as:

  • Web application analysis

  • Exploitation testing

  • Network analysis

  • Stress testing

  • Privilege escalation

  • Vulnerability assessment

  • Computer forensic analysis and exploitation

  • And much more

Out of the box, one of the most significant differences between Kali Linux and BackBox is the number of installed tools. Whereas Kali Linux ships with hundreds of tools pre-installed, BackBox significantly limits that number to around 70.  Nonetheless, BackBox includes many of the tools necessary to get the job done, such as:

  • Ettercap

  • Msfconsole

  • Wireshark

  • ZAP

  • Zenmap

  • BeEF Browser Exploitation

  • Sqlmap

  • Driftnet

  • Tcpdump

  • Cryptcat

  • Weevely

  • Siege

  • Autopsy

BackBox is in active development; the latest version (5.3) was released on February 18, 2019. But how is BackBox as a usable tool? Let’s install it and find out.

Installation

If you’ve installed one Linux distribution, you’ve installed them all … with only slight variation. BackBox is pretty much the same as any other installation. Download the ISO, burn the ISO onto a USB drive, boot from the USB drive, and click the Install icon.

The installer (Figure 1) will be instantly familiar to anyone who has installed an Ubuntu or Debian derivative. Just because BackBox is a distribution geared specifically toward security administrators doesn’t mean the operating system is a challenge to get up and running. In fact, BackBox is a point-and-click affair that anyone, regardless of skills, can install.

Figure 1: The installation of BackBox will be immediately familiar to anyone.

The trickiest section of the installation is the Installation Type. As you can see (Figure 2), even this step is quite simple.

Figure 2: Selecting the type of installation for BackBox.

Once you’ve installed BackBox, reboot the system, remove the USB drive, and wait for it to land on the login screen. Log into the desktop and you’re ready to go (Figure 3).

Figure 3: The BackBox Linux desktop, running as a VirtualBox virtual machine.

Using BackBox

Thanks to the Xfce desktop environment, BackBox is easy enough for a Linux newbie to navigate. Click on the menu button in the top left corner to reveal the menu (Figure 4).

Figure 4: The BackBox desktop menu in action.

From the desktop menu, click on any one of the favorites (in the left pane) or click on a category to reveal the related tools (Figure 5).

Figure 5: The Auditing category in the BackBox menu.

The menu entries you’ll most likely be interested in are:

  • Anonymous – allows you to start an anonymous networking session.

  • Auditing – the majority of the pentesting tools are found in here.

  • Services – allows you to start/stop services such as Apache, Bluetooth, Logkeys, Networking, Polipo, SSH, and Tor.

Before you run any of the testing tools, I recommend that you first update and upgrade BackBox. This can be done via the GUI or the command line. If you opt for the GUI route, click the desktop menu, click System, and click Software Updater. When the updater completes its check, it will let you know if any updates are available or if a reboot is necessary after an upgrade (Figure 6).

Figure 6: Time to reboot after an upgrade.

Should you opt to go the manual route, open a terminal window and issue the following two commands:

sudo apt-get update

sudo apt-get upgrade -y

 

Many of the BackBox pentesting tools require a solid understanding of how they work, so before you attempt to use any given tool, make sure you know how to use it. Some tools (such as Metasploit) are made a bit easier to work with, thanks to BackBox. To run Metasploit, click the desktop menu button and click msfconsole in the favorites (left pane). When the tool opens for the first time, you’ll be asked to configure a few options. Simply accept each default by pressing Enter when prompted. Once you see the Metasploit prompt, you can run commands like:

db_nmap 192.168.1.0/24

The above command scans the 192.168.1.x network and lists the discovered hosts and open ports (Figure 7).

Figure 7: Open port discovery made simple with Metasploit on BackBox.

Even often-challenging tools like Metasploit are far easier to use here than on many other distributions (partly because you don’t have to bother installing them). That alone is worth the price of entry for BackBox (which is, of course, free).

The Conclusion

Although BackBox usage may not be as widespread as that of Kali Linux, it still deserves your attention. For anyone looking to do pentesting on their various environments, BackBox makes the task far easier than many other operating systems do. Give this Linux distribution a go and see if it doesn’t aid you in your journey to security nirvana.

A Brief History of Wi-Fi Security Protocols from “Oh My, That’s Bad” to WPA3

Thanks to upcoming developments in Wi-Fi, all of us connectivity-heads out there can look forward to getting familiar with new 802.11 protocols in the near future. Ars took a deep look at what’s on the horizon last fall, but readers seemed to have a clear request in response—the time had come to specifically discuss the new Wi-Fi security protocol, WPA3.

Before anyone can understand WPA3, it’s helpful to take a look at what came before it during The Dark Ages (of Internet)—a time with no Wi-Fi and unswitched networks. Swaths of the Internet today may be built upon “back in my day” ranting, but those of you in your 20s or early 30s may genuinely not remember or realize how bad things used to be. In the mid-to-late 1990s, any given machine could “sniff” (read traffic not destined for it) any other given machine’s traffic at will, even on wired networks. Ethernet back then was largely connected with a hub rather than a switch, and anybody with a technical bent could (and frequently did) watch everything from passwords to Web traffic to emails wing across the network without a care.

Closer to the turn of the century, wired Ethernet had largely moved on from hubs (and worse, the old coax thinnet) to switches. A network hub forwards every packet it receives to every machine connected to it, which is what made widespread sniffing so easy and dangerous. A switch, by contrast, only forwards packets to the MAC address for which they’re destined, so when computer B wants to send a packet to router A, the switch doesn’t give a copy to that sketchy user at computer C. This subtle change made wired networks far more trustworthy than they had been before. And when the original 802.11 Wi-Fi standard was released in 1997, it included WEP (Wired Equivalent Privacy), which supposedly offered the same expectations of confidentiality that users expect from wired networks.

In retrospect, WPA3’s early predecessor missed the mark. Badly.

Read more at Ars Technica

Considering Fresh C Extensions

Matthew Wilcox recently realized there might be value in relying on C extensions provided by the Plan 9 variant of the C programming language. All it would require is using the -fplan9-extensions command-line argument when compiling the kernel. As Matthew pointed out, Plan 9 extensions have been supported in GCC since version 4.6, which is the minimum version supported by the kernel. So, theoretically, there would be no conflict.
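For readers unfamiliar with these extensions, here is a minimal, hypothetical sketch of the struct embedding they allow; the type and function names are illustrative rather than taken from the kernel, and the snippet assumes GCC with the flag enabled:

/* Compile with: gcc -fplan9-extensions -c plan9_demo.c */

struct device {
        int id;
};

void register_device(struct device *dev);

struct net_device {
        struct device;         /* anonymous field: struct device's members
                                  are promoted into struct net_device */
        char name[16];
};

void register_net_device(struct net_device *ndev)
{
        ndev->id = 42;         /* promoted member; no ndev->dev.id needed */
        register_device(ndev); /* Plan 9 rules convert the pointer to the
                                  anonymous field's type automatically */
}

In standard C, both the anonymous struct device field and the implicit pointer conversion in the register_device() call would be rejected; the automatic conversion in particular is specific to the Plan 9 extensions.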

Nick Desaulniers felt that any addition of -f compiler flags to any project would always need careful consideration. Depending on what the extensions are needed for, they could be either helpful or downright dangerous.

Read more at Linux Journal

Best Linux Gaming Distros That Might Be Helpful

Plenty of Linux operating systems are available for a variety of purposes, and several beautiful distributions are designed specifically with gaming in mind.

1. SteamOS

Let’s start with SteamOS to satisfy your gaming desires. It is designed specifically for gaming, comes with Steam pre-installed, and is based on Debian. SteamOS is maintained and developed by Valve.

This is the gaming operating system most often recommended among Linux users. The hardware requirements for SteamOS are:

  • Intel or AMD 64-bit capable processor
  • 4GB or more RAM
  • 200GB or larger HDD
  • NVIDIA graphics card / AMD graphics card

Download SteamOS

2. Linux Console

Linux Console is another Linux operating system that can be used for gaming. It comes with 15+ games that you can play live on the system.

Read more at It’s Ubuntu

New Linux Kernel: The Big 5.0

Linus Torvalds at last made the jump with the recent release of kernel 5.0. Although Linus likes to say that his only reason to move on to the next integer is when he runs out of fingers and toes with which to count the fractional part of the version number, the truth is this kernel is pretty loaded with new features.

On the network front, apart from improvements to drivers like that of the Realtek R8169, 5.0 will come with better network performance. Network performance has been down for the last year or so because of Spectre V2. The bug forced kernel developers to introduce something called a Retpoline (short for “RETurn tramPOLINE“) to mitigate its effect. The changes introduced in kernel 5.0 “[…] Overall [give a greater than] 10% performance improvement for UDP GRO benchmark and smaller but measurable [improvements] for TCP syn flood” according to developer Paolo Abeni.

What hasn’t made the cut yet is the much anticipated integration of WireGuard. WireGuard is a VPN protocol that is allegedly faster, more versatile, and safer than the ones currently supported by the kernel. WireGuard is easy to implement, uses state-of-the-art encryption, and is capable of keeping the VPN link up even if the user switches to a different WiFi network or changes from WiFi to a wired connection.

An ongoing task is the work going into preparing for the Y2038 problem. In case you have never heard of this, UNIX and UNIX-like systems (including Linux) have clocks that count from January 1, 1970. The number of seconds from that date onward is stored in a signed 32-bit value of type time_t. The value is signed because, you know, some programs need to show dates from before the ’70s.

At the time of writing, we are already somewhere in the 01011100 01110010 10010000 10111010 region, and the clock is literally ticking. On January 19, 2038, at 3:14:07 in the morning (UTC), the clock will reach 01111111 11111111 11111111 11111111. One second later, time_t will overflow, flipping the sign of your clock and making your system, along with millions of devices and servers worldwide, believe that we are back in 1901.

Then… well, the usual: planes will fall from the sky, nuclear power stations will melt down, and toasters will explode, rendering the world breakfastless. That is, of course, unless the brave kernel developers come up with a solution in the meantime. Then again, they made the Wii controller work in Linux; what could they not achieve?
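To make the overflow arithmetic above concrete, here is a small, self-contained sketch (not kernel code) that reproduces the wraparound with an explicit 32-bit counter; it assumes a glibc-style gmtime() that accepts negative time_t values:

/* y2038_demo.c: demonstrate the 32-bit time_t wraparound. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
        int32_t last = INT32_MAX;                      /* 0x7FFFFFFF seconds since the epoch */
        int32_t wrap = (int32_t)((uint32_t)last + 1u); /* wraps to INT32_MIN on common compilers */

        time_t t_last = last, t_wrap = wrap;
        char buf[64];

        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t_last));
        printf("last 32-bit second: %s UTC\n", buf);   /* 2038-01-19 03:14:07 */

        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", gmtime(&t_wrap));
        printf("one second later:   %s UTC\n", buf);   /* 1901-12-13 20:45:52 */

        return 0;
}

The ongoing kernel work is essentially about moving timestamps to 64-bit values so that even 32-bit architectures can keep counting well past 2038.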

More stuff to look forward to in Linux kernel 5.0

  • Native support for FreeSync/VRR on AMD GPUs means that your monitor and your video card can now sync up their frame rates, so you won’t see any more tearing artifacts when playing a busy game or watching an action movie.
  • Linux now has native support for Adiantum filesystem encryption and has boosted its performance. This encryption scheme is used in low-powered devices built around ARM Cortex-A7 or lower (think mid- to low-end phones and many SBCs).
  • Talking of SBCs, the touch screen for the Raspberry Pi has at last been mainlined, and Btrfs now supports swap files.

As always, you can find more information about Linux 5.0 by reading Linus’s announcement on the Linux Kernel mailing list, checking out the in-depth articles at Phoronix, and reading the Kernel Newbies report.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.