
Save the Whale: Docker Rightfully Shuns Standardization

You can be forgiven for thinking Docker cares about standardization. 

A little more than a year ago Docker donated “its software container format and its runtime, as well as the associated specifications,” to the Open Container Project to be housed under the Linux Foundation. In the FAQ, the Linux Foundation stressed, “Docker has taken the entire contents of the libcontainer project, including nsinit, and all modifications needed to make it run independently of Docker, and donated it to this effort.”

It was euphoric, this kumbaya moment. Many, including Google’s Kelsey Hightower, thought the container leader was offering full Docker standardization in our time. Nope.

As Docker founder Solomon Hykes declared last week, “We didn’t standardize our tech, just a very narrow piece where it made sense.” Importantly, what Hykes is arguing, a “reasonable argument against weaponized standards,” may be the best kind of “Docker standardization” possible.

Read more at InfoWorld

Multifactor Authentication with Google Authenticator

Google Authenticator provides one-time passwords to smartphone owners for multifactor authentication, or you can integrate it into other applications, such as blogs.

Login security increases significantly when using a combination of factors to authenticate a user (i.e., multifactor authentication). In most situations, two-factor authentication is usually enough. The first authentication factor is usually a password or key, with various possibilities for the second factor, including hardware tokens owned by authorized users or one-time password (OTP) generators that provide OTP tokens. One-time passwords come in several varieties (e.g., hardware, software, grid card). A popular, free, and simple way to implement two-factor authentication (2FA) with OTP is Google Authenticator, which is available in the form of an app for iOS and Android and as source code [1] for the server side.
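For a sense of what the app and the server are actually computing, here is a minimal sketch of the time-based one-time password (TOTP) algorithm defined in RFC 6238, which Google Authenticator implements. It uses only the Python standard library; the base32 secret shown is a demo value, not a real credential, and a production deployment would use the official PAM module or a vetted library rather than hand-rolled code.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, interval=30, digits=6):
    """Compute a time-based one-time password (RFC 6238 / RFC 4226),
    the scheme Google Authenticator uses for its 2FA codes."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    # "JBSWY3DPEHPK3PXP" is a widely used demo secret, not a real credential.
    print(totp("JBSWY3DPEHPK3PXP"))

The server stores the same shared secret and accepts a code only if it matches the current time step (usually allowing one step of clock drift), which is why the codes expire every 30 seconds.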

Read more at ADMIN

MediaTek’s 10nm Mobile-Focused SoC Will Tap Cortex-A73 and -A35

MediaTek has revealed an upcoming Helio X30 system-on-chip (SoC) that will likely be one of the first 10nm fabricated SoCs to arrive when it ships in mid-2017. The deca-core mobile SoC will also be one of the first to use ARM’s new Cortex-A73 successor to the Cortex-A72, according to PhoneRadar, which based its story in part on an EETimes China interview [translated] with MediaTek CTO Zhu Shangzu.

Manufactured using TSMC’s 10nm FinFET process, the mobile-focused Helio X30 has a tri-cluster design with two 2.8GHz, Cortex-A73 based “Artemis” cores, as well as four 2.2GHz Cortex-A53 cores and four 2.0GHz Cortex-A35 cores. The X30 also sports the high-end, quad-core PowerVR 7XT GPU, which is capable of driving VR headsets, says PhoneRadar.

The Helio X30 will support up to 8GB of LPDDR4 RAM and is powerful enough to drive a 26-megapixel rear-facing camera, as well as a front-facing cam. The SoC is said to support UFS 2.1 storage, as well as a three carrier aggregation (3CA) modem that supports the speedy CAT 12 LTE standard.

MediaTek’s latest SoC follows its similarly deca-core, but 20nm fabricated, Helio X20 and X25. These SoCs debuted the tri-cluster concept, but use only two architectures as opposed to the X30’s three. The X20 and X25 have two Cortex-A72 cores and two quad-core banks of Cortex-A53 cores with different clock rates. The Helio X25, which is exclusive to the Android-based Meizu Pro 6 smartphone, clocks the Cortex-A72 cores higher, at 2.5GHz instead of 2.3GHz, and has a faster 850MHz version of the Mali T880 MP4 GPU.

Cortex-A73 cores should provide a 30 percent better sustained performance and efficiency than Cortex-A72, claims ARM, which is in the process of being acquired by SoftBank. The tiny, 0.65 x 0.65mm design, which has been licensed by HiSilicon and Marvell, in addition to MediaTek, features an Artemis microarchitecture borrowed from the rarely used Cortex-A17 chip. Artemis emphasizes “sustained performance,” which means that mobile devices can maintain peak performance for longer periods before being forced to throttle down due to heat buildup. The Helio X30 also features power-efficient Cortex-A35 cores for its low-end cluster. According to ARM, Cortex-A35 draws about 33 percent less power per core and occupies 25 percent less silicon area, compared to Cortex-A53. The -A35 is not as power efficient as the new, IoT-focused Cortex-A32, which is slower, but smaller and more efficient. Cortex-A32 is the first ARMv8 chip that works only in 32-bit mode.

Coming Up: Apple A10 and Snapdragon 821

According to PhoneRadar, the Helio X30 won’t ship for another year, either at the end of the second quarter or the beginning of the third. A lot can happen in that time, including the arrival of the upcoming Apple A10, due in the iPhone 7, which is said to be fabricated at 10nm as well. The latest rumors have Apple sticking with two Cortex-A72 or -A73-like cores, following the minimalist design of the dual-core, 16nm Apple A9/A9X. Previously, the A10 was rumored to have between three and six cores.

In AnTuTu benchmarks, the dual-core Apple A9 has beaten most other Android-targeted mobile SoCs with many more cores. The A9 was edged out only by Qualcomm’s quad-core, 14nm Snapdragon 820, which has four Cortex-A72-like “Kryo” cores, although the 14nm, octa-core Samsung Exynos 8890 was close behind. In other benchmarks, the A9 has beaten the Helio X20 and X25.

It’s unclear whether Apple has proven that high core counts don’t matter or whether big.LITTLE dual- and tri-cluster SoC designs have yet to evolve to their full potential. Battery life and other considerations are also important. It remains to be seen how these complex multi-core designs are following through on promises to offer smoother transitions between operation modes like high-power video and VR, standard text and voice, and sleep/idle.

Meanwhile, with the move to VR and augmented reality, GPUs are becoming almost as important as CPUs. Indeed, Imagination’s PowerVR 7XT is expected to beat out the Helio X20’s ARM Mali GPU in high-end gaming.

MediaTek cares more about competition on high-end Android devices than it does about Apple. Principally, this comes down to Qualcomm and Samsung, with upstarts like HiSilicon’s Kirin line advancing from below. Qualcomm recently introduced a modest upgrade to the Snapdragon 820 called the Snapdragon 821. Due to arrive in devices later this year, the SoC is claimed to offer 10 percent faster performance and better battery life than the 820.

Intel’s Kaby Lake Stays at 14nm

If all had gone according to plan, it would be Intel, not MediaTek, announcing the first 10nm processor this month. Last month, Intel began shipping its seventh-generation Kaby Lake Core processors, but with few details. We do know that the 7th Gen Core CPUs, which were originally meant to debut a 10nm process, will instead stick with the same 14nm process used by the sixth-gen Skylake and fifth-gen Broadwell chips.

Intel’s ditching of its traditional “tick-tock” product release cadence is a sign that the Moore’s Law pace of chip miniaturization is slowing. Still, Intel’s 10nm “Cannonlake” is expected in 2017, and on the ARM side, TSMC is already working on a 7nm process. The actual transistor density of the 7nm design is expected to be very similar to that of Intel’s 10nm process, however.

Meanwhile, Kaby Lake should offer incremental performance improvements over Skylake while squashing some of the bugs that have nipped at Skylake’s heels. We’ll find out more next week at the Intel Developer Forum. One thing Kaby Lake won’t be is a mobile processor, unless you count high-end tablets and all-in-ones. Intel appears to have dropped future development on its 14nm “Cherry Trail” Atom SoCs, which failed to win over many phone vendors, although more embedded-oriented SoCs like the “Braswell” Celeron, Pentium, and Atom x5-e8000 continue to do well.

Inside How Microsoft Views Open Source

Editor’s Note: This article is paid for by Microsoft as a Diamond-level sponsor of LinuxCon North America, to be held Aug. 22-24, 2016, and was written by Linux.com.

Few saw Microsoft’s embrace of open source coming. When CEO Satya Nadella declared two years ago that “Microsoft loves Linux,” it’s safe to say many in the open source community were flabbergasted. Indeed, suspicion and disbelief continue in certain circles despite Microsoft’s growing number of products built on, or including, open source.

Among Microsoft’s recent open source efforts are its contributions to the FreeBSD project, support for Ubuntu as a subsystem on Windows 10, and the Xamarin software development kit. Microsoft has also partnered with The Linux Foundation for its official Linux on Azure certification.

More than a few folks wonder what the heck is going on. Could Microsoft really be turning into a full-blown open source company?

We talked to Wim Coekaerts, corporate vice president of enterprise open source at Microsoft (yes, Mr. Linux himself, who until a few months ago led Linux and virtualization engineering at Oracle), to get a closer look at how Microsoft views open source internally and maybe catch a glimpse of the company’s open source end game.

Linux.com: You were at Oracle for 21 years, heading its open source initiatives. What was happening at Microsoft that interested you so much that you would join the company?

Wim Coekaerts: Yes, I was at Oracle for 21 years. It was an exciting environment and I was involved in most of the initial open source efforts there. But it was also time for a change.

In mid-January, I had a chat with Scott Guthrie and Mike Neil at Starbucks and they started telling me about all the things Microsoft was doing with open source. Now, I knew about some of those things like everyone else who keeps a close eye on open source, but I was totally blown away by how much more there was.

I hadn’t seen or heard anything about those open source projects in the news or anywhere, and they were really interesting projects.  I was totally blown away.

Linux.com: What were some of those projects that you hadn’t heard about before that “blew you away?”

Coekaerts: Oh, there were many. But certainly VS Code, which combines a code editor with developer tools for the core edit-build-debug cycle. It’s a new type of tool with editing capabilities, light integration with other tools, debugging support, and other features. Other interesting projects included the documentation generator for dotnet; OMI, which is an open source CIM server; and Project Malmo, an AI experimentation platform built on top of Minecraft. Oh, and the Azure documentation on GitHub, developed in the open under a CC3 license.

Linux.com: Ok, so you were excited about Microsoft open source projects and joined the company. What are the ideas at Microsoft for open source now, under your lead especially?

Coekaerts: Well, I’ve only been here for four months so there’s not a huge roadmap yet. But there are so many open source projects going on already that my first step was to create a map of those so we’ll have better insights on where we are and where we want to go from here.

I’m also making sure what we offer now is consistent with rules and projects in Linux distributions. The customer experience really matters, and we want to ensure we keep customer trust because we’ve truly earned it, so we’re taking the time to make sure everything installs right, that the right version is running, and that everything really runs correctly and smoothly across the board.

But our focus is on much more than just our open source products. For example, we’ve found a lot of open source projects that don’t have enough developer tools so we’re helping with that and with QA too. We’re not just following the herd with products of our own, we’re actively leading and sharing within the community.

Linux.com: And that brings us to the big question. Is Microsoft setting its sights on becoming a full-fledged open source company?

Coekaerts: We’re building products that are critical for us to offer to make our customers happy. Certainly open source is part of that, for developers and customers alike. We are in the business of providing what our customers want and need and that includes open source. Our customers want choices, so we give them choices.

We are very committed to open source internally. It’s a really exciting time to be at Microsoft.

Open source is growing internally and externally, and the opportunities that brings to everyone are almost limitless. Stay tuned, we have some more very exciting open source news coming up.

Linux.com: Which brings us to your keynote at LinuxCon. Can you give us a preview of what you’ll be speaking about, or at least a few hints?

Coekaerts: Why of course. I’ll be giving a detailed overview of what Microsoft is doing now with open source and how we can be of help. We are committed to helping everyone, not just ourselves, and so I’ll cover some of the ways we can contribute and assist. We’re also releasing a number of things in the next several weeks so I’ll be speaking about those, too.

I hope to see everyone reading this there. I like to share what we know and have to offer. But I also like to hear thoughts and concerns from people working with this every day so that I stay informed and focused on what else is needed in the community.

 

Microsoft is a Diamond sponsor of LinuxCon North America. You can join Wim and the team working with Linux & open source technologies in Microsoft’s booth #3 at LinuxCon in Toronto, August 22-24. Make sure you visit Microsoft’s Linux website if you’re interested in learning more about how they work with Linux & open source technologies in the cloud.

 

CloudNativeDay Brings Containers, Microservices, PostgreSQL, Mantl.io, OpenWhisk and More

The first cloud native-focused event hosted by The Cloud Native Computing Foundation will gather leading technologists from open source cloud native communities in Toronto on Aug. 25, 2016, to further the education and advancement of cloud native computing.

Co-located with LinuxCon and ContainerCon North America, CloudNativeDay will feature talks from IBM, 451 Research, CoreOS, Red Hat, Cisco and more. For a sneak peek at the event’s speakers and their presentations, read on.

For Linux.com readers only: get 20% off your CloudNativeDay tickets with code CND16LNXCM. Register now.

Scaling Containers from Sandbox to Production

An IT industry renaissance is occurring as we speak around cloud, data, and mobile technology, and it’s driven by open source code, community, and culture.

IBM’s VP Cloud Architecture & Technology, Dr. Angel Diaz, opens up CloudNativeDay with a keynote on “Scaling Containers from Sandbox to Production,” where he will discuss how the digital disruption in today’s market is largely driven by containers and other open technologies. With a container-centric approach, developers are able to quickly stand up containers, iterate, and change their architectures. Dr. Diaz will provide insight on how enterprises are able to transform the way they grow, maintain, and rapidly expand container and microservice-based applications across multiple clouds. Dr. Diaz will also discuss the role of CNCF in creating a new set of common container management technologies informed by technical merit and end user value.

Real-World Examples of Containers and Microservices Architectures

Two of the fastest-growing trends in technology, containers and microservices, are enabling DevOps. With rapid growth comes rapid confusion. Who is using the technology? How did they build their architectures? What is the ROI of the technology?

Having real-world examples of how leading-edge companies are building containers and microservices architectures will help answer these burning questions. Donnie Berkholz, Research Director of 451 Research’s Development, DevOps & IT Ops channel, will provide these examples in his talk, “Cloud Native in the Enterprise: Real-World Data on Container and Microservice Adoption.”

Berkholz’s current research is steeped in the latest innovative technologies employed for software development and software life cycle management to drive business growth. His research will shape this session exploring the state of cloud-native prerequisites in the enterprise, the container ecosystem including current adoption, and data on companies moving to cloud-native platforms.

When Security and Cloud Native Collide

In one world, the cloud native approach is redefining how applications are architected, throwing many traditional assumptions out of the window. In the other world, traditional security teams ensure projects in the enterprise meet a rigid set of security rules in order to proceed. What happens when these two worlds collide?

Apprenda Senior Director Joseph Jacks, Box Site Reliability Engineer Michael Ansel, and Tigera Founder and CEO Christopher Liljenstolpe join forces to discuss “Whither Security in a Cloud-Native World?”

This panel will dive into how applications will be secured, who will define security policies, and how these policies will be enforced across hybrid environments – both private and public clouds, and traditional bare metal / VM and cloud-native, containerized workloads.

Peek Inside The Cloud Foundry Service Broker API

Services are integral to the success of a platform. For Cloud Foundry, the ability to connect to and manage services is a crucial piece of its platform.

Abby Kearns, VP of industry strategy for Cloud Foundry Foundation, will discuss why they created a cross-foundation working group with The Cloud Native Computing Foundation to determine how the Cloud Foundry Service Broker API can be opened up and leveraged as an industry-standard specification for connecting services to platforms.

In her presentation, “How Cloud Foundry Foundation & Cloud Native Computing Foundation Are Collaborating to Make the Cloud Foundry Service Broker API the Industry Standard,” Kearns will share the latest progress on a proof of concept that allows services to write against a single API, and be accessible to a variety of platforms.

Innovative Open Source Strategies Key to Cloud Native in the Enterprise

As IT spending on cloud services reaches $114 billion this year and grows to $216 billion by 2020 (according to a report released by Gartner), cloud-native apps are becoming commonplace across enterprises of all sizes.

Enterprises are investing in people and process to enable cloud native technologies. Adoption of collaborative and innovative open source technologies has become a key factor in their success, according to Red Hat Vice President and Chief Technologist Chris Wright.

Wright’s closing keynote at CloudNativeDay, “Bringing Cloud Native Innovations into the Enterprise,” will discuss the open source strategies and organizations driving this success. After more than a decade serving as a Linux kernel developer working on security and virtualization, Wright understands the importance of ensuring industry collaboration on common code bases, standardized APIs, and interoperability across multiple open hybrid clouds.

 

Read more on CloudNativeDay. Save 20% when using code CND16LNXCM and register now.

 
 

Sometimes the Most Qualified Applicant Is Not the Most Obvious

Hiring managers often give preference to (and even hold out for) those who have the “right” recent roles, the “right” internships, a specific number of years of experience, or a degree from the “right” university. Interestingly, those companies often state bold goals for improving diversity, yet their upstream talent practices diminish the likelihood of increasing it.

In defense of slow improvement in diversity, companies often cite, and even complain, that the talent “pipeline” simply does not have enough diversity, so how could they possibly hire diverse candidates out of a non-diverse pipeline without compromising on skills? Of course the potential candidate pool is not diverse if the specs for candidates reflect exactly who you already have on the board!

Research shows that in hiring, the deck is often stacked against low-income and minority candidates — especially when it comes to the technology sector. Degree requirements alone drastically limit the candidate pool, with approximately 70 percent of American adults over 25 finding themselves without an undergraduate degree — particularly racial/ethnic minorities or low-income adults. For those employers specifically focused on top-tier or Ivy League graduates, their talent pool is further limited to less than 1 percent of college graduates.

Read more at TechCrunch

Virtual Machine Introspection: A Security Innovation With New Commercial Applications

A few weeks ago, Citrix and Bitdefender launched XenServer 7 and Bitdefender Hypervisor Introspection, which together compose the first commercial application of the Xen Project Hypervisor’s Virtual Machine Introspection (VMI) infrastructure. In this article, we will cover why this technology is revolutionary and how members of the Xen Project Community and open source projects that were early adopters of VMI (most notably LibVMI and DRAKVUF) collaborated to enable this technology.

Evolving Security Challenges in Virtual Environments

Today, malware executes in the same context and with the same privileges as anti-malware software, and this is an increasing problem. The Walking Dead analogy I introduced in this Linux.com article is again helpful. Let’s see how traditional anti-malware software fits into the picture and whether our analogy applies to it.

In the Walking Dead universe, Walkers have taken over the earth, feasting on the remaining humans. Walkers are active all the time, and attracted by sound, eventually forming a herd that may overrun your defences. They are strong, but are essentially dumb. As we explored in that Linux.com article, people make mistakes, so we can’t always keep Walkers out of our habitat.

For this analogy, let’s equate Walkers with malware. Let’s assume our virtualized host is a village, consisting of individual houses (VMs), while the Hypervisor and network provide the infrastructure (streets, fences, electricity, …) that binds the village together.

Enter the world of anti-malware software: assume the remaining humans have survived for a while and re-developed technology to identify Walkers fast, destroy them quickly, and fix any damage caused. This is the equivalent of patrols, CCTV, alarmed doors/windows and other security equipment, troops to fight Walkers once discovered, and a clean-up crew to fix any damage. Unfortunately, the reality is that traditional anti-malware technology can only be deployed within individual houses (aka VMs) and not on the streets of our village.

To make matters worse, until recently malware was relatively dumb. However, this has changed dramatically in the last few years. Our Walkers have evolved into Wayward Pines’ Abbies, which are faster, stronger, and more intelligent than Walkers. In other words, malware is now capable of evading or disabling our security mechanisms.

What we need is the equivalent of satellite surveillance to observe the entire village, and laser beams to remotely destroy attackers when they try and enter our houses. We can of course also use this newfound capability to quickly deploy ground troops and clean-up personnel as needed. In essence that is the promise that Virtual Machine Introspection gives us. It allows us to address security issues from outside the guest OS without relying on functionality that can be rendered unreliable from the ground. More on that topic later.

From VMI in Xen to the First Commercial Application: A Tale of Collaboration

The idea of Virtual Machine Introspection for the Xen Project Hypervisor hatched at Georgia Tech in 2007, building on research by Tal Garfinkel and Mendel Rosenblum in 2003. The technology was first incorporated into the Xen Project Hypervisor via the XenAccess and mem-events APIs in 2009. To some degree, this was a response to VMware’s VMsafe technology, which was introduced in 2008 and deprecated in 2012, as the technology had significant limitations at scale. VMsafe was replaced by vShield, which is an agent-based, hypervisor-facilitated, file-system anti-virus solution that is effectively a subset of VMsafe.

Within the Xen Project software, however, Virtual Machine Introspection technology lived on due to strong research interest and specialist security applications where trading off performance against security was acceptable. This eventually led to the creation of LibVMI (2010), which made these APIs more accessible. This provided an abstraction that eventually allowed a subset of Xen’s VMI functionality to be exposed to other open source virtualization technologies such as KVM and QEMU.

In May 2013, Intel launched its Haswell generation of CPUs, which is capable of maintaining up to 512 EPT pointers from the VMCS via the #VE and VMFUNC extensions. This proved to be a potential game-changer for VMI, enabling hypervisor-controlled and hardware-enforced strong isolation between VMs with lower overheads than before, and it led to a collaboration of security researchers and developers from Bitdefender, Cisco, Intel, Novetta, TU Munich, and Zentific. From 2014 to 2015, the XenAccess and mem-events APIs were re-architected into the Xen Project Hypervisor’s new VMI subsystem; altp2m and other hardware capabilities were added, along with support for ARM CPUs; and a production-ready baseline was released in Xen 4.6.

Citrix and Bitdefender collaborated to bring VMI technology to market: XenServer 7.0 introduced its Direct Inspect APIs, built on the Xen Project’s VMI interface. It securely exposes the introspection capabilities to security appliances, as implemented by Bitdefender HVI.

What Can Actually Be Done Today?

Coming back to our analogy: what we need is the equivalent of satellite surveillance to observe the entire village. Does VMI deliver? In theory, yes: VMI makes it possible to observe the state of any virtual machine (house and its surroundings in the village), including memory and CPU state and to receive events when the state of the virtual machine changes (aka if there is any movement). In practice, the performance overhead of doing this is far too high, despite using hardware capabilities.

In our imagined world that is overrun by Walkers and Abbies, this is equivalent to not having the manpower to monitor everything, which means we have to use our resources to focus on high value areas. In other words, we need to focus on the suspicious activity on system perimeters (the immediate area surrounding each of our houses).

This focus is executed by monitoring sensitive memory areas for suspicious activity. When malicious activity is detected, a solution can take corrective actions on the process state (block, kill) or VM state (pause, shutdown) while collecting and reporting forensic details directly from a running VM.

Think of a laser beam on our satellite that is activated whenever an Abbie or Walker approaches our house. In technical terms, the satellite and laser infrastructure maps to XenServer’s Direct Inspect API, while the software that controls and monitors our data maps onto Bitdefender’s Hypervisor Introspection.

It is important to stress that monitoring and remedial action take place from the outside, using the hypervisor to provide hardware-enforced isolation. This means that our attackers can disable neither the surveillance nor the laser beams.

Of course, no security solution is perfect. This monitoring software may not always detect all suspicious activity if that activity does not impact VM memory. This does not diminish the role of file-system-based security; we must still be vigilant, and there is no perfect defense. In our village analogy, we could also be attacked through underground infrastructure such as tunnels and sewers. In essence, this means we have to use VMI together with traditional anti-malware software.

How does VMI compare to traditional hypervisor-facilitated anti-virus solutions such as vShield? In our analogy, these solutions require central management of all surveillance equipment that is installed in our houses (CCTV, alarmed doors/windows, …), while the monitoring of events is centralized, much like a security control centre in our village hall. Although such an approach significantly simplifies monitoring and managing what goes on within virtual machines, it does not deliver the extra protection that introspection provides.

You can find more information (including some demos) about VMI, the XenServer Direct Inspect API, and Bitdefender Hypervisor Introspection online.

Conclusion

The development of VMI and its first open source and commercial applications show how the Xen Project community is innovating in novel ways, and is capable of bringing revolutionary technologies to market. The freedom to see the code, to learn from it, to ask questions and offer improvements has enabled security researchers and vendors such as Citrix and Bitdefender to bring new solutions to market.

It is also worth pointing out that hardware-enabled security technology is moving very fast: only a subset of Intel’s #VE and VMFUNC extensions are currently being deployed in VMI. Making use of more hardware extensions carries the promise of combining the protection of out-of-guest tools with the performance of in-guest tools.

What is even more encouraging is that other vendors such as A1Logic, Star Lab and Zentific are working on new Xen Project-based security solutions. In addition, the security focused, Xen-based OpenXT project has started to work more closely with the Xen Project community, which promises further security innovation.

A few of these topics will be discussed in more detail during the Xen Project Developer Summit, happening in Toronto, Canada, from August 25 – 26, 2016. You can learn more about the event here.

Bad Dockerfile

If you deal with Docker, one of the security challenges you might come across is that of image content security. When I talk about this, I mean some way of verifying that the software in an image is:

  • Free from known software vulnerabilities in the base OS
  • Free from known software vulnerabilities in any added third party packages
  • Free from malicious software (backdoors, rootkits etc.)

This is different from image integrity, which to my mind is something that can be addressed with content trust and Notary. …
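As a rough illustration of where such content checks start, the sketch below (a Python example assuming the docker CLI is installed and the image is available locally; the image name is just a placeholder) uses docker create and docker export to dump an image’s filesystem to a tarball so the installed packages can be inspected offline, without ever running the image. A real scanner would then parse the package database and compare versions against a vulnerability feed; dedicated tools such as CoreOS’s Clair automate exactly that.

import subprocess
import tarfile
import tempfile

IMAGE = "ubuntu:16.04"  # placeholder; any locally available image works

def export_image_filesystem(image):
    """Create a stopped container from the image and export its root
    filesystem as a tar archive, without ever running the image."""
    container_id = subprocess.check_output(["docker", "create", image]).decode().strip()
    tar_file = tempfile.NamedTemporaryFile(suffix=".tar", delete=False)
    try:
        subprocess.check_call(["docker", "export", container_id], stdout=tar_file)
    finally:
        tar_file.close()
        subprocess.check_call(["docker", "rm", container_id], stdout=subprocess.DEVNULL)
    return tar_file.name

def members_under(tar_path, prefix):
    """List archive members under a given path inside the exported image."""
    with tarfile.open(tar_path) as tar:
        return [m.name for m in tar.getmembers()
                if m.name.lstrip("./").startswith(prefix)]

if __name__ == "__main__":
    tar_path = export_image_filesystem(IMAGE)
    # On Debian/Ubuntu bases, var/lib/dpkg/status records installed packages
    # and versions; a real scanner would parse it and compare against CVE data.
    print(members_under(tar_path, "var/lib/dpkg"))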

Read more at STIndustries

Linux Flaw Allows Attackers to Hijack Web Connections

Researchers discovered that a Transmission Control Protocol (TCP) specification implemented in Linux creates a vulnerability that can be exploited to terminate connections and conduct data injection attacks.

The flaw, tracked as CVE-2016-5696, is related to a feature described in RFC 5961, which should make it more difficult to launch off-path TCP spoofing attacks. The specification was formulated in 2010, but it has not been fully implemented in Windows, Mac OS X, and FreeBSD-based operating systems. However, the feature has been implemented in the Linux kernel since version 3.6, released in 2012.
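RFC 5961’s defense works by sending rate-limited “challenge ACKs,” and the researchers showed that Linux’s shared, global limit on those ACKs (100 per second by default) acts as a side channel an off-path attacker can measure. Until a patched kernel is installed, the commonly suggested stop-gap is to raise that limit so the budget is effectively impossible to exhaust. The Python sketch below, assuming an affected Linux host, simply reads and (when run as root) rewrites the relevant sysctl, the equivalent of sysctl -w net.ipv4.tcp_challenge_ack_limit=999999999.

import os

# The sysctl at the heart of CVE-2016-5696: the global per-second budget of
# challenge ACKs is the shared counter the off-path attack measures.
SYSCTL = "/proc/sys/net/ipv4/tcp_challenge_ack_limit"

def read_limit():
    with open(SYSCTL) as f:
        return int(f.read().strip())

def raise_limit(value=999999999):
    """Raise the limit so it cannot realistically be exhausted; requires root.
    This is only a mitigation; updating the kernel is the real fix."""
    with open(SYSCTL, "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    print("current tcp_challenge_ack_limit:", read_limit())
    if os.geteuid() == 0:
        raise_limit()
        print("raised to:", read_limit())
    else:
        print("run as root (or use sysctl) to change the value")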

Read more at Security Week

10 IoT Security Best Practices For IT Pros

IT professionals have to treat internet of things (IoT) vulnerabilities as they would vulnerabilities in databases or web applications. Any flaw can bring unwelcome attention, both to those making affected products and to those using them, and may prove useful for compromising other systems on the network. When everything is connected, security is only as strong as the weakest node on the network.

The Internet Crime Complaint Center (IC3), a partnership between the FBI, the National White Collar Crime Center, and the Bureau of Justice Assistance, issued a warning in September 2015 about the risks posed by internet of things (IoT) devices.

“As more businesses and homeowners use web-connected devices to enhance company efficiency or lifestyle conveniences, their connection to the Internet also increases the target space for malicious cyber actors,” the IC3 alert said. “The FBI is warning companies and the general public to be aware of IoT vulnerabilities cybercriminals could exploit, and offers some tips on mitigating those cyber threats.”

Read more at InformationWeek