Thomas Dreibholz, Senior Research Engineer at Simula Research Laboratory, describes how his team is using open source software to build NorNet — an inter-continental Internet testbed for a variety of networked applications.
A critical element of building safer apps is having a secure way of communicating with other apps and systems, something that often requires credentials, tokens, passwords and other types of confidential information—usually referred to as application secrets. We are excited to introduce Docker Secrets, a container native solution that strengthens the Trusted Delivery component of container security by integrating secret distribution directly into the container platform.
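As a rough sketch of the workflow (the commands assume a Docker engine running in swarm mode; the secret and service names are illustrative):

```shell
# Initialize swarm mode if you haven't already
docker swarm init

# Create a secret from stdin
printf 's3cret-value' | docker secret create db_password -

# Services granted the secret see it as an in-memory file
# at /run/secrets/db_password inside their containers
docker service create --name web --secret db_password nginx:alpine
```

The secret is encrypted at rest in the swarm's Raft log and is only mounted into containers that have been explicitly granted access.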
Read more at Docker
PaddlePaddle, Baidu’s open source framework for deep learning, is now compatible with the Kubernetes cluster management system to allow large models to be trained anywhere Kubernetes can run.
This doesn’t simply expand the range of systems that can be used for PaddlePaddle training; it also provides end-to-end deep learning powered by both projects.
Read more at InfoWorld
Analytics plays a key role in digital-ready networks. It reveals rich contextual insights about users, applications, devices, and threats. This helps organizations and their IT professionals make more informed decisions. To make this happen, however, organizations must do two things. First, they must liberate IT time and resources by automating daily networking tasks, which makes room to focus on business innovation. Hence the growing willingness to adopt SDN and NFV.
Second, organizations must build key programming skills in their network engineers.
Read more at SDx Central
In today’s fast-paced online world, speedy loading times are almost considered to be a given. Websites that take a long time to load or lag in between pages are often left behind by the vast majority of Internet users, which is why optimizing this aspect of the visitor’s experience is essential for long-term success.
Apache, currently one of the world’s most widely-used web servers, was not expressly designed to set benchmark records, but it can nevertheless handle an impressive number of requests from remote clients and provide a high level of performance if administrators take the time to implement the following five tips and tricks:
1. Always keep Apache updated to its latest version
Like any piece of software, Apache will work best if upgraded to its latest version. Everything from bug fixes to general improvements will be included in these updates, so it’s worth taking the time to download and install them for your system of choice.
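On the two major package families, keeping Apache current is a one-liner (package names differ by distribution; these are the common ones):

```shell
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install --only-upgrade apache2

# CentOS/RHEL
sudo yum update httpd

# Confirm the running version afterwards
apachectl -v
```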
2. Upgrade to a newer version of the Linux kernel
From version 2.4 onwards, the Linux kernel supports the sendfile system call, which allows for high-performance network transfers and enables Apache to deliver static content much faster. For this reason, it pays to upgrade as soon as possible, even if the actual process isn’t exactly friendly for beginners and requires a bit of in-depth knowledge about the internals of Linux.
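Once you are on a kernel with sendfile support, make sure Apache is actually using it. The standard EnableSendfile directive controls this; note that it defaults to Off in recent httpd releases, partly because of known issues on network filesystems:

```apache
# In httpd.conf or a vhost: let the kernel copy static files
# directly to the network socket instead of buffering in userspace
EnableSendfile On
```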
3. Choose a multi-processing module that works for you
Multi-processing modules allow you to decide how to configure the web server, an important functionality that cannot be neglected at an admin level. Apache currently offers three MPMs to choose from. There’s the “prefork”, the “worker” and the “event”. Study each to get to know their respective advantages and disadvantages, and then choose the one that works best for your particular situation.
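You can check which MPM your build is running, and on Debian-family systems switch between them by toggling modules (the prefork-to-event switch below is just an example):

```shell
# See which MPM your Apache build is currently using
apachectl -V | grep -i mpm

# Debian/Ubuntu: switch MPMs by toggling modules, e.g. from prefork to event
sudo a2dismod mpm_prefork
sudo a2enmod mpm_event
sudo systemctl restart apache2
```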
4. Allocate the appropriate amount of RAM
Out of all the hardware resources that must be taken into account when optimizing Apache, RAM is by far the most important. While you cannot cap Apache’s memory usage directly, you can limit the number of child processes through the MaxRequestWorkers directive, which effectively puts a ceiling on the RAM Apache will consume. Be sure to keep RAM usage within physical limits and never rely on swap, because swapping severely degrades performance.
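As a back-of-the-envelope illustration, MaxRequestWorkers is typically sized as the RAM you can spare for Apache divided by the average footprint of one child process (both figures below are made-up examples; measure your own with a tool like ps or top):

```python
# Rough sizing for Apache's MaxRequestWorkers directive:
# RAM budget divided by the average resident size of one child process.
ram_budget_mb = 2048   # illustrative: RAM reserved for Apache
child_rss_mb = 25      # illustrative: average per-child footprint

max_request_workers = ram_budget_mb // child_rss_mb
print(max_request_workers)  # 81
```

Round the result down and leave headroom for the rest of the system, since spikes beyond physical RAM push the server into swap.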
5. Know your applications
Finally, in order to avoid overburdening your system, be sure to refrain from loading any Apache modules that are not strictly necessary for your application to work. In order to do this, you’ll need to know which applications are running on your server, and disable the modules using the procedures for CentOS and Debian respectively.
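In practice that means listing what is loaded and pruning from there (mod_status below is only an example of a module you might not need):

```shell
# List the modules Apache currently has loaded
apachectl -M

# Debian/Ubuntu: disable an unused module and reload
sudo a2dismod status
sudo systemctl reload apache2

# CentOS/RHEL has no a2dismod; comment out the matching LoadModule
# line in /etc/httpd/conf.modules.d/*.conf instead, then reload:
sudo systemctl reload httpd
```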
As you can see, the aforementioned five tips can make a massive difference when it comes to increasing your Apache web server’s performance. Of course, optimizing performance without also increasing website safety is pointless, so take care to implement adequate security measures as well.

Now, if you are like the vast majority of website owners these days, you probably have your own site on platforms like WordPress, Drupal, Joomla, Magento or SITE123. These were all designed to be as SEO-friendly as possible, but that doesn’t mean they are immune to slow loading times. As always, a badly-run page can be seen as a sign of unprofessionalism, regardless of which website platform you use. So be sure to boost your site’s load speed by employing these essential strategies:
· Use a CDN (Content Delivery Network)
· Use a caching plugin
· Add Expires headers to leverage browser caching
· Clean up your database
· Compress your website with gzip
· Fix all broken links
· Reduce your redirects
· Minify your CSS and JS files
· Replace PHP with static HTML where possible
· Link to your stylesheets, don’t use @import
· Specify image dimensions
· Put CSS at the top and JS at the bottom
· Disable hotlinking of images
· Switch off all plugins you don’t use
· Minimize round trip times (RTTs)
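Two of the items above, gzip compression and Expires headers, take only a few lines of Apache configuration. This is a sketch that assumes mod_deflate and mod_expires are enabled; the content types and lifetimes are examples, not recommendations:

```apache
# Compress text-based responses (mod_deflate)
AddOutputFilterByType DEFLATE text/html text/css application/javascript

# Let browsers cache static assets (mod_expires)
ExpiresActive On
ExpiresByType image/png  "access plus 1 month"
ExpiresByType text/css   "access plus 1 week"
ExpiresByType application/javascript "access plus 1 week"
```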
That concludes our quick rundown of the most important things you can do to boost the loading speed of your website or blog. Nowadays especially, with mobile Internet usage becoming the norm, people are becoming less patient with sites that take forever to load. So make sure that you do everything in your power to keep your website running smoothly and efficiently, and you’ll quickly reap the rewards of your efforts.
There’s a lot of interest in becoming a data scientist, and for good reasons: high impact, high job satisfaction, high salaries, high demand. A quick search yields a plethora of possible resources that could help — MOOCs, blogs, Quora answers to this exact question, books, Master’s programs, bootcamps, self-directed curricula, articles, forums and podcasts. Their quality is highly variable; some are excellent resources and programs, some are click-bait laundry lists. Since this is a relatively new role and there’s no universal agreement on what a data scientist does, it’s difficult for a beginner to know where to start, and it’s easy to get overwhelmed.
Read more at Forbes
Software-defined networking’s biggest accomplishment last year was achieving market traction and validation, says Martin Casado, a general partner at the venture capital firm Andreessen Horowitz. But there are still many challenges ahead for the industry at large and the organizations that aim to drive SDN forward.
“We’ve seen a lot of progress in SDN over the last few years, (but) there is still a lot more work to do,” said Casado, who was previously the co-founder and chief technology officer at Nicira, which was acquired by VMware in 2012. “That said, I’m optimistic that the tangible opportunity will continue to be a strong draw given continued market maturation.”
Casado will elaborate on these ideas and more at Open Networking Summit, April 3-6 in Santa Clara, where he will give a keynote on “The Future of Networking.” Here, he discusses where software-defined networking is headed, the momentum in open source networking projects, challenges they will face, and the best way for companies to get involved in the SDN revolution.
Linux.com: What’s your advice to individuals and companies getting started in SDN?
Martin Casado: Don’t get lost in the noise. While definitions vary, most would agree that SDN involves moving networking functions to a software domain, which changes how it can be created, consumed, and delivered. Yet, like many hyped movements, the term has also been diluted to the point of causing real confusion for those who have not been with the movement long.
I would recommend learning about SDN from a vendor-neutral source, and then determining what value you can get from it, whether as a developer, a vendor, or a user. Then I would align with projects that reinforce your objectives and not spend too much time worrying about every project, product or organization that is being thrown into the SDN bucket.
Linux.com: What have been the biggest successes in SDN in the past year, and what do you expect the industry to accomplish in 2017?
Casado: Market traction and validation were the big takeaways from 2016. The network virtualization space continues to mature with multiple solutions available and individual products breaking the half-billion-dollar mark on software alone. Further, the SD-WAN space continued to gain traction with a number of companies offering innovative solutions. Finally, we’re seeing a new wave of solutions targeting developing markets such as container networking. Both the size of the markets being addressed and the verticalization in multiple spaces are strong signs of the generality and impact SDN is having in the industry. Meanwhile we continue to see great momentum in open source projects and other efforts that drive innovation and adoption.
Linux.com: What will be the biggest challenges for SDN for 2017?
Casado: Maintaining momentum and focus. The industry at-large can be fickle and easily distracted, and while we’ve seen a lot of progress in SDN over the last few years, there is still a lot more work to do. With so many new, exciting technology trends competing for attention, we as a community need to stay focused and continue to drive SDN forward. That said, I’m optimistic that the tangible opportunity will continue to be a strong draw given continued market maturation.
Linux.com: How do we harmonize all the open source networking initiatives across the entire stack and industry?
Casado: To be frank, I don’t think we should. I’m a huge fan of the amount of chaos you find in early markets: It’s all energy and creativity and exploration. A Darwinist system of many ideas, some of enormous value and others that won’t go anywhere. I prefer many conflicting approaches that cover a broad spectrum of the problem domain to trying to foist order and risking constraining innovation too early on. Ultimately, there will be winners and losers and hopefully those that survive and see widespread adoption win because they are the most useful, not because it was pre-ordained by some governing body.
Linux.com: How can companies and individuals best participate in the ‘Open Revolution’ in networking?
Casado: For individuals, I suggest contributing to a project that speaks to you. There is so much great work being done in open networking—from core research, to large open source frameworks, to projects aimed at social good. Contribution can be at any level; it doesn’t have to be code. Documentation, design, outreach, community organization, and evangelization are all very valuable contributions. For companies, the landscape is a bit more complicated. I’d recommend contributing to relevant open source projects that support the movement of functionality to software. This doesn’t have to be an SDN-specific project, but could be an enabler such as Linux, OpenStack, Kubernetes, etc. I strongly believe these contributions are ultimately in the best interest of the company with respect to customer acquisition, maintaining relevance, and recruiting, and they can be done in a way that doesn’t conflict with existing proprietary or closed solutions.
Learn More
Start exploring Linux Security Fundamentals by downloading the free sample chapter today. DOWNLOAD NOW
Last week, we learned to begin a risk assessment by first evaluating the feasibility of a potential attack and the value of the assets you’re protecting. These are important steps to determining what and how much security will be required for your system.
You must also then weigh these considerations against the potential business impacts of a security compromise with the costs of protecting them.
It is hard to calculate the return-on-investment figures that managers need in order to decide how to mitigate a risk. How much is a reputation worth?
Estimating the cost of a cyber attack can be difficult, if not impossible. There is little data on how often various industries suffer from different types of intrusions. Until recent laws were passed, companies would often conceal attacks even from law enforcement.
These factors cause difficulties in making rational decisions about how to address the different risks. Security measures may result in the loss of usability, performance, and even functionality. Often, if usability concerns are not addressed in the design of a secure system, users respond by circumventing security mechanisms.
Still, you can get a good idea of the costs associated with a potential loss of business assets, as well as the costs involved in protecting them, to make an informed decision.
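One common way to put a rough number on this, though certainly not the only one, is annualized loss expectancy: multiply the estimated cost of a single incident by how often you expect it to occur per year, and compare that with the annual cost of the safeguard. All figures below are purely illustrative:

```python
# Annualized loss expectancy (ALE) = single-loss expectancy (SLE)
# multiplied by the annualized rate of occurrence (ARO).
sle = 50_000            # illustrative: cost of one incident, in dollars
aro = 0.2               # illustrative: one such incident every five years
safeguard_cost = 4_000  # illustrative: annual cost of the control

ale = sle * aro                      # expected yearly loss: 10000.0
net_benefit = ale - safeguard_cost   # 6000.0: the control pays for itself
print(ale, net_benefit)
```

If the net benefit is negative, the safeguard costs more per year than the loss it prevents, which is exactly the trade-off the questions below are meant to surface.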
The following questions should be evaluated on a regular basis in order to ensure that the security position is optimal for the environment:
• What is the cost of system repair/replacement?
• Will there be lost business due to disruption?
• How much lost productivity will there be for employees?
• Will there be a loss of current customers?
• Will this cause a loss of future customers?
• Are business partners impacted?
• What is your legal liability?
There are many aspects to the costs associated with securing an IT environment. You should consider all of them carefully:
• Software
• Staff
• Training
• Time for implementation
• Impact to customers, users, workers
• Network, Compute, and Storage resources
• Support
• Insurance
So far in this series, we’ve covered the types of hackers who might try to compromise your Linux system, where attacks might originate, the kinds of attacks to expect, and some of the business tradeoffs to consider around security. The final two parts of this series will cover how to install and use common security tools: tcpdump, wireshark, and nmap.
Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!
Read the other articles in the series:
Linux Security Threats: The 7 Classes of Attackers
Linux Security Threats: Attack Sources and Types of Attacks
Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations
Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark
Engagement in an open source community leads to collaboration, says Jason Hibbets, community evangelist at Red Hat. And social media is one good tool that projects can use to help increase engagement in their communities, he adds, “because you can reach a broad audience at pretty much no-to-low costs.”
Hibbets will discuss how Red Hat has increased engagement with one such social media tool, Twitter chats, in his talk at Open Source Leadership Summit in Lake Tahoe on Feb. 16, 2017. Here, he shares with us some of his reasoning behind why engagement is important, some best practices for increasing engagement, and a few lessons learned from Red Hat’s Twitter chats.
Linux.com: Why should an open source project be concerned with building engagement?
Jason Hibbets: Let’s first start with why have a community in the first place? A community is a group of people who come together with a common vision, collective passion, and shared purpose. Communities bring together a diverse group of people to share work and can accomplish more than individuals can alone.
Many open source projects exemplify these qualities and come together to form a community. Typically, an individual wants to solve a problem (scratch their own itch) and it just so happens that other people are trying to solve a similar problem. When communities collaborate to solve these problems together, it leads to better outcomes and results.
So, why should leaders be concerned with engagement? Engagement leads to collaboration. And if communities can collaborate, then work gets done and they can achieve something together. As an individual, your knowledge is limited. There will be a point when you want feedback, need advice, or get stuck. If you have an engaged community, you are building in a human-powered support system.
Linux.com: What are some of the best practices, in general, for increasing engagement and gaining more active followers?
Hibbets: I’ll share two best practices, but believe me there are a lot more. The first is to provide a safe environment. The second is to create value.
Having a well-written Code of Conduct and enforcing those rules is a foundation for having a safe and inviting environment. This can ultimately lead to increased participation from a more diverse group of contributors and creative problem-solving with faster, more innovative solutions.
A second best practice is to provide value. In the community programs I’ve built, you need to think about why a person would volunteer their precious time to contribute–this is commonly referred to as the “what’s in it for me?” question.
When contributors are finding value in the community, they are more likely to be engaged. And if they are more engaged, they can become your advocate. Which can lead to the best type of marketing for your community, word-of-mouth recommendations.
For more best practices about community building, I recommend reading The Art of Community by Jono Bacon.
Linux.com: Why is social media, and Twitter in particular, a good place for open source projects to do outreach?
Hibbets: In general, social media is a good place for outreach and amplification because you can reach a broad audience at pretty much no-to-low costs (other than your time). The challenge, of course, is putting in the investment and time to build a following, a content strategy, and determine the right way to fit into each social media community.
Twitter is a great platform for open source projects because of ease-of-use and, for now, unfiltered streams. Engagement levels can be higher, and people follow specific hashtags. Once you filter through all the noise, there is a lot of valuable information that can be found for open source communities.
And bonus, there’s a lot of open source behind each Tweet.
Linux.com: What is a Twitter chat?
Hibbets: I like to describe a Twitter chat as a public-facing conversation at a set time, using Twitter as the platform and a hashtag as the way to follow. It’s the equivalent of using a chat room in IRC (Internet relay chat) or similar chat functionality, but instead, you’re using and following a hashtag on Twitter. What it boils down to for our Open Organization community on Opensource.com is to have focused discussions on topics with several subject matter experts invited to participate and help lead the discussion. For example, last October, we talked about the intersection of DevOps and Open Organizations.
There are several different formats Twitter chats can take. We chose to do more of a live event where we are actively Tweeting questions for an hour and watching the responses come in. My team leads the conversation, monitors the responses, and learns from our community. Participants learn from other participants and make valuable connections that enhance their network.
Linux.com: How do you measure progress and what’s the goal?
Hibbets: My talk at the Open Source Leadership Summit will be on building a community using Twitter chats for our Open Organization community. The examples I will use come from my experience doing this for the Open Organization community, so I’ll focus my response on that aspect.
First off, the goal is two-fold: to build awareness of our community and attract new people to join the conversation.
By hosting a Twitter chat, we are able to have an amazing conversation with our community. Seeing the engagement, responses, and interactions really makes me proud as a community manager. We are having a conversation that is engaging to people with vastly different roles–from solutions architects to consultants, and open source project leaders to people managers outside of open source. We have a diverse audience of participants.
So, how do we measure success? There are two main metrics we are concerned with: the number of unique participants and how many Tweets they generate. From there, we can calculate more impressive numbers like total reach and total timeline exposures. These numbers can impress managers, which is helpful, but the more meaningful metrics are really around the number of active participants as well as how many new people continue to join.
To give you some context, on average, we have about 30-50 unique participants generating about 300-400 tweets in about an hour.
Linux.com: What did you learn from hosting regular Twitter chats with your community?
Hibbets: There are three things we learned that I’d like to share. First, there are people out there who not only want to have this conversation in the first place, but want to continue the conversation. The number of repeat participants who come back to our Twitter chats is high for the Open Organization community.
Second, being prepared makes our “live” events successful. We did a number of things (which I cover in extreme detail in my talk) that makes our event run smoothly. A few examples include promoting your Twitter chat in advance, preparing your questions ahead of time, and sharing your questions with invited guests in advance.
Third, having guest hosts and subject matter experts is critical. Nothing draws a crowd more than a crowd, right? We found that inviting experts to join us and putting them in the spotlight worked really well for our community building efforts.
Join us for a future #OpenOrgChat Twitter chat to see what it’s all about.
Want to learn more about what it takes to grow an open source project? Tune in for the free live video of Open Source Leadership Summit. Sign up now!
Satya Nadella, the CEO of Microsoft, famously said: “Every business will become a software business, build applications, use advanced analytics and provide SaaS services.” That’s a bold prediction. Aaron Williams, Head of Advocacy and DC/OS at Mesosphere, believes it’s true and that DC/OS is the bridge to this new world. He tells us why at MesosCon Asia 2016.
There has to be a business case for all of this upheaval. For early adopters such as Adobe, Twitter, IBM, and many other industry bigwigs, it’s about doing things nobody could do before, and they have the resources to experiment.
But what do you do if you’re not an industry titan with vast resources? Containers, microservices, Apache Mesos: these are all massive agents of change, upending datacenters and remaking them in completely different ways, and changing how businesses operate. Everything is different. “It’s really a story of increased complexity,” says Williams, “Going from a single mainframe to multiple servers to virtual machines to containers inside virtual machines. You increase the sophistication of your data center. You increase the complexity of your data center.”
All of this requires much more than just Mesos. It’s a constellation of all different kinds of software: data analytics engines, containers, container orchestrators, service discovery, monitoring and alerting. The good news is that all of this is open source software. It’s freely available, freely shared, and supported by large communities of skilled, motivated users. This is one of the most startling changes — competitors in all industries cooperating on building and sharing core software stacks.
The bad news is the complexity: you just want to build your apps and services and not have to invest large resources in building the supporting framework. This is where DC/OS comes in. Williams says, “I think what you’ll find is that the DC/OS project does a good job of bringing together the core components that are needed, makes it easy for you to install, easy to get started… We’ve got a GUI and a CLI… You can install your favorite frameworks, analytics, big data, fast data, etc. Then we have what’s called the Universe, which gives you an easy one-click or one-command way to install these frameworks into your data center.”
In a short amount of time, we’ve gone from having to painfully piece everything together and do a lot of custom coding to having a nice ready-to-use platform in DC/OS.
Watch Williams’ complete talk (below) to learn more about the key DC/OS components, and how large vendors like Autodesk use DC/OS to streamline their datacenter and invest more resources in microservices and applications that move their businesses forward.
Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>
Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now to save over $125!