
5 Developers Explain Why They Attend ApacheCon

ApacheCon North America and Apache Big Data are coming up in just a few weeks, and they are events that Apache and open source community members won’t want to miss.

Apache products power half the Internet, manage exabytes of data, execute teraflops of operations, store billions of objects in virtually every industry, and enhance the lives of countless users and developers worldwide. And behind those projects is a thriving community of more than 4,500 committers from around the world.

ApacheCon, the annual conference of The Apache Software Foundation, is the place where all of those users and contributors can meet to collaborate on the next generation of cloud, Internet, and big data technologies.

Here, five attendees of last year’s ApacheCon and Apache Big Data explain how they benefited from the conference.

1. Learn from experienced developers

“You meet the best people around the globe who share the same passion for software and sharing. It’s great listening to experienced senior programmers and the interesting use cases they have been solving.” – Yash Sharma, a contributor to Apache Drill and Apache Calcite, and a committer on Apache Lens.

2. Reach consensus faster

“You’re able to meet with some of the folks and talk about things that may take more time than on the (mailing) lists. You’re able to exchange ideas before bringing them to the community. Face to face can have a huge impact on attitude and interaction moving forward. Sometimes it’s tough to put tone in email, so it’s good to share in a personal manner.” – Jeff Genender, who is involved in several Apache projects including Camel, CXF, ServiceMix, Mina, TomEE, and ActiveMQ.

3. Meet your ecosystem partners

“I had the opportunity to talk with committers and PMC members of other projects that are built on top of Apache jclouds. At the time of ApacheCon we had to make some unpopular decisions such as dropping support for unmaintained providers, or rejecting some pull requests that had little hope to progress, and one of the objectives I had was to directly discuss with the jclouds ecosystem which impact that could have, how the projects could collaborate better, and how we could better align our roadmaps.” – Ignasi Barrera, Chair of Apache jclouds.

4. Explore other open source projects

“For me ApacheCon is all about community. I met so many great people, had a lot of thoughtful conversations, and heard about dozens of very interesting projects I had no idea existed.” – Andriy Redko, who participates in Apache CXF.

5. Meet your family

“Only after the ApacheCon did I understand the real power of Apache. For me, before ApacheCon it was just a group of geeks who try to write awesome code to make the world a better place, but now I feel like I’m a member of a huge family who cares very much for each other. It was like, what it seems to be a code base became home for me and now I’m not just trying to improve the code base but rather to make the family bigger in every aspect.” – Dammina Sahabandu, who’s involved in Apache Bloodhound.

ApacheCon North America and Apache Big Data take place May 11-13 in Vancouver, B.C.

 

Register Now for ApacheCon North America

Register Now for Apache: Big Data

 

Network Time Keeps on Ticking with Long-Running NTP Project

“Time is an illusion. Lunchtime doubly so.” This quote from The Hitchhiker’s Guide to the Galaxy is the title of a recent article by George V. Neville-Neil in the Communications of the ACM that takes an in-depth look at how time is kept for individual machines and across computer networks. The article mentions one approach to improving computer timekeeping that has been around since the 1980s: the Network Time Protocol (NTP).

According to Neville-Neil, “any discussion of time should center around two different measurements: synchronization and syntonization.” Synchronization, he says, is loosely defined as “how close two different clocks are to each other at any particular instant,” whereas syntonization “is the quality of the timekeeping of an individual clock.” Computers, like wristwatches, use quartz crystals as the basis of their internal timekeeping, and cheaper crystals are less stable than expensive ones.

Judah Levine, a physicist in the Time and Frequency Division of the National Institute of Standards and Technology (NIST), explained it this way: “The price of these [commodity] systems is very important, and so the cheapest possible hardware components are used. This is especially true for the timing hardware because time accuracy is not specified for these systems, and most purchasers don’t choose a system based on its timing accuracy.”

Levine added, “High-end systems have somewhat better clocks. The clock in a high-end server typically gains or loses about 2 seconds per day. However, this rate is pretty stable — the stability is much better than the accuracy, so that programs like NTP, which correct for the frequency offset, can keep the system time within a few milliseconds of the correct time.”*
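To make those numbers concrete, a clock that gains about 2 seconds per day is running fast by roughly 23 parts per million, and a disciplining daemon estimates that frequency error and removes it rather than repeatedly stepping the clock. The sketch below is only a back-of-the-envelope illustration of the arithmetic, not NTP’s actual discipline algorithm.

```python
SECONDS_PER_DAY = 86_400

# A clock that gains 2 seconds per day has a fractional frequency error of
# 2 / 86,400, or roughly 23.1 parts per million (ppm).
drift_seconds_per_day = 2.0
freq_error_ppm = drift_seconds_per_day / SECONDS_PER_DAY * 1e6
print(f"frequency offset is about {freq_error_ppm:.1f} ppm")

# Naive compensation: once the frequency error is known, scale raw elapsed
# clock time to approximate true elapsed time. (ntpd's real discipline loop
# continuously re-estimates this value and slews the clock gradually.)
def corrected_elapsed(raw_elapsed_seconds: float, ppm: float) -> float:
    return raw_elapsed_seconds / (1 + ppm * 1e-6)
```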

Keeping Time

NTP provides “nominal accuracies of low tens of milliseconds on WANs, submilliseconds on LANs, and submicroseconds using a precision time source such as a Cesium oscillator or GPS receiver,” according to the Network Time Synchronization Research Project website. “NTP is arguably the longest running, continuously operating, ubiquitously available protocol in the Internet,” the website says.
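Those accuracies come from a simple on-wire exchange: a client records four timestamps around a request/response round trip and uses them to estimate both its clock offset and the network delay. Here is a minimal sketch of that standard four-timestamp calculation, for illustration only rather than the reference implementation:

```python
# t1: client transmit, t2: server receive, t3: server transmit, t4: client receive
def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the local clock is off
    delay = (t4 - t1) - (t3 - t2)          # round-trip network delay
    return offset, delay

# Example: server clock runs ~5 ms ahead, with ~20 ms of symmetric network delay
offset, delay = ntp_offset_and_delay(100.000, 100.015, 100.016, 100.021)
print(f"offset = {offset * 1000:.1f} ms, delay = {delay * 1000:.1f} ms")
```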

David L. Mills of the University of Delaware began designing the NTP software as a network time service around 1980 and, by 1985, had developed and published the first specification of NTP. Harlan Stenn, the current maintainer and release manager of NTP, began working with NTP around 1992 and in 2011 formed the Network Time Foundation (NTF). Stenn’s work is funded in part through a grant from The Linux Foundation’s Core Infrastructure Initiative (CII), which recently renewed the grant to Stenn for a third year.

The NTP Project currently develops the protocol standard used to communicate time between systems along with the software reference implementation of that standard. The resulting software and protocol specifications now keep time for tens of millions of computers around the world.

In a 2015 article for ACM Queue called “Securing the Network Time Protocol,” Stenn wrote: “People just expect accurate time, and they rarely see the consequences of inaccurate time… Last year, NTP and our software had an estimated 1 trillion hours plus of operation. We’ve received some bug reports over this interval, and we have some open bug reports we would love to resolve, but in spite of this, NTP generally runs very, very well.”

According to a recent report to the community, the NTP Project has accomplished much over the past year — publishing four NTP production releases containing many improvements. Additionally, Cisco recently had two of its internal teams audit the NTP source code, providing essential feedback and resulting in the hardening of NTP source.

I spoke with Susan Graves, NTF’s director of client experience, to find out more about the project’s challenges and goals.

What are the immediate challenges for the NTP project?

The main challenge is that (maintainer) Harlan Stenn is one person, and NTP needs more paid developers to help. We also need more volunteers/contractors for things like testing and documentation — Harlan can’t do it all. Cisco and others have been reporting mostly very low severity security issues, and fixing them has taken up 90 percent of Harlan’s time since October 2015.

What are the future plans for the project?

NTS (Network Time Security) for NTP, which is in draft version 12 with the Internet Engineering Task Force (IETF). The other plan is version 5 of NTP, which will include the General Timestamp API and some other protocol enhancements. We are also helping to author an NTP BCP (Best Current Practices) draft with the IETF. We’re also looking to overhaul the NTP documentation and the NTP support website. We want more comprehensive QA tests, and access to a bigger “compile farm.”

We’re (NTF) building a new online home for the research papers by (NTP creator) Dave Mills, and we’re looking to augment that “library” to include or point to other pertinent materials. Once we get these going, we can start in on a number of other projects we want to begin.

What other projects would you like to start on?

We want to build a proper testing laboratory to test all aspects of network time distribution. That includes GPS simulators and highly accurate time sources, so we can measure and improve timekeeping on computers that we also put in environmental chambers, since temperature affects the rate at which a computer’s internal clock counts time. It also includes lots of different network configurations, as well as security vulnerability testing.

According to Graves, the NTF is also looking at a “Certification and Compliance Program that covers traceable timestamps from the National Labs to each device that requires this time, for compliance, audit, and liability protection.”

Other NTF Projects

Along with NTP, the Network Time Foundation includes the following projects:

  • Ntimed Project — Ntimed is a “tightly focused NTP implementation” for high security and high performance. According to the website, this work is largely the result of Poul-Henning Kamp’s decades of experience as an NTP Project Developer.

  • PTPd Project — The PTP daemon (PTPd) Project implements the Precision Time Protocol (PTP) specification as defined by relevant IEEE 1588 standards. The project page states that PTPd can run on most 32- or 64-bit, little- or big-endian processors and is open source. It does not require a Floating Point Unit (FPU), is great for embedded processors, and currently runs on Linux, uClinux, FreeBSD, and NetBSD. According to the project page, PTP itself provides precise time coordination of Ethernet LAN connected computers and was designed mainly for instrumentation and control systems.

  • Linux PTP Project — This is a Linux-focused software implementation of the PTP specification. Its stated goals are to provide the highest possible performance levels and to be a thoroughly robust implementation of the PTP standard.

  • RADclock — The RADclock project (formerly known as TSCclock) aims to provide a new method for network timing. It can be used as an alternative to ntpd under FreeBSD and Linux.

  • General Timestamp API Project — A typical timestamp usually includes a date and time, sometimes with fractional seconds. The General Timestamp project’s goal is to define a new “timestamp structure” that will contain more information and be more useful. It also aims to develop an efficient and portable library API that will operate on these new timestamps.
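The description above does not spell out what the richer timestamp will look like, so the following is purely a hypothetical illustration of the kind of structure being described: a value carrying not just a date and time but also an uncertainty bound and the timescale it was taken against. The field names below are invented for illustration and do not reflect the project’s actual design.

```python
from dataclasses import dataclass

@dataclass
class RichTimestamp:
    """Hypothetical sketch only; not the General Timestamp API's real layout."""
    seconds: int         # whole seconds since an agreed epoch
    fraction: int        # fractional seconds, e.g. in units of 2**-64 seconds
    uncertainty: float   # estimated error bound, in seconds
    timescale: str       # e.g. "UTC", "TAI", or "GPS"
    leap_pending: bool   # whether a leap second is scheduled, if known
```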

If you’re interested in contributing to any of these projects, please check out their project pages for more information.

*Judah Levine’s personal opinions do not necessarily reflect the opinions of his employer.

 

Apache Apex Is Promoted To Top-Level Project

The rise of interest in Apache Spark has demonstrated just how important streaming data has become in the big data ecosystem. Real-time data and the technologies that support it were perhaps the biggest stars of last month’s Strata + Hadoop World conference in San Jose.

So it’s probably no coincidence that Apache Apex has been elevated to a Top-Level Project (TLP) by the Apache Software Foundation this week, too. The streaming and batch-processing engine for Hadoop is used by the GE Predix IoT cloud platform for industrial data and analytics, and by Capital One for real-time decisions and fraud detection.

Read more at InformationWeek

How to Build your own IRC Server with InspIRCd and Anope

In this tutorial, I will guide you through the installation of InspIRCd from source on a CentOS 7 server. Then we will integrate InspIRCd with Anope services and enable GnuTLS encryption on it. InspIRCd is a modern, fast IRC server written from scratch in C++, and one of the few IRC server applications that provide both high performance and stability.

Out of the Box: A Peek at the Future of Containerisation in Enterprise

It may be the new ‘it’ technology, but how will it fit within current enterprise infrastructure and drive business value? 

Since launching in 2013, open source cloud containerisation engine Docker has seen explosive growth. The concept of containerisation is not new – it’s been around for many years as a solution to the problem of how to get software to run reliably when moved from one computing environment to another.

But Docker has breathed new life into the idea by simplifying the process for the average developer and system administrator, giving them a standard interface and easy-to-use tools to quickly assemble composite, enterprise-scale, business-critical applications.

It may seem like just another approach to virtualisation, but unlike virtual machines (VMs), Docker does not require a full OS to be created. It can be thought of as ‘OS virtualisation’ to a VM’s ‘hardware virtualisation’.

Read more at Information Age

CoreOS’s Stackanetes Puts OpenStack in Containers

CoreOS said a few weeks ago it was working on a way to run OpenStack as an application on the Kubernetes container platform. Today the company says it has done just that with its new Stackanetes.

Stackanetes puts OpenStack in containers as a way to make OpenStack easier to use, according to Alex Polvi, CoreOS CEO, who spoke with SDxCentral in early April. He said OpenStack can be “a bit fragile,” and containers can be useful to make an organization’s infrastructure behave like that of a Web-scale cloud provider.

Read more at SDxCentral

Bimodal IT: And Other Snakeoil

“Bimodal IT is the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and non-linear, emphasizing agility and speed.”

So your Mode 1 services have been around for a while, they are mission critical, and they contain the important data that is of central importance to your organisation (your customer data, your trading platforms, your systems of record). Meanwhile, your Mode 2 services are new, they can break (it’s a beta!), but they are redefining how you engage with your customers and you need to get them out there fast, before your competition does. Neat, huh?

Read more at Automation Logic

 

Container Orchestration and Scheduling: Herding Computational Cattle

In the world of cloud-native container platforms, the orchestration and scheduling frameworks play the farmer’s role for our application “cattle.” The best application “farmers” maximize resource utilization while balancing the constantly changing demands on their systems with the need for fault-tolerance.

There is a perfect storm forming within the IT industry that comprises three important trends: the rise of truly programmable infrastructure, which includes cloud, configuration management tooling and containers; the development of adaptable application architectures, including the use of microservices, large-scale distributed message/log processing, and event-driven applications; and the emergence of new processes/methodologies, such as Lean and DevOps.

Read more at The New Stack.

Samsung Unveils New Artik Module Tools for IoT Developers

A new Artik IDE development environment and the Artik Cloud give developers new capabilities with Artik modules. 

Samsung has given Internet of Things developers several new tools to create and grow their ideas for new devices and concepts, including the Samsung Artik IDE (integrated development environment) and an IoT-focused Samsung Artik Cloud where developers can collect, store, and access their data from any device or other cloud. The new tools for developers who use Samsung’s tiny Artik System-on-Module (SOM) platform were unveiled April 27 at the company’s third annual Samsung Developer Conference, …

Read more at eWeek

7 Science Projects Powered by Open Source GIS

Next week, FOSS4G North America is coming to Raleigh, NC. FOSS4G is a conference celebrating all of the ways that free and open source software is changing the world of geographic and geospatial information science (GIS).

These days, with ever-expanding technologies for collecting geographic data, sensor networks and the Internet of Things are driving larger and larger quantities of data that must be stored, processed, visualized, and interpreted. Practically every type of industry imaginable is increasing the types and quantities of geographic data they utilize. And the traditional closed source tools of the olden days can no longer keep up.

Many of the applications of geographic tools are scientific in nature, from biology to oceanography to geology to climatology. Here are seven applications for geographic science that I’m excited about hearing talks on next week.

Read more at OpenSource.com