
Davos 2017: China Unites 25 Countries to Establish Global Blockchain Business Council

…At the first GBBC conference, the national representatives presented their progress in blockchain research in detail and reached an initial consensus in the areas of global blockchain education, business cooperation, standards coordination, and governmental communication.

On the industry side, China's blockchain ecosystem is comprehensive. From infrastructure to enterprise applications, many strong blockchain and fintech companies, such as Tai Cloud Corporation, Dian Rong, and Oklink, are able to compete with the best blockchain companies in the world. Many regions, including Beijing, Shanghai, Shenzhen, Hangzhou, Ningbo, and Ganzhou (Jiangxi), are actively building out their blockchain industries. This is the first time a Chinese blockchain delegation has appeared at the World Economic Forum, marking the Chinese blockchain industry's commitment to joining the global effort in blockchain technology research with a thoroughly positive attitude and concrete action.

Read more at EconoTimes

The Linux Foundation Brings 3 New Open Source Events to China

LinuxCon, ContainerCon, and CloudOpen will be held in China this year for the first time, The Linux Foundation announced this week.

After the success of other Linux Foundation events in the country, including MesosCon Asia and Cloud Foundry Summit Asia, The Linux Foundation decided to offer its flagship LinuxCon, ContainerCon and CloudOpen events in China as well, said Linux Foundation Executive Director Jim Zemlin.

“Chinese developers and businesses have strongly embraced open source and are contributing significant amounts of code to a wide variety of projects,” Zemlin said. “We have heard the call to bring more open source events to China.”

The flagship event, also known as LC3, will be held June 19-20, 2017 at the China National Convention Center in Beijing. As it was in previous years, the event will also be held in North America and Europe this year under a new name, Open Source Summit.

LC3 will cover many of the hottest topics in open source, including open networking, blockchain, compliance issues, and the business and professionalization of open source.

Attendees will have access to the content of all three events with one registration. Activities will include 70+ educational sessions, keynotes from industry leaders, an exhibit hall for demonstrations and networking, hackathons, social events, and more.

  • LinuxCon is where the leading maintainers, developers and project leads in the Linux community and from around the world gather together for updates, education, collaboration and problem-solving to further the Linux ecosystem.

  • ContainerCon is the place to learn how to best take advantage of container technology, which is revolutionizing the way we automate, deploy and scale workloads; from hardware virtualization to storage to software defined networking, containers are helping to drive a cloud native approach.

  • CloudOpen gathers top professionals to discuss cloud platforms, automation and management tools, DevOps, virtualization, software-defined networking, storage and filesystems, Big Data tools and platforms, open source best practices, and much more.

The conference is designed to enable attendees to collaborate, share information and learn about the newest and most interesting open source technologies, including Linux, containers, cloud technologies, networking, microservices and more. It also provides insight into how to navigate and lead in the open source community.

Speaking proposals are being accepted through March 18. Submit your proposal now!

Registration for the event will be open in the coming weeks.

How Disney Is Realizing the Multi-Cloud Promise of Kubernetes

The Walt Disney Company is famous for “making magic happen,” and their cross-cloud, enterprise level Kubernetes implementation is no different. In a brief but information-packed lightning talk at CloudNativeCon in Seattle in November, Disney senior cloud engineer Blake White laid out a few of the struggles and solutions in making Kubernetes work across clouds.

“Kubernetes does a lot of the heavy lifting for you but when you need to think about an enterprise and all of its needs, maybe you need to think a little bit outside of it,” White said. “Get your hands dirty, don’t rely on all the magic, make some of the magic happen for yourself.”

With an enterprise the size of Disney, there are many development, QA, and engineering teams working on different projects, each with its own cloud environment. White said Kubernetes can handle that just fine, but it takes some adjustment to make everything work.

The first considerations are connectivity and data:

  • Does the project need to reach code repos, artifacts, or other services in your corporate network?

  • Is there data that needs to follow certain privacy standards?

  • How much latency is tolerable?

  • Do you need to interconnect between cloud accounts?

Both Amazon and Google have services that allow for connections across cloud accounts, and both can get a cluster up and running quickly, but neither system works flawlessly with Kubernetes, so White suggests not relying on an out-of-the-box solution if your project is complicated.

There are automated ways to bring up Kubernetes clusters; White mentioned both kube-up.sh and kops as excellent options, but neither was as configurable as Disney needed, so they built their own bespoke system.
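
For comparison, here is roughly what the out-of-the-box route looks like with kops; the cluster name, S3 state store, and availability zone below are invented for illustration, and connecting the resulting VPC back to a corporate network (Disney's sticking point) would need further work beyond this sketch.

# Illustrative only -- cluster name, state store, and zone are placeholders
export KOPS_STATE_STORE=s3://example-kops-state

kops create cluster \
  --name=k8s.example.com \
  --zones=us-east-1a \
  --node-count=3 \
  --yes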

“We ended up building things our own and the main reason for that was because we needed our [virtual public cloud] to be connected back to our corporate network,” White said. The trickiest part with their build was setting up the DNS, he continued.

“We moved from SkyDNS to kube-dns; that helped the cluster a lot, but in AWS things just weren’t working,” White explained. “Basically, our DHCP option set for the VPC was skipping the Amazon internal and pointing just back to our corporate network, which was what we needed, but Kubernetes was unhappy because it couldn’t find all of the nodes. We set up a bind server, pointed that at the AWS internal for internal stuff, and back to our corporate network for everything else… Everything started working again.”
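
White doesn't share configuration, but the split-forwarding approach he describes can be sketched roughly as follows in BIND; the zone name (region-dependent in AWS), the VPC resolver address (conventionally the VPC base address plus two), and the corporate resolver address are all placeholders.

// Sketch of split DNS forwarding -- zone and addresses are illustrative
options {
    // anything not matched below goes back to the corporate resolvers
    forwarders { 10.10.10.53; };
    forward only;
};

// AWS-internal names are handed to the VPC-provided resolver
zone "ec2.internal" {
    type forward;
    forward only;
    forwarders { 10.0.0.2; };
};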

White touched on logging at the tail end of his talk, giving tips on how to avoid the unwanted expense of shipping everything to a central repository and paying for egress. His solution was to keep everything next to the cluster and query only what you need, he said.

“We set up an ELK stack (Elasticsearch, Logstash and Kibana),” White said. “Be careful where you put your dashboards or else you’ll be shipping much more than you thought, and that works really well. Set up tribe nodes above it, and you can query across multiple clouds. It’s not the only solution but it’s a good solution.”
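
For context, the tribe nodes White mentions were the Elasticsearch mechanism of that era for federated queries across clusters; a minimal elasticsearch.yml sketch on the tribe node might look like the following, with cluster names and hostnames invented for illustration.

# elasticsearch.yml on the tribe node -- names and hosts are illustrative
tribe:
  aws:
    cluster.name: logs-aws
    discovery.zen.ping.unicast.hosts: ["es-aws-1.example.com"]
  gcp:
    cluster.name: logs-gcp
    discovery.zen.ping.unicast.hosts: ["es-gcp-1.example.com"]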

Watch the complete presentation below.

Do you need training to prepare for the upcoming Kubernetes certification? Pre-enroll today to save 50% on Kubernetes Fundamentals (LFS258), a self-paced, online training course from The Linux Foundation. Learn More >>

Arrive On Time With NTP — Part 1: Usage Overview

Few services on the Internet can claim to be so critical in nature as time. Subtle issues which affect the timekeeping of your systems can sometimes take a day or two to be realized, and they are almost always unwelcome because of the knock-on effects they cause.

Consider as an example that your backup server loses connectivity to your Network Time Protocol (NTP) server and, over a period of a few days, introduces several hours of clock skew. Your colleagues arrive at work at 9am as usual only to find the bandwidth-intensive backups consuming all the network resources meaning that they can barely even log into their workstations to start their day’s work until the backup has finished.

In this first of a three-part series, I’ll provide a brief overview of NTP to help prevent such disasters. From the timestamps on your emails to remembering when you started your shift at work, NTP services are essential to a happy infrastructure.

You might consider that the really important NTP servers (from which other servers pick up their clock data) are at the bottom of an inverted pyramid and referred to as Stratum 1 servers (also known as “primary” servers). These servers speak directly to national time services (known as Stratum 0, which might be devices such as atomic clocks or GPS clocks, for example). There are a number of ways of communicating with them securely, via satellite or radio, for example.

Somewhat surprisingly, it’s reasonably common for even large enterprises to connect to Stratum 2 servers (or “secondary” servers) as opposed to primary servers. Stratum 2 servers, as you’d expect, synchronize directly with Stratum 1 servers. If you consider that a corporation might have their own onsite NTP servers (at least two, usually three, for resilience) then these would be Stratum 3 servers. As a result, such a corporation’s Stratum 3 servers would then connect upstream to predefined secondary servers and dutifully pass the time onto its many client and server machines as an accurate reflection of the current time.

A simple design component of NTP is that it works on the premise — thanks to the large geographical distances travelled by Internet traffic — that round-trip times (of when a packet was sent and how many seconds later it was received) are sensibly taken into account before a reported time is trusted as being entirely accurate. There’s a lot more to setting a computer’s clock than you might at first think; if you don’t believe me, this fascinating web page is well worth a look.
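
For a sense of how that premise works, the standard NTP calculation (simplified here) compares four timestamps — client send (t0), server receive (t1), server send (t2), and client receive (t3) — to estimate both the network delay and the clock offset:

round-trip delay = (t3 - t0) - (t2 - t1)
clock offset     = ((t1 - t0) + (t2 - t3)) / 2

The offset is only trusted once the measured delay looks sane, which is why asymmetric or highly variable network paths make accurate timekeeping harder.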

At the risk of revisiting the point, NTP is so key to making sure your infrastructure functions as expected that the Stratum servers to which your NTP servers connect to fuel your internal timekeeping must be absolutely trusted and additionally offer redundancy. There’s an informative list of the Stratum 1 servers available at the main NTP site.

As you can see from that list, some NTP Stratum 1 servers run in a “ClosedAccount” state; these servers can’t be used without prior consent. However, as long as you adhere to their usage guidelines, “OpenAccess” servers are indeed open for polling. Any “RestrictedAccess” servers can sometimes be limited due to a maximum number of clients or a minimum poll interval. Additionally, these are sometimes only available to certain types of organizations, such as academia.

Respect My Authority

On a public NTP server, you are likely to find that the usage guidelines follow several rules. Let’s have a look at some of them now.

The “iburst” option involves a client sending a number of packets (eight packets rather than the usual single packet) to an NTP server should it not respond within a standard polling interval. If, after shouting loudly at the NTP server a few times within a short period, a recognized response isn’t forthcoming, then the local time is not changed.

Unlike “iburst”, the “burst” option is not commonly allowed (so don’t use it!) as per the general rules for NTP servers. That option sends numerous packets (eight again, apparently) at each polling interval, even when the server is responding normally. If you are continually throwing packets at higher-up Stratum servers when they are answering as expected, you may get blacklisted for using the “burst” option.

Clearly, how often you connect to a server makes a difference to its load (and to the negligible amount of bandwidth used). These settings can be configured locally using the “minpoll” and “maxpoll” options. However, to abide by an NTP server’s connection rules, you generally shouldn’t alter the defaults of 64 seconds and 1,024 seconds, respectively.
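
Putting those options together, a typical client-side entry in /etc/ntp.conf looks like the sketch below; note that minpoll and maxpoll are expressed as powers of two, so the defaults of 64 and 1,024 seconds are written as 6 and 10 (shown here only for illustration — in practice you would simply leave them out).

# Sketch of a client-side server line -- the hostname is the public pool
# iburst speeds up the initial synchronization; burst is deliberately NOT used
# minpoll/maxpoll take powers of two: 6 = 64 seconds, 10 = 1024 seconds (the defaults)
server 0.pool.ntp.org iburst minpoll 6 maxpoll 10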

Another, far from tacit, rule is that clients should always respect Kiss-Of-Death (KOD) messages generated by those servers from which they request time. If an NTP server doesn’t want to respond to a particular request, similar to certain routing and firewalling techniques, then it’s perfectly possible for it to simply discard or blackhole any associated packets.

In other words, the recipient server of these unwanted packets takes on no extra load to speak of and simply drops the traffic that it doesn’t think it should serve a response to. As you can imagine, however, this isn’t always entirely helpful, and sometimes it’s better to politely ask the client to cease and desist rather than ignoring the requests. For this reason, there’s a specific packet type called the KOD packet. Should a client be sent an unwelcome KOD packet, it should remember that particular server as having responded with an access-denied style marker.

If it’s not the first KOD packet received back from the server, then the client assumes that there is a rate-limiting condition (or something similar) present on the server. It’s common at this stage for the client to write to its local logs, noting the less-than-satisfactory outcome of the transaction with that particular server, which is useful if you ever need to troubleshoot such a scenario.

Bear in mind that, for obvious reasons, it’s key that your NTP infrastructure be dynamic. Thus, it’s important not to hard-code IP addresses into your NTP config. By using DNS names, individual servers can fall off the network and the service can still be maintained, IP address space can be reallocated, and simple load balancing (with a degree of resilience) can be introduced.
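
In configuration terms, that simply means listing several upstream servers by DNS name, for example:

# Multiple upstream sources, referenced by DNS name rather than IP address
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
# Recent ntpd releases can achieve the same with a single pool directive:
# pool pool.ntp.org iburst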

Let’s not forget that we also need to consider that the exponential growth of the Internet of Things (IoT), eventually involving billions of new devices, will mean a whole host of equipment will need to keep its wristwatches set to the correct time. Should a hardware vendor inadvertently (or purposely) configure their devices to only communicate with one provider’s NTP servers (or even a single server) then there can be — and have been in the past — very unwelcome issues.

As you might imagine, as more units of hardware are purchased and brought online, the owner of the NTP infrastructure is likely to be less than grateful for the associated fees that they are incurring without any clear gain. This scenario is far from being unique to the realms of fantasy. Ongoing headaches — thanks to NTP traffic forcing a provider’s infrastructure to creak — have been seen several times over the last few years.

In part two, I’ll look at some important security configuration options and then describe server setup in part three.

Chris Binnie is a Technical Consultant with 20 years of Linux experience and a writer for Linux Magazine and Admin Magazine. His new book Linux Server Security: Hack and Defend teaches you how to launch sophisticated attacks, make your servers invisible and crack complex passwords.

Advance your career in system administration! Check out the Essentials of System Administration course from The Linux Foundation.

Lightning Talk – Realizing the Multi-Cloud Promise of Kubernetes by Blake White, The Walt Disney Co.

Disney’s diverse business units and applications require running in multiple cloud environments. This talk will touch on some of the tools and techniques used to realize the cross cloud promise, as well as some of the challenges and their solutions.

 

Who Contributes to the Linux Kernel?

While many people tend to think of open source projects as being developed by passionate volunteers, the Linux kernel is mostly developed by people who are paid by their employers to contribute. According to The Linux Foundation, since 2005, “some 14,000 individual developers from over 1,300 different companies have contributed to the kernel.”

About once a year, The Linux Foundation releases the Linux Kernel Development report with data about release frequency, rate of change, who contributes, and which companies sponsor this work, among other things. The 25th Anniversary Edition, released in August 2016, covers development through the 4.7 release (July 24, 2016), with an emphasis on 3.19 to 4.7, the releases since the previous report in February 2015.

One of the most interesting data points is the decline of contributions from unpaid developers, which decreased to just 7.7 percent in the period covered in the 2016 report, compared to 14.6 percent in the 2012 version.

Read more at The New Stack

Moving Persistent Data Out of Redis

Historically, we have used Redis in two ways at GitHub:

We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.

We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.

Read more at GitHub Engineering

Public Cloud on the Rise While Private Cloud Usage Declines

The use of public clouds has nearly doubled to 57 percent from 30 percent since 2012, while private cloud use has dropped to 40 percent from 52 percent, according to an Interop ITX and InformationWeek survey.

Even further, the respondents who use a private cloud estimate a 12 percent drop between current and future usage, with 28 percent expecting to use a private cloud for new projects.

Read more at SDx Central

17 Essential Skills for Growing Performance Engineers

Performance engineering as a discipline goes back several decades. I’ve heard firsthand accounts of testing and optimization of software from the 1960s. Still, much of what we practice today has built up in the last twenty years or so, since the first generation of commercial performance testing tools started appearing.

It can be hard to describe all the different skills that go into performance engineering. Most people in the field agree that it is an intersection of disciplines that includes testing, optimization, and systems engineering. There are great depths to be explored even within these subjects, and they need to be considered when thinking about how to develop more performance engineers.

Read more at Soasta

 

Quick jump to start NodeJS development on LTPS boards

Unlike Node.js, Git and Python 2.7, NPM is not installed on each LTPS by default.

To install it, connect to the LTPP board from an SSH client and perform the following steps:

# Setup package repositories (if it's not done already)
smart channel --add a0 type=rpm-md name="LTPS all" baseurl=${RPMSBASE}/all/ -y
smart channel --add a1 type=rpm-md name="Tibbo LTPS general" baseurl=${RPMSBASE}/cortexa8hf_neon/ -y
smart channel --add a2 type=rpm-md name="Tibbo LTPS tpp" baseurl=${RPMSBASE}/tpp/ -y
smart update

# Install NPM
smart install nodejs-npm -y

# Enhance Git functionality. Required if you want to install NPM modules directly from Git.
smart install git-perltools -y

# Install build-essential (GCC, Make, libraries etc). Required for on-board compilation of native C addons.
smart install packagegroup-core-buildessential -y
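
Once everything is installed, a quick check over the same SSH session confirms the toolchain works end to end; the mqtt module below is just an arbitrary example of pulling a package from the registry.

# Verify the installed versions
node --version
npm --version

# Create a throwaway project and install a module from the registry
mkdir -p ~/npm-test && cd ~/npm-test
npm init -y
npm install mqtt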

 

Define your Tibbits layout

Various Tibbits require various resources. For instance, a relay Tibbit set into the socket S1 of the LTPP3 board will require the S1 interface lines to operate as general-purpose I/O (GPIO) lines. The RS232 Tibbit set into the same socket will need a UART to be enabled and mapped to this Tibbit’s interface lines.

The LTPP3 gives you an interactive graphical tool to define your Tibbit configuration (to specify which Tibbit goes into which socket). The tool also checks your configuration for problems and mistakes. As soon as the desired layout is created and saved, you should reboot the board to let the new configuration take effect. See the video below to get an idea of the configuration process.

https://vimeo.com/185297307

When giving our code examples a try or testing your own projects, don’t forget to define the proper Tibbit configuration before running the corresponding code.