
How to Install PowerDNS DNS Server on Ubuntu

PowerDNS is a powerful DNS server, comparable to BIND9, but it is much more flexible because it stores its DNS entries in a database such as MySQL, PostgreSQL, or SQLite.

That flexibility means you can easily write custom web apps in PHP, Python, Node.js, Ruby, or other languages to add, remove, and modify DNS entries.

In this article, I am going to show you how to install PowerDNS DNS server on Ubuntu and configure it to use MySQL database.

To follow this tutorial, you need:

    – A computer or virtual machine with Ubuntu Server or Desktop installed.

    – An Internet connection for downloading the MySQL server and PowerDNS server packages.

    – A basic understanding of the Linux command line.

Installing MySQL Server:

Before you install PowerDNS, you must install MySQL server if you want to use it as the database backend; otherwise, the installation won’t go smoothly. Since this tutorial uses MySQL as the backend, I am going to install MySQL server first.

To install MySQL, run the following commands:

$ sudo apt-get update

$ sudo apt-get -y install mysql-server mysql-client

During installation, MySQL server should ask you to set a root password. Remember, this password is not the same as your operating system’s root password, so you can set it to anything. I used 123 as the MySQL root password. It’s not the most secure password, but it works for a demo.
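Once the installation finishes, you can quickly verify that MySQL is running and that your root password works. The 123 here is just the demo password from above; substitute whatever you chose. If your MySQL version uses socket authentication for root, you may need to run the second command with sudo.

$ sudo service mysql status

$ mysql -u root -p -e "SELECT VERSION();"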

Installing PowerDNS:

Now that the MySQL server is installed, we can install PowerDNS.

To install PowerDNS, run the following command.

$ sudo apt-get -y install pdns-server pdns-backend-mysql

PowerDNS should show you a configuration dialog during installation.

Just press “Enter”.

Then you should see another prompt.

It asks whether you want to configure the MySQL database for PowerDNS now. We do, so select “Yes” and press “Enter”.

Now you should be prompted for your MySQL password. It’s 123 if you’re following along. Type in the password, select “Ok” and press “Enter”.

Now confirm your password by re-typing it.

Now PowerDNS is installed successfully. 

You can check if the DNS server is running with the following command.

$ sudo service pdns status
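You can also confirm that PowerDNS is answering DNS queries with the dig utility (from the dnsutils package). The example.com name below is just a placeholder; it will return an empty answer until you add records for it.

$ dig @127.0.0.1 example.com SOA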

That’s it. Now you can add DNS entries to PowerDNS through MySQL. Log in to your MySQL database as the user ‘pdns’ on host ‘localhost’ with the password ‘123’, select the ‘pdns’ database, and insert your records there. For more information, see the official PowerDNS documentation at https://doc.powerdns.com/md/authoritative/.
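As a rough sketch of what that looks like, assuming the installer created the standard PowerDNS gmysql schema with the usual domains and records tables, the following commands add a zone and an A record. The example.com zone and the 192.0.2.10 address are placeholders; adjust them to your own data.

$ mysql -u pdns -p pdns -e "INSERT INTO domains (name, type) VALUES ('example.com', 'NATIVE');"

$ mysql -u pdns -p pdns -e "INSERT INTO records (domain_id, name, type, content, ttl) SELECT id, 'example.com', 'SOA', 'ns1.example.com hostmaster.example.com 1 10800 3600 604800 3600', 3600 FROM domains WHERE name = 'example.com';"

$ mysql -u pdns -p pdns -e "INSERT INTO records (domain_id, name, type, content, ttl) SELECT id, 'www.example.com', 'A', '192.0.2.10', 3600 FROM domains WHERE name = 'example.com';"

A lookup against the local server should then return the new record:

$ dig @127.0.0.1 www.example.com A

If the inserts fail, the column names in your PowerDNS version may differ slightly; you can check the schema from the mysql client with SHOW TABLES and DESCRIBE records.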

Netflix’s Open Source Orchestrator, Conductor, May Prove the Limits of Ordinary Scalability

It’s one thing to imagine your data center running a little bit, or a lot, like Google. It’s another thing entirely to imagine it working like Netflix. Nevertheless, for anyone who is so inclined, Netflix released into open source last month the orchestration engine its engineers developed for running its entire operation in the cloud — specifically, on Amazon’s cloud. It could very well be the engine for the world’s single most sophisticated distributed processing system.

So the question, for our sake, becomes: does it scale down? More to the point, if Netflix’s good ideas remain just as good on a smaller scale, does Conductor hold an advantage over the little ol’ Kubernetes open source container orchestration engine that we need to be paying attention to?

Read more at The New Stack

Linux Security Threats: Attack Sources and Types of Attacks

Start exploring Linux Security Fundamentals by downloading the free sample chapter today.

In part 1 of this series, we discussed the seven different types of hackers who may compromise your Linux system. White hat and black hat hackers, script kiddies, hacktivists, nation states, organized crime, and bots are all angling for a piece of your system for their own nefarious/various reasons.

It’s important to also realize that these hackers can perpetrate an attack from inside or outside your organization. And their attacks can be either active or passive:

An active attack attempts to alter system resources or affect their operation, so it compromises Integrity or Availability.

A passive attack attempts to learn or make use of information from the system, but does not affect system resources, so it compromises Confidentiality.

Active Attacks

Let’s look at different types of active attacks.

Denial of service attacks

Generally done by flooding the service or network with more requests than can be serviced, which results in the service becoming unreachable. This sometimes happens unintentionally due to a client misconfiguration.

Spoofing attacks

Take place when a valid or authorized system is impersonated via IP address manipulation. The service thinks it is communicating with an authorized system when it is really talking to an impostor. ARP (Address Resolution Protocol), DNS (Domain Name System), IP addresses, and MAC (Media Access Control) addresses are all susceptible to spoofing.

Port scanning

Can be done with the nmap utility and involves sending SYN packets to a range of ports on the target system. The replies, or lack of replies, from the target provide a significant amount of information about the possible services running on the target.
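For example, a basic TCP SYN scan of the first 1024 ports with nmap looks like the following. The 192.0.2.10 address is a placeholder; only scan systems you own or have explicit permission to test.

$ sudo nmap -sS -p 1-1024 192.0.2.10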

Idle scans

Variations on port scans that use a third system, referred to as a zombie, to gain information about a target system. To learn more about idle scans, see http://en.wikipedia.org/wiki/Idle_scan.
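nmap can also perform idle scans through its -sI option. Both hosts below are placeholders: zombie.example.com is the idle third-party system and 192.0.2.10 is the target. The -Pn flag skips the ping probe so the scanner’s own packets don’t reach the target directly.

$ sudo nmap -Pn -sI zombie.example.com 192.0.2.10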

A wide variety of network attacks remain in common use that take advantage of the network protocols required in most infrastructures. ARP storms, session hijacking, and packet injection are all active network attack techniques.

Passive Attacks

Now, let’s take a look at a passive wiretapping attack.

Wiretapping is generally done with tcpdump or Wireshark to listen to traffic on the network. This works by placing a network interface into promiscuous mode, in which all packets the switch sends to the port are passed up to the capture application.
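As a simple illustration, capturing traffic on an interface with tcpdump looks like this. The eth0 interface name is a placeholder, and -n keeps tcpdump from resolving addresses to names while it captures.

$ sudo tcpdump -i eth0 -n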

During normal operations, network interfaces throw away packets sent to them by network devices when the destination addresses do not match those configured on the host. Pretty much all communications protocols and mechanisms are susceptible to wiretapping, including:

• Ethernet

• Wi-Fi

• USB

• Cellular networks

In part 3 of this series, we’ll discuss the trade-offs you’ll face when making security decisions including the likelihood of an attack, the value of the assets you’re protecting, and the impact to business operations.

Stay one step ahead of malicious hackers with The Linux Foundation’s Linux Security Fundamentals course. Download a sample chapter today!

Read the other articles in this series:

Linux Security Threats: The 7 Classes of Attackers

Linux Security Fundamentals Part 3: Risk Assessment / Trade-offs and Business Considerations

Linux Security Fundamentals: Estimating the Cost of a Cyber Attack

Linux Security Fundamentals Part 5: Introduction to tcpdump and wireshark

Linux Security Fundamentals Part 6: Introduction to nmap

IoTivity-Constrained: A Flexible Framework for Tiny Devices

The future of IoT will be connected by tiny, resource-constrained edge devices, says Kishen Maloor, Senior Software Engineer at the Intel Open Source Technology Center. And, the IoTivity-Constrained project is a small-footprint implementation of the Open Connectivity Foundation’s (OCF) standards that’s designed to run on just such devices.

In his upcoming talk at Embedded Linux Conference + OpenIoT Summit, Kishen, who is lead developer and maintainer of the IoTivity-Constrained project, will present the project’s architecture, features, and uses. We spoke with Kishen to get a preview of his talk and more information about this lightweight, customizable framework for IoT.

Kishen Maloor, Senior Software Engineer at the Intel Open Source Technology Center
Linux.com: Please fill us in on the IoTivity-Constrained project. What is it? What are its goals?

Kishen Maloor: The future of IoT will be driven in part by vast numbers of tiny, connected sensors or embedded computing devices situated on the edge of the network, enabling novel experiences. The Open Connectivity Foundation (OCF) standards for connectivity and interoperability aim to contribute to this vision.

The IoTivity-Constrained project is a small-footprint implementation of the OCF standards and is tailored to run on such tiny, resource-constrained hardware and software environments. Its goal is to deliver a flexible and royalty-free software framework for developing IoT applications across a wide range of devices, operating systems, and network stacks.

Linux.com: What problems does the project seek to address?

Kishen: IoTivity-Constrained’s design and implementation address the following concerns.

  • Flash/RAM constraints: These restrict the workload of an application. There is usually no support for dynamic memory allocation. Software must be modular with the ability to include only the desired features for an application. It must further be able to identify and pre-allocate a set of resources.

  • Low frequency/Low power CPUs: These require that hot spots in code be lightweight and efficient, with minimal copy operations.

  • Battery-powered devices: Software must take advantage of known idle periods to put the CPU to sleep.

  • Single stack, diverse deployments: There are a multitude of software infrastructures (OS, libraries) to choose from in the embedded space, with variations in design and APIs, so software built against one does not easily port over to another. Greater reuse and developer participation make for more robust implementations. It is therefore desirable to have a single framework as a base that can be easily customized to any chosen platform.

  • OCF spec compliance: OCF has laid out a certification process to verify compliance with its specification and guarantee interoperability between OCF devices in the marketplace. Participation in this program is mandatory for vendors seeking to ship OCF-enabled products. It is hence useful for developers to rely on a framework that accurately represents all of the constructs of the OCF spec, and that is maintained for spec compliance. This would simplify their process of getting their products certified by OCF.

In addressing the above requirements, IoTivity-Constrained contains a common core that encompasses most of its features in a way that is OS-agnostic, lightweight, modular, and efficient, and with a set of simple interfaces to platform specific functionality.

Linux.com: Who is the IoTivity-Constrained project aimed at? What are some example use cases?

Kishen: IoTivity-Constrained is aimed at IoT product companies who want to develop IoT applications on their platforms and deploy their products into the OCF ecosystem. As an industry-led consortium with a number of stakeholders from various vertical markets, OCF aims to target a variety of IoT applications across the consumer, industrial, and healthcare domains.

IoTivity-Constrained can be employed in all those applications but is particularly helpful in products where resource utilization, energy efficiency, and customization are essential factors. Direct use cases are in sensing and actuation applications in wearables, homes and buildings for climate control, energy monitoring, appliance control, etc.

As a simple use case, in the morning, an alarm goes off on your smart watch (and you don’t hit the snooze button), your smart lights turn on, and your smart coffee machine is signaled to start brewing coffee. All three devices in this example may have come from different manufacturers, and might employ different platforms. However, they may all contain long-running IoTivity-Constrained applications that issue requests to achieve the desired functionality.

The possibilities are endless as any other enablers (IFTTT recipes, machine learning, etc.) are effectively application code that an IoT product might use to its advantage for differentiation.

Linux.com: Can you explain how IoTivity-Constrained works in conjunction with Zephyr RTOS and other real-time operating systems?

Kishen: IoTivity-Constrained’s core framework interacts with platform-specific functionality via a set of abstract interfaces. A set of implementations of these interfaces for a platform constitutes a “port.” IoTivity-Constrained contains a few sample ports for Zephyr, Apache Mynewt, RIOT OS, Contiki and Linux. Each port interfaces with the native network stack and subsystems of its respective platform. Apart from giving developers the opportunity to get started with building and testing the included samples, they also serve to demonstrate the general ease of porting and customizing the framework.

Linux.com: What are some future development plans for the project?

Kishen: One of our plans is to more closely examine any special requirements for applications in the industrial and healthcare domains and appropriately extend IoTivity-Constrained to meet those. Other possibilities include addition of higher-level components such as a rule-engine that may be harnessed by applications to assist in the actuation process.

Embedded Linux Conference + OpenIoT Summit North America will be held on February 21-23, 2017 in Portland, Oregon. Check out over 130 sessions on the Linux kernel, embedded development & systems, and the latest on the open Internet of Things.

Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now>>

Lessons Learned Running IBM Watson on Mesos

https://www.youtube.com/watch?v=F5v5B2Fncvg?list=PLbzoR-pLrL6pLSHrXSg7IYgzSlkOh132K

This talk will share the key lessons learned through the deployment of cognitive workloads on Mesos and related technologies, including challenges scaling, monitoring, security, cognitive/batch computing support, GPUs, and more. 

AmEx Joins JPMorgan, IBM in Hyperledger Blockchain Effort

American Express Co. is elbowing its way into the crowded blockchain party.

The biggest credit-card issuer by purchases has signed on to the Hyperledger Project, an industry group of more than 100 members developing blockchain technology for corporate use. The digital ledger technology known for underpinning bitcoin has the potential to reshape the global financial system and other industries.

American Express will contribute code and engineers to Hyperledger, which was started by the Linux Foundation in 2015 and now counts companies like International Business Machines Corp., Airbus Group SE and JPMorgan Chase & Co. as members.

Read more at Bloomberg

Lessons Learned Running IBM Watson on Mesos

All these newfangled container and microservices technologies inspire all manner of ingenious experiments, and running IBM’s Watson on Apache Mesos has to be one of the most — maybe it’s not fair to say crazy — but certainly ambitious. Jason Adelman of IBM tells us the story of this novel endeavor at MesosCon Asia 2016.

If you’re not familiar with Watson, it is IBM’s cognitive computing platform, which became a Jeopardy champion in 2011 by beating human contestants. Watson is a mighty beast, so how do you make it run on Mesos? And why? Adelman answers the why: “IBM looked at how they could commercialize this. Turn this into something that customers could use, first in healthcare, then financial services and then broader industries.”

Now we’ll look at the how. Want to play with Watson on Mesos? That’s what Bluemix is for. Adelman says, “This is IBM’s developer portal, our developer cloud. There’s a lot of services in Bluemix that developers can use to run to get applications up and running quickly on the web. This is where you’ll find Watson as well. Under Watson you’ll see the services…There are 16 services there for now but there’s a lot of things coming all the time. It’s been developing very rapidly. A lot of these services are currently running on Mesos, and we are working on trying to get everything running on one platform there…It’s running on a mixture of containers managed by Mesos, Marathon, and Netflix OSS.”

The Watson Developer Cloud also uses Eureka, Zuul, Ansible, ZooKeeper, and Solr. Solr presented some special challenges. Adelman’s team concluded that they needed local storage for Solr to work effectively. But, as happens in so many similar projects, when you need stateful services (Solr) in a stateless environment, you have an interesting conundrum. Adelman’s team elected to use SolrCloud, which provides a highly available cluster of Solr servers.

There were growing pains from network problems and Marathon limitations that caused lapses in communication between the various elements. Adelman says, “We had some outages where Marathon and Mesos were not talking to each other and connection was lost for a significant amount of time. After that…the connection was re-established, but when Marathon reconnected with Mesos, Mesos thought it was a new Marathon, gave it a new framework IP.”

“So now we have a Marathon running with a new framework IP, we have all these original containers still running with the old IP, so they can no longer communicate with Marathon. This is the problem with stateful services…To get out of this we had to do a bunch of manual work.” This included developing pinning functionality, and building additional infrastructure on top of Mesos and Marathon.

Adelman discusses not only the difficulties but also valuable lessons about how to make everything work reliably. Watch the full presentation (below) to learn how they set up networking, scheduling, auto-scaling, and use chaos testing to keep everything operating smoothly.

https://www.youtube.com/watch?v=F5v5B2Fncvg?list=PLbzoR-pLrL6pLSHrXSg7IYgzSlkOh132K

Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>

OpenSSL Issues New Patches as Heartbleed Still Lurks

The OpenSSL Project has addressed some moderate-severity security flaws, and administrators should be particularly diligent about applying the patches since there are still 200,000 systems vulnerable to the Heartbleed flaw.

OpenSSL updated the 1.0.2 and 1.1.0 branches and released versions 1.1.0d and 1.0.2k. The 1.0.1 branch stopped receiving security updates Dec. 31, while support for OpenSSL 0.9.8 and 1.0.0 ended a year ago, on Dec. 31, 2015.
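If you are not sure which OpenSSL release a system is running before deciding whether it needs the update, the version subcommand will tell you:

$ openssl version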

Read more at InfoWorld

Improve Your Node.js App Throughput One Micro-optimization at a Time

In order to improve the performance of an application that involves IO, you should understand how your CPU cycles are spent and, more importantly, what is preventing higher degrees of parallelism in your application.

While focusing on improving the overall performance of the DataStax Node.js driver for Apache Cassandra, I’ve gained some insights that I share in this article, trying to summarize the most significant areas that could cause throughput degradation in your application.

Background

The JavaScript engine used by Node.js, V8, compiles JavaScript into machine code and runs it as native code. The engine uses three components to try to achieve both low start-up time and peak performance:

Read more at InfoQ

Linus Torvalds Outs Linux Kernel 4.10 Release Candidate 6, the Biggest So Far

While last week’s fifth RC was relatively normal and kept small, the Linux kernel 4.10 Release Candidate 6 snapshot is much bigger because of a flood of patches that landed on Friday and over the weekend. This makes today’s RC release the biggest so far in the Linux 4.10 series.

As for the changes, the sixth RC of Linux kernel 4.10 adds numerous updated drivers, this time for GPU, MD, media, networking, and RDMA, various improvements to the XFS file system, an updated networking stack, and a bunch of other bug fixes that you can see in the appended shortlog.

Read more at Softpedia