
12 Signs You’re Working in a Feature Factory

I’ve used the term Feature Factory at a couple conference talks over the past two years. I started using the term when a software developer friend complained that he was “just sitting in the factory, cranking out features, and sending them down the line.”

How do you know if you’re working in a feature factory?

  1. No measurement. Teams do not measure the impact of their work. Or, if measurement happens, it is done in isolation by the product management team and selectively shared. You have no idea if your work worked…

Read more at HackerNoon

Containers Moving Beyond Servers

It’s obvious enough why containers are valuable within the data center. They provide a portable, lightweight mode of deploying applications to servers.

However, servers account for only one part of the software market. There is a good chance that, sooner or later, containers will expand to other types of devices and deployment scenarios.

Docker Beyond the Data Center

Read more at Container Journal

How to Choose the Ultimate DevOps Tools

It’s not enough to decide to practice DevOps, Agile, and Lean; it’s also important to know the proper tools that make DevOps a success.

To be frank, there are no magical tools that make you agile or lean. DevOps is more of a cultural shift than a toolset, and that shift should flow from top-level management down to the developers in the organization.

Top management should enable the use of an identified set of tools for practicing DevOps.

Proper training and a good understanding of these tools are required to use them well.

Although there are tools that help at every touch point in the development life cycle, not all of them can be considered DevOps tools.


  1. Plan : GitHub

  2. Build : Docker

  3. Continuous integration : Shippable

  4. Deploy : Heroku & Amazon Web Services

  5. Operate : Botmetric

  6. Continuous Feedback : GitHub

Let’s discuss each stage and the tools that fit it well.

Plan:

Planning is the first and most important part of any software development life cycle. Choosing a tool that helps you easily plan and assign work to different teams or developers is very important.

Transparency, collaboration, and clear assignment of issues (who should do what) are key factors to look for when choosing a planning tool. GitHub can help ease this process.

Build:

With the planning done, it’s time to get the work going. Building requires a tool or platform that supports various environments and languages, and in the age of containers it’s no surprise that developers use Docker to provision individual development environments. Containerization makes those environments reproducible and portable, which gets more work done with less friction.
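As an illustration, a minimal Dockerfile for a shared development environment might look like the following; the base image and the packages are placeholders for your own stack, not a prescription:

```dockerfile
# Illustrative development image; base image and tools are placeholders
FROM ubuntu:16.04

# Install a basic C/C++ toolchain plus git and curl, then trim apt caches
RUN apt-get update && apt-get install -y \
        build-essential git curl \
    && rm -rf /var/lib/apt/lists/*

# Project sources get mounted or copied here
WORKDIR /src

CMD ["/bin/bash"]
```

Every developer who builds this image gets the same toolchain, which is exactly the consistency the build stage needs.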

Continuous integration:

The objective of continuous integration is to integrate every change developers commit as soon as it lands. Here you need a tool or platform that runs tests quickly and gives you instant feedback. Many tools are available, but Shippable is one of the best for taking you all the way from continuous integration to continuous deployment.

Shippable enables teams to build and test their repositories on every code commit or pull request and get instant feedback, and it positions itself as one of the fastest CI/CD platforms.
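As a sketch, a minimal shippable.yml in the repository root might look like this; the keys and the Node.js version are indicative only, since the exact schema depends on the Shippable version you are running:

```yaml
# Illustrative shippable.yml; schema details vary by Shippable version
language: node_js

node_js:
  - "6"

build:
  ci:
    - npm install
    - npm test
```

With this in place, every commit or pull request triggers an install-and-test run and reports the result back to the team.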

Deploy:

Deployment is a critical stage, and here you need a tool that gives you full visibility into branches, builds, pull requests, and deployment warnings in one place. Enter Heroku, a cloud application platform that lets developers spend their time on application code instead of managing servers, deployment, ongoing operations, or scaling. This boosts developer productivity by making deployment easier, with less effort and full visibility.

Another strong platform here is AWS, with its broad set of cloud computing services. Running your applications in the AWS Cloud can help you move faster, operate more securely, and save substantial costs.
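With Heroku, for example, deployment is driven by a one-line Procfile in the repository root that declares how to start the app; the command shown is a placeholder for your own entry point:

```
web: node server.js
```

After that, deploying reduces to a git push to the Heroku remote, and the platform takes care of provisioning and running the process.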

Operate:

This is the stage where you critically need to monitor application performance, server performance, and budget reports. You need a tool that tells you how the application is performing and whether development is on the right path. Botmetric can run a comprehensive audit of your AWS Cloud to detect security issues, performance problems, disaster-recovery bottlenecks, deviations from cloud best practices, and cost-optimization opportunities. Features such as budget alerts and daily reports help you analyze critical business data and make sound high-level IT decisions.

Continuous Feedback:

Continuous feedback is something you can never neglect; ignore it and you will soon be out of business. You need a continuous feedback loop in which your customers tell you what they liked, what they want, and what issues they are facing. There are many tools to choose from, but many software-powered organizations prefer the GitHub issue board for tracking customer feedback.

I think these are the tools/platforms for DevOps success. Let me know your favorite tools.

3 Emerging Cloud Technologies You Should Know

In previous articles, we’ve discussed four notable trends in cloud computing and how the rise of microservices and the public cloud has led to a whole new class of open source cloud computing projects. These projects leverage the elasticity of the public cloud and enable applications designed and built to run on it.

Early on in cloud computing, there was a migration of existing applications to Amazon Web Services, Google, and Microsoft’s Azure. Virtually any app that ran on hardware in private data centers could be virtualized and deployed to the cloud. Now with a mature cloud market, more applications are being written and deployed directly to the cloud and are often referred to as being cloud native.

Here we’ll explore three emerging cloud technologies and mention a few key projects in each area. For a more in-depth explanation and to see a full list of all the projects across six broad categories, download our free 2016 Guide to the Open Cloud report.  

Cloud Native Applications

While there is no textbook definition, “cloud native” in its simplest sense describes applications designed to run in modern distributed-systems environments capable of scaling to tens of thousands of nodes. The old mantra, “No one ever got fired for buying IBM (or Microsoft),” has given way to a new slogan: “No one is going to get fired for moving to the cloud.” Rather than looking for hard and fast qualifiers for cloud native, we need to look at the design patterns being applied to this evolving breed of applications.

In the pre-cloud days, virtualization took hold: entire operating systems became portable inside virtual machines, so a machine image could move from server to server based on its compatibility with hypervisors like VMware, KVM, or Xen Project. In recent years, the abstraction has moved up to the application layer: applications are container-based and run in portable units that are easily moved from server to server regardless of hypervisor, thanks to container technologies like Docker and the CoreOS-sponsored rkt (pronounced rocket).

Containers

A more recent addition in the cloud era is the rise of the container, most notably Docker and rkt. These application hosts are an evolution of previous innovations including Linux control groups (cgroups) and LXC, and an even further abstraction to make applications more portable. This allows them to be moved from development environments to production without the need for reconfiguration.

Applications are now deployed either from registries or through continuous deployment systems to containers that are orchestrated using tools like Ansible, Puppet, or Chef.

Finally, to scale out these applications, the use of schedulers such as Kubernetes, Docker Swarm, Mesos, and Diego coordinate these containers across machines and nodes.

Unikernels

Another emerging technology that bears some similarity to containers is that of unikernels. A unikernel is a pared-down operating system, which is combined with a single application into a unikernel application and which is typically run within a virtual machine. Unikernels are sometimes called library operating systems, because they include libraries that enable applications to use hardware and network protocols in combination with a set of policies for access control and isolation of the network layer. There were systems in the 1990s called Exokernel and Nemesis, but today popular unikernels include MirageOS and OSv. Because unikernel applications can be used independently and deployed across diverse environments, unikernels can create highly specialized and isolated services and have become increasingly used for developing applications in a microservices architecture.

In the series that follows, we’ll dive into each category of open source cloud technology and list the most useful, influential, and promising open source projects with which IT managers and practitioners can build, manage, and monitor their current and future mission-critical cloud resources.  

Learn more about trends in open source cloud computing and see the full list of the top open source cloud computing projects. Download The Linux Foundation’s Guide to the Open Cloud report today!

Read the other articles in this series:

4 Notable Trends in Open Source Cloud Computing

Trends in the Open Source Cloud: A Shift to Microservices and the Public Cloud

Why the Open Source Cloud Is Important

 

Build Your Own Netflix and Pandora With Raspberry Pi 3

Do you have a huge collection of movies, TV shows, and music that you purchased over the years but it’s collecting digital dust on your hard drives? How about creating your very own Netflix- and Pandora-like setup using the free Plex Media Server software? No, you don’t have to buy an expensive, bulky PC. All you need is a Raspberry Pi 3, a hard drive, an SD card and a mobile charger. It should all cost less than $100.

What you need:

  • PC or laptop

  • Raspberry Pi 3

  • Micro SD card

  • A powered hard drive

  • 5v 2A power supply for Pi

  • Monitor, HDMI cable, keyboard and mouse (only for initial setup)

  • I also recommend a heat sink for Pi chips as multimedia consumption does make them hot

  • Ethernet cable (optional)

I will be using it in a headless manner, but we do need a monitor with an HDMI cable for initial setup.  On your PC/laptop, download the ‘NOOBS’ distribution installer from the official site. It’s a zip file, which you’ll extract using the unzip command.

Insert the Micro SD card and format it as FAT32 using Gnome Disk Utility.


Then, change directory to the Micro SD card:

cd /path_of_USB

And unzip the NOOBS file into the Micro SD card:

unzip PATH_OF_NOOBS

In my case it was:

unzip /home/swapnil/Downloads/NOOBS_v1_9_2.zip

Just ensure that all the content of the NOOBS folder is in the root directory of the Micro SD card.

Now plug the monitor, keyboard and mouse into the Pi, insert the Micro SD card and connect the power supply. The system will boot up to NOOBS where you can choose the operating system you want to install. Choose Raspbian. Once the installation is finished, it will reboot into your brand new Raspbian OS. It will also automatically resize the file system to use all available space on the SD card.

If you can use an Ethernet cable, I would recommend that because it will give you faster speed compared to the WiFi on board. If not, then use the WiFi utility in Raspbian to connect to the wireless network. Once you are online, open the terminal and run the following command to find the IP address of your Pi:

ifconfig

Once you have the IP address, open the terminal on your PC/laptop and ssh into your Pi:

ssh pi@IP_ADDRESS_OF_PI

The default password for the Pi is raspberry. If you want to change the password, run the following command and enter the new password after the prompt:

passwd pi

Now let’s update the system before we install Plex. This is a best practice for all fresh distro and software installations:

sudo apt-get update

sudo apt-get dist-upgrade

Once updated, connect the external hard drive to your Pi using one of the USB ports. It’s best to use a hard drive that has been formatted in the ext4 file system for better compatibility with Linux. Mount it and create an entry in the ‘fstab’ so that it auto mounts between reboots.
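For example, assuming the drive shows up as /dev/sda1 and you mount it at /mnt/media (both are placeholders for your own setup), the fstab entry could look like this:

```
# /etc/fstab entry: device  mount point  fs type  options  dump  pass
/dev/sda1  /mnt/media  ext4  defaults,nofail  0  2
```

After adding the line, `sudo mount -a` mounts it without a reboot, and the `nofail` option keeps the Pi booting normally even if the drive is unplugged.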

Now it’s time to install Plex Media Server. We are using packages created by a third-party developer so let’s add their GPG key:

wget -O - https://dev2day.de/pms/dev2day-pms.gpg.key  | sudo apt-key add - 

Now add repos to the source list file:

echo "deb https://dev2day.de/pms/ jessie main" | sudo tee /etc/apt/sources.list.d/pms.list

Now update the system:

sudo apt-get update

And then install Plex Media Server:

sudo apt-get install -t jessie plexmediaserver -y

Now run it:

sudo service plexmediaserver start

That’s it. You have Plex Media Server running on your Raspberry Pi 3.

Set up your media server

Plex makes it extremely easy to set up your media center. Now you need to point Plex Media Server at your media files: movies, music, and TV shows. You can do it from any PC on your local network. Just type this address into a web browser, filling in your own Pi’s IP address:

IP_ADDRESS_OF_PI:32400/web/index.html#

In my case it was:

10.0.0.26:32400/web/index.html#

This will open the Plex Media Server interface. The greatest feature of Plex is metadata that it pulls from the internet and attaches to your media files. But it’s extremely important to categorize your media otherwise Plex won’t detect it. So create these folders on your hard drive and store appropriate media inside the folders: movies, tv_shows, music, home_videos, photos.

Now copy movies to the movies folder, TV shows to the tv_shows folder, any videos that you take from your phone or camera to home_video folder, and so on. If you copy TV shows or home videos to movies or vice versa, those files won’t show up on Plex and you won’t be able to play them.
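The folder layout above can be created in one shot from the Pi’s shell. This sketch assumes the drive is mounted at ~/media; adjust the path to your own mount point:

```shell
# Create the category folders Plex expects; -p skips directories that exist
MEDIA="$HOME/media"   # adjust to your drive's mount point
for d in movies tv_shows music home_videos photos; do
  mkdir -p "$MEDIA/$d"
done
ls "$MEDIA"
```

Once the folders exist, copying each kind of media into its matching folder is all Plex needs to categorize it correctly.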

Once you have taken care of your media files, open the movie tab on the Plex Media Center interface and browse to add the movies folder from your hard drive. Repeat the step for each media type. Once done, give Plex some time to scan and process those files.


Another interesting thing you can do with Plex is add online video channels such as CNN, PBS, and History. Just go to the ‘Channels’ option and install the channels you like. All of these channels, in addition to your movies, TV shows, music, and photos, are then accessible through your Plex server running on the Pi.


Access your Plex Media Server

There are many ways to access your Plex Media Center:

1. If you are on the local network open this URL in the web browser:

IP_ADDRESS_OF_PI:32400/web/index.html#

In my case it was:

10.0.0.26:32400/web/index.html#

It will open the Plex Media Player interface, just log into your media server and start playing content. You can also manage your Plex Media Server from this interface.

2. You can access your Plex Media Server from mobile devices using the official Plex app that’s available for both Android and iOS.

3. Or you can set up a Plex Media Player device (such as RasPlex) and turn any HDMI-enabled TV into your very own entertainment system. (See the next tutorial on how to do this!)

If you want to access Plex outside of your home network, you can purchase Plex Pass, which lets you stream your content across devices over the Internet. Plex also remembers playback history and where you left off in any content, and, just like Netflix, you can add family members so that each person maintains their own viewing history.

All of this for just under $100, and you got to build it yourself. Isn’t it fun?

Read the other articles in the series:

5 Fun Raspberry Pi Projects: Getting Started

How to Build a Minecraft Server with Raspberry Pi 3

Turn Raspberry Pi 3 Into a Powerful Media Player With RasPlex

For 5 more fun projects for the Raspberry Pi 3, including a holiday light display and Minecraft Server, download the free E-book today!

Time Is Running Out for NTP

Everyone benefits from Network Time Protocol, but the project struggles to pay its sole maintainer or fund its various initiatives. 

“NTF’s NTP project remains severely underfunded,” the project team wrote in a recent security advisory. “Google was unable to sponsor us this year, and currently, the Linux Foundation’s Core Internet Initiative only supports Harlan for about 25 percent of his hours per week and is restricted to NTP development only.”

Read more at InfoWorld

 

Writing Docker Microservices in COBOL

There is one thing that COBOL does very well and that has kept it around longer than one would expect. The one thing that COBOL does well is volume processing. A lot of very big companies and firms use it to process a lot of data. The Social Security Administration currently has about 60 million lines of COBOL in production, and the US Navy and the Internal Revenue Service still use COBOL.

What happens when we need to access COBOL programs from a more modern architecture? What happens when we have to move a function that has been performed by an in-house COBOL program for years?

Read more at Microservices Practitioner Articles

Uncommon but Useful GCC Command-Line Options

Software tools usually offer multiple features, but – as most of you will agree – not all their features are used by everyone. Generally speaking, there’s nothing wrong in that, as each user has their own requirement and they use the tools within that sphere only. However, it’s always good to keep exploring the tools you use as you never know when one of their features might come in handy, saving you some of your precious time in the process. So, in this article, we will cover a couple of such options, offering all the required details, and explaining them through easy to understand examples wherever necessary.

Case in point: compilers. A good compiler always offers a plethora of options, but users generally know and use only a limited set. Specifically, if you are a C developer using Linux as your development platform, it’s highly likely that you’d be using the gcc compiler, which offers an endless list of command-line options.

Read the full article here

Health Checking Your Docker Containers

One of the new features in Docker 1.12 is that a health check for a container can be baked into the image definition, and it can be overridden at the command line.

Just like the CMD instruction, there can be multiple HEALTHCHECK instructions in a Dockerfile, but only the last one takes effect.

This is a great addition because a container reporting its status as “Up 1 hour” may still be returning errors. The container is up, but previously there was no way for the application inside it to report its own status. This instruction fixes that.
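As a sketch, a HEALTHCHECK instruction in a Dockerfile might look like the following; the base image and the probe command are illustrative, and the example assumes curl is available inside the image:

```dockerfile
FROM nginx:alpine

# Probe the web server every 30 seconds; after 3 consecutive failures,
# `docker ps` reports the container as unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```

The same settings can be overridden at run time with `docker run` flags such as --health-cmd and --health-interval.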

Read more at DZone

 

Managing Devices in Linux

There are many interesting features of the Linux directory structure. This month I cover some fascinating aspects of the /dev directory. Before you proceed any further with this article, I suggest that, if you have not already done so, you read my earlier articles, Everything is a file, and An introduction to Linux filesystems, both of which introduce some interesting Linux filesystem concepts. Go ahead—I will wait.

Great! Welcome back. Now we can proceed with a more detailed exploration of the /dev directory.
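As a quick taste of what makes /dev interesting: device nodes are not regular files. In ls output, a leading “c” marks a character device, and a major,minor number pair appears where a file size would normally be:

```shell
# /dev/null and /dev/zero are character devices: note the leading 'c'
# in the mode field and the major,minor numbers in place of a size
ls -l /dev/null /dev/zero
```

Block devices, such as disks, show a leading “b” instead, and the major number tells the kernel which driver handles the device.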

Read more at OpenSource.com