GS Shop is one of the largest TV shopping networks in Asia, and one of the largest e-commerce sites in Korea with more than 1000 employees and 1.5 million users daily. Vivek Juneja of GS Shop’s Container Platform Team, at MesosCon Asia 2016, shares how he and his team moved this behemoth to the new agile way of running the datacenter.
We know that change is not easy, and Juneja shares many valuable insights into how to successfully manage completely revamping your IT department. Progress is hard even when the old way is difficult. Juneja describes their old practice of “yawn-driven deployment”: “We practice something called Yawn-Driven Deployment, deploying at 3:00 a.m. That’s what we were doing for a long time. Everybody gets together at 3:00 a.m. It’s a party. We deploy, and we have a lot of yawns, and that code goes to production.” Nobody really liked working this way, but it’s what they were used to.
“When we look at any deployments or adoption of DevOps practices,” says Juneja, “We try to follow this adoption graph, which means, at the beginning, you’re in Evaluation Stage. And then you’ll start putting something in production so that you get some feedback, and teams get some confidence that this thing works. And once you reach a confidence in a production environment, you will likely see a tipping point.”
Juneja’s team deployed their new DevOps methodologies on both new and old services, running new and old side-by-side. “Which shows them the difference between the old style and the new style,” says Juneja, “Doing a compare-and-contrast mode for that technology… we move traffic between them so that a particular percentage of traffic moves to the old environment and the rest goes to the containerized environment. This has trade-offs, but it also provides us the basis for proving the technology. So everything becomes mainstream. Everything becomes stable. That’s the time where we move everything to our new environment which uses Mesos.”
This process of introducing and proving changes systematically and in small steps worked so well that staffers went overboard and overloaded the new systems. “Our teams started creating more environments. They loved it so much, they would have environments for every new deployment; they were creating too much of it. Making it easy for our teams to deploy a service means there are too many of these environments lying around.” This is a common problem: users don’t understand the true cost of their resource usage. Juneja’s team’s solution was to set automatic timeouts on dev-and-test environments. As a result, the infrastructure team learned about resource utilization and developed good resource-management habits.
Watch Juneja’s full talk (below) to learn more excellent insights into progressing from yawn-driven development to a more comfortable schedule, and to learn about bringing all of your diverse and sometimes competing teams together and working towards common goals.
Interested in speaking at MesosCon Asia on June 21 – 22? Submit your proposal by March 25, 2017. Submit now>>
Not interested in speaking but want to attend? Linux.com readers can register now with the discount code, LINUXRD5, for 5% off the attendee registration price. Register now to save over $125!
The founder of Gluster is ready to push storage further, as his new startup, Minio, announced general availability of its container-based object storage yesterday.
The catch is that it’s not really about storage — not in the long term. Minio founder Anand Babu (AB) Periasamy — who wrote the open source GlusterFS file system and also founded the startup Gluster, now owned by Red Hat — says his Palo Alto, California-based company is about data — specifically, using the data to help pay for the storage.
Read more at SDx Central
Internet of Things (IoT) devices are soon expected to outnumber end-user devices by as much as four to one. These applications can be found everywhere—from manufacturing floors and building management to video surveillance and lighting systems.
However, security threats pose serious obstacles to IoT adoption in enterprises or even home environments for sensitive applications such as remote healthcare monitoring. IoT security can be divided into the following three distinct components:
Read more at Network World
There’s nothing new about mesh-networking technology. What is new is that mesh networking is finally cheap enough to be deployed in both homes and small businesses. Mesh networking deals with that most common of Wi-Fi problems: Dead zones. You know how it goes. You move your laptop from your office to your conference room and — blip! — there goes your Wi-Fi connection.
Read more at ZDNet
Gain a solid understanding of the current state of Cloud platforms, how to integrate the Cloud into your systems and how to manage the risks.
In this article, I’ll introduce you to Cloud platforms, discuss the services they provide, the cost (not just monetary cost) and the problem of lock-in. I’ll also discuss hybrid systems that can run from the Cloud or where some of their components can run from the Cloud. At the end of this article, you should have a solid understanding of the current state of Cloud platforms, how to integrate the Cloud into your systems and how to manage the risks.

Why Go to the Cloud?
The number one reason to go to the Cloud is that Cloud platforms provide so much value that is important even for small companies. If you had to build even the most essential parts yourself, you would spend a lot of time doing it, and even more time maintaining and addressing all the issues that your half-baked system causes. Today’s systems handle more and more data and face higher expectations in terms of uptime, availability and responsiveness. Even startups in beta must provide reliable service, even if not feature-rich. Letting the system crash and discovering it in the morning with 50 angry user emails is not an option anymore. Now, the Cloud is not a magic panacea. You still have to work hard to put things together and use the Cloud offering intelligently, but all the building blocks, as well as integrated solutions, are available to you.
Read more at DevX
According to Linus Torvalds, things have been very quiet since the sixth Release Candidate of Linux kernel 4.10, and this RC7 build, which also appears to be the last, is a small one that brings various updated GPU, HID, and networking drivers, a bunch of improvements for the ARM64, PowerPC (PPC), SPARC, and x86 hardware architectures, as well as various other fixes to supported filesystems, virtual machine support, networking stack, and genksyms scripting.
“It’s all been very quiet, and unless anything bad happens, we’re all back to the regular schedule with this being the last RC,” said Linus Torvalds in today’s mailing list announcement.
Read more at Softpedia
Apache Kafka, the open source distributed streaming platform, is making an increasingly vocal claim for stream data “world domination” (to borrow Linus Torvalds’ whimsically modest initial goals for Linux). Last summer I wrote about Kafka and the company behind its enterprise rise, Confluent. Kafka adoption was accelerating as the central platform for managing streaming data in organizations, with production deployments of Kafka claiming six of the top 10 travel companies, seven of the top 10 global banks, eight of the top 10 insurance companies, and nine of the top 10 US telecom companies.
Today, it’s used in production by more than a third of the Fortune 500.
Read more at TechRepublic
In previous posts, we talked about some of the basic Linux commands, and today we continue our journey with something very important in Linux: environment variables.

So what are environment variables, where do they live, and what is the benefit of knowing about them?

The bash shell we use to run our commands uses a feature called environment variables to store values required by the programs or scripts run from that shell. This is a very handy way to store persistent data and make it available to any script or program you run from the shell.
There are two types of environment variables in the bash shell:
Global variables are visible in the shell session and to any process started from that shell.

Local variables are visible only in the shell that creates them.
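As a quick sketch of the difference (the variable names here are arbitrary examples):

```shell
# A local variable: set without export, visible only in this shell
MY_LOCAL="only here"

# A global (environment) variable: export makes child processes inherit it
export MY_GLOBAL="everywhere"

# A child shell inherits the exported variable but not the local one
sh -c 'echo "global=$MY_GLOBAL local=$MY_LOCAL"'   # prints: global=everywhere local=
```

To make a variable persist across sessions, the `export` line is typically added to a startup file such as `~/.bashrc`.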
Read the complete article
As operating systems, Linux/Unix put the user’s privacy and safety above all. While this has resulted in a product that many people swear by, it’s also led to certain features that may not be easy to discern at first sight. For instance, the possibility of password protecting directories without using encryption is something that many people don’t know about. While encrypting is definitely useful, it also has certain issues associated with it, including:
1. Decreased performance
No matter how strong your system is, opening encrypted folders tends to take up a considerable amount of resources, resulting in a slower and more cumbersome computer that’s not always fun to use.
2. Encryption prevents the folder’s contents from being searchable/indexed
By its very nature, encryption hides content. Therefore, the files inside the folder you’ve encrypted will not show up on any search or index attempts, which can be quite annoying when you’re looking for something specific.

As you can see, there are a number of issues that pop up whenever encryption is involved. While this kind of protection is necessary for sensitive material, a simple password would be enough to deter strangers from accessing folders of lesser importance. Luckily, Linux/Unix allow for this kind of password protection system as well, as evidenced by the following methods:
1. Changing file permissions
By modifying the permissions of certain files and folders, you can control who gets to access them. This way, they will only be readable by their owner. Anyone who wanted to read them or change these permissions would have to know the owner’s password, or use sudo to act as root, which also requires a password. To change the permissions, just use the “chmod og-rwx filename” command on all the files you want to restrict access to.
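A minimal sketch of this approach (the filename is a placeholder):

```shell
# Create a sample file to protect (placeholder name)
touch secrets.txt

# Remove read, write, and execute permission for group (g) and others (o);
# only the file's owner can now read or modify it
chmod og-rwx secrets.txt

# Verify: for a regular file the mode should now read -rw------- (0600)
ls -l secrets.txt
```

The same command works on a directory, where removing execute permission also prevents other users from entering it.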
2. Create a new user
You can also choose to create a new user for all your protected files and directories. Simply employ the “chown $newuser filename directoryname” and “chmod og-rwx filename directoryname” commands, taking care to replace “$newuser” with the new user account name. By using this method your files will be safe even if you forget to log out for any reason.
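Sketching the steps (run as root, or prefix each command with sudo; the account name “filekeeper” and the file names are illustrative placeholders, not from the article):

```shell
# Create a dedicated, non-login account to own the protected files
useradd --no-create-home --shell /usr/sbin/nologin filekeeper

# Sample file and directory to protect (placeholder names)
touch private.txt
mkdir -p private_dir

# Hand ownership to the new account and shut out group/others
chown filekeeper private.txt private_dir
chmod og-rwx private.txt private_dir
```

After this, reading those files from your regular account requires becoming the new owner or root, both of which demand a password.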
The methods described above do a good job of protecting your folders from unwarranted intrusions without resorting to encryption. Of course, a password protection system would be nothing without a good password, so take the time to come up with something that’s easy to remember for you, but hard to guess for others. Ideally, you should shy away from using information about yourself as a base for any password, since such passwords can rather easily be detected through some basic social engineering techniques. In fact, for most people nowadays it’s much easier to simply use a good password generator instead. These can be readily found online and generate passwords that are much harder to guess. Just be sure to write them down somewhere or use a dedicated password manager, lest you risk getting locked out of your own accounts.
That concludes our quick guide on how to password protect a folder on Linux/Unix the easy way. Keep in mind that this method is only recommended for sheltering low-priority folders, with encryption still being an irreplaceable tool when it comes to more important content.