In this tutorial, we will install WordPress using multiple Docker containers: WordPress itself in one container and the MariaDB database in another. We will then install Nginx on the host machine as a reverse proxy for the WordPress container.
Docker is an open source project that makes it easier for developers and sysadmins to create, deploy, and run distributed applications inside containers. Docker provides operating-system-level virtualization: an application running inside a container is isolated from the rest of the system.
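As a rough sketch of the multi-container setup described above (the network name, container names, and password below are placeholders, not necessarily what the tutorial itself uses), the two containers can share a user-defined Docker network, with WordPress published only on localhost so that Nginx on the host can proxy to it:

```bash
# Create a user-defined network so the containers can reach each other by name
docker network create wp-net

# MariaDB container (credentials are placeholders)
docker run -d --name wordpressdb --network wp-net \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -e MYSQL_DATABASE=wordpress \
  mariadb

# WordPress container, bound to localhost:8080 for the reverse proxy
docker run -d --name wordpress --network wp-net \
  -e WORDPRESS_DB_HOST=wordpressdb \
  -e WORDPRESS_DB_PASSWORD=changeme \
  -p 127.0.0.1:8080:80 \
  wordpress
```

An Nginx server block on the host would then simply proxy_pass to http://127.0.0.1:8080.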
Early this year, researchers from the University College London Optical Networks Group set a record for the fastest-ever data rate for digital information: 1.125 Tb/s. That's terabits. At that rate, you could download the entire Game of Thrones series, in HD, within one second!
To achieve their record-breaking data rate, the researchers built an optical communications system with multiple transmitting channels and a single receiver, using techniques from information theory and digital signal processing. They then applied coding techniques commonly used in wireless communications, but not yet widely used in optical communications, to ensure the transmitted signals adapt to distortions in the system electronics.
This tutorial will show you how to install VMware Workstation 12 on RHEL/CentOS 7, Fedora 20-24, Debian 7-9, Ubuntu 16.04-14.04, and Linux Mint 17-18.
VMware Workstation 12 is popular software that allows you to run multiple virtual machines on a physical host using the Type II (hosted) hypervisor model. The tutorial also discusses some common issues encountered during the installation process.
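As a general illustration of the procedure (the exact file name below is a placeholder for whatever build you download from VMware), Workstation for Linux ships as a self-extracting .bundle installer that is made executable and then run with root privileges:

```bash
# File name is illustrative; use the bundle you actually downloaded
chmod +x VMware-Workstation-Full-12.x.x-xxxxxxx.x86_64.bundle
sudo ./VMware-Workstation-Full-12.x.x-xxxxxxx.x86_64.bundle
```

The installer then launches a graphical wizard; kernel headers and build tools are typically needed so it can compile its kernel modules.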
A lot of users work with Linux via PuTTY, especially beginners who have a Linux VPS. To ease the process, we've listed and explained the best and most common shell commands that you can use in your SSH client. It's a beginner-friendly tutorial that guides you step by step through commands like mkdir, cd, and touch.
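For example, a typical sequence from such a session might look like this (the directory and file names are just illustrations):

```bash
mkdir projects      # create a new directory
cd projects         # change into it
touch notes.txt     # create an empty file
ls -l               # list the contents with details
```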
A new Trojan targeting Linux servers has been discovered in the wild, exploiting servers running the Redis NoSQL database to use them for bitcoin mining.
Up to 30,000 Redis servers may be vulnerable, largely because careless systems administrators have put them online without setting a password.
The Linux.Lady malware was discovered by Russian antivirus vendor Dr.Web and is, intriguingly, written in Google's Go programming language, largely built on open source Go libraries hosted on GitHub. The malware uses a more compact trojan called Linux.Downloader.196…
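The exposure itself is well understood: by default, Redis listens without authentication, so an instance reachable from the Internet can be driven by anyone. As a general hardening step (this is standard Redis practice, not something taken from the article, and the password is a placeholder), administrators can bind the server to the loopback interface and require a password in redis.conf:

```
# /etc/redis/redis.conf
bind 127.0.0.1                          # listen only on the loopback interface
requirepass your-strong-password-here   # require AUTH before accepting commands
```

A restart of the Redis service (for example, sudo systemctl restart redis-server on Debian-based systems) applies the change.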
Docker and microservices-based architectures have expanded our horizons with respect to how the industry builds and supports applications at scale, says Corey Quinn, Director of DevOps at FutureAdvisor and an early developer behind SaltStack. Nonetheless, Quinn, in his LinuxCon and ContainerCon North America talk — called “Heresy in the Church of Docker” — will preach caution and present an alternative view of the containerization craze.
In this interview, Quinn shares his perspective on the downside of containers, the possibilities for failure, and why containers cannot fix architecture woes without careful consideration and management.
Linux.com: Your talk has a bold title. Why is it heresy to speak about the downside of Docker/containers?
Corey Quinn: Docker is a fantastic technology, but it’s not one that’s well understood. If we take a look at the lessons of the past, there was more hype than understanding around cloud as well — and before that, around virtualization. I’m seeing the same patterns repeat themselves here, and in some circles this is a far from popular viewpoint.
Linux.com: What is the unspoken truth about containers?
Quinn: The container model works well from a greenfield perspective once you control for a few factors. It’s decidedly less than ideal for a number of existing applications that weren’t written with containerization in mind. Rearchitecting your legacy application to conform with the ideals of a microservices-based architecture is all well and good — but is that (non-trivial) effort worth it? I don’t pretend to have the answer — it’s going to depend upon what your priorities look like. That said, I don’t think it’s a slam dunk either way.
Linux.com: It’s a surprising stance from the Director of DevOps at a cloud-native company. What experience have you had that informs your caution?
Quinn: You’ve said it yourself — we’re a cloud native company, we’re not container native. Our applications were written from a perspective of stateful instances, uniquely identified at times. A number of container-centric concerns (service discovery, configuration management, scheduling) don’t enter into the architecture in quite the same way.
Linux.com: How does configuration management fit into the story?
Quinn: “Carefully!” You don’t want to run a configuration management agent inside of a container, or you’ll find that your “stateless container” just became a very strangely built virtual machine. That said, you need to have some form of configuration management on the hosts that underlie your infrastructure in some form — whether that looks like traditional CM, whether you leverage something like Mesosphere or Kubernetes, or something else entirely is going to be site dependent.
Linux.com: What’s the one thing about containers that you’d like DevOps pros to take away from your talk?
Quinn: The same thing I’d like DevOps pros to take to every architectural decision: “Look before you leap.” There are a lot of shops out there where containers are fantastic — they’re the right decision! The problem is blindly assuming that containers are the fix to your architectural problems without considering what the migration, failure cases, or longer term view look like. Once you’ve given those things reasonable consideration and have a plan, make your decision and act accordingly.
Look forward to three days and 175+ sessions of content covering the latest in containers, Linux, cloud, security, performance, virtualization, DevOps, networking, datacenter management and much more at LinuxCon + ContainerCon North America, Aug. 22-24 in Toronto. You don't want to miss this year's event, which marks the 25th anniversary of Linux! Register now before tickets sell out.
On July 29, Google hosted the open source community for the inaugural CORD Summit at the Tech Corner campus in Sunnyvale, Calif. CORD, or Central Office Re-architected as a Datacenter, launched last week as an independently funded ON.Lab software project hosted by The Linux Foundation. The sold-out event featured interactive talks from partners and leading stakeholders of the newly formed CORD Project, including AT&T, China Unicom, Ciena, Google, NEC, ON.Lab, ONF, The Linux Foundation, University of Arizona, and Verizon.
CORD is the biggest innovation in the access market since ADSL and the cable modem. Considering the broad scope of the access network, and the technical roadmap the growing open source CORD community laid out at the Summit, CORD has the potential to redefine the economics of access.
To understand the importance of CORD, we must first understand how a Tier 1 network is constructed. We've all seen network diagrams on whiteboards or PowerPoint slides illustrating routers, switches, and optical transport equipment as rectangles or circles connected by straight lines, rings, or perhaps a cloud. An often overlooked aspect is that these pieces of metal must reside in a physical building. On one side, "the Cloud" represents cloud companies like Google and Amazon, over-the-top providers (e.g., Netflix), and the Service Providers' (SPs') own cloud infrastructure. On the other side is "Access." To get from your home or building to "the Cloud," your packets must first go through customer premises equipment (CPE) provided by the SP, followed by the outside plant (OSP), the local central office (CO), the metro CO, and then the large-city CO where the large SPs interconnect, or peer, with other SPs and the cloud companies.
The Cloud datacenter (DC) has experienced an innovation boom driven by the massive increase in Cloud computing and “big data.” Modern datacenters are pristine with raised flooring, state-of-the-art HVAC systems, and hot and cold aisles. COs are not pristine. They are, however, strategically located in downtown locations in every city and town in your country. COs can usually be identified as the building near the center of town with few, if any, windows. The local CO is just the beginning of the access, or broadband, network. The end is in a box (CPE), such as an ONT/ONU, Cable Modem, or Gateway, in every home and building the SP serves.
This access network, or “last mile,” is a challenging business. The network encompasses thousands of miles of facilities (wires) that radiate from COs and cable hubs, and terminate in nearly every home and building in the serving area. It includes specialized outside plant electronics deployed in cabinets, underground vaults and mounted to utility poles. Onerous federal, state and local regulations that hinder flexibility add to the challenges, as well as real operational setbacks such as lightning strikes, floods, backhoe fades and squirrel chews.
An important consideration for understanding the value of CORD is the economic scope of the CO and access network. A modern DC can consume hundreds of megawatts of electricity, but for SPs such facilities are few and far between; Comcast, for example, has only three large DCs. Big-city COs cover perhaps the 50 largest cities in the USA, with medium-city COs covering a few hundred more. However, in the USA, close to 20,000 local COs connect tens of thousands of endpoints (buildings); AT&T alone owns 4,700. Many of these facilities are "unmanned." To illustrate this scope, the SCTE's Energy 2020 program produced the "Energy Pyramid," showing that 73-83% of a cable company's energy bill goes to the access network. Given that Comcast's electric bill is over $1 billion, that share works out to roughly $730-830 million a year; the access network consumes real money. And don't forget the fleet of bucket trucks that are an integral part of the access network.
The large number of CO’s are only the beginning of the CORD challenge. In the USA, COs are more than 50 years old, with many decades older. They have evolved over time, from early analog voice switches, to the Class 5 PSTN switch, to Frame Relay, ISDN, digital loop carriers, ADSL, DOCSIS, GPON, and so on. During this period, the racks and bays of installed base of equipment was simply added onto with other proprietary purpose-built hardware devices. This resulted in a conglomeration of technologies located where convenient during the time of installation. This led to more than 300 different types of equipment from dozens of vendors each with their own management system, leading to enormous operational expenses (OPEX). Capital expenses (CAPEX) are a one-time event with a 5 to 7 year depreciation cycle. OPEX, on the other hand, remain ongoing as long as the equipment is in operation. It’s no surprise that adding a new service to this mix is an expensive and time-consuming challenge.
Keep in mind the SP’s design for “5 Nines” (99.999% uptime) availability. The access network can only be down 4.7 minutes per year. A reboot may only take 5 minutes, but that’s over the SP’s budget for the year. If the closest replacement part (sparing) is 20 minutes away in the regional warehouse, the SP could get fined by the FCC, (E911 is serious business). The alternative is to stock 300 different spares in every CO (with secondary spares in the regional warehouse). The resulting purchasing and logistic systems add to the high OPEX and limit flexibility.
CORD must address these real-world, street-level challenges. Specific to the access, or broadband, network are today's access equipment architectures. SPs look to virtualization for more than "cheap" hardware: they must adapt their business models to better compete with the deep-pocketed, aggressive cloud companies that would like to see them become the proverbial dumb pipe.
One of the many desired outcomes of virtualization is to better align expenses with actual demand. Consider a GPON network. Located in the CO is the Optical Line Terminal (OLT): an expensive, proprietary, large-chassis device that can support upwards of 5,000 homes with a 1:32 split (32 homes per port). What if you have 5,010 subscribers? You must purchase and install a second large chassis and populate only a few of its slots. Thus, while demand may grow linearly or exponentially, capacity can only grow in large steps.
CORD is an ambitious project that addresses the most difficult and economically challenging part of any SP's network: the access network, or last mile. With more and more cities moving to gigabit service, a linear extension of today's technologies and architectures is unsustainable. The bar is high, but as shown in Sunnyvale last week, the CORD community is ready for the challenge.
Open source gives startups an opportunity to shoulder big vendors aside by leveraging the cloud to disrupt traditional relationships, says Martin Casado, a pioneering software-defined networking entrepreneur turned venture capitalist.
“It turns out one of the most difficult things for a startup to do is actually go to market,” said Casado, a general partner at Andreessen Horowitz, in a presentation. “The incumbents have sewn up the go to market space.”
Incumbent vendors have literally decades of relationships with enterprise customers, predating the Internet. These relationships transcend an individual salesperson — a startup can’t disrupt the relationship simply by hiring key salespeople. …
Software Defined Networking (SDN) firm Midokura today announced an update to its enterprise platform that provides multi-cloud connectivity as well as some container networking support. Midokura CTO Pino de Candia explained that the new Midokura Enterprise MidoNet (MEM) 5.2 update is based on open source MidoNet 5.0. Midokura first open-sourced its MidoNet platform in November 2014 at the OpenStack Summit in Paris.
“There is no difference in the core functionality between open source MidoNet and MEM,” de Candia told Enterprise Networking Planet. “MEM 5.2 is the commercial, hardened release of MidoNet, includes management via MidoNet Manager, analytics via MEM Insights with Fabric Topology, 24/7 enterprise-class support.”
One of the many things I love about the Linux community is how ridiculously helpful it can be.
Like a lot of us, Børge A. Roum has bought a lot of Humble Bundles over the years. And, also like a lot of us, he says he was lured in by the inclusion of DRM-free Linux games. But have you ever hit a problem trying to play any of them?
Roum did, as he explains in a blog post: "Quite a few of the games I bought didn't seem to actually work! Or I had to jump through some crazy hoops that no self-respecting developers would ever think to ask any Windows or Mac OS users to jump through."