Community Blogs



Physical Security Market Expected to Reach USD 125.03 Billion Globally by 2019

The worldwide market for physical security was valued at USD 48.05 billion in 2012 and is projected to reach USD 125.03 billion by 2019, growing at a CAGR of 14.9% from 2013 to 2019. Major factors driving demand for physical security include rising global security concerns and increasing budget allocations for physical security by governments to prevent terrorism and criminal activity. In addition, government regulations in different countries demanding increased security levels are driving the adoption of physical security in several end-user sectors, including industrial and business organizations. Continued investment in infrastructure worldwide, especially in the Asia Pacific region, is expected to be a significant driver of growth for the physical security market in the coming years.

[Figure: Global physical security market size and forecast, 2011-2019]
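As a rough sanity check on those forecast figures, compounding the 2012 base at the stated CAGR for seven years lands close to the 2019 projection. A minimal Python sketch (the small difference is presumably rounding in the report's inputs):

# Quick sanity check of the report's figures (2012 base, seven years of growth to 2019).
base_2012 = 48.05      # USD billion, 2012 market size from the report
cagr = 0.149           # 14.9% compound annual growth rate
years = 2019 - 2012

forecast_2019 = base_2012 * (1 + cagr) ** years
print(f"Implied 2019 market size: USD {forecast_2019:.2f} billion")
# Prints roughly USD 127 billion, close to the USD 125.03 billion forecast.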

The primary concern in physical security is protection and prevention in order to serve the security interests of people, equipment, and property. The increase in terror and criminal activity has escalated demand for physical security solutions. Internet protocol (IP) video, sophisticated access control systems and biometric solutions are expected to drive demand for physical security solutions. Further, the emerging trend of convergence between logical and physical security and the increased demand for integrated physical security solutions are expected to boost the growth of the physical security market.

The components of physical security include hardware, software and services. The market for physical security hardware has been further segmented into intrusion detection and prevention systems, access control systems and others (fire and life safety, visitor management and backup power). Among intrusion detection and prevention hardware products, video surveillance was the largest market, holding around 72% share in 2012, and is expected to be the fastest growing segment throughout the forecast period. In the access control segment, biometric access control held the largest share, around 38% of the total access control market in 2012. The physical security software market has been segmented into physical security information management (PSIM) and management, analysis and modeling software. PSIM is fast gaining market demand, driven by declining costs, increased sophistication and increasing awareness among end-users. The physical security services market has been segmented into video surveillance as a service (VSaaS), remote management services, technical support, public safety answering point (PSAP), security consulting, public alert, customer information and warning systems and others (data source, hosted access control, managed access control, alert notification, mobile security management). Among the services segments, VSaaS is expected to be the fastest growing market, driven by benefits such as cost savings, simplicity, and remote access.

End-user segments of physical security include transportation and logistics, government and public sector, control centers, utilities/energy markets, fossil generation facilities, oil and gas facilities, chemical facilities, industrial (manufacturing excluding chemical facilities), retail, business organizations, hospitality and casinos and others (stadiums, educational and religious infrastructure, healthcare organizations). The transportation industry, which includes aviation, rail, ports, road and city traffic and new start projects (light rail, rapid rail, metro rail, commuter rail, bus rapid transit, and ferries) within the transportation and logistics sector, was the largest end-user of physical security in 2012. North America emerged as the largest regional market for physical security in 2012. In view of the high incidence of terrorism, the region has been increasing security measures across all end-use verticals. Moreover, governments in North America have significantly increased regulatory measures for the adoption of physical security. Asia Pacific is one of the fastest emerging markets for physical security, growing at a CAGR of around 17%, owing to a significant push from governments and police to enhance security in view of increasing crime and terror in the region.

The market for physical security was highly fragmented in 2012 and no single player was dominant; however, Honeywell Security Group emerged as the market leader, accounting for around 5% share in 2012. Honeywell Security Group was followed by Bosch Security Systems Inc, Morpho SA (Safran), Hikvision Digital Technology, Assa Abloy AB, Axis Communications AB, Pelco Inc, Tyco International Ltd, NICE Systems Ltd, and others.

 

Source: http://www.transparencymarketresearch.com/physical-security-market.html

 

Cutting-Edge New Virtualization Technology: Docker Takes On Enterprise

Docker’s new container technology offers a smarter, more sophisticated solution for server virtualization today. The latest version of Docker, version 0.8, was announced a couple of days ago.


Docker 0.8 focuses more on quality than on new features, with the objective of meeting the requirements of enterprises.

According to the software’s current development team, many companies that use the software have been using it for highly critical functions. As a result, the aim of the most recent release has been to provide such businesses with top-quality tools for improving efficiency and performance.

What Is Docker?

Docker is an open source virtualization technology for Linux that is essentially a modern extension of Linux Containers (LXC). The software is still quite a young initiative, having first launched in March 2013. Founder Solomon Hykes created Docker as an internal project for dotCloud, a PaaS company.

The response to the application was highly impressive and the company soon reinvented itself as Docker Inc, going on to obtain $15 million in investment from Greylock Partners. Docker Inc. continued to run its original PaaS solutions, but the focus moved to the Docker platform. Since its launch, the virtualization software has been downloaded by over 400,000 users.

Google (along with a couple of the most popular cloud computing providers out there) is offering the software as part of its Google Compute Engine, though there is still nothing from major Australian companies (yes, I’m looking at you, Macquarie).

Red Hat has also included it in its OpenShift PaaS as well as in the beta of the upcoming Red Hat Enterprise Linux release. The benefits of containers are receiving greater attention from customers, who find that they can reduce overheads with lightweight apps and scale across cloud and physical architectures.

Containers Over Full Virtual Machines

For those unfamiliar with Linux containers: at a basic level, they are a containment feature of the Linux kernel. Containers can hold applications and processes like a virtual machine does, but without virtualizing an entire operating system. In such a scenario the application developer does not have to worry about writing to the operating system, which allows greater security, efficiency and portability.

Virtualization through containers has been available as part of the Linux source code for many years, and the idea is older still: Solaris Zones, created by Sun Microsystems over 10 years ago, pioneered the approach.

Docker takes the concept of containers a little further and modernizes it. Unlike a full virtual machine, a container does not come with a full OS; it shares the host OS, which is Linux. The software offers a simpler deployment process for the user and tailors virtualization technology to the requirements of PaaS (platform-as-a-service) solutions and cloud computing.


This makes containers more efficient and less resource hungry than virtual machines. The trade-off is that the user is limited to a single host OS platform. Containers can launch within seconds, while full virtual machines can take several minutes to do so. Virtual machines must also be run through a hypervisor, which containers do not need.

This further enhances container performance compared to virtual machines. According to the company, containers can offer application processing speeds roughly double those of virtual machines. In addition, a single server can have a greater number of containers packed into it, because the OS does not have to be virtualized for each and every application.
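The startup-time difference is easy to see for yourself. Below is a minimal Python timing sketch; it assumes a modern Docker CLI, a running daemon and a locally available busybox image, none of which are specified in the article:

import subprocess
import time

# Time a throwaway container from "docker run" to exit of a no-op command.
# Assumes a local Docker daemon and that the busybox image is already pulled.
start = time.monotonic()
subprocess.run(["docker", "run", "--rm", "busybox", "true"], check=True)
print(f"Container start-to-exit: {time.monotonic() - start:.2f} s")
# Typically well under a second, versus minutes to boot a full virtual machine.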

The New Improvements and Features Present In Docker 0.8

Docker 0.8 has seen several improvements and bug fixes since the last release. Quality improvements have been the primary goal of the development team. The team – comprising over 120 volunteers for this release – focused on fixing bugs, improving stability, streamlining the code, boosting performance and updating documentation. Future releases will keep up these improvements and continue to increase quality.

There are some specific improvements that users of earlier releases will find in version 0.8. The Docker daemon is quicker. Containers and images can be moved faster. Building images from source with docker build is quicker. Memory footprints are smaller, and the build is more stable thanks to fixed race conditions. Packaging is more portable thanks to the tar implementation. The code has been made easier to change through more compact sub-packaging.

The docker build command has also been improved in many ways. A new caching layer, greatly in demand among customers, speeds up builds; it achieves this by avoiding the need to upload the same content from disk again and again.

There are also a few new features to expect from 0.8. The software ships with an experimental BTRFS (B-tree file system) storage driver. BTRFS is a more recent alternative to ZFS in the Linux community, and this gives users a chance to try out the experimental file system for themselves.

A new ONBUILD trigger feature also allows an image to be used later to create other images, by adding a trigger instruction to the image.
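To make the deferred-build idea concrete, here is a small Python sketch that writes a base image Dockerfile containing ONBUILD triggers and a child Dockerfile that fires them; the image names and paths are made up for illustration, and a Docker daemon is assumed for the build commands in the final comments:

from pathlib import Path

# Sketch of the ONBUILD trigger: a base image records instructions that run
# only when another Dockerfile builds FROM it. Names ("my-base", "my-child")
# and directories are hypothetical.
base_dockerfile = """\
FROM busybox
# Deferred instructions: they execute during the *child* image's build.
ONBUILD COPY . /app
ONBUILD RUN ls /app
"""

child_dockerfile = """\
# Building this image fires the ONBUILD triggers recorded in my-base.
FROM my-base
"""

for name, content in (("base", base_dockerfile), ("child", child_dockerfile)):
    Path(name).mkdir(exist_ok=True)
    (Path(name) / "Dockerfile").write_text(content)

# With a Docker daemon available, one would then run:
#   docker build -t my-base base/
#   docker build -t my-child child/   # COPY and RUN from the triggers run here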

Version 0.8 also supports Mac OS X, which will be good news for many Mac users: Docker can be run completely offline, directly on their Mac machines, to build Linux applications. Installing the software on an OS X workstation is made easy with the help of a lightweight virtual machine named Boot2Docker.

Docker may have gained the place it has today partly because of its simplicity. Containers are otherwise a complex technology, traditionally requiring complex configuration and command lines. Docker’s API makes it easy for administrators to insert Docker images into a larger workflow.

A plug-in is also in development that will allow Docker to be used on platforms beyond Linux, such as Microsoft Windows, via a hypervisor. The development team plans to update the software once a month, with version 0.9 expected in early March 2014. The new release may include some new features if they are merged in time; otherwise they will be carried over to a later release.

Docker is expected to follow Linux in its version numbering: major changes will be represented by the first digit, second-digit changes signify regular updates, and emergency fixes will be represented by the final digit.

Customers looking forward to a production-ready Docker 1.0 will have to wait until April. They can also expect support for the software as well as a potential enterprise release. The team is also working on services for signing images, indexing them and creating private image registries.

Give it a try!

 

The Benefits of Well-Planned Virtualization

One of the biggest challenges facing IT departments today is keeping the work environment running. They must maintain an IT infrastructure that can meet the current demand for services and applications, and also ensure that, in critical situations, the company can resume normal activities quickly. And here is where the big problem appears.

Many IT departments are working at their physical, logical and economic limits. Their budgets are very small and grow on average 1% a year, while the complexity they manage grows at an exponential rate. IT has been viewed as a pure cost center and not as an investment, as I have observed in most of the companies I have passed through.

Given this reality, IT professionals have to juggle to maintain a functional infrastructure. For colleagues working in a similar situation, I recommend paying special attention to the topic of virtualization.

Despite what is often assumed, virtualization is not an expensive process compared to its benefits, although depending on the scenario it can cost more than many traditional designs. To give you an idea, today over 70% of the IT budget is spent just keeping the existing environment running, while less than 30% is invested in innovation, differentiation and competitiveness. This means that almost all IT investment is dedicated simply to "putting out fires" and solving emergencies, and very little is spent on solving the underlying problem.

I have followed a very common reality in the daily life of large companies, where the IT department is so overwhelmed that it cannot find the time to step back and think. In several of them, we see two completely different scenarios: before and after virtualization and cloud computing. In the first case, what we see is a drastic bottleneck, with resources stretched to the limit. In the second, a scene of tranquility, with management, security and scalability assured.

Therefore, consider what virtualization proposes and discover what it can do for your department and, by extension, for your company.

Within this reasoning, we have two maxims. The first: "Rethink IT." The second: "Reinvent the business."

The big challenge for organizations is precisely this: to rethink. What does it take to transform technicians into consultants?

Increase efficiency and security

As the infrastructure grows, so does the complexity of managing the environment. It is common to see data center servers dedicated to a single application, because best practice calls for each service to have a dedicated server. That guideline is still valid, since it is without doubt the best way to avoid conflicts between applications, performance problems and so on. Yet environments like this are becoming increasingly wasteful, as processing capacity and memory are increasingly underutilized. On average, only about 15% of a server's processing power is consumed by its application; in other words, over 80% of the processing power and memory goes unused.

Can you picture the situation? On one side we have servers sitting virtually unused while others need more resources; and as applications get ever lighter, the hardware they run on gets ever more powerful.

Another point that needs careful consideration is the resilience of the environment. Imagine a database server with a failed disk. How much difficulty would that cause your company today? Consider the time your business needs to quote, purchase, receive, replace and reconfigure the failed item, and what happens to operations during all that time.

Many companies are based in cities or regions far from major centers, which makes this scenario even harder to resolve quickly.

With virtualization this changes: we leave the traditional scenario, where we have many servers, each hosting its own operating system and applications, and move to a more modern and efficient one.

In the image below, we can see the process of migrating from a physical environment with multiple servers to a virtual environment, where fewer physical servers host the virtual servers.

[Image: migration from multiple physical servers to a virtualized environment]

With this technology, the previously underutilized servers for different applications and services are consolidated onto the same physical hardware, sharing CPU, memory, disk and network resources. Average utilization of this equipment can then reach 85%. Moreover, fewer physical servers means less spending on parts, memory and processors, less power and cooling to purchase, and fewer people needed to manage the infrastructure.
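The consolidation arithmetic behind those figures is simple enough to sketch in Python; the server count below is hypothetical, and the utilization percentages are the article's averages rather than measurements:

# Rough consolidation estimate using the utilization figures quoted above.
physical_servers = 20          # hypothetical pre-virtualization server count
avg_utilization_before = 0.15  # ~15% of each server actually used (article's average)
target_utilization = 0.85      # ~85% utilization achievable after consolidation

# Express the total work in "fully used server" units, then re-pack it onto busier hosts.
workload = physical_servers * avg_utilization_before
hosts_needed = -(-workload // target_utilization)   # ceiling division
print(f"{physical_servers} lightly loaded servers -> about {int(hosts_needed)} virtualization hosts")
# -> about 4 hosts, before adding spare capacity for failover.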


At this point you may ask: but what about availability? If I now have multiple servers running simultaneously on a single physical server, am I at the mercy of that server? What if the equipment fails?

The new thinking is not only about the technology but about how to implement it in the best way possible. VMware, the global leader in virtualization and cloud computing, works with cluster technology that enables and ensures high availability of your servers. Basically, if you have two or more hosts working together and a piece of equipment fails, VMware identifies the fault and automatically restarts all of its services on another host. This is automatic, without IT staff intervention.

When a physical failure is simulated at runtime to test the high availability of the environment, the response time is fairly quick. On average, each virtual server can be restarted within 10 seconds, 30 seconds or up to 2 minutes of the previous one. In some scenarios, the entire operating environment can be back up in about 5 minutes.

Getting new services ready quickly

In a virtualized environment, making new services available becomes a quick and easy task, since resources are managed by the virtualization tool and are not tied to a single physical machine. You can provision a virtual server with only the resources it needs, and therefore avoid waste. On the other hand, if demand is increasing rapidly, you can increase the amount of memory allocated to that server from one day to the next. The same reasoning applies to storage and processing.

Remember that you are limited by the amount of hardware present in the cluster: you can only increase the memory of a virtual server if that resource is available in the physical environment. This puts an end to underutilized servers, as you begin to manage the environment intelligently and dynamically, ensuring greater stability.

Integrating resources through the cloud

Cloud computing is a reality, and there is no cloud without virtualization. VMware provides a tool called vCloud, with which it is possible to run a private cloud on top of your virtual infrastructure, all managed with a single tool.

Reinventing the Business

After rethinking comes the time to change, and to reap the rewards of having an optimized IT organization. When we run a well-structured project covering high availability, security, capacity growth and technology, everything becomes much easier. Among the benefits, we can mention the following:

Respond quickly to business expansion

When working in a virtualized environment, you can meet the demand for new services as the business requires them. This is possible with VMware because a new server can be configured in a few clicks; in about five minutes you have a new server ready to use. This has become crucial, since the lead time to start a new project keeps shrinking.

Increase focus on strategic activities

With the environment under control, management is simple and it becomes easier to focus on the business. You have almost all the information at hand, operational work shrinks, and IT can start thinking in business terms, which is what transforms a technician into a consultant. The result is a team fully focused on technology and strategic decisions, rather than a team of firefighters dedicated to putting out fires.

Aligning IT with the company's decision making

Virtualization gives IT staff metrics, reports and analysis. With these reports in hand, they have a professional tool that presents the reality of their environment in fairly simple terms. Often, this information supports a negotiation with management and, therefore, approval of the budget to purchase new equipment.

Well folks, that's all. I tried not to write too much, but it's hard to cover something this important in fewer lines. I promise that future articles will discuss VMware and how it works in a little more detail.

 

Cloud Operating System - what is it really?

A recent article published on Linux.org, “Are Cloud Operating Systems the Next Big Thing”, suggests that a Cloud Operating System should simplify the application stack. The idea is that the language runtime executes directly on the hypervisor, without an operating system kernel.

Other approaches for cloud operating systems are focussed on optimising Operating System distributions for the cloud with automation in mind. The concepts of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) remain in the realm of conventional computing paradigms. 

None of these approaches address the core benefits of the cloud. The cloud is a pool of resources, not just another “single” computer. When we think of a computer, it has a processor, persistent storage and memory. A conventional operating system exposes compute resources based on these physical limitations of a single computer. 

There are numerous strategies to create the illusion of a larger compute platform, such as load balancing to a cluster of compute nodes. Load balancing is most commonly performed at a network level with applications or operating systems having limited exposure of the overall compute platform. This means an application cannot determine the available compute resources and scale the cloud accordingly.

To fully embrace the cloud concept, a platform is required that can automatically scale application components with additional cloud compute resources. Amazon and Google both have solutions that provide some of these capabilities; however, internal Enterprise solutions are somewhat limited. Many organisations embrace the benefits of a hosted cloud within the mega data centres around the world, but many companies have a requirement to host applications internally.

As network speeds increase the feasibility of a real “Cloud Operating System” becomes a reality. This is where an application can start a thread that executes not on a separate processor core, but executes somewhere within the cloud. 

A complete paradigm shift is required to comprehend the possibilities of an Operating System providing distributed parallel processing. Virtualisation takes this new cloud paradigm to a different level where the abstraction of the hardware using a virtualisation layer and a platform operating system presents compute resources to a Cloud Operating System.

In the same way that a conventional operating system determines which CPU core is the most appropriate to execute a specific process or thread, a cloud operating system should identify which instance of the cloud execution component is most appropriate to execute a task. 

A cloud operating system with multiple execute instances on numerous hosts can schedule tasks based on the available resources of each execute instance. Even with task scheduling abstracted to this higher layer, the underlying operating system is still required to optimise performance using techniques such as Symmetric Multiprocessing (SMP), processor affinity and thread priorities.
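As a purely conceptual illustration of that scheduling idea, the Python sketch below places each task on whichever execute instance currently has the most free capacity; the ExecuteInstance class, its fields and the slot counts are all invented for the example:

from dataclasses import dataclass

@dataclass
class ExecuteInstance:
    """One cloud execution component on some host (fields are illustrative)."""
    host: str
    total_slots: int
    used_slots: int

    @property
    def free_slots(self) -> int:
        return self.total_slots - self.used_slots

def schedule(task_name: str, instances: list[ExecuteInstance]) -> ExecuteInstance:
    """Mimic a conventional scheduler, but across hosts instead of CPU cores:
    place the task on the execute instance with the most available resources."""
    target = max(instances, key=lambda inst: inst.free_slots)
    target.used_slots += 1
    print(f"{task_name} -> {target.host}")
    return target

cloud = [ExecuteInstance("node-a", 8, 6), ExecuteInstance("node-b", 8, 2)]
for task in ("render", "index", "report"):
    schedule(task, cloud)   # each task lands on whichever node is least loaded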

The application developer has for many years been abstracted from the hardware with development environments such as C#, Java and even PHP. Operating systems have not adapted to the Cloud concept of providing compute resources beyond a single computer. 

The most comparable implementation is the route taken by application servers with solutions such as Java EJB, where lookups can occur to find providers. Automatic scalability is, however, limited with these solutions.

Hardware vendors are moving ahead by creating cloud optimised platforms. The concept is that many smaller platforms create optimal compute capacity. HP seem to be leading this sector with their Moonshot solution. The question however remains: How do you make many look like one?  

Enterprises have existing data centres where very little of the overall compute capacity is actually leveraged on an ongoing basis. When one system is busy, numerous others are idle. A cloud compute environment that can automatically scale across a collection of servers would provide real cost savings, with compute capacity added from existing infrastructure based on available resources. According to the IDC report on worldwide server shipments, the server market is in excess of $12B per quarter. The major vendors are looking for ways to differentiate their solutions and provide optimal value to customers.

By combining hardware, virtualisation and a Cloud Operating System, organisations will benefit from a reduction in the cost of providing adequate compute capacity to serve business needs.

Gideon Serfontein is a co-founder of the Bongi Cloud Operating System research project. Additional information at http://bongi.softwaremooss.com

 

The Tyranny of the Clouds

Or “How I learned to start worrying and never trust the cloud.”

The Clouderati have been derping for some time now about how we’re all going towards the public cloud and “private cloud” will soon become a distant, painful memory, much like electric generators filled the gap before power grids became the norm. They seem far too glib about that prospect, and frankly, they should know better. When the Clouderati see the inevitability of the public cloud, their minds lead to unicorns and rainbows that are sure to follow. When I think of the inevitability of the public cloud, my mind strays to “The Empire Strikes Back” and who’s going to end up as Han Solo. When the Clouderati extol the virtues of public cloud providers, they prove to be very useful idiots advancing service providers’ aims, sort of the Lando Calrissians of the cloud wars. I, on the other hand, see an empire striking back at end users and developers, taking away our hard-fought gains made from the proliferation of free/open source software. That “the empire” is doing this *with* free/open source software just makes it all the more painful an irony to bear.

I wrote previously that It Was Never About Innovation, and that article was set up to lead to this one, which is all about the cloud. I can still recall talking to Nicholas Carr about his new book at the time, “The Big Switch“, all about how we were heading towards a future of utility computing, and what that would portend. Nicholas saw the same trends the Clouderati did, except a few years earlier, and came away with a much different impression. Where the Clouderati are bowled over by Technology! and Innovation!, Nicholas saw a harbinger of potential harm and warned of a potential economic calamity as a result. While I also see a potential calamity, it has less to do with economic stagnation and more to do with the loss of both freedom and equality.

The virtuous cycle I mentioned in the previous article does not exist when it comes to abstracting software over a network, into the cloud, and away from the end user and developer. In the world of cloud computing, there is no level playing field – at least, not at the moment. Customers are at the mercy of service providers and operators, and there are no “four freedoms” to fall back on.

When several of us co-founded the Open Cloud Initiative (OCI), it was with the intent, as Simon Phipps so eloquently put it, of projecting the four freedoms onto the cloud. There have been attempts to mandate additional terms in licensing that would force service providers to participate in a level playing field. See, for example, the great debates over “closing the web services loophole” as we called it then, during the process to create the successor to the GNU General Public License version 2. Unfortunately, while we didn’t yet realize it, we didn’t have the same leverage as we had when software was something that you installed and maintained on a local machine.

The Way to the Open Cloud

Many “open cloud” efforts have come and gone over the years, none of them leading to anything of substance or gaining traction where it matters. Bradley Kuhn helped drive the creation of the Affero GPL version 3, which set out to define what software distribution and conveyance mean in a web-driven world, but the rest of the world has been slow to adopt because, again, service providers have no economic incentive to do so. Where we find ourselves today is a world without a level playing field, which will, in my opinion, stifle creativity and, yes, innovation. It is this desire for “innovation” that drives the service providers to behave as they do, although as you might surmise, I do not think that word means what they think it means. As in many things, service providers want to be the arbiters of said innovation without letting those dreaded freeloaders have much of a say. Worse yet, they create services that push freeloaders into becoming part of the product – not a participant in the process that drives product direction. (I know, I know: yes, users can get together and complain or file bugs, but they cannot mandate anything over the providers)

Most surprising is that the closed cloud is aided and abetted by well-intentioned, but ultimately harmful actors. If you listen to the Clouderati, public cloud providers are the wonderful innovators in the space, along with heaping helpings of concern trolling over OpenStack’s future prospects. And when customers lose because a cloud company shuts its doors, the clouderati can’t be bothered to bring themselves to care: c’est la vie and let them eat cake. The problem is that too many of the clouderati think that Innovation! is a means to its own ends without thinking of ground rules or a “bill of rights” for the cloud. Innovation! and Technology! must rule all, and therefore the most innovative take all, and anything else is counter-productive or hindering the “free market”. This is what happens when the libertarian-minded carry prejudiced notions of what enabled open source success without understanding what made it possible: the establishment and codification of rights and freedoms. None of the Clouderati are evil, freedom-stealing, or greedy, per se, but their actions serve to enable those who are. Because they think solely in terms of Innovation! and Technology!, they set the stage for some companies to dominate the cloud space without any regard for establishing a level playing field.

Let us enumerate the essential items for open innovation:

  1. Set of ground rules by which everyone must abide, eg. the four freedoms
  2. Level playing field where every participant is a stakeholder in a collaborative effort
  3. Economic incentives for participation

These will be vigorously opposed by those who argue that establishing such a list is too restrictive for innovation to happen, because… free market! The irony is that establishing such rules enabled Open Source communities to become the engine that runs the world’s economy. Let us take each and discuss its role in creating the open cloud.

Ground Rules

We have already established the irony that the four freedoms led to the creation of software that was used as the infrastructure for creating proprietary cloud services. What if the four freedoms were tweaked for cloud services? As a reminder, here are the four freedoms:

 

  • The freedom to run the program, for any purpose (freedom 0).
  • The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to distribute copies of your modified versions to others (freedom 3).

 

If we rewrote this to apply to cloud services, how much would need to change? I made an attempt at this, and it turns out that only a couple of words need to change:

 

  • The freedom to run the program or service, for any purpose (freedom 0).
  • The freedom to study how the service works, and change it so it does your computing as you wish (freedom 1).
  • The freedom to implement and redistribute copies so you can help your neighbor (freedom 2).
  • The freedom to implement your modified versions for others (freedom 3).

 

Freedom 0 adds “or service” to denote that we’re not just talking about a single program, but a set of programs that act in concert to deliver a service.

Freedom 1 allows end users and developers to peek under the hood.

Freedom 2 adds “implement and” to remind us that the software alone is not much use – the data forms a crucial part of any service.

Freedom 3 also changes “distribute copies of” to “implement” because of the fundamental role that data plays in any service. Distributing copies of software in this case doesn’t help anyone without also adding the capability of implementing the modified service, data and all.

Establishing these rules will be met, of course, with howls of rancor from the established players in the market, as it should be.

Level Playing Field

With the establishment of the service-oriented freedoms, above, we have the foundation for a level playing field with actors from all sides having a stake in each other’s success. Each of the enumerated freedoms serves to establish a managed ecosystem, rather than a winner-take-all pillage and plunder system. This will be countered by the argument that if we hinder the development of innovative companies won’t we a.) hinder economic growth in general and b.) socialism!

In the first case, there is a very real threat from a winner-take-all system. In its formative stages, when everyone has the economic incentive to innovate (there’s that word again!), everyone wins. Companies create and disrupt each other, and everyone else wins by utilizing the creations of those companies. But there’s a well known consequence of this activity: each actor will try to build in the ability to retain customers at all costs. We have seen this happen in many markets, such as the creation of proprietary, undocumented data formats in the office productivity market. And we have seen it in the cloud, with the creation of proprietary APIs that lock in customers to a particular service offering. This, too, chokes off economic development and, eventually, innovation. At first, this lock in happens via the creation of new products and services which usually offer new features that enable customers to be more productive and agile. Over time, however, once the lock-in is established, customers find that their long-term margins are not in their favor, and moving to another platform proves too costly and time-consuming. If all vendors are equal, this may not be so bad, because vendors have an incentive to lure customers away from their existing providers, and the market becomes populated by vendors competing for customers, acting in their interest. Allow one vendor to establish a larger share than others, and this model breaks down. In a monopoly situation, the incumbent vendor has many levers to lock in their customers, making the transition cost too high to switch to another provider. In cloud computing, this winner-take-all effect is magnified by the massive economies of scale enjoyed by the incumbent providers. Thus, the customer is unable to be as innovative as they could be due to their vendor’s lock-in schemes. If you believe in unfettered Innovation! at all costs, then you must also understand the very real economic consequences of vendor lock-in. By creating a level playing field through the establishment of ground rules that ensure freedom, a sustainable and innovative market is at least feasible. Without that, an unfettered winner-take-all approach will invariably result in the loss of freedom and, consequently, agility and innovation.

Economic Incentives

This is the hard one. We have already established that open source ecosystems work because all actors have an incentive to participate, but we have not established whether the same incentives apply here. In the open source software world, developers participate because they had to, because the price of software is always dropping, and customers enjoy open source software too much to give it up for anything else. One thing that may be in our favor is the distinct lack of profits in the cloud computing space, although that changes once you include services built on cloud computing architectures.

If we focus on infrastructure as a service (IaaS) and platform as a service (PaaS), the primary gateways to creating cloud-based services, then the margins and profits are quite low. This market is, by its nature, open to competition because the race is on to lure as many developers and customers as possible to the respective platform offerings. The danger, however, is that one particular service provider may be able to offer proprietary services that give it leverage over the others, establishing the lock-in levers needed to pound the competition into oblivion.

In contrast to basic infrastructure, the profit margins of proprietary products built on top of cloud infrastructure have been growing for some time, which incentivizes the IaaS and PaaS vendors to keep stacking proprietary services on top of their basic infrastructure. This results in a situation where increasing numbers of people and businesses have happily donated their most important business processes and workflows to these service providers. If any of them grow unhappy with the service, they cannot easily switch, because no competitor would have access to the same data or implementation of that service. In this case, not only is there a high cost associated with moving to another service, there is also the distinct loss of utility (and revenue) that the customer would experience. There is a cost that comes from entrusting so much of your business to single points of failure with no known mechanism for migrating to a competitor.

In this model, there is no incentive for service providers to voluntarily open up their data or services to other service providers. There is, however, an incentive for competing service providers to be more open with their products. One possible solution could be to create an Open Cloud certification that would allow services that abide by the four freedoms in the cloud to differentiate themselves from the rest of the pack. If enough service providers signed on, it would lead to a network effect adding pressure to those providers who don’t abide by the four freedoms. This is similar to the model established by the Free Software Foundation and, although the GNU people would be loath to admit it, the Open Source Initiative. The OCI’s goal was to ultimately create this, but we have not yet been able to follow through on those efforts.

Conclusion

We have a pretty good idea why open source succeeded, but we don’t know if the open cloud will follow the same path. At the moment, end users and developers have little leverage in this game. One possibility would be if end users chose, at massive scale, to use services that adhered to open cloud principles, but we are a long way away from this reality. Ultimately, in order for the open cloud to succeed, there must be economic incentives for all parties involved. Perhaps pricing demands will drive some of the lower rung service providers to adopt more open policies. Perhaps end users will flock to those service providers, starting a new virtuous cycle. We don’t yet know. What we do know is that attempts to create Innovation! will undoubtedly lead to a stacked deck and a lack of leverage for those who rely on these services.

If we are to resolve this problem, it can’t be about innovation for innovation’s sake – it must be, once again, about freedom.

This article originally appeared on the Gluster Community blog.

 

On the Use of Low-Thread, High-Speed “Gaming Computers” to Solve Engineering Simulations

   Many of us in the Linux community work only with software that is FOSS, which stands for free and open source software. This is software that is not only open source but is available without licensing fees. There are many outstanding FOSS products on the market today, from the Firefox browser to most distributions of Linux to the OpenOffice suite. However, there are times when FOSS is not an option; a good example is my line of work supporting engineering software, especially CAD tools and simulators. This software is not only costly but also very restrictive: each aspect of the software is charged for. For example, many of the simulators can run multithreaded, with one piece of software running on up to 16 threads for a single simulation. More threads require more tokens, and we pay per token available. This puts us in a situation where we wish to maximize what we accomplish with as few threads as we can.

   If, for example, an engineer needs a simulation to finish in order to prove a design concept and it will take 6 hours to simulate on 1 thread, he or she will want another token in order to use more threads. Using one token may buy a reduction of 3 hours in simulation time, but the cost is that the tokens used for that simulation cannot be used by another engineer. The simple solution would be to keep buying more and more tokens until every engineer has enough to run on the maximum number of threads at all times. If there are 5 engineers who run simulations that can each run on 16 threads for the cost of 5 tokens, then we will need 25 tokens. Of course, the simple solution rarely works: the cost of 25 tokens is so high that it could easily bankrupt a medium-sized company.

   Another solution would be to use fewer tokens but implement advanced queuing software. This has the advantage that engineers can submit tasks and the servers running the simulations will run at all times (we hope), using the tokens we do have to the utmost. This strategy works well when deadlines are far away, but as they get close the fight for slots can grow.

  Since the limiting resource here is the number of threads, we tried a different approach. As we are paying per thread we run, we should try to run each thread as fast as possible, increasing per-thread performance rather than overall throughput. To justify our reasoning we created benchmarks for our tools, comparing the time it took to run a simulation against the number of threads we employed for it.
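The benchmark itself can be as simple as re-running one representative simulation at several thread counts and timing it. A Python sketch of that loop follows; the simulator command name, its -threads flag and the testbench file are placeholders, since every licensed tool has its own CLI:

import subprocess
import time

# Hypothetical benchmark: run the same simulation at different thread counts
# and record the wall-clock time. "simulator", "-threads" and "testbench.cfg"
# stand in for whatever the licensed tool actually provides.
THREAD_COUNTS = [1, 2, 4, 8, 16]
results = {}

for n in THREAD_COUNTS:
    start = time.monotonic()
    subprocess.run(["simulator", "-threads", str(n), "testbench.cfg"], check=True)
    results[n] = time.monotonic() - start

baseline = results[1]
for n, elapsed in results.items():
    print(f"{n:2d} threads: {elapsed:8.1f} s  (speedup x{baseline / elapsed:.2f})")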

  The conclusion was: independent of the software and the type of simulation we ran, performance increased rapidly up to about 4.5 threads and then leveled off. A surprising result, given that the tools we used came out in different years and were produced by different vendors.

   Given this information, we concluded that if we ran 4 threads 25% faster on machine A (by overclocking), we could achieve better results than on machine B with the same architecture. This meant that for the near-trivial price (compared to a server’s cost or additional tokens) of a modified desktop computer, we could outperform a server running with the maximum number of tokens we could purchase.

Our new system specifications:

Newegg #           Price    Item name                       Quantity
N82E16819115095    349.99   Intel Core i7 (1155 socket)     1
N82E16813131837    139.99   Asus motherboard                1
N82E16817171048    149.99   Cooler Master power supply      1
N82E16820231611    139.99   G.Skill DDR3 RAM                2
N82E16835103181    84.99    All-in-one liquid CPU cooler    1
N82E16811119213    164.99   Cooler Master PC case           1
N82E16833106126    131.99   Ethernet server adapter         1
N82E16820167115    204.99   SSD 180GB                       1
Amazon order       349      i7                              1

   The total cost was approximately $1,200 per unit after rebates. Assembly took about 3 hours. Overclocking was achieved at a stable 4.7 GHz, with a maximum recorded temperature of 70 °C. The operating system is CentOS with the full desktop installed. The NICs have two connections link-aggregated to our servers.

  To test the overclock we wrote a simple infinite-loop floating point operation in Perl and launched 8 instances of it while monitoring the results using a FOSS program called i7z. The hard drive only exists to provide a boot drive; all other functions are performed via SSH and NFS exports. The units sit headless in our server room. We estimate that we have reduced overall simulation time across the company by 50% with only two units.
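The stress test described was a Perl one-liner; a roughly equivalent Python sketch (one busy process per loaded thread, stopped with Ctrl-C, while i7z is watched in another terminal) would look like this:

import multiprocessing

def burn():
    # Infinite floating point loop to keep one core fully busy.
    x = 0.0001
    while True:
        x = (x * 1.0000001) % 1e6

if __name__ == "__main__":
    # Launch one worker per thread to load (the article used 8 instances).
    workers = [multiprocessing.Process(target=burn, daemon=True) for _ in range(8)]
    for w in workers:
        w.start()
    try:
        for w in workers:
            w.join()          # run until interrupted with Ctrl-C
    except KeyboardInterrupt:
        pass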

  The analogy we give is one of transportation. Our servers function like buses: they can move a great many people at a time, which is great, but buses are slow. Now we have constructed high-speed sports cars; these cars can only move a few people at a time, but they move them much faster.


Isiah Schwartz

Teledyne Lecroy

 

 

Meet the HD Camera Cape for the BBB: a $49.99 Low-Cost Camera Cape

Adding a camera cape to the latest BeagleBone Black, RadiumBoards has increased its “Cape”-ability to provide a portable camera solution. Thousands of designers, makers, hobbyists and engineers are adopting BeagleBone Black and becoming huge fans because of its unique functionality as a pocket-sized, expandable Linux computer that can connect to the Internet. To support them in their work, we have designed this HD Camera Cape with the following features and benefits:

    • 720p HD video at 30fps
    • Superior low-light performance
    • Ultra-low-power
    • Progressive scan with Electronic Rolling Shutter
    • No software effort required for OEM
    • Proven, off-the-shelf driver available for quick and easy software integration
    • Minimize development time for system designers
    • Easily Customized
    • Simple Design with Low Cost
Priced at just $49.99, this game-changing cape for the latest credit-card-sized BBB can help developers differentiate their products and get to market faster.

To learn more about the new and exciting cape, check out www.radiumboards.com

    Dick MacInnis, Creator of DreamStudio, Launches Celeum Embedded Linux Devices

    Celeum offers four unique embedded devices based on Linux:

    1. The CeleumPC, which dual boots Android and DreamStudio
    2. The CeleumTV, which runs Android with a custom XBMC setup
    3. The Celeum Cloud Server, which runs Ubuntu Server with ownCloud for personal cloud storage, and
    4. The Celeum Domain Server, a drop-in replacement for Windows Domain Controllers, powered by Ubuntu Server and a custom fork of Zentyal Small Business Server.

    The Celeum TV is currently available only in the Saskatoon, Canada area, while the other three devices are currently in the crowdfunding phase and can be preordered by making a donation to the Celeum Indiegogo campaign.

     

    Leveraging Open Source and Avoiding Risks in Small Tech Companies

    Today’s software development is geared more towards building upon previous work and less towards reinventing content from scratch. Resourceful software development organizations and developers use a combination of previously created code, commercial software and open source software (OSS), and their own creative content to produce the desired software product or functionality. Outsourced code can also be used, which in itself can contain any combination of the above.

    There are many good reasons for using off-the-shelf and especially open source software, the greatest being its ability to speed up development and drive down costs without sacrificing quality. Almost all software groups knowingly, and in many cases unknowingly, use open source software to their advantage. Code reuse is possibly the biggest accelerator of innovation, as long as OSS is adopted and managed in a controlled fashion.

    In today’s world of open-sourced, out-sourced, easily-searched and easily-copied software it is difficult for companies to know what is in their code. Anytime a product containing software changes hands there is a need to understand its composition, its pedigree, its ownership, and any open source licenses or obligations that restrict the rules around its use by new owners.

    Given developers’ focus on the technical aspects of their work and emphasis on innovation, obligations associated with use of third party components can be easily compromised. Ideally companies track open source and third party code throughout the development lifecycle. If that is not the case then, at the very least, they should know what is in their code before engaging in a transaction that includes a software component.

    Examples of transactions involving software are: a launch of a product into the market, mergers & acquisitions (M&A) of companies with software development operations, and technology transfer between organizations whether they are commercial, academic or public. Any company that produces software as part of a software supply chain must be aware of what is in their code base.

     

    Impact of Code Uncertainties

    Any uncertainty around software ownership or license compliance can deter downstream users, reduce the ability to create partnerships, and create litigation risk for the company and its customers. For smaller companies, intellectual property (IP) uncertainties can also delay or otherwise threaten the closing of funding deals, affect product and company value, and negatively impact M&A activities.

    IP uncertainties can affect the competitiveness of small technology companies through indemnity demands from their clients, so technology companies need to understand the obligations associated with the software they are acquiring. Any uncertainty around third party content in code can also stretch sales cycles. A lack of internal resources for identifying, tracking and maintaining open source and other third party code in a project impacts smaller companies even more.

    Along with licensing issues and IP uncertainties, organizations that use open source also need to be aware of security vulnerabilities. A number of public databases, such as the US National Vulnerability Database (NVD) or Carnegie Mellon University's Computer Emergency Response Team (CERT) database, list known vulnerabilities associated with a large number of software packages. Without accurate knowledge of what exists in the code base it is not possible to consult these databases. Aspects such as known deficiencies, vulnerabilities, known security risks, and code pedigree all assume the existence of a software Bill of Materials (BOM). In a number of jurisdictions, another important aspect to consider before a software transaction takes place is whether the code includes encryption content or other content subject to export control – this is important to companies that do business internationally.
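    Once a BOM exists, consulting such databases can be scripted. The Python sketch below queries the public NVD REST API (version 2) by keyword for each package in a hypothetical BOM; the endpoint, parameter and response field follow NVD's published interface but should be treated as assumptions to verify, and keyword search is only a coarse first pass compared to proper CPE matching:

import json
import urllib.parse
import urllib.request

# Hypothetical BOM: package names an audit identified in the code base.
bom = ["openssl", "zlib", "busybox"]

# NVD REST API v2 keyword search (verify endpoint and fields against NVD docs).
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch="

for package in bom:
    with urllib.request.urlopen(NVD_URL + urllib.parse.quote(package)) as resp:
        data = json.load(resp)
    print(f"{package}: {data.get('totalResults', 'unknown')} CVE entries reported")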

    Solutions

    The benefits of OSS usage can be realized and the risks managed at the same time. Ideally, a company using OSS should have a process in place to ensure that OSS is properly adopted and managed throughout the development cycle. Having such a process in place allows organizations to detect any licensing or IP uncertainties at the earliest possible stage during development, which reduces the time, effort, and cost associated with correcting the problem later down the road.

    If a managed OSS adoption process spanning all stages of a development life cycle is not in place, there are other options available to smaller companies. Organizations are encouraged to audit their code base, or software in specific projects, regularly. Some may decide to examine third party contents and the associated obligations just before a product is launched, or in anticipation of an M&A.

     

    Internal Audits

    The key here is having an accurate view of all third-party content, including OSS, within the company. One option is to carry out an internal audit of the company code base for the presence of outside content and its licensing and other obligations. Unfortunately, manually auditing a typical project of 1,000-5,000 files is a resource- and time-consuming process. Automated tools can speed up the discovery stage considerably. For organizations that do not have the time, resources or expertise to carry out an assessment on their own, an external audit would be the fastest, most accurate and most cost-effective option.

     

    External Audits

    External audit groups ideally deploy experts on open source and software licensing that use automated tools, resulting in accurate assessment and fast turnaround. A large audit project requires significant interactions between the audit agency and the company personnel, typically representatives in the R&D group, resident legal or licensing office, and product managers. A large audit project requires an understanding of the company’s outsourcing and open source adoption history, knowledge of the code portfolio in order to break it down into meaningful smaller sub projects, test runs, and consistent interactions between the audit team and the company representatives.

    Smaller audit projects, however, can be streamlined and a number of overhead activities can be eliminated, resulting in a time- and cost-efficient solution without compromising detail or accuracy. An example would be a streamlined, machine-assisted software assessment service. The automated scanning operation, through the use of automated open source management tools, can provide a first-level report in hours. Expert review and verification of the machine-generated reports and final consolidation of the results into an executive report can take another few days depending on the size of the project.

    The executive report delivered by an external audit agency is a high-level view of all third party content, including OSS, and the attributes associated with it. The audit report describes the software code audit environment, the process used, and the major findings, drawing attention to specific software packages, or even individual software files and their associated copyrights and licenses. The audit report will highlight third party code snippets that were “cut & pasted” into proprietary files and how that could affect the distribution or the commercial model. This is important for certain licenses such as those in the GPL (GNU General Public License) family of OSS licenses, depending on how the public domain code or code snippet is utilized.

    The report significantly reduces the discovery and analysis effort required from the company being audited, allowing them to focus on making relevant decisions based on the knowledge of their code base.

    Conclusion

    Third party code, including open source and commercially available software packages, can accelerate development, reduce time to market and decrease development costs. These advantages can be obtained without compromising quality, security or IP ownership. Especially for small companies, any uncertainty around code content and the obligations associated with third party code can impact the ability of an organization to attract customers. Ambiguity around third party code within a product stretches sales cycles, reduces the value of products and impacts company valuations. For small organizations, an external audit of the code base can quickly, accurately and economically establish the composition of the software and its associated obligations.

     

    Corks? Or Screw Tops? Why the Experience Matters

    I've noticed a disturbing trend amongst a few of the high-quality wineries in my state. They have abandoned the cork to close their high-end wine bottles and turned to screw caps. This is good news for people who struggle with how to get a cork out of a wine bottle.


    Chatting with Peter Tait of Lucid Imagination

    Back in January I had the opportunity to test drive LucidWorks Enterprise, a search engine for internal networks. The cross-platform search engine was flexible, stable, easy to install and came backed by a friendly support staff. In short, it was a good experience which demonstrated how useful (and straightforward) running one's own search engine can be.
