The global bioinformatics market is estimated to reach a size of USD 9.1 billion by 2018. The market is forecast to record double-digit growth, with the highest revenue contribution coming from the bioinformatics platforms segment.
The global bioinformatics market, estimated at USD 2.3 billion in 2012, is forecast to reach USD 9.1 billion in 2018, at a CAGR of 25.4% from 2012 to 2018. Market growth is driven by a rise in applications across various industries. The key contributions to market demand come from fields such as agricultural biotechnology, pharmaceutical research and development, medical and clinical diagnostics, and other life-sciences related industries.
The bioinformatics platforms segment holds the largest market share and is estimated to account for nearly 50% of market revenue. The services segment currently holds a relatively smaller share; however, it is expected to increase considerably over the forecast period. The platforms segment is also the fastest growing, and is expected to contribute 54% of total market growth during the same period.
Demand from genomics and wide application in the medical and biological information sectors are driving demand for bioinformatics platforms and services in the global market. Research outsourcing by pharmaceutical giants in fields involving bioinformatics content is another significant driver. In an effort to reduce the time and cost of R&D activities involved in developing novel drugs and new applications for existing drugs, these companies are looking to outsource bioinformatics knowledge and management tools, platforms, and services.
Among the regional markets, North America holds the largest share; however, it is forecast to be succeeded by Europe as the leading market share holder in 2018, due to the fast growth shown by major European markets such as Germany and the U.K. Europe is forecast to be the fastest growing region, with growth mainly driven by rising government support for R&D activities. The bioinformatics services market in Europe and North America is well developed and organized, whereas it is still in an initial growth stage in the emerging markets of the Asia Pacific region.
Browse the full report with TOC at http://www.transparencymarketresearch.com/bioinformatics-market.html.
The report analyzes global bioinformatics market growth across segments such as knowledge management tools, platforms, and services. The in-depth analysis covers sub-segments up to three levels, with cross-sectional analysis of geographical and regional market sizes and forecasts. Cross-sectional analysis of the global bioinformatics market by application further indicates market growth potential across various end-user industries. The segments considered on the basis of application are molecular medicine, gene therapy, drug development and preventive medicine.
Transparency Market Research is a global market intelligence company providing global business information reports and services. Our exclusive blend of quantitative forecasting and trend analysis provides forward-looking insight for thousands of decision makers. Our highly experienced team of analysts, researchers, and consultants uses proprietary data sources and various tools and techniques to gather and analyze information.
Our data repository is continuously updated and revised by a team of research experts, so that it always reflects the latest trends and information. With a broad research and analysis capability, Transparency Market Research employs rigorous primary and secondary research techniques in developing distinctive data sets and research material for business reports.
Large-scale adoption of digital signal processing in consumer electronics has led to increased utilization of DSP chips, which have penetrated a number of applications that use advanced digital signal processing. Electronic design automation (EDA) vendors, foundries, fabless and fab manufacturers, intellectual property (IP) vendors, and assembly, testing and packaging vendors are some of the key industry players in this market. The IP market can be classified into standard non-customizable, customizable, application specific integrated circuit (ASIC), and programmable (FPGA & PLD) DSP core IP.
The design architecture market can be segmented into product design, IC design and DSP system-on-chip (SoC) design. Product segment markets include general purpose, application specific and programmable DSP ICs. IC design segment markets are the standard non-embedded, embedded, single-core, and multi-core DSP processor markets.
Application sectors include computers and computer peripherals, wireless communication, surveillance, VoIP, consumer electronics, automotive, industrial, medical, radar communication applications and nanotechnology. The increased use of DSPs in high-demand consumer electronic equipment such as set-top boxes, digital cameras and printers is driving the growth of this market. The application of DSPs in the automobile industry has also increased: automobile equipment manufacturers use DSPs in vehicle parts, and several location-based service providers use advanced digital signal processors in vehicle surveillance equipment. North America and Asia Pacific are the largest manufacturers and consumers in the digital signal processing market. The leading nations in the DSP market are the U.S., China, Japan, Taiwan and Korea. Asia Pacific is now the leading destination for electronics manufacturers due to the availability of a skilled workforce and low production costs.
Some of the market players in this industry are Analog Devices Inc., Altera Corp., Broadcom Corp., Freescale Semiconductor Ltd., Ceva Inc., Infineon Technologies AG, Marvell Technology Group Ltd., LSI Corp., MIPS Technologies Inc, Qualcomm Inc., NXP Semiconductors N.V., Renesas Electronics Corp., ST Microelectronics N.V., Samsung Electronics Co. Ltd., Toshiba Corp., Texas Instruments Inc. and Xilinx Inc.
A simple idea to get independent, high-quality video for online interviews would be to cheat a little. This only works if you're not required to publish the video in real time.
The idea is to have each device make its own recording. For example, if the person being interviewed has a high-quality webcam but a poor internet connection, the software could store a high-quality recording of their side of the session locally. In real time, the interview would only be of whatever quality the connection can handle, but the stored recording can be transmitted after the interview is done and combined with the other videos to restore full quality.
The broadcast switchers market was worth USD 1,200 million in 2012 and is expected to reach USD 1,908 million by 2019, growing at a CAGR of 6.9% from 2013 to 2019. North America was the largest market for broadcast switchers in 2012. Growth in this region is expected to be driven by replacement of deployed switchers over the forecast period. In addition, the increasing number of HD channels is expected to drive the market in the near future.
The broadcast switchers market is driven by various factors, including the transition from analog to digital broadcasting, increasing adoption of HD (High Definition) worldwide, the rising number of digital channels and an increasing focus on production automation. Enforcement of government regulations regarding digitalization is also expected to drive the market. However, lack of standardization in content distribution and the high initial price of broadcasting equipment are some of the factors inhibiting the growth of this market.
Among all types, the routing switcher segment was the largest, accounting for 47.4% of the market share in 2012. However, the production switcher segment is expected to witness strong growth during the forecast period.
Among end-use segments, studio production held the largest market share in 2012, accounting for 24.7% of the global market. It is expected to maintain its leading position throughout the forecast period owing to increasing awareness in emerging regions, including Asia Pacific and RoW. Sports broadcasting is the second largest end-use segment and is expected to show strong growth during the forecast period.
Geographically, North America was the largest broadcast switcher market, accounting for 40.7% in 2012, owing to increased adoption of low end routing switchers deployed in production trucks, which generate less heat, make less noise and consume less power. In addition, growth is driven by increased usage of production switchers across non-broadcast segments such as places of worship, corporate conferences and educational institutes.
The broadcast switchers market is segmented by switcher price into high end, mid end and low end segments. The market is dominated by a few players in each of these segments. Most switcher manufacturers compete by developing state-of-the-art technology products to gain competitive advantage. The factors determining the different categories of switchers (high end, mid end and low end) include the formats, size and configuration of the switchers. The global high end broadcast switchers market is dominated by Sony Electronics Inc., Snell Group, Grass Valley and Panasonic Corporation, among others. Broadcast Pix and Ross Video, among others, lead the mid end switchers segment, and Blackmagic Design, For-A Company, Miranda Technologies, Evertz Corporation, and NewTek Inc. dominate the low end switchers segment.
Broadcast switchers market analysis, by type
- Production switchers
- High end production switchers
- Mid end production switchers
- Low end production switchers
- Routing switchers
- High end routing switchers
- Mid end routing switchers
- Low end routing switchers
- Master control switchers
- High end master control switchers
- Mid end master control switchers
- Low end master control switchers
Broadcast switchers market analysis, by end user
- Sports broadcasting
- Studio production
- Production trucks
- News production
- Post production
- Others (Corporate conferences, Places of worship, educational institutes and Playouts)
In addition, the report provides cross-sectional analysis of the market with respect to the following geographical segments:
- North America
- RoW (Rest of the World)
Browse the full report with TOC at http://www.transparencymarketresearch.com/broadcast-switchers-market.html
The worldwide market for physical security was valued at USD 48.05 billion in 2012 and is projected to reach USD 125.03 billion by 2019, growing at a CAGR of 14.9% from 2013 to 2019. Some of the major factors driving demand for physical security include rising global security concerns and increasing budget allocations for physical security by governments to prevent terrorism and crime. In addition, government regulations in different countries demanding increased security levels are driving the adoption of physical security in several end-user sectors, including industrial and business organizations. Continued investment in infrastructure worldwide, especially in the Asia Pacific region, is expected to emerge as a significant growth factor for the physical security market in coming years.
The primary concern in physical security is the protection and prevention in order to serve security interests of people, equipment, and property. The increase in incidences of terror activities and crime has resulted in escalated demand for physical security solutions. It is expected that internet protocol (IP) video, sophisticated access control systems and biometric solutions would drive the demand for physical security solutions. Further, the emerging trend of convergence of logical and physical security and increased demand for integrated physical security solutions are expected to boost the growth of physical security market.
The different components of physical security include hardware, software and services. The market for physical security hardware has been further segmented into intrusion detection and prevention systems, access control systems and others (fire and life safety, visitor management and backup power). Among intrusion detection and prevention hardware products, video surveillance was the largest market and held around 72% share in 2012 and is expected to be the fastest growing segment throughout the forecast period. In access control segment, biometric access control held the largest market share of around 38% of total access control market in 2012. Physical security software market has been segmented into physical security information management (PSIM) and management, analysis and modeling software. PSIM is fast gaining market demand, driven by declining costs, increased sophistication and increasing awareness among end-users. Physical security services market has been segmented into video surveillance as a service (VSaaS), remote management services, technical support, public safety answering point (PSAP), security consulting, public alert, customer information and warning systems and others (data source, hosted access control, managed access control, alert notification, mobile security management). Among the services segments, VSaaS is expected to be the fastest growing market driven by benefits such as cost savings, simplicity, and remote access.
End-user segments of physical security include transportation and logistics, government and public sector, control centers, utilities/energy markets, fossil generation facilities, oil and gas facilities, chemical facilities, industrial (manufacturing sector excluding chemical facilities), retail, business organizations, hospitality and casinos, and others (stadiums, educational and religious infrastructure, healthcare organizations). The transportation industry, which includes aviation, rail, ports, road and city traffic and new start projects (including light rail, rapid rail, metro rail, commuter rail, bus rapid transit, and ferries) within the transportation and logistics sector, was the largest end-user of physical security in 2012. North America emerged as the largest regional market for physical security in 2012. In view of high terrorism incidences, the region has been increasing security measures across all end-use verticals. Moreover, governments in North America have significantly increased regulatory measures for the adoption of physical security. Asia Pacific is one of the fastest emerging markets for physical security, growing at a CAGR of around 17%, owing to a significant push from governments and the police to enhance security in view of increasing crime and terror in the region.
The market for physical security was highly fragmented in 2012 and no single player was dominant; however, Honeywell Security Group emerged as the market leader, accounting for around 5% share in 2012. Honeywell Security group was followed by Bosch Security Systems Inc, Morpho SA (Safran), Hikvision Digital Technology, Assa Abloy AB, Axis Communication AB, Pelco Inc, Tyco International Ltd, NICE Systems Ltd, and others.
Source : http://www.transparencymarketresearch.com/physical-security-market.html
Docker’s new container technology offers a smarter, more sophisticated solution for server virtualization today. The latest version of Docker, version 0.8, was announced a couple of days ago.
Docker 0.8 focuses more on quality than on features, with the objective of targeting the requirements of enterprises.
According to the software’s current development team, many companies that use the software rely on it for highly critical functions. As a result, the aim of the most recent release has been to provide such businesses with top-quality tools for improving efficiency and performance.
What Is Docker?
Docker is an open source virtualization technology for Linux that is essentially a modern extension of Linux Containers (LXC). The software is still quite a young initiative, having been launched for the first time in March 2013. Founder Solomon Hykes created Docker as an internal project for dotCloud, a PaaS enterprise.
The response to the application was highly impressive and the company soon reinvented itself as Docker Inc, going on to obtain $15 million in investments from Greylock Partners. Docker Inc. continued to run their original PaaS solutions, but the focus moved to the Docker platform. Since its initiation, over 400,000 users have downloaded the virtualization software.
Google (along with a couple of the most popular cloud computing providers out there) is offering the software as part of its Google Compute Engine, though there is still nothing from major Australian companies (yes, I’m looking at you, Macquarie).
Red Hat has also included it in its OpenShift PaaS, as well as in the beta version of the upcoming Red Hat Enterprise Linux release. The benefits of containers are receiving greater attention from customers, who find that they can reduce overheads with lightweight apps and scale across cloud and physical architectures.
Containers Over Full Virtual Machines
For those unfamiliar with Linux containers, they are, at a basic level, a containment feature of the Linux kernel. Containers can hold applications and processes like a virtual machine does, but without virtualizing an entire operating system. In such a scenario the application developer does not have to worry about writing to the operating system. This allows greater security, efficiency and portability without sacrificing performance.
Virtualization through containers has been available in the Linux source code for many years, and the concept is even older: Solaris Zones, pioneering software created by Sun Microsystems over 10 years ago, implemented it on Solaris.
Docker takes the concept of containers a little further and modernizes it. A container does not come with a full OS, unlike a full virtual machine; instead it shares the host OS, which is Linux. The software offers a simpler deployment process for the user and tailors virtualization technology to the requirements of PaaS (platform-as-a-service) solutions and cloud computing.
This makes containers more efficient and less resource-hungry than virtual machines. The trade-off is that all containers must share the host’s OS platform. Containers can launch within seconds, while full virtual machines can take several minutes to do so. Virtual machines must also be run through a hypervisor, which containers do not require.
This further enhances container performance compared to virtual machines. According to the company, containers can offer application processing speeds double those of virtual machines. In addition, a single server can have a greater number of containers packed into it, because the OS does not have to be virtualized for each and every application.
The New Improvements and Features Present In Docker 0.8
Docker 0.8 has seen several improvements and much debugging since the last release. Quality improvements have been the primary goal of the development team. The team, comprising over 120 volunteers for this release, focused on fixing bugs, improving stability, streamlining the code, boosting performance and updating documentation. Future releases aim to continue these improvements and further increase quality.
There are some specific improvements that users of earlier releases will find in version 0.8. The Docker daemon is quicker; containers and images can be moved faster; building source images with docker build is quicker; memory footprints are smaller; and the build is more stable, with race conditions fixed. Packaging is more portable, with a tar implementation, and the code has been made easier to change because of compacted sub-packaging.
The docker build command has also been improved in many ways. A new caching layer, greatly in demand among customers, speeds up builds by avoiding the need to upload the same content from disk again and again.
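As a hypothetical illustration of how the cache helps, a Dockerfile can be ordered so that rarely-changing instructions come first; later edits to application code then only invalidate the final layers (the image names and paths here are examples, not from the article):

```dockerfile
# Rarely-changing steps first: their layers stay cached between builds.
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y python python-pip

# The dependency list changes occasionally; application code changes often.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Only edits to the source tree invalidate the cache from here down.
ADD . /app
```

Rebuilding with docker build after a source-only change then reuses the cached apt and pip layers instead of re-running them.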
There are also a few new features to expect from 0.8. The software is being shipped with a BTRFS (B-Tree File System) storage driver that is at an experimental stage. The BTRFS file system is a recent alternative to ZFS among the Linux community. This gives users a chance to try out the new, experimental file system for themselves.
A new ONBUILD trigger feature also allows an image to be used later to create other images, by adding a trigger instruction to the image.
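For example, a base image (with hypothetical names) could register ONBUILD triggers that fire when a downstream image is built from it:

```dockerfile
# Hypothetical base image for Node.js applications.
# The ONBUILD instructions do nothing while this image itself is built;
# they run when another Dockerfile uses this image in a FROM line.
FROM ubuntu:12.04
RUN apt-get update && apt-get install -y nodejs npm
ONBUILD ADD . /app/src
ONBUILD RUN cd /app/src && npm install
```

A downstream Dockerfile consisting only of a FROM line pointing at this base image would then automatically copy in its own source tree and install its dependencies at build time.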
Version 0.8 also supports Mac OS X, which will be good news for many Mac users. Docker can be run completely offline and directly on a Mac to build Linux applications. Installing the software on an OS X workstation is made easy with the help of a lightweight virtual machine named Boot2Docker.
Docker may have gained the place it has today partly because of its simplicity. Containers are otherwise a complex technology, and users are traditionally required to apply complex configurations and command lines. Docker’s API makes it easy for administrators to insert Docker images into a larger workflow.
A plug-in is currently being developed that will allow Docker to be used on platforms beyond Linux, such as Microsoft Windows, via a hypervisor. The development team plans to update the software once a month. Version 0.9 is expected to be released in early March 2014. It may include some new features if they are merged in time; otherwise they will be carried over to the following release.
Docker is expected to follow Linux in its version numbering: major changes are represented by a change in the first digit, regular updates by the second, and emergency fixes by the third.
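The numbering scheme can be illustrated with a small helper (a hypothetical sketch, not part of Docker itself):

```python
def change_type(old, new):
    """Classify a version bump under the Linux-style scheme described
    above: first digit = major change, second = regular update,
    third = emergency fix."""
    old_parts = [int(p) for p in old.split(".")]
    new_parts = [int(p) for p in new.split(".")]
    if new_parts[0] != old_parts[0]:
        return "major change"
    if new_parts[1] != old_parts[1]:
        return "regular update"
    return "emergency fix"

print(change_type("0.8.0", "0.9.0"))  # → regular update
print(change_type("0.9.0", "1.0.0"))  # → major change
```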
Customers looking forward to the production ready Docker version 1 will have to wait until April. They can also expect support for the software as well as a potential enterprise release. There are also attempts by the team to develop services for signing images, indexing them and creating private image registries.
Give it a try!
One of the biggest challenges facing IT departments today is keeping the work environment running. They must maintain an IT infrastructure able to meet the current demand for services and applications, while also ensuring that, in critical situations, the company is able to resume normal activities quickly. And this is where the big problem appears.
Many IT departments are working at their physical, logical and economic limits. Their budgets are small and grow on average 1% a year, while the complexity of management grows at an exponential rate. In most of the companies I have worked with, IT has been viewed as a pure cost center rather than as an investment.
With this reality, IT professionals have to juggle to maintain a functional structure. For colleagues working in a similar reality, I recommend paying special attention to this topic: virtualization.
Despite what many believe, virtualization is not an expensive process compared to its benefits, although depending on the scenario it can cost more than many traditional designs. To give you an idea, today over 70% of the IT budget is spent just keeping the existing environment running, while less than 30% is invested in innovation, differentiation and competitiveness. This means that most IT investment is dedicated simply to "putting out fires" and solving emergencies, and very little is spent on solving the underlying problem.
I have observed a very common reality in the daily life of large companies, where the IT department is so overwhelmed that it cannot find the time to rethink. In several of them, we see two completely different scenarios: before and after virtualization / cloud computing. In the first case, what we see is a drastic bottleneck, with resources at their limit. In the second, a scene of tranquility, with safe management and guaranteed scalability.
Therefore, consider the proposal of virtualization and discover what it can do for your department and, therefore, for your company.
Within this reasoning, we have two maxims. The first: "Rethink IT." The second: "Reinvent the business."
The big challenge for organizations is precisely this: to rethink. What does it take to transform technicians into consultants?
Increase efficiency and security
As the structure grows, so does the complexity of managing the environment. It is common to see data centers with a server dedicated to a single application, because best practices call for a dedicated server for each service. The metric is still valid; without doubt this is the best option to avoid conflicts between applications, performance problems, and so on. However, environments like this are becoming increasingly wasteful, as processing capacity and memory are increasingly underutilized. On average, only 15% of a server's processing power is consumed by its application; over 80% of processing power and memory goes unused.
Can you imagine the situation? On one hand we have virtually unused servers, while others need more resources; and as applications get ever lighter, the hardware in use gets ever more powerful.
Another point that needs careful consideration is the safety of the environment. Imagine a database server with a failed disk. What difficulty would your company face today? Think of the time your business needs to quote, purchase, receive, install and configure the replacement for the failed item. During all this time, the service stays down.
Many companies are based in cities or regions far from major centers, and therefore cannot afford to ignore this possibility.
With virtualization this does not happen, because we leave the traditional scenario, where we have many servers, each hosting its own operating system and applications, and move to a more modern and efficient one.
In the image below, we can see the process of migrating from the physical environment, with multiple servers, to a virtual environment, where we have fewer physical servers hosting the virtual servers.
By working with this technology, underutilized servers running different applications and services are consolidated onto the same physical hardware, sharing CPU, memory, disk and network resources. This can bring the average usage of the equipment up to 85%. Moreover, fewer physical servers means less spending on parts, memory and processors, lower power and cooling demands, and therefore fewer people needed to manage the structure.
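The consolidation arithmetic above can be sketched in a short calculation. Using the article's figures (roughly 15% average utilization per dedicated server, and hosts loaded up to 85% after consolidation), a naive, hypothetical capacity-based estimate of how many physical hosts a fleet consolidates onto:

```python
import math

def hosts_needed(num_servers, avg_utilization, target_utilization):
    """Estimate physical hosts needed after consolidation.

    Each dedicated server contributes avg_utilization of one host's
    capacity; consolidated hosts are loaded up to target_utilization.
    A simple capacity-based estimate, ignoring memory and I/O limits.
    """
    total_load = num_servers * avg_utilization
    return math.ceil(total_load / target_utilization)

# Ten dedicated servers at 15% average utilization, consolidated
# onto hosts run at 85% utilization:
print(hosts_needed(10, 0.15, 0.85))  # → 2
```

Even under this rough model, ten barely-used machines collapse onto a couple of well-utilized hosts, which is where the savings on parts, power and cooling come from.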
At this point you may ask: but what about security? If I now have multiple servers running simultaneously on a single physical server, am I at the mercy of that server? What if the equipment fails?
The new thinking is not only about the technology but about how to implement it in the best way possible. VMware, the global leader in virtualization and cloud computing, works with cluster technology, enabling and ensuring high availability of the servers. Basically, with two or more servers working together, in the event of an equipment failure VMware identifies the fault and automatically restarts all of its services on another host. This happens automatically, without IT staff intervention.
During implementation, a physical failure can be simulated to test the high availability and security of the environment, and the response time is fairly quick. On average, each virtual server restarts within 10 seconds to 2 minutes, and in some scenarios the entire operating environment can be back up in about 5 minutes.
Make new services ready quickly
In a virtualized environment, provisioning new services becomes a quick and easy task, since resources are managed by the virtualization tool and are not tied to a single physical machine. This way you can assign a virtual server only the resources it needs, avoiding waste. On the other hand, if demand increases rapidly, you can increase the amount of memory allocated to that server daily. The same reasoning applies to disk and processing.
Remember that you are limited by the amount of hardware present in the cluster: you can only increase the memory of a virtual server if that resource is available in the physical environment. This puts an end to underutilized servers, as you begin to manage the environment intelligently and dynamically, ensuring greater stability.
Integrating resources through the cloud
Cloud computing is a reality, and there is no cloud without virtualization. VMware provides a tool called vCloud; with it, it is possible to run a private cloud on top of your virtual structure, all managed with a single tool.
Reinventing the Business
After rethinking, now is the time to change and to reap the rewards of an optimized IT organization. When we carry out a structured project, with high availability, security and capacity for growth, everything becomes much easier. Among the benefits, we can mention the following:
Respond quickly to business expansion
When working in a virtualized environment, you can meet the demand for new services quickly. With VMware, a new server can be configured in a few clicks; in five minutes you have a new server ready to use. Today this is crucial, since the time to start a new project keeps decreasing.
Increase focus on strategic activities
With the environment under control, management is simple and it becomes easier to focus on the business. Most of the operational work disappears, and IT can think about the business, which is what transforms a technician into a consultant. The team becomes fully focused on technology and strategic decisions, instead of acting as firefighters dedicated to putting out fires.
Aligning the IT department's decision making
Virtualization gives IT staff metrics, reporting and analysis. With these reports in hand, professionals have a tool that presents the reality of their environment in fairly simple, understandable language. Often this information supports a negotiation with management and, therefore, the approval of budget for the purchase of new equipment.
Well folks, that's all. I tried not to write too much, but it's hard to cover something this important in few lines. I promise that future articles will discuss VMware and how it works in a little more detail.
A recent article published on Linux.org, “Are Cloud Operating Systems the Next Big Thing?”, suggests that a Cloud Operating System should simplify the application stack, the idea being that the language runtime executes directly on the hypervisor without an operating system kernel.
Other approaches for cloud operating systems are focussed on optimising Operating System distributions for the cloud with automation in mind. The concepts of IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) remain in the realm of conventional computing paradigms.
None of these approaches address the core benefits of the cloud. The cloud is a pool of resources, not just another “single” computer. When we think of a computer, it has a processor, persistent storage and memory. A conventional operating system exposes compute resources based on these physical limitations of a single computer.
There are numerous strategies to create the illusion of a larger compute platform, such as load balancing to a cluster of compute nodes. Load balancing is most commonly performed at a network level with applications or operating systems having limited exposure of the overall compute platform. This means an application cannot determine the available compute resources and scale the cloud accordingly.
To fully embrace the cloud concept, a platform is required that can automatically scale application components with additional cloud compute resources. Amazon and Google both have solutions that provide some of these capabilities; however, internal enterprise solutions are somewhat limited. Many organisations embrace the benefits of a hosted cloud within the mega data centres around the world, yet many companies have a requirement to host applications internally.
As network speeds increase, a real “Cloud Operating System” becomes feasible. This is where an application can start a thread that executes not on a separate processor core, but somewhere within the cloud.
A complete paradigm shift is required to comprehend the possibilities of an Operating System providing distributed parallel processing. Virtualisation takes this new cloud paradigm to a different level: a virtualisation layer abstracts the hardware, and a platform operating system presents compute resources to a Cloud Operating System.
Just as a conventional operating system determines which CPU core is most appropriate to execute a specific process or thread, a cloud operating system should identify which instance of the cloud execution component is most appropriate to execute a task.
A cloud operating system with multiple execute instances on numerous hosts can schedule tasks based on the available resources of each execute instance. Even with task scheduling abstracted to this higher layer, the underlying operating system is still required to optimise performance using techniques such as Symmetric Multiprocessing (SMP), processor affinity and thread priorities.
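As a minimal sketch of this scheduling idea (all names are hypothetical and not drawn from any actual cloud OS), a scheduler at this higher layer might simply assign each task to the execute instance with the most free capacity, much as a conventional OS picks the least-loaded core:

```python
# Minimal sketch of cloud-level task scheduling: assign each task to the
# execute instance with the most available capacity. All names invented.

from dataclasses import dataclass

@dataclass
class ExecuteInstance:
    host: str
    total_slots: int      # parallel tasks this instance can run
    used_slots: int = 0

    @property
    def free_slots(self):
        return self.total_slots - self.used_slots

def schedule(task, instances):
    """Assign a task to the least-loaded execute instance."""
    best = max(instances, key=lambda i: i.free_slots)
    if best.free_slots == 0:
        raise RuntimeError("cloud is saturated; queue the task instead")
    best.used_slots += 1
    return best.host

instances = [ExecuteInstance("node-a", 8, used_slots=7),
             ExecuteInstance("node-b", 8, used_slots=2)]
print(schedule("simulation-42", instances))  # → node-b (most free slots)
```

A real implementation would of course weigh CPU, memory and I/O rather than a single slot count, but the division of labour is the point: placement happens here, while SMP, affinity and priorities remain the host OS's job.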
The application developer has for many years been abstracted from the hardware with development environments such as C#, Java and even PHP. Operating systems have not adapted to the Cloud concept of providing compute resources beyond a single computer.
The most comparable implementation is the route taken by application servers, with solutions such as Java EJB where lookups can occur to find providers. Automatic scalability, however, is limited with these solutions.
Hardware vendors are moving ahead by creating cloud optimised platforms. The concept is that many smaller platforms create optimal compute capacity. HP seem to be leading this sector with their Moonshot solution. The question however remains: How do you make many look like one?
Enterprises have existing data centres where very little of the overall compute capacity is actually leveraged on an ongoing basis. When one system is busy, numerous others are idle. A cloud compute environment that can automatically scale across a collection of servers would provide actual cost savings: compute capacity would be additive, using existing infrastructure for workloads based on available resources. According to the IDC report on worldwide server shipments, the server market is in excess of $12B per quarter. The major vendors are looking for ways to differentiate their solutions and provide optimal value to customers.
By combining hardware, virtualisation and a Cloud Operating System, organisations will benefit from a reduction in the cost of providing adequate compute capacity to serve business needs.
Gideon Serfontein is a co-founder of the Bongi Cloud Operating System research project. Additional information at http://bongi.softwaremooss.com
Or “How I learned to start worrying and never trust the cloud.”
The Clouderati have been derping for some time now about how we’re all going towards the public cloud and “private cloud” will soon become a distant, painful memory, much like electric generators filled the gap before power grids became the norm. They seem far too glib about that prospect, and frankly, they should know better. When the Clouderati see the inevitability of the public cloud, their minds lead to unicorns and rainbows that are sure to follow. When I think of the inevitability of the public cloud, my mind strays to “The Empire Strikes Back” and who’s going to end up as Han Solo. When the Clouderati extol the virtues of public cloud providers, they prove to be very useful idiots advancing service providers’ aims, sort of the Lando Calrissians of the cloud wars. I, on the other hand, see an empire striking back at end users and developers, taking away our hard-fought gains made from the proliferation of free/open source software. That “the empire” is doing this *with* free/open source software just makes it all the more painful an irony to bear.
I wrote previously that It Was Never About Innovation, and that article was set up to lead to this one, which is all about the cloud. I can still recall talking to Nicholas Carr about his new book at the time, “The Big Switch”, all about how we were heading towards a future of utility computing, and what that would portend. Nicholas saw the same trends the Clouderati did, except a few years earlier, and came away with a much different impression. Where the Clouderati are bowled over by Technology! and Innovation!, Nicholas saw a harbinger of potential harm and warned of a potential economic calamity as a result. While I also see a potential calamity, it has less to do with economic stagnation and more to do with the loss of both freedom and equality.
The virtuous cycle I mentioned in the previous article does not exist when it comes to abstracting software over a network, into the cloud, and away from the end user and developer. In the world of cloud computing, there is no level playing field – at least, not at the moment. Customers are at the mercy of service providers and operators, and there are no “four freedoms” to fall back on.
When several of us co-founded the Open Cloud Initiative (OCI), it was with the intent, as Simon Phipps so eloquently put it, of projecting the four freedoms onto the cloud. There have been attempts to mandate additional terms in licensing that would force service providers to participate in a level playing field. See, for example, the great debates over “closing the web services loophole” as we called it then, during the process to create the successor to the GNU General Public License version 2. Unfortunately, while we didn’t yet realize it, we didn’t have the same leverage as we had when software was something that you installed and maintained on a local machine.
The Way to the Open Cloud
Many “open cloud” efforts have come and gone over the years, none of them leading to anything of substance or gaining traction where it matters. Bradley Kuhn helped drive the creation of the Affero GPL version 3, which set out to define what software distribution and conveyance mean in a web-driven world, but the rest of the world has been slow to adopt because, again, service providers have no economic incentive to do so. Where we find ourselves today is a world without a level playing field, which will, in my opinion, stifle creativity and, yes, innovation. It is this desire for “innovation” that drives the service providers to behave as they do, although as you might surmise, I do not think that word means what they think it means. As in many things, service providers want to be the arbiters of said innovation without letting those dreaded freeloaders have much of a say. Worse yet, they create services that push freeloaders into becoming part of the product – not a participant in the process that drives product direction. (I know, I know: yes, users can get together and complain or file bugs, but they cannot mandate anything over the providers)
Most surprising is that the closed cloud is aided and abetted by well-intentioned, but ultimately harmful actors. If you listen to the Clouderati, public cloud providers are the wonderful innovators in the space, along with heaping helpings of concern trolling over OpenStack’s future prospects. And when customers lose because a cloud company shuts its doors, the clouderati can’t be bothered to bring themselves to care: c’est la vie and let them eat cake. The problem is that too many of the clouderati think that Innovation! is a means to its own ends without thinking of ground rules or a “bill of rights” for the cloud. Innovation! and Technology! must rule all, and therefore the most innovative take all, and anything else is counter-productive or hindering the “free market”. This is what happens when the libertarian-minded carry prejudiced notions of what enabled open source success without understanding what made it possible: the establishment and codification of rights and freedoms. None of the Clouderati are evil, freedom-stealing, or greedy, per se, but their actions serve to enable those who are. Because they think solely in terms of Innovation! and Technology!, they set the stage for some companies to dominate the cloud space without any regard for establishing a level playing field.
Let us enumerate the essential items for open innovation:
- Set of ground rules by which everyone must abide, eg. the four freedoms
- Level playing field where every participant is a stakeholder in a collaborative effort
- Economic incentives for participation
These will be vigorously opposed by those who argue that establishing such a list is too restrictive for innovation to happen, because… free market! The irony is that establishing such rules enabled Open Source communities to become the engine that runs the world’s economy. Let us take each and discuss its role in creating the open cloud.
We have already established the irony that the four freedoms led to the creation of software that was used as the infrastructure for creating proprietary cloud services. What if the four freedoms were tweaked for cloud services? As a reminder, here are the four freedoms:
- The freedom to run the program, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1).
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3).
If we rewrote this to apply to cloud services, how much would need to change? I made an attempt at this, and it turns out that only a couple of words need to change:
- The freedom to run the program or service, for any purpose (freedom 0).
- The freedom to study how the service works, and change it so it does your computing as you wish (freedom 1).
- The freedom to implement and redistribute copies so you can help your neighbor (freedom 2).
- The freedom to implement your modified versions for others (freedom 3).
Freedom 0 adds “or service” to denote that we’re not just talking about a single program, but a set of programs that act in concert to deliver a service.
Freedom 1 allows end users and developers to peek under the hood.
Freedom 2 adds “implement and” to remind us that the software alone is not much use – the data forms a crucial part of any service.
Freedom 3 also changes “distribute copies of” to “implement” because of the fundamental role that data plays in any service. Distributing copies of software in this case doesn’t help anyone without also adding the capability of implementing the modified service, data and all.
Establishing these rules will be met, of course, with howls of rancor from the established players in the market, as it should be.
Level Playing Field
With the establishment of the service-oriented freedoms, above, we have the foundation for a level playing field with actors from all sides having a stake in each other’s success. Each of the enumerated freedoms serves to establish a managed ecosystem, rather than a winner-take-all pillage and plunder system. This will be countered by the argument that if we hinder the development of innovative companies won’t we a.) hinder economic growth in general and b.) socialism!
In the first case, there is a very real threat from a winner-take-all system. In its formative stages, when everyone has the economic incentive to innovate (there’s that word again!), everyone wins. Companies create and disrupt each other, and everyone else wins by utilizing the creations of those companies. But there’s a well known consequence of this activity: each actor will try to build in the ability to retain customers at all costs. We have seen this happen in many markets, such as the creation of proprietary, undocumented data formats in the office productivity market. And we have seen it in the cloud, with the creation of proprietary APIs that lock in customers to a particular service offering. This, too, chokes off economic development and, eventually, innovation. At first, this lock in happens via the creation of new products and services which usually offer new features that enable customers to be more productive and agile. Over time, however, once the lock-in is established, customers find that their long-term margins are not in their favor, and moving to another platform proves too costly and time-consuming. If all vendors are equal, this may not be so bad, because vendors have an incentive to lure customers away from their existing providers, and the market becomes populated by vendors competing for customers, acting in their interest. Allow one vendor to establish a larger share than others, and this model breaks down. In a monopoly situation, the incumbent vendor has many levers to lock in their customers, making the transition cost too high to switch to another provider. In cloud computing, this winner-take-all effect is magnified by the massive economies of scale enjoyed by the incumbent providers. Thus, the customer is unable to be as innovative as they could be due to their vendor’s lock-in schemes. If you believe in unfettered Innovation! at all costs, then you must also understand the very real economic consequences of vendor lock-in. 
By creating a level playing field through the establishment of ground rules that ensure freedom, a sustainable and innovative market is at least feasible. Without that, an unfettered winner-take-all approach will invariably result in the loss of freedom and, consequently, agility and innovation.
This is the hard one. We have already established that open source ecosystems work because all actors have an incentive to participate, but we have not established whether the same incentives apply here. In the open source software world, developers participate because they have to: the price of software is always dropping, and customers enjoy open source software too much to give it up for anything else. One thing that may be in our favor is the distinct lack of profits in the cloud computing space, although that changes once you include services built on cloud computing architectures.
If we focus on infrastructure as a service (IaaS) and platform as a service (PaaS), the primary gateways to creating cloud-based services, then the margins and profits are quite low. This market is, by its nature, open to competition because the race is on to lure as many developers and customers as possible to the respective platform offerings. However, the danger arises if one particular service provider is able to offer proprietary services that give it leverage over the others, establishing the lock-in levers needed to pound the competition into oblivion.
In contrast to basic infrastructure, the profit margins of proprietary products built on top of cloud infrastructure have been growing for some time, which incentivizes the IaaS and PaaS vendors to keep stacking proprietary services on top of their basic infrastructure. This results in a situation where increasing numbers of people and businesses have happily donated their most important business processes and workflows to these service providers. If any of them grow unhappy with the service, they cannot easily switch, because no competitor would have access to the same data or implementation of that service. In this case, not only is there a high cost associated with moving to another service, there is the distinct loss of utility (and revenue) that the customer would experience. There is a cost that comes from entrusting so much of your business to single points of failure with no known mechanism for migrating to a competitor.
In this model, there is no incentive for service providers to voluntarily open up their data or services to other service providers. There is, however, an incentive for competing service providers to be more open with their products. One possible solution could be to create an Open Cloud certification that would allow services that abide by the four freedoms in the cloud to differentiate themselves from the rest of the pack. If enough service providers signed on, it would lead to a network effect adding pressure to those providers who don't abide by the four freedoms. This is similar to the model established by the Free Software Foundation and, although the GNU people would be loath to admit it, the Open Source Initiative. The OCI's goal was to ultimately create this, but we have not yet been able to follow through on those efforts.
We have a pretty good idea why open source succeeded, but we don’t know if the open cloud will follow the same path. At the moment, end users and developers have little leverage in this game. One possibility would be if end users chose, at massive scale, to use services that adhered to open cloud principles, but we are a long way away from this reality. Ultimately, in order for the open cloud to succeed, there must be economic incentives for all parties involved. Perhaps pricing demands will drive some of the lower rung service providers to adopt more open policies. Perhaps end users will flock to those service providers, starting a new virtuous cycle. We don’t yet know. What we do know is that attempts to create Innovation! will undoubtedly lead to a stacked deck and a lack of leverage for those who rely on these services.
If we are to resolve this problem, it can’t be about innovation for innovation’s sake – it must be, once again, about freedom.
This article originally appeared on the Gluster Community blog.
On the use of low-thread, high-speed “gaming computers” to run engineering simulations.
Many of us in the Linux community work only with software that is FOSS: free and open-source software, available without licensing fees. There are many outstanding FOSS products on the market today, from the Firefox browser to most distributions of Linux to the OpenOffice productivity suite. However, there are times when FOSS is not an option; a good example is in my line of work supporting engineering software, especially CAD tools and simulators. This software is not only costly but very restrictive, and every aspect of it is charged for. For example, many of the simulators can run multithreaded, with one piece of software running on up to 16 threads for a single simulation. More threads require more tokens, and we pay per token available. This puts us in a situation where we wish to maximize the amount we accomplish with as few threads as we can.
If, for example, an engineer needs a simulation to finish to prove a design concept and it will take 6 hours to simulate on 1 thread, he or she will want another token in order to use more threads. Using one token may buy a reduction of 3 hours in simulation time, but the cost is that the tokens used for that simulation cannot be used by another engineer. The simple solution would be to keep buying more and more tokens until every engineer has enough to run on the maximum number of threads at all times. If there are 5 engineers who each run simulations that can use 16 threads for the cost of 5 tokens, then we will need 25 tokens. Of course the simple solution rarely works: the cost of 25 tokens is so high that it can easily bankrupt a medium-sized company.
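The arithmetic above can be made explicit. This sketch uses the figures from the example (5 engineers, 5 tokens per full-width 16-thread run); the function name is our own:

```python
# Worked version of the token arithmetic above. The figures (5 engineers,
# 5 tokens for a full 16-thread run) come from the text; the helper name
# is invented for illustration.

def tokens_needed(engineers, tokens_per_full_sim):
    """Tokens required so every engineer can always simulate at full width."""
    return engineers * tokens_per_full_sim

print(tokens_needed(5, 5))  # → 25, the figure that can bankrupt a mid-sized firm
```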
Another solution would be to use fewer tokens but implement advanced queuing software. This has the advantage that engineers can submit tasks and the servers running the simulations will run at all times (we hope), using the tokens we do have to the utmost. This strategy works well when deadlines are far away, but as they get close, the fight for slots can grow.
Since the limiting resource here is the number of threads, we tried a different approach. As we are paying per thread we run, we should try to run each thread as fast as possible, increasing per-thread performance rather than overall throughput. To justify our reasoning, we created benchmarks for our tools and compared the time it took to run a simulation against the number of threads we employed for it.
The conclusion was: independent of the software and the type of simulation we ran, performance increased steeply up to about 4.5 threads and then leveled off. A shocking result, given that the tools we used came out in different years and were produced by different vendors.
Given this information, we concluded that if we ran 4 threads 25% faster on machine A (by overclocking), we could achieve better results than on machine B, despite the two machines sharing the same architecture. This meant that for the near-trivial price (compared to a server's cost or additional tokens) of a modified desktop computer, we could outperform a server running with the maximum number of tokens we could purchase.
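One way to sanity-check that conclusion is Amdahl's law, which predicts exactly this kind of leveling off as the serial portion of a job dominates. This is our model, not something measured in the benchmarks, and the 0.5 parallel fraction below is an illustrative guess:

```python
# Hedged model of the benchmark shape: Amdahl's law predicts speedup
# leveling off as thread count grows. The 0.5 parallel fraction is an
# illustrative assumption, not a value taken from our benchmarks.

def amdahl_speedup(threads, parallel_fraction=0.5):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / threads)

speed_b = amdahl_speedup(16)          # machine B: 16 threads, stock clock
speed_a = 1.25 * amdahl_speedup(4)    # machine A: 4 threads, 25% overclock

print(speed_a > speed_b)  # → True under this assumed parallel fraction
```

With half the work serial, 16 threads buy a speedup of about 1.88, while 4 threads plus a 25% clock bump reach 2.0, which is the shape of the trade-off we observed.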
Our new system specifications:
Intel Core i7 (LGA 1155 socket)
Cooler Master power supply
G.Skill DDR3 RAM
All-in-one liquid CPU cooler
Cooler Master PC case
Ethernet server adapter
The total cost was approximately 1200 per unit after rebates. Assembly took about 3 hours. Overclocking was stable at 4.7 GHz, with a maximum recorded temperature of 70 °C. The operating system is CentOS with the full desktop installed. The NICs have two connections link-aggregated to our servers.
To test the overclocking we wrote a simple infinite-loop floating-point operation in Perl and launched 8 instances of it while monitoring the results using a FOSS program called i7z. The hard drive exists only to provide a boot drive; all other functions are performed via SSH and NFS exports. The units sit headless in our server room. It has been estimated that we have reduced overall simulation time across the company by 50% with only two units.
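The original stress script was written in Perl; the sketch below is our Python reconstruction of the same idea, not the script itself. One process is launched per hardware thread under load, and `iterations=None` spins forever as the original did:

```python
# Python reconstruction of the stress test described above (the original
# was a Perl loop). One process per hardware thread; clocks and
# temperatures are monitored externally with i7z.

import math
from multiprocessing import Process

def burn(iterations=None):
    """Spin on floating-point work; iterations=None loops forever."""
    x, n = 1.0, 0
    while iterations is None or n < iterations:
        x = math.sqrt(x * 3.14159) + 1.0
        n += 1
    return x

if __name__ == "__main__":
    # Finite demo run; pass iterations=None for an open-ended burn-in.
    procs = [Process(target=burn, args=(1_000_000,)) for _ in range(8)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```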
The analogy we give is one of transportation. We have servers, which function like buses: they can move a great many people at a time, which is great, but buses are slow. Now we have constructed high-speed sports cars; these cars can move only a few people at a time, but move them much faster.