The Linux Foundation is bringing open source to the auto industry, thanks to Automotive Grade Linux.
The foundation started Automotive Grade Linux (AGL) to create open source software solutions for automotive applications. Its initial focus is on In-Vehicle Infotainment (IVI), and its long-term goals include the addition of instrument clusters and telematics systems. AGL already has the likes of Ford, Jaguar Land Rover, Mazda, Mitsubishi Motors, Nissan, Subaru, and Toyota on board, and that list will only continue to grow.
AGL is completely open. In fact, you can already download the source for Automotive Grade Linux and run it on supported hardware (Renesas R-Car M2 Porter, Renesas R-Car E2 Silk, QEMU x86). Because AGL is open source, car manufacturers won’t be dealing with a collection of proprietary code that will work for a single model, …
Metaphors and models tend to break down for one of two reasons. The first is that they simplify and abstract a messy real world down to especially relevant or important points. Over time, these simplifications can come to be seen as too simple, or as failing to capture essential aspects of reality. (This seems to be what’s going on with the increasing pushback against “bimodal IT.” But that’s a topic for another day.)
The other reason is that the world changes in such a way that it drifts away from the one that was modeled.
Or it can be a bit of both. That’s the case with the pets and cattle analogy as it’s been applied to virtualized enterprise infrastructure and private clouds.
The “pets vs. cattle” metaphor is usually attributed to Bill Baker, then of Microsoft. The idea is that traditional workloads are pets. If a pet gets sick, you take it to the vet and try to make it better. New-style, cloud-native workloads, on the other hand, are cattle. If a cow gets sick, well, you get a new cow.
Codenvy, Microsoft, and Red Hat have banded together to bring a more consistent developer experience across different code editors, by way of a new protocol that allows any editing tool to check a user’s code against a set of rules and best practices defined for each language.
For the keepers of programming languages, the Language Server Protocol project could help them provide better support for their users, without worrying about the underlying platform.
With these specifications, code editors can offer advanced, language-specific functionality such as syntax analysis, code completion, outlining, and refactoring.
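To make the mechanics concrete, here is a minimal sketch of what a Language Server Protocol request looks like on the wire. The framing (a Content-Length header followed by a JSON-RPC body) and the textDocument/completion method come from the protocol itself; the file URI and cursor position below are made-up values for illustration:

    # A minimal sketch of LSP message framing: a Content-Length header
    # followed by a JSON-RPC 2.0 body. The file URI and cursor position
    # below are invented for illustration.
    import json

    def make_lsp_request(method, params, msg_id=1):
        """Frame a JSON-RPC request the way LSP clients send it over stdio."""
        body = json.dumps({
            "jsonrpc": "2.0",
            "id": msg_id,
            "method": method,
            "params": params,
        })
        # Content-Length counts the bytes of the body.
        return "Content-Length: {}\r\n\r\n{}".format(len(body.encode("utf-8")), body)

    # Ask a language server for completions at line 10, column 4 of a file.
    request = make_lsp_request("textDocument/completion", {
        "textDocument": {"uri": "file:///home/user/project/app.py"},
        "position": {"line": 10, "character": 4},
    })
    print(request)

Because every editor speaks this same framing and method vocabulary, a single language server can back completions in any number of editing tools.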
It should come as no surprise that open source training and hiring are typically predicated on what skills are trending in tech. As an example, Big Data, cloud, and security are three of the most in-demand skillsets today, which explains why more and more open source professionals look to develop these particular skillsets and why these professionals are amongst the most sought after. One skillset that employers have not valued as highly as professionals have is container management.
While 19% of open source professionals said that containers will have a big impact on open source hiring in 2016, only 8% of employers felt this way, according to the 2016 Open Source Jobs Report. One potential reason for this mismatch may be that professionals see a greater benefit in adopting container technologies than employers do at present. Technical professionals have been able to see the advantages of container packaging and development workflows, but the relative youth of orchestration technologies has made it more difficult for organizations, particularly large enterprises, to widely adopt container infrastructures.
In the past year, the adoption of containers has skyrocketed, along with the amount of software easily available to developers and container builders. But significant questions remain around the management and operation of containers, specifically around security, networking, and persistent data storage in container-based environments. While developers have been able to create flexible application architectures with containers, there are still many areas where unresolved challenges make adoption less likely in more risk-averse environments.
The rapid pace of change and evolution in the container ecosphere has also made it challenging for employers to find personnel who can cope with that pace while maintaining stable production environments. Therefore, as an open source professional with container skills and strong soft skills, you can be a key asset and contributor.
With a robust knowledge of containers, you can help foster greater collaboration within your team. In addition to providing tech teams with application portability, containers give individuals greater flexibility and control over their work. Docker, for example, one of the two most prominent container technologies, gives developers complete ownership of their code and gives operations teams the ability to manage and scale their operating systems.
A search on Dice for professionals with Docker experience generates a results page with a wide range of job titles (e.g., data analytics software engineer, cloud architect, senior principal DevOps engineer). Employers want team members whose skillsets help them work more quickly, efficiently, and independently; that requirement isn’t specific to any one title. That said, some concern remains amongst employers around data persistence, since many of the companies adopting Docker are ones that need to operate at large scale.
Open source professionals who are also familiar with CoreOS, specifically its rkt product, may have a leg up with security-minded organizations. rkt offers an alternative approach to Docker, focusing more on security and composability, two areas where many employers have voiced concerns about containers. For that reason, use this skill to your advantage during the interview and hiring process.
Containers are still new to the tech world, and employers and professionals alike have a lot more to learn as the technology continues to develop. As a result, uncertainty remains, particularly amongst employers, about what impact containers will have on open source hiring in the future. That said, the continuous evolution of the container ecosphere has caused some of the initial concerns around the technology to dissipate. If you are an open source professional with container skills, use this time to demonstrate to employers and fellow professionals the value of containers and how they can improve team dynamics and workflow.
Apache Spark has been an integral part of Mesos from its inception. Spark is one of the most widely used big data processing systems for clusters. Matei Zaharia, the CTO of Databricks and creator of Spark, talked about Spark’s advanced data analysis power and new features in its upcoming 2.0 release in his MesosCon 2016 keynote.
Spark’s Design Goals
Spark was created to meet two needs: to provide a unified engine for big data processing, and a concise high-level API for working with big data.
“A lot of data and analysis is exploratory and interactive. So, unlike things like high-performance computing, where you write a program and then you run it for many years, and you can afford to spend a few months optimizing it, in data science, what you really do is write a program and you run it once, and then you realize it was computing the wrong thing and you never run it again. So you can’t actually spend a lot of time sitting down and tuning your program. The solution is to have very high-level APIs that try to get you pretty good performance and are faster to iterate, so that you can actually explore your data,” said Zaharia.
Spark uses libraries for data processing, such as SQL and DataFrames for structured data, streaming libraries for incremental processing, and graph processing. According to Zaharia, “These all build on top of the Resilient Distributed Dataset (RDD) API, and the cool thing is when we look at users, most users do use a mix of these. I think something like 75% of users use two libraries or more. It’s actually useful for people trying to build applications.”
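To give a flavor of how those layers mix, here is a minimal PySpark sketch, written against the Spark 2.0-style SparkSession entry point, that starts from a plain RDD and then queries the same data through the SQL/DataFrame library. The data and column names are invented for illustration:

    # A minimal sketch mixing two Spark libraries: the core RDD API and
    # Spark SQL / DataFrames. Uses the Spark 2.0-style SparkSession
    # entry point; the data and column names are invented.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("library-mix-demo").getOrCreate()

    # Start from a plain RDD of (word, frequency) pairs...
    rdd = spark.sparkContext.parallelize([("spark", 10), ("mesos", 7), ("linux", 12)])

    # ...then promote it to a DataFrame and query it with SQL.
    df = rdd.toDF(["word", "freq"])
    df.createOrReplaceTempView("words")
    spark.sql("SELECT word FROM words WHERE freq > 8").show()

    spark.stop()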
Spark 2.0
Spark 2.0 has not yet been released, but you can try out the preview release. The most significant new feature is structured streaming, which greatly expands Spark’s real-time data analysis capabilities.
“It has event time, which means your records can have time stamps set from outside, and they can come in out of order, and you can still do aggregation and windowing by the original time in the data. It’s got windowing, sessions, sessionization, and a really nice API for plugging in data sources and sinks… With structured streaming, you’re able to take the data in a stream, build a table in Spark SQL, and serve the table through JDBC, and anything that talks SQL can query the real-time state of your stream,” Zaharia said.
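A rough sketch of what that looks like in code, written against the Spark 2.0 preview’s structured streaming API; the socket source, host, port, and window size here are illustrative stand-ins, not values from the talk:

    # A rough structured-streaming sketch against the Spark 2.0 preview
    # API. The socket source, host/port, and 10-minute window are
    # illustrative stand-ins.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import window

    spark = SparkSession.builder.appName("structured-streaming-demo").getOrCreate()

    # Read an unbounded stream as if it were a table; includeTimestamp
    # attaches an event-time column to each record.
    lines = (spark.readStream
             .format("socket")
             .option("host", "localhost")
             .option("port", 9999)
             .option("includeTimestamp", "true")
             .load())

    # Aggregate by event time: counts per 10-minute window, which works
    # even for records that arrive out of order.
    counts = lines.groupBy(window(lines.timestamp, "10 minutes")).count()

    # Serve the running result as an in-memory table that SQL can query.
    query = (counts.writeStream
             .outputMode("complete")
             .format("memory")
             .queryName("windowed_counts")
             .start())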
Watch Matei Zaharia’s full keynote presentation below to learn about other new 2.0 features, and see a live demonstration of structured streaming.
And, watch this space for more blogs on ingenious and creative ways to hack Mesos for large-scale tasks.
MesosCon Europe 2016 offers you the chance to learn from and collaborate with the leaders, developers and users of Apache Mesos. Don’t miss your chance to attend! Register by July 15 to save $100.
Apache, Apache Mesos, and Mesos are either registered trademarks or trademarks of the Apache Software Foundation (ASF) in the United States and/or other countries. MesosCon is run in partnership with the ASF.
As part of the Docker Captains program, I was given a preview of Docker 1.12, including the new Swarm integration, which is Docker’s native clustering/orchestration solution (also known as SwarmKit, but that’s really the repo/library name). And it’s certainly a big change. In this post, I’ll try to highlight the changes and why they’re important.
The first and most obvious change is the move into Docker core; starting a Docker Swarm is now as simple as running docker swarm init on the manager node and docker swarm join $IP:PORT on the worker nodes, where $IP:PORT is the address of the leader. You can then use the top-level node command to get more information on the swarm, e.g. docker node ls to list the swarm’s members and their roles.
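For a programmatic view of the same bootstrap, here is a small sketch assuming the Docker SDK for Python (docker-py) and an engine new enough for swarm mode; the advertise address is a placeholder:

    # A sketch of the same swarm bootstrap via the Docker SDK for Python
    # (pip install docker). Assumes a local engine with swarm-mode
    # support; the advertise address is a placeholder.
    import docker

    client = docker.from_env()

    # Equivalent of `docker swarm init` on the manager node.
    client.swarm.init(advertise_addr="192.168.1.10")

    # Rough equivalent of `docker node ls`: inspect the swarm's nodes.
    for node in client.nodes.list():
        attrs = node.attrs
        print(attrs["Description"]["Hostname"], attrs["Spec"]["Role"])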
GitHub released charts last week that tell a story about the heartbeat of a few open source projects, giving insights into the activity, productivity, and collaboration of software development.
Salted throughout the GitHub website are analytics, and there is an application programming interface (API) that data-driven enterprises can use to create their own analytics and measure the progress and health of any public open source project important to them. A dashboard displaying a project’s heartbeat could be built with the API.
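As a starting point, here is a minimal sketch of pulling activity data from the GitHub REST API with Python’s requests library; the repository is an arbitrary example, and a real dashboard would add authentication, error handling, and pagination:

    # A minimal sketch of pulling project-activity data from the GitHub
    # REST API. The repository is an arbitrary example; note the stats
    # endpoints can return 202 while GitHub computes results.
    import requests

    OWNER, REPO = "torvalds", "linux"  # example repository
    url = "https://api.github.com/repos/{}/{}/stats/commit_activity".format(OWNER, REPO)

    resp = requests.get(url)
    resp.raise_for_status()

    # The endpoint returns 52 weeks of commit counts, oldest first.
    for week in resp.json()[-4:]:  # last four weeks
        print("week starting {}: {} commits".format(week["week"], week["total"]))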
Just a week after Nokia (NYSE:NOK) announced an agreement to help China Mobile move to a more flexible cloud network infrastructure, Nokia said it is teaming up with Intel to make its carrier-grade AirFrame Data Center Solution hardware available for an Open Platform for Network Functions Virtualization (OPNFV) Lab.
The move means the hardware can be used by the OPNFV collaborative open source community to accelerate the delivery of cloud-enabled networks and applications.
We read about virus infections (new ones come out all the time) and are affected in one way or another by spam mail on a daily basis. While there are plenty of free and commercial solutions (available as…