You want content? We’ve got your content right here! – Linux Foundation
ONE Summit Agenda is now live!
This post originally appeared on LF Networking's blog. The author, Heather Kirksey, is VP Community & Ecosystem. ONE Summit is the Linux Foundation Networking event focused on the networking and automation ecosystem that is transforming public and private sector innovation across 5G, the network edge, and cloud native solutions. Our family of open source projects addresses every layer of infrastructure needs, from the user edge to the cloud/core. Attend ONE Summit to get the scoop on hot topics for 2022!
Today LF Networking announced our schedule for ONE Summit, and I have to say that I'm extraordinarily excited. I'm excited because it means we're growing closer to returning to meeting in person, but more importantly I was blown away by the quality of our speaking submissions. Before I talk more about the schedule itself, I want to say that this quality is all down to you: you sent us a large number of thoughtful, interesting, and innovative ideas; you did the work that underpins those ideas; and you did the work to write them up and submit them. The insight, lived experience, and future-looking thought processes humbled me with their breadth and depth. You reminded me why I love this ecosystem and the creativity within open source. We've all been through a tough couple of years, but we're still here innovating, deploying, and doing work that improves the world. A huge shout out to everyone across every company, community, and project that made the job of choosing the final roster so difficult.
Now onto the content itself. As you’ve probably heard, we’ve got 5 tracks: Industry 4.0, Security and Privacy, The New Networking Stack, Operationalizing Deployment, and Emerging Technologies and Business Models:
“Industry 4.0” looks at the confluence of edge and networking technologies that enable technology to uniquely improve our interactions with the physical world, whether that’s agriculture, manufacturing, robotics, or our homes. We’ve got a great line-up focused both on use cases and the technologies that enable them.
"Security and Privacy" covers the issues we struggle with most, both as global citizens and as an ecosystem. Far from being an afterthought, security is front and center as we look at zero-trust and vulnerability management and ask which technologies and policies best serve enterprises and consumers.
Technology is always front and center for open source groups and our “New Networking Stack” track dives deep into the technologies and components we will all use as we build the infrastructure of the future. In this track we have a number of experts sharing their best practices, as well as ideas for forward-looking usages.
In our "Operationalizing Deployment" track, we learn from the lived experience of those taking ideas and turning them into workable reality. We ask questions like: How do you bridge cultural divides? How do you introduce and truly leverage DevOps? How do you integrate compliance and reference architectures? How do you not only deploy but bring in Operations? How do you automate, and how do you use tools to accomplish digital transformation in our ecosystem(s)?
Not content to focus only on today's challenges and successes, we also look ahead with "Emerging Technologies and Business Models." Intent, Metaverse, MASE, scaling today's innovation into tomorrow's operations, new takes on APIs – these are the concepts that will shape us in the next 5-10 years, and we'll talk about how we can start approaching and understanding them.
Every talk that made it into this program has unique and valuable insight, and I’m so proud to be part of the communities that proposed them. I’m also honored to have worked with one of the best Programming Committees in open source events ever. These folks took so much time and care to provide both quantitative and qualitative input that helped shape this agenda. Please be sure to thank them for their time because they worked hard to take the heart of this event to the next level. If you want to be in the room and in the hallway with these great speakers, there is only ONE place to be. Early bird registration ends soon, so don’t miss out and register now!
And please don't forget to sponsor. Creating a space for all this content does cost money, and we can't do it without our wonderful sponsors. If you're still on the fence, please consider how amazing these sessions are and the attendee conversations they will spark. We may not be the biggest conference out there, but we are the most focused on decision makers, end users, and the supply chains that enable them. You won't find a more engaged and thoughtful audience anywhere else.
Open 3D Foundation (O3DF) Announces Keynote Lineup for O3DCon—Online and In-Person in Austin, October...
Keynotes, workshops and sessions will explore innovations in open source 3D development and use of Open 3D Engine (O3DE) for gaming, entertainment, metaverse, AI/ML,...
2 tools to manage infrastructure sprawl with Red Hat Enterprise Linux (RHEL)
Trim the number of Linux distributions you support, handle in-place RHEL updates, and simplify your overall Linux infrastructure with the Convert2RHEL and Leapp tools.
Read...
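To give a flavor of how the two tools are driven, here is a minimal sketch, assuming a CentOS-style host to convert and a RHEL host to upgrade in place; see the article and Red Hat documentation for the full procedure:

    # Convert a supported CentOS-like system to RHEL in place
    sudo convert2rhel

    # In-place upgrade of RHEL to the next major release with Leapp:
    sudo leapp preupgrade   # produce a report of potential upgrade blockers
    sudo leapp upgrade      # perform the upgrade, then reboot the system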
LFPH Tackles the Next Frontier in Open Source Health Technology: The Rise of Digital...
This post originally appeared on the LF Public Health blog. The author, Jim St. Clair, is the Executive Director. With the Digital Twin Consortium, academic medical centers, and other LF projects, Linux Foundation Public Health addresses open software for next-generation modeling.
Among the many challenges in our global healthcare delivery landscape, digital health plays an increasingly important role on almost a daily basis, from personal medical devices, to wearables, to new clinical technology and data exchanges. Beyond direct patient care, digital health also applies to diagnostics, drug effectiveness, and treatment delivery. These use cases are being driven by rapid growth in data modeling, artificial intelligence (AI)/machine learning (ML), and data visualization. Given the rapid digitalization of healthcare delivery, emerging digital twin technology is considered the next system that will advance further efforts in medical discoveries and improve clinical and public health outcomes.
What is a Digital Twin?
Put simply, a digital twin is a digital replica or "twin" of a physical object, process, or service. It is a virtual model (a compilation of data plus algorithms) that can dynamically pair the physical and digital worlds. The ultimate goal for digital twins, such as in manufacturing, is to iteratively model, test, and optimize a physical object in the virtual space until the model meets expected performance, at which point it is ready to be built or enhanced (if already built) in the physical world. To create this pairing between the digital world and the real world, a digital twin leverages real-time data, such as from smart sensors, coupled with analytics and often artificial intelligence (AI), in order to detect and prevent system failures, improve system performance, and explore innovative uses or functional models.
As mentioned, developments in smart sensor technologies and wireless networks have pushed forward the applications of the Internet of Things (IoT) and contributed to the practical applications of digital twin technology. Thanks to IoT, cloud computing, and real-time analytics, digital twins can now be created to collect much more real-world and real-time data from a wide range of sources, and thus can establish and maintain more comprehensive simulations of physical entities, their functionality, and the changes they undergo over time.
Digital Twins in Healthcare
While the application of digital twins in healthcare is still very new, there are three general categories for their use: digital twins of a patient/person or a body system; digital twins of an organ or a smaller unit; and digital twins of an organization.
Digital twins can simulate the whole human body, as well as a particular body system or body function (e.g., the digestive system). One example of this kind of patient-sized digital twin is the University of Miami’s MLBox system, designed for the measurement of a patient’s “biological, clinical, behavioral and environmental data” to design personalized treatments for sleep issues.
Digital twins can also simulate one body organ, part of an organ or system, like the heart, and can even model subcellular (organelle/sub-organelle) functions or functions at the molecular level of interest within a cell. Dassault Systèmes’ Living Heart Project is an example of this kind of digital twin, which is designed to simulate the human heart’s reaction to implantation of cardiovascular devices.
Additionally, healthcare institutions (e.g., a hospital) can have their corresponding digital twins, such as Singapore General Hospital. This kind of simulation can be useful when determining environmental risks within institutions, such as the risks of infectious disease transmission.
The “Heart” of Health Digital Twins is Open Source – and the LF
While digital twins represent a complex and sophisticated new digital model, the building blocks of this technology—like all other software foundations—are best supported by an open-source development and governance model. The Linux Foundation sustains the nexus of open source development that underpins digital twin technology:
Linux Foundation Public Health (LFPH) is dedicated to advancing open source software development for digital health applications across the globe. Together with its members, LFPH is developing projects that address public health data infrastructure, improving health equity, advancing cybersecurity, and building multi-stakeholder collaboration for patient engagement and health information exchange.
The LF AI and Data Foundation is working to build and support an open artificial intelligence (AI) and data community, and drive open source innovation in the AI and data domains by enabling collaboration and the creation of new opportunities for all the members of the community.
LF Edge aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.
The Open 3D Foundation includes many collaborators working to make an open source, fully featured, high-fidelity, real-time 3D engine for building games and simulations, such as digital twins, available to every industry. "The Open 3D Foundation, along with its partners and community, is helping advance 3D digital twin technology by providing an open source implementation that is completely dynamic, with no need to preload the media," said General Manager Royal O'Brien. "This can ensure the smallest customizable footprint possible for any device platform to meet any industry needs."
Additionally, LFPH has established a joint membership with the Digital Twin Consortium, focused on healthcare and life sciences. "Artificial intelligence (AI), edge computing, and digital twins represent the next generation in data transformation and patient engagement," said Jim St. Clair, Executive Director. "Developing a collaborative relationship with the Digital Twin Consortium will greatly advance the joint efforts of model development and supporting open source components to advance adoption in healthcare through multi-stakeholder collaboration." LFPH looks forward to supporting, innovating, and driving forward an open-source vision in the critical and growing area of digital twins for healthcare.
How to tune the Linux kernel with the /proc filesystem
...
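As a quick illustration of the technique the article covers, kernel parameters exposed under /proc/sys can be inspected and changed at runtime; a minimal sketch (ip_forward is just a commonly tuned example parameter):

    # Read a kernel parameter straight from /proc
    cat /proc/sys/net/ipv4/ip_forward

    # Change it for the running kernel (not persistent across reboots)
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

    # The sysctl equivalent; persist it via a file in /etc/sysctl.d/
    sudo sysctl -w net.ipv4.ip_forward=1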
Elevate Your Organization’s Open Source Strategy – Linux Foundation
The role of software, specifically open source software, is more influential than ever and drives today’s innovation. Maintaining and growing future innovation depends on the open source community. Enterprises that understand this are driving transformation and rising to the challenges by boosting their collaboration across industries, understanding how to support their open source developers, and contributing to the open source community.
They realize that success depends on a cohesive, dedicated, and passionate open source community, from hundreds to thousands of individuals. Their collaboration is key to achieving the project’s goals. It can be challenging to manage all aspects of an open source project considering all the different parts that drive it. For example:
Project’s scope and goals
Participating members, maintainers, and collaborators
Management and governance
Legal guidelines and procedures
IT services
Source control, CI/CD, distribution, and cloud providers
Communication channels and social media
The Linux Foundation’s LFX provides various tools to help open source communities design and adopt a successful project strategy considering all moving parts. So how do they do it? Let’s explore that using the Hyperledger project as an example.
1. Understand your project’s participation
Through the LFX Individual Dashboard, participants can register the identity they use to contribute code to GitHub and Gerrit (since the Hyperledger project uses both). The tool then uses that identity to connect users' contributions, affiliations, memberships, training, certifications, earned badges, and general information.
With this information, other LFX tools generate charts that help the community visualize participation in GitHub and Gerrit across the different Hyperledger repositories, along with detailed contribution metrics, code participation, and issue participation.
The LFX Organization Dashboard is a convenient tool to help managers and organizations manage their project memberships, discover similar projects to join, and understand the team’s engagement in the community. In detail, it provides information on:
Code contributions
Committee members
Event speakers and attendees
Training and certification
Project enrollments
It is vital to have the project's member and participant identities organized in order to better understand how their work makes a difference in the project and how their participation interacts with others' toward the project's goals.
2. Manage your project’s processes
LFX Project Control Center offers program managers a one-stop portal to organize their project participation and IT services, with quick access to other LFX tools.
Project managers can also connect:
Their project’s source control
Issue tracking tool
Distribution service
Cloud provider
Mailing lists
Meeting management
Wiki and hosted domains
For example, Hyperledger can view all related organizations under their Hyperledger Foundation umbrella, analyze each participant project, and connect services like GitHub, Jira, Confluence, and their communication channels like Groups.io and Twitter accounts.
Managing all the project’s aspects in one place makes it easier for managers to visualize their project scope and better understand how all their services impact the project’s performance.
3. Reach outside and get your project in the spotlight
Social and earned media are vital to ensure your project reaches the ears of its consumers. In addition, it is essential to have good visibility into your project’s influence in the Open Source world and where it is making the best impact.
LFX's Insights Social Media Metrics provides high-level metrics on a project's social media accounts, such as:
Twitter followers and following information
Tweets and retweet breakdown
Trending tweets
Hashtag breakdown
Contributor and user mentions
In the case of Hyperledger, we have an overall view of their tweet and retweet breakdown. In addition, we can also see how tweets by Bitcoin News are making an impression on the interested communities.
Insights helps you analyze how your project lands in other regions and reaches diverse audiences by language, so you can adjust your communication and marketing strategies toward the sources that open source participants rely on for the latest information on how the community contributes and engages. For example, tweets written in English, Japanese, and Spanish by Hyperledger contributors are visible in an overall languages chart, with direct and indirect impressions calculated.
The bottom line
A coherent open source project strategy is a crucial driver of how enterprises manage their open source programs across their organization and industry. LFX is one of the tools that make enterprise open source programs successful. It is an exclusive benefit for Linux Foundation members and projects. If your organization and project would like to join us, learn more about membership or hosting your project.
Display more user-friendly Linux man pages with the tldr command
The tldr command provides a short list and examples of the most common ways to use Linux commands.
Read More at Enable Sysadmin
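As a quick illustration, assuming a tldr client is already installed (several implementations exist across package managers), a typical session looks something like this:

    # Concise, example-driven help instead of the full man page
    tldr tar
    tldr rsync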
Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course –...
The original article appeared on the OpenSSF blog. The author, Harimohan Rajamohanan, is a Solution Architect and Full Stack Developer with Wipro Limited. Learn more about the Linux Foundation’s Developing Secure Software (LFD121) course.
All software is under continuous attack today, so software architects and developers should focus on practical steps to improve information security. There are plenty of materials available online that talk about various aspects of secure development practices, but they are scattered across articles and books. Recently, I came across a course developed by the Open Source Security Foundation (OpenSSF), part of the Linux Foundation, that is geared toward software developers, DevOps professionals, web application developers, and others interested in learning the best practices of secure software development. My experience taking the Developing Secure Software (LFD121) course was positive, and I immediately started applying what I learned in my work as a software architect and developer.
“A useful trick for creating secure systems is to think like an attacker before you write the code or make a change to the code” – DEVELOPING SECURE SOFTWARE (LFD121)
My earlier understanding about software security was primarily focused on the authentication and the authorization of users. In this context the secure coding practices I was following were limited to:
No unauthorized read
No unauthorized modification
Ability to prove someone did something
Auditing and logging
It is not enough to assume software is secure simply because a strong authentication and authorization mechanism is present. Almost all application development today depends on open source software, and it is important that developers verify the security of the open source chain of contributors and its dependencies. Recent vulnerability disclosures and supply chain attacks were an eye opener for me about the potential for vulnerabilities in open source software. The natural focus of the majority of developers is to get the business logic working and deliver the code without any functional bugs.
The course gave me a comprehensive outlook on the secure development practices one should follow to defend from the kind of attacks that happen in modern day software.
What does risk management really mean?
The course offers detailed practical advice on considering security as part of the requirements of a system. Having been part of various global system integrators for over a decade, I was tasked with developing application software for my customers. The functional requirements in such projects were typically written down, but they covered only a few aspects of security, namely user authentication and authorization. Documenting the security requirements in detail helps developers and future maintainers of the software understand what the system is trying to accomplish for security.
Key takeaways on risk assessment:
Analyze security basics including risk management, the “CIA” triad, and requirements
Apply secure design principles such as least privilege, complete mediation, and input validation
Supply chain evaluation tips on how to reuse software with security in mind, including selecting, downloading, installing, and updating such software
Document the high-level security requirements in one place
Secure design principles while designing a software solution
Design principles are guides based on experience and practice. Software will generally be more secure if you apply secure design principles. This course covers a broad spectrum of design principles in terms of the components you trust and the components you do not trust. The key principles I learned from the course that guide my present-day software design are:
The user and program should operate using the least privilege. This limits the damage from error or attack.
Every data access or manipulation attempt should be verified and authorized using a mechanism that cannot be bypassed.
Access to systems should be based on more than one condition. How do you prove that the authenticated user is who they claim to be? Software should support two-factor authentication.
The user interface should be designed for ease of use to make sure users routinely and automatically use the protection mechanisms correctly.
It is important to understand what kind of attackers you expect to counter.
A few examples on how I applied the secure design principles in my solution designs:
The solutions I build often use a database. I use the SQL GRANT command to limit the privileges the program gets; in particular, the DELETE privilege is not given to any program. Instead, I have implemented a soft delete mechanism in the program that sets the column "active = false" in the table for delete use cases (see the sketch after this list).
The recent software designs I have been doing are based on microservice architecture where there is a clear separation between the GUI and backend services. Each part of the overall solution is authenticated separately. This may minimize the attack surface.
Client-side input validation is limited to catching accidental mistakes; the actual input validation happens on the server side. The API endpoints validate all inputs thoroughly before processing them. For instance, a PUT API not only validates the resource modification inputs, but also makes sure that the resource is present in the database before proceeding with the update.
Updates are allowed only if the user consuming the API is authorized to do it.
Databases are not directly accessible for use by a client application.
All the secrets like cryptographic keys and passwords are maintained outside the program in a secure vault. This is mainly to avoid secrets in source code going into version control systems.
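To make the least-privilege and soft-delete items above concrete, here is a minimal sketch using psql against a hypothetical PostgreSQL database (the database, table, role, and column names are illustrative, not from the course):

    # Grant the application role only what it needs: no DELETE privilege
    psql -d appdb -c "GRANT SELECT, INSERT, UPDATE ON orders TO app_role;"

    # "Soft delete": mark the row inactive instead of removing it
    psql -d appdb -c "UPDATE orders SET active = false WHERE id = 42;"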
I have started to look for the OpenSSF Best Practices badge when selecting open source software and libraries for my programs. I also look at the security posture of open source software by checking its OpenSSF Scorecard score.
Another practice I follow while using open source software is to check whether the software is maintained. Are there recent releases or announcements from the community?
Secure coding practices
In my opinion, this course covers almost all aspects of secure coding practices that a developer should focus on. The key focus areas include:
Input validations
How to validate numbers
Key issues with text, including Unicode and locales
Usage of regular expressions to validate text input
Importance of minimizing the attack surface
Secure defaults and secure startup
For example, apply API input validation on IDs to make sure that records belonging to those IDs exist in the database; verifying that the object in a modify request actually exists before acting on it reduces the attack surface. A minimal sketch of this kind of validation follows.
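The sketch below validates a numeric ID in shell terms before any further processing; the script shape and the positive-integer rule are illustrative assumptions, not from the course:

    #!/bin/bash
    # Reject anything that is not a positive integer before using it
    id="$1"
    if [[ ! "$id" =~ ^[0-9]+$ ]]; then
        echo "invalid id: $id" >&2
        exit 1
    fi

    # Only after validation: check the record actually exists
    # (e.g., query the application database) before modifying anything
    echo "id $id passed validation"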
Process data securely
Importance of treating untrusted data as dangerous
Avoid default and hardcoded credentials
Understand the memory safety problems such as out-of-bounds reads or writes, double-free, and use-after-free
Avoid undefined behavior
Call out to other programs
Securely call other programs
How to counter injection attacks such as SQL injection and OS command injection (see the sketch after this list)
Securely handle file names and file paths
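To illustrate the OS command injection item above in shell terms (the variable name is illustrative):

    # Unsafe: the shell re-parses untrusted input, so an attacker can
    # append extra commands with a value like "; rm -rf ~"
    # eval "ls $user_supplied_dir"

    # Safer: quote the value and end option parsing with --
    ls -- "$user_supplied_dir"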
Send output
Securely send output
How to counter cross-site scripting (XSS) attacks
Use HTTP hardening headers, including Content Security Policy (CSP); see the check after this list
Prevent common output-related vulnerabilities in web applications
How to securely format strings and templates
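As a quick way to check which hardening headers a site already sends, here is a minimal command-line probe (example.com is a placeholder):

    # Fetch only the response headers and filter for common hardening headers
    curl -sI https://example.com | grep -iE \
        'content-security-policy|strict-transport-security|x-content-type-options'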
Conclusion
“Security is a process – a journey – and not a simple endpoint” – DEVELOPING SECURE SOFTWARE (LFD121)
This course gives practical guidance on developing secure software: considering security requirements, applying secure design principles, countering common implementation mistakes, using tools to detect problems before you ship the code, and promptly handling vulnerability reports. I strongly recommend this course and the certification to all developers out there.
About the author
Harimohan Rajamohanan is a Solution Architect and Full Stack Developer in the Open Source Program Office, Lab45, Wipro Limited. He is an open source software enthusiast and has worked in areas such as application modernization, digital transformation, and cloud native computing. His major focus areas are software supply chain security and observability.
How to configure a hostname on a Linux system
Make it easier to access your Linux computer by giving it a human-friendly name that's simpler to use than an IP address.
Read More at...
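For reference, on systemd-based distributions the change described above is typically a one-liner (the hostname is just an example):

    # Show the current hostname settings
    hostnamectl status

    # Set a new persistent hostname (also updates /etc/hostname)
    sudo hostnamectl set-hostname webserver01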