
applications - search results


LFPH Tackles the Next Frontier in Open Source Health Technology: The Rise of Digital...

This post originally appeared on the LF Public Health blog. The author, Jim St. Clair, is the Executive Director.

With the Digital Twin Consortium, academic medical centers, and other LF projects, Linux Foundation Public Health addresses open software for next-generation modeling.

Among the many challenges in our global healthcare delivery landscape, digital health plays an increasingly important role almost daily, from personal medical devices, to wearables, to new clinical technology and data exchanges. Beyond direct patient care, digital health also applies to diagnostics, drug effectiveness, and treatment delivery. These use cases are being driven by rapid growth in data modeling, artificial intelligence (AI)/machine learning (ML), and data visualization. Given the rapid digitalization of healthcare delivery, emerging digital twin technology is considered the next system that will advance medical discoveries and improve clinical and public health outcomes.

What is a Digital Twin?

Put simply, a digital twin is a digital replica or “twin” of a physical object, process, or service. It is a virtual model (a compilation of data plus algorithms) that can dynamically pair the physical and digital worlds. The ultimate goal for digital twins, such as in manufacturing, is to iteratively model, test, and optimize a physical object in the virtual space until that model meets expected performance, at which point it is ready to be built or enhanced (if already built) in the physical world. To create a pairing between the digital world and the real world, a digital twin leverages real-time data, such as smart sensor readings, coupled with analytics and often artificial intelligence (AI) to detect and prevent system failures, improve system performance, and explore innovative uses or functional models.
As mentioned, developments in smart sensor technologies and wireless networks have pushed forward applications of the Internet of Things (IoT) and contributed to the practical application of digital twin technology. Thanks to IoT, cloud computing, and real-time analytics, digital twins can now collect much more real-world and real-time data from a wide range of sources, and thus can establish and maintain more comprehensive simulations of physical entities, their functionality, and the changes they undergo over time.

Digital Twins in Healthcare

While the application of digital twins in healthcare is still very new, there are three general categories for their use: digital twins of a patient/person or a body system; digital twins of an organ or a smaller unit; and digital twins of an organization. Digital twins can simulate the whole human body, as well as a particular body system or body function (e.g., the digestive system). One example of this kind of patient-scale digital twin is the University of Miami’s MLBox system, designed to measure a patient’s “biological, clinical, behavioral and environmental data” in order to design personalized treatments for sleep issues. Digital twins can also simulate one body organ, part of an organ or system, like the heart, and can even model subcellular (organelle/sub-organelle) functions or functions at the molecular level within a cell. Dassault Systèmes’ Living Heart Project is an example of this kind of digital twin, designed to simulate the human heart’s reaction to the implantation of cardiovascular devices. Additionally, healthcare institutions (e.g., a hospital) can have corresponding digital twins, such as Singapore General Hospital’s. This kind of simulation can be useful when determining environmental risks within institutions, such as the risk of infectious disease transmission.
The “Heart” of Health Digital Twins is Open Source – and the LF

While digital twins represent a complex and sophisticated new digital model, the building blocks of this technology—like all other software foundations—are best supported by an open-source development and governance model. The Linux Foundation sustains the nexus of open source development that underpins digital twin technology:

Linux Foundation Public Health (LFPH) is dedicated to advancing open source software development for digital health applications across the globe. Together with its members, LFPH is developing projects that address public health data infrastructure, improve health equity, advance cybersecurity, and build multi-stakeholder collaboration for patient engagement and health information exchange.

The LF AI and Data Foundation is working to build and support an open artificial intelligence (AI) and data community, and to drive open source innovation in the AI and data domains by enabling collaboration and creating new opportunities for all members of the community.

LF Edge aims to establish an open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system. By bringing together industry leaders, LF Edge will create a common framework for hardware and software standards and best practices critical to sustaining current and future generations of IoT and edge devices.

The Open 3D Foundation brings together many collaborators working to make an open source, fully featured, high-fidelity, real-time 3D engine for building games and simulations, such as digital twins, available to every industry.
“The Open 3D Foundation, along with its partners and community, is helping advance 3D digital twin technology by providing an open source implementation that is completely dynamic, with no need to preload the media,” said General Manager Royal O’Brien. “This can ensure the smallest customizable footprint possible for any device platform to meet any industry needs.”

Additionally, LFPH has established a joint membership with the Digital Twin Consortium, focused on healthcare and life sciences. “Artificial intelligence (AI), edge computing and digital twins represent the next generation in data transformation and patient engagement,” said Jim St. Clair, Executive Director. “Developing a collaborative relationship with the Digital Twin Consortium will greatly advance the joint efforts of model development and supporting open source components to advance adoption in healthcare through multi-stakeholder collaboration.”

LFPH looks forward to supporting, innovating, and driving forward an open-source vision in the critical and growing area of digital twins for healthcare.

Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course

The author shares his experience learning about secure coding with the Linux Foundation's Developing Secure Software course. The post Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course appeared first on Linux Foundation.

Secure Coding Practice – A Developer’s Learning Experience of Developing Secure Software Course –...

The original article appeared on the OpenSSF blog. The author, Harimohan Rajamohanan, is a Solution Architect and Full Stack Developer with Wipro Limited. Learn more about the Linux Foundation’s Developing Secure Software (LFD121) course.

All software is under continuous attack today, so software architects and developers should focus on practical steps to improve information security. Plenty of material about secure development practices is available online, but it is scattered across various articles and books. Recently, I came across a course developed by the Open Source Security Foundation (OpenSSF), a part of the Linux Foundation, that is geared toward software developers, DevOps professionals, web application developers, and others interested in learning the best practices of secure software development. My experience taking the Developing Secure Software (LFD121) course was positive, and I immediately started applying what I learned in my work as a software architect and developer.

“A useful trick for creating secure systems is to think like an attacker before you write the code or make a change to the code” – Developing Secure Software (LFD121)

My earlier understanding of software security focused primarily on the authentication and authorization of users. In that context, the secure coding practices I followed were limited to:

- No unauthorized reads
- No unauthorized modifications
- The ability to prove someone did something
- Auditing and logging

It is not enough, however, to assume software is secure just because strong authentication and authorization mechanisms are in place. Almost all application development today depends on open source software, and it is important that developers verify the security of the open source chain of contributors and its dependencies.
Recent vulnerability disclosures and supply chain attacks were an eye-opener for me about the potential for vulnerabilities in open source software. The natural focus of the majority of developers is to get the business logic working and deliver the code without any functional bugs. The course gave me a comprehensive outlook on the secure development practices one should follow to defend against the kinds of attacks that happen to modern-day software.

What does risk management really mean?

The course offers detailed practical advice on considering security as part of the requirements of a system. Having been part of various global system integrators for over a decade, I was tasked with developing application software for my customers. The functional requirements in such projects were typically written down but covered only a few aspects of security, such as user authentication and authorization. Documenting the security requirements in detail helps developers and future maintainers of the software understand what the system is trying to accomplish for security.

Key takeaways on risk assessment:

- Analyze security basics, including risk management, the “CIA” triad, and requirements
- Apply secure design principles such as least privilege, complete mediation, and input validation
- Evaluate the supply chain: reuse software with security in mind, including selecting, downloading, installing, and updating it
- Document the high-level security requirements in one place

Secure design principles while designing a software solution

Design principles are guides based on experience and practice. Software will generally be more secure if you apply these secure design principles. The course covers a broad spectrum of design principles in terms of the components you trust and the components you do not.
The key principles I learned from the course that guide my present-day software designs are:

- The user and the program should operate with the least privilege. This limits the damage from error or attack.
- Every data access or manipulation attempt should be verified and authorized using a mechanism that cannot be bypassed.
- Access to systems should be based on more than one condition. How do you prove the identity of the authenticated user is who they claim to be? Software should support two-factor authentication.
- The user interface should be designed for ease of use, so that users routinely and automatically use the protection mechanisms correctly.
- Understand what kind of attackers you expect to counter.

A few examples of how I applied the secure design principles in my solution designs:

- The solutions I build often use a database. I use the SQL GRANT command to limit the privileges the program gets. In particular, the DELETE privilege is not given to any program. Instead, I implement a soft-delete mechanism that sets the column “active = false” in the table for delete use cases.
- My recent software designs are based on a microservice architecture, with a clear separation between the GUI and the backend services. Each part of the overall solution is authenticated separately, which can minimize the attack surface.
- Client-side input validation is limited to countering accidental mistakes; the actual input validation happens on the server side. The API endpoints validate all inputs thoroughly before processing them. For instance, a PUT API not only validates the resource modification inputs but also makes sure the resource is present in the database before proceeding with the update.
- Updates are allowed only if the user consuming the API is authorized to perform them.
- Databases are not directly accessible to client applications.
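The soft-delete pattern described above can be sketched in a few lines. This is a hypothetical illustration using Python's built-in sqlite3 (the table and column names are made up, and SQLite has no GRANT command, so on a real database server the DELETE privilege would additionally be withheld from the application account):

```python
import sqlite3

# Illustrative schema: an "active" flag instead of physical deletion.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)"
)
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
conn.execute("INSERT INTO users (name) VALUES (?)", ("bob",))

def soft_delete(conn, user_id):
    # Instead of DELETE, mark the row inactive: history is preserved,
    # and the program never needs the DELETE privilege at all.
    conn.execute("UPDATE users SET active = 0 WHERE id = ?", (user_id,))

def active_users(conn):
    # Read paths filter on the flag, so "deleted" rows never surface.
    rows = conn.execute("SELECT name FROM users WHERE active = 1")
    return [name for (name,) in rows]

soft_delete(conn, 2)
print(active_users(conn))  # ['alice']
```

The row for "bob" still exists in the table; only the read path hides it, which also makes accidental or malicious mass deletion far less damaging.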
All secrets, such as cryptographic keys and passwords, are maintained outside the program in a secure vault, mainly to avoid secrets in source code ending up in version control systems. I have started looking for the OpenSSF Best Practices badge when selecting open source software and libraries for my programs, and I check the security posture of open source software via its OpenSSF Scorecard score. Another practice I follow is to check whether the software is actively maintained: are there recent releases or announcements from the community?

Secure coding practices

In my opinion, this course covers almost all aspects of secure coding practices that a developer should focus on. The key focus areas include:

Validate input
- How to validate numbers
- Key issues with text, including Unicode and locales
- Using regular expressions to validate text input
- The importance of minimizing attack surfaces
- Secure defaults and secure startup. For example, apply API input validation on IDs to make sure that records belonging to those IDs exist in the database; this reduces the attack surface. Likewise, make sure the object named in an object-modify request exists in the database before acting on it.

Process data securely
- Treat untrusted data as dangerous
- Avoid default and hardcoded credentials
- Understand memory safety problems such as out-of-bounds reads or writes, double-free, and use-after-free
- Avoid undefined behavior

Call out to other programs
- Securely call other programs
- How to counter injection attacks such as SQL injection and OS command injection
- Securely handle file names and file paths

Send output
- Securely send output
- How to counter cross-site scripting (XSS) attacks
- Use HTTP hardening headers, including Content Security Policy (CSP)
- Prevent common output-related vulnerabilities in web applications
- How to securely format strings and templates
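To make the validation, injection, and output advice concrete, here is a small hypothetical sketch in Python (the field names and patterns are my own, not from the course): an allowlist regular expression for untrusted input, a parameterized query to counter SQL injection, and HTML escaping to counter XSS:

```python
import html
import re
import sqlite3

# Allowlist validation: accept only what is known-good, reject everything else.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_username(value):
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value

def find_user(conn, username):
    # Parameterized query: the driver keeps data separate from SQL text,
    # so payloads like "x' OR '1'='1" are treated as plain data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

def render_greeting(name):
    # Escape before embedding in HTML to counter cross-site scripting.
    return "<p>Hello, {}</p>".format(html.escape(name))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

print(find_user(conn, validate_username("alice")))   # (1, 'alice')
print(render_greeting("<script>alert(1)</script>"))  # script tags come out escaped
```

The same shape applies regardless of framework: validate at the trust boundary, pass data to other systems through mechanisms that cannot reinterpret it as code, and escape on the way out.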
Conclusion

“Security is a process – a journey – and not a simple endpoint” – Developing Secure Software (LFD121)

This course offers practical guidance on developing secure software: gathering security requirements, applying secure design principles, countering common implementation mistakes, using tools to detect problems before you ship the code, and promptly handling vulnerability reports. I strongly recommend this course and the certification to all developers out there.

About the author

Harimohan Rajamohanan is a Solution Architect and Full Stack Developer in the Open Source Program Office, Lab45, Wipro Limited. He is an open source software enthusiast and has worked in areas such as application modernization, digital transformation, and cloud native computing. His major focus areas are software supply chain security and observability.

Base64 encoding: What sysadmins need to know

By understanding Base64 encoding, you can apply it to Kubernetes secrets, OpenSSL, email applications, and other common situations. Read More at Enable Sysadmin
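One point worth illustrating up front: Base64 is a reversible encoding, not encryption. Python's standard library shows the round trip (the secret value here is made up), and this is the same transformation applied to values in a Kubernetes Secret:

```python
import base64

# Encode a value the way Kubernetes stores Secret data (illustrative value).
secret = b"admin"
encoded = base64.b64encode(secret)
print(encoded)                      # b'YWRtaW4='

# Decoding recovers the original bytes. No key is involved, so Base64
# provides transport-safe encoding, not confidentiality.
decoded = base64.b64decode(encoded)
print(decoded)                      # b'admin'
```

Anyone who can read the encoded value can decode it, which is why Base64-encoded secrets still need access controls or encryption at rest.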

LFX’22 Mentorship Experience with Open Horizon

Ruchi Pakhle shares her experience in the Linux Foundation's mentorship program working with Open Horizon. The post LFX’22 Mentorship Experience with Open Horizon appeared first on Linux Foundation.

Public-private partnerships in health: The journey ahead for open source

The past three years have redefined the practice and management of public health on a global scale. What will we need in order to support innovation over the next three years? The post Public-private partnerships in health: The journey ahead for open source appeared first on Linux Foundation.

5 things sysadmins should know about software development

Advances in edge computing, machine learning, and intelligent applications make sysadmins more important than ever in the software development process. Read More at Enable Sysadmin

The Open 3D Foundation Welcomes Epic Games as a Premier Member to Unleash the...

Interoperability and portability of real-time 3D assets and tools deliver unparalleled flexibility, as the Open 3D community celebrates its first birthday SAN FRANCISCO – July...

Top 5 Reasons to be Excited about Zowe

The Open Mainframe Project’s Zowe initiative was born from an ambitious goal: make the mainframe a seamless, integrated part of the modern IT landscape. The post Top 5 Reasons to be Excited about Zowe appeared first on Linux Foundation.