Why You Need to Know About Event Modeling: An Intro

Alexandra Moxin, CSO, Adaptech Group

Introduction

Have you ever wondered why most software projects start off well and then, several months later, turn slow and difficult? You’ve likely fallen into the traps of design-as-you-go development, two-week sprints that never accomplish much, and more ceremony meetings than time to complete your work. Maybe you’ve divided your product into microservices but are running into never-ending orchestration issues and costs. Either way, your team is now weeks past a critical product release that will take more weeks to finish.

We’ve all been there. But it doesn’t have to be that way.

What if I told you that software projects, products, upgrades, migrations, etc., could be easy to implement, seamlessly run, and your teams could accomplish twice as much in the time they have now? Oh, and fewer ceremony meetings cluttering calendars, so your teams would be happier as well?

Enter Event Modeling.

What Is It?

Event Modeling is a seven-step methodology to describe information flow that is used by multifunctional teams to easily understand and solve complex problems. While Event Modeling techniques can be used in any domain, they bring ease, simplicity, and the rigor of systems thinking to software engineering.

Imagine an interactive canvas where your C-suite is engaged in conversation with CX, UX, and DX; everyone is working from the same page, requirements are understood, specifications are tracked and connected, and all parties have a deep understanding of what each team will be doing. This is what we consistently see with teams that use Event Modeling to plan and guide their endeavors.

The Seven Steps of Event Modeling

For a deeper understanding, see “Event Modeling: What is it?” by original author Adam Dymitruk, CEO and Founder of Adaptech Group (1). Event Modeling is done in seven steps, and all information is represented from the user’s perspective. Let’s go through these steps and show how to build a software blueprint.

Step 1: Brainstorming

This is done collaboratively with at least one representative from each department. One person explains the goals of the project. The participants then envision how the system would look and behave, starting from the screens or the pieces of information (events) they can conceive of having happened. Here, we gently introduce the concept that only state-changing information should be specified.
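
To make “state-changing information” concrete, here is a minimal sketch in Python, assuming a hypothetical vacation-planning system (the same example used in Step 4): events are immutable, past-tense facts about things that have happened, not queries or screens.

  # Illustrative only: events as immutable, past-tense facts (state changes).
  from dataclasses import dataclass
  from datetime import date

  @dataclass(frozen=True)
  class RoomBooked:
      booking_id: str
      room_type: str
      check_in: date
      check_out: date

  @dataclass(frozen=True)
  class PaymentReceived:
      booking_id: str
      amount_cents: int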

Step 2: Formulate the Plot

The participants now create a plausible walk-through of these screens, or stories made of this information (aka events). The information is arranged in a timeline, which everyone reviews to check that it makes a cohesive story.

Step 3: Create the StoryBoard

If the mockups or screen designs haven’t been added yet, these are now added to the top of the diagram. The team then reviews the screens so that the source and destination of each field that the user sees is recorded on the event model.

Step 4: Identify What the User Can Do

Next, we show how we enable the user to change the state of the system. This is where we identify inputs, aka commands, which are shown as blue boxes. A command links the information a user enters on a screen, applies validation and business rules, and shows how that information will be stored in the system. An example in a vacation planning system would be a user selecting a range of dates and a room type, then pressing “Book Now”.
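
As a hedged illustration of how a command applies validation and results in stored information, here is a small Python sketch; BookRoom, handle_book_room, and the rules shown are hypothetical, not part of the Event Modeling method itself.

  # Illustrative command handling: validate the user's input, then record the
  # accepted state change as an event. Names and rules are hypothetical.
  from dataclasses import dataclass
  from datetime import date

  @dataclass(frozen=True)
  class RoomBooked:            # event, as sketched in Step 1
      booking_id: str
      room_type: str
      check_in: date
      check_out: date

  @dataclass(frozen=True)
  class BookRoom:              # command: the user's intent from the "Book Now" screen
      booking_id: str
      room_type: str
      check_in: date
      check_out: date

  def handle_book_room(cmd: BookRoom, existing_booking_ids: set) -> RoomBooked:
      # validation and business rules
      if cmd.check_out <= cmd.check_in:
          raise ValueError("check-out must be after check-in")
      if cmd.booking_id in existing_booking_ids:
          raise ValueError("booking already exists")
      # the accepted command becomes an event stored by the system
      return RoomBooked(cmd.booking_id, cmd.room_type, cmd.check_in, cmd.check_out)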

Step 5: Identify What the User Sees

Returning to our goals for the event model, we now link and identify what information has accumulated in the system and reflect this as UI views (aka read-models) or tasks for automation to fulfill. These outputs are colored green and may be things such as a calendar view or availability of rooms in a vacation planner, or a shopping cart in an ecommerce system.
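
A hedged sketch of such a read model follows: a projection that folds RoomBooked events into the room-availability number a calendar screen would display. The inventory figures and names are invented for illustration.

  # Illustrative read model (green box): fold events into a view the user sees.
  from dataclasses import dataclass
  from datetime import date

  @dataclass(frozen=True)
  class RoomBooked:
      booking_id: str
      room_type: str
      check_in: date
      check_out: date

  ASSUMED_INVENTORY = {"standard": 20, "suite": 5}   # hypothetical room counts

  def rooms_available(events, room_type: str, day: date) -> int:
      """How many rooms of a given type remain free on a given day."""
      booked = sum(1 for e in events
                   if e.room_type == room_type and e.check_in <= day < e.check_out)
      return ASSUMED_INVENTORY[room_type] - booked

  # Example: one suite booked July 1-5 leaves 4 suites free on July 2.
  events = [RoomBooked("b-1", "suite", date(2024, 7, 1), date(2024, 7, 5))]
  print(rooms_available(events, "suite", date(2024, 7, 2)))   # 4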

Step 6: Apply Conway’s Law

Now that we know how information gets into and out of our system, we organize the events into swimlanes. We do this so that the system can exist as a set of autonomous parts that separate teams can own. This allows specialization to happen at a level that we control and enables fixed pricing, which we’ll introduce in a future post. See Conway’s Law (2) by Mel Conway.

Step 7: Elaborate Scenarios for Testing

Each workflow step is tied to information coming into the system or information going out. Given-When-Then or Given-Then specification-by-example scenarios are constructed and reviewed collaboratively by the participants. This enables user story writing, traditionally done by a dedicated product owner in isolation in a text format, to be done collaboratively, visually, and in a short amount of time. What is critical is that each specification, while allowing for multiple variations of possible data, is tied to exactly one command or view.
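
As a hedged example of a Given-When-Then scenario tied to exactly one command, here is a plain Python test sketch that reuses the hypothetical BookRoom command and RoomBooked event from the Step 4 sketch above.

  # Illustrative Given-When-Then scenario for the BookRoom command (see the
  # Step 4 sketch for the hypothetical definitions it reuses).
  from datetime import date

  def test_booking_a_room_records_a_room_booked_event():
      # Given: no existing bookings
      existing_booking_ids = set()
      # When: the user selects dates and a suite, then presses "Book Now"
      cmd = BookRoom("b-1", "suite", date(2024, 7, 1), date(2024, 7, 5))
      event = handle_book_room(cmd, existing_booking_ids)
      # Then: a RoomBooked event with the chosen dates is recorded
      assert event == RoomBooked("b-1", "suite", date(2024, 7, 1), date(2024, 7, 5))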

Going Forward

This article is an intro to Event Modeling. We’ve been using Event Modeling in its earliest forms for more than a decade. Adam has been developing an open source project for creating and running Open Spaces, with its own Event Model, and has been streaming this on Twitch, YouTube (3), LinkedIn, and X. We invite you to drop in on these streams and take part, learn, and contribute; or join our Discord channel (4) to connect with the Event Modeling community. If there is interest, we’re open to sharing more detailed posts on Linux.com in a future series.

References

1. eventmodeling.org/posts/what-is-event-modeling/#seven-steps

2. melconway.com/Home/Conways_Law.html

3. eventmodeling.org/resources/#live-streams

4. eventmodeling.org/resources/#discord

Author Bio

Alexandra is Chief Strategy Officer for Adaptech Group and an Ambassador for the ALS Network and Google (Women TechMakers). Her background is in business intelligence, product development, and digital transformation. She attended Stanford Open University and graduated from the University of British Columbia.

Maintainer Confidential: Challenges and Opportunities One Year On

A year ago I wrote an article to give some insight into how an open source project looks behind the scenes from a maintainer’s perspective. One year on, I thought it might be interesting to share an update on that.

Who I am and what the project is and does were covered previously and haven’t really changed. In short, I’m the Yocto Project’s primary technical lead. The project allows people and companies to build and maintain customized Linux distributions, and open source software in general, in a scalable and maintainable way.

Who is using it? We often don’t know!

As the project continues to grow in usage, we keep finding out about new and interesting places it is being used. This is really exciting and what the project was designed for, so it is wonderful to see. The sad thing is that we can’t really talk about a lot of the usage. In some cases we find out by looking at the license compliance “bill of materials” that companies share. It is usually clear from the versions and names of the components that it is likely OpenEmbedded/Yocto Project derived, but there is nothing we can quote to show that definitively. It is hard to demonstrate project usage or importance when you don’t know, or can’t say, who is using it. If you are using it, please let us say so! Drop us an email, or add yourself to the list on our wiki.

Since last year the project has gained several members, some of them joining after reading the previous article and realizing the challenges the project was facing. This is great to see and really appreciated. The economic situation, globally and in this industry, hasn’t passed the project by, though, and we have also lost some members, while others have downgraded.

The increased membership and participation has meant that the project can balance its budget rather than forecasting a deficit and hoping things will work out. For me personally, that means my job has a bit more security too; I’m not wondering if I’ll need to find a different income source in a few months. It also makes it easier for the project to retain some of our key help for things like documentation or maintaining our LTS releases. The time taken to train people for those roles is not something that can be easily or quickly replaced, so retention is important.

Sovereign Tech Fund support

The big news in the last year for us was finding that we could get some help from the Sovereign Tech Fund (STF), a German government-funded initiative that is trying to help projects and the overall open software ecosystem. They read the article and wanted to see if there was a way to work together and help. The project had already been working on a five-year plan, basically an open-ended discussion of where we’d like to see the project in five years’ time and what kinds of things we might like to see happen in that time frame. We found that we could take some of the themes from that plan and have financial help to bring them to reality.

Funding comes with constraints and it has been a challenge to do things in the time frame needed, but by contracting the work through many of the consultancies working within our ecosystem, we’ve been able to quickly pull together some amazing changes.

The projects we targeted were a mix across a spectrum of topics. Some are future looking, with things like integration into newer IDEs such as VSCode. Some add automated testing to older code like Toaster, meaning we can stop it bit-rotting and degrading and start planning ways to better use it in the future. There was work to improve the developer experience, from within our tools, such as better understanding why cache objects (“sstate”) weren’t being reused, through to re-enabling automated CI-style helpers for our patch submission and review processes. There was also work done on properly documenting our security processes and preparing the project for the next generation of SPDX, which is key to our Software Bill of Materials (SBoM) support.

Other projects include tool improvements and work to demonstrate and roll those tools out to other layers in the wider project ecosystem. Taking processes, techniques, and tools we have in the core and showing other layer maintainers how they can take advantage of them leads to wider ecosystem improvements in quality and productivity. We also have projects underway to explore the topic of binary packages in a source-based distro world and to look at ways we could improve our initial setup and user experience. Separate from the STF work, the project was also able to fund some improvements to the layer index, though that wouldn’t have happened without the STF funding for other areas. The layer index works like a search engine for the project, so it is of key importance to most of our users.

There were multiple good things to come out of all this besides the work itself. It meant that multiple members of the community were able to work on things they had wanted to address for a long time, knowing the benefit to everyone yet having been unable to find a sponsor to allow them to spend that time. It also helped improve the developer experience in a number of key areas, something we were conscious we were lacking.

I’d also note that the work was carefully planned to include and prioritize test automation so that as well as fixing fundamental issues, we’re better placed to avoid some of these issues in the future, too.

Maintainer/developer resourcing issues remain

All this sounds really positive, and it most definitely is, but there was a bit of a darker side too. The core of the project was stretched thin, and I remain the only full-time developer at the core. Much of the writing and technical execution of the contracts therefore fell to me. I did realize this was likely to happen, but the opportunity to fix so many of these long-term issues meant that I opted to push through it and make it happen. While I don’t regret it, I doubt I could sustain doing anything like this again.

The project has talked about the “bus factor” problem it has for a long time, and I’ve grown quite used to being hit by metaphorical buses in meeting discussions. In some ways I’m not as worried about this as I once was. Both the Yocto Project and OpenEmbedded have structures in place allowing a clear path to making decisions, and those would work to allow the roles I fill to be replaced. It is ironic that when things are running relatively smoothly, people actually question the need for those structures, often not realizing that the times they really come into their own are times of crisis.

The real concern now is one of scaling and overload, and this is probably the key problem the project needs to find solutions for. Funding is one challenge to improving this; it becomes an easier problem to solve if funding is less constrained. The second challenge is that the project has tried several times to write a job description for someone to shadow, assist, or help me in various ways, and we’ve struggled every time, as my role within the project involves so many hats and the skill sets overlap what are traditionally different roles. When you add together the project and programme management pieces, the technical architecture oversight and vision, the bug fixing, general development skills, community relations and business relations pieces, good QA engineering skills, and general operational execution, it gets complicated. The closest we’ve come was realizing that we needed both deeply technical programme management/execution help and general but highly skilled development engineering help for me. This is still an ongoing discussion, so let us know if you have ideas!

Relevance in the wider open source ecosystem

There is a significant lack of understanding and recognition of what the project can actually do for the wider ecosystem and for specific enablement. An interesting example is RISC-V support within the project. Community-driven support has been added over the last few years and it does basically work, but the architecture has not been tested on our CI systems. The main reason is that those systems are costly to run and maintain, are funded by the project membership, and RISC-V does not have enough representation there. We’ve actively sought out platinum or multiple gold member participation from RISC-V-interested parties, but sadly there hasn’t been any commitment. The RISC-V story is particularly unfortunate since the project is about to release its next LTS, which only happens every two years, and RISC-V won’t be on the test matrix.

Besides the LTS, the project is extremely efficient at bringing in new versions of FOSS components as soon as upstream projects make releases. There is particular value in testing those on more unusual architectures such as RISC-V as early as you can, at the point of entry into the project and the wider ecosystem. Doing so doesn’t just help Yocto Project support; it also helps that support in other distributions. We’re clearly struggling to showcase the huge benefit this has!

I’d also like to highlight another key feature of the project: the ability for users to own and control their entire build process. This means users don’t have dependencies on other companies or public services and that, years from now, they can still rebuild the software shipping in their products. Several recent examples of changes in availability of software or services, such as the structural changes around Fedora/CentOS, have made some users ask very valid questions about their reliance on other companies and their ability to “control their own destiny”. Yocto Project and OpenEmbedded were built to solve that problem, with no lock-in or reliance on others necessary.

Two other related areas where the project has been able to help make step-change improvements are reproducibility and software manifests. For reproducibility, we’ve worked with various upstreams to ensure the tooling is able to support it well (through compiler options, for example) and that upstream software stops encoding things like build paths into binary output. For software manifest support, the project was proud to help test elements of the upcoming SPDX 3.0 standard to ensure some of the usage issues of the previous versions are addressed and that it fits well in a software build environment. With recent developments like the European Cyber Resilience Act, and with similar changes already present or coming in other jurisdictions, being able to comply easily through good tooling and processes will be key.

Availability of developers

The huge demand for Yocto Project/OpenEmbedded-skilled engineers does have one other rather unfortunate impact on the project core. That demand is great for ensuring people in the project have employment; however, because the skills are scarce, those people often aren’t allowed time to contribute to “upstream” or back to the project core. Understandably, they may also be asked to prioritize work on product-specific layers in preference to core code and overall project architecture. The “layer” approach the project takes in some ways makes this much easier to do, too.

While understandable, the loss of access to people’s knowledge, and their ability to help work on bugs or improvements, is another significant challenge for us which I’m not sure how to address at this time. 

Summary

All in all, the last year has been really positive. The STF involvement was a very welcome surprise and we’ve achieved great things. Reading the article from a year ago, it is nice to be able to say that we’ve moved forward or even resolved some of those topic areas. Challenges remain though, particularly around participation in the project (both financial/membership and developer) if we’re to improve the overload problem.

Some of these issues are not unique to the Yocto Project and are faced by many open source projects. Regardless, I feel that we do need to be open about the issues even if we don’t have good solutions yet. While we don’t want to alienate our current developer community and maintainers, we’re trying to be open to new approaches and ideas, so please do get in touch if you think there is a way forward that we’re missing!

About the author: Richard Purdie is the Yocto Project architect and a Linux Foundation Fellow.

Bridging Design and Runtime Gaps: AsyncAPI in Event-Driven Architecture

The AsyncAPI specification emerged in response to the growing need for a standardized and comprehensive framework that addresses the challenges of designing and documenting asynchronous APIs. It is a collaborative effort of leading tech companies, open source communities, and individual contributors who have actively participated in its creation and evolution.

Various approaches exist for implementing asynchronous interactions and APIs, each tailored to specific use cases and requirements. Despite this diversity, these approaches fundamentally share a common baseline of key concepts. Whether it’s messaging queues, event-driven architectures, or other asynchronous paradigms, the overarching principles remain consistent. 

Leveraging this shared foundation, AsyncAPI taps into a spectrum of techniques, providing developers with a unified understanding of essential concepts. This strategic approach not only fosters interoperability but also enhances flexibility across various asynchronous implementations, delivering significant benefits to developers.

From planning to execution: Design and runtime phases of EDA

Design time and runtime refer to distinct phases in the lifecycle of an event-driven system, each serving a different purpose:

Design time: This phase occurs during the design and development of the event-driven system, when architects and developers plan and structure the system, engaging in activities around:

  • Designing event flows
  • Schema definition
  • Topic or channel design
  • Error handling and retry policies
  • Security considerations
  • Versioning strategies
  • Metadata management
  • Testing and validation
  • Documentation
  • Collaboration and communication
  • Performance considerations
  • Monitoring and observability

The design phase yields assets, including a well-defined and configured messaging infrastructure. This encompasses components such as brokers, queues, topics/channels, schemas, and security settings, all tailored to meet specific requirements. The nature of these assets may vary based on the choice of the messaging system.

Runtime: This phase occurs when the system is in operation, actively processing events based on the design-time configurations and settings and responding to triggers in real time. Typical runtime activities include:

  • Dynamic event routing
  • Concurrency management
  • Scalability adjustments
  • Load balancing
  • Distributed tracing
  • Alerting and notification
  • Adaptive scaling
  • Monitoring and troubleshooting
  • Integration with external systems

The output of this phase is the ongoing operation of the messaging platform, with messages being processed, routed, and delivered to subscribers based on the configured settings.

Role of AsyncAPI

AsyncAPI plays a pivotal role in asynchronous API design and documentation. Its significance lies in standardization, providing a common and consistent framework for describing asynchronous APIs. AsyncAPI details crucial aspects such as message formats, channels, and protocols, enabling developers and stakeholders to understand and integrate with asynchronous systems effectively.
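
As a rough illustration of what such a description captures, here is a minimal, hypothetical AsyncAPI 2.x document expressed as a Python dictionary mirroring the JSON form (real AsyncAPI documents are normally written in YAML or JSON); the channel and message names are invented.

  # Minimal, hypothetical AsyncAPI 2.x document as a Python dict: info,
  # a channel, and the message payload schema carried on that channel.
  import json

  asyncapi_doc = {
      "asyncapi": "2.6.0",
      "info": {"title": "Room Booking Events", "version": "1.0.0"},
      "channels": {
          "booking/room/booked": {                 # channel (topic) name
              "subscribe": {                       # consumers receive this message
                  "message": {
                      "name": "RoomBooked",
                      "payload": {                 # message format as a schema
                          "type": "object",
                          "properties": {
                              "bookingId": {"type": "string"},
                              "roomType": {"type": "string"},
                              "checkIn": {"type": "string", "format": "date"},
                          },
                          "required": ["bookingId", "roomType", "checkIn"],
                      },
                  },
              },
          },
      },
  }

  print(json.dumps(asyncapi_doc, indent=2))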

It should also be noted that the AsyncAPI specification serves as more than documentation; it becomes a communication contract, ensuring clarity and consistency in the exchange of messages between different components or services. Furthermore, AsyncAPI facilitates code generation, expediting the development process by offering a starting point for implementing components that adhere to the specified communication patterns.

In essence, AsyncAPI helps bridge the gap between design-time decisions and the practical implementation and operation of systems that rely on asynchronous communication.

Bridging the gap

Let’s explore a scenario involving the development and consumption of an asynchronous API, coupled with a set of essential requirements:

  • Designing an asynchronous API in an event-driven architecture (EDA):
    • Define the events, schema, and publish/subscribe permissions of an EDA service
    • Expose the service as an asynchronous API
  • Generating AsyncAPI specification:
    • Use the AsyncAPI standard to generate a specification of the asynchronous API
  • Utilizing GitHub for storage and version control:
    • Check the AsyncAPI specification into GitHub, leveraging it as both a storage system and a version control system
  • Configuring GitHub workflow for document review:
    • Set up a GitHub action designed to review pull requests (PRs) related to changes in the AsyncAPI document
      • If changes are detected, initiate a validation process
      • Upon a successful review and PR approval, proceed to merge the changes
      • Synchronize the updated API design with the design-time assets

This workflow ensures that design-time and runtime components remain consistently in sync. The feasibility of this process is grounded in the use of AsyncAPI for the API documentation. Additionally, the AsyncAPI tooling ecosystem supports validation and code generation, which makes it possible to keep design time and runtime in sync.
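
As one hedged sketch of what the validation step in such a workflow could look like, the Python script below performs only a few basic structural checks on a changed AsyncAPI file; a real pipeline would run the official AsyncAPI tooling instead. The file name and checks are assumptions for illustration.

  # Simplified stand-in for AsyncAPI validation in CI: load the document and
  # check a few required fields. Not a substitute for the AsyncAPI validator.
  import sys
  import yaml   # PyYAML, assumed to be available in the CI environment

  REQUIRED_TOP_LEVEL = ("asyncapi", "info", "channels")

  def basic_checks(path: str) -> list:
      with open(path) as f:
          doc = yaml.safe_load(f)
      errors = [f"missing required field: {field}"
                for field in REQUIRED_TOP_LEVEL if field not in doc]
      if "info" in doc and not {"title", "version"} <= set(doc["info"]):
          errors.append("info must contain title and version")
      return errors

  if __name__ == "__main__":
      path = sys.argv[1] if len(sys.argv) > 1 else "asyncapi.yaml"   # assumed file name
      problems = basic_checks(path)
      if problems:
          print("\n".join(problems))
          sys.exit(1)
      print("basic checks passed")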

Putting the scenario into action

Let us consider Solace Event Portal as the tool for building an asynchronous API and Solace PubSub+ Broker as the messaging system. 

An event portal is a cloud-based event management tool that helps in designing EDAs. In the design phase, the portal facilitates the creation and definition of messaging structures, channels, and event-driven contracts. Leveraging the capabilities of Solace Event Portal, we model the asynchronous API and share the crucial details, such as message formats, topics, and communication patterns, as an AsyncAPI document.

We can further enhance this process by providing REST APIs that allow for the dynamic updating of design-time assets, including events, schemas, and permissions. GitHub actions are employed to import AsyncAPI documents and trigger updates to the design-time assets. 

The synchronization between design-time and runtime components is made possible by adopting AsyncAPI as the standard for documenting asynchronous APIs. The AsyncAPI tooling ecosystem, encompassing validation and code generation, plays a pivotal role in ensuring the seamless integration of changes. This workflow guarantees that any modifications to the AsyncAPI document efficiently translate into synchronized adjustments in both design-time and runtime aspects. 

Conclusion

Keeping the design time and runtime in sync is essential for a seamless and effective development lifecycle. When the design specifications closely align with the implemented runtime components, it promotes consistency, reliability, and predictability in the functioning of the system. 

The adoption of the AsyncAPI standard is instrumental in achieving a seamless integration between the design-time and runtime components of asynchronous APIs in EDAs. The use of AsyncAPI as the standard for documenting asynchronous APIs, along with its robust tooling ecosystem, ensures a cohesive development lifecycle. 

The effectiveness of this approach extends beyond specific tools, offering a versatile and scalable solution for building and maintaining asynchronous APIs in diverse architectural environments.

Author
Post contributed by Giri Venkatesan, Solace