We live within nested and complex systems. We use cloud AI/ML/BI services to understand their operation and future, fed by the roughly 100 trillion gigabytes of data we create, capture, copy, and consume each year (Domo 2022) (IONOS 2021). These services stress the most important system we rely on, the biosphere (Monserrate 2022) (Strubell, Ganesh, and McCallum 2019) (Getzner, Charpentier, and Günnemann 2023) (WHO 2021). We are responsible for the machines that we build and operate to help us understand and manage our world, as well as the damage the machines do; therefore, we must insist on understanding our telemetry against specific, documented goals, and take action towards them as a species (Fuller 2016). Through human cognition and agency, mapped against present and future systems, it is my hope that we will avoid a cliff plunge that takes out large swathes of the complex life we share the biosphere with (McPherson et al. 2023). (๐ธ) Failing that, we can use these methods after the plunge, in the next century, or the one beyond that, regardless (Snyder 1996). Fig. 1 maps features needed to understand, build, fix, and implement new, complex systems in a resilient way.
The word sustainability can be interpreted as maintaining our current lifestyles, e.g., replacing gasoline cars with electric, as though single-occupancy vehicles, with the complex supply chain behind them, are something the biosphere can handle. The focus of this paper is on resilience rather than sustainability; it assumes that disruptive change is inevitable, appearing in the form of system crises. While resilience is normally a psychological term, the ideas translate well to systems (Heshmat 2020). Resilience takes place at the time of system crisis (Daniel 2014). Understanding and preparing to tackle crises with the features in Fig. 1 facilitates resilience. Green arrows show aspects that facilitate the primary ones, but there are lesser relationships between all aspects. Establishing knowledge about a system requires communication and collaboration. (๐ณ) (โ๏ธ) (๐ฅ) These three primary features define human civilization in general, and how it ratchets forward. When cheap energy and materials are plentiful, they act as an accelerant (Knowledge at Wharton Staff 2018) (Heinberg 2023). When they are less plentiful, we experience system stress, and the three primary features are more about resilience than accelerating civilization (Levi and Cullen 2018) (Fix 2020) (Cline 2016) (Dyni 2006).
There is an assumption in this paper that collaborative system inquiry, leveraging existing tools like BI and AI/ML when prudent, is the most effective route to resilience (Anderson and Burney 1999). There is another assumption that fulfilling the feature requirements for resilience leads to better telemetry against documented goals. I admit this is a form of optimism, and does have some technological bias; however, my analysis likely yields many features that are familiar and useful to the reader. (๐คช) At the same time, it seems unlikely that a modern system analyst toolkit includes triples and semiotics. This is why I'm taking the time to write this paper. The philosopher who first wrote about triples also wrote about semiotics (Campbell, Olteanu, and Kull 2019) (Dürscheid and Meletis 2018) (Burch 2022) (Petrilli 2015) (Palmolive 1978). I'm not competing with tools; I'm adding some, along with perspective. Triples and semiotics do not require machine cognition, but extensibility to machine cognition is a benefit (McCallum and Henshaw 1855). (๐ช)
(๐) Fig. 2 lists system quality requirements. A cloud vendor might promise only three nines of availability. (๐ฏ) Coupled with external infrastructure availability, like office power or network connectivity, many organizations are running at two nines of availability. If the system is down, slow, locks up, or the user loses data, quality requirements suddenly become the most important aspects. For the analyst, there is a bit of crystal ball reading involved with them. What kinds of things can happen that will reveal that the quality of the system is a problem? How does the system deal with the eight fallacies of distributed computing (Jausovec 2020) (Van Den Hoogen 2007)? If data is maliciously or accidentally corrupted or deleted in the system, and isn't discovered for several months, then how is it recovered? Many backup systems do not account for this, yet it is a key part of RPO requirements (Sharma 2020). (๐พ) (๐ฆซ) Quality requirements may be considered historically with the existing system (as-is), or for a future system (to-be). A CRM system might have a past availability of 99.5%, but a to-be requirement of 99.99%. (๐ฏ) Notice the breadth and likely changing nature of requirements. In the case of availability, it is quite likely that when the system does have performance or availability problems, the sponsor and stakeholders will want a deep dive on why their expensive initial system investment with increasing monthly fees is less usable than the previous system. An analyst, whether dedicated or rolled under a developer role, will need to wheel out the requirements and analysis, mapped against the design, and refine (Schaffer 2023). This might be the point where the organization values availability and performance enough to pay for cloud-agnostic or local-first architectures, or even double down on cloud (Sharma 2019) (Kleppmann et al. 2019). (๐) (๐ฅก) This flexibility is where triples shine, as they can be added in and changed anywhere in the original capture of requirements without significant rework, nor a loss of established small facts. (โ๏ธ) (๐งฉ) The system used to collaboratively model, understand, operate and improve systems has many of the same requirements as the system being analyzed. This is what Triple System Analysis does, and so I will detail the resilience features and quality requirements of Triple System Analysis itself. This will help the reader understand the application of the aspects, implement it for their own application, and get acquainted with my use of emoji for brevity and cognition.
The Whiteboard and Stickies collaborative business analysis technique is a crude version of Triple System Analysis, but it illustrates many of the same features. An analyst gathers users and experts of a system in a room with stacks of colored sticky note pads, and prompts the attendees to lay out aspects of the as-is or to-be system by writing small bits of information on the notes and sticking them on a whiteboard for overall visualization. The analyst serves as a ringmaster (Teoh 2012). (๐๏ธ) (๐ก) The main reason this works so well is that it is collaborative without the burden of dense jargon or existing description of the system. (โ๏ธ) (๐ฅ) This method works well for communication among those present at the meeting, but there are limits to the collaboration, as it is all within a local room. Virtual collaboration is possible, but difficult. (๐ฅ) Meaning is often encoded with the color and text of the stickies, as well as text on the whiteboard. All may contribute. (โ๏ธ) There is instant validation of gathered information. (๐) If somebody puts up a sticky note that is inaccurate, it is easy to correct. (โ๏ธ) (๐) There is a real-time update of the output of the group. (๐) There is little control of meaning, as it is whatever people put on the notes. (๐ง ) It is guided by the analyst, but there is usually no prior schema, which is a disadvantage as far as common, re-used meaning. There is an overall visual order (map). (๐บ๏ธ) Sometimes the map is in swimlane form; sometimes it is more chaotic. The map usually needs to be expressed in a different form for knowledge re-use. Knowledge is captured on the whiteboard itself, so persistence requires taking a picture or rolling the whiteboard into another room to process for future storage and visualization. (๐๏ธ) (๐ณ) Capturing the knowledge is labor intensive, and the analyst is often the bottleneck. Triple System Analysis is an alternative to this method that provides many of the same benefits. The whiteboard and stickies technique is extensible to Triple System Analysis; however, I provide technical solutions that make using it as a direct replacement appealing, despite the added complexity. I will discuss the feature and quality requirements as applicable to Triple System Analysis, but they also apply to the system under analysis, using its techniques and tools. Triples and semiotics (emoji) distinguish Triple System Analysis from the Whiteboard and Stickies method (Gorman 2014).
I discovered triples as I struggled to communicate about complicated systems with increasingly specialized engineers. I found that while I couldn't find anybody who could answer questions about the overall system, it was relatively easy to establish small facts. As I tried to visualize the facts in an automated way, I realized that the bioinformatics field had tools that could help, and that my previous career success with data flow was related (Shannon et al. 2003). Triples decompose systems into small facts that can be easily established and collaborated on iteratively. The most famous example is the Gene Ontology, but the concept goes back to the 1800s (Consortium 2023) (Peirce 1878).
Fig. 4 illustrates the triple "monitor has_resolution 1920X1080". The subject is "monitor", the predicate is "has_resolution", and the object is "1920X1080". (โ๏ธ) Triples can be assembled as a graph to visualize a broader perspective. (๐บ๏ธ) Let's start with four sentence triples: "Tigers eat cats.", "Cats eat rats.", "Cats eat mice.", and "Mice eat cheese." The subjects and objects are connected by "eat". (๐ฉ) Fig. 5 shows cheese flowing through mice to tigers. The graph can prompt questions, like "What does the rat eat?" or "If the cheese is poisoned, what animals might be affected?". The fact that we don't have anything for the rat to eat on the graph doesn't invalidate the graph. We can add what the rat eats as we learn. Likewise, the fact that a mouse eats peanut butter doesn't invalidate the fact that the mouse eats cheese. We can also do inference: the tiger might well be eating cheese via other animals, so if the cheese were poisoned, we could query the graph to see what animals would be affected. (๐ต๏ธ) The important thing to notice is that these small facts can be added at any time to the model. We could add "tigers eat pigs" later on. This flexibility is a key feature of using triples for system analysis called the Open World Assumption (Bergman 2009).
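As a minimal sketch (an assumption of tooling, not part of the paper's toolkit), the triples above can be loaded into a directed graph with Python and networkx and queried for the poisoned-cheese question:

import networkx as nx

# Each triple becomes a directed edge: subject --eats--> object.
triples = [
    ("tiger", "eats", "cat"),
    ("cat", "eats", "rat"),
    ("cat", "eats", "mouse"),
    ("mouse", "eats", "cheese"),
]
G = nx.DiGraph()
for subject, predicate, obj in triples:
    G.add_edge(subject, obj, predicate=predicate)

# Open World Assumption: a fact added later never invalidates the graph.
G.add_edge("tiger", "pig", predicate="eats")

# "If the cheese is poisoned, what animals might be affected?"
# Every node with a directed path to cheese is upstream in the food chain.
print(nx.ancestors(G, "cheese"))  # {'mouse', 'cat', 'tiger'}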
Breaking down meaning into triples works well to create maps, as they can be re-assembled and visualized. (๐๏ธ) Key-value pairs, coupled with a timestamp, do not provide the same flexibility: an external structure has to be laid down, either via a model or through interactive exploration at the time of analysis, so they can't be stored in a meaningful way for re-use separate from the model or interaction. Triples are the smallest atom of meaning (Lehnert 2021). (โ๏ธ) (๐ง )
Let's practice by analyzing a larger system with triples. Consider what modern compute needs at a high level. We are collecting small facts.
๐ป๏ธ โฌ
๏ธ ๐๏ธ โซ ๐ป๏ธ โฌ
๏ธ ๐ญ๏ธ โซ ๐ป๏ธ โฌ
๏ธ ๐ข
๐ป๏ธ โฌ
๏ธ ๐ง โซ ๐ป๏ธ โฌ
๏ธ ๐ง โซ ๐ป๏ธ โฌ
๏ธ โก๏ธ
Compute needs to be constructed and manufactured using globally distributed parts. Compute needs to be cooled, and needs humans and electricity to operate. These are fairly straightforward facts. Triples can be written in many forms, most of which are optimized for machines (W3C 2014b). We are optimizing our analysis for human cognition, though, so using emoji is a plus. Triples don't have to be in machine format (Microformats 2022). Fig. 6 visualizes the relations. (๐๏ธ) (โ๏ธ)
We don't have to toss established knowledge to build out perspective. For instance, say we want to distinguish local transport from global. We could add ๐ป๏ธ โฌ ๏ธ ๐ to show that compute uses local transport. The fact that some compute requires local transport does not negate the fact that some compute requires global transport. Perhaps we want to add that not all global shipping is necessarily based on oil. We want to allow for efforts like the Porrima or even good old-fashioned sailboats (MS Porrima 2022). No problem, just add ๐ป๏ธ โฌ ๏ธ โต๏ธ and ๐ข โฌ ๏ธ โก๏ธ. If we want to account for the fact that cooling compute in datacenters requires water and electricity, that construction, transport, and manufacturing require oil and electricity, and allow for photovoltaic arrays, we can add these eleven triples too:
๐งโฌ
๏ธ๐ง โซ ๐งโฌ
๏ธโก๏ธ โซ ๐โฌ
๏ธโก๏ธ
๐๏ธโฌ
๏ธโก๏ธ โซ ๐ญ๏ธโฌ
๏ธโก๏ธ โซ ๐๏ธโฌ
๏ธ๐ข๏ธ
๐โฌ
๏ธ๐ข โซ ๐ญ๏ธโฌ
๏ธ๐ข๏ธ โซ ๐ขโฌ
๏ธ๐ข๏ธ
โก๏ธโฌ
๏ธโ๏ธ โซ โก๏ธโฌ
๏ธ๐ข๏ธ
Fig. 7 shows the resulting graph.
Let's imagine that while we are building out our model of what compute needs, another group is building out a model of what humans need. This is an important aspect of Triple System Analysis. The graphs can be combined. (๐ฅ) Formal ontologies exist so that the meaning of the triples is agreed on (ISO/IEC 21838-2 2020). (๐ช) (โ๏ธ) (๐ผ) The problem is that this is in conflict with human cognition, as the standards are intended for machine cognition. (๐ก) It doesn't have to be a big problem, as long as human cognition is the priority. The nature of triples is that mapping to formal definitions is fairly straightforward.
Consider what humans need. Here is a list of triples followed by an explanation (โ๏ธ):
๐ง โฌ
๏ธ ๐ก๏ธ โซ ๐ง โฌ
๏ธ ๐ฐ โซ ๐ง โฌ
๏ธ ๐ฅ
๐ง โฌ
๏ธ ๐ฒ โซ ๐ก๏ธ โฌ
๏ธ ๐ โซ ๐ โฌ
๏ธ ๐๏ธ
โ๏ธ Humans need a certain wet bulb temperature to live, potable water, medical care, and prepared food (Vecellio et al. 2022). Shelter is needed for humans to maintain tolerable temperatures, and this shelter needs to be constructed.
๐ฒ โฌ
๏ธ ๐ โซ โก๏ธ โฌ
๏ธ ๐ข๏ธ โซ ๐ โฌ
๏ธ ๐๏ธ
๐ฒ โฌ
๏ธ ๐ฑ โซ ๐ โฌ
๏ธ โก๏ธ โซ ๐๏ธ โฌ
๏ธ โก๏ธ
โ๏ธ Prepared food for humans comes from animal and plant sources. Construction and transport need electricity, which is provided by oil. Transport needs to be constructed.
โก๏ธ โฌ
๏ธโ๏ธ โซ ๐ โฌ
๏ธ ๐ฑ โซ ๐๏ธ โฌ
๏ธ ๐ง
๐ฑ โฌ
๏ธ ๐ฉ โซ ๐ฑ โฌ
๏ธ ๐ โซ ๐ฉ โฌ
๏ธ ๐ข๏ธ
โ๏ธ Electricity can also come from the sun. Animals eat plants. Construction needs humans. Plants need fertilizer and water. Fertilizer comes from oil.
๐ฉ โฌ
๏ธ ๐ โซ ๐ฉ โฌ
๏ธ ๐ง โซ ๐ฒ โฌ
๏ธ ๐ฐ
๐ โฌ
๏ธ โก๏ธ โซ ๐ฐ โฌ
๏ธ ๐ โซ ๐ฐ โฌ
๏ธ ๐๏ธ
โ๏ธ Fertilizer can also come from animals or humans. Drugs need electricity. Potable water is sourced from rivers, lakes, springs, and groundwater. Processed food needs potable water, which needs constructed infrastructure to operate.
๐ฅ โฌ
๏ธ ๐ โซ ๐ฅ โฌ
๏ธ ๐ง โซ ๐ โฌ
๏ธ ๐ข๏ธ
๐ โฌ
๏ธ ๐ โซ ๐ฅ โฌ
๏ธ ๐ข๏ธ โซ ๐ โฌ
๏ธ ๐ข๏ธ
โ๏ธ Medical care needs drugs, people, and oil. Drugs need oil and transport. Shelter needs oil for heating as well as components.
๐ฐ โฌ
๏ธ ๐ข๏ธ โซ ๐ โฌ
๏ธ ๐ฑ โซ ๐๏ธ โฌ
๏ธ ๐ข๏ธ
๐ง โฌ
๏ธ ๐ฑ โซ ๐ฑ โฌ
๏ธ ๐ก๏ธ โซ ๐ฐ โฌ
๏ธ โก๏ธ
โ๏ธ Construction and potable water infrastructure needs oil. Potable water distribution needs electricity. Humans can eat plants directly, unprocessed. Drugs are made from plants. Plants need particular temperature ranges to germinate and thrive (Reed, Bradford, and Khanday 2022).
๐ฒ โฌ
๏ธ โก๏ธ โซ ๐ฒ โฌ
๏ธ ๐ข๏ธ โซ ๐ฒ โฌ
๏ธ ๐
๐ฑ โฌ
๏ธ โ๏ธ โซ ๐ โฌ
๏ธ ๐ข๏ธ โซ ๐๏ธ โฌ
๏ธ ๐
โ๏ธ Processed food needs electricity, oil, and transport. Plants need sun. Transport needs oil, and construction needs transport.
๐ฅ โฌ
๏ธ ๐๏ธ โซ ๐ฅ โฌ
๏ธ ๐ โซ ๐ฅ โฌ
๏ธ ๐ฐ
๐ โฌ
๏ธ ๐ โซ ๐ง โฌ
๏ธ ๐ง โซ ๐๏ธ โฌ
๏ธ ๐๏ธ
โ๏ธ Medical facilities need to be constructed, and also need potable water and transport. Shelter needs transport. Humans need humans to reproduce, and construction equipment is created with construction equipment. We might argue a bit about whether a hospital is needed, but in our current civilization, this is reasonable (Barbara Mann Wall). Likewise, in some societies transport is not needed to build a shelter. The advantage of this form of analysis is that the individual triples are relatively easy to agree on. Do we need oil for construction? Are drugs made with oil? This can be verified by experts to the satisfaction of those who are evaluating the system. If there is conflicting information, mark it as such and/or don't include it. The triples can be assembled as a graph for visualization as the model is built out, which facilitates collaboration. Fig. 8 shows the finished graph. (The ๐ข๏ธ demo site) is a live version of this graph.
If we decide that we will only consider transport that gets electricity from the sun, then we still have quite a few other problems to address. The graph helps put things in perspective, and facilitates human cognition of the system. (๐๏ธ) (โ๏ธ)
Back to our scenario with what compute needs. โ๏ธ๐ข๏ธ๐ง๐๏ธโก๏ธ๐ have the same meaning and use the same symbol. ๐ก๏ธ=๐ง, ๐=๐ง, and ๐ on human needs could be either ๐ข or ๐.
If we combine the graphs with the translation for ๐ก๏ธ and ๐, we get Fig. 9.
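As a sketch of the merge, with plain text labels standing in for the emoji (the names and the translation table below are illustrative assumptions): combining the two models is a union of their triples, plus a relabeling step for symbols that mean the same thing.

# Hypothetical triple sets from the two groups; names stand in for the emoji.
compute_needs = [("compute", "needs", "cooling"), ("compute", "needs", "electricity")]
human_needs = [("human", "needs", "temperature_range"), ("human", "needs", "potable_water")]

# Translation table: symbols that carry the same meaning in both models.
translate = {"temperature_range": "cooling"}

def normalize(triple):
    s, p, o = triple
    return (translate.get(s, s), p, translate.get(o, o))

# Combining the graphs is just a set union of normalized triples.
combined = {normalize(t) for t in compute_needs} | {normalize(t) for t in human_needs}
for t in sorted(combined):
    print(t)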
There are some semantic issues here. We say that maintaining a certain temperature range requires water. While this is currently reasonable for most datacenters, it is less so for cooling humans. Humans do need water for other primary reasons. The main point is that as a human trying to understand related systems, the map of compute and human needs is roughly as complicated as you can get and still take it in with a viewing. I am tackling a narrow part of visualization and cognition. There are ontologies for many of these entities, but they are too cumbersome to lay out in the way we have here. This is a key point of Triple System Analysis: with a focus on human cognition, we only want to lay out detail at a level necessary to make an informed decision. We have oodles of tools to analyze streams, and many choices for necessary black-box modeling. What we lack is human cognitive agency in our systems. I will now detail the resilience feature requirements, starting with maps.
Whether we are looking at a road map or a diagram of arteries and veins in the human body, maps are one of the main ways that we understand something. They serve as a guide to the as-is, and as a cognitive exoskeleton to build new knowledge.
Along with casting and the imagination and technique of the director, a screenplay is used to create a movie. A screenplay is a re-usable document that helps humans cognitively understand the movie enough to participate. It captures just enough knowledge to express the ideas as a movie. A screenplay does this by working within established meaning and conventions. It is possible to insert scenes and modify characters easily. Consider Fig. 10, the 1939 movie Made for Each Other, now in the public domain (David O. Selznick 1939). AI/ML and business intelligence tools running against the finished movie, metaphor or literal, can identify the actors and items in various scenes. They can categorize the props to recreate the scene with other actors superimposed. They can trace the flow of the movie against theater and consumer behavior data to predict the amount of popcorn that viewers decide to skip out and buy during the scene. Much of the knowledge, though, is not recoverable from the surface movie with AI/ML. It is quite likely that the pauses, awkward moments, and humor will be lost, like when Carole Lombard glances at James Stewart. We lose the original intent and the beauty of the way that the director, actors, and crew created the movie from the screenplay map. This gets at the human interest and effort underneath the stream. (๐ ) Humans need more than just monetizing, parsing, and predicting streams. As stakeholders in the systems that we rely on, what is our equivalent screenplay?
Like a screenplay, it is preferable to create maps that are based on standards. A marvelous example is Structured System Analysis, which was refined in the 1970s and 1980s as a way to analyze systems, primarily around data flow diagrams (DFDs). It uses three symbols and some conventions for connecting them (Gane and Sarson 1977) (Yourdon 1989) (Debruyne et al. 2019). Think of this as a vertical, dumbed-down version of graph analysis. The methods apply to any IT system. IT stores, transforms, and reports on information. Before most in compute called their profession IT, they called it IS, information systems, which is more appropriate. What we really care about isn't the technology; it is the information and the system that stores, transforms, and reports on it, no matter what the transforms are, AI or not. This, folks, is our screenplay for IT systems. The data flow ontology is simple enough to allow a cognitive, single-serving view, yet still supports machine cognition and a formal, agreed-on method of analysis. (๐ช) (๐๏ธ) (๐๏ธ)
While different authors choose different symbols to represent the nodes of DFDs, structured system analysis consists of an entity, process, and data store. (โ๏ธ)
๐ค External Entity (Entity)
This is a source or destination of data. It might be a person,
or a group of people, or even sensors.
โ๏ธ Process
This transforms data from one form to another.
Data Store
This is where data is held at rest, such as a file or database table.
There are multiple standard forms of DFDs in triple form (Debruyne et al. 2019) ("Common Core Ontologies" 2022). This means that DFDs are extensible to formal meaning and scaling with machine cognition. (๐ก) If we choose one model, one screenplay format for IT/IS, this is the one. Fig. 12 is a top level (0) DFD.
(The ๐ฆ demo site) is a live version of this graph. If we expand process 6 in Fig. 12 we get Fig. 13, and our view is entirely within the Customer Relationship Manager.
If we expand Social Fanisizer in Fig. 13 we get Fig. 14.
The view is important. When we are in a particular view, a node can open up as a graph with a different view. Fig. 13 is a wormhole into the perspective of the specific group: the perspective of the Customer Relationship Manager, including social media and snail mail cards. (๐๏ธ) Unless somebody is working directly with the Social Fanisizer, they don't need to know details, but they would need to understand that Lead Addresses are used by both Social Fanisizer and the Constant Email Service.
I've used DFDs without the visualization automation at my work for almost a decade. Much of the motivation for this paper is from that experience, and a desire to improve available tools. It varies from both formal DFDs and formal graph analysis. Unlike conventional graphs, hierarchical paths for the processes are a feature. Consider 0๐ธ6๐ธ11โช๏ธโ๏ธ๐ธ1๐นโ๏ธ๐น๐ค๐ธCFO. In Gane and Sarson notation, this would show that process 6.11.1 has a two-way data flow with the CFO. Triple System Analysis shows the nodes visually with the labels for meaning, without needing to create extra symbols. The โช๏ธ shows the level or graph. The level is 0๐ธ6๐ธ11, and is also a process (0๐ธ6โช๏ธโ๏ธ๐ธ11) at level 0๐ธ6, which fits with the intent of Gane and Sarson. I do not pull through meaning between levels. In the previous example of Social Fanisizer, for instance, I would expect those with detailed knowledge of that particular process, the graph at the node 0๐ธ6๐ธ6, to have a different understanding of their processes and datastores. There may well be overlap, but since this is about human cognition, it is completely fine to only enforce process as the nesting mechanism. Everybody can agree that Social Fanisizer is part of the Customer Relationship Manager. The groups that provide and consume information from Social Fanisizer, from their perspective, will likely be much different than the level 0 view of the entire org, which should be understood by anybody in the organization.
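As a sketch of working with these hierarchical paths, using "." in place of the node-path delimiter emoji for readability (the delimiter and example path are assumptions):

SEP = "."  # stands in for the reserved node-path delimiter emoji

def parent_level(path: str) -> str:
    """The level a process lives in, e.g. '0.6.11' -> '0.6'."""
    return SEP.join(path.split(SEP)[:-1])

path = "0.6.11"             # a level that is itself process 11 at level 0.6
print(parent_level(path))   # 0.6
print(path.split(SEP)[-1])  # 11, the process number within its parent level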
There should be a minimal number of symbols on the map, and they should require only a few minutes to learn. Data flow fits this with just six symbols: three for data flow direction and three for the nodes. This is one of the reasons why I prefer emoji, if possible. Text works, but as I show in (๐ ), it is harder to understand at a glance.
Different stakeholders have different needs for visualization. Triples facilitate this, as there are many ways to style the map in an automated way, redrawing and visualizing as needed (Graphviz 2023). Narrative can also be tailored and overlaid on top of the graph.
Modifications to the underlying data should automatically adjust existing maps, and should not require major adjustments to the schema.
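As a sketch of that automation (node names and styles below are assumptions, not taken from the figures): the same triples can be re-emitted as Graphviz DOT with whatever per-stakeholder styling is wanted, and re-rendered whenever the underlying data changes.

# Triples: (subject, relation, object); any change here simply redraws the map.
triples = [("CRM", "sends data to", "Billing"), ("Billing", "sends data to", "CFO")]

# Per-stakeholder styling, applied at render time rather than stored in the data.
style = {"CFO": 'shape=box, style=filled, fillcolor="lightyellow"'}

lines = ["digraph map {"]
for node in {n for s, _, o in triples for n in (s, o)}:
    lines.append(f'  "{node}" [{style.get(node, "shape=ellipse")}];')
for s, p, o in triples:
    lines.append(f'  "{s}" -> "{o}" [label="{p}"];')
lines.append("}")
print("\n".join(lines))  # pipe to: dot -Tsvg -o map.svg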
Wade through the mind-numbing muck of a corporate "what's wrong with our group" exercise, and you will usually end up with the top-ranking advice that better communication is needed. It may seem fruitless to spend all that time arriving at that conclusion every two years like clockwork, but it is still a core problem. Let's analyze what we mean by communication about systems in crisis. Mostly this involves representing and sharing the knowledge of who, what, when, how, where, and why with a two-way tight coupling with streams and meaning. (๐) (๐ง )
Who are the people contributing to the model of the system? This needs to be traced for several requirements. It is the only real structural requirement, as it is key to collaboration and streams.
What are the pieces of the system being communicated about? What are we doing overall?
How do we describe how pieces are related? How do we get to a complete system?
Where are the interactions we are concerned with in the system? This might be a logical or physical perspective. Where are we going overall?
Are there crucial times or dates associated with the interactions? When will we arrive at our goal?
Triples are particularly good at establishing why, since they can be added at any point in the process, and with any type of connection or interaction. This doesn't interfere with velocity, as triples can be attached to the issue and fix, much like a tag, but with an extra dimension. While it is true that most platforms offer notes, they usually don't offer notes referencing a broader map of "why".
We need to communicate with agreed meaning in order to store and filter knowledge. Streams, which are often captured without known structure, need AI/ML via models, black box analysis based on standard schemas, or a human hunt-and-search method to shoehorn meaning onto the stream. Triples are better at this, because they are simple enough to be deemed true or false, but can also be put together to form broader, re-usable knowledge. Triples are also extensible to formal knowledge structures with globally agreed-on meaning (Smith 2022) (Smith et al. 2007). There is no room for confusion about what is meant. If I am stuck in the rain and talking over an unstable connection, and I want to convey that I am in Portland, I might use a standard phonetic alphabet and say "Papa, Oscar, Romeo, Tango, Lima, Alfa, November, Delta" (NATO 1956). The phonetic alphabet has agreed-on meaning, and is designed so that when transmitted over a stream (radiotelephone), there is a low possibility of confusion over similar-sounding words. Meaning should be quickly established with visual cues when possible.
There are several reasons why I conclude that emoji are useful for system analysis:
There are also reasons why emoji are bad, primarily compatibility, manageability, maintainability, and security:
Long-form text is usually not immediately understood. For many people, only the first couple of sentences of a long narrative are read. Emoji have quite a few quirks that make them difficult to use in knowledge systems; however, from a human perspective they make understanding systems at rest and in motion much easier. The priority of Triple System Analysis is human, not machine, cognition.
If I try and relate meaning with a paragraph of text, it is open to interpretation. Smaller propositions, though, are easier to pin down. What are the atoms of the system that can be pinned down? (๐)
Knowledge needs to be in a form that helps the broadest set of knowledge workers. This implies knowledge is written. Civilization started with the written word, and the logistics of civilization require knowledge to be written (Schmandt-Besserat 2014).
We need to be able to show crucial knowledge as a priority. Streams can help with this by triggering views of the system when triples are embedded. In a quickly changing system that a team is involved in analyzing, focusing on the metrics of what is broken is important.
We need to be able to retain knowledge for future, unforeseen views. The format for retaining re-usable knowledge will vary widely by the needs of the stakeholders and application. Triple System Analysis uses emoji triples. Emoji are tricky to deal with, as they can take many forms for the same-looking emoji. (๐ง ) I standardize on the full emoji that has color on most platforms. Usually this means that there is a variation selector (Codepoints 2023).
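As a sketch of that normalization (the gear character below is just an example base character): ensure the emoji-presentation form by appending VARIATION SELECTOR-16 (U+FE0F) when it is missing.

VS16 = "\uFE0F"  # VARIATION SELECTOR-16: request color emoji presentation

def normalize_emoji(ch: str) -> str:
    """Append VS-16 to a bare base character so it renders as a color emoji.
    Real emoji can be multi-codepoint sequences; this handles only the simple case."""
    return ch if ch.endswith(VS16) else ch + VS16

print(normalize_emoji("\u2699"))         # GEAR -> GEAR + VS-16
print(normalize_emoji("\u2699" + VS16))  # already normalized, unchanged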
By iterating with complex numbers c, and colorful visualization of what doesnโt escape to infinity, we get the Mandelbrot Set (josch 2021).
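The underlying rule is only a few lines. A minimal sketch of the iteration z -> z*z + c, testing whether a point escapes:

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z -> z*z + c from z = 0; points that never escape |z| > 2 are in the set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))       # True: the origin never escapes
print(in_mandelbrot(1 + 1j))  # False: escapes after a couple of iterations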
No matter what kind of AI/ML machinery is pointed at the animated rendering of the Mandelbrot Set in Fig. 15, it is extremely unlikely it will arrive at the simple formula (Mittal 2015). We might be able to find profitable gullies and shorelines in the rendered fractal territory, but we will not discover the formula. We could spend a lifetime enjoying the beauty of the surface curves, but we would always be caught in the surface, unaware of the rule, the formula underneath the territory. The gap between the movie and the screenplay is much like the gap between the formula for the Mandelbrot Set and the rendered visual, but rather than humans bridging the gap to create unique movies, machines express the formula. (๐ญ๏ธ)
During collaboration, there should be minimal limitations on who can contribute. (๐ฅ) If any participant feels something is wrong with the system map being built collaboratively, it should be easy to capture. Likewise, if a particular proposition (triple) is verified by an agreed expert, then that should also be easy to capture. (๐) Focusing on the core knowledge first, rather than code, platform, frameworks or metal, facilitates collaboration from the start. Further, a focus on decomposed knowledge, coupled with immediate visualization, makes meetings more productive, since there is little delay between gathering knowledge and the visualized model. Decomposed knowledge streams (triples) only require a "lock" or merge on the triple itself (Kleppmann et al. 2019) (Shapiro et al.). It is less likely that during collaboration one person will step on another, e.g., one person might change the title while somebody else is editing the description. The big win for collaboration is on multi-level data flow diagrams (DFDs), as different areas of expertise can collaborate concurrently to build the models.
As the model is built, we should be able to agree on and document validation. If we are modeling a system with a database, and need to understand failover procedures, who validates that information? (๐) Triples facilitate this by providing a mechanism to detail what an aspect means, and what the aspects used to describe those aspects mean. If a failover procedure to provide availability requires particular staff, this can be added. A famous example, from when many businesses used on-premises database clusters, is failing back a failed database to ensure it can fail over again, when it wasn't possible or convenient to automate. Even if it is convenient, who verifies that the process works? Who verifies it when storage is changed? The design might specify active/passive failover to meet the availability requirements, but there is another entire graph of aspects that can be added on and changed easily about the details, without losing the original effort. If the CIO validated the triple "availability must_be 99.999", then this implies 24/7 DBA staff that understand how to fail over the database, orchestration with the storage team, contracts for storage, server, and software maintenance, and even HR relationships. None of this contradicts the original triple of availability that the CIO validated. Building these kinds of open structures works for more than just IT. It provides both elemental facts (atoms) and collaborative ways to build real meaning and validation and avoid erroneous conclusions, like "we are running at 99.999 availability because the CIO validated it" (ciaranhickey 2022). (โ๏ธ)
Participation often requires accountability. If I analyze a system, I am accountable for my work. Likewise, others who participate should be held accountable for their efforts. This is in a different direction than who validates. I might be responsible for determining who validates the database failover procedures. I then document them and get somebody to validate it. This also requires identity. (๐)
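One way to capture validation and accountability without losing the original fact is to record them as further triples about the triple itself; a minimal sketch with hypothetical names and dates:

from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

# The original fact, plus facts about who asserted and who validated it.
fact = Triple("availability", "must_be", "99.999")
provenance = [
    Triple("availability must_be 99.999", "asserted_by", "analyst_sally"),
    Triple("availability must_be 99.999", "validated_by", "CIO"),
    Triple("availability must_be 99.999", "validated_on", "2023-05-01"),
]

# Nothing here contradicts the original triple; detail was only added around it.
print(fact)
for t in provenance:
    print(t)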
One of the reasons that the Whiteboard and Stickies exercise works so well is that everybody can contribute. (๐) Rabbit holes are often revealed that need to be explored later. While it is true that there are often an abundance of ideas, yet few are actionable, triples allow these to exist with a minimum of meeting derailment. For instance, "One time five years ago we were down for three days because Mary left, and we never got another DBA with storage experience," is a perfectly good collection of triples. (Mary was DBA. Mary had storage experience. We were down five years ago. The incident five years ago lasted three days. The incident could have been resolved if the current DBA had storage experience.) If, upon investigation, it is realized that the real reason the database was down was that the firmware on the storage array was upgraded, the rest of the triples are still valid. At the same time, the only thing that needs to take place in the meeting is a sticky note on the whiteboard that says "Fred: One time five years ago we were down for three days because Mary left, and we never got another DBA with storage experience". The same exact note can be put down to be moved to triples later, matched against the appropriate knowledge graph, or visualized immediately by utilizing the tools described in this paper. Triple System Analysis shares some features with mind mapping (Sheng Huang 2022). (๐ชธ)
Much communication is done with streams. Sensor and vision data on an autonomous vehicle, human speech, and log entries are all streams. Metrics usually come in stream form, either in real-time or historical time-series data. Specific to system analysis, any change to the model needs to be communicated quickly via streams to facilitate collaboration. Triples can easily be embedded into streams, and have more inherent power than metrics in a time-series.
The system should leverage existing event streams for insight. Replay of system changes should be able to be mapped both by timestamp and with queries against multiple streams and datasets.
Streams in IT are often key-value pairs with timestamps. These streams are a relative of triples, with key-value being two to a triple's three.
Consider this local alarm log entry:
<154>1 20220327T211058.426Z dc1mon.example.com process="6 6 3" cpu="100" memfMB="10"
This is in Syslog format (Gerhards 2009). It illustrates a priority alarm coming through for the available CPU dedicated to process 6.6.3 running at 100 percent, with only 10 MB of free memory. This is an operational stream that could trigger integration with the graph visualization. (๐๏ธ) For instance, process 3, the Ad Targeting Engine in Fig. 14 could be highlighted with red when the alarm came through, as shown in Fig. 16.
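As a sketch of the integration (field names follow the example entry above; the highlight function is a stand-in for whatever redraws the graph):

import re

entry = '<154>1 20220327T211058.426Z dc1mon.example.com process="6 6 3" cpu="100" memfMB="10"'

# Pull the structured fields out of the log entry.
fields = dict(re.findall(r'(\w+)="([^"]*)"', entry))
node_id = ".".join(fields["process"].split())  # '6.6.3' in the DFD hierarchy

def highlight(node: str, color: str) -> None:  # stand-in for a real redraw
    print(f"highlight node {node} in {color}")

if int(fields["cpu"]) >= 100 or int(fields["memfMB"]) < 50:
    highlight(node_id, "red")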
Streaming techniques don't have to be applied to just operational events. We can put a triple update directly into an event stream:
<77>1 20220327T211058.426Z allsys tech="sally" triple="0๐ธ6๐ธ6๐ธโ๏ธ๐ธ3๐น๐ท๐นAd\nTargeting\nEngine\n2"
This change could be visualized in near real-time by updating the graph visualization, showing that we are now calling process 6.6.3 "Ad Targeting Engine 2". This facilitates collaboration, as participants can see their input live, much like (๐). (๐ฅ) (๐) The model can be replayed over time, based on the timestamp. For scaling, put the data into a time-series or graph database.
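As a sketch of consuming such a model-update event, with a plain text delimiter standing in for the reserved triple delimiter emoji and a simplified event format (both are assumptions):

TRIPLE_DELIM = "|"  # stands in for the reserved triple delimiter emoji

labels = {}  # current label per node path; the visualization re-renders from this

def apply_event(triple_field: str) -> None:
    """Apply a streamed triple like '0.6.6.3|label|Ad Targeting Engine 2'."""
    path, predicate, value = triple_field.split(TRIPLE_DELIM, 2)
    if predicate == "label":
        labels[path] = value
        print(f"renamed {path} -> {value}")

apply_event("0.6.6.3|label|Ad Targeting Engine 2")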
Fig. 17 shows our IT world, coupled to incremental workflow, managed by streams of code. Visualization and understanding is done mostly through BI and AI/ML, analyzing streams and stores of information. Cloud companies scrape large swaths of the world wide web to understand and visualize via AI models, tilting at the massive stores and streams of data, while consuming insane amounts of compute, network, and storage (Strubell, Ganesh, and McCallum 2019). It is a critical requirement that Triple System Analysis co-exist with and leverage BI and AI/ML tech, but I don't focus on it much in this paper, as there are oodles of staff available to do just about any kind of magic. Granted, the lifestyle portrayed in Fig. 18 is pleasant, but it leads to complacency, as system understanding and overarching organization goals are secondary to the appeal of dopamine pings (Chec 2020). I admit, I like the dopamine. It alleviates the grind with small wins. I'd like to flip the script around on agile: the customer doesn't care what the overall goal of the organization is. Triggering everything based on product is a mistake, particularly if your work as an organization is more important than an entertaining mobile app. Triple System Analysis facilitates org-wide understanding of systems and purpose vs. streams, and can match stream workflow velocity.
Updates need to be visualized quickly, directly from the streams.
Changes over particular time periods, or with certain constraints, should be easily filtered and viewed.
Imagine driving a car cross country, but only relying on stream analysis, as though you are driving through a visualization of a movie or a rendered Mandelbrot Set, unaware of the formula. (๐ ) (๐ญ๏ธ) Through the car windows flows scenery of billboards selling shaving products, mixed in with desert, lakes, and trees. We could base the decision on where to drive and how to operate the vehicle on the stream of information flashing by the windows and coming in through the sensors. Perhaps a satellite of love correlates the position of the car constantly and tells you where to turn (Lou Reed 2013). As time and technology progress, we get better equipment, better cameras, faster cars; we build models to correlate with the stream of data. Is that a tree or a lake? How does our software recognize it? What algorithm works best? As the car goes faster and the features going by get more complex, we are faced with needing more and more equipment to navigate, more satellites, deeper supply chains, and more complete structural models in the car (Anthony 2017). With enough compute and machinery, we might even be able to characterize the feel of the dirt, the way it might erode in the wind, just by the massive amount of data gleaned in the trip and stored from other trips. At the same time, a map that shows the roads, gas stations, hotels, and towns is prudent to carry. A checklist for changing a flat tire, a list of things to pack, what to do if the car overheats, how to hitch a ride, five ways to overcome a psycho killer that gives you a ride... all of this can fit in a small notebook (RHINO 2019) (Gawande 2010).
Having real-time measurements of the trip can be helpful, but what happens when there is no connectivity? What does your organization really need to run? Do you have it mapped out? What do you do when something unexpected happens? What do you do if that apt command in your Dockerfile uses a repository that is AWOL? A map lets you do things like continue on foot, change cars, or find the APT repository. A territory perspective ties you in to your service providers. Maps help ensure resilience. Note that a car full of territory equipment might usually win the race, at least while all the upstream dependencies are in good working order, including that satellite of love, but this is a different topic than resilience and autonomy. The massive compute, sensors, and AI/ML kit that processes data from various aspects of the scrolling desert approximates territory; however, it does this over the surface of the fractal, the movie, going for that statistical nudge in identifying the place on the map via the territory, without first agreeing on a map with the sponsors and stakeholders. A stream-only approach that relies on third parties cedes knowledge, because the knowledge is embedded in services and platforms that are often not useful if you change cars or decide to walk. The identified gullies and shorelines might be profitable, but where is your map that you can use to change cars or walk yourself? Autonomous vehicles take this to an extreme. A map is less important if you can't even drive your own car or fix it, if everything is a subscription service, and you own nothing, and decide nothing besides which corporate-created show entertains you this evening as you peel plastic off of your meal, or what piece of the product you fix in this week's sprint. You are reading this presentation via the machinery of modern stream workflows, and it truly is wonderful. I'm writing this on an OS with a Linux kernel at the base ("The Linux Kernel Archives"). This kernel powers most of the cloud interests. I am not arguing that we need to abandon the territory perspective, which does require massive compute and centralized resources. I am arguing that authoring our own maps as individuals and organizations is crucial to resilience, and that we should be suspicious of stream-only views, regardless of the sophistication of the AI/ML.
Quality aspects are often referred to as non-functional requirements; however, quality makes more cognitive sense to me. Since I'm pretty much off the conventional rails on much of this, I decided to just call these aspects quality. They are distinguished from features, as they do not affect the immediate use of the system when all is working properly, i.e., the features exist. Note that this is counter to the closest definition of quality I can find, although there is some backing for my stance elsewhere ("ISO 9000's Definition of Quality" 2013). I stick with my version, though, as it doesn't make sense to me to call quality how close to the functional requirements you can get. I'll side more with the agile folks on this one. Those things at that detail do change every day. My point is that these aspects can vary widely, yet the system can still function with the feature requirements. Non-functional vs. functional just seems wrong for the layperson. Of course it will function! The funny thing is, as much as I'm steeped in the world of these words, I still get confused with the discussions about quality. This is likely because of the term "quality assurance". I can see how testing the system fits the ISO version, but my-o-my, it seems like a stretch. For that matter, features is not an established word replacing functional requirements. I'll leave this up to the reader's preference. For me, I'm going to talk features vs. quality, as the only people that will protest are familiar enough to do a substitution. Frankly, even non-functional vs. functional gets tricky, even if you have been divvying up functional vs. non-functional requirements for years. Triple System Analysis is supposed to facilitate human cognition, right? Let's go off the rails (Ozzy Osbourne 2017).
How bad is it if this solution fails while in operation?
Availability is expressed as a percentage. 99.5%, for example,
means the system can be down 1.83 days per year, 3.6 hours per
month, and 50.4 minutes per week. Maintenance windows don't
normally count towards availability numbers, but they do
indicate how critical and tight maintenance windows should
be.
For Triple System Analysis:
It should be quick enough to be used at time of crisis, i.e., to
facilitate resilience. This means that the quality requirements
are extreme. Availability should be many 9s.
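A quick arithmetic sketch of converting an availability percentage into allowed downtime, matching the figures above (a 30-day month is assumed):

def downtime(availability_pct: float) -> dict:
    """Allowed downtime for a given availability percentage."""
    down = 1 - availability_pct / 100
    return {
        "days_per_year": round(down * 365.25, 2),
        "hours_per_month": round(down * 30 * 24, 1),
        "minutes_per_week": round(down * 7 * 24 * 60, 1),
    }

print(downtime(99.5))    # ~1.83 days/year, 3.6 hours/month, 50.4 minutes/week
print(downtime(99.999))  # five nines: about 0.1 minutes of allowed downtime per week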
These are capacity requirements beyond simply fulfilling the
normal operational aspects of the system. As an example, the
storage for the operating system and slack for patching would
not be part of this. External storage for images that grew by
1TB per month would be included.
For Triple System Analysis:
The constraint is human cognition. Within that constraint, it is
difficult for me to imagine a map where capacity is a problem. I
do address capacity limits further in (๐).
At what level of performance will users complain the system is
slow or locks up? What are the costs associated with
performance?
For Triple System Analysis:
Updates from a change of knowledge atom should trigger an update
in the visualization quickly enough that it does not hinder the pace
of collaboration. Generating the visualizations often takes
significant compute. Any personal computer with an optimized
graphics system from 2018 or later should be able to handle the
graph visualizations, as should mainstream mobile phones.
What are the expectations for adjusting performance and capacity
over time?
For Triple System Analysis:
The main constraint on scalability is the complexity of the
models. Like most of these aspects, there should be no technical
limit as long as the data is constrained to human cognition.
This means that it is critical that the types of models and
schemas that are reasonable to tackle with these methods are
made very clear. At what point should other methods be used? The
ideas of Triple System Analysis can be used with mainstream graph
databases. Triples scale (Stadler et al. 2012).
How important is it that the system can be expanded? What types
of expansion are required? This is less of a capacity issue and
more of a capability issue.
For Triple System Analysis:
There needs to be a path to machine cognition if the model
becomes too large. The data store at scale may be housed in a
time-series or graph database. While Triple System Analysis is
unique because of its human cognition priority, triples can easily
be made compatible with RDF and related standards (H. 2023g) (H. 2023a).
What forms of data, protocol and application interchange does
the system need to comply with?
For Triple System Analysis:
Any modern OS and popular web browser should work. The data
should be stored in a way that can be retrieved without
specialized applications. The triple.pub website will load emoji
used by this document into a Firefox web browser, so even
Windows 10 can read it (H. 2023h).
How is the system modified to improve security or correct
faults?
For Triple System Analysis:
The system needs to be able to be modified at any time without
disruption. There is no room to have users function as testers.
The design of the quality assurance process is determined at
initiation; however, the deployment and code simplicity should
be such that disruption is unlikely. Further, user-initiated
fall-back should be easy and obvious, a part of every effort.
Even when there are management tasks that need to be completed,
like an update of a public certificate, this should not block
use of the system at certain levels. As an example, in the case
of a certificate that is out of date, there should be alternate
paths of validation and transport available to users.
How is the system operated to keep the system running correctly
and securely?
For Triple System Analysis:
The software is identical across installs, and no state is
tracked with anything but the triples themselves, so this is
mostly not applicable. Distributing a web page is a relatively
trivial task.
How is the system, or parts of the system moved to a different
platform or service?
For Triple System Analysis:
The core knowledge and visualizations should be viewable as-is
with a modern phone or computer, 2022 forward, including macOS,
Windows, or GNU/Linux. The document you are reading now is
available in a portable form. PDF=Portable Document Format.
Triples, by nature, are portable, as they are knowledge captured
at the data level.
How much data can the system lose when it fails in terms of
time? 1 hour? 24 hours? None?
For Triple System Analysis:
The system should be able to be restored to the last atom of
data gathered. Essentially this means there is zero tolerance.
These methods are being used at times of crisis. As I explained
above in (โ), there is an assumption
that the requirements for resilience can also be used to
establish better telemetry towards goals, something that is
needed regardless of whether the immediate situation is labeled
as a crisis. But, if you are in crisis mode, and need to restore
the model to a previous state, there is no room for tolerance.
I'm writing many of these requirements with crisis in mind. Part
of my logic is that the reality of how people work is that they will
run right into disasters they create themselves. Most are
perfectly fine relying on insanely deep supply chains and
hundreds of millions of US dollars in datacenter equipment to
make decisions for their business (Franklin 2018) (Edwards
2023). Human minds are powerful, and currently in
abundance. We donโt need much to live, compared to datacenter
needs (Buis
2022). (๐ง)
How long can the system be down when it fails?
For Triple System Analysis:
The ability to view the current version of local data should
never be disrupted. Operational loss of system is unacceptable
at any level. This is facilitated via (๐ฅก).
This includes security features like preventing internal users
from seeing traffic over the wire. It is not about maintaining
security, which would be manageability and
maintainability.
For Triple System Analysis:
Since the design is a single web page, the page itself needs to
be secured appropriately as far as view, and the integrity of
the page needs to be audited and enforced.
Triple System Analysis assumes a first-time implementation. It is unlikely that collaboration and streams features will be needed right away, and the complication will be in conflict with agility. There are oodles of people able to do streams work collaboratively with oodles of products, so mostly streams are considered out of scope for Triple System Analysis. (๐ค) The important part is that Triple System Analysis can co-exist with streams easily. Triples can be passed via streams, and streams can tag triples to pin knowledge. Triple System Analysis is a bit more than a minimum viable product, but it is close.
For more sophisticated use, see Log Integrity, which covers various messaging protocols, collaborative triple creation, identity, integrity, and logging related to (H. 2023e). Fig. 19 is an example of the kinds of tech I provide and write about on Log Integrity. I plan to spend more time on Log Integrity after I publish this document. As of May, 2023, it is pretty light, just one Plotly Dash example for managing triples for a DFD (Plotly 2023).
I assure you that the design is complete as far as a single-page application that helps people collaboratively model, understand, operate and improve systems using local-first graph analysis, prioritizing human cognition over machine. It is miles above traditional analysis and diagramming tools. I have attempted to constrain it so that it is very solid and useful. It is quite possible to use this outside of multi-level DFDs, but that is the most likely application. (๐ผ) The design goes wider, though. (๐ต) shows how it is possible to cognitively understand a physical supply chain with Triple System Analysis.
On to Triple System Analysis design! Let's start with the schema, and move on to the components of the single-page app.
Usually the focus for IT systems is on software that transforms and visualizes the data. Changes to the software are triggered by design considerations and user product requests. Triple System Analysis pushes the difficult stuff down to the data itself. Most of the lift when implementing this design is deciding what to capture with triples. There are many live examples in the operations section. (โ๏ธ) Schema design is where collaboration happens. Directly coupling human effort in this way facilitates cognition and active, real-time modeling without heavy technical barriers. There is no heavy framework, no software, no cloud, few symbols to learn, and yet it is possible to collaborate on, model, and visualize systems. It is easy to use this same technique with text as shown in (๐ ); however, this design focuses on emoji.
Nodes are the objects that comprise your system. List these with brief labels, comments, and delimiters so nobody is confused by what the node emoji means. Delimiters make it easier to split a path into components, as the visual character is made up of multiple characters that vary by platform and language. (See (๐) for a discussion of storage design.)
Here are my recommended reserved emoji:
๐ธ = delimiter for node path
๐น = delimiter for triple
โช๏ธ = delimiter for level
๐จ = comment
๐ท๏ธ = label
Pick a single emoji to represent a node type. For instance, ๐ might be a bus in a model of Washington and Oregon roads.
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธ1๐น๐ท๏ธ๐นWashington
๐บ๏ธ๐ธ๐๐ธ1๐น๐ท๏ธ๐นElectric
๐บ๏ธ๐ธ๐๐ธ2๐น๐ท๐นHybrid
๐บ๏ธ๐ธ๐๐ธ3๐น๐ท๏ธ๐นGas
๐บ๏ธ๐ธ๐๐ธ4๐น๐ท๏ธ๐นCNG
๐บ๏ธ๐ธ๐๐ธ1๐น๐ท๏ธ๐นSmall Bus
๐บ๏ธ๐ธ๐๐ธ1๐น๐จ๐นLess than 10,000 GVWR
๐บ๏ธ๐ธ๐๐ธ1๐น๐ท๏ธ๐นLarge Bus
๐บ๏ธ๐ธ๐๐ธ1๐น๐จ๐นGreater than 10,000 GVWR
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธ2๐น๐ท๏ธ๐นOregon
Fig. 20 shows how these look.
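As a sketch of splitting such lines apart, with plain text delimiters standing in for the reserved emoji (the delimiters and sample line below are assumptions):

NODE_DELIM = "/"    # stands in for the reserved node-path delimiter emoji
TRIPLE_DELIM = "|"  # stands in for the reserved triple delimiter emoji

def split_triple(line: str):
    """Everything before the first triple delimiter is the unique node path."""
    path, predicate, obj = line.split(TRIPLE_DELIM, 2)
    return path, predicate, obj

line = "map/bus/1|label|Small Bus"
path, predicate, obj = split_triple(line)
print(path)                    # map/bus/1
print(path.split(NODE_DELIM))  # ['map', 'bus', '1']
print(predicate, "->", obj)    # label -> Small Bus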
The path of the node is everything before the first ๐น, and it must be unique. If you are short on emoji, or would rather not deal with that at the moment, just pick an emoji for everything as shown in Fig. 21:
๐บ๏ธ๐ธ๐ต๐ธwa_rds๐น๐ท๏ธ๐นWashington Roads
๐บ๏ธ๐ธ๐ต๐ธsmall_bus๐น๐ท๏ธ๐นSmall Bus
๐บ๏ธ๐ธ๐ต๐ธlarge_bus๐น๐ท๏ธ๐นLarge Bus
๐บ๏ธ๐ธ๐ต๐ธor_rds๐น๐ท๏ธ๐นOregon Roads
It might make sense for your application to skip the labels at first and use the text part of the path after the emoji. It is easy to add nodes later if you wish. The only catch is that the nodes need to work with your chosen nesting (๐ช) and relations (โ๏ธ). Add triples like this for clarification:
๐บ๏ธ๐ธ๐ต๐ธsmall_bus๐น๐จ๐นLess than 10,000 GVWR
๐บ๏ธ๐ธ๐ต๐ธlarge_bus๐น๐จ๐นGreater than 10,000 GVWR
Decide what properties make sense for the nodes. These can be added later, but it helps initial visualization if there is a set to start. As an example, we could add โจ to mean the level of perceived luxury and status a car has. These two triples:
๐บ๏ธ๐ธ๐๐ธ1๐นโจ๐น11
๐บ๏ธ๐ธโจ๐น๐ท๏ธ๐นBling Level
would translate as "On our roadmap graph, car number 1, a BEV, has a bling level of 11." After this has been captured, somebody asks, "What does 11 mean?" This triple is added:
๐บ๏ธ๐ธโจ๐น๐จ๐นBling level ranges from 1-10
The analyst assumed that 11 was understood in Spinal Tap terms, but it was a dated reference, so a base64 image was added to the path of the property to motivate memory:
๐บ๏ธ๐ธ๐๐ธ1๐ธโจ๐ธ11๐น๐ผ๏ธ๐นhttps://triple.pub/files/blingimage.b64.txt (H. 2023b)
Notice how we now have a URL (IRI) on the right? We are starting to wade into the logic of embedding meaning on the world wide web (Berners-Lee, Hendler, and Lassila 2001). Tim Berners-Lee originally saw machine cognition in his design of the WWW (Berners-Lee 1989). In this particular case, listing the base64 output would be silly. Putting a visual picture would derail the general idea of triples with Triple System Analysis, as they are processed as lines of text. Humans can't read base64 in a meaningful way. It is a good application for referencing a web document. Focusing on human cognition at the data level changes our design. The resulting set of triples is much easier for humans to understand at the data level (H. 2023g) (H. 2023a). This also shows how freewheeling Triple System Analysis is with meaning. Triple System Analysis borrows many ideas, including the open world assumption, to gain flexibility and leverage existing tools; however, the meaning is scruffy because the systems analyzed are assumed to be for human cognition as well (Munn and Smith 2008) (Smith 2001). Triples can be philosophical, as Smith, Welby, and Peirce show (Broad 1912) (Bergman 2018a). After all, we are considering meaning at a deeper level, analyzing it by breaking it down and re-assembling it. What do you mean by 11? What do you mean by bling level? These are reasonable questions. Do we use poems? Words? How do we express what we mean? (Popova 2017) (Laura Riding Jackson Foundation 2021). Triples provide an unlimited way to refine the meaning, the properties of nodes, easily. See (๐ฃ๏ธ) for a more complicated treatment. Isolated nodes floating around do have a certain level of meaning, but how they relate is even more meaningful. Once we have the properties of the nodes pinned down, we can relate them.
Triples are interesting in many ways. A property is a kind of relation that pins meaning to a node. There are other relations between nodes, though, that are fundamentally different. This gets into more philosophy. The key thing is that at some point you need to relate the nodes. Because of human cognition, Triple System Analysis is limited to a single class of relation. This is your primary relation predicate. It never changes; however, there is direction. Consider these triples:
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธWA๐น๐ท๏ธ๐นWashington
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธOR๐น๐ท๏ธ๐นOregon
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธCA๐น๐ท๏ธ๐นCalifornia
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธOR๐นโ๏ธ๐น๐ฃ๏ธ๐ธWA
๐บ๏ธ๐ธ๐ฃ๏ธ๐ธOR๐นโ๏ธ๐น๐ฃ๏ธ๐ธCA
Note that we are using two-letter state codes instead of a number. This is perfectly valid for this design. A relation is what connects the model at the current level. This should be the same at all levels. If you are working with information systems, consider the relation of data to/from. An org chart is "reports to". A data flow is "receives/sends data". A relation is a line that is drawn on a graph of the system. As an example, say you are part of a group that gets together when the water for your city is poisoned by a chemical spill. In this case, we might consider a couple of different relations in our group. Are we going to "clean potable water" or "move potable water"? Pipes, trains, and tanker trucks might move potable water, and the relation would be flow. If the focus of the analysis is on a machine that cleans potable water, then the relation might be "needs".
A relation is signified in the triple by an arrow:
โ๏ธ = Both directions
โฌ
๏ธ = Backward
โก๏ธ = Forward
Backward means the object provides what is consumed by the subject. Alternatively, the object is the target of the subject's relation. For flows, this is clear, as it is easy to establish what is going where. For other relations it is more difficult. If I need water, I would use a forward arrow. Dependencies, though, can also act like flow, e.g. (๐ง), so backward makes more sense. Whatever you choose, be consistent, and don't get bogged down in long talks about which direction the arrows go. Here are three triples from Fig. 12:
0โช๏ธโ๏ธ๐ธ4๐นโ๏ธ๐น๐ฝ๐ธ2 (H. 2023a)
0โช๏ธโ๏ธ๐ธ4๐นโ๏ธ๐น๐ค๐ธCstS
0โช๏ธโ๏ธ๐ธ4๐ธโ๏ธ๐ธ๐ค๐ธCstS๐น๐ท๐นTickets
This shows how to consider a relation a node, kind of like how we considered the bling level in (๐ชง). Triples are magical that way. It is important to constrain it, as multiple dimensions or too many symbols will confuse the user. Let's look at a materials supply chain model for cups.
The relations in Fig. 23 are flow of cup materials. This example uses text, color, and Gane and Sarson notation shapes instead of emoji. ACME Cups manufactures ceramic mugs that are distributed throughout the country. Zellostra Mud Mining provides mud that is then filtered and injected into mugs that are fired in a kiln and glazed by glaze from Queen Glaze Co.
Motion of mug materials is tracked with the flow. Materials at rest are either mud (M), cups (C), or glaze (G). There are multiple companies involved in the supply chain for mugs. Staff are designated by company as the first letter: Queen Glaze Co. (Q), ACME Cups (A), Zellostra Mud Mining (Z), Temp Superheroes (T), and Xtra Fast Delivery (X), with a second letter of E (entity). Materials are moved or transformed with processes designated by the company letter first, P second, and a sequence integer, as well as color coded. The IDs are unique for all. This is just a high level; however, the processes that change and move the materials can be exploded into separate diagrams for more detail.
It is quite difficult, if not impossible, to obtain a simple diagram like this just from operational and other gathered real-time metrics. It requires interviewing the business stakeholders. (๐) (๐ญ๏ธ) (๐ ) Because of the nature of graphs, the diagram can be collaboratively built and delegated. ACME gets glaze from G3, and ACME can modify their part of the diagram without having to be concerned with changes Queen Glaze might make to their process. This can be vetted with operational stream data.
What does this get us? If the power is out, we could look at this single graph, and see that the items that we are concerned with are all purple. True, a power outage might limit the ability of staff to get to work, but let's assume that staff are available. Any materials at rest should still be at rest with or without power, so let's look at purple. AP1 can use manual screening techniques if the electricity goes out. AP2 and AP3 require a generator. AP4 and AP5 only need lighting. If there are holdups in the line, the graph can show who to turn to. If no temps show up from Temp Superheroes and the phones are out, we would need an address.
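As a sketch of the power-outage question as a query over the model, with hypothetical process IDs and a needs_electricity property standing in for the purple color coding:

# Hypothetical ACME processes with a property marking electricity dependence.
processes = {
    "AP1": {"label": "Screen mud",  "needs_electricity": False},  # manual fallback
    "AP2": {"label": "Inject mud",  "needs_electricity": True},
    "AP3": {"label": "Fire kiln",   "needs_electricity": True},
    "AP4": {"label": "Apply glaze", "needs_electricity": False},  # lighting only
    "AP5": {"label": "Pack mugs",   "needs_electricity": False},
}

# "If the power is out, which of our steps need a generator?"
at_risk = [pid for pid, p in processes.items() if p["needs_electricity"]]
print(at_risk)  # ['AP2', 'AP3']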