⚠️⚠️⚠️ Please note that this work is incomplete, full of errors, and changes daily. I will remove this notice when I am done. ⚠️⚠️⚠️
Where are we? Where would we like to go? How do we get there? These questions seem simple enough, but they are difficult to answer as the world streams by like a giant movie on the screen of our perception. We are aware that the systems we create interact negatively with the biosphere, but our cognition is limited. We use reporting and AI/ML to understand our telemetry, gauge system function, and predict our course; however, this does not address our goals and choices from a human perspective. Besides the overwhelming amount of data, our cognition of streams is hijacked by advertising and culture. The “billboard on the side of the road that screams reassurance that whatever you are doing is okay” works. Smoking is okay in the present, until the outcome of our choices catches up with us. (Collins 2015) (Taylor, Alan 2007) How do we determine our telemetry and goals as humans within extremely complex, nested systems? How do we establish the meaning of the hijacked movie we are in?
A screenplay can be understood by humans. Actors read the screenplay to see if it is something they are interested in. It is a re-usable document that helps humans cognitively understand the movie enough to participate. It captures just enough knowledge to express the ideas as a movie. Creation of a movie depends on casting, the imagination and technique of the director, as well as technical resources available. A screenplay does this by working within established meaning and conventions. It is possible to insert scenes and modify characters easily. As stakeholders in the systems that we rely on, what is our equivalent screenplay?
AI/ML and business intelligence tools running against the finished movie, metaphor or literal, can identify the actors and items in various scenes. They can categorize the props to recreate the scene with other actors superimposed. They can map the flow of the movie against theater and consumer behavior data to predict the amount of popcorn that viewers decide to skip out and buy during the scene. Much of the knowledge, though, is not recoverable from the surface movie with AI/ML. It is quite likely that the pauses, awkward moments, and humor will be lost on our models. We lose the original intent and the beauty of the way that the director, actors, and crew created the movie from the screenplay. This gets at the human intent and effort underneath the stream. How do we emerge from the streams of daily demands and unconscious forces that hold us down cognitively? How do we become directly engaged as humans, choosing our activity with agency towards agreed-on goals in a collaborative way? My solution is Triple System Analysis.
I discovered triples as I struggled to communicate about complicated systems with increasingly specialized engineers. I found that while I couldn’t find anybody who could answer questions about the overall system, it was relatively easy to establish small facts. Different people knew different facts. As I tried to visualize the facts in an automated way, I realized that the bioinformatics field had many tools that could help. This was also when I realized that my success with using data flow as a model was related. Triples form facts with a particular convention.
A triple can be illustrated by “monitor has_resolution 2048X1080”. The “Father of Pragmatism”, Charles Peirce, came up with triples in the nineteenth century. (Bergman 2018) Triples are composed as a subject, predicate, and object. Triples decompose complicated systems into small facts that can be easily established and collaborated on iteratively. They can be assembled to visualize a broader perspective, yet still be agile enough for tactical response. More importantly, from my perspective, is that those small facts can be understood cognitively, and then treated as an atom in broader knowledge.
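To make this concrete, a triple maps onto the simplest of data structures. Here is a minimal sketch in Python (the fact list and function name are my own illustration, not a fixed format): a flat list of 3-tuples is already a queryable fact store, with no schema required up front.

```python
# A triple is just (subject, predicate, object). A flat list of them is
# already a queryable fact store; no schema is required up front.
facts = [
    ("monitor", "has_resolution", "2048X1080"),
    ("monitor", "has_input", "HDMI"),
]

def objects_of(subject, predicate, facts):
    """Return every object asserted for a subject/predicate pair."""
    return [o for s, p, o in facts if s == subject and p == predicate]

print(objects_of("monitor", "has_resolution", facts))  # → ['2048X1080']
```

New facts are simply appended as they are established; nothing existing has to change.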
Stream perspectives, while important, are usually not conducive to authoring a screenplay. An event stream has a timestamp and associated data that corresponds to that time. The individual event might have a single one-off plain text log entry, or it might have structured data in the form of JSON or key-value pairs. Knowledge of a system based on streams relies on external expertise, a model in the intelligence tool that forms the streams around it for cognition. This is a form of black box modeling. Realistically, we have a massive demand for stream analysis. We have a corresponding pool of people that are focusing on stream analysis for their jobs. The goal seems to be to model a human mind to get cognition of flows like human speech and ideas in real time, or autonomous vehicles, or what lever to trigger at what time to make the most wealth. I don’t see people striving, as Donella Meadows urges, to help humans understand the systems they rely on without compute. (Meadows, Donella 1977) This brings us back to the three questions. Where are we? Where would we like to go? How do we get there? I am fine with validating my answers with terabytes of streams and various models; however, I want to answer those questions cognitively, at a prudent and possible level as a human, as a stakeholder in the biosphere. Please don’t worry. For those of you looking for a technical presentation of how to visualize systems with triples, I promise I will deliver code and graphs of fury. 🥋
I’ve used the metaphor of the screenplay to help imagine a different way of looking at knowledge from a human cognition perspective, but how are triples related? Triples can be assembled as a graph. The entire World Wide Web is a giant graph made of triples. Intention is important. If I make the correlation “My cat is a calico” and create a link to a website about calico cats, I am creating a triple intentionally. This is where identity plays into SEO. If I’m curious what the original author thinks intentionally, this is an entirely different problem than if I’m trying to find out what the best website on the internet is for calico cats. A search engine establishes rank based on the broader graph of all links for calico cats and what links to where in a weighted graph of connections. My method focuses on stakeholders establishing their telemetry via agile maps, a screenplay that can be quickly assembled to facilitate resilience during crises. After I explain triple system analysis (3SA), I will lay out requirements for resilience, which is the goal of the solution I describe.
Let’s start with these four sentences: Tigers eat cats. Cats eat rats. Cats eat mice. Mice eat cheese. These are triples, and they can be represented with this graph:
Emoji are easier to recognize:
🔹 delimits the predicate, either a primary relation or other predicates listed in design.
Here is the graph with emoji:
Notice the flow direction. Cheese flows through mice to tigers. The relation direction doesn’t correspond to flow direction, necessarily. We are doing what Barry Smith would describe as scruffy, perhaps with a superlative or two. (Smith 2001) The graph can prompt questions, like “What does the rat eat?” or “If the cheese is poisoned, what animals might be affected?”. The fact that we don’t have anything for the rat to eat on the graph doesn’t invalidate the graph. We can add what the rat eats as we learn. Likewise, the fact that a mouse eats peanut butter off of our traps doesn’t invalidate the fact that the mouse eats cheese. We can also do some inference, in that the tiger might well be eating cheese. If the cheese was poisoned, we could follow the graph to see what animals would be affected. The important thing to notice is that these small facts, a triple, can be added at any time to the model. We could add 🐅 🔹➡️🔹🐖 later on, if we discovered that tigers ate pigs.
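This kind of inference is easy to automate. As a minimal sketch (plain Python, no graph library; the function name is mine), here is the poisoned-cheese question answered by walking the “eats” edges backwards:

```python
# A triple store as plain tuples, plus a reachability query that answers
# "if the cheese is poisoned, what animals might be affected?"
triples = [
    ("tiger", "eats", "cat"),
    ("cat", "eats", "rat"),
    ("cat", "eats", "mouse"),
    ("mouse", "eats", "cheese"),
]

def affected_by(poisoned, triples):
    """Walk 'eats' edges backwards: anything that eats a poisoned thing,
    directly or transitively, may be affected."""
    affected, frontier = set(), {poisoned}
    while frontier:
        eaters = {s for s, p, o in triples if p == "eats" and o in frontier}
        frontier = eaters - affected
        affected |= frontier
    return affected

print(sorted(affected_by("cheese", triples)))  # → ['cat', 'mouse', 'tiger']
```

Adding 🐅 🔹➡️🔹🐖 later is just one more tuple in the list; the query doesn’t change.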
Let’s analyze a larger system with triples to demonstrate how they work. Consider what modern compute needs at a high level.
💻️ ⬅️ 🏗️ ▫ 💻️ ⬅️ 🏭️ ▫ 💻️ ⬅️ 🚢
💻️ ⬅️ 🧊 ▫ 💻️ ⬅️ 🧍 ▫ 💻️ ⬅️ ⚡️
Compute needs to be constructed and manufactured using globally distributed parts. Compute needs to be cooled, and needs humans and electricity to operate. These are fairly straightforward facts. Triples can be written in many forms, most of which are optimized for machines. We are optimizing our analysis for human cognition, so using emoji has benefits. It is easy to picture the relations in this graph:
We don’t have to toss established knowledge to build out perspective. For instance, say we want to distinguish local transport from global. We could add 💻️ ⬅️ 🚚 to show that compute uses local transport. The fact that some compute requires local transport does not negate the fact that some compute requires global transport. Perhaps we want to add that not all global shipping is necessarily based on oil. We want to allow for efforts like the Porrima or even good old-fashioned sailboats. (MS Porrima 2022) No problem, just add 💻️ ⬅️ ⛵️ and 🚢⬅️⚡️. If we want to account for the fact that cooling compute in datacenters requires water and electricity, that construction, transport, and manufacturing require oil and electricity, and allow for photovoltaic arrays, we can add these eleven triples:
🧊⬅️💧 ▫ 🧊⬅️⚡️ ▫ 🚚⬅️⚡️
🏗️⬅️⚡️ ▫ 🏭️⬅️⚡️ ▫ 🏗️⬅️🛢️
🚚⬅️🛢️ ▫ 🏭️⬅️🛢️ ▫ 🚢⬅️🛢️
⚡️⬅️☀️ ▫ ⚡️⬅️🛢️
Here is the resulting graph:
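Graphs like this don’t have to be drawn by hand. As a hedged sketch, assuming Graphviz DOT as the rendering target, the eleven triples above can be parsed straight from the “▫”-delimited text; stripping the invisible emoji variation selector also normalizes symbols that are sometimes typed with it and sometimes without (🛢 vs 🛢️):

```python
# Parse "▫"-delimited emoji triples ("A ⬅️ B" reads as "A needs B") and emit
# Graphviz DOT text. Stripping the variation selector (U+FE0F) normalizes
# symbols typed with or without it, so 🛢 and 🛢️ become the same node.
VS = "\ufe0f"

lines = [
    "🧊⬅️💧 ▫ 🧊⬅️⚡️ ▫ 🚚⬅️⚡️",
    "🏗️⬅️⚡️ ▫ 🏭️⬅️⚡️ ▫ 🏗️⬅️🛢️",
    "🚚⬅️🛢 ▫ 🏭️⬅️🛢️ ▫ 🚢⬅️🛢️",
    "⚡️⬅️☀️ ▫ ⚡️⬅️🛢️",
]

def to_dot(lines):
    edges = []
    for line in lines:
        for triple in line.split("▫"):
            if "⬅" not in triple:
                continue
            subj, obj = (part.strip().strip(VS) for part in triple.split("⬅", 1))
            edges.append((subj, obj))
    body = "\n".join(f'  "{s}" -> "{o}";' for s, o in edges)
    return "digraph needs {\n" + body + "\n}"

print(to_dot(lines))  # paste the output into any Graphviz renderer
```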
Let’s imagine that while we are building out our model of what compute needs, another group is building out a model of what humans need. This is an important aspect of 3SA. The graphs can be combined. Formal ontologies exist so that the meaning of the triples is agreed on. (ISO/IEC 21838-2 2020) The problem is that this is in conflict with human cognition. It doesn’t have to be a big problem, as long as we are aware of it going in. I’ll show you what I mean.
Consider what humans need. Here is a list of triples followed by an explanation (↘️):
🧍 ⬅️ 🌡️ ▫ 🧍 ⬅️ 🚰 ▫ 🧍 ⬅️ 🏥
🧍 ⬅️ 🍲 ▫ 🌡️ ⬅️ 🏠 ▫ 🏠 ⬅️ 🏗️
↘️ Humans need a certain temperature to live, potable water, medical care, and prepared food. Shelter is needed for humans to maintain tolerable temperatures, and this shelter needs to be constructed.
🍲 ⬅️ 🐄 ▫ ⚡️ ⬅️ 🛢️ ▫ 🚚 ⬅️ 🏗️
🍲 ⬅️ 🌱 ▫ 🚚 ⬅️ ⚡️ ▫ 🏗️ ⬅️ ⚡️
↘️ Prepared food for humans comes from animal and plant sources. Construction and transport need electricity, which is provided by oil. Transport needs to be constructed.
⚡️ ⬅️☀️ ▫ 🐄 ⬅️ 🌱 ▫ 🏗️ ⬅️ 🧍
🌱 ⬅️ 💩 ▫ 🌱 ⬅️ 🌊 ▫ 💩 ⬅️ 🛢️
↘️ Electricity can also come from the sun. Animals eat plants. Construction needs humans. Plants need fertilizer and water. Fertilizer comes from oil.
💩 ⬅️ 🐄 ▫ 💩 ⬅️ 🧍 ▫ 🍲 ⬅️ 🚰
💊 ⬅️ ⚡️ ▫ 🚰 ⬅️ 🌊 ▫ 🚰 ⬅️ 🏗️
↘️ Fertilizer can also come from animals or humans. Drugs need electricity. Potable water is sourced from rivers, lakes, springs, and groundwater. Processed food needs potable water, which needs constructed infrastructure to operate.
🏥 ⬅️ 💊 ▫ 🏥 ⬅️ 🧍 ▫ 💊 ⬅️ 🛢️
💊 ⬅️ 🚚 ▫ 🏥 ⬅️ 🛢️ ▫ 🏠 ⬅️ 🛢️
↘️ Medical care needs drugs, people, and oil. Drugs need oil and transport. Shelter needs oil for heating as well as components.
🚰 ⬅️ 🛢️ ▫ 💊 ⬅️ 🌱 ▫ 🏗️ ⬅️ 🛢️
🧍 ⬅️ 🌱 ▫ 🌱 ⬅️ 🌡️ ▫ 🚰 ⬅️ ⚡️
↘️ Construction and potable water infrastructure needs oil. Potable water distribution needs electricity. Humans can eat plants directly, unprocessed. Drugs are made from plants. Plants need particular temperature ranges to germinate and thrive.
🍲 ⬅️ ⚡️ ▫ 🍲 ⬅️ 🛢️ ▫ 🍲 ⬅️ 🚚
🌱 ⬅️ ☀️ ▫ 🚚 ⬅️ 🛢️ ▫ 🏗️ ⬅️ 🚚
↘️ Processed food needs electricity, oil, and transport. Plants need sun. Transport needs oil, and construction needs transport.
🏥 ⬅️ 🏗️ ▫ 🏥 ⬅️ 🚚 ▫ 🏥 ⬅️ 🚰
🏠 ⬅️ 🚚 ▫ 🧍 ⬅️ 🧍 ▫ 🏗️ ⬅️ 🏗️
↘️ Medical facilities need to be constructed, and also need potable water and transport. Shelter needs transport. Humans need humans to reproduce, and construction equipment is created with construction equipment.
We might argue a bit about whether a hospital is needed, but in our current civilization, this is reasonable. Likewise, in some societies transport is not needed to build a shelter. The advantage of this form of analysis is that the individual triples are relatively easy to agree on. Do we need oil for construction? Are drugs made with oil? This can be verified by experts to the satisfaction of those who are evaluating the system. If there is conflicting information, mark it as such and/or don’t include it. The triples can be assembled as a graph for visualization as the model is built out, which facilitates collaboration. Here is what the graph looks like:
If we decide that we will only consider transport that gets electricity from the sun, then we still have quite a few other problems to address. The graph helps put things in perspective, and facilitates human cognition of the system.
Back to our scenario with what compute needs. ☀️🛢️🧍🏗️⚡️🚚 have the same meaning and use the same symbol. 🌡️=🧊, 🌊=💧, and 🚚 on human needs could be either 🚢 or 🚚. If we combine the graphs with the translation for 🌡️ and 🌊, we get this:
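Combining the graphs is mechanical once the symbol translation is written down. A minimal sketch (the triples shown are a small subset I picked for illustration, not the full model):

```python
# Two triple sets with a translation table for equivalent symbols
# (🌡️ → 🧊, 🌊 → 💧). Merging is set union after translation, so the
# shared ⚡️⬅️☀️ fact is de-duplicated automatically.
compute_needs = {("💻️", "⬅️", "🧊"), ("🧊", "⬅️", "💧"), ("⚡️", "⬅️", "☀️")}
human_needs = {("🧍", "⬅️", "🌡️"), ("🌱", "⬅️", "🌊"), ("⚡️", "⬅️", "☀️")}

translate = {"🌡️": "🧊", "🌊": "💧"}

def merge_graphs(*graphs, translate=None):
    translate = translate or {}
    merged = set()
    for graph in graphs:
        for s, p, o in graph:
            merged.add((translate.get(s, s), p, translate.get(o, o)))
    return merged

combined = merge_graphs(compute_needs, human_needs, translate=translate)
# 3 + 3 input triples become 5 distinct facts after translation and de-duplication
```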
There are a few semantic issues here. We say that maintaining a certain temperature range requires water. While this is currently reasonable for most datacenters, it is less so for cooling humans. Humans do need water for other primary reasons. The main point I want to make, though, is that as a human trying to understand related systems, the map of compute and human needs is roughly as complicated as you can get and still take it in with a single viewing. I am tackling a narrow part of visualization and cognition. There are ontologies for many of these entities, but they would be too cumbersome to lay out in the way we have here. (Ong et al. 2017) The main reason why I started to look at these methods, though, was analyzing IT systems and data flow, and there is value to formal versions of this. The data flow ontology is simple enough to allow a cognitive, single-serving view, yet still support machine cognition and a formal, agreed-on method of analysis. (Debruyne et al. 2019)
Structured System Analysis was refined in the 1970s and 1980s as a way to analyze systems, primarily around data flow diagrams (DFDs). It uses three symbols and some conventions for connecting them to make data flow diagrams. (Gane and Sarson 1977) (Yourdon 1989) Think of this as a vertical, dumbed-down version of graph analysis. The beautiful thing, though, is that the methods apply to any IT system. 🎆 🎉 IT stores, transforms, and reports on information. Before most in compute called their profession IT, they called it IS, information systems, which is more appropriate. What we really care about isn’t the technology, it is the information and the system that stores, transforms, and reports on it, no matter what the transforms are, AI or not. This, folks, is our screenplay for IT systems. 🍻 First I’ll walk through the DFD symbols; after that, I’ll look more at the design of triple system analysis, including requirements for resilience.
While different authors choose different symbols to represent the nodes of DFDs, structured system analysis consists of:
👤 External Entity (Entity)
This is a source or destination of data. It might be a person, or a group of people, or even sensors.
⚗️ Process
This transforms data from one form to another.
QA uses the BugScan application to check for bugs. The data is stored in the BugScan database. Here is a set of triples for this diagram:
Here is a top level (0) of a DFD:
If we expand process 6, our view is entirely within Customer Relationship Manager:
The view is important. When we are in a view, a node, the node opens up as a graph. It is a wormhole into the perspective of the specific group: the perspective of the customer relationship manager, including social media and snail mail cards. Unless somebody is working directly with the Social Fanisizer, they don’t need to know details, but they would need to understand that Lead Addresses are used by both the email service and Social Fanisizer. If we zoom into Social Fanisizer, here is the expanded 6 within 6, the Social Fanisizer process, as a graph:
I’ve used DFDs without the automation at my work for almost a decade. Much of this solution description is based on that experience. It varies both from formal DFDs and from formal graph analysis. Unlike conventional graphs, hierarchical paths for the processes are a feature. Consider 0🔸6🔸11▪️⚗️🔸1🔹↔️🔹👤🔸CFO. In Gane and Sarson notation, this would show that process 6.11.1 has two-way dataflow with the CFO. The unique thing about 3SA is that the emoji visually show that meaning without needing to create symbols. The ▪️ shows the level or graph. The level is 0🔸6🔸11, and is also a process (0🔸6▪️⚗️🔸11) at level 0🔸6, which fits with the intent of Gane and Sarson. I do not pull meaning through between levels. In the previous example of Social Fanisizer, for instance, I would expect those with detailed knowledge of that particular process, the graph at the node 0🔸6🔸6, to have a different understanding of their processes and datastores. There may well be overlap, but since this is about cognition, it is completely fine to only enforce process as the nesting mechanism. Everybody can agree that Social Fanisizer is part of the Customer Relationship Manager. The groups that provide and consume information from Fanisizer, from their perspective, will likely be much different than a view of the entire org, which can be understood by anybody in the organization.
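To show that the path convention is trivially machine-readable as well, here is a minimal parsing sketch (plain Python; the dictionary shape is my own choice, not a fixed 3SA format):

```python
# Parse one 3SA triple: 🔸 delimits path segments, ▪️ separates the level
# from the node, and 🔹 delimits the predicate. The variation selector is
# stripped so ▪ and ▪️ parse the same way.
VS = "\ufe0f"

def parse_triple(text):
    subject, predicate, obj = text.split("🔹")
    level, _, node = subject.partition("▪")
    return {
        "level": level.strip(VS).split("🔸"),
        "node": node.strip(VS).split("🔸"),
        "predicate": predicate,
        "object": obj.split("🔸"),
    }

parsed = parse_triple("0🔸6🔸11▪️⚗️🔸1🔹↔️🔹👤🔸CFO")
# parsed["level"]  == ["0", "6", "11"]   (the level, 0🔸6🔸11)
# parsed["node"][-1] == "1"              (process 1 at that level)
# parsed["object"] == ["👤", "CFO"]      (two-way dataflow with the CFO)
```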
The primary goal of 3SA is resilience. The word “resilience” is a bit overused, but it is still a great word. From a systems perspective, resilience is realized at time of failure. This makes it different from the typical system design aspects, as it requires light and quick reactions in a situation that is likely much different than when the system was originally designed. (Daniel 2014) As humans, we rely on many systems that we fear may collapse. We are stressing the biosphere with all that we do as a civilization, including our cloud compute. (Monserrate 2022) The specifics of what kind of crises we will face over the next few decades are less important than our ability to respond in a resilient way. To be very clear: I don’t see resilience as a synonym for business as usual without severe consequences. We have earned severe consequences, and can expect them. Likely the consequences and situation will be different than we forecast. What we do, how we respond at that point, determines our resilience. We have lots of humans who can help cognitively at time of crisis, and I believe these methods will help. What facilitates resilience from a system perspective? What do/will humans need to adapt systems at time of crisis? My thesis is that communication, meaning, knowledge, maps, collaboration, and streams form interrelated requirements for resilience:
Normally a document like this is a taxonomy. There is a table of contents, and all of the requirements map to analysis, which maps to a design. The functional requirements all relate to each other in this case, so a mere taxonomy is not really sufficient. Narrative form is necessarily sequential, so it ends up being unwieldy, particularly for non-functional requirements.
This diagram shows the typical dance that a system engineer or analyst might navigate for a new information system:
The problem with this method is around agility. The tools available to a system analyst do not provide sufficient software development velocity within rapidly changing systems and customer demands. A complicated system might take several months of analyst work to generate a 60 page document that nobody on the team has time to read. The conclusion at many organizations is to eliminate the system analyst role, and include it under software development as a task called a system design interview. (Martin 2023) (Schaffer, Erin 2023) System analysis should start with human requirements, rather than solutions, but often the task of system design falls into a particular platform, as a developer thinks in terms of tools and APIs that they are familiar with. While this does contribute to velocity, it also tends to be myopic. As an example, the eight fallacies of distributed computing should be addressed, but often aren’t, because cloud is assumed. (Van Den Hoogen, Ingrid 2007) (Jausovec, Peter 2020) Involving cloud in the solution is likely correct for an organization, but relying on a cloud software developer to imagine non-cloud solutions is unreasonable, and this breeds organizational blind spots. So, we are back to the same problem. How do we get velocity, as well as the broad system perspective that an analyst role can provide? My intention is not to invalidate the system design perspective that different developers might have. My intention is to align these perspectives across different roles with a collaborative method that attains a suitable velocity. I’m taking an any/all approach that is greased with triples and instant visualization of emerging intent and design. The challenge of this document, then, is to show how these functional requirements and non-functional requirements are reflected in a proposed system design.
I will do this in semi-narrative form so that it can flow via a PDF; however, at the end there will also be an interactive artifact that utilizes the design.
This brings up another part of my experience with graph forms of analysis. All you really need is a toe-hold to start productive analysis. Because of the nature of graphs, once you get started, you can grow the analysis into a complete treatment. The reader should have enough triples so far to understand if we jump right in. Let’s lay some down:
💡🔸✅🔸🧑🔸🌱🔸🧠🔸vis🔹🏷️🔹Visual and semantic standards
💡🔸✅🔸🧑🔸🌱🔸🌳🔸wht🔹🏷️🔹What knowledge is crucial?
💡🔸✅🔸🧑🔸🌱🔸🌳🔸how🔹🏷️🔹How do we retain knowledge?
💡🔸✅🔸🧑🔸🌱🔸🗺️🔸qck🔹🏷️🔹Quick to learn
💡🔸✅🔸🧑🔸🌱🔸🗺️🔸eas🔹🏷️🔹Easy to change
💡🔸✅🔸🧑🔸🌱🔸🗺️🔸std🔹🏷️🔹Standard for area
💡🔸✅🔸🧑🔸🌱🔸👥🔸cnt🔹🏷️🔹All may contribute
💡🔸✅🔸🧑🔸🌱🔸👥🔸vld🔹🏷️🔹All may validate
💡🔸✅🔸🧑🔸🌱🔸👥🔸act🔹🏷️🔹All may be accountable
💡🔸✅🔸🧑🔸🌱🔸🌊🔸qck🔹🏷️🔹Quick system state updates
In graph terminology the root is 💡, this document, similar to level 0 in Gane and Sarson terminology. ✅=requirements 🧑=functional 🌱=resilience (There may be functional requirements that aren’t necessarily because of resilience.) I use the convention of delimiting the path of node IDs with 🔸. The predicate is delimited with 🔹. The node IDs must each be unique. In this case, the path of the node forms the ID. Consider “how”. This is in both nodes “💡🔸✅🔸🧑🔸🌱🔸☎️🔸how” and “💡🔸✅🔸🧑🔸🌱🔸🌳🔸how”. It is possible, and sometimes prudent, to build graphs from triples where there is no meaning in the path. Some use Universally Unique IDentifiers (UUIDs). It is true that mistakes in the path are easier to fix than if the store is a relational database; however, you will break references with paths. In this case, the solution description has a fairly standard form, so there is little risk. I’ve probably written 100 of them. It is easier than the taxonomy of a Word document that relies on indents and font type to arrange the paths.
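Because the path of each node forms its ID, the whole outline can be reassembled from the flat triples. A minimal sketch (using two of the requirement triples above; the nested-dict shape is my own choice):

```python
# Reassemble a nested outline from flat, path-keyed triples: the node ID is
# the 🔸-delimited path, and the 🏷️ predicate carries the human label.
requirement_triples = [
    "💡🔸✅🔸🧑🔸🌱🔸🗺️🔸qck🔹🏷️🔹Quick to learn",
    "💡🔸✅🔸🧑🔸🌱🔸🗺️🔸eas🔹🏷️🔹Easy to change",
]

tree, labels = {}, {}
for t in requirement_triples:
    node_id, predicate, value = t.split("🔹")
    if predicate == "🏷️":
        labels[node_id] = value
    branch = tree
    for segment in node_id.split("🔸"):
        branch = branch.setdefault(segment, {})  # shared prefixes nest once

# Both triples share the 💡🔸✅🔸🧑🔸🌱🔸🗺️ prefix, so the tree has a single
# root (💡) and the two leaves hang off the same 🗺️ branch.
```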
The document you are reading, the solution description, serves as a repository of the narrative for the nodes, so I won’t add those in. I will try not to repeat myself. If there is nothing of significance beyond the node labels (🏷️), I won’t add anything. So far, there are edges between the nodes that correspond to the green “facilitates” lines in the diagram. They are intentionally two way. There are other relationships between the nodes implied by the triskelion-like diagram center that I will write about later on. I’d like to add that it was accidental, or, at least, not a conscious decision to create such a nested rule-of-threes diagram. I simply outlined what I thought were the requirements for resilience with an agile system visualization and model. It all just fell into place with the diagram. Now that we have our triple pivot, let’s continue on with narration about the functional requirements. This doubles as a demonstration, so if you don’t understand, hopefully you will if you just follow along for awhile.
Wade through the mind-numbing muck of a corporate “what’s wrong with our group” exercise, and you will usually end up with the top ranking advice that better communication is needed. It may seem fruitless to spend all that time arriving at that conclusion every two years like clockwork, but it is still a core problem. Let’s analyze what we mean by communication, in particular communication about systems in crisis. Mostly this involves representing and sharing the knowledge of who, what, how, where, and why with a two-way tight coupling with streams and meaning.
Who is working on what?
This needs to be established quickly. There should be no required integration or schema update. A unique ID is the only procedural/technical item that should be enforced. Association of the ID with a project or role, and any other metadata, should simply flow in the stream.
What changes have been made?
Changes to the system should be posted to streams and visualized in near real-time.
How does the system work?
There should be no limitation on the types of models besides being able to decompose the knowledge. The way that the system meaning is expressed should be flexible, as different people have different perspectives and experience.
There is no room for confusion about what is meant. If I am stuck in the rain and talking with you over an unstable connection, and I want you to know I am in Portland, I might use a standard phonetic alphabet and say “Papa, Oscar, Romeo, Tango, Lima, Alfa, November, Delta”. The phonetic alphabet has agreed-on meaning, and is designed so that when transmitted over a stream (radiotelephone), there is a low possibility of confusion over similar-sounding words. Meaning should be quickly established with visual cues when possible.
There needs to be a mechanism to filter for crucial knowledge and to re-assemble for future, unforeseen views.
Quick to learn
There should be a minimal number of symbols on the map, requiring less than a minute to learn.
Stakeholder visibility
Different stakeholders have different needs for visualization.
Easy to change
Modifications to the underlying data should automatically adjust existing maps.
Standard for area
Consider standards that exist for mapping in the domain. (“OpenStreetMap Carto/Symbols - OpenStreetMap Wiki” n.d.)
During collaboration, there should be minimal limitations on who can contribute. If any participant feels something is wrong with the system map being built collaboratively, it should be easy to capture. Likewise, if a particular proposition (triple) is verified by an agreed expert, then that should also be easy to capture. Focusing on the core knowledge first, rather than code, platform, frameworks or metal, facilitates collaboration from the start. Further, a focus on decomposed knowledge, coupled with immediate visualization, makes meetings more productive, since there is little delay between gathering knowledge and the visualized model. Decomposed knowledge streams (triples) only require a “lock” or merge on the triple itself. (Kleppmann et al. 2019) (Shapiro et al., n.d.) It is less likely that during collaboration one person will step on another. One person might change the title while somebody else is editing the description. The big win for collaboration is on multi-level data flow diagrams (DFDs), as different areas of expertise can collaborate concurrently to build the models.
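A tiny sketch of why triple-granular edits merge cleanly: concurrent changes to different predicates of the same node touch different triples, so a plain last-write merge suffices (the keys and values here are my own illustration, not a prescribed store format):

```python
# Edits keyed by (subject, predicate): two collaborators touching different
# predicates of the same node can never conflict, so a plain dict merge
# resolves both edits without locking the whole node.
base = {("node1", "title"): "Old title", ("node1", "desc"): "Old description"}

edit_a = {("node1", "title"): "New title"}       # one person edits the title
edit_b = {("node1", "desc"): "New description"}  # another edits the description

merged = {**base, **edit_a, **edit_b}  # distinct keys: both edits survive
```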
Streaming should leverage existing event stream mechanisms for insight. Replay of system changes should be able to be mapped both by timestamp and with queries against multiple streams and datasets.
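A minimal sketch of replay from such a stream (the event shape is my own assumption): each event timestamps the addition or deletion of a single triple, so the system map at any point in time is a fold over the stream up to that timestamp.

```python
# Replay: each event timestamps the addition or deletion of one triple.
# The system map "as of" any time is a fold over the stream up to then.
events = [
    (100, "add", ("💻️", "⬅️", "🧊")),
    (200, "add", ("🧊", "⬅️", "💧")),
    (300, "del", ("💻️", "⬅️", "🧊")),
]

def replay(events, until):
    state = set()
    for ts, op, triple in sorted(events):
        if ts > until:
            break
        if op == "add":
            state.add(triple)
        else:
            state.discard(triple)
    return state

# replay(events, 250) holds both triples; by 350 only 🧊⬅️💧 remains
```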
Non-functional requirements are not measurable as application features, so they are often opaque from an agile user story perspective. As we all know, though, if the system is down, slow, or locks up, or if the user loses data, these requirements all of a sudden become the most important aspects. There is also a bit of crystal ball reading involved with non-functional requirements. What kinds of things can happen? I mentioned the eight fallacies of distributed computing, but there are other more banal problems. For instance, if data is deleted in the system, intentionally, accidentally, or through malware, and isn’t discovered for several months, then how is it recovered? Many backup systems do not account for this, yet it is a key part of recovery point objective (RPO) requirements. (Sharma 2020)
Here are the triples associated with the non-functional requirements (🖥️) for 3SA:
Note that there is nothing sacred about emoji. Text is fine, and often a better choice if there are lots of aspects.
This system is intended to be used at time of crisis. This means that the non-functional requirements are extreme. Availability should be as many 9s as you can write. Since the data itself is the knowledge of the system, this is attainable. All you need is a copy of the data and something to render it with. There are more details in the analysis and design sections; however, availability is extreme.
There should be no constraints on capacity. Since this is for human cognition of systems, the systems are relatively flat and low-dimensional. The stack of resulting triples, like availability, has no limits within the constraints of a system used for human cognition.
Updates from a change of a knowledge atom should trigger an update in the visualization quickly enough that it does not hinder the pace of collaboration.
Generating the visualizations takes significant compute. Any personal computer with an optimized graphics system from 2018 or later should be able to handle the graph visualizations, as should mainstream mobile phones.
I am assuming human cognition, so this constrains the data. The main constraint on scalability is the complexity of the models. Like most of these aspects, there should be no technical limit as long as the data is constrained to human cognition. This means that it is critical that the types of models and schemas that are reasonable to tackle with these methods are made very clear. At what point should other methods be used?
There needs to be a path to machine cognition if the model becomes too large.
The system needs to be able to be modified at any time without disruption. There is no room to have users function as testers. The design of the quality assurance process is determined at initiation; however, the deployment and code simplicity should be such that disruption is unlikely. Further, user-initiated fall-back should be easy and obvious, a part of every effort.
Even when there are management tasks that need to be completed, like an update of a public certificate, this should not block use of the system at certain levels. As an example, in the case of a certificate that is out of date, there should be alternate paths of validation and transport available to users.
This is a fully distributed system. Changes are distributed as single triples, along with deletions of relations. The software should be identical across installs, with no state tracked in anything but the triples themselves.
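One way to sketch this (an assumed design in the spirit of the CRDT work cited earlier, not a prescribed implementation) is a two-phase set: every install keeps the triples it has seen added and deleted, merging is set union, and the visible graph is derived, so no state exists outside the triples themselves.

```python
# A two-phase set over triples: each install keeps what it has seen added
# and deleted. Merging any two installs is set union on both parts, and
# the visible graph is always derived, never stored.
def merge_state(a, b):
    return {"adds": a["adds"] | b["adds"], "dels": a["dels"] | b["dels"]}

def visible(state):
    return state["adds"] - state["dels"]

site1 = {"adds": {("🐅", "eats", "🐈"), ("🐈", "eats", "🐁")}, "dels": set()}
site2 = {"adds": {("🐈", "eats", "🐁")}, "dels": {("🐈", "eats", "🐁")}}

state = merge_state(site1, site2)
# visible(state) keeps 🐅 eats 🐈; the deleted 🐈 eats 🐁 relation is gone
```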
The core knowledge and visualizations should be viewable as-is with a modern phone or computer, 2022 forward, including Mac, Windows, or *NIX.
Recovery Point Objective (RPO)
The system should be able to be restored to the last atom of data gathered. Essentially, this means there is zero tolerance for data loss. These methods are being used at times of crisis.
Recovery Time Objective (RTO)
The ability to view the current version of local data should never be disrupted. Operational loss of system is unacceptable at any level. This is facilitated via the portability requirements.
We now have a set of requirements, and we need to analyze various options as we arrive at a proposed design. This is also where the open world assumption shines, as we can track the possible paths. We might choose one solution now, perhaps incurring technical debt, but if things change, we can choose a different path without having to re-discover the entire system. This is one of the pitfalls of complex systems that are entrenched in an organization, particularly with staff turnover. The only expedient way to change it is to toss out the entire thing and buy all new, often renting the service from a third party. 3SA can help with that by relating a design aspect, or even a particular screen on an operational system back through analysis and requirements without having to read a 60 page solution description. Just the (small) facts Ma’am (triples). (Mikkelson 2002)
With modern software application development methods, we can expect the system to work slightly differently every day in some cases. This is one of the reasons why monolithic documentation from waterfall methods is not tenable. There is a straw man with waterfall: an architect or analyst simply tosses an artifact down the falls to the next group and washes their hands. The waterfall idea was used as an example of maladaptive workflow. (Clift 2021) (Royce 1970) Architects, analysts, and developers have different perspectives, yes, but that doesn’t mean it is prudent to push everything onto developers in order to liberate us from the perils of the waterfall. A user story is one way to capture a small item of concern, to change the design and quickly bring it through to product. The problem is that it is an isolated perspective. Some requirements are firm, and often in conflict with user stories. We are all familiar with the conflict of “I want to see that data on my phone”. Perhaps that is a good idea, but there are security, data convergence, and backup/recovery implications. We can interject roles into the workstream as cat herders, but what if these concerns are captured as triples at the data level from the beginning? They can then be visualized as different perspectives of knowledge. We can re-use and track to intent, leveraging the perspective of multiple roles.
An alternative to having a holistic understanding of the complete system, and pulling through requirements to the workstream, is to use stream telemetry and intelligence software.
Why, who, what, and where follow similar lines of thought. Agreed, these can change at various velocities. All can be captured in streams, but tracking from intention has advantages: primarily, the knowledge from triples can be assembled into a broader body of knowledge through established meaning.
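A small sketch of that assembly, with hypothetical identifiers: triples captured by different roles merge into one body of knowledge simply because they share established meaning, common subjects and predicates, with no translation layer in between.

```python
# Triples captured independently by two roles about the same requirement.
analyst = {
    ("req:phone-view", "requested_by", "user:sales"),
    ("req:phone-view", "concerns", "data:pipeline-6.6"),
}
security = {
    ("req:phone-view", "constrained_by", "policy:device-encryption"),
}

# Assembling broader knowledge is plain set union: shared meaning makes
# the perspectives composable without negotiation.
knowledge = analyst | security
for s, p, o in sorted(knowledge):
    print(s, p, o)
```

The same union works across any number of roles, and each triple remains traceable to the perspective that contributed it if provenance is recorded as further triples.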
💡🔸✅🔸🧑🔸🌱🔸🧠🔸vis🔹🏷️🔹Visual and semantic
🧬=Design (DNA is used to render the design into a lifeform, rather than being the actual design, but it works just fine as a symbol. Alternatively, like triples, perhaps the cumulative experience of living organisms was captured in a self-replicating way. “People get so hung up on specifics.” (Cox, Alex 1984))
One criticism, among many, of system analysis in general is the problem of computability. We live within extremely complex, interrelated systems where humans and nature are involved. What is the meaning of something that is in constant change? My stance is that meaning is what we set out to do with the system. This fits between communication and knowledge. We communicate our intention with items of meaning that ratchet knowledge further. When we merely fall back on streams and AI/ML to evaluate those streams, ironically, we lose meaning. The meaning becomes the existence of the system itself, rather than the essence.
Perhaps the AI model can predict behavior better in the existing world, but that is a different idea than intentional meaning from a human perspective.
Written language goes back 6,000 years. The earliest versions represented a concept with one token, much like emoji. It started with actual tokens to track debt and trade, evolved to pictographs, went through full-on emoji stage in some versions, and then moved on to written word like we know now. (“The Evolution of Writing Denise Schmandt-Besserat” n.d.) (Tepić, Tanackov, and Stojić 2011)
Knowledge at this point in our civilization arc is extremely complex. Imagine a dot on a line 6,000 years ago that moves up ever so slightly to the collapse of 1177 BC, goes down again, rises again with Rome an inch off the line, collapses again, and then, with our current rise since the Dark Ages, goes to the moon, literally and figuratively. (📑39)
There are several reasons why I conclude that emoji are useful to complex analysis:
There are also reasons why emoji are bad, primarily compatibility, manageability, maintainability, and security:
Long-form text is usually not immediately understood. For many people, only the first couple of sentences of a long narrative are read. Emoji have quite a few quirks that make them difficult to use in knowledge systems; however, from a human perspective they make understanding systems at rest and in flow much easier. The priority is human cognition, not machine cognition.
The Whiteboard and Stickies collaborative business analysis technique gathers users and experts of a system in a room with stacks of sticky note pads.
Under the prompting of an analyst, the users lay out aspects of the system by writing small bits of information on the notes and sticking them on a whiteboard. Many who have witnessed this technique have marveled at how well it works. The main reason this works so well is that it is collaborative without the burden of dense jargon or existing description of the system.
This method works well for communication among those present at the meeting. The analyst serves as an interpreter. There are limits to the collaboration, as it is all within a local room; collaborating virtually is difficult.
Meaning is often encoded in the color and text of the stickies, as well as text on the whiteboard. There is little control of meaning, as it is whatever people put on the notes. It is guided by the analyst, but there is no schema, which is a disadvantage as far as common, re-used meaning.
Knowledge is captured on the whiteboard itself. Somebody might take a picture or roll the whiteboard into another room. Capturing the knowledge is labor intensive and often a choke point of the analyst. There is an overall visual order. Sometimes the map is in swimlanes; sometimes it is more chaotic. The map usually needs to be expressed in a different form.
All may contribute without barriers to entry. There is instant validation of gathered information. If somebody puts up a sticky note that is inaccurate, it is easy to correct. There is a real-time update of the output of the group.
Whiteboard and Stickies is a great example of collaboration, primarily through the simple process and few barriers. It shows how knowledge can be broken down and re-assembled successfully, and the stream of changes can be instantly visualized.
Streams in IT are often key-value pairs with timestamps. These streams are relatives of triples, with key-value being two to a triple’s three.
Consider this local alarm:
<154>1 2022-03-27T21:10:58.426Z dc1mon.example.com process="6 6 3" cpu="100" memfMB="10"
This is in syslog format. (Gerhards 2009) It illustrates a priority alarm reporting that the CPU dedicated to process 6.6.3 is running at 100 percent, with only 10 MB of free memory. This is an operational stream that could trigger integration with the graph visualization. For instance, process 3, the Ad Targeting Engine on the 6.6 graph, could be highlighted in red when the alarm comes through:
Perhaps it is useful to alarm on a map of the entire system data flow. This shows an alarm on process 11, subprocess 1, the AI Feeder:
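The alarm-to-graph mapping described above can be sketched as follows. The key="value" field layout follows the example line in the text; the specific threshold rule and the highlight instruction are assumptions for illustration.

```python
import re

# Simplified syslog-style alarm, per the example in the text.
line = ('<154>1 2022-03-27T21:10:58.426Z dc1mon.example.com '
        'process="6 6 3" cpu="100" memfMB="10"')

# Lift the key="value" pairs out of the message.
fields = dict(re.findall(r'(\w+)="([^"]*)"', line))

# "6 6 3" identifies process 6, subprocess 6, sub-subprocess 3 -> graph node 6.6.3.
node = ".".join(fields["process"].split())

# Hypothetical rule: saturated CPU or low free memory drives the graph overlay.
if int(fields["cpu"]) >= 100 or int(fields["memfMB"]) < 50:
    print(f"highlight {node} red")  # prints: highlight 6.6.3 red
```

The same lift turns each stream entry into triples, for example (6.6.3, cpu, 100), so operational telemetry can land in the same knowledge graph as the design it alarms on.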