🌱 💦 💧 ⌛️ 🌐 📜 🚂 🛢️ ⛰️ 🏗️ 🐟️ 📣 🏃 🗜️

 interactive site w/ triples
 zip file of demo site

Triple System Analysis (3SA)

Author: Agatha Codrust

Last update: 2022-12-08


Stakeholders should be able to quickly analyze and understand their systems at a fundamental level, with tools they own and manage; however, this is often difficult because of frequent change, the need to scale, and a lack of experienced staff. There are many existing services and tools that provide operational metrics and prediction based on data streams, as well as knowledge graphs and AI that provide these kinds of insights, but they do not facilitate real-time human cognition. Humans have a limited ability to consider multiple actors and dimensions in real time, but this should not interfere with a fundamental understanding of critical systems. I propose that, starting with constrained semantic triples, aided by existing bioinformatics software and web browser standards, it is possible for humans to collaboratively establish system knowledge cognitively in real time. This facilitates resilience, as the techniques can be used in times of crisis and in service of emerging human goals. In this paper I document a design that fulfills the requirements of real-time human cognition, with a goal of system resilience. I demonstrate the technique with multiple examples, and show how the product of this effort is extensible to modern cloud compute services, AI, and knowledge graph platforms. I provide design details for a fully contained information processing application in a single HTML page that does not require network connectivity or deep supply chains, yet can still facilitate collaboration.

Document Format and Quirks

This document is formatted as a solution description for a design of triple analysis, as well as a guide to operating analytics based on the design. A solution description is not a particularly resilient information artifact, as it can get dense, and the terminology and form can intimidate. It is currently the most recognizable and useful form for system design, though, so it serves as a way to bootstrap the ideas for a technical audience that is unfamiliar with triples. Readers who are tight on time might be better served by skipping directly to the operations section, where the ideas are shown with live examples. A sequential reading will guide the reader through my interest, expected audience, premises, design requirements, analysis, and design, before arriving at the operations section. While the format and terms are likely unfamiliar to non-technical readers, I will explain each section's purpose without jargon. It is quite long. See my Single Page version.

Table of Contents

1 Front Matter


1.1 Dedication

For Sean and Sunn, who read many versions of this, encouraged me, and provided critical feedback.

1.2 Audience

3SA is intended for anybody who deals with complex systems. No specialized knowledge is required. While information technology is usually part of the solution to any complicated problem, the ideas of 3SA will work without it.


Considering the complexity of industrial civilization right now, and the associated stressors, the stakeholders for these ideas are everybody. As stakeholders in organizations and ecosystems that support vertebrates, we need the ability to quickly and collaboratively establish where we are, what that means, and where we want to go. It may be that the complex mess, the territory of our global supply chain that we are embedded in, is something that we will not be able to extricate ourselves from enough to reclaim individual agency. In that case, my audience is stakeholders of future systems.

Some of the wording and format is intended for system analysts, so it might be a stretch for some readers; however, this kind of documentation has a rich history, and it is suitable for bootstrapping a new way of thinking about system analysis. Where the meaning of the section or aspect is specific to system analysts, I will put an explanation in bold italic right after. The system considerations are common. It is not a waste of time for a novice to become familiar with the format of a formal solution description.

1.2.1 Not Intended for EA

If your focus is in IT, and you are satisfied with an enterprise architecture (EA) framework, then you are already convinced of the need for underlying maps and rules vs. the surface of typical IT movement. Enterprise architects are not my audience here. This is for the scrappy, for those trying to break out of the territory/flow/surface, while still building, improving, and operating live systems. 3SA is for those stuck in sprints with minimal architects on staff, and an overwhelmed non-technical project manager (if one even exists at all). These methods can start from almost nothing, in the middle, and scale up to a mediocre EA map. I don’t pretend that my methods can replace a good EA map. If TOGAF works for you, stick with it. If you already have the management buy-in, are comfortable with the cost, and feel the implementation of your EA maps for your various domains/views is sufficient, then this method is unlikely to be helpful. Be wary, though, of the limitations of a formal EA method where resilience is concerned. When a system and its dependencies are changing rapidly, the framework itself will likely need to change quickly, and EA carries significant risk there. Keep your eyes open for TOGAF mappings under BFO, for example, as a way to mitigate the risk. (📑42)

1.3 Foreword

Some guidance for the journey

Look at this solution description as though it comes from the middle of things, a plateau: not the peak, and not the base. This is counter to the way we normally look at designs, where the base leads up to a peak of design, and the middle is relegated to drudgery. This solution description does arrive at a design, but it is critical that the reader break out of the product mindset. There is no product. At the same time, this work is not a collection of basic tools and principles. It is a perspective on systems, combined with tools and a shifted world-view. Go on this journey with the mindset of coding on your own time, willing to learn web browser scripting and graph analysis. Take your web browser out for a spin, your own row boat in a sea of change, amid infinitely complex, inter-related systems. Establish where you are, where you want to go, and how to get there, but do it from the middle, right there from your perspective at the oars. For those who experienced the freedom of personal computers in the 1980s, this is much the same; however, the “PC” in this case is a web browser with a semantic data focus.

Errors in judgment can be avoided by simply going out a couple more degrees of Kevin Bacon. At one degree, battery-electric vehicles (BEVs) make sense. Perhaps BEVs make sense at two degrees, as solar and wind energy are becoming cheaper; however, at three degrees, where the components and the resources to create those components are accounted for, the truth becomes complicated. There are other systems in play, like climate, that have momentum and consequences that also need consideration. Do BEVs do what we need, and do it quickly enough? The extra work to analyze a few degrees further is prudent, as the stakes are very high. As stakeholders in ecosystems that support vertebrates, we need to ensure that we are working towards the goals we say we are. The prevailing idea seems to be, “Don’t worry your pretty little head past two degrees. We’ll take care of you.” This laziness suits many of us, as it is convenient to lose agency.

1.4 Hobbies, Work, Future Graphs



Click on images for interactive graph

1.5 Preface

Why did I write this? What is my interest?

I started my technical journey in electronics. I was also interested in computers, but I wasn’t satisfied just entering BASIC programs. I wanted to know how computers really worked. Over the next 10 years, I built my own Z-80 homebrew computer (📜4). During that time I moved 20 times, attended school, bounced around many technical and non-technical jobs, and finally settled into an IT career. Some solder joints were made with a cheap soldering iron. Some wire was whatever I could get for free. I was stuck with my originally chosen building technique of masses of point-to-point wires soldered together with the aid of pre-drilled, un-tinned perfboard:


Like most technical debt, I chose it because of the low cost and familiarity; however, whenever I moved the homebrew, the solder joints would break. During vacations from work, I would unpack my homebrew computer and repair it. My desk often looked like this:


I transferred my hand-drawn schematic to a computer-generated schematic to help troubleshoot. This experience works well as a lens for IT. Seemingly impossible, tangled messes of a system can be fixed with a good enough diagram of where the individual connections are, and an understanding of how they work together. At the same time, some fragile platforms will never be reliable, no matter how excellent the documentation and analysis are.

My IT career slowly moved (📜5) from an Operations perspective to analysis, and I focused on some of the same kinds of resilience issues, but with more actors and much bigger systems. At the same time I became interested in broader systems like the global supply chain and climate. In my mind, this all is related. We make seemingly small decisions that make sense at the time, like choosing to solder point-to-point with whatever wire is available, or not heeding the advice of Buckminster Fuller about oil (📑52), and we face those decisions every time we move.

In recent years I have taken these ideas, coupled with my experience collaboratively analyzing systems, and created a solution that fits the broader requirements of bigger systems.

1.6 Prologue

A stand-alone taste of what is to come.

Sue was just returning from lunch break to staff the operations center of the county water district. She had heavy-framed black eyeglasses, and wore jeans and a gray long-sleeved canvas shirt. She had been there through three reduction-in-force sweeps, and was one of two people left watching the operations screen and tending the district machinery. Three red circles appeared on the wall console, showing pump failures at Lovelane Lake, Upper Dredge, and Placidish River, connected by a web of pipes. Sue logged on to her computer to get details, but received “Access Denied”. She tried again, and got the same response. John was the only other person in the ops center. There were four room-width desks with empty chairs facing the screen on the wall, and he and Sue sat two seats apart in the second row.

“It won’t work, Sue. There is no authentication available, as Datacenter West is down, or at least unavailable.”

“Do you know what the pump failures are?” asked Sue, anxiously.

“I assume it is electrical, as the dam at Upper Dredge blew a transformer. Datacenter West gets power from there. They are on backup, so the datacenter is still live, at least until they run out of diesel, but nobody can reach it because the network is down. I’m going to drive out there and get a copy of our pipe layout and IoT keys, as we don’t have one on site. We may need those.”

John grabbed his backpack, shoved his laptop inside, and ran out the door, leaving Sue with a screen of red. John had been there almost as long as Sue. There were now seven circles of red on the console.

“I’ll log on locally”, Sue muttered, and was able to get a command prompt. She tried to ping the pump at Lovelane, and got nothing. She got the same result for the other pumps. Her email and other office software were also hosted at Datacenter West, so all she could do was run notepad and ping. She noticed Joyce outside the window waving at her, and opened the door to talk to her. Joyce worked in accounting, reconciling C* expense reports.

“Did you know the water is out? I can’t wash my hands,” Joyce said, annoyed.

“Oh no! If the water is out too, Datacenter West won’t cool.”


“Never mind. I think Upper Dredge dam is not pumping water. I got an alarm. I think there is still bottled water in the fridge. Power is going out across the county as well, so you might want to get your car out of the garage. I’m calling Laura Talos right now. She’ll know what to do.”

“OK. I think I’ll get home while the traffic lights are still working. See you tomorrow.”

“Think, think, think,” Sue reminded herself in a half-whisper. “John is getting the IoT keys, so that’s good. We can at least see if any Upper Dredge pumps were damaged in the surge when the transformer blew. Where did I put Laura’s number? I know I copied that down from the Datacenter West contact app just in case. Ah, here it is.”

“Hello, Laura? Sorry to bother you so late, but we have a situation. Upper Dredge blew a transformer… oh, you know… yes… yes… but the problem is that it powered some networking equipment, and we can’t reach many of the pumps. The monitoring screen is all red. Yes… yes… I thought that too, but there is no running water at the office, so the outage looks real. I’m worried about cooling at Datacenter West. John headed out to grab a copy of the IoT keys and an updated map. … No, we don’t have one here. I can’t get into email or access my spreadsheet of pumps. I can’t even log on at all. I had to log in locally. … Yes, yes… I’ll call after John gets back.”

1.7 Introduction

The problem the solution solves, who is involved, who is responsible, and who cares the most about a successful implementation

As a species we have the unique ability to gauge shared intention, and work together towards common goals. (📑56) We grow and share information culturally to supplement our natural, human abilities. We need help as the world gets more and more complicated. We are running at cognitive capacity most of the time. Our attention is stretched. It is critical that we make sure that our common goals map out correctly. Where are we now? Where do we want to go? How do we get there? We have always maintained guiding principles, sometimes written, to keep us on course as we navigate our lives, but we are in the middle of extremely complicated systems and quick change that makes it difficult to map the new territory at the velocity needed.

Information processing in large datacenters with deep hardware and software supply chains, coupled with extensive human capital, is how we currently attempt to shepherd the 8 billion (2022) people on the planet and biosphere. That is one way to tackle the task. The problem with this is that as individuals and organizations, we need to be able to participate and be assured that we are working towards goals we intend, rather than the goals of third parties or merely immediate concerns. The nature of the way that we deal with our challenges ends up clouding the original problem and goals. Another problem is that the deeper the supply chain, the less resilient it is. A deeper supply chain might facilitate control for those at the top of the pyramid. It might create more economic benefit while it is functioning well. It might also facilitate scaling and tailored, incremental changes; however, real change in the context of related systems is extremely difficult, and the overall system is necessarily fragile because of this. This goes for any system. If the software you use to manage your systems requires layers upon layers of features, accessed through layers of infrastructure, housed and secured with the effort of millions of individuals, change is excruciatingly slow and difficult. It may work fine when the systems are relatively static in nature, but quick change will often knock systems with deep supply chains out of operation. This is an assumption of 3SA.

On-premises hardware and software is often much more expensive to operate. 3SA does not necessarily promote a clawback of infrastructure from cloud. 3SA is about cognition of systems and expressing and managing those systems as humans with agency during quick change, and within extremely complicated systems. As humans we need to be able to tackle system analysis without relying on deep supply chains. Remember the dilemma about where you store your datacenter recovery documents? You shouldn’t only store them in your datacenter, and, yet, storing them outside the datacenter becomes a thorny issue. (📑64) From a broader perspective, our systems are meant for humans and the biosphere, or should be, so we need to be able to understand the schema that supports our everyday decisions, as we collaboratively work with shared intentionality towards common goals in the face of crises.

1.7.1 Checklists

Atul Gawande sees checklists as a useful tool to aid the cognitive jalopy that is the human mind. (📑57)

“The philosophy is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All you ask is that they talk to one another and take responsibility. That is what works.” ~Atul Gawande (📑24)

In his book The Checklist Manifesto (📑24), he describes the wonder of checklists used by surgeons and aircraft pilots. Imagine an aircraft pilot unable to take off because the checklist app they use is upgrading, or they have run out of data on their plan, or the datacenter running their app has a power outage, or network connectivity between the app hosting and storage back-end is lost. (📑44) The pilot might choose to take off anyway, seat of the pants and all, but what if the plane crashes because they missed an important step like “lock the wheels up” or some such? Perhaps the aircraft has super sensors and AI that obviate the need for checklists, but things change, the world is messy, and we have a lot of very competent brains that can adapt to the change without all of those external dependencies.

What if we could have both things? What if we could partition off a level of checklist flexibility, but soup it up a bit, and even leave room for more complicated levels of meaning that our challenged bio compute has difficulty doing well? (📑45) That is what this solution description is about.

1.7.2 3SA and Resilience

3SA facilitates resilience for any system at the time of crisis. Like a checklist, it provides room to adapt, and promotes collaboration and responsibility. It is a way of thinking about systems, as well as some simple techniques to represent system knowledge. There are certain precautions you can take for a variety of scenarios like power outage, earthquake, etc., but being prepared is different from resilience. Resilience successfully navigates the unknown crisis that hasn’t happened yet.


The ability of 3SA to be quick enough to respond effectively in a crisis means that it can model and analyze existing systems, allowing organizations to understand how their systems operate at a high level without extensive resources. Owning system knowledge at this level informs better decisions and will contribute to resilience.

While it is a general purpose technique, the 3SA solution includes data flow, a specific application of triples. This is appealing, as most current problems involve data… flowing. We are entangled in compute and data. As an example, when modeling a system that dispatches fire fighters, data about fires, locations of fire fighters and equipment, and routing of smoke metrics could well happen as data, rather than modeling the physical interactions. This also helps existing organizations, as many have data flow challenges, and have already abstracted their business processes within those flows.
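As a minimal sketch of that example (every node and predicate name below is invented for illustration, not part of 3SA itself), the dispatch data flows could be captured as triples and queried with plain browser scripting:

```javascript
// Hypothetical data-flow triples for the fire-dispatch example.
// Each triple is [subject, predicate, object].
const triples = [
  ["fireReports", "flowsTo", "dispatch"],
  ["smokeSensors", "flowsTo", "dispatch"],
  ["dispatch", "flowsTo", "routeOrders"],
  ["routeOrders", "flowsTo", "engine7"],
];

// A tiny query: everything that flows into "dispatch".
const inputsToDispatch = triples
  .filter(([, p, o]) => p === "flowsTo" && o === "dispatch")
  .map(([s]) => s);

console.log(inputsToDispatch); // ["fireReports", "smokeSensors"]
```

Because each fact stands alone, new flows can be appended to the list at any time without touching a schema.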

Decomposed graphs, as triples, for the purposes of system analysis, is my lifetime “Aha!” moment. Was the moment lying in wait in the world, like a video game sequence of puzzles that needed to be unlocked before the final prize? Is it merely a reflection of my experience, a solipsistic convenience at show’s end?

1.8 Assumptions

Key items that should be carefully considered before the rest of the solution description is read, as they lead to the design conclusions

All of these assumptions come from the perspective of resilience, which means that the organization is facing a crisis or wishes to improve their skill set and knowledge representation to facilitate resilience.

1.8.1 Organization Autonomy

Stakeholders of organizations desire the ability to build, represent, and manage knowledge of their organization that is independent of third parties. System knowledge should not be owned by a third party, nor should strategic views of that knowledge be dependent on third parties.

1.8.2 Maximum Analytical Velocity

The priority is analysis at the highest velocity. The priority is not a sustainable, re-usable and globally understood representation of knowledge. We often treat everything as a crisis in corporate settings; however, analyzing systems at maximum velocity works equally well for other types of crises.

1.8.3 Minimal Existing Knowledge

There is little existing knowledge of the system that is usable within the focus of analysis. If there is a crisis, the relations and focus will be unique. Likewise, for a product in a quickly changing agile workstream, existing knowledge is not useful at the needed velocity.

1.8.4 Human Cognition First

The priority is for humans to understand the system under analysis. This is a major assumption, as it runs counter to most applications of triples since the 1960s.

1.8.5 Triple as Atom of Knowledge

The World Wide Web is based on triples. Formal ontologies are based on triples. The magic of graphs happens when going from two (key-value pairs (KVPs)) to three (triples). This solution description assumes that system knowledge atoms are triples. This assumption is a bit odd, as many are convinced that event streams with KVPs form atoms of knowledge; however, KVPs in event streams are more suited to operational knowledge than the focus of this solution description, which is human cognition of systems.
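A minimal sketch of the jump from two parts to three (all identifiers here are invented for illustration): a KVP on its own cannot say what it describes, while two triples that share a subject already form a small graph:

```javascript
// A key-value pair has two parts; which pump is "failed" is unstated.
const kvp = { state: "failed" };

// Triples carry the subject explicitly, so independent statements
// can link up into a graph through shared identifiers.
const a = ["pump42", "hasState", "failed"];
const b = ["pump42", "feeds", "datacenterWest"];

// Shared subject: datacenterWest is downstream of a failed pump.
console.log(a[0] === b[0]); // true
```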

1.8.6 Emoji Good

The solution utilizes emoji because of the immediate visual recognition. This outweighs the scalability and compatibility issues.

1.8.7 Underlying Logic

Determining the underlying logic is a higher priority than gauging the current running system.

1.8.8 Relational Database Solutions

Relational databases are too difficult to design and deploy at maximum analytical velocity, particularly if we accept the assumption of minimal existing knowledge. See (📜2).

1.9 Principles

What guides the system design choices of the system?

1.9.1 KISS

The system should be as simple as possible. Don’t make it more difficult to use for 99 percent of the users, simply to account for the 1 percent.

1.9.2 DRY

Capture and visualize systems directly. Don’t repeat yourself by transforming captured information multiple times.

1.9.3 Once

Capture a system once, in a stand-alone way that adds human system cognition. What is it that is worthwhile to capture? Many things will change by the week, day, and hour; however, basic goals and needs are less dynamic. This does not mean that changes captured in streams of data and work are not important; it just means that, on principle, it is valuable to at least capture a system completely for the sake of human cognition at a level that makes sense.

1.10 Scope

Items this solution description addresses or doesn’t address that might cause confusion for stakeholders if not made explicit

1.10.1 Multiple dimensions

Since the focus is on human cognition, immediate visualization and analysis of multiple dimensions is out of scope. An example of this is nesting, which is done on one dimension.

1.10.2 Flows and States

Weights and logic in flows are out of scope. How much water going through a pipe, for instance, is out of scope. The fact that a pipe connects two points is in scope, as is the type of pipe and other properties.

1.10.3 Triple acquisition

Triple acquisition and management is out of scope. 3SA does provide design and future considerations that address some of the issues, and fulfill requirements that are related.

1.10.4 At rest items

Management and storage of at rest items is out of scope.

1.10.5 Security

Securing the triples, either in a web page or as part of the messaging/streams, is not in scope. Security needs will vary widely by particular application.

1.10.6 Identity

Identity is in scope.

1.10.7 Universally Unique IDentifiers (UUIDs)

While using integers, as I do in many of the demo sites, is useful to overload the meaning of the ID with sequence, this can cause problems when re-arranging, as links break when the sequence changes. This also causes problems with real-time collaboration, as it is quite possible that two people could create a new identical ID. There are many ways to mitigate these issues, but UUIDs are likely one of the top ones. I will address this in a bit more depth in future considerations.
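One low-dependency way to mint such IDs directly in the page is sketched below; `crypto.randomUUID()` is standard in current browsers, and the fallback branch is only a demo-grade assumption for older engines:

```javascript
// Mint a collision-safe atom ID with no server round trip.
function newAtomId() {
  if (globalThis.crypto && typeof crypto.randomUUID === "function") {
    return crypto.randomUUID(); // RFC 4122 version 4 UUID
  }
  // Demo-grade fallback for engines without WebCrypto; not for security.
  return "id-" + Date.now().toString(36) + "-" +
         Math.random().toString(36).slice(2, 10);
}

const id = newAtomId();
console.log(id); // e.g. "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d"
```

Two collaborators generating IDs concurrently will, for practical purposes, never collide, which sidesteps the re-sequencing problem above.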

2 Requirements

What the solution must provide

Disaster, quick change, the new normal, whatever you call the time immediately after a crisis, all we can do is assess the situation and proceed. The more effective our response, the more resilient we are, whether the crisis is on a personal, local, or global level.

“I propose that the common view of resilience as ‘the ability of a system to cope with a disturbance’ is a disposition that is realized through processes since resilience cannot exist without its bearer i.e. a system and can only be discerned over a period of time when a potential disturbance is identified.” ~Desiree Daniel (📑6)

We may not be able to avoid system failure and quick change; however, we can improve resilience with the way we represent knowledge of the system.

What facilitates resilience? What are our requirements?

Since this is a design for system analysis with triples, rather than a particular application of the method, the system requirements are not fixed; however, I will list and discuss them individually in this section, and design for them. For instance, for real-time streams of map locations of pipe breaks for a water system, many of these aspects are critical.

I will also set requirements that don’t fit all applications. For instance, data retention is easy if you never purge it. Knowledge atoms don’t take up much space, particularly if the streams are hybrid traditional key-value and graph, so setting requirements for data retention and recovery at a very high value is easy to design for. This is also a key part of resilience, as refactoring previous facts for the new normal may well require going back to foundational data. This is where triples shine, so I will treat this as a requirement, even though it may be something that the reader needs to adjust.

Click on image for interactive presentation

The functional requirements are quite interdependent. The diagram shows tight couplings.

One of the problems with modern workstreams that have a product focus is that the idea of a knowledge foundation is lost. An atom is intended to be grounded in some form of knowledge, rather than whatever activity we intend for the week. An atom should stand alone with meaning. That is what makes it an “atom” vs. just a string of characters.

The requirements do not assume triples are used; however, they do assume the concept of an atom of knowledge. An atom of knowledge is the smallest unit of change when building knowledge. It should have a low dimension value, and be normalized. There are several possibilities for this, as discussed in streams.
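To make that concrete (a sketch with invented field names): normalizing a wide record into atoms means each field becomes a stand-alone statement keyed by the record's identifier:

```javascript
// One wide record becomes several small, stand-alone knowledge atoms.
const record = { id: "pump42", type: "intake", site: "Upper Dredge" };

// Decompose: every field except the id becomes its own triple.
const atoms = Object.entries(record)
  .filter(([k]) => k !== "id")
  .map(([k, v]) => [record.id, k, v]);

console.log(atoms);
// [["pump42", "type", "intake"], ["pump42", "site", "Upper Dredge"]]
```

Each atom now carries its own meaning and can travel through a stream independently of its siblings.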

2.1 Communication

Functional requirement for representing and sharing knowledge


Wade through the mind-numbing muck of a corporate “what’s wrong with our group” exercise, and you will usually end up with the top-ranking advice that better communication is needed. It may seem fruitless to spend all that time arriving at that conclusion every two years like clockwork, but it is still a core problem. Let’s analyze what we mean by communication, in particular communication about systems in crisis. Mostly this involves representing and sharing the knowledge of who, what, how, where, and why, in a tight two-way coupling with streams and meaning.

Communication, particularly after crisis, needs to be quick and understandable.

2.1.1 Who is working on what?

This needs to be established quickly. There should be no required integration or schema update. A unique ID is the only procedural/technical item that should be enforced. Association of the ID with a project or role, and any other metadata, should simply flow in the stream. This means that relational database solutions are out. This is an assumption; however, I’ll list it here this one time.

2.1.2 What changes have been made?

Changes to the system should be posted to streams and visualized in near real-time.

2.1.3 How does the system work?

There should be no limitation on the types of models besides being able to decompose the knowledge. There should be flexibility in meaning precision.

2.1.4 Where are the system components?

This requires near real-time updates as the system changes, with multiple views. This affects who is working on what, how the system works, and what changes are made.

2.1.5 Why are we doing this?

This is also related to knowledge. Different people have perspectives on why. This is a core issue that enterprise architecture tackles.

2.2 Meaning

Functional requirement for meaning of atoms in streams, communication, and knowledge


There is no room for confusion. If I am stuck in the rain and talking with you over an unstable connection, and I want you to know I am in Portland, I might use a standard phonetic alphabet and say “Papa, Oscar, Romeo, Tango, Lima, Alfa, November, Delta”. The phonetic alphabet has agreed-on meaning, and is designed so that when transmitted over a stream (radiotelephone), there is a low possibility of confusion between similar-sounding words.


Meaning should be quickly established with visual cues when possible. While smell adds another useful sensory dimension, like the cards passed out at John Waters’ movie Polyester, it is not required, and would likely lead to practical issues. (📑53)

2.3 Knowledge

Functional requirements for visualizing, storing, and expanding knowledge


There needs to be a mechanism to filter for crucial knowledge and to re-assemble future, unforeseen views.

2.4 Maps

Functional requirements for knowledge artifacts as maps/graphs


Maps function as a bridge between knowledge and the groups of people collaborating. They are the primary artifact used to share system aspects.

They are:

2.4.1 Quick to learn

There should be a minimal number of symbols on the map, requiring less than a minute to learn.

2.4.2 Stakeholder visibility

Different stakeholders have different needs for visualization.

2.4.3 Easy to change

Modifications to the underlying data should automatically adjust existing maps.

2.4.4 Standard for area

Come as close to any standard for a particular area as possible.

2.5 Collaboration

Functional requirements for collaborating on system knowledge


2.5.1 All may contribute.

During collaboration, there should be no limitations.

2.5.2 All may validate.

2.5.3 All may be accountable.

Focusing on the core knowledge first, rather than code, platform, frameworks or metal, facilitates collaboration from the start. Further, a focus on decomposed knowledge, coupled with immediate visualization, makes meetings more productive, since there is little delay between gathering knowledge and the visualized model.

Decomposed knowledge streams (triples) only require a “lock” or merge on the triple itself. It is less likely that during collaboration one person will step on another. One person might change the title while somebody else is editing the description. The big win for collaboration is on multi-level data flow diagrams (DFDs), as different areas of expertise can collaborate concurrently to build the models.
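A sketch of why the conflict surface is so small (the store layout and names below are assumptions for illustration, not part of the design): if the store is keyed per subject-predicate pair, a title edit and a description edit to the same node never collide:

```javascript
// Merge one triple into a store keyed on subject + predicate.
// Last writer wins for that one fact only; other facts are untouched.
function mergeTriple(store, [s, p, o]) {
  store[s + "|" + p] = o;
  return store;
}

const store = {};
mergeTriple(store, ["node7", "hasTitle", "Upper Dredge Pump"]); // editor A
mergeTriple(store, ["node7", "hasDescription", "Main intake"]); // editor B

// Both edits land without conflict: two independent facts.
console.log(Object.keys(store).length); // 2
```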

2.6 Streams

Functional requirements for streaming knowledge atoms


Streaming should leverage existing event stream mechanisms for insight.

2.6.1 Quick system state updates


2.6.2 Tracing

2.7 Performance

System requirements for critical performance metrics

2.7.1 Visualization

For collaboration to work, an update to an atom should trigger an update in the visualization in less than 5 seconds.

2.8 Capacity

System requirements for expected capacity outside of operational capacity

2.8.1 Compute

Generating the visualizations takes significant compute. A workstation with an optimized graphics system from 2018 or later should work just fine.

2.8.2 Storage

The storage needs are small. It is hard to imagine needing more than a few MB for most analysis within scope. Further, storage should be minimal, so that it is easy to transport.

2.8.3 Network

The system should not require much bandwidth to run. 128 kbit/s should allow collaboration.

2.9 Extensibility

System and functional requirements for future expansion of system

Data atoms should be extensible to other formal meaning frameworks.

2.10 Compatibility

System requirements for integrations and compatibility

Compatibility for data at rest, in transit (via connectivity/protocols), and integrity checks should be ubiquitous.

2.11 Maintainability

System requirements for modifying the system to improve security or correct faults

The system needs to be able to be modified at any time without disruption. There is no room to have users function as testers. The design of the quality assurance process is determined at initiation; however, the deployment and code simplicity should be such that disruption is unlikely. Further, user-initiated fall-back should be easy and obvious, a part of every effort.

Even when there are management tasks that need to be completed, like an update of a public certificate, because of the general availability requirements, this should not block use of the system at certain levels. As an example, in the case of a certificate that is out of date, there should be alternate paths of validation and transport available to users.

2.12 Portability

System requirements for moving to different environments and platforms

The core knowledge and visualizations should be viewable as-is with a modern phone or computer, 2022 forward, including Mac, Windows, or *NIX.

2.13 Disaster Recovery

System requirements to mitigate regional infrastructure failure

Since this is intended to deal with disaster, the requirements for recovery from disaster are high. There should be mitigations for geographical disruptions within 500 miles, as well as global internet infrastructure like hosting, name resolution and connectivity.

2.14 Availability

System requirements for system outage tolerance

The system needs to be up all of the time. Call it ten nines, whatever you like. If the visualizations aren’t available on screens, then the system should have hardcopy maps available.

2.15 Recovery Point Objective (RPO)

System requirements for data loss tolerance

The system should be able to be restored to the last atom of data gathered. Essentially this means there is zero tolerance. These methods are being used at times of crisis. Having to roll back to system information from an hour previous would be extremely disruptive.

2.16 Recovery Time Objective (RTO)

System requirements for operational loss tolerance

The ability to view the current version of local data should never be disrupted. Operational loss of the system is unacceptable at any level. This is facilitated via the portability requirements.

2.17 Archiving

System and functional requirements for archiving

2.17.1 Data

Archive disposition of data over time

All data is live at all times.

Note that not all stores need to be fully synced.

2.17.2 Log

Archive disposition of logs over time

There is no requirement for log archives; however, it is quite likely that a similar crisis will arise again, and past logs are useful in understanding flows of system change and resolution.

2.18 Retention

System requirements for retention, including retention of backups

The interval is directly related to RPO, since all atoms are retained. Because data is distributed, all stores need to be considered to address RPO. Intentional or accidental destruction of data should be mitigated by both location and interval.

2.18.1 Data

Retention of production data

Until the end of time

2.18.2 Log

Retention of production logs

Until the end of time

2.19 Scalability

System requirements for expanding capacity

At initiation, a guess at compute for needed visualizations in dynamic and hardcopy views needs to be established. This will dictate needed compute as well as potential scaling vertically and horizontally.

2.20 Manageability

System requirements for keeping the system operating correctly and securely

There should be few centralized management tasks. This should be designed in to be a non-issue in most cases.

3 Analysis

3.1 Introduction

Synthesis and breakdown of the requirements with perspective against different solution options and background

The promise of cloud was to bootstrap, configure, and command/control infrastructure in an elastic way, with agency. Some of the promise was kept; however, most organizations traded an on-prem fiefdom for cloud, glomming on to the division and global distribution of labor that standard APIs and distributed connectivity allowed.

Consider this diagram:


For those of you in IT, you will recognize this as how your world works. For those of you outside of IT, whether you recognize it or not, this is how the world works. (📑71) Like everything, there are exceptions, but we traded knowledge of our situation and goals for a lead role in a cage: real-time understanding of how well the all-consuming global supply chain was tuned, including software and labor ecosystems. (📑34) Everybody and everything is wired in, and rather than knowledge, we call it data science, and real-time streams of data replace knowledge.

One implication is that waste is not necessarily a bad thing. If it takes ten people to do the work of one, but those ten people can be treated as commodities and plugged in to the head-eats-tail circle, this is a win. More people have income and economies improve, but mostly things improve for the major cloud providers while we degrade the health of the biosphere.

I understand the pressures that pushed holistic system knowledge out of the way for most organizations, and the focus on streams of small decisions and territory metrics. Tracing territory is immediate. If a user has trouble ordering a widget on their phone, then stick that in the sprint, solve the problem, and get the fixed product back out to the user. Another advantage is that small changes, immediately realized, can help navigate, much like flying an airplane. I don’t need to know how to design a flight control stick if I just move the controls to see how the plane moves.

Agile workstreams provide a hand on the territory navigation controls and review the constant feedback to determine the next action, but I see some damage from the switch of focus away from knowledge maps. Another thing that happened between the 1960s and now is that most major work in these areas assumed we would use more and more complicated computer systems to replace human cognition rather than involve humans. This is why it matters less and less to people why they are doing something, or, even, where they are going. All of that is baked in to the service. The only choice is, “Where do I subscribe, and with whom?”

When I first started out as an analyst, requirements were demanded. I could ask, “What is the availability you need for this system?”, and somebody would tell me a certain number of 9s. I would verify with stakeholders by converting to outage times. When a cloud provider only offered three 9s, and network connectivity via an ISP lowered the effective availability further, the answer, more and more, was silence. It didn’t matter. It only mattered when they were down. Cognition about requirements became less and less a technical issue. Human cognition by stakeholders morphed into project and product efforts within an agile workstream, within the context of many software and labor ecosystems, rather than actual cognition of knowledge of the system itself.

We now have an extremely complicated software ecosystem that changes constantly, just like the territory. When we lose track of where we are, we often start over, and plunk down another million dollars on an enterprise software system, as we no longer understand how to get the plane into the air in the first place. Or, alternatively, we just sign it all over to a third party and become system administrators for another company, ceding software and hardware ownership, as well as knowledge. We are pulled forward by small incremental fixes aligned with user needs, pulled forward by the surface, the squeaky wheel. We need to be wary of the Tyranny of Small Decisions. (📑35) We need meaning; we need knowledge to ensure resilience. We need both views: map and territory. (📑31) More importantly, and the entire reason for this solution design, is that as humans we need to be able to cognitively deal directly with the system knowledge in order to be resilient.

Before you start on your trip, take a map with you, and a light so you can read in the dark, even if your car battery is dead.

3.2 Communication

3.2.1 How does the system work?

This was covered a bit with data flow, but this is a weakness of the kinds of simplifications made to facilitate quick modeling. There are many perspectives needed to cover how a system works. Triples, utilizing formal conventions, have a place, but there are other structural formats for knowledge artifacts and other forms of knowledge representation that are valuable and, more importantly, in use and standardized.


This solution description is a format of knowledge representation of a system that is bootstrapping a design for an alternative form of knowledge representation. Data flow is a lowest common denominator form of analysis, but this will need to be supplemented. Maps of knowledge are not the best form for tracking flows and states. This is addressed a bit in Scope. We need AI/ML, cloud services, and other kinds of stream processing and multi-variate modeling to deal with flows and states; however, we also need to own the broader system first, from a knowledge and intent perspective. We also need to identify the risk of relying on third parties for flow and state, and the cost/benefit of bringing those services on premises.

Who, what, where are fairly easy to handle with triples, and combined with meaning and streams, work well.

3.2.2 Why are we doing this?

Generally, we do something to fit requirements that either the situation or orders demand. Requirements map out easily:


3.3 Meaning

3.3.1 Visual and Semantic Standards

There is a balance between standards and the ability to quickly address changing systems. The intent with the analysis is to show just how flexible 3SA can be as far as visual and semantic standards; however, this is a tricky thing to manage. It is one area where preparation prior to crisis can help. What visual and semantic standards are appropriate? Will you use emoji? Will you use BFO, which is an ISO standard?

3.3.2 Emoji

Emoji are a way to collapse knowledge. Written language goes back 6,000 years. The earliest versions represented a concept with one token, much like emoji. It started with actual tokens to track debt and trade, evolved to pictographs, went through full-on emoji stage in some versions, and then moved on to written word like we know now. (📑37)

Uluburun shipwreck from 1400 BCE (📑36)

Knowledge at this point in our civilization arc is extremely complex. Imagine a dot on a line 6,000 years ago that moves up ever slightly to the collapse of 1177 BC, goes down again, rises again with Rome an inch off the line, collapses again, and then under our current rise since the dark ages, goes to the moon, literally and figuratively. (📑39)

There are several reasons why I conclude that emoji are useful to complex analysis:

There are also reasons why emoji are bad, primarily compatibility, manageability, maintainability, and security:

Long-form text is usually not immediately understood. For many people, only the first couple of sentences are read in a long narrative. Emoji have quite a few quirks that make them difficult to use in knowledge systems; however, from a human perspective they make understanding systems at rest and in flow much easier. The priority is human, not machine cognition.

3.3.3 The Triple

The concept of a triple goes back to the 1800s with Charles Peirce, who called it a proposition. (📑19). The triple has also been used to make the World Wide Web more meaningful. (📑23). The level of triples used here is a stripped down, simple version, so that it is possible for anybody to apply the ideas. See Future Considerations for more advanced use.

An entity is anything we wish to represent.

A triple shows a relation between two entities. Here is an example triple:

My cat is a calico.

We have been using the word map so far, but triples form a kind of map called a graph.

This is the triple in graph form:


The line between is the relation that represents “is a”.

We could extend this by adding another triple:

Calico is a domestic cat.

This is the graph form:


Usually the entities are called subject and object, and the relation between the entities is called a predicate. In the above example, calico is the subject, “is a” is the predicate, and domestic cat is the object. A label to the calico that has a comment “The color of a calico cat is usually tri-color, mostly white, with orange and black splotches” is still a triple, with the subject of calico, a predicate of has_comment, and an object of the text of the comment. Let’s use an invented term called “relation predicate” to signify “is a” relations, and “aspect predicate” to signify predicates like comments or details.

Here is a simplified water distribution system that is modeled with triples.


One relation predicate is “fitted_to”, which is between pipes and valves of the same size or the reservoir. The other relation predicate is “transition_to”.

Imagine an entire public utility along these lines. A reservoir in the mountains might have a very large pipe, as would an aquifer. At the furthest point in the system the pipe would be much smaller as it enters a customer's house. We could model the entire system around the pipe diameter, and this would be our primary focus, with the relation predicates fitted_to and transition_to.

The scope of this solution description is only for triples with one “relation predicate”, which is signified by ➡️. If the same relation applies in both directions, we will signify this by ↔️. The other details on predicates will be covered in design. If you want to dig in more, see (📑19).

With our convention of one relation predicate signified by ➡️ = “eats” we can list what things animals eat (Tiger eats cats, cats eat rats, cats eat mice, mice eat cheese):


🔹 delimits the predicate, either a primary relation or other predicates listed in design.

Here is the graph of these triples:


The graph can prompt questions, like “What does the rat eat?” or “If the cheese is poisoned, what animals might be affected?”. The fact that we don’t have anything for the rat to eat on the graph doesn’t invalidate the graph. We can add what the rat eats as we learn. Likewise, the fact that a mouse eats peanut butter off of our traps doesn’t invalidate the fact that the mouse eats cheese. We can also do some inference, in that the tiger might well be eating cheese. If the cheese was poisoned, we could follow the graph to see what animals would be affected. The important thing to notice is that these small facts, a triple, can be added at any time to the model. We could add

🐅 🔹➡️🔹🐖

later on, if we discovered that tigers ate pigs.

The use of emoji adds visual meaning, like colored stickies on steroids.

We now have a basic toolset and vocabulary of triples.
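As a sketch of how compact these models can be, here is a minimal Python example (the animal names and the helper function are my own illustration, not part of the 3SA design) that stores the “eats” triples above and walks the graph to answer the poisoned-cheese question:

```python
# The "eats" triples above as (subject, object) pairs; the single
# relation predicate ("eats") is implicit, per the convention.
EATS = [
    ("tiger", "cat"),
    ("cat", "rat"),
    ("cat", "mouse"),
    ("mouse", "cheese"),
]

def affected_by(food, triples):
    """Walk the graph backwards: everything that eats `food`,
    directly or through the food chain."""
    affected = set()
    frontier = {food}
    while frontier:
        eaters = {s for (s, o) in triples if o in frontier}
        frontier = eaters - affected
        affected |= eaters
    return affected

print(sorted(affected_by("cheese", EATS)))  # ['cat', 'mouse', 'tiger']
```

Adding the later discovery that tigers eat pigs is just one more pair appended to the list; nothing else changes, which is the point of decomposed knowledge.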

3.4 Knowledge

3.4.1 Journal

A mountain of triples becomes a web of internal and external facts that can be navigated with a journal. Not all views of graphs need to be lines connecting nodes, although that can be of use.

“Saturday, 22nd. Fresh Gales, with Squalls, attended with rain. In the Evening had 14 Sail in sight, 13 upon our lee Quarter, and a Snow upon our lee Bow. In the Night split both Topgallant Sails so much that they were obliged to be unbent to repair. In the Morning the Carpenter reported the Maintopmast to be Sprung in the Cap, which we supposed hapned in the P.M., when both the Weather Backstays broke. Our Rigging and Sails are now so bad that something or another is giving way every day. At Noon had 13 Sail in sight, which we are well assured are the India Fleet, and are all now upon our Weather Quarter. Wind North to North-East; course North 81 degrees East; distance 114 miles; latitude 41 degrees 11 minutes, longitude 27 degrees 52 minutes West.” ~James Cook (📑43)

A journal captures knowledge of a journey in a format that can be read by anybody, without special tools or reporting. With triples, a journal can be re-used in different ways, but the view can still be a typical format. Tags are the relations, but across all levels. A journal is universal enough, with a long tradition, that it should be included in a knowledge representation system.

📓🔸📝🔸1🔹➡️🔹Weather Backstays
📓🔸📝🔸1🔹➡️🔹India Fleet
📓🔸📝🔸1🔹🌬️🔹Fresh Gales
📓🔸📝🔸1🔹📄🔹Fresh Gales, with Squalls, attended…

Long-form writing, like journal entries, benefits from text editors and word processing. For this reason, entries should be stored at rest in a form separate from triples. The updates could be routed as triples if needed, but the locking would be by document in that case. For technical work, the longer entries are likely created by subject matter experts. To mitigate the issues, comments can be routed separately for collaboration. The view, though, can update as fast as anybody pushes updates. There are no locking issues if you separate the view, as it is simply the last view triple sent. The example above is too simplistic, as the journal entries likely have characters that should be encoded with base64.

For instance, this would be a view entry: 📓🔸📝🔸1🔹📒🔹PHA+VHJpcGxlIHB1YiB1c2VzIGEgbXVuZ…
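As a sketch of that encoding step, here is how a long-form entry could be packed into a view triple and recovered, using only the Python standard library (the sample text is from the Cook journal above; the delimiters follow the convention already introduced):

```python
import base64

# A long-form journal entry (from the Cook excerpt above).
entry = "Fresh Gales, with Squalls, attended with rain."

# Base64-encode so delimiters and markup inside the entry
# cannot collide with the triple's emoji delimiters.
encoded = base64.b64encode(entry.encode("utf-8")).decode("ascii")

# Assemble the view triple: journal 📓, entry 📝 1, base64 body 📒.
triple = "📓🔸📝🔸1🔹📒🔹" + encoded

# Decoding the last field recovers the original long-form text.
decoded = base64.b64decode(triple.split("🔹")[-1]).decode("utf-8")
assert decoded == entry
```

Because the body is opaque to the triple stream, the document-level lock discussed above applies only to the stored entry, not to the stream itself.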

3.4.2 Mandelbrot Set

Consider this formula:

zₙ₊₁ = zₙ² + c

By iterating with complex numbers c, and colorful visualization of what doesn’t escape to infinity, we get the Mandelbrot Set:

click image for animated Mandelbrot set zoom with center at (-0.743643887037158704752191506114774, 0.131825904205311970493132056385139) and magnification 1 .. 3.18 × 10³¹

No matter what kind of AI/ML machinery is pointed at the above animated rendering, it is extremely unlikely it will arrive at the simple formula. We might be able to find profitable gullies and shorelines in the rendered fractal territory, but we will not discover the formula. We could spend a lifetime enjoying the beauty of the surface curves, but we would always be caught in the surface, unaware of the rule, the formula underneath reality.

Our solution should be more like an original formula, rather than a data and compute-intensive rendering against an algorithm that statistically matches against the rendered data.
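The formula really is that small. A minimal Python sketch of the escape test (the iteration cap of 100 and the bailout radius of 2 are conventional choices, not part of this design):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z**2 + c from z = 0; if |z| stays bounded
    (here, |z| <= 2) for max_iter steps, treat c as in the set."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))   # True: 0 stays at 0 forever
print(in_mandelbrot(1))   # False: 0, 1, 2, 5, 26... escapes
```

A dozen lines regenerate, on demand, what no amount of analysis of the rendered frames can recover.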

3.4.3 Screenplay vs. Movie

In order to be resilient, the captured knowledge needs to be re-usable. Consider this movie clip from Made for Each Other - 1939 Public Domain:

Click on the image to watch with sound.

Now, consider the screenplay:


This is a very different kind of knowledge than the movie. The screenplay is the essence of the movie. The essence is mapped out by the screenwriter. The screenplay can be re-used and adapted, based on what actors are available for casting, the preferences of a changing audience, or different directors.

A screenplay packs a lot of knowledge into a small space by working within a set of meaning and conventions. It is possible to insert scenes and modify characters easily. The fact that the name of an actor doesn’t exist in a screenplay does not mean the director can’t have one added. The meaning, conventions, and ability to easily modify the knowledge are characteristic of knowledge graphs.

A screenplay is primarily about communicating the vision of the play, and it does this well, but it is one-way. Meaning is conveyed by convention using columns, type face, and explanatory text. Since the meaning is meant to be scrolled through as text is read, the convention is very specific and inflexible for other applications. Knowledge is the core function of a screenplay, and it functions well as a map, but is very specific to movie/plays. It has a time component, in that the map is played through the scenes. There is little collaboration on the screenplay by those that use it. There is no stream component in the creation; however, the movie is a good example of how we use AI/ML to analyze streams. Presumably, we can gain enough information about the structure and meaning of the movie by pointing enough compute and proper algorithms at the stream.

What knowledge can be harvested from the external view of the product, the operational system, the movie clip? We can identify the actors and items in the scene. We can categorize the props to recreate the scene with other actors. We can work with theaters to determine the amount of popcorn viewers decide to skip out and buy during the scene. Much of the knowledge, though, we are reverse engineering. If we are using AI/ML to harvest the knowledge, it is quite likely that the pauses, awkward moments, and the humor of the cookie are lost on our models. We don’t have context within the movie. We don’t have a design. We don’t have a map. We don’t have the screenplay.

The difference between the screenplay and the movie illustrates how our current approach to knowledge can be misleading. We have massive compute, extremely sophisticated algorithms that we can point at streams of metrics gauged from cameras, audio, and other sources; however, this is a very heavy set of knowledge to need to rely on if we need to quickly change the movie. Further, any measurement and algorithm is still not the real territory, it is territory as modeled through the metrics and algorithm, and the majority of our algorithms work on the goal of profit. We don’t gauge the value of a movie, from the audience perspective, on profit. We gauge it on other human aspects, and this is why, like Gawande’s checklist, we get better movies by pushing “the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise.” (📑24). The real resilience is by humans for humans.

The gap between the movie and the screenplay is much like the gap between the formula for the Mandelbrot Set and the rendered visual: humans can bridge the gap from screenplay to unique movies, but it is quite likely impossible to guess the formula from the rendered visuals alone.

A screenplay has the right level of meaning vs. the rendered movie. We want something more like a screenplay, as it is re-usable with different scenarios.

3.5 Maps

3.5.1 Road Map vs. Gauges

Imagine driving a car cross country, but only relying on the territory view. This is the surface, as though you are driving through a visualization of the Mandelbrot Set. Through the car windows flows scenery of billboards selling shaving products, mixed in with desert, lakes, and trees. There are various metrics gathered from the car’s machinery. If we take a territory approach, the decision on where to drive and how to operate the vehicle would be based on the stream of information flashing by the windows, and coming in through the sensors. Perhaps a satellite of love correlates the position of the car constantly and tells you where to turn. As time and technology progress, we get better equipment, better cameras, faster cars; we build models of what we see. Is that a tree or a lake? How does our software recognize it? What algorithm works best? As the car goes faster and the features going by get more complex, we are faced with needing more and more equipment to navigate, more satellites, deeper supply chains. With enough compute and machinery, we might even be able to characterize the feel of the dirt, the way it might erode in the wind, just by the massive amount of data gleaned in the trip and stored from other trips. This still isn’t true territory in the menu vs. meal sense, but it plays in the same area.

A map that shows the roads, gas stations, hotels, and towns is prudent to carry. A checklist for changing a flat tire, a list of things to pack, what to do if the car overheats, how to hitch a ride, five ways to overcome a psycho killer that gives you a ride… all of this can fit in a small notebook. Having real-time measurements of the trip can be helpful, but what happens when there is no connectivity? What does your organization really need to run? Do you have it mapped out? What do you do when something unexpected happens? What do you do if that apt command in your Dockerfile just pulls air? What happens with the dreaded phone call that you no longer have a job, and you are stuck in a motel room across the country from everyone you know. A map lets you do things like continue on foot or change cars. A territory perspective ties you in to your service providers. Maps help ensure resilience. Note that a car full of territory equipment might usually win the race, at least while all the upstream dependencies are in good working order, including that satellite of love, but this is a different topic than resilience and autonomy.

Another aspect of maps is that the rules are contained.
The massive compute, sensors, and AI/ML kit that processes data from various aspects of the scrolling desert approximates territory; however, it does this over the surface of the fractal, going for that statistical nudge in identifying the place on the map via the territory, without first agreeing on a map with the sponsors and stakeholders. A territory approach that relies on third parties cedes knowledge, because the knowledge is embedded in services and platforms that are often not useful if you change cars or decide to walk. The identified gullies and shorelines might be profitable, but where is your map that you can use to change cars or walk yourself?

Autonomous vehicles take this to an extreme. A map is less important if you can’t even drive your own car or fix it, if everything is a subscription service, and you own nothing, and decide nothing besides which corporate-created show entertains you this evening as you peel plastic off of your meal, or what piece of the product you fix in this week’s sprint. You are reading this presentation via the machinery of modern territory work flows, and it truly is wonderful. I’m pushing this via a tool written by the author of one of the most fabulous pieces of open source, running on his kernel, that powers most of the cloud interests. I am not arguing that we need to abandon the territory perspective, which does require massive compute and centralized resources. I am arguing that authoring our own maps as individuals and organizations is crucial to resilience, and to be suspicious of territory-centric views. But where to start, particularly at this stage in the shift to territory?

Winning the race is less important than completing the race, as far as resilience. Resilience requires something like a road map, rather than thousands of real-time sensors.

3.5.2 New York and Erie Railroad in 1855

A map is more than just gas stations and roads. It can be used like a checklist to delegate decisions, yet keep control over the system. Daniel McCallum designed a map, compiled and drafted by George Holt Henshaw, to operate the New York and Erie Railroad in 1855 (📑27):


The ability to delegate to “any subordinate on their own responsibility” is explicit:


Communication is also important, and the map facilitates that, so that a crisis on the line can be handled effectively. This map also stores knowledge about the system that can be retrieved remotely without the need to communicate with the central hub. At the most practical level, it means that those who have a map know who to ask if they have problems.

This is a closeup of the map:

The usefulness and portability of this map is a key feature. It is comparable in re-usability to the Screenplay, in that it allows for changes within the system, but doesn’t allow for a significantly different system without a complete rewrite. How could re-usability be better? Whiteboard and Stickies offers a clue: decomposed knowledge. To improve the usefulness of the Railroad map, we need to figure out how to break the map into sticky notes.

3.5.3 Railroad Crisis Example

Let’s use a fictional scenario to illustrate how decomposed knowledge, utilizing triples derived from the 1855 map, can assist with resilience. The names are abbreviated in the diagrams; for instance, Master of Engineering and Repairs for Buffalo and Dunkirk is abbreviated as BufDunMasterEngrRep. To follow along, download a copy.

The crisis is that coal and oil are too expensive. We need to convert the engines to charcoal utilizing our own crew, so we need to understand how they are currently utilized. We don’t need to diagram everything, just what is relevant to converting the engines.

If we take the Initial set of triples from Appendix 1, and add this header to the beginning:

digraph {

and add this footer to the end:

}

we have a standard graph format called “dot”.

overlap="false" keeps the nodes from touching. We have gone from triples to a graph by using an old graph visualization program called Graphviz.
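The wrapping step can be scripted. A sketch, assuming the triples are kept one edge per line in dot's own A -> B syntax (the two sample edges are hypothetical stand-ins for the Appendix 1 set):

```python
# Sketch: wrap triple lines in a dot digraph header/footer for Graphviz.
# The two sample edges below are hypothetical stand-ins for Appendix 1.
triples = [
    '"President" -> "GenSupt"',
    '"GenSupt" -> "BufDunMasterEngrRep"',
]

header = 'digraph {\noverlap="false"\n'
footer = '}\n'
dot = header + "\n".join(triples) + "\n" + footer
print(dot)

# Render with Graphviz (assuming it is installed), e.g.:
#   twopi -Tsvg railroad.dot -o railroad.svg
```

The same list of triples, with a different header or a different Graphviz layout engine, yields a different view without touching the knowledge itself.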

By feeding the triples with the header and footer into the Graphviz twopi command we get this graph:


Because of the disruption in our coal supply, we need to convert our engines to run on charcoal as quickly as we can, but minimize disruption to the revenue from our rail freight business.

We have a total of 87 engines on our five lines we will convert:

Line    Engines
Est     31
Del     23
Sus     18
Wst     10
Buf      5
Total   87

We also need to create and distribute charcoal on a daily schedule in time for the engines as they are converted, and as part of our daily operations of the railroad going forward.

We need to add a couple more lines to our header for the Modified set of triples from Appendix 1:

root=President and splines="true"

A single triple can be changed with immediate visibility, which contributes to collaboration:

Click on the image to watch video

Here is the modified graph with the green line in xdot (📑29):


Triples provide a root formula that facilitates collaboration via streams of live maps.

3.5.4 Structured System Analysis

Structured System Analysis was refined in the 1970s and 1980s as a way to analyze systems, primarily around data flow. It uses three symbols and some conventions for connecting them to make data flow diagrams. (📑8) (📑30)

While different authors choose different symbols to represent the nodes, they consist of: external entities, processes, and data stores.

Here is a small diagram that illustrates a data flow:


A job Applicant submits a resume in PDF form to an online recruiting system. The system processes it by transforming it to XML data and storing it on a file system. It doesn’t matter what symbols are used; however, there are standards, see (📑8) (📑30).

Here is a set of triples for this diagram:

0🔸👤🔸JbAppl🔸➡️🔸⚗️🔸1🔹🏷🔹 resume PDF
0🔸⚗️🔸1🔸➡️🔸💽🔸1🔹🏷🔹 resume XML

The top level diagram is noted by 0 (level 0 of the data flow diagram).
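A sketch of splitting one of these DFD triples on its two delimiters (the field names are my interpretation of the convention above, not a fixed schema):

```python
def parse_dfd_triple(line):
    """Split on 🔹 (predicate delimiter), then on 🔸 (field delimiter)."""
    subject_part, predicate, obj = line.split("🔹")
    fields = subject_part.split("🔸")
    return fields, predicate, obj.strip()

# The first triple from the example above: level 0, entity 👤 JbAppl,
# flows ➡️ into process ⚗️ 1, labeled 🏷 "resume PDF".
fields, predicate, label = parse_dfd_triple(
    "0🔸👤🔸JbAppl🔸➡️🔸⚗️🔸1🔹🏷🔹 resume PDF")
print(fields)  # ['0', '👤', 'JbAppl', '➡️', '⚗️', '1']
print(label)   # resume PDF
```

Because each line is self-delimiting, a stream of these triples can be parsed, merged, and re-assembled into any view without a central schema.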

3.5.5 Graph Node Expand

One advantage of structured system analysis based on data flow is that it leverages graphs vs. flat representations of knowledge. The nodes of a graph can be expanded into another entire graph.


3.5.6 Larger Flow

Here is a larger level 0 data flow to illustrate expanding a node.


Notice 6, the Customer Relationship Manager.

If we expand process 6, it looks like this:


These are all items from the perspective of the customer relationship manager, including social media and snail mail cards. Here is the expanded 6.6, the Social Fanisizer process:


One advantage of breaking knowledge representation down into triple form and then recomposing it is that it is easy to re-assemble into any view you like. This shows the expansion from Customer Relationship Manager to Social Fanisizer using a more typical Gane and Sarson style notation for data flow:


3.5.7 Cups and more cups

Systems pinned to a single relation are easy to model. Here is an example where we simply swap out data flow for the flow of cup materials:


ACME Cups manufactures ceramic mugs that are distributed throughout the country. Zellostra Mud Mining provides mud that is then filtered and injected into mugs that are fired in a kiln and glazed by glaze from Queen Glaze Co.

Motion of mug materials is tracked with the flow. Materials at rest are either mud (M), cups (C), or glaze (G). There are multiple companies involved in the supply chain for mugs. Staff are designated by company as the first letter: Queen Glaze Co. (Q), ACME Cups (A), Zellostra Mud Mining (Z), Temp Superheroes (T), and Xtra Fast Delivery (X), with a second letter of E (entity). Materials are moved or transformed with processes designated by the company letter first, P second, and a sequence integer, as well as color coded. The IDs are unique for all.

This is just a high-level view; the processes that change and move the materials can be exploded into separate diagrams for more detail.

It is quite difficult, if not impossible, to glean a diagram like this just from operational and other gathered real-time metrics. It requires interviewing the business stakeholders; however, because of the nature of graphs, the diagram can be collaboratively built and delegated. ACME gets glaze from G3, and ACME can modify their part of the diagram without having to be concerned with changes Queen Glaze might make to their process.

What does this get us? If the power is out, we could look at this single graph, and see that the items that we are concerned with are all purple. True, a power outage might limit ability for staff to get to work, but let’s assume that staff are available. Any materials at rest should still be at rest with or without power, so let’s look at purple.

AP1 can use manual screening techniques if the electricity goes out. AP2 and AP3 require a generator. AP4 and AP5 only need lighting.

If there are holdups in the line, the graph can show who to turn to. If no temps show up from Temp Superheroes and the phones are out, we would need an address.

3.5.8 What do Humans Need?

What is a better store of knowledge? Is it years of cumulative metrics of existing systems matched with an AI model? Is it through harvesting your users’ emails or keylogging? My thought is that knowledge of what is important starts with something more fundamental, and that the streams of metrics have a place later on. Let’s look at a larger system without text. Our predicate is ⬅️, which means “needs”. Here is a list of triples followed by an explanation (↘️):

🧍 ⬅️ 🌡️ | 🧍 ⬅️ 🚰 | 🧍 ⬅️ 🏥
🧍 ⬅️ 🍲 | 🌡️ ⬅️ 🏠 | 🏠 ⬅️ 🏗️

↘️   Humans need a certain temperature to live, potable water, medical care, and food. Shelter is needed for humans to maintain tolerable temperatures, and this shelter needs to be constructed.

🍲 ⬅️ 🐄 | ⚡️ ⬅️ 🛢️ | 🚚 ⬅️ 🏗️
🍲 ⬅️ 🌱 | 🚚 ⬅️ ⚡️ | 🏗️ ⬅️ ⚡️

↘️   Food for humans comes from animal and plant sources. Construction and transport need electricity, which is provided by oil. Transport needs to be constructed.

⚡️ ⬅️ ☀️ | 🐄 ⬅️ 🌱 | 🏗️ ⬅️ 🧍
🌱 ⬅️ 💩 | 🌱 ⬅️ 🌊 | 💩 ⬅️ 🛢️

↘️   Electricity can also come from the sun. Animals eat plants, and processed food comes from plants. Construction needs humans. Plants need fertilizer and water. Fertilizer comes from oil.

It doesn’t matter how these triples are entered. There is a related method of establishing system information where a room full of people brainstorm and put yellow sticky notes on a whiteboard. The advantage is that it allows collaboration and visualization without a lot of rules, jargon, and procedures. Unlike the sticky note process, though, the data in triples-style analysis can be more easily re-used. The process is also similar to mind mapping. I’ve used mind mapping software to capture triples collaboratively quite successfully.

Let’s continue with some more triples.

💩 ⬅️ 🐄 | 💩 ⬅️ 🧍 | 🍲 ⬅️ 🚰
💊 ⬅️ ⚡️ | 🚰 ⬅️ 🌊 | 🚰 ⬅️ 🏗️

↘️   Fertilizer can also come from animals or humans. Drugs need electricity. Potable water is sourced from rivers, lakes, springs, and groundwater. Processed food needs potable water, which needs constructed infrastructure to operate.

🏥 ⬅️ 💊 | 🏥 ⬅️ 🧍 | 💊 ⬅️ 🛢️
💊 ⬅️ 🚚 | 🏥 ⬅️ 🛢️ | 🏠 ⬅️ 🛢️

↘️   Medical care needs drugs, people, and oil. Drugs need oil and transport. Shelter needs oil for heating as well as components.

🚰 ⬅️ 🛢️ | 💊 ⬅️ 🌱 | 🏗️ ⬅️ 🛢️
🧍 ⬅️ 🌱 | 🌱 ⬅️ 🌡️ | 🚰 ⬅️ ⚡️

↘️   Construction and potable water infrastructure needs oil. Potable water distribution needs electricity. Humans can eat plants directly, unprocessed. Drugs are made from plants. Plants need particular temperature ranges to germinate and thrive.

🍲 ⬅️ ⚡️ | 🍲 ⬅️ 🛢️ | 🍲 ⬅️ 🚚
🌱 ⬅️ ☀️ | 🚚 ⬅️ 🛢️ | 🏗️ ⬅️ 🚚

↘️   Processed food needs electricity, oil, and transport. Plants need sun. Transport needs oil, and construction needs transport.

🏥 ⬅️ 🏗️ | 🏥 ⬅️ 🚚 | 🏥 ⬅️ 🚰
🏠 ⬅️ 🚚 | 🧍 ⬅️ 🧍 | 🏗️ ⬅️ 🏗️

↘️   Medical facilities need to be constructed, and also need potable water and transport. Shelter needs transport. Humans need humans to reproduce, and construction equipment is created with construction equipment.

We might argue a bit about whether a hospital is needed, but in our current civilization, this is reasonable. Likewise, in some societies transport is not needed to build a shelter. The advantage of this form of analysis is that the individual facts are relatively easy to agree on. Do we need oil for construction? Are drugs made with oil? These individual facts can be verified by experts to the satisfaction of those who are evaluating the system. If there is conflicting information, mark it as such and/or don’t include it.

The triples can be assembled as a graph for visualization as the model is built out, which facilitates collaboration. Here is what the graph looks like:


If we decide that we will only consider transport that gets electricity from the sun, then we still have quite a few other problems to address. The graph helps put things in perspective, and facilitates human cognition of the system. This is important before we immediately jump to extensive infrastructure investment and real-time measurements of each mile all transmitted back to a data center. (📑1)
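The same triples can be walked programmatically. A minimal sketch, using only a hand-copied subset of the “needs” triples above, that collects edges and computes what a node transitively needs:

```python
# Sketch: walk a hand-copied subset of the ⬅️ ("needs") triples above.
triples = [
    ("🧍", "⬅️", "🚰"), ("🚰", "⬅️", "⚡️"), ("⚡️", "⬅️", "🛢️"),
    ("🚰", "⬅️", "🏗️"), ("🏗️", "⬅️", "⚡️"),
]

# Index direct needs: subject -> set of objects it needs.
needs = {}
for subj, _, obj in triples:
    needs.setdefault(subj, set()).add(obj)

def all_needs(node, seen=None):
    """Everything `node` transitively needs, depth-first."""
    seen = set() if seen is None else seen
    for dep in needs.get(node, ()):
        if dep not in seen:
            seen.add(dep)
            all_needs(dep, seen)
    return seen

print(all_needs("🧍"))  # water, power, construction, and (transitively) oil
```

With the full triple set loaded, a query like this answers “if oil is disrupted, what is affected?” without any cloud service in the loop.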

3.6 Collaboration

3.6.1 Whiteboard and Stickies

The Whiteboard and Stickies collaborative business analysis technique gathers users and experts of a system in a room with stacks of sticky note pads.


Under the prompting of an analyst, the users lay out aspects of the system by writing small bits of information on the notes and sticking them on a whiteboard. Many who have witnessed this technique have marveled at how well it works. The main reason this works so well is that it is collaborative without the burden of dense jargon or existing description of the system.

This method works well for communication among those present at the meeting. The analyst serves as an interpreter. There are limits to the collaboration, as it is all within a local room; collaborating virtually is difficult.

Meaning is often encoded with color and text of the stickies, as well as text on the whiteboard. There is little control of meaning, as it is whatever people put on the notes. It is guided by the analyst, but there is no schema, which is a disadvantage as far as common, re-used meaning.

Knowledge is captured on the whiteboard itself. Somebody might take a picture or roll the whiteboard into another room. Capturing the knowledge is labor intensive and often a choke point of the analyst. There is an overall visual order. Sometimes the map is in swimlanes; sometimes it is more chaotic. The map usually needs to be expressed in a different form.

All may contribute without barriers to entry. There is instant validation of gathered information. If somebody puts up a sticky note that is inaccurate, it is easy to correct. There is a real-time update of the output of the group.

Whiteboard and Stickies is a great example of collaboration, primarily through the simple process and few barriers. It shows how knowledge can be broken down and re-assembled successfully, and the stream of changes can be instantly visualized.

3.7 Streams

Streams in IT are often key-value pairs with timestamps. These streams are relatives of triples, with key-value being two parts to a triple’s three.

Consider this local alarm:

<154>1 2022-03-27T21:10:58.426Z dc1mon.example.com process="6 6 3" cpu="100" memfMB="10"

This is in Syslog format (📑33). It illustrates a priority alarm coming through for the CPU dedicated to process 6.6.3 running at 100 percent, with only 10 MB of free memory. This is an operational stream that could trigger integration with the graph visualization. For instance, process 3, the Ad Targeting Engine on the 6.6 graph, could be highlighted with red when the alarm came through:
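A hypothetical sketch of that integration step: pull the process path out of the alarm and rewrite that node’s DOT statement with a red fill. The helper names, and the assumption that "6 6 3" maps to path 6.6.3, are mine, not part of any existing tool:

```python
# Hedged sketch: turn a syslog-style alarm into a red node in a DOT file.
import re

def alarmed_node(syslog_line):
    """Pull the process path out of a process="…" field."""
    m = re.search(r'process="([^"]+)"', syslog_line)
    return m.group(1).replace(" ", ".") if m else None

def highlight(dot_line, node_id):
    """Append a red fill to the node's DOT statement."""
    return dot_line.replace(
        f'"{node_id}";',
        f'"{node_id}" [style=filled, fillcolor=red];')

proc = alarmed_node('<154>1 2022-03-27T21:10:58.426Z dc1mon.example.com '
                    'process="6 6 3" cpu="100" memfMB="10"')
print(highlight('"6.6.3";', proc))
```

Re-rendering the modified DOT gives the red-highlighted graph.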


Perhaps it is useful to alarm on a map of the entire system data flow. This shows an alarm on process 11, subprocess 1, the AI Feeder:

Click on the image for interactive graph

Triples can also be streamed for live visual updates over MQTT, AMQP and Syslog. I’ve coded systems using Python for all three message stream protocols. I’ve successfully used Plotly Dash with MQTT for a full, low code, dynamic console.

Streaming techniques don’t have to be applied to just operational events. We can put a triple update directly into an event stream:

<77>1 2022-03-27T21:10:58.426Z allsys tech="sally" triple="0🔸6🔸6🔸⚗️🔸3🔹🏷🔹Ad\nTargeting\nEngine\n2"

This change could be visualized in near real-time by updating the graph visualization showing that we were now calling process 6.6.3 “Ad Targeting Engine 2”:


This facilitates collaboration, as participants can see their input live, much like the Whiteboard and Stickies example. The model can be replayed over time, based on the timestamps. Better yet, throw the data into a time-series database. (📑11)
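A sketch of what applying such a streamed triple to an in-memory model could look like, assuming well-formed messages and using the field names from the sample above; the parsing is simplified and the label map is placeholder data:

```python
# Hedged sketch: apply a streamed 🏷 (label) triple to an in-memory model.
import re

labels = {"0🔸6🔸6🔸⚗️🔸3": "Ad Targeting Engine"}  # placeholder model

def apply_update(model, syslog_line):
    m = re.search(r'triple="([^"]*)"', syslog_line)
    if not m:
        return model
    path, pred, obj = m.group(1).split("🔹", 2)
    if pred == "🏷":                       # label update
        model[path] = obj.replace("\\n", " ")
    return model

msg = ('<77>1 2022-03-27T21:10:58.426Z allsys tech="sally" '
       'triple="0🔸6🔸6🔸⚗️🔸3🔹🏷🔹Ad\\nTargeting\\nEngine\\n2"')
apply_update(labels, msg)
print(labels["0🔸6🔸6🔸⚗️🔸3"])
```

Regenerating the graph from the updated model gives the near real-time visualization described above.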

3.7.1 Monitors


3.8 Conclusion

We need something exponentially quicker, less reliant on existing power structures and goals. We need the flexibility to align with systems that are not recognized or currently prioritized. We need to be able to determine our own goals and immediate actions during rapid system change.

We are on a human journey, not a profit journey, not a consume-biosphere journey, not an ignore-negative-externalities journey. We are in this together, and our goals that we imagine together, how we see the world, is a human endeavor, not a machine endeavor. And, while our machines are often a marvelous feature of our species, how we place ourselves in the world and understand our relation to the planet and other life is, and should always be, a human-centered effort, first, and our machines should follow, serving our goals. Letting cloud services lead our goals as a species is a mistake, either through full outsourcing, or by allowing ourselves to become mere cogs in a workstream process outside of holistic system understanding. Resilience of our species requires this. Resilience of our independent businesses requires this.

Even our memories are recreated with the help of graph-like maps (📑32). Our cognition of our place in the world, then, past, present and future, is determined by the accuracy of our internal and external maps. In order to be resilient, we must own these methods, adopt them intentionally, and use them to marshal the array of stream-based workflows and data analysis machines and services we have available to us. Triples provide the advantage of gathering, managing, and visualizing small, delegated pieces of information, yet place them in a more holistic way via a knowledge graph.

4 Design

Proposed solution based on requirements and analysis, including different points of view of those building the solution

Triples are described in The Triple. This design section assumes you have read the analysis for an understanding of triples. Expanding a bit, the key to analyzing systems that fit the requirements is to use triples. I do not mean RDF. (📑54)

3SA will work with any system. The design provides a way to visualize triples used for system analysis. The design satisfies the requirements for resilience with certain assumptions and within a certain scope. Be clear on the requirements, assumptions, and scope, as well as the analysis, as this will ensure that this solution fits your application. It may well be that relying on third-party consultants and cloud services suits your organization better. 3SA is simple enough to do yourself, as stakeholders in the system. There are many examples of specific scenarios that utilize this design in Operations.

3SA provides ideas and instruction. It is not a product, nor a service.

4.1 Initiation

3SA requires human interaction to start, and is structured like the Whiteboard and Stickies technique. Meet in a physical room together, or online. Agree on nodes, properties, relation, and nesting.

4.2 Graph

A graph is a container for the system under analysis. A graph identifier is not required; however, if you intend to integrate with other graphs, the identifier needs to be unique. If you don’t need to integrate with any other graphs, just use 0 for a multi-level (nested) model.

4.3 Nodes

Nodes are the objects that comprise your system. Pick a single emoji to represent a node type. For instance, 🚌 might be a bus in a model of Washington, Oregon, and California roads. List these with brief labels so nobody is confused by what the node emoji means.

There are reserved emoji listed in Reserved Emoji. The four we will use right now are:

🔸 = delimiter for node path
🔹 = delimiter for triple (as discussed in The Triple)
🗨 = comment
🏷 = label

Here is a simple set of initial nodes:

🗺️🔸🚐🔸1🔹🏷🔹Small Bus
🗺️🔸🚐🔸1🔹🗨🔹Less than 10,000 GVWR
🗺️🔸🚌🔸1🔹🏷🔹Large Bus
🗺️🔸🚌🔸1🔹🗨🔹Greater than 10,000 GVWR
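Collecting declarations like these into one record per path is straightforward. A minimal sketch, assuming 🏷 carries the label and 🗨 the comment, per the reserved emoji list:

```python
# Sketch: fold node-declaration triples into one record per unique path.
nodes = {}
for line in [
    "🗺️🔸🚐🔸1🔹🏷🔹Small Bus",
    "🗺️🔸🚐🔸1🔹🗨🔹Less than 10,000 GVWR",
    "🗺️🔸🚌🔸1🔹🏷🔹Large Bus",
    "🗺️🔸🚌🔸1🔹🗨🔹Greater than 10,000 GVWR",
]:
    path, pred, obj = line.split("🔹", 2)
    nodes.setdefault(path, {})[pred] = obj

print(nodes["🗺️🔸🚐🔸1"])
```

From here it is a short step to emitting DOT node statements with the 🏷 values as labels.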

These can be visualized like this:



The path of the node is everything before the first 🔹, and it must be unique. If you are short on emoji, or would rather not deal with that, just pick an emoji for everything:

🗺️🔸🔵🔸wa_rds🔹🏷🔹Washington Roads
🗺️🔸🔵🔸small_bus🔹🏷🔹Small Bus
🗺️🔸🔵🔸small_bus🔹🗨🔹Less than 10,000 GVWR
🗺️🔸🔵🔸large_bus🔹🏷🔹Large Bus
🗺️🔸🔵🔸large_bus🔹🗨🔹Greater than 10,000 GVWR
🗺️🔸🔵🔸or_rds🔹🏷🔹Oregon Roads

It might make sense for your application to skip the labels at first and use the text part of the path after the emoji.


It is easy to add nodes later if you wish. The only catch is that the nodes need to work with your chosen nesting and relation.

4.4 Properties

Decide what properties make sense for the nodes. These can be added later, but it helps initial visualization if there is a set to start. Review Reserved Emoji to see if one of those works.

As an example, we could add ✨ to mean the level of perceived luxury and status a car has. This triple then:

🗺️🔸🚗🔸1🔹✨🔹Bling Level 11

Would translate as “On our road map graph, car number 1 has a bling level of 11.”

4.5 Relation

Establish the relation of the model. This is your primary relation predicate. It never changes; however, there is direction. Consider these triples:


Note that we are using two-letter state codes instead of a number. This is perfectly valid for this design.


A relation is what connects the model. This should be the same at all levels. Review analysis for ideas. If you are working with information systems, consider the relation of data to/from. An org chart is “reports to”. A data flow is “receives/sends data”. A relation is a line that is drawn on a graph of the system.

As an example, say you are part of a group that gets together when the water for your city is poisoned by a chemical spill. In this case, we might consider a couple of different relations in our group. Are we going to “clean potable water” or “move potable water”? Pipes, trains, and tanker trucks might move potable water, and the relation would be flow. If the focus of the analysis is on a process that cleans potable water, then the relation might be “needs”.

A relation is signified in the triple by an arrow:

↔️ = Both directions
⬅️ = Backward
➡️ = Forward

Backward means the object provides what is consumed by the subject. Alternatively, the object is the target of the subject’s relation. For flows, this is clear, as it is easy to establish what is going where. For other relations it is more difficult. If I need water, I would use a forward arrow. Dependencies, though, can also act like flow, like the What do Humans Need? diagram, so backward makes more sense. Whatever you choose, be consistent, and don’t get bogged down in long talks about what directions the arrows go.

0🔸6🔸⚗️🔸6🔹⬅️🔹💽🔸2
0🔸6🔸⚗️🔸6🔹➡️🔹⚗️🔸8
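If consistency slips, direction can also be normalized mechanically. A sketch that rewrites ⬅️ triples as ➡️ by swapping subject and object, assuming primary-relation triples of the form path🔹arrow🔹path:

```python
# Sketch: normalize ⬅️ relation triples to ➡️ by swapping subject/object,
# so downstream tooling only has to handle one arrow direction.
def normalize(triple):
    subj, pred, obj = triple.split("🔹", 2)
    if pred == "⬅️":
        subj, obj = obj, subj
        pred = "➡️"
    return f"{subj}🔹{pred}🔹{obj}"

print(normalize("0🔸6🔸⚗️🔸6🔹⬅️🔹💽🔸2"))
```

Forward triples pass through unchanged, so the transform is safe to run over a whole triple set.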

4.6 Nesting

Imagine that you have a collection of different Russian nesting doll sets (Matryoshka dolls). (📑40)

The primary relations go between the different sets of nesting dolls. When you open up a nesting doll, you can use the primary relation with different detail. At that level there is an entirely different set of dolls, and none of them can have relations to the top level. The important thing to note is that everything inside the nesting doll has the characteristics of the outer dolls.

As an example, I could have a node called accounting. Within that node are all nodes that are part of accounting, which includes Accounts Receivable and Accounts Payable. Within Accounts Payable might be a billing application.

Nesting does not conserve relations. Any relations have to be re-defined.

Stores are highly decentralized

4.7 At Rest Items

These are the items that are defined for use in triples, but are stored at rest separately under the same path as the current domain. For example, the markdown for this article is stored at ./🌳/🧬/5/✍/article.md.

Data at rest is stored in UTF-8 character encoding.

Signatures use RSASSA-PKCS1-v1_5.

4.8 Reserved Emoji

Some emoji reservations are related to context. Emoji in paths are often not reserved; however, emoji in predicates are always reserved. For instance, consider:


This might be a graph of all price tags at a sporting good store. It is a bit confusing, but it isn’t forbidden to use 🏷 in the path. I’m not going to distinguish this unless I am aware of a big problem. Most of the issues show up when parsing and processing the triples, so they can be worked around. My recommendation is to just avoid using emoji on the reserved list for anything but the designated purpose.

Be aware of the at rest emoji. They can be used in triples, but should match, and likely you will want to reserve them.

Two reserved emoji form the path and predicate, and must always be reserved in both path and predicate:

🔹 = delimits the predicate, either a primary relation or other predicates
🔸 = path delimiter

These reserved emoji must not be used in a predicate, and should be avoided in paths:

🏷 = Label in triples
🗨 = Comment in triples

5 Operations

Operating instructions for the deployed solution

5.1 Page Verification

The markdown documents for 3SA are all stored at rest, which makes finding text easy. Recoll is the best local search engine we could find. Here is an example:


For minor corrections it is easy to just open and edit the text directly. If the scripts are running you can just view the page with localhost as you edit, and it will automatically refresh if live.js is configured.

Install via apt:

sudo apt install recoll

At initial run, choose the root to index. Likely, if you are just searching MCJ documents, you will want to choose the source directory:


Skip ^^ files, as these are the version files:


Also add the line:
.md = text/plain

This will index and open Markdown files.

5.3 Data Flow

Data Flow is a perfect application for triples, and has been used as the basic framework for full structured analysis. Review Structured System Analysis, Larger Flow, and Data Flow Expand for background.

5.4 Simple Example Time Kitty

Time Kitty: 3200BCE to 1000BCE

5.5 Virtuoso Inference

docker pull openlink/virtuoso-opensource-7
mkdir triple_pub_store
cd triple_pub_store
docker run --name my_virtdb -e DBA_PASSWORD=dba -p 1111:1111 \
  -p 8890:8890 -v `pwd`:/database openlink/virtuoso-opensource-7:latest (📑68)
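Once Virtuoso is up, emoji triples have to be rewritten as RDF before inference. A hypothetical sketch that builds a SPARQL INSERT DATA statement for the endpoint on port 8890; the ex: namespace and the emoji-to-name mapping are my assumptions, not part of 3SA:

```python
# Hedged sketch: rewrite emoji triples as a SPARQL INSERT DATA block.
# The ex: namespace and NAMES mapping are illustrative assumptions.
NAMES = {"🧍": "human", "🚰": "potable_water", "⬅️": "needs"}

def to_sparql(triples, graph="urn:3sa:demo"):
    rows = "\n".join(
        f"    ex:{NAMES[s]} ex:{NAMES[p]} ex:{NAMES[o]} ."
        for s, p, o in triples)
    return ("PREFIX ex: <http://example.org/3sa#>\n"
            f"INSERT DATA {{ GRAPH <{graph}> {{\n{rows}\n}} }}")

query = to_sparql([("🧍", "⬅️", "🚰")])
print(query)
```

The resulting statement could then be posted to Virtuoso’s SPARQL endpoint (http://localhost:8890/sparql in the container above) for storage and inference.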







5.6 Stream Visualization

At the top of the Multi-level data flow there are three Python scripts that can be downloaded that illustrate stream visualization using 3SA.

🔐 (stk.py) Sets up the keys.

python3 stk.py localhost  
Human readable name (no spaces): pookie  
Password again:   
keys written  

The keys ensure identity, as control of the private key is needed to sign a message.

🎳 (tde.py) Modifies the diagram in near real-time, sharing the changes over the stream.

👀 (trv.py) Watches the stream.

It uses the kitty terminal to display changes as log entries, along with updated graphs, verifying claimed identity against known public keys:


👀 will also save streams for later replay:


by saving in a database:


6 Future Considerations

Items that were not in scope that might be useful to consider in the future

6.1 Formal Ontologies

6.2 Combined Domains

6.3 UUIDs


7 Bibliography

7.1 MEER

MEER:Reflection at COP26
MEER Project
MEER Flash Presentation

Ye Tao’s MEER:Reflection project seems like a map solution vs. a “throw streams of tech at streams of changing systems with streams of agile workers” idea. His solution has all of the map-like qualities of pure, structural, simple solutions, along with understandable calculations. I also think that MEER, and its failure at getting traction, points at an underlying issue with streams, in that our culture is based on streams, as is the current hierarchy of power. Simple maps and solutions are a threat. A taxonomy of sources would show that most things in industrial civilization are made from fossil fuels, including alternative energy. Ye Tao’s flash presentation has also inspired me to compress a presentation of my own efforts into 7 minutes. I have yet to accomplish it, but it remains a goal.

My ideas will be more relevant in a world that seriously considers Ye Tao’s work.

7.2 Triple Trouble

The problem with triples
This bit has inspired me as a counter. I’m often thinking of this as I try to make triples simpler to understand.

7.3 Public Domain

Who’s Afraid of the Public Domain?
This article convinced me to use Unlicense for my work. I still support the full Richard Stallman treatment for broader software, but I limit the extent of my software so it is in the realm of ideas, and not product.

7.4 O4IS

Ontology for Information Systems (O4IS) Design Methodology
This is one of the first pieces I found as I puzzled about how to automate IT relations, and my introduction to the ideas of ontologies as they relate to IT.

7.5 The Telling

The Telling

I visited Las Vegas for a friend’s wedding in the middle of my efforts to make sense of triples and plan my presentation. I showed up a day early, locked myself in my room, and read Laura Riding Jackson’s The Telling. I need to read it again, but it inspired me to work on my ideas until I could explain them at the depth I needed. It is also related to ontology and meaning. Laura Riding Jackson started with poetry, but became dissatisfied with the ability of poetry to express truth, and worked on a Rational Meaning with Schuyler B. Jackson. In this regard, she shares quite a bit with Victoria Welby (📑22).

7.6 Resilience as a Disposition

Resilience as a Disposition
This paper formed the bridge between the issues I saw in IT and broader socio-ecological resilience.

7.7 Data Flows to Facilitate Compliance

An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification
This paper both changed and validated my focus. It was the first and only instance of using formal ontologies to model data flow, specifically, that I have found.

7.8 Structured Systems Analysis

Gane, Chris; Sarson, Trish. Structured Systems Analysis: Tools and Techniques. New York: Improved Systems Technologies, 1977

7.9 Extended Relation Ontology

Extended Relation Ontology Another way to get data flow via has_input and has_output.

7.10 GraphDB

GraphDB The Ontotext product was the first graph database I did inference on, and it was fast with the set of triples I loaded, particularly on Windows.

7.11 TimescaleDB

TimescaleDB is an alternative to a graph database, particularly with my focus on data flow by level. It is free as in beer and as in freedom, as long as you aren’t offering a cloud service. Build a virtual table as a graph, and update triples collaboratively. A query against a level (6.6, for instance) will show the most recent graph. Mix this with Plotly Dash for quick and easy collaborative models.

7.12 Plotly Dash

Plotly Dash, mentioned in (📑11), above, lets you hang out in Python land without getting too dirty in JavaScript world. I like coding in Python better, and Plotly fixes the problem with collaborative UIs. Sophisticated, UI-intensive applications get a bit cumbersome, though, and I ended up writing these in wxPython, mentioned in (📑13).

7.13 wxPython

wxPython is a great tool for complex UIs. It looks like this:


It is hard to get this level of interaction in a web app, for me, at least. Note that I’m using web application components extensively, and this is one of the reasons wxPython rocks so hard: it has a full WebKit client with events you can couple to the GUI.

7.14 Virtuoso

Virtuoso is a floor wax and a dessert topping. I use it locally to run my websites on different ports. It can handle PHP. But, best of all, it has many ways to import triples to do inference on with its graph database. It can even handle WebID, which back in the day was a “Triple” way to handle distributed social web. (📑15)

7.15 WebID

Henry Story’s explanations of WebID, although I didn’t know it at the time, were my first introduction to triples. I was very interested in alternative, distributed social media in 2012. It wasn’t until much later, in 2019, that I realized that my work life and the semantic web intersected because of graphs.

7.16 Vasco Asturiano

Vasco Asturiano’s 3d-force-graph was very useful in establishing a common triple form for DFDs. I don’t get into it much in this presentation, but it is easy to make 3d, rotating models that show the entire org in one big ball. Just like it is useful to make sure that simplified triples are extensible to real meaning, like (📑17), it is also useful to make sure you know how to render the entire org in one graph. 3d models are good for that.

7.17 Barry Smith

Barry Smith’s work in ontologies changed the way I thought of meaning, and how I envisioned making simple forms of triple extensible to common knowledge. There is a standard here if the vid rots over time. I am focused on a much simpler version, but I track how it maps to these standards.


7.18 UTEOTW

UTEOTW has inspired me, served as a cautionary tale, and continues to provide insight into the world we have created. Make sure you watch the 287-minute director’s cut. You will be glad you did. Spoiler below:

Henry Farber points his AI/ML systems at the streams of recorded vision, coupled with live human memory. Henry abandons the wisdom, the behavior maps of his trusted family, and gets pulled deeper and deeper into his territory rendering. He pulls Claire and Sam with him and loses all that he loves. The approximation of the territory through AI/ML is always just that. We can get closer and closer, but the two will never meet, not in a human way. Meaning, though, intentional meaning, that is something that we can own. We can own knowledge, which morphs like law with our culture. Trying to couple the territory too tightly is both futile and a sickness. Untethered by maps, we drift into uncharted territory, void of meaning. Here be dragons.

The Hunt–Lenox Globe, a copper globe created around 1510. Cropped to show “Hic Sunt Dracones” text.

7.19 A Knowledge Representation Practionary

A Knowledge Representation Practionary If you need vocabulary around triples, as well as a broader view, this is a great reference.

7.20 Encyclopedia of Knowledge Organization

Encyclopedia of Knowledge Organization - Hierarchy Bergman again…

7.21 Conceptual Structures

Conceptual Structures, John Sowa, 1984

7.22 Significs and Language

Welby, Victoria, Lady 1911. Significs and Language. The Articulate Form of Our Expressive and Interpretative Resources. London: Macmillan & Co.

7.23 The Semantic Web

The Semantic Web A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities by TIM BERNERS-LEE, JAMES HENDLER and ORA LASSILA

7.24 The Checklist Manifesto

2009, Atul Gawande, The Checklist Manifesto: How to Get Things Right, Metropolitan Books

7.25 Charles Sanders Peirce

Charles Sanders Peirce

7.26 Front Panel

The most famous front panel is the IMSAI 8080, used in the movie WarGames:

Click on the image for larger version.

Source Flickr: IMSAI 8080 Computer by Don DeBold

7.27 New York and Erie Railroad diagram

McCallum, D. C., Cartographer, G. H. Henshaw, and Publisher New York and Erie Railroad Company.

New York and Erie Railroad diagram representing a plan of organization: exhibiting the division of administrative duties and showing the number and class of employees engaged in each department: from the returns of September, 1855. [New York: New York and Erie Railroad Company, 1855].

Retrieved from the Library of Congress

7.28 Graphviz

Graphviz works with triples directly to both visualize and analyze graphs.

7.29 xdot

xdot is a simple, small program written in Python that renders dot format files into interactive graphs.

7.30 Modern Structured Analysis

Modern Structured Analysis, Edward Yourdon
Yourdon Press, 1989

7.31 The Map Is Not the Territory

The Map Is Not the Territory

7.32 New Map of Meaning

New Map of Meaning

7.33 RFC 5424

The Syslog Protocol

7.34 Wish You Were Here

Pink Floyd, Released on: 1975-09-12 https://youtu.be/hjpF8ukSrvk

7.35 Environmental Degradation and the Tyranny of Small Decisions

Environmental Degradation and the Tyranny of Small Decisions

7.36 Recreation of Uluburun shipwreck from 1400 BCE

Panegyrics of Granovetter

Recreation of Uluburun shipwreck from 1400 BCE

CC BY-SA 2.0


Jovan Tepić, Ilija Tanackov, Gordan Stojić


7.38 Public Domain Clipart

Images are all public domain unless noted otherwise.




7.39 1177 BC - The Year Civilization Collapsed

1177 BC - The Year Civilization Collapsed

Eric H. Cline, 2014

7.40 Matryoshka Doll

Matryoshka Doll

7.41 An overview of the KL-ONE Knowledge Representation System

Brachman, Ronald J.; Schmolze, James G. (1985). An overview of the KL-ONE Knowledge Representation System. Cognitive Science, 9(2):171-216. doi: 10.1016/S0364-0213(85)80014-8

An overview of the KL-ONE Knowledge Representation System

What I find most fascinating about this is both the extent of this ecosystem of ideas and the complete focus on computer applications. I was in a conversation in 2021 with an old friend, and we talked about when people started losing their ability to think for themselves. We ended up pinpointing the mid 1980s. On a related note, I remember reading about the Lincoln-Douglas debates. The participants in these debates, both the debaters and the audience, acted much differently, cognitively, than modern-day people. This goes back to how we use frameworks of knowledge as humans. My hunch is that rich language, education, and practice in establishing one’s place in the world, provided more than two teams and a few tools. We gave up on humans and started focusing on computer cognition. We now have a perfect consumer and engines of ecosystem destruction running on the profit algorithm. And… all of this with more information and knowledge available to us than ever before. Huxley was right.

7.42 Ontological theory for information architecture

Toward a document-centered ontological theory for information architecture in corporations

Mauricio B. Almeida, Eduardo R. Felipe, Renata Barcelos First published: 22 January 2020 https://doi.org/10.1002/asi.24337


A Literal Transcription of the Original MSS.
Hydrographer of the Admiralty.


This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org

7.44 Singular They

Singular They

Singular They

I’ll tell ya… he/she has been difficult for me for years. It is quite awkward. Why should an unknown singular person be he? Should I alternate? I think of Genesis P-Orridge, now, as I use they. Genesis P-Orridge has gone by so many pronouns, and I’ve followed h/er for many years spanning different pronouns. Regardless of the present, it helps me remember the correct frame of meaning for they. They = all versions of Gen.

7.45 Bio Units

Taking care of bio units (humans), with all of their collective will and idiosyncrasies, costs money, and is difficult to control and scale. Don’t lose track of this point, as why things are as they are now has quite a bit to do with the attitude of we have to take care of them… so why not replace them with machines? There are two problems with this. First off, the deep supply chains associated with the machines we replace humans with usually dodge negative externalities. These include:

The other big problem is “Why are we even doing this?” Our civilization is for humans. We need to take care of our home, planet earth, and other living things on the planet, both because we rely on those living things for survival, but also because it is unforgivable hubris to destroy swaths of living things for whatever ponzi scheme we are hooked on this year. If the core reason we replace humans with machines is so we don’t have to provide medical care and housing, that is missing the point. Think carefully about that, coupled with the negative externalities associated with more complicated supply chains. Who benefits, really, in the end? Trace the entire system, well-to-wheel, and map out what is important. What do humans need? What do ecosystems (which humans are part of) need? Don’t just take the “machines good, life improved” lure. Examine it. Perhaps it is not really the tasty minnow we think it is. All of that being said, we need machines or most of us will die. Most of our machines run on oil. Most of our food is fertilized with fertilizer that comes from oil (and is harvested with machines). This is a big old nasty ouroboros encircling the tree of knowledge of good and evil.

7.46 Journal of Exploration

Journal of Exploration

An Approach to Teaching Writing

Pete Sinclair, 1981

Journal of Exploration

7.47 MicroAce


Byte Magazine Volume 05 Number 11 - MicroAce ad

7.48 Breadboard

A breadboard is used to prototype electronic circuits.

Here is a solar power controller I breadboarded in 2001:


A breadboard is just a bunch of spring-loaded connectors with holes you poke wires and parts into to connect circuits prior to soldering up a more permanent version.

Here is the same circuit soldered up:


Note that I was using a homebrew 8048 ICE (in-circuit emulator), so there are more wires on the breadboard version than seem consistent with the soldered up circuit.

7.49 Bootstrap

What do you do when you start from scratch? Many things are just rocks, when it comes right down to it. As Dylan Beattie put it:

“We invented computers, which means taking lightning and sticking it in a rock until it learns to think.”

To make a rock learn how to think, you need to start somewhere. The idea of bootstrapping is important for all kinds of systems. Imagine attempting to bootstrap the creation of a microprocessor. How would that start, from zero? What if you are the only person who knows? It would probably start with knowledge of sequential logic, and progress to something Babbage-like. Wire and metal for relay switches, along with electricity generation, then transistors, silicon wafers, and clean rooms? Knowledge of how all of this works is one thing, but how you start production is entirely different. For a computer at rest, particularly without permanent memory, like my Z-80 Homebrew was at first, a bootstrap is the initial instructions that bring the rock, the hunk of metal, silicon, and wires, to the point where it can transfer programs and instructions in a more user-friendly way.

7.50 History of Pets vs Cattle

History of Pets vs Cattle

The History of Pets vs Cattle and How to Use the Analogy Properly

Posted on Sep 29, 2016 by Randy Bias

7.51 Interfacing the Standard Parallel Port

1998, Craig Peacock

Interfacing the Standard Parallel Port

7.52 Operating Manual For Spaceship Earth

Astute advice we did not heed.

Operating Manual For Spaceship Earth, Richard Buckminster Fuller, 1969

7.53 Inside the Odorama Process

Inside the Odorama Process

The legendary scratch-and-sniff cards that made John Waters’ POLYESTER a unique olfactory experience are featured in our new edition of the film. Take a look at how they got made!

7.54 RDF 1.1 N-Triples

RDF 1.1 N-Triples are what many people consider triples proper; however, the idea goes back to Charles Peirce. My version is simpler than Peirce’s. I place constraints on how I use triples in order to facilitate human cognition: limited nesting, a primary relation, and use of emoji.
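As a sketch of the simplest of these constraints, a line either is or is not a three-term statement. The function and the emoji terms below are illustrative assumptions, not the canonical 3SA parser; they only check arity, assuming whitespace-separated terms:

```python
# Illustrative sketch: accept a line only if it is exactly a
# (subject, predicate, object) triple. Terms here are hypothetical.

def parse_triple(line: str):
    """Split a line into exactly three whitespace-separated terms, or return None."""
    parts = line.split()
    if len(parts) != 3:
        return None
    return tuple(parts)

assert parse_triple("🌱 feeds 🐟️") == ("🌱", "feeds", "🐟️")
assert parse_triple("too few") is None
assert parse_triple("one two three four") is None
```

Rejecting anything that is not exactly three terms is what keeps the model flat enough to read at a glance.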

7.55 Beattie on Architecture

Architecture: The Stuff That’s Hard to Change

In particular, go to 27 minutes, where he discusses data flow diagrams. Notice that he misses the point of complexity and graphs. The superpower of graphs is constraining the model so that a node can be exploded for detail. Don’t put too much information on the top level. There is also a useful explanation of the Agile Manifesto in context.

Dylan Beattie is wonderful to watch. One of my favorite talks of all time is The Art of Code.

7.56 Shared Intentionality

O’Madagain C, Tomasello M. 2021 Shared intentionality, reason-giving and the evolution of human culture. Phil. Trans. R. Soc. B 377: 20200320. https://doi.org/10.1098/rstb.2020.0320

Understanding and sharing intentions: The origins of cultural cognition Michael Tomasello, Malinda Carpenter, Josep Call, Tanya Behne, and Henrike Moll Max Planck Institute for Evolutionary Anthropology D-04103 Leipzig, Germany

7.57 Jalopy


7.58 Donella Meadows

Many have heard of her work on World3 and Limits to Growth; however, some of her lectures have been coming out on YouTube recently, for instance Sustainable Systems, presented at the University of Michigan Ross School of Business in 1999. What I didn’t know, until I saw this lecture, is that she attempted to live in a sustainable way on an organic farm. I wish she had lived longer so she could write more about her personal experiences. There is some comedy, too, where she talks about convincing the banks to invest in her commune with composting toilets.

Another lecture that aligns with my stance is Systems: Overshoot and Collapse, given at Dartmouth College in the spring of 1977, where she talks about how she sees her models as facilitating human cognition without computers. Computers bootstrap human cognition as a goal, rather than the other way around.

Leverage Points: Places to Intervene in a System is a good read.

7.59 Eye Tracking camelCase

An Eye Tracking Study on camelCase and under_score Identifier Styles

“The interaction of Experience with Style indicates that novices benefit twice as much with respect to time, with the underscore style.”

7.60 Jean Dubuffet

Those works created from solitude and from pure and authentic creative impulses – where the worries of competition, acclaim and social promotion do not interfere – are, because of these very facts, more precious than the productions of professionals. After a certain familiarity with these flourishings of an exalted feverishness, lived so fully and so intensely by their authors, we cannot avoid the feeling that in relation to these works, cultural art in its entirety appears to be the game of a futile society, a fallacious parade.

— Jean Dubuffet, "Place à l'incivisme"

7.61 Agencement/Assemblage

John WP Phillips
May 2006
Theory, Culture and Society 23(2-3):108-109


Deleuze and Guattari Lecture Notes

Bumblenut Plateaus

7.62 Naive enthusiast

Benjamin P Taylor:

These are the ego traps that lie in wait as you enter into any powerful field of knowledge.

7.63 lz-string

https://github.com/pieroxy/lz-string https://github.com/marcel-dancak/lz-string-python
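lz-string compresses strings into compact, URI-safe form, which is what lets triple data travel inside a single HTML page or a URL. As a rough stdlib analogy to that round trip (zlib plus base64, not the lz-string algorithm itself), the shape looks like this:

```python
# Analogy only: zlib + URL-safe base64 stands in for lz-string's
# compressToEncodedURIComponent / decompressFromEncodedURIComponent.
import base64
import zlib

def compress_to_token(text: str) -> str:
    """Compress text and encode it as a URL-safe ASCII string."""
    return base64.urlsafe_b64encode(zlib.compress(text.encode("utf-8"))).decode("ascii")

def decompress_token(token: str) -> str:
    """Invert compress_to_token."""
    return zlib.decompress(base64.urlsafe_b64decode(token)).decode("utf-8")

triples = "🌱 feeds 🐟️\n🐟️ feeds 📣"
assert decompress_token(compress_to_token(triples)) == triples
```

The two linked repositories provide the real lz-string implementations for JavaScript and Python respectively.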

7.64 Offsite Storage

Back when I worked in datacenters as an operations manager, one of the more difficult issues was offsite storage. Originally IT staff would take home backup tapes. I always thought that was a bad idea. I didn’t want that responsibility, nor did I think it was good for the organization. Eventually weekly pickups of tapes were commonplace. Now? In 2022? We trust cloud organizations with the offsite storage of documents and other backups. Further, the idea of partitioned sets of backups is foreign to many. Often offsite backups are streamed; however, rotation/archival of changed data is not considered fully.

Take the simplest example. Let’s say that the procedure to bring up the local networking equipment on a diesel generator at a hospital is written in a proprietary word processor format. Likely it is stored in the cloud. At the moment in time when the document is needed, there is no connectivity to the cloud. But let’s say that somebody has the presence of mind to sync the document locally. First off, rendering the document might require external cloud services. But there is another, more subtle problem. What if the technician documenting the procedure makes a mistake? Say that the procedure for recycling the HVAC system at the time of a power failure was overwritten onto the cloud document for the “local networking equipment on a diesel generator” procedure. It is quite possible that this would not be discovered until time of failure. Any cloud versioning features would be useless.

Information can also be harmed intentionally. Just because a system is “working” doesn’t mean that it is possible to react to a crisis in the future. While the primary focus of 3SA is immediate reaction, the concepts of offsite (partitioned) data storage are captured in the Archiving, Retention, and RPO requirements.

7.65 Age of Anxiety

W.H. Auden, The Age of Anxiety: A Baroque Eclogue (Princeton University Press, 2011) page 105

The Age of Anxiety begins in fear and doubt, but the four protagonists find some comfort in sharing their distress.

Cross of the Moment movie

7.66 Terry A. Davis


Terry A. Davis

7.67 Agency Definition

I only recently started using agency as a word in relation to systems, but it seems more and more appropriate as time goes on for me. Agent, from 1471 as “a force capable of acting on matter”. That reminds me of the Dylan Beattie quote that computers are “Taking lightning and sticking it in a rock until it learns to think”.

Agency Definition

Docker Hub Openlink virtuoso-opensource-7

7.69 An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification

C. Debruyne, J. Riggio, O. De Troyer and D. O’Sullivan, “An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification,” 2019 13th International Conference on Research Challenges in Information Science (RCIS), 2019, pp. 1-6, doi: 10.1109/RCIS.2019.8877036.

An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification

Data Flow Ontology


7.70 How GitHub works

How GitHub works is a great reveal, perhaps unintentionally, because it shows how the monster of our technological bandaids grows. It all works as long as there is plenty of oil to fuel infinite complexity. How much water and other resources go into the screen, internet connectivity, and sensor box that the vid portrays?

“We work together, as people, shipping stuff, fixing stuff, not getting bogged down with requirements that just create friction” reveals the strange world of sarcasm and irony, where it is difficult to gauge anything but a feeling that the new world is better than the old, and that if you don’t embrace the constant change, you are part of the problem. For me, DevOps is about bootstrapping, configuring, and command/control of infrastructure via an API, either on-prem or in cloud. If, as humans, we are clear on requirements and goals, DevOps has a clear advantage, particularly when combined with what the above vids reference. I mean that without irony or sarcasm, but consider this:

Imagine a road trip with six people. One of them wants coffee. One spills coffee. One is sharing chips. There is a constant real-time back and forth dealing with the immediate needs. At the same time, though, there are fundamental requirements like checking air pressure in the tires, or making sure there is gas in the car. Whining about having to go through the checklist at every gas station because it interferes with coffee and chips is childish. Wrapping everything in sarcasm and irony doesn’t change the fact that it is much, much better to run through the knowledge for the trip than to change that tire on the side of the road or get stranded without gas. Sure, we can rely on third parties like AAA and technology that magically tells us everything, but let me ask you this: how many of those reading this have seen tire pressure warning lights that are in error?

In order to have agency with our systems, we need to understand them in a broader way. We don’t need all of the details, just what is necessary for agency. (Another topic way, way beyond the scope is the tendency for people to claim agency itself is an illusion). Whew!! Anyhoo… a couple of videos straight from the horse’s mouth. The Decoded Show is another good example. Notice that there are also references to topics that we are all aware are very important, like energy supply. Just remember… our entire world right now works in this mode of view, and it is becoming more so. The real answers are unbearable for most. It is way, way outside of the scope of this document to wade into what those real answers are, but hopefully the tools and ideas presented here within this doc will help poke your head out of the stream above Mirkwood and discover them for yourself. The reality of the journey will certainly need agile and DevOps tools: give stakeholders in the journey coffee and chips, but also agree on the destination, and make sure the tires have enough air pressure.

8 Appendices

8.1 Railroad Triples

8.1.1 Initial


8.1.2 Modified

"DunMachinistsF\n10"->DunShop [color=red]
"DunMachinistsF\n10" [color=red]
"DunMachinists\n48" [color=red]
"SusBoilerMakers\n3" [color=red]
"SusBoilerMakersF\n3"->SusShop [color=red]
"SusBoilerMakersF\n3" [color=red]
"SusMachinists\n43" [color=red]
"SusMachinistsF\n23"->SusShop [color=red]
"SusMachinistsF\n23" [color=red]
"PieCoppersmithsF\n3"->DunShop [color=red]
"PieCoppersmithsF\n3" [color=red]
"PieCoppersmiths\n8" [color=red]
"DelTLaborers\n203" [color=red]
"DelTLaborersF\n35"->BuffaloKiln [color=red]
"DelTLaborersF\n35" [color=red]
"DelTLaborersF\n35"->"BuffaloLot\n5000" [color=red]
"SusTLaborers\n295" [color=red]
"SusTLaborersF\n27"->DunkirkKiln [color=red]
"SusTLaborersF\n27" [color=red]
"SusTLaborersF\n27"->"DunkirkLot\n4000" [color=red]
GenSup->BuffaloKiln [color=red]
BuffaloKiln [color=red]
GenSup->DunkirkKiln [color=red]
DunkirkKiln [color=red]
ErieTimberSales->President [dir="both" color=red]
SusShop->GenSup [color=red]
DunShop->GenSup [color=red]
SusShop->PieMasterEngrRep [color=red]
SusShop->BufDunMasterEngrRep [color=red]
DunShop->SusMasterEngrRep [color=red]
DunShop->PieMasterEngrRep [color=red]

8.2 The problem with relational databases

8.3 Wrenching

I likely have more than enough background in my preface, but my wrenching over the years is related to my perspective on complicated systems, failure, and resilience.

Here, I am removing the engine from a 1963 Rambler American in 2005:


In the background there is a chicken tractor that had a Busybox/Linux system I compiled mounted in the top.


You can see I’ve fashioned a dust filter and duct-taped it to the cooling intake. It had a camera that automatically posted regular pictures of the chickens on the world wide web.


Here I am in 1987, fixing the brakes on a 1965 Rambler Station Wagon:


I was young and foolish to not use jack stands; however, I could barely afford the pads, so I’ll give myself a little slack, but my-o-my, seeing the car balanced on that bottle jack makes me shake my head and offer advice I likely wouldn’t have heeded anyway. My “toolbox” was that big metal bowl in the foreground.

The technical service manual I had for my 1963 Rambler American was incorrect. I created a correct diagram for intake and exhaust valves using Xfig, the same program I created my first data flow and my homebrew schematic with:


8.4 Homebrew Computer

In 1980, I purchased a MicroAce. (47) It was a Timex/Sinclair computer in kit form. I could program in BASIC on it, but I was not satisfied. I wanted to know more, dig deeper. I wanted to wire it, know it from roots to leaves, and intentionally author the code that brought it to life.

Click on the image for larger version.

I completed the first breadboard (48) version of a Z-80 homebrew computer that same year. I mounted a breadboard in the top of a file box, with a keypad and hexadecimal display. It failed miserably. I didn’t understand the concept of a high-impedance state for the bus, and I thought my construction technique was creating too much noise. I worked on and off for many years, breadboarding different versions. It took a while to finish, with the ups and downs in my life. I would go for years at a time without working on it, but I finally completed a working, soldered system in 1992.

The display in the upper right I soldered first, in 1989. You can see I’m using old 50 pair telco wire, which isn’t the best, because it can melt and cause shorts with other wires when soldering, but I happened to have some at the time. The lower right board that is connected to the bottom of the case holds 2N2222 drivers for lamps, which you can see in this video:

Click on the image to watch with sound.

The video shows me toggling reset. Right after reset, the lamps in the center, to the right, show the streaming data the homebrew is receiving over a PC parallel port. (51) This is a small bootstrap (49) program that looks for a byte on the parallel port with the correct signal line state, loads it into memory, waits for another state change, loads that byte into memory, repeats until all bytes are loaded, and, finally, jumps back to run the program, which cycles the incandescent lamps and the 7-segment displays.
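The loop described above can be simulated in a few lines. The strobe/byte pairs and names below are hypothetical stand-ins for the actual parallel-port signal states, a sketch of the handshake rather than the real Z-80 code:

```python
# Hypothetical simulation of the bootstrap handshake: each
# (strobe, byte) pair mimics one sample of the parallel port; a byte
# is latched into memory only when the strobe line changes state.

def bootstrap_load(samples, length):
    """Latch `length` bytes, one per strobe transition, then stop."""
    memory = []
    last_strobe = None
    for strobe, byte in samples:
        if strobe != last_strobe:      # signal-line state change
            memory.append(byte)
            last_strobe = strobe
        if len(memory) == length:
            break                      # all bytes loaded; jump to program
    return bytes(memory)

# Two real bytes, each held across repeated samples until the strobe flips.
samples = [(0, 0x3E), (0, 0x3E), (1, 0x07), (1, 0x07)]
assert bootstrap_load(samples, 2) == bytes([0x3E, 0x07])
```

Holding each byte steady until the strobe flips is what lets a slow, jittery sender feed a loader this simple.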

A bootstrap is usually entered through rows of switches and lights called a front panel (26). I couldn’t afford a proper front panel, so I used DIP switches and a paperclip interface with NAND gates in a set/reset configuration to debounce and enter data. Here is what I used to program the bootstrap directly into memory:

Click on the image for larger version.

The NAND gates are in the electrical tape wrapped handle of the perfboard. You can see the binary-decimal conversion for the bit locations written in between the paperclips sticking out.

When I first breadboarded this, it started with this hand-drawn diagram:

Click on the image for larger version.

My desk in 1999, fixing broken solder joints:


Some joints were too hot, some were too cold:


As I moved the homebrew around, following jobs and apartments, the solder connections would break. I needed a way to document it, and a schematic which doubled as a physical diagram of the pinouts was the most effective reference for troubleshooting. My hand drawn version worked OK, but I realized that legible hand-drawn lines would be difficult to manage, and finishing the hand-drawn version would likely end in failure. I tried a variety of diagram programs, but the only one that worked for what I needed, that didn’t cost too much, was Xfig.

8.4.1 Xfig


Xfig only ran on *NIX systems, and was my early motivation to learn GNU/Linux. Here is what I ended up with, which I didn’t finish until 2003:

Click on the image for vector version. Schematic in fig format

Wiring functions like a graph, where the edge is a wire connecting the nodes of two connections. Or alternatively, a schematic is a wiring map. The intention is to create a solder joint that won’t break, but the reality is that they will, and do, and having a map of pinouts and wires goes miles towards keeping the homebrew running. My map distinguished control lines from address and bus with color coding, and curved the lines so they were distinguishable from each other.
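A minimal sketch of that idea, with made-up pin names: pins are nodes, wires are edges, and the map answers “what is on this net?” when chasing a broken joint:

```python
# Sketch: a schematic as a graph. Pin names below are illustrative,
# not taken from the actual homebrew schematic.
from collections import defaultdict

wires = [
    ("Z80.A0", "RAM.A0"),   # address line fans out to two chips
    ("Z80.A0", "ROM.A0"),
    ("Z80.D0", "RAM.D0"),   # one data line
]

net = defaultdict(set)
for a, b in wires:
    net[a].add(b)
    net[b].add(a)

# Everything directly wired to the CPU's A0 pin:
assert net["Z80.A0"] == {"RAM.A0", "ROM.A0"}
```

With the netlist in hand, a broken joint reduces to checking each edge touching the suspect pin.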

I started my IT career as a salesperson of IBM PC and compatible computers. Prior to PCs, information processing systems required many people to operate and maintain, as well as large corporations that often owned the hardware, leasing the systems to users. With PCs, organizations could scale and operate with agency. A typical sale involved system analysis. A customer would have a problem they needed to solve, and the sale would revolve around that problem. I would often go onsite after the sale to help bring the system operational within their organization. I remember needing to use a hex editor on a word processing binary file to match superscript/subscript commands for a customer’s printer, as they were printing academic papers. I also helped with a successful political campaign by selling a politician a PC and helping her load and configure her database for mailings. She won her state representative campaign, and later went on to become a member of the US House of Representatives. These experiences stayed with me, ingrained as both a benefit of information technology in operation, but also as a time of revolution, when stakeholders had technological agency.

My career moved on to enterprise IT with the widespread adoption of networking. I got a job at an IT consulting firm as a technician and helpdesk for a main and satellite office. This single company morphed through an initial joining of four founding companies across six cities, into a nation-wide consulting company in 25 cities. The complexity of the resulting system required automation and third party software to manage. I learned an early enterprise IT system management platform, Unicenter TNG, but added my own automation, as the GUI was cumbersome.

Diagram I created of the system I built in 1999. Click on the image for vector version.

As we acquired companies, IT operations and engineering was directed to push expenditures into the cost of acquisitions. Combined with “goodwill”, I saw how acquisitions could create a seemingly healthy company, yet there was no real operational agency or strategy that unified the acquired companies for healthy profit. I confronted our accountant about this in a diplomatic way, and she said, “You aren’t supposed to know about that.” Upper management felt that they could gain control of the monolith they had built with enough business intelligence (BI), so they put in Cognos. On their own, the leaders who had built the successful individual consulting companies understood how they ran, and what they were all doing together. The combined roll-up, particularly with the accounting strategy of acquisitions, caused a crippling cognitive dissonance for shared intention and overall health. This was my first glimpse of black box BI used to mitigate lack of human cognition. Human cognition should come first. What are our strengths? What do we have to offer the world that is unique? How do we gauge success? Once these broader questions are answered, BI can be used to ensure the organization is tracking to intent.

I got a job in 2001 working for a startup that provided inventory management and combined purchasing for health food stores. Most of the stores were connected with dial-up lines. With a crew of three, I deployed and managed 150 stores with POS/PC systems, centrally updated inventory databases, and frequently updated software at the stores, including Palm OS devices that handled the scanning. This, combined with the Unicenter TNG work I had done previously, gave me a decent insight into the coming DevOps perspective that infrastructure is code.

Diagram I created of the system I built in 2002.

When the company got tight on money, it used the float from the store purchases for operations. This has a similar kind of problem as the tweak of rolling operational costs into acquisitions. Everything is great as long as you are expanding, but a lull can be devastating.

I got a job in 2003 at a medical ASP (ASP = Application Service Provider; it is what they called cloud before it became SaaS). In addition to building out and monitoring the front-end servers, I worked on their remit downloads, verification, and processing. While there, I created my first data flow, documenting my work. I used the same diagramming tools I used for my Z-80 homebrew schematic.

Diagram of the system I created in 2005. Click on the image for vector version.

Like the previous two jobs, this company failed as well; however, it failed for technical reasons. There was a misconfigured cross-connect between switches that caused performance problems. I count this as one of the bigger errors of my career. While I was not part of the networking team (of two), the pool of servers I designed, built, and operated ran across the multiple switches. Eventually I was able to figure out the problem, after the main network engineer left; however, it took way too long, and we had lost most of our business by then. I should have been more active, earlier on, in troubleshooting a solution. My lack of engagement across silos was arguably the reason for the failure of the company. What is very interesting about this is that all of the silo groups (networking, apps, compute, storage, database) had lots of real-time reporting on the systems. I had several reports myself. I even put in SmokePing and created other network testing tools that measured TCP latency to my servers (vs. just ICMP). None of our reporting got to the root problem. I don’t think that we once all got together in a room and discussed all of the pieces together to try and brainstorm a solution. We just created lots and lots of reporting that tended to show why the problem wasn’t within our silo.

In 2006 I got a job as a system architect at a global law firm. Within the first two weeks, I jumped right into a critical project. They were trying to solve the problem of running discovery apps over latent links. They also had a horrible network that aggravated this, but they weren’t aware of just how bad it was, and wanted to buy an app to solve a particular symptom. The CIO set up a meeting, and I met with my assigned project manager to establish the timeline for rollout during my first week on the job. The CIO put me on the spot, and I figured no big deal: I would figure out how it worked and what people needed the first week, get the vendor recommendations, and put it in by week two. My project manager, who had previously worked at NASA on the space shuttle, was not happy that I had answered in this way. I told her I would back it up, and responded with an email that had bullet points for how I saw the project being implemented. She came back, waving the email at me, and said that what I gave her was entirely unacceptable. I was confused and thought she was abusing me (she wasn’t).

I went to my boss, a wonderful boss who was generous and thought broadly. She gave me an example of what was needed. This is how I was introduced to the concept of a solution design document. It is a form of knowledge that describes where we are now and where we want to be in standard terms, so that everybody can agree. Not every aspect needed to be filled out. It varied by solution. In the years that followed, I realized that if the information applied to a system at all, at some point during the procurement, deployment, or operation of the system, the aspects would come up. From that time forward, I insisted on creating an appropriately scaled solution description for every medium+ project I worked on. My work experience so far had shown the value of this level of detail.

I got bored and moved a POS system to cloud for a brick and mortar retailer, then moved on again to a pioneer of internet search that was re-inventing itself after losing the search wars to a current cloud giant. I was in charge of all monitoring. There was a brilliant person in charge of IT who had replaced the typical relational database reporting with decomposed data that was then fed into reporting and analysis engines, kind of like the modern Elastic. I realized that key-value pairs in event streams could be much more effectively analyzed than canned relational reports. This is the idea behind Splunk, and I evangelized Splunk. I struggled with the simplest tasks of reporting on all monitors across thousands of servers. Nodes in a monitoring system do not fit well into a relational database. Most machines are different, even if they are the same model. I found that NoSQL approaches worked better for reporting on monitor classes.
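A minimal sketch of why key-value event lines are easy to analyze without a fixed schema (the field names here are hypothetical, not from any real system I ran):

```python
# Sketch: aggregate ad-hoc key-value event lines, Splunk-style.
# No table definition is needed; each line carries its own fields.
from collections import Counter

events = [
    "host=web01 status=ok latency_ms=12",
    "host=web02 status=fail latency_ms=900",
    "host=web01 status=ok latency_ms=15",
]

def parse(line):
    """Turn 'k=v k=v …' into a dict; unknown fields cost nothing."""
    return dict(pair.split("=", 1) for pair in line.split())

status_counts = Counter(parse(e)["status"] for e in events)
assert status_counts == {"ok": 2, "fail": 1}
```

A relational report would require deciding the columns up front; here a new field on tomorrow’s events simply appears, which is why heterogeneous monitor nodes fit this model better.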

At this point, 2011, I had in my kit: streams via key-value pairs and analysis via Splunk, knowledge management, formal solution description/design documents, graphs for resilience (homebrew), and graphs for reporting (monitors).

I was hot on the key-value pair analysis track. I moved on to another startup, where I could do anything I wanted in IT as long as it was fast enough and fit the requirements of the money that backed us (large banks). I struggled with my main developer to build out an analysis platform for an upcoming launch. I finally just did it all myself in two weeks using GNU/Linux, BASH, Perl to capture and normalize the data, and ran it all into Splunk as key-value pairs, happily proving my ideas from my previous job. I used my skills in system documentation to demonstrate to the banks our systems were secure and protected. This company failed, again because of funding.

I moved on to another law firm, which had a similar cycle of projects that my solution design skills worked well for; however, some cracks were starting to show. There was no longer an architecture team, and the meaning of engineering and design had degraded to quick vendor meetings and a few notes.

I remember one design consideration I focused on that was particularly difficult for people to grok. Backup retention was complicated at a law firm because of discovery. If email and deleted files were only retained for 30 days, then discovery was easier to comply with. The cognitive ability for somebody to include backup retention from a discovery perspective, backup retention from a critical files perspective, and mix that in with offsite replication was stretched to the point that additional questions about backup retention of critical files were quickly brushed off as already dealt with. The scenario I was focused on was data being corrupted or purposefully deleted and not discovered until after 30 days. Certainly there are important files at a law firm where this needed to be addressed, but the collapse of architecture->engineering->operations was coming down on my head as I struggled.

I met over ten times over the course of a year to get a proper backup retention policy in place. I finally got the operations team to put in a fix; however, they couldn’t figure out how to make it permanent for more than a year, and I had to set a yearly notice on my calendar to remind them to put the one-off interval in again each year. This also means that the only files in the entire global law firm, at that time, that were backed up outside of the 30 day retention policy were my files that I had specifically adjusted for. I had no indication, after all of this fight, that it had sunk in that we needed a broader policy to cover other files, and I had used up more than my allotted attention fighting for this one backup design requirement.

In addition to the increasing cognitive challenges for operations folks trying to shepherd design considerations, the level of documentation, even in its simplest form, was too much for most people to absorb. I think the worst part was the long narrative form. Work and its associated design knowledge was being broken down just as I was building my analysis and collaboration skills up. More and more I found that even engineering managers could only digest a couple of sentences. There was a perception by management that long-form analysis documents were part of the old world. The new world was agile. When network, security, storage, and OS dependencies are stripped away, i.e. all that remains are containers and cloud services, the scope gets narrow enough that developers can just write the app, show it to users, and in a tightly coupled loop deliver and improve products without much engineering or architecture. I imagine that most who are in IT and tracking my story here would recognize that agile doesn't necessarily have anything to do with design and architecture, but we are back to human cognition. The perceived freedom of agile is constant progress, but in practice that progress comes from sacrificing system cognition by the humans participating in the agile workstreams. There are plenty of cattle. Pets are too expensive. Just rely on the cloud company to supply the feedlots and slaughterhouses. (📑50)

One project, though, changed my life again, just as significantly as the NASA project manager did at the previous law firm. I was put on a project to convert the public finance group from paper workflow to electronic.

I needed something that captured the system in an abstract way that could be reviewed with the group. The default Visio model that looked best was Gane and Sarson. It had three symbols. It made more sense than UML. More importantly, it solved the biggest problem I had so far: an easy and understandable way to provide different levels of detail. Gane and Sarson is a data flow diagram (DFD) notation. Information Technology, at root, deals with data flow. There are many other perspectives that formal enterprise architecture frameworks capture, but data flow is the lowest common denominator. I have since used it to analyze several systems at full detail, and it is quite flexible, particularly with some of the constraints and conventions I have added.
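To make the connection to the rest of this document concrete, a DFD's elements and its levels of detail can themselves be captured as triples. This is only a minimal sketch under assumptions of my own: the element names, predicates, and the `is-child-of` drill-down convention below are hypothetical illustrations, not the actual model from that project.

```python
# Sketch: Gane and Sarson DFD elements expressed as (subject, predicate, object)
# triples. All names and predicates here are hypothetical.
dfd = [
    ("Attorney",        "is-a",        "external-entity"),
    ("Draft Filing",    "is-a",        "process"),
    ("Document Store",  "is-a",        "data-store"),
    ("Attorney",        "sends-to",    "Draft Filing"),
    ("Draft Filing",    "writes-to",   "Document Store"),
    ("Format Document", "is-child-of", "Draft Filing"),  # one level of detail deeper
]

def children(node):
    """Drill down: list the sub-processes of a higher-level process."""
    return [s for s, p, o in dfd if p == "is-child-of" and o == node]

print(children("Draft Filing"))  # ['Format Document']
```

The `is-child-of` predicate is what gives the leveling: a reviewer can stay at the top level or expand a single process without holding the whole system in mind at once.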

In 2018 I moved on to a company that offered wellness programs and coaching to employers. We had an outsourced engineering and design team, located overseas, with product management handled locally. I was meant to bridge that gap. Much of the business workflow ran through a cloud service that cost a lot of money, and there was a desire to untangle the systems from this service. The workflow had been coded over time by many people, it touched every aspect of the business, and it was not documented. It was a perfect candidate for a DFD. I created a system-wide DFD. Upper management and stakeholders found the method helpful, but it was difficult to match the velocity of the product and engineering teams. I did some research on how to increase velocity, found that triples could help, and pitched it to the company, but they said my ideas were too advanced for them, and in 2019 I was laid off. I have worked on the ideas on my own since then.

8.6 Journal Software

trimg (📑66)

I have written down my dreams, memories, and general daily journal entries since 1985, inspired by this class. I’ve improved and maintained different versions of a journal application since 1994. Here is a screenshot of a version I created in 1994 using Visual Basic:


I wrote up a high-level design for my journal software in 2011 here. Many of its design considerations match those of 3SA. It is also the first time I referred to knowledge management.

I use my journal to manage build steps used to generate the OS that runs the journal system, which I am currently using to compose what you are reading right now. I’ve tried many approaches over the years, using existing journal applications, cloud and local, and many operating systems. I always arrive back at something I control down to the individual operating system components.

I couple the operating system with the journal because I am painfully aware of persistence issues. Operating systems and applications change constantly, and controlling the storage and view of thousands of entries requires ownership and control. This plays into how I see resilient knowledge tools. While it is true that I originally looked at LFS as a way to understand GNU/Linux components, over the years I have needed particular combinations of libraries and applications that were not available with the precision I needed in the current Linux distribution. I've tried them all, from the original Bash-script compilation distros that preceded Gentoo, to Gentoo itself, as well as package-based distros (YaST, apt, yum, and tarball Slackware). I'm actually happy now with where my journal software is at, and wxPython on Ubuntu 20.04 appears capable of doing all that I envision, so my guess is that I will eventually move to that; but when I say that I am conscious of how interconnected and cumbersome the ecosystem is, and even now am reluctant to move, I say it with significant background. Even with the extra hours spent on what most would consider trivial work, overall I am more productive.

Here is a screenshot of the current version written with JavaScript and Python, that I’m using to write the document you are reading:


My solution design doesn’t tackle the maintenance of triples themselves. It is a way that I make the design persistent, as it pushes the ideas down to data rather than stopping at processes (software). Ultimately that is how I’m tackling my own growing collection of journal entries. In triple form I can create something with any OS. Further, by decoupling the rendering from the OS and triple creation, using modern browsers, I can always read my journal. The 100 inch wheelbase Nash/AMC Ramblers were built around the 196 CI engine. I think of my current journal view in that way. It is an engine of data, triples, surrounded by a script coach. Put the horse before the cart, right?
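As a sketch of what that decoupling looks like in practice: once entries are triples in plain text, any renderer on any OS can read them. The file format and predicate names below are hypothetical illustrations, not my journal software's actual schema.

```python
# Minimal sketch: journal entries as plain-text triples, decoupled from any
# particular OS or application. Format and predicates are hypothetical.
triples_text = """\
entry:1994-03-02\thas-type\tdream
entry:1994-03-02\thas-text\tI was flying over the lake
entry:2022-12-08\thas-type\tjournal
"""

# Parse into (subject, predicate, object) tuples.
triples = [tuple(line.split("\t")) for line in triples_text.splitlines()]

def objects(subject, predicate):
    """Any renderer (browser page, script, future OS) can run this same query."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("entry:1994-03-02", "has-type"))  # ['dream']
```

The point of the sketch is the separation: the triples are the engine, and the few lines of query and rendering code around them are the replaceable coachwork.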

8.7 Fictional United Nations Speech

[Sean asked what I would say if I had the chance to speak as the UN President. I will not repeat, or pretend to represent better than Csaba Kőrösi, but I do have something I would add. What follows is what I would insert into a United Nations speech.]

I would like to take a few minutes to talk directly to the 8 billion people that the United Nations Charter is for. The United Nations Charter addresses broad goals that we generally agree with, but there is a problem. We are all human. We all have cognitive limitations as we work towards shared goals. As a species we use our culture to supplement our natural abilities. Culture includes social conventions, passing on knowledge of the world to future generations, and other cognitive tools. Even this speech, as you listen to it right now, is filtered through your particular cultural experience.

There are two main problems. First, our culture, more and more, is transmitted and controlled by interests that are not necessarily our shared interests. Second, we exist in a socio-economic-ecological system so complicated that no human can cognitively understand it. To consent to something requires understanding it. One outcome of these two problems combined is that our desire to work towards shared collaborative goals is hijacked and monetized without our consent. I do not propose any change to the governance structures that our nations have evolved that allow these problems. Every member country has reasons for evolving its existing governance. All that I am asking is that each of the 8 billion people who might be listening to this, the individual citizens, acknowledge the two problems, take steps to compensate, and participate in working towards our agreed-upon shared goals with agency. How, as individuals, do we compensate for our limited cognitive abilities within such a complex system, particularly when our culture is transmitted by global corporate interests? We use the same kinds of tools that global corporate interests have built their wealth and power on. Let me give you an example. Here is an outline of the analysis that Dr. Ye Tao presented at COP26 (📑1) last year on mitigation strategies for human-induced climate change:

        🧊Cooling Return on Investment  
            ☝️ We need to offset 1,500 TW EEI  
        🔒Locked in warming(LIW)  
            🙉Why unknown?  
                1) Lack of public discussion  
                2) Reluctance of those with knowledge/leadership  
                3) Inconvenient truth  
            🎓 Future increase if human causes stopped  
            ☝️ 2–3 W/m²  
        🏒All Play together  
            🎚️ Combined, the scale is insufficient  
        🌍Earth System Energy Balance  
            🔥 Imbalance causes heat  
            🎓 EEI = Earth’s Energy Imbalance  
            ☝️ Our problem is heat right now  
        ✅4 most important requirements  
            1) Net cooling at a small scale while meeting a minimum energy efficiency  
            2) Enough material exists for global use  
            3) Enough energy exists for global use  
            4) Global implementation that would be fast enough  
        ⚗️Use Science  
            🗿Popular efforts lead to ecosystem collapse  
                ⚡️Efforts require energy  
                    💯 MEER low energy to scale  
                ⏲️Efforts require time  
                    💯 MEER immediately addresses imbalance  

His analysis concludes that our focus on renewables and carbon capture is misguided, yet this runs against our cultural feeds. Tackling this is daunting for even the most proficient scientist familiar with the field. At the same time, the 8 billion stakeholders in the socio-economic-ecological system that this analysis addresses should be able to form an understanding of the points. A key tool that facilitates this is the ability to break the problem down into cognitively manageable pieces. Is our main problem heat? Do you agree, yes or no? What causes the heat? What do we need to do to change that? Where are we now with respect to locked-in warming, the warming that will continue even if we stopped all human activity that contributes to it? Where do the components that make up your electric vehicle come from? What energy is used? What resources?

As you try to arrive at answers to these questions, which are core to your own personal agency as a world-system stakeholder, it is important to map relationships further than just one level. Don't stop at “CO2 bad” or “battery-electric vehicles good”. Map out your own concerns further. There are many ways to do this, but one way is using the tools and ideas documented at Triple Pub. Do not let your agency be hijacked. Understand what you are doing, how you relate to the global system, and choose your personal path. Grow your cultural cognition towards shared goals with agency. [I break character at the end of this, as the entire focus of Triple Pub is much like what I would tell 8 billion people. I am no politician. I am no diplomat. I am no CEO. I am a system analyst attempting to do something I feel is worthwhile.]

8.8 Single Page Description

I created this single-page description early on in my journey and shared it, with zero response. It isn't able to cover the aspects of agency and human cognition. There is no direct, simple route: a meaningful map that a reader could use to understand and relate to these ideas requires much more, as the concepts are usually foreign to the reader. My attempt at a concise single-page description is more of a curiosity. Yes, it is possible to use a filesystem as a graph, and to generate graphs with the possibility of inference from only a handful of lines of Python, but it doesn't provide a meaningful contrast with black-box BI/AI/ML cloud services, so the benefit of brevity and simplicity is lost.
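To show what "inference from a handful of lines of Python" can mean, here is one minimal sketch. The facts and the `depends-on` predicate are hypothetical examples, and the transitive closure below is just one simple kind of inference, not the full scope of the single-page description.

```python
# A handful of lines of Python: triples as a graph, with simple transitive
# inference. Facts and the "depends-on" predicate are hypothetical.
triples = {
    ("app-server", "depends-on", "database"),
    ("web-tier", "depends-on", "app-server"),
}

def infer_transitive(facts, predicate="depends-on"):
    """Repeatedly add (a, p, c) whenever (a, p, b) and (b, p, c) are known."""
    closed = set(facts)
    changed = True
    while changed:
        new = {(a, p, c)
               for (a, p, b) in closed if p == predicate
               for (b2, p2, c) in closed if p2 == predicate and b2 == b}
        changed = not new <= closed
        closed |= new
    return closed

inferred = infer_transitive(triples)
print(("web-tier", "depends-on", "database") in inferred)  # True: inferred, not stated
```

Nothing in the input states that the web tier depends on the database; the relationship falls out of the data, which is the kind of insight the black-box cloud services sell back to us.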