
Triple System Analysis

Agatha D. Codrust
January 25, 2023

TL;DR

I show how to analyze systems in a way that humans can own and understand, with a single web page application. It works without an internet connection on any modern web browser, the exact opposite of cloud-first. I go through this effort because humans understand and make decisions about complex systems by breaking them down into propositions that can be agreed on and vetted. The propositions (triples) are embedded in the web page, and I use established graph analysis tools to visualize and analyze the system. I provide twelve examples of this in operation. The most useful, for those working with information systems, is a data flow diagram (DFD). Three of the examples show how DFDs can be created and visualized with triples: one level, multiple levels in 2D, and a 3D model. Other examples include a minimal model, a dynamic contextual list, an expandable tree, legit graph analysis, a single-level icon visual dependency graph, a personal journal, sequential steps, and a dynamic presentation. The HTML you are reading here is built with triples. I use emoji to make triples easier to visualize vs. machine-centric formats like RDF. I show how to bring up a system capable of running this from free, open software, bare-metal on up. Humans should be stewards of the biosphere, and we should insist on individual agency in this role. Relying on products and deep supply chains, backed by oil and a profit motive, to gain agency is at cross purposes with human ownership and understanding towards system resilience.

Abstract

Stakeholders should be able to quickly analyze and understand their systems at a fundamental level, with tools they own and manage; however, this is often difficult because of frequent change, the need to scale, and lack of experienced staff. There are many existing services and tools that provide operational metrics and prediction based on data streams, as well as knowledge graphs and AI that provide these kinds of insights, but they do not facilitate real-time human cognition. Humans have a limited ability to consider multiple actors and dimensions in real-time, but this should not interfere with a fundamental understanding of critical systems. I propose that starting with constrained semantic triples, aided by existing bioinformatics software and web browser standards, it is possible for humans to collaboratively establish system knowledge cognitively in real-time. This facilitates resilience, as the techniques can be used at times of crisis, and towards emerging human goals. In this paper I document a design that fulfills the requirements of real-time human cognition, with a goal of system resilience. I demonstrate the technique with multiple examples, and show how the product of this effort is extensible to modern cloud compute services, AI, and knowledge graph platforms. I provide design details for a fully contained information processing application in a single HTML page that does not require network connectivity or deep supply chains, yet can still facilitate collaboration.

Document Format and Quirks

This document is formatted as a solution description for a design of triple analysis, as well as how to operate analytics based on the design. A solution description is not a particularly resilient information artifact, as it can get dense, and the terminology can be intimidating. It is currently the most recognizable form of system design, though, so it serves as a way to bootstrap the ideas for a technical audience that is unfamiliar with triples. Readers who are tight on time might be better served by skipping directly to the design and operations sections, where the ideas are shown with live examples. A sequential reading will guide the reader through my interest, expected audience, premises, system requirements, analysis, and design, before arriving at the operations section. I will explain each section's purpose without jargon.


Table of Contents

1 List of Abbreviations

Abbreviation Definition
3SA Triple System Analysis
KVP key-value pair
BEV battery electric vehicle
OWL Web Ontology Language
RDF Resource Description Framework
BFO Basic Formal Ontology
TOGAF The Open Group Architecture Framework
EA enterprise architecture
KISS Keep it Simple Stupid
DRY Don’t repeat yourself
UUID Universally Unique IDentifier
AMQP Advanced Message Queuing Protocol
CI/CD continuous integration and continuous delivery
JSON JavaScript Object Notation
DFD data-flow diagram
MDFD multi-level data-flow diagram
MQTT Message Queue Telemetry Transport
UML Unified Modeling Language
SVG Scalable Vector Graphics
SPA single page application

2 Front Matter

2.1 Epigraph

“We haven’t been too bad, have we?”

“No, nor enormously good. I suppose that’s the trouble. We haven’t been very much of anything except us, while a big part of the world was busy being lots of quite awful things.”

~Ray Bradbury, The Last Night of the World (📑75)

2.2 Dedication

For Sean and Sunn, who read many versions of this, encouraged me, and provided critical feedback.

2.3 Audience

Triple System Analysis (3SA) is intended for anybody who deals with complex systems. No specialized knowledge is required. While information technology is usually part of the solution for any complicated problem, the ideas of 3SA will work without it.

Civilization unity with IT, constant change, destruction, and recreation

Considering the complexity of industrial civilization right now, and the associated stressors, the stakeholders for these ideas are everybody. As stakeholders in organizations and ecosystems that support vertebrates, we need the ability to quickly and collaboratively establish where we are, what that means, and where we want to go. It may be that the complex mess, the territory of our global supply chain that we are embedded in, is something that we will not be able to extricate ourselves from enough to reclaim individual agency. In that case, my audience is stakeholders of future systems.

Some of the wording and format is intended for system analysts, so it might be a stretch for some readers; however, this kind of documentation has a rich history, and is suitable for bootstrapping a new way of thinking about system analysis. Where the meaning of the section or aspect is specific to system analysts, I will put an explanation in bold italic right after. The system considerations I outline are standard terms. It is not a waste of time for a novice to become familiar with the format of a formal solution description. I get it that most are treading water with a mix of established organizational skills and demands from the fires of the moment that need to be put out. There is little time remaining to tackle something like this; however, I believe that if you do put in the time, I can help you make whatever system you are responsible for work better.

If your focus is in IT, and you are satisfied with an enterprise architecture (EA) framework, then you are already convinced of the need for underlying maps and rules vs. the surface of typical IT movement. EA followers are not my audience here. My work is for the scrappy, for those trying to break out of the territory/flow/surface, while still building, improving, and operating live systems. 3SA is for those stuck in sprints with few architects on staff, and an overwhelmed non-technical project manager (if one even exists at all). These methods can start from almost nothing, in the middle, and scale up to a mediocre EA map. I don’t pretend that my methods can replace a good EA map. If The Open Group Architecture Framework (TOGAF) works for you, stick with it. If you already have the management buy-in, are comfortable with the cost, and feel the implementation of your EA maps for your various domains/views is sufficient, then this method is unlikely to be helpful. Be wary of the limitations of a formal EA method as far as resilience goes. When a system and its dependencies change rapidly, it is quite likely that the framework will also need to change quickly, and EA carries significant risk there. Keep your eyes open for TOGAF mappings under Basic Formal Ontology (BFO), for example, as a way to mitigate the risk. (📑42)

2.3.1 World Wide Web

The original ideas of the World Wide Web were based in triples. You can see a graph right on the front page of Tim Berners-Lee’s Information Management proposal from 1989:

Tim Berners-Lee’s Information Management proposal from 1989

He understood knowledge representation at a foundational level, and the general architecture of using triples worked well. He called triples “links” in the context of linked information systems. Extending and formalizing triples with OWL and RDF proved more difficult; however, the technique of navigating knowledge via links is key to the World Wide Web. (📑74) (📑2)

2.4 Foreword

Some guidance for the journey

Look at this solution description as though it comes from the middle of things, a plateau: not the peak, and not the base (📑61). This is counter to the way we normally look at designs, where the base leads up to a peak of design, and the middle is relegated to drudgery. This solution description does arrive at a design, but it is critical that the reader break out of the product mindset. I have no product in mind as I write this. At the same time, this work is not merely a collection of basic tools and principles. It is a perspective on systems, combined with tools and a shifted world-view. Go on this journey with the mindset of coding on your own time, willing to learn web browser scripting and graph analysis. Take your web browser out for a spin, your own rowboat in a sea of change, amid infinitely complex, inter-related systems. Establish where you are, where you want to go, and how to get there, but do it from the middle, right there from your perspective at the oars. For those that experienced the freedom of personal computers in the 1980s, this is much the same; however, the “PC” in this case is a web browser.

Errors in judgment can be avoided by simply going out a couple more degrees of Kevin Bacon. At one degree, battery-electric vehicles (BEVs) make sense. Perhaps BEVs make sense at two degrees, as solar and wind energy are becoming cheaper; however, at three degrees, where the components and the resources to create those components are accounted for, the truth becomes complicated. There are other systems in play, like climate, that have momentum and consequences that also need consideration. Do BEVs do what we need, and do it quickly enough? The extra work to analyze a few degrees further is prudent, as the stakes are very high. As stakeholders in ecosystems that support vertebrates, we need to ensure that we are working towards the goals we say we are. The prevailing idea seems to be, “Don’t worry your pretty little head past two degrees. We’ll take care of you.” This laziness suits many of us, as it is convenient to cede agency.

2.5 Hobbies, Work, Future Graphs

For interactive graphs, go here.

My hobbies
My work experience
My interest in helping humans with systems

2.6 Preface

Why did I write this? What is my interest?

I started my technical journey in electronics. I was also interested in computers, but I wasn’t satisfied just entering BASIC programs. I wanted to know how computers really worked. Over the next 10 years, I built my own Z-80 homebrew computer (📜4). During that time I moved 20 times, attended school, bounced around many technical and non-technical jobs, and finally settled in an IT career. Some solder joints were made with a cheap soldering iron. Some wire was whatever I could get for free. I was stuck with my original choice of building technique: masses of point-to-point wires soldered together with the aid of pre-drilled, un-tinned perfboard:

Perfboard solder

Like most technical debt, I chose it because of the low cost and familiarity; however, whenever I moved the homebrew, the solder joints would break. During vacations from work, I would unpack my homebrew computer and repair it. My desk often looked like this:

Fixing cold solder joints

I transferred my hand-drawn schematic to a computer-generated schematic to help troubleshoot. This experience works well as a lens for IT. Seemingly impossible tangled messes of a system can be fixed with a good enough diagram of where the individual connections are, and an understanding of how they work together. At the same time, some fragile platforms will never be reliable, no matter how excellent the documentation and analysis are.

My IT career slowly moved (📜5) from an Operations perspective to analysis, and I focused on some of the same kinds of resilience issues, but with more actors and much bigger systems. At the same time I became interested in broader systems like the global supply chain and climate. In my mind, this all is related. We make seemingly small decisions that make sense at the time, like choosing to solder point-to-point with whatever wire is available, or not heeding the advice of Buckminster Fuller about oil (📑52), and we face those decisions every time we move, figuratively and literally.

In recent years I have taken these ideas, coupled with my experience collaboratively analyzing systems, and created a solution that fits the broader requirements of bigger systems.

2.7 Prologue

A stand-alone taste of what is to come, that supplements the main content.

Sue was just returning from her lunch break to staff the operations center of the county water district. She wore heavy framed, black eyeglasses, jeans and a gray long-sleeved canvas shirt. She had been there through three reduction-in-force sweeps, and was one of two people left watching the operations screen and tending the district machinery. Three red circles appeared on the wall console, showing pump failures at Lovelane Lake, Upper Dredge, and Placidish River, connected by a web of pipes. Sue logged on to her computer to get details, but received “Access Denied”. She tried again, and got the same response. John was the only other person in the ops center. There were four room-width desks with empty chairs facing the screen on the wall, and he and Sue sat two seats apart in the second row.

“It won’t work, Sue. There is no authentication available, as Datacenter West is down, or at least unavailable.”

“Do you know what the pump failures are?” asked Sue, anxiously.

“I assume it is electrical, as the dam at Upper Dredge blew a transformer. Datacenter West gets power from there. They are on backup, so the datacenter is still live, at least until they run out of diesel, but nobody can reach it because the network is down. I’m going to drive out there and get a copy of our pipe layout and IoT keys, as we don’t have one on site. We may need those.”

John grabbed his backpack, shoved his laptop inside, and ran out the door, leaving Sue with a screen of red. John had been there almost as long as Sue. There were now seven circles of red on the console.

“I’ll log on locally,” Sue muttered, and was able to get a command prompt. She tried to ping the pump at Lovelane, and got nothing. She got the same result for the other pumps. Her email and other office software were also hosted at Datacenter West, so all she could do was run notepad and ping. She noticed Joyce outside the window waving at her, and opened the door to talk to her. Joyce worked in accounting reconciling C* expense reports.

“Did you know the water is out? I can’t wash my hands,” Joyce said, annoyed.

“Oh no! If the water is out too, Datacenter West won’t cool.”

“Huh?”

“Never mind. I think Upper Dredge dam has stopped pumping water over ridge 4. I got an alarm. I think there is still bottled water in the fridge. Power is going out across the county as well, so you might want to get your car out of the garage. I’m calling Laura Talos right now. She’ll know what to do.”

“OK. I think I’ll get home while the traffic lights are still working. See you tomorrow.”

“Think, think, think,” Sue reminded herself in a half-whisper. “John is getting the IoT keys, so that’s good. We can at least see if any Upper Dredge pumps were damaged in the surge when the transformer blew. Where did I put Laura’s number? I know I copied that down from the Datacenter West contact app just in case. Ah, here it is.”

“Hello, Laura? Sorry to bother you so late, but we have a situation. Upper Dredge blew a transformer… oh, you know… yes… yes… but the problem is that it powered some networking equipment, and we can’t reach many of the pumps. The monitoring screen is all red. Yes, I thought that too, but there is no running water at the office, so the outage looks real. I’m worried about cooling at Datacenter West. John headed out to grab a copy of the IoT keys and an updated map. … No, we don’t have one here. I can’t get into email or access my spreadsheet of pumps. I can’t even log on at all. I had to log in locally. … Yes, yes… I’ll call after John gets back. Kor? Who’s Kor? OK. Bye.”

2.8 Introduction

The problem the solution solves, who is involved, who is responsible, and who cares the most about a successful implementation

As a species we have the unique ability to gauge shared intention, and work together towards common goals. (📑56) We grow and share information culturally to supplement our natural, human abilities. We need help as the world gets more and more complicated. We are running at cognitive capacity most of the time. Our attention is stretched. It is critical that we make sure that our common goals map out correctly. Where are we now? Where do we want to go? How do we get there? We have always maintained guiding principles, sometimes written, to keep us on course as we navigate our lives, but we are in the middle of extremely complicated systems and quick change that makes it difficult to map the new territory at the velocity needed.

Rowboats, not robots!!

Information processing in large datacenters with deep hardware and software supply chains, coupled with extensive human capital, is how we currently attempt to shepherd the 8 billion (2022) people on the planet and biosphere. That is one way to tackle the task. The problem with this is that as individuals and organizations, we need to be able to participate and be assured that we are working towards goals we intend, rather than the goals of third parties or merely immediate concerns. The nature of the way that we deal with our challenges ends up clouding the original problem and goals. Another problem is that the deeper the supply chain, the less resilient it is. A deeper supply chain might facilitate control for those at the top of the pyramid. It might create more economic benefit while it is functioning well. It might also facilitate scaling and tailored, incremental changes; however, real change in the context of related systems is extremely difficult, and the overall resilience is necessarily fragile because of this. This goes for any system. If the software you use to manage your systems requires layers upon layers of features, accessed through layers of infrastructure, housed and secured with the effort of millions of individuals, change is excruciatingly slow and difficult. It may work fine when the systems are relatively static in nature, but quick change will often knock systems with deep supply chains out of operation. This is an assumption of 3SA.

On-premises hardware and software is often much more expensive to operate. 3SA does not necessarily promote a clawback of infrastructure from cloud. 3SA is about cognition of systems and expressing and managing those systems as humans with agency during quick change, from within extremely complicated systems. As humans we need to be able to tackle system analysis without relying on deep supply chains. Remember the dilemma about where you store your datacenter recovery documents? You shouldn’t only store them in your datacenter, and, yet, storing them outside the datacenter becomes a thorny issue. (📑64) From a broader perspective, our systems are meant for humans and the biosphere, or should be, so we need to be able to understand the schema that supports our everyday decisions, as we collaboratively work with shared intentionality towards common goals in the face of crises.

2.8.1 Checklists

“The philosophy is that you push the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise. All you ask is that they talk to one another and take responsibility. That is what works.” ~Atul Gawande (📑24)
 

Atul Gawande sees checklists as a useful tool to aid the cognitive jalopy (📑57) mind of humans. In his book The Checklist Manifesto (📑24), he describes the wonder of checklists used by surgeons and aircraft pilots. Imagine an aircraft pilot unable to take off because the checklist app they (📑44) use is upgrading, or they have run out of data on their plan, or the datacenter running their app has a power outage, or network connectivity between the app hosting and storage back-end is lost. The pilot might choose to take off anyway, seat of the pants and all, but what if the plane crashed because they missed an important step like “lock the wheels up” or some such? Perhaps the aircraft has super sensors and AI that invalidate the need for checklists, but things change, the world is messy, and we have a lot of very competent brains that can adapt to the change without all of the external dependencies.

What if we could have both things? What if we could partition off a level of checklist flexibility, but soup it up a bit, and even leave room for more complicated levels of meaning that our challenged bio compute (📑45) has difficulty doing well? That is what this solution description is about.

2.8.2 3SA and Resilience

3SA facilitates resilience for any system at the time of crisis. Like a checklist, it provides room to adapt, and promotes collaboration and responsibility. It is a way of thinking about systems, as well as some simple techniques to represent system knowledge. There are certain precautions you can take for a variety of scenarios like power outages and earthquakes, but being prepared is different from resilience. Resilience successfully navigates the unknown crisis that hasn’t happened yet.

System resilience

The ability of 3SA to be quick enough to respond effectively in a crisis means that it can model and analyze existing systems, allowing organizations to understand how their systems operate at a high level without extensive resources. Owning system knowledge at this level informs better decisions and should contribute to resilience.

“Information systems start small and grow. They also start isolated and then merge. A new system must allow existing systems to be linked together without requiring any central control or coordination.” ~Tim Berners-Lee (📑74)
 

While it is a general purpose technique, the 3SA solution includes data flow, a specific application of triples. This is appealing, as most current problems involve flowing data. We are entangled in compute and data. As an example, when modeling a system that dispatches fire fighters, data about fires, locations of fire fighters and equipment, and routing of smoke metrics could well happen as data, rather than modeling the physical interactions. This also helps existing organizations, as many have data flow challenges, and have already abstracted their business processes within those flows.

Decomposing graphs into triples for the purpose of system analysis is my lifetime “Aha!” moment. Was the moment lying in wait in the world, like a video game sequence of puzzles that needed to be unlocked before the final prize? Is it merely a reflection of my experience, a solipsistic convenience at show’s end?

2.9 Assumptions

Key items that should be carefully considered before the rest of the solution description is read, as they lead to the design conclusions

All of these assumptions come from the perspective of resilience, which means that the organization or individual is facing a crisis or wishes to improve their skill set and knowledge representation to facilitate resilience when a crisis happens.

2.9.1 Middle origin

Middle origin is when stakeholders collaborate on design and improvement of the system from the middle. 3SA assumes this is the primary mode of system design origin.

In some cases, an organization only supports a top-down design and implementation. This is similar to classic waterfall; however, the stakeholders might be engaged even less. The scenario might be that the organization is required to purchase a particular piece of software because of client relationships, and so requirements and design of the system follow the solution.

In some cases the origin comes from users directly. This is the situation with most agile workstreams. It might quickly update products from a user perspective, but it downplays design origin from the organization stakeholders. Generally this mode downplays all design.

In all cases there is likely a mix. This is fine, as long as the primary, intended mode of system design is from the middle, collaboratively among organization stakeholders.

2.9.2 Organization Autonomy

Stakeholders of organizations desire the ability to build, represent, and manage knowledge of their organization that is independent of third parties. System knowledge should not be owned by a third party, nor should strategic views of that knowledge be dependent on third parties.

2.9.3 Maximum Analytical Velocity

The priority is analysis at the highest velocity. The priority is not a sustainable, re-usable and globally understood representation of knowledge. We often treat everything as a crisis in corporate settings; however, analyzing systems at maximum velocity works equally well for other types of crises.

2.9.4 Minimal Existing Knowledge

At the time of crisis, for these methods to be prudent, the assumption is that there is little existing knowledge of the system that is usable within the focus of analysis. Likewise, for a product in a quickly changing agile workstream, existing knowledge is not useful at the needed velocity. If there is re-usable knowledge that is relevant to the crisis, then use that knowledge rather than using these methods. This assumption clarifies that the focus is on new sets of knowledge. It may be that there is existing knowledge at the time of crisis that needs to be represented in a more flexible way. These methods might work for that.

2.9.5 Human Cognition First

The priority is for humans to understand the system under analysis. This is a major assumption, as it runs counter to most applications of triples since the 1960s.

2.9.6 Triple as Atom of Knowledge

The World Wide Web is based on triples. Formal ontologies are based on triples. The magic of graphs happens when going from two (key-value pairs (KVPs)) to three (triples). This solution description assumes that system knowledge atoms are triples. This assumption is a bit odd, as many are convinced that event streams with KVPs form atoms of knowledge; however, KVPs in event streams are more suited to operational knowledge than the focus of this solution description, which is human cognition of systems.
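
As a minimal sketch of the jump from two to three, in Python (the names here are invented purely for illustration):

# A key-value pair describes one thing in isolation; there is no second
# entity to link to, so KVPs pile up rather than connect.
kvp = {"status": "running"}

# A triple names two entities and the relation between them, so triples
# that share an entity chain together into a graph.
triples = [
    ("pump_7", "feeds", "tank_2"),
    ("tank_2", "feeds", "pump_9"),
]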

2.9.7 Emoji Good

The solution utilizes emoji because of the immediate visual recognition. This outweighs the scalability and compatibility issues.

2.9.8 Underlying Logic

Determining the underlying logic is a higher priority than gauging the current running system. There are many, many tools and platforms that can gauge the current, running system. Use them. It might be prudent to figure out how to do this from a local-first perspective. Whatever metrics you gather from the running system, whatever correlation is helpful, prepare for this; however, 3SA is not primarily concerned with the current running system.

2.9.9 Relational Database Solutions

Relational databases are too difficult to design and deploy at maximum analytical velocity, particularly if we accept the assumption of minimal existing knowledge.

2.10 Principles

What guides the system design choices of the system?

2.10.1 Keep it Simple Stupid (KISS)

The system should be as simple as possible. Don’t make it more difficult to use for 99 percent of the users, simply to account for the 1 percent.

2.10.2 Don't repeat yourself (DRY)

Capture and visualize systems directly. Don’t repeat yourself by transforming captured information multiple times.

2.10.3 Once

Capture a system once, in a stand-alone way that adds human system cognition. What is it that is worthwhile to capture? Many things will change by the week, day, and hour; however, basic goals and needs are less dynamic. This does not mean that changes captured in streams of data and work are not important; it just means that, on principle, it is valuable to at least capture a system completely for the sake of human cognition at a level that makes sense.

Knowledge workers that understand a complete model from problem analysis to visualization and operation, are of more use in future crises than the knowledge obtained as a cog in decomposed workstreams. There can be collaboration on broader efforts, but being able to do something completely one time feeds flexibility in future efforts. When the “once” principle is combined with the assumptions and scope of 3SA, it both constrains what is meant by a captured system, and points to the value of an effort that is sufficient to be completed one time end-to-end. If the knowledge worker leaves before the next crisis happens, then a complete model, along with documentation of adaptations to the previous crisis, will assist the new knowledge worker.

2.11 Scope

Items this solution description addresses or doesn’t address that might cause confusion for stakeholders if not made explicit

2.11.1 Multiple dimensions

Since the focus is on human cognition, immediate visualization and analysis of multiple dimensions is out of scope. An example of this is nesting, which is done on one dimension.

2.11.2 Flows and States

Weights and logic in flows are out of scope. How much water going through a pipe, for instance, is out of scope. The fact that a pipe connects two points is in scope, as is the type of pipe and other properties.

2.11.3 Triple acquisition

Triple acquisition and management is out of scope. 3SA does provide design and future considerations that address some of the issues and fulfill related requirements, and provides examples in Operations; however, acquisition will vary widely by application, and so is considered out of scope.

2.11.4 Storage and management of at rest items

Management and storage of at rest items is out of scope.

2.11.5 Security

Securing the triples, either in a web page or as part of the messaging/streams, is not in scope. Security needs will vary widely by particular application.

2.11.6 Identity and Integrity

Identity of users associated with triples is in scope at a high level, i.e. signing the hash of triples or documents with a private key, as described in Collaboration. Examples of an operating system based on the design are in Page Verification and Stream Visualization. The specific implementation of the high-level design is out of scope, and should be reviewed during Initiation, as the requirements will change based on application of the design.

2.11.7 Node ID Uniqueness

Integers can be useful to overload the meaning of the ID with sequence; however, this can cause problems when re-arranging, as links break when the sequence changes. It also causes problems with real-time collaboration, as it is quite possible that two people could create the same new ID. There are many ways to mitigate these issues, but Universally Unique IDentifiers (UUIDs) are likely one of the top ones. I will address this in a bit more depth in Future Considerations.
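
A minimal sketch of the UUID approach, assuming plain Python:

import uuid

# Mint a random UUID for each new node instead of the next integer.
# Two collaborators can create nodes at the same moment without
# colliding, and re-arranging triples never breaks a link.
node_id = str(uuid.uuid4())  # e.g. '9f1c2d3e-...'; the value varies per call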

2.11.8 Formal Semantics

Formal semantics are out of scope, including Resource Description Framework (RDF) and Web Ontology Language (OWL). Extensibility to formal semantics is in scope, and is discussed at an operational level here.

2.11.9 Trustworthy Code

I had to learn Python and JavaScript to create this document and the demonstrations. Mainly this was because those in the industry that I worked with were unable to understand my interest in leveraging graphs for system analysis. I received little support. I got almost no traction, certainly no maintainable motion. While the code I present appears to work, I present it more as an introduction to the ideas, something to bootstrap individual journeys towards system analysis via a simple web page application and related Python scripts. I use this in my personal journal, and will improve and publish the code over time; however, good, trustworthy, production code is not in scope for this paper, in much the same way that managing the data and security is out of scope. This is also related to my principle of capturing a system once. The quality, the trustworthiness of the pieces is secondary to covering a complete idea and presenting it in whole. This runs counter to our current world where everything is broken down. A million people weigh in on individual pieces, and overall we can scale at an insane rate at our peak; however, the ability to create and understand a complete system is sacrificed for the agility gained by the decomposition of cognition and effort. For me, the inverse is true: I have a limited capacity to create. While my scope is necessarily broad to present a complete idea from bare metal to knowledge representation, the individual pieces do not have the needed trustworthiness to operate un-examined in our complex world.

3 Requirements

What the solution must provide

“I propose that the common view of resilience as ‘the ability of a system to cope with a disturbance’ is a disposition that is realized through processes since resilience cannot exist without its bearer i.e. a system and can only be discerned over a period of time when a potential disturbance is identified.” ~Desiree Daniel (📑6)
 

Disaster, quick change, the new normal, whatever you call the time immediately after a crisis, all we can do is assess the situation and proceed. The more effective our response, the more resilient we are, whether the crisis is on a personal, local, or global level. We may not be able to avoid system failure and quick change; however, we can improve resilience with the way we represent knowledge of the system.

What facilitates resilience? What are our requirements?

Since this is a design for system analysis with triples, rather than a particular application of the method, the system requirements are not fixed; however, I will list and discuss them individually in this section, and design for them. For instance, for real-time streams of map locations of pipe breaks for a water system, many of the aspects are critical.

I will also set requirements that don’t fit all applications. For instance, data retention is easy if you never purge it and the space needed is low. Knowledge atoms don’t take up much space, particularly if the streams are hybrid traditional key-value and triples, so setting requirements for data retention and recovery at a very high value is easy to design for. This is also a key part of resilience, as refactoring previous facts for the new normal may well require going back to foundational data. This is where triples shine, so I will treat this as a requirement, even though it may be something that the reader needs to adjust.

System resilience graph
Click here for interactive presentation

The functional requirements are quite interdependent. The diagram shows tight couplings.

One of the problems with modern workstreams that have a product focus, is that the idea of a knowledge foundation is lost. An atom should be grounded in some form of knowledge, rather than whatever activity we intend for the week. An atom should stand alone with meaning. That is what makes it an “atom” vs. just a string of characters. An atom of knowledge is the smallest unit of change when building knowledge. It should have a low dimension value, and be normalized within the graph.

3.1 Communication

communication icon  

Functional requirement for representing and sharing knowledge

Wade through the mind-numbing muck of a corporate “what’s wrong with our group” exercise, and you will usually end up with the top ranking advice that better communication is needed. It may seem fruitless to spend all that time arriving at that conclusion every two years like clockwork, but it is still a core problem. Let’s analyze what we mean by communication, in particular communication about systems in crisis. Mostly this involves representing and sharing the knowledge of who, what, how, where, and why with a two-way tight coupling with streams and meaning.

Communication, particularly after crisis, needs to be quick and understandable.

3.1.1 Who is working on what?

This needs to be established quickly. There should be no required integration or schema update. A unique ID is the only procedural/technical item that should be enforced. Association of the ID with a project or role, and any other metadata, should simply flow in the stream. This means that relational database solutions are out. This is an assumption; however, I’ll list it here this one time.

3.1.2 What changes have been made?

Changes to the system should be posted to streams and visualized in near real-time.

3.1.3 How does the system work?

There should be no limitation on the types of models besides being able to decompose the knowledge. The way that the system meaning is expressed should be flexible, as different people have different perspectives and experience.

3.1.4 Where are the system components?

This requires near real-time updates as the system changes, and should provide multiple views. This affects who is working on what, how the system works, and what changes are made.

3.1.5 Why are we doing this?

This is also related to knowledge. Different people have perspectives on why a system is being created or utilized. This is a core issue that enterprise architecture tackles.

3.2 Meaning

meaning icon

Functional requirement for meaning of atoms in streams, communication, and knowledge

There is no room for confusion. If I am stuck in the rain and talking with you over an unstable connection, and I want you to know I am in Portland, I might use a standard phonetic alphabet and say “Papa, Oscar, Romeo, Tango, Lima, Alfa, November, Delta”. The phonetic alphabet has agreed-on meaning, and is designed so that when transmitted over a stream (radiotelephone), there is a low possibility of confusion between similar-sounding words.

Odorama card

Meaning should be quickly established with visual cues when possible. While smell adds another useful sensory dimension, like the cards passed out at John Waters’ movie Polyester, it is not required, and would likely lead to tactical issues in implementation. (📑53)

3.3 Knowledge

knowledge icon

Functional requirements for visualizing, storing, and expanding knowledge

There needs to be a mechanism to filter for crucial knowledge and to re-assemble future, unforeseen views.

3.4 Maps

maps icon

Functional requirements for knowledge artifacts as maps/graphs

Maps function as a bridge between knowledge and groups of people collaborating. They are the primary artifact used to share system aspects.

They are:

3.4.1 Quick to learn

There should be a minimal number of symbols on the map, requiring less than a minute to learn.

3.4.2 Stakeholder visibility

Different stakeholders have different needs for visualization.

3.4.3 Easy to change

Modifications to the underlying data should automatically adjust existing maps.

3.4.4 Standard for area

Come as close to any standard for a particular area as possible.

3.5 Collaboration

collaboration icon

Functional requirements for collaborating on system knowledge

3.5.1 All may contribute.

During collaboration, there should be minimal limitations on who can contribute.

3.5.2 All may validate.

If any participant feels something is wrong with the system map being built collaboratively, it should be easy to capture. Likewise, if a particular proposition (triple) is verified by an agreed expert, then that should also be easy to capture.

3.5.3 All may be accountable.

Focusing on the core knowledge first, rather than code, platform, frameworks or metal, facilitates collaboration from the start. Further, a focus on decomposed knowledge, coupled with immediate visualization, makes meetings more productive, since there is little delay between gathering knowledge and the visualized model.

Decomposed knowledge streams (triples) only require a “lock” or merge on the triple itself. (📑72)

It is less likely that during collaboration one person will step on another. One person might change the title while somebody else is editing the description. The big win for collaboration is on multi-level data-flow diagrams (MDFDs), as different areas of expertise can collaborate concurrently to build the models.
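
A rough sketch of this merge behavior in Python, with invented names; each fact is keyed on (subject, predicate), so a title edit and a concurrent description edit never contend:

# Collaborators send whole triples; the shared store merges them
# one triple at a time, last write wins per (subject, predicate).
store = {}

def merge(subject, predicate, obj):
    store[(subject, predicate)] = obj

merge("diagram_1", "has_title", "Water system, level 0")     # person A
merge("diagram_1", "has_comment", "Verified by field crew")  # person B, concurrent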

3.6 Streams

stream icon

Functional requirements for streaming knowledge atoms

Streaming should leverage existing event stream mechanisms for insight.

3.6.1 Quick system state updates

Replay

3.6.2 Tracing

3.7 Performance

System requirements for critical performance metrics

3.7.1 Visualization

Updates from a change of a knowledge atom should trigger an update in the visualization quickly enough that it does not hinder the pace of collaboration.

3.8 Capacity

System requirements for expected capacity outside of operational capacity

3.8.1 Compute

Generating the visualizations takes significant compute. Any personal computer with an optimized graphics system from 2018 or later should be able to handle the graph visualizations.

3.8.2 Storage

The storage needs are small. It is hard to imagine needing more than a few MB for most analysis within scope. Further, storage should be minimal, so that it is easy to transport.

3.8.3 Network

The system should not require much bandwidth to run; 128 kbit/s should be enough to allow collaboration.

3.9 Extensibility

System and functional requirements for future expansion of system

Data atoms should be extensible to other formal meaning frameworks.

3.10 Compatibility

System requirements for integrations and compatibility

Compatibility for data at rest, in transit (via connectivity/protocols), and integrity checks should be easy and obvious.

3.11 Maintainability

System requirements for modifying the system to improve security or correct faults

The system needs to be able to be modified at any time without disruption. There is no room to have users function as testers. The design of the quality assurance process is determined at initiation; however, the deployment and code simplicity should be such that disruption is unlikely. Further, user-initiated fall-back should be easy and obvious, a part of every effort.

Even when there are management tasks that need to be completed, like an update of a public certificate, this should not block use of the system at certain levels. As an example, in the case of a certificate that is out of date, there should be alternate paths of validation and transport available to users.

3.12 Portability

System requirements for moving to different environments and platforms

The core knowledge and visualizations should be viewable as-is with a modern phone or computer, 2022 forward, including Mac, Windows, or *NIX.

3.13 Disaster Recovery

System requirements to mitigate regional infrastructure failure

Since this is intended to deal with disaster, the requirements for recovery from disaster are high. There should be mitigations for geographical disruptions within 500 miles, as well as for failures of global internet infrastructure such as hosting, name resolution, and connectivity.

3.14 Availability

System requirements for system outage tolerance

The system needs to be up all of the time. Call it ten nines, whatever you like. If the visualizations aren’t available on screens, then the system should have hardcopy maps available.

3.15 Recovery Point Objective (RPO)

System requirements for data loss tolerance

The system should be able to be restored to the last atom of data gathered. Essentially this means there is zero tolerance. These methods are being used at times of crisis. Having to roll back to system information from an hour previous would be extremely disruptive.

3.16 Recovery Time Objective (RTO)

System requirements for operational loss tolerance

The ability to view the current version of local data should never be disrupted. Operational loss of system is unacceptable at any level. This is facilitated via the portability requirements.

3.17 Archiving

System and functional requirements for archiving

3.17.1 Data

Archive disposition of data over time

All data is live at all times.

Note that not all stores need to be fully synced.

3.17.2 Log

Archive disposition of logs over time

There is no requirement to archive logs separately, since logs are retained live (see Retention). It is quite likely that a similar crisis will arise again, and past logs are useful in understanding flows of system change and resolution.

3.18 Retention

System requirements for retention

3.18.1 Backup

Retention of backups

The interval is directly related to RPO, since all atoms are retained. Because data is distributed, all stores need to be considered to address RPO. Intentional or accidental destruction of data should be mitigated by both location and interval.

3.18.2 Data

Retention of production data

Until the end of time

3.18.3 Log

Retention of production logs

Until the end of time

3.19 Scalability

System requirements for expanding capacity

At initiation, a guess at compute for needed visualizations in dynamic and hardcopy views needs to be established. This will dictate needed compute as well as potential scaling vertically and horizontally.

3.20 Manageability

System requirements for keeping the system operating correctly and securely

There should be few centralized management tasks. This should be designed in to be a non-issue in most cases.

4 Analysis

4.1 Introduction

Synthesis and breakdown of the requirements with perspective against different solution options and background

The promise of cloud was to bootstrap, configure, and command/control infrastructure in an elastic way, with agency. Some of the promise was kept; however, most organizations traded an on-prem fiefdom for cloud, glomming on to the division and global distribution of labor that standard APIs and distributed connectivity allowed.

Consider this diagram:

DevOps Civilization

For those of you in IT, you will recognize this as how your world works. For those of you outside of IT, whether you recognize it or not, this is how the world works. (📑71) Like everything, there are exceptions, but we traded knowledge of our situation and goals for a lead role in a cage: real-time understanding of how well the all-consuming global supply chain was tuned, including software and labor ecosystems. (📑34) Everybody and everything is wired in, and rather than knowledge, we call it data science, and real-time streams of data replace knowledge.

One implication is that waste is not necessarily a bad thing. If it takes ten people to do the work of one, if those ten people can be treated as commodities and plugged in to the head-eats-tail circle, this is a win. More people have income and economies improve, but mostly things improve for the major cloud providers while we degrade the health of the biosphere.

I understand the pressures that pushed holistic system knowledge out of the way for most organizations, and the focus on streams of small decisions and territory metrics. Tracing territory is immediate. If a user has trouble ordering a widget on their phone, then stick that in the sprint, solve the problem, and get the fixed product back out to the user. Another advantage is that small changes, immediately realized, can help navigate, much like flying an airplane. I don’t need to know how to design a flight control stick if I just move the controls to see how the plane moves.

Agile workstreams provide a hand on the territory navigation controls and review the constant feedback to determine the next action, but I see some damage from the switch of focus away from knowledge maps. Another thing that happened between the 1960s and now is that most major work in these areas assumed we would use more and more complicated computer systems to provide the cognition instead of involving humans. This is why it matters less and less to people why they are doing something, or, even, where they are going. All of that is baked into the service. The only choice is, “Where do I subscribe, and with whom?”

Outsourced continuous integration and continuous delivery (CI/CD), Infrastructure, and Visualization

When I first started out as an analyst, requirements were demanded. I could ask, “What is the availability you need for this system?”, and somebody would tell me a certain number of 9s. I would verify with stakeholders by converting to outage times. When a cloud provider only offered three 9s, which dropped even lower once network connectivity via an ISP was added, the answer, more and more, was silence. It didn’t matter. It only mattered when they were down. Cognition about requirements became less and less a technical issue. Human cognition by stakeholders morphed into project and product efforts within agile workstreams, embedded in many software and labor ecosystems, rather than actual cognition of knowledge of the system itself.

We now have an extremely complicated software ecosystem that changes constantly, just like the territory. When we lose track of where we are, we often start over, and plunk down another million dollars on an enterprise software system, as we no longer understand how to get the plane into the air in the first place. Or, alternatively, we just sign it all over to a third party and become system administrators for another company, ceding software and hardware ownership, as well as knowledge. We are pulled forward by small incremental fixes aligned with user needs, pulled forward by the surface, the squeaky wheel. We need to be wary of the Tyranny of Small Decisions. (📑35) We need meaning; we need knowledge to ensure resilience. We need both views: map and territory. (📑31) More importantly, and the entire reason for this solution design, is that as humans we need to be able to cognitively deal directly with the system knowledge in order to be resilient.

4.2 Communication

4.2.1 How does the system work?

There are many perspectives needed to cover how a system works. Triples, utilizing formal conventions, have a place, but there are other structural formats for knowledge artifacts and other forms of knowledge representation that are valuable and, more importantly, in use and standardized.

Communication via knowledge representations

This solution description is a format of knowledge representation of a system that is bootstrapping a design for an alternative form of knowledge representation. Data flow is a lowest common denominator form of IT analysis, but this will need to be supplemented. Maps of knowledge are not the best form for tracking flows and states. This is addressed a bit in Scope. We need AI/ML, cloud services, and other kinds of stream processing and multi-variate modeling to deal with flows and states; however, we also need to own the broader system first, from a knowledge and intent perspective. We also need to identify the risk of relying on third parties for flow and state, and the cost/benefit of bringing those services on premises.

Who, what, where are fairly easy to handle with triples, and combined with meaning and streams, work well.

4.2.2 Why are we doing this?

Generally, we do something to fit requirements that either the situation or orders demand. Requirements map out easily:

Requirements to design map

4.3 Meaning

4.3.1 Visual and Semantic Standards

There is a balance between standards and the ability to quickly address changing systems. The intent with the analysis is to show just how flexible 3SA can be as far as visual and semantic standards; however, this is a tricky thing to manage. It is one area where preparation prior to crisis can help. What visual and semantic standards are appropriate? Will you use emoji? Will you use BFO, which is an ISO standard?

4.3.2 Emoji

Written language goes back 6,000 years. The earliest versions represented a concept with one token, much like emoji. It started with actual tokens to track debt and trade, evolved to pictographs, went through a full-on emoji stage in some versions, and then moved on to the written word as we know it now. (📑37)

Uluburun shipwreck from 1400 BCE
(📑36)

Knowledge at this point in our civilization arc is extremely complex. Imagine a dot on a line 6,000 years ago that moves up ever so slightly to the collapse of 1177 BC, goes down again, rises again with Rome an inch off the line, collapses again, and then, with our current rise since the dark ages, goes to the moon, literally and figuratively. (📑39)

There are several reasons why I conclude that emoji are useful to complex analysis:

There are also reasons why emoji are bad, primarily compatibility, manageability, maintainability, and security:

Long-form text is usually not immediately understood. For many people, only the first couple of sentences are read in a long narrative. Emoji have quite a few quirks that make them difficult to use in knowledge systems; however, from a human perspective they make understanding systems at rest and at flow much easier. The priority is human, not machine cognition.

4.3.3 The Triple

The concept of a triple goes back to the 1800s with Charles Peirce, who called it a proposition. (📑19). The triple has also been used to make the World Wide Web more meaningful. (📑23). The level of triples used here is a stripped down, simple version, so that it is possible for anybody to apply the ideas. See Future Considerations for more advanced use.

An entity is anything we wish to represent.

A triple shows a relation between two entities. Here is an example triple:

My cat is a calico.

We have been using the word map so far, but triples form a kind of map called a graph.

This is the triple in graph form:

Cat Triple

The line between is the relation that represents “is a”.

We could extend this by adding another triple:

Calico is a domestic cat.

This is the graph form:

Cat Triple Clarification

Usually the entities are called subject and object, and the relation between the entities is called a predicate. In the above example, calico is the subject, “is a” is the predicate, and cat is the object. Attaching a comment to calico, such as “The color of a calico cat is usually tri-color, mostly white, with orange and black splotches”, is still a triple: the subject is calico, the predicate is has_comment, and the object is the text of the comment. Let’s use an invented term, “relation predicate”, to signify “is a” relations, and “aspect predicate” to signify predicates like comments or details.
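
Here is a minimal sketch of the two predicate kinds as Python tuples; the snake_case names are just my convention for this example:

# Relation predicate: connects two entities.
relation = ("calico", "is_a", "cat")

# Aspect predicate: attaches a detail to one entity.
aspect = ("calico", "has_comment",
          "The color of a calico cat is usually tri-color, "
          "mostly white, with orange and black splotches")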

Triples In; Visualization and Models Out

Here is a diagram of a simplified water distribution system that is modeled with triples:

Example Water System

One relation predicate is “fitted_to”, which is between pipes and valves of the same size or the reservoir. The other relation predicate is “transition_to”.

Imagine an entire public utility along these lines. A reservoir in the mountains might have a very large pipe, as would an aquifer. At the furthest point in the system the pipe would be much smaller as it enters a customer’s house. We could model the entire system around the pipe diameter, and this would be our primary focus, with the relation predicates fitted_to and transition_to.

The scope of this solution description is only for triples with one “relation predicate”, which is signified by ➡️. If the same relation applies in both directions, we will signify this by ↔️. The other details on predicates will be covered in design. If you want to dig in more, see (📑19).

With our convention of one relation predicate signified by ➡️ = “eats”, we can list what animals eat (tigers eat cats, cats eat rats, cats eat mice, mice eat cheese):

🐅🔹➡️🔹🐈  
🐈🔹➡️🔹🐀  
🐈🔹➡️🔹🐁  
🐁🔹➡️🔹🧀  

🔹 delimits the predicate, either a primary relation or other predicates listed in design.

Here is the graph of these triples:

What eats what?

The graph can prompt questions, like “What does the rat eat?” or “If the cheese is poisoned, what animals might be affected?”. The fact that we don’t have anything for the rat to eat on the graph doesn’t invalidate the graph. We can add what the rat eats as we learn. Likewise, the fact that a mouse eats peanut butter off of our traps doesn’t invalidate the fact that the mouse eats cheese. We can also do some inference, in that the tiger might well be eating cheese. If the cheese was poisoned, we could follow the graph to see what animals would be affected. The important thing to notice is that these small facts, a triple, can be added at any time to the model. We could add

🐅 🔹➡️🔹🐖

later on, if we discovered that tigers ate pigs.
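The poisoned cheese question can also be answered mechanically. Here is a minimal Python sketch, assuming the triple format above (the variable names are mine, not part of any design):

# A minimal sketch of inference over the "eats" triples; the triple
# format and delimiters are as defined above, the names are illustrative.
triples = ['🐅🔹➡️🔹🐈', '🐈🔹➡️🔹🐀', '🐈🔹➡️🔹🐁', '🐁🔹➡️🔹🧀']
# Build an "is eaten by" index: food -> the set of its eaters.
eaten_by = {}
for t in triples:
    eater, _, food = t.split('🔹')
    eaten_by.setdefault(food, set()).add(eater)
# Walk backwards from the cheese to find everything that might be affected.
affected, queue = set(), ['🧀']
while queue:
    for eater in eaten_by.get(queue.pop(), ()):
        if eater not in affected:
            affected.add(eater)
            queue.append(eater)
print(affected)  # {'🐁', '🐈', '🐅'}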

The use of emoji adds visual meaning, like colored stickies on steroids.

We now have a basic toolset and vocabulary of triples.

4.4 Knowledge

4.4.1 Journal

A mountain of triples becomes a web of internal and external facts that can be navigated with a journal. Not all views of graphs need to be lines connecting nodes, although that can be of use.

“Saturday, 22nd. Fresh Gales, with Squalls, attended with rain. In the Evening had 14 Sail in sight, 13 upon our lee Quarter, and a Snow upon our lee Bow. In the Night split both Topgallant Sails so much that they were obliged to be unbent to repair. In the Morning the Carpenter reported the Maintopmast to be Sprung in the Cap, which we supposed hapned in the P.M., when both the Weather Backstays broke. Our Rigging and Sails are now so bad that something or another is giving way every day. At Noon had 13 Sail in sight, which we are well assured are the India Fleet, and are all now upon our Weather Quarter. Wind North to North-East; course North 81 degrees East; distance 114 miles; latitude 41 degrees 11 minutes, longitude 27 degrees 52 minutes West.” ~James Cook (📑43)
 

A journal captures knowledge of a journey in a format that can be read by anybody, without special tools or reporting. With triples, a journal can be re-used in different ways, while the view remains a familiar format. Tags are the relations, and they work across all levels. A journal is universal enough, with a long tradition, that it should be included in a knowledge representation system.

📓🔸📝🔸1🔹➡️🔹Maintopmast  
📓🔸📝🔸1🔹➡️🔹Weather Backstays  
📓🔸📝🔸1🔹➡️🔹India Fleet  
📓🔸📝🔸1🔹🌬️🔹Fresh Gales  
📓🔸📝🔸1🔹🏷️🔹1771-06-22  
📓🔸📝🔸1🔹🧭🔹41°11'N,27°52'W  
📓🔸📝🔸1🔹📄🔹Fresh Gales, with Squalls, attended...  

Long-form writing, like journal entries, benefits from text editors and word processing. For this reason, entries should be stored at rest in a form separate from triples. The updates could still be routed as triples if needed, but locking would then be by document. For technical work, the longer entries are likely created by subject matter experts; to mitigate locking issues, comments can be routed separately for collaboration. The view, though, can update as fast as anybody pushes updates. There are no locking issues if you separate the view, as it is simply the last view triple sent. The example above is too simplistic, as journal entries likely contain characters that should be encoded with base64.

For instance, this would be a view entry:
📓🔸📝🔸1🔹📒🔹PHA+VHJpcGxlIHB1YiB1c2VzIGEgbXVuZ…
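A minimal Python sketch of producing such a view entry (the entry text here is invented, and the base64 handling is standard library):

# Encode a long-form journal entry as base64 so it can travel safely
# inside a 📒 view triple. The entry text is invented.
import base64
entry = '<p>Fresh Gales, with Squalls, attended with rain...</p>'
encoded = base64.b64encode(entry.encode('utf8')).decode('ascii')
print('📓🔸📝🔸1🔹📒🔹' + encoded)
# Decoding reverses it for display.
print(base64.b64decode(encoded).decode('utf8'))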

4.4.2 Mandelbrot Set

Consider this formula:

$z_{n+1} = z_n^2 + c$

By iterating with complex numbers c, and colorfully visualizing the points that don’t escape to infinity, we get the Mandelbrot Set:

Mandelbrot set zoom
Click for animated Mandelbrot zoom
Click for animated Mandelbrot set details

No matter what kind of AI/ML machinery is pointed at the above animated rendering, it is extremely unlikely it will arrive at the simple formula. We might be able to find profitable gullies and shorelines in the rendered fractal territory, but we will not discover the formula. We could spend a lifetime enjoying the beauty of the surface curves, but we would always be caught in the surface, unaware of the rule, the formula underneath reality.

Our solution should be more like an original formula, rather than a data and compute-intensive rendering against an algorithm that statistically matches against the rendered data.
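For contrast, the formula side really is small. A minimal escape-time sketch in Python (the resolution, bounds, and iteration cap are arbitrary choices of mine):

# Minimal escape-time rendering of z_{n+1} = z_n^2 + c.
for row in range(21):
    line = ''
    for col in range(60):
        c = complex(-2.0 + col * 0.05, -1.0 + row * 0.1)
        z, n = 0j, 0
        while abs(z) <= 2 and n < 30:
            z = z * z + c  # the entire formula underneath the rendering
            n += 1
        line += '*' if n == 30 else ' '
    print(line)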

4.4.3 Screenplay vs. Movie

In order to be resilient, the captured knowledge needs to be re-usable. Consider this movie clip from Made for Each Other - 1939 Public Domain:

Made for Each Other - 1939
Click here to watch with sound

Now, consider the screenplay:

Made for Each Other screenplay

This is a very different kind of knowledge than the movie. The screenplay is the essence of the movie. The essence is mapped out by the screenwriter. The screenplay can be re-used and adapted, based on what actors are available for casting, the preferences of a changing audience, or different directors.

A screenplay packs a lot of knowledge into a small space by working within a set of meaning and conventions. It is possible to insert scenes and modify characters easily. The fact that the name of an actor doesn’t exist in a screenplay does not mean the director can’t have one added. The meaning, conventions, and ability to easily modify the knowledge are characteristic of knowledge graphs.

A screenplay is primarily about communicating the vision of the play, and it does this well, but it is one-way. Meaning is conveyed by convention using columns, typeface, and explanatory text. Since the meaning is meant to be scrolled through as text is read, the convention is very specific and inflexible for other applications. Knowledge is the core function of a screenplay, and it functions well as a map, but it is very specific to movies and plays. It has a time component, in that the map is played through the scenes. There is little collaboration on the screenplay by those that use it. There is no stream component in its creation; however, the movie is a good example of how we use AI/ML to analyze streams. Presumably, we can gain enough information about the structure and meaning of the movie by pointing enough compute and proper algorithms at the stream.

What knowledge can be harvested from the external view of the product, the operational system, the movie clip? We can identify the actors and items in the scene. We can categorize the props to recreate the scene with other actors. We can work with theaters to determine the amount of popcorn viewers decide to skip out and buy during the scene. Much of the knowledge, though, we are reverse engineering. If we are using AI/ML to harvest the knowledge, it is quite likely that the pauses, awkward moments, and the humor of the cookie are lost on our models. We don’t have context within the movie. We don’t have a design. We don’t have a map. We don’t have the screenplay.

The difference between the screenplay and the movie illustrates how our current approach to knowledge can be misleading. We have massive compute and extremely sophisticated algorithms that we can point at streams of metrics gauged from cameras, audio, and other sources; however, this is a very heavy set of knowledge to rely on if we need to quickly change the movie. Further, any measurement and algorithm is still not the real territory; it is territory as modeled through the metrics and algorithm, and the majority of our algorithms work toward the goal of profit. We don’t gauge the value of a movie, from the audience perspective, on profit. We gauge it on other human aspects, and this is why, like Gawande’s checklist, we get better movies by pushing “the power of decision making out to the periphery and away from the center. You give people the room to adapt, based on their experience and expertise.” (📑24). The real resilience is by humans for humans.

The gap between the movie and the screenplay is much like the gap between the rendered visual and the formula for the Mandelbrot Set: humans can bridge the gap from screenplay to unique movies, but it is quite likely impossible to guess the formula from the rendered visuals.

A screenplay has the right level of meaning vs. the rendered movie. We want something more like a screenplay, as it is re-usable with different scenarios.

4.5 Maps

4.5.1 Road Map vs. Gauges

Imagine driving a car cross country, but only relying on the territory view. This is the surface, as though you are driving through a visualization of the Mandelbrot Set. Through the car windows flows scenery of billboards selling shaving products, mixed in with desert, lakes, and trees. There are various metrics gathered from the car’s machinery. If we take a territory approach, the decision on where to drive and how to operate the vehicle would be based on the stream of information flashing by the windows, and coming in through the sensors. Perhaps a satellite of love correlates the position of the car constantly and tells you where to turn. As time and technology progress, we get better equipment, better cameras, faster cars; we build models of what we see. Is that a tree or a lake? How does our software recognize it? What algorithm works best? As the car goes faster and the features going by get more complex, we are faced with needing more and more equipment to navigate, more satellites, deeper supply chains. With enough compute and machinery, we might even be able to characterize the feel of the dirt, the way it might erode in the wind, just by the massive amount of data gleaned in the trip and stored from other trips. This still isn’t true territory in the menu vs. meal sense, but it plays in the same area.

Before you start on your trip, take a map with you, and a light so you can read in the dark, even if your car battery is dead.

A map that shows the roads, gas stations, hotels, and towns is prudent to carry. A checklist for changing a flat tire, a list of things to pack, what to do if the car overheats, how to hitch a ride, five ways to overcome a psycho killer that gives you a ride… all of this can fit in a small notebook. Having real-time measurements of the trip can be helpful, but what happens when there is no connectivity? What does your organization really need to run? Do you have it mapped out? What do you do when something unexpected happens? What do you do if that apt command in your Dockerfile just pulls air? What happens with the dreaded phone call that you no longer have a job, and you are stuck in a motel room across the country from everyone you know. A map lets you do things like continue on foot or change cars. A territory perspective ties you in to your service providers. Maps help ensure resilience. Note that a car full of territory equipment might usually win the race, at least while all the upstream dependencies are in good working order, including that satellite of love, but this is a different topic than resilience and autonomy.

The massive compute, sensors, and AI/ML kit that processes data from various aspects of the scrolling desert approximates territory; however, it does this over the surface of the fractal, going for that statistical nudge in identifying the place on the map via the territory, without first agreeing on a map with the sponsors and stakeholders. A territory approach that relies on third parties cedes knowledge, because the knowledge is embedded in services and platforms that are often not useful if you change cars or decide to walk. The identified gullies and shorelines might be profitable, but where is your map that you can use to change cars or walk yourself?

Autonomous vehicles take this to an extreme. A map is less important if you can’t even drive your own car or fix it, if everything is a subscription service, and you own nothing, and decide nothing besides which corporate-created show entertains you this evening as you peel plastic off of your meal, or what piece of the product you fix in this week’s sprint. You are reading this presentation via the machinery of modern territory work flows, and it truly is wonderful. I’m pushing this via a tool written by the author of one of the most fabulous pieces of open source, running on his kernel. This kernel powers most of the cloud interests. I am not arguing that we need to abandon the territory perspective, which does require massive compute and centralized resources. I am arguing that authoring our own maps as individuals and organizations is crucial to resilience, and to be suspicious of territory-centric views. But where to start, particularly at this stage in the shift to territory?

Winning the race is less important than completing the race, as far as resilience. Resilience requires something like a road map, rather than thousands of real-time sensors.

4.5.2 New York and Erie Railroad in 1855

A map is more than just gas stations and roads. It can be used like a checklist to delegate decisions, yet keep control over the system. Daniel McCallum designed a map, compiled and drafted by George Holt Henshaw, to operate the New York and Erie Railroad in 1855 (📑27):

1855 Railroad diagram title

The ability to delegate to “any subordinate on their own responsibility” is explicit:

1855 Railroad diagram narrative

A map facilitates communication, so that a crisis on the line can be handled effectively. It stores knowledge about the system that can be retrieved remotely without the need to communicate with the central hub. At the most practical level, it means that those who have a map know who to ask if they have problems.

This is a closeup of the map:

1855 Railroad diagram close-up

The usefulness and portability of this map is a key feature. It is comparable in re-usability to the Screenplay, in that it allows for changes within the system, but doesn’t allow for a significantly different system without a complete rewrite. How could re-usability be better? Whiteboard and Stickies offers a clue: decomposed knowledge. To improve the usefulness of the Railroad map, we need to figure out how to break the map into sticky notes.

4.5.3 Railroad Crisis Example

Let’s use a fictional scenario to illustrate how decomposed knowledge, with triples derived from the 1855 map, can assist with resilience. The names are abbreviated in the diagrams; for instance, Master of Engineering and Repairs for Buffalo and Dunkirk is abbreviated as BufDunMasterEngrRep. To follow along, download a copy.

The crisis is that coal and oil are too expensive. We need to convert the engines to charcoal utilizing our own crew, so we need to understand how the crew is currently utilized. We don’t need to diagram everything, just what is relevant to converting the engines.

If we take the Initial set of triples from Appendix 1, and add this header to the beginning:

digraph {
overlap="false"

and add this footer to the end:

}

we have a standard graph format called “dot”.

overlap="false" keeps the nodes from touching. We have gone from triples to a graph by using an old graph visualization program called Graphviz.

By feeding the triples with the header and footer into the Graphviz twopi command we get this graph:

As-is graph of RR crisis with relevant detail
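A minimal Python sketch of that pipeline, assuming the Appendix 1 triples are already in dot edge form and Graphviz is installed (the file names are hypothetical):

# Wrap the triples with the dot header and footer, then render with twopi.
import subprocess
with open('rr_triples.txt') as f:
    body = f.read()
dot = 'digraph {\noverlap="false"\n' + body + '\n}\n'
# twopi reads dot on stdin and writes SVG on stdout.
svg = subprocess.run(['twopi', '-Tsvg'], input=dot.encode('utf8'),
                     capture_output=True, check=True).stdout
open('rr_crisis.svg', 'wb').write(svg)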

Because of the disruption in our coal supply, we need to convert our engines to run on charcoal as quickly as we can, but minimize disruption to the revenue from our rail freight business.

We have a total of 87 engines on our five lines to convert:

Line Engines
Est 31
Del 23
Sus 18
Wst 10
Buf 5
Total: 87

We also need to create and distribute charcoal on a daily schedule in time for the engines as they are converted, and as part of our daily operations of the railroad going forward.

We need to add a couple more lines to our header for the Modified set of triples from Appendix 1:

root=President
splines="true"

Transform graph of RR crisis with relevant detail

A single triple can be changed with immediate visibility, which contributes to collaboration:

Near real-time visualization of graph
Click to watch video

Here is the modified graph with the green line in xdot (📑29):

RR crisis visualization

Triples provide a root formula that facilitates collaboration via streams of live maps.

4.5.4 Structured System Analysis

Structured System Analysis was refined in the 1970s and 1980s as a way to analyze systems, primarily around data flow. It uses three symbols and some conventions for connecting them to make data flow diagrams (📑8) (📑30).

While different authors choose different symbols to represent the nodes, they consist of:

- External entities that send or receive data (👤 in the triples below)
- Processes that transform data (⚗️)
- Data stores that hold data at rest (💽)

Here is a small diagram that illustrates a data flow:

Simple data flow with card/icon visualization

A job Applicant submits a resume in PDF form to an online recruiting system. The system processes it by transforming it to XML data and storing it on a file system. It doesn’t matter what symbols are used; however, there are standards, see (📑8) (📑30).

Here is a set of triples for this diagram:

0🔸👤🔸JbAppl🔹🏷️🔹Job\\nApplicant   
0🔸⚗️🔸1🔹🏷️🔹Submit\\nResume\\nOnline  
0🔸👤🔸JbAppl🔹➡️🔹⚗️🔸1  
0🔸👤🔸JbAppl🔸➡️🔸⚗️🔸1🔹🏷️🔹 resume PDF  
0🔸⚗️🔸1🔸➡️🔸💽🔸1🔹🏷️🔹 resume XML  
0🔸⚗️🔸1🔹➡️🔹💽🔸1  
0🔸💽🔸1🔹🏷️🔹Resume\\nStore  

The top level diagram is noted by 0 (level 0 of the data flow diagram).
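As a sketch of how these parse (Python; the split logic follows the delimiters above, everything else is illustrative). Note that the fourth triple labels the flow itself, carrying the ➡️ inside its path:

triples = [
    '0🔸👤🔸JbAppl🔹🏷️🔹Job\\nApplicant',
    '0🔸⚗️🔸1🔹🏷️🔹Submit\\nResume\\nOnline',
    '0🔸👤🔸JbAppl🔹➡️🔹⚗️🔸1',
    '0🔸👤🔸JbAppl🔸➡️🔸⚗️🔸1🔹🏷️🔹resume PDF',
]
for t in triples:
    subject, predicate, obj = t.split('🔹')
    path = subject.split('🔸')
    if '➡️' in path:
        print('edge label:', path, '=', obj)   # a labeled flow
    elif predicate == '➡️':
        print('edge:', path, '->', obj.split('🔸'))
    else:
        print('node:', path, predicate, obj)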

4.5.5 Graph Node Expand

One advantage of structured system analysis based on data flow is that it leverages graphs vs. flat representations of knowledge. The nodes of a graph can be expanded into another entire graph.

Node expand to graph

4.5.6 Larger Flow

Here is a larger level 0 data flow to illustrate expanding a node.

Level 0 DFD w/ card/icon visualization

Notice 6, the Customer Relationship Manager.

4.5.7 Data Flow Expand

If we expand process 6, it looks like this:

Level 6 DFD w/ card/icon visualization

These are all items from the perspective of the customer relationship manager, including social media and snail mail cards. Here is process 6 of that diagram, the Social Fanisizer, expanded:

Level 6.6 DFD w/ card/icon visualization

One advantage to breaking knowledge representation down into triple form and then recomposing is that it is easy to re-assemble into any view you like. This shows the expansion from Customer Relationship Manager to Social Fanisizer using a more typical Gane and Sarson style notation for data flow:

DFD expand with Gane and Sarson style visualization

4.5.8 Cups and more cups

Systems pinned to a single relation are easy to model. Here is an example where we simply swap out data flow for the flow of cup materials:

ACME cup manufacturing and distribution example
Click for vector diagram

ACME Cups manufactures ceramic mugs that are distributed throughout the country. Zellostra Mud Mining provides mud that is filtered and injected into mugs, which are fired in a kiln and glazed with glaze from Queen Glaze Co.

Motion of mug materials is tracked with the flow. Materials at rest are either mud (M), cups (C), or glaze (G). There are multiple companies involved in the supply chain for mugs. Staff are designated by company as the first letter: Queen Glaze Co. (Q), ACME Cups (A), Zellostra Mud Mining (Z), Temp Superheroes (T), and Xtra Fast Delivery (X), with a second letter of E (entity). Materials are moved or transformed by processes designated by the company letter first, P second, and a sequence integer, as well as color coding. The IDs are unique for all.

This is just a high-level view; however, the processes that change and move the materials can be exploded into separate diagrams for more detail.

It is quite difficult, if not impossible, to glean a diagram like this just from operational and other gathered real-time metrics. It requires interviewing the business stakeholders; however, because of the nature of graphs, the diagram can be collaboratively built and delegated. ACME gets glaze from G3, and ACME can modify their part of the diagram without having to be concerned with changes Queen Glaze might make to their process.

What does this get us? If the power is out, we could look at this single graph, and see that the items that we are concerned with are all purple. True, a power outage might limit ability for staff to get to work, but let’s assume that staff are available. Any materials at rest should still be at rest with or without power, so let’s look at purple.

AP1 can use manual screening techniques if the electricity goes out. AP2 and AP3 require a generator. AP4 and AP5 only need lighting.

If there are holdups in the line, the graph can show who to turn to. If no temps show up from Temp Superheroes and the phones are out, we would need an address.

4.5.9 Human Needs

What is a better store of knowledge? Is it years of cumulative metrics of existing systems matched with an AI model? Is it harvested from your users’ emails or keylogging? My thought is that knowledge of what is important starts with something more fundamental, and that the streams of metrics have a place later on. Let’s look at a larger system without text. Our predicate is ⬅️, which means “needs”. Here is a list of triples followed by an explanation (↘️):

🧍 ⬅️ 🌡️ | 🧍 ⬅️ 🚰 | 🧍 ⬅️ 🏥
🧍 ⬅️ 🍲 | 🌡️ ⬅️ 🏠 | 🏠 ⬅️ 🏗️

↘️   Humans need a certain temperature to live, potable water, medical care, and food. Shelter is needed for humans to maintain tolerable temperatures, and this shelter needs to be constructed.

🍲 ⬅️ 🐄 | ⚡️ ⬅️ 🛢️ | 🚚 ⬅️ 🏗️
🍲 ⬅️ 🌱 | 🚚 ⬅️ ⚡️ | 🏗️ ⬅️ ⚡️

↘️   Food for humans comes from animal and plant sources. Construction and transport need electricity, which is provided by oil. Transport needs to be constructed.

⚡️ ⬅️ ☀️ | 🐄 ⬅️ 🌱 | 🏗️ ⬅️ 🧍
🌱 ⬅️ 💩 | 🌱 ⬅️ 🌊 | 💩 ⬅️ 🛢️

↘️   Electricity can also come from the sun. Animals eat plants, and processed food comes from plants. Construction needs humans. Plants need fertilizer and water. Fertilizer comes from oil.

It doesn’t matter how these triples are entered. There is a related method of establishing system information where a room full of people brainstorm and put yellow sticky notes on a whiteboard. The advantage is that it allows collaboration and visualization without a lot of rules, jargon, and procedures. Unlike the sticky note process, though, the data in triples-style analysis can be more easily re-used. The process is also similar to mind mapping. I’ve used mind mapping software to capture triples collaboratively quite successfully.

Let’s continue with some more triples.

💩 ⬅️ 🐄 | 💩 ⬅️ 🧍 | 🍲 ⬅️ 🚰
💊 ⬅️ ⚡️ | 🚰 ⬅️ 🌊 | 🚰 ⬅️ 🏗️

↘️   Fertilizer can also come from animals or humans. Drugs need electricity. Potable water is sourced from rivers, lakes, springs, and groundwater. Processed food needs potable water, which needs constructed infrastructure to operate.

🏥 ⬅️ 💊 | 🏥 ⬅️ 🧍 | 💊 ⬅️ 🛢️
💊 ⬅️ 🚚 | 🏥 ⬅️ 🛢️ | 🏠 ⬅️ 🛢️

↘️   Medical care needs drugs, people, and oil. Drugs need oil and transport. Shelter needs oil for heating as well as components.

🚰 ⬅️ 🛢️ | 💊 ⬅️ 🌱 | 🏗️ ⬅️ 🛢️
🧍 ⬅️ 🌱 | 🌱 ⬅️ 🌡️ | 🚰 ⬅️ ⚡️

↘️   Construction and potable water infrastructure needs oil. Potable water distribution needs electricity. Humans can eat plants directly, unprocessed. Drugs are made from plants. Plants need particular temperature ranges to germinate and thrive.

🍲 ⬅️ ⚡️ | 🍲 ⬅️ 🛢️ | 🍲 ⬅️ 🚚
🌱 ⬅️ ☀️ | 🚚 ⬅️ 🛢️ | 🏗️ ⬅️ 🚚

↘️   Processed food needs electricity, oil, and transport. Plants need sun. Transport needs oil, and construction needs transport.

🏥 ⬅️ 🏗️ | 🏥 ⬅️ 🚚 | 🏥 ⬅️ 🚰
🏠 ⬅️ 🚚 | 🧍 ⬅️ 🧍 | 🏗️ ⬅️ 🏗️

↘️   Medical facilities need to be constructed, and also need potable water and transport. Shelter needs transport. Humans need humans to reproduce, and construction equipment is created with construction equipment.

We might argue a bit about whether a hospital is needed, but in our current civilization, this is reasonable. Likewise, in some societies transport is not needed to build a shelter. The advantage of this form of analysis is that the individual facts are relatively easy to agree on. Do we need oil for construction? Are drugs made with oil? These individual facts can be verified by experts to the satisfaction of those who are evaluating the system. If there is conflicting information, mark it as such and/or leave it out.

The triples can be assembled as a graph for visualization as the model is built out, which facilitates collaboration. Here is what the graph looks like:

human needs
Click here for interactive diagram

If we decide that we will only consider transport that gets electricity from the sun, then we still have quite a few other problems to address. The graph helps put things in perspective, and facilitates human cognition of the system. This is important before we immediately jump to extensive infrastructure investment and real-time measurements of each mile all transmitted back to a data center. (📑1)
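To make the oil dependency concrete, here is a minimal Python sketch that parses triples in the pipe-separated form above and walks the “needs” arrows backwards from oil (the set here is abbreviated; the names are mine):

raw = ('🧍 ⬅️ 🌡️ | 🧍 ⬅️ 🚰 | 🧍 ⬅️ 🏥 | 🍲 ⬅️ 🐄 | ⚡️ ⬅️ 🛢️ | '
       '🚚 ⬅️ 🏗️ | 💩 ⬅️ 🛢️ | 🏗️ ⬅️ 🛢️ | 🏠 ⬅️ 🛢️ | 🚚 ⬅️ ⚡️')
needed_by = {}  # object -> the subjects that need it
for part in raw.split('|'):
    subject, _, obj = part.split()
    needed_by.setdefault(obj, set()).add(subject)
# Everything that directly or transitively needs oil (🛢️).
dependent, queue = set(), ['🛢️']
while queue:
    for s in needed_by.get(queue.pop(), ()):
        if s not in dependent:
            dependent.add(s)
            queue.append(s)
print(dependent)  # with the full set, nearly every node appears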

4.5.10 Maps Roundup

The artifact, the tangible work product of 3SA, is usually a graph. Maps and graphs are interchangeable when limited to human cognition. The internet grew along these lines. Originally Yahoo was a map of internet websites, in the form of a tree. I owned and authored a website on NT server administration in the mid 1990s that Yahoo linked to. When I changed the URL because of the concern of Microsoft suing owners of domains with “NT” in the name, I changed it to “Net…”. I submitted the change online, and a human vetted my change request, responding that it was an appropriate change. Eventually, though, the tree was unusable, because it got too big. Yahoo is arguably even less usable now, as it is mostly ads (and links to stuff that looks like articles but is really ads), but, still, the tree collapsed under its own weight, with too many branches and leaves. We started relying more on search to deal with this, and moved towards attention-capturing feeds tailored to our AI-perceived personal interests (and the interests of those placing pesky links to stuff that looks like articles, but is really ads).

Almost everything followed this same route. Systems became too complicated to house compute and software on-premises. Knowledge became too complicated to establish in-house. My issue, though, is that management, and those implementing the systems management asks for, were increasingly unable to answer the most basic questions of where we are, where we want to be, and how to get there. They were unable to create or even read maps. This needs to be established no matter what we are doing, no matter how complicated things are. That is… unless it is all a Ponzi scheme, and obfuscation of the map is part of it. Seriously. I’ve watched entire systems replaced, costing millions of dollars, and the end result was a system that was just as bad as the old one. Motion, taking steps, shuffling around, lost in the woods, was the priority, rather than following a map. Those that embraced the shuffling, those that showed progress somewhere but never faced the harder questions, seemed to always land on their feet, almost as if maps are bad. Yes, “road maps” are asked for, but these often fall into efforts and motion around outsourcing to cloud, rather than tackling hard questions like whether or not it makes sense to spend a million dollars on a new system, establishing the requirements and usage of the existing system, or dealing with the eight fallacies of distributed computing.

I reject the idea that our systems are too complicated for the average organization or person to tackle, at least at an appropriate level of detail. I’ve tried to show this with examples. Certainly having a map for a road trip is important. A map can help to manage an entire railroad, at least in 1855. Structured system analysis can map data flow, and the ability of graphs to collapse a subgraph into a single node makes even complicated, deep maps cognitively possible in real-time. My example of cup manufacture and distribution showed a high level flow outside of data. My example of what humans need was meant to help any human who wants to understand our dependency on fossil fuels. Maps work. They may not work at the current scale of the internet, at least with the original tree model Yahoo used; however, as far as cognition of a system, they do work. The key is to only map out at the level of detail required for cognition. It is fine to use cloud to do the heavy crunching, reporting, and associating; just be clear on where you are and where you are going first, and own the tools and knowledge necessary to collaborate on, create, and visualize maps. With the assumption of human cognition first, this means that it should be possible to print a useful map in 2D. Just because a computer allows us to go a million times more complicated does not mean that we need to cede human cognition at a level that is possible.

4.6 Collaboration

4.6.1 Whiteboard and Stickies

The Whiteboard and Stickies collaborative business analysis technique gathers users and experts of a system in a room with stacks of sticky note pads.

Whiteboard and stickies

Under the prompting of an analyst, the users lay out aspects of the system by writing small bits of information on the notes and sticking them on a whiteboard. Many who have witnessed this technique have marveled at how well it works. The main reason this works so well is that it is collaborative without the burden of dense jargon or existing description of the system.

This method works well for communication among those present at the meeting. The analyst serves as an interpreter. There are limits to the collaboration, as it all happens within a local room; virtual collaboration is difficult.

Meaning is often encoded with the color and text of the stickies, as well as text on the whiteboard. There is little control of meaning, as it is whatever people put on the notes. It is guided by the analyst, but there is no schema, which is a disadvantage as far as common, re-used meaning.

Knowledge is captured on the whiteboard itself. Somebody might take a picture or roll the whiteboard into another room. Capturing the knowledge is labor intensive and often a choke point of the analyst. There is an overall visual order. Sometimes the map is in swimlanes; sometimes it is more chaotic. The map usually needs to be expressed in a different form.

All may contribute without barriers to entry. There is instant validation of gathered information. If somebody puts up a sticky note that is inaccurate, it is easy to correct. There is a real-time update of the output of the group.

Whiteboard and Stickies is a great example of collaboration, primarily through the simple process and few barriers. It shows how knowledge can be broken down and re-assembled successfully, and the stream of changes can be instantly visualized.

4.7 Streams

Streams in IT are often key-value pairs with timestamps. These streams are relatives of triples, with key-value being two to a triple’s three.

Consider this local alarm:

<154>1 20220327T211058.426Z dc1mon.example.com process="6 6 3" cpu="100" memfMB="10"

This is in Syslog format (📑33). It illustrates a priority alarm: the CPU dedicated to process 6.6.3 is running at 100 percent, with only 10 MB of free memory. This is an operational stream that could trigger integration with the graph visualization. For instance, process 3, the Ad Targeting Engine on the 6.6 graph, could be highlighted in red when the alarm comes through:

Node Alarm

Perhaps it is useful to alarm on a map of the entire system data flow. This shows an alarm on process 11, subprocess 1, the AI Feeder:

One level system data flow
Click here for interactive single-level graph

Triples can also be streamed for live visual updates over MQTT, AMQP and Syslog. I’ve coded systems using Python for all three message stream protocols. I’ve successfully used Plotly Dash with MQTT for a full, low code, dynamic console.

Streaming techniques don’t have to be applied to just operational events. We can put a triple update directly into an event stream:

<77>1 20220327T211058.426Z allsys tech="sally" triple="0🔸6🔸6🔸⚗️🔸3🔹🏷🔹Ad\nTargeting\nEngine\n2"

This change could be visualized in near real-time by updating the graph visualization showing that we were now calling process 6.6.3 “Ad Targeting Engine 2”:

Update Ad Targeting Engine 2

This facilitates collaboration, as participants can see their input live, much like the Whiteboard and Stickies example. The model can be replayed over time, based on the timestamp. Better yet, throw the data into a time-series database. (📑11)
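As a sketch of pushing such a triple event from Python (standard library only; the localhost target and port 514, the conventional syslog port, are assumptions):

import socket
from datetime import datetime, timezone
ts = datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%S.%f')[:-3] + 'Z'
msg = ('<77>1 ' + ts + ' allsys tech="sally" '
       'triple="0🔸6🔸6🔸⚗️🔸3🔹🏷🔹Ad\\nTargeting\\nEngine\\n2"')
# Syslog events are commonly carried as UDP datagrams.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode('utf8'), ('localhost', 514))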

4.7.1 Monitors

Monitor Graph

4.8 Single Page Application (SPA)

Back-end processing and analysis are counter to agency and portability. The standardization of embedded processing in the browser, as well as standards for sharing and rendering (JavaScript, HTTP and HTML), can provide a completely portable analysis tool without the need for external services.

Most SPAs rely on external scripts and/or web servers. If all of the libraries are embedded in the SPA, though, there are no external dependencies. In addition to embedding the scripts, the source code and license must be available from within the SPA, as references to other files limit portability. This facilitates agency, as well as protecting against legal issues arising from redistribution of modified code.

4.9 Python

While single page applications provide portability and agency, it is difficult to integrate web pages with data sources and targets. Python makes this easier. A way to facilitate portability is to embed the Python script, so it can be downloaded from the page directly.

4.10 Knoppix 9.1

There are many GNU/Linux distributions. The problem with most is that they require too much configuration to bring up an analysis platform quickly. All that is needed is something to run the Python scripts, serve up the pages, and provide a browser. Up-to-date emoji support is also required. For extensibility, a triple store and analysis tools are nice to have, as well as the ability to view streams.

MacOS and Windows operating systems can both read the demo pages of 3SA; however, they are not something that one can reliably possess, no matter what the situation. They may well be easily available to use, and any solution should work with them, but there needs to be something firm to start from, something that can be downloaded by anybody and installed on many different kinds of x86 PC hardware without much pain.

Knoppix 9.1 is freely available, is mirrored around the world, and provides everything needed with just a handful of commands.

4.11 Analysis Conclusion

We need something exponentially quicker, less reliant on existing power structures and goals. We need the flexibility to align with systems that are not recognized or currently prioritized. We need to be able to determine our own goals and immediate actions during rapid system change.

We are on a human journey, not a profit journey, not a consume-the-biosphere journey, not an ignore-negative-externalities journey. We are in this together, and the goals that we imagine together, how we see the world, is a human endeavor, not a machine endeavor. And, while our machines are often a marvelous feature of our species, how we place ourselves in the world and understand our relation to the planet and other life is, and should always be, a human-centered effort first, with our machines following, serving our goals. Letting cloud services lead our goals as a species is a mistake, either through full outsourcing, or by allowing ourselves to become mere cogs in a workstream process outside of holistic system understanding. Resilience of our species requires this. Resilience of our independent businesses requires this.

Even our memories are recreated with the help of graph-like maps (📑32). Our cognition of our place in the world, then, past, present, and future, is determined by the accuracy of our internal and external maps. In order to be resilient, we must own these methods, adopt them intentionally, and use them to marshal the array of stream-based workflows, data analysis machines, and services we have available to us. Triples provide the advantage of gathering, managing, and visualizing small, delegated pieces of information, yet place them in a more holistic context via a knowledge graph.

5 Design

5.1 Design Introduction

Proposed solution based on requirements and analysis, including different points of view of those building the solution

The 3SA design is part human process, as it includes ways to collaborate on quick analysis. It also includes technical aspects like the design of data schemas, processing, and visualization architectures.

design architecture

The engine of the design is the way knowledge is represented in the single page application (SPA). The page is built around the engine. This is counter to the way most information systems are designed, where the engine transforms and visualizes the data. Most of the lift when implementing this design is creating a knowledge representation data schema. This is also where collaboration happens. Directly coupling human effort in this way facilitates cognition and active, real-time modeling without heavy technical barriers.

While most of the focus is on the data schema, the processing of the data for ingestion and visualization requires some technical design.

3SA provides ideas and instruction. It is not a product, nor a service.

5.2 Collaboration Design

5.2.1 Initiation

The priority with 3SA is human cognition, so start by talking to each other. Meet in a physical room together, or online, and agree as much as you can on these questions:

Where are we at?

Where do we want to be?

How do we get there?

With the initial pass done, agree on how you want to design the data schema, including graph, nodes, properties, relation and nesting. Who will be collaborating as the system representation is developed? How strongly does identity need to be enforced?

5.2.2 Identity

Identity is an important part of collaboration. Decide what is appropriate for your application. Here are three options:

  1. A single analyst could guide a small group and author the page on the fly. Participants could refresh the page to see the additions. Frankly, this is the most likely; however, this creates a bottleneck on the analyst. At least the visualization is immediate and the participants can see the data schema directly. In this case, log the analyst as part of the session artifact. As an example, if the analyst named “Clara Smith” facilitated a session for data flow like this demo site, at the end of the session a single file could be saved as org-wide-dfd.clara.smith.2023-11-23.html.

  2. Changes, deletions, or additions of triples could be logged by handle (mary and bruce):

20221229T121519.135Z🔸mary🔸🧮🔸🚗🔸1🔹🏷️🔹Electric
20221229T121519.155Z🔸bruce🔸⛔️🔸🚗🔸1🔹🏷️🔹Electric 
20221229T121519.156Z🔸bruce🔸🧮🔸🚗🔸1🔹🏷️🔹Electrics 
20221229T121525.155Z🔸mary🔸⛔️🔸🚗🔸1🔹🏷️🔹Electrics
20221229T121526.155Z🔸mary🔸🧮🔸🚗🔸1🔹🏷️🔹All Electrics 
  3. Set up private keys for participants, and log signed entries. This can be done with messaging or logging (AMQP, MQTT, Syslog, etc.). For a full example of setting up keys and receiving signed triples, as well as code for a full console to change the graph, see this demo site. This demo also shows replay techniques.
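A minimal sketch of signing a triple, using the same PyCryptodome primitives as the page verification script under Operations (the key file name and the ✍️ signature predicate are hypothetical):

# coding=utf-8
import base64
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15
triple = '20221229T121519.135Z🔸mary🔸🧮🔸🚗🔸1🔹🏷️🔹Electric'
# Sign with the participant's private key; receivers verify with the
# matching public key.
key = RSA.import_key(open('mary_private.pem').read())
h = SHA256.new(triple.encode('utf8'))
sig = base64.b64encode(pkcs1_15.new(key).sign(h)).decode('ascii')
print(triple + '🔹✍️🔹' + sig)  # ✍️ is a hypothetical signature predicate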

5.3 Knowledge Representation Design

This is where the design varies from all methods I’m aware of. There is no heavy framework, no software, no cloud, and few symbols to learn, and yet it is possible to collaborate on, model, and visualize systems. This method shares quite a bit with the magic of the World Wide Web. The most useful relation in the World Wide Web is arguably the anchor (“a”) element with an “href” link. Links cross domains, and it is possible to grow an information system collaboratively from the center without requiring overall control.

5.3.1 Mary and Bruce

Mary might put up a collection of recipes at a domain called marysfavs.recipes. Bruce might have another recipe domain that they post to, called bruz.hotsauce. Say that Bruce likes a recipe Mary has. They could place a link to the recipe on their bruz.hotsauce favorites page without Mary doing a thing. Meanwhile, there could be a university somewhere that is categorizing all hot sauces. Bruce might be able to use that, but they don’t have to. We can and will borrow from the magic of the World Wide Web, and bring it back to the web browser, where it lives quite well. (📑9)

5.3.2 Graph

A graph is a container for the system under analysis, much like Bruce’s bruz.hotsauce domain. Pages, labels, images, dates, and links all work within the graph. It is not required; however, if you intend to integrate with other graphs, the graph needs to be unique, much like how if Bruce was only creating recipes for themselves, they could ignore bruz.hotsauce references.

If you don’t have a need to integrate with any other graphs, just use 0 if it is multi-level (nesting). You can even get away with the graph 0 being implied. In this example, there is no container for the graph. 📔, 💭, 💤, 👄, 🏫 (Journal, Memories, Dreams, Dialog, Subject) are all at the root of the node path:

📝🔸🏫🔸2🔹📒🔹I don't trust my cat.  I caught her embezzling from me.  
📝🔸🏫🔸2🔹📅🔹2022-08-02  
📝🔸🏫🔸3🔹📅🔹2022-08-04  
📝🔸💭🔸1🔹🔖🔹cats  
📝🔸💭🔸1🔹📅🔹1991-02-20  
📝🔸💭🔸1🔹🏷️🔹Jacob's mother  
📝🔸💭🔸1🔹📒🔹Fluffy's mother, Jacob was a vicious, but fun cat.  
📝🔸📔🔸1🔹🔖🔹cats  
📝🔸📔🔸1🔹🔖🔹steve_martin  
📝🔸📔🔸1🔹📅🔹2022-10-05  
📝🔸📔🔸1🔹🏷️🔹cat theft  
📝🔸📔🔸1🔹📒🔹I think my cat tried to open too many envelopes, looking for checks.  
📝🔸💤🔸1🔹🔖🔹snl  
📝🔸💤🔸1🔹🔖🔹anna_freud  
📝🔸💤🔸1🔹📅🔹2022-07-21  
📝🔸💤🔸1🔹🏷️🔹Banana and Bowl of Fruit  
📝🔸💤🔸1🔹📒🔹I dreamed that everybody had a big bowl of fruit in their lap.  
📝🔸👄🔸1🔹📅🔹2022-07-04  
📝🔸👄🔸1🔹🏷️🔹Pete Sinclair  

Requiring a path to identify a node can cause problems; however, it can also provide an easy way to represent knowledge as a hybrid taxonomy. In the above example, for instance, you might write a journal entry that includes present events, but also write about a dream rather than creating a separate entry. If the unique node ID includes a taxonomy classification it becomes more difficult to adjust. Consider the triples from the Collapsible lists demo. They are in a taxonomy:

☀️🔸⚗️🔸🗿🔹🏷️🔹Popular efforts lead to ecosystem collapse
☀️🔸⚗️🔸🗿🔸⚡️🔹🏷️🔹Efforts require energy
☀️🔸⚗️🔸🗿🔸⚡️🔸💯🔹🏷️🔹MEER low energy to scale
☀️🔸⚗️🔸🗿🔸⏲️🔸💯🔹🏷️🔹MEER immediately addresses imbalance
☀️🔸⚗️🔸🗿🔸⏲️🔹🏷️🔹Efforts require time 

Node ☀️🔸⚗️🔸🗿🔸⏲️🔸💯 depends on node ☀️🔸⚗️🔸🗿, which represents ecosystem collapse. This makes it difficult to move things around as we gain new information. In this particular demo, the point was to show collapsible lists, which are a kind of taxonomy anyway, so this is fine. The 1855 New York and Erie Railroad diagram would be a bad choice for a taxonomy. First off, the original diagram has circular connections: one end of the line is attached to another end in a circle. Second, if you used a taxonomy to represent the railway line and added a station between two stations, you would have to rewrite all of the downstream nodes. Instead, the nodes are merely IDs without any paths:

🛤🔸1🔹↔️🔹🛤🔸6
🛤🔸2🔹↔️🔹🛤🔸7
🛤🔸3🔹↔️🔹🛤🔸8
🛤🔸8🔹↔️🔹🛤🔸11
🛤🔸11🔹↔️🔹🛤🔸12
🛤🔸4🔹↔️🔹🛤🔸9
🛤🔸5🔹↔️🔹🛤🔸10
🛤🔸7🔹↔️🔹🛤🔸13
🛤🔸13🔹↔️🔹🛤🔸14
🛤🔸9🔹↔️🔹🛤🔸16
🛤🔸16🔹↔️🔹🛤🔸17

To wrap up the idea of graphs, and how they relate to paths, consider the paths for a DFD:

0🔸6🔸💽🔸3🔹🏷🔹Bulk Cards
0🔸6🔸👤🔸PO🔹🏷🔹Postal Employee
0🔸6🔸💽🔸4🔹🏷🔹A-R\nDatabase
0🔸6🔸6🔸⚗️🔸1🔹🏷🔹Fanisizer\nCloud
0🔸6🔸6🔸⚗️🔸1🔹↔️🔹💽🔸1
0🔸6🔸6🔸⚗️🔸1🔹↔️🔹💽🔸2
0🔸6🔸6🔸⚗️🔸1🔹↔️🔹👤🔸DigMkt
0🔸6🔸6🔸⚗️🔸1🔹↔️🔹👤🔸SupC
0🔸6🔸6🔸⚗️🔸1🔹⬅️🔹💽🔸3
0🔸6🔸6🔸⚗️🔸2🔹🏷🔹IdentiTroll

With data flow, process 6🔸6🔸⚗️🔸1 is a subprocess of 6🔸⚗️🔸6. We can put all of the nodes on one level, or graph (0). Alternatively, we could place the nodes on a level/graph of the particular process, hiding the path from the nodes and incorporating the path into the view.

5.3.3 Nodes

Nodes are the objects that comprise your system. Pick a single emoji to represent a node type. For instance, 🚌 might be a bus in a model of Washington, Oregon, and California roads. List these with brief labels so nobody is confused by what the node emoji means.

There are reserved emoji listed in Reserved Emoji. I will use these four:

🔸 = delimiter for node path
🔹 = delimiter for triple 
🗨 = comment
🏷️ = label

Here is a simple set of initial nodes:

🗺️🔸🛣️🔸1🔹🏷️🔹Washington  
🗺️🔸🚗🔸1🔹🏷️🔹Electric  
🗺️🔸🚗🔸2🔹🏷️🔹Hybrid  
🗺️🔸🚗🔸3🔹🏷️🔹Gas  
🗺️🔸🚗🔸4🔹🏷️🔹CNG  
🗺️🔸🚐🔸1🔹🏷️🔹Small Bus  
🗺️🔸🚐🔸1🔹🗨🔹Less than 10,000 GVWR  
🗺️🔸🚌🔸1🔹🏷️🔹Large Bus  
🗺️🔸🚌🔸1🔹🗨🔹Greater than 10,000 GVWR  
🗺️🔸🛣️🔸2🔹🏷️🔹Oregon  

These can be visualized like this:

Road and cars
Road, cars, and buses

The path of the node is everything before the first 🔹, and it must be unique. If you are short on emoji, or would rather not deal with that, just pick an emoji for everything:

🗺️🔸🔵🔸wa_rds🔹🏷️🔹Washington Roads  
🗺️🔸🔵🔸small_bus🔹🏷️🔹Small Bus  
🗺️🔸🔵🔸small_bus🔹🗨🔹Less than 10,000 GVWR  
🗺️🔸🔵🔸large_bus🔹🏷️🔹Large Bus  
🗺️🔸🔵🔸large_bus🔹🗨🔹Greater than 10,000 GVWR  
🗺️🔸🔵🔸or_rds🔹🏷️🔹Oregon Roads  

It might make sense for your application to skip the labels at first and use the text part of the path after the emoji.

Plain road and cars

It is easy to add nodes later if you wish. The only catch is that the nodes need to work with your chosen nesting and relation.

5.3.4 Properties

Decide what properties make sense for the nodes. These can be added later, but it helps initial visualization if there is a set to start. Review Reserved Emoji to see if one of those works.

As an example, we could add ✨ to mean the level of perceived luxury and status a car has. This triple then:

🗺️🔸🚗🔸1🔹✨🔹Bling Level 11

Would translate as “On our roadmap graph, car number 1, a BEV, has a bling level of 11.”

5.3.5 Relation

Establish the relation of the model. This is your primary relation predicate. It never changes; however, there is direction. Consider these triples:

🗺️🔸🛣️🔸WA🔹🏷️🔹Washington  
🗺️🔸🛣️🔸OR🔹🏷️🔹Oregon  
🗺️🔸🛣️🔸CA🔹🏷️🔹California  
🗺️🔸🛣️🔸OR🔹↔️🔹🛣️🔸WA  
🗺️🔸🛣️🔸OR🔹↔️🔹🛣️🔸CA  

Note that we are using two-letter state codes instead of a number. This is perfectly valid for this design.

Road icon/card format

A relation is what connects the model at the current level. This should be the same at all levels. Review analysis for ideas. If you are working with information systems, consider the relation of data to/from. An org chart is “reports to”. A data flow is “receives/sends data”. A relation is a line that is drawn on a graph of the system.

As an example, say you are part of a group that gets together when the water for your city is poisoned by a chemical spill. In this case, we might consider a couple of different relations in our group. Are we going to “clean potable water” or “move potable water”? Pipes, trains, and tanker trucks might move potable water, and the relation would be flow. If the focus of the analysis is on a process that cleans potable water, then the relation might be “needs”.

A relation is signified in the triple by an arrow:

↔️ = Both directions
⬅️ = Backward
➡️ = Forward

Backward means the object provides what is consumed by the subject. Alternatively, the object is the target of the subject’s relation. For flows, this is clear, as it is easy to establish what is going where. For other relations it is more difficult. If I need water, I would use a forward arrow. Dependencies, though, can also act like flow, as in the What do Humans Need? diagram, so backward makes more sense there. Whatever you choose, be consistent, and don’t get bogged down in long talks about which direction the arrows go.

0🔸6🔸⚗️🔸6🔹⬅️🔹💽🔸2
0🔸6🔸⚗️🔸6🔹➡️🔹⚗️🔸8

5.3.6 Nesting

Consider Russian nesting dolls (📑40). Think of the outermost doll as processes for the entire organization. If the nesting is “process”, then the next doll in the nested dolls is not only a process, but is a subprocess of the first.

Nesting process dolls: All, Accounting, AR/AP, AR, Billing, and Email

Email means an email process within the Billing process, which is within AR, etc. This vertical constraint on nesting facilitates human cognition. With computer cognition, we could (and often do) have any number of nesting relations.

Each nesting doll is also a graph at its own level. This provides different perspectives, and is useful for bringing the focus of the domain to the humans involved. For instance, anybody in the accounting department should be able to answer questions at the “accounting processes” level. Within that level, or graph, there may be questions about AR/AP processes that are specific to that group. Nesting does not conserve relations; any relations have to be re-defined. What this means is that if you are at the “accounting processes” level, a link to “Email billing” is not the same as the same kind of link at the Email process level. This makes sense if you consider how a human interview would go, since somebody authoritative for how email works within the billing application has a much more sophisticated understanding. This is the beauty of graphs.

5.3.7 Reserved Emoji

Some emoji reservations are related to context. Emoji in paths are often not reserved; however, emoji in predicates are always reserved. For instance, consider:

🏷️🔸⚽️🔸1🔹🏷️🔹3A-27

This might be a graph of all price tags at a sporting goods store. It is a bit confusing, but it isn’t forbidden to use 🏷️ in the path. I’m not going to disallow this unless I become aware of a big problem. Most of the issues show up when parsing and processing the triples, so they can be worked around. My recommendation is to just avoid using emoji on the reserved list for anything but their designated purpose.

Be aware of the at rest emoji. They can be used in triples, but should match, and likely you will want to reserve them.

Two reserved emoji form the path and predicate structure, and must always be reserved in path or predicate:

🔹 = delimits the predicate, either a primary relation or other predicates
🔸 = path delimiter

These are reserved emoji that must not be used in a predicate, and should be avoided in paths:

🏷️ = label in triples
🗨 = comment in triples

5.3.8 Decentralized Stores

Once your data schema is designed, decide how to decentralize storage. Do users view a central page? Is it beneficial to break out the triples by atom, rather than a page? How are the stores backed up? Consider the items where the authoritative atom is at rest.

5.3.9 At Rest Items

These are the items that are defined for use in triples, but are stored at rest separately under the same path as the current domain. For example, the markdown for this article is stored at ./🧬/17/✍/article.md.

5.4 Processing Design

While creating and managing the triples is out of scope of the design, there are some sample scripts that show how to do this. Processing for the single page app includes navigation and ingesting and transforming the embedded triples.

5.4.1 Triple Embedding

Triples are embedded in the web page using emoji like this:

let triples=`🐟🔸1🔹🏷️🔹Party like 1999
🐟🔸1🔹📅🔹1999-12-31
🐟🔸1🔹📝🔹This is my article`

This example and code can be seen live here. It is the simplest operational demonstration of 3SA.

5.4.2 Triple Ingestion

This code will ingest the above triples:

let lines=triples.split('\n')
let preds=new Set()
// Fold each triple into a nested object: every 🔸 or 🔹 delimited
// segment becomes a key one level deeper in the tree.
let lines_to_data = (dct) => {
  lines.forEach(
    triple => {
      // The predicate is the middle element of the 🔹-delimited triple.
      preds.add(triple.split('🔹')[1])
      // Split on both delimiters and walk/create the nested keys.
      triple.split(/🔹|🔸/).reduce(
        (p, c) => {
          p[c] = p[c] || {}
          return p[c]
        },
        dct
      )
    }
  )
  return dct
}
let data=lines_to_data({})

After ingesting the triples, this is what preds contains:

[ "🏷️", "📅", "📝" ]

This is what data contains:

{
    "🐟": {
        "1": {
            "🏷️": {
                "Party like 1999": {}
            },
            "📅": {
                "1999-12-31": {}
            },
            "📝": {
                "This is my article": {}
            }
        }
    }
}

5.4.3 In-page Navigation

Navigation is handled by hashchange events. Set up the functions associated with the change:

let nav_top = (last) => {
  // Return to the previous location and scroll smoothly to the top.
  window.location = (decodeURI(last))
  window.scrollTo({
    top: 0,
    behavior: 'smooth'
  })
  return false
}
let legal = () => {
  // Display the CC0 legal terms (the ⚖️ hash).
  document.getElementById('sean_button').style.visibility = "visible"
  main_content.innerHTML = 'This content is released under the terms of CC0'
  return false
}
let initiate = () => {
  // Render every article node under 🐟, skipping predicate keys.
  main_content.innerHTML=''
  for (let k in data['🐟']){
    if (!preds.has(k)){
      main_content.innerHTML+='<p>Title: '+
       Object.keys(data['🐟'][k]['🏷️'])[0]+'<br>Date: '+
       Object.keys(data['🐟'][k]['📅'])[0]+'<br>-----------<br>'+
       Object.keys(data['🐟'][k]['📝'])[0]
    }
  }
}

Add a listener for hash change on the window and an associated function:

let refresh_page = (last) =>{
  // Route based on the emoji in the URL hash.
  let q_line=decodeURI(window.location.hash.substring(1)) || ''
  q_line=='⏫'
  ? nav_top(last)
  : q_line=='⚖️'
  ? legal()
  : initiate()
}
refresh_page('#')
window.addEventListener('hashchange', (event) => {
  refresh_page(event.oldURL)
  return false
})

5.4.4 Source Code

Embedding

Third party source
Click here for live source on Codrust SPA
let view_source = (item) => {
  document.getElementById('sean_button').style.visibility = "visible"
  // Find the base64-embedded script and the comment that names it.
  let re = /<!-- (.+) --><script src="data:text\/javascript;base64,(.+)"><\/script>/
  let comm = document.body.innerHTML.match(re)
  // Decode, beautify, and syntax-highlight the embedded source.
  main_content.innerHTML = '<p><b id="' + comm[1] + '"><i>' + comm[1] + '</i></b><p><pre class="small"><code>' +
  hljs.highlight(js_beautify(b64DecodeUni(comm[2]), {
    indent_size: 2,
    space_in_empty_paren: true
  }), {
      language: 'javascript',
      ignoreIllegals: true
    }).value + '</code></pre>'
}
Fancy looking source
Python Script Export
Click here to download the triple DFD editor Python script

5.4.5 Triples Visibility and Local Modification

Live local edit
Click here for live edit
// Load the triples into an editable three-column table (s, p, o).
let rows=[]
for (let l of lines){
  let a=l.split('🔹')
  rows.push({column1: a[0],column2: a[1],column3: a[2]})
}
const t = new SimpleDataTable(main_content)
t.load(rows)
t.setHeaders(['s', 'p', 'o']);
// On any cell edit, rebuild the triple lines and re-ingest them.
t.on(SimpleDataTable.EVENTS.UPDATE, (dt) => {
  lines=[]
  for (let c of dt){
    lines.push(c.column1+'🔹'+c.column2+'🔹'+c.column3)
  }
  data=lines_to_data({})
  return false
})
t.render()

5.5 Visualization Design

Quick and collaborative visualization of the system, coupled with the feature of proposition validation, is the primary advantage of triples. While the preferences of humans will vary, designing an interactive, visual scheme that fits the audience and situation is key to successfully collaborating on difficult problems.

5.5.1 Text Visualizations

Most original system visualizations start as text artifacts. 3SA goes deeper, and holds analysts and stakeholders to the task of building system knowledge from triples collaboratively. Consider the MEER First Principles taxonomy:

MEER Taxonomy
Click here for interactive demo

This shows a hierarchy of propositions that can each be validated on their own, expanded, and clarified. Do we need to offset 1,500TW EEI? We should be able to evaluate the larger system to determine how much energy is going into the Earth system and how much is leaving. Taxonomies also facilitate the appropriate view, as only the interesting branch is expanded.

Another variation is text visualizations that change based on user choices, like the Contextual Lists demo:

Contextual Lists demo
Click here for interactive demo

5.5.2 2D Visualizations

Since triples are components of graphs, graph visualization programs can re-assemble them easily. For 2D visualizations, I use Cytoscape JS. Graphviz does provide great 2D graphs for data flow, and I show how to use it with triples in my Railroad Crisis Example; however, it doesn’t fit the requirements as well for a SPA, primarily because the only JavaScript port I can find is a WebAssembly port. The benefit of the Neato algorithm for data-flow diagrams (DFDs) doesn’t outweigh the penalty of the somewhat clunky and difficult-to-handle port.

Visibility of connections in 2D
Click here for interactive demo

5.5.3 3D Visualizations

3D visualizations can be useful to show relations, outliers, and nestings with multiple levels.

3D DFD
Click here for interactive demo

5.5.4 Visualization Export

While 3SA focuses on a SPA as the primary container and form of knowledge representation, the ability to export visualizations is a critical part of the design of 3SA. For primarily textual visualization, Pandoc works well, and can export to many formats. This document itself is built from at-rest Markdown articles:

At rest Markdown

combined with triples. Ideally, the export should be something that can be held and used without a computer, a knowledge artifact. 3SA is available in both PDF 1.5 and PDF/A formats. Use something like veraPDF to ensure that the artifact can be read in the future. 3SA’s PDF verifies as PDF/A-1B:

VeraPDF Validator

For graphs, a preferred export format is Scalable Vector Graphics (SVG), as it is vector based. As an example, navigate to this dependency graph and click on 📤 to export the graph to an SVG. It is important to allow the user to change the view prior to export if possible, i.e., let the user drag nodes around so the graph is clearer.

3D graphs are more difficult, as they often don’t render in 2D in a useful way. Since the SPA has the ability to render the 3D graph, though, and there are no external dependencies, this mitigates the risk to persistence.

6 Operations

Operating instructions for the deployed solution

6.1 Page Verification

Because of bot mitigation on the main online Triple Pub page (https://triple.pub/3sa.html), it is necessary to download the zip file from here.

Extract the file to the local filesystem, and run this Python script to verify the page:

#!/usr/bin/env python3
# coding=utf-8
import sys
import base64
from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15

page_arg=sys.argv[1]          # path to the downloaded HTML page
pub_arg=sys.argv[2].rstrip()  # path to the publisher's public key (PEM)
with open(page_arg) as f:
  page=f.read()
# The page embeds a signed timestamp in an HTML comment; the offsets
# below slice out the timestamp and the base64 signature.
sl=page.find('<!-- Timestamp and Signature: ')
ts=page[sl+30:sl+50]          # 20-character timestamp
sigl=page.find('-->',sl)
sig=page[sl+53:sigl]          # base64 signature following the timestamp
# The signature covers the page with the signature comment removed
pagetop=page[:sl]
pagebottom=page[sigl+3:]
try:
  h = SHA256.new(bytearray(pagetop+pagebottom,'utf8'))
  key = RSA.import_key(open(pub_arg).read())
  pkcs1_15.new(key).verify(h, base64.b64decode(sig))
  print('Page with timestamp '+ts+' is verified')
except ValueError:
  print('Page with timestamp '+ts+' has an invalid signature')

You will need the public key of the publisher. In this case, it is here.

Verify the page like this:

./ver.py  3sa.html triples_pub.pem
Page with timestamp 20221229T005557.838Z is verified

6.2 Local Search

The markdown documents for 3SA are all stored at rest, which makes finding text easy. Recoll is the best local search engine I have found. Here is an example:

Recoll search

For minor corrections it is easy to just open and edit the text directly. If the scripts are running, you can view the page on localhost as you edit, and it will automatically refresh if live.js is configured.

Install via apt:

sudo apt install recoll

At initial run, choose the root to index.

Choose index root

Add the line:
.md = text/plain
to:
~/.recoll/mimemap

This will index and open Markdown files:

Open Markdown in Vim
Click here for Vim author’s charity work

6.3 Graph Traversal and Analysis

The 1855 New York and Erie Railroad diagram is a full-on graph that does not rely on paths.

6.4 Simple Example Time Kitty

Time Kitty: 3200BCE to 1000BCE

6.5 Virtuoso Inference

docker pull openlink/virtuoso-opensource-7
mkdir triple_pub_store
cd triple_pub_store
docker run --name my_virtdb -e DBA_PASSWORD=dba -p 1111:1111 \
-p 8890:8890 -v `pwd`:/database openlink/virtuoso-opensource-7:latest

(📑68)

Initial Virtuoso
Conductor login
SPARQL insert
SPARQL insert result
SPARQL query and result

(📑69)
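
The insert and query shown above can also be done programmatically against Virtuoso’s standard SPARQL endpoint on port 8890. Here is a minimal sketch with Python’s standard library, assuming the account in use has update rights on the endpoint; the graph IRI and terms are hypothetical:

import urllib.parse, urllib.request

# A minimal sketch, assuming the default Virtuoso SPARQL endpoint and
# update rights; the graph IRI and terms are hypothetical.
ENDPOINT = 'http://localhost:8890/sparql'

def run(query):
  # POST the query as a standard form-encoded 'query' parameter
  data = urllib.parse.urlencode({'query': query}).encode()
  req = urllib.request.Request(ENDPOINT, data=data,
    headers={'Accept': 'application/sparql-results+json'})
  return urllib.request.urlopen(req).read().decode()

run("""INSERT DATA { GRAPH <urn:triple:pub> {
  <urn:ex:CFO> <urn:ex:flow> <urn:ex:AccountingDB> } }""")
print(run('SELECT * WHERE { GRAPH <urn:triple:pub> { ?s ?p ?o } } LIMIT 10'))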

6.6 Stream Visualization

At the top of the Multi-level data flow there are three Python scripts that can be downloaded that illustrate stream visualization using 3SA.

🐍🔐 (stk.py) Sets up the keys.

python3 stk.py localhost  
Human readable name (no spaces): pookie  
Password:   
Password again:   
keys written  

The keys ensure identity, as control of the private key is needed to sign a message.
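
As an illustration of that principle, not the actual code of the scripts, signing and verifying a message with the same pycryptodome primitives used by the page verifier above looks roughly like this:

from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import pkcs1_15

# An illustration, not the scripts' actual code: whoever controls the
# private key can sign; anyone with the public key can verify.
key = RSA.generate(2048)
message = 'pookie🔹0🔸6▪️⚗️🔸1🔹↔️🔹👤🔸CFO'  # hypothetical stream entry
h = SHA256.new(message.encode('utf8'))
signature = pkcs1_15.new(key).sign(h)
try:
  pkcs1_15.new(key.publickey()).verify(h, signature)
  print('identity verified')
except ValueError:
  print('invalid signature')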

🐍🎳 (tde.py) Modifies the diagram in near real-time, sharing the changes to the stream.

Triple DFD editor

🐍👀 (trv.py) Watches the stream.

It uses the kitty terminal to display changes as log entries, along with updated graphs, verifying the claimed identity against known public keys:

Triple Receiver View

🐍👀 will also save streams for later replay:

Triple Receiver View replay

by saving in a database:

Triple Receiver View replay database

6.7 Multi-level Data Flow Diagrams (MDFDs)

Here, buried in the middle of a 200+ page solution description about Triple System Analysis, is the gem, as well as the genesis of my interest in triples eight years ago. While I intuitively knew that something very interesting was going on with this method, and had real success putting the ideas to work at my job, I didn’t understand what was really going on. Because of that, I had a difficult time explaining the method to others.

My method of building MDFDs is slightly different from Gane and Sarson’s. I take some shortcuts and leverage modern technologies. While MDFDs are a specific instance of 3SA Design, they are powerful enough that they can be used independently of the broader design. Structured System Analysis should be reviewed prior to continuing with this section.

6.7.1 Why are MDFDs so powerful?

Information technology, at the most basic level, takes stores of information at rest, transforms that information into forms useful to humans, and then stores that information again. MDFDs show these relations in a way that can be understood, even for very complex systems. While it is true that TOGAF offers varying perspectives, MDFDs can do this too, with minimal symbols and frameworks to learn. MDFDs center on processes, and processes provide their nesting. This facilitates perspectives without getting too formal. Consider the top level of an MDFD:

top level of an MDFD
Click here for interactive demo

This view should show the major areas of a system. In the case of MDFDs, these are the primary ways that information is transformed, where it is stored, and who sees it. The top level should be understood by anybody in the organization. The processes should naturally fall into areas of focus. Because 3SA does not maintain meaning between levels, only constraints around nesting, it is possible to cater each perspective to the associated audience. At the top level, everybody understands “Accounting Services”. When we go down a level, into accounting services, though, we see:

Accounting Services View

We can expect that everybody in accounting knows “Honest Abe Collections” and that an accounting database exists. If we drilled down further into Accounts Payable, we might see a specific application and other databases. This is why the disconnect between levels is a benefit. There is some advantage to inference with the processes, but you also gain the ability to focus on domains of knowledge specific to different groups of people, and use the wording that the users of that domain themselves use. For a more extensive example, everybody in the organization knows what customer relationship management (CRM) is, or should. The top level should be geared towards “should”. For those that aren’t aware, it is an opportunity to learn.

Customer Relationship Manager

Unless somebody is working directly with the Social Fanisizer, they don’t need to know details, but they would need to understand that Lead Addresses are used by both the email service and Social Fanisizer. If we zoom into Social Fanisizer:

Social Fanisizer

We see finer detail that would be appropriate for somebody that works within that specific domain of knowledge. This lets us place Social Fanisizer into the broader system context and relations as it intersects other domains. For instance, the operation of Social Fanisizer could show up on the top level as “CRM is down”, so that everybody understands what that means, even though analysis at a deeper level gives more details. Zooming in on the processes can facilitate fixing the problem as well.

6.7.2 MDFD Triple Conventions

Unlike more conventional graphs, MDFDs benefit from paths embedded in the triples. This is the beauty of the Gane and Sarson method of Structured System Analysis (📑8). Here are two data flows:

0🔸6🔸11▪️⚗️🔸1🔹↔️🔹👤🔸CFO
0🔸6🔸11▪️⚗️🔸1🔹↔️🔹👤🔸CSO

In Gane and Sarson, this would show that process 6.11.1 has a two-way dataflow with the entities CFO and CSO. The unique thing about 3SA is that the emoji visually show that meaning without needing to create symbols. The ▪️ shows the level or graph. The level is 0🔸6🔸11, and is also a process (0🔸6▪️⚗️🔸11) at level 0🔸6, which fits with the intent of Gane and Sarson.
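
A sketch, not the SPA's actual parser, of how such a triple decomposes into level, subject, relation, and object:

# A sketch, not the SPA's actual parser.
triple = '0🔸6🔸11▪️⚗️🔸1🔹↔️🔹👤🔸CFO'
path, relation, obj = triple.split('🔹')
level, subject = path.split('▪️')  # split the path at the level marker
print(level.split('🔸'))           # ['0', '6', '11']
print(subject.split('🔸'))         # ['⚗️', '1']  (process 1)
print(relation)                    # ↔️  two-way data flow
print(obj.split('🔸'))             # ['👤', 'CFO']  (entity CFO)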

6.7.3 MDFD Visualization Conventions

Place a navigation tool at the very top that orients the user:

Nav example

In this example, the user is visualizing level 6.1 (Snail Mail Import). There is one process, process 1 (Tray Firmware), that has a one-way dataflow from the Input Tray. The notes (🗨) are short explanations of the level, and the narrative (📒) is more extensive.

7 Future Considerations

Items that were not in scope that might be useful to consider in the future

7.1 Formal Ontologies

7.2 Combined Domains

7.3 UUIDs

7.4 Formal Semantics

(📑54)

8 Bibliography

8.1 MEER

MEER:Reflection at COP26
MEER Project
MEER Flash Presentation

Ye Tao’s MEER:Reflection project seems like a map solution vs. a “throw streams of tech at streams of changing systems with streams of agile workers” idea. His solution has all of the map-like qualities of pure, structural, simple solutions, along with understandable calculations. I also think that MEER, and its failure at getting traction, points at an underlying issue with streams, in that our culture is based on streams, as is the current hierarchy of power. Simple maps and solutions are a threat. A taxonomy of sources would show that most things in industrial civilization are made from fossil fuels, including alternative energy. Ye Tao’s flash presentation has also inspired me to compress a presentation of my own efforts into 7 minutes. I have yet to accomplish it, but it remains a goal.

My ideas will be more relevant in a world that seriously considers Ye Tao’s work.

8.2 The Trouble With Triples

The problem with triples
This bit has inspired me as a counter. I’m often thinking of this as I try and make triples simpler to understand.

The Trouble With Tribbles

8.3 Public Domain

Who’s Afraid of the Public Domain?
This article convinced me to use CC0 for my work. I still support the full Richard Stallman treatment for broader software, but I limit the extent of my software so it is in the realm of ideas, and not product.

8.4 O4IS

Ontology for Information Systems (O4IS) Design Methodology
This is one of the first pieces I found as I puzzled about how to automate IT relations, and my introduction to the ideas of ontologies as they relate to IT.

8.5 The Telling

The Telling

I visited Las Vegas for a friend’s wedding in the middle of my efforts to make sense of triples and plan my presentation. I showed up a day early, locked myself in my room, and read Laura Riding Jackson’s The Telling. I need to read it again, but it inspired me to work on my ideas until I could explain them at the depth I needed. It is also related to ontology and meaning. Laura Riding Jackson started with poetry, but became dissatisfied with the ability of poetry to express truth, and worked on Rational Meaning with Schuyler B. Jackson. In this regard, she shares quite a bit with Victoria Welby (📑22).

8.6 Resilience as a Disposition

Resilience as a Disposition
This paper formed the bridge between the issues I saw in IT and broader socio-ecological resilience.

8.7 Data Flows to Facilitate Compliance

An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification
This paper both changed and validated my focus. It was the first and only instance of using formal ontologies to model data flow, specifically, that I have found.

8.8 Structured Systems Analysis

Gane, Chris; Sarson, Trish. Structured Systems Analysis: Tools and Techniques. New York: Improved Systems Technologies, 1977

8.9 Extended Relation Ontology

Extended Relation Ontology Another way to get data flow via has_input and has_output.

8.10 GraphDB

GraphDB The Ontotext product was the first graph database I did inference on, and it was fast with the set of triples I loaded, particularly on Windows.

8.11 TimescaleDB

TimescaleDB is an alternative to a graph database, particularly with my focus on data flow by level. It is free as in beer and as in freedom, as long as you aren’t offering a cloud service. Build a virtual table as a graph, and update triples collaboratively. A query against a level (6.6, for instance) will show the most recent graph. Mix this with Plotly Dash for quick and easy collaborative models.
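
A minimal sketch of that shape, with hypothetical table and column names, using psycopg2 against a TimescaleDB/Postgres instance:

import psycopg2

# A sketch with hypothetical table/column names: store timestamped
# triples by level, then pull the latest version of each edge.
conn = psycopg2.connect('dbname=triples user=postgres')
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS triples (
  ts TIMESTAMPTZ NOT NULL DEFAULT now(),
  lvl TEXT NOT NULL, s TEXT NOT NULL, p TEXT NOT NULL, o TEXT NOT NULL)""")
cur.execute('INSERT INTO triples (lvl, s, p, o) VALUES (%s, %s, %s, %s)',
            ('0.6.6', 'CFO', '↔️', 'Accounting DB'))
# Most recent assertion of each edge at a given level
cur.execute("""SELECT DISTINCT ON (s, p, o) ts, s, p, o
  FROM triples WHERE lvl = %s ORDER BY s, p, o, ts DESC""", ('0.6.6',))
print(cur.fetchall())
conn.commit()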

8.12 Plotly Dash

Plotly Dash, mentioned in (📑11), above, lets you hang out in Python land without getting too dirty in JavaScript world. I like coding in Python better, and Plotly fixes the problem with collaborative UIs. Sophisticated, UI-intensive applications get a bit cumbersome, though, and I ended up writing these in wxPython, mentioned in (📑13).

8.13 wxPython

wxPython is a great tool for complex UIs. It looks like this:

Python Triple Dataflow Editor

It is hard to get this level of interaction in a web app, for me, at least. Note that I’m using web application components extensively, and this is one of the reasons wxPython rocks so hard: it has a full WebKit client with events you can couple to the GUI.

8.14 Virtuoso

Virtuoso is a floor wax and a dessert topping. I use it locally to run my websites on different ports. It can handle PHP. But, best of all, it has many ways to import triples to do inference on with its graph database. It can even handle WebID, which back in the day was a “Triple” way to handle distributed social web. (📑15)

8.15 WebID

WebID
Henry Story’s explanations of WebID, although I didn’t know it at the time, were my first introduction to triples. I was very interested in alternative, distributed social media in 2012. It wasn’t until much later, in 2019, that I realized that my work life and the semantic web intersected because of graphs.

8.16 Vasco Asturiano

Vasco Asturiano’s 3d-force-graph was very useful in establishing a common triple form for DFDs. I don’t get into it much in this presentation, but it is easy to make 3D, rotating models that show the entire org in one big ball. Just like it is useful to make sure that simplified triples are extensible to real meaning, like (📑17), it is also useful to make sure you know how to render the entire org in one graph. 3D models are good for that.

8.17 Barry Smith

Barry Smith’s work in ontologies changed the way I thought of meaning, and how I envisioned making simple forms of triple extensible to common knowledge. There is a standard for BFO if the vid rots over time. I am focused on a much simpler version, but I track how my work maps to these standards.

8.18 UTEOTW

UTEOTW has inspired me, served as a cautionary tale, and continues to provide insight into the world we have created. Make sure you watch the 287-minute director’s cut. You will be glad you did. Spoiler below:

Henry Farber points his AI/ML systems at the streams of recorded vision, coupled with live human memory. Henry abandons the wisdom, the behavior maps of his trusted family, and gets pulled deeper and deeper into his territory rendering. He pulls Claire and Sam with him and loses all that he loves. The approximation of the territory through AI/ML is always just that. We can get closer and closer, but the two will never meet, not in a human way. Meaning, though, intentional meaning, that is something that we can own. We can own knowledge, which morphs like law with our culture. Trying to couple the territory too tightly is both futile and a sickness. Untethered by maps, we drift into uncharted territory, void of meaning.

Here be dragons
Click here for details on the Hunt-Lenox Globe

8.19 A Knowledge Representation Practionary

A Knowledge Representation Practionary If you need vocabulary around triples, as well as a broader view, this is a great reference.

8.20 Encyclopedia of Knowledge Organization

Encyclopedia of Knowledge Organization - Hierarchy Bergman again…

8.21 Conceptual Structures

Conceptual Structures, John Sowa, 1984

8.22 Significs and Language

Welby, Victoria, Lady 1911. Significs and Language. The Articulate Form of Our Expressive and Interpretative Resources. London: Macmillan & Co.

8.23 The Semantic Web

The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities, by Tim Berners-Lee, James Hendler and Ora Lassila

8.24 The Checklist Manifesto

2009, Atul Gawande, The Checklist Manifesto: How to Get Things Right, Metropolitan Books

8.25 Charles Sanders Peirce

Charles Sanders Peirce

8.26 Front Panel

The most famous front panel is the IMSAI 8080, used in the movie War Games:

IMSAI 8080
Click here for larger version
(Source Flickr: IMSAI 8080 Computer by Don DeBold)

8.27 New York and Erie Railroad diagram

Mccallum, D. C. , Cartographer, G. H Henshaw, and Publisher New York And Erie Railroad Company.

New York and Erie Railroad diagram representing a plan of organization: exhibiting the division of administrative duties and showing the number and class of employees engaged in each department: from the returns of September, 1855. [New York: New York and Erie Railroad Company, 1855].

Retrieved from the Library of Congress

8.28 Graphviz

Graphviz works with triples directly to both visualize and analyze graphs.

8.29 xdot

xdot is a simple, small program written in Python that renders dot format files into interactive graphs.

8.30 Modern Structured Analysis

Modern Structured Analysis, Edward Yourdon
Yourdon Press, 1989

8.31 The Map Is Not the Territory

The Map Is Not the Territory

8.32 New Map of Meaning

New Map of Meaning

8.33 rfc5424

rfc5424

The Syslog Protocol

8.34 Wish You Were Here

Pink Floyd, Released on: 1975-09-12 https://youtu.be/hjpF8ukSrvk

8.35 Environmental Degradation and the Tyranny of Small Decisions

Environmental Degradation and the Tyranny of Small Decisions

8.36 Recreation of Uluburun shipwreck from 1400 BCE

Panegyrics of Granovetter

Recreation of Uluburun shipwreck from 1400 BCE

CC BY-SA 2.0

8.37 ANCIENT LOGISTICS - HISTORICAL TIMELINE AND ETYMOLOGY

Jovan Tepić, Ilija Tanackov, Gordan Stojić

ANCIENT LOGISTICS - HISTORICAL TIMELINE AND ETYMOLOGY

8.38 Public Domain Clipart

Images are all public domain if not noted separately

Sources:

https://www.epa.gov/ozone-layer-protection-milestones-clean-air-act/strat-city-usa

https://openclipart.org/

8.39 1177 BC - The Year Civilization Collapsed

1177 BC - The Year Civilization Collapsed

Eric H. Cline, 2014

8.40 Matryoshka Doll

Matryoshka Doll

8.41 An overview of the KL-ONE Knowledge Representation System

Brachman, Ronald J.; Schmolze, James G. (1985). An overview of the KL-ONE Knowledge Representation System. Cognitive Science, 9(2):171-216. doi: 10.1016/S0364-0213(85)80014-8

An overview of the KL-ONE Knowledge Representation System

What I find most fascinating about this is both the extent of this ecosystem of ideas and the complete focus on computer applications. I was in a conversation in 2021 with an old friend, and we talked about when people started losing their ability to think for themselves. We ended up pinpointing the mid 1980s. On a related note, I remember reading about the Lincoln vs. Douglas debates. The participants in these debates, both the debaters and the audience, acted much differently, cognitively, than modern-day people. This goes back to how we use frameworks of knowledge as humans. My hunch is that rich language, education, and practice in establishing one’s place in the world provided more than two teams and a few tools. We gave up on humans and started focusing on computer cognition. We now have a perfect consumer and engines of ecosystem destruction running on the profit algorithm. And… all of this with more information and knowledge available to us than ever before. Huxley was right.

8.42 Ontological theory for information architecture

Toward a document-centered ontological theory for information architecture in corporations

Mauricio B. Almeida, Eduardo R. Felipe, Renata Barcelos First published: 22 January 2020 https://doi.org/10.1002/asi.24337

8.43 CAPTAIN COOK JOURNAL

CAPTAIN COOK’S JOURNAL
DURING HIS
FIRST VOYAGE ROUND THE WORLD
MADE IN
H.M. BARK “ENDEAVOUR”
1768-71
A Literal Transcription of the Original MSS.
WITH
NOTES AND INTRODUCTION
EDITED BY
CAPTAIN W.J.L. WHARTON, R.N., F.R.S.
Hydrographer of the Admiralty.

http://www.gutenberg.org/files/8106/8106.txt

This eBook is for the use of anyone anywhere at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org

8.44 Singular They

Singular They

Singular They

I’ll tell ya… he/she has been difficult for me for years. It is quite awkward. Why should an unknown singular person be he? Should I alternate? I think of Genesis P-Orridge, now, as I use they. Genesis P-Orridge has gone by so many pronouns, and I’ve followed h/er for many years spanning different pronouns. Regardless of the present, it helps me remember the correct frame of meaning for they. They = all versions of Gen.

8.45 Bio Units

Taking care of bio units (humans), with all of their collective will and idiosyncrasies, costs money, and is difficult to control and scale. Don’t lose track of this point, as why things are as they are now has quite a bit to do with “we have to take care of them… so why not replace them with machines?”. There are two problems with this. First off, the deep supply chains associated with the machines we replace humans with usually dodge negative externalities.

The other big problem is “Why are we even doing this?” Our civilization is for humans. We need to take care of our home, planet Earth, and the other living things on the planet, both because we rely on those living things for survival, and because it is unforgivable hubris to destroy swaths of living things for whatever Ponzi scheme we are hooked on this year. If the core reason we replace humans with machines is so we don’t have to provide medical care and housing, that is missing the point. Think carefully about that, coupled with the negative externalities associated with more complicated supply chains. Who benefits, really, in the end? Trace the entire system, well-to-wheel, and map out what is important. What do humans need? What do ecosystems (which humans are part of) need? Don’t just take the “machines good, life improved” lure. Examine it. Perhaps it is not really the tasty minnow we think it is. All of that being said, we need machines or most of us will die. Most of our machines run on oil. Most of our food is fertilized with fertilizer that comes from oil (and is harvested with machines). This is a big old nasty ouroboros encircling the tree of knowledge of good and evil.

8.46 Journal of Exploration

Journal of Exploration

An Approach to Teaching Writing

Pete Sinclair, 1981

Journal of Exploration

8.47 MicroAce

MicroAce

Byte Magazine Volume 05 Number 11 - MicroAce ad

8.48 Breadboard

A breadboard is used to prototype electronic circuits.

Here is a solar power controller I breadboarded in 2001:

Breadboarded solar power controller

A breadboard is just a bunch of spring-loaded connectors with holes you poke wires and parts into to connect circuits prior to soldering up a more permanent version.

Here is the same circuit soldered up:

Completed solar power controller

Note that I was using a homebrew 8048 ICE (in-circuit emulator), so there are more wires on the breadboard version than seem consistent with the soldered up circuit.

8.49 Bootstrap

What do you do when you start from scratch? Many things are just rocks, when it comes right down to it. As Dylan Beattie put it:

“We invented computers, which means taking lightning and sticking it in a rock until it learns to think.”

To make a rock learn how to think, you need to start somewhere. The idea of bootstrapping is important for all kinds of systems. Imagine attempting to bootstrap the creation of a microprocessor. How would that start, from zero? What if you are the only person who knows how? It would probably start with knowledge of sequential logic, and progress to something Babbage-like. Wire and metal for relay switches, along with electricity generation, then transistors, silicon wafers, and clean rooms? Knowledge of how all of this works is one thing, but how to start production is entirely different. For a computer at rest, particularly one without permanent memory, like my Z-80 homebrew was at first, a bootstrap is the initial set of instructions that brings the rock, the hunk of metal, silicon, and wires, to the point where it can transfer programs and instructions in a more user-friendly way.

8.50 History of Pets vs Cattle

History of Pets vs Cattle

The History of Pets vs Cattle and How to Use the Analogy Properly

Posted on Sep 29, 2016 by Randy Bias

8.51 Interfacing the Standard Parallel Port

1998, Craig Peacock

Interfacing the Standard Parallel Port

8.52 Operating Manual For Spaceship Earth

Astute advice we did not heed.

Operating Manual For Spaceship Earth, Richard Buckminster Fuller, 1969

8.53 Inside the Odorama Process

Inside the Odorama Process

The legendary scratch-and-sniff cards that made John Waters’ POLYESTER a unique olfactory experience are featured in our new edition of the film. Take a look at how they got made!

8.54 RDF 1.1 N-Triples

RDF 1.1 N-Triples are what many people consider triples proper; however, the idea goes back to Charles Peirce. My version is simpler than Peirce’s. I place constraints on how I use triples, with limited nesting, a primary relation, and emoji, in order to facilitate human cognition.

8.55 Beattie on Architecture

Architecture: The Stuff That’s Hard to Change

In particular, go to 27 minutes, where he discusses data flow diagrams. Notice that he misses the point of complexity and graphs. The superpower of graphs is constraining the model so that a node can be exploded for detail. Don’t put too much information on the top level. There is also a useful explanation of the Agile Manifesto in context.

Dylan Beattie is wonderful to watch. One of my favorite talks of all time is The Art of Code.

8.56 Shared Intentionality

O’Madagain C, Tomasello M. 2021 Shared intentionality, reason-giving and the evolution of human culture. Phil. Trans. R. Soc. B 377: 20200320. https://doi.org/10.1098/rstb.2020.0320

Understanding and sharing intentions: The origins of cultural cognition Michael Tomasello, Malinda Carpenter, Josep Call, Tanya Behne, and Henrike Moll Max Planck Institute for Evolutionary Anthropology D-04103 Leipzig, Germany

8.57 Jalopy

jalopy

8.58 Donella Meadows

Many have heard of her work on World3 and Limits to Growth; however, some of her lectures have been coming out on YouTube recently, for instance Sustainable Systems, presented at the University of Michigan Ross School of Business in 1999. What I didn’t know, until I saw this lecture, is that she attempted to live in a sustainable way on an organic farm. I wish she had lived longer so she could write more about her personal experiences. There is some comedy, too, where she talks about convincing the banks to invest in her commune with composting toilets.

Another lecture that aligns with my stance is Systems: Overshoot and Collapse, given at Dartmouth College in the Spring of 1977, where she talks about how she sees her models as facilitating human cognition without computers. Computers bootstrap human cognition as a goal, rather than the other way around.

Leverage Points: Places to Intervene in a System is a good read.

8.59 Eye Tracking camelCase

An Eye Tracking Study on camelCase and under_score Identifier Styles

“The interaction of Experience with Style indicates that novices benefit twice as much with respect to time, with the underscore style.”

8.60 Jean Dubuffet

Jean Dubuffet, “Place à l’incivisme” (Make way for Incivism)

8.61 Agencement and Assemblage

John WP Phillips
May 2006
Theory, Culture and Society 23(2-3):108-109

DOI:10.1177/026327640602300219

Deleuze and Guattari Lecture Notes

Bumblenut Plateaus

8.62 Naive enthusiast

Benjamin P Taylor:

These are the ego traps that lie in wait as you enter into any powerful field of knowledge.

8.63 lz-string

https://github.com/pieroxy/lz-string
https://github.com/marcel-dancak/lz-string-python

8.64 Offsite Storage

Back when I worked in datacenters as an operations manager, one of the more difficult issues was offsite storage. Originally IT staff would take home backup tapes. I always thought that was a bad idea. I didn’t want that responsibility, nor did I think it was good for the organization. Eventually weekly pickups of tapes were commonplace. Now? In 2022? We trust cloud organizations with the offsite storage of documents and other backups. Further, the idea of partitioned sets of backups is foreign to many. Often offsite backups are streamed; however, rotation/archival of changed data is not considered fully. Take the simplest example. Let’s say that the procedure to bring up the local networking equipment on a diesel generator at a hospital is written in a proprietary word processor format. Likely it is stored in the cloud. At the moment in time when the document is needed, there is no connectivity to the cloud. But, let’s say that somebody has the presence of mind to sync the document locally. First off, rendering the document might require external cloud services. But there is another, more subtle problem. What if the technician documenting the procedure makes a mistake? Say that the procedure for recycling the HVAC system at the time of a power failure was written over the cloud document for the “local networking equipment on a diesel generator” procedure. It is quite possible that this would not be discovered until the time of failure. Any cloud versioning features would be useless. Information can also be harmed intentionally. Just because a system is “working” doesn’t mean that it is possible to react to a crisis in the future. While the primary focus of 3SA is immediate reaction, the concepts of offsite (partitioned) data storage are captured in the Archiving, Retention, and RPO requirements.

8.65 TL;DR

“It was 70 years ago that the poet W. H. Auden published ‘The Age of Anxiety,’ a six-part verse framing modern humankind’s condition over the course of more than 100 pages, and now it seems we are too rattled to even sit down and read something that long (or as the internet would say, tl;dr).”

From Prozac Nation Is Now the United States of Xanax ~Alex Williams

As of this writing, I’ve never read The Age of Anxiety, but I do enjoy the Cross of the Moment bit.

TLDR of Age of Anxiety graphic
W.H. Auden, The Age of Anxiety: A Baroque Eclogue (Princeton University Press, 2011) page 105

The Age of Anxiety begins in fear and doubt, but the four protagonists find some comfort in sharing their distress.

Cross of the Moment movie

WH Auden’s ‘The Age of Anxiety’

8.66 Terry A. Davis

Terry A. Davis
Terry dances to
Anasazi Kaha Time Dive
into eternity. RIP, Terry.

8.67 Agency Definition

I only recently started using agency as a word in relation to systems, but it seems more and more appropriate as time goes on for me. Agent, from 1471 as “a force capable of acting on matter”. That reminds me of the Dylan Beattie quote that computers are “Taking lightning and sticking it in a rock until it learns to think”.

Agency Definition

8.68 Docker Hub

Docker Hub Openlink virtuoso-opensource-7

8.69 An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification

C. Debruyne, J. Riggio, O. De Troyer and D. O’Sullivan, “An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification,” 2019 13th International Conference on Research Challenges in Information Science (RCIS), 2019, pp. 1-6, doi: 10.1109/RCIS.2019.8877036.

An Ontology for Representing and Annotating Data Flows to Facilitate Compliance Verification

Data Flow Ontology

https://datatracker.ietf.org/doc/html/rfc4122.html

8.70 How GitHub works

How GitHub works is a great reveal, perhaps unintentionally, because it shows how the monster of our technological bandaids grows. It all works as long as there is plenty of oil to fuel infinite complexity. How much water and other resources go into the screen, internet connectivity, and sensor box that the vid portrays?

“We work together, as people, shipping stuff, fixing stuff, not getting bogged down with requirements that just create friction” reveals the strange world of sarcasm and irony, where it is difficult to gauge anything but a feeling that the new world is better than the old, and that if you don’t embrace the constant change, you are part of the problem. For me, DevOps is about bootstrapping, configuring, and command/control of infrastructure via an API, either on-prem or in cloud. If, as humans, we are clear on requirements and goals, DevOps has a clear advantage, particularly when combined with what the above vids reference. I mean that without irony or sarcasm, but consider this:

Imagine a road trip with six people. One of them wants coffee. One spills coffee. One is sharing chips. There is a constant real-time back and forth dealing with the immediate needs. At the same time, though, there are fundamental requirements like checking air pressure in the tires, or making sure there is gas in the car. Whining about having to go through the checklist at every gas station because it interferes with coffee and chips is childish. Wrapping everything in sarcasm and irony doesn’t change the fact that running through the knowledge for the trip is much, much better than having to change that tire on the side of the road, or getting stranded without gas. Sure, we can rely on third parties like AAA and technology that magically tells us everything, but let me ask you this: how many of those reading this have seen tire pressure warning lights that are in error?

In order to have agency with our systems, we need to understand them in a broader way. We don’t need all of the details, just what is necessary for agency. (Another topic way, way beyond the scope is the tendency for people to claim agency itself is an illusion.) Whew!! Anyhoo… a couple of videos straight from the horse’s mouth. The Decoded Show is another good example. Notice that there are also references to topics that we are all aware are very important, like energy supply. Just remember… our entire world right now works in this mode of view, and it is becoming more so. The real answers are unbearable for most. It is way, way outside of the scope of this document to wade into what those real answers are, but hopefully the tools and ideas presented within this doc will help you poke your head out of the stream above Mirkwood and discover them for yourself. The reality of the journey will certainly need agile and DevOps tools: give the stakeholders in the journey coffee and chips, but also agree on the destination, and make sure the tires have enough air pressure.

8.71 Local First Crowd

There is a very interesting crowd, a cluster of research and interest, around some of the ideas I work on with 3SA. Because my focus is on immediate human cognition, the math behind some of these ideas is not in alignment, even if the overall goals are similar. I am focused on the general application of triples to a similar goal, and for similar reasons, rather than on JSON or a convergence algorithm. But, regardless, if some bits of my work attracted you here, these folks likely have a larger audience; quite likely you are their audience. So, I’ll just leave this here for you:

Local First

CRDT

Distributed Systems

8.72 Daniel Schmachtenberger

If you are looking for personal telemetry in the modern civilization problem space, War on Sensemaking V by Daniel Schmachtenberger is a decent place to start. My personal telemetry, using his vocabulary, is: “establishing, collaborating on, and visualizing propositions”. Charles Peirce invented triples, and called them propositions. This is a vital part of the particular solution Schmachtenberger describes. My premise is that it is vital for any viable solution to the problem space he outlines.

Other links:

Psychological Pitfalls of Engaging With X-Risks & Civilization Redesign

The Consilience Project

DarkHorse Podcast with Daniel Schmachtenberger & Bret Weinstein

8.73 Information Management Proposal

Tim Berners-Lee, CERN March 1989, May 1990

Information Management: A Proposal

8.74 The Last Night of the World

Bradbury, Ray (1951, February). The Last Night of the World. Esquire. Retrieved from https://web.archive.org/web/20151102062545/https://www.esquire.com/entertainment/books/a14340/ray-bradbury-last-night-of-the-world-0251/

8.75 FOAF

One of the early applications of the semantic web was FOAF. The idea seems fine, but it necessarily has a lot of things to understand and learn before it is useful. The knowledge schema becomes a curse rather than a help, particularly for somebody that just wants to see snapshots of their relatives. Sure, formal schemas work well for computers that assist with bioinformatics collaboration, but understanding a system from a human perspective at a time of crisis, or simply gauging current telemetry against a desired location, is hindered by formal standards like this. As with BFO and Barry Smith’s work, extensibility to more formal schemas is prudent if re-use of knowledge work is expected, or there is a need to work in a system represented by multiple relations and nesting.

9 Appendices

9.1 Railroad Triples

9.1.1 Initial

"DunBoilerMakers\n6"->BufDunMasterEngrRep
BufDunMasterEngrRep->GenSup
GenSup->President
"DunMachinists\n58"->BufDunMasterEngrRep
"DunBlacksmiths\n19"->BufDunMasterEngrRep
"DunLaborers\n12"->BufDunMasterEngrRep
"DunCoppersmiths\n4"->BufDunMasterEngrRep
"BufBoilerMakers\n2"->BufDunMasterEngrRep
"BufMachinists\n13"->BufDunMasterEngrRep
"BufBlacksmiths\n19"->BufDunMasterEngrRep
"SusBoilerMakers\n6"->SusMasterEngrRep
"SusMachinists\n94"->SusMasterEngrRep
"SusBlacksmiths\n31"->SusMasterEngrRep
"SusLaborers\n13"->SusMasterEngrRep
"SusCoppersmiths\n9"->SusMasterEngrRep
"PieBoilerMakers\n6"->PieMasterEngrRep
"PieMachinists\n83"->PieMasterEngrRep
"PieBlacksmiths\n35"->PieMasterEngrRep
"PieLaborers\n24"->PieMasterEngrRep
"PieCoppersmiths\n11"->PieMasterEngrRep
"EstTLaborers\n189"->EstSup
EstSup->EstDivFor
EstDivFor->EstDivSup
EstDivSup->GenSup
"DelTLaborers\n238"->DelSup
DelSup->DelDivFor
DelDivFor->DelDivSup
DelDivSup->GenSup
"SusTLaborers\n322"->SusSup
SusSup->SusDivFor
SusDivFor->SusDivSup
SusDivSup->GenSup
"WstTLaborers\n227"->WstSup
WstSup->WstDivFor
WstDivFor->WstDivSup
WstDivSup->GenSup
"BufTLaborers\n230"->BufSup
BufSup->BufDivSup
BufDivSup->GenSup
PieMasterEngrRep->GenSup
SusMasterEngrRep->GenSup
ErieTimberSales->"BuffaloLot"
ErieTimberSales->President
ErieTimberSales->"DunkirkLot"
SusShop->SusMasterEngrRep
DunShop->BufDunMasterEngrRep

9.1.2 Modified

"DunBoilerMakers\n6"->BufDunMasterEngrRep
BufDunMasterEngrRep->GenSup
GenSup->President
"DunMachinists\n48"->BufDunMasterEngrRep
"DunMachinistsF\n10"->DunShop [color=red]
"DunMachinistsF\n10" [color=red]
"DunMachinists\n48" [color=red]
"DunBlacksmiths\n19"->BufDunMasterEngrRep
"DunLaborers\n12"->BufDunMasterEngrRep
"DunCoppersmiths\n4"->BufDunMasterEngrRep
"BufBoilerMakers\n2"->BufDunMasterEngrRep
"BufMachinists\n13"->BufDunMasterEngrRep
"BufBlacksmiths\n19"->BufDunMasterEngrRep
"SusBoilerMakers\n3"->SusMasterEngrRep
"SusBoilerMakers\n3" [color=red]
"SusBoilerMakersF\n3"->SusShop [color=red]
"SusBoilerMakersF\n3" [color=red]
"SusMachinists\n43"->SusMasterEngrRep
"SusMachinists\n43" [color=red]
"SusMachinistsF\n23"->SusShop [color=red]
"SusMachinistsF\n23" [color=red]
"SusBlacksmiths\n71"->SusMasterEngrRep
"SusLaborers\n13"->SusMasterEngrRep
"SusCoppersmiths\n9"->SusMasterEngrRep
"PieBoilerMakers\n6"->PieMasterEngrRep
"PieMachinists\n83"->PieMasterEngrRep
"PieBlacksmiths\n35"->PieMasterEngrRep
"PieLaborers\n24"->PieMasterEngrRep
"PieCoppersmithsF\n3"->DunShop [color=red]
"PieCoppersmiths\n8"->PieMasterEngrRep
"PieCoppersmithsF\n3" [color=red]
"PieCoppersmiths\n8" [color=red]
"EstTLaborers\n189"->EstSup
EstSup->EstDivFor
EstDivFor->EstDivSup
EstDivSup->GenSup
"DelTLaborers\n203"->DelSup
DelSup->DelDivFor
DelDivFor->DelDivSup
DelDivSup->GenSup
"DelTLaborers\n203" [color=red]
"DelTLaborersF\n35"->BuffaloKiln [color=red]
"DelTLaborersF\n35" [color=red]
"DelTLaborersF\n35"->"BuffaloLot\n5000" [color=red]
"SusTLaborers\n295"->SusSup
SusSup->SusDivFor
SusDivFor->SusDivSup
SusDivSup->GenSup
"SusTLaborers\n295" [color=red]
"SusTLaborersF\n27"->DunkirkKiln [color=red]
"SusTLaborersF\n27" [color=red]
"SusTLaborersF\n27"->"DunkirkLot\n4000" [color=red]
"WstTLaborers\n227"->WstSup
WstSup->WstDivFor
WstDivFor->WstDivSup
WstDivSup->GenSup
"BufTLaborers\n230"->BufSup
BufSup->BufDivSup
BufDivSup->GenSup
PieMasterEngrRep->GenSup
SusMasterEngrRep->GenSup
GenSup->BuffaloKiln [color=red]
BuffaloKiln [color=red]
GenSup->DunkirkKiln [color=red]
DunkirkKiln [color=red]
ErieTimberSales->"BuffaloLot\n5000"
ErieTimberSales->President [dir="both" color=red]
ErieTimberSales->"DunkirkLot\n4000"
"BuffaloLot\n5000"->BuffaloKiln
"DunkirkLot\n4000"->DunkirkKiln
SusShop->SusMasterEngrRep
SusShop->GenSup [color=red]
DunShop->GenSup [color=red]
DunShop->BufDunMasterEngrRep
SusShop->PieMasterEngrRep [color=red]
SusShop->BufDunMasterEngrRep [color=red]
DunShop->SusMasterEngrRep [color=red]
DunShop->PieMasterEngrRep [color=red]

9.2 Wrenching

I likely have more than enough background in my preface, but my wrenching over the years is related to my perspective on complicated systems, failure, and resilience.

Here, I am removing the engine from a 1963 Rambler American in 2005:

196 engine removal

In the background there is a chicken tractor that had a Busybox/Linux system I compiled mounted in the top.

Robocoop

You can see I’ve fashioned a dust filter and duct-taped it to the cooling intake. It had a camera that automatically posted regular pictures of the chickens on the world wide web.

Robocoop cam

Here I am in 1987, fixing the brakes on a 1965 Rambler Station Wagon:

Fixing brakes
Click here for larger picture

I was young and foolish not to use jack stands; however, I could barely afford the pads, so I’ll give myself a little slack, but my-o-my, seeing the car balanced on that bottle jack makes me shake my head and offer advice I likely wouldn’t have heeded anyway. My “toolbox” was that big metal bowl in the foreground.

The technical service manual I had for my 1963 Rambler American was incorrect. I created a correct diagram for the intake and exhaust valves using Xfig, the same program I created my first data flow diagram and my homebrew schematic with:

AMC 196 valves

9.3 Homebrew Computer

In 1980, I purchased a MicroAce (📑47). It was a Timex/Sinclair computer in kit form. I could program in BASIC on it, but I was not satisfied. I wanted to know more, dig deeper. I wanted to wire it, know it from roots to leaves, and intentionally author the code that brought it to life.

Homebrew computer
Click here for larger version

I completed the first breadboard (📑48) version of a Z-80 homebrew computer that same year, 1980. I mounted a breadboard in the top of a file box, with a keypad and hexadecimal display. It failed miserably. I didn’t understand the concept of a high-impedance state for the bus, and I thought my construction technique was creating too much noise. I worked on and off for many years, breadboarding different versions. It took a while to finish, with the ups and downs in my life. I would go for years at a time without working on it, but I finally completed a working, soldered system in 1992.

The display in the upper right I soldered first, in 1989. You can see I’m using old 50 pair telco wire, which isn’t the best, because it can melt and cause shorts with other wires when soldering, but I happened to have some at the time. The lower right board that is connected to the bottom of the case holds 2N2222 drivers for lamps, which you can see in this video:

Der blinkenlights
Click here to watch with sound.

The video shows me toggling reset. Right after reset, the lamps in the center, to the right, show the streaming data the homebrew is receiving over a PC parallel port (📑51). This is a small bootstrap (📑49) program that looks for a byte on the parallel port with the correct signal line state, loads it into memory, waits for another state change, loads that byte into memory, repeats until all bytes are loaded, and, finally, jumps back to run the program, which cycles the incandescent lamps and the 7-segment displays.

A bootstrap is usually entered through rows of switches and lights called a front panel (📑26). I couldn’t afford a proper front panel, so I used dipswitches and a paperclip interface with NAND gates in a set/reset configuration to debounce and enter data. Here is what I used to program the bootstrap directly into memory:

Paperclip front panel
Click here for larger version.

The NAND gates are in the electrical tape wrapped handle of the perfboard. You can see the binary-decimal conversion for the bit locations written in between the paperclips sticking out.

When I first breadboarded this, it started with this hand-drawn diagram:

Original hand-drawn homebrew schematic
Click here for larger version.

As I moved the homebrew around, following jobs and apartments, the solder connections would break. I needed a way to document it, and a schematic which doubled as a physical diagram of the pinouts was the most effective reference for troubleshooting. My hand drawn version worked OK, but I realized that legible hand-drawn lines would be difficult to manage, and finishing the hand-drawn version would likely end in failure. I tried a variety of diagram programs, but the only one that worked for what I needed, that didn’t cost too much, was Xfig.

9.3.1 Xfig

xfig logo

Xfig only ran on *NIX systems, and was my early motivation to learn GNU/Linux. Here is what I ended up with, which I didn’t finish until 2003:

Homebrew schematic
Click here for vector version.
Click here for schematic in fig format.

Wiring functions like a graph, where an edge is a wire connecting two connection points (nodes). Or, alternatively, a schematic is a wiring map. The intention is to create a solder joint that won’t break, but the reality is that they will, and do, and having a map of pinouts and wires goes miles towards keeping the homebrew running. My map distinguished control lines from address and data bus lines with color coding, and curved the lines so they were distinguishable from each other.

9.4 IT Career

I started my IT career as a salesperson of IBM PC and compatible computers. Prior to PCs, information processing systems required many people to operate and maintain, as well as large corporations that often owned the hardware, leasing the systems to users. With PCs, organizations could scale and operate with agency. A typical sale involved system analysis. A customer would have a problem they needed to solve, and the sale would revolve around that problem. I would often go onsite after the sale to help bring the system operational within their organization. I remember needing to use a hex editor on a word processing binary file to match superscript/subscript commands for a customer’s printer, as they were printing academic papers. I also helped with a successful political campaign by selling a politician a PC and helping her load and configure her database for mailings. She won her state representative campaign, and later went on to become a member of the US House of Representatives. These experiences stayed with me, ingrained both as a benefit of information technology in operation, and as a time of revolution, when stakeholders had technological agency.

My career moved on to enterprise IT with the widespread adoption of networking. I got a job at an IT consulting firm as a technician and helpdesk for a main and satellite office. This single company morphed through an initial joining of four founding companies across six cities, into a nation-wide consulting company in 25 cities. I developed my own early form of DevOps to assist with this. I would ask managers at the acquired companies to fill out a spreadsheet of applications, users, shares, and permissions, and I would feed that into a Perl script that would configure an NT server.

The complexity of the resulting system required automation and third party software to manage. I learned an early enterprise IT system management platform, Unicenter TNG, but added my own automation, as the GUI was cumbersome.

Diagram I created of the system I built in 1999

Click here for vector version.

As we acquired companies, IT operations and engineering were directed to push expenditures into the cost of acquisitions. Combined with “goodwill”, I saw how acquisitions could create a seemingly healthy company, yet there was no real operational agency or strategy that unified the acquired companies for healthy profit. I confronted our accountant about this in a diplomatic way, and she said, “You aren’t supposed to know about that.” Upper management felt that they could gain control of the monolith they had built with enough business intelligence (BI), so they put in Cognos. On their own, the leaders who had built the successful individual consulting companies understood how they ran, and what they were all doing together. The combined roll-up, particularly with the accounting strategy of acquisitions, caused a crippling cognitive dissonance for shared intention and overall health. This was my first glimpse of black box BI used to mitigate a lack of human cognition. Human cognition should come first. What are our strengths? What do we have to offer the world that is unique? How do we gauge success? Once these broader questions are answered, BI can be used to ensure the organization is tracking to intent.

I got a job in 2001 working for a startup that provided inventory management and combined purchasing for health food stores. Most of the stores were connected with dial-up lines. With a crew of three, I deployed and managed 150 stores with POS/PC systems, centrally updated inventory databases, and frequently updated software at the stores, including Palm OS devices that handled the scanning. This, combined with the Unicenter TNG work I had done previously, gave me a decent insight into the coming DevOps perspective that infrastructure is code.

Diagram I created of the system I built in 2002

When the company got tight on money, it used the float from the store purchases for operations. This has a similar kind of problem as the tweak of rolling operational costs into acquisitions. Everything is great as long as you are expanding, but a lull can be devastating.

I got a job in 2003 at a medical ASP (ASP = Application Service Provider; it is what they called cloud before it became SaaS). In addition to building out and monitoring the front-end servers, I worked on their remit downloads, verification, and processing. While there, I created my first data flow diagram, documenting my work. I used the same diagramming tools I used for my Z-80 homebrew schematic.

Diagram of remit system I created in 2005
Click here for vector version.

Like the previous two jobs, this company failed as well; however, it failed for technical reasons. There was a misconfigured cross-connect between switches that caused performance problems. I count this as one of the bigger errors of my career. While I was not part of the networking team (of two), the pool of servers I designed, built, and operated ran across the multiple switches. Eventually I was able to figure out the problem after the main network engineer left; however, it took way too long, and we had lost most of our business by then. I should have been more active, earlier on, in troubleshooting a solution. My lack of engagement across silos was arguably the reason for the failure of the company. What is very interesting about this is that all of the silo groups (networking, apps, compute, storage, database) had lots of real-time reporting on the systems. I had several reports myself. I even put in SmokePing and created other network testing tools that measured TCP latency to my servers (vs. just ICMP). None of our reporting got to the root problem. I don’t think that we once all got together in a room and discussed all of the pieces together to try and brainstorm a solution. We just created lots and lots of reporting that tended to show why the problem wasn’t within our silo.

In 2006 I got a job as a system architect at a global law firm. Within the first two weeks of my job at the firm, I jumped right into a critical project. They were trying to solve the problem of running discovery apps over latent links. They also had a horrible network that aggravated this, but they weren’t aware of just how bad it was, and wanted to buy an app to solve a particular symptom. The CIO set up a meeting, and I met along with my assigned project manager to establish the timeline for rollout in my first week on the job. The CIO put me on the spot, and I figured no big deal: I would figure out how it worked and what people needed the first week, get the vendor recommendations, and put it in by week two. My project manager, who had previously worked at NASA on the space shuttle, was not happy that I had answered in this way. I told her I would back it up, and responded with an email that had bullet points for how I saw the project being implemented. She came back, waving the email at me, and said that what I gave her was entirely unacceptable. I was confused and thought she was abusing me (she wasn’t).

I went to my boss, a wonderful boss who was generous and thought broadly. She gave me an example of what was needed. This is how I was introduced to the concept of a solution design document. It is a form of knowledge that describes where we are now and where we want to be in standard terms, so that everybody can agree. Not every aspect needed to be filled out. It varied by solution. In the years that followed, I realized that if the information applied to a system at all, the aspects would come up at some point during the procurement, deployment, or operation of the system. From that time forward, I insisted on creating an appropriately scaled solution description for every medium+ project I worked on. My work experience so far had shown the value of this level of detail.

I got bored and moved a point of sale system to cloud for a brick and mortar retailer, and moved on again to a pioneer of internet search that was re-inventing itself after losing the search wars to a current cloud giant. I was in charge of all monitoring. There was a brilliant person in charge of IT who had replaced the typical relational database reporting with decomposed data that was then fed into reporting and analysis engines, kind of like the modern Elastic. I realized that key-value pairs in event streams could be analyzed much more effectively than canned relational reports. This is the idea behind Splunk, and I evangelized Splunk. I struggled with the simplest tasks of reporting on all monitors across thousands of servers. Nodes in a monitoring system do not fit well into a relational database. Most machines are different, even if they are the same model. I found that NoSQL approaches worked better for reporting on monitor classes.

At this point, 2011, I had in my kit: streams via key-value pairs and analysis via Splunk, knowledge management, formal solution description/design documents, graphs for resilience (homebrew), and graphs for reporting (monitors).

I was hot on the key-value pair analysis track. I moved on to another startup, where I could do anything I wanted in IT as long as it was fast enough and fit the requirements of the money that backed us (large banks). I struggled with my main developer to build out an analysis platform for an upcoming launch. I finally just did it all myself in two weeks using GNU/Linux, BASH, and Perl to capture and normalize the data, and ran it all into Splunk as key-value pairs, happily proving my ideas from my previous job. I used my skills in system documentation to demonstrate to the banks that our systems were secure and protected. This company failed, again because of funding.

I moved on to another law firm, which had a similar cycle of projects that my solution design skills worked well for; however, cracks were starting to show. There was no longer an architecture team, and the meaning of engineering and design had degraded to quick vendor meetings and a few notes. I remember one design consideration that I focused on that was particularly difficult for people to grok. Backup retention was complicated at a law firm because of discovery. If email and deleted files were only retained for 30 days, then discovery was easier to comply with. The cognitive ability for somebody to include backup retention from a discovery perspective, backup retention from a critical files perspective, and mix that in with offsite replication, was stretched to the point that additional questions about backup retention of critical files were quickly brushed off as already dealt with. The scenario I was focused on was if data is corrupted or purposefully deleted and not discovered until after 30 days. Certainly there are important files at a law firm where this needed to be addressed, but the collapse of architecture->engineering->operations was coming down on my head as I struggled. I met over ten times over the course of a year to get a proper backup retention policy in place. I finally got the operations team to put in a fix; however, they couldn’t figure out how to make it permanent for more than a year, and I had to set a yearly notice on my calendar to remind them to re-apply the fix for the one-off interval. This also means that the only files in the entire global law firm, at that time, that were backed up outside of the 30 day retention policy, were my files that I had specifically adjusted for. I had no indication, after all of this fight, that it had sunk in that we needed a broader policy to cover other files, and I had used up more than my allotted attention fighting for this one backup design requirement.

In addition to the increasing cognitive challenges for operations folks trying to shepherd design considerations, the level of documentation, even in its simplest form, was too much for most people to digest. I think the worst part was the long narrative form. Work and its associated design knowledge were being broken down just as I was building my analysis and collaboration skills up. More and more I found that even engineering managers could only digest a couple of sentences. There was a perception by management that long-form analysis documents were part of the old world; the new world was agile. When network, security, storage, and OS dependencies are stripped away, i.e. all that remains are containers and cloud services, the scope gets narrow enough that developers can just write the app, show it to users, and in a tightly coupled loop deliver and improve products without much engineering or architecture. I imagine most who are in IT and tracking my story here would recognize that agile doesn’t necessarily have anything to do with design and architecture, but we are back to human cognition. The perceived freedom of agile is constant progress, but in practice that progress comes from sacrificing system cognition by the humans participating in the agile workstreams. There are plenty of cattle. Pets are too expensive. Just rely on the cloud company to supply the feedlots and slaughterhouses. (📑50)

One project, though, changed my life again, just as significantly as the NASA project manager did at the previous law firm. I was put on a project to convert the public finance group from paper workflow to electronic.

I needed something that captured the system in an abstract way that could be reviewed with the group. The default Visio model that looked best was Gane and Sarson. It had three symbols. It made more sense than the Unified Modeling Language (UML). More importantly, it solved the biggest problem I had so far: an easy and understandable way to provide levels of different detail. Gane and Sarson is a data flow diagram (DFD) model. Information technology, at root, deals with data flow. There are many other perspectives that formal enterprise architecture frameworks capture, but data flow is the lowest common denominator. I have since used it to analyze several systems at full detail, and it is quite flexible, particularly with some of the constraints and conventions I have added.

In 2018 I moved on to a company that offered wellness programs and coaching to employers. We had an outsourced engineering and design team located overseas, with product management handled locally; I was meant to bridge that gap. Much of the business workflow ran through a cloud service that cost a lot of money, and there was a desire to untangle the systems from this service. The workflow had been coded over time by many people, it touched every aspect of the business, and it was not documented: a perfect candidate for a DFD. I created a system-wide DFD. Upper management and stakeholders found the method helpful, but it was difficult to match the velocity of the product and engineering teams. I did some research on how to increase velocity, found that triples could help, and pitched it to the company, but they said my ideas were too advanced for them, and in 2019 I was laid off. I have worked on the ideas on my own since then.

9.5 Journal Software


“Those works created from solitude and from pure and authentic creative impulses – where the worries of competition, acclaim and social promotion do not interfere – are, because of these very facts, more precious than the productions of professionals. After a certain familiarity with these flourishings of an exalted feverishness, lived so fully and so intensely by their authors, we cannot avoid the feeling that in relation to these works, cultural art in its entirety appears to be the game of a futile society, a fallacious parade.” ~Jean Dubuffet (📑60)

I have written down my dreams, memories, and general daily journal entries since 1985, inspired by this class. I’ve improved and maintained different versions of a journal application since 1994. Here is a screenshot of a version I created in 1994 using Visual Basic:

MCJ 1994

I wrote up a high-level design for my journal software in 2011 here. Many of the design considerations for 3SA match. It is also the first time I referred to knowledge management.

I use my journal to manage the build steps that generate the OS the journal system runs on, and I am using that system right now to compose what you are reading. I’ve tried many approaches over the years, including existing journal applications, cloud and local, and many operating systems. I always arrive back at something I control down to the individual operating system components.

I couple the operating system with the journal because I am painfully aware of persistence issues. Operating systems and applications change constantly, and controlling the storage and view of thousands of entries requires ownership and control. This plays into how I see resilient knowledge tools. While it is true that I originally looked at LFS as a way to understand GNU/Linux components, over the years I have needed particular combinations of libraries and applications that were not available with the precision I needed in the current Linux distribution. I’ve tried them all, from the original bash-script compilation distros that preceded Gentoo, to Gentoo itself, as well as package-based systems (yast, apt, yum, and tarball Slackware). I’m actually happy now with where my journal software is at. wxPython and Ubuntu 20.04 appear capable of doing all that I envision, so my guess is that I will eventually move to that; but I am conscious of how interconnected and cumbersome the ecosystem is, and even now I am reluctant to move. I say that with significant background. Even with the extra hours spent on what most would consider trivial work, overall I am more productive.

Here is a screenshot of the current version written with JavaScript and Python, that I’m using to write the document you are reading:

MCJ 2022

My solution design doesn’t tackle the maintenance of triples themselves. It is a way to make the design persistent, as it pushes the ideas down to data rather than stopping at processes (software). Ultimately that is how I’m tackling my own growing collection of journal entries. In triple form I can create something with any OS. Further, by decoupling the rendering from the OS and from triple creation, using modern browsers, I can always read my journal. The 100-inch-wheelbase Nash/AMC Ramblers were built around the 196 CI engine. I think of my current journal view in that way: it is an engine of data, triples, surrounded by a script coach. Put the horse before the cart, right?

9.6 Fictional United Nations Speech

[Sean asked what I would say if I had the chance to speak as the UN President. I will not repeat, or pretend to represent better than Csaba Kőrösi, but I do have something I would add. What follows is what I would insert into a United Nations speech.]

I would like to take a few minutes to talk directly to the 8 billion people that the United Nations Charter is for. The United Nations Charter addresses broad goals that we generally agree with, but there is a problem. We are all human. We all have cognitive limitations as we work towards shared goals. As a species we use our culture to supplement our natural abilities. Culture includes social conventions, passing on knowledge of the world to future generations, and other cognitive tools. My very speech that you are listening to, right now, is filtered through your particular cultural experience.

There are two main problems. First, our culture, more and more, is transmitted and controlled by interests that are not necessarily our shared interest. Second, we exist in an extremely complicated socio-economic-ecological system that is impossible for a human to cognitively understand. To consent to something requires understanding. One outcome of these two problems combined is that our desire to work towards shared collaborative goals is hijacked and monetized without our consent. I do not propose any change to the governance structures that our nations have evolved that allow these problems; every member country has reasons for evolving to its existing governance. All that I am asking is that each of the 8 billion people who might be listening to this, the individual citizens, acknowledge the two problems, take steps to compensate, and participate in working towards our agreed-upon shared goals with agency. How, as individuals, do we compensate for our limited cognitive abilities within such a complex system, particularly when our culture is transmitted by global corporate interests? We use the same exact kinds of tools that global corporate interests have built their wealth and power on. Let me give you an example. Here is an outline of the analysis that Dr. Ye Tao presented at COP26 (📑1) last year on mitigation strategies for human-induced climate change:

   ☀️MEER
        🧊Cooling Return on Investment  
            ☝️ We need to offset 1,500TW EEI  
        🔒Locked in warming(LIW)  
            🙉Why unknown?  
                1) Lack of public discussion  
                2) Reluctance of those with knowledge/leadership  
                3) Inconvenient truth  
            🎓 Future increase if human causes stopped  
            ☝️ 2-3 W/M²  
        🏒All Play together  
            🎚️ Combined, the scale is insufficient  
        🌍Earth System Energy Balance  
            🔥 Imbalance causes heat  
            🎓 EEI = Earth’s Energy Imbalance  
            ☝️ Our problem is heat right now  
        ✅4 most important requirements  
            1) Net cooling at a small scale while meeting a minimum energy efficiency  
            2) Enough material exists for global use  
            3) Enough energy exists for global use  
            4) Global implementation that would be fast enough  
        ⚗️Use Science  
            🗿Popular efforts lead to ecosystem collapse  
                ⚡️Efforts require energy  
                    💯 MEER low energy to scale  
                ⏲️Efforts require time  
                    💯 MEER immediately addresses imbalance  

His analysis concludes that our focus on renewables and carbon capture is misguided, yet this goes against our cultural feeds. Tackling this is daunting for even the most proficient scientist familiar with the field. At the same time, the 8 billion stakeholders in the socio-economic-ecological system that this analysis addresses should be able to form an understanding of the points. A key tool that facilitates this is the ability to break the problem down into cognitively manageable pieces. Is our main problem heat? Do you agree, yes or no? What causes the heat? What do we need to do to change that? Where are we now, as far as locked-in warming, warming that will continue even if we stopped all human activity that contributes to it? Where do the components that make up your electric vehicle come from? What energy is used? What resources?

As you try to arrive at answers to these questions, which are core to your own personal agency as world system stakeholders, it is important to map relationships further than just one level. Don’t stop at “CO2 bad” or “Battery-Electric Vehicles are good”. Map out your own concerns further. There are many ways to do this, but one way is using the tools and ideas documented at Triple Pub. Do not let your agency be jacked. Understand what you are doing, how you relate to the global system, and choose your personal path. Grow your cultural cognition towards shared goals with agency. [I break character at the end of this, as the entire focus of Triple Pub is much like I would tell 8 billion people. I am no politician. I am no diplomat. I am no CEO. I am a system analyst attempting to do something I feel is worthwhile.]

9.7 Single Page Filesystem Graph DFD Ontology

I created this single-page description early on in my journey, and shared it with zero response. It isn’t able to cover the aspects of agency and human cognition; there is no direct, simple route. A meaningful map that a reader could use to understand and relate to these ideas requires much more, as the concepts are usually foreign to the reader. My attempt at a concise single-page description is more of a curiosity. Yes, it is possible to use a filesystem as a graph and to generate graphs, with the possibility of inference, from only a handful of lines of Python (see the sketch after the figure), but it doesn’t provide a meaningful contrast with black-box BI/AI/ML cloud, so the benefit of brevity and simplicity is lost.

Single Page Filesystem Graph DFD Ontology
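To make the filesystem-as-graph claim concrete, here is a minimal sketch. It treats directories and files as nodes with a single hypothetical "contains" predicate; the names are mine for illustration, not part of the one-pager, and a real version would layer the DFD ontology on top.

import os

def filesystem_triples(root):
    # Yield (parent, "contains", child) for every entry under root.
    for path, dirs, files in os.walk(root):
        for name in dirs + files:
            yield (path, "contains", os.path.join(path, name))

def infer_containment(triples):
    # Toy inference: containment is transitive, so every ancestor
    # directory also "contains" each descendant.
    for subject, _, obj in triples:
        parent = os.path.dirname(subject)
        while parent and parent != os.path.dirname(parent):
            yield (parent, "contains", obj)
            parent = os.path.dirname(parent)

direct = list(filesystem_triples("."))
for triple in list(infer_containment(direct))[:5]:
    print(triple)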

9.8 The Internet and Everything

The number of relations in this example is outside the scope of 3SA, but it might be helpful for understanding how Knowledge Representation Design works. The Internet and related technology are heavily steeped in graphs. Nested key-value pairs, in the form of JSON, can easily represent triples, since a triple is just bonded pairs around a predicate, with ends that hook up to other pairs.
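Here is a sketch of that observation. The nesting and the filler predicate "has" are my own hypothetical choices, not a 3SA format; walking the pairs flattens the document into triples, and each key that holds another dictionary is one of those ends hooking up to the next pair.

import json

doc = json.loads(
    '{"Internet": {"TLD": {"recipes": {"2LD": {"marysfavs": {"host": "www"}}}}}}'
)

def to_triples(node, subject="root"):
    # Each nested key-value pair becomes one triple. A dict value means
    # the key is itself a node, so recurse with the key as the subject.
    for key, value in node.items():
        if isinstance(value, dict):
            yield (subject, "has", key)
            yield from to_triples(value, key)
        else:
            yield (subject, key, value)

for triple in to_triples(doc):
    print(triple)
# ('root', 'has', 'Internet') ... ('marysfavs', 'host', 'www')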

The following set of triples should be self-explanatory, at least for a network engineer:

🛰🔹🏷️🔹Internet
🛰🔸🦷🔹🏷️🔹Top level domain (TLD)
🛰🔸🍕🔹🏷️🔹Second level domain (2LD)
🛰🔸🕴️🔹🏷️🔹Host
🛰🔸📛🔹🏷️🔹IPV4 Address
🛰🔸🪧🔹🏷️🔹IPV6 Address
🛰🔸🧑🔹🏷️🔹2LD Registration Contact
🛰🔸🔌🔹🏷️🔹TCP Port
🛰🔸🔌🔸🌐🔹🏷️🔹443 w/ HTTPS (website)

🔹, 🏷️, and 🔸 are reserved emoji with an agreed meaning in 3SA.

As far as triples are concerned, there isn’t anything special about emoji vs. text. I just like emoji because they are so concentrated and provide visual meaning. A tree 🌳 could work to mean a TLD just as well as the root of a tooth 🦷. Using ASCII as triples is also fine:

net ^ lbl ^ Internet
net ~ TLD ^ lbl ^ Top level domain (TLD)
net ~ 2LD ^ lbl ^ Second level domain (2LD)
net ~ hst ^ lbl ^ Host
net ~ IPV4 ^ lbl ^ IPV4 Address
net ~ IPV6 ^ lbl ^ IPV6 Address
net ~ TCP ^ lbl ^ TCP Port
net ~ TCP ~ 443 ^ lbl ^ HTTPS (website)

It is my take that emoji facilitate human cognition in the above example, but optimizing for human cognition often comes at a cost in scaling, primarily because machines process plain text more readily. Some people also find icons more difficult to follow than text. The priority of 3SA is human cognition, not machine cognition, so whatever works better for your group wins.
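Since nothing is special about the emoji, converting between the two notations is a mechanical substitution. A sketch, with the pairing taken straight from the two listings above (the function itself is hypothetical, not part of 3SA):

EMOJI_TO_ASCII = {
    "🛰": "net", "🔹": " ^ ", "🔸": " ~ ",
    "🦷": "TLD", "🍕": "2LD", "🕴️": "hst",
    "📛": "IPV4", "🪧": "IPV6", "🔌": "TCP", "🏷️": "lbl",
}

def to_ascii(line):
    # Swap each reserved emoji for its ASCII token.
    for emoji, token in EMOJI_TO_ASCII.items():
        line = line.replace(emoji, token)
    return line.strip()

print(to_ascii("🛰🔸🦷🔹🏷️🔹Top level domain (TLD)"))
# -> net ~ TLD ^ lbl ^ Top level domain (TLD)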

Let’s work with the emoji version, and set up some 🦷, 🍕, and 🕴️.

🛰🔹🦷🔹recipes
🛰🔹🦷🔹hotsauce
🛰🔸🦷🔸recipes🔹🍕🔹marysfavs
🛰🔸🦷🔸hotsauce🔹🍕🔹bruz
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔹🪧🔹2606:2800:220:1:248:1893:25c8:1946
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹🪧🔹2606:2800:220:1:248:1893:25c8:1946
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔹📛🔹93.184.216.34
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹📛🔹93.184.216.34
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹🧑🔹Mary Jenkins
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔹🧑🔹Bruce Smith
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔹🕴️🔹www
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹🕴️🔹www

The above triples tell us the TLDs, 2LDs, IP addresses, domain owners, and hosts. Notice that I am assigning an IP address to a 2LD, even though I have another node that is a host. This illustrates the mixed support for using a 2LD as a host; it is a bit off from a graph perspective.

I could enter IP addresses for the hosts, and even TCP 443 w/ HTTPS:

🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔹📛🔹93.184.216.34
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔸📛🔸93.184.216.34🔹🔌🔹🌐

The convention I’m using is a pair for each node: the first of the pair is the class, and the second forms the ID. This means that a node is unique via its path, the stuff delimited by 🔸. I don’t always do this. 🌐 is a specific instance of 🔌, and 🛰 is the root graph designation.
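Given that convention, a parser is two string splits: 🔹 separates subject, predicate, and object, and 🔸 splits the subject path into its (class, ID) pairs. A sketch, assuming one triple per line with exactly two 🔹 delimiters (the function names are mine):

def parse(line):
    # 🔹 delimits subject/predicate/object; 🔸 delimits the node path.
    subject, predicate, obj = line.split("🔹")
    return subject.split("🔸"), predicate, obj

path, predicate, obj = parse("🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹🧑🔹Mary Jenkins")
print(path)       # ['🛰', '🦷', 'recipes', '🍕', 'marysfavs']
print(predicate)  # 🧑
print(obj)        # Mary Jenkins
# The unique node identity is the (class, ID) pairs after the root 🛰:
print(list(zip(path[1::2], path[2::2])))  # [('🦷', 'recipes'), ('🍕', 'marysfavs')]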

I am crossing/conflating “domains” in the graph sense. There are multiple things going on here. The IP address is part of its own hierarchy/graph, and I am merging the namespace of the internet with physical aspects. DNS crosses graphs, going from a taxonomy to OSI layers 3 and 4.

I’m still not at “a href”. To get there, I need to add some new node types:

🛰🔸📑🔹🏷️🔹Index
🛰🔸📓🔹🏷️🔹Page
🛰🔸✒️🔹🏷️🔹Entry
🛰🔸🔗🔹🏷️🔹A HREF hypertext reference

Mary’s website, with the front page listing other pages for dinners and lunches, has a recipe for Crispy Eel on the Dinners page:

🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔹🏷️🔹Dinners
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔹📑🔹📓🔸1
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔹📑🔹📓🔸2
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸2🔹🏷️🔹Lunches
🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸31🔹🏷️🔹Crispy Eel

Notice how these are listed out of order? 🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔹🏷️🔹Dinners is a path to the Dinners section and a label, but that triple appears above 🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔹📑🔹📓🔸1. We haven’t defined other things, like the title of the index page (website), which might well be different from the host or domain name. We don’t even have to know what 📑 means to know that whoever is writing up this schema asserts that the label for the node 🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1 is Dinners. This is one of the magical things about triples: they work on the open-world assumption. This helps particularly with collaboration. As long as everybody agrees on the general schema, the entire system can come together asynchronously. Knowledge arises from the collaboration, rather than needing a lot of up-front work. In this case, a bunch of information about the dinners and lunches on the site could be entered by somebody with zero knowledge of IP addresses and domain name resolution, and then rolled in with other information. Even if bits of 🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www change, it is relatively easy to plug existing work into a new schema. On to 🔗!!
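Before moving on, here is a sketch of that open-world merging: a hypothetical network admin and a hypothetical content editor write triples independently, and the knowledge base is simply the union, with no coordination beyond the shared schema.

# Triples authored independently by two hypothetical contributors.
network_admin = {
    "🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔹📛🔹93.184.216.34",
}
content_editor = {
    "🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔹📑🔹📓🔸1",
    "🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔹🏷️🔹Dinners",
}
# Order and origin do not matter; merging is set union.
graph = network_admin | content_editor
print(len(graph))  # 3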

Here is Bruce’s website:

🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔸📑🔸📓🔸1🔹🏷️🔹Favorite Links
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔹📑🔹📓🔸1
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸3🔹🏷️🔹Mary's Crispy Eel
🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸3🔹🔗↩️
🔹🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸31

Note that we added a convention of ↩️ to mean the triple is split across lines. That is a completely arbitrary choice, but visually it seems fine. We got there. As you can see, though, the schema gets a bit complicated with multiple nestings and relations. It would be prudent to at least map our schema to a formal ontology (📑76), or have an idea of how it might map, even if it isn’t painstakingly precise. Regardless, that level is out of scope for 3SA.
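Here is a sketch of consuming that continuation convention: rejoin any line ending in ↩️ with the line that follows, before splitting on 🔹. It assumes the marker only ever appears at a line break, and the function is hypothetical.

def rejoin(lines):
    # Accumulate physical lines into one logical triple per ↩️ run.
    buffer = ""
    for line in lines:
        buffer += line
        if buffer.endswith("↩️"):
            buffer = buffer[:-len("↩️")]  # drop the marker, keep reading
        else:
            yield buffer
            buffer = ""

raw = [
    "🛰🔸🦷🔸hotsauce🔸🍕🔸bruz🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸3🔹🔗↩️",
    "🔹🛰🔸🦷🔸recipes🔸🍕🔸marysfavs🔸🕴️🔸www🔸📑🔸📓🔸1🔸✒️🔸31",
]
for triple in rejoin(raw):
    print(triple)  # one rejoined triple linking Bruce's entry to Mary's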