Idea Threads

On Ontology

Carcinization Platypus

Idea Bank

Cargo Cult / Rearranging the deck chairs on the Titanic.

On System Failure

Source: Richard Cook's How Complex Systems Fail.

No Root Cause

Single-point failures do not cause accidents; only a combination of faults leads to overt failure. Post-accident attribution to a root cause is fundamentally wrong, since overt failure requires multiple faults. There is no isolated cause of an accident. There are multiple contributors, each insufficient in itself to create an accident; only their combination does. Evaluations based on root-cause reasoning do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes. Hindsight bias further clouds the picture, making it seem that the practitioners at the time were aware of the impending failure. This remains the primary obstacle to accident investigation, especially when expert human performance is involved.

All practitioner actions are gambles.

After an accident, the overt failure often appears to have been inevitable and the practitioner’s actions look like blunders or deliberate, willful disregard of certain impending failure. But all practitioner actions are actually gambles, that is, acts that take place in the face of uncertain outcomes.

Actions at the sharp end resolve all ambiguity.

Organizations are ambiguous, often intentionally, about the relationship between production targets, efficient use of resources, economy and costs of operations, and acceptable risks of low and high consequence accidents. All ambiguity is resolved by actions of practitioners at the sharp end of the system. After an accident, practitioner actions may be regarded as ‘errors’ or ‘violations’ but these evaluations are heavily biased by hindsight and ignore the other driving forces, especially production pressure.

Human practitioners are the adaptable element of complex systems.

The two main criteria

The end goal of systems is to maximize production and minimize accidents.

Practitioners continually adapt the system to balance these two criteria. These adaptations often occur on a moment-by-moment basis. Some of them include:

  • Restructuring the system in order to reduce exposure of vulnerable parts to failure.
  • Concentrating critical resources in areas of expected high demand.
  • Providing pathways for retreat or recovery from expected and unexpected faults.
  • Establishing means for early detection of changed system performance in order to allow graceful cutbacks in production or other means of increasing resiliency.

Change introduces new forms of failure.

The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low-consequence but high-frequency failures. These changes may actually create opportunities for new, low-frequency but high-consequence failures.

When new technologies are used to eliminate well understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures.

Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.

On fixing errors

Views of 'cause' limit the effectiveness of defenses against future events. Post-accident remedies for “human error” are usually predicated on obstructing activities that can 'cause' accidents. These end-of-the-chain measures do little to reduce the likelihood of further accidents. In fact, the likelihood of an identical accident is already extraordinarily low because the pattern of latent failures changes constantly. Instead of increasing safety, post-accident remedies usually increase the coupling and complexity of the system. This increases the potential number of latent failures and also makes the detection and blocking of accident trajectories more difficult.

Safety is a characteristic of systems and not of their components

Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system. This means that safety cannot be manipulated like a feedstock or raw material. The state of safety in any system is always dynamic; continuous systemic change ensures that hazard and its management are constantly changing.

Complex systems contain changing mixtures of failures latent within them.

The sheer number of components within these systems allows for multiple permutations of failure, and their complexity makes it impossible for them to run without multiple flaws being present. The mix of latent failures changes constantly because of changing technology, work organization, and efforts to eradicate failures.

Complex systems run in degraded mode.

Complex systems run as broken systems. The system continues to function because it contains so many redundancies and because people can make it function, despite the presence of many flaws. Post-accident reviews nearly always note that the system has a history of prior proto-accidents that nearly generated catastrophe. Arguments that these degraded conditions should have been recognized before the overt accident are usually predicated on naïve notions of system performance. System operations are dynamic, with components (organizational, human, technical) failing and being replaced continuously.

Failure free operations require experience with failure.

Only one who plays with fire learns how to prevent it. Recognizing hazard and successfully manipulating system operations to remain inside the tolerable performance boundaries requires intimate contact with failure. More robust system performance is likely to arise in systems where operators can discern the “edge of the envelope”. This is where system performance begins to deteriorate, becomes difficult to predict, or cannot be readily recovered. In intrinsically hazardous systems, operators are expected to encounter and appreciate hazards in ways that lead to overall performance that is desirable. Improved safety depends on providing operators with calibrated views of the hazards. It also depends on providing calibration about how their actions move system performance towards or away from the edge of the envelope.

Computer Software

Software gives form and purpose to a programmable machine, much as a sculptor shapes clay.

Computers are to computing as instruments are to music. Software is the score, whose interpretation amplifies our reach and lifts our spirit. Leonardo da Vinci called music "the shaping of the invisible." As in the case of music, the invisibility of software is no more mysterious than where your lap goes when you stand up.

True mystery of computer science: How so much can be accomplished with the simplest of materials, given the right architecture.

Information is any difference that makes a difference. - Gregory Bateson

The intrinsic meaning of a mark is that it is there. The first difference is the mark; the second alludes to the need for interpretation.

The same notation that specifies elevator music specifies the organ fugues of Bach. In a computer the same notation can specify actuarial tables or bring a new world to life.

As with most media from which things are built, whether the thing is a cathedral, a bacterium, a sonnet, a fugue or a word processor, architecture dominates material. To understand clay is not to understand the pot. What a pot is all about can be appreciated better by understanding the creators and the users of the pot and their need both to inform the material with meaning and to extract meaning from the form.

There is a qualitative difference between the computer as a medium of expression and clay or paper. Like the genetic apparatus of a living cell, the computer can read, write and follow its own markings to levels of self-interpretation whose intellectual limits are still not understood.

Hence the task for someone who wants to understand software is not simply to see the pot instead of the clay. It is to see in pots thrown by beginners (for all are beginners in the fledgling profession of computer science) the possibility of the Chinese porcelain and Limoges to come.

On the diagrams

The nature of the content since we are doing system analysis requires a lot of visualizations to make the behaviours explicit.

Quantity has a quality of its own

A quantitative improvement in response times in a UI can yield a qualitative improvement. A sluggish UI that does not respond to the user is perceived as one of poor quality, but the underlying problem is quantitative. Much of life's visual and auditory interaction depends on its pace.

As children we discovered that clay can be shaped into any form simply by shoving both hands into the stuff. Most of us have learned no such thing about the computer. Its material seems as detached from human experience as a radioactive ingot being manipulated remotely with buttons, tongs and a television monitor.

One feels the clay of computing through the "user interface": the software that mediates between a person and the programs shaping the computer into a tool for a specific goal, whether the goal is designing a bridge or writing an article. The user interface was once the last part of a system to be designed. Now it is the first. It is recognized as being primary because to novices and professionals alike, what is presented to one's senses is one's computer. The user illusion, as my colleagues and I called it at the Xerox Palo Alto Research Center, is the simplified myth everyone builds to explain (and make guesses about) the system's actions and what should be done next.

The objective of the user illusion is to amplify the user's ability to simulate. A person exerts the greatest leverage when his illusion can be manipulated without appeal to abstract intermediaries such as the hidden programs needed to put into action even a simple word processor. What I call direct leverage is provided when the illusion acts as a kit or tool with which to solve a problem. Indirect leverage is attained when the illusion acts as an agent: an active extension of one's purpose and goals.

In both cases the software designer's control of what is essentially a theatrical context is the key to creating an illusion and enhancing its perceived "friendliness."

The earliest computer programs were designed by mathematicians and scientists who thought the task should be straightforward and logical. Software turned out to be harder to shape than they had supposed. Computers were stubborn. They insisted on doing what was said rather than what the programmer meant. As a result a new class of artisans took over the task. These test pilots of the binary biplane were often neither mathematical nor even very scientific, but they were deeply engaged in a romance with the material, a romance that is often the precursor of new arts and sciences alike. Natural scientists are given a universe and seek to discover its laws. Computer scientists make laws in the form of programs and the computer brings a new universe to life.

Nature sets out a system and humans describe it, versus: humans describe a system and a world falls out of it, with the ability to restart it.

Most system programmers discovered that it is one thing to be the god of a universe and quite another to be able to control it. Emergent behaviors are hard to predict. (Was this demonstrated in the evaluation of the design?)

A powerful genre can serve as wings or chains. The most treacherous metaphors are the ones that seem to work for a time, because they can keep more powerful insights from bubbling up.

Intangible message embedded in a material medium is the essence of computer software.

Strong representatives from each past era thrive today, such as programming in the 30-year-old language known as FORTRAN and even in the ancient script known as direct machine code. Some people might look on such relics as living fossils; others would point out that even a very old species might still be filling a particular ecological niche.

The computer field has not yet had its Galileo or Newton, Bach or Beethoven, Shakespeare or Moliere. What it needs first is a William of Occam, who said "Entities should not be multiplied unnecessarily." The idea that it is worthwhile to put considerable effort into eliminating complexity and establishing the simple had a lot to do with the rise of modern science and mathematics, particularly from the standpoint of creating new aesthetics, a vital ingredient of any growing field. It is an aesthetic along the lines of Occam's razor that is needed both to judge current computer software and to inspire future designs. Just how many concepts are there really? And how can metaphor, the magical process of finding similarity and even identity in diverse structures, be put to work to reduce complexity?

The world of the symbolic can be dealt with effectively only when the repetitious aggregation of concrete instances becomes boring enough to motivate exchanging them for a single abstract insight.

The designers of computing systems have learned to do the same thing as Newton and Leibniz, using programming methods that have the property called inheritance.

Designing the parts to have the same power as the whole is a fundamental technique in contemporary software.

The move to object-oriented design represents a real change in POV - a change of paradigm - that brings with it an enormous increase in expressive power. There was a similar change when molecular chains floating randomly in a prebiological ocean had their efficiency, robustness and energetic possibilities boosted a billionfold when they were first enclosed within a cell membrane.

The early applications of software objects were attempted in the context of the old metaphor of sequential programming languages, and the objects functioned like colonies of cooperating unicellular organisms. If cells are a good idea, however, they really start to make things happen when the cooperation is close enough for the cells to aggregate into supercells: tissues and organs. Can the endlessly malleable fabric of computer stuff be designed to form a super object?
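The idea of "parts with the same power as the whole" can be sketched as a composite: an aggregate of objects that answers the same protocol as a single object, so aggregates compose exactly like their parts. All names here are illustrative, not from the original text.

```python
# Illustrative sketch: a single Cell and an aggregate Tissue answer
# the same 'signal' message, so wholes compose like their parts.

class Cell:
    def __init__(self, value):
        self.value = value

    def signal(self):
        return self.value

class Tissue:
    """An aggregate that presents the same protocol as a single Cell."""
    def __init__(self, parts):
        self.parts = parts

    def signal(self):
        # The whole's response is composed from its parts' responses.
        return sum(p.signal() for p in self.parts)

# A Tissue can contain Cells or other Tissues interchangeably.
organ = Tissue([Cell(1), Tissue([Cell(2), Cell(3)])])
print(organ.signal())  # 6
```

Because the whole and the part share one protocol, callers never need to know whether they hold a unicellular organism or a supercell.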

"Better old thing" that is likely to be one of the "Almost new things" for the mainstream designs for the next few years.

A spreadsheet is a simulated pocket universe that continuously maintains its fabric; it is a kit for a surprising range of applications. Here the user illusion is simple, direct and powerful.

Dynamic spreadsheets were invented by Daniel Bricklin and Robert Frankston as a reaction to the frustration Bricklin felt when he had to work with the old ruled-paper versions in business school. They were surprised by the success of the idea and by the fact that most people who bought the first spreadsheet program (VisiCalc) exploited it to forecast the future rather than to account for the past. Seeking to develop a "smart editor" they had created a simulation tool.
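A minimal sketch of why a spreadsheet feels like a pocket universe that "continuously maintains its fabric": each cell holds either a constant or a formula over other cells, and every read reflects the current state, so changing one input re-simulates the dependent values. The `Sheet` class and cell names are hypothetical, for illustration only.

```python
# Illustrative pocket-universe spreadsheet: a cell is a constant or a
# formula (a function of the sheet), and get() always recomputes from
# the current values, so changes propagate on the next read.

class Sheet:
    def __init__(self):
        self.cells = {}

    def set(self, name, value):
        """value is a number, or a function taking the sheet."""
        self.cells[name] = value

    def get(self, name):
        v = self.cells[name]
        return v(self) if callable(v) else v

s = Sheet()
s.set("A1", 10)
s.set("A2", 32)
s.set("A3", lambda sh: sh.get("A1") + sh.get("A2"))
print(s.get("A3"))  # 42
s.set("A1", 100)    # changing an input re-simulates the dependents
print(s.get("A3"))  # 132
```

This is also why the first users reached for forecasting: editing any assumption instantly replays the whole little universe.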

The strongest test of any system is not how well its features conform to anticipated needs but how well it performs when one wants to do something the designer did not foresee. It is a question less of possibility than of perspicuity.

Users must be able to tailor the system to their wants. Anything less would be as absurd as requiring essays to be formed out of paragraphs that have already been written.

It is clear that in shaping software kits the limitations on design are those of the creator and the user, not those of the medium. The question of software's limitations is brought front and center, however by my contention that in the future a stronger kind of indirect leverage will be provided by personal agents: extensions of the user's will and purposes shaped from and embedded in the stuff of the computer. Can material give rise to mentality?

Atoms also seem quite innocent. Yet biology demonstrates that simple materials can be formed into exceedingly complex organizations that can interpret themselves and change themselves dynamically. Some of them even appear to think! It is therefore hard to deny certain mental possibilities to computer material, since software's strong suit is similarly the kinetic structuring of simple components. Computers "can only do what they are programmed to do," but the same is true of a fertilized egg trying to become a baby.

For the composer of software the computer is like a bottle of atoms waiting to be shaped by an architecture he must invent and then impress from the outside.

To pursue the biological analogy, evolution can tell the genes very little about the world and the genes can tell the developing brain still less. All levels of mental competence are found in the more than one and a half million surviving species. The range is from behavior so totally hard-wired that learning is neither needed nor possible, to templates that are elaborated by experience, to a spectrum of capabilities so fluid that they require a stable social organization - a culture - if full adult potential is to be realized. In other words, the gene's way to get a cat to catch mice is to program the cat to play - and let the mice teach the rest. Workers in artificial intelligence have generally contented themselves with attempting to mimic only the first, hard-wired kind of behavior. The results are often called expert systems, but in a sense they are the designer jeans of computer science.

Any medium powerful enough to extend man's reach is powerful enough to topple his world. To get the medium's magic to work for one's aims rather than against them is to attain literacy.

The protean nature of the computer is such that it can act like a machine or like a language to be shaped and exploited. It is a medium that can dynamically simulate the details of any other medium, including media that cannot exist physically. It is not a tool, although it can act like many tools. It is the first meta-medium and as such it has degrees of freedom for representation and expression never before encountered and as yet barely investigated. Even more important, it is fun, and therefore intrinsically worth doing.

By reading we hope not only to absorb the facts of our civilization and of those before us but also to encounter the very structure and style of thought and imagination. Writing gets us out of the bleachers and onto the playing field; old and new knowledge becomes truly ours as we shape it directly.

Early History of Smalltalk

Almost a new thing vs. Better old thing. Most ideas come from previous ideas. This project is a realization of these new points of view as parented by its predecessors.

Small minds try to form religions, the great ones just want better routes up the mountain.

Smalltalk is a recursion on the notion of computer itself.

The semantics of Smalltalk are a bit like having thousands and thousands of computers all hooked together by a very fast network.

Questions of concrete representation can thus be postponed almost indefinitely because we are mainly concerned that the computers behave appropriately and are interested in particular strategies only if the results are off or come back too slowly. (Alluding to verification and performance times)
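The "thousands of computers hooked together" semantics can be hinted at in a few lines: each object is a little computer reachable only by messages, so its concrete representation can be postponed or swapped without callers ever noticing. The class and method names below are invented for illustration.

```python
# Each object behaves like a small computer reached only through
# messages. Two different internal representations answer the same
# message, so the choice of representation can be postponed.

class FractionPair:
    """Stores numerator and denominator as attributes."""
    def __init__(self, n, d):
        self.n, self.d = n, d

    def as_float(self):
        return self.n / self.d

class FractionTuple:
    """Different representation, identical message protocol."""
    def __init__(self, n, d):
        self.pair = (n, d)

    def as_float(self):
        n, d = self.pair
        return n / d

# Callers send the same message regardless of representation.
for obj in (FractionPair(1, 2), FractionTuple(1, 2)):
    print(obj.as_float())  # 0.5 both times
```

Only if results come back wrong, or too slowly, does the particular strategy inside any one "computer" become interesting.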

Make simple problems simple and complex problems achievable.

New ideas go through stages of acceptance, both from within and without. From within, the sequence moves from "barely seeing" a pattern several times, then noting it but not perceiving its cosmic significance, then using it operationally in several areas, then comes a grand rotation in which the pattern becomes the center of a new way of thinking, and finally, it turns into the same kind of inflexible religion that it originally broke away from.

From without, as Schopenhauer noted, the new idea is first denounced as the work of the insane, in a few years it is considered obvious and mundane, and finally the original denouncers will claim to have invented it.

Difference between procedure and process. Lamp and the genie.

Philosophy is about opinion and engineering is about deeds, with science the happy medium somewhere in between.

It is not simple at all - Alberto Brandolini

Simple domain

Well defined cause-effect relationships. Predictable behaviour. Standard procedures. Best practices.

Complicated domain

Cause-effect requires analysis. Non-linear but predictable behaviors. Systems thinking. Good practices.

Complex domain

Cause-effect relations visible only in retrospect. Complex adaptive systems. Probe-Sense-Respond. Emergent practice. No clear categories (very fuzzy) and no deep analysis possible.

Chaotic domain

Impossible to define cause-effect relations. Act-Sense-Respond. Experimental practices. No good practices. A place where you have never been before.

Chaotic -> Complex -> Complicated -> Simple Patterns increasingly become visible.
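The four domains and their response loops can be summarized in a small lookup, paraphrasing the notes above (the sense-making sequences for the simple and complicated domains follow the standard Cynefin formulation; the dictionary itself is just an illustrative sketch, not any official API):

```python
# Cynefin-style domains mapped to their sense-making loop and the
# kind of practice that applies, paraphrasing the notes above.

DOMAINS = {
    "simple":      ("Sense-Categorize-Respond", "best practice"),
    "complicated": ("Sense-Analyze-Respond",    "good practice"),
    "complex":     ("Probe-Sense-Respond",      "emergent practice"),
    "chaotic":     ("Act-Sense-Respond",        "experimental practice"),
}

# Moving chaotic -> complex -> complicated -> simple, patterns
# become increasingly visible and practices increasingly settled.
for domain, (loop, practice) in DOMAINS.items():
    print(f"{domain}: {loop} -> {practice}")
```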

There is so much variability that standard tools like Gantt charts are useless.

Data comes before structure.

Non linearity is your friend.

Even though vicious cycles take their toll in a non-linear system, good things gain positive feedback as well.

"If everything seems under control. You are not going fast enough. - Mario Andretti."

The Language of the System

Programming language defines world.

Code + documentation: programmer -> programmer

Code: programmer -> computer

System language: programmer -> program

Flow vs. Places

Queues are different from messages.
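The queue/message distinction can be sketched: a queue is a place that decouples sender from receiver in time (the producer neither knows nor waits for the consumer), whereas a direct message send couples the two. A minimal sketch with Python's standard library; the sentinel-shutdown scheme is just one conventional choice:

```python
import queue
import threading

# A queue decouples producer and consumer: the producer returns
# immediately, and the consumer drains the queue on its own schedule.

q = queue.Queue()

def consumer(out):
    while True:
        item = q.get()
        if item is None:      # sentinel value signals shutdown
            break
        out.append(item * 2)  # stand-in for real processing

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()

for i in range(3):
    q.put(i)        # the producer never blocks on the consumer
q.put(None)         # tell the consumer to stop
t.join()
print(results)  # [0, 2, 4]
```

With a direct message send the producer would invoke the consumer and wait for its answer; with the queue, the only shared thing is the place where items flow.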