Empathetic Systems: Designing Systems for Human Decision-Making
Software begins as an idea in our minds, gets translated through assumptions and decisions into code, and then gets deployed into an environment where it becomes "the software." But what we're rarely taught is that the story doesn't end there.
Once software is deployed and running in production, it starts providing feedback loops that actively shape how teams seek context, acquire knowledge, and open or close communication pathways. This is the reverse flow: how deployed software reshapes the cognitive and social patterns of the teams that maintain it. An engineer who can easily set up the application locally learns differently from one struggling with a complex local setup. A slow-to-query database influences which questions teams choose to ask or avoid. Tightly coupled services dictate who talks to whom during planning sessions.
The reverse flow matters because it shapes the most fundamental activity in software development: decision-making. Deployed systems actively train teams through feedback—rewarding certain patterns, punishing others, enabling or constraining choices. The quality of any software system is not a property of the code or the process itself, but a reflection of the quality of the thousands of decisions that have shaped it over time. When an engineer delays decisions until a senior manager is available, the system has taught them autonomous choices don't survive. When another moves at lightning speed without reflection, the system rewards velocity over coherence.
This essay argues that the ability to make, evaluate, and correct these decisions depends not on technical skill, but on the psychological safety of the environment in which they are made. The capacity to surface, evaluate, and reverse decisions ultimately determines whether systems thrive or decay under pressure.
This realization prompts a critical question: If psychological safety drives our ability to report and learn from mistakes, shouldn't we treat human dynamics as a first-class force in software system evolution? And if so, how might we design systems, architectures, and team boundaries with empathy and human values as organizing principles rather than afterthoughts?
Foundational Insights
This essay is built upon the tradition of Peter Naur's "Programming as Theory Building." Instead of offering a formula or universally quantifiable patterns, I share patterns I’ve seen, felt, and reflected on during more than a decade of experience with multiple teams.
Working with E-type systems—a concept originating in Lehman’s Theories of Software Evolution and central to modern Evolutionary Architecture—I've consistently observed that systems actively teach teams how to think, who to talk to, and what to care about. A slow database changes which questions teams ask. Tightly coupled services dictate who communicates during planning. Complex deployment pipelines reshape what counts as "done."
This phenomenon, which I call the "reverse flow," is a direct expression of a Socio-Technical System (STS). STS is a framework positing that the technical system (the code, the infrastructure) and the social system (the team, its communication patterns, its hierarchy) are inseparable. They continuously co-evolve, and a change in one will inevitably provoke a change in the other. This feedback loop—how deployed systems reshape the cognitive and social patterns of teams—appears consistently across E-type systems.
The critical factor determining whether this feedback loop enables evolution or decay isn't technical skill—it's psychological safety. Dr. Amy Edmondson's research and Google's Project Aristotle confirmed that of five characteristics of high-performing teams, psychological safety—the belief that one won't be punished for speaking up—was the most important.
Through this essay, I encourage leaders and practitioners to see these invisible, socio-technical forces and ask: what are our systems really teaching? Are they creating conditions for teams to sense problems and evolve—or suppressing the feedback loops we need?
The Education of a "Technical Expert"
The first few interviews I attended when trying to switch from my first job were, in the kindest terms, a monumental disaster. In one of them, an interviewer looked me in the eye and asked, "Do you even know Java? If not, please don't waste our time!" I apologized and left silently, feeling as though my years of study and effort amounted to nothing more than a few awkward moments. Bangalore traffic that evening gave me plenty of time to reflect. By the time I reached my apartment, I had made up my mind: I was going to learn this properly, mastering every technical detail I could grab.
For the next six months, I consumed and learned everything I could get my hands on. JDK source files, Spring and Struts docs, Apache projects, IDEs like NetBeans and Eclipse—the list seemed endless. The "investment" paid off. All the interviews after that were a breeze and I landed a role at a late-stage startup, complete with the title—Associate Tech Lead.
It was only when I actually joined this new team that I realized knowledge—however hard-won—was not the most precious commodity. In those initial days, I looked at my teammates and wondered how they managed to get so much done, even though some of them had never memorized the internals of the JDK or obsessed over API docs. They built elegant software without needing to show off deep theoretical knowledge. My confidence began to wobble, and I found myself wrestling with frustration and impatience.
There was something within the team that I didn't really understand—a harmony in producing software without the burden of excessive theoretical knowledge. More importantly, they seemed to make decisions with a confidence and consistency I couldn't explain. I had to know how they did it, but my initial attempts weren't successful; nobody could articulate their own process. Questioning their thought process occasionally was acceptable, but I wanted deeper access to their mental models. I realized I needed to build interpersonal relationships with them.
My urge to display my expertise and gain access to their thinking process, even though I was a 'friendly lead,' backfired. The more I pushed, the more out of step I felt with the group. That insecurity—that leftover tension of being an outsider—made me reflect. Leadership, I soon realized, isn't a title. It's something you earn, step by step, by building trust and connection. So I let go of the burden of being "the lead" and started small: inviting people for tea, for lunch, or the occasional after-work beer. Gradually, as I worked to break the ice, those small acts led to real conversations, the sharing of tips and tricks, the exchange of smiles, and—most importantly—everyone's guard dropping.
I was slowly gaining insights into their thinking process, their decision-making process. And I started noticing something interesting: each of us on the team had a different—but effective—way of making decisions. I was the quick starter, not afraid to make mistakes and learn fast. Another engineer was meticulous, needing to have every piece in place before committing. Yet another was skilled at following the manager's direction and juggling everyone's expectations. We all produced valuable software, each in our own style, pace, and rhythm. But more importantly, the decisions we made had different lifespans—some held up over days, weeks or months, others needed constant revision.
Discovering and Reflecting on the Four Tendencies
That observation became the focus of my work for the next twelve years. Working with multiple teams, I consistently saw this pattern of individual approaches to decision-making. Initially, I called this "profiling," but that word never quite fit—it felt clinical and narrow. Instead of seeing differences as quirks or obstacles, I began recognizing them as decision-making patterns. That’s when I was introduced to Gretchen Rubin's "Four Tendencies" framework, and suddenly, the diversity I had sensed in team dynamics had a language of its own.
The Four Tendencies model categorizes human motivation into four types, based on how we respond to expectations:
Upholder: Responds readily to both outer and inner expectations.
Questioner: Meets expectations only if they make sense internally.
Obliger: Meets outer expectations, struggles with inner expectations.
Rebel: Resists both outer and inner expectations.
In software teams, these tendencies are deciding factors for success—they're at the heart of how architectural decisions get made, how long they last, and whether they serve the system or create debt. Understanding what motivates individuals becomes a tool for real leadership, adapting practices, communications, and even architecture to fit the constellation of personalities in the room.
Decisions That Endure
Some engineers respond readily to both outer and inner expectations—what Gretchen Rubin calls Upholders. Their motto is "Discipline is my freedom," reflecting their ability to create structure and follow through consistently. In software projects, upholders possess high regard for their values—whether that's code quality, design purity, or other aspects they hold dear. They strive to maintain these standards and can seem reluctant to change because compromising their values feels like an attack on their identity.
As decision-makers, upholders excel at creating durable architectural choices that protect system invariants. When they commit to a decision, it tends to hold up over time because they've aligned it with their core values.
In one project that I joined midway, a senior engineer, John, used to leave comments about readability—variable names, line width, spaces vs. tabs—on pretty much every PR the team created. Because he was pleasant to talk to, nobody had really pressed him on why the comments were a little too "pedantic." When I noticed visible discomfort growing in the team, I decided to understand what was motivating John. It turned out he spent most of his time on on-call support, and to John, the most readable code was the easiest to debug during a production issue. My curiosity to uncover the why behind this behavior revealed something more: the on-call roster was almost always fixed, with John covering weekends and late hours. Deeply valuing the application's availability, he always volunteered for the duty, ensuring he was the one guarding its stability.
As an intervention, I advocated for rotating the on-call duty and, after we standardized our norms, introduced linters and formatters. This wasn't just about code style—it created the necessary decision framework in which readability standards became automated, allowing the Upholder to protect his values without blocking team velocity. The architectural decision to standardize through tooling meant individual PRs no longer required manual review for style, freeing John to make higher-level stability decisions. The linter intervention also broke a feedback loop: John no longer received constant negative feedback from PRs violating his values, feedback that had been reinforcing his gatekeeping behavior. Automation changed what the deployed system taught him about his role.
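As an illustration, here is a minimal sketch of what moving style review into tooling can look like. The Formatter API comes from google-java-format; the class name, argument handling, and CI wiring are hypothetical, and in practice a build plugin such as Spotless or Checkstyle would do this job.

```java
import com.google.googlejavaformat.java.Formatter;
import com.google.googlejavaformat.java.FormatterException;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical CI gate: fail the build when a source file drifts from the
// agreed format, so style feedback comes from the pipeline, not from PR comments.
public class FormattingGate {

    public static void main(String[] args) throws IOException, FormatterException {
        Path source = Path.of(args[0]); // path to one Java source file
        String original = Files.readString(source);
        String formatted = new Formatter().formatSource(original);

        if (!original.equals(formatted)) {
            System.err.println("Not formatted: " + source);
            System.exit(1); // the build, not a reviewer, enforces the norm
        }
    }
}
```

The exact tool matters less than the effect: the norm is encoded once, and no individual PR becomes a venue for defending it.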
Upholders excel as guardians of software, especially in areas requiring attention and stability. They continuously keep things tidied up, tested, and documented. As a lead, your role is to identify the upholders, understand their values, and map them to areas of the software where stability, clean practices, and reliability are crucial. While automated tooling like linters provides elegant solutions, larger architectural evolution requires careful conversation to help Upholders see how change protects rather than compromises their core values.
This dynamic illustrates how psychological safety for Upholders means explicitly protecting their core values and creating automated frameworks that respect those standards. When the environment validates their sense of discipline and maintains stability, Upholders thrive and produce lasting architectural decisions. Without this safety, they tend to gatekeep or disengage, harming team morale and quality.
Decisions Built on Conviction
Other engineers question all expectations before deciding whether to meet them—the Questioners. They will only comply if they understand the reasoning and believe it makes sense. Their motto is "I'll comply—if you convince me why," highlighting their need for logical reasoning and evidence-based decision-making.
Being a questioner myself, I know that without understanding the "why," a questioner simply cannot function well. They aren't asking because they want answers handed to them; they want conviction and the satisfaction of finding answers themselves. As decision-makers, questioners require a full understanding of the decision space before committing. Without investigation time, they either delay decisions or make noncommittal ones that create technical debt.
Sarah, a new engineer on my team, was tasked with implementing a feature using a third-party library. She received code snippets and the relevant stories, and was instructed, "Let's go!" Instead of an implementation, the first week produced endless questions: How would you do it? Who did this POC? Can I talk to them? How did they decide option A versus option B?
Recognizing the tendency, I knew we had to let Sarah find the answers. I worked with her to prepare a roadmap for talking to other engineers and gave her room to ask questions, while still setting a clear timeline to produce working software—even if imperfect—to enable iteration. For two weeks, nothing worthy of being called a solution emerged. In the third week, Sarah reached out: "I think I figured it out." She presented a high-quality approach that answered every question I could throw at her, complete with bookmarks to documentation. But she didn't stop at the immediate problem. She also delivered recommendations for caching and observability—the stepping stones for a platform-first approach that would benefit the entire team. She had acquired deep knowledge without anyone teaching her. It was a powerful lesson in leadership, and one of the best teachable moments of my life: sometimes the most valuable action is not to provide answers, but to create the space for conviction to grow.
The two-week exploration period enabled high-quality architectural decision-making because Sarah understood the entire decision space. The decision she made held up throughout the project because it was built on genuine conviction, not blind execution. The exploration period created a new feedback loop: instead of the deployed system sending confusing signals that multiplied her questions, she could interrogate the system directly until it made sense. The architecture stopped being a black box that she had to accept or that was handed down from someone else. It became something she could question, understand, and improve.
Most projects don't allocate the investigation time Questioners need to build conviction. I've learned to create explicit time boundaries that protect their exploration while maintaining delivery momentum. In 1-1s, I encourage them to start with small, working increments they can question and improve iteratively, rather than trying to achieve complete understanding upfront. This gives them a tangible system to interrogate—turning their natural questioning into rapid learning rather than decision paralysis. However, not every project can afford weeks of exploration before producing output. The ongoing work is convincing Questioners that working software is the only reliable way to test reality—that conviction builds through the reverse flow from deployed systems, not endless upfront analysis.
For Questioners, psychological safety manifests as allowing space for exploration and assuring their questions will be taken seriously. This sense of safety enables them to build internal conviction and contribute deeply reasoned decisions. When teams respect this need, Questioners become pillars of architectural integrity and learning; when the environment dismisses their inquiries, quality and trust suffer.
External Accountability as Decision Enabler
Many engineers readily meet outer expectations but struggle with self-imposed ones—these are Obligers. Their motto is "You can count on me, and I'm counting on you to count on me," emphasizing their responsiveness to external accountability.
Obligers are often the most dependable members in a team, readily meeting external expectations. They blend well into most organizations, and are also the most common tendency I have observed. In hierarchical organizations, obliger patterns become normalized and nearly invisible. What looks like the shortest path to get work done is often an obliger tendency in action. Obligers also look up to leaders who make decisions for them. They are excellent executors. As long as the context aligns, they produce satisfactory software and aren't afraid of feedback. As decision-makers, they excel at executing decisions when accountability structures are clear, but struggle with autonomous architectural decisions.
One challenge I've seen Obligers struggle with: without a safe environment, it's hard for them to get their voice heard. Constant "pushing" without autonomy wears them out, often translating into silent burnout. You can sense this during stand-ups: an unspoken frustration, a lack of interest, or, worst of all, a silent agreement.
I once worked with a senior engineer, Alex, who knew how every part of the application had evolved. Yet he would not propose new ideas or refactor with newer or different patterns. His response to most requests was the same—"Tell me what you need, I will get it done." This was a significant bottleneck: he would delay a decision until I or another leader was available to make it for him, even though he was well aware of which route to take. The resulting friction produced issues that were often deferred as technical debt. He saw the job as executing Jira cards, not as actively participating in the evolution of the software.
Simple conversations with Alex weren't effective, though we both kept acknowledging how this friction in decision-making was slowing us down. Over many 1-1s, however, Alex also voiced the coupling issues between REST controllers and service logic—a problem that genuinely needed addressing. I used this as an opportunity to initiate a change: decoupling the controller layer and establishing explicit package boundaries, with Alex as the owner of the controller domain. This restructuring amounted to a formal mandate for architectural ownership. It established a clear team boundary and reduced Alex's cognitive load, giving him the external accountability and autonomy he needed to thrive.
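As a sketch of what such boundaries can look like in code, a fitness function in the spirit of ArchUnit can turn the mandate into an executable rule. The package names below are hypothetical; the rule simply states that service code must never reach back into the controller territory.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// A team boundary encoded as an executable rule: dependencies may flow from
// controllers to services, never the other way around.
public class ControllerBoundaryCheck {

    public static void main(String[] args) {
        // Hypothetical root package of the application under test.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");

        ArchRule rule = noClasses()
                .that().resideInAPackage("..service..")
                .should().dependOnClassesThat()
                .resideInAPackage("..controller..");

        rule.check(classes); // fails the build when the boundary is violated
    }
}
```

A rule like this writes the boundary down where every build can see it, making the ownership signal unambiguous.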
Alex was initially surprised; he told me he had never imagined that the application—its architecture and structure—would be changed for him. He comfortably accepted the responsibility and the territory. In further conversations, we introduced more and more room for experimentation and learning. The shift in Alex's collaboration and participation was visible: he began experimenting with newer patterns in Spring, started making decisions without delay, and proactively refactored and improved the controller layer.
Obligers require psychological safety through clear, reliable external accountability structures and invitations to contribute. This external validation creates a secure space where they can take ownership and voice concerns without fear. For Alex, changing architecture was the clearest possible signal of trust and expectation, and was reciprocated with ownership and collaboration. External accountability structures inevitably change as organizations evolve, which can leave Obligers without their scaffolding. Rather than maintaining fixed structures, the goal is helping Obligers develop voice and agency alongside structural shifts—co-evolving their capacity to surface concerns even when familiar accountability patterns disappear. When such structures are absent or inconsistent, Obligers may silently withdraw, limiting architectural innovation and introducing hidden risks.
Rapid Decisions and Decision Reconciliation
Some engineers resist all expectations, whether from others or themselves—the Rebels. They value freedom and autonomy above all else. Their motto is "You can't make me, and neither can I." Rebels are motivated by choice and the ability to act according to their own desires.
Rebels push the boundaries of what is possible with software. They love to seek challenges and often work better with ambiguous requirements and less process. As decision-makers, rebels make rapid architectural decisions in ambiguous spaces but create decision inconsistency over time. This often produces an unchecked drift in the software architecture. Their strength in quick decision-making can, over time, be seen as anti-process. In teams with strong rebels, I have noticed a sine-wave-like velocity—a period of quick action followed by a period of slow tidying up.
I eventually understood this pattern as decision accumulation followed by decision reconciliation. The "tidying up" was the team resolving inconsistencies between rapid decisions made without coordination.
Nathan, an engineer with around five to six years of experience, was extremely good at any problem we could throw at him. He also did not worry about many of the team's practices, and the seniors were always on alert when his PRs appeared. But when speed was critical, the team expected Nathan to roll up his sleeves. I proposed an experiment: Nathan would actively contribute only four days a week. On the fifth day, which I called "No-Decision Friday," Nathan's only tasks were to talk to other team members, review changes, and update documentation. No decisions on new work were allowed.
No-Decision Fridays were simply a decision review cadence. This ritual slowed the pace but gave ample time to reflect and talk to others, relieving the frantic pace of work. Over many such Fridays, Nathan learned to distinguish which parts of the system could handle rapid iteration and which required coordination. He started contextualizing his changes—explaining his assumptions and trade-offs in PRs—which enabled continuous validation rather than late discovery of misalignments. The reconciliation periods shortened as Nathan developed judgment about where his speed was valuable and where it created friction. No-Decision Friday interrupted the rapid feedback loop that reinforced his quick-decision pattern. By forcing reflection before the deployed system responded to his changes, he could see the long-term consequences his decisions created.
Decision reconciliation is expensive—the alignment conversations and rework have real costs to team velocity. The trade-off makes sense where pushing creative or innovation boundaries matters most, but in domains requiring tight coordination or predictable delivery, the reconciliation overhead can exceed the value of rapid experimentation. Not every rapid decision-maker is a high performer either—speed without skill creates chaos, not innovation. The key question is whether the rapid decisions, despite needing reconciliation, ultimately move the system forward. For Nathan, the experimentation phase uncovered possibilities that slower, more coordinated approaches would have missed. The reconciliation cost was real but worthwhile. A different person in a different context may require other, non-technical, non-systemic interventions. In contexts where consistency matters more than exploration, this pattern becomes a liability rather than an asset.
Rebels embody psychological safety through freedom and autonomy in choosing how and when to engage with architectural decisions. Providing ritualized time for reflection, like “No-Decision Friday,” creates a safe container for them to balance rapid innovation with necessary stabilization. Denying them this autonomy risks chaos, frustration, and architectural drift.
Conclusion: Empathetic Systems
My observations and experiments consistently revealed that the human elements of a team had the most significant and sustained impact on team outcomes. I came to call this goal "Empathetic Systems"—an approach to software system design that deeply resonates with the team's mental models, their tendencies, and their learning paths.
These tendencies are lenses, neither fixed characteristics nor labels. A Rebel may become an Obliger depending on the power dynamic, and an Upholder who values their team may become a Rebel in the face of an external threat. In reality, we are all four tendencies combined, depending on the context. Given different external factors—a new project, a change in team dynamics, a supportive manager, a challenging life event, or the evolution of the social system itself—a person's dominant tendency can shift, and they can learn to access and strengthen other tendencies.
This framework is a powerful tool for fostering empathy and understanding, but it must be used with wisdom and caution. These tendencies are not rigid boxes to put people in. Leaders should never use this framework to stereotype, label, or pre-judge team members. The goal is to open up conversations and build bridges of understanding, not to create divisions or limit opportunities. With the right environment—one built on trust, respect, and psychological safety—individuals can strengthen all four tendencies, becoming more versatile, resilient, and effective.
The purpose of this approach is to shift our architectural thinking from "making good decisions" to "creating conditions where bad decisions surface fast."
As Dr. Amy Edmondson found and Google's Project Aristotle confirmed, psychological safety enables the learning behavior required for error correction. In software terms, this means the most resilient architectures aren't those with the fewest initial mistakes—they're systems where decision problems surface quickly enough to fix before they spread. But a generic "open door policy" isn't enough. To be effective, psychological safety must be targeted to the specific motivational drivers of the individuals on the team.
Why emphasize these patterns when we're hired for our technical expertise? Because how we learn software development creates a fundamental mismatch with how we must practice it.
We're taught to build programs that meet specifications—technical problems with known solutions. But most professional work is about maintaining E-type systems through collaborative decision-making under uncertainty. When professionally trained individuals enter these environments, they must unlearn the individualistic patterns that earned academic success. This unlearning only happens when psychological safety makes it safe to say "I don't know" or "this isn't working."
The deployed system itself is the best teacher. It creates "teachable moments" through its reverse flow—defects, incidents, and structural constraints that reveal where our assumptions were wrong. Psychological safety simply creates the readiness to learn from what the system is teaching us.
This is also why initiatives to adopt microservices, Team Topologies, or other "proven practices" so often fail to provide the promised value. Organizations apply the evolved blueprint without understanding the evolution itself. They miss the reverse flow and co-evolution: the continuous adaptation between human decision-making patterns and technical constraints. We must factor in this co-evolution for even the minutest aspects of software creation—from naming conventions to testing strategies to code review rituals. Successful teams didn't start with the "right" architecture—they built psychological safety that allowed them to detect and correct architectural problems rapidly, creating feedback loops where the reverse flow could teach them effectively.
Creating such responsiveness requires coordinated attention: architectural structures that enable reversible decisions, delivery practices that create guardrails like automated testing and feature flags, and processes that cultivate safety through rituals like blameless retrospectives and decision review cadences.
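To make "reversible decisions" concrete, here is a minimal feature-flag sketch. The flag names and the in-memory store are hypothetical (production teams would typically reach for a library such as Togglz or LaunchDarkly), but the principle is the same: reversing a decision becomes a flag flip rather than a rollback.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal in-memory feature flag store: the new code path ships dark and can
// be switched off the moment the deployed system signals a problem.
public class FeatureFlags {

    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    public void set(String name, boolean enabled) {
        flags.put(name, enabled);
    }

    public boolean isEnabled(String name) {
        return flags.getOrDefault(name, false); // unknown flags default to the old path
    }

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();
        flags.set("new-routing-decision", true); // the decision under trial

        if (flags.isEnabled("new-routing-decision")) {
            System.out.println("Taking the new, still-unproven path");
        } else {
            System.out.println("Taking the established path");
        }
    }
}
```

Guardrails like this lower the cost of being wrong, which is exactly what lets bad decisions surface fast without punishing the people who made them.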
Empathetic systems recognize that humans are both the agents and sensors of the systems they maintain. The deployed system sends signals constantly, but only humans can sense them and act. And different individuals sense different signals based on what they're motivated to protect or pursue. Tightly coupled services create blame pressure; poorly abstracted boundaries teach teams that speaking up won't help. When our architecture, delivery practices, and processes work together intentionally, they create feedback loops that strengthen psychological safety rather than erode it.
After twelve years of observing these patterns, I'm convinced that understanding individual motivation is foundational for sustainable software architecture. The goal is to cultivate a high-performing team capable of continuously evolving the software. The system evolves through the decision-making conversation between human values and technical constraints—but only if both sides can speak honestly about what's working and what isn't.
By caring for and nurturing the social system that creates the software, the system that is software evolves better. That is a future worth pursuing.
