How Machines Make “Good Enough” Decisions

Published: January 20, 2026

When people search for a route on Google Maps or get a film recommendation from Netflix, they likely expect the computer to present the best possible option. In reality, most systems aren’t designed to deliver perfect answers. They aim for decisions that are good enough – fast, practical, and reliable under everyday conditions.

The Role of Heuristics in Everyday Algorithms

Computers don’t exist in a vacuum. To be useful, their algorithms must account for constraints on processing power, memory, timing, and available information. To function reliably under these conditions, engineers build systems that prioritize efficiency over accuracy. 

In computing, “good enough” is not a failure. In fact, it’s often the goal. A hospital scheduling system, for instance, must assign staff based not only on availability but also on legal limits for hours worked, specialist qualifications, and patient load. The schedule it produces may not be ideal for everyone, but it meets enough requirements to keep the system functioning. Similarly, online travel booking sites like Travelocity and Expedia balance cost, timing, layover duration, and airline preference. The options they present are not perfect in every dimension, but they reflect a compromise that most users will accept.
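To make this concrete, here is a minimal sketch in Python of a greedy "good enough" scheduler. The staff data, the 40-hour weekly cap, and the one-qualification-per-shift rule are all illustrative assumptions, not a real hospital's constraints:

```python
# A minimal "good enough" scheduler: greedily assign each shift to the
# first staff member who satisfies every hard constraint. The result is
# rarely optimal, but it is legal, fast, and keeps the system running.

MAX_WEEKLY_HOURS = 40  # assumed legal cap (illustrative)

staff = [
    {"name": "Ana",  "skills": {"icu", "er"}, "hours": 36},
    {"name": "Ben",  "skills": {"er"},        "hours": 20},
    {"name": "Caro", "skills": {"icu"},       "hours": 8},
]

shifts = [
    {"ward": "icu", "length": 8},
    {"ward": "er",  "length": 8},
    {"ward": "er",  "length": 8},
]

def assign(shifts, staff):
    schedule = []
    for shift in shifts:
        for person in staff:
            fits_hours = person["hours"] + shift["length"] <= MAX_WEEKLY_HOURS
            qualified = shift["ward"] in person["skills"]
            if fits_hours and qualified:
                person["hours"] += shift["length"]
                schedule.append((shift["ward"], person["name"]))
                break  # good enough: stop at the first feasible match
        else:
            schedule.append((shift["ward"], None))  # unfilled shift
    return schedule

print(assign(shifts, staff))
# [('icu', 'Caro'), ('er', 'Ben'), ('er', 'Ben')]
```

Notice what the scheduler never does: compare all possible assignments. It settles for the first one that violates no hard constraint.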

To achieve this, many systems rely on heuristics: decision shortcuts that limit the search space and speed up decision-making. Heuristics draw on past outcomes, statistical patterns, and domain-specific rules to make educated guesses about what will work well enough. Instead of evaluating every possibility, a system uses predefined indicators to eliminate unlikely options early. This drastically reduces processing time.
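Here is a small sketch of that pruning idea, with invented numbers: a cheap indicator (straight-line distance) filters the candidates, and only the survivors get the expensive evaluation:

```python
# Heuristic pruning: discard unlikely candidates with a cheap test
# before running the expensive evaluation on the survivors.

def cheap_estimate(route):
    # Crude indicator: straight-line distance (fast to compute).
    return route["straight_line_km"]

def expensive_score(route):
    # Stand-in for a costly simulation of traffic, turns, signals...
    return route["straight_line_km"] * route["congestion_factor"]

def pick_route(candidates, prune_factor=1.5):
    # Keep only routes whose cheap estimate is within prune_factor
    # of the best cheap estimate; fully evaluate just those.
    best_cheap = min(cheap_estimate(r) for r in candidates)
    survivors = [r for r in candidates
                 if cheap_estimate(r) <= prune_factor * best_cheap]
    return min(survivors, key=expensive_score)

candidates = [
    {"name": "A", "straight_line_km": 10, "congestion_factor": 1.8},
    {"name": "B", "straight_line_km": 12, "congestion_factor": 1.1},
    {"name": "C", "straight_line_km": 30, "congestion_factor": 1.0},  # pruned early
]
print(pick_route(candidates)["name"])  # -> "B"
```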

This is optimization in its real form. Algorithms are designed to weigh competing goals and prioritize according to a predefined logic. Sometimes that logic is hard-coded; sometimes it adapts based on patterns in real-time data. Either way, it reflects a system designed for action, not philosophical ideals.
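One common form of that predefined logic is a weighted score over the competing goals. In this hypothetical flight-search sketch, the weights are arbitrary design choices, not values from any real booking site:

```python
# Predefined logic as a weighted score: each competing goal gets a
# weight, and the option with the lowest combined cost wins.

WEIGHTS = {"price": 1.0, "duration_h": 25.0, "layovers": 40.0}

def combined_cost(option):
    return sum(WEIGHTS[goal] * option[goal] for goal in WEIGHTS)

flights = [
    {"id": "F1", "price": 320, "duration_h": 9.0,  "layovers": 1},
    {"id": "F2", "price": 410, "duration_h": 6.5,  "layovers": 0},
    {"id": "F3", "price": 280, "duration_h": 14.0, "layovers": 2},
]

best = min(flights, key=combined_cost)
print(best["id"], round(combined_cost(best), 1))  # F2 572.5
```

Changing the weights changes which compromise wins; that is exactly where the designer's priorities enter the system.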

Google Maps, for example, may avoid left turns across traffic not because it can’t calculate those routes, but because historical data shows such turns often cause delays. Similarly, DoorDash might assign orders based on proximity and recent driver activity, rather than conducting an exhaustive search for the most mathematically efficient sequence. These strategies aren’t flaws. They’re engineered responses to complexity. 
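The left-turn rule can be as simple as a fixed penalty added to a route's cost. The 30-second figure below is invented for illustration; it stands in for whatever historical data would suggest:

```python
# A hard-coded rule of thumb in route costing: penalize left turns
# across traffic instead of simulating each intersection.

LEFT_TURN_PENALTY_S = 30  # assumed average delay per left turn

def route_time_seconds(segments):
    total = 0
    for seg in segments:
        total += seg["drive_s"]
        if seg.get("turn") == "left":
            total += LEFT_TURN_PENALTY_S
    return total

route = [
    {"drive_s": 120, "turn": "right"},
    {"drive_s": 90,  "turn": "left"},
    {"drive_s": 200},
]
print(route_time_seconds(route))  # 120 + 90 + 30 + 200 = 440
```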

Good heuristics improve with experience. As systems gather more data, they can refine these shortcuts, replacing crude assumptions with patterns drawn from actual use. That adaptability is key to why heuristic-based systems remain effective at scale. What starts as a guess becomes a pattern, and eventually, a predictive tool. 
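Continuing the left-turn example, a crude initial guess can be replaced by a running average of measured delays. This is a minimal sketch, with made-up observations, of how a heuristic hardens into a data-backed rule:

```python
# Refining a heuristic with experience: start from a crude guess and
# replace it with a running average of observed delays as data arrives.

class LeftTurnPenalty:
    def __init__(self, initial_guess_s=30.0):
        self.estimate_s = initial_guess_s  # crude starting assumption
        self.samples = 0

    def observe(self, measured_delay_s):
        # Incremental mean of the observations; the first real
        # measurement replaces the initial guess entirely.
        self.samples += 1
        self.estimate_s += (measured_delay_s - self.estimate_s) / self.samples

penalty = LeftTurnPenalty()
for delay in [42.0, 35.0, 51.0, 38.0]:  # logged delays (made up)
    penalty.observe(delay)
print(round(penalty.estimate_s, 1))  # 41.5
```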

How Heuristics Differ from Machine Learning

Heuristics are distinct from machine learning. Heuristics are usually hand-crafted rules; machine learning involves models that adjust themselves automatically from data. A heuristic might say, “Avoid left turns.” A machine learning model might conclude, “In neighborhoods with high evening traffic, avoid turns with a delay over 15 seconds,” having learned that pattern from thousands of trips. The two approaches often coexist.
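To illustrate the contrast, the sketch below expresses the same decision both ways. Both rules and the tiny "training log" are invented; a real system would learn from far more data than four observations:

```python
# The same decision expressed two ways.

# Hand-crafted heuristic: one fixed rule, written by an engineer.
def heuristic_avoid(turn):
    return turn["direction"] == "left"

# "Learned" rule: pick the delay threshold that best separates
# logged turns labeled slow (True) from those labeled fine (False).
def fit_threshold(logged_turns):
    best_t, best_acc = 0.0, 0.0
    for t in sorted(turn["delay_s"] for turn in logged_turns):
        acc = sum((turn["delay_s"] >= t) == turn["slow"]
                  for turn in logged_turns) / len(logged_turns)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

log = [{"delay_s": 5, "slow": False}, {"delay_s": 9, "slow": False},
       {"delay_s": 18, "slow": True}, {"delay_s": 26, "slow": True}]
threshold = fit_threshold(log)

def learned_avoid(turn):
    return turn["delay_s"] >= threshold

print(threshold)  # 18: avoid turns with a delay of 18 seconds or more
```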

While machine learning thrives on large datasets and adapts to new inputs, it can be computationally intensive and opaque. Heuristics, by contrast, are efficient, transparent, and easier to maintain. When speed and interpretability are priorities, they remain indispensable. In short, heuristics provide speed, while machine learning provides nuance.

Why Heuristics Still Matter

In large-scale systems that must make decisions rapidly and consistently, heuristics provide stability. They simplify complexity without requiring vast amounts of training data. They also make system behavior easier to audit and predict — an important consideration in domains where accountability is essential. Although machine learning offers flexibility, it also introduces risk: models can overfit, behave unpredictably, or pick up on biased patterns in the data. Heuristics offer a controlled fallback.

This explains why many real-world systems combine both approaches. The heuristic ensures the system consistently delivers a fast, reasonable answer. Machine learning layers on subtlety where needed. Together, they create systems that are both practical and adaptable.
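A common shape for that combination is a confidence gate: trust the model when it is sure, fall back to the heuristic otherwise. Everything in this sketch, including the confidence cutoff and the stand-in model, is hypothetical:

```python
# Hybrid decision pattern: trust the learned model when it is confident,
# fall back to the simple heuristic otherwise.

CONFIDENCE_FLOOR = 0.8  # assumed cutoff for trusting the model

def heuristic_route(origin, destination):
    return f"main-roads route {origin}->{destination}"  # always available

def model_route(origin, destination):
    # Placeholder for a learned model; returns a suggestion + confidence.
    return f"side-streets route {origin}->{destination}", 0.62

def choose_route(origin, destination):
    suggestion, confidence = model_route(origin, destination)
    if confidence >= CONFIDENCE_FLOOR:
        return suggestion  # nuance, when the model is sure
    return heuristic_route(origin, destination)  # stable fallback

print(choose_route("A", "B"))  # low confidence -> heuristic route
```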

At University of the People, computer science students are taught to think beyond the code. They study how algorithms navigate trade-offs, balancing accuracy with efficiency, and fairness with scale. Designing a system is not only about producing solutions but about defining the problem itself. 

What counts as “good enough” is ultimately a judgment that blends technical limits with human priorities. Understanding that boundary is essential to building systems that are not only functional but meaningful. For future engineers, the challenge is not to pursue perfection, but to design decisions that work under constraint, endure over time, and reflect thoughtful choices.

This reveals something fundamental about modern computing: technology mirrors the world it serves. Algorithms not only process data, but they also encode values, trade-offs, and assumptions about what matters. The systems we build are never neutral. They are shaped by the priorities we set, and those priorities determine how technology functions at scale.

Dr. Alexander Tuzhilin currently serves as Professor of Information Systems at New York University (NYU) and Chair of the Department of Information, Operations and Management Sciences at NYU's Stern School of Business.