Book reviews for "Gartner, Scott Sigmund" sorted by average review score:

Strategic Assessment in War
Published in Hardcover by Yale Univ Pr (1997)
Author: Scott Sigmund Gartner
Amazon base price: $47.00
Used price: $17.99
Collectible price: $17.99
Buy one from zShops for: $43.57
Average review score:

Good also for Macro Economists
Apart from being a very decent, crisp, and entertaining read, I imagine that the "dominant indicator approach" model could be quite helpful for designing decision models for macroeconomists or other people in the business community (manufacturing, batch production, etc.).

The essence of this model is explained in the 2nd chapter (ca. 50 pages).

An illuminating discussion of strategy change
When do states change their operational plans? Under what conditions do they change their military or political strategies? These are the basic questions that Scott Gartner answers in his recent book, Strategic Assessment in War (1997, Yale University Press). Gartner answers these questions by developing and testing a model of decision making that fits into the bounded rationality tradition of political science.

Rational choice scholars might argue, for instance, that decision makers decide to change direction when their estimates of the expected payoffs from the current strategy are outpaced by those of one or more alternative strategies. While this deceptively simple answer might appear to solve the problem, in practice the process of evaluation and information gathering creates an intractable problem for scholars and policy makers alike. In order to evaluate the success or failure of a single policy, according to the rational choice approach, decision makers would have to search over the realm of many possible alternatives and be able to evaluate the costs and benefits of both the chosen strategy and the alternatives, both at the current moment and for the foreseeable future. Scholars working in the organizational behavior tradition have argued instead that, in the absence of civilian intervention, militaries tend simply to choose the strategies that serve their organizational interests best - typically massive offensive strategies. Gartner recognizes that neither of these approaches is particularly well suited as a general model of state behavior, especially during times of crisis and war. For example, rather than being short of the information needed to evaluate the success or failure of a particular strategy, policy makers are typically deluged with it. In fact, they are awash in so much information that they might as well close up shop for any purpose other than information evaluation.

In Strategic Assessment, Gartner sweeps this approach aside and opts for a far simpler one - one that appears to match reality quite closely, at least in the cases he presents. Gartner argues that, rather than attempting exhaustive information searches, decision makers and their advisors focus on a single indicator (or a small set of indicators) of success or failure - what Gartner terms the dominant indicator. By focusing on one indicator or a small number of them, decision makers dramatically reduce the danger of information overload. In doing so, they solve one part of their dilemma, but not the other: even with an indicator of success or failure in hand, under what conditions do decision makers choose to change strategies? They might, for example, change strategy in the face of the first piece of bad news. Alternatively, they might change when the indicator has not moved for some long period. Gartner argues that decision makers attend neither to the actual value of the indicator (whether it is high or low) nor to its rate of change (whether things are getting better or worse). Rather, he maintains - and provides convincing evidence - that leaders pay attention to the acceleration of the indicator, the change in its rate of change. Policy makers will change strategy not when things are bad, nor when they are simply getting worse, but when they are getting increasingly worse. Why is this?
The answer is that people are remarkably insensitive to events occurring at constant velocity - whether the events are information cascades or physical motion. A simple analogy of a person riding in a car makes his logic clear. A passenger in a car with their eyes closed cannot tell whether the car is at rest or moving at a constant speed, either forward or backward. While flying in an airplane at 570 mph, do you feel any different sensation of movement than someone riding along at 20 mph in a car? Of course not. What people are acutely sensitive to are changes in velocity, or acceleration. Just as passengers in an elevator feel in the pit of their stomachs the approaching stop at the next floor, Gartner argues that decision makers change their strategies when things are bad and getting increasingly worse - when, for example, the rate of shipping losses increases dramatically. Rather than acting on the absolute number of tons lost, or the rate at which they are being lost, it is the change in the rate of loss to which leaders are terribly sensitive.

Gartner tests this intuitive and simple model on several cases using quantitative data from World War I, World War II, the Vietnam War, and Carter's abortive hostage-rescue mission in Iran. He finds strong evidence that leaders tend to focus on very few indicators when evaluating success or failure, and they tend to stick with them through both success and failure. Each organization seems to pick one and stay with it - be it tonnage of shipping lost, number of U-boats sunk, enemy KIA, or popular support polling data. Indicators in hand, using both graphic and tabular analysis, Gartner shows us that the acceleration of change in an organization's indicator appears to be strongly correlated with subsequent change in strategy.

Having presented evidence supporting his argument, Gartner then moves on to a brief investigation of some important ancillary questions. What happens if competing organizations use different indicators? Using the case of the conflict between the US Congress and the US military during the Vietnam War, Gartner demonstrates that under certain conditions organizational gridlock can result. In 1968, at the time of the Tet Offensive, the US Congress was focusing on US battle deaths as its indicator of the success or failure of the Johnson administration's Vietnam policies. The US military focused on Viet Cong and NVA deaths. Gridlock occurred, according to Gartner, because changes in the two indicators were positively correlated. The US Congress viewed the Tet Offensive as one of the greatest military disasters in American history. The US military saw Tet as the great opportunity to destroy the Viet Cong and break the back of the NVA - the opportunity it had been waiting years for. Just as Congress and the media began seriously to call for an end to US involvement in the war, Westmoreland and the US military asked to increase their presence in Vietnam. How could both organizations draw such different conclusions from the same event? The Vietnamese offensive led to a dramatic surge in US battle deaths; this acceleration in deaths led Congress to update its beliefs about the apparent (to it) failure of the US war effort. In executing their plan, however, the North Vietnamese and the Viet Cong were forced out into the open, and in February and March of 1968 the US military managed to inflict some of the heaviest losses of the war on the North.
This dramatic upsurge in North Vietnamese casualties and captured weapons led the US military to believe that the end of the war was at hand.

What should we make of Gartner's theory and evidence? His book presents one of the strongest arguments to date that long-standing assumptions about military organizations and their preferences for offensive strategies are simply wrong. Gartner provides compelling data to show that military organizations prefer not offense per se, but rather the strategies that serve their interests best, with success or failure judged by the acceleration of their dominant indicator. This is a powerful corrective to much of the literature on military strategy and organizational preferences.

Two important questions are left unanswered. First, when do decision makers update which indicator is the most appropriate? Gartner is unclear as to the timing of indicator change. While he points out that the US military changed its indicator from territorial advance to enemy KIA during the Korean War, he fails to present a compelling account of the mechanism that drives this type of change. Should the change process be characterized as stochastic change or as organizational learning? Second, how do organizations choose their dominant indicators in the first place? Just as rational choice scholars tend to treat preferences as fixed and exogenous, Gartner treats the formation of dominant indicators as something of a mystery. These two problems aside, Gartner has produced a book of some importance. Scholars and students of international relations, military strategy, and organizational behavior will all find much of interest in Strategic Assessment in War.
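
To make the dominant-indicator logic concrete, here is a minimal sketch in Python of how an acceleration-based trigger of this kind could be computed. The function, the loss figures, and the threshold are hypothetical illustrations rather than anything specified in Gartner's book; they simply encode the distinction drawn above between an indicator's level, its rate of change, and the change in that rate.

# A minimal, hypothetical sketch of the acceleration trigger described above.
# The series values and the threshold are invented for illustration; Gartner's
# analysis uses historical data and does not prescribe this exact computation.

def change_signals(indicator, threshold):
    """Return the time indices at which a strategy change would be signalled.

    indicator: periodic values of the dominant indicator, where higher values
               mean things are going worse (e.g., tons of shipping lost).
    threshold: the minimum jump in the rate of change (the acceleration) that
               counts as "getting increasingly worse".
    """
    signals = []
    for t in range(2, len(indicator)):
        velocity_now = indicator[t] - indicator[t - 1]        # rate of change
        velocity_before = indicator[t - 1] - indicator[t - 2]
        acceleration = velocity_now - velocity_before         # change in that rate
        # Neither a high level nor a steady rate of loss triggers a change;
        # only a sharp positive acceleration does.
        if acceleration > threshold:
            signals.append(t)
    return signals

# Hypothetical monthly losses: consistently bad, then worsening at an
# increasing rate. Only the final jump produces a signal.
losses = [100, 110, 120, 130, 145, 180]
print(change_signals(losses, threshold=10))   # -> [5]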



Reviews are from readers at Amazon.com.