Counting Storm Shelters Along the Highway

Not long ago, during spring break, I went to Huntsville, Alabama with family on a quick getaway.  We’d had a harsh three weeks of winter conditions in an area that is unaccustomed to them, and which is certainly unaccustomed to four winter storms in about twenty days.  We were also under assorted conditions of stress from work, school, and preparation for the future.  We needed the trip, short though it would have to be.

No one in the car really wanted to drive on I-20/I-59.  That stretch of interstate between Tuscaloosa and Birmingham is incredibly nerve-wracking and dangerous, and it is not something I’d recommend to any driver who doesn’t particularly savor the thrill of bumper-to-bumper traffic at 75 mph across three to four lanes, with people passing only feet away.  We took the state highways instead.  Some of the drive took place on Highway 43, a winding but generally pleasant stretch of road that passes through such Alabama towns as Hamilton and Hackleburg.

It was as we approached Hackleburg that we saw the first unusual scene:  a swath of trees snapped, long denuded of leaves, bent down flush with the ground at assorted angles.  The realization hit all of us at once.

The track of the EF-5 “Hackleburg tornado” (really an extremely long-tracked tornado that began in far western Alabama and continued a little into Tennessee) paralleled Highway 43 early in its lifespan, and EF-5 damage was observed along this part of the path.  The destruction we saw was not the organized logging of a timber company.  They weren’t even the type of trees likely to be felled for commercial purposes.

It has been almost four years, and yet these downed trees still remain, a stark reminder of the violence of April 27, 2011.

We continued our drive.  As we moved into Hackleburg, we saw the first one along the road.  A heavy, inset, but otherwise unassuming rectangular door that opened into what appeared to be a small room built into a slope on the property.

“Storm cellar,” somebody remarked matter-of-factly, with no alteration of tone or pitch.  It might have been me.  I don’t recall who noticed the first one.  We were all pointing them out before the end of it.

The “Hackleburg tornado” took 72 lives in rural and small-town Alabama, scouring 132 miles with its fury.  The official survey claims that the winds were at least 210 mph.  I am convinced that they were much higher than that in places.

We talked about things we remembered about this tornado.  I was pretty sure that it had created a “situation” at the nuclear power plant in northern Alabama.  It had.

“Storm cellar.  Looks like the same kind as before.”  And indeed the second one we noticed did look like the same design as the first one.

Although they are generally acceptable for shelter in most tornadoes, I am firmly of the opinion that basements and above-ground safe rooms (even reinforced) are insufficient to guarantee safety in EF-5 tornadoes.  There have been basement fatalities before.  I particularly recall that they happened in the 2008 Parkersburg, IA EF-5 tornado.  Think about it:  If a basement is even partially above surface level, or the flooring above it is not specially reinforced, then just what exactly is to prevent an EF-5 tornado (capable of leveling a house down to a bare slab foundation) from exposing a basement and then descending into it?  If a safe room is above-ground, what is to prevent an EF-5 tornado (capable of hurling heavy metal tanks up to a mile, as happened in the 2013 Moore, OK tornado) from tossing a massive object right at it and crushing it?

“There’s another storm cellar.”  Again the same type.  Most likely the same contractor installed them.

I have some misgivings about the notion of building earthen walls around prefabricated storm shelters, particularly those plastic rooms that Southeasterners frequently saw advertised after the 2011 tornadoes.  The tornado that I drove away from that day, the EF-5 Kemper/Neshoba tornado, dug trenches into the ground 2 feet deep.  The storm shelters that we saw along Highway 43, however, appeared to have been built into natural embankments.  I hope they are sufficiently deep into the ground that they could not likely be exposed by another monster of that sort.

“Another storm cellar.”

We were all pleased to see the residents of Hackleburg being prepared.  It must have been an unimaginably traumatic event.  I didn’t even lose anything, and yet the mere fear/expectation that I was going to lose everything left me with a mild case of PTSD-like symptoms every time the anniversary approached.  It is even possible that these were all people who lost their homes in the 2011 tornado; to be sure of that, I would have to make the drive again with someone placing markers on a GPS-enabled map whenever we passed a house with a shelter.  They were certainly close to the EF-5 damage path, if not directly in it.

These appeared to be homes of middle-class residents.  It should be easier for everyone to install a—

“Storm cellar.”

By then we were simply saying the words.  It was almost like another road game, such as counting cars of a particular color.  As we passed through Hackleburg proper, we couldn’t help but observe how much construction appeared to be quite new.  Even the road had a new stretch of pavement, identifiable because of its smoother surface and different, darker color from the surrounding road.  The tornado did tear up the asphalt as well.  Intense tornadoes often do that.

“Storm cellar.”

If this stretch of road is representative of the community, that says something very positive about the residents of this area.  I don’t support unfunded mandates to require private homeowners to have basements or storm shelters, because I think people should have the right to face private, personal risk as they see fit (after all, I did precisely that by choosing to hit the road to evade an EF-5 tornado, against the recommendation of the National Weather Service), but I am very glad when people do take the initiative to protect themselves and their families in this manner.  I am in favor of the “carrot” of permanent tax credits for any expenditure of this nature.

There were six homes with the same kind of earthen, in-ground storm cellar just along Highway 43 between Hamilton and Hackleburg.  I’ve never seen that many in such a small area before.  It might not even be noticed by most people, especially people who did not know that an exceptionally violent tornado had occurred in this place four years earlier.  But those of us who did have that bit of knowledge, and who still look at things outside the vehicle instead of some sort of onboard entertainment, noticed this series of doors opening to rooms in the ground.  It was a subtle indicator of something different about this area.

Trauma changes people.  What we saw that day, March 11, 2015, was proof positive that it changes communities too.

For the Record

The El Reno tornado (2013) was, in the official records, downgraded from EF-5 to EF-3 on the basis that EF-5 damage was not found and that “the Enhanced Fujita scale is a damage scale” (the phrasing is my paraphrase).  Let me go on record right now as saying that I oppose this and every other instance in which scientifically collected, calibrated wind speed data are ignored.  I oppose the practice of rating tornadoes based strictly on the factors that civil engineers deem important while throwing out data collected by meteorologists, for several reasons.

  1. Estimates of wind speed that are derived ex post facto from damage are inherently less reliable than objective, instrumentally collected measurements.  This should not even be controversial.  Differences in materials, building practices (which can be very hard to determine in the event of total obliteration), and even environmental factors (e.g., temperature and humidity) prior to a tornado can affect at what wind speed the structure fails.  Surveys attempt to find out about such things, but it’s inherently impossible to cover all bases.  Measurements are always more reliable than estimates, even educated ones.
  2. The Enhanced Fujita scale was designed to be expanded.  In practice, vehicular and ground damage are now included as damage indicators in surveys, even though the official EF scale documents don’t (to my knowledge) list them.  There was also the intention, when the scale was formed, of leaving it open for actual wind data to be used in ratings.
  3. The Enhanced Fujita scale is a wind scale.  It is not just a way of rating the intensity of damage, which need not have anything to do with wind at all.  The EF scale is not used for rating damage caused by floods, hailstorms, or earthquakes; it is used for tornadoes, which are wind events.  Tornado surveys do not merely say that a tornado “has produced EF-3 damage.”  They also assign an estimated numerical wind speed to the storm.  This is apparently a subtle point for those who insist that the EF scale is a “damage scale,” but I really don’t think it’s all that hard to understand once you think about it.  Saying that the EF scale is a damage scale is like saying that, traditionally, the Celsius scale was a mercury expansion scale, not a temperature scale, because mercury thermometers were used to determine temperature.  That would obviously be ridiculous.  The EF scale is a wind scale.  Primarily it uses damage for the determination of wind speeds, but only because measurement data are not usually gathered.  That unfortunate circumstance is no reason to throw out valid data when they are available.
  4. Portable Doppler wind measurements can, in fact, be extrapolated to the surface in tornadoes.  The wind speeds near ground level (i.e., damage level) in a tornado are likely to match or even exceed those found at the heights sampled by portable Doppler radar (Wurman et al., Bulletin of the AMS, June 2013).  Though the researchers cited didn’t measure a tornado with winds this high, the research implies that, yes, 300 mph winds could occur at the surface if they were measured aloft by portable Doppler.  The cited research is another reason why I have gotten off the fence and decided that winds of 320 mph or higher could also theoretically occur at the surface in subvortices of the most violent tornadoes, such as, perhaps, the Hackleburg, AL tornado of 4/27/2011.

It is becoming increasingly clear to meteorologists that, although the categories of the EF scale are probably accurate as regards the intensity of wind required to damage structures in specific ways, the scale is grossly inadequate for measuring the highest possible winds that a tornado could produce.  There is little question that the most powerful EF-5 tornadoes can generate winds well in excess of 200-210 mph at the surface, especially if they are multivortex.  Surface winds of 300 mph in subvortices are also a near-certainty, and there is quite a difference between 210 and 310 mph.  The former will reduce a well-built house to its foundation but could be survivable; there are accounts of people who sat through Category 5 hurricanes, which could generate wind gusts of that intensity.  The latter will shred the debris into pellets and tear the human body to pieces (see, for example, the Jarrell, TX tornado of 1997, but get your Pepto and smelling salts if you read detailed accounts of that).  The former could be ridden out in an above-ground shelter (the kind of shelter, incidentally, that some non-meteorologists involved in the creation of the EF scale had a financial stake in selling—just saying).  The latter requires an in-ground storm cellar with guard rails to hold.  I regard it as, frankly, grossly irresponsible for the public not to be informed of the true intensity that EF-5 tornadoes can reach or what such incredible winds can do.

I don’t blame the meteorologists at Norman for what happened.  They wanted to use hard data in the rating of the El Reno tornado, obviously.  There must have been pressure exerted from some other source.  I do hope, however, that weather scientists are soon able to force an official change in the procedure of rating tornadoes when calibrated, scientifically valid wind data are available.  One way to bring this change about more quickly is to increase funding for university meteorology departments so that they can send out chase teams equipped with portable Doppler.  Disregarding one or two sets of data, all from one small region, can apparently be done by the “powers that be.”  Disregarding data from all over Tornado Alley might not be doable.

Thoughts on Instrumental Measurements in Tornado Ratings

It’s been a while since I blogged anything.  I’ve decided that I do not really want to be a forecaster, but instead, a research meteorologist, and the war for funding is so intense that I’d much rather publish research in a scientific journal than on my blog.  However, this post is not research; it is commentary and speculation.  The opinions in it are no one’s but my own.

A controversy in meteorology has developed about the use of mobile Doppler wind data to rate tornadoes.  It flared up initially in 2011 when a tornado in El Reno, OK was rated EF5 purportedly because of mobile Doppler measurements.  However, it later came to light that the tornado had produced EF5 damage indicators along its path as well, including the hurling of very heavy oil tankers, the moving of equipment weighing a million pounds, and the intense scouring of dirt.  The controversy has arisen again, though.  At least two tornadoes in May 2013 had their ratings increased (rather significantly, I should add) strictly because of wind measurements.  The May 31 El Reno, OK tornado was increased from EF3 (from damage indicators) to EF5 because of a mobile Doppler measurement of 296 mph at 500 feet above ground level.  The Rozel, KS tornado was increased from EF2 to EF4 because of a wind measurement.

Some people seriously object to the use of instrumental readings in tornado ratings.  “The EF scale is a damage scale!” they say.  And, to an extent, it is.  However, that’s not all that it is.  In surveys, tornadoes are not simply said to have produced damage of a particular category.  Attached to each of the six ratings is a range of wind speeds that were determined, via engineering analysis, to produce such damage.  Surveys include an estimate of the wind speed of the tornado as well, and these wind speed estimates are often very specific.  I have seen surveys of EF4 tornadoes, for example, that distinguish between 170 and 190 mph winds.  Since the EF scale does not simply classify the level of damage produced by the tornado, but also includes numerical wind speeds for the tornado itself, I therefore have to come down on the side of those who use mobile Doppler and other calibrated, accurate forms of measurement to rate tornadoes.
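Since each rating carries a published wind range, the mapping from a measured wind speed back to a rating is mechanical.  Here is a minimal sketch using the standard EF-scale 3-second-gust thresholds—an illustration of the point, not an official survey tool:

```python
# Standard EF-scale wind ranges (3-second gust, mph), highest first.
# EF5 is open-ended above 200 mph.
EF_LOWER_BOUNDS = [
    (5, 201),  # EF5: >200 mph
    (4, 166),  # EF4: 166-200 mph
    (3, 136),  # EF3: 136-165 mph
    (2, 111),  # EF2: 111-135 mph
    (1, 86),   # EF1: 86-110 mph
    (0, 65),   # EF0: 65-85 mph
]

def rating_from_wind(mph):
    """Return the EF rating implied by a measured or estimated wind speed."""
    for rating, lower in EF_LOWER_BOUNDS:
        if mph >= lower:
            return rating
    return None  # below the EF0 threshold

# The 296 mph mobile-Doppler measurement in the 2013 El Reno tornado
# (taken aloft, not at the surface) falls squarely in EF5 territory:
print(rating_from_wind(296))  # 5
```

The point is simply that once you accept the scale’s own wind ranges, a calibrated measurement picks out a rating with no further interpretation needed.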

However, there is a caveat.  I’m concerned that the use of mobile Doppler in areas like the Oklahoma City metro could result in a skewed picture of the distribution of EF4 and EF5 tornadoes.  Violent tornadoes also occur in areas that don’t happen to house the Storm Prediction Center, the University of Oklahoma meteorology department, the Norman, OK National Weather Service office, and the National Severe Storms Laboratory; if measurements of those tornadoes are never taken for lack of resources, they can be misrated.  The May 31 El Reno tornado was initially rated an EF3 from damage.  One cannot help but wonder how many tornadoes outside this Mecca of meteorology are misrated because there is no massive pool of storm chasers with state-of-the-art instruments nearby.  Nevertheless, the proper course of action to correct for this is to fund more tornado research and wind-measuring equipment, not to sacrifice scientific accuracy on those occasions when we can obtain it.

The 296-mph winds in the El Reno tornado (at 500 ft.) were detected in a mesovortex.  This fact would also explain why, perhaps, some tornadoes are underrated; such small vortices might not strike anything if the path of the tornado is primarily unpopulated.  The outer funnel of the El Reno tornado had winds in the EF4 range, though again, at 500 feet.  Winds at the surface in the outer funnel may in fact have only been in the EF3 range, as the damage indicated.  However, this brings up several interesting points.

First, some meteorologists objected to the EF scale because they knew that the winds in EF5 tornadoes could reach speeds much faster than 200-210 mph, the range given in every damage survey for an EF5 tornado until the Joplin tornado.  They knew it from hard observations, including the mobile Doppler measurement of 300 ± 20 mph in the Bridge Creek tornado of 1999 and a measurement of 284 mph in the Red Rock tornado.  Now it seems that this was not just a pair of flukes; such extreme wind speeds may occur much more frequently in multivortex tornadoes than previously imagined, and not just those officially rated EF5.  The Red Rock tornado was rated F4 rather than F5 because wind measurements did not count in the old Fujita scale, and the 2013 El Reno tornado apparently didn’t produce demonstrable EF5 damage.

Second, I would bet that the usage of the EF scale, however accurate it is for below-EF5 winds, has resulted in some extremely inaccurate official wind estimates for EF5 tornadoes in surveys.  210 mph for the Smithville, MS and Hackleburg, AL tornadoes?  I do not believe that for a minute.  Now, I know that it is apparently not possible to distinguish between 200 and 250 mph on residential home damage alone, but if that much uncertainty exists, and if we know that tornadoes do indeed produce 250 mph winds at times, then I think damage surveys should not attempt to estimate a precise wind speed for an EF5 tornado from damage.  To do so implies a level of accuracy and surety that does not actually exist.

Finally, it is worth noting again that the 2013 El Reno tornado did not, apparently, produce demonstrable EF5 wind speeds in its outer funnel or its damage path, but a mesovortex inside the tornado nevertheless reached 296 mph.  This raises some serious questions about just how strong those mesovortices can become.  Now, many damage surveys for EF5 tornadoes note that the swath of EF5 damage was very small, a fact that indicates a mesovortex as the probable culprit.  One can be particularly confident in this if video exists of multiple vortices and the tornado’s path crossed over a developed area, as was tragically the case for the 2013 Moore, OK tornado.  However, what does that suggest for tornadoes that do produce wide swaths of EF5 damage along their paths, swaths too large to have been created by transient mesovortices and that were probably generated by the main funnel itself?  If the El Reno tornado generated an inner vortex spinning 110 mph faster than its main funnel, then I would be inclined to say that some multivortex (E)F5s that were rated on damage may in fact have generated “F6”-range winds (319+ mph) in their inner vortices.  (I say this with some trepidation, because there are few things more controversial and inflammatory in severe storms meteorology than the use of the term “F6.”)  I’m looking at the Hackleburg-Phil Campbell tornado and the Kemper-Philadelphia tornado (both of the April 27, 2011 super outbreak) in particular for this.  The former tornado had an uncommonly large path of EF5 damage, indicating that the main funnel may have reached EF5 levels; the latter had a small region in which the dirt was dug out of the ground to a depth of 2 feet, indicating the possibility of an inner vortex of truly incredible intensity.

I’ve personally been on the fence for a long time about whether such winds can occur on Earth–but this information about the El Reno tornado is edging me off that fence.  I doubt it could happen very often, of course.  I’m not suggesting that every EF4 or EF5 tornado is harboring an inner funnel with 330 mph winds at the surface.  This most assuredly is not the case.  Most EF5s earn their ratings not because of an EF5 damage swath attributable to the outer funnel, but because they do tend to be multivortex, and something had the misfortune of being struck by an inner vortex with EF5 winds.  But do I think 319+ mph winds could occur in a tornado that did have EF5 winds in its outer funnel?  Do I think they may have occurred before?  Honestly, at this point, I’m inclined to give a tentative yes.

Was the Joplin Tornado the Deadliest We Can Expect?

Meteorologists and weather-watchers are bidding the year 2011 a less-than-fond farewell.  While it was certainly a banner year from the point of view of storm chasing—6 EF-5 tornadoes, 17 EF-4s, and many of them highly photogenic, as the dozens of home videos on YouTube illustrate—it was a catastrophe in terms of the human impact.  With 552 fatalities, this year is tied for the second-deadliest tornado year in the U.S.  The death toll is an order of magnitude greater than even most of the “bad years” of the 1975-2010 period.  Two events are primarily responsible for this:  the April 27 Dixie Super Outbreak, which killed over 300 people (breaking the 1974 Ohio Valley Super Outbreak’s grim record by a hair), and the Joplin, MO EF-5 tornado, with approximately 160 fatalities.

With the 2011 Super Outbreak, meteorologists are starting to work out an approximate historical return period for these large-magnitude events.  Before the 1974 event, the last comparable one occurred in 1936, an outbreak popularly known as the Tupelo-Gainesville outbreak for the violent tornadoes that struck Mississippi and Georgia.  It seems that these huge events occur approximately every 35 to 40 years.  Obviously, a comparable event could occur next spring; a return period is a statistical average, not a schedule.  And, given that the 1974 and 2011 Super Outbreaks saw comparable death tolls, I think we can also estimate what the human toll of such an event will unfortunately be as long as the affected communities have unsuitable safety options for EF-4 and EF-5 tornadoes.
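For what it’s worth, a 35- to 40-year return period translates into yearly odds under a standard Poisson assumption.  This is a deliberate simplification—real outbreak statistics are surely messier than a constant-rate process—but it makes the “could happen next spring” point concrete:

```python
import math

def prob_at_least_one(return_period_years, horizon_years):
    """P(at least one event within the horizon), assuming events arrive
    as a Poisson process with the given mean return period."""
    rate = 1.0 / return_period_years  # expected events per year
    return 1.0 - math.exp(-rate * horizon_years)

# With a ~37-year return period, any single year still carries risk,
# and over a 40-year horizon another super outbreak is more likely than not:
print(round(prob_at_least_one(37, 1), 3))   # ~0.027
print(round(prob_at_least_one(37, 40), 3))  # ~0.661
```

So roughly a 3% chance in any given spring—small, but never zero, which is exactly why preparedness can’t wait for the “scheduled” year.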

The Joplin tornado is a different beast.  We do not have a comparable modern event.  Individual tornadoes in 1953 killed over 100 people in Waco, TX and Flint, MI, and that year became something of a catalyst of public outrage, for a third tornado in Worcester, MA killed 94 people.  Public sentiment that year was essentially, “DO something so that this never happens again!”  And for 57 years, no single U.S. tornado killed over 100 people.  Then… it happened again.

Was the Joplin event a worst-case scenario?  Is this the deadliest (give or take) that a single tornado can actually be now?

I think the answer to the first question is a guarded “yes,” at least for the specific case of a tornado striking a city.  The tornado was about as strong as they come; its winds were estimated to be up to 250 mph.  They can get more intense than that, but it doesn’t make a lot of difference in terms of structural damage.  The tornado rapidly intensified precisely as it entered the heavily populated regions of Joplin, and it passed right through residential and commercial shopping areas—the worst areas it could strike.  Examination of the track shows that there was also a pretty large corridor of EF-4 and EF-5 tornado damage, which would be expected for a wedge tornado.  Sometimes the area of violent damage is comparatively small, but this was not the case with this tornado.  Storm cellars were rare in this area, making survival above ground mostly a matter of good luck.  The tornado was also rain-wrapped for much of its existence.  In terms of the storm’s power and the location of impact, you can’t get much worse than this.  However, I should note that it occurred on a Sunday.  Some have argued that if it had happened at the same time of day on a work day, it could have been worse.  We don’t know for sure, and let’s hope we don’t find out.  I tend to think it probably would not have been much worse, given that residential areas (not a likely area for commuters to be stranded) and the shopping district (which probably would get more foot traffic on weekends than work-week afternoons) were such a large part of the damage zone.  In my opinion, the Joplin tornado was essentially a worst-case scenario for a tornado striking an urban area.  A comparable tornado striking an urban area probably would have a comparable human toll.

Unfortunately, the second question—is the death toll of ~160 the highest we could see for a single tornado in the modern era—has a different answer.  There are two ways that a single tornado could kill a lot more people than that.

One is the possibility of a weak, poorly-built or dilapidated high rise building taking a direct hit from a violent tornado and collapsing with a lot of people inside it.  Generally, these buildings are not supposed to collapse even in EF-5 events.  Images of collapsed high rises on hurricane landfall sites are misleading; these buildings mostly had shallow foundations and were undermined by the storm surge.  They were not blown over by wind alone, and storm surge is obviously not a factor for tornadoes.  The St. John’s Hospital building in Joplin took a direct hit from the tornado when it was at EF-4 intensity and it did not collapse.  However, a poorly-constructed or dilapidated one could.  (As an aside, one does have to wonder about the possibility of a tornado tearing up ground several feet deep, as happened in the EF-5 tornado on April 27 in central Mississippi. This could definitely undermine a slab foundation on a house, resulting in the foundation being ripped from the ground—the supposed hypothetical “F6 intensity” signature that one heard bandied about prior to the adoption of the Enhanced Fujita Scale.  However, high-rise buildings have much deeper foundations than residential homes.)

The other possibility is that of a violent tornado striking a crowded spectator event, such as a sports game, a fairground, a speedway, etc.  This possibility has been discussed at length by meteorologists such as Dr. Roger Edwards of the Storm Prediction Center.  It’s almost happened before, in fact; in 2008 an EF-2 tornado in Atlanta, GA struck the Georgia Dome while a basketball game (involving my college team) was going on.  It had gone into overtime, so people were not milling around outside.  Still, there are videos from that event of pieces of the roof collapsing and falling to the floor while the spectators were left to fend for themselves in the stands.  A stronger tornado could very easily have taken that roof off.

So yes, although the Joplin tornado was very likely a worst-case event for a tornado strike on a city, thereby representing an approximate limit on fatalities for that type of disaster, the potential exists for individual tornadoes to kill far more people than that in a different sort of disaster.  Let us hope that we can deal with the infrastructure and the safety considerations of large venues so that these greater disasters do not occur, either in 2012 or years to come.

NWA 2011: Thoughts About Tornado Warnings and the Casualty Count

I attended the National Weather Association’s conference in Birmingham, Alabama, for two days.  Toward the end of the second day, the main focus of the talks was the terrible death count from tornadoes for 2011, and most of the speakers were coming at the problem from the perspective of the social sciences, particularly psychology.  It is understandable that people would want to better understand what happened in an anomalous, outlier year such as 2011.  It is understandable that people would want to find out if the catastrophe was a result of factors that can be easily changed, and that they would even be biased toward that hypothesis.  (One presentation even mentioned the “optimism bias”—a concept that seems a bit strange to me as a natural pessimist, but I can readily see that it would exist in most people, and I would say that this is a perfect example of it.)  My intention here is not to call anyone out.  However, I think that a lot of the research is, frankly, barking up the wrong tree.  There are also some very serious flaws with some of the studies themselves.

The bulk of the research involved surveys of people from the areas that were impacted by tornadoes in 2011.  The surveys contained questions about NOAA watches and warnings (whether people received them, how they received them, whether they were understood) and people’s responses to these messages.

Here are some points I took away from the social science presentations:

  • An overwhelming majority of people in impacted areas did receive warnings.
  • A very small minority of them immediately went to shelter after receiving a warning from the first source.
  • A rather large plurality sought out additional information from TV, the Internet, or personal confirmation to determine if the tornado actually existed and would potentially affect them.  This was more likely in people with higher levels of education and in people who knew more about the weather.  (I would like to note here that this is exactly what I did when the east-central MS EF-5 tornado of April 27 was heading my way.  I did not immediately barricade myself under the stairs when I heard the warning.  I looked at radar to identify a probable debris ball signature and plotted its projected path to go right over my house.  I then grabbed my cat and got out of town.  The tornado lifted, but if it had stayed on the ground, I could have been killed as a result of following the canned advice rather than reasoning out the best course of action for myself!)
  • A minority of people chose to completely ignore a warning.
  • When asked how likely they, personally, thought it was that their area (of what radius?  I don’t recall if it was stated) would be significantly impacted as a result of bad weather mentioned in a warning, the most common answer was less than a 25% chance.  The social scientists said that they wanted people to guess a nearly 100% chance, but in fact, the scientifically and statistically correct answer was less than 5%.  Interestingly, this arguably refutes the “optimism bias” argument in that people did give a more pessimistic judgment of their risk level than was really the case, just not pessimistic enough to suit the social scientists.
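To see why a single-digit percentage is the statistically defensible answer, consider a back-of-envelope calculation: the chance that any fixed point inside a warning polygon takes a direct hit is roughly the chance a tornado occurs at all, times the fraction of the polygon a damage path would actually cover.  The numbers below are invented for illustration, not climatology:

```python
def point_risk(polygon_area_sq_mi, path_length_mi, path_width_mi,
               p_tornado_in_polygon):
    """Rough probability that a fixed point inside a warning polygon
    takes a direct hit: chance a tornado occurs at all, times the
    fraction of the polygon a damage path would cover."""
    path_area = path_length_mi * path_width_mi
    return p_tornado_in_polygon * (path_area / polygon_area_sq_mi)

# Illustrative (made-up) numbers: a 300 sq mi polygon, a 15-mile-long,
# quarter-mile-wide damage path, and a generous 70% chance the warned
# storm produces a tornado at all.
print(point_risk(300, 15, 0.25, 0.7))  # 0.00875, i.e., under 1%
```

Even with deliberately pessimistic inputs, the per-point risk stays well under 5%—which is why the survey respondents’ “less than 25%” guesses were, if anything, overestimates.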

The social scientists seemed to be dismayed by the fact that people were less likely to immediately dive for cover the more educated and weather-savvy they were.  Needless to say, this is an odd message to deliver to a room full of meteorologists (many of whom actively seek out bad weather in their vehicles).  What is the point here?  “Ignorance is strength,” to quote from Orwell?  Never mind that people can’t exactly become less weather-savvy, less educated, or more paranoid about the personal impact of a storm if they already know better.  This is an example of trying to close the barn door after the animals have escaped.  These things are what people do, and with the proliferation of web phones with more and more features that allow people to access information virtually anywhere, these behaviors are only going to become more common.  This means that they are the behaviors that must be worked with and planned for.  Trying to force people into a state of unnecessary and statistically unwarranted fear is not going to work.  Nor is it a good idea to try to bully people into not seeking out information and using cognitive reasoning.  I’m no social scientist, but I can tell you that if this is attempted, the most likely reaction is a rebellious contempt for “the government” for “trying to make us not question, not think for ourselves, and do as we’re told.”  I would be just about willing to guarantee it.  It could backfire badly.  People ultimately have to be responsible for their own decisions.

Furthermore, no evidence whatsoever was given that people who sought out more information first were more likely to be injured or killed in an event, and obviously the survey methodology required interviewing people who did not die; knowledge about what the dead did must come from people who were with them and survived.  I never even saw a distinction made between people who were in the path of the tornado and escaped uninjured or with minor injuries, and those who were severely hurt or killed.  It would have been useful to find out whether the people who were severely harmed did anything differently from those who came through more or less okay.  Given that at least one of the surveys was conducted via e-mail shortly after the event in question (the Tuscaloosa tornado), I would expect that very few severely injured people participated in it at all, because they would have been in the hospital.  In effect, the social scientists gathered statistics about a control group and presented them as though they represented the experimental group.  In this situation, the statistics about behavior patterns following a warning mean nothing in themselves; there is nothing (survival/non-survival, minor/major injury) to correlate them to.  Implying that these behaviors caused the death toll to explode is unsupported speculation.  The one survey I saw that definitely interviewed people who had lost loved ones or who were severely injured was conducted in Smithville, MS, and those authors did not make any wild inferences about how seeking out additional information had led to the deaths.  There is simply no data support for it.  The only situation in which it might make a difference is when the lead time is basically zero and every second counts, which was not the case in the April 27 tornadoes or the Joplin tornado.
(I had a lead time of about 25 minutes, which was enough for me to get my cat and laptop and go 18 miles away.)

There was one data omission that is, from my perspective, more important than any behavioral survey.  One table that I did not see in any of the social science presentations was this one:

F Scale   Killer Tornadoes   Fatalities
F0               1                1
F1               3                4
F2              15               24
F3              23               76
F4              13              160
F5               6              282
F?               0                0
TOTAL           61              547

(Credit to the Storm Prediction Center: http://www.spc.noaa.gov/climo/torn/fataltorn.html)

That is, 95% of all tornado deaths this year occurred in EF-3 or stronger tornadoes, which destroy most or all walls in a house.  EF-5 tornadoes can even expose a basement and descend into it (it is a myth that the funnel will magically stop at ground level if an open hole exists for it to twist into), sucking people to their deaths.  And it gets even more significant when you dig deeper into the data.  A look at the list at the top of that page shows that only 4 of the deaths from EF-2 or weaker tornadoes occurred in permanent houses.  I don’t know exactly what happened in those cases, but it could have been extremely bad luck, such as a tree falling on the house or a piece of heavy furniture or timber causing a fatal injury; it could have been a weak structure.  The point is, this is very rare.  The rest of the deaths in EF-2 and weaker tornadoes were in trailers, vehicles, or outdoors (all highly dangerous places to be in a tornado), or were unknown.
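For anyone who wants to check my arithmetic, the 95% figure follows directly from the fatality column of the SPC table above; a quick sketch (the counts are copied from that table, nothing else is assumed):

```python
# 2011 tornado fatalities by (E)F rating, from the SPC table above
fatalities = {"F0": 1, "F1": 4, "F2": 24, "F3": 76, "F4": 160, "F5": 282}

total = sum(fatalities.values())                          # 547 deaths in all
violent = sum(fatalities[s] for s in ("F3", "F4", "F5"))  # EF-3 and stronger

share = violent / total
print(f"{violent} of {total} deaths ({share:.1%}) occurred in EF-3+ tornadoes")
# 518 of 547 deaths (94.7%) occurred in EF-3+ tornadoes
```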

I respect the research into this year’s terrible tornado casualty count.  It is important to determine exactly why it occurred, and one question that did need answering was whether it happened because of bad decisions.  That is the question the social scientists have attempted to answer.  I simply disagree very strongly with their apparent conclusions; I think they are unwarranted given the questionable research methodologies, and are little more than speculation.  My contention is that the catastrophic death toll is directly attributable to major, violent tornadoes, the kind that obliterate entire homes, happening to strike a lot of populated areas this year.  In short, it was a statistical outlier year.  That explanation does not address the underlying structural problem of the Southeast, which is that effective storm shelter is not commonly available for the most violent events, but that is not an easy problem to resolve.  Unfortunately, in my opinion, it is this hard problem, rather than comparatively easy ones about bad decisions, that must be solved if this type of death toll is never to happen again.

Emily Organizes; Gulf Threat Decreases

Tropical Storm Emily struggled through most of last night and today with disorganization, an aftereffect of its multivortex structure as an unnamed disturbance.  However, it has become better stacked today, with convection blowing up over its center.  It still has a long way to go, despite its more pleasing appearance in satellite photos.

Steering in the short term is straightforward.  Emily has been generally on the left side of the forecast track for most of the day, and it is now expected to make landfall in the Dominican Republic as a tropical storm.  Weaker systems generally weather the mountains better than stronger ones, provided that they do not linger in the area; therefore dissipation of the system seems comparatively unlikely.


Models conclude that trough will miss Emily

The models have largely converged on a scenario in which the trough that is to weaken the Bermuda High will be gone before it can force the full recurvature of Emily.  The ridge is expected to build back in, and the GFS shows the hurricane being trapped off the coast of Florida, unable to move ashore because of another ridge, stalling until a shortwave trough lifts it away.  The GFDL and HWRF models, which take their input from the GFS, both show a very close approach to the east coast of Florida, with the HWRF showing near-hurricane-force winds onshore.  The NOGAPS shows this same scenario without the stall.  In this scenario, a landfall on the Outer Banks of North Carolina occurs, followed by a pull up the Atlantic seaboard (offshore) and out to sea.

The Canadian model shows a very weak system, probably no more than a mild tropical storm, making landfall on the east coast of Florida and then being merged into the shortwave.  I should observe that the Canadian model now shows the first low strengthening to 988 mb at sea and reducing the Bermuda High to its winter stage (the Azores High), which does not seem remotely reasonable to me for an August system.  I am not putting a lot of faith in this aspect of the Canadian solution.

The European model also shows a “screwy” solution, amplifying the shortwave trough to 992 mb at sea while completely dissipating Emily over Hispaniola.  Dissipation of the tropical storm is certainly possible, as yesterday’s blog entry said, though unlikely in my opinion; what I have great difficulty believing is that either of the baroclinic low pressure systems involved will reach 990 mb levels.  The first trough is currently located over New England, producing a severe weather risk, and it is analyzed at 1002 mb.

Bottom line, I am giving a highly skeptical eye to anything that destroys the Bermuda High at the beginning of August and amplifies low pressure cores to winter levels, especially when they have not been doing this consistently.  Any land-free recurvature of Emily depends entirely on such “bombs,” and the approach to the East Coast will be so close that a weaker trough or shortwave will make all the difference in the world in what wind speeds are felt onshore and whether landfall occurs.


Gulf Coast threat decreases… for now

As should be apparent, the threat to the Gulf Coast states has decreased over the course of the day (with a caveat).  The current thinking is that the trough will lift Emily northward enough to miss an entrance into the Gulf of Mexico.  However, this could change if the storm stays south and west enough, or the trough is weaker than expected at sea.

Emily Is a Threat To the U.S.

After days of teasing weather watchers (and the National Hurricane Center), a tropical wave in the Atlantic has formed into Tropical Storm Emily.  The storm is rather disorganized and not at all “attractive” in the tropical cyclone sense, an artifact of its having had multiple competing vortices for several days that prevented its consolidation into a single system.

Because of its delay in getting organized, Emily is a threat to the United States.  I am going to blog regularly about this system as long as that remains the case.


What’s the synoptic setup?

Emily is going to move mostly west, slightly WNW, along the southern end of the Bermuda High.  Its strength will depend primarily on possible land interaction during this time.  The National Hurricane Center forecasts an impact on the island of Hispaniola, which would weaken the system.  How much remains to be seen; many a major hurricane has been reduced to a tropical storm by this island, but some systems that are much weaker have survived passage.  It is terribly difficult to forecast how much effect the mountains will have on any particular storm.  A lot depends on how well-organized the system is when it reaches the island (I do not mean its intensity; intensity and cyclonic organization are not the same thing), how long it stays there, and whether there are any additional destructive factors such as dry air intrusion and wind shear that are hitting the storm at the same time.  It is arguable that there’s not a lot of point in making a forecast for Emily after its interaction with Hispaniola at all, because what happens to it after that will be heavily influenced by its strength at that point.  I’ll discuss the various possibilities, however.

The high is going to be weakened on its western flank by a trough coming off the Atlantic coast in a couple of days.  This should pull the storm to the north; how much depends on how weak Emily is and how strong the trough has managed to become.  The stronger Emily is, the farther north it should move, all else being equal with respect to the trough.  I think the trough will be the most important player here, though, and it should be watched at least as closely as the tropical system.  It is very uncommon to have a strong trough coming off the East Coast at the beginning of August, and there has been a pattern this year of the GFS (the U.S. Global Forecast System model) overdoing the strength of lows in the days before they arrive.  I am not inclined to buy into a strong trough unless I see it materialize, but it’s always best not to count anything out, either.

Emily is probably too far south and west to have a land-free recurvature (“fish storm”) path.  It’s not impossible, but it is unlikely.  It simply took too long to develop for that to be the most likely track.


What’s the model spread?

The models have generally clustered around the state of Florida as of Monday evening, with the NOGAPS (U.S. Navy model) farthest west and the GFDL farthest east.  The NOGAPS shows an implied strike on the Florida Panhandle (it has Emily stalling in the Gulf and not making final landfall within a 7-day period), while the GFDL shows a “fish storm” recurvature.  It is important to note that this trend for the GFDL is relatively new; until the past 24 hours or so, that model was showing a strike on the east coast of Florida while the HWRF was showing a recurvature.  Now that has reversed itself.  In the meantime, the NOGAPS has been consistent in its Gulf track.  Consistency alone is not a reason to trust a model’s output, but it generally indicates that a model has a better grip on the environment than one prone to the “windshield wiper effect.”


Is the Gulf Coast at risk?

Short answer:  yes, but it’s not set in stone.  As of now, I still would say that the Florida peninsula is most likely to get hit, but the Gulf is a definite possibility, especially if Emily is weakened by interaction with land and/or the trough is weak.

Several forecasts indicate that the storm will remain weak for long enough to stay south and get into the Gulf of Mexico before making the recurvature.  This is not a fluke, or a one-off from some model; it has been a solution for the NOGAPS, Canadian, and UK several times over the past two days.  Furthermore, models such as the HWRF have been hinting at a Florida East Coast strike at a perpendicular angle, indicating a strengthening ridge that would force Emily westward again.  While these models do not go out far enough yet to indicate what would become of Emily after the Florida strike, entry into the Gulf (in a weakened state) is certainly possible in this scenario.


90L: Weak and Into the Gulf

The area of interest in the Atlantic, 90L, has become more likely to enter the Gulf of Mexico.  After a time yesterday when it was trying to spin up, the system has stayed weak and is now beginning to encounter land.  This land interaction will keep 90L weak as it passes through the Caribbean, making it even more likely to avoid the weakness in the Bermuda High that will be created by a trough.  90L currently has an area of moderate 700 mb to 850 mb vorticity associated with its convection.  This area of vorticity is what currently passes for a circulation.

It is important to note that, even though the system is currently less organized than it was yesterday and the National Hurricane Center has lowered its probability of the system becoming a tropical cyclone in the next 48 hours (a change I completely agree with), 90L has gained additional model support for its long-term development prospects.  The cyclone-specific HWRF model was on board with 90L yesterday, taking it just south of Cuba and bringing it to 60 mph as it passed.  Today the HWRF keeps the system even farther south, intensifies 90L to a Category 1 hurricane, and sends it into the Yucatan.  Additionally, the GFDL cyclone model, which was doing nothing at all with 90L yesterday, today shows a Category 2 hurricane striking the Yucatan.  I think that is overdoing it, personally, but this system is showing indications of entering the Gulf of Mexico and intensifying there.

In recent hours, it has become possible that 90L is experiencing a center reformation.  The center has been located in the part of the system that is now south of Puerto Rico.  However, increased convection just south of Hispaniola (Fig. 1) is changing the polarity of the system, as is evident in upper-level divergence charts (Fig. 2).  This convection is likely associated with the mountains and therefore does not indicate improvement in the tropical structure of 90L.  However, if the center reforms to the northwest, this will throw a great deal of uncertainty into even the survival of 90L, as it will come much closer to the destructive mountains of Hispaniola and Cuba.  If the reformation does not occur, we are looking at a track like that of the GFDL and HWRF.  For my part, I am finding it hard to get on board with a center reformation over a more destructive environment that will make it hard for existing centers to stay together, let alone new ones to form, but time will tell.

One more important point to note for the GFDL model run is the strong ridge that would, in that scenario, serve to block 90L from moving north after it enters the Gulf.  The blocking ridge does not extend that far west in the HWRF run, making a Central Gulf landfall possible.

In summary:  90L is in a state of transition at present, and the outcome of a number of possibilities will determine its fate.  If the center reforms to the northwest, the GFDL and HWRF tracks should not be considered because they assume the present center.  The result of a reformation would be more land interaction, which means a weak system, delays in development, and the possibility of complete dissipation.  If the center does not reform, the GFDL and HWRF scenarios are in play, opening the doors for a significantly stronger system (and it should be noted that those models only go out to 126 hours, and have the system as an organized hurricane or near-hurricane in the middle of 90°F waters and low shear).  The ultimate landfalling location of 90L will then depend on the strength and extent of the ridge.


Figure 1: Rainbow-enhanced infrared image of 90L, Saturday evening.


Figure 2: Upper-level divergence over 90L.

A Tropical System For the Gulf To Watch

A tropical wave, designated 90L by the National Hurricane Center, is worthy of being watched by the Gulf Coast states. This system is arguably the first tropical system of real interest to the Gulf states in the U.S., as Tropical Storm Arlene was regarded as a Mexican storm (correctly so) almost from its inception, and Tropical Storms Bret and Cindy were never a threat to any land areas.  However, 90L is in a situation that will strongly favor its reaching the Gulf of Mexico, where conditions are favorable for development.

The system has been steadily increasing its convection over the course of the day, and with this increase has come an improvement in its cyclonic structure.  Cyclonic curvature is evident on satellite (Fig. 1), and upper-level divergence (Fig. 2) indicates good ventilation for the system.  Lower-level convergence (not shown) is not so impressive, indicating that the system needs to form a strong low-level circulation to be considered a tropical cyclone.  This is usually the last step that developing tropical cyclones take.

90L is in a simple steering regime, being located south of the Bermuda High.  In about 3 days, a trough associated with a cyclone is expected to be located off the East Coast of the U.S., eroding the high somewhat.  It was previously assumed that this temporary weakening of the ridge would result in 90L being drawn north for a recurvature.  However, recently, it has become likely that the trough will be weaker than previously believed.  90L is also expected to take longer to develop owing to shear and likely land interaction.  The net result will be a stronger ridge and a weaker tropical system, and the consensus is that 90L will be forced into the Gulf of Mexico (Fig. 3).

90L will have to pass through an area of 20-knot wind shear (Fig. 3, Fig. 4), which is moderate, but will inhibit strengthening for as long as the system is located under that wind regime.  The GFS model does not indicate a sharp spike in wind shear over the course of 90L’s trek toward the Gulf of Mexico.

Unless the expected path drastically changes, 90L should enter the Gulf in about four or five days.  Models are unreliable for storms like this in the long range, and it should be noted that some of the models, like the GFS, are not particularly impressed with this system in the first place.  However, the cyclone-specific HWRF model does develop 90L into a 60 mph tropical storm, keeping it south of Cuba through the end of its run (126 hours out).  For my part, I am disinclined to accept a forecast of zero land interaction at this point.  The salient point, however, is that any interaction with Cuba or Hispaniola will have a profoundly negative effect on 90L’s short-term intensity, even if it becomes a tropical storm before reaching those areas; avoiding those landmasses would yield a stronger cyclone that has not been delayed by reorganizing after a disruption.

My gut forecast for a week or more out (in other words, break out the salt!) is that this system will become a tropical cyclone of moderate intensity (I’ll say Category 1, max, because of mild levels of shear in the Gulf even though the temperatures are well over 90 degrees in many areas) and that it will make landfall somewhere west of Pensacola.  I will have updates about this system if it continues to be a concern.


Figure 1: Shortwave infrared satellite of 90L, late Friday night


Figure 2: Upper-level divergence for 90L, late Friday night


Figure 3: Google Earth overlay of model tracks and shear for 90L, late Friday night


Figure 4: Wind shear tendency, late Friday night

Ringing In the 2011 Hurricane Season

Even as a student meteorologist, I think it’s safe to say that I am joined by a substantial part of the Southeast and Midwest in bidding a very loud “GOOD RIDDANCE” to the 2011 spring severe weather season. This season took a toll on me in a way that I honestly did not expect. It is painful to watch such atmospheric carnage unfold when the career you have embarked upon, whether as a forecaster or a research scientist, is intended to minimize such tragedies. Even when it is fairly universally recognized that the disaster was not in any way the fault of meteorologists, that only leaves a sense of helplessness. So yes, I am quite ready to say “good riddance” to the tornadoes, at least in this particular corner of the world, for a few months. The Southeast, along with much of the rest of the East, is embroiled in a heat wave at the moment, a fairly sure sign that a summer pattern has taken hold. On that cue, enter hurricane season. Hurricanes were my first atmospheric “love,” and even the season of 2005 did not change that. It is with a sense of excitement that I start opening my tropical web browser bookmarks regularly again.

I don’t see a lot of point in making a specific numerical forecast for this year’s hurricane season. Suffice it to say that my best guesses that I formed in winter are unchanged, and that I expect an active season with a higher-than-average chance of American strikes, unlike last season. I am not expecting a transition to El Niño, which would tamp down activity in the Atlantic, but I am not entirely sold on the expectation that the ENSO state will remain neutral throughout the season.  I think there are close to even odds that it will begin to return to La Niña conditions again by autumn, albeit milder than those of last winter.  However, either pattern will promote tropical activity.

Did I mention checking tropics-related web bookmarks once more?  Well, it turns out that the Atlantic basin is following the “official” calendar right on schedule, so I have reason to look at the tropics regularly already.  There is a disturbance in the Caribbean Sea that is the first really interesting possibility for tropical development.

Here is an image of how this disturbance looked at 11:00 PM Thursday night:

For reference, here is a true visible satellite image of the same disturbance a few hours earlier, which shows the mid- to low-level circulation of this system better:

There are two “blobs” in the Caribbean, but the first image makes it clear that the one to watch is the one closer to Central America. That is the one that, according to loops of visible satellite images, has visible rotation occurring, and it is analyzed as a low pressure center in official maps:

The area of convection to its east is associated with a tropical wave that is expected to merge with the low, adding energy and moisture to the brew.

This system is interesting for so early in the season because it has some atmospheric variables in its favor despite the calendar. The low has been developing low-level convergence (winds drawing together) and upper-level divergence, both of which are conducive for tropical cyclogenesis, though these areas of convergence and divergence need to become better aligned with each other:

As the images indicate, the area of convergence is the chief culprit in the misalignment. The divergence is occurring above the area of convection, indicating that the low is developing a system for ventilating itself.

An analysis of vorticity shows that the system has positive vorticity at the 850 and 700 mb levels and that the two levels are basically aligned, which in a tropical cyclone (or proto-cyclone) is positive for development:


850 mb vorticity

700 mb vorticity

Shear is all right above the system but not especially favorable in the surrounding environment:

This is no surprise for this early in the season, but the National Hurricane Center expects the environment to become more favorable for this system in the next couple of days. An examination of the GFS model indicates that they are probably correct in this expectation; though shear is expected to be prohibitive of any tropical activity in the Gulf of Mexico, it is supposed to lighten up around the low pressure center.

Incidentally, the GFS doesn’t seem to do much with this cyclone other than letting it churn in place. Don’t expect a hurricane out of this! At best, I’d say it might rate a tropical depression. It is primarily an interesting feature to watch for so early in the season, a harmless storm that we tornado-weary weather folk can observe without anxiety. Tropical cyclogenesis is a fascinating, somewhat mysterious, and awe-inspiring phenomenon, and instances like this that do not follow the classic “Cape Verde wave in September starts to spin in the middle of the Atlantic” pattern are particularly interesting, because their genesis process is not as cut-and-dried as the well-known central Atlantic tropical wave process. This system may very well be a harbinger, but that remains to be seen. For now, it’s a neat feature to watch.