Dr. Robert Chandler | International Crisis Communication Expert

The Future of Logistical Resupply for Disaster Management, Response and Recovery

3/9/2017

Image: National Science Foundation
One of the basic preparedness rules for successful disaster response and recovery is the principle that one should have plenty of “stuff” on hand, because access, logistics and supply chains may quite likely be disrupted during such situations. In fact, the recommendation is that you should pre-position supplies to meet a wide range of potential needs. To fully respond to this imperative, it is often necessary to establish (multiple) remote logistical supply “dumps” to warehouse a full range of potential supplies. This is an expensive part of the disaster preparedness cycle. Furthermore, despite extensive and wide-ranging advance placement, specific supplies, tools or equipment inevitably turn out to be missing at the critical points when they are most needed. Is there a better way to plan for fast availability of such resources and to ensure resupply even during catastrophic events, when one is essentially “on one’s own” for hours or days?

Critical Supplies and Logistics for Disaster Management and Recovery

During routine operational conditions, effectively sustaining a functional resource supply chain can be a challenge. During a crisis, emergency or disaster, sustaining a reliable and efficient supply (or resupply) chain can often be simply impossible.

Virginia Tech College of Business[i] quotes associate professor Chris Zobel on this challenge:

“Imagine the challenges of managing a supply chain to handle the recovery from a natural disaster, says Chris Zobel, an associate professor of business information technology who studies resilience in supply chains for disaster relief and readiness. A supply chain for a business, Zobel explains, is the series of processes involved in getting goods to customers, from order placement to delivery. Businesses have to consider six key parts of a supply chain: production (in a nutshell, what should be produced in what quantity and quality); supply (how and where the goods are to be made or sourced); inventory (how much to maintain); location (where to site plants and warehouses); transportation (ground, air, or sea?); and, lastly, information (how to obtain, organize, and manage all the information related to the business). Supply chain management is frequently used in disaster relief efforts, he says, noting that the International Federation of Red Cross and Red Crescent Societies won the European Supply Chain Excellence Award in 2006 for its disaster response activities. It is particularly important for the supply chains of humanitarian aid agencies to be resilient — to be strongly resistant to the initial impact of a disaster and to be able to recover quickly and respond and adapt well to changing conditions. ‘I want to help organizations improve their ability to prepare for and respond to disasters — particularly service-oriented organizations that have that as their core mission.’”

What if it were possible to quickly produce a wide range of essential disaster management and recovery supplies without dependence on risky supply chains or the necessity to preposition a massive amount of resources in multiple locations? It is increasingly possible that the future of supply and resupply has arrived, and that the complex, expensive, cumbersome and often unreliable traditional supply chain process can be replaced with products produced on site, in the field.

Supplies on Demand

“3D printing,” or additive manufacturing, is a process of making three-dimensional solid objects from a digital file. The creation of a 3D-printed object is achieved using additive processes. (In an additive process, an object is created by laying down successive layers of material until the entire object is complete. Each of these layers is a thinly sliced horizontal cross-section of the eventual object.) Experts have suggested that 3D printing technology marks the threshold of a third industrial revolution and will fundamentally alter all aspects of production and products – including dramatic impacts on supply and resupply chains.
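The layer-by-layer idea can be made concrete with a toy “slicer” (a hypothetical illustration for this post, not production slicing software): it divides a simple solid – here a sphere – into horizontal cross-sections of fixed height, the same way a printer builds an object from the bottom up.

```python
import math

def slice_sphere(radius: float, layer_height: float) -> list[float]:
    """Toy 'slicer': compute the cross-section radius of each horizontal
    layer of a sphere, mimicking how a 3D printer builds bottom-up."""
    layers = []
    z = -radius
    while z < radius:
        # Radius of the circular cross-section at height z
        r = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append(round(r, 2))
        z += layer_height
    return layers

# A 10 mm sphere printed in 2 mm layers yields 10 thin cross-sections,
# widest at the equator and tapering toward the poles.
print(slice_sphere(10.0, 2.0))
```

A real slicer does the same conceptual operation on an arbitrary CAD mesh, then converts each cross-section into toolpath instructions for the print head.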

Image: NASA
Additive Manufacturing – What is it?

“3D printing” starts with making a virtual design of an object. This virtual design is made in a CAD (Computer Aided Design) file using a 3D-modeling program (for the creation of a totally new object) or with the use of a 3D scanner (to copy an existing object). A 3D scanner makes a digital virtual copy of an object. The “printing,” or additive manufacturing (AM), forms successive layers of material to create a physical object. Objects can be of almost any shape or geometry and can be produced from 3D model data or another electronic data source such as an Additive Manufacturing File (AMF).

Additive manufacturing combined with cloud computing technologies allows decentralized and geographically independent distributed production. However, decoupled production without cloud computing connections is feasible if one has the proper software, data files and mobile printing technology on hand.

Advances in technology have introduced a growing variety of materials suitable for additive manufacture, which in turn has opened the possibility of directly producing finished components virtually anywhere at any time. Thus far, AM can produce products made of plastic, composites and various types of metal. In most cases, the printing is rapid and has so far proved very reliable. Furthermore, the cost structure of this process means that it is economically viable to produce small quantities of objects (or even one) at the moment they are needed.

Successful Demonstrations of the AM Process

An article published in The Economist reports on field applications of AM technology aboard the USS Harry S. Truman naval aircraft carrier on station in the eastern Mediterranean and Persian Gulf:

“…if it is a question of replacing a small but crucial component that has broken – the modern equivalent of reshoeing a horse – then making what is needed to order in this way has huge potential. Moving replacement parts through a long supply chain to a far-flung ship or base can take weeks. And, if a war is on, such convoys make tempting targets. Yet it is unrealistic to keep a full range of spares near the front line. Far better to produce what is needed when it is needed.
Having access to a printer can even encourage innovation. For example, the USS Harry S. Truman, an American aircraft-carrier, took two 3D printers on her most recent tour of duty…During the eight months she was at sea her crew devised and printed such items as better funnels for oil cans (to reduce spillage), protective covers for light switches (to stop people from bumping into them and inadvertently plunging, say, the flight deck into darkness) and also a cleverly shaped widget they dubbed the TruClip. This snaps onto walkie-talkies, reinforcing a connection that is otherwise prone to break in the rough-and-tumble of naval usage. According to Commander Al Palmer, one of the Truman’s maintenance officers, TruClips alone have saved more than $40,000 in replacement parts.”

The article also reports that currently: “Israel’s air force prints plastic parts that are as strong as aluminum, in order to keep planes that date from the 1980s flying. And America is advising the governments of Australia, Britain and France on 3D printing, in order to speed up these allies’ supply chains…”

Plastics, Metals and Composites

AM started with the production of plastic items. However, over time, 3D printing has increasingly diversified beyond plastic components. Today, 3D printers can create a wide variety of complex objects composed of plastic, metal and composite materials. Currently these AM products range from titanium medical implants to nickel alloy aircraft and spaceship parts. Stephanie Yang (“3-D Printing Fuels Demand for Powdered Metals.” Wall Street Journal, 12.29.16, B10) reports that the demand for AM materials is rapidly expanding for an ever-increasing range of 3D end products.
She writes that: “Demand for powders made of aluminum, cobalt, and other industrial metals is poised to take off over the next decade as 3-D printing technology becomes more widely used, especially in industries that use tailored components and parts.” Yang quotes Klaus Kleinfeld, Chairman and CEO of Arconic, a producer of 3D printer powdered metals, who says: “I think that we’re at a point here where the capabilities of 3-D printing, particularly in metals [are] so limitless.”
3D printing still must overcome obstacles of cost, reliability testing, and emerging software technology challenges before it becomes ubiquitous. Nonetheless, as Yang notes, objects already in use today include a GE jet engine, an Anatomics rib-cage implant medical device, and an Airbus-produced motorcycle frame and body.

AM’s Potential for Disaster Response and Recovery

The next step in this evolution is to apply AM technology to disaster response and recovery supply needs and to the supply chain/prepositioning challenges described above. The following is a partial list of tools, devices, equipment and objects which are currently available for AM (3D printing) production:
  • Personal protection equipment and devices
  • Containers (many shapes, sizes and purposes) and lids/covers
  • Hand tools (e.g. shovels, picks, axes, etc.)
  • 3D Signage
  • Valves, couplings, HVAC components, dewatering devices, plumbing items, tubing, conduits, etc.
  • Critical components (for equipment, vehicles, systems, etc.)
  • Storage units and mobile storage units
  • Debris collection devices and removal units
  • Sandbags and sandbag filling tools
  • Isolation and quarantine equipment
  • Fatality management bags, boxes and storage units
  • Emergency shelters
  • Traffic Management tools and equipment
  • 3D badging and Identification items
  • Incident command devices and supplies (e.g. vests, hats, workloads, fasteners, etc.)
  • Specialized items (e.g. whistles, pry bars, multiple-purpose valve shut-off devices, wrenches, glass breakers, pliers, files, screwdrivers, levers, can openers, bottle openers, punches, knives, filters, ear/eye protection, filtration masks, etc.)
  • Furniture (e.g. cots, desks, chairs, workstations, etc.)
  • Transportation (carts, wagons, wheelbarrows, etc.)

With innovation and creativity, there is grand potential for using AM technology to address the immediate supply needs of disaster management and recovery. In the future, supply prepositioning may significantly include a variety of 3D (AM) printers, a database of CAD templates, programmable CAD software for innovative new products, and cartridges of printable composite “ink” materials.

I think that AM is an inevitable and invaluable part of the future for disaster management and recovery supply and supply chain solutions. I foresee this as a very positive business opportunity for those who are inclined to investigate it further.

[i] http://www.magazine.pamplin.vt.edu/fall10/supplychain.html

“Having no truck with it”, The Economist, November 5, 2016: v. 421 (9014), 69-70



Decision-Making and Communication Errors Blamed for Deadly Jet Crash

2/23/2017

LaMia Charter Flight Crash. Image: Twitter
Over the years, I have examined several “cockpit communication breakdowns” that resulted in endangering aircraft in flight and on the taxiway – and in too many instances resulted in crashes and fatalities. Human error has been documented as a primary contributor to more than 70 percent of commercial airplane hull-loss accidents. All airline pilots are required to receive crew resource management (CRM) training, which augments technical flight and ground training with human factors subjects. CRM training has been shown to help flight crews improve human-factors performance. Unfortunately, in real flight operations, there are complex situational, business and economic, cognitive and physical factors that cause human-factors performance problems, particularly when a crew faces a demanding critical situation, such as an emergency.

Last fall, one aircraft crash that received a significant amount of attention was the crash of the LaMia Charter Flight 2933 (a BAE 146 Avro RJ85 jet) that cost the lives of 71 people including members of the Chapecoense (Brazil) soccer team on 28 November 2016.

The aircraft had been transporting the Chapecoense soccer team to the biggest game in its history: the final of the Copa Sudamericana. The LaMia Flight 2933 charter plane, headed from Bolivia to Medellin for the championship match, crashed Nov. 28 into mountainous terrain near Rionegro, Colombia. Most of the victims were members of the Brazilian team — 19 of whom were players and 25 of whom were team executives. Six people survived the crash. (The South American Football Confederation awarded Chapecoense the Copa Sudamericana title following the incident.)

Colombia’s Civil Aeronautics Agency (CCAA) concluded in its investigation that the crash of the airliner was caused by a series of human errors, including significant communication breakdowns. Among the many factors identified as leading to the fatal crash were the decisions to let the charter jet take off without enough fuel on board to ensure flight safety and the failure to stop midway en route to add fuel. The Civil Aeronautics Agency also stressed that neither the LaMia charter company nor Bolivian authorities should have allowed the plane to take off with the flight plan submitted.

The pilots of the aircraft knew that there was insufficient fuel for the flight plan, and they were aware that the jet engines were shutting down due to lack of fuel before they communicated their critical predicament to controllers. They reported “a total electric failure without fuel” only minutes before the jet slammed into a hillside outside Medellin, Colombia. Even then, they asked only for a “priority landing” and did not communicate that they were in imminent danger. Controllers later described the crew’s communications as delivered “in a completely normal manner.”

Kejal Vyas writing in the Wall Street Journal reported:

“Yaneth Molina, the air-traffic controller in Jose Maria Cordova International Airport, described the harrowing final minutes in an interview…with Colombia’s Caracol Radio. The LaMia flight, she said, never alerted them of any major problems before suddenly beginning an unauthorized descent for landing, looking to cut in front of three other planes that were scheduled to land before. ‘That’s when I called them and they tell me about an emergency,’ Mrs. Molina said. ‘There were 71 victims, but it was too close. They were practically on top of the other aircraft. It could have been worse,’ she said.”

Investigators concluded without a doubt that crew members of the LaMia flight were aware of the lack of fuel but only communicated with controllers about the emergency situation when it was too late. Analysis of the Cockpit Voice Recorder (CVR) revealed that during the flight the pilot and co-pilot are heard on “various occasions” talking about stopping in Leticia, a city near the borders separating Brazil, Peru and Colombia, to refuel but decided not to do so nor to communicate with air traffic controllers about their predicament.

The CCAA noted that when the plane entered Colombian airspace, it was flying into a headwind, which caused more fuel to be consumed. When the pilot finally contacted the air traffic controllers to request priority to land in Medellin (six minutes before crashing), the plane had already spent two minutes with one engine shut down. Three minutes and 45 seconds before the crash, all the engines had stopped due to the lack of fuel, the investigation concluded.

In a recording of the belated radio messages from the pilot, he can be heard repeatedly requesting permission to land due to a lack of fuel and a “total electric failure.” Moreover, a surviving flight attendant and a pilot flying nearby testified that they also overheard what they described as frantic pleas from the flight crew of the LaMia jet during the final moments.


As news reports made public (Human error led to Colombia soccer plane crash: authorities) “[n]o technical factor was part of the accident, everything involved human error, added to a management factor in the company’s administration and the management and organization of the flight plans by the authorities in Bolivia.”

The flight crew was aware of the insufficient fuel and yet they did not stop at the mid-way point of the flight to refuel nor did they communicate that decision (and the anticipated low fuel situation) to the controllers. The flight crew were also aware of the imminent danger posed by the jet engines shutting down. Yet, they sought to communicate with the ground controllers only after it was too late to ensure a safe landing and literally just minutes before their aircraft plunged into a hillside.
Once again, bad decision-making and delayed, poor communication by the flight crew appear to be substantial factors in another air crash disaster. Improving human performance can help reduce the commercial aviation accident rate, and much of the focus is on designing human-airplane interfaces and developing procedures for both flight crews and maintenance technicians to mitigate these breakdowns. Human behavior, particularly decision-making and communication, needs to be a priority for flight crew training, assessment and certification.

Preventing Future Human Error Crashes

More data and additional research are needed to better enhance human performance in these contexts. Unfortunately, it is difficult to obtain insightful data in an aviation system that focuses on accountability and punitive responses to breakdowns. Flight and maintenance crews are often unduly exposed to blame because they are the last line of defense when unsafe conditions arise. The system should transcend a “blame” culture and encourage all members of aircraft operations to be forthcoming after any incident. Data collection should not be limited to any one segment of the safety chain. To best reduce human-factor accident rates, we should continue to promote and implement proactive, nonpunitive safety reporting programs designed to collect and analyze aviation safety information, and implement a substantial human behavior and communication training regime to improve performance at critical moments.

We also need business, aviation industry, and regulators to support these approaches. One positive example is the effort made by the Boeing company. Boeing currently has a focused effort to examine human performance issues throughout the airplane to improve usability, maintainability, reliability, and comfort. In addition, human factors specialists participate in analyzing operational safety and developing methods and tools to help operators better manage human error. These responsibilities require the specialists to work closely with engineers, safety experts, test and training pilots, mechanics, and cabin crews to properly integrate human factors into the design of airplanes. We need more companies in the aviation industry to support and expand such efforts.



Health Communication Priorities for Recent Rise in Mumps Outbreaks

2/9/2017

Mumps

Mumps is a highly contagious disease caused by a virus. It typically starts with a few days of fever, headache, muscle aches, tiredness and loss of appetite, followed by swollen salivary glands. Mumps is best known for the puffy cheeks and swollen jaw that it causes. This is a result of swollen salivary glands. Mumps spreads through saliva or mucus from the mouth, nose or throat. An infected person can spread the virus by coughing, sneezing or talking, sharing items, such as cups or eating utensils, with others, and touching objects or surfaces with unwashed hands that are then touched by others.

Contagiousness is similar to that of influenza and rubella, but less than that of measles or varicella. Although mumps virus has been isolated from seven days before through 11–14 days after parotitis onset, the highest percentage of positive isolations and the highest virus loads occur closest to parotitis onset and decrease rapidly thereafter. Most transmission therefore likely occurs in the several days before and after parotitis onset. Transmission also likely occurs from persons with asymptomatic infections and from persons with prodromal symptoms.

History of the Disease

Mumps is an acute viral illness. Parotitis and orchitis were described by Hippocrates in the 5th century BCE. In 1934, Johnson and Goodpasture showed that mumps could be transmitted from infected patients to rhesus monkeys and demonstrated that mumps was caused by a filterable agent present in saliva. This agent was later shown to be a virus. Mumps was a frequent cause of outbreaks among military personnel in the pre-vaccine era, and was one of the most common causes of aseptic meningitis and sensorineural deafness in childhood. During World War I, only influenza and gonorrhea were more common causes of hospitalization among soldiers. Mumps virus was isolated in 1945, and an inactivated vaccine was developed in 1948. This vaccine produced only short-lasting immunity, and its use was discontinued in the mid-1970s. The currently used Jeryl Lynn strain of live attenuated mumps virus vaccine was licensed in December 1967. The vaccine was first recommended for routine use in the United States in 1977.
Figure: Number of reported mumps cases in the United States, 1968 through 2011 (CDC)
In 2006, a multi-state mumps outbreak in the American Midwest resulted in more than 6,000 reported cases. During 2009-2010, two large outbreaks occurred: one among Orthodox Jewish communities in the Northeast with 3,502 reported cases and the other on the U.S. Territory of Guam with 505 mumps cases reported.

Symptoms of Mumps

Mumps likely spreads before the salivary glands begin to swell and up to five days after the swelling begins. The most common symptoms include:
  • Fever
  • Headache
  • Muscle aches
  • Tiredness
  • Loss of appetite
  • Swollen and tender salivary glands under the ears on one or both sides (parotitis)

Symptoms typically appear 16-18 days after infection, but this period can range from 12-25 days after infection. Mumps occurs in the United States, and the MMR (measles-mumps-rubella) vaccine is the best way to prevent the disease.

Mumps can occasionally cause complications, especially in adults.

Complications include:
  • inflammation of the testicles (orchitis) in males who have reached puberty; rarely does this lead to fertility problems
  • inflammation of the brain (encephalitis)
  • inflammation of the tissue covering the brain and spinal cord (meningitis)
  • inflammation of the ovaries (oophoritis) and/or breast tissue (mastitis)
  • deafness
Recent New Mumps Outbreaks

As the Wall Street Journal reported, 2016 was the worst year for mumps outbreaks in a decade. Despite widespread vaccination requirements, college campuses are bearing the brunt of the attack as students live in close quarters and don’t always maintain the healthiest lifestyles.

Korin Miller, writing in an article in SELF (Mumps Cases Are The Highest They’ve Been In 10 Years), reports that, according to government data, the U.S. experienced more mumps cases in 2016 than the country has seen annually in a decade. The Centers for Disease Control and Prevention reported that, as of November 5, 2016, the U.S. had seen 2,879 cases of mumps in 45 states and Washington, D.C., that year. By comparison, a little over 1,000 cases were reported in 2015.
According to Miller, mumps used to cause up to 186,000 cases a year, but the measles, mumps, and rubella vaccine—better known as the MMR vaccine— has brought numbers down, the CDC says. The CDC recommends that children get two doses of the MMR vaccine, but notes that it’s not 100 percent effective.
Image: CDC
People who contract mumps typically develop puffy cheeks and a swollen jaw due to swollen salivary glands, but they also may have a fever, headache, muscle aches, fatigue, and loss of appetite. Symptoms usually appear up to 18 days after a person is infected, and most people recover completely in a few weeks, the CDC reports.

Why the Sudden Increase in Cases in 2016?

Miller cites Richard Watkins, M.D., an infectious disease specialist at Cleveland Clinic Akron General Medical Center, who says that there may be several possible reasons. One is that some outbreaks may occur because parents made the decision not to vaccinate their children, leaving them more susceptible to contracting the virus, he says. The other is likely what he calls “waning immunity.” The CDC recommends that children get two doses of the MMR vaccine, with the first dose at 12 to 15 months of age and the second anywhere between ages four and six, he explains. Typically, the vaccine’s effectiveness starts to decline 10 years after the last dose, he says. It’s soon after this that people go to college, where they may be exposed to mumps from unvaccinated peers or from students from abroad, where the MMR vaccine isn’t as widely used, said William Schaffner, M.D., an infectious disease specialist and professor at the Vanderbilt University School of Medicine. “If you were vaccinated against mumps and you get exposed to it in your teenage years and into young adulthood when immunity wanes, particularly in close face-to-face contact with someone, you can get a milder case of it,” Schaffner explains.

Mumps Vaccine

Mumps can be prevented with MMR vaccine. The vaccine protects against three diseases: measles, mumps, and rubella. CDC recommends children get two doses of MMR vaccine, starting with the first dose at 12 through 15 months of age, and the second dose at 4 through 6 years of age. Teens and adults should also be up to date on their MMR vaccination.
MMR vaccine is very safe and effective. The mumps component of the MMR vaccine is about 88% (range: 66-95%) effective when a person gets two doses; one dose is about 78% (range: 49%−92%) effective.
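To see what those effectiveness figures mean in practice, here is a rough back-of-the-envelope calculation (the outbreak numbers below are hypothetical illustrations, not CDC data): vaccine effectiveness reduces the attack rate that exposed people would otherwise face.

```python
def expected_cases(exposed: int, attack_rate: float, effectiveness: float) -> int:
    """Expected infections among exposed people: vaccine effectiveness
    scales down the attack rate they would face if unvaccinated."""
    return round(exposed * attack_rate * (1 - effectiveness))

# Hypothetical campus outbreak: 1,000 exposed students, assuming a 30%
# attack rate if no one were vaccinated.
print(expected_cases(1000, 0.30, 0.88))  # two doses: 36 expected cases
print(expected_cases(1000, 0.30, 0.78))  # one dose: 66 expected cases
print(expected_cases(1000, 0.30, 0.00))  # no vaccine: 300 expected cases
```

Even an imperfect vaccine, in other words, cuts expected cases by nearly an order of magnitude – which is why outbreaks can still occur in vaccinated communities yet stay far smaller than they otherwise would.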

Mumps Vaccine Composition

  • Live virus (Jeryl Lynn strain)

Effectiveness
  • 88% (Range, 66%-95%) – With 2 doses

Duration of Immunity
  • lifelong

Schedule
  • At least 1 dose should be administered with measles and rubella (MMR) or with measles, rubella and varicella (MMRV)
  • Single-antigen vaccine not available in the United States

Children may also get MMRV vaccine, which protects against measles, mumps, rubella, and varicella (chickenpox). This vaccine is only licensed for use in children who are 12 months through 12 years of age.

Before the U.S. mumps vaccination program started in 1967, mumps was a universal disease of childhood. Since the pre-vaccine era, there has been a more than 99% decrease in mumps cases in the United States. Mumps outbreaks can still occur in highly vaccinated U.S. communities, particularly in close-contact settings such as schools, colleges, and camps. However, high vaccination coverage helps to limit the size, duration, and spread of mumps outbreaks.

The CDC admits that the MMR vaccine isn’t perfect. “MMR vaccine prevents most, but not all, cases of mumps and complications caused by the disease,” the agency says on its website. Two doses of the vaccine are 88 percent effective at protecting against mumps, and one dose is 78 percent effective, the CDC says. That’s why outbreaks can still occur in communities where people are vaccinated—however, high vaccination rates limit the size, duration, and spread of mumps outbreaks.

Public Health Implications

Most mumps outbreaks in 2016 have been on college campuses, board-certified infectious disease specialist Amesh A. Adalja, M.D., an assistant professor at the University of Pittsburgh Medical Center, tells SELF. “The nature of a university campus tends to allow for bigger outbreaks,” he says. “It really allows the virus to find enough hosts to get to these types of numbers.” (Harvard, for example, experienced an outbreak this spring.)

When these outbreaks do occur, people may be offered a third dose of the MMR vaccine to try to boost their immunity. “That may be something that has an increased role that we continue to see,” Adalja says. In fact, the Advisory Committee on Immunization Practices (a panel of health experts who give vaccination guidance for the U.S.) is considering recommending a third dose of the vaccine for everyone as part of the MMR schedule, per CNN. However, it hasn’t said at what age it would recommend the third dose.

If a mumps outbreak occurs nearby, try to avoid contact with infected people, if possible. “It’s spread by direct person-to-person contact and respiratory droplets,” Watkins says (think: being sneezed or coughed on, or through kissing). You’re especially at risk if you get within three feet of someone who has the virus, Schaffner says, particularly if you’re in close prolonged contact, like being in a class together or work setting with them.

Image: CDC
Recommendations

In response to a mumps outbreak in the Midwest, college students and health care workers in particular are encouraged to make sure they’ve had two doses of the MMR vaccine. A single dose doesn’t appear to offer sufficient protection during an outbreak. Since the recommendation for a second dose didn’t begin until the late 1980s or early 1990s, many young adults may not have received their second dose and should have one now.

Renewed efforts at health communication about the risks and preventive measures concerning mumps are needed. This includes more effective messages targeting the most at-risk populations. Specifically, MMR vaccination should be encouraged. Higher vaccination rates would limit the size, duration, and spread of mumps outbreaks and improve wellness generally, beyond the specific at-risk populations.


Improving Productivity, Participation and Satisfaction in Business Meetings

1/27/2017

In my years of teaching organizational and leadership communication, one recurring topic centers on the challenge of conducting effective business meetings. Meetings are one of the commonly dreaded chores for many employees, and far too many meetings deserve the disdain. Poor meeting performance can negatively impact productivity, derail decision-making and momentum, damage employee morale, and become a festering problem that drains attention, resources and valuable work time with only negative outcomes to show for the investment.

Over the decades, I have read and reviewed numerous academic and scholarly studies about meetings, as well as a long list of popular professional viewpoints on steps to improve meeting performance. While the body of literature is too large to attempt to summarize in these few short paragraphs, I thought it might be helpful (to someone) to hit a few of the more frequently mentioned “tips” which have been recommended for improving productivity, participation and participant satisfaction in business meetings. I am confident that I picked these ideas up from many others over the years, so I defer credit to numerous unknown sources for the following 10 key ideas.

1. Purpose

Every meeting should have a purpose and end-goal. Objectiveless meetings (meeting just to meet) are not only pointless (literally) but are a drain on time, morale and other resources which could be more productively devoted to other tasks and mission-critical purposes. Each meeting should have a focus and purpose. Ideally, there should only be one central or core objective for the meeting. Never call or organize a meeting without knowing what you seek to accomplish in that session. It is inherently important to know why you are scheduling a meeting.

Does the meeting really need to be held? Having fewer (but better) meetings can be a breakthrough for increasing productivity, participation when you do meet, and overall satisfaction. Schedule a meeting only when it is necessary to the purpose and objective. Before scheduling a group meeting, ask yourself whether you can achieve your goal in some other way, perhaps through a one-on-one discussion, a telephone conference call, or a simple exchange of emails.

All too often, “meetings” are held to merely update or passively share informational items. If you leave a meeting without having had discussion and interaction and/or any post-meeting action steps, you should question the value of the meeting. A meeting to “share updates” should be replaced with a memo, website update, bulletin board posting, email or voicemail message.
Sue Shellenbarger wrote in “The Plan to End Boring Meetings” (Wall Street Journal, 12-21-2016, A11) that managers often invite too many people to meetings, and ask people to attend for the wrong reasons, resulting in far too many oversized groups that fail to work together effectively. She suggested that the number of meeting participants should be adjusted to the core purpose of the meeting. Doing so, she argues, may lead to faster and better decisions as well as more engaged employees. Here are her recommended numbers of participants for each type of meeting:

Weighing a problem – 4 to 6 people

Invite enough people to bring needed expertise, without including so many that discussion flies off course. Each participant should have a role to play….

Making a decision – 4 to 7 people

….For every additional participant over seven, the likelihood of making a sound decision goes down by 10%. According to Michael Mankins, a partner at Bain & Co., “By the time you get to 17 people, the chances of your actually making a decision are zero….”

Setting the agenda – 5 to 15 people

Another kind of meeting, the daily agenda-setting session, should be brief and vary in size, based on how big your team is. These brief gatherings, often called huddles or stand-up meetings, usually involve only the people who have a logical reason to be there because their work or cooperation is critical to the day’s agenda.

Brainstorming – 10 to 20 people

Sprinkle the list of invitees with people from different backgrounds and social networks to spark diverse ideas…. Brainstorming participants tend to resist throwing out risky or novel ideas because they’re worried about what others might think…. [One expert] suggests giving participants time in advance to write down ideas and submit them anonymously before the meeting.

2. Plan

Every meeting should have a plan, developed in advance, that answers the basic who, what, when, how and why questions about the meeting. It should lay out how these are communicated in advance to attendees as well as how they will be accomplished in the meeting itself. Perhaps giving a meeting a sub-title could help share the plan for the meeting (e.g., a brainstorming meeting, a timeline development meeting, etc.). Include in the plan the method and message for calling the meeting or inviting (requiring) attendance. Attendees should understand in advance why they are among the participants and what to expect in terms of time commitment, preparation or resources to have ready.

Review previous meetings and previous feedback as you plan the meeting. Learn how to improve meetings by reviewing past presentations and identifying aspects to discard or incorporate into the meeting plan for the next meeting.

Anticipate how much time is required to accomplish the purpose and objectives of the meeting. Set the start and stop time accordingly. Select a meeting space (room) that has appropriate work space (table, sufficient chairs, necessary A/V equipment, etc.). Be prepared. Meetings are work, so, just as in any other work activity, the better prepared you are for them, the better the results you can expect.

There may be certain times (and days) during the week that work best for the meeting. Try to systematically analyze and anticipate the “better” days and times for scheduling a meeting based on the invited attendees’ assignments, duties and other work conflicts. If you use a software calendar-scheduler (e.g., MS Outlook), don’t schedule a meeting to begin the minute a previous meeting or appointment is scheduled to end; this is impractical and unrealistic. Depending on the organizational culture, it may be normative for meetings to routinely “run long,” so some transition time is reasonable. As you schedule meetings, building in 15 minutes of travel time between meetings can be helpful, particularly if attendees are coming from different floors, different buildings or even different campuses to attend the meeting.

3. Descriptive Agenda

Write a one-page summary of the purpose and plan for the meeting. Before the meeting begins, share the one-page summary of the major points that you want to cover during your meeting. This enables employees to know what is expected from them, helps keep the meeting on-track and consistent with the purpose and plan, and results in employees having a better understanding of what to expect in the meeting. Also, this will help reduce any anxieties or fears among your workers and prevent any rumors from spreading before the meeting begins.

An agenda can play a critical role in the success of any meeting. It shows participants where the meeting is going. It is usually best to distribute the agenda and any preparation assignments in advance of the meeting. Research has found that “mystery” meetings tend to have lower levels of motivation, participation and satisfaction than focused and purposeful meetings where the participants understand the why and what expectations of the meeting.

The descriptive agenda should not be an itemized list of specific points but rather a general overview that helps set the tone and expectations of what is about to occur. In fact, the person running the meeting may want a very detailed personal agenda, but the one distributed publicly should be general in orientation, not detail-specific.

4. Start the Meeting

It is important to signal that the meeting has begun. Have a formal threshold to let everyone know that the context has shifted into “meeting mode” and that other activities and small talk should cease and attention turn to the agenda. Start the meeting on time. There are many reasons to do this: it makes the best use of limited time, signals that the meeting is important, sets expectations about the business purpose of the meeting, over time encourages prompt attendance, and sets up the expectation that the meeting will end on time as well.

5. Participation

One of the core concepts of a meeting is that there is a level of synergy possible which is potentially greater than a single person or two-people considering the topic, subject or idea of the meeting. Thus, it is essential to create a climate to foster engagement, participation and interaction for a meeting to be truly successful. This requires setting aside sufficient time as well as a skillful facilitator to lead the meeting with a goal of fostering participation.
State explicitly that the goal of the meeting is to achieve participation. Establish ground rules that empower all viewpoints to be expressed. Focus on the substantive discussion and topic at hand, not on personalities, procedures or distractions.

6. Facilitation

A meeting leader should function as a facilitator. One tactic business leaders use to avoid inadvertently dominating a meeting is delegating meeting leadership. Consider whether it is advantageous to assign the meeting management responsibility to someone else, perhaps to build subordinates’ skills. Other leaders rotate the meeting-leader position among staff in subsequent meetings, which helps them improve their management skills. The delegated person should have some training and experience in facilitation. Here are a few guidelines facilitators should consider:
  • Keep the meeting on topic. When there are many people in a meeting, it can be difficult to stay on topic, so prepare accordingly. If you find that the meeting isn’t going anywhere or someone is off on a tangent, politely circle back to the important topic that needs to be addressed. Meetings can easily get off track and stay off track; the role of the facilitator is to keep the meeting on track.
  • Ask useful questions. To prepare, write a list of questions that relate to the purpose and objectives of the meeting. If you ask a question and no one answers it, ask for clarification or push for an answer that keeps the focus on the subject matter.
  • Verbally reward participation. Compliment and express appreciation for those who are engaging and helping advance the goal and purpose of the meeting discussion.
  • Provide constructive feedback for those who are not engaging and advancing the purpose of the meeting. Manage the participants trying to dominate the meeting. Do not let a few people take control of your meetings. Instead, create a friendly atmosphere where everyone feels comfortable expressing their opinions.
  • Allow sufficient wait-time. Sometimes interactive discussion is slow to start (other times the opposite is the challenge). When you ask a question, don’t rush to answer it yourself. A silent pause in the room may feel awkward, but research and experience have demonstrated that the conversation gap is best filled by one of the participants who, after hesitating, begins to offer input.
  • Vary the meeting format. Incorporate some variety in meetings and do not do the same thing the same way every time. Be flexible when asking for, receiving and considering suggestions on improving meetings. One interesting technique I have read about is the “standing meeting,” in which people gather and remain standing rather than sitting around a conference table. The rationale is that frivolous distractions and longwinded speeches are less likely, and there may be greater motivation to focus on the issues and create the action-item lists. If you want a meeting to be short and efficient, a standing meeting might be an option. Another format I have observed is the “walking meeting,” conducted while the group walks. It has similar advantages to the standing meeting, with the added bonus of visiting a work station, lab, classroom or production line on-site when that is appropriate to the purpose of the meeting.

7. Closure Communication

Don’t end the meeting the moment discussion wanes, and don’t just finish saying what you want to say and then leave. Typically, there are still good ideas yet unspoken that can be elicited, or someone may not fully understand how their idea fits the needs of the meeting. Again, be patient and persistent even if attendees are quiet.

At the end of every meeting, go around and review the action steps each person has captured. Some facilitators also ask for closure observations about the meeting itself or next steps needed to advance the meeting goal and objectives. The exercise takes a small amount of time per person, and is usually well worth the summative feedback.

8. Action Item List

Create an action-item list of the specific follow-up steps and measures that arose during the meeting. Do not assume that everyone will informally remember the various action items slated for follow-through. Assign a named person as responsible for accomplishing each action item; this accountability makes it far more likely to be accomplished.
It is also helpful to produce notes or “minutes” from each meeting. Don’t just assume that all participants are going to take their assignments to heart and remember all the details. Instead, be sure that someone has agreed to take on the job of record keeping. Immediately after the meeting, summarize the outcome of the meeting, as well as assignments and timelines, and email a copy of this summary to all attendees.

9. End the Meeting

Start on time and end on time. Everyone has suffered through meetings that went way beyond the scheduled ending time. That situation would be fine if no one had anything else to do at work. But in these days of faster and more flexible organizations, everyone always has plenty of work on the to-do list. If you announce the length of the meeting and then stick to it, fewer participants will keep looking at their watches, and more participants will take an active role in your meetings.

10. Assessment

Get feedback. Every meeting has room for improvement. Typically, you want to capture two types of feedback, so structure your data collection methods accordingly. Summative feedback focuses on evaluating the meeting and all aspects of it as it unfolded. This is evaluative feedback from meeting attendees on how the meeting went right for them — and how it went wrong. Was the meeting too long? Did one person dominate the discussion? Were attendees unprepared? Were the items on the agenda unclear? Formative feedback focuses on changes to the meeting plan, procedures and processes that should be implemented or adapted for the next (future) meeting to be held. These would include suggestions for improvements or changes to the way that things were or have been done in the past.

Summary

Time is a precious resource, and it is too expensive to needlessly waste it. With the amount of time devoted to business meetings in organizations, it is appropriate to focus on improving the productivity, participation and satisfaction with meetings.
These are just a few suggestions for improving the performance of the time spent in meetings. It is a win-win opportunity to improve productivity, gain participation and enhance everyone’s satisfaction levels.


(Mis)communication And Deadly Medical Errors

1/12/2017


 
One area where communication failures and breakdowns too frequently put lives at risk is found in the issue of medical errors in providing health care.

According to research published in the Journal of Health Care Finance (Andel, Davidow, Hollander, and Moreno, 2012[i]), approximately 200,000 Americans die each year from preventable medical errors, including facility-acquired conditions, and millions more may experience health care provider errors. Medical errors in the United States have an annual impact of roughly $20 billion. Most of these costs are directly associated with additional medical costs, including ancillary services, prescription drug services, and inpatient and outpatient care. Citing previous research (commissioned by the Society of Actuaries and conducted by Milliman in 2010), additional costs were attributed to increased mortality rates, with about $1.1 billion, or more than 10 million days, of lost productivity from missed work based on short-term disability claims.

The published research estimated that the economic impact may be much higher when indirect costs are quantified, perhaps another $1 trillion annually when quality-adjusted life years (QALYs) are applied to those who die. Andel et al., using the Institute of Medicine’s (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1999 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, conservatively projected a loss of $73.5 billion to $98 billion in QALYs for those resulting deaths. Some research suggests that preventable health care error death costs may be up to ten times the IOM estimate.

Quality care and patient safety depend on multiple factors, all of which must work harmoniously to ensure delivery. One key factor is the quality of communication at various critical points in the health care provider sequence. According to the University of Minnesota’s TAGS[ii] site, “Upwards of 100,000 deaths occur in the United States each year because of medical mistakes. One of the biggest factors contributing to the problem is miscommunication or lack of communication between multiple health care professionals.”

The Joint Commission Center for Transforming Healthcare[iii] reports that “ineffective hand-off communication is recognized as a critical patient safety problem in health care; in fact, an estimated 80% of serious medical errors involve miscommunication between caregivers during the transfer of patients. The hand-off process involves “senders,” those caregivers transmitting patient information and transitioning the care of a patient to the next clinician, and “receivers,” those caregivers who accept the patient information and care of that patient. In addition to causing patient harm, defective hand-offs can lead to delays in treatment, inappropriate treatment, and increased length of stay in the hospital.”

Marie McCullough, writing in The Philadelphia Inquirer[iv], describes a horrific problem of problematic labeling and miscommunication leading to fatal errors in the administration of chemotherapy drugs. She begins with a personal narrative about a patient who died due to a recurring medical error. Christopher Wibeto was receiving vincristine, a chemotherapy medication commonly used to treat several types of cancer. As McCullough describes, shortly after health care providers had injected the drug into Wibeto’s spine, doctors realized that a catastrophic medical error had occurred. Because vincristine is neurotoxic, it must be diluted and given intravenously; it should never be injected directly into spinal fluid (which flows around the brain). As a result, Wibeto developed dementia and paralysis and died within days of the improper administration of the drug. McCullough writes that “no one knows how many vincristine disasters have occurred. The Institute for Safe Medication Practices in Horsham has documented 125 fatal misadministrations since the 1960s, but experts believe that the real number is many times higher.” This type of medical error is preventable. Simple changes to the dispensing containers, more effective warnings, and successful health communication about procedures and verification processes are obviously warranted. Stories of medical errors such as the Wibeto case simply should never happen.

Most Common Root Causes of Medical Errors

The U.S. Department of Health and Human Services Agency for Healthcare Research and Quality[v] identified a diverse group of factors that cause medical errors. At the top of the list were the factors of communication, information flow, coordination and communication training/planning. Here are the top eight identified factors in rank order:
  1. Communication problems represent the most common cause of medical errors noted by the error reporting evaluation grantees. Communication problems can cause many different types of medical errors and can involve all members of a health care team. Communication failures (verbal or written) can take many forms, including miscommunication within an office practice as well as miscommunication between different components of the health care system or health care providers working different shifts. These problems can occur between health care providers such as primary care physicians and emergency room personnel, attending physicians and ancillary services, and nursing homes and patient services in hospitals. Communication problems can result in poorly documented or lost information on laboratory results, diagnostic testing, or medication information, and can occur at any point along the communication chain. Communication problems can also occur within a health care team in one location, between providers at different locations, between health care teams and other non-clinician providers (such as labs or imaging centers), and between health care providers and patients.
  2. Inadequate information flow can include problems that prevent:
    1. The availability of critical information when needed to influence prescribing decisions.
    2. Timely and reliable communication of critical test results.
    3. Coordination of medication orders at points of interface or transfer of care.
    Information flow is critical between service areas as well as within service areas in health care. Often, necessary information does not follow the patient when he or she is transferred to another service or is discharged from one component or organization to another.
  3. Human problems relate to how standards of care, policies, or procedures are followed. Problems that may occur include failures in following policies, guidelines, protocols, and processes. Such failures also include sub-optimal documentation and poor labeling of specimens. There are also knowledge-based errors where individuals do not have adequate knowledge to provide the care that is required for any given patient at the time it is needed.
  4. Patient-related issues can include improper patient identification, incomplete patient assessment, failure to obtain consent, and inadequate patient education. While patient related issues are listed as a separate cause by some reporting systems, they are often nested within other human and organizational failures of the system.
  5. Organizational transfer of knowledge can include deficiencies in orientation or training, and lack of, or inconsistent, education and training for those providing care. This category of cause deals with the level of knowledge needed by individuals to perform the tasks that they are assigned. Transfer of knowledge is critical in areas where new employees or temporary help is often used. The organizational transfer of knowledge addresses how things are done in an organization or health care unit. This information is often not communicated or transferred. Organizational transfer of knowledge is also a critical issue in academic medical centers where physicians in training often rotate through numerous centers of care.
  6. Staffing patterns/work flow can cause errors when physicians, nurses, and other health care workers are too busy because of inadequate staffing or when supervision is inadequate. Inadequate staffing, by itself, does not lead directly to medical errors, but can put health care workers in situations where they are much more likely to make an error.
  7. Technical failures include device/equipment failure and complications or failures of implants or grafts. In many instances equipment and devices such as infusion pumps or monitors can fail and lead to significant harm to patients. In many instances, inadequate instructions or poorly designed equipment can lead to patient injury. Often technical failure of equipment is not properly identified as the underlying cause of patient injury, and it is assumed that the health care provider made an error. A complete root cause analysis often reveals that technical failures, which on first review are not obvious, are present in an adverse event.
  8. Inadequate policies and procedures guiding the delivery of care can be a significant contributing factor in many medical errors. Often, failures in the process of care can be traced to poorly documented, non-existent, or clinically inadequate procedures.

Research at Stanford Medicine concludes that “better communication between caregivers reduces medical errors[vi].” The Stanford research found that focused efforts to improve communication quality alone resulted in a 30% decline in preventable adverse medical error events. (A copy of that research and experimental program is available for download at http://www.ipasshandoffstudy.com.) One of the important benefits from communication education and training is the reduction of preventable medical errors due to communication problems, failures and breakdowns. It is well past time that communication become a priority for health care professionals.

[i] Andel C, Davidow SL, Hollander M, Moreno DA., (2012) The economics of health care quality and medical errors. J Health Care Finance, 2012 Fall;39(1):39-50.
[ii] http://www.healthtalk.umn.edu/2014/04/11/preventing-medical-miscommunication-means-fewer-medical-errors/
[iii] http://www.jointcommission.org/assets/1/6/TST_HOC_Persp_08_12.pdf
[iv] McCullough, Marie (2016). Fighting a deadly chemo error. The Philadelphia Inquirer, November 11, 2016, A2.
[v] https://archive.ahrq.gov/research/findings/final-reports/pscongrpt/psini2.html
[vi] https://med.stanford.edu/news/all-news/2014/12/better-communication-between-caregivers-reduces-medical-errors.html



The Coming Leap Second Clock Adjustment: A Little Prevention is Worth The Effort

1/4/2017


 
For those of you who remember the disaster mitigation and recovery planning associated with the Y2K technological risk, you might find it interesting that small adjustments to the world’s atomic clocks must still be made periodically to keep us all in sync and to maintain social and economic harmony with the planet. Most people are aware of “leap year” adjustments, in which an entire day is added to the year (in February), but fewer know that “leap second” adjustments are also inserted periodically.

Atomic Time vs. Universal Time


Coordinated Universal Time (UTC) is a standard, not a time zone. In other words, it is the base point for all other time zones in the world, which are defined by their offset from UTC (UTC itself is represented as UTC+0). Coordinated Universal Time is a 24-hour time standard used to synchronize world clocks. To keep UTC as accurate as possible, two other time standards are used: International Atomic Time (TAI) and Universal Time (UT1), also known as Solar Time.

There are two components used to determine Coordinated Universal Time (UTC). These are:
  1. International Atomic Time (TAI): A time scale that combines the output of some 200 highly precise atomic clocks worldwide, and provides the exact speed for our clocks to tick.
  2. Universal Time (UT1), also known as Solar or Astronomical Time, refers to the Earth’s rotation around its own axis, which determines the length of a day.

As you might expect, these two units of measurement gradually move out of synchronization with each other. When the difference between UTC and UT1 approaches 0.9 seconds, a leap second is added to UTC and to clocks worldwide. By adding an additional second to the time count, our clocks are effectively stopped for that second to give Earth the opportunity to catch up with atomic time. The reason we have to add a second now and then is that Earth’s rotation around its own axis is gradually slowing down, although very slowly. Atomic clocks, however, tick away at pretty much the same speed over millions of years. Compared to the Earth’s rotation, atomic clocks are simply too consistent.
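The threshold rule described above can be sketched in a few lines. This is a minimal illustration, not an IERS algorithm; the function name is mine, and the 0.9-second threshold is the value cited in the paragraph above.

```python
# Sketch of the leap-second trigger: a leap second is scheduled when the
# accumulated difference between UT1 (Earth-rotation time) and UTC
# approaches 0.9 seconds.
def needs_leap_second(ut1_minus_utc: float, threshold: float = 0.9) -> bool:
    """Return True when |UT1 - UTC| has reached the insertion threshold."""
    return abs(ut1_minus_utc) >= threshold

print(needs_leap_second(0.4))  # False: still within tolerance
print(needs_leap_second(0.9))  # True: time to insert a leap second
```

The absolute value matters because, in principle, the difference could run in either direction, even though in practice Earth’s slowing rotation has so far required only positive (inserted) leap seconds.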

Upcoming leap seconds are announced by the International Earth Rotation and Reference Systems Service (IERS) in Paris, France. Before the first leap second was added in 1972, UTC was 10 seconds behind Atomic Time. So far, a total of 26 leap seconds have been added. This means that the Earth has slowed down an additional 26 seconds compared to atomic time since then. (However, this does NOT mean that the days are 26 seconds longer nowadays. The only difference is that the days on which a leap second was added had 86,401 seconds instead of the usual 86,400 seconds.)

Leap seconds and leap years are both implemented to keep our time in accordance with the position of Earth. However, leap seconds are added when needed, based on measurements, while leap years are regularly occurring events based on set rules. During leap years, an extra day is added as February 29th to keep the calendar synchronized with Earth’s revolution around the Sun. Leap years are necessary because the actual length of the year is 365.2422 days, not 365. The extra day is added every four years to compensate for most of the accumulated partial days. However, this is a slight over-compensation, so some century years are not leap years: only every fourth century year (those evenly divisible by 400) is a leap year. For instance, 2000 was a leap year, but 1900, 1800 and 1700 were not.
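The leap-year rule above translates directly into code. A short sketch:

```python
def is_leap_year(year: int) -> bool:
    """Gregorian leap-year rule: every fourth year is a leap year,
    except century years, unless the year is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap_year(2000))  # True: century year divisible by 400
print(is_leap_year(1900))  # False: century year not divisible by 400
print(is_leap_year(2016))  # True: ordinary fourth year
```

Note the ordering of the conditions mirrors the prose: the every-fourth-year rule is the baseline, and the century-year exception (with its own divisible-by-400 exception) is layered on top.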

Business Continuity Implications


The next leap second will be added on December 31, 2016 at 23:59:60 UTC. The difference between UTC and International Atomic Time (TAI) will then increase from the current 36 seconds to 37 seconds.
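The arithmetic behind those 36 and 37 second figures follows from the numbers given earlier: UTC started 10 seconds behind Atomic Time in 1972, and each inserted leap second widens the gap by one. A small illustrative sketch (the function name is mine):

```python
# TAI - UTC = the initial 10-second offset from 1972 plus one second
# for every leap second inserted since then.
INITIAL_OFFSET = 10  # seconds TAI was ahead of UTC before the first leap second

def tai_minus_utc(leap_seconds_inserted: int) -> int:
    """Offset of International Atomic Time ahead of UTC, in whole seconds."""
    return INITIAL_OFFSET + leap_seconds_inserted

print(tai_minus_utc(26))  # 36 seconds, before the Dec 31, 2016 insertion
print(tai_minus_utc(27))  # 37 seconds, after it
```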

According to the National Institute of Standards and Technology (NIST) by keeping Coordinated Universal Time (UTC) within one second of astronomical time, scientists and astronomers observing celestial bodies can use UTC for most purposes. If there were no longer a correction to UTC for leap seconds, then adjustments would have to be made to time stamps, legacy equipment and software which synchronize to UTC for astronomical observations. However, adding a second to UTC can create problems for some systems, including data logging applications, telecommunication systems and time distribution services. Special attention must be given to these systems each time there is a leap second.

I recently ran across Tom Brant’s essay in PC Magazine, “2016 Needs a ‘Leap Second’ to Sync With Earth’s Rotation,” which describes the next leap-second adjustment to atomic clocks, scheduled for December 31, 2016, and some of its technological implications.

According to Brant, “the people responsible for measuring the world’s time have got very good at determining just how long a second is supposed to last—the most accurate clock in the world uses cesium atoms to determine the exact length of a second, and it won’t get out of sync for at least 300 million years. But the question of how many seconds are in a year is far less certain. The International Earth Rotation and Reference Systems Service, which is responsible for Coordinated Universal Time (UTC), decides twice a year whether or not a “leap second” is needed to ensure that the world’s clocks are in sync with the Earth’s rotation, and this week its scientists decided that 2016 needs one of those extra seconds. It will be added at midnight on Dec 31, when clocks will read 11:59:59 p.m., then 11:59:60 p.m., before the stroke of 12:00:00 a.m. ushers in the year 2017. That slowdown is roughly equivalent to a loss of around two milliseconds per day, so the Paris-based IERS evaluates whether or not to add a leap second twice per year, on June 30 or December 31. As the US Naval Observatory explains, ‘[a]fter 500 days, the difference between the Earth rotation time and the atomic time would be one second. Instead of allowing this to happen a leap second is inserted to bring the two times closer together.’ A leap second has been added 26 times since the practice began in 1972, according to the observatory.”

“One day this year we’ll have 86,401 seconds, not the usual 86,400. When that’s happened before it’s caused some software to get way out of whack.”
– Network World

Patrick Nelson, writing in Network World before the last leap-second adjustment, identified some of the IT business continuity concerns:

“…official clocks will pause by one second to let the earth’s rotation catch up with atomic time. Shouldn’t be a problem, right? Only tell that to LinkedIn, Reddit, and Qantas. All three were running systems that crashed in 2012, when the last leap second was added. The prior leap second in 2005 also caused problems with some computers, including Google’s. Well, it’s time again for another one. So brace yourself for potential trouble. Indeed, it may well be time to ask server system vendors about their mitigation plans….


What happened in 2012? Issues arose at Foursquare, LinkedIn, Mozilla, Qantas, Reddit, StumbleUpon and Yelp, who all reported crashes, according to media reports. Joab Jackson of IDG News Service wrote at the time that unpatched Linux kernels, Hadoop instances, Cassandra databases and Java-based programs were affected. Some servers running Debian Linux went offline. What caused it?


Computing systems and their Network Time Protocol, or NTP, client software need to be programmed to handle unforeseen extra seconds. If the software isn’t programmed correctly, unexpected seconds can cause problems. NTP is used to sync with the atomic clock. In some cases in the 2012 leap second implementation, NTP had to be disabled in order to restore servers. Linux patches were available before that leap year adjustment because the NTP high-resolution timer used was known to potentially cause a livelock. Livelocks are a way that a process doesn’t progress. The patches presumably weren’t applied in some cases.”
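The quoted failure mode is easy to reproduce in miniature. POSIX/Unix time has no representation for a 61st second: every day is assumed to contain exactly 86,400 seconds, so the inserted 23:59:60 collapses onto the first second of the next day. A minimal Python sketch (standard library only; the timestamps correspond to the December 2016 leap second discussed above) illustrates why naive software can see time appear to stall or repeat:

```python
import calendar

# struct_time tolerates tm_sec == 60 (reserved for leap seconds), but
# timegm() does plain 86,400-seconds-per-day arithmetic, so the leap
# second folds into the first second of the next day.
leap_second = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
new_year = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))

print(leap_second)              # 1483228800
print(leap_second == new_year)  # True: two distinct UTC instants,
                                # one and the same Unix timestamp
```

Software that assumes timestamps strictly increase (schedulers, timers, databases) is exactly the kind of software that misbehaved in 2012, since during the leap second the system clock effectively stands still or steps backward.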


Robert McMillan, writing in Wired.com in advance of last year’s leap second insertion (“The Leap Second Is About to Rattle the Internet. But There’s a Plot to Kill It”), reported:

“The Qantas Airways computers started crashing just after midnight. A few hours later, as passengers started flying home from weekend getaways, there were long delays in Brisbane, Perth, and Melbourne, and the computers still didn’t work. Qantas flight attendants were forced to check passengers in by hand. That Sunday morning in July 2012 was a disaster for Amadeus IT Group, the Spanish company responsible for the software that had computer screens flickering at Qantas kiosks. But it wasn’t entirely the company’s fault. Most of the blame lay with an obscure decades-old timing standard for the UNIX operating system, a standard fashioned by well-intentioned astronomer time lords. They were working for an international standards body, a precursor of the International Telecommunications Union, which today officially tells clock-keepers how to tell the rest of the world what time it is. Back in 1972, they decided to insert the occasional leap-second into Coordinated Universal Time (UTC), the standard most of the world uses to set wristwatches.


We’ve had 25 of these leap seconds since then, and we’re about to get our 26th. This week, the modern time lords announced that the next leap second will arrive at 11:59 pm and 60 seconds on June 30. That has some computer experts worried. Amadeus wasn’t the only company to go glitchy during the last leap-second. Reddit, Foursquare, and Yelp all blew up thanks to the leap second and the way it messed with the underlying Linux operating system, which is based on UNIX.


The trouble is that even as they use the leap second, UNIX and Linux define a day as something that is unvarying in length. ‘If a leap second happens, the operating system must somehow prevent the applications from knowing that it’s going on while still handling all the business of an operating system,’ says Steve Allen, a programmer with California’s Lick Observatory. He likens it to the problem facing the HAL 9000, the fictional onboard computer in Stanley Kubrick’s 2001: A Space Odyssey, which loses its mind after it is programmed to lie. ‘All the problems that crop up are, in a metaphorical sense, the HAL 9000 problem. You have told your computer to lie. I wonder what it will do,’ he says. The Linux kernel folks aren’t expecting any major issues when July 1 comes around, but the situation is unpredictable. Back in 2012, Linux creator Linus Torvalds told us: ‘Almost every time we have a leap second, we find something.’ And this time around, there will be problems again. Torvalds doesn’t think they’ll be as widespread as they were three years ago, but they’re largely unavoidable. The ‘reason problems happen in this space is because it’s obviously rare and special, and testing for it in one circumstance then might miss some other situation,’ he says.”
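One mitigation widely publicized since the 2012 incidents is the “leap smear”: rather than inserting a literal 61st second, time servers slow their reported clock fractionally over a long window, so the extra second is absorbed and no client ever observes 23:59:60 or a repeated timestamp. The sketch below is a hypothetical linear smear for illustration only; real deployments differ in window length and smear shape.

```python
def smeared_time(true_elapsed, leap_epoch, window=86400.0):
    """Map true elapsed seconds (which include the leap second) to a
    'smeared' clock that absorbs the one extra second linearly over
    the `window` seconds ending at `leap_epoch`."""
    start = leap_epoch - window
    if true_elapsed <= start:
        return true_elapsed        # before the window: clocks agree
    if true_elapsed >= leap_epoch:
        return true_elapsed - 1.0  # after: smeared clock lags by exactly 1 s
    # Inside the window, each smeared second is slightly longer than a
    # real second, so the smeared clock falls behind gradually.
    absorbed = (true_elapsed - start) / window
    return true_elapsed - absorbed

LEAP = 1483228800  # 2017-01-01T00:00:00Z, just after the 2016 leap second
print(smeared_time(LEAP - 86400, LEAP))  # smear not yet started
print(smeared_time(LEAP - 43200, LEAP))  # halfway: 0.5 s absorbed
print(smeared_time(LEAP, LEAP))          # the full second absorbed
```

The price of smearing is that during the window the smeared clock disagrees slightly with true UTC, which is why it suits internal server fleets better than systems requiring strict traceability to UTC.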


The leap second is yet another contingency that business continuity planners, particularly those responsible for IT systems, should be aware of and should take preparedness measures for, so that they can mitigate and/or quickly recover from the next predictably unforeseen event.

View the following PBS video to learn more about the science behind the Leap Second.



Putting The Dangers of Today’s Risks, Crises and Disasters into (Some) Perspective

12/15/2016


 
An NBC/Wall Street Journal poll found that recent terrorist attacks have vaulted terrorism and national security to become one of the American public’s top concerns. This finding is consistent with a Gallup poll, which also showed terrorism as the public’s most important U.S. problem. A Harris Poll/HealthDay survey found that 25 percent of Americans view Ebola as the major public health threat to the United States. Sixty-three percent of Americans believe their world is becoming a riskier place, while only 15 percent feel it is less risky.

According to the National Crime Prevention Council (NCPC): “the news is full of stories about people who have been raped, robbed, mugged, or otherwise assaulted, and everyone cringes when they hear these reports. Who hasn’t feared becoming one of these victims? The truth, however, is that the incidence of personal violence has dropped to its lowest level in almost three decades. Violent crime – murder, rape, robbery, aggravated assault, and simple assault – was down from a high of 52.3 incidents per 1,000 people in 1981 to just 21.1 incidents per 1,000 in 2004, according to statistics compiled by the Bureau of Justice Statistics at the U.S. Department of Justice. Aggravated assault – which involves attack with a weapon or attack without a weapon that results in serious injury – was down even more sharply, from 12.4 incidents per 1,000 people in 1977 to just 4.3 incidents per 1,000 in 2004. Everyone – and this applies to residents of big cities, small towns, and even rural areas – needs to be careful, but these lower rates of crime are evidence that if people are vigilant and take common-sense precautions, crime can be prevented.”

Nonetheless, the Chapman Survey on American Fears discussed people’s perception of rising crime risks:

“What we found when we asked a series of questions pertaining to fears of various crimes is that a majority of Americans not only fear crimes such as child abduction, gang violence, sexual assaults and others, but also believe these crimes (and others) have increased over the past 20 years,” said Dr. Edward Day, who led this portion of the research and analysis. “When we looked at statistical data from police and FBI records, it showed crime has actually decreased in America in the past 20 years. Criminologists often get angry responses when we try to tell people the crime rate has gone down.” Despite evidence to the contrary, Americans do not feel like the United States is becoming a safer place. The Chapman Survey on American Fears asked respondents how they think the prevalence of several crimes today compares with 20 years ago. In all cases, the clear majority of respondents were pessimistic, and in all cases Americans believe crime has at least remained steady. Crimes specifically asked about were: child abduction, gang violence, human trafficking, mass riots, pedophilia, school shootings, serial killing and sexual assault.

Despite the public perceptions that the most serious risks are terrorism, Ebola and increasing crime – it may be helpful to put these perceptions into (some) perspective.

From the mid-1970s through 2014, at least 11,079 people died from outbreaks of Ebola in central and west Africa, according to the World Health Organization. This is obviously a tragedy and a concern. True enough, the number of terrorist attacks and fatalities reached a record high in 2012, according to the National Consortium for the Study of Terrorism and Responses to Terrorism: more than 8,500 terrorist attacks (very broadly defined) may have killed about 15,000 people that year, mostly in Africa, Asia and the Middle East. That development is certainly horrific and tragic on both the personal and social levels. However, in comparison, Yuval Noah Harari points out in his book Homo Deus that “in 2012 about 56 million people died throughout the world: 620,000 died due to human violence (120,000 were killed in wars). In contrast, 800,000 committed suicide and 1.5 million died of diabetes…. In 2010, obesity killed about 3 million people worldwide.”

Harari continues: “Yet, in the U.S. we spend $16 billion a year fighting ‘terrorism’ and just over $1 billion a year fighting diabetes. That’s about $360 million per terror victim. By way of comparison, we spend about $38 per person with diabetes in search of a cure…. For the average American or European, Coca Cola poses a far deadlier threat than al-Qaeda.”
Statistically speaking, Ebola is not even close to being the number one health problem despite the widespread perception that it is. Likewise, terrorism is not the biggest threat to personal safety and crime is not on the rise in the U.S.

According to the Centers for Disease Control, U.S. fatalities last year from unintentional falls (30,208), motor vehicle traffic crashes (33,804) and unintentional poisoning (38,851) each more than doubled the worldwide death toll from terrorist attacks. Yet none of these categories even appeared in the polling on the biggest risks and threats to personal safety. Workplace accidents claim far more U.S. lives than terrorists do: 4,821 workers were killed on the job in the U.S. during 2014 (3.4 per 100,000 full-time equivalent workers), on average more than 92 a week, or more than 13 deaths every single day. Cancer kills approximately 1,500 people each and every day. According to the World Health Organization (WHO), malnutrition is the biggest contributor to child mortality, with 36 million deaths recorded worldwide in 2005 related directly to malnutrition. Any comparison to the 15,000 deaths from terrorism seems obvious. The same comparison could be made for any of the health issues for which the WHO compiles statistics of annual fatalities. These include:
  • Heart Disease 6 million
  • Cerebrovascular disease 6 million
  • HIV-AIDS 6 million
  • Lower respiratory infections 8 million
  • Diarrhea 8 million
Statistics: World Health Organization (WHO)
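The comparisons in this section are simple rate arithmetic, and it is worth showing how easy they are to check. A short Python sketch using only the figures quoted above reproduces the per-week and per-day workplace fatality rates:

```python
# U.S. workplace fatalities for 2014, as quoted above.
workplace_deaths_2014 = 4821

per_week = workplace_deaths_2014 / 52   # weeks per year
per_day = workplace_deaths_2014 / 365   # days per year

print(round(per_week, 1))  # 92.7 -> "more than 92 a week"
print(round(per_day, 1))   # 13.2 -> "more than 13 deaths every single day"
```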
Why do people overestimate the threats from some risks and downplay others, regardless of the actual statistical facts? In part, the answer may lie in people’s exposure to the entertainment and news media’s selective, highlighted depictions of risk, violence and danger: narratives that tend to (over)emphasize sensationalized threats while devoting little attention to “mundane” but far more significant risks and dangers.

Mean world syndrome is one of the main concepts of media cultivation theory originally proposed by George Gerbner. It describes a phenomenon whereby violence-related content of mass media makes viewers believe that the world is more dangerous than it actually is.

According to the mean world syndrome construct, media (television, online, print, film, etc.) shape society’s perceptions of risks and dangers. In other words, people who watch these distorted (sensationalized) depictions in the media tend to think of the world as an intimidating and unforgiving place, more dangerous than it really is. The opinions, images and attitudes that viewers form when absorbing media content directly influence how they perceive the real world. Gerbner wrote that the syndrome would become more intense over time, describing how newer media technologies allow more complete access to, and wider spread of, recurrent messages.

So, if you consume media, both entertainment and news (e.g., television, newspapers, magazines, websites, online authors, YouTube, film, radio, blogs, etc.), you receive a distorted picture of the world, and in particular of its relative dangers. Your perception of dangers and risks is shaped by “reductionist” content reflecting the particular perspective of those who generate it. In most cases, the content is either implicitly or explicitly biased toward sensationalism. In both the entertainment and news businesses (and some say there is no longer much difference between the two), there is a motive to attract attention with ever more sensational story lines.

It may be helpful, as we consider priorities for crisis and disaster planning, to put the dangers into (some) perspective. This would also seem to be sound advice for public policy makers and public- and private-sector decision makers.


Do The Right Thing In Time of Disaster: A Lesson From The Louisiana Floods

11/30/2016


 
Gonzales, La., Thursday, August 19, 2016 — Regional Response Team Six conducts search and rescue operations by boat in Ascension Parish. (Photo by J.T. Blatty/FEMA)
There are far too many anecdotal tales told about crisis mismanagement and performance failures for disaster mitigation and recovery. Every once in a while, it is helpful to call our attention to those who seem to be doing the right things even at the worst times of a major disaster. One such positive story emerged in the aftermath of the August 2016 floods in Louisiana and Mississippi.

The state of Louisiana and some areas of Mississippi experienced severe storms and extreme flooding in August 2016. Deep, tropical moisture in combination with low pressure near the earth’s surface and aloft were the main ingredients that fueled the serious flooding in Louisiana and adjacent parts of southwest Mississippi. On the morning of Aug. 12, NOAA’s Weather Prediction Center said this when talking about the heavy rain event: “The best description of this system is that of an inland sheared tropical depression.”

This slow-moving area of low pressure and near-record atmospheric moisture led to extreme rainfall (more than 24 inches of rain) and historic flooding in southeast Louisiana between August 11 and August 31, 2016. More than 60,000 homes were impacted and at least 13 people lost their lives as a result of the severe flooding in Louisiana and Mississippi. As a result, the federal government declared a major disaster, believed to be the worst to hit the U.S. since Superstorm Sandy.

The rainfall it produced was indeed very similar to what one would expect from a slow-moving tropical depression or storm since rainfall potential is related to the forward speed of those types of systems. Rainfall totals in the double digits from slow-moving tropical depressions or storms can wreak extreme havoc on a region. Rivers can rise rapidly and easily exceed flood levels by a wide margin, inundating homes and businesses and in some cases making travel impossible. The highest storm total rainfall report was 31.39 inches near Watson, Louisiana, according to NOAA.

The rainfall total was higher than from any tropical cyclone or remnant in Louisiana since 1950, though an August 1940 hurricane wrung out 37.50 inches on Miller Island, according to NOAA/WPC forecaster and tropical cyclone rainfall expert, David Roth.

Here are some additional rainfall totals from NOAA:
  • 47 inches near Brownfields, Louisiana
  • 75 inches near Denham Springs, Louisiana
  • 84 inches near Gloster, Mississippi
  • 60 inches at Lafayette, Louisiana
  • 14 inches in Baton Rouge, Louisiana (Record daily rainfall on Friday and Saturday)
  • 43 inches in Panama City, Florida

Lafayette, Louisiana, had two consecutive days with 10 inches or more of rainfall, Aug. 12 and Aug. 13. Prior to that, in records dating to 1893, that had happened on only one other day in Lafayette.

Gonzales, La., Thursday, August 19, 2016 — Standing floodwaters remain in Ascension Parish. (Photo by J.T. Blatty/FEMA)
At least 11 river gauges saw record crests in Louisiana, some by a large margin. Here’s a list of the record crests:

  • Comite River at Comite Joor Road: Record crest set by 3+ feet on Aug. 14
  • Comite River near Olive Branch: Record crest set on Aug. 13
  • Amite River at Magnolia: Record crest set by 6+ feet on Aug. 13
  • Amite River at Denham Springs: Record crest set by nearly 5 feet on Aug. 14
  • Amite River Basin at Bayou Manchac Near Little Prairie: Record crest set on Aug. 14
  • Amite River at Bayou Manchac Point: Record crest set on Aug. 14
  • Amite River at Port Vincent: Record crest set by almost 3 feet on Aug. 14
  • Amite River at French Settlement: Record set on Aug. 14
  • Tangipahoa River at Robert: Record crest set on Aug. 13
  • Tickfaw River at Holden: Record crest set on Aug. 13
  • Tickfaw River at Liverpool: Record crest set on Aug. 12

The Vermillion River at Lafayette, Louisiana, crested at its highest level since an August 1940 hurricane, about 7.5 feet above flood stage and about 6 feet above the March 2016 flood.

Louisiana Economic Development estimates that the August 2016 Louisiana flood caused $8.7 billion in damage to residential and commercial properties in the state, with damage to businesses exceeding $2 billion. Those figures do not include damage to the state’s public infrastructure.

In addition, more than 6,000 businesses were impacted by the flooding. The combined cost of losses for those buildings and their contents was estimated to be more than $2.2 billion.

With an estimated 146,000 homes damaged in the flooding, and 60,000 residents left homeless, thousands of Louisianans were forced into shelters, with more than 11,000 seeking refuge in state-operated shelters. Because many of the areas that flooded were not in “high flood risk areas,” the majority of homeowners affected by the flood did not have flood insurance.

Crisis Management Exemplariness

In an article in The Wall Street Journal (“One CEO’s Hands-On Crisis Management,” 21 September 2016, B8), Lauren Weber praises Amedisys CEO Paul Kusserow for doing the right thing during the disaster.

Amedisys Home Health and Hospice Care, based in Baton Rouge, Louisiana, is one of the largest home health providers and the fourth largest hospice care provider in the United States. Amedisys provides in-home skilled nursing, physical therapy, occupational therapy, speech-language pathology, medical social work, home aides, and hospice and bereavement services, with 11 million patient care visits annually.

The Amedisys corporate philosophy includes the following:

Mission: To provide patient-centered care every day and be the leading healthcare at home team in the communities we serve.

Core Beliefs: The Amedisys SPIRIT
  • Service – Remember why we are here
  • Passion – Care and serve from the heart
  • Integrity – Do the right thing, always
  • Respect – Communicate openly and honestly
  • Innovation – Influence and embrace change
  • Talent – Invest in personal and professional growth

Amedisys, as part of their corporate philosophy is committed to participating in the disaster response in the communities where they operate. “Assistance in Disaster Recovery Efforts: Our care centers are part of communities across the country. When floods, blizzards or other federally declared disasters strike, we stand with our people offering ongoing care and assistance to help recovery efforts.”

However, it is the extra measures undertaken during the August floods to take care of the company’s own employees and patients that have attracted praise for its crisis management.

According to Weber’s “Workarounds” report:

The August disaster had left nearly one quarter of Amedisys’ 400 employees with flooded homes and property, and put the company’s patients at risk. Among the people who died in the flooding was Bill Borne, the founder of Amedisys.

Weber quotes Kusserow’s recounting of steps taken in the days after the rains had begun:

“We knew the flooding might shut down fuel stations and that wait times would be huge at the remaining ones, so we brought in a fuel truck on August 14 and were dispensing fuel to caregivers so they could get out to see their patients. We started to collect information on whose property and homes were damaged and we just wired them all $2500 straight out of the bank account. I went to a Lowe’s in New Orleans in the middle of the night with our general counsel. Those were long lines. We bought mops, buckets, fans, bleach, anti-mold spray, anything that was on the shelves.”

Weber concludes that “every crisis is an opportunity to do the right thing.”

Paul Kusserow and his team did their jobs and protected their people and their clients at the most challenging time. Most of us in the crisis and disaster management field know that they “were just doing what they had planned and prepared to do” at these critical moments, and I’m guessing they aren’t necessarily looking for praise or applause. Nonetheless, I think it appropriate, for those of us in the crisis and disaster management sector, to call attention to those who seem to be doing the right things even at the worst times of a major disaster. In this case, let’s give a thumbs-up and a pat on the back to Paul Kusserow and his team at Amedisys for their “above and beyond” actions during the August floods.


The Coming Seafood Crisis: Path to Sustainability or Collapse

11/17/2016


 
Image: NOAA
In the decade since David Biello warned of the catastrophic consequences of overfishing the planet’s oceans in a Scientific American article (“Overfishing Could Take Seafood Off the Menu by 2048”), the looming problem has still not been solved.

Biello wrote in 2006: “In 1994, seafood may have peaked. According to an analysis of 64 large marine ecosystems, which provide 83 percent of the world’s seafood catch, global fishing yields have declined by 10.6 million metric tons since that year. And if that trend is not reversed, total collapse of all world fisheries should hit around 2048. ‘Unless we fundamentally change the way we manage all the oceans species together, as working ecosystems, then this century is the last century of wild seafood,’ notes marine biologist Stephen Palumbi of Stanford University. Marine biologist Boris Worm of Dalhousie University in Halifax, Nova Scotia, gathered a team of 14 ecologists and economists, including Palumbi, to analyze global trends in fisheries. In addition to data from the U.N. Food and Agriculture Organization stretching back to 1950, the researchers examined 32 controlled experiments in various marine ecosystems, observations from 48 marine protected areas, and historical data on 12 coastal fisheries for the last 1,000 years. The latter study shows that among commercially important species alone, 91 percent have seen their abundance halved, 38 percent have nearly disappeared and 7 percent have gone extinct, with most of this reduction happening since 1800. ‘We see an accelerating decline in coastal species over the last 1,000 years, resulting in the loss of biological filter capacity, nursery habitats and healthy fisheries,’ notes team member Heike Lotze, also of Dalhousie.” A decade later we are still facing a potential catastrophic food disaster.

Amanda Leland (Senior Vice President, Oceans, Environmental Defense Fund) currently warns that “We have a choice to get fishing right. If we don’t, we’ve got a serious food crisis on our hands.”

So What Happened to all the Fish?

Population growth, rising demand for seafood and ever more efficient fishing technology have been rapidly depleting seafood resources. Pair the growing population’s demand with boats that can stay at sea longer, floating factories that can catch and process fish on board, and you have overfishing. As catches have dwindled over the years, fishing fleets have resorted to casting out bigger nets. These nets are indiscriminate: for every 1 ton of prawns caught, 3 tons of little fish are caught in the prawn nets and thrown away. The ocean cannot renew fast enough what we are rapidly consuming.

According to TheWorldCounts.com “the whales, sharks, Bluefin tuna, king mackerel, dolphins and marlin are disappearing or have already disappeared. It took us only 55 years to wipe out 90% of the ocean’s predators causing a disruption of the marine ecosystem. After the big fish, commercial fishermen will just go down the food chain, until we’ve depleted everything.”

TheWorldCounts.com continues by noting that “it’s the aquatic equivalent of deforestation. Boats cast huge and heavy nets that are held open by heavy doors weighing several tons each and drag them across the ocean floor! Just imagine the devastation that causes. Destruction of habitat: coral reefs, which are home to 25% of all marine life, are being destroyed. The reefs grow at a rate of 0.3 cm to 10 cm a year; what you see now has been growing for the last 5,000 to 10,000 years. Climate change: the increase in sea water temperatures is attracting invasive species which are competing with us for our food. In the past decade there has been an ever growing demand, from an ever growing population. The ‘net’ result has been overfishing to the point of depleting sustainable levels of seafood.”

Image: NOAA
Overfishing

National Geographic defines ocean overfishing as “simply the taking of wildlife from the sea at rates too high for fished species to replace themselves. The earliest overfishing occurred in the early 1800s when humans, seeking blubber for lamp oil, decimated the whale population. Some fish that we eat, including Atlantic cod and herring and California’s sardines, were also harvested to the brink of extinction by the mid-1900s. Highly disruptive to the food chain, these isolated, regional depletions became global and catastrophic by the late 20th century.”

The World Wildlife Fund reports that “Overfishing occurs when more fish are caught than the population can replace through natural reproduction. Gathering as many fish as possible may seem like a profitable practice, but overfishing has serious consequences. The results not only affect the balance of life in the oceans, but also the social and economic well-being of the coastal communities who depend on fish for their way of life. Billions of people rely on fish for protein, and fishing is the principal livelihood for millions of people around the world. For centuries, our seas and oceans have been considered a limitless bounty of food. However, increasing fishing efforts over the last 50 years as well as unsustainable fishing practices are pushing many fish stocks to the point of collapse. More than 85 percent of the world’s fisheries have been pushed to or beyond their biological limits and are in need of strict management plans to restore them. Several important commercial fish populations (such as Atlantic bluefin tuna) have declined to the point where their survival as a species is threatened. Target fishing of top predators, such as tuna and groupers, is changing marine communities, which lead to an abundance of smaller marine species, such as sardines and anchovies.”

“Marine scientists know when widespread overfishing of the seas began. And they have a pretty good idea when, if left unaddressed, it will end. In the mid-20th century, international efforts to increase the availability and affordability of protein-rich foods led to concerted government efforts to increase fishing capacity. Favorable policies, loans, and subsidies spawned a rapid rise of big industrial fishing operations, which quickly supplanted local boatmen as the world’s source of seafood. These large, profit-seeking commercial fleets were extremely aggressive, scouring the world’s oceans and developing ever more sophisticated methods and technologies for finding, extracting, and processing their target species. Consumers soon grew accustomed to having access to a wide selection of fish species at affordable prices. But by 1989, when about 90 million metric tons of catch were taken from the ocean, the industry had hit its high-water mark, and yields have declined or stagnated ever since. Fisheries for the most sought-after species, like orange roughy, Chilean sea bass, and bluefin tuna have collapsed. In 2003, a scientific report estimated that industrial fishing had reduced the number of large ocean fish to just 10 percent of their pre-industrial population.” (Source: National Geographic)

Overfishing.org provides illustrative examples: “The single best example of the ecological and economic dangers of overfishing is found in Newfoundland, Canada. In 1992 the once thriving cod fishing industry came to a sudden and full stop when at the start of the fishing season no cod appeared. Overfishing allowed by decades of fisheries mismanagement was the main cause for this disaster, which resulted in almost 40,000 people losing their livelihood and an ecosystem in a complete state of decay. Now, fifteen years after the collapse, many fishermen are still waiting for the cod to return and communities still haven’t recovered from the sudden removal of the region’s single most important economic driver. The only people thriving in this region are the ones fishing for crab, a species once considered a nuisance by the Newfoundland fishermen. It is not only the fish that are affected by fishing. As we fish down the food web, ever greater effort is needed to catch something of commercial value, and marine mammals, sharks, sea birds, and non-commercially viable fish species are overexploited, killed as bycatch and discarded (up to 80% of the catch for certain fisheries), and threatened by industrialized fisheries. Scientists agree that at current exploitation rates many important fish stocks will be removed from the system within 25 years. Dr. Daniel Pauly describes it as follows: ‘The big fish, the bill fish, the groupers, the big things will be gone. It is happening now. If things go unchecked, we’ll have a sea full of little horrible things that nobody wants to eat. We might end up with a marine junkyard dominated by plankton.’”

Overfishing is as big a threat to humanity as it is to earth’s oceans.

Image: NOAA
This May Not End Well – Global Food Disaster Looms

Currently 3 billion people living on the planet rely on seafood as a key protein source for survival. Millions of people worldwide depend on the oceans for their daily livelihoods and hundreds of macro and micro economies are reliant upon fishing and seafood harvesting.

Faced with the collapse of large-fish populations, commercial fleets are going deeper into the ocean and farther down the food chain for viable catches. This so-called “fishing down” is triggering a chain reaction that is upsetting the ancient and delicate balance of the sea’s biologic system. A study of catch data published in 2006 in the journal Science grimly predicted that if fishing rates continue apace, all the world’s fisheries will have collapsed by the year 2048.

The Environmental Defense Fund warns that “of all the threats facing the oceans today, overfishing takes the greatest toll on sea life—and people. Overfishing is catching too many fish at once, so the breeding population becomes too depleted to recover. Overfishing often goes hand-in-hand with wasteful types of commercial fishing that haul in massive amounts of unwanted fish or other animals, which are then discarded. As a result of prolonged and widespread overfishing, nearly a third of the world’s assessed fisheries are now in deep trouble—and that’s likely an underestimate, since many fisheries remain unstudied. Overfishing endangers ocean ecosystems and the billions of people who rely on seafood as a key source of protein. Without sustainable management, our fisheries face collapse—and we face a food crisis. Poor fishing management is the primary cause. Around the world, many fisheries are governed by rules that make the problem worse, or have no rules at all.”

Commondreams.org reports that “people are driving marine ecosystems to ‘unprecedented’ mass extinction, according to a new study published Wednesday in the journal Science. Large-bodied animals will be the first to go, the study says: blue whales, great white sharks, and bluefin tuna, for example. Their size is part of their vulnerability, making them more susceptible to fishing and hunting by humans, ‘the dominant threat to modern marine fauna,’ the researchers found. ‘If this pattern goes unchecked, the future oceans would lack many of the largest species in today’s oceans,’ co-author Jonathan Payne, associate professor and chair of geological sciences at Stanford University, told the Guardian. ‘Many large species play critical roles in ecosystems and so their extinctions could lead to ecological cascades that would influence the structure and function of future ecosystems beyond the simple fact of losing those species.’ The study states that ‘the preferential removal of the largest animals from the modern oceans, unprecedented in the history of animal life, may disrupt ecosystems for millions of years even at levels of taxonomic loss far below those of previous mass extinctions…. Without a dramatic shift in the business-as-usual course for marine management, our analysis suggests that the oceans will endure a mass extinction of sufficient intensity and ecological selectivity to rank among the major extinctions of the current era.’ In fact, the researchers note, it could usher in a new one, the Anthropozoic era, meaning one created by humans. That’s not to be confused with the Anthropocene, an epoch which scientists estimate is already here.”

Contemporary Responses

We’ve had an overfishing problem for a long time. Over the past 55 years, as fisheries have returned lower and lower yields, humans have begun to understand that the oceans we’d assumed were unendingly vast and rich are in fact highly vulnerable and sensitive. Add overfishing to pollution, climate change, habitat destruction and acidification, and a picture of a system in crisis emerges.

As part of the response to these trends, we’ve seen the rise of farm-raised fish and aquaculture. It’s a good idea, but it’s brought its own set of issues.

Rick Moonen, writing in The Huffington Post blog, states that “aquaculture may be fairly new, but what many consumers don’t realize is that the practice has brought its own set of dangers. Many farming processes, specifically salmon, include the same types of antibiotics and overdosing problems we’ve been warned about with commercially raised cattle. Crowded pens, sea lice infestations, toxic chemicals used to eradicate weed and algae growth – all symptoms of irresponsible farming practices, all present in modern day aquaculture. While the message may be bleak, it’s not all bad news. Similar to commercial cattle operations, I’m now seeing responsible, passionate, and thoughtful aquaculture operations come into their own. These types of operators, including Canada’s True North Salmon, have taken a different approach, using environmentally friendly practices and vertical integration to provide our country’s favorite fish in a better way. These companies have looked past profits, and are raising fish in their native environments, embracing their natural habits and using antibiotics when needed, not as a blanket or preventative solution. Driving this positive change? I give the credit to consumers: you’ve asked the right questions; you care about where your food is coming from. I’m proud to be a part of a huge movement to protect our oceans, and every time a restaurant customer checks Monterey Bay Aquarium’s Seafood Watch app or site before ordering, we further that cause and that protection. I’ve given my career to this cause, and to see change on a consumer level keeps me going. I encourage each and every consumer to continue until we have more responsible solutions than irresponsible ones. Keep asking questions when you dine out and at the grocery store, know which choices are responsible and which ones contribute to environmental damage – and frankly aren’t good for your body.”

Image: NOAA
The Environmental Defense Fund suggests that “with smarter management systems, known as fishing rights, we can reverse the incentives that lead to overfishing. Under fishing rights, fishermen’s interests are tied to the long-term health of a fishery. Their income improves along with the fish population.”

Overfishing.org suggests that the effects of overfishing are still reversible, that is, if we act now and act strongly. They advocate the following actions:

“When fish stocks decline and fisheries become commercially unviable, the damaged stock gets some rest and generally struggles along at a pathetic level compared to its pre-fishing level, but doesn’t go biologically extinct. A damaged system is struggling and shifting, but can still be active (e.g. filled with jellyfish instead of cod). If we want to, we can reverse most of the destruction. In some situations it might only take a decade; in other situations it might take many centuries. Yet in the end we can have productive and healthy oceans again, as is shown in many examples around the world. We do, however, need to act on it now, before we cross the point of no return.

Every long-term successful and sustainable fishery, near-shore or high-seas, needs to be managed according to some basic ground rules:

  • Safe catch limits
A constantly reassessed, scientifically determined limit on the total number of fish caught and landed by a fishery. Politics and short-term economic incentives should have no role in this.

  • Controls on bycatch
The use of techniques or management rules to prevent the unintentional killing and disposal of fish, crustaceans and other oceanic life not part of the target catch or landed.

  • Protection of pristine and important habitats
The key parts in ecosystems need full protection from destructive fisheries; e.g. the spawning and nursing grounds of fish, delicate sea floor, unique unexplored habitats, and corals.

  • Monitoring and Enforcement
A monitoring system to make sure fishermen do not land more than they are allowed to, do not fish in closed areas, and cheat as little as possible. Strong monetary enforcement is needed to make it uneconomic to cheat. We need to make sure management systems based on these rules are implemented everywhere, in combination with the banning of the lavish (hidden) subsidies to commercially unviable fisheries.”

Overfishing.org further suggests individual actions to address this looming issue:
“It’s fair to say that individuals cannot solve this global problem all by ourselves; we need politicians to strengthen international law. What we can do is make a difference. Over a decade ago many people started buying dolphin-friendly tuna. Now the time has come to buy ocean-friendly tuna. Here are some of the actions you yourself can undertake.

  • Be informed
Read up a bit on the issues of overfishing, have a look at some articles on this site, see if you can find some information regarding your local situation. Keep in mind that while this is a global problem every local situation is different.

  • Know what you eat
If you eat fish make sure you know what you eat, and pick the ones with the lowest impact. Have a look at the Guide to Good Fish Guides for some tips.

  • Spread the word
I know, it’s all rather obvious, but this is simply how it works. Let your voice be heard!”



The Little Acknowledged Crisis of Alcohol Abuse in the Workplace – Part 2: Preventing and Mitigating the Disaster

11/2/2016

In part 1 of this essay on alcohol abuse in the workplace, the nature of the crisis and the scope of the disaster were laid out. This essay seeks to summarize some of the recommended responses and possible solutions for this major workplace crisis.

As noted in part 1 of this essay, alcohol is the single most used and abused drug in America. According to the National Institute on Alcohol Abuse and Alcoholism (NIAAA), nearly 14 million Americans (1 in every 13 adults) abuse alcohol or are alcoholics. In the workplace, the costs of alcoholism and alcohol abuse manifest themselves in many different ways. Absenteeism is estimated to be 4 to 8 times greater among alcoholics and alcohol abusers. Other family members of alcoholics also have greater rates of absenteeism. Accidents and on-the-job injuries are far more prevalent among alcoholics and alcohol abusers.

Brian Hughes reported that about 17.6 million adults in the U.S. currently suffer from alcohol abuse or dependence. Several million more people engage in risky, binge drinking patterns that can lead to alcohol addiction. Binge drinking means drinking five or more alcoholic beverages on the same occasion on at least one day in the past 30 days. About one-quarter of college students say that excessive drinking causes them to miss classes and fall behind, perform poorly on exams, and receive lower grades overall. Binge drinking starts early, so by the time a person is ready to seriously pursue a career, these patterns may be hard to break. Heavy drinkers may find themselves in a situation similar to the one they faced at school, but now their livelihood and financial future are at stake.

We must first acknowledge the depth, scope and severity of the alcohol abuse crisis. We have to admit that there is a problem and then proceed to talk about it. Secondly, we need to explore potential responses and solutions. A further specific question arises for those of us interested in sustaining for-profit and not-for-profit operations in the face of this widespread challenge: how do we prevent or mitigate this disastrous situation?

Mitigating the Workplace Alcohol Abuse Disaster

Workplaces provide opportunities for preventing alcohol-based contingencies and critical problems, in part because people spend a large amount of time at the workplace and employers can use their employment leverage to motivate an employee to seek help for an alcohol problem. Such efforts include the use of employee assistance programs (EAPs) and Drug-Free Workplace Programs (DFWPs), designed both to reduce employee alcohol problems and to examine risk factors for alcohol problems that exist in the work environment.

Alcohol abuse problems in the workplace are usually identified in one of two ways:

  1. The linkage of a drinking pattern with job performance problems, such as a pattern of poor-quality work, poor quantity of work, attendance problems, or problems related to interaction with clients or customers.
  2. Employees’ own assessments that their drinking behaviors are causing problems for themselves.
According to NCADD, the workplace “can be an important and effective place to address alcoholism and other drug issues by establishing or promoting programs focused on improving health. Many individuals and families face a host of difficulties closely associated with problem drinking and drug use, and these problems quite often spill over into the workplace. By encouraging and supporting treatment, employers can dramatically assist in reducing the negative impact of alcoholism and addiction in the workplace, while reducing their costs. Without question, establishment of an Employee Assistance Program (EAP) is the most effective way to address alcohol and drug problems in the workplace. EAPs deal with all kinds of problems and provide short-term counseling, assessment, and referral of employees with alcohol and drug abuse problems, emotional and mental health problems, marital and family problems, financial problems, dependent care concerns, and other personal problems that can affect the employee’s work.

This service is confidential. These programs are usually staffed by professional counselors and may be operated in-house with agency personnel, under a contract with other agencies or EAP providers, or a combination of the two. Additionally, employers can address substance use and abuse in their employee population by: implementing drug-free workplace and other written substance abuse policies; offering health benefits that provide comprehensive coverage for substance use disorders, including aftercare and counseling; reducing stigma in the workplace; and educating employees about the health and productivity hazards of substance abuse through company wellness programs.”

  • Research has demonstrated that alcohol and drug treatment pays for itself in reduced healthcare costs that begin as soon as people begin recovery.
  • Employers with successful EAPs and DFWPs report improvements in morale and productivity and decreases in absenteeism, accidents, downtime, turnover, and theft.
  • Employers with longstanding programs also report better health status among employees and family members and decreased use of medical benefits by these same groups.

Paul M. Roman, Ph.D., and Terry C. Blum, Ph.D. (NIH National Institute on Alcohol Abuse and Alcoholism), authors of The Workplace and Alcohol Problem Prevention, offer the following:

“Workplace programs to prevent and reduce alcohol-related problems among employees have considerable potential. For example, because employees spend a lot of time at work, coworkers and supervisors may have the opportunity to notice a developing alcohol problem. In addition, employers can use their influence to motivate employees to get help for an alcohol problem. Many employers offer employee assistance programs (EAPs) as well as educational programs to reduce employees’ alcohol problems. However, several risk factors for alcohol problems exist in the workplace domain. Further research is needed to develop strategies to reduce these risk factors.

As a domain for alcohol-problem prevention, the workplace holds great promise. In the United States and, increasingly, around the world, the majority of adults who are at risk for alcohol problems are employed. As described here, employers have several well-defined means at their disposal for intervening with problem drinking. Those methods serve not only the interests of the employer but also those of the employees and their dependents. Furthermore, the potential for a preventive impact is worldwide. Western styles of workplace organization and employment relationships have spread to influence global practices, setting the stage for the diffusion of workplace interventions and for addressing emerging economies’ increasing alcohol problems (Masi 2000; Roman in press).

Despite these possibilities, the development of prevention programs in U.S. workplaces has slowed considerably in recent years and, in fact, may be in need of revitalization (Roman and Baker 2001; Roman in press). The decline in workplace attention to alcohol problems illustrates the need for creating and maintaining an infrastructure for sustaining alcohol interventions in settings not typically associated with the delivery of health care.”

Recommendations for Responding to the Workplace Alcohol Abuse Crisis

Executives, managers, human resources specialists, business continuity experts and workplace safety technicians have several opportunities for addressing this disastrous workplace crisis. Roman and Blum suggest guidelines and measures to respond to the workplace alcohol abuse crisis including the following:

  • Full-time employees spend a significant proportion of their time at work, increasing the possibility of exposure to coordinated communication including preventive messages or programs offered through the workplace. The likelihood that evidence of problem drinking will become visible to those who might have a role in intervention also is increased.
  • Work plays an important role in most people’s lives. Because many adults’ roles in the family and community are dependent on maintaining the income, status, and prestige that accompanies employment, the relationship between the employer and the employee contains a degree of “leverage.”
  • The employer has the right to expect an adequate level of job performance. If alcohol abuse breaches the rules of the employer-employee agreement or is associated with substandard job performance, the employer may withdraw pay or privileges associated with the job, thus motivating the employee with alcohol problems to change his or her behavior.
  • Workplace programs should include both primary and secondary prevention measures. Primary prevention aims to keep alcohol problems from developing, and secondary prevention seeks to reduce existing problems. Primary prevention often is more cost-effective than secondary prevention; however, the workplace is not conducive to strategies aimed at preventing alcohol use. Most employees are adults and therefore are legally allowed to consume alcohol. Employers rarely are in a position to prevent their employees from initiating drinking as an off-the-job lifestyle practice, nor do they desire to do so.
  • At the same time, employers want their employees to perform their jobs well and not disrupt or endanger coworkers’ activities. Smooth work transactions with customers and other members of the public also are important in many organizations, including the service sector.
  • The principal means for addressing an employee’s off-the-job drinking is through alcohol education programs conducted at the worksite. These programs usually are associated with an EAP, a health promotion program, or both. The goal of these education programs often is to encourage behavioral change or use of the associated services (i.e., self-referral to an EAP).
  • In addition to alcohol education programs, employers also may offer health promotion programs, which may motivate employees to alter their drinking behaviors. When health problems such as weight, high blood pressure, or gastric problems are identified in a health risk survey administered at the worksite, the administering health worker may suggest a reduction in drinking as a means of alleviating the primary symptom. Alternatively, employees undertaking exercise programs or other health-oriented activities might change their drinking behavior because drinking may not be consistent with their new healthy regimen.
  • As part of workplaces’ “rules of conduct” or “fitness for duty” regulations, supervisors are often empowered to discipline or remove an employee from the job on the suspicion of drinking. However, if an employee is suspected of drinking based on evidence such as odor of alcohol or appearance of intoxication, the employee may object, which could lead to litigation. When alcohol use is suspected, alcohol testing can be used to establish whether the employee was in fact drinking. Specific techniques include both breath testing and blood testing.
  • Compared with EAPs, prevention efforts focused on reducing risk factors in the work environment may offer the greatest potential payoff. This approach is the most problematic in terms of implementation, however. One possible avenue would be to identify and alter work environments that have “toxic” connections to alcohol problems. Employers would be reluctant, however, to participate in efforts that might highlight their liability in creating high-risk environments.

Roman and Blum also argue that “despite the potential problems in implementing interventions to reduce risk factors in the workplace, research has examined several work-related factors that may contribute to alcohol use and related problems among employees.” These identified risk factors are listed below:
  1. Stress
  2. Alienation
  3. Dysfunctional Cultures
  4. Dysfunctional Subcultures

Such situational factors correlated with alcohol abuse should also be addressed: where possible, mitigate these variables or enhance the resilience resources available to vulnerable employees.

Mitigation can be Effective as well as Cost-Effective

For a long time we have known that coordinated efforts for mitigation do work. One foundational study from three decades ago determined that the numerous company programs in North America that have developed countermeasures against drug and alcohol abuse in the workplace, ranging from prevention, health promotion and education, to treatment and rehabilitation, provide instructive examples of an effective approach that in most cases has more than paid for its cost. (Source: Shahandeh, Behrouz, International Labour Review, v124, n2, pp. 207-223, Mar-Apr 1985)

According to the U.S. Department of Labor, there are numerous past examples of successful prevention and mitigation of alcohol abuse disasters where a concerted effort was undertaken. These include:
  • “One small plumbing company in Washington, D.C., the Warner Corporation, saved $385,000 in one year by establishing a drug-free workplace program that included EAP services. The company attributed the savings to a decrease in the number of accidents, which resulted in lower workers’ compensation costs and lower vehicle insurance premiums. Warner now has a waiting list of top-flight mechanics wanting to work in its drug-free environment, saving the company $20,000 a year on personnel advertising costs. Additionally, the proportion of apprentices completing a two-year training course has increased from 25 percent to 75 percent, resulting in annual savings of $165,000.
  • In 1984, CSX Transportation, a freight railroad company, implemented Operation Redblock, a response to widespread violations of Rule G, which prohibits the use and possession of alcohol and drugs. The program’s 4000 volunteers are trained to confront substance abusers, and if appropriate, refer them to the company’s EAP. Since 1990, less than one percent of the drug tests administered to safety-sensitive employees have been positive.
  • After implementing a comprehensive drug-free workplace program in response to a workers’ compensation discount law, W.W. Gay Mechanical Contractors in Florida saved $100,000 on workers’ compensation premiums in 1990, and also has experienced increased productivity, reduced absenteeism, and fewer accidents.
  • Only four years after implementing a workplace substance abuse program which included drug testing, Jerry Moland of Turfscape Landscape Care, Inc., in Chandler, AZ, says that his company is saving over $50,000 a year due to increased productivity, fewer accidents, and less absenteeism and turnover.
  • According to the American Management Association’s annual Survey on Workplace Drug Testing and Drug Abuse Policies, workplace drug testing has increased by more than 1,200 percent since 1987. More than 81 percent of businesses surveyed in 1996 were conducting some form of applicant or employee drug testing. Likewise, the perceived effectiveness of drug testing, as assessed by human resources managers, has increased from 50 percent in 1987 to 90 percent in 1996.
  • In 1995, the average annual cost of EAP services per eligible employee nationwide was $26.59 for internal programs staffed by company employees and $21.47 for external programs provided by an outside contractor, according to the Research Triangle Institute. These costs compare favorably to the expense of recruiting and training replacements for employees terminated due to substance abuse problems—about $50,000 per employee at corporations such as IBM.
  • The Ohio Department of Alcohol and Drug Addiction Services conducted a follow-up survey of 668 substance abuse treatment residents one year after completing treatment. Findings indicated that absenteeism decreased by 89 percent, tardiness by 92 percent and on-the-job injuries by 57 percent.

Statistics such as these suggest not only that workplace substance abuse is an issue all employers need to address, but also, that it is an issue that can be successfully prevented. Taking steps to raise awareness among employees about the impact of substance use on workplace performance, and offering the appropriate resources and/or assistance to employees in need, will not only improve worker safety and health, but also increase workplace productivity and market competitiveness.”

Conclusion

One glimmer of good news is that there appear to have been some declines in reported alcohol abuse and binge drinking between 2002 and 2013. However, 30.2 percent of men and 16.0 percent of women [12 and older] reported binge drinking in the past month during the 2013 data collection period, and 9.5 percent of men and 3.3 percent of women reported “heavy alcohol” use in the same period.

Nonetheless, alcohol abuse is a serious crisis for workplaces. Furthermore, there is a persistent gap between recognition of the problem and the delivery of appropriate solutions and responses. For example, according to the National Institute on Drug Abuse, there continues to be a large “treatment gap” in this country: in 2013, an estimated 22.7 million Americans (8.6 percent) needed treatment for a problem related to drugs or alcohol, but only about 2.5 million people (0.9 percent) received treatment at a specialty facility.

Thus, the current alcohol abuse crisis in the workplace is both a challenge and an opportunity. Managers and executives are encouraged to establish responsive procedures and policies so that this challenge can be met in a professional and consistent manner, and in a way that takes advantage of the workplace setting to enhance the potential for success. It is important for supervisors and managers to have a resource or procedure that they can rely on. Employees need to know that everyone will be treated the same way. Pre-planning and utilizing a comprehensive and coordinated communication program, as for many other occupational health and safety issues, is the best way to avoid confusion and frustration in times that are already difficult.

Images: Centers for Disease Control (CDC)