Last week I held my Webinar on FDA’s Regulation of Social Media called “In More than 140 Characters…” During the one-hour presentation, I attempted to cover a lot of ground, wanting not only to focus on the two latest guidance documents from this past June, but also to provide a comprehensive overview of where we are with the five questions raised by FDA at the November 2009 meeting. It was an ambitious agenda, and some things I wanted to cover in the presentation had to end up on the cutting room floor. Now there is time to take a look at a few of them, and this is one.
One of the guidance documents was on a topic that had long been of interest to stakeholders of all kinds: can a company correct misinformation posted by independent third parties, and if so, how should it go about it? Until now, many companies have been fearful of the unknown consequences of correcting misinformation and have opted to steer clear.
The guidance document issued by FDA in June on that topic made clear that a company can correct misinformation, but is not required to. Moreover, if a company decides to correct misinformation, the agency also prescribed principles that must be met for the correction to be compliant. In other words, whether or not to correct is a decision left entirely up to the company, at least from the regulatory perspective. Because acting to correct misinformation carries some degree of regulatory risk, companies will need to develop a policy for their approach. That approach begins with deciding which misinformation should be corrected. Should it be all of which the company becomes aware? Should it be none? Is there an in-between?
This is not simple, and many considerations go into the development of a policy. Here is one way to look at assessing the situation.
Let’s put it on an X-Y axis. On one side – the X axis – let’s put a measurement of the severity of the misinformation. This would involve creating a scale to assess severity. Factors in creating that scale might include how many people are affected, the likelihood that the misinformation could cause harm, and the seriousness of that harm. The greater the potential gravity of the misinformation, the further along the scale it would move.
On the opposing Y axis, we might consider the reach of the communicator. If the misinformation was conveyed by someone in a venue that has only a small audience, it may not be particularly important to correct it. However, audience size is not the only factor to consider. A small audience can nevertheless be extremely important if it is a potent one – that is, if the people getting the message are likely to repeat it to a larger audience. It will be important to consider how to assess influence.
The end result may look something like this. There may be less of a compelling case to correct misinformation that falls into the green quadrant, where audience exposure is low and the severity of the misinformation is also low. A case could be made for correcting that which falls in yellow. And misinformation that falls into the red quadrant, where both exposure and resulting risk are high, may be of such consequence that it is worth correcting. In other words, this sort of exercise helps a company assess the risk/benefit ratio for correcting any particular incidence of misinformation. A company has to develop its own criteria for each of the axes.
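For readers who want to prototype this kind of triage internally, the two-axis exercise can be sketched in a few lines of code. Everything here – the 0–10 rating scales, the way the factors are combined, and the threshold of 5 – is an illustrative assumption, not anything prescribed by FDA; each company would substitute its own criteria for both axes.

```python
# Hypothetical sketch of the two-axis triage described above.
# All scales, weights, and thresholds are illustrative assumptions,
# not FDA requirements -- a company must define its own criteria.

def severity_score(people_affected, harm_likelihood, harm_seriousness):
    """Combine the severity factors (each rated 0-10) into one X-axis score."""
    return (people_affected + harm_likelihood + harm_seriousness) / 3

def reach_score(audience_size, influence):
    """Combine audience size and influence (each rated 0-10) into a Y-axis score.
    Taking the max reflects that a small but potent audience still matters."""
    return max(audience_size, influence)

def quadrant(severity, reach, threshold=5):
    """Map the two scores onto the green / yellow / red quadrants."""
    if severity >= threshold and reach >= threshold:
        return "red"     # high severity, high reach: strongest case to correct
    if severity < threshold and reach < threshold:
        return "green"   # low on both axes: weakest case to correct
    return "yellow"      # mixed: a judgment call

# Example: serious misinformation from a small-audience but influential poster
s = severity_score(people_affected=7, harm_likelihood=6, harm_seriousness=8)
r = reach_score(audience_size=2, influence=8)
print(quadrant(s, r))  # -> red
```

Note the design choice on the Y axis: using `max` rather than an average means a highly influential communicator with a tiny following still lands high on the reach scale, consistent with the point above about potent small audiences.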
This is not to suggest that this is the paradigm by which companies should operate. Some may seek to correct all misinformation, others may decide to hold back until they see how FDA enforces the principles laid out in the guidance document on correcting misinformation, while others may choose to move forward. Nor does this pretend to address all of the procedural issues associated with developing a protocol for correcting misinformation.
But the bottom line is this. With the existence of the guidance, there is now a corresponding need for companies to develop some internal protocol. That protocol would cover not only which misinformation should be corrected (discussed here), but also how the correction is executed internally: how such information is monitored, how it is corrected, and the record-keeping that should go along with it.