Pitting Robots Against Spammers in Rulemaking Comment Wars

As the notice-and-comment process that has been a feature of agency rulemaking for the past 60 years moves online, citizens have started to exercise their right to spam. Some scholars, notably Stuart Shulman, write that electronic comment tools flood agencies with low-quality comments that agencies ultimately ignore. Shulman, The Case Against Mass E-mails: Perverse Incentives and Low Quality Public Participation in U.S. Federal Rulemaking, 1 Policy & Internet 23 (2009). On the other hand, David Karpf argues that activist groups employ the tactic of spamming regulators only rarely, in order to signal broad support for their position. Karpf, Online Political Mobilization from the Advocacy Group’s Perspective: Looking Beyond Clicktivism, 2 Policy & Internet 7 (2010). Increased use of filtering techniques, such as the “active learning” technology currently used in spam filters and electronic discovery, would allow agencies to address Shulman’s concerns while preserving the benefits noted by Karpf.

Spurred by the E-Government Act of 2002, which created a centralized site for submission of electronic comments, agencies have shifted their comment-gathering efforts toward e-mail and online forums. See Strauss et al., Gellhorn & Byse's Administrative Law 534-46 (11th ed. 2011). Regulations.gov, the centralized rulemaking comment site, allows users to find and comment on all rules proposed by federal agencies. Since the beginning of 2012, agencies have posted more than 5,000 rules while users have submitted approximately 340,000 comments. Submitting a comment is as easy as clicking the blue “Comment Now!” buttons that appear throughout the site. The transition from postcards and faxes to websites and e-mail has increased the quantity of submissions, at least in certain high-profile cases. See John M. de Figueiredo, When Do Interest Groups Use Electronic Rulemaking, in eRulemaking at the Crossroads 19-20 (2006).

Shulman argues that “low quality, redundant, and generally insubstantial” comments on certain topics “annoy government officials with a mind-numbing redundant task that impedes real work” and obscure the few potentially valuable comments that are submitted. Shulman at 26-29. According to Shulman, quality matters, not quantity. Advocacy organizations lead constituents to believe that submitting a generic comment affects outcomes, but agencies use the comment process to gather pertinent facts and identify new issues—not as a referendum. Id. at 34-35. Shulman analyzed 1,000 of the longest comments submitted during a proposed EPA rulemaking and found that at most 5% presented the agency with potentially new information. Id. at 45. A 2006 survey of existing research suggested that e-rulemaking had not increased the quality of participation and that any increase in participation took the form of mass-comment campaigns. Stuart M. Benjamin, Evaluating E-Rulemaking: Public Participation and Political Institutions, 55 Duke L.J. 893, 933-35 (2006).

Karpf, on the other hand, argues that mass submissions of electronic comments are nothing new: they are merely an adaptation of the postcard campaigns that were common before the Internet. Comment drives “giv[e] people a simple first step to participation.” Karpf at 14. Additionally, high-volume campaigns, even if low-quality, provide information to rule makers by “signal[ing] broad public outrage,” or at least strong sentiments from specific segments of the population, regarding a proposed rulemaking. Id. at 12. Solicitation of comments can also increase the quality of interest-group participation by helping interest groups determine which issues resonate with their members. Id. at 33. Moreover, Karpf’s study of e-mails sent by activist groups reveals that calls to submit comments to a federal agency are relatively rare. Id. at 27.

The same technology that powers spam filters and electronic discovery could help agencies avoid the backlog of sorting through comments while preserving the signaling and participation advantages that Karpf identifies. After a sample of the submissions is sorted by hand, the remainder can be sorted automatically through a process known as active learning. See Cardie et al., Active Learning for e-Rulemaking: Public Comment Categorization, in The Proceedings of the 9th Annual International Digital Government Research Conference 234 (2008) (describing one method of automatically categorizing agency comments using “active learning” algorithms). There is little indication that agencies currently employ active-learning filters on the comments they receive. According to Shulman, electronic comments received on a 2004 rulemaking were printed and “reportedly sorted by the shape of the words on the page by a team of 15 staffers making piles.” Shulman at 46. A description of the process by which comments were reviewed in 2008 did not indicate that any automatic sorting took place. Committee on the Status and Future of Federal e-Rulemaking, Achieving the Potential: The Future of Federal e-Rulemaking 12-16 (2008). A 2011 survey notes that the White House and some agencies are implementing the crowdsourcing software IdeaScale, which ranks comments according to popularity, but it does not mention active learning. Cary Coglianese, Federal Agency Use of Electronic Media in the Rulemaking Process 14-15 (2011).
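To make the mechanics concrete, the sketch below illustrates pool-based active learning with uncertainty sampling, one common form of the general approach the Cardie study describes. It assumes the scikit-learn library; the sample comments, the two-category labeling scheme, and the simulated reviewer are invented for illustration and are not drawn from any agency system.

```python
# A minimal sketch of pool-based active learning with uncertainty sampling,
# assuming scikit-learn is installed. The comments, labels, and simulated
# reviewer below are invented for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical pool of submissions (in practice, pulled from the docket).
comments = [
    "Attached is a study of particulate emissions near schools.",   # substantive
    "I oppose this rule. Please reject it.",                        # form letter
    "Please reject this rule!",                                     # form letter
    "Our data show the cost estimate omits monitoring expenses.",   # substantive
    "I oppose this rule, please reject it!!!",                      # form letter
    "The proposed threshold conflicts with the 2006 guidance.",     # substantive
]

# Step 1: a reviewer hand-labels a small seed sample
# (1 = potentially new information, 0 = redundant form letter).
labels = {0: 1, 1: 0}

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(comments)

for round_number in range(3):
    # Step 2: train a classifier on everything labeled so far.
    train_idx = list(labels)
    clf = LogisticRegression().fit(X[train_idx], [labels[i] for i in train_idx])

    # Step 3: query the comment the model is least certain about
    # (predicted probability closest to 0.5) and ask the reviewer to label it.
    pool = [i for i in range(len(comments)) if i not in labels]
    if not pool:
        break
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]
    print(f"Round {round_number}: reviewer labels comment {query}: {comments[query]}")

    # A human would supply this label; the sketch fakes the answer.
    labels[query] = 1 if ("data" in comments[query] or "guidance" in comments[query]) else 0

# Step 4: refit and let the model rank the remaining, unread comments.
train_idx = list(labels)
clf = LogisticRegression().fit(X[train_idx], [labels[i] for i in train_idx])
remaining = [i for i in range(len(comments)) if i not in labels]
if remaining:
    for i, p in zip(remaining, clf.predict_proba(X[remaining])[:, 1]):
        print(f"{p:.2f} estimated chance of new information: {comments[i]}")
```

In an actual deployment, the “reviewer” step would be an agency employee labeling the queried comment, and the loop would run until the model’s rankings stabilized rather than for a fixed number of rounds.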

Importing active learning technology into rulemaking would resolve the issues that Shulman points out by cutting down on agency resources required to review comments and pulling potentially new and relevant information to the top of the heap. Additionally, grouping similar comments would have the desired effect of drawing attention to strong feelings from specific segments of the population. Once comments are categorized by sentiment, agencies could more effectively use metadata associated with the comments to develop a better picture of those supporting or opposing the rule. Use of this technology may raise concerns that not all comments will be read—an important comment could be missed by an automated review and never be read by an agency employee. However, if automated review in e-discovery is any indication, computers can be trained to outperform humans in selecting relevant submissions, leading to a smoother notice-and-comment process for both agencies and advocates. See Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, XVII Rich. J.L. & Tech. 11 (2011).
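As a further illustration, the snippet below sketches one way near-duplicate form-letter comments might be grouped so that a reviewer sees a single representative plus a count. The cosine-similarity threshold, the sample texts, and the greedy grouping rule are assumptions made for this example, not a description of any agency’s practice.

```python
# A rough sketch of grouping near-duplicate comments so a reviewer sees one
# representative plus a count. The 0.75 cosine-similarity threshold, the sample
# texts, and the greedy grouping rule are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

comments = [
    "I strongly oppose this rule and urge the agency to withdraw it.",
    "I strongly oppose this rule and urge the agency to withdraw it immediately.",
    "Attached are emissions measurements from our county monitoring network.",
    "I strongly oppose this rule and ask the agency to withdraw it.",
]

X = TfidfVectorizer().fit_transform(comments)
sim = cosine_similarity(X)  # pairwise similarity matrix

groups, assigned = [], set()
for i in range(len(comments)):
    if i in assigned:
        continue
    # Greedily sweep up every not-yet-grouped comment that closely resembles comment i.
    members = [j for j in range(len(comments)) if j not in assigned and sim[i, j] >= 0.75]
    assigned.update(members)
    groups.append(members)

for members in groups:
    print(f"{len(members)} similar comment(s); representative: {comments[members[0]]}")
```

Grouping comments this way would preserve the tally that gives mass campaigns their signaling value, as Karpf describes, while sparing reviewers the redundant reading that Shulman criticizes.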
