Summary
One of the more contentious uses of big data analytics in homeland security is predictive policing, which harnesses large volumes of data to allocate police resources, reduce crime, and improve public safety. Although predictive analytics has long been used to forecast human behavior, it has not proven to be a flawless undertaking. In an effort to improve the outcomes of predictive policing, this thesis assesses two high-profile programs, the nation's most popular credit-scoring system and a federal flight-risk program, to identify the greatest pitfalls inherent in programs that rely on predictive analytics. The programs are assessed using what is commonly known in big data as the four Vs (volume, velocity, variety, and veracity), supplemented by a fifth component of the author's creation: verification. Applying this framework makes it apparent that the hardest Vs for any predictive policing program to satisfy are veracity and verification. As the field of predictive policing expands, programs face the challenge of ensuring that the data used for analysis is accurate and remains accurate, and that the metrics used to verify risk assessments are sound.