The primary case study we considered was Version 1 of the Electronic Frontier Foundation’s (EFF’s) Secure Messaging Scorecard. While outdated and considered by the EFF to be insufficient as a recommendation tool, the Secure Messaging Scorecard provided a good starting point for creating a tool evaluation system. The scorecard is presented as a chart with checkmarks indicating whether or not a variety of messaging platforms satisfy certain security considerations, such as providing encryption in transit and keeping past communications secure if one’s encryption keys are stolen. Although it has been left online for historical reasons, EFF’s website states that the scorecard does not reflect the most recent developments for all of these messaging platforms.[^10] As an alternative, EFF offers Surveillance Self-Defense, a more complex collection of resources for safer online communications that includes “how-to” guides for certain security tools but does not offer an overarching security matrix of tools in the field for easy comparison.[^11] In a separate article entitled “Why We Can’t Give You a Recommendation,”[^12] EFF explains why the organization has steered away from recommending which messaging tool users should adopt to be most secure: tool developers can make sudden changes that alter a tool’s security pros and cons; security features are only one of several variables that matter when choosing a secure messenger (alongside usability, cost, the countries where a tool is used, and whether it works on iPhone or Android); and the specific threats a user is worried about influence which messenger is right for their purposes.
Another EFF article, “What Is a Good Secure Messaging Tool?,” points out that by relying on checkboxes, Version 1 of EFF’s scorecard can inadvertently suggest that there is a single standard for security and that a tool can be 100% “secure” if it checks all the boxes, when in reality security is context-specific and never guaranteed.[^13] These observations from EFF suggested to our team that an effective framework for evaluating the security of OSINT tools should do three things: include categories beyond traditional “security” features, avoid the checkbox format, and establish a clear process for the framework’s continued maintenance.