[ { "href": "/post/Fog_Of_CryptoWar/", "title": "Fog of CryptoWar", "tags": ["analysis", "encryption", "privacy", "surveillance", "police-state"], "content": "[The Fog of Cryptowar - Why it’s not about crypto regulation. Over the last two years, politicians in the USA, UK and elsewhere have been threatening the regulation of strong cryptography. But the experts and journalists who have expressed concern over this have done so in ways that we consider misleading. In this document we will recap the motives and strategies of the people who wish to regulate cryptography, the responses by its defenders and the battle over public opinion. We will conclude that the picture painted in media is misleading (as are those of experts and activists) and would lead us to resist a straw man while missing the issues of substance. Governments move towards the regulation of the use of cryptography. Since the San Bernadino terror attacks in December 2015 and the debates about decrypting the attacker’s iPhone that followed it, numerous statements by politicians and law enforcement officials have reignited the fears about another “Crypto War” in which governments would use regulation to undermine the implementation and use of cryptography. 
President Obama (USA) was quoted saying: “I will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice.” Similarly, Prime Minister Cameron (UK) said that law enforcement must “…ensure that terrorists do not have a safe space in which to communicate… We must … ensure that, in every case, we are able, in extremis and on the signature of a warrant, to get to the bottom of what is going on.” Last but not least, the UK Home Secretary Amber Rudd stated that “End-2-End encryption is completely unacceptable.” It is thus not surprising that journalists and experts on encryption were reminded of the first “Crypto War,” which was waged in the US in the 1990s when the government tried to suppress the development and export of strong cryptography products and establish a key escrow mechanism for communication (the “Clipper Chip”). Following the playbook of the 90s, journalists, experts and activists were quick to respond with the same arguments against crypto regulation that were used before, culminating in Bruce Schneier being widely quoted as saying that a “Ban on strong encryption threatens to destroy the Internet.” Such responses, however, especially when fueled by shallow media reporting and social media, may not be the right response when it comes to shaping public opinion, and may entirely miss the bigger picture. Politics as Negotiation. Before entering the debate it is crucial to understand how politics, media and public opinion interact, and how communication strategies are employed by politicians to win their argument. A common misunderstanding, especially when technologists interact with those making public policy, is that legislation should be built on objective facts that can be conclusively agreed upon. This “engineering view” misses the reality of politics. 
Most public policy only has to stand trial in the court of public opinion and win the support of interest groups, not be defended in the realm of science - nor is legislation confined to the technicalities of its field, but is evaluated in a much broader context. When talking about politics, and specifically government action, it’s important to remember that governments and their administrations are not monolithic entities. They consist of groups with varied interests. The positions of the ministry of the treasury and the ministry of the interior may not align, and the demands of law enforcement agencies are often opposed to the evaluations of intelligence agencies. Only when these interests align can effective action be taken. Politics has to deal with trade-offs and must convince the public, interest groups and media that its legislation is better suited to achieve the stated goals than competing suggestions. Dealing with trade-offs means that decisions are never perfect in the sense that every interest is fully satisfied. Instead, one tries to discover a solution that harms the various competing interests the least. In this sense politics is utilitarian. The trade-offs depend on value judgments that are informed by the ethical and economic perceptions of the day. This makes politics inherently fluid, not dogmatic. The decisive factors on which legislation can be enacted are the endorsement of relevant interest groups and the consent of public opinion. Politics thus has to signal its support to those interest groups while at the same time shaping public opinion through debate. This means that political actors must be strategic communicators, not primarily communicators of facts. The resulting perspective is that politics is really something like a tug of war with many ropes at once - with the public standing in the middle. The opposing parties compete over the support of the public and of those interest groups that are necessary for any legislative action. 
Good arguments sway people to support one’s side, while bad arguments lead to a loss of allies from the public. Part of this competition is to pull the correct rope. Most politicians know how to play this game. They might not be experts in any field themselves, but they must have the ability to select experts that can inform their position, and be able to network support while not losing sight of the big picture. From that position they enter the debate. Since politics is about strategic communication, proponents of one side often take an extreme position so that the resulting compromise lands as far in their favor as possible, or they present straw men to keep their opponent from engaging with the core issues of the debate. We believe that both tactics may be a fundamental part of the debate about crypto regulation - otherwise the debate would have been over two decades ago. Furthermore, the debate has been almost entirely confined to the topic of cryptography, while the problem stated by politics and interest groups like law enforcement spans a much wider field. It is this strategic communication that tempts some pro-crypto activists to repeat oversimplified arguments. We will address some of those arguments and the technical aspects of regulation in this field. Bad arguments. The reason for this shallow treatment of political realities is that, in the days of social media, shallow arguments aren’t recognized as such by many participants in debates. The journalists, engineers, programmers and cryptographers in the debate tend to take positions that, in the context of a realistic view of politics, fail to be relevant. Three lines of argument are especially noteworthy: “It is impossible to regulate cryptography.” or “Banning cryptography is like banning math.” This argument misses the point in confusing the knowledge of cryptography with the widespread use of cryptography - or, more specifically, the use of cryptography to protect confidentiality. 
While it would be beyond the reach of governments to remove the knowledge of cryptography from the public sphere, it is certainly not impossible to threaten those who employ “illegal” cryptography with sanctions. This is exactly what happens with most regulation: Speed limits do not prevent the thought of driving fast; instead they address actually driving fast. It is behavior that is regulated, not thought. So, similar arguments about the futility of regulation or the impossibility of enforcement aim a bit too high. Regulation does not require perfect adherence. Often it is enough if some people adhere to the specific law, and others can be punished in case they are caught. Again, speed limits are not perfectly enforceable, but they limit the number of drivers who drive recklessly, and they allow taking action against some drivers, thus nudging other drivers into compliance. “Without cryptography, modern e-commerce is impossible, and the Internet would break.” Again, this argument misses the point. It confuses the whole field of cryptography with the specific uses for authentication and integrity protection. It is certain that the lack of authentication and integrity protection would destroy e-commerce, and make it impossible to operate the Internet as we know it securely. However, the position of crypto-regulators is to undermine the use of cryptography for the purpose of confidentiality. Granted, in today’s technology we rely on confidentiality for authentication and integrity: we transmit passwords and credit card information, which would become very risky without the use of traffic encryption. However, nothing but convenience actually forces us to use passwords or credit cards, since other means of authentication and integrity protection exist that do not rely on confidentiality. Deploying these methods is possible, though it would incur substantial investments and many opportunities for error. 
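The claim that authentication need not rely on confidentiality can be illustrated with a short sketch: a challenge-response scheme built on an HMAC. The function names are hypothetical and the shared secret is assumed to be provisioned out of band; the point is that the exchange itself can travel over a completely plaintext channel, because only a one-time challenge and a MAC over it are ever transmitted - never the secret.

```python
import hashlib
import hmac
import secrets

# Minimal sketch: authentication without confidentiality. Nothing sent over
# the wire is secret - only a random challenge and a MAC computed over it.
shared_secret = secrets.token_bytes(32)  # provisioned out of band

def make_challenge() -> bytes:
    # Server side: issue a fresh random challenge (a nonce).
    return secrets.token_bytes(16)

def respond(secret: bytes, challenge: bytes) -> bytes:
    # Client side: prove knowledge of the secret without revealing it.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    # Server side: recompute the MAC and compare in constant time.
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(shared_secret, challenge)
assert verify(shared_secret, challenge, response)
assert not verify(shared_secret, make_challenge(), response)  # replay fails
```

Because each challenge is fresh, an eavesdropper who records the exchange gains nothing reusable - which is the sense in which authentication survives even without any encryption of the channel.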
“Any form of regulation makes cryptography insecure.” This claim is a good example of not appreciating that any regulation exists in a much wider context. Of course regulation will make cryptography less secure than it could be. But since security is a gradient, public opinion and politics might be willing to accept less security to accomplish some other goal. Only a few proposals for cryptographic regulation are so fundamentally flawed - or so specifically worded - that they would undermine all security benefits of cryptography. For now it is sufficient to recognize that these arguments have something in common - they all make absolute claims. Too much of the pro-cryptography activism centers around painting a black-and-white picture that is simply inapplicable to public policy debates unless one deals with the content of social ethics directly. Crypto regulation is not part of social ethics yet, and maybe there lies a hint that it should be. But in that case, the debate must be fundamentally re-framed. What these absolute claims also hint at is a mode of thinking that is common in the information technology sector. Computer scientists - and a lot of the pro-crypto activists are, or are informed by, computer scientists - seem to have a tendency toward binary thinking that demands perfection or surrender when it comes to problem solving. In combination with a lack of understanding of how politics actually operates, this too often results in arguments that can be undercut because of their absolutist claims or failure to state the problem, or that are simply paternalistic. One thus often finds variations of the above arguments combined with comments like “We should take measures to improve tech-literacy amongst the authorities” (Awn Umar) or “My impression was that primarily she doesn’t know what she is talking about” (Paul Bernal, speaking about Theresa May, UK PM). 
The risk of this form of engagement is that the pro-crypto position could be easily marginalized, or kept occupied with a narrow aspect of the debate that is not the main thrust of novel regulation. While we are very sympathetic to the position of defending the right to cryptography without reservation, it is our intent to warn fellow pro-crypto activists against appearing smug or disconnected from the public at large. This debate is not about convincing high-IQ people like yourself who are specialized in some field of computer science, mathematics or engineering. It is about dealing with a wide variety of people with very different backgrounds and an awareness of the necessity of trade-offs. Bad arguments, over-simplifications, apocalyptic visions and technocratic demeanor might get citations in the press, but they fall on deaf ears in politics, law enforcement and regulatory bodies. Instead, it is necessary to engage with the opposing party, understand their position, their options, and their possible agenda. Motives for regulating cryptography. The quest to regulate cryptography has had several motivations throughout history. The earliest forms were informed solely by military needs to access the secrets of enemies. While this still plays some role, the motive for the current debate lies in the so-called “Going Dark Problem”. Since the advent of mass individual communication technologies (telegram, phone) and widespread information processing technology, law enforcement and intelligence agencies became accustomed to the interception and recovery of large amounts of incriminating data without having to invest too much man-power or risky infiltration. In parallel to the scientific, technological and digital advances in all fields of life, the methods of investigation and intelligence gathering shifted from human-driven (HUMINT) to technology-driven (SIGINT, ELINT, etc.) methods. 
This also fostered a growing disconnect between enforcement agencies and the population, as well as a relative decrease of officers active in the interpersonal nuances of police work. The same is true for intelligence agencies. This vision of police and intelligence work has been reinforced both by politics demanding more substantive information and by media portraying modern investigative work as a combination of cutting-edge forensics and all-knowing computers, cell phone tracking and Internet tracing. Since we live in a mass media society, these portrayals in both news coverage and entertainment products have shaped the expectations of the population. It is quite common these days that juries put all emphasis on forensic evidence while ignoring other information, and are shocked when the forensic results are less comprehensive and clear-cut than they have come to expect from shows like CSI (Crime Scene Investigation, a cop show centered on forensic experts). By now global, instant, widely available and cheap communication is a normal part of life, and almost everybody owns personal computing devices (PCs, laptops, tablets, smartphones, game consoles) that outperform anything that was available just a few decades ago. This technology is of course available to criminals and other targets of law enforcement and intelligence agencies, as it is to anybody else. At the same time cryptography became widely available. Since the 1990s anybody can, with some effort, use cryptography that is practically unbreakable. Indeed, cryptographic protection has become so widespread that it often goes unnoticed. It protects our online shopping, credit card transactions, cellphone calls, and a myriad of other applications. The modern world is unthinkable without cryptography protecting the integrity of data and allowing us to authenticate to remote systems for a host of useful purposes. 
However, it was the advent of the individual use of cryptography to protect the confidentiality of communication that ushered in a new era. Instead of using cryptography only in interactions with companies and the state, cryptography is now widely used for the protection of personal computer data storage and inter-personal communication. Every major operating system today ships with tools for hard disk encryption, and a whole host of messenger services offer encryption of communication directly between the people who want to talk to each other, without relying on the security of the provider itself. Again, criminals are amongst the many users. Now police and intelligence agencies are increasingly confronted with communication they cannot tap anymore, and personal notes they cannot decrypt anymore. Confiscating computers and smartphones no longer guarantees evidence that would stand up in court. The situation is worsened further by the wide availability of anonymous communication tools like Tor, I2P etc. Now not just the content of communication but also the fact of who communicates with whom becomes virtually inaccessible. This quick and widespread individual use of cryptography has an increasing impact on long-cherished investigative methods, leading to more and more cases that cannot be solved or that don’t lead to convictions in court. Sources of information that were long relied upon are now “going dark”. Of course, law enforcement opposes this development. They want their work to be as easy and effective as possible. But it is also a development that receives critical attention in the public discourse. Many people are not willing to accept the laughing criminal who leaves the courthouse with a smile, simply because his computers could not be decrypted. 
This is especially true when it comes to the two areas of crime that incorporate the notion of universal evil and the use of cryptography like no other: Computer Aided Child Exploitation (CACE) and International Terrorism (IntT). Cryptography has helped criminals of both kinds to cover their tracks and conceal evidence in many high-profile cases. Due to the fact that both crimes are universally considered to be of the worst evil - exploiting and killing the random innocent - they fuel public outrage like nothing else. The public demands that law enforcement prevent those crimes, and bring the perpetrators to justice. It is thus no wonder that the new debate about crypto regulation was initiated by law enforcement failing (for a while) to access the iPhone of the San Bernardino attackers because it was encrypted. After every major terror incident we now see law enforcement and politicians complain about information being inaccessible because of technical protections. Similarly, cases of alleged child pornography consumers - whose hard disks are so well encrypted that the court cannot rely on them for evidence, and who therefore escape prosecution - have repeatedly made the news. In this context the outrage felt by many in politics, law enforcement and the public about the protections granted by cryptography is understandable, possibly even justified. It is important to really grasp the core of what is going on here. Possibly for the first time, methods to keep evidence from law enforcement have reached universal availability and widespread use. This is quite possibly a qualitative change of singular importance. All previous means of hiding from law enforcement were based on error-prone wit or physical protections that could be overcome or fail randomly, or were simply not widely available. 
Effective means to oppose, or hide from, law enforcement have previously been banned from personal use - like effective body armor (in many countries), guns, doors that could resist police raids, forged identification papers, face masks… It is in light of cryptography providing effective limits to court orders and warrants, and the history of previous regulation to make law enforcement effective, that regulation of cryptography is now demanded. The main question remaining is whether and how cryptography can be regulated without causing too much collateral damage to the societal uses of cryptography. It is necessary to stress that these positive uses exist, and are widely accepted. Even law enforcement and intelligence agencies have no interest in making cryptographic protections disappear completely, simply because they prevent a whole host of crimes every second and protect secrets of national importance against foreign spies. It could reasonably be said that everybody today loves and relies on cryptography, except for those few cases where it prevents the enforcement of law. This must lead to the realization that the current debate, ongoing since 2015, is fundamentally different from the first Crypto War in the 1990s. The goal is not, and cannot be, to snatch strong cryptography from the hands of people. Instead, the current debate is about making the secrets that cryptography protects accessible to law enforcement. This is no minor point, since it deeply shapes the approach that regulators take, and it is therefore the point with which pro-crypto activists must engage. Failing to see that the goal is access to the plaintext confines arguments into a space that is neither relevant nor commonly understandable for public opinion. While it is certainly difficult to appreciate this difference from the perspective of cryptography, it is nevertheless substantial - because it allows for very different technical implementations and legislative action. 
Insisting that plaintext access is the same as banning strong cryptography misses the point and excludes pro-crypto activists from the debate. Instead one has to engage with cryptography in its actual technical context, including the hardware it runs on, the operating systems, networks, and the current structure of service providers. Access to plaintext is of interest to law enforcement primarily in four forms: Data at Rest. This refers to data that is stored on the user’s local computer or phone. Cryptography here hinders access through device and hard disk encryption. Device encryption is by now a common feature on smartphones, and all major operating systems for personal computers include software to encrypt the local device, including full disk encryption, which prevents all data except for the bootloader from being understood by anybody who does not have access to the user’s secret key or password. Data in Transit. The contents of communication between two or more parties that is carried by telecommunication networks, especially the Internet. Previously wiretaps would reveal this information, but with the use of encryption a growing part of the contents of Internet communication cannot be understood by anybody who does not have access to secret keys held only by the communication partners. Data in Cloud. A growing amount of data falls in between the “Data at Rest” and “Data in Transit” categories because it is stored remotely with cloud service providers. While the data is readily available to law enforcement through subpoenas and warrants, an increasing amount of data in the cloud is now encrypted. The cloud also serves as a means to transfer data between multiple parties without retransmission from the local device. Cloud data is especially valuable to law enforcement because it contains local device backups and histories/logs of many services. The contents of email accounts should be considered “Data in Cloud” as well. Data in Processing. 
This is data currently processed by a device and located in volatile memory or temporary files. In security terms, data at rest is the easiest to protect, since an attacker needs physical access to the device. Data in Transit is more readily available since it travels a number of links on the Internet - including potentially insecure wireless networks - but apart from attacks on the user’s own local network, most data is not readily available to the common criminal (though law enforcement wiretaps and intelligence agency surveillance are commonplace). The least easy to protect is the data in the cloud. Numerous successful hacks on cloud providers and enormous data leaks every few weeks attest to that. In all three cases encryption serves as a meaningful way to secure data against unlawful use. Personal devices are stolen frequently, wireless networks are sniffed easily, and cloud storage providers are a juicy target for any hacker. “Data at Rest” and “Data in Cloud” are similar from the point of view of cryptographic protection because both only require a single party (the user) to have access to the key to encrypt/decrypt the data. “Data in Transit” however requires that both sender and recipient share keys; in practice both typically share a single secret session key for a communication. Law enforcement would like to get access to all three forms of data, and precisely to the plaintext (unencrypted) content. Access is of interest in two time variations: “Realtime access”. This is the equivalent of the old wiretap. Law enforcement would like to record/listen in on communication while it happens. This applies directly only to Data in Transit. For realtime access, Data in Cloud is sometimes an option if communication tools use the cloud to store conversation histories, or to access email communication. “Post-Fact access”. This is equivalent to a regular search warrant. Law enforcement would like to access data stored on the local device and in the cloud. 
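The point that Data in Transit protection rests on a key held only by the communication partners can be sketched with a toy Diffie-Hellman exchange: a wiretap on the link sees only the public values, while sender and recipient each derive the same session key. The parameters below are deliberately tiny and purely illustrative - far too weak for real use.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement (illustrative parameters - NOT secure).
p = (1 << 127) - 1   # a small Mersenne prime; real systems use 2048+ bit groups
g = 3

a = secrets.randbelow(p - 2) + 1   # sender's private exponent (never sent)
b = secrets.randbelow(p - 2) + 1   # recipient's private exponent (never sent)
A = pow(g, a, p)                   # public values - this is all a wiretap sees
B = pow(g, b, p)

# Each side combines its own private exponent with the other's public value.
k_sender = pow(B, a, p)
k_recipient = pow(A, b, p)
assert k_sender == k_recipient     # both ends now hold the same secret

# Hash the shared secret down to a symmetric session key.
session_key = hashlib.sha256(k_sender.to_bytes(16, 'big')).digest()
```

No intermediary ever holds the session key, which is exactly why regulation aimed at providers cannot reach this kind of traffic without changing the protocol itself.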
This is the current focus of the debate, where, after a crime has been committed, police search for evidence to present in court, and for information that would produce new investigative leads. Furthermore, law enforcement usually seeks access to communication data logs stored by providers. Realtime access to both Data in Transit and Data in Cloud is the most controversial in the debate. Both must be considered surveillance and happen without the affected person knowing about it. Post-fact access to devices in the possession of a suspect is much less controversial since it is equivalent to the standard search warrant that is commonly accepted by the public. Post-fact access to devices is also the least controversial from the point of view of international investigations. It only rarely requires action within more than one jurisdiction, and time constraints usually allow for legal processes to be adhered to. This is also reflected in the fact that international standards for digital evidence collection are debated and agreed upon in various international forums - notably the EU and the G20. For post-fact access to devices, many issues of international cooperation and multi-jurisdictional applicability of law are much clearer and easier to solve than for realtime access to data in transit, or any access to data in cloud. This is of particular import for law enforcement because here the goal of investigations is usually the presentation of evidence that stands up in court. It must be noted however that two hybrids between the time variations and storage forms exist that have no previous parallel in analog technology: “Realtime access to Data in Processing”. This is of interest for capturing communication data before it is encrypted and transmitted. In essence this would constitute a tap into the local device - a “telephone bug”. “Realtime access to Data at Rest”. Law enforcement might want to remotely search a device that is in the possession of a user. 
This differs from the case of a usual search warrant because the user would not be aware of such a search while it happens, and would thus be delayed - or even prevented - from seeking legal recourse. Both of these methods are controversial because of their hidden nature and the necessity of remotely exploitable security bugs in the user’s device or a pre-installed backdoor. Nevertheless they are already considered, or even codified, in several jurisdictions because they prevent evidence from becoming inaccessible through encryption, or through the loss/destruction of the user’s device. A further issue with realtime remote access to a user’s local device is that it raises questions about the admissibility of evidence. Questions about the precise targeting of the device, as well as the ability of such remote access to modify data without trace, should cast doubt on any data collected. Both errors and undetectable falsification can occur and are extremely difficult to prevent - if they can be prevented at all. The above points mostly concern law enforcement investigations. They do not equally apply to intelligence agencies (IAs), for a couple of reasons: Intelligence agencies often fall under special legal regimes. Because they often have no direct law enforcement powers, and because they often operate outside their home jurisdiction, they are imbued with special legal privileges that restrict their methods much less - amongst them not requiring previous legal codification of the methods they might want to employ. Thus, IAs have the ability to directly hack, steal or manipulate devices. IAs frequently pressure, infiltrate or hack service providers. IAs are far more concerned with the actions of foreign hostile actors. They are not only interested in stealing the secrets of foreign governments, but also want to protect their own government and key industries against attacks by the same. This puts them in the double position of being both offensive and defensive in their activities. 
Globally undermining cryptography in a transparent way could potentially backfire and harm their mission. IAs do, however, try to covertly undermine cryptographic research and algorithms in such a way that the weakened products are only attackable by themselves. This is a very risky game, especially when discovered, or if the secret knowledge that is the foundation of such an asymmetric weakening becomes known. As such, intelligence agencies are not the primary actors in the cryptography regulation debate. They either choose to abstain from the topic altogether, or only partake in the debate in a rather covert way. It is also not unlikely that they might occasionally put their defensive purposes first and thus become temporary - and questionable - allies of pro-crypto activists. No treatment of the “going dark problem” and the interests of law enforcement and intelligence is complete without highlighting a part of the debate that is all too often conveniently omitted from politicians’ speeches. The spread of digital communication - encrypted or not - has led to a plethora of information that is already available to the relevant parties. This is the whole realm of metadata - data about data - or specifically here: data about communications. Police and others now have access to a wealth of information about who communicated with whom, when and for how long, as well as location data of nearly every communication device that is powered up. The range of devices that constantly produce this kind of data is growing daily, from mobile phones to cars, power meters to TV sets. Metadata has contributed as much to changes in law enforcement as the “going dark problem” has. New methods of investigation, often very effective, have become available. And contrary to content data, metadata lends itself to automated processing and analysis - leading to new problems like global mass surveillance. 
Interestingly, these new opportunities for law enforcement and intelligence, and those new threats to citizens’ privacy, do not appear in the calls for crypto regulation. We shall come back to this issue below. Technical aspects of crypto regulation and plaintext recovery. We shall now give an overview of means by which regulation of cryptographic applications could potentially soften the Going Dark problem. Afterwards we will look at challenges that impact all regulation attempts in this field. Means to regulate: Outlaw strong algorithms. The first attempt at regulating cryptography was the outlawing of strong cryptography, forcing users to rely on algorithms that could be broken by intelligence agencies, and potentially law enforcement. This approach is off the table today because the knowledge and processing power to attack those algorithms is, or would be, widely available. Most governments, and more than a few corporations and criminal organizations, would be put in the position of being able to intercept and decrypt communications, and would therefore have access to secrets globally. This would put economies and nations at an unprecedented risk in a world that is shaped by, and relies on, secure international communication. We simply rely on strong cryptographic algorithms to deal with the risk of computer break-ins, espionage, cyber war and computer crime. The same applies to variations on weak algorithms, like limiting the effective key size, transferring keys through side channels or oblivious transfer, or mandating predictable random number generators for key generation. Manipulate strong algorithms. There have been attempts by intelligence agencies (specifically, the NSA) to manipulate algorithms so that their strength relies on the secrecy of hidden, or underhanded, parameters. 
This approach reduces to the outlawing of strong cryptography, since the secrets on which the security of the algorithm rests would either have to be spread widely to be used by law enforcement, or everybody would be at the mercy of the party that knows those secrets. While it might be an interesting method for a single intelligence agency, it would fail to soften the Going Dark problem in a meaningful way and at the same time create a power asymmetry that dooms the acceptance of such a scheme. Undermine Protocols. Some recent statements by politicians (especially Amber Rudd, UK Home Secretary) have hinted at making end-2-end encryption illegal, especially for messaging services. This would result in protocols that may provide confidentiality between user and provider, but not between user and user when communicating through a provider. Similar suggestions exist for mandating that all communication systems retain interception capabilities. As in the Lavabit case, law enforcement might rely on the cooperation of the provider, or on the sharing of identity keys with law enforcement to enable man-in-the-middle attacks, to gain access to data in transit. This is a possible approach for regulation since it leaves most of the existing infrastructure in place and puts all liability on the communication providers and intermediaries - as is already the case with lawful-interception legislation for telephony etc. The products most affected by this variant would be those that try to offer secure communication that is inaccessible to anybody but the final sender and recipient. Those products include instant messaging services, voice over IP telephony, video conferencing and encrypted email (PGP, S/MIME). Furthermore, Virtual Private Network links would fall under this approach, even though they don’t rely on an intermediary. 
Making end-2-end encryption illegal means that all security rests on the communication provider or intermediary, and potentially also on the certifier of keys that the parties require for mutual identification and integrity protection. Big global providers especially are thus put in the crosshairs of hackers and foreign governments, since they present a treasure trove of valuable information. Auditable communication Auditable communication is used, and often mandated, in some industries already, like banking and high-security environments in which traffic traveling through a local network must be inspected by security appliances. To enable this, security devices need a way to decrypt the traffic, either by active interception and re-encryption (man in the middle) or by using deterministic key generation whose secret is shared with the security appliance. This approach can be applied to any mediated communication that transits a provider, as mentioned in the previous section. It is therefore nothing novel. Key Escrow Key escrow in the strict sense means that all keys (in this debate, confidentiality keys) must be shared with a trusted agent - like a government agency - before they can be used for encryption. In the case that encrypted data must be decrypted under a warrant, the police would then request the key from the agent and perform the decryption. While possible to implement from a purely theoretical point of view, key escrow mechanisms are inherently complex when deployed on a larger scale. There must be a secure way of transmitting the secret keys between the user and the escrow agent, and those keys must be made accessible to law enforcement in some way. Very naive approaches use only one additional, global key to secure this key transport. But this makes that global key a secret on which the confidentiality of all communication within the domain of regulation would rest.
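The single-global-key weakness just described can be made concrete in a short sketch. Everything here is illustrative and invented for the example: the XOR-with-hash "cipher" is a toy stand-in for real key wrapping, not a construction anyone should deploy.

```python
# Toy sketch of naive key escrow: every user wraps their session key for
# the escrow agent under ONE global transport key. Whoever obtains that
# global key recovers every escrowed key at once - the single point of
# failure described in the text. All names and the wrapping construction
# are illustrative assumptions, not a real scheme.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(global_key: bytes, session_key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a user's session key for transport to the escrow agent."""
    nonce = secrets.token_bytes(16)
    ks = keystream(global_key, nonce, len(session_key))
    return nonce, bytes(a ^ b for a, b in zip(session_key, ks))

def unwrap(global_key: bytes, nonce: bytes, wrapped: bytes) -> bytes:
    ks = keystream(global_key, nonce, len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, ks))

GLOBAL_KEY = secrets.token_bytes(32)   # the one secret everything rests on

alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)
escrow_db = [wrap(GLOBAL_KEY, k) for k in (alice_key, bob_key)]

# Anyone who steals GLOBAL_KEY can decrypt the entire escrow database:
assert [unwrap(GLOBAL_KEY, n, w) for n, w in escrow_db] == [alice_key, bob_key]
```

The point of the sketch is structural: the confidentiality of every user in the regulated domain collapses to the secrecy of one key, which is exactly why this naive variant is dismissed in the text.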
The escrowed keys must be stored, managed and protected against unlawful access. If recent history is any indicator, then building such a system even on a national scale is unrealistic. Many government agencies have suffered fatal data breaches recently, including the NSA (which specializes in keeping secrets), the CIA (the same) and the Office of Personnel Management in the USA. This list of breaches is far from exhaustive, but it demonstrates the risk a key escrow agent would face. This risk is compounded by the fact that two conflicting requirements exist for an escrow agent: on the one hand it must protect all keys against unlawful access, on the other hand it must establish a way to share those keys with law enforcement in a timely manner. This makes it necessary to keep some form of the key digitally available and online - which in turn exposes that key to attacks. To mitigate the risk of a single escrow key, some schemes suggest splitting the user’s key between many key escrow agents that then have to cooperate to reveal the key. While the security of these schemes is higher, they also multiply the complexity and cost of such a system, especially with regard to deployment and operation. Furthermore, the process by which law enforcement can request keys from the escrow agent(s) must be secured and authenticated, meaning that law enforcement must hold some form of authentication key that would be used to demonstrate legal access. Each authorized agency and office would require one of those authentication keys. However, since each of those keys comes with the ability to retrieve an escrowed key from the agent, the security of a key escrow scheme would rely on the secrecy of each of those authentication keys. Additional problems like secure key rotation, availability of the agent, and cost of operation would likely turn this approach into the biggest and most complex government-mandated information system project in history.
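The key-splitting idea mentioned above can be sketched in a few lines. This is the simplest n-of-n case, where all agents must cooperate; threshold k-of-n schemes such as Shamir's secret sharing generalize it, but the XOR construction already shows the security property the text relies on.

```python
# Minimal n-of-n key splitting: the user's key is split into shares handed
# to independent escrow agents. ALL shares are needed to reconstruct the
# key; any proper subset is statistically indistinguishable from random
# noise. Illustrative sketch only.
import secrets

def split(key: bytes, n: int) -> list[bytes]:
    """Split a key into n shares; all n are required to recombine."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares back together to recover the key."""
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

user_key = secrets.token_bytes(32)
shares = split(user_key, 3)          # one share per escrow agent

assert combine(shares) == user_key   # all three agents cooperating succeed
assert combine(shares[:2]) != user_key  # two colluding agents learn nothing
```

Note that the cryptographic core is trivial; as the text argues, the cost lies in the surrounding machinery of distributing, storing, rotating, and auditing shares across multiple institutions.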
The risk of failure to deploy, security breaches, and the cost of operation make key escrow unrealistic. Another problem of key escrow systems is the scope in which they are to be deployed. If they are deployed as a global infrastructure, their management and regulation would require global political coordination. If, however, they are deployed on a national scale, they would require some means to enforce the specific demands of the jurisdiction on the user’s device - like choosing the transport key of the national key escrow agent. A further problem of key escrow mechanisms is that they conflict with cryptographic best practices, especially Perfect Forward Secrecy. Here a new key is generated for each message and old keys are immediately destroyed. This ensures that a leak of keys does not put all communication at risk of being decrypted, but only the communication during the short time frame for which the key was stolen. Key escrow systems, however, require that keys are shared with the agent, which introduces both long-term storage of secret keys that can potentially decrypt years of communication, and an enormous amount of traffic between user and escrow agent, since every new key needs to be escrowed. Another best practice that is incompatible with key escrow is the use of authenticated encryption. Here the same key is used not only for confidentiality, but also for integrity protection (and indirectly authentication) of the communication. Sharing this key with an escrow agent would allow the agent not just to read the communication, but also to manipulate it without the original parties being able to detect this. This means that not only the confidentiality of data is at risk, but also the security of the communicating devices. Instead of the user generating a key and then sharing it with the escrow agent, the escrow agent could also generate keys for the user.
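The forward-secrecy mechanism described above (a fresh key per message, old keys destroyed) can be sketched as a simple symmetric hash ratchet. Real protocols such as Signal's Double Ratchet add Diffie-Hellman ratcheting on top; this simplified sketch, with invented names, only illustrates the property that escrow would break.

```python
# Simplified hash ratchet: each message key is derived from a chain key,
# and the chain key is then advanced through a one-way hash and the old
# value overwritten. Stealing today's state does not reveal yesterday's
# message keys, because SHA-256 cannot be run backwards.
import hashlib
import secrets

class Ratchet:
    def __init__(self, root: bytes):
        self.chain = root

    def next_message_key(self) -> bytes:
        msg_key = hashlib.sha256(b"msg" + self.chain).digest()
        # Advance and overwrite the chain key: the old state is destroyed.
        self.chain = hashlib.sha256(b"chain" + self.chain).digest()
        return msg_key

root = secrets.token_bytes(32)             # shared session secret
sender, receiver = Ratchet(root), Ratchet(root)

k1, k2 = sender.next_message_key(), sender.next_message_key()
assert receiver.next_message_key() == k1   # both sides stay in sync
assert receiver.next_message_key() == k2
assert k1 != k2                            # every message gets a fresh key
```

Escrowing such a scheme would mean either shipping every ephemeral key to the agent as it is generated, or storing the root secret long-term: both options destroy exactly the property the ratchet exists to provide.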
Agent-side key generation suffers from the same problems, but introduces an additional one: the security of all keys then relies on the security of the key generation method employed by the escrow agent. Implementation mistakes in cryptographic algorithms are commonplace enough that this could potentially lead to a situation in which the security of all keys is undermined without anybody being able to detect it - except for a successful attacker. Advances in cryptography may also lead to key escrow becoming much more secure. For example, various proxy re-encryption schemes could be employed to mitigate many of the security problems of previous approaches and reduce the complexity of implementing key escrow. Content Escrow Instead of encrypting data end-2-end between the intended sender and recipient only, a third party (called the agent) can be introduced to which all content is encrypted as well. Various protocols exist that make this possible and enforceable, as long as at least one of the original parties is honest. The communication can then be intercepted by regular means and decrypted if the need arises. Content escrow schemes allow the continued use of some forward secrecy mechanisms as long as the agent actively supports them. One additional problem of content escrow mechanisms is that the agent plays an active role in communication, which increases the demands on the reliability and accessibility of the agent. Should the agent become unavailable, this could (depending on the protocol) prevent communication, which turns the agent into a single point of failure and would make it a prime target for denial of service attacks. Key Recovery Key recovery schemes are similar to key escrow schemes in that they make keys available to a trusted third party. However, keys are not directly made available to an escrow agent to be stored; instead, access requires either one of the devices that communicate with each other, or real-time interception of the communication.
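The wrapped-key pattern behind key recovery, which the next paragraph describes in detail, can be sketched as follows. The session key travels with the ciphertext, encrypted to the escrow agent's key, so recovery requires interception of the traffic plus the agent's cooperation. The XOR-with-hash construction is a toy stand-in for a real cipher, and all names are invented.

```python
# Illustrative key recovery sketch: a per-message session key is attached
# to the ciphertext as a "recovery blob" encrypted to the escrow agent's
# key. No user-to-agent communication is needed; the agent only acts when
# lawful interception actually occurs.
import hashlib
import secrets

def xor_pad(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR data with a hash-derived pad."""
    pad = b""
    i = 0
    while len(pad) < len(data):
        pad += hashlib.sha256(key + nonce + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, pad))

def send(message: bytes, session_key: bytes, agent_key: bytes):
    """Produce ciphertext plus the recovery blob that travels with it."""
    nonce = secrets.token_bytes(16)
    ciphertext = xor_pad(session_key, nonce, message)
    recovery_blob = xor_pad(agent_key, nonce, session_key)
    return nonce, ciphertext, recovery_blob

agent_key = secrets.token_bytes(32)     # held only by the escrow agent
session_key = secrets.token_bytes(32)   # negotiated by the two users

nonce, ciphertext, blob = send(b"meet at noon", session_key, agent_key)

# Lawful interception: the agent unwraps the session key from the blob
# captured on the wire, then decrypts the intercepted ciphertext.
recovered_key = xor_pad(agent_key, nonce, blob)
assert xor_pad(recovered_key, nonce, ciphertext) == b"meet at noon"
```

The sketch also shows why these schemes are brittle, as the text notes below: a client that simply omits or garbles the recovery blob still interoperates with honest peers, and nobody notices until a decryption is actually attempted.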
In key recovery schemes the confidentiality keys generated by the user are stored in a secure storage module of his device, stored in a remote cloud account, or transmitted with his communication. The keys are encrypted for one or more escrow agent keys. Key recovery schemes have the same problems that key escrow schemes have, but they are less resource intensive because no communication with the escrow agent is required by the user. Instead, the existing interception capabilities of communication providers are used only in those cases when a need for interception actually arises. Key recovery schemes for data at rest, especially encrypted devices, are a seemingly attractive approach because any access to the secret keys would require access to the device as well as cooperation of the escrow agent(s). This could potentially satisfy part of the law enforcement demands without undermining security too much. However, the implementation of such a recovery scheme would require the creation and deployment of special secure storage modules in all relevant devices - current devices would not be covered. A final note should be added concerning key escrow, content escrow, and key recovery. All these approaches are brittle in the sense that there is no guarantee that they will work when they are most needed. Verifying that such a scheme works in a specific case requires actually decrypting the data of interest. If such a verification is not undertaken frequently, these schemes might break without anyone noticing. However, this creates new legal problems, since the interception and decryption of data for verification purposes is hardly justifiable by current standards of law. Attempts to verify those schemes by employing the (automated) cooperation of the communication partners only apply to data in transit, and always rely on the honesty of at least one party.
Since these schemes are only intended to catch criminals (people that actively and intentionally break the law), such cooperation cannot be assumed. It is this verification problem (among some other aspects) that doomed the famous Clipper Chip key recovery system that the USA tried to roll out in the 1990s. Since then, no substantial improvement on this front has been made. Mandatory Key Discovery Several jurisdictions (the UK, indirectly the USA and Canada, amongst others) have codified laws that are meant to compel suspects to reveal their secret keys and passwords to law enforcement or the court. If the suspect does not comply, fines and prison time await him. This approach suffers from technical, practical and legal problems: First, it is of no use if the suspect employed Perfect Forward Secrecy in his communication, or uses timed encryption for his storage devices. Second, it is hard - and sometimes impossible - to distinguish between a suspect that is unwilling to reveal his keys and one that is unable to - either because he forgot them or because he never actually knew them (a mis-attributed device, or a hardware security token that has been destroyed). Third, it is questionable whether anybody should be mandated to produce incriminating evidence against himself. Since we are not legal experts, we must refrain from further judgments. However, the legal implications are deeply troubling. Insecure default settings It seems that one of the approaches that has been tried by both the USA and the UK is to influence software and hardware vendors to abstain from making strong cryptography the default configuration of their products, while keeping the capability intact. This attempts to at least catch the low-hanging fruit: the fully incompetent criminals. Surprisingly, this might actually be a productive means, since criminals in general are caught because of their incompetence - until they learn.
Remote Access Schemes A prominent approach to solving the Going Dark problem is to allow law enforcement remote access to the device of a suspect. Several variations of this method exist, which we will cover below. Common to those variations is that they suffer from three problems: Access control for the use of those remote access methods is a hard problem. Only law enforcement, and ideally only with a warrant, may be able to use them; hackers and foreign governments must be excluded. This essentially mirrors some of the problems that key escrow systems have. There must be a secure way of targeting the device, and the necessary access credentials (or other secret knowledge required for access) must be securely managed. As is evident from the NSA and CIA Vault 7 leaks, it is an enormous undertaking to guarantee this. Without such a guarantee, remote access schemes have the potential to undermine the digital infrastructure of nations, making it vulnerable to hackers and cyberwar. From a purely national security perspective, this appears to be too high a price to pay. Digital evidence gathered through remote access, as already mentioned before, is of questionable repute. Since remote access would necessarily allow control over the target system, any data on it could be manipulated and falsified, including the suppression of evidence or the creation of false evidence. Because all access happens in a covert manner, legal recourse is at risk, and because the access methods must be closely guarded for security reasons, they cannot be revealed in legal discovery. This boils down to the necessity of simply trusting the individual law enforcement officers to be honest - and that in light of cases in which police have planted drugs as evidence, and the proverbial “Saturday Night Special”. Devices may be hard to assign to a jurisdiction.
It is necessary to determine the actual location of a device before infiltrating it; otherwise the police of country A could break into a device in country B, leading to potential diplomatic turmoil. It is unlikely that a country like the USA would welcome the remote searching of a domestic device by the police of China or Russia. Mandatory Software Backdoors Governments could mandate backdoors to be implemented in operating systems so that law enforcement can access any device remotely, given the necessary authentication credentials. This is highly problematic since it risks the integrity of all devices through an intentional security hole. Securing the access credentials so that they do not fall prey to hackers and foreign adversaries would be an enormous, and potentially impossible, task. Furthermore, since software and devices are shipped internationally, such a backdoor would have to be deployed per jurisdiction - potentially at the border. This is frankly unrealistic and dangerous beyond words. In addition, the backdoor would also have to be securely programmed in the first place to prevent exploitation even where no valid authentication credentials are known. Furthermore, the communication towards such a remote backdoor would have to pass through all firewalls on the way - meaning that firewalls would need to be configured accordingly as well. This applies not just to corporations but also to standard users, since off-the-shelf home routers come with enabled firewalls. Beyond that, the targeting and reachability of the device must be guaranteed, even though NAT, and especially Carrier Grade NAT, is widely deployed and doesn’t support unsolicited incoming connections. This would mean that the government has to deploy something like current malware that actively reaches out to a command and control server or network (C&C) to request instructions.
This C&C would become a prime target for denial of service attacks, but also a great source for finding out who is currently under investigation, counteracting investigative goals. Lawful Hacking Several countries, including Germany, the Netherlands and the USA, have created legal frameworks to allow law enforcement to use existing security holes in deployed software to break into systems to remotely identify, search or tap them. The main problem with this approach is that it requires that law enforcement has access to exploits - software that uses security vulnerabilities in the target to gain system access. These exploits are highly sought-after knowledge, and with the growing demand by not only cyber criminals but also law enforcement, intelligence agencies and the military, they have become a tradeable good that commands increasing prices. This creates a dilemma. On the one hand, government has the mandate to protect its citizens (and that includes their computers) against crime and foreign aggression. On the other hand, government needs to keep exploits secret because law enforcement relies on them to execute remote access for investigative purposes. In addition to the problem of deciding which security holes to make known to vendors for patching and which to keep secret, the demand for exploits by government potentially creates a market that further erodes security, because criminals are incentivized to introduce these vulnerabilities into software. For example, contributors to open source software, or employees of software companies, might be tempted to introduce exploitable bugs into software and later auction exploits for them to the highest bidder. Since these exploits often command prices beyond 500,000 USD, this is a pressing risk - especially for open source software, where contributors are usually not vetted and identified sufficiently.
One suggested escape from this multi-faceted dilemma is that government only uses security vulnerabilities that have already been made known to vendors but not yet fixed. For example, it is rumored that the NSA has access to the CERT feed over which vendors are informed about found vulnerabilities. While this softens the dilemma, it comes with its own problems: The time to create and deploy the exploit code is significantly shortened, requiring that the government employ highly skilled and motivated experts who program and test these exploits around the clock. Again, those exploits should not fall into the wrong hands, but at the same time need to be quickly made available to authorized law enforcement entities. Giving government access to a stream of vulnerabilities also means that potentially many more people gain that knowledge, risking leaks. Furthermore: how to decide which government should have priority access to that knowledge, and what consequences does this have for national security? At least the approach of using only 1-day exploits (those vulnerabilities already made known to vendors) would contribute to drying up part of the market for exploits. A variant of this method has recently become known. In some (unidentified) countries, internet service providers were enlisted to help the government in targeting specific users by infecting downloads with remote access trojans on the fly. So-called drive-by attacks, however, depend on insecure usage practices of the user and are unreliable. They also risk attacking innocents by mistake. Targeted Updates A rarely discussed method for remote access is the subversion of update procedures. All devices require regular updates to fix existing security vulnerabilities or deliver new features. Update processes already inherently have the ability to change every part of the device’s software, and they often provide targeting methods already - through device identifiers or licenses.
As such, they could be considered intentional backdoors. Software vendors currently employ digital signatures to secure and authorize their updates. This method could, however, be used by law enforcement if software vendors can be convinced (or forced) to comply. It is certain that vendors would resist such a move vehemently, but they also have a record of cooperating, especially when it comes to third-party software delivery. Both Google (Android) and Apple (iOS/iPhone) have already suppressed and forcibly uninstalled software from their customers’ devices, which allows for the assumption that they could also be made to install software - if government asks for it and a sound legal process for it is established. Common Problems with various regulatory means. In the following we will touch on several open questions and problems that are common to all attempts to regulate cryptography, as well as engage with some of the arguments against it that are often repeated. Regulation undermines security All means known to us that soften the Going Dark problem lower the security of information systems and communication to some extent. This is to be expected, since the whole question is that of granting access to third parties that is not necessary for operation in and of itself. Security thus must be lowered to include those parties even against the will of the user, therefore lowering the extent to which the user is able to control his devices and software. This is further amplified by the fact that any approach will increase the complexity of the software and infrastructure - and complexity is the enemy of security. Fundamentally, security and control are synonyms in this field. However, security is not binary. It is a gradient on which we pick a value in light of trade-offs like convenience and cost.
The public policy decision on how to deal with the Going Dark problem is just one of these trade-offs, namely that of public security and enforcement of law. That presents us with the question of how to balance individual control against the provision of (at least) the rule of law. This is not a question of cryptography or computer security, but one of social ethics, politics and statecraft. It therefore has to be answered in that domain. Within that domain, previous answers have been to regulate gun ownership, doors that resist police raids, mandatory government identification schemes that enable identity theft, and TSA locks on luggage. For some special needs, licensing schemes have been introduced, which could apply to crypto regulation as well - allowing unrestricted use of cryptography for some purposes, like banking and e-commerce, while strictly regulating it everywhere else. Our answer to the public policy question is radically on the side of individual control and security: cryptographic protections, privacy, control over our devices and the integrity of information processing systems are among the most fundamental requirements in a world that relies on international communication and data processing for national, economic and personal wellbeing. This is especially true in the face of the risks of cyber crime and cyber warfare. Lowering our defenses will make us even more vulnerable than we already are, potentially risking our critical infrastructure and personal autonomy. Regulation undermines innovation Regulation in all areas creates a cost of compliance that redirects resources from developing what the customer actually needs. This is commonplace. Furthermore, regulation in this particular field prevents the use of best practices, which actually creates harm for the customer. However, regulation in other fields is accepted since it is perceived to provide a public good that would otherwise not be created.
Here the argument cannot simply be one against regulation as such - since regulation is otherwise widely accepted - but instead it must be questioned whether regulation in the field of cryptography would create any public good at all. In our survey of the various means to regulate cryptography it should have become clear that while a primary public good - security - is the aim, all known means to provide it in this field also come with enormous risks for security itself, making regulation self-defeating. Certification If any specific regulation of cryptography is established, the question of certification arises. Which software and hardware implementations meet the legal requirements, and how can a customer verify this? This requires both setting a standard for implementation and verifying individual implementations of this standard. The standard will also require timely updates as technology progresses. Since software and hardware are global markets, international cooperation will be required. It is unlikely that the majority of relevant countries, as diverse as the USA, Germany, Canada, Russia, China and India (to name just a few), can agree on such a sensitive topic - especially since the necessary mutual trust is simply not present. If they don’t, a fragmentation of the market or mounting legal risks will be the result. Enforcement and Effectiveness For any regulation to be effective it first needs to be enforced, or adherence will be very low. Some pro-crypto activists argue, however, that enforcement is impossible, since it would require censoring all sources of illegal cryptographic software and applying protocol fingerprinting to detect illegal use, and that no big vendor would adhere to regulation anyway because of market demands. We would agree if the goal were total enforcement. However, almost no laws are completely enforced. Complete enforcement requires a totalitarian system.
Instead, pragmatic politics is aware that enforcement is necessarily incomplete, and accepts this. This acceptance has three reasons: A large number of people will obey the law simply because there is a non-negligible risk of being caught and punished. As soon as enforcement pressure is high enough, most people fall in line. This is the case for copyright infringement on the Internet, speed limits, gun laws, and drug laws. Complete enforcement fails, but the behavior of the population is nevertheless shaped according to the goal of the law. Unless the desire for cryptographic protection trumps the probability of being caught and the punishment to be expected, people will obey. This is fundamentally influenced by how much cryptographic protection adds to the satisfaction of wants and needs, since it is not a primary motivator for most people. For decades, strong cryptography was available on the market, but inconvenient to use. So the vast majority of people did not use it. Will the new inconvenience of enforced crypto regulation drive down use? It certainly will. For those that break the law, the risk of being caught will increase. This means that some of the people that hide their crimes with illegal cryptography will at least be caught for their use of illegal cryptography. Law changes social perception. Lawbreakers must hide the fact that they break the law from people that might be law abiding. This leads to social ostracism, which becomes a means of enforcement. Unless a behavior is widely accepted, this mechanism is highly effective. It is not unheard of that users of cryptography are asked to justify themselves, most often with the old trope: “If you have nothing to hide you have nothing to fear.” Similarly, people that are suspected by their social environment to be criminals face a much higher risk of being reported to law enforcement.
There is no question that some criminals will not forgo the protection of cryptography, but even criminals are faced with choices and trade-offs. They have to invest their time, money and risk-tolerance wisely. This will undoubtedly lead to some criminals lowering their guard in the common assumption that “it won’t be me that is caught”. It is the easy access to cryptography that makes it widespread in some criminal circles; if the effort to procure cryptographic tools increases, some will not go the extra mile to procure them. And for those that do, if they cannot effectively hide their use of illegal cryptography, it will paint a target on their backs for law enforcement to employ more targeted (and expensive) methods. Another consideration is that the lack of a public market for cryptographic software will inevitably lead to less knowledge about which products are trustworthy and which are not. There will be fewer recognized experts looking at illegal products, so illegal products will be harder to trust. This will certainly be exploited by intelligence agencies, which will then spread fear, uncertainty and doubt about some products, while trying to lure targets into products that have been outfitted with some kind of backdoor or weak algorithm. It must therefore be concluded that no regulation actually targets the highly professional, well-equipped, deeply educated and risk-aware criminal. No previous laws have - notwithstanding public assurances to the contrary, which are just for soothing the general public. Otherwise our prisons would be empty and judges out of work. It is important to get this illusion of perfectly effective enforcement out of our heads, and out of the debate. There remains the question of how to lower the availability of cryptographic tools so as to even begin building any enforcement pressure.
The vast majority of all installations of cryptographic software is either shipped with the operating system (hard disc encryption) or delivered via app stores (almost exclusively messenger software). The current majority use case clearly points at smartphones and tablets, as well as netbooks. Sales of personal computers and laptops have been plummeting, except for gaming. This puts the platform operators into a position of substantial control. While side-loading of applications is possible on most devices, it is inconvenient and not employed by the majority of users. One approach then could be to enlist the support of these platform operators, who control hardware, operating system and application delivery. A simple request could be to ban certain software from the app stores. This has precedent, for example Apple banning VPN applications, and Google removing the Catalonian referendum app because of a mere court order. Such a ban on illegal cryptographic software does not need to be total; it is sufficient if vendors remove those applications on a case by case basis as prompted by law enforcement. If vendors do not comply, they could be held liable. It is probably unfounded to believe that platform providers will actually stand up against government demands if they do not have public opinion strongly on their side. This should be concluded from their behavior towards China, Iran and India, where they cooperated with local governments against the security and freedom interests of the population. Do those corporations risk losing customers when they go along with regulation? Certainly they do, though the impact is hard to measure. If public opinion can be swayed in favor of regulation, the impact will be minimal, especially since most consumer decisions will not be primarily informed by privacy issues, but rather by convenience, availability, network effects, and low risk (through certification).
This should have become clear in the wake of the Snowden leaks - neither Google, nor Facebook, nor Apple lost substantial numbers of customers - even though everybody now knew that they participated in mass surveillance programs. Unless public opinion firmly opposes regulation of cryptography, enforcement will be no major hindrance. And to accomplish public resistance, sound arguments are required. Plausible regulation to mitigate the Going Dark problem. A prediction. After describing the motives for regulation and various technical approaches to implement it, we have to ask what actual means of regulation are realistic today. The survey of technologies has revealed that hard-handed approaches like those of the 1990s are hard to implement, and even harder to keep secure enough, in a digital society. Furthermore, government is confronted with a much wider landscape of cryptography vendors and international stakeholders than before. Various approaches can be ruled out: Undermining algorithms and outlawing strong cryptography: Both lower the security of critical systems, and the general security of the population, to such an extent that the risks posed by cybercrime and cyberwar would become unacceptable. Mandatory government backdoors: Again, the security impact here is out of control. Risking the subversion of whole nations’ computing and storage by criminals and foreign enemies cannot be justified. Domestic or international key escrow, content escrow and key discovery schemes: The costs of implementation, maintenance, verification and certification would make this the biggest coordinated information technology project ever. The risks of project failure, insufficient security and stifled innovation are enormous. The required international coordination to prevent market fragmentation goes beyond what is possible in the current global political climate. It is not clear, however, whether the undermining of protocols can be ruled out.
While the risks posed by this approach are uncontrollable and many international technical standards would need modification, the repeated focus of some politicians on end-2-end encryption is concerning. Maybe this points at an actual attempt to persuade vendors to limit its use, or it is a position taken to shift the future compromise further into the field of the anti-crypto faction. This apparently leaves government hacking, and convincing software vendors to ship software with less secure default settings, especially to hide key ownership verification (to support man-in-the-middle attacks) and to automatically back up communication logs and recovery keys to the cloud. We think that this conclusion is a bit rushed and not in line with the (apparently) coordinated statements of politicians from various countries. The pressure generated by public opinion and law enforcement interest groups, and specific statements by politicians that they “just want frontdoor access” and that “providers need to be able to provide plaintext,” should give us pause and allow us to outline a few plausible additional regulatory steps. Let's first remember that the goal of regulation can only be to influence mass market availability and adoption of cryptographic tools that preserve confidentiality of content and communication relationships (metadata) against targeted government investigatory powers (1). That is where the Going Dark problem rests, and it is the minimal request by law enforcement. An extension of this goal would be to make users of strong, unregulated cryptography easier to identify, and to consider the use of these technologies as circumstantial evidence of criminal intent (as is, today, the possession of a “weapon” while committing any other crime). Second, it seems that the problem with most regulatory approaches is that they centralize control (escrow keys or access keys) in systems that are hard to build, maintain, secure and certify (2). 
Third, international coordination of detailed regulation does not seem realistic in the current global political context (3). Fourth and lastly, the technical context is currently dominated by a few platform providers that control operating systems, application delivery and, to some extent, hardware (4). Can there be a regulatory approach that recognizes these four points and incorporates them? We argue that five regulatory approaches are both realistic and likely: Defense of metadata access: Convince vendors to refrain from creating systems that do not produce or retain metadata. This solves one part of the Going Dark problem and is relatively easy to argue for in the public sphere. While confidentiality of content is a long-cherished value in many societies, the anonymity of communication is widely met with suspicion. Furthermore, systems that suppress metadata are relatively hard to create, while metadata itself is of great interest to many vendors because it opens potential monetization strategies. Various attempts in this direction have already been made, notably by the EU (data retention), the UK (Investigatory Powers Act) and the USA (repeal of regulations that prevent metadata collection and use by providers). Furthermore, metadata generation and retention are among the core demands of Interpol and Europol reports on cybercrime. Nudge vendors to deliver software with less secure default settings: Many cryptographic tools can be weakened indirectly by exploiting human error. Most users are unaware of the necessity of verifying the identity of communication partners and the ownership of keys. This can be exploited by making key verification not a mandatory part of the user experience, but instead hiding the feature, or by refraining from implementing automatic protections. Intentionally or not, this could already be witnessed with WhatsApp, where verification of keys during key rotation was not performed, and users were not informed that keys had been rotated. 
This would in many cases be sufficient to trick users into communicating with a man-in-the-middle. A similar approach could be taken when it comes to preserving communication histories or backing up local data to the cloud in a way that keeps the data accessible to the vendor. This could already be witnessed with Apple iMessage. Lawful hacking: Various countries, among them the USA, Germany and the Netherlands, have made the exploitation of security vulnerabilities and the infiltration of computer systems legal for law enforcement. Even in the face of protests and legal ambiguities, this is quickly becoming a standard tool of police work. A further strengthening of international cooperation in this field, especially in the sharing of exploit code and methods, would decrease costs and increase applicability, and potentially mitigate the problem of 0-day exploits (secret vulnerabilities) by making 1-day exploits (vulnerabilities known to vendors but not yet patched) practicable. One issue in this regard does require more attention: international frameworks for cross-border lawful hacking are both necessary and so far non-existent. We can witness, however, that since the G-20 talks in 2017 there have been efforts to regulate this aspect. Various proposals for cross-border cooperation, digital evidence collection and legal process coordination have been made and are finding growing support, especially among the EU, USA, UK and Canada. It should be expected that this tool will soon receive a multi-jurisdictional framework and standardization, which in turn will allow cooperation to increase effectiveness and efficiency. Lawful hacking is most certainly here to stay and spread. Use of update mechanisms to deliver police trojans: An extension of the lawful hacking approach is to use system or application updaters to directly deliver government software to targeted devices. 
This is a very attractive method since it could potentially solve problems with exploit availability, targeting, security, and documentation that can be revealed in legal discovery. Updater software already exists, it already has the necessary authorization to install and modify running code, the delivery infrastructure exists, and pin-point targeting is available as well. Using updaters thus does not introduce new security or reliability problems while at the same time reaching the vast majority of devices. However, vendors need to cooperate in this and must actively support law enforcement in each investigation; it would be too risky to weaken update security by giving police direct access to the infrastructure or the required signature keys. Vendors in turn will refuse to cooperate if the process is not completely transparent and secured by legal safeguards - like the issuing of warrants, auditing, and notification of users after the fact. Furthermore, the use of this method must be effectively limited to prevent accusations of mass surveillance. It might plausibly be enough to agree on limits on the number of undermined update events, and to publish statistics, to sway public opinion - and thus vendor cooperation - into supporting this. A variation of this approach is not to deliver police trojans, but to simply suppress updates for targeted devices. This could be used to extend the lifetime of security vulnerabilities that can be exploited by lawful hacking; after a device has been successfully infiltrated, patching of security vulnerabilities could again be allowed. One warning should be added: vendor cooperation to target specific devices leaks investigation details to the vendor, which law enforcement prefers to keep secret. Mandate plaintext access: An elegant solution to the intricacies of key escrow and key recovery, at least from the point of view of regulators, is to refrain from defining any specific scheme for how to implement them. 
This gets around many of the complex details of certification, verification and international standard creation, as well as the impact of regulation on innovation, the creation of single points of failure, and some security issues. Instead of mandating specific technical implementations, a result-driven regulation would “only” mandate that vendors be able to make the plaintext of specific messages or device contents available on request. Vendors would face fines in those cases where they are unable to deliver plaintext to law enforcement. This approach would of course put the complete technical burden on the shoulders of vendors, meaning that some vendors would be driven out of the market because they cannot provide adequate technical and organizational implementations to fulfill law enforcement requests. But it would also mean that no new entities have to be introduced that would present new points of failure or breach - leading to a more decentralized infrastructure. To further enforce such a scheme, app stores could be forced through court orders to remove applications that have failed to implement plaintext access, as demonstrated by unsuccessful law enforcement requests - in the worst case, automatic de-installation of those applications is technically feasible. This approach of simply mandating plaintext access is attractive to law enforcement and politicians since it reduces their part of the complexity significantly, hides the problematic details, and shifts all effort and liability to vendors. It looks good on paper. But it must not be forgotten that, while reducing the complexity of the overall implementations, the security problems of key escrow, content escrow and key recovery schemes still exist, even if on a smaller scale. Implementing such a regulation would undoubtedly lead to lower overall confidentiality for data at rest and in transit - not just against law enforcement, but also against unlawful access. 
While this might be partially mitigated by those platform providers that also control the hardware design, independent software-only vendors would still face a situation in which they have to increase the risks their users face just to be able to cater to law enforcement. Conclusion and advice. We hope to have given some perspective on the technical, organizational and legal aspects of this new iteration of the Crypto Wars. We face a different situation today than the one faced in the 1990s. The arguments usually parroted in the media are not sufficient to make cryptographic regulation unattractive to politicians. Several possible routes of regulation exist, as well as approaches that do not require vendor regulation (for example, support for lawful hacking). The risk here is that outdated arguments distract from those alternative routes instead of resisting them. The problem we face is much bigger than just cryptographic regulation - we are facing a change in the views and guarantees of confidentiality. This means that we have to extend the debate to include these aspects: Vendor neutrality: Should it be possible to force vendors of software and hardware, and not just communication providers, to provide law enforcement with extraordinary means of access? Integrity of Information Processing Devices: Should we allow for provisions that undermine the integrity of - and the user's control over - personal computers, smartphones and tablets? What are the ramifications of such provisions in light of legally binding digital contracts, liability, and the admissibility of digital evidence? Should information processing devices be considered extensions of the person that operates them, or do we consider them external artifacts that fall under public purview? Freedom of Processing: Do users have the right to control what software runs on their devices? Do they have the freedom to install, remove, and develop whatever software they see fit? 
Do users actually own - and control - their devices? The Right to Digital Self-Defense: Are individuals allowed to take steps to defend themselves against security risks in the digital sphere? Are they allowed to take best-practice approaches to make themselves less vulnerable to cybercrime, and by extension contribute to making a nation less vulnerable to cyberwar? Should we consider good security practice in the digital realm part of civil defense? These are the hard questions to ask, and they are closer to the public's interest and domain of knowledge than many intricacies of cryptography. The answers to those questions also have broader applicability - they inform future debates as well, and can thus serve as precedent for finding new norms in social ethics. Apart from these political and ethical aspects, the debate has also revealed potential weak spots in how we do computing these days. Especially the dominance of a few platform providers, the vulnerability of update processes, and lawful hacking should prompt us to take technological action. A few suggestions in this regard: Software delivery should be secured by some form of “Single Source of Record” that automatically verifies that a product delivered to a device does not deviate from installations on other devices. Secure software development: The process by which software is developed needs to be taken more seriously, especially for open source software. We need better review and auditing processes for security-critical code, and greater isolation between security-critical and non-critical modules. Furthermore, review and auditing deliver only limited protection to users if the build (compile) process cannot be verified. This means that verifiable, deterministic builds should become commonplace. 
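The deterministic-build idea can be sketched in a few lines: if independent parties build the same source and publish the hash of the resulting artifact, anyone can check that the binary they received matches what everyone else built. A minimal Python illustration (the file path and digest list are hypothetical; a real scheme would fetch published digests over an authenticated channel):

```python
import hashlib


def artifact_digest(path: str) -> str:
    """SHA-256 digest of a build artifact, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def build_is_reproduced(local_artifact: str, published_digests: list) -> bool:
    """True only if every independently published digest matches our copy.

    A single mismatch means either the build is not deterministic or
    someone received a different (possibly tampered) binary.
    """
    local = artifact_digest(local_artifact)
    return bool(published_digests) and all(d == local for d in published_digests)
```

The same comparison is what a “Single Source of Record” would automate: the record keeps one canonical digest per release, and clients refuse any artifact that deviates from it.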
Platform vendors: The angle by which any regulation today is possible is the high dependence of most users on platform providers that control both the operating system and the application delivery channels. It is this dependency that allows regulation to capture wide sections of the market by focusing all attention on a handful of corporations in very few jurisdictions. From both public policy and security perspectives, this is a risk-laden state of affairs that needs to be defused. We hope that this text has widened the perspective on the current debate and warned against arguments that are no longer timely. Furthermore, we hope to have given some hints to make the debate more fruitful, and to have suggested some areas of focus in which engineers and developers can help shape the environment of the debate towards more secure, and more freedom-preserving, systems.]" } , { "href": "/2014/11/darknet-markets-silkroad-2-0-takedown-analysis-pending/", "title": "Darknet Markets (Silkroad 2.0) takedown - analysis pending", "tags": [], "content": "[Due to requests by our readers we will start investigating and analyzing the latest darknet market takedowns. This can take a while since data is still coming in and nothing definitive is known. Stay tuned.]" } , { "href": "/2013/10/tracking-the-silk-road-lessons-for-darknet-services/", "title": "Tracking the Silk Road - Lessons for darknet services", "tags": ["anonymity", "digital tradecraft", "physical tradecraft", "surveillance", "tradecraft"], "content": "[On Oct 2nd 2013, a person was arrested in San Francisco (CA, USA) who allegedly operated the darknet marketplace website “The Silk Road”. Shortly after, the Silk Road went offline. Within minutes, discussions sprang up on the Internet, with thousands of people trying to cope with their loss and make sense of what happened. 
Several “official” documents (a criminal complaint and an indictment) were released shortly after, which in turn led to commentators rushing to explain what stupid mistakes DPR - the Silk Road operator - had committed. Now, after a few days have passed, I'd like to give analysis a try myself. The sources for this are few; I am therefore restricted to the official indictment and criminal complaint, as well as some reports on DPR's arrest. The problem with the official documents is that they are not - as some read them - a complete and truthful narrative of the investigation that led to the arrest. Instead, both are meant to establish probable cause for a judge or grand jury to issue forfeiture and arrest warrants against DPR. The contents are meant to convince the reader that the target of the legal action (DPR, the suspect/defendant) is really the person to blame for the activities connected to the Silk Road, and that those activities are unlawful. While I will give the authors the benefit of the doubt that the documents only include truthful statements, it needs to be kept in mind that they do not include the whole and complete truth. The statements are worded and ordered to demonstrate that the activities in question are unlawful, and to establish the (true) identity of DPR. This goal dictates their structure. Also, one should keep in mind that the evidence presented in a later trial may be substantially different. To find out what the causal chain of the investigation was, the statements in the documents need to be reordered chronologically, and we will have to throw in some educated guesses to fill in the gaps. From this there should emerge some hints on the crucial points at which the investigation turned into actionable results - and on how to prevent this in future cases. With this method I constructed the narrative of the investigation that now follows. Enjoy! p.s.: I will not use the alleged real identity of DPR in this article. 
Instead I will use ARI as a stand-in (ARI = Alleged Real Identity). There's no reason to clutter the search engines with more entries on the real name, since the person might in fact be innocent. —- It is likely that the Silk Road (TSR) only got fleeting attention from law enforcement before June '11. But with the media buzz started by Gawker and the demands made by Sen. Schumer, it is likely that an agent was tasked with keeping an eye on TSR and proposing whether and how action against it should be taken. The first steps in such an investigation are to collect public knowledge on the subject and familiarize oneself with the matter. This also starts the ongoing iterative process of deciding whether a case should be opened and what resources to assign to it. Opening a case requires that an activity is brought to the attention of law enforcement and that the activity is viewed as unlawful by the investigators. The resources assigned to a case depend on various considerations including, but not limited to, constraints on resources from other cases, public and media attention, potential intelligence and other leads gained from the case, and most importantly the predicted outcome of a court trial. Not every case opened by the FBI is meant to end up in court; often the goal is just to gather intelligence that might be of use at a later point. A case may be opened but soon get no more attention simply because the resources are needed somewhere else. When the case has been opened, an agent is assigned to handle it (this is AGENT-1 mentioned in the documents). When exactly the TSR case was opened is unknown, but it likely happened some time between late June 2011 and early April 2012, led by the DEA. 
The activities undertaken were mostly those of opening a file to collect information in, and doing research in public records (especially Internet searches) and the public contents of TSR, to establish a timeline and connect people and resources to the case, as well as to find out what exactly the operation was about. During these early days the agent in question had to familiarize himself with Bitcoin and Tor, and he established the first bit of the timeline which would later be used in the attribution phase of the case: 2011-01-23: SOMEONE created a blog at wordpress.com detailing how to access the Silk Road (silkroad420.wordpress.com). Further records on the creation of the account (like the IP address used, or the email address given at signup) were not yet available. 2011-02-27: A user by the name ALTOID posted a link to the above wordpress blog on a drug-related internet forum called “shroomery.com”. 2011-01-29: A user by the same name posted a link to the Silk Road on the bitcoin forum. At this point ALTOID became a person of interest, but this was not enough to pull records. The public data simply ended up in the case file without further action taken; requests for user records would be (and were) made later in the process. A little while after the post of 2011-01-29, the same user made another post under the same account on bitcoinforums, asking for an IT professional to help with some coding. Included in this post was a gmail address that could potentially point at the ARI of the person of interest. Again, this piece of data simply went on file and would later be crucial in the attribution phase. Presumably in early April 2012 the case was pushed to its next phase: active undercover work started. This involves three steps in the first stage. First, familiarity with the terrain needs to be gained. This means slow but growing involvement in the TSR forums. 
Second, the targets of the operation need to be identified (DPR, vendors, administrators, members with high reputation). Third, the targets are profiled. For DPR this likely resulted in “male, caucasian, american born and raised, technical or mathematical education, 20-30 years old”, based on his writing style and other clues. On 2012-04-30 one of the undercover identities that would play a major part in this operation joined TSR; I will refer to him as UC-1 (simply called UNDERCOVER in the documents). Slowly working himself into the community at TSR, this agent then contacted DPR asking for help with a larger cocaine deal. UC-1 claimed that he wanted to sell 1 kg of cocaine but that the market at TSR did not seem to be ready for this. DPR promised to handle the request and delegated the task to an administrator identified as EMPLOYEE in the documents. EMPLOYEE is another critical player in this story. He became an administrator of TSR on 2013-04-30, which gave him access to all messages sent between users and their transactions. During the course of this first undercover activity, UC-1 got EMPLOYEE to give his own residential address as a shipping destination for the deal, the shipping to be conducted by courier. When the shipping address was revealed to UC-1 on 2013-01-10, the agents involved started a surveillance operation on this address. By about 2013-01-14, at the latest, direct physical surveillance of the address was in place, recording the comings and goings of the people living there, along with likely wiretaps. This likely led to UC-1 asking for the shipment method to be changed to courier, possibly because multiple people resided at the same address and the door itself was not easy to see. On 2013-01-17 the delivery was made by two or more undercover agents, and a little while later the payment was made. At this moment law enforcement knew enough to bring EMPLOYEE before a judge. 
The person was identified, the goods had changed hands, and the payment was completed. A multi-year sentence was certain for EMPLOYEE. This is the point at which TSR started to unravel: with a person on the inside (having access to the messaging and payment systems) compromised, the linchpin was pulled. Now law enforcement had to cash in on it. The mistake on the side of TSR that led to this dire situation is threefold: First, the transaction was conducted without minimum standards of tradecraft. The exchange should have been done at a location agreed on only a short while before the meeting, and the location should have had no connection with any party involved. Second, persons involved in the operation of an organization have no place exposing themselves in any transaction. This is where foot soldiers have their place (for example by utilizing the six-pawn-chess protocol). Third, organizations of this kind require compartmentalization: no second-tier operator may ever have wide access to data and at the same time be involved in facilitation. On the side of law enforcement this operation went by the textbook. It was now time to maximize the profit from this catch. Some time between 2013-01-17 and 2013-01-26, most likely on or around 2013-01-20, EMPLOYEE was arrested by law enforcement and presented with the facts of the matter at hand. He was set to go to jail for a substantial time and be separated from his wife and child. The alternative was a deal leading to a light sentence, in exchange for full cooperation in the ongoing undercover operation. This, again, is standard procedure. The structure of most organizations, the law on the books, the quality of the prison system, and the character and experience of the targeted individuals work strongly in favor of law enforcement. 
Especially for online crime, where the personal bonds and loyalty between members of an organization are weak and no expectation exists that anyone will “take care” of the trial and the suspect's family, suspects are easy to turn. They have everything to lose and exactly nothing to gain from staying loyal. There is no social safety net for criminals waiting for them in jail, nobody who will protect and feed their family, nobody who will send a well-paid lawyer. This makes these organizations far easier to infiltrate than the classical mafia. After being presented with the options, EMPLOYEE agreed to cooperate fully. At this point law enforcement had access to almost all messages sent on TSR and the details of past deals. These records almost certainly went back at least to mid-2011 (it seems there was later a purge initiated by DPR on 2013-05-24). All data available was immediately copied and retained, in the order of importance of the various targets (DPR included). It can be assumed that the conversations collected from the system were incomplete, in that they may not have included DPR's own messages but only replies (including quotes) from his contacts. This may also have led to sustained high-level access to the messaging system, either by gaining valuable information on other administrators, by direct access to other administrators' accounts, or by DPR failing to later shut down EMPLOYEE's account. I shall return to the importance of this data later when talking about how the server infrastructure of TSR was uncovered. It is not clear when exactly EMPLOYEE turned on DPR, but he did, no later than when DPR asked the undercover agent UC-1 to execute a hit on EMPLOYEE. That EMPLOYEE cooperated is demonstrated by him helping to stage a photo meant to prove his death to DPR. At this point the case immediately escalated into an FBI operation (if it wasn't one already). 
Armed with this massive trove of data, law enforcement was in a position both to start a second undercover operation to attack DPR directly and to locate the servers. Starting 2013-03-13, a user of the platform called “FriendlyChemist” (FC) tried to extort money from DPR by threatening to release a large amount of user data allegedly stolen from a vendor on the platform. The reason given was money problems FC had with a supplier of his. DPR asked to be brought into contact with the supplier, “Redandwhite” (RAW). Over the next two weeks a story develops in which DPR purchases a hit on FC from RAW. I am inclined to think that this story is another undercover activity by law enforcement to position RAW inside DPR's circle of trust. Multiple hints point at this reading: 1.) UC-1 (the undercover agent and killer in the first undercover operation) was tainted because he was affiliated with the arrest of EMPLOYEE. This could have, and probably did at some point, raise suspicion on DPR's side. 2.) Before DPR's laptop fell into the hands of the FBI, the contents of a TorChat conversation between RAW and DPR were known to law enforcement (the conversation is from 2013-06-01; the laptop was only available after 2013-10-02, while the contents were known by 2013-09-27 at the latest). 3.) RAW procured counterfeit identity documents (passports, etc.) for DPR (2013-06-05) and shipped them to DPR's home address. This shipment was intercepted by Customs and Border Protection (2013-07-10). For this to be the result of a random inspection seems unlikely given the history and timing of the case. 4.) The first undercover action, leading to the cooperation of EMPLOYEE, likely generated enough data to convince DPR that the extortion threat was serious and imminent. The ID documents later procured and shipped to DPR's home address by RAW are the cornerstone on which the identification of DPR's alleged real identity rests. 
The combination of photos, birth dates, home address and various social media records made that point watertight. Additional postings made early in 2011 (shroomery.com, bitcoinforums) and in March 2013 (Stack Overflow) only hardened the evidence. At this point, between 2013-06-05 and 2013-06-20, the undercover agent RAW received at least the photos and birth date information to be put on the forged documents, and this led to a match between a previously recorded person of interest in the file (from the posting on bitcoinforums looking for an IT pro and including the gmail address) and data received directly from DPR. This constituted reasonable suspicion to pull the data on DPR's gmail account, leading to IP records dating from 2013-01-13 to 2013-06-20. Multiple entries were located to one of DPR's friends' apartments and to an internet cafe in San Francisco. Additional IP address data pulled for both locations revealed a list of IP addresses and the times they were assigned to those ISP accounts. The data revealed by Google likely also exposed the Stack Overflow account (SOA) created by DPR on 2012-03-05. The SOA later becomes an important link between authentication data found on the Silk Road servers and DPR. Please keep in mind that all this information ends up in the case file and is double-checked and cross-referenced, which might have led to the following find: On 2013-05-24 a TSR user had warned DPR that the site leaked an IP address (included in that message) that was used to restrict administrative access to the software running on it. If the FBI had sustained access to the messaging system of TSR after turning EMPLOYEE, this would have led to an immediate effort to locate that address and gain access to the data stored on that machine (hereafter TSR-VPN). As the documents show, this server was located and imaged between 2013-06-03 and 2013-09-15 at the latest. 
When the image was pulled from TSR-VPN, it revealed that on 2013-06-03 someone had accessed the system for the last time, wiping the contents of the user accounts and log files. However, this person forgot that the last access would also generate an entry when logging off, leaving the IP address used in that last access on the machine. This address again matched the records for the internet cafe mentioned above. However, it is unclear whether TSR-VPN was discovered before or after the main web server of TSR was imaged. Some discussions on the Internet in the aftermath of DPR's arrest also claim that DPR used a VPN service provider to access the servers of TSR. This is a misunderstanding that becomes evident on a close reading of the official documents. The system in question was a virtual private server that DPR connected to via a VPN, and from which he moved on to TSR. It was a single-purpose machine only used by persons working for TSR. It is safe to assume that with the data made available through EMPLOYEE, RAW, the early case file, and the IP records pulled for DPR's google accounts, the friend's apartment and the internet cafe, the FBI had enough to begin closing in on DPR. From approximately 2013-06-20, DPR was under direct physical surveillance to establish his habits and movements as well as to acquire additional digital evidence. It appears likely that DPR continued to do system administration tasks from the Internet cafe. The use of the TSR-VPN suggests that using SSH over Tor was too cumbersome for DPR (and any admin regularly doing this is now nodding his head in agreement). Therefore it seems likely that this is what led the FBI to find the TSR-WebServer. If DPR logged in from the cafe without using additional protection, or with inadequate protection (like an unencrypted proxy connection), any investigator listening in on the same cafe's WLAN could gather the identification necessary to locate the TSR-WebServer. 
There are other explanations for how the TSR-WebServer might have been discovered. 1.) Payment tools and ID forgeries that DPR might have acquired through TSR and used to set up the TSR-WebServer were discovered by the FBI in the data received through EMPLOYEE. 2.) DPR used identifiable information of his real identity (especially payment means) to procure the TSR-WebServer. 3.) The FBI hacked the TSR-WebServer, which seems to have had no configuration to prevent the leaking of public IP addresses. 4.) Communication between DPR and the web host for the TSR-WebServer was revealed through the surveillance of DPR’s residence. At this point it is impossible to say, but I favor the theory that the server was identified by surveilling the cafe’s WLAN both digitally and physically, and that it was located around 2013-06-25. (It seems that DPR was also using a library’s WLAN for TSR work; the attack to locate the TSR-WebServer could just as easily have been conducted there.) Please note that for an operation at this location, no court orders would have been required. Using an open WLAN does not justify any expectation of privacy and is thus fair game for law enforcement. With the location of the TSR-WebServer known, the police of the country in which it was located were contacted by the FBI under a Mutual Assistance Treaty. This in turn led to the web hosting company being contacted and an image being pulled from the TSR-WebServer. For this to be possible the server was either a VPS (Virtual Private Server) like the TSR-VPN server mentioned above, or the system was utilizing RAID (in which case one of the mirror disks was pulled and replaced with an empty one). Also, it appears that no disk encryption was utilized. The contents of the disk then led to the location of the TSR-WalletServer (used for financial transactions) and revealed digital clues that linked the administrator of the TSR-WebServer to the Stack Overflow posting made by DPR. 
(The username contained in the SSH public key matched the name given on the Stack Overflow account.) After this critical operation had been concluded without DPR’s notice, the last piece of confirmation was contributed by DHS visiting DPR’s residence and confronting him with the ID forgeries procured through undercover agent RAW. At this point the last sensible reaction for DPR would have been the immediate wiping of his personal laptop’s contents. However, he didn’t. When DPR went to a nearby library on 2013-10-02 to access TSR, the FBI was ready. Having staked out his movements and habits, they had requested an arrest warrant and planned the arrest. To minimize the risk of any data being made inaccessible (by disk encryption) they conducted the arrest in a location where they could separate DPR from his laptop quickly, before he would be able to realize what was going on. And that’s exactly what they did, getting access to all contents of DPR’s laptop when arresting him. —— You will notice that I included a few assumptions on the timing of certain events (arrest of EMPLOYEE, when surveillance started, when the TSR-WebServer was located). These are educated guesses based on how long it takes to get such operations moving and into place. These estimates, however, are based on third-party observer experience from a different jurisdiction; US law enforcement might be a bit faster or slower. Also, I have assumed that the statements made in the official documents are truthful. Sadly there is the risk that many of them are not. It might be that much of the undercover action was more than just a sting and was instead fully fabricated after the fact to protect sources and methods. All of that we don’t know, which is why I stand by the above analysis for the time being (and until more data becomes available during the trial – if there ever is one… which is not terribly likely). So, what was it that led to DPR’s fall, what were his crucial errors? 
Many have pointed to his activities on Stack Overflow and the bitcoinforum. I disagree. While these actions sealed his identification POST-FACT, they did not substantially contribute to his ANTE-FACT identification. Instead it was the vulnerability of the SilkRoad operation to undercover infiltration, based on a lack of compartmentalization and a lack of tradecraft in exchanges that the TSR staff should never have gotten involved in. That is what broke open the organization and led to an implant that was critical to identifying DPR. Also, the lack of precautions taken in accessing the TSR-WebServer for system administration tasks and the lack of disk encryption were fatal. That communication on the platform was conducted without encryption and that deliveries were sent to true residence addresses added to the fall. So, in reverse, the lessons to take are: 1.) Never have the operators of such a system partake in any deal. 2.) Never do exchanges at true residence addresses. 3.) Use proxies for exchanges. 4.) Always use anonymization for system administration access. Better yet, use it all the time, always. 5.) Always use disk encryption, even on servers. 6.) Learn digital forensics to protect against it. 7.) Use random locations for physical operations to prevent geographic profiling. 8.) Use separate laptops and fully developed covers for all activity. 9.) Compartmentalize organizations deeply. Limit the damage that can be done by operators. For reference, the official timeline: SR-Timeline.html]" } , { "href": "/2013/02/reaching-us-247/", "title": "Reaching us 24/7", "tags": [], "content": "[Over the last few days our incoming gateways have been the target of a DDoS attack. However, we remain reachable via the Tor and I2P darknets: Tor: shadow7jnzxjkvpz.onion (or via shadow7jnzxjkvpz.tor2web.org if you have no Tor installed). 
I2P: jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p (or via jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p.in if you don’t have I2P yourself). Alternatively: shadowlife.i2p (or shadowlife.i2p.us or shadowlife.i2p.in or shadowlife.i2p.to if you do not have I2P running)]" } , { "href": "/2012/12/news-automated-passport-checks-to-be-extended-in-germany/", "title": "News: Automated passport checks to be extended in Germany", "tags": ["biometrics", "rfid"], "content": "[The interior ministry of Germany announced that automated passport checks are to be extended to more airports. The system to be used is the EasyPass Gates system. Travelers are automatically checked for height and a picture of their face is taken. The picture then is checked against the electronic record stored in the passport, comparing the biometric features. On biometric match, the passenger may pass without further contact with the border agents. The core technologies used are biometric facial recognition and RFID of the passport. No previous individual enrollment of the passenger is required. After the system has been used in Frankfurt am Main, the airports of Hamburg, Berlin and Duesseldorf are to be added to the program. The EasyPass system was developed by L1 Identity Solutions which is now part of Morpho, a subsidiary of Safran S.A., a French multinational defense and aircraft technology provider. ShadowLife commentary: EasyPass is a prime example of how biometric systems can be used for automated, low cost border passport checks. The same technology however can easily be extended to provide additional inland checkpoints and increases individual dependency on biometric RFID-enabled identity documents. 
Sources: http://www.heise.de/newsticker/meldung/Flughafenkontrollen-Abfertigungssystem-EasyPass-wird-ausgeweitet-1761689.html (in German) http://www.heise.de/newsticker/meldung/Erfolgsgeschichte-EasyPass-soll-fortgeschrieben-werden-1076438.html (in German) http://www.morphotrust.com/ http://en.wikipedia.org/wiki/Safran]" } , { "href": "/2012/12/lessons-learned-online-and-offline-part-iv/", "title": "Lessons learned. Anonymity - Online and Offline – Part IV", "tags": ["anonymity", "tradecraft"], "content": "[In the last three installments of this series we looked at the Theory of Anonymity, and what to expect of anonymity both online and offline. Several conclusions can be drawn and turned into lessons on how to protect anonymity more effectively. This we are going to explore in this part of the series. Lesson 1: The more knowledge an observer has about as many persons as possible, the weaker anonymity becomes. Anonymity, being a knowledge problem, depends solely on what the observer knows about us and other people – the identifying information or unpooling attributes about members of our anonymity set. In the information age both the online world – the Internet – and the offline world betray our efforts to keep our identity protected. Lesson 2: Anonymity is not something that can be expected anymore, neither online nor offline. Unique identifiers and strongly unpooling attributes are at the core of both the operation of the Internet and the digitization of the physical world. These pieces of information are not only generated, but constantly collected and permanently stored. It is not only the data invisibly created by the technology around us that harms our anonymity, but also the data collected about our behavior. Lesson 3: Anonymity must be actively created. The generation of de-anonymizing data and its collection is a process that does not need to be initiated by anyone targeting a specific person. 
It happens in the background as the default mode of our world. To protect anonymity, active steps must be taken. Lesson 4: The protection of privacy relies on generating less data. Many motives exist for personal data to be generated and collected, from marketers to law-enforcement. Since data can be stored, transferred and traded easily, it can easily end up in unforeseen hands. Also, some parties possess special legal powers or direct access to data. Therefore: Lesson 5: Data that cannot be prevented from being generated needs to be concealed by use of technology or changed behavior. Both online and offline the technologies of daily use depend on certain personal identifiers to work. This data will always be generated if these technologies are used. However, this data can be concealed or made less telling by changing how technology is used and by protective technologies created specifically for the protection of anonymity. Lesson 6: Anonymity is protected by the individual. Protection requires taking the initiative to actively reduce or conceal data. None of these protective technologies should be expected to work without the active effort of the person trying to protect himself. Also, changes to behavior to protect anonymity are the responsibility of the private individual. Lesson 7: Online anonymity relies on concealing IP-Addresses, removing Cookies and Referers, and obfuscating browser fingerprints. In the online world of the Internet, the four strongest and most widely spread unpooling properties are IP-Addresses, Cookies, Referers and Browser Fingerprints. Lesson 8: Offline anonymity relies on reducing the use of credit & loyalty cards, not carrying a cellphone and trying to escape recording by cameras. Offline, in the physical world, most de-anonymization is done through mobile phones, credit and loyalty cards, and face recognition. 
Lesson 9: Instead of making all data available to a single party, identifying information must be split over multiple parties. Even with the use of protective technologies, some data will be generated. To protect anonymity in these cases, it is necessary to make sure that identifying data is split over multiple parties and cannot be correlated. Lesson 10: Protecting anonymity requires awareness of how we behave and how technology works – and adapting our methods of protecting anonymity accordingly. Technology around us is changing constantly. Our behavior is a strongly unpooling attribute that de-anonymizes us. This can only be countered by a constant awareness of how we act, how technology works, and how identifying information can be minimized in a changing world. Armed with these ten simple lessons, anonymity can be partially restored. In the future, ShadowLife.cc will present and explain various methods and technologies for protecting anonymity – and privacy in general – both online and offline. Five things to start with To start with protecting your anonymity, try out these things on a daily basis: Leave your mobile phone at home, or carry it switched off. Do not pay with your bank-issued credit card. Instead, get yourself a Visa or Mastercard Gift Card – or better yet, use cash. Stop taking and uploading photos to the internet, and don’t volunteer to model for anyone or become part of a stranger’s snapshot. Use the incognito mode or privacy mode of your browser. When asked at a coffee shop which name to put on your order, give out an invented name. While these five easy steps do not protect you fully, they are very easy to take. And they help with getting a feel for a lifestyle that puts a greater emphasis on privacy. Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of what anonymity is are explored. 
Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.]" } , { "href": "/2012/12/no-place-to-hide-anonymity-online-and-offline-part-iii/", "title": "No place to hide. Anonymity - Online and Offline - Part III", "tags": ["anonymity", "physical tradecraft"], "content": "[In this article we are going to explore how anonymity in the physical world is eroded through technologies and conventions that have been introduced over the last 30 years. Most people assume that their physical behavior is mostly disconnected from the world of bits and bytes, databases and surveillance (see part I on the Theory of Anonymity, and part II about Online Anonymity). Sadly, this increasingly proves to be an illusion. It is easy to overlook how much the digital world has found its way into our physical lives over the last years. Just 30 years ago most people where only consumers of data, be it the TV or radio in their living-rooms. Most data produced by them was strictly personal or business related and never distributed widely or easily accessible to third parties. Daily transactions were settled with cash, travel records mostly non-existent. Only their telephones produced usage data, and even that was bound more to the house or office than to the individual user. The data trail left behind by individuals was very small and limited. This has changed fundamentally. Today, the vast majority of people generates a growing data trail with increasing frequency and accuracy. People have become constant producers of data in their daily, physical, non-Internet lives – often without noticing or understanding the processes involved. 
Digitization The main reason for this development is the increasing digitization of life. Computers and databases are not restricted to a natural habitat called the Internet, quite the contrary. Computer technology was developed mostly for managing physical events – managing warehouses, cataloging citizens and customers, calculating machine parameters, managing relationships and planning for the future. The focus of attention on communication, social networking and the Internet in general has allowed many developments in the physical world to go almost unnoticed. And this is especially true for our perception and defense of anonymity. Most transactions taking place in the physical world are now mirrored by transactions in the digital world, creating a digital shadow of our daily offline lives. Physical and digital, offline and online, are tightly linked. Physical objects are represented by digital objects used to track and understand the events in the physical world. And these digital representations are increasingly becoming the sole focus for decisions made in the physical world. This transition has only just begun, and will continue towards an Internet of Things that will dominate the way we deal with both the digital and the physical in the future. The digitization of life, the connection between physical objects and digital representations, has already enveloped most aspects of business-to-consumer transactions, travel and movement, as well as most communication. However, digitization of these areas comes with several challenges that must be understood to grasp the impact on physical anonymity: Human life is notoriously ambiguous, a feature that is hard for computers to cope with. This makes it necessary to create means to precisely describe and identify actions and objects that need to be digitally processed. 
The solution for this is the introduction of unique identifiers, numbers that are directly tied to one specific action or object and that will not be encountered in any other relation. Another challenge is data acquisition. For computers to be able to track physical objects or events, data about these must be made available in a digital form. This happens by the use of sensors that collect and transmit the data for further processing by computers. In those cases where data cannot be acquired automatically, or when data needs to be presented to humans, terminals are used. Here humans need to actively participate in data acquisition or communication with the computer. Furthermore, just to complete the description of digitization of life, some actions in the physical world can be automated through actuators, devices that can perform operations like opening or closing doors, or moving objects. Lastly, there needs to be a method for connecting information about multiple objects and events – there needs to be correlation. This is done by means of titles and co-presence. Titles are formal connections between objects that are usually enforced through law – like titles of ownership for cars, identity papers like passports or objects that may only be found in the possession of a specific owner like credit cards. Co-presence refers to the fact that two or more objects can be located at a specific geographic point at the same time, preferably repeatedly. While this may sound excessively detailed, the combination of unique identifiers, sensors, terminals and correlation methods describes the infrastructure needed to collect vast amounts of identifying information and process it automatically. Digitization and Anonymity Just a few decades ago, people only left behind data in the memory of other people. One person would witness the presence or action of another person, and maybe communicate it to a third party. 
But this data was widely distributed and disconnected, unreliable and only short-lived. Only when a person was specifically targeted were means like photography, audio recording, fingerprint capture, on-foot surveillance and detailed record-keeping employed. When not targeted, most people were anonymous outside of their direct social environment. They were not identified nor were records of their actions kept. This stands in stark contrast to today. Through the use of unique identifiers, sensors and correlation most people are constantly identified, their actions recorded and records kept indefinitely even if not specifically targeted. Many of these records are not yet interconnected, but many more of these records are kept by an increasing number of parties that individually combine them. Further interconnection will develop for economic reasons and due to law-enforcement interests. Since these person-specific records are relatively cheap to store and manage, they are kept for not-yet-identified future use. All of these records reduce the anonymity set of an individual simply by containing massive amounts of unpooling properties. Since many of these properties are unique identifiers, the anonymity set is often reduced to a single member – leaving no anonymity in the physical world – unless the individual takes conscious countermeasures to protect his privacy. In the following we shall explore several of the technologies used for unique identification and sensors. Due to the nature of the subject this can only be an overview that is by no means comprehensive, but it should enable us to identify other technologies when they are encountered. A more complete list can be found in the notes below. Everyday Tracking Probably the best-known technology for physical tracking is the use of credit cards and other payment cards (with the exception of pre-paid, anonymous gift cards paid for in cash). 
They are directly tied to a person and connect that person to the time and place of a payment, in addition to making payment and shopping habits accessible. Thus credit/payment cards are unique identifiers that destroy anonymity. In addition, the payment data is made available both to the shop and the credit card company, and potentially to third parties requesting that data. The license plate of a car is another unique identifier that is currently gaining popularity among those trying to reduce the anonymity of others. Automated license plate scanners are set up in more and more locations, allowing the automated collection of license plate data combined with time and place. These are often coupled with additional sensors like toll collection systems to identify the in-car toll boxes. Combined, this allows for the automated creation of movement profiles that are directly connected to a person. A more personal, precise and reliable method to create movement profiles and to pinpoint an individual is the mobile phone. Almost everybody is carrying a mobile phone today, all the time, at all places. And mobile phones are constantly traceable as long as they are switched on. The mobile phone network knows the location of every active phone at all times, simply by how the network is set up – not because of targeted surveillance or backdoors. Every mobile phone has a globally unique hardware identification number, the IMEI (International Mobile Equipment Identity), which is broadcast to the network frequently. Furthermore, the IMSI (International Mobile Subscriber Identity) number, which is stored on the phone’s SIM card, is made known to the network so that calls can be routed. These pieces of information – location, time, IMEI and IMSI – are frequently stored for extended periods of time and made available to third parties. Together, they form a powerful method to find out where a person was at a given time. 
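The kind of correlation such records enable can be made concrete with a short sketch. All sightings and device identifiers below are invented; the point is only to show how repeated co-presence – two devices seen in the same cell during the same time slot, again and again – links them to each other (and, via one identified device, to a person):

```python
from collections import Counter
from itertools import combinations

# Invented (timeslot, location, device_id) sightings, as a cell tower
# log might record them. Devices repeatedly seen together are likely linked.
sightings = [
    ("t1", "cell-A", "imsi-1"), ("t1", "cell-A", "imsi-2"),
    ("t2", "cell-B", "imsi-1"), ("t2", "cell-B", "imsi-2"),
    ("t3", "cell-C", "imsi-3"), ("t3", "cell-C", "imsi-1"),
]

def co_presence(events, threshold=2):
    """Return device pairs sharing at least `threshold` (time, place) slots."""
    by_slot = {}
    for t, loc, dev in events:
        by_slot.setdefault((t, loc), set()).add(dev)
    pairs = Counter()
    for devs in by_slot.values():
        for a, b in combinations(sorted(devs), 2):
            pairs[(a, b)] += 1
    return [pair for pair, n in pairs.items() if n >= threshold]

print(co_presence(sightings))  # → [('imsi-1', 'imsi-2')]
```

A single shared slot (imsi-1 and imsi-3 at cell-C) stays below the threshold; it is the repetition that de-anonymizes.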
Since most mobile phones and network accounts for mobile telephony are bound to a person, they are immediately de-anonymizing. But there are more ways electronic companions – be it smart-phone, tablet or laptop – can de-anonymize their owner. When switched on, wireless-enabled devices broadcast so-called MAC addresses that can easily be captured over dozens of meters. These hardware addresses are intended to be globally unique and not change, so as to identify the device to a local hotspot or other devices like headsets. Both WiFi/Wireless LAN and Bluetooth use identity broadcasting, though many devices can effectively suppress the broadcasting of their Bluetooth ID. Another strongly unpooling property are the loyalty cards issued by various commercial entities. These also allow the collection of transaction data for products bought, the time and place of purchase, and the person. But since loyalty cards are not legally bound to a person they can be swapped to reduce the quality of data collected. This is why we classify loyalty cards as a strongly unpooling property instead of unique identifiers. A less likely, but nevertheless frequently used method of tracing is the use of bank note serial numbers. Though they are not bound to a person, they can be connected to a person through the commercial transaction itself. This is a method frequently used in law-enforcement sting operations. For example, the bank note numbers are known to the ATM at which the target uses his credit card to withdraw money, making it possible to connect the serial numbers to his identity with a high probability. However, data between banks and grocery stores is rarely shared, especially not serial number tracing data. It is useful to keep this method in mind however, since automated bank note scanners are becoming a more frequent piece of equipment found not just at banks but at shops and border checkpoints. 
Far more prevalent are tracking methods based on pre-paid or subscription tickets for public transportation. For example, the Oyster Card used in London allows the long-term tracking of movement because its built-in unique number can be connected to its use at the gates to the public transportation network. Since the technology employed (MiFARE RFID chips) has been proven insecure, any stranger could read the ID of an Oyster Card carried in a target’s purse and then look up the locations and times of travel. Some transportation ticketing systems also allow access to subscription data, often identifying the person directly. The underlying technology in many ticket systems is RFID (Radio Frequency IDentification) chips. But RFID is far from being limited to tickets. RFID chips are found in passports, credit cards, tickets – but also attached to everyday products like clothing. RFID-tagged clothing is intended for stock management and anti-theft operations, but it also allows the silent tracking of persons. Since clothes are personal and we usually do not replace all our clothing at once, the correlation of RFID identities in clothing with payment data collected at stores can make the wearer long-term traceable. For example, RFID scanning gates placed at choke points like hotel entries, subway system entries and store doors can be used to track the wearer of RFID-tagged clothing or other objects that are equipped with RFID tags. Lastly, the fastest growing area of de-anonymization and tracking is the spreading use of biometrics. Facial recognition systems are now being built into CCTV (camera surveillance networks) and even shops’ surveillance cameras. Since the human face can be quickly identified by current technology – and the face is constantly visible – this probably makes facial recognition the strongest future application for identification. However, facial recognition is not limited to surveillance cameras. 
Due to the growing use of mobile phones with built-in cameras, and the spreading habit of taking pictures always and everywhere and uploading them to social media websites, more and more biometric data linked to place and time is made available – with the active and cheerful help of a whole generation of Facebook users. Must digitization lead to loss of privacy? It should be noted that the process of digitization does not inevitably lead to a loss of anonymity. Many convenience and efficiency gains can be achieved without impacting privacy, if the technology is designed with data protection in mind. For example, unique identifiers are not always required, or they can be only temporary and changing. It would also be possible to offer more options to opt out of data collection or to limit data collection to well-defined circumstances in which its use can be demonstrated. However, these “Privacy by Design” approaches are rarely encountered, either because they are not requested by consumers or because non-economic interests (like law enforcement etc.) are at play. Conclusion It should be clear that anonymity should not be expected from the physical world. Credit cards, mobile phones and facial recognition are the three most frequent de-anonymizing technologies that we are constantly confronted with. Without special measures, physical anonymity does not exist anymore. Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of what anonymity is are explored. Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity. 
Further examples: Unique Identifiers, serial numbers: (Ultimately unpooling properties) Credit Cards, Cash Cards, ATM cards Financial Transactions: Account numbers, check numbers, routing codes Mobile phone, also built into many modern cars: IMSI: International Mobile Subscriber Identity. Globally unique, associated with SIM card IMEI: International Mobile Equipment Identity. Globally unique, associated with the mobile phone hardware Phone number Number plates / License plates Passports, identity cards Probable identifiers: (Strongly unpooling properties) Biometrics: Face geometry Fingerprints Voice characteristics DNA / Genetic fingerprint Eye: Iris & Retina Receipt numbers of purchases Loyalty cards MAC Address. Publicly visible hardware address of WiFi/Wireless LAN hardware. Bluetooth ID. Publicly visible hardware address of a Bluetooth device. Tickets for public transportation, Oyster card Banknote serial numbers Names used in personal interaction RFID tags, found in many products Artificial DNA marking of objects Automated toll payment boxes Weak unpooling properties: ‘Weak biometrics’, visual: Automated: Gait (can be automated) Hand/Ear patterns Race Gender Age Hand geometry Build Clothing Habits, patterns of behavior: Geolocation data (can be very strong) Buying habits Time habits Driving habits Power consumption Sensors & Terminals Payment terminals (credit cards, loyalty cards) Mobile phone Ticket systems (transportation) RFID Gates CCTV/Surveillance camera networks Automated license plate scanners]" } , { "href": "/2012/11/news-salt-lake-city-police-about-to-adopt-head-cameras/", "title": "News: Salt Lake City police about to adopt head cameras", "tags": ["head camera", "police state", "surveillance"], "content": "[The police chief in Salt Lake City, Utah, wants to make head cameras mandatory at his police department: This US police force wants to clip cameras on the side of all their officers’ heads via glasses, helmets or hats. 
The head cameras can record a crime scene or any interaction with the public, in addition to the footage already produced by dashboard cameras in their cars. Supporters of the technology claim that the head cameras are made in such a way that officers cannot edit the footage, helping to ensure transparency. The AXON Flex devices considered in Utah are manufactured by US firm TASER (they are an upgrade of the earlier AXON Pro system). Currently, there are 274 US law enforcement agencies using one or both versions (some for all officers, others are just testing a few). UK police forces are also testing similar technology. For example, Grampian Police officers in Aberdeen have been using body cameras which attach to their helmets and vests since 2010. Sources: http://www.bbc.co.uk/news/technology-20348725 http://www.bbc.co.uk/news/uk-scotland-north-east-orkney-shetland-18981781]" } , { "href": "/2012/11/news-bill-to-authorizes-warrantless-access-to-americans-email/", "title": "News: Bill authorizes warrantless access to Americans' email", "tags": ["email", "privacy law", "surveillance"], "content": "[A vote on a bill which authorizes warrantless access to Americans’ email is scheduled for next week: A Senate proposal touted as protecting Americans’ email privacy has been rewritten to give government agencies more surveillance power than they possess under current law. It would allow more than 22 agencies (including the SEC and the FCC) to access Americans’ email, Google Docs files, Facebook wall posts, and Twitter direct messages without a warrant. In some circumstances the FBI and DHS could get full access to Internet accounts without notifying the owner or a judge. This is a setback for Internet companies, which want to convince Congress to update the 1986 Electronic Communications Privacy Act to protect documents stored in the cloud. Currently Internet users enjoy more privacy rights for data stored on hard drives than for data stored in the cloud. 
ShadowLife comment: The law does not protect privacy, encryption does. Source: http://news.cnet.com/8301-13578_3-57552225-38/senate-bill-rewrite-lets-feds-read-your-e-mail-without-warrants/]" } , { "href": "/2012/11/news-uk-plans-to-block-online-porn-for-minors/", "title": "News: UK plans to block online porn for minors", "tags": ["censorship"], "content": "[UK government is moving forward with its plans to block online porn for minors: Anyone buying a new computer or signing up with a new Internet service provider (ISP) will be asked whether they have children on first login. On ‘yes‘ further questions will be asked to determine the stringency of the anti-pornography filters which will be installed. ISPs have to impose appropriate measures to ensure that those setting the parental controls are over 18. ISPs also have to prompt existing customers to install the filters. This plan differs from earlier opt-out plans which would have blocked online porn automatically. The Open Rights Group consider this ‘active choice’ proposal to be better than the earlier opt-out plans. Sources: http://www.dailymail.co.uk/news/article-2234264/David-Cameron-ensure-parents-led-filter-process-new-computers.html http://www.openrightsgroup.org/blog/2012/victory-government-backs-down-from-default-filtering]" } , { "href": "/2012/11/full-disk-encryption-with-ubuntu-linux/", "title": "Full disk encryption with Ubuntu Linux", "tags": ["digital tradecraft", "encryption"], "content": "[Ubuntu is one of the most popular Linux distributions and a good start for a secure and yet easy-to-use computing environment. 
In order to install it, go to Ubuntu’s website to download the current release for your desktop or laptop computer and follow the installation guidelines there. Setting up full disk encryption in Ubuntu During the installation of Ubuntu check the “Encrypt the new Ubuntu installation for security” box in the graphical installer to activate full disk encryption (dm-crypt with the symmetric AES encryption algorithm is then used for that purpose): Make sure you use a good password, see the article “How to choose a secure password” for details. The article “Encryption algorithm: a primer” gives you an introduction to encryption algorithms. Checking the box mentioned above and choosing a good password is all you have to do to activate full disk encryption in Ubuntu and keep your data secure. The Electronic Frontier Foundation (EFF) has more information about full disk encryption in Ubuntu 12.10. Removing data leaks in Ubuntu Unfortunately, the default install of Ubuntu 12.10 added some data leaks. If you perform a search on your desktop the search term is also sent to Ubuntu’s servers in order to give you related Amazon products and Internet search results, which is problematic. You should disable that with the following two steps: To disable Amazon advertisements open a terminal and type in the following command: sudo apt-get remove unity-lens-shopping To disable Internet search results open the Privacy app and disable Include online search results: The EFF also has more information about the data leaks in Ubuntu 12.10.]" } , { "href": "/2012/11/opinion-spying-on-petraeus-or-how-emails-quickly-become-incriminating-evidence/", "title": "Spying on Petraeus, or how emails quickly become incriminating evidence [Updated 2012-11-15]", "tags": ["anonymity", "digital dead drop", "digital tradecraft", "email"], "content": "[The current story of Gen.
PETRAEUS and his affair with BROADWELL shines a light on the possibilities of digital surveillance and the tracing of crumbs of information. It can serve as an example and a warning against insufficient digital tradecraft. Though news reports about the exact order and nature of the events are imprecise, unreliable and contradictory, we are trying to put them together into a plausible series of events and give some background on techniques that were, or might have been, used to intrude on the privacy of both BROADWELL and PETRAEUS. Phase I: Threatening emails The case began when KELLEY received between 5 and 10[1] emails of threatening content that did not immediately identify the sender. The FBI was contacted through an agent who was friends with KELLEY, and the matter was investigated by the FBI cybercrime unit. To prevent confusion we will refer to the address these emails were sent from as EMAIL_ACCOUNT_A, since the story involves multiple accounts. Phase II: Requesting account information Since the emails in question did not immediately reveal the identity of the sender, the FBI most likely contacted the email provider of EMAIL_ACCOUNT_A first, requesting the registration data for the address in question (likely using a subpoena). An email address consists of two parts, the “Local Part” or “user” and the domain managing the account. Together these form the address as user@domain.com. The information about the domain, and thus who manages an email address, is publicly available through the domain registration system and can be looked up within seconds (using a whois service like www.whois.com). Since EMAIL_ACCOUNT_A was registered under a pseudonym (false user information) and not the real identity of the owner, the FBI resorted to identifying the account owner through other means. Phase III: Tracing access At this point the FBI either: Requested and received historic login data to EMAIL_ACCOUNT_A from the email provider.
This would include the dates/times when an account was accessed and which IP-Addresses were used by the user. Or the FBI relied on the IP-Address information included in most emails in a section that most email programs hide from the user but that is nevertheless carried by the email itself and easily obtained through the email program. An example of what such an entry in an email looks like is shown here: Received: from [] by fmail.com via HTTP; Fri, 11 Nov 2011 11:11:11 PST At this point the FBI had a list that showed at what dates/times the owner accessed EMAIL_ACCOUNT_A with which IP-Address. From there the FBI used publicly available databases to identify the owners and/or locations of the IP-Addresses in question, which resulted in a list that informed them about the places and times EMAIL_ACCOUNT_A was used. Phase IV: Identifying the sender Apparently EMAIL_ACCOUNT_A was not used from a personal Internet connection to send the emails in question. This led the FBI to contact the owners of the IP-Addresses identified in Phase III – which included multiple hotels – and request information about potential users of the Internet accounts identified by the collected IP-Addresses. Apparently the FBI needed no subpoenas or even court orders to access this information; hotels simply shared the guest records for the dates in question. At this point the FBI had a list of persons that included the user of EMAIL_ACCOUNT_A. They then simply looked for persons that had been at all of the places at the times in question. Leaving one suspect: BROADWELL. Phase V: Widening the picture At this point the FBI could convince a judge to issue a warrant to identify additional email accounts used by BROADWELL, who had been successfully identified as the owner of EMAIL_ACCOUNT_A. It is unclear what technique the FBI used to find additional accounts of BROADWELL.
Possible options are: Using FBI-controlled software installed on BROADWELL’s computer to identify additional email accounts accessed. BROADWELL’s modus operandi included accessing email accounts from changing Internet connections like those of hotels. Since this was to be expected in the future as well, FBI-controlled data-collection software installed on BROADWELL’s laptop would have been a good choice, simply because she would likely use that machine during travels. Software like Magic Lantern, CIPAV or any of their successors would have been the most promising path, but would also present legal obstacles. Another approach would have been buying available data from various data traders like Acxiom that often have information about multiple email addresses used by the same person on file. This data is usually collected from various sources and aggregated based on common identifiers like IP-Addresses, which together yield a surprisingly detailed picture of the person in question. However, this data is often less complete than required in such an investigation and also makes case information available to a third party. Because fewer legal obstacles are involved, simple communication surveillance of the internet account used by BROADWELL at home – and potentially of her mobile phone – might have been the most likely route of investigation to take. A system in the likeness of Carnivore (since replaced with more advanced implementations) could have been used to specifically and exclusively look for additional email accounts used, as stated in the warrant. Asking BROADWELL: Sources are unclear at which point BROADWELL handed her computer over to the FBI for physical investigation of its contents. This would likely reveal other email accounts through traces left in the browser history & bookmarks, configuration of email client software, and entries in automatic password managers or auto-fill records of the browser.
[Update:] Some sources claim that both EMAIL_ACCOUNT_A and EMAIL_ACCOUNTS_B were managed by Google. It might be the case that the FBI only asked Google, as provider of EMAIL_ACCOUNT_A, to search for other email accounts that were accessed by the same IP-Addresses and at the same times. Google then would have searched the access logs it stores, discovering EMAIL_ACCOUNTS_B, and then made them known to the FBI. Sources are unclear in this regard, but it remains a possibility at this point. By using any or all of the above methods, the FBI found more email accounts, EMAIL_ACCOUNTS_B, which were accessed regularly. Phase VI: Hitting Gold The FBI at this point gained access to EMAIL_ACCOUNTS_B discovered in phase V. How exactly the access was gained is unclear and depends on the exact method(s) used in phase V. Either account access credentials were discovered, or additional subpoenas/warrants were issued to access the accounts with the help of their respective providers (see phase II). When analyzing the content of these accounts stored on the providers’ servers, a group of accounts, EMAIL_ACCOUNTS_C, stood out due to two factors: Classified information was stored in the account. Multiple sources refer to this but it might be a confusion with files stored on BROADWELL’s computer, which was at some point made available to the FBI. Excessive use of the “Drafts”-folder for communication Especially the use of the Drafts-folder appears to have caught the attention of the media, and possibly the FBI, because it is a common method used to conceal communication. This method is commonly referred to as a “Digital Dead Drop” (the term drop box is mostly a media error/invention). Here the communicating parties share the access credentials to an email account. By authoring emails and not sending them but storing them instead in the Drafts-folder, the parties can exchange messages without actually generating additional traffic “on the wire”.
This was popularized by reports about Al-Qaeda operatives using this method. While it is true that additional traffic is not generated through this technique, the traffic for accessing the accounts and the data in the accounts is still available and often under lower legal protection than actual communication that involves multiple accounts. The method was mostly used out of fear that intelligence agencies would have automated access to international internet communication (true) but would have no access to email accounts stored on servers (false). Even access to email accounts leaves traces that can be scooped up by surveillance operations, and data stored on email accounts is no more secure than transmitted data if the intelligence agency can gain access to the servers – which it usually can. Furthermore, it concentrates all information about the account users in one place instead of spreading it over multiple networks that might not be equally surveilled. Due to the recording of access to email accounts, a surveilling party only needs to secure the cooperation (or undermine the protections) of a single party to gain access to the IP-Addresses of communicating parties and the times/dates when communication took place. And this appears to have been so in this case. Phase VII: Identifying other parties It is unclear how PETRAEUS was linked to EMAIL_ACCOUNTS_C. Most likely the IP-Address information stored by the email provider at each access was used to identify other parties involved. For this, subpoenas to Internet service providers could have been used to identify the users of the IP-Addresses stored in the email account logfiles. More likely, however, the FBI connected one or more of these IP-Addresses to the CIA immediately and left the final identification to their IT department. Commentary on the case Public knowledge about the case is very limited both in depth and reliability.
What can be concluded however is that the FBI used a wide array of investigative methods and resources on a simple harassment case that escalated to a case about concerns on national security during the investigation. Whether this was in any way justified remains to be seen. Several lessons can be drawn from this story: Investigations that begin with low interest and impact can escalate quickly, drawing in more and more potent methods and technologies. Most internet service providers, email providers and hospitality businesses are not sufficient guardians of one’s privacy. Context-Information and Meta-Data (email headers, access logs, IP-Addresses) are the prime source of information for intelligence and investigation operations. These can easily be processed automatically by software because they were created by computers for computers. Hearsay tradecraft (Drafts-folder as digital dead drop) without an understanding of the background needed to protect one’s privacy is not only insufficient but even counter-productive, as shown in this case. Good digital tradecraft for E-Mail Good tradecraft for protecting email communication does exist: Protect email content through message encryption, like GnuPG Do not rely on third party storage of emails. Download emails and delete them from the email server. Store email and other information (such as browser data) securely using Full Disk Encryption like TrueCrypt. Points 1-3 also mean that one should not use webmail services. Select an email provider that is privacy conscious: Removing identifying header information from emails and protecting whois/domain-data or being registered in a jurisdiction other than your own. Use encryption to communicate with the email provider: Insist on TLS/SSL encrypted access to their SMTP (outgoing) and POP3/IMAP4 (incoming) servers. Only access the Internet with anonymization methods enabled that conceal your true IP-Address from third parties, like Tor/I2P/Multi-Hop VPNs.
Do not draw unneeded attention towards yourself by harassing people needlessly. These are only the minimal tradecraft rules for secure and private email use. But they would have been sufficient to protect PETRAEUS and BROADWELL. Please also refer to our Anonymity Series (Part I, Part II) for more background on Anonymity. Media sources used for this article (in no particular order): http://www.newyorker.com/online/blogs/newsdesk/2012/11/david-petraeus-and-the-surveillance-state.html http://online.wsj.com/article/SB10001424127887324073504578113460852395852.html?mod=WSJ_hps_LEFTTopStories http://www.wired.com/threatlevel/2012/11/gmail-location-data-petraeus/ http://www.huffingtonpost.com/2012/11/12/petraeus-fbi-gmail_n_2119319.html http://www.nytimes.com/2012/11/12/us/us-officials-say-petraeuss-affair-known-in-summer.html?pagewanted=all http://online.wsj.com/article/SB10001424127887324073504578113460852395852.html http://openchannel.nbcnews.com/_news/2012/11/12/15119872-emails-on-coming-and-goings-of-petraeus-other-military-officials-escalated-fbi-concerns http://m.apnews.com/ap/db_289563/contentdetail.htm?contentguid=VOlvNjF4 1: Sources are vague on this issue. Update 2012-11-15: Added option 5 to “Phase V: Widening the picture”.]" } , { "href": "/2012/11/news-nec-offers-face-recognition-analysis-for-retailers/", "title": "News: NEC offers face recognition analysis for retailers", "tags": ["face recognition"], "content": "[The technology requires only an off-the-shelf personal computer and a video camera. It can estimate gender and age based only on video footage. Furthermore, repeat customers can be automatically recognized, even across stores. The underlying face recognition product, NeoFace, is also used in services like intruder recognition and surveillance. NeoFace is a cloud service provided by NEC. NEC claims that the face templates generated cannot be reconstructed into images of faces.
ShadowLife comment: ShadowLife disagrees with NEC’s claim that face templates cannot be used to reconstruct the images of the faces recorded. Similar claims were made before about biometric iris recognition templates that were subsequently used to reconstruct iris images through genetic algorithms. NeoFace is an example of self-learning facial recognition software that adds face data to its database by simple observation of crowds in real life scenarios. Storing the NeoFace data in the cloud centralizes data management and gives other parties easy access to facial recognition data and other context data (store visited, date/time of visit, etc.) of all participating locations. Sources: http://www.diginfo.tv/v/12-0209-r-en.php http://www.youtube.com/watch?v=mTCUY4CUHFU http://www.wired.com/threatlevel/2012/07/reverse-engineering-iris-scans/all/]" } , { "href": "/2012/11/news-google-compliance-to-reveal-user-data-and-remove-content/", "title": "News: Google compliance to reveal user data and remove content", "tags": ["google"], "content": "[Report covers the 6 months between January and June 2012. Requests for user data increased by 30% compared to July-December 2011. Google received more than 20,938 requests to reveal user data of 34,614 accounts. Google complied in more than 13,900 cases. The majority of user data requests were made by the USA. Google complied in above 90% of US requests. During the same timespan Google received 1,791 court orders and requests by the executive branch (police etc.) to remove 17,765 items from its search results or other services.
Source: Google Transparency Report]" } , { "href": "/2012/11/anonymity-online-and-offline-part-ii/", "title": "Anonymity – Online and Offline – Part II", "tags": ["anonymity", "digital tradecraft"], "content": "[This article explores how much anonymity really exists online, and how anonymity is reduced by everyday technologies used in Internet communication (please check out Part I of this series for the theory behind anonymity). Many people expect their actions online to be far removed from their physical identity, which often leads them to behave in ways they would never dare if their name were connected to it. But how well founded is this belief in online anonymity? Sadly, there is no such thing as online anonymity per se. Without special technical measures, anonymity on the Internet should be deemed non-existent. Every Internet user leaves a long trail of data behind, much of which can be directly and cheaply connected with his identity. It is necessary to understand the technologies involved to get a clear and true picture of the state of online anonymity: IP Address Every communication on the Internet – such as surfing to a website or making a VoIP call – involves data being reformatted into smaller packets that are then delivered over a vastly complex network of routers – computers that pass the packet on from computer to computer until it reaches the final destination. Here’s an example path for an information packet that travels through the Internet, each line referring to one computer that passed the data on to the next one: Path of a packet 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. The routes data can take on the Internet are determined by both the sender and the recipient of the data, as well as any system in between that is responsible for passing on the data – and the routes will constantly change. For this to work, every packet of data must come with a sender- and a recipient-address that uniquely identifies the computers that are talking to each other.
This address is called Internet Protocol Address – or just IP-Address – and is simply a number ( is an example of such an address from the path shown above). While these numbers look innocent, they are directly related to the computer of the Internet user. When accessing the Internet, the ISP (Internet Service Provider) of the user will assign a unique IP-Address to the user’s computer and store this information in a database from which it can be retrieved with a subpoena, or even resold to data traders and marketers. But even third parties have information about IP-Addresses that they have gathered in various ways, making it possible to often pinpoint an IP-Address to a single street address by using only publicly available data (click on this link to find out what everybody can know about you right now, just based on your IP-Address Maxmind.com GeoIP). Each of the computers involved – be it sender, recipient or any of the routers in between – sees the IP-Addresses of the parties communicating with each other, and even what data is transferred. Be aware that dynamic IP-Address assignment, as it is offered by many ISPs, does not change the anonymity impact of IP-Addresses at all. At best, more data needs to be stored and analyzed to achieve an attribution of Internet communication to a user. Cookies, ETag, etc. By now, every Internet user should have heard about “cookies”. These are little pieces of data that a website can place on the visitor’s computer, and that will be sent back to the website when the user visits it again. They allow the website to connect multiple visits together as coming from a single computer. This sounds innocent enough, however: Not just the website a user visits but every bit of content loaded from it – like javascript, images and flash video – can create and load cookies from a computer. This makes it possible to track a visitor not just on one website, but connect his visits to multiple websites with each other. 
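The cross-site linking described above can be illustrated with a small simulation. This is only a sketch: the tracker class, site names and pages below are all hypothetical, standing in for any real ad or analytics server whose content is embedded on many websites.

```python
# Minimal sketch of third-party cookie tracking (all names hypothetical):
# one tracking server, embedded on several sites, hands the browser an ID
# cookie on first contact and recognizes it on every later embed request.
import secrets

class ThirdPartyTracker:
    """Simulates an ad/analytics server whose content many sites embed."""
    def __init__(self):
        self.visits = {}  # cookie id -> list of (site, page) requests seen

    def serve_embed(self, site, page, cookie=None):
        # No cookie presented: assign a fresh unique ID to this browser.
        if cookie is None:
            cookie = secrets.token_hex(8)
            self.visits[cookie] = []
        self.visits[cookie].append((site, page))
        return cookie  # the browser stores this and replays it later

tracker = ThirdPartyTracker()
cid = tracker.serve_embed("news.example", "/article-1")        # cookie set here
cid = tracker.serve_embed("shop.example", "/checkout", cid)    # other site, same browser
cid = tracker.serve_embed("forum.example", "/thread/42", cid)  # profile keeps growing
print(tracker.visits[cid])
```

Although the user never told any of the three sites who they are, the tracker now holds a single profile spanning all of them, which is exactly the linkage the text describes.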
Depending on the business model of the data traders and marketers involved, user data from multiple websites is shared with the website operators as part of the deal, or resold separately. But even with cookies disabled in the user’s browser, there exist numerous similar techniques that are not as easy to block. This includes, but is not limited to, ETags (a method to optimize loading speed), flash cookies (Local Shared Objects used by Adobe Flash) and HTML5 (which allows the storage of data in the browser in multiple ways). Combined, these are used to create “Zombie Cookies” which are exceedingly hard to remove from a computer. Website forms All anonymity ends when users fill out forms on the web. Be it the signup form for a website, the order form of a shop or even the search terms put into a search engine. Most users use true or close-to-true data when asked for it. From there on, the data can be associated with the IP-Address used and the cookies stored on the computer. Depending on the policies of the websites in question, the data can then be shared with other websites and associated with even more cookies and IP-Addresses, forming comprehensive profiles of browsing habits and identity attributes that are almost impossible to remove from the long-living databases of data traders and marketers. Frequently, form data later becomes the subject of subpoenas, with authorities compiling in-depth reports on searches made on the Internet and websites used. Referers Especially search terms easily find their way through the Internet, and the reason for that is the “Referer”. Every time a user clicks on a link on a webpage, the newly opened webpage is sent the address of the previous one. From this a website learns the search terms used when the user clicks on a search result. But referers are not only created when users click on links. Any content loaded from a website (images, javascript, flash, etc.) carries with it a referer.
When a website loads the credit card logo from the servers of the website operator’s bank, the bank can know that a user visited that specific website, including the IP-Address of the user and potentially other information like cookies. The same happens with the “like” and “share” buttons that prominently decorate many websites today and that are loaded directly from the servers of the respective social network. In combination with cookies and IP-Addresses, this informs social networks about the majority of content their members consume, even when none of the websurfing involved any of their own websites directly. Bookmarks Some websites will generate new page addresses (URLs) for every new visitor. When these pages are then bookmarked or shared with others, website operators can both recognize repeat visitors and gain insight into who shared their pages with whom – giving them easy access to the user’s social relations. History & Cache When loading webpages the user’s browser will first test if it has a locally stored (cached) version from a previous visit, so that the websurfing experience can be sped up. Due to this behavior websites can connect visits together and recognize repeat visitors. Browser Fingerprint Another technique to track users on the Internet is by finding out about their browser’s fingerprint. Due to minute differences between installations, many browsers express a unique behavior which can be tested by websites. This makes it possible to identify repeat visitors and even to track them without relying on IP-Addresses or storing identifying data in their browsers (like cookies). To see how unique your browser is, check out this page by the Electronic Frontier Foundation: Panopticlick Mail headers Tracking methods are not limited to websurfing. As an example of other technologies that have anonymity implications, email shall be quickly examined.
Unbeknownst to most users, emails carry information that consists not just of the email addresses of the parties, but also the IP-Address of the sending user. Received: from [] by freemail.com via HTTP; Fri, 11 Nov 2011 11:11:11 PST The above shows one of the “headers” included in an email. The “Received” headers include the full path an email traveled from mail-server to mail-server, usually including the original IP-Address of the sender’s computer ( in this case). In addition to this, other headers exist that uniquely identify the message, the mail program used, or the conversation the mail refers to. All of this information is visible to any router on the path of the email. This is especially interesting to operators of free webmail services that attract a lot of users. The correspondence of their users allows the operators to temporarily attribute email addresses (and often names) to the IP-Addresses that were used in sending, creating a very precise database of user information that does not have to rely on the cooperation of other parties. Putting it all together This article could only present a shallow overview of the many methods and technologies that compromise user privacy and anonymity on the Internet. When combined with each other and utilized by specialized parties, they comprise powerful means to not only reduce anonymity on the Internet to nothing, but also to spread information about users between networks of actors. The depth of this threat materializes when multiple of these technologies are combined and the generated data is mined. Just using cookies, referers, IP-Addresses and mail headers, most users can be identified during most past and future connections to the Internet, essentially reducing the IP-Address to a unique identifier that is directly associated with the user’s name – without having to resort to subpoenas or data held only by the user’s ISP.
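How little effort the Received-header analysis described above takes can be shown with a few lines of Python. The message below is invented for illustration (the addresses are documentation IPs, the domains hypothetical); the parsing itself uses only the standard library.

```python
# Sketch: extracting the sender's IP-Address from an email's Received headers.
# The raw message is a made-up example; real headers vary by provider.
import re
from email import message_from_string

raw = """\
Received: from mail.example.org (mail.example.org [198.51.100.7])
    by mx.freemail.example; Fri, 11 Nov 2011 12:00:00 -0800
Received: from [203.0.113.25] by webmail.freemail.example via HTTP;
    Fri, 11 Nov 2011 11:11:11 PST
From: alice@freemail.example
To: bob@example.org
Subject: hello

body text
"""

msg = message_from_string(raw)
# Each mail-server prepends its own Received header, so the LAST one
# usually names the machine the mail originated from.
hops = msg.get_all("Received")
origin = hops[-1]
ip = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", origin).group(1)
print(ip)  # -> 203.0.113.25
```

This is precisely the lookup step from Phase III of the Petraeus case above: once the originating IP-Address is extracted, a geolocation database or ISP subpoena turns it into a place and a name.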
Numerous parties with that kind of information exist, most of them unknown to the public. There is very little reporting about the methods used by data traders and marketers, or how they compile vast databases of user information and make it available to paying customers. This is only natural: too much attention on the factual non-existence of anonymity online would only result in users taking the protection of their privacy back into their own hands – methods to do so exist, as we will explore in Part V of this series – instead of misguidedly trusting it to the Internet itself. Only one conclusion can remain: The Internet provides no anonymity whatsoever, unless defensive technologies are employed by individual users. Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of what anonymity is are explored. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.]" } , { "href": "/2012/11/news-chrome-adds-do-not-track-header/", "title": "News: Chrome adds Do Not Track header", "tags": [], "content": "[Chrome has added the Do Not Track (DNT) header: DNT was added to the Chrome 23 release. Mozilla’s Firefox browser implemented the feature in June 2011. The upcoming release of Internet Explorer 10 (IE10) will enable DNT by default. The Apache webserver was recently updated to ignore DNT in IE10. Yahoo recently said it will also ignore DNT in IE10.
Source: Ars Technica]" } , { "href": "/2012/11/how-to-choose-a-secure-password/", "title": "How to choose a secure password", "tags": ["digital tradecraft", "encryption"], "content": "[This article explains how to choose a secure password. For example, this is necessary to secure encrypted data or private keys against brute-force attacks. An introduction to encryption algorithms is given in the corresponding primer (brute-force attacks are also explained there). You should never use the same password for multiple purposes. It is fine to use the built-in password manager of the Firefox web browser to store your website passwords, but only if they are secured properly (by using hard disk encryption with a strong password as described below and the master-password feature). Theoretical consideration of password lengths To consider the information content of a password one first has to consider the underlying alphabet. Let’s assume we use the 26 letter Latin alphabet in upper- and lowercase plus the 10 decimal numbers 0-9 which gives us 62 characters in total. If we add some special characters like ‘!’ or ‘?’ we get more than 64 characters in total. For simplicity, let’s assume that we have an alphabet of exactly 64 characters. 64 = 2^6, which means that each character from this alphabet contains 6 bits of information, if the corresponding password is chosen randomly. If the password is taken from a dictionary the information content is vastly lower. For example, let’s compare a 10 character password chosen randomly with a 10 character password taken from a dictionary containing 4096 words. In the former case we have 60 bits of information. That is 2^60 ≈ 1.15 * 10^18 possible passwords. In the latter case we just have 12 bits of information: 2^12 = 4096 possible passwords. That is, in the former case one has to test ~2.81 * 10^14 times more combinations than in the latter case in order to guess the password. In conclusion this means that completely random passwords are best.
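The entropy comparison above, together with the brute-force timing worked out in the following paragraphs, fits in a few lines of Python (the 64-character alphabet, 4096-word dictionary and 10^15 guesses per second are the article's own assumptions):

```python
# The article's password arithmetic in executable form.
import math

random_bits = 10 * math.log2(64)     # 10 random chars, 6 bits each = 60 bits
dict_bits = math.log2(4096)          # one of 4096 dictionary words = 12 bits

random_space = 2 ** 60               # ≈ 1.15 * 10^18 possible passwords
dict_space = 2 ** 12                 # 4096 possible passwords
factor = random_space // dict_space  # ≈ 2.81 * 10^14 times more combinations

# Crack time for 20 random characters (120 bits) at 10^15 guesses per second:
years = 2 ** 120 / 1e15 / (3600 * 24 * 365.25)   # ≈ 4.2 * 10^13 years
```

Rerunning the last line with 1e30 guesses per second (the projected future attacker) reproduces the "about 2 weeks" figure, which is why the article's 30-character recommendation builds in such a large margin.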
Passwords contained completely or in large part in dictionaries are not secure against brute-force attacks! Capabilities of brute-force attacks Computing power is measured in FLOPS (floating-point operations per second). Current supercomputers have computing power in the range of petaFLOPS: 1 petaFLOPS = 10^15 FLOPS. It is therefore safe to assume that the current capabilities of sophisticated brute-force attacks are up to 10^15 passwords per second. Moore’s law states that the number of transistors on integrated circuits (and with it the computing power) doubles roughly every 18 months to two years. If we are optimistic and assume a doubling every 18 months, we get an increase of computing power by a factor of 1000 every 15 years (2^10 = 1024 ≈ 10^3). Passwords usually do not only have to be secure now, but until the end of one’s life. If we take 75 years, one has to factor in a 10^15 improvement in brute-force capabilities! Therefore, we want to make sure that our passwords are secure against brute-force attacks of up to 10^30 passwords per second. What do these capabilities mean in terms of actual time? The dictionary password mentioned above stands no chance against a current supercomputer with a brute-force capability of 10^15 passwords per second; it is broken in less than a second. All 10-character passwords (from an alphabet as described above) can be tried with such a computer in less than 20 minutes. Therefore, such a password is also not safe. To break a 20-character random password with a current supercomputer would take at most about 4.2 × 10^13 years, but with the 10^30-passwords-per-second supercomputer of the future it would take only about two weeks. It is always better to err on the side of safety, and therefore we recommend random passwords with a minimum length of 30 characters. Practical hints For passwords used online a good approach is to generate a long random password for each website and store it securely in the browser. 
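The time estimates above can be reproduced with a short sketch (the function name and constants are chosen here for illustration; as derived earlier, each random character contributes 6 bits):

```python
def crack_seconds(bits: float, guesses_per_second: float) -> float:
    # Worst-case time to exhaust a keyspace of 2**bits equally likely passwords.
    return 2.0 ** bits / guesses_per_second

SECONDS_PER_YEAR = 365.25 * 24 * 3600
SECONDS_PER_DAY = 24 * 3600

# A 20-character random password has 20 * 6 = 120 bits.
years_now = crack_seconds(120, 1e15) / SECONDS_PER_YEAR
days_future = crack_seconds(120, 1e30) / SECONDS_PER_DAY

print(years_now)    # ≈ 4.2e13 years against today's supercomputers
print(days_future)  # ≈ 15 days against the assumed future attacker
```

The same sketch confirms the 10-character case: 2^60 guesses at 10^15 per second is about 1150 seconds, i.e. under 20 minutes.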
For example, you can generate such a password with the following two tools: Use JavaScript to generate the password for you. The code runs entirely in your browser; our webserver never sees the password. Drag & drop the following bookmarklet to the bookmarks bar of your browser to easily generate passwords in the future:
javascript:(function(){var set='ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!?',pass='';for(var i=0;i<32;i++){pass+=set.charAt(Math.floor(Math.random()*set.length));}window.alert('Random password:\n'+pass+'\n');})();
If you are on a UNIX system you can use this shell script to generate passwords for you:
#!/bin/sh -e
head /dev/urandom | uuencode -m - | sed -n 2p | cut -c1-${1:-32}
Of course, that means that the hard disk where the passwords are stored needs to be encrypted securely. We recommend that you learn a 30-character random password for your hard disk encryption; it is not as hard as many imagine. It takes only about 30 minutes to learn a random password by typing it in repeatedly in order to put it into muscle memory. But if you cannot remember a 30-character random password, the following approach to generate and remember pseudo-random passwords could work for you. You make up an unusual sentence which should contain special characters and numbers, but which you can easily remember. For example: My favorite café has 32 different pictures on the wall. Among them are 3 with dogs, 5 with cats, and 12 portraits! ‘May I have your number?’, I asked the waitress and I got (703) 482-0623 :(. If you have such a sentence, you abbreviate it by using only the first characters, the numbers, and the special characters. In our example you’ll get a password like this: Mfch32potw.Ata3wd,5wc,a12p!’MIhyn?’,IatwaIg(703)482-0623:(. Such a password is much better than a password containing words from a dictionary, although it is not completely random (therefore such a password must be longer). Just make up some simple story or sentence which you can easily remember. 
For example, you can tell yourself a story about the stuff contained in your childhood room or some other memory which you can easily recall. This approach should give you a start for choosing good passwords. It is absolutely crucial to choose a good one for your disk encryption; the best encryption algorithms are worthless if you use weak passwords! Do not underestimate the speed of current processors and the specialized password-cracking hardware which will be available in the years to come! Summary Long random passwords are the best defence against brute-force attacks. Generate a random 30-character password for your disk encryption and learn it by repeated typing. Generate a separate password for each website that needs one and store it in the browser.]" } , { "href": "/2012/11/concept-anonymity-online-and-offline-part-i/", "title": "Anonymity - Online and Offline - Part I", "tags": ["anonymity"], "content": "[This series explains the theory of Anonymity and what factors influence anonymity in online communication and offline interaction. The goal is to provide the necessary background information to make educated judgements on the effectiveness of methods to increase anonymity. Let us start out with a definition of the term and then explore its implications: Anonymity is the degree of uncertainty in relating a person to an event, action or property. Anonymity is a problem of knowledge: it deals with the certainty or assurance an observer has for assigning information to a person – such as a person’s connection to an event or action, or a property of a person such as his name. The assigning of information to a person is called “attribution“, and the information in question is the “attribute“. The assurance of attribution is expressed in the “anonymity set“, the group of potential candidates to each of which the attribute could be assigned [The members of an anonymity set are also called “elements of an anonymity set”. 
We chose the term “member” here because it is less de-humanizing, though technically improper]. The bigger the anonymity set, the less certain an attribution is and the more anonymity exists for its members. In the process of attribution the observer tries to decrease the anonymity set by applying deductive and inductive reasoning and by discovering properties that make certain members better or worse candidates for assigning the attribute. A property that makes a member of the set a more likely candidate for attribution is called an “unpooling property“, while a property that makes the member a less likely candidate is called a “pooling property“. It is important to keep in mind that any new information learned by the observer can influence the make-up of the anonymity set and thus the attribution – even when this process of learning and applying spans considerable amounts of time. Each change in knowledge about any member of the anonymity set also changes the certainty of attribution for all other members. The discovery of a pooling property of one member increases the likelihood of attribution to any other member – the discovery of an unpooling property of one member decreases the likelihood of attribution to any other member. This way the anonymity set is repeatedly shrunk until the observer can assign the attribute to a person with satisfactory certainty. Attribution has become “plausible“. The method of reaching attribution by repeatedly decreasing the anonymity set is called “drill-down“. The above shows that anonymity is never absolute; there is always a probability of attribution for each member of the anonymity set. Also, attribution is rarely absolute and strongly depends on the certainty required for the case in question. Even in such crucial instances as criminal investigations, attribution is never achieved with 100% certainty, but only with “sufficient” plausibility. 
Example Let us apply the above to a little story to make it easier to understand: A late evening in winter, a family – mother Hillary, father Mitt, son Ron and daughter Sarah – sits in the living room eating various cookies from a jar. When only a single peanut cookie is left in the jar, the mother leaves the room saying “Do not eat that cookie, I want to give it to our neighbor.” After a few minutes the mother comes back and finds the cookie jar empty. Mother Hillary asks: “Who took the peanut cookie from the jar?” Hillary has become the observer, the attribute to assign is “took the cookie from the jar”. The anonymity set is father Mitt, son Ron and daughter Sarah. Each of them is equally likely to be the thief: a 1⁄3 probability for each. In the first round of drill-down, Hillary notes that all three suspects have cookie crumbs all over them. This does not make any of them a more likely thief than the others – the crumbs are a pooling property shared by all. The probability for each remains at 1⁄3. Second, she notices that the hands of father Mitt are far too large to fit into the cookie jar; this makes him less likely to be the thief (a pooling property) but does not exclude him entirely. Hillary changes the probabilities to 1⁄5 for Mitt, 2⁄5 for Sarah, and 2⁄5 for Ron. Third, she remembers that her daughter Sarah is severely allergic to peanuts (a strong pooling property) while her son Ron likes peanuts a lot (an unpooling property). Due to Sarah’s allergy, she is excluded from the anonymity set, and Ron’s probability of being the thief is increased: Mitt 1⁄3, Ron 2⁄3, Sarah 0. Finally, Hillary is pretty certain that her husband Mitt does not want any trouble with her, again reducing the probability of him being the thief: Mitt 1⁄4, Ron 3⁄4. Mother Hillary now grumbles at her son Ron, being assured enough that he was the thief. 
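Hillary’s drill-down can be read as repeated Bayesian updating: multiply each suspect’s prior probability by a likelihood weight and renormalize. A minimal sketch (the weight values are assumptions chosen purely to reproduce the numbers in the story):

```python
def update(probs: dict, weights: dict) -> dict:
    # Multiply each prior by its likelihood weight, then renormalize
    # so the anonymity set's probabilities sum to 1 again.
    posterior = {name: p * weights.get(name, 1.0) for name, p in probs.items()}
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

probs = {"Mitt": 1/3, "Ron": 1/3, "Sarah": 1/3}  # crumbs on everyone: no change
probs = update(probs, {"Mitt": 0.5})   # hands too large (pooling property)
probs = update(probs, {"Sarah": 0.0})  # peanut allergy excludes Sarah
probs = update(probs, {"Mitt": 2/3})   # Mitt avoids trouble with Hillary

print(probs)  # Mitt ≈ 0.25, Ron ≈ 0.75, Sarah 0.0
```

Note how each discovered property reshapes the whole set: Sarah’s exclusion alone raises Ron from 2⁄5 to 2⁄3 without any new information about Ron himself.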
Of course, this line of reasoning does not guarantee that Ron is the culprit, but the anonymity set was reduced sufficiently for Hillary to risk having to apologize to her son in the unlikely case that she was mistaken. After showing the theory of anonymity at work in an example, we will explore more complex and realistic applications in the next parts of this series – and what Thomas Bayes has to do with it. Important concepts: Anonymity is the degree of uncertainty in relating a person to an event, action or property (the attributes). The opposite of Anonymity is attribution. The measure of Anonymity is the size of the anonymity set. Anonymity of a person is reduced through the discovery of unpooling properties for that person and the discovery of pooling properties of other members of the anonymity set. Anonymity and attribution are not absolutes but relative probabilities. Things to come… Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.]" } , { "href": "/2012/11/news-dhs-will-scan-payment-cards-at-borders/", "title": "News: DHS will scan payment cards at borders", "tags": ["dhs", "fincen"], "content": "[The U.S. Department of Homeland Security (DHS) will scan payment cards at borders: Travelers leaving or entering the U.S. have to declare aggregated cash and other monetary instruments exceeding $10,000. Under a proposed amendment to the Bank Secrecy Act, FinCEN (Financial Crimes Enforcement Network) will also add the value of prepaid cards to this. 
The DHS is developing advanced handheld card readers to differentiate between a credit card, a debit card, and a prepaid card. Credit cards and debit cards need not be declared. Intangible Bitcoin brain wallets remain safe. Source: Forbes]" } , { "href": "/2012/11/secure-and-professional-bitcoin-otc-exchanges/", "title": "Secure and professional Bitcoin OTC exchanges", "tags": ["bitcoin", "otc", "tradecraft"], "content": "[The article Necessary conditions for the long-term success of Bitcoin has shown why widespread availability of over-the-counter (OTC) Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. This article will explain how to exchange Bitcoin OTC securely and professionally. It should be of interest to Bitcoin users who want to get their coins anonymously from OTC exchangers and to people who want to earn a second income as Bitcoin OTC exchangers in the counter-economy. If you are dealing Bitcoin on the OTC market you have to consider two kinds of enemies: the state and evil customers (for example, fraudsters). To deal securely you have to mitigate the corresponding risks. That is, you have to drive up the cost of a successful attack and make it unlikely. The techniques for risk mitigation can be divided into two categories: secure IT infrastructure (privacy FTW!) and tradecraft. Secure IT for OTC exchangers OTC exchanges should be arranged completely online. To do that, you should get your secure IT infrastructure and privacy basics down: use email encryption (GnuPG), use hard disk encryption, use IP address anonymization (for example, with I2P/Tor/VPN), and use (multiple) pseudonyms. To arrange the OTC deal online you can use websites like bitcoin-otc.com or localbitcoins.com. You should agree on the price and the amount beforehand and limit the transaction size. In a single transaction, deal only what you can afford to lose. 
The actual face-to-face meeting only finalizes the deal; there must be no deviation from the agreement! If you deviate from the agreement, the probability of fraud rises sharply. The digital arrangement of OTC exchanges protects against the state: you produce less evidence and you make it harder to prove how much Bitcoin you dealt in the past. If you exchange a lot, make sure you are using multiple pseudonyms. Tradecraft for OTC exchangers Tradecraft is skill acquired through experience in a (typically clandestine) trade. In the Bitcoin OTC business tradecraft is about methods, customs, and protocols to secure and conceal your transactions. In general, you should always meet in public places during the day (for example, a café) to reduce the probability that you’ll get robbed. Meeting during the day decreases the probability of robbery, but keep in mind that it makes surveillance easier. During a transaction, the money is kept or placed on the table until the Bitcoin are transferred. Your Bitcoin client should only have the needed amount of Bitcoin on it. Once you have received the money, you have to make sure that you do not leave the protected public place with the money, to avoid getting robbed after the deal! There are several methods to do that: Brush: You give the money to a second person unnoticed (beware of the toilet, you can easily get robbed there). Such a second person can also spot problems in advance and warn you if necessary. Drop/cache: Use a secure place to store the money. For example, a secure mailbox or door where you could put the money. Deposit the money: Go directly to the bank and deposit the money. In that case, remember your cash card, which a robber could use to force you to withdraw the money. You could send the card back by mail or, if you are already being followed, type in the wrong PIN three times in a row to get rid of the card at the ATM. Safe deposit box: Deposit the money there. 
But make sure that you need personal identification to access the box, rendering robbery attempts useless. Next level OTC If you follow the advice given above you are already in pretty good shape. But if you exchange a lot (in terms of amount or number of transactions), you should take your OTC game to the next level. The best way to do that is to deal in teams of at least two people. If you have at least one partner you can use the brush technique described above to get rid of the money you receive after the deal. You can also separate the buying of Bitcoin from the selling of Bitcoin: one team member exclusively sells Bitcoin and the other one exclusively buys them. In most jurisdictions, if Bitcoin is not considered a currency this is just simple selling/buying and not money changing. If Bitcoin is considered a currency, you could use goods of exchange (gold or silver) instead of cash for exchanges. Be professional A professional dealer has professional prices, because professionalism has its cost. Make sure that you don’t cut corners to lower your costs; this will defeat you in the long term. If necessary, explain the benefits of a professional dealer to your customers. In my opinion, if your fee is significantly less than 5%, you are either dealing very large amounts or you are fooling yourself about your security measures. Risks rise with repetition and quantity; make sure you mitigate them appropriately. Let’s deal Bitcoin in the OTC market securely and professionally — for fun and profit! The content of this article was presented at the 2012 Bitcoin conference in London [slides].]" } , { "href": "/2012/11/necessary-conditions-for-the-long-term-success-of-bitcoin/", "title": "Necessary conditions for the long-term success of Bitcoin", "tags": ["agorism", "bitcoin", "counter-economy", "crypto-anarchy", "otc", "silk road"], "content": "[In this article I present my answer to the question: What does Bitcoin need to succeed in the long run? 
Before we consider the question, let’s put Bitcoin into the wider context of the counter-economy. The Counter-Economy The counter-economy (a.k.a. the informal economy) in general is all economic activity which is not fully regulated, taxed, or controlled by the state. In its simplest form it is a lemonade stand which operates without a license; it partly includes a mom-and-pop store which optimizes its taxes by running some of its business off the books; and in its dark corners you can find completely separate marketplaces like Silk Road. The counter-economy is no small feat: if you combine all black markets of the world together you get a 10 trillion US$ economy, second only to the United States of America. In many developing countries it already comprises large parts of the economy, and it is growing faster than the officially recognized gross domestic product (GDP) [The Shadow Superpower, Foreign Policy, October 28, 2011]. The term counter-economy in a more specialized meaning is also used in Crypto-Anarchy and Agorism. Agorism is revolutionary market anarchism. In a market anarchist society, law and security would be provided by market actors instead of political institutions. Agorists recognize that this situation cannot be brought about through political reform. Instead, it will arise as a result of market processes [agorism.info]. A good introduction to Crypto-Anarchy is the following quote from the Crypto Anarchist Manifesto published by Timothy C. May in 1992: Computer technology is on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner. […] These developments will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation. 
One has to note that Crypto-Anarchy is not a philosophical utopia, but the attempt to shape life and society in the presence of disruptive technologies. The corresponding technologies have already arrived and we are facing a great divide: we will either live in the total surveillance state or in a Crypto-Anarchist libertopia. So what role does Bitcoin play in this context? A free society needs a free market and a free market needs sound money. Bitcoin is money with good properties: it is pseudonymous, there are no frozen accounts in the Bitcoin system, it doesn’t allow charge-backs (a big problem for merchants accepting credit cards), and it is very cheap and fast to transfer. As such, the use of Bitcoin is a huge advantage compared to a barter or cash-only economy, because developed economies need money transfer, at the very least for B2B transactions. Three hypotheses for the long-term success of Bitcoin So what does Bitcoin need to succeed in the long run? In short, it needs no state, no banks, and OTC. The three hypotheses in more detail: The Bitcoin community should not try to get legality for Bitcoin; we should not ask the state to resolve conflicts in the community. The Bitcoin community should not focus on interoperability with the traditional banking system. Widespread availability of over-the-counter (OTC) Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. Let me explain the reasoning behind these hypotheses. Public choice theory in general, and plain common sense, state that people will do what is in their self-interest. This includes politicians, bankers, and cops. It is very important to fully grasp this simple truth: People do what is in their interest and you cannot assume that your interests equal their interests. They are usually not the same. There is no such thing as people working for the common good. 
Even people who are supposedly helping others selflessly are actually helping them in order to live in accordance with their own value system. No state The state is a regional monopoly of force which extracts resources (usually money) from its citizens to (a.) mainly finance itself, its wars and its surveillance apparatus and (b.) use the rest to provide so-called services which could be provided better and cheaper by the free market. These services are usually used as a justification of the existence of the state, but the real reason for its existence is the easy money that the recipients of state money can get. The money is taken away from the productive citizens via taxation and the monopoly of the money supply (via inflation your money becomes worth less). Of the two, the latter strategy is better, because it is harder for the ignorant masses to notice. If you combine the institution of the state and its inherent interests with the conclusions from public choice theory and the Bitcoin system, you are looking at the potential for a lot of trouble. Bitcoin prevents inflation (there is no inflation in Bitcoin once all coins have been mined) and helps tax evasion (it is hard to regulate and control). It is potentially life-threatening to the state, because it strikes at the root of state financing. Therefore it follows that the state will fight Bitcoin heavily once it realizes that. In my opinion it is absolutely ludicrous to think that the state will embrace Bitcoin. The most likely scenario is that the state will try to close down Bitcoin altogether. If that is not possible, the state will try to change Bitcoin in a way that allows know-your-customer (KYC) regulations to be implemented more easily in the system. 
Just wait and see what kind of discussions we will get in the Bitcoin community once the state cracks down harder on Bitcoin exchangers and businesses, and actors like the Bitcoin Foundation try to remedy the situation by working together with state agencies to make Bitcoin more regulatorily compliant. In my opinion, this shows why the Bitcoin community should not try to get legality for Bitcoin and should not ask the state to resolve conflicts in the community. All this will do is drive more unwanted attention to the Bitcoin ecosystem. The self-interest of the state prevents legality and regulatory acceptance of Bitcoin in its current form. History lesson: e-gold E-gold provides an important history lesson of the “Those who cannot remember the past are condemned to repeat it” (George Santayana) category. E-gold was a digital gold currency which existed between 1996 and 2009 and allowed the instant transfer of gold ownership. In 2008 the company reported more than 5 million accounts. A flourishing ecosystem existed around e-gold. In the end, exchangers were attacked and closed down due to regulatory problems. E-gold itself was indicted for money laundering and the operation of an unlicensed money transmitting business. The indictment happened although e-gold itself had tried to get the corresponding license earlier and was told that it was not necessary. Does this sound similar to the situation with Bitcoin right now? The game is rigged, folks! You cannot win if you are playing by the rules. No banks Banks are major beneficiaries of fractional-reserve banking and can borrow cheaply from the central banks. They operate in one of the most heavily regulated industries, which results in huge barriers to entry and not much competition. This leads to large profits, for example from transaction and credit card fees. 
Financial service providers like PayPal, Western Union and Money Gram also have very large fees, because the regulatory hurdles reduce the amount of competition and result in large costs. Since low-income foreign workers who send money home are the largest customer base for such services, the high fees are effectively a tax on the poor. Bitcoin threatens these profits and poses a regulatory risk. Therefore, Bitcoin exchangers will be attacked by competing financial institutions (remember TradeHill as such an example). A widely successful Bitcoin system is against the self-interests of the established financial industry, and it makes no sense for them to deal with the corresponding regulatory challenges in the long run. If the Bitcoin economy depends on the traditional banking system it is doomed to fail. Just imagine what would happen to the Bitcoin economy if Mt.Gox, which is currently responsible for about 80% of all Bitcoin exchanges, suddenly had to close down. In my opinion, this shows the second hypothesis: The Bitcoin community should not focus on interoperability with the traditional banking system. The case for OTC We have now established that from a self-interest standpoint the state and the traditional financial industry are naturally opposed to Bitcoin. To ensure the long-term stability and success of the Bitcoin economy we need a completely separate system of exchange, a network of over-the-counter (OTC) exchangers. An OTC exchange happens when two people meet face-to-face, trading Bitcoin for cash (or gold/silver). OTC does not mean sending cash in the mail or making wire transfers. Such a widespread network of OTC exchangers is the system most resilient against state attacks, because it is heavily distributed and the banking system is skipped entirely. This reasoning supports the third hypothesis: Widespread availability of OTC Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. 
The how-to Secure and professional Bitcoin OTC exchanges gives practical advice on OTC. The content of this article was presented at the 2012 Bitcoin conference in London [slides].]" } , { "href": "/2012/11/the-treasure-which-is-privacy/", "title": "The Treasure which is Privacy", "tags": ["anonymity", "privacy"], "content": "[The Philosophy of Privacy Extremism emphasizes that privacy should be maintained in all situations; that if in question, privacy should be given preference, unless sufficient arguments to the contrary apply to the specific situation. Privacy serves as a necessary condition for engaging in meaningful and truthful interpersonal relationships. Furthermore, privacy is a necessary condition under which a person can develop a self and embrace individual responsibility for decisions and the actions that result from them. The denial of privacy, by contrast, establishes and maintains a lack and loss of esteem, respect and value in and for things and other persons. Privacy should therefore be the strong standard for personal behavior, normative for those who strive towards positive personal development. The Treasure which is Privacy The first response to someone who makes an effort to protect his privacy is often “I have nothing to hide because I have nothing to fear” – usually accompanied by an expression of righteous pride or the blissful presentation of carelessness. As with most routine responses that have become maxims of contemporary society and proverbs uttered in reply to trigger words, this statement is more informative about the speaker than about the addressee or the subject of discussion. More often than not its underlying meaning should be rephrased to read “I am uneasy, maybe even afraid, around people that hide something”. As such it carries the implied request to anyone hearing it, that they shall stop covering and hiding things to relieve the speaker of his uneasiness. 
But even when taken at face value, the above sentence communicates that it is the lack of fear that is the speaker’s justification for not protecting his privacy. Apart from the simple rejection of this statement as being false in the light of existing and relevant threats, and the reference to the blissfulness of ignorance, it is the exclusiveness of fear as the proposed reason for privacy that warrants consideration. The reference to fear in this context should first be understood as an instrument of rhetoric rather than an adequate choice of words in balanced and clearheaded reasoning. Fear refers to the emotional response to existential danger and implies the lack or loss of courage to confront the danger. The sentence under analysis should thus be rephrased to read “You hide things because you lack courage in the presence of an imagined existential threat.” It is therefore a double accusation of both cowardice and delusion. Again, it is not the focus of this analysis to show that protecting privacy, and admitting to doing so, requires a bit more courage than repeating common proverbs, or that certain dangers exist that can be effectively answered by privacy. Nor does it need emphasis that those who protect the privacy of others often do so in the face of opponents that go a long way to ruin the names, property, freedom and sometimes even health and life of those courageous guards of privacy. Instead it should be pointed out that there are far more reasons to protect one’s privacy, and that of others, than fear of losing freedom or life or even good reputation. It is interesting to note that an old synonym for ‘fear’ could be awe, admiration or astonishment, even respect. Worded this way, one might read the above sentence as “I hide nothing, because I admire and respect nothing.” This way it becomes clear that the denial of privacy is often nothing but a lack of things that are valued, and the demand that others should not value something themselves. 
It thus contains the claim that nothing should be special and set apart. Which brings us to the original meaning of the word ‘private’. In Latin it refers to persons and things that were set apart from what would be available to, subordinate to and used by all persons – the public. Thus giving up one’s privacy, as in the sentence we discuss, entails nothing else but the transformation of the speaker into a not particularly important and indistinguishable fragment of the mass. If the speaker really fears nothing, this primarily means he does not fear becoming a nothingness in the grey mass – just a grain of dust in the crowd. It is safe to say, then, that the speaker does not value and respect himself as an individual human person, or that he cowardly fears being recognized as one. Leaving the analysis of the original statement, one should now focus on the negation of the privacy opponent’s reply while keeping its completed meaning in mind: “Because I value and respect some things, I hide some things.” Three areas shall serve as examples of preserving value through hiding: complex minority opinions, relationships between persons, and the human person itself. Complex opinions and bodies of knowledge that are valued highly by their bearers are often only communicated under strict conditions to prevent misunderstanding, misrepresentation, confusion and disintegration. This is especially useful if the opinion is only held by a minority or if the potential audience lacks the necessary context of knowledge to integrate and consider the new information. The strict conditions under which the information will be communicated serve herein as the boundary between public and private. The more complex, valuable and different from general knowledge the new information is, the stricter the conditions of communicating it become. This can be seen in various areas. 
Personal political or moral opinions, especially if they are held only by a minority, will often not be communicated in situations that only allow superficial or time-constrained conversation. Such situations do not allow the speaker to present and argue for their position, and thus risk the information being misunderstood and misrepresented later. The consequences of this disintegration of information can be witnessed in the effects of hearsay concerning minority groups and opinions, leading to widespread false myths that often cannot be corrected afterwards because they have become part of common knowledge. Thus it is often favorable to conceal personal opinion and deprive the public of correct information if otherwise the reinforcement of false information or the support of slander are likely. The quality of public and political debate as well as the celebrity and gossip culture serve as evidence for this. Numerous further examples of the protection of ideas through hiding exist in history and shall only be mentioned for further reference: Pythagoreanism and Platonism, the Apologists of early Christianity, the Orthodox Church liturgy, the natural science and political societies of the Enlightenment including Bacon and Newton as members, Judaism, early Socialism. Privacy in this regard serves to preserve the integrity, and often survival, of information, ideas and opinions. Another area of interest is privacy and the use of hiding for the sake of other persons. To understand what role privacy plays in the context of relationships between humans it is necessary to be aware of what communication is. Communication is any act of a sender to convey information to a receiver. This involves forming signs – distinguishable and perceivable features – into signals – the message to be transmitted. 
The choice of signs and signals by the sender and their interpretation by the receiver depend strongly on the context: what both parties perceive about each other, themselves and their environment. Another part of this context is the estimation of how difficult a sign is to produce, which influences how truthful and intentional a signal (message) is perceived to be. A proverbial example for this is “to preach water and drink wine”. One immediately understands that abstaining from wine – which is more costly than consuming it – increases the credibility of the message (and resolves the otherwise apparent contradiction). Maintaining privacy, in its various forms of hiding, concealing and silence, is such an act of communication, a sign that carries a signal. The sign of privacy, as it shall be called for the sake of clarity, can carry a variety of signals that depend on the context of the communication, and it can be intended for a variety of recipients. In itself privacy is a signal that discriminates between various degrees of relationships, excluding some potential receivers from other intended receivers. It thus communicates which kinds of relationship the sender intends to have, which in turn communicates the sender's evaluation of the receiver. In blunt words, it separates the receivers into special and common people in the eyes of the sender. The hijab is an example which illustrates this well. Hijab refers to a veil worn by many Muslim women as soon as they reach marriageable age. It is always worn in public and only taken off if no non-related men are present, such as in exclusively female meetings or in the family circle. Her husband will be the only non-related man who will see her hair, thus keeping her hair private. 
The woman, if she chooses to wear the hijab, hereby communicates to her husband and all other men that she chooses to have an exclusive intimate relationship only with her husband and that she values her husband as being of especially high value to her. It is a pledge of allegiance to her husband, and a separation of herself from availability to other men. As can be seen in this example, hiding becomes a tool to communicate a value perception and the status of a relationship in a discriminatory way. Similar signs exist in western cultures as well. For example, the revelation of the family's secret recipe to a child's fiancé or fiancée serves as a sign of acceptance and inclusion into the family. Similarly, some topics of conversation are usually reserved for the close relationship between couples, or that of good friends. This is hardly a sign of uptightness; rather, it is a tool to show and maintain the depth of a special and exclusive relationship that is built on mutually holding the other in high esteem. The opposite, divulging information indiscriminately, thus communicates that others are not held in high esteem and that the communicating party is unwilling or unable to come to different evaluations of others. Likewise the sharing of information with the public, if this information was gained within a special relationship, should rightly be viewed as an act of betrayal, since it communicates that the thus damaged person is held in lower regard than the receiving masses, even if assured of the opposite. This hints at the reciprocity of these intimate relationships. Communicating information that is viewed as belonging to the private domain of friendship, or another kind of deep and special relationship, will also signal to the receiver that he should answer in an equally private manner, so as to return the esteem granted to him as well as to save the speaker from embarrassment. 
It is thus a matter of courtesy not to speak about private matters indiscriminately, since it puts the receiver into a potentially awkward situation. However, this does not only apply to situations that imply reciprocity. It speaks of equal disrespect of another person to make them part of an unasked-for communication of subjects that are hurtful, unpleasant or put the recipient into a situation where he is challenged to act – if only to escape his status as a recipient. Instead, a communication that considers the reaction of others by using means of privacy signals to both intended and accidental recipients that the speaker harbors respect for them. This is even more true when the subject constitutes a tempting or harmful one for the recipient. It shows utter disrespect to speak of the exquisite taste and warm feeling in the throat when drinking an alcoholic beverage while a known recovering alcoholic is addressed or present. It is equally unwise to flaunt riches and leave them lying around openly in the house, since this tempts a struggling housekeeper to steal on impulse, or to communicate without regard for potentially causing conflicts of interest in the recipients. Instead of hiding nothing, it is the hiding of information and actions that is grounded in valuing and caring for others and that truthfully communicates respect and high esteem. To conclude the discussion of privacy for the sake of others, one should also consider the effects of actions on observers. As mentioned before, the interpretation of signs as signals depends, among other things, on the receiver's perception of the sender. This becomes relevant for the question of privacy especially if the sender is perceived as a role model or a bad example. Here the behavior is a sign easily interpreted by the observer as sanctioning the action, or proscribing it, if the action is not considered separately from the sender. 
Examples of this can be seen when bad actions of public figures are used as justification for one's own actions, when otherwise laudable behavior is viewed with suspicion when associated with persons of disgrace, or when people imitate celebrities even in their failures and bad judgement. For additional consideration of privacy for the sake of others, an old book shall be mentioned as reference: “Ueber den Umgang mit Menschen” by Freiherr von Knigge. The last area to examine here as an example of preserving value through hiding is the human person itself. At the core of this matter lies the question of what makes a person a “self” instead of “an-other”, and how this self can refer to itself over time, as in “I myself went to the park yesterday”. What is this “I” or “self” we refer to, and how does it come to be what it is instead of being something else? There is no current consensus on how to answer these questions, nor is it the task of this text to present and weigh the different views, or to fully develop a theory of personhood on its own. Instead it will touch on the process of the change of a person. How has a person become what it is now, and how will it become what it will be in the future? How does this process differentiate the self from another? The popular answer is that genes, upbringing and society are the shapers of persons, in different proportions depending on whom one asks. Nevertheless individuals are treated as moral agents, acting by decision and responsible for the decisions made. It is a person who is punished for a crime, and not schools, parents, evolution or society. It is persons who are persuaded by others, asked to consider moral and ethical categories, respected or disgraced for individual actions. Clearly it is understood by most that a person is not shaped exclusively by that which is not part of him, but also by himself. 
Certainly genes, upbringing, society and the situational environment are influences, but it is also the self that forms the self. This self-forming takes place with every decision made, changing the status, the shape of oneself, the individual path of the person through life. Some might argue that every decision made is already and exclusively determined by the previous state of the person and its environment, and that as such no real decision is made because there is no choice but only the effect of the cause which is the state of the universe. Instead of refuting the deterministic and probabilistic denials of free will as being ultimately self-contradictory, it shall be asserted that free will – non-deterministic and non-probabilistic – is a required fact if rationality, ethics and morality – all three – are in any way justifiable. However small free will, that hard-to-grasp grain that tips the scales of our decisions, might be, it plays the central role in the person becoming a Self. For this to be effectually true, the influence of free will in the person's decisions must be maximized so that it is will that dominates the decision in freedom. At this point privacy achieves its ultimate importance. Only in privacy can a decision be contemplated in separation from the influence of other persons, and the own person, the self, be actualized freely. Hiding in privacy removes the tainting of the decision through outside preselection of facts, outside censorship, the promise of reward and punishment by other humans, hubris, pride and shame. Here honesty towards one's self is possible. It is only through and in privacy that a potential equilibrium of choices can be discovered, just to be resolved through the action of the free will of the Self. If one is in any way determined to work on one's own self and aware of the responsibility this entails, then privacy in this regard must be maintained. 
Yet even in giving up the development of one's self, a choice has been made, with responsibility for it as its consequence – except that this choice is to be a product determined by others instead of a self. A disregard for maintaining privacy in this area thus equals utter disrespect for the Self one is, and the potential selves one could become. It is the denial and defiling of oneself as an individual person. In conclusion the proposition is that only in privacy does the “self of now” transcend itself to actualize “the self of the future” through every decision made, integrating the “self of the past” fully and becoming more of a Self by removing the influence of an Other. In passing it should be noted that the practice of hiding things because of their value, especially if it is the hiding of information about something, must be a subject of consideration as well. One cannot argue for using lies as the method of concealment, since this would often result in doing a disfavor to the thing valued and respected. Nor can a life of lies result in a positive development of the Self. Instead it is the concealing of information, without replacing it with a false statement presented as the whole truth, that should be chosen as a means. This, however, presents another problem: as much as the presence of a sign can be a signal, its absence can be one too. Indeed, it is the presence of some signs that can signal the meaning communicated by other signs. Selective privacy might as such communicate the content of what should have been concealed. For example, if one is asked for one's favorite color and presented with a series of potential answers, it is the denial of the incorrect answers and the silence towards the correct answer that communicates what was intended to remain hidden. It should thus be noted that the hiding of one thing necessitates the hiding of other things of the same context. 
As a means thereof it is preferable to keep silent instead of lying, as stated above. So far, the privacy opponent's reply “I have nothing to hide because I have nothing to fear” has been shown to be a rhetorical trap, or at least an insufficiently contemplated cultural maxim. It has also been shown that there exist good reasons to embrace privacy, hiding and concealment. However, this text cannot be complete without some short answers to those who identify privacy and secrecy as roots of evil in society that erode every social and political system and relationship. Their primary argument is that privacy encourages and facilitates all kinds of corruption and abuse of power. Furthermore they claim that privacy results in the disintegration of the interpersonal bonds that hold society together. To the first, two replies shall be given. For one, it has long been understood that abuse of power and corruption are systemic to power and delegation themselves, and that transparency and accountability are mere interventions to limit the spread of these flaws, not remedies at the root of the problem. Instead of attacking privacy as being the problem, one should think about alternative methods of cooperation and organization that are free of these negative systemic tendencies in themselves. On a more shallow note it should be pointed out that people active in public positions and offices have given up their status as private persons in exchange for being leaders and representatives of the public – the masses. Instead of developing themselves and their relationships they have chosen to become instruments of the public, or at least they pretend as much. How can such an argument against privacy then be used against the privacy of people who remain private instead of public? This appears to be fallacious. Towards their second argument, the “disintegration of interpersonal bonds that hold society together”, it should be understood both what “society” is and what “interpersonal bonds” may refer to. 
Society is not a collective of interdependent persons connected by shared emotional states and intimacy – that would be what is commonly referred to as “family”. Instead, society is the cooperative organization of persons that is held together by norms of interaction and a shared understanding of necessary and useful methods of cooperation. It is thus the actions toward society in the realm of society, and not the totality of actions and knowledge, that constitute these bonds in practice. Partaking in society is thus a voluntary, freely chosen and limited activity by each of its members for the purpose of cooperation with all others in society. Privacy only becomes erosive to societies that intend to regulate and organize even those individual activities that neither rely on nor influence all of society. These societies are commonly identified with Totalitarianism. Instead of relying on a bonding through a shared experience of weakness and lack of self, or directing society to be bound by the smallest – and lowest – common denominators, a society of privacy allows for the progression of all members to actualize higher potentials without replacing the individual person with the collective Other of society. Privacy thus nurtures societies that strive for improvement. This might even hold the potential for individual actors to integrate justifiable norms of social interaction into their Selves through independent contemplation and decisions instead of understanding these norms as being imposed by an Other. Does this hold the promise of social interaction becoming more reliable and truthful? Answering in the affirmative seems more justifiable than the negation. However, one warning against privacy is appropriate. Be it a personal lifestyle or a culture of privacy, both demand personal improvement from each partaking individual. This results from privacy allowing for, and supporting, discriminatory relationships and the decoupling from the influence of others. 
Privacy thus removes many opportunities to blame others and to excuse oneself in light of personal error. Nevertheless, privacy also allows for many justified second chances and true forgiveness. In summary it can be concluded that maintaining privacy and the hiding of things serves well in preserving and expressing the values one attributes to things and other persons. Furthermore privacy is a necessary condition for the continual development of the Self and the sustenance of truthful and honest interpersonal relationships by means of communicative discrimination. In turn, the denial of privacy must be recognized as unjustified and even harmful. The arguments presented for the allegedly negative impact of privacy have been found to be without merit, or even to support the strong use of privacy in society. The conclusion drawn is therefore that opposition to privacy, as in “I hide nothing because I have nothing to fear”, cannot be a default behavior. Instead the use and support of privacy in the form of “Because I value many things, therefore I hide many things” should be the standard, unless it clearly needs to be abandoned for specific situations, if at all.]" } , { "href": "/2012/11/encryption-algorithms-a-primer/", "title": "Encryption algorithms: a primer", "tags": ["algorithms", "encryption"], "content": "[Encryption algorithms are used to secure the content of communications and stored data. An algorithm in general is a recipe for calculations which can be performed automatically by a computer. An encryption algorithm (also called a cipher) encrypts a readable plaintext into an unreadable ciphertext. A cipher can usually also perform the reverse operation of decrypting an unreadable ciphertext into a readable plaintext. For encryption and decryption a cipher needs a key. The security of a cipher depends on the secrecy of the key used. 
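The encrypt/decrypt round trip with a shared key can be sketched with a deliberately insecure toy cipher – repeating-key XOR – purely for illustration; real symmetric ciphers such as AES are far more sophisticated:

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR every plaintext byte with the repeating key stream.
    # The same function both encrypts and decrypts, because
    # (b ^ k) ^ k == b -- the defining property of a symmetric cipher.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared secret"                    # must be known to both parties
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)   # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # same key reverses the operation
assert recovered == plaintext
```

The toy makes the dependence on key secrecy concrete: anyone holding the key can reverse the ciphertext, anyone without it cannot (in a real cipher).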
Two general categories of encryption algorithms exist. Symmetric encryption algorithms: the key used for encryption is the same as the key used for decryption. Asymmetric encryption algorithms: the key used for encryption differs from the key used for decryption. If you want to use a symmetric cipher for communication you face the key exchange problem: the key needs to be exchanged over a secure channel, otherwise the encryption will be useless. Because such a secure channel often does not exist, asymmetric encryption can be used to solve the problem. Asymmetric encryption (also called public-key encryption) uses key pairs consisting of a public key and a private key: something encrypted for a given public key can only be decrypted by the corresponding private key. The reverse operation is a digital signature: something encrypted (signed) by a private key can only be decrypted (verified) by the corresponding public key. Public-key encryption solves the key exchange problem, because the public keys can be exchanged via an insecure channel. But to prevent man-in-the-middle attacks it still needs to be verified that the public key has not been tampered with by a third party. In a man-in-the-middle attack an active eavesdropper makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The verification can be done by comparing the fingerprints of the public keys on a different channel (for example, on the phone) or by employing a web of trust. In a web of trust, public keys are signed by other parties to make the trust in them transferable. Asymmetric ciphers are often computationally expensive, especially for long plaintexts. In such cases hybrid encryption can remedy the problem. 
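The fingerprint comparison just described can be sketched in a few lines. The SHA-256 hash, the grouping format, and the placeholder key bytes below are illustrative assumptions, not the exact format any particular tool (such as GnuPG) uses:

```python
import hashlib

def fingerprint(public_key: bytes) -> str:
    # Hash the key material and format the digest in short groups,
    # so two people can read it aloud and compare it over the phone.
    digest = hashlib.sha256(public_key).hexdigest()
    return ":".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# Hypothetical key material; a real public key would be DER/PEM bytes.
key_received_online = b"alice-public-key-bytes"
key_read_over_phone = b"alice-public-key-bytes"

# Identical fingerprints: the key was not tampered with in transit.
assert fingerprint(key_received_online) == fingerprint(key_read_over_phone)

# A man-in-the-middle substituting his own key changes the fingerprint.
assert fingerprint(b"mallory-public-key-bytes") != fingerprint(key_received_online)
```

Because the fingerprint is a short digest of the full key, comparing it over a second channel is far more practical than comparing the key itself, while still detecting substitution.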
In a hybrid cipher a unique session key is generated, which is then used to encrypt the plaintext with a symmetric cipher. Afterwards the session key is encrypted with an asymmetric cipher and sent together with the ciphertext to the receiver. Because of the properties of symmetric and asymmetric ciphers explained above, for applications like hard disk encryption typically a symmetric cipher is used (no need to exchange a key). For secure communication via email or chat an asymmetric or hybrid cipher is usually the better solution, because it makes the secure key exchange simpler. An important encryption concept is the key length (also called key size). Usually larger key lengths are better, but key lengths cannot be compared between algorithms. For example, the symmetric AES algorithm uses key lengths of 128, 192, or 256 bits. The asymmetric RSA algorithm typically uses key sizes between 1024 and 4096 bits, but that doesn’t mean that RSA is more secure than AES. Short key lengths are problematic, because they are easier to attack with brute force. In a brute-force attack a fast computer is used to try out all possible keys until the correct one is found. It is important to consider the relation between the length of a user-chosen password and the corresponding key length of the underlying encryption algorithm. For example, if you use the rather secure AES algorithm with a key length of 192 bits to encrypt your hard disk, but the corresponding password provides only 32 bits of entropy, you are vulnerable to brute-force attacks.]" } , { "href": "/2012/11/global-spying-realistic-probabilities-in-modern-signals-intelligence/", "title": "Global Spying: Realistic Probabilities In Modern Signals Intelligence", "tags": ["analysis", "surveillance"], "content": "[A paper on the probabilities of global internet surveillance, presented in 2010 at the Defcon conference in Las Vegas, which has proven close to what has become common knowledge today. 
In this article, we will present insight into the realistic possibilities of Internet mass surveillance. When talking about the threat of Internet surveillance, the common argument is that there is so much traffic that any one conversation or email won’t be picked up unless there is reason to suspect those concerned; it is impossible that “they” can listen to us all. This argument assumes that there is a scarcity of the resources and motivation required for mass surveillance. The truth is that motivation and resources are directly connected. If the resources are inexpensive enough, then the motivations present are sufficient to use them. This is visible in the economic effect of supply availability increasing demand. The effect is that since it is more easily done, it will be done more readily. Another fault in the above argument is that it assumes that there is only all-or-nothing surveillance, which is incorrect. The paper can be downloaded here: Global Spying: Realistic Probabilities In Modern Signals Intelligence  ]" } , { "href": "/2012/11/libertopia-conference-2012-digital-tradecraft/", "title": "Libertopia Conference 2012 - Digital Tradecraft", "tags": ["digital tradecraft", "tradecraft"], "content": "[The slides of Frank Braun’s talk “Digital Tradecraft” can be found here: Digital Tradecraft]" } , { "href": "/2012/11/libertopia-conference-theory-practice-of-black-market-business/", "title": "Libertopia Conference - Theory & Practice of Black Market Business", "tags": ["physical tradecraft", "tradecraft"], "content": "[Slides for Jonathan Logan’s talk at Libertopia 2012: Theory & Practice of Black Market Business.]" } , { "href": "/2012/11/bitcoin-conference-2012-london-slides/", "title": "Bitcoin Conference 2012 London - Slides", "tags": ["bitcoin", "otc"], "content": "[The slides for Frank Braun’s talk on the need for OTC exchangers in the Bitcoin economy can be found here: Bitcoin OTC]" } , { "href": "/2012/11/introducing-shadowlife-cc/", "title": 
"Introducing ShadowLife.cc", "tags": [], "content": "[ShadowLife focuses on Privacy – how to protect it and why it matters. We are committed to make information about privacy enhancing strategies and technologies accessible to non-experts and to give practical advise on how to enhance one’s own level of privacy. Due to the inherent challenges of the subject – lack of publicly available information, complexity of the matter and wide-spread misinformation – ShadowLife adopts a policy of communicating clearly the quality of underlying information and our assurance thereof, to refrain from emotional and political language, and to refer to required context. Our team consists of people with a wide spectrum of experience ranging from computer security to open-source intelligence analysis to practical street smarts. What we suggest as practical solutions we have tested and applied ourselves. Nevertheless ShadowLife remains in a continual state of incompleteness. As such we rely on feedback and contribution from the community. ShadowLife publishes information in five different styles of presentation to reflect different approaches to information: News: Bullet-point condensed time-relevant pieces of information which have been primarily researched by third parties. ShadowLife offers an aggregate of privacy relevant news that is accessible in minutes, not hours per day. Dossier: ShadowLife publishes background information on repeated content for reference. Concept: Content of mainly theoretical nature that serves as foundational skill for privacy enhancing strategies and technologies. HowTo: Applying theoretical knowledge to practical solutions in a way that is accessible to all interested parties and not just specialists. Opinion: Analysis and commentary that cannot hold back on personal judgement.   
For more information on our perspective on privacy, please refer to the page Privacy Extremism.]" } , { "href": "/page/contact-us/", "title": "Contact us", "tags": [], "content": "[Write us at contact (at) shadowlife.cc. (PGP/GPG: 0x18231c2ae18fa734. Available on many keyservers.) Donations: Thank you for your donations to Bitcoin address 1shadowRQqB4ui9xK2qPxC68ZjqZXLK1d.]" } , { "href": "/page/privacy-extremism/", "title": "Privacy Extremism", "tags": [], "content": "[The Philosophy of Privacy Extremism emphasizes that privacy should be maintained in all situations; that, if in question, privacy should be given preference unless sufficient arguments to the contrary apply to the specific context. Privacy is a necessary condition under which a person can develop his self and embrace individual responsibility for decisions and actions – it is the prerequisite for individual liberty. As such it is not granted but must be taken and protected vigilantly. Furthermore privacy proves essential when engaging in meaningful and truthful interpersonal relationships, by making the social mask unnecessary and removing the need to keep up appearances, maintaining instead an environment of trust. A denial of privacy, to the contrary, establishes and enforces a lack and loss of esteem, respect and value in and for things and other persons. Privacy should therefore be the strong standard for personal behavior, normative for those who strive towards positive personal human development. However, privacy is under constant attack. Digital surveillance, data tracking, big-data analysis, biometrics and cameras augmented with facial recognition represent just the edges of a massive trend towards deep surveillance and control structures. Opponents of surveillance are faced with only a few options to deal with this trend: Aggressive and violent destruction of surveillance installations and technology, which is not an option for peaceful and respectful individuals. 
The political process, which however promises only slow and incomplete change towards more privacy – if any. It also requires the imposition of the individual will of the privacy defenders on the collective will of the political body – constituting a means of rulership which is not an acceptable choice for people who embrace individual liberty. Self-abandonment, while often the option realized through endless compromise, cannot be the goal of anyone conscious of his self-worth. This leaves only methods of self-protection that minimize data collection and surveillance without exclusively relying on third parties for protection. It is the privacy extremist's choice to reduce the data available on him. Through deliberate self-protection the individual enables himself to choose whose observation, judgement and social memory he wants to become part of and whom to exclude from relationships. Instead of a purely negative defense of privacy it becomes a positive means to shape relationships and express appreciation for others. Applied privacy extremism allows for new ways to emphasise relationships, esteem and openness in a selective and meaningful way by supporting the individualization of both the privacy extremist himself and the counterparts in his relationships. Lastly privacy extremism constitutes a well-mannered and unobtrusive behavior by which the usual shallow grasping for attention is minimized.]" } , { "href": "/page/about/", "title": "About", "tags": [], "content": "[ShadowLife focuses on Privacy – how to protect it and why it matters. We are committed to making information about privacy-enhancing strategies and technologies accessible to non-experts and to giving practical advice on how to enhance one’s own level of privacy. 
Due to the inherent challenges of the subject – lack of publicly available information, complexity of the matter and wide-spread misinformation – ShadowLife adopts a policy of communicating clearly the quality of underlying information and our assurance thereof, of refraining from emotional and political language, and of referring to required context. Our team consists of people with a wide spectrum of experience ranging from computer security to open-source intelligence analysis to practical street smarts. What we suggest as practical solutions we have tested and applied ourselves. Nevertheless ShadowLife remains in a continual state of incompleteness. As such we rely on feedback and contribution from the community. ShadowLife publishes information in five different styles of presentation to reflect different approaches to information: News: Bullet-point condensed time-relevant pieces of information which have been primarily researched by third parties. ShadowLife offers an aggregate of privacy relevant news that is accessible in minutes, not hours per day. Dossier: ShadowLife publishes background information on repeated content for reference. Concept: Content of mainly theoretical nature that serves as foundational skill for privacy enhancing strategies and technologies. Listing of all Concept posts. HowTo: Applying theoretical knowledge to practical solutions in a way that is accessible to all interested parties and not just specialists. Listing of all HowTo posts. Opinion: Analysis and commentary that includes personal judgement. Listing of all Opinion posts.   For more information on our perspective on privacy, please refer to the page Privacy Extremism. Donations: Thank you for your donations to Bitcoin address 1shadowRQqB4ui9xK2qPxC68ZjqZXLK1d. ShadowLife.cc GnuPG/OpenPGP Key: 0x18231c2ae18fa734. ShadowLife.cc does support SSL/TLS: https://shadowlife.cc. 
SHA1 Fingerprint=12:2C:14:89:9B:E4:A8:93:FE:16:1E:11:03:D2:7C:E4:00:29:46:B4 ShadowLife.cc is reachable as a Tor hidden service: shadow7jnzxjkvpz.onion. ShadowLife.cc is reachable via I2P: shadowlife.i2p jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p Signed statement of addresses and keys.]" } ]