[ { "href": "/2014/11/darknet-markets-silkroad-2-0-takedown-analysis-pending/", "title": "Darknet Markets (Silkroad 2.0) takedown - analysis pending", "tags": [], "content": "[Due to requests by our readers we will start investigating and analyzing the latest darknet market takedowns. This can take a while since data is still coming in and nothing definitive is known. Stay tuned.]" } , { "href": "/2013/10/tracking-the-silk-road-lessons-for-darknet-services/", "title": "Tracking the Silk Road - Lessons for darknet services", "tags": ["anonymity", "digital tradecraft", "physical tradecraft", "surveillance", "tradecraft"], "content": "[On Oct 2nd 2013, a person was arrested in San Francisco (CA, USA) who allegedly operated the darknet marketplace website “The Silk Road”. Shortly after, the Silk Road went offline. Within minutes, discussions on the Internet sprang up with thousands of people trying to cope with their loss, trying to make sense of what happened. Several “official” documents (a criminal complaint and an indictment) were released shortly after which, in turn, led to commentators rushing to explain what stupid mistakes DPR – the Silk Road operator – had committed. Now, after a few days have passed, I’d like to give analysis a try myself. The sources for this are few, therefore I will be restricted to the official indictment and criminal complaint, as well as some reports on DPR’s arrest. The problem with the official documents is that they are not – as some read them – a complete and truthful narrative of the investigation that led to the arrest. Instead, they are both meant to establish probable cause for a judge or grand jury to issue forfeiture and arrest warrants against DPR. The contents are meant to convince the reader that the target of the legal action (DPR, the suspect/defendant) is really the person to blame for the activities connected to the Silk Road, and that those activities are unlawful. 
While I will give the authors the benefit of the doubt that the documents include only truthful statements, it needs to be kept in mind that they do not include the whole and complete truth. The statements are worded and ordered to demonstrate that the activities in question are unlawful, and to demonstrate the (true) identity of DPR. This goal determines both the wording and the structure. Also, one should keep in mind that the evidence presented in a later trial may be substantially different. To find out what the causal chain of the investigation was, the statements of the documents need to be reordered chronologically, and we will have to throw in some educated guesses to fill in the gaps. From this there should emerge some hints about the crucial points at which the investigation turned into actionable results – and about how to prevent this in future cases. With this method I constructed the narrative of the investigation that now follows. Enjoy! p.s.: I will not use the alleged real identity of DPR in this article. Instead I will use ARI as a stand-in. (ARI = Alleged Real Identity) There’s no reason to clutter the search engines with more entries on the real name, since the person might in fact be innocent. —— It is likely that the Silk Road (TSR) only got fleeting attention from law enforcement before June ’11. But with the media buzz started by Gawker and the demands made by Sen. Schumer it is likely that an agent was tasked with keeping an eye on TSR and making proposals on whether and how action against it should be taken. The first steps in such an investigation are to collect public knowledge on the subject and to familiarize oneself with the matter. This also starts the ongoing iterative process of deciding if a case should be opened and what resources to assign to that case. To open a case requires that an activity is brought to the attention of law enforcement and that the activity is viewed as unlawful by the investigators. 
The resources assigned to a case depend on various considerations including, but not limited to, constraints on resources from other cases, public and media attention, potential intelligence and other leads gained from the case, and most importantly the predicted outcome of a court trial. Not every case opened by the FBI is meant to end up in court; often the goal is just to gather intelligence that might be of use at a later point. A case may be opened but soon get no more attention simply because the resources are needed somewhere else. When the case has been opened, an agent is assigned to handle it (this is AGENT-1 mentioned in the documents). When exactly the TSR case was opened is unknown, but it likely happened some time between late June of 2011 and early April 2012, led by the DEA. The activities undertaken consisted mostly of opening a file to collect information in, and of doing research in public records (especially Internet searches) and in the public contents of TSR, to establish a timeline and connect people and resources to the case, as well as to find out what exactly the operation was about. During these early days the agent in question had to familiarize himself with Bitcoin and Tor, and he established the first bit of the timeline which would later be used in the attribution phase of the case: 2011-01-23: SOMEONE created a blog at wordpress.com detailing how to access the Silk Road (silkroad420.wordpress.com). More records on the creation of the account (like the IP address used, or the email address given at signup) were not yet available. 2011-01-27: A user by the name ALTOID posted a link to the above wordpress blog on a drug-related internet forum called “shroomery.com”. 2011-01-29: A user by the same name posted a link to the Silk Road on the bitcoin forum. At this point ALTOID became a person of interest, but this was not enough to pull records. The public data simply ended up in the case file without further action taken. 
Requests for user records would be (and were) made later in the process. A while after the 2011-01-29 post, the same user made another post under the same account at bitcoinforums, asking for an IT professional to help with some coding. Included in this post was a gmail address that could potentially reveal the ARI of the person of interest. Again, this piece of data simply went on file and would later be crucial in the attribution phase. Presumably in early April 2012 the case was pushed to its next phase. Active undercover work started. This involves three steps in the first stage. First, familiarity with the terrain needs to be gained. This means slow but growing involvement in the TSR forums. Second, the targets of the operation need to be identified (DPR, vendors, administrators, members with high reputation). Third, the targets are profiled. For DPR this likely resulted in “male, Caucasian, American born and raised, technical or mathematical education, 20-30 years old”, based on his writing style and other clues. On 2012-04-30 one of the undercover identities that would play a major part in this operation joined TSR; I will refer to him as UC-1 (simply called UNDERCOVER in the documents). Slowly working himself into the community at TSR, this agent then contacted DPR, asking for help with a larger cocaine deal. UC-1 claimed that he wanted to sell 1kg of cocaine but that the market at TSR did not seem to be ready for this. DPR promised to handle the request and delegated the task to an administrator identified as EMPLOYEE in the documents. EMPLOYEE is another critical player in this story. He became an administrator of TSR on 2013-04-30 which gave him access to all messages sent between users and their transactions. During the course of this first undercover activity, UC-1 got EMPLOYEE to give his own residential address as a shipping destination for the deal, the shipping to be conducted by courier. 
When the shipping address was revealed to UC-1 on 2013-01-10, the agents involved started a surveillance operation on this address. By about 2013-01-14 at the latest, direct physical surveillance of the address was in place, recording the comings and goings of the people living there, likely complemented by wiretaps. This likely led to UC-1 asking for the shipment method to be changed to courier, possibly because multiple people resided at the same address and the door itself was not easy to see. On 2013-01-17 the delivery was made by two or more undercover agents, and a little while later the payment was made. At this moment law enforcement knew enough to bring EMPLOYEE before a judge. The person was identified, the goods had changed hands, and the payment was completed. A multi-year sentence was certain for EMPLOYEE. This is the point at which TSR started to unravel. With a person on the inside (having access to the messaging and payment system) compromised, the linchpin was pulled. Now law enforcement had to cash in on it. The mistake on the side of TSR that led to this dire situation was threefold: First, the transaction was conducted without minimum standards of tradecraft. The exchange should have been done at a location agreed on only a short while before the meeting, and the location should have had no connection with any party involved. Second, persons involved in the operation of an organization have no business exposing themselves in any transaction. This is where foot soldiers have their place (for example by utilizing the six-pawn-chess protocol). Third, organizations of this kind require compartmentalization. Never may any second-tier operator have wide access to data and at the same time be involved in facilitation. On the side of law enforcement this operation went by the textbook. It was now time to maximize the profit from this catch. 
Some time between 2013-01-17 and 2013-01-26, most likely on or around 2013-01-20, EMPLOYEE was arrested by law enforcement and presented with the facts of the matter at hand. He was set to go to jail for a substantial time and be separated from his wife and child. The alternative was a deal leading to a light sentence, in exchange for full cooperation in the ongoing undercover operation. This, again, is standard procedure. The structure of most organizations, the law on the books, the quality of the prison system and the character and experience of the targeted individuals work strongly in favor of law enforcement. Especially in online crime, where the personal bonds and loyalty between members of an organization are weak and no expectation exists that the organization will “take care” of the trial or of a suspect’s family, suspects are easy to turn. They have everything to lose and exactly nothing to gain from staying loyal. There is no social safety net for criminals waiting for them in jail, nobody who will protect and feed their family, nobody who will send a well-paid lawyer. This makes these organizations far easier to infiltrate than the classical mafia. After being presented with the options EMPLOYEE agreed to cooperate fully. At this point law enforcement had access to almost all messages sent on TSR and to the details of past deals. These records almost certainly went back at least to mid-2011 (it seems there was later a purge initiated by DPR on 2013-05-24). All data available was immediately copied and retained, in the order of importance of the various targets (DPR included). It can be assumed that the conversations collected from the system were incomplete in that they may not have included DPR’s messages themselves but only replies (including quotes) from his contacts. 
This may also have led to sustained high-level access to the messaging system, either by gaining valuable information on other administrators, by direct access to other administrators’ accounts, or by DPR failing to later shut down EMPLOYEE’s account. I shall return to the importance of this data later when talking about how the server infrastructure of TSR was uncovered. It is not clear when exactly EMPLOYEE turned on DPR, but he did, no later than when DPR asked the undercover agent UC-1 to execute a hit on EMPLOYEE. That EMPLOYEE cooperated is demonstrated by him helping to stage a photo that was meant to prove his death to DPR. At this point the case immediately escalated into an FBI operation (if it wasn’t one already). Armed with this massive trove of data, law enforcement was in the position both to start a second undercover operation to attack DPR directly and to locate the servers. Starting 2013-03-13 a user of the platform called “FriendlyChemist” (FC) tried to extort money out of DPR by threatening to release a large amount of user data that was allegedly stolen from a vendor on the platform. The reason given was money problems FC had with one of his suppliers. DPR asked to be brought into contact with the supplier “Redandwhite” (RAW). Over the next two weeks a story developed in which DPR purchased a hit on FC from RAW. I am inclined to think that this story is another undercover activity by law enforcement to position RAW in DPR’s circle of trust. Multiple hints point at this reading: 1.) UC-1 (the undercover agent and killer in the first undercover operation) was tainted because he was associated with the arrest of EMPLOYEE. This could have, and probably did at some point, raise DPR’s suspicion. 2.) Before DPR’s laptop fell into the hands of the FBI, the contents of a TorChat conversation between RAW and DPR were known to law enforcement (the conversation took place on 2013-06-01, the laptop only became available after 2013-10-02, while the contents were known by 2013-09-27 at the latest). 
3.) RAW procured counterfeit identity documents (passports, etc.) for DPR (2013-06-05) and shipped them to DPR’s home address. This shipment was intercepted by Customs and Border Protection (2013-07-10). For this to be the result of a random inspection seems unlikely given the history and timing of the case. 4.) The first undercover action leading to the cooperation of EMPLOYEE likely generated enough data to convince DPR that the extortion threat was serious and imminent. The ID documents later procured and shipped to DPR’s home address by RAW are the cornerstone on which the identification of DPR’s alleged real identity rests. The combination of photos, birth dates, home address and various social media records made that point watertight. Additional postings made early in 2011 (shroomery.com, bitcoinforums) and in March of 2013 (Stack Overflow) merely hardened the evidence. At this point, between 2013-06-05 and 2013-06-20, the undercover agent RAW received at least the photos and birth date information to be put on the forged documents, and this led to a match between a previously recorded person of interest in the file (from the posting on bitcoinforums looking for an IT pro and including the gmail address) and data received directly from DPR. This constituted reasonable suspicion to pull the data on DPR’s gmail account, leading to IP records dating from 2013-01-13 to 2013-06-20. Multiple entries were traced to the apartment of one of DPR’s friends and to an Internet cafe in San Francisco. Additional IP address data pulled for both locations revealed a list of IP addresses and the times they were assigned to those ISP accounts. The data provided by Google likely also revealed the Stack Overflow account (SOA) created by DPR on 2012-03-05. The SOA later became an important link between authentication data found on the Silk Road servers and DPR. 
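It is easy to underestimate how much such authentication data can tie a server to a person. As a purely illustrative sketch (the key material and the comment jdoe@devbox are invented for this example, not taken from the case): an OpenSSH public key ends in a free-text comment field, which ssh-keygen fills with user@host by default, and that comment is stored verbatim in the authorized_keys file of every server the key is installed on:

```shell
# Hypothetical authorized_keys entry; the key material is truncated and invented.
# The third field is the comment that ssh-keygen adds by default: user@host.
entry='ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB...truncated... jdoe@devbox'

# Anyone imaging the server can read that comment straight off the disk:
echo "$entry" | awk '{print $3}'   # prints: jdoe@devbox
```

A username chosen years earlier on a public forum and reused in a key comment like this is exactly the kind of durable cross-link described above; generating keys with an empty comment (ssh-keygen -C "") avoids leaving it.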
Please keep in mind that all this information ends up in the case file and is double-checked and cross-referenced, which might have led to the following find: On 2013-05-24 a TSR user had warned DPR that the site leaked an IP address (included in that message) that was used to restrict administrative access to the software running on it. If the FBI had sustained access to the messaging system of TSR after turning EMPLOYEE, this would have led to an immediate effort to locate that address and gain access to the data stored on that machine (hereafter TSR-VPN). As the documents show, this server was located and imaged between 2013-06-03 and 2013-09-15 at the latest. When the image was pulled from TSR-VPN, it revealed that on 2013-06-03 someone had accessed the system for the last time, wiping the contents of the user accounts and log files. However, this person forgot that logging off from this last access would itself generate an entry, leaving the IP address used in the last access on the machine. This address again matched the records for the Internet cafe mentioned above. However, it is unclear if TSR-VPN was discovered before or after the main web server of TSR was imaged. Some discussions on the Internet in the aftermath of DPR’s arrest also claim that DPR used a VPN service provider to access the servers of TSR. This is a misunderstanding that becomes evident on a close reading of the official documents. The system in question was a virtual private server that DPR connected to via a VPN, and from which he moved on to TSR. It was a single-purpose machine only used by persons working for TSR. It is safe to assume that with the data made available through EMPLOYEE, RAW, the early case file and the IP records pulled for DPR’s Google accounts and for both the friend’s apartment and the Internet cafe, the FBI had enough to begin closing in on DPR. From 2013-06-20 (approx.) 
DPR was under direct physical surveillance to establish his habits and movements as well as to acquire additional digital evidence. It appears likely that DPR continued to do system administration tasks from the Internet cafe. The use of the TSR-VPN suggests that using SSH over Tor was too cumbersome for DPR (and any admin regularly doing this is now nodding his head in agreement). Therefore it seems likely that it was this that led the FBI to find the TSR-WebServer. If DPR logged in from the cafe without using additional protection or with inadequate protection (like using an unencrypted proxy connection), any investigator listening in on the same cafe’s WLAN could gather the necessary identifying information to locate the TSR-WebServer. There are other explanations of how the TSR-WebServer might have been discovered. 1.) Payment tools and ID forgeries that DPR might have acquired through TSR and used to set up the TSR-WebServer were discovered by the FBI in the data received through EMPLOYEE. 2.) DPR used identifiable information of his real identity (especially payment means) to procure the TSR-WebServer. 3.) The FBI hacked the TSR-WebServer, which seems to have had no configuration to prevent the leaking of public IP addresses. 4.) Communication between DPR and the web host for the TSR-WebServer was revealed through the surveillance of DPR’s residence. At this point it is impossible to say, but I do favor the theory that the server was identified by surveilling the cafe’s WLAN both digitally and physically, and that it was located around 2013-06-25. (It seems that DPR was also using a library’s WLAN for TSR work; the attack to locate the TSR-WebServer could just as easily have been conducted there.) Please note that for an operation at this location, no court orders would have been required. Using an open WLAN does not justify any expectation of privacy and is thus fair game for law enforcement. 
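For comparison, tunneling administrative SSH sessions through Tor is not conceptually hard, merely slow. A minimal sketch of an OpenSSH client configuration (the host alias, onion address and user name are invented placeholders; this assumes a local Tor client listening on its default SOCKS port 9050):

```shell
# ~/.ssh/config — illustrative only; alias, user and onion address are invented.
Host hidden-admin
    # A hidden service address avoids exposing the server's public IP entirely.
    HostName abcdefghijklmnop.onion
    User admin
    # Route the TCP connection through Tor's local SOCKS5 proxy, so neither
    # the server nor an observer on the local WLAN sees the real endpoints.
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
```

With this in place, `ssh hidden-admin` never opens a direct connection; an observer on the cafe’s WLAN would only see encrypted traffic to a Tor entry node. Wrapping the client as `torsocks ssh …` achieves much the same without a config entry.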
With the location of the TSR-WebServer known, the police of the country in which it was located were contacted by the FBI under a Mutual Legal Assistance Treaty. This in turn led to the web hosting company being contacted and an image being pulled from the TSR-WebServer. For this to be possible the server was either a VPS (Virtual Private Server) like the TSR-VPN server mentioned above, or the system was utilizing RAID (in which case one of the mirror disks was pulled and replaced with an empty one). Also, it appears that no disk encryption was utilized. The contents of the disk then led to the location of the TSR-WalletServer (used for financial transactions), and revealed digital clues that linked the administrator of the TSR-WebServer to the Stack Overflow posting made by DPR. (The username contained in the SSH public key matched the name given on the Stack Overflow account.) After this critical operation had been concluded, without DPR noticing, the last piece of confirmation was contributed by DHS visiting DPR’s residence and confronting him with the ID forgeries procured through undercover agent RAW. At this point the last sensible reaction for DPR would have been the immediate wiping of his personal laptop’s contents. However, he didn’t do it. When DPR went to a nearby library on 2013-10-02 to access TSR, the FBI was ready. Having staked out his movements and habits, they had requested an arrest warrant and planned the arrest. To minimize the risk of any data being made inaccessible (by disk encryption) they conducted the arrest in a location where they could separate DPR from his laptop quickly, before he would be able to realize what was going on. And that’s exactly what they did, getting access to all contents of DPR’s laptop when arresting him. —— You will notice that I included a few assumptions on the timing of certain events (the arrest of EMPLOYEE, when surveillance started, when the TSR-WebServer was located). 
These are educated guesses based on how long it takes to get these operations moving and into place. These estimates, however, are based on third-party observer experience from a different jurisdiction; US law enforcement might be a bit faster or slower. Also, I have assumed that the statements made in the official documents are truthful. Sadly there is the risk that many of them are not. It might be that much of the undercover action was more than just a sting and was instead fully fabricated after the fact to protect sources and methods. All of that we don’t know, which is why I stand by the above analysis for the time being (and until more data becomes available during the trial – if there ever is one… which is not terribly likely). So, what was the thing that led to DPR’s fall, what were his crucial errors? Many have pointed to his activities on Stack Overflow and the bitcoinforum. I disagree. While these actions sealed his identification POST-FACT, they did not substantially contribute to his ANTE-FACT identification. Instead it was the vulnerability of the Silk Road operation to undercover infiltration, based on a lack of compartmentalization and a lack of tradecraft in exchanges that the TSR staff should never have gotten involved in. That is what broke open the organization and led to an implant that was critical to identifying DPR. Also, the lack of precautions taken in accessing the TSR-WebServer for system administration tasks and the lack of disk encryption were fatal. That communication on the platform was conducted without encryption and that deliveries were sent to true residence addresses added to the fall. So, in reverse, the lessons to take are: 1.) Never have the operators of such a system partake in any deal. 2.) Never do exchanges at true residence addresses. 3.) Use proxies for exchanges. 4.) Always use anonymization for system administration access. Better: use it all the time, always. 5.) Always use disk encryption, even on servers. 6.) 
Learn digital forensics to protect against it. 7.) Use random locations for physical operations to prevent geographic profiling. 8.) Use separate laptops and fully developed covers for all activity. 9.) Compartmentalize organizations deeply. Limit the damage that can be done by operators. For reference, the official timeline: SR-Timeline.html]" } , { "href": "/2013/02/reaching-us-247/", "title": "Reaching us 24/7", "tags": [], "content": "[Over the last few days our incoming gateways have been the target of a DDoS attack. However, we remain reachable anyway via the Tor and I2P darknets: Tor: shadow7jnzxjkvpz.onion (or via shadow7jnzxjkvpz.tor2web.org if you do not have Tor installed). I2P: jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p (or via jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p.in if you don’t have I2P yourself). Alternatively: shadowlife.i2p (or shadowlife.i2p.us or shadowlife.i2p.in or shadowlife.i2p.to if you do not have I2P running)]" } , { "href": "/2012/12/news-automated-passport-checks-to-be-extended-in-germany/", "title": "News: Automated passport checks to be extended in Germany", "tags": ["biometrics", "rfid"], "content": "[The interior ministry of Germany announced that automated passport checks are to be extended to more airports. The system to be used is the EasyPass Gates system. Travelers are automatically checked for height and a picture of their face is taken. The picture is then checked against the electronic record stored in the passport, comparing the biometric features. On a biometric match, the passenger may pass without further contact with the border agents. The core technologies used are biometric facial recognition and RFID reading of the passport. No previous individual enrollment of the passenger is required. After the system has been used in Frankfurt am Main, the airports of Hamburg, Berlin and Duesseldorf are to be added to the program. 
The EasyPass system was developed by L1 Identity Solutions, which is now part of Morpho, a subsidiary of Safran S.A., a French multinational defense and aircraft technology provider. ShadowLife commentary: EasyPass is a prime example of how biometric systems can be used for automated, low-cost border passport checks. The same technology, however, can easily be extended to provide additional inland checkpoints, and it increases individual dependency on biometric RFID-enabled identity documents. Sources: http://www.heise.de/newsticker/meldung/Flughafenkontrollen-Abfertigungssystem-EasyPass-wird-ausgeweitet-1761689.html (in German) http://www.heise.de/newsticker/meldung/Erfolgsgeschichte-EasyPass-soll-fortgeschrieben-werden-1076438.html (in German) http://www.morphotrust.com/ http://en.wikipedia.org/wiki/Safran]" } , { "href": "/2012/12/lessons-learned-online-and-offline-part-iv/", "title": "Lessons learned. Anonymity - Online and Offline – Part IV", "tags": ["anonymity", "tradecraft"], "content": "[In the last three installments of this series we looked at the Theory of Anonymity, and at what to expect of anonymity both online and offline. Several conclusions can be drawn and turned into lessons on how to protect anonymity more effectively. This is what we are going to explore in this part of the series. Lesson 1: The more knowledge an observer has about as many persons as possible, the weaker anonymity becomes. Anonymity, being a knowledge problem, depends solely on what the observer knows about us and other people – the identifying information or unpooling attributes about members of our anonymity set. In the information age both the online world – the Internet – as well as the offline world betray our efforts to keep our identity protected. Lesson 2: Anonymity is nothing that can be expected anymore, neither online nor offline. 
Unique identifiers and strongly unpooling attributes are core both to the operation of the Internet and to the digitization of the physical world. These pieces of information are not only generated, but constantly collected and permanently stored. Our anonymity is harmed not only by data invisibly created by the technology around us, but also by data collected about our behavior. Lesson 3: Anonymity must be actively created. The generation of de-anonymizing data and its collection is a process that does not need to be initiated by anyone targeting a specific person. It happens in the background as the default mode of our world. To protect anonymity, active steps must be taken. Lesson 4: The protection of privacy relies on generating less data. Many motives exist for personal data to be generated and collected, from marketers to law enforcement. Since data can be stored, transferred and traded easily, it can easily end up in unforeseen hands. Also, some parties possess special legal powers or direct access to data. Therefore: Lesson 5: Data that cannot be prevented from being generated needs to be concealed by use of technology or changed behavior. Both online and offline, the technologies of daily use depend on certain personal identifiers to work. This data will always be generated if these technologies are used. However, this data can be concealed or made less telling by changing how technology is used and by protective technologies created specifically for the protection of anonymity. Lesson 6: Anonymity is protected by the individual. Protection requires taking the initiative to actively reduce or conceal data. None of these protective technologies can be expected to work without the active effort of the person trying to protect himself. Also, changes to behavior to protect anonymity are the responsibility of the private individual. 
Lesson 7: Online anonymity relies on concealing IP-Addresses, removing Cookies and Referers, and obfuscating browser fingerprints. In the online world of the Internet, the four strongest and most widespread unpooling properties are IP-Addresses, Cookies, Referers and Browser Fingerprints. Lesson 8: Offline anonymity relies on reducing the use of credit & loyalty cards, not carrying a cellphone, and trying to escape recording by cameras. Offline, in the physical world, most de-anonymization is done through mobile phones, credit and loyalty cards, and face recognition. Lesson 9: Instead of making all data available to a single party, identifying information must be split over multiple parties. Even with the use of protective technologies, some data will be generated. To protect anonymity in these cases, it is necessary to make sure that identifying data is split over multiple parties and cannot be correlated between them. Lesson 10: Protecting anonymity requires awareness of how we behave and how technology works – and adapting our methods to protect anonymity. Technology around us is changing constantly. Our behavior is a strongly unpooling attribute that de-anonymizes us. This can only be countered by a constant awareness of how we act, how technology works, and how identifying information can be minimized in a changing world. Armed with these ten simple lessons, anonymity can be partially restored. In the future, ShadowLife.cc will present and explain various methods and technologies for protecting anonymity – and privacy in general – both online and offline. Five things to start with To start protecting your anonymity, try out these things on a daily basis: Leave your mobile phone at home, or carry it switched off. Do not pay with your bank-issued credit card. Instead, get yourself a Visa or Mastercard Gift Card – or better yet, use cash. 
Stop taking and uploading photos to the Internet, don’t volunteer to pose for anyone’s pictures, and avoid becoming part of a stranger’s snapshot. Use the incognito mode or privacy mode of your browser. When asked at a coffee shop which name to put on your order, give an invented name. While these five easy steps do not protect you fully, they are very easy to take. And they help with getting a feel for a lifestyle that puts a greater emphasis on privacy. Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of what anonymity is are explored. Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication, and what methods can be used to increase anonymity.]" } , { "href": "/2012/12/no-place-to-hide-anonymity-online-and-offline-part-iii/", "title": "No place to hide. Anonymity - Online and Offline - Part III", "tags": ["anonymity", "physical tradecraft"], "content": "[In this article we are going to explore how anonymity in the physical world is eroded by technologies and conventions that have been introduced over the last 30 years. Most people assume that their physical behavior is mostly disconnected from the world of bits and bytes, databases and surveillance (see part I on the Theory of Anonymity, and part II about Online Anonymity). Sadly, this increasingly proves to be an illusion. It is easy to overlook how much the digital world has found its way into our physical lives over the last years. Just 30 years ago most people were only consumers of data, be it via the TV or radio in their living rooms. 
Most data produced by them was strictly personal or business related and never distributed widely or easily accessible to third parties. Daily transactions were settled with cash, travel records mostly non-existent. Only their telephones produced usage data, and even that was bound more to the house or office than to the individual user. The data trail left behind by individuals was very small and limited. This has changed fundamentally. Today, the vast majority of people generate a growing data trail with increasing frequency and accuracy. People have become constant producers of data in their daily, physical, non-Internet lives – often without noticing or understanding the processes involved. Digitization The main reason for this development is the increasing digitization of life. Computers and databases are not restricted to a natural habitat called the Internet, quite the contrary. Computer technology was developed mostly for managing physical events – managing warehouses, cataloging citizens and customers, calculating machine parameters, managing relationships and planning for the future. The attention focused on communication, social networking and the Internet in general allowed many developments in the physical world to go almost unnoticed. And this is especially true for our perception and defense of anonymity. Most transactions taking place in the physical world are now mirrored by transactions in the digital world, creating a digital shadow of our daily offline lives. Physical and digital, offline and online, are tightly linked. Physical objects are represented by digital objects used to track and understand the events in the physical world. And these digital representations are increasingly becoming the sole focus for decisions made in the physical world. This transition has only just begun, and will continue towards an Internet of Things that will dominate the way we deal with both the digital and the physical in the future. 
The digitization of life, the connection between physical objects and digital representations, has already enveloped most aspects of business-to-consumer transactions, travel and movement, as well as most communication. However, digitization of these areas comes with several challenges that must be understood to grasp the impact on physical anonymity: Human life is notoriously ambiguous, a feature that computers cope with poorly. This makes it necessary to create means to precisely describe and identify actions and objects that need to be digitally processed. The solution for this is the introduction of unique identifiers, numbers that are directly tied to one specific action or object and that will not be encountered in any other relation. Another challenge is data acquisition. For computers to be able to track physical objects or events, data about these must be made available in a digital form. This happens through the use of sensors that collect and transmit the data for further processing by computers. In those cases where data cannot be acquired automatically, or when data needs to be presented to humans, terminals are used. Here humans need to actively participate in data acquisition or communication with the computer. Furthermore, just to complete the description of the digitization of life, some actions in the physical world can be automated through actuators, devices that can perform operations like opening or closing doors, or moving objects. Lastly, there needs to be a method for connecting information about multiple objects and events – there needs to be correlation. This is done through the means of title and co-presence. Titles are formal connections between objects that are usually enforced through law – like titles of ownership for cars, identity papers like passports, or objects that may only be found in the possession of a specific owner, like credit cards. 
Co-presence refers to the fact that two or more objects can be located at a specific geographic point at the same time, preferably repeatedly. While this may sound excessively detailed, the combination of unique identifiers, sensors, terminals and correlation methods describes the infrastructure to collect vast amounts of identifying information and for automatic processing. Digitization and Anonymity Just a few decades ago, people only left behind data in the memory of other people. One person would witness the presence or action of another person, and maybe communicate it to a third party. But this data was widely distributed and disconnected, unreliable and only short-lived. Only in cases when a person was specifically targeted, means like photography, audio recording, fingerprint capture, on-foot surveillance and detailed record-keeping were employed. When not targeted, most people were anonymous outside of their direct social environment. They were not identified nor were records of their actions kept. This stands in stark contrast to today. Through the use of unique identifiers, sensors and correlation most people are constantly identified, their actions recorded and records kept indefinitely even if not specifically targeted. Many of these records are not yet interconnected, but many more of these records are kept by an increasing number of parties that individually combine them. Further interconnection will develop out of economic reasons and due to law-enforcement interests. Since these person-specific records are relatively cheap to store and manage, they are kept for not-yet-identified future use. All of these records reduce the anonymity set of an individual simply by containing massive amounts of unpooling properties. Since many of these properties are unique identifiers, the anonymity set is often reduced to a single member – leaving no anonymity in the physical world – unless the individual takes conscious countermeasures to protect his privacy. 
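The anonymity-set arithmetic described above can be sketched in a few lines. A minimal illustration (the toy population and its attributes are invented for this sketch): each additional recorded unpooling property filters the set of candidates, until a single member remains and anonymity is gone.

```python
# Toy illustration: each recorded attribute ("unpooling property")
# partitions the population; intersecting observations shrinks the
# anonymity set, often down to a single member.
population = [
    {"id": 1, "zip": "10115", "gender": "f", "car": "blue"},
    {"id": 2, "zip": "10115", "gender": "m", "car": "blue"},
    {"id": 3, "zip": "10115", "gender": "f", "car": "red"},
    {"id": 4, "zip": "20095", "gender": "f", "car": "blue"},
]

def anonymity_set(people, **observed):
    """Return everyone still consistent with the observed attributes."""
    return [p for p in people
            if all(p[k] == v for k, v in observed.items())]

print(len(anonymity_set(population, zip="10115")))              # 3 candidates left
print(len(anonymity_set(population, zip="10115", gender="f")))  # 2 candidates left
print(len(anonymity_set(population, zip="10115", gender="f", car="blue")))  # 1: identified
```

The same intersection logic is what large-scale record keeping performs automatically; the only difference is the number of attributes and the size of the population.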
In the following we shall explore several of the technologies used for unique identification and sensors. Due to the nature of the subject this can only be an overview which is by no means comprehensive, but it should enable us to identify other technologies when they are encountered. A more complete list can be found in the notes below. Everyday Tracking Probably the best known technology for physical tracking is the use of credit cards and other payment cards (with the exception of pre-paid, anonymous gift cards paid for in cash). They are directly tied to a person and connect that person to the time and place of a payment, in addition to making payment and shopping habits accessible. Thus credit/payment cards are unique identifiers that destroy anonymity. In addition, the payment data is made available both to the shop and the credit card company, and potentially to third parties requesting that data. The license plate of a car is another unique identifier that is currently gaining popularity with parties trying to reduce the anonymity of others. Automated license plate scanners are set up in more and more locations, allowing the automated collection of license plate data combined with time and place. These are often coupled with additional sensors like toll collection systems to identify the in-car toll boxes. Combined, this allows for the automated creation of movement profiles that are directly connected to a person. A more personal, precise and reliable method to create movement profiles and to pinpoint an individual is the mobile phone. Almost everybody is carrying a mobile phone today, all the time, at all places. And mobile phones are constantly traceable as long as they are switched on. The mobile phone network knows the location of every active phone at all times, simply by how the network is set up – not because of targeted surveillance or backdoors. 
Every mobile phone has a globally unique hardware identification number, the IMEI (International Mobile Equipment Identity), which is broadcast to the network frequently. Furthermore the IMSI (International Mobile Subscriber Identity) number, which is stored on the phone’s SIM card, is made known to the network so that calls can be routed. These pieces of information – location, time, IMEI and IMSI – are frequently stored for extended periods of time and made available to third parties. Together, they form a powerful method to find out where a person was at a given time. Since most mobile phones and network accounts for mobile telephony are bound to a person, they are immediately de-anonymizing. But there are more ways electronic companions, be it smart-phone, tablet or laptop, can de-anonymize the owner. When switched on, wireless enabled devices broadcast so-called MAC addresses that can easily be captured over dozens of meters. These hardware addresses are intended to be globally unique and not to change, so as to identify the device to a local hotspot or other devices like headsets. Both WiFi/Wireless LAN as well as Bluetooth use identity broadcasting, though many devices can effectively prevent Bluetooth from sending out its ID. Another strongly unpooling property are the loyalty cards issued by various commercial entities. These also allow the collection of transaction data for products bought, the time and place of purchase, and the person. But since loyalty cards are not legally bound to a person, they can be swapped to reduce the quality of data collected. This is why we classify loyalty cards as a strongly unpooling property instead of as unique identifiers. A less obvious, but nevertheless frequently used method of tracing is the use of bank note serial numbers. Though they are not bound to a person, they can be connected to a person through the commercial transaction itself. This is a method frequently used in law-enforcement sting operations. 
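As a small technical aside, the 15-digit IMEI mentioned above is structured: its last digit is a Luhn check digit computed over the first 14 digits, which is what makes IMEIs machine-verifiable identifiers. A sketch (the 14-digit sample body below is a test value, not a real device):

```python
def luhn_checksum(number: str) -> int:
    """Standard Luhn checksum: double every second digit from the right,
    sum the digits of the products, and reduce modulo 10."""
    total = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def imei_check_digit(first14: str) -> int:
    """Check digit x such that first14 + str(x) passes the Luhn test."""
    return (10 - luhn_checksum(first14 + "0")) % 10

body = "49015420323751"           # 14-digit sample body (test value)
full = body + str(imei_check_digit(body))
assert luhn_checksum(full) == 0   # the full 15-digit IMEI validates
print(full)                       # 490154203237518
```

The check digit only guards against transmission errors; it offers no privacy, since the whole number is still globally unique and broadcast to the network.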
For example, the bank note numbers are known to the ATM at which the target uses his credit card to withdraw money, making it possible to connect the serial numbers to his identity with a high probability. However, data between banks and grocery stores is rarely shared, especially not serial number tracing data. It is useful to keep this method in mind however, since automated bank note scanners are becoming a more frequent piece of equipment found not just at banks but at shops and border checkpoints. Far more prevalent are tracking methods based on pre-paid or subscription tickets for public transportation. For example, the Oyster Card used in London allows the long term tracking of movement because it can connect the built-in unique number to its use at the gates to the public transportation network. Since the technology employed (MiFARE RFID chips) has been proven insecure, any stranger could read the ID of an Oyster Card carried in a target’s purse and then look up the locations and times of travel. Some transportation ticketing systems also allow access to subscription data, often identifying the person directly. The underlying technology in many ticket systems is the RFID (Radio Frequency IDentification) chip. But RFID is far from being limited to tickets. RFID chips are found in passports, credit cards, tickets – but also attached to everyday products like clothing. RFID-tagged clothing is intended for stock management and anti-theft operations, but it also allows the silent tracking of persons. Since clothes are personal and we usually do not replace all our clothing at once, the correlation between RFID identities in clothing combined with payment data collected at stores can make the wearer long-term traceable. For example, RFID scanning gates put in at choke points like hotel entries, subway system entries and store doors can be used to track the wearer of RFID-tagged clothing or other objects that are equipped with RFID tags. 
Lastly, the fastest growing area of de-anonymization and tracking is the spreading use of biometrics. Facial recognition systems are now being built into CCTV (camera surveillance networks) and even shops’ surveillance cameras. Since the human face can be quickly identified by current technology – and the face is constantly visible – this probably makes facial recognition the strongest future application for identification. However, facial recognition is not limited to surveillance cameras. Due to the growing use of mobile phones with built-in cameras, and the spreading habit of shooting pictures always and everywhere to upload them to social media websites, more and more biometric data linked to place and time is made available – with the active and cheerful help of a whole generation of Facebook users. Must digitization lead to loss of privacy? It should be noted that the process of digitization does not inevitably lead to a loss of anonymity. Many convenience and efficiency gains can be achieved without impacting privacy, if the technology is designed with data protection in mind. For example, unique identifiers are not always required, or they can be made temporary and changing. It would also be possible to offer more options to opt out from data collection, or to limit data collection to well defined circumstances in which its use can be demonstrated. However, these “Privacy by Design” approaches are rarely encountered, either because they are not requested by consumers or because non-economic interests (like law enforcement etc.) are at play. Conclusion It should be clear that anonymity cannot be expected in the physical world. Credit cards, mobile phones and facial recognition are the three most frequent de-anonymizing technologies that we are constantly confronted with. Without special measures, physical anonymity does not exist anymore. 
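One of the "Privacy by Design" options mentioned above, temporary and changing identifiers, can be sketched as follows. This is a minimal illustration, not any deployed scheme; the key, device name and daily period are invented: the issuer derives a per-period pseudonym from a secret key, so outside observers cannot link a device across periods, while the issuer can still recompute the mapping when legitimately needed.

```python
import hmac
import hashlib

SECRET_KEY = b"issuer-secret"   # held only by the issuing party (invented)

def period_pseudonym(device_id: str, period: str) -> str:
    """Derive an identifier that changes every period (e.g. per day).

    Without the key, pseudonyms from different periods cannot be linked
    to each other or to the device; with the key, the issuer can
    recompute them deterministically.
    """
    msg = f"{device_id}|{period}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

monday  = period_pseudonym("device-42", "2012-12-03")
tuesday = period_pseudonym("device-42", "2012-12-04")
assert monday != tuesday                                      # no cross-day linking
assert monday == period_pseudonym("device-42", "2012-12-03")  # reproducible for the issuer
```

Compare this with a permanent IMEI or MAC address: the convenience of recognizing a returning device is preserved within a period, but the long-term movement profile never accumulates outside the issuer.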
Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of what anonymity is are explored. Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.   Further examples: Unique Identifiers, serial numbers: (Ultimately unpooling properties) Credit Cards, Cash Cards, ATM cards Financial Transactions: Account numbers, check numbers, routing codes Mobile phone, also built into many modern cars: IMSI: International Mobile Subscriber Identity. Globally unique, associated with the SIM card IMEI: International Mobile Equipment Identity. Globally unique, associated with the mobile phone hardware Phone number Number plates / License plates Passports, identity cards Probable identifiers: (Strongly unpooling properties) Biometrics: Face geometry Fingerprints Voice characteristics DNA / Genetic fingerprint Eye: Iris & Retina Receipt numbers of purchases Loyalty cards MAC Address. Publicly visible hardware address of WiFi/Wireless LAN hardware. Bluetooth ID. Publicly visible hardware address of a Bluetooth device. 
Tickets for public transportation, Oyster card Banknote serial numbers Names used in personal interaction RFID tags, found in many products Artificial DNA marking of objects Automated toll payment boxes Weak unpooling properties: ‘Weak biometrics’, visual: Gait (can be automated) Hand/Ear patterns Race Gender Age Hand geometry Build Clothing Habits, patterns of behavior: Geolocation data (can be very strong) Buying habits Time habits Driving habits Power consumption Sensors & Terminals Payment terminals (credit cards, loyalty cards) Mobile phone Ticket systems (transportation) RFID Gates CCTV/Surveillance camera networks Automated license plate scanners]" } , { "href": "/2012/11/news-salt-lake-city-police-about-to-adopt-head-cameras/", "title": "News: Salt Lake City police about to adopt head cameras", "tags": ["head camera", "police state", "surveillance"], "content": "[The police chief in Salt Lake City, Utah, wants to make head cameras mandatory at his police department: This US police force wants to clip cameras on the side of all their officers’ heads via glasses, helmets or hats. The head cameras can record a crime scene or any interaction with the public, in addition to the footage already produced by dashboard cameras in their cars. Supporters of the technology claim that the head cameras are made in such a way that officers cannot edit the footage, helping to ensure transparency. The AXON Flex devices considered in Utah are manufactured by US firm TASER (they are an upgrade of the earlier AXON Pro system). Currently, there are 274 US law enforcement agencies using one or both versions (some for all officers, others are just testing a few). UK police forces are also testing similar technology. For example, Grampian Police officers in Aberdeen have been using body cameras which attach to their helmets and vests since 2010. 
Sources: http://www.bbc.co.uk/news/technology-20348725 http://www.bbc.co.uk/news/uk-scotland-north-east-orkney-shetland-18981781]" } , { "href": "/2012/11/news-bill-to-authorizes-warrantless-access-to-americans-email/", "title": "News: Bill to authorize warrantless access to Americans' email", "tags": ["email", "privacy law", "surveillance"], "content": "[A vote on a bill which authorizes warrantless access to Americans’ email is scheduled for next week: A Senate proposal touted as protecting Americans’ email privacy has been rewritten to give government agencies more surveillance power than they possess under current law. It would allow more than 22 agencies (including the SEC and the FCC) to access Americans’ email, Google Docs files, Facebook wall posts, and Twitter direct messages without a warrant. In some circumstances the FBI and DHS could get full access to Internet accounts without notifying the owner or a judge. This is a setback for Internet companies, which want to convince Congress to update the 1986 Electronic Communications Privacy Act to protect documents stored in the cloud. Currently Internet users enjoy more privacy rights for data stored on hard drives than for data stored in the cloud. ShadowLife comment: The law does not protect privacy, encryption does. Source: http://news.cnet.com/8301-13578_3-57552225-38/senate-bill-rewrite-lets-feds-read-your-e-mail-without-warrants/]" } , { "href": "/2012/11/news-uk-plans-to-block-online-porn-for-minors/", "title": "News: UK plans to block online porn for minors", "tags": ["censorship"], "content": "[The UK government is moving forward with its plans to block online porn for minors: Anyone buying a new computer or signing up with a new Internet service provider (ISP) will be asked whether they have children on first login. On ‘yes’ further questions will be asked to determine the stringency of the anti-pornography filters which will be installed. 
ISPs have to impose appropriate measures to ensure that those setting the parental controls are over 18. ISPs also have to prompt existing customers to install the filters. This plan differs from earlier opt-out plans which would have blocked online porn automatically. The Open Rights Group consider this ‘active choice’ proposal to be better than the earlier opt-out plans. Sources: http://www.dailymail.co.uk/news/article-2234264/David-Cameron-ensure-parents-led-filter-process-new-computers.html http://www.openrightsgroup.org/blog/2012/victory-government-backs-down-from-default-filtering]" } , { "href": "/2012/11/full-disk-encryption-with-ubuntu-linux/", "title": "Full disk encryption with Ubuntu Linux", "tags": ["digital tradecraft", "encryption"], "content": "[Ubuntu is one of the most popular Linux distributions and a good start for a secure and yet easy-to-use computing environment. In order to install it, go to Ubuntu’s website to download the current release for your desktop or laptop computer and follow the installation guidelines there. Setting up full disk encryption in Ubuntu During the installation of Ubuntu, check the “Encrypt the new Ubuntu installation for security” box in the graphical installer to activate full disk encryption (dm-crypt with the symmetric AES encryption algorithm is then used for that purpose): Make sure you use a good password; see the article “How to choose a secure password” for details. The article “Encryption algorithm: a primer” gives you an introduction to encryption algorithms. Checking the box mentioned above and choosing a good password is all you have to do to activate full disk encryption in Ubuntu and keep your data secure. The Electronic Frontier Foundation (EFF) has more information about full disk encryption in Ubuntu 12.10. Removing data leaks in Ubuntu Unfortunately, the default install of Ubuntu 12.10 added some data leaks. 
If you perform a search on your desktop, the search term is also sent to Ubuntu’s server in order to give you related Amazon products and Internet search results, which is problematic. You should disable that with the following two steps: To disable Amazon advertisements open a terminal and type in the following command: sudo apt-get remove unity-lens-shopping To disable Internet search results open the Privacy app and disable Include online search results: The EFF also has more information about the data leaks in Ubuntu 12.10.]" } , { "href": "/2012/11/opinion-spying-on-petraeus-or-how-emails-quickly-become-incriminating-evidence/", "title": "Spying on Petraeus, or how emails quickly become incriminating evidence [Updated 2012-11-15]", "tags": ["anonymity", "digital dead drop", "digital tradecraft", "email"], "content": "[The current story of Gen. PETRAEUS and his affair with BROADWELL shines a light on the possibilities of digital surveillance and the tracing of crumbs of information. It can serve as an example and a warning against insufficient digital tradecraft. Though news reports about the exact order and nature of the events are imprecise, unreliable and contradictory, we are trying to put them together into a plausible series of events and give some background on techniques that were, or might have been, used to intrude on the privacy of both BROADWELL and PETRAEUS. Phase I: Threatening emails The case began when KELLEY received between 5 and 10[1] emails of threatening content that did not immediately identify the sender. The FBI was contacted through an agent who was a friend of KELLEY, and the matter was investigated by the FBI cybercrime unit. To prevent confusion we will refer to the address these emails were sent from as EMAIL_ACCOUNT_A, since the story involves multiple accounts. 
Phase II: Requesting account information Since the emails in question did not immediately reveal the identity of the sender, the FBI most likely contacted the email provider of EMAIL_ACCOUNT_A first, requesting the registration data for the address in question (likely using a subpoena). An email address consists of two parts, the “Local Part” or “user” and the domain managing the account. Together these form the address as user@domain.com. The information about the domain, and thus who manages an email address, is publicly available through the domain registration system and can be looked up within seconds (using a whois service like www.whois.com). Since EMAIL_ACCOUNT_A was registered under a pseudonym (false user information) and not the real identity of the owner, the FBI resorted to identifying the account owner through other means. Phase III: Tracing access At this point the FBI either: Requested and received historic login data for EMAIL_ACCOUNT_A from the email provider. This would include the dates/times when the account was accessed and which IP-Addresses were used by the user. Or the FBI relied on the IP-Address information included in most emails, in a section that most email programs hide from the user but that is nevertheless carried by the email itself and easily obtained through the email program. An example of what such an entry in an email looks like is shown here: Received: from [] by fmail.com via HTTP; Fri, 11 Nov 2011 11:11:11 PST At this point the FBI had a list that showed at what dates/times the owner accessed EMAIL_ACCOUNT_A with which IP-Address. From there the FBI used publicly available databases to identify the owners and/or locations of the IP-Addresses in question, which resulted in a list that informed them about the places and times EMAIL_ACCOUNT_A was used. Phase IV: Identifying the sender Apparently EMAIL_ACCOUNT_A was not used from a personal Internet connection to send the emails in question. 
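Received headers like the one shown above can be harvested mechanically, which is why this metadata is so attractive to investigators. A minimal parsing sketch (the sample header and its IP address are invented, since the original example elides the address; 198.51.100.0/24 is a reserved documentation range):

```python
import re

# Invented sample; real Received headers carry the client IP in brackets.
header = ("Received: from [198.51.100.23] by fmail.com via HTTP; "
          "Fri, 11 Nov 2011 11:11:11 PST")

def extract_access(h):
    """Pull the sender's IP address and the timestamp out of a Received header."""
    m = re.search(r"\[(\d{1,3}(?:\.\d{1,3}){3})\].*?;\s*(.+)$", h)
    return (m.group(1), m.group(2)) if m else None

ip, when = extract_access(header)
print(ip, "|", when)   # 198.51.100.23 | Fri, 11 Nov 2011 11:11:11 PST
```

Applied to a mailbox full of messages, a few lines like these already yield the place-and-time list described above, ready for bulk whois/geolocation lookups.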
This led the FBI to contact the owners of the IP-Addresses identified in Phase III – which included multiple hotels – and request information about potential users of the Internet accounts identified by the collected IP-Addresses. Apparently the FBI needed no subpoenas or even court orders to access this information; hotels simply shared the guest records for the dates in question. At this point the FBI had a list of persons that included the user of EMAIL_ACCOUNT_A. They then simply looked for persons that had been at all of the places at the times in question. Leaving one suspect: BROADWELL. Phase V: Widening the picture At this point the FBI could convince a judge to issue a warrant to identify additional email accounts used by BROADWELL, who had been successfully identified as the owner of EMAIL_ACCOUNT_A. It is unclear what technique the FBI used to find additional accounts of BROADWELL. Possible options are: Using FBI-controlled software installed on BROADWELL’s computer to identify additional email accounts accessed. BROADWELL’s modus operandi included accessing email accounts from changing Internet connections like those of hotels. Since this was to be expected in the future as well, an FBI-controlled data collection tool installed on BROADWELL’s laptop would have been a good choice, simply because she would likely use that machine during travels. Software like Magic Lantern, CIPAV or any of their successors would have been the most promising path, but would also have presented legal obstacles. Another approach would have been buying available data from various data traders like Acxiom that often have information about multiple email addresses used by the same person on file. This data is usually collected from various sources and aggregated based on common identifiers like IP-Addresses, which together yield a surprisingly detailed picture of the person in question. 
However, this data is often less complete than required in such an investigation and also makes case information available to a third party. Due to the fewer legal obstacles involved, simple communication surveillance on the internet account used by BROADWELL at home – and potentially on her mobile phone – might have been the most likely route of investigation to take. A system in the likeness of Carnivore (since replaced with more advanced implementations) could have been used to specifically and exclusively look for additional email accounts used, as stated in the warrant. Asking BROADWELL: Sources are unclear at which point BROADWELL handed her computer over to the FBI for physical investigation of its contents. This would likely reveal other email accounts used, through traces left in the browser history & bookmarks, configuration of email client software, and entries in automatic password managers or auto-fill records of the browser. [Update:] Some sources claim that both EMAIL_ACCOUNT_A and EMAIL_ACCOUNTS_B were managed by Google. It might be the case that the FBI only asked Google, as provider of EMAIL_ACCOUNT_A, to search for other email accounts that were accessed from the same IP-Addresses and at the same times. Google then would have searched the access logs it stores, discovering EMAIL_ACCOUNTS_B, and then made them known to the FBI. Sources are unclear in this regard, but it remains a possibility at this point. By using any or all of the above methods, the FBI found more email accounts, EMAIL_ACCOUNTS_B, which were accessed regularly. Phase VI: Hitting Gold The FBI at this point gained access to EMAIL_ACCOUNTS_B discovered in phase V. How exactly the access was gained is unclear and depends on the exact method(s) used in phase V. Either account access credentials were discovered, or additional subpoenas/warrants were issued to access the accounts with the help of their respective providers (see phase II). 
When analyzing the content of these accounts stored on the providers’ servers, a group of accounts, EMAIL_ACCOUNTS_C, stuck out due to two factors: Classified information was stored in the account. Multiple sources refer to this, but it might be a confusion with files stored on BROADWELL’s computer, which was at some point made available to the FBI. Excessive use of the “Drafts”-folder for communication Especially the use of the Drafts-folder appears to have caught the attention of the media, and possibly the FBI, because it is a common method used to conceal communication. This method is commonly referred to as a “Digital Dead Drop” (the term drop box is mostly a media error/invention). Here the communicating parties share the access credentials to an email account. By authoring emails and not sending them, but storing them instead in the Drafts-folder, the parties can exchange messages without actually generating additional traffic “on the wire”. This was popularized by reports about Al-Qaeda operatives using this method. While it is true that additional traffic is not generated through this technique, the traffic for accessing the accounts and the data in the accounts is still available and often under lower legal protection than actual communication that involves multiple accounts. The method was mostly used out of fear that intelligence agencies would have automated access to international internet communication (true) but would have no access to email accounts stored on servers (false). Even access to email accounts leaves traces that can be scooped up by surveillance operations, and data stored in email accounts is no more secure than transmitted data if the intelligence agency can gain access to the servers – which it usually can. Furthermore it concentrates all information about the account users in one place instead of spreading it over multiple networks that might not be equally surveilled. 
Due to the recording of access to email accounts, a surveilling party only needs to secure the cooperation (or undermine the protections) of a single party to gain access to the IP-Addresses of communicating parties and the times/dates when communication took place. And this appears to be so in this case. Phase VII: Identifying other parties It is unclear how PETRAEUS was linked to EMAIL_ACCOUNTS_C. Most likely the IP-Address information stored by the email provider at each access was used to identify other parties involved. For this, subpoenas to Internet service providers could have been used to identify the users of the IP-Addresses stored in the email account logfiles. More likely however, the FBI connected one or more of these IP-Addresses to the CIA immediately and left the final identification to their IT department. Commentary on the case Public knowledge about the case is very limited both in depth and reliability. What can be concluded however is that the FBI used a wide array of investigative methods and resources on a simple harassment case that escalated into a case about national security concerns during the investigation. Whether this was in any way justified remains to be seen. Several lessons can be drawn from this story: Investigations that begin with low interest and impact can escalate quickly, drawing in more and more potent methods and technologies. Most internet service providers, email providers and hospitality businesses are not sufficient guardians of one’s privacy. Context-Information and Meta-Data (email headers, access logs, IP-Addresses) are the prime source of information for intelligence and investigation operations. These can easily be processed automatically by software because they were created by computers for computers. Hearsay tradecraft (the Drafts-folder as digital dead drop) without an understanding of the background needed to protect one’s privacy is not only insufficient but even counter-productive, as shown in this case. 
Good digital tradecraft for E-Mail Good tradecraft for protecting email communication does exist: 1. Protect email content through message encryption, like GnuPG. 2. Do not rely on third-party storage of emails: download emails and delete them from the email server. 3. Store email and other information (such as browser data) securely using Full Disk Encryption like TrueCrypt. Points 1-3 also mean that one should not use webmail services. 4. Select an email provider that is privacy conscious: removing identifying header information from emails, protecting whois/domain-data, or being registered in a jurisdiction other than your own. 5. Use encryption to communicate with the email provider: insist on TLS/SSL encrypted access to their SMTP (outgoing) and POP3/IMAP4 (incoming) servers. 6. Only access the Internet with anonymization methods enabled that conceal your true IP-Address from third parties, like Tor/I2P/Multi-Hop VPNs. 7. Do not draw unneeded attention towards yourself by harassing people needlessly. These are only the minimal tradecraft rules for secure and private email use. But they would have been sufficient to protect PETRAEUS and BROADWELL. Please also refer to our Anonymity Series (Part I, Part II) for more background on Anonymity. 
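The rule about insisting on TLS can be illustrated with a short Python sketch using the standard smtplib/ssl modules; the server name, port and credentials are placeholders, and the point is merely refusing to proceed without a verified encrypted connection:

```python
# Sketch: refuse to talk to the mail server without verified TLS.
# Server name, account and message are placeholders, not real values.
import smtplib
import ssl

context = ssl.create_default_context()   # enables certificate verification
context.minimum_version = ssl.TLSVersion.TLSv1_2

def send_over_tls(server, user, password, msg, rcpt):
    """Sends msg, but only if STARTTLS succeeds with a verified certificate."""
    with smtplib.SMTP(server, 587) as smtp:
        smtp.starttls(context=context)   # raises ssl.SSLError on failure
        smtp.login(user, password)
        smtp.sendmail(user, [rcpt], msg)
```

If the server cannot present a valid certificate, the call fails instead of silently falling back to plaintext.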
Media sources used for this article (in no particular order): http://www.newyorker.com/online/blogs/newsdesk/2012/11/david-petraeus-and-the-surveillance-state.html http://online.wsj.com/article/SB10001424127887324073504578113460852395852.html?mod=WSJ_hps_LEFTTopStories http://www.wired.com/threatlevel/2012/11/gmail-location-data-petraeus/ http://www.huffingtonpost.com/2012/11/12/petraeus-fbi-gmail_n_2119319.html http://www.nytimes.com/2012/11/12/us/us-officials-say-petraeuss-affair-known-in-summer.html?pagewanted=all http://openchannel.nbcnews.com/_news/2012/11/12/15119872-emails-on-coming-and-goings-of-petraeus-other-military-officials-escalated-fbi-concerns http://m.apnews.com/ap/db_289563/contentdetail.htm?contentguid=VOlvNjF4 1: Sources are vague on this issue. Update 2012-11-15: Added option 5 to “Phase V: Widening the picture”.]" } , { "href": "/2012/11/news-nec-offers-face-recognition-analysis-for-retailers/", "title": "News: NEC offers face recognition analysis for retailers", "tags": ["face recognition"], "content": "[The technology only requires an off-the-shelf personal computer and a video camera. It can estimate gender and age based only on video footage. Furthermore, repeat customers can be automatically recognized, even across stores. The underlying face recognition product, NeoFace, is also used in services like intruder recognition and surveillance. NeoFace is a cloud service provided by NEC. NEC claims that the face templates generated cannot be reconstructed into images of faces. ShadowLife comment: ShadowLife disagrees with NEC’s claim that face templates cannot be used to reconstruct the images of the faces recorded. Similar claims were made before about biometric iris recognition templates that were then used to reconstruct iris images through genetic algorithms. 
NeoFace is an example of self-learning facial recognition software that adds face data to its database simply by observing crowds in real-life scenarios. Storing the NeoFace data in the cloud centralizes data management and gives other parties easy access to facial recognition data and other context data (store visited, date/time of visit, etc.) from all participating locations. Sources: http://www.diginfo.tv/v/12-0209-r-en.php http://www.youtube.com/watch?v=mTCUY4CUHFU http://www.wired.com/threatlevel/2012/07/reverse-engineering-iris-scans/all/]" } , { "href": "/2012/11/news-google-compliance-to-reveal-user-data-and-remove-content/", "title": "News: Google compliance to reveal user data and remove content", "tags": ["google"], "content": "[The report covers the 6 months between January and June 2012. Requests for user data increased by 30% compared to July-December 2011. Google received more than 20,938 requests to reveal user data of 34,614 accounts. Google complied in more than 13,900 cases. The majority of user data requests were made by the USA; Google complied in over 90% of US requests. During the same timespan Google received 1791 court orders and requests by the executive branch (police etc.) to remove 17765 items from its search results or other services. Source: Google Transparency Report]" } , { "href": "/2012/11/anonymity-online-and-offline-part-ii/", "title": "Anonymity – Online and Offline – Part II", "tags": ["anonymity", "digital tradecraft"], "content": "[This article explores how much anonymity really exists online, and how anonymity is reduced by everyday technologies used in Internet communication (please check out Part I of this series for the theory behind anonymity). Many people expect their actions online to be far removed from their physical identity, which often leads them to behave in ways they would never dare if their name were connected to it. But how well-founded is this belief in online anonymity? 
Sadly, there is no such thing as online anonymity per se. Without special technical measures, anonymity on the Internet should be deemed non-existent. Every Internet user leaves a long trail of data behind, much of which can be directly and cheaply connected with his identity. It is necessary to understand the technologies involved to get a clear and true picture of the state of online anonymity: IP Address Every communication on the Internet – such as surfing to a website or making a VoIP call – involves data being reformatted into smaller packets that are then delivered over a vastly complex network of routers – computers that pass each packet on, hop by hop, until it reaches its final destination. Here’s an example path for an information packet that travels through the Internet (a traceroute-style listing of eleven routers, each one passing the data on to the next; the individual addresses are omitted here). The various routes data can take on the Internet are determined by both the sender and the recipient of the data, as well as any system in between that is responsible for passing on the data – and the routes will constantly change. For this to work, every packet of data must come with a sender- and a recipient-address that uniquely identifies the computers that are talking to each other. This address is called Internet Protocol Address – or just IP-Address – and is simply a number. While these numbers look innocent, they are directly related to the computer of the Internet user. When accessing the Internet, the ISP (Internet Service Provider) of the user will assign a unique IP-Address to the user’s computer and store this information in a database, from which it can be retrieved with a subpoena, or even resold to data traders and marketers. 
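The core property – every endpoint you talk to sees your address, with no cooperation from anyone required – can be shown in a few lines of Python; a throwaway TCP server on localhost stands in for any website:

```python
# Sketch: every endpoint of an Internet connection sees its peer's address.
# A throwaway TCP server on localhost stands in for any website.
import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))        # let the OS pick a free port
server.listen(1)

def accept_once():
    conn, addr = server.accept()     # addr = (visitor's IP-Address, port)
    conn.sendall(addr[0].encode())   # echo back the address the server saw
    conn.close()

threading.Thread(target=accept_once).start()

client = socket.create_connection(server.getsockname())
peer = client.recv(64).decode()
client.close()
server.close()
print(peer)                          # the IP-Address the server recorded
```

The same mechanism that lets packets find their way back to you is what hands your address to every server you contact.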
But even third parties have information about IP-Addresses that they have gathered in various ways, often making it possible to pinpoint an IP-Address to a single street address using only publicly available data (see Maxmind.com GeoIP to find out what everybody can know about you right now, just based on your IP-Address). Each of the computers involved – be it sender, recipient or any of the routers in between – sees the IP-Addresses of the parties communicating with each other, and even what data is transferred. Be aware that dynamic IP-Address assignment, as offered by many ISPs, does not change the anonymity impact of IP-Addresses at all. At best, more data needs to be stored and analyzed to attribute Internet communication to a user. Cookies, ETag, etc. By now, every Internet user should have heard about “cookies”. These are little pieces of data that a website can place on the visitor’s computer, and that will be sent back to the website when the user visits it again. They allow the website to connect multiple visits together as coming from a single computer. This sounds innocent enough, however: not just the website a user visits but every bit of content loaded from it – like javascript, images and flash video – can create and read cookies on a computer. This makes it possible to track a visitor not just on one website, but to connect his visits to multiple websites with each other. Depending on the business model of the data traders and marketers involved, user data from multiple websites is shared with the website operators as part of the deal, or resold separately. But even with cookies disabled in the user’s browser, there exist numerous similar techniques that are not as easy to block. This includes, but is not limited to, ETags (a method to optimize loading speed), flash cookies (Local Shared Objects used by Adobe Flash) and HTML5 (which allows the storage of data in the browser in multiple ways). 
Combined, these are used to create “Zombie Cookies” which are exceedingly hard to remove from a computer. Website forms All anonymity ends when users fill out forms on the web – be it the signup form for a website, the order form of a shop or even the search terms put into a search engine. Most users enter true or close-to-true data when asked for it. From there on, the data can be associated with the IP-Address used and the cookies stored on the computer. Depending on the policies of the websites in question, the data can then be shared with other websites and associated with even more cookies and IP-Addresses, forming comprehensive profiles of browsing habits and identity attributes that are almost impossible to remove from the long-lived databases of data traders and marketers. Frequently, form data later becomes the subject of subpoenas, with authorities compiling in-depth reports on searches made on the Internet and websites used. Referers Search terms especially find their way through the Internet easily, and the reason for that is the “Referer”. Every time a user clicks on a link on a webpage, the newly opened webpage is sent the address of the previous one. From this a website learns the search terms used when the user clicks on a search result. But referers are not only created when users click on links. Any content loaded from a website (images, javascript, flash, etc.) carries with it a referer. When a website loads the credit card logo from the servers of the website operator’s bank, the bank can know that a user visited that specific website, including the IP-Address of the user and potentially other information like cookies. The same happens with the “like” and “share” buttons that prominently decorate many websites today and that are loaded directly from the servers of the respective social network. 
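The credit-card-logo scenario can be sketched in a few lines of Python; the shop URL is invented, and a local http.server stands in for the bank hosting the logo:

```python
# Sketch: what a third-party server learns when a page embeds its content.
# A local http.server stands in for e.g. a bank hosting a credit-card logo;
# the shop URL below is invented.
import http.server
import threading
import urllib.request

seen = {}

class LogoHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The "bank" sees the visitor's IP and the page embedding the logo.
        seen["referer"] = self.headers.get("Referer")
        seen["client_ip"] = self.client_address[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"logo-bytes")

    def log_message(self, *args):    # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), LogoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The browser fetches the embedded logo and, by default, discloses the
# page it came from in the Referer header.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/logo.png",
    headers={"Referer": "https://shop.example/checkout?card=visa"})
urllib.request.urlopen(req).read()
server.shutdown()

print(seen["client_ip"], seen["referer"])
```

The third party never appears in the address bar, yet it receives both the visitor’s IP-Address and the exact page – including any query parameters – that embedded its content.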
In combination with cookies and IP-Addresses, this informs social networks about the majority of content their members consume, even when none of the websurfing involved any of their own websites directly. Bookmarks Some websites will generate new page addresses (URLs) for every new visitor. When these pages are then bookmarked or shared with others, website operators are enabled both to recognize repeat visitors and to gain insight into who shared their pages with whom – giving them easy access to the user’s social relations. History & Cache When loading webpages, the user’s browser will first test if it has a locally stored (cached) version from a previous visit, so that the websurfing experience can be sped up. Due to this behavior, websites can connect visits together and recognize repeat visitors. Browser Fingerprint Another technique to track users on the Internet is by finding out about their browser’s fingerprint. Due to minute differences between installations, many browsers express a unique behavior which can be tested by websites. This makes it possible to identify repeat visitors and even to track them without relying on IP-Addresses or storing identifying data in their browsers (like cookies). To see how unique your browser is, check out this page by the Electronic Frontier Foundation: Panopticlick Mail headers Tracking methods are not limited to websurfing. As an example of other technologies that have anonymity implications, email shall be quickly examined. Unbeknownst to most users, emails carry information that consists not just of the email addresses of the parties, but also the IP-Address of the sending user. Received: from [] by freemail.com via HTTP; Fri, 11 Nov 2011 11:11:11 PST The above shows one of the “headers” included in an email. The “Received” headers record the full path an email traveled from mail-server to mail-server, usually including the original IP-Address of the sender’s computer (the address in brackets in the example above). 
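Reading that path information takes only a few lines of Python with the standard email module; the message, hosts and addresses below are fabricated for illustration:

```python
# Sketch: reading the Received-header chain with the standard email module.
# The message, hosts and addresses below are fabricated for illustration.
from email import message_from_string

raw = """\
Received: from mail.example.org (mail.example.org [198.51.100.7])
\tby mx.freemail.example; Fri, 11 Nov 2011 11:11:11 -0800
Received: from [203.0.113.42] by freemail.example via HTTP;
\tFri, 11 Nov 2011 11:11:05 -0800
From: alice@freemail.example
To: bob@example.org
Subject: hello

body
"""

msg = message_from_string(raw)
hops = msg.get_all("Received")
# Each relay prepends its own Received header, so the last one is the
# first hop - the one closest to the sender's own IP-Address.
for hop in hops:
    print(" ".join(hop.split()))
```

Every recipient, and every server on the path, can run exactly this kind of extraction.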
In addition to this, other headers exist that uniquely identify the message, the mail program used, or the conversation the mail refers to. All of this information is visible to any router on the path of the email. This is especially interesting to operators of free webmail services that attract a lot of users. The correspondence of their users allows the operators to temporarily attribute email addresses (and often names) to the IP-Addresses that were used in sending, creating a very precise database of user information that does not have to rely on the cooperation of other parties. Putting it all together This article could only present a shallow overview of the many methods and technologies that compromise user privacy and anonymity on the Internet. When combined with each other and utilized by specialized parties, they comprise powerful means not only to reduce anonymity on the Internet to nothing, but also to spread information about users between networks of actors. The depth of this threat materializes when several of these technologies are combined and the generated data is mined. Just using cookies, referers, IP-Addresses and mail headers, most users can be identified during most past and future connections to the Internet, essentially reducing the IP-Address to a unique identifier that is directly associated with the user’s name – without having to resort to subpoenas or data held only by the user’s ISP. Numerous parties with that kind of information exist, most of them unknown to the public. There is very little reporting about the methods used by data traders and marketers, or how they compile vast databases of user information and make them available to paying customers. This is only natural, because methods to protect privacy and anonymity online exist, as we will explore in Part V of this series. 
Too much attention on the factual non-existence of anonymity online would only result in users choosing to take the protection of their privacy back into their own hands, instead of misguidedly trusting it to the Internet itself. Only one conclusion can remain: the Internet provides no anonymity whatsoever, unless defensive technologies are employed by individual users. Things to come… Part I: Theory of Anonymity In the first part of this series the theoretical aspects of anonymity are explored. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.]" } , { "href": "/2012/11/news-chrome-adds-do-not-track-header/", "title": "News: Chrome adds Do Not Track header", "tags": [], "content": "[Chrome has added the Do Not Track (DNT) header: DNT was added to the Chrome 23 release. Mozilla’s Firefox browser implemented the feature in June 2011. The upcoming release of Internet Explorer 10 (IE10) will enable DNT by default. The Apache webserver was recently updated to ignore DNT in IE10. Yahoo recently said it will also ignore DNT in IE10. Source: Ars Technica]" } , { "href": "/2012/11/how-to-choose-a-secure-password/", "title": "How to choose a secure password", "tags": ["digital tradecraft", "encryption"], "content": "[This article explains how to choose a secure password, for example to secure encrypted data or private keys against brute-force attacks. An introduction to encryption algorithms is given in the corresponding primer (brute-force attacks are also explained there). You should never use the same password for multiple purposes. 
It is fine to use the built-in password manager of the Firefox web browser to store your website passwords, but only if they are secured properly (by using hard disk encryption with a strong password as described below, and the master-password feature). Theoretical consideration of password lengths To consider the information content of a password one first has to consider the underlying alphabet. Let’s assume we use the 26-letter Latin alphabet in upper- and lowercase plus the 10 decimal digits 0-9, which gives us 62 characters in total. If we add some special characters like ‘!’ or ‘?’ we get more than 64 characters in total. For simplicity, let’s assume that we have an alphabet of exactly 64 characters. 64 = 2^6, which means that each character from this alphabet contains 6 bits of information, if the corresponding password is chosen randomly. If the password is taken from a dictionary the information content is vastly lower. For example, let’s compare a 10 character password chosen randomly with a 10 character password taken from a dictionary containing 4096 words. In the former case we have 60 bits of information. That is 2^60 ≈ 1.15 * 10^18 possible passwords. In the latter case we have just 12 bits of information: 2^12 = 4096 possible passwords. That is, in the former case one has to test ~2.81 * 10^14 times more combinations than in the latter case in order to guess the password. In conclusion this means that completely random passwords are best. Passwords contained completely or in large parts in dictionaries are not secure against brute-force attacks! Capabilities of brute-force attacks Computing power is measured in FLOPS (floating-point operations per second). Current supercomputers have computing power in the range of petaFLOPS: 1 petaFLOPS = 10^15 FLOPS. It is therefore safe to assume that the current capabilities of sophisticated brute-force attacks are up to 10^15 passwords per second. 
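The arithmetic above can be checked with a few lines of Python:

```python
# The entropy arithmetic from the text, checked in Python.
import math

alphabet = 64                        # ~62 characters plus a few specials
bits_per_char = math.log2(alphabet)  # 6 bits per randomly chosen character
random_10 = alphabet ** 10           # 2^60 possible 10-character passwords
dict_words = 4096                    # a 4096-word dictionary = 12 bits

print(bits_per_char)                 # 6.0
print(random_10 == 2 ** 60)          # True
print(random_10 / dict_words)        # ~2.81e14 times more combinations
```

The ratio 2^60 / 2^12 = 2^48 is where the factor of ~2.81 * 10^14 comes from.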
Moore’s law states that the number of transistors on integrated circuits (and with it the computing power) doubles roughly every 18 months to two years. If we are optimistic and assume a doubling every 18 months, we get an increase of computing power by a factor of 1000 every 15 years (2^10 = 1024 ≈ 10^3). Passwords usually do not only have to be secure now, but until the end of one’s life. If we take 75 years, one has to factor in a 10^15 improvement in brute-force capabilities! Therefore, we want to make sure that our passwords are secure against brute-force attacks of up to 10^30 passwords per second. What do these capabilities mean in terms of actual time? The dictionary password mentioned above stands no chance against a current supercomputer with a brute-force capability of 10^15 passwords per second; it is broken in less than a second. All 10 character passwords (from an alphabet as described above) can be tried with such a computer in less than 20 minutes. Therefore, such a password is also not safe. To break a 20 character random password with a current supercomputer would take at most about 4.2 * 10^13 years, but with the 10^30 passwords per second supercomputer of the future it would take only about 2 weeks. It is always better to err on the side of safety, and therefore we recommend random passwords with a minimum length of 30 characters. Practical hints For passwords used online, a good approach is to generate a long random password for each website and store it securely in the browser. For example, you can generate such a password with the following two tools: Use javascript to generate the password for you. The code runs entirely in your browser; our webserver never sees the password. The core of the bookmarklet appends one random character per iteration with pass+=set.charAt(Math.floor(Math.random()*set.length)); and finally displays the result via window.alert(‘Random password:\n’+pass+’\n’); (drag & drop this link to the bookmarks bar of your browser to easily generate passwords in the future). 
If you are on a UNIX system you can use this shell script to generate passwords for you: #!/bin/sh -e head /dev/urandom | uuencode -m - | sed -n 2p | cut -c1-${1:-32} (It base64-encodes random bytes via uuencode – from the sharutils package – and keeps the first 32 characters, or as many as given as the first argument.) Of course, that means that the hard disk where the passwords are stored needs to be encrypted securely. We recommend that you learn a 30 character random password for your hard disk encryption; it is not as hard as many imagine. It takes only about 30 minutes to learn a random password by typing it in repeatedly in order to put it into muscle memory. But if you cannot remember a 30 character random password, the following approach to generating and remembering pseudo-random passwords could work for you. You make up an unusual sentence which should contain special characters and numbers, but which you can easily remember. For example: My favorite café has 32 different pictures on the wall. Among them are 3 with dogs, 5 with cats, and 12 portraits! ‘May I have your number?’, I asked the waitress and I got (703) 482-0623 :(. If you have such a sentence, you abbreviate it by using only the first characters, the numbers, and the special characters. In our example you’ll get a password like this: Mfch32potw.Ata3wd,5wc,a12p!’MIhyn?’,IatwaIg(703)482-0623:(. Such a password is much better than a password containing words from a dictionary, although it is not completely random (therefore such a password must be longer). Just make up some simple story or sentence which you can easily remember. For example, you can tell yourself a story about the stuff contained in your childhood room or some other memory which you can easily recall. This approach should give you a start for choosing good passwords. It is absolutely crucial to choose a good one for your disk encryption; the best encryption algorithms are worthless if you use weak passwords! Do not underestimate the speed of current processors, or the machines and specialized password-cracking hardware which will be available in the years to come! 
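The abbreviation scheme can be sketched in Python. This is a simplified interpretation, not an exact formalization of the rule above: it keeps the first letter of each word plus all digits and special characters, with punctuation placed after the word’s first letter.

```python
# A simplified interpretation of the scheme above: keep the first letter of
# each word, and keep digits and special characters verbatim.
def abbreviate(sentence):
    parts = []
    for token in sentence.split():
        first = next((ch for ch in token if ch.isalpha()), "")
        rest = "".join(ch for ch in token if not ch.isalpha())
        parts.append(first + rest)
    return "".join(parts)

# A shortened variant of the example sentence:
print(abbreviate("My favorite café has 32 pictures on the wall."))  # Mfch32potw.
```

Remember the sentence, not the password; the function merely shows that the mapping is mechanical.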
Summary Long random passwords are the best defence against brute-force attacks. Generate a random 30 character password for your disk encryption and learn it by repeated typing. Generate a separate password for each website that needs one and store it in the browser.]" } , { "href": "/2012/11/concept-anonymity-online-and-offline-part-i/", "title": "Anonymity - Online and Offline - Part I", "tags": ["anonymity"], "content": "[This series explains the theory of Anonymity and what factors influence anonymity in online communication and offline interaction. The goal is to provide the necessary background information to make educated judgements on the effectiveness of methods to increase anonymity. Let us start out with a definition of the term and then explore its implications: Anonymity is the degree of uncertainty in relating a person to an event, action or property. Anonymity is a problem of knowledge: it deals with the certainty or assurance an observer has for assigning information to a person – such as a person’s connection to an event or action, or a property of a person such as his name. The assigning of information to a person is called “attribution”, and the information in question is the “attribute”. The assurance of attribution is expressed in the “anonymity set”, the group of potential candidates each of which the attribute could be assigned to [The members of an anonymity set are also called “elements of an anonymity set”. We chose the term “member” here because it is less de-humanizing though technically improper]. The bigger the anonymity set, the less certain an attribution is and the more anonymity exists for its members. In the process of attribution the observer tries to decrease the anonymity set by applying deductive and inductive reasoning and by discovering properties that make certain members better or worse candidates for assigning the attribute. 
A property that makes a member of the set a more likely candidate for attribution is called an “unpooling property”, while a property that makes the member a less likely candidate is called a “pooling property”. It is important to keep in mind that any new information learned by the observer can influence the make-up of the anonymity set and thus the attribution – even when this process of learning and applying spans considerable amounts of time. Each change in knowledge about any member of the anonymity set also changes the certainty of attribution for all other members: the discovery of a pooling property of one member increases the likelihood of attribution to any other member, while the discovery of an unpooling property of one member decreases the likelihood of attribution to any other member. This way the anonymity set is repeatedly shrunk until the observer can assign the attribute to a person with a satisfying certainty. Attribution has become “plausible”. The method of reaching attribution by repeatedly decreasing the anonymity set is called “drill-down”. The above shows that anonymity is never an absolute; there is always a probability of attribution for each member of the anonymity set. Also, attribution is rarely absolute and strongly depends on the certainty required for the case in question. Even in such crucial instances as criminal investigations attribution is never achieved with 100% certainty, but only with “sufficient” plausibility. Example Let us apply the above to a little story to make it easier to understand: A late evening in winter, a family – mother Hillary, father Mitt, son Ron and daughter Sarah – sits in the living room eating various cookies from a jar. When only a single peanut cookie is left in the jar, the mother leaves the room saying “Do not eat that cookie, I want to give it to our neighbor.” After a few minutes the mother comes back and finds the cookie jar empty. 
Mother Hillary asks: “Who took the peanut cookie from the jar?” Hillary has become the observer, and the attribute to assign is “took the cookie from the jar”. The anonymity set is father Mitt, son Ron and daughter Sarah, each of them equally likely to be the thief: a 1⁄3 probability for each. In the first round of drill-down, Hillary notes that all three suspects have cookie crumbs all over them. This does not make any of them a more likely thief than the others – the crumbs are a pooling property. The probability for each remains at 1⁄3. Second, she notices that the hands of father Mitt are far too large to fit into the cookie jar; this makes him less likely to be the thief (another pooling property) but does not finally exclude him. Hillary changes the probabilities to 1⁄5 for Mitt, 2⁄5 for Sarah, and 2⁄5 for Ron. Third, she remembers that her daughter Sarah is heavily allergic to peanuts (a strong pooling property) while her son Ron likes peanuts a lot (an unpooling property). Due to Sarah’s allergy, she is excluded from the anonymity set, and Ron’s probability of being the thief is increased: Mitt 1⁄3, Ron 2⁄3, Sarah 0. Finally, Hillary is pretty certain that her husband Mitt does not want any trouble with her, again reducing the probability of him being the thief: Mitt 1⁄4, Ron 3⁄4. Mother Hillary now grumbles at her son Ron, assured enough that he was the thief. Of course, this line of reasoning does not guarantee that Ron is the culprit, but the anonymity set was reduced sufficiently for Hillary to risk having to apologize to her son in the unlikely case that she was mistaken. After having shown the theory of anonymity at work in an example, we will explore more complex and realistic applications in the next parts of this series – and what Thomas Bayes has to do with it. Important concepts: Anonymity is the degree of uncertainty in relating a person to an event, action or property (the attributes). 
The opposite of Anonymity is attribution. The measure of Anonymity is the size of the anonymity set. Anonymity of a person is reduced through the discovery of unpooling properties for that person and the discovery of pooling properties of other members of the anonymity set. Anonymity and attribution are not absolutes but relative probabilities. Things to come… Part II: Online Anonymity This part explores how much anonymity can be expected online and how anonymity is reduced by everyday technologies used in Internet communication. Part III: Offline Anonymity Here we apply the theory of anonymity to offline interaction. Part IV: Lessons for Anonymity Some lessons have been learned that can help to improve anonymity in general, both online and offline. Part V: Concepts for increased online Anonymity The theory of anonymity applied to online communication and what methods can be used to increase anonymity.]" } , { "href": "/2012/11/news-dhs-will-scan-payment-cards-at-borders/", "title": "News: DHS will scan payment cards at borders", "tags": ["dhs", "fincen"], "content": "[The U.S. Department of Homeland Security (DHS) will scan payment cards at borders: Travelers leaving or entering the U.S. have to declare aggregated cash and other monetary instruments exceeding $10,000. Under a proposed amendment to the Bank Secrecy Act, FinCEN (Financial Crimes Enforcement Network) will also add the value of prepaid cards to this. The DHS is developing advanced handheld card readers to differentiate between a credit card, a debit card, and a prepaid card. Credit cards and debit cards need not be declared. Intangible Bitcoin brain wallets remain safe. 
Source: Forbes]" } , { "href": "/2012/11/secure-and-professional-bitcoin-otc-exchanges/", "title": "Secure and professional Bitcoin OTC exchanges", "tags": ["bitcoin", "otc", "tradecraft"], "content": "[The article Necessary conditions for the long-term success of Bitcoin has shown why widespread availability of over-the-counter (OTC) Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. This article will explain how to exchange Bitcoin OTC securely and professionally. It should be of interest to Bitcoin users who want to get their coins anonymously from OTC exchangers and to people who want to earn a second income as Bitcoin OTC exchangers in the counter-economy. If you are dealing Bitcoin on the OTC market you have to consider two kinds of enemies: the state and evil customers (for example, fraudsters). To deal securely you have to mitigate the corresponding risks. That is, you have to drive up the cost of a successful attack and make it unlikely. The techniques for risk mitigation can be divided into two categories: secure IT infrastructure (privacy FTW!) and tradecraft. Secure IT for OTC exchangers OTC exchanges should be arranged completely online. To do that, you should get your secure IT infrastructure and privacy basics down: use email encryption (GnuPG), use hard disk encryption, use IP address anonymization (for example, with I2P/Tor/VPN), and use (multiple) pseudonyms. To arrange the OTC deal online you can use websites like bitcoin-otc.com or localbitcoins.com. You should agree on the price and the amount beforehand and limit the transaction size. In a single transaction, deal only what you could afford to lose. The actual face-to-face meeting only finalizes the deal; there must be no deviation from the agreement! If you deviate from the agreement, the probability of fraud rises sharply. 
The digital arrangement of OTC exchanges protects against the state: you produce less evidence and you make it harder to prove how much Bitcoin you dealt in the past. If you exchange a lot, make sure you are using multiple pseudonyms. Tradecraft for OTC exchangers Tradecraft is skill acquired through experience in a (typically clandestine) trade. In the Bitcoin OTC business, tradecraft is about the methods, customs, and protocols to secure and conceal your transactions. In general, you should always meet in public places during the day (for example, a café) to reduce the probability that you’ll get robbed. Keep in mind, though, that meeting during the day makes surveillance easier. During a transaction, the money is kept or placed on the table until the Bitcoin are transferred. Your Bitcoin client should only hold the needed amount of Bitcoin. Once you have received the money, make sure that you don’t leave the protected public place with the money, to avoid getting robbed after the deal! There are several methods to do that: Brush: You pass the money to a second person unnoticed (beware of the toilet, you can easily get robbed there). Such a second person can also spot problems in advance and warn you if necessary. Drop/cache: Use a secure place to store the money, for example a secure mailbox or door where you can put the money. Deposit the money: Go directly to the bank and deposit the money. In that case, you have to think of your cash card, which a robber could use to force you to withdraw the money. You could send the card back by mail or, in case you are already being followed, type in the wrong PIN three times in a row to get rid of the card at the ATM. Safe deposit box: Deposit the money there, but make sure that accessing the box requires personal identification, rendering robbery attempts useless. Next level OTC If you follow the advice given above you are already in pretty good shape. 
But if you exchange a lot (in terms of amount or number of transactions), you should take your OTC game to the next level. The best way to do that is to deal in teams of at least two persons. If you have at least one partner, you can use the brush technique described above to get rid of the money you receive after the deal. You can also separate the buying of Bitcoin from the selling of Bitcoin: one team member exclusively sells Bitcoin and the other one exclusively buys them. In most jurisdictions, if Bitcoin is not considered a currency, this is just simple selling/buying and not money changing. If Bitcoin is considered a currency, you could use goods of exchange (gold or silver) instead of cash for exchanges. Be professional A professional dealer has professional prices, because professionalism has its cost. Make sure that you don't cut corners to lower your costs; this will defeat you in the long term. If necessary, explain the benefits of a professional dealer to your customers. In my opinion, if your fee is significantly less than 5%, you are either dealing very large amounts or you are fooling yourself about your security measures. Risk rises with repetition and quantity; make sure you mitigate it appropriately. Let's deal Bitcoin in the OTC market securely and professionally — for fun and profit! The content of this article was presented at the 2012 Bitcoin conference in London [slides].]" } , { "href": "/2012/11/necessary-conditions-for-the-long-term-success-of-bitcoin/", "title": "Necessary conditions for the long-term success of Bitcoin", "tags": ["agorism", "bitcoin", "counter-economy", "crypto-anarchy", "otc", "silk road"], "content": "[In this article I present my answer to the question: What does Bitcoin need to succeed in the long run? Before we consider the question, let's put Bitcoin into the wider context of the counter-economy. The Counter-Economy The counter-economy (a.k.a. 
the informal economy) in general is all economic activity which is not fully regulated, taxed, or controlled by the state. In its simplest form it is a lemonade stand which operates without a license; it partly includes a mom-and-pop store which optimizes its taxes by running some of its business off the books; and in its dark corners you can find completely separate marketplaces like Silk Road. The counter-economy is no small feat: if you combine all the black markets of the world, you get a 10 trillion US$ economy, second only to that of the United States of America. In many developing countries it already comprises large parts of the economy and it is growing faster than the officially recognized gross domestic product (GDP) [The Shadow Superpower, Foreign Policy, October 28, 2011]. The term counter-economy in a more specialized meaning is also used in Crypto-Anarchy and Agorism. Agorism is revolutionary market anarchism. In a market anarchist society, law and security would be provided by market actors instead of political institutions. Agorists recognize that this situation cannot develop through political reform. Instead, it will arise as a result of market processes [agorism.info]. A good introduction to Crypto-Anarchy is the following quote from the Crypto Anarchist Manifesto published by Timothy C. May in 1992: Computer technology is on the verge of providing the ability for individuals and groups to communicate and interact with each other in a totally anonymous manner. […] These developments will alter completely the nature of government regulation, the ability to tax and control economic interactions, the ability to keep information secret, and will even alter the nature of trust and reputation. One has to note that Crypto-Anarchy is not a philosophical utopia, but the attempt to shape life and society in the presence of disruptive technologies. 
The corresponding technologies have already arrived and we are facing a great divide: we will either live in the total surveillance state or in a Crypto-Anarchist libertopia. So what role does Bitcoin play in this context? A free society needs a free market and a free market needs sound money. Bitcoin is money with good properties: it is pseudonymous, there are no frozen accounts in the Bitcoin system, it doesn't allow charge-backs (a big problem for merchants accepting credit cards), and it is very cheap and fast to transfer. As such, the use of Bitcoin is a huge advantage compared to a barter or cash-only economy, because developed economies need money transfer, at the very least for B2B transactions. Three hypotheses for the long-term success of Bitcoin So what does Bitcoin need to succeed in the long run? In short, it needs no state, no banks, and OTC. The three hypotheses in more detail: The Bitcoin community should not try to get legality for Bitcoin, and we should not ask the state to resolve conflicts in the community. The Bitcoin community should not focus on interoperability with the traditional banking system. Widespread availability of over-the-counter (OTC) Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. Let me explain the reasoning behind these hypotheses. Public choice theory in general, and plain common sense, states that people will do what is in their self-interest. This includes politicians, bankers, and cops. It is very important to fully grasp this simple truth: people do what is in their interest and you cannot assume that your interests equal their interests. They are usually not the same. There is no such thing as people working for the common good. Even people who are supposedly helping others selflessly are actually helping them in order to live in accordance with their own value system. 
No state The state is a regional monopoly of force which extracts resources (usually money) from its citizens to (a) mainly finance itself, its wars and its surveillance apparatus and (b) use the rest to provide so-called services which could be provided better and cheaper by the free market. These services are usually used as a justification for the existence of the state, but the real reason for its existence is the easy money the state's money receivers can get. The money is taken away from the productive citizens via taxation and the monopoly on the money supply (via inflation your money becomes worth less). Of the two, the latter strategy is better, because it is harder for the ignorant masses to notice. If you combine the institution of the state and its inherent interests with the conclusions from public choice theory and the Bitcoin system, you are looking at the potential for a lot of trouble. Bitcoin prevents inflation (there is no inflation in Bitcoin once all coins have been mined) and helps tax evasion (it is hard to regulate and control). It is potentially life-threatening to the state, because it strikes at the root of state financing. It follows that the state will fight Bitcoin heavily once it realizes this. In my opinion it is absolutely ludicrous to think that the state will embrace Bitcoin. The most likely scenario is that the state will try to close down Bitcoin altogether. If that is not possible, the state will try to change Bitcoin in a way that allows know-your-customer (KYC) regulations to be implemented more easily in the system. Just wait and see what kind of discussions we will get in the Bitcoin community once the state cracks down harder on Bitcoin exchangers and businesses, and actors like the Bitcoin Foundation try to remedy the situation by working together with state agencies to make Bitcoin more regulatory compliant. 
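The fixed-supply claim above ("there is no inflation in Bitcoin once all coins have been mined") can be verified with a quick sketch of Bitcoin's emission schedule, which is not spelled out in the article: the block subsidy starts at 50 BTC and is cut in half every 210,000 blocks, so the total supply converges to just under 21 million coins.

```python
# Sketch of Bitcoin's emission schedule. The block subsidy starts at
# 50 BTC (5,000,000,000 satoshi) and halves every 210,000 blocks.
# The consensus rule uses an integer right-shift, so the subsidy
# eventually hits zero and the supply is hard-capped.

BLOCKS_PER_HALVING = 210_000
INITIAL_SUBSIDY = 50 * 100_000_000  # in satoshi (1 BTC = 1e8 satoshi)

def total_supply_satoshi() -> int:
    total = 0
    era = 0
    while (INITIAL_SUBSIDY >> era) > 0:
        total += BLOCKS_PER_HALVING * (INITIAL_SUBSIDY >> era)
        era += 1
    return total

print(total_supply_satoshi() / 100_000_000)  # just under 21,000,000 BTC
```

Because the subsidy reaches zero after a finite number of halvings, no policy decision can expand the money supply, which is exactly the property the article contrasts with central-bank inflation.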
In my opinion, this shows why the Bitcoin community should not try to get legality for Bitcoin and should not ask the state to resolve conflicts in the community. All this will do is drive more unwanted attention to the Bitcoin ecosystem. The self-interest of the state prevents legality and regulatory acceptance of Bitcoin in its current form. History lesson: e-gold E-gold provides an important history lesson of the “Those who cannot remember the past are condemned to repeat it” (George Santayana) category. E-gold was a digital gold currency which existed between 1996 and 2009 and allowed the instant transfer of gold ownership. In 2008 the company reported more than 5 million accounts. A flourishing ecosystem existed around e-gold. In the end, exchangers were attacked and closed down due to regulatory problems. E-gold itself was indicted for money laundering and the operation of an unlicensed money transmitting business. The indictment happened although e-gold itself had tried to get the corresponding license earlier and was told that it was not necessary. Sounds similar to the situation with Bitcoin right now? The game is rigged, folks! You cannot win if you are playing by the rules. No banks Banks are major beneficiaries of fractional-reserve banking and can borrow cheaply from the central banks. They operate in one of the most heavily regulated industries, which results in huge barriers to entry and not much competition. This leads to large profits, for example from transaction and credit card fees. Financial service providers like PayPal, Western Union and MoneyGram also have very large fees, because the regulatory hurdles reduce the amount of competition and result in large costs. Since low-income foreign workers who send money home are the largest customer base for such services, the high fees are actually a tax on the poor. Bitcoin threatens these profits and poses a regulatory risk. 
Therefore, Bitcoin exchangers will be attacked by competing financial institutions (remember TradeHill as such an example). A wildly successful Bitcoin system is against the self-interests of the established financial industry and it makes no sense for them to deal with the corresponding regulatory challenges in the long run. If the Bitcoin economy depends on the traditional banking system, it is doomed to fail. Just imagine what would happen to the Bitcoin economy if Mt.Gox, which currently is responsible for about 80% of all Bitcoin exchanges, suddenly had to close down. In my opinion, this supports the second hypothesis: The Bitcoin community should not focus on interoperability with the traditional banking system. The case for OTC We have now established that, from a self-interest standpoint, the state and the traditional financial industry are naturally opposed to Bitcoin. To ensure the long-term stability and success of the Bitcoin economy we need a completely separate system of exchange, a network of over-the-counter (OTC) exchangers. An OTC exchange happens when two people meet face-to-face to trade Bitcoin for cash (or gold/silver). OTC does not mean sending cash in the mail or making a wire transfer. Such a widespread network of OTC exchangers is the system most resilient against state attacks, because it is heavily distributed and the banking system is skipped entirely. This reasoning supports the third hypothesis: Widespread availability of OTC Bitcoin exchangers is crucial for Bitcoin to succeed in the long run and give us more freedom. The how-to Secure and professional Bitcoin OTC exchanges gives practical advice on OTC. 
The content of this article was presented at the 2012 Bitcoin conference in London [slides].]" } , { "href": "/2012/11/the-treasure-which-is-privacy/", "title": "The Treasure which is Privacy", "tags": ["anonymity", "privacy"], "content": "[The Philosophy of Privacy Extremism emphasizes that privacy should be maintained in all situations; that if in question, privacy should be given preference, unless sufficient arguments to the contrary apply to the specific situation. Privacy serves as a necessary condition for engaging in meaningful and truthful interpersonal relationships. Furthermore, privacy is a necessary condition under which a person can develop a self and embrace individual responsibility for the decisions and actions that result from it. A denial of privacy, to the contrary, establishes and maintains a lack and loss of esteem, respect and value in and for things and other persons. Privacy should therefore be the strong standard for personal behavior, normative for those that strive towards positive personal human development. The Treasure which is Privacy The first response to someone who makes an effort to protect his privacy is often “I have nothing to hide because I have nothing to fear” – usually accompanied by an expression of righteous pride or the blissful presentation of carelessness. As with most routine responses that have become maxims of contemporary society and proverbs uttered in reply to trigger words, this statement is more informative about the speaker than about the addressee or the subject of discussion. More often than not its underlying meaning should be rephrased to read “I am uneasy, maybe even afraid, around people that hide something”. As such it carries the implied request to anyone hearing it that they shall stop covering and hiding things to relieve the speaker of his uneasiness. But even when taken at face value, the above sentence communicates that it is the lack of fear that is the speaker's justification for not protecting his privacy. 
Apart from the simple rejection of this statement as being false in the light of existing and relevant threats, and the reference to the blissfulness of ignorance, it is the exclusiveness of fear as the proposed reason for privacy that warrants consideration. The reference to fear in this context should first be understood as an instrument of rhetoric rather than an adequate choice of words in balanced and clearheaded reasoning. Fear refers to the emotional response to existential danger and implies the lack or loss of courage to confront the danger. The sentence under analysis should thus be rephrased to read “You hide things because you lack courage in the presence of an imagined existential threat.” It is therefore a double accusation of both cowardice and delusion. Again, it is not the focus of this analysis to show that protecting privacy and admitting to doing so requires a bit more courage than to repeat common proverbs, or that certain dangers exist that can be effectively answered by privacy. Nor does it need emphasis that those who protect the privacy of others often do so in the face of opponents that go a long way to ruin the names, property, freedom and sometimes even health and life of those courageous guards of privacy. Instead it should be pointed out that there are far more reasons to protect one's privacy, and that of others, than fear of losing freedom or life or even good reputation. It is interesting to note that an old synonym for ‘fear’ could be awe, admiration or astonishment, even respect. Worded this way, one might read the above sentence as “I hide nothing, because I admire and respect nothing.” This way it becomes clear that the denial of privacy is often nothing but a lack of things that are valued, and the demand that others should not value something themselves. It thus contains the claim that nothing should be special and set apart. Which brings us to the original meaning of the word ‘private’. 
In Latin it refers to persons and things that were set apart from what would be available, subordinate and used by all persons – the public. Thus giving up one's privacy, as in the sentence we discuss, entails nothing else but the transformation of the speaker into a not particularly important and indistinguishable fragment of the mass. If the speaker is really not in fear of anything, it would primarily mean not fearing to become a nothingness in the grey mass – just a grain of dust in the crowd. It is safe to say, then, that the speaker does not value and respect himself as an individual human person, or that he cowardly fears to be recognized as such. Leaving the analysis of the original statement, one should now focus on the negation of the privacy opponent's reply while keeping its completed meaning in mind: “Because I value and respect some things, I hide some things.” Three areas shall serve as examples of preserving value through hiding: complex minority opinions, relationships between persons, and the human person itself. Complex opinions and bodies of knowledge that are valued highly by their bearers are often only communicated under strict conditions to prevent misunderstanding, misrepresentation, confusion and disintegration. This is especially useful if the opinion is only held by a minority or if the potential audience lacks the necessary context of knowledge to integrate and consider the new information. The strict conditions under which the information will be communicated serve herein as the boundary between public and private. The more complex, valuable and different from general knowledge the new information is, the stricter the conditions of communicating it become. This can be seen in various areas. Personal political or moral opinions, especially if they are held only by a minority, will often not be communicated in situations that only allow superficial or time-constrained conversation. 
These situations do not allow the speaker to present and argue for their position, and thus risk the information being misunderstood and misrepresented later. The consequences of this disintegration of information can be witnessed in the effects of hearsay that concerns itself with minority groups and opinions, leading to widespread false myths that often cannot be corrected afterwards because they have become part of common knowledge. Thus it is often favorable to conceal personal opinion and deprive the public of correct information if otherwise the reinforcement of false information or the support of slander is likely. The quality of public and political debate as well as the celebrity and gossip culture serve as evidence for this. Numerous further examples of the protection of ideas through hiding exist in history and shall only be mentioned for further reference: Pythagorism and Platonism, the Apologists of early Christianity, the Orthodox Church liturgy, natural science and political societies of the Enlightenment including Bacon and Newton as members, Judaism, early Socialism. Privacy in this regard serves to preserve the integrity, and often survival, of information, ideas and opinions. Another area of interest is privacy and the use of hiding for the sake of other persons. To understand what role privacy plays in the context of relationships between humans, it is necessary to be aware of what communication is. Communication is any act of a sender to convey information to a receiver. This involves forming signs – distinguishable and perceivable features – into signals – the message to be transmitted. The choice of signs and signals by the sender and their interpretation by the receiver depend strongly on the context: what both parties perceive about each other, themselves and their environment. 
Another part of this context is the estimation of how difficult a sign is to produce, which influences how truthful and intentional a signal (message) is perceived to be. A proverbial example of this is “to preach water and drink wine”. One immediately understands that abstaining from wine – which is more costly than consuming it – increases the credibility of the message (and resolves the otherwise apparent contradiction). Maintaining privacy, in its various forms of hiding, concealing and silence, is such an act of communication, a sign that carries a signal. The sign of privacy, as it shall be called for the sake of clarity, can carry a variety of signals that depend on the context of the communication, and it can be intended for a variety of recipients. In itself privacy is a signal that discriminates between various degrees of relationships, excluding some potential receivers from other intended receivers. It thus communicates which kinds of relationship the sender intends to have, which in turn communicates the sender's evaluation of the receiver. In blunt words, it separates the receivers into special and common people in the eyes of the sender. The “hijab” is an example which illustrates this well. Hijab refers to a veil worn by many Muslim women as soon as they enter marriageable age. It is always worn in public and only taken off if no non-related men are present, such as in exclusively female meetings or in the family circle. Her husband will be the only non-related man who will see her hair, thus keeping her hair private. The woman, if she chooses to wear the hijab, hereby communicates towards her husband and all other men that she chooses to have an exclusive intimate relationship only with her husband and that she values her husband as being of a special high value to her. It is a pledge of allegiance to her husband, and a separation of herself from availability to other men. 
As can be seen in this example, hiding becomes a tool to communicate a value perception and the status of a relationship in a discriminatory way. Similar signs exist in Western cultures as well. For example, the revelation of the family's secret recipe to the fiancée of a child serves as a sign of acceptance and inclusion into the family. Similarly, some topics of conversation are usually reserved for the close relationship between couples, or that of good friends. This is not a sign of uptightness but a tool to show and maintain the depth of a special and exclusive relationship that is built on mutually holding the other in high esteem. The opposite, divulging information indiscriminately, thus communicates that others are not held in high esteem and that the communicating party is unwilling or unable to come to different evaluations of others. Likewise, the sharing of information with the public, if this information was gained within a special relationship, should rightly be viewed as an act of betrayal, since it communicates that the thus damaged person is held in lower regard than the receiving masses, even if assured of the opposite. This hints at the reciprocity of these intimate relationships. Communicating information that is viewed as belonging in the private domain of friendship or another kind of deep and special relationship will also signal to the receiver that he should answer in an equally private manner, so as to return the esteem granted to him as well as to save the speaker from embarrassment. It is thus a matter of courtesy not to speak about private matters indiscriminately, since it puts the receiver into a potentially awkward situation. However, this does not only apply to situations that imply reciprocity. 
It speaks of equal disrespect of another person to make them part of an unasked-for communication of subjects that are hurtful, unpleasant or put the recipient into a situation where he is challenged to act – if only to escape his status of recipient. Instead, a communication that considers the reaction of others by using means of privacy signals to both intended and accidental recipients that the speaker harbors respect for them. This is even more true when the subject constitutes a tempting or harmful one for the recipient. It shows utter disrespect if someone speaks of the exquisite taste and warm feeling in the throat when drinking an alcoholic beverage while a known dry alcoholic is addressed or present. It is as unwise to flaunt riches and have them lie around openly in the house, since this tempts the struggling housekeeper to steal out of impulse, or to communicate without regard for potentially causing conflicts of interest in the recipients. Instead of hiding nothing, it is the hiding of information and actions that is grounded in valuing and caring for others and truthfully communicating respect and high esteem. To conclude the discussion of privacy for the sake of others, one should also consider the effects of actions on observers. As mentioned before, the interpretation of signs as signals depends, among other things, on the receiver's perception of the sender. This becomes relevant for the question of privacy especially if the sender is perceived as a role model or bad example. Here the behavior is a sign easily interpreted by the observer as a sanctioning of the action, or its proscription, if the action is not considered separately from the sender. Examples of this can be seen when bad actions of public figures are used as justification for one's own actions, when otherwise laudable behavior is viewed with suspicion when associated with persons of disgrace, or when people imitate celebrities even in their failures and bad judgement. 
For additional considerations on privacy for the sake of others, an old book shall be mentioned as reference: “Ueber den Umgang mit Menschen” by Freiherr von Knigge. The last area to examine here as an example of preserving value through hiding is the human person itself. At the core of this matter lies the question of what makes a person a “self” instead of “an-other”, and how this self can refer to itself over time, as in “I myself went to the park yesterday”. What is this “I” or “self” we refer to, and how does it come to be what it is instead of being something else? There is no current consensus on how to answer these questions, nor should it be the task of this text to present and weigh the different views, nor to fully develop a theory of personhood on its own. Instead it will touch on the process of the change of a person. How has a person become what it is now, and how will it become what it will be in the future? How does the process differentiate the self from another? The popular answer is that genes, upbringing and society are the shapers of persons, in different proportions depending on whom one asks. Nevertheless, individuals are treated as moral agents, acting by decision and responsible for the decisions made. It is a person who is punished for a crime, and not schools, parents, evolution or society. It is persons who are persuaded by others, asked to consider moral and ethical categories, respected or disgraced for individual actions. Clearly it is understood by most that a person is not shaped exclusively by that which is not part of him, but also by himself. Certainly genes, upbringing, society and the situational environment are influences, but it is also the self that forms the self. This self-forming takes place with every decision made, changing the status, the shape of oneself, the individual path of the person through life. 
Some might argue that every decision made is already and exclusively determined by the previous state of the person and its environment, and that as such no real decision is made, because there is no choice but only the effect of the cause which is the state of the universe. Instead of refuting the deterministic and probabilistic denials of free will as being ultimately self-contradictory, it shall be asserted that free will – non-deterministic and non-probabilistic – is a required fact if rationality, ethics and morality – all three – are in any way justifiable. However small free will, that hard-to-grasp grain that tips the scales of our decisions, might be, it plays the central role in the person becoming a Self. For this to be effectually true, the influence of free will in the person's decisions must be maximized so that it is will that dominates the decision in freedom. At this point privacy achieves its ultimate importance. Only in privacy can a decision be contemplated in separation from the influence of other persons, and the own person, the self, actualized freely. Hiding in privacy removes the tainting of the decision through outside preselection of facts, outside censorship, the promise of reward and punishment by other humans, hubris, pride and shame. Here honesty towards one's self is possible. It is only through and in privacy that a potential equilibrium of choices can be discovered, just to be resolved through the action of the free will of the Self. If one is in any way determined to work on one's own self and aware of the responsibility this entails, then privacy in this regard must be maintained. Even by giving up on developing one's self, a choice has been made, with the responsibility for it as its consequence – except that this choice is to be a product determined by others instead of a self. 
A disregard for maintaining privacy in this area thus equals utter disrespect for the Self one is, and for the potential selves one could become. It is the denial and defiling of oneself as an individual person. In conclusion the proposition is that: Only in privacy does the “self of now” transcend itself to actualize “the self of the future” through every decision made, integrating the “self of the past” fully and becoming more of a Self by removing the influence of an Other. In passing, it should be noted that the practice of hiding things because of their value, especially if it is the hiding of information about something, must be a subject of consideration as well. One cannot argue for using lies as the method of concealment, since this would often result in doing a disfavor to the thing valued and respected. Nor can a life of lies result in a positive development of the Self. Instead it is the concealing of information, without replacing it with a false statement communicated as the whole truth, that should be chosen as a means. This, however, presents another problem: as much as the presence of a sign can be a signal, its absence can be one too. Indeed it is the presence of some signs that can signal the meaning communicated by other signs. Selective privacy might as such communicate the content of what should have been concealed. For example, if one is asked for one's favorite color and presented with a series of potential answers, it is the denial of the incorrect answers and the silence towards the correct answer that communicates what was intended to remain hidden. It should thus be noted that the hiding of one thing necessitates the hiding of other things of the same context. As a means thereof it is preferable to keep silent instead of lying, as stated above. 
So far, the privacy opponent's reply “I have nothing to hide because I have nothing to fear” has been shown to be a rhetorical trap, or at least an insufficiently contemplated cultural maxim. It has also been shown that there exist good reasons to embrace privacy, hiding and concealment. However, this text cannot be complete without some short answers to those who identify privacy and secrecy as roots of evil in society that erode every social and political system and relationship. Their primary argument is that privacy encourages and facilitates all kinds of corruption and abuse of power. Furthermore, they claim that privacy results in the disintegration of the interpersonal bonds that hold society together. To the first, two replies shall be given: For one, it has long been understood that abuse of power and corruption are systemic to power and delegation themselves, and that transparency and accountability are mere interventions to limit the spread of these flaws, not remedies for the root of the problem. Instead of attacking privacy as being the problem, one should think about alternative methods of cooperation and organization that are free of these negative systemic tendencies in themselves. On a more shallow note it should be pointed out that the people active in positions and offices have given up their status of private persons in exchange for being leaders and representatives of the public – the masses. Instead of developing themselves and their relationships they have chosen to become instruments of the public, or at least they pretend as much. How can such an argument against privacy then be used against the privacy of people that remain private instead of public? This appears to be fallacious. Regarding their second argument, the “disintegration of interpersonal bonds that hold society together”, it should be understood both what “society” is, and what “interpersonal bonds” may refer to. 
Society is not a collective of interdependent persons connected by shared emotional states and intimacy; that would be what is commonly referred to as “family”. Instead, society is the cooperative organization of persons that is held together by norms of interaction and shared understanding of necessary and useful methods of cooperation. It is thus the actions toward society in the realm of society, and not the totality of actions and knowledge, that constitute these bonds in practice. The partaking in society is thus a voluntary, freely chosen and limited activity by each of its members for the purpose of cooperation with all others in society. Privacy only becomes erosive to societies that intend to regulate and organize even those individual activities that neither rely on nor influence all of society. These societies are commonly identified with totalitarianism. Instead of relying on a bonding through a shared experience of weakness and lack of self, or directing society to be bound by the smallest – and lowest – common denominators, a society of privacy allows for the progression of all members to actualize higher potentials without replacing the individual person with the collective Other of society. Privacy thus nurtures societies that strive for improvement. This might even hold the potential for individual actors to integrate justifiable norms of social interaction into their Selves through independent contemplation and decisions instead of understanding these norms as being imposed by an Other. Does this hold the promise of social interaction becoming more reliable and truthful? Answering in the affirmative seems more justifiable than the negation. However, one warning against privacy is appropriate. Be it a personal lifestyle or a culture of privacy, both demand personal improvement from each partaking individual. This results from privacy allowing for, and supporting, discriminatory relationships and the decoupling from the influence of others. 
Privacy thus removes many opportunities to blame others and to excuse oneself in light of personal error. Nevertheless, privacy also allows for many justified second chances and true forgiveness. In summary it can be concluded that maintaining privacy and hiding things serve well in preserving and expressing the values one attributes to things and other persons. Furthermore privacy is a necessary condition for the continual development of the Self and the sustentation of truthful and honest interpersonal relationships by means of communicative discrimination. In turn, the denial of privacy must be realized to be unjustified and even harmful. The presented arguments for the allegedly negative impact of privacy have been found to be without merit, or even to support the strong use of privacy in society. The conclusion drawn is therefore that opposition to privacy as in “I hide nothing because I have nothing to fear” cannot be a default behavior. Instead the use and support of privacy in the form of “Because I value many things, therefore I hide many things” should be the standard unless it clearly needs to be abandoned for specific situations, if at all.]" } , { "href": "/2012/11/encryption-algorithms-a-primer/", "title": "Encryption algorithms: a primer", "tags": ["algorithms", "encryption"], "content": "[Encryption algorithms are used to secure the content of communications and stored data. An algorithm in general is a recipe for calculations which can be performed automatically by a computer. An encryption algorithm (also called a cipher) encrypts a readable plaintext into an unreadable ciphertext. A cipher can usually also perform the reverse operation of decrypting an unreadable ciphertext into a readable plaintext. For encryption and decryption a cipher needs a key. The security of a cipher depends on the secrecy of the used key. 
Two general categories of encryption algorithms exist: Symmetric encryption algorithms: the key used for encryption is the same as the key used for decryption. Asymmetric encryption algorithms: the key used for encryption differs from the key used for decryption. If you want to use a symmetric cipher for communication you are facing the key exchange problem: The key needs to be exchanged over a secure channel, otherwise the encryption will be useless. Because such a secure channel often does not exist, asymmetric encryption can be used to solve the problem. Asymmetric encryption (also called public-key encryption) uses key pairs consisting of a public key and a private key: Something encrypted for a given public key can only be decrypted by the corresponding private key. The reverse operation is a digital signature: Something encrypted (signed) by a private key can only be decrypted (verified) by the corresponding public key. Public-key encryption solves the key exchange problem, because the public keys can be exchanged via an insecure channel. But to prevent man-in-the-middle attacks it still needs to be verified that the public key has not been tampered with by a third party. In a man-in-the-middle attack an active eavesdropper makes independent connections with the victims and relays messages between them, making them believe that they are talking directly to each other over a private connection, when in fact the entire conversation is controlled by the attacker. The verification can be done by comparing the fingerprints of the public keys on a different channel (for example, on the phone) or by employing a web of trust. In a web of trust, public keys are signed by other parties to make the trust in them transferable. Asymmetric ciphers are often computationally expensive, especially for long plaintexts. In such cases hybrid encryption can remedy the problem. 
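The fingerprint comparison described above can be sketched in a few lines of Python. This is a minimal illustration using the standard library's hashlib; the grouping format is an assumption for readability, not the fingerprint format of any real tool (GnuPG and others define their own):

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Derive a short, human-readable fingerprint from a serialized
    public key by hashing it and grouping the leading hex digits."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Four groups of four hex digits, easy to read over the phone.
    return ":".join(digest[i:i + 4] for i in range(0, 16, 4))

# Both parties compute this over the key they received. If the values
# they read to each other on a different channel match, the key was
# not swapped by a man in the middle.
print(fingerprint(b"-----BEGIN PUBLIC KEY----- ..."))
```

Any tampering with the key changes the hash completely, so even one differing group exposes the attack; real tools display far more digits to make forging a colliding key impractical.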
In a hybrid cipher a unique session key is generated which is then used to encrypt the plaintext with a symmetric cipher. Afterwards the session key is encrypted with an asymmetric cipher and sent together with the ciphertext to the receiver. Because of the properties of symmetric and asymmetric ciphers explained above, for applications like hard disk encryption typically a symmetric cipher is used (no need to exchange a key). For secure communication via email or chat an asymmetric or hybrid cipher is usually the better solution, because it makes the secure key exchange simpler. An important encryption concept is the key length (also called key size). Usually larger key lengths are better, but key lengths cannot be compared between algorithms. For example, the symmetric AES algorithm uses key lengths of 128, 192, or 256 bits. The asymmetric RSA algorithm typically uses key sizes between 1024 and 4096 bits, but that doesn’t mean that RSA is more secure than AES. Short key lengths are problematic, because they are easier to attack with brute force. In a brute-force attack a fast computer is used to try out all possible keys until the correct one is found. It is important to consider the relation between the length of a user-chosen password and the corresponding key length of the underlying encryption algorithm. For example, if you use the rather secure AES algorithm with a key length of 192 bits to encrypt your hard disk, but the corresponding password has only a length of 32 bits, you are vulnerable to brute-force attacks.]" } , { "href": "/2012/11/global-spying-realistic-probabilities-in-modern-signals-intelligence/", "title": "Global Spying: Realistic Probabilities In Modern Signals Intelligence", "tags": ["analysis", "surveillance"], "content": "[A paper on the probabilities of global internet surveillance, presented in 2010 at the Defcon conference in Las Vegas, proving to be close to what has become common knowledge today. 
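The password-versus-key-length relation from the primer above can be made concrete with a small calculation. A sketch in Python; the alphabet sizes are illustrative assumptions (26 lowercase letters, 95 printable ASCII characters):

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Effective key space contributed by a user-chosen password:
    alphabet_size ** length possibilities, i.e.
    length * log2(alphabet_size) bits."""
    return length * math.log2(alphabet_size)

# A 6-character lowercase password yields only ~28 bits of entropy,
# so a volume encrypted with AES-256 but unlocked by such a password
# is no stronger against brute force than a 28-bit key.
print(round(password_entropy_bits(6, 26), 1))  # ~28.2

# Reaching 128 bits with the full printable ASCII alphabet (95
# characters) requires roughly 20 password characters.
print(math.ceil(128 / math.log2(95)))
```

This is why key-derivation schemes stretch passwords with slow hash functions: they cannot add entropy, but they multiply the cost of each brute-force guess.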
In this article, we will present insight into the realistic possibilities of Internet mass surveillance. When talking about the threat of Internet surveillance, the common argument is that there is so much traffic that any one conversation or email won’t be picked up unless there is reason to suspect those concerned; it is impossible that “they” can listen to us all. This argument assumes that there is a scarcity of resources and motivation required for mass surveillance. The truth is that motivation and resources are directly connected. If the resources are inexpensive enough, then the motivations present are sufficient to use them. This is visible in the economic effect of supply availability increasing the demand. The effect is that since it is more easily done, it will be done more readily. Another fault in the above argument is that it assumes that there is only all-or-nothing surveillance, which is incorrect. The paper can be downloaded here: Global Spying: Realistic Probabilities In Modern Signals Intelligence  ]" } , { "href": "/2012/11/libertopia-conference-2012-digital-tradecraft/", "title": "Libertopia Conference 2012 - Digital Tradecraft", "tags": ["digital tradecraft", "tradecraft"], "content": "[The slides of Frank Braun’s talk “Digital Tradecraft” can be found here: Digital Tradecraft]" } , { "href": "/2012/11/libertopia-conference-theory-practice-of-black-market-business/", "title": "Libertopia Conference - Theory & Practice of Black Market Business", "tags": ["physical tradecraft", "tradecraft"], "content": "[Slides for Jonathan Logan’s talk at Libertopia 2012: Theory & Practice of Black Market Business.]" } , { "href": "/2012/11/bitcoin-conference-2012-london-slides/", "title": "Bitcoin Conference 2012 London - Slides", "tags": ["bitcoin", "otc"], "content": "[The slides for Frank Braun’s talk on the need of OTC exchangers for the Bitcoin economy can be found here: Bitcoin OTC]" } , { "href": "/2012/11/introducing-shadowlife-cc/", "title": 
"Introducing ShadowLife.cc", "tags": [], "content": "[ShadowLife focuses on Privacy – how to protect it and why it matters. We are committed to making information about privacy enhancing strategies and technologies accessible to non-experts and to giving practical advice on how to enhance one’s own level of privacy. Due to the inherent challenges of the subject – lack of publicly available information, complexity of the matter and widespread misinformation – ShadowLife adopts a policy of communicating clearly the quality of underlying information and our assurance thereof, of refraining from emotional and political language, and of referring to required context. Our team consists of people with a wide spectrum of experience ranging from computer security to open-source intelligence analysis to practical street smarts. What we suggest as practical solutions we have tested and applied ourselves. Nevertheless ShadowLife remains in a continual state of incompleteness. As such we rely on feedback and contributions from the community. ShadowLife publishes information in five different styles of presentation to reflect different approaches to information: News: Bullet-point condensed, time-relevant pieces of information which have been primarily researched by third parties. ShadowLife offers an aggregate of privacy-relevant news that is accessible in minutes, not hours, per day. Dossier: ShadowLife publishes background information on recurring content for reference. Concept: Content of mainly theoretical nature that serves as a foundational skill for privacy enhancing strategies and technologies. HowTo: Applying theoretical knowledge to practical solutions in a way that is accessible to all interested parties and not just specialists. Opinion: Analysis and commentary that cannot hold back on personal judgement.   
For more information on our perspective on privacy, please refer to the page Privacy Extremism.]" } , { "href": "/page/contact-us/", "title": "Contact us", "tags": [], "content": "[Write to us at contact (at) shadowlife.cc. (PGP/GPG: 0x18231c2ae18fa734. Available on many keyservers.) Donations: Thank you for your donations to Bitcoin address 1shadowRQqB4ui9xK2qPxC68ZjqZXLK1d.]" } , { "href": "/page/privacy-extremism/", "title": "Privacy Extremism", "tags": [], "content": "[The Philosophy of Privacy Extremism emphasizes that privacy should be maintained in all situations; that if in question, privacy should be given preference, unless sufficient arguments to the contrary apply to the specific context. Privacy is a necessary condition under which a person can develop his self and embrace individual responsibility for decisions and actions – it is the prerequisite for individual liberty. As such it is not granted but must be taken and protected vigilantly. Furthermore privacy proves to be essential when engaging in meaningful and truthful interpersonal relationships by making the social mask unnecessary and removing the need to keep up appearances, instead maintaining an environment of trust. A denial of privacy, to the contrary, establishes and enforces a lack and loss of esteem, respect and value in and for things and other persons. Privacy should therefore be the strong standard for personal behavior, normative for those that strive towards positive personal development. However, privacy is under constant attack. Digital surveillance, data tracking, big data analysis, biometrics and cameras augmented with facial recognition represent just the edges of a massive trend towards deep surveillance and control structures. Opponents of surveillance are faced with only a few options to deal with this trend: Aggressive and violent destruction of surveillance installations and technology, which is not an option for peaceful and respectful individuals. 
The political process, which however promises only slow and incomplete change towards more privacy – if any. It also requires the imposition of the individual will of the privacy defenders on the collective will of the political body – constituting a means of rulership which is not an acceptable choice for people that embrace individual liberty. Self-abandonment, while often the option realized through endless compromise, cannot be the goal of anyone conscious of his self-worth. This leaves only methods of self-protection that minimize data collection and surveillance without exclusively relying on third parties for protection. It is the privacy extremist’s choice to reduce the data available about him. Through this deliberate self-protection the individual enables himself to choose whose observation, judgement and social memory he wants to become part of and whom to exclude from relationships. Instead of a purely negative defense of privacy it becomes a positive means to shape relationships and express appreciation for others. Applied privacy extremism allows for new ways to emphasise relationships, esteem and openness in a selective and meaningful way by supporting the individualization of both the privacy extremist himself and the counterparts in his relationships. Lastly privacy extremism constitutes a well-mannered and unobtrusive behavior by which the usual shallow grasp for attention is minimized.]" } , { "href": "/page/about/", "title": "About", "tags": [], "content": "[ShadowLife focuses on Privacy – how to protect it and why it matters. We are committed to making information about privacy enhancing strategies and technologies accessible to non-experts and to giving practical advice on how to enhance one’s own level of privacy. 
Due to the inherent challenges of the subject – lack of publicly available information, complexity of the matter and widespread misinformation – ShadowLife adopts a policy of communicating clearly the quality of underlying information and our assurance thereof, of refraining from emotional and political language, and of referring to required context. Our team consists of people with a wide spectrum of experience ranging from computer security to open-source intelligence analysis to practical street smarts. What we suggest as practical solutions we have tested and applied ourselves. Nevertheless ShadowLife remains in a continual state of incompleteness. As such we rely on feedback and contributions from the community. ShadowLife publishes information in five different styles of presentation to reflect different approaches to information: News: Bullet-point condensed, time-relevant pieces of information which have been primarily researched by third parties. ShadowLife offers an aggregate of privacy-relevant news that is accessible in minutes, not hours, per day. Dossier: ShadowLife publishes background information on recurring content for reference. Concept: Content of mainly theoretical nature that serves as a foundational skill for privacy enhancing strategies and technologies. Listing of all Concept posts. HowTo: Applying theoretical knowledge to practical solutions in a way that is accessible to all interested parties and not just specialists. Listing of all HowTo posts. Opinion: Analysis and commentary that includes personal judgement. Listing of all Opinion posts.   For more information on our perspective on privacy, please refer to the page Privacy Extremism. Donations: Thank you for your donations to Bitcoin address 1shadowRQqB4ui9xK2qPxC68ZjqZXLK1d. ShadowLife.cc GnuPG/OpenPGP Key: 0x18231c2ae18fa734. ShadowLife.cc supports SSL/TLS: https://shadowlife.cc. 
SHA1 Fingerprint=12:2C:14:89:9B:E4:A8:93:FE:16:1E:11:03:D2:7C:E4:00:29:46:B4 ShadowLife.cc is reachable as a Tor hidden service: shadow7jnzxjkvpz.onion. ShadowLife.cc is reachable via I2P: shadowlife.i2p jme44x4m5k3ikzwk3sopi6huyp5qsqzr27plno2ds65cl4nv2b4a.b32.i2p Signed statement of addresses and keys.]" } , { "href": "/2013/02/", "title": "02", "tags": [], "content": "[]" } , { "href": "/2013/10/", "title": "10", "tags": [], "content": "[]" } , { "href": "/2012/11/", "title": "11", "tags": [], "content": "[]" } , { "href": "/2014/11/", "title": "11", "tags": [], "content": "[]" } , { "href": "/2012/12/", "title": "12", "tags": [], "content": "[]" } , { "href": "/2010/", "title": "2010", "tags": [], "content": "[]" } , { "href": "/2011/", "title": "2011", "tags": [], "content": "[]" } , { "href": "/2012/", "title": "2012", "tags": [], "content": "[]" } , { "href": "/2013/", "title": "2013", "tags": [], "content": "[]" } , { "href": "/2014/", "title": "2014", "tags": [], "content": "[]" } , { "href": "/2015/", "title": "2015", "tags": [], "content": "[]" } , { "href": "/2016/", "title": "2016", "tags": [], "content": "[]" } , { "href": "/2017/", "title": "2017", "tags": [], "content": "[]" } , { "href": "/2018/", "title": "2018", "tags": [], "content": "[]" } , { "href": "/2019/", "title": "2019", "tags": [], "content": "[]" } , { "href": "/2020/", "title": "2020", "tags": [], "content": "[]" } , { "href": "/tags/agorism/", "title": "Agorism", "tags": [], "content": "[]" } , { "href": "/tags/algorithms/", "title": "Algorithms", "tags": [], "content": "[]" } , { "href": "/tags/analysis/", "title": "Analysis", "tags": [], "content": "[]" } , { "href": "/categories/announcements/", "title": "Announcements", "tags": [], "content": "[]" } , { "href": "/tags/anonymity/", "title": "Anonymity", "tags": [], "content": "[]" } , { "href": "/tags/biometrics/", "title": "Biometrics", "tags": [], "content": "[]" } , { "href": "/tags/bitcoin/", "title": "Bitcoin", "tags": [], 
"content": "[]" } , { "href": "/categories/", "title": "Categories", "tags": [], "content": "[]" } , { "href": "/tags/censorship/", "title": "Censorship", "tags": [], "content": "[]" } , { "href": "/categories/concepts/", "title": "Concepts", "tags": [], "content": "[]" } , { "href": "/tags/counter-economy/", "title": "Counter Economy", "tags": [], "content": "[]" } , { "href": "/tags/crypto-anarchy/", "title": "Crypto Anarchy", "tags": [], "content": "[]" } , { "href": "/tags/dhs/", "title": "Dhs", "tags": [], "content": "[]" } , { "href": "/tags/digital-dead-drop/", "title": "Digital Dead Drop", "tags": [], "content": "[]" } , { "href": "/tags/digital-tradecraft/", "title": "Digital Tradecraft", "tags": [], "content": "[]" } , { "href": "/categories/dossiers/", "title": "Dossiers", "tags": [], "content": "[]" } , { "href": "/tags/email/", "title": "Email", "tags": [], "content": "[]" } , { "href": "/tags/encryption/", "title": "Encryption", "tags": [], "content": "[]" } , { "href": "/tags/face-recognition/", "title": "Face Recognition", "tags": [], "content": "[]" } , { "href": "/tags/fincen/", "title": "Fincen", "tags": [], "content": "[]" } , { "href": "/categories/frontpage/", "title": "Frontpage", "tags": [], "content": "[]" } , { "href": "/tags/google/", "title": "Google", "tags": [], "content": "[]" } , { "href": "/tags/head-camera/", "title": "Head Camera", "tags": [], "content": "[]" } , { "href": "/categories/howto/", "title": "Howto", "tags": [], "content": "[]" } , { "href": "/categories/news/", "title": "News", "tags": [], "content": "[]" } , { "href": "/categories/opinion/", "title": "Opinion", "tags": [], "content": "[]" } , { "href": "/tags/otc/", "title": "Otc", "tags": [], "content": "[]" } , { "href": "/page/", "title": "Pages", "tags": [], "content": "[]" } , { "href": "/tags/physical-tradecraft/", "title": "Physical Tradecraft", "tags": [], "content": "[]" } , { "href": "/tags/police-state/", "title": "Police State", "tags": [], "content": 
"[]" } , { "href": "/post/", "title": "Posts", "tags": [], "content": "[]" } , { "href": "/tags/privacy/", "title": "Privacy", "tags": [], "content": "[]" } , { "href": "/tags/privacy-law/", "title": "Privacy Law", "tags": [], "content": "[]" } , { "href": "/tags/rfid/", "title": "Rfid", "tags": [], "content": "[]" } , { "href": "/", "title": "ShadowLife", "tags": [], "content": "[]" } , { "href": "/tags/silk-road/", "title": "Silk Road", "tags": [], "content": "[]" } , { "href": "/tags/surveillance/", "title": "Surveillance", "tags": [], "content": "[]" } , { "href": "/tags/", "title": "Tags", "tags": [], "content": "[]" } , { "href": "/tags/tradecraft/", "title": "Tradecraft", "tags": [], "content": "[]" } ]