I have successfully gotten the fake LinkedIn account in my name deleted. To prevent someone from doing this again, I signed up for LinkedIn. This is my first -- and only -- post on that account:
My Only LinkedIn Post (Yes, Really)
Welcome to my LinkedIn page. It looks empty because I'm never here. I don't log in, I never post anything, and I won't read any notes or comments you leave on this site. Nor will I accept any invitations or click on any "connect" links. I'm sure LinkedIn is a nice place; I just don't have the time.
If you're looking for me, visit my webpage at www.schneier.com. There you'll find my blog, and just about everything I've written. My e-mail address is firstname.lastname@example.org, if you want to talk to me personally.
I mirror my blog on my Facebook page (https://www.facebook.com/bruce.schneier/) and my Twitter feed (@schneierblog), but I don't visit those, either.
Now I hear that LinkedIn is e-mailing people on my behalf, suggesting that they friend, follow, connect, or whatever they do there with me. I assure you that I have nothing to do with any of those e-mails, nor do I care what anyone does in response.
There is an unpatchable vulnerability that affects most modern cars. It's buried in the Controller Area Network (CAN):
Researchers say this flaw is not a vulnerability in the classic meaning of the word, because it stems from a design choice in the CAN standard itself -- which is what makes it unpatchable.
Patching the issue means changing how the CAN standard works at its lowest levels. Researchers say car manufacturers can only mitigate the vulnerability via specific network countermeasures, but cannot eliminate it entirely.
Details on how the attack works are here:
The CAN messages, including errors, are called "frames." Our attack focuses on how CAN handles errors. Errors arise when a device reads values that do not correspond to the original expected value on a frame. When a device detects such an event, it writes an error message onto the CAN bus in order to "recall" the errant frame and notify the other devices to entirely ignore the recalled frame. This mishap is very common and is usually due to natural causes, a transient malfunction, or simply too many systems and modules trying to send frames through the CAN at the same time.
If a device sends out too many errors, then -- as CAN standards dictate -- it goes into a so-called Bus Off state, where it is cut off from the CAN and prevented from reading and/or writing any data onto the CAN. This feature is helpful in isolating clearly malfunctioning devices and stops them from triggering the other modules/systems on the CAN.
This is the exact feature that our attack abuses. Our attack triggers this particular feature by inducing enough errors such that a targeted device or system on the CAN is made to go into the Bus Off state, and thus rendered inert/inoperable. This, in turn, can drastically affect the car's performance to the point that it becomes dangerous and even fatal, especially when essential systems like the airbag system or the antilock braking system are deactivated. All it takes is a specially-crafted attack device, introduced to the car's CAN through local access, and the reuse of frames already circulating in the CAN rather than injecting new ones (as previous attacks in this manner have done).
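The fault-confinement mechanism being abused can be sketched in a few lines. This is a toy model, not a real CAN stack: per the CAN standard, a node's transmit error counter (TEC) rises by 8 on each transmit error and falls by 1 on each successful transmit, and the node goes Bus Off once the counter exceeds 255. The class and method names below are illustrative.

```python
# Toy model of the CAN fault-confinement rules the attack abuses.
BUS_OFF_THRESHOLD = 255

class CanNode:
    def __init__(self, name):
        self.name = name
        self.tec = 0          # transmit error counter (TEC)
        self.bus_off = False

    def transmit(self, error_induced=False):
        if self.bus_off:
            return "bus-off: frame not sent"
        if error_induced:
            self.tec += 8                     # transmit error: TEC += 8
        else:
            self.tec = max(0, self.tec - 1)   # success: TEC -= 1
        if self.tec > BUS_OFF_THRESHOLD:
            self.bus_off = True               # node is cut off from the bus
            return "entered bus-off"
        return "ok"

victim = CanNode("abs-module")
# The attacker corrupts every frame the victim sends; 32 consecutive
# induced errors (32 * 8 = 256 > 255) are enough to silence the node.
for _ in range(32):
    victim.transmit(error_induced=True)
print(victim.tec, victim.bus_off)  # 256 True
```

The arithmetic shows why the attack is fast: a few dozen corrupted frames push a healthy node into Bus Off, and nothing in the standard distinguishes induced errors from natural ones.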
The US Supreme Court is deciding a case that will establish whether the police need a warrant to access cell phone location data. This week I signed on to an amicus brief from a wide array of security technologists outlining the technical arguments for why the answer should be yes. Susan Landau summarized our arguments.
A bunch of tech companies also submitted a brief.
One of the common ways to hack a computer is to mess with its input data. That is, if you can feed the computer data that it interprets -- or misinterprets -- in a particular way, you can trick the computer into doing things that it wasn't intended to do. This is basically what a buffer overflow attack is: the data input overflows a buffer and ends up being executed by the computer process.
Well, some researchers did this with a computer that processes DNA, and they encoded their malware in the DNA strands themselves:
To make the malware, the team translated a simple computer command into a short stretch of 176 DNA letters, denoted as A, G, C, and T. After ordering copies of the DNA from a vendor for $89, they fed the strands to a sequencing machine, which read off the gene letters, storing them as binary digits, 0s and 1s.
Erlich says the attack took advantage of a spill-over effect, when data that exceeds a storage buffer can be interpreted as a computer command. In this case, the command contacted a server controlled by Kohno's team, from which they took control of a computer in their lab they were using to analyze the DNA file.
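The spill-over effect is easy to model in a few lines. This is an illustration of the failure mode only, not the researchers' actual exploit (which corrupted memory in a C program): imagine a fixed 16-byte input buffer sitting directly in front of a "command" field, and a copy routine that trusts the input's length.

```python
# Toy model of a buffer spill-over: excess input bytes land in an
# adjacent region that the program later interprets as a command.
BUFFER_SIZE = 16

memory = bytearray(32)             # bytes 0-15: buffer, bytes 16-31: command

def unsafe_store(data: bytes):
    # BUG: copies len(data) bytes with no bounds check against BUFFER_SIZE
    memory[:len(data)] = data

benign = b"ACGTACGTACGT"           # fits in the buffer; command untouched
unsafe_store(benign)

# 16 bytes of padding, then 16 bytes that spill past the buffer
malicious = b"A" * BUFFER_SIZE + b"CONNECT evil.exa"
unsafe_store(malicious)            # bytes 16+ overwrite the command field

command = bytes(memory[BUFFER_SIZE:]).rstrip(b"\x00")
print(command)  # b'CONNECT evil.exa'
```

In the real attack the same idea plays out in machine code: the bytes past the end of the buffer are executed rather than merely read, which is what let the command phone home to the researchers' server.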
This video purports to show a bank robbery in Kiev. The robber first threatens a teller, who basically ignores him because she's behind bullet-proof glass. But then he threatens one of her co-workers, who is on his side of the glass. It's an interesting example of a security system failing for an unexpected reason.
The video is weird, though. The robber seems very unsure of himself, and never really points the gun at anyone or even holds it properly.
Details on how a squid's eye corrects for underwater distortion:
Spherical lenses, like the squids', usually can't focus the incoming light to one point as it passes through the curved surface, which causes an unclear image. The only way to correct this is by bending each ray of light differently as it falls on each location of the lens's surface. S-crystallin, the main protein in squid lenses, evolved the ability to do this by behaving as patchy colloids -- small molecules that have spots of molecular glue that they use to stick together in clusters.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
I seem to have a LinkedIn account.
This comes as a surprise, since I don't have a LinkedIn account, and have never logged in to LinkedIn.
Does anyone have any contacts into the company? I would like to report this fraudulent account, and possibly get control of it. I'm not on LinkedIn, but the best defense against this is probably to create a real account.
Researchers found that they could confuse the road sign detection algorithms of self-driving cars by adding stickers to the signs on the road. They could, for example, cause a car to think that a stop sign is a 45 mph speed limit sign. The changes are subtle, though -- look at the photo from the article.
"Robust Physical-World Attacks on Machine Learning Models," by Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song:
Abstract: Deep neural network-based classifiers are known to be vulnerable to adversarial examples that can fool them into misclassifying their input through the addition of small-magnitude perturbations. However, recent studies have demonstrated that such adversarial examples are not very effective in the physical world -- they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper we propose a new attack algorithm -- Robust Physical Perturbations (RP2) -- that generates perturbations by taking images under different conditions into account. Our algorithm can create spatially-constrained perturbations that mimic vandalism or art to reduce the likelihood of detection by a casual observer. We show that adversarial examples generated by RP2 achieve high success rates under various conditions for real road sign recognition by using an evaluation methodology that captures physical world conditions. We physically realized and evaluated two attacks, one that causes a Stop sign to be misclassified as a Speed Limit sign in 100% of the testing conditions, and one that causes a Right Turn sign to be misclassified as either a Stop or Added Lane sign in 100% of the testing conditions.
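RP2 is an optimization over images taken under many physical conditions, but the underlying idea of adversarial perturbations is easier to see with a much simpler, different technique: the Fast Gradient Sign Method (FGSM) on a toy linear classifier. Everything below is made up for illustration; it is not the paper's algorithm.

```python
# FGSM sketch on a toy logistic classifier: a small, per-feature
# perturbation in the direction that increases the loss flips the label.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # fixed "trained" weights
x = w / np.linalg.norm(w) * 0.5     # an input the model labels 1 confidently

clean_score = sigmoid(w @ x)

# FGSM with true label y=1: step up the loss gradient. For logistic loss
# the gradient w.r.t. x is (p - 1) * w, so sign(grad) = -sign(w).
eps = 0.2
x_adv = x + eps * -np.sign(w)       # small change to every feature
adv_score = sigmoid(w @ x_adv)

print(clean_score > 0.5, adv_score < 0.5)  # True True
```

The stop-sign stickers are the physical-world analogue of `eps * sign(grad)`: individually small, carefully placed changes that together move the input across the classifier's decision boundary.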
For once, the real story isn't as bad as it seems. A researcher has figured out how to install malware onto an Echo that causes it to stream audio back to a remote controller, but:
The technique requires gaining physical access to the target Echo, and it works only on devices sold before 2017. But there's no software fix for older units, Barnes warns, and the attack can be performed without leaving any sign of hardware intrusion.
The way to implement this attack is by intercepting the Echo before it arrives at the target location. But if you can do that, there are a lot of other things you can do. So while this is a vulnerability that needs to be fixed -- and seems to have inadvertently been fixed -- it's not a cause for alarm.
Richard Ledgett -- a former Deputy Director of the NSA -- argues against the US government disclosing all vulnerabilities:
Proponents argue that this would allow patches to be developed, which in turn would help ensure that networks are secure. On its face, this argument might seem to make sense -- but it is a gross oversimplification of the problem, one that not only would not have the desired effect but that also would be dangerous.
Actually, he doesn't make that argument at all. He basically says that security is a lot more complicated than finding and disclosing vulnerabilities -- something I don't think anyone disagrees with. His conclusion:
Malicious software like WannaCry and Petya is a scourge in our digital lives, and we need to take concerted action to protect ourselves. That action must be grounded in an accurate understanding of how the vulnerability ecosystem works. Software vendors need to continue working to build better software and to provide patching support for software deployed in critical infrastructure. Customers need to budget and plan for upgrades as part of the going-in cost of IT, or for compensatory measures when upgrades are impossible. Those who discover vulnerabilities need to responsibly disclose them or, if they are retained for national security purposes, adequately safeguard them. And the partnership of intelligence, law enforcement and industry needs to work together to identify and disrupt actors who use these vulnerabilities for their criminal and destructive ends. No single set of actions will solve the problem; we must work together to protect ourselves. As for blame, we should place it where it really lies: on the criminals who intentionally and maliciously assembled this destructive ransomware and released it on the world.
I don't think anyone would argue with any of that, either. The question is whether the US government should prioritize attack over defense, and surveillance over security. Disclosing, especially in a world where the secrecy of zero-day vulnerabilities is so fragile, greatly improves the security of our critical systems.
Interesting story about Uber drivers who have figured out how to game the company's algorithms to cause surge pricing:
According to the study, drivers manipulate Uber's algorithm by logging out of the app at the same time, making it think that there is a shortage of cars.
The study said drivers have been coordinating forced surge pricing, after interviews with drivers in London and New York, and research on online forums such as Uberpeople.net. In a post on the website for drivers, seen by the researchers, one person said: "Guys, stay logged off until surge. Less supply high demand = surge."
Passengers, of course, have long had tricks to avoid surge pricing.
I expect to see more of this sort of thing as algorithms become more prominent in our lives.
The venture is built on Alex's talent for reverse engineering the algorithms -- known as pseudorandom number generators, or PRNGs -- that govern how slot machine games behave. Armed with this knowledge, he can predict when certain games are likeliest to spit out money -- insight that he shares with a legion of field agents who do the organization's grunt work.
These agents roam casinos from Poland to Macau to Peru in search of slots whose PRNGs have been deciphered by Alex. They use phones to record video of a vulnerable machine in action, then transmit the footage to an office in St. Petersburg. There, Alex and his assistants analyze the video to determine when the games' odds will briefly tilt against the house. They then send timing data to a custom app on an agent's phone; this data causes the phones to vibrate a split second before the agent should press the "Spin" button. By using these cues to beat slots in multiple casinos, a four-person team can earn more than $250,000 a week.
It's an interesting article; I have no idea how much of it is true.
The sad part is that the slot-machine vulnerability is so easy to fix. Although the article says that "writing such algorithms requires tremendous mathematical skill," it's really only true that designing the algorithms requires that skill. Using any secure encryption algorithm or hash function as a PRNG is trivially easy. And there's no reason why the system can't be designed with a real RNG. There is some randomness in the system somewhere, and it can be added into the mix as well. The programmers can use a well-designed algorithm, like my own Fortuna, but even something less well-thought-out is likely to foil this attack.
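To make the "trivially easy" claim concrete, here is a minimal sketch of a PRNG built from a standard hash function (SHA-256 in counter mode, keyed by a secret seed). This is a toy illustration of the construction, not a vetted CSPRNG -- a real machine should use an audited design such as Fortuna or simply the operating system's `os.urandom()`.

```python
# Hash-based PRNG sketch: SHA-256 over (secret seed || counter).
# Without the seed, the output is computationally unpredictable,
# so filming spins tells an attacker nothing about future spins.
import hashlib
import os

class HashPRNG:
    def __init__(self, seed: bytes):
        self.seed = seed           # secret, e.g. from a hardware RNG
        self.counter = 0

    def next_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:n]

    def next_uint32(self) -> int:
        return int.from_bytes(self.next_bytes(4), "big")

prng = HashPRNG(os.urandom(32))
spin = prng.next_uint32() % 1000   # e.g. pick one of 1000 reel outcomes
print(0 <= spin < 1000)            # True
```

Extra entropy from the machine (timing jitter, coin-insert events) can be hashed into the seed as well, which is exactly the "add it into the mix" step the paragraph describes.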