A new way to discover DNA modifications

DNA is built from four bases, each known by its own letter—A, G, C, and T. However, since the structure of DNA was deciphered in 1953, scientists have discovered several chemically modified variants of these bases that can appear in DNA in place of one of the usual four letters.
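For the curious, here’s one minimal way to picture that relationship in code — a sketch of our own, not anything from the study — mapping a few real modified bases (such as 5-methylcytosine, written 5mC) back to the canonical letter each derives from:

```python
# Illustrative sketch (not from the article): representing a DNA sequence
# that contains modified bases alongside the four canonical letters.
# The modification symbols are common shorthand (e.g. 5mC for
# 5-methylcytosine); the encoding itself is a hypothetical choice.

CANONICAL = {"A", "G", "C", "T"}

# Each modified base is a chemical variant of one of the four letters.
MODIFIED_TO_PARENT = {
    "5mC": "C",   # 5-methylcytosine
    "5hmC": "C",  # 5-hydroxymethylcytosine
    "6mA": "A",   # N6-methyladenine
}

def canonical_sequence(bases):
    """Collapse a list of (possibly modified) bases to the plain A/G/C/T read."""
    return "".join(MODIFIED_TO_PARENT.get(b, b) for b in bases)

print(canonical_sequence(["A", "5mC", "G", "T"]))  # -> "ACGT"
```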

from Phys.org

Autonomous Killing Machines Are More Dangerous Than We Think

[Image: Reaper drone firing missile. Credit: YouTube]

A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work in the Office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets on their own, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is able to intervene is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
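To make the distinction concrete, here’s a bare-bones illustration of the “in the loop” idea — every name is hypothetical, and this resembles no real weapons interface:

```python
# Purely illustrative sketch of "human in the loop" control, as the report
# describes it; every name here is hypothetical, not a real weapons API.

class SemiAutonomousSystem:
    def __init__(self):
        self.killed = False  # set by the human controller's kill switch

    def kill_switch(self):
        """The human controller can halt the system at any time."""
        self.killed = True

    def engage(self, target, human_approves):
        # A semi-autonomous system acts only with explicit human approval,
        # and never after the kill switch has been thrown.
        if self.killed or not human_approves(target):
            return "held"
        return f"engaged {target}"

system = SemiAutonomousSystem()
print(system.engage("target-1", human_approves=lambda t: False))  # -> "held"
```

A fully autonomous weapon, by contrast, would skip the approval callback entirely — which is exactly the failure window Scharre is warning about.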

Future human-less weapons systems could include aerial drones with no operators, autonomous armed robotic vehicles, automated sentry machine guns, and autonomous sniper systems.

Scharre paints the potential consequences in grim terms:

In the most extreme case, an autonomous weapon could continue engaging inappropriate targets until it exhausts its magazine, potentially over a wide area. If the failure mode is replicated in other autonomous weapons of the same type, a military could face the disturbing prospect of large numbers of autonomous weapons failing simultaneously, with potentially catastrophic consequences.

From an operational standpoint, autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces. This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors. Moreover, as the complexity of the system increases, it becomes increasingly difficult to verify the system’s behavior under all possible conditions; the number of potential interactions within the system and with its environment is simply too large.
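The verification point in that last paragraph is easy to quantify with toy numbers: if a system interacts with n independent environmental variables, each able to take k values, exhaustively testing its behavior requires k^n cases. A quick back-of-the-envelope calculation (figures invented for illustration):

```python
# Toy arithmetic behind the quote's verification point: with n independent
# environmental variables, each taking k possible values, exhaustive testing
# needs k**n cases. The numbers here are made up for illustration.

for n_variables in (10, 20, 30):
    k_values = 4  # assume each variable can take 4 states
    print(f"{n_variables} variables: {k_values ** n_variables:,} combinations")
# 10 variables: 1,048,576 combinations
# 20 variables: 1,099,511,627,776 combinations
# 30 variables: 1,152,921,504,606,846,976 combinations
```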

So that sounds like the makings of a most horrific dystopian sci-fi movie. Scharre believes that some of these risks can be mitigated, but the risk of accidents “never can be entirely eliminated.”

We’re still many years away from seeing fully autonomous systems deployed in the field, but it’s not too early to start thinking about the potential risks—and benefits. It has been argued, for example, that autonomous systems could reduce casualties and suffering on the battlefield. That may very well be the case, but as Scharre and his team at the Center for a New American Security point out, the risks are serious, indeed.

[Center for a New American Security via New York Times]

Email the author at george@gizmodo.com and follow him @dvorsky.

from Gizmodo

US and Europe reveal how they’ll protect your personal data

The US and EU have published a big pile of documents that spill the beans on the pair’s replacement for Safe Harbor. The new provision is known as the EU-US Privacy Shield and is designed to limit how much personal data the NSA (amongst others) can access. The files also call for the creation of an independent regulator, funded by contributions from internet companies, that’ll handle complaints from users. The most interesting factoid we’ve spotted so far is that firms like Facebook can choose whether they want to be subject to American or European data protection law — although they’ll default to the former.

If you’re not caught up, Safe Harbor was (essentially) a deal that made life easy for tech companies operating in both the US and Europe. It meant that outfits like Facebook could treat data about their users as movable, bouncing it between servers when they had to. So, for instance, Facebook could take information about a German user, stored in a data center in Ireland, and push it to California for long-term storage. Except, when that data crossed the Atlantic, it became fair game for pushy surveillance agencies like the NSA.
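To see what’s at stake in engineering terms, here’s a hypothetical sketch of the kind of residency check a transfer framework enables — the regions and the policy flag are invented, and this is no company’s actual system:

```python
# Hypothetical sketch of the data-residency question Safe Harbor settled:
# may a record about an EU user be replicated to a US data center?
# The regions and policy flag are invented for illustration.

EU_REGIONS = {"ireland", "germany", "france"}

def may_replicate(user_region: str, target_region: str,
                  transfer_framework_valid: bool) -> bool:
    """EU-to-US copies are only allowed while a transfer framework
    (Safe Harbor, then Privacy Shield) is in force."""
    if user_region in EU_REGIONS and target_region not in EU_REGIONS:
        return transfer_framework_valid
    return True  # intra-region moves need no special framework

# Under Safe Harbor: a German user's data in Ireland could go to California.
print(may_replicate("germany", "california", transfer_framework_valid=True))   # True
# After the court struck Safe Harbor down:
print(may_replicate("germany", "california", transfer_framework_valid=False))  # False
```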

Privacy campaigner Max Schrems was so annoyed at the idea that he launched a lawsuit in Ireland to force a decision. Unfortunately, the Irish courts batted away his claim, so he took the case to the European Court of Justice, which examined both the decision and Schrems’ claim. Shortly afterward, the court ruled that the Safe Harbor provisions did not protect European citizens, and declared them to be invalid. These new rules are expected to be formalized over the next few months, ending the potential headaches for almost every social network operating on both sides of the Atlantic.

Via: The Verge, NYT

Source: Department of Commerce, European Union

from Engadget

Google self-driving car crashes into a bus (update: statement)

Google’s self-driving cars have been in accidents before, but always on the receiving end… at least, until now. The company has filed a California DMV accident report confirming that one of its autonomous vehicles (a Lexus RX450h) collided with a bus in Mountain View. The crash happened when the robotic SUV, preparing for a right turn, angled back toward the center of the lane to get around some sandbags — both the vehicle and its test driver incorrectly assumed that a bus approaching from behind would slow or stop to let the car through. The Lexus smacked into the side of the bus at low speed, damaging its front fender, wheel and sensor in the process.

This was a minor incident, and we’re happy to report that there were no injuries. However, this might be the first instance where one of Google’s self-driving cars caused an accident. If so, the Mountain View crew can no longer say it’s an innocent dove on the roads — while this wasn’t a glitch, its software made a decision that led to a crash. We’ve reached out to Google to see if it can elaborate on what happened.

No matter what the response, it was always going to be difficult to avoid this kind of incident. Until self-driving cars can anticipate every possible road hazard, there’s always a chance that they’ll either be confused or make choices with unexpected (and sometimes unfortunate) consequences. However, the hope at this early stage isn’t to achieve a flawless track record. Instead, it’s to show that self-driving cars can be safer overall than their human-piloted counterparts.

Update: Google has provided us with its take on the incident from its February monthly report. It sees the accident as the result of that "normal part of driving" where there’s mutual blame: both sides made too many assumptions. So yes, Google acknowledges that it’s partly at fault for what happened. In the wake of the crash, it has already tweaked its software to accept that buses are "less likely to yield" and prevent issues like this in the future. Read the full copy below.

Our self-driving cars spend a lot of time on El Camino Real, a wide boulevard of three lanes in each direction that runs through Google’s hometown of Mountain View and up the peninsula along San Francisco Bay. With hundreds of sets of traffic lights and hundreds more intersections, this busy and historic artery has helped us learn a lot over the years. And on Valentine’s Day we ran into a tricky set of circumstances on El Camino that’s helped us improve an important skill for navigating similar roads.

El Camino has quite a few right-hand lanes wide enough to allow two lines of traffic. Most of the time it makes sense to drive in the middle of a lane. But when you’re teeing up a right-hand turn in a lane wide enough to handle two streams of traffic, annoyed traffic stacks up behind you. So several weeks ago we began giving the self-driving car the capabilities it needs to do what human drivers do: hug the rightmost side of the lane. This is the social norm because a turning vehicle often has to pause and wait for pedestrians; hugging the curb allows other drivers to continue on their way by passing on the left. It’s vital for us to develop advanced skills that respect not just the letter of the traffic code but the spirit of the road.

On February 14, our vehicle was driving autonomously and had pulled toward the right-hand curb to prepare for a right turn. It then detected sandbags near a storm drain blocking its path, so it needed to come to a stop. After waiting for some other vehicles to pass, our vehicle, still in autonomous mode, began angling back toward the center of the lane at around 2 mph — and made contact with the side of a passing bus traveling at 15 mph. Our car had detected the approaching bus, but predicted that it would yield to us because we were ahead of it. (You can read the details below in the report we submitted to the CA DMV.)

Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.

This is a classic example of the negotiation that’s a normal part of driving — we’re all trying to predict each other’s movements. In this case, we clearly bear some responsibility, because if our car hadn’t moved there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.

We’ve now reviewed this incident (and thousands of variations on it) in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.
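Google hasn’t published its planning code, but the fix it describes boils down to lowering the assumed probability that large vehicles will yield. Here’s a hypothetical sketch of that idea — the probabilities and threshold are invented, and this is emphatically not Google’s actual planner:

```python
# Hypothetical sketch of the behavioral tweak Google describes: treat buses
# and other large vehicles as less likely to yield. All probabilities and
# the threshold are invented; this is not Google's actual planner.

YIELD_PRIOR = {
    "car": 0.8,
    "bus": 0.4,           # lowered after the El Camino incident
    "large_vehicle": 0.4,
}

def safe_to_merge(approaching_type: str, threshold: float = 0.7) -> bool:
    """Merge only if we are confident the approaching vehicle will yield."""
    return YIELD_PRIOR.get(approaching_type, 0.5) >= threshold

print(safe_to_merge("car"))  # True: cars are expected to yield
print(safe_to_merge("bus"))  # False: wait for the bus to pass instead
```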

Via: Reuters

Source: California DMV (PDF)

from Engadget

NY judge rules feds can’t force Apple to unlock an iPhone

A US magistrate judge in New York has ruled that the government can’t force Apple to help law enforcement unlock an iPhone using the All Writs Act. The case in question is about drug trafficking and is not related to the San Bernardino shooter case.

While the cases are different, it’s a win for Apple in its battle with the FBI and Department of Justice. An Apple senior executive told Engadget that while it’s an important precedent, it’s not a binding precedent that Magistrate Judge Pym in the San Bernardino case is legally bound to follow.

In his order, Magistrate Judge James Orenstein states: "More specifically, the established rules for interpreting a statute’s text constrain me to reject the government’s interpretation that the AWA empowers a court to grant any relief not outright prohibited by law."

Apple had earlier prodded Judge Orenstein to rule on the government’s request that Apple help it unlock the phone of Jun Feng, who was suspected of conspiracy to traffic methamphetamine. During a search, DEA agents seized an iPhone 5s belonging to Feng.

The government then moved to execute a search warrant for the contents of the phone, which is protected by a passcode. Apple submitted an opposition to the order, and after additional filings and oral arguments, here we are: another case of a locked iPhone that the government wants Apple’s help unlocking, and the company rejecting that order. Only in this case, a judge has already ruled, and it’s in Apple’s favor.

In the ruling, the judge concluded that this is an issue that should be handled by Congress. If the government wants to use the All Writs Act or CALEA to force companies to circumvent encryption, there needs to be a clear law granting it that power.

One interesting bit of information to come out of today’s news is that during a conference call with reporters, an Apple senior executive insisted that Apple has never created or signed any piece of software to decrypt a phone.

The company has handed iCloud backups over to law enforcement when ordered to by the courts. This is in line with the company’s remarks that if it has the data, it will comply with lawful orders.

Apple and the Department of Justice will argue their cases in front of Judge Pym on March 22.

from Engadget