Feed aggregator

Darpa Turns Aging Surveillance Drones Into Wi-Fi Hotspots

Wired - Tue, 15/04/2014 - 11:41
A fleet of surveillance drones once deployed in the skies over Iraq is being repurposed to provide aerial Wi-Fi in far-flung corners of the world, according to Darpa.






Bálint Réczey: Proposing amd64-hardened architecture for Debian

Planet Debian - Tue, 15/04/2014 - 11:02

In the face of last week’s Heartbleed bug, the need to improve the security of our systems became more apparent than usual. Debian already has widely used methods for hardening packages at build time and guidelines for improving the security of default installations.

Employing such methods usually comes at a cost, for example slower execution of binaries due to additional checks, or extra configuration steps when setting up a system. Balancing usability and security, Debian chose an approach that satisfies most users: C/C++ hardening features which only slightly decrease the execution speed of built binaries, and reasonable defaults in package installations.

All the architectures supported by Debian aim to use the same methods for enhancing security, but it does not have to stay that way. Amd64 is the most widely used Debian architecture according to popcon, and amd64 hardware comes with powerful CPUs. I think there is a significant number of people (I am one of them :-)) who would happily use a version of Debian with more security features enabled by default, sacrificing some CPU power and accepting the installation and setup of additional packages.

My proposal for serving those security-focused users is to introduce a new architecture targeting amd64 hardware, but with more security-related C/C++ features turned on for every package through compiler flags as a start (currently hardening has to be enabled by the maintainers in some way).
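For comparison, the existing per-package opt-in is usually a one-line change in debian/rules. A minimal sketch of a debhelper-based rules file (the rest of the packaging is omitted):

#!/usr/bin/make -f
# Ask dpkg-buildflags for the full hardening set (stack protector,
# fortify, format warnings, relro, bindnow, PIE) for this package only.
export DEB_BUILD_MAINT_OPTIONS = hardening=+all

%:
	dh $@

The proposed architecture would effectively make flags like these the default for every build instead of something each maintainer turns on.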

Introducing the new architecture would also let package maintainers enable additional dependencies and build rules selectively for it, improving security further. On the users’ side, the advantage of a separate security-enhanced architecture over a Debian derivative is the possibility of installing a set of security-enhanced packages using multiarch. You could have a fast amd64 installation as a base and run Apache, or any other sensitive server, from the amd64-hardened packages!
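As a purely hypothetical sketch of what that could look like if the proposal were adopted (dpkg --add-architecture and the package:architecture syntax are the existing multiarch mechanisms; the amd64-hardened architecture itself does not exist yet):

# enable the proposed architecture alongside plain amd64
dpkg --add-architecture amd64-hardened
apt-get update
# keep the fast base system, but install the exposed service
# from the hardened package set
apt-get install apache2:amd64-hardened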

I have sent the proposal for discussion to debian-dev, too. Please join the discussion there or leave a comment here.

Andrew Pollock: [life] Day 77: Port of Brisbane tour

Planet Debian - Tue, 15/04/2014 - 06:14

Sarah dropped Zoe around this morning at about 8:30am. She was still a bit feverish, but otherwise in good spirits, so I decided to stick with my plan for today, which was a tour of the Port of Brisbane.

Originally the plan had been to do it with Megan and her Dad, Jason, but Jason had some work to do on his house, so I offered to take Megan with us to give him more uninterrupted time to work on it.

I was casting around for something to do to pass the time until Jason dropped Megan off at 10:30am, and I thought we could do some foot painting. We searched high and low for something I could use as a foot-washing bucket, other than the mop bucket, which I didn't want to use because of potential chemical residue. I gave up because I couldn't find anything suitable, and we watched a bit of TV instead.

Jason dropped Megan around, and we immediately jumped in the car and headed out to the Port. I missed the on ramp for the M4 from Lytton Road, and so we took the slightly longer Lytton Road route, which was fine, because we had plenty of time to kill.

The plan was to get there for about 11:30am, have lunch in the observation cafe on the top floor of the visitor's centre building, and then get on the tour bus at 12:30pm. We ended up arriving much earlier than 11:30am, so we looked around the foyer of the visitor's centre for a bit.

It was quite a nice building. The foyer area had some displays, but the most interesting thing (for the girls) was an interactive webcam of the shore bird roost across the street. There was a tablet where you could control the camera and zoom in and out on the birds roosting on a man-made island. That passed the time nicely. One of the staff also gave the girls Easter eggs as we arrived.

We went up to the cafe for lunch next. The view was quite good from the 7th floor. On one side you could look out over the bay, notably Saint Helena Island, and on the other side you got quite a good view of the port operations and the container park.

Lunch didn't take all that long, and the girls were getting a bit rowdy, running around the cafe, so we headed back downstairs to kill some more time looking at the shore birds with the webcam, and then we boarded the bus.

It was just the three of us and three other adults, which was good. The girls were pretty fidgety, and I don't think they got that much out of it. The tour didn't really go anywhere that you couldn't go yourself in your own car, but you did get running commentary from the driver, which made all the difference. The girls spent the first 5 minutes trying to figure out where his voice was coming from (he was wired up with a microphone).

The thing I found most interesting about the port operations was the amount of automation. There were three container terminals, and the two operated by DP World and Hutchinson Ports employed fully automated overhead cranes for moving containers around. Completely unmanned, they'd go pick a container from the stack and place it on a waiting truck below.

What I found even more fascinating was the Patrick terminal, which used fully automated straddle carriers. These would, completely autonomously, move about the container park, pick up a container, and then move over to a waiting truck in the loading area and place it on the truck. There were 27 of these things moving around the container park at a fairly decent clip.

Of course the girls didn't really appreciate any of this, and halfway through the tour Megan was busting to go to the toilet, despite going before we started. I was worried about her having an accident before we got back, but she didn't, so it was all good.

I'd say in terms of a successful excursion, I'd score it about a 4 out of 10, because the girls didn't really enjoy the bus tour all that much. I was hoping we'd see more ships, but there weren't many (if any) in port today. They did enjoy the overall outing. Megan spontaneously thanked me as we were leaving, which was sweet.

We picked up the blank cake I'd ordered from Woolworths on the way home, and then dropped Megan off. Zoe wanted to play, so we hung around for a little while before returning home.

Zoe watched a bit more TV while we waited for Sarah to pick her up. Her fever picked up a bit more in the afternoon, but she was still very perky.

Watch Live Tonight as a Total Lunar Eclipse Turns the Moon Blood Red

Wired - Tue, 15/04/2014 - 03:08
Tonight the Earth, moon, and sun will align just right to put on a celestial show known as a total lunar eclipse. Though you can just look up in the sky to catch the event, we’ve also got some spectacular live feeds of the eclipse for those trapped inside by cold, cloud cover, or […]






Dirk Eddelbuettel: BH release 1.54.0-2

Planet Debian - Tue, 15/04/2014 - 02:47
Yesterday's release of RcppBDT 0.2.3 led to an odd build error. If one used at the same time a 32-bit OS, a compiler as recent as g++ 4.7, and the Boost 1.54.0 headers (directly or via the BH package), then the file lexical_cast.hpp barked and failed to compile for lack of a 128-bit integer (which is no surprise on a 32-bit OS).

After looking at this for a bit, and at a related bug report, I came up with a simple fix (which I mentioned in an update to the RcppBDT 0.2.3 release post). Sleeping on it, and comparing against the Boost 1.55 file, showed that the hunch was right, and I have since made a new release 1.54.0-2 of the BH package which contains the fix.
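The gist of the change is the preprocessor test guarding the 128-bit code path. A rough sketch of the idea, not the actual diff (the typedef names below are made up; BOOST_HAS_INT128, boost::int128_type and boost::uint128_type are the Boost.Config feature test and types that the rest of Boost keys on):

#include <boost/config.hpp>
#include <boost/cstdint.hpp>

// Use Boost's own feature test rather than a compiler-version check,
// so a 32-bit g++ 4.7 build no longer assumes a 128-bit integer exists.
#if defined(BOOST_HAS_INT128)
typedef boost::int128_type  largest_signed_t;
typedef boost::uint128_type largest_unsigned_t;
#else
typedef boost::intmax_t     largest_signed_t;
typedef boost::uintmax_t    largest_unsigned_t;
#endif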

Changes in version 1.54.0-2 (2014-04-14)
  • Bug fix to lexical_cast.hpp which now uses the test for INT128 which the rest of Boost uses, consistent with Boost 1.55 too.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Colin Watson: Porting GHC: A Tale of Two Architectures

Planet Debian - Tue, 15/04/2014 - 02:45

We had some requests to get GHC (the Glasgow Haskell Compiler) up and running on two new Ubuntu architectures: arm64, added in 13.10, and ppc64el, added in 14.04. This has been something of a saga, and has involved rather more late-night hacking than is probably good for me.

Book the First: Recalled to a life of strange build systems

You might not know it from the sheer bulk of uploads I do sometimes, but I actually don't speak a word of Haskell and it's not very high up my list of things to learn. But I am a pretty experienced build engineer, and I enjoy porting things to new architectures: I'm firmly of the belief that breadth of architecture support is a good way to shake out certain categories of issues in code, that it's worth doing aggressively across an entire distribution, and that, even if you don't think you need something now, new requirements have a habit of coming along when you least expect them and you might as well be prepared in advance. Furthermore, it annoys me when we have excessive noise in our build failure and proposed-migration output and I often put bits and pieces of spare time into gardening miscellaneous problems there, and at one point there was a lot of Haskell stuff on the list and it got a bit annoying to have to keep sending patches rather than just fixing things myself, and ... well, I ended up as probably the only non-Haskell-programmer on the Debian Haskell team and found myself fixing problems there in my free time. Life is a bit weird sometimes.

Bootstrapping packages on a new architecture is a bit of a black art that only a fairly small number of relatively bitter and twisted people know very much about. Doing it in Ubuntu is specifically painful because we've always forbidden direct binary uploads: all binaries have to come from a build daemon. Compilers in particular often tend to be written in the language they compile, and it's not uncommon for them to build-depend on themselves: that is, you need a previous version of the compiler to build the compiler, stretching back to the dawn of time where somebody put things together with a big magnet or something. So how do you get started on a new architecture? Well, what we do in this case is we construct a binary somehow (usually involving cross-compilation) and insert it as a build-dependency for a proper build in Launchpad. The ability to do this is restricted to a small group of Canonical employees, partly because it's very easy to make mistakes and partly because things like the classic "Reflections on Trusting Trust" are in the backs of our minds somewhere. We have an iron rule for our own sanity that the injected build-dependencies must themselves have been built from the unmodified source package in Ubuntu, although there can be source modifications further back in the chain. Fortunately, we don't need to do this very often, but it does mean that as somebody who can do it I feel an obligation to try and unblock other people where I can.

As far as constructing those build-dependencies goes, sometimes we look for binaries built by other distributions (particularly Debian), and that's pretty straightforward. In this case, though, these two architectures are pretty new and the Debian ports are only just getting going, and as far as I can tell none of the other distributions with active arm64 or ppc64el ports (or trivial name variants) has got as far as porting GHC yet. Well, OK. This was somewhere around the Christmas holidays and I had some time. Muggins here cracks his knuckles and decides to have a go at bootstrapping it from scratch. It can't be that hard, right? Not to mention that it was a blocker for over 600 entries on that build failure list I mentioned, which is definitely enough to make me sit up and take notice; we'd even had the odd customer request for it.

Several attempts later and I was starting to doubt my sanity, not least for trying in the first place. We ship GHC 7.6, and upgrading to 7.8 is not a project I'd like to tackle until the much more experienced Haskell folks in Debian have switched to it in unstable. The porting documentation for 7.6 has bitrotted more or less beyond usability, and the corresponding documentation for 7.8 really isn't backportable to 7.6. I tried building 7.8 for ppc64el anyway, picking that on the basis that we had quicker hardware for it and didn't seem likely to be particularly more arduous than arm64 (ho ho), and I even got to the point of having a cross-built stage2 compiler (stage1, in the cross-building case, is a GHC binary that runs on your starting architecture and generates code for your target architecture) that I could copy over to a ppc64el box and try to use as the base for a fully-native build, but it segfaulted incomprehensibly just after spawning any child process. Compilers tend to do rather a lot, especially when they're built to use GCC to generate object code, so this was a pretty serious problem, and it resisted analysis. I poked at it for a while but didn't get anywhere, and I had other things to do so declared it a write-off and gave up.

Book the Second: The golden thread of progress

In March, another mailing list conversation prodded me into finding a blog entry by Karel Gardas on building GHC for arm64. This was enough to be worth another look, and indeed it turned out that (with some help from Karel in private mail) I was able to cross-build a compiler that actually worked and could be used to run a fully-native build that also worked. Of course this was 7.8, since as I mentioned cross-building 7.6 is unrealistically difficult unless you're considerably more of an expert on GHC's labyrinthine build system than I am. OK, no problem, right? Getting a GHC at all is the hard bit, and 7.8 must be at least as capable as 7.6, so it should be able to build 7.6 easily enough ...

Not so much. What I'd missed here was that compiler engineers generally only care very much about building the compiler with older versions of itself, and if the language in question has any kind of deprecation cycle then the compiler itself is likely to be behind on various things compared to more typical code since it has to be buildable with older versions. This means that the removal of some deprecated interfaces from 7.8 posed a problem, as did some changes in certain primops that had gained an associated compatibility layer in 7.8 but nobody had gone back to put the corresponding compatibility layer into 7.6. GHC supports running Haskell code through the C preprocessor, and there's a __GLASGOW_HASKELL__ definition with the compiler's version number, so this was just a slog tracking down changes in git and adding #ifdef-guarded code that coped with the newer compiler (remembering that stage1 will be built with 7.8 and stage2 with stage1, i.e. 7.6, from the same source tree). More inscrutably, GHC has its own packaging system called Cabal which is also used by the compiler build process to determine which subpackages to build and how to link them against each other, and some crucial subpackages weren't being built: it looked like it was stuck on picking versions from "stage0" (i.e. the initial compiler used as an input to the whole process) when it should have been building its own. Eventually I figured out that this was because GHC's use of its packaging system hadn't anticipated this case, and was selecting the higher version of the ghc package itself from stage0 rather than the version it was about to build for itself, and thus never actually tried to build most of the compiler. Editing ghc_stage1_DEPS in ghc/stage1/package-data.mk after its initial generation sorted this out. One late night building round and round in circles for a while until I had something stable, and a Debian source upload to add basic support for the architecture name (and other changes which were a bit over the top in retrospect: I didn't need to touch the embedded copy of libffi, as we build with the system one), and I was able to feed this all into Launchpad and watch the builders munch away very satisfyingly at the Haskell library stack for a while.
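As a toy illustration of the guard shape involved (the module and function here are made up, not code from GHC; __GLASGOW_HASKELL__ is 708 for 7.8 and 706 for 7.6, so the same file is acceptable to whichever compiler is currently running):

{-# LANGUAGE CPP #-}
-- Hypothetical compatibility shim: the 7.8 bootstrap compiler building
-- stage1 takes the first branch, the freshly built 7.6 stage1 compiler
-- building stage2 takes the second.
module Compat (builtBy) where

builtBy :: String
#if __GLASGOW_HASKELL__ >= 708
builtBy = "the 7.8 bootstrap compiler"
#else
builtBy = "the in-tree 7.6 stage1 compiler"
#endif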

This was all interesting, and finally all that work was actually paying off in terms of getting to watch a slew of several hundred build failures vanish from arm64 (the final count was something like 640, I think). The fly in the ointment was that ppc64el was still blocked, as the problem there wasn't building 7.6, it was getting a working 7.8. But now I really did have other much more urgent things to do, so I figured I just wouldn't get to this by release time and stuck it on the figurative shelf.

Book the Third: The track of a bug

Then, last Friday, I cleared out my urgent pile and thought I'd have another quick look. (I get a bit obsessive about things like this that smell of "interesting intellectual puzzle".) slyfox on the #ghc IRC channel gave me some general debugging advice and, particularly usefully, a reduced example program that I could use to debug just the process-spawning problem without having to wade through noise from running the rest of the compiler. I reproduced the same problem there, and then found that the program crashed earlier (in stg_ap_0_fast, part of the run-time system) if I compiled it with +RTS -Da -RTS. I nailed it down to a small enough region of assembly that I could see all of the assembly, the source code, and an intermediate representation or two from the compiler, and then started meditating on what makes ppc64el special.

You see, the vast majority of porting bugs come down to what I might call gross properties of the architecture. You have things like whether it's 32-bit or 64-bit, big-endian or little-endian, whether char is signed or unsigned, that sort of thing. There's a big table on the Debian wiki that handily summarises most of the important ones. Sometimes you have to deal with distribution-specific things like whether GL or GLES is used; often, especially for new variants of existing architectures, you have to cope with foolish configure scripts that think they can guess certain things from the architecture name and get it wrong (assuming that powerpc* means big-endian, for instance). We often have to update config.guess and config.sub, and on ppc64el we have the additional hassle of updating libtool macros too. But I've done a lot of this stuff and I'd accounted for everything I could think of. ppc64el is actually a lot like amd64 in terms of many of these porting-relevant properties, and not even that far off arm64 which I'd just successfully ported GHC to, so I couldn't be dealing with anything particularly obvious. There was some hand-written assembly which certainly could have been problematic, but I'd carefully checked that this wasn't being used by the "unregisterised" (no specialised machine dependencies, so relatively easy to port but not well-optimised) build I was using. A problem around spawning processes suggested a problem with SIGCHLD handling, but I ruled that out by slowing down the first child process that it spawned and using strace to confirm that SIGSEGV was the first signal received. What on earth was the problem?

From some painstaking gdb work, one thing I eventually noticed was that stg_ap_0_fast's local stack appeared to be being corrupted by a function call, specifically a call to the colourfully-named debugBelch. Now, when IBM's toolchain engineers were putting together ppc64el based on ppc64, they took the opportunity to fix a number of problems with their ABI: there's an OpenJDK bug with a handy list of references. One of the things I noticed there was that there were some stack allocation optimisations in the new ABI, which affected functions that don't call any vararg functions and don't call any functions that take enough parameters that some of them have to be passed on the stack rather than in registers. debugBelch takes varargs: hmm. Now, the calling code isn't quite in C as such, but in a related dialect called "Cmm", a variant of C-- (yes, minus), that GHC uses to help bridge the gap between the functional world and its code generation, and which is compiled down to C by GHC. When importing C functions into Cmm, GHC generates prototypes for them, but it doesn't do enough parsing to work out the true prototype; instead, they all just get something like extern StgFunPtr f(void);. In most architectures you can get away with this, because the arguments get passed in the usual calling convention anyway and it all works out, but on ppc64el this means that the caller doesn't generate enough stack space and then the callee tries to save its varargs onto the stack in an area that in fact belongs to the caller, and suddenly everything goes south. Things were starting to make sense.
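In C terms the mismatch looks roughly like this (StgFunPtr and debugBelch are the names from above; the stand-in typedef and the commentary are illustrative, not GHC's actual headers):

/* Stand-in for GHC's RTS function-pointer typedef. */
typedef void (*StgFunPtr)(void);

/* What GHC's Cmm importer declared for every foreign C function: a
 * prototype promising "no parameters at all". Under the new ppc64el
 * ABI that promise lets the calling function omit the parameter-save
 * area from its stack frame. */
extern StgFunPtr debugBelch(void);

/* The real debugBelch in the RTS is variadic, roughly
 *     void debugBelch(const char *fmt, ...);
 * and a variadic callee on ppc64el spills its register arguments into
 * the parameter-save area of the CALLER's frame. Thanks to the bogus
 * prototype that area was never allocated, so the spill lands on top
 * of the caller's own data. The workaround described in the next
 * paragraph, declaring
 *     extern StgFunPtr debugBelch();
 * merely omits parameter information instead of lying about it, so the
 * caller stays conservative and allocates the full save area again. */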

Now, debugBelch is only used in optional debugging code; but runInteractiveProcess (the function associated with the initial round of failures) takes no fewer than twelve arguments, plenty to force some of them onto the stack. I poked around the GCC patch for this ABI change a bit and determined that it only optimised away the stack allocation if it had a full prototype for all the callees, so I guessed that changing those prototypes to extern StgFunPtr f(); might work: it's still technically wrong, not least because omitting the parameter list is an obsolescent feature in C11, but it's at least just omitting information about the parameter list rather than actively lying about it. I tweaked that and ran the cross-build from scratch again. Lo and behold, suddenly I had a working compiler, and I could go through the same build-7.6-using-7.8 procedure as with arm64, much more quickly this time now that I knew what I was doing. One upstream bug, one Debian upload, and several bootstrapping builds later, and GHC was up and running on another architecture in Launchpad. Success!

Epilogue

There's still more to do. I gather there may be a Google Summer of Code project in Linaro to write proper native code generation for GHC on arm64: this would make things a good deal faster, but also enable GHCi (the interpreter) and Template Haskell, and thus clear quite a few more build failures. Since there's already native code generation for ppc64 in GHC, getting it going for ppc64el would probably only be a couple of days' work at this point. But these are niceties by comparison, and I'm more than happy with what I got working for 14.04.

The upshot of all of this is that I may be the first non-Haskell-programmer to ever port GHC to two entirely new architectures. I'm not sure if I gain much from that personally aside from a lot of lost sleep and being considered extremely strange. It has, however, been by far the most challenging set of packages I've ported, and a fascinating trip through some odd corners of build systems and undefined behaviour that I don't normally need to touch.

Apple Buys Tiny Dams to Power Its Data Centers

Wired - Tue, 15/04/2014 - 01:04
Apple is buying up a hydro-electric project in Oregon, hoping to lock into an environmentally sustainable way of powering its massive data centers.






TrueCrypt audit finds “no evidence of backdoors” or malicious code

Ars Technica - Tue, 15/04/2014 - 00:45

On Monday, after seven months of discussion and planning, the first phase of a two-part audit of TrueCrypt was released.

The results? iSEC, the company contracted to review the bootloader and Windows kernel driver for any backdoor or related security issue, concluded (PDF) that there is “no evidence of backdoors or otherwise intentionally malicious code in the assessed areas.”

While the team did find some minor vulnerabilities in the code itself, iSEC labeled them as appearing to be “unintentional, introduced as the result of bugs rather than malice.”


Hacker “weev” demands bacon following prison release

Ars Technica - Tue, 15/04/2014 - 00:10

Hacker and Internet troll Andrew 'weev' Auernheimer demanded bacon, cream cheese, and alfalfa sprouts following his Friday release from prison, hours after a federal appeals court vacated his conviction.

According to a YouTube video posted on Motherboard, Auernheimer is shown joking with friends, including his lawyer, Tor Ekeland. He demands bacon during the ride away from the Allenwood Federal Correctional Center in Pennsylvania.

He also says he lost 17 pounds following his 2012 hacking conviction, which was viewed as a test of the reach of the Computer Fraud and Abuse Act (CFAA), the same statute under which Aaron Swartz was being prosecuted before his suicide in 2013.


Here’s how to get Windows Phone 8.1 and Cortana

Ars Technica - Mon, 14/04/2014 - 23:40

If you use Windows Phone 8 and want to get the new Windows Phone 8.1 hotness and give Cortana a spin before it rolls out officially—and you do, it's really great—then don't despair. Although the early access is a "developer preview," you don't actually have to write any code to get your hands on it. You just need a Microsoft Account and a Windows Phone 8 phone (obviously).

First head to App Studio and sign in with the same Microsoft Account that you use on your phone. This will make some magical change to your account.

Next, install the app "Preview for Developers." It's a regular app you can find in the store, or you can send it to your phone here. Then run the app. It's straightforward. It'll ask you to accept some terms and conditions, warning that you may void your warranty. If you don't want to do that, then you'll have to wait for the final release. Otherwise, sign in with your Microsoft Account (the same one that you used at App Studio) and check one final box.


Vicious Heartbleed bug bites millions of Android phones, other devices

Ars Technica - Mon, 14/04/2014 - 23:30

The catastrophic Heartbleed security bug that has already bitten Yahoo Mail, the Canada Revenue Agency, and other public websites also poses a formidable threat to end-user applications and devices, including millions of Android handsets, security researchers warned.

Handsets running version 4.1.1 of Google's mobile operating system are vulnerable to attacks that might pluck passwords, the contents of personal messages, and other private information out of device memory, a company official warned on Friday. Marc Rogers, principal security researcher at Lookout Mobile, a provider of anti-malware software for Android phones, said some versions of Android 4.2.2 that have been customized by the carriers or hardware manufacturers have also been found to be susceptible. Rogers said other releases may contain the critical Heartbleed flaw as well. Officials with BlackBerry have warned the company's messenger app for iOS, Mac OS X, Android, and Windows contains the critical defect and have released an update to correct it.

The good news, according to researchers at security firm Symantec, is that major browsers don't rely on the OpenSSL cryptographic library to implement HTTPS cryptographic protections. That means people using a PC to browse websites should be immune to attacks that allow malicious servers to extract data from an end user's computer memory. Users of smartphones, and possibly those using routers and "Internet of things" appliances, aren't necessarily as safe.


Richard Hartmann: git-annex corner case: Changing commit messages retroactively and after syncing

Planet Debian - Mon, 14/04/2014 - 23:12

This is half a blog post and half a reminder for my future self.

So let's say you used the following commands:

git add foo
git annex add bar
git annex sync
# move to different location with different remotes available
git add quux
git annex add quuux
git annex sync

What I wanted to happen was simply to sync the already committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: commit what was added earlier and use "git-annex automatic sync" as the commit message.

This is not a problem in and of itself, but as this is my master annex and as I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.

Changing old commit messages is easy:

git rebase --interactive HEAD~3

Pick the r option ("reword") and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. The problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and back come the old commits, along with a merge commit like an ugly cherry on top. Blegh.

I decided to leave my comfort zone and ended up with the following:

# always back up before poking refs
git clone --mirror repo backup
git reset --hard 1234
git show-ref | grep master
# for every ref returned, do:
git update-ref $ref 1234

Rinse and repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.

For good measure, I am running

git fsck && git annex fsck

on all my remotes now, but everything looks good up to now.

Media, rights groups urge court to revisit takedown of anti-Muslim YouTube video

Ars Technica - Mon, 14/04/2014 - 23:00
"The Innocence of Muslims."

Several media groups and rights organizations have rallied behind Google, urging a federal appeals court to revisit its takedown order of the inflammatory "The Innocence of Muslims" video on YouTube.

Media groups like the Los Angeles Times, New York Times, Reporters Committee for Freedom of the Press, and others told the Ninth US Circuit of Appeals Friday that its February decision "arguably expands the concept of copyright ownership in a manner that could allow the subjects of news coverage to exercise veto power over unflattering broadcasts" (PDF).

The case concerns an actress in the 2012 video that sparked violent protests throughout the Muslim world. The actress, Cindy Lee Garcia, urged the appeals court to remove the video after complaining that she received death threats and was fired from her work. Garcia said she was duped into being in the "hateful anti-Islamic production."


IPCC finally weighs in on how to avoid further climate change

Ars Technica - Mon, 14/04/2014 - 22:40

If you were collecting sections of the new report from the Intergovernmental Panel on Climate Change, you can now complete your set. Following the release of the section on the physical science of climate change in September and the section on the impacts of, and adaptations to, climate change just two weeks ago, the section on how to avoid future warming was finally released over the weekend in Berlin.

This section was written by 235 scientists from 58 countries and cites almost 10,000 studies. The final publication of the entire report will take place in October, along with a short synthesis report summarizing the key findings in simpler, less-technical terms.

How we got here

If you add up all the human-caused greenhouse gas emissions around the world in 2010, it was equivalent to 49 billion tons of CO2. That number isn’t just growing, its growth is accelerating. Over the previous decade, it increased by about one billion tons each year, while the average from 1970-2000 was about 0.4 billion tons more each year. More than three-quarters of these emissions come from fossil fuels, and the rest come from things like deforestation, livestock production, and industrial pollutants.


Prenda On Appeal: Copyright Troll Tactics Challenged in DC Circuit

EFF Breaking News - Mon, 14/04/2014 - 21:36

The DC Circuit Court of Appeals heard argument today in AF Holdings v. Does 1-1058, one of the few mass copyright cases to reach an appellate court, and the first to specifically raise the fundamental procedural problems that tilt the playing field firmly against the Doe Defendants. The appeal was brought by several internet service providers (Verizon, Comcast, AT&T and affiliates), with amicus support from EFF, the ACLU, the ACLU of the Nation's Capitol, Public Citizen, and Public Knowledge. On the other side: notorious copyright troll Prenda Law.

Copyright trolls like Prenda want to be able to sue thousands of people at once in the same court – even if those defendants have no connection to the venue or each other. The troll asks the court to let it quickly collect hundreds of customer names from ISPs. It then shakes those people down for settlements. These Doe defendants have a strong incentive to pay nuisance settlements rather than travel to a distant forum to defend themselves. The copyright troll business model relies on this unbalanced playing field.

In this case, Prenda sued 1058 Does (anonymous defendants identified only by an IP address) in federal district court in the District of Columbia. It then issued subpoenas demanding that ISPs identify the names of these customers. The ISPs objected to this request arguing that most of the IP addresses were associated with computers located outside of the court's jurisdiction. The ISPs and EFF also showed that Prenda could have used simple geolocation tools to determine the same thing. And we explained that joining together 1000+ subscribers in one lawsuit was fundamentally unfair and improper under the rules governing when defendants can be sued together (known as ‘joinder’).

Unfortunately, the district court did not agree, holding that any consideration of joinder and jurisdiction was "premature." In other words, the court can't consider whether the process is unfair unless and until a Doe comes to the court to raise the issue. By then, of course, it is too late; the subscribers will have already received threatening letters and, in many cases, be reluctant to take on the burden of defending themselves in a far away location.

We believe this ruling was fundamentally wrong. As we've said many times, plaintiffs have every right to go to court to enforce their rights. But they must play by the same litigation rules that everyone else has to follow. To get early discovery, plaintiffs must have a good-faith belief that jurisdiction and joinder are proper. Given the evidence presented to the district court, there is no way Prenda could have formed this good faith belief. So its demand for customer information should have been denied.

The ISPs appealed the district court’s troubling ruling. At the hearing today, the appellate court was particularly interested in the issue of joinder. The court seemed immediately skeptical of the notion of suing 1000 people at once, but wondered if it might be acceptable to join together 20 BitTorrent users who had joined the same swarm to acquire the same work. The ISPs and amici said generally no, because the plaintiff can't know whether a given Doe 1 acquired anything from a given Doe 2 – in other words, they aren't necessarily part of the same "transaction or occurrence." We analogized a BitTorrent swarm to a casino poker table: over the course of a weekend, a week, or a month, players may come and go, adding to and subtracting from the pot, but the players on day one are unlikely to be related to the players on day four or day 30.

The ISPs and amici also stressed the issue of burden. While the ISPs were focused on the burden they faced in responding to the subpoenas, EFF directed the court's attention to the fundamental burden on the IP subscribers, noting that the subscribers identified as a result of a subpoena aren't necessarily going to be responsible for any unauthorized activity. An IP address, we explained, only tells you the name on the bill, not who is using the account. In this context, it is crucial that courts attend to the burden on the Does, as well as the ISPs.

The court had a number of questions regarding jurisdiction, and directed many of them to counsel for AF Holdings, Paul Duffy. At root, the court seemed to want to know why AF Holdings had not used geolocation tools to help determine where its targets might be located, and why it had not dropped its effort to pursue many of them when the ISPs explained that the Does just weren't in the court's jurisdiction. Finally, the court had some questions about AF Holdings' litigation tactics, including the shenanigans that have been widely reported elsewhere.

It is difficult to predict how a court will rule based only on a hearing. But we are encouraged that the judges asked the important and thoughtful questions, and clearly understood both the context and implications of their decision. Many district courts have now concluded that the copyright troll business model is fundamentally unfair, and have taken steps to ensure the judicial process is not abused to foster a shakedown scheme. Let's hope they will soon be joined by the DC Circuit Court of Appeals.

Related Issues: Fair Use and Intellectual Property: Defending the Balance; Copyright Trolls. Related Cases: AF Holdings v. Does

Snowden reporting lands Guardian and Post Pulitzers

Ars Technica - Mon, 14/04/2014 - 21:30

The 2014 Pulitzer Public Service Award for "meritorious public service by a newspaper or news site" was awarded Monday to the UK-based publication The Guardian and the US-based publication The Washington Post for their reporting on documents provided to them by National Security Agency whistleblower Edward Snowden.

The two publications are being honored for their "revelation of widespread secret surveillance by the National Security Agency, helping through aggressive reporting to spark a debate about the relationship between the government and the public over issues of security and privacy."

Their reporting has, for example, helped reveal the extent to which the US' NSA and the UK's Government Communications Headquarters have collected information en masse about millions of Americans’ phone calls and e-mails. Additionally, it has illuminated the mechanisms through which US telecommunications and technology companies have been complicit in government spying efforts.


Four new stable kernels

LWN.net - Mon, 14/04/2014 - 21:29

Greg Kroah-Hartman has released kernels 3.14.1, 3.13.10, 3.10.37, and 3.4.87. Each contains important updates and fixes; in addition, Greg notes that 3.13.10 will be the next-to-last release in the 3.13.y stable series, so migration to 3.14.y soon is advisable.

TurboTax maker linked to grassroots campaign against free, simple tax filing

Ars Technica - Mon, 14/04/2014 - 21:15

Over the last year, a rabbi, a state NAACP official, a small town mayor, and other community leaders wrote op-eds and letters to Congress with remarkably similar language on a remarkably obscure topic.

Each railed against a long-standing proposal that would give taxpayers the option to use pre-filled tax returns. They warned that the program would be a conflict of interest for the IRS and would especially hurt low-income people, who wouldn't have the resources to fight inaccurate returns. Rabbi Elliot Dorff wrote in a Jewish Journal op-ed that he "shudder[s] at the impact this program will have on the most vulnerable people in American society."

"It's alarming and offensive" that the IRS would target "the most vulnerable Americans," two other letters said. The concept, known as return-free filing, is a government "experiment" that would mean higher taxes for the poor, two op-eds argued.


British spy agency’s hometown gets tagged with Banksy-style mural

Ars Technica - Mon, 14/04/2014 - 20:45
A new Banksy-style street art mural that appeared in Cheltenham, England, on Sunday. Photo by Kathryn Wright

It's a known fact that the town of Cheltenham is home to the Government Communications Headquarters (GCHQ), the UK's counterpart to the National Security Agency. And on Sunday, locals received a GCHQ-themed surprise—new street art around a public telephone booth that depicts men in trench coats secretly recording what happens inside.

The mural was painted at the intersection of Fairview Road and Hewlett Road, just three miles from GCHQ's main building. Local residents believe that the mural could be the work of the British graffiti artist Banksy, according to The Telegraph. Banksy is famous for his various street art "vandalism" projects across the globe, which often comment on social and political themes.

While the wiretapping techniques depicted in the mural look dated, the theme of government eavesdropping on phone calls is quite timely. The message comes after a seemingly endless stream of revelations about American and British dragnet surveillance, started by whistleblower Edward Snowden with his first leaks in June 2013.


Google buys “atmospheric satellite” builder Titan Aerospace

Ars Technica - Mon, 14/04/2014 - 20:35
A model of the Solara 50, Titan Aerospace's commercial "atmospheric satellite," hangs above the company's booth at the AUVSI Unmanned Systems conference. Sean Gallagher

Titan Aerospace—the drone-maker that was previously pegged as a Facebook acquisition—has been snapped up by Google, according to a report from the Wall Street Journal (subscription required). Titan creates “atmospheric satellites,” solar-powered drones that can fly for five years without landing.

According to the report, Google says the Titan team will be headed to Project Loon, Google's balloon-based Internet project. Loon takes a similar approach, using solar-powered balloons instead of airplanes, so the two teams seem like a good match. The Journal also says the team might help out Makani, a Google-owned company working on an airborne wind turbine (basically a drone plane on the end of a power cable). Atmospheric satellites could also be a big help to Google Maps and Google Earth, since both use satellite imagery. A fleet of camera-packing drones could take all the photos Google needs.

One of Titan's "smaller" drone models, called the "Solara 50," has a wingspan of 164 feet. That's larger than a Boeing 767. Before the acquisition, Titan Aerospace's drone Internet project expected to hit "initial commercial operations" in 2015. By using specialty communications equipment, the company claimed it could get Internet speeds of up to one gigabit per second.

